Mounting accusations of social media censorship amid Israel’s invasion of Gaza


As the violent invasion of the Gaza Strip continues into its second month, reports are mounting of social media platforms censoring content or impeding access to accounts, particularly those of Palestinian civilians, including journalists on the ground in Gaza, as well as other users worldwide. Social media is a vital tool for circulating accurate, close-to-source information about the violence in Gaza. This is why impACT is deeply concerned by accounts from Gazan journalists of account suspensions, individual reports of sharp drops in profile traffic after posting ‘pro-Palestinian’ content, and the locking of large accounts sharing videos documenting life in Occupied Palestine. The unimpeded dissemination of accurate information about what is happening in Gaza is a lifeline for civilians and vital to informing the world about the realities of what many humanitarian organisations have called ethnic cleansing in Occupied Palestine. In response to the ongoing accusations, a wide range of civil society organisations and human rights groups have signed a statement demanding that social media giants respect Palestinian Digital Rights.

 

This is certainly not the first time that social media companies have been accused of censoring Palestinian voices. During the forcible displacement of Palestinians from East Jerusalem and the subsequent raids on Al Aqsa Mosque in 2021, for instance, reports of censorship of Palestinians proliferated. In May 2021, the Israeli Supreme Court ruled that the forcible displacement of Palestinians in Sheikh Jarrah, East Jerusalem, was legal. In the ensuing violence, in which 52,000 Palestinians were displaced and 256 were killed, along with 12 Israelis, numerous digital rights violations were reported in Palestine. According to 7amleh, the Arab Centre for Social Media Advancement, large amounts of Palestinian content were removed or restricted, hashtags were hidden, archived content was deleted, and journalists reported that their accounts were removed or suspended. 50% of incidents concerned Instagram (45% of which related to the removal of stories with no warning or explanation), 35% Facebook, 11% Twitter, and 1% TikTok. Worryingly, reports also indicated that the Israeli state tracked the geolocation data of worshippers present as Israeli forces stormed Al Aqsa Mosque, and then sent them threatening messages.

 

When asked about the problem by The Independent, Facebook said that “a technical bug affected Stories around the world” and that an “error had temporarily restricted content from being viewed on the Al Aqsa hashtag page”. In June 2021, 200 Facebook employees signed a letter urging the company to fully investigate the potential biases. Human Rights Watch and other groups encouraged this, with senior researcher Deborah Brown suggesting that “Facebook has suppressed content posted by Palestinians … speaking out about human rights issues”. Over a year later, in September 2022, a report commissioned by Meta and carried out by the independent consultancy Business for Social Responsibility (BSR) was released. It found that Meta’s Facebook and Instagram had shown an “over-enforcement” of Arabic content, and that “proactive detection rates of potentially violating Arab content were significantly higher than detection of potentially violating Hebrew content”. The report concluded, however, that this stemmed from “a lack of oversight at Meta that allowed content policy errors with significant consequences to occur”. Meta spokespeople have been adamant that the censorship of Palestinian content is the unintentional result of content moderation, rather than a specific targeting of content.

 

However, in the second month of ethnic cleansing in Gaza, the same issues are being reported in high volumes. Prominent journalists, including Ahmed Shihab-Eldin and Motaz Azaiza, have reported numerous issues with accessing their accounts. Mr Shihab-Eldin, with 100,000 followers on Instagram, had his account banned with little explanation. Motaz Azaiza, a journalist on the ground in Gaza, has also experienced numerous issues in posting journalistic content. His account on X (formerly Twitter) was suspended, and his Instagram account was temporarily suspended shortly after he posted a video showing the house where 15 of his relatives had been killed by Israeli bombardment on 13th October. Other writers and activists have faced similar treatment: Mariam Barghouti, a Palestinian-American, was reportedly threatened with account suspension by Meta after posting a video of children with Hamas operatives, which Meta indicated constituted ‘pro-Hamas’ content. While suspension is of particular concern, reports of ‘shadow-banning’, a method of muting a user’s content without informing them, have also become common. Mohamed el Kurd, a Palestinian journalist with 797,000 Instagram followers, has reportedly indicated that views of his content fell to around 91,000 after he posted content related to the Israeli military operation. As reported by The Guardian, Pulitzer Prize-winning reporter Azmat Khan has indicated that her account, with 7,000 followers, was shadow-banned after she posted on her ‘Stories’ about the ongoing humanitarian crisis in Gaza. Al Jazeera similarly reported that Thomas Maddens, a filmmaker and activist in Belgium, experienced a dramatic drop in engagement after a post concerning ‘genocide’ in Gaza. In response to the myriad accusations, Meta stated that “it is never our intention to suppress a particular community or point of view”, suggesting that “content that doesn’t violate our policies may be removed in error”, once again citing “global glitches” and bugs.

 

Other accounts, which have been documenting the ongoing occupation of Palestine for some years, were also suspended during the Israeli bombardment of Gaza. ‘Eye on Palestine’, an account with 6 million followers, was locked by Meta on 25th October. The account has since been reactivated, with Meta spokespeople claiming that it “was locked for security reasons” and that they “helped account holders regain access”. With similar reasoning, Meta also temporarily suspended accounts much like Eye on Palestine, such as Let’s Talk Palestine. Palestinian media and news outlets were also suspended, including Quds News Network and 24FM. Worryingly, as accusations mount, Meta has also come under fire for labelling Palestinian Instagram users as “terrorists” in what it also called a “bug” in the auto-translation of content.

 

We at impACT, considering such accounts from well-respected writers, journalists and other sources, suggest that it would be profoundly naive to allow social media corporations to continue to blame the ‘unintentional’ effects of moderation tools. Even if the censorship of Palestinian content during the 2021 Al Aqsa raids was unintentional, an independent report documenting extensive issues was subsequently carried out, and those issues were never rectified. It is entirely believable that Meta and other giants, whether through neglect or deliberate censorship policy, are actively sequestering Palestinian voices online.

 

Much of the worry about censorship has centred on the collaboration between the Israeli state and Silicon Valley media giants, which lends weight to concerns that moderation tools are built to suppress views unfavourable to Israel. One such collaboration, between Facebook’s Tel Aviv office and the state, began in 2016. Jillian York, Director for International Freedom of Expression at the Electronic Frontier Foundation, expressed significant concerns about the neutrality of such partnerships, stating that the “US has historically been a strong supporter of Israel and has long dehumanised Palestinians, so it isn’t surprising that corporate policies would align with that worldview”. In 2021, this partnership came under increased scrutiny after it was reported that ‘Zionist’ or ‘Zionism’ had been entered into a ‘protective category’, meaning critical posts and users could be removed. According to an anonymous Facebook moderator who spoke to The Intercept, this change meant that there was “very little wiggle for criticism of Zionism” online. Further, Rabbi Alissa Wise, Director at Jewish Voice for Peace, expressed concern that the change “prevents users from holding the Israeli government accountable for harming Palestinians”.

 

The state of Israel’s digital presence has also been raised as an issue of biased regulation, in particular by Article19. Israeli advertisements targeting specific countries and demographics to garner support for the ongoing invasion of Gaza have appeared in large numbers across many platforms, including YouTube. The use of paid digital advertising was raised as an issue in a 2022 report by the UN Special Rapporteur on freedom of expression. Concerning disinformation in armed conflicts, the report warned that the monetisation of conflict-related content in advertising spaces will “incentivise the manipulation of information”. It concluded that, whilst companies are aware of the issues, their policies “are not sufficiently comprehensive or adequately enforced, and are not regularly updated to reflect global conflict developments”. In light of accounts of ‘pro-Palestinian’ censorship, permitting incredibly graphic, propagandistic advertising on sites such as YouTube illustrates a worrying double standard.

 

As Israel’s invasion of Gaza continues, social media companies must work harder to create a space in which accounts of realities in the Strip are unmarred by “bugs”, ideological partnerships, and unbalanced content moderation. What has become increasingly clear is that content discussing the violence, posted by Palestinians or by those critical of Israeli policy, is unevenly sequestered. Disappointingly, social media giants currently seem unwilling to fix what they have called “bugs” and to develop moderation tools that monitor content evenly, even when serious inadequacies are found by internal reports, as was the case with BSR’s 2022 investigation into Meta’s practices. When platforms like Instagram, Facebook, X and TikTok become the primary grounds where information and public opinion are exchanged, media giants hold significant authority over the accurate dissemination of information. When that dissemination is skewed to achieve policy goals, or to reflect state partnerships, particularly during periods of violence, social media giants must be held accountable for their impact on the related violence. We cannot allow the parameters of dialogue concerning war crimes, genocide and ethnic cleansing to be set by global corporations. These instances of reported censorship cannot be ignored without further investigation. OFCOM, the UK’s communications regulator, must investigate the application of moderation tools and the accusations of censorship, as must other national independent regulators. Taking accounts of ‘global glitches’ and ‘bugs’ at face value would be a profound misstep. If investigations reveal a refusal to fix the same ‘bugs’ cited in previous years, deliberate censorship of Palestinians, or moderation biased in favour of Israeli policy, the individuals directly responsible for creating policy and managing content must be held accountable by international criminal courts for failing to prevent ethnic cleansing in Gaza.
