NEW DELHI, India (AP) – Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as the internet giant’s own employees cast doubt on its motivations and interests.
Drawn from internal company memos dating from 2019 to as recently as March of this year, the documents on India highlight Facebook’s constant struggles to quash abusive content on its platforms in the world’s biggest democracy and the company’s largest growth market. Communal and religious tensions in India have a history of boiling over on social media and fueling violence.
The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address them. Many critics and digital experts say it has failed to do so, especially in cases involving members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party.
Around the world, Facebook has become increasingly prominent in politics, and India is no different.
Modi has been credited with leveraging the platform to his party’s advantage during elections, and reporting in The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies against hate speech to avoid blowback from the BJP. Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialized by a 2015 image of the two hugging at Facebook headquarters.
The leaked documents include a trove of internal company reports on hate speech and misinformation in India that in some cases appear to have been amplified by the platform’s own “recommended” features and algorithms. They also include company employees’ concerns over the mishandling of these issues and their discontent over the viral “malcontent” on the platform.
According to the documents, Facebook viewed India as one of the most “at risk” countries in the world and identified both Hindi and Bengali as priorities for automated enforcement against hostile speech. Yet Facebook didn’t have enough local-language moderators or content flagging in place to stop misinformation that at times led to real-world violence.
In a statement to the AP, Facebook said it has “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali,” which has “reduced the amount of hate speech that people see by half” in 2021.
“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said.
This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by the legal counsel of former Facebook employee turned whistleblower Frances Haugen. The redacted versions were obtained by a consortium of news organizations, including the AP.
Back in February 2019, ahead of a general election and amid growing concerns about misinformation, a Facebook employee wanted to understand what a new user in India would see in their news feed if all they did was follow pages and groups recommended solely by the platform itself.
The employee created a test user account and kept it live for three weeks, a period during which an extraordinary event shook India – a militant attack in disputed Kashmir killed over 40 Indian soldiers, bringing the country to the brink of war with rival Pakistan.
In a note titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalist Messages,” the employee, whose name is redacted, said they were “shocked” by the content flooding the news feed, which they described as having “become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”
Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else entirely, where hate speech, unverified rumors and viral content ran rampant.
The recommended groups have been inundated with fake news, anti-Pakistani rhetoric and Islamophobic content. Much of the content was extremely graphic.
One post featured a man holding the bloodied head of another man covered in a Pakistani flag, with an Indian flag partially covering it. The platform’s “Popular Across Facebook” feature surfaced a slew of unverified content related to the retaliatory Indian strikes into Pakistan after the bombing, including an image of a napalm bomb taken from a video game clip that had been debunked by one of Facebook’s fact-checking partners.
“Following this test user’s news feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life,” the researcher wrote.
The report sparked deep concern over what such divisive content could lead to in the real world, where local news outlets at the time were reporting on Kashmiris being attacked in the fallout.
“Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the researcher asked in their conclusion.
The memo, circulated among other employees, did not answer that question. But it did expose how the platform’s own algorithms and default settings played a part in spurring such harmful content. The employee noted that there were clear “blind spots,” particularly in “local language content,” and said they hoped the findings would start conversations on how to avoid such “integrity harms,” especially for users who “differ significantly” from the typical U.S. user.
Even though the research was conducted over three weeks and wasn’t an average representation, the researcher acknowledged that it showed how such “unmoderated” and problematic content could dominate the feed during “a major crisis event.”
The Facebook spokesperson said the test study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”
“Separately, our work on curbing hate speech continues, and we have further strengthened our hate classifiers to include four Indian languages,” the spokesperson said.
Copyright 2021 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed in any way without permission.