Forget Cambridge Analytica. What about Facebook’s role in ethnic conflict and genocide?


A big reckoning is coming for Facebook. The revelation that the political consultancy Cambridge Analytica secretly harvested data from 50 million Facebook accounts to aid the Trump campaign in 2016 is just the latest in a string of damaging stories about the platform. Now US regulators and lawmakers are sharpening their knives. Two Democratic senators, Mark R. Warner (Va.) and Amy Klobuchar (Minn.), have said it’s time for Facebook CEO Mark Zuckerberg to testify before Congress.

It’s understandable that Washington politicians are primarily concerned with the potential harm inflicted on US citizens by Facebook’s policies (or its lax enforcement, which may well be the issue in the Cambridge Analytica case). But if lawmakers can persuade the elusive Zuckerberg to appear before their committees, they should also take the opportunity to question him about Facebook’s global role – and specifically the allegations that the platform has allowed itself to become a tool for the instigators of hate speech, ethnic conflict and even genocide.

This last accusation may seem exaggerated. But it is precisely the concern raised earlier this month by United Nations officials. Beginning late last summer, the army of Myanmar (also known as Burma) launched a campaign of intimidation and violence that has now driven nearly 700,000 members of the Muslim minority known as the Rohingya out of the country and into neighboring Bangladesh, a crime that a growing number of international observers (including some at the United Nations) describe as potentially amounting to genocide.

Yanghee Lee, a UN official investigating events in the country, said Facebook’s overwhelming popularity in Myanmar made it a key factor in the spread of hate speech. “It has been used to deliver public messages, but we know that the ultra-nationalist Buddhists have their own Facebook and are really inciting a lot of violence and a lot of hatred against the Rohingya or other ethnic minorities,” she said. “I’m afraid that Facebook has now become a beast, and not what it originally intended.”

That’s an understatement. As Myanmar’s military launched its campaign against the Rohingya last August, Facebook pages across the country were brimming with bigotry and misinformation targeting the group. The ultra-nationalist Buddhist monk Ashin Wirathu used his own page to spread racist tirades and images until Facebook deleted his account. “Wirathu compared Muslims to rabid dogs and posted photos of dead bodies he claimed were Buddhists killed by Muslims,” The Post’s Annie Gowen and Max Bearak reported in December, “never acknowledging the brutality the Rohingya themselves faced.”

Facebook is starting to recognize the problem and says it’s making efforts to fix it. “We don’t allow hate speech and incitement to violence on Facebook,” a company spokesperson told me. “However, our policies allow content that may be controversial and at times distasteful, which may include criticism of public figures, religions and political ideologies.” The company says it has worked with activists around the world and tried to educate users about the dangers of hate speech.

So what kind of moral responsibility does the company bear for its role as a vector of discourse, destructive or otherwise? The truth is that we are only beginning to grasp the complexity of the ethical, legal and practical dilemmas involved.

Yet our own confusion probably doesn’t sound like much of an excuse to the people of South Sudan, who have watched some of their own turn Facebook into another weapon in that country’s bloody civil war. The same goes for journalists and activists in the Philippines who have faced vicious online harassment from the legions of Facebook trolls who aggressively support President Rodrigo Duterte – including his extrajudicial campaign to kill suspected drug traffickers.

Kenyans fear the social media platform has heightened ethnic tensions during the country’s disputed presidential elections. “Facebook, especially beginning in 2013 and more visibly in 2017, has been a site of ethnic hatred and incitement,” Michael M. Ndonye, a university communications professor, said in an email. As mainstream media faces regulatory limits and the threat of lawsuits, he said, “Kenyans are turning to internet platforms and Facebook to talk about what mainstream media avoids, including hate speech and related content.”

Social media companies certainly cannot afford to sit idly by in the face of growing public outrage over their perceived unwillingness to address abuse. Just a week ago, the Sri Lankan government directly blamed the deaths of three people on Facebook’s failure to control hate speech that has contributed to communal violence between Buddhists and Muslims inside the country. Government officials eventually resorted to temporarily blocking access to platforms such as Facebook and WhatsApp in a bid to stop the bloodshed. “This whole country could have burned down in a matter of hours,” Telecommunications Minister Harin Fernando told the Guardian. “Hate speech is not controlled by these organizations and has become a critical issue globally.”

Yet the people of these countries have relatively little power over a company based in Menlo Park, California. What they do have is a strong interest in maintaining the connectivity and access to knowledge that come with participating in global communication platforms. Now, however, they are also losing patience with the unwillingness or inability of these same companies to curb the kinds of speech that threaten to tear societies apart. Maybe it’s time for US lawmakers to help undo the damage.

© The Washington Post 2018
