Twitter tackles online hate

December 22, 2017 by Andre Oboler

After Charlottesville, American attitudes to hate speech are shifting. Artificial intelligence tools that detect hate speech on social media are becoming more widespread among both academics and civil society organisations, creating greater transparency and pressure on social media companies… writes Andre Oboler.

Twitter is caught in a perfect storm and is trying to recalibrate their policies and culture in response. They aren’t the worst offender, but they are the easiest to monitor, which puts them in the firing line.

The gap between violent extremism and hate speech has narrowed after the “Unite the Right” rally in Charlottesville in August 2017.

The rally, which became violent, ultimately led to a vehicular attack by a 20-year-old far-right activist that killed Heather D. Heyer and injured 19 others. President Trump’s refusal to call out right-wing extremism following the attack became a tipping point in American attitudes to hate speech.

In November, President Trump retweeted three tweets by the far-right Britain First group, a splinter group that broke away from the British National Party, itself defined as a neo-Nazi political party by the Oxford English Dictionary. This led to a rebuke from the UK’s Prime Minister. Today, Twitter closed Britain First’s account under its new policy, along with many others including that of Antipodean Resistance here in Australia (the group responsible for a poster campaign of hate around universities in Melbourne), but stated that the rules under which an account would be closed do not apply to military or government entities.

The new rules seek to “reduce the amount of abusive behaviour and hateful conduct”, a move which still seeks to draw some sort of distinction between “speech” and “action” even as it enlarges the scope of what crosses the line into unacceptable activity on Twitter. A spokesperson explained that, “If an account’s profile information includes a violent threat or multiple slurs, epithets, racist or sexist tropes, incites fear, or reduces someone to less than human, it will be permanently suspended”. When it comes to tweets, hateful imagery will be hidden and users will have to click a button to see it. Such imagery includes “logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, or ethnicity/national origin”. This sends the message that those promoting hostility and malice against minorities can continue to use Twitter for this purpose, provided their account isn’t clearly set up exclusively for it.

Meanwhile, the far right have coined a new term, claiming they are being “shoahed”, a play on the Hebrew word for the Holocaust, the Shoah. This Holocaust trivialisation in the face of the purge of the far right from Twitter should perhaps be unsurprising. The responses we saw from accounts that remain online ranged from outright hate, to white pride ideology, to a supposedly anti-racism account that is clearly satirical. We also saw people arguing that booting the far right from Twitter would only make them stronger, only to see those same people self-identify in other tweets as far-right activists. This reflects research discussed in the new book Cyber Racism and Community Resilience: Strategies for Combating Online Race Hate, which found that it was primarily those who self-identified as engaging in cyber-racism who, unsurprisingly, opposed laws against racism.

One interesting question is whether hidden content will still be accessible through the Twitter API (Application Programming Interface), the gateway which allows other software to interact with Twitter. A growing body of research into hate speech by both academics and civil society organisations is creating increased pressure on Twitter. This research is generally based on artificial intelligence approaches designed to detect online hate speech using relatively simple text analysis, accessing Twitter through the API. These approaches work by taking a list of hate speech terms and searching through social media for content where those words or phrases occur. Finding hate symbols like swastikas or racial slurs this way is straightforward, as the context in which they appear seldom matters. Other hate speech, which uses more general language, is harder to detect.
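To make that technique concrete, here is a minimal sketch in Python of the keyword-matching approach described above. The term list and sample tweets are hypothetical placeholders (a real study would use a curated lexicon and collect tweets through the Twitter API); the aim is only to illustrate why slurs and symbol names are easy to catch while hate expressed in general language is not.

```python
import re

# Hypothetical lexicon; real research uses curated lists of slurs,
# hate-symbol names and coded phrases.
HATE_TERMS = ["example_slur", "example_symbol_name", "shoahed"]

# One word-boundary pattern per term, so terms embedded inside
# innocent words are not matched.
PATTERNS = [re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
            for term in HATE_TERMS]

def flag_tweet(text: str) -> list[str]:
    """Return the lexicon terms that appear in a tweet's text."""
    return [term for term, pattern in zip(HATE_TERMS, PATTERNS)
            if pattern.search(text)]

# Placeholder data standing in for tweets collected via the Twitter API.
sample_tweets = [
    "They can ban us but we'll just say we got shoahed",
    "The weather in Melbourne is lovely today",
]

for tweet in sample_tweets:
    hits = flag_tweet(tweet)
    if hits:
        print(f"FLAGGED ({', '.join(hits)}): {tweet}")
```

The limitation noted above falls straight out of this design: a tweet expressing the same hostility without using any lexicon term passes through unflagged, which is why hate speech couched in general language is so much harder to detect automatically.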

Using these AI tools on Twitter is far easier than on other platforms like YouTube or Facebook: Twitter is a simpler platform in technical terms, all the content on Twitter is public, and the API gives access to everything a researcher would need. The result is that hate speech, particularly involving hate symbols and racial slurs, is increasingly being detected and written about, both academically and in reports by civil society. And because Twitter is where the accessible data is, that attention is increasingly Twitter-focused.

Twitter does have a lot of room to improve, and the changes they are now making, including taking into account profile information and not just tweets, are an important improvement. At the same time, the Online Hate Prevention Institute’s data, which is based on reports made by the public rather than on automated approaches, suggests far more of the problem is on YouTube than on Twitter. The analysis also shows that YouTube is far less effective at removing such hate. The problem is that the AI approaches being used have greater difficulty identifying hate speech on YouTube.

This is supported by survey data in the Cyber Racism book, which includes a survey of the Australian population on the topic of online racism. The survey, conducted in 2013, examined where Australians encountered online racism. The top places were Facebook (40%), news sites (18.5%) and YouTube (15.7%), while only 1.9% of the racism occurred on Twitter.

Twitter has taken an important step to tackle online hate, but there is a growing concern that rather than tackling the online hate which has the most impact on society, the focus is instead drifting to those forms of hate which are the easiest and cheapest to find. If we want to stop the spread of hate, we need to do better.

Dr Andre Oboler is a Cyber-Security expert in the La Trobe University Law School and CEO of the Online Hate Prevention Institute.

Comments

3 Responses to “Twitter tackles online hate”
  1. Bill Rubinstein says:

    Readers may well not be aware that Dr Oboler is the self-appointed censor of online “Hate Speech.” What is “hate speech”? No one knows – it is whatever he thinks “hate speech” is. I and others have repeatedly asked him how he defines “hate speech,” and he has consistently refused to answer. In matters of defining “hate speech,” he is the prosecuting attorney, judge, jury, and executioner, and the court proceedings are secret. Sounds like North Korea? You bet it does. And how many examples of, say, leftists calling Israel “genocidal” or “an Apartheid State” has he zapped? How many online Islamic sites denouncing the Jews in anti-semitic terms has he zapped? Maybe zero is the answer? Only right-wingers need apply. Frankly, if it is a choice between some ratbag online nonsense and a self-appointed private censor answerable to no one, the latter is the more dangerous. America has a First Amendment to protect its people from self-appointed censors. I also notice that he is on the executive of the JCCV. Its present leadership is fond of excommunicating persons it doesn’t like, such as Rabbi Shimon Cowen, and no doubt he feels right at home.

  2. Lynda Hynes says:

    Sorry, but I don’t agree with “hate speech laws”. What a person thinks doesn’t hurt anyone: what they say can lead to someone being hurt, but it is what they DO that can hurt people, and that is punishable by law, as it should be.

    I have seen many good people hurt by these “hate speech laws” simply for telling us news we are not hearing, and expressing a concern for the safety of innocent people – and that often includes Jewish people. To be labelled “far right” these days is merely to be proud of your race, faith and country, and unwilling to throw it all away for some disastrous Utopian fantasy.

    Freedom of speech is the most important asset we have, and we must not sacrifice any of it in the false hope of protecting innocent people. Rather, we should meet any hateful speech we hear with a deafening crescendo of disapproval and ridicule.

  3. Steven Meyer says:

    I guess once again it is up to me to be the contrarian.

    Do we really want a privatised censorship?

    Do you all understand that one day pro-Israel speech could be deemed “hate speech”?
