Tech Friday: The challenges of policing speech on social media
Posted By RichC on July 12, 2019
Several of the companies that make money by advertising to millions of eyeballs, and by marketing users' personal data, are facing challenges from both users and governments. We all want "free services," but most people dislike the collection and selling of their personal data, or having to sift through advertising … especially when it mimics a news or personal feed.

BUT … besides the privacy issues associated with high-profile sites like Facebook and Google, there is the recent mandate that they step up their game when it comes to what users share on their platforms. Google's YouTube has been hammered over what shows up "next" in kids' feeds, to the point that the company has moved to create a "playground," if you will, of kid-safe videos. The plan is that questionable content would not show up in a YouTube stream (good luck with that, considering the billions of videos with embedded content; the algorithms had better be good). This screening has taken on "policing speech" duties, both for the supposedly unbiased employees who write the code that screens automatically and for the "people" who make the judgment calls.
Since I'm primarily only on Twitter, I've noticed proactive efforts that might be succeeding, but they have also created outrage when screeners, without notice or any arbitration, have locked out users (their prerogative, I suppose?). To understand the challenge, one only has to notice some of the high-profile users who have lost their privileges. Enforcement has been heavily skewed toward provocative conservative speech, but for Twitter's part, the company claims its evolving rules protect people, classes of people and groups from "hateful conduct" (a fairly broad and encompassing term).
Twitter’s recent update focused on “dehumanizing speech around religion.”
“After months of conversations and feedback from the public, external experts and our own teams, we’re expanding our rules against hateful conduct to include language that dehumanizes others on the basis of religion,” the company wrote on its Twitter Safety blog.


