A few days ago, I posted a quick tweet before bed with an idea that struck me after weeks of thinking about GamerGate and online harassment.
Bayesian Spam filtering, but for harassment and threats. Think about it.
— Richard J. Anderson (@sanspoint) October 28, 2014
It seemed to strike a nerve. It was even retweeted by Marco Arment, which is officially a point of pride in my life on Twitter.
The only reason tools like this aren’t in the arsenal of social networks and their users is that fighting online harassment isn’t a priority. Companies hire contractors to sort out porn and violence, but simple analysis of posts to identify users who cause trouble and violate the terms of service gets no attention at all. Even more frustrating is that, as I’ve mentioned before, content analysis tools already exist on most social networks; they’re just used to figure out what ads to display.
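The “Bayesian spam filtering, but for harassment” idea isn’t exotic, either. The same naive Bayes math that has filtered spam for over a decade applies directly: count how often words appear in abusive versus acceptable posts, then score new posts against those counts. Here’s a minimal sketch; the four training examples and the two labels are invented for illustration, and a real system would train on thousands of moderator-labeled posts:

```python
import math
from collections import Counter

# Toy labeled data, invented for illustration only.
train = [
    ("you are a wonderful person", "ok"),
    ("great post thanks for sharing", "ok"),
    ("i will hurt you", "abuse"),
    ("you deserve to be attacked", "abuse"),
]

def fit(examples):
    """Count word frequencies per label, plus label frequencies."""
    counts = {"ok": Counter(), "abuse": Counter()}
    labels = Counter()
    for text, label in examples:
        labels[label] += 1
        counts[label].update(text.split())
    return counts, labels

def score(text, counts, labels, label):
    """Log-probability of `label` given `text`, with Laplace smoothing."""
    vocab = {w for c in counts.values() for w in c}
    total = sum(counts[label].values())
    logp = math.log(labels[label] / sum(labels.values()))
    for word in text.split():
        logp += math.log((counts[label][word] + 1) / (total + len(vocab)))
    return logp

def classify(text, counts, labels):
    """Return whichever label scores higher for this text."""
    return max(labels, key=lambda lab: score(text, counts, labels, lab))

counts, labels = fit(train)
print(classify("i will attack you", counts, labels))        # → abuse
print(classify("thanks for the great post", counts, labels)) # → ok
```

That’s it: a few dozen lines of well-understood math. A production filter would need far better features and training data, but none of it is beyond a company that already runs content analysis at scale to target ads.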
Moderation of porn and violence also exists for the benefit of advertisers, not users. No company wants to see its promoted tweet or Facebook post next to someone’s engorged genitalia. The brands don’t have to worry about being attacked, so all the angry young men threatening women with rape can hide in the shadows while Twitter identifies them as being more interested in seeing ads for Nintendo. The liability risk of an unsuspecting minor seeing naked people and severed heads (or both) is treated as greater than that of someone committing suicide over abuse on social media.
No matter how many times it happens.
And it has happened many, many times.
Sure, Facebook is trying to “create empathy among [its] users”, but instead of coming at it after the fact, why not use all this data to stop abuse at the source? If a post can be identified as containing threats or harassment, don’t allow it to be posted. Force the user to take a time out. Read their post back to them aloud. Do something, anything, to shift the balance of power on social media away from the abusers and harassers, because anything is better than what we have now.