Abuse filters are a lot like anti-spam. They look for patterns in data. When I’m creating rules for filtering abuse in my own software, I look at a combination of things: account creation date, whether the profile pic is still the default, who the person interacts with, whether that person interacts with people I’ve got blocked, who they follow, how many tweets they’ve sent, how many of their tweets are retweets versus original content, etc. It’s a huge list, and it creates a risk score. Any one or two or three of these things isn’t enough to get you caught by my anti-abuse filters, but a combination of many means I won’t have to see your tweets. As I was building out this system, many things became clear. While some mob harassment shares very distinct characteristics, this is generally limited to abuse that exists within communities on Twitter.
— The problem with Verified on Twitter
Verification is only one small tool among the many that need to be in place for Twitter to protect its users from abuse. This is the sort of thing Twitter’s team would know and understand if solving abuse were a priority. Anyone who thinks being verified would be a panacea for the abuse problem on Twitter (like a certain Mr. Calacanis) would do well to give Randi’s well-considered post a read.