Sanspoint.

Essays on Technology and Culture

The Racists Have Gamed Google’s Algorithm

At The Guardian, Carole Cadwalladr has noticed something disturbing about Google search suggestions:

Neither Google or Facebook make their algorithms public. Why did my Google search return nine out of 10 search results that claim Jews are evil? We don’t know and we have no way of knowing. Their systems are what Frank Pasquale describes as “black boxes”. He calls Google and Facebook “a terrifying duopoly of power” and has been leading a growing movement of academics who are calling for “algorithmic accountability”. “We need to have regular audits of these systems,” he says. “We need people in these companies to be accountable. In the US, under the Digital Millennium Copyright Act, every company has to have a spokesman you can reach. And this is what needs to happen. They need to respond to complaints about hate speech, about bias.”

Google, democracy and the truth about internet search

This is disturbing and terrifying. We know search algorithms can be gamed, but white nationalists and racists have taken it to a whole new level. That Google doesn’t even seem to think it’s a problem is even worse. It is Google’s responsibility to ensure its results are accurate, and serving up the Daily Stormer and other hateful organizations in response to searches about the Holocaust or black crime statistics abdicates that responsibility. There is no neutrality when a system can be twisted to promote one group’s horrifying ideology.

The Great Medium Experiment

I’ve been running an experiment with my last few longer-form pieces: crossposting them to Medium. I’ve done it on an occasional, ad-hoc basis when I write something that I feel needs to reach an audience outside of my circle. This includes the expanded and revised version of “A View From Inside the Welfare System”, which went viral. I’ve also written at least one piece explicitly for Medium, “Geek Culture and its Discontents”. So far, it’s not had much of an impact, but it might also be too early to tell.

But why Medium? I’ve been, and remain, skeptical of Medium as a platform, and I’m not the only one. Ownership of my words is important, even if I’m making a sum total of six bucks a month from them. (If you want to change that, you can become a subscriber here.) I’d be happy making nothing if I knew I was reaching people, but I’m sometimes not even sure of that. It’s a tough time to be putting words on the Internet.

So, instead of trying to branch out into other media—because that worked out so well last time—cross-posting seems to be the best of the options. No matter what happens to Medium, my writing will remain at this URL until EMPs wipe out all technology. Yet I also exist on the largest platform for long-form writing. My name, profile, and photo are all out there, hopefully to be discovered. Maybe readers will follow the links back to the source and start clicking around. Plus, Medium makes it a lot easier to share things. I’m not willing to install tracking scripts for Facebook and Twitter just for the sake of a few clicks—though I am willing to set up a Facebook page for my writing.

The goal of crossposting to Medium is to, I suppose, re-capture the lightning-in-a-bottle moment of the most successful independent writing I’ve ever done: the aforementioned “A View From Inside the Welfare System.” It was not only a Medium Editor’s Pick, but made it to the front page of MetaFilter, which made me super-happy. The only thing I didn’t like is that nothing long-term came of it. I had nothing else to say about my time working for the Welfare Office, and it was off my usual—for lack of a better word—beat, anyway.

I just wish Medium’s WordPress plugin were more effective. It seems that scheduled posts, and anything published through the WordPress API, don’t get sent to Medium. This sucks, as I like to line stuff up ahead of time for publishing. At least tagging posts works, though I’ve never used the post tagging feature in WordPress since I relaunched the current version of Sanspoint six years ago. Perhaps the Medium folks will see this and fix it. Or perhaps I’ll give up on this experiment after a few more weeks, and then I’ll stop worrying about it. In the meantime, like and share, I suppose.

“Watch the Failson:” How the Internet is Radicalizing the Alt-Right

In the wake of the election, I took some time to read a few pieces of conservative commentary, and came across an interesting essay by Rod Dreher in The American Conservative comparing modern America to Weimar Germany. I don’t agree with all of Dreher’s points, especially as a queer atheist Liberal socialist, but a part of it caught my attention, and it makes the essay worth your time. Namely, Dreher links to a piece in The New Yorker on a podcast called “Chapo Trap House” that describes a phenomenon the podcast hosts call “failsons.”

The Chapo Trap House hosts describe a failson as “twenty-six,” in community college, and more interested in “gaming and masturbating” than spending time with their family at Thanksgiving. Or, more compassionately, as “nonessential human beings who do not fit into the market as consumers or producers or as laborers… Some of them turn into Nazis… Others become aware of the consequences of capitalism.” [Emphasis mine.]

Reading this frightened me, because it rings true. As an example, Dreher identifies Dylann Roof, who committed a mass shooting at a black church in Charleston in 2015, as a failson and notes that:

“Sooner or later, somebody is going to find a way to radicalize those failsons. Some of the middle class failsons will gravitate to the Weimar Brooklyn worldview of the Chapo Trap House. Many other middle class white failsons, I suspect, will gravitate to the intellectualized neo-Nazism of Richard Spencer, highly educated and articulate son of Dallas’s posh Park Cities. The point is: watch the failson.”

What Dreher misses is that the failsons are already being radicalized. What are the meme squads and troll armies of the alt-right but failsons turned into radicalized digital shock troops for a modern fascist regime?

If you haven’t closed this essay already, let me explain by linking to a great Twitter thread by Siyanda Mohutsiwa. She draws a direct link between the racist alt-right, and men’s self-help spaces online. Jules Evans at Philosophy for Life goes into more detail on the same links. In particular, Evans notes how alt-right figureheads Mike Cernovich, Jack Donovan, and Roosh V wrote self-help books and pick-up artist guides before moving towards promoting the racist and sexist ideology that underpins the alt-right. Anyone who came to these men looking for a way to improve their lot likely ended up suckered into their hateful message.

It’s not all failsons in the alt-right, of course. There are people with STEM degrees and jobs who don’t fit the failson stereotype, but they’re not usually the ones spending their days harassing people on social media or running disinformation campaigns. They’re more likely to operate like Oculus founder Palmer Luckey, away from the digital front lines. It’s also worth noting that, of all professions, engineers are the most likely to join terrorist groups and become extremists. You can’t blame radicalization on being stupid.

But when you have a mass of under-employed and unemployed, poorly educated, white men who can’t get laid, they’re going to be very susceptible to anything that makes them forget their position—anything that gives them a target for their anger. Women, minorities, the LGBTQ community, and the progressives who promote their issues are the easy and obvious targets. And so the demagogues mobilize the failsons, point them to the target, and stand back as the horror unfolds. Because they never gave a direct order, they can keep their hands clean, whether it’s Milo Yiannopoulos using his Twitter followers to harass Leslie Jones, or Donald Trump saying he “disavows” the white supremacists using his election victory as an excuse for public hate.

All you need to do to see this phenomenon first-hand is take a peek into some of the various 4chan boards where it happens. Boards like /adv/, /r9k/, and /soc/ have built a support structure for young men who describe themselves as “NEETs”—“Not in Education, Employment or Training”. These are the failsons the Chapo Trap House hosts refer to. So much of the process occurs in public, from the initial steps of seeking a community of support and advice on love and life, to the slow redirection into alt-right radicalism. And it works. ISIS recruitment propaganda follows the same basic process. The only difference is that the alt-right is radicalizing white men, not Muslims.

Of course, one can hardly be blamed for not wanting to stick their nose into the cesspool of the various chan boards. But if anything is going to stop the radicalization of the failsons, it’s disrupting that process. There’s already research under way to disrupt ISIS recruitment practices, but who’s taking up this mantle against white supremacy? The best we’ve seen is Twitter adding “hate speech” to their reporting process and banning several alt-right accounts, but this is too little, too late. It’s a band-aid on a plague sore.

This is a personal concern, not just because the people I love are at risk from what the radicalized failsons can do, but because it wasn’t that long ago that I too could have become a failson. Not long after I graduated college in 2008, I was unemployed, and struggling with my personal life and self-worth. I was lucky to have a positive support network of family and friends, both online and off, that saved me from potential radicalization. I was also lucky that this was before the toxic spaces of the internet like 4chan had fully mutated into their current form. But I know quite well the misery I was in, and how I longed for easy answers.

So, yes, I am watching the failsons. You should be too, because they’re going to play a major role in the next four years. They’re not the only cause or symptom of the new political climate, but they are motivated, they are inspired, and they are dangerous. Whether you are a Liberal or a Conservative, a new fascist movement is a danger to all of us. Even if the footsoldiers are hiding behind keyboards and seven proxies, what happens on the internet can and does bleed over into “real life.” We’ve seen it happen before, and it’ll happen again. It’s too late to stop the damage, but with luck and work, perhaps we can keep it contained.

Om Malik on Tech’s “Empathy Vacuum”

It’s hard to think about the human consequences of technology as a founder of a startup racing to prove itself or as a chief executive who is worried about achieving the incessant growth that keeps investors happy. Against the immediate numerical pressures of increasing users and sales, and the corporate pressures of hiring the right (but not too expensive) employees to execute your vision, the displacement of people you don’t know can get lost.

However, when you are a data-driven oligarchy like Facebook, Google, Amazon, or Uber, you can’t really wash your hands of the impact of your algorithms and your ability to shape popular sentiment in our society. We are not just talking about the ability to influence voters with fake news. If you are Amazon, you have to acknowledge that you are slowly corroding the retail sector, which employs many people in this country. If you are Airbnb, no matter how well-meaning your focus on delighting travellers, you are also going to affect hotel-industry employment.

Om Malik – “Silicon Valley Has an Empathy Vacuum”

In the political, sociological, and economic mess we’ve gotten ourselves into, we can’t ignore the role the tech industry plays. Facebook can’t court advertisers with one hand and act like its algorithms don’t influence behavior with the other. Tech as an industry is not neutral, and cannot pretend to be. It can’t pretend that jobs will magically reappear for those it has unemployed. But as long as the impetus is short-term growth to satisfy investors first, Valley companies can blind themselves to their impact.

Fear of Data

A while ago, I came across an interesting opinion piece on lifelogging, and how interest is waning as people move towards more “forgetful” services like Snapchat. It’s certainly something I’ve thought a lot about, both in terms of the data I create and who has access to it. Lifelogging requires a lot of trust in the people we share our data with. The essay’s author, Mike Elgan, has his own opinion:

I believe this apparent impulse to forget is not what it seems. In reality, people are just suffering from information overload and data anxiety. Collecting huge amounts of personal data – even large numbers of pictures – simply creates a problem without solving one. The problem it creates is: How do you manage all this data? How do you back it up? Store it? Annotate or tag it?

Following this election, I think Mike Elgan and others are asking another question: who else has access to our data? I know I am, but I was asking it before it was cool, or at least before we were all afraid of Donald Trump.

In the wake of the election, a lot more people who were pretty darn confident about their data, how safe it is, and how safe they are, started getting a lot more paranoid. Case in point, this excellent Buzzfeed article by Sheera Frenkel, which interviews a handful of—well, two—Valley entrepreneurs who run data-focused companies, along with noted tech wise-ass critic, Maciej Cegłowski of Pinboard fame. (I use “wise-ass” as a compliment, here.)

Maciej has been an outspoken critic of how the Valley treats data collection, likening it to nuclear waste in an excellent talk. Maciej also has an excellent set of six fixes to help restore privacy online. I’m grateful for his work, but rather than rehash his ideas and spend more time chiding Valley companies for treating the data of real people with as much care as you might give the wrapper for your chewing gum, let’s step back and think about how we got here.

Tech companies have two main uses for personal data. One is improving products and the consumer experience. Google collects your every move, reads every email, and tracks cars in traffic, so that when you wake up in the morning, you can find out that traffic’s backed up on your way to the airport. Then Google suggests you leave a few minutes early and take an alternate route. Your life is measurably improved—you didn’t miss your flight—and everyone is happy.

The other use is to sell to advertisers. Google collects your every move, reads every email, and tracks cars in traffic so that it can suggest you pick up breakfast from Dunkin’ Donuts, or that it’s time to get your tires rotated at Jiffy Lube. Okay, maybe not you specifically, but recent changes suggest even that might be changing. Either way, the goal for advertisers is to leverage all of this data to know what you want or need before you do, get the right ad in your face, and make the sale.

Of course, this is all review. We know this is the arrangement, and for every game-changing, life-improving notification we get on our devices, there’s going to be a targeted ad somewhere to make sure it’s paid for. Implicit in all of this is trust that these companies will be good stewards of our data. What we haven’t fully defined is what good stewardship of data is.

We are getting a very good idea of bad stewardship, though. It’s the push notification to leave for work when you’ve already sat down at your desk, sure, but it’s also the badly targeted ad that you can’t escape no matter how hard you try. It’s the email from Have I Been Pwned? that tells you yet another company has been hacked, and your email, password, credit card, and god knows what else is now for sale on the darknet to the highest bidder. It’s the service you loved and trusted going out of business, its assets picked up for pennies a gigabyte by some entity you never heard of in a country you can’t find on a map, with no way to get your data back.

The narrative has always been “trust us, we know what we’re doing,” followed up by “give us more data, so we can get better”—at least among those who are courteous enough to ask. Most places just change the Terms of Service, and throw up a screen that people will click through without reading so they can get their next dopamine hit. But, because so many techies want a better, smarter device, they’ll not only happily give in to what the companies ask, they’ll raise hell about those companies that take their time and try to do more with less, or protect user privacy in the process. (Yes, I’m talking about Apple.)

Which is why I’m so surprised to read about how Valley companies are freaking out over Donald Trump. Nobody can tell me they didn’t see this coming, if not from a Trump presidency, then perhaps under another in four to eight years. The fracas between the FBI and Apple should have been the latest of a whole flock of dead canaries in the data mine—it was never going to stop with one device. Whatever a company’s policies on encryption, authorized access, or what have you, men with guns can be very persuasive, whether they have a court order or not.


I’d like you to pretend for a moment that you work for Facebook, Google, Amazon, Apple, or Palantir—any company that collects a whole bunch of data on people’s online behavior. Imagine you work in the department that oversees data collection, and that you have access to the database. Imagine you can see, unencrypted, every scrap of data of every person in that database, should you so choose. That’s a lot of power, and in real life, it’s probably shared across a bunch of people. But imagine you’re one of them.

After work one day, a man trails you to your car. He holds a gun to your head, and hands you a list of people that he wants data on. If you give him all the data, he promises you ten million dollars. If you don’t, he will put a bullet through your head and make it look like a suicide.

What do you do?

It doesn’t matter whether the man with the gun is with the US Government, the Russian Government, or just some nutjob with a grudge and a list of ex-girlfriends. Your life is at risk, and you have access to the data he needs. Remember, you don’t need a supercomputer to crack encryption if you can beat the password out of the owner with a wrench. The only way to make sure nobody gets unauthorized access to data is not to have that data lying around in the first place.

And there’s the catch.

So much of our online lives run on data, not just to pay for it, but to make it usable. We’ve struck a Faustian bargain, and this is the result. Fear is justified, and at the risk of conspiracy-mongering, in retrospect it’s no surprise that Peter Thiel was so willing to fund the Trump campaign. Thiel is the co-founder of Palantir, one of the biggest of the big data companies, which already has huge inroads with the New York City government. Plus, he sits on the board of Facebook. Now, Thiel has the ear of the President-Elect.

With no controls on what data is collected and how long it’s stored, these databases become increasingly tantalizing targets for malicious actors of all stripes. It’s a shame that it took this long for anyone to start waking up to the danger. Now, it might be too late. At least we’re having the discussion at last, and perhaps a few of the people who called me paranoid aren’t going to be quite so dismissive going forward. Boy is that some cold comfort.