The election flipped a switch for me, however, and I’ve gone from merely angry to outright furious. I’ve gone from concerned about how my data is being used to outright afraid of tech companies colluding with a government to create registries of Muslims, or other “dangerous” groups. I’m furious that the lack of moderation and oversight on social media has resulted in radical white nationalists dominating the platforms.
It’s tiresome, to be honest. All this anger and rage, it doesn’t make for good writing, and it doesn’t make for pleasant reading either. Beyond that, what good is my raging and screaming into the void even accomplishing? Maybe if I had the audience of Kara Swisher, who wrote a scathing editorial about the tech CEOs coming to Trump’s “summit”, my words could have an impact on the industry. Instead, it feels like I’m trying to defend against a charging elephant while armed with a thumbtack.
The reason I write about technology is because I care. I complain because I love. I got my first computer in 1992, and I got online in 1997. Both events changed my life, and spawned a fascination with the potential of computers, the internet, and related gizmos that lingers still. Through technology, I made friends when I was a socially isolated teenager, I found love while I was a socially awkward college student, and I found a voice as an adult. There’s so much power and potential for good embedded in technology that seeing it all twisted to serve the ends of the greedy, the violent, and the hateful… well, can you blame me for being angry?
But what am I going to do about it? That’s the tricky part. I’m burned out, and I’ve been burned. Two short stints in the tech industry, even if one was on the periphery, taught me only that I don’t want to work in the tech industry, even if I were working for one of the better, more progressive tech companies. There’s no joy for me in trying to be part of the solution from the inside, and no success (or joy) in trying to solve the problem from the outside. I find myself at an impasse.
In turn, I have to reassess the goal of the Sanspoint project. My technology writing has, ostensibly, been guided by a sense of wanting to use the technology we have better. I don’t mean this in just a personal productivity sense, but also towards the ends of peace, love, and economic equality. (Yes, there’s still the slightest bit of an idealist under my cynical exterior if you scratch hard enough.) What’s clear is that the direction my writing has taken of late is not towards those goals. It’s past time to change that. I just don’t want to abandon the important struggle over the future we face in order to do it.
You may not be aware of it, but we are in the middle of World War III. It is not nuclear bombs we must fear. The weapon is the human mind, or lack of it, on this planet. That will determine our fate.
I’ve had debates online about the theory that Apple needs to weaken its stance on privacy if it wants to be a leader in consumer AI products. My stance on this is simple: no. If anything, Apple should strengthen its stance on user privacy, both as good practice and as a way to protect its customers against the incoming Presidential administration. And if this means Apple can’t compete in the AI and machine learning space with Google, Facebook, Amazon, or whoever, I am more than happy to accept that.
Why? For one, I’m skeptical that all of this data is actually giving us better products. Big data, AI, and ML may certainly be useful in specialized applications, but in the consumer space, I’m not seeing the benefits. All you need do is look at the current space of consumer AI and ML. There are two main consumer-level applications, and both have the same general purpose in mind: getting you to consume things. It’s obvious in the case of the ad-supported model used by Facebook and Google. The more data they collect, the more accurate the ads will be. Whether that’s true is for you to judge, but the last thing I need is more ads telling me to buy shit I don’t need based on some random link I clicked.
The second is in the realm of home virtual assistants, of which the Amazon Echo is the most popular. Google’s also entered the game with the Google Home. The last thing I need is a hot microphone to Amazon or Google’s data centers living in my apartment, but let’s explore just what the heck these things actually do. At a fundamental level, these are devices that compel you to consume more from the companies that make them, along with their partners. The Amazon Echo lets you buy things (from Amazon), play music (from Amazon), and control various smart home devices you likely bought through Amazon. Google Home is similar, though I don’t know if its e-commerce functionality is as built out as Amazon’s.
Every command you issue, “Alexa, order more paper towels,” or “Hey, Google, start playing my Christmas playlist,” is stored, analyzed, attributed to your profile, and used to sell you more stuff. And I would be quite surprised if Amazon doesn’t have an agreement with whoever makes your various smart home devices to share usage data. The whole thing is a home spying device designed to build a profile of its users that will be monetized. It’s just given a servile, yet slightly snarky personality to make you feel at ease when you give up another useful nugget of personal data. And for what? To make it easier to buy paper towels, or control the lights?
We keep being promised better products, if we just give up more data. We give up more data, and we still get crap that’s only better at selling us more crap. It’s crap all the way down. A more accurate playlist of music recommendations only keeps you paying $9.99 a month for more music, of which the artists see only pennies. Better Alexa speech recognition means you can order paper towels with the water running in the kitchen. Big whoop.
But all this data can also be used for more disturbing things down the line, and that’s what bothers me most. As Maciej Cegłowski puts it:
“The data we’re collecting about people has this same odd property. Tech companies come and go, not to mention the fact that we share and sell personal data promiscuously.
“But information about people retains its power as long as those people are alive, and sometimes as long as their children are alive. No one knows what will become of sites like Twitter in five years or ten. But the data those sites own will retain the power to hurt for decades.”
By way of an example, Maciej points to LiveJournal: a “gay blogger in Moscow” who started a LiveJournal account in 2004 is now at risk of being outed, because “[i]n 2007, LiveJournal [was] sold to a Russian company…” And, well, we know how the current Russian government feels about homosexuality, right?
Even a company that is generally on the good side of user privacy, like Apple, could change its tune at any moment. Tomorrow, Tim Cook could step off the wrong curb, and get hit by a bus. Or, Wall Street could decide they’ve had enough and kick him out in favor of a CEO who is more willing to work with the Federal Government and the Trump Administration. Having sanctions slapped on every iPhone imported from Shenzhen isn’t going to be great for the stock price.
But we don’t even have to wait. Right now, a member of Facebook’s board, Peter Thiel, has the ear of a President-Elect who promised to deport Muslims, even those who were born in this country. Don’t tell me Thiel wouldn’t compel Facebook to help. They’ve already said they would do it if asked. Facebook is the largest of the tech data brokers we surrender our personal data to, wittingly or not. They promise us relevance, and they’ve given us filter bubbles full of fake news stories. This is what I’m paying my privacy for?
It’s not hard to imagine how all this data could be turned against us. Imagine a suspicious explosion in a major city. The FBI compels Apple and Google to hand over location data on every iPhone and Android device in the area before the explosion, along with device owners’ email addresses. (Currently, Apple deletes last known location data after two hours, but as noted, this could change.) Run that data against everyone who has identified themselves as Muslim on Facebook. Then, scan all the profile and tagged pics of those Facebook users and compare them with the camera footage picked up by the Peter Thiel-founded Palantir, which is already working with the NYPD. Now you have your suspects, ready for “enhanced interrogation” and potential imprisonment or deportation.
Creeped out yet?
And it’s all because you wanted better location-aware alerts and suggestions on what crap to buy. When giving up privacy buys us only crap products while enabling government and corporate surveillance, it’s not worth it. Unfortunately, as I’ve noted before, “our online lives run on data.” We can no more extricate ourselves from the web of services that collect and store our personal data than we can extricate ourselves from the plumbing in our houses. At least the water company isn’t analyzing our leavings to find new things to sell us. Yet.
These are all linked. You can’t demand a company roll back user privacy in one area without compromising everything. It’s not immediate, but like a single torn thread in a pair of jeans, that hole is going to stretch and tear more threads with every movement, and you won’t be terribly happy when something shows through that you didn’t intend. I suppose I’d be less skeptical if someone could show me one useful product that genuinely improves lives beyond offering new things to consume, and does so in a way that won’t put the lives of its users at risk. Right now, we don’t have it, just a bunch of vague promises that could be broken in a heartbeat. If the alternative means that we have no AIs in our pockets and homes, well, that’s a trade I’d be happy to make.