This tweet has been circulating for a few years, but it remains relevant to technology discussion today, if not more so.
Unlike so many people I follow online, I never came up on Cyberpunk. When it comes to Sci-Fi, I grew up on Star Trek—specifically The Next Generation. This might be why, when it comes to technology, there’s still some optimist under my cynical surface. You just need to scratch hard. Though later series and movies would muddy the waters (in a good way), Star Trek retains a utopian view of technology. Not one where technology undoes all human foibles, but where it helps us usher in a more peaceful world, free of material want, and with the freedom to seek fulfillment among the stars. Technology is the vehicle through which humanity’s better nature manifests into the universe.
Instead, I see technology turned against our better natures. Whether it’s governments and corporations alike spying on us through our communication tools, attempts to shove more consumer garbage down our throats, or systems predicting our wants before we even know them, all for the benefit of a corporate partner, I get mad. Can you blame me? This is not the future I signed up for, but as one of my favorite bands put it, “The future that [I] anticipated has been cancelled.”
So, we get the Cyberpunk future, with all the exploitive techno-capitalism, environmental disasters, and crappy music, but none of the cool fashion. At least we also don’t have to carry around as much gear. If this is the sci-fi view of the world you grew up on, I suppose it’s easy to accept it. While I read a bit of cyberpunk literature as a teen and young adult—Neuromancer and Snow Crash, specifically—I didn’t fall in love with the concept. Likely because I wouldn’t be the elite hacker, slashing his way through cyberspace, merely an office drone at one of the MegaCorps. Maybe there’s a story idea there, though.
But, the Cyberpunk dystopia is hiding its true face behind the lofty utopian rhetoric of the Star Trek future. Not that this is anything new, of course. Utopian rhetoric has been the marketing methodology of new technology for centuries. Which is why, I suppose, if there’s any Sci-Fi that truly reflects the state of technology today, it might be the other major Sci-Fi influence of my adolescence: Douglas Adams.
In The Hitchhiker’s Guide to the Galaxy series, Douglas extrapolates a view of technology that’s much the same as we have today: a bunch of pie-in-the-sky utopian promises that never work as advertised. This holds true whether it’s robots with “Genuine People Personalities”, or food synthesizers that analyze your body’s dietary needs and your brain’s desires before spitting out something “almost, but not quite, entirely unlike tea.” Hell, “Share and Enjoy” may as well be the slogan for Facebook, if not The Sirius Cybernetics Corporation.
The great thing about Douglas’s view of technology is that it adds the right spice of cynicism to my utopian Star Trek dreams. Even if we get our post-scarcity utopia of starships and matter replicators, they’ll probably still hang, and take every other system down with them, if someone asks for a cup of tea. But to get there, we need to decide which future we want. And we need to see past the doublespeak and false utopian nonsense spouted by Valley douchebags seeking another round of funding for their newest startup that promises to make your life easier by letting you pay someone to do menial work at a lower pay rate.
I don’t know about you, but I’m still gunning for the Star Trek future. In the fiction, however, that future arrived only after political turmoil and a brutal war that left the planet devastated. Perhaps we must go through the cyberpunk future, the dystopia, and the horror, to reach the technological utopia on the other side. But if we can skip to utopia—or at least make the dystopia short—why shouldn’t we? Where’s my generation’s genuine Sci-Fi optimism? It sure isn’t coming from Silicon Valley.
The election flipped a switch for me, however, and I’ve gone from merely angry to outright furious. I’ve gone from concerned about how my data is being used to outright afraid of tech companies colluding with a government to create registries of Muslims, or other “dangerous” groups. I’m furious that the lack of moderation and oversight on social media has resulted in radical white nationalists dominating the platforms.
It’s tiresome, to be honest. All this anger and rage doesn’t make for good writing, and it doesn’t make for pleasant reading either. Beyond that, what good does my raging and screaming into the void even accomplish? Maybe if I had the audience of Kara Swisher, who wrote a scathing editorial about the tech CEOs coming to Trump’s “summit”, my words could have an impact on the industry. Instead, it feels like I’m trying to defend against a charging elephant while armed with a thumbtack.
The reason I write about technology is because I care. I complain because I love. I got my first computer in 1992, and I got online in 1997. Both events changed my life, and spawned a fascination with the potential of computers, the internet, and related gizmos that still lingers. Through technology, I made friends when I was a socially isolated teenager, I found love when I was a socially awkward college student, and I found a voice as an adult. There’s so much power and potential for good embedded in technology that seeing it all twisted to serve the ends of the greedy, the violent, and the hateful… well, can you blame me for being angry?
But what am I going to do about it? That’s the tricky part. I’m burned out, and I’ve been burned. Two short stints in the tech industry, even if one was on the periphery, taught me only that I don’t want to work in the tech industry—even if I were working for one of the better, more progressive tech companies. There’s no joy for me in being part of the solution from the inside, and no success (or joy) in trying to solve the problem from the outside. I find myself at an impasse.
In turn, I have to reassess the goal of the Sanspoint project. My technology writing has, ostensibly, been guided by a sense of wanting to use the technology we have better. I don’t mean this in just a personal productivity sense, but also towards the ends of peace, love, and economic equality. (Yes, there’s still the slightest bit of an idealist under my cynical exterior if you scratch hard enough.) What’s clear is that my writing of late is not moving towards those goals. It’s past time to change that. I just don’t want to abandon the important struggle over the future in order to do it.
You may not be aware of it, but we are in the middle of World War III. It is not nuclear bombs we must fear. The weapon is the human mind, or lack of it, on this planet. That will determine our fate.
I’ve had debates online about the theory Apple needs to weaken their stance on privacy if they want to be a leader in consumer AI products. My stance on this is simple: no. If anything, Apple should strengthen their stance on user privacy, both as good practice and as a way to protect its customers against the incoming Presidential administration. And if this means Apple can’t compete in the AI and machine learning space with Google, Facebook, Amazon, or whoever, I am more than happy to accept that.
Why? For one, I’m skeptical that all of this data is actually giving us better products. Big data, AI, and ML may certainly be useful in specialized applications, but in the consumer space, I’m not seeing the benefits. All you need do is look at the current space of consumer AI and ML. There are two main consumer-level applications, and both have the same general purpose in mind: getting you to consume things. It’s obvious in the case of the ad-supported model used by Facebook and Google. The more data they collect, the more accurate the ads will be. Whether that’s actually true is debatable, but the last thing I need is more ads telling me to buy shit I don’t need based on some random link I clicked.
The second is in the realm of home virtual assistants, of which the Amazon Echo is the most popular. Google’s also entered the game with the Google Home. The last thing I need is a hot microphone to Amazon or Google’s data centers living in my apartment, but let’s explore just what the heck these things actually do. At a fundamental level, these are devices that compel you to consume more from the companies that make them, along with their partners. The Amazon Echo lets you buy things (from Amazon), play music (from Amazon), and control various smart home devices you likely bought through Amazon. Google Home is similar, though I don’t know if its e-commerce functionality is as built out as Amazon’s.
Every command you issue (“Alexa, order more paper towels,” or “Hey, Google, start playing my Christmas playlist”) is stored, analyzed, attributed to your profile, and used to sell you more stuff. And I would be quite surprised if Amazon doesn’t have an agreement with whoever makes your various smart home devices to share usage data. The whole thing is a home spying device designed to build a profile of its users that will be monetized. It’s just been given a servile, yet slightly snarky, personality to make you feel at ease when you give up another useful nugget of personal data. And for what? To make it easier to buy paper towels, or control the lights?
We keep being promised better products, if we just give up more data. We give up more data, and we still get crap that’s only better at selling us more crap. It’s crap all the way down. A more accurate playlist of music recommendations only keeps you paying $9.99 a month for more music—of which the artists see only pennies. Better Alexa speech recognition means you can order paper towels with the water running in the kitchen. Big whoop.
But all this data can also be used for more disturbing things down the line, and that’s what bothers me most.
“The data we’re collecting about people has this same odd property. Tech companies come and go, not to mention the fact that we share and sell personal data promiscuously.
“But information about people retains its power as long as those people are alive, and sometimes as long as their children are alive. No one knows what will become of sites like Twitter in five years or ten. But the data those sites own will retain the power to hurt for decades.”
By way of an example, Maciej Cegłowski uses LiveJournal, and how a “gay blogger in Moscow” who started a LiveJournal account in 2004 is now at risk of being outed because “[I]n 2007, LiveJournal [was] sold to a Russian company…” And, well, we know how the current Russian government feels about homosexuality, right?
Even a company that is generally on the good side of user privacy, like Apple, could change its tune at any moment. Tomorrow, Tim Cook could step off the wrong curb, and get hit by a bus. Or, Wall Street could decide they’ve had enough and kick him out in favor of a CEO who is more willing to work with the Federal Government and the Trump Administration. Having sanctions slapped on every iPhone imported from Shenzhen isn’t going to be great for the stock price.
But we don’t even have to wait. Right now, a member of Facebook’s board, Peter Thiel, has the ear of a President-Elect who promised to deport Muslims, even those who were born in this country. Don’t tell me Thiel wouldn’t compel Facebook to help. They’ve already said they would do it if asked. Facebook is the largest of the tech data brokers we surrender our personal data to, wittingly or not. They promise us relevance, and they’ve given us filter bubbles full of fake news stories. This is what I’m trading my privacy for?
It’s not hard to imagine how all this data could be turned against us. Imagine a suspicious explosion in a major city. The FBI compels Apple and Google to hand over location data on every iPhone and Android device in the area before the explosion, along with the device owners’ email addresses. (Currently, Apple deletes last known location data after two hours, but as noted, this could change.) Run that data against everyone who has identified themselves as Muslim on Facebook. Then, scan all the profile and tagged pics of those Facebook users and compare them with the camera footage picked up by the Peter Thiel-founded Palantir—which is already working with the NYPD. Now you have your suspects, ready for “enhanced interrogation” and potential imprisonment or deportation.
Creeped out yet?
And it’s all because you wanted better location-aware alerts and suggestions on what crap to buy. When giving up privacy buys us nothing but crap products while enabling government and corporate surveillance, it’s not worth it. Unfortunately, as I’ve noted before, “our online lives run on data.” We can no more extricate ourselves from the web of services that collect and store our personal data than we can extricate ourselves from the plumbing in our houses. At least the water company isn’t analyzing our leavings to find new things to sell us. Yet.
These are all linked. You can’t demand a company roll back user privacy in one area without compromising everything. It’s not immediate, but like a single torn thread in a pair of jeans, that hole is going to stretch and tear more threads with every movement. You won’t be terribly happy when something gets through that you didn’t intend. I suppose I’d be less skeptical if someone could show me one useful product that genuinely improves lives beyond offering new things to consume, and does so in a way that won’t put the lives of its users at risk. Right now, we don’t have it, just a bunch of vague promises that could be broken in a heartbeat. If the alternative means that we have no AIs in our pockets and homes, well, that’s a trade I’d be happy to make.
It was an assault rifle being fired in a pizzeria that signaled the severity of Facebook’s fake news problem. Call it Pizzagate – a right-wing conspiracy theory built on a baseless lie cooked up on 4chan. The rifle being fired was far from the only danger to the employees and owner of Comet Ping Pong – they’ve faced death threats and invasions of their private lives for weeks. The harassment and threats have now spilled over to affect neighboring businesses and the people who own and work in them.
The same tactics used against women in gaming, fueled by a vague, nonsensical internet conspiracy, are being used to fuel political violence. Just as Gamergate was fueled by a non-existent review of a video game, Pizzagate uses equally false information to drive a violent harassment campaign. Now, it has spilled into real-world gun violence, and shows no signs of stopping.
If you think what happens in digital spaces has no bearing in the “real world” this is your wake-up call. Answer it.
At The Guardian, Carole Cadwalladr has noticed something disturbing about Google search suggestions:
Neither Google or Facebook make their algorithms public. Why did my Google search return nine out of 10 search results that claim Jews are evil? We don’t know and we have no way of knowing. Their systems are what Frank Pasquale describes as “black boxes”. He calls Google and Facebook “a terrifying duopoly of power” and has been leading a growing movement of academics who are calling for “algorithmic accountability”. “We need to have regular audits of these systems,” he says. “We need people in these companies to be accountable. In the US, under the Digital Millennium Copyright Act, every company has to have a spokesman you can reach. And this is what needs to happen. They need to respond to complaints about hate speech, about bias.”
This is disturbing and terrifying. We know search algorithms can be gamed, but white nationalists and racists have taken it to a whole new level. That Google doesn’t even seem to think it’s a problem is even worse. It is Google’s responsibility to ensure its results are accurate, and linking to the Daily Stormer and other hateful organizations when users search for information about the Holocaust, or statistics on black crime, is an abdication of that responsibility. There is no neutrality when a system can be twisted to promote one group’s horrifying ideology.