This tweet has been circulating for a few years, but it remains relevant to technology discussion today, if not more so.
Unlike so many people I follow online, I never came up on Cyberpunk. When it comes to Sci-Fi, I grew up on Star Trek—specifically The Next Generation. This might be why, when it comes to technology, there’s still some optimist under my cynical surface. You just need to scratch hard. Though later series and movies would muddy the waters (in a good way), Star Trek retains a utopian view of technology. Not one where technology undoes all human foibles, but where it helps us usher in a more peaceful world, free of material want, and with the freedom to seek fulfillment among the stars. Technology is the vehicle through which humanity’s better nature manifests into the universe.
Instead, I see technology turned against our better natures. Whether it’s governments and corporations alike spying on us through our communication tools, attempts to shove more consumer garbage down our throats, or just predicting our wants before we know we want something for the benefit of a corporate partner, I get mad. Can you blame me? This is not the future I signed up for, but as one of my favorite bands put it, “The future that [I] anticipated has been cancelled.”
So, we get the Cyberpunk future, with all the exploitive techno-capitalism, environmental disasters, and crappy music, but none of the cool fashion. At least we also don’t have to carry around as much gear. If this is the sci-fi view of the world you grew up on, I suppose it’s easy to accept it. While I read a bit of cyberpunk literature as a teen and young adult—Neuromancer and Snow Crash, specifically—I didn’t fall in love with the concept. Likely because I wouldn’t be the elite hacker, slashing his way through cyberspace, merely an office drone at one of the MegaCorps. Maybe there’s a story idea there, though.
But, the Cyberpunk dystopia is hiding its true face behind the lofty utopian rhetoric of the Star Trek future. Not that this is anything new, of course. Utopian rhetoric has been the marketing methodology of new technology for centuries. Which is why, I suppose, if there’s any Sci-Fi that truly reflects the state of technology today, it might be the other major Sci-Fi influence of my adolescence: Douglas Adams.
In The Hitchhiker’s Guide to the Galaxy series, Douglas extrapolates a view of technology that’s much the same as we have today: a bunch of pie-in-the-sky utopian promises that never work as advertised. This holds true whether it’s robots with “Genuine People Personalities”, or food synthesizers that analyze your body’s dietary needs and your brain’s desires before spitting out something “almost, but not quite, entirely unlike tea.” Hell, “Share and Enjoy” may as well be the slogan for Facebook, if not The Sirius Cybernetics Corporation.
The great thing about Douglas’s view of technology is that it adds the right spice of cynicism to my utopian Star Trek dreams. Even if we get our post-scarcity utopia of starships and matter replicators, they’ll probably still hang, and take every other system down with them, if someone asks for a cup of tea. But to get there, we need to decide which future we want. And we need to see past the doublespeak and false utopian nonsense spouted by Valley douchebags seeking another round of funding for their newest startup that promises to make your life easier by letting you pay someone to do menial work at a lower pay rate.
I don’t know about you, but I’m still gunning for the Star Trek future. In that timeline, however, utopia only arrived after political turmoil and a brutal war that left the planet devastated. Perhaps we must go through the cyberpunk future, the dystopia, and the horror, to reach the technological utopia on the other side. But if we can skip to utopia—or at least make the dystopia short—why shouldn’t we? Where’s my generation’s genuine Sci-Fi optimism? It sure isn’t coming from Silicon Valley.
The election flipped a switch for me, however, and I’ve gone from merely angry to outright furious. I’ve gone from concerned about how my data is being used to outright afraid of tech companies colluding with a government to create registries of Muslims, or other “dangerous” groups. I’m furious that the lack of moderation and oversight on social media has resulted in radical white nationalists dominating the platforms.
It’s tiresome, to be honest. All this anger and rage, it doesn’t make for good writing, and it doesn’t make for pleasant reading either. Beyond that, what good is my raging and screaming into the void even accomplishing? Maybe if I had the audience of Kara Swisher, who wrote a scathing editorial about the tech CEOs coming to Trump’s “summit”, my words could have an impact on the industry. Instead, it feels like I’m trying to defend against a charging elephant while armed with a thumbtack.
The reason I write about technology is because I care. I complain because I love. I got my first computer in 1992, and I got online in 1997. Both events changed my life, and spawned a continuing fascination with the potential of computers, the internet, and related gizmos that still lingers. Through technology, I made friends when I was a socially isolated teenager, I found love while I was a socially awkward college student, and I found a voice as an adult. There’s so much power and potential for good embedded in technology that seeing it all twisted to serve the ends of the greedy, the violent, and the hateful… well, can you blame me for being angry?
But what am I going to do about it? That’s the tricky part. I’m burned out, and I’ve been burned. Two short stints in the tech industry, even if one was on the periphery, taught me only that I don’t want to work in the tech industry—even if I were working for one of the better, more progressive tech companies. I found no joy in being part of the solution from the inside, and neither success nor joy in trying to solve the problem from the outside. I find myself at an impasse.
In turn, I have to reassess the goal of the Sanspoint project. My technology writing has, ostensibly, been guided by a sense of wanting to use the technology we have better. I don’t mean this in just a personal productivity sense, but also towards the ends of peace, love, and economic equality. (Yes, there’s still the slightest bit of an idealist under my cynical exterior if you scratch hard enough.) What’s clear is that my writing of late is not moving towards those goals. It’s past time to change that. I just don’t want to abandon the important struggle for the future we face in order to do it.
You may not be aware of it, but we are in the middle of World War III. It is not nuclear bombs we must fear. The weapon is the human mind, or lack of it, on this planet. That will determine our fate.
I’ve had debates online about the theory Apple needs to weaken their stance on privacy if they want to be a leader in consumer AI products. My stance on this is simple: no. If anything, Apple should strengthen their stance on user privacy, both as good practice and as a way to protect its customers against the incoming Presidential administration. And if this means Apple can’t compete in the AI and machine learning space with Google, Facebook, Amazon, or whoever, I am more than happy to accept that.
Why? For one, I’m skeptical that all of this data is actually giving us better products. Big data, AI, and ML may certainly be useful in specialized applications, but in the consumer space, I’m not seeing the benefits. All you need do is look at the current space of consumer AI and ML. There are two main consumer-level applications, and both have the same general purpose in mind: getting you to consume things. It’s obvious in the case of the ad-supported model used by Facebook and Google. The more data they collect, the more accurate the ads will be. Whether that’s actually true is debatable, but the last thing I need is more ads telling me to buy shit I don’t need based on some random link I clicked.
The second is in the realm of home virtual assistants, of which the Amazon Echo is the most popular. Google’s also entered the game with the Google Home. The last thing I need is a hot microphone to Amazon or Google’s data centers living in my apartment, but let’s explore just what the heck these things actually do. At a fundamental level, these are devices that compel you to consume more from the companies that make them, along with their partners. The Amazon Echo lets you buy things (from Amazon), play music (from Amazon), and control various smart home devices you likely bought through Amazon. Google Home is similar, though I don’t know if its e-commerce functionality is as built out as Amazon’s.
Every command you issue, whether “Alexa, order more paper towels” or “Hey, Google, start playing my Christmas playlist,” is stored, analyzed, attributed to your profile, and used to sell you more stuff. And I would be quite surprised if Amazon doesn’t have an agreement with whoever makes your various smart home devices to share usage data. The whole thing is a home spying device designed to build a profile of its users that will be monetized. It’s just given a servile, yet slightly snarky personality to make you feel at ease when you give up another useful nugget of personal data. And for what? To make it easier to buy paper towels, or control the lights?
We keep being promised better products, if we just give up more data. We give up more data, and we still get crap that’s only better at selling us more crap. It’s crap all the way down. A more accurate playlist of music recommendations only keeps you paying $9.99 a month for more music—of which the artists see only pennies. Better Alexa speech recognition means you can order paper towels with the water running in the kitchen. Big whoop.
But all this data can also be used for more disturbing things down the line, and that’s what bothers me most.
“The data we’re collecting about people has this same odd property. Tech companies come and go, not to mention the fact that we share and sell personal data promiscuously.
“But information about people retains its power as long as those people are alive, and sometimes as long as their children are alive. No one knows what will become of sites like Twitter in five years or ten. But the data those sites own will retain the power to hurt for decades.”
By way of an example, Maciej uses LiveJournal, and how a “gay blogger in Moscow” who started a LiveJournal account in 2004 is now at risk of being outed because “[I]n 2007, LiveJournal [was] sold to a Russian company…” And, well, we know how the current Russian government feels about homosexuality, right?
Even a company that is generally on the good side of user privacy, like Apple, could change its tune at any moment. Tomorrow, Tim Cook could step off the wrong curb, and get hit by a bus. Or, Wall Street could decide they’ve had enough and kick him out in favor of a CEO who is more willing to work with the Federal Government and the Trump Administration. Having sanctions slapped on every iPhone imported from Shenzhen isn’t going to be great for the stock price.
But we don’t even have to wait. Right now, a member of Facebook’s board, Peter Thiel, has the ear of a President-Elect who promised to deport Muslims, even those who were born in this country. Don’t tell me Thiel wouldn’t compel Facebook to help. They’ve already said they would do it if asked. Facebook is the largest of the tech data brokers we surrender our personal data to, wittingly or not. They promise us relevance, and they’ve given us filter bubbles full of fake news stories. This is what I’m trading my privacy for?
It’s not hard to imagine how all this data could be turned against us. Imagine a suspicious explosion in a major city. The FBI compels Apple and Google to hand over location data on every iPhone and Android device in the area before the explosion, along with device owners’ email addresses. (Currently Apple deletes last known location data after two hours, but as noted, this could change.) Run that data against everyone who has identified themselves as Muslim on Facebook. Then, scan all the profile and tagged pics of those Facebook users and compare them with the camera footage picked up by Peter Thiel-founded Palantir—which is already working with the NYPD. Now you have your suspects, ready for “enhanced interrogation” and potential imprisonment or deportation.
Creeped out yet?
And it’s all because you wanted better location-aware alerts and suggestions on what crap to buy. When giving up privacy buys only crap products while enabling government and corporate surveillance, it’s not worth it. Unfortunately, as I’ve noted before, “our online lives run on data.” We can no more extricate ourselves from the web of services that collect and store our personal data than we can extricate ourselves from the plumbing in our houses. At least the water company isn’t analyzing our leavings to find new things to sell us. Yet.
These are all linked. You can’t demand a company roll back user privacy in one area without compromising everything. It’s not immediate, but like a single torn thread in a pair of jeans, that hole is going to stretch and tear more threads with every movement. You won’t be terribly happy when something gets through that you didn’t intend. I suppose I’d be less skeptical if someone could show me one useful product that genuinely improves lives beyond offering new things to consume, and does so in a way that won’t put the lives of its users at risk. Right now, we don’t have it, just a bunch of vague promises that could be broken in a heartbeat. If the alternative means that we have no AIs in our pockets and homes, well, that’s a trade I’d be happy to make.
A while ago, I came across an interesting opinion piece on Lifelogging, and how interest is waning as people move towards more “forgetful” services like Snapchat. It’s certainly something I’ve thought a lot about, both in terms of the data I create, and who has access to it. It requires a lot of trust in the people we share our data with. The essay’s author, Mike Elgan, has his own opinion:
I believe this apparent impulse to forget is not what it seems. In reality, people are just suffering from information overload and data anxiety. Collecting huge amounts of personal data – even large numbers of pictures – simply creates a problem without solving one. The problem it creates is: How do you manage all this data? How do you back it up? Store it? Annotate or tag it?
Following this election, I think Mike Elgan and others are asking another question: who else has access to our data? I know I am, but I was asking it before it was cool, or at least before we were all afraid of Donald Trump.
In the wake of the election, a lot more people who were pretty darn confident about their data, how safe it is, and how safe they are, started getting a lot more paranoid. Case in point, this excellent Buzzfeed article by Sheera Frenkel, which interviews a handful of—well, two—Valley entrepreneurs who run data-focused companies, along with noted tech wise-ass critic, Maciej Cegłowski of Pinboard fame. (I use “wise-ass” as a compliment, here.)
Maciej has been an outspoken critic of how the Valley treats data collection, likening it to nuclear waste in an excellent talk. Maciej also has an excellent set of six fixes to help restore privacy online. I’m grateful for his work, but rather than rehash his ideas and spend more time chiding Valley companies for treating the data of real people with as much care as you might give the wrapper for your chewing gum, let’s step back and think about how we got here.
Tech companies have two main uses for personal data. One is for improving products and the consumer experience. Google collects your every move, reads every email, and tracks cars in traffic, so that when you wake up in the morning, you can find out that traffic’s backed up on your way to the airport. Then Google suggests you leave a few minutes early, and take an alternate route. Your life is measurably improved—you didn’t miss your flight—and everyone is happy.
The other use is to sell to advertisers. Google collects your every move, reads every email, and tracks cars in traffic so that it can suggest you pick up breakfast from Dunkin’ Donuts, or that it’s time to get your tires rotated at Jiffy Lube. Okay, maybe not you specifically, but recent changes there suggest that might be changing. Either way, the goal for advertisers is to leverage all of this data to know what you want or need before you do, get the right ad in your face, and make the sale.
Of course, this is all review. We know this is the arrangement, and for every game-changing, life-improving notification we get on our devices, there’s going to be a targeted ad somewhere to make sure it’s paid for. Implicit in all of this is trust that these companies will be good stewards of our data. What we haven’t fully defined is what good stewardship of data is.
We are getting a very good idea of bad stewardship, though. It’s the push notification to leave for work when you’ve already sat down at your desk, sure, but it’s also the badly targeted ad that you can’t escape no matter how hard you try. It’s the email from Have I Been Pwned? that tells you yet another company has been hacked and your email, password, credit card, and god knows what else is now for sale on the darknet to the highest bidder. It’s the service you loved and trusted going out of business, and their assets being picked up for pennies a gigabyte by some entity you never heard of in a country you can’t find on the map, with no way to get your data back.
The narrative has always been “trust us, we know what we’re doing,” followed up by “give us more data, so we can get better”—at least among those who are courteous enough to ask. Most places just change the Terms of Service, and throw up a screen that people will click through without reading so they can get their next dopamine hit. But, because so many techies want a better, smarter device, they’ll not only happily give in to what the companies ask, they’ll raise hell about those companies that take their time and try to do more with less, or protect user privacy in the process. (Yes, I’m talking about Apple.)
Which is why I’m so surprised to read about how Valley companies are freaking out over Donald Trump. Nobody can tell me they couldn’t see this coming, if not from a Trump presidency, then perhaps under another in four to eight years. The fracas between the FBI and Apple should have been the latest of a whole flock of dead canaries in the data mine—it was never going to stop with one device. Whatever a company’s policies on encryption, authorized access, or what have you, men with guns can be very persuasive, whether they have a court order or not.
I’d like you to pretend for a moment that you work for Facebook, Google, Amazon, Apple, or Palantir—any company that collects a whole bunch of data on people’s online behavior. Imagine you work in the department that oversees data collection, and that you have access to the database. Imagine you can see, unencrypted, every scrap of data of every person in that database, should you choose to. That’s a lot of power, and in real life, it’s probably shared across a bunch of people. But imagine you’re one of them.
After work one day, a man trails you to your car. He holds a gun to your head, and hands you a list of people that he wants data on. If you give him all the data, he promises you ten million dollars. If you don’t, he will put a bullet through your head and make it look like a suicide.
What do you do?
It doesn’t matter whether the man with the gun is with the US Government, the Russian Government, or just some nutjob with a grudge and a list of ex-girlfriends. Your life is at risk, and you have access to the data he needs. Remember, you don’t need a supercomputer to crack encryption if you can beat the password out of the owner with a wrench. The only way to make sure nobody gets unauthorized access to data is not to have that data lying around in the first place.
And there’s the catch.
So much of our online lives run on data, not just to pay for it, but to make it usable. We’ve struck a Faustian bargain, and this is the result. Fear is justified, and at the risk of conspiracy-mongering, in retrospect it’s no surprise that Peter Thiel was so willing to fund the Trump campaign. Thiel is the co-founder of Palantir, one of the biggest of the big data companies, and they already have huge inroads with the New York City government. Plus, he sits on the board of Facebook. Now, Thiel has the ear of the President-Elect.
With no controls on what data is collected and how long it’s stored, these databases become increasingly tantalizing targets for malicious actors of all stripes. It’s a shame that it took this long for anyone to start waking up to the danger. Now, it might be too late. At least we’re having the discussion at last, and perhaps a few of the people who called me paranoid aren’t going to be quite so dismissive going forward. Boy is that some cold comfort.
watchOS 3’s reveal at WWDC might be the most successful OS announcement in Apple’s recent history. Of all the new features, UI changes, and overall improvements, only one got any groans: the Breathe app. In fairness, anything introduced with a quote by Deepak Chopra is worthy of skepticism at best. On the other hand, the science around deep breathing is legit. Enough so that promoting it with the nonsensical woo of Mr. Chopra isn’t going to undo it.
But, of course, the name and the presentation left so many chuckling and snarking “Oh, so the Watch is going to remind you to breathe now?” Six weeks after the release of watchOS 3, Breathe still is the butt of occasional jokes and snark from tech wags on Twitter. Well, let them snark all they want. I’m a believer. Here’s why.
Earlier this year, I was diagnosed with Adult Attention Deficit Disorder. Well, re-diagnosed. I was diagnosed with ADD as a child, but my parents opted not to medicate me. This was the early 90s, when ADD was just becoming a thing, and there were a lot of questions and concerns. The whole thing faded into the background as I struggled with my temper, my moods, with anxiety—and my ability to focus. After taking matters into my own hands as an adult and getting diagnosed, I have been taking 10 milligrams of Adderall, every day (except Sundays) for the last two months. It’s been life changing.
While the popular image of ADD in adults and children is either the daydreamer or the hyperactive thrill-seeker, ADD can also include a number of other symptoms, including mood swings and anxiety. I have both, though anxiety is probably the more prevalent of the two. The Adderall keeps both in check, but anxiety still creeps in around the edges. This happens when I’m on my medication, and even more when I’m not. This is why I love the Breathe app.
Every three hours or so, I get a gentle tap on the wrist, reminding me to take a minute and do a deep breathing exercise. I might not be able to get to it right away—lately it’s been bugging me in the middle of my dinner—but I usually take the opportunity. It feels good to take my brain off whatever it’s chosen to chew on, center myself with my body, and just breathe. When I’m done, I feel a little calmer, a little more relaxed, and a little more in control. It’s incredible, and I’ve made it a point to try to get four “Mindful Minutes” in with the app every day. I track it with Streaks, so I have a record. I’m on an eleven-day streak as I write this.
Whether you have anxiety, or just want a proven way to feel centered for a bit, deep breathing is a huge help. No, you don’t need an Apple Watch to do it, of course. I first learned about the general technique from a video Mara Wilson made for Project UROK. It was just hard to make time for it in my life, especially during an anxiety attack. What the Apple Watch and the Breathe app do is give me the first push towards making this a habit. That is powerful stuff that gets sold short when people brush off the app with dumb jokes. Instead of snarking, or turning off the notifications, why not give it a try? You can afford a minute or two to sit at your desk and breathe. I think you’ll be surprised.