Am I Too Paranoid About AIs?
Some of the tech people I follow online are slowly starting to crack my brain open about bots and AI stuff. John Gruber and Merlin Mann on The Talk Show were quite effective. There’s a lot of possibility, and as an Apple fanboy, I’m excited to see what’s happening with Siri at WWDC. In the interim, the Amazon Echo continues to tempt me, despite my misgivings. I’ve been a proponent of context-based computing for the past few years, and with better bots and AI, we seem to be getting there, at last.
Problem is, for all those crazy cool context-aware systems to work, for our AIs to know what we need to know before we need to know it, they need a lot of data about us. I don’t want to give all that data up to those systems. It’s less that I’m worried about giving up my personal data in the abstract, and more that I’m worried about what the people I give my data to will do with it beyond what I want them to do. It’s a question of trust. Am I giving up more than I’m getting back?
Take Google: my experiences with their AI stuff have been sub-par, to say the least, and I don’t know why. It’s a black box. Maybe I wasn’t giving Google enough data, maybe I was stymied by iOS limitations, or maybe I was some weird edge case. There was no good way to diagnose where any of it was failing, and, at the time, no easy way to make corrections when the AIs screwed up. Why should I trust Google to know my commute when it gives me reminders to leave for work once I’m already at the office?
iOS 9’s “Proactive” features got me more excited than anything Google’s done, not least because I knew most of the smarts were happening on the device instead of in the “cloud,” where Apple could do squirrely things with the data. I trust Apple in a way I don’t trust Google, but Proactive is a disappointment. Maybe there will be improvements with iOS 10, but even for a 1.0, Proactive is weak sauce. The most functional thing it does is show a little corner icon on my lock screen to open Overcast when my Bluetooth headphones connect. This does me no good, because I still hit the home button to view my lock screen like an animal, and since I have a 6S, Touch ID reads my finger before the lock screen ever appears, so I end up at my home screen.
So I’m stuck between a service that barely works and a service that might work if I’m willing to unload my entire digital life into its hungry, gaping maw. I know Google will use that data to give me something, but then they’ll slice it, dice it, mix it with the data of people they consider similar to me, and sell it as a package to advertisers. That’s how they make the money to keep the services going. We know this is the deal, but the question is… should I really be that paranoid?
It’s a tricky question. How do I know what I’m missing out on until I try it? But I can’t try it without going all in and surrendering my personal data to a service I don’t know if I can trust. As mentioned before, my previous experiences with Google’s AI stuff have been phenomenally sub-par for reasons I can’t even begin to unpack. If they want me to go all in, they’ve got to give me a compelling argument to overlook where they’ve failed in the past. Google needs to overcome not only my paranoia, but also their own failures.
My paranoia extends far beyond Google, though. I’ve made it a point not to connect anything to my Facebook account, because as little as I trust Google, I trust Facebook even less. I even disabled the ability to use Facebook with apps. That hasn’t stopped Facebook from figuring out pieces of my digital life I thought I’d siloed. I’ve seen Facebook suggest Twitter friends as Facebook friends, and all I can think is, “How did they get that?” Then I realized I’d linked my Instagram account, which uses the same email as my Facebook account, directly to Twitter, because I was unhappy with IFTTT over their poor treatment of Pinboard. That one’s on me, I guess.
The question remains. Even when I think I’ve put up barriers between myself and the prying eyes of the algorithm, something always leaks. You think you’re safe, and then the algorithm starts showing you stuff you never expected it to know: correct stuff, but not the right stuff. The only way to fix it is to surrender, to give up more data, more of myself as disembodied data points that will get sold to serve me ever more “relevant” ads. It’s a Catch-22! I don’t want my data sold, but I want at least some of what the AI algorithms can give me.
I’m not a hard-liner on any of this. I just want to know what I’m giving up, how it’ll be used to serve me, and how it’ll serve these companies’ real customers. At least then I can make an informed decision. If I buy an Echo, can I be certain Amazon is really deleting everything said in my apartment before the wake word “Alexa”? Am I going to get ads for walnuts based on casual conversations with my fiancé about nutrition? I mean, I’m the kind of person who gives fake information when signing up for store loyalty cards so I don’t get more junk mail and telemarketing calls. I can’t do that with bots and algorithms.
How long can I keep putting up a fight? At a certain point, it’s easier just to give in. My only hope is that I can hold out until the adtech bubble finally bursts, at which point I might have to pay a monthly fee to get a decent AI system in my life, but I’ll be more comfortable that way. Either that, or Siri will get the long-overdue upgrade it needs at WWDC ’16. There’s no rush, but that doesn’t mean I’m comfortable being left behind.