Sanspoint.

Essays on Technology and Culture

Thoughts on the Coming Chatbot Revolution

Well, folks, the tech press and VC establishment have shaken their Magic Eight-Ball and determined the Next Big Thing is… “Bots!”

Wait. Is this right? Grace, can you check on that? Really?! Okay, I’ll roll with it.

Yes, it’s bots. Specifically, chatbots and AI virtual assistants like Siri and Alexa. Everybody’s getting on board with the bot revolution, and it’s going to revolutionize everything. Get your VC investments primed and ready for all the bot startups.

The rise of the “bot” as the Next Big Thing from the Valley utterly mystified me until recently. What are the advantages of a conversational interface over an explicit, directly manipulable one? There’s the hands-free aspect, something I’ve appreciated with Siri, even more on my Apple Watch, but that only works with the voice assistants. Chatbots? Not so much, though we have come a long way since the days of “YOU CAN’T GET YE FLASK”.

Then it hit me, in that way so many things do. Chatbots, especially when they have a playful personality, are a perfect way to extract more data from people. With Internet users becoming more mindful of their privacy, it’s getting harder for data brokers and ad companies to gather the information they sell advertising on. What better way to learn consumer preferences than to have consumers hand them over directly? No more inferring user interests from cookies and browsing data! By presenting a conversational interface, you slip past a user’s protective instincts and get a direct tap into their needs and wants. No wonder it’s a growth industry.

Bots and AI seem like a useful solution being applied to the wrong set of problems. There are great applications for these tools. If I could sit here, at my desk, and capture a quick idea or OmniFocus task by yelling out loud to my Virtual Assistant, that would be great. I mean, I can… but it’s not great. The chatbot paradigm has the advantage of being simpler than a GUI, and for a number of simple tasks, it should be a lot easier than one. But it won’t be for everything.

Anything that involves dealing with large amounts of data is going to be worse. If you’re looking up pizza places, you’re already going to be overwhelmed in some New York neighborhoods. Instead of finding better ways to handle that data, you’re likely to be defaulted to whatever chain pizza joint has a marketing deal with your bot provider. Hope you like Domino’s, is all I’m saying. Because of this, visual, tactile, and direct interfaces will never go away. Even the voice-controlled universe of Star Trek, where the computer never has trouble understanding you, has GUIs all over the place.

It’s possible a conversational UI would help in allowing computers to be better at understanding nuance. Historically, this is something computers have always sucked at. This is, of course, based on the assumption that the people creating the bots have an understanding of nuance as well. Alyx Baldwin wrote a great piece on The Hidden Dangers of AI for Queer and Trans People. It’s worth your time, but here’s a summary: computers are really good at putting things in boxes, and humans are really bad at being put into boxes. The people who program computers are also lazy and tend to only think of a handful of boxes. Unless the developers of AI, Deep Learning Algorithms, and Chatbots understand the variety of people using them, the AIs, Algorithms, and Chatbots won’t understand them either.

As for understanding, even in terms of language, that’s still up in the air. Voice recognition has come a long way, and on a good day, Siri can understand me despite a whole mess of background noise. Voice recognition still sucks, however, for anyone who speaks with a heavy accent or has a speech disorder. That’s because bots and voice recognition systems are typically trained on a corpus of speech that assumes a speaker of the standard language by default.

If a language or way of speaking doesn’t come up inside a Silicon Valley development house, you’re not going to see it supported in a voice recognition app or device. It is possible to do single-user training, much like you would with old-school voice-to-text apps like Dragon Dictate, but that’s a lot to ask of a user up front. Easier to just let ’em dangle, though that might change in time.

Unlike, say, virtual reality, I can see a lot of potential in the “bot” ecosystem, assuming we can work past all these stumbling blocks in the way. I’ve eyed an Amazon Echo for a while, though its utility would be diminished since I refuse to use any streaming music service. We’ll see what happens there after WWDC. I’m still uncomfortable letting Amazon have an always-on microphone in my apartment, if only because I can’t be sure it’s not parsing my conversations for ad metadata. I’d be more willing to trust an Apple device, even if it does less, because Apple is more in tune with me on privacy.

The dream of the AI/chatbot/virtual assistant world is one where everyone’s little earpieces, smartwatches, speaker dinguses, or whatever, seamlessly connect the entire world by voice, enabling an easier lifestyle for everyone. The reality is likely to be a whole bunch of miserable walled gardens full of microphones that can deliver us crappy pizza while making sure we get ads about debt consolidation every time we complain about the credit card bill after buying one. The former is preferable, but the latter is much more lucrative.

What We Lose In A Streaming Music World

You can call it perfect timing. A couple of weeks ago, I decided to try iTunes Match, the iTunes in the Cloud solution that doesn’t involve Apple Music and subscription streaming. After using it on two Macs and my iPad for a few days, I found that some of my favorite albums and songs were being matched with the wrong versions, yet again. In response, I rage quit and got a refund. I’d never been quite so upset with an Apple product in my life.

Then, I came across this sad piece by James Pinkstone, whose library was destroyed by an Apple Music and iTunes bug. A week after that, MacRumors reported that Apple was planning to end music downloads in two years. The rumor was squashed by an Apple spokesperson, and that was enough for many. Not me, though. “What if?” has been running through my mind ever since.

James Pinkstone wrote about how certain, unique versions of songs in his library were replaced by common versions. I ran into a similar issue with iTunes Match, which replaced German-language versions of Kraftwerk songs with their English counterparts, and a remastered version of an album with the original CD master. At least I had recourse to a backup and my local files in situ on my main Mac…

I’ve written before about why I choose to own music in a streaming world, but I never felt as though my music collection would be taken from me. Now, I’m not so sure. Maybe music downloads will go away, and maybe the future of iTunes will be cloud first, and local files somewhere below Connect, but above syncing ringtones.

But the looming threat of a streaming-only world opens up a bigger question about music ownership and the experience of music. I’ll be the first to admit that all forms of physical media are a huge pain in the rear. LPs, CDs, and cassettes are all fragile. Digital files are less fragile, but just as annoying to organize and sync. I keep an external hard drive tethered to my MacBook just to store my media library. I have more music than could fit on the largest of iPhones. Managing this is a hassle.

Yet, these forms of music media are real in a way that streaming music is not. There are digital files in my collection that are over a decade old. They’ve traveled with me across multiple computers, and multiple lives. This is meaningful in a way that streaming can never be. How do you connect with music that you simply rent, and could disappear from your library the moment you turn your back? A record label dispute could mean that your favorite artist’s music might be locked down to a single streaming service—such as the library of the dearly departed Prince.

It feels like the move to streaming music means we’re losing something. What happens to music that isn’t available on a streaming service? How will you explore the music of a surprisingly good opening band when they don’t exist in the library of Apple Music, Spotify, or TIDAL? So much of the music that has touched my soul can’t be streamed for love or money. I had to seek it out on my own, pawing through used music bins, or going to shows. When there’s an all-you-can-eat buffet for $9.99, what’s the incentive to order something that isn’t on the menu?

Maybe I’m becoming a fossil, but I can’t help but worry. Music is one of the most personal forms of art in the world. The way we relate to it cannot be isolated to files stored somewhere remotely. There’s the thrill of discovery, the emotional connection to lyrics, a voice, or even a single sound. By owning my music library, I make sure that I can maintain those relationships to the art. I should never have to worry that, if I double-click an album in iTunes, I will hear the wrong thing. If I do, I know that restoring order is within my grasp, not something that requires technical support calls and arcane rituals.

I may, eventually, be left behind by a streaming world. I don’t expect I’ll be alone. If the world goes on without me, and I am left to my digital files, and my collection of plastic and wax discs, I will be okay. But there’s a big difference between being left behind, and being abandoned, and it’s the latter that scares me to death. If you care about music in any tangible form, it should scare you as well.

Mindful Tech, Part 7.5: Our Data Trails, Ourselves Continued

Before you read this, take a moment, and check out Take This Lollipop. Fair warning, the site requires Flash, and it needs to connect with your Facebook account to work. It’s worth trying, at least once, and you can always disable its Facebook access when you’re done watching.

Go ahead. I’ll be here when you’re done.


Take This Lollipop is creepy, and a bit heavy-handed, yet it makes a point about who has access to your data. It also reveals the potential of our data to create narratives. In a world in which our data is constantly being used to create a specific narrative for us (e.g., you’re a White Male, age 18–35, with an interest in Consumer Technology and 80s Music, who is also $36,000 in debt, so here are Relevant Advertisements), we have the power to use our data trails to create narratives about ourselves as well.

Recently, I had the pleasure of seeing a talk by Lam Thuy Vo, a Data Journalist and Data Artist, at Facets 2016. She showed off a series of personal projects that used data to examine the very human lives of herself and others. These include Quantified Breakup, which examined her own data on movement, messaging, finances, and more, in the wake of her divorce. It’s a fascinating and different way of thinking about data, and a great contrast to the almost paranoiac view in the previous Mindful Tech piece. She also introduced us to Take This Lollipop.

Data trails are more than just what’s collected for advertising purposes. We collect data on ourselves, deliberately and not-so-deliberately, and in ways we don’t even think about. If you wear a fitness tracker, you’re collecting data on yourself deliberately. If you carry an iPhone, you have a record of everywhere you go, not so deliberately. Data trails encompass the thoughts we post to Twitter, the emails we send on Gmail, our browser histories, the music we listen to on Spotify, anything we do online, for better and for worse.

It’s becoming nearly impossible not to opt in to even the most egregious data collection. For example, when I was looking for work, I discovered pretty quickly that if I didn’t have a LinkedIn profile, as far as most employers were concerned, I didn’t exist. This may not be an issue if you work in manual labor, but if you want a desk job where you’re moving data around and you’re not on LinkedIn, you don’t exist. When all of your friends and family are on Facebook, and you’re not, how does that change your social landscape in the real world? And, of course, what happens if you’re blocked from one of these networks for whatever reason? [1]

There are no clear answers here. Lam brought up the idea of a Digital Bill of Rights that determines who has the right to our data, and when. Attitudes toward data privacy differ between the United States and other parts of the world. Europe has ideas like the Right to be Forgotten, but when the Internet is dominated by American corporations with American ideas of privacy and data retention, attempts to legislate our way out of this are doomed to be insufficient.

In the interim, the best option is to learn about your data, and to take ownership of it. Ownership of data matters. One thing that Lam pointed out in her talk is that it is possible to pull your data out of many of these services. Whether it’s human-readable is another matter. The best you can typically hope for are CSV files, which you can manipulate using the most humble of data analysis tools: Microsoft Excel and PivotTables. It’s then up to the viewer to create a cohesive narrative from that data: a story with a beginning, middle, and end.
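
To make that concrete, here’s a rough sketch in Python of what finding the story in an export can look like. The file name and column name are hypothetical, so substitute whatever your service actually hands you; it’s the same kind of summary you’d otherwise build with a PivotTable.

```python
# Rough sketch: turn an exported CSV of messages into a per-day count.
# "messages.csv" and its "timestamp" column are made-up stand-ins for
# whatever your service's export actually contains.
import pandas as pd

messages = pd.read_csv("messages.csv", parse_dates=["timestamp"])

# How many messages went out each day? This is the raw material for a
# narrative like Quantified Breakup's charts.
per_day = messages.set_index("timestamp").resample("D").size()

print(per_day.describe())               # summary stats across days
per_day.to_csv("messages_per_day.csv")  # open it in Excel, chart it, etc.
```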

A while back, I wrote about how I want to know what the services I use know about me.

“If I shouldn’t worry about the data I feed to Google, Facebook, and a whole holy host of similar companies and services out there, why not be more transparent about what data is being collected, how, and what they know about me? I want to see a simple, clean, human readable page on every service I feed my personal data to that tells me every last piece of information that they know…”

There’s an opening for services that can do this for people, though the privacy risks of aggregating all this data are significant. If a malicious actor gets into a service that houses the aggregation of all our personal data, it’s not hard to see the potential for abuse. It would be a revolution in doxing alone. Instead, I’d like to see tools that exist in the user space, off the cloud, that let us analyze and identify the stories in our data, the better to know what we’re making, what we’re leaking, and what we should be deleting.

And even deleting our data is problematic. The database design of many websites is such that it is easier to mark a record as inactive than to remove it entirely. This is one part lazy design, and one part technical limitation. How can we be sure that the data we’ve deleted is truly gone, when we want it gone? What happens when the data trails we thought were lost when a service dies get bought by another company? The truth is, we don’t know. And that makes thinking about it all the more essential.
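
For the curious, here’s a minimal, purely illustrative sketch of what “marked as inactive” means in practice. The table and column names are invented, but the pattern, usually called a soft delete, is common: the app hides the row, and the data stays put.

```python
# Illustration only: a "soft delete" flips a flag instead of removing the row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT, deleted INTEGER DEFAULT 0)")
conn.execute("INSERT INTO posts (body) VALUES ('an embarrassing old post')")

# What "delete" often means on the backend: the row survives, hidden from the UI.
conn.execute("UPDATE posts SET deleted = 1 WHERE id = 1")

visible = conn.execute("SELECT body FROM posts WHERE deleted = 0").fetchall()
lurking = conn.execute("SELECT body FROM posts WHERE id = 1").fetchall()
print(visible)  # [] -- gone, as far as the interface is concerned
print(lurking)  # [('an embarrassing old post',)] -- still in the database

# A hard delete would actually remove it:
conn.execute("DELETE FROM posts WHERE id = 1")
```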


  1. This is huge. Facebook’s “Real Name” policy has had a chilling effect on transgender people, or anyone who needs a pseudonym to avoid harassment and abuse, locking them away from digital support networks, family, and friends.  ↩