Menu

Sanspoint.

Essays on Technology and Culture

Thoughts on the Coming Chatbot Revolution

Well, folks, the tech press and VC establishment have shaken their Magic Eight-Ball and determined the Next Big Thing is… “Bots!”

Wait. Is this right? Grace, can you check on that? Really?! Okay, I’ll roll with it.

Yes, it’s bots. Specifically chat bots and AI virtual assistants like Siri and Alexa. Everybody’s getting on board with the bot revolution, and it’s going to revolutionize everything. Get your VC investments primed and ready for all the bot startups.

The rise of the “bot” as the Next Big Thing from the Valley utterly mystified me until recently. What are the advantages of a conversational interface over an explicit, directly manipulable one? There’s the hands-free aspect, something I’ve appreciated with Siri, even more on my Apple Watch, but that only works with the voice assistants. Chatbots? Not so much, though we have come a long way since the days of “YOU CAN’T GET YE FLASK”.

Then it hit me, in that way so many things do. Chatbots, especially when they have a playful personality, are a perfect way to extract more data from people. With Internet users becoming more mindful of their privacy, it’s getting harder for the data brokers and ad companies to get more info to sell advertisements on. What better way to learn consumer preferences than by having them give it to you directly? No more inferring user interests from cookies and browsing data! By presenting a conversational interface, you bypass the defenses of a user’s protectiveness, and get a direct tap into their needs and wants. No wonder it’s a growth industry.

Bots and AI seem like a useful solution being applied to the wrong set of problems. There are great applications for these tools. If I could sit here, at my desk, and capture a quick idea or OmniFocus task just by yelling it out loud to my Virtual Assistant, that would be great. I mean, I can… but it’s not great. The chatbot paradigm has the advantage of being simpler than a GUI, and for a number of simpler tasks, it should be a lot easier than one. But it won’t be for everything.

Anything that involves dealing with large amounts of data is going to be worse. If you’re looking up pizza places, you’re already going to be overwhelmed in some New York neighborhoods. Instead of finding better ways to handle that data, you’re likely to just be defaulted to whatever chain pizza joint has a marketing deal with your bot provider. Hope you like Domino’s, is all I’m saying. Because of this, visual, tactile, and direct interfaces will never go away. Even the voice-controlled universe of Star Trek, where the computer never has trouble understanding you, has GUIs all over the place.

It’s possible a conversational UI would help in allowing computers to be better at understanding nuance. This is something computers have always sucked at. It is, of course, based on the assumption that the people creating the bots have an understanding of nuance as well. Alyx Baldwin wrote a great piece on The Hidden Dangers of AI for Queer and Trans People. It’s worth your time, but here’s a summary: computers are really good at putting things in boxes, and humans are really bad at being put into boxes. The people who program computers are also lazy and tend to only think of a handful of boxes. Unless the developers of AI, Deep Learning Algorithms, and Chatbots understand the variety of people using them, the AIs, Algorithms, and Chatbots won’t understand them either.

As for understanding, even in terms of language, that’s still up in the air. Voice recognition has come a long way, and on a good day, Siri can understand me despite a whole mess of background noise. Voice recognition still sucks, however, for anyone who speaks with a heavy accent or has a speech disorder, since bots and voice recognition systems are often trained on a corpus of speech that assumes a speaker of the standard language by default.

If you’re not going to come across that language or method of speech in a Silicon Valley development house, you’re not going to see it supported in a voice recognition app or device. It is possible to do single-user training, much like you would with old school voice-to-text apps like Dragon Dictate, but that’s a lot to ask of a user up-front. Easier to just let ’em dangle, though perhaps that might change in time.

Unlike, say, virtual reality, I can see a lot of potential in the “bot” ecosystem, assuming we can work past all these stumbling blocks in the way. I’ve eyed an Amazon Echo for a while, though its utility would be diminished since I refuse to use any streaming music service. We’ll see what happens after WWDC, there. I’m still uncomfortable letting Amazon have an always-on microphone in my apartment, if only because I can’t be sure it’s not going to be parsing my conversations for ad metadata. I’d be more willing to trust an Apple device, even if it does less, because Apple is more in tune with me on privacy.

The dream of the AI/chatbot/virtual assistant world is one where everyone’s little earpieces, smartwatches, speaker dinguses, or whatever, seamlessly connect the entire world by voice, enabling an easier lifestyle for everyone. The reality is likely to be a whole bunch of miserable walled gardens full of microphones that can deliver us crappy pizza while making sure we get ads about debt consolidation every time we complain about the credit card bill after buying one. The former is preferable, but the latter is much more lucrative.

What We Lose In A Streaming Music World

You can call it perfect timing. A couple of weeks ago, I decided to try iTunes Match, the iTunes in the Cloud solution that doesn’t involve Apple Music and subscription streaming. After using it on two Macs and my iPad for a few days, I found that some of my favorite albums and songs were being matched with the wrong versions, yet again. In response, I rage quit and got a refund. I’d never been quite so upset with an Apple product in my life.

Then, I came across this sad piece by James Pinkstone whose library was destroyed by an Apple Music and iTunes bug. A week after that, MacRumors reported that Apple was planning to end music downloads in two years. The rumor was squashed by an Apple spokesperson, and that was enough for many. Not me, though. “What if?” has been running through my mind since.

James Pinkstone wrote about how certain, unique versions of songs in his library were replaced by common versions. I ran into a similar issue with iTunes Match, which replaced German-language versions of Kraftwerk songs with their English counterparts, or a remastered version of an album with the original CD master. At least I had the recourse of a backup and my local files in situ on my main Mac…

I’ve written before about why I choose to own music in a streaming world, but I never felt as though my music collection would be taken from me. Now, I’m not so sure. Maybe music downloads will go away, and maybe the future of iTunes will be cloud first, and local files somewhere below Connect, but above syncing ringtones.

But the looming threat of a streaming only world opens up a bigger question about music ownership and the experience of music. I’ll be the first to admit that all forms of physical media are a huge pain in the rear. LPs, CDs, and cassettes are all fragile. Digital files are less fragile, but just as annoying to organize and sync. I keep an external hard drive tethered to my MacBook just to store my media library. I have more music than could fit on the largest of iPhones. Managing this is a hassle.

Yet, these forms of music media are real in a way that streaming music is not. There are digital files in my collection that are over a decade old. They’ve traveled with me across multiple computers, and multiple lives. This is meaningful in a way that streaming can never be. How do you connect with music that you simply rent, and could disappear from your library the moment you turn your back? A record label dispute could mean that your favorite artist’s music might be locked down to a single streaming service—such as the library of the dearly departed Prince.

It feels like the move to streaming music means we’re losing something. What happens to music that isn’t available on a streaming service? How will you explore the music of a surprisingly good opening band when they don’t exist in the library of Apple Music, Spotify, or TIDAL? So much music that has touched my soul, you can’t stream it for love or money. I had to seek it out on my own, pawing through used music bins, or going to shows. When there’s an all-you-can-eat buffet for $9.99, what’s the incentive to order something that isn’t on the menu?

Maybe I’m becoming a fossil, but I can’t help but worry. Music is one of the most personal forms of art in the world. The way we relate to it cannot be isolated to files stored somewhere remotely. There’s the thrill of discovery, the emotional connection to lyrics, a voice, or even a single sound. By owning my music library, I make sure that I can maintain those relationships to the art. I should never have to worry that, if I double-click an album in iTunes, I will hear the wrong thing. If I do, I know that restoring order is within my grasp, not something that requires technical support calls and arcane rituals.

I may, eventually, be left behind by a streaming world. I don’t expect I’ll be alone. If the world goes on without me, and I am left to my digital files, and my collection of plastic and wax discs, I will be okay. But there’s a big difference between being left behind, and being abandoned, and it’s the latter that scares me to death. If you care about music in any tangible form, it should scare you as well.

Mindful Tech, Part 7.5: Our Data Trails, Ourselves Continued

Before you read this, take a moment, and check out Take This Lollipop. Fair warning, the site requires Flash, and it needs to connect with your Facebook account to work. It’s worth trying, at least once, and you can always disable its Facebook access when you’re done watching.

Go ahead. I’ll be here when you’re done.


Take This Lollipop is creepy, and a bit heavy-handed, yet it makes a point about who has access to your data. It also reveals the potential of our data to create narratives. In a world in which our data is constantly being used to create a specific narrative for us (you’re a White Male, age 18–35, with an interest in Consumer Technology and 80s Music, who is also $36,000 in debt, so here are Relevant Advertisements), we have the power to use our data trails to create narratives about ourselves as well.

Recently, I had the pleasure of seeing a talk by Lam Thuy Vo, a Data Journalist and Data Artist, at Facets 2016. She showed off a series of personal projects that used data to examine the very human lives of herself and others. These include Quantified Breakup, which examined her own data on movement, messaging, finances, and more, in the wake of her divorce. It’s a fascinating and different way of thinking about data, and a great contrast to the almost paranoiac view in the previous Mindful Tech piece. She’s also the one who introduced us to Take This Lollipop.

Data trails are more than just what’s collected for advertising purposes. We collect data on ourselves, deliberately and not-so-deliberately, and in ways we don’t even think about. If you wear a fitness tracker, you’re collecting data on yourself deliberately. If you carry an iPhone, you have a record of everywhere you go, not so deliberately. Data trails encompass the thoughts we post to Twitter, the emails we send on Gmail, our browser histories, the music we listen to on Spotify, anything we do online, for better and for worse.

It’s becoming nearly impossible to opt out of even some of the most egregious data collection. For example, when I was looking for work, I discovered pretty quickly that if I didn’t have a LinkedIn profile, as far as most employers were concerned, I didn’t exist. This may not be an issue in manual labor fields, but if you want a desk job where you’re moving data around, not being on LinkedIn means not existing. When all of your friends and family are on Facebook, and you’re not, how does that change your social landscape in the real world? And, of course, what happens if you’re blocked from one of these networks for whatever reason? [1]

There are no clear answers here. Lam brought up the idea of a Digital Bill of Rights that determines who has the right to our data and when. Attitudes toward data privacy differ between the United States and other parts of the world. You run into ideas like the Right to be Forgotten in Europe, but when the Internet is dominated by American corporations with American ideas of privacy and data retention, attempts to legislate our way out of this are doomed to be insufficient.

In the interim, the best option is to learn about your data, and to take ownership of it. Ownership of data matters. One thing that Lam pointed out in her talk is that it is possible to pull your data out of many of these services. Whether it’s human-readable is another matter. The best you can typically hope for are CSV files, which you can manipulate using the most humble of data analysis tools: Microsoft Excel and PivotTables. It’s then up to the viewer to create a cohesive narrative from that data: a story with a beginning, middle, and end.
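As a sketch of what that analysis looks like in practice, here’s a minimal Python take on the same idea, assuming a hypothetical message export with a `timestamp` column (real service exports will have their own column names); a PivotTable over the same CSV in Excel would produce the same monthly counts.

```python
import csv
from collections import Counter
from io import StringIO

def messages_per_month(csv_text):
    """Bucket rows of an exported CSV by month, using an ISO 'timestamp' column."""
    counts = Counter()
    for row in csv.DictReader(StringIO(csv_text)):
        counts[row["timestamp"][:7]] += 1  # "2016-05-02T21:10" -> "2016-05"
    return counts

# A toy export standing in for the real thing.
sample = (
    "timestamp,text\n"
    "2016-04-01T08:00,hi\n"
    "2016-04-15T12:00,hello\n"
    "2016-05-02T21:10,hey\n"
)
print(messages_per_month(sample))  # Counter({'2016-04': 2, '2016-05': 1})
```

Counts per month are the crudest possible summary, but even that is enough to see the shape of a story (a spike in late-night messages, a silence after a breakup) in your own data.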

A while back, I wrote about how I want to know what the services I use know about me.

“If I shouldn’t worry about the data I feed to Google, Facebook, and a whole holy host of similar companies and services out there, why not be more transparent about what data is being collected, how, and what they know about me? I want to see a simple, clean, human readable page on every service I feed my personal data to that tells me every last piece of information that they know…”

There’s an opening for services that can do this for people, though the privacy risks of aggregating all this data together are significant. If a malicious actor gets into a service that houses the aggregation of all of our personal data, it’s not hard to see the potential for abuse. It would be a revolution in doxing alone. Instead, I’d like to see tools that exist in the user space, off the cloud, that let us analyze and identify the stories in our data. The better to know what we’re making, what we’re leaking, and what we should be deleting.

And even deleting our data is problematic. The database design of many websites is such that it is easier to mark a record as inactive than to remove it entirely. This is one part lazy design, and one part technical limitation. How can we be sure that the data we’ve deleted is truly gone, when we want it gone? What happens when the data trails we thought were lost when a service dies get bought by another company? The truth is, we don’t know. And that makes thinking about it all the more essential.
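That “mark it inactive” approach is usually called a soft delete. A minimal sketch of the pattern, using an in-memory SQLite table with hypothetical column names: the application’s own queries hide the row, but the data never actually leaves the database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT, deleted INTEGER DEFAULT 0)"
)
conn.execute("INSERT INTO accounts (email) VALUES ('user@example.com')")

# "Deleting" the account just flips a flag; the row and its data stay put.
conn.execute("UPDATE accounts SET deleted = 1 WHERE id = 1")

# Every query the app runs filters on the flag, so the user sees the account as gone...
visible = conn.execute("SELECT COUNT(*) FROM accounts WHERE deleted = 0").fetchone()[0]
# ...but the record is still sitting there for anyone with direct database access.
actual = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
print(visible, actual)  # 0 1
```

From the user’s side of the screen, a soft delete and a real `DELETE` are indistinguishable, which is exactly the problem.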


  1. This is huge. Facebook’s “Real Name” policy has had a chilling effect on transgender people, or anyone who needs a pseudonym to avoid harassment and abuse, locking them away from digital support networks, family, and friends.  ↩

Mindful Tech, Part 7: Our Data Trails, Ourselves

How many accounts are you signed up for across the web? In my 1Password vault, there are over 300 individual logins, and I’m almost certain that I haven’t logged in to more than half of them in at least a year. Maybe I deleted the account, wherever that’s an option, and left it in 1Password. More likely, it’s just an account I can’t be free of. It’s bothersome how many websites that require a login offer no way to delete it.

All of these unused, idle accounts present a risk. They’re part of a data trail that contains Heaven knows what on me. I can hazard a guess, though: contact information, work history [1], health data, financial data, and who knows what else they’re correlating any of it with. Even with good password hygiene—I do use 1Password, after all—a data breach could be devastating. I know that I have been pwned three times. At least, those are the ones I know about…
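Checking whether a password has shown up in a breach doesn’t have to mean handing the password over, either. Have I Been Pwned’s Pwned Passwords API uses a k-anonymity scheme: you hash the password locally, send only the first five hex characters of the SHA-1 digest, and compare the returned suffixes on your own machine. A sketch of the client-side comparison, with the network response faked (including a made-up count) so nothing leaves the machine:

```python
import hashlib

def breach_count(password, range_response):
    """Given the API's response for our five-character hash prefix (lines of
    SUFFIX:COUNT), return how many breaches this password appears in."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    suffix = digest[5:]  # only digest[:5] would ever be sent over the wire
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# A faked response for the prefix "5BAA6" (the SHA-1 of "password" starts with
# it); the real API returns hundreds of suffix lines per prefix.
sample_response = (
    "1E4C9B93F3F0682250B6CF8331B7EE68FD8:3861493\n"
    "0018A45C4D1DEF81644B54AB7F969B88D65:10\n"
)
print(breach_count("password", sample_response))
```

The service never learns which of the hundreds of hashes behind that prefix you were asking about, which is about as privacy-respecting as a lookup service can get.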

We often don’t think about the trails we leave behind as we traverse the web, except when the latest fiasco brews over what Facebook, Google, et al. are doing with our data this time. Then there’s a little righteous indignation, maybe an update to our ad blockers, and then we go back to sleep until the next outrage. Sure, we think we’re immune from whatever invasive technological development is being used to spy on us, but when was the last time you thought about the data you willingly gave up?

Getting a grasp on the data we spill out, let alone what it’s being used for, is difficult by design. It’s part of the special sauce that makes the companies money. Google, to its credit, has a page where you can see the profile it’s built on you. And, you can at least opt-out of the worst of it. Facebook, not so much. And all of this comes before the other services that track you, online and off. Companies like Acxiom, Experian, and Equifax. These don’t exist in a vacuum. Opting out of these services, too, is possible, if difficult.

Let’s bring it back to digital data trails. Jacoby Young has a small series of interviews—including one with me—where people audit how much they use the Big Five tech companies. Taking his interview gave me a chance to take stock of where I am on a process I started a year and a half ago to wean myself away from services I can no longer trust with my data. Trust, for me, is a matter of understanding what I’m getting out of the data I give up in exchange for the product. In the case of Google and Facebook, the two services I want to use least, I’m struggling. Yet, I’m still tied to both platforms for multiple reasons.

Even the services I trust can be porous. I use the Health app on my iPhone as a central repository for data on my physical body. Apple’s implementation of Health on the iPhone is extremely secure: encrypted and inaccessible to Apple in any form. Until recently, Apple didn’t even include Health data in encrypted iCloud backups, which took security a little too far. In any case, I’m happy to trust Apple with my health data. The apps that feed into it, however… I can assign and remove permissions for apps to read and write my health data, but I can’t be certain what they’re doing with it, let alone what they’re doing with the data I feed into the apps themselves. Who knows where all that is going?

That’s not to say we shouldn’t surrender some data. Not only is it inevitable, but when we know both what we’re giving up and what we’re getting in return, it seems fair. As long as someone is happy with the terms of that transaction, I can’t tell them to stop. Besides, the only alternative, short of complete disconnection, is to invest time, money, and work into building technologies to keep your data under your control. It’s possible, but it’s not easy, and certainly beyond the reach of the average person.

So, now we’re forced to either create a calculus of trust for those we share our data with, or just give up and let our data fall where it may. Easier just not to think about it. Besides, what does it matter? It’s only data. But that data is increasingly personal, increasingly specific, and increasingly identifiable as us. When Google knows more about you than you know about yourself, the potential for abuse is massive.

It’s important that we see the value of what we’re giving up, and decide for ourselves what we are comfortable with. Take stock of who you’ve shared your data with, what accounts you have and no longer use. See what apps are connected with your social media accounts and what data trails you’re leaving behind you. Without knowing how much you’re leaving behind, how can you possibly be comfortable with the situation? Knowledge is power. If only the companies we’re giving this data up to would be willing to share it with us.


  1. Job application sites are often the worst when it comes to not being able to delete your account.  ↩

The Right Device for the Right Task

An early version of this post appeared in Issue 7 of the Sanspoint supporter Newsletter. To subscribe, visit the support page, and subscribe for $3 a month, or make a donation of any amount.

I’ve had a couple of debates, mostly in a private Slack, but I’m coming around to CGP Grey’s idea of assigning his various devices to be (mostly) single-tasking machines. He explains how he uses his various iPads in Episode #26 of Cortex, with additional info on his iPad Pro writing setup in a blog post.

Note that I’m coming around to the idea of different devices for different tasks, not Grey’s specific implementation. He’ll be the first to admit that the way he works isn’t for everyone, and not everyone can afford three iPads. (To Grey’s credit, his iPad mini is an older model, not one he bought specifically as a makeshift Kindle.) Assigning specific functions to our devices has merit in my mind because it is so easy to get overwhelmed by the possibilities of our devices. If you’re an inveterate procrastinator who is likely to dive into an Internet K-Hole, there’s appeal in having a device that doesn’t let you do that.

I’m not about to go all out and start completely disabling features on my iPhone, though the idea appeals to me. [1] Instead, CGP’s discussion has me thinking about ways I can start being more focused in the use of my devices. I’m asking myself what role each device serves in my life, and how I can maximize what each is good at versus what I need from my devices.

This came into focus when I got a second Mac for my new day job. Now, I have a device that is specifically for a certain context in my life: this is my Work Computer for my Day Job. When I am on this computer, I am (ostensibly) at work. Why can’t I do the same with my other devices?

About a week ago, I snagged a Logitech Type+ Keyboard Case for my iPad Air 2 for really cheap—like $30 cheap. This makes it a lot easier for me to use the iPad as a dedicated writing device. I’m writing this particular newsletter on my Mac, but I’ve done a fair amount of writing with the iPad and Type+ lately, even if it hasn’t been published yet. I’m very happy with the choice. iOS may have multitasking now, but it’s still harder for me to switch modes on the iPad and dive into a distraction rathole.

I still need to figure out what roles are best for my iPhone and my home Mac. Plus, I’m thinking about my Apple Watch and how to streamline that for what it’s best at, too. It’s easy to look at CGP’s setup and go, “Hey, jerkface, not all of us can drop a bunch of money on iPads and mechanical keyboards,” but that misses the point. It’s not about buying more gear, it’s about optimizing what you have so it works best for you.

What “works best” means is a personal thing. If that means turning off Safari and all the other apps that might keep you from doing what your job is, then fine. If it’s streamlining down to a pair of devices that can do everything, then good for you. Instead of getting lost in the details of one person’s specific implementation, consider the ways you can apply the idea to your own digital life.


  1. There’s a good follow-up on that link I didn’t know about until writing this piece.  ↩