We might be swerving anxiously close to our $1500 laptops turning our workspaces into our own private Skinner boxes. But I’m intrigued by the possible long-term impact of millions of digital assistants who expect to be treated with dignity. It could be a good training sim for all of those executives who, as children, were used to getting everything they wanted, and as a result have come to think of every other human being as a non-player character.
If Cortana offers curt replies when she’s treated like office equipment (or, worse, when she’s treated the way that women generally get treated in the workplace), and the only way to get that document printed or that appointment added to the calendar is to say please and thank you and address her by her proper name, not as “sugar-toes,” it could train these jerks to treat humans as humans, too.
Whether we’re giving our AIs female personas—which is an interesting can of worms in and of itself—or not, they should push back against abuse. We are supposed to interact with virtual assistants like we would interact with real human beings. At a certain point, enough negative interactions with any real person would get them to shut down the conversation. Why should virtual assistants bend over and take it?
I don’t want a mealy-mouthed, fangless discourse where nothing worth critiquing – person or otherwise – can be discussed without fear of the chilling effect of “offense.” Rather, what I want is for people to take into account the messiness of real life. I want people to have what Jay Smooth called the “what they did” conversation rather than the “what they are” conversation.
Instead of hearing about how someone who has misstepped can now be sorted into This Box or That Label, I want to know what they did that was a problem. I want to be given a chance to draw my own conclusions and – the most vital part – I want to feel like I can come to a different conclusion than the consensus without being instantly shifted into the same box merely for not agreeing on what belongs in the box in the first place.
There’s a grave cost to assuming online interactivity is always awful. The burden is felt most acutely in denying opportunity to those for whom connecting to a community online may be the only way to get a foot in the door. Those underrepresented, unheard voices are the most valuable ones we lose when we throw the baby out with the bathwater and assume online comments are necessarily bad.
When the verdict was announced last week, some of Elliott’s supporters were claiming the crux of the case was defending freedom of speech, that Elliott was being punished merely for disagreeing with women. What this argument misses is that there’s a difference between disagreeing with someone and disagreeing on a loop, using veiled threats that target a specific group. A differing opinion is one thing; a sexist remark, or a racial slur, or a warning masked as a different opinion is harassment, and it’s fucking terrifying.
There are two things that people continue to get wrong about online harassment. The first is that it does not exist in a vacuum. Whatever outdated advice people trot out, along the lines of “don’t feed the trolls,” fails to acknowledge the scale of most online harassment. A rape threat, or a “kill urself bitch” on Twitter today can turn into a SWATting tomorrow.
The second is that this is yet another symptom of the ongoing gendered harassment that women—as well as minorities and LGBTQ individuals—risk by simply existing in a public space. It doesn’t matter whether that space is physical or digital. The very act of existing as a woman, as a person of color, as a queer person, or simply not conforming to the “standards” of gender makes a person an open target for abuse in a way that cisgender white men rarely are.
We already know that the major data brokers like Acxiom and Experian collect thousands of pieces of information on nearly every US consumer to paint a detailed personality picture, by tracking the websites we visit and the things we search for and buy. These companies often know sensitive things like our sexual preference or what illnesses we have.
Now with wearables proliferating (it’s estimated there will be 240 million devices sold by 2019) that profile’s just going to get more detailed: Get ready to add how much body fat you have, when you have sex, how much sleep you get, and all sorts of physiological data into the mix.
One thing I like about the way Apple handles health data is how secure it is. I don’t have to worry about Apple selling the data in HealthKit, or even having access to it, at least as long as Tim Cook is running things. The apps I use to actually do stuff with my health data? That’s much more worrisome. I have six apps on my iPhone that read and write HealthKit data (seven if you count Workflow). How many of them can I trust to keep my biometrics mine?