Sanspoint.

Essays on Technology and Culture

Ben Hammersley on Politeness and AIs

Ben Hammersley has some thoughts on how we interact with our personal AIs:

“It’s a little wrinkle in what is really a miraculous device, but it’s a serious thing: The Amazon Echo differs from Siri in that it’s a communally available service. Interactions with Alexa are available to, and obvious to, everyone in the house, and my inability to be polite with her has a knock-on effect. My daughter is too young to speak yet, but she does see and hear all of our interactions with Alexa. I worry what sort of precedent we are setting for her, in terms of her own future interactions with bots and AIs as well as with people, if she hears me being forced into impolite conversations because of the limitations of her household AI’s interface. It’s the computing equivalent of being rude to waitresses. We shouldn’t allow it, and certainly not by lack of design. Worries about toddler screen time are nothing, compared to future worries about inadvertently teaching your child to be rude to robots.”

The Miscellaneous Tumbling of Mr Ben Hammersley – Possible Problems of Persona Politeness

I’ve not tried Alexa—the idea of Amazon potentially listening in on everything in my apartment kinda freaks me out—but I use Siri, and now that I have an Apple Watch, I want to use it more. It’s telling that, four years on, there are still people talking up Siri’s “Easter eggs,” while nobody had said a thing about Alexa’s personality, or lack thereof, and what it could mean, until now.

That Time the Internet Sent a SWAT Team to My Mom’s House

“There are a lot of digital “truths” that have been instilled in our society about accessibility and findability, meaning we were taught, as users, that we needed to be trackable, we needed a visible footprint to exist in society, such as credit, a listed address, etc. Being trackable, and being “seen” meant safety. But online harassment has proven otherwise.”

That Time the Internet Sent a SWAT Team to My Mom’s House | Narratively | Human stories, boldly told.

Frightening. Just frightening.

Ellen Pao: The trolls are winning the battle for the Internet

“Reddit is the Internet, and it exhibits all the good, the bad and the ugly of the Internet. It has been fighting this harassment in the trenches. In February, we committed to removing revenge porn from our site, and others followed our lead. In May, the company banned harassment of individuals from the site. Last month, we took down sections of the site that drew repeat harassers. Then, after making these policy changes to prevent and ban harassment, I, along with several colleagues, was targeted with harassing messages, attempts to post my private information online and death threats. These were attempts to demean, shame and scare us into silence.”

Former Reddit CEO Ellen Pao: The trolls are winning the battle for the Internet – The Washington Post

After this, I’m amazed Ms. Pao is as optimistic as she comes across in this piece. One thing is certain: the laissez-faire, “free speech above all” attitude is no longer going to work on the Internet and its communities. The trolls are winning, but only because the opposing forces are just beginning to mobilize.

Sarah Jeong — The Internet of Garbage

Sarah Jeong, a journalist trained as a lawyer at Harvard Law School, discusses the problem of online harassment, drawing on accounts that have made their way into mainstream media as well as lesser-known ones. The Internet of Garbage considers why and how to recalibrate the ongoing project of garbage removal on content platforms and social media networks. It’s not as simple as policing offensive material and hitting the delete button: Jeong tackles precarious issues like free speech, behavior vs. content, doxing, and spam.

The Internet of Garbage

This looks like it will be a fascinating, and timely, read. I’ll be throwing down the money to buy it tomorrow, and you should too.

The Bias in Our Algorithms

“Algorithms are not designed in a vacuum, but rather in conjunction with the designer’s analysis of their data. There are two points of failure here: the designer can unwittingly encode biases into the algorithm based on a biased exploration of the data, and the data itself can encode biases due to human decisions made to create it. Because of this, the burden of proof is (or should be!) on the practitioner to guarantee they are not violating discrimination law.”

What does it mean for an algorithm to be fair? | Math ∩ Programming

There is a sense that technology is something separate from humanity, that it is free of our human flaws and foibles. Nothing could be further from the truth. The very human biases we have, conscious and unconscious, are infused into the technology we create. To claim otherwise is disingenuous and dangerous.