When the verdict was announced last week, some of Elliott’s supporters were claiming the crux of the case was defending freedom of speech, and that Elliott was being punished merely for disagreeing with women. What this argument misses is that there’s a difference between disagreeing with someone and disagreeing on a loop, using veiled threats that target a specific group. A differing opinion is one thing; a sexist remark, a racial slur, or a warning masked as a differing opinion is harassment, and it’s fucking terrifying.
There are two things that people continue to get wrong about online harassment. The first is that it does not exist in a vacuum. Outdated advice along the lines of “don’t feed the trolls” fails to acknowledge the scale of most online harassment. A rape threat, or a “kill urself bitch” on Twitter today, can turn into a SWATting tomorrow.
The second is that this is yet another symptom of the ongoing gendered harassment that women—as well as minorities and LGBTQ individuals—risk by simply existing in a public space. It doesn’t matter whether that space is physical or digital. The very act of existing as a woman, as a person of color, as a queer person, or simply of not conforming to the “standards” of gender makes a person an open target for abuse in a way that cisgender white men rarely are.
Like many nerds with blogs, I have a very large Instapaper queue. It got to the point where, at the start of the new year, I gave up, declared bankruptcy, and deleted anything older than a week. This included, of course, an article on “How to Rebuild an Attention Span” that sat, partially read, for a good three months. You can write your own joke.
Then, I came across M.G. Siegler’s piece on using text-to-speech to get through his reading backlog. There was some initial skepticism on my part. Sure, I listen to a lot of podcasts, and I try to listen to audiobooks, but text-to-speech to get through my Instapaper backlog? Surely this is madness. But, dang it, M.G. was convincing enough that I had to give it a try. So, one ride home on the subway, I kept my headphones in, and had Instapaper read back a couple of articles to me.
Well, it’s been two weeks, and right now the oldest item in my Instapaper queue is… three weeks old. But that’s a video, and I should probably turn it into a podcast instead. Aside from that, and an article on a new way to do responsive HTML email that isn’t really suited to audio, I’ve maintained a steady turnover of Instapaper articles. But text-to-speech on the iPhone isn’t limited to read-later apps. M.G. shows how to turn on speech as an accessibility option, so your phone can read anything to you.
I’ve taken to having Alex, an optional text-to-speech voice, read my email newsletters on the ride into work in the morning. I love reading Dan Lewis’s Now I Know, and I love having it read to me even more. The text-to-speech feature isn’t dependent on an Internet connection, so I can listen underground with no cell service, leaving me free to keep my phone in my pocket, or more often, play a few rounds of Threes. There’s probably more I could do with text-to-speech—I haven’t used it with the Kindle app or iBooks, but some testing as I write this shows that it does work, at least with some books. I’m sure it would be useful in plenty of other apps too.
Oh, sure, it’s not as good as an actual recording with a human narrator. Alex—and the default Siri and Samantha voices—often stumble on names and get confused by homophones. Their cadence is a little stilted and weird, and the simulated breaths Alex takes before starting a new sentence are a bit of an uncanny-valley-like audio skeuomorph that I’m still not used to. On the usability side, the two-finger swipe to start speaking often takes a couple of tries to get right. Also, Instapaper’s speech playlist feature doesn’t play well with its Apple Watch app, which is probably among the most First World of First World Problems.
Quibbles aside, it’s been a great way to keep my queue of web reading manageable. If you’re suffering from the dreaded Instapaper Overload Syndrome, consider giving text-to-speech a try for powering through it. Honestly, I’m amazed the idea didn’t come up sooner. To M.G. Siegler, I give my thanks, and to Alex, this could be the start of a beautiful friendship. Well, maybe not a friendship, but a beautiful reader-audience relationship. I hope he figures out homophones better in iOS 10, though.
We already know that the major data brokers like Acxiom and Experian collect thousands of pieces of information on nearly every US consumer to paint a detailed personality picture, by tracking the websites we visit and the things we search for and buy. These companies often know sensitive things like our sexual preference or what illnesses we have.
Now with wearables proliferating (it’s estimated there will be 240 million devices sold by 2019) that profile’s just going to get more detailed: Get ready to add how much body fat you have, when you have sex, how much sleep you get, and all sorts of physiological data into the mix.
One thing I like about the way Apple handles health data is how secure it is. I don’t have to worry about Apple selling the data in HealthKit, or even having access to it, at least as long as Tim Cook is running things. The apps I use to actually do stuff with my health data? That’s much more worrisome. I have six apps on my iPhone that read and write HealthKit data (seven if you count Workflow). How many of them can I trust to keep my biometrics mine?
Every day on social media, we encounter situations where what we want to convey to others is misinterpreted, misheard, or otherwise responded to in an unexpected way. It’s easy to deflect blame onto the other parties when this happens, perhaps by saying they misunderstood your intention. But it’s important to make an effort to communicate in ways that ensure you don’t come off as, well, an asshole.
The iPad has always struggled to find a place in my computing life. Not that I haven’t wanted to use it, but it’s historically been a much more limited device than my laptop, while lacking the portability of my iPhone. It didn’t help that I opted for an iPad 3, a device compromised by the demands of its Retina display. While it was fine for a couple of OS versions, later updates only dragged the performance down. As for the fancy new features? Forget it.
It did find a niche as the device where I read my RSS feeds in the morning, read comic books at night, and occasionally bang out words on the go. Not that I did much of the latter. The iPad 3 lacked the portability of the previous models—it felt heavy in my bag, so I mostly left it at home on the dining table. In some ways, it felt like I’d spent $500 on an entertainment device that I could occasionally use for “real” work if I wanted to put up with the limitations of the hardware and software.
About a month in with the iPad Air 2, however, I’m singing a very different tune. Where the iPad 3 was fun to use, it never made me want to use it more, even before OS updates caused it to slow down. The Air 2 is fast and flexible enough that it not only does a huge chunk of what I can do on my Mac, but does it well enough that I want to use it more. I understand how Myke Hurley feels about his iPad Pro now. Doing some stuff on the iPad is slower, but it feels… better somehow.
Case in point: I’ve wanted to see if I could use the iPad for my web programming project. Matt Birchler posted a guide for his web development workflow, but it didn’t fit my needs. I’m developing in JavaScript, and storing code on GitHub, so Coda was out. I’d found a couple Git clients for iOS, so I could access my code on the go. It was just a question of editing it and testing it. After listening to the second episode of Canvas, a podcast on iOS productivity, I found a way.
It turns out the Git client Working Copy works as an iOS Document Provider. So, I can use a programming text editor like Textastic to do the editing, and the changes propagate back into Working Copy, where I can test them in its integrated browser. It’s not perfect: Textastic hasn’t been updated for split-screen multitasking yet, but it works well enough that I was able to push some bug fixes back into my GitHub repository right from my iPad. That’s incredible.
I don’t expect I’ll be sitting at my dining table and banging away at JavaScript when I can have a more functional coding environment on my Mac. When I’m away from the Mac, though—and the Air 2 is two-thirds the weight of the iPad 3, so I’m carrying it around a lot more—it’s great for quick fixes. Besides, that’s just what I can do with it now. Who knows what incredible functionality iOS 10 will bring, or what apps someone will develop to make what I do on my Mac more appealing to do on the iPad.
Maybe the iPad won’t have a niche. Maybe it’ll become the computer I choose to do most of my work on. I don’t see that happening any time soon; there are still too many limitations to iOS and the iPad hardware right now, but that’s a temporary problem. Apple’s shown they want to make the iPad into something powerful enough for more than just content consumption. I wouldn’t be surprised if, in a year or two, Apple releases a version of Xcode for the iPad, if only because I can’t imagine iOS engineers not wanting to write code for their platform on the platform. Until then, I’m happy with my Air 2 and its capabilities—but I’m also eyeing the iPad Pro with more than a bit of gadget lust.