Sanspoint.

Essays on Technology and Culture

Why I Still Use Google

In the wake of the PRISM scandal, I’ve seen people I admire and follow online write about cutting Google out of their digital lives. Many of them were already unhappy with Google poking its nose into their personal data, and Google’s collusion with (or capitulation to) the NSA is all the more reason for them to jump ship. They’re right to do so. NSA aside, I still use Google for a lot of things—including all of my email—and will continue to do so until the tradeoffs become unbearable.

Let’s put the NSA stuff aside for a moment, except to say that unless I host my email myself, the possibility exists that some agent other than myself will be able to get in and access it. This is the risk you take. Not all of us can host our own email on a server we control. [1] Harry Marks switched to iCloud, which is a reasonable solution but still leaves your data in the hands of a third party. Marco Arment, meanwhile, advises against keeping your life in Google, because “it’s downright foolish to tie that much of your data and functionality into proprietary services run by one company in one account that sometimes gets disabled permanently with no warning, no recourse, and no support.” That can even happen with iCloud, so you takes your chances.

I’m not worried about Google looking through my email to serve advertisements—especially since I don’t see them. [2] Even if I didn’t block the ads, Gmail’s ads are unobtrusive by online-ad standards. Google is also fairly up front, as far as internet companies go, about looking at email contents to target ads. As long as that’s all they do, and there’s little reason to believe otherwise (again, ignoring PRISM), I’m okay with that. They have to pay for the service somehow. It’s a fair trade: Google lets someone pay them to show me an ad for something I won’t click on (or see), and I get a really great web-based email service. Roundcube looks nice, and may be a good solution if I ever end up hosting my own, but in the browser and with Mailbox on my iOS devices, I can tear through my email with ease. [3] Switching to another service means giving this up, and I’m not prepared to do that.

Pulling Google out of my life isn’t worth the pain it would inflict on my workflow or my wallet. Abandoning Google means giving up Google Voice, which I love, not least because text messaging prices are outright usury. It means convoluted hacks to set a privacy-friendly search engine as the default on my iOS devices, when Google works well enough for my needs. Yes, Google altered the deal when they shut down Reader, but I can still live with the new terms. [4]

Of course, I don’t use Google for everything. Now that iCloud mostly has itself together, I prefer to use it to keep my contacts and calendar in sync across my devices, rather than rely on Google. I’ve opted out of Google+ for good. My photos are on Flickr, my data is on Dropbox, and I pay for App.net. Google is just one part of a balanced ecosystem of services I use online. Spreading things out is safer than giving one company control of my digital life, even if Google has the deepest integration into the heart of it. And it works, and that’s more important to me than control, or some notion of privacy from advertisers.


  1. It costs $50/mo to host a Mac mini with Macminicolo.net, plus the cost of a Mac mini, or an additional $100/mo to rent one. Either way, that’s money that comes out of more pressing expenditures for me. (Damn student loans…)  ↩

  2. Yes, I use ad-blocking software in all my browsers, though I’m willing to unblock sites whose ads are unobtrusive, or ad networks that are ethical. In any case, it means my Gmail experience in the browser is ad-free and painless.  ↩

  3. I only signed up for Mailbox after the Dropbox acquisition. Dropbox is another company that I am willing to trust with my data (yet again, ignoring PRISM). They have a valid business model, and though they had a security issue last year, they seem to be on top of their game now.  ↩

  4. It helps that many of the alternatives to Reader are better than what they replace, in no small part because they’re actively developed.  ↩

Our Job as Technology Writers

Yesterday, I described the job of technology writers as “helping people choose what’s right for them.” All too often, however, we establish our camps, then relentlessly defend the products we use and/or attack the ones others do. There’s an almost feudal world in technology where one is either an Apple fan or an Android fan. (Or a fan of one of those other OSes out there. Bet someone still has a Palm Pre they use every day.) This worldview may come from spending so much time in the Apple sphere, which comes with the territory of being an Apple user. I’m not likely to get the information I want from Paul Thurrott’s SuperSite for Windows, or from Gizmodo, which has been pissed at Apple ever since the stolen iPhone 4 saga.

On the one hand, the second-class-citizen status of Apple users, from the 90s until the release of the iPhone and iPad, was guaranteed to foster an attitude of snark towards the “competition.” This is something that even permeated Apple’s advertising and messaging, culminating in the—admittedly hilarious—PC vs. Mac ads. Yet this sort of behavior isn’t exclusive to Apple fans. Stick your head into the comments on any (non-Apple-focused) technology news site, and see the flamewars for yourself. If this is the discourse that passes among passionate users of technology, we’re in trouble.

Technology writers aren’t to blame for inspiring polarized discussion in the comments, and polarized comments aren’t to blame for polarized discussion in the technology sphere. Both are symptoms of a larger problem of psychology and tribalism that shows up in any sphere of human endeavor. I’m sure you’ll find corners of the internet where supporters of linguistic relativity flame supporters of linguistic determinism with the same fervor as Apple and PC/Android users.

The people with the ability to control and define the terms of the debate are technology journalists and pundits. I’d love, and would even pay good money, to read pieces along the lines of Andy Ihnatko’s switch to Android that provide legit, unbiased comparisons of products on a level much deeper than feature comparison checklists. I want to read people smarter than me discuss the benefits and tradeoffs of a closed, App Store environment like iOS versus Android’s free-for-all(-ish) environment, without resorting to talk about “freedom” and the virtues of open source. I’d like to see someone switch from Apple’s ecosystem to a Windows ecosystem and talk about the benefits and drawbacks. I’d like someone to do the same thing from the other direction.

There are two things standing in the way of making this happen. One is that this sort of article requires a lot more time and nuance than is available on most high-profile sites. For every piece of awesome long-form journalism on The Verge, there are dozens of shorter pieces going up every day. By any estimation, a short, feature-checklist product review is going to be easier to write than a detailed comparison built around a deep dive from a user’s perspective. This leads to the second problem: short, easy articles give you more page views, ad views, and Google Juice.

Not that I want to make a scapegoat of internet advertising and blame it for the state of things online. These are problems that predate internet advertising, and even the internet as a concept. Advertising is just one more factor that isn’t helping: an added financial incentive to keep posting polarized and polarizing articles, because that’s what brings in the ad impressions. The problem still lies within all of us, and so does the solution. Let’s stop focusing on how right we are for using certain tools, and instead use the communicative power we have to teach people how to find the right solutions to their problems. We can do this no matter which company’s products we prefer to have on our desks and in our pockets.

Ted Dziuba – Mastering the Craft

Ted Dziuba — Mastering the Craft:

I know Java well enough, so I haven’t been resisting because of my skill set, I resisted Java because it’s enterprisey. Because I thought it was an inferior technology. Because I had a chip on my shoulder about technical superiority.

That’s not mastery, that’s just being a prick, and I’m done with it.

This is the sort of thing I’m on about lately. There’s a tool for everyone. Some suit the task better than others. Some suit the user better than others. All of them are valid, just like the example he gives of fine whiskey versus a Captain and Coke. Preferences are preferences, above all. It’s better to expose yourself to other people’s preferences and appreciate them for what they are, and to understand your own as well.

Apple Works. Android Works. Windows Works. Just Maybe Not For You.

There’s been a bit too much griping in the technology media I consume about the inferiority of certain “competing platforms” in the mobile and tablet space…

Oh, hell with it. I’m sick of hearing knee-jerk Android bashing from obviously smart people who should know better. The most recent episode of Amplified was absolutely painful to listen to, with Jim Dalrymple’s claim that Apple “is the only [company] that has led” in the tech space. This is absolute nonsense, and he knows it. Jim “likes to use products that work,” and Apple products work for him. Great. They work for me too. I wouldn’t have a MacBook on my desk, an iPad in my bag, and an iPhone in my pocket if that weren’t the case.

Jim’s blanket dismissal of Android as “not working” ignores people like Andy Ihnatko. The great thing about Andy’s three-part rationale for switching to Android is that he doesn’t claim Android is for everyone. He explains the reasons why Android works for him, and suggests that if you have a similar set of needs and wants, Android might work for you too. He doesn’t say that iOS “doesn’t work,” or tacitly insult those who dare to use a competing platform. Compare Jim’s statements with Andy’s on the most recent episode of The Ihnatko Almanac, where he discusses putting thought into the tools you choose to use, and not just buying a product, knee-jerk, because of the logo on the back.

It’s to everyone’s benefit to have a variety of options for hardware and software. A competitive environment drives innovation and gives all of us choices. There are people whose ideal mobile computing environment is iOS, a customized Android, or even Windows Phone, Symbian, or BlackBerry OS. It’s not our job as technologists or technology pundits to tell people the choices they’ve made are wrong. It’s our job to help people choose what’s right for them. No matter what we use, or how well it works for us, the tools we use are conscious choices, not dogma. We are free to explore other options, change our minds, and be wrong. There’s something to take from everything out there.

Kids Can’t Use Computers? Depends What You Mean By “Use”

Marc Scott of Coding 2 Learn has raised a few hackles with a recent piece making the rounds. The title alone should explain why: “Kids can’t use computers… and this is why it should worry you”. Here’s the gist of the article, quoted:

…[A]ren’t all teenagers digital natives? They have laptops and tablets and games consoles and smart phones, surely they must be the most technologically knowledgeable demographic on the planet…

The truth is, kids can’t use general purpose computers, and neither can most of the adults I know.

He’s absolutely right on the money with that last line. Well, he’s right for certain values of the word “use,” at least. There are two ways to know how to use a computer: task-based knowledge, and skill-based knowledge. The former is much easier to acquire than the latter.

Task-based knowledge is functional. If someone wants to check their email, go on Facebook, or watch a cat video on YouTube, they will figure out a way to do it that works for them. Their method might seem completely roundabout to a more savvy person, however. A while back, a company’s article on Facebook logins ended up as the top Google result for “Facebook login,” and the company received thousands of confused comments and emails from people whose Facebook workflow was to search their way to Facebook’s login page. This is the downside of task-based knowledge: when something changes, it can break the workflow. This is also an advantage that mobile and tablet OSes have over traditional desktop computing. When all you have to do is tap the little blue box with the white “f” to get to Facebook, there’s a lot less cognitive load involved.

It’s skill-based knowledge where people fall short, and using a computer to its full potential is largely a skill. Turning on Wi-Fi is a task, and one a person can learn; skill is knowing how to find the settings for whatever you need to turn on or fix, no matter what it may be. The car metaphor, which always comes up in discussions like this, is apt. Most of us view our car as a way to get from point A to point B. We know how to drive it, we know how to park it, and we know how to fill the gas tank. When something goes wrong, or even when it needs maintenance, we turn to a professional. We could learn to fix our cars ourselves, but why should we? It’s much the same with computers.

Why this sorry state of affairs? Marc claims:

Being a bunch of IT illiterates themselves, the politicians and advisers turned to industry to ask what should be included in the new curriculum. At the time, there was only one industry and it was the Microsoft monopoly. Microsoft thought long and hard about what should be included in the curriculum and after careful deliberation they advised that students should really learn how to use office software. And so the curriculum was born.

I don’t know if it’s quite as insidious as that. The most common applications many people use in their professional lives are word processors, spreadsheets, and presentations. Even now, most of our jobs don’t require us to get deep enough into a computer to need to learn how to code. A basic computer literacy course that imparts task-based knowledge of office software is probably the best all-round education students need from a vocational perspective. That doesn’t make it the best possible education in computing, however—just the lowest common denominator.

Marc does have a point about the locking down of educational computers: “…preventing kids and teachers access to system settings, the command line and requiring admin rights to do almost anything. They’re sitting at a general purpose computer without the ability to do any general purpose computing.” While a locked-down environment is easier for schools to administer and reduces the potential for security issues, we lose the chance to teach people, if not the skills to maintain a computer, at least the task-based knowledge to configure a network connection. At a certain point, computer education needs to move beyond how to use office software and into how to use the computer as a whole.

While having parents and schools alike teach children to try to solve problems themselves is sound (if impractical in the bureaucracy of a school system), the rest of his suggestions border on lunacy. Things fall down at suggestions like using Linux—even on a cell phone. Marc even admits that his phone “can’t use 3G… crashes when I try to make phone calls and the device runs so hot that when in my jacket pocket it seconds as an excellent nipple-warmer…” If you want to teach an ordinary, non-technically inclined person how to be frustrated and give up on computing, by all means sit them in front of a Linux box. [1]

The problem needs to be approached from both sides: how we teach computers, and how computers work. Apple seems to be the only company on the right track, though given Marc’s outright dismissal of iOS, I’m sure he’ll disagree. He might at least agree that we should be making technology easier to use and more intuitive—without sacrificing the things that make it powerful. More often than not, the attempt leaves us with interfaces that are either too dumbed down for users to learn anything, or interfaces that stubbornly refuse to change for marketing reasons. [2]

Marc lists a series of events that prove people don’t know how to use computers. I ask: Why is the OS insecure enough, out of the box, that a user needs to install software to prevent viruses and malware? Why should someone need to reinstall an insecure operating system to fix it? Why is there a hardware switch to turn on Wi-Fi on a laptop, and why is it on the side of the machine, out of sight? Why are error messages for simple issues written in a complex way, or easily dismissed without actually resolving the problem? Why does a cell phone not automatically back itself up remotely? The only item in the litany that isn’t rooted in a technical shortcoming is the user who lost their Internet Explorer icon in a mess on the desktop—and even that has a technical solution.

We’re making progress in this area, but not fast enough. Any change that makes a computer easier to use, and reduces the potential for things to go wrong and for a technician to be brought in, is usually fought, kicking and screaming, by the technical elite. Most people want things to work, and to work with a minimum of fuss. It’s not that kids don’t know how to prevent malware or configure a wireless network; it’s that they shouldn’t have to. Until that day comes, let’s at least teach them.


  1. I used Linux as my primary OS for a few years, and while it’s certainly improved since 2005, it’s still not ready to be an average user’s primary OS.  ↩

  2. Another thing I have to agree with Marc on: Windows 8 sucks.  ↩