
Sanspoint.

Essays on Technology and Culture

Automating 1Password Backup, and Other Computer Magic

I recently picked up a 16 GB SDHC card with the sole purpose of turning it into a light, portable emergency drive for my MacBook Pro. The most recent Macs can boot off a properly formatted SD card [1], so creating a bootable rescue disk was as easy as cloning the recovery partition to the card in Disk Utility. This lets me reinstall Mac OS on the machine, as long as it can get on the Internet. Since I hope to replace the internal drive with an SSD in the future, this will be very handy.

Of course, that doesn’t take up 16 GB of space on an SD card, which leaves me the flexibility to keep other useful tools on there. One thing I picked up from a recent Back to Work is the idea of keeping an encrypted disk image on your emergency drive, with a backup of important data, such as your 1Password database. [2] A great idea, except that, knowing me, I’d drop the database file on there, put the card back in its place, and forget to ever update it. Putting a weekly reminder in Things would not be enough. So I needed a way to automate the process and make it as easy as plugging in the SD card.

So I did. And it was surprisingly easy. Here’s how:

  1. Create an encrypted sparse bundle on your USB drive or SD card.
  2. Install the Do Something When preference pane on your Mac. This is an old preference pane, and still 32-bit. It works, however, even on Mountain Lion.
  3. In Automator, create a workflow to mount the sparse bundle, copy the 1Password file, and eject the sparse bundle. If you’re unfamiliar with Automator, Apple has a good basic tutorial, but it’s really just drag and drop. I’d provide my workflow, but it’s customized to my file names and device names; a rough scripted equivalent is sketched after this list.
  4. Enjoy your freshly backed up 1Password database.
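
For the curious, here’s roughly what steps 1 and 3 boil down to. This is a minimal sketch in Python standing in for my Automator workflow; the sparse bundle name, mount point, and 1Password data path are placeholders, not the actual names I use, so adjust them to match your own setup.

```python
#!/usr/bin/env python
# Minimal sketch of the backup routine: mount the encrypted sparse bundle,
# copy the 1Password data onto it, then eject. All paths are placeholders.
import os
import shutil
import subprocess

IMAGE = "/Volumes/EMERGENCY/Backup.sparsebundle"   # sparse bundle on the SD card
VOLUME = "/Volumes/Backup"                          # where the image mounts
SOURCE = os.path.expanduser(
    "~/Library/Application Support/1Password")      # placeholder; point at your 1Password data

# Step 1, creating the encrypted sparse bundle, is a one-time job along the lines of:
#   hdiutil create -size 1g -type SPARSEBUNDLE -fs HFS+J -encryption AES-256 \
#       -volname Backup /Volumes/EMERGENCY/Backup.sparsebundle

# Mount the image; hdiutil prompts for the passphrase (or pulls it from the keychain).
subprocess.check_call(["hdiutil", "attach", IMAGE])

# Replace the previous backup with a fresh copy of the 1Password data.
dest = os.path.join(VOLUME, "1Password-Data")
if os.path.exists(dest):
    shutil.rmtree(dest)
shutil.copytree(SOURCE, dest)

# Eject the image so the card can be pulled safely.
subprocess.check_call(["hdiutil", "detach", VOLUME])
```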

The only dependency I’m worried about in this setup is Do Something When. I’m sure there’s an alternative, probably console-based, that I could use to trigger the Automator workflow that does the bulk of the work, and accomplish the same task with a minor tweak. [3] What I love, however, is that with only a basic set of tools, the linchpin of which is baked into the OS, I can have my computer do a set of repetitive tasks based solely on the presence of one removable hardware device. Back in the old days, this would have required actual tedious scripting, or recording a macro. Now it’s drag and drop, putting more control of our technology in the hands of ordinary users.
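
If Do Something When ever breaks, my best guess at a console-based replacement is a launchd agent: launchd’s StartOnMount key fires a job whenever any volume mounts, and that job can run the Automator workflow from the command line. A hedged sketch, with placeholder label and paths, saved into ~/Library/LaunchAgents:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Placeholder label; use your own reverse-DNS name. -->
    <key>Label</key>
    <string>com.example.backup-1password</string>
    <!-- Start the job every time a filesystem mounts, e.g. when the SD card goes in. -->
    <key>StartOnMount</key>
    <true/>
    <!-- Run the Automator workflow via the automator command-line tool. -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/automator</string>
        <string>/Users/yourname/Library/Workflows/Backup1Password.workflow</string>
    </array>
</dict>
</plist>
```

Since StartOnMount fires for every mount, the workflow would still need to check that the volume that just appeared is actually the emergency card before it does anything.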

If you’re not backing up your most sensitive data in more than one place, you’re begging to lose it. If my hard drive dies, or if I get hacked like Mat Honan, I have a way to unlock my digital life, kept on my person, and kept easily up to date just by plugging it into my machine. And, no, I’m not telling you where I keep it.


  1. The card must be formatted as Mac OS Extended (Journaled), and contain a bootable OS install.  ↩

  2. If you’re not using 1Password or a similar app, then shame on you.  ↩

  3. Should this be the case, please let me know, so I can switch.  ↩

Google’s Misplaced Priorities and the Trust of Users

It’s not that I fear Google; I just don’t trust them anymore.

Forget about the Google Graveyard, and the imminent death of Reader. Google’s made a pivot away from making cool things supported by ads, to selling ads to support (some) cool things. It’s a small difference on paper, but a massive difference in how we relate to a company that many of us have uploaded our lives to. The reason we did this in the first place was that Google’s offerings were simply better. If you used web-based email in the dark days before Gmail, you understand what I’m talking about. Nobody, and I mean nobody, has done web-based email as well as Google does it. [1] That’s the blessing and the curse, and now we’re (more) aware of both of those.

I don’t blame Google as a company, or Larry Page as an executive. I blame who Google has chosen to be beholden to. Google is a publicly traded company, owned by a lot of people, directly and indirectly. It is to the benefit of those owners that Google show year-over-year, quarter-over-quarter growth in revenue and profit. Whether this is to Google’s benefit is up for debate, but in order to keep those people happy, Google has made the decision to run with the “profits over products” mantra that is the shameful standard among companies these days. I don’t need to say who the notable outlier is.

The stock market is a system that evolved in the days when most companies big enough to be publicly traded were in the business of making real, tangible things. To make more money, you either made and sold more things, sold the same number of things at a lower cost to you, or did both. [2] When you don’t have a tangible thing to sell, whether it’s a service or an infinitely replicable string of bits that takes up no real physical space, the old ways of making money break down. Look at the record industry for an example. Though Google’s gotten into the physical goods game by buying Motorola and selling the Chromebook Pixel, it’s still mostly dealing in intangibles. There’s no scarcity, and the costs are as low as they can go.

The business end of Google is plugged into a system designed for a much different economy, one where you could, and should, pump most of your profits into making things faster, better, and/or cheaper. It’s like plugging a MacBook Air into a vintage IBM mainframe. Getting them to talk to each other is a miracle, wastes the power of one, and taxes the power of the other. [3] However, it’s to the benefit of those shareholders for Google to bend to the rules of this old system, and one of those shareholders is Larry Page, who sets the priorities. If your personal bottom line and career depended on keeping that stock price going up, you’d make some of the same decisions, too.

Those decisions are coming at the cost of us Google users. We used to be able to trust Google, because Google put us first. Ads were the necessary tradeoff, and we accepted them in the same way we accept ads on television and radio, and in magazines and newspapers. [4] Perhaps not all of that trust was earned, but for a good while, it looked like “Don’t be evil” was more than just a slogan. Even now, I hesitate to call Google’s actions evil. They are perfectly reasonable actions based on a set of priorities that I find to be misplaced. I’ll work with them, and I’ll acknowledge a good product when they have one, but I doubt I’ll ever be putting my full trust in Google again.


  1. The web-based Gmail interface has its detractors, but nobody points to anyone doing it better on the web.  ↩

  2. Grossly oversimplified for the sake of example.  ↩

  3. Which of these two things represents Google and which represents the stock market is an exercise for the reader.  ↩

  4. Here, you might complain about Google targeting ads based on what it knows about you, and whether they ever really put “us” first. This is nothing new, and honestly, it’s the only way they could do advertising in the Internet age. Necessary evil.  ↩

Switching to NewsBlur

Not long after the fiasco of the Google Reader shutdown announcement, I signed up for a premium account with NewsBlur, and promptly went back to Reeder and Mr. Reader with their Google Reader sync. This was purely because of NewsBlur’s growing pains from the waves of Google Reader exiles crossing the border. Now that the worst is over, and NewsBlur is up and running at almost full steam, I feel like I can give it a proper chance, and I like a lot of what I’ve seen so far.

In my brief time with the service, it’s very clear that NewsBlur is not Google Reader. This is good, and this is bad, but it’s mostly neutral. Different services are different, and NewsBlur doesn’t have the same UI, shortcuts, or third-party app “support” as Reader does. Did. It is, however, an easy jump from Google Reader to NewsBlur. You don’t even need to use Google Takeout to get your subscriptions. NewsBlur uses some API magic to pull in not only your feeds, but even your starred items. [1] This feature almost certainly won’t work after July 1st, so if NewsBlur is something you’re considering making the jump to, get on it quick.

The web app is a little homely, especially coming from apps like Reeder for Mac, or even the Google Reader web interface. Still, it works well enough, and has some intuitive keyboard shortcuts. There’s also a great feed checker tool built into the app, and I was able to correct, or delete, broken feeds with only a couple of clicks. Feed management is a little less seamless—I expected to be able to drag and drop to move feeds around, but organizing everything is done in contextual menus, and you can’t create a folder from the move feeds menu. Small issues, both. The site itself is fast, though the user onslaught has forced Samuel to reduce the number of times per day the service actually fetches new articles. I haven’t noticed this to be a problem, but if you want up-to-the-minute information, it’s something to be aware of.

The worst part is the iOS app. It’s perfectly serviceable, but in the age of Reeder, Mr. Reader, and other gorgeous, easy-to-use RSS apps for iOS, NewsBlur’s feels like a step back to 2009. It works, but it’s neither as fun nor as pretty as the Google Reader-based apps. NewsBlur does, however, have an API, and I hope that the developers of my preferred RSS apps will add support for NewsBlur soon. For now, I will make do. Any native app is better than none. Speaking of which, there’s also a helper app for the Mac to allow Safari to open RSS feeds directly in the NewsBlur web app, though with RSS settings removed in Safari 6, setting it up is a minor pain. [2]
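
As a taste of what that API looks like, here’s a minimal sketch that logs in and lists subscriptions. The endpoint paths and field names are my assumptions from a reading of NewsBlur’s public API documentation, so check the docs before building on them.

```python
# Minimal sketch: log into NewsBlur and print the titles of subscribed feeds.
# The /api/login and /reader/feeds endpoints, and the "feeds"/"feed_title"
# field names, are assumptions based on NewsBlur's published API docs.
import requests

session = requests.Session()  # keeps the login cookie across requests
session.post("https://newsblur.com/api/login",
             data={"username": "YOUR_USERNAME", "password": "YOUR_PASSWORD"})

feeds = session.get("https://newsblur.com/reader/feeds").json()
for feed in feeds.get("feeds", {}).values():
    print(feed.get("feed_title"))
```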

The two best things about NewsBlur, however, are that it’s an independent, paid service, and that it’s actively developed. That gives me confidence that NewsBlur is not going to go away, and that I can use the site without having to surrender anything more than $24 a year. Truth is, if I didn’t have to leave Google Reader, I wouldn’t, but my forced exile from Google’s garden has left me in a comfortable place with a benevolent caretaker. If you’re still looking for a place to go, give it a try.


  1. Which means, for me, that the two articles I starred in 2009 transferred over. A pleasant surprise.  ↩

  2. I had to download and install a third-party preference pane that allowed me to set the URL handler app to the NewsBlur helper application. The hardest part was learning that the preference pane existed.  ↩

What Makes SimCity Great (and Why the Sequel Disappoints)

The first time I played SimCity was Christmas, 1991. I’d gotten a Super Nintendo, and SimCity was one of the original games released alongside it. Super Mario World was fun, F-Zero was frustrating, but SimCity inspired me in a way that no other video game had before. When I got my first PC, I got SimCity. When SimCity 2000 came out, I begged for it as a Christmas gift, got it, and played it relentlessly, building (and later destroying) digital cities in my own image. Later iterations of the game, SimCity 3000 and SimCity 4, were good, but didn’t quite capture the magic of SimCity 2000. 3000 was pretty, but added layers of complexity that took away from the core of the game. SimCity 4’s complexity was even greater, but in an exciting way, though it taxed every machine I tried running it on to the absolute limit.

When the first preview of the newest SimCity appeared, I was incredibly excited. Here, at last, was a sequel that looked as revolutionary compared to its predecessor as SimCity 2000 was to the original. [1] The graphics looked incredible. You could build roads that curved and swayed with the terrain. The full-3D experience that had been promised as far back as SimCity 3000 was finally coming. I watched preview and gameplay videos, and I salivated. When I bought my new computer, I checked its specs against what EA had recommended, knowing full well that when the Mac version dropped, I would snatch it up and never see daylight for a month.

Plans have changed. The Mac version isn’t out yet, but I’ve already decided I would rather pay six bucks for the Good Old Games version of SimCity 2000 than however much EA wants for the latest SimCity. I don’t need to retell the clusterexpletive of the game’s launch, and the continuing backlash. That’s what Google is for. Instead, I’d like to tell you what makes SimCity, as a game, as a franchise, so compelling to us. You can fill in the blanks about where the latest sequel falls down.

Actually, to even call SimCity a “game” is a disservice. For years, Maxis labelled their products as “software toys.” An apt moniker, as SimCity began life as little more than a map editor for an Amiga shoot-em-up game. Will Wright had enough fun building the cities that some would-be gamer would bomb that he spun the editor off into a new project, one that let you build your own city, watch it grow, and manage it all the while. Crime too high? Build police stations. Earthquake or tornado strike? Better have fire departments. Too much traffic? Build a rail system… and make sure you’re making enough from taxes to pay for all of it.

SimCity was an open-ended box of virtual Tinkertoys that we could put together any way we wanted, and later versions of the game opened up the possibilities even further. In SimCity 4, for example, you could create anything from a dense, sprawling metropolis to a tiny little farming village, and have it work. Earlier games in the series tended to push a bit more towards creating something more like Portland than Peoria. [2] Even still, it never told you what to do, or how to do it, beyond the basics in the manual. If you want to build a utopian paradise where every citizen’s needs are met, there’s no traffic, no pollution, and no crime, you can. If you want a totalitarian state, with an ignorant populace and crumbling infrastructure, go nuts. If you want to just draw pretty pictures on the landscape with the road tool? Have fun. City shaped like a dong? You got it.

Whatever it was, you were building your own vision. SimCity was anti-social, though by the time SimCity 4 came out, people had taken to sharing diaries of their cities online. You didn’t have to worry about consensus, or other people taking the good parts for themselves. Few games before, or since, give you quite the level of creative freedom SimCity does, even within the same basic genre. SimCity gives you a blank slate, a few simple constraints, and all the tools you need to make something cool. The earlier games in the series don’t lose their value due to age, either. SimCity 2000, which I’ve been playing lately, is as fun and frustrating as it was twenty years ago. The graphics are a product of their time, but they still look great—and even as mere abstraction rather than an attempt at realism, SimCity would still be compelling. It’s the freedom and control that make SimCity what it is.


  1. SimCity Societies doesn’t count. Don’t ask me about it.  ↩

  2. Please don’t e-mail me if you live in either of those places.  ↩

Why Kids Understand Tablets: UI, Touch, and Abstractions

Recently, at a birthday gathering for my girlfriend’s mother, there was awed discussion about the three-year-old grandson of one guest, and his intuitive grasp of a tablet computer. It got me thinking a bit, and while I have precious little grounding in some of what I’ll be touching on, the kid’s understanding makes more sense than we might think at first. After all, touch is how we interact with the real world. When it comes to technology, touch-based computing is still in its infancy, but by removing the indirect manipulations of a mouse, the learning curve of a UI decreases by quite a lot.

Case in point: have you ever had to teach someone how to use a mouse? The difference between a left-click and a right-click? When to double-click and when not to? How to click and drag? The directness of a touch UI is apparent in a situation like this. “How do I play Angry Birds?” Just touch the little picture of the angry bird. Boom. What could be simpler? Perhaps direct thought control or Star Trek-style voice recognition, but we’re still a long, long way away from that.

The mouse, as an interface, is a vestigial remnant of the transitional period between the days when computing was a keyboard-based affair and the near-future of semi-ubiquitous touch computing. Make no mistake, there are places where a mouse, or mouse-like interface, is preferable to pure touch, such as a large-screened desktop computer. To complete Steve Jobs’ metaphor, mice and trackpads may be akin to the massive stick-shifts on trucks. This sort of thing is why Microsoft’s previous attempts at touch computing failed. Relying on a stylus as a mouse surrogate only emphasized the flaws inherent in using a mouse-based UI in a touch-oriented environment.

A good touch interface is easy to understand, because it reacts as you would expect when using it. If you slide a piece of paper in front of you on your desk with a finger, scrolling a web page on a tablet isn’t a huge leap. The direct control, and near-immediate response, of a touch interface mimic what we know from the real world. I’ll only use the “skeuomorphism” buzzword once [1], but part of the reason behind a skeuomorphic interface on a touch device is to drive those real-world analogues home. [2] It matches our patterns of how things work. To move to the next page of a book, you swipe, much like you would if you were reading a real book.

Children are tactile creatures. They’ll stick their hands into anything. I had the opportunity for several summers as a young teenager to work with children in a computer lab at day camp. The younger children, with little exposure to computers, would poke at the screens, and this was in the mid-90s, when the pinnacle of touch-based interfaces was the Apple Newton [3] or the original Palm Pilot. As they learned how a computer worked, that the buttons on the screen have to be clicked by the proxy of a small white arrow, controlled by this odd plastic lump connected by a cord, they would poke the screen less. [4]

This presents a sort of chicken-and-egg problem: do the visual UI elements that emulate real-world counterparts [5] make young users think that a button on screen should be pressed with a finger, or does experience with real buttons make a young user think that? Considering how abstracted the traditional touch UI on tablets is, I’m going to say it’s the former. Whether little roundrects on an iPad screen, square icons on a Kindle Fire, or the varied shapes on an Android home screen, kids (and tech-unsavvy parents) think to touch them.

Compare this to the Old Way of Doing Things, when to do something on a computer, you had to type arcane commands in at a cold, unforgiving prompt. It wasn’t intuitive, it was frightening, and it’s why the Macintosh was “The computer for the rest of us.” The nerds of the day disdained it as a toy, but it wasn’t for them. The Macintosh of 1984 was the iPad of 2010—a first step on the journey to a new way of computing that worked for more of us. The technology wasn’t exactly new—but it presented a different abstraction of what occurred under the surface that was easier to understand.

And, really, everything we do on a computer of any sort is working with abstractions. Unless you’re working in assembler on a vintage mainframe, you’re working in abstractions. [6] A file, whether you’re typing out its name, double-clicking it, or tapping it with a finger, is an abstraction of how data is actually stored down at the hardware level. The code a programmer writes is abstracted away from the actual opcodes sent to a processor. As we develop new, better abstractions, we can improve how we use the technology in front of us, and make it easier and better.

When a three-year-old child can pick up a piece of hardware and intuit how to make it do what he wants, even if it’s just playing Angry Birds, that’s a sign that we’re on the right track. I guarantee that if he had to use a mouse, or a stylus, he’d be at an instant disadvantage. Because touch is how he knows and understands the world, touch is how he assumes he can interact with technology. Until only a few years ago, that wouldn’t have been the case.


  1. Twice, counting the quoted use.  ↩

  2. Whether this works on a functional or aesthetic level is clearly up for debate.  ↩

  3. Even that drew heavily from the desktop metaphor that still dominates desktop computing. Hence the stylus, though that was also a limitation of the technology of touch, and resistive screens.  ↩

  4. Older kids with more exposure to technology would also poke screens, but only to show where something was to me. They knew it wouldn’t actually do anything.  ↩

  5. The s-word ones.  ↩

  6. Actually, even assembly code is an abstraction. You could be flipping bits.  ↩