Sanspoint.

Essays on Technology and Culture

What Makes SimCity Great (and Why the Sequel Disappoints)

The first time I played SimCity was Christmas, 1991. I’d gotten a Super Nintendo, and SimCity was one of the original games released alongside it. Super Mario World was fun, F-Zero was frustrating, but SimCity inspired me in a way that no other video game had before. When I got my first PC, I got SimCity. When SimCity 2000 came out, I begged for it as a Christmas gift, got it, and played it relentlessly, building (and later destroying) digital cities in my own image. Later iterations of the game, SimCity 3000 and SimCity 4, were good, but didn’t quite capture the magic of SimCity 2000. 3000 was pretty, but added layers of complexity that took away from the core of the game. SimCity 4’s complexity was even greater, but in an exciting way, though it taxed every machine I tried running it on to the absolute limit.

When the first preview of the newest SimCity appeared, I was incredibly excited. Here, at last, was a sequel that looked as revolutionary compared to its predecessor as SimCity 2000 was to the original. [1] The graphics looked incredible. You could build roads that curved and swayed with the terrain. The full 3D experience that had been promised as far back as SimCity 3000 was finally coming. I watched preview and gameplay videos, and I salivated. When I bought my new computer, I checked its specs against what EA had recommended, knowing full well that when the Mac version dropped, I would snatch it up and never see daylight for a month.

Plans have changed. The Mac version isn’t out yet, but I’ve already decided I would rather pay six bucks for the Good Old Games version of SimCity 2000 than however much EA wants for the latest SimCity. I don’t need to retell the clusterexpletive of the game’s launch, and the continuing backlash. That’s what Google is for. Instead, I’d like to tell you what makes SimCity, as a game, as a franchise, so compelling to us. You can fill in the blanks about where the latest sequel falls down.

Actually, to even call SimCity a “game” is a disservice. For years, Maxis labelled their products as “software toys.” An apt moniker, as SimCity began life as little more than a map editor for an Amiga shoot-em-up game. Will Wright had enough fun building the cities that some would-be gamer would bomb that he spun the editor off into a new project, allowing someone to build their own city and watch it grow, managing it all the while. Crime too high? Build police stations. Earthquake or tornado strike? Better have fire departments. Too much traffic? Build a rail system… and make sure you’re making enough from taxes to pay for all of it.

SimCity was an open-ended box of virtual Tinkertoys that we could put together any way we wanted, and later versions of the game opened up the possibilities even further. In SimCity 4, for example, you could create anything from a dense, sprawling metropolis to a tiny little farming village, and have it work. Earlier games in the series tended to push a bit more towards creating something more like Portland than Peoria. [2] Even so, the game never told you what to do, or how to do it, beyond the basics in the manual. If you want to build a utopian paradise where every citizen’s needs are met, there’s no traffic, no pollution, and no crime, you can. If you want a totalitarian state, with an ignorant populace and crumbling infrastructure, go nuts. If you want to just draw pretty pictures on the landscape with the road tool? Have fun. City shaped like a dong? You got it.

Whatever it was, you were building your own vision. SimCity was anti-social, though by the time SimCity 4 came out, people had taken to sharing diaries of their cities online. You didn’t have to worry about consensus, or other people taking the good parts for themselves. Few games before or since give you quite the level of creative freedom SimCity does, even within the same basic genre. SimCity gives you a blank slate, a few simple constraints, and all the tools you need to make something cool. The earlier games in the series don’t lose their value with age, either. SimCity 2000, which I’ve been playing lately, is as fun and frustrating as it was twenty years ago. The graphics are a product of their time, but they still look great, and the game would be compelling even as mere abstraction rather than an attempt at realism. It’s the freedom and control that make SimCity so compelling.


  1. SimCity Societies doesn’t count. Don’t ask me about it.  ↩

  2. Please don’t e-mail me if you live in either of those places.  ↩

Why Kids Understand Tablets: UI, Touch, and Abstractions

Recently, at a birthday gathering for my girlfriend’s mother, there was awed discussion about one guest’s three-year-old grandson and his intuitive grasp of a tablet computer. It got me thinking a bit, and while I have precious little grounding in some of what I’ll be touching on, the kid’s understanding makes more sense than we might think at first. After all, touch is how we interact with the real world. When it comes to technology, touch-based computing is still in its infancy, but by removing the indirect manipulations of a mouse, the learning curve of an interface drops considerably.

Case in point: have you ever had to teach someone how to use a mouse? The difference between a left-click and a right-click? When to double-click and when not to? How to click and drag? The directness of a touch UI is apparent in a situation like this. “How do I play Angry Birds?” Just touch the little picture of the angry bird. Boom. What could be simpler? Perhaps direct thought control or Star Trek-style voice recognition, but we’re still a long, long way away from that.

The mouse, as an interface, is a vestigial remnant of the transitional period between the days when computing was a keyboard-based affair and the near-future of semi-ubiquitous touch computing. Make no mistake, there are places where a mouse, or mouse-like interface, is preferable to pure touch, such as a large-screened desktop computer. Mice and trackpads may be akin to the massive stick shifts on trucks, to complete Steve Jobs’s metaphor. This sort of thing is why Microsoft’s previous attempts at touch computing failed. Relying on a stylus as a mouse surrogate only emphasized the flaws inherent in using a mouse-based UI in a touch-oriented environment.

A good touch interface is easy to understand because it reacts the way you expect it to. If you slide a piece of paper across your desk with a finger, scrolling a web page on a tablet isn’t a huge leap. The direct control and near-immediate response of a touch interface mimic what we know from the real world. I’ll only use the “skeuomorphism” buzzword once [1], but part of the reason behind a skeuomorphic interface on a touch device is to drive those real-world analogues home. [2] It matches our patterns of how things work. To move to the next page of a book, you swipe, much like you would if you were reading a real one.

Children are tactile creatures. They’ll stick their hands into anything. For several summers as a young teenager, I had the opportunity to work with children in a computer lab at day camp. The younger children, with little exposure to computers, would poke at screens, and this was in the mid-90s, when the pinnacle of touch-based interfaces was the Apple Newton [3] or the original Palm Pilot. As they learned how a computer worked, that the buttons on the screen had to be clicked by the proxy of a small white arrow, controlled by this odd plastic lump connected by a cord, they would poke the screen less. [4]

This presents a sort of chicken-and-egg problem: do the visual UI elements that emulate real-world counterparts [5] make young users think that a button on screen should be pressed with a finger, or does experience with real buttons make a young user think that? Considering how abstracted the traditional touch UI on tablets is, I’m going to say it’s the former. Whether little roundrects on an iPad screen, square icons on a Kindle Fire, or the varied shapes on an Android home screen, kids (and tech-unsavvy parents) think to touch them.

Compare this to the Old Way of Doing Things, when to do something on a computer, you had to type arcane commands in at a cold, unforgiving prompt. It wasn’t intuitive, it was frightening, and it’s why the Macintosh was “The computer for the rest of us.” The nerds of the day disdained it as a toy, but it wasn’t for them. The Macintosh of 1984 was the iPad of 2010—a first step on the journey to a new way of computing that worked for more of us. The technology wasn’t exactly new—but it presented a different abstraction of what occurred under the surface that was easier to understand.

And, really, everything we do on a computer of any sort is working with abstractions. Unless you’re working in assembler on a vintage mainframe, you’re working in abstractions. [6] A file, whether you’re typing out its name, double-clicking it, or tapping it with a finger, is an abstraction of how data is actually stored at the hardware level. The code a programmer writes is abstracted away from the actual opcodes sent to a processor. As we develop new, better abstractions, we can improve how we use the technology in front of us, and make it easier and better.
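If you want to see one of those layers peeled back, here’s a minimal sketch in Python (the function is an arbitrary example of mine): the standard library’s dis module shows the bytecode hiding beneath a one-line function, one abstraction layer below the code you wrote, and still several layers above the processor’s own opcodes.

    import dis

    def area(width, height):
        # One line of high-level code...
        return width * height

    # ...and the bytecode the Python interpreter actually executes,
    # one abstraction layer further down (and still far above the hardware).
    dis.dis(area)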

When a three year old child can pick up a piece of hardware, and intuit how to make it do what he wants, even if it’s just playing Angry Birds, that’s a sign that we’re on the right track. I guarantee that if he had to use a mouse, or a stylus, he’d be at an instant disadvantage. Because touch is how he knows and understands the world, touch is how he assumes he can interact with technology. Until only a few years ago, that wouldn’t have been the case.


  1. Twice, counting the quoted use.  ↩

  2. Whether this works on a functional or aesthetic level is clearly up for debate.  ↩

  3. Even that drew heavily from the desktop metaphor that still dominates desktop computing. Hence the stylus, though that was also a limitation of the touch technology of the time, and of resistive screens.  ↩

  4. Older kids with more exposure to technology would also poke screens, but only to show where something was to me. They knew it wouldn’t actually do anything.  ↩

  5. The s-word ones.  ↩

  6. Actually, even assembly code is an abstraction. You could be flipping bits.  ↩

The Problem With Technology Evangelism

Is there a douchier title in technology than “Evangelist”? It should conjure up images of people on street corners passing out fliers for a church, political party, or other organization you don’t want to join, people who will yell at you if you brush them off. It’s excessive, and it’s polarizing. Though a lot of us want to tell the world about what we love—that’s half of what social networks are designed for [1]—the problem is that, outside of a specific social context, nobody wants to hear about it.

That’s what makes the idea of the “Evangelist” so obnoxious. I say this as a happy member of the Cult of Apple, but one reason why Apple users are so often mocked is the evangelistic fervor with which they freely promote their company’s products at the expense of other things, often without prompting. [2] If it’s annoying when people do it for free, how annoying is it when someone is being paid to do it? The problem is that, the instant money enters the equation, a person’s credibility is potentially compromised. The hand that feeds is also a hand that holds the leash, or the rod.

It’s why John Gruber is skeptical of Apple’s hiring of Kevin Lynch as VP of Technology. Working for Adobe and promoting Flash on mobile devices meant parroting the company line. He may have believed it, he may not have, but I’d trust an independent partisan over a corporate partisan any day—which is why I trust Gruber. Apple may hook him up with review units and inside info, but he earns his living the hard way. Kevin Lynch is doubly boned here—if he really thought Flash was a good mobile technology, he was wrong. If he was just parroting the company line, he’s dishonest, which makes for the worst kind of evangelism.

Evangelism in technology is merely marketing taken to an extreme. It’s an attempt to put a name and a friendly face on an entity that has no face. No technology company is innocent here—the evangelist is an established business practice, for better and for worse. Too often, the evangelist is pushing a product that is known to be sub-par, brushing the flaws under the rug. The skeptical technology user needs to be aware, and to poke, prod, and otherwise see past the curtain of whiz-bang buzzwords to what lies within, lest they get suckered. As dangerous as it is to be a naysayer, total buy-in to the evangelical crowd is just as bad.


  1. The other half is having the social networks advertise the things we love to other people, and have what other people love advertised to us.  ↩

  2. To make the second reference to Andy Ihnatko’s Android piece in as many days, the Reddit thread about his piece had more than a few people calling him a traitor, and an equal number saying that nobody should care because Andy’s a nobody.  ↩

The Little Tweaks

My job involves an awful lot of typing. Often, it’s a lot of typing the same stuff, over and over again. When I catch myself copying and pasting, or retyping the same things over and over, I stop and create a snippet in TextExpander to do it for me. I have several specialized snippets I use for my job, ranging from typing “Read full article at” to complex, pop-up, fill-in forms. Each one takes anywhere from one to five minutes of my time to set up, but can potentially save me hours in the long run.
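If you’ve never used a snippet tool, here’s the idea in miniature as a toy Python sketch—not how TextExpander actually works, just the concept, with abbreviations I made up for illustration:

    # A toy text expander. The abbreviations are invented examples.
    SNIPPETS = {
        "rfa": "Read full article at",
        "tyfw": "Thank you for writing in.",
    }

    def expand(text):
        # Replace any whole word that matches a known abbreviation.
        return " ".join(SNIPPETS.get(word, word) for word in text.split())

    print(expand("rfa the usual place."))
    # -> "Read full article at the usual place."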

We all have little parts of what we do that can be sped up, improved, or even automated. The hardest core of hardcore geeks often find themselves writing scripts, changing system settings, and installing new software for hours on end, with the intent of shaving a few seconds off a common, repetitious task. [1] To the untrained eye, it looks like procrastination. Actually, to the trained eye it can look like that, too, and it sometimes is. However, it’s those little (and not so little) tweaks and customizations that help us make technology our own.

I’m reminded of a comic where a person watches in mounting frustration as another person tries to search the Internet. It’s an exaggerated example—well, maybe. I’ve never seen anyone use Google to get to Google and do a web search, but it is plausible. If you relate to the person watching, you’re probably more technologically savvy than 90% of the people in your life, if not more, and it’s all through exposure. The more we use something, the more we desire to make it our own [2], and we will often seek out ways to do it.

In some cases, this is aided by discoverability in design. While using a new piece of software, you might think, “Hey! There has to be an easier way to do x,” so you start poking through menus, reading documentation, or pressing random keys until you find the one that does what you want. Your willingness to do so often comes down to the software’s friendliness. Of course, we’ve made this a lifestyle. The part of the population for whom technology is merely a means to an end will often stick with whatever workflow they’ve found works for them, no matter how cumbersome. There’s no incentive for them to consider doing otherwise.

This fundamental difference in mindset is caused by factors far too varied for me to get into here, what with my lack of training and experience in psychology, user experience, software development, or design. Suffice it to say that without an immediate value proposition beyond “Hey! This is easier/faster/cooler,” most users will find a way that works for them and stick to it. Watch your parents use a computer, and you’ll suddenly understand. Unless you’re that one guy whose parents are technology mavens, I guess.

High Priest of App Design, at Home in Philly

More than 2,500 miles from Silicon Valley, in a small home office with a dog bed under the desk, sits a man on the cutting edge of the apps boom.

The Wall Street Journal on Loren Brichter

The man does amazing work, and he lives in my home town. Not much more to be said, though hopefully you can actually read the piece past The Wall Street Journal’s ridiculous paywall. And if you want to challenge me at Letterpress, my username is, no surprise, sanspoint.