Recently, at a birthday gathering for my girlfriend’s mother, there was awed discussion about one guest’s three-year-old grandson and his intuitive grasp of a tablet computer. It got me thinking a bit, and while I have precious little grounding in some of what I’ll be touching on, the kid’s understanding makes more sense than we might think at first. After all, touch is how we interact with the real world. When it comes to technology, touch-based computing is still in its infancy, but by removing the indirect manipulations of a mouse, it flattens the learning curve of an interface by quite a lot.
Case in point: have you ever had to teach someone how to use a mouse? The difference between a left-click and a right-click? When to double-click and when not to? How to click and drag? The directness of a touch UI is apparent in a situation like this. “How do I play Angry Birds?” Just touch the little picture of the angry bird. Boom. What could be simpler? Perhaps direct thought control or Star Trek-style voice recognition, but we’re still a long, long way from that.
The mouse, as an interface, is a vestigial remnant of the transitional period between the days when computing was a keyboard-based affair and the near future of semi-ubiquitous touch computing. Make no mistake, there are places where a mouse, or mouse-like interface, is preferable to pure touch, such as a large-screened desktop computer. To complete Steve Jobs’s metaphor, mice and trackpads may be akin to the massive stick-shifts on trucks. This sort of thing is why Microsoft’s previous attempts at touch computing failed: relying on a stylus as a mouse surrogate only emphasized the flaws inherent in using a mouse-based UI in a touch-oriented environment.
A good touch interface is easy to understand because it reacts as you would expect when using it. If you slide a piece of paper in front of you on your desk with a finger, scrolling a web page on a tablet isn’t a huge leap. The direct control and near-immediate response of a touch interface mimic what we know from the real world. I’ll only use the “skeuomorphism” buzzword once, but part of the reason behind a skeuomorphic interface on a touch device is to drive those real-world analogues home. It matches our patterns of how things work. To move to the next page of a book, you swipe, much as you would if you were reading a real book.
Children are tactile creatures. They’ll stick their hands into anything. For several summers as a young teenager, I had the opportunity to work with children in a computer lab at day camp. The younger children, with little exposure to computers, would poke at screens, and this was in the mid-’90s, when the pinnacle of touch-based interfaces was the Apple Newton or the original Palm Pilot. As they learned how a computer works, that the buttons on the screen have to be clicked by the proxy of a small white arrow controlled by an odd plastic lump on a cord, they would poke the screen less.
This presents a sort of chicken-and-egg problem: do the visual UI elements that emulate real-world counterparts make young users think that a button on screen should be pressed with a finger, or does experience with real buttons make a young user think that? Considering how abstracted the traditional touch UI on tablets is, I’m going to say it’s the former. Whether little roundrects on an iPad screen, square icons on a Kindle Fire, or the varied shapes on an Android home screen, kids (and tech-unsavvy parents) think to touch them.
Compare this to the Old Way of Doing Things, when to do something on a computer, you had to type arcane commands at a cold, unforgiving prompt. It wasn’t intuitive; it was frightening, and it’s why the Macintosh was “the computer for the rest of us.” The nerds of the day disdained it as a toy, but it wasn’t for them. The Macintosh of 1984 was the iPad of 2010: a first step on the journey to a new way of computing that worked for more of us. The technology wasn’t exactly new, but it presented a different, easier-to-understand abstraction of what occurred under the surface.
And, really, everything we do on a computer of any sort is working with abstractions. Unless you’re working in Assembler on a vintage mainframe, you’re working in abstractions. A file, whether you’re typing out its name, double-clicking it, or tapping it with a finger, is an abstraction of how data is actually stored at the hardware level. The code a programmer writes is abstracted away from the actual op-codes sent to a processor. As we develop new, better abstractions, we can improve how we use the technology in front of us, make it easier and better.
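To make the layering concrete, here’s a minimal sketch in Python (my choice of language for illustration; the specific calls are just one example of the stack). The same handful of bytes can be reached through a buffered, text-aware file object or through a raw file descriptor closer to the system call, and both layers are still abstractions over blocks, controllers, and drive firmware.

```python
import os
import tempfile

# The "file" we name and tap is an abstraction; here we touch two of
# its layers. Neither layer is the disk itself.
path = os.path.join(tempfile.mkdtemp(), "note.txt")

# High-level abstraction: a buffered file object with text semantics.
with open(path, "w") as f:
    f.write("hello")

# One layer down: a raw file descriptor and bytes. No buffering, no
# text encoding -- closer to the system call, still not the hardware.
fd = os.open(path, os.O_RDONLY)
raw = os.read(fd, 1024)
os.close(fd)

print(raw)  # b'hello'
```

Same data, two abstractions, and the operating system is doing the same dance underneath both.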
When a three-year-old child can pick up a piece of hardware and intuit how to make it do what he wants, even if it’s just playing Angry Birds, that’s a sign that we’re on the right track. I guarantee that if he had to use a mouse or a stylus, he’d be at an instant disadvantage. Because touch is how he knows and understands the world, touch is how he assumes he can interact with technology. Until only a few years ago, that wouldn’t have been the case.
Twice, counting the quoted use. ↩
Whether this works on a functional or aesthetic level is clearly up for debate. ↩
Even that drew heavily from the desktop metaphor that still dominates desktop computing. Hence the stylus, though that was also a limitation of the touch technology of the day: resistive screens. ↩
Older kids with more exposure to technology would also poke screens, but only to show where something was to me. They knew it wouldn’t actually do anything. ↩
The s-word ones. ↩
Actually, even assembly code is an abstraction. You could be flipping bits. ↩