The Minority Report Problem
Microsoft is apparently working on “no-touch” screens. It’s the latest salvo in the push toward a Minority Report-style touchless UI, and I remain deeply skeptical. A touch-free UI might be a good way to interact with a large display from a distance, but it’s very difficult to build a device that tracks hand movement while rejecting false input. An easy hack would be to require the user to wear some sort of bracelet or ring to trigger the tracking, but that has its own set of issues. You might get someone to do it at work, but nobody wants to put on a bracelet to change channels on their TV.
What really confuses me is the idea of touchless smartphone and tablet screens. What makes smartphones and tablets so inherently usable is the touch UI. Human beings are tactile creatures. Touching things is our primary way of interacting with the world, and that’s why the touchscreen UIs we have now feel so intuitive. A touchless tablet or smartphone interface removes that direct interaction and adds another layer of abstraction. It’ll be harder to learn, and for what benefit? No finger goo on your shiny smartphone? The touchless UI is the latest bit of sci-fi UI hotness we all want on our desks, but unlike Star Trek: The Next Generation’s touchscreens,[1] the Minority Report UI is of limited usefulness. If something is within reach and you can manipulate it directly, direct manipulation will win.
I’m not saying that touchscreens, as we know them, are the be-all and end-all of user interface design. What really excites me is incorporating haptics into touch devices. The latest episode of John Gruber’s The Talk Show, with guest Craig Hockenberry, touched on this, using the example of the old iPhone “slide to unlock” interface. If a glass touchscreen could simulate the texture of buttons, signal resistance as an object is dragged, and otherwise mimic a real physical interface, that would open up a ton of new possibilities. It would also be a boon for accessibility for the blind.
In a Twitter conversation, Joseph Rooks told me, “Figuring out why something can’t be done well is the easy part though. It’s more fun to imagine how it can be good.” He’s right, but there’s a pragmatic streak in me that keeps questioning the utility. There are problems a touch-free UI can solve, but they’re limited in scope, much like Google Glass-style wearable computing. As I said at the start, I can see this technology working, if not well, at least passably for large displays like TVs. A touch-free tablet or smartphone makes a great demo, but it’s a reach to imagine it solving any problem a normal person actually has.
- In truth, even the Star Trek all-touch UI has its problems. How do you control the ship when the power to the screens is down? ↩