

Omnipresent Data, and the Wearable Computing Use Case

After my piece on Google Glass went out, I got into an interesting discussion on App.Net with William Kujawa and Max Jacobson. William suggested that further iterations of wearable computing would avoid the obstacles to mainstream adoption, while Max offered some use cases that might help mainstream the concept. I’m still unconvinced, and it comes down to one thing. The defining ideal of wearable computing is omnipresent data. Few people have any need for such a thing, and that is where the problem lies.

No wearable computer is capable of providing truly omnipresent data, unless you want to wear a belt made of batteries, but it’s reasonable to assume that in a couple of generations, we’ll have Google Glass-esque hardware that can last for twelve hours of normal use. That’s twelve hours a day with a display in your vision, and twelve hours a day of notifications and data you can’t avoid unless you physically remove the device. Glued as we are to our smartphones, we can always shove them in our pockets or bags. Glasses, by design, are not temporary things. They go on your face, and they stay there, though I don’t take mine into the shower.

As long as this thing is on your face, you’re seeing, sending, and receiving data. You have omnipresent data in your face at all times, but outside of certain specialized tasks, you don’t need data in your face at all times. That’s where the Google Glass concept, and wearable computing in general, falls down. Nobody, least of all Google, has provided the ordinary user with a compelling use case for something that does what Google Glass does. Fundamentally, Google Glass is a general-purpose computing device, stuck to your head, with a display that’s constantly in your vision. The data will change depending on which apps you have installed and running, but it’s still data, and it’s still in your face.

Harry C. Marks’s experience with Google Now got me thinking. He notes that Google Now is mostly untapped potential, considering all the data Google collects on us. Still, some consider Google Now to be the “killer app” of Glass, popping up notifications as we need them, based on what we’re doing. I think there’s a subtle but important difference between activity-based notifications on our smartphones (or maybe our “smart watches”) and activity-based notifications in our face. By way of example, check out this video of a water-based stop sign in Sydney, designed to keep too-tall trucks from smashing into a tunnel entrance, and tell me you don’t jump when it pops up.

A smartphone beeping, vibrating in your pocket, or lighting up in your peripheral vision is jarring, but it’s not the same as having an alert in your face. It’s easier to ignore something on the periphery, by choice or otherwise. It takes a physical act to see, acknowledge, and choose to act on the alert presented. That flaw becomes a strength in a specialized environment, where a heads-up, in-your-face alert means a faster reaction. I can see a case for, say, a paramedic on patrol wearing a Google Glass-like device to be notified about incoming calls. Outside of that space, how many of us need up-to-the-minute data about anything, wherever we may be or whatever we may be doing?