The Context-Aware Future
We have devices that know just about where we are. There are technologies being developed and deployed to know where we are inside buildings. Our smartphones have light sensors, microphones, fingerprint scanners, gyroscopes, and other ways to know about the environment they're in. Our pockets hold computers that can change their interfaces on a whim, without the limitation of physical buttons. Add in the power of the sensor arrays on our devices, and I grow ever more convinced that the future of computing is going to be context-aware. We're already making the first, tentative steps, with “stone knives and bearskins.”
Given the potential, the current state of context-aware computing leaves me wanting. I've already complained about the difficulty of precise location tracking in dense urban areas, but there are workarounds for that. Living in the Apple ecosystem puts me at a disadvantage here, as it's too locked down for an app to silence my ringer or swap out my home screen apps when I connect to the office Wi-Fi. My phone knows when I'm in motion, but it doesn't know to shut off the ringer when I'm driving a car. I'm limited to geofenced notifications and whatever triggers I can get with IFTTT. There are so many other possibilities, but I know Apple won't implement them at the hardware and OS level until they can do it right by their standards.
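For what it's worth, geofenced notifications are about the only piece of this that iOS hands to apps today. Here's a minimal sketch of what one looks like using Core Location and the UserNotifications framework; the coordinates, the 100 m radius, and the notification text are placeholders I've invented for illustration, not anything a real app ships with.

```swift
import CoreLocation
import UserNotifications

// Permissions a geofenced notification needs; in a real app this belongs in
// an onboarding flow, not right before scheduling.
let locationManager = CLLocationManager()
locationManager.requestWhenInUseAuthorization()
UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound]) { _, _ in }

// A placeholder "office" geofence; the coordinates and radius are made up.
let officeCenter = CLLocationCoordinate2D(latitude: 40.7506, longitude: -73.9935)
let office = CLCircularRegion(center: officeCenter, radius: 100, identifier: "office")
office.notifyOnEntry = true
office.notifyOnExit = false

// A local notification that fires whenever the phone enters that region.
let content = UNMutableNotificationContent()
content.title = "At the office"
content.body = "Silence the ringer and switch to the work home screen."

let trigger = UNLocationNotificationTrigger(region: office, repeats: true)
let request = UNNotificationRequest(identifier: "office-arrival", content: content, trigger: trigger)

UNUserNotificationCenter.current().add(request) { error in
    if let error = error {
        print("Could not schedule geofenced reminder: \(error)")
    }
}
```

Note that all this can do is post an alert; actually silencing the ringer or rearranging the home screen is exactly the kind of thing the OS keeps for itself.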
The other obstacle to context-aware computing is that it would require a lot more setup, or it would take time for the device to learn a user's habits. We see this now with services like Google Now, which tracks your location and tries to learn your habits. It takes time for it to realize that you try to get to work at 9 AM and take the subway both ways, except on Wednesdays, when you meet friends for coffee after work. Asking a user to provide all their locations and settings up front is just asking for trouble. More than half of smartphone owners don't have a passcode on their devices, according to Apple. Asking them to configure one home screen for the office and another for home is pushing it. Adaptive is the way to go, but I would hope any context-aware tool would have an option for direct setup.
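To make "adaptive, with an option for direct setup" concrete, here's a toy sketch. It isn't any real product's algorithm; the names and the five-observation threshold are invented. The idea is just that a learned routine shouldn't be trusted until there's enough evidence, and until then the device should defer to whatever the user configured by hand.

```swift
import Foundation

// Toy model: learn a weekday departure habit, but fall back to a
// manually configured time until the habit is actually established.
struct CommuteModel {
    private var observedDepartures: [Int: [DateComponents]] = [:]  // weekday -> departure times seen
    var manualDeparture: DateComponents?                           // the "direct setup" value, if any

    mutating func record(departure time: DateComponents, weekday: Int) {
        observedDepartures[weekday, default: []].append(time)
    }

    func predictedDeparture(for weekday: Int) -> DateComponents? {
        let samples = observedDepartures[weekday] ?? []
        // Not enough evidence yet: defer to whatever the user set by hand.
        guard samples.count >= 5 else { return manualDeparture }
        // Naive "habit": the most frequently observed departure hour that weekday.
        let hours = samples.compactMap { $0.hour }
        let habitHour = Dictionary(grouping: hours, by: { $0 })
            .max(by: { $0.value.count < $1.value.count })?.key
        return habitHour.map { DateComponents(hour: $0) } ?? manualDeparture
    }
}
```

Record a departure each morning, and predictedDeparture(for:) keeps returning the manual value (or nothing) until the pattern is real, which is roughly the trade-off between setup burden and learning time described above.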
Context-aware computing also provides the use case for wearables—to a point. Google Glass-style computing would be overkill, but a smart watch that could provide context-based alerts and information would be useful, if done right.¹ I'm still not sold on a wearable device that's little more than a way for your other devices to know you're near and buzz when you get a notification, à la Craig Hockenberry's theoretical iRing. There are too many failure points: battery capacity, forgetfulness, and marketing, to name three. Something like a Fitbit Force, with Apple's attention to detail and integration, would pry money out of my wallet. (I'm also the kind of nutball who wears a watch every day.)
The dream for me is to have my phone be my outboard brain—reminding me to leave home early when the trains are backed up, that I made lunch reservations somewhere, that I need to pick up my dry cleaning when I get off the subway, and that it's time to turn off the screens half an hour before I go to bed. I want my devices to be smarter than me about the things where I'm dumb. I want to be able to set up the requirements and then forget about them, unless there's a dramatic exception to my routine. I want all of this, and I want it done in a way that avoids notification fatigue.
Wishful thinking? Of course. But we're so close now. Some improvements to the sensors in our phones, another iteration or two of battery technology, and a few good apps are all we need to have truly context-aware computing in our pockets. The future is just over the horizon, and I hope enough people want to go there.
¹ And this won't be done right until we have low-power connectivity, attractive low-power displays, and high-capacity batteries that can fit in a watch. One out of three ain't good.