
Sanspoint.

Essays on Technology and Culture

The Context Aware Future

We have devices that know just about where we are. There are technologies being developed and deployed to know where we are inside buildings. Our smartphones have light sensors, microphones, fingerprint scanners, gyroscopes, and other ways to know about the environment they're in. Our pockets hold computers that can change interfaces on a whim, without the limitation of physical buttons. Add in the power of the sensor arrays on our devices, and I grow ever more convinced that the future of computing is going to be context-aware. We're already taking the first, tentative steps, with “stone knives and bearskins.”

Given the potential, the current state of context-aware computing leaves me wanting. I've already complained about the difficulty of precise location tracking in dense urban areas, but there are workarounds for that. Living in the Apple ecosystem puts me at a disadvantage here: it's too locked down for an app to silence my ringer, or switch out my home screen apps, when I connect to the office Wi-Fi. My phone knows when I'm in motion, but it doesn't know to shut off the ringer if I'm driving a car. I'm limited to geofenced notifications and whatever triggers I can get with IFTTT. There are so many other possibilities, but I know Apple won't implement them at the hardware and OS level until they can do it right by their own standards.

The other obstacle to context-aware computing is that it would either require a lot more setup, or take time for the device to learn a user's habits. We see this now with services like Google Now, which tracks your location and tries to learn your habits. It takes time for it to realize that you try to get to work at 9 AM, and that you take the subway both ways, except on Wednesdays, when you meet friends for coffee after work. Asking a user to provide all their locations and settings up front is just asking for trouble. More than half of smartphone owners don't have a passcode on their devices, according to Apple. Asking them to configure one home screen for the office and another for home is pushing it. Adaptive is the way to go, but I would hope any context-aware tool would have an option for direct setup.

Context-aware computing also provides the use case for wearables—to a point. Google Glass style computing would be overkill, but a smart watch that could provide context-based alerts and information would be useful, if done right. [1] I'm still not sold on a wearable device that's little more than a way for your other devices to know you're near and buzz when you get a notification, à la Craig Hockenberry's theoretical iRing. There are too many failure points: battery capacity, forgetfulness, and marketing, to name three. Something like a Fitbit Force, with Apple's attention to detail and integration, would pry money out of my wallet. (I'm also the kind of nutball who wears a watch every day.)

The dream for me is to have my phone be my outboard brain—reminding me to leave home early when the trains are backed up, that I made lunch reservations somewhere, that I need to pick up my dry cleaning when I get off the subway, and that it's time to turn off the screens half an hour before I go to bed. I want my devices to be smarter than me about the things where I am dumb. I want to be able to set up the requirements and then forget about them, unless there's a dramatic exception to my routine. I want all of this, and I want it done in a way that avoids notification fatigue.

Wishful thinking? Of course. But, we're so close now. Some improvements to the sensors in our phones, another iteration or two of battery technology, and a few good apps are all that we need to have truly context-aware computing in our pockets. The future is just over the horizon, and I hope enough people want to go there.


  1. And this won't be done right until we have low-power connectivity, attractive low-power displays, and high-capacity batteries that can fit in a watch. One out of three ain't good.  ↩

Location, Location, Location

Recently, I tried some more location-aware stuff on my iPhone: life-logging apps, context-sensitive notifications, automatic logging of when I get to work, that sort of thing. I love the idea in theory, but the implementation is rough in practice. I think I know why. Part of it is the inexactitude of cell phone GPS technology. Another is that many of the companies and individuals building location-aware apps live in Silicon Valley and other places with a lot less human density. Most cities in the world, let alone the United States, aren’t as dense as mine.

I live in New York City. I work in Midtown Manhattan. Measured on Google Maps, my morning coffee shop is within a 700-foot radius of my office. That sounds like a lot, but even the smallest geofence I can set up with IFTTT’s iOS location channel is large enough that it triggered a reminder to log my coffee intake while I sat at my desk. Three times. So, I turned that off. Looking into it, the smallest geofence range I can create in the iOS Reminders app is a radius of 328 feet. (IFTTT’s smallest range appears to be larger.) Combine that with the quirks of GPS data, and no wonder I’m seeing my Starbucks loyalty card on my lock screen when I sit at my desk. The geofence is just not narrow enough.
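
For what it’s worth, here’s a minimal sketch, in Swift, of the kind of Core Location geofence these apps set up under the hood. The coordinates are made up, it assumes an app that already has permission to monitor location, and that 328-foot floor works out to roughly 100 meters.

    import CoreLocation

    let manager = CLLocationManager()

    // Hypothetical coordinates for a Midtown coffee shop.
    let coffeeShop = CLLocationCoordinate2D(latitude: 40.7549, longitude: -73.9840)

    // 328 feet is roughly 100 meters. Core Location will accept a smaller
    // radius, but region monitoring leans on cell towers and Wi-Fi as much
    // as GPS, so anything much tighter is wishful thinking.
    let region = CLCircularRegion(center: coffeeShop,
                                  radius: 100, // meters
                                  identifier: "morning-coffee")
    region.notifyOnEntry = true
    region.notifyOnExit = false

    if CLLocationManager.isMonitoringAvailable(for: CLCircularRegion.self) {
        manager.startMonitoring(for: region)
    }

The radius is the only real knob a developer gets; how precisely the system resolves your position is entirely out of their hands.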

How much of this is a hardware limitation, and how much is my environment’s density being a special case, I’m not sure. For what it’s worth, I’ve been a geolocation edge case from the moment I moved to an apartment over a coffee shop in West Philadelphia. (Am I home? Am I getting coffee? Is there even much of a difference?) But, if we’re working towards a context-aware world, with our little GPS-laden phones at the center, it behooves the people making the technology to figure out how to pin someone’s location down better. Until then, over eight million people will be left out of the revolution. All because their phones don’t know if they’re at the office, or at the bar across the street.

App developers working in this space, or any space that relies on making generalizations about human behavior, need to think a little more about their potential audience. Not everyone drives a car, not everyone has a commute above the surface of the earth, and not everyone gets their coffee more than 1000 feet from their desk. For location-aware apps and services, being able to pick up on small differences in location means the difference between an app you can use and an app that just frustrates. It’s early days, and these are early adopter blues. Better to point the issues out now, before the normals get on board.

Don’t Host Your Own Email

Another high-profile case of a company poking around in a user’s email (Microsoft, in this case) has led to certain tech pundits espousing self-hosted email. Again. While they’re right, and email you host yourself is (mostly) immune from a third party accessing your data, there’s an intolerable air of arrogance around their idea of self-hosted email. What I read, when Ben Brooks writes about “owning” email is: “I have the time, money, and technical skill to administer an email server. If you don’t have all of those things, you’re getting what you deserve.”

Yes, Google crawls through your email to target ads. Yes, that sucks. Even if you “personally don’t even like emailing people who use Gmail,” you don’t get the right to act high and mighty just because you have time, money, and skills that the majority of people don’t have. Furthermore, unless you’re hosting your server in your own house, where you control all access, remote and physical, there’s still the possibility of someone getting at your email. Even Fastmail, with Australia’s laws protecting data, would probably cave if someone knocked on their data center door with a court order and a bunch of men with guns.

So, unless you are totally fine with your email being accessible to the government and the company hosting it, I suggest you go host it yourself.

But that’s the biggest problem. Self-hosted email is outside of the reach of most people. I have the chops to set up a basic email server and install a spam filter (or have MailRoute point to my server). What I don’t have is the money to spend on hardware and hosting, and the time to keep a mail server up to date with security patches and other administrative crap. If you expect your average Gmail user to “own” their email to the tune of a few hundred dollars up front, plus the monthly price of hosting, you expect too much. Suggesting, as Marco Arment does, that a user who handles truly sensitive data pay for a Fastmail account is more reasonable, and not nearly as condescending. (And why has Google allowed unencrypted connections for this long, again?)

There are problems with the arrangement around free email services, to be sure. I’m not happy about Google’s ad algorithms poking through my email, but it’s a trade-off I’m willing to make to not do it myself. I’m really not happy about the idea of my government poking through my email either, but I’m not going to blame Google for that. [1] We can address these issues, and educate people about what they’re giving up when they sign up for free email services, without the intolerable air of technological privilege. I suggest people like Ben Brooks try that before being smug about how secure their ivory towers are.


  1. See previous comment about men with court orders and guns.  ↩

The Minority Report Problem

Microsoft is apparently working on “no-touch” screens. It’s the latest salvo in the push towards the Minority Report sort of touchless UI, and I remain heavily skeptical. While a touch-free UI might be a good way to interact with a large display over a distance, it’s very difficult to have a device that tracks hand movement while ignoring false input. An easy hack solution would be to require the user to wear some sort of bracelet or ring to trigger the tracking, but that has its own set of issues. You might get someone to do it at work, but nobody wants to put on a bracelet to change channels on their TV.

What really confuses me is the idea of touchless smartphone and tablet screens. What makes smartphones and tablets so inherently usable is the touch UI. Human beings are tactile creatures. Touching things is our primary way of interacting with the world, and the touchscreen UIs we have now are extremely intuitive to people because of it. A touchless tablet/smartphone interface removes that direct interaction, and adds another layer of abstraction. It’ll be harder to learn, and for what benefit? No fingerspoo on your shiny smartphone? The touchless UI is the latest bit of sci-fi UI hotness we all want on our desks. Unlike Star Trek: The Next Generation’s touchscreens, [1] the Minority Report UI is of limited usefulness. If you can manipulate something directly, and you’re right next to it, direct manipulation will win.

I’m not saying that touchscreens, as we know them, are the end-all of user interface design. What really excites me is incorporating haptics into touch devices. The latest episode of John Gruber’s The Talk Show, with guest Craig Hockenberry, touched on this, using the example of the old iPhone “slide to unlock” interface. If there were a way for a glass touchscreen to simulate the texture of buttons, to signal resistance to an object being dragged, and otherwise mimic a real physical interface, that would open up a ton of new possibilities. It would be a boon for accessibility for the blind.
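
As a rough sketch of the gap: with the impact feedback APIs in UIKit, the best a developer can do is tap the whole device, not texture a spot on the glass. Here’s a minimal Swift sketch, with a hypothetical pan gesture and a made-up threshold, of hinting at resistance at the end of a slide-to-unlock-style drag.

    import UIKit

    // Assumes a view controller with a pan gesture attached to a draggable
    // "slider" view. The 200-point threshold is made up for illustration.
    final class SliderViewController: UIViewController {
        private let feedback = UIImpactFeedbackGenerator(style: .medium)
        private let unlockThreshold: CGFloat = 200
        private var hasClicked = false

        @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
            let dragDistance = gesture.translation(in: view).x

            switch gesture.state {
            case .began:
                hasClicked = false
                feedback.prepare() // warm up the haptic hardware to cut latency
            case .changed where dragDistance >= unlockThreshold && !hasClicked:
                hasClicked = true
                feedback.impactOccurred() // one "click" standing in for resistance
            default:
                break
            }
        }
    }

It’s a crude stand-in: the whole phone buzzes, not the spot under your finger, which is exactly the gap described above.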

In a Twitter conversation with Joseph Rooks, he said “Figuring out why something can’t be done well is the easy part though. It’s more fun to imagine how it can be good.” He’s right, but there’s a pragmatic streak in me that has me questioning the utility. There are problems that a touch-free UI can solve, but they’re limited in scope. It’s the same with Google Glass style wearable computing. As I mentioned at the start, I can see this technology working, if not well, at least passably, for large displays like TVs. A touch-free tablet or smartphone is a great demo, but it’s a reach to imagine it solving any problem a normal person would have.


  1. In truth, even the Star Trek all-touch UI has its problems. How do you control the ship when the power to the screens is down?  ↩

Shared Knowledge and the Old-School Web

I recently discovered Sean Korzdorfer’s Open Notebook, and it’s provided me with some interesting reading, and some thinking about the various workflows in my life. Sean puts a lot of thought into the systems he uses to make his life go, far more than I usually do. Among the nuggets of wisdom I’ve pulled from the Open Notebook are ways to better incorporate the awesome reminders tool Due.app into my day and ideas for using Daily Routine to add structure to my days; it’s also given me a lot to think about when it comes to journaling.

There’s something wonderfully old-school about the Open Notebook. It reminds me of the early days of the web, when a personal site was typically a huge mish-mash of stuff. In those wild days, your typical “home page” could as easily contain a proto-blog, links to useful freeware, recipes, and shrines to your favorite television programs. You don’t see that too often any more, and while Sean Korzdorfer’s Open Notebook is (slightly) more focused, it feels as deeply personal as those old personal pages from the 1990s—though better looking.

A while back, I commented on Frank Chimero’s post about what a personal website means in 2014. I wondered, like Frank, “How do you bring all of those silos and streams under one banner, one roof, and make it work?” Frank’s taken a very old-school, ’90s personal-site approach in the intervening months, with links to different sections, putting everything under the same roof, but organized and coherent—and with 2014-calibre web design.

What makes Sean Korzdorfer’s Open Notebook different is that it’s as much a resource as it is a personal statement. He’s sharing his knowledge of what works for him, in a way that allows strangers like me to build on what he has done. It’s the same openness and share-alike philosophy that made the early web so interesting. When the web was a frontier and not yet a catalog or a bunch of social apps, people built off of each other’s knowledge and experience. How many web designers and developers got their start from using “View Source” in Netscape 3.04?

None of this has gone away, but it’s not as common or as visible as it used to be. It’s probably time that changed.