Sanspoint.

Essays on Technology and Culture

Apple Watch and the Wearable Use Case

If you’re a regular reader, my skeptical stance on wearable computing should be no surprise. Of course, those pieces were about Google Glass, whose glow has long since faded. Now, the excitement is in the “smartwatch” space. It’s the new hotness, and I write this on the day Apple announced their entry into the game.

Like all good Apple fanboys, I watched the announcement of the new iPhones and the new Apple Watch, and was impressed. The Apple Watch looks to be a neat piece of kit, and combines a lot of technologies in an exciting way. It also looks pretty cool, though I was hoping for a round face. Problem is, the Apple Watch is just doing the same stuff most other products are doing in the smartwatch space. It’s doing them in a flashier, more integrated, and Apple-like way, but the main features are the same as most other smartwatches. It’s a second screen for your iPhone. It’s a fitness tracker. It’s an NFC device for Apple Pay. It can run apps. Great. That’s going to be enough for some people.

How many people, though? Even ignoring the $349 starting price.

A few days ago, on September 4th, I attended a panel on wearables, run by local tech group Digital Dumbo. Despite my skeptical stance on wearables, I figured it would be worth my time to attend, even if my worst fears came true and the whole thing was just a bunch of tech douchebags singing the praises of Google Glass. I was pleasantly surprised to find the discussion, which included Robert Genovese of Kenneth Cole, Dick Talens of Fitocracy and Pavlok (more on that later), and Gareth Price of digital agency Ready Set Rocket, was plenty questioning and skeptical.

Criticisms ranged from the problem that current wearables are just “slapping smartphone technology on someone’s wrist” to the evolving social norms around wearables. The meat of the panel, however, came around use cases for wearables. They came up short. Dick Talens was critical of fitness wearables as a behavior changer, and as a co-founder of Fitocracy, he has some insight into this. Data alone does not change behavior, unless you’re the sort of autodidactic nerd who eats up Quantified Self stuff. It’s interesting that Talens’s current endeavor, Pavlok, is a wearable that’s about changing habits… through electric shocks (and social pressure).

There are two questions that need to be answered with wearables. The first is what it means when we wear something, which was raised by Robert Genovese early in the panel. The second is what value it adds to someone’s life. When you look at the current state of wearables—including the Apple Watch—the answer to both questions is a little fuzzy. Apple’s “Digital Touch” feature uses the Watch’s “Taptic Engine” to communicate by sending touches, or by sharing your heartbeat. It’s the sort of touchy-feely—no pun intended—thing that you’d expect from Apple. In other words, Apple’s answer to the meaning of wearables is the human element, the smartwatch as an interpersonal device. It makes for a neat demo, but I don’t know how much it would be used in the real world. Maybe you have to try it to get it. As an answer to what wearables mean, it might be a success. As an answer to the value add of wearables over phones, I’m still skeptical.

It’s the value-add that we’re still going to be figuring out over the next few years. If I were a betting man, I’d be putting my money down on Apple to figure out the right value add, at least if you’re in their ecosystem, before anyone else. Back in June, I speculated on an “Adjacent Possible iWatch” that added smarts to the traditional watch form factor, in the vein of the Withings Activité. My theory was that it would execute supremely well on a small set of functions, possibly incorporating them into the body of an analog watch.

I was wrong. The Apple Watch is very much a full computer, with apps and an interface, and the whole shebang. The “Digital Crown” UI is interesting in the light of this thought from June:

[Apple] combine a lot of pre-existing technologies with a knack for aesthetics and UI that other companies miss, and they often do so in ways that seem painfully obvious in hindsight.

I wonder if the variety of built-in functions and the app ecosystem are Apple hedging its bets on what will be the value add of a wearable. Much like how there are a million watches to fit everyone’s aesthetics and needs, a more general purpose device does give the owner the ability to make it theirs and use the features that suit their lives. Dismissive as I was of the “Digital Touch” feature earlier, if I had the money, I’d buy an iPhone and Apple Watch for my girlfriend so we could feel each other’s heartbeats. Haptic feedback for walking directions would be wonderful too, as would some of the fitness features. I’d probably never bother with the photos and messaging stuff, though.

A few years from now, when the prices drop, the battery life improves, and the feature set grows, we will certainly be having a different discussion. Well, I hope we will, at least. I look at the pre-Apple Watch smartwatches and see devices that are trying to overreach in what they can do from a hardware, software, usability, and utility perspective alike. The Apple Watch, much like the original iPhone, is underreaching. It’s leaving headroom that will be filled in upcoming years with hardware and software updates. The best we can hope for from Android Wear and the other smartwatches is that they catch up to their potential. Apple’s likely to get to where they want to be first, if history is any guide. And, maybe, when they get there, I’ll put my Casio F–91W out to pasture and send someone my heartbeat.

Against Coveillance and Surveillance Alike

Over at Wired, Kevin Kelly is suggesting that we deal with surveillance, public and private alike, by giving in. And by spying back.

[O]ur central choice now is whether this surveillance is a secret, one-way panopticon — or a mutual, transparent kind of “coveillance” that involves watching the watchers. The first option is hell, the second redeemable.

Oh, if only it were that easy. And even if it were, it still wouldn't be worth doing.

It's not that easy because there's an imbalance of power between the spies, be they Facebook or the NSA, and us, the normal users. It's to the benefit of the spies that we not see the secret sauce that powers what they do. It's why social media companies bury the lede on what they do with and to our data in pages upon pages of legalese designed to obfuscate their intentions. When was the last time you read Facebook's Terms of Service Agreement? With the NSA, you have the additional problem of trust. Namely, the government will trust them before it trusts you. The NSA would happily operate in secret, transparency being a threat to national security.

Further complicating the mess is that the tools of surveillance are owned by the companies (and governments) that make it their business to spy on us. They won't open those tools up without a fight, unless you're an advertiser with a lot of money to spend. If there's one thing Facebook, Google, and the NSA have in common, it's not just that they want us to keep our mitts out of the gear they use to spy on us, but they have the same reason for doing so: to prevent competition.

Bruce Schneier was on top of this six years ago.

When your doctor says “take off your clothes,” it makes no sense for you to say, “You first, doc.” The two of you are not engaging in an interaction of equals.

It's perhaps true that, should we be able to overcome the Himalayan institutional obstacles preventing “coveillance,” we would be a society of equals. That still doesn't mean it's a society we would want to live in. A good parallel is the Cold War, and the threat of Mutually Assured Destruction from matched stockpiles of nuclear missiles. Neither side dared make the first move, knowing that retaliation would be swift and deadly.

There are so many reasons to keep a secret. It's easy when you're a wealthy, white, heterosexual male to be willing to expose yourself. Not everyone has that luxury. I think back to Mike Monteiro's “How Designers Destroyed the World” talk, and the way Facebook's design exposed a woman's sexuality to her parents, destroying her relationship with them. It was a relationship predicated on hiding an aspect of herself that she knew her parents would react against. Maybe she planned to come out, in time, but it was not Facebook's job to reveal it for her before she was ready.

And we all have skeletons in our closets that we don't want to share. Because we don't know how people will react, and because we do. Kevin Kelly's argument falls apart further when he tries to bring anthropology into it.

“For eons humans have lived in tribes and clans where every act was open and visible and there were no secrets. We evolved with constant co-monitoring. Contrary to our modern suspicions, there wouldn’t be a backlash against a circular world where we constantly spy on each other because we lived like this for a million years…”

I call bullshit.

And that call of bullshit is backed up by the narcissism of small differences. We see it everywhere, from the snark between Apple and Android fanboys, to the conflict in Ferguson, Missouri, to the Sunni/Shiite divide in Iraq. In small, tightly knit groups, deviance from the norm is a threat to group cohesion. In large groups, like cities, we get privacy, and the ability to fork new groups with our own norms and mores. The Internet is the biggest city, and every group on it behaves differently.

And those groups can be just as insular and unempathetic as the early tribes Kevin Kelly so romanticizes. When Kelly says “We’ve broadened our circle of empathy, from clan to race, race to species, and soon beyond that,” he's being naïve at best, disingenuous at worst. There's enough evidence to show that the Internet is making us less empathetic to people who aren't like us. At the very least, it's just given us new avenues to express that lack of empathy.

The imbalance of power that creates the surveillance state comes from the dichotomy of the public and private self that is a defining characteristic of humanity. We are not simply open or closed. There are a million subtle positions between sharing everything and sharing nothing. We share more of ourselves with our spouses than our friends, more with our friends than our co-workers, more with co-workers than our barista. Some things we keep to ourselves alone.

100% transparent or 100% opaque is a new development, a function pulled as much from the binary nature of computers as from technology teams who either don't care, or are paid not to care, about the degrees between those extremes. Instead of bending ourselves to the on or off, open or closed nature of our technology, we should bend the technology to be more like us.

There's an air of defeatism, too, in Kelly's argument. We are too far along, it says, on the road to the panopticon. Let's just go all the way, and make sure we have cameras to turn back on our observers. I reject the premise. None of this is written in stone. We made the technologies, we made the governments and corporations that spy on us. We can remake them as well. We are not, we are never, too late.

A Different Future of Our Home Screens

Over at The Verge, Joe Alicata is bemoaning the state of our home screens.

“Home screens come in many flavors. We have semi-customizable experiences from cable TV providers, and the more modern over-the-top streaming content boxes like Roku. The problem with both is the focus on the “app” rather than on the content. When I land on my Apple TV home screen, it’s mostly a grid of apps — there are precious few clues as to what I should actually watch. The interface certainly lacks any notion of what I might want to consume based on previous patterns.”

The idea of a content-focused home screen isn’t new with Joe. Amazon’s Kindle Fire devices provide a carousel of recent media and apps, though they don’t make suggestions as to what you might want to look at today. It’s an idea that makes sense on a dedicated content consumption device like a TV, or set-top box. Even on a more general device, be it smartphone, tablet, or computer, it should be easier than it is to view content somewhere else. “If I get a notification that a new piece of content is available and it happens to be a video, it should be easy to push that to my TV without launching the app and finding the funny-looking icon to cast it to my TV. If the TV is already on, shouldn’t it just show up as an option to watch?” I don’t own a TV, but it would be super useful to shoot a link from my iPhone or iPad to my MacBook, connected to a big display, if I want to watch some video content. Handoff in Yosemite and iOS 8 doesn’t do this, at least not yet.

Where it falls down is the idea that “[t]he home screen of the future needs to lead with content…” as a generalization. As I said earlier, this makes perfect sense for a content-consumption device. However, the idea of this kind of interface on a phone gives me the willies. I don’t need my phone buzzing me to check out this cool video a Twitter friend posted, or throwing suggestions out willy-nilly. Even on my iPad, I’d rather it give me the option to consume, or to create. The grid of icons with strange names does make it harder to find what we want or need, but it’s the best balance for devices that serve for consumption and creation tools. For now.

What would be more useful, on a smartphone in particular, is a home screen that adapts to a user based on context. If I’m at home, show me the apps and widgets that are about the things I do at home. For many people, that’s probably about media consumption, so that would be about stuff to control music, video, books, etc. Out and about, the home screen can focus on those apps and services we use while on the go: local search, transit directions, ways to keep in touch with friends. At the office, it’s all productivity and communication.
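As a rough sketch of that idea, here is how a context-aware launcher might pick an app set based on which geofence the user is standing in. Everything in it (the place names, coordinates, radii, and app lists) is a made-up illustration, not any platform’s actual API.

```python
# Hypothetical sketch: pick a home screen app set from the user's location.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Place:
    name: str
    lat: float
    lon: float
    radius_m: float  # geofence radius in meters

# Invented app sets per context
CONTEXT_APPS = {
    "home": ["Music", "Videos", "Books", "Remote"],
    "office": ["Mail", "Calendar", "Messages", "Documents"],
    "out": ["Maps", "Transit", "Search", "Messages"],
}

# Invented saved places (coordinates are arbitrary NYC points)
PLACES = [
    Place("home", 40.6892, -73.9857, 150),
    Place("office", 40.7580, -73.9855, 150),
]

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def home_screen_for(lat, lon):
    """Return the app set for whichever geofence contains the point."""
    for place in PLACES:
        if distance_m(lat, lon, place.lat, place.lon) <= place.radius_m:
            return CONTEXT_APPS[place.name]
    return CONTEXT_APPS["out"]  # default: out and about
```

A real implementation would lean on the platform’s region-monitoring APIs rather than polling coordinates, and would likely blend in time of day and calendar context too, but the core logic is just this kind of lookup.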

The technology is there to make a lot of this contextual stuff happen, and it’s less fuzzy than recommendation algorithms. Content consumption is a big part of what we use our gear for, but far from the only thing. Our most personal devices become far more useful, and far more personal, when they adapt to our needs and wants. Getting “relevant content” is nice. Doing relevant tasks, including consuming content, is better.

Let’s move this forward instead.

Let Me Go

As I strip down and simplify my online life, cutting out the services that aren’t doing anything for me, I’ve run across more than a few sites and apps that make it a complete pain to extract yourself from them. Not only is this not a new issue, but there’s a whole site, justdelete.me, dedicated to showing you how to delete your accounts on sites, and telling you how difficult it is. Any product with a yellow, red, or black banner on justdelete.me should be ashamed of itself.

I find sites that require you to email a customer service person to delete an account to be the most obnoxious. Whatever magic a customer service person has to do on the backend can be triggered by a button and a confirmation prompt on the front end. I should know—I deleted a number of accounts through the backend when I worked at a startup. Eventually they added a user-facing account deletion option. I don’t buy technical excuses about how user data is tied in across services. There are ways to remove a user and their data without destroying database integrity. It’s more work, true, but anything is more work than not allowing a user to delete their account.

Why make it so hard to delete accounts? My theory is that it’s about numbers and growth. If a company can point to a chart that says user accounts this quarter are x% higher than they were last quarter, they have a better chance of more funding. By making it harder to delete your account, the company makes sure that rate of growth stays high. Of course, the better measure of a service is active accounts. It’s much better to allow users who don’t want to be there to delete their accounts rather than go idle—your active percentage increases.

It should be as easy to quit a service as it is to join one. When a user wants out, they should be allowed to leave. So much effort is spent streamlining and improving the “onboarding” process for apps and services. The same amount of effort should also go into “offboarding.” It should be simple, painless, and—above all—permanent. No “shadow profiles” or retaining data forever, just the option to download all the data we put into the service, and then that’s it. Don’t even keep our email address on file to send messages trying to invite us back. (This means you, Carbonite.) When someone is walking to the door, just let them go. If you love your users, set them free.

Simplify My iPhone

While preparing for my Social Media Sabbatical, not only did I purge my iPhone of the apps for all my social media streams, I also took the time to nuke apps I either wasn’t using or that were simply redundant. With my Facebook account deactivated, I could no longer use Facebook Messenger, but also found myself unable to use TimeHop, an app that presents my social media posts for that day in the past. While it’s a nice nostalgia trip, keeping a journal will give me a better sense of where I was in the past than Tweets and Facebook statuses. I already keep one, both digitally and on paper. Deleted.

I also noticed a surfeit of fitness apps on my phone. There’s FitBit, myFitnessPal, RunKeeper, Couch-to-5K, and Fitocracy. FitBit was the first to go. Having lost two FitBit trackers in the space of six months, I decided to use the FitBit app’s MobileTrack feature to stay in their ecosystem. Yet step tracking seems to be the hot feature to add to apps, and before I knew it, several apps on my phone were tracking my motion data. When myFitnessPal added it, FitBit’s days were numbered. Why bother with two apps that do the same thing? It’s easier on my battery, at least. Deleted.

Tracking my exercise is a good way to not only see my progress, but motivate me to keep going. Many of these apps also integrate, so my runs in RunKeeper appear in myFitnessPal, and in Fitocracy. But why do they need to be in all these places? So people can applaud my efforts and support me? Maybe if people I knew actually used the darn apps, this would happen. In myFitnessPal, the only social fitness app I’m keeping, I have four friends. Of these, only one has touched the app within the last two weeks. I’m keeping it around, as it’s the best food tracker I’ve used. Couch-to-5K is staying until I finish the program. The rest? Deleted.

Then there’s the bevy of utility apps I wanted to keep on my phone “just in case” I needed them: PDFPen Scan+, DeGeo (to remove location data from my photos), a few photography apps such as Hueless (for black and white photos), my Google Voice client, and various apps I use for local services once in a blue moon. There’s no point in keeping them around, as I almost never have cause to use them. When I do, I’m just a few taps and a fingerprint (if that) from downloading them again. Deleted.

Enough, Patrick Rhone’s former podcast, had a recurring feature called “How Bare is Your Air?” where guests tried to see if they could live within the confines of a 64GB MacBook Air, paring their apps down to the bare essentials needed to get their work done. It’s useful to think this way about an iPhone, too. What’s the bare minimum I need to do my work? Thinking that way, I could also toss the various apps I use to replace the built-in ones: Mailbox, Fantastical, Dark Sky, SmartPlayer, Overcast… None of these are essential, but I do prefer them to the built-in apps.

Clearing out the redundancies and unused apps, however, frees space on my phone and in my head. It also makes my phone’s battery a lot happier. I’m not about to turn my iPhone back into a dumbphone, but clearing out the crap and cruft sure feels a lot better. Now, I need to do the same to my iPad, and to my Mac. I don’t think the effects of clearing out either will be as dramatic, but it will still feel good to be getting by with what I need instead of wishfully thinking about apps I should use. It’s better for our devices, and it’s better for our heads.