It hasn’t been a month since I wrote up my iPhone fitness tracking setup, and it’s already been thrown into disarray. I don’t even have my Apple Watch yet. Back in late April, when I wrote about the apps I use to track my food and exercise, I had a system that was working fine. Then, Jawbone made changes to their API, things started breaking, step data stopped ending up where it should, and I decided it was time to throw it all out and start over. It also didn’t help that the clip for my Jawbone UP Move had deformed after four months of use. Frustrating.
There are two main problems with the state of fitness tracking on iOS, at least if you’re relying on third-party hardware and software. The first is that most applications do not use HealthKit effectively, if at all. I know HealthKit has had its fair share of flaws and problems since its release, but they’ve largely been ironed out. There’s no good reason for a fitness tracking app, like Jawbone or FitBit, not to read and write as much data as it can to HealthKit. In the case of Jawbone, their app can only read and write step and sleep data, but their API exposes a ton of useful data, including walking distance, active calories, and BMI. I used a third-party app to sync this data with HealthKit—the need for which indicates that someone on Jawbone’s software team is falling down on the job—but that also broke with the API change.
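To put a finer point on it: pushing a richer data type into HealthKit is not a big engineering lift. Here’s a minimal Swift sketch of what a tracker’s sync code could look like when writing a day’s walking distance. This is not Jawbone’s actual code—just the shape of the HealthKit calls involved, with made-up values:

```swift
import HealthKit

let healthStore = HKHealthStore()
let distanceType = HKQuantityType.quantityType(forIdentifier: .distanceWalkingRunning)!

// HealthKit requires explicit, per-type authorization from the user.
healthStore.requestAuthorization(toShare: [distanceType], read: nil) { authorized, _ in
    guard authorized else { return }

    // A hypothetical day's walking distance, as synced from a tracker.
    let distance = HKQuantity(unit: .meter(), doubleValue: 4_200)
    let end = Date()
    let start = end.addingTimeInterval(-86_400) // covering the past 24 hours

    let sample = HKQuantitySample(type: distanceType, quantity: distance,
                                  start: start, end: end)

    healthStore.save(sample) { success, error in
        if !success {
            print("HealthKit save failed: \(String(describing: error))")
        }
    }
}
```

Active calories and BMI are just different `HKQuantityType` identifiers; supporting them is more of the same.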
This means that the picture of my health that I was trying to formulate is now fractured. And it leads into the second problem with fitness tracking on iOS: the companies that make many fitness tracking products want to silo the data we feed them. FitBit is probably the worst offender, not making even a token gesture towards interoperability with the iOS fitness ecosystem, but other products have similar degrees of lockdown. It’s easier, I suppose, to build a business on data if you’re keeping the juiciest bits for yourself, but I thought these companies were in the business of selling gizmos, not my data. And that’s not even getting into the problem of apps that use the iPhone’s internal CoreMotion data for step counting instead of what’s in HealthKit, a problem I’ve chatted about with Jamie Phelps quite a bit on Twitter.
Elsewhere on iOS, several apps that track food—Lose It! and Lifesum come to mind—even have the audacity to lock fitness tracker integration to their premium, paid memberships. A decent standalone tracker, like the Jawbone UP, costs about $50, while a year of premium membership to these food tracking apps costs about the same. Suddenly, you’re paying $100 to see data that you’re generating and pipe it into another app full of data that you’re generating. This just seems unfair to me. I don’t begrudge food tracking apps and companies adding premium tiers—they gotta make money somehow. I just bristle at the idea of locking tracker integration behind that wall. Fortunately, my food tracking app of choice, MyFitnessPal, has kept tracker integration free. If only it still worked with my Jawbone UP…
The only way to stay sane is to stay first party, as much as possible. So, I’ve tossed my Jawbone aside, and am now using my iPhone as a fitness tracker until my Apple Watch shows up sometime in the next five to eight weeks. Inconvenient, but it looks like enough bugs have been ironed out of HealthKit that I don’t think it’ll start losing my steps again. I have a few apps I’m using to pick up the slack of what Jawbone did: Pillow for sleep tracking, Caffiend for caffeine tracking, FitPort for quick visualization, and an app called QS Access to let me pull my data out of HealthKit for… some future endeavor.
Which brings up the one thing I’ll miss about having a service like Jawbone. I had recently started using an IFTTT recipe and a script to get my daily activity data and save it into Day One. Now, there’s no sane way to get that data out of HealthKit and into Day One, or any other app—at least not yet. That’s the most frustrating part: I’ve given up putting my health data into one silo, only to start putting it into a different silo. The new one is (hopefully) less leaky, but it also means there are fewer ways to get that data back out. This is my data. I’m creating it, and I’m using it to improve my health. I should control it—not Jawbone, not UnderArmour, not even Apple.
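The reading side isn’t even the hard part. Here’s a rough Swift sketch of the kind of query a tool like QS Access has to run to pull a daily step total out of HealthKit, assuming read authorization has already been granted. What’s missing is any sanctioned way to run something like this automatically and hand the result off to Day One:

```swift
import HealthKit

let healthStore = HKHealthStore()
let stepType = HKQuantityType.quantityType(forIdentifier: .stepCount)!

// Bound the query to yesterday.
let calendar = Calendar.current
let startOfToday = calendar.startOfDay(for: Date())
let startOfYesterday = calendar.date(byAdding: .day, value: -1, to: startOfToday)!
let predicate = HKQuery.predicateForSamples(withStart: startOfYesterday,
                                            end: startOfToday,
                                            options: .strictStartDate)

// .cumulativeSum folds samples from every source (phone, watch, apps)
// into one total, without double-counting overlapping data.
let query = HKStatisticsQuery(quantityType: stepType,
                              quantitySamplePredicate: predicate,
                              options: .cumulativeSum) { _, statistics, _ in
    let steps = statistics?.sumQuantity()?.doubleValue(for: .count()) ?? 0
    // From here the number could go to a CSV, a journal entry, anywhere.
    print("Yesterday's steps: \(Int(steps))")
}

healthStore.execute(query)
```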
Well, I’m in for a $400 Nerd Edition Space Gray Sport with the black band. That’s pretty all-in, though.
I wanted to hold off for a year, convinced I needed to upgrade my iPad first, which would cost about the same. Then, on a whim, I decided to see if I could improve my iPad’s performance with a nuke and repave. It worked. My iPad 3 isn’t a speed demon, by any stretch, but it’s far more usable than it was before. Suddenly, I have a $500 surplus, and I know exactly what I want to waste it on.
So, why an Apple Watch? It comes down to three things:
Fitness Tracking
I wear a Jawbone UP Move and like it a lot. What I don’t like is how limited its integration with the iOS ecosystem is. Its HealthKit sync covers only step counts and sleep tracking—the latter of which is flaky. It’s clear that if I want to use iOS and get the best fitness tracking experience, it’s going to have to be either an Apple Watch or nothing. (Or just keeping my phone on my person, which is a pain.)
I’m also planning to get back into running for fitness. The ability to use the Watch as a running tracker appeals to me all by itself. It’ll be a while before I can use it on its own with, say, a Couch to 5K app, though, so it’s a good thing I still have my iPhone armband, which will be a good stopgap until real native apps are available.
The only issue I can see with an Apple Watch over my Jawbone is that I can’t do sleep tracking with it. I can use my iPhone to track sleep, but I’ve found having an alarm right next to my head in the mornings is ineffective. I don’t hit snooze—I turn it off and go right back to sleep. With Apple Watch, I can just plop it on its charger and have that wake me up instead, leaving the phone free to handle sleep tracking.
Contextual Computing
Smartwatches are the best expression of context-aware computing we have at the moment. I’ve seen a few Apple Watch owners on Twitter showing off the customized faces they use at different points in their day, and I love the idea. At the office, I can have a face that shows my work calendar, my activity goals, and the current weather. At home, I can switch to one that just shows the current time, with nothing else to distract me. If I’m out for a walk, I can switch to something optimized for the info I’ll need while out of the house.
And then there are glances: all the info I’d want to see, plus quick little actions I can get to and deal with in seconds, with (hopefully) fewer distractions and Social Media K-Holes than before. Running errands? Check the OmniFocus glance. Buying groceries? There’s my list, right when I glance at my wrist (I presume). [1] Need to see if the trains are screwed up? Glance.
If there’s one thing I loved about my Pebble, it’s just the power of looking down at my wrist and seeing little bits of contextually relevant information. To have that again, only in a fully-integrated manner, would be an incredible boon. Which leads to number three…
I Miss My Pebble
Not going to lie. I really miss having my Pebble on my wrist. Not enough to go back to it, mind, but I miss the ability to just shove my phone in a pocket and not have to dig it out for stuff like switching music or even just to see what notification I just got. I’ve got my notifications pared down pretty severely, but that only means that when my phone buzzes, it’s probably something important. If I can deal with some of them from my wrist, instead—I’m thinking Due timers especially—that’s one less excuse to pull out my phone and fiddle with it.
Apple Watch looks to excel at all the things where the Pebble failed for me. It’ll let me interact with notifications from my wrist, be a fully-integrated fitness tracker, and let me get relevant, glanceable information without having to pull my phone out of my pocket. I don’t want to have to switch modes just to switch my music, see how far I’ve walked, or just find out what someone messaged me.
I don’t need a smartwatch, let alone a $400 Apple Watch. I just want one. I have a use case for it, and it looks like it will fit my life and my needs well. I’m already comfortable wearing watches—even before I got a Pebble, I would switch between an analog Swiss Army watch and a Casio F-91W. A watch has a natural place in how I live my life, and I want to expand on that. It looks like Apple Watch is the best solution for that right now, so it seems right to dive in.
My neck is killing me. Too much time staring down at the glowing rectangle in my hand, or the glowing rectangles on my desk at the office. It makes it hard to get any work done in my off-hours, like writing. “Tech Neck” is a legitimate issue, though be careful when searching for it, as the term is also conflated with neck skin wrinkles caused by looking down into phones.
On top of the neck pain, my optometrist recently prescribed me special short-focus, blue-light-blocking lenses for when I’m using a computer screen—in other words, all of the time. The lenses block the high-frequency blue light that causes eye strain, and the short focal length helps with the strain as well. They have helped, and I think they’re even helping me sleep better—that is, when I remember to use them at home.
I do my best to make sure my ergonomic situation isn’t too dire. Though I’m still a seated-desk holdout, everything is placed in a fairly ergonomically sound position, and my chair is at the right height. That’s, of course, at home. The less said about my work setup, the better. Still, it could always be worse. I think about Phoenix Perry, a game designer and activist, who spoke of a four-year struggle with severe carpal tunnel syndrome at the recent Facets conference. The pull quote I took away from her discussion was this:
The user should never be forced to conform their body to an interface.
So many of the tools we use on a daily basis are designed with functionality as the primary focus, not ergonomics. The exception to this is, of course, Apple—who designs many of their peripherals to look better than they work, as Phoenix related in an anecdote about meeting the designer of the Magic Mouse. [1] These tools should adjust to us, and how we use them—not the other way around. Okay, perhaps we also shouldn’t be walking along city streets, head down into the screen in our hands, but it’s a problem that extends far beyond handheld devices.
I suppose one of the advantages of the Thinner and Lighter Movement is that it makes arranging and rearranging the devices and accessories we use for maximum comfort easier. As for phones, will an Apple Watch or Moto 360 help reduce Tech Neck by cutting down on constant looking down to check a phone during the day? One can hope, though one can also hope that we won’t need a $350 accessory to protect our physical health from the $600 device in our pockets.
Tech companies need to start looking into harm mitigation. A good place to start might be the blue light issue. If they can put a coating on my glasses to keep the bad blue light from getting into my eyes, why can’t they coat the screens with it too? Okay, it’s pricey, but Apple’s margins are big enough to make it work without forcing them to raise the price of a MacBook or an iPad. I’m sure Tim Cook can spin it on stage: “A lot of people love using their iPads in bed. This new screen coating helps keep blue light from robbing iPad users of a good night’s sleep.” Hopefully that’ll keep iPad numbers on an even keel through Q4.
Until then, it’s blue-blocking computer glasses, regular breaks, the occasional Tylenol, and grumbling about why we let our tools hurt us. Something’s gotta give before my neck does.
One of what I hope are the overarching themes in my writing on technology is the relationship we have with it. It seems like a beat that is surprisingly untrodden, at least from where I’m standing. Most technology journalism and other writing I see in my circles only touches lightly on our relationship with technology, usually in the context of a new gadget category, or the default (if righteous) outrage over governments and private companies alike collecting our data for various nefarious purposes. It’s either “Should I buy this?” or “You should be outraged!” In the end, it so often just feels like talking about technology for its own sake.
This came into sharp focus for me while attending the recent Facets conference in Brooklyn. Facets had a strong focus on the hard tech side of things—programming, machine learning, personal privacy, and other heady topics—but it approached them from an interdisciplinary perspective, bringing in art and education, so that even a Comp Sci fail-out like me could understand and be a part of the discussion. My time at Facets felt like exploring some mysterious parallel universe of thinking about technology, one I only knew about through theory and inference. I want to go back and live there, because the main ways of thinking about technology in the universe I live in seem small and meaningless now.
I’ve griped before about how discussion of the business side of technology drives me batty, but that’s only one aspect of most tech discussion I find increasingly infuriating. A theme that ran through so many of the panels at Facets can best be summarized as “People are more important than things.” It came up first in a discussion panel on Technology as Art & Digital Curation. When the history of technology is presented, it’s often in a way that emphasizes the thing—the computer, the operating system, the network—over the people who made it, and the people whose lives it changed. And when people are emphasized, of course, it’s usually just the Great White Men who Run Companies, and we hear enough of that narrative. [1]
Which leads to another wonderful part of Facets: the diversity of the panels, with a focus on underrepresented groups in technology—women, African-Americans, LGBT people—as well as people who work at the intersection of technology and art, not just technical practitioners. Instead of “learn to code” to get a job, panels spoke about the School for Poetic Computation, and New Inc, the first incubator space led by a museum, with a focus on art over commerce. There was discussion of technology-focused activism around espionage and data collection more nuanced than “Put a Snowden On It,” of selfies as identity politics and performative art, and a really cool live demo of using machine learning to create new electronic music instruments.
And, again, throughout each panel and discussion, the focus was on the human element: the hacker and maker to be sure, but also the artist, the citizenry, and the community—in tech and beyond it. Technology is no longer a space that is all to itself. It’s infiltrating every part of our lives, in one way or another, sometimes for the better, sometimes for the worse. By breaking discussion of technology out of its echo chamber with a multi-disciplinary approach to the tools and what can be done with them, we can foster a larger, more robust, and more exciting discussion of technology that isn’t just the same boring things over and over.
I left the conference wondering why this sort of discussion is so rare in the larger space around technology. I’m sick, tired, and just plain bored of breathless excitement over the latest and greatest consumer gadget. I’m also sick, tired, and just plain bored of breathless anger over the latest and greatest consumer gadget. It gets us nowhere, and I’m as guilty of this as anyone else. There’s a bigger picture, a bigger story to all of this that gets lost in focusing on just the gizmos, the gadgets, the UI, and the huge numbers funding it all. I’m not saying there’s anything wrong with occasionally looking at the tech world from that level, but right now I feel like it’s missing the forest for the fourth leaf on the middle-bottom right branch of the thirty-seventh tree to the east-southeast.
A question raised in the discussion: “How many books do we need on Steve Jobs, anyway?” ↩
Apple’s Q2 finances have come in. They have more money than God, apparently, so let us speak no more of it. The days of Apple being doomed to financial ruin are over, though there is something to scream about, I suppose: iPad numbers are flat to declining. But this isn’t about market share, or profits, or even whether iPads are better than the tablet competition. It’s about whether tablets as a product—at least in mid-2015—are a worthwhile computing platform. Tablet sales, as a whole, are on the decline. Some say it’s because the refresh cycle for tablets is longer than it is for phones, and there may be something to that, but I think it’s because the case for people to own a tablet hasn’t been fleshed out.
While folks like Federico Viticci can use the iPad as their primary computing device, he’s an outlier, and will be for a while to come. The problem is that the tablet, as a form factor, is being squeezed from both sides: bigger smartphones that can do all the things a modern tablet can do, and thinner, lighter laptops that can do all the things a modern tablet can do—and more. [1] It’s an ignoble fate, but somewhat obvious in retrospect, for a device that was pitched as being in between the smartphone and the PC.
The smartphone offers better portability than a tablet, even if you’re talking a 7" tablet and a 5.5" (ugh) phablet, along with always-on connectivity baked in without an extra data plan. A laptop offers desktop-class software, a real keyboard that might not run out of battery before the computer does, and a familiar user experience. I cannot overstate the importance of that last feature. Many of the tablet success stories are about kids and the elderly, groups who have not had a lifetime of experience with the traditional PC UI, be it Mac or Windows. Tablet UIs are simpler, and often easier, but do not underestimate familiarity—and laziness. If I’m already at my personal computer, why would I grab a second device to do something I could do right where I am? Even my parents, who are of the age where an iPad would be a perfect computer for them, opt to use their MacBook Air instead. When I visited them for the holidays, their iPad had been relegated to playing streaming radio in the living room.
Until someone finds a specific use case where a tablet is the better tool for the computing tasks most people do, it’s doomed to languish in this narrowing gap. I bought my first (and, so far, only) iPad in 2013. While I’m using it now to write this essay, I don’t use it for much beyond the occasional bit of long-form mobile writing with an external keyboard, and reading RSS feeds and Instapaper in the mornings over breakfast. The iPad has never wormed its way into my computing life as a better tool for what I want to do, only a different one. I don’t travel much, but when I do, I often take my iPad, because it’s easier to travel with than my Giant 15" MacBook Pro of Doom—I keep that tethered to a giant display on my desk. If I could get by with a single-port MacBook as my primary computer, I could see my iPad going off to Gazelle for good, and not being replaced.
The tablet is quickly becoming the computer of the gaps. Something needs to happen to break it free, but what that is, I don’t know. It has to take advantage of the form factor in a way that cannot be replicated on the smartphone, and do it better than a small, ultra-light laptop. Side-by-side apps won’t be enough. Haptic keyboards won’t be enough. Thin, light, and long battery life will only go so far unless someone has a reason to reach for their tablet over their personal computer. There are reasons for some people, but not for everyone. Only time will tell in that regard.
I’ll ignore the laptop-tablet hybrids like the Surface, for now. I’ve not seen any evidence that significant numbers of people want them, but if anyone can prove me wrong, I’d like to hear it. ↩