Sanspoint.

Essays on Technology and Culture

Six Months With Apple Watch

One of the difficulties I’ve found in writing about smartwatches is that there’s no good answer about what they are for. There’s plenty that they can do: fitness tracking, notification triage, glanceable information, contextual computing, showing the time—but none of these stands above any of the others. It’s a conundrum I had when I tried using and writing about the Pebble, and it continues to be a conundrum six months into life with the Apple Watch.

That’s not to say I don’t like my Apple Watch. I still wear it every day, unlike John Gruber. At the bare minimum, it’s the best fitness tracker I’ve ever owned, though I haven’t had the same level of success with it as Jim Dalrymple. Still, the fitness features alone are enough to keep it on my wrist seven days a week. Filling the rings is still insanely motivating, the workout tracking is great for my daily walks, and Sleep++ fills in the feature missing from the Fitbits and Jawbones I’ve used in the past. That’s huge.

Everything else? I’m not sure. There’s still a lot of friction to using the Watch to do things that I used to do on my phone. Sometimes, that friction is because using the phone is ingrained in my muscle memory. Other times, it’s because the Watch is a clunky experience thanks to slow software. I’ve ameliorated some of the frustration around app slowness by turning on “Resume to Previous Activity” for the Wake Screen in Settings. This means that, if I want to do something on my Watch, I can launch the app, lower my wrist, and bring it back up a minute or two later without losing my place. I sometimes forget to go back to the watch face, but that’s less inconvenient than launching an app, checking my watch later, and finding myself back on the watch face.

An ongoing project with my Watch, particularly since the release of watchOS 2, has been determining exactly what information I keep on my watch face and in my Glances. What information do I need to see when I turn to my wrist, versus what do I want quick access to, but not immediately on my wrist? I’ve yet to settle on any particular setup, but I want to reduce the redundancy between Complications and Glances. I’m currently using a Modular face with Complications for Fantastical (showing the date), Sleep++, Streaks, Lose It!, and Dark Sky. I also have a Utility face set up with just Sleep++, Streaks, and Dark Sky for when I need something less busy. Finally, I have an X-Large face for when all I need is the time. As for Glances: I’m using Battery, Settings, an app for NYC subway status, Now Playing, Due, Things, Activity, Lose It!, and the Heart Rate monitor. I keep a few more apps installed, but rarely use them. Some stuff just… doesn’t need to be on the wrist, I think.

I am making a concerted effort to use the Watch more—especially Siri. Siri is great in the kitchen for setting timers, provided you give it a little slop time. I’m making sure to track more of my walks as workouts, and I’m trying to train myself to use the controls in the Now Playing glance when I need to control my music, instead of whipping out my phone again. Did you know that if you’re playing audio from an app that also has a Watch app (Overcast, for example), tapping the scrolling title will take you to that app on the Watch? I didn’t until just recently. If there’s been a theme for the last six months of using my Watch, it’s figuring out what I want to get out of it, and dropping what it’s bad at. Six months from now, a year from now, who knows?

I know there’s plenty I don’t use the Watch for, though. It’s not a great communication device, mostly because I don’t know enough people with both iPhones and Apple Watches to use some of the more interesting communication features. I’ve used quick replies and dictation over iMessage from time to time, but never more than once or twice a month. I’ve only used Digital Touch and Sketches once or twice with some Internet friends who own watches. And since my bank doesn’t support Apple Pay yet, about the only time I ever hit the side button on my Watch is by mistake, or to force quit a stuck app. (Hold the side button until the power screen comes up, let go, and then hold it again until the app quits.) It might as well not even be there. Maybe watchOS 3 will let me customize what I use it for.

Six months in, though, the biggest issue I have with my Apple Watch is speed. Even if it came at the cost of some battery life, I would love it if the Watch were just more responsive when using apps and Glances. The watchOS 2 release has helped a bit in that regard, but there is still a lot of room for improvement. Beyond speed, I think the biggest obstacle Apple and the other companies in the smartwatch space have right now is creating a compelling enough use case for someone to not just buy a smartwatch, but keep wearing it. A smartwatch needs to add something to a person’s life beyond just another place to check for more information. That’s nice, but it feels like the Watch can and should do more, and do it with less effort on my part. The perils of early adoption, I suppose.

Apple’s UIs are Flawed, but They’re Not Unusable

It’s no secret that the interface redesigns of iOS 7 and OS X 10.10 have been divisive among certain groups of computer users. Two years into the transition, people are still complaining. Case in point—a recent piece by Don Norman and Bruce Tognazzini in Fast Company that claims “Apple is Giving Design a Bad Name”. They lay out the argument up front:

The products, especially those built on iOS, Apple’s operating system for mobile devices, no longer follow the well-known, well-established principles of design that Apple developed several decades ago. These principles, based on experimental science as well as common sense, opened up the power of computing to several generations, establishing Apple’s well-deserved reputation for understandability and ease of use. Alas, Apple has abandoned many of these principles.

I don’t agree with this sentiment, at least not completely. I’m particularly amused by the historical claim that Apple based its UI design on “experimental science as well as common sense.” Anyone who remembers dragging a floppy disk icon to the trash can to eject the disk can tell you that was a UI decision far from “common sense”—especially if you were coming from a PC in the mid-’90s. Nearly all computer interfaces are unintuitive from the get-go. There’s a quote that floats around, attributed to open source programmer Bruce Ediger, that “[t]he only ‘intuitive’ interface is the nipple. After that it’s all learned.” It’s apocryphal, but Ediger did coin a useful variation:

There is no intuitive interface, not even the nipple. It’s all learned.

And, look, I’ll be the first person to admit that I’ve turned on a bunch of accessibility settings on my iOS devices: “Button Shapes” and “Reduce Motion” on my iPhone, plus “Reduce Transparency” on my iPad—though that one’s more for performance. There are serious UI issues and quirks, especially in iOS’s Music app. (Insert another plug for Cesium here.) Plenty about the iOS and OS X designs demands to be fixed, but are they bad enough to be unusable? I doubt it.

The fundamentals of iOS and OS X have not changed since their initial releases. If you know how to use the original iPhone or the original Macintosh, you can get up to speed on their modern equivalents pretty quickly. The hardest adaptation might be the auto-hiding scrollbars on the Mac—a legitimate usability issue. Take a look at the iOS ecosystem now, compared to how it was in 2012, when iOS 6 came out. There were two sizes of iPad, with the same addressable pixel dimensions, and two sizes of iPhone, one with a taller screen than the other. Both could only display a single app at a time, and scaling an app for the taller iPhone wasn’t much of a resource challenge, even for graphically rich user interfaces. iOS 7 planted the seeds for a more diverse iOS ecosystem. In 2015, we have three sizes of phone screen, and two sizes of iPad screen that can display two different apps at two different sizes. You can’t keep the skeuomorphic design of iOS 6—which had its own usability quirks—and support displays of that many sizes. Something had to give.

When Jony Ive spoke from his magic white room during the iOS 7 WWDC keynote, he mentioned that the typographical navigation was inspired by the web. The idea is that people in 2013 knew how to navigate through links, which are usually set off by color on a web page. Carrying that idea into a computer interface doesn’t seem like a bad one. Of course, links on the web are often marked by underlines, too, which is something iOS doesn’t do—and I can hear Jakob Nielsen screaming from here. The new UIs can be refined, but as long as the fundamentals remain the same, we’re doing okay.

Then there’s Norman and Tognazzini’s complaints against gestural interfaces:

[W]hen Apple moved to gestural-based interfaces with the first iPhone, followed by its tablets, it deliberately and consciously threw out many of the key Apple principles. No more discoverability, no more recoverability, just the barest remnants of feedback. Why? Not because this was to be a gestural interface, but because Apple simultaneously made a radical move toward visual simplicity and elegance at the expense of learnability, usability, and productivity.

Pish. Tosh. What are the important controls iOS hides? The most important things an iOS user can do are launch apps and quit apps. These are prominent, up front, and obvious to even a toddler. Again, all things are learned. Your average user can navigate iOS just fine for the most part—textual links getting lost aside. Slide to unlock. Tap passcode. Tap app. Hit the home button to leave the app. For 90% of users, 90% of the time, this is all they’ll need to do. It doesn’t take long for someone new to mobile OSes to grasp the fundamentals, because the interfaces involve direct manipulation of the elements. It’s why kids pick up on tablets faster than their parents. Things behave in a (generally) predictable fashion. If you do something seemingly unpredictable, like sliding down on your iOS home screen and getting a search box, it’s replicable.

And the controls iOS does hide are for advanced features that most people don’t need to fiddle with. Most people work with computers, whether through a traditional mouse/trackpad-and-keyboard UI or a gestural, iOS-style UI, using a task-based approach. They learn the steps to complete a task, and those steps, if they lead to a successful completion, are how they will continue to work. You can show them a faster way, you can show them your preferred way, but the steps they choose make logical sense to them. There shouldn’t be just one way to do anything in an interface—and this is something iOS gets wrong in more than a few places. There should be an intuitive way that a user can figure out on their own, and there should be a faster way a user can discover once they have the basics down. For the most part, iOS nails this.

There are ways in which iOS can improve. I’m with Norman and Tognazzini about font weights in iOS, for example. It is possible to get lost, and certain apps (coughMusiccough) are almost inscrutable mazes of UI complexity and confusing menus. But the situation is not as dire as they think. Remember that the Macintosh UI did not spring, fully formed, from the forehead of Steve Jobs like Athena from the head of Zeus. It drew from Xerox’s research, which in turn drew from Douglas Engelbart’s Mother of All Demos. But gestural interfaces? They’re still new, and we’re still figuring them out. As long as people are able to get the basics down, the rest will come in time. That goes for designers as well as users.

The iPad of 2015 is the Mac of 1990

The release of the iPad Pro has rekindled the endless debate that has plagued Apple discussion over the last five years. No, not the “Is Apple doomed?” debate. That one is older than five years. It’s the “Can you do real work on an iPad?” debate. We’ve gone around in circles on this. One side cites the limits of the iPad hardware, the limits of iOS, and the limits of the App Store as arguments against doing “real” work. The other side cites the expansion of “power user” features in iOS, the massive computing power of the latest generation of iPads, and the few standouts who get by with an iPad as their primary computer. And then another major iteration happens, and we all start writing think-pieces again.

Seriously, if I see another tweet about how the iPad is only useful for writing 30,000 word iPad reviews, I will scream.

I have to draw comparisons to the original Macintosh. When it dropped in 1984, the attitude among many tech people was that it was a toy, not something you could do real work on. Macs had limited software support, no PC compatibility, a tiny black-and-white display, no command line, no multitasking… I’m only a fairly recent Mac convert, having switched in 2005, but I recall this “Macs are toys” attitude persisting among PC users into the 2000s, long after many of these issues had been resolved. The tide might have turned around OS X 10.3, which came out in 2003. In other words, it took nearly twenty years for attitudes to change and for the Macintosh to be deemed worthy of “real” work.

The iPad as a platform is five years old; iOS is eight years old. They’ve changed a lot in that time span, and not just visually. There are still limitations to both, but a lot fewer than there were even a year ago. There were professionals using Macintoshes to do all their work in the 1990s, but they were a rare breed. We’re approaching the iPad equivalent of the 1990s now, and the iPad Pro is the equivalent of the beloved Macintosh II. Will it be enough to turn the tide and make iPads the computer for everyone?

No. At least, not yet.

Ten years from now, the iPad and iOS will have another decade of development under their belts. The limitations that make the platform unsuitable for whatever you do that makes you stick to a Mac will almost certainly be gone by then. Likely, they’ll be gone sooner. Ask yourself if you could do all your work on a Macintosh II, or even a Mac 512K. The answer is probably going to be no, but that’s fine—they don’t make those anymore. Now ask yourself if you think you’ll be able to do all your work on the iPad of 2025. The answer to that is almost certainly yes. We just have to wait until then.

Rip It Up and Start Again?

I’ve been struck by a thought lately when I look at my setup—it’s too much. I want to rip it up: sell my Mac, my iPhone, my iPad, my Watch. I’d use the proceeds to buy a used ThinkPad or some other inexpensive laptop, install Linux, get a cheap Android phone, and live with the bare minimum I need to do what I need to do with technology in my life. I’d use free software and simple, lightweight tools that do one thing well, and get maximum efficiency out of inexpensive, commodity hardware.

Then I realize that, though I’m sure Linux has improved a lot in the decade since I switched to the Mac, and Android has come a long way, the sheer time cost of starting over is not going to be worth it. Then I start to worry about the sunk cost fallacy, and whether the only reason I’m not rage-simplifying is that I’ve sunk too much money into my Apple setup. And believe me, I’ve sunk a lot of money into hardware and software for the Apple ecosystem over the years.

Next, I realize that my idealized world of simple computing on simple hardware isn’t as simple as I think. I rely heavily on synchronization services, not just for the usual suspects—contacts, calendars, email—but for passwords, notes, photos, tasks, even backups of my digital life. While there are cross-platform options for all of these, I run into the problem of trust. I’m not a fan of Dropbox, at least not since Condoleezza Rice joined their board, and I worry about what Google is doing with my data, so I’m trying to cut ties with both of them where I can.

In terms of trust and reliability, iCloud is the best option for the majority of what I need to keep with me. Of course, to keep using iCloud, I have to stay within the Apple ecosystem. Sure, I could use the iCloud web client, but that’s not a solution. I could roll my own synchronization setup, I suppose, but that is also not a solution. I don’t want or need the hassle of managing a server and storage, even with BitTorrent Sync. I’ve had conversations with Nick Wynja about his struggles with this, and honestly, I’d rather just satisfice on something that meets my standards for trust and reliability. iCloud it is, and so we come full circle.

It’s not that there’s anything wrong with my Mac, my iPhone, and my Watch—though my iPad 3 is long in the tooth and doesn’t get any of the cool iOS 9 features that would make iPad ownership more useful. I’m overwhelmed by what I’m using because I allowed it to get this way. Rather than rip everything up and start again, maybe I should just sit down and selectively whittle away at all the accumulated cruft I’m using, or rather, not using. This is the sort of thing Patrick Rhone was on about back in the days of Minimal Mac. It might behoove me to go back and give the book another read.

iPad Pro and How We Get to the Tablet-Based Future

Well, the Apple announcement has come and gone, and my predictions for the iPad were almost completely off. There’s a stylus, but it’s a fancy one, and there’s no handwriting recognition, at least not yet. In fairness, rumors had suggested that adding Force Touch to iPad-sized screens was running into yield issues, so I could claim I wasn’t wrong, it just hasn’t happened yet. I’ll fall on my sword here, though. Regardless, the new iPad Pro does establish Apple’s direction for the iPad line, and it’s the first big leap toward tablets replacing the traditional PC for most tasks.

There’s a lot of chatter from people in my circles about how they can’t do their work on an iPad, or an iPad isn’t suited to the work they do. They’re not wrong! The modern iPad, even the iPad Pro, isn’t the right tool for a lot of people. But it’s getting there. To figure out how we get to a tablet-based future, it helps if we take a step back and see how we got to where we are.

In 2015, every traditional personal computer on the market—including the Macintosh—traces its lineage to the original IBM PC. I’m oversimplifying here, especially since the Macintosh had its own strange evolutionary track, but even the sleekest, tiniest laptop carries the legacy of those 8088-based tanks. We’re talking about machines that were optimized for keyboards, text displays, minimal wired networking, spinning disks, and all sorts of other things. The intervening thirty-four years have added a lot on top of the original personal computer architecture, but that legacy support is still there.

The iPad, though it builds off of the iPhone, is a fundamentally new way of computing, built from scratch to take advantage of touch input, solid-state storage, ubiquitous wireless communication, and rich graphical interfaces. At launch, it jettisoned a ton of the legacy of the traditional personal computer. Now, we’re seeing Apple add in a number of features we associate with modern desktop computing: keyboard shortcuts, multitasking, increased connectivity, and access to the file system—sort of, via the iCloud Drive app. It’s easier, and more effective, to add and adapt the best features of modern desktop personal computing to the tablet paradigm than the other way around (cf. Windows 8).

Between 1990 and 2010, the tablet evolved from a clunky, modified PC with all the attendant baggage, to a sleek, touch-optimized slab of glass with limited functionality, executed well. In the last five years, those limitations have been slowly whittled away. There’s enough processing power in modern tablets to rival desktop computers, owing in no small part to jettisoning a lot of the legacy overhead of the traditional personal computer. The next step is going to be leveraging that power to create tools optimized for the tablet. And this… well, this is the tricky bit.

Creating high-quality, professional applications is hard and expensive. Tablet computing applications don’t sell well, particularly at high, professional prices. With the long upgrade cycle in tablets, many tablet owners are still running devices that lack the power and capabilities needed to do high-performance computing tasks. So, we keep using our traditional personal computers, offloading passive, low-power content consumption tasks to our tablets. Without a compelling cause to upgrade, people muddle through, nobody makes groundbreaking apps, and the tablet future is continually deferred.

The next two years are going to be very interesting for the tablet space. The iPad is leading the way in device power and capability, and iOS 9 is the first version of the OS that takes greater advantage of the iPad’s form factor. I don’t think Android tablets will take too long to catch up, at least on the software side. Consider the Apple and Google dichotomy when it comes to hardware and software: Apple believes in Smart Glass with a Dumb Cloud, while Google believes in Dumb Glass with a Smart Cloud. Benedict Evans coined this idea in terms of smartphones, but it could lead to a very interesting tablet arms race by 2017. If that happens, we could hit a tipping point where tablet sales pick up, increasing the market, and increasing the incentive to make great, tablet-optimized software.

The further out we look, the greater the unknowns, of course. As technology becomes more deeply integrated into our lives, the legacy of the personal computer architecture, hardware and software alike, is going to weigh us down. We’re ready for a change in the way we do things with technology in our lives. It’s not going to happen all at once, of course. The pieces are only just being put into place now. The next decade is going to be fascinating as we figure it all out, and the more I look at it, the more I feel that the tablet will be at the center of our computing world.