
Sanspoint.

Essays on Technology and Culture

Don Norman on Wearable Devices

[T]he risk of disengagement is significant. And once Google allows third-party developers to provide applications, it loses control over the ways in which these will be used. Sebastian Thrun, who was in charge of Google’s experimental projects when Glass was conceived, told me that while he was on the project, he insisted that Glass provide only limited e-mail functionality, not a full e-mail system. Well, now that outside developers have their hands on it, guess what one of the first things they did with it was? Yup, full e-mail.

— Don Norman on Wearable Devices

This piece in the MIT Technology Review expresses a lot of the same misgivings I have about wearable technology, only far better than I can. There are valid use cases for some of us to have omnipresent data in our field of vision—even peripheral vision—but none of us need it there at all times. Prosthetic distraction has the potential to be our undoing, but I think there are enough people expressing legitimate skepticism of Google Glass, and wearable tech in general, that we can avoid many of the potential excesses and dangers.

Functional Constraints

The brilliant Matt Gemmell recently wrote about technological constraints and how they affect the devices we choose.

There are many factors to consider. Performance and power consumption. Size and weight. Noise and heat. Beauty, durability, and portability. Connectivity and upgradeability. Compatibility and of course cost… They’re all interrelated in various ways, forming a complex web of trade-offs.

There’s one constraint that Matt doesn’t touch on: what we can use a device for. It’s an easy one to overlook, because so many of our devices can do so many things. Steve Jobs may have described the original iPhone, arguably the device that codified the modern “smartphone,” as a phone, an iPod, and an “Internet communicator,” but really it’s a general purpose computing device that just happens to be able to make phone calls. At this point, the phone aspect of the smartphone is tertiary at best. The iPhone is functionally constrained by the software limitations Apple imposes on it, but there are ways around that. Elsewhere, Canonical has taken the general purpose computing aspect of the smartphone far enough that they’re raising money to sell a phone you can hook to a monitor and keyboard and use as a desktop computer. I can only imagine the technological constraints the Ubuntu designers have to work into the product to allow that.

In a larger sense, this is the dividing line between Apple products and those of its competitors. When you buy an Apple product, there’s no question that you’re giving up certain features that are taken for granted on other platforms: internal expansion slots on desktops, RAM upgrades and replaceable batteries on laptops, and the ability to install software from (almost) anywhere you choose on your phone and tablet. By making the choice to buy an Apple product, you’re deciding that these features are not important to you. It was certainly a factor I considered when I made the switch to Apple, as I was tired of fiddling in the guts of my computers from both a hardware and software standpoint.

The unwillingness of Microsoft to “compromise” on Windows 8 and their subsequent Surface tablet offerings has been their undoing. Stratechery’s analysis of tablets and smartphones as “thin clients” touches briefly on the problem. Windows 8, by welding the traditional desktop interface to the touch-oriented Metro UI, created a hybrid that fails to be either a compelling tablet or a compelling desktop. While a tablet may, eventually, become the one device for the average user, we’re not there yet, and half-steps like porting the “classic” Windows UI onto a touch device are only standing in the way. The form factor of tablets and a touch-based UI require creating constraints in UI and in functionality. A device that tries to do everything in an input-limited environment will run into issues.

In a way, this brings me to Jake Knapp, who “can’t handle infinity in [his] pocket,” so he turned off most of the things that make his smartphone smart. It’s an extreme way of dealing with a real problem. It’s the same motivation that drives Harry Marks to his typewriter, or Jonathan Franzen to take a saw to his laptop’s Ethernet port. These self-imposed constraints switch or modify our hyper-flexible tools into something limited, something we can ground ourselves in using without the fear of something popping up and distracting us from the task at hand.

What worries me is that disabling features is a power user move. Only someone who knows how their device works can think to turn off the features that distract them. I wouldn’t expect my father to think of turning off Safari on his iPhone, but it also wouldn’t benefit him. The market for uni-tasking devices is small. I remember, a few years back, pre-iPad, how a novelist friend of mine was gushing over the AlphaSmart, a portable word processor built for typing and nothing else. I wonder how well it sells. It’s not easy to market something with limited functionality to a gadget-hungry public, nerds and normals alike, on the proposition that it’s good for only one thing. The only exception may be e-readers, and tablets may already be subsuming that market.

Either users are going to adapt to the growing amount of functionality in their devices, or the market will shift towards devices that offer fewer features as users seek to become less overwhelmed. I expect it will be the former, rather than the latter. As stable as the market appears, it’s still early days. We don’t know where these devices will go, and whether they will adapt to us, or we to them. The cynic in me expects the status quo: users overwhelmed by functionality and a subset of power users who drop out. I hope I’m proven wrong.

Digital Epicureanism

As technology becomes more omnipresent in our lives, there is the natural backlash of those who feel that something valuable is being lost in the transition. Make no mistake, something is being lost—something is lost in every transition—but are we over-romanticizing what our lives were like before these changes? Almost certainly. We maintain a romantic notion of the past, even when presented with evidence to the contrary. Even Socrates bemoaned the concept of written language, claiming that writing “will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.”

That quote so often comes to mind when I read the works of digital ascetics, [1] who opt to disconnect as much as possible. I’m not about to argue the contrary position. I’ve written before about the benefits of skepticism, and the problem with naysayers. Yet, even I look at certain trends in technology, such as the insistence that wearable computers are the future, and moan. We don’t need ubiquitous computing and omnipresent information strapped to our bodies, and we don’t need displays in our peripheral vision at all times. Disconnection, even temporary, is anathema to the proponents of always-on technologies like Google Glass. [2] To provide contrast to the digital ascetics, let’s call this technology worldview digital hedonism. It seems an apt term for a philosophy that espouses constant exposure and connection.

There exists a happy medium between these philosophies. Perhaps we could call it digital epicureanism, though I fully expect someone with more than the cursory college education in philosophy I’ve had to call me out for misapplying the term “epicureanism.” Digital epicureanism is a philosophy of a judicious middle path, seeking to maximize the benefits of the technologies we use today, but with a cautious eye towards how these tools will affect our futures. I base this idea on the assumption that a tool is neither good nor bad—only its application can be given moral consideration. To put it another way: a hammer can be used to build a house, or it can be used to break bones. It all depends on the intent of the one holding it. The hammer is a neutral party.

It’s never a bad idea for us to evaluate technology on its own terms and decide for ourselves the role—or lack thereof—it should play in our lives. As technology changes, and as technology changes us, there is a place for those who eschew some of these changes, just to show us what we’ve given up in exchange. It is then up to us to decide if it was worth it. If you don’t feel the need for an Internet-enabled device to be on your person at all times, by all means give up your smartphone for a “dumbphone,” [3] or nerf your iPhone. As long as you’ve evaluated the value proposition and know that your life is not improved by it, you’re perfectly justified in your choice. Your choice, however, is not one that makes you superior to the rest of the masses who have decided they want these things. And vice-versa.


  1. No insult meant to J. D. Bentley.

  2. I know Google Glass isn’t always on, but that’s a limitation of battery life and performance, not of design.

  3. I wish we had a better term for cell phones that serve merely as devices for voice and TXT communication.

To Understand Technology

What does it mean to understand technology? More importantly, how much understanding of technology do we need in our lives? Knowing the fundamentals of how electrons move, and how transistors and logic gates operate, is useful, but does little to help us manage the myriad new ways technology has integrated itself into our lives. Just as you need not understand the chemical reactions occurring in the combustion chamber of your car’s engine to drive to work in the morning, you need not grasp the actual physics behind technology to live with it.

The best way to learn how to use a car is to develop control of it. Get behind the wheel, turn the ignition, figure out what the switches, pedals, dials, and gauges do, and drive it around for a while in a controlled environment. Except in the most extreme circumstances, a car won’t do anything it’s not told to do. It’s the same with a computer, a smartphone, or a hammer. That’s not to say these are as intuitive as a hammer. Many of us can intuit that the heavy part of the hammer is supposed to make contact with some other thing. Not so much when a tool’s interface consists of multiple parts that must be operated in a specific sequence—or, worse, is completely open-ended.

Consider the earliest computer interfaces that average people might have had to deal with: a cryptic line of text, and a blinking cursor (if that). Unless you know the commands, the computer will respond in a very non-cryptic fashion: “Bad command or file name.” The chances you’ll type a particularly dangerous command are very slim. Still, the requirement of pre-existing knowledge to use the computer’s interface—be it from a manual, or teaching—presents a huge obstacle. It’s intimidating to face down a command line with no knowledge, even if you “know” you’re unlikely to break the darn thing. This fear prevents the sort of controlled experimentation that lets us understand other tools and technologies. Even worse, as the familiar tools in our lives become computers in their own right, the learning curve and the fear become all the more common.
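For anyone who never sat in front of one, a session at an early command line went something like this (a hypothetical MS-DOS transcript; the mistyped command is invented):

  C:\> wordprocessor
  Bad command or file name

  C:\>

The machine tells you that you did something wrong, but nothing about what to do instead. Everything useful has to come from a manual, or from someone who already knows.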

But knowing how to use a thing, and understanding a thing are not the same. To return to the car, we understand that a car is a device that allows us to move ourselves, other people, and physical things from one point to another, at a decent speed. A computer is not so easily simplified in terms of what we can do with it, because increasingly the computer, and computerized devices, can do almost anything we desire. Once again, Douglas Adams beat me to the punch, summarizing the entire problem thusly: “First we thought the PC was a calculator. Then we found out how to turn numbers into letters with ASCII — and we thought it was a typewriter. Then we discovered graphics, and we thought it was a television. With the World Wide Web, we’ve realized it’s a brochure.”

Shortcuts like this allow us to think we have a grip on what these tools are, but the truth is much more nuanced. A computer, a smartphone, the Internet—these tools are whatever we want them to be, and to understand them, we need not only to know this, but to know how to make them what we want them to be. This should be the fundamental mission of any company that exists in this space: not to simply sell a product that people can use for a small number of purposes, but to sell a product that emboldens someone to do anything they can imagine. There are precious few companies out there that do this. All we can hope for, at best, is for a company to hand us the product with no preconceptions on how we’ll use it, but also no inspiration for us to use it as anything more than a calculator, typewriter, television, or brochure. At worst, we have companies that hand us the product with the promise it can be one thing—a telephone, perhaps—and then do even that poorly.

The reason so few companies give us technology products that inspire us to do more with them is that it’s more lucrative for them to limit our choices. Handing us an expensive handheld computer that locks us into a predefined experience (one created by a marketing person to reduce what we use the product for, sell partner content, and collect our personal information to resell) does not benefit us at all. Until one of the few companies that does inspire us to do more with our technologies entered the cellphone space, this is what we had to put up with—and many of us still do. We allow this to happen because the majority of us don’t understand the potential of what we hold in our hands every day, and until we do, the balance of power will forever tip against us.

A Statement of Purpose

Those who say that the world is moving too fast have no sense of history. This is doubly true for the world of technology. We’re in the midst of what seem like huge and drastic shifts in the ways we use technology and the ways technology can be used. I say seem, because I suspect these changes only look huge and drastic when you’re caught up in them. With time, and distance, something that seems as drastic as the Internet in your pocket is really just the next gradual step in an ongoing, iterative process. Or perhaps this really is a revolutionary time. I can’t claim one way or the other, because I, too, am caught up in the ripples from the impact. We cannot see how the developments happening now stand in relationship with everything else.

There’s so much that gets wrapped up in technology, partially because so much of the visible technological innovation is happening in what is called the “consumer space”. It’s to the benefit of the companies that produce these consumer technologies to tout them as breakthrough, revolutionary, and so forth. When you hear it enough, it’s easy to get caught up in the hype cycle. Our already limited vision and perspective are being skewed further by the marketers who benefit from a small wave looking like a tsunami. And there’s never enough time to come to grips with what we’re seeing before another announcement. There’s a race to be the first to do a thing, the first to market a thing, and the first to comment on how terrible the thing is compared to what came before.

I keep coming back to the idea of “time and distance”. We don’t have either for so much of what we do. If anything has truly sped up, it’s the communication cycle. Before the instant nature of always-on, high-speed Internet connections in our homes and in our pockets, media came with time and distance built in. Newspapers had to be written. Film had to be developed. Radio broadcasts had immediacy, but only if they could get power and a transmitter to where things were happening. Time allowed for things to settle, opinions to be fleshed out, and outcomes to become clearer. Maybe. Whenever you look back, there’s always the potential to romanticize the past—to see the high points, but not the low points. There is value in immediacy, but not in all things.

The dialogues happening in the public space around technology far too often veer into knee-jerk, product-centric flag waving. You’re either an Android user or an iOS user; iOS 7 either sucks or is amazing; and so on. The most in-depth discussion around technology in practice only comes in the wake of revealed abuses, such as the debate around the PRISM scandal, or particularly malicious forms of Digital Rights Management. Speaking as a geek, and an advocate of and for technology in our lives, these are the discussions we should be having more often, and earlier. These discussions do happen, but they are so often confined to academia and its analogues that ordinary people rarely even get to say their piece. We learn about the debate when a documented abuse explodes, when for a brief moment the door is opened and we can see the machinery. By then, it’s too late. It’s easier to stop a system before it starts. When something is in motion, there’s enough vested interest to keep it in motion.

But this goes beyond, far beyond, spying and spyware. These are questions that, in one way or another, are woven into the very fabric of technology. We need to take some time, and find some distance from the onslaught of small things that look big from forced perspective. The questions we need to ask read simply, but are hard to answer. There are three questions I think of when I think about technology: “How do we use it?”, “How do we think about it?”, and “How do we use it better?” The first is something we should ask of any technology we are given. It seems like a question of interface, but it’s more. Just as “design is how it works,” asking “How do we use technology?” is asking about the applications to which we put it. “How do we think about it?” is what we should ask when we seek to explore a technology’s role in our lives. Is it accepted? Unaccepted? By whom? Are the attitudes changing, and why? These questions ground us, and ground the technologies we use in our lives, by moving past the hype.

Finally, we must ask: “How can we use technology better?” The potential trapped in all of our tools can only be unlocked with awareness and experience. In your own life of interacting with technology, you have presumably had the “A-ha!” moment where you discover a time-saving feature in a piece of software, or stumble upon a neat thing you can do with a gadget. This question seeks to go beyond that, to understand how to bend the tools of technology to our own wills, take control, and better our lives and the lives of others. It requires time, patience, effort, and an understanding of technology that goes deeper than just “how it works”.

The answers will vary for each of us. They depend on our own needs and desires, the tools we have at hand, and the time we’re willing to expend. I see them asked in disparate places online and offline, but I don’t see them brought together often. The critiques of Google Glass in recent months have come close, but only close. These are the questions I plan to ask of myself and of the technologies in my life. I will put my answers here, alongside the new questions that will inevitably arise from the attempts. I know I won’t be alone.