There are many factors to consider. Performance and power consumption. Size and weight. Noise and heat. Beauty, durability, and portability. Connectivity and upgradeability. Compatibility and of course cost… They’re all interrelated in various ways, forming a complex web of trade-offs.
There’s one constraint that Matt doesn’t touch on: what we can use it for. It’s an easy one to overlook, because so many of our devices can do so many things. Steve Jobs may have described the original iPhone, arguably the device that codified the modern “smartphone,” as a phone, an iPod, and an “Internet communicator,” but really it’s a general purpose computing device that just happens to be able to make phone calls. At this point, the phone aspect of the smartphone is tertiary at best. The iPhone is functionally constrained by the software limitations Apple imposes on it, but there are ways around that. Elsewhere, Canonical has taken the general purpose computing aspect of the smartphone far enough that they’re raising money to sell a phone you can hook up to a monitor and keyboard and use as a desktop computer. I can only imagine the technological constraints the Ubuntu designers have to work into the product to allow that.
In a larger sense, this is the dividing line between Apple products and those of its competitors. When you buy an Apple product, there’s no question that you’re giving up certain features that are taken for granted on other platforms: internal expansion slots on desktops, RAM upgrades and replaceable batteries on laptops, and the ability to install software from (almost) anywhere you choose on your phone and tablet. By making the choice to buy an Apple product, you’re deciding that these features are not important to you. It was certainly a factor I considered when I made the switch to Apple, as I was tired of fiddling in the guts of my computers from both a hardware and software standpoint.
The unwillingness of Microsoft to “compromise” on Windows 8 and their subsequent Surface tablet offerings has been their undoing. Stratechery’s analysis of tablets and smartphones as “thin clients” touches briefly on the problem. Windows 8, by welding the traditional desktop interface to the touch-oriented Metro UI, created a hybrid that fails at being both a compelling tablet and a compelling desktop. While a tablet may, eventually, become the one device for the average user, we’re not there yet, and half-steps like porting the “classic” Windows UI onto a touch device are only standing in the way. The form factor of tablets and a touch-based UI require creating constraints in UI and in functionality. A device that tries to do everything in an input-limited environment will run into issues.
In a way, this brings me to Jake Knapp, who “can’t handle infinity in [his] pocket,” so he turned off most of the things that make his smartphone smart. It’s an extreme way of dealing with a real problem. It’s the same motivation that drives Harry Marks to his typewriter, or Jonathan Franzen to take a saw to his laptop’s Ethernet port. These self-imposed constraints switch or modify our hyper-flexible tools into something limited, something that grounds us in the task at hand without the fear of something popping up to distract us.
What worries me is that disabling features is a power user move. Only someone who knows how their device works can think to turn off the features that distract them. I wouldn’t expect my father to think of turning off Safari on his iPhone, but it also wouldn’t benefit him. The market for uni-tasking devices is small. I remember a few years back, pre-iPad, how a novelist friend of mine was gushing over the AlphaSmart, a portable word processor built for typing and nothing else. I wonder how well it sells. It’s not easy to market something with limited functionality to a gadget-hungry public, nerds and normals alike, with the proposition that it’s good for only one thing. The only exception may be e-readers, and tablets may already be subsuming that market.
Either users are going to adapt to the growing amount of functionality in their devices, or the market will shift towards devices that offer fewer features as users seek to become less overwhelmed. I expect it will be the former, rather than the latter. As stable as the market appears, it’s still early days. We don’t know where these devices will go, and whether they will adapt to us, or we to them. The cynic in me expects the status quo: users overwhelmed by functionality and a subset of power users who drop out. I hope I’m proven wrong.
As technology becomes more omnipresent in our lives, there is the natural backlash of those who feel that something valuable is being lost in the transition. Make no mistake, something is being lost—something is lost in every transition—but are we over-romanticizing what our lives were like before these changes? Almost certainly. We maintain a romantic notion of the past, even when presented with evidence to the contrary. Even Socrates bemoaned the concept of written language, claiming that writing “will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.”
That quote so often comes to mind when I read the works of digital ascetics, [1] who opt to disconnect as much as possible. I’m not about to argue the contrary position. I’ve written before about the benefits of skepticism, and the problem with naysayers. Yet, even I look at certain trends in technology, such as the insistence that wearable computers are the future, and moan. We don’t need ubiquitous computing and omnipresent information strapped to our bodies, and we don’t need displays in our peripheral vision at all times. Disconnection, even temporary, is anathema to the proponents of always-on technologies like Google Glass. [2] To provide contrast to the digital ascetics, let’s call this technology worldview digital hedonism. It seems an apt term for a philosophy that espouses constant exposure and connection.
There exists a happy medium between these philosophies. Perhaps we could call it digital epicureanism, though I fully expect someone with more than the cursory college education in philosophy I’ve had to call me out for misapplying the term “epicureanism.” Digital epicureanism is a philosophy of a judicious middle path, seeking to maximize the benefits of the technologies we use today, but with a cautious eye towards how these tools will affect our futures. I base this idea on the assumption that a tool is neither good nor bad—only its application can be given moral consideration. To put it another way: a hammer can be used to build a house, or it can be used to break bones. It all depends on the intent of the one holding it. The hammer is a neutral party.
It’s never a bad idea for us to evaluate technology on its terms and decide for ourselves the role—or lack thereof—it should play in our lives. As technology changes, and as technology changes us, there is a place for those who eschew some of these changes, just to show us what we’ve given up in exchange. It is then up to us to decide if it was worth it. If you don’t feel the need for an Internet-enabled device to be on your person at all times, by all means give up your smartphone for a “dumbphone,” [3] or nerf your iPhone. As long as you’ve evaluated the value proposition and know that your life is not improved by it, your choice is perfectly valid. Your choice, however, is not one that makes you superior to the rest of the masses who have decided they want these things. And vice-versa.
What does it mean to understand technology? More importantly, how much understanding of technology do we need in our lives? Knowing the fundamentals of how electrons move, and how transistors and logic gates operate, is useful, but does little to help us understand how to manage the myriad new ways technology has integrated itself into our lives. Just as you need not understand the chemical reactions occurring in the combustion chamber of your car’s engine to drive to work in the morning, you need not grasp the actual physics behind technology to live with it.
The best way to learn how to use a car is to develop control of it. Get behind the wheel, turn the ignition, figure out what the switches, pedals, dials, and gauges do, and drive it around for a while in a controlled environment. Except in the most extreme circumstances, a car won’t do anything it’s not told to do. It’s the same with a computer, a smartphone, or a hammer. That’s not to say these are as intuitive as a hammer. Many of us can intuit that the heavy part of the hammer is supposed to make contact with some other thing. Not so much when a tool’s interface consists of multiple parts that must be operated in a specific sequence—or, worse, is completely open-ended.
Consider the earliest computer interfaces that average people might have had to deal with: a cryptic line of text, and a blinking cursor (if that). Unless you know the commands, the computer will respond in a very non-cryptic fashion: “Bad command or file name.” The chances you’ll type a particularly dangerous command are very slim. Still, the requirement of pre-existing knowledge to use the computer’s interface—be it from a manual or a teacher—presents a huge obstacle. It’s intimidating to face down a command line with no knowledge, even if you “know” you’re unlikely to break the darn thing. This fear prevents the sort of controlled experimentation that lets us understand other tools and technologies. Even worse, as the familiar tools in our lives become computers in their own right, the learning curve and the fear become all the more common.
But knowing how to use a thing, and understanding a thing are not the same. To return to the car, we understand that a car is a device that allows us to move ourselves, other people, and physical things from one point to another, at a decent speed. A computer is not so easily simplified in terms of what we can do with it, because increasingly the computer, and computerized devices, can do almost anything we desire. Once again, Douglas Adams beat me to the punch, summarizing the entire problem thusly: “First we thought the PC was a calculator. Then we found out how to turn numbers into letters with ASCII — and we thought it was a typewriter. Then we discovered graphics, and we thought it was a television. With the World Wide Web, we’ve realized it’s a brochure.”
Shortcuts like this allow us to think we have a grip on what these tools are, but the truth is much more nuanced. A computer, a smartphone, the Internet—these tools are whatever we want them to be, and to understand them, we need to not only know this, but know how to make them what we want them to be. This should be the fundamental mission of any company that exists in this space: not to simply sell a product that people can use for a small number of purposes, but to sell a product that emboldens someone to do anything they can imagine. There are precious few companies out there that do this. All we can hope for, at best, is for a company to hand us the product with no preconceptions about how we’ll use it, but also no inspiration for us to use it as anything more than a calculator, typewriter, television, or brochure. At worst, we have companies that hand us a product with the promise that it can be one thing—a telephone, perhaps—and then deliver it poorly.
The reason so few companies give us technology products that inspire us to do more with them is that it’s more lucrative for them to limit our choices. Handing us an expensive handheld computer that locks us into a predefined experience created by a marketing person to reduce what we use the product for, sell partner content, and collect our personal information to resell does not benefit us at all. Until one of the few companies that does inspire us to do more with our technologies entered the cellphone space, this is what we had to put up with—and many of us still do. We allow this to happen because the majority of us don’t understand the potential of what we hold in our hands every day, and until we do, the balance of power will forever tip against us.
I recently came across an article by Kevin Morris on DailyDot about how the Wikimedia Commons has become a hub of amateur pornography. [1] It’s better if you read the piece yourself, but to summarize the summary: people have been posting porn to the Commons, and any attempt to regulate it by the larger Wikimedia organization has been rebuffed by the Commons’ own leadership. While I have no problem with pornography in general, the piece struck me as illustrating an interesting dichotomy in the conceptualization of “free speech” on the Internet, and the spaces for it. The goal of Wikimedia Commons is to be a repository of freely licensed images of value to the educational role of Wikimedia, which is an admirable endeavor. This also means that, by any measure, using it as a host for amateur pornography is mission creep, to say the least.
However, there’s a larger discussion to be had about the organizational structures that surround large Internet communities, including Wikimedia and its projects. There is a considerable organizational structure within Wikimedia. Though any person can edit anything (with certain exceptions), some users are given the power to be administrators, locking down controversial articles, establishing editorial guidelines, and more. There are also “Bureaucrats,” who appoint administrators and exert greater control over a project’s mission. A Commons user with the screen name Russavia, who supports the mass of porn on the Commons, is one of those elected “Bureaucrats,” and has a lot of support from other Commons users—enough that Jimmy Wales, the “God-King” of Wikimedia, has no control or say in the project. It’s politics, Internet-style, pure and simple, and it all comes down to a sense of what the “mission” of the project is.
By all accounts, Wikimedia as a whole is a very Libertarian (with a capital “L”) endeavor. They have established a baseline set of guidelines for what can, and cannot, be done, and allowed extreme freedom within them. There’s no room on Wikitravel for an article about rock music, except in the context of famous rock clubs in a city. On the other hand, there’s no room in Wikipedia for a crowdsourced guide to the best rock clubs in that city. The people who have taken an interest in these non-Commons Wikimedia projects have an interest in building something with a specific mission in mind, and if you want to do something else, they’ll nudge you towards where it belongs. It seems that the Commons, for some reason, has attracted the “small-l” libertarian camp, for whom the rules of the system are suspect—and who have leveraged those rules to pull off a coup.
This is an example of how not to do community moderation. By any measure, at some point in the Wikimedia Commons’ past, there was an inflection point where pornography was becoming an increasingly large part of the content generated. Someone, somewhere, either didn’t seize the moment or tried to seize it too late. The end result is the mess you see before you. However, I can understand how it happened. In a community of geeks and by geeks, and particularly geeks with a bent towards libertarian attitudes, it’s often better to take a soft approach to addressing issues of inappropriate content. Further muddying the waters is that the line between educational materials and pornography can be fuzzy. Ask any thirteen-year-old boy in the pre-Internet age who found a National Geographic magazine with pictures of topless women, or the “What’s Happening to My Body?” Book for Girls in the library.
At some point, however, it becomes clear that users are posting explicit material less for its educational value than for its own sake. After a certain point, you’ve learned all you can about human sexual anatomy from still images or short videos. The questions are: how does one handle an abuse of the service, and do the structures in place allow users with authority to take control of the situation? Without being involved, I can only guess, but the answer to the latter appears to be “no.” To paraphrase Douglas Adams, “A common mistake that people make when trying to design rules to keep people from being assholes is to underestimate the ingenuity of complete assholes.” [2]
When we create rules and organizational structures, in real life and online, we tend to assume that everyone involved will be a rational actor—or at least as rational as the people creating those rules and structures. In doing so, we too often fail to plan for the irrational people who seek to bend, fold, spindle, and mutilate those rules and structures for their own gain. Let alone those who just want to watch the world burn. It’s what leads to political corruption, nepotism, gerrymandering, and embezzlement in real political structures, and what leads to “small-l” libertarian coups in the digital world.
One of the great things about the Internet, and its democratizing effect on communication, is how easy and inexpensive it is to create your own Utopian Paradise (no matter your sociopolitical leanings) and invite your like-minded friends. However, there’s no glory as a rabble-rouser in taking your ball and going “home.” I expect that as online communities become more commonplace, we’ll see both heavier-handed moderation and revolts and revolutions in the style of Wikimedia Commons. The communities that succeed will have to strike a balance between guiding their members towards a defined and common goal and still allowing autonomy and a voice to the dissenting—with an open door through which those who want to tear it all down can be shunted out. The details are in the implementation, of course.
[1] The story contains no actual porn, but does feature some graphic language and descriptions that may be frowned upon in the workplace.
Those who say that the world is moving too fast have no sense of history. This is doubly true for the world of technology. We’re in the midst of what seem like huge and drastic shifts in the ways we use technology and the ways technology can be used. I say seem, because I suspect these changes only look huge and drastic when you’re caught up in them. With time, and distance, something that seems as drastic as the Internet in your pocket is really just the next gradual step in an ever iterative process. Or perhaps this really is a revolutionary time. I can’t claim one way or the other, because I, too, am caught up in the ripples from the impact. We cannot see how the developments happening now stand in relation to everything else.
There’s so much that gets wrapped up in technology, partially because so much of the visible technological innovation is happening in what is called the “consumer space”. It’s to the benefit of the companies that produce these consumer technologies to tout them as breakthrough, revolutionary, and so forth. When you hear it enough, it’s easy to get caught up in the hype cycle. Our already limited vision and perspective are being skewed further by the marketers who benefit from a small wave looking like a tsunami. And there’s never enough time to come to grips with what we’re seeing before another announcement. There’s a race to be the first to do a thing, the first to market a thing, and the first to comment on how terrible the thing is compared to what came before.
I keep coming back to the idea of “time and distance”. We don’t have either for so much of what we do. If anything has truly sped up, it’s the communication cycle. Before the instant nature of always-on, high-speed Internet connections in our homes and in our pockets, media came with time and distance built in. Newspapers had to be written. Film had to be developed. Radio broadcasts had immediacy, but only if they could get power and a transmitter to where things were happening. Time allowed for things to settle, opinions to be fleshed out, and outcomes to become clearer. Maybe. Whenever you are looking back, there’s always the potential to romanticize the past—to see the high points, but not the low points. There is a value in immediacy, but not in all things.
The dialogues happening in the public space around technology far too often veer into knee-jerk, product-centric flag waving. You’re either an Android user or an iOS user; iOS 7 either sucks or is amazing. The most in-depth discussion around technology in practice only comes in the wake of revealed abuses, such as the debate around the PRISM scandal, or particularly malicious forms of Digital Rights Management. Speaking as a geek, and an advocate of and for technology in our lives, these are the discussions we should be having more often, and earlier. These discussions do happen, and do exist, but so often in academia or its analogues that ordinary people rarely even get to say their piece. We learn about the debate when a documented abuse explodes, when for a brief moment the door is opened and we can see the machinery. By then, it’s too late. It’s easier to stop a system before it starts. Once something is in motion, there’s enough vested interest to keep it in motion.
But this goes beyond, far beyond, spying and spyware. These are questions that, in one way or another, are woven into the very fabric of technology. We need to take some time, and find some distance from the onslaught of small things that look big from forced perspective. The questions we need to ask read simply, but are hard to answer. There are three questions I think of when I think about technology: “How do we use it?”, “How do we think about it?”, and “How do we use it better?” The first is something we should ask of any technology we are given. It seems like a question of interface, but it’s more. Just as “design is how it works,” asking “How do we use technology?” is asking more about the applications towards which we apply it. “How do we think about it?” is what we should ask when we seek to explore a technology’s role in our lives. Is it accepted? Unaccepted? By whom? Are attitudes changing, and why? These questions ground us, and ground the technologies we use in our lives, by moving past the hype.
Finally, we must ask: “How can we use technology better?” The potential trapped in all of our tools can only be unlocked with awareness and experience. In your own life of interacting with technology, you have presumably had the “A-ha!” moment where you discover a time-saving feature in a piece of software, or stumble upon a neat thing you can do with a gadget. This question seeks to go beyond that, to understand how to bend the tools of technology to our own wills, take control, and better the lives of ourselves and others. It requires time, patience, effort, and developing an understanding of technology that goes deeper than just “how it works”.
The answers will vary for each of us. They depend on our own needs and desires, the tools we have at hand, and the time we’re willing to expend. I see them asked in disparate places online and offline, but I don’t see them brought together often. The critiques of Google Glass in recent months have come close, but only close. These are the questions I plan to ask of myself and of the technologies in my life. I will put my answers here, alongside the new questions that will inevitably arise from the attempts. I know I won’t be alone.