“Ultimately, the technology shows you a problem, but not how to deal with it afterwards, or prepare you for it. It attempts to nudge you into behaviors without knowing what behaviors it is nudging you out of. As designers, we know how we’d like our designed object to work, that you, or your user, are saved in the nick of time, or that you can have a good laugh about the STI that has been revealed afterwards, but we know that just isn’t the case.”
“When you learn to write you are taught that a deep understanding of your audience is essential to effective communication. On the internet, though, you are often shouting to your intended listeners as they stand in a thick, noisy crowd. You may be appealing directly to their interests and ambitions, but if their neighbor disagrees, you could still face pushback, complaints or even abuse. The potential for criticism from unexpected parties is, on the whole, positive – a boot camp in radical empathy, one that makes it ever more difficult to hide behind provincialism as an excuse for insensitivity. But for many, it’s disorienting.”
The first step towards empathy is to slow down and think before you hit post. We’re all still figuring out how to live in a world where everyone has megaphones, but Jess is pointing towards a good way to manage it.
There have been a few interesting pieces circulating about the “web of relationships” and “the web of links” vis-a-vis the blogging environment of the early to mid 2000s, before Twitter and Facebook mucked it all up, siloing our content, locking in our friends, and limiting our interactions. I have a lot of respect for the revolutionary bloggers of the Arab Spring. They did a great service to their countries, got screwed over for it, and are continuing to be screwed over for it. They have every right to miss that world where a good blog post can be shared and responded to at length, rather than through a 140-character—or likely fewer—Tweet, or an easily missed Facebook post.
Part of why these pieces have been circulating, though, is a sense of nostalgia among people like me for the old ways of doing things online. We miss the mythical golden age of the web, where it was wild, weird, and wooly. We miss the days of blogrings, of links pages, of early blogs and building up a readership that you could share with. In the mid-2000s, there was a sense that blogs could change the world, and they did, at least in a small part of the world where the stars all aligned.
Once, there was a dream where everyone would have their own domain, their own presence online that they could own and control at their will. It’s a dream that never happened. Instead, everything got bought up by Google, or everyone jumped ship to Twitter and Facebook. Now everything is siloed off and locked down. Links mean nothing, and if you’re not writing for one of the big media sites, are lucky enough to go viral on Medium, or managed to get an audience large enough to support your work before everything blew up, you’re screwed. Now, a blog link is just another piece of disembodied content in the stream. As Hossein Derakhshan puts it:
Nearly every social network now treats a link as just the same as it treats any other object — the same as a photo, or a piece of text — instead of seeing it as a way to make that text richer. You’re encouraged to post one single hyperlink and expose it to a quasi-democratic process of liking and plussing and hearting: Adding several links to a piece of text is usually not allowed. Hyperlinks are objectivized, isolated, stripped of their powers.
In 2002, I set up my first blog right at this domain. I begged my parents to spring for a year of web hosting and a domain name as my high school graduation present. I set up GreyMatter, built a template, and started blogging. Over the intervening years, I switched to MovableType, then to WordPress. I changed hosts, built new templates and themes, and tried to find the right voice, and the right subject matter. I didn’t change the world, or pick up Daring Fireball-level readership, but I’m still here, typing away.
What we so easily forget is that, in the early 2000s, it was a huge pain in the ass to get anything up online. There were two options. The expensive, hard, but more respectable way was to do it yourself—either by setting up your own web server, or by paying for hosting. Either way, you then had to set up blogging software, which was also a pain in the ass: FTPing the files to a server, SSHing into the server to set the permissions using the arcane incantations of the UNIX command line, and finally running the configuration software on the server in your browser, crossing your fingers, waving a dead chicken, and hoping very hard that you didn’t mess anything up. (This is how I did it. Even that process is easier now.)
The cheap and easy way was to sign up for Blogspot or LiveJournal. Even a paid account on LiveJournal, plus the cost of a domain, was less than paying for web hosting. You had less control, though you could still be linked to by the outside world. But let’s not kid ourselves: among the tech-elite, having a Blogspot or LiveJournal account was usually a sign that you couldn’t be taken seriously, unless you were Jamie Zawinski. Thank goodness for those free services, though, because if you told an ordinary person in 2003 that they needed to learn how to use FTP and UNIX just to put words online, they would have checked out. Tumblr is the closest thing we have to a 2015 version of LiveJournal and Blogspot, and while a few tech elites use it, most just eye it warily. At least you can link to Tumblr posts.
While I’m no fan of Facebook or Twitter these days, I have to admit that they do something well that a lot of people want. They let people put out their words, pictures, and ideas in front of an audience without requiring too much effort or financial outlay. Could DeRay McKesson have the reach and importance he has now if he were a self-hosted blogger, instead of leveraging the low overhead of posting to Twitter? You can still tweet via text message from a flip phone, if it really comes to it. As long as you are connected and have an account, you can put your words out there.
Centralized publishing platforms carry the risk of censorship, of course, but this isn’t new. Even in the days of Blogspot and LiveJournal, there was the risk that a regime change at the publisher could come down on something they didn’t like. Even my web host could, if I did something on here that violated their Terms of Service and they learned about it, cancel my hosting account. But that’s direct, human action. The new worry is the risk of algorithmic censorship. This is something not enough people are talking about. At least Zeynep Tufekci brings it up:
Facebook engineers will swear up and down that they are serving people “what they want” but that glosses over the key question: if the main way to tell Facebook “what we want” is to “like” something, how do we signal that we want to see more of important, but unlikable, updates on Facebook? We can’t, it turns out.
Of course, this isn’t a zero-sum game. The existence of social media, of Twitter, Facebook, Snapchat Stories, Apple News, et al. doesn’t mean that blogging and linking are going away forever. If you’re looking at this in a web browser, that’s proof enough. If you’re looking at this in an RSS reader or a Read Later service, that’s also proof. We can save the web of links, of people, and of connections without dismantling the new social media infrastructure. We need tools that make it easy for people to have a space of their own on the web that isn’t necessarily part of some giant network like Twitter, or locked into a service like Facebook.
How do we maintain the balance of making it easy and accessible for people to use their voices online, without getting them bogged down in the technical details? I don’t have an answer to that question. I just know it doesn’t have to be an all-or-nothing world. It’s going to take an act of will, and some new tools that balance ease, cost, and flexibility for a world where people’s primary way of access isn’t necessarily a traditional computer. There’s bound to be some push-and-pull along the way between the extremes of the social media silos and the full control of independent spaces. Nothing is set in stone yet, and we can still figure this out. Let’s take the good parts of the web we have to save, and merge them with the ubiquity and ease of the web we have now. The rest are implementation details.
Gloom and doom is the forecast for tablets these days. Sales are dropping, even for iPad, the king of the tablet hill—which is a small hill, to be sure. With bigger smartphones coming at it from one side, and tiny, ultralight laptops coming at it from the other, where does the tablet go? Why spend $400 on a tablet, when you can get a perfectly good laptop for a little more, or a perfectly good Chromebook for a lot less? Tablets are a luxury! They’re a niche product! They’re doomed to be an also-ran in the computing space!
The Childhood of the Tablet
We’ve only been in the tablet computer era for five years, at least if we go by the launch of the original iPad as the birth of the modern era of the tablet. There were plenty of computers in a tablet form factor before the iPad, but most were just giant, thick laptops with no keyboards, and with interfaces optimized for keyboard and mouse. The iPad was the first tablet to provide a specific, finger-optimized interface, which is exactly what you want for a handheld device. The iPad was the basic form and UI of a tablet computer, done right.
To make a clunky analogy, pre-iPad tablets were giant IBM PCs. The iPad was the Mac—a refined product with a new, user-friendly UI, and with some restrictions that the IBM PC didn’t have. Back in 1984, when the Mac launched, there wasn’t much need for the average person to have a home computer. They were the province of hobbyists, geeks, and hobbyist geeks. It wasn’t until the rise of the Internet in the mid–90s that home computer ownership became a real need, though that was primed a bit by the CD-ROM Multimedia Gold Rush. If 2010 was the tablet equivalent of 1984, then we’re only in 1989, where computers are a useful home accessory, but not even close to a necessity.
There’s a ton of potential in tablet computing as a form-factor that we are only just beginning to unlock five years in. It’s not hard to envision a future for the tablet that sees it, not just as a secondary device, but as the primary computer for most people. With the right developments, the tablet could even become the primary computer for developers and other power user laptop and desktop computer holdouts. What does this future look like? Come with me, as I explore the Tablet of Tomorrow.
A Day With the Tablet of Tomorrow
The year is 2025.
You wake up, shower, dress, and scroll through the morning news and email on your tablet at the breakfast table. Nothing new here. Time to leave, and you toss your tablet into your bag, and head to work. Your desk at work has a 24" display at retina resolution, a wireless keyboard and mouse, and a small docking station with a lightning port. At your desk, you whip out your tablet, plop it on the docking station to charge, and get to work.
How? The tablet has instantly connected, wirelessly, to your keyboard, mouse, and display. It knows you’re on your office Wifi, and switches to what you were last working on, be it your email, the Henderson report, or an HTML file in an editor on one side, and a web browser viewing the file on the other. You work, switching between apps as you need, occasionally setting up your tablet to show something you need to work with you—maybe a chat window for Slack, or your email if you’re expecting something important. Maybe the stream of the Sportsball Playoffs, if you’re not.
After a couple hours of work, you have a meeting, so you grab your tablet off the docking station, fully charged, and the optional stylus. You switch to a note-taking app and write down stuff, or sketch and doodle as the meeting goes on, waiting for your moment to present. When that happens, you switch to your presentation app, swipe the slideshow over to the meeting room’s projector, and do your presentation. People ignore it, because it’s a meeting, but hey, you’re done. Now it’s time to go back.
You return to your desk, and plop your tablet down. The 24" display lights up with everything you were working on before you got up for your meeting, while your meeting notes display on the tablet. If you want, you can swipe those notes up as a pane on your desktop, or just leave them on the tablet and mark them up while you work. Use a stylus, or use your finger. In the meantime, you get back to work.
Soon, quitting time rolls around, so you throw your tablet in your bag, and head home. On the train (or in your self-driving car if you must), you catch up on the news and email. At home, you drop your tablet in a charging dock at your desk, where it connects with your own home desktop monitor, keyboard, and a trackpad, ’cause that’s just how you roll at home. You catch up on your work and personal email, then edit and post a couple photos and videos to Facebook from your trip to the beach over the weekend.
When you’re done with all of that, you take your freshly charged tablet off the desk, swing downstairs, and drop it off in the living room while you eat dinner. While you eat, your tablet wirelessly connects to the 40" TV. After dinner, you plop on the couch and grab your tablet. It’s already showing you your TV app of choice, and so you swipe through your options. Settling on Season 16 of Game of Thrones, you tap the latest episode’s listing, and suddenly the 40" screen across the room lights up and the theme thunders through your speakers.
Then, you realize you have a form you need to fill out for a doctor’s appointment, so you switch to your email and fill it out. You use your fingers to fill in the checkboxes, and for text, the haptic Force Touch keyboard on the glass is good enough for a little bit of touch-typing. You’ve been known to compose emails, take notes in meetings, and even write the occasional short blog post on it, but for longer work, you use your wireless keyboard at the desk. Meanwhile, as you fill out your doctor’s form and mail it off, the TV screen doesn’t skip a frame of bloody, sexy, fantasy action.
After Game of Thrones, you throw on some tunes over the home audio system, have a videochat with your special someone, and then play a video game. Realizing it would be cooler if your game was on that big 40" display, you shoot it over, and it doesn’t skip a frame, while your tablet turns into a custom controller with more features and a mini-map. You play for a while, and with 50% battery remaining, realize it’s time to turn in. On your way to bed, you plug your tablet in with the cable you keep on the sideboard.
Getting to the Future from Here
You know what’s not in the picture above? A traditional computer, desktop or laptop. The only presupposition is that, eventually, we get fast enough, power-efficient enough Bluetooth-like short-range wireless, and fast enough home Wi-Fi to send 4K to 5K video to a display in real time. Ten years might be an optimistic timeframe. It could be fifteen or twenty, but it’s not an impossibility. The obstacles are just wireless transfer speeds, processing power, haptics, and battery life. These aren’t problems we’re close to solving, but they are ones we’re making huge progress on. Everything else is stuff we can do now, at least in terms of software. Cloud storage handles the heavy lifting of storing documents and photos. The closest thing to a conventional computer in the home that I see in this future is a small home media server.
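To put a rough number on why wireless transfer speed is the big obstacle, here’s a back-of-the-envelope calculation. The resolution, color depth, frame rate, and compression ratio below are my assumptions for illustration, not figures from any spec:

```python
# Bandwidth needed to drive a 4K display wirelessly in real time.
# Assumed: 3840x2160 resolution, 24-bit color, 60 frames per second.
width, height, bits_per_pixel, fps = 3840, 2160, 24, 60

uncompressed_gbps = width * height * bits_per_pixel * fps / 1e9
print(f"Uncompressed 4K at 60 fps: {uncompressed_gbps:.1f} Gbit/s")  # ~11.9 Gbit/s

# Even assuming lightweight 10:1 compression, the sustained stream is still
# over a gigabit per second -- more than typical home Wi-Fi reliably delivers
# to a single device today.
compressed_gbps = uncompressed_gbps / 10
print(f"With 10:1 compression: {compressed_gbps:.2f} Gbit/s")
```

The point isn’t the exact figures; it’s that the gap between what the dock-free desk needs and what short-range wireless delivers is measured in multiples, not percentages, which is why this is a years-out problem rather than a software update.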
In a way, this vision of the future isn’t dissimilar from the idea behind Microsoft Surface. Surface tries to be all things to all people, but ends up a device that’s all compromise—a hybrid touch/desktop UI that nobody likes, a crappy hardware keyboard, and an awkward form-factor. It’s gotten better, but it’s a device that carries with it 30 years of legacy computing baggage.
Could you do all this with a powerful enough phone? Maybe, and I could even see that being an option for people who have giant 6" Samsung phones. However, the tablet is a much more flexible form factor. A phone isn’t great for multitasking, while a tablet has enough display space for multiple apps. A bigger form factor also means a bigger battery, and more room for the storage and RAM that are naturally constrained in a phone.
Imagine the ability to do computing work everywhere and anywhere with a screen you can carry in a small bag, or even in a big enough pocket. Imagine turning any big screen into your desktop. This is where the tablet could go, and what the tablet could become. The potential for a tablet to replace the conventional personal computer for the vast majority of people is immense.
People who worry about the future of the tablet are thinking too short term. It doesn’t matter what happens next quarter for iPad sales. It matters what happens in a longer-term scenario where the tablet becomes more capable, more connected, and more powerful—enough to replace the box on your desk, and yet still connect with all the extra stuff you need to live your digital life. We’ll get there in time. Until then, don’t panic, and start working to build that real post-PC future. Or, stick to your old, legacy computing device. It’s not going to go away, but the balance will certainly shift given enough time and development. I’m looking forward to it.
“I think it’s in part a reaction to the atomization of technology. So much of new technology–and certainly media coverage of it–seems to be focused on making individual lives better while our common infrastructure decays. Uber instead of public transit. Airbnb instead of affordable housing. MOOCs instead of publicly-funded higher education. Spending time with infrastructure is reminding myself that it was once possible to work collectively for the shared good, something we need to figure out how to do again (and globally) if we are going to address planetary-scale problems like anthropogenic climate change.”
I’m not an infrastructure fanatic—though as my Twitter followers can attest, I love the subway, or at least complaining about it. I can still sympathize with Deb’s point of view. It nestles nicely with a recent piece in The Atlantic on the real sharing economy. This means organizations like tool lending libraries, Baltimore’s free book store, car sharing instead of “ride sharing,” and—of course—infrastructure. Or, to quote a quote from the Atlantic article: “When I think about the true sharing economy, I see libraries, parks, and common roads.” Exactly the opposite of the stuff getting all the press and VC funding.
People have long complained about technology’s isolating effect on us, especially in the age of the smartphone. I’ve long been skeptical of that attitude, but there is a real isolation problem in technology. It’s narrowed the scope of ideas away from things that benefit everyone to things that benefit individuals. And not just any individuals—individuals with money, access to personal technology, and the privilege to not worry about the people providing their services. I’m including myself in this group too, by the way.
I don’t think we’re quite heading for a future where Uber is going to rip up the subway lines like what happened to the Los Angeles streetcar system in the ’40s and ’50s. We are in a present, however, where spending money, private and public alike, on things for the common good is passé. Not everything needs to be a profit center, after all, and solving problems on a global scale—the ones you can’t solve with another on-demand service startup—isn’t going to make anyone any money.
Maybe I should get into infrastructure tourism. Might as well see it while it lasts.