
Sanspoint.

Essays on Technology and Culture

Dongles and DRM in the Wireless Audio Future

The furor over the iPhone 7’s headphone jack, or lack thereof, seems to be fading. It was destined to be a non-issue anyway, so I’m not surprised. This is, in part, because Apple was so far out in front of the reveal, and because it’s a minor inconvenience at worst. Charging and listening via wired gear at the same time is still a concern, but solutions will come down the pike for that. Just wait a month or two, or join us happy Bluetooth headphone users in the wonderful world of wireless listening. You don’t even need $160 AirPods to do it.

But there’s still some cause for concern with the removal of the headphone jack. It’s just not the concerns I hear a lot of people screaming about. While I’d need to see a teardown to be certain, evidence suggests that the Lightning to 3.5mm Headphone Jack Adapter is about as passive as you can get with a Lightning dongle. It probably consists of a Lightning adapter chip, a Digital to Analog Converter, and an Analog to Digital Converter for the microphone on your headset. Square has already come out and said their old 3.5mm headphone jack-based card reader works with the dongle. Other devices that input audio via that jack should work fine, albeit in mono. [1]

I’ve heard some people float the idea that there could be a DRM lockout in future iPhones, or future versions of the dongle, to prevent unauthorized devices. That’s not likely, given how the headphone jack works. As long as there’s a conversion from digital to analog anywhere along the path to the speakers, there’s a way to tap it. Even high-resolution, 192kHz/24-bit audio can be output over 3.5mm, as long as it’s stereo. The PonoPlayer doesn’t have any fancy outputs, just two standard 3.5mm jacks, one amplified and one not. Perhaps there might be a kill switch to block output via the Lightning to 3.5mm Adapter, but why? It could easily be bypassed with a third-party adapter, and all Lightning headphones would have a DAC in them anyway. You’d block legit users and potential pirates alike.
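To put a number on that hi-res claim, the arithmetic is simple. Here’s a back-of-the-envelope sketch in Python; it’s just the standard PCM math, nothing Apple-specific:

```python
def pcm_bitrate(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    """Raw, uncompressed PCM data rate in megabits per second."""
    return sample_rate_hz * bit_depth * channels / 1_000_000

# Hi-res stereo, PonoPlayer-style: 192 kHz, 24-bit, 2 channels
print(pcm_bitrate(192_000, 24, 2))  # 9.216 Mbit/s

# CD-quality stereo, for comparison: 44.1 kHz, 16-bit, 2 channels
print(pcm_bitrate(44_100, 16, 2))   # 1.4112 Mbit/s
```

Every bit of that 9.216 Mbit/s stream comes out the other side of the DAC as an analog signal on the jack, where anyone with an ADC can capture it. That’s the analog hole, and no lockout in the dongle closes it.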

Any potential DRM risk is around audio formats beyond stereo output. Let us imagine a future where Apple Music provides 5.1 surround sound audio. This is a bit preposterous on the face of it, but work with me. While there are “5.1” headphones, they connect over USB, since you can’t output 5.1 sound through the headphone jack. You can, however, output 5.1 through an optical audio port, and most modern Macs combine optical audio with the 3.5mm jack via the Mini-TOSLINK connector. The hardware overhead of optical audio, however, makes me doubt there will ever be a Lightning-to-Mini-TOSLINK Adapter.

If you want to listen to 5.1 audio out of your iPhone, you’ll have to hook it to something via Lightning. It would be, at least theoretically, possible to capture each channel of the 5.1 signal after it passes through the DAC, but that’s a lot of work for minimal gain. Plus, I have my doubts Apple will ever bother with 5.1 audio on the iPhone or iPad. If streaming and wireless are the future of audio, bandwidth is too constrained to make high-fidelity and 5.1 worthwhile for the near future. More importantly, most consumers don’t give two shits about high-fidelity audio or surround sound anyway.
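Some rough numbers on that bandwidth point. Apple Music’s 256 kbps AAC stereo rate is public; the 5.1 parameters below are my own assumptions for the sake of the sketch:

```python
def pcm_bitrate(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    """Raw, uncompressed PCM data rate in megabits per second."""
    return sample_rate_hz * bit_depth * channels / 1_000_000

STEREO_STREAM_MBPS = 0.256  # Apple Music today: 256 kbps AAC stereo

# A hypothetical uncompressed 5.1 stream: 48 kHz, 24-bit, 6 channels
surround_raw = pcm_bitrate(48_000, 24, 6)
print(surround_raw)                       # 6.912 Mbit/s
print(surround_raw / STEREO_STREAM_MBPS)  # 27x today's stereo stream
```

Compression narrows the gap (Dolby Digital tops out at 640 kbps), but you’re still asking for multiples of today’s stream to deliver a difference most people listening on the go would never hear.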

So, I’m not worried about a DRM lockout in a headphone jack-free iPhone ecosystem. Unless Apple decides that you can’t connect any old DAC to a Lightning port, or forces everything to go over Bluetooth or a proprietary wireless method, we’ll be in good shape. When it comes to portable devices and audio, the future is wireless, at least for listening. As long as there’s a wired way to get audio out and audio in—which Lightning provides—things are going to be okay.


  1. This is a limitation of the headphone jack on all iPhones. The TRRS standard commonly used on smartphones has only four contacts: left audio, right audio, ground, and microphone.  ↩
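For the curious, here’s that pinout sketched out in code. The contact order below is the CTIA/AHJ wiring that iPhones and most modern smartphones use; the older OMTP standard swaps the last two contacts:

```python
# TRRS contacts, tip to sleeve, in CTIA/AHJ wiring (what iPhones use).
# OMTP devices swap ground and microphone.
TRRS_CTIA = ("left audio", "right audio", "ground", "microphone")

# One microphone contact means one input channel: anything feeding
# audio *into* the jack is mono by construction.
assert TRRS_CTIA.count("microphone") == 1
```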

On Well-Worn iPhones and Upgrade Cycles

I find myself casting a curious glance at the people on my Twitter timeline trying to decide which color of iPhone 7 to get. Should they get the new Matte Black iPhone, or the shiny new Jet Black iPhone? The Jet Black iPhone is a new color, it’s shiny, and it’s gorgeous, but it’s also a fingerprint magnet, and Apple even admits it might get scratched up during regular use. However, it allegedly feels better in the hand—less slippery. Matte Black is nice too, and doesn’t get scratched up. And does any of this matter if you’re going to be stuffing the darn thing into a case for its lifespan?

The conundrum is a fascinating one, since it says a lot about our relationship to our devices. A few years back I read a great essay on how our devices age with use. The photos of the well-worn, well-used original iPhone are gorgeous, both as photography and as subject matter. There’s something about the way the finish has worn away and scratched on that first iPhone that just makes it feel good to me. It’s like a comfortable, old t-shirt.

Later iPhone models with their different cases and finishes don’t age like that. The closest we’ve come is probably the black-and-slate iPhone 5. The anodization would chip off on the chamfered edges, giving it a rough-and-tumble look after a while. I appreciated the look while I had mine. John Gruber agrees, saying “it added character — call it a Millennium Falcon look.” It’s possible the new Jet Black iPhone will age well too, with the scratches, dings, and fingerprints that mark a well-loved device.

But there’s still a contingent of people, a vocal one, who want their devices to look as good on the day they’re replaced as they did when they first opened the box. Why is this? Some people are just paranoid about aesthetics and wear and tear, but there’s more to it. It’s about the relationship we have with our devices. Smartphones are probably the most personal devices any of us own. They are on us all the time; even when we’re not actively using them, we feel their weight in a pocket or purse. They are omnipresent.

They are also transient. It used to be that if you bought a computer, or some other consumer electronic device, you would keep it for a long time, and maybe upgrade parts of it. My first personal computer, a 486 with 4MB of RAM, lasted me from Christmas 1993 to Christmas 1997, when I finally replaced it with a Pentium II machine. In the intervening years, I replaced the hard drive, added a CD-ROM drive and sound card, and upgraded the RAM. Since switching to the Mac in 2005, I’ve had four Mac computers. Since buying my first iPhone in 2009, I’ve had five iPhones. [1]

The smartphone is an annual, or biennial, upgrade for many people, and this changes our relationship to it. Many of us fund each new device we get by selling the old one. Devices that look new sell for more, end of story. It’s less of a concern if your phone carrier offers a trade-in program for your old phone. Carriers are less concerned about condition, and more that you extend your contract another year. Still, no matter how aesthetically pleasing it is, nobody wants to hand over a device that looks like it’s taken a beating. God knows what that phone store clerk will think of us. So we keep our phones in cases, baby them, and clean them not because we want to protect them for ourselves, but to protect them for the next owner.

By necessity then, the relationship you have is not with an iPhone, but with the iPhone in each of its iterations. I don’t know if I like my iPhone 6S better than my iPhone 5S. I probably like it about the same, all told. The things I like and dislike are different from what I liked and disliked about the previous model, and I can extend this all the way back to my first iPhone 3GS—or to the iPod touch I owned before that. All of these devices exist on a continuum marked by upgrades of hardware and software, but the core experience remains the same. This is one of Apple’s strengths. You can go from an original iPhone to the shiny new iPhone 7, skipping everything in between, and still understand the device.

Patrick Rhone recently published his own interesting thoughts on upgrades. They serve as a solid counterpoint to the transient relationship we have with our phones, and why we upgrade. Patrick’s relationship to his iPhone is as a tool: if it works well enough, and the new one doesn’t offer anything that fits his needs, why upgrade? It’s not worth the cost when what he has does the job. I won’t be upgrading either, for similar reasons—though I also won’t say I’m not tempted. (That Jet Black iPhone is gorgeous, and I wouldn’t mind water resistance, having already lost one phone to water damage.)

It’s a question of priority, in the end. I don’t have an answer; I just find the entire conundrum of upgrading and protecting our devices from use to be interesting. I keep my iPhone in a case, not to protect its looks, but because I managed to break my previous model of iPhone a few times without one. I’d rather not go through the hassle of an insurance replacement or a Genius Bar repair if I can help it. My relationship to my phone is different from Patrick’s, different from John Gruber’s, and different from yours. Let’s celebrate that, instead of worrying about a scratch or two on the glossy finish of a new Jet Black iPhone.


  1. Though one device upgrade was prompted by having my phone stolen, and that doesn’t count the three 5S devices I went through because of water damage and broken screens.  ↩

The Golden Age of Wireless: A Personal Cloud

Every day, I become a temporary cyborg. Before I leave for work, I strap a computer laden with sensors on my wrist, and a pair of speakers around my head. They both connect to a computer that I keep in my pocket. It’s only three devices, but they form a small, private network of their own around my body. It’s a network that keeps me informed about the state of my body and anything important that needs my attention, with my smartphone at the center. And all without wires.

When you think about it, this is amazing.

The last decade has seen technology become more personal and move onto our bodies. We’ve had electronic devices we could keep on our person, of course. Everything from pocket watches to PDAs, from pedometers to iPods, has been personal technology. What separates then from now is that, until recently, none of these things could talk to each other. Only by placing a computer in our pocket, with an omnipresent connection to the Internet and the ability to talk to other devices without wires, could we conceive of a world where our most personal technology becomes an ecosystem unto itself.

None of this was impossible before. Look at Steve Mann at MIT, if you can stand to. What separates the clunky Stone Age of wearable technology and sensors from the Bronze Age is that now we can do it all relatively cheaply, and without wires. Imagine stringing a wire from your wrist to your pocket, maybe under your shirt. String a second wire from your headphones to your pocket. If you’re convinced that augmented reality is the future, how about a third one to your glasses? (I’m still skeptical.) Instead, every day, I sit in my own personal cloud of devices that speak to each other, leaving my body free and comfortable. No wires, no muss, no fuss.

Okay, some fuss. Bluetooth has come a long way since high-powered douchebags kept giant glowing Borg implants in their ears hooked up to their Motorola RAZR, but it still has a ways to go. It’s radio, and like most forms of radio, it’s pretty crappy at transmitting through thick bags of water like the human body. Battery life is still an issue, though it’s interesting that my Apple Watch lasts longer on a single charge than the smartphone it connects to. These are all kinks that will be ironed out in time. It’s not hard to imagine a world with more reliable, more power-efficient wireless connectivity, which is why we’re so impatient for it.

The process of getting there is going to be bumpy, and full of temporary inconveniences. In other words, it’s the story of every technological development for the last several millennia. On the other side of it come untold rewards. It will be a boon to accessibility. Smartphone technology has already shown itself to be insanely useful to the blind, and let’s not forget Molly Watt’s powerful piece on how her Apple Watch helps her despite being deaf and mostly blind. The more cumbersome it is to add personal technology to a person’s life, the greater the friction, and the less useful it becomes. And, face it, wires are friction. If you’ve ever had to exercise with headphone cables against your skin, it’s literally friction.

The future is wireless. The future is a cloud of our personal devices, talking to each other, helping us know what we need to know, when we need to know it. All of it working seamlessly, all of it working together, and all of it communicating without wires. We won’t be there tomorrow, but we’ll be there soon enough. And we’ll probably wonder how we ever lived without it.

Why the Apple Watch Will Never Have a Camera

Apple’s September 7th event is just around the corner, and it’s looking like it’ll bring an upgraded Apple Watch 2. As parts leaks and official rumors trickle out, the rumor mill has quieted about adding a camera to the Watch. Despite this, I still hear people insist that at some point the Apple Watch will have a camera. It’s just a question of when. I’m trying to be more open and less skeptical about technology I don’t get, but this seems ridiculous.

I’m going to make a claim right now that the Apple Watch is not going to get a camera. Not in the Apple Watch 3, the Apple Watch 4, the Apple Watch SE, or the Apple Watch Air. It’s not going to happen. Though to protect what credibility I have, this claim chowder will expire after five years, if only because five years from now, we might be gossiping about a completely different Apple product’s upcoming features.

Also, this claim chowder does not cover the idea of Apple adding a camera for iris recognition to unlock the device. While it is, technically, a camera, iris recognition requires near-infrared illumination to work. Under visible light, the structures of the iris needed for identification are obscured, especially in dark-colored eyes. You’re not going to be taking selfies and FaceTiming with an iris recognition camera. I’m sorry.

The biggest clue that Apple isn’t going to be adding a camera to the Watch is the recent repositioning of the Watch as a tool for fitness and quick information. Messaging was part of the original Apple Watch pitch, but Apple’s marketing has shifted away from that since watchOS 2. The features in the upcoming watchOS 3 put the nail in the coffin of the Watch as a primary messaging device, removing the goofy circle-of-friends UI for the side button and replacing it with an app dock. Apple wouldn’t have done this if the Watch were going to stick with messaging as a primary use. Even if messaging gains prominence in the future, a camera would still be a dumb idea for a bunch of reasons.

Here, try an experiment. Hold an iPhone against your wrist, putting the front-facing camera approximately where the face of an Apple Watch would be. Now, hold your arm up and try to take a good selfie. Bet your arm gets pretty tired, pretty fast. Imagine trying to have a FaceTime conversation like this. There’s a reason people don’t hold watches up in front of their face. We tend to look down at them, raising our wrists about halfway. It’s a much more comfortable position. So, try taking a selfie with your iPhone front camera resting where your watch would sit when you look at it like a normal person. Now, count your nostril hairs.

Not to mention, there’s the creepshot problem. A small, wrist-mounted camera is the perfect device for taking non-consensual photos up a woman’s skirt. If you don’t think this is a problem, consider that it’s so prevalent in Japan that all cell phones sold there have a shutter sound that cannot be disabled. That includes iPhones.

Smartwatches with Cameras

Nostril-cam and creepiness aside, there’s also the problem of where the heck you put the camera in the first place. It’s not as easy an idea as you might think. Most smartwatches with cameras tend to put them in the bezel, which is an easy solution that is loaded with problems. First off, where on the bezel do you stick the camera? Smartwatches with bezel-mounted cameras tend to stick them above the screen, typically at an angle. This is so you can take a selfie, or a picture of something else, but it’s a compromise that makes for neither good selfies nor good photos.

[Image: a smartwatch with a camera mounted in the bezel]

Plus, the Apple Watch is designed to be worn ambidextrously. There’s a setting to flip the display so you can wear it on either wrist. A camera above the display when worn on the left wrist becomes a camera below the display when worn on the right. So, maybe you put it on the left or right side of the bezel. That solves the orientation problem, but not the nostril-cam problem, and it’s not going to be good at all for anything other than a nostril-selfie. There’s also the Arrow Smartwatch, which puts the camera in a rotating bezel. It’s an interesting idea, but it hasn’t shipped yet and doesn’t solve the angle problem. Plus, if you think Apple’s going to make the Watch round any time soon, I’ve got a glass cube on 5th Avenue to sell you.

[Image: concept of a smartwatch with a camera mounted in a second crown]

How about a second crown with a camera in it? Then you can rotate the camera to point at yourself, or at whoever you’re taking a picture of. An interesting design idea, but impractical. You have to limit the rotation of the second crown to 180 degrees, lest someone take a poorly lit photo of their wrist. It’ll also have to stick out quite a bit to not capture the side bezel of the watch in every photo. Building a connection between the camera and the logic board that doesn’t break after extended crown twiddling is difficult. And it’s not hard to imagine confused Watch owners trying to twiddle the wrong crown to scroll the screen.

This leaves one last option: a camera in the band. It’s not only plausible, it’s been done. The original Samsung Galaxy Gear included a watch band with a built-in camera. Not a good camera, but a camera nonetheless. When you think about that secret diagnostic port on the inside of the band connector, it’s not hard to imagine some kind of special Apple Smart Watch Band with a camera that hooks in.

[Image: the Samsung Galaxy Gear, with its camera built into the watch band]

Even this has huge issues. First off, remember that the Apple Watch is ambidextrous by design. If Apple wants to release a Watch band with a camera in it, it’ll have to be usable on both the left and right wrist. It would be a very dumb and dangerous idea to have the camera connection run through the band’s clasp, so a future Apple Watch would need to have two connectors, one on each side. I’m willing to bet Jony wasn’t too happy about having one port on his Watch, let alone a second. Besides, it’s been a year and a half, and Apple’s yet to announce any official use for that port. Unless something hooking into it gets announced on the 7th, I wouldn’t get my hopes up.

And how do you take a selfie with the camera mounted on the watch band, anyway?

I know Apple’s got some very clever people in their Industrial Design and Engineering teams. I’m sure one of them is clever enough to figure out how to put a camera in the Apple Watch in a way I haven’t thought of above. Problem is, clever isn’t the same thing as useful. Apple isn’t the sort of company that’s willing to compromise usability, design, and quality just so they can check off another box on a list of features. You’re going to be taking photos with your phone for the foreseeable future. Get used to it. If you want the experience of photography on your Apple Watch, the remote shutter feature still works.

iPad-only is the New Desktop Linux… For Now

The new hotness is no longer going iPad-only. Now, all the cool kids are writing missives about how the iPad can’t be their only computer. Okay, that’s more dismissive than I mean to be about Watts Martin’s excellent Medium piece on trying to do his work on the iPad, especially since I’m typing this on a Mac. He makes some great points about file handling and interoperability with the legacy PC world. Going iOS-only is feasible, but only if you have the infrastructure to support it in your line of work. If you desperately need to use the Track Changes feature in Word, well, don’t plan on going iOS-only just yet.

What I’d like to do, instead, is discuss the difference between the iOS and iPad ecosystem and the world of desktop Linux. First, a caveat: while I have used Linux as my primary operating system, I am a decade removed from that whole scene. I ditched Linux for the Mac in 2006, and have never looked back. Well, okay, I’ve looked back once or twice, enough to know that my experience with Linux in the early 2000s is not accurate to the experience someone would have in 2016. That said, some aspects of Linux have not changed in the intervening ten years, and it’s those aspects that make all the difference here.

The history of iOS has been the slow re-development of the GUI-based computer from first principles. From the original iPad and iOS in 2010, the past six years have seen an incremental inclusion of the features we take for granted with modern desktop operating systems. You can argue about whether this was the plan all along, but you can’t deny it’s happening. iOS on the iPad now has multitasking, rudimentary multi-user support for education, the start of a user-accessible file system via iCloud, improved inter-app communication, and even the first steps towards a native development environment. However slow it’s been, the forward momentum of development makes me think that in a few more years, we’ll have an iPad and an iOS that addresses most of Martin’s complaints.

Linux, on the other hand, is design-by-committee. There are a million ways to do everything, and no unified vision for the operating system above the kernel level. There are distributions focused on usability for the desktop, and development on open-source Linux consumer applications that have almost perfect feature parity with their Windows and Mac counterparts. Despite all of this, and people proclaiming every year since 1998 as “The Year of Linux on the Desktop,” it’s yet to happen. And I have my doubts it will, beyond the niche of computer geeks who are into that sort of thing and/or are frugal about hardware and software. That’s not to say Linux doesn’t have a place. It’s part of the infrastructure of the Internet, and it’s not going anywhere. Linux, in a pure form, as a desktop OS, however, is not likely to happen in the next few years. And it won’t happen as the underpinning of ChromeOS either.

Maybe the future isn’t everyone with 10” and 13” slabs of glass as their primary computing device, but I maintain that it’s still far too early to tell. Right now, the consensus seems to be that big changes to iOS for the iPad will come in the spring. Whatever changes Apple brings will make doing “real work” easier and faster. I fully expect a native development environment in time, perhaps once Swift is streamlined a bit more. The iPad and iOS keep making slow, steady strides towards being a new way of computing. Linux, on the other hand, continues to be itself: a powerful tool that can be used as a desktop operating system if you want, but unlikely to make any additional inroads into people’s homes except as the foundation of an Internet of Things device. And I don’t think the Mac or the traditional PC will ever go away, either.