
Sanspoint.

Essays on Technology and Culture

What We Lose In A Streaming Music World

You can call it perfect timing. A couple of weeks ago, I decided to try iTunes Match, the iTunes in the Cloud solution that doesn’t involve Apple Music and subscription streaming. After using it on two Macs and my iPad for a few days, I found that some of my favorite albums and songs were being matched with the wrong versions, yet again. In response, I rage quit and got a refund. I’d never been quite so upset with an Apple product in my life.

Then, I came across this sad piece by James Pinkstone, whose library was destroyed by an Apple Music and iTunes bug. A week after that, MacRumors reported that Apple was planning to end music downloads in two years. The rumor was quashed by an Apple spokesperson, and that was enough for many. Not me, though. “What if?” has been running through my mind ever since.

James Pinkstone wrote about how certain, unique versions of songs in his library were replaced by common versions. I ran into a similar issue with iTunes Match, which replaced German-language versions of Kraftwerk songs with their English counterparts, and a remastered version of an album with the original CD master. At least I had recourse to a backup and my local files in situ on my main Mac…

I’ve written before about why I choose to own music in a streaming world, but I never felt as though my music collection would be taken from me. Now, I’m not so sure. Maybe music downloads will go away, and maybe the future of iTunes will be cloud-first, with local files ranked somewhere below Connect, but above syncing ringtones.

But the looming threat of a streaming-only world opens up a bigger question about music ownership and the experience of music. I’ll be the first to admit that all forms of physical media are a huge pain in the rear. LPs, CDs, and cassettes are all fragile. Digital files are less fragile, but just as annoying to organize and sync. I keep an external hard drive tethered to my MacBook just to store my media library. I have more music than could fit on the largest of iPhones. Managing this is a hassle.

Yet, these forms of music media are real in a way that streaming music is not. There are digital files in my collection that are over a decade old. They’ve traveled with me across multiple computers, and multiple lives. This is meaningful in a way that streaming can never be. How do you connect with music that you simply rent, and could disappear from your library the moment you turn your back? A record label dispute could mean that your favorite artist’s music might be locked down to a single streaming service—such as the library of the dearly departed Prince.

It feels like the move to streaming music means we’re losing something. What happens to music that isn’t available on a streaming service? How will you explore the music of a surprisingly good opening band when they don’t exist in the library of Apple Music, Spotify, or TIDAL? So much of the music that has touched my soul can’t be streamed for love or money. I had to seek it out on my own, pawing through used music bins, or going to shows. When there’s an all-you-can-eat buffet for $9.99, what’s the incentive to order something that isn’t on the menu?

Maybe I’m becoming a fossil, but I can’t help but worry. Music is one of the most personal forms of art in the world. The way we relate to it cannot be isolated to files stored somewhere remotely. There’s the thrill of discovery, the emotional connection to lyrics, a voice, or even a single sound. By owning my music library, I make sure that I can maintain those relationships to the art. I should never have to worry that, if I double-click an album in iTunes, I will hear the wrong thing. If I do, I know that restoring order is within my grasp, not something that requires technical support calls and arcane rituals.

I may, eventually, be left behind by a streaming world. I don’t expect I’ll be alone. If the world goes on without me, and I am left to my digital files, and my collection of plastic and wax discs, I will be okay. But there’s a big difference between being left behind, and being abandoned, and it’s the latter that scares me to death. If you care about music in any tangible form, it should scare you as well.

Mindful Tech, Part 7.5: Our Data Trails, Ourselves Continued

Before you read this, take a moment, and check out Take This Lollipop. Fair warning, the site requires Flash, and it needs to connect with your Facebook account to work. It’s worth trying, at least once, and you can always disable its Facebook access when you’re done watching.

Go ahead. I’ll be here when you’re done.


Take This Lollipop is creepy, and a bit heavy-handed, yet it makes a point about who has access to your data. It also reveals the potential of our data to create narratives. In a world in which our data is constantly being used to create a specific narrative for us (e.g., you’re a White Male, age 18–35, with an interest in Consumer Technology and 80s Music, who is also $36,000 in debt, so here are Relevant Advertisements), we have the power to use our data trails to create narratives about ourselves as well.

Recently, I had the pleasure of seeing a talk by Lam Thuy Vo, a Data Journalist and Data Artist, at Facets 2016. She showed off a series of personal projects that used data to examine the very human lives of herself and others. These include Quantified Breakup, which examined her own data on movement, messaging, finances, and more, in the wake of her divorce. It’s a fascinating and different way of thinking about data, and a great contrast to the almost paranoiac view in the previous Mindful Tech piece. She’s also the one who introduced us to Take This Lollipop.

Data trails are more than just what’s collected for advertising purposes. We collect data on ourselves, deliberately and not-so-deliberately, and in ways we don’t even think about. If you wear a fitness tracker, you’re collecting data on yourself deliberately. If you carry an iPhone, you have a record of everywhere you go, not so deliberately. Data trails encompass the thoughts we post to Twitter, the emails we send on Gmail, our browser histories, the music we listen to on Spotify, anything we do online, for better and for worse.

It’s becoming all but impossible to opt out of even the most egregious data collection. For example, when I was looking for work, I discovered pretty quickly that if I didn’t have a LinkedIn profile, as far as most employers were concerned, I didn’t exist. This may not be an issue if you work in a manual labor field, but if you want a desk job where you’re moving data around and you’re not on LinkedIn, you don’t exist. When all of your friends and family are on Facebook, and you’re not, how does that change your social landscape in the real world? And, of course, what happens if you’re blocked from one of these networks for whatever reason? [1]

There are no clear answers here. Lam brought up the idea of a Digital Bill of Rights that determines who has the right to our data and when. Attitudes toward data privacy differ between the United States and other parts of the world. You run into ideas like the Right to be Forgotten in Europe, but when the Internet is dominated by American corporations with American ideas of privacy and data retention, attempts to legislate our way out of this are doomed to be insufficient.

In the interim, the best option is to learn about your data, and to take ownership of it. Ownership of data matters. One thing that Lam pointed out in her talk is that it is possible to pull your data out of many of these services. Whether it’s human-readable is another matter. The best you can typically hope for are CSV files, which you can manipulate using the most humble of data analysis tools: Microsoft Excel and PivotTables. It’s then up to the viewer to create a cohesive narrative from that data: a story with a beginning, middle, and end.
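
To make that concrete, here’s a minimal sketch of the same PivotTable idea in Python with pandas, swapped in for Excel. The file name and columns (“messages.csv”, “timestamp”, “contact”) are hypothetical stand-ins, not any service’s actual export format.

    # A minimal pivot-table sketch over a hypothetical CSV export.
    # "messages.csv", "timestamp", and "contact" are stand-in names.
    import pandas as pd

    # Most exports are CSV-shaped; parse the timestamps up front.
    df = pd.read_csv("messages.csv", parse_dates=["timestamp"])

    # Bucket each row by month, like a PivotTable row grouping.
    df["month"] = df["timestamp"].dt.to_period("M")

    # Count messages per contact per month.
    pivot = pd.pivot_table(df, index="month", columns="contact",
                           aggfunc="size", fill_value=0)
    print(pivot)

A table like that, messages per contact per month, is the kind of raw material a project like Quantified Breakup starts from; the story with a beginning, middle, and end is still up to you.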

A while back, I wrote about how I want to know what the services I use know about me.

“If I shouldn’t worry about the data I feed to Google, Facebook, and a whole holy host of similar companies and services out there, why not be more transparent about what data is being collected, how, and what they know about me? I want to see a simple, clean, human readable page on every service I feed my personal data to that tells me every last piece of information that they know…”

There’s an opening for services that can do this for people, though the privacy risks of aggregating all this data together are significant. If a malicious actor gets into a service that houses the aggregation of all of our personal data, it’s not hard to see the potential for abuse. It would be a revolution in doxing alone. Instead, I’d like to see tools that exist in the user space, off the cloud, that let us analyze and identify the stories in our data. The better to know what we’re making, what we’re leaking, and what we should be deleting.

And even deleting our data is problematic. The database design of many websites is such that it is easier to mark a record as inactive than to remove it entirely. This is one part lazy design, and one part technical limitation. How can we be sure that the data we’ve deleted is truly gone, when we want it gone? What happens when the data trails we thought were lost when a service dies get bought by another company? The truth is, we don’t know. And that makes thinking about it all the more essential.
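
For the curious, here’s a minimal sketch of that “inactive flag” pattern, often called a soft delete. The table and column names are hypothetical, not any particular site’s schema.

    # A minimal soft-delete sketch: "deleting" flips a flag instead
    # of removing the row. The schema here is hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE users
                    (id INTEGER PRIMARY KEY, email TEXT,
                     is_active INTEGER DEFAULT 1)""")
    conn.execute("INSERT INTO users (email) VALUES ('user@example.com')")

    # What a "delete my account" button often actually does:
    conn.execute("UPDATE users SET is_active = 0 "
                 "WHERE email = 'user@example.com'")

    # The app's queries filter on the flag, so the account looks gone...
    print(conn.execute(
        "SELECT email FROM users WHERE is_active = 1").fetchall())  # []

    # ...but the row, and the data in it, are still there.
    print(conn.execute("SELECT email, is_active FROM users").fetchall())
    # Prints: [('user@example.com', 0)]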


  1. This is huge. Facebook’s “Real Name” policy has had a chilling effect on transgender people, or anyone who needs a pseudonym to avoid harassment and abuse, locking them away from digital support networks, family, and friends.  ↩

Mindful Tech, Part 7: Our Data Trails, Ourselves

How many accounts are you signed up for across the web? In my 1Password vault, there are over 300 individual logins, and I’m almost certain that I haven’t logged in to more than half of them in at least a year. Maybe I deleted the account, where that’s an option, and left it in 1Password. More likely, it’s just an account I can’t be free of. It’s bothersome just how many websites that require a login have no way to delete it.

All of these unused, idle accounts present a risk. They’re part of a data trail that contains Heaven knows what on me. I can hazard a guess, though: contact information, work history [1], health data, financial data, and who knows what else they’re correlating any of it with. Even with good password hygiene—I do use 1Password, after all—a data breach could be devastating. I know that I have been pwned three times. At least, those are the ones I know about…

We often don’t think about the trails we leave behind as we traverse the web, except when the latest fiasco brews over what Facebook, Google, et al. are doing with our data this time. Then there’s a little righteous indignation, maybe an update to our ad blockers, and then we go back to sleep until the next outrage. Sure, we think we’re immune from whatever invasive technological development is being used to spy on us, but when was the last time you thought about the data you willingly gave up?

Getting a grasp on the data we spill out, let alone what it’s being used for, is difficult by design. It’s part of the special sauce that makes these companies money. Google, to its credit, has a page where you can see the profile it’s built on you. And you can at least opt out of the worst of it. Facebook, not so much. And all of this comes before the other services that track you, online and off: companies like Acxiom, Experian, and Equifax. These don’t exist in a vacuum. Opting out of these services, too, is possible, if difficult.

Let’s bring it back to digital data trails. Jacoby Young has a small series of interviews—including one with me—where people audit how much they use the Big Five tech companies. His interview gave me a chance to take stock of where I am in a process I started a year and a half ago to wean myself away from services I can no longer trust with my data. Trust, for me, is a matter of understanding what I’m getting out of the data I give up in exchange for the product. In the case of Google and Facebook, the two services I want to use least, I’m struggling. Yet I’m still tied to both platforms for multiple reasons.

Even the services I trust can be porous. I use the Health app on my iPhone as a central repository for data on my physical body. Apple’s implementation of Health on the iPhone is extremely secure: encrypted and inaccessible to Apple in any form. Until recently, Apple didn’t even include Health data in encrypted iCloud backups, which took security a little too far. In any case, I’m happy to trust Apple with my health data. The apps that feed into it, however… I can assign and remove permissions for apps to read and write my health data, but I can’t be certain what they’re doing with it, let alone what they’re doing with the data I feed into the apps themselves. Who knows where all that is going?

That’s not to say we shouldn’t surrender some data. Not only is it inevitable; when we know both what we’re giving up and what we’re getting in return, it seems fair. As long as someone is happy with the terms of that transaction, I can’t tell them to stop. Besides, the only alternative, short of complete disconnection, is to invest time, money, and work into building technologies to keep your data under your control. It’s possible, but it’s not easy—certainly beyond the reach of the average person.

So, now we’re forced to either create a calculus of trust for who we share our data with, or just give up and let our data fall where it may. Easier just not to think about it. Besides, what does it matter? It’s only data. But that data is increasingly personal, increasingly specific, and increasingly identifiable as us. When Google knows more about you than you know about yourself, the potential for abuse is massive.

It’s important that we see the value of what we’re giving up, and decide for ourselves what we are comfortable with. Take stock of who you’ve shared your data with, what accounts you have and no longer use. See what apps are connected with your social media accounts and what data trails you’re leaving behind you. Without knowing how much you’re leaving behind, how can you possibly be comfortable with the situation? Knowledge is power. If only the companies we’re giving this data up to would be willing to share it with us.


  1. Job application sites are often the worst when it comes to not being able to delete your account.  ↩

The Right Device for the Right Task

An early version of this post appeared in Issue 7 of the Sanspoint supporter Newsletter. To subscribe, visit the support page, and subscribe for $3 a month, or make a donation of any amount.

I’ve had a couple of debates about this, mostly in a private Slack, and I’m coming around to CGP Grey’s idea of assigning his various devices to be (mostly) single-tasking machines. He explains how he uses his various iPads in Episode #26 of Cortex, with additional info on his iPad Pro writing setup in a blog post.

Note that I’m coming around to the idea of different devices for different tasks, not Grey’s specific implementation. He’ll be the first to admit that the way he works isn’t for everyone, and not everyone can afford three iPads. (To Grey’s credit, his iPad mini is an older model, not one he bought specifically as a makeshift Kindle.) Assigning specific functions to our devices has merit in my mind because it is so easy to get overwhelmed by the possibilities of our devices. If you’re an inveterate procrastinator who is likely to dive into an Internet K-Hole, there’s appeal in having a device that doesn’t let you do that.

I’m not about to go all out and start completely disabling features on my iPhone, though the idea appeals to me. [1] Instead, CGP’s discussion has me thinking about ways I can start being more focused in the use of my devices. I’m asking myself what role each device serves in my life, and how I can maximize what each is good at versus what I need from my devices.

This came into focus when I got a second Mac for my new day job. Now, I have a device that is specifically for a certain context in my life: this is my Work Computer for my Day Job. When I am on this computer, I am (ostensibly) at work. Why can’t I do the same with my other devices?

About a week ago, I snagged a Logitech Type+ Keyboard Case for my iPad Air 2 for really cheap—like $30 cheap. This makes it a lot easier for me to use the iPad as a dedicated writing device. I’m writing this particular newsletter on my Mac, but I’ve done a fair amount of writing with the iPad and Type+ lately, even if it hasn’t been published yet. I’m very happy with the choice. iOS may have multitasking now, but it’s still harder for me to switch modes on the iPad and dive into a distraction rathole.

I still need to figure out what roles are best for my iPhone and my home Mac. Plus, I’m thinking about my Apple Watch and how to streamline that for what it’s best at, too. It’s easy to look at CGP’s setup and go, “Hey, jerkface, not all of us can drop a bunch of money on iPads and mechanical keyboards,” but that misses the point. It’s not about buying more gear, it’s about optimizing what you have so it works best for you.

What “works best” means is a personal thing. If that means turning off Safari and all the other apps that might keep you from doing what your job is, then fine. If it’s streamlining down to a pair of devices that can do everything, then good for you. Instead of getting lost in the details of one person’s specific implementation, consider the ways you can apply the idea to your own digital life.


  1. There’s a good follow-up on that link I didn’t know about until writing this piece.  ↩

Mindful Tech, Part 6: Mindfully Learning Technology

Has this ever happened to you? You go to a familiar website or app, only for it to show up with a brand new interface. You stare at it in confusion, wondering where all the familiar visual landmarks have gone, and what new, convoluted way they’ve come up with to do a simple task. Maybe IT has rolled out a new “software solution” to your machine, replacing an older system that you’ve gotten used to. They promise things will be easier and faster, but all you can see is yet another monkey wrench thrown into your machinery. When this happens, it drives all of us nuts to one degree or another, even if only because all the people around us are complaining.

A while back, I wrote a response to an article about kids being unable to use computers. I identified two types of knowledge about how to use technology: task-based and skill-based. Task-based knowledge is the most common way people learn how to use technology. Task-based users establish routines for what they want to accomplish, using cues like the shape of icons, the physical location of buttons, and familiar situations. Naturally, these routines are going to be thrown off by even a small change to an interface. A complete redesign? Then they have to start all over again, and the user will not be happy.

In contrast, you have skill-based users. Instead of memorizing specific steps to accomplish a task, they learn the system itself. Their knowledge can be applied to all sorts of tasks, because they know how to explore an interface and recognize how actions they perform for one task can be applied to a different, related task. When presented with a new interface to a system, the skill-based user might need a moment to adjust, but they’ll be able to relate the new interface to the mental model in their head. Task-based users navigate by the map. Skill-based users navigate by the terrain.

Most of us fall somewhere between these extremes. Maybe on our home machines, we’re skill-based power users who can take every curveball that app designers throw our way. At work, however, we’re still trying to get used to the new CRM two years in, and dreading the day IT finally rolls out the next update. Maybe we’re a whiz on our smartphone, but trying to get anything done on the home PC is an exercise in frustration. Or maybe it’s the other way around: the PC is a breeze, but your smartphone is a frustrating Fisher-Price toy.

How do we bridge this gap? Through mindful learning.

Often, when we’re presented with a new piece of technology to learn, it’s in one of two circumstances. Either it’s something we chose for ourselves and are excited to start playing with, or it’s something imposed from above, leaving us considerably less excited. The more interested we are in learning something, the more likely we are to explore, play, and find multiple ways of doing the same thing, and the more we focus on the skill of using it. When something is assigned to us, however, we’re more interested in just completing the task, and we build a fragile routine that breaks when a change is introduced. The research backs this up:

"[M]indfulness theory suggests that some staples of information systems design, such as the transfer of routines between contexts, the use of highly specific instructions, and the assumption that information gathering necessarily leads to greater certainty, can hinder mindfulness with significant detrimental consequences…

“Users’ willingness to uncritically accept software-generated results demonstrates how easy it is for systems to promote routinized, mindless use that can ultimately undermine reliable performance.”

Reliability, Mindfulness, and Information Systems

Boom. The less we think about the systems we use, the more likely we are to just accept the results, or give up when faced with difficult tasks. Not an effective way of using our devices, but a boon for the freelance IT support industry.

Here’s an example of mindful learning from my own life: in college, over a decade ago, I decided to study Computer Science. I wanted to learn how to program, and—in time—make computer games. Despite my interest in the subject, the top-down pedagogy of my school’s CS program made learning a chore. I passed CS101 by the skin of my teeth. The next course in the sequence, I ended up retaking before failing out—but that’s another story. Last year, when I began work on Just Do the Thing, I found the learning process much more pleasant. This was because I was directing myself, building what I wanted to build, figuring it out as I went, and using Google when things got hairy.

In other words, when we want to learn, we do it in a skill-based mindset. When we have to learn, we do it in a task-based mindset. Mindful learning only happens when we want to learn. Making that leap is hard. If there’s a crappy application or an unpleasant process you need to learn for your job, mustering up the desire to learn it is not an easy task. It doesn’t help that a lot of technology training and systems design isn’t conducive to mindful, skill-based learning.

We can, however, shift from a mindless, task and routine-based approach to using our technology, into a mindful, skill-based approach. It just requires stepping out of ourselves and our routines for a moment to see exactly what we’re doing. If you’ve ever gained new clarity into a workflow after talking with your tech support representative, you’ve experienced this first hand.

“Interaction with technical support personnel helps users momentarily transition to mindful consideration of their situation. This shifts users’ focus from the goal to the process, increases the salience of technical details and specific actions, and forces them to consciously attend to the current state of the system (as opposed to the expected state)…”

Reliability, Mindfulness, and Information Systems

But you don’t need Nick Burns, Your Company’s Computer Guy, to knock you out of a routine. We can do it ourselves, with a little bit of effort and awareness of our intention. Redesigns and new applications give us an opportunity to re-evaluate the mental systems we have created for our tools, and find ways to do better. All the mindfulness in the world won’t fix a bad interface, but it can make it easier for us to deal with one.

Computers may be unpredictable boxes of change, but we can learn how to adapt to those changes and roll with the punches. A good place to start is to shake up your own workflows. Find another way to do a task you do every day, even if it takes longer. Explore the features already built into your software. Heck, try reading the fu… friendly manual, or at least the in-app help when it’s available. These are little things, but they can quickly dislodge the blocks to understanding what we’re doing. Only then can we start working with our technology instead of just using it.