Sanspoint.

Essays on Technology and Culture

Mindful Tech, Part 7: Our Data Trails, Ourselves

How many accounts are you signed up for across the web? In my 1Password vault, there are over 300 individual logins, and I’m almost certain that more than half of them haven’t been touched in at least a year. Maybe I deleted the account, wherever that’s an option, and left the entry in 1Password. More likely, it’s just an account I can’t be free of. It’s bothersome how many websites that require a login offer no way to delete it.

All of these unused, idle accounts present a risk. They’re part of a data trail that contains Heaven knows what on me. I can hazard a guess, though: contact information, work history [1], health data, financial data, and who knows what else they’re correlating any of it with. Even with good password hygiene—I do use 1Password, after all—a data breach could be devastating. I know that I have been pwned three times. At least, those are the ones I know about…
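
That “pwned” count, for the record, comes from Troy Hunt’s Have I Been Pwned service, and auditing your own exposure is one of the few parts of this mess you can automate. As a minimal sketch (assuming Swift and HIBP’s keyless Pwned Passwords range API; this is my own illustration, nothing official), here’s how you’d check whether a password has turned up in a known breach without ever sending the password, or even its full hash, over the wire:

```swift
import Foundation
import CryptoKit

// Sketch: check a password against the Pwned Passwords range API.
// k-anonymity means only the first five hex characters of the
// password's SHA-1 hash ever leave your machine.
func breachCount(of password: String) async throws -> Int {
    let digest = Insecure.SHA1.hash(data: Data(password.utf8))
    let hex = digest.map { String(format: "%02X", $0) }.joined()
    let prefix = String(hex.prefix(5))
    let suffix = hex.dropFirst(5)

    let url = URL(string: "https://api.pwnedpasswords.com/range/\(prefix)")!
    let (data, _) = try await URLSession.shared.data(from: url)

    // Each line of the response is "HASH-SUFFIX:COUNT" for every
    // leaked hash sharing our five-character prefix.
    for line in String(decoding: data, as: UTF8.self).split(whereSeparator: \.isNewline) {
        let parts = line.split(separator: ":")
        if parts.count == 2, parts[0] == suffix {
            return Int(parts[1].trimmingCharacters(in: .whitespaces)) ?? 0
        }
    }
    return 0 // this password doesn't appear in any known breach
}

// Usage: let count = try await breachCount(of: "hunter2")
```

The k-anonymity design is the clever part: the service only ever sees a five-character prefix that thousands of other hashes share, so the check itself doesn’t add to your data trail. (The account-level search, the one behind my “three times” above, works similarly but requires an API key.)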

We often don’t think about the trails we leave behind as we traverse the web, except when the latest fiasco brews over what Facebook, Google, et al. are doing with our data this time. Then there’s a little righteous indignation, maybe an ad blocker update, and then we go back to sleep until the next outrage. Sure, we think we’re immune to whatever invasive technological development is being used to spy on us, but when was the last time you thought about the data you willingly gave up?

Getting a grasp on the data we spill out, let alone what it’s being used for, is difficult by design. It’s part of the special sauce that makes these companies money. Google, to its credit, has a page where you can see the profile it’s built on you, and you can at least opt out of the worst of it. Facebook, not so much. And all of this comes before the other services that track you, online and off: companies like Acxiom, Experian, and Equifax. These don’t operate in a vacuum either. Opting out of their databases is possible, too, if difficult.

Let’s bring it back to digital data trails. Jacoby Young has a small series of interviews—including one with me—where people audit how much they use the Big Five tech companies. Sitting for his interview gave me a chance to take stock of a process I started a year and a half ago: weaning myself off services I can no longer trust with my data. Trust, for me, is a matter of understanding what I get in exchange for the data I give up. In the case of Google and Facebook, the two services I most want to quit, I’m struggling. Yet I’m still tied to both platforms for multiple reasons.

Even the services I trust can be porous. I use the Health app on my iPhone as a central repository for data on my physical body. Apple’s implementation of Health on the iPhone is extremely secure: encrypted and inaccessible to Apple in any form. Until recently, Apple didn’t even include Health data in encrypted iCloud backups, which took security a little too far. In any case, I’m happy to trust Apple with my health data. The apps that feed into it, however… I can grant and revoke permission for apps to read and write my health data, but I can’t be certain what they’re doing with it, let alone what they’re doing with the data I feed into the apps directly. Who knows where all that is going?
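
For the curious, here’s roughly what that permission hand-off looks like from the app’s side. This is a minimal sketch of a hypothetical third-party app requesting access through Apple’s HealthKit framework; the step count and heart rate types are just examples I picked:

```swift
import HealthKit

// Sketch: a hypothetical app asking Health for permission. Requires
// the HealthKit entitlement and only works on a real device.
let store = HKHealthStore()

if HKHealthStore.isHealthDataAvailable() {
    let steps = HKObjectType.quantityType(forIdentifier: .stepCount)!
    let heartRate = HKObjectType.quantityType(forIdentifier: .heartRate)!

    // Ask to write step counts, and to read steps and heart rate.
    // The user grants or denies each type individually in the sheet.
    store.requestAuthorization(toShare: [steps], read: [steps, heartRate]) { granted, error in
        // `granted` only means the request completed without error.
        // Denied read access is deliberately invisible to the app:
        // queries just come back empty, so a refusal looks the same
        // as an empty Health database.
        if let error = error {
            print("Authorization failed: \(error)")
        }
    }
}
```

That last wrinkle is a deliberate Apple design choice: apps can’t even find out whether you denied them read access, because the refusal itself is treated as health information. It’s a good model. It just ends at the permission sheet; once an app has read your data, Apple’s guarantees no longer travel with it.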

That’s not to say we shouldn’t surrender some data. Not only is it inevitable; when we know both what we’re giving up and what we’re getting in return, the trade can seem fair. As long as someone is happy with the terms of that transaction, I can’t tell them to stop. Besides, the only alternative short of complete disconnection is to invest time, money, and work into building technologies that keep your data under your control. It’s possible, but it’s not easy—certainly beyond the reach of the average person.

So now we’re forced to either create a calculus of trust for everyone we share our data with, or just give up and let our data fall where it may. It’s easier not to think about it. Besides, what does it matter? It’s only data. But that data is increasingly personal, increasingly specific, and increasingly identifiable as us. When Google knows more about you than you know about yourself, the potential for abuse is massive.

It’s important that we see the value of what we’re giving up, and decide for ourselves what we’re comfortable with. Take stock of who you’ve shared your data with, and of the accounts you have but no longer use. See what apps are connected to your social media accounts and what data trails you’re leaving behind. Without knowing how much you’re leaving behind, how can you possibly be comfortable with the situation? Knowledge is power. If only the companies we’re giving this data up to were willing to share it with us.


  1. Job application sites are often the worst offenders when it comes to accounts you can’t delete.  ↩

The Right Device for the Right Task

An early version of this post appeared in Issue 7 of the Sanspoint Supporter Newsletter. To get it, visit the support page and subscribe for $3 a month, or make a donation of any amount.

I’ve had a couple of debates about this, mostly in a private Slack, and I’m coming around to CGP Grey’s idea of assigning his various devices to be (mostly) single-tasking machines. He explains how he uses his various iPads in Episode #26 of Cortex, with additional info on his iPad Pro writing setup in a blog post.

Note that I’m coming around to the idea of different devices for different tasks, not Grey’s specific implementation. He’ll be the first to admit that the way he works isn’t for everyone, and not everyone can afford three iPads. (To Grey’s credit, his iPad mini is an older model, not one he bought specifically as a makeshift Kindle.) Assigning specific functions to our devices has merit, in my mind, because it’s so easy to get overwhelmed by everything our devices can do. If you’re an inveterate procrastinator who’s likely to dive into an Internet K-Hole, there’s appeal in a device that doesn’t let you do that.

I’m not about to go all out and start completely disabling features on my iPhone, though the idea appeals to me. [1] Instead, CGP’s discussion has me thinking about ways I can be more focused in how I use my devices. I’m asking myself what role each device serves in my life, and how I can match what each one is good at to what I need from it.

This came into focus when I got a second Mac for my new day job. Now, I have a device that is specifically for a certain context in my life: this is my Work Computer for my Day Job. When I am on this computer, I am (ostensibly) at work. Why can’t I do the same with my other devices?

About a week ago, I snagged a Logitech Type+ Keyboard Case for my iPad Air 2 for really cheap—like $30 cheap. This makes it a lot easier to use the iPad as a dedicated writing device. I’m writing this particular newsletter on my Mac, but I’ve done a fair amount of writing with the iPad and Type+ lately, even if it hasn’t been published yet. I’m very happy with the choice. iOS may have multitasking now, but it’s still harder for me to switch modes on the iPad and dive into a distraction rathole.

I still need to figure out what roles are best for my iPhone and my home Mac. Plus, I’m thinking about my Apple Watch and how to streamline that for what it’s best at, too. It’s easy to look at CGP’s setup and go, “Hey, jerkface, not all of us can drop a bunch of money on iPads and mechanical keyboards,” but that misses the point. It’s not about buying more gear; it’s about optimizing what you have so it works best for you.

What “works best” means is a personal thing. If it means turning off Safari and all the other apps that might keep you from doing your job, then fine. If it’s streamlining down to a pair of devices that can do everything, then good for you. Instead of getting lost in the details of one person’s specific implementation, consider the ways you can apply the idea to your own digital life.


  1. There’s a good follow-up on that link I didn’t know about until writing this piece.  ↩

Mindful Tech, Part 6: Mindfully Learning Technology

Has this ever happened to you? You go to a familiar website or app, only for it to show up with a brand new interface. You stare at it in confusion, wondering where all the familiar visual landmarks have gone, and what new, convoluted way they’ve come up with to do a simple task. Or maybe IT has rolled out a new “software solution” to your machine, replacing an older system you’ve gotten used to. They promise things will be easier and faster, but all you can see is yet another monkey wrench thrown into your machinery. When this happens, it drives all of us nuts to some degree or another, even if only because of all the people around us complaining.

A while back, I wrote a response to an article about kids being unable to use computers. In it, I identified two types of knowledge about how to use technology: task-based and skill-based. Task-based knowledge is the most common way people learn to use technology. Task-based users establish routines for what they want to accomplish, using cues like the shape of icons, the physical location of buttons, and familiar situations. Naturally, these routines are thrown off by even a small change to an interface. A complete redesign? Then the user has to start all over again, and they will not be happy.

In contrast, you have skill-based users. Instead of memorizing specific steps to accomplish a task, they learn the system itself. Their knowledge can be applied to all sorts of tasks, because they know how to explore an interface and recognize how actions they perform for one task can be applied to a different, related task. When presented with a new interface to a system, the skill-based user might need a moment to adjust, but they’ll be able to relate the new interface to the mental model in their head. Task-based users navigate by the map. Skill-based users navigate by the terrain.

Most of us fall somewhere between these extremes. Maybe on our home machines, we’re skill-based speed demons who can take every curveball app designers throw our way. At work, however, we’re still trying to get used to the new CRM two years in, and dreading the day IT finally rolls out the next update. Maybe we’re a whiz on our smartphone, but trying to get anything done on the home PC is an exercise in frustration. Or maybe it’s the other way around: the PC is a breeze, but the smartphone is a frustrating Fisher-Price toy.

How do we bridge this gap? Through mindful learning.

Often, when we’re presented with a new piece of technology to learn, it’s in one of two circumstances. Either it’s something we chose for ourselves and are excited to start playing with, or it’s something imposed from above, leaving us considerably less excited. The more interested we are in learning something, the more likely we are to explore, play, and find multiple ways of doing the same thing; in other words, to focus on the skill of using it. When something is assigned to us, however, we’re more interested in just completing the task, so we build a fragile routine that breaks the moment a change is introduced. The research backs this up:

"[M]indfulness theory suggests that some staples of information systems design, such as the transfer of routines between contexts, the use of highly specific instructions, and the assumption that information gathering necessarily leads to greater certainty, can hinder mindfulness with significant detrimental consequences…

“Users’ willingness to uncritically accept software-generated results demonstrates how easy it is for systems to promote routinized, mindless use that can ultimately undermine reliable performance.”

Reliability, Mindfulness, and Information Systems

Boom. The less we think about the systems we use, the more likely we are to just accept the results, or give up when faced with difficult tasks. Not an effective way of using our devices, but a boon for the freelance IT support industry.

Here’s an example of mindful learning from my own life: in college, over a decade ago, I decided to study Computer Science. I wanted to learn how to program, and—in time—make computer games. Despite my interest in the subject, the top-down pedagogy of my school’s CS program made learning a chore. I passed CS101 by the skin of my teeth. The next course in the sequence, I ended up retaking before failing out entirely—but that’s another story. Last year, when I began work on Just Do the Thing, I found the learning process much more pleasant, because I was directing myself, building what I wanted to build, figuring it out as I went, and using Google when things got hairy.

In other words, when we want to learn, we do it in a skill-based mindset. When we have to learn, we do it in a task-based mindset. Mindful learning only happens when we want to learn, and making that leap is hard. If there’s a crappy application or an unpleasant process you need to learn for your job, mustering up the desire to learn it is no easy task. It doesn’t help that a lot of technology training and systems design isn’t conducive to mindful, skill-based learning.

We can, however, shift from a mindless, task- and routine-based approach to our technology to a mindful, skill-based one. It just requires stepping out of ourselves and our routines for a moment to see exactly what we’re doing. If you’ve ever gained new clarity into a workflow after talking with your tech support representative, you’ve experienced this firsthand.

“Interaction with technical support personnel helps users momentarily transition to mindful consideration of their situation. This shifts users’ focus from the goal to the process, increases the salience of technical details and specific actions, and forces them to consciously attend to the current state of the system (as opposed to the expected state)…”

Reliability, Mindfulness, and Information Systems

But you don’t need Nick Burns, Your Company’s Computer Guy, to knock you out of a routine. We can do it ourselves, with a little effort and awareness of our intention. Redesigns and new applications give us an opportunity to re-evaluate the mental models we’ve created for our tools, and to find ways to do better. All the mindfulness in the world won’t fix a bad interface, but it can make one easier to deal with.

Computers may be unpredictable boxes of change, but we can learn to adapt to those changes and roll with the punches. A good place to start is to shake up your own workflows. Find another way to do a task you do every day, even if it takes longer. Explore the features already built into your software. Heck, try reading the fu… friendly manual, or at least the in-app help when it’s available. These are little things, but they can quickly dislodge the blocks to understanding what we’re doing. Only then can we start working with our technology instead of just using it.

Things I Don’t Get

I sit on the side of the vast swimming pool of technology, my ankles in the water, suntan lotion covering what a bathing suit does not. I like to watch the people come and go, but sometimes I’ll dive into a part of the pool that seems appealing. Those parts are rare, however, at least from my vantage point. I’m suspicious whenever I see a crowd, even if a part of my brain wonders if they know something I don’t.

I can overcome this skepticism, at times. I was heavily skeptical of smartwatches until I decided to try a Pebble on a self-imposed dare. Now, I’m a smartwatch convert, even as more than a few Apple Watch early adopters have given up on the platform. But right now, the big things I see my fellow tech people gushing over leave me wanting. It’s not that I don’t see potential or utility in any of these, but that I don’t see enough. For some of these technologies, all I can see is downside. Best to go through the list.

Virtual Reality

Virtual Reality is big again. It’s a technology that has been on the cusp of being the Next Big Thing for over two decades, but now the industry seems to have cracked it. I’ve not tried the Oculus Rift—I don’t have that kind of money, or a desire to own a Windows PC—but I’ve played with a Samsung Gear VR. It’s come a long way from the giant, heavy, clunky helmets I used to play Duke Nukem 3D in at an amusement park in 1996. On a purely technical level, it’s safe to say that Virtual Reality has arrived.

But, so far, all I’ve seen VR used for is really neat tech demos and video games. I have yet to see any great applications that take advantage of Virtual Reality to do anything more groundbreaking than the aforementioned Duke Nukem 3D VR deathmatch from 20 years ago. There’s nothing wrong with a technology that exists purely for entertainment, but that doesn’t match the sheer hype I keep hearing. Did Facebook really buy Oculus just to make video games, or for something more?

I’m skeptical of VR because I don’t see many uses for it outside of entertainment. Some suggest it would be a great tool for engineers, and that’s reasonable. If you’re building a structure, the ability to explore it in VR is a potential boon, but is it better than a rendered walkthrough on a computer screen? I’m not sure. Even if VR makes inroads in business, what does it do for the home user, outside of entertainment? VR FaceTime? Second Life has its niche of users, but if VR is the next big thing, it’ll have to find some appeal for the millions of people who don’t want to play video games, or pretend to be six-foot-tall walking genitalia in cyberspace.

Smart Homes and the Internet of Things

The Smart Home is another idea that has been on the cusp for the last several decades. It was the subject of a Looney Tunes cartoon from 1947, for crying out loud. Omnipresent wireless connectivity has made it easier and more functional, but it still seems like a fragile technology demo to me. Why do I need to turn out the lights from my smartphone when I can just stand up and walk ten steps to the switch? Why does my washing machine need to send me a text message when it finishes a cycle, when I can just set a timer on my phone? And what do I do with all my smart home gadgets when I move?

And that’s before you factor in the security holes, or the very real possibility that the company you bought your home automation gear from might brick your device for no good reason. There’s a chance that the security stuff will get ironed out in time. The utility factor, not so much.

Self-Driving Cars

I remain skeptical of self-driving cars, not because I don’t think the technology will work, but because I feel they’re solving the wrong problem. The promoters of self-driving cars saw the existing urban and suburban infrastructure—often the sprawling, road-and-highway-focused infrastructure of the West Coast—and tailored a solution optimized for it. It’s a clever hack, repurposing existing infrastructure for a new kind of networked transportation model, but it is still a hack.

On the East Coast, or at least in the Northeastern Megalopolis where I live, there’s less—but not zero—road and highway sprawl. Jobs and homes are more centralized, and we have existing high-efficiency, high-capacity transit systems to ferry us most of the way between them. We’re better suited, I think, to mass transit. Self-driving cars may kill traffic jams dead, but in doing so they might aggravate suburban sprawl—the last thing we need.

Self-driving car technology could be a huge boon outside of personal, private transportation, but I’ve yet to see anyone talking about using self-driving trucks to carry goods instead of long-haul truckers. According to the Organisation for Economic Co-operation and Development, “trucking is by far the most harmful mode of goods transport.” We’re hauling a lot of goods around the United States by truck. Where’s Elon Musk with the Tesla Self-Driving Tractor-Trailer?

AI and Bots

Color me skeptical of the coming AI revolution, too. Microsoft’s first go at it, the Tay AI, turned into a racist cluster-expletive, and there’s no indication they’ve learned much from the experience. Caroline Sinders breaks down the problems, not just for Microsoft, but for any company that wants to be in this space. Besides, what we’re calling AI these days is really just complex algorithms with some natural language processing to get the inputs right.

I’m not worried about HAL 9000 locking me out of my apartment. I’m worried about more subtle algorithmic horrors. Many algorithms are picking up the unconscious—and conscious—biases of their creators. The results can even be a threat to national security. If the goal of Artificial Intelligence is to create systems that surpass human flaws and foibles, we’ve got an uphill climb at a slope of about 98º right now.

So What Do I Care About?

There’s a lot of cool and exciting stuff happening. Electric cars excite me, even if I’m not likely to own one—dense urban dweller that I am. Social media has its problems, but the power of getting the world to communicate together is still incredible to me. I’m amazed at what we can do with more and cheaper sensors, though I worry about who has access to that data. I swear that, one day, the context-aware computing future will finally come to pass. Medical science continues to astound me, and if people can get over their irrational fear of genetically modified foods, we could accomplish literal miracles.

But I don’t hear much about those. I’m trying to be less of a pessimist, and to assume that when there’s smoke around something, there’s fire. Still, for the ideas that are the next big thing coming around again (VR, smart homes), I can’t help but be skeptical. We’ve done this dance before. Maybe there’s real value in all of this, but it’s escaping me. None of the gushing technology press has made the case, let alone the PR departments of the companies promoting the technology.

Maybe it’s me. I don’t want to be the pessimist, stuck in his dumb home, staring into rectangles, typing his queries into a search engine, and taking the train to visit his parents like an animal while the rest of the world lives the Star Trek future. If the rest of the world is seeing something I’m not, I just wish they’d communicate it better.