Mindful Tech, Part 7.5: Our Data Trails, Ourselves Continued

Before you read this, take a moment, and check out Take This Lollipop. Fair warning, the site requires Flash, and it needs to connect with your Facebook account to work. It’s worth trying, at least once, and you can always disable its Facebook access when you’re done watching.

Go ahead. I’ll be here when you’re done.

Take This Lollipop is creepy, and a bit heavy-handed, yet it makes a point about who has access to your data. It also reveals the potential of our data to create narratives. In a world where our data is constantly used to construct a specific narrative about us (e.g., you’re a White Male, age 18–35, with an interest in Consumer Technology and 80s Music, who is also $36,000 in debt, so here are Relevant Advertisements), we have the power to use our data trails to create narratives about ourselves as well.

Recently, I had the pleasure of seeing a talk by Lam Thuy Vo, a Data Journalist and Data Artist, at Facets 2016. She showed off a series of personal projects that used data to examine the very human lives of herself and others. These include Quantified Breakup, which examined her own data on movement, messaging, finances, and more, in the wake of her divorce. It’s a fascinating and different way of thinking about data, and a great contrast to the almost paranoiac view in the previous Mindful Tech piece. She introduced us to Take This Lollipop, as well.

Data trails are more than just what’s collected for advertising purposes. We collect data on ourselves, deliberately and not-so-deliberately, and in ways we don’t even think about. If you wear a fitness tracker, you’re collecting data on yourself deliberately. If you carry an iPhone, you have a record of everywhere you go, not so deliberately. Data trails encompass the thoughts we post to Twitter, the emails we send on Gmail, our browser histories, the music we listen to on Spotify, anything we do online, for better and for worse.

It’s becoming more and more impossible not to opt in to even some of the most egregious data collection. For example, when I was looking for work, I discovered pretty quickly that if I didn’t have a LinkedIn profile, as far as most employers were concerned, I didn’t exist. This may not be an issue in manual labor fields, but if you want a desk job where you’re moving data around, being off LinkedIn means being invisible. When all of your friends and family are on Facebook, and you’re not, how does that change your social landscape in the real world? And, of course, what happens if you’re blocked from one of these networks for whatever reason? [1]

There are no clear answers here. Lam brought up the idea of a Digital Bill of Rights that determines who has the right to our data, and when. Attitudes toward data privacy differ between the United States and other parts of the world. Europe has ideas like the Right to be Forgotten, but when the Internet is dominated by American corporations with American ideas of privacy and data retention, attempts to legislate our way out of this are doomed to be insufficient.

In the interim, the best option is to learn about your data, and to take ownership of it. Ownership of data matters. One thing Lam pointed out in her talk is that it’s possible to pull your data out of many of these services. Whether it’s human-readable is another matter. The best you can typically hope for is a pile of CSV files, which you can manipulate with the most humble of data analysis tools: Microsoft Excel and PivotTables. It’s then up to the viewer to create a cohesive narrative from that data: a story with a beginning, middle, and end.
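To make that concrete, here’s a rough sketch of the PivotTable approach in Python with pandas, counting messages per contact per month. The “messages.csv” file name and its “timestamp” and “contact” columns are my assumptions for the example; every service labels its export fields differently.

```python
# A minimal sketch, assuming a hypothetical "messages.csv" export with
# "timestamp" and "contact" columns -- real exports vary by service.
import pandas as pd

# Load the export; parse_dates turns the timestamp strings into datetimes.
df = pd.read_csv("messages.csv", parse_dates=["timestamp"])

# The equivalent of an Excel PivotTable: message counts per contact per month.
pivot = df.pivot_table(
    index=df["timestamp"].dt.to_period("M"),  # rows: one per month
    columns="contact",                        # columns: one per contact
    aggfunc="size",                           # cell value: number of messages
    fill_value=0,
)

print(pivot)
```

The same grouping logic works for location pings, listening history, or spending records; only the column names change.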

A while back, I wrote about how I want to know what the services I use know about me.

“If I shouldn’t worry about the data I feed to Google, Facebook, and a whole holy host of similar companies and services out there, why not be more transparent about what data is being collected, how, and what they know about me? I want to see a simple, clean, human readable page on every service I feed my personal data to that tells me every last piece of information that they know…”

There’s an opening for services that can do this for people, though the privacy risks of aggregating all this data together are significant. If a malicious actor gets into a service that houses the aggregation of all of our personal data, it’s not hard to see the potential for abuse. It would be a revolution in doxing alone. Instead, I’d like to see tools that live in the user’s space, off the cloud, that let us analyze and identify the stories in our data. The better to know what we’re making, what we’re leaking, and what we should be deleting.

And even deleting our data is problematic. Many websites’ databases are designed such that it’s easier to mark a record as inactive than to remove it entirely. This is one part lazy design, and one part technical limitation. How can we be sure that the data we’ve deleted is truly gone when we want it gone? What happens when the data trails we thought were lost when a service died get bought up by another company? The truth is, we don’t know. And that makes thinking about it all the more essential.
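As an illustration of that “mark as inactive” pattern, often called a soft delete, here’s a minimal sketch in Python with SQLite. The posts table and its deleted flag are invented for the example, but the shape is common in production schemas.

```python
# Illustrative sketch of a "soft delete": table and column names are
# invented, but the pattern is widespread.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT, deleted INTEGER DEFAULT 0)"
)
conn.execute("INSERT INTO posts (body) VALUES ('an embarrassing old post')")

# What "delete" often means in practice: flip a flag, keep the row.
conn.execute("UPDATE posts SET deleted = 1 WHERE id = 1")

# The application filters flagged rows out, so the post looks gone...
print(conn.execute("SELECT body FROM posts WHERE deleted = 0").fetchall())  # []

# ...but the data is still sitting in the database.
print(conn.execute("SELECT body FROM posts").fetchall())  # [('an embarrassing old post',)]
```

From the user’s side of the query, the post is gone; from the database’s side, it never left.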


  1. This is huge. Facebook’s “Real Name” policy has had a chilling effect on transgender people, or anyone who needs a pseudonym to avoid harassment and abuse, locking them away from digital support networks, family, and friends.  ↩