On App.Net, Patrick Rhone linked to a pouch for your phone that blocks cell signals. The idea is that you can put your phone in there, freeing you from the distractions it presents, so you can actually pay attention to whatever you need to do. In the same post, Patrick also notes, with only the slightest hint of sarcasm, how hard the off button is to use. The panacea of putting your phone in a portable Faraday cage is a textbook example of solving the wrong problem. If you’re too busy checking your phone to pay attention to the world around you, the problem is not the phone; the problem is you.
Technology is an enabler, in both a positive and a negative sense. We have new ways to create, communicate, and control our lives, but we also have new ways to distract ourselves from creating, communicating, and controlling our lives—especially the latter. When we give in to the impulse to check Facebook twice a minute to see if someone “liked” our witty status update, we’re habituating ourselves to the idea that we need to know now, right now, whether our friends think we’re amusing. It doesn’t help that a lot of these services, and the devices we use them on, are designed to buzz, chime, and vibrate with every virtual interaction, but even when they don’t, it’s easy to fall into the habit of needing to know.
The services aren’t to blame. Neither are the devices. They are tools for us to use to our own ends, and a tool can be neither good nor bad; it’s the application and the intent that determine that. In the case of constant notifications and the endless need to check for updates, a lack of intent may also be the problem. I admit that, during periods of mental downtime during my day, [1] I will whip out my iPhone and do a run of my Twitter feed, App.Net feed, Facebook, and personal e-mail. If I post something, I’ll do this run again, just to see if I missed anything, despite knowing that I’ll get a push notification if someone replies to me on App.Net. It’s typically subconscious, and acknowledging that is the first step.
Paying $38(!) for a pouch that blocks cell signals, no matter how well designed, is only treating a symptom. I know that if I can’t check Twitter on my phone at work, I’ll check it on the web. If I disable Wi-Fi on my work machine… well, I can’t get work done. So, maybe I’ll install apps that prevent me from accessing non-work related things for a certain period of time. Except that I can typically disable them with minimal effort. [2] The pouch may keep my phone from buzzing when I get an @-reply on App.Net, but when I take it out, the notifications will be waiting. Instead, I can permanently disable push notifications for Riposte for free, which I just did.
Buying toys for your toys isn’t going to help you use those toys better. Products like the cell signal blocking pouch are like putting a bandage on a tumor. It covers up the problem, but it doesn’t fix it. The problem exists in a different space entirely. It’s internal, it’s hard as hell to get at, and it is a real pain in the ass to fix. However, it is fixable, and we have the tools we need to get it done. Some of the tools we need come baked into the devices and services that we use to aggravate the symptoms, but most of the tools exist within us. It just takes the willpower to use them, and they’re a lot easier on your wallet.
Approximately every ten minutes or less, by my very bad estimation. ↩
This is not a joke. During a period of unemployment, I decided to install an extension to my browser that blocked “fun” sites from 9 AM to 5 PM. I could only disable it by typing a random fifty-character string. I am very good at typing random fifty-character strings. ↩
What is Facebook to most people over the age of 25? It’s a never-ending class reunion mixed with an eternal late-night dorm room gossip session mixed with a nightly check-in on what coworkers are doing after leaving the office. In other words, it’s a place where you go to keep tabs on your friends and acquaintances.
Are we on these services to communicate, or are we on these services to show off? This post has me thinking a lot about how I use social networking. Hopefully, it’ll have you thinking too.
It’s possible now, for only a few hundred dollars, to learn nearly everything about your body. You can get a fitness tracking device to monitor how much you move, how well you sleep, and how many calories you burn. You can get a Wi-Fi-enabled scale to track and plot how much you weigh. You can get a Bluetooth monitor to measure your pulse and your blood pressure. You can get an app that pulls all of this data together. With all these tools in hand, you can generate the data you need to make a Nicholas Felton-style annual report on yourself.
Proponents of the Quantified Self movement suggest that measuring everything you do is a pathway to better health and a better life. I’m not going to say they’re wrong, either. The facts say they’re right. If you can see that every time you go to bed at three in the morning, because you were out at the bar the night before, you wake up late and feel terrible, you can then decide to not go to bed at three in the morning. This is an extremely simple and reductive example, but the point remains that knowing facts about yourself and your body does give you an extra layer of perspective towards making behavioral changes. Now, the technology is there, reasonably priced enough to be within reach of anyone who can afford a couple hundred dollars in gear.
This works well if you’re the sort of person gamification is made for. In the parlance of role-playing games of old—the sort with pen, paper, and character sheets with numbers representing stats—someone who strove to max out their character in all aspects was known as a munchkin. There are plenty of Quantified Self adherents who are in it just to know more about themselves, but I worry that the Quantified Self movement may lead the same phenomenon to grow in the real world. I don’t think it’s a huge leap to imagine it, either. Go to the gym and look at the muscle-bound guys showing off for each other, and ask yourself whether the munchkin qualifier might apply.
Of course, there are also parts of the self that aren’t so easily reduced to numbers. You can know the number of steps you walked, but there’s plenty about us that is more nebulous. To be blunt, you can’t quantify “happiness”. It lacks a scale, or even a sense of a baseline. What does it mean to really know oneself? Quantification of the quantifiable is a start, but the only measurement that counts in the end is how you feel. Hacking the body is one thing; hacking the mind and the soul is another. One day, perhaps, the data scientists will team up with the philosophers—there are enough of them looking for work—to build the tools needed for the Quantified Soul movement. Then, things will get interesting.
Whenever we interact online, particularly in text, a lot gets lost. No emoticon, Markdown syntax, mock HTML, or simple caging can truly capture the tonalities of actual speech. Body language is completely out. We’re left with a mode of communication where intent needs, even begs, to be defined early on. I suspect that this is the reason why a lot of online communication tools can become a method of broadcasting cruelty. Part of how we come to know people, in the real world, is through knowing their face, the tone of their voice, their mannerisms. We are able to personify someone, even if we don’t know their name. So much communication on the Internet, even in the age of Social Media, is de-personified, and that’s where the trouble starts.
De-personification of the people we deal with on our communication mediums allows us to forget ourselves. When we see the person on the other end as a non-person, I suspect this triggers a change in how we choose to act. We’re freed of the traditional expectations of interpersonal communication and the concepts of basic civility. So, we’re less judicious with our choice of words, we’re more aggressive in our stances, and we’re less willing to consider anything that differs from our point of view. De-personification is also a stepping stone to outright dehumanization, and the lifting of all boundaries on behavior. Once a person is seen as something other than human, history has shown us time and time again what happens.
Anonymity is merely an additional layer of armor in a dangerous world of de-personalized communication. This may have been part of the reason Google insisted on real names and images for profiles on Google Plus. [1] It’s a way to keep people slightly more civil and honest. I don’t know how effective it is, however, because of the point I made at the start of this essay. Online communication is still, largely, textual, and strips out the very things that help us define a person in a human space.
It takes a conscious act of will to see the people we interact with online as more than just pixels on a screen. Somewhere, on the other side of all those cables and boxes, sitting at a keyboard, is another person just like you and me. They think, they feel, they sense, and they communicate just like us, because they are us. Empathy may be innate, but I think only to a degree. Empathy is a skill we develop and learn through interacting with people, especially in a space where we can get the full range of interaction. When someone lives in a space where interaction is filtered down to words on a screen, unless a sense of empathy has been built ahead of time, it’s hard for them to see the people behind those words as anything more than words.
The other part being that if they know who you really are, they can target ads to you better, but that is beyond the scope of this essay. ↩
After my piece on Google Glass went out, I got into an interesting discussion on App.Net with William Kujawa and Max Jacobson. William suggests that further iterations of wearable computing will avoid the obstacles to mainstream adoption, while Max suggests some use cases for it that might help mainstream the concept. I’m still unconvinced, and it comes down to one thing. The defining ideal of wearable computing is omnipresent data. Few people have any need for such a thing, and that is where the problem lies.
No wearable computer is capable of providing true omnipresent data, unless you want to wear a belt made of batteries, but it’s reasonable to assume that in a couple of generations, we’ll have Google Glass-esque hardware that can last for twelve hours of normal use. That’s twelve hours a day for you to have a display in your vision, and twelve hours a day of notifications and data you can’t avoid unless you physically remove the device. Glued as we are to our smartphones, we can always shove them in our pockets or bags. Glasses, by design, are not temporary things. They go on your face and stay there, though I don’t take mine into the shower.
As long as this thing is on your face, you’re seeing, sending, and receiving data. You have omnipresent data in your face at all times, but outside of certain specialized tasks, you don’t need data in your face at all times. That’s where the Google Glass concept, and wearable computing in general, falls down. Nobody, least of all Google, has provided the ordinary user with a compelling use case for something that does what Google Glass does. Fundamentally, Google Glass is a general purpose computing device, stuck to your head, with a display that’s constantly in your vision. The data will change depending on what apps you have installed and running, but it’s still data, and it’s still in your face.
Harry C. Marks’s experience with Google Now had me thinking. He notes that Google Now is mostly untapped potential, considering all the data Google collects on us. Still, some consider Google Now to be the “killer app” of Glass, popping up notifications as we need them, based on what we’re doing. I think there’s a subtle but important difference between activity-based notifications on our smartphones (or maybe our “smart watch”), as opposed to in our face. By way of example, check out this video of a water-based stop sign in Sydney that keeps too-tall trucks from smashing into a tunnel entrance, and tell me you don’t jump when it pops up.
A smartphone beeping, vibrating in your pocket, or lighting up in your peripheral vision is jarring, but it’s not the same as having it in your face. It’s easier to ignore something on the periphery, by choice or otherwise. It takes a physical act to see, acknowledge, and choose to act on the alert presented. That same flaw is why Glass or a similar device would succeed in a specialized environment, where a heads-up, in-your-face alert means a faster reaction. I can see a case for, say, a paramedic on patrol wearing a Google Glass-like device to let them know about incoming calls. Outside of that space, how many of us need up-to-the-minute data about anything, wherever we may be or whatever we may be doing?