Essays on Technology and Culture

Can Empathy Scale to the Internet?

Cynicism is easy, especially when everything you can see serves as justification for it. It also blinds you to anything that might contradict that same cynicism. Reading about the growing Internet counterculture of racism and misogyny that festers in the Web’s darkest corners, and seeing it not only refuse to be disinfected by sunlight but be actively encouraged—cynicism is an easy refuge.

The problem with cynicism is that it solves nothing. A cynic may be a failed idealist beneath the prickly exterior, but it’s not for want of trying. Because we cynics have tried—or tried to try—and seen our efforts go nowhere, we take our ball and go home to snark from the sidelines. Cynicism works for us because it feels good. If we can’t save the world, let’s at least enjoy watching it fall apart. Cynicism also eats away at our empathy, and empathy is what we need most now. This came into focus for me while reading a wonderful piece by Parker Molloy. Her conclusion stuck with me:

“[S]o long as we look at the world through the lens of objective good versus objective evil, we’ll never truly be able to understand why anyone does anything… The world would be a better place if we could all learn to empathize a bit more with one another… to not view people as pure evil or pure good, and to understand that we’re all in this world together so we might as well make the best of it we can, as one big happy human family thing.”

To get past the false dichotomy of “objective good versus objective evil,” we need to develop the skill of empathy—and it is a skill. Some of us have a more developed, innate sense of empathy than others, much in the same way that some of us have a more developed sense of musicality than others. That doesn’t mean whatever degree of skill we have in empathy can’t be improved through effort and practice.

The Internet and social media provide countless opportunities to practice empathy. What comes through our timelines and streams, whether we agree with it or not, comes from a real place and real emotion. This is true, whether we’re seeing anger, joy, sadness, bemusement, threats and abuse, or support and love. Even the darkest, cruelest, and most cynical attempts at humor come from a place of genuine emotion. Understanding this is the first step—and that step is the hardest of all.

Online communication remains text-based for the most part, meaning we lose much of the metadata of conversation: facial expressions, tone, and—too often—context. A Tweet in isolation offers at most 140 characters of information. Its place in a larger conversation is lost, making it easier to decontextualize and for someone to apply their own meaning and agenda to it. There are imperfect methods—hacks, really—to bring that missing data back into our online conversations, but an emoji or a GIF can only go so far. For minds that expect more information in a conversation than the basic content of a message, communication, and thus empathy, becomes all the more difficult.

This may explain why so many view what happens online as being less than “real.” How can it be, when all the hallmarks of human interaction are lost to the medium? That unrealness also gives us license to be someone other than ourselves—in whatever capacity we could be said to have one self—when interacting online. The normal laws of behavior and propriety are suspended, and we are free to express ideas and behaviors we never would in the “real” world. In psychology, this is known as the Online Disinhibition Effect. The effect is very real, and the data speaks for itself. For example:

“A Johns Hopkins University study in 2007 found that 64 per cent of bullied children were exclusively attacked online. That is, many children who were habitual bullies on social media would completely refrain from this behaviour when meeting their victims in person.”

The above data point comes from a remarkable essay called “Possessed by a Mask” by Sandra Newman for Aeon Magazine. Newman draws parallels between the historical role of masks in human society, and the way we behave behind the mask of the Internet.

Even if we’re using our real names, we’re so disconnected as to be masked by default. When we’re masked in a room of other masked people, the rules often stop applying to us. It takes a conscious act of will to see past the masks. As Newman says in her conclusion: “Above all, we should remember that, behind the masked figures that surround us, there are people as vulnerable, fallible, as real as ourselves.”

While empathy is hard before you add the Internet, that’s an explanation, not an excuse. Ideally, empathy would be baked into every social and communication product from the beginning, but empathy, as a concept and a skill, is not something technology companies value in their products.

There are many reasons for this, beyond the nature of the medium. One is the massive amount of privilege afforded to those who build our communication tools. If you’ve never experienced abuse, harassment, or even the inevitable painful memories that come with time, you won’t think about it. It becomes a blind spot in product development, further deprioritized in favor of juicing the numbers, monetizing the service, and generally serving “investor storytime” to keep the money rolling in.

When the companies that define our online communication start to take the abuse of their platforms seriously, we’ll finally hit the turning point on the technical problems of harassment and abuse. It’s largely been lip service until now. We’ve seen it on Twitter through GamerGate, with Reddit, and even at South By Southwest. But we can no more place the blame for tech companies’ failures of empathy entirely at the feet of venture capitalists than we can place it entirely at the feet of the companies they fund, the engineering teams building the products, or the users.

We’re all to blame at some level, and we’re all responsible for finding a solution. It is possible to build systems that, on a technical level, if they don’t strengthen empathy outright, at least make it prohibitively difficult to be cruel and abusive in the first place. Makerbase is a great example, and Anil Dash’s “8 Steps” are a good start for anyone entering the social space. It starts with a willingness to think about these problems before they occur. And, as long as we’re sticking to the VC model, it takes a willingness for VCs to reward companies that think about these problems.

As for us, the end users who have to live with these flawed channels of communication? Thank goodness for the work of people at Crash Override Network and The Online Abuse Prevention Initiative for building strategies and tools to protect us from bad actors of all stripes. More can be done, but we’ve started having the conversation, and it’s slowly starting to pay off. But, any solution must have empathy at its core, for all users.

Here is where I need to make it clear that I am not equivocating. The behavior of certain groups, such as the deliberately offensive “Chanterculture,” is often indefensible. There is no excuse, no defense for the harassment, abuse, threats, and violence their victims have experienced. Yet we need to develop empathy for the abusers who lack empathy, as well as for their victims. As I said before, even the abuse comes from a place of genuine emotion. There’s more to the callous cruelty than its visible manifestations. Understanding it will go a long way toward helping the perpetrators of online abuse mend their ways, and toward finding peace for all involved.

Empathy has to work both ways for it to be effective. The challenge in scaling empathy is the struggle of developing it for those we would prefer to have nothing to do with. Empathy doesn’t mean agreeing with their viewpoint, merely trying to understand where they’re coming from. It doesn’t mean freeing abusers from the consequences of their actions, but those consequences must draw from empathy. No obstacle—a mask, a lack of information, a lack of the metadata of communication—should stand in the way. Surmounting these obstacles takes substantial effort, but it’s within our reach. It just requires us to care—and to drop our cynicism.

Internet and social media companies need to develop empathy for their users. Users need to develop empathy for other users. Top-down technological solutions can get us part of the way there, but we can also try, in our own digital lives, to be more empathetic on a daily basis. I find myself going back to Jess Zimmerman’s excellent piece: “Can the internet actually be an empathy boot camp?” Many of the points she makes I have echoed above. I’ll echo another: we’re all going to screw this up, and often.

“It’s harder now to be convincing, and easier to put your foot in your mouth; you’re virtually guaranteed to accidentally hurt someone and have to apologize… But this lack of control over your audience forces you to consider more people’s needs more deeply, to become and remain more aware of the variety of human traumas, motives, histories and concerns.”

Each of our mistakes, our failures of empathy, is a chance to learn and strengthen our skills. It feels, however, that we often don’t bother to notice those failures, for all the reasons outlined above. If our empathy is going to reach Internet scale, we have to start building it here and now. Let’s all work toward a more empathetic Internet, beginning with ourselves. Stop, slow down, and think before you post another snarky comment. Try to understand the motivations of others, and try to get less outraged over outrage. Practice, practice, practice empathy in our lives, whether we’re end-users, engineers, or product managers. This applies to the Internet and the “real” world alike, but the former is where the bigger challenge lies.