When a publication like InfoWorld is talking about ethics in technology, it's a sign something is up. While Peter Wayner hits the nail on the head about the questions we need to be asking about the technology we make and use, he's quick to note that "…ethics courses have become a staple of physical-world engineering degrees, they remain a begrudging anomaly in computer science pedagogy." I'm sure he's right, but we live in a world where GitHub is considered a résumé. One can get a programming job, or even start a new company, by teaching yourself to program. Adding ethics to the CS pedagogy is a great idea, but it doesn't help those who lack a formal education.
These ethical dilemmas, especially "Whether – and how – to transform users into products" and "How much should future consequences influence present decisions," should be part of the dialogue. It seems that, rather than trying to figure out answers to these dilemmas, we go for the easy assumptions: yes, we should transform users into products, and no, we shouldn't think about future consequences. The former is easily explained as a side effect of what drives returns on VC investments, the only thing close to a reliable big bet you can make in this market. The latter is an extension of the "fail early, fail often" ethos of the modern Silicon Valley and its children.
"Fail early, fail often" is dangerous, as it leads to extreme short-term thinking. It's often spun as an incentive to try new ideas and refactor them if they don't pan out, which is a good strategy. However, when the bar for failure is set too low, a company can abandon a good idea for the new and shiny, even when it could succeed given more time and effort. Some of this stems from the push for returns on VC investment when a company doesn't become a runaway success after a few rounds. In other cases, it's a get-rich-quick mentality on the part of the founders.
The lack of long-term thinking is also baked into the culture of some of the giants in the technology space. Google and Facebook alike put little thought into the ethics of what they have to do to drive more people into their ecosystems, collect data, and sell ads. Their bottom line is tied to it. Facebook expanding into virtual reality through the purchase of Oculus may imply that Mark Zuckerberg wants to push the possibilities of what Facebook can do beyond its current monetization strategy, but who can say? Google's various pie-in-the-sky projects seem to be more about goodwill in the tech community than about finding a way to improve what it tries to present as a core business. How do military robots help "organize all the world's information"?
We're living in a dangerous time. Heartbleed is a high-profile example of the risks we face in giving up so much of our data to these fast-moving systems, which spend far more time convincing us to disgorge our lives into them than they do protecting our data. It's easier to focus on getting us to surrender our information than it is to protect it, and the business case is stronger, too. I don't buy the argument that it's the "nature" of the network that things are the way they are. We all define what this network is, whether we are creators or consumers, a line that is becoming increasingly blurred. We all have a voice in these ethical dilemmas, and it's time we actually had the debate.