Computer Says No, But A Human Taught It To
The computer is a cold, hard, logical, and rational thing. It does not care for your squishy human ambiguity, nuance, and irrationality. The computer lives in a world of zeroes and ones, of pure logic and reason, and it is always correct. Quibble all you want with its results, but you’ll get nowhere. The problem isn’t the computer; it’s between the chair and the keyboard. It’s you.
This is the attitude reflected in the technologists who dream of algorithms and software eating the world. Instead of all the ambiguous, flawed human beings doing all the work, the world will be run by intelligent, logical, hyper-rational software AIs. Humans will be freed to do all the creative work that the machines can’t—and also to program the machines, of course. Maybe, eventually, some of the machines can program themselves, but in the meantime, better learn to code, ’cause it’s gonna be the only job left.
Problem is, of course, that computers aren’t logical or rational. They’re just good at following directions. Those directions might not make any logical or rational sense, but as long as they’re valid and executable, the computer will follow them to the letter. If the computer is given instructions that encode the biases, unconscious or not, of the programmer, the results will contain those same biases. Nobody likes to hear this, if only because so many developers love to see themselves as logical, rational, and unbiased. Problem is, there’s no such thing as a completely logical, completely rational, completely unbiased human being.
Those technologists who play up artificial intelligence as the solution to human foibles either don’t understand that AI will inherit those same human foibles, or don’t care. The latter is much more sinister, but either way, you still end up with what Maciej Ceglowski so succinctly calls “money laundering for bias.” The casual disregard for algorithmic bias, the blind ignorance of the very human problems that humans build into their systems, acting as if the compiler or interpreter washes away the original sin of human bias, is already causing problems.
But it’s not the technologists who are having their photos tagged as gorillas, seeing ads suggesting they might be criminals, not seeing ads for high-paying jobs because of their gender, or being asked to pay more for the same service based on the racial makeup of their neighborhood. Instead, technologists like Elon Musk are more worried about AIs becoming smarter than humans—the plot of a million terrible science fiction movies. If the AI missteps of the last few years are any indication, it will be a long time before any AI becomes smarter than us—especially since we’re the ones building them.
The sooner we wake up to this fact and start addressing the problems of bias in algorithms, the better. One solution is to open the playing field of development to a wider, more diverse group of people. Or, even better, to rethink the degree to which we want to use automation and algorithms in the first place. If these pieces of software aren’t going to be any less biased and flawed than the humans who make them, what’s the benefit? Instead of the impassive black box of a computer algorithm making the decision, let’s go back to the human being. At least then we have a chance to appeal.