Bias, Bug or Feature?

When we talk about artificial intelligence, I think of a real-time Venn diagram in motion. One circle is the sphere of all human activity. It’s huge. The other is the sphere of artificially intelligent activity. It’s growing exponentially. And the overlap between the two is expanding at the same rate. It’s this intersection between the two spheres that fascinates me. What are the rules that govern the interplay between humans and machines?

Those rules necessarily depend on the nature of the interplay. For the sake of this column, let’s focus on the researchers and developers who are trying to make machines act more like humans. Take Jibo, for example. Jibo is “the first social robot for the home.” Jibo tells jokes, answers questions, understands nuanced language and recognizes your face. It’s just one more example of artificial intelligence intended to be a human companion. And as we build machines that are more human, we’re finding that many of the things we thought were human foibles are actually features that developed for reasons that were, at one time, perfectly valid.

Trevor Paglen is a winner of a MacArthur “genius grant.” His latest project is to see what AI sees when it’s looking at us: “What are artificial intelligence systems actually seeing when they see the world?” What’s interesting is that when machines see the world, they use machine-like reasoning to make sense of it. For example, Paglen fed hundreds of images of fellow artist Hito Steyerl into a face-analyzing algorithm. In one instance, she was evaluated as “74% female.”

This highlights a fundamental difference in how machines and humans see the world. Machines calculate probabilities. So do we, but that happens behind the scenes, and it’s only part of how we understand the world. Operating a level above that, we use meta-signatures (categorization, for example) to quickly compartmentalize and understand the world. We would know immediately that Hito was a woman. We wouldn’t have to crunch the probabilities. By the way, we do the same thing with race.
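
To make the contrast concrete, here’s a minimal sketch in Python. Everything in it is hypothetical (the 0.74 score simply echoes the Steyerl example): the machine hands us a continuous probability, and a hard threshold collapses it into the kind of instant, all-or-nothing category a human would jump to.

```python
# A minimal, hypothetical sketch of the contrast described above.
# A face-analysis model outputs a probability; a hard threshold then
# collapses that probability into an instant category.

def categorize(p_female: float, threshold: float = 0.5) -> str:
    """Collapse a continuous score into a human-style category."""
    return "female" if p_female >= threshold else "male"

# The machine's view: a raw probability (echoing the 74% from Paglen's experiment).
p_female = 0.74

print(f"machine output: {p_female:.0%} female")         # machine output: 74% female
print(f"human-style category: {categorize(p_female)}")  # human-style category: female
```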

But is this a feature or a bug? Paglen has his own opinion: “I would argue that racism, for example, is a feature of machine learning—it’s not a bug,” he says. “That’s what you’re trying to do: you’re trying to differentiate between people based on metadata signatures, and race is like the biggest metadata signature around. You’re not going to get that out of the system.”

Whether we like it or not, our inherent racism was a useful feature many thousands of years ago. It made us naturally wary of other tribes competing for the same natural resources. As much as it’s abhorrent to most of us now, it’s still a feature that we can’t “get out of the system.”

This points to a danger in the overlap between humans and machines. If we want machines to think as we do, we’re going to have to equip them with some of our biases. As I’ve mentioned before, there are some things that humans do well, or, at least, that we do better than machines. And there are things machines do infinitely better than we do. Perhaps we shouldn’t try to merge the two. If we’re trying to get machines to do what humans do, are we prepared to program racism, misogyny, intolerance, bias and greed into the operating system? All these things are part of being human, whether we like to admit it or not.

But there are other areas rapidly falling into the overlap zone of my imaginary Venn diagram. Take business strategy, for example. A recent study from Capgemini found that 79% of organizations implementing AI feel it brings new insights and better data analysis, 74% feel it makes their organizations more creative, and 71% feel it helps them make better management decisions. A friend of mine recently brought this to my attention, along with what was, for him, an uncharacteristic rant: “I really would’ve hoped senior executives might’ve thought creativity and better management decisions were THEIR GODDAMN JOB and not be so excited about being able to offload those dreary functions to AIs, which are guaranteed to be imbued with the biases of their creators or, even worse, unintended biases resulting from bad data or any of the untold messy parts of life that can’t be cleanly digitized.”

My friend hit the proverbial nail on the proverbial head: those “untold messy parts of life” are the things we have evolved to deal with, and the ways we deal with them are not always admirable. But in the adaptive landscape we all came from, they were proven to work. We still carry that baggage with us. Is it right to transfer that baggage to algorithms in order to make them more human? Or should we be aiming for a blank slate?
