I believe that – over time – technology does move us forward. I further believe that, even with all the unintended consequences it brings, technology has made the world a better place to live in. I would rather step forward with my children and grandchildren (the first of whom has just arrived) into a more advanced world than step backwards into the world of my grandparents, or my great-grandparents. We now live longer and better lives, thanks in large part to technology. This, I’m sure, makes me a techno-optimist.
But my optimism is of a pragmatic sort. I’m fully aware that the path forward is not a smooth one. There are bumps and potholes aplenty along the way. I accept them along with my optimism.
Technology, for example, does not play all that fairly. Techno-optimists tend to be white and mostly male. They usually come from rich countries, because technology helps rich countries far more than it helps poor ones. Technology plays by the same rules as trickle-down economics: a rising tide that will eventually raise all boats, just not at the same rate.
Take democracy, for instance. In June 2009, journalist Andrew Sullivan declared “The revolution will be Twittered!” after protests erupted in Iran. Techno-optimists and neo-liberals were quick to hail social media and the Internet as the saviours of democracy. But, even then, the optimism was premature – even misplaced.
In his book The Net Delusion: The Dark Side of Internet Freedom, journalist and social commentator Evgeny Morozov details how digital technologies have been used just as effectively by repressive regimes to squash democracy. The book was published in 2011. Just five years later, that same technology would take the U.S. down a path that came perilously close to dismantling democracy. As of right now, we’re still not sure how it will all work out. As Morozov reminds us, technology – in and of itself – is not an answer. It is a tool. Its impact will be determined by those who build the tool and, more importantly, those who use it.
Also, tools are not built out of the ether. They are necessarily products of the environment that spawned them. And this brings us to the systemic problems of artificial intelligence.
Search is something we all use every day, and most of us probably never thought of Google (or other search engines) as biased, or even racist. But a recent study published in the journal Proceedings of the National Academy of Sciences shows that the algorithms behind search are built on top of the biases endemic in our society.
“There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s psychology department and the paper’s lead author.
To assess possible gender bias in search results, the researchers examined whether words that should refer with equal probability to a man or a woman, such as “person,” “student,” or “human,” are more often assumed to refer to a man. They conducted Google image searches for “person” across 37 countries. The results showed that the proportion of male images returned by these searches was higher in nations with greater gender inequality, revealing that algorithmic gender bias tracks with societal gender inequality.
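To make that kind of analysis concrete, here is a minimal sketch in Python of the underlying idea: given a per-country share of male faces among “person” image results and a gender inequality index, it measures how strongly the two move together. The country labels, numbers, and the pearson helper are illustrative assumptions for this sketch, not the researchers’ data or code.

```python
from math import sqrt

# Hypothetical inputs (illustrative values, not the study's data):
# share of male-presenting faces among "person" image results per country,
# and a gender inequality index where higher means more unequal.
male_share = {"A": 0.52, "B": 0.61, "C": 0.68, "D": 0.74, "E": 0.80}
inequality = {"A": 0.10, "B": 0.25, "C": 0.35, "D": 0.48, "E": 0.60}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

countries = sorted(male_share)
r = pearson([male_share[c] for c in countries],
            [inequality[c] for c in countries])
print(f"correlation between male image share and inequality: r = {r:.2f}")
# A strongly positive r means the more unequal a society, the more
# male-skewed its results for a gender-neutral query like "person".
```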
In a 2020 opinion piece in the MIT Technology Review, researcher and AI activist Deborah Raji wrote:
“I’ve often been told, ‘The data does not lie.’ However, that has never been my experience. For me, the data nearly always lies. Google Image search results for ‘healthy skin’ show only light-skinned women, and a query on ‘Black girls’ still returns pornography. The CelebA face data set has labels of ‘big nose’ and ‘big lips’ that are disproportionately assigned to darker-skinned female faces like mine. ImageNet-trained models label me a ‘bad person,’ a ‘drug addict,’ or a ‘failure.’ Data sets for detecting skin cancer are missing samples of darker skin types.”
Deborah Raji, MIT Technology Review
These biases in search highlight the biases in a culture. Search brings back a representation of the content that has been published online, a reflection of a society’s perceptions. In these cases, the devil is in the data. The search algorithm may not be inherently biased, but it does reflect the systemic biases of our culture. The more biased the culture, the more that bias will be reflected in technologies that comb through the data the culture creates. This is regrettable in something like image search results, but when these same biases show up in the facial recognition software used in the justice system, the consequences can be catastrophic.
In an article in Penn Law’s Regulatory Review, the authors reported that, “In a 2019 National Institute of Standards and Technology report, researchers studied 189 facial recognition algorithms—“a majority of the industry.” They found that most facial recognition algorithms exhibit bias. According to the researchers, facial recognition technologies falsely identified Black and Asian faces 10 to 100 times more often than they did white faces. The technologies also falsely identified women more than they did men—making Black women particularly vulnerable to algorithmic bias. Algorithms using U.S. law enforcement images falsely identified Native Americans more often than people from other demographics.”
Most of these issues lie with how technology is used. But what about those who build the technology? Couldn’t they program the bias out of the system?
There we have a problem. The thing about societal bias is that it is typically recognized by its victims, not by those who propagate it. And the culture of the tech industry is neither gender balanced nor diverse. According to a report from the McKinsey Institute for Black Economic Mobility, experts in tech believe that, on the current trajectory, it would take 95 years for Black workers to reach an equitable level of private-sector paid employment.
Facebook, for example, moved less than a single percentage point in its hiring of Black tech workers, from 3% in 2014 to 3.8% in 2020, while its hiring of women improved by 8% over those same six years. Only 4.3% of the company’s workforce is Hispanic. This essential whiteness of tech extends to the field of AI as well.
Yes, I’m a techno-optimist, but I realize that optimism must be placed in the people who build and use the technology. And because of that, we must try harder. We must do better. Technology alone isn’t the answer for a better, fairer world. We are.