Note: No Artificial Intelligence was involved in the creation of this column.
In 1942, science fiction writer Isaac Asimov introduced the Three Laws of Robotics in his short story “Runaround,” later collected in I, Robot.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov presented the rules as coming from the Handbook of Robotics, 56th Edition, 2058 A.D. What was once an unimaginably distant future is now knocking with increasing intensity on the door of the present. And Elon Musk, for one, is worried: “AI is a fundamental risk to the existence of human civilization.” Musk believes that, Rules of Robotics or no, we won’t be able to control this genie once it gets out of its bottle.
Right now, the genie looks pretty benign. In the past year, the Washington Post has used robot reporters to write over 850 stories. The Post sees this as a win/win for its human reporters, because the robot, named Heliograf, can:
- Cover stories that wouldn’t have been covered due to lack of human resources
- Do the factual heavy lifting for human reporters
- Alert humans to possible news stories in big data sets
So, should we fear or cheer robots? I think the Post’s experiment highlights two areas that AI excels at, and indicates how we might play nice with machines.
For AI to work effectively, the dots have to be pretty well sketched out. When they are, AI can be tireless in scouting out relevant facts and data where humans tend to get bored. But humans are still much better at connecting those dots, especially when no obvious connection is apparent. We do it through something called intuition. It’s at least one area where we can still blow machines away.
Machines are also good at detecting patterns in overwhelming amounts of data. Humans, by contrast, tend to overfit – to make the data fit our narratives. We’ll come back to this point in a minute, but for now, let’s go back to intuition. It’s still the trump card we humans hold. In 2008, Wired editor Chris Anderson prematurely (and, many believe, incorrectly) declared the Scientific Method dead, thanks to the massive data sets we now have available:
“We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”
Anderson gets it partly right, but he also unfairly gives intuition short shrift. This is not a zero-sum game. Intuition and AI can and should play nicely together. As I mentioned a few weeks ago, human intuition was found to boost the effectiveness of an optimization algorithm by 25%.
Evolutionary biologist Richard Dawkins recently came to the defense of intuition in Science, saying:
“Science proceeds by intuitive leaps of the imagination – building an idea of what might be true, and then testing it.”
The very human problem comes when we let our imaginations run away from the facts, bending science to fit our hypotheses:
“It is important that scientists should not be so wedded to that intuition that they omit the very important testing stage.”
There is a kind of reciprocation here – an oscillation between phases. Humans are great at some stages – the ones that require intuition and imagination – and machines are better at others, where a cold and dispassionate analysis of the facts is required. Like most things in nature that pulse with a natural rhythm, the whole gains from the opposing forces at work here. It is a symphony with a beat and a counterbeat.
That’s why, for the immediate future anyway, machines should bend not to our will, but to our imagination.