There Are No Short Cuts to Being Human

The Velvet Sundown fooled a lot of people, including millions of fans on Spotify and the writers and editors at Rolling Stone. It was a band that suddenly showed up on Spotify several months ago, with full albums of vintage Americana-styled rock. Millions started streaming the band’s songs – except there was no band. The songs, the album art, the band’s photos – it was all generated by AI.

When you know this and relisten to the songs, you swear you would never have been fooled. Those who are now in the know say the music is formulaic, derivative and uninspired. Yet we were fooled – or at least millions of us were – taken in by an AI hoax, or what is now euphemistically labelled on Spotify as “a synthetic music project guided by human creative direction and composed, voiced and visualized with the support of artificial intelligence.”

Formulaic. Derivative. Synthetic. We mean these as criticisms. But they are accurate descriptions of exactly how AI works. It is synthesis by formulas (or algorithms) that parse billions or trillions of data points, identify patterns and derive the finished product from them. That is AI’s greatest strength… and its biggest downfall.

The human brain, on the other hand, works quite differently. Our biggest constraint is the limit of our working memory. When we analyze disparate data points, the available slots in our temporary memory bank can number in the single digits. To function cognitively beyond this limit, we have to do two things: “chunk” the data points together into mental building blocks and code them with emotional tags. That is the human brain’s greatest strength… and, again, its biggest downfall. What the human brain is best at is what AI is unable to do. And vice versa.

A few posts back, when talking about one less-than-impressive experience with an AI tool, I ended by musing about what role humans might play as AI evolves and becomes more capable. One possible answer is something labelled “HITL,” or “Humans in the Loop.” It plugs the “humanness” that sits in our brains into the equation, allowing AI to do what it’s best at and humans to provide the spark of intuition or the “gut checks” that currently cannot come from an algorithm.
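
To make the idea concrete, here’s a rough sketch of what a human-in-the-loop checkpoint might look like. It’s purely illustrative – the generate_draft function is a hypothetical stand-in for whatever the AI actually produces – but the pattern is the point: the AI drafts, the human supplies the gut check before anything ships.

```python
# A minimal sketch of a human-in-the-loop (HITL) checkpoint, assuming a
# hypothetical generate_draft() call that stands in for whatever the AI
# produces. The human reviewer supplies the "gut check" before anything ships.

def generate_draft(brief: str) -> str:
    # Placeholder for an AI call (e.g., a language-model API).
    return f"[AI-generated draft based on: {brief}]"

def human_review(draft: str) -> bool:
    # The human in the loop: look at the AI's output and approve or reject it.
    print(draft)
    return input("Approve this draft? (y/n): ").strip().lower() == "y"

def produce(brief: str):
    draft = generate_draft(brief)
    if human_review(draft):
        return draft        # the AI's work passes the human gut check
    return None             # rejected: back to the human expert

if __name__ == "__main__":
    result = produce("landing page copy for a vintage guitar shop")
    print("Shipped." if result else "Sent back for human rework.")
```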

As an example, let me return to the subject of that previous post, building a website. There is a lot that AI could do to build out a website. What it can’t do very well is anticipate how a human might interact with the website. These “use cases” should come from a human, perhaps one like me.

Let me tell you why I believe I’m qualified for the job. For many years, I studied online user behavior quite obsessively and published several white papers that are still cited in the academic world. I was a researcher for hire, with contracts with all the major online players. I say this not to pump my own ego (okay, maybe a little bit – I am human after all) but to set up the process of how I acquired this particular brand of expertise.

It was accumulated over time, as I learned how to analyze online interactions, code eye-tracking sessions and talk to users about their goals and intentions. All the while, I was continually plugging new data into my few available working memory slots and “chunking” it into the building blocks of my expertise, to the point where I could quickly look at a website or search results page and provide a pretty accurate “gut call” prediction of how a user would interact with it. This is – without exception – how humans become experts at anything. Malcolm Gladwell called it the “10,000-hour rule.” For humans to add any value “in the loop,” they must put in the time. There are no short cuts.

Or – at least – there never used to be. There is now, and that brings up a problem.

Humans now do something called “cognitive off-loading.” If something looks like it’s going to be a drudge to do, we now get ChatGPT to do it. This is the slogging mental work that our brains are not particularly well suited to. That’s probably why we hate doing it – the brain is trying to shirk the work by tagging it with a negative emotion (brains are sneaky that way). Why not get AI, which can instantly sort through billions of data points and synthesize them into a one-page summary, to do our dirty work for us?

But by off-loading, we short-circuit the very process required to build that uniquely human expertise. Writer, researcher and educational change advocate Eva Keiffenheim outlines the potential danger for humans who “off-load” to a digital brain; we may lose the sole advantage we can offer in an artificially intelligent world: “If you can’t recall it without a device, you haven’t truly learned it. You’ve rented the information. We get stuck at ‘knowing about’ a topic, never reaching the automaticity of ‘knowing how.’”

For generations, we’ve treasured the concept of “know how.” Perhaps, in all that time, we forgot how much hard mental work was required to gain it. That could be why we are quick to trade it away now that we can.

Bots and Agents – The Present and Future of A.I.

This past weekend I got started on a website I told a friend I’d help him build. I’ve been building websites for over 30 years now, but for this one, I decided to use a platform that was new to me. Knowing there would be a significant learning curve, my plan was to use the weekend to learn the basics of the platform. As is now true everywhere, I had just logged into the dashboard when a window popped up asking if I wanted to use their new AI co-pilot to help me plan and build the website.

“What the hell?” I thought. “Let’s take it for a spin!” Even if it only lessened the learning curve a little, it could still save me dozens of hours. The promise was intriguing – the AI co-pilot would ask me a few questions and then give me back the basic bones of a fully functional website. Or, at least, that’s what I thought.

I jumped on the chatbot and started typing. With each question, my expectations rose. It started with the basics: what were we selling, what were our product categories, where was our market? Soon, though, it started asking me what tone of voice I wanted, what our color scheme was, what search functionality was required, whether there were any competitors’ sites that we liked or disliked, and if so, what specifically we liked or disliked about them. As I plugged in my answers, I wondered what exactly I would get back.

The answer, as it turned out, was not much. After being reassured that I had provided a strong enough brief for an excellent plan, I clicked the “finalize” button and waited. And waited. And waited. The ellipsis below my last input just kept fading in and out. Finally, I asked, “Are you finished yet?” I was encouraged to just wait a few more minutes as it prepared a plan guaranteed to amaze.

Finally – ta da! – I got the “detailed web plan.” As far as I could tell, it had simply sucked in my input and belched it out again, formatted as a bullet list. I was profoundly underwhelmed.

Going into this, I had little experience with AI. I have used it sparingly for tasks that tend to have a well-defined scope. I have to say, I have been impressed more often than I have been disappointed, but I haven’t really kicked the tires of AI.

Every week, when I sit down to write this post, Microsoft Copilot urges me to let it show what it can do. I have resisted, because when I do ask AI to write something for me, it reads like a machine did it. It’s worded correctly and usually gets the facts right, but there is no humanness in the process. One thing I think I have is an ability to connect the dots – to bring together seemingly unconnected examples or thoughts and hopefully join them together to create a unique perspective. For me, AI is a workhorse that can go out and gather the information in a utilitarian manner, but somewhere in the mix, a human is required to add the spark of intuition or inspiration. For now, anyway.

Meet Agentic AI

With my recent AI debacle still fresh in my mind, I happened across a blog post from Bill Gates. It seems I thought I was talking to an AI “Agent” when, in fact, I was chatting with a “Bot.” It’s agentic AI that will probably deliver the usefulness I’ve been looking for for the last decade and a half.

As it turns out, Gates was at least a decade and a half ahead of me in that search. He first talked about intelligent agents in his 1995 book The Road Ahead. But it’s only now that they’ve become possible, thanks to advances in AI. In his post, Gates describes the difference between Bots and Agents: “Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.”
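
To make the distinction concrete, here’s a toy sketch of my own – not anything Gates or any vendor has actually built – contrasting a bot that simply answers what it’s asked with an agent that remembers what you’ve asked before and volunteers the next step.

```python
# A toy contrast between a "bot" and an "agent" in the spirit of Gates'
# distinction. Everything here is an illustrative assumption, not any
# real product's API.

class Bot:
    """Reactive: answers only what it is asked and remembers nothing."""
    def reply(self, question: str) -> str:
        return f"Answer to: {question}"

class Agent:
    """Proactive: remembers past requests and volunteers a next step."""
    def __init__(self) -> None:
        self.history = []   # memory of your activity - the key difference

    def reply(self, question: str) -> str:
        self.history.append(question)
        answer = f"Answer to: {question}"
        # Spot a pattern in past behavior and offer what you might need next.
        if any("flight" in q.lower() for q in self.history) and "hotel" not in question.lower():
            answer += " (Suggestion: want me to look at hotels for those dates too?)"
        return answer

agent = Agent()
print(agent.reply("Find me a flight to Lisbon in May"))
print(agent.reply("What's the weather like there in May?"))
```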

This is exactly the “app-ssistant” I first described in 2010 and have returned to a few times since, even down to using the same example Bill Gates did – planning a trip. This is what I was expecting when I took the web-design co-pilot for a test flight. I was hoping that – even if it couldn’t take me all the way from A to Z – it could at least get me to M. As it turned out, it couldn’t even get past A. I ended up exactly where I started.

But the day will come. And, when it does, I have to wonder if there will still be room on the flight for us human passengers.

Paging Dr. Robot

When it comes to the benefits of A.I., one of the most intriguing opportunities is in healthcare. Microsoft recently announced that, in a diagnostic challenge that pitted its Microsoft AI Diagnostic Orchestrator (MAI-DxO) against 21 general practitioners, the A.I. system correctly diagnosed 85% of 300 challenging cases gathered from the New England Journal of Medicine. The human doctors only managed to get 20% of the diagnoses correct.

This is of particular interest to me, because Canada has a health care problem. In a recent comparison of international health policies conducted by the Commonwealth Fund, Canada came in last amongst 9 countries, most of which also have universal health care, on most key measures of timely access.

This is a big problem, but it’s not an unsolvable one. This does not qualify as a “wicked” problem, which I’ve talked about before. Wicked problems have no clear solution. I believe our healthcare problems can be solved, and A.I. could play a huge role in the solution.

The Canadian Medical Association has outlined both the problems facing our healthcare system and some potential solutions. The overarching narrative is one of a system stretched beyond its resources and patients unable to access care in a timely manner. Human resources are burnt out and demotivated. Our back-end health record systems are siloed and inconsistent. An aging population, health misinformation, political beliefs and climate change are creating more demand for health services just as the supply of those services is being depleted.

Here’s one personal example of the gaps in our own health records. I recently had to go to my family doctor for a physical that is required to maintain my commercial driver’s license. I was assigned to a student doctor, given that it was a very routine check-up. Because I was seeing the doctor anyway, I thought it a good time to ask for a regular blood panel, because it had been a while since I had had one. Being a male of a certain age, I also asked for a Prostate-Specific Antigen (PSA) test and was told that it isn’t recommended as a screening test in my province anymore.

I was taken aback. I had been diagnosed with prostate cancer a decade earlier and had been successfully treated for it. It was a PSA test that led to an early diagnosis. I mentioned this to the doctor, who was sitting behind a computer screen with my records in front of him. He looked back at the screen and said, “Oh, you had prostate cancer? I didn’t know that. Sure, I’ll add a PSA to the requisition.”

I wish I could say that’s an isolated incident, but it’s not. These gaps in our medical records happen all the time here in my part of Canada. And they can all be solved. It’s the aggregation and analysis of data beyond the limits of humans to handle that A.I. excels at. Yet our healthcare system continues to overwork exhausted healthcare providers and keep our personal health data hostage in siloed data centers because of systemic resistance to technology. I know there are concerns, but surely these concerns can be addressed.

I write this from a Canadian perspective, but I know these problems – and others – exist in the U.S. as well. If A.I. can do certain jobs four times better than a human, it’s time to accept that and build it into our healthcare system. The answers to Canada’s healthcare problems may not be easy, but they are doable: integrate our existing health records, open the door to incorporating personal biometric data from new wearable devices, use A.I. to analyze all of it, and use humans where they can do things A.I. and technology can’t.

We need to start opening our minds to new solutions, because when it comes to a broken healthcare system, it’s literally a matter of life and death.

The Question We Need to Ask about AI

This past weekend I listened to a radio call-in show about AI. The question posed was this: are those using AI regularly achievers or cheaters? A good percentage of the conversation was focused on AI in education, especially in post-secondary studies. Educators worried about being able to detect the use of AI to help complete coursework, such as the writing of papers. Many callers – all of whom were probably well north of 50 years old – bemoaned the fact that students today are not understanding the fundamental concepts they’re being presented with, because they’re using AI to complete assignments. A computer science teacher explained why he teaches obsolete coding to his students – it helps them understand why they’re writing code at all. What is it they want the code to do? He can tell when his students are using AI because they submit examples of coding that are well beyond their abilities.

That, in a nutshell, sums up the problem with our current thinking about AI. Why are we worried about trying to detect the use of ChatGPT by a student who’s learning how to write computer code? Shouldn’t we be instead asking why we need humans to learn coding at all, when AI is better at it? Maybe it’s a toss-up right now, but it’s guaranteed not to stay that way for long. This isn’t about students using AI to “cheat.” This is about AI making humans obsolete.

As I was writing this, I happened across an essay by computer scientist Louis Rosenberg. He is worried that those in his circle, like the callers to the show I was listening to, “have never really considered what life will be like the day after an artificial general intelligence (AGI) is widely available that exceeds our own cognitive abilities.” Like I said, what we use AI for now is a poor indicator of what AI will be doing in the future. To use an analogy I have used before, it’s like using a rocket to power your lawnmower.

But what will life be like when, in a somewhat chilling example put forward by Rosenberg, “I am standing alone in an elevator — just me and my phone — and the smartest one speeding between floors is the phone?”

It’s hard to wrap your mind around the possibilities. One of the callers to the show was a middle-aged man who was visually impaired. He talked about the difference it made to him when he got a pair of Meta Glasses last Christmas. Suddenly, his world opened up. He could make sure the pants and shirt he picked out to wear that day were colors that matched. He could see if his recycling had been picked up before he made the long walk down the driveway to retrieve the bin. He could cook for himself because the glasses could tell him what was in the boxes he took off his kitchen shelf. AI gave him back his independence.

I personally believe we’re on the cusp of multiple AI revolutions. Healthcare will take a great leap forward when we lessen our requirement that expert advice come from a human. In Canada, general practitioners are in desperately short supply. When you combine AI with the leaps being made by incorporating biomonitoring into wearable technology, it’s hard to imagine what wouldn’t be possible in terms of living longer, healthier lives. I hope the same is true for dealing with climate change, agricultural production and the other existential problems we’re currently wrestling with.

But let’s back up to Rosenberg’s original question – what will life be like the day after AI exceeds our own abilities? The answer to that, I think, depends on who is in control of AI on the day before. The danger here is more than just humans becoming irrelevant. The danger is which humans are determining the future direction of AI before AI takes over the steering wheel and determines its own future.

For the past seven decades, the most pertinent question about our continued existence as a species has been this one: “Who is in charge of our combined nuclear arsenals?” But going forward, a more relevant question might be, “Who is setting the direction for AI?” Who is setting the rules, coming up with safeguards and determining what data the models are training on? Who determines what tasks AI takes on? Here’s just one example: will AI decide when the nuclear warheads are launched?

As I said, it’s hard to predict where AI will go. But I do know this. The general direction is already being determined. And we should all be asking, “By whom?”

Can OpenAI Make Searching More Useful?

As you may have heard, OpenAI is testing a prototype of a new search engine called SearchGPT. A press release from July 25 notes: “Getting answers on the web can take a lot of effort, often requiring multiple attempts to get relevant results. We believe that by enhancing the conversational capabilities of our models with real-time information from the web, finding what you’re looking for can be faster and easier.”

I’ve been waiting for this for a long time: search that moves beyond relevance to usefulness. It was 14 years ago that I said this in an interview with Aaron Goldman regarding his book “Everything I Know About Marketing I Learned from Google”: “Search providers have to replace relevancy with usefulness. Relevancy is a great measure if we’re judging information, but not so great if we’re measuring usefulness. That’s why I believe apps are the next flavor of search, little dedicated helpers that allow us to do something with the information. The information itself will become less and less important and the app that allows utilization of the information will become more and more important.”

I’ve felt for almost two decades that the days of search as a destination were numbered. For over 30 years now (Archie, the first internet search engine, was created in 1990), when we’re looking for something online, we search, and then we have to do something with what we find on the results page. Sometimes, a single search is enough — but often, it isn’t. For many of our intended end goals, we still have to do a lot of wading through the Internet’s deep end, filtering out the garbage, picking up the nuggets we need and then assembling those into something useful.

I’ve spent much of those past two decades pondering what the future of search might be. In fact, my previous company wrote a paper on it back in 2007. We were looking forward to what we thought might be the future of search, but we didn’t look too far forward. We set 2010 as our crystal ball horizon. Then we assembled an all-star panel of search design and usability experts, including Marissa Mayer, who was then Google’s vice president of search user experience and interface design, and Jakob Nielsen, principal of the Nielsen Norman Group and the web’s best known usability expert. We asked them what they thought search would look like in three years’ time.

Even back then, almost 20 years ago, I felt the linear presentation of a results page — the 10 blue links concept that started search — was limiting. Since then, we have moved beyond the 10 blue links. A Google search today for the latest iPhone model (one of our test queries in the white paper) actually looks eerily similar to the mock-up we did of what a Google search might look like in the year 2010. It just took Google 14 extra years to get there.

But the basic original premise of search is still there: do a query, and Google will try to return the most relevant results. If you’re looking to buy an iPhone, today’s page is probably more useful, mainly due to sponsored content. But it’s still well short of the usefulness I was hoping for.

It’s also interesting to see what directions search has (and hasn’t) taken since then. Mayer talked a lot about interacting with search results. She envisioned an interface where you could annotate and filter your results: “I think that people will be annotating search results pages and web pages a lot. They’re going to be rating them, they’re going to be reviewing them. They’re going to be marking them up, saying ‘I want to come back to this one later.’”

That never really happened. The idea of search as a sticky and interactive interface for the web sort of materialized, but never to the extent that Mayer envisioned.

From our panel, it was Nielsen’s crystal ball that seemed to offer the clearest view of the future: “I think if you look very far ahead, you know 10, 20, 30 years or whatever, then I think there can be a lot of things happening in terms of natural language understanding and making the computer more clever than it is now. If we get to that level then it may be possible to have the computer better guess at what each person needs without the person having to say anything, but I think right now, it is very difficult.”

Nielsen was spot-on in 2007. It’s exactly those advances in natural language processing and artificial intelligence that could allow ChatGPT to now move beyond the paradigm of the search results page and move searching the web into something more useful.

A decade and a half ago, I envisioned an ecosystem of apps that could bridge the gap between what we intended to do and the information and functionality that could be found online.  That’s exactly what’s happening at OpenAI — a number of functional engines powered by AI, all beneath a natural language “chat” interface.

At this point, we still have to “say” what we want in the form of a prompt, but the more we use ChatGPT (or any AI interface) the better it will get to know us. In 2007, when we wrote our white paper on the future of search, personalization was what we were all talking about. Now, with ChatGPT, personalization could come back to the fore, helping AI know what we want even if we can’t put it into words.

As I mentioned in a previous post, we’ll have to wait to see if SearchGPT can make search more useful, especially for complex tasks like planning a vacation, making a major purchase or planning a big event.

But I think all the pieces are there. The monetization silos that dominate the online landscape will still prove a challenge to getting all the way to our final destination, but SearchGPT could make the journey faster and a little less taxing.

Note: I still have a copy of our 2007 white paper if anyone is interested. Just email me (my address is on the Contact Us page) and I’ll send you a copy.

The Adoption of A.I.

Recently, I was talking to a reporter about AI. She was working on a piece about what Apple’s integration of AI into the latest iOS (cleverly named Apple Intelligence) would mean for its adoption by users. Right at the beginning, she asked me this question, “What previous examples of human adoption of tech products or innovations might be able to tell us about how we will fit (or not fit) AI into our daily lives?”

That’s a big question. An existential question, even. Luckily, she gave me some advance warning, so I had a chance to think about it. Even with the heads-up, my answer was still well short of anything resembling helpfulness. It was, “I don’t think we’ve ever dealt with something quite like this. So, we’ll see.”

Incisive? Brilliant? Erudite? No, no and no.

But honest? I believe so.

When we think in terms of technology adoption, it usually falls into two categories: continuous and discontinuous. Continuous innovation simply builds on something we already understand. It’s adoption that follows a straight line, with little risk involved and little effort required. It’s driving a car with a little more horsepower, or getting a smartphone with more storage.

Discontinuous innovation is a different beast. It’s an innovation that displaces what went before it. In terms of user experience, it’s a blank slate, so it requires effort and a tolerance for risk to adopt it. This is the type of innovation that is adopted on a bell curve, first identified by American sociologist Everett Rogers in 1962. The acceptance of these new technologies spreads along a timeline defined by the personalities of the marketplace. Some are the type to try every new gadget, and some hang on to the tried and true for as long as they possibly can. Most of us fall somewhere in between.

As an example, think about going from driving a traditional car to an electric vehicle. The change from one to the other requires some effort. There’s a learning curve involved. There’s also risk. We have no baseline of experience to measure against. Some will be ahead of the curve and adopt early. Some will drive their gas clunker until it falls apart.

Falling into this second category of discontinuous innovation, but different by virtue of both the nature of the new technology and the impact it wields, are a handful of innovations that usher in a completely different paradigm. Think of the introduction of electrical power distribution in the late 19th century, the introduction of computers in the second half of the 20th century, or the spread of the internet in the 21st century.

Each of these was foundational, in that they sparked an explosion of innovation that wouldn’t have been possible if it were not for the initial innovation. These innovations not only change all the rules, they change the very game itself. And because of that, they impact society at a fundamental level. When these types of innovations come along, your life will change whether you choose to adopt the technology or not. And it’s these types of technological paradigm shifts that are rife with unintended consequences.

If I were trying to find a parallel for what AI means for us, I would look for it amongst these examples. And that presents a problem when we pull out our crystal ball and try to peer ahead at what might be. We can’t know. There’s just too much in flux – too many variables to compute with any accuracy. Perhaps we can project forward a few months, or a year at the most, based on what we know today. But trying to peer any further forward is a fool’s game. Could you have anticipated what we would be doing on the Internet in 2024 when the first BBS (Bulletin Board System) was introduced in Chicago in 1978?

A.I. is like these previous examples, but it’s also different in one fundamental way. All these other innovations had humans at the switch. Someone needed to turn on the electrical light, boot up the computer or log on to the internet. At this point, we are still “using” A.I., whether it’s as an add-on in software we’re familiar with, like Adobe Photoshop, or a stand-alone app like ChatGPT, but generative A.I.’s real potential can only be discovered when it slips from the grasp of human control and starts working on its own, hidden under some algorithmic hood, safe from our meddling human hands.

We’ve never dealt with anything like this before. So, like I said, we’ll see.

What If We Let AI Vote?

In his bestseller Homo Deus, Yuval Noah Harari thinks AI might mean the end of democracy. And his reasoning for that comes from an interesting perspective – how societies crunch their data.

Harari acknowledges that democracy might have been the best political system available to us – up to now. That’s because it relied on the wisdom of crowds. The hypothesis operating here is that if you get enough people together, each with different bits of data, you benefit from the aggregation of that data and – theoretically – if you allow everyone to vote, the aggregated data will guide the majority to the best possible decision.

Now, there are a truckload of “yeah, but”s in that hypothesis, but it does make sense. If the human ability to process data was the single biggest bottleneck in making the best governing decisions, distributing the processing amongst a whole bunch of people was a solution. Not the perfect solution, perhaps, but probably better than the alternatives. As Winston Churchill put it, “it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.”
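
If you want to see the “wisdom of crowds” arithmetic at work, here’s a back-of-the-envelope simulation with invented numbers: give each voter a noisy estimate of some true value, and the average of thousands of those estimates lands far closer to the truth than a typical individual does.

```python
# A back-of-the-envelope simulation of the "wisdom of crowds" hypothesis.
# The numbers are invented: each voter holds a noisy estimate of some true
# value, and the average of many noisy estimates lands much closer to the
# truth than a typical individual does.

import random

random.seed(42)
TRUE_VALUE = 100.0
voters = [TRUE_VALUE + random.gauss(0, 20) for _ in range(10_000)]

crowd_estimate = sum(voters) / len(voters)
typical_individual_error = sum(abs(v - TRUE_VALUE) for v in voters) / len(voters)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

print(f"Typical individual error: {typical_individual_error:.1f}")  # roughly 16
print(f"Crowd error:              {crowd_error:.1f}")               # well under 1
```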

So, if we look back at our history, democracy seems to emerge as the winner. But the whole point of Harari’s Homo Deus is to look forward. It is, he promises, “A Brief History of Tomorrow.” And that tomorrow includes a world with AI, which blows apart the human data-processing bottleneck: “As both the volume and speed of data increase, venerable institutions like elections, parties and parliaments might become obsolete – not because they are unethical, but because they don’t process data efficiently enough.”

The other problem with democracy is that the data we use to decide is dirty. Increasingly, thanks to the network-effect anomalies that come with social media, we are using data that has no objective value; it’s simply the emotional effluent of ideological echo chambers. This is true on both the right and left ends of the political spectrum. Human brains default to using available and easily digestible information that happens to conform to our existing belief schema. Thanks to social media, there is no shortage of this severely flawed data.

So, if AI can process data exponentially faster than humans, can analyze that data to make sure it meets some type of objectivity threshold, and can make decisions based on algorithms that are dispassionately rational, why shouldn’t we let AI decide who should form our governments?

Now, I pretty much guarantee that many of you, as you’re reading this, are saying that this is B.S. This will, in fact, be humans surrendering control in the most important of arenas. But I must ask in all seriousness, why not? Could AI do worse than we humans do? Worse than we have done in the past? Worse than we might do again in the very near future?

These are exactly the type of existential questions we have to ask when we ponder our future in a world that includes AI.

It’s no coincidence that we have some hubris about believing we’re the best choice to be put in control of a situation. As Harari admits, the liberal human view that we have free will and should have control of our own future was really the gold standard. Like democracy, it wasn’t perfect, but it was better than all the alternatives.

The problem is that there is now a lot of solid science indicating that our concept of free will is an illusion. We are driven by biological algorithms which have been built up over thousands of years to survive in a world that no longer exists. We self-apply a thin veneer of rationality and free will at the end to make ourselves believe that we were in control and meant to do whatever it was we did. What’s even worse, when it appears we might have been wrong, we double down on the mistake, twisting the facts to conform to our illusion of how we believe things are.

But we now live in a world where there is – or soon will be – a better alternative. One without the bugs that proliferate in the biological OS that drives us.

As another example of this impending crisis of our own consciousness, let’s look at driving.

Up to now, a human was the best choice to drive a car. We were better at it than chickens or chimpanzees. But we are at the point where that may no longer be true. There is a strong argument that – as of today – autonomous cars guided by AI are safer than human-controlled ones. And, if the jury is still out on this question today, it is certainly going to be true in the very near future. Yet we humans are loath to admit the inevitable and give up the wheel. It’s the same story as making our democratic choices.

So, let’s take it one step further. If AI can do a better job than humans at determining who should govern us, it will also do a better job at the actual governing. All the same caveats apply. When you think about it, democracy boils down to various groups of people pointing the finger at those chosen by other groups, saying they will make more mistakes than our choice. The common denominator is this: everyone is assumed to make mistakes. And that is absolutely the case. Right or left, Republican or Democrat, liberal or conservative, no matter who is in power, they will screw up. Repeatedly.

Because they are, after all, only human.

OpenAI’s Q* – Why Should We Care?

OpenAI founder Sam Altman’s ouster and reinstatement has rolled through the typical news cycle, and we’re now back to blissful ignorance. But I think this will be one of those sea-change moments: a tipping point we’ll look back on in the future, when AI has changed everything we thought we knew, and wonder, “How the hell did we let that happen?”

Sometimes I think that tech companies use acronyms and cryptic names for new technologies to allow them to sneak game changers in without setting off the alarm bells. Take OpenAI for example. How scary does Q-Star sound? It’s just one more vague label for something we really don’t understand.

If I’m right, we do have to ask the question: “Who is keeping an eye on these things?”

This week I decided to dig into the whole Sam Altman firing/hiring episode a little more closely so I could understand if there’s anything I should be paying attention to. Granted, I know almost nothing about AI, so what follows is very much at the layperson level, but I think that’s probably true for the vast majority of us. I don’t run into AI engineers that often in my life.

So, should we care about what happened a few weeks ago at OpenAI? In a word – YES.

First of all, a little bit about the dynamics that led to Altman’s original dismissal. OpenAI started with the best of altruistic intentions: “to ensure that artificial general intelligence benefits all of humanity.” That was an ideal – many would say a naïve ideal – that Altman and OpenAI’s founders imposed on themselves. As Google discovered with its “Don’t Be Evil” mantra, it’s really hard to be successful and idealistic at the same time. In our world, success is determined by profits, and idealism and profitability almost never play in the same sandbox. Google quietly watered down the “Don’t Be Evil” motto until it virtually disappeared in 2018.

OpenAI’s non-profit board was set up as a kind of internal “kill switch” to prevent the development of technologies that could be dangerous to the human race. That theoretical structure was put to the test when the board received a letter this year from some senior researchers at the company warning of a new artificial intelligence discovery that might take AI past the threshold where it could be harmful to humans. The board then did what it was set up to do, firing Altman, removing board chairman Greg Brockman and putting the brakes on the potentially dangerous technology. Then Big Brother Microsoft (which has invested $13 billion in OpenAI) stepped in, and suddenly Altman was back. (Note: for a far more thorough and fascinating look at OpenAI’s unique structure and the endemic problems with it, read through Alberto Romero’s series of thoughtful posts.)

There were probably two things behind Altman’s ouster: the potential capabilities of a new development called Q-Star, and a fear that OpenAI would follow its previous path of throwing it out there to the world without considering the potential consequences. So, why is Q-Star so troubling?

Q-Star could be a major step closer to AI that can reason and plan. This moves us closer to the overall goal of artificial general intelligence (AGI), the holy grail for every AI developer, including OpenAI. Artificial general intelligence, as per OpenAI’s own definition, refers to “AI systems that are generally smarter than humans.” Q-Star, through its ability to tackle grade-school math problems, showed the promise of being artificial intelligence that could plan and reason. And that is an important tipping point, because something that can reason and plan pushes us forever past the boundary of a tool under human control. It’s technology that thinks for itself.

Why should this worry us? It should worry us because of Herbert Simon’s concept of “bounded rationality,” which holds that we humans are incapable of pure rationality. At some point we stop thinking endlessly about a question and come up with an answer that’s “good enough.” And we do this because of limited processing power. Emotions take over and make the decision for us.
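
A toy way to picture Simon’s idea, with options and a scoring function invented purely for illustration: an unbounded reasoner can afford to evaluate every option and take the best one, while a bounded one stops at the first option that clears a “good enough” bar.

```python
# A toy illustration of Herbert Simon's "satisficing" under bounded
# rationality. The options and the score() function are invented for the
# example: an unbounded reasoner scores everything and takes the best,
# while a bounded one stops at the first option that clears a threshold.

def score(option: int) -> float:
    # Stand-in for an expensive evaluation; deliberation costs effort.
    return (option * 37 % 100) / 100

def optimize(options):
    # What unlimited processing power allows: evaluate every option.
    return max(options, key=score)

def satisfice(options, good_enough: float = 0.7):
    # What humans actually do: stop thinking once an option is "good enough."
    for option in options:
        if score(option) >= good_enough:
            return option
    return options[-1]

options = list(range(1, 50))
best = optimize(options)
pick = satisfice(options)
print(f"Exhaustive best: option {best} (score {score(best):.2f})")
print(f"Satisficed pick: option {pick} (score {score(pick):.2f})")
```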

But AGI throws those limits away. It can process exponentially more data at a rate we can’t possibly match. If we’re looking at AI through Sam Altman’s rose-colored glasses, that should be a benefit. Wouldn’t it be better to have decisions made rationally, rather than emotionally? Shouldn’t that be a benefit to mankind?

But here’s the rub. Compassion is an emotion. Empathy is an emotion. Love is also an emotion. What kind of decisions do we come to if we strip that out of the algorithm, along with any type of human check and balance?

Here’s an example. Let’s say that at some point in the future an AGI superbrain is asked the question, “Is the presence of humans beneficial to the general well-being of the earth?”

I think you know what the rational answer to that is.