OpenAI’s Q* – Why Should We Care?

OpenAI co-founder and CEO Sam Altman’s ouster and reinstatement have rolled through the typical news cycle and we’re now back to blissful ignorance. But I think this will be one of the sea-change moments; a tipping point that we’ll look back on in the future, when AI has changed everything we thought we knew, and wonder, “how the hell did we let that happen?”

Sometimes I think that tech companies use acronyms and cryptic names for new technologies to allow them to sneak game changers in without setting off the alarm bells. Take OpenAI for example. How scary does Q-Star sound? It’s just one more vague label for something we really don’t understand.

If I’m right, we do have to ask the question, “Who is keeping an eye on these things?”

This week I decided to dig into the whole Sam Altman firing/hiring episode a little more closely so I could understand if there’s anything I should be paying attention to. Granted, I know almost nothing about AI, so what follows is very much at the layperson level, but I think that’s probably true for the vast majority of us. I don’t run into AI engineers that often in my life.

So, should we care about what happened a few weeks ago at OpenAI? In a word – YES.

First of all, a little bit about the dynamics of what led to Altman’s original dismissal. OpenAI started with the best of altruistic intentions: to “ensure that artificial general intelligence benefits all of humanity.” That was an ideal – many would say a naïve ideal – that Altman and OpenAI’s founders imposed on themselves. As Google discovered with its “Don’t Be Evil” mantra, it’s really hard to be successful and idealistic at the same time. In our world, success is determined by profits, and idealism and profitability almost never play in the same sandbox. Google quietly watered down its “Don’t Be Evil” motto until it virtually disappeared in 2018.

OpenAI’s non-profit board was set up as a kind of internal “kill switch” to prevent the development of technologies that could be dangerous to the human race. That theoretical structure was put to the test when the board received a letter this year from some senior researchers at the company warning of a new artificial intelligence discovery that might take AI past the threshold where it could be harmful to humans. The board then did what it was set up to do, firing Altman, removing board chairman Greg Brockman from the board, and putting the brakes on the potentially dangerous technology. Then Big Brother Microsoft (which has invested $13 billion in OpenAI) stepped in, and suddenly Altman was back. (Note: for a far more thorough and fascinating look at OpenAI’s unique structure and the endemic problems with it, read through Alberto Romero’s series of thoughtful posts.)

There were probably two things behind Altman’s ouster: the potential capabilities of a new development called Q-Star, and a fear that OpenAI would follow its previous path of throwing the technology out into the world without considering potential consequences. So, why is Q-Star so troubling?

Q-Star could be a major step closer to AI that can reason and plan. This moves us closer to the overall goal of artificial general intelligence (AGI), the holy grail for every AI developer, including OpenAI. Artificial general intelligence, as per OpenAI’s own definition, means “AI systems that are generally smarter than humans.” Q-Star, through its ability to tackle grade school math problems, showed the promise of being artificial intelligence that could plan and reason. And that is an important tipping point, because something that can reason and plan pushes us forever past the boundary of a tool under human control. It’s technology that thinks for itself.
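
Where does the “Q” come from? Nobody outside OpenAI knows for sure, but the name has led many observers to guess at a link to Q-learning, a decades-old reinforcement learning technique in which a system learns, by trial and error, which action pays off in which situation. Purely as a reference point – and emphatically not a claim about what OpenAI actually built – here’s what that idea looks like on a toy problem:

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning on a toy 1-D corridor: states 0..4,
# actions are step left (-1) or right (+1), reward 1 for reaching state 4.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
q = defaultdict(float)  # q[(state, action)] -> estimated long-term value

def step(state, action):
    nxt = max(0, min(4, state + action))
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

for _ in range(500):  # training episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what's known, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice([-1, 1])
        else:
            action = max([-1, 1], key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # The Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best action from the next state.
        best_next = max(q[(nxt, -1)], q[(nxt, 1)])
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

print(q[(0, 1)], q[(0, -1)])  # stepping right from 0 should score higher
```

Even at this toy scale, note what’s happening: nobody tells the program the answer. It discovers a plan on its own.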

Why should this worry us? It should worry us because of Herbert Simon’s concept of “bounded rationality”, which holds that we humans are incapable of pure rationality. At some point we stop thinking endlessly about a question and settle on an answer that’s “good enough” – what Simon called “satisficing”. We do this because of our limited processing power. Emotions take over and make the decision for us.
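
In algorithmic terms, Simon’s satisficing is just early stopping: take the first option that clears a “good enough” bar instead of scoring them all. A minimal sketch, with made-up options and scores:

```python
# Pure rationality vs. Simon-style satisficing, on a toy list of
# options with made-up scores.
options = [("A", 62), ("B", 88), ("C", 71), ("D", 93), ("E", 79)]

def optimize(options):
    """Pure rationality: examine every option, keep the best."""
    return max(options, key=lambda o: o[1])

def satisfice(options, good_enough=75):
    """Bounded rationality: stop at the first acceptable option."""
    for option in options:
        if option[1] >= good_enough:
            return option
    return options[-1]  # nothing cleared the bar; take what's left

print(optimize(options))   # ('D', 93) -- but every option was evaluated
print(satisfice(options))  # ('B', 88) -- search stopped early: "good enough"
```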

But AGI throws those limits away. It can process exponentially more data at a rate we can’t possibly match. If we’re looking at AI through Sam Altman’s rose-colored glasses, that should be a benefit. Wouldn’t it be better to have decisions made rationally, rather than emotionally? Shouldn’t that be a benefit to mankind?

But here’s the rub. Compassion is an emotion. Empathy is an emotion. Love is also an emotion. What kind of decisions do we come to if we strip that out of the algorithm, along with any type of human check and balance?

Here’s an example. Let’s say that at some point in the future an AGI superbrain is asked the question, “Is the presence of humans beneficial to the general well-being of the earth?”

I think you know what the rational answer to that is.

When AI Love Goes Bad

When we think about AI and its implications, it’s hard to wrap our non-digital, flesh-and-blood brains around the magnitude of it. Try as we might, it’s impossible to forecast the impact of this massive wave of disruption that’s bearing down on us. So today, in order to see what the unintended consequences might be, I’d like to zoom in on one particular example.

There is a new app out there. It’s called Anima and it’s an AI girlfriend. It’s not the only one. When it comes to potential virtual partners, there are plenty of fish in the sea. But – for this post, let’s stay true to Anima. Here’s the marketing blurb on her website: “The most advanced romance chatbot you’ve ever talked to. Fun and flirty dating simulator with no strings attached. Engage in a friendly chat, roleplay, grow your love & relationship skills.”

Now, if there’s one area where our instincts should kick in and alarm bells should start going off about AI, it should be in the area of sexual attraction. If there were one human activity that seems bound by necessity to being ITRW (in the real world), it should be this one.

If we start to imagine what might happen when we turn to AI for love, we could ask filmmaker Spike Jonze. He already imagined it, 10 years ago when he wrote the screenplay for “her”, the movie with Joaquin Phoenix. Phoenix plays Theodore Twombly, a soon-to-be divorced man who upgrades his computer to a new OS, only to fall in love with the virtual assistant (voiced by Scarlett Johansson) that comes as part of the upgrade.

Predictably, complications ensue.

To get back to Anima, I’m always amused by the marketing language developers use to lull us into the acceptance of things we should be panicking about. In this case, it was two lines: “No strings attached” and “grow your love and relationship skills.”

First, about that “no strings attached” thing – I have been married for 34 years now and I’m here to tell you that relationships are all about “strings.” Those “strings” can also be called by other names: empathy, consideration, respect, compassion and – yes – love. Is it easy to keep those strings attached – to stay connected with the person at the other end of those strings? Hell, no! It is a constant, daunting, challenging work in progress. But the alternative is cutting those strings and being alone. Really alone.

If we get the illusion of a real relationship through some flirty version of ChatGPT, will it be easier to cut the strings that keep us connected to other real people out there? Will we be fooled into thinking something is real when it’s just a seductive algorithm? In “her”, Jonze brings Twombly back to the real world, ending with the promise of a relationship with a real person as they both gaze at the sunset. But I worry that that’s just a Hollywood ending. I think many people – maybe most people – would rather stick with the “no strings attached” illusion. It’s just easier.

And will AI adultery really “grow your love and relationship skills”? No. No more than scrolling through your Facebook feed will grow your ability to determine accurate and reliable information. That’s just a qualifier the developer threw in so they didn’t feel crappy about leading their customers down the path to “AI-rmageddon”.

Even if we put all this other stuff aside for the moment, consider the vulnerable position we put ourselves in when we start mistaking robotic love for the real thing. All great cons rely on one of two things – either greed or love. When we think we’re in love, we drop our guard. We trust when we probably shouldn’t.

Take the Anima artificial girlfriend app, for example. We know nothing about the makers of this app. We don’t know where the data it collects goes. We certainly have no idea what their intentions are. Is this really who you want to start sharing your most intimate chit-chat with? Even if their intentions are benign, this is an app built by a for-profit company, which means there needs to be a revenue model in it somewhere. I’m guessing that all your personal data will be sold to the highest bidder.

You may think all this talk of AI love is simply stupid. We humans are too smart to be sucked in by an algorithm. But study after study has shown we’re not. We’re ready to make friends with a robot at the drop of a hat. And once we hit friendship, can love be far behind?

AI, Creativity and the Last Beatles Song

I have never been accused of being a Luddite. Typically, I’m on the other end of the adoption curve – one of the first to adopt a new technology. But when it comes to AI, I am stepping forward gingerly.

Now, my hesitancy notwithstanding, AI is here to stay. In my world, it is well past the tipping point from a thing that exists solely in the domain of tech to a topic of conversation for everyone, from butchers to bakers to candlestick makers. Everywhere I turn now I see those ubiquitous two letters – AI. That was especially true in the last week, with the turmoil around Sam Altman and the “is he fired/isn’t he” drama at OpenAI.

In 1991 Geoffrey Moore wrote the book Crossing the Chasm, looking at how technologies are adopted. He explained that adoption depends on the nature of the technology itself. If it’s a continuation of technology we understand, adoption follows a fairly straightforward bell curve through the general population.

But if it’s a disruptive technology – one that we’re not familiar with – then adoption plots itself out on an S-curve. The gap where that curve switches from being skinny to being fat – between the early adopters and the mainstream majority – is what he called the “chasm.” Some technologies get stuck on the wrong side of the chasm, never to be adopted by the majority of the market. Think Google Glass, for example.
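
If you want to see that skinny-to-fat shape for yourself, the classic way to model cumulative adoption is a logistic function. The parameters below are purely illustrative, not fitted to any real market:

```python
import math

# Toy logistic adoption curve: the cumulative share of a market that
# has adopted a technology at time t. Midpoint and steepness are
# illustrative placeholders, not estimates for any real product.
def adoption(t, midpoint=10.0, steepness=0.8):
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

for t in range(0, 21, 4):
    share = adoption(t)
    print(f"t={t:2d}  adopted={share:5.1%}  " + "#" * int(share * 30))
# The curve stays skinny for a long stretch, then fattens fast once it
# nears the midpoint -- the run-up a technology only gets to make if it
# survives the chasm.
```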

There is often a pattern to the adoption of disruptive technologies (and AI definitely fits this description).  To begin with, we find a way to adapt it and use it for the things we’re already doing. But somewhere along the line, innovators grasp the full potential of the technology and apply it in completely new ways, pushing capabilities forward exponentially. And it’s in that push forward where all the societal disruption occurs. Suddenly, all the unintended consequences make themselves known.

This is exactly where we seem to be with AI. Most of us are using it to tweak the things we’ve always done. But the prescient amongst us are starting to look at what might be, and for many of us, we’re doing so with a furrowed brow. We’re worried, and, I suspect, with good reason.

As one example, I’ve been thinking about AI and creativity. As someone who has always dabbled in creative design, media production and writing, this has been top of mind for me. I have often tried to pry open the mystic box that is the creative process.

There are many, creative software developers foremost amongst them, who will tell you that AI will be a game changer when it comes to creating – well – just about anything.

Or, in the case of the last Beatles single to be released, recreating anything. Now and Then, the final Beatles song featuring the Fab Four, was made possible by an AI program created by Peter Jackson’s team for the documentary Get Back. It allowed Paul McCartney, Ringo Starr and their team of producers (headed by George Martin’s son Giles) to separate John Lennon’s vocals from the piano background on a demo tape from the late 1970s.
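
The tool Jackson’s team built is proprietary, but the underlying technique – audio source separation – is now available to anyone. As a rough open-source analogue (emphatically not the Beatles’ actual tool), here’s Deezer’s Spleeter splitting a recording into vocals and accompaniment; the file name is a placeholder:

```python
# A publicly available analogue of the idea -- NOT the tool Peter
# Jackson's team built. Deezer's open-source Spleeter splits a mix
# into stems; '2stems' yields vocals + accompaniment.
# pip install spleeter   (the file name below is a placeholder)
from spleeter.separator import Separator

separator = Separator('spleeter:2stems')
separator.separate_to_file('demo_tape.mp3', 'output/')
# output/demo_tape/vocals.wav        -- the isolated voice
# output/demo_tape/accompaniment.wav -- piano and everything else
```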

One last Beatles song featuring John Lennon – that should be a good thing, right? I guess. But there’s a flip side to this.

Let’s take writing, for example. Ask anyone who has written something longer than a tweet or Instagram post. What you start out intending to write is never what you end up with. Somehow, the process of writing takes its own twists and turns, usually surprising even the writer. Even these posts, which average only 700 to 800 words, usually end up going in unexpected directions by the time I place the final period.

Creativity is an iterative process, and there are stages in that process. It takes time for it all to play out. No matter how good my initial idea is, if I simply fed it into an AI black box and hit the “create” button, I don’t know if the outcome would be something I would be happy with.

“But,” you protest, “what about AI taking the drudgery out of the creative process? What if you use it to clean up a photo, or remove background noise from an audio recording (a la the Beatles single)? That should free up more time and more options for you to be creative, right?”

That promise is certainly what’s being pitched by AI merchants right now. And it makes sense. But it only makes sense at the skinny end of the adoption curve. That’s where we’re at right now, using AI as a new tool to do old jobs. If we think that’s where we’re going to stay, I’m pretty sure we’re being naïve.
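
And to be fair, the drudgery-removal pitch is real enough today. Here’s a sketch of what it looks like in practice, using the open-source noisereduce library to strip background noise from a recording (file names are placeholders):

```python
# Off-the-shelf "drudgery removal": spectral-gating noise reduction
# with the open-source noisereduce library. File names are placeholders.
# pip install noisereduce scipy
import noisereduce as nr
from scipy.io import wavfile

rate, data = wavfile.read("noisy_interview.wav")   # load the raw recording
cleaned = nr.reduce_noise(y=data, sr=rate)         # estimate and gate the noise
wavfile.write("clean_interview.wav", rate, cleaned)
```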

I believe creativity needs some sweat. It benefits from a timeline that allows for thinking, and rethinking, over and over again. I don’t believe creativity comes from instant gratification, which is what AI gives us. It comes from iteration that creates the spaces needed for inspiration.

Now, I may be wrong. Perhaps AI’s ability to instantly produce hundreds of variations on an idea will prove the proponents right. It may unleash more creativity than ever. But I still believe we will lose an essential human element in the process that is critical to the act of creation.

Time will tell. And I suspect it won’t take very long.

(Image – The Beatles in WPAP – wendhahai)