Uncommon Sense

Let’s talk about common sense.

“Common sense” is one of those underpinnings of democracy that we take for granted. Basically, it hinges on this concept: the majority of people will agree that certain things are true. Those things are then defined as “common sense.” And common sense becomes our reference point for what is right and what is wrong.

But what if the very concept of common sense isn’t true? That was what researchers Duncan Watts and Mark Whiting set out to explore.

Duncan Watts is one of my favourite academics. He is a computational social scientist at the University of Pennsylvania. I’m fascinated by network effects in our society, especially as they’re now impacted by social media. And that pretty much describes Watts’s academic research “wheelhouse.”

According to his profile, he’s “interested in social and organizational networks, collective dynamics of human systems, web-based experiments, and analysis of large-scale digital data, including production, consumption, and absorption of news.”

Duncan, you had me at “collective dynamics.”

I’ve cited his work in several columns before, notably a previous study that shot several holes in marketing’s ongoing love affair with an elite group of so-called “influencers.”

Whiting and Watts took 50 claims that would seem to fall into the category of common sense. They ranged from the obvious (“a triangle has three sides”) to the more abstract (“all human beings are created equal”). They then recruited an online panel of participants to rate whether the claims were common sense or not. Claims based on science were more likely to be categorized as common sense. Claims about history or philosophy were less likely to be identified as common sense.

What did they find? Well, apparently common sense isn’t very common. Their report says, “we find that collective common sense is rare: at most a small fraction of people agree on more than a small fraction of claims.” Less than half of the 50 claims were identified as common sense by at least 75% of respondents.

Now, I must admit, I’m not really surprised by this. We know we are part of a pretty polarized society. It’s no shock that we share little in the way of ideological common ground.

But there is a fascinating potential reason why common sense is actually quite uncommon: we define common sense based on our own realities, and what is real for me may not be real for you. We determine our own realities by what we perceive to be real, and increasingly, we perceive the “real” world through a lens shaped by technology and media – both traditional and social.

Here is where common sense gets confusing. Many things – especially abstract things – have subjective reality. They are not really provable by science. Take the idea that all human beings are created equal. We may believe that, but how do we prove it? What does “equal” mean?

So when someone appeals to our common sense (usually a politician) just what are they appealing to? It’s not a universally understood fact that everyone agrees on. It’s typically a framework of belief that is probably only agreed on by a relatively small percent of the population. This really makes it a type of marketing, completely reliant on messaging and targeting the right market.

Common sense isn’t what it once was. Or perhaps it never was. Either common or sensible.

Feature image: clemsonunivlibrary

You Know What Government Agencies Need? Some AI

A few items on my recent to-do list  have necessitated dealing with multiple levels of governmental bureaucracy: regional, provincial (this being in Canada) and federal. All three experiences were, without exception, a complete pain in the ass. So, having spent a good part of my life advising companies on how to improve their customer experience, the question that kept bubbling up in my brain was, “Why the hell is dealing with government such a horrendous experience?”

Anecdotally, I know everyone I know feels the same way. But what about everyone I don’t know? Do they also feel that the experience of dealing with a government agency is on par with having a root canal or colonoscopy?

According to a survey conducted last year by the research firm Qualtrics XM, the answer appears to be yes. This report paints a pretty grim picture. Satisfaction with government services ranked dead last when compared to private sector industries.

The next question, being that AI is all I seem to have been writing about lately, is this: “Could AI make dealing with the government a little less awful?”

And before you say it, yes, I realize I recently took a swipe at the AI-empowered customer service used by my local telco. But when the bar is set as low as it is for government customer service, I have to believe that even with the limitations of artificially intelligent customer service as it currently exists, it would still be a step forward. At least the word “intelligent” is in there somewhere.

But before I dive into ways to potentially solve the problem, we should spend a little time exploring the root causes of crappy customer service in government.

First of all, government has no competitors. That means there are no market forces driving improvement. If I have to get a building permit or renew my driver’s license, I have one option available. I can’t go down the street and deal with “Government Agency B.”

Secondly, in private enterprise, the maxim is that the customer is always right. This is, of course, bullshit.  The real truth is that profit is always right, but with customers and profitability so inextricably linked, things generally work out pretty well for the customer.

The same is not true when dealing with the government. Their job is to make sure things are (supposedly) fair and equitable for all constituents. And the determination of fairness needs to follow a universally understood protocol. The result of this is that government agencies are relentlessly regulation bound and fixated on policies and process, even if those are hopelessly archaic. Part of this is to make sure that the rules are followed, but let’s face it, the bigger motivator here is to make sure all bureaucratic asses are covered.

Finally, there is a weird hierarchy that exists in government agencies.  Frontline people tend to stay in place even if governments change. But the same is often not true for their senior management. Those tend to shift as governments come and go. According to the Qualtrics study cited earlier, less than half (48%) of government employees feel their leadership is responsive to feedback from employees. About the same number (47%) feel that senior leadership values diverse perspectives.

This creates a workplace where most of the people dealing with clients feel unheard, disempowered and frustrated. This frustration can’t help but seep across the counter separating them from the people they’re trying to help.

I think all these things are givens and are unlikely to change in my lifetime. Still, perhaps AI could be used to help us navigate the serpentine landscape of government rules and regulations.

Let me give you one example from my own experience. I have to move a retaining wall that happens to front on a lake. In Canada, almost all lake foreshores are Crown land, which means you need to deal with the government to access them.

I have now been bouncing back and forth between three provincial ministries for almost two years to try to get a permit to do the work. In that time, I have lost count of how many people I’ve had to deal with. Just last week, someone sent me a couple of user guides that “I should refer to” in order to help push the process forward. One of them is 29 pages long. The other is 42 pages. They are both about as compelling and easy to understand as you would imagine a government document would be. After a quick glance, I figured out that only two of the 71 combined pages are relevant to me.

As I worked my way through them, I thought, “Surely some kind of ChatGPT interface would make this easier, digging through the reams of regulation to surface the answers I’m looking for. Perhaps it could even guide me through the application process.”
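For what it’s worth, the thing I was imagining doesn’t even need to be fancy. Here’s a minimal sketch, assuming nothing more than plain keyword matching over text exports of the guides (the file names and the question are placeholders I made up, not anything the ministry actually provides):

```python
# A toy "ask the guide" helper: plain keyword overlap, no real AI.
# It just illustrates the idea of surfacing the two relevant pages out of 71.
# The guide file names below are hypothetical placeholders.
import re
from collections import Counter

def load_chunks(path):
    """Split a plain-text export of a guide into paragraph-sized chunks."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return [c.strip() for c in text.split("\n\n") if c.strip()]

def score(question, chunk):
    """Count how often the question's words appear in the chunk."""
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    c_words = Counter(re.findall(r"[a-z]+", chunk.lower()))
    return sum(c_words[w] for w in q_words)

def ask(question, paths, top_n=3):
    """Return the top_n passages across all guides, best match first."""
    chunks = [c for p in paths for c in load_chunks(p)]
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:top_n]

if __name__ == "__main__":
    guides = ["crown_land_guide.txt", "foreshore_permit_guide.txt"]
    question = "How do I apply for a permit to move a retaining wall on Crown foreshore?"
    for passage in ask(question, guides):
        print(passage[:200], "...\n")
```

Bolt a language model on top to turn the retrieved passages into plain-language answers, and you’d have something close to what I was wishing for.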

Let me tell you, it takes a lot to make me long for an AI-powered interface. But apparently, dealing with any level of government is enough to push me over the edge.

Dove’s Takedown Of AI: Brilliant But Troubling Brand Marketing

The Dove brand has just placed a substantial stake in the battleground over the use of AI in media. In a campaign called “Keep Beauty Real”, the brand released a 2-minute video showing how AI can create an unattainable and highly biased (read “white”) view of what beauty is.

If we’re talking branding strategy, this campaign is a master class. It’s totally on-brand with Dove, who introduced its “Campaign for Real Beauty” 18 years ago. Since then, the company has consistently fought digital manipulation of advertising images, promoted positive body image and reminded us that beauty can come in all shapes, sizes and colors. The video itself is brilliant. You really should take a couple of minutes to see it if you haven’t already.

But what I found just as interesting is that Dove chose to use AI as a brand differentiator. The video starts by telling us, “By 2025, artificial intelligence is predicted to generate 90% of online content.” It wraps up with a promise: “Dove will never use AI to create or distort women’s images.”

This makes complete sense for Dove. It aligns perfectly with its brand. But it can only work because AI now has what psychologists call emotional valency. And that has a number of interesting implications for our future relationship with AI.

“Hot Button” Branding

Emotional valency is just a fancy way of saying that a thing means something to someone. The valence can be positive or negative. The term valence comes from the German word valenz, which means to bind. So, if something has valency, it’s carrying emotional baggage, either good or bad.

This is important because emotions allow us to — in the words of Nobel laureate Daniel Kahneman — “think fast.” We make decisions without really thinking about them at all. It is the opposite of rational and objective thinking, or what Kahneman calls “thinking slow.”

Brands are all about emotional valency. The whole point of branding is to create a positive valence attached to a brand. Marketers don’t want consumers to think. They just want them to feel something positive when they hear or see the brand.

So for Dove to pick AI as an emotional hot button to attach to its brand, it must believe that the negative valence of AI will add to the positive valence of the Dove brand. That’s how branding mathematics sometimes works: a negative added to a positive may not equal zero, but may equal 2 — or more. Dove is gambling that with its target audience, the math will work as intended.

I have nothing against Dove, as I think the points it raises about AI are valid — but here’s the issue I have with using AI as a brand reference point: It reduces a very complex issue to a knee-jerk reaction. We need to be thinking more about AI, not less. The consumer marketplace is not the right place to have a debate on AI. It will become an emotional pissing match, not an intellectually informed analysis. And to explain why I feel this way, I’ll use another example: GMOs.

How Do You Feel About GMOs?

If you walk down the produce or meat aisle of any grocery store, I guarantee you’re going to see a “GMO-Free” label. You’ll probably see several. This is another example of squeezing a complex issue into an emotional hot button in order to sell more stuff.

As soon as I mentioned GMO, you had a reaction to it, and it was probably negative. But how much do you really know about GMO foods? Did you know that GMO stands for “genetically modified organisms”? I didn’t, until I just looked it up now. Did you know that you almost certainly eat foods that contain GMOs, even if you try to avoid them? If you eat anything with sugar harvested from sugar beets, you’re eating GMOs. And over 90% of all canola, corn and soybean crops are GMOs.

Further, did you know that genetic modifications make plants more resistant to disease, more stable for storage and more likely to grow in marginal agricultural areas? If it weren’t for GMOs, a significant portion of the world’s population would have starved by now. A 2022 study suggests that GMO foods could even slow climate change by reducing greenhouse gases.

If you do your research on GMOs — if you “think slow” about them — you’ll realize that there is a lot to think about, both good and bad. For all the positives I mentioned before, there are at least an equal number of troubling things about GMOs. There is no easy answer to the question, “Are GMOs good or bad?”

But by bringing GMOs into the consumer world, marketers have shut down that debate. They are telling you, “GMOs are bad. And even though you consume GMOs by the shovelful without even realizing it, we’re going to slap some GMO-free labels on things so you will buy them and feel good about saving yourself and the planet.”

AI appears to be headed down the same path. And if GMOs are complex, AI is exponentially more so. Yes, there are things about AI we should be concerned about. But there are also things we should be excited about. AI will be instrumental in tackling the many issues we currently face.

I can’t help worrying when complex issues like AI and GMOs are painted with the same broad brush, especially when that brush is in the hands of a marketer.

Feature image: Body Scan 002 by Ignotus the Mage, used under CC BY-NC-SA 2.0 / Unmodified

AI Customer Service: Not Quite Ready For Prime Time

I had a problem with my phone, which is a landline (and yes, I’ve heard all the smartass remarks about being the last person on earth with a landline, but go ahead, take your best shot).

The point is, I had a problem. Actually, the phone had a problem, in that it didn’t work. No tone, no life, no nothing. So that became my problem.

What did I do? I called my provider (from my cell, which I do have) and after going through this bizarre ID verification process that basically stopped just short of a DNA test, I got routed through to their AI voice assistant, who pleasantly asked me to state my problem in one short sentence.

As soon as I heard that voice, which used the same dulcet tones as Siri, Alexa and the rest of the AI Geek Chorus, I knew what I was dealing with. Somewhere at a board table in the not-too-distant past, somebody had come up with the brilliant idea of using AI for customer service. “Do you know how much money we could save by cutting humans out of our support budget?” After pointing to a chart with a big bar and a much smaller bar to drive the point home, there would have been much enthusiastic applause and back-slapping.

Of course, the corporate brain trust had conveniently forgotten that they can’t cut all humans out of the equation, as their customers still fell into that category.  And I was one of them, now dealing face to face with the “Artificially Intelligent” outcome of corporate cost-cutting. I stated my current state of mind more succinctly than the one short sentence I was instructed to use. It was, instead, one short word — four letters long, to be exact. Then I realized I was probably being recorded. I sighed and thought to myself, “Buckle up. Let’s give this a shot.”

I knew before starting that this wasn’t going to work, but I wasn’t given an alternative. So I didn’t spend too much time crafting my sentence. I just blurted something out, hoping to bluff my way to the next level of AI purgatory. As I suspected, Ms. AI was stumped. But rather than admit she was scratching her metaphysical head, she repeated the previous instruction, preceded by a patronizing “pat on my head” recap that sounded very much like it was aimed at someone with the IQ of a soap dish. I responded again with my four-letter reply — repeated twice, just for good measure.

Go ahead, record me. See if I care.

This time I tried a roundabout approach, restating my issue in terms that hopefully could be parsed by the cybernetic sadist that was supposedly trying to help me. Needless to say, I got no further. What I did get was a helpful text with all the service outages in my region. Which I knew wasn’t the problem. But no one asked me.

I also got a text with some troubleshooting tips to try at home. I had an immediate flashback to my childhood, trying to get my parents’ attention while they were entertaining friends at home: “Did you try to figure it out yourself, Gordie? Don’t bother Mommy and Daddy right now. We’re busy doing grown-up things. Run along and play.”

At this point, the scientific part of my brain started toying with the idea of making this an experiment. Let’s see how far we can push the boundaries of this bizarre scenario: equally frustrating and entertaining. My AI tormenter asked me, “Do you want to continue to try to troubleshoot this on the phone with me?”

I was tempted, I really was. Probably by the same part of my brain that forces me to smell sour milk or open the lid of that unidentified container of green fuzz that I just found in the back of the fridge.  And if I didn’t have other things to do in my life, I might have done that. But I didn’t. Instead, in desperation I pleaded, “Can I just talk to a human, please?”

Then I held my breath. There was silence. I could almost hear the AI wheels spinning. I began to wonder if some well-meaning programmer had included a subroutine for contrition. Would she start pleading for forgiveness?

After a beat and a half, I heard this, “Before I connect you with an agent, can I ask you for a few more details so they’re better able to help you?” No thanks, Cyber-Sally, just bring on a human, posthaste! I think I actually said something to that effect. I might have been getting a little punchy in my agitated state.

As she switched me to my requested human, I swore I could hear her mumble something in her computer-generated voice. And I’m pretty sure it was an imperative with two words, the first a verb with four letters, the second a subject pronoun with three letters.

And, if I’m right, I may have newfound respect for AI. Let’s just call it my version of the Turing Test.

We SHOULD Know Better — But We Don’t

“The human mind is both brilliant and pathetic.  Humans have built hugely complex societies and technologies, but most of us don’t even know how a toilet works.”

– from “The Knowledge Illusion: Why We Never Think Alone” by Steven Sloman and Philip Fernbach.

Most of us think we know more than we do — especially about things we really know nothing about. This phenomenon is called the Dunning-Kruger Effect. Named after psychologists Justin Kruger and David Dunning, this bias causes us to overestimate our ability to do things that we’re not very good at.

That’s the basis of the new book “The Knowledge Illusion: Why We Never Think Alone.” The basic premise is this: We all think we know more than we actually do. Individually, we are all “error prone, sometimes irrational and often ignorant.” But put a bunch of us together and we can do great things. We were built to operate in groups. We are, by nature, herding animals.

This basic human nature was in the back of my mind when I was listening to an interview with Es Devlin on CBC Radio. Devlin describes herself as an artist and stage designer. She was the vision behind Beyoncé’s Renaissance Tour, U2’s current run at The Sphere in Las Vegas, and the 2022 Super Bowl halftime show with Dr. Dre, Snoop Dogg, Eminem and Mary J. Blige.

When it comes to designing a visually spectacular experience, Devlin has every right to be a little cocky. But even she admits that not every good idea comes directly from her. She said the following in the interview (it’s profound, so I’m quoting it at length):

“I learned quite quickly in my practice to not block other people’s ideas — to learn that, actually,  other people’s ideas are more interesting than my own, and that I will expand by absorbing someone else’s idea.

“The real test is when someone proposes something in a collaboration that you absolutely, [in] every atom of your body, revile against. They say, ‘Why don’t we do it in bubblegum pink?’ and it was the opposite of what you had in mind. It was the absolute opposite of anything you would dream of doing.

“But instead of saying, ‘Oh, we’re not doing that,’  you say ‘OK,’ and you try to imagine it. And then normally what will happen is that you can go through the veil of the pink bubblegum suggestion, and you will come out with a new thing that you would never have thought of on your own.

“Why? Because your own little batch of poems, your own little backpack of experience, does not converge with that other person, so you are properly meeting not just another human being, but everything that led up to them being in that room with you.”

From an interview with Tom Power on Q – CBC Radio, March 18, 2024

We live in a culture that puts the individual on a pedestal. When it comes to individualistic societies, none is more so than the United States (according to a study by Hofstede Insights). Protection of personal rights and freedoms is the cornerstone of our society (I am Canadian, but we’re not far behind on this world ranking of individualistic societies). The same is true in the U.K. (where Devlin is from), Australia, the Netherlands and New Zealand.

There are good things that come with this, but unfortunately it also sets us up as the perfect targets for the Dunning-Kruger effect. This individualism and the cognitive bias that comes with it are reinforced by social media. We all feel we have the right to be heard — and now we have the platforms that enable it.

With each post, our unshakable belief in our own genius and infallibility is bulwarked by a chorus of likes from sycophants jamming their fingers down on the like button. Where we should be skeptical of our own intelligence and knowledge, especially about things we know nothing about, we are instead lulled into hiding behind dangerous ignorance.

What Devlin has to say is important. We need to be mindful of our own limitations and be willing to ride on the shoulders of others so we can see, know and do more. We need to peek into the backpack of others to see what they might have gathered on their own journey.

(Feature Image – Creative Commons – https://www.flickr.com/photos/tedconference/46725246075/)

The Messaging of Climate Change

86% of the world believes that climate change is a real thing. That’s the finding of a massive new mega study with hundreds of authors (the paper’s author acknowledgement is a page and a half). 60,000 participants from 63 countries around the world took part. And, as I said, 86% of them believe in climate change.

Frankly, there’s no surprise there. You just have to look out your window to see it. Here in my corner of the world, wildfires wiped out hundreds of homes last summer, and just a few weeks ago a weird winter whiplash took temperatures from unseasonably warm to deep-freeze cold literally overnight. This anomaly wiped out this region’s wine industry. The only thing I find surprising about the 86% stat is that 14% still don’t believe. That speaks of a determined type of ignorance.

What is interesting about this study is that it was conducted by behavioral scientists. This is an area that has always fascinated me. From the time I read Richard Thaler and Cass Sunstein’s book, Nudge, I have always been interested in behavioral interventions. What are the most effective “nudges” in getting people to shift their behaviors to more socially acceptable directions?

According to this study, that may not be that easy. When I first dove into this study, my intention was to look at how different messages had different impacts depending on the audience: right wing vs left wing for instance. But in going through the results, what struck me the most was just how poorly all the suggested interventions performed. It didn’t matter if you were liberal or conservative or lived in Italy or Iceland. More often than not, all the messaging fell on deaf ears.

What the study did find is that how you craft your campaign about climate change depends on what you want people to do. Do you want to shift non-believers in climate change towards being believers? Then decrease the psychological distance. More simply put, bring the dangers of climate change to their front doorstep. If you live next to a lot of trees, talk about wildfires. If you live on the coast, talk about flooding. If you live in a rural area, talk about the impacts of drought. But it should be noted that we’re not talking about a massive shift here, with an “absolute effect size of 2.3%.” It was the winner by the sheer virtue of sucking the least.

If you want to build support for legislation that mitigates climate change, the best intervention was to encourage people to write a letter to a child close to them, with the intention that the child reads it in the future. This forces the writer to put some psychological skin in the game.

Who could write a future letter to someone you care about without making some kind of pledge to make sure there’s still a world they can live in? And once you do that, you feel obligated to follow through. Once again, this had a minimal impact on behaviors, with an overall effect size of 2.6%.

A year and a half ago, I talked about climate change messaging, debating MediaPost Editor-in-Chief Joe Mandese about whether a doom and gloom approach would move the needle on behaviors. In a commentary from the summer of 2022, Mandese wrapped up by saying, “What the ad industry really needs to do is organize a massive global campaign to change the way people think, feel and behave about the climate — moving from a not-so-alarmist “change” to an “our house is on fire” crisis.”

In a follow-up, I worried that doom and gloom might backfire on us: “Cranking up the crisis intensity on our messaging might have the opposite effect. It may paralyze us.”

So, what does this study say?

The answer, again, is, “it depends.” If we’re talking about getting people to share posts on social media, then Doom and Gloom is the way to go. Of all the various messaging options, this had the biggest impact on sharing, by a notable margin.

This isn’t really surprising. A number of studies have shown that negative news is more likely to be shared on social media than positive news.

But what if we’re asking people to make a change that requires some effort beyond clicking the “share” button? What if they actually have to do something? Then, as I suspected, Doom and Gloom messaging had the opposite effect, decreasing the likelihood that people would make a behavioral change to address climate change (the study used a tree planting initiative as an example). In fact, when asking participants to actually change their behavior in an effortful way, all the tested climate interventions either had no effect or, worse, they “depress(ed) and demoralize(d) the public into inaction”.

That’s not good news. It seems that no matter what the message is, or who the messenger is, we’re likely to shoot the messenger if they ask us to do anything beyond burying our heads in the sand.

What’s even worse, we may be losing ground. A study from 10 years ago by Yale University had more encouraging results. It showed that effective climate change messaging was able to shift public perceptions by up to 19%. While the Yale study was not nearly as detailed as this one, the comparison seems to indicate a backslide in the effectiveness of climate messaging.

One of the commentators who covered the new worldwide study perhaps summed it up best by saying, “if we’re dealing with what is probably the biggest crisis ever in the history of humanity, it would help if we actually could talk about it.”

Fooling Some of the Systems Some of the Time

If there’s a system, there’s a way to game it. Especially when those systems are tied to someone making money.

Buying a Best Seller

Take publishing, for instance. New books that make the New York Times best-seller list sell more copies than ones that don’t. A 2004 study by University of Wisconsin economics professor Alan Sorensen found the bump is about 57%. That’s certainly motivation for a publisher to game the system.

There’s also another motivating factor. According to a Times op-ed, Michael Korda, former editor in chief of Simon and Schuster, said that an author’s contract can include a bonus of up to $100,000 for hitting No. 1 on the list.

This amplifying effect is not a one-shot deal. Make the list for just one week, in any slot under any category, and you can forever call yourself a “NY Times bestselling author,” reaping the additional sales that that honor brings with it. Given the potential rewards, you can guarantee that someone is going to be gaming the system.

And how do you do that? Typically, by doing a bulk purchase through an outlet that feeds its sales numbers to The Times. That’s what Donald Trump Jr. and his publisher did for his book “Triggered,” which hit No. 1 on its release in November of 2019, according to various reports. Just before the release, the Republican National Committee reportedly placed a $94,800 order with a bookseller, which would equate to about 4,000 books, enough to ensure that “Triggered” would end up on the Times list. (Note: The Times does flag these suspicious entries with a dagger symbol when it believes that someone may be potentially gaming the system by buying in bulk.)

But it’s not only book sales where you’ll find a system primed for rigging. Even those supposedly objective 5-star buyer ratings you find everywhere have also been gamed.

5-Star Scams

A 2021 McKinsey report said that, depending on the category, a small bump in a star rating on Amazon can translate into a 30% to 200% boost in sales. Given that potential windfall, it’s no surprise that you’ll find fake review scams proliferating on the gargantuan retail platform.

A recent Wired exposé on these fake reviews found a network that had achieved a sobering level of sophistication. It included active recruitment of human reviewers (called “Jennies” — if you haven’t been recruited yet, you’re a “Virgin Jenny”) willing to write a fake review for a small payment or free products. These networks include recruiting agents in Pakistan, Bangladesh and India working for sellers from China.

But the fake review ecosystem also included reviews cranked out by AI-powered automated agents. As AI improves, these types of reviews will be harder to spot and weed out of the system.

Some recent studies have found that, depending on the category, over one-third of the reviews you see on Amazon are fake. Books, baby products and large appliance categories are the worst offenders.

Berating Ratings…

Back in 2014, Itamar Simonson and Emanuel Rosen wrote a book called “Absolute Value: What Really Influences Customers in the Age of (Nearly) Perfect Information.” Spoiler alert: they posited that consumer reviews and other sources of objective information were replacing traditional marketing and branding in terms of what influenced buyers.

They were right. The stats I cited above show how powerful these supposedly objective factors can be in driving sales. But unfortunately, thanks to the inevitable attempts to game these systems, the information they provide can often be far from perfect.

A Column About Nothing

What do I have to say in my last post for 2023? Nothing.

Last week, I talked about the cost of building a brand. Then, this week, I (perhaps being the last person on earth to do so) heard about Nothing. No – not small “n” nothing as in the absence of anything – Big “N” Nothing, as in the London-based tech start-up headed by Chinese-born entrepreneur Carl Pei.

Nothing, according to their website, crafts “intuitive, flawlessly connected products that improve our lives without getting in the way. No confusing tech-speak. No silly product names. Just artistry, passion and trust. And products we’re proud to share with our friends and family. Simple.”

Now, just like the football talents of David Beckham I explored in my last post, the tech Nothing produces is good – very good – but not uniquely good. The Nothing Phone (1) and the just-released Nothing Phone (2) are capable mid-range smartphones. Again, from the Nothing website, you are asked to “imagine a world where all your devices are seamlessly connected.”

It may just be me, but isn’t that what Apple has been promising (and occasionally delivering) for the better part of the last quarter century? Doesn’t Google make the same basic promise? Personally, I see nothing earth-shaking in Nothing’s mission. It all feels very “been there, done that.” Or, if you’ll allow me – it all seems like much ado about Nothing (sorry). Yet people paid thousands over the asking price when 100 units of the first Nothing phone were put up for auction prior to its public launch.

Why? Because of the value of the Nothing brand. And that value comes from one place. No, not the tech. The community. Pei may be a pretty good builder of phones, but he’s an even better builder of community. He has expertly built a fan base who love to rave about Nothing. On the “Community” section of the Nothing website, you’re invited to “abandon the glorification of I and open up to the potential of We.” I’m not sure exactly what that means, but it all sounds very cool and idealistic, if a little vague.

Another genius move by Pei was to open up to the potential of Nothing. In what is probably a latent (or perhaps not-so-latent) backlash against over-advertising and in-your-face branding, we were eager to jump on the Nothing bandwagon. It seems like anti-branding, but it’s not. It’s actually expertly crafted, by-the-book branding. Just as Seinfeld, a show about nothing, became one of the most popular TV shows in history, Nothing shows there is some serious branding swagger in the concept of nothing. I can’t believe no one thought to stake a claim to this branding goldmine before now.

OpenAI’s Q* – Why Should We Care?

OpenAI founder Sam Altman’s ouster and reinstatement has rolled through the typical news cycle and we’re now back to blissful ignorance. But I think this will be one of those sea-change moments: a tipping point that we’ll look back on in the future, when AI has changed everything we thought we knew, and wonder, “How the hell did we let that happen?”

Sometimes I think that tech companies use acronyms and cryptic names for new technologies to allow them to sneak game changers in without setting off the alarm bells. Take OpenAI for example. How scary does Q-Star sound? It’s just one more vague label for something we really don’t understand.

 If I’m right, we do have to ask the question, “Who is keeping an eye on these things?”

This week I decided to dig into the whole Sam Altman firing/hiring episode a little more closely so I could understand if there’s anything I should be paying attention to. Granted, I know almost nothing about AI, so what follows is very much at the layperson level, but I think that’s probably true for the vast majority of us. I don’t run into AI engineers that often in my life.

So, should we care about what happened a few weeks ago at OpenAI? In a word – YES.

First of all, a little bit about the dynamics of what led to Altman’s original dismissal. OpenAI started with the best of altruistic intentions: to “ensure that artificial general intelligence benefits all of humanity.” That was an ideal – many would say a naïve ideal – that Altman and OpenAI’s founders imposed on themselves. As Google discovered with its “Don’t Be Evil” mantra, it’s really hard to be successful and idealistic at the same time. In our world, success is determined by profits, and idealism and profitability almost never play in the same sandbox. Google quietly watered down the “Don’t Be Evil” motto until it virtually disappeared in 2018.

OpenAI’s non-profit board was set up as a kind of internal “kill switch” to prevent the development of technologies that could be dangerous to the human race. That theoretical structure was put to the test when the board received a letter this year from some senior researchers at the company warning of a new artificial intelligence discovery that might take AI past the threshold where it could be harmful to humans. The board then did what it was set up to do, firing Altman, removing board chairman Greg Brockman and putting the brakes on the potentially dangerous technology. Then, Big Brother Microsoft (which has invested $13 billion in OpenAI) stepped in, and suddenly Altman was back. (Note – for a far more thorough and fascinating look at OpenAI’s unique structure and the endemic problems with it, read through Alberto Romero’s series of thoughtful posts.)

There were probably two things behind Altman’s ouster: the potential capabilities of a new development called Q-Star and a fear that it would follow OpenAI’s previous path of throwing it out there to the world, without considering potential consequences. So, why is Q-Star so troubling?

Q-Star could be a major step closer to AI that can reason and plan. This moves us closer to the overall goal of artificial general intelligence (AGI), the holy grail for every AI developer, including OpenAI. Artificial general intelligence, per OpenAI’s own definition, refers to “AI systems that are generally smarter than humans.” Q-Star, through its ability to tackle grade school math problems, showed the promise of being artificial intelligence that could plan and reason. And that is an important tipping point, because something that can reason and plan pushes us forever past the boundary of a tool under human control. It’s technology that thinks for itself.

Why should this worry us? It should worry us because of Herbert Simon’s concept of “bounded rationality”, which explains that we humans are incapable of pure rationality. At some point we stop thinking endlessly about a question and come up with an answer that’s “good enough”. And we do this because of limited processing power. Emotions take over and make the decision for us.

But AGI throws those limits away. It can process exponentially more data at a rate we can’t possibly match. If we’re looking at AI through Sam Altman’s rose-colored glasses, that should be a benefit. Wouldn’t it be better to have decisions made rationally, rather than emotionally? Shouldn’t that be a benefit to mankind?

But here’s the rub. Compassion is an emotion. Empathy is an emotion. Love is also an emotion. What kind of decisions do we come to if we strip that out of the algorithm, along with any type of human check and balance?

Here’s an example. Let’s say that at some point in the future an AGI superbrain is asked the question, “Is the presence of humans beneficial to the general well-being of the earth?”

I think you know what the rational answer to that is.

AI, Creativity and the Last Beatles Song

I have never been accused of being a Luddite. Typically, I’m on the other end of the adoption curve – one of the first to adopt a new technology. But when it comes to AI, I am stepping forward gingerly.

Now, my hesitancy notwithstanding, AI is here to stay. In my world, it is well past the tipping point from a thing that exists solely in the domain of tech to a topic of conversation for everyone, from butchers to bakers to candlestick makers. Everywhere I turn now I see those ubiquitous two letters – AI. That was especially true in the last week, with the turmoil around Sam Altman and the “is he fired/isn’t he” drama at OpenAI.

In 1991, Geoffrey Moore wrote the book Crossing the Chasm, looking at how technologies are adopted. He explained that adoption depends on the nature of the technology itself. If it’s a continuation of technology we understand, the adoption follows a fairly straightforward bell curve through the general population.

But if it’s a disruptive technology – one that we’re not familiar with – then adoption plots itself out on an S-Curve. The tipping point in the middle of that curve where it switches from being skinny to being fat is what he called the “chasm.” Some technologies get stuck on the wrong side of the chasm, never to be adopted by the majority of the market.  Think Google Glass, for example.
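If it helps to picture that S-curve, the standard way to sketch it is with a logistic function. This is my own stand-in, not Moore’s formulation, and the midpoint and steepness numbers are arbitrary:

```python
# A rough logistic S-curve: slow start, steep middle, flattening top.
# In this framing, the steep middle is roughly where the "chasm" sits.
import math

def adoption(t, midpoint=5.0, steepness=1.2):
    """Fraction of the market that has adopted by time t."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

for year in range(11):
    share = adoption(year)
    print(f"year {year:2d}  {share:5.2f}  {'#' * int(share * 40)}")
```

Technologies that stall on the skinny left-hand side of that curve are the ones that never make it across.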

There is often a pattern to the adoption of disruptive technologies (and AI definitely fits this description).  To begin with, we find a way to adapt it and use it for the things we’re already doing. But somewhere along the line, innovators grasp the full potential of the technology and apply it in completely new ways, pushing capabilities forward exponentially. And it’s in that push forward where all the societal disruption occurs. Suddenly, all the unintended consequences make themselves known.

This is exactly where we seem to be with AI. Most of us are using it to tweak the things we’ve always done. But the prescient amongst us are starting to look at what might be, and for many of us, we’re doing so with a furrowed brow. We’re worried, and, I suspect, with good reason.

As one example, I’ve been thinking about AI and creativity. As someone who has always dabbled in creative design, media production and writing, I’ve had this top of mind. I have often tried to pry open the mystic box that is the creative process.

There are many, creative software developers foremost amongst them, who will tell you that AI will be a game changer when it comes to creating – well – just about anything.

Or, in the case of the last Beatles single to be released, recreating anything. Now and Then, the final Beatles song featuring the Fab Four, was made possible by an AI program created by Peter Jackson’s team for the documentary Get Back. It allowed Paul McCartney, Ringo Starr and their team of producers (headed by George Martin’s son Giles) to separate John Lennon’s vocals from the piano background on a demo tape from 1978.
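The underlying trick here is audio source separation. Jackson’s in-house tool isn’t something you or I can download, but open-source separators give a rough feel for the idea. Here’s a sketch using Spleeter, with a placeholder file name, and obviously nowhere near the quality the Get Back team achieved:

```python
# pip install spleeter
# Split a mixed recording into vocals and accompaniment using a
# pre-trained 2-stem model. "demo_tape.wav" is a placeholder file name.
from spleeter.separator import Separator

separator = Separator("spleeter:2stems")

# Writes vocals.wav and accompaniment.wav under output/demo_tape/
separator.separate_to_file("demo_tape.wav", "output/")
```

The Beatles’ producers needed something far more surgical to pull one voice off a noisy cassette, but the principle is the same: a model learns what a voice “looks like” in the audio and lifts it away from everything else.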

One last Beatles song featuring John Lennon – that should be a good thing – right? I guess. But there’s a flip side to this.

Let’s take writing, for example. Ask anyone who has written something longer than a tweet or Instagram post. What you start out intending to write is never what you end up with. Somehow, the process of writing takes its own twists and turns, usually surprising even the writer. Even these posts, which average only 700 to 800 words, usually end up going in unexpected directions by the time I place the final period.

Creativity is an iterative process and there are stages in that process. It takes time for it all to play out. No matter how good my initial idea is, if I simply fed it into an AI black box and hit the “create” button, I don’t know if the outcome would be something I would be happy with.

“But,” you protest, “what about AI taking the drudgery out of the creative process? What if you use it to clean up a photo, or remove background noise from an audio recording (a la the Beatles single)? That should free up more time and more options for you to be creative, right?”

That promise is certainly what’s being pitched by AI merchants right now. And it makes sense. But it only makes sense at the skinny end of the adoption curve. That’s where we’re at right now, using AI as a new tool to do old jobs. If we think that’s where we’re going to stay, I’m pretty sure we’re being naïve.

I believe creativity needs some sweat. It benefits from a timeline that allows for thinking, and rethinking, over and over again. I don’t believe creativity comes from instant gratification, which is what AI gives us. It comes from iteration that creates the spaces needed for inspiration.

Now, I may be wrong. Perhaps AI’s ability to instantly produce hundreds of variations of an idea will prove the proponents right. It may unleash more creativity than ever. But I still believe we will lose an essential human element in the process that is critical to the act of creation.

Time will tell. And I suspect it won’t take very long.

(Image – The Beatles in WPAP – wendhahai)