The Raging Ripple Effect of Airbnb

Ripple Effect: the continuing and spreading results of an event or action.

I’m pretty sure Brian Chesky and Joe Gebbia had no idea what they were unleashing when they decided to rent out an air mattress in the front room of their San Francisco apartment in the fall of 2007. The idea made all kinds of sense: a huge conference was in town, there wasn’t a hotel room to be had, and they were perpetually short on their rent. It seemed like the perfect win-win – and, at first, it was.

But then came the Internet. Airbnb was born and would unleash unintended consequences that would change the face of tourism, upend real estate markets and tear apart neighborhoods in cities around the world.

For the past two decades we have seen the impact of simple ideas that can scale massively thanks to the networked world we live in. In a physical world, there are real-world factors that limit growth. Distribution, logistics, production, awareness – each of these critical requirements for growth is necessarily limited by geography and physical reality.

But in a wired world, sometimes all you need to do is provide an intermediary link between two pools of latent potential, and the effect is the digital equivalent of an explosion. There is no physical friction to moderate the effect. That’s what Airbnb did. Chesky and Gebbia’s simple idea became the connection between frustrated travellers who were tired of exorbitant hotel prices and millions of ordinary people who happened to have a spare bed. There was enormous potential on both sides, and all Airbnb had to do was facilitate the connection.

Airbnb’s rise was meteoric. After Chesky and Gebbia’s initial brainstorm in 2007, they launched a website the next spring, in 2008. One year later there were hosts in 1,700 cities in 100 different countries. Two years after that, Airbnb had hosted its one millionth guest and had over 120,000 listings. By 2020, the year Covid threw a pandemic-sized spanner in the works of tourism, Airbnb had 5.6 million listings and was heading towards an IPO.

Surprisingly, though, a global pandemic wasn’t the biggest problem facing Airbnb. There was a global backlash building that had nothing to do with Covid-19. Airbnb’s biggest problem was the unintended ripple effects of Chesky and Gebbia’s simple idea.

Up until the debut of the internet and the virtual rewiring of our world, new business ideas usually grew slowly enough for the world to react to their unintended consequences. As problems emerged, new legislation could be passed, new safeguards could be introduced and new guidelines could be put in place. But when Airbnb grew from a simple idea to a global juggernaut in a decade, things happened too quickly for the physical world to respond. Everything was accelerated: business growth, demand and the impact on both tourism and the communities those tourists were flocking to.

Before we knew what was happening, tourism had exploded to unsustainable levels, real estate markets had gone haywire and entire communities were being gutted as their character changed from traditional neighborhood to temporary housing for wave after wave of tourists. It’s only recently that many of the cities threatened by the “Airbnb effect” have responded with legislation that either bans or severely curtails short-term vacation rentals.

The question is, now that it’s been unleashed, can the damage done by Airbnb be undone? Real estate markets that were artificially fueled by sales to prospective short-term rental hosts may eventually find a new equilibrium, but many formerly affordable listings could remain priced beyond the reach of first-time home buyers. Will cities deluged by an onslaught of tourism ever regain the charm that made them favored destinations in the first place? Will neighbourhoods that were transformed by owners cashing in on the Airbnb boom ever regain their former character?

In our networked world, the ripples of unintended consequences spread quickly, but their effects may be with us forever.

There Are No Shortcuts to Being Human

The Velvet Sundown fooled a lot of people, including millions of fans on Spotify and the writers and editors at Rolling Stone. It was a band that suddenly showed up on Spotify several months ago, with full albums of vintage, Americana-styled rock. Millions started streaming the band’s songs – except there was no band. The songs, the album art, the band’s photos – it was all generated by AI.

When you know this and relisten to the songs, you swear you would never have been fooled. Those who are now in the know say the music is formulaic, derivative and uninspired. Yet we were fooled – or at least millions of us were – taken in by an AI hoax, or what is now euphemistically labelled on Spotify as “a synthetic music project guided by human creative direction and composed, voiced and visualized with the support of artificial intelligence.”

Formulaic. Derivative. Synthetic. We mean these as criticisms, but they are accurate descriptions of exactly how AI works. It is synthesis by formulas (or algorithms) that parse billions or trillions of data points, identify patterns and derive the finished product from them. That is AI’s greatest strength… and its biggest downfall.

The human brain, on the other hand, works quite differently. Our biggest constraint is the limit of our working memory. When we analyze disparate data points, the number of available slots in our temporary memory bank can be as low as the single digits. To cognitively function beyond this limit, we have to do two things: “chunk” those data points together into mental building blocks and code them with emotional tags. That is the human brain’s greatest strength… and, again, its biggest downfall. What the human brain is best at is what AI is unable to do. And vice versa.

A few posts back, when talking about one less-than-impressive experience with an AI tool, I ended by musing about what role humans might play as AI evolves and becomes more capable. One possible answer is something labelled “HITL,” or “humans in the loop.” It plugs the “humanness” that sits in our brains into the equation, allowing AI to do what it’s best at and humans to provide the spark of intuition or the “gut checks” that currently cannot come from an algorithm.

As an example, let me return to the subject of that previous post, building a website. There is a lot that AI could do to build out a website. What it can’t do very well is anticipate how a human might interact with the website. These “use cases” should come from a human, perhaps one like me.

Let me tell you why I believe I’m qualified for the job. For many years, I studied online user behavior quite obsessively and published several white papers that are still cited in the academic world. I was a researcher for hire, with contracts with all the major online players. I say this not to pump my own ego (okay, maybe a little bit – I am human after all) but to set up the process of how I acquired this particular brand of expertise.

It was accumulated over time, as I learned how to analyze online interactions, code eye-tracking sessions and talk to users about their goals and intentions. All the while, I was continually plugging new data into my few available working memory slots and “chunking” it into the building blocks of my expertise, to the point where I could quickly look at a website or search results page and provide a pretty accurate “gut call” prediction of how a user would interact with it. This is – without exception – how humans become experts at anything. Malcolm Gladwell called it the “10,000-hour rule.” For humans to add any value “in the loop,” they must put in the time. There are no shortcuts.

Or – at least – there never used to be. There is now, and that brings up a problem.

Humans now do something called “cognitive off-loading.” If something looks like it’s going to be a drudge to do, we now get ChatGPT to do it. This is the slogging mental work that our brains are not particularly well suited to. That’s probably why we hate doing it – the brain is trying to shirk the work by tagging it with a negative emotion (brains are sneaky that way). Why not get AI, which can instantly sort through billions of data points and synthesize them into a one-page summary, to do our dirty work for us?

But by off-loading, we short-circuit the very process required to build that uniquely human expertise. Writer, researcher and educational change advocate Eva Keiffenheim outlines the potential danger for humans who “off-load” to a digital brain: we may lose the sole advantage we can offer in an artificially intelligent world. “If you can’t recall it without a device, you haven’t truly learned it. You’ve rented the information. We get stuck at ‘knowing about’ a topic, never reaching the automaticity of ‘knowing how.’”

For generations, we’ve treasured the concept of “know-how.” Perhaps, in all that time, we forgot how much hard mental work was required to gain it. That could be why we are so quick to trade it away now that we can.

Bots and Agents – The Present and Future of A.I.

This past weekend I got started on a website I told a friend I’d help him build. I’ve been building websites for over 30 years now, but for this one, I decided to use a platform that was new to me. Knowing there would be a significant learning curve, my plan was to use the weekend to learn the basics of the platform. As is now true everywhere, I had just logged into the dashboard when a window popped up asking if I wanted to use their new AI co-pilot to help me plan and build the website.

“What the hell?” I thought. “Let’s take it for a spin!” Even if it only lessened the learning curve a little, it could still save me dozens of hours. The promise was intriguing – the AI co-pilot would ask me a few questions and then give me back the basic bones of a fully functional website. Or, at least, that’s what I thought.

I jumped on the chatbot and started typing. With each question, my expectations rose. It started with the basics: what were we selling, what were our product categories, where was our market? Soon, though, it started asking me what tone of voice I wanted, what our color scheme was, what search functionality was required, whether there were any competitors’ sites that we liked or disliked, and if so, what specifically we liked or disliked about them. As I plugged in my answers, I wondered what exactly I would get back.

The answer, as it turned out, was not much. After being reassured that I had provided a strong enough brief for an excellent plan, I clicked the “finalize” button and waited. And waited. And waited. The ellipsis below my last input just kept fading in and out. Finally, I asked, “Are you finished yet?” I was encouraged to just wait a few more minutes as it prepared a plan guaranteed to amaze.

Finally – ta da! – I got the “detailed web plan.” As far as I could tell, it had simply sucked in my input and belched it out again, formatted as a bullet list. I was profoundly underwhelmed.

Going into this, I had little experience with AI. I have used it sparingly for tasks that tend to have a well-defined scope. I have to say, I have been impressed more often than I have been disappointed, but I haven’t really kicked the tires of AI.

Every week, when I sit down to write this post, Microsoft Copilot urges me to let it show what it can do. I have resisted, because when I do ask AI to write something for me, it reads like a machine did it. It’s worded correctly and usually gets the facts right, but there is no humanness in the process. One thing I think I have is an ability to connect the dots – to bring together seemingly unconnected examples or thoughts and join them into a unique perspective. For me, AI is a workhorse that can go out and gather information in a utilitarian manner, but somewhere in the mix, a human is required to add the spark of intuition or inspiration. For now, anyway.

Meet Agentic AI

With my recent AI debacle still fresh in my mind, I happened across a blog post from Bill Gates. It seems I thought I was talking to an AI “Agent” when, in fact, I was chatting with a “Bot.” It’s agentic AI that will probably deliver the usefulness I’ve been looking for for the last decade and a half.

As it turns out, Gates was at least a decade and a half ahead of me in that search. He first talked about intelligent agents in his 1995 book The Road Ahead. But it’s only now that they’ve become possible, thanks to advances in AI. In his post, Gates describes the difference between bots and agents: “Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.”

This is exactly the “app-ssistant” I first described in 2010 and have returned to a few times since, even down to using the same example Bill Gates did – planning a trip. This is what I was expecting when I took the web-design co-pilot for a test flight. I was hoping that – even if it couldn’t take me all the way from A to Z – it could at least get me to M. As it turned out, it couldn’t even get past A. I ended up exactly where I started.

But the day will come. And, when it does, I have to wonder if there will still be room on the flight for us human passengers.

Paging Dr. Robot

When it comes to the benefits of A.I., one of the most intriguing opportunities is in healthcare. Microsoft recently announced that in a diagnostic challenge pitting its Microsoft AI Diagnostic Orchestrator (MAI-DxO) against 21 general-practice physicians, the A.I. system correctly diagnosed 85% of 300 challenging cases gathered from the New England Journal of Medicine. The human doctors only managed to get 20% of the diagnoses correct.

This is of particular interest to me, because Canada has a health care problem. In a recent comparison of international health policies conducted by the Commonwealth Fund, Canada came in last amongst 9 countries, most of which also have universal health care, on most key measures of timely access.

This is a big problem, but it’s not an unsolvable one. This does not qualify as a “wicked” problem, which I’ve talked about before. Wicked problems have no clear solution. I believe our healthcare problems can be solved, and A.I. could play a huge role in the solution.

The Canadian Medical Association has outlined both the problems facing our healthcare system and some potential solutions. The overarching narrative is one of a system stretched beyond its resources and patients unable to access care in a timely manner. Human resources are burnt out and demotivated. Our back-end health record systems are siloed and inconsistent. An aging population, health misinformation, political beliefs and climate change are creating more demand for health services just as the supply of those services is being depleted.

Here’s one personal example of the gaps in our own health records. I recently had to go to my family doctor for a physical that is required to maintain my commercial driver’s license. I was assigned to a student doctor, given that it was a very routine check-up. Because I was seeing the doctor anyway, I thought it a good time to ask for a regular blood panel, because it had been a while since I had had one. Being a male of a certain age, I also asked for a Prostate-Specific Antigen (PSA) test and was told that it isn’t recommended as a screening test in my province anymore.

I was taken aback. I had been diagnosed with prostate cancer a decade earlier and had been successfully treated for it. It was a PSA test that led to an early diagnosis. I mentioned this to the doctor, who was sitting behind a computer screen with my records in front of him. He looked back at the screen and said, “Oh, you had prostate cancer? I didn’t know that. Sure, I’ll add a PSA to the requisition.”

I wish I could say that’s an isolated incident, but it’s not. These gaps in our medical records happen all the time here in my part of Canada. And they can all be solved. It’s the aggregation and analysis of data beyond the limits of humans to handle that A.I. excels at. Yet our healthcare system continues to overwork exhausted healthcare providers and keep our personal health data hostage in siloed data centers because of systemic resistance to technology. I know there are concerns, but surely those concerns can be addressed.

I write this from a Canadian perspective, but I know these problems – and others – exist in the U.S. as well. If A.I. can do certain jobs four times better than a human, it’s time to accept that and build it into our healthcare system. The answers to Canada’s healthcare problems may not be easy, but they are doable: integrate our existing health records, open the door to incorporating personal biometric data from new wearable devices, use A.I. to analyze all of this, and use humans where they can do things A.I. and technology can’t.

We need to start opening our minds to new solutions, because when it comes to a broken healthcare system, it’s literally a matter of life and death.

The Question We Need to Ask about AI

This past weekend I listened to a radio call-in show about AI. The question posed was this: are those using AI regularly achievers or cheaters? A good percentage of the conversation focused on AI in education, especially for those in post-secondary studies. Educators worried about being able to detect the use of AI to help complete coursework, such as the writing of papers. Many callers – all of whom were probably well north of 50 years old – bemoaned the fact that students today are not understanding the fundamental concepts they’re being presented because they’re using AI to complete assignments. A computer science teacher explained why he teaches obsolete coding to his students – it helps them understand why they’re writing code at all. What is it they want the code to do? He can tell when his students are using AI because they submit examples of coding that are well beyond their abilities.

That, in a nutshell, sums up the problem with our current thinking about AI. Why are we worried about trying to detect the use of ChatGPT by a student who’s learning how to write computer code? Shouldn’t we be instead asking why we need humans to learn coding at all, when AI is better at it? Maybe it’s a toss-up right now, but it’s guaranteed not to stay that way for long. This isn’t about students using AI to “cheat.” This is about AI making humans obsolete.

As I was writing this, I happened across an essay by computer scientist Louis Rosenberg. He is worried that those in his circle, like the callers to the show I was listening to, “have never really considered what life will be like the day after an artificial general intelligence (AGI) is widely available that exceeds our own cognitive abilities.” Like I said, what we use AI for now is a poor indicator of what AI will be doing in the future. To use an analogy I have used before, it’s like using a rocket to power your lawnmower.

But what will life be like when, in a somewhat chilling example put forward by Rosenberg, “I am standing alone in an elevator — just me and my phone — and the smartest one speeding between floors is the phone?”

It’s hard to wrap your mind around the possibilities. One of the callers to the show was a middle-aged man who was visually impaired. He talked about the difference it made to him when he got a pair of Meta Glasses last Christmas. Suddenly, his world opened up. He could make sure the pants and shirt he picked out to wear that day were colors that matched. He could see if his recycling had been picked up before he made the long walk down the driveway to retrieve the bin. He could cook for himself because the glasses could tell him what was in the boxes he took off his kitchen shelf. AI gave him back his independence.

I personally believe we’re on the cusp of multiple AI revolutions. Healthcare will take a great leap forward when we lessen our requirement that expert advice come from a human. In Canada, general practitioners are in desperately short supply. When you combine AI with the leaps being made by incorporating biomonitoring into wearable technology, it’s hard to imagine what wouldn’t be possible in terms of living longer, healthier lives. I hope the same is true for dealing with climate change, agricultural production and the other existential problems we’re currently wrestling with.

But let’s back up to Rosenberg’s original question – what will life be like the day after AI exceeds our own abilities? The answer to that, I think, depends on who is in control of AI on the day before. The danger here is more than just humans becoming irrelevant. The danger is which humans are determining the future direction of AI before AI takes over the steering wheel and determines its own future.

For the past seven decades, the most pertinent question about our continued existence as a species has been this one: “Who is in charge of our combined nuclear arsenals?” But going forward, a more relevant question might be, “Who is setting the direction for AI?” Who is setting the rules, coming up with safeguards and determining what data the models are training on? Who determines what tasks AI takes on? Here’s just one example: when does AI get to decide whether the nuclear warheads are launched?

As I said, it’s hard to predict where AI will go. But I do know this. The general direction is already being determined. And we should all be asking, “By whom?”

The Tesla Cybertruck’s Branding Blow-Up

The inexact science of branding is nowhere more evident than in the case of the Tesla Cybertruck, which looks like it might usurp the Edsel’s title as the biggest automotive flop in history.

First, a little of the Tesla backstory. No, it wasn’t founded by Elon Musk. It was founded in 2003 by Martin Eberhard and Marc Tarpenning. Musk came in a year later as a money man. Soon, he had forced Eberhard and Tarpenning out of the company. But their DNA remained, notably in the design and engineering of the hugely popular Tesla Model S, Model X and Model 3. These designs drove Tesla to capture over 50% of the electric car market and are straight-line extensions of the original technology developed by Eberhard, Tarpenning and their initial team.

Musk is often lauded as an eccentric genius in the mold of Steve Jobs, one who had his fingers in every aspect of Tesla. While he was certainly influential, it’s not in the way most people think. The Model S, Model X and Model 3 soon became plagued by production issues, failed software updates, product quality red flags and a continual failure to meet Musk’s wildly optimistic and often delusional predictions, both in terms of sales and promised updates. Those things all happened on Musk’s watch. Even with all this, Tesla was the darling of investors and media, driving it to become the most valuable car company in the world.

Then came the Cybertruck.

Introduced in 2019, the Cybertruck did have Musk’s fingerprints all over it. The WTF design, the sheer impracticality of a truck in name only, a sticker price nearly double what Musk originally promised and a host of quality issues – including body panels with a tendency to fall off – have kept sales from coming anywhere close to projections.

In its first year of sales (2024), the Cybertruck sold 40,000 units, about 16% of what Musk predicted annual sales could be. That makes it a bigger fail than the Edsel, which sold 63,000 units against a target of 200,000 in its introductory year, 1958. The Edsel did worse in 1959 and was yanked from the market in 1960. The Cybertruck is sinking even faster. In the first quarter of this year, only 6,406 Cybertrucks were sold, half the number sold in the same quarter a year ago. There are over 10,000 Cybertrucks sitting on Tesla lots in the U.S., waiting for buyers who have yet to show up.

But it’s not just that the Cybertruck is a flawed product. Musk has destroyed Tesla’s brand in a way that can only be marvelled at. His erratic actions have managed to generate feelings of visceral hate in a huge segment of the market and that hate has found a visible target in the Cybertruck. It has become the symbol of Elon Musk’s increasingly evident meltdown.

I remember my first reaction when I heard that Musk had jumped on the MAGA bandwagon. “How the hell,” I thought, “does that square with the Tesla brand?” That brand, pre-Musk-meltdown and pre-Cybertruck, was a car for the environmentally conscious who had a healthy bank account – excitingly leading edge but not dangerously so. Driving a Tesla made a statement that didn’t seem to be in the MAGA lexicon at all. It was all very confusing.

But I think it’s starting to make a little more sense. That brand was built by vehicles that Musk had limited influence over. Sure, he took full credit for the brand, but just like the company he took over, its initial form and future direction were determined by others.

The Cybertruck was a different story. That was very much Musk’s baby. And just like his biological ones (14 and counting), it shows all the hallmarks of Musk’s “bull in a china shop” approach to life. He lurches from project to project, completely tone-deaf to the implications of his actions. He is convinced that his genius is infallible. If the Tesla brand is a reflection of Musk, then the Cybertruck gives us a much truer picture. It shows what Tesla would have been if there had never been a Martin Eberhard and Marc Tarpenning and Musk had been the original founder.

To say that the Cybertruck is “off brand” for Tesla is like saying that the Titanic had a tiny mishap. But it’s not that Musk made a mistake in his brand stewardship. It’s that he finally had the chance to build a brand that he believed in.

Curation is Our Future. But Can You Trust It?

 You can get information from anywhere. But the meaning of that information can come from only one place: you. Everything we take in from the vast ecosystem of information that surrounds us goes through the same singular lens – one crafted by a lifetime of collected beliefs and experiences.

Finding meaning has always been an essentially human activity. Meaning motivates us – it is our operating system. And the ability to create shared meaning can create or crumble societies. We are seeing the consequences of shared meaning play out right now in real time.

The importance of influencing meaning creates an interesting confluence between technology and human behavior. For much of the past two decades, technology has been focusing on filtering and organizing information. But we are now in an era where technology will start curating our information for us. And that is a very different animal.

What does it mean to “curate” an answer, rather than simply present it to you? Curation is more than just collecting and organizing things. The act of curation puts information in a context that adds value by suggesting a possible meaning. This crosses the line between simply disseminating information and attempting to influence individuals by giving them a meaningful context for that information.

Not surprisingly, the roots of curation lie – in part – with religion. The word comes from the Latin “curare” – “to take care of.” In medieval times, curates were priests who cared for souls. And they cared for souls by providing a meaning that lay beyond the realm of our corporeal lives. If you really think about religion, it is one massive superimposition of a pre-packaged meaning on the world as we perceive it.

In the future, as we access our world through technology platforms, we will rely on technology to mediate meaning. For example, searches on Google now include an “AI Overview” at the top of the search results. The Google page explaining the feature says it shows up when “you want to quickly understand information from a range of sources, including information from across the web and Google’s Knowledge Graph.” That is Google – or rather Google’s AI – curating an answer for you.

It could be argued that this is just another step to make search more useful – something I’ve been asking for for a decade and a half now. In 2010, I said that “search providers have to replace relevancy with usefulness. Relevancy is a great measure if we’re judging information, but not so great if we’re measuring usefulness.” If AI could begin to provide actionable answers with a high degree of reliability, it would be a major step forward. There are many who say such curated answers could make search obsolete. But we have to ask ourselves: is this curation something we can trust?

With Google, this will probably start as unintentional curation – giving information meaning through a process of elimination. Given how people scan search listings (something I know a fair bit about) it’s reasonable to assume that many searchers will scan no further than the AI Overview, which is at the top of the results page. In that case, you will be spoon-fed whatever meaning happens to be the product of the AI compilation without bothering to qualify it by scanning any further down the results page. This conveyed meaning may well be unintentional, a distillation of the context from whatever sources provided the information. But given that we are lazy information foragers and will only expend enough effort to get an answer that seems reasonable, we will become trained to accept anything that is presented to us “top of page” at face value.

From there it’s not that big a step to intentional curation – presenting information to support a predetermined meaning. Given that pretty much every tech company folded like a cheap suit the minute Trump assumed office, slashing DEI initiatives and aligning their ethics – or lack thereof – with those of the White House, is it far-fetched to assume that they could start wrapping the information they provide in a “Trump-approved” context, giving us messaged meaning that supports specific political beliefs? One would hate to think so, but based on Facebook’s recent firing of its fact checkers, I’m not sure it’s wise to trust Big Tech to be the arbiters of meaning.

They don’t have a great track record.

The World vs Big Tech

Around the world, governments have their legislative crosshairs trained on Big Tech. It’s happening in the US, the EU and here in my country, Canada. Most of these actions are anti-trust suits. But Australia has just introduced a different type of legislation, a social media ban for those under 16. And that could change the game – and the conversation – completely for Big Tech.

There are more anti-trust actions in the queue in the US than at any time in the previous five decades. The fast-and-loose interpretation of anti-trust enforcement in the US is that monopolies are only attacked when they may cause significant harm to consumers through lack of competition. The US approach to anti-trust since the 1970s has typically followed the Chicago School of neoclassical economic theory, which places all trust in the efficiency of markets and tells governments to keep their damned hands off the economy. Given this, and given the pro-business slant of every US administration, both Republican and Democratic, since Reagan, it’s not surprising that we’ve seen relatively few anti-trust suits in the past 50 years.

But the rapid rise of monolithic Big Tech platforms has generated more discussion about anti-trust in the past decade than in the previous five. These platforms drag the industries they spawn along in their wake and leave little room for upstart competitors to survive long enough to gain significant market share.

Case in point: Google. 

The recent Canadian lawsuit has the Competition Bureau (our anti-trust watchdog) suing Google for anti-competitive practices in selling its online advertising services north of the 49th parallel. The Bureau is asking that Google sell off two of its ad-tech tools, pay penalties worth up to 3% of its global gross revenues and be prohibited from engaging in anti-competitive practices in the future.

According to a 3-year inquiry into Google’s Canadian business practices by the Bureau, Google controls 90% of all ad servers and 70% of advertising networks operating in the country. Mind you, Google started the online advertising industry in the relatively green fields of Canada back when I was still railing about the ignorance of Canadian advertisers when it came to digital marketing. No one else really had a chance. But Google made sure they never got one by wrapping its gigantic arms around the industry in an anti-competitive bear hug.

The recent Australian legislation is of a different category, however. Anti-trust suits are – by nature – not personal. They are all about business. But the Australian ban puts Big Tech in the same category as Big Tobacco, Big Alcohol and Big Pharma – alleging that they are selling an addictive product that causes physical or emotional harm to individuals. And the rest of the world is closely watching what Australia does. Canada is no exception.

The most pertinent question is how Australia will enforce the ban. Keeping those under 16 off social media is not something to be considered lightly. It’s a huge technical, legal and logistical hurdle to get over. But if Australia can figure it out, it’s certain that other jurisdictions around the world will follow in its footsteps.

This legislation opens the door to more vigorous public discourse about the impact of social media on our society. Politicians don’t introduce legislation unless they feel that – by doing so – they will continue to get elected. And the key to being elected is one of two things: give the electorate what they want or protect them against what they fear. In Australia, recent polling indicates the ban is supported by 77% of the population. Even those opposing the ban aren’t doing so in defense of social media. They’re worried that the devil might be in the details and that the legislation is being pushed through too quickly.

These types of things tend to follow a similar narrative arc: fads and trends drive widespread adoption – evidence mounts about the negative impacts – industries either ignore or actively sabotage the sources of the evidence – and, with enough critical mass, government finally gets into the act by introducing protective legislation.

With tobacco in the US, that arc took a couple of decades, from the explosion of smoking after World War II to the U.S. Surgeon General’s 1964 report linking smoking and cancer. The first warning labels on cigarette packages appeared two years later, in 1966.

We may be on the cusp of a similar movement with social media. And, once again, it’s taken 20 years. Facebook was founded in 2004.

Time will tell. In the meantime, keep an eye on what’s happening Down Under.

The Political Brinkmanship of Spam

I am never a fan of spam. But this is particularly true when there is an upcoming election. The level of spam I have been wading through seems to have doubled lately. We just had a provincial election here in British Columbia, and all parties pulled out all the stops, which included, but was not limited to: email, social media posts, robotexts and robocalls.

In Canada and the US, political campaigns are not subject to phone and text spam control laws such as our Canadian Do Not Call List legislation. There seems to be a little more restriction on email spam. A report from Nationalsecuritynews.com this past May warned that Americans would be subjected to over 16 billion political robocalls. That is a ton of spam.

During this past campaign here in B.C., I noticed that I do not respond to all spam with equal abhorrence. Ironically, the spam channels with the loosest restrictions are the ones that frustrate me the most.

There are places – like email – where I expect spam. It’s part of the rules of engagement. But there are other places where spam sneaks through and seems a greater intrusion on me. In these channels, I tend to have a more visceral reaction to spam. I get both frustrated and angry when I have to respond to an unwanted text or phone call. But with email spam, I just filter and delete without feeling like I was duped.

Why don’t we deal with all spam – no matter the channel – the same way? Why do some forms of spam make us more irritated than others? It’s almost like we’ve developed a spam algorithm that dictates how irritated we get when we encounter it.

According to an article in Scientific American, the answer might lie in how the brain marshals its own resources.

When it comes to capacity, the brain is remarkably protective. It usually defaults to the most efficient path. It likes to glide on autopilot, relying on instinct, habit and beliefs. All these things use much less cognitive energy than deliberate thinking. That’s probably why “mindfulness” is the most often quoted but least often used meme in the world today.

The resource we’re working with here is attention. Limited by the capacity of our working memory, attention is a spotlight we must use sparingly. Our working memory is only capable of handling a few discrete pieces of information at a time. Recent research suggests the limit may be around 3 to 5 “chunks” of information, and that research was done on young adults. Like most things with our brains, the capacity probably diminishes with age. Therefore, the brain is very stingy with attention. 

I think spam that somehow gets past our first line of defence – the feeling that we’re in control of filtering – makes us angry. We have been tricked into paying attention to something unexpected. It becomes a control issue. In an information environment where we feel we have more control, we probably have less of a visceral response to spam. This would be true for email, where a quick scan of the items in our inbox is probably enough to filter out the spam. The amount of attention that gets hijacked by spam is minimal.

But when spam launches a sneak attack and demands a swing of attention that is beyond our control, that’s a different matter. We operate with a different mental modality when we answer a phone or respond to a text. Unlike email, we expect those channels to be relatively spam-free, or at least they are until an election campaign comes around. We go in with our spam defences down and then our brain is tricked into spending energy to focus on spurious messaging.

How does the brain conserve energy? It uses emotions. We get irritated when something commandeers our attention. The more unexpected the diversion, the greater the irritation.  Conversely, there is the equivalent of junk food for the brain – input that requires almost no thought but turns on the dopamine tap and becomes addictive. Social media is notorious for this.

This battle for our attention has been escalating for the past two decades. As we try to protect ourselves from spam with more powerful filters, those who spread spam try to find new ways to get past those filters. The reason political messaging was exempted from spam control legislation was that democracies need a well-informed electorate and, during election campaigns, political parties should be able to send out accurate information about their platforms and positions.

That was the theory, anyway.

The Adoption of A.I.

Recently, I was talking to a reporter about AI. She was working on a piece about what Apple’s integration of AI into the latest iOS (cleverly named Apple Intelligence) would mean for its adoption by users. Right at the beginning, she asked me this question, “What previous examples of human adoption of tech products or innovations might be able to tell us about how we will fit (or not fit) AI into our daily lives?”

That’s a big question. An existential question, even. Luckily, she gave me some advance warning, so I had a chance to think about it. Even with the heads-up, my answer was still well short of anything resembling helpfulness. It was, “I don’t think we’ve ever dealt with something quite like this. So, we’ll see.”

Incisive? Brilliant? Erudite? No, no and no.

But honest? I believe so.

When we think in terms of technology adoption, it usually falls into two categories: continuous and discontinuous. Continuous innovation simply builds on something we already understand. It’s adoption that follows a straight line, with little risk involved and little effort required. It’s driving a car with a little more horsepower, or getting a smartphone with more storage.

Discontinuous innovation is a different beast. It’s an innovation that displaces what went before it. In terms of user experience, it’s a blank slate, so it requires effort and a tolerance for risk to adopt it. This is the type of innovation that is adopted on a bell curve, first identified by American sociologist Everett Rogers in 1962. The acceptance of these new technologies spreads along a timeline defined by the personalities of the marketplace. Some are the type to try every new gadget, and some hang on to the tried and true for as long as they possibly can. Most of us fall somewhere in between.

As an example, think about going from driving a traditional car to an electric vehicle. The change from one to the other requires some effort. There’s a learning curve involved. There’s also risk. We have no baseline of experience to measure against. Some will be ahead of the curve and adopt early. Some will drive their gas clunker until it falls apart.

Falling into this second category of discontinuous innovation, but different by virtue of both the nature of the new technology and the impact it wields, are a handful of innovations that usher in a completely different paradigm. Think of the introduction of electrical power distribution in the late 19th century, the introduction of computers in the second half of the 20th century, or the spread of the internet in the 21st century.

Each of these was foundational, in that they sparked an explosion of innovation that wouldn’t have been possible if it were not for the initial innovation. These innovations not only change all the rules, they change the very game itself. And because of that, they impact society at a fundamental level. When these types of innovations come along, your life will change whether you choose to adopt the technology or not. And it’s these types of technological paradigm shifts that are rife with unintended consequences.

If I were trying to find a parallel for what AI means for us, I would look for it amongst these examples. And that presents a problem when we pull out our crystal ball and try to peer ahead at what might be. We can’t know. There’s just too much in flux – too many variables to compute with any accuracy. Perhaps we can project forward a few months or a year at the most, based on what we know today. But trying to peer any further forward is a fool’s game. Could you have anticipated what we would be doing on the Internet in 2024 when the first BBS (Bulletin Board System) was introduced in Chicago in 1978?

A.I. is like these previous examples, but it’s also different in one fundamental way. All these other innovations had humans at the switch. Someone needed to turn on the electric light, boot up the computer or log on to the internet. At this point, we are still “using” A.I., whether as an add-on in software we’re familiar with, like Adobe Photoshop, or as a stand-alone app like ChatGPT. But generative A.I.’s real potential can only be discovered when it slips from the grasp of human control and starts working on its own, hidden under some algorithmic hood, safe from our meddling human hands.

We’ve never dealt with anything like this before. So, like I said, we’ll see.