No News is Not Good News

Kelowna, the city I live in – population about 250,000 – just aired its last locally produced TV newscast, ending a 67-year streak. Our local station, CHBC, first signed on the air on September 21, 1957.

That streak was not without some hiccups. There have been a number of ownership changes, and the trend in those transitions was away from local ownership and toward huge, nation-spanning media conglomerates. In 2009, when the station became part of the Global network, the intention was to shut down the local operation and run everything out of CHAN, Global's Vancouver station. We kicked up a Kelowna fuss and convinced Global to at least keep a local news presence in the community. But – as it turned out – that just bought us some time. Fifteen years later, the plug was finally pulled.

In that time, my city has also essentially lost its daily newspaper, now a mere ghost of its former self: an anemic online edition and a printed paper that is little more than a wrapper for grocery flyers. The tri-weekly paper has suffered a similar fate. Radio stations have gutted their local news teams. The biggest news team in the region now works for a local news portal; its members are young and eager, but few of them are trained journalists.

CHBC started as an extension of local radio. At the time it launched, only 500 households in the city had a TV set. Broadcasting was “over the air,” and I live in very mountainous country, so it was impossible to watch TV before the station signed on.

Given that the first TV stations in Canada only signed on in 1952 (CBFT in Montreal and CBLT in Toronto), it’s rather amazing to think that my little town (population 10,000 at the time) had its own station just five years later. Part of the impetus for the rapid rollout of TV in Canada was to prevent cultural colonization by the rapidly expanding American TV industry. Our federal government pushed hard to have Canadian programming available from coast to coast.

For the decades that followed, it was local news that defined communities. Local coverage was granular and immediately relevant in a way network news couldn’t be. It gave you what you needed to know to participate knowledgeably in local democracy.

For that alone, CHBC News will be missed here in Kelowna.

This story probably resonates with all of you. The death of local journalism is not unique to my city. I have just learned that I will probably soon be living in a news desert. The importance of local news is enshrined in the very definition of a news desert:

“a community, either rural or urban, with limited access to the sort of credible and comprehensive news and information that feeds democracy at the grassroots level.”

The death of local news was recently discussed at the Canadian Association of Journalists Annual conference in Toronto. There, April Lindgren, a professor at Toronto Metropolitan University’s School of Journalism and the principal investigator of the Local News Research Project, said this:

“I think one of the things ... people don’t think about in terms of the mechanics of the role of local news in a community is the role that it plays in equipping people to participate in decision-making.”

We need local news. A recent study by Resonate found that Americans trust local news more than any other source – and not just by a little. The next closest source was a full 15 percentage points behind.

But two existential problems are pushing local news to the brink of an extinction event. First, most local news outlets were swallowed up by corporate mass-media conglomerates over the past three or four decades. Second, the business model for local news has disappeared: local advertising dollars have migrated to other platforms. So the fate of local news has become a P&L decision.

That’s what it was for CHBC. It’s owned by Corus Entertainment. Corus owns the Global network (15 stations), 39 radio stations, 33 specialty TV channels and a bunch of other media miscellanea.

Oh, did I mention that Corus is also bleeding cash at a fatal rate? On the heels of an announced $770-million (CDN) loss, it cut 25% of its workforce. That was the death knell for CHBC. It didn’t have a hope in hell.

Local news doesn’t have to die. It just has to find another way to live. As with so much of our media environment, basing survival on advertising revenue is a sure recipe for disaster. That’s why the Local News Research Project is floating ideas like supporting local news with philanthropy. I’m not sure that’s a viable or scalable answer.

I think a better idea might be to move local news to protected-species status. If we recognize its importance to democracy, especially at the local level, then perhaps tax dollars should go to ensuring its survival.

The scenario of government-supported local journalism brings up a philosophical debate that I have ignited in the past, when I talked about public broadcasting. It split my readers along national lines, with those from the US giving the idea a thumbs down, and those from Australia, New Zealand and Canada viewing it more favorably.

Let’s see what happens this time.

Can OpenAI Make Searching More Useful?

As you may have heard, OpenAI is testing a prototype of a new search engine called SearchGPT. A press release from July 25 notes: “Getting answers on the web can take a lot of effort, often requiring multiple attempts to get relevant results. We believe that by enhancing the conversational capabilities of our models with real-time information from the web, finding what you’re looking for can be faster and easier.”

I’ve been waiting for this for a long time: search that moves beyond relevance to usefulness. It was 14 years ago that I said this in an interview with Aaron Goldman regarding his book “Everything I Know About Marketing I Learned from Google”:

“Search providers have to replace relevancy with usefulness. Relevancy is a great measure if we’re judging information, but not so great if we’re measuring usefulness. That’s why I believe apps are the next flavor of search, little dedicated helpers that allow us to do something with the information. The information itself will become less and less important and the app that allows utilization of the information will become more and more important.”

I’ve felt for almost two decades that the days of search as a destination were numbered. For over 30 years now (Archie, the first internet search engine, was created in 1990), when we’re looking for something online, we search, and then we have to do something with what we find on the results page. Sometimes, a single search is enough — but often, it isn’t. For many of our intended end goals, we still have to do a lot of wading through the Internet’s deep end, filtering out the garbage, picking up the nuggets we need and then assembling those into something useful.

I’ve spent much of those past two decades pondering what the future of search might be. In fact, my previous company wrote a white paper on it back in 2007. We were looking forward to what we thought might be the future of search, but we didn’t look too far forward: we set 2010 as our crystal-ball horizon. Then we assembled an all-star panel of search design and usability experts, including Marissa Mayer, who was then Google’s vice president of search user experience and interface design, and Jakob Nielsen, principal of the Nielsen Norman Group and the web’s best-known usability expert. We asked them what they thought search would look like in three years’ time.

Even back then, almost 20 years ago, I felt the linear presentation of a results page – the 10 blue links concept that started search – was limiting. Since then, we have moved beyond the 10 blue links. A Google search today for the latest iPhone model (one of our test queries in the white paper) actually looks eerily similar to the mock-up we did of what a Google search might look like in the year 2010. It just took Google 14 extra years to get there.

But the basic original premise of search is still there: do a query, and Google will try to return the most relevant results. If you’re looking to buy an iPhone, today’s results page is probably more useful than it once was, mainly due to sponsored content. But it’s still well short of the usefulness I was hoping for.

It’s also interesting to see what directions search has (and hasn’t) taken since then. Mayer talked a lot about interacting with search results. She envisioned an interface where you could annotate and filter your results: “I think that people will be annotating search results pages and web pages a lot. They’re going to be rating them, they’re going to be reviewing them. They’re going to be marking them up, saying ‘I want to come back to this one later.’”

That never really happened. The idea of search as a sticky and interactive interface for the web sort of materialized, but never to the extent that Mayer envisioned.

From our panel, it was Nielsen’s crystal ball that seemed to offer the clearest view of the future: “I think if you look very far ahead, you know 10, 20, 30 years or whatever, then I think there can be a lot of things happening in terms of natural language understanding and making the computer more clever than it is now. If we get to that level then it may be possible to have the computer better guess at what each person needs without the person having to say anything, but I think right now, it is very difficult.”

Nielsen was spot-on in 2007. It’s exactly those advances in natural language processing and artificial intelligence that could now allow ChatGPT to break out of the paradigm of the search results page and turn searching the web into something more useful.

A decade and a half ago, I envisioned an ecosystem of apps that could bridge the gap between what we intended to do and the information and functionality that could be found online. That’s exactly what’s happening at OpenAI – a number of functional engines powered by AI, all beneath a natural-language “chat” interface.

At this point, we still have to “say” what we want in the form of a prompt, but the more we use ChatGPT (or any AI interface) the better it will get to know us. In 2007, when we wrote our white paper on the future of search, personalization was what we were all talking about. Now, with ChatGPT, personalization could come back to the fore, helping AI know what we want even if we can’t put it into words.

As I mentioned in a previous post, we’ll have to wait to see if SearchGPT can make search more useful, especially for complex tasks like planning a vacation, making a major purchase or planning a big event.

But I think all the pieces are there. The monetization silos that dominate the online landscape will still prove a challenge to getting all the way to our final destination, but SearchGPT could make the journey faster and a little less taxing.

Note: I still have a copy of our 2007 white paper if anyone is interested. Just email me (you’ll find my address on the contact page) and I’ll send you a copy.

Why Time Seems to Fly Faster Every Year

Last week, I got an email congratulating me on being on LinkedIn for 20 years.

My first inclination was that it couldn’t be twenty years. But when I did the mental math, I realized it was right. I first signed up in 2004. LinkedIn itself had been founded just two years before, in late 2002.

LinkedIn would have been my first try at a social platform. I couldn’t see the point of MySpace, which started in 2003. And I was still a couple of years away from even being aware Facebook existed. It launched in 2004, when it was still known as TheFacebook, and it wouldn’t open to the public until 2006, after it dropped the “The.” So, 20 years pretty much marks the full span of my involvement with social media.

Twenty years is a significant chunk of time. Depending on your genetics, it’s probably between a quarter and a fifth of your life. A lot can happen in 20 years. But we don’t process time the same way as we get older. Twenty years when you’re 18 seems like a much bigger chunk of time than it does when you’re in your 60s.

I always mark these things in my far-off, distant youth by my grad year, 1979. If I use that as the starting point, rolling back 20 years takes me all the way to 1959, a year that seemed prehistoric to me when I was a teenager. That was a time of sock hops, funny cars with tail fins, and Frankie Avalon – all things that belonged to a different world than the one I knew in 1979. Ancient Rome couldn’t have been further removed from my reality.

Yet, that same span of time lies between me and the first time I set up my profile on LinkedIn. And that just seems like yesterday to me. This all got me wondering – do we process time differently as we age? The answer, it turns out, is yes. Time is time – but the perception of time is all in our heads.

The reason we feel time “flies” as we get older was explained in a paper published by Professor Adrian Bejan. In it, he states, “The ‘mind time’ is a sequence of images, i.e. reflections of nature that are fed by stimuli from sensory organs. The rate at which changes in mental images are perceived decreases with age, because of several physical features that change with age: saccades frequency, body size, pathways degradation, etc.”

So, it’s not that time is moving faster, it’s just that our brain is processing it slower. If our perception of time is made up of mental snapshots of what is happening around us, we simply become slower at taking the snapshots as we get older. We notice less of what’s happening around us. I suspect it’s a combination of slower brains and perhaps not wanting to embrace a changing world quite as readily as we did when we were young. Maybe we don’t notice change because we don’t want things to change.

If we were using a more objective yardstick (speaking of which, when is the last time you actually used a yardstick?), I’m guessing the world changed at least as much between 2004 and 2024 as it did between 1959 and 1979. If I were 18 years old today, I’m guessing that Britney Spears, The Lord of the Rings and the last episode of Frasier would seem as ancient to me as a young Elvis, Ben-Hur and The Danny Thomas Show seemed to me then.

To me, all these things seem like they were just yesterday. Which is probably why it comes as a bit of a shock to see a picture of Britney Spears today. She doesn’t look like the 22-year-old we remember from what we mistakenly recall as just a few years ago. But Britney is 42 now, and as a 42-year-old, she’s held up pretty well.

And, now that I think of it, so has LinkedIn. I still have my profile, and I still use it.

Why The World No Longer Makes Sense

Does it seem that the world no longer makes sense? That may not just be you. The world may in fact no longer be making sense.

In the late 1960s, psychologist Karl Weick introduced the world to the concept of sensemaking, but we were making sense of things long before that. It’s the mental process we go through to try to reconcile who we believe we are to the world in which we find ourselves.  It’s how we give meaning to our life.

Weick identified seven properties critical to the process of sensemaking. I won’t mention them all, but here are three that are especially important to keep in mind:

  1. Who we believe we are forms the foundation we use to make sense of the world.
  2. Sensemaking needs retrospection. We need time to mull over new information we receive and form it into a narrative that makes sense to us.
  3. Sensemaking is a social activity. We look for narratives that seem plausible, and when we find them, we share them with others.

I think you see where I’m going with this. Simply put, our ability to make sense of the world is in jeopardy, for both internal and external reasons.

External to us, the quality of the narratives that are available to us to help us make sense of the world has nosedived in the past two decades. Prior to social media and the implosion of journalism, there was a baseline of objectivity in the narratives we were exposed to. One would hope that there was a kernel of truth buried somewhere in what we heard, read or saw on major news providers.

But that’s not the case today. Sensationalism has taken over journalism, driven by the need for profitability by showing ads to an increasingly polarized audience. In the process, it’s dragged the narratives we need to make sense of the world to the extremes that lie on either end of common sense.

This wouldn’t be quite as catastrophic for sensemaking if we were more skeptical. The sensemaking cycle does allow us to judge the quality of new information for ourselves, deciding whether it fits with our frame of what we believe the world to be, or if we need to update that frame. But all that validation requires time and cognitive effort. And that’s the second place where sensemaking is in jeopardy: we don’t have the time or energy to be skeptical anymore. The world moves too quickly to be mulled over.

In essence, our sensemaking is us creating a model of the world that we can use without requiring us to think too much. It’s our own proxy for reality. And, as a model, it is subject to all the limitations that come with modeling. As the British statistician George E.P. Box said, “All models are wrong, but some are useful.”

What Box didn’t say is that the more wrong our model is, the less likely it is to be useful. And that’s the looming issue with sensemaking: the model we use to determine what is real is becoming less and less tethered to actual reality.

It was exactly that problem that prompted Daniel Schmachtenberger and others to set up the Consilience Project. The idea of the project is this: the more diversity of perspective you can include in your model, the more likely the model is to be accurate. That’s what “consilience” means – pulling perspectives from different disciplines together to get a more accurate picture of complex issues. It literally means the “jumping together” of knowledge.

The Consilience Project is trying to reverse the erosion of modern sensemaking – from both an internal and an external perspective – that comes from the overt polarization and the narrowing of perspective that currently typify the information sources feeding our own sensemaking models. As Schmachtenberger says, “If there are whole chunks of populations that you only have pejorative strawman versions of, where you can’t explain why they think what they think without making them dumb or bad, you should be dubious of your own modeling.”

That, in a nutshell, explains the current media landscape. No wonder nothing makes sense anymore.

My Mind is Meandering

Thirty-seven years ago, when I first drove into the valley I now call home, I said to myself, “Now, this is a place for meandering!”

Meandering is a word we don’t use enough today. We certainly don’t do the actual act of meandering enough anymore. To “meander” is to “flow in a winding course.” It comes from Maiandros, the Greek name of a river in Turkey (also known as the Büyük Menderes) known for its sinuous path. This is perhaps what brought the word to mind when I drove into Western Canada’s Okanagan Valley, a valley formed by water in both its flowing and frozen forms.

I have always loved the word meander. Even the sound of it is like a journey; you scale the heights of the hard “e,” pausing for a minute to rest against the soft “a”, after which you descend into the lush vale that is formed by its remaining syllable. The aquatic origins of the word are appropriate, because to meander is to be in a state of flow but with no purpose in mind. Meandering allows the mind to freewheel, to pick its own path.

You know what’s another great word? Saunter.

My favorite story about sauntering is that told by Albert Palmer in his 1919 book, The Mountain Trail and Its Message. He tells of an exchange with John Muir, the founder of the Sierra Club, who was called the Father of America’s National Parks. In the exchange, Muir explains why he finds the word “saunter” far more to his taste than “hike”:

“Do you know the origin of that word ‘saunter’? It’s a beautiful word. Away back in the Middle Ages people used to go on pilgrimages to the Holy Land, and when people in the villages through which they passed asked where they were going, they would reply, ‘A la sainte terre,’ ‘To the Holy Land.’ And so they became known as sainte-terre-ers or saunterers. Now these mountains are our Holy Land, and we ought to saunter through them reverently, not ‘hike’ through them.”

According to Google’s Ngram Viewer, literary usage of the word “saunter” hit its peak in the 1800s and was in decline for most of the following century. That timeline makes sense. Sauntering would definitely have been popular with the Romantic movement of the 1800s, which turned back to appreciate the charms of nature – an open invitation to “saunter” through Muir’s “Holy Land.”

For some reason, the word seems to be enjoying a bit of a resurgence in usage in the last 20 years.

Meander is a different story. It only started to really appear in books towards the end of the 1800s and continued to be used through the 20th century, although usage dropped during times of tribulation, notably World War I, the Great Depression of the 1930s and throughout World War II. Again, that’s not surprising. It’s hard to meander when you’re in a constant state of anxiety.

As my mind meandered down this path, I wondered if there is a digital equivalent to meandering or sauntering. Take scrolling through Facebook, for example. It is navigating without any specific destination in mind, so perhaps it qualifies as meandering. There is no direct line to connect A to B.

But I wouldn’t call social media scrolling sauntering. There’s a distinction between “meandering” and “sauntering.” I think sauntering implies that you know where you’re going, but that there is no rigid schedule for getting there. You can take as much time as you like to smell the flowers along the way.

Also, as John Muir mentioned, sauntering requires a certain sense of place. The setting in which you saunter is of critical importance. However you would define your own “Holy Land,” that’s the place where you should saunter. It should be grounded in some gravitas.

That’s why I don’t think you can really saunter through social media. To me, Facebook, Instagram or TikTok are a far cry from being considered hallowed ground.

Google Leak? What Google Leak?

If this were 15 years ago, I might have cared about the supposed Google Leak that broke in late May.

But it’s not, and I don’t. And I’m guessing you don’t either. In fact, you could well be saying, “What Google leak?” Unless you’re an SEO, there is nothing of interest here. Even if you are an SEO, that might be true.

I happen to know Rand Fishkin, the person who publicly broke the leak last week. Neither of us is in the SEO biz anymore, but obviously his level of interest in the leak far exceeded mine. He devoted almost 6,000 words to it in the post where he first unveiled the leaked documents, which were passed on to him by Erfan Azimi, CEO and director of SEO at EA Eagle Digital.

Rand and I spoke at many of the same conferences before I left the industry in 2012. Even at that time, our interests were diverging. He was developing what would become the Moz SEO tool suite, so he was definitely more versed in the technical side of SEO. I had already focused my attention on the user side of search, looking at how people interacted with a search engine page. Still, I always enjoyed my chats with Rand.

Back then, SEO was an intensely tactical industry. Conference sessions that delved into the nitty gritty of ranking factors and shared ways to tweak sites up the SERP were the ones booked into the biggest conference rooms, because organizers knew they’d be jammed to the rafters.

I always felt a bit like a fish out of water at these conferences. I tried to take a more holistic view, looking at search as just one touchpoint in the entire online journey. To me, what was most interesting was what happened both before the search click and after it. That was far more intriguing to me than what Google might be hiding under their algorithmic hood.

Over time, my sessions developed their own audience. Thanks to mentors like Danny Sullivan, Chris Sherman and Brett Tabke, conference organizers carved out space for me on their agendas. Ken Fadner and the MediaPost team even let me build a conference that did its best to deal with search at a more holistic level, the Search Insider Summit. We broadened the search conversation to include more strategic topics like multipoint branding, user experience and customer journeys.

So, when the Google leak story blipped on my radar, I was immediately taken back to the old days of SEO. Here, again, was what appeared to be a dump of documents that might give some insight into the nuts and bolts of Google’s ranking factors. MediaPost’s own coverage said that “leaked Google documents has given the search industry proprietary insight into Google Search, revealing very important elements that the company uses to rank content.” Predictably, SEOs swarmed over it like a flock of seagulls attacking a half-eaten hot dog on a beach, still looking for some magic bullet that might move them higher in the organic results.

They didn’t come up with much. Brett Tabke, who I consider one of the founders of SEO (he coined the term SERP), spent five hours combing through the documents and said it wasn’t a leak and the documents contained no algorithm-related information. To mash up my metaphors, the half-eaten hotdog was actually a nothingburger.

But oh, my SEOs – you still love diving into the nitty gritty, don’t you?

What is more interesting to me is how the actual search experience has changed in the past decade or two. In doing the research for this, I happened to run across a great clip about tech monopolies from Last Week Tonight with John Oliver. He shows how much of the top of the Google SERP is now dominated by information and links from Google itself. Again quoting a study from Rand Fishkin’s new company, SparkToro, Oliver noted that “64.82% of searches on Google ... ended ... without clicking to another web property.”

That little tidbit has some massive implications for marketers. The days of relying on a high organic ranking are long gone, because even if you achieve it, you’ll be pushed well down the page.

And on that, Rand Fishkin and I seem to agree. In his post, he does say, “If there was one universal piece of advice I had for marketers seeking to broadly improve their organic search rankings and traffic, it would be: ‘Build a notable, popular, well-recognized brand in your space, outside of Google search.’”

Amen.

Can Media Move the Overton Window?

I fear that somewhere along the line, mainstream media has forgotten its obligation to society.

It was 63 years ago (on May 9, 1961) that the new Federal Communications Commission chair, Newton Minow, gave his famous speech, “Television and the Public Interest,” to the convention of the National Association of Broadcasters.

In that speech, he issued a challenge: “I invite each of you to sit down in front of your own television set when your station goes on the air and stay there, for a day, without a book, without a magazine, without a newspaper, without a profit and loss sheet or a rating book to distract you. Keep your eyes glued to that set until the station signs off. I can assure you that what you will observe is a vast wasteland.”

Minow was saying that media has an obligation to set the cultural and informational boundaries for society: the higher you set them, the more we will strive to reach them. That point was a callback to the Fairness Doctrine, established by the FCC in 1949. The policy required holders of broadcast licenses to “present controversial issues of public importance and to do so in a manner that fairly reflected differing viewpoints.” The Fairness Doctrine was abolished by the FCC in 1987.

What Minow realized, presciently, was that mainstream media is critically important in building the frame for what would come to be called, three decades later, the Overton Window. First identified by policy analyst Joseph Overton at the Mackinac Center for Public Policy, the concept would posthumously be named after Overton by his colleague Joseph Lehman.

The term is typically used to describe the range of topics suitable for public discourse in the political arena. But, as Lehman explained in an interview, the boundaries are not set by politicians: “The most common misconception is that lawmakers themselves are in the business of shifting the Overton Window. That is absolutely false. Lawmakers are actually in the business of detecting where the window is, and then moving to be in accordance with it.”

I think the concept of the Overton Window is more broadly applicable than just within politics. In almost any aspect of our society where there are ideas shaped and defined by public discourse, there is a frame that sets the boundaries for what the majority of society understands to be acceptable — and this frame is in constant motion.

Again, according to Lehman, “It just explains how ideas come in and out of fashion, the same way that gravity explains why something falls to the earth. I can use gravity to drop an anvil on your head, but that would be wrong. I could also use gravity to throw you a life preserver; that would be good.”

Typically, the frame drifts over time to the right or left of the ideological spectrum. What came as a bit of a shock in November of 2016 was just how quickly the frame pivoted and started heading to the hard right. What was unimaginable just a few years earlier suddenly seemed open to being discussed in the public forum.

Social media was held to blame. In a New York Times op-ed written just after Trump was elected president (a result that stunned mainstream media), columnist Farhad Manjoo said, “The election of Donald J. Trump is perhaps the starkest illustration yet that across the planet, social networks are helping to fundamentally rewire human society.”

In other words, social media can now shift the Overton Window — suddenly, and in unexpected directions. This is demonstrably true, and the nuances of this realization go far beyond the limits of this one post to discuss.

But we can’t be too quick to lay all the blame for the erratic movements of the Overton Window on social media’s doorstep.

I think social media, if anything, has expanded the window in both directions — right and left. It has redefined the concept of public discourse, moving both ends out from the middle. But it’s still the middle that determines the overall position of the window. And that middle is determined, in large part, by mainstream media.

It’s a mistake to suppose that social media has completely supplanted mainstream media. I think all of us understand that the two work together. We use what is discussed in mainstream media to get our bearings for what we discuss on social media. We may move right or left, but most of us realize there is still a boundary to what is acceptable to say.

The red flags start to go up when this goes into reverse and mainstream media starts using social media to get its bearings. If you have the mainstream chasing outliers on the right or left, you start getting some dangerous feedback loops where the Overton Window has difficulty defining its middle, risking being torn in two, with one window for the right and one for the left, each moving further and further apart.

Those who work in the media have a responsibility to society. It can’t be abdicated for the pursuit of profit or by saying they’re just following their audience. Media determines the boundaries of public discourse. It sets the tone.

Newton Minow was warning us about this six decades ago.

Uncommon Sense

Let’s talk about common sense.

“Common sense” is one of those underpinnings of democracy that we take for granted. Basically, it hinges on this concept: the majority of people will agree that certain things are true. Those things are then defined as “common sense.” And common sense becomes our reference point for what is right and what is wrong.

But what if the very concept of common sense isn’t true? That was what researchers Duncan Watts and Mark Whiting set out to explore.

Duncan Watts is one of my favourite academics. He is a computational social scientist at the University of Pennsylvania. I’m fascinated by network effects in our society, especially as they’re now impacted by social media. And that pretty much describes Watts’s academic research “wheelhouse.”

According to his profile, he’s “interested in social and organizational networks, collective dynamics of human systems, web-based experiments, and analysis of large-scale digital data, including production, consumption, and absorption of news.”

Duncan, you had me at “collective dynamics.”

I’ve cited his work in several columns before, notably a previous study that shot several holes in marketing’s ongoing love affair with so-called “influencers” and the idea of targeting that elite group.

Whiting and Watts took 50 claims that would seem to fall into the category of common sense, ranging from the obvious (“a triangle has three sides”) to the more abstract (“all human beings are created equal”). They then recruited an online panel of participants to rate whether each claim was common sense or not.

What did they find? Well, apparently common sense isn’t very common. Their report says, “we find that collective common sense is rare: at most a small fraction of people agree on more than a small fraction of claims.” Fewer than half of the 50 claims were identified as common sense by at least 75% of respondents. Claims based on science were more likely to be categorized as common sense; claims about history or philosophy were less likely.
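For the quantitatively curious, the claim-level measure behind that “fewer than half” finding is easy to sketch. Below is a minimal Python sketch of the calculation: given a matrix of yes/no ratings (rows are respondents, columns are claims), count the fraction of claims that clear an agreement threshold. The ratings here are synthetic data I made up; only the 75% threshold comes from the study as quoted above.

```python
import random

random.seed(42)
n_respondents, n_claims = 200, 50

# Synthetic ratings: each claim gets its own base rate of being
# called "common sense", drawn uniformly at random.
base_rates = [random.random() for _ in range(n_claims)]
ratings = [
    [1 if random.random() < base_rates[c] else 0 for c in range(n_claims)]
    for _ in range(n_respondents)
]

def consensus_fraction(ratings, threshold=0.75):
    """Fraction of claims rated 'common sense' by at least
    `threshold` of respondents."""
    n = len(ratings)
    agreed = 0
    for c in range(len(ratings[0])):
        votes = sum(row[c] for row in ratings)
        if votes / n >= threshold:
            agreed += 1
    return agreed / len(ratings[0])

print(consensus_fraction(ratings))
```

Even with this toy data, pushing the threshold up quickly shrinks the set of claims that qualify — which is the study’s point in miniature: demanding broad agreement leaves you with very little “common” sense.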

Now, I must admit, I’m not really surprised by this. We know we are part of a pretty polarized society. It’s no shock that we share little in the way of ideological common ground.

But there is a fascinating potential reason why common sense is actually quite uncommon: we define common sense based on our own realities, and what is real for me may not be real for you. Our realities are built from what we perceive, and increasingly, we perceive the “real” world through a lens shaped by technology and media – both traditional and social.

Here is where common sense gets confusing. Many things – especially abstract things – have subjective reality. They are not really provable by science. Take the idea that all human beings are created equal. We may believe that, but how do we prove it? What does “equal” mean?

So when someone appeals to our common sense (usually a politician), just what are they appealing to? It’s not a universally understood fact that everyone agrees on. It’s typically a framework of belief that is probably shared by only a relatively small percentage of the population. This really makes it a type of marketing, completely reliant on messaging and targeting the right market.

Common sense isn’t what it once was. Or perhaps it never was. Either common or sensible.

Feature image: clemsonunivlibrary

Talking Out Loud to Myself

I talk to myself out loud. Yes, full conversations, questions and answers, even debates — I can do everything all by myself.

I don’t do it when people are around. I’m just not that confident in my own cognitive quirks. It doesn’t seem, well… normal, you know?

But between you and me, I do it all the time. I usually walk at the same time. For me, nothing works better than some walking and talking with myself to work out particularly thorny problems.

Now, if I were using Google to diagnose myself, it would be a coin toss whether I was crazy or a genius. It could go either way. One of the sites I clicked on said it could be a symptom of psychosis. But another site pointed to a study at Bangor University (2012 – Kirkham, Breeze, Mari-Beffa) that indicates that talking to yourself out loud may indicate a higher level of intelligence. Apparently, Nikola Tesla talked to himself during lightning storms. Of course, he also had a severe aversion to women who wore pearl earrings. So the jury may still be out on that one.

I think pushing your inner voice through the language processing center of your brain and actually talking out loud does something to crystallize fleeting thoughts. One of the researchers on the Bangor study, Paloma Mari-Beffa, agrees with this hypothesis:

“Our results demonstrated that, even if we talk to ourselves to gain control during challenging tasks, performance substantially improves when we do it out loud.”

Mari-Beffa continues,

“Talking out loud, when the mind is not wandering, could actually be a sign of high cognitive functioning. Rather than being mentally ill, it can make you intellectually more competent. The stereotype of the mad scientist talking to themselves, lost in their own inner world, might reflect the reality of a genius who uses all the means at their disposal to increase their brain power.”

When I looked for any academic studies to support the value of talking out loud to yourself, I found one (Huang, Carr and Cao, 2001) that was obviously aimed at neuroscientists, something I definitely am not. But after plowing through it, I think it said the brain does work differently when you say things out loud.

Another one (Gruber, von Cramon 2001) even said that when we artificially suppress our strategy of verbalizing our thoughts, our brains seem to operate the same way that a monkey’s brain would, using different parts of the brain to complete different tasks (e.g., visual, spatial or auditory). But when allowed to talk to themselves, humans tend to use a verbalizing strategy to accomplish all kinds of tasks. This indicates that verbalization seems to be the preferred way humans work stuff out. It gives our brains guardrails and a road map.

But if we’ve learned anything about human brains, it’s that they don’t all work the same way. Are some brains more likely than others to benefit from their owner talking out loud? Take introverts. I am a self-confessed introvert. And I talk to myself. So I had to ask: are introverts more likely to have deep, meaningful conversations with themselves?

If you’re not an introvert, let me first tell you that introverts are generally terrible at small talk. But — if I do say so myself — we’re great at “big” talk. We like to go deep in our conversations, generally with just one other person. Walking and talking with someone is an introvert’s idea of a good time. So walking and talking with yourself should be the introvert’s holy grail.

While I couldn’t find any empirical evidence to support this correlation between self-talk and introversion, I did find a bucketful of sites about introverts noting that it’s pretty common for us to talk to ourselves. We are inclined to process information internally before we engage externally, so self-talk becomes an important tool in helping us to organize our thoughts.

Remember, external engagements tend to drain the battery of an introvert, so a little power management before the engagement to prevent running out of juice midway through a social occasion makes sense.

I know this is all a lot to think about. Maybe it would help to talk it out — by yourself.

Feature image by Brecht Bug – Flickr – Creative Commons

You Know What Government Agencies Need? Some AI

A few items on my recent to-do list have required dealing with multiple levels of governmental bureaucracy: regional, provincial (this being in Canada) and federal. All three experiences were a complete pain in the ass. So, having spent a good part of my life advising companies on how to improve their customer experience, the question that kept bubbling up in my brain was, “Why the hell is dealing with government such a horrendous experience?”

Anecdotally, everyone I know feels the same way. But what about everyone I don’t know? Do they also feel that the experience of dealing with a government agency is on par with having a root canal or colonoscopy?

According to a survey conducted last year by the research firm Qualtrics XM, the answer appears to be yes. This report paints a pretty grim picture. Satisfaction with government services ranked dead last when compared to private sector industries.

The next question, given that AI is all I seem to have been writing about lately, is this: “Could AI make dealing with the government a little less awful?”

And before you say it, yes, I realize I recently took a swipe at the AI-empowered customer service used by my local telco. But when the bar is set as low as it is for government customer service, I have to believe that even with the limitations of artificially intelligent customer service as it currently exists, it would still be a step forward. At least the word “intelligent” is in there somewhere.

But before I dive into ways to potentially solve the problem, we should spend a little time exploring the root causes of crappy customer service in government.

First of all, government has no competitors. That means there are no market forces driving improvement. If I have to get a building permit or renew my driver’s license, I have one option available. I can’t go down the street and deal with “Government Agency B.”

Secondly, in private enterprise, the maxim is that the customer is always right. This is, of course, bullshit.  The real truth is that profit is always right, but with customers and profitability so inextricably linked, things generally work out pretty well for the customer.

The same is not true when dealing with the government. Their job is to make sure things are (supposedly) fair and equitable for all constituents. And the determination of fairness needs to follow a universally understood protocol. The result is that government agencies are relentlessly regulation-bound and fixated on policies and process, even when those are hopelessly archaic. Part of this is to make sure that the rules are followed, but let’s face it, the bigger motivator here is to make sure all bureaucratic asses are covered.

Finally, there is a weird hierarchy that exists in government agencies.  Frontline people tend to stay in place even if governments change. But the same is often not true for their senior management. Those tend to shift as governments come and go. According to the Qualtrics study cited earlier, less than half (48%) of government employees feel their leadership is responsive to feedback from employees. About the same number (47%) feel that senior leadership values diverse perspectives.

This creates a workplace where most of the people dealing with clients feel unheard, disempowered and frustrated. This frustration can’t help but seep across the counter separating them from the people they’re trying to help.

I think all these things are givens and are unlikely to change in my lifetime. Still, perhaps AI could be used to help us navigate the serpentine landscape of government rules and regulations.

Let me give you one example from my own experience. I have to move a retaining wall that happens to front on a lake. In Canada, almost all lake foreshores are Crown land, which means you need to deal with the government to access them.

I have now been bouncing back and forth between three provincial ministries for almost two years to try to get a permit to do the work. In that time, I have lost count of how many people I’ve had to deal with. Just last week, someone sent me a couple of user guides that “I should refer to” in order to help push the process forward. One of them is 29 pages long. The other is 42 pages. They are both about as compelling and easy to understand as you would imagine a government document would be. After a quick glance, I figured out that only two of the 71 combined pages are relevant to me.

As I worked my way through them, I thought, “surely some kind of ChatGPT interface would make this easier, digging through the reams of regulation to surface the answers I was looking for. Perhaps it could even guide you through the application process.”
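To make that wish a little more concrete, here is a minimal sketch in Python of the simplest version of the idea: score each page of a long guide against a plain-language question and surface only the most relevant pages. The guide text and question are hypothetical stand-ins I invented, and a real assistant would use an LLM or embeddings rather than this crude bag-of-words overlap.

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def top_pages(pages, question, k=2):
    """Return the k page indices whose word overlap with the
    question is highest (a crude stand-in for semantic search)."""
    q = Counter(tokenize(question))
    scores = []
    for i, page in enumerate(pages):
        p = Counter(tokenize(page))
        overlap = sum(min(q[w], p[w]) for w in q)
        scores.append((overlap, i))
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

# Hypothetical 4-"page" guide; only pages 1 and 3 mention retaining walls.
pages = [
    "Definitions of Crown land and foreshore tenure categories.",
    "Permits for retaining wall construction on lake foreshore.",
    "Fee schedules and processing timelines for applications.",
    "Retaining wall setback and lake foreshore restoration rules.",
]
print(top_pages(pages, "permit to move a retaining wall on a lake foreshore"))
# → [1, 3]
```

Even something this simple would have told me which 2 of those 71 pages to read; the point isn’t the algorithm, it’s that the burden of finding the relevant rules shouldn’t fall entirely on the applicant.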

Let me tell you, it takes a lot to make me long for an AI-powered interface. But apparently, dealing with any level of government is enough to push me over the edge.