The Cost of Not Being Curious

The world is having a pandemic-proportioned wave of Ostrichitis.

Now, maybe you haven’t heard of Ostrichitis. But I’m willing to bet you’re showing at least some of the symptoms:

  • Avoiding newscasts, especially those that feature objective and unbiased reporting
  • Quickly scrolling past any online news items in your feed that look like they may be uncomfortable to read
  • Dismissing out of hand information coming from unfamiliar sources

These are the signs of Ostrichitis – or the Ostrich Effect – and I have all of them. This is actually a psychological effect, more pointedly called willful ignorance, which I wrote about a few years ago. And from where I’m observing the world, we all seem to have it to one extent or another.

I don’t think this avoidance of information comes as a shock to anyone. The world is a crappy place right now. And we all seem to have gained comfort from adopting the folk wisdom that “no news is good news.” Processing bad news is hard work, and we just don’t have the cognitive resources to crunch through endless cycles of catastrophic news. If the bad news affirms our existing beliefs, it makes us even madder than we already were. If it runs counter to our beliefs, it forces us to spin up our sensemaking mechanisms and reframe our view of reality. Either way, there are way more fun things to do.

A recent study from the University of Chicago attempted to pinpoint when children started avoiding bad news. The research team found that while young children don’t tend to put boundaries around their curiosity, as they age they start avoiding information that challenges their beliefs or threatens their own well-being. The threshold seems to be about 6 years old. Before that, children are actively seeking information of all kinds (as any parent barraged by never-ending “Whys” can tell you). After that, children start strategizing about the types of information they pay attention to.

Now, like everything about humans, curiosity tends to be an individual thing. Some of us are highly curious, and some of us religiously avoid seeking new information. But even if we are the curious sort, we may pick and choose what we’re curious about. We may find “safe zones” where we let our curiosity out to play. If things look too menacing, we may protect ourselves by curbing our curiosity.

The unfortunate part of this is that curiosity, in all its forms, is almost always a good thing for humans (even if it can prove fatal to cats).

The more curious we are, the better tied we are to reality. The lens we use to parse the world is something called a sense-making loop, which I’ve often referred to in the past. It’s a processing loop that compares what we experience with what we believe, referred to as our “frame.” For the curious, this frame is continually updated to match what we experience. For the incurious, the frame is held on to stubbornly, often by ignoring new information or bending it to conform to existing beliefs. A curious brain is a brain primed to grow and adapt. An incurious brain is stagnant and inflexible. That’s why the father of American psychology, William James, called curiosity “the impulse towards better cognition.”
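
For what it’s worth, that loop can be caricatured in a few lines of code. This is a toy sketch of my own, not anything from the psychology literature; the names `frame`, `experience` and the update rate are illustrative inventions:

```python
def sense_making_step(frame, experience, curious, rate=0.5):
    """One pass of the sense-making loop: compare experience to belief."""
    if curious:
        # The curious brain nudges its frame toward what it just experienced.
        return frame + rate * (experience - frame)
    # The incurious brain ignores the mismatch and keeps its frame intact.
    return frame

belief_curious = belief_stubborn = 0.0   # starting frames
reality = 10.0                           # what the world actually presents

for _ in range(20):
    belief_curious = sense_making_step(belief_curious, reality, curious=True)
    belief_stubborn = sense_making_step(belief_stubborn, reality, curious=False)

# belief_curious has converged on reality; belief_stubborn hasn't budged.
```

The only difference between the two runs is whether new information is allowed to touch the frame.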

When we think about the world we want, curiosity is a key factor in defining it. Curiosity keeps us moving forward. The lack of curiosity locks us in place or even pushes us backwards, causing the world to regress to a more savage and brutal place. Writers of dystopian fiction knew this. That’s why authors including H.G. Wells, Aldous Huxley, Ray Bradbury and George Orwell all made a lack of curiosity a key part of their bleak future worlds. Our current lack of curiosity is driving our world in the same dangerous direction.

For all these reasons, it’s essential that we stay curious, even if it’s becoming increasingly uncomfortable.

Being in the Room Where It Happens

I spent the past weekend attending a conference that I had helped to plan. As is now often the case, this was a hybrid conference; you could choose to attend in person or online via Zoom. Although it involved a long plane ride, I chose to attend in person. It could be because – as a planner – I wanted to see how the event played out. Also, it’s been a long time since I attended a conference away from home. Or – maybe – it was just FOMO.

Whatever the reason, I’m glad I was there, in the room.

This was a very small conference planned on a shoestring budget. We didn’t have money for extensive IT support or AV equipment. We were dependent solely on a laptop and whatever sound equipment our host was able to supply. We knew going into the conference that this would make for a less-than-ideal experience for those attending virtually. But – even accounting for that – I found there was a huge gap in the quality of the experience between those who were in the room and those attending online. And, over the duration of the 3-day conference, I observed why that might be so.

This conference was a 50/50 mix of those who already knew each other and those who were meeting for the first time. Even those who were familiar with each other tended to connect more often via a virtual meeting platform than in a physical meeting space. I know that despite the convenience and efficiency of being able to meet online, something is lost in the process. After two days of carefully observing what was happening in the room we were all in, I have a better understanding of what that loss might be: the vague and inexact art of creating a real bond with another person.

In that room, the bonding didn’t happen at the speaking podium and very seldom happened during the sessions we so carefully planned. It seeped in on the sidelines, over warmed-over coffee from conference centre urns, overripe bananas and the detritus of the picked-over pastry tray. The bonding came from all of us sharing and digesting a common experience. You could feel a palpable energy in the room. You could pick up the emotion, read the body language and tune in to the full bandwidth of communication that goes far beyond what could be transmitted between an onboard microphone and a webcam.

But it wasn’t just the sharing of the experience that created the bonds. It was the digesting of those experiences after the fact. We humans are herding animals, and that extends to how we come to consensus about things we go through together. We do so through communication with others – not just with words and gestures, but also through the full bandwidth of our evolved mechanisms for coming to a collective understanding. It wasn’t just that a camera and microphone couldn’t transmit that effectively; it was that it happened where there was no camera or mic.

As researchers have discovered, there is a lived reality and a remembered reality and often, they don’t look very much alike. The difference between the effectiveness of an in-person experience and one accessed through an online platform shouldn’t come as a surprise to us. This is due to how our evolved sense-making mechanisms operate. We make sense of reality both internally, through a comparison with our existing cognitive models and externally, through interacting with others around us who have shared that same reality. This communal give-and-take colors what we take with us, in the form of both memories and an updated model of what we know and believe. When it comes to how humans are built, collective sense making is a feature, not a bug.

I came away from that conference with much more than the content that was shared at the speaker dais. I also came away with a handful of new relationships, built on sharing an experience and, through that, laying down the first foundations of trust and familiarity. I would not hesitate to reach out to any of these new friends if I had a question about something or a project I felt they could collaborate on.

I think that’s true largely because I was in the room where it happened.

When Did the Future Become So Scary?

The TWA hotel at JFK airport in New York gives one an acute case of temporal dissonance. It’s a step backwards in time to the “Golden Age of Travel” – the 1960s. But even though you’re transported back 60 years, it seems like you’re looking into the future. The original space – the TWA Flight Center – was designed in 1962 by Eero Saarinen. This was a time when America was in love with the idea of the future. Science and technology were going to be our saving grace. The future was going to be a utopian place filled with flying jet cars, benign robots and gleaming, sexy white curves everywhere.  The TWA Flight Center was dedicated to that future.

It was part of our love affair with science and technology during the 60s. Corporate America was falling over itself to bring the space-age fueled future to life as soon as possible. Disney first envisioned the community of tomorrow that would become Epcot. Global Expos had pavilions dedicated to what the future would bring. There were four World Fairs over 12 years, from 1958 to 1970, each celebrating a bright, shiny white future. There wouldn’t be another for 22 years.

This fascination with the future was mirrored in our entertainment. Star Trek (pilot in 1964, series start in 1966) invited all of us to boldly go where no man had gone before, namely a future set roughly three centuries from then. For those of us of a younger age, the Jetsons (original series from 1962 to 63) indoctrinated an entire generation into this religion of future worship. Yes, tomorrow would be wonderful – just you wait and see!

That was then – this is now. And now is a helluva lot different.

Almost no one – especially in the entertainment industry – is envisioning the future as anything other than an apocalyptic hellhole. We’ve done an about-face and are grasping desperately for the past. The future went from being utopian to dystopian, seemingly in the blink of an eye. What happened?

It’s hard to nail down exactly when we went from eagerly awaiting the future to dreading it, but it appears to be sometime during the last two decades of the 20th Century. By the time the clock ticked over to the next millennium, our love affair was over. As Chuck Palahniuk, author of the 1999 novel Invisible Monsters, quipped, “When did the future go from being a promise to a threat?”

Our dread about the future might just be a fear of change. As the future we imagined in the 1960s started playing out in real time, perhaps we realized our vision was a little too simplistic. The future came with unintended consequences, including massive societal shifts. It’s like we collectively told ourselves, “Once burned, twice shy.” Maybe it was the uncertainty of the future that scared the bejeezus out of us.

But it could also be how we got our information about the impact of science and technology on our lives. I don’t think it’s a coincidence that our fear of the future coincided with the decline of journalism. Sensationalism and endless punditry replaced real reporting at just about the time we started this about-face. When negative things happened, they were amplified. Fear was the natural result. We felt out of control, and we keep telling ourselves that things never used to be this way.

The sum total of all this was the spread of a recognized psychological affliction called Anticipatory Anxiety – the certainty that the future is going to bring bad things down upon us. This went from being a localized phenomenon (“my job interview tomorrow is not going to go well”) to a widespread angst (“the world is going to hell in a handbasket”). Call it Existential Anticipatory Anxiety.

Futurists are – by nature – optimists. They believe things will be better tomorrow than they are today. In the Sixties, we all leaned into the future. The opposite of this is something called Rosy Retrospection, and it often comes bundled with Anticipatory Anxiety. It is a known cognitive bias that comes with a selective memory of the past, tossing out the bad and keeping only the good parts of yesterday. It makes us yearn to return to the past, when everything was better.

That’s where we are today. It explains the worldwide swing to the right. MAGA is really a 4-letter encapsulation of Rosy Retrospection – Make America Great Again! Whether you believe that or not, it’s a message that is very much in sync with our current feelings about the future and the past.

As writer and right-leaning political commentator William F. Buckley said, “A conservative is someone who stands athwart history, yelling Stop!”

There Are No Short Cuts to Being Human

The Velvet Sundown fooled a lot of people, including millions of fans on Spotify and the writers and editors at Rolling Stone. It was a band that suddenly showed up on Spotify several months ago, with full albums of vintage Americana styled rock. Millions started streaming the band’s songs – except there was no band. The songs, the album art, the band’s photos – it was all generated by AI.

When you know this and relisten to the songs, you swear you would have never been fooled. Those who are now in the know say the music is formulaic, derivative and uninspired. Yet we were fooled, or, at least, millions of us were – taken in by an AI hoax, or what is now euphemistically labelled on Spotify as “a synthetic music project guided by human creative direction and composed, voiced and visualized with the support of artificial intelligence.”

Formulaic. Derivative. Synthetic. We mean these as criticisms. But they are accurate descriptions of exactly how AI works. It is synthesis by formulas (or algorithms) that parse billions or trillions of data points, identify patterns and derive the finished product from it. That is AI’s greatest strength…and its biggest downfall.

The human brain, on the other hand, works quite differently. Our biggest constraint is the limit of our working memory. When we analyze disparate data points, the available slots in our temporary memory bank may number in the single digits. To function cognitively beyond this limit, we have to do two things: “chunk” the data points together into mental building blocks and code them with emotional tags. That is the human brain’s greatest strength… and, again, its biggest downfall. What the human brain is best at is what AI is unable to do. And vice versa.
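
To make the chunking half of that concrete, here’s a toy illustration. The seven-slot figure is the classic “seven, plus or minus two” estimate from the memory literature; the phone-number grouping is my own example:

```python
WORKING_MEMORY_SLOTS = 7  # the classic "seven, plus or minus two" estimate

def fits_in_memory(items):
    """A chunk counts as a single item, however much it contains."""
    return len(items) <= WORKING_MEMORY_SLOTS

raw_digits = list("6045551234")    # ten separate items: overflows the slots
chunked = ["604", "555", "1234"]   # three familiar chunks: fits with room to spare

print(fits_in_memory(raw_digits))  # False
print(fits_in_memory(chunked))     # True
```

Same ten digits either way; only the packaging changes, and the packaging is what lets them fit.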

A few posts back when talking about one less-than-impressive experience with an AI tool, I ended by musing what role humans might play as AI evolves and becomes more capable. One possible answer is something labelled “HITL” or “Humans in the Loop.” It plugs the “humanness” that sits in our brains into the equation, allowing AI to do what it’s best at and humans to provide the spark of intuition or the “gut checks” that currently cannot come from an algorithm.
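
The HITL pattern is simple enough to sketch. This is a minimal, assumption-laden version; both callables are hypothetical stand-ins I invented for illustration, not any real AI API:

```python
def human_in_the_loop(task, draft_with_ai, review_by_human):
    """The AI drafts; a human gut-checks before anything ships."""
    draft = draft_with_ai(task)        # the machine does the heavy synthesis
    feedback = review_by_human(draft)  # the human provides the gut check
    return draft if feedback is None else feedback  # human has the final say

# Toy stand-ins for the two roles:
ai = lambda task: f"[AI draft for {task!r}]"
approve_as_is = lambda draft: None                           # human signs off
gut_check = lambda draft: draft + " (toned down by a human)"

approved = human_in_the_loop("homepage copy", ai, approve_as_is)
revised = human_in_the_loop("homepage copy", ai, gut_check)
```

The design point is simply that the human sits between the machine’s output and the final result, rather than being replaced by it.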

As an example, let me return to the subject of that previous post, building a website. There is a lot that AI could do to build out a website. What it can’t do very well is anticipate how a human might interact with the website. These “use cases” should come from a human, perhaps one like me.

Let me tell you why I believe I’m qualified for the job. For many years, I studied online user behavior quite obsessively and published several white papers that are still cited in the academic world. I was a researcher for hire, with contracts with all the major online players. I say this not to pump my own ego (okay, maybe a little bit – I am human after all) but to set up the process of how I acquired this particular brand of expertise.

It was accumulated over time, as I learned to analyze online interactions, code eye-tracking sessions and talk to users about their goals and intentions. All the while, I was continually plugging new data into my few available working memory slots and “chunking” it into the building blocks of my expertise, to the point where I could quickly look at a website or search results page and provide a pretty accurate “gut call” prediction of how a user would interact with it. This is – without exception – how humans become experts at anything. Malcolm Gladwell called it the “10,000-hour rule.” For humans to add any value “in the loop,” they must put in the time. There are no short cuts.

Or – at least – there never used to be. There is now, and that brings up a problem.

Humans now do something called “cognitive off-loading.” If something looks like it’s going to be a drudge to do, we now get ChatGPT to do it. This is the slogging mental work that our brains are not particularly well suited to. That’s probably why we hate doing it – the brain is trying to shirk the work by tagging it with a negative emotion (brains are sneaky that way). Why not get AI, which can instantly sort through billions of data points and synthesize them into a one-page summary, to do our dirty work for us?

But by off-loading, we short-circuit the very process required to build that uniquely human expertise. Writer, researcher and educational change advocate Eva Keiffenheim outlines the potential danger for humans who “off-load” to a digital brain: we may lose the sole advantage we can offer in an artificially intelligent world. “If you can’t recall it without a device, you haven’t truly learned it. You’ve rented the information. We get stuck at ‘knowing about’ a topic, never reaching the automaticity of ‘knowing how.’”

For generations, we’ve treasured the concept of “know how.” Perhaps, in all that time, we forgot how much hard mental work was required to gain it. That could be why we are quick to trade it away now that we can.

The Credibility Crisis

We in the western world are getting used to playing fast and loose with the truth. There is so much that is false around us – in our politics, in our media, in our day-to-day conversations – that it’s just too exhausting to hold everything to a burden of truth. Even the skeptical amongst us no longer have the cognitive bandwidth to keep searching for credible proof.

This is by design. Somewhere in the past four decades, politicians and society’s power brokers discovered that by pandering to beliefs rather than trading in facts, you can bend the truth to your will. Those who seek power and influence have struck paydirt in falsehoods.

In a cover story last summer in the Atlantic, journalist Anne Applebaum explains the method in the madness: “This tactic—the so-called fire hose of falsehoods—ultimately produces not outrage but nihilism. Given so many explanations, how can you know what actually happened? What if you just can’t know? If you don’t know what happened, you’re not likely to join a great movement for democracy, or to listen when anyone speaks about positive political change. Instead, you are not going to participate in any politics at all.”

As Applebaum points out, we have become a society of nihilists. We are too tired to look for evidence of meaning. There is simply too much garbage to shovel through to find it. We are pummeled by wave after wave of misinformation, struggling to keep our heads above the rising waters by clinging to the life preserver of our own beliefs. In the process, we run the risk of those beliefs becoming further and further disconnected from reality, whatever that might be. The cogs of our sensemaking machinery have become clogged with crap.

This reverses a consistent societal trend towards the truth that has been underway for the past several centuries. Since the Enlightenment of the 18th century, we have held reason and science as the compass points of our True North. These twin ideals were buttressed by our institutions, including our media outlets, whose goal was to spread knowledge. It is no coincidence that journalism flourished during the Enlightenment. Freedom of the press was constitutionally enshrined to ensure the press had both the right and the obligation to speak the truth.

That was then. This is now. In the U.S., institutions – including media, universities and even museums – are being overtly threatened if they don’t participate in the willful obfuscation of objectivity coming from the White House. NPR and PBS, two of the most reliable news sources according to the Ad Fontes media bias chart, have been defunded by the federal government. Social media feeds are awash with AI slop. In a sea of misinformation, the truth becomes impossible to find. And – for our own sanity – we have had to learn to stop caring about that.

But here’s the thing about the truth. It gives us an unarguable common ground. It is consistent and independent from individual belief and perspective. As longtime senator Daniel Patrick Moynihan famously said, “Everyone is entitled to his own opinion, but not to his own facts.” 

When you trade in falsehoods, the ground is constantly shifting beneath your feet. The story keeps changing to match the current situation and the desired outcome. There are no bearings to navigate by. Everyone has their own compass, and they’re all pointing in different directions.

The path the world is currently going down is troubling in a number of ways, but perhaps the most troubling is that it simply isn’t sustainable. Sooner or later in this sea of deliberate chaos, credibility is going to be required to convince enough people to do something they may not want to do. And if you have consistently traded away your credibility by battling the truth, good luck getting anyone to believe you.

Bots and Agents – The Present and Future of A.I.

This past weekend I got started on a website I told a friend I’d help him build. I’ve been building websites for over 30 years now, but for this one, I decided to use a platform that was new to me. Knowing there would be a significant learning curve, my plan was to use the weekend to learn the basics of the platform. As is now true everywhere, I had just logged into the dashboard when a window popped up asking if I wanted to use their new AI co-pilot to help me plan and build the website.

“What the hell?” I thought, “Let’s take it for a spin!” Even if it only lessened the learning curve a little bit, it could still save me dozens of hours. The promise was intriguing: the AI co-pilot would ask me a few questions and then give me back the basic bones of a fully functional website. Or, at least, that’s what I thought.

I jumped on the chatbot and started typing. With each question, my expectations rose. It started with the basics: what were we selling, what were our product categories, where was our market? Soon, though, it started asking what tone of voice I wanted, what our color scheme was, what search functionality was required, whether there were any competitors’ sites we liked or disliked, and if so, what specifically we liked or disliked about them. As I plugged in my answers, I wondered what exactly I would get back.

The answer, as it turned out, was not much. Reassured that I had provided a strong enough brief for an excellent plan, I clicked the “finalize” button and waited. And waited. And waited. The ellipsis below my last input just kept fading in and out. Finally, I asked, “Are you finished yet?” I was encouraged to just wait a few more minutes as it prepared a plan guaranteed to amaze.

Finally – ta da! – I got the “detailed web plan.” As far as I could tell, it had simply sucked in my input and belched it out again, formatted as a bullet list. I was profoundly underwhelmed.

Going into this, I had little experience with AI. I have used it sparingly for tasks that tend to have a well-defined scope. I have to say, I have been impressed more often than I have been disappointed, but I haven’t really kicked the tires of AI.

Every week, when I sit down to write this post, Microsoft Co-Pilot urges me to let it show what it can do. I have resisted, because when I do ask AI to write something for me, it reads like a machine did it. It’s worded correctly and usually gets the facts right, but there is no humanness in the process. One thing I think I have is an ability to connect the dots – to bring together seemingly unconnected examples or thoughts and hopefully join them together to create a unique perspective. For me, AI is a workhorse that can go out and gather the information in a utilitarian manner, but somewhere in the mix, a human is required to add the spark of intuition or inspiration. For now, anyway.

Meet Agentic AI

With my recent AI debacle still fresh in my mind, I happened across a blog post from Bill Gates. It seems I thought I was talking to an AI “Agent” when, in fact, I was chatting with a “Bot.” It’s agentic AI that will probably deliver the usefulness I’ve been looking for for the last decade and a half.

As it turns out, Gates was at least a decade and a half ahead of me in that search. He first talked about intelligent agents in his 1995 book The Road Ahead. But it’s only now that they’ve become possible, thanks to advances in AI. In his post, Gates describes the difference between Bots and Agents: “Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.”

This is exactly the “app-ssistant” I first described in 2010 and have returned to a few times since, even down to using the same example Bill Gates did – planning a trip. This is what I was expecting when I took the web-design co-pilot for a test flight. I was hoping that – even if it couldn’t take me all the way from A to Z – it could at least get me to M. As it turned out, it couldn’t even get past A. I ended up exactly where I started.

But the day will come. And, when it does, I have to wonder if there will still be room on the flight for us human passengers.

Just Behave Archive: Q&A With Marissa Mayer, Google VP, Search Products & User Experience

This blog is the most complete collection of my various posts across the web – with one exception. For four years, from 2007 to 2011, I wrote a column for Search Engine Land called “Just Behave” (Danny Sullivan’s choice of title, not mine – but it grew on me). At the time, I didn’t cross-post because Danny wanted the posts to be exclusive. Now, with almost two decades past, I think it’s safe to bring these lost posts back home to the nest, here at “Out of My Gord.” You might find them interesting from a historical perspective, and also because the column gave me the chance to interview some of the brightest minds in search at that time. So, here’s my first, with Google’s then VP of Search Products and User Experience, Marissa Mayer. It ran in January 2007:

Marissa Mayer has been the driving force behind Google’s Spartan look and feel from the very earliest days. In this wide-ranging interview, I talked with Marissa about everything from interface design to user behavior to the biggest challenge still to be solved with search as we currently know it.

I had asked for the interview because of some notable findings in our most recent eye tracking study. I won’t go into the findings in any great depth here, because Chris Sherman will be doing a deep dive soon. But for the purpose of setting the background for Marissa’s interview, here are some very quick highlights:


MSN and Yahoo Users had a better User Experience on Google

In the original study, the vast majority of participants were Google users, and their interactions were restricted to Google. With the second study, we recruited participants who indicated their engine of preference was Yahoo! or MSN (now Live Search), so the majority of their interactions would be with those two engines. We did take one task at random, however, and asked them to use Google to complete it. By almost every metric we looked at – including time to complete the task (choose a link), the success of the link chosen and the percentage of the page scanned before choosing a link – these users had a more successful experience on Google than on their engine of choice.

Google Seemed to Have a Higher Degree of Perceived Relevancy

In looking at the results, we didn’t believe it was the actual quality of the results that led to a more successful user experience as much as it was how those results were presented to the user. Something about Google’s presentation made it easier to determine which results were relevant. We referred to it in the study as information scent, using the term common in information foraging theory.

Google Has an Almost Obsessive Dedication to Relevancy at the Top of the Results Page

The top of the results, especially the top left corner, is the most heavily scanned part of the results page. Google seemed to be the most dedicated of the three engines in ensuring that the results falling in this real estate are highly relevant to the query. For example, Google served up top sponsored ads in far fewer sessions in the study than did either Yahoo! or MSN.

Google Offers the “Cleanest” Search Experience

Google is famous for its Spartan home page. It continues this minimalist approach to search with the cleanest results page. When searching, we all have a concept in mind and that concept can be influenced by what else we see on the page. Because a number of searches on Yahoo! and MSN were launched from their portal page, we wondered how that impacted the search experience.

Google Had Less Engagement than Yahoo with their Vertical Results

The one area where Google appeared to fall behind in these head-to-head tests was the relevance of the OneBox, or their vertical results. Yahoo! in particular seemed to score more consistently with users with their vertical offering, Yahoo! Shortcuts.

It was in these areas in particular that I wanted to get the thinking of Marissa and her team at Google. Whatever they’re doing, it seems to be working. In fact, I have said in the past that Google has set the de facto standard for what we expect from a search engine, at least for now.

Here’s the interview:

Gord: What, at the highest level, is Google’s goal for the user?

Marissa: Our goal is to make sure that people can find what they’re looking for and get off the page as quickly as possible.

Gord: If we look at this idea of perceived versus real relevancy, some things seemed to make a big difference in how relevant people perceived the results on a search engine to be: things like how much white space there was around individual listings, the separation of organic results from the right rail, the query being bolded in the title and the description, and very subtle nuances like a hairline around the sponsored ads as opposed to a screened box. What we found when we delved into it was that there seemed to be a tremendous attention to that detail at Google. It became clear that this stuff had been fairly extensively tested.

Marissa: I think all of your observations are correct. I can walk you through any one of the single examples you just named and I can talk you through the background and exactly what our philosophy was when we designed it and the numbers we saw in our tests as we had tested them, but you’re right in that it’s not an accident. For example, putting a line along the side of the ad as opposed to boxing it allows it to integrate more into the page and lets it fall more into what people read.

One thing I think about a lot is people who are new to the internet. A lot of times they subconsciously map the internet to physical idioms. For example, when you look at how you parse a webpage, chances are there are some differences if there are links in the structure and so forth, but a lot of times it looks just like a page in a book or a page in a magazine, and when you put a box around something, it looks like a sidebar. The way people handle reading a page that has a sidebar is that they read the whole main page and then, at the end, if it’s not too interesting, they stop and read the sidebar on that page.

For us, given that we think our ads in some cases are as good an answer as our search results and we want them to be integral to the user experience, we don’t want that kind of segmentation and pausing. We tried not to design it so it looked like a sidebar, even though we have two distinct columns. You know, there are a lot of philosophies like that that go into the results page, and of course we tested both of those formats to see if they matched our hypothesis.

Gord: That brings up something else that was really interesting. If we separate the top sponsored results from the right rail, the majority of the interaction happens in that upper left real estate on the page. One thing that became very apparent was that Google seemed to be the most aware of relevancy at the top of the page, that Golden Triangle real estate. In all our scenarios, you showed top sponsored results the least often, and generally you showed fewer of them. We saw a natural tendency for users to break off the top 3 or 4 listings on a page, scan them as a set, and then make their choice from those. On Google, those top 3 or 4 almost always include 1 or 2 organic results, sometimes all organic results.

Marissa: That’s absolutely the case. Yes, we’re always looking at how we can do better targeting with ads. But we believe part of the targeting for those ads is “how well do those ads match your query?” And the other part is how well this format and that prominence convey to you how relevant it is. That’s baked into the relevance.

Our ad team has worked very, very hard. One of the most celebrated teams at Google is our Smart Ads team. In fact, you may have heard of the Google Founder’s Awards, where small teams of people get grants of stock worth up to $10,000,000, split across a small number of individuals. One of the very first teams at Google to receive that award was the Smart Ads team. And they were looking, interestingly enough, at how you target things. But they were also looking at what the probability is that someone will click on a result. And shouldn’t that probability impact our idea of relevance, and also the way we choose to display it?

So we do tend to be very selective and keep the threshold on what appears at the top of the page very high. We only show things at the top when we’re very, very confident that the click-through rate on that ad will be very high. And the same thing is true for our OneBox results that occasionally appear above the top (organic) results. When I started doing user interface work, Larry and Sergey said they were thinking of making my salary proportional to the number of pixels above the first result, on average. We’ve mandated that we always want to have at least one result above the fold. We don’t let people put too much stuff up there. Think about the amount of vertical space at the top of the page as being an absolute premium, and design it and program it as if your salary depended on it.

Gord: There are a couple of other points that I want to touch on. When we looked at how the screen real estate on the search results page divided up, based on a standard resolution, there seemed to be a mathematical precision to the Google proportions that wasn’t apparent on MSN or on Yahoo. The ratio seemed pretty set: we consistently came up with 33% dedicated to top organic results, even on a fully loaded results page, so obviously that’s not by accident. That compared to less than 14% on a fully loaded Yahoo page.

Marissa: That’s interesting, because we’ve never reviewed it on the percentage basis you’re mentioning. We’ve had a lot of controversy amongst the team: should it be linear inches along the left hand margin, or should it actually be square pixelage computed on a percentage basis? Because of the way the search is laid out, linear inches or vertical space may be more accurate. As I said, the metric that I try to hold the team to is always getting at least one organic result above the fold at 800 by 600, with the browser held at that size.

Gord: The standard resolution we set for the study was 1024 by 768.

Marissa: Yes, and we are still seeing as many as 30% plus of our users at 800 by 600. My view is, we can view 1024 by 768 as ideal. The design has to look good at that resolution. It has to at least work and appear professional at 800 by 600. All of us with our laptops are working at 1024 by 768, so we try to make sure the designs look really good there. Obviously some of our engineers have bigger monitors and higher resolutions than that, but we are always very conscious of 800 by 600. It’s pretty funny: most of our designers, myself included, have a piece of wallpaper that has rectangles in the background, where if you line up the browser in the upper left hand corner and align the edge of the browser with a box, you can simulate all the different sizes, so we can make sure the design works in the smaller browsers.

Gord: One of the members of our staff has a background in physics and design, and he was the one who noticed that the Golden Ratio lined up very well with how the Google results page is designed. The proportions of the page lined up pretty closely with how that ratio is proportioned.

Marissa: I’m a huge fan of the Golden Ratio. We talk about it a lot in our design reviews, both implicitly and explicitly, even when it comes down to icons. We prefer that icons not be square; we prefer that they be more of a 1.7:1 ratio.
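(For readers keeping score at home, the Golden Ratio works out to

```latex
\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618
```

so the roughly 1.7:1 icon proportion Marissa mentions is a slightly wider approximation of the true ratio.)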

Gord: I wanted to talk about Google OneBox for a minute. Of all the elements on the Google page, frankly, that was the one that didn’t seem to work that well. It almost seemed to be in flux while we were doing the data collection, and relevancy seemed to be a little off on a number of the searches. Is that something that is being tested?

Marissa: Can you give me an example?

Gord: The search was for digital cameras, and we got news results back in OneBox. Nikon had a recall on a bunch of digital cameras at the time. As far as disambiguating the user intent from the query goes, it would seem that news results for the query “digital cameras” are probably not the best match.

Marissa: It’s true. The answer is that we do a fairly good job, I believe, in targeting our OneBox results. We hold them to a very high click-through rate expectation, and if they don’t meet that click-through rate, the OneBox gets turned off on that particular query. We have an automated system that looks at click-through rates per OneBox presentation per query. So it might be that news is performing really well on Bush today but not performing very well on another term, where it ultimately gets turned off due to lack of click-throughs. We are automating it in a way that’s scalable and does a pretty good job enforcing relevance. We do have a few niggles in the system where we have an ongoing debate, and one of them is around news versus product search.

One school of thought is what you’re saying, which is that it should be the case that if I’m typing digital cameras, I’m much more likely to want to have product results returned. But here’s another example. We are very sensitive to the fact that if you type in children’s flannel pajamas and there’s a recall due to a lack of flame retardancy in flannel pajamas, as a parent you’re going to want to know that. And so it’s a very hard decision to make.

You might say, well, the difference there is that it’s a specific model. Is it a Nikon D970 or is it digital cameras, which is just a category? So it’s very hard on the query end to disambiguate. You might say if there’s a model number then it’s very specific and if only the model number matches in the news return the news and if not, return the products. But it’s more nuanced than that. With things like Gap flannel pajamas for children, it’s very hard to programmatically tell if that’s a category or a specific product. So we have a couple of sticking points.
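The gating Marissa describes, tracking click-through per OneBox type per query and disabling the OneBox where it underperforms, can be sketched in a few lines. To be clear, this is purely an illustrative sketch: the threshold value, function name and data structures below are my own assumptions, not Google’s actual implementation.

```python
# Illustrative sketch of per-query CTR gating for a OneBox.
# The threshold and all names here are hypothetical.

CTR_THRESHOLD = 0.05  # assumed minimum click-through rate to stay enabled

def update_onebox_state(stats, query, onebox, impressions, clicks):
    """Accumulate impressions/clicks for a (query, onebox) pair and
    return True if that OneBox should stay enabled for that query."""
    key = (query, onebox)
    shown, clicked = stats.get(key, (0, 0))
    shown, clicked = shown + impressions, clicked + clicks
    stats[key] = (shown, clicked)
    ctr = clicked / shown if shown else 0.0
    return ctr >= CTR_THRESHOLD  # False means: turn the OneBox off

stats = {}
# News OneBox performs well on "bush" and stays on...
print(update_onebox_state(stats, "bush", "news", 1000, 120))            # True
# ...but poorly on "digital cameras", so it gets turned off.
print(update_onebox_state(stats, "digital cameras", "news", 1000, 10))  # False
```

The point of the sketch is simply that the decision is made per query, not per OneBox type, which is why news could survive on one query while being suppressed on another.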

Gord: So that would be one of the reasons why, for a lot of searches, we weren’t seeing product results coming back, and in a lot of local cases, we weren’t seeing local results coming back? That would be the click-through monitoring mechanism, where it didn’t meet the threshold and got turned off?

Marissa: That’s right.

Gord: Here’s another area we explored in the study. Obviously a lot of searches on Yahoo or MSN Live Search get launched from a portal, and the user experience when you launch from the Google home page is different. What does it mean for interaction with search results when you’re launching the search from what’s basically a neutral palette, versus something launched from a portal that colors the intent of the user as it passes them through to the search results?

Marissa: We want the user not to be distracted, to just type in what they want and not be very influenced by what they see on the page, which is one reason why the minimalist home page works well. It’s approachable, it’s simple, it’s straightforward, and it gives the user a sense of empowerment. The engine is going to do what they want it to do, as opposed to the engine telling them what they should be doing, which is what a portal does. We think that to really aid and facilitate research and learning, the clean slate is best.

I think there are a couple of interesting problems in the portal versus simple home page piece. You might say it’s easier to disambiguate from a portal what a person might be intending: they look at the home page, there’s a big ad running for Castaway, and if they search Castaway, they mean the movie they just saw the ad for. That might be the case, but the other thing, which I think is more confusing than anything, is the fact that most people who launch a search from the portal home page are actually ignoring and tuning out most of the content on the page. If anything, you’re more inclined to mistake intent, to think, “Oh, of course when they typed this they meant that,” when they actually didn’t, because they didn’t even see this other thing. One thing that we’re consistently noticing, which your Golden Triangle finding validated, is that users have a laser focus on their task.

The Google home page is very simple, and when we put a link underneath the Google search box on the home page to advertise one of our products, we say, “Hey, try Google Video, it’s new,” or “Download the new Picasa.” Basically it’s the only other thing on the page, and while it does get a fair amount of click-through, it’s nothing compared to the search, because most users don’t even see it. Most users on our search results page don’t see the logo at the top of the page, they don’t see OneBox, they don’t even see spelling corrections, even though they’re there in bright red letters. There’s a single-mindedness of: I’m going to put in my search, not let anything on the home page get in the way, and I’m going to go for the first blue left-aligned link on the results page, and everything above it basically gets ignored. We’ve seen that trend again and again. My guess is that, if anything, that same thing is happening at the portals, but because there is so much context around it on the home page, their user experience and search relevance teams may be led astray, thinking that context has more relevance than it has.

Gord: Eye tracking allowed us to pull this apart a little bit. We gave people two different scenarios, one aimed more towards getting them to look at the organic results, and one that would have them more likely to look at sponsored results first and then look down to the organic results. We saw that the physical interaction with the page didn’t vary as much as we thought, but the cognitive interaction with the page, in terms of what they remembered seeing and what they clicked on, was dramatically different. So it’s almost like they took the same path through, but the engagement factor flicked on at different points.

Marissa: My guess is that people who come to the portal are much more likely to look at ads. I like to think of them as users with ADHD. They’re on the home page and they enjoy a home page that pulls their attention in a lot of different directions. They’re willing to process a lot of information on the way to typing in their search. It may not even be a per-user thing, it may be an of-the-moment thing, but a person who’s in the mindset of enjoying that on the home page is also going to be much more likely to look around on the search results page. Their attention is going to be much more likely to be pulled in the direction of an ad, even if it’s not particularly relevant: banners, brands, things like that.

Gord: I want to wrap up by asking you: what, in your mind, is the biggest challenge still to be solved with the search interface as we currently know it?

Marissa: I think there are a ton of challenges, because in my view, search is in its infancy and we’re just getting started. I think the most pressing, immediate need as far as the search interface goes is to break the paradigm of the expectation of “You give us a keyword, and we give you 10 URLs.” I think we need to get into richer, more diverse ways for users to express their queries, be it through natural language, or voice, or even contextually. I’m always intrigued by what the Google desktop sidebar does, or what Gmail does, where by looking at your context it actually produces relevant webpages, ads and things like that. So essentially, a context-based search.

So, challenge one is how the searches get expressed; I think we really need to branch out there. But I also think we need to look at results pages that aren’t just 10 standard URLs laid out in a very linear format. Sometimes the best answer is a video, sometimes the best answer will be a photo, and sometimes the best answer will be a set of extracted facts. If I type in general demographic statistics about China, it’d be great if I got an answer as a result: a set of facts that had been parsed off of, and even aggregated and cross-validated across, a result set.

Gord: And sometimes the best result would be an ad. Out of interest, when we tracked through to the end of the scenario to see which links provided the greatest degree of success, the top sponsored results actually delivered the highest success rates of all the links clicked on in the study.

Marissa: Really? Even more so than the natural search results?

Gord: Yes, even the organic search results. Now, mind you, the scenarios given were commercial in nature.

Marissa: Right… that makes much more sense. I do think that for the 40 or so percent of page views where we serve ads, those ads are incredibly relevant and usually do beat the search results, but for the other 60% of the time, the search results are really the only reasonable answer.

Gord: Thanks, Marissa.

In my next column, I talk with Larry Cornett, Senior Director of Search & Social Media in Yahoo’s User Experience & Design group about their user experience. Look for it next Friday, February 2.

Bread and Circuses: A Return to the Roman Empire?

Reality sucks. Seriously. I don’t know about you, but increasingly, I’m avoiding the news because I’m having a lot of trouble processing what’s happening in the world. So when I look to escape, I often turn to entertainment. And I don’t have to turn very far. Never has entertainment been more accessible to us. We carry entertainment in our pocket. A 24-hour smorgasbord of entertainment media is never more than a click away. That should give us pause, because there is a very blurred line between simply seeking entertainment to unwind and becoming addicted to it.

Some years ago I did an extensive series of posts on the Psychology of Entertainment. Recently, a podcast producer from Seattle ran across the series when he was producing a podcast on the same topic and reached out to me for an interview. We talked at length about the ubiquitous nature of entertainment and the role it plays in our society. In the interview, I said, “Entertainment is now the window we see ourselves through. It’s how we define ourselves.”

That got me to thinking. If we define ourselves through entertainment, what does that do to our view of the world? In my own research for this column, I ran across another post on how we can become addicted to entertainment. We do so, the post argues, because reality stresses us out: “Addictive behavior, especially when not to a substance, is usually triggered by emotional stress. We get lonely, angry, frustrated, weary. We feel ‘weighed down’, helpless, and weak.”

Check. That’s me. All I want to do is escape reality. The post goes on to say, “Escapism only becomes a problem when we begin to replace reality with whatever we’re escaping to.”

I believe we’re at that point. We are cutting ties to reality and replacing them with a manufactured reality coming from the entertainment industry. In 1985 – forty years ago – author and educator Neil Postman warned us in his book Amusing Ourselves to Death that we were heading in this direction. The calendar had just ticked past 1984, and the world had collectively sighed in relief that the dystopian vision from George Orwell’s novel of that name hadn’t materialized. Postman warned that it wasn’t Orwell’s future we should be worried about. It was Aldous Huxley’s forecast in Brave New World that seemed to be materializing:

“As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny ‘failed to take into account man’s almost infinite appetite for distractions.’ … Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us.”

Postman was worried then – 40 years ago – that the news was more entertainment than information. Today, we long for even the kind of journalism that Postman was already warning us about. He would be aghast to see what passes for news now. 

While things unknown to Postman (social media, fake news, even the internet) throw a new wrinkle into our downslide into an entertainment-induced coma, it’s not exactly new. This has happened at least once before in history, but you have to go back almost 2,000 years to find an example. Writing as Rome’s civic spirit was slipping into decline, the Roman poet Juvenal used a phrase that summed it up – panem et circenses – “bread and circuses”:

“Already long ago, from when we sold our vote to no man, the People have abdicated our duties; for the People who once upon a time handed out military command, high civil office, legions — everything, now restrains itself and anxiously hopes for just two things: bread and circuses.”

Juvenal was referring to the strategy of the Roman emperors of providing free wheat, circus games and other entertainment to gain political power. In an academic article from 2000, historian Paul Erdkamp described the ploy as a “briberous and corrupting attempt of the Roman emperors to cover up the fact that they were selfish and incompetent tyrants.”

Perhaps history is repeating itself.

One thing we touched on in the podcast was a noticeable change in the entertainment industry itself. Scarlett Johansson noticed the 2025 Academy Awards ceremony was a much more muted affair than in years past. There was hardly any political messaging, or sermons about how entertainment provided a beacon of hope and justice. In an interview with Vanity Fair, Johansson mused that perhaps it’s because almost all the major studios are now owned by Big Tech billionaires: “These are people that are funding studios. It’s all these big tech guys that are funding our industry, and funding the Oscars, and so there you go. I guess we’re being muzzled in all these different ways, because the truth is that these big tech companies are completely enmeshed in all aspects of our lives.”

If we have willingly swapped entertainment for reality, and that entertainment is being produced by corporations who profit from addicting as many eyeballs as possible, prospects for the future do not look good.

We should be taking a lesson from what happened to Imperial Rome.

The Question We Need to Ask about AI

This past weekend I listened to a radio call-in show about AI. The question posed was this: are those using AI regularly achievers or cheaters? A good percentage of the conversation focused on AI in education, especially for those in post-secondary studies. Educators worried about being able to detect the use of AI to help complete coursework, such as the writing of papers. Many callers – all of whom were probably well north of 50 years old – bemoaned the fact that students today don’t understand the fundamental concepts they’re being presented with because they’re using AI to complete assignments. A computer science teacher explained why he teaches obsolete coding to his students: it helps them understand why they’re writing code at all. What is it they want the code to do? He can tell when his students are using AI because they submit code that is well beyond their abilities.

That, in a nutshell, sums up the problem with our current thinking about AI. Why are we worried about trying to detect the use of ChatGPT by a student who’s learning how to write computer code? Shouldn’t we be instead asking why we need humans to learn coding at all, when AI is better at it? Maybe it’s a toss-up right now, but it’s guaranteed not to stay that way for long. This isn’t about students using AI to “cheat.” This is about AI making humans obsolete.

As I was writing this, I happened across an essay by computer scientist Louis Rosenberg. He is worried that those in his circle, like the callers to the show I was listening to, “have never really considered what life will be like the day after an artificial general intelligence (AGI) is widely available that exceeds our own cognitive abilities.” Like I said, what we use AI for now is a poor indicator of what AI will be doing in the future. To use an analogy I have used before, it’s like using a rocket to power your lawnmower.

But what will life be like when, in a somewhat chilling example put forward by Rosenberg, “I am standing alone in an elevator — just me and my phone — and the smartest one speeding between floors is the phone?”

It’s hard to wrap your mind around the possibilities. One of the callers to the show was a middle-aged man who was visually impaired. He talked about the difference it made when he got a pair of Meta glasses last Christmas. Suddenly, his world opened up. He could make sure the pants and shirt he picked out to wear were colors that matched. He could see whether his recycling had been picked up before he made the long walk down the driveway to retrieve the bin. He could cook for himself because the glasses could tell him what was in the boxes he took off his kitchen shelf. AI gave him back his independence.

I personally believe we’re on the cusp of multiple AI revolutions. Healthcare will take a great leap forward when we lessen our requirement that expert advice come from a human. In Canada, general practitioners are in desperately short supply. When you combine AI with the leaps being made by incorporating biomonitoring into wearable technology, it’s hard to imagine what wouldn’t be possible in terms of living longer, healthier lives. I hope the same is true for dealing with climate change, agricultural production and the other existential problems we’re currently wrestling with.

But let’s back up to Rosenberg’s original question: what will life be like the day after AI exceeds our own abilities? The answer, I think, depends on who is in control of AI on the day before. The danger here is more than just humans becoming irrelevant. The danger is which humans are determining the direction of AI before AI takes over the steering wheel and determines its own future.

For the past seven decades, the most pertinent question about our continued existence as a species has been this one: “Who is in charge of our combined nuclear arsenals?” But going forward, a more relevant question might be, “Who is setting the direction for AI?” Who is setting the rules, coming up with safeguards and determining what data the models are training on? Who determines what tasks AI takes on? Here’s just one example: when does AI get to decide whether nuclear warheads are launched?

As I said, it’s hard to predict where AI will go. But I do know this. The general direction is already being determined. And we should all be asking, “By whom?”

Do We Have the Emotional Bandwidth to Stay Curious?

Curiosity is good for the brain. It’s like exercise for our minds. It stretches the prefrontal cortex and whips the higher parts of our brains into gear. Curiosity also nudges our memory making muscles into action and builds our brain’s capacity to handle uncertain situations.

But it’s hard work – mentally speaking. It takes effort to be curious, especially in situations where curiosity could figuratively “kill the cat.” The more dangerous our environment, the less curious we become.

A while back I talked about why the world no longer seems to make sense. Part of this is tied to our appetite for curiosity. Actively trying to make sense of the world puts us “out there”, leaving the safe space of our established beliefs behind. It is literally the definition of an “open mind” – a mind that has left itself open to being changed. And that’s a very uncomfortable place to be when things seem to be falling down around our ears.

Some of us are naturally more curious than others. Curious people typically achieve higher levels of education (learning and curiosity are two sides of the same coin). They are less likely to accept things at face value. They apply critical thinking to situations as a matter of course. Their brains are wired to be rewarded with a bigger dopamine hit when they learn something new.

Others rely more on what they believe to be true. They actively filter out information that may challenge those beliefs. They double down on what is known and defend themselves from the unknown. For them, curiosity is not an invitation, it’s a threat.

Part of this is a differing tolerance for something which neuroscientists call “prediction error” – the difference between what we think will happen and what actually does happen. Non-curious people perceive prediction gaps as threats and respond accordingly, looking for something or someone to blame. They believe that it can’t be a mistaken belief that is at fault; it must be something else that caused the error. Curious people look at prediction errors as continually running scientific experiments, giving them a chance to discover the errors in their current mental models and update them based on new information.

Our appetite for curiosity has a huge impact on where we turn to be informed. The incurious will turn to information sources that won’t challenge their beliefs. These are people who get their news from either end of the political bias spectrum, either consistently liberal or consistently conservative. Given that, they can’t really be called information sources so much as opinion platforms. Curious people are more willing to be introduced to non-conforming information. In terms of media bias, you’ll find them consuming news from the middle of the pack.

Given the current state of the world, more curiosity is needed but is becoming harder to find. When humans (or any animal, really) are threatened, we become less curious. This is a feature, not a bug. A curious brain takes a lot longer to make a decision than a non-curious one. It is the difference between thinking “fast” and “slow” – in the words of psychologist and Nobel laureate Daniel Kahneman. But this feature evolved when threats to humans were usually immediate and potentially fatal. A slow brain is not of any benefit if you’re at risk of being torn apart by a pack of jackals. But today, our jackal encounters are usually of the metaphorical type, not the literal one. And that’s a threat of a very different kind.

Whatever the threat, our brain throttles back our appetite for curiosity. Even the habitually curious develop defense mechanisms in an environment of consistently bad news. We seek solace in the trivial and avoid the consequential. We start saving cognitive bandwidth from whatever impending doom we may be facing. We seek media that affirms our beliefs rather than challenges them.

This is unfortunate, because the threats we face today could use a little more curiosity.