Is Google Slipping, Or Is It Just Our Imagination?

Recently, I’ve noticed a few articles speculating about whether Google might be slipping:

Last month, the American Customer Satisfaction Index told us that our confidence in search is on the decline. Google’s score dropped 2% to 82. The culprit was the amount of advertising found on the search results page. To be fair, both Google and search in general have scored lower before. Back in 2015, Google scored a 77, its lowest score ever.

This erosion of customer satisfaction may be leading to a drop in advertising ROI. According to a recent report from Analytic Partners, the return on investment from paid search dropped 27% from 2010 to 2016. Search wasn’t alone. All digital ROI seems to be in decline. Analytic Partners’ VP of Marketing, Joe LaSala, predicts that ROI from digital will continue to decline until it converges with ROI from traditional media.

In April of this year, Forbes ran an article asking the question: “Is Google’s Search Quality Starting to Decline?” Contributors to this decline, according to the article, included the introduction of rich snippets and featured news, the inclusion of popularity as a ranking factor, and ongoing black hat SEO manipulation.

But the biggest factor in the drop of Google’s perceived quality was actually in the perception itself. As the Forbes article’s author, Jayson DeMers, stated:

It’s important to realize just how sophisticated Google is, and how far it’s come from its early stages, as well as the impossibility of having a “perfect” search platform. Humans are flawed creatures, and our actions are what are dictating the shape of search.

Google is almost 20 years old. The domain Google.com was registered on September 15, 1997. Given that 20 years is an eternity in internet years, it’s actually amazing that it’s stood up as well as it has for the past two decades. Whether Google’s naysayers care to admit it or not, that’s due to Google’s almost religious devotion to the quality of its search results. That devotion extends to advertising. The balance between user experience and monetization has always been one that Google has paid a lot of attention to.

But it’s not the presence of ads that has led to this perceived decline of quality. It’s a change in our expectations of what a search experience should be. I would argue that for any given search, using objective measures of result relevance, the results Google shows today are far more relevant than the results it showed in 2008, the year it got its highest customer satisfaction score (86%). Since then, Google has made great strides in deciphering user intent and providing a results page that’s a good match for that intent. Sometimes it will get it wrong, but when it gets it right, it puts together a page that’s a huge improvement over the vanilla, one-size-fits-all results page of 2008.

The biggest thing that’s changed in the past 10 years is the context from which we’re launching those searches. In 2008, it was almost always the desktop. But today, chances are we’re searching from a mobile device – or our car – or our home through Amazon Echo. This has changed our expectations of search. We are task-focused, rather than “browsing” for information. This creates an entirely different mental framework within which we receive the results. We apply a new yardstick of acceptable relevance. Here, we’re not looking for a list of 20 possible answers – we’re looking for one answer. And it had better be the right one. Context-based search must be hyper-relevant.

Compounding this trend is the increasing number of circumstances where search is going “under the hood” – something I’ve been forecasting for a long time now. For example, if you use Siri to launch a search through your CarPlay connected device when you’re driving, the results are actually coming from Bing but they’re stripped of the context of the Bing search results page. Here, the presentation of search results is just one step in a multi-step task flow. It’s important that the result that is on top is the one you’re probably looking for.

Unfortunately for Google – and the other search providers – this expectation stays in place even when the context shifts. When we launch a search from our desktop, we are increasingly intolerant of results that are even a little off base from our intent. Ads become the most easily identified culprit. A results set that would have seemed almost frighteningly prescient even a few years ago now seems subpar. Google has come a long way in the past 20 years, but it’s still losing ground to our expectations.

Is Busy the New Alpha?

Imagine you’ve just been introduced into a new social situation. Your brain immediately starts creating a social hierarchy. That’s what we do. We try to identify the power players. The process by which we do this is interesting. The first thing we do is look for obvious cues. In a new job, that would be titles and positions. Then, the process becomes very Bayesian – we form a base understanding of the hierarchy almost immediately and then constantly update it as we gain more knowledge. We watch power struggles and update our hierarchy based on the winners and losers. We start assigning values to the people in this particular social network and, more importantly, start assessing our place in the network and our odds of ascending in the hierarchy.

All of that probably makes sense to you as you read it. There’s nothing really earth-shaking or counterintuitive. But what is interesting is that the cues we use to assign standings are context dependent. They can also change over time. What’s more, they can vary from person to person or generation to generation.

In other words, like most things, our understanding of social hierarchy is in the midst of disruption.

An understanding of hierarchy appears to be hardwired into us. A recent study found that humans can determine social standing and the accumulation of power pretty much as soon as they can walk. Toddlers as young as 17 months could identify the alphas in a group. One of the authors of the study, University of Washington psychology professor Jessica Sommerville, said that even the very young can “see that someone who is more dominant gets more stuff.” That certainly squares with our understanding of how the world works. “More stuff” has been how we’ve determined social status for hundreds of years. In sociology, it’s called conspicuous consumption, a term coined by sociologist Thorstein Veblen. And it’s a signaling strategy that evolved in humans over our recorded history. The more stuff we had, and the less we had to do to get that stuff, the more status we had. Just over a hundred years ago, Veblen called this the Leisure Class.

But today that appears to be changing. A recent study seems to indicate that we now associate busyness with status. Here, it’s time – not stuff – that is the scarce commodity. Social status signaling is more apt to involve complaining about how we never go on a vacation than about our “summer on the continent”.

At least, this seems to be true in the U.S. The researchers also ran their study in Italy and there the situation was reversed. Italians still love their lives of leisure. The U.S. is the only developed country in the world without a single legally required paid vacation day or holiday. In Italy, every employee is entitled to at least 32 paid days off per year.

In our world of marketing – which is acutely aware of social signaling – this could create some interesting shifts in messaging. I think we’re already seeing this. Campaigns aimed at busy people seem to equate scarcity of time with success. The one thing missing in all this social scrambling – whether it be conspicuous consumption or working yourself to death – might be happiness. Last year a study out of the University of British Columbia found a strong link between happiness and valuing time over money.

Maybe those Italians are on to something.

Curmudgeon, Chicken Little or Cognoscenti?

Apparently I’m old and out of step. Curmudgeonly, even. And this is from people of my own generation. My previous column about the potential shallowness encouraged by social media drew a few comments that indicated I was just being a grumpy old man. One was from an old industry friend – Brett Tabke:

“The rest of the article is like out of the 70’s in that it is devoid of the reality that is the uber-me generation. The selfie is only a reflection of their inward focus.”

The other was from Monica Emrich, whom I’ve never had the pleasure of meeting:

“‘Social Media Is Barely Skin-Deep.’ ho hum. History shows: when new medium hits, civilization as we know it is over.”

These comments seem to be telling me, “Relax. You just don’t understand because you’re too old. Everything will be great.” And, if that’s true, I’d be okay with that. I’m more than willing to be proven a doddering old fool if it means technology is ushering us into a new era of human greatness.

But what if this time is different? What if Monica’s facetious comment actually nailed it? Maybe civilization as we know it will be over. The important part of this is “as we know it.” Every technological disruption unleashes a wave of creative destruction that pushes civilization in a new direction. We seem to blindly assume it will always go in the right direction. And it is true that technology has generally elevated the human race. But not uniformly – and not consistently. What if this shift is different? What if we become less than what we were? It can happen. Brexit – Xenophobia – Trump – Populism, all these things are surfing on the tides of new technology.

Here’s the problem. There are some aspects of technology that we’ve never had to deal with before – at least, not at this scale. One of these aspects (other aspects will no doubt be the topic of a future Media Insider) is that technology is now immersive and ubiquitous. It creates an alternate reality for us, and it has done it in a few short decades. Why is this dangerous? It’s dangerous because evolution has not equipped us to deal with this new reality. In the past, when there has been a shift in our physical reality, it has taken place over several generations. Natural selection had the time to reshape the human genome to survive and eventually thrive in this new reality. Along the way, we acquired checks and balances that would allow us to deal with the potentially negative impacts of the environment.

But our new reality is different. It’s happened in the space of a single generation. There is no way we could have acquired natural defenses against it. We are operating in an environment for which we are untested. The consequences are yet to be discovered.

Now, your response might be to say, “Yes, evolution doesn’t move this quickly, but our brains can. They are elastic and malleable.” This is true, but there’s a big “but” hidden in this approach. Our brains rewire to better match their environment. This is one of the things that humans excel at. But this rewiring happens on top of a primitive platform with some built-in limitations. The assumption is that a better match with our environment provides a better chance for survival of the species.

But what if technology is throwing us a curve ball in this case? No matter what the environment we have adapted to, there has been one constant: The history of humans depends on our success in living together. We have evolved to be social animals but that evolution is predicated on the assumption that our socializing would take place face-to-face. Technology is artificially decoupling our social interactions from the very definition of society that we have evolved to be able to handle. A recent Wharton interview with Eden Collinsworth sounds the same alarm bells.

“The frontal lobes, which are the part of the brain that puts things in perspective and allows you to be empathetic, are constantly evolving. But it is less likely to evolve and develop those skills if you are in front of a screen. In other words, those skills come into play when you have a face-to-face interaction with someone. You can observe facial gestures. You can hear the intonation of a voice. You’re more likely to behave moderately in that exchange, unless it’s a just a knock-down, drag-out fight.”

Collinsworth’s premise – which is covered in her new book, Behaving Badly – is that this artificial reality is changing our concepts of morality and ethics. She reminds us the two are interlinked, but they are not the same thing. Morality is our own personal code of conduct. Ethics are a shared code that society depends on to instill a general sense of fairness. Collinsworth believes both are largely learned from the context of our culture. And she worries that a culture that is decoupled from the physical reality we have evolved to operate in may have dire consequences.

The fact is that if our morality and ethics are intended to keep us socially more cohesive, this works best in a face-to-face context. In an extreme example of this, Lt. Col. Dave Grossman, a former paratrooper and professor of psychology at West Point, showed how our resistance to killing another human in combat is inversely related to our physical distance from them. The closer we are to them, the more resistant we are to the idea of killing them. This makes sense in an evolutionary environment where all combat was hand-to-hand. But today, the killer could be in a drone flight control center thousands of miles from his or her intended target.

This evolved constraint on unethical behavior – the social check and balance of being physically close to the people we’re engaging with – is important. And while the two examples I’ve cited – the self-absorbed behavior on social networks and the moral landscape of a drone strike operator – may seem magnitudes apart in terms of culpability, the underlying neural machinery is related. What we believe is right and wrong is determined by a moral compass set to the bearings of our environment. The fundamental workings of that compass assumed we would be face-to-face with the people we have to deal with. But thanks to technology, that’s no longer the case.

Maybe Brett and Monica are right. Maybe I’m just being alarmist. But if not, we’d better start paying more attention. Because civilization “as we know it” may be ending.

 

Live, From Inside the Gale of Creative Destruction

Talk about cognitive dissonance…

First, Mediapost’s Jack Loechner writes about a Forrester Report, The End of Advertising as We Know It, which was published earlier this year. Seeing as last week I started ringing the death knell for advertising agencies, I thought I should check the report out.

Problem One: The report was only available on Forrester if I was willing to plunk down $499. American. Which is – I don’t know – about 14 zillion Canadian. Much as I love and respect you, my readers, there’s no friggin’ way that’s going to happen. So, I go to Google to see if I can find a free source to get the highlights.

Problem Two: Everyone and Sergio Zyman’s dog has apparently decided to write a book or white paper entitled “The End of Advertising as We Know It.” Where to begin researching the end? Well, here’s one deliciously ironic option – one of those white papers was published by none other than WPP. You know I have to check that out! As it turns out – no surprise here – it’s a sales pitch for the leading edge cool stuff that one of WPP’s agencies, AKQA, can do for you. I tried to sift through the dense text but gave up after continually bumping into buzz-laden phrases like “365 ideas”, “Business Invention” and “People Stories.” I return to the search results page and follow a Forbes link that looks more promising.

Problem Three: Yep! This is it. It’s Forbes’ summation of the Forrester Report. I start reading and learn that the biggest problem with advertising is that we hate to be interrupted by advertising. Well, I could have told you that. Oh – wait – I did (for free, I might add). But here’s the cognitively dissonant part. As I’m trying to read the article, an autoplay video ad keeps playing on the Forbes page, interrupting me. And you know what? I hated it! The report was right. At least, I think it was, as I stopped reading the article.

I’m guessing you’re going through something similar right now. As you’re trying to glean my pearls of wisdom, you’re tiptoeing around advertising on the page. That’s not Mediapost’s fault. They have a business to run and right now, there’s no viable business model other than interruptive advertising to keep the lights on. So you have the uniquely dissonant experience of reading about the end of advertising while being subjected to advertising.

My experience – which is hardly unique – is a painful reminder about the inconvenient truth of innovative disruption: it’s messy in the middle of it. When Joseph Schumpeter called it a “gale of creative destruction” it made it sound revolutionary and noble in the way that the Ride of the Valkyries or the Starks retaking Winterfell is noble. But this stuff gets messy, especially if you’re trying to hang on to the things being destroyed when the gale hits in full force.

Here’s the problem, in a nutshell. The tension goes back to a comment made back in 1984 by Stewart Brand to Steve Wozniak:

“On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”

In publishing, we not only have the value of the information itself, but we have the cost of wrapping insight around that information. Forrester’s business is industry analysis. Someone has to do the analyzing and there are costs associated with that. So they charge $499 for a report on the end of advertising.

Which brings us to the second part of the tension. Because so much information is now free and Google gives me, the information consumer, the expectation that I can find it for free – or, at least, highlights of it for free – I expect all information to be free. I believe I have an alternative to paying Forrester. In today’s age, information tends to seep through the cracks in pay walls, as it did when Forbes and Mediapost published articles on the report. Forrester is okay with that, because it hopes it will make more people willing to pay $499 aware of the report.

For their part, Forbes and Mediapost rely on advertising to keep the information available to you for free, matching our expectations. But they have their own expenses. Whether we like it or not, interruptive advertising is the only option currently available to them.

So there we have it, a very shaky house of cards built on a rapidly crumbling foundation. Welcome to the Edge of Chaos. A new model will be created from this destruction. That is inevitable. But in the meantime, there’s going to be a lot of pain and WTF moments. Just like the one I had this week.

Social Media is Barely Skin Deep

Here’s a troubling fact. According to a study from the Georgia Institute of Technology, half of all selfies taken have one purpose: to show how good the subject looks. They are intended to show the world how attractive we are: our makeup, our clothes, our shoes, our lips, our hair. The category accounts for more selfies than all other categories combined. More than selfies taken with people or pets we love, more than us doing the things we love, more than being in the places we love, more than eating the food we love. It appears that the one thing we love the most is ourselves. The selfies have spoken.

In this study, the authors reference a 1956 work from sociologist Erving Goffman, The Presentation of Self in Everyday Life. Goffman took Shakespeare’s line – “All the world’s a stage, and all the men and women merely players” – quite literally. His theory was that we are all playing the part of who we want to be perceived as. Our lives are divided into two parts: the “front,” when we’re “on stage” and playing our part, and the “back,” when we prepare for our role. The roles we play depend on the context we’re in.

Goffman’s theory introduces an interesting variable into consideration. The way we play these roles and the importance we place on them will vary with the individual. For some of us, it will be all about the role and less about the actual person who inhabits that role. These people are obsessed with how they are perceived by others. They’re the ones snapping selfies of themselves to show the world just how marvelous they look.

For others, they care little about what the world thinks of them. They are internally centered and are focused on living their lives, rather than acting their way through their lives for the entertainment of – and validation from – others. In between the two extremes is the ubiquitous bell curve of normal distribution. Most of us live somewhere on that curve.

Goffman’s theory was created specifically to provide insight into face-to-face encounters. Technology has again thrown a gigantic wrinkle into things – and that wrinkle may explain why we keep taking those narcissistic selfies.

Humans are pretty damned good at judging authenticity in a face-to-face setting. We pick up subtle cues from across a wide swath of interpersonal communication channels: vocal intonations, body language, eye-to-eye contact, micro-expressions. Together, these inputs give us a pretty accurate “bullshit detector.” If someone comes across as an inauthentic “phony” the majority of us will just roll our eyes and simply start avoiding the person. In face-to-face encounters there is a social feedback mechanism that keeps the “actors” amongst us at least somewhat honest in order to remain part of the social network that forms their audience.

But social media platforms provide the ideal incubator for inauthentic presentation of our own personas. There are three factors in particular that allow shallow “actors” to flourish – even to the point of going viral.

False Intimacy and Social Distance

In his blog on Psychology Today, counselor Michael Formica talks about two of these factors – social distance and false intimacy. I’ve talked about false intimacy before in another context – the “likeability” of celebrities. Social media removes the transactional costs of retaining a relationship. This has the unfortunate side effect of screwing up the brain’s natural defenses against inauthentic relationships. When we’re physically close to a person, there are no filters for the bad stuff. We get it all. Our brains have evolved to do a cost/benefit analysis of each relationship we have and decide whether it’s worth the effort to maintain it. This works well when we depend on physically proximate relationships for our own well-being.

But social media introduces a whole new context for maintaining social relationships. When the transactional costs are reduced to a scanning of a newsfeed and hitting the “Like” button, the brain says “What the hell, let’s add them to our mental friends list. It’s not costing me anything.” In evolutionary terms, intimacy is the highest status we can give to a relationship and it typically only comes with a thorough understanding of the good and the bad involved in that relationship by being close to the person – both physically and figuratively. With zero relational friction, we’re more apt to afford intimacy, whether or not it’s been earned.

The Illusion of Acceptance

The previous two factors perfectly set the “stage” for false personas to flourish, but it’s the third factor that allows them to go viral. Every actor craves acceptance from his or her audience. Social exclusion is the worst fate imaginable for them. In a face-to-face world, our mental cost/benefit algorithm quickly weeds out false relationships that are not worth the investment of our social resources. But that’s not true online. If it costs us nothing, we may be rolling our eyes – safely removed behind our screen – as we’re also hitting the “Like” button. And shallow people are quite content with shallow forms of acceptance. A Facebook like is more than sufficient to encourage them to continue their act. To make it even more seductive, social acceptance is now measurable – there are hard numbers assigned to popularity.

This is pure catnip to the socially needy. Their need to craft a popular – but entirely inauthentic – persona goes into overdrive. Their lives are not lived so much as manufactured to create a veneer just thick enough to capture a quick click of approval. Increasingly, they retreat to an online world that follows the script they’ve written for themselves.

Suddenly it makes sense why we keep taking all those selfies of ourselves. When all the world’s a stage, you need a good head shot.

The Medium is the Message, Mr. President

Every day that Barack Obama was in the White House, he read 10 letters. Why letters? Because form matters. There’s still something about a letter. It’s so intimate. It uses a tactile medium. Emotions seem to flow more easily through cursive loops and the sound of pen on paper. Letters balance between raw and reflective. As such, they may be an unusually honest glimpse into the soul of the writer. Obama seemed to get that. There was an entire team of hundreds of people at the White House that reviewed 10,000 letters a day and chose the 10 that made it to Obama, but the intent was to give an unfiltered snapshot of the nation at any given time. It was a mosaic of personal stories that – together – created a much bigger narrative.

Donald Trump doesn’t read letters. He doesn’t read much of anything. The daily presidential briefing has been dumbed down to media more fitting of the President’s 140-character attention span. Trump likes to be briefed with pictures and videos. His information medium of choice? Cable TV. He has turned Twitter into his official policy platform.

Today, technology has exponentially multiplied the number of communication media available to us. And in that multiplicity, Marshall McLuhan’s 50-year-old trope about the medium being the message seems truer than ever. The channels we choose – whether we’re on the sending or receiving end – carry their own inherent message. They say who we are, what we value, how we think. They intertwine with the message, determining how it will be interpreted.

I’m sad that letter writing is a dying art, but I’m also contributing to its demise. It’s been years since I’ve written a letter. I do write this column, which is another medium. But even here I’m mislabeling it. Technically this is a blog post. A column is a concept embedded in the medium of print – with its accompanying physical restriction of column inches. But I like to call it a column, because in my mind that carries its own message. A column comes with an implicit promise between you – the readers – and myself, the author. Columns are meant to be regularly recurring statements of opinion. I have to respect the fact that I remain accountable for this Tuesday slot that MediaPost has graciously given me. Week after week, I try to present something that I hope you’ll find interesting and useful enough to keep reading. I feel I owe that to you. To me, a “post” feels more ethereal – with less of an ongoing commitment between author and reader. It’s more akin to a drive-by-writing.

So that brings me to one of the most interesting things about letters and President Obama’s respect for them. They are meant to be a thoughtful medium between two people. The thoughts captured within are important enough to the writer that they’re put in print but they are intended just for the recipient. They are one of the most effective media ever created to ask for empathetic understanding from one person in particular. And that’s how Obama’s Office of Presidential Correspondence treated them. Each letter represented a person who felt strongly enough about something that they wanted to share it with the President personally. Obama used to read his ten letters at the end of the day, when he had time to digest and reflect. He often made notations in the margins asking pointed questions of his staff or requesting more investigation into the circumstances chronicled in a letter. He chose to set aside a good portion of each day to read letters because he believed in the message carried by the medium: Individuals – no matter who they are – deserve to be heard.

Our Brain on Reviews

There’s an interesting new study that was just published about how our brain mathematically handles online reviews that I wanted to talk about today. But before I get to that, I wanted to talk about foraging a bit.

The story of how science discovered our foraging behaviors serves as a mini lesson in how humans tick. The economists of the 1940s and ’50s discovered the world of micro-economics, based on the foundation that humans were perfectly rational – we were Homo economicus. When making personal economic choices in a world of limited resources, we maximized utility. The economists of the time assumed this was a uniquely human property, bequeathed to us by virtue of the reasoning power of our superior brains.

In the ’60s, behavioral ecologists knocked our egos down a peg or two. It wasn’t just humans that could do this. Foxes could do it. Starlings could do it. Pretty much any species had the same ability to seemingly make optimal choices when faced with scarcity. It was how animals kept from starving to death. This was the birth of foraging theory. This wasn’t some Homo sapiens-exclusive behavior that was directed from the heights of rationality downwards. It was an evolved behavior that was built from the ground up. It’s just that humans had learned how to apply it to our abstract notion of economic utility.

Three decades later, two researchers at Xerox’s Palo Alto Research Center found another twist. Not only had our ability to forage been evolved all the way through our extensive family tree, but we seemed to borrow this strategy and apply it to entirely new situations. Peter Pirolli and Stuart Card found that when humans navigate content in online environments, the exact same patterns could be found. We foraged for information. Those same calculations determined whether we would stay in an information “patch” or move on to more promising territory.

This seemed to indicate three surprising discoveries about our behavior:

  • Much of what we think is rational behavior is actually driven by instincts that have evolved over millions of years.
  • We borrow strategies from one context and apply them in another. We use the same basic instincts to find the FAQ section of a website that we used to find sustenance on the savannah.
  • Our brains seem to use Bayesian logic to continuously calculate and update a model of the world. We rely on this model to survive in our environment, whatever and wherever that environment might be.

So that brings us to the study I mentioned at the beginning of this column. If we take the above into consideration, it should come as no surprise that our brain uses similar evolutionary strategies to process things like online reviews. But the way it does it is fascinating.

The amazing thing about the brain is how it seamlessly integrates and subconsciously synthesizes information and activity from different regions. For example, in foraging, the brain integrates information from the regions responsible for wayfinding – knowing our place in the world – with signals from the dorsal anterior cingulate cortex – an area responsible for reward monitoring and executive control. Essentially, the brain is constantly updating an algorithm about whether the effort required to travel to a new “patch” will be balanced by the reward we’ll find when we get there. You don’t consciously marshal the cognitive resources required to do this. The brain does it automatically. What’s more – the brain uses many of the same resources and algorithm whether we’re considering going to McDonald’s for a large order of fries or deciding what online destination would be the best bet for researching our upcoming trip to Portugal.
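The stay-or-go calculation described above can be sketched as a simple rule of thumb. This is a minimal illustration in the spirit of optimal foraging theory, not the actual model from the research; the function names, the diminishing-returns curve, and all the numbers are assumptions for the example:

```python
# Illustrative patch-leaving rule: leave the current patch when the
# marginal return of staying drops below what we could expect
# elsewhere, once the cost of traveling there is factored in.

def gain(time_in_patch, patch_richness):
    """Diminishing returns: total reward accumulated after some time in a patch."""
    return patch_richness * (1 - 0.9 ** time_in_patch)

def should_leave(time_in_patch, patch_richness, avg_rate_elsewhere, travel_cost):
    """True when one more unit of time here yields less than the
    expected rate elsewhere, discounted by the cost of getting there."""
    marginal_gain = (gain(time_in_patch + 1, patch_richness)
                     - gain(time_in_patch, patch_richness))
    return marginal_gain < avg_rate_elsewhere - travel_cost

# Early in a rich patch, returns are high, so we stay; after the patch
# is depleted, the same rule tells us to move on.
print(should_leave(1, 10.0, 0.5, 0.2))   # → False (stay)
print(should_leave(40, 10.0, 0.5, 0.2))  # → True (leave)
```

The same comparison works whether the "patch" is a berry bush or the FAQ section of a website: only the units of reward change.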

In evaluating online reviews, we have a different challenge: how reliable are the reviews? The context may be new – our ancestors didn’t have TripAdvisor or AirBNB ratings for choosing the right cave to sleep in tonight – but the problem isn’t. What criteria should we use when we decide to integrate social information into our decision making process? If Thorlak the bear hunter tells me there’s a great cave a half-day’s march to the south, should I trust him? Experience has taught us a few handy rules of thumb when evaluating sources of social information: reliability of the source and the consensus of crowds. Has Thorlak ever lied to us before? Do others in the tribe agree with him? These are hardwired social heuristics. We apply them instantly and instinctively to new sources of information that come from our social network. We’ve been doing it for thousands of years. So it should come as no surprise that we borrow these strategies when dealing with online reviews.

In a neuro-scanning study from University College London, researchers found that reliability plays a significant role in how our brains treat social information. Once again, a well-evolved capability of the brain is recruited to help us in a new situation. The dorsomedial prefrontal cortex is the area of the brain that keeps track of our social connections. This “social monitoring” ability of the brain worked in concert with the ventromedial prefrontal cortex, an area that processes value estimates.

The researchers found that this part of our brain works like a Bayesian computer when considering incoming information. First we establish a “prior” that represents a model of what we believe to be true. Then we subject this prior to possible statistical updating based on new information – in this case, online reviews. If our confidence is high in this “prior” and the incoming information is weak, we tend to stick with our initial belief. But if our confidence is low and the incoming information is strong – i.e. a lot of positive reviews – then the brain overrides the prior and establishes a new belief, based primarily on the new information.
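That confidence-weighted tug-of-war between prior and evidence can be sketched with a toy model. This is an illustrative assumption on my part, not the model from the study: belief and reviews are each summarized by a mean rating plus a confidence weight, and the updated belief is simply their confidence-weighted average:

```python
# Toy Bayesian update: a confident prior resists weak new evidence,
# while a weak prior yields to a strong stream of reviews.

def update_belief(prior_mean, prior_confidence, review_mean, review_confidence):
    """Combine a prior belief with review evidence, each weighted by
    how much confidence we place in it; returns the new belief and
    the combined confidence."""
    total_confidence = prior_confidence + review_confidence
    posterior_mean = (prior_confidence * prior_mean
                      + review_confidence * review_mean) / total_confidence
    return posterior_mean, total_confidence

# High-confidence prior, weak reviews: the belief barely moves.
print(update_belief(2.0, 10.0, 4.5, 1.0))   # ≈ (2.23, 11.0)
# Low-confidence prior, many strong positive reviews: the belief shifts.
print(update_belief(2.0, 1.0, 4.5, 10.0))   # ≈ (4.27, 11.0)
```

Either way, the combined confidence grows, which matches the intuition that each batch of reviews leaves us more certain of whatever we now believe.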

While this seems like common sense, the mechanisms at play are interesting. The brain effortlessly pattern matches new types of information and recruits the region that is most likely to have evolved to successfully interpret that information. In this case, the brain had decided that online reviews are most like information that comes from social sources. It combines the interpretation of this data with an algorithmic function that assigns value to the new information and calculates a new model – a new understanding of what we believe to be true. And it does all this “under the hood” – sitting just below the level of conscious thought.