I, Robot…

Note: No Artificial Intelligence was involved in the creation of this column.

In 1942, science fiction writer Isaac Asimov introduced the Three Laws of Robotics, later gathered in his collection of short stories, I, Robot:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov presented the laws as coming from the Handbook of Robotics, 56th Edition, 2058 A.D. What was once an unimaginably distant future is now knocking with increasing intensity on the door of the present. And Elon Musk, for one, is worried: “AI is a fundamental risk to the existence of human civilization.” Musk believes that, Laws of Robotics or no, we won’t be able to control this genie once it gets out of its bottle.

Right now, the genie looks pretty benign. In the past year, the Washington Post has used robot reporters to write over 850 stories. The Post believes this is a win/win with their human reporters, because the robot, named Heliograf, can:

  • Cover stories that wouldn’t have been covered due to lack of human resources
  • Do the factual heavy lifting for human reporters
  • Alert humans to possible news stories in big data sets

So, should we fear or cheer robots? I think the Post’s experiment highlights two areas that AI excels at, and indicates how we might play nice with machines.

For AI to work effectively, the dots have to be pretty well sketched out. When they are, AI can be tireless in scouting out relevant facts and data where humans would tend to get bored easily. But humans are still much better at connecting those dots, especially when no obvious connection is apparent. We do it through something called intuition. It’s at least one area where we can still blow machines away.

Machines are also good at detecting patterns in overwhelming amounts of data. Humans tend to overfit – to make the data fit our narratives. We’ll come back to this point in a minute, but for now, let’s go back to intuition. It’s still the trump card we humans hold. In 2008, Wired editor Chris Anderson prematurely (and, many believe, incorrectly) declared the Scientific Method dead, thanks to the massive data sets we now have available:

“We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”

Anderson gets it partly right, but he also unfairly gives intuition short shrift. This is not a zero-sum game. Intuition and AI can and should play nicely together. As I mentioned a few weeks ago, human intuition was found to boost the effectiveness of an optimization algorithm by 25%.

Evolutionary biologist Richard Dawkins recently came to the defense of intuition in Science, saying:

“Science proceeds by intuitive leaps of the imagination – building an idea of what might be true, and then testing it”

The very human problem comes when we let our imaginations run away from the facts, bending science to fit our hypotheses:

“It is important that scientists should not be so wedded to that intuition that they omit the very important testing stage.”

There is a kind of reciprocation here – an oscillation between phases. Humans are great at some stages – the ones that require intuition and imagination – and machines are better at others – where a cold and dispassionate analysis of the facts is required. Like most things in nature that pulse with a natural rhythm, the whole gains from the opposing forces at work here. It is a symphony with a beat and a counterbeat.

That’s why, for the immediate future anyway, machines should bend not to our will, but to our imagination.

Addicted to Tech

A few columns ago, I mentioned one of the aspects that is troubling me about technology – the shallowness of social media. I had mentioned at the time that there were other aspects that were equally troubling. Here’s one:

Technology is addictive – and it’s addictive by design.

Let’s begin by looking at the definition of addiction:

Persistent compulsive use of a substance known by the user to be harmful

So, let’s break it down. I don’t think you can quibble with the persistent, compulsive use part. When’s the last time you had your iPhone in your hand? We can simply swap out “substance” for “device” or “technology.” So that leaves us with the last qualifier – “known by the user to be harmful” – and there are two parts to this: is it harmful, and does the user know it’s harmful?

First, let’s look at the neurobiology of addiction. What causes us to use something persistently and compulsively? Here, dopamine is the culprit. Our reward center uses dopamine and the pleasurable sensation it produces as a positive reinforcement to cause us to pursue activities which over many hundreds of generations have proven to be evolutionarily advantageous. But Dr. Gary Small, from the UCLA Brain Research Institute, warns us that this time could be different:

“The same neural pathways in the brain that reinforce dependence on substances can reinforce compulsive technology behaviors that are just as addictive and potentially destructive.”

We like to think of big tobacco as the most evil of all evil empires – guilty of promoting addiction to a harmful substance – but is there much separating it from the purveyors of tech – Facebook or Google, for instance? According to Tristan Harris, there may be a very slippery slope between the two. I’ve written about Tristan before. He’s the former Google product manager who launched the Time Well Spent non-profit, devoted to stopping “tech companies from hijacking our minds.” Harris points the finger squarely at the big Internet companies for creating platforms intentionally designed to suck up as much of our time as possible. There’s empirical evidence to back up Harris’s accusations. Researchers at Michigan State University and two universities in the Netherlands found that even seeing the Facebook logo can trigger a conditioned response in a social media user that starts the dopamine cycle spinning. We start jonesing for a social media fix.

So, what if our smartphones and social media platforms do seduce us into using them compulsively? What’s the harm? That’s the second part of the addiction equation: is whatever we’re using actually harmful? After all, it’s not like tobacco, which was proven to cause lung cancer.

Ah, but that’s the thing, isn’t it? We were smoking cigarettes for almost a hundred years before we finally found out they were bad for us. Sometimes it takes a while for the harmful effects of addiction to appear. The same could be true for our tech habit.

Tech addiction plays out at many different levels of cognition. This could potentially be much more sinister than just the simple waste of time that Tristan Harris is worried about. There’s mounting evidence that overuse of tech could dramatically alter our ability to socialize effectively with other humans. The debate, which I’ve talked about before, comes when we substitute screen-to-screen interaction for face-to-face. The supporters say that this is simply another type of social bonding – one that comes with additional benefits. The naysayers worry that we’re just not built to communicate through screens and that – sooner or later – there will be a price to be paid for our obsessive use of digital platforms.

Dr. Jean Twenge, professor of psychology at San Diego State University, researches generational differences in behavior. It’s here where the full impact of the introduction of a disruptive environmental factor can be found. She found a seismic shift in behaviors between Millennials and the generation that followed them. It was a profound difference in how these generations viewed the world and where they spent their time. And it started in 2012 – the year when the proportion of Americans who owned a smartphone surpassed 50 percent. She sums up her concern in unequivocal terms:

“The twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.”

Not only are we less happy, we may be becoming less smart. As we become more reliant on technology, we do something called cognitive off-loading. We rely on Google rather than our memories to retrieve facts. We trust our GPS more than our own wayfinding strategies to get us home. Cognitive off-loading is a way to move beyond the limits of our own minds, but there may be an unacceptable trade-off here. Brains are like muscles – if we stop using them, they begin to atrophy.

Let’s go back to that original definition and the three qualifying criteria:

  • Persistent, compulsive use
  • Harmful
  • We know it’s harmful

In the case of tech, let’s not wait a hundred years to put check marks after all of these.


To Buy or Not to Buy: The Touchy Subject of Mobile E-Commerce

A recent report from Akamai indicates that users have little patience when it comes to making purchases on a mobile device. Here are just a few of the stats:

  • While almost half of all consumers browse via their phones, only 1 in 5 complete transactions on mobile
  • Optimal load times for peak conversions ranged from 1.8 to 2.7 seconds across device types
  • Just a 100-millisecond delay in load time hurt conversion rates by up to 7%
  • Bounce rates were highest among mobile shoppers and lowest among those using tablets

But there may be more behind this than just slow load times. We also have to consider what modes we’re in when we’re interacting with our mobile device.

In 2010, Microsoft did a fascinating research project that looked at how user behaviors varied from desktop to tablet to smartphone. The research was headed by Jacquelyn Krones, who was a Search Product Manager at the time. Search was the primary activity examined, but a larger behavioral context was explored as well. While the study is seven years old, I think the core findings are still relevant. The researchers found that we tend to have three large buckets of behaviors: missions, explorations and excavations. Missions were focused tasks, usually looking for a specific piece of information – e.g. an address or phone number. Explorations were more open-ended and less focused on a given destination – e.g. seeing if there was anything you wanted to do this Friday night. Excavations typically involved multiple tasks within an overarching master task – e.g. researching an article. In an interview with me, Krones outlined their findings:

“There’s clearly a different profile of these activities on the different platforms. On desktops and laptops, people do all three of the activities – they conduct missions and excavations and explorations.

“On their phones we expected to see lots of missions – usually when you use your mobile phone and you’re conducting a search, whatever you’re doing in terms of searching is less important than what’s going on with you in the real world – you’re trying to get somewhere, you’re having a discussion with somebody and you want to look something up quick or you’re trying to make a decision about where to go for dinner.

“But we were surprised to find that people are using their mobile phones for exploration. But once we saw the context, it made sense – people have a low tolerance for boredom. Their phone is actually pretty entertaining, much more entertaining than just looking at the head in front of you while you’re waiting in line. You can go check a sports score, read a story, or look at some viral video and have a more engaged experience.

“On tablets, we found that people are pretty much only using them for exploration today. I had expected to see more missions on tablets, and I think that that will happen in the future, but today people perceive their mobile phone as always with them, very personal, always on, and incredibly efficient for getting information when they’re in mission mode.”

Another study, coming out of The University of British Columbia Okanagan, also saw a significant difference in behavioral modality when it came to interacting with touchscreens. Assistant Professor Ying Zhu was the principal author:

“The playful and fun nature of the touchscreen enhances consumers’ favour of hedonic products; while the logical and functional nature of a desktop endorses the consumers’ preference for utilitarian products,” explains Zhu.

“Zhu’s study also found that participants using touchscreen technology scored significantly higher on experiential thinking than those using desktop computers. However, those on desktops scored significantly higher on rational thinking.”

I think what we have here is an example of thinking, fast and slow. I suspect we’re compartmentalizing our activities, subconsciously setting some aside for completion on the desktop. I would suspect utilitarian purchasing falls into this category. I know that’s certainly true in my case. As Dr. Zhu noted, we have a very right-brain relationship with touchscreens, while desktops tend to bring out our left brain. I have always been amazed at how our brains subconsciously prime us based on an anticipated operating environment. Chances are, we don’t even realize how much our behaviors change when we move from a smartphone to a tablet to a desktop. But I’d be willing to place a significant wager that it’s this subconscious techno-priming that’s causing some of these behavioural divides between devices.

Slow load times are never a good thing, on any device, but while they certainly don’t help with conversions, they may not be the only culprit sitting between a user and a purchase. The device itself could also be to blame.

Is Google Slipping, Or Is It Just Our Imagination?

Recently, I’ve noticed a few articles speculating about whether Google might be slipping:

Last month, the American Customer Satisfaction Index notified us that our confidence in search is on the decline. Google’s score dropped 2% to 82. The culprit was the amount of advertising found on the search results page. To be fair, both Google and search in general have had lower scores. Back in 2015, Google scored a 77, its lowest score ever.

This erosion of customer satisfaction may be leading to a drop in advertising ROI. According to a recent report from Analytic Partners, the return on investment from paid search dropped 27% from 2010 to 2016. Search wasn’t alone. All digital ROI seems to be in decline. Analytic Partners’ VP of Marketing, Joe LaSala, predicts that ROI from digital will continue to decline until it converges with ROI from traditional media.

In April of this year, Forbes ran an article asking the question: “Is Google’s Search Quality Starting to Decline?” Contributors to this decline, according to the article, included the introduction of rich snippets and featured news, including popularity as a ranking factor and ongoing black hat SEO manipulation.

But the biggest factor in the drop of Google’s perceived quality was actually in the perception itself. As the Forbes article’s author, Jayson DeMers, stated:

It’s important to realize just how sophisticated Google is, and how far it’s come from its early stages, as well as the impossibility of having a “perfect” search platform. Humans are flawed creatures, and our actions are what are dictating the shape of search.

Google is almost 20 years old. The domain Google.com was registered on September 15, 1997. Given that 20 years is an eternity in internet years, it’s actually amazing that it’s stood up as well as it has for the past two decades. Whether Google’s naysayers care to admit it or not, that’s due to Google’s almost religious devotion to the quality of its search results. That devotion extends to advertising. The balance between user experience and monetization has always been one that Google has paid a lot of attention to.

But it’s not the presence of ads that has led to this perceived decline in quality. It’s a change in our expectations of what a search experience should be. I would argue that for any given search, using objective measures of result relevance, the results Google shows today are far more relevant than the results it showed in 2008, the year it got its highest customer satisfaction score (86). Since then, Google has made great strides in deciphering user intent and providing a results page that’s a good match for that intent. Sometimes it will get it wrong, but when it gets it right, it puts together a page that’s a huge improvement over the vanilla, one-size-fits-all results page of 2008.

The biggest thing that’s changed in the past 10 years is the context from which we’re launching those searches. In 2008, it was almost always the desktop. But today, chances are we’re searching from a mobile device – or our car, or our home through an Amazon Echo. This has changed our expectations of search. We are task-focused, rather than “browsing” for information. This creates an entirely different mental framework within which we receive the results. We apply a new yardstick of acceptable relevance. Here, we’re not looking for a list of 20 possible answers – we’re looking for one answer. And it had better be the right one. Context-based search must be hyper-relevant.

Compounding this trend is the increasing number of circumstances where search is going “under the hood” – something I’ve been forecasting for a long time now. For example, if you use Siri to launch a search through your CarPlay connected device when you’re driving, the results are actually coming from Bing but they’re stripped of the context of the Bing search results page. Here, the presentation of search results is just one step in a multi-step task flow. It’s important that the result that is on top is the one you’re probably looking for.

Unfortunately for Google – and the other search providers – this expectation stays in place even when the context shifts. When we launch a search from our desktop, we are increasingly intolerant of results that are even a little off base from our intent. Ads become the most easily identified culprit. A results set that would have seemed almost frighteningly prescient even a few years ago now seems subpar. Google has come a long way in the past 20 years, but it’s still losing ground to our expectations.


Live, From Inside the Gale of Creative Destruction

Talk about cognitive dissonance…

First, Mediapost’s Jack Loechner writes about a Forrester Report, The End of Advertising as We Know It, which was published earlier this year. Seeing as last week I started ringing the death knell for advertising agencies, I thought I should check the report out.

Problem One: The report was only available on Forrester if I was willing to plunk down $499. American. Which is – I don’t know – about 14 zillion Canadian. Much as I love and respect you, my readers, there’s no friggin’ way that’s going to happen. So, I go to Google to see if I can find a free source to get the highlights.

Problem Two: Everyone and Sergio Zyman’s dog has apparently decided to write a book or white paper entitled “The End of Advertising as We Know It.” Where to begin researching the end? Well, here’s one deliciously ironic option – one of those white papers was published by none other than WPP. You know I have to check that out! As it turns out – no surprise here – it’s a sales pitch for the leading edge cool stuff that one of WPP’s agencies, AKQA, can do for you. I tried to sift through the dense text but gave up after continually bumping into buzz-laden phrases like “365 ideas”, “Business Invention” and “People Stories.” I return to the search results page and follow a Forbes link that looks more promising.

Problem Three: Yep! This is it – Forbes’ summation of the Forrester report. I start reading and learn that the biggest problem with advertising is that we hate to be interrupted by advertising. Well, I could have told you that. Oh – wait – I did (for free, I might add). But here’s the cognitively dissonant part. As I’m trying to read the article, an autoplay video ad keeps playing on the Forbes page, interrupting me. And you know what? I hated it! The report was right. At least, I think it was, as I stopped reading the article.

I’m guessing you’re going through something similar right now. As you’re trying to glean my pearls of wisdom, you’re tiptoeing around advertising on the page. That’s not Mediapost’s fault. They have a business to run and right now, there’s no viable business model other than interruptive advertising to keep the lights on. So you have the uniquely dissonant experience of reading about the end of advertising while being subjected to advertising.

My experience – which is hardly unique – is a painful reminder about the inconvenient truth of innovative disruption: it’s messy in the middle of it. When Joseph Schumpeter called it a “gale of creative destruction” it made it sound revolutionary and noble in the way that the Ride of the Valkyries or the Starks retaking Winterfell is noble. But this stuff gets messy, especially if you’re trying to hang on to the things being destroyed when the gale hits in full force.

Here’s the problem, in a nutshell. The tension goes back to a comment made in 1984 by Stewart Brand to Steve Wozniak:

“On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”

In publishing, we not only have the value of the information itself, but we have the cost of wrapping insight around that information. Forrester’s business is industry analysis. Someone has to do the analyzing and there are costs associated with that. So they charge $499 for a report on the end of advertising.

Which brings us to the second part of the tension. Because so much information is now free and Google gives me, the information consumer, the expectation that I can find it for free – or, at least, highlights of it for free – I expect all information to be free. I believe I have an alternative to paying Forrester. In today’s age, information tends to seep through the cracks in pay walls, as it did when Forbes and Mediapost published articles on the report. Forrester is okay with that, because it hopes it will make more people willing to pay $499 aware of the report.

For their part, Forbes – or Mediapost – relies on advertising to keep the information available to you for free, matching our expectations. But they have their own expenses. Whether we like it or not, interruptive advertising is the only option currently available to them.

So there we have it, a very shaky house of cards built on a rapidly crumbling foundation. Welcome to the Edge of Chaos. A new model will be created from this destruction. That is inevitable. But in the meantime, there’s going to be a lot of pain and WTF moments. Just like the one I had this week.

The Medium is the Message, Mr. President

Every day that Barack Obama was in the White House, he read 10 letters. Why letters? Because form matters. There’s still something about a letter. It’s so intimate. It uses a tactile medium. Emotions seem to flow more easily through cursive loops and the sound of pen on paper. Letters balance the raw and the reflective. As such, they may be an unusually honest glimpse into the soul of the writer. Obama seemed to get that. An entire team of hundreds of people at the White House reviewed 10,000 letters a day and chose the 10 that made it to Obama, but the intent was to give an unfiltered snapshot of the nation at any given time. It was a mosaic of personal stories that – together – created a much bigger narrative.

Donald Trump doesn’t read letters. He doesn’t read much of anything. The daily presidential briefing has been dumbed down to media more fitting of the President’s 140-character attention span. Trump likes to be briefed with pictures and videos. His information medium of choice? Cable TV. He has turned Twitter into his official policy platform.

Today, technology has exponentially multiplied the number of communication media available to us. And in that multiplicity, Marshall McLuhan’s 50-year-old trope about the medium being the message seems truer than ever. The channels we choose – whether we’re on the sending or receiving end – carry their own inherent message. They say who we are, what we value, how we think. They intertwine with the message, determining how it will be interpreted.

I’m sad that letter writing is a dying art, but I’m also contributing to its demise. It’s been years since I’ve written a letter. I do write this column, which is another medium. But even here I’m mislabeling it. Technically this is a blog post. A column is a concept embedded in the medium of print – with its accompanying physical restriction of column inches. But I like to call it a column, because in my mind that carries its own message. A column comes with an implicit promise between you – the readers – and myself, the author. Columns are meant to be regularly recurring statements of opinion. I have to respect the fact that I remain accountable for this Tuesday slot that MediaPost has graciously given me. Week after week, I try to present something that I hope you’ll find interesting and useful enough to keep reading. I feel I owe that to you. To me, a “post” feels more ethereal – with less of an ongoing commitment between author and reader. It’s more akin to drive-by writing.

So that brings me to one of the most interesting things about letters and President Obama’s respect for them. They are meant to be a thoughtful medium between two people. The thoughts captured within are important enough to the writer that they’re put in print but they are intended just for the recipient. They are one of the most effective media ever created to ask for empathetic understanding from one person in particular. And that’s how Obama’s Office of Presidential Correspondence treated them. Each letter represented a person who felt strongly enough about something that they wanted to share it with the President personally. Obama used to read his ten letters at the end of the day, when he had time to digest and reflect. He often made notations in the margins asking pointed questions of his staff or requesting more investigation into the circumstances chronicled in a letter. He chose to set aside a good portion of each day to read letters because he believed in the message carried by the medium: Individuals – no matter who they are – deserve to be heard.

Our Brain on Reviews

An interesting new study was just published about how our brains mathematically handle online reviews, and I want to talk about it today. But before I get to that, I want to talk about foraging a bit.

The story of how science discovered our foraging behaviors serves as a mini-lesson in how humans tick. The economists of the 1940s and ’50s built the world of microeconomics on the foundation that humans were perfectly rational – we were homo economicus. When making personal economic choices in a world of limited resources, we maximized utility. The economists of the time assumed this was a uniquely human property, bequeathed on us by virtue of the reasoning power of our superior brains.

In the ’60s, behavioral ecologists knocked our egos down a peg or two. It wasn’t just humans who could do this. Foxes could do it. Starlings could do it. Pretty much any species had the same ability to make seemingly optimal choices when faced with scarcity. It was how animals kept from starving to death. This was the birth of foraging theory. This wasn’t some homo-sapiens-exclusive behavior directed from the heights of rationality downwards. It was an evolved behavior built from the ground up. It’s just that humans had learned how to apply it to our abstract notion of economic utility.

Three decades later, two researchers at Xerox’s Palo Alto Research Center found another twist. Not only had our ability to forage evolved all the way through our extensive family tree, but we seemed to borrow this strategy and apply it to entirely new situations. Peter Pirolli and Stuart Card found that when humans navigate content in online environments, the exact same patterns could be found. We foraged for information. Those same calculations determined whether we would stay in an information “patch” or move on to more promising territory.

This seemed to indicate three surprising discoveries about our behavior:

  • Much of what we think is rational behavior is actually driven by instincts that have evolved over millions of years
  • We borrow strategies from one context and apply them in another. We use the same basic instincts to find the FAQ section of a website that we used to find sustenance on the savannah.
  • Our brains seem to use Bayesian logic to continuously calculate and update a model of the world. We rely on this model to survive in our environment, whatever and wherever that environment might be.

So that brings us to the study I mentioned at the beginning of this column. If we take the above into consideration, it should come as no surprise that our brain uses similar evolutionary strategies to process things like online reviews. But the way it does it is fascinating.

The amazing thing about the brain is how it seamlessly integrates and subconsciously synthesizes information and activity from different regions. For example, in foraging, the brain integrates information from the regions responsible for wayfinding – knowing our place in the world – with signals from the dorsal anterior cingulate cortex – an area responsible for reward monitoring and executive control. Essentially, the brain is constantly updating an algorithm about whether the effort required to travel to a new “patch” will be balanced by the reward we’ll find when we get there. You don’t consciously marshal the cognitive resources required to do this. The brain does it automatically. What’s more – the brain uses many of the same resources and algorithm whether we’re considering going to McDonald’s for a large order of fries or deciding what online destination would be the best bet for researching our upcoming trip to Portugal.
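The patch-leaving calculation described above has a classic formal counterpart in behavioral ecology: Charnov's marginal value theorem, which says to stay in a patch only while its instantaneous rate of return beats the average rate for the environment as a whole, travel time included. Here's a minimal sketch of that rule; the diminishing-returns curve and travel times are illustrative assumptions, not figures from the study:

```python
import math

def leave_time(gain_rate, travel_time, dt=0.01, horizon=50.0):
    """Charnov's marginal value theorem: leave the current patch when
    its instantaneous gain rate falls to the overall average rate
    (total gain divided by travel time plus time spent in the patch)."""
    t, gain = 0.0, 0.0
    while t < horizon:
        rate_now = gain_rate(t)              # marginal return of staying put
        avg_rate = gain / (travel_time + t)  # average return, travel included
        if rate_now <= avg_rate:             # staying no longer beats moving on
            return t
        gain += rate_now * dt
        t += dt
    return t

# A patch with diminishing returns: it yields less the longer we stay.
decay = lambda t: math.exp(-t)

# When patches are farther apart, it pays to stay longer in each one.
print(leave_time(decay, travel_time=1.0) < leave_time(decay, travel_time=5.0))  # True
```

The same structure – weigh the marginal value of staying against the expected value of moving on – is what information foraging theory applies to the decision to click away from a web page.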

In evaluating online reviews, we have a different challenge: how reliable are the reviews? The context may be new – our ancestors didn’t have TripAdvisor or AirBNB ratings for choosing the right cave to sleep in tonight – but the problem isn’t. What criteria should we use when we decide to integrate social information into our decision-making process? If Thorlak the bear hunter tells me there’s a great cave a half-day’s march to the south, should I trust him? Experience has taught us a few handy rules of thumb when evaluating sources of social information: reliability of the source and the consensus of crowds. Has Thorlak ever lied to us before? Do others in the tribe agree with him? These are hardwired social heuristics. We apply them instantly and instinctively to new sources of information that come from our social network. We’ve been doing it for thousands of years. So it should come as no surprise that we borrow these strategies when dealing with online reviews.

In a neuro-scanning study from University College London, researchers found that reliability plays a significant role in how our brains treat social information. Once again, a well-evolved capability of the brain is recruited to help us in a new situation. The dorsomedial prefrontal cortex is the area of the brain that keeps track of our social connections. This “social monitoring” ability of the brain worked in concert with the ventromedial prefrontal cortex, an area that processes value estimates.

The researchers found that this part of our brain works like a Bayesian computer when considering incoming information. First we establish a “prior” that represents a model of what we believe to be true. Then we subject this prior to possible statistical updating based on new information – in this case, online reviews. If our confidence is high in this “prior” and the incoming information is weak, we tend to stick with our initial belief. But if our confidence is low and the incoming information is strong – i.e. a lot of positive reviews – then the brain overrides the prior and establishes a new belief, based primarily on the new information.
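This confidence-weighted arbitration can be written down as a standard Bayesian update. Here's a toy sketch under Gaussian assumptions – the numbers are illustrative, and this is a simplification of whatever model the researchers actually fit: the updated belief is a precision-weighted average of the prior and the evidence, so a confident prior shrugs off weak evidence, while a shaky prior gets overwritten by a strong, consistent set of reviews.

```python
def update_belief(prior_mean, prior_var, evidence_mean, evidence_var):
    """Gaussian Bayesian update: blend a prior belief with new evidence,
    each weighted by its precision (1 / variance)."""
    w_prior, w_evidence = 1.0 / prior_var, 1.0 / evidence_var
    posterior_mean = (w_prior * prior_mean + w_evidence * evidence_mean) / (w_prior + w_evidence)
    posterior_var = 1.0 / (w_prior + w_evidence)
    return posterior_mean, posterior_var

# Confident prior (low variance), weak evidence: the belief barely moves.
stubborn, _ = update_belief(prior_mean=3.0, prior_var=0.1,
                            evidence_mean=5.0, evidence_var=2.0)

# Shaky prior, strong evidence (many consistent reviews): the belief jumps.
swayed, _ = update_belief(prior_mean=3.0, prior_var=2.0,
                          evidence_mean=5.0, evidence_var=0.1)

print(round(stubborn, 2), round(swayed, 2))  # ~3.1 vs ~4.9
```

Both calls use the same arithmetic; only the relative confidence changes, which is exactly the arbitration between prior and new information described above.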

While this seems like common sense, the mechanisms at play are interesting. The brain effortlessly pattern matches new types of information and recruits the region that is most likely to have evolved to successfully interpret that information. In this case, the brain had decided that online reviews are most like information that comes from social sources. It combines the interpretation of this data with an algorithmic function that assigns value to the new information and calculates a new model – a new understanding of what we believe to be true. And it does all this “under the hood” – sitting just below the level of conscious thought.