Face Time in the Real World is Important

For all the advances made in neuroscience, we still don’t fully understand how our brains respond to other people. What we do know is that it’s complex.

Join the Chorus

Recent studies, including this one from the University of Rochester, show that when we see someone we recognize, the brain responds with a chorus of neuronal activity. Neurons from different parts of the brain fire in unison, creating a congruent response that may simultaneously pull from memory, from emotion, from the rational regions of our prefrontal cortex and from other deep-seated areas of our brain. The firing of any one neuron may be relatively subtle, but together this chorus of neurons can create a powerful response to a person. This cognitive choir represents our total comprehension of an individual.

Non-Verbal Communication

“You’ll have your looks, your pretty face. – And don’t underestimate the importance of body language!” – Ursula, The Little Mermaid

Given that we respond to people with different parts of the brain, it makes sense that we use parts of the brain we’re not even aware of when communicating with someone else. In 1967, psychologist Albert Mehrabian attempted to pin this down with some actual numbers, publishing a paper in which he put forth what became known as Mehrabian’s Rule: 7% of communication is verbal, 38% is tone of voice and 55% is body language.

Like many oft-quoted rules, this one is typically misquoted. It’s not that words are unimportant when we communicate something. Words convey the message. But it’s the non-verbal part that determines how we interpret the message – and whether we trust it or not.

Folk wisdom has told us, “Your mouth is telling me one thing, but your eyes are telling me another.” In this case, folk wisdom is right. We evolved to respond to another person with our whole bodies, with our brains playing the part of conductor. Maybe the numbers don’t exactly add up to Mehrabian’s neat and tidy ratio, but the importance of non-verbal communication is undeniable. We intuitively pick up incredibly subtle hints: a slight tremor in the voice, a bead of sweat on the forehead, a slight turn down of one corner of the mouth, perhaps a foot tapping or a finger trembling, a split-second darting of the eye. All this is subconsciously monitored, fed to the brain and orchestrated into a judgment about a person and what they’re trying to tell us. This is how we evolved to judge whether we should build trust or lose it.

Face to Face vs Face to Screen

Now, we get to the question you knew was coming, “What happens when we have to make these decisions about someone else through a screen rather than face to face?”

Given that we don’t fully understand how the brain responds to people yet, it’s hard to say how much of our ability to judge whether we should convey trust or withhold it is impaired by screen-to-screen communication. My guess is that the impairment is significant, probably well over 50%. It’s difficult to test this in a laboratory setting, given that it generally requires some type of neuroimaging, such as an fMRI scanner. In order to present a stimulus for the brain to respond to when the subject is strapped in, a screen is really the only option. But common sense tells me – given the sophisticated and orchestrated nature of our brain’s social responses – that a lot is lost in translation from a real-world encounter to a screen recording.

New Faces vs Old Ones

If we think of how our brains respond to faces, we realize that in today’s world, a lot of our social judgements are increasingly made without face-to-face encounters. When we know someone, we pull forward a snapshot of our entire history with that person. The current communication is just another data point in a rich collection of interpersonal experience. One would think that would substantially increase our odds of making a valid judgement.

But what if we must make a judgement about someone we’ve never met before, and have only seen through a screen – be it a TikTok post, an Instagram Reel, a YouTube video or a Facebook post? What if we have to decide whether to believe an influencer when making an important life decision? Are we willing to rely on a fraction of our brain’s capacity when deciding whether to place trust in someone we’ve never met?

Paging Dr. Robot

When it comes to the benefits of A.I., one of the most intriguing opportunities is in healthcare. Microsoft recently announced that, in a diagnostic challenge pitting its Microsoft AI Diagnostic Orchestrator (MAI-DxO) against 21 general-practice physicians, the A.I. system correctly diagnosed 85% of 300 challenging cases gathered from the New England Journal of Medicine. The human doctors only managed to get 20% of the diagnoses correct.

This is of particular interest to me, because Canada has a healthcare problem. In a recent comparison of international health policies conducted by the Commonwealth Fund, Canada came in last on most key measures of timely access among 9 countries, most of which also have universal health care.

This is a big problem, but it’s not an unsolvable one. This does not qualify as a “wicked” problem, which I’ve talked about before. Wicked problems have no clear solution. I believe our healthcare problems can be solved, and A.I. could play a huge role in the solution.

The Canadian Medical Association has outlined both the problems facing our healthcare system and some potential solutions. The overarching narrative is one of a system stretched beyond its resources and patients unable to access care in a timely manner. Healthcare workers are burnt out and demotivated. Our back-end health record systems are siloed and inconsistent. An aging population, health misinformation, political beliefs and climate change are creating more demand for health services just as the supply of those services is being depleted.

Here’s one personal example of the gaps in our own health records. I recently had to go to my family doctor for a physical that is required to maintain my commercial driver’s license. Because it was a very routine check-up, I was handed off to a student doctor. Since I was seeing a doctor anyway, I thought it a good time to ask for a regular blood panel, as it had been a while since I’d had one. Being a male of a certain age, I also asked for a Prostate-Specific Antigen (PSA) test and was told that it isn’t recommended as a screening test in my province anymore.

I was taken aback. I had been diagnosed with prostate cancer a decade earlier and had been successfully treated for it. It was a PSA test that led to an early diagnosis. I mentioned this to the doctor, who was sitting behind a computer screen with my records in front of him. He looked back at the screen and said, “Oh, you had prostate cancer? I didn’t know that. Sure, I’ll add a PSA to the requisition.”

I wish I could say that’s an isolated incident, but it’s not. These gaps in our medical records happen all the time here in my part of Canada. And they can all be solved. Aggregating and analyzing data at a scale beyond what humans can handle is exactly what A.I. excels at. Yet our healthcare system continues to overwork exhausted healthcare providers and keep our personal health data hostage in siloed data centers because of systemic resistance to technology. I know there are concerns, but surely those concerns can be addressed.

I write this from a Canadian perspective, but I know these problems – and others – exist in the U.S. as well. If A.I. can do certain jobs four times better than a human, it’s time to accept that and build it into our healthcare system. The answers to Canada’s healthcare problems may not be easy, but they are doable: integrate our existing health records, open the door to incorporating personal biometric data from new wearable devices, use A.I. to analyze all of this, and use humans where they can do things A.I. and technology can’t.

We need to start opening our minds to new solutions, because when it comes to a broken healthcare system, it’s literally a matter of life and death.

I’m not a Doctor, But I Play One on Social Media

Step 1. You have a cough.
Step 2. You Google it.
Step 3. You spend 3 hours learning about a rare condition you have never heard of before today but are now convinced you have.

We all joke about Doctor Google. The health anxiety business is booming, thanks to online diagnostic tools that convince us we have a rare disease that affects about 0.002% of the population.

If you end up on WebMD, at least they suggest talking to a doctor. But there’s another source of medical information that offers no such caveats – social media influencers.

As healthcare becomes an increasingly for-profit business, there is a new band of influencers promoting dubious tests and procedures because there is a financial incentive to do so. They are also offering their decidedly non-expert opinions on important health practices such as vaccination. Unfortunately, people are listening.

During Covid, we saw how social media fostered antipathy towards vaccinations and public health measures such as wearing face masks. These posts ran counter to the best advice coming from trusted health authorities and created a distrust in science. But that misinformation campaign didn’t stop when the worst of Covid was over. It continues to influence many of us today.

Take the recent measles outbreak in Texas. As of this writing, the outbreak has grown to over 250 cases and 2 deaths. Measles cases across the US have already surpassed the total for all of 2024. Vaccination rates for children in the US seem stuck at around 90%, and have been for a while. This is below the 95% vaccination rate required to stop the spread of measles.

One of the reasons is a group of social media influencers who have targeted women and spread the false impression that they’re being “bad moms” if they allow their children to be vaccinated. According to a study by the University of Washington, these posts often include a link to an unproven “natural” or homeopathic remedy sold through an affiliate program or multi-level marketing campaign.

Measles was something the medical community considered eliminated in North America in 2000. But it has resurfaced thanks to misinformation spread through social media. And that’s tragic. The first child to die in the most recent outbreak was the first measles-related fatality in America in 10 years. The child was otherwise healthy. It didn’t have to happen.

It’s not just measles. There is an army of social media influencers all hawking dubious tests, treatments and tinctures for profit. None of them have the slightest clue what they’re talking about. They have no medical training. They do – however – know how to market themselves and how to capitalize on a mistrust of the medical system by spreading misinformation for monetary gain.

A recently published study looked at the impact of social media influencers dispensing uneducated medical advice. The authors warned, “alarming evidence suggests widespread dissemination of health-related content by individuals lacking the requisite expertise, often driven by commercial rather than public health interests.”

Another study looked at 1,000 posts by influencers with a combined audience of 194 million followers. The posts were promoting medical tests including full-body MRI scans, genetic screening for early detection of cancer, blood tests for testosterone levels, the anti-Müllerian hormone test and a gut microbiome test. Eighty-five percent of the posts touted benefits without mentioning any risks. They also failed to mention the limited usefulness of these tests. Lead study author Brooke Nickel said, “These tests are controversial, as they all lack evidence of net benefit for healthy people and can lead to harms including overdiagnosis and overuse of the medical system. If information about medical tests on social media sounds too good to be true, it probably is.”

Social media misinformation is at epidemic levels. And – in the case of medical information – it can sometimes be a matter of life and death.

Strategies for Surviving the News

When I started this post, I was going to unpack some of the psychology behind how we consume the news. I soon realized the topic was far too big to deal with realistically within the confines of this post. So I narrowed my focus to a question that has been very top of mind for me lately: how do you stay informed without becoming a trembling psychotic mess? How do you arm yourself for informed action rather than being paralyzed into inaction by the recent fire hose of sheer WTF insanity that makes up the average news feed?

Pick Your Battles

There are few things more debilitating to humans than fretting about things we can’t do anything about. Research has found a strong correlation between depression and an external locus of control – locus of control being the term psychologists use for how much we believe we can directly influence what happens to us. There is actually a term for being so crushed by bad news that the world starts to seem more threatening than it really is and you lose the perspective needed to function in your own environment. It’s called Mean World Syndrome.

If effecting change is your goal, decide what is realistically within your scope of control. Then focus your information gathering on those specific things. When it comes to informing yourself to become a better change agent, going deep rather than wide might be a better strategy.

Be Deliberate about Your Information Gathering

The second strategy goes hand in hand with the first. Make sure you’re in the right frame of mind to gather information. There are two ways the brain processes information: top-down and bottom-up. Top-down processing is cognition with purpose – you have set an intent and you’re working to achieve specific goals. Bottom-up processing is passively being exposed to random information and allowing your brain to be stimulated by it. The way you interpret the news will be greatly impacted by whether you’re processing it with “top-down” intent or letting your brain parse it from the “bottom up.”

By being more deliberate in gathering information with a specific intent in mind, you completely change how your brain processes the news. It will instantly put the news in a context related to your goal rather than letting it rampage through your brain, triggering primordial anxiety circuits.

Understand the Difference between Signal and Noise

Based on the first two strategies, you’ve probably already guessed that I’m not a big fan of relying on social media as an information source. And you’re right. A brain doom-scrolling through a social media feed is not a brain primed to objectively process the news.

Here is what I did. For the broad context, I picked two international information sources I trust to be objective: The New York Times and The Economist out of the U.K. I subscribed to both because I wanted sources that weren’t totally reliant on advertising as a revenue source (a toxic disease that is killing true journalism). For Americans, I would highly recommend picking at least one source outside the US to counteract the polarized echo chamber that typifies US journalism, especially the completely ad-supported variety.

Depending on your objectives, include sources that are relevant to those objectives. If local change is your goal, make sure you are informed about your community. With those bases in place, even if you get sucked down a doom-scrolling rabbit hole, at least you’ll have a better context to help you separate signal from noise.

Put the Screen Down

I realize that the majority of people (about 54% of US adults, according to Pew Research) will simply ignore all of the above and continue to be informed through their Facebook or X feeds. I can’t really change that.

But for the few of you out there that are concerned about the direction the world seems to be spinning and want to filter and curate your information sources to effect some real change, these strategies may be helpful.

For my part, I’m going to try to be much more deliberate in how I find and consume the news. I’m also going to be more disciplined about simply ignoring the news when I’m not actively looking for it. Taking a walk in the woods and interacting with a real person are two things I’m going to try to do more.

Dove’s Takedown Of AI: Brilliant But Troubling Brand Marketing

The Dove brand has just placed a substantial stake in the battleground over the use of AI in media. In a campaign called “Keep Beauty Real”, the brand released a 2-minute video showing how AI can create an unattainable and highly biased (read “white”) view of what beauty is.

If we’re talking branding strategy, this campaign is a master class. It’s totally on-brand for Dove, which introduced its “Campaign for Real Beauty” 20 years ago. Since then, the company has consistently fought digital manipulation of advertising images, promoted positive body image and reminded us that beauty can come in all shapes, sizes and colors. The video itself is brilliant. You really should take a couple of minutes to watch it if you haven’t already.

But what I found just as interesting is that Dove chose to use AI as a brand differentiator. The video starts by telling us, “By 2025, artificial intelligence is predicted to generate 90% of online content.” It wraps up with a promise: “Dove will never use AI to create or distort women’s images.”

This makes complete sense for Dove. It aligns perfectly with its brand. But it can only work because AI now has what psychologists call emotional valency. And that has a number of interesting implications for our future relationship with AI.

“Hot Button” Branding

Emotional valency is just a fancy way of saying that a thing means something to someone. The valence can be positive or negative. The term valence comes from the German word Valenz, which means to bind. So, if something has valency, it’s carrying emotional baggage, either good or bad.

This is important because emotions allow us to — in the words of Nobel laureate Daniel Kahneman — “think fast.” We make decisions without really thinking about them at all. It is the opposite of rational and objective thinking, or what Kahneman calls “thinking slow.”

Brands are all about emotional valency. The whole point of branding is to create a positive valence attached to a brand. Marketers don’t want consumers to think. They just want them to feel something positive when they hear or see the brand.

So for Dove to pick AI as an emotional hot button to attach to its brand, it must believe that the negative valence of AI will add to the positive valence of the Dove brand. That’s how branding mathematics sometimes work: a negative added to a positive may not equal zero, but may equal 2 — or more. Dove is gambling that with its target audience, the math will work as intended.

I have nothing against Dove, as I think the points it raises about AI are valid — but here’s the issue I have with using AI as a brand reference point: It reduces a very complex issue to a knee-jerk reaction. We need to be thinking more about AI, not less. The consumer marketplace is not the right place to have a debate on AI. It will become an emotional pissing match, not an intellectually informed analysis. And to explain why I feel this way, I’ll use another example: GMOs.

How Do You Feel About GMOs?

If you walk down the produce or meat aisle of any grocery store, I guarantee you’re going to see a “GMO-Free” label. You’ll probably see several. This is another example of squeezing a complex issue into an emotional hot button in order to sell more stuff.

As soon as I mentioned GMOs, you had a reaction, and it was probably negative. But how much do you really know about GMO foods? Did you know that GMO stands for “genetically modified organism”? I didn’t, until I just looked it up now. Did you know that you almost certainly eat foods that contain GMOs, even if you try to avoid them? If you eat anything with sugar harvested from sugar beets, you’re eating GMOs. And over 90% of all canola, corn and soybean crops are genetically modified.

Further, did you know that genetic modifications can make plants more resistant to disease, more stable in storage and more likely to grow in marginal agricultural areas? If it weren’t for GMOs, a significant portion of the world’s population would have starved by now. A 2022 study suggests that GMO foods could even slow climate change by reducing greenhouse gas emissions.

If you do your research on GMOs — if you “think slow” about them — you’ll realize that there is a lot to think about, both good and bad. For all the positives I mentioned before, there are at least an equal number of troubling things about GMOs. There is no easy answer to the question, “Are GMOs good or bad?”

But by bringing GMOs into the consumer world, marketers have shut that debate down. They are telling you, “GMOs are bad. And even though you consume GMOs by the shovelful without even realizing it, we’re going to slap some GMO-free labels on things so you will buy them and feel good about saving yourself and the planet.”

AI appears to be headed down the same path. And if GMOs are complex, AI is exponentially more so. Yes, there are things about AI we should be concerned about. But there are also things we should be excited about. AI will be instrumental in tackling the many issues we currently face.

I can’t help worrying when complex issues like AI and GMOs are painted with the same broad brush, especially when that brush is in the hands of a marketer.

Feature image: Body Scan 002 by Ignotus the Mage, used under CC BY-NC-SA 2.0 / Unmodified