Clear, Simple…and Wrong

For every complex problem there is an answer that is clear, simple, and wrong
– H. L. Mencken

We live in a world of complex problems. And – increasingly – we long for simple solutions to those problems. Brexit was a simple answer to a complex problem. Trump’s border wall is a simple answer to a complex problem. The current wave of populism is being driven by the desire for simple answers to complex problems.

But, as H.L. Mencken said, all those answers are wrong.

Even philosophers – who are a pretty complex breed – have embraced the principle of simplicity. William of Ockham, a 14th-century Franciscan friar who studied logic, wrote “Entia non sunt multiplicanda praeter necessitatem.” This translates as “More things should not be used than are necessary.” The principle has since been called “Occam’s Razor.” In scientific research, it’s known as the principle of parsimony.

But Occam’s Razor illustrates a shortcoming of humans. We will look for the simplest solution even if it isn’t the right solution. We forget the “are necessary” part of the principle. The Wikipedia entry for Occam’s Razor includes this caveat: “Occam’s razor only applies when the simple explanation and complex explanation both work equally well. If a more complex explanation does a better job than a simpler one, then you should use the complex explanation.”

This introduces a problem for humans. Simple answers are usually easier for us. People can grasp them more easily. Given a choice between complex and simple, we almost always default to the simple. For most of our history, this has not been a bad strategy. When all the factors that determine our likelihood of survival are proximate and intending to eat you, simple and fast is almost always the right bet.

But then we humans went and built a complex world. We started connecting things together into extended networks. We exponentially introduced dependencies. Through our ingenuity, we transformed our environments and, in the process, made complexity the rule rather than the exception. Unfortunately, our brains didn’t keep up. They still operate as if our biggest concerns were to find food and to avoid becoming food.

Our brains are causal inference machines. We assign cause and effect without bothering to determine if we are right.  We are hardwired to go for simple answers. When the world was a pretty simple place, the payoff for cognitively crunching complex questions wasn’t worth it. But that’s no longer the case. And when we mistake correlation for causation, the consequences can be tragic.

Let’s go back to the example of Trump’s Wall. I don’t question that illegal (or legal, for that matter) immigration causes pressures in a society. That’s perfectly natural, no matter where those immigrants are coming from. But it’s also a dynamic and complex problem. There are a myriad of interleaved and inter-dependent factors underlying the visible issue. If we don’t take the time to understand those dynamics of complexity, a simple solution – like a wall – could unleash forces that have drastic and unintended consequences. Even worse, thanks to the nature of complexity, those consequences can be amplified throughout a network.

Simple answers can also provide a false hope that keeps us from digging deeper for the true nature of the problem. They let us fall into the trap of “one and done” thinking. Why hurt our heads thinking about complex issues when we can put a checkmark beside an item on our to-do list and move on to the next one?

According to Ian McKenzie, this predilection for simplicity is also rotting away the creative core of advertising. In an essay he posted on Medium, he points to a backlash against digital because of its complexity: “Digital is complex. And because the simplicity bias says complicated is bad, digital and data are bad by association. And this can cause smart people trained in traditional thinking to avoid or tamp down digital ideas and tactics because they appear to be at odds with the simplicity dogma.”

Like it or not, we ignore complexity at our peril. As David Krakauer, President of the Santa Fe Institute and William H. Miller Professor of Complex Systems warned, “There is only one Earth and we shall never improve it by acting as if life upon it were simple. Complex systems will not allow it.”

 

Less Tech = Fewer Regrets

In a tech-ubiquitous world, I fear our reality is becoming more “tech” and less “world.” But how do you fight that? Well, if you’re Kendall Marianacci – a recent college grad – you ditch your phone and move to Nepal. In the process, she learned that “paying attention to the life in front of you opens a new world.”

In a recent post, she reflected on lessons learned by truly getting off the grid:

“Not having any distractions of a phone and being immersed in this different world, I had to pay more attention to my surroundings. I took walks every day just to explore. I went out of my way to meet new people and ask them questions about their lives. When this became the norm, I realized I was living for one of the first times of my life. I was not in my own head distracted by where I was going and what I needed to do. I was just being. I was present and welcoming to the moment. I was compassionate and throwing myself into life with whoever was around me.”

It’s sad and a little shocking that we have to go to such extremes to realize how much of our world can be obscured by a little 5-inch screen. Where did the tech that was supposed to make our lives better go off the rails? And was the derailment intentional?

“Absolutely,” says Jesse Weaver, a product designer. In a post on Medium.com, he lays out – in alarming terms – our tech dependency and the trade-off we’re agreeing to:

“The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility.”

But are they? Yes, there is utility here, but it’s wrapped in a thick layer of addiction. What product designers are really good at is fostering addiction by dangling a carrot of utility. And, as Weaver points out, we often mistake utility for empowerment:

“Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.”

That’s not just Weaver’s opinion. A new study from HumaneTech.com backs it up with empirical evidence. They partnered with Moment, a screen time tracking app, “to ask how much screen time in apps left people feeling happy, and how much time left them in regret.”

According to 200,000 iPhone users, here are the apps that make people happiest:

  1. Calm
  2. Google Calendar
  3. Headspace
  4. Insight Timer
  5. The Weather
  6. MyFitnessPal
  7. Audible
  8. Waze
  9. Amazon Music
  10. Podcasts

That’s three meditative apps, three utilitarian apps, one fitness app, one entertainment app and two apps that help you broaden your intellectual horizons. If you are talking human empowerment – according to Weaver’s definition – you could do a lot worse than this roundup.

But here were the apps that left their users with a feeling of regret:

  1. Grindr
  2. Candy Crush Saga
  3. Facebook
  4. WeChat
  5. Candy Crush
  6. Reddit
  7. Tweetbot
  8. Weibo
  9. Tinder
  10. Subway Surf

Even more interesting is the average time spent in these apps. For the first group, the average daily usage was 9 minutes. For the regret group, the average daily time spent was 57 minutes! We feel better about apps that do their job, add something to our lives and then let us get on with living that life. What we hate are time sucks that may offer a kernel of functionality wrapped in an interface that ensnares us like a digital spider web.

This study comes from the Center for Humane Technology, headed by ex-Googler Tristan Harris. The goal of the Center is to encourage designers and developers to create apps that move “away from technology that extracts attention and erodes society, towards technology that protects our minds and replenishes society.”

That all sounds great, but what does it really mean for you and me and everybody else that hasn’t moved to Nepal? It all depends on what revenue model is driving development of these apps and platforms. If it is anything that depends on advertising – in any form – don’t count on any nobly intentioned shifts in design direction anytime soon. More likely, it will mean some half-hearted placations like Apple’s new Screen Time warning that pops up on your phone every Sunday, giving you the illusion of control over your behaviour.

Why an illusion? Because things like Apple’s Screen Time are great for our prefrontal cortex, the intent-driven part of our rational brain that puts our best intentions forward. They’re not so good for our lizard brain, which subconsciously drives us to play Candy Crush and swipe our way through Tinder. And when it comes to addiction, the lizard brain has been on a winning streak for most of the history of mankind. I don’t like our odds.

The developers’ escape hatch is always the same: they’re giving us control. It’s our own choice, and freedom of choice is always a good thing. But there is an unstated deception here. It’s the same lie that Mark Zuckerberg told last Wednesday when he laid out the privacy-focused future of Facebook. He’s putting us in control. But he’s not. What he’s doing is making us feel better about spending more time on Facebook. And that’s exactly the problem. The less we worry about the time we spend on Facebook, the less we will think about it at all. The less we think about it, the more time we will spend. And the more time we spend, the more we will regret it afterwards.

If that doesn’t seem like an addictive cycle, I’m not sure what does.

 

Influencer Marketing’s Downward Ethical Spiral

One of the impacts of our increasing rejection of advertising is that advertisers are becoming sneakier at presenting advertising that doesn’t look like advertising. One example is native advertising. Another is influencer marketing. I’m not a big fan of either. I find native advertising mildly irritating. But I have bigger issues with influencer marketing.

Case in point: Taytum and Oakley Fisher. They’re identical twins, two years old and have 2.4 million followers on Instagram. They are adorable. They’re also expensive. A single branded photo on their feed goes for sums in the five-figure range. Of course, “they” are only two and have no idea what’s going on. This is all being stage managed behind the scenes by their parents, Madison and Kyler.

The Fishers are not an isolated example. According to an article in Fast Company, adorable kids – especially twins – are a hot segment in an influencer market predicted to be worth $5 billion to $10 billion. Influencer management companies like God and Beauty are popping up. In a multi-billion-dollar market, there are a lot of opportunities for everyone to make a quick buck. And the bucks get bigger when the “stars” can actually remember their lines. Here’s a quote from the Fast Company article:

“The Fishers say they still don’t get many brand deals yet, because the girls can’t really follow directions. Once they’re old enough to repeat what their parents (and the brands paying them) want, they could be making even more.”

Am I the only one who finds this carrying a whiff of moral repugnance?

If so, you might say, “what’s the harm?” The audience is obviously there. It works. Taytum and Oakley appear to be having fun, according to their identical grins. It’s just Gord being in a pissy mood again.

Perhaps. But I think there’s more going on here than we see on the typical Instagram feed.

One problem is transparency – or lack of it. Whether you agree with traditional advertising or not, at least it happens in a well-defined and well-lit marketplace. There is transparency into the fundamental exchange: consumer attention for dollars. It is an efficient and time-tested market.  There are metrics in place to measure the effectiveness of this exchange.

But when advertising attempts to present itself as something other than advertising, it slips from a black and white transaction to something lurking in the darkness colored in shades of grey. The whole point of influencer marketing is to make it appear that these people are genuine fans of these products, so much so that they can’t help evangelizing them through their social media feeds. This – of course – is bullshit. Money is paid for each one of these “genuine” tweets or posts. Big money. In some cases, hundreds of thousands of dollars. But that all happens out of sight and out of mind. It’s hidden, and that makes it an easy target for abuse.

But there is more than just a transactional transparency problem here. There is also a moral one. By becoming an influencer, you are actually becoming the influenced – allowing a brand to influence who you are, how you act, what you say and what you believe in. The influencer goes in believing that they are in control and the brand is just coming along for the ride. This is – again – bullshit. The minute you go on the payroll, you begin auctioning off your soul to the highest bidder. Amena Khan and Munroe Bergdorf both discovered this. The two influencers were cut from L’Oreal’s influencer roster for actually tweeting what they believed in.

The façade of influencer marketing is the biggest problem I have with it. It claims to be authentic and it’s about as authentic as pro wrestling – or Mickey Rourke’s face. Influencer marketing depends on creating an impossibly shiny bubble of your life filled with adorable families, exciting getaways, expensive shoes and the perfect soymilk latte. No real life can be lived under this kind of pressure. Influencer marketing claims to be inspirational, but it’s actually aspirational at the basest level. It relies on millions of us lusting after a life that is not real – a life where “all the women are strong, all the men are good-looking, and all the children are above average.”

Or – at least – all the children are named Taytum or Oakley.

 

The Pros and Cons of Slacktivism

Lately, I’ve grown to hate my Facebook feed. But I’m also morbidly fascinated by it. It fuels the fires of my discontent with a steady stream of posts about bone-headedness and sheer WTF behavior.

As it turns out, I’m not alone. Many of us are morally outraged by our social media feeds. But does all that righteous indignation lead to anything?

Last week, MediaPost reran a column talking about how good people can turn bad online by following the path of moral outrage to mob-based violence. Today I ask, is there a silver lining to this behavior? Can the digital tipping point become a force for good, pushing us to take action to right wrongs?

The Ever-Touchier Triggers of Moral Outrage

As I’ve written before, normal things don’t go viral. The more outrageous and morally reprehensible something is, the greater likelihood there is that it will be shared on social media. So the triggering forces of moral outrage are becoming more common and more exaggerated. A study found that in our typical lives, only about 5% of the things we experience are immoral in nature.

But our social media feeds are algorithmically loaded to ensure we are constantly ticked off. This isn’t normal. Nor is it healthy.

The Dropping Cost of Being Outraged

So what do we do when outraged? As it turns out, not much — at least, not when we’re on Facebook.

Yale neuroscientist Molly Crockett studies the emerging world of online morality. And she has found that the personal costs associated with expressing moral outrage are dropping as we move our protests online:

“Offline, people can harm wrongdoers’ reputations through gossip, or directly confront them with verbal sanctions or physical aggression. The latter two methods require more effort and also carry potential physical risks for the punisher. In contrast, people can express outrage online with just a few keystrokes, from the comfort of their bedrooms…”

What Crockett is describing is called slacktivism.

You May Be a Slacktivist if…

A slacktivist, according to Urbandictionary.com, is “one who vigorously posts political propaganda and petitions in an effort to affect change in the world without leaving the comfort of the computer screen.”

If your Facebook feed is at all like mine, it’s probably become choked with numerous examples of slacktivism. It seems like the world has become a more moral — albeit heavily biased — place. This should be a good thing, shouldn’t it?

Warning: Outrage Can be Addictive

The problem is that when morality moves online, it loses a lot of the social clout it has historically had to modify behaviors. Crockett explains:

“When outrage expression moves online it becomes more readily available, requires less effort, and is reinforced on a schedule that maximizes the likelihood of future outrage expression in ways that might divorce the feeling of outrage from its behavioral expression…”

In other words, outrage can become addictive. It’s easier to become outraged when it has no consequences for us and is divorced from the normal societal checks and balances that govern our behavior – and we can get a nice little ego boost when others “like” or “share” our indignant rants. The last point is particularly true given the “echo chamber” characteristics of our social-media bubbles. These are all the prerequisites required to foster habitual behavior.

Outrage Locked Inside its own Echo Chamber

Another thing we have to realize about showing our outrage online is that it’s largely a pointless exercise. We are simply preaching to the choir. As Crockett points out:

“Ideological segregation online prevents the targets of outrage from receiving messages that could induce them (and like-minded others) to change their behavior. For politicized issues, moral disapproval ricochets within echo chambers but only occasionally escapes.”

If we are hoping to change anyone’s behavior by publicly shaming them, we have to realize that Facebook’s algorithms make this highly unlikely.

Still, the question remains: Does all this online indignation serve a useful purpose? Does it push us to action?

The answer seems to depend on two factors, both imposing their own thresholds on our likelihood to act. One is whether we’re truly outraged or not. Because showing outrage online is so easy, with few consequences and the potential social reward of a post going viral, it has all the earmarks of a habit-forming behavior. Are we posting because we’re truly mad, or just bored?

“Just as a habitual snacker eats without feeling hungry, a habitual online shamer might express outrage without actually feeling outraged,” writes Crockett.

Moving from online outrage to physical action – whether it’s changing our own behavior or acting to influence a change in someone else – requires a much bigger personal investment on almost every level. This brings us to the second threshold factor: our own personal experiences and situation. Millions of women upped the ante by actively supporting #MeToo because it was intensely personal for them. It’s one example of an online movement that became one of the most potent political forces in recent memory.

One thing does appear to be true. When it comes to social protest, there is definitely more noise out there. We just need a reliable way to convert that to action.

The Rain in Spain

Olá! Greetings from the soggy Iberian Peninsula. I’ve been in Spain and Portugal for the last three weeks, which has included – count them – 21 days of rain and gale force winds. Weather aside, it’s been amazing. I have spent very little of that time thinking about online media. But, for what they’re worth, here are some random observations from the last three weeks:

The Importance of Familiarity

While here, I’ve been reading Derek Thompson’s book Hit Makers. One of the critical components of a hit is a foundation of familiarity. Once this is in place, a hit provides just enough novelty to tantalize us. It’s why Hollywood studios seem stuck on the superhero sequel cycle.

This was driven home to me as I travelled. I’m a do-it-yourself traveller. I avoid packaged vacations whenever and wherever possible. But there is a price to be paid for this. Every time we buy groceries, take a drive, catch a train, fill up with gas or drive through a tollbooth (especially in Portugal) there is a never-ending series of puzzles to be solved. The fact that I know no Portuguese and very little Spanish makes this even more challenging. I’m always up for a good challenge, but I have to tell you, at the end of three weeks, I’m mentally exhausted. I’ve had more than enough novelty and I’m craving some more familiarity.

This has made me rethink the entire concept of familiarity. Our grooves make us comfortable. They’re the foundations that make us secure enough to explore. It’s no coincidence that the words “family” and “familiar” come from the same etymological root.

The Opposite of Agile Development

While in Seville, we visited the cathedral there. The main altarpiece, which is the largest and one of the finest in the world, was the life’s work of one man, Pierre Dancart. He worked on it for 44 years of his life and never saw the finished product. In total, it took over 80 years to complete.

Think about that for a moment. This man worked on this one piece of art for his entire life. There was no morning where he woke up and wondered, “Hmm, what am I going to do today?” This was it, from the time he was barely more than a teenager until he was an old man. And he still never got to see the completed work. That span of time is amazing to me. If built and finished today, it would have been started in 1936.

The Ubiquitous Screen

I love my smartphone. It has saved my ass more than once on this trip. But I was saddened to see that our preoccupation with being connected has spread into every nook and cranny of European culture. Last night, we went for dinner at a lovely little tapas bar in Lisbon. It was achingly romantic. There was a young German couple next to us who may or may not have been in love. It was difficult to tell, because they spent most of the evening staring at their phones rather than at each other.

I have realized that the word “screen” has many meanings, one of which is a “barrier meant to hide things or divide us.”

El Gordo

Finally, after giving my name in a few places and getting mysterious grins in return, I have realized that “gordo” means “fat” in Spanish and Portuguese.

Make of that what you will.

Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Secondly, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking for example. You know; old-fashioned, face-to-face, sharing the same physical space communication. It’s the one channel that most demands undivided attention. But when is the last time you had a conversation where you were giving it 100 percent of your attention? I actually had one this past week, and I have to tell you, it unnerved me. I was meeting with a museum curator and she immediately locked eyes on me and gave me the full breadth of her attention span. I faltered. I couldn’t hold her gaze. As I talked I scanned the room we were in. It’s probably been years since someone did that to me. And nary a smart phone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version – with all due respect to Eartha Kitt and Lee Meriwether). My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That means that 1.48 adults per household are definitely dividing their attention amongst at least two devices while watching Game of Thrones. My daughters and wife are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.
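That 1.48 figure is just the two quoted numbers divided, a back-of-envelope check using the article’s own figures rather than any fresh data:

```python
# Back-of-envelope check of the second-screen figures quoted above.
second_screen_users = 177_000_000  # Americans using a second screen while watching TV
tv_households = 120_000_000        # Nielsen's count of US TV households

ratio = second_screen_users / tv_households
print(ratio)  # 1.475 -- about 1.48 second-screeners per TV household
```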

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coalmine. We simply don’t allocate undivided attention to anything anymore. We think we’re multi-tasking, but that’s a myth. We don’t multi-task – we mentally fidget. We have the average attention span of a gnat.

So, what is the price we’re paying for living in this attention-deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already stated how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting to each other. Research shows that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention to us. Effective communication requires an adequate allocation of attention on both ends; otherwise it spins into a downward spiral.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphones and check them. We’re paying a price for our mythical multitasking. Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one line, then write down the numbers from 1 to 20 on the other. Next, repeat the same exercise, but this time alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second and so on. What’s your time? It will probably be double what it was the first time.
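Napier’s pen-and-paper test can be mimicked with a toy model in which each written character is one step and every switch between tasks adds a fixed penalty. The switch cost of one time unit per step is an illustrative assumption, not a measured value, but it reproduces the roughly doubled completion time:

```python
# Toy model of the two-line multitasking test described above.
# Each character or number written is one "step"; switching tasks costs extra.
def total_time(steps_a: int, steps_b: int, switch_cost: float, interleaved: bool) -> float:
    if not interleaved:
        # Finish all of task A, then all of task B: a single switch, ignored here.
        return steps_a + steps_b
    # Alternate A-B-A-B...: a context switch before nearly every step.
    switches = steps_a + steps_b - 1
    return steps_a + steps_b + switches * switch_cost

print(total_time(20, 20, switch_cost=1.0, interleaved=False))  # 40
print(total_time(20, 20, switch_cost=1.0, interleaved=True))   # 79.0 -- nearly double
```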

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use it. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”
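Harris’s lever-pull mechanism is easy to simulate. The sketch below uses an arbitrary 30% payout probability (an assumption for illustration, not a figure from any study) to show how a variable-ratio schedule delivers rewards at unpredictable intervals:

```python
import random

def pull(rng: random.Random, reward_probability: float = 0.3) -> bool:
    """One lever pull (or phone check): the reward arrives unpredictably."""
    return rng.random() < reward_probability

def session(pulls: int = 20, seed: int = 1) -> list:
    """Simulate a run of pulls with a fixed seed so the run is repeatable."""
    rng = random.Random(seed)
    return [pull(rng) for _ in range(pulls)]

# The reward pattern is irregular -- you can never predict the next payoff,
# which is exactly the property that maximizes the urge to pull again.
print("".join("$" if hit else "." for hit in session()))
```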

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – makes your smartphone as addictive as a slot machine.

I’m sorry, but I’m no match for all of that.

Bias, Bug or Feature?

When we talk about artificial intelligence, I think of a real-time Venn diagram in motion. One circle is the sphere of all human activity. It is huge. The other is the sphere of artificially intelligent activity. It’s growing exponentially. And the overlap between the two is also expanding at the same rate. It’s this intersection between the two spheres that fascinates me. What are the rules that govern the interplay between humans and machines?

Those rules necessarily depend on what the nature of the interplay is. For the sake of this column, let’s focus on the researchers and developers who are trying to make machines act more like humans. Take Jibo, for example. Jibo is “the first social robot for the home.” Jibo tells jokes, answers questions, understands nuanced language and recognizes your face. It’s just one more example of artificial intelligence that’s intended to be a human companion. And as we build machines that are more human, what we’re finding is that many of the things we thought were human foibles are actually features that developed for reasons that were at one time perfectly valid.

Trevor Paglen is a winner of the MacArthur Genius Grant. His latest project is to see what AI sees when it looks at us: “What are artificial intelligence systems actually seeing when they see the world?” What is interesting is that when machines see the world, they use machine-like reasoning to make sense of it. For example, Paglen fed hundreds of images of fellow artist Hito Steyerl into a face-analyzing algorithm. In one instance, she was evaluated as “74% female.”

This highlights a fundamental difference in how machines and humans see the world. Machines calculate probabilities. So do we, but that happens behind the scenes, and it’s only part of how we understand the world. Operating a level higher than that, we use meta-signatures – categorization, for example – to quickly compartmentalize and understand the world. We would know immediately that Hito was a woman. We wouldn’t have to crunch the probabilities. By the way, we do the same thing with race.
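That difference fits in a couple of lines. The 74% score is the one the article reports from Paglen’s experiment; the thresholding function is a hypothetical stand-in for human-style snap categorization, not the algorithm he actually used:

```python
# A machine reports a probability; human-style categorization snaps to a label.
def machine_view(score: float) -> str:
    return f"{score:.0%} female"  # the kind of output the face-analyzing algorithm gave

def human_view(score: float, threshold: float = 0.5) -> str:
    return "female" if score >= threshold else "male"  # instant compartmentalizing

print(machine_view(0.74))  # 74% female
print(human_view(0.74))    # female
```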

But is this a feature or a bug? Paglen has his opinion: “I would argue that racism, for example, is a feature of machine learning—it’s not a bug,” he says. “That’s what you’re trying to do: you’re trying to differentiate between people based on metadata signatures and race is like the biggest metadata signature around. You’re not going to get that out of the system.”

Whether we like it or not, our inherent racism was a useful feature many thousands of years ago. It made us naturally wary of other tribes competing for the same natural resources. As much as it’s abhorrent to most of us now, it’s still a feature that we can’t “get out of the system.”

This highlights a danger in this overlap area between humans and machines. If we want machines to think as we do, we’re going to have to equip them with some of our biases. As I’ve mentioned before, there are some things that humans do well – or, at least, that we do better than machines. And there are things machines do infinitely better than we do. Perhaps we shouldn’t try to merge the two. If we’re trying to get machines to do what humans do, are we prepared to program racism, misogyny, intolerance, bias and greed into the operating system? All these things are part of being human, whether we like to admit it or not.

But there are other areas that are rapidly falling into the overlap zone of my imaginary Venn diagram. Take business strategy, for example. A recent study from Capgemini showed that 79% of organizations implementing AI feel it’s bringing new insights and better data analysis, 74% feel it makes their organizations more creative, and 71% feel it’s helping them make better management decisions. A friend of mine recently brought this to my attention along with what was, for him, an uncharacteristic rant: “I really would’ve hoped senior executives might’ve thought creativity and better management decisions were THEIR GODDAMN JOB and not be so excited about being able to offload those dreary functions to AI’s which are guaranteed to be imbued with the biases of their creators or, even worse, unintended biases resulting from bad data or any of the untold messy parts of life that can’t be cleanly digitized.”

My friend hit the proverbial nail on the proverbial head – those “untold messy parts of life” are the things we have evolved to deal with, and the ways we deal with them are not always admirable. But in the adaptive landscape we came from, they were proven to work. We still carry that baggage with us. But is it right to transfer that baggage to algorithms in order to make them more human? Or should we be aiming for a blank slate?