The Pros and Cons of Slacktivism

Lately, I’ve grown to hate my Facebook feed. But I’m also morbidly fascinated by it. It fuels the fires of my discontent with a steady stream of posts about bone-headedness and sheer WTF behavior.

As it turns out, I’m not alone. Many of us are morally outraged by our social media feeds. But does all that righteous indignation lead to anything?

Last week, MediaPost reran a column talking about how good people can turn bad online by following the path of moral outrage to mob-based violence. Today I ask, is there a silver lining to this behavior? Can the digital tipping point become a force for good, pushing us to take action to right wrongs?

The Ever-Touchier Triggers of Moral Outrage

As I’ve written before, normal things don’t go viral. The more outrageous and morally reprehensible something is, the greater the likelihood that it will be shared on social media. So the triggers of moral outrage are becoming both more common and more exaggerated. One study found that in our day-to-day lives, only about 5% of the things we experience are immoral in nature.

But our social media feeds are algorithmically loaded to ensure we are constantly ticked off. This isn’t normal. Nor is it healthy.

The Dropping Cost of Being Outraged

So what do we do when outraged? As it turns out, not much — at least, not when we’re on Facebook.

Yale neuroscientist Molly Crockett studies the emerging world of online morality. She has found that the personal costs associated with expressing moral outrage are dropping as we move our protests online:

“Offline, people can harm wrongdoers’ reputations through gossip, or directly confront them with verbal sanctions or physical aggression. The latter two methods require more effort and also carry potential physical risks for the punisher. In contrast, people can express outrage online with just a few keystrokes, from the comfort of their bedrooms…”

What Crockett is describing is called slacktivism.

You May Be a Slacktivist if…

A slacktivist, according to Urbandictionary.com, is “one who vigorously posts political propaganda and petitions in an effort to affect change in the world without leaving the comfort of the computer screen.”

If your Facebook feed is at all like mine, it’s probably become choked with numerous examples of slacktivism. It seems like the world has become a more moral — albeit heavily biased — place. This should be a good thing, shouldn’t it?

Warning: Outrage Can be Addictive

The problem is that when morality moves online, it loses much of the social clout it has historically had to modify behavior. Crockett explains:

“When outrage expression moves online it becomes more readily available, requires less effort, and is reinforced on a schedule that maximizes the likelihood of future outrage expression in ways that might divorce the feeling of outrage from its behavioral expression…”

In other words, outrage can become addictive. It’s easier to be outraged when it carries no consequences for us, when it’s divorced from the normal societal checks and balances that govern our behavior, and when we get a nice little ego boost every time others “like” or “share” our indignant rants. That last point is particularly true given the “echo chamber” characteristics of our social-media bubbles. These are all the prerequisites for habitual behavior.

Outrage Locked Inside its own Echo Chamber

Another thing we have to realize about showing our outrage online is that it’s largely a pointless exercise. We are simply preaching to the choir. As Crockett points out:

“Ideological segregation online prevents the targets of outrage from receiving messages that could induce them (and like-minded others) to change their behavior. For politicized issues, moral disapproval ricochets within echo chambers but only occasionally escapes.”

If we are hoping to change anyone’s behavior by publicly shaming them, we have to realize that Facebook’s algorithms make this highly unlikely.

Still, the question remains: Does all this online indignation serve a useful purpose? Does it push us to action?

The answer seems to depend on two factors, each imposing its own threshold on our likelihood to act. The first is whether we’re truly outraged or not. Because showing outrage online is so easy, with few consequences and the potential social reward of a post going viral, it has all the earmarks of habit-forming behavior. Are we posting because we’re truly mad, or just bored?

“Just as a habitual snacker eats without feeling hungry, a habitual online shamer might express outrage without actually feeling outraged,” writes Crockett.

Moving from online outrage to physical action – whether it’s changing our own behavior or acting to change someone else’s – requires a much bigger personal investment on almost every level. This brings us to the second threshold factor: our own personal experiences and situation. Millions of women upped the ante by actively supporting #MeToo because it was intensely personal for them. It’s one example of an online movement that became one of the most potent political forces in recent memory.

One thing does appear to be true. When it comes to social protest, there is definitely more noise out there. We just need a reliable way to convert that to action.

The Rain in Spain

Olá! Greetings from the soggy Iberian Peninsula. I’ve been in Spain and Portugal for the last three weeks, which has included – count them – 21 days of rain and gale force winds. Weather aside, it’s been amazing. I have spent very little of that time thinking about online media. But, for what they’re worth, here are some random observations from the last three weeks:

The Importance of Familiarity

While here, I’ve been reading Derek Thompson’s book Hit Makers. One of the critical components of a hit is a foundation of familiarity. Once this is in place, a hit provides just enough novelty to tantalize us. It’s why Hollywood studios seem stuck on the superhero sequel cycle.

This was driven home to me as I travelled. I’m a do-it-yourself traveller. I avoid packaged vacations whenever and wherever possible. But there is a price to be paid for this. Every time we buy groceries, take a drive, catch a train, fill up with gas or drive through a tollbooth (especially in Portugal), there is a never-ending series of puzzles to be solved. The fact that I know no Portuguese and very little Spanish makes this even more challenging. I’m always up for a good challenge, but I have to tell you, at the end of three weeks, I’m mentally exhausted. I’ve had more than enough novelty and I’m craving some more familiarity.

This has made me rethink the entire concept of familiarity. Our grooves make us comfortable. They’re the foundations that make us secure enough to explore. It’s no coincidence that the words “family” and “familiar” come from the same etymological root.

The Opposite of Agile Development

While in Seville, we visited the cathedral there. The main altarpiece, which is the largest and one of the finest in the world, was the life’s work of one man, Pierre Dancart. He worked on it for 44 years of his life and never saw the finished product. In total, it took over 80 years to complete.

Think about that for a moment. This man worked on this one piece of art for his entire life. There was no morning where he woke up and wondered, “Hmm, what am I going to do today?” This was it, from the time he was barely more than a teenager until he was an old man. And he still never got to see the completed work. That span of time is amazing to me. If built and finished today, it would have been started in 1936.

The Ubiquitous Screen

I love my smartphone. It has saved my ass more than once on this trip. But I was saddened to see that our preoccupation with being connected has spread into every nook and cranny of European culture. Last night, we went for dinner at a lovely little tapas bar in Lisbon. It was achingly romantic. There was a young German couple next to us who may or may not have been in love. It was difficult to tell, because they spent most of the evening staring at their phones rather than at each other.

I have realized that the word “screen” has many meanings, one of which is a “barrier meant to hide things or divide us.”

El Gordo

Finally, after giving my name in a few places and getting mysterious grins in return, I have realized that “gordo” means “fat” in Spanish and Portuguese.

Make of that what you will.

Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Secondly, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking for example. You know; old-fashioned, face-to-face, sharing the same physical space communication. It’s the one channel that most demands undivided attention. But when is the last time you had a conversation where you were giving it 100 percent of your attention? I actually had one this past week, and I have to tell you, it unnerved me. I was meeting with a museum curator and she immediately locked eyes on me and gave me the full breadth of her attention span. I faltered. I couldn’t hold her gaze. As I talked I scanned the room we were in. It’s probably been years since someone did that to me. And nary a smart phone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version – with all due respect to Eartha Kitt and Lee Meriwether). My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That means that 1.48 adults per household are definitely dividing their attention amongst at least two devices while watching Game of Thrones. My daughters and wife are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coalmine. We simply don’t allocate undivided attention to anything anymore. We think we’re multi-tasking, but that’s a myth. We don’t multi-task – we mentally fidget. We have the average attention span of a gnat.

So, what is the price we’re paying for living in this attention-deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already stated how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting with each other. Research shows that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention to us. Effective communication requires an adequate allocation of attention on both ends; otherwise it spins into a downward spiral.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphone and check it. We’re paying a price for our mythical multitasking – Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one, then write down the numbers from 1 to 20 on the other. Next, repeat this same exercise, but this time, alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second and so on. What’s your time? It will probably be double what it was the first time.
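
If you’d rather see the arithmetic than dig out pen and paper, here’s a minimal sketch of the same idea in Python. The per-item time and the switch cost are made-up assumptions – nothing Napier measured – but they show how quickly the switching overhead piles up.

```python
# A toy model (not Napier's actual test) of why interleaving two simple
# tasks takes so much longer. The timings below are illustrative assumptions.

LETTERS = list("Iamagreatmultitasker")   # letters of the sentence, spaces dropped
NUMBERS = list(range(1, 21))             # the numbers 1 to 20

TIME_PER_ITEM = 0.5   # assumed seconds to write one letter or number
SWITCH_COST = 0.4     # assumed seconds lost every time you change tasks

def sequential_time() -> float:
    # Write all the letters, switch once, then write all the numbers.
    items = len(LETTERS) + len(NUMBERS)
    return items * TIME_PER_ITEM + 1 * SWITCH_COST

def interleaved_time() -> float:
    # Alternate letter, number, letter, number... switching on every item.
    items = len(LETTERS) + len(NUMBERS)
    switches = items - 1
    return items * TIME_PER_ITEM + switches * SWITCH_COST

print(f"sequential:  {sequential_time():.1f} seconds")
print(f"interleaved: {interleaved_time():.1f} seconds")
```

Under those assumed numbers, the interleaved version takes roughly three-quarters longer, and every second of the difference is pure switching overhead.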

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use it. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – make your smartphone as addictive as a slot machine.
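
To make the mechanism concrete, here’s a minimal sketch of a variable reward schedule in Python. The 30% payoff rate is purely an illustrative assumption – no platform publishes its real numbers.

```python
import random

# A toy variable-reward schedule: each "check of the phone" is a pull of
# the lever, and the payoff rate below is an illustrative assumption.
REWARD_PROBABILITY = 0.3

def check_phone() -> bool:
    """One pull of the lever: sometimes there's something new, usually not."""
    return random.random() < REWARD_PROBABILITY

random.seed(42)
checks = [check_phone() for _ in range(20)]
print("checks: ", "".join("X" if hit else "." for hit in checks))
print("payoffs:", sum(checks), "out of", len(checks))
```

The payoff rate isn’t really the point; the unpredictability of which check pays off is, and that’s exactly the schedule that keeps us pulling the lever.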

I’m sorry, but I’m no match for all of that.

Bias, Bug or Feature?

When we talk about artificial intelligence, I think of a real-time Venn diagram in motion. One side is the sphere of all human activity. This circle is huge. The other side is the sphere of artificially intelligent activity. It’s growing exponentially. And the overlap between the two is expanding at the same rate. It’s this intersection between the two spheres that fascinates me. What are the rules that govern the interplay between humans and machines?

Those rules necessarily depend on what the nature of the interplay is. For the sake of this column, let’s focus on the researchers and developers who are trying to make machines act more like humans. Take Jibo, for example. Jibo is “the first social robot for the home.” Jibo tells jokes, answers questions, understands nuanced language and recognizes your face. It’s just one more example of artificial intelligence that’s intended to be a human companion. And as we build machines that are more human, we’re finding that many of the things we thought were human foibles are actually features that developed for reasons that were at one time perfectly valid.

Trevor Paglen is a winner of the MacArthur Genius Grant. His latest project is to see what AI sees when it’s looking at us: “What are artificial intelligence systems actually seeing when they see the world?” What is interesting about this is that when machines see the world, they use machine-like reasoning to make sense of it. For example, Paglen fed hundreds of images of fellow artist Hito Steyerl into a face-analyzing algorithm. In one instance, she was evaluated as “74% female”.

This highlights a fundamental difference in how machines and humans see the world. Machines calculate probabilities. So do we, but that happens behind the scenes, and it’s only part of how we understand the world. Operating at a level above that, we use meta-signatures – categorization, for example – to quickly compartmentalize and understand the world. We would know immediately that Hito was a woman. We wouldn’t have to crunch the probabilities. By the way, we do the same thing with race.
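
A hypothetical snippet makes the contrast plain. The 0.74 score and both helper functions below are invented for illustration – they’re not from Paglen’s project or any real face-analysis API.

```python
# Machines report probabilities; human-style judgement snaps to a category.
# The score below is hypothetical.

def machine_view(score_female: float) -> str:
    # A classifier hands back a probability...
    return f"{score_female:.0%} female"

def human_view(score_female: float) -> str:
    # ...while categorical judgement is instant (and, as noted above,
    # carries our own biases with it).
    return "woman" if score_female >= 0.5 else "man"

score = 0.74
print(machine_view(score))   # 74% female
print(human_view(score))     # woman
```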

But is this a feature or a bug? Paglen has his opinion: “I would argue that racism, for example, is a feature of machine learning—it’s not a bug,” he says. “That’s what you’re trying to do: you’re trying to differentiate between people based on metadata signatures and race is like the biggest metadata signature around. You’re not going to get that out of the system.”

Whether we like it or not, our inherent racism was a useful feature many thousands of years ago. It made us naturally wary of other tribes competing for the same natural resources. As much as it’s abhorrent to most of us now, it’s still a feature that we can’t “get out of the system.”

This highlights a danger in the overlap area between humans and machines. If we want machines to think as we do, we’re going to have to equip them with some of our biases. As I’ve mentioned before, there are some things that humans do well – or, at least, that we do better than machines. And there are things machines do infinitely better than we do. Perhaps we shouldn’t try to merge the two. If we’re trying to get machines to do what humans do, are we prepared to program racism, misogyny, intolerance, bias and greed into the operating system? All these things are part of being human, whether we like to admit it or not.

But there are other areas that are rapidly falling into the overlap zone of my imaginary Venn diagram. Take business strategy, for example. A recent study from Capgemini showed that 79% of organizations implementing AI feel it’s bringing new insights and better data analysis, 74% feel it makes their organizations more creative, and 71% feel it’s helping them make better management decisions. A friend of mine recently brought this to my attention along with what was, for him, an uncharacteristic rant: “I really would’ve hoped senior executives might’ve thought creativity and better management decisions were THEIR GODDAMN JOB and not be so excited about being able to offload those dreary functions to AI’s which are guaranteed to be imbued with the biases of their creators or, even worse, unintended biases resulting from bad data or any of the untold messy parts of life that can’t be cleanly digitized.”

My friend hit the proverbial nail on the proverbial head – those “untold messy parts of life” are the things we have evolved to deal with, and the ways we deal with them are not always admirable. But in the adaptive landscape we all came from, they were proven to work. We still carry that baggage with us. But is it right to transfer that baggage to algorithms in order to make them more human? Or should we be aiming for a blank slate?

Together We Lie

Humans are social animals. We’ve become this way because – evolutionarily speaking – we do better as a group than individually. But there’s a caveat here. If you get a group of usually honest people together, they’re more likely to lie. Why is this?

Martin Kocher and his colleagues at LMU Munich set up a study in which participants watched a video of a single roll of a die and then reported the number that came up. Depending on what they reported, there was a payoff. The researchers compared individuals with small groups whose members could chat anonymously with one another before reporting. The result:

“Our findings are unequivocal: People are less likely to lie if they decide on their own.”

Even individuals who had answered honestly on their own started lying once they were part of a group.

The researchers called this a “dishonesty shift.” They blame it on a shifting weight placed on the norm of honesty. Norms are those patterns we have that guide us in our behaviors and beliefs. But those norms may be different individually than they are when we’re part of a group.

“Feedback is the decisive factor. Group-based decision-making involves an exchange of views that may alter the relative weight assigned to the relevant norm.”

Let’s look at how this may play out. Individually, we may default to honesty. We do so because we’re unsure of the consequences of not being honest. But when we get in a group, we start talking to others and it’s easier to rationalize not being honest – “Well, if everyone’s going to lie, I might as well too.”

Why is this important? Because marketing is done in groups, by groups, to groups. The dynamics of group-based ethics are important for us to understand. It could help to explain the most egregious breaches of ethics we see becoming more and more commonplace, either in corporations or in governments.

Four of the seminal studies in psychology and sociology shed further light on why groups tend to shift towards dishonesty. Let’s look at them individually.

In 1955, Solomon Asch showed that even if individually we believe something to be incorrect, if enough people around us have a different opinion, we’ll go with the group consensus rather than risk being the odd person out. In his famous study, he would surround a subject with “plants” who, when shown a card with three black lines of obviously differing lengths on it, would insist that the lines were equal. The subjects were then asked their opinion. In 75% of the cases, they’d go with the group rather than risk disagreement. As Asch said in his paper – quoting sociologist Gabriel Tarde – “Social man is a somnambulist.” We have about as much independent will as your average sleepwalker.

Now, let’s continue with Stanley Milgram’s Obedience to Authority study, perhaps the most controversial and frightening of the group. When confronted with an authoritative demeanor, a white coat and a clipboard, 63% of the subjects meekly followed directions and delivered what were supposed to be lethal levels of electrical shock to a hapless individual. The results were so disheartening that we’ve been trying to debunk them ever since. But a follow up study by Stanford psychology professor Philip Zimbardo – where subjects were arbitrarily assigned roles as guards and inmates in a mock prison scenario – was even more shocking. We’re more likely to become monsters and abandon our personal ethics when we’re in a group than when we act alone. Whether it’s obedience to authority – as Milgram was trying to prove – or whether it’s social conformity taken to the extreme, we tend to do very bad things when we’re in bad company.

But how do we slip so far so quickly from our own personal ethical baseline? Here’s where the last study I’ll cite can shed a little light. Sociologist Mark Granovetter – famous for his Strength of Weak Ties study – also looked at the viral spreading of behaviors in groups. I’ve talked about this in a previous column, but here’s the short version: if we have a choice between two options, each with its own social consequences, which one we choose may be driven by social conformity. If we see enough other people around us picking the more disruptive option (i.e., starting a riot), we may follow suit. Even though we all have different thresholds – which we do – the nature of a crowd is such that those with the lowest thresholds pick the disruptive option first, setting off a bandwagon effect that can eventually tip the entire group over the threshold.
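
Here’s a minimal sketch of that bandwagon dynamic in Python, using a hypothetical crowd with illustrative thresholds – not data from Granovetter’s paper.

```python
# A toy threshold cascade: each person joins in once the number of people
# already acting meets or exceeds their personal threshold.

def cascade(thresholds):
    acting = 0
    while True:
        now_acting = sum(1 for t in thresholds if t <= acting)
        if now_acting == acting:
            return acting
        acting = now_acting

# Thresholds of 0, 1, 2, ... 99: the lone instigator (threshold 0) tips the
# person with threshold 1, who tips the next, until the whole crowd riots.
print(cascade(list(range(100))))           # 100

# Remove just the threshold-1 person and the chain breaks at one rioter.
print(cascade([0] + list(range(2, 100))))  # 1
```

Same crowd, minus one low-threshold person, and the outcome flips completely – which is why individual morality is such a poor predictor of what a group will do.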

These were all studied in isolation, because that’s how science works. We study variables in isolation. But it’s when factors combine that we get the complexity that typifies the real world – and the real marketplace. And that’s where predictability goes out the window. The group dynamics in play can create behavioral patterns that make no sense to the average person with the average degree of morality. But it’s happened before, it’s happening now, and it’s sure to happen again.


Email Keeps Us Hanging On

Adobe just released their Consumer Email Survey Report. And one line from it immediately jumped out:

“We’ve seen a 28 percent decrease in consumers checking email messages from bed in the morning (though 26 percent still do it),”

Good for you, all of you who have kicked the habit. I, unfortunately, fall into the pathetic 26 percent who still do it.

So, what is it about email that still makes it such a dominant part of our digital lives? It’s been 46 years since the first email was sent, from Ray Tomlinson to himself. Yet it’s never gone out of vogue. In fact, according to this survey, the majority of us (about 85%) see our use of email staying the same or increasing over the next two years. Even the rebellious Generation Z – the post-Millennials who are rewriting the book on tech behaviors – colors inside the lines when it comes to email: 41% of them predict their use of email will increase at work, and 30% foresee themselves using email more in their personal lives.

Email is the most commonly used communication channel for work – beating actually talking to other people by a full 11 points.

What was interesting to me was when and where email was used:

[Chart from Adobe’s 2017 Consumer Email Survey Report, used with permission]

This suggests some interesting modality variations. I’ve talked about modality before, including a column a few weeks ago about devices. Personally, as a UX geek, I find the whole idea of modality fascinating. Here’s the best way I can think of to understand the importance of modality as it applies to behaviors. You have to stay late at work to fire an employee who has become a train wreck, becoming increasingly hostile to management and bullying her co-workers. It does not go well, but you get it done. Unfortunately, it makes you late for your 10-year-old daughter’s birthday party. Consider the seismic shifting of mental frameworks required so you don’t permanently traumatize a roomful of giggling pre-teens. That’s modality in action. It becomes essential when we’re talking about technology because, as we step into different roles to accomplish different objectives, it seems we have pre-determined technologies already assigned to the tasks required.

Email seems to be the communication channel best suited to certain behavioral modes: if you want a quick update on a project, are delivering feedback or are asking a brief question, email is the preferred channel. But for anything that requires more social finesse – asking for help, pitching a new idea, letting your boss know about an issue or even calling it quits – there’s no substitute for face to face.

Here we see why email has not faded in popularity – it’s the Occam’s Razor of factual communication. It does just what it needs to do, without unnecessary complication. It allows both the sender and the receiver to communicate on their own timelines, without disruption. It provides an archival record of communication. And it’s already integrated into all our task flows – no extra steps are required. Many start-ups have promised to abolish the in-box. So far, none have succeeded.

What email doesn’t do very well is convey emotion. Emails have a habit of blowing up in our faces in delicate situations, for all the same reasons stated above. But that’s okay. We know that. That’s why most of us don’t use it for that purpose. (Note: even for delicate situations, email is still usually the next most popular choice after face to face – 11% of survey respondents would still choose email to tell their bosses to take a flying leap.)

As email approaches its half-century birthday, logic tells us that someday it will become obsolete. But it has outlasted VCRs, fax machines, 8-tracks and a veritable junk heap of other discarded technologies. In fact, it’s hard to think of one other thing that has changed so little over the decades and is still such an integral part of our lives. Say what you want about email – it does appear to have legs.

Addicted to Tech

A few columns ago, I mentioned one of the aspects that is troubling me about technology – the shallowness of social media. I had mentioned at the time that there were other aspects that were equally troubling. Here’s one:

Technology is addictive – and it’s addictive by design.

Let’s begin by looking at the definition of addiction:

Persistent compulsive use of a substance known by the user to be harmful

So, let’s break it down. I don’t think you can quibble with the persistent, compulsive use part. When’s the last time you had your iPhone in your hand? We can simply swap out “substance” for “device” or “technology.” So that leaves us with the last qualifier, “known by the user to be harmful” – and there are two parts to this: is it harmful, and does the user know it’s harmful?

First, let’s look at the neurobiology of addiction. What causes us to use something persistently and compulsively? Here, dopamine is the culprit. Our reward center uses dopamine and the pleasurable sensation it produces as a positive reinforcement to cause us to pursue activities which over many hundreds of generations have proven to be evolutionarily advantageous. But Dr. Gary Small, from the UCLA Brain Research Institute, warns us that this time could be different:

“The same neural pathways in the brain that reinforce dependence on substances can reinforce compulsive technology behaviors that are just as addictive and potentially destructive.”

We like to think of big tobacco as the most evil of all evil empires – guilty of promoting addiction to a harmful substance – but is there really that much separating it from the purveyors of tech – Facebook or Google, for instance? According to Tristan Harris, there may be a very slippery slope between the two. I’ve written about Tristan before. He’s the former Google product manager who launched the Time Well Spent non-profit, devoted to stopping “tech companies from hijacking our minds.” Harris points the finger squarely at the big internet companies for creating platforms that are intentionally designed to suck up as much of our time as possible. There’s empirical evidence to back up Harris’s accusations. Researchers at Michigan State University and two universities in the Netherlands found that even seeing the Facebook logo can trigger a conditioned response in a social media user that starts the dopamine cycle spinning. We start jonesing for a social media fix.

So, what if our smart phones and social media platforms seduce us into using them compulsively? What’s the harm, as long as it’s not hurting us? That’s the second part of the addiction equation – is whatever we’re using harmful? After all, it’s not like tobacco, where it was proven to cause lung cancer.

Ah, but that’s the thing, isn’t it? We were smoking cigarettes for almost a hundred years before we finally found out they were bad for us. Sometimes it takes a while for the harmful effects of addiction to appear. The same could be true for our tech habit.

Tech addiction plays out at many different levels of cognition. It could potentially be much more sinister than the simple waste of time that Tristan Harris is worried about. There’s mounting evidence that overuse of tech could dramatically alter our ability to socialize effectively with other humans. The debate, which I’ve talked about before, comes when we substitute screen-to-screen interaction for face-to-face. The supporters say that this is simply another type of social bonding – one that comes with additional benefits. The naysayers worry that we’re just not built to communicate through screens and that – sooner or later – there will be a price to be paid for our obsessive use of digital platforms.

Dr. Jean Twenge, professor of psychology at San Diego State University, researches generational differences in behavior. It’s here where the full impact of the introduction of a disruptive environmental factor can be found. She found a seismic shift in behaviors between Millennials and the generation that followed them. It was a profound difference in how these generations viewed the world and where they spent their time. And it started in 2012 – the year when the proportion of Americans who owned a smartphone surpassed 50 percent. She sums up her concern in unequivocal terms:

“The twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.”

Not only are we less happy, we may be becoming less smart. As we become more reliant on technology, we do something called cognitive off-loading. We rely on Google rather than our memories to retrieve facts. We trust our GPS more than our own wayfinding strategies to get us home. Cognitive off-loading is a way to move beyond the limits of our own minds, but there may be an unacceptable trade-off here. Brains are like muscles – if we stop using them, they begin to atrophy.

Let’s go back to that original definition and the three qualifying criteria:

  • Persistent, compulsive use
  • Harmful
  • We know it’s harmful

In the case of tech, let’s not wait a hundred years to put check marks after all of these.