Navigating Grief: Ouija Boards and AI Communication with the Dead

When I was growing up, we had a Ouija board in our home. But no one was allowed to use it, so it was hidden in the bottom of a forgotten closet. It was, according to my mother, “a thing of the devil.”

At this point, you might have two questions: what is a Ouija board, and if it was evil, why did we have one in the first place?

Ouija boards – also called spirit boards or witch boards – first gained popularity with the rise of the spiritualist movement in the late 1800s. By the turn of the last century, the board had become a parlor game, marketed by the Kennard Novelty Company.

The Ouija board had the alphabet, numbers, the words “yes” and “no” and various other graphics and symbols printed on it. A “planchette” – a small, heart-shaped piece of wood, generally resting on felt-tipped pegs – was placed in the middle of the board, and those seated around the board would place their fingers on it. Then the planchette, seemingly moving of its own accord, would spell out answers to questions from the group. The board was supposedly used to communicate with the spirits of those who had passed on, speaking through the board from the other side.

That brings us to why we had the board. My father died suddenly in 1962 at the age of 27. I was one year old when he passed away. My mother was just 24 and, in the span of a disappearing heartbeat, became both a widow and a single mother. My father did everything for my mom. And now, suddenly, he was gone.

Mom, as you may have guessed from the “devil” comment, was always quite religious. And despite the church frowning heavily on things like Ouija boards, her grief was such that she was convinced by a friend to try the board to talk once more to her departed husband, the love of her young life.

She never told me exactly what came from this experiment, but suffice to say that after that, the board was moved to the bottom of the closet, underneath a big cardboard box of other things we couldn’t use but also couldn’t throw away. It was never used again. I suspect some of my father’s things were also tucked away in that box.

While Ouija boards are not as popular as they once were, they’re still around, if you look hard enough for them. Hasbro now markets them, and you can even buy one through Amazon, if the spirit moves you. Amazon helpfully suggests bundling your purchase with a handheld LED ghost detector and the SB7 Spirit Box – also useful for exorcisms and hunting trips into the great beyond.

Various church leaders are still warning us not to use Ouija boards. One religious online publication cautions, “Ouija boards are not innocent toys that can be played at Halloween parties. They can have grave spiritual consequences that can last years, leading a person down the dark path of Satan’s lies.”

Consider yourself duly warned.

Of course, in the 62 years since my father passed away, technology has added a new wrinkle or two to our ability to talk to the dead. We can now do it through AI.

At the Amazon re:MARS conference in 2022, Senior Vice President Rohit Prasad told attendees that Amazon was working on ways to change Alexa’s voice to that of anyone, living or dead. A video showed Alexa reading a bedtime story to a young child in the voice of his grandmother (presumably no longer with us to read the story herself). Prasad said Alexa could collect enough voice data from less than a minute of audio to make this personalization possible. While that may seem weird, or even creepy, to most of us, Prasad was unfazed: “While AI can’t eliminate that pain of loss, it can definitely make their memories last.”

A recent CNN article talked about other ways the grieving are using AI to stay in touch with their dearly departed. Rather than using a wooden pointer to laboriously spell out answers on a board, an AI avatar based on someone who has passed on can carry on a real-time conversation with us. If you train it with the right data, it can answer questions and provide advice. You can even create a video of those no longer here and chat with them. I know that if any of these technologies had been around 62 years ago, my mom would probably have tried them.

I spent much of my childhood watching my mother deal with her grief, so I certainly wouldn’t want to pass judgement on anyone willing to try anything to help heal the scars of loss. But this seems a dangerous path to go down, and not just because you may end up unknowingly chatting with demons.

As Mary-Frances O’Connor, a University of Arizona professor who studies grief, said in the CNN article, “When we fall in love with someone, the brain encodes that person as, ‘I will always be there for you and you will always be there for me.’ When they die, our brain has to understand that this person isn’t coming back.”

In 1969, psychiatrist Elisabeth Kübler-Ross defined the five stages of grief: denial, anger, bargaining, depression and acceptance. While these stages have been criticized as overly simplistic and misleading (grief is rarely a linear journey that moves neatly from one stage to the next), it is commonly understood that – at some point – acceptance allows us to move on with our own lives. That might be harder to do if you’re lugging an AI-powered Ouija board with you.

My mom understood: some things are better left at the bottom of a forgotten closet.

Can Media Move the Overton Window?

I fear that somewhere along the line, mainstream media has forgotten its obligation to society.

It was 63 years ago (on May 9, 1961) that new Federal Communications Commission Chair Newton Minow gave his famous speech, “Television and the Public Interest,” to the convention of the National Association of Broadcasters.

In that speech, he issued a challenge: “I invite each of you to sit down in front of your own television set when your station goes on the air and stay there, for a day, without a book, without a magazine, without a newspaper, without a profit and loss sheet or a rating book to distract you. Keep your eyes glued to that set until the station signs off. I can assure you that what you will observe is a vast wasteland.”

Minow was saying that media has an obligation to set the cultural and informational boundaries for society. The higher you set them, the more we will strive to reach them. That point was a callback to the Fairness Doctrine, established by the FCC in 1949, which required “holders of broadcast licenses to present controversial issues of public importance and to do so in a manner that fairly reflected differing viewpoints.” The Fairness Doctrine was abolished by the FCC in 1987.

What Minow realized, presciently, was that mainstream media is critically important in building the frame for what would come to be called, three decades later, the Overton Window. The concept was first identified by policy analyst Joseph Overton at the Mackinac Center for Public Policy, and it was posthumously named after him by his colleague Joseph Lehman.

The term is typically used to describe the range of topics suitable for public discourse in the political arena. But, as Lehman explained in an interview, the boundaries are not set by politicians: “The most common misconception is that lawmakers themselves are in the business of shifting the Overton Window. That is absolutely false. Lawmakers are actually in the business of detecting where the window is, and then moving to be in accordance with it.”

I think the concept of the Overton Window is more broadly applicable than just within politics. In almost any aspect of our society where there are ideas shaped and defined by public discourse, there is a frame that sets the boundaries for what the majority of society understands to be acceptable — and this frame is in constant motion.

Again, according to Lehman, “It just explains how ideas come in and out of fashion, the same way that gravity explains why something falls to the earth. I can use gravity to drop an anvil on your head, but that would be wrong. I could also use gravity to throw you a life preserver; that would be good.”

Typically, the frame drifts over time to the right or left of the ideological spectrum. What came as a bit of a shock in November of 2016 was just how quickly the frame pivoted and started heading to the hard right. What was unimaginable just a few years earlier suddenly seemed open to being discussed in the public forum.

Social media was held to blame. In a New York Times op-ed written just after Trump was elected president (a result that stunned mainstream media), columnist Farhad Manjoo said, “The election of Donald J. Trump is perhaps the starkest illustration yet that across the planet, social networks are helping to fundamentally rewire human society.”

In other words, social media can now shift the Overton Window — suddenly, and in unexpected directions. This is demonstrably true, and the nuances of that realization go far beyond the scope of this one post.

But we can’t be too quick to lay all the blame for the erratic movements of the Overton Window on social media’s doorstep.

I think social media, if anything, has expanded the window in both directions — right and left. It has redefined the concept of public discourse, moving both ends out from the middle. But it’s still the middle that determines the overall position of the window. And that middle is determined, in large part, by mainstream media.

It’s a mistake to suppose that social media has completely supplanted mainstream media. I think all of us understand that the two work together. We use what is discussed in mainstream media to get our bearings for what we discuss on social media. We may move right or left, but most of us realize there is still a boundary to what is acceptable to say.

The red flags start to go up when this goes into reverse and mainstream media starts using social media to get its bearings. If you have the mainstream chasing outliers on the right or left, you start getting some dangerous feedback loops where the Overton Window has difficulty defining its middle, risking being torn in two, with one window for the right and one for the left, each moving further and further apart.

Those who work in the media have a responsibility to society. It can’t be abdicated for the pursuit of profit or by saying they’re just following their audience. Media determines the boundaries of public discourse. It sets the tone.

Newton Minow was warning us about this six decades ago.

Uncommon Sense

Let’s talk about common sense.

“Common sense” is one of those underpinnings of democracy that we take for granted. Basically, it hinges on this concept: the majority of people will agree that certain things are true. Those things are then defined as “common sense.” And common sense becomes our reference point for what is right and what is wrong.

But what if the very concept of common sense isn’t true? That was what researchers Duncan Watts and Mark Whiting set out to explore.

Duncan Watts is one of my favourite academics. He is a computational social scientist at the University of Pennsylvania. I’m fascinated by network effects in our society, especially as they’re now impacted by social media. And that pretty much describes Watts’s academic research “wheelhouse.”

According to his profile, he’s “interested in social and organizational networks, collective dynamics of human systems, web-based experiments, and analysis of large-scale digital data, including production, consumption, and absorption of news.”

Duncan, you had me at “collective dynamics.”

I’ve cited his work in several columns before, notably a previous study that shot several holes in marketing’s ongoing love affair with so-called “influencers” and the idea of targeting that elite group.

Whiting and Watts took 50 claims that would seem to fall into the category of common sense. They ranged from the obvious (“a triangle has three sides”) to the more abstract (“all human beings are created equal”). They then recruited an online panel of participants to rate whether the claims were common sense or not. Claims based on science were more likely to be categorized as common sense. Claims about history or philosophy were less likely to be identified as common sense.

What did they find? Well, apparently common sense isn’t very common. Their report says, “we find that collective common sense is rare: at most a small fraction of people agree on more than a small fraction of claims.” Less than half of the 50 claims were identified as common sense by at least 75% of respondents.

Now, I must admit, I’m not really surprised by this. We know we are part of a pretty polarized society. It’s no shock that we share little in the way of ideological common ground.

But there is a fascinating potential reason why common sense is actually quite uncommon: we define common sense based on our own realities, and what is real for me may not be real for you. We determine our own realities by what we perceive to be real, and increasingly, we perceive the “real” world through a lens shaped by technology and media – both traditional and social.

Here is where common sense gets confusing. Many things – especially abstract things – have subjective reality. They are not really provable by science. Take the idea that all human beings are created equal. We may believe that, but how do we prove it? What does “equal” mean?

So when someone appeals to our common sense (usually a politician), just what are they appealing to? It’s not a universally understood fact that everyone agrees on. It’s typically a framework of belief that is probably shared by only a relatively small percentage of the population. That really makes an appeal to common sense a type of marketing, completely reliant on messaging and targeting the right market.

Common sense isn’t what it once was. Or perhaps it never was. Either common or sensible.

Feature image: clemsonunivlibrary