Social Media is Barely Skin Deep

Here’s a troubling fact. According to a study from the Georgia Institute of Technology, half of all selfies taken have one purpose: to show how good the subject looks. They are intended to show the world how attractive we are: our makeup, our clothes, our shoes, our lips, our hair. The category accounts for more selfies than all other categories combined. More than selfies taken with people or pets we love, more than us doing the things we love, more than being in the places we love, more than eating the food we love. It appears that the one thing we love the most is ourselves. The selfies have spoken.

In this study, the authors reference a 1956 work from sociologist Erving Goffman, The Presentation of Self in Everyday Life. Goffman took Shakespeare’s line – “All the world’s a stage, and all the men and women merely players” – quite literally. His theory was that we are all playing the part of the person we want to be perceived as. Our lives are divided into two parts: the “front,” when we’re on stage and playing our part, and the “back,” when we prepare for our role. The roles we play depend on the context we’re in.


Goffman’s theory introduces an interesting variable. The way we play these roles and the importance we place on them vary with the individual. For some of us, it’s all about the role and less about the actual person who inhabits it. These people are obsessed with how they are perceived by others. They’re the ones snapping selfies to show the world just how marvelous they look.

Others care little about what the world thinks of them. They are internally centered and focused on living their lives, rather than acting their way through them for the entertainment of – and validation from – others. Between the two extremes is the ubiquitous bell curve of normal distribution. Most of us live somewhere on that curve.

Goffman’s theory was created specifically to provide insight into face-to-face encounters. Technology has again thrown a gigantic wrinkle into things – and that wrinkle may explain why we keep taking those narcissistic selfies.

Humans are pretty damned good at judging authenticity in a face-to-face setting. We pick up subtle cues across a wide swath of interpersonal communication channels: vocal intonations, body language, eye contact, micro-expressions. Together, these inputs give us a pretty accurate “bullshit detector.” If someone comes across as an inauthentic phony, most of us will just roll our eyes and start avoiding the person. In face-to-face encounters, there is a social feedback mechanism that keeps the “actors” amongst us at least somewhat honest, in order to remain part of the social network that forms their audience.

But social media platforms provide the ideal incubator for inauthentic presentations of our personas. Three factors in particular allow shallow “actors” to flourish – even to the point of going viral.

False Intimacy and Social Distance

In his blog on Psychology Today, counselor Michael Formica talks about two of these factors: social distance and false intimacy. I’ve talked about false intimacy before in another context – the “labelability” of celebrities. Social media removes the transactional costs of maintaining a relationship. This has the unfortunate side effect of screwing up the brain’s natural defenses against inauthentic relationships. When we’re physically close to a person, there are no filters for the bad stuff. We get it all. Our brains have evolved to do a cost/benefit analysis of each relationship we have and decide whether it’s worth the effort to maintain. This works well when we depend on physically proximate relationships for our own well-being.

But social media introduces a whole new context for maintaining social relationships. When the transactional costs are reduced to scanning a newsfeed and hitting the “Like” button, the brain says, “What the hell, let’s add them to our mental friends list. It’s not costing me anything.” In evolutionary terms, intimacy is the highest status we can give to a relationship, and it typically comes only with a thorough understanding of the good and the bad involved – gained by being close to the person, both physically and figuratively. With zero relational friction, we’re more apt to afford intimacy, whether or not it’s been earned.

The Illusion of Acceptance

The previous two factors perfectly set the “stage” for false personas to flourish, but it’s the third factor that allows them to go viral. Every actor craves acceptance from his or her audience. Social exclusion is the worst fate imaginable. In a face-to-face world, our mental cost/benefit algorithm quickly weeds out false relationships that aren’t worth the investment of our social resources. That’s not true online. If it costs us nothing, we may be rolling our eyes – safely removed behind our screens – even as we hit the “Like” button. And shallow people are quite content with shallow forms of acceptance. A Facebook like is more than sufficient to encourage them to continue their act. To make it even more seductive, social acceptance is now measurable – there are hard numbers assigned to popularity.

This is pure catnip to the socially needy. Their need to craft a popular – but entirely inauthentic – persona goes into overdrive. Their lives are not lived so much as manufactured, creating a veneer just thick enough to capture a quick click of approval. Increasingly, they retreat to an online world that follows the script they’ve written for themselves.

Suddenly it makes sense why we keep taking all those selfies. When all the world’s a stage, you need a good headshot.

Mobs, Filter Bubbles and Democracy

You know I love to ask “why.” Last Tuesday provided me with the mother of all “whys.” I know a lot of digital ink will be spilled on this – but I just can’t help myself.

So... why?

Eight years ago, on MediaPost, I wrote that we had seen a new type of democracy. I still think I was right. What I didn’t know at the time was that I had just seen one side of a more complex phenomenon. Tuesday we saw another side. And we’re still reeling from it.

It’s not the first time we’ve seen this. Trump’s ascendancy follows the same playbook as Brexit, Marine Le Pen’s right-wing insurgency in France and Rodrigo Duterte’s recent win of the presidency of the Philippines. Behind all of these, a few factors are at play. Together, they combine to create a new social phenomenon. And, when combined with traditional democratic vehicles, they can cause bad things to happen to good people.

The FYF (F*&k You Factor)

Michael Moore absolutely nailed what happened Tuesday night, even providing a state-by-state, vote-by-vote breakdown of what went down – but he did it back in July. And he did it because he and Trump are both masters of the FYF. Just like you can’t bullshit a bullshitter – you can’t propagandize a propagandist. Trump had borrowed a page out of Moore’s playbook and Moore could see it coming a mile away.

The FYF requires two things – fear and anger. Anger comes from the fear. Typically, it’s fear of – and anger about – something you feel is beyond your control. This inevitably leads to a need to blame someone or something. The FYF master first creates the enemy, and then gives you a way to say FY to them. In Moore’s words, “The Outsider, Donald Trump, has arrived to clean house! You don’t have to agree with him! You don’t even have to like him! He is your personal Molotov cocktail to throw right into the center of the bastards who did this to you!”

What Michael Moore knew – and what the rest of us would figure out too late – was that for half the US, this wasn’t a vote for president. This was a vote for destruction. The more outrageous Trump seemed, the more destructive he would be. Whether it was intentional or not, Trump’s genius was in turning Clinton’s competence into a liability. He succeeded in turning this into a simple yes-or-no choice: vote for the Washington you know – and hate – or blow it up.

The Threshold Factor

The FYF provides the core – the power base. Trump’s core was angry white men. But then you have to extend beyond this core. That’s where mob mentality comes in.

In 1978, Mark Granovetter wrote a landmark paper on threshold models of behavior. I’ll summarize. Let’s say you have two choices of behavior. One is to adhere to social and behavioral norms. Let’s call this the status quo option. The other is to do something you wouldn’t normally do, like defy your government – let’s call this the F*&k You option. Which option you choose is based on a risk/reward calculation.

What Granovetter realized is that predicting the behavior of a group isn’t a binary model – it’s a spectrum. In any group of people, there is a range of risk/reward thresholds to get over in moving from one behavioral alternative to the other. Since we are social animals, Granovetter theorized that the deciding factor is the number of other people we need to see who are also willing to choose option 2 – saying F*&k you. The more people willing to make that choice, the lower the risk that you’ll be singled out for your behavior. Some people don’t need anyone – they are the instigators. Let’s give them a “0.” Other people may never join the mob mentality, even if everyone else has. We’ll give them a “100.” In between you have all the rest, ranging from 1 to 99.

The instigators start the reaction. Depending on the distribution of thresholds, if there are enough 1s, 2s, 3s and so forth, the bandwagon effect happens quickly, spreading through the group. It isn’t until you hit a threshold gap that the chain reaction stops. For example, if you have a small group of 1s, 2s and 3s, but the next lowest threshold is 10, the movement may be stopped in its tracks.
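To make the mechanics concrete, here’s a minimal sketch of the threshold model in Python. The function name and the threshold numbers are my own illustration, not anything from Granovetter’s paper: each person joins in once the number of people already acting meets their personal threshold.

```python
# A minimal sketch of Granovetter's threshold model (illustrative numbers only).
def run_cascade(thresholds):
    """Sweep the group repeatedly; anyone whose threshold is met joins in."""
    acting = set()
    changed = True
    while changed:
        changed = False
        for person, threshold in enumerate(thresholds):
            if person not in acting and len(acting) >= threshold:
                acting.add(person)
                changed = True
    return len(acting)

# A smooth ladder of thresholds: the whole group tips, one person at a time.
print(run_cascade([0, 1, 2, 3, 4]))   # -> 5

# Same instigators, but a gap between 3 and 10: the cascade stalls at 4.
print(run_cascade([0, 1, 2, 3, 10]))  # -> 4
```

The second run is the threshold gap in action: four people act, but no one’s threshold of 10 is ever met, so the movement dies.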

Network Effects and Filter Bubbles

None of what I’ve described so far is new. People have always been angry and mobs have always formed. What is new, however, is the nature of this particular mob.

As you probably deduced, the threshold model is one of network effects. It depends on finding others who share similar views. If you can aggregate a critical mass of low thresholds, you can trigger bigger bandwagon effects – maybe even big enough to jump threshold gaps.

Up to now, Granovetter’s threshold model was constrained by geography. You had to have enough low-threshold people in physical space to start the chain reaction. But we live in a different world. Now, you can have groups of 0s, 1s and 2s living in Spokane, Washington, Pickensville, Alabama, and Marianna, Florida, all connected online. When this happens, we get a new phenomenon: the filter bubble.
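Extending the sketch above shows why geography used to matter. With purely hypothetical thresholds, three isolated towns each stall on their own, but pool the same people into one online network and the low thresholds find each other:

```python
# Illustrative thresholds only: each town alone stalls; the pooled network doesn't.
towns = [[0, 5], [1, 5], [2, 5]]                  # thresholds, town by town

offline = sum(run_cascade(t) for t in towns)      # each town cascades separately
online = run_cascade([t for town in towns for t in town])  # everyone connected

print(offline)  # 1 -- only the lone instigator ever acts
print(online)   # 3 -- the 0, the 1 and the 2 find each other and cascade
```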

One thing we learned this election was how effective filter bubbles were. I have a little over 440 connections on Facebook. In the months and weeks leading up to the election, I saw almost no support for Trump in my feed. I agreed ideologically with the posts of almost everyone in my network. I suspect I’m not alone. I’m sure Trump supporters got equally homogeneous feedback from their respective networks. This put us in what we call a filter bubble. In the geographically unrestricted network of online connections, our network nodes tend to be ideologically homogeneous.

Think about what this does to Granovetter’s threshold model. We fall under the illusion that everyone thinks the same way we do. This reduces threshold gaps and accelerates momentum for non-typical options. It tips the balance away from risk and towards reward.

A New Face of Democracy

I believe these three factors set the stage for Donald Trump. I also believe they are threatening to turn democracy into a never-ending cycle of left-vs.-right backlashes. I want to explore this some more, but given that I’ve already egregiously exceeded my typical word count for Online Spin, we’ll have to pick up the thread next week.

When Evolution (and Democracy) Get It Wrong

“I’ve made a huge mistake.”

G.O.B. – Arrested Development

The world is eliminating friction. Generally, that’s a good thing. But there may be unintended consequences.

Let’s take evolution, for instance. Friction in evolution comes in the form of survival rates. Barring other mitigating factors over the length of a natural evolutionary timeline, successful mutations will result in higher survival rates and, therefore, higher propagation rates. Those mutations that best fit the adaptive landscape will survive. Unsuccessful ones will die out.

But that assumes a landscape in which survival has a fairly high threshold. The lower the threshold, the more likely it is that a greater number of mutations will “get over the bar”.

Two factors can vary that threshold. One is the adaptive environment itself. It may prove to be “kinder and gentler” for an extended period of time, allowing less fit candidates to flourish.

The other is a factor unique to one species that allows it to alter the environment at will. Like technology, for instance. Humans have used technology to eliminate friction and drag the bar lower and lower – until the idea of survival of the fittest has little meaning anymore.

The more friction there is, the more demanding the propagation threshold. The same phenomenon holds for most emergent systems. What emerges depends on the environment the system is operating in; demanding environments are called rugged landscapes. There is some counterintuitive logic at work here. The removal of friction can actually increase the number of mutations (or, in societal terms, innovations). More mutations – or ideas – can survive because the definition of “fittest” is less demanding. But we also build up a tipping point of mediocrity in the gene or meme pool, and if something causes the adaptive landscape to suddenly become more rugged again, the extinction rate soars. The reckoning can be brutal.
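A toy simulation makes the point. The fitness values and cutoffs below are entirely made up for illustration: a gentle landscape lets most variants through, and when the bar suddenly rises, most of that accumulated pool gets wiped out.

```python
import random

random.seed(1)
pool = [random.random() for _ in range(10_000)]   # fitness of 10,000 variants

easy = [f for f in pool if f > 0.2]   # gentle landscape: most variants survive
hard = [f for f in pool if f > 0.8]   # rugged landscape: only the fittest do

# Now let the pool bred on the gentle landscape face a sudden rugged reckoning.
survivors = [f for f in easy if f > 0.8]
print(f"gentle pool: {len(easy)}, rugged pool: {len(hard)}")
print(f"extinction rate when the bar rises: {1 - len(survivors) / len(easy):.0%}")
```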

Let’s look at memes. For ideas to spread, there used to be a fairly high threshold of “shareability.” A publishing and editorial supply chain introduced a huge amount of friction into our culture. That friction has largely been removed, allowing any of us to instantly share ideas. This has led to a recalibration of the shareability threshold – an explosion of viral content that happens to ding our fickle consciousness just long enough for us to hit the share button. Bob Garfield called this “The Survival of the Funnest” in a recent column.

But was the previous friction a good thing? We definitely have more content being produced now. Some of it is very good, and it couldn’t have happened under the previous cultural supply chain. But a lot of the content is – at best – frivolous and – at worst – dangerous. That same chain did force thoughtfulness into the filtering of content. Someone – somewhere – had to think about what was fit to publish.

Now, one could argue that ultimately the market will get it right. We could also argue that evolution never makes mistakes. But that’s not always true. If the threshold of fitness gets lowered, evolution will make mistakes. Tons of them. I suspect the same is true of markets. If we grow complacent and entitled, we can flood the market with mediocrity. We humans have an unlimited capacity to make bad choices if we don’t have to make good ones.

This brings me to the current state of democracy. Democracy is cultural evolution in action. It means – literally – the “people” (demos) “rule” (kratia). It assumes that the majority will get it right. But the adaptive landscape of democracy has also changed. The threshold has been lowered. We are making electoral decisions based on the same viral content that has flooded the rest of our culture. Thoughtfulness is in woefully short supply. There is no shortage of knee-jerk soundbites that latch on to the belief system of a disgruntled electorate. This is an ideological death spiral that could have big consequences.

Correction.

Make that “Huuugggeee” consequences.


Chatting Up a Storm

I’ve been talking about a “meta-app” for ages. It looks like China may have found it in WeChat. We in the Western world have been monitoring the success of Tencent’s WeChat with growing interest. Who would have thought that a simple chat interface could be the killer app of the future?

Chat interfaces seem so old school. They appear to be clunky and inefficient. But the beauty of chat is that it’s completely flexible. As Wired.com’s David Pierce said, “You can, for all intents and purposes, live your entire life within WeChat.” That’s exactly the type of universal functionality you need to become a meta-app.

We’ve always envisioned having conversations with our computers, going back to Star Trek and 2001: A Space Odyssey. But we didn’t think our conversations would be carried out in text bubbles on a handheld device. A Pew study found that texting is the single most common activity on a smartphone: 97% of us do it. So if messaging is the new UI, none of us has to learn anything new.

Graphical interfaces are necessarily tied to a particular task. The interface is designed for a specific intent. But messaging interfaces can adapt as intents change. They can quickly switch from social messaging to purchasing online to searching for an address to – well – you get the idea.

But where texting really shines is when it’s combined with artificially intelligent chatbots. A simple, universally understood interface, merged with powerful intelligent agents – either human or machine – allows users to quickly request or do anything they wish. The functionality of intent-specific apps can be called on as required and easily introduced into the chat interface.
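Here’s a minimal sketch of what that intent switching might look like under the hood. Everything in it is hypothetical – the triggers and responses are stand-ins, not any real platform’s API – but it shows how a single chat stream can dispatch to different capabilities as intent changes:

```python
def route(message: str) -> str:
    """Dispatch one text bubble to whichever capability matches its intent."""
    text = message.lower()
    if text.startswith("buy "):
        return f"Added '{message[4:]}' to your cart."      # purchasing intent
    if text.startswith("where is "):
        return f"Searching maps for '{message[9:]}'..."    # address lookup intent
    return "Message sent to your friends."                 # default: social chat

print(route("buy running shoes"))
print(route("where is 123 Main St"))
print(route("See you at 8!"))
```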

In effect, text messaging is doing exactly what Apple hoped Siri would do – become the universal interface to the digital world. Given that speaking would appear to be easier than texting, one has to wonder why Siri never really gained the traction that Apple hoped it would. I think this can be attributed to three reasons:

  • The difficulties of spoken interpretation still restrict the functionality of Siri. The success rate isn’t high enough to completely gain our confidence.
  • The use case for Siri is still primarily when we need to keep our hands free. It’s not that easy to switch to interactions where tactile input is required.
  • We look like idiots speaking to a machine.

All of these are avoided in a chat-based interface. We keep the flexibility of a conversational interface while retaining all the power of our device at our fingertips. Plus, we don’t infringe on any social taboos.

Given the advantages, it’s small wonder that a number of players – primarily Facebook – are seriously plotting the commercialization of chat-based messaging services. There’s one other massive advantage that a stand-alone messaging interface has: the more activities we conduct through any particular interface, the greater the opportunity for personalization. I’ve always maintained that a truly useful “meta-app” should be able to anticipate our intent. That requires interactions across the broad spectrum of our activities. Previously, only operating systems offered this type of breadth, and because OSes operate “under the hood,” there were limitations on the degree of personalization – and through that, commercialization – that was possible. But an app we explicitly choose to use seems to be fair game for commercialization. It’s one of those unwritten social modality rules that advertisers are well advised to be aware of.

Between Messenger and WhatsApp, Facebook has a huge slice of the chat market. It just passed the 900 million user mark for Messenger alone. According to a recent study from the Global Web Index, over 36% of users have used Messenger in the past month, followed closely by WhatsApp at 34%, then Skype at 19%, Line at 10%, and Viber and Snapchat at 7% each. These numbers exclude the Chinese market, which is dominated by WeChat, but it remains to be seen whether WeChat can expand its base beyond Asia.

And leaked documents from earlier this year indicated that Messenger may soon introduce targeted ads. This hardly qualifies as a security breach. It’s more of a “Duh – ya think?” The rumor mill around the commercialization of Messenger has been going full steam in 2016. If chatting is the UI juggernaut it seems to be, of course we will soon see ads there. WeChat is well down this road, and it seems to be working like a charm, if the recent Smart Car promotion is any example.


#AlexfromTarget – An Unexpected Consequence of Technology

Yes, I’m belatedly jumping on the #AlexfromTarget bandwagon, but it’s in service of a greater truth that I’m trying to illustrate. Last column, I spoke about the Unintended Consequences of Technology. I think this qualifies. And furthermore, this brings us full circle to Kaila Colbin’s original point, which started this whole prolonged discussion.

It is up to us to decide what is important, to create meaning and purpose. And, personally, I think we could do a better job than we’re doing now.

So, why did the entire world go ga-ga over a grocery bagger from Texas? What could possibly be important about this?

Well – nothing – and that’s the point. Thinking about important things is hard work. Damned hard work, if it’s really important. Important things are complex. They make our brains hurt. It’s difficult to pin them down long enough to plant some hooks of understanding in them. They’re like eating broccoli or doing push-ups. They may be good for us, but that doesn’t make them any more fun.

Remember the Yir Yoront from my last column – the tribal society that was thrown into a tailspin by the introduction of steel axes? The intended consequence of that introduction was to make the Yir Yoront more productive. The axes did make the tribe more productive, in that essential tasks could be done more quickly, but the result was that the Yir Yoront spent more time sleeping.

Here’s the thing about technology. It allows us to be more human – and by that I mean the mixed bag of good and bad that defines humanity. It extends our natural instincts. It’s natural to sleep if you don’t have to worry about survival. And it’s also natural for young girls to gossip about adorable young boys. These are hard-wired traits. Deep philosophical thought is not a hard-wired trait. Humans can do it, but it takes conscious effort.

Here’s where the normal distribution curve comes in. Any genetically determined trait will have a normal distribution over the population. How we apply new technologies will be no different. The vast majority of the population will cluster around the mean. But here’s the other thing: that “mean” is a moving target. As our brains “re-wire” and adapt to new technologies, the mean that defines typical behavior moves over time. We adopt strategies to incorporate our new technology-aided abilities. This creates a new societal standard, and it is also human to follow the unwritten rules of society.

This creates a cause-and-effect cycle. Technologies enable new behaviors that are built on top of the foundations of human instinct; society determines whether these new behaviors are acceptable; and if they are acceptable, they become the new “mean” of our behavioral bell curve. We bounce new behaviors off the backboard of society. So, much as we may scoff at the fan-girls who gave “Alex” insta-fame, ultimately it’s not the girls’ fault, or technology’s. The blame lies with us. It also lies with Ellen DeGeneres, the New York Times, and the other barometers of societal acceptance that endorsed the phenomenon.

It’s human to be distracted by the titillating and trivial. It’s also human to gossip about it. There’s nothing new here. It’s just that these behaviors used to remain trapped within the limited confines of our own social networks. Now, however, they’re amplified through technology. It’s difficult to determine what the long-term consequences of this might be. Is Nicholas Carr right? Is technology leading us down the garden path to imbecility, forever distracted by bright, shiny objects? Or is our finest moment yet to come?

Are Our Brains Trading Breadth for Depth?

In last week’s column, I looked at how efficient our brains are. Essentially, if there’s a shortcut to an end goal, the brain will find it. I explained how Google is eliminating the need for us to remember easily retrievable information. I also speculated about how our brains may be defaulting to easier forms of communication – texting rather than talking face-to-face.

Personally, I am not entirely pessimistic about the “Google Effect,” where we put less effort into memorizing information that can be easily retrieved on demand. This is an extension of Daniel Wegner’s “transactive memory,” and I would put it in the category of coping mechanisms. It makes no sense to expend brainpower on something that technology can do more easily, quickly and reliably. As John Mallin commented, this is like using a calculator rather than memorizing times tables.

Reams of research have shown that our memories can be notoriously inaccurate. In this case, I partially disagree with Nicholas Carr. I don’t think Google is necessarily making us stupid. It may be freeing up the incredibly flexible power of our minds, giving us the opportunity to redefine what it means to be knowledgeable. Rather than a storehouse of random information, our minds may have the opportunity to become more creative integrators of available information. We may be able to expand our “meta-memory,” Wegner’s term for the layer of memory that keeps track of where to turn for certain kinds of knowledge. Our memory could become an index of interesting concepts and useful resources, rather than ad-hoc scraps of knowledge.

Of course, this positive evolution of our brains is far from a given. And here Carr may have a point. There is a difference between “lazy” and “efficient.” Technology’s freeing up of the processing power of our brains is only a good thing if that power is then put to a higher purpose. Carr’s title, “The Shallows,” is a warning that rather than freeing up our brains to dive deeper into new territory, technology may just give us the ability to skip across the surface of the titillating. Will we waste our extra time and cognitive power going from one piece of brain candy to the next, or will we invest it by sinking our teeth into something important and meaningful?

A historical perspective gives us little reason to be optimistic. We evolved to balance the efforts required to find food with the nutritional value we got from that food. It used to be damned hard to feed ourselves, so we developed preferences for high calorie, high fat foods that would go a long way once we found them. Thanks to technology, the only effort required today to get these foods is to pick them off the shelf and pay for them. We could have used technology to produce healthier and more nutritious foods, but market demands determined that we’d become an obese nation of junk food eaters. Will the same thing happen to our brains?

I am even more concerned with the shortcuts that seem to be developing in our social networking activities. Typically, our social networks are built from both strong ties and weak ties. Mark Granovetter identified these two types of social ties in the 1970s. Strong ties bind us to family and close friends. Weak ties connect us with acquaintances. When we hit rough patches, as we inevitably do, we treat those ties very differently. Strong ties are typically much more resilient to adversity. When we hit the lowest points in our lives, it’s the strong ties we depend on to pull us through. Our lifelines are made up of strong ties. If we have a disagreement with someone with whom we have a strong tie, we work harder to resolve it. We have made large investments in these relationships, so we are reluctant to let them go. When there are disruptions in our strong-tie network, there is a strong motivation to eliminate the disruption, rather than sacrifice the network.

Weak ties are a whole different matter. We have minimal emotional investments in these relationships. Typically, we connect with them either through serendipity or when we need something only they can offer. For example, we typically reactivate our weak-tie network when we’re on the hunt for a job. LinkedIn is the virtual embodiment of a weak-tie network. And if we have a difference of opinion with someone to whom we’re weakly tied, we just shut down the connection. We have plenty of them, so one more or less won’t make much of a difference. When there are disruptions in our weak-tie network, we just change the network, deactivating parts of it and reactivating others.

Weak ties are easily built. All we need is one thing in common at one point in our lives. It could be working at the same company, serving on the same committee, living in the same neighborhood or attending the same convention. Then we just need some way to remember them in the future. Strong ties are different. They develop over time, which means they evolve through shared experiences, both positive and negative. They also demand consistent communication, including painful communication that sometimes requires us to say we were wrong and we’re sorry. The conversations that leave you either emotionally drained or supercharged are the stuff of strong ties. And a healthy percentage of these conversations should happen face-to-face. Could you build a strong-tie relationship without ever meeting face-to-face? We’ve all heard examples, but I’d place my bets on face-to-face – every time.

It’s the hard work of building strong ties that I fear we may miss as we build our relationships through online channels. I worry that the brain, given an easy choice and a hard choice, will naturally opt for the easy one. Online, our network of weak ties can grow beyond the inherent limit of our social inventory, known as Dunbar’s Number (which is 150, by the way). We can always find someone with whom to spend a few minutes texting or chatting online. Then we can run off to the next one. We will skip across the surface of our social network, rather than invest the effort and time required to build strong ties. Just like our brains, our social connections may trade breadth for depth.

The Death and Rebirth of Google+

Google Executive Chairman Eric Schmidt has come out with his predictions for 2014 for Bloomberg TV. Don’t expect any earth-shaking revelations here. Schmidt plays it pretty safe with his prognostications:

Mobile has won – Schmidt says everyone will have a smartphone. “The trend has been mobile was winning... It’s now won.” Less a prediction than a statement of the obvious.

Big Data and Machine Intelligence will be the Biggest Disruptor – Again, hardly a leap of intuitive insight. Schmidt foresees the evolution of an entirely new data marketplace and corresponding value chain. Agreed.

Gene Sequencing Has Promise in Cancer Treatments – While a little fuzzier than his other predictions, Schmidt again pounces on the obvious. If you’re looking for someone willing to bet the house on gene sequencing, try LA billionaire Patrick Soon-Shiong.


The one thing that was interesting to me was an admission of failure with Google+:

The biggest mistake that I made was not anticipating the rise of the social networking phenomenon.  Not a mistake we’re going to make again. I guess in our defense we were busy working on many other things, but we should have been in that area and I take responsibility for that.

I always called Google+ a non-starter, despite a deceptively encouraging start. But I think it’s important to point out that we tend to judge Google+ against Facebook or other social destinations. As Google+ Vice President of Product Bradley Horowitz made clear in an interview last year with Dailytech.com, Google never saw this as a “Facebook killer.”

“I think in the early going there was a lot of looking for an alternative [to Facebook, Twitter, etc.],” said Horowitz. “But I think increasingly the people who are using Google+ are the people using Google. They’re not looking for an alternative to anything; they’re looking for a better experience on Google.”

And this highlights a fundamental change in how we think about online social activity – one that I think is more indicative of what the future holds. Social is not a destination; social is a paradigm. It’s a layer of connectedness and shared values that acts as a filter, a lens – a way we view reality. That’s what social is in our physical world. It shapes how we view that world. And Horowitz is telling us that that’s how Google looks at social too. With the layering of social signals into our online experience, Google+ gives us an enhanced version of our online experience. It’s not about a single destination, no matter how big that destination might be. It’s about adding richness to everything we do online.

Because humans are social animals, our connections and our perception of ourselves as part of an extended network literally shape every decision we make and everything we do, whether we’re conscious of it or not. We are, by design, part of a greater whole. But because online social originated as distinct destinations, it was unable to touch our entire online experience. Facebook or Pinterest acts as a social gathering place – a type of virtual town square – but social is more than that. Google+ is closer to this more holistic definition of “social.”

I’m not sure Google+ will succeed in becoming our virtual social lens, but I do agree that as our virtual sense of social evolves, it will become less about distinct destinations and more about a dynamic paradigm that stays with us constantly, helping to shape, sharpen, enhance and define what we do online. As such, it becomes part of a new way of thinking about being online: not going to a destination, but being plugged into a network.