Dear Facebook. It’s Not Me, It’s You

So, let’s say, hypothetically, one wanted to break up with Facebook. Just how would one do that?

I heard one person say that swearing off Facebook was a “position of privilege.” It was an odd way of putting it, until I thought about it a bit. This person was right. Much as I’d like to follow in retired tech journalist Walter Mossberg’s footsteps and quit Facebook cold turkey, I don’t think I can. I am not in that position. I am not so privileged.

This in no way condones Facebook and its actions. I’m still pretty pissed off about those. I suspect I might well be in an abusive relationship. I have this suspicion because I looked it up on Mentalhealth.net, a website offered by the American Addictions Centers. According to them, an abusive relationship is one

“where one thing mistreats or misuses another thing. The important words in this definition are “mistreat” and “misuse”; they imply that there is a standard that describes how things should be treated and used, and that an abuser has violated that standard.

For the most part, only human beings are capable of being abusive, because only human beings are capable of understanding how things should be treated in the first place and then violating that standard anyway.”

That sounds bang on when I think about how Facebook has treated its users and their personal data. And everyone will tell you that if you’re in an unhealthy relationship, you should get out. But it’s not that easy. And that’s because of Metcalfe’s Law. Originally applied to telecommunication networks, it also applies to digitally mediated social networks. Metcalfe’s Law states that “the value of a telecommunications network is proportional to the square of the number of connected users of the system.”

The example often used is a telephone. If you’re the only person with one, it’s useless. If everyone has one, it’s invaluable. Facebook has about 2.3 billion users worldwide. That’s one out of every three people on this planet. Do the math. That’s a ton of value. It makes Facebook what Silicon Valley calls very “sticky.”
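If you actually want to do the math, Metcalfe’s n-squared growth is easy to see in a few lines of code. This is just a toy sketch – the function and the comparison numbers are mine, not anything Facebook publishes – using the count of possible pairwise connections as the standard stand-in for “value proportional to the square of the users.”

```python
# Toy illustration of Metcalfe's Law: network value grows roughly with
# the square of the number of connected users. "Value" here is proxied
# by the number of possible pairwise connections -- an arbitrary unit,
# so only the ratios between networks are meaningful.

def metcalfe_value(users: int) -> int:
    """Possible pairwise connections among `users` people: n*(n-1)/2."""
    return users * (users - 1) // 2

lone_phone = metcalfe_value(1)            # a single telephone: 0 connections
small_town = metcalfe_value(10_000)       # hypothetical small network
facebook   = metcalfe_value(2_300_000_000)  # ~2.3 billion users

# Doubling the user base roughly quadruples the value:
ratio = metcalfe_value(200) / metcalfe_value(100)

print(lone_phone)       # 0
print(round(ratio, 2))  # ~4.02
print(facebook // small_town)
```

That last comparison is why “sticky” almost undersells it: each new user adds value to every existing user, so the network gets disproportionately harder to leave as it grows.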

But it’s not just the number of users that makes Facebook valuable. It’s also the way they use it. Facebook has always intended to become the de facto platform for broad-based social connection. As such, it is built on “weak ties” – those social bonds defined by Mark Granovetter almost 50 years ago which connect scattered nodes in a network. To go back to the aforementioned “position of privilege” comment, the privilege in this case is a lack of dependence on weak ties.

 

My kids could probably quit Facebook. At least, it would be easier for them than it would be for me. But they’re also not at the stage of life where weak ties are all that important. They use other platforms, like Snapchat, to communicate with their friends. It’s a channel built for strong ties. If they do need to bridge weak ties, they escalate their social postings, first to Instagram, then – finally – to their last resort: Facebook. It’s only through Facebook that they’ll reach parents, aunts, cousins and grandmas all at once.

It’s different for me. I have a lifetime of accumulated weak ties that I need to connect with all the time. And Facebook is the best way to do it. I connect with various groups, relatives, acquaintances and colleagues on an as-needed basis. I also need a Facebook presence for my business, because it’s expected by others who need to connect with me. I don’t have the privilege of severing those ties.

So, I’ve decided that I can’t quit Facebook. At least, not yet. But I can use Facebook differently – more impersonally. I can use it as a connection platform rather than a channel for personal expression. I can make sure as little of my personal data falls into Facebook’s hands as possible. I don’t need to post what I like, how I’m feeling, what my beliefs are or what I do daily. I can close myself off to Facebook, turning this into a passionless relationship. From now on, I’ll consider it a tool – not a friend, not a confidante, not something I can trust – just a way to connect when I need to. My personal life is none of Facebook’s business – literally.

For me, it’s the first step in preventing more abuse.

The Strange Polarity of Facebook’s Moral Compass

For Facebook, 2018 came in like a lion and went out like a really pissed-off Godzilla with a savagely bad hangover after the Mother of All New Year’s Eve parties. In other words, it was not a good year.

As Zuckerberg’s 2018 shuddered to its close, it was disclosed that Facebook and Friends had opened our personal data kimonos for any of their “premier” partners. This was in direct violation of their own data privacy policy, which makes it even more reprehensible than usual. This wasn’t a bone-headed fumbling of our personal information. This was a fully intentional plan to financially benefit from that data in a way we didn’t agree to, hide that fact from us and then deliberately lie about it on more than one occasion.

I was listening to a radio interview about this latest revelation, and one of the analysts – social media expert and author Alexandra Samuel – mused about when it was that Facebook lost its moral compass. She has been familiar with the company since its earliest days, having had the opportunity to talk to Mark Zuckerberg personally. In her telling, Zuckerberg is an evangelist who lost his way, drawn to the dark side by the corporate curse of profit and greed.

But Siva Vaidhyanathan – the Robertson Professor of Modern Media Studies at the University of Virginia – tells a different story. And it’s one that seems much more plausible to me. Zuckerberg may indeed be an evangelist, although I suspect he’s more of a megalomaniac. Either way, he does have a mission. And that mission is not opposed to corporate skullduggery. It fully embraces it. Zuckerberg believes he’s out to change the world while making a shitload of money along the way. And he’s fine with that.

That came as a revelation to me. I spent a good part of 2018 wondering how Facebook could have been so horrendously cavalier with our personal data. I put it down to corporate malfeasance. Public companies are not usually paragons of ethical efficacy. This is especially true when ethics and profitability are diametrically opposed to each other. This is the case with Facebook. In order for Facebook to maintain profitability with its current revenue model, it has to do things with our private data we’d rather not know about.

But even given the moral vacuum that can be found in most corporate boardrooms, Facebook’s brand of hubris in the face of increasingly disturbing revelations seems off-note – out of kilter with the normal damage control playbook. Vaidhyanathan’s analysis brings that cognitive dissonance into focus. And it’s a picture that is disturbing on many levels.

Siva Vaidhyanathan

According to Vaidhyanathan, “Zuckerberg has two core principles from which he has never wavered. They are the founding tenets of Facebook. First, the more people use Facebook for more reasons for more time of the day the better those people will be. …  Zuckerberg truly believes that Facebook benefits humanity and we should use it more, not less. What’s good for Facebook is good for the world and vice-versa.

Second, Zuckerberg deeply believes that the records of our interests, opinions, desires, and interactions with others should be shared as widely as possible so that companies like Facebook can make our lives better for us – even without our knowledge or permission.”

Mark Zuckerberg is not the first tech company founder to have a seemingly ruthless god complex and a “bigger than any one of us” mission. Steve Jobs, Bill Gates, Larry Page, Larry Ellison; I could go on. What is different this time is that Zuckerberg’s chosen revenue model runs completely counter to the idea of personal privacy. Yes, Google makes money from advertising, but the vast majority of that is delivered in response to a very intentional and conscious request on the part of the user. Facebook’s gaping vulnerability is that it can only be profitable by doing things of which we’re unaware. As Vaidhyanathan says, “violating our privacy is in Facebook’s DNA.”

Which all leads to the question, “Are we okay with that?” I’ve been thinking about that myself. Obviously, I’m not okay with it. I just spent 720 words telling you so. But will I strip my profile from the platform?

I’m not sure. Give me a week to think about it.

Is Google Politically Biased?

As a company, the answer is almost assuredly yes.

But are the search results biased? That’s a much more nuanced question.

Sundar Pichai testifying before Congress

In trying to answer that question last week, Google CEO Sundar Pichai tried to explain how Google’s algorithm works to the House Judiciary Committee (which is kind of like God explaining how the universe works to my sock, but I digress). One of the catalysts for this latest appearance of a tech executive before Congress was another one of President Trump’s ranting tweets, which intimated something was rotten in the Valley of the Silicon:

“Google search results for ‘Trump News’ shows only the viewing/reporting of Fake New Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD. Fake CNN is prominent. Republican/Conservative & Fair Media is shut out. Illegal? 96% of … results on ‘Trump News’ are from National Left-Wing Media, very dangerous. Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see. This is a very serious situation-will be addressed!”

Granted, this tweet is non-factual, devoid of any type of evidence and verging on frothing at the mouth. As just one example, take the 96% number Trump quotes above. That came from a very unscientific straw poll done by one reporter on a far-right-leaning site called PJ Media. In effect, Trump did exactly what he accuses Google of doing – he cherry-picked his source and called it a fact.

But what Trump has inadvertently put his finger on is the uneasy balance that Google tries to maintain as both a search engine and a publisher. And that’s where the question becomes cloudy. It’s a moral precipice that may be clear in the minds of Google engineers and executives, but it’s far from that in ours.

Google has gone on the record as ensuring their algorithm is apolitical. But based on a recent interview with Google News head Richard Gingras, there is some wiggle room in that assertion. Gingras stated,

“With Google Search, Google News, our platform is the open web itself. We’re not arbiters of truth. We’re not trying to determine what’s good information and what’s not. When I look at Google Search, for instance, our objective – people come to us for answers, and we’re very good at giving them answers. But with many questions, particularly in the area of news and public policy, there is not one single answer. So we see our role as [to] give our users, citizens, the tools and information they need – in an assiduously apolitical fashion – to develop their own critical thinking and hopefully form a more informed opinion.”

But –  in the same interview – he says,

“What we will always do is bias the efforts as best we can toward authoritative content – particularly in the context of breaking news events, because major crises do tend to attract the bad actors.”

So Google does boost news sites it feels are reputable, and it’s these sites – like CNN – that typically dominate the results. Do reputable news sources tend to lean left? Probably. But that isn’t Google’s fault. That’s the nature of the open web. If you use that as your platform, you build in any inherent biases. And the minute you filter further on top of that platform, you leave yourself open to accusations of editorializing.

There is another piece to this puzzle. The fact is that searches on Google are biased, but that bias is entirely intentional. The bias in this case is yours. Search results have been personalized so that they’re more relevant to you. Things like your location, your past search history, the way you structure your query and a number of other signals are used by Google to filter the results you’re shown. There is no liberal conspiracy. It’s just the way the search algorithm works. In this way, Google is prone to the same type of filter-bubble problem that Facebook has. In another interview, Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative, touches on this:

“I was struck by the idea that whereas those arguments seem to work as late as only just a few years ago, they’re increasingly ringing hollow, not just on the side of the conservatives, but also on the liberal side of things as well. And so what I think we’re seeing here is really this view becoming mainstream that these platforms are in fact not neutral, and that they are not providing some objective truth.”

The biggest challenge here lies not in the reality of what Google is or how it works, but in what our perception of Google is. We will never know the inner workings of the Google algorithm, but we do trust in what Google shows us. A lot. In our own research some years ago, we saw a significant lift in consumer trust when brands showed up on top of search results. And this effect was replicated in a recent study that looked at Google’s impact on political beliefs. This study found that voter preferences can shift by as much as 20% due to biased search rankings – and that effect can be even higher in some demographic groups.

If you are the number one channel for information, if you manipulate the ranking of that information in any way and if you wield the power to change a significant percentage of minds based on that ranking – guess what? You are the arbiter of truth. Like it or not.

The Psychology Behind My Netflix Watchlist

I live in Canada – which means I’m going into hibernation for the next 5 months. People tell me I should take up a winter activity. I tell them I have one. Bitching. About winter – specifically. You have your hobbies – and I have mine.

The other thing I do in the winter is watch movies. And being a with-it, tech-savvy guy, I have cut the cord and get my movie fix through not one but three streaming services: Netflix, Amazon Prime and Crave (a Canadian service). I’ve discovered that the psychology of Netflix is fascinating. It’s the Paradox of Choice playing out in streaming time. It’s the difference between what we say we do and what we actually do.

For example, I do have a watch list. It has somewhere around a hundred items on it. I’ll probably end up watching about 20% of them. The rest will eventually go gentle into that good Netflix night. And according to a recent post on Digg – an admittedly small sample – I’m actually doing quite well: the average completion rate chronicled there is somewhere between 5 and 15%.

When it comes to compiling viewing choices, I’m an optimizer. And that’s me being kind to myself. Others, less kind, refer to it as obsessive behavior. I’m referring to the satisficing/optimizing spectrum of decision making: the more effort you put into a decision, the closer you are to the optimizing end of the spectrum; if you make choices quickly and with your gut, you’re a satisficer. I put an irrational amount of energy into rationalizing my viewing options.

What is interesting about Netflix is that it defers the Paradox of Choice. I dealt with this in a previous column, but I admit I’m having second thoughts. Netflix’s watch list provides us with a sort of choosing purgatory – a middle ground where we can save according to the type of watcher we think we are. It’s here where the psychology gets interesting. But before we go there, let’s explore some basic psychological principles that underpin this Netflix paradox of choice.

Of Marshmallows and Will Power

In the 1960s, Walter Mischel and his colleagues conducted the now-famous Marshmallow Test, a longitudinal study that spanned several years. The finding (currently in some doubt) was that children who, when quite young, had the willpower to resist immediately taking a treat (one marshmallow) in return for the promise of a greater treat (two marshmallows) 15 minutes later would go on to do substantially better in many aspects of their lives: education, careers, social connections, health. Without getting into the controversial aspects of the test, let’s just focus on the role of willpower in decision making.

Mischel talks about a hot and cool system for making decisions that involve self-gratification. The “hot” is our emotions; the “cool” is our logic. We all have different set points in the balance between hot and cool, and where these set points sit depends on willpower. The more willpower we have, the more likely we are to delay an immediate reward in return for a greater reward sometime in the future.

Our ability to rationalize and expend cognitive resources on a decision is directly tied to our willpower. And researchers have found that willpower is a finite resource. The more we use it in a day, the less we have in reserve. Psychologists call this “ego depletion,” and a loss of willpower leads to decision fatigue. The more tired we become, the less our brain is willing to work on the decisions we make. In one particularly interesting example, parole boards are much more likely to let prisoners go either first thing in the morning or right after lunch than as the day wears on. Granting a prisoner his or her freedom involves risk. It requires more thought. Keeping them in prison is a default decision that – cognitively speaking – is a much easier choice.

Netflix and Me: Take Two

Let me now try to rope all this in and apply it to my Netflix viewing choices. When I add something to my watch list, I am making a risk-free decision. I am not committing to watch the movie now. Cognitively, it costs me nothing to hit the little plus icon. Because it’s risk free, I tend to be somewhat aspirational in my entertainment foraging. I add foreign films, documentaries, old classics, independent films and – just to leaven out my selection – the latest audience-friendly blockbusters. When it comes to my watch list additions, I’m pretty eclectic.

Eventually, however, I will come back to this watch list and will actually have to commit 2 hours to watching something. And my choices are very much affected by decision fatigue. When it comes to instant gratification, a blockbuster is an easy choice. It will have lots of action, recognizable and likeable stars, a non-mentally-taxing script – let’s call it the cinematic equivalent of a marshmallow that I can eat right away. All my other watch list choices will probably be more gratifying in the long run, but more mentally taxing in the short term. Am I really in the mood for a European art-house flick? The answer probably depends on my current “ego-depletion” level.

This entire mental framework presents its own paradox of choice every time I browse through my watch list. I know I have previously said the Paradox of Choice isn’t a thing when it comes to Netflix. But I may have changed my mind. I think it depends on what resources we’re allocating. In his book The Paradox of Choice, Barry Schwartz cites Sheena Iyengar’s famous jam experiment. In that instance, the resource was the cost of jam. But if we’re talking about two hours of my time – at the end of a long day – I have to confess that I struggle with choice, even when it’s already been narrowed to a pre-selected short list of potential entertainment. I find myself defaulting to what seems like a safe choice – a well-known Hollywood movie – only to be disappointed when the credits roll. When I do have the willpower to forego the obvious and take a chance on one of my more obscure picks, I’m usually grateful I did.

And yes, I did write an entire column on picking a movie to watch on Netflix. Like I said, it’s winter and I had a lot of time to kill.

 

Why Disruption is Becoming More Likely in the Data Marketplace

Another week, another breach. Some 500 million records were hacked from Marriott, making it the second-largest data breach in history, behind Yahoo’s breach of 3 billion user accounts.

For now. There will probably be a bigger breach. There will definitely be a more painful breach. And by painful, I mean painful to you and me.  It’s in that pain – specifically, the degree of the pain – that the future of how we handle our personal data lies.

Markets innovate along paths of least resistance. Market development is a constantly evolving dynamic tension between innovation and resistance. If there is little resistance, markets innovate in predictable ways from their current state. If that innovation leads to pushback, the market resists. When markets meet significant resistance, disruption occurs, opening the door for innovation in new directions that get around that resistance. When we talk about data, we are talking about a market whose value is still in the process of defining itself. And it’s in the definition of value that we’ll find the potential market resistance for data.

Individual data is a raw resource. It doesn’t have value until it becomes “Big.” Personal data needs to be aggregated and structured to become valuable. This creates a dilemma for us. Unless we provide the raw material, there is no “big” data possible. This makes it valuable to others, but not necessarily to ourselves.

Up to now, what we have exchanged our privacy for has been convenience. It’s easier for us to store our credit card data with Amazon so we can enable one-click ordering. And we feel this exchange has been a bargain. But it remains an asymmetrical exchange. Our data has no positive value to us, only negative. We can be hurt by our data, but other than the aforementioned exchange for convenience, it doesn’t really help us. That is why we’ve been willing to give it away for so little. But once it’s aggregated and becomes “big,” it has tremendous value to the people we give it to. It also has value to those who wish to steal it from those we have entrusted it to. The irony here is that whether that data is in the “right” hands or the “wrong” ones, it can still mean pain for us. The differentiator is the degree of that pain.

Let’s examine the potential harm that could come from sharing our data. How painful could this get? Literally every warning we write about here at MediaPost has data at the core. Just yesterday, fellow Insider Steven Rosenbaum wrote about how the definition of warfare has changed. The fight isn’t for land. War is now waged for our minds. And data is used to target those minds.

Essentially, sharing our data makes us vulnerable to being targeted. And the outcome of that targeting can range from simply annoying to life-alteringly dangerous. Even the fact that we refer to it as targeting should raise a red flag. There’s a reason why we use a term typically associated with a negative outcome for those on the receiving end. You’re very seldom targeted for things that are beneficial to you. And that’s true no matter who’s the one doing the targeting. At its most benign, targeting is used to push some type of messaging – typically advertising – to you. But you could also be targeted by Russian hackers in an attempt to subvert democracy. Most acutely, you could be targeted for financial fraud. Or blackmail. Targeting is almost never a good thing. The degree of harm can vary, but the cause doesn’t. Our data – the data we share willingly – makes targeting possible.

We are in an interesting time for data. We have largely shrugged off the pain of the various breaches that have made it to the news. We still hand over our personal data with little to no thought of the consequences. And because we still participate by providing the raw material, we have enabled the development of an entire data marketplace. We do so because there is no alternative without making sacrifices we are not willing to make. But as the degree of personal pain continues to get dialed up, all the prerequisites of market disruption are being put in place. Breaches will continue. The odds of us being personally affected will continue to climb. And innovators will find solutions to this problem that will be increasingly easy to adopt.

For many, many reasons, I don’t think the current trajectory of the data marketplace is sustainable. I’m betting on disruption.

 

 

A Thought on Thoughtfulness

Writing this column (first for Search Insider, then here) has been a private social experiment for me. It’s one that has now lasted at least 14 years and is pushing 700 iterations – the number of columns I’ve written. It’s been fascinating to see which topics elicit a reaction from the MediaPost readership. Granted, the metrics I have available are limited to two: how often I’m shared and how often I get comments. Still, based on this limited feedback, I’ve come to some conclusions.

I’ll be totally honest here. Just a few weeks ago I was considering packing it in. But I didn’t. I attacked advertising instead. Perhaps you could chalk it up to the mood I was in at the time.

If you’ve never written for an audience, know that it’s a soul-sucking thing to do. You metaphorically chop out little – or large – pieces of your brain and string them up to see what flavor the carrion eaters (that would be you, the readers) are favouring today. That sounds gruesome, but when it comes to sharing ideas, you want to be eaten alive. It’s a good thing. I have found – again, based on the limited metrics I have access to – that I’m not usually the most popular taste du jour. There are other writers here at MediaPost who are shared far more often than I am.

I’m okay with that. That wasn’t why I was considering packing it in. I was considering doing that because I wasn’t sure I had anything thoughtful left to say. After 14 years of doing this, I’ve said a lot of things here on MediaPost, and I was worried the well might be running dry. For heaven’s sake, I don’t even work in the industry anymore! I haven’t for 5 years now. Who am I to be pontificating on advertising, media or marketing?

But then I reconsidered. And I did so precisely because I’m not the most popular writer in the MediaPost stable. I don’t really care if you share me (okay, I care a little bit). I do care if I make you think. And I think I can still do that. At least, I can on a good day.

The reason I keep carving off chunks of my prefrontal cortex to share with you is because I love thoughtfulness. If I can contribute to the dissemination of thoughtfulness – even in a small way – I need to keep doing what I’m doing.

I believe thoughtfulness is in danger. We are all collectively suffering from FOMO – we are scared of missing something. And so we flick from meme to meme. I call them cog-bits: the proliferating mental tidbits thrown at us each day. They may be top-ten lists, videos, pictures, posts – even news articles. The one thing they have in common is that they have been crafted for attention spans of 10 seconds or less. If you’re not hooked, you move on to the next cog-bit. They are not designed to make you think – their entire purpose is to make you share, which requires just 0.05 seconds of rational thought.

I admit I am not immune to the charms of a cog-bit. I’m a sucker for them, just like I suspect you are. But I also believe our mental diet should be balanced with some long-form, thought-provoking content. Thinking shouldn’t always be easy and instant. The end result shouldn’t always be a knee-jerk jamming of the share button. We should mull more. We should roll thoughts over in our minds, picking them apart gradually. We should be introduced to concepts and perspectives we haven’t considered before. And it’s okay if – in this process – we find our own minds changing. We need to do that more, too.

To me, my best day writing is when I provoke a conversation. I don’t mean a trolling comment. I mean an honest-to-goodness conversation, where the parties are open to thoughtfulness and are mentally stretching the boundaries of their own perspectives. When was the last time you had a conversation where you really had to think – where you had to pause to catch your cognitive breath? It’s been a while, hasn’t it?

In looking back at the last 14 years of writing for MediaPost, I have found that while I hope I have introduced some new ideas to you, the real reward has been how this weekly exercise has shaped my own thoughts. Frankly, some weeks it’s a pain in the ass to come up with an idea for the Tuesday slot. But when I actually engage with the creation of a column, I always find my ideas shift, just a little. Sometimes, I throw ideas out there that I know will be contentious – ideas that will make you think. Sometimes they will be half-baked. You may agree, you may not. All I ask is that you think about them.

That’s why I keep doing this.

I’ll Take Reality with a Side of Augmentation, Please….

We don’t want to replace reality. We just want to nudge it a little.

At least, that seems to be the upshot of a new survey from the international law firm Perkins Coie. The firm asked start-up founders, tech execs, investors and consultants for their predictions about both augmented reality (AR) and virtual reality (VR). While VR had a head start, the majority of those surveyed (67%) felt that AR would overtake VR in revenue within the next three years.

The reasons they gave were mainly focused on roadblocks in the technology itself: VR headsets were too bulky, the user experience was not smooth enough due to technical limitations, the cost of adopting VR was higher than AR and there was not enough content available in the VR universe.

I think there’s another reason. We actually like reality. We’re not looking to isolate ourselves from reality. We’re looking to enhance it.

Granted, if we are talking about adoption rates, there seem to be many more potential applications for augmented reality. Everything you do could stand a little augmentation. For example, you could probably do your job better if your own abilities were augmented with real-time information. Pilots would be better at flying. Teachers would be better at teaching. Surgeons would be better at performing surgery. Mechanics would be better at fixing things.

You could also enjoy things more with a little augmentation. Looking for a restaurant would be easier. Taking a tour would be more informative. Attending a play or watching a movie could be candidates for a little augmented content. AR could even make your layover at an airport less interminable.

I think of VR as a novelty. The sheer nerdiness of it makes it a technology of limited appeal. As one developer quoted in the study says, “Not everyone is a gadget freak. The industry needs to appeal to those who aren’t.” AR has a clearly understood user benefit. We can all grasp a scenario where augmentation could make our lives better in some way. But it’s hard to understand how VR would have a real impact on our day to day lives. Its appeal seems to be constrained to entertainment, and even then, it’s entertainment aimed at a limited market.

The AR wave is advancing in some interesting directions. Google Glass has retreated from the consumer market and is currently concentrating on business and industrial applications. The premise of Glass is to let you work smarter, access instant expertise and stay hands-on. Bose is betting on a subset of AR it dubs aural augmentation, believing sound is the best way to add content to our lives. And even Amazon has borrowed an idea from IKEA and stepped into the AR ring with Amazon AR View, which lets you place items you’re considering buying in your home to see if they fit before you buy.

One big player still betting heavily on VR is Facebook, with its Oculus headset. This is not surprising, given that Mark Zuckerberg is the quintessential geek and seems intent on manufacturing our social reality for us. In a demonstration a year ago, Zuckerberg struck all kinds of tone-deaf clunkers when he and Facebook social VR chief Rachel Franklin took on cartoon personas for a VR tour of devastated Puerto Rico. The juxtaposition could only be described as weird: a scene of all-too-real human misery visited by a cartoon Zuckerberg. At one point, he enthused, “It feels like we’re really here in Puerto Rico.”

You weren’t, Mark. You were safely in Facebook headquarters in Menlo Park, California – wearing a headset that made you look like a dork. That was the reality.