Why We’re Trading Privacy for Convenience

In today’s world, increasingly quantified and tracked by the Internet of Things, we are talking a lot about privacy. When we stop to think about it, we are vociferously for privacy. But then we immediately turn around and click another “accept” box on a terms and conditions form that barters our personal privacy away, in increasingly large chunks. What we say and what we do are two very different things.

What is the deal with humans and privacy anyway? Why do we say it is important to us, and why do we keep giving it away? Are we looking at the inevitable death of our concept of privacy?

Are We Hardwired for Privacy?

It does seem that – all things being equal – we favor privacy. But why?

There is an evolutionary argument for having some “me-time”. Privacy has an evolutionary advantage both when you’re most vulnerable to physical danger (on the toilet) and when you’re exposed to mating rivalry (having sex). If you can keep these things private, you’ll both live longer and have more offspring. So it’s not surprising that humans are hardwired to desire a certain amount of privacy.

But our modern understanding of privacy actually conflates a number of concepts. There is protective privacy, there is the need for solitude, and finally there is our moral and ethical privacy. Each of these has different behavioral origins, but when we talk about our “right to privacy” we don’t distinguish between them. This can muddy the waters when we dig deep into our relationship with our privacy.

Blame England…

Let’s start with the last of these – our moral privacy. This is actually a pretty modern concept. Until 150 years ago, we as a species did pretty much everything communally. Our modern concept of privacy had its roots in the Industrial Revolution and Victorian England. There, the widespread availability of the patent lock and the introduction of the “private” room quickly led to a class-stratified quest for privacy. This was coupled with the moral rectitude of the time. Kate Kershner from howstuffworks.com explains:

“In the Victorian era, the ‘personal’ became taboo; the gilded presentation of yourself and family was critical to social standing. Women were responsible for outward piety and purity, men had to exert control over inner desires and urges, and everyone was responsible for keeping up appearances.”

In Victorian England, privacy became a proxy for social status. Only the highest levels of the social elite could afford privacy. True, there was some degree of personal protection here that probably had evolutionary behavioral underpinnings, but it was all tied up in the broader evolutionary concept of social status. The higher your class, the more you could hide away the all-too-human aspects of your private life and thoughts. In this sense, privacy was not a right, but a status token that may be traded off for another token of equal or higher value. I suspect this is why we may say one thing but do another when it comes to our own privacy. There are other ways we determine status now.

Privacy vs Convenience

In a previous column, I wrote about how being busy is the new status symbol. We are defining social status differently, and I think how we view privacy might be caught between how we used to recognize status and how we do it today. In 2013, Google’s Vint Cerf said that privacy may be a historical anomaly. Social libertarians and legislators were quick to condemn Cerf’s comment, but it’s hard to argue with his logic. In Cerf’s words, transparency “is something we’re gonna have to live through.”

Privacy might still be a hot-button topic for legislators, but it’s probably dying not because of some nefarious plot against us but because we’re quickly trading it away. Busy is the new rich, and convenience (or our illusion of convenience) allows us to do more things. Privacy may just be a tally token in our quest for social status and, increasingly, we may be willing to trade it for more relevant tokens. As Greg Ferenstein, author of the Ferenstein Wire, said in an exhaustive (and visually bountiful) post on the birth and death of privacy,

“Humans invariably choose money, prestige or convenience when it has conflicted with a desire for solitude.”

If we take this view, then it’s not so much how we lose our privacy that becomes important but who we’re losing it to. We seem all too willing to give up our personal data as long as two prerequisites are met: 1) We get something in return; and, 2) We have a little bit of trust in the holder of our data that they won’t use it for evil purposes.

I know those two points raise the hackles of many amongst you, but that’s where I’ll have to leave it for now. I welcome you to have the next-to-last word (because I’ll definitely be revisiting this topic). Is privacy going off the rails and, if so, why?

The Retrofitting of Broadcasting

I returned to my broadcast school for a visit last week. Yes, it was nostalgic, but it was also kind of weird.

Here’s why…

I went to broadcast school in the early ’80s. The program I attended, at the Northern Alberta Institute of Technology, had just built brand new studios, outfitted with the latest equipment. We were the first group of students to get our hands on the stuff. Some of the local TV stations even borrowed our studio to do their own productions. SCTV – with the great John Candy, Catherine O’Hara, Eugene Levy, Rick Moranis and Andrea Martin – was produced just down the road at ITV. It was a heady time to be in TV. I don’t want to brag, but yeah, we were kind of a big deal on campus.

That was then. This was now. I went back for my first visit in 35 years, and nothing had really changed physically. The studios, the radio production suites, the equipment racks, the master control switcher – it was all still there – in all its bulky, behemoth-like glory. They hadn’t even changed the lockers. My old one was still down from Equipment Stores and right across from one of the classrooms.

The disruption of the past four decades was instantly crystallized. None of the students today touched any of that ’80s-era technology – well – except for the locker. That was still functional. The rows and rows of switches, rotary pots, faders and other doodads hadn’t been used in years. The main switching board served as a makeshift desk for a few computer monitors and a keyboard. The radio production suites were used to store old office chairs. The main studio, where we once taped interviews, music videos, multi-camera dramas, sketch comedies and even a staged bar fight? Yep, more storage.

The campus news show was still shot in the corner, but the rest of that once state-of-the-art studio was now a very expensive warehouse. The average iPhone today has more production capability than the sum total of all that analog wizardry. Why use a studio when all you need is a green wall?

I took the tour with my old friend Daryl, who is still in broadcasting. He is the anchor of the local 6 o’clock news. Along the way we ran into a couple of other old schoolmates who were now instructors. And we did what middle-aged guys do. We reminisced about the glory days. We roamed our old domain like dinosaurs ambling towards our own twilight.

When we entered the program, it was the hottest ticket in town. They had 10 potential students vying for every program seat available. Today, on a good year, it’s down to 2 to 1. On a bad year, everyone who applies gets in. The program has struggled to remain relevant in an increasingly digital world and now focuses on those who actually want to work in television news. All the other production we used to do has been moved to a digital production program.

We couldn’t know it at the time, but we were entering broadcasting just when broadcasting had reached the apex of its arc. You still needed bulk to be a broadcaster. An ENG (Electronic News Gathering) camera weighed in at a hefty 60 pounds plus, not including the extra battery belt. Now, all you need is a smartphone and a YouTube account. The only thing produced at most local stations is the news. And the days are numbered for even that.

If you are middle-aged like I am, your parents depend on TV for their news. For you, it’s an option – one of many places you can get it. You probably watch the 6 o’clock news more out of habit than anything. And your kids never watch it. I know mine don’t. According to the Pew Research Center, only 27% of those 18-29 turn to TV for their news. Half of them get their news online. In my age group, 72% of us still get our news from TV, with 29% of us turning online. The TV news audience is literally aging to death.

My friend Daryl sees the writing on the wall. Everybody in the business does. When I met his co-anchor and told her that I had taken the digital path, she said, “Ah, an industry with a future.”

Perhaps, but then again, I never got my picture on the side of a bus.

Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Second, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking, for example. You know: old-fashioned, face-to-face, sharing-the-same-physical-space communication. It’s the one channel that most demands undivided attention. But when is the last time you had a conversation where you were giving it 100 percent of your attention? I actually had one this past week, and I have to tell you, it unnerved me. I was meeting with a museum curator and she immediately locked eyes with me and gave me the full breadth of her attention. I faltered. I couldn’t hold her gaze. As I talked, I scanned the room we were in. It’s probably been years since someone did that to me. And nary a smartphone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version – with all due respect to Eartha Kitt and Lee Meriwether). My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That means that 1.48 adults per household are definitely dividing their attention amongst at least two devices while watching Game of Thrones. My daughters and wife are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coal mine. We simply don’t allocate undivided attention to anything anymore. We think we’re multi-tasking, but that’s a myth. We don’t multi-task – we mentally fidget. We have the average attention span of a gnat.

So, what is the price we’re paying for living in this attention-deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already stated how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting to each other. Research studies show that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention to us. Effective communication requires an adequate allocation of attention on both ends; otherwise it spins into a downward spiral.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphone and check it. We’re paying a price for our mythical multitasking – Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one, then write down the numbers from 1 to 20 on the other. Next, repeat this same exercise, but this time, alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second and so on. What’s your time? It will probably be double what it was the first time.
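
If you’d rather not dig out pen and paper, here is a rough keyboard version of Napier’s test – a minimal Python sketch of my own, not her protocol. Typing stands in for handwriting, so the absolute times will differ, but the gap between the two rounds is the point:

```python
import time

# The two "tasks" from the test: the sentence (letters only) and the numbers 1-20.
SENTENCE = list("I am a great multi-tasker".replace(" ", "").replace("-", ""))  # 20 letters
NUMBERS = [str(n) for n in range(1, 21)]                                        # "1" .. "20"

def timed_typing(items):
    """Prompt for each item in order; return the total elapsed seconds."""
    start = time.time()
    for item in items:
        while input(f"type {item}: ").strip() != item:
            pass  # keep asking until the entry matches
    return time.time() - start

# Round 1: one task at a time (all the letters, then all the numbers).
round_one = timed_typing(SENTENCE) + timed_typing(NUMBERS)

# Round 2: constant switching (letter, number, letter, number...).
alternating = [x for pair in zip(SENTENCE, NUMBERS) for x in pair]
round_two = timed_typing(alternating)

print(f"One task at a time: {round_one:.1f} seconds")
print(f"Switching back and forth: {round_two:.1f} seconds")
```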

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use it. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – make your smartphone as addictive as a slot machine.
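
To see why the variability matters, here is a minimal Python sketch (an illustration of my own, not anything from Harris): a fixed schedule and a variable one that pay out at roughly the same overall rate, except that only the variable one keeps you guessing about which pull will pay off.

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

def fixed_schedule(n_checks, every=5):
    """Predictable: a reward arrives on every 5th check of the phone."""
    return [1 if (i + 1) % every == 0 else 0 for i in range(n_checks)]

def variable_schedule(n_checks, p=0.2):
    """Unpredictable: each check pays off with probability p.
    Same long-run rate as the fixed schedule, but you never know
    which check will be the one that rewards you."""
    return [1 if random.random() < p else 0 for _ in range(n_checks)]

n = 30
fixed = fixed_schedule(n)
variable = variable_schedule(n)

print("fixed   :", "".join("X" if r else "." for r in fixed))
print("variable:", "".join("X" if r else "." for r in variable))
print(f"rewards -> fixed: {sum(fixed)}, variable: {sum(variable)}")
```

Run it a few times without the seed and the second row never looks the same twice; that uncertainty is exactly what the design exploits.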

I’m sorry, but I’m no match for all of that.

I, Robot…

Note: No Artificial Intelligence was involved in the creation of this column.

In 1942, science fiction writer Isaac Asimov introduced the Three Laws of Robotics in what would become his collection of short stories, I, Robot:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov presented the Laws as coming from the Handbook of Robotics, 56th Edition, 2058 A.D. What was once an unimaginably distant time in the future is now knocking with increasing intensity on the door of the present. And Elon Musk, for one, is worried: “AI is a fundamental risk to the existence of human civilization.” Musk believes that, Laws of Robotics or no, we won’t be able to control this genie once it gets out of its bottle.

Right now, the genie looks pretty benign. In the past year, the Washington Post has used robot reporters to write over 850 stories. The Post believes this is a win-win for its human reporters, because the robot, named Heliograf, can:

  • Cover stories that wouldn’t have been covered due to lack of human resources
  • Do the factual heavy lifting for human reporters
  • Alert humans to possible news stories in big data sets

So, should we fear or cheer robots? I think the Post’s experiment highlights two areas that AI excels at, and indicates how we might play nice with machines.

For AI to work effectively, the dots have to be pretty well sketched out. When they are, AI can be tireless in scouting out relevant facts and data where humans would tend to get bored easily. But humans are still much better at connecting those dots, especially when no obvious connection is apparent. We do it through something called intuition. It’s at least one area where we can still blow machines away.

Machines are also good at detecting patterns in overwhelming amounts of data. Humans tend to overfit: we make the data fit our narratives. We’ll come back to this point in a minute, but for now, let’s go back to intuition. It’s still the trump card we humans hold. In 2008, Wired editor Chris Anderson prematurely (and, many believe, incorrectly) declared the Scientific Method dead, thanks to the massive data sets we now have available:

“We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”

Anderson gets it partly right, but he also unfairly gives intuition short shrift. This is not a zero-sum game. Intuition and A.I. can and should play nicely together. As I mentioned a few weeks ago, human intuition was found to boost the effectiveness of an optimization algorithm by 25%.

Evolutionary biologist Richard Dawkins recently came to the defense of intuition in Science, saying:

“Science proceeds by intuitive leaps of the imagination – building an idea of what might be true, and then testing it”

The very human problem comes when we let our imaginations run away from the facts, bending science to fit our hypotheses:

“It is important that scientists should not be so wedded to that intuition that they omit the very important testing stage.”

There is a kind of reciprocation here – an oscillation between phases. Humans are great at some stages – the ones that require intuition and imagination – and machines are better at others – where a cold and dispassionate analysis of the facts is required. Like most things in nature that pulse with a natural rhythm, the whole gains from the opposing forces at work here. It is a symphony with a beat and a counterbeat.

That’s why, for the immediate future anyway, machines should bend not to our will, but to our imagination.

Addicted to Tech

A few columns ago, I mentioned one of the aspects that is troubling me about technology – the shallowness of social media. I had mentioned at the time that there were other aspects that were equally troubling. Here’s one:

Technology is addictive – and it’s addictive by design.

Let’s begin by looking at the definition of addiction:

Persistent compulsive use of a substance known by the user to be harmful

So, let’s break it down. I don’t think you can quibble with the persistent, compulsive use part. When’s the last time you had your iPhone in your hand? We can simply swap out “substance” for “device” or “technology.” So that leaves us with the last qualifier, “known by the user to be harmful” – and there are two parts to this: is it harmful, and does the user know it’s harmful?

First, let’s look at the neurobiology of addiction. What causes us to use something persistently and compulsively? Here, dopamine is the culprit. Our reward center uses dopamine and the pleasurable sensation it produces as a positive reinforcement to cause us to pursue activities which over many hundreds of generations have proven to be evolutionarily advantageous. But Dr. Gary Small, from the UCLA Brain Research Institute, warns us that this time could be different:

“The same neural pathways in the brain that reinforce dependence on substances can reinforce compulsive technology behaviors that are just as addictive and potentially destructive.”

We like to think of big tobacco as the most evil of all evil empires – guilty of promoting addiction to a harmful substance – but is there a lot separating them from the purveyors of tech – Facebook or Google, for instance? According to Tristan Harris, there may be a very slippery slope between the two. I’ve written about Tristan before. He’s the former Google Product Manager who launched the Time Well Spent non-profit, devoted to stopping “tech companies from hijacking our minds.” Harris points the finger squarely at the big Internet players for creating platforms that are intentionally designed to suck up as much of our time as possible. There’s empirical evidence to back up Harris’s accusations. Researchers at Michigan State University and from two universities in the Netherlands found that even seeing the Facebook logo can trigger a conditioned response in a social media user that starts the dopamine cycle spinning. We start jonesing for a social media fix.

So, what if our smartphones and social media platforms seduce us into using them compulsively? What’s the harm, as long as it’s not hurting us? That’s the second part of the addiction equation – is whatever we’re using harmful? After all, it’s not like tobacco, where it was proven to cause lung cancer.

Ah, but that’s the thing, isn’t it? We were smoking cigarettes for almost a hundred years before we finally found out they were bad for us. Sometimes it takes a while for the harmful effects of addiction to appear. The same could be true for our tech habit.

Tech addiction plays out at many different levels of cognition. This could potentially be much more sinister than just the simple waste of time that Tristan Harris is worried about. There’s mounting evidence that overuse of tech could dramatically alter our ability to socialize effectively with other humans. The debate, which I’ve talked about before, comes when we substitute screen-to-screen interaction for face-to-face. The supporters say that this is simply another type of social bonding – one that comes with additional benefits. The naysayers worry that we’re just not built to communicate through screens and that – sooner or later – there will be a price to be paid for our obsessive use of digital platforms.

Dr. Jean Twenge, professor of psychology at San Diego State University, researches generational differences in behavior. It’s here that the full impact of introducing a disruptive environmental factor can be found. She found a seismic shift in behaviors between Millennials and the generation that followed them. It was a profound difference in how these generations viewed the world and where they spent their time. And it started in 2012 – the year when the proportion of Americans who owned a smartphone surpassed 50 percent. She sums up her concern in unequivocal terms:

“The twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.”

Not only are we less happy, we may be becoming less smart. As we become more reliant on technology, we do something called cognitive off-loading. We rely on Google rather than our memories to retrieve facts. We trust our GPS more than our own wayfinding strategies to get us home. Cognitive off-loading is a way to move beyond the limits of our own minds, but there may be an unacceptable trade-off here. Brains are like muscles – if we stop using them, they begin to atrophy.

Let’s go back to that original definition and the three qualifying criteria:

  • Persistent, compulsive use
  • Harmful
  • We know it’s harmful

In the case of tech, let’s not wait a hundred years to put check marks after all of these.

To Buy or Not to Buy: The Touchy Subject of Mobile E-Commerce

A recent report from Akamai indicates that users have little patience when it comes to making purchases on a mobile device. Here are just a few of the stats:

  • While almost half of all consumers browse via their phones, only 1 in 5 complete transactions on mobile
  • Optimal load times for peak conversions ranged from 1.8 to 2.7 seconds across device types
  • Just a 100-millisecond delay in load time hurt conversion rates by up to 7%
  • Bounce rates were highest among mobile shoppers and lowest among those using tablets
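
To get a feel for what those load-time numbers could mean, here is a back-of-the-envelope Python sketch. It assumes a hypothetical 3% baseline mobile conversion rate and treats the worst-case “up to 7% per 100 milliseconds” figure as if it compounded with every additional 100 ms of delay (a simplifying assumption of mine, not a claim the Akamai report makes):

```python
BASE_CONVERSION = 0.03     # hypothetical baseline mobile conversion rate
PENALTY_PER_100MS = 0.07   # worst-case hit per 100 ms of added delay (Akamai)

def conversion_after_delay(extra_delay_ms):
    """Estimated conversion rate after adding `extra_delay_ms` of load time."""
    steps = extra_delay_ms / 100.0
    return BASE_CONVERSION * (1 - PENALTY_PER_100MS) ** steps

for delay in (0, 100, 500, 1000):
    print(f"+{delay:4d} ms -> {conversion_after_delay(delay):.2%} "
          f"(baseline {BASE_CONVERSION:.2%})")
```

Under those assumptions, a single extra second of load time cuts the estimated conversion rate roughly in half, which is one way to see why the gap between mobile browsing and mobile buying is so wide.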

But there may be more behind this than just slow load times. We also have to consider what modes we’re in when we’re interacting with our mobile device.

In 2010, Microsoft did a fascinating research project that looked at how user behaviors varied from desktop to tablet to smartphone. The research was headed by Jacquelyn Krones, who was a Search Product Manager at the time. Search was the primary activity examined, but there was a larger behavioral context that was explored. While the study is seven years old, I think the core findings are still relevant. The researchers found that we tend to have three large buckets of behaviors: missions, explorations and excavations. Missions were focused tasks, usually looking for a specific piece of information – e.g., an address or phone number. Explorations were more open-ended and less focused on a given destination – e.g., seeing if there was anything you wanted to do this Friday night. Excavations typically involved multiple tasks within an overarching master task – e.g., researching an article. In an interview with me, Krones outlined their findings:

“There’s clearly a different profile of these activities on the different platforms. On desktops and laptops, people do all three of the activities – they conduct missions and excavations and explorations.

“On their phones we expected to see lots of missions – usually when you use your mobile phone and you’re conducting a search, whatever you’re doing in terms of searching is less important than what’s going on with you in the real world – you’re trying to get somewhere, you’re having a discussion with somebody and you want to look something up quick or you’re trying to make a decision about where to go for dinner.

“But we were surprised to find that people are using their mobile phones for exploration. But once we saw the context, it made sense – people have a low tolerance for boredom. Their phone is actually pretty entertaining, much more entertaining than just looking at the head in front of you while you’re waiting in line. You can go check a sports score, read a story, or look at some viral video and have a more engaged experience.

“On tablets, we found that people are pretty much only using them for exploration today. I had expected to see more missions on tablets, and I think that that will happen in the future, but today people perceive their mobile phone as always with them, very personal, always on, and incredibly efficient for getting information when they’re in mission mode.”

Another study, coming out of the University of British Columbia’s Okanagan campus, also saw a significant difference in behavioral modality when it came to interacting with touchscreens. Assistant Professor Ying Zhu was the principal author:

“The playful and fun nature of the touchscreen enhances consumers’ favour of hedonic products; while the logical and functional nature of a desktop endorses the consumers’ preference for utilitarian products,” explains Zhu.

“Zhu’s study also found that participants using touchscreen technology scored significantly higher on experiential thinking than those using desktop computers. However, those on desktops scored significantly higher on rational thinking.”

I think what we have here is an example of thinking, fast and slow. I suspect we’re compartmentalizing our activities, subconsciously setting some aside for completion on the desktop. I would suspect utilitarian-type purchasing would fall into this category. I know that’s certainly true in my case. As Dr. Zhu noted, we have a very right-brain relationship with touchscreens, while desktops tend to bring out our left brain. I have always been amazed at how our brains subconsciously prime us based on an anticipated operating environment. Chances are, we don’t even realize how much our behaviors change when we move from a smartphone to a tablet to a desktop. But I’d be willing to place a significant wager that it’s this subconscious techno-priming that’s causing some of these behavioral divides between devices.

Slow load times are never a good thing, on any device, but while they certainly don’t help with conversions, they may not be the only culprit sitting between a user and a purchase. The device itself could also be to blame.

Is Google Slipping, Or Is It Just Our Imagination?

Recently, I’ve noticed a few articles speculating about whether Google might be slipping:

Last month, the American Customer Satisfaction Index notified us that our confidence in search is on the decline. Google’s score dropped 2% to 82. The culprit was the amount of advertising found on the search results page. To be fair, both Google and search in general have had lower scores. Back in 2015, Google scored a 77, its lowest score ever.

This erosion of customer satisfaction may be leading to a drop in advertising ROI. According to a recent report from Analytic Partners, the return on investment from paid search dropped 27% from 2010 to 2016. Search wasn’t alone; all digital ROI seems to be in decline. Analytic Partners’ VP of Marketing, Joe LaSala, predicts that ROI from digital will continue to decline until it converges with ROI from traditional media.

In April of this year, Forbes ran an article asking the question: “Is Google’s Search Quality Starting to Decline?” Contributors to this decline, according to the article, included the introduction of rich snippets and featured news, the use of popularity as a ranking factor, and ongoing black-hat SEO manipulation.

But the biggest factor in the drop of Google’s perceived quality was actually in the perception itself. As the Forbes article’s author, Jayson DeMers, stated:

“It’s important to realize just how sophisticated Google is, and how far it’s come from its early stages, as well as the impossibility of having a ‘perfect’ search platform. Humans are flawed creatures, and our actions are what are dictating the shape of search.”

Google is almost 20 years old. The domain Google.com was registered on September 15, 1997. Given that 20 years is an eternity in internet years, it’s actually amazing that it’s stood up as well as it has for the past two decades. Whether Google’s naysayers care to admit it or not, that’s due to Google’s almost religious devotion to the quality of its search results. That devotion extends to advertising. The balance between user experience and monetization has always been one that Google has paid a lot of attention to.

But it’s not the presence of ads that has led to this perceived decline in quality. It’s a change in our expectations of what a search experience should be. I would argue that for any given search, using objective measures of result relevance, the results Google shows today are far more relevant than the results it showed in 2008, the year it got its highest customer satisfaction score (86%). Since then, Google has made great strides in deciphering user intent and providing a results page that’s a good match for that intent. Sometimes it will get it wrong, but when it gets it right, it puts together a page that’s a huge improvement over the vanilla, one-size-fits-all results page of 2008.

The biggest thing that’s changed in the past 10 years is the context from which we’re launching those searches. In 2008, it was almost always the desktop. But today, chances are we’re searching from a mobile device – or our car – or our home through Amazon Echo. This has changed our expectations of search. We are task-focused, rather than “browsing” for information. This creates an entirely different mental framework within which we receive the results. We apply a new yardstick of acceptable relevance. Here, we’re not looking for a list of 20 possible answers – we’re looking for one answer. And it had better be the right one. Context-based search must be hyper-relevant.

Compounding this trend is the increasing number of circumstances where search is going “under the hood” – something I’ve been forecasting for a long time now. For example, if you use Siri to launch a search through your CarPlay-connected device when you’re driving, the results actually come from Bing, but they’re stripped of the context of the Bing search results page. Here, the presentation of search results is just one step in a multi-step task flow. It’s important that the result on top is the one you’re probably looking for.

Unfortunately for Google – and the other search providers – this expectation stays in place even when the context shifts. When we launch a search from our desktop, we are increasingly intolerant of results that are even a little off base from our intent. Ads become the most easily identified culprit. A results set that would have seemed almost frighteningly prescient even a few years ago now seems subpar. Google has come a long way in the past 20 years, but it’s still losing ground to our expectations.