How Our Brain Decides How Long We Look at Something

This week, I’ve talked about how our attention-focusing mechanism moves the spotlight of foveal attention around different environments: a Where’s Waldo picture, a webpage, a website with advertising and a search engine results page. I want to wrap up the week with another study, this one examining the role of brain waves in regulating how we shift the spotlight of attention from one subject to another.

Eye Spy

If you do eye tracking research, you soon learn to distinguish fixations and saccades. Fixations occur when we let our foveal attention linger on an element, even if only for a fraction of a second. Saccades are the movements our eyes make from one fixation to the next; these movements take mere milliseconds. Below I show an example of a single-session “gaze plot” – the recording of how one individual’s eyes took in an ad (the image is from Tobii, the maker of the eye tracking equipment we use). The dots represent fixations, measured in milliseconds: the bigger the dot, the longer the eye stayed there. The lines connecting the dots are saccades.
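Eye tracking software makes this distinction algorithmically. As a rough sketch (not Tobii’s actual algorithm – the function name and the 40-pixel threshold below are my own illustrative assumptions), a simple displacement threshold works: samples where the gaze jumped a long way since the previous sample belong to saccades, while runs of nearby samples form fixations.

```python
def classify_gaze(samples, threshold=40):
    """Label each gaze sample as part of a fixation or a saccade.
    samples: (x, y) screen coordinates recorded at a fixed sampling rate.
    A sample belongs to a saccade if the eye moved more than `threshold`
    pixels since the previous sample; otherwise it's part of a fixation.
    (The 40-pixel default is an arbitrary illustration, not Tobii's.)"""
    labels = ["fixation"]                       # first sample has no velocity
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        labels.append("saccade" if dist > threshold else "fixation")
    return labels
```

Real classifiers work in degrees of visual angle per second rather than raw pixels, but the principle is the same: the eye is either parked or in flight, with very little in between.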

When you look at a scene like the one shown here, the question becomes: how do you consciously move from one element to another? It’s not as if you think, “Okay, I’ve spent enough time looking at the logo, perhaps it’s time to move to the headline of the ad, or the rather attractive bosom in the upper right corner” (I suspect the participant was male). The movements happen subconsciously. Your eyes move to digest the content of the picture of their own accord, guided by what appears interesting in your overall scan of the picture and by your attention-focusing mechanisms.

Keeping Our Eyes Running on Time

Knowing that the eye tends to move from spot to spot subconsciously, Dr. Earl Miller at MIT decided to look more closely at the timing of these shifts of attention and what might cause them. He found that our brains appear to have a built-in timer that moves our eyes around a scene. Our foveal focus shifts about 25 times a second, and this shift seems to be regulated by our brain waves. Our brain cycles between high-activity and low-activity phases, recorded through EEG scanning. Neurologists have known that these waves seem to be involved in the focusing of attention and the functions of working memory, but Miller’s study showed a conclusive link between these wave cycles and the refocusing of visual attention. It appears our brains have a built-in metronome that dictates how we engage with visual stimuli. The faster the cycles, the faster we “think.”

But it’s not as if we let our eyes dash around the page every 1/25 of a second. Our eyes linger in certain spots and jump quickly over others. Somewhere, something is dictating how long the eye stays in one spot. As our brain waves tick out the measures of attention, something in our brains decides where to invest those measures and how many should be invested.

The Information Scent Clock is Ticking

Here, I take a huge philosophical leap and tie together two empirical bodies of knowledge with nothing scientifically concrete (that I’m aware of) to connect them. Let’s imagine for a second that Miller’s timing of eye movements might play some role in Eric Charnov’s Marginal Value Theorem, which in turn plays a part in Peter Pirolli’s Information Foraging Theory.

Eric Charnov discovered that animals seem to have an innate and highly accurate sense of when to leave one source of food and move on to another, based on a calculation of the energy that would have to be expended versus the calories that would be gained in return. Obviously, organisms that are highly efficient at surviving would flourish in nature, passing on their genes, while less efficient candidates would die out. Charnov’s marginal value calculation would be a relatively complex one if we sat down to work it out on paper (Charnov did exactly that, with some impressive charts and formulas), but I’m guessing the birds Charnov was studying didn’t take this approach. The calculations required are done by instinct, not differential calculus.
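For the curious, the core of Charnov’s idea can be sketched numerically. The toy model below uses illustrative numbers and function names of my own, not Charnov’s actual formulas: a patch yields calories with diminishing returns, and the forager should pick the residence time that maximizes overall calories per minute, counting the travel time to the next patch.

```python
import math

# A toy sketch of Charnov's Marginal Value Theorem. The numbers and
# function names are illustrative assumptions, not Charnov's formulas.

def gain(t, total=100.0, rate=0.5):
    """Cumulative calories after t minutes in a patch: returns diminish
    as the easy food gets eaten first."""
    return total * (1 - math.exp(-rate * t))

def optimal_leave_time(travel, step=0.01):
    """Brute-force search for the residence time that maximizes overall
    calories per minute, counting the travel time to the next patch."""
    best_t, best_rate = 0.0, 0.0
    t = step
    while t < 30.0:
        avg_rate = gain(t) / (t + travel)   # long-run calories per minute
        if avg_rate > best_rate:
            best_t, best_rate = t, avg_rate
        t += step
    return best_t

# The theorem's signature prediction: the farther apart the patches,
# the longer the forager should stay in each one before moving on.
short_trip = optimal_leave_time(travel=1.0)
long_trip = optimal_leave_time(travel=5.0)
```

Charnov’s actual result says the forager should leave at the moment the instantaneous gain rate in the patch falls to the average rate for the environment as a whole; the brute-force search above finds that same point numerically.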

So, if birds can do it, how do humans fare? Well, we do pretty well when it comes to food. In fact, we’re so good at seeking high-calorie foods that it’s coming back to bite us. We have highly evolved tastes for high-fat, high-sugar, calorie-rich foods. In the 20th century, this built-in market preference caused food manufacturers to pump out these foods by the truckload. Now, well over a third of the population is considered obese. Evolution sometimes plays nasty tricks on us, but I digress.

Pirolli took Charnov’s marginal value theorem and applied it to how we gather information in an online environment. Do we use the same instinctive calculations to determine how long to spend on a website looking for the information we’re seeking? Is our brain doing subconscious calculations the entire time we’re browsing online, telling us to either click deeper on a site or give up and go back to Google? I suspect the answer is yes. And, if that’s the case, are our brain waves that dictate how and where we spend our attention part of this calculation, a mental hourglass that somehow factors into Charnov’s theorem? If so, it behooves us to ensure our websites instill a sense of information scent as soon as possible. The second someone lands on our site, the clock is already ticking. Each tick that goes by without them finding something relevant devalues our patch according to Charnov’s theorem.

The World’s Intentions at Our Fingertips

First published January 7, 2010 in Mediapost’s Search Insider

We’ve made Google a verb. What does that mean? Well, for one thing, it means we have a better indication of prospect intent than ever before. Google (or any search engine) becomes the connector between our intent and relevant online destinations. John Battelle called Google the database of intentions and predicted that it would become hugely important. Battelle’s call was right on the money, but we still haven’t felt the full import of it. Our tapping into this zeitgeist (defined as the general intellectual, moral and cultural climate of an era) is usually restricted to a facetious review of the top 10 search terms of the year.

Keep Your Eye on Intent

A couple of columns ago I indicated that consumer intent was one of the most important things to watch in the shift of advertising. Intention changes the rules of engagement with advertising. It switches our perception of ads from that of an interruption we’re trying to avoid to that of valuable information we’re looking for. With intention in place, the success of an ad depends not on its ability to hijack our attention, but rather on its ability to deliver on our intention. Ads no longer have to intrude on our consciousness; all they have to do is inform us.

To this point, some 15 years into the practice of search marketing, the majority of our efforts have been restricted to effectively meeting the intentions of our prospects. And, to be honest, we still have a long way to go to get that right. Landing page experiences still fall far short of visitor expectations. Search ad copy is still irrelevant in a large percentage of cases. Even when the keywords used give a clear signal of intent (unfortunately, a fairly rare circumstance) most marketers come up short on delivering an experience that’s relevant and helpful. Poor search marketing is the reason quality scores exist.

The Keynote Avinash Never Gave

But there’s an immense store of untapped potential lying in this “database of intentions.” When Avinash Kaushik did the keynote at last month’s Search Insider Summit, he intended to touch on three topics. Unfortunately, the third topic had to be dropped because of time limitations. He talked about attribution models and the Long Tail. The third topic was to be the use of search as a source of intelligence. Kaushik was going to explore how to leverage the “database of intentions” to better inform all our marketing efforts.

When it comes to tapping into this extraordinarily rich source of intelligence, even search marketers are slow to realize the potential. And we’re the ones who supposedly “get” the importance of search. Most traditional marketers are completely unaware that such a thing even exists. I believe two things are holding us back from effectively mining the “database of intentions” – the isolation of search marketing within an organization, and a lack of tools to effectively mine the intelligence.

SEM is an Island

Search marketing lives as an isolated island within most organizations. It lives apart from the main marketing department — as well as the day-to-day pulse of the corporation. The bigger the company, the more true this is. That means that the one department that has a hope in hell of understanding the importance of all these collected searches has little or no voice in the overall marketing strategy. All those signals of customer intent — indeed, the best barometer of consumer sentiment ever built — lie locked away behind the imaginary door of the search marketing cubicle. The traditional marketing folks have no idea that this crystal ball, offering a real-time view of the goals, thoughts and aspirations of their target market, even exists, let alone how to use it.

Wanted: Better Mining Tools

Even the relatively minimal efforts Google has made to provide tools to dig into this data have proven to be amazingly valuable for marketers. Google Trends and its bigger brother, Google Insights, provide a glimpse into the power of Google’s query database. Unfortunately, these tools provide a rather anemic interface, considering the wealth of information that could be gleaned. Privacy is one stumbling block, but surely we could have more powerful tools to examine and slice the data, even in anonymized, aggregated form. I would love to hitch the sophistication of a comScore-type application to Google’s back-end data.

Battelle said this about the Database of Intentions: “Such a beast has never before existed in the history of culture, but is almost guaranteed to grow exponentially from this day forward. This artifact can tell us extraordinary things about who we are and what we want as a culture.”

Isn’t it about time that we marketers clued into it?

How Our Brains “Google”

So far this week, I’ve covered how our brains find Waldo, scan a webpage and engage with online advertising. Today, I’m looking at how our brains help find the best result on a search engine.

Searching by Habit

First, let’s accept the fact that most of us have now had a fair amount of experience searching for things on the internet, to the point that we’ve now made Google a verb. What’s more important, from a neural perspective, is that searching is now driven by habit. And that has some significant implications for how our brain works.

Habits form when we do the same thing over and over again. In order for that to happen, we need what’s called a stable environment. Whatever we’re doing, habits only form when the path each time is similar enough that we don’t have to think about each individual junction and intersection. If you drive the same way home from work each day, your brain will start navigating by habit. If you take a different route every single day, you’ll be required to think through each and every trip. Parts of the brain called the basal ganglia seem to be essential in recording these habitual scripts, acting as sort of a control mechanism telling the brain when it’s okay to run on autopilot and when it needs to wake up and pay attention. Ann Graybiel from MIT has done extensive work exploring habitual behaviors and the role of the basal ganglia.

The Stability of the Search Page

A search results page, at least for now, provides such a stable environment. Earlier this week, I looked at how our brain navigates webpages. Even though each website is unique, there are some elements that are stable enough to allow for habitual conditioned routines to form. The main logo or brand identifier is usually in the upper left. The navigation bar typically runs horizontally below the logo. A secondary navigation bar is typically found running down the left side. The right side is usually reserved for a feature sidebar or, in the case of a portal, advertising. Given these commonalities, there is enough stability in most websites’ designs that we navigate for the first few seconds on autopilot.

Compared to a website, a search engine results page is rigidly structured, providing the ideal stable environment for habits to form. This has meant a surprising degree of uniformity in people’s search behaviors. My company, Enquiro, has been looking at search behavior for almost a decade now, and we’ve found that it’s remained remarkably consistent. We start in the upper left, break off a “chunk” of 3 to 5 results and scan it in an “F” shaped pattern. The following excerpts from The BuyerSphere Project give a more detailed walk-through of the process.

1 – First, we orient ourselves to the page. This is something we do by habit, based on where we expect to see the most relevant result. We use a visual anchor point, typically the blue border that runs above the search results, and use this to start our scanning in the upper left, a conditioned response we’ve called the Google Effect. Google has taught us that the highest relevance is in the upper left corner.

2 – Then, we begin searching for information scent. This is a term from information foraging theory, which we’ve covered in our eye tracking white papers. In this particular case, we’ve asked our participants to look for thin, light laptops for their sales team. Notice how the eye tracking hot spots are over the words that offer the greatest “scent”, based on the intention of the user. Typically, this search for scent is a scan of the first few words of the titles of the top 3 or 4 listings.

3 – Now the evaluation begins. Based on the initial scan of the beginnings of titles from the top 3 or 4 listings, users begin to compare the degree of relevance of some alternatives, typically two at a time. We tend to “chunk” the results page into sections of 3 or 4 listings at a time to compare, as this has been shown to be a typical limit of working memory when considering search listing alternatives.

4 – It’s this scanning pattern, roughly in the shape of an “F”, that creates the distinct scan pattern we first called the “Golden Triangle” in our first eye tracking study. Users generally scan vertically first, creating the upright of the “F”, then horizontally when they pick up a relevant visual cue, creating the arms of the “F”. Scanning tends to be top-heavy, with more horizontal scanning on top entries, which over time creates the triangle shape.


5 – Often, especially if the results are relevant, this initial scan of the first 3 or 4 listings will result in a click. If two or more listings in the initial set look relevant, the user will click through to each and compare the information scent on the landing pages. This back-and-forth clicking is referred to as “pogo sticking”. It’s this initial set of results that represents the prime real estate on the page.

6 – If the initial set doesn’t result in a successful click-through, the user continues to “chunk” the page for further consideration. The next chunk could be the next set of organic results, or the ads on the right-hand side of the page. There, the same F-shaped scan pattern will be repeated. By the way, there’s one thing to note about the right-hand ads. Users tend to glance at the first ad and make a quick evaluation of its relevance. If the first ad doesn’t appear relevant, the user will often not scan any further, passing judgement on the usefulness and relevance of all the ads on the right side based on their impression of the ad on top.
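The chunk-and-scan walkthrough above can be sketched as a simple model. This is purely illustrative (the function and its crude scent scoring are my own invention, not Enquiro’s methodology): break the listings into working-memory-sized chunks, look for scent words in each chunk, and “click” the best title in the first chunk that offers any scent at all.

```python
def scan_results(titles, scent_words, chunk_size=4):
    """A toy model of the scan described above: break the listings into
    working-memory-sized chunks, score each title in the chunk by how
    many scent words it contains, and 'click' the best title in the
    first chunk that carries any scent at all. Returns the index of the
    clicked listing, or None if the searcher gives up on the page."""
    for start in range(0, len(titles), chunk_size):
        chunk = titles[start:start + chunk_size]
        scores = [sum(word in title.lower() for word in scent_words)
                  for title in chunk]
        if max(scores) > 0:               # information scent found
            return start + scores.index(max(scores))
    return None                           # no scent anywhere: abandon the page
```

Note that a listing far down the page never gets considered if an earlier chunk offers even a weak match, which mirrors why the first 3 or 4 positions are such prime real estate.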

So, that explains how habits dictate our scanning pattern. What I want to talk more about today is how our attention focusing mechanism might impact our search for information scent on the page.

The Role of the Query in Information Scent

Remember the role of our neuronal chorus, firing in unison, in drawing our attention to potential targets in our total field of vision. Now, text-based web pages don’t exactly offer a varied buffet of stimuli, but I suspect the keywords in the text of listings might serve to help focus our attention.

In a previous post, I mentioned that words are basically abstract visual representations of ideas or concepts. The shape of the letters in a familiar word can draw our attention. It tends to “pop out” at us from the rest of the words on the page. I suspect this “pop out” effect could be the result of Dr. Desimone’s neural synchrony patterns. We may have groups of neurons tuned to pick certain words out of the sea of text we see on a search page.

The Query as a Picture

This treating of a word as a picture rather than text has interesting implications for the work our brain has to do. The interpretation of text actually calls a significant number of neural mechanisms into play. It’s fairly intensive processing. We have to visually interpret the letters, run them through the language centres of our brain and translate them into a concept; only then can we capture the meaning of the word. It happens quickly, but not nearly as quickly as the brain can absorb a picture. Pictures don’t have to be interpreted. Our understanding of a picture requires fewer mental “middle men” in our brain, so it takes a shorter path. Perhaps that’s why one picture is worth a thousand words.

But in the case of logos and very well known words, we may be able to skip some of the language processing we would normally have to do. The shape of the word might be so familiar, we treat it more like an icon or picture than a word. For example, if you see your name in print, it tends to immediately jump out at you. I suspect the shape of the word might be so familiar that our brain processes it through a quicker path than a typical word. We process it as a picture rather than language.

Now, if this is the case, the most obvious candidate for this “express processing” behavior would be the actual query we use. And we have a “picture” of what the word looks like already in our minds, because we just typed it into the query box. This would mean that this word would pop out of the rest of the text quicker than other text. And, through eye tracking, there are very strong indications that this is exactly what’s happening. The query used almost inevitably attracts foveal attention quicker than anything else. The search engines have learned to reinforce this “pop out” effect by using hit bolding to put the query words in bold type whenever they appear in the results set.
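Hit bolding itself is simple to sketch. The snippet below is a rough illustration, not any engine’s actual implementation: it just wraps each query term in bold tags wherever the term appears in a result snippet.

```python
import re

def hit_bold(snippet, query):
    """Wrap each query term in <b> tags wherever it appears, case-
    insensitively. A rough sketch of hit bolding; real engines are more
    sophisticated (stemming, synonyms, phrase handling)."""
    for term in query.split():
        # \b keeps us from bolding fragments buried inside longer words
        pattern = r"\b(%s)\b" % re.escape(term)
        snippet = re.sub(pattern, r"<b>\1</b>", snippet, flags=re.IGNORECASE)
    return snippet
```

Bolding the exact shapes the searcher just typed plays straight into the “query as a picture” effect: the bolded words are the easiest ones on the page for the eye to pick out.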

Do Other Words Act as Scent Pictures?

If this is true of the query, are there other words that trigger the same pop out effect? I suspect so. We’ve seen that certain words attract more than their fair share of attention, depending on the intent of the user. Well-known brands typically attract foveal attention. So do prices and salient product features. Remember, we don’t read search listings, we scan them. We focus on a few key words and, if there is a strong enough match of information scent to our intent, we click on the listing.

The Intrusion of Graphics

Until recently, the average search page was devoid of graphics. But all the engines are now introducing richer visuals into many results sets. A few years ago we did some eye tracking to see what the impact might be. The impact, as we found out, was that the introduction of a graphic significantly changed the conditioned scan patterns I described earlier in the post.

This seems to be a perfect illustration of Desimone’s attention focusing mechanism at work. If we’re searching for Harry Potter or, in the case of the example heat map shown below, an iPhone, we likely have a visual image already in mind. If a relevant image appears on the page, it hits our attention alarms with full force. First, it stands out from the text that surrounds it. Second, our pre-tuned neurons immediately pick it out in our peripheral vision as something worthy of foveal focus, because it matches the picture we have in our mind. And third, our brain interprets the relevancy of the image much faster than it can the surrounding text. It’s an easier path for the attention mechanisms of our brain to go down, and our brains follow the same rules as my sister-in-law: no unnecessary trips.

The result? The F-shaped scan pattern, which is the most efficient scan pattern for an ordered set of text results, suddenly becomes an E-shaped pattern. The center of the “E” is on the image, which immediately draws our attention. We scan the title beside it to confirm relevancy, and then we have a choice to make: do we scan the section above or below? Again, our peripheral vision helps make this decision by scanning for information scent above and below the image. Words that “pop out” could lure us up or down. Typically, we expect greater relevancy higher on the page, so we would move up more often than down.

Tomorrow, I’ll wrap up my series of posts on how our brains control what grabs our attention by looking at another study that indicates we might have a built-in timer that governs our attention span, and we’ll revisit the concept of the information patch, looking at how long we decide to spend “in the patch.”

How Our Brain Scans a Webpage

Yesterday, I explained how our brain finds “Waldo.” To briefly recap the post:

  • We have two neural mechanisms for seeing things we might want to pay attention to: a peripheral scanning system that takes in a wide field of vision and a focused (foveal) system that allows us to drill down to details
  • We have neurons that are specialists in different areas: i.e. picking out colors, shapes and disruptions in patterns
  • We use these recruited neuronal swat teams to identify something we’re looking for in our “mind’s eye” (the visual cortex) prior to searching for it in our environment
  • These swat teams focus our attention on our intended targets by synchronizing their firing patterns (like a mental Flash Mob) which allows them to rise above the noise of the other things fighting for our attention.

Today, let’s look at the potential implications of this in our domain, specifically interactions with websites.

But First: A Word about Information Scent

I’ve talked before about Pirolli’s Information Foraging Theory (and in another post on this blog). Briefly, it states that we employ the same strategies we use to find food when we’re looking for information online. That’s because, just like food, information tends to come in patches online, and we have to make decisions about the promise of each patch to determine whether we should stay there or find a new one. There’s another study I’ve yet to share (it will be coming in a post later this week) that indicates our brain might have a built-in timer that controls how much time we spend in a patch and when we decide to move on.

The important point for this post is that we have a mental image of the information we seek. We picture our “prey” in our mind before looking for it. And, if that prey can be imagined visually, this will begin to recruit our swat team of neurons to help guide us to the part of the page where we might see it. Just like we have a mental picture of Waldo (from yesterday’s post) that helps us pick him out of a crowd, we have a mental picture of whatever we’re looking for.

Pirolli talks about information scent: the cues on a page suggesting that the information we seek lies beyond a link or button. Now, consider what we’ve learned about how the brain chooses what we pay attention to. If a visual representation of information is relevant, it acts as a powerful carrier of information scent. The brain processes images much faster than text (which has to be translated by the brain). We would have our neuronal swat team already primed for the picture, singing in unison to draw the spotlight of our attention towards it.

Neurons Storming Your Webpage

First, let me share some of the common behaviors we’ve seen through eye tracking on people visiting websites (in an example from The BuyerSphere Project). I’ll try to interpret what’s happening in the brain:

The heat map shows the eye activity on a mocked up home page. Remember, eye tracking only captures foveal attention, not peripheral, so we’re seeing activity after our brain has already focused the spotlight of attention. For example, notice how the big picture has almost no eye tracking “heat” on it. Most of the time, we don’t have to focus our fovea on a picture to understand what’s in it (the detail rich Waldo pictures would be the exception). Our peripheral vision is more than adequate to interpret most pictures. But consider what happens when the picture matches the target in our “mind’s eye”. The neurons draw our eye to it.

One more thing to think about: words shown in text are pictures too. I’ll be coming back to this theme a couple of times – but a word is nothing more than a picture that represents a concept. For example, the Sun logo in the upper left (1) is nothing more than a picture that our brain associates with the company Sun Microsystems. To interpret this word, the brain first has to interpret the shape of the word. That means there are neurons that recognize straight edges, others that recognize curved edges and others that look for the overall “shape” of the word. Words, too, can act as information targets that we picture mentally before seeing them in front of us. For example, let’s imagine that we’re a developer. The word “DEVELOPER” (2) has a shape that is recognizable to us because we’ve seen it so often: the straight strokes of the E’s and V’s, sandwiched between the curves of the D’s, O’s and P’s. As we scan the overall page, our “Developer” neurons may suddenly wake up, synchronize their firing and draw the eye here as well. “Developer” already has a prewired connection in our brains. This is true for all the words we’re most familiar with, including brands like Sun. This is why we see a lot of focused eye activity on these areas of the picture.

Intent Clustering

In the last part of today’s post, I want to talk about a concept I spent some time on in The BuyerSphere Project: intent clustering. I’ve always known this makes sense from an information scent perspective, but now I know why from a neural perspective as well.

Intent clustering is creating groups of relevant information cues in the same area of the page. For example, for a product category on an e-commerce page, an intent cluster would include a picture of the product, a headline with the product category name, short bullet points with salient features and brands and perhaps relevant logos. An Intent cluster immediately says to the visitor that this is the right path to take to find out more about a certain topic or subject. The page shown has two intent clusters that were aligned with the task we gave, one in the upper right sidebar (3) and one in the lower left hand corner (4). Again, we see heat around both these areas.

Why are intent clusters “eye candy” for visitors? It’s because we’ve stacked the odds in favor of these clusters being noticed peripherally. We’ve included pictures, brands, familiar words and hints of rich information scent in well-chosen bullet points. This combination is almost guaranteed to set our neural swat teams singing in harmony. Once the cluster is scanned in peripheral vision, the conductor of our brain (the FEF I talked about in yesterday’s post) swings our attention spotlight towards it for more engaged consumption, generating the heat we see in the above heatmap.

Tomorrow, I’ll be looking at how these mechanisms can impact our engagement with online display ads.

How Our Brain Finds Waldo

At Enquiro, we did some interesting work in 2009 with visual attention and engagement with ads. We found, for example, that attention significantly impacts how we see ads and retain the messages shown within them. We also found that brands can play a vital role in this process. Over the Christmas holiday, I found a number of neurological studies that start to shed some light on how we might visually process information on websites and the role advertising might play. This week, I’ll be breaking them into individual elements and exploring them a little more fully, showing the practical applications for advertisers and web designers. Much of this was also covered in my book, The BuyerSphere Project.

Today, let’s spend some time finding Waldo.

The “Where’s Waldo” Neuronal Choir

How much mileage can you get from creating exercises in visual attention? Well, if you’re Martin Handford, the answer is: a lot! Handford created the phenomenally popular “Where’s Waldo?” series of books. At last count, Handford’s playful take on visual attention had produced a couple dozen books, video games, an animated series and even a potential movie deal. And it all comes down to the same basic premise: how long does it take us to find one distinct element in a visually busy environment? How do our eyes pick Waldo out of a visually dense picture, packed with details and optical red herrings?

That was the question Robert Desimone, director of the McGovern Institute for Brain Research and the Doris and Don Berkey Professor of Neuroscience at MIT, decided to tackle. Specifically, he wanted to explore two differing schools of thought:

Do we move our attention around the page like a spotlight, physically scanning the environment inch by inch looking for Waldo, the intended target of our attention; or,

Do we scan the image as a whole, looking for clues in the overall pattern about where Waldo might be?

The answer appears to be both. And the reason both systems are active comes from our evolutionary past. We need to focus attention on the task at hand, but we also need to scan the environment for signals of something that might suddenly need our attention. And the way the brain does this is fascinating: it literally creates a choir of neurons, all firing in a synchronized pattern. It seems to be this synchronization that represents the focusing of attention.

Picking Waldo Out of the Crowd

Let’s go back to Waldo. Neurons tend to have specialized functions. We have neurons that are better at picking out colors, neurons that are better at picking out the edges of shapes and other neurons that pick out patterns. In the case of Waldo, before we ever start scanning the page, we recruit the neurons that are best suited to recognize the distinct image of Waldo. For example, because Waldo is dressed in red, we recruit the red neurons. We create a picture of Waldo in our “mind’s eye.”

So, we have our handpicked neuronal “swat team” ready to intercept Waldo. But, how do we actually find Waldo? This is where the two mechanisms of the brain work in unison. In eye tracking, you soon learn the difference between foveal attention and peripheral attention. Foveal attention is where the brain focuses our eyes, allowing us to pick up fine detail. When we read, for example, we use foveal focus to pick up the shape of the letters and interpret them. Eye tracking only picks up foveal attention. This represents the “spotlight” function of attention.

But the brain has to tell the eyes where to move next. And to do this, it relies on peripheral attention. This is what we see out of the “corner of our eye”. Peripheral attention allows us to scan a much broader field of vision to determine if there are elements in it that merit the refocusing of foveal attention. Peripheral vision is particularly tuned to movements and coarser visual cues. This has significant impact on the effectiveness of advertising, which I’ll talk about in a future post. For today, it’s sufficient to understand that peripheral vision allows us to scan our environment in a repeating “quick and dirty” pattern.

Now, our neuronal SWAT team has identified the target pattern for us. This image has been implanted in our prefrontal cortex as a “top down” imperative, a directive to our visual cortex. And, through peripheral vision, we’re scanning the entire picture to find possible matches. To help separate the most promising areas of the picture from the background noise of the other detail, it appears that an area of the prefrontal cortex, the frontal eye field (FEF), orchestrates our handpicked neurons to synchronize their firing. This synchrony helps the signals from this group of neurons stand out from the noise of the rest. It works just like the synchronized dancing in these examples of flash mobs – the Sound of Music in an Antwerp train station, a Glee medley in a Roman piazza and the Black Eyed Peas surprising Oprah.


Just like the dancers in these flash mobs, the synchronization helps our “Waldo” neurons stand out from the crowd, rising above the noise. As we scan the image through the periphery of our visual focus, the FEF orchestrates the neural synchrony of our group of “Waldo” neurons, drawing the spotlight of foveal attention to the parts of the picture most likely to contain Waldo. There, we switch to a more detailed scan to determine if Waldo is indeed present.
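A rough illustration of why synchrony matters, as a toy simulation with made-up numbers (this is a caricature, not a model of real neurons): a small group of cells that fire in the same time bins produces population peaks that stand out sharply against a much larger pool of cells firing independently.

```python
import random

random.seed(42)  # fixed seed so the toy example is reproducible

T = 1000          # time bins
N_TARGET = 20     # "Waldo" neurons, firing in synchrony
N_NOISE = 200     # background neurons, firing independently
RATE = 0.05       # per-bin firing probability for every neuron

# Background neurons fire independently: their spikes rarely line up,
# so the summed background activity hovers around N_NOISE * RATE = 10.
noise = [sum(random.random() < RATE for _ in range(N_NOISE)) for _ in range(T)]

# Target neurons are driven by a shared rhythm: on the bins where the
# shared oscillation "ticks", all 20 of them fire in the same bin.
sync_bins = set(random.sample(range(T), int(T * RATE)))
target = [N_TARGET if t in sync_bins else 0 for t in range(T)]

# Total population activity per bin.
population = [a + b for a, b in zip(noise, target)]

# Although the synchronized group is 10x smaller than the background,
# its volleys dominate the largest peaks in the population signal.
print(f"mean background activity per bin: {sum(noise) / T:.1f}")
print(f"largest peak, background only: {max(noise)}")
print(f"largest peak, with synchronized group: {max(population)}")
```

The same total number of target spikes, spread randomly across bins, would simply disappear into the background. It’s the timing, not the volume, that lets the small group be “heard.”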

Tomorrow, we’ll use the same basic theory to talk about what happens when we first visit a website.

If you want to find out more about Dr. Desimone’s work, read these two articles:

Long-Distance Brain Waves Focus Attention

Research Explains How The Brain Finds Waldo

How Google Became a Verb

First published December 31, 2009 in Mediapost’s Search Insider

It’s probably because I’m just finishing a book (The Stuff of Thought) by famed linguist and cognitive psychologist Steven Pinker, but grammar has been on my mind more than usual lately. In particular, I was fascinated by how we use Google in our language. Google, of course, has been “genericided” – the fate that befalls brands that lose their status as a protected brand name and become a generic term in our vocabulary. This causes much chagrin for Google’s legal and marketing teams. What is more interesting, however, is the way we’ve taken Google into our lexicon.

Of Nouns and Verbs

Most brands, when they get incorporated into our language, become nouns. Kleenex, aspirin, escalators, thermoses and zippers all went down similar paths on the road to becoming common terms that described things. It might interest you to know, for instance, that in Japan, staplers are known as Hotchkisses (or, technically, hochikisu). Google, however, is different. The word Google didn’t replace the noun “search engine”; it replaced the act of searching. We made googling a verb. And that is a vital difference. We don’t call all search engines Google. But we do refer to our act of searching as googling.

More than this, we made Google a transitive verb – “I googled it.” That means I (the subject) used Google (the verb) to do something with it (the object). Pinker says the way we use words betrays the way we think about the world. Verbs are the linchpins of our vocabulary, because we use them to explain how we interact with our physical world. And transitive verbs, in particular, act as connectors between us and the world. I once said that search was the connector between intent and content. The enshrining of Google as a verb reflects this. The act of googling connects us with information.

Sampling the Outside World through Google

But the use of Google as a transitive verb also gives us a glimpse into how we regard the gathering of the content we Google. Transitive verbs tend to reflect a transfer from the outside to the inside, a consumption of the external, either physically or through our senses: I drank it, I ate it, I saw it, I heard it, I felt it. In that sense, their use is personal and fundamental. “I googled it” gives us a sense of metaphorical transference – the consumption of information.

So, what does this mean? If you look at the role of our language, there is something of fundamental importance happening here. Language is our collection of commonly accepted labels that allow us to transfer concepts from our heads into the heads of others. These labels are not useful unless they mean the same thing to everyone. When I say thermos, you know instantly what I mean. Your visualization of it might be slightly different than mine (a Batman thermos from grade 5 is the image that I currently have) but we can be confident that we’re thinking about the same category of item. We have a shared understanding.

Speaking a Common Language

This need for commonality is the threshold that new words must cross before they become part of common language. This means that critical mass becomes important. Enough of us have to have the same concept in our heads when we use the same label before that label becomes useful. Generally, when technology introduces a concept that we have to find a new label for, we try a few variations on for size before we settle on one that fits. Common usage is the deciding vote.

With things like new products, the dominant brand has a good chance of becoming the commonly used label. Enough of us have experience with the brand to make it a suitable stand-in for the product category. We all know what’s meant by the word escalator. And new product categories crop up fairly regularly, forcing us to agree on a common label. In the last decade or two, we’ve had to jam a lot of new nouns into our vocabulary: ATMs, fax, browser, smartphones, GPS, etc. Few of these categories have had enough single-brand domination to make that brand the common label. Apple has probably come the closest, with iPod often substituting for MP3 player.

The material nature of our world means that we’re forever adding new nouns to our vocabulary. There are always new things we have to find words for. That’s why one half of all the entries in the Oxford dictionary are nouns. The odds of a brand name becoming a noun are much greater, simply because the frequency is higher. And by their nature, nouns live apart from us. They are objects. We are the subjects.

The Rarity of a Verb

But verbs are different. Only one seventh of dictionary entries are verbs. Verbs live closer to us. And the introduction of a new verb into our vocabulary is a much rarer event. This makes the critical mass threshold for a verb more difficult to pass than for a noun. First of all, enough of us have to do the action to create the need for a common label. Secondly, it’s rare for one brand to dominate that action so thoroughly. The birth of googling as a verb is noteworthy simply because so many of us were doing something new at the same place.

Why did I share this linguistic lesson with you? Again, it’s because so many of us are doing something at the same place. New verbs emerge because we are doing new things. We do new things because something drives us to do them. That makes it a fundamental human need. And to have that fundamental human need effectively captured by one brand – to the point that we call the act by the brand’s name – offers a rare opportunity to catalogue human activity in one place. One of the most underappreciated aspects of search marketing is the power of search logs to provide insight into human behavior. That’s what my first column of 2010 will be about.

And, just to leave you with a tidbit for next week, currently another brand name is on the cusp of becoming a verb (although its exact proper form is still being debated). The jury is still being assembled, but Twitter could be following in Google’s footsteps.

Could Intel Hardwire Your Brain for Google?

Last week, Roger Dooley had an interesting post on his Neuromarketing blog (great blog, by the way) about Intel’s efforts to implant a computer chip directly into our brains, essentially allowing us to interface directly with computers. Roger ponders whether this will, in fact, become a wired “buy button.” I wonder, instead, if this is the ultimate Google search appliance. The idea was floated, somewhat facetiously, by Eric Schmidt in an interview with Michael Arrington on TechCrunch this year:

Now, Sergey argues that the correct thing to do is to just connect it straight to your brain. In other words, you know, wire it into your head. And so we joke about this and said, we have not quite figured out what that problem looks like…But that would solve the problem. In other words, if we just – if you had the thought and we knew what you meant, we could run it and we could run it in parallel.

The Singularity and Hardwired Brains

Okay, this crosses all kinds of boundaries of “creepy,” but if we stop to seriously consider this, it’s not as outlandish as it seems. Ray Kurzweil has been predicting just this for over two decades now – the merging of computing power and human thought, an event he calls the Singularity. Kurzweil even set the date: 2045 (by the way, the target date for the Intel implant is 2020, giving us 25 years to “get it right” after the first implant). Kurzweil’s predictions seem somehow apocalyptic, or, at the least, scary, but his logic is compelling. Computers can, even today, do some types of mental tasks far faster and more efficiently than the human brain. The brain excels at computations that tie into the intuition and experience of our lives – the softer, less rational types of mental activity. If the brain were simply a huge data cruncher, computers would already be kicking our butts. But there are leaps of insight and intuition that we regularly take as humans that have never yet been replicated in a digital circuit. Kurzweil predicts that, with the exponential increase of computing power, it will only be a matter of time until computers match and exceed the capabilities of human intuition.

Google’s Brain Wave

But Intel’s efforts bring up another possibility, the one posited by Google’s Sergey Brin – what if a chip can connect our human needs, intuitions and hunches with the data and processing power available through the grid of the Internet? What if we don’t have to go through the messy and wasteful effort of formulating all those neuronal flashes into language that then can be typed into a query box because there’s a direct pipeline that takes our thoughts and ports them directly to Google? What if the universe of data was “always on”, plugged directly into our brains? Now, that’s a fascinating, if somewhat scary, concept to contemplate.

Let’s explore this a little further. John Battelle, in a series of posts some time ago, asked why conversations were so much more helpful than web searching.  Battelle said that it’s because conversations are simply a much bigger communication pipeline and that’s essential if we’re talking about complex decisions.

What is it about a conversation? Why can we, in 30 minutes or less, boil down what otherwise might be a multi-day quest into an answer that addresses nearly all our concerns? And what might that process teach us about what the Web lacks today and might bring us tomorrow?

Well the answer is at once simple and maddeningly complex. Our ability to communicate using language is the result of millions of years of physical and cultural evolution, capped off by 15-25 years of personal childhood and early adult experience. But it comes so naturally, we forget how extraordinary this simple act really is.

Talking (or Better Yet – Thinking) to a Search Engine

As Battelle said, conversations are a deceptively rich communication medium. And it’s because they evolve on both sides to allow the conversant to quickly veer and refine the dialogue to keep up with our own mental processes. Conversations come closer to keeping up with our brains. And, if those conversations are held face-to-face, not only do we have our highly evolved language abilities, we also have the full power of body language. Harvard professors Nitin Nohria and Robert Eccles said in their book Networks and Organizations: Structure, Form and Action:

In contrast to interactions that are largely sequential, face-to-face interaction makes it possible for two people to be sending and delivering messages simultaneously. The cycle of interruption, feedback and repair possible in face-to-face interaction is so quick that it is virtually instantaneous. As (sociologist Erving) Goffman notes, “a speaker can see how others are responding to her message even before it is done and alter it midstream to elicit a different response.”

The idea of a conversation as a digital assistance medium is interesting. It allows us to shape our queries and speak more intuitively and less literally. It allows us to interface and communicate the way we were meant to. In his post, Battelle despaired of an engine ever being this smart and suggested instead that the engine act as a matchmaker with a knowledgeable human on the other side – the Wikia/Mahalo approach. I can’t see this as a viable solution, because it lacks the scale necessary.

This is not about finding one piece of information, like a phone number or an address, but helping us through buying a house or a car. Search still falls far short here, something I touched on in my last Just Behave column on Search Engine Land. In those situations, we need more than a tool that relies on us feeding it a few words at a time and then doing its best to guess what we need. We need something similar to a conversation, in a form that can instantly scale to meet demand. Google, for all its limitations in a complex scenario, has still built the expectation of getting information just in time. And the bottleneck in these complex situations is the language interface and the communication process. Even if we’re talking to another person, with all the richness of communication that brings, we still have to transfer the ideas that sit in our head to their head.

So, back to Intel’s brain chip. What if our thoughts, in their entirety, could instantly be communicated to Google, or Bing, or whatever flavor of search assistant you want to imagine? What if refining all the information that was presented was a split-second closing of a synapse, rather than a laborious application of filters that sit on the interface? Faster and far more efficiently than talking to another human, we could quickly sift through all the information and functionality available to mankind to tailor it specifically to what we needed at that time. That starts to boggle the imagination. But is it feasible?

I believe so. Look again at the brain activity charts generated by the UCLA-Irvine research team that tracked people using a Google-like web search interface, particularly the image in the lower right.

[Image: brain activity charts from the web search study]

Let’s dig a little deeper into what is actually happening in the brain when we Google something. The image below is from the Internet Savvy group in the UC study (sorry about the fuzziness).

[Image: brain activity of the Internet-savvy group, with regions labeled]

The front section of the brain (A) shows the engagement of the frontal lobes, indicating decision making and reasoning. This is where we render judgment and make decisions in a rational, conscious way. The section along the left side of the brain (B) is our language centers, where we translate thought to words and vice versa. The structures in the centre part of the brain, hidden beneath the cortex, are the sub-cortical structures (C) – the autopilot of the brain, including the basal ganglia, hippocampus and hypothalamus. I touched on how these structures dictate what much of our online activity looks like in a post last week. Finally, the area right at the back of the brain indicates activation of the visual cortex, used both to translate input from our eyes and to visualize something “in our mind’s eye.” As shown by the strong activation of the language center, much of the heavy lifting our brains do when we’re Googling involves translating thoughts into words.

Knowing that these are the parts of the brain activated, would it be possible to provide some neural shortcuts? For example, what if you could take memories being drawn forward (activating both the hippocampus and the frontal lobes) and translate them directly into directives to retrieve information, without trying to translate them into words? This “brain on Google” approach could be more efficient by several orders of magnitude than anything we can imagine currently.

By the way, this interface could work both ways. Not only could it feed our thoughts to the online grid; it could also take the results and information it receives and pipe them directly to the relevant parts of our brains. Images could be rendered instantly in our visual cortex, sounds in our auditory cortex, and facts and figures could pass directly to the prefrontal cortex. Call it the Matrix, call it virtual reality, call it what you want. The fact is, somewhere in an Intel research lab, they’re already working on it!

Mindless Online Behavior: Web Navigation on Autopilot

One of the biggest problems with Rupert Murdoch’s view of the world is that he’s assuming people are making conscious decisions about where they go to get their news and information. He somehow believes that people are consciously deciding to get their information from Google rather than one of his properties, and Google is encouraging this behavior by indexing content and providing free “back doors” into the WSJ and other sites. In other words, Murdoch has a conspiracy theory, and Google and online users are co-conspirators. The truth isn’t quite so evil or intentional.

Our Stomach’s Autopilot

I talked yesterday about the importance of information foraging and how we use the same strategies we use to find food to find online information. But tell me, how conscious are your decisions about where and what to eat? How long do you deliberate over eating a piece of toast in the morning, a sandwich at lunch or a plate of pasta at night? If you’re hungry, how often do you find yourself standing in front of the fridge, staring inside for a quick snack? It wasn’t as if you had a detailed series of decisions here: Hmmm..I’m hungry. Where would be the best place in the house to find food? The bathroom? No, that didn’t work. How about the bedroom? No, no food there. Hey, this kitchen place seems to be promising! Now..where in the kitchen might there be food? In this cupboard? No, that’s dishes. Down under the sink? Ooops..no, I don’t know what the hell’s under there, but it’s definitely not food. Hey..what’s in this big steel box here? Ah…Bingo!

Okay..it’s a ridiculous scenario, but that’s my point. It only seems ridiculous because we’ve found a more efficient way of doing it. We don’t have to go through these decisions every time because we’ve done it before and we know where to find food. Even if we went into someone else’s house, we would know that the kitchen is the best place to find food, and the fridge is probably the surest bet in the kitchen. We don’t have to think, because we’ve done the thinking before and know we can navigate by habit and instinct.

Where Do You Keep the Cockatoo Cichlid Fillets?

But what if you visited the Jivaro tribe of South America, where the culture is so different that we have no cognitive shortcuts to follow? Much of the food they eat we’ve never even seen before. And, as one of the most primitive cultures in the world, there are not a lot of kitchens or fridges to act as hints about where we might find something to eat. If we were suddenly dropped into the middle of a Jivaro settlement with no guide, we would have to do a lot of thinking about what to eat and where to find it. And how would we feel about that? Anxious? Frustrated? Uncertain? We don’t like it when we have to think. We much prefer relying on past experience and habits. The brain heavily discourages thought if there’s a more efficient shortcut. It’s the brain’s way of saving fuel, because mobilizing our prefrontal cortex, the “reasoning” part of our brain, comes with a big efficiency hit. The PFC is powerful in a “single minded” way, but it’s also an energy hog. The way the brain discourages unnecessary thought is by stimulating unpleasant emotions. If you’ve spent much time in foreign cultures, you know the constant stress of finding something to eat can quickly go from being exciting to being a complete pain.

Here’s the other thing about our brain: it isn’t discriminating about when to kick in and when not to kick in. It usually takes the path of least resistance first, relying on past experience rather than thinking. The more familiar the environment, the more the brain feels safe in relying on past experience and habit. What does this mean? Well, when you’re hungry, it means you suddenly find yourself standing in front of the fridge with the door open without even knowing what you’re looking for. When you realize you actually want some crackers (i.e. when your brain finally kicks in), you swing the door shut and go to where the crackers are kept. Online, it means you go to Google and launch a search without thinking through what your actual destination might be.

Google, The Information “Fridge”?

So, I’ve gone fairly far down the path of this analogy to make a point. According to Pirolli, we use exactly the same mechanisms to find online information. We go first to the fridge, or, in this case, Google, because nine times out of ten, or even 99 times out of a hundred, we find what we’re looking for there. And, if we don’t, we start to get frustrated because our brain is suddenly called into service and it isn’t at all happy about it. There’s no conscious conspiracy to screw Rupert Murdoch, there’s just us following our own mental grooves. And these grooves dictate a huge percentage of our online activity. There’s been little neuro-scanning research done on how our brains work during online activity, but the little that’s been done seems to indicate a regular shifting of activity from the “reasoning” to the “autopilot” sections of the brain. I suspect strongly that this is especially true when we use search engines. If we can navigate on autopilot, we will.

This principle holds true for almost all online interaction. I keep hearing about the “joy” of discovery online. I believe that’s largely crap. As online becomes a bigger part of our lives, we depend on it to do more and more and we don’t have the time for “discovery”. We don’t have the time to set aside 2 hours to browse through WSJ.com, meandering through the content and providing a willing set of eyeballs for all those ads. We want to find what we’re looking for, get in and get out. There are occasions when we’re willing to invest the time for a long voyage of discovery, just as there are times when we will go out and graze our way through a smorgasbord buffet, but it’s not the norm. As I said in the last post, Google and search has given us a “just in time” information economy and we have forever shifted our concept of information retrieval. How the providers of the information make money from that remains to be figured out, something I’ll spend some more time talking about tomorrow.

Murdoch and Bing: The Sound of Two Dinosaurs Dancing

This morning in Ad Age:

Why Murdoch Can Afford to Leave Google for Bing

The author, Nat Ives, reasons that Google traffic doesn’t translate into revenue for Murdoch anyway. This is true, but the logical conclusion that you can afford to kiss this traffic goodbye is seriously flawed. I’ll explain why in a minute.

Yesterday in Search Engine Land, Danny offered his thoughts on “The OPEC of News“. He approached it from the flow of information and indexing cycle perspective, and I think he did a good job of hitting the salient points. From the mechanics of the search space, Danny’s right, but what’s more interesting to me is the human behavior that sits behind all this.

The biggest reason why this is a stupid deal is that it’s out of touch with where the market is going. I touched on this in a previous post, but I’ll expand on it this week in a few posts that will tie together Enquiro’s past research and other seminal research:

Today – The Primacy of the Patch – Why Information Foraging is the Key to Behavior

Wednesday – The Mindlessness of Web Search – How We Don’t Think Our Way through Online Interactions

Thursday – Engagement with Online Ads – The Importance of Aligned Intent

Friday – Tying it Together – Why Murdoch and Bing’s Logic is Fatally Flawed

The Primacy of the Patch: Information Foraging is the Key to Behavior

As I said, this week I want to dissect some aspects of human behavior to show why Rupert Murdoch is seriously out of touch and how Bing can’t corner the news market.

The primary reason is that we’re changing how we get information. The implications of this are fascinating, because they will soon spread through all marketplaces and aspects of our society. And it comes down to one important factor: humans are inherently lazy.

Laziness is a Good Thing

Now, before you get all morally indignant on me, let me explain: humans are lazy in the evolutionary sense, the same way that Richard Dawkins’ genes are selfish. We’re lazy because it’s a natural advantage; it’s built into our genome. To be more accurate, we’re lazy when the expenditure of more energy doesn’t make sense. We’re lazy in a subtle, subconscious way. And, like all aspects of human behavior, we’re not all equally lazy. There’s a bell curve of laziness. Laziness has gotten a bad rap in our puritanical, WASPish culture, but the fact is, when it comes to survival, laziness is often the optimal strategy.

Look at it a different way. Say you need to drive from Detroit to Chicago. The only goal is to get to Chicago and pay as little for gas as possible on the way. What vehicle are you going to take – a Hummer or a Prius? The Prius is a no brainer. In terms of fuel efficiency, the Prius is a lazy car. It does what it has to do more efficiently than a Hummer. In a vehicle, this is a virtue, but somewhere in our twisted culture, it’s become a bad thing for humans.

Fat And Lazy? Maybe Not …

Calories are a human’s gas tank. We’ve been genetically hardwired to be very fuel efficient. In fact, we’ve developed very sophisticated subconscious mechanisms to ingest as many calories as possible without expending calories to find them. This worked well when we lived on the African savanna and the only food source was the odd baobab tree. It doesn’t work so well when there’s a McDonald’s around every corner. It’s not a cruel joke that we’re attracted to high-fat, high-sugar foods. These provide lots of calories in one sitting. That’s why our society is fat (fat and lazy – how’s that for self-esteem so far?).

So, what the hell does this all have to do with search? Well, when humans are faced with new challenges, we’re stuck using the tools that evolution has endowed us with. We borrow from other abilities. The technical term for this is exaptation. When digital information came along, we had to look into our evolutionary toolkit and find something that would work.

Foraging for Information

At Xerox’s PARC in the late ’90s, Peter Pirolli was exploring how humans navigated hypertext-linked information environments. The invention of hyperlinking introduced a new challenge in information retrieval. Throughout history, information was structured into an imposed taxonomy or hierarchy. We sorted it alphabetically or by the Dewey decimal system. And, because information was static, it stayed within the boundaries we built for it. But the creation of the hyperlink meant that information suddenly became unstructured and organic. Topical links from source to source meant that imposed editorial restrictions no longer worked. Links kept leaping over the boundaries we tried to impose on information.

Given this new challenge, Pirolli wanted to explore the subconscious strategies we used to navigate this unstructured information environment. He wanted to reduce it to a predictable algorithm. Time after time, he was frustrated. Humans would start down a predictable path, only to suddenly take an unexpected turn. The patterns didn’t seem logical. But, as chance would have it, he had recently read some work on biological foraging patterns and decided to overlay that on the behaviors he was observing. It was Pirolli’s “aha” moment. Suddenly, the patterns made sense. Humans, Pirolli (along with Stuart Card and others) discovered, foraged for information. We used the same strategies to navigate the web that we use to look for food. And, just as is the case with calories, laziness (or efficiency) is a pretty good strategy for finding information.

In information foraging, there is one overriding concern: take the most efficient path possible to the information you seek. I won’t get too far into the mechanics of how we do that except to say this – it’s not a conscious calculation. We’re constantly scanning the environment to see if a richer information “patch” is on the horizon. Information foraging is fundamentally important to understand if you’re to understand human behavior online. Jakob Nielsen called it “the most important concept to emerge from Human-Computer Interaction research since 1993.”
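Information foraging borrows its core patch-leaving rule from optimal foraging theory (Charnov’s marginal value theorem): stay in a patch while the rate of gain is high, and leave when it drops below what the environment as a whole offers. A rough sketch with invented numbers – the gain curve, times and values are purely illustrative:

```python
# Marginal value theorem sketch: a forager should leave a patch when the
# rate of gain in the patch drops below the average rate achievable
# across the whole environment, travel time included.
# All numbers here are illustrative, not from Pirolli's data.

def gain(t, value=100.0, halflife=5.0):
    """Cumulative information gained after t seconds in a patch.
    Diminishing returns: each extra second yields less than the last."""
    return value * (1 - 0.5 ** (t / halflife))

def best_leave_time(travel_time, horizon=60.0, step=0.1):
    """Pick the residence time that maximizes overall gain rate,
    counting the travel time needed to reach the patch."""
    best_t, best_rate = step, 0.0
    t = step
    while t <= horizon:
        rate = gain(t) / (travel_time + t)
        if rate > best_rate:
            best_t, best_rate = t, rate
        t += step
    return best_t

# The cheaper it is to reach the next patch (e.g. one click away),
# the sooner it pays to abandon the current one.
print(best_leave_time(travel_time=1.0))   # short travel: leave early
print(best_leave_time(travel_time=30.0))  # long travel: stay longer
```

This is the intuition behind the back button: when the next patch is one click away, even a mildly disappointing page gets abandoned almost immediately, whereas a reader who drove to a library will wring far more out of each book.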

So, let’s look at how this applies to the Murdoch-Bing scenario. For almost 20 years now, we’ve been retrieving information online, using our foraging strategies. In that time, we’ve become conditioned to go to the most efficient sources of information – the places where we get the biggest information “bang” for our buck, and in this case, our investment is our time. As I’ve said before, this is a conditioned behavior. We don’t consciously think our way through this. Our subconscious efficiency circuits kick in and we do this by habit. To appreciate how powerful these subconscious loops are, just think about how hard it is to walk past a cookie lying on the counter. It’s not that you’re a bad person if you pick it up. It’s not that you’re stupid, eating it even though you know it’s not good for you. It’s those inherent human behaviors taking over. It’s powerful stuff!

So, for well over a decade, we’ve discovered that the shortest line between our need for information and the right online destination is a search engine. If there was a more efficient retrieval mechanism, we’d use it. This isn’t about brand, or loyalty. It’s just walking past the kitchen counter and seeing a cookie there. We’ll do it without thinking.

Murdoch’s strategy is flawed because he doesn’t realize that we now seek information differently. In the past, we picked the editorial channel that best met our needs. The Wall Street Journal may have been one of our favored patches because we agreed with its “editorial voice,” it met a sufficient number of our information needs and we felt the investment of our time was warranted by the information we retrieved. In return for that, we started to build up loyalty to the brand, giving the publishers the right to sell advertising against that loyalty.

But the hyperlink and the internet didn’t just make information patchy, it also created a “just in time” need for information. 30 years ago, we didn’t suddenly develop the need to know who the director of “Booty Call” was because there was no easy way to retrieve the information. It wasn’t worth the investment. But Google made instant retrieval of information possible. It dramatically improved the efficiency of information retrieval. We started Googling everything because we could, without wasting huge amounts of time.

It’s this paradigm shift in information consumption that Murdoch is completely missing. Yesterday in Search Engine Land, Danny Sullivan did a good job showing how the social web and the indexing of content makes any attempt to wall it off to preserve a revenue model futile. The one thing I disagree with Danny on is his assertion that a mutually exclusive Murdoch/Google relationship won’t hurt Google or Murdoch:

So what happens if the WSJ is out of Google? Nothing. Seriously, nothing. Remember, for years the WSJ was NOT in Google, and yet Google grew just fine. Also, the WSJ seems to have been fine. Neither is crucial to each other.

What we have here is a significant shift in human behavior, and right now we’re in the transition period. Google and other engines have dramatically changed the game of information retrieval and that means a huge upheaval in the industry. Society is moving en masse from one behavior, which publishers had built a revenue model around, to another behavior, which still hasn’t been fully monetized (Google has only monetized one small slice of it). To say that both will do fine is ignoring the lessons of history. These massive behavioral shifts are ALWAYS a zero-sum game: somebody wins and somebody loses. Guess who will lose? Hint: it won’t be Google.

So, what about Bing? If my theory is correct, will Bing become the new favored patch by signing with Murdoch? I doubt it. There’s just not enough critical mass there to disrupt conditioned behaviors. The “just in time” information economy has eroded our brand affinity for favored patches. We’ve become more publisher agnostic. Again, this isn’t universally true. We still appreciate “editorial voice” for some types of information and may seek out one specific publisher, but our new promiscuity means an erosion of page views and traffic, which is killing the traditional publishing revenue model. But, more about this in tomorrow’s and Thursday’s posts.