Could Intel Hardwire Your Brain for Google?

Last week, Roger Dooley had an interesting post on his Neuromarketing Blog (great blog, by the way) about Intel’s efforts to implant a computer chip directly into our brains, essentially allowing us to interface directly with computers. Roger ponders whether this will, in fact, become a wired “buy button”. I wonder, instead, if this is the ultimate Google search appliance. The idea was floated, somewhat facetiously, by Eric Schmidt in an interview with Michael Arrington on TechCrunch this year:

Now, Sergey argues that the correct thing to do is to just connect it straight to your brain. In other words, you know, wire it into your head. And so we joke about this and said, we have not quite figured out what that problem looks like…But that would solve the problem. In other words, if we just – if you had the thought and we knew what you meant, we could run it and we could run it in parallel.

The Singularity and Hardwired Brains

Okay, this crosses all kinds of boundaries of “creepy”, but if we stop to seriously consider this, it’s not as outlandish as it seems. Ray Kurzweil has been predicting just this for over two decades now: the merging of computing power and human thought, an event he calls the Singularity. Kurzweil even set the date: 2045 (by the way, the target date for the Intel implant is 2020, giving us 25 years to “get it right” after the first implant). Kurzweil’s predictions seem somehow apocalyptic, or, at the least, scary, but his logic is compelling. Computers can, even today, do some types of mental tasks far faster and more efficiently than the human brain. The brain excels at computations that tie into the intuition and experience of our lives – the softer, less rational types of mental activity. If the brain were simply a huge data cruncher, computers would already be kicking our butts. But there are leaps of insight and intuition that we regularly take as humans that have yet to be replicated in a digital circuit. Kurzweil predicts that, with the exponential increase of computing power, it will only be a matter of time until computers match and exceed the capabilities of human intuition.

Google’s Brain Wave

But Intel’s efforts bring up another possibility, the one posited by Google’s Sergey Brin – what if a chip can connect our human needs, intuitions and hunches with the data and processing power available through the grid of the Internet? What if we don’t have to go through the messy and wasteful effort of formulating all those neuronal flashes into language that then can be typed into a query box because there’s a direct pipeline that takes our thoughts and ports them directly to Google? What if the universe of data was “always on”, plugged directly into our brains? Now, that’s a fascinating, if somewhat scary, concept to contemplate.

Let’s explore this a little further. John Battelle, in a series of posts some time ago, asked why conversations were so much more helpful than web searching. Battelle said that it’s because conversations are simply a much bigger communication pipeline, and that’s essential if we’re talking about complex decisions.

What is it about a conversation? Why can we, in 30 minutes or less, boil down what otherwise might be a multi-day quest into an answer that addresses nearly all our concerns? And what might that process teach us about what the Web lacks today and might bring us tomorrow?

Well the answer is at once simple and maddeningly complex. Our ability to communicate using language is the result of millions of years of physical and cultural evolution, capped off by 15-25 years of personal childhood and early adult experience. But it comes so naturally, we forget how extraordinary this simple act really is.

Talking (or Better Yet – Thinking) to a Search Engine

As Battelle said, conversations are a deceptively rich communication medium. That’s because they evolve on both sides, allowing the conversants to quickly veer and refine the dialogue to keep up with our own mental processes. Conversations come closer to keeping pace with our brains. And, if those conversations are held face-to-face, not only do we have our highly evolved language abilities, we also have the full power of body language. Harvard professors Nitin Nohria and Robert Eccles said in their book Networks and Organizations: Structure, Form and Action:

In contrast to interactions that are largely sequential, face-to-face interaction makes it possible for two people to be sending and delivering messages simultaneously. The cycle of interruption, feedback and repair possible in face-to-face interaction is so quick that it is virtually instantaneous. As (sociologist Erving) Goffman notes, “a speaker can see how others are responding to her message even before it is done and alter it midstream to elicit a different response.”

The idea of a conversation as a digital assistance medium is interesting. It allows us to shape our queries and speak more intuitively and less literally. It allows us to interface and communicate the way we were intended to. In his post, Battelle despaired of an engine ever being this smart and suggested instead that the engine act as a matchmaker with a knowledgeable human on the other side, the Wikia/Mahalo approach. I can’t see this as a viable solution, because it lacks the scale necessary.

This is not about finding one piece of information, like a phone number or an address, but helping us through buying a house or a car. Search still falls far short here, something I touched on in my last Just Behave column on Search Engine Land. In those situations, we need more than a tool that relies on us feeding it a few words at a time and then doing its best to guess what we need. We need something similar to a conversation, in a form that can instantly scale to meet demand. Google, for all its limitations in a complex scenario, has still built the expectation of getting information just in time. And the bottleneck in these complex situations is the language interface and the communication process. Even if we’re talking to another person, with all the richness of communication that brings, we still have to transfer the ideas that sit in our head to their head.

So, back to Intel’s brain chip. What if our thoughts, in their entirety, could instantly be communicated to Google, or Bing, or whatever flavor of search assistant you want to imagine? What if refining all the information that was presented was a split-second firing of a synapse, rather than a laborious application of filters that sit on the interface? Faster and far more efficiently than talking to another human, we could quickly sift through all the information and functionality available to mankind and tailor it specifically to what we needed at that time. That starts to boggle the imagination. But is it feasible?

I believe so. Look again at the brain activity charts generated by the UCLA – Irvine research team that tracked people using a Google-like web search interface, particularly the image in the lower right.


Let’s dig a little deeper into what is actually happening in the brain when we Google something. The image below is from the Internet Savvy group in the UC study (sorry about the fuzziness).


The front section of the brain (A) shows the engagement of the frontal lobes, indicating decision making and reasoning. This is where we render judgment and make decisions in a rational, conscious way. The section along the left side of the brain (B) houses our language centers, where we translate thought to words and vice versa. The structures in the center part of the brain, hidden beneath the cortex, are the subcortical structures (C), the autopilot of the brain, including the basal ganglia, hippocampus and hypothalamus. I touched on how these structures dictate what much of our online activity looks like in a post last week. Finally, the area right at the back of the brain indicates activation of the visual cortex, used both to translate input from our eyes and also to visualize something “in our mind’s eye”. As shown by the strong activation of the language centers, much of the heavy lifting our brains do when we’re Googling involves translating thoughts into words.

Knowing that these are the parts of the brain activated, would it be possible to provide some neural shortcuts? For example, what if you could take memories being drawn forward (activating both the hippocampus and the frontal lobes) and translate them directly into directives to retrieve information, without first converting them into words? This “brain on Google” approach could be several orders of magnitude more efficient than anything we can imagine currently.

By the way, this interface could work both ways. Not only could it feed our thoughts to the online grid, it could also take the results and information it receives and pipe them directly to the relevant parts of our brains. Images could be rendered instantly in our visual cortex, sounds in our auditory cortex, and facts and figures could pass directly to the prefrontal cortex. Call it the Matrix, call it virtual reality, call it what you want. The fact is, somewhere in an Intel research lab, they’re already working on it!
