The Blue Brain Project aims to solve the epic problem of consciousness. But will it let us catch consciousness in the act?
If you’ve been fussing over a birthday gift for the sci-fi nut or armchair transhumanist in your life, consider a ticket to Lausanne, Switzerland. That’s where, in one corner of the Ecole Polytechnique Fédérale de Lausanne, an IBM supercomputer is quietly making some fitful first steps toward consciousness.
Its name is Blue Brain. Its job is to simulate, at the cellular level, the interaction of neurons. Launched in collaboration with IBM in 2005, the project has so far simulated a basic computational unit of a two-week-old rat’s brain. This single neocortical column – around 10,000 neurons locked into 30 million synaptic handshakes – has been doing what its makers hoped it would, which is act much like a real bundle of neurons. What’s especially remarkable is that it accomplishes this feat from the bottom up, with the complexity emerging from the behaviors of the individually modeled parts alone.
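For the curious, the bottom-up idea can be sketched in drastically simplified form: model each neuron individually with a local rule, wire them together, and let any network-level behavior emerge on its own. The toy below uses leaky integrate-and-fire neurons; every parameter and the tiny scale are illustrative assumptions of mine, nothing like Blue Brain’s detailed, ion-channel-level models.

```python
import random

# Illustrative sketch of bottom-up simulation: each neuron is modeled
# individually (here, leaky integrate-and-fire), and network activity
# emerges only from these local rules. All values are arbitrary toys.

random.seed(42)

N = 100                 # toy network; a real column has ~10,000 neurons
THRESHOLD = 1.0         # membrane potential at which a neuron fires
LEAK = 0.9              # per-step decay of membrane potential
WEIGHT = 0.12           # synaptic bump delivered per incoming spike

# random sparse connectivity: each neuron excites 10 others
targets = [random.sample(range(N), 10) for _ in range(N)]
potential = [0.0] * N

def step(external_input):
    """Advance the whole network one time step; return who fired."""
    fired = []
    for i in range(N):
        potential[i] = potential[i] * LEAK + external_input[i]
        if potential[i] >= THRESHOLD:
            fired.append(i)
            potential[i] = 0.0  # reset after a spike
    # deliver spikes to synaptic targets for the next step
    for i in fired:
        for j in targets[i]:
            potential[j] += WEIGHT
    return fired

total_spikes = 0
for t in range(200):
    drive = [random.uniform(0.0, 0.15) for _ in range(N)]
    total_spikes += len(step(drive))

print(f"{total_spikes} spikes across 200 steps")
```

Nobody wrote a rule saying “fire in waves” or “settle into a rhythm”; whatever collective pattern the spike counts trace out falls out of the individual cells alone – which, scaled up by many orders of magnitude of biological detail, is the bet Blue Brain is making.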
Though the brains behind Blue Brain initially played coy about it, the ultimate goal here is not some disembodied and deranged rat in a digital cage; the ultimate goal is an accurate simulation of a whole human brain, one that, if all goes according to plan, exhibits human-like consciousness. As Henry Markram, the neuroscientist who directs the Blue Brain Project, explained in the February issue of Seed magazine, “Consciousness is just a massive amount of information being exchanged by trillions of brain cells. If you can precisely model that information, then I don’t know why you wouldn’t be able to generate a conscious mind.”
Today, that goal appears very far off. With two million or so neocortical columns in the human brain, scaling up the current simulation architecture would require hundreds of billions of dollars’ worth of the Blue Gene/L supercomputers it runs on now. Nevertheless, Markram is optimistic that they’ll be able to simulate a human brain on one computer in 10 years or less.
Considering that it took Microsoft half that time just to release the flatulent Windows Vista, I’d be surprised – gobsmacked, actually – if it all went down that smoothly. But perhaps I’m radically underestimating the thirst that exists for machine consciousness. What, after all, is really at stake here, besides the possibility of some very unreliable and neurotic computers? In the same Seed article, writer Jonah Lehrer echoes the high hopes that many seem to have for Blue Brain, noting that, “If the simulation is successful . . . the epic problem of consciousness will have been solved.”
Solved? Once and for all? The quotidian reality, as usual, may be a tad less epic, a tad more banal. If these initial attempts prove to be a bust, Markram’s team will be sent back to the source to see what they left out of their model. It will take a great many such trips by a great many more people before anybody is going to concede that it can’t be done. (Thus, we could be temporarily spared the sad sight of dejected futurists scrambling for other ways that rich folk might transcend their grotty meat bodies.)
If, on the other hand, these efforts do bear self-aware fruit, try not to be too sore with the resulting cultural whimper. Those who are pinning all of their hopes for humanity on this success – mind-body dualism and religion stamped out in one generation! – would be wise to note our collective resistance, from Descartes onward, to getting truly bothered by the multifarious ways in which animals hint at their own self-awareness. The machine has no soul, they’ll shrug. Once the dust has settled, will they even let Blue Brain v35.0 attend a Kansas public school?
I don’t mean to downplay the value of this and similar research, which at the very least will help usher in medical and computing advancements grand enough to make your armchair transhumanist cream his/her/its jeans. But as far as “solving” consciousness, it might all come down to that old chestnut about the tree falling in the woods: whether you respond with a yes (because sound is a pressure-variation wave) or a no (because sound is a sensory phenomenon), the act of answering involves imposing a limit – an act of faith. Despite the reams of wacky neologisms that philosophy and science have given us to bicker about consciousness, all we’ve concluded to date is that it’s one slippery little bugger, lurking somewhere between what you feel, what you do, and what is done to you, between inarticulate fact and indispensable fiction. Seeking it out will entail another such act of faith.
So, when we’ve finally created the computer that speaks to us in human poetry – and my money, for what it’s worth, says that we eventually will – what then? Since we built it from the ground up, we’ll know this tin man pretty damn intimately, and we should be clever enough by then to translate our record of its every last simulated neural event into sensations and thoughts – an embarrassing memory, a tickle on the bum. Looking down on this wealth of data from the outside, however, will we really be able to see consciousness itself, even if we can confidently infer that it’s rattling around in there? Or will we remain one consciousness peering at solid evidence of another, just as we always have? And for all of our accelerating technological prowess, that eel called consciousness would slither away unscathed.