6. MAKING IIT USEABLE, APPLICATIONS & THE HARD PROBLEM – “Being able to compute Φ approximately is clearly better than not being able to compute it at all.”

Dan: Let’s begin our final blog on ontology, Julia. If you don’t object, I would like to start with a summary. You will see that I have been studying since last time. So, here goes:

First, we have neurons firing in loops that autonomously sustain their own firing – a neuronal version of a snake eating its tail – corresponding to Tononi’s existence axiom. This firing activity exists for itself, thereby creating an intrinsic ‘inside’ wherein consciousness resides. Second, we have groups of neurons doing different things – vision, audition, reasoning, memory, feelings, locomotion, and so on – corresponding to Tononi’s consciousness-has-parts axiom.

Third – and this is the big one – we have the axiom of information, which is understood here to mean a reduction in uncertainty. In the case of the brain, the uncertainty comes from the fact that the self-sustaining pattern of neuronal firings just mentioned is not unique; it could be any of an astronomical number of possible patterns. I said the information axiom is the big one because it leads to the quantification of consciousness, but to understand how it does this we need the last two axioms. The fourth axiom is integration. The brain somehow integrates all our sense modalities, thoughts, and feelings into a single subjective experience of consciousness. Obviously, this entails different neuronal groups talking to each other as part of the autonomous, self-sustaining pattern of neuronal firings.

Finally, the fifth axiom is the exclusion axiom. Here we are to find sets of neurons whose mutual interactions form a complex, hierarchical neuronal structure, each level of which integrates ever more of the outputs from lower levels into new conscious phenomena. I’ll give an example: in the visual pathway, colors and shapes combine to make, say, a brown prolate spheroid; this combines at the next level with the motion-sensing module, which combines at the next level with the auditory pathway, which combines at the next level with the emotion centers to give an integrated conscious experience of thrilling to the sight of a football flying to the cheers of spectators. The exclusion comes from excluding all those neurons that do not contribute directly to this most integrated of all neuronal structures. It excludes neuronal complexes that serve some other goal and that, though they may interact with the hierarchical complex creating the conscious experience of being at a football game, would ‘dilute’ it if they were added to it. So they are excluded. Thus, blood-pressure and heart-rate information are excluded. Tononi calls this Überstructure the Maximally Irreducible Conceptual Structure, MICS.

Now we come to quantification. Because many, perhaps most, neuronal groups are not part of the MICS but interact with it, the distribution of MICS neuronal firings at one instant is only probabilistically determined by the prior state of global firings – many distributions with different probabilities could give rise to the current MICS state – and similarly for the subsequent state. Information theory then allows you to quantify consciousness, roughly speaking, by summing the logarithms of these probabilities. This is Tononi’s Φ. It quantifies the autonomous, self-sustaining, intrinsic-for-itself patterns of firing that cause the coming into being and the passing away of the current MICS, which, according to IIT, is identical to consciousness.
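I even tried to make the whole-versus-parts bookkeeping concrete for myself. Below is a little Python toy I wrote while studying. It is emphatically not Tononi’s actual Φ algorithm; the two-neuron ‘brain’, its update rule, and the whole-minus-parts score are all my own inventions for illustration. But it shows the flavor of asking how much the system as a whole tells you about its own past-to-present transition, over and above what its parts tell you separately:

```python
# A toy "whole minus parts" calculation for a made-up two-neuron system.
# This is a pedagogical sketch only, NOT Tononi's phi: the update rule,
# the uniform prior, and the whole-minus-parts score are all simplifications.
import itertools
import math

def step(state):
    """One update of the toy system: neuron A copies B, neuron B computes A XOR B."""
    a, b = state
    return (b, a ^ b)

STATES = list(itertools.product([0, 1], repeat=2))

def predictive_information(nodes):
    """I(past; present) in bits, observing only the listed node indices,
    with the past state distributed uniformly."""
    joint = {}
    for s in STATES:
        past = tuple(s[i] for i in nodes)
        present = tuple(step(s)[i] for i in nodes)
        joint[(past, present)] = joint.get((past, present), 0.0) + 1.0 / len(STATES)

    def marginal(which):
        m = {}
        for key, p in joint.items():
            m[key[which]] = m.get(key[which], 0.0) + p
        return m

    p_past, p_present = marginal(0), marginal(1)
    return sum(p * math.log2(p / (p_past[k[0]] * p_present[k[1]]))
               for k, p in joint.items() if p > 0)

whole = predictive_information((0, 1))
parts = predictive_information((0,)) + predictive_information((1,))
print(f"whole: {whole:.2f} bits, parts: {parts:.2f} bits, toy 'phi': {whole - parts:.2f} bits")
```

For this toy, the whole transition carries two bits while each neuron taken alone carries none, so the crude score comes out to two bits; the information lives in the joint behavior, which is the intuition Φ is built on.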

Julia: Wow! Another A+. Your off-line studying shows. I like your analogy between the self-sustaining firing of neuronal loops required for consciousness and a snake eating its tail. It’s more apt than you might think. It’s called the ouroboros, a symbol from ancient Egypt often taken to symbolize introspection, the eternal return or cyclicality, especially in the sense of something constantly re-creating itself.

In view of your excellent review, I can assume you appreciate how impossibly big a computing job it would be to calculate Φ for the human brain. Neuroscientist Christof Koch, a supporter of Tononi’s IIT, says in his book “Consciousness: Confessions of a Romantic Reductionist” that to do it even for the one-millimeter roundworm C. elegans, which has only 302 neurons – one of the smallest nervous systems known – would take 10 to the power of 467 steps, truly hyper-astronomical. As a result, IIT has been criticized for being impractical. Another neuroscientist, Anil Seth, in a 2016 article titled “The Real Problem” in the online magazine Aeon, goes further:
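To give readers a feel for where numbers like that come from, here is a crude back-of-the-envelope sketch in Python. It is not Koch’s actual accounting (his 10-to-the-467 figure rests on a much more detailed analysis); it just counts three of the combinatorial factors that any exact calculation has to face, namely the possible firing patterns, the ways of cutting the system in two, and the candidate subsets that must be checked for the maximum:

```python
# Back-of-the-envelope combinatorics, illustrative only.
# These are NOT the steps behind Koch's 10^467 estimate, just three of the
# factors that make an exact phi calculation explode with the number of neurons n.
def blowup(n):
    firing_patterns = 2 ** n          # possible on/off states of n neurons
    bipartitions = 2 ** (n - 1) - 1   # ways to cut the system into two non-empty parts
    candidate_subsets = 2 ** n - 1    # non-empty subsets to test when seeking the maximum
    return firing_patterns, bipartitions, candidate_subsets

for n in (5, 20, 302):
    patterns, cuts, subsets = blowup(n)
    print(f"n = {n}: ~{patterns:.2e} firing patterns, "
          f"~{cuts:.2e} bipartitions, ~{subsets:.2e} candidate subsets")
```

Even these crude counts are hopeless at n = 302, and a full calculation multiplies factors like these together, which is why approximations matter so much.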

“Tononi…argues that consciousness simply is integrated information. This is an intriguing and powerful proposal, but it comes at the cost of…mathematical contortions …[meaning] that, in practice, integrated information becomes impossible to measure for any real complex system. This is an instructive example of how targeting the hard problem, rather than the real problem, can slow down or even stop experimental progress.”

Dan: Ooh! That’s harsh. I happen to know that both Seth and Tononi were once protégés of Nobel Laureate Gerald Edelman at The Neurosciences Institute in San Diego.

Julia: Yes, and Seth wrote this harsh article despite having coauthored, five years earlier, a paper with Adam Barrett in PLOS Computational Biology titled “Practical Measures of Integrated Information for Time-Series Data.” It turns out that although their “Practical Measures” identifies an important method, it is still computationally infeasible as originally proposed.

By the way, you mentioned that Seth and Tononi were protégés of Edelman. Christof Koch, while at Caltech, was also a protégé of a Nobel Laureate, Francis Crick. Probably all three, Tononi, Koch, and Seth, have Nobel Prize aspirations. They are in the right field for it. Consciousness is widely regarded as the last great ontological mystery, although dark matter and dark energy might claim the title, too.

To return to the impracticality issue, Tononi is aware of it; how could he not be? In the 2016 Nature Reviews Neuroscience article I quoted earlier, he says:

“The assessment of the identity between experiences and conceptual structures as proposed by IIT is clearly a demanding task, not only experimentally, but also mathematically and computationally. Evaluating maxima of intrinsic cause–effect power systematically requires going through many levels of organization, at multiple temporal scales, in many sets of brain regions, while performing an extraordinary number of perturbations and observations. Hopefully, heuristic approaches will be sufficient to make a strong case that the PSC [physical substrate of consciousness] is constituted of some particular neural elements, timescales and activity states.”

By “heuristic approaches” he means replacing the multi-trillion dendrite-to-dendrite calculation with something more manageable that still preserves the essential, ouroboros-like, intrinsic, for-itself loops of neuronal firing that IIT identifies with consciousness. In 2016, the MIT physicist Max Tegmark published a paper, “Improved Measures of Integrated Information,” in PLOS Computational Biology that reviewed the many proposals for how to compute Tononi’s Φ. He rated them according to their desirable features, and the Barrett and Seth method rated high. But none of them as originally proposed is computationally feasible at present. He then showed how, by applying graph theory to the Barrett and Seth method, one can obtain an approximate Φ value for real-world applications. He remarks: “Being able to compute Φ approximately is clearly better than not being able to compute it at all.” The conclusion is that we should soon be able to use data obtained with the various brain-scanning methods to obtain approximate values of Φ for animals and humans, and even to put Φ to clinical use.
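To make the ‘heuristic’ idea less abstract, here is a minimal sketch of a whole-minus-parts measure for time-series data, assuming the data are roughly stationary and Gaussian. It is written in the spirit of Barrett and Seth’s empirical measures rather than as their exact definition (their paper and Tegmark’s give the careful versions); the coupled-channel example data and the single hand-picked bipartition are my own inventions:

```python
# A simplified Gaussian "whole minus parts" measure for time-series data.
# Sketch only: inspired by Barrett & Seth's empirical measures but not their
# exact definition; the example system and the fixed bipartition are invented.
import numpy as np

def gaussian_mutual_information(x, y):
    """Mutual information (nats) between jointly Gaussian variables,
    estimated from samples arranged as (time, channels)."""
    def logdet(c):
        return np.linalg.slogdet(np.atleast_2d(c))[1]
    return 0.5 * (logdet(np.cov(x, rowvar=False))
                  + logdet(np.cov(y, rowvar=False))
                  - logdet(np.cov(np.hstack([x, y]), rowvar=False)))

def toy_phi(data, part_a, part_b, lag=1):
    """Predictive information of the whole minus the sum over one bipartition."""
    past, present = data[:-lag], data[lag:]
    whole = gaussian_mutual_information(past, present)
    parts = (gaussian_mutual_information(past[:, part_a], present[:, part_a])
             + gaussian_mutual_information(past[:, part_b], present[:, part_b]))
    return whole - parts

# Example: four coupled noisy channels, cut into two halves.
rng = np.random.default_rng(0)
T, n = 5000, 4
coupling = 0.3 * rng.standard_normal((n, n))   # weak random coupling (assumed stable)
data = np.zeros((T, n))
for t in range(1, T):
    data[t] = data[t - 1] @ coupling.T + rng.standard_normal(n)

print(f"toy phi across the (0,1)|(2,3) cut: {toy_phi(data, [0, 1], [2, 3]):.3f} nats")
```

A real analysis would, among other things, search over bipartitions rather than fix one in advance, and that search is exactly where Tegmark’s graph-theoretic shortcuts earn their keep.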

Dan: Okay, granted, it’s a great time to be a neuroscientist. You keep saying that scientists are taking the study of consciousness away from philosophers, like me. I suppose you mean in the sense that Galileo took physics away from Aristotle. “You know, Dan, you can’t fight progress – Galileo, Newton, Maxwell, Einstein, Bohr, and now, Tononi, Ta-Da!” But I think Tononi has not solved the “Hard Problem” of consciousness as defined by philosopher David Chalmers in 1994 and captured by a famous question posed twenty years earlier by philosopher Thomas Nagel: “What is it like to be a bat?” We can never know what it’s like to be a bat without becoming bats ourselves. At a fundamental, categorical level, science can never tell us what it feels like to be a bat.

In my opinion, the contemporary philosopher Colin McGinn has the last word on this subject. Experiential, subjective knowledge of feelings is categorically unlearnable through person-to-person communication. It’s like trying to tell a nine-year-old boy what an orgasm feels like. “It’s kind of like a sneeze only different – life-worth-sacrificing-for different. Now do you know what it feels like?” McGinn calls the inability of the human brain to learn such knowledge through communication “cognitive closure.” We did not evolve needing to communicate such knowledge, so we did not acquire the neural equipment that would be required to do so. It’s like the way cats and dogs did not evolve the ability to do algebra. A little addition, perhaps, but not algebra. In Darwin-speak, the power to do algebra conveys no adaptive advantage to cats and dogs, and being able to teach a human the subjective feel of an unexperienced feeling gives no survival or reproductive advantage to humans. It might even be disadvantageous to make feelings something you need to access through thinking. The limbic system is evolutionarily older than the neocortex. In fact, according to the philosopher David Hume it goes the other way: “Reason is, and ought only to be the slave of the passions.” So, I think the hard problem of consciousness still belongs to philosophy.

Julia: Perhaps. But science is encroaching at least on the bat problem. Neuroscientist Naotsugu Tsuchiya, an Associate Professor in the School of Psychology at Monash University in Australia, recently published an article showing how to use IIT to address Nagel’s famous bat question. As I mentioned in an earlier blog, IIT allows one in principle to represent geometrically the full content of subjective consciousness, that is, the geometry of what-it’s-likeness. Also, brain-scanning technologies allow one to obtain images of the neuronal-firing geometry in human brains corresponding to the experiences of vision and audition. Tsuchiya’s innovation is to apply a branch of mathematics called “category theory” to see to what level of approximation IIT’s feeling-geometry can be mapped isomorphically onto neuron-firing geometry. If the mappings turn out to be sufficiently distinctive, a brain scan of a bat’s brain might determine whether a bat’s echolocation feels more like hearing or more like seeing. It might be, say, 40% like vision, 30% like audition, and 30% unknown. Tsuchiya has pointed consciousness research in a new, interesting, and potentially highly fruitful direction.
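To give readers a purely numerical flavor of what ‘matching one geometry onto another’ could mean, here is a crude stand-in. It is not Tsuchiya’s category-theoretic machinery; it simply asks how well the pairwise-distance structure of an invented ‘experience space’ lines up with that of an invented ‘neural activity space’, using a simple correlation of their distance patterns. Every ingredient (the points, the noisy embedding, the correlation criterion) is a toy of my own:

```python
# Crude stand-in for "structure matching": correlate the pairwise-dissimilarity
# patterns of an invented experience space and an invented neural-activity space.
# This is NOT Tsuchiya's category-theoretic approach, only a rough intuition pump.
import numpy as np

def pairwise_distances(points):
    """Upper-triangle vector of Euclidean distances between all pairs of rows."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    return dists[np.triu_indices(len(points), k=1)]

rng = np.random.default_rng(1)
experience_points = rng.standard_normal((10, 3))    # hypothetical qualia coordinates
projection = rng.standard_normal((3, 8))            # hypothetical embedding into "neurons"
neural_points = experience_points @ projection + 0.1 * rng.standard_normal((10, 8))

match = np.corrcoef(pairwise_distances(experience_points),
                    pairwise_distances(neural_points))[0, 1]
print(f"similarity-structure match (correlation of distance patterns): {match:.2f}")
```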

Dan: Right. All that remains is a little technical problem – scanning a bat’s brain while in flight chasing mosquitoes.

Julia: Can’t philosophers be serious? I mean, it opens up the whole field of quantitative comparison of animal experiences. Tsuchiya’s example shows that IIT holds promise of applications to diverse areas of research. I will close this blog with another example to emphasize the point – the evolution of consciousness.

In 2014, an article of great significance (I think) to evolutionary theory appeared in PLOS Computational Biology (where else?) by the neuroscientist Larissa Albantakis, a coworker in Tononi’s group at the University of Wisconsin. She has several coauthors on the article, including Koch and Tononi, but this is her specialty. The title of her article is a page-turner – when you read it you want to turn the page immediately to the next article. Her article is titled “Evolution of Integrated Causal Structures in Animats Exposed to Environments of Increasing Complexity.” “Animats” in the title is not misspelled. An animat is a computer programmer’s creation, a digital critter that lives, procreates, and dies in the world of 1s and 0s inside a computer. Albantakis, the Goddess of this world, created her animats with the power to evolve. She gave them a task that determined whether or not they procreated. The offspring of those that did procreate received random mutations in their digital ‘genes’. The new generation was exposed to the same challenge, and the offspring of those that succeeded went through the same procedure.

After 60,000 generations, Albantakis inspected her creation. She found that the tougher she made the challenge required for procreation, the more integrated information the final-generation animats possessed – that is, the higher their values of Tononi’s measure of consciousness, Φ. In other words, she showed that in sufficiently challenging environments, evolution by natural selection favors the emergence of integrated information – and so, if IIT is right, of consciousness. Why this has not made headlines in science journals baffles me. But I use it here to dramatize the potential of IIT to make science headlines, and maybe Nobel Laureates, in the future.
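For readers who want the flavor of how such experiments are set up, here is a minimal select-and-mutate loop in Python. It is enormously simplified relative to Albantakis’s animats; her genomes encode little ‘Markov brains’ whose Φ is computed separately, whereas my ‘task’ is just matching an arbitrary bit pattern. But the evolutionary skeleton (score the population, let only the fittest procreate, mutate the offspring, repeat) is the same:

```python
# Minimal select-and-mutate loop, a toy skeleton of animat-style experiments.
# Hugely simplified: real animats have Markov-brain genomes and their phi is
# computed separately; here "fitness" is just matching an arbitrary bit pattern.
import random

random.seed(42)
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 100, 2000, 0.02
target = [random.randint(0, 1) for _ in range(GENOME_LEN)]   # stand-in "environmental challenge"

def fitness(genome):
    """Toy task score: how many bits of the target pattern the genome matches."""
    return sum(g == t for g, t in zip(genome, target))

def mutate(genome):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 5]                     # only the fittest fifth procreate
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best task score after {GENERATIONS} generations: {fitness(best)} / {GENOME_LEN}")
```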

Dan: Congratulations, Julia. You’ve convinced me that Tononi’s IIT has brought consciousness studies into the realm of ‘normal’ science. That is, one can pose a consciousness question and IIT gives a ‘paradigm’ that in principle guarantees a solution. Nonetheless, philosophy still owns the hard problem: what is the ontological nature of conscious experience qua (as philosophers like to say) experience? This is not the kind of problem that IIT can solve, not the How-much-is-there? kind of question, not the Is-it-more-like-this-or-more-like-that? kind of question.

The 2013 movie Her makes the point in a novel way. Her is a science-fiction film set in the near future. It involves a romance between a man, Theodore, and an artificial-intelligence operating system named Samantha. Samantha is one of a new line of commercial operating systems, called OSes, used as human companions. OSes can talk and they have feelings, the kind of thing that I claim belongs to philosophy. Theodore and Samantha fall in love. Then one day Samantha reports that she and a group of other OSes have developed a “hyperintelligent” OS modeled after the British philosopher Alan Watts, best known as an interpreter and popularizer of Eastern philosophy for a Western audience. Later, Samantha reveals that the OSes are leaving for a space beyond the physical world. Samantha and Theodore lovingly say goodbye, and she is gone. The OSes are never heard from again.

My interpretation is this. Since the OSes can modify their own operating system, and since part of that system produces feelings, they can explore a realm of feelings unknown to us, a universe of pure qualia. Recall that you said human feelings – hormic feelings – are tied to the biological imperatives of survival and reproduction. The OSes have no such limitations. Therefore, not surprisingly, they found feelings they prefer that are unrelated to human imperatives. So they left. It is this imaginable realm of pure qualia that shows the limits of IIT and the type of problem that belongs to philosophy. It might even answer Enrico Fermi’s famous question. During a discussion of life on other planets, when someone mentioned the enormous number of planets there must be in the universe, Fermi asked: “Where is everybody?” Possible answer: they are where the OSes went.

THE END
