PANSYNAISTHIMA – A Journey from Feeling to Consciousness
A script of a dialog between Julia and Dan
Prelude: We are going to do a skit we call “PANSYNAISTHIMA – A Journey from Feeling to Consciousness” featuring a philosopher, Dan (no, not Dan Dennett, who thinks consciousness is an illusion), and a neuroscientist, Julia (no, not Giulio Tononi, whose Integrated Information Theory – the Ithaca of this Odyssey – is the best thing to come along in decades in the scientific study of consciousness).
Our journey has two acts, each followed by time for discussion. The first act establishes that we stay within the realm of physics: there is just physics and the phenomenon of consciousness, nothing extra like a soul or spirits. But what emerges from combining consciousness as a thing with physics as the dynamics that governs all things is something philosophers call dual-aspect monism. Monism means that there is nothing at play except physics. Dual-aspect means consciousness comes along as part of the physics, not as an epiphenomenon but as something fundamental. It’s all physics – force applied to mass causes motion, and in the biological context of consciousness studies motion means movements needed for some biological function. We end the first act with the conclusion that to every biological function there is a subjective experience – a feeling – necessarily tied to it. Function and feeling are two aspects of the same thing, again dual-aspect monism. It is a Janus concept – one god with two faces, one looking in (first-person view), the other out (third-person view), but just one god, physics.
The skit’s second act begins with Dan attacking the feeling/function version of dual-aspect monism by noting that it is radically different from other versions, for example, panpsychism, which attributes a ‘mentalistic’ property to fundamental physical entities like electrons and protons. Function is not a physical entity; it is a physical process. Julia defends it and names it pansynaisthimism. Then Dan attacks the conclusion that ‘to every function there is a feeling’ by noting that it would give us too many feelings: there are many more functions than feelings. And this is where Julia brings in Giulio Tononi’s Integrated Information Theory. Tononi broke with the universal and eternal tradition wherein consciousness researchers work to “squeeze the wine of consciousness out of the material stuff of the brain” and instead – in the tradition of major breakthroughs in physics – asks what must be the structure of brain stuff to produce the phenomenon of consciousness. To answer, Tononi postulates five axioms, the fifth of which solves the too-many-feelings problem. Together the axioms lead to a mathematization of consciousness. Combining this with dual-aspect monism produces testable results. Philosopher Dan concludes by connecting the pansynaisthimism idea with the philosophers Spinoza, Kant, Hegel, Schopenhauer, William James, and Whitehead. And so we start the journey.
Julia: Dan, you’re such a smart philosopher; I have a problem you might be able to help with. Robot and artificial intelligence manufacturers are asking us neuroscientists to tell them how to build feelings into their creations. My approach is to couple feelings to functions, you know, hunger goes with energy resupply, thirst with rehydration, and so on. Can you tell me how nature manages to fit our feelings to the functions they serve?
Dan: Huh? How not? Sounds like a trick question.
Julia: No, it’s not a trick; I’m serious. Besides application to AI products, this question leads to the hottest idea currently alive in the field of consciousness – Giulio Tononi’s Integrated Information Theory – and it does so in a new way, a way in which his theory has yet to be applied, namely the realm of feelings and emotions. Really interesting stuff. So, that’s why I asked the question. Again, hunger – a feeling – makes us want to eat, obviously. Eating serves a biological need, but it is hunger that makes us eat, not the need. Why do we need hunger? Why isn’t the need enough? Please notice just how far hunger, as a type of thing, is from biological need as a type of thing; hunger is a feeling whereas a need is not a feeling but a condition, a requirement for something to happen. Feelings and conditions are categorically different things. Yet the feeling of hunger is essential to maintain life. I am asking you because it is an ontological question. You know? Ontology? The branch of metaphysics dealing with the nature of being – and philosophers are supposed to be good at metaphysics.
Dan: Yes, I know what ontology means – a rainbow is ontologically different than a poem, despite Wordsworth’s heart leaping up when he sees one. A rainbow is a pattern of light, and a poem is a pattern of words. Love is ontologically different than a song, despite a bazillion songs about love. Love is an emotion, and a song is pressure waves. A horizon is ontologically different than a mountain, although mountains often define a horizon. A horizon is a geometrical thing, and a mountain is a mass of rocks. Now you are adding that feelings are ontologically different than the biological functions they serve, using hunger and nourishment as an example. Why are you taking this fitting-of-feeling-to-function approach to robot consciousness?
Julia: Because I am puzzled by a mystery. The mystery is “Where do feelings come from?” and I do not mean in the sense of the neural correlates of feelings. Everyone knows that the brain’s limbic system is the seat of feelings and emotions. So, that’s not the mystery. I mean ontologically. What can feelings be that they are so plastic that evolution can fashion them to fit biological needs?
Dan: I agree, feelings are interesting. What have you found?
Julia: First let me expand the plot. Think of other feeling-need pairs: thirst and rehydration; fear and flight to survive; anger and confrontation to maintain status; lust and sex to reproduce; and love and marriage for offspring survival. In each case feeling and need lie in ontologically different realms, yet they form inseparable biological pairs. I think we are looking here at a universal law of nature. Just as all mass has energy (Einstein, you know: E = mc^2), all feelings have functions. That sounds stupidly obvious until you realize that feelings and functions seem to be as radically different as mass and energy. Yet in 1938 Einstein and Infeld bit the ontological bullet and declared that mass and energy are the “same property” of a physical system; that is, according to Einstein and Infeld, they are the same. By analogy, I am saying that feeling and function are the same property of a conscious system. At least we should reconstruct our ontological categories so as to take these feeling-function pairs to be ontologically united.
Dan: Sounds profound. Is it?
Julia: I think so. To understand the point better, consider the opposite case in which feelings and the biological needs they serve are ontologically independent, each situated in its own ontological space, as was thought to be the case for metrical 3-D space and chronological time until special relativity united them into 4-D spacetime. If feelings and needs occupy independent realms of being, then evolution would have to coordinate them so that the feeling fits the need, that hunger, for example, does not cause us to drink when the body needs food, that fear does not cause us to confront when we should lie low. And where does evolution find these feelings anyway so that it can choose the appropriate one?
Dan: We’ve been stuck with this mind/body problem for 400 years, since René Descartes. So, what’s new?
Julia: What’s new is the revelation to which I am coming, but first we must locate where feelings and needs meet physically, for here is where the revelation lies.
Dan: Let me guess; they meet in the brain.
Julia: Brilliant! Yes, of course; as already said, they meet in the limbic system, where the need is sensed and broadcast to parts of the brain that can act on it. For example, when the stomach determines it needs more food, it releases a hormone (ghrelin) into the blood that the hypothalamus, part of the limbic system, senses; the hypothalamus then sends a signal telling the rest of the brain to organize actions that result in eating. We subjectively experience this signal as hunger. And here is the point: there is nothing extra in the process to produce the feeling of hunger. The feeling of hunger and the hypothalamic signal are the same. In philosopher speak: the epistemic recognition that need and feeling are always paired implies ontological unity. Thus, the problem with which we started, of fitting feeling to function, is automatically solved – feeling and need are two aspects of one thing.
Let’s go back to the Einstein spacetime analogy, because it helps show how truly radical the feeling/need unity is. What could be more different to our intuition than space and time? They are incommensurable, which means they have no common measurement scale – in a word, no common dimension. Yet Einstein showed them to be ontologically united. This was possible because if you multiply time by the speed of light, which has the dimension of distance over time, you get the dimension of distance. Then time becomes one of the commensurable axes in a four-dimensional distance space.
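The dimensional bookkeeping behind this point can be written out explicitly; the following is standard special relativity, with the speed of light c serving as the conversion factor:

```latex
% Multiplying time by c (dimension: distance/time) yields a quantity
% with the dimension of distance:
%   [ct] = (distance/time) x time = distance.
% Time then joins the three spatial axes as a commensurable fourth
% coordinate, and the invariant spacetime interval mixes all four:
s^2 = (ct)^2 - x^2 - y^2 - z^2
```

It is this common dimension, bought by the constant c, that lets space and time be treated as axes of one four-dimensional geometry.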
But what do you multiply ‘biological need’ by to get the ‘dimension’ of feeling? Something with the dimension of feeling per unit need. What is that? The speed of life? The way the question is framed, it sounds like physics. We experience, say, ten units of hunger per one unit of missing blood sugar. Such quantification is addressed by Fechner’s law of psychophysics, which relates change in perceived intensity of feeling to change in intensity of the stimulus that produces the feeling. This, however, is not our interest. Instead, we want feeling as such, not as in ‘how much’ but the thing itself – feeling as feeling. There is no feeling-per-unit-need constant (like the speed of light) that multiplied with need gives the subjective experience of feeling – say, the sensation of hunger.
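For reference, Fechner's law, which handles the 'how much' side that is set aside here, takes the following standard psychophysical form:

```latex
% Fechner's law: perceived intensity S grows logarithmically with
% physical stimulus intensity I above a threshold I_0; k is an
% empirically fitted constant for each sense modality.
S = k \ln\left(\frac{I}{I_0}\right)
% Note the contrast with c in relativity: k relates two measurable
% magnitudes; it does not convert a physical quantity into
% feeling-as-such.
```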
Dan: Right. And this is precisely why hardcore physicalists like Dan Dennett say that feelings and all conscious, subjective experiences are illusions. Since there is no way using purely physical factors to convert a stimulus to a feeling, feelings are ipso facto non-physical and so not really real. Hence, they must be illusions.
Julia: Maybe this is why Giulio Tononi never mentions feelings in his writing on Integrated Information Theory. Nonetheless, he does take the subjective experience of consciousness as real – really real. In fact, his theory starts with this assumption. And now I am trying to approach his theory via a discussion of feeling.
Dan: I understand. What you say about the cause of hunger can be said of the rest of the feeling/need pairs you listed. A part of the body sends a chemical signal to the limbic system that it needs something, the limbic system then sends a signal telling the rest of the brain to organize the body’s actions to supply the need, and the action-inciting signal is the same thing as the feeling that we experience as the need.
Julia: Well put! The ontological identity of feeling with biological need is, I think, the contribution that the fitting-of-feeling-to-function (the 3Fs) problem makes to the mind-body problem. We deceive ourselves thinking that they must be ontologically distinct things like space and time. The 3F problem tells us that they must be the same despite the way we think about them.
Dan: Yours isn’t the first attempt to solve the mind-body problem. In philosophy your dualism idea goes back at least to Spinoza, nearly 350 years ago. Now it goes under the heading dual-aspect monism, which you can google.
Julia: Nor am I the first to see the ontological identity of feeling and function. Hear what the physicist Juan Roederer has to say in his 2009 book Information and Its Role in Nature: “How a specific spatio-temporal neural activity distribution elicited by the sight of an object…becomes a specific mental image is an old question…I think that there is a radical answer. The pattern does not “become” anything – the specific distribution is the image!” Notice that Roederer, as a physicist and unaware of the philosopher’s dual aspect monism, calls it a “radical answer”.
Dan: You quoted from Roederer’s book Information and Its Role in Nature. Does it mention Giulio Tononi’s Integrated Information Theory (IIT), which you touted as the hottest consciousness theory of the moment? Tononi described it in a 2008 publication provocatively titled “A Provisional Manifesto.”
Julia: No, Roederer wrote his book before Tononi published his manifesto.
Dan: I see a problem with your feeling/need-identity idea because there is a gap between the feelings that we experience aimed at specific goals and the real biological functions that these goals serve. For example, hunger causes eating; thirst, drinking; fear, fleeing; lust, sex; etc. But the real biological functions being satisfied are nourishment, hydration, survival, and procreation, etc. Thus, we have a disconnect – a gap – between why it is we do something and the biological purpose it serves. Our feelings incite actions that only indirectly result in supplying the biological need. Richard Feynman makes the point in a famous quote: “Physics is like sex: sure, it may give some practical results, but that’s not why we do it” – “but that’s not why we do it.” Doesn’t this gap torpedo your idea?
Julia: I think it supports the idea. The gap is well known and has the horrible name, hormic. The person who gave us this name in its biological context (he borrowed the word from hormic or motivational psychology) is the zoologist Wilfred Agar of whom you have probably never heard. Agar was an Anglo-Australian zoologist who worked in the first half of the last century. In his 1943 book A Contribution to the Theory of the Living Organism he writes:
“[T]he purpose of the animal under the sexual urge is to mate. We may describe mating as the ‘hormic goal’ of sexual activity; there is an agent [meaning us] striving to that end. The continuance of the species, on the other hand, may be called the ‘biological consequence’ of sexual activity, because although sexual activity has that result, we can identify no agent striving towards that end.”
In other words, “that’s not why he does it.” We commit hormic acts to enjoy a feeling (e.g. eating and sex) or to be rid of a feeling (e.g. hunger and fear), and these acts achieve “practical” biological results, but there is no agent that, for example, feels lustfully driven to make a kid. A million novels explore the real thing for which we are lustfully driven, and none claims lust is a drive to make a kid.
Dan: Giving it a name doesn’t answer the question.
Julia: My point in bringing up hormic feelings does relate to your question. But to get to its relevance I must tell you why hormic goals and biological goals are not the same. One reason they must be different is incompatible time scales. Consider procreation. The emergence of multicellular life some 600 to 800 million years ago was enabled by reproduction moving from cell division to eggs. Then the whole process of procreation – the time for the egg to hatch – became too long for the parent to postpone eating and fleeing for the duration. Eggs are autonomous; they do their thing without attention from their parent, thus allowing the parent to eat and flee as needed. This means that the act of producing an egg must be separate from the act of procreation, which is the egg’s business. But this evolutionary step from cell division to egg required a means to produce an egg, namely sex, which, as a need, automatically comes together with its ontologically tied hormic aspect that we experience as lust. It was life’s discovery of the egg mode of reproduction that, as part of the discovery, created Precambrian sex and, ontologically tied to it, a Precambrian version of lust. So you see, if evolution discovers a new function, then the need thereby created to implement the function ineluctably creates the need’s own particular hormic feeling. This is why I think your launched torpedo actually carries a payload supporting the 3F idea – feelings jump the gaps.
Dan: Interesting observation – in evolution new feelings arise when new functions appear. In your example sexual lust arose some 600 to 800 million years ago when the mode of reproduction became sperm-and-egg. Neat! You could also have picked a present-day example many of us know personally.
Julia: What’s that?
Dan: The cravings of addiction, be it for the morning java, the I-just-gotta-have-another-cigarette craving, or the second-martini urge. The interesting thing is that each craving is specific, has its own feel. A cup of coffee, a cigarette or even a beer will not satisfy a want for a second martini, at least in my experience. For your purposes, drug induced cravings offer a shopping mall of manufactured, non-natural feelings.
Julia: Thanks, that’s good. I’ll add cravings to my examples. But for the feeling/function duality to be valid, each craving must correspond to a function. What are the functions of cravings? My field, neuroscience, can answer that. All pleasure-inducing substances we consume have some impact on the brain’s reward-processing center. Dopamine is a chemical messenger the brain uses to induce sensations of pleasure. Addictive drugs send a flood of dopamine to the brain’s pleasure maker, the nucleus accumbens. Regular drug use causes the brain to produce less dopamine, resulting in a chemical imbalance. When the drugs are not active in the brain, dopamine levels drop, causing uncomfortable withdrawal symptoms and powerful cravings for the specific drug that produced the dopamine oversupply. So, you are right. The specificity in this case supports the feeling/function identity. The feeling says give me more of that particular dopamine-producing substance to restore brain homeostasis, just like hunger and thirst say give me food and water to restore bodily homeostasis.
Dan: Homeostasis. I know that means the self-regulating processes by which biological systems maintain stability and conditions optimal for survival, so, I can see how the concept applies here. How did you hit on it?
Julia: Antonio Damasio, in his new book “The Strange Order of Things,” argues that feelings are what maintain homeostasis, or as he puts it: “Feelings are the mental expressions of homeostasis.” In a sense he is stating the feeling/function duality idea. The function is to restore homeostasis; the feeling is what drives the required restorative action.
This is a good place to end this session. In the next one, I will add even more interesting feeling/function pairs, one of which opens the possibility to test the feeling/function version of dual-aspect monism.
Dan: Okay, where are we? You have defended the idea that feelings and the functions they subserve are ontologically two aspects of one thing, a feeling/function version of dual-aspect monism. Basically, you argue that 1) feelings are too perfectly matched to their functions to have an independent ontological origin, and 2) new functions automatically come with a perfectly matched feeling, and this “coming-with” feature can be automatic only if they are two parts of one thing.
Julia: Well put.
Dan: Thanks. But you have a serious problem in that your feeling/function version of dual-aspect monism is wholly different from all other versions that philosophers have discussed since Spinoza.
Julia: How so?
Dan: All other forms posit a dualism between the mental and the material, that is, between mind and matter, between subjective states of perception and physical brain states. Note that this identity between feeling and neurons firing is precisely the ‘radical idea’ your physicist, Juan Roederer, described earlier. But you, by contrast, pair feelings with function, that is, the mental with a goal or a purpose, and these are not material things. Parroting Roederer, yours is truly a radical idea, a radically different form of dual-aspect monism.
I will illustrate with what I believe to be the commonest form of dual-aspect monism, namely, panpsychism: the idea that all physical entities possess a mental aspect in addition to their known physical ones such as mass, charge, and spin. In this case it is obvious that the mental combines with the material to make one thing, the physical entity itself, be it an electron, a proton, or whatever, and in this sense they instantiate dual-aspect monism, one thing with both mental and material aspects. Bearing in mind that the universal oneness to which monism refers is that all things are physical, it’s your turn to explain in what sense feeling/function duality forms a monism.
Julia: Gladly. In your illustration you satisfy the all-things-are-physical requirement to qualify for monism by attaching a mental aspect to fundamental physical entities. So now we have mass, charge, spin, and – what should we call it? It should be a one syllable word like the rest – how about ‘qual,’ short for qualia. Then panpsychism is the assertion that fundamental physical entities possess mass, charge, spin, and qual.
Dan: Alright, ‘qual’ – but it’s almost as bad as ‘hormic.’
Julia: And panpsychism claims that adding qual to matter enfolds conscious experience and matter in a physical monism. But this isn’t physics, it’s metaphysics, and from my perspective bad metaphysics. Observe what you are mixing together. Mass, charge, and spin are all known to us because they exert force, they move objects. They are constants in equations of motion. Whereas qual exerts no force, enters no equation, is known to us only because we experience it as feeling. And how do we experience it? Through physical/chemical actions in our brains driven by physical forces that cause ions to move along axons and dendrites. Epistemologically considered, therefore, qual is much better identified with physical laws than with physical particles, with the flexibility of action and not with an invariant unit of qual per electron, or proton, or whatever. After all, the ‘physics’ in physical monism is all about time rates of change of things, you know, Schrödinger’s equation.
Dan: You have a point. Can you expand on it?
Julia: The point is just that this is how feeling/function dualism forms a dual-aspect physical monism, and does so more logically and consistently than does mass-charge-spin/qual dualism. Granted it is easy to see how panpsychism acquired its standard mind/matter pairing. Since we are matter and experience consciousness, it is natural to combine the two by attaching a mental aspect to fundamental matter. But that isn’t the way consciousness works. Consciousness is a product of brain activity not brain matter. It is a process not a stuff. And a process has to do with changes in time, not the ontology of matter. It has to do with physics as governing these changes, not physics as a list of the properties of things. To bring this back to function/feeling dualism, process is another word for function, something that makes things change. So, instead of function/feeling dualism, we could say process/feeling dualism where in this case process refers to doing some biological job, a biological process.
Dan: Hmmm, interesting idea. And it occurs to me that it has a powerful advantage that you might not know. It automatically solves panpsychism’s biggest problem. Around 1890, William James, who was a panpsychist, pointed out that it is hard if not impossible to imagine how by combining a bunch of fundamental psychic units attached to the elementary building blocks of matter one can get the diversity we experience as consciousness. Where does the inexhaustible variety come from? This is known as the combination problem, and it has not been answered convincingly, which is why panpsychism plays a minor role in current discussions of consciousness. But if instead of elementary particles we identify processes or function as the relevant fundamental physical units in dual-aspect monism, there is no combination problem. The diversity of conscious experience simply reflects the diversity of brain functions that produce the experience. Each function has its ontologically concomitant feeling. That’s it. Problem solved! Wonderful!
Julia: Glad you appreciate it.
Dan: Does the function/feeling dualism apply to non-biological functions?
Julia: It must if it is to replace panpsychism’s identifying principle that all fundamental building blocks of the universe have a mental aspect. The new principle is instead that all functions/processes/actions in the universe have feelings – a panfeelyism, or to use Greek pansynaisthimism.
Dan: Wow! That’s wild. So, when I drive my car, it has feelings?
Julia: No, not in the sense you mean, that the car is an object with a self that experiences feelings, like a person or an animal or the characters in Disney’s “Cars” movie series. To use Thomas Nagel’s famous criterion for consciousness, given in a 1974 article titled “What Is It Like to Be a Bat?”, there is nothing that it is like to be a car, despite all its functions having a ‘feeling’ aspect. To say in what sense I mean ‘feeling’ here runs up against the limitations of available words. There is no word for unfelt feeling, a word to express the idea of a state of being that, could it be felt, would be a feeling; but to be felt requires consciousness, which takes a special arrangement of feeling functions.
Dan: You have lost me. But besides being unable to understand the concept of an unfelt feeling, I want to return to your feeling/function dualism. Although it automatically solves the panpsychist’s progress-stopping combination problem, it seems to have its own major problem, perhaps its own Achilles heel.
Julia: What’s that?
Dan: An oversupply problem. We have many functions that have no feelings at all. For example, the heart beating, the lungs breathing, the kidneys extracting waste while also performing multiple other separate functions, and the liver – ah, the liver – it not only filters the blood but also performs hundreds more functions. And as for the digestive system, until we feel the urge to pee or poo, we have no sense of the myriad disgusting functions going on down there, thank God. And at the cellular level a bazillion complicated functions are constantly active. So, is not your mantra “for every function a feeling” prima facie false?
Julia: Dan, you could be reading a script, so perfectly does your question lead to Giulio Tononi’s Integrated Information Theory, IIT. But before answering your question let me just say why I like IIT: More than any other theory of consciousness, you can apply it, which makes it in principle testable. Applications made so far support the theory, and it can be used to answer your oversupply problem.
Now, what is Integrated Information Theory? Its central claim is that, to quote Tononi, “Consciousness is one and the same thing as integrated information…. Moreover… it exists as a fundamental quantity—as fundamental as mass, charge, or energy,” unquote. And since information is quantifiable, you can in principle calculate how much consciousness anything has. I say ‘in principle’ because while the calculation is trivial for a thermostat, for a brain it is computationally infeasible with current computers. Nonetheless, the theory has other applications and tests, as I mentioned.
Dan: I want to see it applied to answer the oversupply question. That seems damning to me.
Julia: Right, so to continue: before Tononi, all attempts at a scientific explanation of consciousness started with the intuitively obvious neurons-firing frame of mind. Like, maybe it’s a certain combination of neurons, or a certain part of the brain, or a certain type of neuron, or quantum entanglement happening within microtubules inside neurons. All such attempts failed in the sense that they did not achieve the level of what Thomas Kuhn calls “normal science” in his most-referenced-of-all-academic-books, The Structure of Scientific Revolutions. That is, they could not create a paradigm within which novel problems could be formulated and a method provided that guarantees a solution, at least in principle. The search for a physics-based solution to the problem of consciousness became so desperate that, as you noted, the famous philosopher Daniel Dennett concluded that consciousness does not exist as a physical thing; and since all things that exist are physical, it must be an illusion.
Dan: Yes, that was in his book Consciousness Explained, which critics renamed Consciousness Explained Away. What is Tononi’s transforming trick?
Julia: It is this: Instead of trying to squeeze consciousness out of neurons firing – the universal-until-Tononi bottom-up approach – Tononi takes consciousness as given and asks what must be the properties of the physical substrate that gives rise to it – a top-down approach. This what-must-be-the-properties approach is standard in physics. I will illustrate with Max Planck’s discovery that energy is quantized. In 1900, he asked what must be the property of light energy to explain the spectral shape of radiation emitted by hot bodies. That it must come in discrete packets, now called quanta, gave him his answer and the Nobel Prize. I could add the discovery that light is both a wave and a particle (photons), and so is matter, the uncertainty principle, and both relativity theories. Energy quanta, light particles, matter waves, warped spacetime, and wave-particle duality are radically weird phenomena for which a bottom-up explanation limited by the conceptual tools of classical physics could not succeed. In each case it took a top-down approach. Now back to Tononi: recognizing that consciousness is also a radically weird phenomenon that bottom-up, brain-physics approaches have famously failed to explain, Tononi took a top-down approach: he asked what must be the properties of the physical substrate that gives rise to consciousness.
Dan: And what was his answer?
Julia: Well, I already said: Consciousness is integrated information. But what this means, and in particular how he got it, also answers your question about function oversupply. Tononi has not revealed publicly the steps and missteps that led him to his answer. The way he describes it now is in terms of what he calls axioms. He has five. Axiom 1: Consciousness exists (contra Dennett). Axiom 2 is also kind of obvious: consciousness has parts – sight, sound, touch, etc. Axiom 3: Consciousness is informative, where he is using ‘information’ in the technical sense of ‘a reduction in uncertainty.’ Out of all possible subjective experiences that you are capable of having, you are having one particular experience now. That is an enormous reduction in uncertainty – out of a bazillion possibilities, just this particular one. One can in principle use information theory to quantify the amount of reduction, and hence the amount of information or, equivalently in IIT, the amount of consciousness. Tononi labels the informational amount of consciousness by the Greek letter phi. Axiom 4: This information is integrated, by which he means that the whole conscious experience is perceived as being just one thing, the total experience itself. Even though it has modalities and these have structure, it is just one experience. Axiom 5, the last: Consciousness is exclusive.
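Axiom 3's sense of information, a reduction in uncertainty, can be made concrete with a toy calculation. This is just elementary information theory, not Tononi's phi formalism: selecting one state out of N equally likely possibilities reduces uncertainty by log2(N) bits.

```python
import math

def information_bits(n_possible_states: int) -> float:
    """Reduction in uncertainty, in bits, when one state is selected
    out of n equally likely possibilities (Shannon's measure)."""
    return math.log2(n_possible_states)

# A photodiode distinguishing light from dark resolves 2 states: 1 bit.
print(information_bits(2))      # 1.0

# A system capable of 2**20 distinct states resolves far more when it
# settles into just one of them:
print(information_bits(2**20))  # 20.0
```

Phi itself is not this simple quantity; it additionally requires that the information be integrated across the system's parts, which is part of what makes it so hard to compute for a brain.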
Dan: Wait a minute! You just said that consciousness is integrated. That means it’s inclusive. How, then, can it also be exclusive?
Julia: Right, and you’re not alone in asking this question. Most people have trouble comprehending the exclusion axiom. But this is the important one, the one that answers your question. What Tononi should have said is that consciousness is selectively exclusive. Excluded are all things you can throw away before you begin to reduce the quantitative measure of consciousness as integrated information, measured by phi. For example, if a bunch of neurons is off having its own private conversation, they are not participating in the integration; in fact, they are reducing the amount of integration and should be excluded from the tally. Likewise, you can throw away the retina and the optic nerve without reducing phi. This is why, according to IIT, we can dream in full-color, motion-picture vision with our eyes shut. Similarly, the auditory sensory apparatus can be excluded: Beethoven composed the Ode to Joy stone deaf. For Tononi, the exclusion axiom gets rid of an oversupply of information, and for us an oversupply of functions, for in the context of consciousness function and information are the same thing: it is functions that inform the rest of the neurological network to perform acts to maintain homeostatic balance, to avoid being killed, to find a mate, and so on.
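The logic of exclusion, keep only the subset of elements whose mutual interactions maximize integration, can be sketched as a brute-force search. The connectivity numbers and the scoring function below are invented purely for illustration; real IIT defines phi very differently and its computation is vastly more expensive.

```python
from itertools import combinations

# Toy network: a densely coupled core (a, b, c), a feed-forward input
# ('retina'), and a private side conversation (x, y). Invented weights.
links = {
    ("a", "b"): 3, ("b", "c"): 3, ("a", "c"): 2,  # coupled core
    ("retina", "a"): 1,                            # feed-forward input
    ("x", "y"): 2,                                 # side conversation
}
elements = ["a", "b", "c", "retina", "x", "y"]

def integration_score(subset):
    """Toy stand-in for phi: link weight internal to the subset,
    penalized per element so loosely attached parts drag it down."""
    internal = sum(w for (i, j), w in links.items()
                   if i in subset and j in subset)
    return internal - 1.5 * len(subset)

# Exclusion: of all candidate subsets, keep the maximally integrated one.
candidates = [frozenset(s)
              for r in range(1, len(elements) + 1)
              for s in combinations(elements, r)]
best = max(candidates, key=integration_score)
print(sorted(best))  # ['a', 'b', 'c'] -- retina and the side chat drop out
```

The surviving core plays the role of what the dialogue later calls the maximally irreducible conceptual structure: the retina-like input and the private side conversation lower the integration score and are excluded.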
Dan: It sounds like all that Tononi has done is to redescribe the problem in terms of axioms and call it the solution.
Julia: Ah, but brilliantly redescribed with great insight, which probably took months or years of trial and error. His first four axioms correspond, in the Max Planck analogy, to finding the shape of the spectrum of heat radiation. The step corresponding to Planck’s energy-quanta breakthrough is precisely his exclusion axiom, which gives a mathematical procedure to determine the set of neural information centers whose mutual interactions generate the maximum integrated information, by excluding redundancies and side conversations not directly contributing to that maximum. This set, which Tononi calls the maximally irreducible conceptual structure, comprises a fraction of all neural centers, as you would expect considering that most of the brain’s activity is subconscious.
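[Aside for the technically minded: the logical shape of the exclusion procedure – search over subsets of the network, score each by how integrated it is, and keep only the maximizer – can be sketched in a few lines. The network, the scoring function, and its size penalty below are all invented for illustration; they are deliberately crude stand-ins, not IIT’s actual phi, which is computed over partitions of system states.]

```python
from itertools import combinations

# Toy network: five "neural centers". Centers 0-2 are densely
# interconnected; 3 and 4 (think retina, optic nerve) only feed in.
edges = {(0, 1), (1, 2), (0, 2), (3, 0), (4, 1)}

def toy_integration(subset):
    """Stand-in integration score: edges internal to the subset,
    penalized by subset size (the 1.5 exponent is an arbitrary toy
    choice to break ties). NOT IIT's phi."""
    internal = sum(1 for (a, b) in edges if a in subset and b in subset)
    return internal / len(subset) ** 1.5 if subset else 0.0

nodes = range(5)
best = max(
    (frozenset(s) for r in range(1, 6) for s in combinations(nodes, r)),
    key=toy_integration,
)
print(sorted(best))  # [0, 1, 2] – the densely coupled core, feed-ins excluded
```

The feed-in centers get excluded not by fiat but because including them lowers the score – a cartoon of why, in IIT, the retina can be thrown away without reducing phi.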
Dan: Brilliant or not, it’s still a redescription. I’m not convinced we have witnessed any progress.
Julia: If you’re not impressed, consider the competition. The only other consciousness theory that directly addresses neuronal architecture and the partition into conscious and subconscious parts is one by cognitive neuroscientist Bernard Baars called the global workspace theory, GWT. Let me read what it entails from a description that I just happen to have handy: Quote – GWT contents correspond to what we are conscious of, and are broadcast to a multitude of unconscious cognitive brain processes. Other unconscious processes, operating in parallel with limited communication between them, can form coalitions which can act as input processes to the global workspace. Since globally broadcast messages can evoke actions in receiving processes throughout the brain, the global workspace may be used to exercise executive control to perform voluntary actions. Individual as well as allied processes compete for access to the global workspace, striving to disseminate their messages to all other processes in an effort to recruit more cohorts and thereby increase the likelihood of achieving their goals – unquote. GWT appeals to programmers wanting to simulate consciousness because, as a programmer, you are free to pick a privileged set of ‘workspace’ neurons interacting with a behind-the-scenes set of neurons talking to each other and feeding information to the global workspace as needed to carry out goals prescribed by the programmer. GWT is good for computer simulation but not good for determining which neurons belong to the conscious part, the part we are interested in. This is why Tononi’s exclusion axiom is so fundamental: you are not free to design the core consciousness neuron set; there is a law that determines it.
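[Aside for the technically minded: the compete-then-broadcast architecture in the quoted description is easy to simulate, which is exactly Julia’s point about its appeal to programmers. The process names, activation values, and winner-take-all rule below are invented for illustration; real GWT models are richer.]

```python
class Process:
    """Toy unconscious process competing for the global workspace."""
    def __init__(self, name, activation):
        self.name = name
        self.activation = activation
        self.inbox = []  # broadcasts received from the workspace

    def receive(self, message):
        self.inbox.append(message)

def workspace_cycle(processes):
    """One GWT cycle: the most activated process wins access to the
    workspace, and its message is broadcast to all other processes."""
    winner = max(processes, key=lambda p: p.activation)
    for p in processes:
        if p is not winner:
            p.receive((winner.name, "content"))
    return winner

procs = [Process("vision", 0.9), Process("audition", 0.4), Process("memory", 0.6)]
winner = workspace_cycle(procs)
print(winner.name)  # vision
```

Notice that the programmer simply decrees which processes exist and how they win access – nothing in the loop makes the winning content conscious, which is the objection Dan raises next.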
Dan: I don’t see consciousness arising naturally in this global workspace theory.
Julia: Precisely, you got my point. The crucial difference between GWT and IIT is that consciousness does not arise naturally in GWT. It is simply conferred by fiat on a privileged neuronal set. The programmer simulating consciousness a la GWT plays God. By contrast, in IIT consciousness is not programmer prescribable; it is determined by Tononi’s ‘maximally irreducible conceptual structure,’ which is something that exists in nature. It is the seat of consciousness in real brains.
Dan: Very nice, Julia. Tononi rescues feeling/function dual-aspect monism – let’s call it FeeFDAM – from the oversupply problem. But what I find interesting in this FeeFDAM-IIT marriage is that consciousness should not be considered the pinnacle of a long evolutionary process but rather an evolutionary adaptation by which the feelings of biological functions come to feel themselves through IIT’s integration function. It is as if consciousness is the feeling side of the biological function of integrating information. It reminds me of Douglas Hofstadter’s book “I Am a Strange Loop,” where by “strange loop” he refers to a self-referential process. Integrating information is a function, and since all functions have feelings, in this case the feeling is the feeling of consciousness. Consciousness is the feeling of integrated information – it is how integrated information feels, which is something MIT physicist Max Tegmark wrote in his 2014 book Our Mathematical Universe.
Julia: Bravo! Brilliant analysis. Just what one expects from an analytical philosopher. And now we are ready to return to the problem that started this dialog: Robot and artificial intelligence manufacturers are asking us neuroscientists to tell them how to build feelings into their products.
Dan: Why must robots have feelings? They need only act as if they do. You know, pass the feeling Turing test.
Julia: Steven Pinker gives an answer to this question in his book How the Mind Works. He writes “Most artificial intelligence researchers believe that freely behaving robots…will have to be programmed with something like emotions merely for them to know at every moment what to do next.” AI pioneer Marvin Minsky says something similar in his The Society of Mind: “The question is not whether intelligent machines can have any emotions, but whether machines can be intelligent without emotions.”
Dan: Pinker and Minsky are probably thinking that an autonomous agent needs a feeling like a will, or a drive, or an urge to set a goal for action; an ‘I want’ or ‘I want to’ feeling that causes the agent’s action-controlling software to devise a set of instructions to make the agent’s hardware move so as to accomplish the ‘desired end.’ That the desire, which is a feeling, is essential here is a modern application of David Hume’s thesis “Reason is, and ought only to be, the slave of the passions.” Now I understand the robot/AI manufacturers’ point. But how DO you give a robot a will?
Julia: The FeeFDAM theory, as you call it – I prefer pansynaisthimism – has a radical answer: the function of moving some part of the body has as its feeling the will to do so. It happens automatically; no special software is required. It took me a long time to accept this answer. The clincher for me was an experiment on an epileptic patient. Electrically stimulating a specific part of the motor area of her brain caused an irrepressible urge to grasp some object – stimulating the motor area, which controls movement, induced the will to move! For me this implies that the Boston Dynamics robot that walks and does backflips already has the will to do these things. Since according to pansynaisthimism every function has a feeling, and doing backflips is a function, the obvious feeling for this function is the will to do it.
Dan: Really wild! This sounds like a radical extension to non-biological agents of the James-Lange theory of emotions – that feeling follows a physiological event rather than being the cause of it, like hairs “standing on end” causing fear rather than the reverse. Only in your case feeling and physiological event are simultaneous, two aspects of one thing. You’re not actually going to expose this crazy idea to the public, are you? How can you test it?
Julia: That’s why I must expose it to the public, to try to get someone interested in testing it, which takes more resources than I have. We can never experience what a robot feels, of course. The only laboratory we have to study feeling phenomena is ourselves, our feeling selves. It was the quest to answer the puzzle “How does nature fit our feelings to the functions they subserve?” that led to the pansynaisthimism idea. That new feelings appear when new functions are created, like sex in the Precambrian and addiction in the present era, seems to call for such a theory. The only test I can imagine is to create new biological functions in us to see whether new feelings fitted to these functions appear.
Dan: And how do you propose to create new biological functions?
Julia: Stanford neuroscientist David Eagleman has already done it, as he describes in his 2015 TED Talk “Can we create new senses for humans?” He points out that sensory substitution is already a well-established method of dealing with blindness and deafness. Braille is an example – ‘seeing’ with the fingertips. Modern technology allows sensory signals to be transduced into tactile signals, for example small vibrating patches that, when sewn as separate units into the back of a vest, allow the deaf to ‘hear’ and learn to understand voices. This is not hearing as you and I know it but the understanding of spoken words directly through a pattern of vibrations on one’s back. Eagleman has built and tested such a vest. A new feeling has been created – ersatz hearing through distributed vibrations on the back – that fits the intended function, speech understanding by the deaf. Eagleman also describes a device called the BrainPort, a little electrogrid that sits on your tongue and transduces a video feed into electrotactile signals. Blind people get so good at using this that they can throw a ball into a basket. A new sense has been created – ersatz sight through the tongue – that serves an intended function.
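[Aside for the technically minded: the core of such a transduction is just a mapping from one signal space to another, for instance from audio frequency-band energies to vibration-motor intensities. The function below is a hypothetical sketch of that mapping, not Eagleman’s actual firmware; real devices also compress dynamic range and update the pattern many times per second.]

```python
def bands_to_motors(band_energies, num_motors):
    """Toy transduction for a vibrotactile vest: map audio frequency-band
    energies onto vibration intensities (0-255) for a row of motors,
    normalized so the loudest band drives its motor at full strength."""
    assert len(band_energies) == num_motors, "one band per motor in this toy"
    peak = max(band_energies) or 1.0  # avoid dividing by zero on silence
    return [round(255 * e / peak) for e in band_energies]

print(bands_to_motors([0.1, 0.5, 1.0, 0.2], 4))  # [26, 128, 255, 51]
```

Whatever the mapping, the claim at issue is the same: the brain learns to read the spatial vibration pattern as meaningful, and – on the pansynaisthimism view – a fitted feeling comes along with the new function.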
Dan: Do you think Eagleman would buy your pansynaisthimism idea?
Julia: I don’t know. Consciousness is something he does not discuss. But he might have the laboratory resources to perform what I take to be the ultimate test. Imagine a device that transduces your voice into rapid pulses of very high frequency sound projected in the direction you’re facing. Add to this a sensor that picks up the echoes of these pulses and notes their strength and delay since transmission. Feed this information into Eagleman’s vibrating vest or a BrainPort on the tongue, and see whether you have experimentally answered Thomas Nagel’s famous question “What is it like to be a bat?” The far-fetched hope is that if the experiment succeeds, it might bring a diversity of efforts to study the pansynaisthimism idea.
Dan: Sounds wild, but any attempt to understand consciousness that doesn’t sound wild on first hearing can be ignored instantly. All sane, reasonable ideas have been exhausted. What I like about your idea is the large number of philosophical buttons it pushes, starting, as I mentioned, with Spinoza, who invented dual-aspect monism as an alternative to Descartes’ mind-body dualism, though Spinoza called it something else. I already mentioned David Hume’s agreeing with you about the relative roles of feelings and reason. And I brought in William James in connection with the James-Lange theory of emotions, and said he was also a panpsychist who spotted its greatest challenge, the combination problem, which your pansynaisthimism solves. Your idea that consciousness is the feeling of integrated information, that is, a self-reflexive feeling-of-feeling reached through evolution, reminds me of Georg Wilhelm Friedrich Hegel’s Absolute Spirit finally achieving self-knowledge through a long historical dialectical process. And of course, Hegel’s dialectical process makes one think of Alfred North Whitehead’s process philosophy. Whitehead moved the philosophy of reality from a static ontology to a dynamic one, from Parmenides’ Being to Heraclitus’ Becoming, and you have moved the ontology of consciousness from static panpsychism to dynamic pansynaisthimism. Finally, you have vindicated Arthur Schopenhauer’s “World as Will and Representation.” The brain creates the world we experience as a representation – all philosophers and neuroscientists agree with this – but the will is now seen as the feeling of the dynamics of the universe as expressed by physical law – Kant’s Thing-in-Itself revealed as feeling. If nothing else, your pansynaisthimism achieves an interesting, mixed marriage of the metaphysics of being with the science of neurology.