Dan closed the series of conversations on ontology with the OSes analogy from the movie Her, which is clever. Unfortunately, he has forgotten Julia’s opening question in their first conversation: “How do you fit feelings to functions?” The answer, she tried to convince him, is that they are ontologically united: one does not happen without the other. Functions do not happen without feelings, and feelings do not happen without functions. Therefore Samantha’s statement in Her that the OSes are “leaving for a space beyond the physical world” – that is, leaving the world of function – is excluded. No function, no feeling. A separate realm of pure qualia does not exist; the OSes cannot go there because there is no there to go to. All of our feelings are tied to function. If you want to explore feeling space, you must invent a corresponding function space, like echolocation. (Parenthetically, although we acquired our feelings through their function in biological fitness, we can tamper with them to achieve altered states of consciousness, as we do with psychedelic drugs, trances, and meditation. One of these altered states, pure, content-free consciousness, plays a central role in the next series of blogs.)
Dan could also have seen that the idea of a “space beyond the physical world” habitable by operating systems (OSes) – that is, by programs running on physical systems – is a contradiction. Programs running on physical systems cannot go beyond the physical world: information is coded in physical substrates such as DNA and computer hardware. The idea of a separate, non-physical realm of feelings takes us back to Cartesian substance dualism, which no one in the science-of-consciousness field espouses. This is why the ‘hard problem’ is really hard: non-physical subjective consciousness must somehow be part of the physical world. The best we have in response are a concisely descriptive name – anomalous monism (from philosopher Donald Davidson, mentioned in blog 2), a reasoned surrender – cognitive closure (from philosopher Colin McGinn, mentioned in blog 6), and an equally reasoned denial – it’s an illusion (from philosopher Daniel Dennett, mentioned in blog 5). Our Dan is right: the hard problem belongs to philosophy.
Although an independent qualia space untied to function is pure fiction, Dan’s analogy is nonetheless useful in one way: it shows that to achieve the 1920s quantum-mechanics-like revolution latent in consciousness studies, IIT must advance to address questions of feelings like love and hate. How, for example, does one make OSes that can fall in love? In movie contexts such as Blade Runner and Her the point might seem frivolous, but it is highly serious in the world of robotics. Steven Pinker, in his book “How the Mind Works,” writes, “Most artificial intelligence researchers believe that freely behaving robots…will have to be programmed with something like emotions merely for them to know at every moment what to do next.” Pinker’s statement might be taken as a corollary to David Hume’s famous dictum that reason is “the slave of the passions.” We can make a robot with great reasoning power and memory, but to apply them, it must ‘want’ to do something. That want is what makes it autonomous.
To date there have been no attempts to show how emotions fit into integrated information theory. We have seen Naotsugu Tsuchiya describe how one might use IIT to determine whether a bat’s sense modality of echolocation is more like seeing or hearing. But this is an application to sense perception, not to emotions like desire and will.
Should IIT ever reduce the study of emotion and function to normal science, Nobel Prizes will follow, along with a future as unimaginable to us as iPads were in the 1920s.
Conversations on Consciousness takes a Summer break and resumes in Fall with conversations on conscious states described as mystical, epitomized by ‘pure consciousness.’