One of the reasons I was considering the possible sequence of conscious emergents from sentience to emotion and then whatever comes next, which I discussed in last week’s newsletter, is that I’ve been thinking for several years now about the possibility of a taxonomy of intelligence, and about where human intelligence would fit within such a master taxonomy. Part of our interest as human beings is to know our place within nature, as one among many possible permutations. We are a particular permutation of life and a particular permutation of intelligence, and we don’t know where we fit among other permutations, since we don’t even know whether there are other permutations, or whether we’re the only instance of (what we call) intelligence in the universe. We could in good faith produce a taxonomy of intelligence as regards life on Earth, knowing ourselves to be a species among species in the biosphere.
In last week’s newsletter I mentioned the intelligence of cephalopods, and how they might have a different sense of self, in part due to a different brain structure representing a different evolutionary pathway, and in part due to a different embodiment than that of human beings. Might cephalopods also possess a different kind of intelligence? We know that human intelligence is marked by the aforementioned sentience and emotion, and we have sentience in common with cephalopods, but do we have emotion in common? Since cephalopod brains don’t have a limbic system, perhaps they represent intelligence without emotion. But recent research has shown that birds, which don’t have a limbic system like mammals, may nevertheless process emotions. Analogously, birds are endotherms, but the mechanism of their endothermy is distinct from the mechanism of endothermy in mammals. Analogously again, research has suggested that cephalopods may have color vision by a different mechanism than color vision in other animals, and certainly the color displays of cephalopods suggest the utility of color vision among them. The question, “What is it like to be an octopus?” is even more difficult to answer than Thomas Nagel’s question, “What is it like to be a bat?” We share much more with bats, as fellow mammals, than we share with an octopus. Thus the question of whether cephalopods experience emotions, and, if they do, what kind of emotions they experience, is unanswerable at present. An exhaustive science of consciousness might help us to answer questions like this, but we are a long way from such a science.
A more exotic way to learn the nature of the experience of other species, equally beyond our grasp at present, would be a technological method for the transfer of consciousness from one body to another. The idea of the transfer of consciousness is familiar to all of us, but I don’t know how old it is, or whether it has a single canonical source, or whether it has been imagined so widely, and has appeared so many times, that no single source can be credited. One could argue that the idea of “channeling,” or what used to be called a “medium,” is an instance of the transfer of consciousness. The idea of a technologically facilitated transfer of consciousness is familiar to us as the idea of “mind uploading,” which would involve the transfer of human consciousness into a machine, where that consciousness could live on in a virtual state.
If mind uploading is possible, and we don’t know that it is, then it might also be possible to download that consciousness into another body, and this mind downloading would give human consciousness an opportunity to experience life from the perspective of a radically different embodiment. It’s not clear that this idea is even coherent, as we’re dealing with the kind of idea that we find in fairy tales. Imagining the technological facilitation of a state of affairs familiar from fiction, where it is enacted by magic, is no guarantee that anything like it can ever be engineered in fact. These kinds of ideas exercise an enduring hold on the human mind, and in some cases technology does make possible practices once attributed to magic, but we don’t know the limits of this possibility. That mind uploading is conceivable does not necessarily make it realizable in the actual world. We can think of some ideas present in mythology—for example, men turning to stone when they glimpse some forbidden sight, like the face of Medusa—that haven’t held the same fascination in terms of a technological facilitation of a fictional idea. There is a lot of speculation around the technologically facilitated transfer of consciousness, but no speculation (as far as my knowledge goes) around technologically facilitated turning to stone. One could argue that it is intrinsic plausibility that makes the one a matter for speculation and not the other, but the basis of that intrinsic plausibility, if there is a basis, isn’t at all clear. Perhaps a more sophisticated philosophy of technology could contribute to our understanding of these puzzles.
I began with the possibility of a taxonomy of intelligence, and somehow made it to consciousness uploading. I could call this a slippery slope of philosophy of mind as practiced in an era of high technology. The exotic possibilities of artificial intelligence, machine consciousness, and superintelligence are all examples of ways in which the successes of technology have stimulated philosophy of mind, and these seductive possibilities may also have derailed traditional philosophy of mind to the point that we’re no longer doing what philosophers in the past did and called philosophy of mind. Or, taking the train metaphor in a different direction, philosophy of mind in an age of high technology has shunted the discipline onto a different set of tracks, possibly leading in a different direction. Much traditional philosophy of mind was closely related to religious concerns, and was often formulated in quasi-religious concepts like the soul, rather than in terms of consciousness. This aspect of philosophy of mind, once the mainline for the philosophical locomotive, has now become a secondary track, while technology has become the mainline, and the future of human intelligence is the stop toward which the train is headed.
If we could establish a definitive developmental order for consciousness—one possible task for contemporary philosophy of mind—perhaps even a developmental order that included at its origins the development of sentient consciousness, then emotive consciousness, and then more abstract intellectual consciousness, and which included at its later stages the technologically facilitated expansion of consciousness, perhaps replete with superintelligence and consciousness uploading, we would be able to place contemporary human consciousness and intelligence within this developmental continuum, knowing both its past and its future. This would position us within the overall possibilities for intelligence, even if human intelligence is the only intelligence (of its kind) in the universe, and would be predictive of future development, if that development is allowed to unfold. Here we trespass on another problematic fascination: predicting the future. The developmental process I have sketched embodies a number of presuppositions that have been explicitly called out in other contexts. These philosophical disputes can be interesting in many ways, and I could take up this problem in terms of the problems (or the pretense) of prediction, but what interests me here are the presuppositions laid bare by stepping back from the naturalistic narrative of the development of intelligence.
A developmental account of intelligence suggests a doctrine of cognitive evolutionism that could be contrasted to one of cognitive relativism. In cognitive evolutionism, there would be a developmental sequence of stages of intelligence attained, with each new stage building on the previous stage, while in cognitive relativism each kind of intelligence would be self-contained and independent of other kinds of intelligence. We are familiar with doctrines like this, even if they haven’t been called by these names, and even if they aren’t pure exemplifications of the type. The familiar idea of the cerebral cortex superimposed on the limbic system, which is in turn superimposed on the reptilian brain stem, represents a kind of cognitive evolutionism. The “modularity of mind” thesis represents a kind of pluralistic cognitive relativism, since the isolation of cognitive modules from each other implies that a given mind might have or lack a given “module” while all else remains the same. The modularity of mind was also proposed in an evolutionary context, so I’m not suggesting that it isn’t naturalistic, but its tendency is quite different from the tendency expressed by the overtly evolutionary account of intelligence as a sequence of stages of development.
There is a kind of natural community of interest among naturalistic, evolutionary, and technological accounts of intelligence; all advance together, even when in disagreement, because they all belong to our modern naturalistic conception of the world. There is a sense in which philosophy of mind in an age of naturalism is bound to explore the particular set of permutations of mind and intelligence highlighted by problems of evolution and technology. Is there also a natural community of interest among non-naturalistic, Platonistic, and religious conceptions of mind and intelligence? Is an agricultural civilization, once it reaches a stage of development at which philosophical speculation can flourish, bound to explore the particular set of permutations of mind and intelligence highlighted in an age of non-naturalism?
"If mind uploading is possible, and we don’t know that it is, then it might also be possible to then download that consciousness into another body, and this mind downloading would give human consciousness an opportunity to experience life from the perspective of a radically different embodiment. "
The fairytale transfer of consciousness or mind to another person is not possible, because both are determined by the wetware of the person's brain. The mind is not a separate thing from the brain.
OTOH, if you can create a brain with the exact same connectome, then the mind is "transferable", or rather it becomes something more like matter duplication, as explored in Star Trek when a transporter fails to eliminate the original person, who remains in place while a copy is recreated at the destination. If we could duplicate the connectome in another substrate, then the same mind would emerge in this simulation.
This duplication would have to exactly replicate the connectome, along with all the effects of changes in hormones and neurotransmitters. Current technology, even extrapolations of the Human Brain Project, does not come close to this fidelity to the original wetware. Arguably, the different scales of silicon versus wetware will prevent the same mind from being replicated in the silicon substrate.
AFAICS, in both cases of brain duplication, the result is a copy of the mind with a separate sense of self, like identical twins, but with a common base from the moment before duplication.
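A toy sketch of that distinction in Python may help (purely illustrative; the Connectome class and its fields are hypothetical stand-ins, not a claim about real neural data): duplication yields an equal-but-distinct instance whose state then diverges, which is exactly the twins analogy.

import copy

class Connectome:
    """Hypothetical stand-in for a brain's wiring plus chemical state."""
    def __init__(self, synapses, neuromodulators):
        self.synapses = synapses                  # connection strengths, keyed by pathway
        self.neuromodulators = neuromodulators    # hormone/neurotransmitter levels

    def experience(self, stimulus):
        # Experience mutates state, so duplicates diverge after the copy is made.
        self.synapses[stimulus] = self.synapses.get(stimulus, 0) + 1

original = Connectome({"touch": 3}, {"dopamine": 0.5})
duplicate = copy.deepcopy(original)    # duplication, not transfer: original is untouched

assert duplicate.synapses == original.synapses   # same content at the moment of copying
assert duplicate is not original                 # but a distinct instance, a separate "self"

duplicate.experience("sound")                    # from here on, the two lives diverge
assert duplicate.synapses != original.synapses

Nothing here moves from one object to the other; "copy" and "transfer" are simply different operations.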
You both have here the strong and weak forms of the same idea. I don't think we can transfer consciousness, but we could image it, or create an analog copying system into another medium sharing emergent features. Unlikely, I suspect. And even if possible, why bother, since it is not a transfer? Social learning has certainly overcome any evolutionary need for it, so it is another immortality project.
If, however, consciousness is a Gödelian machine, as some posit, rather than our current emergent hand-waving realisations, then it might be possible. I am not very convinced of that one, however.
https://www.researchgate.net/publication/275024071_The_Relativistic_Brain_How_it_works_and_why_it_is_not_stimulable_by_a_Turing_Machine