The Embodiment Imperative for Post-Biological Intelligence
The View from Oregon – 343: Friday 30 May 2025
Although I enjoyed working through last week’s discussion of intelligence and embodiment, I never got to the point I had set out to discuss when I explicitly introduced what I call the embodiment problem. My digression away from my intended discussion was arguably a necessary propaedeutic to discussing the problem of the optimal embodiment for intelligence, because the hypothesis of unique constraint is relevant to the embodiment problem. Applied to intelligence, the hypothesis of unique constraint means that intelligence is always one and the same thing whenever and wherever it appears, which in turn means that the particular embodiment of intelligence, including the evolutionary (phylogenetic) and individual (ontogenetic) history of that embodiment, is indifferent to the uniquely constrained intelligence. I reject the hypothesis of unique constraint, but even so intelligence could still be tightly constrained by its intrinsic nature, and it is at least loosely constrained by its intrinsic nature.
If we suppose that, whenever intelligence appears in the universe, it appears as the result of some natural process (in a gesture to my emergent complexity pluralism, I observe that we need not limit natural processes to biological processes, though biological processes were the natural processes that led to the appearance of intelligence on Earth), it follows that intelligence is always embodied, because intelligence emerges from (or supervenes upon) a natural process such as life. We could call this the imperative of embodiment. (In a scenario in which intelligence opts for a virtual existence lived in a virtual environment, there still must be infrastructure in the actual world to run and maintain the computers on which the virtual environment runs. Embodiment hasn’t been avoided, only shifted to a non-biological, or, if you prefer, a post-biological, form.) I can’t think of any naturalistic alternative to this, though that may be due to the infirmity of my imagination; there are non-naturalistic alternatives, but I will leave these aside. In a naturalistic framework, intelligence is always embodied, and therefore the hypothesis of unique constraint (as also tightly constrained and loosely constrained intelligence) is always relevant as a regulative principle, even when it doesn’t apply.
If we reject the hypothesis of unique constraint (as I do) and opt for some weaker formulation of constraint, then the form of embodiment in which intelligence appears has at least some bearing on the intelligence so embodied, while intelligence always retains some degree of autonomy from its embodiment. Here there is considerable room for variation: an intelligence might be tightly constrained by its own nature while only loosely constrained by its embodiment, or only loosely constrained by its own nature while tightly constrained by its embodiment, and between these two poles there is considerable distance. An intelligence tightly (but not uniquely) constrained by its own nature would be something like an intelligence dominated by what intelligence researchers call general intelligence, or g, while an intelligence tightly constrained by its embodiment would be something like what we find in contemporary philosophies of mind that emphasize embodiment over cognition in the abstract.
In the preceding formulations I mentioned intelligence being constrained by its embodiment, and this points to something I overlooked last week. In formulating the hypothesis of unique constraint I was thinking only of intelligence being constrained by its own intrinsic nature, but I could just as well have looked at it from the other side and thought first of intelligence being constrained by its embodiment. Recognizing this, I can reformulate the hypothesis of unique constraint as the hypothesis of unique reflexive constraint (because I’m concerned with the constraint that intelligence reflexively exercises upon itself) and then contrast it with the hypothesis of (unique) irreflexive constraint. I have formulated this right off the bat in its greatest generality, and presenting an idea in its full generality, when the idea comes from a particular case more easily recognized in its specificity, can be confusing. To dispel the confusion, I here explicitly note the generality of my formulation; a formulation specific to the present context would be something like the hypothesis of (unique) embodiment constraint on intelligence. Constraint due to embodiment is one example of a constraint on intelligence that does not come from intelligence itself; the entire class of such constraints are non-intelligence constraints on intelligence, among which is the embodiment of intelligence. Embodiment is thus an irreflexive constraint on intelligence, but not the only such constraint.
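Since the reflexive/irreflexive terminology borrows from the logic of relations, the distinction can also be put schematically. What follows is only an illustrative sketch in notation of my own devising, not anything standard: read C(x, y) as “x constrains y,” with I an intelligence and E its embodiment.

```latex
% Illustrative only: an ad hoc schematic of the constraint distinctions.
% Read C(x, y) as "x constrains y"; I is an intelligence, E its embodiment.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  \text{reflexive constraint:}        &\quad C(I, I)\\
  \text{irreflexive constraint:}      &\quad C(x, I)\ \text{for some}\ x \neq I,\ \text{e.g.}\ C(E, I)\\
  \text{unique reflexive constraint:} &\quad C(I, I),\ \text{with}\ I\ \text{one and the same wherever it appears}
\end{align*}
\end{document}
```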
The hypothesis of unique constraint was introduced in a context in which it was intelligence that reflexively constrained itself, but the idea formulated in full generality applies to any emergent complexity and not only to intelligence. Indeed, I illustrated the idea with reference to life, since I thought that would be initially more tractable than intelligence. Analogously (or, I could say, symmetrically), the hypothesis of irreflexive constraint applies to any emergent complexity that can be constrained by anything that is not its own nature.
With that introductory clarification, I can move on to what I had intended to discuss. An intelligence that attains a given level of technological proficiency will eventually be faced with the question of whether it would prefer to embody itself in some form other than its embodiment of origin. In our case (though we haven’t yet reached the requisite level of technological proficiency, we can foresee the possibility), this would mean that, whether because of dissatisfaction with the imperfections of human embodiment, or because of ongoing changes in the environment rendering the planet incapable of supporting human life (but not necessarily incapable of supporting other forms of life), we might choose to embody intelligence in some form other than human beings. Thus we come to the problem of the optimal embodiment for intelligence, which is where I began last week’s newsletter.
For some years I’ve been thinking about the problem for emergent complexity presented by the idea that human civilization will be followed by some technological regime, sometimes called post-biological, which implies that an original biological embodiment has been replaced by a technological embodiment. There is more than one pathway by which this could come about, but I won’t attempt an account of these possibilities here, though it is an intrinsically interesting question. (I may return to it in the future.) If the biological embodiment of intelligence on Earth is undercut, such that terrestrial intelligence is followed not by further evolved biology and intelligence but by intelligence inhering in a novel, non-biological substrate, then the sequence of emergent complexities that had issued in intelligence becomes irrelevant. Intelligence in a post-biological embodiment could be indifferent to the biosphere, since it would no longer be embodied in a biological body that requires the resources of the biosphere to remain alive. This would represent a novel discontinuity in emergent complexity. Again, as with non-embodied intelligence (noted above), I can’t think of any naturalistic example of this prior to an intelligence engineering this state of affairs for itself. I don’t (yet) have any good terminology to describe the possibility. I could call it something like post-biological emergent discontinuity, which covers the idea but isn’t a very elegant formulation. In any case, this strikes me as an important development, if it’s possible, which we don’t yet know to be the case.
The re-embodiment of intelligence in some form other than the embodiment of origin need not be perfect, i.e., it need not be a perfect fit for the naturally evolved intelligence; it need only preserve the intelligence to some degree of continuity. What degree of loss of continuity would be acceptable would be relative to the purpose of the re-embodiment, which I discussed last week. With a re-embodiment of intelligence we can imagine a certain sense of alienation attending the change, with the intelligence not yet feeling fully at home in its new embodiment. Adjustments and refinements could be made, and the intelligence might also simply forget the feeling of its previous embodiment and thus cease to notice the initial alienation of re-embodiment. Further, we can imagine a far distant future in which successive re-embodiments have distanced the intelligence from its origins so many times over that any sense of alienation initially accompanying re-embodiment would be overcome. Human intelligence might forget that it ever was human. These successive re-embodiments might constitute a sequence of directed evolution toward a particular goal of embodiment engineered by the intelligence itself. Through them, intelligence might converge upon some embodiment that suits it even better than its embodiment of origin, which would be the optimal embodiment I previously postulated.
With each re-embodiment, something could be lost and something could be gained. The compromise might be chosen in order to arrive at the optimal embodiment of intelligence, but it could also be chosen for more practical reasons. Suppose an intelligence chooses an embodiment with superior survival properties, but one that entails a compromise with regard to intellect or some other desirable attribute: this is a gain in which the essence of the being (as intellect) is compromised, and not the kind of gain an intellect would choose except under duress. However, given the imperative of embodiment observed above, some embodiment must be accepted as the price of survival in the actual world. In this way an intelligence might have to accept a suboptimal embodiment even if it can conceptualize an optimal embodiment (for itself as an intelligence, or for some purpose it has set itself). Even a compromised embodiment, however, could embody an intelligence far beyond the capability of human-embodied intelligence. In newsletter 340 I discussed Dyson’s eternal intelligence in relation to the embodiment problem, and we could regard this embodiment of intelligence as a compromise between optimal embodiment and survival under the conditions that will characterize the distant future of the universe.