
## Metadata
- Author: [[Terrence W. Deacon]]
- Full Title: Incomplete Nature
- Category: #books
## Highlights
- Each of these sorts of phenomena—a function, reference, purpose, or value—is in some way incomplete. There is something not-there there. Without this “something” missing, they would just be plain and simple physical objects or events, lacking these otherwise curious attributes. Longing, desire, passion, appetite, mourning, loss, aspiration—all are based on an analogous intrinsic incompleteness, an integral without-ness. As I reflect on this odd state of things, I am struck by the fact that there is no single term that seems to refer to this elusive character of such things. So, at the risk of initiating this discussion with a clumsy neologism, I will refer to this as an absential feature, to denote phenomena whose existence is determined with respect to an essential absence. This could be a state of things not yet realized, a specific separate object of a representation, a general type of property that may or may not exist, an abstract quality, an experience, and so forth—just not that which is actually present. This paradoxical intrinsic quality of existing with respect to something missing, separate, and possibly nonexistent is irrelevant when it comes to inanimate things, but it is a defining property of life and mind. A complete theory of the world that includes us, and our experience of the world, must make sense of the way that we are shaped by and emerge from such specific absences. What is absent matters, and yet our current understanding of the physical universe suggests that it should not. A causal role for absence seems to be absent from the natural sciences. ([Location 219](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=219))
- What zero shares in common with living and mental phenomena is that these natural processes also each owe their most fundamental character to what is specifically not present. They are also, in effect, the physical tokens of this absence. Functions and meanings are explicitly entangled with something that is not intrinsic to the artifacts or signs that constitute them. Experiences and values seem to inhere in physical relationships but are not there at the same time. This something-not-there permeates and organizes what is physically present in these phenomena. Its absent mode of existence, so to speak, is at most only a potentiality, a placeholder. Zero is the paradigm exemplar of such a placeholder. It marks the columnar position where the quantities 1 through 9 can potentially be inserted in the recursive pattern that is our common decimal notation (e.g., the tens, hundreds, thousands columns), but it itself does not signify a quantity. Analogously, the hemoglobin molecules in my blood are also placeholders for something they are not: oxygen. Hemoglobin is exquisitely shaped in the negative image of this molecule’s properties, like a mold in clay, and at the same time reflects the demands of the living system that gives rise to it. It only holds the oxygen molecule tightly enough to carry it through the circulation, where it gives it up to other tissues. It exists and exhibits these properties because it mediates a relationship between oxygen and the metabolism of an animal body. Similarly, a written word is also a placeholder. It is a pointer to a space in a network of meanings, each also pointing to one another and to potential features of the world. But a meaning is something virtual and potential. Though a meaning is more familiar to us than a hemoglobin molecule, the scientific account of concepts like function and meaning essentially lags centuries behind the sciences of these more tangible phenomena. ([Location 333](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=333))
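  A minimal sketch, not from the book, of the place-value point: each digit marks how many of its column's power of ten are present, and a 0 holds a column open without itself signifying any quantity.

  ```python
  # Zero as a columnar placeholder in decimal notation: it marks a column
  # where a quantity could be inserted, but itself signifies no quantity.

  def place_values(n: int) -> list[tuple[int, int]]:
      """Return (digit, power-of-ten) pairs for a non-negative integer."""
      digits = [int(d) for d in str(n)]
      powers = range(len(digits) - 1, -1, -1)
      return list(zip(digits, powers))

  print(place_values(4052))
  # [(4, 3), (0, 2), (5, 1), (2, 0)]  i.e. 4*1000 + 0*100 + 5*10 + 2*1
  # Drop the placeholder and the other digits shift columns: 452 != 4052.
  ```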
- If the most fundamental features of human experience are considered somehow illusory and irrelevant to the physical goings-on of the world, then we, along with our aspirations and values, are effectively rendered unreal as well. No wonder the all-pervasive success of the sciences in the last century has been paralleled by a rebirth of fundamentalist faith and a deep distrust of the secular determination of human values. The inability to integrate these many species of absence-based causality into our scientific methodologies has not just seriously handicapped us, it has effectively left a vast fraction of the world orphaned from theories that are presumed to apply to everything. The very care that has been necessary to systematically exclude these sorts of explanations from undermining our causal analyses of physical, chemical, and biological phenomena has also stymied our efforts to penetrate beyond the descriptive surface of the phenomena of life and mind. Indeed, what might be described as the two most challenging scientific mysteries of the age—explaining the origin of life and explaining the nature of conscious experience—both are held hostage by this presumed incompatibility. Recognizing this contemporary parallel to the unwitting self-imposed handicap that limited the mathematics of the Middle Ages is, I believe, a first step toward removing this impasse. It is time that we learned how to integrate the phenomena that define our very existence into the realm of the physical and biological sciences. ([Location 390](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=390))
- It took centuries and the lifetime efforts of some of the most brilliant minds in history to eventually tame the troublesome non-number: zero. But it wasn’t until the rules for operating with zero were finally precisely articulated that the way was cleared for the development of the physical sciences. Likewise, as long as we remain unable to explain how these curious relationships between what-is-not-there and what-is-there make a difference in the world, we will remain blind to the possibilities of a vast new realm of knowledge. I envision a time in the near future when these blinders will finally be removed, a door will open between our currently incompatible cultures of knowledge, the physical and the meaningful, and a house divided will become one. ([Location 402](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=402))
- Although living processes have components that are at least as precisely integrated in their function as any man-made machine, little else makes them like anything engineered. Whole organisms result not from bringing together disparate parts but from their parts’ differentiating from one another. Organisms are not built or assembled; they grow by the multiplication of cells, a process of division and differentiation from prior, less differentiated precursors. Both in development and in phylogeny, wholes precede parts, integration is intrinsic, and design occurs spontaneously. The machine metaphor is a misleading oversimplification. ([Location 821](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=821))
- We are taught that Galileo and Newton slew the Aristotelean homunculus of a prime mover, Darwin slew the homunculus of a divine watchmaker, Alan Turing slew the homunculus of disembodied thought, and that Watson and Crick slew the homunculus of the élan vital, the invisible essence of life. Still, the specter of homunculus assumptions casts its shadow over even the most technologically sophisticated and materialistically framed scientific enterprises. It waits at the door of the cosmological Big Bang. It lurks behind the biological concepts of information, signal, design, and function. And it bars access to the workings of consciousness. So the image of the homunculus also symbolizes the central problem of contemporary science and philosophy. It is an emblem for any abstract principle in a scientific or philosophical explanation that imports an unanalyzed attribution of information, sentience, reference, meaning, purpose, design, self, subjective experience, value, and so on—attributes often associated with mental states—into scientific explanations. I will call such theories homuncular insofar as these attributes are treated as primitives or unanalyzed “black boxes,” even if this usage is explicitly designated as a placeholder for an assumed, yet-to-be-articulated mechanism. The points in our theories where we must shift into homuncular terminology can serve as buoys marking the shoals where current theories founder, and where seemingly incompatible kinds of explanation must trade roles. Homunculi both indicate the misapplication of teleological principles where they don’t apply and offer clues to loci of causal phase transitions where simple physical accounts fail to capture the most significant features of living processes and mental events. ([Location 1198](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1198))
- Evolutionary biology is still a work in progress, and so one should not necessarily expect that it is sufficiently developed to account for every complicated phenomenon. But evolutionary biologists and intelligent design (ID) advocates treat this incomplete state of the art in radically different ways. From the point of view of ID, evolutionary theory suffers from a kind of incompleteness that amounts to an unsalvageable inadequacy. Although it is common for theoretical proponents of one scientific theory to prosecute their case by criticizing their major competitors, and thus by indirect implication add support to their own favored theory, there is something unusual about ID’s particular variant of that strategy. It is in effect a metacriticism of scientific theory in general, because it attempts to define a point at which the ban against homuncular explanations in science must be lifted. The Intelligent Designer is a permanently unopenable black box. The work of scientific explanation cannot, by assumption, penetrate beyond this to peer into its (His) mechanism and origins. So this argument is an implicit injunction to stop work when it comes to certain phenomena: to shift from an analytical logic of causes and effects to an ententional logic of reasons and purposes. ([Location 1264](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1264))
- People tend to be masters at believing incompatible things and acting from mutually exclusive motivations and points of view. Human cognition is fragmented, our concepts are often vague and fuzzy, and our use of logical inference seldom extends beyond the steps necessary to serve an immediate need. This provides an ample mental ecology in which incompatible ideas, emotions, and reasons can long co-exist, each in its own relatively isolated niche. ([Location 1283](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1283))
- The unfertilized egg begins as a relatively undifferentiated cell. The process of fertilization initiates a slight reorganization that polarizes the location of various molecular structures within that cell. Then this cell progressively divides until it becomes a ball of cells, each with slightly different molecular content depending on what part of the initial zygote gave rise to them. As they continue to divide, and depending on their relative locations and interactions with nearby cells, their cellular progeny progressively differentiate into the many thousands of cell types that characterize the different organs of the body. Although the features that distinguish these different, specialized cell types depend on gene expression, the geometry of the developing embryo plays a critical role in determining which genes in which cells will be expressed in which ways. ([Location 1380](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1380))
- Although James Clerk Maxwell, Ludwig Boltzmann, and Josiah Gibbs were collectively responsible for developing thermodynamic theory in the nineteenth century, well before the modern conceptions of computing and information were formulated, Lloyd writes almost as though they had those conceptions and conceived of thermodynamic processes as intrinsically a form of information processing. Intended as a heuristic caricature, this way of telling the story nonetheless misrepresents the history of the development of these ideas. Only after the mid-twentieth-century work of Alonzo Church and Alan Turing, among others, showed that most physical processes can be assigned an interpretation that treats them as performing a computation (understood in its most general sense) did it become common to describe thermodynamic processes as equivalent to information processing (though, importantly, it is equivalence under an interpretation). ([Location 1494](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1494))
- Any time a theory builder proposes to call any event, state, structure, etc., in any system (say the brain of an organism) a signal or message or command or otherwise endow it with content, he takes out a loan of intelligence. . . . This loan must be repaid eventually by finding and analysing away these readers or comprehenders; for failing this, the theory will have among its elements unanalysed man-analogues endowed with enough intelligence to read the signals, etc. ([Location 1618](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1618))
- The golem myth holds a subtler implication embodied in its truth/death pun. Besides being a soulless being, following commands with mechanical dispassion, the golem lacks discernment. It is this that ultimately leads to ruin, not any malevolence on either the golem’s or its creator’s part. Truth is heartless and mechanical, and by itself it cannot be trusted to lead only to the good. The “truth” that can be stated is also finite and fixed, whereas the world is infinite and changeable. So, charged with carrying out the implications that follow from a given command, the golem quickly becomes further and further out of step with its context. Golems can thus be seen as the very real consequence of investing relentless logic with animate power. The true golems of today are not artificial living beings, but rather bureaucracies, legal systems, and computers. ([Location 1733](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1733))
- In their design as well as their role as unerringly literal slaves, digital computers are the epitome of a creation that embodies truth maintenance made animate. ([Location 1739](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1739))
- Computers are logic embodied in mechanism. The development of logic was the result of reflection on the organization of human reasoning processes. It is not itself thought, however, nor does it capture the essence of thought, namely, its quality of being about something. Logic is only the skeleton of thought: syntax without semantics. Like a living skeleton, this supportive framework develops as an integral part of a whole organism and is neither present before life, nor of use without life. ([Location 1742](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1742))
- Another way one could have said this is that (1) the collection of all men is contained within the collection of all mortals; and (2) Socrates is contained within the collection of all men; so inevitably, (3) Socrates is also contained within the collection of all mortals. Put this way, it can be seen that logic can also be conceived as a necessary attribute of the notion of containment, whether in physical space or in the abstract space of categories and classes of things. Containment is one of the most basic of spatial concepts. ([Location 1752](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1752))
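  The containment reading of the syllogism can be made concrete with sets; this is a small illustrative sketch, not anything from the text, in which subset inclusion plus membership forces the conclusion.

  ```python
  # The syllogism as containment: membership in a contained collection
  # entails membership in the containing collection.

  mortals = {"Socrates", "Plato", "Fido", "an oak tree"}
  men = {"Socrates", "Plato"}

  assert men <= mortals          # (1) the collection of all men is contained in the mortals
  assert "Socrates" in men       # (2) Socrates is contained in the collection of all men
  assert "Socrates" in mortals   # (3) so Socrates is contained in the collection of all mortals
  ```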
- Consider, however, that to the extent that we map physical processes onto logic, mathematics, and machine operation, the world is being modeled as though it is preformed, with every outcome implied in the initial state. But as we just noted, even Turing recognized that this mapping between computing and the world was not symmetrical. Gregory Bateson explains this well: In a computer, which works by cause and effect, with one transistor triggering another, the sequences of cause and effect are used to simulate logic. Thirty years ago, we used to ask: Can a computer simulate all the processes of logic? The answer was “yes,” but the question was surely wrong. We should have asked: Can logic simulate all sequences of cause and effect? The answer would have been: “no.” ([Location 1775](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1775))
- To simplify a bit, the problem lies in the very assumption that syntax and semantics, logic and representation, are independent of one another. A golem is syntax without semantics and logic without representation. There is no one at home in the golem because there is no representation possible—no meaning, no significance, no value, just physical mechanism, one thing after another with terrible inflexible consistency. This is the whole point. The real question for us is whether golems are the only game in town that doesn’t smuggle in little man-analogues to do the work of cognition. If we eliminate all the homunculi, are we only left with golems? ([Location 1796](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1796))
- Though it seemed like a radical shift from behaviorism with its exclusively external focus, to cognitive science with its many approaches to internal mental activities, there is a deeper kinship between these views. Both attempt to dispose of mental homunculi and replace them with physical correspondence relationships between physical phenomena. But where the behaviorists assumed that it would be possible to discover all relevant rules of psychology in the relationships of input states to output responses, the computationalists took this same logic inside. ([Location 1848](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1848))
- As Irving J. Good explained at the dawn of the computer age: “The parts of thinking that we have analyzed completely could be done on the computer. The division would correspond roughly to the division between the conscious and unconscious minds.” ([Location 1940](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=1940))
- precisely because computation is not itself mechanism, but rather just an interpretive gloss applied to a mechanism, changes of the mechanism outside of certain prefigured constraints will not be “defined” within the prior computational system. ([Location 2002](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2002))
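  A minimal sketch of this point, with invented voltage thresholds: the "bits" exist only under an interpretive mapping from physical states, so a physical change that falls outside the prefigured constraints has no defined computational meaning.

  ```python
  # Computation as an interpretive gloss on a mechanism: voltages are only
  # "bits" under a mapping, and a state outside the mapped ranges is
  # physically real but computationally undefined.

  LOW, HIGH = 0.8, 2.2  # hypothetical logic-level thresholds (volts)

  def interpret(voltage: float):
      """Map a physical state onto a logical value, where the mapping applies."""
      if voltage <= LOW:
          return 0
      if voltage >= HIGH:
          return 1
      return None  # the mechanism changed, but nothing is "computed"

  print([interpret(v) for v in (0.1, 3.0, 1.5)])
  # [0, 1, None] -- the 1.5 V state falls outside the prior computational system
  ```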
- It matters that human thought is not a product of precise circuits, discrete voltages, and distinctively located memory slots. The imprecision and noisiness of the process are what crucially matters. Could it be our good fortune to be computers made of meat, rather than metal and silicon? ([Location 2033](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2033))
- Teleology is like a mistress to a biologist: he cannot live without her but he’s unwilling to be seen with her in public. —J. B. S. HALDANE ([Location 2046](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2046))
- American psychologist Donald T. Campbell articulated what is probably the most generalized characterization of selection processes as distinct from other forms of knowledge creation. He characterized the essence of selection (as opposed to instruction) theories of adaptation and knowledge generation with the simple aphorism “blind variation and selective retention.” This catchphrase highlights two distinguishing features of the selection logic: first, that the source of variation is unrelated to the conditions that favor preservation of a given variant; and second, that there is differential persistence of some variants with respect to others. As we will discuss in greater detail in later chapters, the defining criterion is not the absence of ententional processes, but the absence of any specific control of the variants produced by the process that determines their differential persistence. ([Location 2387](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2387))
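  A toy sketch of Campbell's aphorism, with an invented target and rates: the variation step knows nothing about the criterion that later determines which variants persist.

  ```python
  # "Blind variation and selective retention": mutation is generated without
  # reference to the selection criterion; only differential retention is biased.
  import random

  TARGET = "ententional"
  ALPHABET = "abcdefghijklmnopqrstuvwxyz"

  def blind_mutate(s: str) -> str:
      """Change one random character; the change is blind to TARGET."""
      i = random.randrange(len(s))
      return s[:i] + random.choice(ALPHABET) + s[i + 1:]

  def retention_score(s: str) -> int:
      """Differential persistence: how well a variant matches its 'environment'."""
      return sum(a == b for a, b in zip(s, TARGET))

  population = ["".join(random.choices(ALPHABET, k=len(TARGET))) for _ in range(50)]
  for _ in range(200):
      population = [blind_mutate(s) for s in population for _ in range(2)]     # blind variation
      population = sorted(population, key=retention_score, reverse=True)[:50]  # selective retention

  print(max(population, key=retention_score))
  ```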
- DNA molecules are just long, stringy, relatively inert molecules otherwise. The question that is begged by replicator theory, then, is this: What kind of system properties are required to transform a mere physical pattern embedded within that system into information that is able both to play a constitutive role in determining the organization of this system and to constrain it to be capable of self-generation, maintenance, and reproduction in its local environment? ([Location 2494](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2494))
- materiality is the critical difference between biological evolution and simulated evolution. The specific molecular substrates, the energy required to synthesize them and animate their interactions, and the work required to generate the structures and processes that get reproduced with variation are what determine why a given organization persists or doesn’t. In the computer world this part of the story is taken for granted, and even if these factors are part of what is being simulated, the actual physics of computing is still a given. Patterns of signals are produced, transformed, and erased by physical processes whose details are irrelevant to the algorithmic game being played. In life, however, it is precisely this physical embodiment that matters. It is not just pattern and correspondence that matters; the properties of the substrates that things are made of and the thermodynamics of the work being performed on them determine what persists and what perishes over time. Could the apparent ability to eliminate teleological assumptions from simulated evolution be an artifact of having ignored the physicality of its implementation? Can the view of biological evolution as thoroughly non-teleological be an artifact of bracketing out consideration of the details of molecular structure, chemical reaction probabilities, thermodynamics, and so forth? ([Location 2569](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2569))
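  As a purely illustrative contrast (the cost function and budget below are invented, not the author's), one can re-run a toy selection loop in which each replication must be paid for out of a finite energy budget, so what persists depends on the cost of synthesizing a variant and not only on its score.

  ```python
  # Simulated evolution usually treats copying as free; here each replication
  # draws on a per-generation energy budget, a crude stand-in for the
  # thermodynamic work of synthesizing real substrates.
  import random

  def score(genome: list[int]) -> int:
      return sum(genome)  # stand-in "fitness"

  def synthesis_cost(genome: list[int]) -> float:
      return 1.0 + 0.2 * sum(genome)  # assumption: higher-scoring variants cost more to build

  def reproduce(population, budget: float):
      next_generation = []
      for genome in sorted(population, key=score, reverse=True):
          cost = synthesis_cost(genome)
          if budget < cost:
              break  # no free copies: the energy supply, not the score, sets the limit
          budget -= cost
          child = genome[:]
          i = random.randrange(len(child))
          child[i] ^= 1  # blind variation
          next_generation.extend([genome, child])
      return next_generation

  population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
  for _ in range(100):
      population = reproduce(population, budget=60.0)
  print(len(population), max(score(g) for g in population))
  ```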
- So far, efforts to explain these enigmatic properties of life and mind have proceeded from the top down, so to speak. Whether beginning from an effort to make sense of human consciousness, or else appealing to the design of regulatory devices designed to produce end-directed mechanical tendencies, we are already assuming what we need to explain, and then trying to find the best way to disassemble these phenomena to understand their composition. Maybe, however, something blocks this approach. What if, for some reason, these phenomena can’t be analyzed this way? What if, in the process of the emergence of these phenomena, their most important foundational features somehow get obscured or even lost in some way? It may be possible to discover how ententional phenomena come into existence, and yet not possible to trace the process in reverse. So, instead of trying to eliminate ententional properties from science, I propose we try to understand how they could have come into existence where none existed before. In other words, we need to start without any hint of telos and end up with it, not the other way around. In some ways, this is a far more difficult enterprise because we are not availing ourselves of the ententional phenomena most familiar to us: living bodies and conscious minds. Even worse, it forces us to explore domains in which there may be little science to support our efforts. It has the advantage, however, of protecting us from our own familiarity with teleology, and from the appeal of allowing homuncular assumptions to do the work that explanation should be doing. For no other problem in the sciences are there so many potential pitfalls, false leads, and possibilities for self-deception. ([Location 2596](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2596))
- No fractionation of ententional functions into modules of even smaller scope and proportion allows the apparent arrow of causality to reverse. There is no point where ententional dynamics just fades smoothly into thermodynamics. Minds are not just made of minds of simpler form made of minds of yet simpler form that eventually become so “stupid” as to be modeled by simple mechanisms. Nowhere down this ever smaller rabbit hole can we expect that the normal laws of causality will imperceptibly transform into their mirror image. ([Location 2616](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2616))
- To abuse an old metaphor: the fabric of mind is not merely the thread that composes it. Indeed, these physical threads can be traced, and give the fabric its solidity and resistance. But the properties that constitute the fabric vanish if we insist on unraveling the intricate weave until only thread is left. By analogy, the fabric of telos is woven, if you will, from the same material and energetic thread that constitutes all unfeeling, unthinking, inanimate phenomena. In this respect, there is complete continuity. It’s the distinctive pattern of the self-entanglement of these threads that contributes the critical and fundamental difference in properties distinguishing the ententional world from the world described by contemporary physics and chemistry. The intertwined threads of a fabric systematically limit each other’s possibilities for change. There is of course no new kind of stuff necessary to transform the one-dimensional thread into a two-dimensional sheet; just a special systematic form of reciprocal limitation. By analogy, to really understand how the additional dimension of ententional properties can emerge from a substrate that is dimensionally simpler and devoid of these properties, it is necessary to understand how the material and energetic threads of the physical universe became entangled with one another in just the right way so as to produce the additional dimension that is the fabric of both life and mind. This is the problem of emergence: understanding how a new, higher dimension of causal influence can be woven from the interrelationships among component processes and properties of a lower dimension. ([Location 2660](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2660))
- Being unpredictable, even in some ultimate sense, is only a claim about the limits of representation—or of human intellect. Even if certain phenomena are “in principle” unpredictable, unexplainable, or unknowable, this doesn’t necessarily imply a causal discontinuity in how the world works. There may be a determinate path from past to future, even to a radically divergent form of future organization, even if this causal influence is beyond precise representation. ([Location 2715](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2715))
- a number of scientists and philosophers of science realized the necessity of reconciling the logic of physical science with the logic of living and mental teleology. A true reconciliation would need to accept both the unity of material and living/mental processes and the radical differences in their causal organization. Investigators could neither accept ententional properties as foundational nor deny their reality, despite this apparent incompatibility. The key concept that came to characterize an intermediate position was that of emergence. This use of the term was introduced by the English philosopher and critic George Henry Lewes, in his Problems of Life and Mind (1874–79), where he struggles with the problem of making scientific sense of living and mental processes. He defines emergence theory as follows: Every resultant is either a sum or a difference of the co-operant forces; their sum, when their directions are the same—their difference, when their directions are contrary. Further, every resultant is clearly traceable in its components, because these are homogeneous and commensurable. It is otherwise with emergents, when, instead of adding measurable motion to measurable motion, or things of one kind to other individuals of their kind, there is a co-operation of things of unlike kinds. The emergent is unlike its components insofar as these are incommensurable, and it cannot be reduced to their sum or their difference. ([Location 2749](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2749))
- As Gregory Bateson has described it, Before Lamarck, the organic world, the living world, was believed to be hierarchic in structure, with Mind at the top. The chain, or ladder, went down through the angels, through men, through the apes, down to the infusoria or protozoa, and below that to the plants and stones. What Lamarck did was to turn that chain upside down. When he turned the ladder upside down, what had been the explanation, namely: the Mind at the top, now became that which had to be explained. Previously, starting with the assumption of the infinite intelligence of a designer God, organisms—including those capable of flexible intelligent behavior—could be seen as progressive subtractions and simplifications from Godlike perfection. In the “Great Chain of Being,” mental phenomena were primary. The mind of God was the engine of creation, the designer of living forms, and the ultimate source of value. Mind was not in need of explanation. It was a given. ([Location 2786](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2786))
- The event that probably played the key role in precipitating the articulation of an emergentist approach to life and mind was the publication in 1859 of Darwin’s On the Origin of Species. Although Darwin was not particularly interested in the more metaphysical sorts of questions surrounding the origins of mind, and did not think of his theory in emergence terms, he was intent on addressing the teleological issue. He, like Lamarck, was seeking a mechanistic solution to the problem of the apparent functional design of organisms. But where Lamarck assumed that the active role of organism striving to adapt to its world was necessary to acquire “instruction” from the environment—a cryptic homunculus—Darwin’s theory of natural selection required no such assumption. He reasoned that the same consequence could be reached due to the differential reproduction of blindly generated variant forms of organisms in competition for limited environmental resources. This eliminated even the tacit teleological assumption of a goal-seeking organism. Even this attribute could be achieved mechanistically. Thus, it appeared that teleology could be dispensed with altogether. The theory of natural selection is not exactly a mechanistic theory, however. It can best be described as a form of statistical inference that is largely agnostic about the mechanisms it depends on. As is well known, Darwin didn’t understand the mechanism of heredity or the mechanisms of reproduction. ([Location 2800](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2800))
- What Galileo and Newton had done for physics, Lavoisier and Mendeleyev had done for chemistry (and alchemy), and Carnot and Clausius had done for heat, Darwin had now done for the functional design of organisms. Functional design could be subsumed under a lawlike spontaneous process, without need of spirits, miracles, or extrinsic guidance. ([Location 2814](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2814))
- The Nobel Prize–winning neuroscientist Roger Sperry also updated classic emergentist arguments about the nature of consciousness. In an article published in 1980, Sperry argued that although there is no change in basic physics with the evolution of consciousness, the property of the whole system of interacting molecules constituting brains that we call “consciousness” is fundamentally different from any collective property they would exhibit outside of brains. In this way, he offers a configurational view of emergence. He illustrates this with the example of a wheel. Although the component particles, atoms, and molecules forming the substance of the wheel are not changed individually or interactively by being in a wheel, because of the constraints on their relative mobility with respect to one another, they collectively have the property of being able to move across the ground in a very different pattern and subject to very different conditions than would be exhibited in any other configuration. The capacity to roll is only exhibited as a macroscopic collective property. It nevertheless has consequences for the component parts. It provides a means of displacement in space that would be unavailable otherwise. In this sense, Sperry argues that being part of this whole indirectly changes some of the properties of the parts. Specifically, it creates some new possibilities by restricting others. This trade-off between restriction and constraint on the one hand and unprecedented collective properties on the other will be explored more fully in the next few chapters. ([Location 2982](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=2982))
- If all higher-order causal interactions are between objects constituted by relationships among these ultimate building blocks of matter, then assigning causal power to various higher-order relations is to do redundant bookkeeping. ([Location 3077](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=3077))
- Supervenience is in many respects the defining property of emergence, but also the source of many of its conceptual problems. The term was first used philosophically by Lloyd Morgan to describe the relationship that emergent properties have to the base properties that give rise to them. A more precise technical definition was provided by the contemporary philosopher Donald Davidson, who defines it in the context of the mind/body problem as follows: “there cannot be two events exactly alike in all physical respects but differing in some mental respects, or that an object cannot alter in some mental respects without altering in some physical respects.” ([Location 3082](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=3082))
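  Davidson's condition can be stated compactly; the symbols below (P for an event's complete physical description, M for its mental description) are my labels, not his.

  ```latex
  % No mental difference without a physical difference:
  \[
    \forall x\,\forall y\;\bigl[\,P(x) = P(y) \;\Rightarrow\; M(x) = M(y)\,\bigr],
    \quad\text{equivalently}\quad
    M(x) \neq M(y) \;\Rightarrow\; P(x) \neq P(y).
  \]
  ```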
- if one agrees that there can be no difference in the whole without a difference in the parts, how can it be possible that there is something about the whole that is not reducible to combinations of properties of the parts? ([Location 3092](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=3092))
- Philosophically, the study of compositionality relationships and their related hierarchic properties is called mereology. The term quite literally means “the study of partness.” ([Location 3095](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=3095))
- At the end of a paper discussing his process approach to emergence, Mark Bickhard boldly asserts: “Mental states do not exist, any more than do flame states—both are processes.” This may be a bit too extreme, but it drives home a crucial point: these phenomena consist in the special character of the transformations between states, not in the constitution of things at any slice in time. So, trying to build a theory of the emergence of living or mental processes based on a synchronic conception of emergence theory is bound to fail from the beginning. The phenomena we are interested in explaining are intrinsically historical and dynamic. Being alive does not merely consist in being composed in a particular way. It consists in changing in a particular way. If this process of change stops, life stops, and unless all molecular processes are stopped as well (say by quick-freezing), the cells and molecules distinctive to it immediately begin to degrade. ([Location 3256](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=3256))
- Processes are not decomposable into other simpler processes. Or, to put this in simpler terms: processes don’t have other processes as their parts. ([Location 3350](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=3350))
- Habits and patterns are an expression of redundancy. Redundancy is a defining attribute of dynamic organization, such as the spiraling of galaxies, the symmetries defining geometric forms such as polygons, and the global regularities of physical processes such as are expressed by the second law of thermodynamics. Each is a form of organization in which some process or object is characterized by the repetition of similar attributes. ([Location 3460](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=3460))
- Calling a pattern “chaotic” in this theoretical context does not necessarily mean that there is a complete absence of predictability, but rather that there is very little redundancy with which to simplify the description. ([Location 3617](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=3617))
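  One way to make the link between redundancy and the length of a description concrete, though it is my illustration rather than the book's, is lossless compression: a compressor exploits repetition, so a highly patterned sequence compresses far more than a noisy, pattern-poor one.

  ```python
  # Redundancy permits a shorter description: a patterned byte string compresses
  # to a small fraction of its length, while a noisy one barely shrinks.
  import random
  import zlib

  redundant = b"abc" * 2000                                   # highly repetitive
  noisy = bytes(random.randrange(256) for _ in range(6000))   # little redundancy

  for label, data in (("redundant", redundant), ("noisy", noisy)):
      ratio = len(zlib.compress(data)) / len(data)
      print(f"{label:9s}: {len(data)} bytes, compressed ratio {ratio:.2f}")
  ```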
- How much things change from what would have occurred spontaneously is a reflection of the amount of work exerted to produce this change. ([Location 5915](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=5915))
- Work is a spontaneous change inducing a non-spontaneous change to occur. ([Location 6053](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=6053))
- notice that while it is possible to get work from a system that is in a state far from equilibrium, as it spontaneously develops toward equilibrium, this capacity rapidly diminishes, even if the total amount of molecular level work remains constant throughout. And at equilibrium, the vast numbers of collisions and the vast amount of microscopic work that is still occurring produce no “net” capacity for macroscopic work. ([Location 6064](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=6064))
- Microscopic work is a necessary but not sufficient condition for macroscopic work. ([Location 6069](https://readwise.io/to_kindle?action=open&asin=B005LW5JAS&location=6069))
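  A toy sketch (all parameters invented) of the last two points: particles keep hopping between two chambers at a constant rate, so microscopic work never stops, yet the imbalance that could drive any net macroscopic work decays away as the system approaches equilibrium.

  ```python
  # Ehrenfest-style urn model: constant microscopic activity, vanishing capacity
  # for net macroscopic work as the distribution approaches equilibrium.
  import random

  N = 10_000
  left = N  # far from equilibrium: every particle starts in the left chamber

  for step in range(1, 16):
      hops = 0
      for _ in range(N):
          if random.random() < 0.25:       # each particle hops with fixed probability
              hops += 1
              if random.random() < left / N:
                  left -= 1                # a left-chamber particle hops to the right
              else:
                  left += 1                # a right-chamber particle hops to the left
      imbalance = abs(2 * left - N) / N    # proxy for the pressure difference available for work
      print(f"step {step:2d}: hops = {hops:5d}, imbalance = {imbalance:.3f}")
  # 'hops' stays roughly constant (microscopic work continues); 'imbalance',
  # the source of any net macroscopic work, decays toward zero.
  ```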