
## Metadata
- Author: [[Sean Carroll]]
- Full Title: The Big Picture
- Category: #books
## Highlights
- for Newton and Laplace, and to the best of our current understanding in theoretical physics, the flow of time is continuous rather than discrete. That’s no problem at all; this is a job for calculus, which Newton and Leibniz invented for just this reason. By the “state” of the universe, or any subsystem thereof, we mean the position and the velocity of every particle within it. The velocity is just the rate of change (the derivative) of the position as time passes; the laws of physics provide us with the acceleration, which is the rate of change of the velocity. Together, you give me the state of the universe at one time, and I can use the laws of physics to integrate forward (or backward) and get the state of the universe at any other time.
We’re using the language of classical mechanics—particles, forces—but the idea is much more powerful and general. Laplace introduced the idea of “fields” as a centrally important concept in physics, and the notion became entrenched with the work of Michael Faraday and James Clerk Maxwell on electricity and magnetism in the nineteenth century. Unlike a particle, which has a position in space, a field has a value at every single point in space—that’s just what a field is. But we can treat that field value like a “position,” and its rate of change as a “velocity,” and the whole Laplacian thought experiment goes through undisturbed. The same is true for Einstein’s general theory of relativity, or Schrödinger’s equation in quantum mechanics, or modern speculations such as superstring theory.
Since the days of Laplace, every serious attempt at understanding the behavior of the universe at a deep level has included the feature that the past and future are determined by the present state of the system. (One possible exception is the collapse of the wave function in quantum mechanics, which we’ll discuss at greater length in chapter 20.) This principle goes by a simple, if potentially misleading, name: conservation of information.
Just as conservation of momentum implies that the universe can just keep on moving, without any unmoved mover behind the scenes, conservation of information implies that each moment contains precisely the right amount of information to determine every other moment. ([Location 578](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=578))
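The Laplacian picture above can be sketched numerically. This is a toy illustration, not anything from the book: a single particle with a harmonic-oscillator force (an arbitrary illustrative choice), evolved with a time-reversible integrator. Given the state (position, velocity) at one moment, integrating forward and then backward recovers the original state, so the present really does fix both past and future.

```python
# Toy sketch of Laplace's thought experiment (illustrative choices
# throughout): the "state" is (position, velocity); the "laws of
# physics" supply the acceleration; integration connects any two times.

def step(x, v, dt, k=1.0):
    # Velocity-Verlet update for the force F = -k*x.
    # This integrator is time-reversible: stepping with -dt undoes +dt.
    a = -k * x
    x_new = x + v * dt + 0.5 * a * dt * dt
    a_new = -k * x_new
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new

def evolve(x, v, dt, n):
    for _ in range(n):
        x, v = step(x, v, dt)
    return x, v

# Start in a definite state, run forward in time, then run backward.
x0, v0 = 1.0, 0.0
xf, vf = evolve(x0, v0, dt=0.01, n=1000)   # the future, from the present
xb, vb = evolve(xf, vf, dt=-0.01, n=1000)  # the past, from that future
print(abs(xb - x0) < 1e-9 and abs(vb - v0) < 1e-9)  # True
```

No information is lost along the way: each moment contains exactly what is needed to reconstruct every other moment.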
- These two conservation laws, of momentum and information, imply a sea change in our best fundamental ontology. The old Aristotelian view was comfortable and, in a sense, personal. When things moved, there were movers; when things happened, there were causes. The Laplacian view—one that continues to hold in science to this day—is based on patterns, not on natures and purposes. If this certain thing happens, we know this other thing will necessarily follow thereafter, with the sequence described by the laws of physics. Why is it that way? Because that’s the pattern we observe. ([Location 599](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=599))
- Quantum mechanics has supplanted classical mechanics as the best way we know to talk about the universe at a deep level. Unfortunately, and to the chagrin of physicists everywhere, we don’t fully understand what the theory actually is. We know that the quantum state of a system, left alone, evolves in a perfectly deterministic fashion, free even of the rare but annoying examples of non-determinism that we can find in classical mechanics. But when we observe a system, it seems to behave randomly, rather than deterministically. The wave function “collapses,” and we can state with very high precision the relative probability of observing different outcomes, but never know precisely which one it will be. ([Location 623](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=623))
- All of the popular versions of quantum mechanics, however, maintain the underlying philosophy of Laplace’s analysis, even if they do away with perfect predictability: what matters, in predicting what will happen next, is the current state of the universe. Not a goal in the future, nor any memory of where the system has been. As far as our best current physics is concerned, each moment in the progression of time follows from the previous moment according to clear, impersonal, quantitative rules. ([Location 631](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=631))
- Principle of Sufficient Reason: For any true fact, there is a reason why it is so, and why something else is not so instead. ([Location 685](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=685))
- The observable universe around us isn’t just an arbitrary collection of stuff obeying the laws of physics—it’s stuff that starts out in a very particular kind of arrangement, and obeys the laws of physics thereafter. By “starts out” we are referring to conditions near the Big Bang, a moment about 14 billion years ago. We don’t know whether the Big Bang was the actual beginning of time, but it was a moment in time beyond which we can’t see any further into the past, so it’s the beginning of our observable part of the cosmos. ([Location 745](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=745))
- It wasn’t until Giordano Bruno, a sixteenth-century Italian philosopher and mystic, that anyone suggested that the sun was just one star among many, and the Earth one of many planets that orbited stars. ([Location 805](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=805))
- So the Big Bang doesn’t actually mark the beginning of our universe; it marks the end of our theoretical understanding. We have a very good idea, on the basis of observational data, what happened soon after the Bang. The microwave background radiation tells us to a very high degree of precision what things were like a few hundred thousand years afterward, and the abundance of light elements tells us what the universe was doing when it was a nuclear fusion reactor, just a few minutes afterward. But the Bang itself is a mystery. We shouldn’t think of it as “the singularity at the beginning of time”; it’s a label for a moment in time that we currently don’t understand. ([Location 849](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=849))
- Boltzmann suggested that we could identify the entropy of a system with the number of different states that would be macroscopically indistinguishable from the state it is actually in. (Technically, it’s the logarithm of the number of indistinguishable states, but that mathematical detail won’t concern us.) A low-entropy configuration is one where relatively few states would look that way, while a high-entropy one corresponds to many possible states. There are many ways to arrange molecules of cream and coffee so that they look all mixed together; there are far fewer arrangements where all of the cream is on the top and all of the coffee on the bottom. ([Location 947](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=947))
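Boltzmann's counting can be made concrete with a toy cream-and-coffee model (my own illustrative setup, not Carroll's): take the macrostate to be how many of N "cream molecules" sit in the top half of the cup, and the entropy to be the logarithm of the number of arrangements that look that way.

```python
from math import comb, log

def entropy(n_molecules, n_in_top):
    # Boltzmann: S = log(number of microstates indistinguishable from
    # this macrostate). The macrostate here is just the count on top.
    return log(comb(n_molecules, n_in_top))

N = 100
separated = entropy(N, N)      # all cream on top: exactly 1 arrangement
mixed = entropy(N, N // 2)     # half on top: ~1e29 arrangements
print(separated)               # 0.0, since log(1) = 0
print(mixed > separated)       # True: mixed is vastly higher entropy
```

The separated configuration is low-entropy because only one arrangement looks like it; the mixed one is high-entropy because an astronomical number do.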
- With Boltzmann’s definition in hand, it makes perfect sense that entropy tends to increase over time. The reason is simple: there are far more states with high entropy than states with low entropy. If you start in a low-entropy configuration and simply evolve in almost any direction, your entropy is extraordinarily likely to increase. When the entropy of a system is as high as it can get, we say that the system is in equilibrium. In equilibrium, time has no arrow. ([Location 952](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=952))
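Why entropy "extraordinarily likely" increases can be seen in an Ehrenfest-style urn simulation (a standard toy model, assumed here for illustration): start with every molecule on the left side of a box, repeatedly move a random molecule to the other side, and watch the log-of-arrangements entropy climb toward its equilibrium plateau.

```python
import random
from math import comb, log

random.seed(0)  # fixed seed so the run is reproducible

# N labeled molecules, each in the left or right half of a box.
# One "tick" of evolution flips a randomly chosen molecule's side.
N = 200
left = set(range(N))            # low-entropy start: everything on the left

def S(k):
    # Entropy of the macrostate "k molecules on the left".
    return log(comb(N, k))

s_start = S(len(left))          # log C(200, 200) = log 1 = 0
for _ in range(5000):
    m = random.randrange(N)
    if m in left:
        left.remove(m)
    else:
        left.add(m)
s_end = S(len(left))

print(s_start)                  # 0.0
print(s_end > s_start)          # True: almost any evolution raises entropy
```

Nothing forbids the entropy from dropping back; there are simply overwhelmingly more high-entropy states, so random evolution almost never finds the low-entropy ones again.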
- Nobody knows exactly why the early universe had such a low entropy. It’s one of those features of our world that may have a deeper explanation we haven’t yet found, or may just be a true fact we need to learn to accept. What we know is that this initially low entropy is responsible for the “thermodynamic” arrow of time, the one that says entropy was lower toward the past and higher toward the future. Amazingly, it seems that this property of entropy is responsible for all of the differences between past and future that we know about. Memory, aging, cause and effect—all can be traced to the second law of thermodynamics and in particular to the fact that entropy used to be low in the past. ([Location 966](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=966))
- When a later event has great leverage over an earlier one, we call the latter a “record” of the former; when the earlier event has great leverage over a later one, we call the former a “cause” of the latter. ([Location 1067](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1067))
- Among the small but passionate community of probability-theory aficionados, fierce debates rage over What Probability Really Is. In one camp are the frequentists, who think that “probability” is just shorthand for “how frequently something would happen in an infinite number of trials.” If you say that a flipped coin has a 50 percent chance of coming up heads, a frequentist will explain that what you really mean is that an infinite number of coin flips will give equal numbers of heads and tails.
In another camp are the Bayesians, for whom probabilities are simply expressions of your states of belief in cases of ignorance or uncertainty. For a Bayesian, saying there is a 50 percent chance of the coin coming up heads is merely to state that you have zero reason to favor one outcome over another. If you were offered a bet on the outcome of the coin flip, you would be indifferent to choosing heads or tails. The Bayesian will then helpfully explain that this is the only thing you could possibly mean by such a statement, since we never observe infinite numbers of trials, and we often speak about probabilities for things that happen only once, like elections or sporting events. The frequentist would then object that the Bayesian is introducing an unnecessary element of subjectivity and personal ignorance into what should be an objective conversation about how the world behaves, and they would be off. ([Location 1091](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1091))
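Both readings of "50 percent" can be sketched in a few lines (the two-coin setup is my illustrative example, not from the book): the frequentist one as a long-run frequency of simulated flips, the Bayesian one as a credence between hypotheses that gets updated by Bayes' theorem as evidence arrives.

```python
import random
from fractions import Fraction

random.seed(42)  # fixed seed for reproducibility

# Frequentist reading: the long-run frequency of heads approaches 1/2
# (exactly 1/2 only in the unobservable limit of infinite trials).
flips = [random.random() < 0.5 for _ in range(100_000)]
frequency = sum(flips) / len(flips)
print(abs(frequency - 0.5) < 0.01)    # True

# Bayesian reading: 50 percent expresses indifference between hypotheses.
# Here: is the coin in hand fair, or two-headed?
prior = {"fair": Fraction(1, 2), "two_headed": Fraction(1, 2)}
p_heads = {"fair": Fraction(1, 2), "two_headed": Fraction(1, 1)}

def update(credence, saw_heads):
    # Bayes' theorem: posterior ∝ prior × likelihood, then normalize.
    like = {h: p_heads[h] if saw_heads else 1 - p_heads[h] for h in credence}
    unnorm = {h: credence[h] * like[h] for h in credence}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

posterior = prior
for _ in range(3):                    # observe heads three times in a row
    posterior = update(posterior, saw_heads=True)
print(posterior["two_headed"])        # 8/9
```

Three heads in a row shift the credence in "two-headed" from 1/2 to 8/9; no infinite ensemble of trials is ever invoked.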
- Moreover, many different states in the molecular theory get mapped to the same state in the fluid one. When this is the case, we often call the first theory the “microscopic” or “fine-grained” or “fundamental” one, and the second the “macroscopic” or “coarse-grained” or “emergent” or “effective” one. These labels aren’t absolute. To a biologist working with an emergent theory of cells and tissue, the theory of atoms and their interactions might be a microscopic description; to a string theorist working on the quantum theory of gravity, superstrings might be the microscopic entities, and atoms are emergent. One person’s microscopic is another person’s macroscopic. ([Location 1516](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1516))
- Coarse-graining goes one way—from microscopic to macroscopic—but not the other way. You can’t discover the properties of the microscopic theory just from knowing the macroscopic theory. Indeed, emergent theories can be multiply realizable: there can, in principle, be many distinct microscopic theories that are incompatible with one another but compatible with the same emergent description. ([Location 1530](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1530))
- The reason why emergence is so helpful is that different theories are not created equal. Within its domain of applicability, the emergent fluid theory is enormously more computationally efficient than the microscopic molecular theory. It’s easier to write down a few fluid variables than the states of all those molecules. Typically—though not necessarily—the theory that has a wider domain of applicability will also be the one that is more computationally cumbersome. There tends to be a trade-off between comprehensiveness of a theory and its practicality. ([Location 1534](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1534))
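The many-to-one character of coarse-graining can be shown directly (a toy model of my own, assuming a "fluid" described by a single bulk velocity): two distinct microscopic states, each a list of 100,000 molecular velocities, map to the same one-number macroscopic description.

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Microscopic state: the velocity of every molecule (bulk flow plus
# thermal jitter). Two independently generated microstates differ in
# detail but share the same emergent description.
N = 100_000
bulk = 3.0
micro_a = [bulk + random.gauss(0, 1) for _ in range(N)]
micro_b = [bulk + random.gauss(0, 1) for _ in range(N)]

def coarse_grain(velocities):
    # The emergent "fluid" variable: just the mean molecular velocity.
    return sum(velocities) / len(velocities)

print(micro_a != micro_b)  # True: distinct microstates...
print(abs(coarse_grain(micro_a) - coarse_grain(micro_b)) < 0.05)
# True: ...but effectively the same macrostate
```

The efficiency trade-off is visible too: the fluid description carries one number where the microscopic one carries 100,000, and the mapping only runs from micro to macro, never back.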
- Even an individual person changing their mind about something can be thought of as a phase transition: our best way of talking about that person is now different. People, like water, can exhibit plateaus in their thinking, where outwardly they hold the same beliefs but inwardly their mental gears are gradually turning. ([Location 1597](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1597))
- Ernest Rutherford, a New Zealand–born experimental physicist who was as responsible as anyone for discovering the structure of the atom, once remarked that “all of science is either physics or stamp collecting.” ([Location 1633](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1633))
- I can imagine focusing on one particular atom that currently resides as part of the skin on the tip of my finger. Ordinarily, using the rules of atomic physics, I would think that I could predict the behavior of that atom using the laws of nature and some specification of the conditions in its surroundings—the other atoms, the electric and magnetic fields, the force due to gravity, and so on. A strong emergentist will say: No, you can’t do that. That atom is part of you, a person, and you can’t predict the behavior of that atom without understanding something about the bigger person-system. Knowing about the atom and its surroundings is not enough. ([Location 1699](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1699))
- There is a continuum of possible stances toward the way that the different stories of reality fit together, with “strong emergence” (all stories are autonomous, even incompatible) on one end and “strong reductionism” (all stories reduce to one fundamental one) on the other. ([Location 1719](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1719))
- Just as investigating dualities between different physical theories provides full employment for physicists, investigating how different vocabularies relate to one another and sometimes intermingle provides full employment for philosophers. ([Location 1775](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1775))
- Planets don’t sit on foundations; they hold themselves together in a self-reinforcing pattern. The same is true for beliefs: they aren’t (try as we may) founded on unimpeachable principles that can’t be questioned. Rather, whole systems of belief fit together with one another, in more or less comfortable ways, pulled in by a mutual epistemological force. ([Location 1794](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1794))
- No analogy is perfect, but the planets-of-belief metaphor is a nice way to understand the view known in philosophical circles as coherentism. According to this picture, a justified belief is one that belongs to a coherent set of propositions. This coherence plays the role of the gravitational pull that brings together dust and rocks to form real planets. A stable planet of belief will be one where all the individual beliefs are mutually coherent and reinforcing. ([Location 1804](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1804))
- There is a crucial difference, in other words, between stable planets of belief, ones where all the different pieces attract one another in a consistent and coherent way, and habitable planets, ones where we could actually live. A habitable planet of belief necessarily includes some shared convictions about evidence and rationality, as well as the actual information we have gathered about the world. We can hope that people working in good faith will, after trying hard to understand reality the best they can, end up constructing planets of belief that are somewhat compatible with one another. ([Location 1836](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1836))
- It’s worth highlighting two important cognitive biases that we can look to avoid as we put together our own planets. One is our tendency to give higher credences to propositions that we want to be true. This can show up at a very personal level, as what’s known as self-serving bias: when something good happens, we think it’s because we are talented and deserving, while bad things are attributed to unfortunate luck or uncontrollable external circumstances. At a broader level, we naturally gravitate toward theories of the world that somehow flatter ourselves, make us feel important, or provide us with comfort. The other bias is our preference for preserving our planet of belief, rather than changing it around. This can also show up in many ways. Confirmation bias is our tendency to latch on to and highlight any information that confirms beliefs we already have, while disregarding evidence that may throw our beliefs into question. This tendency is so strong that it leads to the backfire effect—show someone evidence that contradicts what they believe, and studies show that they will usually come away holding their initial belief even more strongly. We cherish our beliefs, and work hard to protect them against outside threats. ([Location 1858](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1858))
- Our need to justify our own beliefs can end up having a dramatic influence on what those beliefs actually are. Social psychologists Carol Tavris and Elliot Aronson talk about the “Pyramid of Choice.” Imagine two people with nearly identical beliefs, each confronted with a decision to make. One chooses one way, and the other goes in the other direction, though initially it was a close call either way. Afterward, inevitably, they work to convince themselves that the choice they made was the right one. They each justify what they did, and begin to think there wasn’t much of a choice at all. By the end of the process, these two people who started out almost the same have ended up on opposite ends of a particular spectrum of belief—and often defending their position with exceptionally fervent devotion. “It’s the people who almost decide to live in glass houses who throw the first stones,” as Tavris and Aronson put it. ([Location 1867](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1867))
- We’re faced with the problem that the beliefs we choose to adopt are shaped as much, if not more, by the beliefs we already have than by correspondence with external reality. ([Location 1875](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1875))
- How can we guard ourselves against self-reinforcing irrationality? There is no perfect remedy, but there is a strategy. Knowing that cognitive biases exist, we can take that fact into account when doing our Bayesian inference. Do you want something to be true? That should count against it in your assignment of credences, not for it. Does new, credible evidence seem incompatible with your worldview? We should give it extra consideration, not toss it aside. ([Location 1876](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=1876))
- In our hydrogen atom, that orbiting electron carries a certain amount of energy, depending on how close it is to the proton—the closer it gets, the less energy it has. So an electron that is far away from the proton, but still bound to it, has a relatively large energy. And it’s being “shaken,” simply by the fact that it’s orbiting around. We therefore expect the electron to give off light and in the process lose energy and spiral closer and closer to the proton. ([Location 2452](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=2452))
- There are two basic kinds of fields and associated particles: bosons and fermions. Bosons, such as the photon and graviton, can pile on top of each other to create force fields, like electromagnetism and gravity. Fermions take up space: there can only be one of each kind of fermion in one place at one time. Fermions, like electrons, protons, and neutrons, make up the objects of matter like you and me and chairs and planets, and give them all the property of solidity. ([Location 2661](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=2661))
- The reason why we know there are no new fields or particles that play an important role in the physics underlying our everyday lives is a crucial property of quantum field theory known as crossing symmetry. This amazing feature helps us be sure that certain kinds of particles do not exist; otherwise we would have found them already. Crossing symmetry basically says that if one field can interact with another one (for example, by scattering off of it), then the second field can create particles of the first one under the right conditions. It can be thought of as the quantum-field-theory analogue of the principle that every action implies a reaction. ([Location 2762](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=2762))
- The basic reason why gravity matters to us is that it is a long-range force that accumulates—the more stuff you have causing the gravity, the stronger its influence is. (That’s not necessarily true for electromagnetism, for example, since positive and negative charges can cancel out; gravity always just adds up.) ([Location 2809](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=2809))
- (Charged particles at rest are surrounded by electric fields, while charged particles in motion generate magnetic fields in addition.) ([Location 3321](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=3321))
- The second law of thermodynamics says that the entropy of isolated systems increases over time. Ludwig Boltzmann explained entropy to us: it’s a way of counting how many possible microscopic arrangements of the stuff in a system would look indistinguishable from a macroscopic point of view. If there are many ways to rearrange the particles in a system without changing its basic appearance, it’s high-entropy; if there are a relatively small number, it’s low-entropy. The Past Hypothesis says that our observable universe started in a very low-entropy state. From there, the second law is easy to see: as time goes on, the universe goes from being low-entropy to high-entropy, simply because there are more ways that entropy can be high. ([Location 3429](https://readwise.io/to_kindle?action=open&asin=B014EOUMZA&location=3429))