When looking up into the night sky we sense the vast distances between the objects we see. But when looking down at the ground we experience something different. It seems that objects are solid, without internal structure.
We can’t know by looking that the things we see are made of tiny molecules separated from each other by tiny gaps. Even with sophisticated instruments like microscopes we have no chance of seeing any molecules. Molecules are too small.
Think about it. No one has ever seen a molecule. No one. Computers have created pictures based on programming rules and data from sensors to give us the idea of what molecules might look like—if molecules lived in our world at our scales and reacted to sensors and probes the way we do. But, of course, they don’t.
Few professors, as far as I know, emphasize to freshmen in introductory chemistry that they are learning rules from models of molecules, models invented (fabricated, really) to help make sense of lab experiments performed on substances that can be felt with the hands and seen with unaided eyes.
Worse still, visual models can never be realistic when applied to the objects scientists call atoms, the things molecules are made from. Such models are completely fanciful.
Whatever it is we think atoms are, they aren’t resolvable with light, which is what our brains use to help us imagine things. Their constituents are quantum objects that don’t behave like anything we deal with in ordinary life. Everything we think we know about atoms is made up by scientists struggling to make sense of the way substances actually behave under every set of experimental circumstances they can imagine.
Scientists have invented models of atoms built from protons, neutrons, and electrons (which whirl inside s, p, d, f & g orbitals), whatever, to aid their thinking about them. No one examines an atom to see if it looks like its model, because no one can. Whatever it is scientists are modeling can’t be seen by eyes or microscopes. If the models help scientists predict what will happen in experiments, they are satisfied. The physicist Stephen Hawking called this attitude model-dependent realism. The models are good enough.
During the past fifty years or so experiments have revealed new layers of complexity, which older models of the atom couldn’t address. So scientists devised new models to help them reason more clearly about the strange events they observed in their experiments.
Scientists invented more structures and more “particles”—quarks being the best known example—to explain and simplify the fantastic results of their most recent experiments.
Before the concept of the quark was invented, scientists struggled with the complexity of a theory that incorporated hundreds of particles. Frustrated physicists referred to this complexity as a “particle zoo.” After the theory of quarks was accepted, the number of particles in the “standard model” dropped to seventeen.
Some current models of the subatomic world postulate point-size masses immersed in vast volumes of interstitial space. These models reflect the mathematics used to build them, but are probably useless for understanding what is really going on.
John Wheeler, the theoretical physicist who coined the terms worm-hole and quantum-foam, said this about the very small: “…every item of the physical world has at bottom—a very deep bottom, in most instances—an immaterial source and explanation…”
At the smallest scale we can realistically work with—the scale of molecules—the structure of matter is dense. The space between molecules in a lattice is not much larger than the size of the molecules themselves.
The force fields inside the molecular lattice are powerful—powerful enough to make the lattice impermeable. Vast volumes of empty space don’t exist within it. Matter and energy seem to work together in a kind of soup of symbiotic equivalence.
It might be reasonable to expect that at smaller scales, forces and fields take over. Matter, as folks usually think of it, is gone. Fields (whatever they really are) predominate. When these fields interact with detectors, the detectors behave as if they’ve interacted with physical particles immersed in vast volumes of empty space.
It’s an unfortunate illusion, because people may be missing an underlying reality: as experimenters descend to smaller scales, they enter regions of disproportionately less space, much less space, certainly not more. Descent down the ladder to smaller and smaller scales opens densities of force/energy and limitations of space/time comparable to those found in black holes.
For example, consider a typical black hole (one estimate claims there could be at least a hundred million in the Milky Way Galaxy). Its event horizon might have a circumference of, say, thirty miles, while the same black hole’s diameter might measure millions of miles. These dimensions violate the Euclidean rules of geometry everyone is used to. According to those rules, a spheroidal event horizon that measures thirty miles around its circumference can’t measure more than about ten miles across its diameter.
A diameter of millions of miles for an object with a thirty mile circumference seems crazy at first, until we grasp some of the implications of relativity, which demand that the volume of space and span of time within a black hole be densely distorted and wildly warped.
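The Euclidean expectation is simple arithmetic: a sphere’s diameter is its circumference divided by pi. A quick sketch (the thirty-mile figure is the essay’s illustrative number, not a measurement):

```python
import math

# In flat (Euclidean) space, diameter = circumference / pi.
circumference_miles = 30.0
diameter_miles = circumference_miles / math.pi
print(round(diameter_miles, 2))  # about 9.55 miles
```

So in ordinary flat space a thirty-mile-around horizon could span no more than roughly nine and a half miles; a millions-of-miles diameter only becomes thinkable once relativistic warping of space and time is admitted.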
A black hole contains within its volume the energy-equivalent of all the matter of the collapsed and vanished star that formed it, plus the energy-equivalent of any other matter that may have fallen into it. It is a region devoid of matter (energy rich but matter impoverished), analogous perhaps to those tiny spaces we think might exist within and between atoms and inside the sub-atomic realms of ordinary matter.
Said plainly: whatever exists at the tiny scales I have been writing about in this essay, no one really has a clue, but maybe knowledge about black holes can provide insights. I think so. The problem is that much of our knowledge about black holes is speculation based on mathematics; unless we are already living inside a black hole, we can’t verify the ideas of some admittedly very smart and talented people, Stephen Hawking among them.
The problem of understanding the very small is serious. The most advanced accelerator we can afford to build blows protons apart so detectors can examine their debris fields. The detectors “look at” debris that measures about 1/100th the size of the protons being smashed. Accelerators, like the one at CERN, can’t “see” anything smaller.
From these tiny pieces of accelerator trash, theories of nature are fashioned. The inability to resolve the super-small stuff is a problem. No one can see quarks, for example. Scientists at the ALICE experiment at CERN hope to fashion a work-around by colliding the nuclei of lead atoms to make progress in the coming years.
To examine debris at Planck scales—which would answer everyone’s questions—requires a resolution many trillions of times greater than CERN can deliver. Such a machine would have to be much larger than the one at CERN. It would have to be larger than the solar system. In fact, it would have to be larger than the Milky Way Galaxy. Even then, the uncertainty principle guarantees that such a machine could not remove all the quantum fuzziness from whatever images it might create.
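To put rough numbers on that gap, here is a back-of-the-envelope comparison, assuming the essay’s figure of one hundredth of a proton’s size for the finest resolvable scale, together with the standard values for the proton’s charge radius and the Planck length (both order-of-magnitude inputs, not precise ones):

```python
# Rough scale comparison between today's finest resolution and the Planck scale.
proton_radius_m = 0.84e-15            # proton charge radius, ~0.84 femtometers
finest_resolution_m = proton_radius_m / 100  # the essay's "1/100th of a proton"
planck_length_m = 1.616e-35           # standard value of the Planck length

gap = finest_resolution_m / planck_length_m
print(f"{gap:.1e}")  # roughly 5e17
```

The shortfall comes out to hundreds of quadrillions, comfortably within the essay’s “many trillions of times” characterization.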
According to the IAS theoretical physicist Nima Arkani-Hamed, it might be possible to burrow down to an understanding of the very small by using pure thought, as long as that thought is consistent with what is already known for sure from the mathematics of quantum physics and relativity theory. The problem is that we will never be able to confirm our thinking by doing an experiment.
The good news, Nima says, is that constraints imposed by knowledge we already have may so reduce the number of paths to truth that somebody might find one that is unique, sufficient, and exclusive. If so, folks can have confidence in it, even though experimental verification lies well beyond the reach of our technology.
But again, fundamental problems—like trying to observe an intact, whole atom—remain. No technology of any kind exists that will permit anyone to observe an entire atom at once and resolve its parts. Physicists are reduced to using what they learn from observing atomic-scale debris to help them fashion, in their imaginations, what such an entity might really “look” like. No one will ever have the holistic satisfaction of holding an atom in their experimental hands, observing it, and pushing on its quantum-endowed components to see what happens.
Where does all this leave us? At this stage in its history, science is struggling to figure out what’s happening.
In the USA (where the big money is), science seems to serve the military and companies struggling to create products that capture the imagination and pocketbooks of a buying public. For the moment at least, much of science is preoccupied with better serving those who pay for its services.
But someday—hopefully soon—scientists may refocus their considerable talents on the questions that really matter most to people: Where are we? What, exactly, is this place? Is anyone in charge?