# Renormalization

I have a lot to say about renormalization; if I wait until I’ve read everything I need to know about it, my essay will never be written; I’ll die first; there isn’t enough time. Click this link to read what some experts argue is the why and how of renormalization. Do it after reading my essay, though.

There’s a big problem inside the science of science; there always has been. Experimental facts never match the mathematics of the theories people invent to explain them. The math which people use to express their ideas about the universe always removes the ambiguities that seem to underlie all of reality.

People noticed the problem as soon as they started doing science. The relationship between the diameter of a circle and its circumference was never certain; not when Pythagoras studied it 2,500 years ago, and not now. The ratio between them, π, is the problem; it’s irrational; it’s not a fraction; it’s a number with no end and no repeating pattern—3.14159…forever into infinity.

The diameter of a circle must be multiplied by π to calculate its circumference, and vice-versa. No one can ever know everything about a circle, because the number π is uncertain, undecidable, and in truth unknowable. Long ago people learned to use the fraction 22/7 or, for more accuracy, 355/113. These fractions give the wrong value for π, but they are easy to work with and close enough for engineering problems.
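A few lines of Python show just how close those two classic fractions come:

```python
from math import pi

# The two rational approximations of pi mentioned above.
for num, den in [(22, 7), (355, 113)]:
    approx = num / den
    error = abs(approx - pi)
    print(f"{num}/{den} = {approx:.7f}, off by about {error:.7f}")
```

The second fraction, 355/113, is accurate to better than one part in ten million, which is why it survived so long as an engineering shortcut.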

Fast forward to Isaac Newton, the English astronomer and mathematician, who studied the motion of the planets. Newton published Philosophiæ Naturalis Principia Mathematica in 1687. I have a modern copy in my library. It’s filled with formulas and derivations. Not one of them works to explain the real world—not one.

Newton’s equation about gravity describes the interaction between two objects—the strength of attraction between the Sun and the Earth, for example, and the motion of the Earth that results. The problem is that the Moon and Mars and Venus and many other bodies warp the space-time pool where the Earth and Sun swim. No way exists to write a formula to determine the future of such a system.

In 1887 Heinrich Bruns, and soon afterward Henri Poincaré, proved that such a formula can’t be written. The three-body problem (or any N-body problem, for that matter) has no general solution in a single closed-form equation. Fudge factors have to be figured in. Perturbation theory was proposed and developed. It helped a lot. Space exploration depends on it. It’s not perfect, though. When NASA lands probes on Mars, no one knows exactly where the craft sit on the surface relative to any reference point on Earth.
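No single formula solves the system, but a computer can march it forward one small time-step at a time. Here is a minimal sketch of that idea in Python, in toy units where the gravitational constant times the Sun’s mass equals 1; it is nothing like NASA’s real trajectory codes, just the bare principle:

```python
# Semi-implicit (symplectic) Euler integration of one planet
# orbiting a fixed sun. Toy units: G * M_sun = 1.
def step(pos, vel, dt):
    x, y = pos
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -x / r3, -y / r3              # inverse-square gravity
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

pos, vel = (1.0, 0.0), (0.0, 1.0)          # circular orbit of radius 1
for _ in range(6283):                      # about one period: 2*pi / dt
    pos, vel = step(pos, vel, 0.001)
print(pos)                                 # close to the start, (1, 0)
```

Every step introduces a tiny error; the art of celestial mechanics is keeping those errors bounded over millions of steps, which is exactly what perturbation methods and modern integrators are for.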

Even using the signals from the half-dozen or so global navigation satellite constellations (GPS is the American one) deployed in medium Earth orbit by various countries, it’s not possible to know exactly where anything is. Beet farmers out west combine the satellite systems of two different countries to hone the courses of their tractors and plows.

On a good day farmers can locate a row of beets to within an eighth of an inch. That’s plenty good, but the two GPS systems they depend on are fragile and cost billions per year to maintain. In beet farming, an eighth-inch isn’t perfect, but it’s close enough.

Quantum physics is another frontier of knowledge that presents roadblocks to precision for the mathematically inclined. Physicists have invented more excuses for why they can’t get anything exactly right than probably any other group of scientists. Quantum physics is about a hundred years old, but today the problems seem more insurmountable than ever.

Why? Well, the self-interactions of sub-atomic particles, and their interactions with the swarms of virtual particles that surround them, disrupt the expected agreement between theories and actual experimental results. The mismatches are spectacular. They dwarf the N-body problems of astronomy.

Worse—there is the problem of scales. Electrical forces, for example, are a billion times a billion times a billion times a billion times stronger than gravitational forces at sub-atomic scales. The strength a force appears to have depends on the distance across which it acts. It’s very odd.
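That mismatch of scales is easy to check from the published constants. For two protons, the ratio of electric repulsion to gravitational attraction comes out near 10^36—the “billion times a billion times a billion times a billion” above. The separation between the protons cancels out, because both forces fall off as the square of the distance:

```python
# Ratio of Coulomb repulsion to gravitational attraction for two protons.
# The separation r cancels, since both forces fall off as 1/r^2.
k = 8.988e9        # Coulomb constant, N*m^2/C^2
G = 6.674e-11      # gravitational constant, N*m^2/kg^2
e = 1.602e-19      # proton charge, C
m_p = 1.673e-27    # proton mass, kg

ratio = (k * e * e) / (G * m_p * m_p)
print(f"{ratio:.2e}")   # about 1.2e36
```

For a pair of electrons the ratio is even larger, around 10^42, because electrons are so much lighter.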

Measuring the charge on an electron produces different results depending on the energy of the measurement. Probed at high energies, an electron’s charge appears stronger; at low energies, weaker. So again, how can experimental results lead to theories that are both accurate and predictive? Divergent amplitudes that lead to infinities aren’t helpful.
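In QED this energy dependence has a definite one-loop form: the effective fine-structure constant grows slowly with the momentum scale at which the electron is probed, because the probe penetrates deeper into the screening cloud of virtual pairs. A rough sketch, keeping only the electron loop (the full calculation includes every charged particle):

```python
from math import log, pi

ALPHA_0 = 1 / 137.036      # fine-structure constant at low energy
M_E = 0.000511             # electron mass, in GeV

def alpha(q_gev):
    """One-loop QED running coupling, electron loop only (a sketch)."""
    return ALPHA_0 / (1 - (2 * ALPHA_0 / (3 * pi)) * log(q_gev / M_E))

print(1 / alpha(91.19))    # at the Z-boson mass: roughly 134 in this
                           # sketch; about 128 when all loops are included
```

The measured charge really does drift with energy, exactly as the paragraph above describes; renormalization is the bookkeeping that makes the drift calculable instead of infinite.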

An infinity of scales piles up to produce unacceptable infinities in the mathematics, which erode the predictive usefulness of the equations. Once again, researchers are forced to fabricate fudge factors. Renormalization is the buzzword that describes several popular routes to removing the problems with the numbers.

The folks who developed the theory of quantum electrodynamics (QED), for another example, used perturbation methods to bootstrap their ideas into a useful tool for explanations. Their method produced annoying infinities, which renormalization techniques chased away.

At first physicists felt uncomfortable discarding the infinities that showed up in their equations; and they hated to introduce fudge factors. It felt like cheating. They believed that the poor match between math, theory, and experiment meant that something was wrong; they weren’t understanding the underlying truths they were working so hard to lay bare.

Philosopher Robert Pirsig believed that the number of possible explanations scientists could invent for any phenomenon was, in actual fact, unlimited. Despite all the math and all the convolutions of math, Pirsig believed that something mysterious and intangible like quality or morality guided our explanations of the world. It drove him insane, at least in the years before he wrote his classic Zen and the Art of Motorcycle Maintenance.

The newest generation of scientists aren’t embarrassed by anomalies. They have taught themselves to “shut up and calculate.” The digital somersaults they must perform to validate their work are impossible for average people to understand, much less perform. Researchers determine scales, introduce “cut-offs,” and extract the appropriate physics to make suitable matches to their experimental results. They put the cart before the horse, more often than not.

The complexity of the language scientists use to understand and explain the world of the very small is, for me at least, the most convincing clue that they are missing important pieces of a puzzle, one that may not be solvable by humans, no matter how much IQ any petri-dish of gametes might someday deliver to the brains of future scientists.

It’s possible that the brains of humans, which use language and mathematics to ponder and explain the world, are insufficiently structured to model the complexities of the universe around them. We aren’t hard-wired with enough power to create the algorithms for ultimate understanding.

We are Commodore 64 personal computers (remember them, anyone?) that need upgrades to Sunway TaihuLight or Cray XK7 Titan super-computers to have any chance at all.

The smartest thinkers—people like Nick Bostrom and Pedro Domingos (who wrote The Master Algorithm)—are suggesting that an artificial super-intelligence can be developed and hard-wired with perhaps hundreds or even thousands of levels, each level loaded with trillions of parallel circuits. Such a machine might be able to digest all the statistical meta-data, books, videos, and other information (i.e., the complete library of human knowledge and understanding over all of history) and use it as a platform from which to program itself, following paths to knowledge far beyond the capabilities of the entire population of the earth.

This super-intelligent computer system might discover understandings in days or weeks (who knows for sure?) that all humans working together cooperatively for thousands of years might not have any chance at all to acquire. Of course the risk is that such an intelligence, once unleashed, might enslave the planet. Another downside is that it might not be able to communicate to humans what it has learned, much like a father who is a college math professor trying to teach calculus to the family cat.

The co-founder of Google and Alphabet Inc., Larry Page (Larry graduated in the same high-school class as a member of my family), is rumored to be working to perfect just such an intelligence. He owns part of Tesla Motors, the company led by Elon Musk of SpaceX. Imagine a guy who controls a supercomputer teaming up with a guy who has the rocket-launching power of a country. What are the consequences?

Entrepreneurs don’t like to be regulated. The temptations that will be unleashed by unregulated, unlimited military power and scientific knowledge falling into the hands of two men—even men as nice and personable as Elon and Larry seem to be—could spell for humanity over time an unmitigated… what’s the word I’m looking for?

I heard Elon say that he doesn’t like regulation, but he wants to be regulated. He believes artificial super-intelligence may be civilization-ending. He’s planning to put a colony on Mars to escape its power and ensure human survival.

Is Elon saying that he doesn’t trust himself; that he doesn’t trust his friend, Larry? Are these two men asking us to save the world from themselves? I haven’t heard Larry ask for anything like that, actually. He keeps a low profile, God bless him, even as he watches everything we say and do in cyber-space. Think about it.

We’ve got maybe ten years tops; maybe less. Somebody better think of something fast.

Who could ever imagine that laissez-faire capitalism might someday spawn a technology that enslaves the world? Ayn Rand, perhaps?

We humans need to renormalize our aspirations—our civilization—before we generate infinities of misery that wreck our planet and create a future for humans no one wants.

Billy Lee