18.27 Life in a Computer

The idea of artificial life, or life in a computer, dates back to the middle of the 20th century, to the computing pioneers Alan Turing and John von Neumann. Back then, the state of the art in computing was a behemoth called ENIAC. It was the size of a small house, weighed thirty tons, consumed 150 kilowatts, and needed three full-time technicians to swap out burned-out valves and resistors. But Turing and von Neumann imagined a time when computing would be cheap, easy, and portable. Turing was a troubled genius who killed himself at the age of forty-one. He invented the idea of an algorithm (a logical step-by-step procedure for solving a math problem in a finite number of steps) and the Turing test (the hypothetical scenario in which a computer mimics human responses well enough to be considered intelligent). Von Neumann was a giant of mathematics and science who invented the processing architecture still used in most modern computers.

Starting in the 1970s, researchers in the new field of computer science explored another von Neumann creation: cellular automata. Imagine a grid of squares, each colored black or white, with the two colors representing on and off. Simple rules describe whether each square, or cell, turns black or white in the next generation: any live (black) cell with fewer than two live neighbors dies (turns white) of loneliness, any live cell with more than three live neighbors dies of crowding, any dead cell with exactly three live neighbors comes to life, and any live cell with two or three live neighbors lives on unchanged.
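
These rules are simple enough to capture in a few lines of code. Here is a minimal sketch in Python (assuming NumPy, with a grid that wraps around at its edges for simplicity; the function name is illustrative, not from any particular package):

```python
import numpy as np

def step(grid):
    """Advance a grid of 0s (dead/white) and 1s (live/black) by one generation."""
    # Count the eight neighbors of every cell by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell is live in the next generation if it has exactly three live
    # neighbors, or if it is already live and has exactly two.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)
```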

It sounds too primitive to be interesting, yet John Conway at the University of Cambridge found surprising depth in this black-and-white, pixelated world. Conway took the essential idea of cellular automata, most easily displayed as a one-dimensional horizontal row of cells whose evolution in time is shown vertically, and animated it in two dimensions. His "Game of Life" features patterns that evolve and die, some growing like cancers and others creating endlessly changing patterns; it looks like a hybrid of a laser light show and animated fractals. Conway saw beautiful "gliders" and "guns" that generate endless streams of new patterns. Streams of gliders act like wires that transmit information, and colliding streams can combine to form NOT and AND logic gates, the basis of all computation. It has been proven that the Game of Life, played on an unbounded grid, is as capable as any computer with unlimited memory. The Game of Life isn't visually interesting enough to divert a young teenager for more than a few minutes, but scientists became very excited about the patterns generated in the software. (In any video game, everything is preprogrammed; nothing happens that the programmers didn't anticipate.) First, the patterns demonstrate emergence, complexity that derives from simple rules and a primitive starting point. Second, the patterns are self-organizing. Both attributes are central to biology.
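
The glider is the simplest of these moving patterns. A quick check, reusing the step() function from the sketch above, shows it gliding one cell diagonally every four generations:

```python
import numpy as np

# Five live cells forming a glider, placed near the top-left of an 8x8 grid.
grid = np.zeros((8, 8), dtype=int)
glider = [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
for r, c in glider:
    grid[r, c] = 1

for generation in range(4):
    grid = step(grid)  # step() is defined in the previous sketch

# After four generations the glider reappears one cell down and one to the right.
assert all(grid[r + 1, c + 1] == 1 for r, c in glider)
```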

Stephen Wolfram, the math prodigy who created the Mathematica software package, has taken the interpretation of cellular automata even further. He has shown that cellular automata can be used to calculate transcendental or prime numbers, find quick solutions to differential equations, and generate behavior that is random on small scales yet predictable on large scales. This last attribute is striking because it mirrors the way the unpredictable quantum world maps onto the well-behaved world of macroscopic objects. This world of tiny squares and simple rules can also be used to show that there are axiom systems beyond those of traditional mathematics; in other words, the sum of the knowledge in all math textbooks is just a subset of all possible mathematics. If anyone imagines that these conclusions stem from the particular properties of two-dimensional arrays of cells, Wolfram has an answer for them too. He shows that the same conclusions hold if there is no grid (i.e., if the fixed cells are replaced by a network of connections), if there are more than two dimensions, and if the rules are replaced by constraints. The formalism has almost unlimited scope.
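
Wolfram's best-known examples are one-dimensional "elementary" automata, where each cell is updated from itself and its two neighbors. The sketch below uses rule 30 (chosen here purely for illustration; the text above names no specific rule), whose center column looks random despite the deterministic update:

```python
RULE = 30  # the 8-bit lookup table for (left, self, right), encoded as an integer

def evolve(cells, steps):
    """Return a list of generations of a 1D elementary cellular automaton."""
    rows = [cells]
    for _ in range(steps):
        prev = rows[-1]
        n = len(prev)
        # Each new cell is bit (4*left + 2*self + right) of the rule number.
        rows.append([
            (RULE >> (4 * prev[(i - 1) % n] + 2 * prev[i] + prev[(i + 1) % n])) & 1
            for i in range(n)
        ])
    return rows

# Start from a single live cell and print the triangle of generations.
width = 31
start = [0] * width
start[width // 2] = 1
for row in evolve(start, 15):
    print("".join("#" if c else "." for c in row))
```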

Chris Langton is the pioneer who organized the first international conference on artificial life, at Los Alamos in 1987. He's one of only five resident faculty members at the Santa Fe Institute, a small think tank where physicists, computer scientists, and economists rub shoulders. It was there that Langton articulated the reasons we should take the study and implications of artificial life seriously. He claims that it's artificial only in terms of components, not emergent processes. In other words, he argues that if these artificial components are properly implemented, the processes they support are every bit as genuine as the natural processes they imitate. This is a big claim. Langton takes James Watson's famous statement that "life is digital information" literally. He's saying that if computational elements carry out the same functional role as biomolecules in natural living systems, they will be "alive" in the same way that natural organisms are alive. In other words, it's the process that counts, not the substrate.

One way to think about artificial life is that it's just made of different stuff than the life that evolved on Earth. Simplistically, the human genome is just three billion base pairs written in a four-letter alphabet, information equivalent to an encyclopedia that can fit on a CD. (Although this is just the tip of an iceberg of information coded in the myriad ways that proteins interact with each other and the environment.) By analogy, the brain is just an electrical network of 100 billion neurons and a thousand trillion connections that works at a few kHz. If Moore's Law (the doubling of computer power and data density every 18 months) continues unabated, computers will have this capability by 2020.
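
The arithmetic behind the CD comparison is easy to verify. A quick back-of-the-envelope sketch (the round numbers and the nominal 700 MB CD capacity are approximations):

```python
# Each letter of a 4-letter alphabet carries log2(4) = 2 bits of information,
# so the genome's raw information content is:
base_pairs = 3e9
bits = base_pairs * 2                  # 2 bits per base pair
megabytes = bits / 8 / 1e6             # -> 750 MB, roughly one 700 MB CD
print(f"Genome: {megabytes:.0f} MB")

# Moore's law: capability doubles every 18 months, i.e. grows by a factor of
# 2**(years / 1.5). Over a decade that is about a 100-fold increase.
years = 10
print(f"Growth over {years} years: {2 ** (years / 1.5):.0f}x")
```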