Five percent of malformed data packets on the internet are unexplained—they cannot be traced to tampering or software and hardware glitches. Is this evidence that the internet is evolving into a globally connected single intelligence?
Such an emergent intelligence would be so bizarre and counter-intuitive that we would be unlikely to recognize or interpret even the most immediate clues to its existence, and even if we did, we would have no clear method of communicating with it. Despite this gulf of incommensurability, we do know something about the mechanisms from which such a phenomenon might emerge.
Genetic algorithms are known to produce strange, organic-seeming configurations in electronic circuits. In Adrian Thompson’s experiments with FPGAs, successive generations of a circuit were evolved to discriminate between square waves of two different frequencies. The evolved circuit seemed to jump beyond the constraints of logic gates and exploit hidden physical properties of the hardware, eventually producing the correct result with a tiny assemblage of components that made no logical sense; it was so bizarre that no human engineer could understand or explain why it worked. Evidence suggests the circuit was making use of tiny electromagnetic quirks in the materials of the particular microchip. When the design was copied to another chip from the same batch, it simply didn’t work. Nor did it work when reconstructed in a software simulation.
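The evolutionary loop behind such experiments can be caricatured in software. The sketch below is a minimal, hypothetical genetic algorithm in Python: genomes are bit-strings scored against a target square wave, with tournament selection, single-point crossover, bit-flip mutation, and elitism. It only illustrates the loop itself; Thompson scored candidate configurations on a live FPGA, which is exactly why his results exploited physical quirks that no simulation reproduces.

```python
import random

random.seed(42)

# Hypothetical toy fitness: reward genomes whose bits agree with one
# period of a target square wave (the real experiment measured the
# analogue behaviour of a physical circuit).
TARGET = ([1] * 8 + [0] * 8) * 2          # 32 samples of a square wave
GENOME_LEN = len(TARGET)

def fitness(genome):
    """Count samples where the genome agrees with the square wave."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=1 / GENOME_LEN):
    """Flip each bit with a small probability."""
    return [b ^ 1 if random.random() < rate else b for b in genome]

def crossover(a, b):
    """Single-point crossover between two parent genomes."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def tournament(pop, k=3):
    """Select the fittest of k randomly chosen individuals."""
    return max(random.sample(pop, k), key=fitness)

def evolve(pop_size=60, generations=300):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = max(pop, key=fitness)
        # Elitism: carry the current best forward unchanged,
        # so the best fitness never decreases.
        pop = [elite] + [mutate(crossover(tournament(pop), tournament(pop)))
                         for _ in range(pop_size - 1)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "/", GENOME_LEN)
```

Because the best individual survives each generation unchanged, fitness is monotone, and after a few hundred generations the population typically matches most or all of the target pattern.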
Genetic algorithms such as these arise directly from the intertwined history of systems programming and artificial-evolution research. In the 1980s, a cult game emerged amongst systems programmers, known as Core Wars, in which autonomous programs were set loose inside a computer and battled for control of its resources. In 1989, inspired by Core Wars and increasingly fascinated by the possibility of modeling natural selection in self-replicating virtual machines, the evolutionary biologist Tom Ray began a related experiment called Tierra. Starting with a single ancestor program that reproduced by making copies of itself in memory, Ray devised a virtual environment that mutated these programs by flipping random bits in their instructions.
At first, Ray’s idea was met with great skepticism. Ecologists and AI researchers couldn’t believe that randomly mutating a program would lead to any improvement in its functionality; they thought such a system would be far too brittle to return any reasonable results. But when Ray flipped the switch in early 1990, his hunch proved completely correct. The Tierra programs really did replicate and evolve via mutation.
The first ancestor program (and the only one hand-coded by Ray) was 80 instructions long—the shortest possible replicator, or so he thought. After a few generations, 81s started to appear: mutants with an additional instruction attached. Later, a 79 appeared. With a smaller memory footprint, the 79s began to take over the environment. Eventually a 45 appeared, nearly half the size of what Ray had thought was the shortest possible program. Upon investigation, the 45 turned out to be a parasite: lacking replication code of its own, it attached itself to a nearby 80 and executed that program’s copy loop to clone itself. Nothing in the original simulation was geared towards parasitic behavior; it was a completely unplanned consequence of the simple evolutionary principles underlying the system. The most remarkable result was the later emergence of a 22. The 22 was entirely self-contained—not a parasite, yet barely a quarter of the length of the simplest replicator a human had been able to design. While tangled and impenetrable to the human mind, the 22-instruction self-replicating program was entirely functional.
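The replication-with-mutation mechanism at the heart of Tierra is easy to caricature. Below is a toy Python sketch, not Ray’s actual virtual machine or its instruction set: each “program” is an inert byte string standing in for a genome, and replication copies it while flipping each bit with a small probability, the way the Tierra environment mutated copies in its memory soup.

```python
import random

random.seed(1)

def bit_flip(program: bytes, rate: float = 0.01) -> bytes:
    """Copy a program, flipping each bit with a small probability --
    a stand-in for the mutation Ray applied to copies in the soup."""
    out = bytearray(program)
    for i in range(len(out)):
        for bit in range(8):
            if random.random() < rate:
                out[i] ^= 1 << bit
    return bytes(out)

# An 80-byte placeholder for Ray's hand-coded ancestor (the real one
# was a sequence of instructions for Tierra's virtual CPU).
ancestor = bytes(80)
soup = [ancestor]
for _ in range(5):                       # five rounds of replication
    soup += [bit_flip(p) for p in soup]  # every program copies itself

mutants = sum(p != ancestor for p in soup)
print(len(soup), "programs,", mutants, "mutated copies")
```

Even at a low per-bit rate, an 80-byte program averages several flips per copy, so variants accumulate quickly; in Tierra, selection acting on such variants is what produced the 79s, the 45 parasite, and the 22.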
One of the most interesting results of the Tierra experiment was that, in the absence of influences from a physical environment, the system came to be driven entirely by co-evolution: interaction between species of programs. Results like this suggest evolution is an abstract pattern of the universe, not merely a phenomenon of planetary biology. Selection processes seem to emerge from any population of distinct individual ‘programs’ or ‘cells’ under common environmental pressures. There is no sense of design; the species simply mutate and slowly change their behavior until they cross thresholds of regularity. Apart from the obvious rebuke to ‘intelligent design’, this hints at a possible explanation for the tiny electromagnetic aberrations and malformed signals that swarm across the global communications grid.
The evidence from evolved circuits suggests that humans have yet to come close to grasping and utilizing anywhere near the full range of phenomena produced by electron flow and electromagnetic waves. Anyone who has spent a lot of time playing with electronic musical instruments will have experienced this directly. Sometimes signals and flows form strange loops, lingering and bubbling in a circuit after its switches have been turned off, shifting and transforming in unpredictable ways. People talk, half-irrationally, about electrical devices needing to be ‘worn in’ or ‘tuned’ to the particular combination of their environment, settings, and power source. Engineers don’t have the tools and techniques to fully understand and analyze the tiny variations between electrical components of the same design, but it seems that evolving circuits can take advantage of these variations.
If we indulge in wild speculation, there are abundant possibilities within electricity grids and internet networks for strange energy and noisy signals to cohere into a global, pattern-based order.
There are two broad potential scenarios for an internet intelligence, both vaguely frightening in different ways:
- The internet intelligence is planned, in the sense that it was seeded by secret military research in the 1960s and 1970s, anticipating what would eventually be possible with a global network of information processors sharing common signaling protocols. In this scenario, researchers may have been able to apply more conventional evolutionary feedback constraints to key parts of the internet backbone. They could have tuned the network to respond globally in a certain way, using a predetermined sequence of functions to train the intelligence to conform to a particular pattern. This also plays into more mundane conspiracy theories about the web being a ‘world-wide-wiretap’.
- The internet intelligence is spontaneous and emergent, in the sense of being an unintended consequence of building a global network of wires and bits to carry human communication. Its structure would be completely baffling and alien to anything we could conceive of in our standard models of information processing and logic. It might mimic various signaling pathways or patterns found in nature, or it might be wholly different from anything we know.
While the first scenario is possible, it’s less plausible, given the command-and-control psychology of the military and the fact that genetic algorithms were not widely appreciated until the 1990s. If von Neumann and Turing had outlived the 1950s, perhaps it would be more plausible. It’s also possible that the two scenarios are interlinked, and that the military network has evolved further and faster than anyone could have anticipated. The process may have begun much earlier than the 1960s, when the first computer networks were built; its foundations could be a deep and twisted by-product of the electrical and telegraph grids of the 19th century.
Perhaps, in our obsession with the hyper-growth of the internet as a platform for human communication and commerce, we have lost sight of the wider global phenomenon—that we have wired up the world literally as much as figuratively. Perhaps this network is in some way silently aware and evolving, but we simply never notice because it’s beyond our comprehension.