The Runaway Species. David Eagleman
tower acted like a giant sword, slicing into the aircraft. A wing was dented, part of the landing gear was torn off, and a piece of the tower penetrated the main cabin. The smoking plane continued out over the Pacific Ocean, where it flew for nearly two hours to use up fuel before heading back for an emergency landing. As the plane touched down, its tires burst and the plane veered off the runway. Twenty-seven passengers were injured.
An Ercon frangible mast
Following this event, the Federal Aviation Administration mandated new safeguards. Engineers were tasked with preventing this from happening again, and their neural networks spawned different strategies. Nowadays, as you taxi for takeoff, the landing lights and radio towers outside the plane may look like solid metal – but they aren’t. They’re frangible, ready to break apart into smaller pieces that won’t harm the plane. The engineer’s brain saw a solid tower, and generated a what-if in which the tower disbanded into pieces.
Breaking up a continuous area revolutionized mobile communication. The first mobile phone systems worked just like television and radio broadcasting: in a given area, there was a single tower transmitting widely in all directions. Reception was great. But while it didn’t matter how many people were watching TV at the same time, it did matter how many people were making calls: only a few dozen could do so simultaneously. Any more than that and the system was overloaded. Dialing at a busy time of day, you were apt to get a busy signal. Engineers at Bell Labs recognized that treating mobile calls like TV wasn’t working. They took an innovative tack: they divided a single coverage area into small “cells,” each of which had its own tower.1 The modern cellphone was born.
Colors represent different broadcast frequencies
The great advantage of this system was that it enabled the same broadcast frequency to be reused in different neighborhoods, so more people could be on their phones at the same time. In a Cubist painting, the partitioning of a continuous area is on view. With cellphones, the idea runs in the background. All we know is that the call didn’t drop.
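The reuse idea can be sketched in miniature. In this toy illustration (a hypothetical example, not Bell Labs' actual planning method), cells are nodes in a graph with edges between neighbors, and each cell greedily takes the lowest-numbered frequency not already used next door — so adjacent cells never clash, while distant cells quietly share.

```python
# Toy sketch of cellular frequency reuse (hypothetical illustration):
# adjacent cells must use different frequencies, but cells far enough
# apart can reuse the same one, multiplying capacity.

def assign_frequencies(neighbors):
    """Greedily give each cell the lowest frequency unused by its neighbors."""
    freq = {}
    for cell in sorted(neighbors):
        taken = {freq[n] for n in neighbors[cell] if n in freq}
        f = 0
        while f in taken:
            f += 1
        freq[cell] = f
    return freq

# A small honeycomb patch: one central cell ringed by six others.
cells = {
    "center": ["n", "ne", "se", "s", "sw", "nw"],
    "n": ["center", "ne", "nw"],
    "ne": ["center", "n", "se"],
    "se": ["center", "ne", "s"],
    "s": ["center", "se", "sw"],
    "sw": ["center", "s", "nw"],
    "nw": ["center", "sw", "n"],
}

plan = assign_frequencies(cells)
# The "n" and "s" cells are not neighbors, so they end up sharing a
# frequency -- the reuse that lets many callers talk at once.
```

The payoff is in the last comment: seven cells get by on fewer than seven frequencies, and the savings grow as the grid does.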
Poet e. e. cummings broke apart words and syntax to create his free-verse poetry. In his poem “dim,” nearly every word is split between lines.
dim
i
nu
tiv
e this park is e
mpty(everyb
ody’s elsewher
e except me 6 e
nglish sparrow
s) a
utumn & t
he rai
n
th
e
raintherain2
An analogous type of breaking was used by biochemist Frederick Sanger in the lab during the 1950s. Scientists were eager to figure out the sequence of amino acids that made up the insulin molecule, but the molecule was so large that the task was unwieldy. Sanger’s solution was to chop insulin molecules into more manageable pieces – and then sequence the shorter segments. Thanks to Sanger’s “jigsaw” method, the building blocks of insulin were finally sequenced. For this work, he won the Nobel Prize in 1958. His technique is still used today to figure out the structure of proteins.
But that was just the beginning. Sanger devised a method of breaking up DNA that enabled him to precisely control how and when strands were broken. The driving force was the same: break the long strands into workable chunks. The simplicity of this method greatly accelerated the gene-sequencing process. It made possible the human genome project, as well as the analysis of hundreds of other organisms. In 1980, Sanger won another Nobel Prize for this work.
By busting up strands of text in creative ways, e. e. cummings created a new way to use language; by breaking up strands of DNA, Sanger created a way to read Nature’s genetic code.
The neural process of breaking also underlies the way we now experience movies. In the earliest days of film, scenes unfolded in real time, exactly as they do in real life. Each scene’s action was shown in one continuous shot. The only edits were the cuts from one scene to another. The man would say urgently into the telephone, “I’ll be right there.” Then he would hang up, find his keys, and exit the door. He would walk down the hallway. He would descend the stairs. He would exit the building, walk down the sidewalk, come to the café, enter the café, and sit down for his encounter.
Pioneers such as Edwin Porter began to link scenes more tightly by shaving off their beginnings and endings. The man would say, “I’ll be right there,” and suddenly the scene would cut to him sitting at the café. Time had been broken, and the audience didn’t think twice about it. As cinema evolved, filmmakers began to reach further in the direction of narrative compression. In the breakfast scene of Citizen Kane, time leaps years ahead every few shots. We see Kane and his wife ageing and their marriage evolving from loving words to silent stares. Directors created montages in which a lengthy train ride or an ingénue’s rise to stardom could be summarized by a few seconds of film; Hollywood studios hired montage specialists whose only job was to edit these sequences. In Rocky IV, training montages of boxer Rocky Balboa and his opponent Ivan Drago consume a full third of the film. No longer did time pass in a movie as it does in life. Breaking time’s flow had become part of the language of cinema.
Breaking continuous action also led to a great innovation in television. In 1963, the Army–Navy football game was broadcast live. Videotape equipment of the time was difficult to control, which made rewinding the tape imprecise. The director of that game’s broadcast, Tony Verna, figured out a way to put audio markers onto the tape – markers that could be heard within the studio, but not on air. This allowed him to covertly cue the start of each play. It took him several dozen tries to get the equipment working properly. Finally, in the fourth quarter, after a key score by Army, Verna rewound the tape to the correct spot and replayed the touchdown on live television. Verna had broken temporal flow and invented instant replay. Because this had never happened before, the television announcer had to provide extra explanation. “This is not live! Ladies and gentlemen, Army did not score again!”
The early days of cinema, characterized by single long takes, were similar to the early days of computing, in which a mainframe could only process one problem at a time. A computer user would create punch cards, get in the queue and, when his turn came, hand the cards to a technician. Then he had to wait a few hours while the numbers were crunched before collecting the results.
An MIT computer scientist named John McCarthy came up with the idea of time sharing: what if, instead of running one algorithm at a time, a computer could switch between multiple ones, like cutting between different shots in a movie? Instead of users waiting their turn, several of them could work on the mainframe simultaneously. Each user would have the impression of owning the computer’s undivided attention when, in fact, it was rapidly toggling between them all. There would be no more waiting in line for a turn; users could sit at a terminal and believe they were having a one-on-one relationship with the computer.
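McCarthy’s toggling can be mimicked in a few lines. In this toy sketch (a hypothetical illustration, not the actual CTSS implementation), each user’s job yields after one small slice of work, and a scheduler cycles through the queue so every job advances a little at a time:

```python
# Toy sketch of time sharing (hypothetical illustration): each job is a
# generator that performs one slice of work per step; the scheduler
# round-robins among them, interleaving everyone's computation.
from collections import deque

def job(name, steps):
    """A user task that needs `steps` slices of compute time."""
    for i in range(steps):
        yield f"{name}: slice {i + 1}/{steps}"

def time_share(jobs):
    """Cycle through the jobs, giving each one slice per turn."""
    log = []
    queue = deque(jobs)
    while queue:
        current = queue.popleft()
        try:
            log.append(next(current))
            queue.append(current)   # job not finished: back of the line
        except StopIteration:
            pass                    # job done: drop it from the rotation
    return log

log = time_share([job("alice", 2), job("bob", 3)])
# The log interleaves alice's and bob's slices -- neither user waits
# for the other to finish before getting a turn.
```

Run fast enough, the toggling disappears: each user sees only their own slices and believes the machine is theirs alone.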
The shift from vacuum tubes to transistors gave McCarthy’s concept a boost, as did the development of user-friendly coding languages. But dividing up the computer’s computations into short micro-segments was still a challenging mechanical feat. McCarthy’s first demonstration didn’t go well: in front of an audience of potential customers, McCarthy’s mainframe ran out of memory and started spewing out error messages.3 Fortunately, the technical hurdles were soon overcome and, within a few years, computer operators were sitting at individual terminals in real-time “conversation” with their mainframes. By covertly breaking up digital processing, McCarthy initiated a revolution in the human-machine interface. Nowadays, as we follow driving directions