After reading Eric Schlosser’s Command and Control, an exploration of nuclear safety and implementation, it becomes impossible to think of a nuclear weapon as a singular thing. These bombs live in a complicated system, a network, of people, technologies, and politics which manages them and determines how they’re built, tested, stored, and activated. One cannot truly discuss a nuclear weapon without understanding these complexities, for the bomb is inseparable from this network. It is such a complicated device to design and use that it can only be born and managed within a network of support. Schlosser’s work is a fantastic introduction and case study for actor-network theory.

It’s striking that nearly[1] all the threatening accidents detailed in the book aren’t due to faults within the bomb itself. The accidents are the result of the uncontrollable, complex systems which are necessary to manage the bombs. The causes, taken on their own, are almost always mundane: a pilot instinctively grabs a release lever to steady himself during turbulence, a socket wrench loses its socket, an aircraft’s air conditioning stops working. One time a technician loaded the wrong magnetic tape and triggered a simulation of a massive Soviet attack:

As the computer screens at NORAD filled with Soviet missiles, a Threat Assessment Conference was called. Although the pattern of the attack seemed to fit with the Pentagon’s assumptions about Soviet war plans, its timing made little sense. Tensions between the superpowers weren’t particularly high, and nothing in the news seemed to warrant a “bolt from the blue” attack on the United States. Duty officers at NORAD contacted the radar and ground stations whose sensors were relaying information about the launches. None of them had detected signs of any missiles. The NORAD computers seemed to be providing an erroneous—but highly realistic—account of a Soviet surprise attack.

As a precaution, the Klaxons were sounded at SAC bases nationwide. Bomber crews ran to their planes, and missile crews were put on heightened alert. Fighter-interceptors took off to look for signs of a Soviet attack. The National Emergency Airborne Command Post left Andrews Air Force Base without President Carter on board. And air traffic controllers throughout the country prepared to clear America’s airspace for military flights, warning every commercial airliner that it might soon have to land.

As the minutes passed without the arrival of Soviet warheads, it became clear that the United States wasn’t under attack. The cause of the false alarm was soon discovered. A technician had put the wrong tape into one of NORAD’s computers. The tape was part of a training exercise—a war game that simulated a Soviet attack on the United States. The computer had transmitted realistic details of the war game to SAC headquarters, the Pentagon, and Site R.

This genre of accident – the computer error that threatens to launch a giant volley of weapons, rather than the failure of a single warhead – should be especially frightening to anyone who’s ever managed a website, server, or any complex code base. And it happened several times:

At about two thirty in the morning on June 3, 1980, Zbigniew Brzezinski, the president’s national security adviser, was awakened by a phone call from a staff member, General William E. Odom. Soviet submarines have launched 220 missiles at the United States, Odom said. This time a surprise attack wasn’t implausible. The Soviet Union had recently invaded Afghanistan, confirming every brutal stereotype promoted by the Committee on the Present Danger. The United States was leading a boycott of the upcoming Moscow Olympics, and relations between the two superpowers were at their lowest point since the Cuban Missile Crisis. Brzezinski told Odom to call him back with confirmation of the Soviet attack and its intended targets. The United States would have to retaliate immediately; once the details of the attack were clear, Brzezinski would notify the president. Odom called back and said that 2,200 missiles were heading toward the United States— almost every long-range missile in the Soviet arsenal. As Brzezinski prepared to phone the White House, Odom called again. The computers at NORAD said that Soviet missiles had been launched, but the early-warning radars and satellites hadn’t detected any. It was a false alarm. Brzezinski had allowed his wife to sleep through the whole episode, preferring that she not be awake when the warheads struck Washington.

SAC bomber crews had run to their planes and started the engines. Missile crews had been told to open their safes. The airborne command post of the Pacific Command had taken off. And then the duty officer at the Pentagon’s National Military Command Center ended the Threat Assessment Conference, confident that no Soviet missiles had been launched. Once again, NORAD’s computers and its early-warning sensors were saying different things. The problem was clearly in one of the computers, but it would be hard to find. A few days later NORAD computers warned SAC headquarters and the Pentagon for a third time that the United States was being attacked. Klaxons sounded, bomber crews ran to their planes— and another Threat Assessment Conference declared another false alarm.

This time technicians found the problem: a defective computer chip in a communications device. NORAD had dedicated lines that connected the computers inside Cheyenne Mountain to their counterparts at SAC headquarters, the Pentagon, and Site R. Day and night, NORAD sent test messages to ensure that those lines were working. The test message was a warning of a missile attack—with zeros always inserted in the space showing the number of missiles that had been launched. The faulty computer chip had randomly put the number 2 in that space, suggesting that 2 missiles, 220 missiles, or 2,200 missiles had been launched. The defective chip was replaced, at a cost of forty-six cents. And a new test message was written for NORAD’s dedicated lines. It did not mention any missiles.
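
The design flaw is one any programmer will recognize: the test traffic shared its format with live warnings, so a single corrupted digit was enough to turn a routine heartbeat into a report of inbound missiles. The sketch below is only an illustration of that structure, in Python rather than anything NORAD actually ran; the message format, field names, and corruption model are all invented for the example.

```python
# Illustrative sketch only: the real NORAD message format is not public, and the
# field names and corruption model here are invented. The structural point stands,
# though: a test message that reuses the live-warning format is one flipped digit
# away from being indistinguishable from the real thing.
import random


def build_test_message() -> str:
    # Test traffic reuses the attack-warning format, with the missile
    # count zeroed out, mirroring the pre-1980 practice described above.
    return "WARNING|MISSILES_INBOUND=0000"


def flaky_transmit(message: str, corruption_rate: float = 0.001) -> str:
    # Model the defective chip: occasionally replace a digit with a "2".
    chars = list(message)
    for i, c in enumerate(chars):
        if c.isdigit() and random.random() < corruption_rate:
            chars[i] = "2"
    return "".join(chars)


def receiver_interprets(message: str) -> int:
    # The receiving end has no way to distinguish test traffic from a real
    # warning; it simply parses the count field.
    return int(message.split("MISSILES_INBOUND=")[1])


if __name__ == "__main__":
    random.seed(42)
    for _ in range(10_000):
        missiles = receiver_interprets(flaky_transmit(build_test_message()))
        if missiles > 0:
            print(f"Threat Assessment Conference convened: {missiles} missiles reported")
            break
```

The eventual fix reads like good engineering practice today as well: make the test message structurally incapable of being parsed as a real warning, rather than relying on a zero sitting in the right field.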

The effects of mundane accidents are ridiculously amplified when they occur inside a network that manages non-mundane things:

After studying a wide range of “trivial events in nontrivial systems,” Perrow concluded that human error wasn’t responsible for these accidents. The real problem lay deeply embedded within the technological systems, and it was impossible to solve: “Our ability to organize does not match the inherent hazards of some of our organized activities.” What appeared to be the rare exception, an anomaly, a one-in-a-million accident, was actually to be expected. It was normal.

The complexity of such a system was bound to bring surprises. “No one dreamed that when X failed, Y would also be out of order,” Perrow gave as an example, “and the two failures would interact so as to both start a fire and silence the fire alarm.”

Such effects are well known to those who study complexity or knowledge practices. Upon finishing Command and Control I immediately went to the bookshelf and pulled down one of these texts, John Law and Annemarie Mol’s Complexities. From the introduction:

The process of scaling up poses many problems. Large-scale technologies usually grow out of laboratory experiments, but the process of translation is tricky because laboratory experiments are simplificatory devices: they seek to tame the many erratically changing variables that exist in the wild world, keeping some stable and simply excluding others from the argument. This often works well in the laboratory: if one does an experiment in a test tube, it is not unreasonable to assume that the air in the lab will absorb any heat that is produced. Calculation is greatly simplified by choosing to neglect a variable such as “heat.” However, it works less well when what was confined to a test tube is scaled up to become a power plant. What happens now to all that excess heat? Where does it go? And where do radioactive waste products go?

So there is scaling, and there are unpredictabilities, erratic forms of behavior. These do not fit the schemes of most sciences very well either because the latter prefer to treat with only a few variables, not too many. The problem is that what was not predictable tends to occur anyway. So how should this be handled?

The answer – one answer – is that such chaotic events are tamed by theories of chance. In being reduced to a probability and framed as a risk they are turned into something that, however erratic, is also calculable. The risk of an explosion in the factory on the edge of your town (an explosion that will take your town with it) is, say, 0.000000003 percent per annum. Now go calculate whether this is a good enough reason to be anxious!

This sort of reductionism via probability is all over Schlosser’s book. Bomb safety is measured and justified by standards expressed as odds:

For example, it proposed that the odds of a hydrogen bomb exploding accidentally— from all causes, while in storage, during the entire life of the weapon— should be one in ten million.
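
It’s worth pausing on what a number like that means at the scale of an arsenal rather than a single weapon. The arithmetic below is my own back-of-the-envelope reading, not Schlosser’s: it treats the odds as applying per weapon over its service life, assumes weapons fail independently, and picks a stockpile of thirty thousand purely for illustration (roughly the order of the U.S. arsenal at its Cold War peak).

```python
# Back-of-the-envelope only; the per-weapon odds come from the proposed standard
# quoted above, while the stockpile size and independence assumption are mine.
p_per_weapon = 1e-7    # proposed odds of an accidental detonation, per weapon, per lifetime
stockpile = 30_000     # assumed number of weapons fielded at once (illustrative)

# Chance that at least one weapon in the stockpile detonates accidentally
# over its service life, treating the weapons as independent:
p_any = 1 - (1 - p_per_weapon) ** stockpile
print(f"P(at least one accidental detonation) ~ {p_any:.4%}")  # roughly 0.3%
```

Under those assumptions, a per-weapon figure that sounds vanishingly small becomes a few-in-a-thousand chance of an accidental detonation somewhere in the arsenal, which is exactly the kind of network-level effect the book keeps returning to.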

These odds, derived in laboratories, were used to allocate money for bomb safety. The factors which cannot reasonably be measured in a laboratory remain a problem. The mundane accidents of the messy world, the very things excluded from the lab so that variables can be isolated, will always affect the network. Hence, as Schlosser writes:

The probabilities remained unknown. What were the odds of a screwdriver, used to repair an alarm system, launching the warhead off a missile, the odds of a rubber seat cushion bringing down a B-52?

Command and Control is strongly recommended.


[1] The only issues I can recall that were contained solely within the weapons are an unexplained noise emanating from a thermonuclear warhead (which never posed any danger) and strips of boron disintegrating within the cores of a line of weapons, which rendered them useless. Neither of these, oddly, was dangerous (at least relative to any of the other mishaps).