  When Congress passed the 1996 budget, magnetic fusion got about $240 million. It did not take long for things to unravel completely.

  In the meantime, the projected costs for ITER were skyrocketing, and scientists raised new doubts about whether it would achieve ignition at all. Despite the rosy picture painted by the design team, some physicists predicted that new instabilities would cool the plasma faster than expected, meaning ITER would fail, just as generations of fusion machines had failed before it. And if ITER was going to fail to achieve ignition and sustained burn, some physicists began to argue, domestic devices could fail just as well at half the price. The American scientists (as well as their Japanese counterparts, who were also cash-strapped) started talking about scaling ITER back, making it a less ambitious experiment at a lower cost. ITER-Lite, as the plan was known, would cost only $5 billion. However, ITER-Lite would be unable to achieve ignition and sustained burn. It would be just another incremental improvement on existing devices.

  Though ITER-Lite was cheaper, it would defeat the main purpose of pooling four countries’ resources. No longer would the partners be leapfrogging over what their domestic programs had been able to accomplish on their own. ITER-Lite would not be a great advance over previous designs. It would just be a bigger, more expensive version of what everyone had already built.

  In late 1997, Japan asked for a three-year delay in construction. It was a terrible sign, and the designers scrambled to bring down ITER’s costs. Physicists and engineers proposed various versions of ITER-Lite, but without the promise of ignition and sustained burn the troubled project was doomed. The United States decided it wanted out.

  In 1998, the House Appropriations Committee noted angrily that “after ten years and a U.S. contribution of $345 million, the partnership has yet to select a site” for ITER, and slashed all funding for the project. (They even questioned whether a tokamak was the best way to achieve fusion energy.) In July, the United States allowed the ITER agreement to expire, refusing to sign an extension that the other parties had signed; in October, the U.S. pulled its scientists out of the ITER work site in Germany. ITER was dead, at least for the United States.

  When ITER died, America’s dream of fusion energy was officially deferred. Since the inception of the magnetic fusion effort in the United States, the government had considered it an “energy program”—Congress funded it in hopes of generating energy in the not-too-distant future. As ITER entered its death throes, the Office of Management and Budget changed magnetic fusion research into a “science program.” This meant that the program’s funding was no longer officially tied to the goal of building a fusion power plant. It was just pure research, science for science’s sake. Consequently, it became a lower priority for Congress. An energy program was easy to drum up support for, but pure science was always iffier.

  By the turn of the millennium, magnetic fusion was but a shadow of what it had been in the 1980s. The U.S. magnetic fusion budget stabilized at approximately $240 million, which was worth less every year as inflation nibbled away at the value of the dollar. The golden age of magnetic fusion was over in America.

  Scientists in Europe, Russia, and Japan struggled to keep the ITER project alive without the United States’ participation. They quickly decided that ITER, as originally envisioned, would be impossible to build. The three parties settled upon an ITER-Lite design. Gone was hope of ignition and sustained burn. Gone was the hope of a great leap toward fusion energy. And without the United States, even a drastically reduced ITER would be decades away.

  In the meantime, fusion scientists had to make do with their increasingly obsolete tokamaks. They did their best to put a positive spin on a bad situation. Even as the original plans for ITER were dying, European and Japanese researchers claimed they had finally achieved the long-sought goal of breakeven. It was not as impressive as ignition and sustained burn, but if true, it meant scientists had broken the fifty-year-old jinx and gotten more energy out of a controlled fusion reaction than they had put in.

  In August 1996 and again in June 1998, researchers at Japan’s JT-60 tokamak insisted that they had achieved “breakeven plasma conditions” and claimed their tokamak was producing 5 watts for every 4 that it consumed. A closer look showed that this wasn’t quite what happened. JT-60 was using a plasma made of deuterium, so the fusion reactions in the plasma were entirely between deuterium and deuterium. These are less energetic than deuterium-tritium reactions. If you really want to get a magnetic fusion reactor producing lots of energy, you will use a mixture of deuterium and tritium as the fuel rather than pure deuterium. JT-60’s “breakeven plasma conditions” did not really mean that the tokamak had reached breakeven. Instead, the JT-60 had reached pressures, temperatures, and confinement times that, according to calculations, would have meant breakeven if researchers had used a deuterium-tritium mix rather than pure deuterium as fuel. Every time JT-60 reached its “breakeven conditions,” it was still consuming much more energy than it produced. So much for Japan’s claim. What about Europe’s?
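
  To make the distinction concrete, here is a rough sketch using standard textbook reaction energies (numbers the text itself does not quote). Define the gain of a fusion device as

\[ Q \equiv \frac{P_{\text{fusion}}}{P_{\text{input}}}, \]

so that true breakeven means Q = 1. A deuterium-tritium reaction,

\[ \mathrm{D} + \mathrm{T} \rightarrow {}^{4}\mathrm{He} + n + 17.6\ \mathrm{MeV}, \]

releases several times the energy of an average deuterium-deuterium reaction (roughly 3.6 MeV), and at reactor temperatures D-T fuel also reacts far more readily. JT-60’s claimed 5 watts out for every 4 in, a gain of about 1.25, was thus an extrapolation to D-T fuel; the gain the machine actually measured with its pure-deuterium plasma was far below 1.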

  JET, the big European tokamak, actually used deuterium-tritium mixtures in attempts to achieve breakeven. In September 1997, scientists loaded such a mixture into the reactor, heated it, compressed it, and . . . and what? What happened? It depends on whom you ask.

  Some people insist that JET reached breakeven. Britain’s Parliamentary Office of Science and Technology, for instance, states blandly in a pamphlet that “Breakeven was demonstrated at the JET experiment in the UK in 1997.” This is a myth, just like the myth about JT-60. In truth, JET got 6 watts out for every 10 it put in. It was a record, and a remarkable achievement, but a net loss of 40 percent of the energy is not the hallmark of a great power plant. Scientists would claim—after twiddling with the definition of the energy put into the system—that the loss was as little as 10 percent. This might be so, but it still wasn’t breakeven; JET was losing energy, not making it.
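
  Put in the same gain terms (this is just a restatement of the numbers above, not an independent measurement):

\[ Q_{\mathrm{JET}} = \frac{6\ \mathrm{watts\ out}}{10\ \mathrm{watts\ in}} = 0.6, \]

a net loss of 40 percent. Even the friendlier accounting, with the input energy redefined so that the loss shrank to 10 percent, amounts to a gain of about 0.9, still short of the Q = 1 that defines breakeven.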

  National magnetic fusion programs are unable to achieve breakeven, let alone ignition and sustained burn. The national tokamaks like JET and JT-60 are reduced to setting lesser records: the highest temperature, the longest confinement, the highest pressure. However, these records are all but meaningless. Without getting beyond breakeven, the dream of a fusion reactor will remain out of reach. All the glowing press releases in the world won’t turn an energy-loss machine into a working fusion reactor.

  Laser fusion scientists didn’t suffer nearly as much in the 1990s as their magnetic fusion counterparts. As magnetic fusion budgets sank, laser fusion ones rose, because laser fusion scientists had a secret weapon: nuclear bombs.

  Publicly, laser fusion scientists billed their experiments as a way to free the world from its energy problems. What John Emmett, a Livermore laser scientist, declared to Time magazine in 1985 was typical: “Once we crack the problem of fusion, we have an assured source of energy for as long as you want to think about it. It will cease to be a reason for war or an influence on foreign affairs.” Emmett’s optimistic vision was no different from what fusion researchers had been promising since the 1950s. Just like their magnetic fusion counterparts, laser fusion scientists had promised, again and again, unlimited, clean energy. Just like their magnetic fusion counterparts, laser fusion scientists had been disappointed again and again as instabilities and other problems demolished their overly optimistic predictions. Shiva had failed, and by the 1990s, so had Nova. Inertial confinement fusion’s story was paralleling magnetic fusion’s, down to the shattered dreams and broken promises.

  Less loudly, though, scientists were pushing laser fusion for a completely different reason. They weren’t really going after unlimited energy: they were pursuing laser fusion as a matter of national security. Without a working laser fusion facility, they argued, America’s nuclear weapons arsenal would be in grave danger. Congress was sold. Even as magnetic fusion scientists were wringing their hands in the mid-1990s, their laser fusion brethren were rolling in money—thanks, in part, to the danger posed by the test ban. On September 23, 1992, the United States detonated its last nuclear bomb, the Divider shot of Operation Julin, before ceasing testing altogether. Throughout the 1990s, the world’s nuclear powers were negotiating a permanent ban on nuclear testing. Though a few nations conducted a small number of such tests while the discussions went on, the United States held firm. No nuclear explosions.

  Of course, nuclear testing was the way weapons designers evaluated their new warheads; no nuclear testing means no new types of nuclear warheads—more or less. There’s some debate about whether the United States could manufacture slight variants on old weapons designs without resorting to underground detonations. However, it is certain that any sizable design change wouldn’t be considered reliable until it was subjected to a full-scale nuclear test.

  It’s not a huge problem if the United States can’t design new nuclear weapons; the ones on hand are sufficient for national security.76 Instead, the test ban presented a more insidious problem. Without periodic nuclear testing, weaponeers argued, they could not be certain that the weapons in the nuclear stockpile would work. Nuclear bombs, like any other machines, decay over time. Their parts age and deteriorate. Since nuclear weapons use exotic radioactive materials, which undergo nuclear decay as well as physical decay, engineers don’t have a firm understanding of how such a device ages. An engineer can mothball a tank or an airplane and be certain that it will still function fifty or a hundred years from now. Not so for nuclear warheads. So, to assure the reliability of the nuclear stockpile, engineers would take aged weapons and detonate them to see how well they worked. With a test ban, though, scientists could no longer do this. Many weaponeers insisted there was no way to guarantee that the weapons in the nuclear stockpile would still work in ten or twenty or thirty years.

  So what was the government to do? Enter the Science-Based Stockpile Stewardship program. Weapons scientists assured federal officials that with a set of high-tech experimental facilities they could ensure the reliability of the nation’s arsenal. Some facilities would concentrate on the chemical explosives that set off the devices. Some would study how elements like plutonium and uranium respond to shocks. But the jewel in stockpile stewardship’s crown would be NIF, the National Ignition Facility at the Lawrence Livermore National Laboratory.

  NIF is the successor to Nova. According to its designers, NIF, ten times more powerful than its predecessor, will zap a pellet of deuterium and tritium with 192 laser beams, pouring enough energy into the pellet to achieve breakeven. The pellet will also ignite and sustain what is called propagating burn: at its center, the fuel will begin fusing, and the energy from those fusions will heat the surrounding fuel and induce nearby nuclei to fuse. And of course, the fusion will produce more energy than the lasers put in. This is the same promise the designers made with Nova. And Shiva. But while Shiva cost $25 million and Nova cost about $200 million, in the early 1990s NIF was projected to cost more than $600 million. That number had increased to more than $1 billion by the time the facility’s construction started in 1997. That was just the beginning.

  As late as June 1999, NIF managers swore to the Department of Energy that everything was peachy, that the project, which was scheduled to be finished in 2003, was on budget and on schedule. This was a lie. Within a few months, officials at Livermore had to admit to enormous problems and cost overruns. Some of the issues were simple oversights. The laser facility, for instance, had problems with dust settling on the laser glass. Dust motes would scatter the laser light and burst into flame, etching the glass. To fix this problem, NIF engineers had to start assembling laser components in clean rooms and ferry them around in robotic trucks with superclean interiors—at enormous cost. That was just one of the issues that had to be solved with piles of money.

  It was as if everything that could possibly go wrong with NIF was, in fact, going wrong. Some of the issues were minor annoyances: a brief delay in construction followed when workers found mammoth bones on the NIF site. Some were major: the glass supplier was having difficulty producing glass pure enough to use in the laser, forcing a revamp of the entire manufacturing process. Some were just bizarre. The head of NIF, Michael Campbell, was forced to resign in 1999 when officials discovered he had lied about earning a PhD from Princeton University.

  Some problems were unexpected but easy to deal with, such as an issue with the capacitors, the devices that store the energy used to pump the laser glass. The capacitors were packed so full of energy that occasionally one would spontaneously vaporize, exploding and spraying shrapnel around the room. Engineers solved the problem by putting a steel shield around the capacitors; when one exploded, flapper doors would open and the debris would spray toward the floor.

  Some problems were more complex. For example, scientists had long since gone from infrared to green to ultraviolet light to reduce the disproportionate heating of electrons compared with nuclei, but ultraviolet light at such high intensities was extremely nasty to optics. It would pit anything it came into contact with; the laser damaged itself every time it fired. The solution was less than perfect: at NIF’s full power, the optics will have to be replaced every fifty to one hundred shots or so, an extremely expensive prospect.

  Furthermore, scientists were still struggling to deal with the Rayleigh-Taylor instability—the one that turns small imperfections on the surface of the fuel pellet into large mountains and deep valleys, destroying any hope of compressing the fuel to the point of ignition. Not only did scientists have to zap the target very carefully—so that the light shining on the target had the same intensity on every part of the pellet—they also had to ensure that the pellet was extremely smooth. Even tiny imperfections on its surface would quickly grow and disrupt the collapsing plasma. To have any hope of achieving ignition, NIF’s target pellets—about a millimeter in size—cannot have bumps more than fifty nanometers high. It’s a tough task to manufacture such an object and fill it with fuel. Plastics, such as polystyrene, are relatively easy to produce with the required smoothness, but they don’t implode very well when struck with light. Beryllium metal implodes nicely, but it’s hard to make a metal sphere with the required smoothness. It was a really difficult problem, and it wasn’t getting any easier as NIF scientists worked on it.
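
  For a sense of scale, here is some simple arithmetic on the numbers above, together with the standard textbook growth law, which the text does not spell out. A fifty-nanometer bump on a one-millimeter pellet is a relative roughness of

\[ \frac{50\ \mathrm{nm}}{1\ \mathrm{mm}} = 5 \times 10^{-5}, \]

about one part in twenty thousand. In the classical linear theory, a small ripple of wavenumber k on an interface accelerating at rate g grows exponentially,

\[ a(t) \approx a_0 \, e^{\gamma t}, \qquad \gamma = \sqrt{A k g}, \]

where A is the Atwood number measuring the density contrast across the interface. That exponential is the whole problem: over the few nanoseconds of an implosion, even a one-part-in-twenty-thousand ripple can grow into the mountains and valleys that wreck the compression.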

  The cost of the star-crossed project ballooned from about $1 billion to more than $4 billion; the completion date slipped from 2003 to 2008. Worst of all, even if everything worked perfectly, even if NIF’s lasers delivered the right power on target, nobody knew whether the pellet would ignite and burn. As early as the mid-1990s, outside reviewers, such as the JASON panel of scientists, warned that it was quite unlikely that NIF would achieve breakeven as easily as advertised. The prospects for breakeven grew worse as time passed. By 2000, NIF officials, if pressed, might say that the laser had a fifty-fifty shot of achieving ignition. NIF critics, on the other hand, were much less kind. “From my point of view, the chance that [NIF] reaches ignition is zero,” said Leo Mascheroni, one of NIF’s main detractors. “Not 1%. Those who say 5% are just being generous to be polite.” The truth is probably somewhere in between, but nobody will know for sure until NIF starts doing full-scale experiments with all 192 beams.

  If NIF fails to ignite its pellets, and if it fails to reach breakeven, laser fusion experiments will still be absorbing energy rather than producing it; the dream of fusion energy will be just as far away as before.77 Furthermore, analysts argued, NIF wouldn’t be terribly useful for stockpile stewardship without achieving breakeven. And NIF’s contribution to stockpile stewardship is crucial for... what, exactly? It’s hard to say for sure. Assume that NIF achieves ignition. For a brief moment, it compresses, confines, and heats a plasma so that it fuses, the fusion reaction spreads, and it produces more energy than it consumes. How does that translate into assuring the integrity of America’s nuclear stockpile?

  At first glance, it is not obvious how it would contribute at all. Most of the problems with aging weapons involve the decay of the plutonium “pits” that start the reaction going. Will the pits work? Are they safe? Can you remanufacture old pits or must you rebuild them from scratch? These issues are relevant only to a bomb’s primary stage, the stage powered by fission, not fusion (except for the slight boost given by the injection of a little fusion fuel at the center of the bomb). The fusion happens in the bomb’s secondary stage, and there doesn’t seem to be nearly as much concern about aging problems with a bomb’s secondary. If the primary is where most of the problems are, what good does it do to study fusion reactions at NIF? NIF’s results would seem to apply mostly to the secondary, not the primary.

  Since so much about weapons work is classified, it is hard to see precisely what problems NIF is intended to solve. But some of the people in the know say that NIF has a point. The “JASONs,” for example, argue that NIF does help maintain the stockpile—but not right away. NIF will contribute to science-based stockpile stewardship, the panel wrote in 1996, “but its contribution is almost exclusively to the long-term tasks, not to immediate needs associated with short-term tasks.” That is, NIF will help eventually, but it is not terribly useful in the short term.

  What are those long-term tasks? Two years earlier, the JASON panel had been a little more explicit. NIF would help a bit with understanding what happens when tritium in a primary’s booster decays. (However, since tritium has a half-life of only twelve years, it stands to reason that weapons designers must periodically replace old tritium in weapons with fresh tritium. This is probably routine by now.) NIF will also help scientists understand the underlying physics and “benchmark” the computer codes—like LASNEX—that simulate imploding and fusing plasma. (But why is this important if you are not designing new weapons? The ones in the stockpile presumably work just fine, so you don’t need a finer understanding of plasma physics to maintain them.) The JASON members have access to classified information, but even so, their justifications for NIF seem a little thin—at first. And then JASON lists one more contribution that NIF makes to stockpile stewardship: “NIF will contribute to training and retaining expertise in weapons science and engineering, thereby permitting responsible stewardship without further underground tests.” That’s the main reason for NIF.