During the Cold War, when many underground nuclear tests were measured in kilotons to megatons, data from each blast was collected by hundreds of seismic monitoring stations around the world and analyzed by scientists who decoded the “boom.” Today, no one is conducting tests that large. That is good news, but it makes small nuclear tests much harder to find and to understand: we’re lucky if we get data from more than a dozen monitoring stations.

So how can we get accurate information about a test from so little data?

The simple answer is: We need better physics and computer models to compensate for the lack of data. But that’s not a simple answer at all. Making those models valid means pushing mathematical and experimental boundaries, something that can only be done with exceptional computing, analytics and brainpower. And the models need to predict every type of sensor data an underground explosion produces.

When a country tests a nuclear weapon underground, we are confronted with a host of uncertainties in our models, quantities tied to the physical world that can’t be measured directly: How did the explosion interact with the surrounding rock? How much rock was on top of it? What was the close-in geology? How much water was in the surrounding rock?

The ability to answer these questions can significantly alter our understanding of an adversary’s capabilities — which can, in turn, influence how our government responds.

At the U.S. national security laboratories, technical staff are devoted to making sense of sensor data from underground explosions, data that at first glance might appear to be nothing. The ground shakes thousands of times every day, all over the world. Differentiating between a natural earthquake and a nuclear explosion is critical to national security. After all, no one wants to ignore a real signal or respond to a false one. We need to delineate the real threats by filtering out the noise, which is only possible when multiple streams of sensor data are mathematically combined.
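To make that idea concrete, here is a minimal sketch of fusing independent evidence streams into a single explosion-versus-earthquake score. The stream names, numbers and the simple weighted sum are illustrative assumptions, not the laboratories’ actual pipeline.

```python
import numpy as np

# Illustrative only: fuse independent log-likelihood ratios from several
# sensor streams into one explosion-versus-earthquake score. The stream
# names and numbers are hypothetical; real monitoring pipelines are far
# more sophisticated.

def fused_score(log_likelihood_ratios, weights=None):
    """Combine per-stream evidence; a positive score favors 'explosion'."""
    llrs = np.asarray(log_likelihood_ratios, dtype=float)
    if weights is None:
        weights = np.ones_like(llrs)
    return float(np.dot(weights, llrs))

# Hypothetical evidence from three independent data streams for one event.
streams = {
    "P/S spectral ratio": 1.2,      # high-frequency P-to-S energy ratio
    "Ms:mb discriminant": 0.9,      # surface-wave vs. body-wave magnitude
    "infrasound detection": 0.4,    # weak acoustic arrival
}
score = fused_score(list(streams.values()))
print(f"combined log-likelihood ratio: {score:+.2f} (positive favors explosion)")
```

A real system would weight each stream by its reliability and account for correlations among them; the point of the sketch is only that no single stream settles the question on its own.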

When the ground shakes in a suspicious way, the first thing decision-makers ask is: What was it? And then: How confident are you that it is what you think it is?

Of course, the answer to the first question means very little if the answer to the second question is “not very.”

When we’re making our assessments, we’re confronted with many unknowns, which our current models lump together into a single, catch-all uncertainty. We need to be able to assess each of those uncertainties individually so we can better estimate things like a suspected nuclear test’s yield.
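A toy variance budget shows why the lumping matters. The sources and numbers below are made up for illustration; the point is that a single combined error bar hides which unknown dominates, and therefore where better physics or better data would help most.

```python
import numpy as np

# Toy variance budget with made-up numbers: a single lumped error bar hides
# which source of uncertainty dominates, and therefore where better physics
# or better data would pay off most.

sources = {
    "station magnitude scatter": 0.10,      # std. dev., magnitude units
    "rock coupling / water content": 0.15,
    "depth of burial": 0.05,
    "propagation path": 0.10,
}

lumped = np.sqrt(sum(sigma ** 2 for sigma in sources.values()))
print(f"lumped uncertainty: {lumped:.2f} magnitude units")
for name, sigma in sources.items():
    share = sigma ** 2 / lumped ** 2
    print(f"  {name:<30s} {share:6.1%} of the variance")
```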

One only needs to look at how often initial yield estimates are revised to see that this is true. For example, last year, a paper asserted that the 2017 North Korean underground nuclear test was two-thirds more powerful than previously thought. Better estimates require better math, which means writing out the mathematical details and understanding all the variables, including the location of the explosion and the path the seismic signal followed through the rock.
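As a rough sketch of what writing out those details can look like, the snippet below propagates separate uncertainty terms through a generic magnitude-yield relation of the form mb = a + b * log10(Y). The coefficients, observed magnitude and uncertainty sizes are placeholders, not calibrated values, and the real analysis is far more involved.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hedged sketch, not an operational model: propagate separate uncertainty
# terms through a generic magnitude-yield relation, mb = a + b * log10(Y).
# The coefficients, observed magnitude and uncertainty sizes are placeholders.

a, b = 4.0, 0.75          # placeholder relation coefficients
mb_obs = 5.0              # hypothetical observed body-wave magnitude
n = 100_000               # Monte Carlo samples

mb_scatter = rng.normal(0.0, 0.10, n)   # station-to-station magnitude scatter
coupling   = rng.normal(0.0, 0.15, n)   # rock coupling / water content term
path       = rng.normal(0.0, 0.10, n)   # propagation-path term
depth      = rng.normal(0.0, 0.05, n)   # depth-of-burial term

log_yield = (mb_obs + mb_scatter - a - coupling - path - depth) / b
lo, med, hi = np.percentile(10 ** log_yield, [16, 50, 84])
print(f"yield estimate ~{med:.0f} kt (68% range {lo:.0f}-{hi:.0f} kt)")

# If independent data pins down one source (say, geology constrains coupling),
# the spread narrows: the payoff of assessing each uncertainty on its own.
log_yield_c = (mb_obs + mb_scatter - a - path - depth) / b
lo2, med2, hi2 = np.percentile(10 ** log_yield_c, [16, 50, 84])
print(f"with coupling constrained: ~{med2:.0f} kt ({lo2:.0f}-{hi2:.0f} kt)")
```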

One of the ways we’re improving our estimates is through experimentation. Recently, multiple National Nuclear Security Administration laboratories, including Los Alamos National Laboratory, participated in a series of experiments that analyzed underground chemical explosions to advance nuclear detonation detection capabilities. The experiments used buried explosives in the Nevada desert to generate seismic and acoustic signals similar to those emitted by an underground nuclear detonation, allowing scientists to better understand how those signals move through the Earth.

These experiments gave us critical understanding that has helped us refine our mathematics and, by extension, our computer models.

As our models get better, the error bars on our estimates will, at first, grow rather than shrink. That might seem counterintuitive, but the old adage, “The more you know, the less you understand,” holds some truth in the world of mathematics, too.

As we advance our methods, the uncertainties will increase before we know more. But eventually we will know more because we will ground-truth those formulas with experimentation.

With this knowledge will come the ability to confidently assess the source of the test and the yield — information that decision-makers need to determine how best to move forward. Small nuclear tests are an unfortunate reality in today’s world. Our job is to make sure we understand what’s happening in hopes of ending them.

Dale Anderson is a mathematician specializing in seismology. He is the science lead for explosion monitoring research projects at Los Alamos National Laboratory in New Mexico, which is run by the U.S. Energy Department.
