Last Valentine's Day (in the U.S.), a remarkable event took place over Chelyabinsk, Russia. The spectacular explosion of an asteroid led to the conception of several research projects that took nine months to gestate into publications. Last week, the first round of papers related to the airburst appeared in the prestigious journals Science and Nature.
I am not in the habit of using this blog to report on my own work, but in my opinion one of the most significant conclusions is that of Brown et al. (I am a co-author) suggesting that we may have underestimated the number of building-sized asteroids by up to a factor of 10. This is important because more asteroids mean more impacts, and more impacts mean greater risk.
In a happy coincidence, a paper I wrote two years ago, "Airburst Warning and Response," was also published in its final form last week by the journal Acta Astronautica. I calculated the impact risk using various assumptions. One was that the astronomy-based estimate of the number of building-sized asteroids is correct, even though that estimate itself is based on other assumptions (like how much sunlight they reflect).
After the next telescope survey is complete, 90 percent of asteroids greater than 140 meters (the size of a stadium) will have been discovered. I found that after the survey the remaining risk will be dominated by asteroids that are too small to reach the ground, but big enough to explode in the air and kill people with descending fireballs and air blasts. These rocks range in size from small hotels to large government buildings.
Chelyabinsk threw a monkey wrench into one of my assumptions, because it shouldn't have happened. At least it wasn't very likely. One freak event doesn't tell us much, but there was another freakishly large airburst in 1963 over the ocean. It was so isolated that nobody would have noticed if it hadn't been for microphone arrays that had been set up to monitor low-frequency sound waves from nuclear tests during those Cold War years. And the most freakish airburst of all was the 1908 Tunguska explosion that knocked down trees over a swath of forest larger than the Washington, D.C., metropolitan area.
We see enough smaller bursts (called bolides) to give us good estimates for the numbers of car- and bus-sized asteroids. There are a lot, but they aren't dangerous. As we go up in size, the number of events goes down until we reach the ragged boundary between signal and noise, without enough data to be sure.
Any single one of these big events was unlikely to occur during the period in which we have been able to observe and record such phenomena around the globe. Each big one, by itself, is not statistically significant. Freaks of nature happen. But three freaks in just over a century are enough to make us question whether astronomers have their numbers right. It doesn't prove anything, but when you try to measure the same thing in two different ways and get two very different answers, it increases your uncertainty.
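To see why three events in a century is suggestive rather than conclusive, it helps to treat airbursts as a Poisson process. The sketch below is purely illustrative: the rate of 0.4 Chelyabinsk-class-or-larger events per century is a hypothetical number I have chosen for the example, not a figure from the papers discussed here.

```python
from math import exp, factorial

def poisson_prob_at_least(k, lam):
    """P(N >= k) for a Poisson process with expected count lam."""
    return 1.0 - sum(exp(-lam) * lam**n / factorial(n) for n in range(k))

# Hypothetical: if surveys implied an expected ~0.4 large airbursts
# per century, the expected count over the ~105 years since Tunguska
# would be about 0.42 (illustrative assumption only).
lam = 0.4 * 1.05
print(f"P(at least 3 events) = {poisson_prob_at_least(3, lam):.4f}")
```

Under that assumed rate, three or more events in the observing period would be roughly a one-in-a-hundred outcome: rare enough to raise eyebrows, but not rare enough to rule out a fluke, which is exactly the bind described above.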
Al Harris is a close colleague whose telescope-based population numbers I used in my risk calculations. He agrees that his estimates have a large uncertainty (and his graphs often have "error bars" to show it). Here's what he thinks:
What I do find a bit disturbing is that we (telescope surveys and bolide data) agree very closely at the smallest sizes, and diverge at larger sizes, where the telescopic surveys become more definitive and the bolide estimates less so. If we agreed at the larger sizes and diverged at smaller sizes, I would have greater confidence that they are right and my estimates are wrong, because admittedly (by both them and me), their estimates are more reliable at the smallest sizes, and ours (telescope surveys) are more reliable at larger sizes, simply because of the numbers of objects/events detected.
My own personal uncertainty has increased. What is the best way to resolve the difference between telescope populations and airburst frequencies? Is nature trying to tell us something with three unlikely events, or is it just a random streak of luck (good or bad)? Should I use a highly uncertain estimate based on our three data points, or should I keep using Al's best estimates? Is it better to use Al's worst-case estimate -- meeting him halfway -- or to assume that the worst case is now even worse? Does this uncertainty make you feel safer?
Climate scientists have a similar quandary. Just like us, they agree on all the basic physical principles. Energy is conserved, gravity exists, greenhouse gases trap heat, planets and asteroids orbit the sun, water vapor is a global warming amplifier. Like us, they don't agree on the numbers. They don't know for sure how quickly the temperature of the Earth will rise as CO2 levels go up, because there are uncertainties in things like clouds (like how much sunlight they reflect). Will temperature go up 2 degrees or 10 degrees this century? Which growth rate should we use to estimate risk? Does this uncertainty make you feel safer?
If your answer is different for climate than it is for asteroids, your assignment is to explain why in the comments section.
Follow Mark Boslough on Twitter: www.twitter.com/MarkBoslough