Thursday, March 7, 2013

It was going to be one of the largest snowstorms to hit Washington, D.C. in years, an event termed Snowquester in honor of the latest D.C. budgetary failure.

Official National Weather Service (NWS) forecasts, amplified by media outlets, were calling for as much as 8 inches in the D.C. metro area, with greater amounts to the west.  Government offices and schools were closed, hundreds of flights were cancelled, and the region hunkered down for a once-in-a-decade snow event.

But the big storm never came, bringing substantial embarrassment to my profession.  Here are the snowfall totals.  Virtually nothing in the city, a few inches in the western and southern suburbs, trending to zero east of the Beltway.  Perhaps 6-9 inches in the Appalachian foothills to the west.


The cost of closing down D.C. and its suburbs was huge, certainly in the tens of millions of dollars.

Why was this storm so poorly forecast? 
Could we have done better?
What lessons could be derived from this failure?

I will try to answer these questions, and the nature of the main failure mode may be something you didn't expect.

 Let me begin by saying this was a difficult forecast-- the celebrated Superstorm Sandy prediction was a walk in the park in comparison.  There are few more problematic meteorological challenges than forecasting snow under marginal temperature conditions.

First, you have to get the AMOUNT of precipitation right, something my profession is not good at.  A rule of thumb is that you multiply the amount of liquid precipitation by ten to get the amount of snow (although that ratio can vary as well).  So if your storm total is off by 0.3 inches of precipitation, you could have an error of 3 inches of snow.  But an error in rain total of 0.3 inches would hardly be noticed.
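
To make that error propagation concrete, here is a minimal sketch in Python, assuming the 10:1 snow-to-liquid rule of thumb (the example numbers are hypothetical, and real ratios vary widely with temperature):

```python
# Minimal sketch: how a liquid-precipitation error propagates into a snowfall error.
# Assumes the common 10:1 snow-to-liquid ratio; actual ratios range from roughly
# 5:1 for wet snow to 20:1 or more for dry, cold snow.

def snowfall_inches(liquid_inches, ratio=10.0):
    """Convert liquid-equivalent precipitation (inches) to snowfall (inches)."""
    return ratio * liquid_inches

forecast_liquid = 0.8   # hypothetical forecast liquid equivalent, inches
observed_liquid = 0.5   # hypothetical observed liquid equivalent, inches

snow_error = snowfall_inches(forecast_liquid) - snowfall_inches(observed_liquid)
print(f"Liquid error: {forecast_liquid - observed_liquid:.1f} in "
      f"-> snowfall error: {snow_error:.1f} in")
# A 0.3 inch liquid error becomes a 3 inch snowfall error under the 10:1 assumption.
```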

But it is worse than that.  Under marginal conditions, the intensity of the precipitation can decide whether you  get rain or snow at the surface.  Heavy precipitation means there is a lot of snow aloft to melt as it descends into the warmer air below.  The melting cools the air, allowing the freezing level (and snow level, generally about 1000 ft below the freezing level) to descend.  So messing up the intensity forecast can mess up forecasting whether you will get rain or snow!
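
As a toy illustration of that intensity effect (this is not an actual forecast calculation; the function, its inputs, and the amount of melt-induced cooling are all assumed for illustration), heavier precipitation drags the freezing level, and with it the snow level, downward:

```python
# Toy illustration of the numbers in the text: the snow level sits roughly
# 1000 ft below the freezing level, and melting of heavy snow aloft cools the
# column, lowering the effective freezing level as precipitation falls.

SNOW_LEVEL_OFFSET_FT = 1000.0   # rule of thumb quoted in the text

def precip_type(freezing_level_ft, surface_elevation_ft, melt_cooling_drop_ft=0.0):
    """Classify surface precipitation as rain or snow.

    melt_cooling_drop_ft is how far melting-induced cooling has already lowered
    the freezing level; it grows with precipitation intensity (the value is
    assumed here, not computed from physics).
    """
    effective_freezing_level = freezing_level_ft - melt_cooling_drop_ft
    snow_level = effective_freezing_level - SNOW_LEVEL_OFFSET_FT
    return "snow" if surface_elevation_ft >= snow_level else "rain"

# Washington National Airport sits near sea level (~15 ft).
print(precip_type(freezing_level_ft=1500, surface_elevation_ft=15))                            # rain
print(precip_type(freezing_level_ft=1500, surface_elevation_ft=15, melt_cooling_drop_ft=600))  # snow
```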

Snow depth analysis from the NWS, Thursday

And then there is the impact of evaporation, which can also cool the air. To get that right you need to forecast the humidity structure of the atmosphere correctly and how that will evolve in time.

Still think this is easy?  You also have to predict the temperatures of the air approaching the region through depth, predict the evolution of the ground temperature (so you know how much will accumulate), and account for the effects of solar radiation during the day.  Not easy.


So let's understand some of the failure modes.  The U.S. NAM model, the main U.S. high-resolution model, produced too much precipitation, and so did the U.S. global GFS (but to a lesser extent).  Guess which modeling system did far better than either?  Yes, the European Center (ECMWF) model did a superior job. Importantly, it realistically predicted less precipitation than the U.S. models.  The graphic of storm-total precipitation below, provided courtesy of WeatherBell, Inc., shows DC getting about 1 inch in the ECMWF model and 1.7 inches in the U.S. GFS model.  The U.S. NAM model produced even more.
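
Applying the 10:1 rule of thumb from above to those model totals shows how much the precipitation spread alone mattered (a back-of-the-envelope calculation that assumes everything fell as snow, which it did not):

```python
# Back-of-the-envelope: storm-total liquid forecasts for DC quoted above, converted
# to snow with the 10:1 rule of thumb (assuming, unrealistically, all of it fell as snow).
qpf_inches = {"ECMWF": 1.0, "GFS": 1.7}   # values quoted in the text; the NAM was higher still
for model, liquid in qpf_inches.items():
    print(f"{model}: {liquid:.1f} in liquid -> ~{10 * liquid:.0f} in of snow if all snow")
# The 0.7 in liquid difference between the models is worth roughly 7 in of snow.
```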


Both the EC and U.S. models were bringing in above-freezing air aloft. The 24-h EC model temperature forecast at 850 hPa (around 5000 ft), valid at 1200 UTC (7 AM DC time) on Wednesday (see below), shows above-freezing air entering the southern Chesapeake and moving towards DC (the tan to light green transition is 0°C).
The 850 hPa temperatures from the NAM model showed a similar pattern...something that should have worried NWS forecasters.  Only very heavy precipitation intensity could overwhelm such a flux of warm air from off the ocean, particularly in March when the sun is getting stronger.


As I have mentioned before in this blog, an increasingly important tool for forecasters is ensemble prediction, where we run our models many times to get a handle on forecast uncertainties.   The U.S. ensemble systems were screaming that forecast uncertainties were very large.  For example, this figure shows the spread of the 24-h snowfall forecasts for the ensembles started at 4 PM EST on 5 March.  The yellow colors show that the uncertainties were 5-10 inches!  And the solid lines show the mean of the ensemble forecasts:  there was a very large gradient of predicted snowfall just east of D.C.
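
The idea behind a spread plot like that is simple to sketch. The snippet below uses hypothetical member values (not the actual NCEP ensemble output) to compute an ensemble mean and spread for 24-h snowfall at a single point:

```python
import numpy as np

# Hypothetical 24-h snowfall forecasts (inches) at one DC grid point from an
# ensemble of equally plausible model runs; not actual ensemble member values.
members = np.array([0.0, 0.5, 1.0, 2.0, 3.5, 5.0, 6.5, 8.0, 9.0, 10.5])

mean = members.mean()           # the single "best guess" number
spread = members.std(ddof=1)    # standard deviation across members
print(f"Ensemble mean: {mean:.1f} in, spread: {spread:.1f} in")
# A spread comparable to the mean is the modeling system's way of saying
# "I don't know" -- exactly the situation on 5 March.
```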
Also worrying was that the NWS statistical postprocessing system (called MOS, for Model Output Statistics) applied to the GFS model for Washington National Airport was not indicating much of a snow event (see below), with a low temperature of 35F, a high of 41F, and rain most of the day. (HR in UTC, temperature (TMP), probability of snow (POS))
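
For readers unfamiliar with MOS, the core idea is regression: relate a model's raw output at a station to what was actually observed there over many past forecasts, then apply those coefficients to the new forecast. Here is a bare-bones sketch with invented training numbers and a single predictor (the operational GFS MOS uses many predictors and a long training history):

```python
import numpy as np

# Bare-bones MOS idea: fit observed station temperature as a linear function of the
# raw model temperature, using past forecast/observation pairs (numbers invented).
model_t = np.array([30.0, 34.0, 38.0, 42.0, 46.0])   # past raw model temps at the station (F)
obs_t   = np.array([28.0, 33.0, 36.0, 41.0, 44.0])   # what was actually observed (F)

A = np.vstack([model_t, np.ones_like(model_t)]).T
slope, intercept = np.linalg.lstsq(A, obs_t, rcond=None)[0]

new_model_temp = 40.0                                 # today's raw model forecast (F)
print(f"MOS-corrected temperature: {slope * new_model_temp + intercept:.1f} F")
```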


The screaming message in all of this (and I am leaving a lot out) was that there was HUGE uncertainty in this forecast, uncertainty that was not communicated to the public by my profession or the media.  Would decision makers have sent government workers home or cancelled schools if they knew that the chances of a big snow were marginal?   I don't know....but they deserved to have this information, and I believe they could have made better decisions.

I don't want to sound like a broken record, but with a little investment we can fix this.  The U.S. global model should not only equal but surpass the European Center.  We need to run state-of-the-art high-resolution ensemble predictions over the U.S. at 2-4 km resolution to give us reliable uncertainty and probabilistic forecasts. The NWS cannot do any of this with its current, inadequate computer resources (see my previous blogs documenting this).  NOAA management has given weather prediction computation low priority...this needs to change, and it will only change if the American people demand it.  Computer resources are, of course, only the first step.  With better model guidance, my profession must move to providing the public with comprehensive probabilistic forecasts for all weather parameters (e.g., temperature, wind, snow, etc.).
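
Probabilistic guidance of the kind argued for here falls naturally out of an ensemble: the probability of exceeding a threshold is just the fraction of members that exceed it. A minimal sketch, again with hypothetical member values:

```python
import numpy as np

# Turning an ensemble into a probabilistic forecast: count the members over a threshold
# (hypothetical 24-h snowfall values in inches, not real ensemble output).
members = np.array([0.0, 0.5, 1.0, 2.0, 3.5, 5.0, 6.5, 8.0, 9.0, 10.5])

for threshold in (1, 4, 8):
    prob = np.mean(members > threshold)
    print(f"P(snow > {threshold} in) = {prob:.0%}")
```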

Enhanced computer resources at the National Weather Service could have paid for themselves in this one storm.  Think about that.  The national media is thinking about it:  here are two segments from NBC Nightly News in which the reporter notes both the forecast failure and the contribution of inferior NWS computers.   When NBC Nightly News is telling you to get a new computer, you know you have a problem...

A special segment on Friday night:

http://www.nbcnews.com/video/nightly-news/51108647/#51108647

and this video from two days ago:


