We’ve all heard weather broadcasts in which a meteorologist says something like “This afternoon there will be a 40 percent chance of snow showers” or “A 75 percent chance of rain will likely make it a wet night,” but how do they come up with these figures? To many viewers this might seem like mere guesswork on the part of the television weatherpeople. In fact, the figures come from computer models. What typically happens is that a meteorologist gathers as much data as possible from local and national weather stations (weather satellite imagery, Doppler radar, temperature readings, and so on), feeds this data into a computer weather-modeling program, and runs dozens of scenarios of what could happen given those initial conditions. If, for instance, 20 out of 60 of the scenarios predict rainfall, then the meteorologist will forecast roughly a 33 percent chance of rain. While chaos theory dictates that there is no reasonable way to expect meteorologists to achieve a 100 percent accuracy rate, or even a 90 or 80 percent one, they come as close as possible given our current knowledge of how weather works and the limits of computer technology.
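The ensemble idea described above can be sketched in a few lines of Python. Everything here is illustrative: `run_scenario` is a toy stand-in for a full numerical weather model, and its “rain” rule is a crude made-up threshold, not real meteorology. The point is only the structure of the method: perturb the initial conditions, run many scenarios, and report the fraction that produce rain as a probability.

```python
import random

def run_scenario(base_temp, base_humidity, rng):
    # Toy surrogate for a weather model run: nudge the initial
    # conditions slightly and return True if this run "rains".
    temp = base_temp + rng.gauss(0, 1.5)        # perturbed temperature (deg C)
    humidity = base_humidity + rng.gauss(0, 5)  # perturbed relative humidity (%)
    return humidity > 80 and temp > 0           # crude, invented rain rule

def chance_of_rain(base_temp, base_humidity, n_runs=60, seed=42):
    # Run an ensemble of scenarios and report the share that rain,
    # rounded to a whole percentage, just like the broadcast figure.
    rng = random.Random(seed)
    rainy = sum(run_scenario(base_temp, base_humidity, rng)
                for _ in range(n_runs))
    return round(100 * rainy / n_runs)
```

Calling `chance_of_rain(5.0, 82.0)` counts how many of the 60 perturbed runs cross the rain threshold and returns that share as a percentage, which is exactly the 20-out-of-60 bookkeeping in the example above.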