Machine learning models are taking over in the field of weather forecasting, from a quick “how long will this rain last” to a 10-day outlook, all the way out to century-level predictions. The technology is increasingly important to climate scientists as well as apps and local news stations — and yet it doesn’t “understand” the weather any more than you or I do.
For decades meteorology and weather forecasting have been largely defined by fitting observations into carefully tuned physics-based models and equations. That’s still true — there’s no science without observation — but the vast archives of data have enabled powerful AI models that cover just about any time scale you could care about. And Google is looking to dominate the field from now to eternity.
At the short end of the spectrum we have the immediate forecast, which generally is consulted for the question “do I need an umbrella?” This is served by DeepMind’s “nowcasting” models, which basically look at precipitation maps like a sequence of images — which they are — and try to predict how the shapes in those images will evolve and shift.
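DeepMind's actual nowcasting models are deep generative networks, but the underlying idea of "watch the shapes move, then extrapolate" can be sketched with a classical advection baseline. The toy below (all values synthetic) estimates the dominant motion between two radar frames via phase cross-correlation and applies that motion once more to guess the next frame:

```python
import numpy as np

def advect_extrapolate(prev_frame, curr_frame):
    """Estimate the dominant shift between two radar frames via
    phase cross-correlation, then extrapolate one step ahead."""
    # The normalized cross-power spectrum peaks at the displacement
    # between the two frames.
    f1 = np.fft.fft2(prev_frame)
    f2 = np.fft.fft2(curr_frame)
    cross = f2 * np.conj(f1)
    cross /= np.abs(cross) + 1e-9
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame back to negative values.
    h, w = curr_frame.shape
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    # Apply the same shift again: the "prediction" for the next frame.
    return np.roll(curr_frame, shift=(dy, dx), axis=(0, 1)), (dy, dx)

# A bright "rain cell" drifting 2 pixels per frame to the right.
frame0 = np.zeros((32, 32))
frame0[10:14, 4:8] = 1.0
frame1 = np.roll(frame0, shift=(0, 2), axis=(0, 1))

pred, shift = advect_extrapolate(frame0, frame1)
print(shift)  # estimated per-frame motion: (0, 2)
```

Real nowcasting systems learn far richer dynamics (growth, decay, rotation) from years of radar data, but the input and output are the same: a sequence of precipitation images in, a predicted image out.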
With countless hours of doppler radar to study, the model can get a pretty solid idea of what will happen next, even in fairly complex situations like a cold front bringing in snow or freezing rain (as shown by Chinese researchers building on Google’s work).
This model is an example of how accurate weather predictions can be when made by a system that has no actual knowledge about how that weather happens. Meteorologists can tell you that when this climate phenomenon runs up against this other one, you get fog, or hail, or humid heat, because that’s what the physics tell them. The AI model knows nothing about physics — being purely data-based, it is simply making a statistical guess at what comes next. Just like ChatGPT doesn’t actually “know” what it’s talking about, the weather models don’t “know” what they’re predicting.
This may surprise those who think a strong theoretical framework is necessary to produce accurate predictions, and indeed scientists remain wary of blindly adopting a system that doesn't know a drop of rain from a ray of sunshine. But the results are impressive nevertheless, and in low-stakes matters like "will it rain while I'm walking to the store," they are more than good enough.
Google’s researchers also recently showed off a new, slightly longer-term model called MetNet-3, which predicts up to 24 hours in the future. As you might guess, this brings in data from a larger area, like weather stations across the county or state, and its predictions take place at a larger scale. This is for things like “is that storm going to cross over the mountains or dissipate” and the like. Knowing whether wind speeds or heat are likely to get into dangerous territory tomorrow morning is essential for planning emergency services and deploying other resources.
Today brings a new development at the “medium-range” scale, which is 7-10 days in the future. Google DeepMind researchers published an article in the journal Science describing GraphCast, which “predicts weather conditions up to 10 days in advance more accurately and much faster than the industry gold-standard weather simulation system.”
GraphCast zooms out not just in time but in space, covering the entire planet at a resolution of 0.25 degrees of longitude and latitude, or about 28×28 kilometers at the equator. That means predicting conditions at more than a million points around the Earth, and while some of those points are of more obvious interest than others, the goal is a global system that accurately predicts the major weather patterns for the next week or so.
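The "more than a million points" figure falls straight out of the grid arithmetic; a quick back-of-the-envelope check (the 111 km-per-degree constant is the usual approximation at the equator):

```python
# Grid cells on a 0.25-degree global latitude/longitude grid.
res = 0.25
n_lat = int(180 / res) + 1   # latitudes from -90 to +90 inclusive
n_lon = int(360 / res)       # longitudes wrap around, so no duplicate
print(n_lat * n_lon)         # 1,038,240 grid points

# Approximate cell width at the equator.
km_per_deg = 111.32          # length of one degree of longitude there
cell_km = res * km_per_deg
print(round(cell_km, 1))     # ~27.8 km, i.e. roughly 28x28 km cells
```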
“Our approach should not be regarded as a replacement for traditional weather forecasting methods,” the authors write, but rather “evidence that MLWP is able to meet the challenges of real-world forecasting problems and has potential to complement and improve the current best methods.”
It won’t tell you whether it will rain in your neighborhood or only across town, but it is very useful for larger-scale weather events like major storms and other dangerous anomalies. These occur in systems thousands of kilometers wide, meaning GraphCast simulates them in considerable detail and can predict their movements and qualities days out — all in under a minute on a single Google TPU.
That’s an important aspect: efficiency. “Numerical weather prediction,” the traditional physics-based approach, is computationally expensive. Of course these models can predict faster than the weather happens, otherwise they’d be worthless, but you have to get a supercomputer on the job, and even then it can take a while to produce predictions with slight variations.
Say for instance you aren’t sure whether an atmospheric river is going to increase or decrease in intensity before an incoming cyclone crosses its path. You might want to make a few predictions with different levels of increase, and a few with different decreases, and one if it stays the same, so that when one of those eventualities occurs, you have the forecast ready. Again, this can be of enormous importance when it comes to things like storms, flooding, and wildfires. Knowing a day earlier that you’ll have to evacuate an area can save lives.
These jobs can get real complex real fast when you’re accounting for lots of different variables, and sometimes you’ll have to run the model dozens of times, or hundreds, to get a real sense of how things will play out. If those predictions take an hour each on a supercomputer cluster, that’s a problem; if it’s a minute each on a desktop-sized computer you have thousands of, it’s no problem at all — in fact, you might start thinking about predicting more and finer variations!
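Using only the article's own illustrative figures (an hour per run versus a minute per run, which are assumptions for this sketch, not benchmarks), the back-of-the-envelope math on a 100-member ensemble looks like this:

```python
# Illustrative only: the "hour each" vs. "minute each" figures above,
# scaled to a hypothetical 100-member ensemble forecast.
members = 100

nwp_total_h = members * 1.0              # hour-long runs, one after another
ml_total_min = members * 1.0             # minute-long ML runs, sequential
ml_parallel_min = ml_total_min / members  # one run per machine, fanned out

print(f"NWP ensemble, sequential:  {nwp_total_h:.0f} hours")
print(f"ML ensemble, sequential:   {ml_total_min / 60:.1f} hours")
print(f"ML ensemble, parallel:     {ml_parallel_min:.0f} minute")
```

At that cost, running hundreds more members to explore finer variations stops being a budgeting question at all.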
And that’s the idea behind the ClimSim project at AI2, the Allen Institute for Artificial Intelligence. What if you wanted to predict not just 10 different options for how next week might look, but a thousand options for how the next century will play out?
This kind of climate science is important for all kinds of long-term planning, but with a tremendous number of variables to manipulate and predictions going out decades, you can bet that the computation power needed is equally huge. So the team at AI2 is working with scientists around the world to accelerate and improve those predictions using machine learning, improving the “forecasts” at the century scale.
ClimSim models work similarly to the ones discussed above: instead of plugging numbers into a physics-based, hand-tuned model, they look at all the data as an interconnected vector field. When one number goes up and reliably causes another to go up half as much, but a third to go down by a quarter, those relationships are embedded in the machine learning model’s memory even if it doesn’t know that they pertain to (say) atmospheric CO2, surface temperature, and ocean biomass.
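That "learn the relationship without knowing what it means" idea can be shown in miniature. Below, synthetic data is generated with exactly the coefficients described above (+0.5 and −0.25), and an ordinary least-squares fit recovers them from the numbers alone; the variable names are placeholders, not real climate quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data mirroring the illustration above: when the driver
# variable rises by 1, one response rises 0.5 and another falls 0.25.
driver = rng.normal(size=(1000, 1))
true_coefs = np.array([[0.5, -0.25]])
responses = driver @ true_coefs + 0.01 * rng.normal(size=(1000, 2))

# A least-squares fit recovers the relationships from data alone:
# no physics, just statistics, as with the emulators described above.
coef, *_ = np.linalg.lstsq(driver, responses, rcond=None)
print(np.round(coef, 2))  # approximately [[ 0.5  -0.25]]
```

A real emulator learns millions of such couplings, and nonlinear ones, but the principle is the same: the model stores the statistical relationships, not their physical meaning.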
The project lead I spoke to said that the models they’ve built are impressively accurate while being orders of magnitude cheaper to perform computationally. But he did admit that the scientists, while they are keeping an open mind, are operating (as is natural) from a place of skepticism. The code is all here if you want to take a look yourself.
With such long timescales, and with the climate changing so rapidly, it is difficult to find suitable ground truth for long-term predictions, yet those predictions are growing more valuable all the time. And as the GraphCast researchers pointed out, this isn’t a replacement for other methods but a complementary one. No doubt climate scientists will want every tool they can get.