IBM showcases ‘Deep Thunder’ weather forecasting app on Apple iPad

“By IBM’s own admission, weather forecasting seemed like an unusual use for its supercomputing technology,” Carl Franzen reports for TPM.

“But 16 years after it began work on the parallel processing supercomputing system that would become known as ‘Deep Thunder’ — a targeted weather forecasting program — IBM has taken the technology mobile, putting it on an iPad app and showing it off to lawmakers on Capitol Hill at a breakfast event on Wednesday and to reporters at its New York offices later in the week,” Franzen reports. “‘When you think of supercomputing, your mind doesn’t immediately jump to weather forecasting,’ said Michael Valocchi, vice president and partner at IBM’s energy utilities division, in a briefing with TPM. ‘Weather forecasting doesn’t sound exciting, but what we’ve found is that our system allows for an unprecedented granular look at incoming weather over an 84-hour period in a specific location, down to within a square mile, much more detailed than any other current weather forecast can provide.'”

Franzen reports, “The iPad app is the latest iteration of the software, although it isn’t yet available to the consumer market.”

Read more in the full article here.

IBM’s “Deep Thunder” FAQ is here.

MacDailyNews Take: Uh, along with nuclear bomb simulations, the first thing that pops into mind when we hear “supercomputing” is weather forecasting. It’s not an “unusual use” for supercomputing at all.

[Thanks to MacDailyNews Reader “Ron Robertson” for the heads up.]


    1. A pity—I so depend on weather forecasters (I mean: meteorologists) to tell me it’s raining, so I don’t have the bother of turning around to look out the window.

  1. As someone who actually worked with the Deep Thunder team well over a decade ago…

    The issue has never been the models themselves, the implementation of the models, the software, or even the hardware upon which the models are run. It has always been the extreme lack of fine granularity, in both time and space, of the raw data going into the models.

    When the raw data has a time granularity of sometimes 24 hours or more between data points — even within the contiguous 48 states — or a spatial granularity of more than 100 square miles per thermal cell, or more than 64 square kilometers per data cell of lightning data while a storm is evolving, then the output of the model, no matter how good that model is or how fine-grained its output, is always in question — except as a ROUGH approximation of what may be to come.

    Deep Thunder has always been a case of, “It will be great IF we can ever get the temporal and spatial fine granularity of data that we really need.”

    With the proper base data at the proper granularity, it really is possible to predict tornadoes to within 4 km of where they will initially touch down, more than 40 minutes in advance. That fact has been known to the scientific community for over a decade. It’s just that no one wants to pay for the systems that would provide that data. So instead, the current systems predict that tornadoes MIGHT happen sometime in the next several hours, somewhere within a 500 square mile area. Thus people ignore these warnings, and people die.

    (Sorry, I’ve been bugged by this issue, and people dying because of it, for well over a decade.)

    1. Good stuff! Thanks for posting it. As someone who grew up in rural Kansas and twice saw deadly tornadoes nearby as a boy, I am appalled to learn that accurate forecasting is stymied by a lack of the political vision needed to gather the requisite data.

  2. @shadowself
    Don’t apologize. That might be the most I’ve ever learned in a MDN comment. Most classic of computer problems: garbage in, garbage out.

  3. That sounds disingenuous to say the least: weather forecasting requires massive crunching of gigantic data sets, preferably in parallel. What could be a better use of a super computer than that?!
