Fourier Transformations Reveal How Artificial Intelligence Learns Complex Physics


New research suggests that Fourier analysis, a mathematical technique that has existed for 200 years, can be used to uncover important information about how deep neural networks learn to perform complex physics tasks, such as climate and turbulence modeling. The work highlights the potential of Fourier analysis as a tool for understanding the inner workings of artificial intelligence, which could have significant implications for the development of more powerful machine learning algorithms.

Fourier analysis reveals how a deep neural network processes complex physics.

According to a new study, a 200-year-old mathematical technique known as Fourier analysis can reveal valuable information about how a deep neural network learns to perform tasks involving complex physics.

A paper describing the research, by Rice University mechanical engineers, has been published in PNAS Nexus, a sister journal of the Proceedings of the National Academy of Sciences.

Pedram Hassanzadeh, who led the study, said the work presents the first comprehensive framework for explaining and guiding the use of deep neural networks for complex dynamical systems such as climate. He said it could significantly accelerate the use of scientific deep learning in climate science and lead to much more reliable climate change estimates.

Researchers at Rice University have trained a deep learning neural network to recognize complex air or water flows and anticipate how they will evolve over time. This visual demonstrates the significant differences in the scale of features the model learns to recognize (bottom) to make its predictions.

Hassanzadeh, former students Adam Subel and Ashesh Chattopadhyay, and postdoctoral research associate Yifei Guan describe their methods in the paper. "Our research revealed not only what the neural network had learned, but it also enabled us to directly link what we had learned to the physics of the complex system it was modeling," Hassanzadeh said.

"Deep neural networks are notoriously difficult to understand and are often referred to as "black boxes," according to the author. "That is one of the main issues with using deep neural networks in scientific research."

Rice University researchers used Fourier analysis to compare all 40,000 kernels from the network's two trained iterations and found that more than 99% of them were identical. The finding demonstrates the potential for identifying more efficient re-training strategies that require significantly less data.
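The paper's exact similarity measure is not spelled out here, but the basic move can be sketched. In the hypothetical PyTorch snippet below, kernel_spectra and fraction_similar are invented helper names, and cosine similarity of the kernels' Fourier magnitudes stands in for whatever metric the authors actually used:

    import torch
    import torch.nn.functional as F

    def kernel_spectra(conv_weight, size=32):
        # Magnitude of the 2D Fourier transform of every kernel,
        # flattened so that each kernel becomes one row.
        spec = torch.fft.fft2(conv_weight, s=(size, size)).abs()
        return spec.reshape(-1, size * size)

    def fraction_similar(weights_a, weights_b, threshold=0.99):
        # Fraction of kernel pairs whose spectra nearly match, judged by
        # cosine similarity (an assumption, not the paper's metric).
        sa, sb = kernel_spectra(weights_a), kernel_spectra(weights_b)
        return (F.cosine_similarity(sa, sb, dim=1) > threshold).float().mean().item()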

The research presented by Hassanzadeh "opens up the black box, allows us to peek inside to see what the networks have learned and why, and allows us to connect that knowledge to the physics of the system that was learned."

Subel, the study's lead author, began the project as a Rice undergraduate and is now a graduate student at New York University. He believes the framework could be combined with transfer learning methods to "enable generalization and, ultimately, increase trust in scientific deep learning."

Hassanzadeh said he, Subel, Guan, and Chattopadhyay chose to approach the problem from a different angle.

Pedram Hassanzadeh. Photo: Rice University

"The common machine learning techniques used to understand neural networks have not had much success for natural and engineering system applications," Hassanzadeh said. "We thought, "Let's use a tool that's common for studying physics and apply it to the study of a neural network that has learned to do physics."

He said Fourier analysis, which was first proposed in the 1820s, is a popular method used by scientists and mathematicians for identifying frequency patterns in space and time.

"People who do physics usually look at data in the Fourier space," he said. "It makes physics and mathematics easier."

If a researcher had a minute-by-minute record of outdoor temperature measurements over a one-year period, the data would be a string of 525,600 numbers, what physicists call a time series. To analyze the time series in Fourier space, a researcher would use trigonometric functions to transform the series, producing another set of 525,600 numbers that retains the information of the original set but looks quite different.

"You would see only a few spikes rather than seeing temperature every minute," Subel said. "One would be the cosine of 24 hours, which would be the day and night cycle of highs and lows."

Scientists have developed other techniques to analyze time-frequency data. For example, low-pass filters remove fast, high-frequency fluctuations such as noise, keeping only the slowly varying background, while high-pass filters do the opposite, removing the slow background so one can concentrate on the rapid variations.
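Reusing the synthetic temps series from the sketch above, both filters can be written with the same FFT machinery; the six-hour cutoff below is an arbitrary choice for illustration:

    def lowpass(signal, cutoff_period):
        # Keep slow variations: zero out every Fourier mode whose
        # period (in samples) is shorter than the cutoff.
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0)
        spectrum[freqs > 1.0 / cutoff_period] = 0.0
        return np.fft.irfft(spectrum, n=len(signal))

    def highpass(signal, cutoff_period):
        # The opposite: remove the slowly varying background,
        # keep only the fast fluctuations.
        return signal - lowpass(signal, cutoff_period)

    smooth = lowpass(temps, cutoff_period=6 * 60)   # variations slower than 6 h
    fast = highpass(temps, cutoff_period=6 * 60)    # everything faster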

Adam Subel. Photo: Rice University

Hassanzadeh's team first performed the Fourier transform on the equation of its fully trained deep-learning model. Each of the model's approximately 1 million parameters acts as a multiplier, giving particular operations in the equation more or less weight as the model performs its calculations. These multipliers are gradually refined during training as the algorithm learns to arrive at predictions that are closer to the known outcomes in the training cases.

Hassanzadeh said that when the team examined the Fourier transform of the equation, it pointed them to the Fourier transform of these weight matrices. No one had done this before, he said, nor had anyone attempted to connect these matrices to the physics of the system being modeled.

"And when we did that, it came out that what the neural network is learning is a combination of low-pass filters, high-pass filters, and Gabor filters," he said.

"The beauty of this is that the neural network isn't doing any magic," Hassanzadeh said. "It's not doing anything crazy." It's actually doing what a physicist or mathematician might have attempted to do." But when we talk to physicists about this work, they're like, "Oh! I understand."

Subel said the findings have significant implications for scientific deep learning, and suggest that some lessons from research on machine learning in other contexts, such as the classification of static images, may not carry over to scientific machine learning. "This, on its own, is a significant finding," he said.

Reference: Adam Subel, Ashesh Chattopadhyay, Yifei Guan, and Pedram Hassanzadeh, PNAS Nexus, 23 January 2023. DOI: 10.1093/pnasnexus/pgad015

Chattopadhyay earned his Ph.D. in 2022 and is now a research scientist at the Palo Alto Research Center.

The research was supported by the Office of Naval Research (N00014-20-1-2722), the National Science Foundation (2005123, 1748958), and the Schmidt Futures program.
