Supercomputers have played an essential role in advancing researchers’ understanding of turbulence physics, but even today’s most computationally expensive approaches have limitations.

Recently, researchers at the Technical University of Darmstadt (TU Darmstadt) led by Prof. Dr. Martin Oberlack and at the Universitat Politècnica de València headed by Prof. Dr. Sergio Hoyas started using a new approach for understanding turbulence, and with the help of supercomputing resources at the Leibniz Supercomputing Centre (LRZ), the team was able to run the largest turbulence simulation of its kind. Specifically, the team generated turbulence statistics through this large simulation of the Navier-Stokes equations, which provided the critical database for underpinning a new theory of turbulence.

“Turbulence is statistical, because of the random behaviour we observe,” Oberlack said. “We believe the Navier-Stokes equations do a very good job of describing it, and with them we are able to study the entire range of scales down to the smallest scales, but that is also the problem—all of these scales play a role in turbulent motion, so we have to resolve all of them in simulations. The biggest problem is resolving the smallest turbulent scales, which decrease inversely with the Reynolds number (a number that indicates how turbulently a fluid is moving, based on a ratio of velocity, length scale, and viscosity). For airplanes like the Airbus A380, the Reynolds number is so large, and thus the smallest turbulent scales so small, that they cannot be represented even on SuperMUC-NG.”
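The ratio Oberlack describes can be made concrete with a small sketch. The flight values below are illustrative, not actual A380 data, and the scale estimate uses the classical Kolmogorov relation η ≈ L·Re^(−3/4) rather than anything specific to the team's work:

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = U * L / nu: ratio of inertial to viscous effects in a flow."""
    return velocity * length / kinematic_viscosity

def kolmogorov_scale(length, re):
    """Classical estimate of the smallest turbulent scale: eta ~ L * Re^(-3/4)."""
    return length * re ** -0.75

# Illustrative cruise-like values: U = 250 m/s, reference length L = 10 m,
# kinematic viscosity of air nu ~ 1.5e-5 m^2/s.
re = reynolds_number(250.0, 10.0, 1.5e-5)   # on the order of 10^8
eta = kolmogorov_scale(10.0, re)            # micrometre-scale eddies
```

At Reynolds numbers of this order, the smallest eddies are micrometres across while the aircraft is tens of metres long, which is why resolving every scale of such a flow is out of reach even for machines like SuperMUC-NG.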

In 2009, while visiting the University of Cambridge, Oberlack had an epiphany—while thinking about turbulence, he thought about symmetry theory, a concept that forms the fundamental basis of all areas of physics research. In essence, symmetry in mathematics means that an equation keeps its form under certain transformations, so that solutions remain valid even under different arrangements or operating conditions.

Oberlack realized that the turbulence equations did, in fact, follow these same rules. With this in mind, researchers could theoretically forego the extremely large, dense computational grids on which the equations are solved in each grid box—the common approach for turbulence simulations—and instead focus on defining accurate statistical mean values for air pressure, speed, and other flow characteristics. The problem is that this averaging approach requires researchers to “transform” the Navier-Stokes equations, and the transformation unleashes a never-ending chain of equations: each equation for a statistical moment depends on the next, higher moment, so even the world’s fastest supercomputers could never solve them all.
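The averaging step described above is the classical Reynolds decomposition; the following is a standard textbook sketch of why the chain of equations never closes, not the team's specific derivation:

```latex
% Split each velocity component into a mean and a fluctuation:
u_i = \bar{u}_i + u_i'
% Averaging the Navier-Stokes equations gives equations for the mean flow
% that contain the unclosed Reynolds-stress term \overline{u_i' u_j'}:
\frac{\partial \bar{u}_i}{\partial t}
  + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
    + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
    - \frac{\partial \overline{u_i' u_j'}}{\partial x_j}
```

The equation for the second moments \(\overline{u_i' u_j'}\) in turn involves third moments, the third-moment equations involve fourth moments, and so on—the infinite hierarchy that the article calls a never-ending chain.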