Modern, Scalable, and Reliable Modeling of Turbulent Combustion
S. Levent Yilmaz
Pitt Center for Simulation and Modeling
Thursday February 16, 2012
4:00 pm - Sennott Square - Seminar Room 5317
Refreshments will be served.
Hosted by G. Elisabeta Marai
Abstract
"Turbulence is the greatest unsolved problem of classical physics."
That was Richard Feynman decades ago, referring to a centuries-old problem. Today, the situation is no different. Turbulent combustion, which deals with a fluid mixture reacting and mixing under turbulent conditions (as found in rockets, jet engines, power generators, car engines, furnaces, ...), is harder still. While a solution that would satisfy a physicist is yet to be found, engineers all over the world are tackling the problem with computational modeling and simulation.
There is a plethora of models for turbulence and combustion, with a wide range of competing characteristics: applicability, accuracy, reliability, and computational cost. Nowadays, reliability is the key feature required of such modeling (but, more often than not, it is sacrificed or overlooked) for the design of environmentally friendly and efficient machines.
There exists an unproven (but undeniable) direct correlation between reliability and computational cost. However, the era of sacrificing the former because one cannot afford the latter for a full-scale engineering application is over, thanks to abundant computational resources for open research (XSEDE and others) and the relentless efforts of countless developers providing software that runs faster and better.

In this talk I will outline how we utilize these resources to tackle an important research problem. I will introduce our research tool, the Filtered Density Function (FDF) for large eddy simulation (LES) of turbulent reacting flow. This is a novel and robust methodology that can provide very accurate predictions for a wide range of flow conditions. FDF involves an expensive particle/mesh algorithm in which stiff chemical reaction computations cause interesting, problem-specific, and in most cases extremely imbalanced (by a couple of orders of magnitude) computational loads. I will briefly outline our implementation, based on a simple and smart parallelization strategy that combines optimized solvers with high-level parallelization libraries (e.g., Zoltan). I will present some immediate results and benchmarks, and mention the challenges we face in big data and visualization.
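The load-imbalance problem sketched above can be illustrated with a toy model (not taken from the talk, and not the actual FDF implementation): when per-particle chemistry costs span orders of magnitude, assigning equal particle counts to each rank leaves some ranks waiting on others, while a cost-aware assignment, similar in spirit to what partitioning libraries like Zoltan automate, keeps all ranks busy. The cost distribution and greedy heuristic below are purely hypothetical.

```python
import random

random.seed(0)

# Hypothetical per-particle chemistry costs spanning ~3 orders of
# magnitude, mimicking stiff-reaction integration times.
costs = [10 ** random.uniform(0.0, 3.0) for _ in range(4000)]
nranks = 16

# Naive strategy: distribute equal particle counts round-robin.
naive_loads = [sum(costs[i::nranks]) for i in range(nranks)]

# Cost-aware strategy: greedily assign the heaviest remaining
# particle to the currently lightest rank (longest-processing-time rule).
balanced_loads = [0.0] * nranks
for c in sorted(costs, reverse=True):
    lightest = min(range(nranks), key=balanced_loads.__getitem__)
    balanced_loads[lightest] += c

def imbalance(loads):
    """Max rank load over mean rank load; 1.0 is perfect balance."""
    return max(loads) / (sum(loads) / len(loads))

naive_imb = imbalance(naive_loads)
balanced_imb = imbalance(balanced_loads)
print(f"round-robin imbalance: {naive_imb:.3f}")
print(f"cost-aware  imbalance: {balanced_imb:.3f}")
```

In a real parallel code the costs are not known in advance, so libraries such as Zoltan rebalance dynamically from measured loads; the toy greedy rule above only conveys why cost-aware partitioning matters.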