I note you slipped in an extra Re^0.25. At Re = 10^8, that is a factor of 100. Leave out that factor of 100, and your example of an aircraft computation would not take a month, but 8 hours. In a year you could do all 1000 cases, a reasonable time for a new aircraft design cycle.
Thanks for the comment :). You are right, the simplification from Re^(11/4) to Re^(12/4) = Re^3 is a rather rough one and accounts for the factor of 100 in runtime for the aerospace example. However, I'd argue it is okay, since our estimates are super rough in the first place: we are not accounting for potential additional effects, and we build on the Kolmogorov scales. In the end, it doesn't greatly matter whether it takes 8 hours or a month, as it is still running on (the world's fastest) supercomputer. The estimate also assumed the CFD simulation would scale as well as the benchmark software, which it won't by a large margin, so runtimes would be considerably longer. Ultimately, industrial CFD applications don't have access to supercomputer-like resources, and it would also be unreasonable to occupy a (full) supercomputer for an entire year; the cost would be prohibitive. Let me know what you think :).
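To spell out the factor-of-100 arithmetic from this exchange (a rough sketch; the Re = 10^8 value and the month-long runtime come from the discussion above):

```python
# Extra factor introduced by rounding Re^(11/4) up to Re^3:
# Re^3 / Re^(11/4) = Re^(1/4).
Re = 1e8  # Reynolds number for the aircraft example

factor = Re ** (3 - 11 / 4)  # Re^0.25, which is (10^8)^(1/4) = 100
print(factor)

# A month-long run shrinks by that factor:
month_in_hours = 30 * 24  # 720 hours
print(month_in_hours / factor)  # about 7.2, i.e. roughly the "8 hours" above
```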
Thank you so much for the awesome video. I am not super familiar with this field, so please bear with me a little here. I have Pope's book "Turbulent Flows", but he doesn't use the theta that you do. How should I interpret the theta you are using? Is it purely to say that it is the order of magnitude that matters? Thanks in advance :D
Hi, thanks a lot for the comment and the kind words 😊 I think you are referring to the "big O" notation. In the context of this video, I used it to express that something grows similarly to whatever is inside the "big O" expression. As an example, take the time scale, which is O(1/sqrt(Re)): if you have a 4 times higher Reynolds number, your time scale halves. Hope that helps 😊
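To make the O(1/sqrt(Re)) scaling concrete, here is a minimal sketch (assuming the time scale is exactly C/sqrt(Re) for some constant C, which the big-O notation only guarantees up to such a constant):

```python
import math

def time_scale(Re, C=1.0):
    # Kolmogorov-style estimate: time scale ~ C / sqrt(Re)
    return C / math.sqrt(Re)

# Quadrupling the Reynolds number halves the time scale,
# independent of the constant C.
ratio = time_scale(4 * 1e6) / time_scale(1e6)
print(ratio)  # 0.5
```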
You're welcome 😊 Thanks for the comment. Of course, to be precise, 10^6 seconds is only 11-something days. Although there is a factor of 2 to 3 between this estimate and a month, you could argue it's still the same order of magnitude. The estimates in the video are extremely rough; we lose many significant digits along the way, and the rounding of the exponent to Re^3 is mathematically questionable :D. You should also remember that full utilization of a supercomputer requires perfect parallelization of the problem (and perfect programming), which is impossible to achieve; that alone would probably add another order of magnitude to the runtime. In the end, I think the important takeaway is the infeasibility of these simulations with our current (and future) hardware 😊
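The "11-something days" figure, spelled out as a one-liner:

```python
# Convert the 10^6-second estimate from the video into days.
seconds = 1e6
days = seconds / (60 * 60 * 24)  # 86 400 seconds per day
print(days)  # about 11.57 days
```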
This really was an awesome video, thank you so much! I'm writing my thesis on turbulent flow simulations; is the book you mentioned the only one I need to read?
Glad it was helpful! :) You're very welcome. It's hard to say whether it will be the only thing you have to read, but Pope's "Turbulent Flows" is a classical reference. Also check out Wilcox's "Turbulence Modeling for CFD" if you are interested in turbulence models.
Great video :D I have a few comments regarding DNS for anyone who wants to dive a bit deeper. 1. In reality, a low-resolution DNS won't simply fail; the solution usually still converges. The real question is how much detail (which eddy scales) you want to resolve. So don't be afraid of performing a low-resolution DNS if you only want the large-scale information, since DNS generally gives you the full physical picture at the scales it resolves. 2. There are a few ways to reduce the computational cost of DNS, since people are usually interested in only a fraction of the simulation domain. For example, an astrophysicist simulating a star-forming cloud mainly cares about the dense regions where stars can form. In such cases, methods like SMR/AMR (Static/Adaptive Mesh Refinement) help: they use a coarse grid in the regions you are not interested in and concentrate a fine grid on the dense regions. As a result, you can sometimes save a few orders of magnitude of computational cost with these methods while still getting the information you want.
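As a rough illustration of the mesh-refinement savings mentioned above, here is a sketch that just counts cells (the grid sizes, the 1% "interesting" fraction, and the coarsening ratio are made-up numbers for illustration):

```python
# Compare cell counts: uniform fine grid vs. a two-level refined grid
# where only a small fraction of the volume gets the fine resolution.
N_fine = 1024                 # fine cells per dimension (assumed)
coarse_factor = 8             # coarsening ratio outside the region of interest (assumed)
interesting_fraction = 0.01   # fraction of the volume needing full resolution (assumed)

uniform_cells = N_fine ** 3
refined_cells = (interesting_fraction * N_fine ** 3
                 + (1 - interesting_fraction) * (N_fine // coarse_factor) ** 3)

print(uniform_cells / refined_cells)  # close to two orders of magnitude fewer cells
```

With these numbers the refined grid needs roughly 80 times fewer cells; with a smaller interesting fraction or a larger coarsening ratio, the savings grow further, which is where the "few orders of magnitude" comes from.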