You know what's crazy? I was pronouncing it the correct way my whole life. But I watched like one video while researching that pronounced it the wrong way, and for some reason decided that they were right and I was wrong. So I intentionally changed how I pronounced it to the wrong way. I'm actually so dumb lmao
@@copywright5635 that's crazy; on a similar note, I'm convinced that Italian physicists can't pronounce "Dirac" correctly; no professor at my uni said the poor guy's name right. (Hasty generalization, I know, kinda funny tho)
Great video! It's funny that 3Blue1Brown's manim environment became the official animation framework of YouTube math videos. Every time I see a video made with manim, I know before even watching that it's going to be a good one. It never disappoints.
Runge-Kutta still leaks energy. For equations of motion, the one integration scheme to rule them all is Velocity Verlet. Its energy error stays _bounded_ over arbitrarily long runs instead of drifting (apart from floating-point round-off), it's 2nd order, and it's just as computationally cheap as Euler.
To make it easy for others: en.wikipedia.org/wiki/Verlet_integration#Velocity_Verlet
Yes, I decided to leave off symplectic integrators. But of course, Verlet and Velocity Verlet would have been better options. Seeing a lot of comments about this, I will probably do a sequel to this video in the future.
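For readers who want to try it, here's a minimal Velocity Verlet sketch for a spring system (illustrative names, not from the video's code; k/m = 1 is assumed):

```python
# Velocity Verlet for a spring with a(x) = -(k/m)*x.
# Illustrative sketch only - names don't come from the video's code.
def velocity_verlet(x0, v0, k_over_m, dt, steps):
    x, v = x0, v0
    a = -k_over_m * x
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = -k_over_m * x             # only one new force evaluation
        v += 0.5 * (a + a_new) * dt       # velocity uses averaged acceleration
        a = a_new
    return x, v

x, v = velocity_verlet(1.0, 0.0, 1.0, 0.05, 10_000)
energy = 0.5 * v * v + 0.5 * x * x        # oscillates near 0.5, never drifts
```

Over 10,000 steps the total energy stays within a narrow band around its initial value, whereas explicit Euler's energy grows without bound.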
Idk man, seems like a YouTube glitch or something, but it's missing a 'k' after the 898 in your sub count. Hope YouTube fixes the issue soon. Once again, thanks for a great video
This is a great intro to approximation schemes! Really well explained and I love how you included all the animations to visualize what happens for each method. It was very helpful to see how drastic an effect the lack of energy conservation can have.
Nice video! However, at 9:38 it should be noted that the "order" of a method does not refer to the number of terms/stages (k1, k2), but rather to where the Taylor series is truncated. This means that a 2nd-order method will exactly match the Taylor series up to the x^2/2! term within each time step, while the following terms (x^3/3!, ...) are inexact or missing. For some fully implicit methods (Gauss-Legendre), the order can be twice the number of stages. (They're computationally expensive and I wouldn't recommend using them often, but they provide impressive results for large time steps.)
Thank you for mentioning this. I considered including it, but since I didn't go through the Taylor series derivation, I thought it would just confuse most viewers.
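The order-vs-stages distinction above can be checked numerically: for a p-th-order method, halving the step should cut the global error by about 2^p. A sketch using the explicit midpoint (RK2) method on the test equation x' = -x (my own example, not from the video):

```python
import math

# Order check for explicit midpoint (RK2) on x' = -x, x(0) = 1,
# whose exact solution is exp(-t): halving the step should cut the
# global error by about 2^2 = 4.
def rk2_solve(f, x0, t_end, dt):
    x, t = x0, 0.0
    for _ in range(round(t_end / dt)):
        k1 = f(t, x)
        k2 = f(t + 0.5 * dt, x + 0.5 * dt * k1)   # slope at the midpoint
        x += dt * k2
        t += dt
    return x

f = lambda t, x: -x
err1 = abs(rk2_solve(f, 1.0, 1.0, 0.01) - math.exp(-1))
err2 = abs(rk2_solve(f, 1.0, 1.0, 0.005) - math.exp(-1))
ratio = err1 / err2   # close to 4 for a 2nd-order method
```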
I love this kind of video. I work on driveshafts, which are rotational mass-spring-damper systems to some degree. I loved doing the Taylor expansion, and I wrote some homework about how much accuracy you gain for the increased compute as you raise the order.
Nothing in the universe is "powered by differential equations"! Differential equations CAN DESCRIBE a lot of things. This is very important to understand.
2 minutes in, I can already say: I like your style! Just one remark: I think it would be better to clarify that we can best describe nature with the help of differential equations, instead of saying that nature is governed by differential equations (even if it were true that we live in a simulation, that would be outside the scope of scientific thinking). This reminds me of talking about charged particles "feeling" a force and thereby intentionally reacting to it, or explaining Darwinism by active adaptation to a changing environment, or even "lonely atoms which want to form bonds to share electrons" to fulfill some god-given octet rule so that all of them can live a long and happy life, and every other thing our teachers taught us "although we should keep in mind that this is just a simplification" - although the concepts you are about to explain (from my point of view at this moment, at minute 2 of your video) exactly oppose these views, of course. But keep in mind: now the maths begins, and view counts will only drop from here on. To be clear: I really like your video! Narration, animation, general style: wonderful! I had a look at your channel, and I will watch a few more of your videos.
Thank you for the feedback, yes I should have been more clear about that. I was mostly focused on the mathematics here, but of course, everything physical we describe with mathematics is just a model. I'll keep this mentality in mind for future videos.
I think metaphysics is part of science. Many great scientific insights came from studying metaphysics, i.e. what is true, how much of the truth can we know, and what is that truth composed of? And there may be a correspondence between our models and reality, i.e. reality = diff equations. It's interesting enough for you to bring up, and I think talking about this strengthens our minds... and it can give us insights into math from the point of view of physics, and vice versa.
The RK algorithms are a very fascinating topic, and I've even implemented a few of them in a C++ application before, specifically RK4. Yet I still think that FFTs and their inverses are some of the most interesting algorithms out there. Complex vector and field analysis, Hamiltonians, especially the quaternions and octonions, and so much more are all interesting topics. We stand on the shoulders of giants! I truly enjoy videos like these, keep up the great work!
Thanks! FFTs are a topic I think deserve a longer and more dedicated video, but I'm considering doing one on them. Even for this, I barely touched the surface level of what RK is (didn't even derive RK4 lol), just wanted to provide some intuition for it. Both for the motivation, and why it would be more accurate than a simple Euler-Cromer method
@@copywright5635 Not exactly, but it's similar to why quaternions prove to work better than Euler angles when performing rotations in 3D about the independent axes. When using Euler angles there is the phenomenon of gimbal lock within the rotation matrices: one axis ends up being rotated onto another so that they become coincident, and from there you lose a degree of freedom, as the two axes are locked and you can no longer differentiate between them. Quaternions help to prevent this. Also, even though the mathematical notation and expressions are fairly complex, implementing quaternions in software is fairly trivial, and they have the very nice added benefit of being able to be calculated against other vectors and matrices, as well as converted to them. Because of this, they are computationally cheap, very efficient, and quite effective. It's not exactly the same, but it goes to show that the various Euler methods, although simpler to digest and work out by hand, also have their shortcomings. FFTs just provide a very good and efficient way to transpose from one system to another, especially when working with wave patterns or anything with a frequency domain. Without FFTs, audio processing - whether a WAV file, a MIDI file, or even an MP3 file - wouldn't be as efficient as it is today. Audio, even when compressed, requires a lot of information and can be fairly computationally expensive. FFTs reduce that by a couple of orders of magnitude. Instead of trying to perform 20 thousand sine or cosine function calls per second for a 20kHz frequency, we can just sample it and use the sample rate to reconstruct a good-enough approximation of the individual waves. Well, sort of, as that's the abridged version. But yeah, I find it all very interesting and intriguing. And I'm not just intrigued by this type of stuff either.
I'm also intrigued by 3D graphics rendering, game engine physics simulations, as well as actual CPU hardware design (ISA design). Then again, this gets into physics when you go beyond the logical device level and into the actual structure of the transistors, resistors, etc. that are designed to manipulate electricity. And here we are again, with wave propagation - right back to wave functions and the power of FFTs lol! I just like things related to engineering. Factorio, Satisfactory, Dyson Sphere Program, Oxygen Not Included, Planet Crafter, Mindustry, Turing Complete, etc. - they're all part of my Steam library and are my hobbies. And I'm no stranger to music, as I played the trumpet for close to 10 years back in my school days.
If you like this and Fourier methods, you should check out dispersion and dissipation analysis (sometimes referred to as "Fourier analysis") for ode solvers (and pde solvers too, but that's a bit more complex). It essentially allows someone to understand how a solver will respond to any initial condition of a linear problem.
You need to look into Clifford Algebras and Geometric Calculus. Try Macdonald's two books "Linear and Geometric Algebra" and "Vector and Geometric Calculus". Books are small and concise. He and a couple other people really blew the field open not long ago. Tensors and quaternions are subsets. Clifford makes them much simpler. Computer graphics is using this now, especially sims and games.
I'd say equations are logical abstractions, they don't exist outside of human understanding, nature does. Equations describe natural interactions in simple notation. In short, it's too abstract to be considered above nature.
Actually, nature doesn't give a fk about what we think. It has rules/constants that cannot be crossed, and axioms such as discreteness and continuity, and it happens to be well described by differential equations because we cannot imagine a stopped frame of time, so we need to measure change in variables (differentials) to understand the universe
Another method (or sub-method) that maybe deserved a mention is so-called leapfrog integration, where the average derivative for x_i is taken from the previous tick's acceleration and a value extrapolated halfway towards the next one. It's sort of similar to RK2, but the samples are offset back by half a tick. It's relatively stable, and unlike RK2 you don't actually need to compute the derivative twice for each tick, as the first one is carried in from the previous iteration.
Yes, this is also a good method. I just wanted to keep the video simple. Perhaps I should have teased the sequel video at the end. I'm getting a lot of responses about this, so I think I'm going to make a sequel to this video covering Symplectic Integrators, along with some others like leapfrog
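A sketch of the staggered leapfrog scheme described above, for a unit spring with a(x) = -x (illustrative names; note only one new acceleration per step, carried in from the previous kick):

```python
# Staggered leapfrog for a unit spring, a(x) = -x. Velocity lives at
# half steps, so each step needs only one new acceleration.
def leapfrog(x0, v0, dt, steps):
    x = x0
    v_half = v0 + 0.5 * dt * (-x0)       # half-kick: velocity to t = dt/2
    for _ in range(steps):
        x += dt * v_half                 # drift position a full step
        v_half += dt * (-x)              # kick with the new acceleration
    v = v_half - 0.5 * dt * (-x)         # sync velocity back to the full step
    return x, v

x, v = leapfrog(1.0, 0.0, 0.05, 10_000)
energy = 0.5 * v * v + 0.5 * x * x       # bounded, no secular drift
```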
Great video and VPython code! Thanks for sharing. Just one heads-up: I think there's a tiny blunder in one of the equations around 2:20. The velocity goes with sin(), not cos(). And I also vote for modelled, not governed ;)
Wow, I honestly had no idea these methods were even connected! Thank you for the straightforward explanation and visualizations. Top-notch content, Sir.... 👍
Amazing video! I loved your pace and your little jokes; they really help you stay engaged with the presentation. The visualisations are of course really good too. :)
Very clear! Implicit Gaussian collocation for the win though! (For numerically fully conserving skew-symmetric use cases.) EDIT: But you already mentioned symplectic integrators in one of your responses.
@@copywright5635 Thanks! Subbed, as I'm looking forward to hearing more about (symplectic) integrators. I've used Gaussian collocation for simulations with conservation accuracy down to machine precision (with the help of a skew-symmetric PDE form), which is cool, but I can't say I ever really understood what symplecticity means, nor how to derive, e.g., exponentially symplectic variants.
Oh, this video is unfortunately a little bit late for me - I just had my numerical simulation exam last month ;) Still watching since it got recommended to me, and it's so fascinating how people came up with such things decades ago!!!
I think I learned some things. sqrt(k/m) is an angular frequency - it's what multiplies t inside the trig functions. And because the solution uses regular trig functions, and not hyperbolic ones, there's an imaginary component in the exponent. See, e^t is hyperbolic, e^(i*t) is trig. That runs counter to what I was thinking about massless stuff being trig. So there must be a situation where both are true. Maybe the spring constant is the angular part.
1:09 this sound is like the Undertaker's unexpected entrance during an ongoing wrestling match 😅 I haven't watched the complete video; I'll leave a 'review' comment after watching it, but it seems the video will be awesome and informative.
I recall Euler's method being introduced to me explicitly as a tool that does not produce good approximations, but rather convergent ones, which is useful for proving existence of actual solutions.
From my days in college, when I took Numerical Analysis, I had an idea. Do the iterative equation that, with each iteration, produces Output values relative to increasing Time… BUT… for each Output value, use it as a STARTING VALUE for the iterative equation method known as *Picard’s Method.* When Picard’s Method iterates, it STAYS ON a single Time value (i.e. as you iterate, Picard’s Method doesn’t “move you” along the Time axis.) I never got to try my idea, but I always wanted to. Overall, the idea is that you keep switching back ‘n forth between an Increasing Time iterative equation AND a Picard’s Method iterative equation. For each “invocation” of Picard’s Method, you perform enough iterations until a suitable degree of convergence is achieved.
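For the curious, here's what plain Picard iteration looks like at a single fixed time value - just the Picard half of the hybrid proposed above, sketched on the test equation x' = x, x(0) = 1, whose solution is e^t. The trapezoidal quadrature grid is my own choice, purely illustrative:

```python
# Picard iteration at a fixed time t: each pass integrates the previous
# guess, x_{k+1}(s) = 1 + integral_0^s x_k, converging toward exp(t).
# Note the iteration never advances t, matching the comment above.
def picard(t, iterations, n_quad=1000):
    h = t / n_quad
    x = [1.0] * (n_quad + 1)              # initial guess x_0(s) = 1
    for _ in range(iterations):
        new = [1.0]
        acc = 0.0
        for i in range(n_quad):
            acc += 0.5 * (x[i] + x[i + 1]) * h   # trapezoidal integral
            new.append(1.0 + acc)
        x = new
    return x[-1]

approx = picard(1.0, 12)   # converges toward e, up to quadrature error
```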
I saw one comment about how differential equations only "model reality and not govern it", and thought "hey, sometimes one or two random philosophical comments are good" and upvoted it. Then I realised half of this comment section is obsessed with that one phrasing for some reason 😄 Working with scientists, I've gotten used to the "this system is governed by" phrasing so much that to me it seems a weird thing to get hung up on. But I guess it's never a bad thing to get a reminder that the map is not the territory, or even a bunch of redundant reminders 😂
Nice! I coded RK4 and other methods in Python for a 3-body problem simulation. RK4 and Velocity Verlet were way more stable than Euler, or even a 2nd-order Taylor series, when we consider conservation of energy. Thank you for the video!
This is great. You seem to be getting a bit of flak from the very knowledgeable on the subject, but I feel the video is directed at those like me, who *in theory* can follow the equations but have trouble with the 'where are we going with this' part. And in that respect the video succeeds with flying colours. So, thank you!
Hey, a minor correction: Matlab's ode45 uses the Dormand-Prince Runge-Kutta pair, which has 7 stages (k_i) but is a 5th-order method with an embedded 4th-order estimate used for step-size control. The order can be lower than the stage count because, past 4th order, satisfying all the order conditions requires more stages than the order itself.
So, one gripe... Nature isn't GOVERNED by differential equations... it can be modeled by them... and really only a small, tiny portion of nature. It's not like Nature was consulting a math text and decided, hey... that sounds fun...
You might think my comment is mean or unsupportive of your work. But actually I really enjoy watching your video. The problem is you didn't answer your title. You didn't explain why the method is better than Euler. You didn't derive it or show any reason why it is more stable. You just showed graphically how it plays out but never actually proved anything about it
Thank you for the feedback. I think your concern comes from a differing opinion on what "why" means. I do concede that the title is not true in a strict mathematical sense (I didn't derive RK after all). However, I did provide reasoning for why RK2 (and by proxy RK4) seem to conserve energy better than an Euler method. I could have mentioned Symplectic Integrators, or even the Euler-Cromer method (which only changes one term, yet conserves energy for these problems). RK methods also don't inherently conserve energy, they simply converge much faster. I approached this video with the notion in mind that it is unclear why for certain systems, RK methods, even RK2, seem to simulate so much better than standard Euler Methods. I wanted to provide a sort of intuitive motivation, and I think I accomplished this. You are correct in saying I did not mathematically prove anything about Runge-Kutta methods. This was never the intention. I apologize if you found the title misleading. TL;DR, The "why" in the title is not the "why" of a mathematician. It's the "why" of an engineer or experimental physicist.
Thank you so much! I hear about Runge-Kutta so often at the lab but never understood it until now! But it bothers me that pretty much the same math (at least in my brain) has so many different names: finite difference, Euler, Runge-Kutta, Taylor expansion... I am bad with names :')
It's similar, but there are differences, as I outlined in the video. The point of this was mostly to show why we use "RK4", and what it is, since that term is often thrown around without actually understanding how it works
@@copywright5635 Yeah a lot of people in my field just say "we use Runge-Kutta" then use ODE45 without thinking about what's behind the scene. The video is great!
Yes! I really wanted to incorporate this into the video, but I wanted to get it out before SoMEπ ended, so I ended up not incorporating it. I'm not sure where I'll do this, but maybe I'll make a video on my patreon or a second channel demonstrating this. Otherwise, a sequel covering symplectic integrators will be coming at some point!
I would point out that even the "exact" answer is an approximation, because you have to approximate the value of sine or cosine in order to draw the graph or get a numerical result for the position of the object on the spring. Now, I know that you can easily calculate the value of the trigonometric functions to far more accuracy than you need - but those numbers are still calculated by an approximation algorithm.
Of course this is correct. I figured including this would be a bit off topic, as the approximation we're concerned with in the video is of the "initial value problem" type rather than for function values. Thank you for the comment though
It's clear that the higher-order algorithms are more exact per timestep, but they're also more computationally expensive, since they calculate multiple derivatives per timestep. It would've been nice to see how exact each algorithm is per derivative evaluation, because it might be more efficient to use a smaller time interval with a lower-order algorithm than to use a higher-order algorithm.
0:21 everything in nature isn't governed by differential equations (DEs); DEs describe nature, they don't govern it. I know it can be seen as a nitpick, but I felt that the semantic difference between 'governing' and 'describing' was big enough to warrant the comment. The rest of the video was great!
The "easiest" step up from explicit Euler and implicit Euler is "semi-implicit Euler", because you just need to swap a line of code and you get 10x better results than with either method. Runge-Kutta 2+ is the step after that.
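To make the one-line swap concrete, here's a sketch for a unit spring with a = -x (illustrative code; the only difference is that the semi-implicit version updates v first and then uses the new v for x):

```python
# Explicit vs semi-implicit (Euler-Cromer) Euler for a unit spring.
def explicit_euler(x, v, dt, steps):
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-x)   # both updates use old values
    return x, v

def semi_implicit_euler(x, v, dt, steps):
    for _ in range(steps):
        v = v + dt * (-x)                  # velocity first
        x = x + dt * v                     # position uses the NEW velocity
    return x, v

def energy(x, v):
    return 0.5 * v * v + 0.5 * x * x

xe, ve = explicit_euler(1.0, 0.0, 0.05, 2000)
xs, vs = semi_implicit_euler(1.0, 0.0, 0.05, 2000)
# explicit Euler's energy blows up; the semi-implicit one stays near 0.5
```

On this run the explicit version's energy grows by roughly two orders of magnitude, while the semi-implicit one oscillates within a few percent of the true value.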
For any linear system (such as the one modeled here), a discrete state-space model can be accurate even with a coarse time step. If you model a single iteration accurately, then you have a template that can be applied cheaply at each iteration.
yes this is of course true. I'm covering other systems in another video that should be out within ~1 month (maybe longer). That one will focus on symplectic integrators
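For the spring system, the comment above amounts to computing the exact one-step propagator, which is a rotation. A sketch (illustrative code; I'm assuming unit mass and that w = sqrt(k/m) is known):

```python
import math

# Discrete state-space "template": for x'' = -w^2 x the exact one-step
# map is a rotation of (x, v/w), so building it once gives
# machine-precision results at any step size.
def exact_stepper(w, dt):
    c, s = math.cos(w * dt), math.sin(w * dt)
    def step(x, v):
        return c * x + (s / w) * v, -w * s * x + c * v
    return step

step = exact_stepper(1.0, 0.5)   # deliberately coarse time step
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = step(x, v)
# after t = 500 the state is still exact, up to round-off
```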
When I was in high school I asked my math teacher how to find the square root of a number, and she told me to use a calculator. I told her I didn't have a calculator. She said to look in the back of the book. I realized she had no idea how to do more than basic math. I looked up how square roots are computed, and I realized no one knows how tf to write them down exactly. Like, literally, there's no closed-form formula for it. All we can do is refine the approximation closer and closer to the desired precision, but we can never produce an exact number. To me, this means that the universe is in no way mathematical, and that we are just approximating as best we can with the limited thought processing we have 🤔🤷♂️
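For what it's worth, the standard way to compute a square root is Newton's (Babylonian) iteration - exactly the "closer and closer" refinement described above, and roughly what calculators and math libraries do under the hood. A sketch for positive inputs:

```python
# Babylonian / Newton iteration for sqrt(a), a > 0: repeatedly average
# the guess with a/guess. Converges quadratically (digits double each step).
def my_sqrt(a, tol=1e-12):
    x = a if a > 1 else 1.0          # any positive starting guess works
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)        # average the guess with a/guess
    return x

root = my_sqrt(2.0)                  # ~1.41421356..., in a handful of steps
```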
Another integration method you could consider is the Verlet one: third-order error for position and second-order error for velocity. It is widely used in games, since we also care about object interactions, and with Verlet this is really easy. We can enforce non-penetration constraints without necessarily applying a force to those objects, just by displacing their positions, and still not completely break the system. Obviously not physically correct, but robust and somewhat believable.
During university I did a project on halo orbits and used an RK method of order 10. During the exam the professor asked why I didn't use a symplectic method (one that preserves the energy): RK still had an energy error on the order of machine precision and was much faster.
Yeah, RK is really good for a lot of things. Also, symplectic integrators are not 100% accurate either. Though Velocity Verlet is faster than RK4 and is quite good as well.
Hm, could be a good topic, maybe as a sort of sequel to this video? I'm trying to not present topics in a super dry manner. I'd rather motivate them first, so perhaps continuing the conservation of Energy throughline (or Hamiltonian ig) would be good for that. Thanks for the suggestion.
If one of the problems is unwanted gain or loss of energy as the approximation proceeds, are there methods that calculate the total energy initially and after each step and compensates for energy gain or loss as it goes?
Cool [B)] There are other methods which use a constraint to ensure energy is exactly conserved (perhaps at the cost of accuracy in other ways, or computational cost? Not sure) right? Edit: nice bonus
Mhm. Though, Runge-Kutta methods are generally preferred in almost all cases. Euler's method is just generally a bit easier to implement. There are of course many other approximation schemes.
Actually, there is a class of ODE solvers called symplectic integrators which work incredibly well for Hamiltonian differential equations. For example, if you're doing simulations of satellite orbits, a symplectic integrator will allow for accurate and stable simulations over very long periods of time.
@@jameswright4732 Yes, I was debating on including them in this video. I decided against it as I wanted to keep the focus narrow, and the video not too long. I may end up doing a sequel to this video covering Symplectic Integrators.
Thanks for sharing your videos. I love to see how everyone has a different perspective. Would love to see more animations / videos on computational / numerical methods - difference equations, Runge-Kutta (regardless of pronunciation : ), Fourier transform. Check out GoldPlatedGoof ‘Fourier for the rest of us’. I bet you could do a very interesting video on his Dot Product / Fourier relationship. The ability to represent any curve with Fourier epicycles is truly mindblowing! Thanks, keep it up!
For some reason, there are a lot of weird comments only trying to correct something here and there, without recognizing the relevance of what you're doing with this video. After a semester of tedious differential equations classes without any proper visualization or illustrations to follow from my professor, or even the connection with physics, this video just recovered my passion for the subject. Thanks a lot for the video and for the effort!
Couldn't you just make energy conservation explicit? That is, calculate the total kinetic + potential energy in the system at t0 and then adjust velocity or velocities at every subsequent step to force the total energy to match?
No, because energy conservation is fundamentally at odds with "velocity adjustments" (i.e. impulses.) In other words, they are what got you there in the first place. By the time you've noticed a physically incorrect circumstance, it's already too late to "fix" it in a physically correct way. For some very simple and often frictionless contexts, we actually do have exact solutions in terms of energy conservation, but for almost all Lagrangians we have only approximation methods, and we can only improve their accuracy by really including the higher-order terms.
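For concreteness, here's a sketch of the naive rescaling idea discussed in this thread, for a unit spring: take an explicit Euler step, then scale the speed so kinetic + potential matches the initial energy. It pins the energy by construction, but - as the reply argues - it doesn't repair the trajectory; the phase error of Euler remains. Illustrative code only:

```python
import math

# Naive "energy projection": explicit Euler step, then rescale the speed
# so that 0.5*v^2 + 0.5*x^2 returns to E0. Energy is held by fiat, while
# the trajectory itself is still only first-order accurate.
def euler_with_rescaling(x0, v0, dt, steps):
    E0 = 0.5 * v0 * v0 + 0.5 * x0 * x0
    x, v = x0, v0
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-x)     # plain explicit Euler step
        kinetic_target = E0 - 0.5 * x * x    # what 0.5*v^2 should equal
        if kinetic_target > 0:               # skip near turning points
            v = math.copysign(math.sqrt(2 * kinetic_target), v)
    return x, v

x, v = euler_with_rescaling(1.0, 0.0, 0.05, 2000)
energy = 0.5 * v * v + 0.5 * x * x           # held near 0.5 by construction
```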
holy fuck. I hate math, and I hate physics. I don't get any enjoyment out of them, but I do get enjoyment out of self-improvement. I have a weird obsession with the term "level up" and anything related to it. You just earned my subscription because this is good stuff; it could help me level up
Does the error of the 4th order Runge-Kutta always stay below 0.09 in this case? I notice that it never seems to go above that value which seems quite surprising.
Nope. Though, it does sound similar I agree. All the music I use in this video is in the description. Even with classical music I'm trying to only use stuff that's either public domain or Creative Commons licensed
I've been wondering for some time how to extend the time step in Velocity Verlet. Could you extend Velocity Verlet using the same logic as here? How about a VV extension which is higher order in the force (VV kinda assumes the force doesn't change during the time step)?
Perhaps, though not a full video dedicated to it. I'm thinking about doing a sequel to this covering symplectic integrators, as that seems like the next logical step. Though, that's not for AT LEAST a month
So is it correct to say that the Runge-Kutta method is essentially repeating the process of following a curve's tangent until another point on that curve, and taking that tangent too, repeat?
I think you have the right idea. If you want a more rigorous definition, here's an MIT article on that. web.mit.edu/10.001/Web/Course_Notes/Differential_Equations_Notes/node5.html A lot of approximation involves taking tangent lines (linearization), so it's a bit hard to distinguish between them if you think of it that way
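To make the tangent picture concrete: one tangent per step is forward Euler; RK methods sample extra trial tangents inside the step and average them. A sketch comparing Euler with Heun's RK2 on x' = -x (my own example, not from the video):

```python
import math

# Forward Euler follows one tangent per step; Heun's RK2 also samples the
# tangent at the Euler endpoint and averages the two slopes.
# Test problem: x' = -x, whose exact solution is exp(-t).
def euler_step(f, t, x, dt):
    return x + dt * f(t, x)            # follow one tangent a full step

def heun_step(f, t, x, dt):
    k1 = f(t, x)                       # tangent at the start
    k2 = f(t + dt, x + dt * k1)        # tangent at the Euler endpoint
    return x + 0.5 * dt * (k1 + k2)    # average the two slopes

f = lambda t, x: -x
xe = xh = 1.0
t, dt = 0.0, 0.1
for _ in range(10):                    # integrate to t = 1
    xe = euler_step(f, t, xe, dt)
    xh = heun_step(f, t, xh, dt)
    t += dt

exact = math.exp(-1.0)
# Heun lands much closer to exp(-1) than plain Euler at the same step size
```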
I found this video fascinating, and very cool overall. [Subscribed] It is surprising how rare it is to see the words "Runge-Kutta" compared to Euler tho. However, with deep respect re: @0:20, NOTHING in nature is "governed" by differential equations, rather differential equations allow us to see how nature is governed. (nice bonus at the end!)