Hi everyone! My goal with this channel is to intuitively explain the mathematical concepts behind quantum mechanics in a way anyone can understand. In the future, I look forward to making videos about other concepts in math and physics.
A bit about me: my name is Brandon Sandoval and I recently graduated from Stanford with a degree in physics. In Fall 2022, I am very excited and lucky to be starting my PhD in physics at Caltech. If you have an idea for a video or have any questions, go ahead and shoot me an email! Thank you for visiting!
Thank you so much for this series. I remember failing my Quantum Mechanics II class in university more than 10 years ago, and I subsequently never really studied the field in any detail. This not only brings back a lot of old memories, but things are also finally falling into place in my brain. Really helpful, and it feels very good to finally start to truly understand the meaning of all these bras, kets, and operators. Thanks!
The result d2f/dx2 = 2(<f>around − f)/dx2 can be derived straightforwardly by expanding the definition of the derivative as the limit of the incremental ratio twice. No need for a Taylor series here.
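A minimal numerical sketch of this relation (my own check, not from the comment), using the central difference f''(x) ≈ 2·(average of the two neighbors − f(x))/dx²:

```python
# Central-difference check: f''(x) ≈ 2 * (average of neighbors - f(x)) / dx**2.
# Test on f(x) = x**3 at x = 1, where the exact answer is f''(1) = 6x = 6.
dx = 1e-4
x = 1.0
f = lambda t: t ** 3
avg_neighbors = (f(x + dx) + f(x - dx)) / 2
second_deriv = 2 * (avg_neighbors - f(x)) / dx ** 2
print(second_deriv)  # close to 6
```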
There must be a connection between exponential and logarithmic increase and decrease of charge density between positive and negative charges: as a charge moves toward or away from another, it forms potential energy, analogous to gravitational potential energy.
Thank you for this series. In reference to the i in the equation, you refer to the i as needed to conserve probability. What do you think about what my browser's AI said when I asked, "Does the i in the Schrödinger equation conserve probability?"

"The Schrödinger equation is a fundamental equation in quantum mechanics that describes the time evolution of a quantum system. The equation is given by:

iℏ(∂ψ/∂t) = Hψ

where ψ is the wave function of the system, H is the Hamiltonian operator, i is the imaginary unit, ℏ is the reduced Planck constant, and t is time. The question of whether the i in the Schrödinger equation conserves probability is a subtle one. The answer is that the i does not directly conserve probability. However, the Schrödinger equation as a whole does conserve probability.

Local Conservation of Probability: The Schrödinger equation conserves probability in a local sense, meaning that the probability of finding the system in a particular state remains constant over time. This is because the probability density |ψ(x,t)|² is a conserved quantity, meaning that it does not change over time.

Non-Local Conservation of Probability: However, the i in the Schrödinger equation does not directly conserve probability. In fact, the imaginary unit i is responsible for the non-local behavior of the wave function, which can lead to non-conservation of probability in certain situations. For example, in the presence of a potential barrier, the wave function can tunnel through the barrier, leading to a non-local transfer of probability. This is known as quantum tunneling.

Conclusion: In conclusion, the i in the Schrödinger equation does not directly conserve probability. However, the Schrödinger equation as a whole conserves probability in a local sense, meaning that the probability of finding the system in a particular state remains constant over time. The non-local behavior of the wave function, facilitated by the imaginary unit i, can lead to non-conservation of probability in certain situations."

THANKS AGAIN FOR THE SERIES OF VIDEOS! Geoffrey Faust
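For what it's worth, the claim that the Schrödinger equation as a whole conserves probability can be checked numerically: a Hermitian H gives a unitary propagator, so the norm of ψ stays 1 at all times. A toy sketch (my own, using a made-up 2-level Hamiltonian with ℏ = 1):

```python
import numpy as np

# Toy 2-level system with a Hermitian Hamiltonian (entries are made up).
H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)
E, V = np.linalg.eigh(H)                  # H = V @ diag(E) @ V†
psi0 = np.array([1.0, 0.0], dtype=complex)

for t in (0.0, 0.7, 3.1):
    # Exact propagator U = exp(-i H t), built from the eigendecomposition.
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
    psi = U @ psi0
    print(t, np.vdot(psi, psi).real)      # total probability stays 1
```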
This is making no sense. We should not equate time and space in that way. Although they are similar, they are inversely proportional to each other, which is why dt should be placed inversely to dx in the equation. That is also why the system will nearly, or may soon, break down: because of a bad math teacher in college whom I debated and who gave me a low grade in an advanced engineering math class.
I used to think of second derivatives as second dimensions of something... they are not! It is more like you can fit all the dimensions you want even into the first derivative, as in an average; hint: people do it with linear algebra all the time. It is beyond me why I would want to put all the dimensions into the second derivative, but it is intuitive enough to say that the equations become clearer, because now you see not a line but a contour (a point on a line, a line in 2D, a ball in 3D, etc.); another average. Learning the limit definition of the first derivative is easy, but it is even more useful at the second-derivative level. The meaning is no longer dimensional; it is transcendental and humbling in explaining so many equations that describe nature.
For anyone who is a complete beginner in quantum mechanics: before watching the video, try reading QCQI by Nielsen & Chuang. It might improve your understanding. Thanks for the series, sir.
You need to go back and study basic calculus. You represented a finite range along the x-axis with a differential, dx, which is an infinitesimally small quantity. You should have used delta x, which has a finite value. I don't know what you studied in math, but where I studied it, the meaning of the 2nd derivative was clearly explained. In fact, we went as far as the 4th derivative, much to our chagrin, since we were engineers and the prof was ego-tripping at our expense. If the first derivative of a curve is a straight line with a slope, what do you suppose the derivative of that straight line may be? Since the 1st derivative is the instantaneous rate of change of a tangent line to a curve, what do you suppose the instantaneous rate of change of that tangent line may be? The problem I found with Feynman was his smart-assed attitude. He once implied to a group of students in a lecture in New Zealand that he could not explain his theory to them because they were too stupid to understand it. Feynman lived in a world of thought experiments, much like Einstein, in which nothing could be observed or proved. How convenient. Of course, scientists claim to have verified their theories, but those scientists were often groupies going along to get along.
Where the first derivative is a tangent telling you the rate of change, like the shift in a change of state, the second derivative is a secant, a measure of curvature. In Hooke's law it focuses value from the field into the spring. If you are talking about energy from the field subject to weak mixing, that angle applies to the secant to establish the focus of position = mass. Equilibrium for a set is defined by its curvature.
Oh my god, is this how they teach this on the other side of the puddle? Don't you talk about velocity and acceleration? For me, the first derivative at x0 is just the coefficient for a line (g(x) = x) to be tangent to f at x0. And, by extension, the second derivative at x0 is just the coefficient (more stretched, wider, or upside down) for a parabola to match the surroundings of f(x0). Take, for example, the second derivative of x³ (and forgive my unrigorous and improper vocabulary; I'm half asleep, and English is not my first language):
- Approaching x = 0 from the left, it comes wildly up from minus infinity, gradually getting less wild as a parabola, so the values must go from a wild inverted parabola (−2x², −1x², ...).
- At x = 0 it pretty much looks like a flat line, so (0x²).
- From 0 onward, the opposite is observed: it evolves like a parabola but gets wilder as you go further (1x², 2x², ...).
Naturally, this evolves as a parabola getting wilder in proportion to x; since x³ is x²·x, we can see the second derivative of x³ goes like x (−2, −1, 0, 1, 2), and I can get an easy idea of this for whichever function I see, especially if it describes a physical phenomenon. For example, where can you find systems where a variable behaves parabola-like? (Simplifying:) driving a car and stepping on the accelerator. If you press the accelerator slightly, the position of your car evolves as a parabola (ignoring friction with the surface); the second derivative of your car's position is how hard you press the accelerator. The same goes for the brake, which would be a negative constant value (until the car stops). I must admit I only watched half of the video (and I am sorry), as I saw this was getting so complicated for something that, in my view, can be explained so easily. I still can't believe the examples presented here are not ideas that the normal college student associates naturally with derivatives.
So if I am wrong, forgive my arrogance and please show me what reality is. As for third and higher derivatives, they would be "how much of an x³ are the surroundings of f(x) here?", but with x³ being flat at 0, they don't provide much useful information, really, unless you're talking about velocity, where the concept of jerk (over-acceleration) exists, and other more specific cases. The same for the rest.
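A quick sketch of the car analogy above (my own illustration, with a made-up constant pedal input a): under constant acceleration, position is a parabola, and a finite-difference second derivative of position recovers the acceleration:

```python
# Constant "pedal input" a: position follows x(t) = x0 + v0*t + a*t**2/2,
# so the second derivative of position is exactly a.
a, v0, x0 = 2.0, 0.0, 0.0        # made-up numbers for illustration
pos = lambda t: x0 + v0 * t + 0.5 * a * t ** 2
dt = 1e-3
t = 1.0
second = (pos(t + dt) - 2 * pos(t) + pos(t - dt)) / dt ** 2
print(second)  # close to 2.0, i.e. the acceleration a
```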
Given a norm, the length of a vector is the same under every orthonormal basis, and since we must also have the probabilities add up to 1, assuming length = 1, it seems convenient to define the probability as ci*·ci so that it adds up to <ψ,ψ> = ||ψ||² = 1. Less formal, but perhaps easier to understand.
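A small numerical sketch of this (my own, with an arbitrary random state and basis): for a normalized ψ, the coefficients in any orthonormal basis satisfy Σ ci*·ci = <ψ,ψ> = 1:

```python
import numpy as np

# Random normalized state and a random orthonormal basis (via QR); both made up.
rng = np.random.default_rng(0)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)                       # normalize: ||psi|| = 1

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
c = Q.conj().T @ psi                             # coefficients c_i = <e_i, psi>
print(np.sum(np.abs(c) ** 2))                    # close to 1, basis-independent
```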
Any hint about what you're getting at with the final example? The heat-equation one is intuitive to me, but I'm not sure what is meant about higher energy being related to shorter wavelengths. Is it something to do with the higher curvature at the crests of the waves for a shorter wavelength?
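One way to see the wavelength point numerically (my own sketch, not the video's code): for a wave sin(kx), the curvature term −d²/dx² returns k²·sin(kx), so a shorter wavelength (larger k) means a larger kinetic-energy term:

```python
import numpy as np

dx = 1e-4
x0 = 0.3   # arbitrary sample point where sin(k*x0) != 0
for k in (1.0, 2.0, 4.0):
    f = lambda x: np.sin(k * x)
    # Curvature term -(d^2/dx^2) f, estimated by a central difference.
    curv = -(f(x0 + dx) - 2 * f(x0) + f(x0 - dx)) / dx ** 2
    print(k, curv / f(x0))   # ratio is close to k**2: shorter wavelength, more energy
```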
Very cool! I was thinking about how to view the first derivative in this way, and I think it's like the average of the points on the positive side minus the average of the points on the negative side. I haven't done the analysis in the same way to verify that, but I do really like this alternate way of thinking about derivatives.
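A rough numerical check of this idea (my own sketch, with a made-up test function): averaging f over a small window just to the right of x, averaging over a matching window just to the left, and dividing the difference by the window width h does recover f'(x):

```python
import numpy as np

f, x, h = np.sin, 0.5, 1e-3   # test function and point; exact f'(x) = cos(0.5)
n = 1000
# Average of f over (x, x+h) and over (x-h, x), sampled at midpoints.
right = np.mean(f(x + h * (np.arange(n) + 0.5) / n))
left = np.mean(f(x - h * (np.arange(n) + 0.5) / n))
print((right - left) / h, np.cos(x))   # both close to 0.8776
```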
This is for people who understand calculus and have a clear visual and conceptual intuition for what a derivative is, but who just think of the second derivative as "the derivative of the derivative", a definition that intuitively tells us nothing about the original function. This video builds that intuition using the Feynman Lectures as a base, which are all college level.
Change in energy drives time evolution... and change in momentum drives spatial translation... that's astonishing; I never thought about this approach to understanding the quantum states described by this equation!
#Newton was an #ET stuck in the year 1600, mathematically fiddling on the side of his main passion... #Alchemy. And from this isolated stimulation arose his masterwork, the "Principia", after a swift kick from Sir Halley (he of the comet's-orbit calculation problem), which #Newton had solved, but only after inventing (or discovering) the math from the #Akashic records that solved it. 😊 8:17
So, the function f(x) varies in direct correlation with the movement of the variable x, and f(x) moves by some relative amount for however much x moves. It could be 1-to-1, but in this example the expansion of f(x) appears to be greater than the movement of x, as if the function represents a curve f(x) with a focal point at x. Do we know the radius of the circle now? 😊 2:50