
The exponential of the derivative? 

Michael Penn
304K subscribers · 34K views

Published: 5 Sep 2024

Comments: 141
@physicsatroeper974 · a year ago
This kind of thing is also very important in quantum mechanics, as this exponential operator plays the role of the translation operator acting on a wavefunction, for example. It manifests the fact that the operator e^{ipx}, where p is the momentum operator, performs a translation, which in turn gives you things like momentum conservation from translation invariance via Noether's theorem. It similarly gives you a tool to prove Heisenberg's uncertainty principle in wave mechanics. Nice video!
@2kchallengewith4video · a year ago
Are you, like, a guy with a PhD in physics?
@thenateman27 · a year ago
This is so true. It's literally in Chapter 1 of Sakurai's Modern Quantum Mechanics. I'm also fairly certain that the exercise at the end of the video is a homework problem I did for my grad QM course lol.
@edoardoferretti5493 · a year ago
I was thinking the exact same thing. It was something I, as a physicist, never paid attention to, but putting a mathematical focus on the "tools" I use every day is truly amazing.
@user-et9ub3dc3j · a year ago
As well as “Conservation of energy time-invariance.” How fun!
@apteropith · a year ago
i remember this from undergrad QM, but that course approached the matter very differently, and i didn't absorb this generalizable bit at all (if it was even there)
@jeffbarrett2730 · a year ago
I noticed this neat connection the other day. A linear time-invariant system can be written dx/dt = Ax, where x : R -> R^n and A is an n×n matrix. The solution to this system is well known in terms of the matrix exponential: x(t) = exp(tA) x0, where x0 is the initial condition. Now consider the PDE ∂u/∂t = v ∂u/∂x. Its solution is u(x,t) = u0(x+vt), but from the result in this video, that is exactly u(x,t) = exp(vt ∂/∂x) u0(x). It's very interesting to see the similarity between the matrix A for the system of equations and the operator v ∂/∂x for the partial differential equation. It's sort of like a continuous generalization.
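The ODE/PDE analogy above can be checked numerically. The sketch below (an editorial illustration, not from the video or the comment) replaces ∂/∂x with a central-difference stencil on a periodic grid and applies the truncated operator series for exp(s d/dx) to samples of sin; the grid size, shift, and tolerance are arbitrary choices.

```python
import math

# Discretize d/dx on a periodic grid and apply exp(s * d/dx) as a power
# series; the result should approximate the shifted function u0(x + s).
N = 64
h = 2 * math.pi / N
xs = [i * h for i in range(N)]
u0 = [math.sin(x) for x in xs]

def ddx(u):
    """Central-difference derivative on a periodic grid."""
    n = len(u)
    return [(u[(i + 1) % n] - u[(i - 1) % n]) / (2 * h) for i in range(n)]

def exp_deriv(u, s, terms=80):
    """Apply exp(s * d/dx) via its truncated power series sum s^n D^n u / n!."""
    result = list(u)
    term = list(u)
    for n in range(1, terms):
        term = [s / n * t for t in ddx(term)]
        result = [r + t for r, t in zip(result, term)]
    return result

s = 0.5  # the shift amount (v * t in the comment's notation)
shifted = exp_deriv(u0, s)
exact = [math.sin(x + s) for x in xs]
err = max(abs(a - b) for a, b in zip(shifted, exact))
print(f"max error vs sin(x + {s}): {err:.2e}")
```

The residual error here comes from the finite-difference stencil, not the operator series; a finer grid shrinks it further.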
@howardthompson3543 · a year ago
The x^{n-m} should be x^{m-n} instead.
@michaelbaum6796 · a year ago
You are right👍
@chester2479 · a year ago
reminds me of the operator algebra from quantum mechanics
@insouciantFox · a year ago
It certainly is used there. The shift operator basically is the exponential derivative.
@realfunnyman · a year ago
All I was thinking about for the first half of the video
@isodoubIet · a year ago
Also used a lot in perturbation theory, in an even more explicit form.
@danielmilyutin9914 · a year ago
exp(t d/dx) is exactly shift operator f(x) |-> f(x+t) upd: but it works only on analytic functions
@ianrobinson8518 · a year ago
Yes. It was formalised back in the mid-1800s, and was generally referred to as the calculus of finite differences. A book by Boole covered and extended the underpinning ideas in great depth. It's a fascinating area which was largely forgotten with the advent of modern computers. There's also a Schaum book and an old actuarial text delving into it, with practical applications in integration, summation, and what we today call numerical analysis.
@danielmilyutin9914 · a year ago
@@ianrobinson8518 You sound nostalgic. I first met this operator in a book whose title translates to English as "Modern Geometry", by Dubrovin, Novikov, and Fomenko, though it was only used there in one small case. I loved this operator when I was a student, because it was easier to memorize the Taylor series of a general function as just one exponential, in one go. For me it was like the hidden law underneath Taylor series.
@ianrobinson8518 · a year ago
@@danielmilyutin9914 I also was fascinated by the operator equivalence identities: e^D = 1+Δ or equivalently D = ln(1+Δ) and ΔΣ= 1 which parallels integration being the reverse of differentiation. It works on factorials of x instead of powers of x. There are numerous other combinations of these operators which enable quick solving of otherwise very difficult problems in series summation for example.
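The identity e^D = 1 + Δ quoted above can be verified directly on polynomials, where the Taylor series terminates and everything is exact. A minimal sketch (my own, with an arbitrary example cubic):

```python
import math

# Check e^D = 1 + Δ on polynomials, where D = d/dx and Δf(x) = f(x+1) - f(x).
# For a polynomial the Taylor series terminates, so e^D f(x) = sum_n f^(n)(x)/n!
# is a finite, exact sum.

def deriv(coeffs):
    """Derivative of a polynomial given as coefficients [a0, a1, a2, ...]."""
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

def evaluate(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

def exp_D(coeffs, x):
    """(e^D f)(x) = sum_n f^(n)(x)/n! -- a finite sum for polynomials."""
    total, p, n = 0.0, coeffs, 0
    while any(p):
        total += evaluate(p, x) / math.factorial(n)
        p, n = deriv(p), n + 1
    return total

f = [2, -1, 0, 3]  # f(x) = 2 - x + 3x^3
for x in [0.0, 1.5, -2.0]:
    lhs = exp_D(f, x)           # (e^D f)(x)
    rhs = evaluate(f, x + 1)    # ((1 + Δ) f)(x) = f(x) + Δf(x) = f(x+1)
    assert abs(lhs - rhs) < 1e-9
print("e^D f(x) == f(x+1) on a cubic: verified")
```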
@gamerpedia1535 · a year ago
This is such a cool feature of operational calculus too. T = e^D = ∆ + 1 God I love it
@samosamo4019 · a year ago
In fact this will hold for all absolutely continuous functions (W^{1,1}).
@christophdietrich4240 · a year ago
This is an example of a strongly continuous one-parameter operator semigroup. What we did by hand shows that the derivative operator generates the shift (semi)group. There is a whole field of mathematics to explore if you are interested.
@coursmaths138 · a year ago
Thx. I am interested. Do you have an online reference plz?
@soyoltoi · a year ago
​​@@coursmaths138 Looking around, I found the books of Engel and Nagel (A Short Course on Operator Semigroups + its bigger brother One-parameter Semigroups). They seem to cover this exact topic.
@elzilcho94 · a year ago
What's neat with that convergence requirement of x>1 is that, with a little fiddling, it is always satisfied. Because when x=1, the original question just becomes the trivial e^(d/dx)0=0, and when x
@Cazolim · a year ago
So you're saying, 0
@elzilcho94 · a year ago
@@Cazolim can you not just use log rules? log(x^-1)=-log(x)
@alessandrosangregorio293 · a year ago
@Ssteves do you mean you can arrive at the conclusion assuming the statement of the formal Taylor theorem? Given you rewrite the problem as finding -e^(d/dx) ln(x^-1) for x
@Alex_Deam · a year ago
Surely at x=1 the RHS is ln(2) not 0?
@a52productions · a year ago
Oh my god, this is the shift operator from QM! I never understood its definition until now -- it was kind of skipped over in the lecture. This makes so much sense, thank you! Of course, I still don't understand the subsequent result that the momentum operator must be the derivative term in the exponential (the generator of translation or whatever), but that's very much a physics result and outside the scope of this channel.
@smolboi9659 · a year ago
This might help: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-W8QZ-yxebFA.html
@smolboi9659 · a year ago
Apologies, I realize the video I linked does not feature exponentiating the momentum operator, but it is still a good watch for understanding the Fourier transform of position-space wavefunctions into momentum-space wavefunctions.
@Balequalm · a year ago
Isn't convergence used a step before what Michael claims? I'm pretty sure you need uniform convergence to interchange the integral and summation to begin with, which gives the same condition in the end.
@gustavk3227 · a year ago
Yes, though it's the same thing.
@eliosedrata · a year ago
Or you can directly recognize the power series of ln(1+1/x) and say that this power series converges only if |1/x|
@spiderjerusalem4009 · a year ago
Isn't the appearance of the series enough to conclude that it is indubitably the expansion of ln(1+1/x), since ln(1+ζ) = Σ_{n=1}^∞ (-1)^(n-1) ζ^n/n? 🤔 Or did he do it for the sake of completeness 🤔
@eliosedrata · a year ago
@@spiderjerusalem4009 As I said, yes, you just have to identify it, but you still need to verify |1/x|
@zachbills8112 · a year ago
This gives a very slick derivation of the Euler-Maclaurin summation formula. We get sum f(x+n) = sum e^(nD) f = (1/(1-e^D)) f = D^(-1) (D/(1-e^D)) f. If you then Taylor expand the final parenthesized factor and interpret D^(-1) as an integral, you get your formula. The parenthesized expression is the exponential generating function for the Bernoulli numbers, explaining their presence in the formula.
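A quick numerical sanity check of the Euler-Maclaurin formula the comment above derives (my own illustration, not the commenter's): for f(x) = x³ the formula truncated at the single Bernoulli number B₂ = 1/6 is already exact.

```python
# Euler-Maclaurin for f(x) = x^3:
#   sum_{n=0}^{N} f(n) = ∫_0^N f + (f(0)+f(N))/2 + (B_2/2!)(f'(N) - f'(0))
# with B_2 = 1/6; for a cubic the higher Bernoulli terms vanish.
N = 100
f = lambda x: x**3
direct = sum(f(n) for n in range(N + 1))

integral = N**4 / 4                   # ∫_0^N x^3 dx
boundary = (f(0) + f(N)) / 2          # endpoint correction
B2 = 1 / 6
bernoulli = B2 / 2 * (3 * N**2 - 0)   # (B_2 / 2!) (f'(N) - f'(0)), f'(x) = 3x^2

euler_maclaurin = integral + boundary + bernoulli
print(direct, euler_maclaurin)  # both equal N^2 (N+1)^2 / 4 = 25502500
```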
@iabervon · a year ago
Applying it to e^x is really easy, since the derivatives don't change anything, giving you the Taylor series for e^x evaluated at 1, with an e^x in each term. Then factor the e^x out, and you get e^xe^1, which is e^(x+1).
@PeterBarnes2 · a year ago
This continues to be easy for functions e^bx, as well. This further means that, after paying heed to convergence, any function which can be defined linearly from exponentials need only rely on the proof for the exponentials themselves. The simplest functions defined linearly from exponentials would be weighted sums of exponentials, say something like 5e^x + 3e^2x + e^3x. More interesting would be an integral of the following form: integral(a, b) f(t)*e^tx dt This includes functions like the Gamma Function, the 'Exponential Integral' function, the Laplace Transform of any function (single- or double-sided) as well as the Mellin Transform of any function. Of course, some of these examples require a substitution to yield the form I gave above, but all the same, exponentials are highly versatile here. I have not, however, found a satisfactory way of expressing polynomials (monomials, really) in terms of exponentials. There are some limits of exponentials that yield them, for example lim t->0 (e^tx - 1) / t = x, but I've not been satisfied with this. Perhaps it's something to revisit, though. Attempting to use integrals to do the same thing for monomials yields the derivative of the Dirac Delta function under the integral. This is essentially equivalent, however, to another limit of countably many (indeed finitely many) exponentials: lim t->0 (e^tx - e^-tx) / 2t = x which is equivalently unsatisfactory.
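The e^{bx} case described above collapses to a one-line computation, which can be checked numerically. A small sketch (mine; the parameter values are arbitrary):

```python
import math

# Since d^n/dx^n e^(bx) = b^n e^(bx), the operator series collapses:
#   e^(a d/dx) e^(bx) = e^(bx) * sum_n (ab)^n/n! = e^(bx) e^(ab) = e^(b(x+a)).

def exp_shift_on_exp(a, b, x, terms=60):
    """Truncated operator series for exp(a d/dx) applied to e^(bx)."""
    return math.exp(b * x) * sum((a * b)**n / math.factorial(n)
                                 for n in range(terms))

a, b, x = 0.7, 1.3, 2.0
assert abs(exp_shift_on_exp(a, b, x) - math.exp(b * (x + a))) < 1e-9
print("exp(a d/dx) e^{bx} = e^{b(x+a)}: verified")
```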
@emanuellandeholm5657 · a year ago
In the DSP world of Z-transforms, this is exactly the discrete delay operator z^-k. Multiplying the Z-transform with z^-k has the effect of causally shifting the original function by k steps.
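The Z-transform delay property mentioned above can be verified in a few lines. A sketch (mine; the sequence and evaluation point z0 are arbitrary):

```python
# In the Z-domain, multiplying X(z) by z^{-k} corresponds to delaying x[n]
# by k samples. Realize the delay in the time domain and compare Z-transform
# values at a sample point z0.

def delay(x, k):
    """Causal delay by k samples (prepend k zeros)."""
    return [0] * k + list(x)

def z_transform(x, z):
    return sum(xn * z**(-n) for n, xn in enumerate(x))

x = [1.0, 2.0, 3.0]
k = 2
z0 = 1.5
lhs = z_transform(delay(x, k), z0)
rhs = z0**(-k) * z_transform(x, z0)
assert abs(lhs - rhs) < 1e-12
print("Z{x[n-k]} = z^{-k} X(z): verified at z0 =", z0)
```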
@manucitomx · a year ago
I loved this. Clear, straightforward, well explained. Thank you, professor.
@squeezy8414 · a year ago
The exercise suggested is excellent - using the cyclic nature of the derivative of sin(x) you can do the following: - Write out the expansion, in which you will get something like: exp(a d/dx) sin(x) = sin(x) + acos(x) - a²/2! * sin(x) - a³/3! * cos(x) + ... - Factor by grouping to get sin(x)[1 - a²/2! + a⁴/4! - ...] + cos(x)[a - a³/3! + a⁵/5! - ...] - Recognise the series in brackets to be the Maclaurin/Taylor Series expansions of cos(x) and sin(x) respectively evaluated at a: - Thus exp(a d/dx) sin(x) = sin(x)cos(a) + cos(x)sin(a) - Using the addition formula, exp(a d/dx) sin(x) = sin(x+a) as required. [I don't believe there are any convergence issues here as this example only used the series expansions of cos(x) and sin(x) which are valid for all x, but do correct me if I'm wrong.] Very interesting video, I'd love to see more operational calculus on this channel :)
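The derivation above can be confirmed numerically by truncating the operator series and using the cyclic-derivative fact that the nth derivative of sin(x) is sin(x + nπ/2). A small sketch (mine; x, a, and the truncation depth are arbitrary):

```python
import math

# Truncated operator series for exp(a d/dx) sin(x), using
# d^n/dx^n sin(x) = sin(x + n*pi/2); should match sin(x + a).

def shift_sin(x, a, terms=40):
    return sum(a**n / math.factorial(n) * math.sin(x + n * math.pi / 2)
               for n in range(terms))

x, a = 1.1, 0.8
assert abs(shift_sin(x, a) - math.sin(x + a)) < 1e-12
print("exp(a d/dx) sin(x) = sin(x + a): verified")
```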
@kkanden · a year ago
i love how clear your explanation of each step is! what's important tho is that it's not overexplained either. perfect, just perfect!!
@Leonardo-G · a year ago
Maybe e^(a d/dx) e^x would be interesting? All derivatives of e^x are itself, so we can factor it out of the summation: e^(a d/dx) e^x = e^x (1 + a + a^2/2! + a^3/3! + a^4/4! + ...). The series on the right looks familiar; in fact, it's the Maclaurin series for e^a! So we replace it: e^(a d/dx) e^x = e^x e^a = e^(x + a). And we arrive at the same result. Amazing!
@eliosedrata · a year ago
I'll show that exp(aD) sin(x) = sin(x+a). First, you can show easily by induction that the nth derivative of sin is sin(x + n·pi/2). Then exp(aD) sin(x) = sum from 0 to infinity of (a^n/n!) sin(x + n·pi/2). At that point you split the sum between even and odd terms, so that cos(a) and sin(a) appear. You get: sum of a^(2n)/(2n)! · sin(x + n·pi) + sum of a^(2n+1)/(2n+1)! · sin(x + n·pi + pi/2). And sin(x + n·pi) = (-1)^n sin(x) and sin(x + pi/2) = cos(x). Finally you recognize the power series of cos(a) and sin(a), so you get exp(aD) sin(x) = cos(a)sin(x) + sin(a)cos(x) = sin(x+a). And of course it works for every real x and a, since the radius of convergence of sin and cos is infinite.
@euanthomas3423 · a year ago
Control engineers use this a lot. In Laplace transforms, the s in e^(-st) plays the role of d/dt, and e^(-st) is physically a phase shift (s = jω), as you anticipated in your exercise suggestion operating on the sine function. As other commenters below state, this is also a translation in quantum mechanics, or a phase shift for the momentum operator. Amazing how all this stuff from completely different fields fits together.
@maximilianarold · 7 months ago
17:30 I never felt so unvalued in my entire life
@kyintegralson9656 · 3 months ago
😀Don't rush to judgment. He didn't specify "you to be less than one" what.
@obiske0 · a year ago
You do this kind of operation in the path-integral formulation of quantum mechanics and quantum field theory, particularly in perturbation theory, where expansions of an exponential (functional) derivative give you your Feynman diagrams.
@obiske0 · a year ago
Also, this is like the time-evolution operator, which acts on a solution of the time-independent Schrödinger equation to give you a solution of the time-dependent SE.
@spiderjerusalem4009 · a year ago
One thing I also noticed is that e^(d/dx) f(x) = f(x+1). Since f(ξ) = Σ_{n=0}^∞ f^(n)(x) (ξ-x)^n/n!, it follows that f(x+1) = Σ_{n=0}^∞ f^(n)(x)/n! = e^(d/dx) f(x).
@conanedojawa4538 · a year ago
I want to know more about the exponential operator "e^d/dx"
@ILSCDF · a year ago
Umbral Calculus
@cd-zw2tt · a year ago
michael again with the crazy math objects! wild stuff
@Dondalorian · a year ago
Assuming the function has a Fourier transform, this proof becomes quite trivial: you just use the derivative property and the shift / multiply-by-e^(ia) properties.
@gnomeba12 · a year ago
I wouldn't mind seeing you do a proof of the Baker-Campbell-Hausdorff theorem.
@Math_Rap_and_GOP_Politics · a year ago
I think a good thing to point out for the real-powered case is that we now have to use the Gamma function (the factorial-extension function) to deal with the binomial coefficient of two arbitrary real numbers.
@samosamo4019 · a year ago
Very nice explanation. We could start a course on one-parameter semigroups with this! In fact, the derivative operator is the infinitesimal generator of the shift semigroup. One should make the domain precise.
@gonzaloarias8442 · a year ago
If you consider the derivative operator over a suitable space (H^1 functions, for instance), it can be seen as the infinitesimal generator of the shift operator semigroup. When you consider this operator over the space of analytic functions, you get that nice representation!
@lukasschmitz9030 · a year ago
This reminds me of my (quantum) field theory class. More specifically, we are witnessing the action of the connected identity component of the (Lie) group of translations. Vector fields are the generators of the translations and a vector field over the real numbers is just a real number times the derivative operator.(I believe that if you combine the connected identity component of translations with the connected identity component of "rotations" (possibly including boosts for non-Euclidean signatures) you get the small diffeomorphism group, id est the connected identity component of the diffeomorphism group.).
@lookupverazhou8599 · a year ago
So, that's why electrons do that!
@schmud68 · a year ago
Let me fix the discussion to 4D Minkowski space for simplicity, which is R^4. Some comments: The translation group is connected (it is literally R^4 with addition), hence there is no need to restrict to the identity connected component, you are already there. Vector fields can describe more than generators of rotations; one can obtain the entire isometry algebra (Poincare algebra in 4D Minkowski space) as vector fields on a manifold. More on this soon. Combining translations with the identity connected component of the Lorentz group SO^+(1,3) gives the the identity connected component of the isometry group (i.e. the identity connected component of the Poincare group), and this is just a TINY part of the diffeomorphism group. I suspect (and might very well be wrong) that the group of diffeomorphisms is the set of gauge transformations (or fiber preserving principal bundle automorphisms) of the frame bundle of whatever manifold you are working on (say Minkowski space here). One perspective on why momentum generators are derivatives: One can consider the vector space of smooth (real-valued) functions on spacetime (R^4), in physics language this is the set of real scalar fields. One can then consider a representation of the translation group on smooth functions by f(x) -> T(a)f(x) = f(x+a). Note that this representation is infinite-dimensional, so it no longer makes sense to treat T(a) as a matrix. One can then formally ask if T(a) =e^{ip.a}, then what is the action of the generators p^\mu on f(x), then recover that p^\mu is proportional to d/dx^\mu (say by Taylor expansion of T(a) and of f(x+a) if you desire). There probably are ways to make this more rigorous, like restricting to analytic functions. 
Similar arguments work for quantum mechanics using only spatial translations on the L^2 space of wave functions; in terms of rigour, this is where one needs to start being careful about which L^2 functions are even differentiable (though smooth functions with compact support provide a dense set in L^2). Another method, which is more geometric, is to solve the Killing equation on Minkowski space. This is formulated in terms of geometry in the sense that solving the Killing equation specifies vector fields X such that, when moving infinitesimally along their integral curves, the metric does not change (i.e. the Lie derivative of the metric with respect to X is zero). Hence we are describing the infinitesimal isometries of the Minkowski metric. The solutions yields the desired result that p^\mu is proportional to d/dx^\mu and also some corresponding ones for boosts and rotations, giving us a differential operator representation of the Poincare algebra. This sort of avoids technicalities in the sense that, in geometry, the tangent spaces have basis vectors which are precisely directional derivatives; and by Lie differentiation, we can act these operators on any tensor. However, we sort of lose the action of the group in favor of getting a clearly well-defined action of the algebra (on all smooth tensor fields). Edit: Maybe recovering the group action is easier than I thought, one just considers the integral curves of the Killing vector fields and reconstructs the group action as a local diffeomorphism, then act this on tensor fields. Though I'm not sure of the rigour of the more analytic approach with rep theory above this geometric one. Maybe someone else with more knowledge could add onto this.
@dariofagotto4047 · a year ago
I was sure I had seen something similar but with fewer calculations, and it was clearly the exact same thing: e^(as) for the Laplace transform, which makes sense and is an application where the extra annoyance of derivatives becomes a plus.
@timothywaters8249 · a year ago
I first saw this in field theory. Phase shifting explains the relationship between current and voltage in electrostatic fields. The math makes sense, but trying to explain it to a layperson was nearly impossible. Seeing the use in DSP and QM is sweet.
@whatelseison8970 · a year ago
Hey Michael, love the channel! I was wondering if you might be able to throw together a video on the Padé approximant. The current YouTube offering on the topic is extremely sparse, but the videos that do exist make it seem like a very useful tool.
@txikitofandango · a year ago
I love this stuff. Wildberger has a video (good luck finding it) about the connections between the derivatives of a polynomial and the binomial theorem. He uses the binomial theorem to calculate derivatives without limits, and the higher derivatives are off by a factor of k!
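The limit-free binomial idea described above is easy to make concrete for polynomials: the h^k coefficient of f(x+h) is f^(k)(x)/k!, which is exactly the "off by a factor of k!" remark. A sketch (my own, with an arbitrary example polynomial):

```python
import math

def binom_derivative(coeffs, x, k):
    """f^(k)(x) as k! times the h^k coefficient of f(x+h) -- no limits needed.

    coeffs is the polynomial [a0, a1, a2, ...]; expanding each a_m (x+h)^m
    with the binomial theorem gives the h^k coefficient a_m C(m,k) x^(m-k).
    """
    return math.factorial(k) * sum(
        a * math.comb(m, k) * x**(m - k)
        for m, a in enumerate(coeffs) if m >= k)

f = [1, 0, -4, 2]  # f(x) = 1 - 4x^2 + 2x^3
x = 1.5
assert abs(binom_derivative(f, x, 1) - (-8 * x + 6 * x**2)) < 1e-9   # f'(x)
assert abs(binom_derivative(f, x, 2) - (-8 + 12 * x)) < 1e-9         # f''(x)
print("derivatives via the binomial theorem: verified")
```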
@marcorademan8433 · a year ago
There is a direct connection to Fourier via the translation property here somewhere...
@brollejunior · 3 months ago
This result is really cool!
@fartoxedm5638 · a year ago
There's a mistake at 6:50. The power should be m-n. It also affected the further calculations.
@scottmiller2591 · a year ago
This looks exactly like the Laplace transform shift theorem. Interesting. It's also giving me equivariant vibes in machine learning convolutions.
@user-et9ub3dc3j · a year ago
This is the thought that crossed my mind as well. Very satisfying to see the math behind it all.
@apteropith · a year ago
Oh! Now this is interesting. Probably also useful for affine coordinate systems? Rephrasing translation in a language similar to rotation is very useful when the "origin" is entirely a matter of basis changes.
@GeoffryGifari · a year ago
Some things:
1. Do the algebraic properties of exponents (exp(A+B) = exp(A)exp(B)) still hold for exp(d/dx) and exp(d/dy)?
2. For g(operator), does the function g need to be continuous/differentiable?
3. It seems like the variable shifting can be used to interchange the variable: if exp(a d/dx) f(x) = f(x+a), does that mean exp((y-x) d/dx) f(x) = f(y)? Kind of like applying a delta distribution and then integrating.
4. What kind of fun can we get from a "Gaussian of the derivative", exp(-(d/dx)²/2)?
5. Also, let's try the derivative of that exponential: d/dx [exp(d/dx)] f(x) = ?
@almostme80 · a year ago
Very interesting questions! I think the answer to the first is: only if A and B commute, which the derivatives (aka the linear momentum operators, for the physicists among us) do!
@eliosedrata · a year ago
To answer the first question: exp(u+v) = exp(u)exp(v) if u and v commute, which holds here by Schwarz's theorem only if the function you differentiate is C² (twice differentiable with a continuous second derivative).
@christophdietrich4240 · a year ago
For question 2 I recommend reading about functional calculus :)
@insainsin · a year ago
1. If AB - BA = 0, then exp(A)exp(B) = exp(A+B). So in that case yes, if by derivative you mean partial derivative; else no.
2. Yes, unless you're working in a calculus that takes global properties into account, as with Laplace/integral calculus. Then maybe.
3. No: x·d/dx - d/dx·x = 1, not 0 (look back at the first rule). In fact, exp(a·x·d/dx) f(x) = f(e^a·x).
4. Look up the Weierstrass transform on Wikipedia.
5. That's nothing special: f'(x+1).
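The identity in point 3, exp(a·x·d/dx) f(x) = f(e^a·x), can be checked on monomials, which are eigenvectors of the Euler operator x·d/dx with eigenvalue m. A sketch (my own; the parameter values are arbitrary):

```python
import math

# Since (x d/dx) x^m = m x^m, the operator series on x^m collapses to
#   exp(a x d/dx) x^m = sum_n (a m)^n/n! * x^m = e^(a m) x^m = (e^a x)^m.

def exp_euler_op_on_monomial(a, m, x, terms=40):
    """Truncated operator series for exp(a * x d/dx) applied to x^m."""
    return x**m * sum((a * m)**n / math.factorial(n) for n in range(terms))

a, m, x = 0.3, 4, 1.7
assert abs(exp_euler_op_on_monomial(a, m, x) - (math.exp(a) * x)**m) < 1e-9
print("exp(a x d/dx) x^m = (e^a x)^m: verified")
```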
@urumomaos2478 · a year ago
For question 4, I've found that operator useful when expressing the solution to a particular set of second-order differential equations. I tried generalizing this to any second-order differential equation but failed to do so.
@surelydone · 28 days ago
This is amazing. I was like WTF when you wrote the shift claim.
@michaelbaum6796 · a year ago
Very nice video. Thanks a lot Michael 👍
@eduardochappa4761 · a year ago
The first term of the series of e^(d/dx) is not 1, it is the identity operator, so you are not multiplying 1 times x^2, say, but applying the identity operator to x^2, in the same way that you do not multiply d/dx by x^2 but apply d/dx to x^2. Sure, multiplication by 1 acts as an identity operator, but it becomes confusing when you do not treat all the terms as operators.
@clearnightsky · a year ago
We can prove something about ln(d/dx) too. Namely, ln(d/dx) f(x) = lim_{s->0} d/ds (d/dx)^s f(x). That's essentially the first derivative, around s = 0, of the continuous (fractional) derivative of f. Now we can use Taylor series to get an expansion for the fractional derivative of f(x)!
@CCequalPi · a year ago
Love this kind of stuff. Does this generalize to functionals and exponentials of functional derivatives?
@IsomerSoma · a year ago
One should be aware that this is the composition of exp with the differential operator, acting on some appropriate function space. It loses its mystery when defined properly.
@dmolson512 · a year ago
What would the operator sin(a(d/dx)) be? What would it do to a function?
@kyintegralson9656 · 3 months ago
At 6:50, and subsequently, it should be x^(m-n) rather than x^(n-m).
@Schraiber · a year ago
This is a result that feels like it should have some really intuitive explanation, I'd def love for someone to point it out!
@christophdietrich4240 · a year ago
As I wrote in another comment, this is an example of a strongly continuous one parameter operator-semi-group. Consider for any t>=0 the operator T(t) that operates on a function f by shifting by t. Then (T(t)f) (x) = f(x+t) for all x. Then we can easily see that T(t)*T(s) = T(t+s) and T(0) is the identity. Thus these operators form a semi group. Now I'm getting a little hand-wavey: Consider the "derivative" of the function t-> T(t), i.e. lim (T(t+h) - T(t))/h = T(t) lim 1/h (T(h) - I) as h->0. Thus the "derivative" at any particular t>=0 is really just the "derivative" at 0 (call it T'(0)) times the semigroup itself, i.e. t->T(t) solves the differential equation T'(t) = T'(0) T(t). Intuitively, this means T(t) = exp(T'(0)*t) in some sense. Now for the shift operator it just so happens that the derivative operator is the derivative of the semi group (it is a very easy calculation, you just plug in definitions, feel free to do it :)). Again, this is very hand wavy, but I hope it became a bit clearer. Read about C0-semi groups, if you are interested :)
@viliml2763 · a year ago
Just look up Taylor series.
@Schraiber · a year ago
@@christophdietrich4240 That's a nice way to think about it, but I guess I was thinking about something more geometric, somehow.
@sdal4926 · a year ago
Michael, can you make a video about asymptotic series? They are very important in physics, especially quantum physics. I would like to learn about them.
@asdasfghgf · a year ago
Can you use this and the definition of the Gamma function to create a Gamma of a derivative acting on f(x), as a follow-up to the factorial of the derivative?
@robshaw2639 · a year ago
When will the Galois Theory and Lie Algebra courses begin on the MathMajor channel? In the Fall? Or maybe, are they "summer" courses? Really want to see both those courses...
@mathoph26 · a year ago
The translation operator; likewise, exp(u · (r × ∇)) is the rotation operator around u.
@giacomorapisardi877 · a year ago
I really would like to see what sin(d/dx) looks like on elementary functions
@General12th · a year ago
Hi Dr. Penn! Very cool!
@soyoltoi · a year ago
I like how so many comments are able to relate this to another piece of mathematics
@imeprezime1285 · a year ago
You can also write it as e^∂ and define it in the same way. At least it looks cooler 😜
@gerardvanwilgen9917 · a year ago
But d/dx is not a value but more a sort of operator, isn't it? Is this a kind of "abuse of notation"?
@nianyiwang · a year ago
12:40 are you wiping chalk traces with your sleeeeeve?
@user-nz3mc1fj9z · a year ago
14:41 I immediately recognized the Taylor series for ln(1+t), where t = 1/x.
@user-et9ub3dc3j · a year ago
It occurs to me: what happens when this is extended to the domain of complex numbers? It seems like it would have broad applicability to analytic functions. Wait. That's quantum mechanics, isn't it?
@profdc9501 · a year ago
I am guessing there is some connection with the umbral calculus here as well.
@roberttelarket4934 · a year ago
Why isn't it written e^[f'(x)]?
@ultrio325 · a year ago
It kinda reminds me of lambda calculus
@11cookeaw14 · a year ago
Can you do this with an integral instead?
@waverod9275 · a year ago
So, I'm wondering something: can you have problems of the type "the unknownth derivative of a function equals a certain thing, what can the derivative be?"
@eterty8335 · 6 months ago
is there an exp(integral dx) or something like that?
@logo2462 · a year ago
Does this have a relation to the form of the shifted impulse in the Laplace domain, e^(-a·s)?
@ingolifs · a year ago
It has to, right? The maths looks almost the same.
@assassin01620 · a year ago
(e^(d/dx))^(x^2) = e^(2x)
@insainsin · a year ago
or is it e^(x^2*d/dx)? That kind of operation isn't defined in non-commutative mathematics.
@assassin01620 · a year ago
@@insainsin I was joking, lol. Also, x^2*d/dx = 2x So, e^(x^2*d/dx) = e^(2x) 😜
@zakiabg845 · a year ago
Which of you guys wants a video about the proof of the nth derivative of x^m?
@kappascopezz5122 · a year ago
D = e^(a d/dx) sin(x) = sum_{n=0}^inf a^n/n! d^n/dx^n sin(x) Split in even/odd n. Because d²/dx² sin(x) = -sin(x), d^(2n)/dx^(2n) sin(x) = (-1)^n sin(x), and also d^(2n+1)/dx^(2n+1) sin(x) = (-1)^n cos(x). D = sin(x) sum_{n=0}^inf (-1)^n a^(2n)/(2n)! + cos(x) sum_{n=0} (-1)^n a^(2n+1)/(2n+1)! With this, you can notice that the factor of sin(x) is exactly the maclaurin series of cos(a), and the factor of cos(x) is exactly the maclaurin series of sin(a). With this, you get D = sin(x) cos(a) + cos(x) sin(a) which, by the angle addition formula of the sine function, can be simplified: D = sin(x) cos(a) + cos(x) sin(a) = sin(x+a) But since at the very start, we defined D to be e^(a d/dx) sin(x), this gives us: e^(a d/dx) sin(x) = sin(x+a)
@johns.8246 · a year ago
Yo, I need help finding natural numbers x, y, z such that x^3 - y^3 = 2^z. I have a hunch there's no solution, but my proof is shoddy. Help?
@mathunt1130 · a year ago
A simple example of a pseudodifferential operator.
@xXJ4FARGAMERXx · a year ago
Can somebody tell me why we have e^(d/dx)? My problem is that d/dx doesn't have anything next to it. In the very first definition he gave, e^(d/dx) = Σ (1/n!)(d/dx)ⁿ, but this doesn't really resolve anything, because d/dx still doesn't make sense to me. Why isn't there anything next to it? Like df/dx or dy/dx or anything!
@APaleDot · a year ago
We're working with operators here. The exponential function is often written as e^x, but is defined according to a power series as shown in the video. If x is a number, e^x is a number. If x is a matrix, e^x is a matrix. And if x is an operator (like the derivative operator) then e^x is an operator.
@ChronoQuote · a year ago
d/dx and e^(d/dx) can be seen as operators, or functions that map functions to other functions. In the same way we can write f(x) as f to talk about the function itself rather than the function evaluated at x, we can write d/dx(f(x)) as d/dx to talk about the function d/dx itself rather than a specific derivative. This is not really an abuse of notation either: a function "itself" is a well-defined object in set theory (as long as you are clear what the domain and codomain are, which is harder for operators). So e^(d/dx) is a function that maps the function y to the function Ʃ(1/n!)dⁿy/dxⁿ.
@csikjarudi · a year ago
@@ChronoQuote Here is my question: the expression e^A, where A is a matrix, works because the norm of the matrix is bounded: ||A||
@trucid2 · 8 months ago
What is this voodoo and why does it work?
@davidhand9721 · a year ago
It seems like e^(X*z*pi/2) is always X^z. Am I being an obvious math clown or does it actually work out that way?
@muse0622 · a year ago
I used Taylor series to prove the exp(d/dx) claim for (analytic) f(x). By Taylor series, f(x+a) = Σ (f^(n)(a)/n!) x^n, and we can also write f(x+a) = Σ (f^(n)(x)/n!) a^n = Σ ((aD)^n/n!) f(x), where D = d/dx. This is e^(aD) f(x), so e^(aD) f(x) = f(x+a) holds for all such f(x).
@AndriiGudyma · a year ago
This video is based on material from an elementary introductory quantum mechanics course.
@LucasDimoveo · a year ago
Is this a branch of Analysis?
@insainsin · a year ago
exp(aD)sin(x)= exp(aD)( exp(-ix)-exp(ix) )i/2= ( exp(aD)exp(-ix)-exp(aD)exp(ix) )i/2= ( exp(-ai)exp(-ix)-exp(ai)exp(ix) )i/2= ( exp(-i(x+a))-exp(i(x+a)) )i/2= sin(x+a)
@user-zl1qu3gw5u · a year ago
or maybe m-n would be more correct?
@charleyhoward4594 · a year ago
Him just saying Smart Words makes him sound smart (which he is !)
@milanrebic9650 · a year ago
Thanks, Prof. Penn
@scp3178 · a year ago
Nice
@davidesguevillas · a year ago
It’s x to the m-n, not n-m
@minwithoutintroduction · a year ago
Awesome 🎉
@annanemustaph · a month ago
🌵🌵🌵🌵☂☂☂☂🌿🌿🌿🌿
@PeterBarnes2 · a year ago
I guess I'll try to quickly batch out the Gamma Function for your exercise. [e^aD_x] Γ(x) = [e^aD_x] int(0, inf) t^x-1 * e^-t dt = int(0, inf) e^-t * [e^aD_x] e^(x*ln(t)) * t^-1 dt Considering b = ln(t) [e^aD_x] e^bx = sum(n=0, inf) (a^n/n!) [D_x^n] sum(m=0, inf) (b^m/m!) x^m = sum(n=0, inf) sum(m=0, inf) (a^n * b^m / n!m!) [D_x^n](x^m) Noting [D_x^n](x^m) = 0 for n>m, the sum over 'n' may be truncated: = sum(m=0, inf) sum(n=0, m) (a^n * b^m / n!m!) (m!/(m-n)! x^m-n) = sum(m=0, inf) sum(n=0, m) (a^n * b^m / n!(m-n)!) x^(m-n) Working for the binomial, we multiply by m!/m! and shuffle: = sum(m=0, inf) (b^m/m!) sum(n=0, m) a^n (m!/n!(m-n)!) x^(m-n) = sum(m=0, inf) (b^m/m!) sum(n=0, m) nCr(m, n) a^n x^(m-n) = sum(m=0, inf) (b^m/m!) (x+a)^m = e^b(x+a) Remembering b = ln(t), and from line 3: int(0, inf) e^-t * [e^aD_x] e^(x*ln(t)) * t^-1 dt = int(0, inf) e^-t * e^ln(t)(x+a) * t^-1 dt = int(0, inf) t^(x+a-1) * e^-t dt = Γ(x+a) Now, did I cheat by selecting a function which is already expressed in terms of an arbitrary exponential? Maybe. But I think it's a very cool example of how versatile these exponential functions are in this context. Now, where's the convergence? Surely there's some discovery of divergence to be had somewhere, right? Well, as evaluated here, I'm pretty sure the only question is for the integral. The two sums were shuffled around, but in isolation those steps should converge over C. If I recall correctly, the integral diverges exactly where the function is undefined: the poles on the non-positive integers, but shifted in value by -a. I would be interested to know if I'm correct on the convergence, here. I need to study more on matters of convergence in this area.