I love how math lets us create operations that seem not to make sense, but by taking all the definitions related to the topic we can make complete sense of what we created.
@@farfa2937 I agree because the structure created must be checked for internal inconsistencies. I'm not a mathematician, so I voice no opinion on any of these creations. AI might be useful. If the AI crashes, it's no good.
I would argue that the best "setting" for the factorial of the derivative operator would be a generalized Fourier or Taylor series. 90% of all interesting functions can be written as one of those two, so it would be interesting to see the mess of an infinite-dimensional matrix that results.
Taking L^2(R) as your space (and who needs more?!), the Hermite functions would quickly give you the matrix you want. en.wikipedia.org/wiki/Hermite_polynomials#Hermite_functions
I have a general solution, which also reproduces both of your examples (except for some factors of 2 you forgot in your derivation ^^):

d! = Prod{ exp(log(1+1/n) d) * Sum[(-d/n)^k] }

with the product running over n = 1 to infinity and the sum over k = 0 to infinity. Here d is short for d/dx, and the exponential of d is defined the usual way, by the series expansion. When you replace d with a real (or complex) number a, you get the product representation of a!. That shows that for any f(x) = exp(a x) you get d! f(x) = a! exp(a x). For polynomial functions you can cut off the series expansion of exp and the sum:

d! 1 = 1
d! x = x - g
d! x^2 = x^2 - 2 g x + g^2 + pi^2/6

edit: I calculated the next term (g is short for gamma):

d! x^3 = x^3 - 3 g x^2 + 3(g^2 + pi^2/6) x - g^3 - 3 g pi^2/6 - 2 Zeta(3)

I think I'll stop at this point, it becomes increasingly confusing ;-) I even missed a minus sign; it's corrected now, just checked by taking the third derivative of x! at x=0. In the end, my formula is just a product representation of a0 + a1 d + a2 d^2 + ..., with a_n being the nth derivative of x! at x=0, divided by n!. But the domain of convergence of the product form is bigger (it converges everywhere except at the negative integers). That does not matter for polynomials, but it does for exponentials and other functions. and ANOTHER correction: the product as given converges only for |d|
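When d is replaced by a number a, the product above is Euler's classical product for the factorial (the geometric series Sum[(-a/n)^k] collapses to 1/(1+a/n)). A quick numerical sanity check in Python; the cutoff N is an arbitrary choice, and convergence is slow, roughly O(1/N):

```python
import math

def factorial_product(a, N=100_000):
    # partial Euler product: a! ~= prod_{n=1}^{N} (1 + 1/n)^a / (1 + a/n)
    p = 1.0
    for n in range(1, N + 1):
        p *= (1 + 1 / n) ** a / (1 + a / n)
    return p

# compare against the gamma function, since a! = Gamma(a + 1)
print(factorial_product(3.0), math.gamma(4.0))   # both close to 6
print(factorial_product(0.5), math.gamma(1.5))   # both close to 0.8862
```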
Of course, this can be pushed way further - Polynomial spaces of higher degree will give something similar to the second example, and it would be interesting to see whether expanding exponential functions (or other interesting examples) as a series in span{1,x,x²,...} will give us the same result as applying the method directly (which I think it should, and if so would make a strong case for this definition of the factorial derivative operator being the proper one)
I tried to do it in Wolfram Mathematica, but the result of applying the derivative factorial on the space of degree-n polynomials to the degree-n Taylor series of e^3x doesn't seem to converge to 6e^3x. The coefficients alternate between positive and negative, so maybe some sort of analytic continuation or Riemann summation or something is possible, but that's beyond my abilities.
After Fourier transform, derivative is multiplication by omega (the frequency), so for the factorial of the derivative, we could then do something like multiplying by the Gamma function of the frequency, Gamma(omega). With some details to straighten out as an exercise for the reader, of course...
x! = x * (x-1) * (x-2) * ... * 3 * 2 * 1. In some definitions of x!, x! = x * (x-1) * (x-2) * ... * 3 * 2 * 1 * 0!. This definition is circular, however, meaning that 0! is used as a basis to define the unary operator ! so that x! can be computed. This is why I don't trust mathematicians. There is no derivative of x! by any definition of a differentiable function that requires continuity, etc. This is the correct answer to the 1st problem this guy started with. As for the integral of the factorial function, which does exist (unlike the derivative, which DNE), consider the basic step function [x]. For example, let [3] = [1] + [2] + [3]. Then the integral of [3] = 6. For another example, [5] = [1] + [2] + [3] + [4] + [5]. Then the integral of [5] = 15. And in general the integral of [x] = (x * (x+1))/2. This same concept can be applied to find the integral of x!. Then the integral
@@justinlavine9209 Look up the YouTube channel "Lines That Connect". It goes into how one can construct an interpolation of the factorial function. We are working with this interpolation. (It's not unique, but it has some special properties, and it's the only interpolation with those properties.)
To define (d/dx)! in general, we first need to agree on a continuation of the factorial. Taking that to be the Gamma function, the only sensible thing to do is to consider the Taylor series of Gamma at 1 (i.e. the "Maclaurin series of the factorial") and replace the (x-1)^n factors with d^n/dx^n.
You can also use the integral formula, which yields that ((d/dx)! f)(x) is the integral from 0 to infinity of f(x+log(t)) exp(-t) dt, using exp(a d/dx) f = f( . + a) and assuming f is nice enough; maybe you want f to have compact support, be L², or at least have moderate growth, i.e. |f(x+log(t))| exp(-t) integrable both at 0 and at infinity. This formula yields the correct result for both exponentials and polynomials, which is pretty nice when you think about it. The formula exp(a d/dx) f = f( . + a) can be "accepted" either by checking that it's true on polynomials and analytic functions, or by considering the transport equation df/dt = a df/dx and the associated evolution operator. No matter how you see it, exp(a d/dx) has to be the translation by a.
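That integral formula is easy to try numerically. Here is a rough sketch in Python, substituting t = e^s to put the integral over the whole real line; the cutoffs and step count are arbitrary choices, good enough for smooth, moderately growing f:

```python
import math

def d_factorial(f, x, lo=-40.0, hi=6.0, n=100_000):
    # D! f(x) = int_0^inf f(x + ln t) e^{-t} dt
    # with t = e^s this becomes int_R f(x + s) exp(s - e^s) ds,
    # approximated here by the trapezoid rule on [lo, hi]
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        s = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * f(x + s) * math.exp(s - math.exp(s))
    return total * h

print(d_factorial(lambda u: u, 0.0))   # ~ -0.5772 = -gamma, i.e. D! x = x - gamma at x = 0
print(d_factorial(math.exp, 0.0))      # ~ 1 = 1! * e^0
print(d_factorial(lambda u: math.exp(2 * u), 0.0))  # ~ 2 = 2! * e^0
```

This reproduces both the exponential result (D! e^(kx) = k! e^(kx)) and the polynomial result (D! x = x - gamma) mentioned in the thread.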
@@thsand5032 That’s a really neat closed form representation. You can then also change the variable to get Integral(-inf,inf) f(t) * exp(t-x-exp(t-x)) dt, i.e. the form of an integral transformation with kernel K(x,t) = exp(t-x-exp(t-x)).
My take is that we should try to use either the Taylor expansion or the Fourier expansion of functions, to try to define the factorial derivative for a large class of useful functions. I would love to see a part 2 of this video where this is attempted. One potentially interesting question to answer is: for functions with both a Taylor series and a Fourier series, do the factorial derivatives in both domains agree?
@@justinlavine9209 This video isn't talking about the derivative of x!. It's talking about the factorial of the derivative operator. But FWIW, there is a continuous version of the factorial function, called the gamma function, that is generally accepted as the generalization of the factorial to continuous values. Also, the definition of the integer factorial isn't circular, it's recursive. Namely: 0! = 1, and (n+1)! = (n+1)*n! for any natural number n. It's easy to see that you can use this definition to get the value of the factorial for any natural number input.
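That recursive definition translates directly into code; a minimal sketch:

```python
def factorial(n):
    # recursive definition: 0! = 1, (n+1)! = (n+1) * n!
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120
```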
3:35 Motivated by this part, it seems like the Fourier transform gives reasonable generality, as the transform "diagonalizes" the derivative. There one may find

D! f(x) = integral( Gamma(1 + iw) f^(w) exp(iwx) / (2 pi) dw, w in R )
= integral( integral( u^(iw) exp(-u) f^(w) exp(iwx) / (2 pi) du dw, u >= 0 & w in R ) )
= integral( exp(-u) integral( f^(w) exp(iw(x+log(u))) / (2 pi) dw, w in R ) du, u >= 0 )
= integral( exp(-u) f(x+log(u)) du, u >= 0 ).

This formula fits all the examples discussed in the video!
I feel like I'd just cut to the chase and use a power series representation of the Gamma function and insert the derivative into that series. It would certainly behave the same way on polynomials as you describe, and for other functions that don't fall into the nullspace of some (d/dt)^n, it could still work.
@@ianmathwiz7 Yes, you could. I guess I am thinking in terms of functional calculus more than anything, which lets you define new operators through composition with continuous functions. The derivative isn't a bounded operator (over most reasonable spaces), but the power series representation can still work. The power series does make it blindingly obvious how to apply the derivative operator in this case, and can lead to approximation schemes from truncations of the power series.
I see a few comments here about the use of the Taylor series, and even the integral representation, of the gamma function to define this operator more generally and explicitly, which I find excellent. There are, however, even more direct ways to access these same results, in a manner which very directly gives your first and second results here.

(Note: I call these 'derivative operators', distinct from 'differential operators', because they are a generalization of a specific kind of differential operator: they are infinite-order constant-coefficient linear differential operators. I think 'derivative operator' is a good term because to me it rings well with the idea of being more loosely but linearly related to the derivative.)

These are the exponential rule and its generalization with a monomial. (Obviously you can extend from monomial to polynomial, as these operators are linear in the operand.) The exponential rule is rather straightforward:

[f(D_x)] e^bx = f(b) e^bx

which is pleasant in how simple it is. The generalization to a monomial is not so simple. It's very much like the expansion of a binomial, which is because it is:

[f(D_x)] x^p * e^bx = e^bx * sum{n=0, p} nCr(p, n) f^[n](b) * x^(p-n),

where f^[n](b) is the nth derivative of f at b, and nCr(p, n) is the inline text notation for 'p choose n.' These can be found by Taylor expanding f, and you might want to try that for yourself. I suggest plugging in Gamma(D_x + 1) for f(D_x) to verify the results of the video.

(There are considerations of convergence regarding the parameter b, but whenever there is such a divergence, in these simple examples, the proper answer is also DNE. To see this, you might try [Gamma(D_x + 1)] e^-x. In fact, this also implies that [Gamma(D_x)] x^p diverges, which makes the Taylor series form of this operator not easy to evaluate, because this same expression would appear.
The fact that such an expression would appear in an infinite Taylor series, however, entails an example not so simple as to imply DNE.)

But there is an even better, even more general result, which you may find interesting. Indeed, it is even a generalization of the generalized Leibniz rule, or product rule. In particular, it allows you to convert a product in the operand into a nested derivative of the sort we see here. It does not always simplify the work, but it can at times enable work to happen at all, or equivalently so.

Before that, I will explain a minor detail of my notation: all such derivative operators must be within square brackets [], and a substitution may be appended in subscript on the right bracket, like you would with a pipe |. (For unformatted, inline text, this gives unsightly results like [g(D_x + h)]_(h=b), but I hope you can forgive that.) Further, the substitution can be to a new derivative operator, as long as an additional set of square brackets goes over the entire expression outside that substitution.

The rule, here called the generalized rule:

[f(D_x)] (y(x)*g(x)) = [[g(D_z + s)]_(z=D_x) f(z)]_(s=x) y(x)

The substitution from x to s in the expression is to avoid confusion. This rule can be found through similar Taylor series means as the other two, but involving two or three (depending on your approach) Taylor series at once. It's tricky, but it's definitely there. This form also might remind you of non-constant-coefficient differential operators. I have put some work into establishing connections to prominent such differential equations, but this rule is as obtuse as it is useful. Of that usefulness, consider applying the rule, simplifying an expression, then applying it in reverse. Or simply the fact that the exponential rule and its monomial generalization are implied by this rule. Evaluating these implications is quite an exercise. Of the simple exponential rule, it is fun and intriguing.
Of the monomial form, it is tedious and not so enlightening, but perhaps important. I shall here show how to obtain the exponential rule from the generalized rule, and leave the monomial form as an exercise to the reader.

We start by appending the unit constant function in x, which I notate 1(x). This is simply to follow the rule most closely.

[f(D_x)] (1(x)*e^bx) = [[e^b(D_z + s)]_(z=D_x) f(z)]_(s=x) 1(x)

These operators are also linear in the operator term (but not linear simultaneously in both the operator and operand, of course):

= [e^bs [e^b(D_z)]_(z=D_x) f(z)]_(s=x) 1(x)

As well, we can extricate the e^bs term by applying our _(s=x) rule, so long as the new e^bx term never moves from the left to the right of an extant [f(D_x)] operator, at least without parentheses:

= e^bx [[e^b(D_z)]_(z=D_x) f(z)]_(s=x) 1(x)

As pointed out in other comments, the exponential of a derivative is the shift operator:

= e^bx [f(z+b)|_(z=D_x)]_(s=x) 1(x)

And apply the remaining rules:

= e^bx [f(D_x + b)] 1(x)

Now we are a little bit stuck. In particular, I am all but certain we are forced to use the Taylor series of f. To be as lax and ornery as possible, we can take this Taylor series at any point where the series will converge at 'b.' Where f^[n](k) is the nth derivative of 'f' at 'k':

[f(D_x + b)] 1(x)
= sum{n=0, inf.} f^[n](k)/n! * [D_z + b - k]^n (1(z))
= sum{n=0, inf.} f^[n](k)/n! * sum{m=0, n} nCr(n, m) [D_z]^m (1(z)) * (b-k)^(n-m)

We can see the mth-order derivative is 0 for all but m=0, so

= sum{n=0, inf.} f^[n](k)/n! * (b-k)^n
= f(b)

So, from before, e^bx [f(D_x + b)] 1(x) = f(b)*e^bx.

That the Taylor series is absolutely necessary for this step I am not sure of (rather certain, but I am without proof). I suppose the best evidence would be the existence of a function 'f' which does not have such a Taylor series but which does give some convergent result not equal to any existing or limiting value of f(b). ...
Perhaps such an idea could be used to find new generalized summation methods. Suppose you had a Taylor series which diverged, but you found some alternative answer for [f(D_x + b)] 1(x); in principle your new answer would have some relation to that divergent Taylor expression of f(b). What would be even more interesting is if the result was not a constant value, but a new function of 'x.' Interesting, even if not necessarily useful. You can hopefully see that the monomial form can be proven fairly simply in equivalent fashion. I hope you found this interesting, or even useful, for your own explorations of this field.
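The monomial rule above is easy to sanity-check numerically when f is itself a polynomial, so that [f(D_x)] can be applied exactly. A sketch in Python with the arbitrary choice f(z) = z^2 (so f(D_x) = D_x^2) and p = 2:

```python
import math

# f(z) = z^2 and its first two derivatives (all higher ones vanish)
f = [lambda z: z * z, lambda z: 2 * z, lambda z: 2.0]

def rhs(b, x, p=2):
    # the monomial rule: e^{bx} * sum_n nCr(p, n) f^[n](b) x^(p-n)
    return math.exp(b * x) * sum(
        math.comb(p, n) * f[n](b) * x ** (p - n) for n in range(p + 1))

def lhs(b, x):
    # direct computation: d^2/dx^2 [x^2 e^{bx}] = e^{bx} (b^2 x^2 + 4 b x + 2)
    return math.exp(b * x) * (b * b * x * x + 4 * b * x + 2)

print(lhs(1.5, 0.7), rhs(1.5, 0.7))  # the two sides agree
```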
@@justinlavine9209 You're right that the derivative of a discrete function is not defined. This is why people refer instead to the gamma function, which extends the factorials from the natural numbers all the way to the complex numbers (save for the negative integers). Other than the gamma function being off by one (n! = Γ(n+1)), the gamma function still follows the functional equation for factorials: Γ(x+1) = xΓ(x). However, the video isn't talking about the derivative of the factorial. It's talking about what happens when you evaluate the factorial function not at an integer number, not at a complex number, but _at_ the derivative operator. To mathematicians, this very definitely sounds like nonsense, but a good mathematician is never turned away by nonsense! The video does a good job of taking the idea seriously and narrowly so as to be as rigorous as nonsense can be.
At 10:25: Isn't f''(0) = 2a_2? Thus shouldn't the matrix entry be lambda*mu*f''(0)/2? Does that change anything later on? Edit: Yeah, I guess the upper-right entry of D! should be gamma^2 + pi^2/6, if I'm not mistaken.
Since the Fourier transform turns differentiation by x into multiplication by ik, could we just transform f(x) into F(k), multiply by Pi(ik) (where Pi(x) = Gamma(1+x)), and then take the inverse transform?
It should be possible to define this for a wide range of cases using the Fourier transform - the useful property here is that the Gamma function decreases rapidly in both directions along the imaginary axis, working in favour of convergence.
In general one can define a function of a self-adjoint operator via the functional calculus. Operators like e^(i (d/dx) t), e^((d/dx) t), sin((d/dx) t) are pretty well understood when one needs the semigroups of the Schrödinger, heat, or wave equation. Taking Gamma(d/dx + 1) works just fine, unless you hit a negative integer with the eigenvalues. But it is true, the domain does matter. The most straightforward would probably be using the Sobolev space H_0^1([0,1]).
For the matrix f(A), the entry in the first row and third column should be divided by 2. Differentiating f(x) twice makes the first term in the expansion equal to 2*a_2, with powers of x in the rest. Thus f''(0) = 2*a_2, which means a_2 = f''(0)/2.
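With that 1/2 in place, the whole quadratic case can be checked in a few lines. On span{1, x, x²} the derivative D is nilpotent (D³ = 0), so f(D) = f(0)·I + f'(0)·D + (f''(0)/2)·D² with f(z) = Gamma(z+1). A sketch in Python, estimating Gamma'(1) and Gamma''(1) by central differences (the step size h is an arbitrary choice):

```python
import math

# Taylor data of f(z) = Gamma(z + 1) at z = 0, via central differences
h = 1e-4
f0 = math.gamma(1.0)                                          # = 1
f1 = (math.gamma(1 + h) - math.gamma(1 - h)) / (2 * h)        # ~ -gamma
f2 = (math.gamma(1 + h) - 2 * f0 + math.gamma(1 - h)) / h**2  # ~ gamma^2 + pi^2/6

# apply D! = f0*I + f1*D + (f2/2)*D^2 to x^2, i.e. to coefficient vector
# [0, 0, 1] in the basis (1, x, x^2); D x^2 = 2x -> [0, 2, 0], D^2 x^2 = 2 -> [2, 0, 0]
result = [(f2 / 2) * 2, f1 * 2, f0 * 1]
print(result)  # ~ [gamma^2 + pi^2/6, -2*gamma, 1], i.e. x^2 - 2*gamma*x + gamma^2 + pi^2/6
```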
Nice video! The matrix for the 2nd-degree polynomial reminds me of the ladder operators (creation/annihilation operators) from quantum mechanics. Also, the solution of the quantum harmonic oscillator consists of Hermite polynomials, and Hermite polynomials can be expressed with gamma functions! In terms of these operators, the Hamiltonian of the HO depends on the number operator. I think this matrix is clearly related to the number operator and the Hermite polynomials. If I find any relation, I will post it here.
What about d! f(x)=Gamma(d+1)f(x)=\int_0^\infty f(x+log t) e^(-t) dt, which follows from the definition of the shift operator. Then d! x=x-gamma, where gamma is the euler mascheroni constant.
I think the core idea of the factorial operator is "repeated action until you reach a minimum." E.g. "n factorial" is multiplying all integers between n and 1 inclusive, because if you went all the way to 0 it would just zero out for all possible inputs, and would continue to do so for all inputs below 0. We then define 0! = 1 as a special case.

This would seem to imply that we should look for some similar "natural stopping point" for the derivative operator. Not all functions will be valid inputs by this analysis; for example, f(x) = e^x never has a "stopping point," because its derivative is always equal to itself. This seems analogous, to me, to how negative numbers and fractions do not have a factorial by the standard definition of "factorial." Instead, you can only get sensible answers to "what is the factorial of 6.25" or "what is the factorial of -3" via analytic continuation, and thus something similar could apply to this "derivative factorial."

For all polynomials, we do have a clear end point: a polynomial of degree 0 has derivative zero. Thus, the "derivative factorial of f(x)" only has a simple definition for polynomials: functions with only real-valued constants and variables raised to constant, nonnegative integer powers. (d/dx)! would then be defined recursively: (d/dx!)[f(x)] = f(x)×(d/dx!)[f'(x)] so long as f(x) is not identically 0. This is then perfectly analogous to the way the "integer factorial" is defined: n! = n×(n-1)! for any n > 0. And just as we define the special case 0! = 1 for the "integer factorial," we can define the special case (d/dx!)[0] = 1, since differentiation eventually sends every polynomial to the zero function.

For a power function like ax^n, this results in the product (a×x^n)(n×a×x^(n-1))(n(n-1)×a×x^(n-2))...(n!×a)(1). This would become very hairy very quickly! However, we can do some things to simplify it: the powers of x collect into the sum of the integers from 1 to n, which by Gauss must be (n)(n+1)/2.
The constants out front collect into a total of a^(n+1), and the repeated differentiation contributes the product of the falling-factorial coefficients, which works out to (n!)^(n+1)/sf(n), where sf(n) = 1!×2!×...×n! is the superfactorial. E.g. if you were looking at 7x^3 the product would be 7(x^3) × 7(3x^2) × 7(6x) × 7(6) × (1) = 7^4 × 108 × x^6, and indeed 108 = (3!)^4/sf(3). This then gives us the following formula for the power function's "derivative factorial":

(d/dx!)[ax^n] = a^(n+1) × (n!)^(n+1)/sf(n) × x^(n(n+1)/2)

Thankfully, due to the sum rule, this then gives us enough to work with all polynomials with real coefficients: simply treat each monomial individually, then sum them together: (d/dx!)[f(x)] = Sum((d/dx!)[c_k×x^k], from k=0 to k=n). That said, this definition does fall afoul of things like algebraic manipulation of the derivative (you can't just factor out a constant here!), so it's possible that a slightly tweaked definition might be preferable. For example, maybe perform the derivative factorial only on the "function part," not on the "constant part"? That would give (d/dx!)[ax^n] = a × (n!)^(n+1)/sf(n) × x^(n(n+1)/2). But this is a matter that deserves more analysis and introspection than a YT comment can contain.
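This recursion is easy to implement on coefficient lists; here is a sketch in Python, following the worked product above (which keeps every successive derivative, down to the final constant, as a factor):

```python
def diff(p):
    # derivative of a polynomial given as a coefficient list, p[k] <-> x^k
    return [k * p[k] for k in range(1, len(p))]

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def deriv_factorial(p):
    # multiply f * f' * f'' * ... down to the last nonzero (constant) derivative;
    # the empty list plays the role of the zero polynomial, whose "value" is 1
    result = [1]
    while p:
        result = polymul(result, p)
        p = diff(p)
    return result

# 7x^3: the product 7x^3 * 21x^2 * 42x * 42, coefficient 7^4 * 108 = 259308, power x^6
print(deriv_factorial([0, 0, 0, 7]))
# x^2: the product x^2 * 2x * 2 = 4x^3
print(deriv_factorial([0, 0, 1]))
```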
@@justinlavine9209 The definition of the factorial is *recursive,* not circular. There's a big difference. Plenty of important things are perfectly well-defined and give exact answers even though they're defined recursively, e.g. Ackermann's function. Further, you're conflating the recursiveness of the factorial with the fact that the standard factorial function is *discrete,* but this isn't a problem either; there is such a thing as a discrete derivative too, and indeed a whole field of discrete calculus. If you want a definition of the factorial that *is* continuous, all you need do is take Gamma(x+1). So even that isn't a problem.

More pertinently, the point of this exercise is to define an operator. That operator should either terminate (a finite set of actions) or converge (an infinite set of actions which reaches a fixed point). My proposed "factorial derivative" is of the former type, and plenty of important mathematics works that way. The Mandelbrot set, for example, is the set of all points which never "explode" away from the origin of the complex plane, with the color of a point outside the set defined by how quickly or slowly that point "explodes" away. (Formally, it's recursively feeding values into a complex-valued polynomial function and determining how many steps you need to take to "escape" past a fixed escape radius.)

All of this is playful questioning in mathematics. What can we do? What can be given a sensible definition that gives interesting results? And once we set that definition, what other interesting things follow? It might end up that my proposal above cannot be made to work the way I want it to, or that even when so defined it does nothing particularly interesting. That's okay. Creative exploration in mathematics should embrace the possibility that what you look into won't actually go anywhere.
I've been working on that normalization for more than 5 years: passing from a discrete function f(n) to a function F(x) on the complex plane such that F(x) at n equals f(n). I haven't reached the goal yet, but I have gotten very interesting results about the generalization of the functional equation of the generalized gamma function.
Forming a Hilbert space spanned by all L² functions, maybe via Fourier series, and then looking at what the factorial of the derivative would mean there, might be fun.
What about using the definition of the gamma function, looking at Gamma of d/dx acting on f(x)? This would put the derivative into the exponent of t in the dt integral, and, assuming convergence, this would become the shift operator inside the integral.
Initially, when I first saw the thumbnail, I thought of the factorial of a derivative as the operator D(D-I)(D-2I)..., where D is the derivative operator and I is the identity operator. The flaw of this idea is that we kind of have to specify the step at which we must stop, just like for the ordinary n!. If (d/dx)!_{n+1} denotes the nth factorial of the derivative operator, we can write it as (d/dx)!_{n+1} = D(D-I)(D-2I)...(D-nI) = \sum_{k=1}^{n+1} a_k D^k where a_k = \sum_{1
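For a fixed cutoff this falling factorial of D is perfectly concrete: on a finite-dimensional polynomial space it can be multiplied out as matrices. A sketch in Python on span{1, x, x²} (the degree cap and the stopping point n = 2 are arbitrary choices):

```python
# D on span{1, x, x^2}, acting on coefficient vectors (c0, c1, c2)
D = [[0, 1, 0],
     [0, 0, 2],
     [0, 0, 0]]

def shift(A, k):
    # A - k*I
    return [[A[i][j] - (k if i == j else 0) for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

# (d/dx)!_3 = D (D - I)(D - 2I) = D^3 - 3D^2 + 2D
P = matmul(matmul(D, shift(D, 1)), shift(D, 2))
# applied to x^2: D^3 x^2 = 0, D^2 x^2 = 2, D x^2 = 2x, so we expect 4x - 6
print(matvec(P, [0, 0, 1]))  # [-6, 4, 0]
```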
Great content! It gives a lot of insight to math enthusiasts (though I don't know if there's a new generation of them or only us old ones). Note that there's a small calculation flaw at 13:15 (not diminishing the value of the video): the middle term of the first expression should be (-2·gamma·x), not (-gamma·x).
Using the standard integral for x! I get [EDIT AS PER ANGEL BELOW]

D! f(x) = integ(u=0 to +inf) du exp(-u) exp((ln u)D) f(x)

Expanding the second exp, in particular setting D^0 f(x) = f(x), we get

exp((ln u)D) f(x) = f(x) + (ln u) f'(x) + (ln u)^2 f''(x)/2! + (ln u)^3 f'''(x)/3! + ...

which is the Taylor series for f(x + ln u). So

D! f(x) = integ(u=0 to +inf) du exp(-u) f(x + ln u)

E.g.

D! e^(kx) = integ(u=0 to +inf) du exp(-u) exp(kx + k ln u) = e^(kx) integ(u=0 to +inf) du exp(-u) u^k = k! e^(kx)

Or

D! x = integ(u=0 to +inf) du exp(-u) (x + ln u) = x - gamma

where gamma is the Euler-Mascheroni constant.
@@angelmendez-rivera351 I wrote it wrong! f(x) is outside the exponential, as you rightly point out. Thanks for the correction!

D! = integ(u=0 to +inf) du exp(-u) exp((ln u) D)
D! = integ(u=0 to +inf) du exp(-u) (D^0 + (ln u) D^1 + (ln u)^2 D^2/2! + ...)
D! f(x) = integ(u=0 to +inf) du exp(-u) (D^0 f(x) + (ln u) D^1 f(x) + (ln u)^2 D^2 f(x)/2! + ...)
D! f(x) = integ(u=0 to +inf) du exp(-u) f(x + ln u)
The end result of the FT method is convolution with the inverse FT of (is)Gamma(is) = Gamma(1+is), whose modulus is bounded and decreases rapidly away from s=0. This can be compared with the effect of exp(d/dx), which can be interpreted as a shift operator.
My choice: f(x) = a * x^n ⬅ When trying to transform this simple function, I got transformation matrix { ( (n-1)/n ; 0 ) ; ( 0 ; n ) }. This is already diagonalized, so applying factorial, using euler reflection formula (on the first entry) and putting it back to the original function gives f ' (x) = a * n! * x^(n * pi / (sin(pi / n) * Gamma(1/n))). If i'm wrong, please correct me.
If someone had asked me that question on the street (if that's even possible), I'd have said that it was nonsense... now I know that they'd think I'm the crazy one XD
That was extremely interesting to me, as I just finished my first linear algebra course at the Open University (my second course overall), and I'm searching for ways and motivation to do math until I qualify for the degree in my country.
Would there be a problem in defining it as just the derivative operator with transformed eigenvalues, each new eigenvalue being the gamma function of the "old" eigenvalue (the eigenvalue of d/dx)?
If the setting is powers of x, then it is a transformation on the Taylor series; and since changing the origin can be seen as a linear transformation of the Taylor coefficients, this is coordinate independent.
How about this as an interpretation. Let's generalise the factorial to the Gamma function Γ(n)=(n-1)!. Every derivative operator has a corresponding symbol in Fourier space. Denote the symbol of a differential operator, D, by σ(D)(k), where k is the Fourier variable. The Gamma function of this is Γ(σ(D)(k)), apply this to the Fourier transform of a function f̂(k), and then take the inverse Fourier transform to get D!.
Hi! Whatever the linear transformation is, if it can be represented by a matrix, I would suggest: D(D-Id)(D-2Id)..., where D is the matrix of the transformation and Id that of the identity. But I don't see "the good place to stop" 😁 Or maybe a generalisation starting from the integral formula of the gamma function.
As I understood it, you basically used any function that has a Taylor series at 0 to order at least 3 (counting up from 0, so up to a 2nd-order polynomial). What I don't understand is: why specifically around 0, other than that the formula is nicer that way? In general, in both cases it looked like taking the factorial of the constant we get from differentiation; I wonder if there is a more mathematical way of stating that.
The Gamma function is terrible; y'all should work with Π(s) instead. The original reason Gamma shifted the factorial is that Legendre thought it'd be more important that the first pole happen at s=0. The defining feature of the factorial function Π is that it extends the factorial; who the hell thinks having the first pole at zero is more important than that??
In QFT, when you do dimensional regularization, that's the only case where it's consistently easier to work with Gamma. Any other place Π is much better
A lot of things stick because of convention. Like in modular arithmetic, we say x \equiv y (mod m) rather than adding a subscript under that congruence sign, despite it being less efficient.
@@HershO. The modular arithmetic notation is fine if you don't write "( mod _m_ )" after every congruence/at the end of every line like Michael does xD It is much better to write something like "a ≡ b ≡ c ≡ d ≡ ... ≡ e ≡ f ≡ g ≡ h ( mod _m_ )" which is also less cluttered than if you used "≡_{ _m_ }".
Great video! But I think it's incorrect to represent the (d/dx)! operator as a matrix, because the factorial is a nonlinear operation. E.g., take f(x) = 2, apply the transformation shown at the end of the video, and get 2. But (d(2)/dx)! = 0! = 1. Contradiction.
This is pretty powerful. Is this a research topic/open question, or has this already been generalized? My take? Maybe something to do with functors between the category of modules of linear operators (polynomials, differentials, etc.) and modules of nonlinear operators (the Gamma function, factorials)? My answer is vague and sloppy because I'm really just throwing something out on intuition here.
This will sound disgustingly unrigorous, but assuming the exp(d/dx) operator is well defined for the function f(x) we're interested in, can we just use the gamma function again? (d/dx)! f(x) = d/dx ∫₀^∞ exp(ln(t)·(d/dx − 1))·exp(−t) f(x) dt
You can actually see the relation between the derivative of x! and the harmonic numbers using some quite simple algebraic manipulations, without involving the gamma function
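The identity in question is ψ(n + 1) = H_n − γ, where ψ = Γ'/Γ is the digamma function, so d/dx (x!) at x = n equals n!·(H_n − γ). A quick numerical check (using scipy's `digamma` and numpy's built-in Euler-Mascheroni constant; the gamma-free algebraic route the comment describes isn't reproduced here):

```python
import numpy as np
from scipy.special import digamma

# psi(n + 1) = H_n - gamma_E, so d/dx (x!) |_{x=n} = n! * (H_n - gamma_E)
n = 5
H_n = sum(1.0 / j for j in range(1, n + 1))  # harmonic number H_5 = 137/60
assert np.isclose(digamma(n + 1), H_n - np.euler_gamma)
```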
@@cheems1337 I know where I'm not wanted. My apologies for saying anything because apparently I haven't learned my lesson about opening my mouth around groups like this.
The Fourier transform turns differentiation into the so-called "spectral derivative", which is just multiplication in the spectral domain: F[d/dx f(x)] = iω F[f(x)]. Can a factorial of this operation be soundly defined at all?
That's impressive. CS student here - how do I learn to operate on... operators? What are the prerequisites? My foundations include basic matrix algebra (up to diagonalization) and Calc II, with a bit of Calc III.
Tosio Kato - Perturbation theory for linear operators. This is what is needed to rule the world.
This in fact has a bit of an André Weil flavor to it (no, I don't mean Andrew Wiles, even though that could reasonably apply too). On the other question mentioned, the derivative of the factorial: what's given below as ln(x + 1/2) is really a simplification of a continuous harmonic series, or a simplified use of it. When it comes to Gamma, factorial, etc., I personally use the Gauss Pi function, though it's easier to find tabulated values and properties of the Gamma function, saving time and effort. Being lazy isn't leading anywhere in mathematics, though. What was I referring to in the beginning? Without proof: d(x!)/dx ≈ x! · ln(x + 1/2), approximately. Another thing I want to mention: Saunders Mac Lane once saved my life, just by appearing at an appropriate time on some ad, on Discovery Channel I think.
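The claimed approximation d(x!)/dx ≈ x!·ln(x + 1/2) amounts to saying ψ(x + 1) ≈ ln(x + 1/2), since the exact derivative is x!·ψ(x + 1). That approximation is easy to check numerically (tolerance chosen loosely; this is a rough check, not a proof):

```python
import math
from scipy.special import digamma

# Exact: d/dx (x!) = x! * psi(x + 1).  Claim: psi(x + 1) ~ ln(x + 1/2).
for xv in [1.0, 5.0, 10.0, 50.0]:
    assert abs(digamma(xv + 1.0) - math.log(xv + 0.5)) < 0.05
```

The agreement improves quickly as x grows (the error is O(1/x²)), which is why the simplified form works so well.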
I think there's a mistake in your math in the second example. The coefficients a_n are not just the nth derivatives of f at 0; there is an additional factor of 1/n!. So your final matrix D! will come out a little different.
I don't know much about category theory or commutative diagrams, but would it make sense mathematically to add one more step in terms of functions, instead of applying the factorial to D directly? For example: call d/dx "D" and the factorial "F" (as a morphism of some kind), and use a two-step operation (d/dx !)v = (DF)v, computing (DF) first before supplying the input to D or F, to interpret this concept? Here we applied D on the spaces and then got a result by using diagonalization and a lemma. What if we defined a factorial operator along with a derivative transformation for any two arbitrary isomorphic vector spaces?
Wouldn't the factorial act on the derivative the way it acts on n? Meaning, compose d^n/dx^n down through d^0/dx^0 (which is presumably the identity) applied to f. Like (d²/dx²)! f(x) = d²/dx² (d/dx (f(x))) = d³/dx³ f(x)?
I am always pleasantly fascinated by Michael's videos, and this is no exception. But I do get a bit flummoxed at times. At about 4:00, zero factorial is taken as zero by applying the factorial operator to the matrix [[3 0], [0 4]], giving 0! = 0, twice; at about 12:10, zero factorial is taken as one by taking Γ(1) = 0! = 1, thrice. I have a feeling Michael may be being playful with us on this....
Okay, I think I have got this (but then again maybe not). If we take the matrix [[1 0], [0 1]] as the 2d identity matrix, then the problem of 0! = 0 arises. There is a (potential and justified) solution: replace the identity matrix with [[1 {null}], [{null} 1]].

Justification: in the 'x' directions, with orthogonality in the 'y' directions, the x value cannot take any y value or values at all, due to the independent and orthogonal tensors (if that is the word). By the independence and orthogonality of the chosen x, y tensors, the y tensor has no projected or injected influence on the x tensor at all, and the other way around, as the terms x, y are merely convenient labels to describe something. This may be an instance of the difference between infinite values in all directions, zero values, or, ahem, straightforward null by the nullity of orthogonal and independent variables. It also implies that the null factorial is indeed null, nil, zero, or even that the null factorial does not exist at all. There are other nice implications of this interpretation, such as: the projected/injected influence between orthogonal tensors is neither zero nor infinite but indeed null. And it can be extended to higher dimensions and things like that.

Here I use 'null' to mean something that does not exist, other than as a label for a non-existent thing; 'zero' to mean the thing takes values and does indeed exist and sometimes equals zero; and in cases of complete intersection the values exist and are infinite in the appropriate directions. Another concept? Zero may be taken as many things, beware. Perhaps we should do null the honor of attaching a nice label to it, maybe like an empty-set zero: a diagonal pairing, but with a double diagonal, sort of like an X paired on 0, or even a big + paired on 0?
If you define the exponential of the derivative operator through its power-series representation, you get a translation operator (or at least that's how we think of it in physics). I wonder if doing the same for the gamma function would give the same answer as Michael's approach.
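The translation-operator view can be checked directly, and it also gives one way to make sense of a Gamma integral for (d/dx)!: since exp(ln(t)·d/dx) f(x) = f(x + ln t), the integral ∫₀^∞ t^{d/dx} e^{−t} dt applied to e^{ax} should return a!·e^{ax}. A small sketch of both facts (my own construction, checked only on the eigenfunction case, so not a verification of the video's full approach):

```python
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# 1) exp(a*D) f(x) = f(x + a): the exponential series terminates on a cubic
def exp_aD_cubic(x, a):
    derivs = [x**3, 3 * x**2, 6 * x, 6.0]  # f, f', f'', f''' of f(x) = x^3
    return sum(a**k / math.factorial(k) * d for k, d in enumerate(derivs))

assert np.isclose(exp_aD_cubic(2.0, 0.7), (2.0 + 0.7) ** 3)

# 2) Plugging the shift into the Gamma integral, acting on e^{a x}:
#    (d/dx)! e^{a x} = int_0^inf e^{a (x + ln t)} e^{-t} dt
#                    = Gamma(a + 1) e^{a x} = a! e^{a x}
a, x0 = 0.5, 1.0
val, _ = quad(lambda t: math.exp(a * (x0 + math.log(t)) - t), 0.0, np.inf)
assert np.isclose(val, gamma(a + 1) * math.exp(a * x0))
```

On exponentials, at least, the translation-operator route and the eigenvalue route (d/dx has eigenvalue a on e^{ax}, so (d/dx)! acts as a!) agree.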
Great stuff! Thank you! @6:25 it feels like there is some hand-waving: the space of b + 2cx is best represented by vectors with basis x^0 and x^1, whereas a + bx + cx^2 lives in a three-dimensional space with a different basis, x^0, x^1, x^2. So why is the map P_2 → P_2, i.e. a 3×3 matrix?
Guy asked a stupid question. Why do you think no one was able to answer? Here's why, bruh: x! = x·(x−1)·(x−2)·...·3·2·1. In some definitions of x!, x! = x·(x−1)·(x−2)·...·3·2·1·0!. That definition is circular, however, meaning that 0! is used as a basis to define the unary operator ! so that x! can be computed. This is why I don't trust mathematicians. There is no derivative of x! by any definition of a differentiable function that requires continuity, etc. That is the correct answer to the first problem this guy started with. As for the integral of the factorial function, which does exist (unlike the derivative, which DNE): consider the basic step function [x]. For example, let [3] = [1] + [2] + [3]; then the integral of [3] = 6. Another example: [5] = [1] + [2] + [3] + [4] + [5]; then the integral of [5] = 15. In general, the integral of [x] = (x·(x+1))/2. The same concept can be applied to find the integral of x!. Then the integral
I thought it would be that (d/dx)!(f(x)) = f′(x)·f″(x)·f‴(x)·...·f^{(x)}(x), where the last term carries x primes (this would only be defined for integer x, but I would love an extension of it, like the gamma function for x!).
Again, a question of nomenclature. When the apostrophe symbol (′) is used to indicate a derivative of a function, it stands in for a Roman numeral I in superscript. It is read "prime", after the Latin prima, "first"; thus it means the first (derivative). The second derivative is denoted by superscript II, read "second" after the Latin secunda. It is a mistake to read it "double prime" ("double first"? Come on!). The same goes for the third (superscript III), fourth (IV, alternatively IIII), fifth (V), and so on, respectively read third, fourth, fifth, etc., never "triple prime", etc.
Would your second approach coincide with the first, if a change of basis was performed? I.e. using the Chebyshev polynomials instead of the obvious powers of x choice.
This is interesting, but I'm not quite satisfied with what it means to take a factorial of a matrix as shown here. Of course, I know that this is the standard (and reasonable) way of taking a function of a matrix. So I guess what I'm saying is that maybe it's not best to think of the factorial as a function. Maybe whatever the factorial does that is so important - maybe that job has to be done by different functions depending on the dimension. Or something. I'm not sure. I guess I'll look at the continuity of this function on R^n when I get the chance. Not sure what this'll accomplish, but it's my first instinct.
Well it's fascinating, but what are the real-world applications? Is the representation of the general form of the quadratic as a 3 x 1 matrix arbitrary, or is there justification for it elsewhere?
If we assume that in general (f(x)+1)! = (f(x))!·(f(x)+1), then, writing Df = df/dx, !f = (f(x))!, If = f, 1f = 1, and X = D!:

!(I+1) = !I·(I+1)
!(I+1) = !·(I+1)
(D!)(I+1)·D(I+1) = D!·(I+1) + !·D(I+1)
X(I+1)·D = X·(I+1) + !·D
X(I+1)·D = X·I + X + !·D

I don't think anything more specific can be said without extra assumptions.