I loved this course, probably the best introduction to Lie groups on this platform! Is there, by any chance, a possibility of a course like this but on Riemannian manifolds?
Very nice and comprehensive video! Thanks a lot! I'm wondering whether the link to the video in the last slide will be maintained. Currently it is not available.
All videos can be found by searching for "Lie theory for the roboticist" on YT. There are starting to be a few of them! They are all roughly the same, but not identical!
Very nice lecture! Could you please make the slides available for other viewers? Also, I have a question: could you please emphasize the key differences between the EKF and the IEKF that you showed on the slides? Why do we want to use the Lie algebra in localization tasks, especially in the EKF? Thank you!
What do you mean by IEKF? Invariant? Iterative? Information? Indirect? They are all possible choices. In the course, however, I don't remember referring to any of them. I suppose then that you refer to the ESKF, or error-state KF.

All ESKFs work with a nominal state and an error state. All Lie-based KFs are indeed ESKFs, because the error is defined in the tangent space. For example, let the state be a quaternion q \in S^3 \subset R^4. The tangent space is isomorphic to R^3. Now, given a computed Kalman gain K, the updates on the state q for the EKF and the ESKF are:

EKF: q_new = q + K * ( y - h(q) ) --- here dq = K * ( y - h(q) ) \in R^4

ESKF: q_new = q * Exp( K' * ( y - h(q) ) ) = q (+) ( K' * ( y - h(q) ) ) --- here dq = K' * ( y - h(q) ) \in R^3

So the updates are indeed quite different, but the shortcut (+) makes them look the same. Remark that K is for the EKF and K' is for the ESKF; they are not equal.
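The two updates above can be sketched numerically. This is a minimal toy example, not the course's code: the innovation z and the gains K and K' are made-up numbers just to show the mechanics (additive update in R^4 vs. tangent-space retraction onto S^3).

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def Exp(theta):
    """Exponential map R^3 -> S^3: rotation vector to unit quaternion."""
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = theta / angle
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

q = np.array([1.0, 0.0, 0.0, 0.0])   # nominal state, unit quaternion
z = np.array([0.05, -0.02])          # toy innovation y - h(q), 2-dim measurement

K  = 0.1 * np.ones((4, 2))           # toy EKF gain: maps R^2 -> R^4
Kp = 0.1 * np.ones((3, 2))           # toy ESKF gain K': maps R^2 -> R^3

# EKF: additive update in R^4. The result leaves the unit sphere
# and must be renormalized afterwards.
q_ekf = q + K @ z

# ESKF: the correction lives in the tangent space R^3 and is
# retracted onto S^3 with Exp. The result stays a unit quaternion.
q_eskf = quat_mul(q, Exp(Kp @ z))
```

Note how the EKF result q_ekf has drifted off the unit sphere, while q_eskf is a unit quaternion by construction.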
Will it make a big difference in practice if I apply the (+) operator only to the "group variables" and keep the regular + for the remaining states (for instance, if I also want to include sensor biases)?
@@urewiofdjsklcmx All variables that can be described as pertaining to R^n can be treated normally with a '+' sign. In fact, the R^n spaces are also Lie groups under addition, and the (+) operator in R^n boils down to the '+' operation. Even more, since R^n under addition is a commutative group, left-(+) and right-(+) are the same and equal to the regular '+'.
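A tiny sketch of this point, with made-up state values: in R^n the exponential map is the identity, so the (+) retraction is plain vector addition, and because addition commutes, the left and right versions coincide.

```python
import numpy as np

# In R^n under addition, Exp is the identity map, so the (+) retraction
# reduces to ordinary vector addition; since addition is commutative,
# left-(+) and right-(+) give the same result.
def oplus_right(x, tau):
    return x + tau       # x (+) tau

def oplus_left(x, tau):
    return tau + x       # tau (+) x

x   = np.array([1.0, 2.0, 3.0])    # e.g. a sensor-bias block of the state
tau = np.array([0.1, -0.2, 0.05])  # its correction, living in the same R^3
```

So for the bias blocks of a composite state you can keep the ordinary '+' without losing anything.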
@@joansola02 Hmm, but this will decouple the error states, right? If I understood the invariant EKF (IEKF) by Barrau and Bonnabel correctly, they stay in the SE_n(3) group to define the error. Apparently this is more accurate, but I guess also quite complicated if you need to consider biases and other states.
@@urewiofdjsklcmx Yes, in this manner the errors are decoupled. The question is how much, and the answer is: not much. But again, "not much" might be too much depending on the application, the objectives, and the particular numeric values of the involved variables. The advantage of decoupling is that you have all the algebra you need for each one of the blocks. If you want a completely coupled state, then sometimes you will not have all the closed forms you need (the exponential map, the adjoint matrix, and the right Jacobian being the three key elements for which you would like closed forms -- all the other forms can be deduced from these three).
Let's say I have two processes that both estimate the same group element (e.g. an element of SO(3)), and for both I have an associated covariance. Then the covariances are defined in different tangent spaces (at the individual estimates), right? So in order to combine them I somehow have to transform them with the Jacobians such that they are mapped to the same tangent space before I can combine them? The hypothetical application that I have in mind is two Kalman filters that estimate the same system. Before I watched the video I would have naively fused the two covariance matrices directly, which is apparently not the correct way.
It is unclear how you "combine" both estimates. If you provide the formulas for such a combination in the case of vector spaces, I can then hopefully guide you through the process of doing the equivalent thing in Lie groups.
@@joansola02 OK, a bit late, my response: in vector spaces I can simply add the information matrices (inverse covariance matrices) of state estimates x_1 and x_2, like so: I_fused = I_1 + I_2. But if x_1 and x_2 belong to a Lie group, my understanding is now that I_1 and I_2 are defined in the tangent spaces at x_1 and x_2. So I guess I cannot just add them up like in a vector space? I probably need to first transform both I_1 and I_2 to some common tangent space, and then add them?
@@urewiofdjsklcmx You are right. If the covariances or the info matrices are defined locally, then you have to combine them in the same reference space. You can use the adjoint operator for this, which transforms covariance matrices from one tangent space to another.

Let X and Y be two elements of the group, and let E be the identity. Let Ad_X be the adjoint at X, and Ad_Y that at Y. Now, Ad_XY = Ad_X.inv * Ad_Y transforms vectors from the tangent space at Y to the tangent space at X. You transform covariances as Q_X = Ad_XY * Q_Y * Ad_XY.transpose. You can easily sort out the equivalent conversion for info matrices: I_X = (Q_X).inv = (Ad_XY * Q_Y * Ad_XY.tr).inv = Ad_XY.inv.tr * I_Y * Ad_XY.inv.

You can also express all matrices at the identity E. To do so, you do e.g. Q_E = Ad_X * Q_X * Ad_X.tr. Once both estimates are expressed in the same tangent space, you can directly add the info matrices: I_fused = I_X + I_Y = Q_X.inv + Q_Y.inv.
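A sketch of this recipe on SO(3), where (for local/right perturbations R * Exp(tau)) the adjoint matrix Ad_R is simply the rotation matrix R itself. The two rotations and the two covariances below are made-up numbers for illustration only.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix of a 3-vector."""
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def Exp(theta):
    """SO(3) exponential map (Rodrigues formula) for a rotation vector."""
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        return np.eye(3)
    K = skew(theta / angle)
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

# Two estimates of the same rotation, each with a covariance expressed
# in its own local (right) tangent space.
R_X = Exp(np.array([0.10, 0.20, -0.10]))
R_Y = Exp(np.array([0.12, 0.18, -0.09]))
Q_X = np.diag([0.01, 0.02, 0.01])
Q_Y = np.diag([0.02, 0.01, 0.02])

# Ad_XY = Ad_X.inv * Ad_Y maps tangent vectors at Y to the tangent at X.
# For SO(3), Ad_R = R, so:
Ad_XY = R_X.T @ R_Y

# Covariance transform: Q_X' = Ad_XY * Q_Y * Ad_XY.tr
Q_Y_at_X = Ad_XY @ Q_Y @ Ad_XY.T

# Now both live in the tangent space at X, so info matrices simply add.
I_fused = np.linalg.inv(Q_X) + np.linalg.inv(Q_Y_at_X)

# Equivalent direct transform for info matrices:
# I_X = Ad_XY.inv.tr * I_Y * Ad_XY.inv
Ad_inv = np.linalg.inv(Ad_XY)
I_Y_at_X = Ad_inv.T @ np.linalg.inv(Q_Y) @ Ad_inv
```

The last two expressions give the same matrix, which checks that transforming the covariance and transforming the info matrix are consistent.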