SLAM-Course - 03 - Bayes Filter (2013/14; Cyrill Stachniss) 

Cyrill Stachniss
55K subscribers
90K views

Published: 4 Oct 2024

Comments: 47
@hemantyadav6501 7 years ago
The best videos a person can find free of cost.
@shaunvonermirzo7581 5 years ago
Thank you Cyrill, you've made our lives as students easier
@sciencetube4574 4 years ago
This lecture really helped me understand the idea behind the Bayes filter. I had read a little about it and heard a few basic concepts, but this really connected all of the loose ends. Thank you!
@jays907 3 years ago
Thank you for keeping these videos up, and even for the new video you posted on the Bayes filter! Very good lectures; they even make me want to go to where you're teaching!
@1volkansezer 5 years ago
Thanks for the great lecture, professor. I would like to make a clarification for 46:04, where you explain the max-range effect in the measurement model. I think the reason for that part is not the obstacle 5 m away from a 4 m range sensor. I think the real reason is: sometimes the sensor may fail to measure an object even if it is right in front of it (say, 2 m away), and returns the max-range reading (4 m), which is a sensor failure. That part models these kinds of errors, I guess, doesn't it?
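The max-range failure mode discussed here is one of the four components of the standard beam-based sensor model: a Gaussian around the expected range, an exponential term for unexpectedly short readings, a point mass at max range for failures, and a uniform noise term. A minimal sketch, assuming illustrative mixing weights and parameter values (all names and numbers below are my own, not from the lecture):

```python
import math

Z_MAX = 4.0  # sensor max range in meters (illustrative)

def beam_model(z, z_expected, sigma=0.2, lam=0.5,
               w_hit=0.7, w_short=0.1, w_max=0.1, w_rand=0.1):
    """Likelihood p(z | x, m) as a 4-component mixture (weights are assumptions)."""
    # Measurement noise: Gaussian around the expected range.
    p_hit = math.exp(-0.5 * ((z - z_expected) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    # Unexpected obstacles: exponential, only for readings shorter than expected.
    p_short = lam * math.exp(-lam * z) if z < z_expected else 0.0
    # Sensor failures: the reading snaps to max range even for close objects.
    p_max = 1.0 if z >= Z_MAX else 0.0
    # Unexplained noise: uniform over the measurement range.
    p_rand = 1.0 / Z_MAX if 0 <= z <= Z_MAX else 0.0
    return w_hit * p_hit + w_short * p_short + w_max * p_max + w_rand * p_rand
```

With this mixture, a max-range reading for a 2 m obstacle is still assigned non-trivial probability via the failure component, which is exactly the effect the comment describes.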
@jinseoilau2543 5 years ago
wonderful courses, thanks!
@GaryPaluk 7 years ago
Hi Cyrill... Thanks for your great videos on SLAM and robotics. At 35:58 - Are you basically just saying this is a 2D quaternion using spherical linear interpolation (slerp)?
@wahabfiles6260 4 years ago
At 5:50 it is mentioned that if we have sensor bias, then the previous measurement can help us get a better estimate. I want to know how, because the previous measurement was also taken from the same sensor, so doesn't it have the same bias?
@giwrgos1349 5 years ago
great lectures
@Paarth2000 7 years ago
Amazing lecture series - a good distillation of the probabilistic concepts. However, a question: why, when predicting x(t) using odometry, do we use a triple transform (initial rotation, translation, final rotation)? Given that the Bayes formulation is inherently recursive, i.e. x(t) => x(t+1) => x(t+2), one would imagine that the second rotation would naturally be the initial part of the next estimate, i.e. of x(t+2). Otherwise it appears (naively) that we might end up double-counting the second rotation.
@victorsheverdin3935 1 year ago
Thank u!
@OttoFazzl 6 years ago
For autonomous cars we probably should not use the rotation-translation-rotation model, because cars cannot rotate in place. Therefore, the circular motion model described at 35:18 should be more appropriate.
@ilanaizelman3993 5 years ago
That's wrong; my autonomous car rotates in place.
@arthurew8523 3 years ago
amazing class
@romagluskin5133 8 years ago
In the velocity-based model, where we assume that the robot receives the command with parameters (v,w) and executes them for a predefined time interval delta-t, shouldn't we also include in the model some uncertainty about the robot's internal clock ? Or can it just be represented as a scaling term for the uncertainty of executing (v,w) ?
@nicolasperez4292 3 years ago
At 5:00 I didn't quite get how you applied Bayes' rule. How are you able to swap out only z_t?
@SaiManojPrakhya 10 years ago
I have a small doubt with respect to the Markov assumption used to reduce the first complex term to p(z_t | x_t). As you said, having previous observations and control commands helps to get better estimates, so why is this assumption made?
@CyrillStachniss 10 years ago
Otherwise we would not end up with such an easy and effective algorithm - and the approximation error can be assumed to be small, especially when eliminating systematic errors beforehand through calibration.
@WingingItOnWheels 9 years ago
Hi all, I have a silly question. Around 27 minutes in, when we are talking about the odometry model, why do we measure the translation as the Euclidean distance between the two poses? While that does make sense, I thought the odometry model meant measuring the rotation of the robot wheels, so I was expecting some formula that included RPM and the wheel radius. I am sure I am missing something silly, however. Great lecture, btw. Looking forward to watching the rest :)
@CyrillStachniss 9 years ago
+Kevin Farrell Most robot control systems provide the pose in a robot coordinate frame, so for the prediction step, you need to compute the rigid-body transformation between two poses and use a noise model for it. The Rotate-Translate-Rotate model is just one possible choice; you can take others as well.
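The Rotate-Translate-Rotate decomposition of that rigid-body transformation can be sketched in a few lines; the function and variable names here are my own, not from the lecture:

```python
import math

def odometry_params(pose_prev, pose_curr):
    """Split the motion between two poses (x, y, theta) into
    (rot1, trans, rot2), as in the odometry motion model."""
    x0, y0, th0 = pose_prev
    x1, y1, th1 = pose_curr
    rot1 = math.atan2(y1 - y0, x1 - x0) - th0   # turn to face the target point
    trans = math.hypot(x1 - x0, y1 - y0)        # Euclidean distance between poses
    rot2 = th1 - th0 - rot1                     # remaining turn to the final heading
    return rot1, trans, rot2
```

This also shows why the translation is just the Euclidean distance: the wheel-level details (RPM, wheel radius) are already integrated away inside the platform's internal odometry frame, and the model only sees the resulting pose difference.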
@deepduke4188 3 years ago
Could anyone tell me the difference between the beam-endpoint model and the ray-casting model?
@NitinDhiman 9 years ago
I have a doubt regarding the Bayes expansion of bel(x_t) on slide 4. As per my derivation, the denominator should have the term p(z_t | z_{1:t-1}, u_{1:t}). I am not able to understand how this term is subsumed in the constant, as z_t is dependent on u_t.
@CyrillStachniss 9 years ago
The whole denominator sits in the normalization constant.
@NitinDhiman 9 years ago
Thanks for the reply. I am not able to comprehend it. p(z_t | z_{1:t-1}, u_{1:t}) is not constant, as it is dependent on u_t.
@Superslimjimmy 9 years ago
Nitin Dhiman The original expression is bel(x_t), which is a function only of x_t. It states that bel(x_t) = p(x_t | z_{1:t}, u_{1:t}), which indicates that z_t and u_t are given (i.e., known), so p(z_t | z_{1:t-1}, u_{1:t}) is a constant.
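In practice the denominator never has to be computed explicitly: a discrete implementation multiplies the prediction by p(z_t | x_t) and then renormalizes, which absorbs p(z_t | z_{1:t-1}, u_{1:t}) into the constant eta. A minimal sketch of one Bayes filter step over a discrete state space (the model callables and names are placeholders of my own):

```python
def bayes_filter_step(belief, z, u, motion_model, sensor_model, states):
    """One prediction + correction step of the discrete Bayes filter.
    belief: dict state -> probability; motion_model(x, xp, u) = p(x | xp, u);
    sensor_model(z, x) = p(z | x)."""
    # Prediction: bel_bar(x) = sum over xp of p(x | xp, u) * bel(xp)
    predicted = {x: sum(motion_model(x, xp, u) * belief[xp] for xp in states)
                 for x in states}
    # Correction: bel(x) = eta * p(z | x) * bel_bar(x)
    unnormalized = {x: sensor_model(z, x) * predicted[x] for x in states}
    eta = 1.0 / sum(unnormalized.values())  # absorbs the denominator p(z_t | ...)
    return {x: eta * p for x, p in unnormalized.items()}
```

For example, with two states and a static world (identity motion model), a sensor that reports the true state with probability 0.8 shifts a uniform belief to 0.8/0.2 after one observation.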
@devyanivarshney1100 3 years ago
I am a bit confused by p(x_t | x_{t-1}, u_t). In a motion model for prediction, at time t-1 we need u_{t-1} and x_{t-1} to predict x_t; u_t is a future move at time t. I guess it should be p(x_t | x_{t-1}, u_{t-1}). Kindly let me know where I am going wrong.
@CyrillStachniss 3 years ago
It depends how you define u_t. I used the notation from the probabilistic robotics book where u_t leads from x_{t-1} to x_t. I guess you mean the right thing but are probably used to the other notation.
@vicalomen 6 years ago
What happens in the velocity model when w = 0? What can you do in that case?
@sagy90 3 years ago
Where can we see the tutorials and exercises for this course?
@rajatalak 7 years ago
At 13:42, shouldn't we assume that the control action u_t depends on the current state x_t? Control can't be oblivious to the system state, can it? If so, then knowing u_t we may be able to infer something about x_{t-1}, and the two will not be independent. Is this true? And if it is, then we can't use independence to ignore u_t in the last equation.
@donaldslowik5009 4 years ago
Yes, that's the approximation/simplification which he says may or may not be true, at ~13:30. So yes, u_t might reasonably depend on x_{t-1}, so knowledge of u_t would inform us about p(x_{t-1}). But since u_t is informed by x_{t-1}, which only depends on z_{1:t-1}, u_{1:t-1}, it can't provide any more info about x_{t-1}.
@qutibamokadam879 6 years ago
Hi Dr., hi guys. I have a question: how can I add this additional noise term to the final orientation in the velocity model? Could you help me?
@ahmadalghooneh2105 5 years ago
thank you
@mohammadjavadalipourahmadc9424 3 years ago
Thanks a lot for the lecture. How can we find the lecture slides for the whole course?
@CyrillStachniss 1 year ago
Send me an email.
@Yanni89- 9 years ago
When you talk about the odometry model, does the robot have to make these motions in reality, or is that the effective path it will have taken in the end? If we have, for instance, a car that drives a lot of curves, but the effective path can be summarized as shown in your odometry-model slides, meaning two rotations and a translation, could the movement still be simplified like that, or does the full path traveled, with all the different curves, have to be taken into account? In that case the model would be very complicated, right? This was not entirely clear to me, so I thank you for any help :)
@CyrillStachniss 9 years ago
Yannick M The model describes the intended motion of the robot/car between two time steps. From t to t+1, we consider a simple rigid-body transformation, basically from the start configuration to the end configuration at t+1. But if you chain all commands starting from t = 1 ... T, you get a (discretized) trajectory.
@Yanni89- 9 years ago
Cyrill Stachniss Thank you for the reply! I got it now :)
@GCOMRacquet 10 years ago
Could someone explain to me what kind of information the odometry measurements report to us (talking about the measured ones)? I mean, does the robot have an internal coordinate system, and what relation does it have to the global coordinates? Or do we simply measure every time from point x_{t-1} = 0 to x_t = 2.5 meters (for example) and use this information to get the rotations and the translation? Just what kind of information are x_{t-1} and x_t? Since we need, in the example, x, y, and orientation to calculate the 3 steps, the robot must have some kind of coordinate system, or am I totally wrong ^^
@CyrillStachniss 10 years ago
It depends on the platform. Most systems (e.g., a Pioneer) have an internal coordinate system and integrate the motion commands within that local frame (which drifts). The pose in this frame is reported to the outside world. Thus, in most cases, one uses the internal coordinate frame to compute the relative motion, which is used as the odometry in the methods presented here.
@TeoZarkopafilis 5 years ago
Why is omega in the denominator at around 32:56?
@hairynutsack9704 5 years ago
v = w x r, i.e. the cross product of omega and the distance from the rotation axis, so r = v/w. Initially (when the orientation is theta), the position term is (v/w) sin(theta), and after delta-t it is (v/w) sin(theta + w*delta-t). The final robot position is (x', y', theta') = (x, y, theta) + displacement over delta-t = (x, y, theta) + (r_final - r_initial).
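These circular-arc equations, and the w = 0 question raised earlier in the thread, can be combined into one small function: when |w| is near zero, the arc radius v/w diverges and the motion degenerates into a straight line, so an implementation falls back to pure translation. A sketch, assuming a noise-free model and a threshold value of my own choosing:

```python
import math

def velocity_motion(x, y, theta, v, w, dt, eps=1e-6):
    """Noise-free velocity motion model: circular arc of radius v/w,
    or a straight line in the limit w -> 0."""
    if abs(w) < eps:
        # Limit case w = 0: no rotation, translate along the current heading.
        return x + v * dt * math.cos(theta), y + v * dt * math.sin(theta), theta
    r = v / w  # signed radius of the circular arc (omega in the denominator)
    x_new = x - r * math.sin(theta) + r * math.sin(theta + w * dt)
    y_new = y + r * math.cos(theta) - r * math.cos(theta + w * dt)
    return x_new, y_new, theta + w * dt
```

For example, v = pi/2 and w = pi/2 over dt = 1 drives a quarter circle of radius 1, while w = 0 simply moves the robot v*dt meters straight ahead.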
@niravc10 7 years ago
Will you make your assignments public?
@CyrillStachniss 7 years ago
Yes, the assignments are public; see the course website for WS 13/14, taught by myself at Freiburg University. The solutions, however, are not public.
@UrbanPretzle 4 years ago
The solutions are not public, but here are my solutions if you want to take a look. Let me know if you spot anything wrong with them: github.com/conorhennessy/SLAM-Course-Solutions
@sau002 4 years ago
Quite unlike other stellar videos that you have published. This is far too abstract! I have no clue what problem we are attempting to solve. You lost me.