This is the best overview of SLAM that I've seen yet. Excellent presentation and just enough detail to get people started. I'd love to see a deep dive on getting ROS setup on a small rover like this.
Really good video :) It gives enough information to intrigue me into looking into the individual subjects further, but stays broad enough not to inflate the length and complexity of the video. A real masterpiece of a video.
Sometime in the future, would it be able to detect changes in the environment, where an obstruction has been added or a wall has been moved, so that it can update its map?
Currently, the robot is using the gmapping ROS package to map the environment which only works for static environments. However, there are other SLAM implementations in ROS that allow for dynamic maps such as slam_toolbox. gmapping: wiki.ros.org/gmapping slam_toolbox: github.com/SteveMacenski/slam_toolbox#lifelong-mapping
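If it helps to see the core idea, here's a rough Python sketch of why a map can "forget" a moved wall. This is not the actual gmapping or slam_toolbox code, just an illustration of a standard log-odds occupancy update, with made-up probabilities:

```python
import math

# Log-odds occupancy update: each new observation nudges a cell's
# belief toward "occupied" or "free", so a removed wall is eventually
# forgotten once enough scans see that cell as empty again.
L_OCC = math.log(0.7 / 0.3)   # evidence added when a scan hits the cell
L_FREE = math.log(0.3 / 0.7)  # evidence added when a scan passes through

def update(log_odds, hit):
    """Fold one observation of the cell into its log-odds belief."""
    return log_odds + (L_OCC if hit else L_FREE)

def probability(log_odds):
    """Convert log-odds back to an occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# A wall occupies the cell for 5 scans, then is removed for 10 scans.
belief = 0.0  # log-odds 0 == probability 0.5 (unknown)
for _ in range(5):
    belief = update(belief, hit=True)
for _ in range(10):
    belief = update(belief, hit=False)

print(probability(belief) < 0.5)  # the cell is now believed free again
```

A purely static mapper effectively stops updating cells once it is confident, which is why gmapping can't handle a moved wall the way lifelong mapping can.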
Hi Mr. Kai Nakamura, could you please tell me where you were able to get the walls for the arena? Are they custom, or did you buy them from an online store? I have a project that needs such walls. Thank you in advance.
I just contacted my professor and they told me the walls are custom made from thin rectangular plywood sheets and 3D printed parts. The curved walls are made by laser cutting a zig-zag pattern into the plywood, allowing it to bend. If you're interested in creating your own, I could probably contact the lab staff and get the files needed to reproduce them. Hope this helps!
Wow, really outstanding results! How many people were on this project, and how long did it take you to accomplish it? Are you planning to release the code for this project? Again, thank you for sharing!
Three people over the course of a seven week term. Unfortunately, I cannot share the code in its entirety due to academic policy (students next year would just be able to copy it). But I could share code snippets or point toward resources with some of the algorithms I used if you’re interested. Thank you so much for your interest in this project! :)
@@kaihnakamura Yeah, it would be awesome! If you could share info about the hardest points, not as code, but maybe as theory, methodology or something similar, it could help others, and I think it would be interesting material for your blog!
It's hard to say for sure whether LIDAR would work well in an outdoor environment for your needs. One thing to keep in mind about the robot I used is that it only makes LIDAR scans parallel to the ground. This worked fine for my needs because the only obstacles were the walls, but if the environment were full of obstacles shorter than the LIDAR sensor, the robot would be unable to detect them. There are 3D LIDAR sensors that let you create 3D point clouds, but these can be quite expensive. Some SLAM robots use a 2D LIDAR scanner in combination with a stereo camera to achieve a similar effect. The robot in the video is a TurtleBot3, but I know the TurtleBot4 uses the LIDAR-and-camera approach. Hope this helps! turtlebot.github.io/turtlebot4-user-manual/overview/features.html
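To illustrate the planar-scan limitation, here's a small Python sketch that converts a 2D scan into the points the robot can actually see. The field names mirror the ROS sensor_msgs/LaserScan convention (angle_min, angle_increment, ranges), and the example values are made up:

```python
import math

def scan_to_points(angle_min, angle_increment, ranges):
    """Convert a planar LIDAR scan into 2D points in the robot frame.

    Because the scan plane is parallel to the ground, every return lies
    at the sensor's height; anything shorter than the sensor simply
    never produces a range reading and is invisible to the robot.
    """
    points = []
    for i, r in enumerate(ranges):
        if math.isinf(r) or math.isnan(r):
            continue  # no return in this beam's direction
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Three beams, 90 degrees apart: a wall 2 m ahead, nothing to the
# left, and a wall 1 m directly behind the robot.
pts = scan_to_points(0.0, math.pi / 2, [2.0, float("inf"), 1.0])
print(pts)  # the infinite (no-return) beam is dropped; two points remain
```

A short obstacle sitting between the robot and the far wall would show up as a closer range in one of these beams only if it reaches the scan plane, which is exactly why 3D LIDAR or a depth camera is needed for cluttered outdoor terrain.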
@@kaihnakamura I really appreciate your thoughts and input. I was considering using GPS, but realized I would need RTK to make it accurate enough and that’s a bit more expensive than I was hoping for.
Thank you so much bro! Even the most detailed tutorials didn't have the part that uses the data in Python, but you have it on your website. One question though: which SLAM program did you use? I am planning to do LIDAR mapping with a Raspberry Pi 4 and Hector SLAM, but most online sources say it is too slow. Any advice?
Thank you! For this project I used the gmapping ROS package for SLAM, but I've also heard that slam_toolbox is a good choice as well. gmapping: wiki.ros.org/gmapping slam_toolbox: wiki.ros.org/slam_toolbox
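For reference, a minimal gmapping launch file looks roughly like this. The parameter names come from the gmapping wiki; the topic remapping, frame names, and values are assumptions for a TurtleBot3-style setup, so adjust them for your robot:

```xml
<!-- Minimal slam_gmapping launch sketch (values are illustrative). -->
<launch>
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
    <remap from="scan" to="/scan"/>
    <param name="base_frame" value="base_footprint"/>
    <param name="odom_frame" value="odom"/>
    <param name="map_update_interval" value="2.0"/>
    <param name="maxUrange" value="3.5"/>
    <param name="particles" value="30"/>
  </node>
</launch>
```

On a Raspberry Pi 4, lowering the particle count and raising map_update_interval are the usual knobs for reducing CPU load, at the cost of some map quality.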
Unfortunately I can't share the source code in its entirety because of academic policy (this was part of a school project, so next year's students would just be able to copy the code). But I just added some code snippets to my website for some of the important bits, along with links to additional resources: kainakamura.com/project/rbe3002