Hey Rangel, I wonder if you added gazebo.launch manually. I'm asking because I couldn't find that launch file in the realsense2_description package. Thank you in advance :)
Hello Rangel, thank you so much for the video. I'm currently facing an issue where the camera appears in RViz but doesn't seem to spawn in Gazebo. Looking at the Worlds -> Models in my Gazebo UI I only see ground_plane... Any advice on how to resolve this? I am using those exact files with ROS Noetic and Gazebo 11.
You have to first check if the URDF parses. If it parses with xacro, then try to spawn it manually. Finally, try to spawn it with the launch file. Sometimes it's a problem with the path; sometimes it's a problem with the physics, if you modified the file.
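A rough sketch of that debugging sequence in Python (the `xacro` tool and gazebo_ros's `spawn_model` are assumed to be installed from ROS; the file and model names below are placeholders, not from the video):

```python
import shutil
import subprocess

def xacro_parses(xacro_file):
    """Return True if `xacro` expands the file cleanly, False if it errors,
    or None when the xacro tool is not on PATH."""
    if shutil.which("xacro") is None:
        return None
    result = subprocess.run(
        ["xacro", xacro_file],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def build_spawn_cmd(model_name, urdf_file):
    """Build the command line for spawning a URDF manually with gazebo_ros."""
    return ["rosrun", "gazebo_ros", "spawn_model",
            "-file", urdf_file, "-urdf", "-model", model_name]

if __name__ == "__main__":
    # Placeholder paths -- replace with your own camera xacro/URDF.
    print(xacro_parses("d435.urdf.xacro"))
    print(" ".join(build_spawn_cmd("d435", "d435.urdf")))
```

If both steps succeed but the launch file still fails, the remaining suspects are usually package paths (check with `rospack find`) or edits to the physics/inertial tags.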
@@issaiass Thank you for your quick response! I solved the problem, I was working in a ROS workspace with a lot of packages and I had misconfigured paths. I created a clean workspace and everything worked fine :) I attached this camera in a pioneer3at with p2os_urdf pkg with the help of this video of yours: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-DWLXYEmcUsA.html
No, because I think the default plugin does not have it. Anyway, there are three ways. One is to go directly into the plugin, add a Gaussian noise function, and mix it with the image. Another is to do it in the SDF file, as explained in the Gazebo tutorial gazebosim.org/tutorials?tut=sensor_noise. The last one is to use the image topic, add Gaussian noise to the image with OpenCV, and republish to a new topic; that way you will not mess with the plugin. Probably the last one is the easiest.
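A minimal sketch of the last approach using plain NumPy (in a real node you would convert the incoming sensor_msgs/Image with cv_bridge first; the topic names in the comment are illustrative, not from the plugin):

```python
import numpy as np

def add_gaussian_noise(image, sigma=10.0, seed=None):
    """Add zero-mean Gaussian noise to an 8-bit image array,
    clipping the result back into the valid [0, 255] range."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# In a ROS node you would subscribe to the camera image, convert it with
# cv_bridge, call add_gaussian_noise, and republish on a new topic, e.g.:
#   sub: /camera/color/image_raw  ->  pub: /camera/color/image_noisy
```

This keeps the plugin untouched, and any node that wants the noisy image just subscribes to the new topic.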
Thanks Rangel, this is excellent and a very clear explanation. Just a quick question though: how would you recommend going about using some of the Intel RealSense libraries in cooperation with this, e.g. get_distance or deproject_pixel_to_point?
If you want a project example related to perception and depth cameras, common and interesting tasks could be classification (analyze and filter point clouds to see which object it is), visual servoing (knowing the depth, you can approach and grab the object), and 3D reconstruction (make a 3D map of an unknown environment).
@@issaiass thanks for your response Rangel, I am currently trying to port my visual servoing code over to simulation. It is reliant on using the Intel python libraries, so I'm just wondering if they are compatible with the simulated camera?
@@issaiass Can you please explain further? I am trying to implement this sensor on my drone but I can't find documentation on how to do it, or any tutorial...
Hi Rangel, great video! Thanks to your instructions I was able to add this camera to my robot really easily! I have one question regarding the plugin: is there a reason that the point_cloud RGB values are all gray? The D435 should support RGB depth imagery, so I am not sure if I missed a setting or if this is a limitation of the plugin.
@@issaiass I am not able to find a setting in RViz that displays the object colors. My settings are PointCloud2, depth/color/points topic, color transformer RGB8. Could you please verify in RViz as soon as you have the chance? I don't want to start rewriting the plugin before I am sure that I could not have avoided it. Thanks!
Can you please tell me which versions of ROS and Gazebo are being used? I need an Intel RealSense camera plugin for ROS Humble Hawksbill & Gazebo Ignition Fortress.
Thank you! I have a question about the plugin in Gazebo. Is it normal that the model of the camera in Gazebo is all grey? The camera rendered correctly in RViz, but in Gazebo it was grey and didn't have any color.
They do not have a mesh file, just an RGB value for Gazebo. Check the URDF materials of the RealSense and that will become clearer. If you want to link the mesh file, then you will have to put it in the uri.
Thank you for the video! I have a question. Why is the pointcloud2 on Rviz appearing as a rainbow-ish colour and why is the camera of the simulated realsense grey/black in colour (in Rviz too)? I changed the visual of the ball in front of the camera to bright orange colour but I get the same rainbow in the pointcloud2 and dark silhouette in camera in Rviz. I need to be able to detect the orange using the RGB values to proceed to detect the orange object. Any idea on why this is happening? Thank you!
Okay, so I fixed the issue of the camera showing in grayscale. When I change the topic to camera/color/image_raw (instead of camera/depth/image_raw), I can see the orange. However, when I try to observe the point cloud in RGB8, it is in grayscale all of a sudden. When I set it to AxisColour, it becomes that rainbow colour I mentioned previously. I do not want this grayscale point cloud; I want the RGB as well, to show the orange colour in the depth point cloud. How can I achieve this? Thanks!
I realised the reason I am getting grayscale: the depth is not aligned to the colour. With the actual RealSense camera, I can set align_depth to true in rs_camera.launch (inside the realsense2_camera package). How do I do that here in Gazebo? Thanks!
@@naruto7350 RViz is just a tool for visualizing robot interactions. If you get the point cloud (PCL), you get points as features of objects; if you get the raw image, you get pixels in, for example, RGB, and then you could detect the colour by an HSV range.
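As a rough sketch of detecting a colour by HSV range (pure Python with colorsys for illustration; in practice you would use OpenCV's cvtColor and inRange on the image from the topic, and the thresholds below are placeholder values for orange, not calibrated ones):

```python
import colorsys
import numpy as np

def hsv_mask(rgb_image, h_lo, h_hi, s_min=0.4, v_min=0.2):
    """Boolean mask of pixels whose hue (normalized to [0, 1]) falls in
    [h_lo, h_hi] with enough saturation and value to count as colored."""
    norm = rgb_image.astype(np.float64) / 255.0
    mask = np.zeros(rgb_image.shape[:2], dtype=bool)
    for i in range(rgb_image.shape[0]):
        for j in range(rgb_image.shape[1]):
            h, s, v = colorsys.rgb_to_hsv(*norm[i, j])
            mask[i, j] = h_lo <= h <= h_hi and s >= s_min and v >= v_min
    return mask

# Example: a pure-orange pixel passes an "orange" hue window, a blue one fails.
img = np.array([[[255, 128, 0], [0, 0, 255]]], dtype=np.uint8)
print(hsv_mask(img, h_lo=0.0, h_hi=0.15))
```

The per-pixel Python loop is only for clarity; OpenCV's vectorized inRange does the same job at camera frame rates.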
Hi Rangel, I've cloned both your repos, unfortunately when I run it, the camera doesn't capture anything according to Rviz. Under camera it says "No Image Received", and "No CameraInfo received on [/camera/depth/camera_info]. Topic may not exist." Any reason why this might be happening?
@@issaiass Thanks for the reply, I checked rostopic list, and can see all the topics are there. I checked rostopic info for the specific topics but there are no publishers, only subscribers, which may be the issue. How do I figure out specifically what's wrong and how to fix it? Is it the fact that I have a different ROS version?
@@muhammadshaheerbalouch6244 Is there any problem, or is something not loading? But I recommend you start with the basics; I have a course based on very basic ROS topics in the list here.
Hi Rangel! I am currently working on a thesis with a UR5 robot and TIAGo from PAL Robotics. How can the Intel RealSense camera publish the position of an object so that, for example, the UR5 can subscribe and perform a pick-and-place task?
I did not do that in simulation because of a physics error I have, but once the octomap is generated you could use the MoveIt grasping example; they explain how to generate different grasping patterns and select the optimal one. How to grasp the object will differ depending on the object, but again, the MoveIt grasping examples are good enough.
@@issaiass Ah I see, thanks! However, when I implement this in my UR5 Gazebo file and make the changes in the xacro as well, only the camera spawns. Have you ever encountered that problem before? I am wondering if I might have to make a new launch file where I include them both?
@@RaviKumar-ub2ng Use VS Code first to inspect the URDF; ROS also has some URDF tools to check the syntax. Actually, if it's an RViz error, I think it could be a matter of changing the reference frame; if not, then a link or some controller is not loaded. I have a video that could give you some hints if you check my videos tab, something like "problems ..." I do not remember the title.