Great work. Is this package only for single-camera, single-lidar calibration? What would we do if we need to calibrate multiple cameras and multiple lidars?
Unfortunately, at the moment, if you need to calibrate multiple cameras and lidars you'll have to individually calibrate each lidar to all cameras. Alternatively, you could try inferring the transforms, e.g. if you did lidar1-camera1 and lidar1-camera2, you could infer the camera1-camera2 calibration, though this is less accurate.
@@Ts4iD Okay, thanks for the suggestions. One more question: did you determine the intrinsic parameters for the camera and lidar individually yourself, or were they provided by the manufacturers?
Great work! I'm doing a similar project and I already have 3D point cloud data as PCD files, plus sample images. Could I use the data I have with your package?
You can; however, this package currently only supports sensor data in the form of ROS messages (i.e. using bags or live sensor data). You can try converting your data into the ROS message formats, or alternatively, modify the code.
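If you do write a small converter, the main thing to get right is the binary layout that sensor_msgs/PointCloud2 carries. Below is a minimal sketch (plain Python, no ROS required) of packing XYZ points the way an unorganized cloud with three contiguous little-endian float32 fields is laid out; the function name `pack_xyz_cloud` is just illustrative, and real clouds may carry extra fields (e.g. intensity) with different offsets.

```python
import struct

def pack_xyz_cloud(points):
    """Pack (x, y, z) tuples into the flat little-endian byte buffer
    that a PointCloud2 with three float32 fields (x, y, z) carries."""
    point_step = 12  # 3 fields * 4 bytes each
    data = b"".join(struct.pack("<fff", x, y, z) for x, y, z in points)
    # For an unorganized cloud: height = 1, width = len(points),
    # row_step = point_step * width, and this buffer becomes msg.data.
    return data, point_step

buf, step = pack_xyz_cloud([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])
```

From here, loading a PCD file (e.g. with a PCL binding) and publishing the packed buffer as a PointCloud2, alongside your images as sensor_msgs/Image, should let the tool consume your recorded data.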
Nice work. Do you support other camera distortion models, such as pinhole, etc.? I don't know why I can't open an issue on GitLab, so I'm leaving my question here.
We do support other ROS distortion models like rational_polynomial and plumb_bob; however, they haven't been extensively tested. Unfortunately, the university GitLab doesn't let non-university users open issues. If you have any other questions, you can contact me at d.tsai@acfr.usyd.edu.au
@@darrentsai2655 I read the paper. I noticed that the LiDAR uses a left-handed coordinate frame in the top-right of Fig. 7. Is that right? The other sensor uses a right-handed frame. When I set up the same configuration (VLP-16 and RealSense) as you, the calculated values (-1.57, 0, -1.57) have negative signs, and I think they should be positive. Thank you.
@@顏隆-v6k Hi, yes you are correct, thank you for pointing that out. I've corrected it for the final submission of the paper. Also, we've ported the code base over to GitHub at github.com/acfr/cam_lidar_calibration, where you can now create an issue.
For a stereo camera, one way would be to stitch both images into a single image, then use the tool as you would with a single camera-lidar pair. Another way is to calibrate one camera-lidar pair, then use the known transform from that camera to the 2nd camera to derive the calibration of the 2nd camera-lidar pair; this may be less accurate, however. The last way is to run the tool twice, once for each camera.
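The second option, deriving the 2nd camera's extrinsic by chaining transforms, is just matrix composition of 4x4 homogeneous transforms. A minimal sketch with numpy, using made-up example translations (not values from the paper):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical values: lidar -> camera1, found by running the tool once.
T_cam1_lidar = make_transform(np.eye(3), [0.10, 0.0, -0.05])
# camera1 -> camera2, known from the stereo rig's extrinsics (baseline along x).
T_cam2_cam1 = make_transform(np.eye(3), [-0.12, 0.0, 0.0])

# Inferred lidar -> camera2 extrinsic, by chaining the two transforms.
T_cam2_lidar = T_cam2_cam1 @ T_cam1_lidar
```

Any error in the first calibration propagates through the chain, which is why this route is less accurate than calibrating each camera-lidar pair directly.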
Hello, sir. I am currently following the tutorial with a bag file I recorded myself, but there is a problem. When I press the capture-sample button after roslaunch, the terminal displays: "[WARN] [1661763950.580652412]: Invalid argument passed to canTransform argument source_frame in tf2 frame_ids cannot be empty". Do you know the solution?
Hey, fantastic job! I'm working on something similar for my undergrad capstone and would love to talk to you about your paper. I sent you a connection request on LinkedIn. Hope to talk to you soon.