Combining the camera and the Jetson Nano certainly opens a lot of possibilities for robotic navigation and obstacle avoidance. A GPS receiver would be a good addition. The demo only shows a fraction of the potential that’s available to someone with an imagination and the money to invest. This could easily be turned into a serious autonomous lawn mower or other independent robotic device. I hope you’re planning on taking this in a specific direction at some point. It’s about time for a project. Good job.
Thanks for showing! I always love to see your videos about the Nano. I also make SBC review videos on YouTube. I had to choose between the Odroid N2 and the NVIDIA Jetson, and I went for the N2. I needed it to do a lot of Blender renders, and it does that great with its powerful CPU. But now I've gotten a 4K USB3 camera to review, and the N2 can only handle 720p software encoding with it. I should get a board with USB3 and hardware encoding (I hope). Maybe the Odroid XU4; I'll have to get it out of the dust. Again, thank you.
Right-clicking within the terminal window should paste what is held on the clipboard; no need to select Paste. Similarly, whatever is highlighted within a terminal window is sent to the clipboard.
Thanks for the tips! I think you need to click the middle button in the Terminal to paste the clipboard contents, as the right button should always bring up the context menu. Of course, we always use (or at least try to use) the menus in the videos so that the tutorials are easier to follow. Thanks for watching!
Thanks for the video once again! I loved the PRO TIP about the film covering! I am planning to buy one of these sets (Jetson Nano + RealSense T265) for a prototype application, just for skeletal tracking and applying a projection matrix at the highest frame rate I can get. I wonder if this could reach 60 fps, and whether I could use more than one T265 plugged into the same board...
Thank you for your tutorial. I just want to know: if I use RealSense cameras instead of normal MIPI CSI cameras for object tracking, what advantages will I have?
Great video! In the B&W left and right images, do you see the IR pattern (maybe test in low-light conditions)? Is this pattern always the same, or does it change over time?
It persists across reboots, so there is no need to add it to a startup file. However, once you are done rebuilding the kernel, you may not want the swap file anymore. You can use swapoff to turn off the swap file and remove its entry from the /etc/fstab file. Thanks for watching!
The install for the D435 on the Nano is the same as for the T265, correct? So if I plug both cameras in, they should both work... but when I start ./realsense-viewer I can only add the D435, not the T265. Any ideas?
Hello Jim, can you help me figure out how to use both the Intel RealSense T265 and the D435 in RViz (ROS)? I'm really in trouble right now. Thanks in advance!
I want to get the X, Y, Z coordinates of the camera (displayed in the realsense-viewer window) into my Python code as variables. It seems like pyrealsense2 can be used to get that data, but I can't find which function does that exactly. Do you have any idea?
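The coordinates asked about above come from the T265's 6-DoF pose stream, which pyrealsense2 exposes. A minimal sketch, assuming a T265 is plugged in; the helper name `pose_to_xyz` is ours, not part of the librealsense API:

```python
def pose_to_xyz(pose_data):
    """Extract the (x, y, z) translation from a librealsense pose-data object."""
    t = pose_data.translation
    return (t.x, t.y, t.z)

def read_pose_once():
    """Grab a single pose sample from a connected T265 (requires hardware)."""
    import pyrealsense2 as rs  # imported here so the helper above works without hardware
    pipe = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.pose)  # the T265 publishes a pose stream
    pipe.start(cfg)
    try:
        frames = pipe.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            return pose_to_xyz(pose.get_pose_data())
        return None
    finally:
        pipe.stop()
```

With the camera connected, `x, y, z = read_pose_once()` should give the translation in meters relative to where the camera started tracking.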
RTABMap is designed for use with depth cameras. The images from the T265 need to be rectified before you can use them with that particular package. Thanks for watching!
What is the accuracy of the T265? Can you measure the distance and length of an object? I need the error to be within about 1 cm, and the carrier may be unstable.
Installed librealsense on my Nano last night and was able to confirm that the T265 is working with realsense-viewer. The problem I am having is that the T265 is not detected by realsense-viewer or any other realsense software after a restart where the T265 is left plugged in. If I unplug it and then plug it back in everything works again. Anyone else have this issue? Is it possibly a firmware issue on the T265?
Nice tutorial! Do you think you could test GPU rendering for Blender on it? I'm looking into buying one or two to add to a small render farm, and the Nano might be very useful.
Hmm... the 3D trajectory at the end looks terribly imprecise. You picked the camera up from a specific location at first and returned it to almost the same point, yet there is a differential distance that looks to be around 30 cm in the side view of the 3D trajectory diagram. Can you show us that again, or tell us what the tolerance was, please? It makes a difference to me. Thank you.
@@JetsonHacks I admire the support you give, and the videos you do are very helpful to me. Thank you! I wonder if it is possible to tweak/improve the localization of the RealSense with the functions or settings that come with it?
There are a large number of parameters to control; how you would get better accuracy depends on your application. For example, if you are using a wheeled robot, you can send odometry information to the camera and it will incorporate that information to get better positioning. To be clear, this is a "how to use the T265 with a Jetson Nano" how-to, not a qualification of the T265 for a particular use.
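As a concrete sketch of the wheel-odometry idea above: librealsense exposes a wheel-odometer interface on the T265 for exactly this. A hedged Python sketch, assuming pyrealsense2 is installed, a pipeline is already running, and a wheel-odometry calibration has been loaded on the device; verify the method names against your librealsense version, as this API has changed between releases:

```python
def send_wheel_velocity(pipeline_profile, vx, vy, vz, sensor_id=0, frame_num=0):
    """Forward one wheel-odometry velocity sample (m/s, in the frame
    defined by the loaded calibration) to a running T265 pipeline.
    Returns True if a wheel-odometer sensor accepted the sample."""
    dev = pipeline_profile.get_device()
    for sensor in dev.query_sensors():
        if sensor.is_wheel_odometer():
            import pyrealsense2 as rs  # deferred so this runs without the library until needed
            wo = sensor.as_wheel_odometer()
            v = rs.vector()
            v.x, v.y, v.z = vx, vy, vz
            wo.send_wheel_odometry(sensor_id, frame_num, v)
            return True
    return False
```

Typical use would be calling this from the robot's wheel-encoder loop, so the camera can fuse the velocity into its pose estimate.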
Hey! I wanted to ask: are there any alternatives for fast-paced applications, for example racing drones? And even if the tracking and SLAM are up to the job, would the Nano be able to handle it?
It's hard to tell; there are a lot of variables. The T265 is pretty new, and I certainly don't have enough experience with it yet to give a meaningful answer. Thanks for watching!
@@ozy5332 It's a very large subject. For the drone stuff, Pixhawk and DIY Drones are a good place to start. For SLAM, that's a really big area. It's beyond what I can describe here, but the Robot Operating System (ROS) is as good a starting point as any.
@@JetsonHacks By the way, for SLAM with Python I was watching George Hotz; he has a video where he codes it for hours. I understood a few things but still have a long way to go... thanks for the other references though :)
@@JetsonHacks Hey man, would you be able to help me out? I really want to make an AI to trade forex for me. I can't find any information and I'm not getting any closer to my goals, so I thought you would be the best person to ask for direction. Thanks!!
JetsonHacks thanks for the reply. I ended up finding some information on quant trading and making charts with R programming. A buddy said the neural net he was running for years was too stressful; the less complicated, the better.
The line "Build CMAKE: false" at 4:32 does not show up for me, and I later get the error message "CMAKE_CUDA_COMPILER not set, after EnableLanguage". How can I fix this? Thanks!
I do not know which L4T version you are running; newer versions of the repository do not rebuild CMake. I also do not know if you have CUDA installed on your Jetson. Typically, you would look for the CUDA compiler and set the path appropriately.
@@JetsonHacks Thanks for your quick reply. I am actually using the depth and tracking cameras together with Python on an RPi4. I'll keep looking into how to fix the issue.
@@pratikparajuli5991 The scripts are for an NVIDIA Jetson Nano, which has a GPU. For the RPi, you will need to modify the script for both that environment and that architecture. Good luck with your project.
@@JetsonHacks Right, I probably should have asked about the D435. I am looking for a navigation/obstacle camera for a remote lawnmower project, and am deciding between the T265/D435 and the ZED. The main board will be a Jetson Nano, of course :) Do you have an opinion on the comparison? You have videos on all three cameras.
@@kalsprite No opinion. Each has its strengths and weaknesses. If you are using a Jetson Nano, you should be aware that the ZED does all of its processing on the Nano, which can be computationally expensive; there's not a lot of extra computing capacity left beyond that. The Intel cameras do their computation on board the camera.
@@kalsprite Late reply, but oh well. In addition to the heavy computation needed by the ZED, it also works terribly indoors (in a small room, for example) compared to the RealSense D435, and the D435 in turn is poor compared to a ToF sensor like the Azure Kinect or the RealSense L515. Outdoors is another story, and ToF/IR does not work as well there, so the ZED is competitive outdoors at long ranges.
Jim, I have tried installing several times on my Nano (with no install issues), but I am still not able to access the T265 using realsense-viewer. I have looked on GitHub and found similar problems (github.com/IntelRealSense/librealsense/issues/3361); the only difference is that they were using Ubuntu 16. When I run lsusb I can see the T265 on Bus 001, Device 003, ID 03e7:2150, but realsense-viewer does not seem to pick it up. If I plug the D435 in, the viewer instantly picks that camera up. One difference is the bus: the D435 connects to bus 002, while the T265 connects to bus 001. I can plug the T265 into my Windows 10 PC and it works fine. Any ideas? The device number is also different: the T265 is device 003, while the D435 is device 005. I also noticed that using a high-end power supply that can provide 5 A at 5 V really improved the Nano's performance; I was using a 2 A max supply, and it just was not enough, so the Nano was slow. Using the D435, I did not get any of the frame errors you were showing; the viewer was the only thing limiting the frame rate and size.
@@JetsonHacks I am plugging directly into the Nano's USB slots. Yes, I have had both the T265 and the D435 plugged in at the same time. I have tried swapping USB ports. I have unplugged the D435 and left the T265 plugged in, with the viewer then showing nothing to add at all. In the other USB slots I have a keyboard and mouse plugged directly into the Nano. I do have a USB hub (same as yours), but it is not plugged into the Nano; I can give it a try if you think it is needed. Here is maybe something of interest: the Linux Foundation 3.0 root hub is on bus 002, and the Linux Foundation 2.0 root hub is on bus 001. The cameras are USB 3.0, yet the only bus the T265 seems to attach to is bus 001, while the D435 is always on bus 002, which is the USB 3 hub. My question is: how do I get the T265 onto bus 002, where the USB 3 root hub is? Or is this not relevant?
@@sy2532 First, I would try the powered hub. The T265 does not need a modified kernel; you can use librealsense on its own. With that in mind, create a new SD card and build librealsense on it, but do not modify the kernel. You should see the T265 work. Also, sometimes just replugging a camera can help it be recognized.
I tested this camera last year using a Raspberry Pi 4 and a Jetson Nano. It has an important issue (github.com/IntelRealSense/librealsense/issues/4518): not very stable.
I tried to use the RealSense T265 on a Jetson Nano. I had 4.9 GB of free space when I cloned the installSwapfile directory, but when I ran the ./installSwapfile.sh command I got the error "fallocate: fallocate failed: No space left on device", and my free space dropped to 0. The clone itself is not that big. Do you know about this issue?
The swap file is 4.0 GB, and the library and firmware for the T265 are of significant size. If you were trying to build the kernel too, there's no way all of that fits in 4.9 GB. Typically, when building something like this, you should have an SD card/USB drive with enough free space. Also, it is recommended that you build on a fresh install.
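A quick way to check up front whether the 4 GB swap file will fit, before running the install script. The 2 GB headroom figure is our assumption for the build, not something taken from the script:

```python
import shutil

def room_for_swapfile(path="/", swap_gb=4.0, headroom_gb=2.0):
    """Return (free_gb, ok): free space at `path` in GB, and whether it
    covers the swap file plus some build headroom."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb, free_gb >= swap_gb + headroom_gb
```

Run this before `./installSwapfile.sh`; if `ok` comes back False, free up space or use a larger SD card first.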