Thank you for the long-form video, walking through the code and explaining the caveats so clearly. We have missed you. Hopefully there will be more content coming our way soon. Cheers and stay safe.
Brilliant video. New to the Nano. Gleaned a LOT from the vid. Never thought to thread the camera capture; it was an "I could have had a V8" moment. The project I'm working on mandates high frame rates, so now I have a clue. Big thanks.
Thank you for the kind words. Threading is usually the first trick to reach for on smaller machines, when it's available. The goal is to take advantage of as much of the machine's resources as possible. The smaller the machine, the more you have to know to make it performant. Thanks for watching!
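For anyone curious what the threaded-capture trick looks like, here is a minimal sketch. The `FrameGrabber` name and structure are illustrative, not the actual classes from the CSI-Camera repo; the idea is simply that a background thread keeps reading from the camera so the main loop never blocks on `read()` and always gets the latest frame.

```python
import threading

class FrameGrabber:
    """Read frames on a background thread so the main loop never
    blocks waiting on the camera (the threading trick discussed above)."""

    def __init__(self, source):
        # `source` is anything with a read() -> (ok, frame) method,
        # e.g. a cv2.VideoCapture on the Nano.
        self.source = source
        self.frame = None
        self.running = False
        self.lock = threading.Lock()

    def start(self):
        self.running = True
        self.thread = threading.Thread(target=self._update, daemon=True)
        self.thread.start()
        return self

    def _update(self):
        while self.running:
            ok, frame = self.source.read()
            if not ok:
                break
            with self.lock:
                self.frame = frame  # keep only the most recent frame

    def read(self):
        with self.lock:
            return self.frame

    def stop(self):
        self.running = False
        self.thread.join()
```

On the Nano you would hand it `cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)` as the source; here it works with any object exposing `read()`.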
@@JetsonHacks Yeah, concur regarding the need to get closer to the hardware & do a little "bit twiddling". I'm an "old school" coder. Cut my teeth on Univac mainframes in the 70's-80's, transitioned to the 8086, 6800, & 68000 families in the late 80's, quit coding professionally in the middle to late 90's. Worked w/ the Nestor NI1000, back in the day (1,024 neurons in silicon). My forte, language-wise, was machine language, assembly (on several chip sets), C, C++, Ada, & several high-level languages, so Python is new to me, it being a relatively new OO language. So your "tutorials" are beneficial in multiple arenas. My dev machine is an HP DL580 (40 cores/80 threads, 1 TB memory, 4x 1080Ti's, etc.), and I am amazed @ just how much AI @ the edge can do (RPi 4 + NCS, Nano, AGX). Ever done anything combining the Nano + NCS (OpenCV + OpenVINO + cuDNN)? Think it would be interesting to encapsulate NLP "at the edge" versus the Alexa/Wolfram Alpha client-server style implementation. Anyway, keep up the good teaching/experimentation, I'll be watching. What's the best way to subsidize your endeavors here on RU-vid, et al.? Feel the least I can do is throw you some pizza $$$. :)
I didn't order the Jetson Nano last year :) - but I came back to your channel to check if there's something new coming out :) - Thank you for a new video :)
THIS IS AWESOME! Thank you for this primer. Next I need to feed each frame into a real-time SLAM algorithm and build a volumetric representation of the space around the drone so it can A* path, hmm.
IMO, first it's better to check that everything is OK with the cameras using ls /dev/video* — you should see two devices named video0 and video1, meaning both are visible as devices. Otherwise gst gives errors which are difficult to understand. Also, the dual-camera sample didn't work for me with the sensor_mode parameter; I commented it out and then it was OK.
I am currently working on a project using an Inland camera, and when I try to test the camera this error shows up: "Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:557 No cameras available". Can anyone help?
There are tradeoffs. Typically stereo cameras have a global shutter and hardware sync, which two RPi cameras do not offer. The results won't be perfect, but you should be able to play around with the concept. Thanks for watching!
Hi! I purchased the cam too. Could you help me with stereo vision on the Jetson Nano? I have been trying to sort out the connection between the cam and the Jetson Nano.
I am currently using it and it works the same. Maybe try changing the CSI settings to IMX219 from jetson-io (ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-eImDQ0PVu2Y.html — around the 9-minute mark the guy uses the same program to configure PWM pins).
Hello. I am a student studying the Jetson Nano and OpenCV for the first time. Because I use a translator, the writing may not be smooth. I want to save the video output from the code you uploaded as an mp4 file, but the VideoWriter function doesn't seem to work. Do you have any code to solve this? I've searched many examples but haven't found a solution yet. Thank you for the good material.
You are welcome. I do not have any code to share about video writing. You may ask on the official NVIDIA Jetson Nano forum, where a large group of developers and NVIDIA engineers share their experience. Good luck on your project!
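For readers with the same question: one pattern that commonly comes up in Jetson examples is handing cv2.VideoWriter a GStreamer pipeline string so the hardware encoder does the work. This is a sketch only, not the author's code; the exact element chain (`nvv4l2h264enc`, etc.) is an assumption that varies by JetPack version, and it requires OpenCV built with GStreamer support.

```python
def gst_writer_pipeline(path):
    """Build a GStreamer pipeline string for cv2.VideoWriter on the Nano.
    The element chain below is an assumption based on common Jetson
    examples; adjust the encoder/muxer for your JetPack version."""
    return (
        "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! "
        "video/x-raw, format=BGRx ! nvvidconv ! "
        "nvv4l2h264enc ! h264parse ! qtmux ! "
        "filesink location={}".format(path)
    )

# Usage sketch (needs a camera and OpenCV with GStreamer support):
# import cv2
# writer = cv2.VideoWriter(gst_writer_pipeline("out.mp4"),
#                          cv2.CAP_GSTREAMER, 0, 30.0, (1280, 720))
# writer.write(frame)   # one BGR frame per call, matching the size above
# writer.release()      # finalizes the mp4 container
```

If `writer.isOpened()` returns False, the pipeline string or the OpenCV build is usually the culprit.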
Hello Jim, I followed your video tutorials and did everything, but at one point I got stuck on syncing the two cameras in the instrumented folder. I got an error like this:

python -u "/home/jetson/Documents/3D_workspace/CSI-Camera/instrumented/dual_camera_fps.py"
Traceback (most recent call last):
  File "/home/jetson/Documents/3D_workspace/CSI-Camera/instrumented/dual_camera_fps.py", line 16, in
    from csi_camera import CSI_Camera
  File "/home/jetson/Documents/3D_workspace/CSI-Camera/instrumented/csi_camera.py", line 14, in
    class RepeatTimer(threading.Timer):
TypeError: Error when calling the metaclass bases function() argument 1 must be code, not str

Can you let me know what the error is? Thanks in advance.
When I launch dual_camera_fps.py I get this message:

Traceback (most recent call last):
  File "csi_camera.py", line 14, in
    class RepeatTimer(threading.Timer):
TypeError: Error when calling the metaclass bases function() argument 1 must be code, not str
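That particular TypeError typically means the script was run under Python 2, where threading.Timer is a factory function rather than a class, so it cannot be subclassed; under Python 3 the subclass works. A minimal sketch of the same idea (a timer that fires repeatedly), to confirm your interpreter:

```python
import threading

class RepeatTimer(threading.Timer):
    """A Timer that fires repeatedly instead of once. Subclassing
    threading.Timer only works on Python 3; on Python 2, Timer is a
    factory function, which produces the metaclass TypeError above."""

    def run(self):
        # finished.wait() returns False on timeout, True once cancel()
        # sets the event, so this loops until the timer is cancelled.
        while not self.finished.wait(self.interval):
            self.function(*self.args, **self.kwargs)
```

If this snippet imports cleanly but the repo script doesn't, try launching the script with `python3` instead of `python`.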
Hi (I'm new to embedded AI), would the Xavier NX be able to take advantage of the 60 fps? Also, unrelated: on JetPack 4.4, does OpenCV come with CUDA support? I cannot tell from the build info. Thanks so much for all your helpful videos!!!!!
Hi ... thanks for the wonderful tutorials. By the way, I'd just like to know if you have any experience connecting the FLIR Lepton breakout board to the Jetson Nano?
Thank you for the video! I noticed that the video output is much smoother at a lower fps (e.g. 30), but there's a lot of lag at higher fps (e.g. 120). This seems counterintuitive to me... shouldn't the output be faster and smoother at a higher fps?
Hi, I could only get hold of a Jetson Nano A02 here in our country. Is it possible to use two USB cameras in a stereo setup for depth estimation with the A02?
Are there any differences in board size between the A02 and B01? Screw holes, etc.? I tried finding the CAD files for both but didn't see them (still looking, BTW). :)
Why not do a while loop around the cv2.waitKey call, checking the internal clock until 1000/fps ms have elapsed since you last presented a frame, and sleep for ~1 ms per iteration? That way it waits almost exactly the right amount of time to match the frame rate.
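The pacing loop described above might look roughly like this. It's a sketch: the function name is made up, and the `now`/`sleep` parameters are there only so the timing logic can be exercised without a display; in a real loop you'd also call cv2.waitKey(1) inside the wait to keep the HighGUI window responsive.

```python
import time

def wait_for_next_frame(last_present, fps, poll=0.001,
                        now=time.monotonic, sleep=time.sleep):
    """Spin in ~1 ms naps until 1/fps seconds have passed since the last
    presented frame, then return the new present timestamp. In a display
    loop, cv2.waitKey(1) would replace or accompany the sleep() call."""
    period = 1.0 / fps
    while now() - last_present < period:
        sleep(poll)  # short naps instead of one big blocking wait
    return now()
```

A caveat worth noting: 1 ms sleeps are subject to OS scheduler jitter, so presentation times will still wobble by a millisecond or two.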
Hello, can you please tell me the maximum fps achievable using a Jetson Nano and RPi cameras (dual as well as single)? In general the RPi camera spec lists a max of 120/90 fps, but I guess using OpenCV or a bash command we can achieve higher. With dual cameras on the Jetson Nano, what is the maximum frames-per-second limit possible?
Please ask your question on the official NVIDIA Jetson Nano forum, where a large group of developers and NVIDIA engineers share their experience. Good luck on your project!
Hello, when I display the camera feed from my Raspberry Pi camera, I get some weird white dots that appear all over the screen. They seem to be permanent and not moving. Additionally, the image quality seems to be much worse than what you are getting out of your cameras. I tried buying the RPi camera from different suppliers, even from the one linked in your video description. Three different cameras in total, and they ALL have this same issue, which is very frustrating. Would you happen to know why, or have any advice? Thanks!
I do not have any experience with your issue. Please ask this question on the official NVIDIA Jetson Nano forum, where a large group of developers and NVIDIA engineers share their experience.
Jetson Nano B01 Development Kit. The earlier version with one CSI connector (A02) can be seen here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-dHvb225Pw1s.html
You make it look so easy, but it's really not, at least for me. Looks like I'm going to have to devote a lot more time to this. There must be a way to do facial recognition from off-axis. There's always a way to do something that seems impossible. I like that the fire extinguisher is always in frame. You should plug in your guitar and crank out some power chords at the end of some of your videos. I like saying "rock on dude" instead of "stay safe". I say that all the time.
Depends on the application. On a moving robot, you can aim the cameras in different directions, one being a front view, the second being a rear view. Same thing with side views. You can angle the cameras slightly from each other, giving you a wider field of view. You can point the cameras in the same direction a known distance apart and use the pair as a stereo camera. Using trigonometry, the stereo camera can estimate the depth of a given object from the cameras. On an inspection line, you can monitor two different areas on the line at once. Thanks for watching!
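For the stereo case mentioned above, with two parallel cameras a known distance apart, the trigonometry reduces to the standard relation Z = f·B/d (depth = focal length in pixels × baseline ÷ disparity in pixels). A tiny sketch, with all numbers purely illustrative:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by a parallel stereo pair: Z = f * B / d,
    with f in pixels, the baseline in metres, and disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. a (hypothetical) 700 px focal length and 6 cm baseline:
# a 14 px disparity puts the object at 700 * 0.06 / 14 = 3.0 m
```

In practice the focal length and baseline come out of a stereo calibration step, and the disparity from a block-matching or similar algorithm.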
Hi there, wonderful video. I have tried everything you said but still get errors after running the Python code in Visual Studio. Not sure why; I have installed everything, Cython, NumPy...
@@JetsonHacks When running the Python code like the face-detect and simple-camera scripts, it throws a number of errors and I can't see the camera window, no matter what I do. Not sure if I am missing any plugins or software. I did whatever you mentioned in your video.
Hi Jim, very nice video again! From my understanding, this software synchronisation of the cameras is not perfect but close to perfect, right? This is also shown in this video from Arducam: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-MbLOcaAJ7Ug.html Do you have an idea what the lag is (in ms or ns) between the cameras? Would the only option be a global shutter and hardware sync to get them perfectly in sync? Did you try to save the recorded frames (as individual images) to the SD card or an SSD, and what frame rate would be achievable in that case? I'm asking because I need to do postprocessing on the images, and my script is so computationally heavy that I cannot do it within the acquisition loop. Thanks, Bart
In the video referenced, they use a global shutter and an external trigger for the cameras. This is effectively synchronized stereo image capture. It requires a different camera sensor, a driver for that sensor, and an external hardware trigger board. You describe a project which is beyond the scope of a quick answer, but there should be enough information in this video to get you started in your search for answers. Thanks for watching!
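On the "postprocessing too heavy for the acquisition loop" part of Bart's question, one common pattern is to decouple capture from processing with a bounded queue and a worker thread, so the heavy step (saving to SSD, analysis) runs off the capture path. A sketch under those assumptions; the names and the stand-in `process` callback are illustrative, not repo code:

```python
import queue
import threading

def run_pipeline(frames, process, maxsize=8):
    """Feed frames from a producer (stand-in for the capture loop) into a
    bounded queue drained by a worker thread, so capture never stalls on
    `process` (the expensive per-frame step). Returns the processed results."""
    q = queue.Queue(maxsize=maxsize)
    results = []

    def worker():
        while True:
            item = q.get()
            if item is None:          # sentinel: no more frames coming
                break
            results.append(process(item))

    t = threading.Thread(target=worker)
    t.start()
    for f in frames:                  # the capture loop would go here
        q.put(f)                      # blocks briefly if the worker lags
    q.put(None)
    t.join()
    return results
```

The bounded queue is the important design choice: if the worker can't keep up, the producer blocks (or you could drop frames) instead of memory growing without limit, which matters on a 4 GB Nano.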
Hi Bart, thanks for that question - I am also wondering about the maximum number of stored (HD) images per second, as my requirements look similar to yours... BTW, I'm currently working on that issue on a StereoPi, which resulted in around 1 fps, but with long waits of several seconds after every 10 images or so - so it's burst mode rather than a continuous 1 fps.