Hey, thanks for your tutorial. I know it is a little bit outdated, but do you know if it works with a Jetson Orin Nano? When I run the command "ninja", the process never finishes and I can't use the camera with the computer. Any suggestions? Thanks in advance.
Is there any chance of getting a nicely documented walkthrough on how to train a model using TAO? It's incredible that no such material exists. Anybody willing to help a dummy like me? It's super frustrating to only find these quick-and-dirty videos. They are super useless.
Can I use this expansion board with the Jetson Nano development board to implement CAN communication between the different sensors and the microcontroller, and control a motor with a CAN-compatible motor driver?
Joe, I found your work on the Ignition maker project showcase. It inspired me to take on a similar project for my senior capstone. Your website, AI Triad, seems to be offline. Do you have a contact email where I can send you some questions? Thanks, CT
Thanks for the good word. I have prior commitments so I will have to attend GTC online. Not happy about that, not happy at all. Really wanted to be there in person.
I bought this very same LiDAR but have been having issues with the USB device being unrecognized if the adapter module is being supplied power at the same time. I was wondering if you've had similar issues.
How in the world did you manage to get GPU inference on the Jetson Orin Nano? I am trying to run YOLOv5 (or any yolo version) and cannot get it to run on GPU.
Are you using jetson-stats to monitor the Nano? pypi.org/project/jetson-stats/ Everything I run just seems to default to the GPU. If you have installed TensorFlow, make sure it's the GPU version. If you run YOLO using DeepStream, it defaults to the GPU.
Hello Joev. Very nice to see your tests. I bought one last week and got it working under a Windows environment. I was a bit puzzled because the company that sent it is Youyeetoo, but the manufacturer seems to be Unitree. I wrote to them and both replied with software for Windows and Linux machines. They told me the only way to get the point cloud is through the SDK. I got that working in Windows, but it seems to only capture from one point, so the SLAM solution must be under Ubuntu. I'm finding some issues installing the Ubuntu version, and I was wondering if you would be so kind as to share how you set up the Linux software? I think I will also write to them again, but it's great to see it working. Also, I had the same issue with the rotation: it cannot be placed on a table without sticking it to something. I was wondering what you used as a base, and whether it is possible to use a battery instead of a charger. Would you mind giving me some advice, please? Thanks so much, and great to see your videos!
It's nice to see somebody else giving this LiDAR a chance. Here is a link to all the docs for the Unitree 4D LiDAR L1: m.unitree.com/download/LiDAR — down at the bottom of the page are downloads for the SLAM and data applications. I used the doc "Unilidar SDK User Manual_v1.0" to set up the SDK and run RViz with ROS2 on Ubuntu 20. Located in the folder "C:\a42f75fdba044f8a9f73ba1972488027\unitree_lidar_sdk\examples" are two programs that are quite useful: a C++ data publisher, "unilidar_publisher_udp.cpp", and a Python data subscriber, "unilidar_subcriber_udp.py". After some experimenting I will post a video with more results.
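If it helps anyone experimenting with those two example programs, the subscriber side boils down to listening on a UDP socket for the packets the C++ publisher sends. Here is a minimal sketch in that spirit; the port number and raw-bytes handling are just placeholder assumptions for illustration, as the real packet format is defined by the SDK's "unilidar_subcriber_udp.py":

```python
import socket

# Hypothetical minimal UDP subscriber sketch (not the SDK file itself).
# The port and payload handling are assumptions; consult the SDK manual
# for the actual point-cloud packet layout.
def receive_packets(port=12345, count=1, timeout=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))       # listen on all interfaces
    sock.settimeout(timeout)           # don't block forever if no data arrives
    packets = []
    try:
        for _ in range(count):
            data, _addr = sock.recvfrom(65535)  # max UDP datagram size
            packets.append(data)
    except socket.timeout:
        pass                           # return whatever arrived before timeout
    finally:
        sock.close()
    return packets
```

Running the SDK's C++ publisher on the same machine (or LAN) should then deliver raw datagrams you can decode per the manual.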
@@joevvaldivia Great! Thanks so much, Joev! Really appreciate it. These are exactly what I'm looking for. I managed to make it work under Windows, but SLAM is actually what I really need. The laser is great, but, as you said, it would be great if Unitree could develop some video tutorials about it, especially how to increase the number of points and how to export it via the SDK. Still, it's also fun trying to figure it out. Great to see your videos. I will look forward to your next tests, and I will try to post mine too. Thanks so much!
After many hours/days, with so much information overloading my brain while looking into OpenNI/ROS/libfreenect2 and freenect2... I have finally got mine to work. I had thought my machine wasn't going to work, as it's rather old. I will look more into what you've presented here. Watching your video got me thinking, though: would it not be more suitable to also include the IR for more accuracy when picking objects up? The data could be used to "paint" the object so the depth camera can focus on it better, if that makes sense lol. Thank you for the video and information.
Hey Joev, thanks for the great video! It's odd that CSI is choppy. I was wondering what the FPS is compared to USB; I thought USB would have more CPU processing overhead and therefore lower FPS compared to CSI?
Hi Naidol. Love your video. I'm currently working on a project for which the Create 3 robot might work. Can I ask what the maximum payload is that you can place on top and still have the robot working effectively? And how loud is it? I'm currently using a Roomba but am noticing it's quite loud. Any help is much appreciated.
Hello, Joev. Maybe a stupid question, but I can't find the answer: does it do all the calculation on onboard hardware, or does it only provide pictures, with all the depth-map calculations done by software on an external processor?
🥰Respect! I saw the MQTT part and said, nice — but only later did I understand you were doing something with a PLC!!! Btw, have you tested anything with the Orin NX 16GB?
Hi, thanks for your video!! It really helps me a lot. Could you write the name of the software you use to monitor the Jetson performance, please? I need it for a university project. Thanks ;-)
Hi, thanks for your videos, I like them. Do you know if it's possible to install DeepStream 6.2 on my NVIDIA RTX 4090 with Ubuntu 22.04, or do I need to downgrade to Ubuntu 20.04? Thanks
I have the same setup as you. I'm trying to think of a cool thing to do with skeletal tracking outside of the typical UE/Unity LiveLink stuff. It would be cool to live-render something in three.js for cross-platform access. I know how to work with animations without issue, but mapping skeletal rigs is a whole other thing.
Hi Joe, thanks for the video, it's really helpful. I faced an issue while working with this app: I git-cloned your repo and got all the dependencies, but I keep getting this error:
ERROR: nvdsinfer_backend.cpp:38 cudaStreamCreateWithPriority failed, cuda err_no:222, err_str:cudaErrorUnsupportedPtxVersion
0:05:51.168639720 55514 0x55922bab1b30 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::allocateResource() <nvdsinfer_context_impl.cpp:293> [UID = 1]: Failed to create preprocessor cudaStream
0:05:51.168660590 55514 0x55922bab1b30 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:1029> [UID = 1]: preprocessor allocate resource failed
ERROR: nvdsinfer_context_impl.cpp:1275 Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_TENSORRT_ERROR
0:05:51.197942721 55514 0x55922bab1b30 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:05:51.197986385 55514 0x55922bab1b30 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start:<primary-inference> error: Config file path: config_infer_primary_yoloV3.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
Do you know what might be causing it? A reply would be really, really helpful. Thanks again!
I have not. There is depth data in the GStreamer stream, but it's beyond my capabilities to figure out how to get it out. What I have done is use this example: github.com/stereolabs/zed-yolo. It's easier to get the depth data out, and the AGX Orin can run this at 28 FPS.
Hi Mr. Valdivia, thank you for your work. I ran your code on my Jetson AGX Orin devkit, and I wanted to view the stream from the ZED 2i camera directly on my laptop. I'm using VLC media player on the laptop and tried to open a network stream, but it won't open, telling me that it can't reach localhost:8554. Any advice on how to fix this? Or do I have to put in the IP address of the Jetson AGX Orin?
@@joevvaldivia Hi again, sorry for the disturbance, but I connected them to the same network and used the IP address of the Orin. It worked in my office, but again not at home. Are there any common problems with the connection?
@@saifzamer5432 If the wired connection and the WiFi have two different IP addresses, there is sometimes an issue with the Orin deciding which path to use. You may have to shut down one of the connections.
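For anyone else hitting this, one quick way to separate a plain network problem from a VLC/stream problem is a raw TCP connect test against the Jetson's LAN IP and the stream port. A minimal sketch in Python; the host, the 8554 port, and the timeout are just assumptions matching the setup discussed above:

```python
import socket

# Hypothetical helper: returns True if a TCP connection to host:port succeeds.
# If this fails for the Jetson's LAN IP on the stream port, the issue is
# routing/firewalling between the machines, not VLC or the stream pipeline.
def can_reach(host, port=8554, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it from the laptop with the Orin's IP (e.g. `can_reach("192.168.1.50")`, an address made up for illustration); if it returns False, check which interface each machine is actually using before debugging VLC.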
@@joevvaldivia I'm only using the WiFi connection. However, on one of the Orins I face no problems: it immediately connects to VLC media player and I'm able to open the stream. On the second one I am using ZED SDK 3.8.2 and your GitHub repository. Any idea why this might be happening? Should I install any additional programs?
Thank you for the amazing playlist on NVIDIA TAO Toolkit! Could you please add another video about Mask R-CNN with a custom dataset using NVIDIA TAO Toolkit? I'm eager to try it out and see the results. I believe it will be quite fascinating.
Thanks for sharing — you're a hidden gem. I just found your channel and I'm binging your videos. Can you please make a detailed video on YOLOv8 custom-object DeepStream with Neural Magic?
Hi, thanks for sharing this great work and video! I'm also working on something similar to record traffic. I'm wondering what tools you used to mount the camera on the car? And which battery did you use? Thanks :)