This is a place where you can find detailed tutorials on Data Science, Data Analysis, Artificial Intelligence, Machine Learning, Deep Learning, and Computer Vision, with a proper implementation of every topic.
Email: aarohisingla1987@gmail.com
Subscribe to my channel to get the latest videos on emerging technologies.
Aarohi, I am working on FFC-ResNet with an LSTM for video classification. I extracted features from the train and test sets using FFC-ResNet, but I am stuck at the LSTM step: I get a tensor size mismatch error between the sequences and the targets.
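A common cause of that mismatch is producing a different number of sequences than targets when windowing the per-frame features. A minimal, framework-free sketch of the windowing step (function and variable names are hypothetical; your feature extractor and label layout may differ):

```python
def make_sequences(frame_features, video_label, seq_len=16):
    """Group per-frame feature vectors into fixed-length sequences,
    emitting one target (the video's label) per sequence."""
    sequences, targets = [], []
    # Drop the trailing remainder so every sequence has exactly seq_len frames.
    for start in range(0, len(frame_features) - seq_len + 1, seq_len):
        sequences.append(frame_features[start:start + seq_len])
        targets.append(video_label)  # one label per sequence, not per frame
    return sequences, targets

# Example: 40 dummy frame features of dimension 3 for a video of class 2.
frames = [[0.0, 0.0, 0.0] for _ in range(40)]
seqs, tgts = make_sequences(frames, video_label=2, seq_len=16)
assert len(seqs) == len(tgts)  # what the LSTM loss function requires
```

When batched into tensors, the sequences become shape (num_sequences, seq_len, feature_dim) and the targets shape (num_sequences,); that first dimension must match, which is exactly what the error is complaining about.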
Hello! Thanks for all your videos and efforts. I am following your channel, but I request you to please upload one detailed video on how to fine-tune the YOLOv5 model for custom image classification.
Any chance you'll be doing a demo for fine-tuning? In the next couple of weeks, I'll definitely be doing some fine-tuning for custom tasks (i.e., given this image and a prompt, determine whatever), but I'm not particularly familiar with DocVQA's format, which is what the HuggingFace article uses. I would greatly appreciate a high-level overview and options in the meantime, if you have the time. Thanks for this though!
Awesome work Aarohi. Thank you for sharing this tutorial; it helps us better understand how it works. Is it possible to have multiple tasks from a single input, so it generates more output information? Like ANPR, Vehicle Colour Recognition (VCR), Vehicle Make/Model, and Vehicle. Thanks
After detecting an object, I would like to perform a specific action, like activating a Carbon laser for that object. How can I do that? @CodeWithAarohi
Madam Aarohi, when I tried to convert to .tflite I got this error: AttributeError: 'NoneType' object has no attribute 'outputs'. The previously run command was: python export.py --weights ....best.pt --include tflite. Do you have any solution?
You need to modify the dataset loading code to filter out the annotations and images corresponding to the classes you want to train on. This step ensures that during training, only images and annotations for class1 and class2 are processed. Also make changes in your data.yaml file: only provide the names of the classes you want to train on.
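For example, a data.yaml restricted to two classes might look like this (the paths and class names are placeholders; adjust them to your dataset):

```yaml
# hypothetical paths - point these at your filtered dataset
train: ../dataset/images/train
val: ../dataset/images/val

nc: 2                        # number of classes to train on
names: ['class1', 'class2']  # index 0 -> class1, index 1 -> class2
```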
@@CodeWithAarohi Yes, I did that, but here is what happened. My annotation txt files use class indices 0 and 2. In the yaml file I set nc: 2 and names: [class1, class2], but training didn't run. After researching, I learned that the names: [] list maps to the indices in the annotation files. I only have 2 classes, but the indices 0 and 2 imply 3 classes (0, 1, 2), so it didn't run. Then I changed the yaml file to nc: 3 and names: [class1, sample, class2], and it runs without error. My understanding is that class1 represents 0, sample represents 1 (which I don't actually have), and class2 represents 2. Is this correct or not?
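That reasoning is right: a label index of 2 forces the names list to have at least 3 entries. The padding-class workaround runs, but it wastes an output slot; the cleaner alternative is to remap the label files so your two classes use indices 0 and 1 and then set nc: 2. A minimal sketch of the remapping (the index mapping and label layout are assumptions based on the comment above):

```python
# Remap YOLO label indices {0 -> 0, 2 -> 1} so data.yaml can use nc: 2.
remap = {'0': '0', '2': '1'}

def remap_line(line):
    """Rewrite the class index at the start of one YOLO label line."""
    cls, rest = line.split(maxsplit=1)
    return remap[cls] + ' ' + rest

label = "2 0.5 0.5 0.2 0.1"   # a box of old class 2
print(remap_line(label))      # -> "1 0.5 0.5 0.2 0.1"
```

Applied over every .txt file in the labels folders, this lets names: [class1, class2] line up with the indices actually present in the annotations.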
Can I use an RS-485 expansion board with the Jetson Nano development board to implement CAN communication between the different sensors and a microcontroller, and control the motor with a CAN-compatible motor driver?
Hi Aarohi, I have some problems with the YOLOv5 part. Which release did you use in your video? I want to test custom training on this board, but I have problems running YOLOv5, even though I can successfully install torch, torchvision, and OpenCV.
This was a very old version. It will not work now because of changes in the ultralytics package. You can try to run YOLOv5 or YOLOv8 through DeepStream instead; I have recently done this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Ufd86duobvc.html
I don't understand why the 1×1 convolution in the DenseBlock is used to decrease the number of channels and is then followed by a 3×3 convolution to increase the number of channels, whereas it should be the opposite.
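For what it's worth, in the DenseNet-BC bottleneck the 3×3 convolution does not actually increase the channels: the 1×1 first reduces the (ever-growing) concatenated input to 4k channels to keep the 3×3 cheap, and the 3×3 then outputs only k channels, the growth rate. A small arithmetic sketch, assuming growth rate k = 32 and an example input width of 256 channels:

```python
k = 32                 # growth rate: channels each dense layer adds
c_in = 256             # example input channels after several concatenations

bottleneck = 4 * k     # 1x1 conv: 256 -> 128 channels (a reduction)
out = k                # 3x3 conv: 128 -> 32 channels (also a reduction)
c_next = c_in + out    # concatenation: the next layer sees 256 + 32 = 288

assert out < bottleneck < c_in   # channels shrink at every conv step
print(c_next)  # 288
```

So the channel count only grows through concatenation of layer outputs, never inside the bottleneck itself.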
When I run the command "sudo apt-get install deepstream-6.0", I get this: E: Couldn't find any package by regex 'deepstream-6.0'. I'm on a Jetson Orin Nano, and I have all the dependencies installed too.
Hello, may I ask: I am currently using YOLOv8 OBB and want to train my own dataset. When creating a new project in Roboflow for annotation, should I choose "Object Detection" or "Instance Segmentation" for annotating my custom dataset for YOLOv8 OBB?
The YOLO OBB format designates bounding boxes by their four corner points, with coordinates normalized between 0 and 1. Each label line follows this format: class_index x1 y1 x2 y2 x3 y3 x4 y4. Check this: docs.ultralytics.com/datasets/obb/
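A quick way to sanity-check a label line against that format (stdlib only; the sample coordinate values are made up for illustration):

```python
def parse_obb(line):
    """Parse one YOLO OBB label line: class index + 4 normalized corners."""
    parts = line.split()
    cls = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    assert len(coords) == 8, "expected x1 y1 x2 y2 x3 y3 x4 y4"
    assert all(0.0 <= v <= 1.0 for v in coords), "coords must be normalized"
    # Pair up (x, y) values: [(x1, y1), (x2, y2), (x3, y3), (x4, y4)]
    corners = list(zip(coords[0::2], coords[1::2]))
    return cls, corners

cls, corners = parse_obb("0 0.1 0.1 0.9 0.1 0.9 0.4 0.1 0.4")
```

Running this over your exported labels is a cheap way to catch files that were annotated with the wrong Roboflow project type before training starts.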
Actually I have a doubt. I have done the same thing using YOLOv8l, but while implementing EasyOCR, what preprocessing techniques do I need to apply to get accurate results? For cars it's working fine, but for bikes or HCVs it's recognizing garbage values.
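Plate crops from bikes and HCVs tend to be smaller and lower-contrast than car plates, so OCR often improves after upscaling the crop and stretching/binarizing it first. A framework-free sketch of the contrast-stretch and threshold steps on a grayscale crop (the fixed threshold of 128 is an assumption; in practice Otsu thresholding via OpenCV is the usual choice):

```python
def stretch(gray):
    """Contrast-stretch so the darkest pixel maps to 0, brightest to 255."""
    lo = min(min(row) for row in gray)
    hi = max(max(row) for row in gray)
    scale = 255 / (hi - lo) if hi > lo else 0
    return [[int((px - lo) * scale) for px in row] for row in gray]

def binarize(gray, thresh=128):
    """Binarize for OCR: pixels above the threshold become white (255)."""
    return [[255 if px > thresh else 0 for px in row] for row in gray]

crop = [[90, 100, 160], [95, 150, 170]]   # tiny fake grayscale plate crop
binary = binarize(stretch(crop))
```

On top of this, resizing the crop 2x to 4x before OCR and restricting EasyOCR to alphanumeric characters (its readtext allowlist parameter) usually cuts down the garbage values considerably.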
Hello, I have a question. I'm building a YOLOv5 project on a Jetson Nano with a CSI camera for real-time detection. If I run the config txt for just the CSI camera, it works. But when I add the DeepStream settings to the config txt, like the model engine, I get an error; and after that, running just the camera config txt fails too. I have to reboot the Jetson, and then running just the camera config txt works again. Can you help me? :')