Thanks to this video, I finally found the answer to my questions about YOLOv8. At 03:20 on April 20 I started training my dataset of 1,700 pictures and 7 classes; I can sleep comfortably now 🥰 Thank you for your useful video. 😉
@@AliOsmanHocam I am getting an error after 10 epochs: OSError: [WinError 1455] The paging file is too small for this operation to complete. I was curious whether you got it or not.
@@AliOsmanHocam I completed it, but I am unable to see the boxes around my images (not detecting, I guess). I have around the same number of images as your dataset and have 4 classes. Can you share the parameters you used to train?
Thank you so much, man. Update: I had to pick my project back up after 6 months and there was still no tutorial as good as this. I had to scrounge YouTube for 30 minutes to find your video again. So happy to find it!
I wish I had watched this tutorial at the beginning! All these other tutorials work with Colab and all these other environments but don't mention that PyTorch gets installed WITHOUT CUDA!!! I have an RTX 4080 and was banging my head against the wall trying to figure out what was wrong, wondering why training wasn't happening on my CUDA-enabled machine, when it was the ultralytics package that had installed PyTorch and torchvision without CUDA. You also brilliantly showed the package/directory structure the ultralytics package expects, where each data split (train, val, test) has its own images + labels directories inside, which some other tutorials didn't show and just expected you to have.
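For readers who land here first, that expected layout can be sketched in a few lines of stdlib Python (the root folder name is just an example, and the tree is built under a temp dir for the demo):

```python
import tempfile
from pathlib import Path

# Hypothetical dataset root; rename to match your own project.
root = Path(tempfile.mkdtemp()) / "yolov8_custom"

# Each split directory (train, val; add test if you have one) holds its own
# images/ and labels/ subdirectories, the layout ultralytics expects.
for split in ("train", "val"):
    for sub in ("images", "labels"):
        (root / split / sub).mkdir(parents=True, exist_ok=True)

print(sorted(p.relative_to(root).as_posix() for p in root.rglob("*") if p.is_dir()))
```

Each image in images/ should be paired with a same-named .txt annotation file in the sibling labels/ directory.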
I rarely comment on YouTube, but this time I need to thank you! I did all the steps and there was no error. The only problem is that CUDA doesn't work on Intel graphics cards, so my training lasted 6 hours ;D
@@tridibroyarjo418 I concur. His tutorials work the first time. How would you set up a training scheme to estimate the surface area of objects? I don't mean computing the area from the bounding boxes; I mean training a model whose inputs are pictures and numeric area values. And how would you use LabelMe instead of labelImg?
I can't thank you enough for making this video! This is the first comment I have ever left on a YouTube video, just because of how good this is! Concise, straight to the point, comprehensive, and he even explains everything. For those of you wondering if this still works in 2024: yes, it surely does! 27/01/2024
Thank you for the amazing explanation as usual. Just one little note, especially for Windows 11 users: add an absolute path at the beginning of the custom data .yaml file, so it looks like this (mind the space after each colon):
path: C:\Users\\yolov8_custom\
train: .\train\
val: .\val\
nc: 2
names: ["hat", "jacket"]
Great video! Clear and well explained. Thank you! Two questions: 1) Can the detection return the coordinates of the detected object (if detected)? 2) Would you consider doing a similar video on running the detection on an embedded machine, such as a Jetson Nano?
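On question 1: yes. In ultralytics you can read the coordinates off the results object (e.g. results[0].boxes). As a library-free sketch of the coordinate convention itself, converting a normalized YOLO (x_center, y_center, width, height) box to pixel corners looks like this (the helper name is mine):

```python
def xywhn_to_xyxy(xc, yc, w, h, img_w, img_h):
    """Convert a normalized YOLO (x_center, y_center, width, height) box
    to absolute (x1, y1, x2, y2) pixel corners."""
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return x1, y1, x2, y2

# A centered box covering half the image in each dimension:
print(xywhn_to_xyxy(0.5, 0.5, 0.5, 0.5, 640, 640))  # → (160.0, 160.0, 480.0, 480.0)
```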
Excellent tutorial. An idea for a different tutorial, if you're interested: how to get better detection. The hat is a great example. Just increasing the training doesn't always work. What are the choices you can make to get more accurate results?
Really helpful tutorial. Any chance of making a custom object detection tutorial for YOLO-NAS? It would be greatly appreciated if you could do that. From the looks of it, it's newer and better than v8.
Thank you for the clear explanation! I have a labeled dataset with 11 classes, but I am only interested in 4 of them. Do you have any information on how to train the model on a subset of the classes? Should I remove them from the label files, or is there maybe a parameter in YOLOv8 that tells it which classes to use?
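One common approach is to rewrite the label files yourself: drop the boxes of unwanted classes and remap the kept class IDs to a dense 0..N-1 range (some newer ultralytics versions also expose a classes argument; check the docs for your version). A minimal sketch, with made-up class IDs:

```python
def filter_labels(lines, keep):
    """Keep only annotations whose class id is in `keep`,
    remapping ids to 0..N-1 in the order given by `keep`."""
    remap = {old: new for new, old in enumerate(keep)}
    out = []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if int(parts[0]) in remap:
            out.append(" ".join([str(remap[int(parts[0])])] + parts[1:]))
    return out

# Example: an 11-class label file, keeping only classes 2, 5, 7 and 9.
labels = ["2 0.5 0.5 0.2 0.2", "4 0.1 0.1 0.05 0.05", "9 0.8 0.8 0.1 0.1"]
print(filter_labels(labels, keep=[2, 5, 7, 9]))  # → ['0 0.5 0.5 0.2 0.2', '3 0.8 0.8 0.1 0.1']
```

Remember to update nc and the names list in the data .yaml to match the remapped 4 classes.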
Another question: is it possible to train the model at a higher resolution than 640? We have images from 1080p up to 4K (and need the details in the image), and the processing power to handle it, but can the model? Training the model from the ground up is also no problem, as we have a huge dataset.
You can train the model at a higher resolution; that won't be a problem. For training from the ground up, don't use pretrained model weights, and pass a resolution of your choice.
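One practical detail when going above 640: YOLO-style models work on image sizes that are multiples of the network stride (32 for YOLOv8), and ultralytics will round a non-compliant imgsz for you with a warning. A quick way to pick a compliant size near your source resolution (the helper name is mine, and the commented train call is a hedged sketch of the ultralytics API):

```python
def nearest_stride_multiple(size, stride=32):
    """Round `size` to the nearest multiple of `stride` (at least one stride)."""
    return max(stride, int(round(size / stride)) * stride)

for src in (1080, 1440, 3840):
    print(src, "->", nearest_stride_multiple(src))

# Training would then look roughly like (check the ultralytics docs for your version):
# model.train(data="data_custom.yaml", imgsz=nearest_stride_multiple(1080), ...)
```

Keep in mind that GPU memory use grows quickly with imgsz, so you may need to lower the batch size at 4K-scale resolutions.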
Hey, I hope you won't give up making this video, please. This can help with our thesis using YOLOv7. In addition, can you make one for EfficientDet and RetinaNet?
Hi, can I ask a question? I tried running this, upgraded torch, and checked the torch version: it shows torch+cuda, but torch.cuda.is_available() returns False.
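A quick sanity check that doesn't need a GPU at all: PyTorch wheels encode their build in the version string (e.g. 2.1.0+cpu vs 2.1.0+cu118). If torch.__version__ ends in +cpu, is_available() will always be False, and the fix is reinstalling a CUDA wheel from pytorch.org. A tiny sketch of the check (helper name is mine):

```python
def is_cuda_build(version):
    """True if a torch version string like '2.1.0+cu118' names a CUDA build."""
    return "+cu" in version

print(is_cuda_build("2.1.0+cu118"))  # → True
print(is_cuda_build("2.1.0+cpu"))    # → False

# In a real session you would pass torch.__version__ here, e.g.:
# import torch; print(is_cuda_build(torch.__version__), torch.cuda.is_available())
```

If the build string looks right but is_available() is still False, the usual remaining suspects are the NVIDIA driver version and mismatched CUDA runtime.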
Good video. I'm messing with this using an RTSP feed from a home security camera. I believe my tracking is being processed by the CPU, so that might be a big reason for low FPS. However, since it is a stable frame with a small number of object types to detect (car, truck, person, dog, etc.), would trimming the pre-trained model's list of possible objects down to the ones I want increase the FPS? If so, how do I do it?
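On trimming classes: filtering detections won't make the network itself faster (it still computes all 80 outputs), but it does cut downstream work like drawing and tracking. The ultralytics predict/track calls accept a classes=[...] argument for exactly this. As a library-free sketch of the idea, filtering raw (class_id, confidence, box) detections looks like:

```python
# COCO ids (a subset): 0=person, 2=car, 7=truck, 16=dog
WANTED = {0, 2, 7, 16}

def keep_wanted(detections, wanted=WANTED):
    """Drop detections whose class id isn't in `wanted`.
    Each detection is a (class_id, confidence, (x1, y1, x2, y2)) tuple."""
    return [d for d in detections if d[0] in wanted]

dets = [(0, 0.9, (10, 10, 50, 120)),    # person -> kept
        (56, 0.8, (5, 5, 20, 20)),      # chair  -> dropped
        (2, 0.7, (100, 40, 300, 160))]  # car    -> kept
print(keep_wanted(dets))
```

For a real FPS gain you'd want GPU inference or a smaller model (e.g. the nano variant) rather than class filtering alone.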
Thank you for this great video. I want to ask: when converting a YOLOv8 model to TensorFlow Lite, how many FPS can you get with TensorFlow Lite for object detection? (I use a Jetson Nano for this project.)
@@TheCodingBug No, I mean the structure of YOLOv8. On some websites I read that it can be used with different backbones. I'd like to know the details, if possible.
Thanks for the great video. I've followed everything from your tutorial, and it works up to training. But I can't detect anything in pictures or videos. In the runs/train folder I get train_batch jpg files with correctly assigned label numbers (0, 1, 2, etc.), but nothing when running detection on jpg or video files. And yes, I'm running detection with the trained .pt file.
Hello, I want to ask something. I want to train my model again because I think my annotations were pretty bad: my predictions were either at 0 or 0.3/0.4 confidence, or the box never even showed. Will deleting the runs folder be enough so I know there is no overlap, no stale errors, and no wasted space? Are there other things I should delete before redoing the annotations? I am doing it to detect tomatoes, by the way. Thanks a lot for your videos; they've been really helpful.
I'm working on a very similar use case: PPE detection, but with people at a distance instead of close up, and it's giving random results for helmet, vest, etc. How many labels would you recommend we annotate at the least?
Thanks so much! I've been exploring object detection models like YOLOv8 for my project, and I'm curious about their capability to recognize finer details like specific attributes. Can YOLOv8 detect attributes beyond basic objects, such as distinguishing between a 'yellow hat' and a 'yellow plastic hat', or recognizing textures like cement or tiled floors, or maybe a wooden chair in a classical style? I'm wondering if it's capable of recognizing such nuanced characteristics or if additional training or techniques are needed for this level of detail. Any insights or experiences would be greatly appreciated! Thanks!
@@TheCodingBug Hey there! I came across a GitHub discussion about implementing multi-label object detection with YOLOv8. It seems YOLOv8 isn't directly built for multi-label scenarios, but I found some pointers for modifications:
Data annotation: encode labels in a one-hot manner instead of duplicating bounding boxes.
Architecture: modify the model's architecture to output multiple labels per bounding box.
Loss function: consider concurrent softmax or other loss functions like BCE for better results.
They mentioned it requires deep learning expertise, and dataset characteristics can impact the results. Has anyone tried this or have more insights? It'd be great to learn from your experiences or any unique approaches you've taken! Thanks!
Nice tutorial. Excellent explanation. Thank you so much. Can you suggest how to save the names of the objects detected in an image or video to a txt file using YOLOv8?
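A sketch of the txt-export idea: given the detected class ids and the id-to-name mapping (in ultralytics these come from results[0].boxes.cls and model.names), write one name per line. The helper and file name here are just examples:

```python
import tempfile
from pathlib import Path

def save_detected_names(class_ids, names, out_path):
    """Write the class name of each detection to `out_path`, one per line."""
    Path(out_path).write_text("\n".join(names[int(c)] for c in class_ids) + "\n")

# Hypothetical output of one frame: two hats and a jacket (ids per a 2-class model).
names = {0: "hat", 1: "jacket"}
out = Path(tempfile.gettempdir()) / "detections.txt"
save_detected_names([0, 0, 1], names, out)
print(out.read_text())
```

For video you'd call this once per frame (or accumulate ids across frames) inside the prediction loop.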
I followed all your steps, but at the training stage I hit an issue: "No labels found in path\labels.cache", and training can't start. Deleting it and restarting doesn't help... (The labels.cache file is only created in the train folder.)
Whenever I get an assignment on object detection, I first look for your videos. They're really helpful. I am facing a problem with weights: when I train my YOLOv4-tiny I get 99.95 mAP, but when I run inference on it with OpenCV code it gives very wrong predictions. Could you please suggest a solution?
Hey! The labelImg software doesn't work anymore. Do you have a fix I could use? I tried using CVAT, but when I exported in the YOLO 1.1 format, it only gave me one xml file with all the annotations. I'd really appreciate it if you could help me fix it! Thanks for the video!
Yes. Do a pip install label-studio, then type label-studio and hit enter. Then create a new project, upload images, select the object detection template, add your classes and remove the old ones, annotate, and export in YOLO format.
Hi. Your tutorial was extremely helpful. I created a detection model for drones. However, the model has a lot of false positives and the webcam feed is not smooth. Can you please tell me how I can rectify this?
@@TheCodingBug Hi, I have a question. I have an NVIDIA Jetson Nano with me. Is it possible to do the same thing on that, so I get better processing speed and a smooth webcam feed? Can you help me with how to do this?
Hello, can we add an object trained on a custom dataset to the existing 80-class YOLO weights, making a single 80+1-class weight? Normally the YOLO weights cover 80 objects. Can we add new objects to those weights by training with custom datasets, and can we also improve the existing 80 classes that way? Thanks.
Hello, I have a question. When I open an image to check the training results, only part of the image appears, and when I go to the folder I see nothing from the training. But if I use a video, the results do show up there.
I'm facing a problem. When I run your whole project and train with the CPU, it works fine. But when I train with the GPU, the mAP is 0, even though the CUDA check passes and during training my GPU is fully utilized with full memory allocation. What am I doing wrong? Please help me.
Hi, thank you for the documentation. I have a problem with predicting images. I trained my model and tried to predict a grayscale image, but I get this error: ValueError: axes don't match array. What should I do? I need to predict grayscale images.
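A common cause of that error is feeding a single-channel (H x W) array to a model that expects 3 channels (H x W x 3). With OpenCV the usual fix is cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR) before predicting; the idea, replicated here without any libraries, is just to copy the gray value into three identical channels:

```python
def gray_to_rgb(gray):
    """Replicate a single channel (H x W, nested lists) into 3 identical
    channels (H x W x 3), so a 3-channel model accepts the image."""
    return [[[v, v, v] for v in row] for row in gray]

img = [[0, 128], [255, 64]]  # a tiny 2x2 grayscale "image"
rgb = gray_to_rgb(img)
print(len(rgb), len(rgb[0]), len(rgb[0][0]))  # → 2 2 3
```

With numpy the same thing is np.stack([gray] * 3, axis=-1); either way the model sees a normal 3-channel image.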
I'm trying to use this to differentiate dirty solar panels from clean ones, but it's difficult to do the labels as most of the images are at an angle. Do you have any tips for this? Thanks for the awesome tutorial!
I'm getting this error when trying to train the model, right at the start of the first epoch (just after the "Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size" header, at 0% of the first batch): IndexError: index 2 is out of bounds for dimension 1 with size 2. Please help.
Hello, the video is great, but the only predictions it gets right are on the validation dataset. If I feed it any real-world image, it doesn't predict anything. Any help?
Dear sir, I was finally able to integrate an IP camera with this and it's working fine. Thank you for the tutorial. Can you tell me: if I need to make a simple UI for this, with a button that starts running the detect command, how can that be done?
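One lightweight route: a tkinter button whose callback launches the detect command via subprocess. The CLI arguments below follow the ultralytics yolo command-line style (exact flag spelling may differ by version, so treat this as a sketch), and the weights path and camera URL are placeholders:

```python
import subprocess

def build_detect_command(weights, source):
    """Assemble a YOLOv8 CLI predict call as an argument list
    (flag style follows the ultralytics `yolo` CLI; verify against your version)."""
    return ["yolo", "task=detect", "mode=predict",
            f"model={weights}", f"source={source}"]

def make_ui():
    """Build a one-button window; call make_ui() to launch it."""
    import tkinter as tk  # imported here so build_detect_command works headless

    def run_detection():
        # Hypothetical paths: point these at your weights and IP camera URL.
        subprocess.Popen(build_detect_command("best.pt", "rtsp://camera-address/stream"))

    root = tk.Tk()
    root.title("YOLOv8 detector")
    tk.Button(root, text="Start detection", command=run_detection).pack(padx=20, pady=20)
    root.mainloop()

print(build_detect_command("best.pt", "video.mp4"))
```

Popen (rather than run) keeps the UI responsive while detection runs in a separate process.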
Running on Linux, everything goes well and the model trains, but after training I received multiple best.pt files in a zip instead of a single file. Can you please tell me the reason?
I'm a newbie. I want to train a model, and I don't care if it takes time, but I want to do it with the GPU in my laptop, an RTX 2060 6GB. Can it be configured to consume less VRAM without errors? Thanks in advance.
I keep getting the error OSError: SavedModel file does not exist at: TrainedPlantModel_saved_model\{saved_model.pbtxt|saved_model.pb} when trying to convert to tflite. ONNX works fine, though, for some reason.
While doing custom object detection, the class name comes out as class_0, whereas in labelImg it was labelled as "figure". Is there a particular reason for this?