While tracking, if the bounding box is lost and then the same person reappears, the tracker assigns them a new ID. How can I solve this so the ID stays the same?
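One possible approach (a minimal sketch, not the tracker's built-in method): keep the last known box of each lost track, and when the tracker hands out a fresh ID, re-link it to a recently lost track whose box overlaps enough by IoU. The function names and the 0.3 threshold here are illustrative assumptions; dedicated re-identification (appearance embeddings) is more robust.

```python
# Sketch: re-link a freshly assigned track ID to a recently lost track
# when their boxes overlap enough, so the original ID survives occlusion.
# All names and the IoU threshold are illustrative assumptions.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def remap_id(new_id, new_box, lost_tracks, iou_thresh=0.3):
    """lost_tracks: dict of old_id -> last known box. Returns the ID to use."""
    best_id, best_iou = new_id, iou_thresh
    for old_id, old_box in lost_tracks.items():
        score = iou(new_box, old_box)
        if score > best_iou:
            best_id, best_iou = old_id, score
    if best_id != new_id:
        lost_tracks.pop(best_id)  # the lost track has been re-claimed
    return best_id
```

This only works when the person reappears near where they vanished; for long occlusions or camera motion, an appearance-based re-ID model is the usual answer.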
I tried to train on my custom dataset for detecting metal cans, but I don't know why I get a problem: the training result I get is the same detection that appears in this video. I changed the dataset and downloaded it correctly from Roboflow, but the .pt file detects protective equipment, not metal cans. Do you know how to solve this?
Hi, I would like to make some modifications where the model first detects a person, then checks the PPE they are wearing. If the equipment is not fully worn, the system should capture an image of the person and save it. Could you provide a tutorial for this? Thank you!
Hello teacher, I enjoyed the video. While following your example, I had difficulty because the address (link) to the weight file no longer exists. Is there any way to solve this? I'm waiting for your reply.
Hi, I have a question: I compared ByteTrack against SORT with default parameters (two people in the video), but I didn't find any difference between them. Do you agree with me?
Your Udemy course has exceeded my expectations; I'm pretty pleased! Enrollment growth, encouraging feedback, and revenue optimization are our objectives. To further improve your course, I'm eager to design original strategies.
I am working on coding and implementing something like this for underwater footage (videos) of fish, for ecological studies. Could you make it so that each and every fish gets counted somewhat correctly? The challenge I face is figuring out how to accurately count individuals of each species when they are entering and leaving from all angles (even from the front and the back). Would appreciate your input.
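For the counting question above, one common starting point (a sketch under the assumption that a tracker such as DeepSORT or ByteTrack already assigns a persistent ID per fish): counting individuals then reduces to counting distinct track IDs per species, regardless of which direction they enter or leave from.

```python
# Sketch: count individuals per species as the number of distinct track IDs
# produced by an upstream tracker. Input format is an illustrative assumption.

def count_individuals(frames):
    """frames: iterable of per-frame lists of (track_id, species) tuples.
    Returns {species: number of distinct track IDs seen}."""
    seen = {}
    for detections in frames:
        for track_id, species in detections:
            seen.setdefault(species, set()).add(track_id)
    return {species: len(ids) for species, ids in seen.items()}
```

The hard part remains ID stability: a fish that leaves and re-enters gets a new ID and is double-counted, so this gives an upper bound unless re-identification is added.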
Hi friend, I am getting a subprocess error for protobuf. I have tried installing all the packages individually, but the error seems to persist. Can you please help me with this?
If you are using the latest version of all the libraries, here is the working code, copy and paste (note: the original video path "\videos\test.mp4" contained backslash escape sequences, which Python misinterprets; use forward slashes or a raw string):

import os
import warnings

# Suppress TensorFlow warnings
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
warnings.filterwarnings('ignore', category=FutureWarning)
warnings.filterwarnings('ignore', category=UserWarning)
warnings.filterwarnings('ignore', category=DeprecationWarning)

# Disable oneDNN optimizations
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '0'

# Suppress specific TensorFlow warning
import tensorflow as tf
tf.get_logger().setLevel('ERROR')

# Other imports
import cv2
import numpy as np
from super_gradients.training import models
import torch
import math

# Load the video file (forward slashes avoid escape-sequence bugs)
cap = cv2.VideoCapture("videos/test.mp4")

# Get frame width, height, and frame rate
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

# Set up the device for torch (CUDA if available, otherwise CPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the model with pretrained weights and move to the correct device
model = models.get(model_name="yolo_nas_s", pretrained_weights="coco").to(device)

# Initialize a counter for frames
count = 0

# Load class names from a text file
with open("coco.txt", "r") as my_file:
    classNames = my_file.read().splitlines()

# Initialize VideoWriter object
out = cv2.VideoWriter('outpy.avi', cv2.VideoWriter_fourcc(*'MJPG'), fps,
                      (frame_width, frame_height))

# Set frame skip
frame_skip = 2  # Process every 3rd frame

# Process video frames
while True:
    for _ in range(frame_skip):
        ret = cap.grab()  # Skip frames without decoding them
        if not ret:
            break

    ret, frame = cap.read()
    if not ret:
        break

    # Predict using the model
    result = model.predict(frame, conf=0.25)

    # Access predictions from the result object
    detections = result.prediction
    for i in range(len(detections.bboxes_xyxy)):
        bbox = detections.bboxes_xyxy[i]
        confidence = detections.confidence[i]
        class_id = detections.labels[i]
        x1, y1, x2, y2 = map(int, bbox)
        class_name = classNames[int(class_id)]
        conf = math.ceil(confidence * 100) / 100
        label = f"{class_name} {conf:.2f}"
        print(f"Frame Number: {count}, Class: {class_name}, "
              f"Confidence: {conf:.2f}, X: {x1}, Y: {y1}")

        # Calculate text size and draw rectangles
        t_size = cv2.getTextSize(label, 0, 1, 2)[0]
        c2 = x1 + t_size[0], y1 - t_size[1] - 3
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 255), 3)
        cv2.rectangle(frame, (x1, y1), c2, (255, 144, 30), -1, cv2.LINE_AA)  # filled
        cv2.putText(frame, label, (x1, y1 - 2), 0, 1, [225, 255, 255],
                    thickness=1, lineType=cv2.LINE_AA)

    # Write the frame to the output video
    out.write(frame)

    # Display the frame
    cv2.imshow("Frame", frame)

    # Break the loop if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

    count += frame_skip + 1  # Increment frame counter

# Release resources
out.release()
cap.release()
cv2.destroyAllWindows()
Hi, I have built a real-time web app for detection with YOLOv8 using your course. However, I have an issue with the display of the output on the HTML page: both the webcam output and the video output suffer from image clutter. Can you help me fix this?
This is the best video I have found on YouTube after two weeks of struggle; keep it up, you are a gem. I was searching for this exact thing and you made my day, thank you so much. But I am getting one problem while running inference on an image: when I put the link to my image from Drive, Colab did not show the same results as yours. Please tell me where you put that link in step 4, first line.
I want to use the Llama 3.1 70B model, but which embeddings should I choose in Pinecone? I have these options to set up:
multilingual-e5-large (1024, cosine, text)
text-embedding-3-small (1536, cosine, text)
text-embedding-3-large (3072, cosine, text)
Cohere-embed-multilingual-v3.0 (1024, cosine, text)
text-embedding-ada-002 (1536, cosine, text)
CLIP-ViT-B-32-laion2B-s34B-b79K (512, cosine, text + image)
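One point worth noting on the question above: the generation model (Llama 3.1 70B) and the embedding model are independent choices, since the LLM only ever sees retrieved text, never the vectors. What the embedding choice fixes is the Pinecone index dimension, which must match the model. A small sketch using the dimensions listed in the options (the function name is illustrative):

```python
# Sketch: the Pinecone index dimension must match the chosen embedding model,
# independent of which LLM (e.g. Llama 3.1 70B) consumes the retrieved text.
EMBEDDING_DIMS = {
    "multilingual-e5-large": 1024,
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
    "Cohere-embed-multilingual-v3.0": 1024,
    "text-embedding-ada-002": 1536,
}

def index_config(embedding_model):
    """Return the (dimension, metric) a Pinecone index needs for this model."""
    return EMBEDDING_DIMS[embedding_model], "cosine"
```

So the decision comes down to your corpus (multilingual models like multilingual-e5-large for non-English text) and cost/latency, not to the LLM.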
While running this it shows me a "no module named ultralytics.yolo" error:

Object Detection/YOLOv8-DeepSORT-Object-Tracking/ultralytics/yolo/v8/detect/predict.py", line 13, in <module>
    from ultralytics.yolo.engine.predictor import YOLO
ModuleNotFoundError: No module named 'ultralytics.yolo'