
Speed Estimation & Vehicle Tracking | Computer Vision | Open Source 

Roboflow
35K subscribers
35K views

Learn how to track and estimate the speed of vehicles using YOLO, ByteTrack, and Roboflow Inference. This comprehensive tutorial covers object detection, multi-object tracking, filtering detections, perspective transformation, speed estimation, visualization improvements, and more.
Use this knowledge to enhance traffic control systems, monitor road conditions, and gain valuable insights into vehicle behavior.
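
For readers who want to jump straight to code, here is a minimal sketch of that pipeline (detect → track → warp to road coordinates → estimate speed). It assumes Ultralytics YOLOv8 weights instead of the Roboflow Inference model used in the video, a local vehicles.mp4, and the SOURCE/TARGET calibration points discussed in the comments below; adapt all of these to your own footage.

```python
# Sketch only: detect -> track -> perspective-warp -> speed, under the assumptions above.
from collections import defaultdict, deque

import cv2
import numpy as np
import supervision as sv
from ultralytics import YOLO

SOURCE = np.array([[1252, 787], [2298, 803], [5039, 2159], [-550, 2159]], dtype=np.float32)
TARGET = np.array([[0, 0], [24, 0], [24, 249], [0, 249]], dtype=np.float32)  # 25 m x 250 m road patch

model = YOLO("yolov8x.pt")                       # assumed weights; the video uses Roboflow Inference
video_info = sv.VideoInfo.from_video_path("vehicles.mp4")
tracker = sv.ByteTrack(frame_rate=video_info.fps)
m = cv2.getPerspectiveTransform(SOURCE, TARGET)
history = defaultdict(lambda: deque(maxlen=video_info.fps))  # ~1 s of positions per track

for frame in sv.get_video_frames_generator("vehicles.mp4"):
    result = model(frame, verbose=False)[0]
    detections = sv.Detections.from_ultralytics(result)
    detections = tracker.update_with_detections(detections)
    if len(detections) == 0:
        continue

    # bottom-center points, warped from image space to metric road space
    points = detections.get_anchors_coordinates(anchor=sv.Position.BOTTOM_CENTER)
    points = cv2.perspectiveTransform(points.reshape(-1, 1, 2).astype(np.float32), m).reshape(-1, 2)

    for tracker_id, (x, y) in zip(detections.tracker_id, points):
        history[tracker_id].append(y)
        if len(history[tracker_id]) > video_info.fps / 2:
            distance = abs(history[tracker_id][-1] - history[tracker_id][0])  # meters along the road
            elapsed = len(history[tracker_id]) / video_info.fps               # seconds
            print(f"#{tracker_id}: {distance / elapsed * 3.6:.0f} km/h")
```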
Chapters:
- 00:00 Intro
- 00:36 Object Detection
- 03:43 Multi-Object Tracking
- 05:11 Filtering Detections with Polygon Zone
- 06:39 Math Behind Perspective Transformation
- 14:35 Perspective Transformation in Code
- 16:46 Math Behind Speed Estimation
- 18:42 Speed Estimation in Code
- 21:29 Visualization Improvements
- 22:45 Final Results
Resources:
- Roboflow: roboflow.com
- 💻 Speed Estimation Open-Source Code: github.com/roboflow/supervisi...
- 📚 "How to Estimate Speed with Computer Vision" blog.roboflow.com/estimate-sp...
- 📓Colab Notebook: colab.research.google.com/git...
- ⭐ Supervision GitHub: github.com/roboflow/supervision
- ⭐ Inference GitHub: github.com/roboflow/inference
- 📚 “How to Track Objects” Supervision Docs: supervision.roboflow.com/how_...
- 📚 “Annotators” Supervision Docs: supervision.roboflow.com/anno...
- 🎬 “Track & Count Objects using YOLOv8 ByteTrack & Supervision” YouTube video: • Track & Count Objects ...
- 🎬 “Traffic Analysis with YOLOv8 and ByteTrack - Vehicle Detection and Tracking” YouTube video: • Traffic Analysis with ...
Remember to like, comment, and subscribe for more content on AI, computer vision, and the latest technological breakthroughs! 🚀
Stay updated with the projects I'm working on at github.com/roboflow and github.com/SkalskiP! ⭐

Science

Published: 5 Jul 2024

Comments: 179
@iam_kilicaslan
@iam_kilicaslan 5 months ago
As a mathematician, I find your analytical geometry skills admirable. I've been following your work on image processing applications closely and find it amazing. Keep it up, Piotr.
@Roboflow
@Roboflow 5 months ago
I plan to include more of those whiteboard explanations in future videos. I’m just a bit scared that some people will get bored of me talking and drawing and just skip to the next section.
@iam_kilicaslan
@iam_kilicaslan 5 months ago
@@Roboflow It is very important to know the theoretical part of the project, especially the theory behind the code. Those who skip to the next part can only advance one step at most; even if they reach the second step, they will not be successful. My personal opinion is to continue in the direction you have planned. Congratulations again.
@atomix_2402
@atomix_2402 4 months ago
@@Roboflow We need more of the whiteboard explanations, man, and possibly more detailed ones, or you could suggest some prerequisites for understanding the concepts. Those who want to be successful would love to watch them.
@bigflakes6699
@bigflakes6699 2 months ago
@@Roboflow Hi, any ideas on how the coordinates of the region of interest were computed?
@minasamir6232
@minasamir6232 5 months ago
Great work! This is amazing! Thank you!
@patricksimo9045
@patricksimo9045 5 months ago
Thank you for your efforts. The video is perfect and very well explained. Great work!
@Roboflow
@Roboflow 5 months ago
Thank you! Awesome to hear people notice the effort.
@the_vheed1319
@the_vheed1319 4 months ago
Thank you so much for this video. It greatly simplified the entire speed estimation process.
@Roboflow
@Roboflow 4 months ago
Thank you!
@smccrode
@smccrode 5 months ago
This is amazing! Thank you! Been wanting to do this for years. Now I’m going to do it!
@Roboflow
@Roboflow 5 months ago
Glad you like it! Let me know how it goes!
@alirezaee
@alirezaee 23 days ago
Great work, thank you for sharing!
@amirsv6014
@amirsv6014 5 months ago
Crazy how object detection is just getting better and better!
@Roboflow
@Roboflow 5 months ago
That’s right. I’m waiting for zero-shot detectors to become so good that we won't need to train models anymore.
@HannaWojciechowska-Biszko
@HannaWojciechowska-Biszko 1 month ago
Thank you for the instructions! :)
@theoldknowledge6778
@theoldknowledge6778 3 months ago
These application videos are amazing!!
@Roboflow
@Roboflow 3 months ago
Thanks a lot!
@Studio-gs7ye
@Studio-gs7ye 5 months ago
That is a unique type of tutorial, unlike anything I have seen so far. Thanks for such good content.
@Roboflow
@Roboflow 5 months ago
We plan to make more of those longer videos this year. :)
@Miinuuuuu
@Miinuuuuu 2 months ago
Does he provide the complete project with code? Please tell me, I want to use it in my college project.
@tjoec90
@tjoec90 5 months ago
Amazing tutorial. Learnt something new today. Thanks a lot.
@Roboflow
@Roboflow 5 months ago
I absolutely love to hear that!
@rluijk
@rluijk 5 months ago
Great! Thanks for your clear explanations, showing what is possible. Very inspiring. Subscribed, so I hope to see more creative tracking concepts explained.
@Roboflow
@Roboflow 5 months ago
We will probably release a video on time in zone next :) You can keep track of what I’m doing here: twitter.com/skalskip92
@rluijk
@rluijk 5 months ago
I keep thinking about tracking ants; we might discover a lot of interesting things. @@Roboflow
@hoangng16
@hoangng16 5 months ago
This is great; I've wanted to do this for a long time.
@Roboflow
@Roboflow 5 months ago
Now we can do it together haha
@blessingagyeikyem9849
@blessingagyeikyem9849 5 months ago
Supervision is super useful. I have been using it in my computer vision workflow, and I now prefer it over OpenCV. Keep up the good work, Piotr.
@Roboflow
@Roboflow 5 months ago
This is probably the biggest compliment I could get!
@minhnguyenquocnhat3796
@minhnguyenquocnhat3796 3 months ago
Thank you so much for this tutorial. Your instruction is great.
@Roboflow
@Roboflow 2 months ago
Thanks a loooot!
@cappittall
@cappittall 5 months ago
Thanks Peter, that is a great tutorial. :)
@Roboflow
@Roboflow 5 months ago
Thanks a lot!
@william-faria
@william-faria 5 months ago
That's great! Thank you, bro!
@Roboflow
@Roboflow 5 months ago
My pleasure!
@Oliver_Lam
@Oliver_Lam 5 months ago
Thank you so much!
@luisescares
@luisescares 5 months ago
Congratulations on this video, greetings from Santiago!
@Roboflow
@Roboflow 5 months ago
Thanks a lot! Greetings from Poland!
@g.s.3389
@g.s.3389 5 months ago
Very well done!
@DilipKumar-jm3ly
@DilipKumar-jm3ly 2 months ago
You are making videos on the latest technology in the field of CV; it's interesting and full of knowledge. Please continue like that. Thank you!
@Bassel48
@Bassel48 5 months ago
Thanks for the video. It is not clear to me how you calculated the points C and D outside the image boundaries. I understand the y-axis value, but how about the x value? How is it calculated?
@kimridaaa1298
@kimridaaa1298 2 days ago
Thank you, bro! Thank you so much, bro!
@elhadjikarawthiam4595
@elhadjikarawthiam4595 5 months ago
Thank you very much for sharing, it’s really interesting. I would like support for my project on congestion analysis, up to measuring the length of traffic jams.
@asilbekrahimjonov7475
@asilbekrahimjonov7475 5 months ago
To get more accurate speed, can we measure distance using camera calibration parameters?
@LukasSmith827
@LukasSmith827 5 months ago
Very nice
@6Scarfy99
@6Scarfy99 5 months ago
One of the best channels... I love you, Piotr
@Roboflow
@Roboflow 5 months ago
Thanks a lot! Stay tuned for the next video. Time in zone is coming soon.
@joelbhaskarnadar7391
@joelbhaskarnadar7391 5 months ago
Interesting 👍🏿
@alexanderfritsch6612
@alexanderfritsch6612 2 months ago
Good work! Keep it poppin' :)
@ceo-s
@ceo-s 2 months ago
Very cool video! Btw, which drawing app do you use?
@elviskiilu3977
@elviskiilu3977 5 months ago
Hey, is it possible to integrate these models with a database, i.e. store the detected vehicle speed?
@ahmadmohammadi2396
@ahmadmohammadi2396 4 months ago
Simply excellent
@Roboflow
@Roboflow 4 months ago
Thanks a lot!
@tobieabel7474
@tobieabel7474 5 months ago
Another great video, Piotr! I am currently working on a project using Supervision to track the speed of hand movements as part of a hand gesture recognition system, and your tutorials are really timely. I'm detecting the hands, performing some minor perspective transformation as you do here, tracking their movements within certain zones, and calculating their speed over several frames to determine the specific gesture. One issue I'm noticing is that ByteTrack has a tendency to lose detections even within a small area, and I was wondering if you have any tips for improving tracking performance other than playing with the ByteTrack parameters?
@Roboflow
@Roboflow 5 months ago
ByteTrack uses IoU to match boxes between frames, so if your hand moves fast you can lose tracking.
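
A hedged sketch of that tuning: a longer lost-track buffer and a looser matching threshold help ByteTrack keep IDs alive when boxes barely overlap between frames. The parameter names below follow recent supervision releases (around 0.19) and are an assumption; check help(sv.ByteTrack) for the names in your installed version, and note that running detection on every frame matters just as much.

```python
import supervision as sv

# Assumed parameter names (supervision ~0.19); older releases used
# track_thresh / track_buffer / match_thresh instead.
tracker = sv.ByteTrack(
    track_activation_threshold=0.25,  # let lower-confidence detections join tracks
    lost_track_buffer=60,             # keep lost tracks alive ~2 s at 30 fps
    minimum_matching_threshold=0.6,   # accept smaller IoU between consecutive boxes
    frame_rate=30,                    # match your video's FPS
)
```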
@Santiagobgb18O
@Santiagobgb18O 4 months ago
Your explanations have been incredibly helpful. Thank you, sir! I'm currently working on a project where I apply similar tools to estimate the velocity of tennis players. However, I've encountered a challenge: the players often have part of their bodies outside the designated court polygon, which complicates the tracking. Is it possible to define multiple polygons to capture the full range of their movements, or do you have any recommendations for this scenario? Thank you once again for your valuable contribution to the community!
@ps-dn7ce
@ps-dn7ce 5 months ago
Amazing!
@jeffcampsall5435
@jeffcampsall5435 10 days ago
There needs to be a correction factor along the path… it’s like drawing the globe on a flat piece of paper. If you watch cars driving away on the right side, their speed is 140 kph and “reduces” to 133 kph, which is very unlikely. I know the trapezoid can be limited to those vehicles closest to the camera, but I thought you might like to tweak your algorithm. 👍
@Roboflow
@Roboflow 9 days ago
Sure 👍🏻 The whole algorithm is a bit of a simplification, as we only have 4 points. If the road is not perfectly flat and straight, some deviations may occur. Still, I think the complexity/accuracy tradeoff is okay.
@mayurmali2715
@mayurmali2715 4 months ago
Guys, any ideas on what new features we can add to this?
@user-vv8my4lj9i
@user-vv8my4lj9i 4 months ago
Is there a way to count the time an object spends in the zone?
@fredericocaixeta9015
@fredericocaixeta9015 18 days ago
Hello, Piotr Skalski! Hello everyone... I am diving a little into the code here... 😁 Quick question - how do I add an image into a detection box from Supervision? Thanks
@kirtankalaria7239
@kirtankalaria7239 5 months ago
There's some cool stuff I reckon you can do with the DeepSense 6G dataset.
@GenieCivilNumerise
@GenieCivilNumerise 2 months ago
Thank you so much. Can you do the same application with YOLOv9 for me?
@mileseverett
@mileseverett 5 months ago
Great tutorial. Do you think you could make a video that covers implementing re-identification for multiple cameras? There is a real lack of tutorials on this topic, now that you have covered tracking so well.
@SkalskiP
@SkalskiP 5 months ago
Hi! It's Piotr from the video here. I'd love to make it. I just don't have data that I could use to make it :/
@mileseverett
@mileseverett 5 months ago
@@SkalskiP What kind of data do you need? I might be able to help.
@Roboflow
@Roboflow 5 months ago
Two or more videos looking at the same area from different perspectives at the same time, so we could use them as an example in the video.
@AlainPilon
@AlainPilon 5 months ago
@@Roboflow Should the cameras be looking at the exact same area from different angles? Or could we have one camera watching one street corner and the other looking at the next intersection? I too would be interested in such a tutorial.
@JoshPeak
@JoshPeak 5 months ago
Absolutely crazy idea here… could you simulate re-identification with multiple cameras looking at a Hot Wheels or slot-car track? Like a scaled-down simulation?
@onyekaokonji28
@onyekaokonji28 5 months ago
Great job as usual @Piotr. Is there a way to automate the generation of points A, B, C, D? I believe the current implementation requires one to use a mouse to hover over the 4 points to get their coordinates, which won't be feasible in production.
@Roboflow
@Roboflow 5 months ago
There is no way to reliably automate this. But you only need to do it once for each camera, so you can save the configuration in JSON and load it.
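
A minimal sketch of that calibrate-once workflow: store each camera's four SOURCE points (plus the real-world zone size) in a JSON file and load them at startup. The file name and schema here are illustrative assumptions.

```python
import json

import numpy as np

CONFIG_PATH = "camera_calibration.json"  # assumed file name

def save_zone(camera_id: str, source_points: np.ndarray, width_m: float, height_m: float) -> None:
    # load existing config if present, then upsert this camera's entry
    try:
        with open(CONFIG_PATH) as f:
            config = json.load(f)
    except FileNotFoundError:
        config = {}
    config[camera_id] = {
        "source": source_points.tolist(),
        "target_width_m": width_m,
        "target_height_m": height_m,
    }
    with open(CONFIG_PATH, "w") as f:
        json.dump(config, f, indent=2)

def load_zone(camera_id: str) -> tuple[np.ndarray, float, float]:
    with open(CONFIG_PATH) as f:
        entry = json.load(f)[camera_id]
    return np.array(entry["source"]), entry["target_width_m"], entry["target_height_m"]

# usage: calibrate once with the mouse / polygonzone tool, then reuse forever
save_zone("highway-cam-01", np.array([[1252, 787], [2298, 803], [5039, 2159], [-550, 2159]]), 25, 250)
SOURCE, WIDTH_M, HEIGHT_M = load_zone("highway-cam-01")
```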
@rupeshrathod6588
@rupeshrathod6588 5 months ago
Roboflow has an issue at augmentation time: the annotations don't follow the augmentation in the case of instance segmentation, which is a big issue. I hope it will be resolved soon!!
@NicholasRessi
@NicholasRessi 3 months ago
Amazing work! Does anyone know how to estimate/predict distance in a 2D image? I assume the 250 m length and 25 m width of the road were discovered by doing online research. I wonder if there is an algorithm or method that would allow one to estimate distance in a 2D image.
@Roboflow
@Roboflow 3 months ago
Do you mean without passing any information? Fully automatically?
@cliqshorts
@cliqshorts 5 months ago
Nice. Could you please share a YouTube video link on how to run this notebook on AWS SageMaker Studio?
@Roboflow
@Roboflow 5 months ago
Did you face any issues trying to run it on AWS?
@JellosKanellos
@JellosKanellos 5 months ago
Thanks a lot for the awesome video, Piotr! One thing I always wonder about applying YOLOv8 object detection to video: it seems kind of naive to handle every successive frame as a separate image. What I mean by that is, can't we be smarter about taking information from the previous frame(s) into the inference of the current frame? For example: if there was a car detected somewhere in the camera image, it must be somewhere near that position in the next. What are your thoughts on that?
@Roboflow
@Roboflow 5 months ago
Hi! It depends what you do. There are some systems, like parking occupancy, where you can easily get away with running inference every 1 second or even less frequently, and just assume all cars are parked in the same places. Here the cars are moving, and that movement is particularly interesting for us. We are using ByteTrack. This tracker uses only box position and overlap to match objects. If you do not run inference sufficiently often, there will be no overlap between frames, and you lose the track.
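
A small sketch of that tradeoff: for near-static scenes you can run the detector every Nth frame and reuse the last result, while ByteTrack needs per-frame detections so boxes from consecutive frames still overlap. The model choice and file name are illustrative assumptions.

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # illustrative model choice
DETECT_EVERY = 30           # ~once per second at 30 fps
last_detections = sv.Detections.empty()

for i, frame in enumerate(sv.get_video_frames_generator("parking.mp4")):  # assumed file
    if i % DETECT_EVERY == 0:
        last_detections = sv.Detections.from_ultralytics(model(frame, verbose=False)[0])
    # last_detections is reused between detector runs: fine for parked cars,
    # useless for fast-moving vehicles, where the tracker needs fresh boxes every frame.
```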
@ilamathimanivannan8315
@ilamathimanivannan8315 1 month ago
Can you please explain how you determined the coordinates of ABCD ([1252, 787], [2298, 803], [5039, 2159], [-550, 2159])?
@crazyKurious
@crazyKurious 5 months ago
Piotr, great video. Can you provide instructions on how to make it real-time?
@Roboflow
@Roboflow 5 months ago
Any specific problems you face when you try to run it in real time?
@adarshraj3208
@adarshraj3208 1 month ago
Hey, I am facing an error at the "calculate_dynamic_line_thickness" part. I read in the documentation that it has been changed to "calculate_optimal_line_thickness", but even after doing so I am getting the same error. What should I do now? thickness = calculate_dynamic_line_thickness( resolution_wh=video_info.resolution_wh )
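
One way to cope with that rename, sketched below: resolve whichever helper name your installed supervision version exposes and call that. If neither name exists, upgrading supervision is the safer fix; the video path is just an example.

```python
import supervision as sv

# The helper was renamed from calculate_dynamic_line_thickness to
# calculate_optimal_line_thickness; pick whichever exists in the installed version.
calc_thickness = getattr(
    sv,
    "calculate_optimal_line_thickness",
    getattr(sv, "calculate_dynamic_line_thickness", None),
)
if calc_thickness is None:
    raise RuntimeError("No line-thickness helper found; upgrade supervision")

video_info = sv.VideoInfo.from_video_path("vehicles.mp4")  # example path
thickness = calc_thickness(resolution_wh=video_info.resolution_wh)
```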
@pingyang8963
@pingyang8963 5 months ago
Awesome presentation! Thanks for sharing. One question: since speed is detected, is there a way to get the distance to the camera instead of speed?
@Roboflow
@Roboflow 5 months ago
Well, we would need to know the distance from the camera to some reference point.
@pingyang8963
@pingyang8963 5 months ago
@@Roboflow For the reference point, would that be possible using 2 cameras (with a known distance between them), creating a fused map from the two cameras, and getting the distance and speed?
@lindseylombardi2910
@lindseylombardi2910 1 month ago
Where do I add the configurations for both "vehicles.mp4" and "vehicles-result.mp4" in the ultralytics script? I see that the ultralytics example lists "--source_video_path" and "--target_video-path", but does not specifically include "vehicles.mp4" or "vehicles-result.mp4".
@Roboflow
@Roboflow 1 month ago
Take a look here: github.com/roboflow/supervision/tree/develop/examples/speed_estimation Example commands are in the README.
@vitormatheus8112
@vitormatheus8112 5 months ago
This video is without a doubt one of the best I've seen, thank you very much. I would like to know if it is possible to calculate the distance of an object from the camera?
@Roboflow
@Roboflow 5 months ago
Thanks a lot! Such a big compliment. Unfortunately not; we would need some reference distance from the camera to some point.
@matthiasjunker8685
@matthiasjunker8685 5 months ago
Cool video
@Roboflow
@Roboflow 5 months ago
Thanks a lot! I spent a lot of time making it.
@user-yw6wf3uu1o
@user-yw6wf3uu1o 5 months ago
Is an RTSP source also supported through supervision? Or do you have a plan for it?
@Roboflow
@Roboflow 5 months ago
Not yet, but we have a plan to do it. You can combine supervision with OpenCV to do it even now.
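
A minimal sketch of that OpenCV + supervision combination for an RTSP stream; the RTSP URL and the Ultralytics model are illustrative assumptions.

```python
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # assumed model; swap in your own
tracker = sv.ByteTrack()
box_annotator = sv.BoundingBoxAnnotator()

cap = cv2.VideoCapture("rtsp://user:pass@192.168.1.10:554/stream1")  # assumed URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped; reconnect logic would go here
    detections = sv.Detections.from_ultralytics(model(frame, verbose=False)[0])
    detections = tracker.update_with_detections(detections)
    annotated = box_annotator.annotate(scene=frame.copy(), detections=detections)
    cv2.imshow("rtsp", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```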
@akaashraj8796
@akaashraj8796 5 months ago
Is there a way to detect an object's speed while the camera that's capturing the video is in motion?
@Roboflow
@Roboflow 5 months ago
I’m afraid not.
@deaangeliakamil7453
@deaangeliakamil7453 2 months ago
Hello, I am facing some issues when I use my own video. When no vehicle is shown in the video, the trace_annotator and label_annotator throw errors. For trace_annotator it says "IndexError: index 0 is out of bounds for axis 0 with size 0", and for label_annotator it says "ValueError: The number of labels provided (1) does not match the number of detections (3). Each detection should have a corresponding label. This discrepancy can occur if the labels and detections are not aligned or if an incorrect number of labels has been provided. Please ensure that the labels array has the same length as the Detections object." I hope you can help solve this error, thank you.
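
A hedged sketch of the guard that usually resolves both errors: skip annotation on empty frames and build exactly one label per detection, so the label count always matches the Detections length. The speeds dictionary and helper name are illustrative assumptions.

```python
import supervision as sv

def annotate(frame, detections: sv.Detections, trace_annotator, label_annotator, speeds: dict):
    annotated = frame.copy()
    if len(detections) == 0:
        return annotated  # nothing to draw on frames with no vehicles

    # one label per detection; fall back to the track ID when no speed is known yet
    labels = [
        f"#{tid} {speeds[tid]:.0f} km/h" if tid in speeds else f"#{tid}"
        for tid in detections.tracker_id  # assumes detections came from the tracker
    ]
    annotated = trace_annotator.annotate(scene=annotated, detections=detections)
    annotated = label_annotator.annotate(scene=annotated, detections=detections, labels=labels)
    return annotated
```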
@TheAIJokes
@TheAIJokes 5 months ago
Hi sir, you are a wonderful instructor; I have watched almost all of your videos. Can you please show us a way to train a car number plate detection model? That would be a great help. Also, I would like to know: if I fine-tune a YOLO model, will it forget all its previous training?
@Roboflow
@Roboflow 5 months ago
License plate OCR is on my TODO list. As for fine-tuning: if you start from a COCO-pretrained model and then fine-tune it on a dataset with custom classes, it will detect the custom classes. If you want to preserve the previous knowledge, you would need to train the model on a dataset that is a combination of your classes and the COCO classes.
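
A minimal sketch of that fine-tuning point using the Ultralytics API (an assumption; the video itself only uses COCO-pretrained weights). The dataset YAML name is hypothetical.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from COCO-pretrained weights
# After this run the model predicts only the classes listed in custom_data.yaml;
# to keep the COCO classes too, the dataset referenced here must include them as well.
model.train(data="custom_data.yaml", epochs=50, imgsz=640)
```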
@TheAIJokes
@TheAIJokes 5 months ago
@Roboflow Thanks for your reply... looking forward to it... hope you will make it soon.
@m.hassanmaqsood6642
@m.hassanmaqsood6642 7 days ago
I am facing an issue when I try this notebook:
AttributeError Traceback (most recent call last)
# annotators configuration
---> thickness = sv.calculate_dynamic_line_thickness(
         resolution_wh=video_info.resolution_wh
     )
AttributeError: module 'supervision' has no attribute 'calculate_dynamic_line_thickness'
@SilenceOnPS4
@SilenceOnPS4 3 months ago
I am new to this; however, I am thinking of trialing the public plan, then purchasing the starter subscription to start a side project. For this specific project, how much would it cost to keep it running 24 hours a day? Also, can you give me an estimated cost if this were to be scaled up to 1000 cameras? I am only looking for an idea of the cost to run such a programme on your typical camera over a motorway (like the one in this example). I am assuming it would go through Roboflow, but I could be wrong. I am looking for the easiest option. Many thanks.
@Roboflow
@Roboflow 3 months ago
Easy is a bit relative, depending on your skillset and hardware. Here are a few ways to think about it: you can deploy with the hosted API. This requires devices with an internet connection. You'd then be able to choose at what rate you hit the API for predictions, and that would impact pricing. 24/7 at 1 prediction per second is 86,400 API calls per day, or ~32 million per year for each location. 1,000 cameras means ~32 billion per year. You could reduce the rate of predictions to bring down API calls, but then you won't have a real-time system, if that is what you need. Alternatively, you can deploy your models onto edge devices using Roboflow Inference and do the same operation but use your own compute. In either scenario, this level of usage requires a conversation with our Sales team to offer you Enterprise pricing: roboflow.com/sales
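
The back-of-the-envelope numbers from that reply, written out (the rate and camera count are the reply's own example figures):

```python
calls_per_day = 1 * 60 * 60 * 24        # 1 prediction/second -> 86,400 calls/day
calls_per_year = calls_per_day * 365    # ~31.5 million calls/year per camera
fleet_per_year = calls_per_year * 1000  # ~31.5 billion calls/year for 1,000 cameras
print(f"{calls_per_day:,} / day, {calls_per_year:,} / year, {fleet_per_year:,} / year for the fleet")
```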
@SilenceOnPS4
@SilenceOnPS4 3 months ago
@@Roboflow Thank you for your prompt reply. I will get in touch shortly.
@PhạmNguyễnHoàngAnh-anhpnh
@PhạmNguyễnHoàngAnh-anhpnh 5 months ago
How do I use ViewTransformer for an image with 1920x1080 resolution? I get "'NoneType' object has no attribute 'reshape'" with 1920x1080 resolution.
@Roboflow
@Roboflow 5 months ago
Could you create an issue and describe your problem here: github.com/roboflow/supervision/issues?
@adlernunez
@adlernunez 1 month ago
How do I run the whole code in VS Code?
@JIACHENWONG
@JIACHENWONG 2 months ago
May I know which version of supervision I need to install in my PyCharm environment?
@Roboflow
@Roboflow 2 months ago
0.19.0 would be the best
@ben4571
@ben4571 4 months ago
Would this work on a Raspberry Pi 5 taking in a live camera feed, do you think?
@vlasov01
@vlasov01 29 days ago
I've used the YOLOv8n model on an RPi4. It can only process 1 frame in close to 2 seconds using one core. The RPi5 is faster. It depends on your target FPS/precision requirements.
@thisistaha6366
@thisistaha6366 1 month ago
How can I run this in real time, that is, how can I feed the image from a camera into this at the same time? Please help me.
@hamachoang5561
@hamachoang5561 2 months ago
"SupervisionWarnings: BoxAnnotator is deprecated: `BoxAnnotator` is deprecated and will be removed in `supervision-0.22.0`. Use `BoundingBoxAnnotator` and `LabelAnnotator` instead" I have installed CUDA and cuDNN, but why does this happen? Can you help me, please?
@DavidAkinwande
@DavidAkinwande 5 months ago
Thank you for such free education! Please, where did you learn supervision? Edit: I learnt that you're the creator of supervision.
@mileseverett
@mileseverett 5 months ago
He created it
@DavidAkinwande
@DavidAkinwande 5 months ago
Oooooohhh! No wonder @@mileseverett
@SkalskiP
@SkalskiP 5 months ago
Haha yup! I created it. Or rather, I still create it every day. I hope you find it useful ;)
@DavidAkinwande
@DavidAkinwande 5 months ago
I am really grateful for your creation and videos. I use it where I work; it makes life so much easier @@SkalskiP
@danialkhan2910
@danialkhan2910 5 months ago
Hi, I had a question! Firstly, amazing tutorial! It was a simple explanation of a really useful tool! I want to use this tool myself, so my question is: will I be able to run this on Windows, or is this specific to Linux? Thanks to anyone for the help!
@Roboflow
@Roboflow 5 months ago
I think we will release a Colab notebook to help users like you.
@danialkhan2910
@danialkhan2910 5 months ago
@@Roboflow That would be great! Thanks!
@XoyTech
@XoyTech 5 months ago
It would be of great help if you could publish a requirements.txt file with the versions of the libraries that you use to make the examples, since newbies like me have a hard time finding the correct versions so that everything works correctly, starting from the Python version and then all the other libraries. Thank you.
@Roboflow
@Roboflow 5 months ago
So you would like me to update this requirements.txt and include versions? github.com/roboflow/supervision/tree/develop/examples/speed_estimation
@iraadit
@iraadit 5 months ago
@@Roboflow Yes, it should always include versions, to be sure we can still execute the code later (when a new version is out and maybe not compatible).
@SWARO5
@SWARO5 3 months ago
Great video... just a tiny issue: when I ran the code, the line annotator was not taking trucks into account. Can you help me with that?
@Roboflow
@Roboflow 2 months ago
Do you mean that the truck was not detected, or not counted?
@SWARO5
@SWARO5 2 months ago
Not counted
@abhinandang6675
@abhinandang6675 3 months ago
I have a question: will it work on a low-end device or PC in real time? Processing will take more time, which means the calculated speed will be less than the actual speed. How do we tackle that? If you know, please share the solution. By the way, nice perspective calculation.
@Roboflow
@Roboflow 3 months ago
This is such a good question. I’m working on a new video covering time calculation. I will answer this question soon!
@Jokopie-wv3zp
@Jokopie-wv3zp 3 months ago
Can anyone help me run this code? :((( I don't know how to use PyCharm.
@elianabboud8721
@elianabboud8721 3 months ago
Hello, I have run vehicle tracking and counting as well as speed estimation and both work, but I want code that combines both. Do you have it?
@Roboflow
@Roboflow 3 months ago
We created a different tutorial where we show how to count objects crossing the line: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-OS5qI9YBkfk.htmlsi=O4f26Cs3KnGGFBMC. Here is the code: colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-track-and-count-vehicles-with-yolov8.ipynb.
@elianabboud8721
@elianabboud8721 3 months ago
I understand, but I mean the combination of counting objects crossing the line and speed estimation in one output. Best regards 😄
@sanchaythalnerkar9736
@sanchaythalnerkar9736 5 months ago
I am planning to take a workshop on supervision in my college
@Roboflow
@Roboflow 5 months ago
Is there a workshop on supervision in your college?
@Scott-lin
@Scott-lin 5 months ago
Hi, I used my own video to run the Speed Estimation open-source code, but I ran into a small problem. Could you help me? Issue: AttributeError: 'NoneType' object has no attribute 'reshape'
@Roboflow
@Roboflow 5 months ago
Could you create an issue here: github.com/roboflow/supervision/issues and give us a bit more detail?
@Scott-lin
@Scott-lin 5 months ago
@@Roboflow OK, thank you. I created an issue.
@mrmacman04
@mrmacman04 3 months ago
I followed this tutorial beginning to end on my laptop (Intel i9 MacBook Pro). It worked great, but was slow because it's not running on a GPU. Instead of 'yolov8x-640' I used 'yolov8n-640', which ran faster since the model is smaller. Is there any way to make these models run more efficiently on CPU?
@Roboflow
@Roboflow 3 months ago
It is possible to run faster on MacBooks, but with the M1.
@mrmacman04
@mrmacman04 3 months ago
@@Roboflow I see. So on an Intel Mac, is there any option to speed up inference with OpenVINO? I imagine so, but it would be good to see how to do it within a tutorial like this one.
@mrmacman04
@mrmacman04 2 months ago
@@Roboflow I just got an M3 MacBook Pro. I'm seeing the same performance as I saw on the Intel Mac. I'm wondering if we only see good performance with these Roboflow tools (models, Inference pkg, Supervision pkg) when using GPUs?
@user-yw6wf3uu1o
@user-yw6wf3uu1o 5 months ago
10:17 Here you find the coordinates for A. Is this making an assumption? Or did you find them through mouse events?
@user-yw6wf3uu1o
@user-yw6wf3uu1o 5 months ago
SOURCE = np.array([[1252, 787], [2298, 803], [5039, 2159], [-550, 2159]]) What I'm curious about here is: why are the y-coordinates 787 and 803 different? Shouldn't they be aligned? And I don't know how -550 was derived.
@Roboflow
@Roboflow 5 months ago
A and B are easy. You can get them through mouse events, for example. You can also do it with this tool: roboflow.github.io/polygonzone
@Roboflow
@Roboflow 5 months ago
As for C and D, I made the assumption that their y coordinate is aligned with the bottom edge of the frame. Then I used the A and B points and that known y to figure out the x coordinates.
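
A small sketch of that extrapolation: assume C and D lie on the bottom edge of the frame (y = 2159) and extend each road edge down to that y. The second edge points below are illustrative assumptions chosen so the result roughly reproduces the SOURCE values quoted above; in practice you read them off the frame the same way as A and B.

```python
import numpy as np

def extend_to_y(p1: np.ndarray, p2: np.ndarray, y: float) -> np.ndarray:
    """Return the point on the line through p1 and p2 that has the given y."""
    direction = p2 - p1
    t = (y - p1[1]) / direction[1]
    return p1 + t * direction

BOTTOM_Y = 2159  # assumed: C and D sit on the bottom edge of the frame

A = np.array([1252, 787])                   # top-left corner of the zone
left_edge_point = np.array([316, 1500])     # assumed second point on the left road edge
D = extend_to_y(A, left_edge_point, BOTTOM_Y)   # ~[-550, 2159]

B = np.array([2298, 803])                   # top-right corner of the zone
right_edge_point = np.array([3707, 1500])   # assumed second point on the right road edge
C = extend_to_y(B, right_edge_point, BOTTOM_Y)  # ~[5039, 2159]

print(D, C)
```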
@jpsst9
@jpsst9 5 months ago
@11:41 For the target you say 0-24 and 0-249; your target is now 24 m wide and 249 m long. Are you sure you need to subtract 1? Not 0-25 width and 0-250 length?
@Roboflow
@Roboflow 5 months ago
No :) Let me explain. The target will end up as a 25 x 250 pixel image, and its pixels are numbered from 0 to 24 (and 0 to 249). So I still have 25 pixels.
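
Written out in code, that indexing looks like the sketch below, following the 25 m × 250 m figures from the video; treat it as a sketch rather than the exact script.

```python
import cv2
import numpy as np

TARGET_WIDTH = 25    # road width in meters (one unit per meter after warping)
TARGET_HEIGHT = 250  # observed road length in meters

# A 25 x 250 image still has 25 x 250 pixels; valid coordinates just run
# from 0 to width-1 and 0 to height-1, hence the "- 1" on the far corners.
TARGET = np.array([
    [0, 0],                                 # A' - top-left
    [TARGET_WIDTH - 1, 0],                  # B' - top-right
    [TARGET_WIDTH - 1, TARGET_HEIGHT - 1],  # C' - bottom-right
    [0, TARGET_HEIGHT - 1],                 # D' - bottom-left
], dtype=np.float32)

SOURCE = np.array([[1252, 787], [2298, 803], [5039, 2159], [-550, 2159]], dtype=np.float32)
M = cv2.getPerspectiveTransform(SOURCE, TARGET)  # image plane -> metric road plane
```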
@11aniketkumar
@11aniketkumar 16 days ago
I keep VS Code on half the screen and the other half is for YouTube, but your code is not properly visible; it's too small to copy from the video. Also, I don't want GitHub links to supervision and inference but a direct link to the script file that you used in this video.
@Roboflow
@Roboflow 9 days ago
github.com/roboflow/supervision/tree/develop/examples/speed_estimation
@adityarahalkar3150
@adityarahalkar3150 5 months ago
Great video!
@Roboflow
@Roboflow 5 months ago
Thanks a lot!
@hammadyounas2688
@hammadyounas2688 3 months ago
I am facing an issue with the perspective transformation for my video. Can you help me with that?
@Roboflow
@Roboflow 3 months ago
What’s the problem?
@hammadyounas2688
@hammadyounas2688 3 months ago
@@Roboflow The main issue is that my box is not correctly generated; I am facing an issue with these values: [1252, 787], [2298, 803], [5039, 2159], [-550, 2159].
@Roboflow
@Roboflow 3 months ago
@@hammadyounas2688 Please ask your question here: github.com/roboflow/supervision/discussions. We will try to help you.
@hammadyounas2688
@hammadyounas2688 3 months ago
@@Roboflow Okay.
@surajpatra6779
@surajpatra6779 5 months ago
Sir, please make a tutorial on how to deploy any kind of computer vision project for free.
@Roboflow
@Roboflow 5 months ago
Where would you like to deploy it?
@surajpatra6779
@surajpatra6779 5 months ago
@@Roboflow Sir, anywhere except paid cloud platforms like AWS, Heroku, etc.
@afriquemodel2375
@afriquemodel2375 5 months ago
I tried to train a custom transformer-based object detection model in Google Colab, but when I used your tip it did not work.
@Roboflow
@Roboflow 5 months ago
Hi. I’m not really sure what you are talking about. Could you be more specific?
@circulartext
@circulartext 5 months ago
Hey my brother, is there a way to set up your Python app on a Raspberry Pi?
@Roboflow
@Roboflow 5 months ago
Yup. But it will be slow… probably 1-5 fps.
@circulartext
@circulartext 5 months ago
@@Roboflow Do you think it would be good at identifying something from a good distance?
@HS0
@HS0 5 months ago
Can we do this in real time?
@Roboflow
@Roboflow 5 months ago
We can!
@HS0
@HS0 5 months ago
Can you publish the source code to implement this project in real time?
@Roboflow
@Roboflow 5 months ago
The source code is published on GitHub. The link is in the description of the video.
@deep_singh01
@deep_singh01 5 months ago
Can you please teach step by step? You should start from the beginning, like telling us which IDE you are using and which dataset you are using, and also give us a link to the dataset.
@Roboflow
@Roboflow 5 months ago
I’m using the PyCharm IDE and all models are pre-trained on COCO. Sorry, but I just can’t start every video by talking about the IDE; other people would lose interest by the time I get to the actual topic of the video :/ The link to the code is in the description. There you will find setup instructions. Let me know in the comments if you have more questions.