Thank you very much for the tutorial. I am curious what type of camera you used. Is it a webcam? What resolution and fps would you suggest for a setup? Thank you.
Thank you so much for this video as well as the eye gaze series. They are really awesome. I have faced a problem with the code in this video when obtaining the contours with the findContours function: it produces the error "not enough values to unpack (expected 3, got 2)". I hope you can help me with that.
Thank you so much for the very interesting and helpful video. I have a question for you: what should I do to count how many times the iris moves, for example, in a minute? Thanks for your consideration.
Hi, after detecting where the eye is looking, can I use this information and send it to a microcontroller to take some action? If yes, will this code be efficient enough for that? Thank you for sharing this awesome work.
@@nadaomar5451 Plot twist: never tried, bro. I'm dedicated to web and some backend stuff lol. I get the impression this is already implemented by other companies.
I've done a video on how to get the position of facial landmark points. You can find it here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-MrRGVOhARYY.html You can also check my video series "Gaze controlled keyboard", where I go more in depth about eye detection.
It's amazing that GPT-3 can write its own code, so it could code this automatically. I was also wondering if Cisco OpenDNA Spaces is what is used to route all the cloud data for the public.
Thank you very much, your tutorials help me a lot and I learn a great deal even though I am just a beginner. If you want, could you make a tutorial about tracking objects like darts to find out their position on the dartboard (whether it is a 20 or a 5, for example)?
Hello, I am coding a project where I have to set a variable to a different value when my eyes look in different directions. Whenever my eyes look away, I want it to set value = 'some number'. How could I do this?
When I try to run the contour part I get errors like AttributeError: module 'cv2.cv2' has no attribute 'drawCounters' and ValueError: not enough values to unpack (expected 3, got 2)
findContours() was changed to no longer return the image as the first value, so change the line to contours, _ = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) instead
Thanks a lot, your videos have helped me in my college work, and since I am new to this, they really helped me. I just wanted to ask if you could provide the code to crop the eye part as you showed.
Simple, amazing. I have already learned a lot from different sources, but this video summarizes the main idea of computer vision very well. I would just recommend adding two things: instead of using the '_' special variable, use a named variable; it helps those of us who don't have much knowledge of cv2 understand better what is being returned. Also, I would like you to add a little explanation of each cv2 method (yes, I know that I can check the docs, but I want a shortcut :)
Underscores "_" are typically used as "throwaway" variables. So when a cv2 method returns several items (say line 12) but we only want one of them, we use this convention
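For example, in plain Python (nothing cv2-specific, just the convention):

```python
def min_max_mean(values):
    """Return the minimum, maximum, and mean of a sequence."""
    return min(values), max(values), sum(values) / len(values)

# We only care about the mean here, so the first two return
# values are assigned to the throwaway name "_"
_, _, mean = min_max_mean([2, 4, 6])
print(mean)  # 4.0
```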
For some reason "_, contours, _ = " does not work for me. If I leave off the first '_,' then it does work, i.e., "contours, _ = " works. Same thing in the shapes video.
Why do you specify the position of the eye (roi = frame[269: 795, 537: 1416]) rather than detecting the eye first using custom object detection (with TensorFlow) or an already-built Haar cascade object detector? With your approach it only applies to your recorded video, which makes little sense. Imagine trying to track eye motion with a live webcam. Overall, thank you for contributing.
Hi, I have an error: Traceback (most recent call last): File "D:\eye_motion_tracking\tessst_roi.py", line 19, in contours = sorted(contours, key=lambda x: cv2.contourArea(x), reverse=True) TypeError: 'NoneType' object is not iterable How do I solve it?
Hi Deepak, I've a video about that as well. Check my series "Gaze controlled keyboard" and you will learn there how to extract the eye from the face in real time and then detect the gaze
One of my questions is how would you make the console print out whether it is moving left, right, up, down, or center? You mentioned at the end of the video that you could do that. Could you please provide that information? Thank you!
I have a question. When I try to run the code after the command cv2.drawContours(roi, [cnt], -1, (0,0,255), 3), I get the following error: TypeError: contours is not a numpy array, neither a scalar
File "P:/Pysource/eye_motion_tracking/eye_motion_tracking.py", line 17, in _, contours, _ = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) not enough values to unpack (expected 3, got 2) this is the error, can anyone please help me?
Take a look at this playlist: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-VWUgkcX_KoY.html It explains how to determine whether the eye is looking to the left or right, so you can adapt it to detect when the eye is looking up.
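The basic idea can be sketched like this: once you have the pupil contour, compare its centroid's x-coordinate to the center of the eye ROI. The margin here is a made-up placeholder you would tune to your own video:

```python
def gaze_direction(cx, roi_width, margin=0.15):
    """Classify gaze from the pupil centroid x-position within the eye ROI.

    cx: x-coordinate of the pupil contour's centroid
    roi_width: width of the eye region in pixels
    margin: fraction of the width treated as 'center' (tune per setup)
    """
    center = roi_width / 2
    if cx < center - margin * roi_width:
        return "left"
    if cx > center + margin * roi_width:
        return "right"
    return "center"

print(gaze_direction(20, 100))   # left
print(gaze_direction(50, 100))   # center
print(gaze_direction(85, 100))   # right
```

The same comparison on the y-axis would give up/down; the centroid itself can be computed from cv2.moments of the largest contour.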
I've got a problem in line 14 of your code. error: (-215:Assertion failed) npoints > 0 in function 'cv::drawContours' Can anyone figure out what this means?
Never mind, I figured it out. In OpenCV 4.x.x the first value returned by findContours() is the contours list, so unpacking the first return value as the contours should do the trick.
That's my problem too; all these tutorials are stuffed with magic numbers beyond all recognition, tuned to one specific case, so they are useless in real cases. I'm trying to use a more adaptive method based on the median, etc., but the problem is really hard, starting from the GaussianBlur kernel size (3,3) :)
@@stivstivsti Yeah, but you still need contours or edge detection to measure the distance between two elliptic curves. And on the shop floor there's dust, fog, steam, changing light, etc.
_, contours, _ = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) ValueError: not enough values to unpack (expected 3, got 2) What's wrong with that? Can someone help?
Change that line to: contours, _ = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) and it will solve the problem. This is because you're using OpenCV 4, while the code was written for OpenCV 3.
I think I've been hacked. My question is whether it's possible that this person is able to see exactly what I'm reading, because I can hear them reading along with me. Sorry for my English.
Hard-coding the ROI is cheating. You want to train a Haar cascade (there probably is one already) to detect the contrast levels that make up an eye and return that ROI.