Very nicely done video tutorial! I really appreciate you taking the time to review the product. I own two of the HuskyLenses and would encourage you to do a second, or maybe even a third, video on the advanced features: saving a model to the SD card, saving a photo to the SD card, and how you can access the facial recognition, color recognition, etc. from code. Once again, thank you. I must add, this is probably one of the most informative videos I have seen on the HuskyLens. I look forward to seeing more videos from you.
Thank you! I'll look into that. I'm honestly still exploring and learning it! I have facial recognition plus keystrokes working using an Arduino Leonardo, which opens up some fun things. I'll keep digging into it... I haven't gotten into the SD card yet, and wasn't sure if I could load my own models... I'll have to look into that!
I found a similar AI camera to the HuskyLens: the ESP32-CAM. It's a lot cheaper than the HuskyLens and can do all those operations like face recognition, tag recognition, object recognition, etc. Does the ESP32 work in a similar manner to the HuskyLens, or is it different? And which would be better for me if I'm planning to build an advanced robot?
Not yet. I'm planning on trying that in a few months when I move into my new house and set it up, though... it'd be VERY cool for sure. I'll post an update then.
Just seeing this now. I would need it to detect something new: orange traffic cones. Can it be trained to do that reliably? Are external models usable, or do I have to point it at 100 different cones and push the button each time?
Thanks for this nice video; it really explains a lot, and easily. My only question is: is there a way to copy the learned faces/objects to another HuskyLens? I've searched their official website and had a look at the docs, with no answer. I've already googled it (maybe I'm asking the wrong questions, I don't know). It won't be feasible for me to run through the learning process every time I make another bot... And again, thank you for this video.
I’m glad it helped! As for your question, I don’t know, but I don’t believe so. It’s very good for quick prototyping, but to transfer learned models you may have to look into a TensorFlow solution.
I no longer have access to mine, so I couldn't tell you, but it sends pretty standard serial descriptions from the device, so if you can map the x/y coordinates of what the camera sees to 'real space', then you should be able to have a turret follow someone. Please don't do anything terrible with that info :)
I love this video. I'm curious: can it learn to detect a person's covered face, like in a gowning process? I'm working on a project using an ESP32-CAM, but all I can make it do is identify whether a person is wearing a mask by not detecting a face.
Okay, only 6 chess pieces to learn, and there are a few free, already-built chess databases for Arduino/RPi... I just don't understand how to hook them together. I'd like to 3D print a robotic arm and use a HuskyLens as the camera and AI for the chess pieces, to move the correct piece and have it play against me. I'd love to see that project done!
Hey there! I know this thing has object detection and machine learning. I'm currently building a walking robot similar to the Boston Dynamics dog. Would this lens allow it to detect stairs and climb them?
That sounds awesome! I don't think this would be the sensor for what you're looking to do. It's good at detecting an object, but something like stairs is a bit complex. If you want your dog to recognize you, or chase a ball, or use QR codes to know what room it's in... this would help there. But stairs are a repeating 'object' that I don't believe this would help with, sadly. I think you may need a sensor array; this would, however, be useful for other scenarios (like the ball one I mentioned).
Not that I could see when I used it (it's been a while). I ran through all of the things it did at the time this was made, but it may have had updates, or you can reach out to them to see if they have updated the models. If you've got local network support, I know that DeepStack running in Docker can process and detect things like cat/dog/bird and such.
I have a question: can I get audible output from this HuskyLens by using a Raspberry Pi 4 and earphones? Another question: can I put another model on it, like text reading or text recognition, and have it read a ticket?
There’s no mic, and the video is never passed through, so no audio from the camera stream. For models, I’d contact them directly to understand what you can and can’t upload to it. It states that model updates can be done, but I didn’t test that feature.
@@geektoolkit Oh okay. So I need some help. I'm working on a project to help blind people with smart glasses. The concept takes video as input, and the output will be vibration or loud tones from a buzzer. I would use that HuskyLens for object detection, and as feedback for what it detects it will produce a vibration. So can the HuskyLens help me with that, or would using and programming a normal lens be more efficient? I will use a Raspberry Pi 4, and I don't have much time as it's a competition... so I need some advice from a professional like you.
Hello, I'm looking for code that drives the servo from the optical sensor. I can't find an Arduino library that works, so what should I do? Could you send me code with the library, please?
@@غسانشنانة I don't have anything put together to send you at the moment. Do you have servo code that works outside of the optical sensor? I usually use the sample servo code, paste that into the .ino file, and go from there.
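For a starting point, the rough shape I'd begin with looks like the sketch below. This is untested and assumes DFRobot's HUSKYLENS Arduino library plus the standard Servo library; the servo pin (9) and the mapping direction are my own assumptions, so adjust for your wiring.

```cpp
#include <Wire.h>
#include <Servo.h>
#include "HUSKYLENS.h"  // DFRobot HUSKYLENS Arduino library

HUSKYLENS huskylens;
Servo pan;

void setup() {
    Wire.begin();
    pan.attach(9);  // assumed servo signal pin
    while (!huskylens.begin(Wire)) {
        delay(100);  // keep retrying until the lens responds over I2C
    }
}

void loop() {
    // Ask the lens for its current detections, then read the first one.
    if (huskylens.request() && huskylens.available()) {
        HUSKYLENSResult result = huskylens.read();
        // Map the 320 px frame width onto the 0-180 degree servo range.
        // Swap 0 and 180 if your servo turns the wrong way.
        int angle = map(result.xCenter, 0, 320, 0, 180);
        pan.write(angle);
    }
    delay(50);
}
```

This is hardware-only code (it needs the lens and a servo attached), so treat it as a template rather than something to run as-is.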
That is a really cool gadget. You mentioned you don't have to upload your own models because of the built-in capabilities. Are you still able to use your own custom TensorFlow models if you want to?
From what I can tell it uses a format called kmodel. The devs mention you can upload your own trained models; however, in the forums, those who have tried haven't had luck yet. It's also mentioned that if you do, you'd have to upload one of the model types it already supports. So the short answer is you should be able to, but apparently it doesn't work. So... I guess no. :)
No, sadly. I think you'd need something with maybe a 120 fps camera for that, and one heck of a processor. That would be an amazing project, though! This just doesn't have that level of performance.
OK, thank you for the quick response. I've started watching your videos, and you do good work, so I have now subscribed... Do you know of a vision system that would detect a puck?
@@devanlynk7909 I don't offhand. One thing I can help with: what you want is not only something that can detect a puck, but also an algorithm, and then the compute to process it. I'd say you may be best off with a high-FPS camera and a small-form-factor PC, so you can tweak the algorithm and keep the high-FPS camera separate from the computing power needed.
@@geektoolkit Details in the face get distorted as they get further away; this is the reason for the issue. Since the camera is an OV2640, you might find a fisheye or wide-angle lens that will fit it, and that would give you a greater distance.
I’ve been looking into something like this for quick object recognition for my robot. Is it possible to switch between the different detection modes in the Arduino code?
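On switching modes from code: DFRobot's HUSKYLENS Arduino library exposes a writeAlgorithm() call for this. The fragment below is an untested sketch; the algorithm constants come from the library's header, and the mode choices and delays are just illustrative.

```cpp
#include <Wire.h>
#include "HUSKYLENS.h"  // DFRobot HUSKYLENS Arduino library

HUSKYLENS huskylens;

void setup() {
    Wire.begin();
    huskylens.begin(Wire);
    // Start in face recognition mode.
    huskylens.writeAlgorithm(ALGORITHM_FACE_RECOGNITION);
}

void loop() {
    // Hop between modes as the robot's task changes, e.g. tags
    // for room markers, then tracking to chase an object.
    huskylens.writeAlgorithm(ALGORITHM_TAG_RECOGNITION);
    delay(5000);
    huskylens.writeAlgorithm(ALGORITHM_OBJECT_TRACKING);
    delay(5000);
}
```

Like the other sketch, this needs the lens wired up over I2C, so treat it as a template.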