Hi, I’m Kevin McAleer. Learn with me as I build robots, bring them to life with code, and have a whole load of fun along the way. I’ll be sharing ideas, tutorials and builds every Sunday evening at 7pm BST.
You’ll learn how to design robots in Fusion 360, print them on your 3D printer, wire them up and then program them. Be sure to subscribe for new content, and comment below to introduce yourself!
Visit www.smarsfan.com for more information about the SMARS Project.
Hey, I got one of these but was disappointed that you can't use it with PyTorch, LLMs or other models without some weird recompiling, and they don't provide the full SDK. So if you want to train something to find a squirrel, for example, I don't see any way. In the meantime it's just a simple toy for running some of their demos. Unimpressive imo. I think the Coral is better, since it's easier to get it to help with tasks.
@kevinmcaleer28 I made a virtual environment and did the whole installation there, but cvzone is installed in the normal system Python. I have Python 3.10, on a brand-new Pi 5 with the latest firmware.
This is so cool! I just bought all the parts and am working on building the robot now with my 5 year old son! Having a blast! Quick question. Does the current chassis file have the shim fix that you were talking about at 16:41? Thanks so much for all you do!
Hey, is it possible for the remote to also receive data from the bot, so both devices could read and send data? I've tried for a few days but without success... Your code is great though and helps me out a lot <3 Greetings from Germany
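Two-way traffic is usually possible as long as both ends bind their own socket and take turns sending and receiving. The sketch below is not Kevin's actual remote code (which may use a different link, e.g. Bluetooth between two Picos); it's a generic, hypothetical illustration of the bidirectional pattern using UDP sockets on localhost, with made-up ports and messages:

```python
import socket

# Hypothetical addresses for illustration only.
BOT_ADDR = ("127.0.0.1", 9001)
REMOTE_ADDR = ("127.0.0.1", 9002)

# Each device binds its own socket, so each can both send and receive.
bot = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bot.bind(BOT_ADDR)
remote = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
remote.bind(REMOTE_ADDR)

# Remote -> bot: a command.
remote.sendto(b"forward", BOT_ADDR)
cmd, sender = bot.recvfrom(64)

# Bot -> remote: telemetry, replied to whoever sent the command.
bot.sendto(b"battery=87%", sender)
telemetry, _ = remote.recvfrom(64)

print(cmd, telemetry)  # b'forward' b'battery=87%'

bot.close()
remote.close()
```

The key point is that nothing about the transport is one-way; if only one direction works, it's usually because the "remote" side never listens (no bind/receive loop) rather than a hardware limit.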
I'm interested in detecting ArUco markers and getting their pose estimates. Is that something this AI chip can do well? Would you still need to run something like OpenCV on it? I've never used AI-specific hardware, so I'm not sure how it works.
This video was epic, thanks! Kev, dude, you're incredibly thorough, and you don't make any assumptions about the viewers' understanding of the various technologies. I appreciated the explanations of each command along the way. This was a proper tutorial, and not simply a how-to. I'm looking forward to watching the rest.
Great content - super helpful! This was working great for my Wemos D1 minis until I reset my laptop and downloaded the latest version of Thonny. It's fairly reliable with Thonny 4.0.0; with 4.1.4 I got through the install, but then the onboard LED would blink rapidly and not stop. Thanks - I've been back to rewatch this video many times when something isn't working exactly as I remember it.
Is it possible to use this technology to control a UAV so it can be autonomous without GPS? Maybe have it detect objects like buildings, mountains, lakes, etc. that it has seen a picture of beforehand, so you could use identifiable objects as waypoints that you programmed in prior? I'm a total noob and wonder if there's a name for this idea so I can explore it further.
Thanks for the video. I got one of these kits and it is amazing when running the provided sample code/apps. The pity is that to program something leveraging this kit you need C++ and an installation of the Hailo SDK, and then it becomes very complicated for 99% of the people using a Pi. I would not recommend the Hailo AI Kit for everyone; it is for developers with good knowledge of AI and C++. Not a single video explains that - all the videos on YouTube just try to market it, and I have not found any tutorial that explains how to modify or make a new app. At present, if you get one of these kits you will be on your own.
I don't think the Alvik is worth €130. It does pack many features into a small package, but the price is just unfair. For $80 you can get a robot that drives faster, also has block coding, has a camera mounted, and is more open, meaning you can modify it more easily and learn more along the way.
I just got a Milk-V Meles and managed to get it to boot... (it won't power on using the Raspberry Pi 5 power supply, but will from just a USB port on my PC...). I think RISC-V has potential, but the OS support is lacking right now, and RISC-V really needs some love from the open source community. I was hoping Ubuntu had something already, but they only have a broken version for the Milk-V Mars on which the Wi-Fi won't even work... yet... I think hobbyists are going to love this stuff!
I'm quite interested in seeing its performance running LLMs via Ollama. With an RPi 5 alone, it takes the vision model LLaVA about five whole minutes to interpret an image. I'm hoping the AI Kit can improve that vastly! Have you given it a go?
Bit of a misleading title, as there was no real comparison between the Google Coral and the Hailo-8L other than a side-by-side spec sheet comparison. I guess it’s just a promo for the RPi AI Kit.
Super interesting video! I played with OpenCV and TensorFlow on a rPi 3. Back then I used a Movidius (now the Intel Neural Compute Stick) to improve the detection speed. The rPi 5 seems to be even faster, but I was curious whether you have tried those compute sticks?
That toilet detection was hilarious. You can see how it got matched: it looks oval and has a rim and that’s where the logic ends. No consideration of perspective or gravity. lol.
This chip appears to be low on memory and not fitting for LLMs at all. You'll want to look out for the Hailo-10H though (not released yet, and no release date either).