I've never tried to make one, but I think the difference is that in Aim Lab the target and background already have contrasting colors, so he can 'increase' the contrast. In a real FPS game, the target is a character that blends into the background, so I assume the program would need to recognize the enemy character model, render it in a color that contrasts with the background, and then continue with the same process. Again, just an assumption.
@@Alex-de7rz I think object recognition + image classification might do the trick. No need to pinpoint the exact boundary of the character model; just draw a box over it and try to hit the center of that box. You may need to handle latency and character movement, though.
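A sketch of that "aim at the box center" idea (hypothetical helper functions; the (x, y, w, h) box format and all coordinates are assumptions, not anything from the video):

```python
# Hypothetical sketch: aim at the center of a detection bounding box.
# Box format (x, y, w, h) and the coordinates below are made-up examples.

def box_center(box):
    """Return the pixel center of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def aim_delta(box, crosshair):
    """Mouse movement (dx, dy) needed to put the crosshair on the box center."""
    cx, cy = box_center(box)
    return (cx - crosshair[0], cy - crosshair[1])

# Example: a 40x80 box at (100, 200), crosshair at screen center (320, 240)
print(aim_delta((100, 200, 40, 80), (320, 240)))  # (-200.0, 0.0)
```

A real detector would feed boxes in every frame, so the delta would need re-computing as the character moves, as the comment notes.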
@@welovethemallthesame1125 In my thesis project, to save time, I switched from TensorFlow to AWS Rekognition, and training the model was very easy. At the time I built a video-surveillance system that detected lost personal objects; I imagine the application here would be very similar, except that on detecting a Valorant character it would shoot... Of course that's easier said than done; there's a lot of trial and error.
I agree. You need to take note of: 1. Whether the mouse has mouse accel on; make sure it's off. 2. How many Gs the mouse sensor can handle; most modern gaming mice handle more Gs than we mere humans can reach. 3. Whether Windows pointer precision is turned off; leaving it on may force you into that complicated math. My point is: replace the mouse and turn off Windows pointer precision. I recommend at minimum a Logitech G304, since that's the cheapest adequate wireless gaming mouse I can think of. The more variables you can control, the merrier. Nice work!
@@barjuandavis Windows pointer precision doesn't affect games that use raw input (99% of modern shooters). Also, if anything, he should ideally just turn the sens up a shit ton and make the motors as precise as possible.
@@re4796 that's not my point. It proves that your mouse and screen don't matter that much when your movements are perfect. You don't need a fancy gaming mouse to be that good; otherwise the robot would not be able to achieve those scores.
Possible improvements:

(Physical)
- Unshell the mouse and mount its optical sensor directly onto the robot. This reduces rattle/wiggle and ditches the mass of the shell.
- Replace the mouse buttons with relays that can be triggered directly.
- Move as many components as possible into a separate stationary breakout box, again to reduce the mass of the mobile unit. Ideally, your mousedroid will have only the motor drives, wheels, and optical sensor(s) on the mobile unit.

(Possible complete reconstruction)
- Consider keeping the sensor completely stationary and inverted (pointing up) and moving the surface instead: a thin piece of roughened plastic that is easily optically trackable and could be moved with a geared/belted X/Y setup.
- Or mount the sensor the way a 3D printer's print head is mounted over the tracking surface.

(Programming)
- Keep tabs on the velocity of the mousedroid: don't just take the proximity of a target into account, but also prefer targets that are in line with the current direction of motion, to reduce how much the mouse has to turn.
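The velocity-aware target selection could be sketched as a simple scoring function (hypothetical; the blend between distance and turn angle is a made-up tuning knob, not anything measured):

```python
import math

# Hypothetical sketch of velocity-aware target selection: prefer close
# targets that are also roughly in line with the current direction of motion.
# turn_weight is a made-up tuning parameter.

def pick_target(position, velocity, targets, turn_weight=0.5):
    speed = math.hypot(*velocity)
    best, best_cost = None, float("inf")
    for t in targets:
        dx, dy = t[0] - position[0], t[1] - position[1]
        dist = math.hypot(dx, dy)
        if speed > 1e-9 and dist > 1e-9:
            # 0 when moving straight at the target, 1 when moving directly away
            cos = (dx * velocity[0] + dy * velocity[1]) / (dist * speed)
            misalignment = (1 - cos) / 2
        else:
            misalignment = 0.0  # standing still: distance alone decides
        cost = dist * (1 + turn_weight * misalignment)
        if cost < best_cost:
            best, best_cost = t, cost
    return best

# Moving right: prefers the target ahead over a slightly closer one behind
print(pick_target((0, 0), (1, 0), [(-90, 0), (100, 0)]))  # (100, 0)
```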
Thanks Mark for watching! These suggestions are great, and some of them I do plan on doing for a V2. This was really a demo, me seeing if I could do it at all. But I don't like how bulky the system is, so I'm going to take apart a mouse and make a new, much slimmer and faster system, coming soon. Also, I love the name you gave the mouse: mousedroid is fantastic, and I might steal that for a future video!
I'm a backend developer, but I took two embedded systems courses. I like this a lot: combining hardware with software and machine learning is amazing, and a complex task. Oh man, you are a "real" software engineer: you had an idea, you made it into reality, and you kept developing it. Congratulations!
Thanks for the kind words, and for understanding how much work went into this! Software engineering is really difficult, but these one-off programming challenges are fun.
Awesome stuff. An algorithm change to allow shots while on the move should give you those extra points. Plan ahead for the optimum path, re-evaluate if targets change, and shoot as you pass each target on the way to the next. That's how humans do it. Never slow down, and estimate distances in terms of time to get there, not physical distance. You might also want to get an old-school rubber-ball mouse, toss the ball, and turn the rollers directly.
@@niezbo And if you want, you could skip the motors entirely and instead simply blink LEDs at a rate that suits you. In a ball mouse, movement along each axis is detected by two photo-junction devices that look at an LED through a perforated disk. Skip the disk and the LED, and excite the sensors directly with two LEDs of your own to create any movement you want. The challenge is getting the timing right, but once you're there, the number of moving parts is 0.
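Those two detectors form a quadrature pair: which channel leads by a quarter cycle encodes direction. A minimal sketch of the blink sequence you'd have to produce (pure timing logic, no hardware; the function name is made up):

```python
# The standard Gray-code quadrature sequence for the two (A, B) channels.
# Stepping forward through this list is one direction, backward is the other.
QUAD = [(0, 0), (1, 0), (1, 1), (0, 1)]

def quadrature_states(counts):
    """Yield the (A, B) LED states needed to fake `counts` encoder ticks.

    Positive counts step forward through the sequence, negative backward.
    """
    idx = 0
    step = 1 if counts >= 0 else -1
    for _ in range(abs(counts)):
        idx = (idx + step) % 4
        yield QUAD[idx]

print(list(quadrature_states(3)))   # [(1, 0), (1, 1), (0, 1)]
print(list(quadrature_states(-2)))  # [(0, 1), (1, 1)]
```

On real hardware the tricky part, as the comment says, is holding each state long enough for the mouse's sampling to register it.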
Go a step further. Look at how the mouse sends its signal over USB and use an Arduino to simulate mouse movement by sending the right X and Y values.
@@Shedding There's (some) danger in that one. It is conceivable that a future gaming product might look for that kind of hack. So if you do it, it had better emulate all of a gaming mouse's behavior... That said, the standard mouse signaling over USB is well defined, and plenty of sample code exists out there.
Cool video and nice work! I feel like the PID tuning would be much easier if you focus on singular events. For example, if you let the robot shoot a couple of targets and then plot the error over time, you might have an easier time making the PID feedback faster while keeping an eye on overshoot and unwanted oscillations. Probably too much work, but it would be cool to see those kinds of plots!
First off, thanks for watching. Yeah, I have some plots of those errors; I didn't think they made for interesting content, but I will write a technical post about that. And when I make a V2 I can add that in.
I think an even bigger improvement could be made by using optimal control techniques instead of PID. In the video, the robot tends to overshoot some targets when it starts to move large distances. Maybe using LQR or something, and optimizing to minimize some objective, would give a much better system response than hand-tuning a PID.
I just posted this elsewhere already; I see this would have been the right place. CNC servo motors are controlled with 3 PIDs: position -> PID -> speed -> PID -> torque -> PID -> duty cycle. Sounds complicated, but it makes tuning much easier in the end.
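A toy sketch of the first two loops of that cascade, with the outer position loop producing a speed setpoint for the inner loop (P-only gains and the plant constants are made-up illustration values, not tuned for any real hardware):

```python
# Cascaded control sketch: outer position loop -> speed setpoint,
# inner speed loop -> drive command. All numbers are illustrative.

def cascade_step(pos, speed, target, kp_pos=2.0, kp_speed=0.8):
    speed_setpoint = kp_pos * (target - pos)       # outer: position error -> desired speed
    command = kp_speed * (speed_setpoint - speed)  # inner: speed error -> drive command
    return command

def simulate(target=10.0, steps=400, dt=0.01):
    pos, speed = 0.0, 0.0
    for _ in range(steps):
        u = cascade_step(pos, speed, target)
        speed += u * dt * 10   # toy plant: the command accelerates the axis
        speed *= 0.98          # a little friction
        pos += speed * dt
    return pos

print(simulate())  # settles near the 10.0 target without overshoot
```

The appeal of the cascade is exactly what the comment says: each loop can be tuned on its own, inner loops first.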
Use a gaming mouse with good switches and a good sensor. The higher DPI will help the robot track. By using an inhumanly high sensitivity, you can get the robot to aim with tiny (but precise) movements.
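The payoff is easy to quantify: the physical travel needed shrinks linearly with DPI times sensitivity. A hypothetical back-of-envelope helper (example numbers, not measurements from the video):

```python
# How far the robot must physically move the mouse to sweep `pixels`
# of screen distance. Each inch of motion yields dpi * sens pixels.
# All numbers below are illustrative examples.

def travel_inches(pixels, dpi, sens=1.0):
    return pixels / (dpi * sens)

# A 400-pixel flick: office mouse vs. high-DPI gaming mouse at high sens
print(travel_inches(400, dpi=800))             # 0.5 inches
print(travel_inches(400, dpi=3200, sens=2.0))  # 0.0625 inches
```

Less physical travel means less time accelerating and less wheel slip, at the cost of needing finer motor resolution.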
Very cool! The fastest way to get a screenshot is to copy directly from the game’s back buffer by hooking the Present function in the D3D render pipeline. This involves writing a small library and injecting the DLL into the game’s process. A much simpler method involves using the Windows GDI API. It’s not as fast, but you can still reach a few hundred FPS. You’d probably be able to achieve much higher scores by improving your kinematics. Directly driving two axes rather than using omniwheels will reduce slipping and allow you to increase acceleration.
Wow, thanks for watching, and thanks for the information. The screenshotting tips sound really helpful and I will look into them. My V2 will still use wheels, but down the line I definitely want to try a 2-axis gantry.
I think you could get a couple more points by optimising the targeting algorithm. Instead of targeting the closest circle, you could turn this into a travelling salesman problem and select the shortest path. Because there are only 3 circles visible at once, there are only 6 (3!) possibilities. You could put those paths into an array, sort by total distance, pick the shortest path, and repeat after every hit. Also, tuning the values for the mouse sounded like something you could turn into an ML project.
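A sketch of that brute-force approach (hypothetical Python; the coordinates are made-up screen positions):

```python
import itertools
import math

# Brute-force "travelling salesman" over the visible targets: with only
# 3 targets there are just 3! = 6 visit orders to try.

def path_length(start, order):
    """Total distance of visiting targets in the given order."""
    total, pos = 0.0, start
    for t in order:
        total += math.dist(pos, t)
        pos = t
    return total

def best_order(start, targets):
    """Try all len(targets)! visit orders and return the shortest."""
    return min(itertools.permutations(targets),
               key=lambda order: path_length(start, order))

crosshair = (0, 0)
targets = [(10, 0), (20, 0), (0, 5)]
print(best_order(crosshair, targets))  # ((0, 5), (10, 0), (20, 0))
```

Re-running this after every hit, as suggested, keeps the plan fresh as new circles spawn.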
Great work! A couple of suggestions. First, the OpenCV library in Python is easy to use and set up, but if you manage to code it in C++ you would improve performance. Second, on the PID controller: I don't know the gains you set, but it's important to have the correct sample time (for you, the frequency at which you execute the PID), since it influences the effectiveness of the control. Moreover, in your case it might be enough to have a PD controller, which is basically the integral gain set to zero (or a very small value compared to the other two). The proportional part gives you a fast response, while the derivative keeps the mouse from going past the target. Again, great work! I love it! Edit: I realized too late that the video is old... lol
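A minimal PD sketch with the sample time dt made explicit, as the comment suggests (the gains here are illustrative, not tuned for this robot):

```python
# Minimal PD controller sketch (Ki = 0). Making dt explicit means the
# gains keep their meaning if the control frequency changes.
# Gains are illustrative values, not tuned for any real system.

class PD:
    def __init__(self, kp, kd, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_error = 0.0

    def update(self, error):
        d = (error - self.prev_error) / self.dt  # derivative of the error
        self.prev_error = error
        return self.kp * error + self.kd * d

pd = PD(kp=0.5, kd=0.05, dt=0.01)
print(pd.update(10.0))  # 0.5*10 + 0.05*(10/0.01) = 55.0
print(pd.update(8.0))   # 0.5*8 + 0.05*(-2/0.01) = -6.0
```

The second output shows the derivative term at work: the error is still positive, but because it is shrinking fast, the controller already brakes.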
This is such an awesome project, came across the video on your tiktok and you’re making super high quality content, keep it up man, and I can’t wait to see how far you can go
Old trackball mice might be better for this, as the mechanics required can be smaller and fit in the void left by the ball. In reality you could also just disassemble the mouse completely to get at the rollers directly...
You could try using a 3D printer gantry to move the mouse faster and more precisely; the mouse could be held stationary while the mousepad moves under it.
Great job! As a vision engineer, I'm really impressed! I don't use Python/open source in my industrial projects, but maybe OpenCV in C++ is faster? Modelling this as a discrete system is really math-heavy; you've done a great job finding the values by trial and error!
Hello, thank you for your kind words, especially from someone who does computer vision for a living. How would you go about simplifying a model of this system? At the moment it's definitely nonlinear, and wheel slippage will be a problem. Would you first get it into state-space form?
OpenCV-Python is just a wrapper around the C++ library, so switching shouldn't actually be much faster, provided you are using numpy and not native Python features.
You could strip the mouse down to just its sensor and board for a smaller footprint. It'd probably be more accurate too, with the reduced weight minimising inertia. You could also suspend it in place and have a small sliding base under the sensor.
Not a Python developer, but have you tried the Pillow module to grab just the center of the screen instead of the whole screen as a screenshot? I imagine there's a lot of dead space around the edges that could be cropped to make the image smaller for faster encoding. Possibly faster to use .PNG encoding, but not 100% sure on that.

from PIL import ImageGrab

ss_region = (300, 300, 600, 600)  # left, top, right, bottom
ss_img = ImageGrab.grab(ss_region)
ss_img.save("SS3.png")
I had a similar idea but with a metal frame: moving from one end of the frame to the other does a 360 turn, so you can get more consistent results with a steadier frame for faster flicks.
3D printers cap out at 5-6 meters per second. They are incredibly accurate, too. I suggest modifying your setup to use this tech: gut the mouse and attach just the laser sensor to the print head, lock the Z axis, and set some real records!!
This is really great! I've done each piece of this project for other things, and I never would've thought you could throw thresholding, PID controllers, and basic distance calculations into a controller, wire it up to an omniwheel rig around a mouse, and make it an aimbot. That's so creative! Great work!!
Thank you for watching! Similar to you, I have done each of those things individually (PID, computer vision, robot design), and that's why I felt confident I could do this project.
This with an arm would probably rip. I think the wheels and traction as cool as they are add a lot of inaccuracies and slowness to the entire system. Still sick though 😁
To be fair, it is a human building the code. I'm sure there are some complex math equations an AI could solve and include in its own version. Heck, I'm sure it could build a better device for more precise movement. I'm sure something like this would be beneficial in a warehouse: having a robot move boxes in a precise and efficient manner would probably speed up the process.
You just need to detect a colored pixel, so one optimization is the size of the screen capture you take (resolution): make it as small as possible while keeping detection accuracy high, to reduce the computation load. Maybe as low as 128x128 for a 1:1 display. Let me know how this affects your performance.
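A sketch of that idea with cheap strided downsampling before detection (hypothetical Python; the synthetic frame and the red-channel threshold are made up for illustration):

```python
import numpy as np

# Hypothetical sketch: shrink the frame by strided slicing, then run a
# cheap color check on the smaller image. Frame contents are synthetic.

def downsample(frame, factor):
    """Keep every `factor`-th pixel in each direction."""
    return frame[::factor, ::factor]

def has_target(frame, threshold=200):
    """True if any pixel's first channel exceeds the threshold."""
    return bool((frame[..., 0] > threshold).any())

frame = np.zeros((512, 512, 3), dtype=np.uint8)
frame[100:140, 200:240, 0] = 255           # a 40x40 "target" blob
small = downsample(frame, 4)               # 512x512 -> 128x128
print(small.shape, has_target(small))      # (128, 128, 3) True
```

This is 16x fewer pixels to scan, at the risk the next comment points out: a target smaller than the stride can vanish entirely.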
The problem with this is in a real game: the more you lower the resolution, the less information you have, which may cause the robot to not detect a head.
Imagine if the origin of AI, instead of coming from high-tech giants like Boston Dynamics or Raytheon chasing man-made sentient consciousness, started from a YouTuber trying to create a perfect aimbot. Kudos and godspeed, Kamal.
It's really impressive to see your robot hit 118k even with a burnt-out motor. Before I stopped playing FPS games my Aimlabs score sat around 103k. Though I'm no expert by any means, realistically anything higher than 100k isn't necessary for anyone at all.
This is awesome. No doubt you could've passed Tenz's score if you'd spent more time tweaking your design, but that's beside the point; this was so neat.
Honestly, I wonder how big of a difference the mouse would've made. Going from a cheap wireless office mouse to a proper gaming mouse is a huge difference for humans, let alone a robot.
Really cool! I'd suggest maybe using some small brushless motors. It would be a more complicated wiring setup, but likely quite a bit quicker and more accurate. I'd also try mounting the mouse to the plate more securely; it seems there's slight wiggle, which could lower accuracy. Well done!
Wait, you might be on to something. I'm going to look into brushless motors; they would be a lot better. One thing, though: I doubt they deliver a lot of torque. The motors I used are geared down a lot for that reason.
@@ZennySilverhand why brushless motors instead of stepper motors like a CNC machine uses? Also, take the mouse apart and just use the guts; no need for the extra weight of the case.
I also do computer vision stuff, and I usually cv2.split(frame) to get the blue, green, and red channels, then get the mask with cv2.subtract(g, b), so that only green stuff remains.
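For reference, the same mask in plain numpy (equivalent to cv2.subtract(g, b) after cv2.split(frame), including the saturating subtraction; the test frame is made up):

```python
import numpy as np

# Green-minus-blue mask, the numpy equivalent of cv2.subtract(g, b)
# after cv2.split(frame). Saturating subtraction: values clip at 0
# instead of wrapping around under uint8 arithmetic.

def green_mask(frame):
    b = frame[..., 0].astype(np.int16)  # OpenCV frames are in BGR order
    g = frame[..., 1].astype(np.int16)
    return np.clip(g - b, 0, 255).astype(np.uint8)

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (0, 200, 0)      # pure green pixel -> strong mask response
frame[0, 1] = (180, 180, 180)  # grey pixel -> channels cancel to 0
print(green_mask(frame))
```

The subtraction kills greys and whites automatically, which is why this trick works so well on flat-colored targets.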
To comment on the screenshot thing: you might not need a faster screenshot process if you do trajectory planning on each screenshot, then use PID (or cascade control, my favorite) to track the trajectory. This may also give you an even higher score, since you can plan an optimal path so you never have to slow the mouse down; it's always moving. I always wanted to do this, but with a stereo camera looking at the screen. That might also increase your screenshot refresh rate.
Hmm 🤔 pointing a camera at the screen would be cool. It would increase the "it's just like a human playing" aspect. Yeah, I'm also thinking about doing more advanced/optimal control in the future; the jerkiness of the PID is definitely not sufficient if I want the hardware not to implode.