We don't use Road Runner during autonomous. Instead, we use a homegrown motion planning library that lets a programmer navigate the robot using either vision with AprilTags or dead reckoning with the drive wheel encoders.
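For anyone curious what the dead-reckoning half looks like, here's a minimal sketch of differential-drive odometry from two drive wheel encoders. This is not our actual library, just the standard arc-integration math; `TICKS_PER_INCH` and `TRACK_WIDTH` are placeholder constants, not our real tuning values.

```java
// Minimal dead-reckoning sketch for a differential drive (illustrative, not our library).
public class DeadReckoning {
    static final double TICKS_PER_INCH = 30.0; // placeholder encoder scale
    static final double TRACK_WIDTH = 14.0;    // placeholder wheel-to-wheel distance, inches

    double x, y, heading;              // pose in the field frame (inches, radians)
    int lastLeftTicks, lastRightTicks;

    /** Call every loop with the current drive encoder readings. */
    public void update(int leftTicks, int rightTicks) {
        double dLeft  = (leftTicks  - lastLeftTicks)  / TICKS_PER_INCH;
        double dRight = (rightTicks - lastRightTicks) / TICKS_PER_INCH;
        lastLeftTicks = leftTicks;
        lastRightTicks = rightTicks;

        double dCenter  = (dLeft + dRight) / 2.0;         // forward travel
        double dHeading = (dRight - dLeft) / TRACK_WIDTH; // heading change

        // Integrate the pose at the midpoint heading for better accuracy on turns.
        double midHeading = heading + dHeading / 2.0;
        x += dCenter * Math.cos(midHeading);
        y += dCenter * Math.sin(midHeading);
        heading += dHeading;
    }
}
```

In the vision mode, an AprilTag detection would periodically overwrite this integrated pose, since encoder-only estimates drift over time.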
We use the color sensors on our smart bucket to detect each pixel's color. We have two REV Blinkin LED drivers, each paired with the sensor on the same side: the right sensor drives the right LED, and the left sensor drives the left LED.
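A rough sketch of that sensor-to-LED pairing using the stock FTC SDK classes is below. The hardware names and the color thresholds are placeholders for illustration, not our actual configuration or tuning.

```java
import com.qualcomm.hardware.rev.RevBlinkinLedDriver;
import com.qualcomm.hardware.rev.RevBlinkinLedDriver.BlinkinPattern;
import com.qualcomm.robotcore.hardware.HardwareMap;
import com.qualcomm.robotcore.hardware.NormalizedColorSensor;
import com.qualcomm.robotcore.hardware.NormalizedRGBA;

/** Mirrors one bucket color sensor onto the Blinkin on the same side. */
public class BucketIndicator {
    private final NormalizedColorSensor sensor;
    private final RevBlinkinLedDriver led;

    // Config names ("bucketSensorRight", "blinkinRight", ...) are placeholders.
    public BucketIndicator(HardwareMap hw, String sensorName, String ledName) {
        sensor = hw.get(NormalizedColorSensor.class, sensorName);
        led = hw.get(RevBlinkinLedDriver.class, ledName);
    }

    /** Classify the pixel under the sensor and show its color on the LED. */
    public void update() {
        NormalizedRGBA c = sensor.getNormalizedColors();
        if (c.alpha < 0.05) {                            // placeholder "slot empty" threshold
            led.setPattern(BlinkinPattern.BLACK);
        } else if (c.green > c.red && c.green > c.blue) {
            led.setPattern(BlinkinPattern.GREEN);
        } else if (c.blue > c.red) {
            led.setPattern(BlinkinPattern.VIOLET);       // purple pixel
        } else if (c.red > 0.3 && c.green > 0.2) {
            led.setPattern(BlinkinPattern.YELLOW);
        } else {
            led.setPattern(BlinkinPattern.WHITE);
        }
    }
}
```

Constructing two instances, one per side, keeps the right sensor paired with the right LED and the left with the left.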
Hi guys, I have a few questions for you: 1) How do you place the purple pixel in autonomous? 2) Judging by the video, you don't control which slot of the bucket a pixel ends up in when it is collected. Can you control that somehow?
1) Take a look at Sprint 2's video. We use a pixel pusher to move the purple pixel onto the spike mark. 2) We can't control where a collected pixel goes, but we do know which color is on which side thanks to the color sensors and LEDs.
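For point 1, the pusher is just a servo. A hypothetical version of that autonomous step might look like the sketch below; the class, servo positions, and config name are all made up for illustration and aren't our real code.

```java
import com.qualcomm.robotcore.hardware.HardwareMap;
import com.qualcomm.robotcore.hardware.Servo;

/** Hypothetical pusher that leaves the purple pixel on a spike mark. */
public class PixelPusher {
    private static final double STOWED = 0.2;   // placeholder servo positions
    private static final double EXTENDED = 0.8;

    private final Servo servo;

    public PixelPusher(HardwareMap hw) {
        servo = hw.get(Servo.class, "pixelPusher"); // placeholder config name
    }

    /** Extend to sweep the pixel onto the mark; retract before driving away. */
    public void push() { servo.setPosition(EXTENDED); }
    public void stow() { servo.setPosition(STOWED); }
}
```

In autonomous, the motion library drives to the detected spike mark, calls `push()`, then `stow()` before moving on.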