Nice one James. This project is getting to the inevitable forks in the road (with decisions for enhancement). It has served, as intended, as a great guide to anyone wanting to do similar things. Showing why things don't work so well is gold to any newcomer to the trials of engineering/computing, so well done for those explanations (keep them coming!). I look forward to the next installment.
This is turning out to be truly amazing James. Hats off for all the hard work that's gone into this project. It's become one of my favorite series to follow.
This show is one of my favorite things in the world. Thank you for sharing all of the details of the process including the missteps. I think, man, what if I'd had this show as a kid? You're doing good, good work, my friend. Thank you.
Amazing project! Your explanations of how the code relates to the physical machine are excellent. Thank you for that. Understanding the relationship, and then seeing it in reality, makes following the progress even more fun.
Max, it's an open-source project. If you really want to cry, you could send James some optimized code, or put it on GitHub so others can add to it and share as well.
Maybe add an eye nut on top and hook it up with a nylon rope to the ceiling, so that when it loses balance during testing it doesn't fall hard on the floor (like Boston Dynamics etc. also do).
Really enjoyed this so far. I can see an overlap in thinking between the balance and posture issues here and those of dancing. It's an intriguing overlap that can provide further perspectives on the problem, as well as a means to bring people involved in dance into robotics and vice versa, to enjoy the benefits of both. It could also expand interest in science, technology, entertainment and fitness, and dismantle tired stereotypes. I shall be coming back again!
Great work and awesome problem solving. Each build you make would be considered a prototype in a corporate setting, and they go through hundreds of prototypes and millions of dollars before they have a product. This amazing beast is just openDog #1. I'm sure, like with your other projects, you will continue to tinker with it to get it to a place you are happy with, and we will be here along the way watching each alteration/iteration. Keep up the great work!
The creaking is probably due to the rate of change being different for each joint. (The final position is probably calculated correctly, but the joints arrive there with different angular velocities and accelerations.)
I would say you could simplify things by stripping the serial down to SPI. Generally you can run short hops of SPI at up to system clock / 2, and it means you can have multiple devices tied off one SPI transceiver. It also simplifies the communication network to a ping-pong: the CS pin on the slave can fire an interrupt to enter communication, and the slave generally has an interrupt pin telling the master to poll it shortly. So instead of constantly checking your serial buffers, you have a few pin-change interrupts, then either set a flag for later or move the data in/out at stupidly high speeds. A 16 MHz Arduino has a surprising amount of power when you avoid floating-point math; even changing to fixed-point math with 4-byte longs is generally hundreds of times faster. Equally, when you get an 8-bit message sent from one device to another at sys clock / 2, you get 16 cycles to handle the last packet, be it an address to start reading from, a command, etc. Getting this right makes things way faster. I regularly play with low-power electronics, so waking up, doing things fast, then powering down is what I'm familiar with.
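The fixed-point idea mentioned above can be sketched in plain C++. This is my own illustration (the Q16.16 format, names, and values are mine, not anything from the openDog code): on an AVR with no hardware FPU, an integer multiply-and-shift like this is far cheaper than a software float multiply.

```cpp
#include <cstdint>
#include <cassert>
#include <cmath>

// Q16.16 fixed point: 16 integer bits, 16 fractional bits, stored in an int32_t
// (a 4-byte long on AVR, as the comment suggests).
typedef int32_t fix16;
const int FRAC_BITS = 16;

fix16 to_fix(double x)   { return (fix16)(x * (1L << FRAC_BITS)); }
double to_float(fix16 x) { return (double)x / (1L << FRAC_BITS); }

// Multiply: widen to 64 bits so the intermediate can't overflow,
// then shift the extra fractional bits back out.
fix16 fix_mul(fix16 a, fix16 b) {
    return (fix16)(((int64_t)a * b) >> FRAC_BITS);
}
```

For example, `fix_mul(to_fix(1.5), to_fix(2.25))` comes back as the Q16.16 encoding of 3.375, with no floating-point math in the hot path.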
As a person who goes a long way to avoid using a computer, I am always fascinated by coding. It makes me think: "Why again don't I use an Arduino in my projects? This would make life so much easier..." Then I say to this side of me: "Because you suck at coding." That's the point where I go and beat up some steel. I really love this project. I hope you don't get frustrated too often when you work on it. I wish you stress-free/frustration-free work.
Another option for the leg control issues, if you want to keep your current setup, is to use optical recognition tags to assign "joints" to the robot that can be monitored by a camera. Using Vuforia or some other software, measure the tags relative to each other to determine the position of each leg. Have little rods extend from the sides of the robot with a camera mounted at the end facing the robot, which will allow it to monitor itself. Combined with independent IMUs in each leg, individual computing units in each leg with a central head unit using a Banana Pi, and load cells at the bottom of each foot to determine weight distribution, you have yourself a very powerful robot.
I think you should use a current meter to detect contact: not only will you get a signal on contact, but you can keep going until you reach some threshold of "work" carrying weight. A simple clamp-on meter should suffice since it's AC, and you can get both the upper and lower joints.
Upgrade time then. Have you considered looking into ROS (that's open source too) running on a Raspberry Pi (or something beefier) to run the calculations?
@@thekraftyguy8246 I don't think those are the right chips for something like this. He needs a much faster MCU, not a highly complex CPU, so something like an STM32 or an ESP32 would work quite well with the existing code and would be much faster.
@@asderidelp I was going to say maybe an ESP32 (you can leverage both cores), with the IDE set to big app / OTA, no SPIFFS, for space. Perhaps an ESP32 (or two) would be enough to replace the three Megas. On the back end I would still go with an ARM Cortex, perhaps the MediaTek Helio or something similar. The ESP32 has super-fast serial for control, and the Helio can drive everything else and be the master node. This would use ROS, as already stated. Perhaps building it out in a model using the ROS/Gazebo combination would be the right way to go; that way you could do everything in parallel.
Hey James, the ODrives must have a way to report motor current, which can be used to sense the load on them. You could use that to estimate where the center of gravity is as the weight distribution on the feet changes.
Regarding the pressure sensors: I know it's a huge change, but redesigning the leg drives to direct belt drive (see KUKA robot arms) would increase speed, is plenty precise, is much lighter and more silent, and with an encoder attached directly to the motor you'd still have perfect position control. Aaaaaalso: by measuring the motor current you could get rid of the sensors! That can easily be done with a shunt resistor, by the way. Oh, and I am super damn impressed by your project and the progress you've made, especially the software part. I'm a mechanical engineer, so it's black magic to me. edit: I wrote this comment while watching the video. About shock absorbers: when a leg expects to touch the ground, you can use the motors as shock absorbers ... as you said like a minute later. But yeah, it'd be quite a challenge :D
Correct, I got early access to the video as a patron. So I cheated. I really enjoy feeling that my couple of bucks a month helps James along with his projects, and the perks of getting the videos a little early and the monthly live stream (which I usually miss, but catch later) give me more than what I'm paying, too. I recommend signing up.
Great work, James. Glad to see you're looking at making some material upgrades. While you're doing that, it might be worthwhile to see where you can drill/route lightening holes in some of those metal pieces. When I see a near-solid plate of aluminum, I see unloaded or under-loaded material. As I'm sure you're aware, lightening can have a positive impact on dynamic stability, as it means less weight to support and less inertia to accelerate.
WHAT ARE THOSE!? Those shoes look nice on that fellow. Now it can start working those joints more than ever. Getting closer to taking confident steps on its own.
Hi James, quadrupeds have three feet on the ground at any one time when walking. It may be a good idea to watch a dog, a cat, a horse and an elephant walking, trotting and running, and then try to mimic one of them for each mode.
I'd recommend jumping over to a Teensy 3.6 or 3.5 for the main, and then you can upgrade the slave Arduinos to a Teensy 3.2 or Teensy LC. Plenty of speed and code space; in fact, I bet you could swap out two slave Arduinos for a single Teensy 3.2.
I think you should do the plastic-to-metal upgrades first. Many "wobbly" issues may disappear without any code changes, due to the stiffer material (fewer vibrations/harmonics).
I'm not sure about this, but you could read the motor current/torque from the ODrives instead of using force sensors in the feet. Also, you could limit the current to achieve some spring-like behavior, I guess?
Right now you have a hierarchical, tree-like bus architecture. You should consider using a bus with multi-master support that every unit is connected to. This way every unit can request data from every other unit, or send data to any unit. This would give you more flexibility and faster data access.
lol. Have you considered installing a heavy gyroscope? I believe it would help a lot with stability, especially if you mount it on an XY gimbal so you can shift it around toward different corners (may not be necessary).
Two questions: have you ever tried STM32s, as they are much faster? And have you thought of trying it with AI, so that it learns by itself how to walk stably, like a baby does, by trial and error? I know that for AI an Arduino as the controlling unit is maybe a tiny little bit too small... but maybe a Pi does the job?
Having done something similar but a lot more primitive many many years ago I can appreciate why you stopped here. You will end up chasing errors and compliance issues for a long time with small incremental improvements. At some point in time you have to say it is what it is. This is about what I am going to get.
OK, I hate to make this more complicated, but I was doing some reading on plantigrade vs. digitigrade. In short, you gave openDog ankles but no knees; digitigrade animals rely on the heel and knee working counter to each other to balance and move. When you mentioned that Robot-X had large feet, that's because its whole foot is on the ground; openDog is standing on its toes.
T-nuts are fiddly; I'd recommend T-slot spring nuts. You just roll them into the slot and they stay there thanks to a spring-loaded ball bearing. As for upgrades, if it's speed you want, you could try STM32s: they're 32-bit and a fair bit quicker, plus they can be programmed with the Arduino IDE that you're familiar with. Or you could even try a BeagleBone Black, with its two 200 MHz microcontrollers and 1 GHz CPU in one tiny package with a ton of GPIO. The PRUs in the BBB were made with high-speed, timing-sensitive communication in mind.
I'd recommend an STM32F7 MCU as the main processor. I can guarantee that you will never have issues with flash size or processing power if you use one of those beasts.
I've been thinking the movement sequence for walking forwards doesn't seem natural overall. Saw this and thought of you: there's a CAD animation of a four-legged robot that seems to look more natural in its gait. Thought it may help 😁
Reinforcement learning is the Boston Dynamics route. That requires more powerful MPUs and some simulator training. It would make it a self-tuning PID controller. Sutton's "Reinforcement Learning" is a good start (not sure how familiar you are with AI design). EDIT: Maybe try an Ultra960 or HiKey Dragon, or some such board, for the MPU.
Pick up a nice used treadmill and build a simple support structure that can hold the dog hanging from a rope. That way you can eliminate the issues of balance while you work on the mechanics of walking and running.
You should add computer vision to it. You could have different protocols for when it detects different objects. You could also give it emotional parameters that change based on visual input: for example, if it sees a dog (especially a barking one) it would try to get away, since a dog would probably try to mess with it. This would also allow it to only walk in safe environments, for example not going into the street, and if you do want it to cross the street, only via the zebra crossing. I would integrate computer vision/machine learning, and also a maps API for redundancy at detecting things.
This extends off of something I've had bouncing around in my skull for tuning 3D printers: What about putting 3-axis accelerometers in the paws to measure actual acceleration against the theoretical acceleration in software? Close the acceleration loop to cancel out the inertia associated with any particular leg movement. Or is that just adding to the real-time processing lag?
What about a ball-joint foot, like what you'd find at the top of a tripod? It could still pitch and roll etc. while keeping the foot firmly on the ground, and it would work well on uneven terrain as well.
I wonder how much it would cost to start molding and replacing a lot of those parts with carbon fiber? I think some of the major issues (though, they really aren't major in the grand scheme) you're having are due to weight vs rigidity.
Are you sure you want to go with crappy sensors? Why not go for proper load cells? Even compression ones can be acquired pretty cheaply from China, together with the HX711, which is easy to integrate with an Arduino.
Hi James, looking good so far. It looks like the chassis is twisting a lot along the spine. Could this be changing the geometry enough to make the calculations wrong for the balance? Have you calculated the centre of mass and set the legs to be in line with it? Or have you designed it to have neutral balance? Or is it purely based on the tilt feedback? What is the end goal for openDog? I'd love to see this thing running, but I suspect just walking in more than one axis at once is a more realistic goal (e.g. walking forwards whilst turning).
Charles Galambos has a company and is selling these dogs. He has a bigger budget than James. Nevertheless, it is difficult to say which dog is better. At least James's dog is well documented. I have already used his code. Thank you James! :)
@Mechanicus: Charles' dog is certainly more compact. I don't know if it's because he can afford better (smaller) components, although it certainly looks like he has 3D printed some of his gears; or perhaps he has a different design principle. But I don't think he's that much further ahead of what James has got. You'll notice in the first two videos that it is doing a walking motion with one foot off the ground at a time. In this video, James is trying to lift two feet off the ground at the same time and keep it balanced long enough to move itself forward without falling over. In Charles' most recent video his dog is doing the two-feet-off-the-ground gait ("trot"), but you can tell that he, too, is still working out the details, because he keeps it in a harness. James can't rebuild his dog based upon what he sees with Charles' dog, but I think there is something he can take away from it: Charles' dog's legs move fast but their motion is still smoothed. I think that can give James hope that he can do more tuning to achieve rapid movement without inducing wobble, rather than any sort of rebuild. James' robot probably never will run because of its weight, but I don't see any evidence that it won't be able to walk perfectly well. With one exception... James does like to get to a certain point and then move on to another project. Everything kind of feels like it's abandoned at the 90% mark, probably because he has to keep making new stuff to retain his audience. I suspect that Charles is going to try and nail down and perfect his dog.
I mean, you're probably not there yet, but could you use motion capture, or at least animal-based data, to control the motion of openDog? It seems like designing the dog around the motion would make the most sense.
Maybe you could try to stabilize it first using only the IMU's gyro. Use changes in rotation as the input to the PID regulator, and try to keep it at 0. The gyro tends to be "faster", like the rate mode on quadcopters. I wrote my own quadcopter code, and I was unable to stabilize it successfully using only the IMU's calculated angle as input to the PID controller. It was just too slow to react.
The gyro and accelerometer are mixed together - about 98% gyro. The gyro doesn't know which way up is otherwise and only produces rate-of-movement data.
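That 98/2 mix is a classic complementary filter. A minimal sketch of the idea (my own function and names, not the actual openDog code; only the 98% split comes from the comment above): the gyro rate is integrated for a fast, smooth estimate, and the accelerometer's gravity-derived angle is blended in weakly to cancel the gyro's long-term drift.

```cpp
#include <cassert>
#include <cmath>

// One complementary-filter update:
//   angle      - previous angle estimate (degrees)
//   gyroRate   - gyro reading (degrees/second)
//   accelAngle - angle computed from the accelerometer's gravity vector
//   dt         - time step (seconds)
// The gyro term dominates short-term; the accel term slowly pulls out drift.
double complementary(double angle, double gyroRate, double accelAngle,
                     double dt, double gyroWeight = 0.98) {
    return gyroWeight * (angle + gyroRate * dt)
         + (1.0 - gyroWeight) * accelAngle;
}
```

Called once per IMU sample, the estimate converges to the accelerometer angle over many updates while still reacting instantly to gyro motion.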
@@jamesbruton And this mixing is what created problems on my project. There is no need for the gyro to know which way is up; the microcontroller only needs to know what the zero-rotation output is for each axis of the gyro, and as far as I can see, you want zero rotation speed when the robot is standing on two legs. I made a stabilized mode for my quadcopter, as well as a rate (gyro) mode. The stabilized mode used two "daisy-chained" PID controllers for both pitch and roll. For example, the pitch PID controller took its input from the radio controller's pitch joystick. Say you wanted 5 degrees of pitch: the PID controller would take this as an input, compare it to the IMU's measured pitch angle, then output the "wanted gyro rate" required to reach that pitch angle. This output went into the second PID controller, which compared the wanted gyro pitch rate to the actual gyro pitch rate, then told the thruster allocation how much thrust was needed in the pitch direction to achieve the wanted gyro pitch rate. The thruster allocation then translated this into actual motor controller commands. Exactly the same was done for the roll axis. The yaw only used gyro-rate stabilization, as I never really got the compass to work in the IMU code. If I were writing the code for your robot beast, I would attempt gyro-only stabilization: tell the PID controller to keep the gyro rate at 0 deg/second for the pitch and roll axes. It will drift a little, but my feeling is that it will be stable on gyro-rate stabilization for long enough between each of the robot's steps, and it will most likely be more responsive. This method worked way better than stabilizing using only the angle from the IMU (which is calculated by various ways of combining gyro and accelerometer). Not sure how it will translate to your project, but that was just my, uh, 250 cents.
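The daisy-chained structure described above can be sketched in a few lines. This is my own illustration of the cascade, not the commenter's quadcopter code; it is P-only for brevity, where real loops would add I and D terms:

```cpp
#include <cassert>

// Cascaded controller, P-only sketch:
//   outer loop: angle error -> desired gyro rate (deg/s)
//   inner loop: rate error  -> actuator command
double cascadedCommand(double kpAngle, double kpRate,
                       double targetAngle, double measuredAngle,
                       double measuredRate) {
    double desiredRate = kpAngle * (targetAngle - measuredAngle); // outer loop
    return kpRate * (desiredRate - measuredRate);                 // inner loop
}
```

The gyro-only mode the commenter recommends is just the inner loop on its own, with `desiredRate` pinned to 0 so the controller fights any rotation.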
What did you do with the AUX port on the ODrive drivers? Did you attach the braking resistor or not? What were the results? Let me know. And great job!
If I can make a suggestion: try to stop thinking like a programmer and think like an animator. Break your movement cycle into known key points of movement and code those in as poses to hit at given times in the walk cycle. Transitioning between your key poses will give you your walk cycle; then simply transition from standing still to walking and vice versa. Loving the projects.
I use a Raspberry Pi, several ESP32s and Arduinos, all connected with CAN bus, for my home control system. It's low-cost, reliable and easy to use. I think you would find it a good alternative to serial. I use MCP2515 CAN bus module boards (TJA1050) with the Arduinos, and SN65HVD230 CAN bus transceiver modules with the ESP32s.
It would be cool to see if a neural network (or some other ML method) could be used to balance the robot. The problem, maybe, is how you gather the data needed to train the model. Perhaps the NEAT method and a simulator?
Something that has been lurking in my mind for a few episodes now is the code you introduced that maps 0 to 0.01 to avoid division-by-zero errors. This may not be an issue, but I've wondered for a while whether that could introduce a cumulative error into the model, based on how often each leg encounters a 0 in its maths.
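For reference, the clamp pattern being described looks something like this (my own sketch with made-up names, not James's actual code). Whether the error accumulates depends on how it's used: if the clamp is applied fresh each cycle to a divisor recomputed from absolute targets, the error stays bounded per calculation rather than compounding, but that's an assumption about the code structure.

```cpp
#include <cassert>
#include <cmath>

// Keep a divisor away from zero: values within +/-eps of 0 are
// mapped to +/-eps (the 0 -> 0.01 mapping mentioned in the comment).
double safeDivisor(double x, double eps = 0.01) {
    if (std::fabs(x) < eps) return (x < 0.0) ? -eps : eps;
    return x;
}
```

Values outside the dead band pass through untouched, so the distortion only ever affects the near-zero cases.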
Yeah, the lag along that data path is pretty massive. Also, 20 Hz is a bit slow for the IMU to keep a system like that balanced. I don't know if there are ones that can be polled faster, but starting with such a slow sample rate is just going to make any delay from data to movement even worse. You may want to add a bit of a shock absorber to the feet: in a real animal, the muscles and joints have some give, and that pressure is sensed by the body to help with balance. I would also suggest using two pressure sensors on each foot so that the robot can tell if it is coming down on its toe, its whole foot, or somewhere in between. This would allow a simple adjustment algorithm to act on the kinematic model and give it better balance, since it would know what angle it is contacting the ground at. Considering that quadrupeds have ankles, I would also add ankles and give them the ability to lock in place in case you want to try a running gait or a gallop. I think most of us would be happy with it just walking without bumping into things, or being remote-controlled. This is all bio-mimicry based, but why reinvent the dog when we can just copy it?
So I was thinking. I'm no engineer, especially not a physics person, but I noticed that at the higher speeds the feet almost seem to be punching the ground rather than stepping back down, even though the leg should know how far down to travel to zero out (i.e. reach the ground). My thought was that as humans, we can tell without looking when our foot is starting to touch the ground, because we have the physical sensation of touch as soon as our heel makes the slightest contact. You can't directly bestow this on a robot, so you almost need to predict when the foot is nearly at the ground, slightly slow it down, and soften the landing. I know you left space in the feet for Hall effect sensors, but from what I've seen, these wouldn't let the robot react fast enough to slow the foot down. So instead, how about using an optical or sonic sensor attached to the leg? That way you can measure distance to the ground from much farther away, calculate when to slow down much earlier, and have the robot react on time. I hope that made sense.
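The predictive slowdown idea above can be captured with one standard kinematics formula. This is my own sketch of the approach (names and the deceleration-limit idea are mine, not from the video): given a range-sensor reading of the distance to the ground, the foot's downward speed is capped so it can always stop within that distance, via v = sqrt(2·a·d).

```cpp
#include <cassert>
#include <cmath>

// Cap the foot's descent speed (m/s) so that, at a chosen deceleration
// limit decel (m/s^2), it can come to rest within distToGround (m).
// Derived from v^2 = 2*a*d: the foot naturally slows as the ground nears.
double maxDescentSpeed(double distToGround, double decel) {
    if (distToGround <= 0.0) return 0.0; // at or past the ground: stop
    return std::sqrt(2.0 * decel * distToGround);
}
```

With 2 cm left and a 4 m/s² limit, the cap works out to 0.4 m/s; the allowed speed shrinks smoothly toward zero at contact, which is exactly the "soften the landing" behaviour described.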
I wonder what machine learning could do for the robot. You would have to export and assemble the dog's parts in a 3D physics engine so that you could train it, and then use that AI model to drive it IRL.
The most difficult part would be BUILDING that model. What is your goal? What are you going to train it to do? In this video he talked about how the robot is rigid and uses mathematical models, and machine learning is just the progression of that, but it's still rigid. You need to be able to write a formula that calculates just how well the robot is walking, and that is NOT an easy task for such a complicated problem as walking.
Exactly what I thought. Look for the videos from "Code Bullet"; this guy makes AI systems learn all kinds of things. I would think it could adjust all the timings and angles to make itself walk better and smoother, but without an accurate simulation it's probably not practical: learning on the hardware might take the robot years. (And also, the whole robot is still so much in the basics... maybe it's way too early to start this kind of fine-tuning.)
There are already AI simulations that have robots (in simulation) teaching themselves how to walk successfully. I don't know if any of them have been applied to real world robotics.
@@williamrutherford553 The reward function shouldn't be that hard for walking. And building the simulated dog would require the CAD models to be converted to standard 3D models; a physics system would handle the rest. Having the motors and other sensors properly mapped to a virtual environment would be a lot harder, and in the end some real-life learning would probably be needed to refine the model. But one could get very robust and organic movements. Do note that Boston Dynamics does not use machine-learning AI for movement.
@@mm-hl7gh I like Code Bullet too, but he doesn't really make AIs; he basically makes programs that brute-force the result. His AIs do no learning; they just do a bunch of random movements (or whatever) and pick the best result using his objective function.
To be honest I haven't had a close-up look at your code, but my guess is the processing power you have in the dog would be more than enough to handle the proper control loop and filtering you want. The problem is the inefficiency that comes with the Arduino in exchange for the easy, versatile, ready-to-use libraries. If I were you I'd take a closer look at the code and try to optimise it a bit. For instance, floating-point operations take a whole lot of time on those chips because they do not have a hardware FPU; try to scale to integer data types, e.g. using a "proprietary" fixed-point format wherever possible. Try to use macros and low-level commands (can you use inline assembler with an Arduino?) rather than universal functions, because they can waste precious processing time, especially if they are in your control loop. I don't know how this "EasyTransfer" library works and whether it uses the controller's SPI hardware; if not, SPI is definitely the fastest way to get data from one controller to another. Try to minimise the overhead of the communication. Of course, it's a lot more work to optimise the code, or to write optimised versions of the libraries yourself, than to stick with premade libraries, but I think it's a fair alternative to just adding more hardware. If you really want to optimise, some custom hardware with a properly chosen controller may be a good idea, but that would be a major increase in complexity and kind of ruins the "open source" style of the project, so I think that's not the right way to go here. However, I am really impressed by the project so far; really outstanding work!
I have no idea what is happening... but it's pretty cool :p It's quite funny watching a guy just casually sitting right next to what I'm sure is a very heavy robot, and just grabbing it by hand when it's about to tip over, after watching the Boston Dynamics videos in which they're clearly very cautious about safety.
Nice job. The mass is definitely an issue causing the lack of stability. You might want to watch a few videos on how the Star Wars Imperial Walkers were stop-animated. The mass appears to be more in relation to an elephant than a dog, so the coding could be expanded to move one foot at a time, including a check against a gyro.
I would maybe try using a cycloidal gear drive instead of the pulley system. It would be more compact and so would result in less creak, as fewer parts are reliant on each other.
I know you’ve since solved this stuff, but it strikes me that using TensorFlow and AI with some basic constraints, it could teach itself how to walk smoothly. I suspect your code would be vastly smaller, especially if you could use image recognition to watch a real dog, pick out the joints, and feed those into the learning model.
A 200 MHz chipKIT WiFire board would make an excellent upgrade to all of those... Even a 40 MHz PIC32 is waaaay faster than an 8-bitter, and easier to program than ARM too.
If you want to get serious about data processing, you really should consider using an FPGA: you can have dedicated hardware for PID and other background processes, and an IP block for a microcontroller to handle the more complicated tasks. I would recommend a Xilinx FPGA, because ARM has recently made some of their Cortex IP available for free on Xilinx chips.
What's the little box on that really long single cord going into the 'bot at around 17:58? A USB extension adapter? I'm guessing it runs from your laptop to the MCU inside the chassis; I'm just curious about the little box.
Since there is such hype about this topic, I would be quite interested in how certain machine-learning techniques could be incorporated into this project. Maybe have some neural net figure out the best parameters and timing for all this balancing stuff; just leave it running overnight or so (probably faster to run a simulation and upload the trained neural net).
ESP32 time then... To begin with, you can run the same code from the Mega, as Espressif's Arduino core is pretty solid. So it can be a progression rather than a complete rework on the software side.