I liked the mechanism of the Discriminator and the Generator; it really makes sense when I think of my own drawing decision process. I have a suggestion, if you don't already do it: I think you should let the generator start the process. You don't have to start painting with an idea of what you'll finish with! The generator can start with lighter strokes (like a digital draft), then the discriminator can just watch and say, "Wait, those curves there look like an elephant (by some aesthetic criteria), now go on and complete the drawing into that elephant." And then: what might an elephant do, and where could it be, with what other objects? Those can be the next drawing objectives and steps. This sounds like a more creative process to me, and it is how it works when I improvise a drawing. Also, I wonder if you use any force control loop to adjust the brush strokes, or is it just visual feedback control?
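The generator-first loop I'm imagining could be sketched roughly like this. To be clear, everything here is a made-up placeholder (the class list, the random "classifier"), not anything from the actual CloudPainter code; it just shows the order of operations I mean:

```python
import random

CLASSES = ["elephant", "cloud", "tree"]  # hypothetical shapes the discriminator knows

def generator_draft(n_strokes=5):
    """Lay down light, aimless strokes (here just random 2D points)."""
    return [(random.random(), random.random()) for _ in range(n_strokes)]

def discriminator_guess(strokes):
    """Stand-in for a classifier scoring the draft against known shapes."""
    scores = {c: random.random() for c in CLASSES}
    return max(scores, key=scores.get)

def refine_toward(strokes, target, n_steps=3):
    """Add strokes that, in a real system, would raise the classifier's
    confidence in `target`; here they are just more random points."""
    for _ in range(n_steps):
        strokes.append((random.random(), random.random()))
    return strokes, target

# Generator starts with no objective; the discriminator's guess becomes one.
draft = generator_draft()
objective = discriminator_guess(draft)      # e.g. "elephant"
painting, label = refine_toward(draft, objective)
```

The point is just that the objective emerges from the draft instead of being fixed up front.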
Cool idea. One of the reasons I call this project CloudPainter is that I want my robots to be able to look into the sky, see clouds, and, in your words, say "Wait, those clouds there look like an elephant," and then begin painting an elephant.
Pindar Van Arman Oh, beautiful :) And what about my second question? For my master's thesis, I studied force control of a humanoid robot learning and then performing writing on a movable platform. It learned a force trajectory to follow, and I thought it might be useful if one day an artist wanted to teach a robot kinesthetically how to draw (which is not what your project is about :) ). That's why I wonder whether your robots need any force control to adjust their line quality.
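For what it's worth, by "force control" I mean something like the toy loop below: a PI controller on measured pen force, so stroke pressure (and hence line width) stays consistent. The gains, the linear stiffness model, and all the names are invented for illustration; a real robot would read a force/torque sensor instead of simulating one:

```python
def force_control_step(target_force, measured_force, integral,
                       kp=0.8, ki=0.1, dt=0.01):
    """One PI control step: return a z-axis velocity command and the
    updated integral of the force error."""
    error = target_force - measured_force
    integral += error * dt
    z_velocity = kp * error + ki * integral  # push down when force is too low
    return z_velocity, integral

# Simulated run against a toy "paper" that pushes back proportionally
# to how far the pen is pressed in (stiffness = 10 N per unit depth).
force, z, integral = 0.0, 0.0, 0.0
for _ in range(200):
    cmd, integral = force_control_step(1.5, force, integral)
    z += cmd * 0.01          # integrate the velocity command
    force = 10.0 * z         # toy stiffness model
```

After the transient, the pen force settles near the 1.5 N target, which is the behavior I'd want for consistent line quality.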