@@UltraDestroya48 I used Blender to extract the individual frames (images) from the original Bad Apple video. In Blender, set 'File Format' under Output Properties to PNG (the default), and it will render each frame as 00001.png, 00002.png, ..., 0000N.png.
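If you later want to script against those rendered frames (e.g. batch-feeding them to ControlNet in order), here's a tiny Python sketch of that naming scheme. The five-digit zero padding is an assumption based on the example names above:

```python
# Hypothetical helper: reproduce Blender-style zero-padded frame names.
# The 5-digit width is assumed from the 00001.png / 00002.png examples.
def frame_name(n, digits=5):
    return f"{n:0{digits}d}.png"

print(frame_name(1))    # 00001.png
print(frame_name(42))   # 00042.png
print(frame_name(1234)) # 01234.png
```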
Might be easier for us mere humans to parse this if the postures were isolated from the frames and made stable/smoothly animated, and then the individual frames made from those postures without further stabilizing factors.
@@Relkond Yep; this was my first attempt using Stable Diffusion, so I only knew so much at the time. It takes time to learn how to do things smoothly.
I'm trying to work on a new version using AnimateDiff so it will be smoother. It's tricky to get just right though, so it will be a while yet before it's releasable.
Nice work! It is interesting to see the AI flipping between [the character facing towards the view] and [the character facing away from the view]. Even for humans, we need to search for clues. Around 0:52, before Flan turns, it looks like your AI is quite well trained and has a bias that prefers her side ponytail on her left, albeit there are a few frames in which the AI got it wrong (resulting in her side ponytail being on her right). There are scenes in which it is valid to interpret the character as facing either way. It's as expected that the AI would flip between the two, because in the workflow you used, the consistency between frames has yet to be considered.
Thanks! Getting the head/body oriented correctly was definitely a challenge. If I recall correctly, my prompts had things like 'facing forward' or 'facing backwards'. But even with applying weights to make that phrase stronger, it still often couldn't get the correct orientation, mainly when the original art wasn't obvious about which way a girl was facing.
Thanks! I found one of the biggest bottlenecks for me was the time it took to design a scene I was happy with. Over the course of the month it took me to develop this video, the cards spent more time waiting for me to give them something to render than actually rendering. I'm sure a 4000-series card would have helped, but I don't have any computers a 4080/4090 would fit in, which means I would have to build a new one. If I keep working with Stable Diffusion, I will most likely build a new computer with updated specs. I'm looking at Parseq next.
Thanks! For Eiki, Stable Diffusion had trouble because the original video is split down the middle, both the background and Eiki herself. So for the prompt I went with: (eiki shiki with a wooden sword:1.3), (hellish landscape:1.3), (inside a court room:1.5). Basically going with a fire theme because, if I recall correctly, she resides in hell, and the courtroom for her role as a judge.
Anyone else think Flandre looks more terrifying here, since she looks kinda skinwalker-ish with all that flickering? It made me think of Scooby from Velma Meets Original Velma: those kinds of spine-chilling vibes, but watched from outside the area at a safe distance in that kind of detached manner.
Thanks! I didn't actually use LoRAs for motion; the motion comes from the original source Bad Apple!! video. I'm hoping AnimateDiff stabilizes soon; it has a lot of potential for making smoother animation.
@@binarypearl i mean for coloring in the characters based on the original outlines since it's trying to fill them in (sometimes very weirdly lol. sd 1.5 makes me want to rip my hair out)
this is so stimulating i cannot very good video very uhhhh very my eyes are hurting from the fact that AI can't decide on painting the background or not
:) There is an extension called AnimateDiff that holds a lot of promise for making smoother animations. It's not stable enough with ControlNet yet, but once it is I hope to make another smoother version.
If I recall, the model I was using had trouble understanding 'Eirin', so I switched the prompt to just 'nurse'. Assuming the model knew 'Anne' as a 'nurse', it chose her instead.
:) I think Stable Diffusion was trying to be realistic with the physics of Nitori being upside down, but it kind of got it reversed. I debated whether to try to fix this or leave it as is... I opted to leave it as is.
I'm actually looking at something called AnimateDiff to make another version with much smoother transitions. There is a known bug between AnimateDiff and ControlNet that is preventing me from moving forward though. Once that is resolved, I hope to pick this back up. I might release a couple short clips from my first attempts at using AnimateDiff before the bug was introduced as a preview.
Nope, YouTube didn't place any restrictions on this video. In the description I link to where I got the original video file, and to the YouTube song/channel of Masayoshi Minoshima / ALSTROEMERIA RECORDS for credit.
Stable Diffusion's default resolution is 512x512. When I first got into Stable Diffusion, there was an overwhelming number of parameters and sliders to adjust. My understanding (at least at the time) was that 512x512 was 'optimum' for generating images in Stable Diffusion, so I didn't give it much thought and left the resolution at the default 512x512. But you are correct to point out that the original resolution of 'Bad Apple!!' was 480x360. I'm hoping to create a newer version with much smoother animation using AnimateDiff. There is currently a known bug preventing me from using AnimateDiff with ControlNet; once that bug is resolved, I can continue. In the new version I am planning, I will keep the correct aspect ratio (which is 4:3). In my testing I found that generating the images at 640x480 in Stable Diffusion and then upscaling to 4K looked the best. It still takes an immense amount of time, even on a very high end video card, to generate the images for an entire 3+ minute video with Stable Diffusion at 640x480.
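As a quick sanity check on those numbers, here's a small Python sketch (illustrative arithmetic only, not part of my actual workflow) showing which render sizes keep the original 4:3 ratio:

```python
from math import gcd

# Reduce a width x height pair to its simplest aspect ratio.
def aspect(w, h):
    g = gcd(w, h)
    return (w // g, h // g)

print(aspect(480, 360))  # (4, 3) -> original Bad Apple!! resolution
print(aspect(512, 512))  # (1, 1) -> SD default, distorts a 4:3 source
print(aspect(640, 480))  # (4, 3) -> keeps the original ratio
```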
@@binarypearl i understand that, and sorry if my wording caused offence. i mean it's interesting to see the AI try to adapt to the source content (with the wrong aspect ratio) with some additional/other generated content (that might be unexpected but surprising). looking forward to seeing more of your videos!
@@farteryhr No worries; no offence taken at all. You brought up a good question, and I like explaining what goes into the content I make so others can learn from it as well.