The ZX Spectrum was a huge-selling computer that came out in 1982, as far as I can remember. I'm an old dude (56 years old) and had the ZX Spectrum with many games. The game shown in the graphics is Knight Lore, developed by Ultimate Play the Game, who were later rebranded as Rare. The other game is from the Dizzy series, actually developed by Codemasters at the time.
@@nickgirdwood3082 Yes, it was huge. It was one of the first personal computers with colour, acceptable graphics (for the time), and it was relatively affordable. You just connected it to the TV. There were hundreds and hundreds of games. You have to understand there was very little competition. I had one. The games took just 5 minutes to load (that felt short back then), and you loaded them with a regular music tape player, so if you duplicated the tape, there you had a copy of the game. I can assure you there were hundreds of thousands of sales. We're talking about over 40 years ago. They had a similar clone in the US, but it wasn't called the Spectrum. To get an idea of the Spectrum's merit: it could run very complex programs in under 16K. Early 3D games on machines like the Spectrum paved the way for the shooters that followed, like Wolfenstein 3D and Doom.
Don't forget the Amstrad CPC 464... less popular than the others, but getting one for Christmas as a 7-year-old kid in the '80s was like the future! :D
Worth noting for those wanting to use this commercially: looking at the terms, I believe the LoRA can be used commercially on Replicate if you created it there, but it falls under non-commercial terms if downloaded and used locally.
Yeah, so only for commercial use on Replicate, like I said :) LoRAs have the same licence as Dev, so LoRAs can't be used commercially locally, even ones trained on Schnell, sadly!
Dude, that was machine learning 101 and you nailed it in 15 minutes. Ding! Seriously, levelling up those parameters with retro pixel art and showing it can make voxels from pixels. You just took No Man's Sky and point-cloud art to the masses. Pat yourself on the back, lad, and have a drink. 🎉
I've done a couple of Flux LoRAs on Replicate and yeah, it was just under $3 each and took about 40 minutes each. If you head over to Matt Wolfe's channel, one of his videos gives you $10 free for use on Replicate. It's worth noting that the same settings on CivitAI cost about $2, and on Fal I think it was $5, but I haven't used that.
So a new job emerges: a branding fine-tuner... a person who uses AI or Photoshop (or both) to create 300 images for a LoRA that fine-tunes an image model... and now their client has artwork on demand for any campaign.
This is an awesome tutorial, kudos to you! I currently have a beastly 4090 sitting on my desk, though, and I'd like to use it. Is there any chance you could make a somewhat easy Windows tutorial? I've already tried this but failed.
I'm loving Flux.1, but my biggest complaint is that the ControlNets and IPAdapters haven't been fully updated to support it yet 😂 Once those happen, I'll be converting most of my ComfyUI workflows to use it. Being able to train custom LoRAs so easily has insane potential too! 😮❤
I've noticed that when I say "in the style of TRIGGER_WORD" when referencing my LoRAs, the result typically misses. What I would do instead, to follow along with this video, is modify the prompt to be something like "Create a TER image of a dog." The 16-bit style is already implied from the images you fine-tuned the LoRA on. If at all possible, give that a shot and let me know if it helps. Curious whether this tweak results in better output for others as well.
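If you script your generations, the tweak is easy to apply programmatically. A minimal sketch of the two prompt shapes, assuming a hypothetical trigger word "TER" (yours is whatever you trained with):

```python
# Two ways to reference a trained LoRA in a prompt. The trigger word
# "TER" and the subject are illustrative placeholders.
def style_prompt(subject: str, trigger: str) -> str:
    # Often misses: the model treats the trigger as a named art style.
    return f"{subject} in the style of {trigger}"

def token_prompt(subject: str, trigger: str) -> str:
    # Usually lands better: the trigger modifies the subject directly,
    # and the fine-tuned style is already implied by the LoRA.
    return f"a {trigger} image of {subject}"

print(style_prompt("a dog", "TER"))  # a dog in the style of TER
print(token_prompt("a dog", "TER"))  # a TER image of a dog
```

The second form keeps the trigger word as an adjective on the subject rather than asking the model to resolve it as a style name.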
This generator is awesome, and it actually generates someone holding a bow correctly. I had to add "and arrow" to "holding a bow at the ready," but it's still good. The first time, it just generated a character in a green hood with a green bow on his neck, like the accessory. But the half-illustration and art LoRAs generate child-like characters, nothing like an adult, even with "realism" or "realistic" or whatever. "Guy" means little boy, apparently. I want to create a grown adult holding a bow, not an 8-year-old.
Yeah, I'm looking at using ComfyUI to do heaps of stuff, converting old workflows from SDXL to Flux: Flux Upscaler, Flux Face Swapper, Flux ToonCrafter.
Yes, I think Flux is awesome. I tried Stable Diffusion on MimicPC, and of course that product also includes popular AI tools such as RVC, Fooocus, and others. I think it handles detail quite well too; I can't get away from detailing images in my profession, and this fulfills exactly what I need for my career.
The unmodified model is far more obtuse than SD 1.5 regarding the human body. The only equally obtuse model so far is SD 2.0. DALL·E 3 (un-nerfed) is top, with Ideogram coming second.
@@brexitgreens Yeah, I'm loving Flux but getting the body type I want, especially chest size, is very hit and miss. It seems to be trained mainly on supermodels.
How do you get back to the page if you happen to close it, so you can generate again? And how do you get back to the section to download the weights for your trained LoRA? Help is appreciated.
Man, I so miss those old fun isometric games. There were a lot that came out on the old 8-bit and 16-bit machines. You should get yourself a few emulators and start playing some of the old games (you may have to slow the emulator down, though). There were some all-time classic games to play.
@@MattVidPro I agree. Though seriously, mate: education makes us human. Don't be a cheap knock-off. Even AI will respect you more if you demonstrate refined humanity. #BackToBasics
What is it with these AI models and chin clefts? They apparently love them and, sometimes, it's pretty difficult to convince them to generate a chin with no dimples or indentations. I keep trying different combinations on prompts and negative prompts but it can still be pretty hit-n-miss. 🤔
The LoRA the Explorer page does not work: after I choose a style, give it a prompt, and press the generate button, it shows an error after some time.
I just want to know whether uploading images to create or train the model somehow sends them to the project as a whole for others to use. I'd like it to be purely locally trained, so none of my personal work goes any wider than me.
Treasure Island Dizzy was a great game back in the day. The egg guy had loads of different titles; Dizzy rules. But the king of the ZX Spectrum was Chuckie Egg!
Are any of these good for 2D game-dev assets for skeletal animation in Spine? I want to apply the render to my sketches, like with SD 1.5 and ControlNet, where I post a draft sketch and a prompt and it paints on top of my sketch. But I still can't control it well enough for animating body parts in 2D :(
Gotta say, as someone who has done pixel art for quite some time, I can instantly tell whether something is real pixel art or not. Up until now, neither SD 1.5, nor SDXL, nor Midjourney has managed to fool me, not even once. But that ZX Spectrum one almost passed the test; only when I focused on the grid of pixels did the illusion completely break, and I could see it was made by AI.
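That grid check can even be automated. A rough sketch using NumPy, assuming the art claims an integer upscale factor (here 4): real pixel art snaps to flat N×N cells, while diffusion output usually carries sub-pixel noise that breaks the grid.

```python
import numpy as np

def snaps_to_grid(img: np.ndarray, block: int) -> bool:
    """True if every block x block cell is a single flat colour,
    i.e. the picture sits on a clean pixel grid like real pixel art."""
    h, w = img.shape[:2]
    if h % block or w % block:
        return False
    cells = img.reshape(h // block, block, w // block, block, -1)
    # A cell is flat when every pixel equals its own top-left pixel.
    return bool((cells == cells[:, :1, :, :1]).all())

# Synthetic demo: 8x8 pixel art upscaled 4x passes; adding per-pixel
# noise (like a diffusion model's output) fails the check.
np.random.seed(0)
clean = np.repeat(np.repeat(np.random.randint(0, 255, (8, 8, 3)), 4, 0), 4, 1)
noisy = clean + np.random.randint(0, 2, clean.shape)
print(snaps_to_grid(clean, 4), snaps_to_grid(noisy, 4))  # True False
```

In practice you would also need to detect the grid origin and size first, since AI output rarely lands on a known block boundary.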
That's cool, but I'm interested in something like Krita with the AI Diffusion plugin, only for pixel art, because you can immediately fix something you've drawn with AI. Sadly, I can't find any YouTube videos about it.
Also, I only run fp8 locally, so I assume training a LoRA for one of these models will be less expensive: if fp32 training takes an H100 with 80 GB, then at 8-bit, 24 GB should be enough.
Training can't be done unless you have 24 GB of VRAM. However, it's really easy to set up Flux locally using ComfyUI. I'm running Flux Dev locally on a 3090 (using about 12 GB of VRAM, with 64 GB system RAM). You can run Flux Schnell on a lot less.
"Can't get mad at me because it was a little ahead of my time..." So was fire. And the wheel. You know what those are don't you? But, nah, it wasn't that big of a deal. You do know the Commodore 64 and the Apple II+, right?
You can't compare a single image generation from one model to another. To be fair you should be comparing hundreds of image generations. Also, Grok may modify your prompt so that makes it even more difficult to compare.
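To put a number on that: a head-to-head win rate from a handful of images has a huge confidence interval, and only hundreds of comparisons make it meaningful. A quick sketch with a normal-approximation interval and illustrative counts:

```python
import math

def winrate_ci(wins: int, trials: int, z: float = 1.96):
    """95% normal-approximation confidence interval for a
    head-to-head win rate between two image models."""
    p = wins / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return p - half, p + half

# One or two images tell you almost nothing; hundreds narrow it down.
print(winrate_ci(3, 5))      # roughly (0.17, 1.03): useless
print(winrate_ci(300, 500))  # roughly (0.56, 0.64): a real signal
```

Both runs show the same 60% win rate, but only the larger sample distinguishes it from a coin flip.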
Some believe the moon landing was actually shot on Earth as a set... Not that I know anything about it personally; my grandfather worked on the lunar lander project after he retired from the military, so who knows. But anyway, apparently pixel art in TER style also believes the lunar landing happened on Earth. In room 237... because I know you've got the shine.
AI technologies like SmythOS are revolutionizing efficiency! They make it incredibly easy to enable complicated multi-agent cooperation. In what ways are you using AI in your projects? #Innovation #SmythOS #AI