I was a bit skeptical at first, but after watching a few of your videos I think I understand a lot better now. In the beginning I thought this was going to be a very lazy “let the AI do the work” kind of thing, but now I see that it's a lot more hands-on and doesn't negate the need for an artist with a vision.
Thank you! I’m glad you appreciate that some thought needs to go into it. I’m from a creative industry, so I know that you need to have input to get any sort of professional result.
I get why you start to speed up the tutorial after you have shown the basics, but somehow I would love to see it at normal speed till the end; it's super satisfying to watch you create this piece. You are one of the first people I found using AI really as an artist. Most just use trial and error with the prompt, but you have a clear image of what to produce and work to control the AI, not the other way around :)
@@albertbozesan I agree it would be nice to see a thorough walkthrough but I will say you give the right amount of detail before you go into speed painting mode.
This is actually good news for concept artists. The more people think this is concept art, the more the quality and authenticity of portfolios will drop, and the value of actual artists will rise. Concept art is not about making pictures; it's a very involved process with a lot of iteration driven by feedback from an art director to achieve visual solutions to production and narrative problems. Communication and teamwork are key, so a tool like this will do no good. Even if you know what you're doing, it will just be a hassle for you and a waste of time for the whole team.
Humans do it best, right? It must be really painful for people working in fields once thought to be the exclusive playground of human beings, now that science is beginning to show that creativity on demand is nothing more than tying the right dots together in a vast, (for humans) incomprehensible, ever-expanding network of source material and machine-made algorithmic approaches.
@@bartlx The source material making this possible is human, though? It is an exclusive playground for human beings (excluding any theoretical alien life). If you took the creations of other Earth animals and created a similar network, you wouldn't get these results.
I've worked in game studios where the 'concept' artwork was photos ripped out of magazines and newspapers, glued to a sheet of paper. The game was a hit on Steam, and it was a big-budget AAA game. Other times we just made blockout shapes in Blender and the developers would tell us what else to do. No need for pretentious concept artists.
As a concept artist, this looks like a very cool tool. I'll try to use it in my workflow; with real knowledge of concept art, this could be a really powerful tool.
Yes! I don’t think this tech will replace skilled artists at all, unlike what others say. It is so important to bring art knowledge into the workflow to get great results. This will give you superpowers!
@@Iamwolf134 Do you know anybody who doesn't want it cheaper? The client is not the problem. The problem is the artist who can leverage these tools to do it cheaper and quicker than you.
This is going to make the work of concept artists and illustrators much easier. I just hope they don't get overworked simply because the AI made everything easy and they're therefore expected to work twice as hard.
I think that without major cultural changes this will always be the case. We will always find a way to squeeze things to within a hair's width of the breaking point. Automation hasn't helped any industry see working conditions improve: either productivity went up without any benefit to the workers, or people were laid off and productivity stayed the same.
This video was the very first video I watched about three or four weeks ago. OMG has so much changed, but some of the basics here are still useful, and I still keep coming back to reference the workflow.
@@albertbozesan Mind you, this was also where I got my first taste of how stupid fast this tech is advancing. "Wait, my UI didn't have that option. It's phrased completely differently." Crazy how when YouTube suggests a video about SD or AI art that I hadn't seen, if the video is older than 3-4 weeks, I have to pick through it to glean what hasn't been updated/made easier since then.
I actually predicted that there would be a workflow like this when Nvidia released their GauGAN AI tool three years ago. Seeing it come true now, just three years later, scares me and makes me happy at the same time. I just can't imagine how extremely easy it will be to realize any visual idea in the next one to five years.
Been searching high and low, for a month, trying to find a tutorial about Stable Diffusion that I could actually follow (as I have zero coding experience)... but I am a pretty good illustrator (if I do say so myself - lol). Thank you, thank you... NOW I will try an install and knuckle down!!! Great instruction method. Great voice. Great tutorial~!
It works like keywords for SEO. The more keywords you use, the more data it knows to scan through. But if you go too far with the keywords, it will counter itself and be left with fewer and fewer data sets to pull from. It is an "art" form in itself and changes based on the data set model we use.
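To make that keyword effect concrete, here is a minimal sketch using the open-source diffusers library; the model id, prompts, and seed are all illustrative, and fixing the seed isolates what the extra keywords change:

```python
# A minimal sketch of how extra keywords steer a generation.
# Assumes the "diffusers" library and a CUDA GPU; all values are examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = "a castle on a cliff at sunset"
styled = base + ", concept art, highly detailed, dramatic lighting, 4k"

# Re-using the same seed means any difference comes from the keywords alone.
plain = pipe(base, generator=torch.Generator("cuda").manual_seed(42)).images[0]
rich = pipe(styled, generator=torch.Generator("cuda").manual_seed(42)).images[0]

plain.save("plain.png")
rich.save("styled.png")
```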
Just a really small detail, but I always throw in Filter -> Camera Raw Filter (not sure which version of PS you have) and play with the Basic and HSL tabs. Even if something is just quick concept art, if you like pretty colors it's always worth spending a couple extra minutes 👍
Apparently, researchers from Adobe are working on something called Pix2Pix-Zero using Stable Diffusion; basically img2img, but retaining your input almost exactly while only changing its contents. If that's going where I think it's going, Photoshop's gonna jump to a new level. Basically like this video, but without an external Stable Diffusion UI.
This is extremely awesome. I'm decent at art and already do a lot of photo-bashed work that I just blend and paint on top of. This would greatly speed up the process, save me a lot of effort, and on occasion come up with unique elements better than my initial imagining. Very cool, can't wait to give it a try.
The world is a better place because of people like you, man. Thank you for sharing this. I am sure this will turn the "NO AI" bashing into something constructive.
THIS is the kind of stuff that all the AI art haters out there despise. They've preached for years how you must put in at least 10,000 hours before you can even get close to their level of talent and YouTube stardom, but now almost anyone can bring their own napkin sketches to life through tools like img2img. I think what hurts them the most is that people will stop worshipping them and forking out thousands for their online classes... although I actually think AI and tools such as img2img will bring them MORE business, since even more of the masses will get into image/art creation, and there's still a very strong need to know the fundamentals of art.
While I don’t agree with your point about the “worshipping” - not many working artists ever have been, they’re not in any sort of “elite” or similar - you are probably right about more courses being sold in the future. I’m brushing up on my digital painting skills and have paid for art training in the past :) may do so again soon!
Well, I'm very interested in a free method for outpainting. DALL-E2 has it, and it's great, but people have to use up lots of precious credits for that feature. I hope Stable Diffusion will have a UI version with a similar (and free) technique.
I guess it goes like this: real humans label some images with words like 'detailed' and 'realistic', while others like '4k' are just automatic. Images with these labels tend to have a certain style: '4k' might imply a high level of per-pixel detail, while 'detailed' may imply a high number of objects in a single image, so maybe these two together will produce images with a lot of objects that have high-res textures.
It's unlikely that real humans are doing the labelling. There might be some checks and initial hand-curated data, but AI models are well past the point of doing image recognition and element abstraction.
Many thanks for this - I was never really sure how Stable Diffusion could be used in practical terms, and now I know! Great tut! Extremely informative and nicely paced. Looking forward to the next one!
Wow. Thank you for the valuable information about AI art. I am an artist myself and curious about people fearing that AI will take artists' jobs. AI at this stage is not scary, but it will one day take over artists' jobs. Even Elon Musk feared AI. Thank you so much for the simple tutorial video. I will try it and do my first concept art. =)
I don’t think AI will take artists’ jobs. It’s quite obvious to any professional that you need an artist’s eye and skillset to really get controlled and reliable results - super important for real work in any creative industry. Best of luck with your projects!
I can foresee the future regarding this: YouTube challenge videos where the challenge is to generate nice-looking AI art, but WITHOUT using Greg Rutkowski's name 😁. Poor Greg. In any case, fantastic video!
Very cool, I can definitely use this to improve my workflow. A good AI plug-in for Photoshop/GIMP should do the trick as well. I think Nvidia is actually close. Thanks for sharing!
Think of the prompt as flicking through a filing cabinet of image descriptions. Find the things in common that you want applied to your image. I call this stuff "word vomit", and I have it saved out in various forms to copy-paste in for differing styles, but it is very important to the outcome of the image. For example, if I want an animated, clean, rendered style, I'll add tags like 'Official art by Disney' or 'Official art by Pixar'. Now this won't work for everything; this style of tagging works here because it is how those studios tag their own official releases. And that's the key: wording things the way the specific image owners/critics describe their images.

The best tip I can give fellow AI-tists is this: learn jargon for everything as best you can, and get/use a thesaurus.

First, jargon. Remember that every image is generally categorized/described by the wording of wherever it was found/scraped. You want high art? Use high-art jargon. You want photography? Use photography, and perhaps cinema and camera-operator jargon as well. Learning the terminology for different types of camera shot, for example, is fundamentally helpful - e.g. 'a pulled-back shot of a', 'a distance shot of a', 'panoramic', 'fish-eye lens'. These will help you play with the scale and style of your image. Art terminology helps greatly as well - it's good to have your own Discord channel to save all this stuff to if you're a Midjourney peep. I've listed some terms below for copy-paste (sorry, there were descriptions for each, but YouTube wouldn't let me dump that much text... soz). Trompe l'oeil is one of my personal favorites - look it up if you haven't heard it before :)

Next, the thesaurus is crucial. Say you want 'Prompt: a london street with flying cars floating down the street'. That is most likely going to give you flooded London streets; find other words for 'floating' to eliminate the alternate meaning. Read your prompts carefully and make sure you are not accidentally getting the wrong meaning out of your words. On top of that, using terminology like 8K is great, but think about which images actually have that in their description: 4K is probably more helpful, as there is a much larger pool of art uploaded at 4K with that as part of the description. Add both :)

People tend to think of these prompts as a mystical language, and yeah, kinda - but think of it more like a rolodex of image descriptions. You're not cobbling together pictures so much as cobbling together people's descriptions of them to make a new image.

Example - Prompt: A deep pulled-back shot of a vast tundra with a futuristic city encased in a vast thick glass dome with yellow mist. Ships flying around on dedicated flying lanes. Glow on the outside, Inception-style shot, hyper realism, crisp details, landscape shot, vast 50 square kilometers. HD. OLED. Leading lines. Golden ratio. Rule of thirds.

This prompt made these images (sorry for the plug, but it's kinda hard to show an example without it :P... please like and subscribe hehe): instagram.com/p/Chx6eqlofAv/

Some extra things that are nice to add:
- Color theme: teal, green, lime and yellow (add your own colors and research color theory just a little bit)
- High contrast in hue
- More value than hue (these two can really make the colors pop, sometimes)
- OLED: this should pull from TV and monitor advert images, which are some of the most expensive, high-quality, HDR, contrasty images you can find - they gotta sell TVs after all :P
- HDR: similar to the above
- Unreal Engine
- Octane Render | RenderMan | V-Ray | Redshift | Mental Ray: these are all rendering engines, and images with these in their descriptions should be mostly clean AF, which will help bring some of that clean to your piece
- Symmetry: obvious but handy

I'm still learning, but this should help. Also, CapCut is fantastic for making animated reels from a mobile device :D

Terms for copy-paste: Abstraction, Alla prima, Allegory, Appropriation, Avant-garde, Brushwork, Chiaroscuro, Color theory, Composition, Contrapposto, Distortion, Figurative art, Genre, Glazing, Impressionism, Mixed media, Motif, Narrative, Perspective, Photorealism, Plein air, Proportion, Realism, Scale, Sfumato, Symbolism, Texture, Theme, Trompe l'oeil, Underpainting, Value, Foreshortening, Foreground, Aerial perspective, Assemblage, Biomorphic, Conceptual, Contour, Iconography, Impasto, Medium, Modern, Pentimento
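If you keep "word vomit" snippets around anyway, one way to manage them is to store them as reusable tag groups and assemble prompts from them programmatically. A small sketch of that idea in Python - all the tag lists and names here are just examples, not magic words:

```python
# A sketch of the "rolodex of image descriptions" idea: compose prompts
# from saved tag groups instead of retyping them. Everything is an example.

CAMERA = ["a deep pulled back shot", "landscape shot", "panoramic"]
RENDER = ["Octane render", "HDR", "OLED", "4k", "crisp details"]
COMPOSITION = ["rule of thirds", "leading lines", "golden ratio"]


def build_prompt(subject, *tag_groups):
    """Join a subject with comma-separated style tags from each group."""
    tags = [tag for group in tag_groups for tag in group]
    return ", ".join([subject] + tags)


print(build_prompt(
    "a futuristic city encased in a thick glass dome on a vast tundra",
    CAMERA, RENDER, COMPOSITION,
))
```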
If you can draw concept art, this might help you finish things faster. Concept artists can draw a rough but good-enough painting, send it to the AI, and edit the image it spits out. It would be faster for concept artists, and they should definitely use it to their advantage over people who can't draw, who will take more time to get good results.
Yeah, there’s no way I’d get better results than experienced painters if they learned to use this as a tool. I’ve already seen some people accept this innovation rather than reject it - they will have superpowers!
Great stuff! Isn't it faster to speed-composite roughly in Photoshop with images/stock first, then do the same process in SD? And please do a series of that, it's so good! Also, final question: I'm struggling to make great faces in full-body portraits in SD. Is it possible with this technique to change that, or should I photobash a face onto the image I make, then process it again in SD, I guess?
I've done this as well, check out my tweet: twitter.com/AlbertBozesan/status/1563605096407019520?s=20&t=K8ndZLmFcMto6WQ8MXRrxA The disadvantage is less creativity on the side of the AI. You give it a lot more hard details to work with from the beginning. Definitely worth a shot, of course :) experimentation is key! To your 2nd question: I would suggest getting a good general full body shot first, then cropping and editing the face. The third tab with "GFPGAN" can then be used to perfect the face in a final step! Thank you for your support, I'm glad you liked the video! A new one is coming later today.
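For anyone who prefers scripting the crop-and-refine step instead of doing it in a web UI, here is a rough sketch of the same idea using diffusers' img2img pipeline rather than GFPGAN; the file names, prompt, and strength value are assumptions to adapt:

```python
# A hedged sketch of the crop-and-refine face workflow described above,
# using diffusers' img2img pipeline. All names and values are examples.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Crop the face region out of the full-body render first (in any editor),
# then refine it at low strength so the likeness is preserved.
face = Image.open("face_crop.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="portrait of a woman, detailed face, sharp focus",
    image=face,
    strength=0.35,          # low denoising keeps the original structure
    guidance_scale=7.5,
).images[0]
result.save("face_refined.png")
```

The refined crop can then be pasted back over the full-body shot and blended in Photoshop.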
Full-body portraits can be hard for the AI because of the millions of full-body portraits on the internet; the AI tries to follow the prompt, so it ends up bashing all these people together, sort of creating a blob human.
Very good one. This process is more or less explained in Reddit posts, but it's always good to see it live. Now I see why artists are so angry XD. Now anyone with a little knowledge of Photoshop and an understanding of AI prompts can make very cool images that were impossible before. But evolve or die... you can hate it, or, if you're an artist, you can learn to use it to improve your process. Same thing as when Photoshop and Illustrator appeared and people went from pencils to digital painting.
Thanks for the video. The tutorial for installation asks for Python v3.10.6 but 3.10.7 is out. Must I use .6 or is it ok to get the most recent version?
By the way, Isildur takes up the sword of his father, Elendil, to chop off Sauron's finger - which might be the sword the Queen Regent gave Elendil in the previous episode.
I have a hard question: can this AI fix a low-quality image from another AI generator? I have some very good anime images, but the quality is bad, so I'm looking for an AI to redraw or fix them somehow. I even tried to upscale them, but no use.
It’s probably going to look a little different, but if you don’t mind that then sure! But check out “waifu diffusion”, I hear it’s a version of Stable Diffusion trained more on anime. That should get you better results.
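In script form, that suggestion might look roughly like the sketch below - "hakurei/waifu-diffusion" is the public Hugging Face id for Waifu Diffusion, but the file names, prompt, and strength value are illustrative guesses to tune:

```python
# A sketch of img2img cleanup with an anime-tuned checkpoint.
# Assumes the "diffusers" library and a CUDA GPU; values are examples.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "hakurei/waifu-diffusion", torch_dtype=torch.float16
).to("cuda")

low_quality = Image.open("anime_input.png").convert("RGB").resize((512, 512))

# Lower strength stays closer to the input; raise it if artifacts remain.
fixed = pipe(
    prompt="anime girl, clean lineart, high quality, detailed",
    image=low_quality,
    strength=0.4,
).images[0]
fixed.save("anime_fixed.png")
```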
I’ve been watching all these AI images with interest… had a go at Midjourney… I'm an illustrator & concept artist, so there is that slight worry that AI is going to take over my job… but watching this video has reassured me it's definitely not there yet. There's some great stuff being created, such as what you're doing, but it takes some effort, time & skill to get anything usable out of it. In the time you've spent on this, I would have drawn it… so I don't see the point of using it in my workflow to save time. I'll definitely sub to your channel & watch on to see how things develop, though. Does Stable Diffusion only run on computers? I like to use my iPad when sketching & roughing out stuff; I might try it if it runs on iPad. I wouldn't put it on my computer, as all my work & art programs are there & I can't risk software conflicts etc.
There’s no question that a skilled artist like you + these tools will “win” over someone like me just using AI. I also don’t believe your job is in danger - it will just change like when Photoshop was invented. Perhaps in the future you would use this to get a rough layout in a shorter time, then paint over it. Stable Diffusion requires a powerful GPU, so it either has to run locally on a PC or on a paid online service like Dreamstudio. Thanks for commenting with an open mind!
@@albertbozesan It will be interesting. As the options for quality images quickly become more available, I think AI will definitely find its biggest strength in concept art… for knocking out quick ideas that don't need a lot of polish in early design stages, it will be very useful. It's not there yet for illustrative pieces, which generally take a lot more finesse… but as a base to paint over, I can see it has promise. Lol, there's a skill to writing descriptions, that's for sure. I was trying some portrait-type pieces in Midjourney… the results were horrendous. I have much to learn. 😂 I have a really great PC, but I'm just dubious about what I put on it… I'll probably give it a try at some point, though… I think the Stable Diffusion UI is more user-friendly than Midjourney… but then I'm not a big fan of Discord for anything.
@@LionArtStudio That UI isn't the only one for Stable Diffusion. Stable Diffusion is open source, so people have made different GUI interfaces for it. There's one called NMKD, too.
Fear not, any serious employer can see through these tools pretty easily, especially in concept art, since the people reviewing your portfolio are going to be the skilled artists already in the company, not some HR grunt. This kind of faking is not new and existed way before AI; it didn't fool anyone then and it will not fool anyone now. All one can get from presenting a portfolio with AI-generated images is getting blacklisted. As you say, these tools are not useful for real creative people, because we can come up with more accurate visual solutions without resorting to blending already-existing images. It's hard for people to understand how these tools are actually a hindrance for us.
I saw your newest video, where you recommended the viewer watch your Automatic1111 installation guides, but I haven't figured out which video you were referring to?
I suppose my couple of tips in here are super old… in that case check this out, it looks easy and super helpful: www.reddit.com/r/StableDiffusion/comments/zpansd/automatic1111s_stable_diffusion_webui_easy/?
Are you asking if you’re allowed to upload AI art to stock photo websites? That depends on their Terms and Conditions, different for every site. I don’t know about Microstock.
This is so cool and exactly what I needed! I've been trying to find a good Stable Diffusion UI for days, and it's a plus that you just showed me the steps it takes to really utilize this tool to the max and make my ideas come true. Thank you!
Thank you for pointing out the insult in the setup tutorial title! It was probably not meant to be hurtful to anyone, but it's just really unnecessary. And I believe not simply ignoring stuff like that is part of making the web a friendlier place for everyone :)
Yeah, it's important to me to keep this friendly for everyone. The first time the name could've been an accident. Now enough people have pointed it out, and several versions later it's still up. So I make sure to call a problem out when I see it :/
@@albertbozesan From what I understand, it comes from 4chan - more specifically, the subculture that considers general society a bit too uptight and rebels against it by deliberately breaking etiquette and trying to desensitize themselves and each other to communication "violence" (harsh words, shocking images, dark humor, etc.). Some of that intent has been diluted over time by people who don't understand it and think the goal is to hurt people; but it seems there are still some people more aligned with the earlier mentality.
I think it is a bit of a letdown that we have to resort to using styles of existing artists, instead of being able to write thousands of words describing painting style, colors, shapes, patterns, style of details, perspective used, composition, layout and all that good stuff. When we get to that point, it may pretty much be more like doing a traditional illustration I guess. But being able to use sketches is some sort of solution for this, at least for now.
You're right! I've changed my process and tried to be more descriptive in my prompts without using artist names. It's much more satisfying to get good results that way.
It's super great, thanks for sharing your knowledge. Do you have any ideas how we can do this for non-square images? Resizing in Photoshop, or creating an environment that isn't really connected to the image, doesn't give the best results.
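One option, for what it's worth, is to generate at a non-square resolution directly: Stable Diffusion 1.x accepts custom dimensions (multiples of 64 are the safe choice), though it was trained at 512x512, so very wide frames can start duplicating subjects. A rough sketch with example values:

```python
# A sketch of generating a non-square image directly. The model was
# trained at 512x512, so extreme aspect ratios may repeat subjects.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

wide = pipe(
    "matte painting of a mountain valley, concept art",
    width=768,   # multiples of 64 are the safe choice
    height=448,
).images[0]
wide.save("wide.png")
```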
Hello, I have a question. It pulls image information from the internet (that's what it was trained on, for instance), but I was wondering if it is possible to tell it to pull inspiration from a folder on my computer?
There is a new feature which lets you feed it images so it learns a style! Check out this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-7OnZ_I5dYgw.html
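For the scripting-inclined, that kind of feature (textual inversion) can also be used from code: you train a small embedding on a folder of your images, then load it and reference its token in prompts. A hedged sketch using diffusers' built-in loader with a real public example concept; your own embedding and token name would differ:

```python
# A sketch of using a learned "textual inversion" concept in prompts.
# "sd-concepts-library/cat-toy" is a public example embedding; training
# one on your own image folder is a separate (documented) step.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Registers a new token, here "<cat-toy>", that encodes the learned concept.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

image = pipe("a landscape painting in the style of <cat-toy>").images[0]
image.save("concept_style.png")
```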
I installed following the guide in the description, but I am missing a lot of the options - in particular the denoising strength and upscaler. Any ideas?
@@albertbozesan Yeah, the Voldy guide. There were a few steps I was a little unsure about, but I must have figured it out, because it all works. I do seem to have different sampling methods, too.
I had to uninstall Git because I did not understand your information about how to install all the hundreds of different things - copying and moving and downloading etc., etc., everything here and there to make it work. Too bad, because I would really like this. It would be so much easier if you just made a noob beginner's video on EXACTLY, step by step, how to install it.
The process to install it changes quite often and is different for different specs, so if I made a video it would be outdated (and therefore even more frustrating!) too quickly.