Thanks for sharing instructions in a way that works for someone who has ADHD. Repeating the instructions is super helpful! I tend to forget the names of some of the tools I'm practicing with in Photoshop. You’ve gained a new subscriber!!!
Thank you so, so much for these videos! This year I decided to give myself the "back to school" treatment, since I've been out of college for a few years now, and learn/polish up on some editing skills. I've watched your videos over the years in general, but I've really been enjoying dedicating time to them as a form of class lecture.
Been following you for years and watched many of those long tutorials on detailed selections and manually painting backdrops... this is quite something to watch. At least the job is getting easier as we get older, lol. Cheers!
Hey Aaron, I'm curious if you've tried using something like Evoto. I'm pretty blown away at how it removed wrinkles (and dirt and smudges) from a background with the click of a button. There are sliders afterward that basically act as opacity, but they're better than generative fill because they're much more predictable (and remove the need for any selections). Their button for smoothing out wrinkles in clothing is pretty great as well.
@waynedennyphoto I use Evoto too. I hardly ever use Lightroom anymore. The only reason I still use Photoshop is to expand the background, which I can't do in Evoto. Once they implement it, I won't need Photoshop anymore.
Sometimes AI can be unpredictable. We would suggest adjusting the selection so it gives you different results. You can also upload a reference image! Here is a tutorial on it: phlearn.com/tutorial/testing-out-photoshop-new-reference-image-ai-feature/
The background has no detail in this use case, so I imagine you can completely obscure it by blurring the generated area slightly, then adding some noise.
Exactly - generative expand and fill are pretty much useless on large images. They work fine if you are removing small areas/items. Surprised he doesn't mention this at all.
Another informative video, thanks Aaron. I'm using it with care, as it can introduce artefacts within the unselected areas, but it does the heavy lifting for sure. However, I'm interested to know how to ensure consistency across a few images with the same messy background, so that the AI doesn't change the background too much across the images. Thoughts?
Hi Richard! You should check out the new "Image Reference" tool in Photoshop Beta. It will help you with that! We'll be releasing a tutorial on it soon. Stay tuned!
Love your tutorials, very simple and easy to follow. I do have a question: if AI generates a background that I like, how can I use it on multiple images?
Hey there! Unfortunately, you can't directly apply the same AI-generated background to multiple images. This is because AI background generation tools typically tailor the background to the specific image they're working with.
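That said, if the generated backdrop itself is what you want to reuse, one possible workaround (not a built-in Photoshop feature, just a sketch) is to export the backdrop once and composite each subject cutout over it outside of Generative Fill. A minimal example with Python and Pillow, assuming hypothetical files: backdrop.png is the exported backdrop and cutouts/ holds subject PNGs with transparent backgrounds.

```python
# Sketch: reuse one AI-generated backdrop across a whole set of images.
# File names below are placeholders; the cutouts are assumed to already
# have transparent backgrounds (e.g. exported from Photoshop).
from pathlib import Path
from PIL import Image

backdrop = Image.open("backdrop.png").convert("RGBA")

for cutout_path in sorted(Path("cutouts").glob("*.png")):
    subject = Image.open(cutout_path).convert("RGBA")
    # Resize the backdrop if the canvas sizes differ between frames.
    base = backdrop.resize(subject.size) if backdrop.size != subject.size else backdrop
    composite = Image.alpha_composite(base, subject)
    composite.convert("RGB").save(f"composited_{cutout_path.stem}.jpg", quality=95)
```

The trade-off: the backdrop stays pixel-identical across the whole set, but you have to do the subject cutouts yourself.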
This is really good. Can I batch process or automate this on multiple images? I see that the lasso tool step is custom for each pic, and that may be the hurdle to automating it. Any thoughts?
Unfortunately, batch processing generative fill with selections is tricky, as the content AI creates is random. However, you can definitely try to get it as close as possible by using the new Reference Image feature in Photoshop Beta. Here is a tutorial on how it works: phlearn.com/tutorial/testing-out-photoshop-new-reference-image-ai-feature/
I think that generative fill in Photoshop is amazing, but if I'm not mistaken the output is only 1024 pixels on the long end. Therefore it's usable only for web, unless we're working on small parts of a photo instead of the whole background.
From what I can remember, another YouTuber mentioned that there's a script that lets you expand the canvas to any size; it then makes 1024 x 1024 selections in sequence until the frame is filled. It is really cool!
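For anyone curious what a script like that is doing under the hood, the core of it is just tiling math: walk the expanded canvas in 1024 x 1024 steps and run a fill on each tile in turn. A rough sketch of that geometry (no Photoshop scripting here, just the hypothetical tile coordinates you would feed to each fill pass):

```python
# Sketch of the tiling idea: split an expanded canvas into 1024x1024 tiles,
# each of which would get its own generative fill pass.
TILE = 1024

def tiles(width: int, height: int, tile: int = TILE):
    """Yield (left, top, right, bottom) boxes covering a width x height canvas."""
    for top in range(0, height, tile):
        for left in range(0, width, tile):
            yield (left, top, min(left + tile, width), min(top + tile, height))

# Example: a canvas expanded to 6000 x 4000 px needs 6 x 4 = 24 fill passes.
boxes = list(tiles(6000, 4000))
print(len(boxes), boxes[0], boxes[-1])  # 24 (0, 0, 1024, 1024) (5120, 3072, 6000, 4000)
```

The edge tiles are clamped to the canvas bounds, which is why the last box in the example is smaller than 1024 x 1024.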
Thanks for this - I've been struggling with inaccurate masking, so this was timely. Incidentally, I wanted to ask: you're my go-to for Photoshop, but do you have any recommendations for channels/resources to upskill on video editing? From bare-bones basics.
Thanks for the info. On a shoot I did recently I was struggling: I had kids on a white cloth with tons of dirt and wrinkles. Also, sometimes they were not inside the backdrop, so there were repairs to make. If I lassoed an area and used Gen Fill, I got so-so results. I ended up using the marquee tool and stretching the area (since it was all white). I never got a good match with Generative Expand; the whites never matched, and there was also a hard line where the original edge met the new fill. I saw that with your ballerina too; look at the right side of the pic. Any ideas? Thanks.
Hi Aaron! After you've used AI to clean up the background, is there a way to transfer those AI-generated results to other images in that series in order to maintain consistency? Thanks so much!
Out of curiosity, since I had to drop my PHLEARN subscription for budgetary reasons earlier this year, has Aaron ever addressed the ethics of AI in any video? I'd be curious to hear his take. I know that the Adobe marketing/talking point is that they only train their AI models on Adobe Stock, public domain things, or stuff that they otherwise have the rights to use, but I'd love to hear more about this, particularly from Aaron, who I've come to trust over many, many years of watching his videos.
Let’s say I generate a backdrop that I like, e.g. the first one you liked at 2:10. How do I replicate a similar backdrop for the rest of my image set? I type in the same prompts, but it comes up with something completely different for each photo. Thanks!
@JadePogson AI still produces random results, even if you enter the same prompt. But you could test out the Reference Image feature in the Beta, using the sample image with the background you like. Here is a tutorial on it: phlearn.com/tutorial/testing-out-photoshop-new-reference-image-ai-feature/
Great tutorial; however, generative fill always produces blurry or slightly blurry additions to the image. How do I get the sharpness of the newly generated fill to match the original image?
Adobe introduced a new Enhance feature in Photoshop Beta that increases the quality of the results! Check out this tutorial: phlearn.com/tutorial/new-generative-fill-enhance-feature-explained/
Hey there! Selecting hair can definitely be a challenge. If you have time, we have a few tutorials we think you might find very useful: phlearn.com/tutorial/selections-with-channels-photoshop/ phlearn.com/tutorial/advanced-cutouts-photoshop/ phlearn.com/tutorial/master-retouching-hair/
Be careful about using generative fill... because usually you have many images from a shoot, and your backgrounds won't match. The best way to deal with backgrounds is to learn how to shoot and light properly, reducing your need to do much in post.
These videos are helpful, but what I can’t figure out is how to edit multiple photos of a session. If I use AI in one photo, I need a comparable result in other photos or it’s obviously fake.
Hey there! All new Photoshop versions have a handy Contextual Task Bar. Just make a selection, and it should pop up automatically. If it doesn't, you can find it under Window > Contextual Task Bar.
Hello! Here are some tutorials you will find useful: phlearn.com/tutorial/selections-with-channels-photoshop/ phlearn.com/tutorial/how-to-retouch-hair-photoshop/ Also, if you have time, we would encourage you to take a look at this in-depth tutorial: phlearn.com/tutorial/master-retouching-hair/
Does anyone know how I can make it so that the same AI-generated backdrop is applied to all the images? Since I select the subject, the mask on top is always in the same position. I want to use this for a studio session with multiple photos and make sure they all look the same.
Hey there! Unfortunately, you can't directly apply the same AI-generated background to multiple images. This is because AI background generation tools typically tailor the background to the specific image they're working with.
Hi Meghan! AI tools are still under development, and these situations can sometimes occur. While there's no one-size-fits-all solution, you can try refining the selection area around the affected body parts, or using reference images to guide the fill.
Hi Andres! They actually just released a new Enhance feature in Photoshop Beta. The tutorial below explains how it works: phlearn.com/tutorial/new-generative-fill-enhance-feature-explained/
@phlearn Thank you! I have seen your video, but someone mentioned that sometimes it looked weird. Is the Enhance feature better than the free actions that go square by square at 1024x1024 or 500x500?
It works great with people and so badly with interior design products. Adobe's AI needs to stop adding parts/components to the subject when it's a product; it always distorts/changes/multiplies my product image when I try to generate a background.
Hi Fabio! Just a heads up that sometimes tweaking your selections can lead to even better results. On a different note, don't forget to check out the latest updates for Generative Fill! They've improved quite a bit!
It doesn't work that well when you want to patch just a piece of the background. Photoshop's AI mostly can't generate a perfect background without a visible line where it connects with the main background. The AI has problems with gradients too.
I was trying these tools, but for me they don't work the same as in the video. I don't like the results at all; they look too fake, and when I zoom into the image I can see many imperfections.
Hey there! Thanks for sharing your experience with Generative Expand in Photoshop. It's still under development, and the results can vary depending on the image and settings used. If you don't like the results, you can always opt for more traditional methods. We have an amazing PRO tutorial that covers background cleanup: phlearn.com/tutorial/clean-up-studio-backgrounds-photoshop/
Not worth it. It's limited to 1024x1024, and with the workaround the time-to-output ratio is bad. I'd rather have something with higher quality that takes a bit more time.
Getting serious for a minute: there is a very major drawback to generative fill! Going from a 4K original file to adding a 1K generative fill shows up like a smashed thumb. Nobody can post those at 2048 resolution online, much less publish them. They usually look OK on screen at 1080 resolution, but enlarge them to 2048 on your phone or monitor and you will immediately see the difference between the original part of the photo and the expanded, blurry 1080-resolution parts.
A studio backdrop has no detail; just hit it with a blur, then add back noise to match your original photo, and you won't be able to tell that it was a scaled image. I just tried it with one of mine, and at 100% zoom on a 4K display with a 48MP image I can't tell at all that it was a stretched AI fill.
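If you want to see what that blur-plus-grain pass looks like outside of Photoshop, here is a rough sketch with Python, Pillow and NumPy. The file name, blur radius and noise strength below are made-up placeholders; tune them against the grain of your original capture.

```python
# Sketch: soften a scaled/generated backdrop region and add grain back
# so it blends with the original photo. "backdrop_region.png" and the
# numbers are placeholders, not values from the video.
import numpy as np
from PIL import Image, ImageFilter

region = Image.open("backdrop_region.png").convert("RGB")

# 1. A slight Gaussian blur hides the stretched/upscaled AI pixels.
blurred = region.filter(ImageFilter.GaussianBlur(radius=1.5))

# 2. Add monochromatic Gaussian noise to mimic the original photo's grain.
pixels = np.asarray(blurred).astype(np.float32)
grain = np.random.normal(loc=0.0, scale=4.0, size=pixels.shape[:2])
pixels += grain[..., np.newaxis]  # same noise on R, G and B (monochromatic)
out = np.clip(pixels, 0, 255).astype(np.uint8)

Image.fromarray(out).save("backdrop_region_matched.png")
```

Inside Photoshop itself, the rough equivalent would be a light Gaussian Blur on the generated layer followed by Filter > Noise > Add Noise with the Monochromatic option, adjusting the amount until the grain matches the surrounding original pixels.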