In this video, I take another look at Generative Fill, a new feature found in the latest version of Photoshop Beta.
Please subscribe to my newsletter! anthonymorganti.substack.com/subscribe
Check out one of my newer websites - The Best in Photography: bestinphotography.com/
Please help support my YouTube channel - consider purchasing my Lightroom Presets: www.anthonymorganti.com/
To get more info about Photoshop, go here: prf.hn/l/lGnjDBl
Here is the list of my recommended software, along with any discount codes I might have: wp.me/P9QUvD-ozx
Here is a list of my current cameras, lenses, etc.: wp.me/P9QUvD-ozG
Help me help others learn photography. You can quickly offer your support here, where I receive 100% of your kind gift: ko-fi.com/anthonymorganti
You can change the default amount to the amount you want to donate.
After using Free Transform to change the size of a generated object to a more appropriate size, just click the Generate button again. The descriptor you typed is still there, and it will regenerate a new set of three images, this time at the new size (or location, if you also moved the object to another place).
The biggest use for me is to extend my images. Sometimes I crop too close, don't compose properly, or take the image vertically when I need a different aspect ratio. It's a huge time saver for sure.
Anthony, I give you kudos for revisiting this issue. Everyone who watches your videos will learn even more because of your willingness to reflect and revise. The fact that your viewing community provided hints shows that the "WE" is smarter than the "ME!" Thanks again!!
One thing you'll find is that the more descriptive you are about an animal or object, the better the results when generating. For your dog example on the sidewalk, you could use "A small dog sitting on the sidewalk." Also, it has trouble with human limbs, like hands: sometimes it will generate 6 fingers, or 4 fingers, or even 2 thumbs on one hand.
With the dogs, you need to merge the previous layers and then ask to generate whatever you want. It doesn't work well when it has to generate on top of another generative layer. That has been my experience.
I haven't tried it yet, but after watching this I wonder whether giving more information in the generative fill gives better results. So maybe in the case of the dog, if you put, say, "dog standing still" or something like that, you would get a better result. This new function is good already; imagine what it will be like a few versions down the track. Great work, Anthony, on your willingness to take suggestions from "us" to help get better results. We all learn as we go.
The image you are working on is 6137x3632. The generative fill has a max resolution of 1024×1024 when generating content. When you work on a high resolution file and view it at 100%, you'll see how blocky the generative fill is.
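To illustrate the blockiness the commenter describes, here is a rough back-of-the-envelope sketch. The 1024px cap is the commenter's observation, not an official Adobe spec, and the arithmetic below is only an approximation of how much a generated patch would have to be stretched to cover a large selection:

```python
# Rough sketch: estimate how much a 1024x1024 generated patch must be
# upscaled to fill a selection in a high-resolution file. The 1024px
# limit is taken from the comment above, not from Adobe documentation.

def upscale_factor(sel_w: int, sel_h: int, gen_max: int = 1024) -> float:
    """Factor by which the generated patch is stretched along its
    longest side to fit the selection."""
    return max(sel_w, sel_h) / gen_max

# A full-width selection on the 6137x3632 image mentioned above:
factor = upscale_factor(6137, 3632)
print(f"{factor:.1f}x upscale")  # roughly 6x, hence the blockiness at 100%
```

A ~6x stretch explains why the generated region looks soft and blocky next to native-resolution pixels when viewed at 100%.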
I think the fact that Generative Fill can figure out the amount of bokeh to use is flat-out amazing. Also, the use case in the first photo, erasing items to get a bokeh background, is rather uncommon, so it won't be perfect. Personally, after playing with it, its strength is its ability to tweak parts of images bit by bit, with amazing results, without needing mad masking and blending skills. It makes image compositing way too easy.
Hello Anthony, in regards to generative fill and attempting to resize the generated object: a solution is to mask the generated object, which allows Free Transform to change the size cleanly. I have been using Lumenzia from Greg Benz for years now for its luminosity masking. He has added a feature in the Basics panel that meshes with Generative Fill and will auto-mask the generated object. Free Transform then lets you resize the object cleanly.
The funny thing is, as Photoshop users, we've all encountered that frustrating situation of trying to extend an image in Photoshop, and the hours we've spent painstakingly manipulating the background, adding extra elements, or pretty much anything else. We probably yelled, "Where have you been all my life?!!" (I know I did..LOL🤣) and wondered why it took so long for this game-changing feature to get here. It's amazing, scary and crazy all at once.😄
I've been using it like mad and am thrilled with the results. Some of them seem supernatural, especially how it judges the perspective of the scene and puts it into the regen.
Good observations Anthony. I've been playing with this recently as well. In fact it came to me just in time for some commercial product images I was creating for a couple of small breweries. What I have found is that it has saved me a ton of time and effort in my setups. For instance I had one shot of a can of beer where I simply photographed the product with good lighting, a plain dull grey background and set on a simple wood surface. Using gen-fill I created a background, an interesting surface, a glass of beer and another prop item in the image. It definitely took some time and some experimenting to get it right, but once done it is an amazing result, and I would say completely realistic. I don't think I could have done better with a detailed studio setup and having gone and sourced the props and setting items. This is truly a game changer for this kind of work and is frankly going to save me from having to acquire a bunch of prop items for shoots. This, in turn, should allow me to offer even better value to clients and simplify my workflow. Way less compositing required now. I'm pretty happy so far!
Having watched this video, I downloaded the PS Beta to give it a try. Using the Lasso tool as you did, the results have been amazing. I was able to remove people and objects which spoilt an image that I took of the Flying Scotsman at a station. This is a real game changer.
You can use all the reshaping tools on the generated "dog" if you are not pleased with the size. The shadows etc. will still work. It may be that you generated the dog into a blurry part of the background, and that's what's causing your problem.
I think another reason for your disappointment is you are expecting too much from a one-word query that is very general. I had my best results using it like a search engine, for example "Small collie sitting". And you can use Free Transform if you make a tighter selection.
When adding something, it will generate based on the shape you draw: if you draw a horizontal rectangle with the term "dog," it will usually be lying down, and if vertical, it will be a sitting or standing dog facing the camera. It will also try to fill as much of the shape as it can, so if you draw a big shape you'll get a big dog. For your example, if you made the same shape but small relative to the perspective, your result would be closer to accurate.
Using this for me falls firmly into one of two categories - 1) A fun thing to play around with for a laugh and 2) a useful tool for small changes to subtly enhance the original photo without trying to create a new image. Best use I have found is for extending the image out which works phenomenally well and can be useful for adding some background details. I think it is going to struggle (or be more noticeable) when you are editing foreground objects with it.
This was a very helpful tip. I too was not impressed with the ability to remove objects, but your suggestion to loosely select the area improved the results significantly. THANKS.
When you generate something like a red car and then scale it up (or place it in another part of the scene) you can then re-select the now 'mismatched' background and click generate and the background fits perfectly again.
The little bit I've played with this, I like it for image clean-up, but it's not necessarily great (yet) for major changes. But then, I'm not interested in making major changes to my photos, so it looks like it could be useful for clean-up tasks that the clone/heal tools might have trouble with.
Thanks. I think it's about the same as using a pre-painted backdrop that mimics a room background. Definitely agree about documentary and journalism work.
I think Adobe's Compositeshop has really taken photographic illustration to a whole new level. I'm excited to see what v2 looks like of the generative fill, it's amazing what we can do now as photorealistic illustrators.
Many many many years ago, when cable hi-speed was coming into this area, I asked why the upstream was much slower than the downstream. I was told by the cable company that they throttled it back intentionally because people were setting up their computers like servers. Whether that's still the reason they do this, I cannot say.
I see generative fill as a time saver. I’ve used it on images with complex cleanups where it quickly did seventy five percent of the job. Then it’s back to the clone stamp, patch tool and all the tools we’ve used a thousand times before. I don’t see that as a failure of generative fill because it saved me a great deal of time and laborious work.
Greg Benz has a great video on gen fill. He uses a fox lying down in a desert and the back lighting and shadows are amazing. You really should look at it
You can also open a new custom blank document, select it with the Marquee tool, and type in a scene you'd want, like "stormy night with lightning." It does a great job with that, but what I can't seem to do is change the colour of an article of clothing. Try as I might with different prompts, it won't change the colour. It'll give me different styles but with the same colour. I know that you shouldn't give it instructional prompts, just nouns. I try to change the colour of hair by using the Lasso tool around the hair area and just typing "blond hair" or "red hair," but it just doesn't do it. Sometimes it'll change just a portion of the hair but not the entire head of hair, even though it's all selected. I've watched tutorials by Adobe instructors and all they do is type in stuff like "green sweater" or "blue sweater" after selecting the sweater, and it works. Same thing with hair. I have the latest beta version and it won't work for me. Anyone have any ideas?
You almost sound sad that it works so well. This beta version has only been out 5 days at this point... this is the worst it will ever be. It's not going to give perfect results every time. FYI, in the beta version, the resolution is limited to 1024px on the long side. Even if it were never to improve, what it does now is impressive. Fixing its mistakes is much easier than building what it creates from scratch.
How about allowing your audience access to the photos you are working on and look for feedback from them on what they can achieve with different techniques?
Every medium has a maximum bandwidth at a given level of technology. Back when cable and DSL internet were developed, internet providers noticed most home users had a lot more downstream traffic than upstream traffic. So they said, OK, we'll sacrifice upstream bandwidth for more downstream, and that's how we ended up with all these asymmetric internet connections. For fiber internet there is no technical reason to do this anymore, as fiber works over a pair of strands, one for sending and one for receiving. Yes, technically DSL also works over a pair of wires, but that's to close the electrical circuit.
It's awesome for aspect-ratio changes. Just change to the new ratio, which will leave transparency on the edges. Then make a regular rectangular selection covering the transparent area plus a small part of the image material on every side that needs extension. Voila!
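The geometry of that trick can be sketched in a few lines. This is only an illustration of the selection math (the actual selecting happens in Photoshop); the overlap amount and image sizes below are hypothetical:

```python
# Sketch of the aspect-ratio extension trick: grow the canvas to the
# target ratio, then select each new transparent strip plus a small
# overlap of real pixels so generative fill has context to blend from.
# The 50px overlap is an arbitrary example value, not an Adobe default.

def extension_selection(img_w: int, img_h: int, target_ratio: float,
                        overlap: int = 50) -> tuple[int, int, int]:
    """Widen an image to target_ratio, extending equally on both sides.

    Returns (canvas_w, canvas_h, sel_w), where sel_w is the width of
    each side's selection rectangle: the transparent strip plus the
    overlap of existing image material.
    """
    canvas_h = img_h
    canvas_w = round(img_h * target_ratio)
    strip_w = (canvas_w - img_w) // 2   # transparent area per side
    sel_w = strip_w + overlap           # strip plus blending context
    return canvas_w, canvas_h, sel_w

# Turning a square 3632x3632 crop into 16:9:
canvas_w, canvas_h, sel_w = extension_selection(3632, 3632, 16 / 9)
```

For the square example, the 16:9 canvas comes out around 6457px wide, so each side's selection covers a ~1412px transparent strip plus 50px of the original photo.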
I see in the video, when you were drawing the circle for the dog, you just randomly drew a circle. The size of the object generated will be relative to the size of the shape drawn. Most people get better results when they actually draw the shape of a dog. If you do that, you can control the direction the dog is facing and its size.
Haha... It's already one of the top in-painting and out-painting features. It outperforms Stable Diffusion, Automatic1111, DALL-E, Leonardo.Ai, and many others. Plus, you can generate content directly within the Photoshop editing app. What more could you ask for? This is only the beta version and the worst it will ever be. It is going to get better and better. I think you should be thankful.
I think it's doing better on sharp objects. It would be interesting to get a photo of three North American cars parked side by side and ask it to replace the middle one with a "Ford Territory from Mandurah." It would have to figure out that it's RH drive and that it has the proper number plates. You could also try "1948 Holden from Canberra," "1970 Ford Falcon from Bathurst," or "Peter Brock Holden Torana." The last would be in racing trim with the number 05 emblazoned on the side. I saw a video, I think on "Imitative Photography," about a very expensive photograph of a German river. The photographer had put serious work into removing the industrial buildings along the river. This should be good for that kind of work (but it will probably seriously damage its value, as others can more easily do the same thing).
The laziness and dependence of substandard photographers will increase. This type of thing is cool and all, but it also begs the question: why even go out and take a picture of things if you're going to replace them and have a computer generate them anyway? Framing, composition, subject and location selection all go out the window.
At first I thought this was a weird use case, since she’s in focus and the background isn’t, but there’s lots of applications for parallax animation of the image that would normally need a clean plate and used to be a PITA
In the children photos and others, it appears that Photoshop is using the correct blur for wherever you are putting the object. Would the red car appear sharp where you are putting it? If it were generated toward the front of the image, it would probably look a lot better.
Another issue is the generated imagery isn't the same as the original image. It lacks noise, and if you look closely, you can see where the original image and the generated portions diverge.
And not use a photo where there's not one point in focus. In the original the woman was the only thing in focus, but after he edited, it's one big blur. Was he expecting Bokeh Dog to be put into the photo? It's a weird test case to use.
I work at a commercial print shop. I have made 50k+ Midjourney images in my spare time and nobody I work with gives a shit. THIS tool, however, was the hammer strike that cracked their nut. Tools like Automatic1111 and ControlNet are applying CONSTANT pressure on Adobe to get something elegant released quickly for modern workflows. Exciting times.
I haven't tried this beta (probably won't), but it looks like it works best when it has enough material left around the selection to work with. Had the subject in your first example not taken up so much of the image, it might have worked better. I would like to have it look at my own images of the same location and use those as source material.
I am getting mixed results, sometimes very good, other times not so much. I asked for different outdoor backgrounds (from a plain blank original shot in studio), but the results were simplistic; perhaps I need a more specific prompt for the fill. You are right, though, that it does a good job of removing things and replacing them with similar content from the image. And when generative fill is asked to extend a detailed landscape scene onto more canvas area on the left or right, it does a pretty good job of that too.
Generative Fill / Firefly does a tremendous job at removing stuff or expanding images, without any prompts. But it's really bad at rendering things. At best, it's on par with base Stable Diffusion 1.5, miles behind Midjourney. Still, the expand/remove functionality is mindblowingly good, so I'm finding it hard to complain. 🙂
I tried it on four different images and it is truly magical. However, I think photography will suffer from it. How can you believe any photo has not been enhanced/altered? It might be great for commercial artists, but for real photographers? No way.
I can't quite figure out how removing an object (like a light fixture) brings up the violation warning. I seem to get it quite often when I'm trying to remove something very simple. I hope Adobe gets better with this, as it's about the only drawback I see with this tool.
@11:08 "we want to get rid of all the children" 🤣 That sounded very wrong 😂 But seriously... I think the new feature is rather useful and decent for practical purposes, like removing small annoyances in photos, not 63% of the frame. If one needs to remove 2/3 of a photo they took, then there's probably something wrong with their photography skills. I've spent hours in the past cloning things out while trying to maintain some bokeh and lighting cohesion and avoid repetitive patterns.
I was just working on a pic where I wished to place a dog. Used the phrase "dog playing, super photo realistic, 4k." Makes me wonder which prompts I've used in Midjourney will work with this beta. Still exploring it.
Can't you transform (to scale down) the objects (like the dog) that generative fill plunks into the space you created a selection around, and that may be too big for the image? Totally cracked up over your comment about "being arrested" re: the warning box. I've gotten things like that for no logical reason in plain old Firefly... If there is some smallish leftover content from generative fill, it seems like it would be faster to do minor cleanups with the Remove Tool.
This looks like a great feature. Unfortunately, I cannot use it, as I get a message back saying that I am not 18. I most certainly am way over 18. I cannot find where I can change my age. Can you help me please?
You can resize and regenerate, but it won't be the exact same thing, it'll be something else created from scratch. I wish we could get roughly the same thing just bigger.
You probably already figured this out, but my guess is that the reason your previous tests weren't as successful as your latest is that by cropping in too tightly you weren't giving the program enough "hints" about the surrounding areas it needs to match in order to make a more "realistic" (lol) image.
Seems odd that the Adobe AI servers would generate something “illegal” and then police itself leaving you with less results. Seems an unintuitive way for it to operate. It should just keep generating until it creates three “legal” options and never bother with presenting you the dialogue.
PS Beta is so full of bugs I have not been able to get it to work since I first downloaded it. It is now telling me I am not old enough to use it. I am older than 18.
I wanted to test this feature. But when I click on generative fill, it tells me I can't use it, because the data in my adobe account doesn't verify that I am older than 18 years. I think my account itself is older than 18 years... Unfortunately I can't find an option to fill in my birth date. Does anyone know how to fix this?
I had the same problem; solution is log in to Behance, update your bio with date of birth, then restart PS Beta. It worked fine for me thereafter. Good luck
@allanwright8137 Thank you for that great advice! I don't use Behance, so this would never have occurred to me as an option. Adobe support wasn't helpful at all. But thanks to you I was able to solve my problem.
Whenever I try to use GF I get told I am not yet 18. Goodness knows why Adobe thinks this; they take money from my bank account each month, and there is no place to add a DoB in my account!
Anthony the first dog is blurry because the picture is blurry there, as was the person walking that you removed. If the dog was in focus the rest of the photo would look like you just placed the dog in it. The closer dogs are in focus like the model was. I wish you had put more thought into this presentation. It is the first of yours that I am disappointed in.
I don’t have a problem with AI imaging. What I do have a problem with is Adobe moderating the content. Sign of current society. If it can’t be controlled, it’s illegal. Who determines what is illegal or offensive? Most likely very offensive folks… Why is everything political at this point?
You're wrong again... you can't simply type in "dog". The AI will assume you just wanted a dog in your lasso. You need to be descriptive, like "dog walking on pavement, pond or street"; something like that.