Replies: 3 comments 5 replies
-
I remember that at least img2img had weird behavior: it needed a lot more strength to be comparable to the SDXL one, so don't expect good inpainting without a properly trained model. Also, at least with the SD models, to completely change the background you need a strength of 1.0 or 0.99. Finally, what you're doing is outpainting, not inpainting, which is a lot harder to achieve; you can read about it, but some people still have a really hard time with outpainting even with an "inpainting"-trained model and good controlnets. I haven't had time to play with the Flux inpainting pipeline, and I don't have high hopes for it yet, but at least your images have good quality.
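To make the strength point concrete, here is a minimal sketch (not the poster's actual code) of a `FluxInpaintPipeline` call for a full background swap. The model ID, filenames, prompt, and step count are assumptions; the key idea from the reply is `strength=1.0`.

```python
# Sketch only: replacing a background with diffusers' FluxInpaintPipeline.
# Model ID, filenames, prompt, and step count are assumptions.

def background_swap_kwargs(prompt: str) -> dict:
    # strength=1.0 fully re-noises the masked region; at lower values the
    # denoiser starts from latents that still encode the old background,
    # which is why the original scene tends to come back unchanged.
    return {
        "prompt": prompt,
        "strength": 1.0,
        "num_inference_steps": 30,
    }

def run() -> None:
    # Requires a CUDA GPU and the model weights; not run at import time.
    import torch
    from diffusers import FluxInpaintPipeline
    from diffusers.utils import load_image

    pipe = FluxInpaintPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    image = load_image("car.png")             # original photo
    mask = load_image("background_mask.png")  # white = area to repaint
    out = pipe(
        image=image,
        mask_image=mask,
        **background_swap_kwargs("a car parked on a beach at sunset"),
    ).images[0]
    out.save("car_on_beach.png")
```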
-
Try reducing the guidance scale.
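A hedged sketch of this suggestion: pass a lower `guidance_scale` into the pipeline call. The value 3.5 is an assumption, not a tested setting.

```python
# Sketch only: lowering guidance_scale for a FluxInpaintPipeline call.
# The value 3.5 is an assumption; tune it for your own images.

def call_kwargs(prompt: str, guidance_scale: float = 3.5) -> dict:
    # Lower guidance lets the prompt steer less aggressively, which can
    # help when the output stays locked to the original scene.
    return {"prompt": prompt, "guidance_scale": guidance_scale}

# usage (pipe, image, mask set up as in an inpainting script):
#   result = pipe(image=image, mask_image=mask,
#                 **call_kwargs("a car on a beach")).images[0]
```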
-
Hi, sorry, which version of diffusers are you using? I can't import FluxInpaintPipeline from diffusers; I don't know why, but it must be related to the diffusers version. Thanks!
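For anyone hitting the same ImportError: the Flux pipelines only exist in recent diffusers releases (reportedly around 0.30; check the release notes for the exact version), so a quick version check usually explains it. The helper below is a small sketch, assuming plain `X.Y.Z` version strings.

```python
# Sketch: check whether the installed diffusers is new enough for the
# Flux pipelines. Assumes plain "X.Y.Z" version strings (no dev/rc tags).

def at_least(installed: str, required: str) -> bool:
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(installed) >= to_tuple(required)

# e.g. with diffusers 0.29.2 installed:
print(at_least("0.29.2", "0.30.0"))  # False -> upgrade: pip install -U diffusers
```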
-
Hi.
This is my code for the Flux inpaint pipeline:
The car is originally parked in a parking lot. I am trying to inpaint it with a beach background, but it keeps returning the same car inpainted into scenarios similar to the original one.
This is the original:

These are the results:

