-
Hi, Thanks for your work on CatVTON.
Is it possible to train the CatVTON architecture with the Flux model as the base? I think Flux could improve the quality of try-on results.
-
### Checklist
- [ ] The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)
- [X] The issue exists on a clean install…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do?
A simple tool to erase selected areas when in…
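At its core, such an erase tool just builds a binary mask over the user's selection and hands it to the inpainting pass, which regenerates only the masked pixels. A minimal sketch for a rectangular selection (NumPy; the function name is illustrative, not part of Fooocus):

```python
import numpy as np

def make_erase_mask(height, width, x0, y0, x1, y1):
    """Build a binary mask marking a rectangular selection for inpainting.

    1 = regenerate this pixel, 0 = keep the original pixel.
    """
    mask = np.zeros((height, width), dtype=np.uint8)
    mask[y0:y1, x0:x1] = 1
    return mask

# Mark a 20x20 region of a 64x64 canvas for erasure.
mask = make_erase_mask(64, 64, 10, 20, 30, 40)
```

A brush-based tool would accumulate strokes into the same kind of array instead of a single rectangle.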
-
```
G'MIC is an image manipulation toolkit. Among its various tricks, it can do
image inpainting, which would give better-looking results than the current
black-box system.
Example:
http://cimg.sourc…
```
-
Hi!
I recently published these nodes: https://github.com/lquesada/ComfyUI-Inpaint-CropAndStitch/
They enable inpainting only in the masked area, thus speeding up inpainting and mapping the promp…
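For context, the crop-and-stitch idea these nodes implement can be sketched as: crop the image to the mask's bounding box (plus padding), inpaint only that crop, then paste the result back. A rough NumPy illustration, with `inpaint_fn` standing in for whatever inpainting call you actually use:

```python
import numpy as np

def crop_inpaint_stitch(image, mask, inpaint_fn, pad=8):
    """Inpaint only the masked region: crop -> inpaint -> paste back.

    image: HxWx3 array, mask: HxW array (nonzero = inpaint here).
    """
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return image  # nothing to inpaint
    h, w = mask.shape
    # Bounding box of the mask, padded and clamped to the image.
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, h)
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, w)
    crop, crop_mask = image[y0:y1, x0:x1], mask[y0:y1, x0:x1]
    inpainted = inpaint_fn(crop, crop_mask)
    result = image.copy()
    # Composite: keep original pixels outside the mask, even inside the crop.
    result[y0:y1, x0:x1] = np.where(crop_mask[..., None] > 0, inpainted, crop)
    return result
```

Running the model on the small crop instead of the full frame is where the speedup comes from; the final composite guarantees unmasked pixels are untouched.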
-
Thank you for the excellent work.
I am trying to finetune the inpainting model. I have a dataset with this structure:
```
.../inpaint_ds/
├── train
│ ├── 0.png
│ ├── 1.png
│ ├── ...
├─…
```
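Since the directory listing above is truncated, here is only a generic sketch of how image/mask pairs are often enumerated for inpainting fine-tuning. The `train_masks` directory name and the same-filename convention are pure assumptions for illustration; adjust to whatever your dataset actually uses:

```python
from pathlib import Path

def list_image_mask_pairs(root):
    """Pair each training image with a same-named mask file.

    Assumed (hypothetical) layout:
        root/train/0.png        <- images
        root/train_masks/0.png  <- corresponding binary masks
    """
    root = Path(root)
    pairs = []
    for img in sorted((root / "train").glob("*.png")):
        mask = root / "train_masks" / img.name
        pairs.append((img, mask))
    return pairs
```

A `Dataset` class for training would then load and transform each pair returned by this helper.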
-
Thanks!! But the dictionary was not provided for the inpainting stage, so the program could not run. Could you share the training dictionary or describe the training process?
`train_params.iter=150;
dict…
-
Hey Konie, thank you so much for your great work.
I don't know why the eyes look so bad on the Replicate Fooocus API (the normal one). Have you ever noticed this problem before?
-
I have tested this in many different ways. The exact same code and parameters work with AutoPipelineForInpainting (I also tried StableDiffusionInpaintPipeline) using EulerAncestralDiscreteScheduler, but it …
-
It seems pretty important to be able to create a dynamic mask indicating where you want the AI to generate, then provide the whole image and that area as context.
Perhaps it could be accomplished w…
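One common way to get "generate only here, but condition on the whole image" is to run generation on the full frame (so the model sees all the context) and then composite, keeping original pixels wherever the mask is zero. A minimal sketch, with `generated` standing in for the model's full-frame output:

```python
import numpy as np

def composite_with_mask(original, generated, mask):
    """Blend two HxWx3 images: masked pixels come from `generated`,
    everything else stays `original`."""
    keep_generated = mask[..., None] > 0  # broadcast HxW mask over channels
    return np.where(keep_generated, generated, original)
```

This is essentially what latent/pixel-space mask compositing does in most inpainting pipelines; the dynamic part is just constructing `mask` at request time.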