Closed AlanTuring42 closed 9 months ago
Also, everything on the ImagePrompt tab works fine too: ImagePrompt, PyraCanny, and CPDS. The only issue seems to be with the FaceSwap feature. When it downloads and loads the control models, I think that's when the issue happens; it seems like Google Colab deliberately shuts down the process, I dunno.
This problem is only reported in the free version of Colab, maybe because of limited RAM. We will take a look; the Pro version seems to solve it.
Same problem here. I bought 100 GPU units for 13 USD (pay as you go) and FaceSwap works correctly.
Yeah, sadly that appears to be the culprit. I noticed that RAM usage was around 95% when Colab shut down the process.
Actually, I wouldn't use Colab if there were a solid, proven guide for installing it on an Intel-based Mac. I saw that people had lots of issues even on Apple Silicon Macs despite following the official guide, which made me certain I'd fosho face some issues too lol 😂 Maybe I'll use Time Machine, give it my best shot after a fresh install, and see what happens.
IMO Amazon SageMaker Studio Lab could be a better option than Colab then; at least you get 16 GB RAM, and they are more transparent than Google.
@lllyasviel just to let you know, after some digging I tried it with Amazon SageMaker and everything works perfectly there, as it also does with Colab, except again for FaceSwap lol. It actually worked like 2-3 times and then showed the same exact behaviour I get with Colab. Also FYI, I forgot to mention that FaceSwap actually worked perfectly for me even a couple of days ago.
When it worked:
When it didn't work:
The free version of Colab offers limited RAM, which can be managed more efficiently by opting for fp16 or fp8 precision to lower RAM consumption and make better use of VRAM. This approach has proven effective in my experience. To implement this, you can modify your execution command as follows.

For fp16 precision:

!python entry_with_update.py --share --preset realistic --always-high-vram --all-in-fp16

Alternatively, for fp8 precision:

!python entry_with_update.py --share --preset realistic --always-high-vram --unet-in-fp8-e5m2 --clip-in-fp8-e5m2
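For context on why these flags help: each weight takes 4 bytes in fp32, 2 in fp16, and 1 in fp8, so halving the precision roughly halves the memory needed just to hold the model. A minimal back-of-the-envelope sketch; the 3.5B parameter count is my own rough assumption for an SDXL-scale model, not a figure from Fooocus:

```python
# Rough memory footprint of model weights at different precisions.
# Illustrative only: the parameter count is an assumption (roughly
# SDXL-sized), and real usage also includes activations and workspace.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1}

def weight_memory_gib(num_params: int, precision: str) -> float:
    """Approximate weight storage in GiB for the given precision."""
    return num_params * BYTES_PER_PARAM[precision] / 1024**3

params = 3_500_000_000  # assumed ~3.5B parameters (UNet + text encoders)
for p in ("fp32", "fp16", "fp8"):
    print(f"{p}: {weight_memory_gib(params, p):.1f} GiB")
```

On a free Colab instance with about 12 GB of system RAM, the difference between holding fp32 and fp16 weights is plausibly the difference between getting OOM-killed and not.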
fp16 is working wonderfully on Colab so far. I might try some heavier checkpoints and share the results in the future. Thanks!
Thanks mate, I also thought about it after @lllyasviel pointed out the RAM could be the culprit, and then in the README I saw it is possible to use commands like those and utilize the full power of the GPU RAM, but you gave the confirmation. I just tried various forms:
1. !python entry_with_update.py --share --always-high-vram
2. !python entry_with_update.py --share --always-gpu
3. !python entry_with_update.py --share --always-high-vram --all-in-fp16
4. !python entry_with_update.py --share --always-high-vram --unet-in-fp8-e5m2 --clip-in-fp8-e5m2
In my tests, 1, 2, and 3 all worked very similarly, with maybe slightly better RAM and VRAM usage in 1. 4 worked the least well, resulting in slightly lower-quality images compared to the others. I tested with all of the major checkpoints.
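When comparing flags like this, it helps to print system memory before and after a generation to see which one actually eases the RAM pressure. A minimal sketch using psutil, which comes preinstalled on Colab runtimes:

```python
import psutil

def ram_report() -> str:
    """Return a short summary of current system RAM usage."""
    vm = psutil.virtual_memory()
    used_gib = (vm.total - vm.available) / 1024**3
    total_gib = vm.total / 1024**3
    return f"{used_gib:.1f}/{total_gib:.1f} GiB ({vm.percent:.0f}%)"

# Run this in a separate cell before and after the generation step.
# A crash near 95% usage suggests the kernel's OOM killer, which
# matches Colab silently terminating the process.
print("RAM:", ram_report())
```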
How did you get it working on SageMaker? I can't even get a Gradio link.
Getting an error each time I tried to use the FaceSwap feature. I tried a completely fresh start with different Google accounts. In the log below you can see I tried to generate 4 pictures from text, and that worked fine. But when I tried to use FaceSwap, I got the error and Colab stopped automatically.
Here are screenshots of the issue happening:
Full Colab Log