-
```
## PACS, Overall, K=3
python data_list_generator.py --dataset PACS --target art_painting --mode overall --style adain --K 3 &
python data_list_generator.py --dataset PACS --target cartoon --mod…
```
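The per-domain commands can also be launched from a small driver script; the sketch below assumes the standard PACS domain names (art_painting, cartoon, photo, sketch) and simply reuses the flags shown above.
```
# Sketch: run data_list_generator.py for each PACS target in parallel,
# mirroring the backgrounded shell commands above. The domain list is an
# assumption (the usual PACS splits); the flags are copied from the snippet.
import subprocess

targets = ["art_painting", "cartoon", "photo", "sketch"]
procs = [
    subprocess.Popen([
        "python", "data_list_generator.py",
        "--dataset", "PACS", "--target", target,
        "--mode", "overall", "--style", "adain", "--K", "3",
    ])
    for target in targets
]
for proc in procs:
    proc.wait()
```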
-
Hello, thanks for your code and the pretrained model. However, the [Google Drive link](https://drive.google.com/file/d/1vBF-4s5u0sro3nwDFWL7VnAV6KViCMp0/view?usp=sharing) has become invalid recently. Woul…
-
Thanks for your work. You have written a very good paper and released the supporting code quickly.
Besides, I have read your earlier paper "StyleFlow for Content-Fixed Image to Image Translation", which describes th…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits of both this extension and the webui
### What happened?
Today, when us…
-
## 🐛 Bug
When I create a tensor on the TPU, I can perform operations on it, including printing, but after running inference these operations take a very long time, sometimes more than 2 minutes.
## To Reproduce
run…
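Since the original steps are cut off above, here is a minimal sketch of the kind of reproduction being described, assuming the PyTorch/XLA (torch_xla) stack; the model and tensor shapes are placeholders, not taken from the report.
```
# Minimal sketch (assumed torch_xla setup): a print right after creating a
# tensor is fast, while a print after inference has to materialize the
# lazily built XLA graph, which is where the long wait shows up.
import time
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()

t = torch.randn(2, 3, device=device)
print(t)                                  # fast: almost nothing to execute

model = torch.nn.Linear(512, 512).to(device)
x = torch.randn(64, 512, device=device)
out = model(x)                            # builds a lazy graph, not yet run

start = time.time()
print(out)                                # forces compilation + execution
print(f"print after inference: {time.time() - start:.1f}s")
```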
-
Hi Yiran,
Thank you for your good work.
I ran into some problems.
When I run step 5 or test on a target person, the program loads ./checkpoints/memory_seq_p2p/60_net_G.pth after initializing the net…
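For reference, a minimal sketch of the load step described above; the stand-in module and the strict=False flag are my assumptions, not the repository's actual code.
```
# Sketch of the checkpoint load at step 5 / test time. Only the path comes
# from the report; net_G here is a placeholder for the real generator.
import torch
import torch.nn as nn

ckpt_path = "./checkpoints/memory_seq_p2p/60_net_G.pth"

net_G = nn.Sequential(nn.Conv2d(3, 64, kernel_size=3, padding=1))  # placeholder
state_dict = torch.load(ckpt_path, map_location="cpu")
missing, unexpected = net_G.load_state_dict(state_dict, strict=False)
print("missing keys:", len(missing), "unexpected keys:", len(unexpected))
```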
-
Hi @jiwoogit & @hse1032,
Thanks for open-sourcing this great work! I am working with the diffusers library and using the base model [SD2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1) from…
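For context, a minimal sketch of that setup with the standard diffusers text-to-image pipeline; the fp16 dtype, CUDA device, and prompt are placeholder choices, not from the original message.
```
# Sketch: loading the SD 2.1 base model from the Hugging Face Hub with
# diffusers. The dtype, device, and prompt are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("sd21_sample.png")
```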
-
### Issue Description
Rendering 512x768 with 2x upscale creates 1024x1538 images instead of 1024x1536 images.
Reproduce:
- SD 1.5 image with 512x768
![image](https://github.com/user-attachmen…
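A quick arithmetic check for the report above (the helper below is illustration only): an exact 2x upscale of 512x768 should give 1024x1536, so the reported 1024x1538 is 2 px taller than expected.
```
# Illustration only: exact 2x dimensions for a 512x768 source.
def expected_upscale(width, height, factor=2):
    return width * factor, height * factor

w, h = expected_upscale(512, 768)
print(w, h)        # 1024 1536 (expected)
print(1538 - h)    # 2 extra rows in the rendered 1024x1538 image
```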
-
A "reference-only" preprocessor was added months ago, which works really well at transferring the style of a reference image to the generated images without using ControlNet models: https://github.com…