Closed ningxinJ closed 9 months ago
Hello, I've been attempting to replicate the results from your paper, specifically the prompt "A portrait of Hatsune Miku, robot" using the Civitai model. Unfortunately, the outcomes I'm getting are quite poor and don't resemble the results shown in the paper.
I am unsure if there is a specific configuration that I might be missing. Could you provide a config file that reproduces the results as they appear in the publication?
Thank you very much for your assistance.
Can you share your settings with me? Maybe I can help you. A trick we use when trying out Civitai models is to adopt a larger delta_t; for example, I use 200 for the Miku case shown in our teaser, with 'A woman head' as the prompt for the Point-E initialization.
Specifically, I use the model 'realcartoonPixar_v3'.
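For reference, those suggestions could be expressed as config overrides on top of one of the repo's example YAML files. This is only a sketch: apart from `delta_t`, the field names below (`model_key`, `text`, `init_prompt`) are assumptions based on the thread and may not match the repo's actual schema.

```yaml
# Hypothetical overrides for the Miku case described above.
model_key: realcartoonPixar_v3            # Civitai model used for the teaser
delta_t: 200                              # larger delta_t helps with Civitai models
text: "A portrait of Hatsune Miku, robot" # generation prompt
init_prompt: "A woman head"               # prompt for the Point-E initialization
```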
Thank you for your reply. I've tried changing the config as you suggested, and the other views turned out much better. However, there are still issues with the front view. I simply used the zombie_joker.yaml file as the config and changed only the text and the pretrained model.
It looks better. You may also try tuning the 'rand_cam_gamma' parameter; a larger gamma value emphasizes the front face more.
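Concretely, that tweak would be a single line in the config. The value shown is only an illustrative guess; the sensible range depends on the repo's default, which is not stated in this thread.

```yaml
# Hypothetical: a larger rand_cam_gamma biases random camera sampling
# toward the front view, emphasizing the front face during optimization.
rand_cam_gamma: 2.0
```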
Excuse me, how do you change the model? By changing model_key?
I downloaded the model from Civitai and changed LoRA_path as in ts_lora.yaml, but something goes wrong in lora.py; the file seems to be structured differently from the Taylor_Swift safetensors file.
If you download a safetensors checkpoint, you can change the model key to the path of the safetensors file, set 'is_safe_tensor: true' after the model key, and leave 'lora_path: none'.
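Putting the reply above into config form, a full-checkpoint setup (as opposed to a LoRA like ts_lora.yaml) might look like the sketch below. The checkpoint path is a placeholder, and the exact key names should be checked against the repo's example configs.

```yaml
# Sketch: loading a full Civitai safetensors checkpoint instead of a LoRA.
model_key: ./checkpoints/realcartoonPixar_v3.safetensors  # placeholder path
is_safe_tensor: true   # tells the loader this is a raw safetensors file
lora_path: none        # no LoRA weights in this setup
```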
I got it. Thank you!