-
Hello!
Would you mind telling me what I should type in for {path-to-diffusion-lm} when decoding Diffusion-LM? I have tried several paths, but they all return an empty output list.
Also may I ask where did y…
-
Could you also share an example of generation guided by secondary structure, as shown in the 4.3.3 CONTROLLABLE GENERATION: SECONDARY STRUCTURE GUIDED PROTEIN SAMPLING section of your paper? Thank you in advan…
-
Mflux ComfyUI is a great project, and I hope the team will continue to maintain and update it. If possible, bringing in other like-minded technical enthusiasts to collaborate would be beneficial. I kno…
-
Hello dear Tongyi SpeechTeam,
I am interested in controllable generation via instruction, and I want to fine-tune the model with my own version of the data. Based on this, my question is: are there …
-
First off, I want to say that I really admire your work! The new version of Craftsman looks fantastic, and it’s exciting to see the progress you’ve been making.
However, I noticed an issue where th…
-
Expression Transfer:
"GANimation: Anatomically-aware Facial Animation from a Single Image" (Pumarola et al., 2018)
"MeshTalk: 3D Face Animation from Speech using Cross-Modal Disentanglement" (Rich…
-
# URL
- https://arxiv.org/abs/2408.12599
# Affiliations
- Xun Liang, N/A
- Hanyu Wang, N/A
- Yezhaohui Wang, N/A
- Shichao Song, N/A
- Jiawei Yang, N/A
- Simin Niu, N/A
- Jie Hu, N/A
-…
-
# Controllable deep melody generation via hierarchical music structure representation [[ISMIR21](https://arxiv.org/abs/2109.00663)]
## Abstract
- music framework generates rhythm and basic melody …
-
Thank you for open-sourcing this work; the model is very good.
I have a question: is there a guide on how to use the paralinguistic tokens?
For example, the markers mentioned in the paper and demo, such as `[word_rep]`, `谁@知@道@啊`, and `(realization)[prolong]`.
Also, how is Emotion controlled? The code does not currently seem to include any of this.
-
https://virtual2023.aclweb.org/paper_P1495.html