-
How would you evaluate the quantitative performance of your model on the genea_challenge_2020 dataset? I only found the code for evaluation on the TED dataset.
-
I input the audio TrinitySpeech_Gesture_I_GENEA_Challenge_2020/Test_Data/Audio/TestSeq001.wav, which is 1 min 58 s long.
After running generate.py, I get the motion file TestSeq001.bvh, which has 2350 frames, F…
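One way to sanity-check the frame count against the audio length is to compare it with duration times frame rate. A minimal sketch, assuming a 20 fps motion output (the fps value is an assumption, not confirmed by the repo; check the frame time in the BVH header):

```python
def expected_motion_frames(duration_s: float, fps: float = 20.0) -> int:
    """Frames a generator should emit for a clip of the given duration,
    assuming a fixed motion frame rate (20 fps is an assumption here)."""
    return round(duration_s * fps)

# 1 min 58 s = 118 s of audio at an assumed 20 fps:
print(expected_motion_frames(118, 20))  # 2360, close to the 2350 frames observed
```

The small gap between 2360 and 2350 would then come from trimming or windowing in the pipeline rather than a frame-rate mismatch.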
-
Hello, thanks for sharing the code!
In DiffuseStyleGesture, the model uses only one audio feature, WavLM.
But when extracting the WavLM feature from the raw wav, the [code](https://github.com/YoungSeng/Di…
-
I trained 'style_gestures' on the challenge data from the GENEA Workshop and used the provided data processing script to get the training input. The training loss decreases from tens to negative hundreds (80…
-
I found that when I use my pretrained model to synthesize new gestures following the guidance, only the BVH is output. How can I get the paired audio data?
-
Hello,
I am trying to run your great work with the command:
python main_v2.py -c config/multimodal_context_v2.yml
but got an error:
File "/home/zhewei.qiu/anaconda3/envs/s2ag-env/lib/python3.7/site…
-
What sort of functionality should be exposed to the CLI?
For example, a universal GTF/GFF/refFlat conversion script would be useful.
daler updated
11 years ago
-
Dear scGPT team,
thanks for providing the tutorial and example for perturbation prediction. It gets lots of attention.
I have a question: can your model also generate a gene perturbation …
-
Hello! Is there any code for the evaluation metrics?
223d updated
11 months ago
-
Hello,
Thank you for this nice repository.
I have noticed that the GENEA Challenges, the only gesture generation challenges, are not on your list of challenges:
https://genea-workshop.github.i…