14-06-2023: The GENEA Challenge 2023 visualizer received improved input and output handling of filenames (the original code given to participants before this update can be found in this release):
- Input arguments were renamed:
    - `-i1` -> `-imb` (main agent BVH)
    - `-i2` -> `-iib` (interlocutor BVH)
    - `-a1` -> `-imw` (main agent WAV)
    - `-a2` -> `-iiw` (interlocutor WAV)
- `-n` must be specified by the user for handling filenames of intermediate and final output files:
    - The value must not contain dots (`.`) or slashes (`/`, `\`). For example, `-n "my_output"` is allowed, but `-n "my_output.mp4"` or `-n "my_directory/my_output"` is not. The output directory should be specified using the `-o` arg.
    - The output video will use `-n` as its filename, with `.mp4` appended at the end. (A minimal validation sketch follows this list.)
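If you generate output names programmatically, a small pre-flight check along these lines can catch invalid names before Blender is launched. This helper is hypothetical and not part of the visualizer; the script performs its own validation internally:

```python
# Hypothetical pre-flight check mirroring the -n rules above;
# the visualizer performs its own validation internally.
def is_valid_output_name(name: str) -> bool:
    """True if `name` contains no dots and no forward/backward slashes."""
    return not any(ch in name for ch in ".\\/")

assert is_valid_output_name("my_output")                    # OK
assert not is_valid_output_name("my_output.mp4")            # contains a dot
assert not is_valid_output_name("my_directory/my_output")   # contains a slash
```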
The very first installment of the GENEA Challenge 2020 visualizer is hosted in a different repo: https://github.com/jonepatr/genea_visualizer
Thanks to @AbelDoc, a minimal version of the visualizer is available that can be used locally. It is especially useful for batch-rendering BVH files and supports multiple rendering engines in Blender!
The code is hosted here: https://github.com/AbelDoc/GENEA_Visualiser_Local
The GENEA Challenge 2022 visualizer is archived in the `archive_2022` branch: https://github.com/TeoNikolov/genea_visualizer/tree/archive_2022
Example output from the visualization server. The indicators above the speakers hint to the viewer that a speaker is engaged in "active speech".
This repository contains code that can be used to visualize BVH files (with optional audio) using Blender for dyadic interactions. The code was developed for the GENEA Challenge 2023, and enables reproducing the visualizations used for the challenge stimuli on most platforms. Currently, we provide only one interface for rendering visualizations:
The Blender script can be used directly inside Blender, either through a command line interface or Blender's user interface. Using the script directly is useful if you have Blender installed on your system, and you want to play around with the visualizer.
You need Blender 2.93.9 installed (other versions may work, but this is not guaranteed).

1. Open Blender and navigate to the `Scripting` panel above the 3D viewport.
2. Click `Open` to navigate to the `blender_render_2023.py` script. This script is found inside the `celery-queue` folder.
3. Set the arguments manually inside `main()` below the comment block that reads "SET ARGUMENTS MANUALLY..." (a sketch of this block is shown after these steps).
4. Run the script. The video will be saved in the `ARG_OUTPUT_DIR` directory (defaults to the same folder as the BVH file). The filename is computed from `ARG_OUTPUT_NAME`.
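For orientation, the manual-arguments block in `main()` looks roughly like the sketch below. Only `ARG_OUTPUT_DIR` and `ARG_OUTPUT_NAME` are documented in this readme; the remaining variable names and values are assumptions for illustration, so copy the real names from `blender_render_2023.py`:

```python
# Sketch of the "SET ARGUMENTS MANUALLY..." block in main().
# Only ARG_OUTPUT_DIR and ARG_OUTPUT_NAME are named in this readme;
# the other variable names below are assumed placeholders - check
# blender_render_2023.py for the actual ones.
ARG_MAIN_AGENT_BVH   = "C:/data/mocap_main.bvh"      # main agent motion (-imb)
ARG_INTERLOCUTOR_BVH = "C:/data/mocap_interloc.bvh"  # interlocutor motion (-iib)
ARG_MAIN_AGENT_WAV   = "C:/data/audio_main.wav"      # main agent speech (-imw)
ARG_INTERLOCUTOR_WAV = "C:/data/audio_interloc.wav"  # interlocutor speech (-iiw)
ARG_OUTPUT_DIR  = "C:/data/rendered/"  # defaults to the BVH file's folder
ARG_OUTPUT_NAME = "my_output"          # no dots/slashes; ".mp4" is appended
```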
It is likely that your machine learning pipeline outputs a bunch of BVH and WAV files, such as during hyperparameter optimization. Instead of processing each BVH/WAV file pair separately through Blender's UI yourself, call Blender with command line arguments like this:
"<path to Blender executable>" -b --python "<path to 'blender_render_2023.py' script>" -- -imb "<path to main agent BVH file>" -iib "<path to interlocutor BVH file>" -imw "<path to main agent WAV file>" -iiw "<path to interlocutor WAV file>" -v -d 600 -o <directory to save MP4 video in> -n "<output file name>" -m <visualization mode>
On Windows, you may write something like this:
& "C:\Program Files (x86)\Steam\steamapps\common\Blender\blender.exe" -b --python ./blender_render_2023.py -- -imb "C:\Users\Wolf\Documents\NN_Output\BVH_files\mocap1.bvh" -iib "C:\Users\Wolf\Documents\NN_Output\BVH_files\mocap2.bvh" -imw "C:\Users\Wolf\Documents\NN_Output\audio1.wav" -iiw "C:\Users\Wolf\Documents\NN_Output\audio2.wav" -v -d 600 -o "C:\Users\Wolf\Documents\NN_Output\Rendered\" -n "Output" -m "full_body"
Tip: Tweak `--duration <frame count>` to smaller values to decrease render time and speed up your testing.
During the development of the visualizer, a variety of scripts were used for standardizing the data and processing video stimuli for subjective evaluation. The scripts are included in the `scripts` folder in case anyone needs to use them directly, or as a reference, for solving similar tasks. Some scripts were not written in a user-friendly manner and lack comments and argument parsing, so using them may be cumbersome; be ready for some manual fiddling (e.g. replacing hard-coded paths). Writing a short readme inside the `scripts` folder is on the backlog, but there is no telling when this will happen.
Currently, the default settings written inside the Blender script are those that will be used to render the final challenge stimuli of the GENEA Challenge 2023. Please check this repository occasionally for any changes to these settings.
@inproceedings{kucherenko2023genea,
  author    = {Kucherenko, Taras and Nagy, Rajmund and Yoon, Youngwoo
               and Woo, Jieyeon and Nikolov, Teodor and Tsakov, Mihail
               and Henter, Gustav Eje},
  title     = {The {GENEA} {C}hallenge 2023: {A} large-scale evaluation
               of gesture generation models in monadic and dyadic settings},
  booktitle = {Proceedings of the ACM International Conference on
               Multimodal Interaction},
  publisher = {ACM},
  series    = {ICMI '23},
  year      = {2023}
}
To find more GENEA Challenge 2023 material on the web, please see:
To find more GENEA Challenge 2022 material on the web, please see:
If you have any questions or comments, please contact: