zyhbili / LivelySpeaker

[ICCV-2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation".

error while executing the scripts for BEAT #10

Open VishnuSai87 opened 10 months ago

VishnuSai87 commented 10 months ago

I am getting this error while executing the recently released scripts for BEAT. I cannot find this file in the test folder of the BEAT dataset:

    self.lmdb_env = lmdb.open(preloaded_dir, readonly=True, lock=False)
    lmdb.Error: /home/LivelySpeaker_beat/datasets/BEAT/finaltest/my6d_bvh_rot_2_4_6_8_cache: No such file or directory

Can you tell me whether I have to download the whole zip of the BEAT dataset, or what exactly I should download from the BEAT dataset to run this code? Thank you.
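For reference, the failing call is simply lmdb.open pointed at a cache directory that has not been generated yet; a minimal check (path taken from the error above) makes the failure mode explicit:

```python
import os
import lmdb

# Path copied from the error message above; adjust to your dataset root.
preloaded_dir = "/home/LivelySpeaker_beat/datasets/BEAT/finaltest/my6d_bvh_rot_2_4_6_8_cache"

# lmdb.open(readonly=True) fails with "No such file or directory" if the cache was never built.
if not os.path.isdir(preloaded_dir):
    raise FileNotFoundError(f"cache not built yet: {preloaded_dir}")

env = lmdb.open(preloaded_dir, readonly=True, lock=False)
with env.begin() as txn:
    print("entries in cache:", txn.stat()["entries"])
```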

nehaksheerasagar commented 10 months ago

I am facing the same issue. Can you tell me how to extract speakers 2, 4, 6, and 8 from the cache file and create the bvh_rot_2_4_6_8_cache from bvh_rot_cache? Thank you.

fcchit commented 10 months ago

@VishnuSai87 @nehaksheerasagar I used tmp/process_cache.py to generate my6d_bvh_rot_2_4_6_8_cache from the bvh_rot_cache produced by the BEAT preprocessing tools. I'm training the model now.
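For anyone else attempting this, the general pattern such a conversion script follows is just reading the source LMDB, patching each sample, and writing a new LMDB. A rough sketch (the paths, map size, and patch step are assumptions, not the actual tmp/process_cache.py):

```python
import lmdb
import pyarrow  # the caches use the legacy pyarrow.serialize format

src_path = "datasets/BEAT/train/bvh_rot_cache"                # built by the BEAT preprocessing tools
dst_path = "datasets/BEAT/train/my6d_bvh_rot_2_4_6_8_cache"   # cache expected by this repo

src = lmdb.open(src_path, readonly=True, lock=False)
dst = lmdb.open(dst_path, map_size=1024 ** 4)  # generous map size for the new cache

with src.begin() as rtxn, dst.begin(write=True) as wtxn:
    for key, value in rtxn.cursor():
        sample = pyarrow.deserialize(value)
        # ... patch the sample here, e.g. add a rot6d pose and mel features ...
        wtxn.put(key, pyarrow.serialize(sample).to_buffer().to_pybytes())
```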

nehaksheerasagar commented 10 months ago

I ran this command: python tmp/process_cache.py. The my6d_bvh_rot_2_4_6_8_cache directory got created, but it gave me this output:

    train, 0/64
    Traceback (most recent call last):
      File "tmp/process_cache.py", line 58, in <module>
        build_data_with_beat("train")
      File "tmp/process_cache.py", line 32, in build_data_with_beat
        sample = pyarrow.deserialize(sample)
      File "pyarrow/serialization.pxi", line 461, in pyarrow.lib.deserialize
      File "pyarrow/serialization.pxi", line 423, in pyarrow.lib.deserialize_from
      File "pyarrow/serialization.pxi", line 400, in pyarrow.lib.read_serialized
      File "pyarrow/error.pxi", line 87, in pyarrow.lib.check_status
    pyarrow.lib.ArrowIOError: Cannot read a negative number of bytes from BufferReader.

Also, when I run the code for train it runs fine, but when I run the code for test it says number of samples = 0. Thank you.
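This ArrowIOError can indicate that some entries in the source cache are truncated or were written with a different pyarrow serialization format than the one installed. A quick sanity check over the source cache can narrow it down; a rough sketch, with an assumed cache path:

```python
import lmdb
import pyarrow

# Assumed path to the cache produced by the BEAT preprocessing tools.
cache_dir = "datasets/BEAT/train/bvh_rot_cache"

env = lmdb.open(cache_dir, readonly=True, lock=False)
with env.begin() as txn:
    print("total entries:", txn.stat()["entries"])
    bad = 0
    for key, value in txn.cursor():
        try:
            pyarrow.deserialize(value)  # legacy API; must match the pyarrow version used to write the cache
        except pyarrow.lib.ArrowIOError:
            bad += 1
            print("unreadable entry:", key)
    print("unreadable entries:", bad)
```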

VishnuSai87 commented 10 months ago

After training, for testing should we keep the my6d_bvh_rot_2_4_6_8_cache directory in the test folder of the dataset to execute the test code? I did that and also got number of samples = 0.

zyhbili commented 10 months ago

I updated the data scripts in data_libs; please see the README in that subfolder for details. In short, we mostly follow the original BEAT processing procedure to generate bvh_rot_2_4_6_8_cache. Additionally, we patch it with rot6d and mel features to generate my6d_bvh_rot_2_4_6_8_cache.
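For reference, the two patch steps above are standard operations: the 6D rotation representation keeps the first two columns of each joint's rotation matrix, and "mel" refers to a (log-)mel spectrogram of the speech audio. A rough sketch of both, where the Euler order, sample rate, and mel parameters are assumptions (the exact settings are in the data_libs scripts):

```python
import numpy as np
import librosa
from scipy.spatial.transform import Rotation as R

def euler_to_rot6d(euler_deg, order="ZXY"):
    """Euler angles (T, J*3) in degrees -> 6D rotation representation (T, J*6).
    Keeps the first two columns of each rotation matrix (Zhou et al. 2019);
    the Euler order and flattening order must match the BVH/repo convention."""
    T = euler_deg.shape[0]
    mats = R.from_euler(order, euler_deg.reshape(-1, 3), degrees=True).as_matrix()  # (T*J, 3, 3)
    six_d = mats[:, :, :2].transpose(0, 2, 1).reshape(T, -1)  # concatenate the two columns per joint
    return six_d

def audio_to_mel(wav, sr=16000, n_mels=128):
    """Log-mel spectrogram of the speech waveform; parameter values are placeholders."""
    mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)
```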

zyhbili commented 10 months ago

> After training, for testing should we keep the my6d_bvh_rot_2_4_6_8_cache directory in the test folder of the dataset to execute the test code? I did that and also got number of samples = 0.

We use the finaltest cache for testing. Following the same operation as in TED, we split each original long test sequence into 34-frame clips. Thus, you should split the test dataset with the dataloader first.
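In case it helps, the 34-frame split is just fixed-length windowing of each long test sequence; a minimal sketch (non-overlapping windows are an assumption, the actual dataloader may use a different stride):

```python
import numpy as np

def split_into_clips(seq, clip_len=34, stride=34):
    """Split a (T, D) motion sequence into fixed-length clips; the trailing remainder is dropped."""
    starts = range(0, len(seq) - clip_len + 1, stride)
    clips = [seq[s:s + clip_len] for s in starts]
    return np.stack(clips) if clips else np.empty((0, clip_len, seq.shape[-1]), dtype=seq.dtype)

# e.g. a 150-frame test sequence yields 4 clips of 34 frames
```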