Closed: aragakiyui611 closed this issue 4 months ago.
Hi,
There should not be any NaNs in HumanML3D. Please refer to the tutorial (https://github.com/EricGuo5513/HumanML3D/blob/main/motion_representation.ipynb) to check whether your data is correct. The error message specifically refers to the validation set, so I suggest you verify that you processed the dataset correctly. If it is broadly correct, just filter out the NaN sequences as a quick fix.
You may also check that your numpy and scipy library versions match.
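To locate NaN sequences for the quick fix above, a minimal sketch like the following can scan the processed feature files. The directory name `HumanML3D/new_joint_vecs` is an assumption based on the HumanML3D layout; adjust it to wherever your processed `.npy` files live.

```python
import glob
import os

import numpy as np

# Assumed path: adjust to your processed HumanML3D feature directory.
DATA_DIR = "HumanML3D/new_joint_vecs"


def find_nan_sequences(data_dir):
    """Return the filenames of .npy motion sequences that contain NaNs."""
    bad = []
    for path in sorted(glob.glob(os.path.join(data_dir, "*.npy"))):
        motion = np.load(path)
        if np.isnan(motion).any():
            bad.append(os.path.basename(path))
    return bad


if __name__ == "__main__":
    for name in find_nan_sequences(DATA_DIR):
        print("NaN found in:", name)
```

Any filename this prints can be removed from the train/val/test split files before training.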
I regenerated the dataset and it still contains NaNs. I tried filtering them out, but some data problems remain. Installing the corresponding scipy and numpy versions did not help.
There is a problem with your current filtering solution: it ignores the constraints on the covariance matrix, e.g. the diagonal elements must be greater than 0.
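The diagonal constraint can be checked explicitly before the statistics are used for normalization. This is a sketch under the assumption that the per-dimension standard deviation (the square root of the covariance diagonal) is stored as a 1-D array, as in HumanML3D's `Std.npy`; the function name `check_std` is mine.

```python
import numpy as np


def check_std(std, eps=1e-8):
    """Verify the per-dimension std (diagonal of the covariance matrix)
    is strictly positive and free of NaNs before dividing by it."""
    std = np.asarray(std)
    if np.isnan(std).any():
        raise ValueError("Std contains NaN entries")
    zero_dims = np.where(std <= eps)[0]
    if zero_dims.size:
        raise ValueError(f"Std has non-positive entries at dims {zero_dims.tolist()}")
    return True
```

A zero or NaN entry here means the corresponding feature dimension is constant or corrupted across the dataset, and dividing by it during normalization is exactly what turns finite motions into NaNs.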
What I suggest for a quick fix is to inspect the motion sequences in the dataset; hopefully only a few sequences are illegal. Then filter out all the illegal sequences before any other operations. I also suggest you identify which operation produces the NaN data, rather than letting it propagate.
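To find which operation first produces NaNs, one option is a small fail-fast guard between pipeline stages. The stage names and the `mean`/`std` variables below are illustrative, not part of the repository's code.

```python
import numpy as np


def assert_finite(arr, stage):
    """Raise immediately, naming the stage, when non-finite values appear."""
    if not np.isfinite(arr).all():
        raise FloatingPointError(f"non-finite values after stage: {stage}")
    return arr


# Illustrative usage with hypothetical mean/std statistics:
#   motion = assert_finite(np.load(path), "load")
#   motion = assert_finite((motion - mean) / std, "normalize")
```

Wrapping each stage this way pinpoints whether the NaNs come from the raw files or from a later step such as normalization.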
Thank you! I found the sequences containing NaNs and deleted them; only *007975.npy had NaNs.
When I run

```shell
python train_vq.py --name rvq_name --gpu_id 1 --dataset_name t2m --batch_size 256 --num_quantizers 6 --max_epoch 50
```

it raises this error. Are there NaNs in the dataset?