pieterwolfert / co-speech-humanoids

Co-Speech Gesture Generation for Humanoid Robots

Empty preprocessing pickle file #1

Closed a-haydar closed 5 years ago

a-haydar commented 5 years ago

Assuming that I am running the code correctly, the steps are:

0. Place the JSON, CSV, and VTT files from youtube-gesture-dataset in `./data`
1. Run `preprocessing.py` to generate the pickle file
2. Run `main.py` for training

However, step 1 generates an empty pickle file.

Captions are not picked up correctly, as the default file names include a language suffix (?????????-en.vtt):

https://github.com/pieterwolfert/co-speech-humanoids/blob/ce6f3caaa4c5c2ca1a97340a300e9a7ce0d089fe/preprocessing.py#L119-L122

Renaming the files in the folder to remove the "-en" suffix fixes that issue; however, the following error then comes up in normalizePose():
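Instead of renaming, the lookup could tolerate language-suffixed names (`-en.vtt` or `.en.vtt`) as well as bare `<vid>.vtt`. A minimal sketch; `find_caption_file` is a hypothetical helper, not a function from the repo:

```python
import glob
import os

def find_caption_file(data_dir, vid):
    """Return the caption file for a video id, accepting a bare
    '<vid>.vtt' as well as language-suffixed variants such as
    '<vid>-en.vtt' or '<vid>.en.vtt' (hypothetical helper)."""
    candidates = sorted(glob.glob(os.path.join(data_dir, vid + "*.vtt")))
    return candidates[0] if candidates else None
```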


```
ZeroDivisionError                         Traceback (most recent call last)
in
     26 frame_sec_end = e_end_seconds - start_time
     27 ff = frames[int(frame_sec_start * fps):int(frame_sec_end * fps)]
---> 28 pose_frames = normalizePose(ff)
     29 text = c[1]
     30 clip_caption_poses.append([text, pose_frames])

in normalizePose(pose_frames)
     18     if i % 2 == 0:
---> 19         pose_list[i] = (item - offset_x) / length_shoulders
     20     else:
     21         pose_list[i] = (item - offset_y) / length_shoulders

ZeroDivisionError: float division by zero
```
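The division fails whenever `length_shoulders` is 0, e.g. when the pose estimator misses a shoulder keypoint. A guarded sketch of that normalization step, reconstructed from the traceback's variable names rather than from the actual `normalizePose()`:

```python
def normalize_pose_safe(pose_list, offset_x, offset_y, length_shoulders):
    """Normalize alternating x/y coordinates by the shoulder span,
    returning None for degenerate skeletons instead of raising
    ZeroDivisionError (sketch based on the traceback, not the repo)."""
    if length_shoulders == 0:
        return None  # caller can drop this frame
    out = list(pose_list)
    for i, item in enumerate(out):
        if i % 2 == 0:
            out[i] = (item - offset_x) / length_shoulders  # x coordinate
        else:
            out[i] = (item - offset_y) / length_shoulders  # y coordinate
    return out
```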
pieterwolfert commented 5 years ago

Hi a-haydar,

I think some of the files changed (I've heard Youngwoo Yoon, the original author who shared the dataset, has updated some of his data). In my tedx_subtitles folder the files are named like .en.vtt; renaming yours to match should fix it, as far as I'm aware. Once that data is read in correctly, it should be possible to normalize the skeletons.

a-haydar commented 5 years ago

Hello pieterwolfert,

Thank you for your reply. Renaming the files does not fix the issue: some skeletons still fail to normalize (the error above actually occurred after renaming the files). For now I am working on removing frames that cannot be normalized, but even then training does not seem to be going well.
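The workaround described here (dropping frames that cannot be normalized) might look like the sketch below; `shoulder_length_fn` is a hypothetical stand-in for however the preprocessing code measures the shoulder span of a frame:

```python
def drop_unnormalizable(frames, shoulder_length_fn):
    """Keep only frames with a positive shoulder span, so the
    subsequent division in normalization cannot hit zero
    (sketch; shoulder_length_fn is a hypothetical helper)."""
    return [f for f in frames if shoulder_length_fn(f) > 0]
```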

pieterwolfert commented 5 years ago

I will do some testing soon with the new dataset version; hopefully that will eliminate the problems you're experiencing now.

a-haydar commented 5 years ago

Thank you. I would like to close this issue, as it is a mix of multiple problems; I apologize for that. I have written an update to the README to help potential users prepare for running the code; please review it. I will open separate issues for anything else I find.