Hello! Thank you for your work!
When I ran the code that reads the skeleton-data pkl file for the German Sign Language dataset, I found that it only contains skeleton data for the dev and test videos, not for the training videos. Is this why the source code cannot run directly, i.e. do I need to generate the skeleton data for the train videos myself? I would also like to ask how the two streams read the RGB and skeleton data simultaneously; I am not very familiar with the source code and do not understand this part well. Thank you!
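For context, here is a minimal sketch of how I imagine a single dataset could serve both branches at once; the paths, pickle keys, and tensor shapes are my own assumptions, not the repo's actual code, so please correct me if the real implementation differs:

```python
# Sketch only: pickle layout and shapes below are guesses, not the repo's API.
import pickle
import torch
from torch.utils.data import Dataset

class RGBSkeletonDataset(Dataset):
    def __init__(self, rgb_tensors, skeleton_pkl_path):
        # rgb_tensors: list of (T, C, H, W) video tensors, one per sample
        self.rgb_tensors = rgb_tensors
        # Skeleton pkl assumed to map sample index -> (T, J, 2) keypoint array
        with open(skeleton_pkl_path, "rb") as f:
            self.skeletons = pickle.load(f)

    def __len__(self):
        return len(self.rgb_tensors)

    def __getitem__(self, idx):
        rgb = self.rgb_tensors[idx]
        skeleton = torch.as_tensor(self.skeletons[idx], dtype=torch.float32)
        # Return both modalities together so each batch feeds the RGB
        # branch and the skeleton branch simultaneously.
        return rgb, skeleton
```

Is this roughly the pattern the two-stream dataloader follows, or does each stream read its modality separately?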