Closed — argadewanata closed this issue 1 month ago
Yes. You may refer to preprocess.py: first generate the video paths, gloss labels, and other necessary information into a series of .npy files, and then load the dataset in dataloader_video.py.
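A minimal sketch of this idea, assuming the metadata is stored as a list of per-sample records (the field names `fileid`, `video_path`, and `label` here are illustrative, not the repo's actual schema):

```python
import numpy as np

# Hypothetical per-sample records: video path plus gloss label,
# similar in spirit to what a preprocess step might emit.
records = [
    {"fileid": "sample_001", "video_path": "videos/sample_001", "label": "HELLO WORLD"},
    {"fileid": "sample_002", "video_path": "videos/sample_002", "label": "THANK YOU"},
]

# Save the whole list as one .npy file (pickled object array).
np.save("train_info.npy", records, allow_pickle=True)

# Later, e.g. inside the dataloader, load the records back.
loaded = np.load("train_info.npy", allow_pickle=True)
print(loaded[0]["label"])
```

Because the records are Python dicts, `allow_pickle=True` is required on both save and load; the actual preprocess.py in the repo may use a different layout.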
As far as I know, there is no gloss annotation available for the sign language used in my country. Can I proceed with sign translation alone?
This repo currently only supports CSLR, which uses glosses as outputs. For sign translation, you may use models that support the sign language translation (SLT) task.
I'm considering implementing it for my country's sign language.