I have a few questions about the Sign Language Processing repository in general:

1. After generating the fingerspelling and SignBank+ datasets, I'm trying to run the scripts for training the various models (fairseq, sockeye, opennmt, mt5). I noticed that the scripts use sbatch, which, as I understand it, requires a Slurm setup. How would you recommend I go about training, evaluating, and passing inputs to the models locally?

2. I noticed there is another repository with text-to-gloss-to-pose conversion. Is the text-to-gloss part meant to be a different approach from (1), where glosses are used instead of FSW strings, or are they related? Additionally, do FSW strings encode information about facial expressions?

Thank you in advance for your help!
If you don't have Slurm set up, you can run the scripts directly in the terminal with bash (e.g., bash script_name.sh) instead of sbatch. You will have to activate the conda environment beforehand.
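For example, a minimal sketch (the environment and script names below are placeholders, not the actual ones from the repository):

```bash
# bash treats the #SBATCH directives at the top of the script as ordinary
# comments, so they are simply ignored when the script is run directly.
conda activate my-env     # placeholder: use the repository's actual environment name
bash train_model.sh       # placeholder: substitute the actual training script
```

One caveat: if a script invokes Slurm-specific commands internally (e.g., srun, or module load on a cluster), those lines will fail outside Slurm and may need to be removed or adjusted for a local run.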