**AmitMY** opened this issue 7 months ago
Some notes:
- The system mainly fails on numbers (1830) and named entities (Karl Friedrich Schinkel), which, with some modifications, it could spell out.
- It also does not make sentence boundaries clear and essentially ignores the punctuation (also fixable).
- In my opinion, perhaps the biggest problem to address is that the signing is performed in spoken-language word order. It is comprehensible, but not really sign language.
- The smoothing between signs is too simplistic (easily seen in the skeleton video) and can be fixed.
- The video quality is not the best: the generated interpreter shows some artifacts even when the pose sequence is perfect. Not easily fixable.
- The video is quite slow. Further work can be done to make the signing faster and tighter, decreasing the number of frames.
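To illustrate the smoothing point above, here is a minimal sketch of one common fix: cross-fading between consecutive pose sequences instead of cutting hard. The function name `join_signs` and the array layout are assumptions for illustration, not the project's actual API.

```python
import numpy as np

def join_signs(sign_a: np.ndarray, sign_b: np.ndarray,
               transition_frames: int = 8) -> np.ndarray:
    """Concatenate two pose sequences, inserting linearly interpolated
    frames between the last pose of A and the first pose of B.

    Each sequence is assumed to have shape (frames, keypoints, dims).
    """
    start, end = sign_a[-1], sign_b[0]
    # Interpolation weights exclude the endpoints themselves,
    # since those frames already exist in A and B.
    ts = np.linspace(0.0, 1.0, transition_frames + 2)[1:-1]
    transition = np.stack([(1 - t) * start + t * end for t in ts])
    return np.concatenate([sign_a, transition, sign_b], axis=0)
```

Linear interpolation is the simplest option; a real system would likely smooth velocities as well, or interpolate in a joint-angle space.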
We now use the improved pose-to-video based on diffusion models. We start with a paragraph in German and translate it to German Sign Language:
The simple glossing gives:
The current system gives:
We chose to focus on one issue: visual inconsistency between signs. After adding pose anonymization in https://github.com/sign-language-processing/spoken-to-signed-translation/commit/0072c52478c5cf9be030b3181bc997337e6507f3, the output is:
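The idea behind pose anonymization is to make clips recorded by different signers look like a single body. A crude sketch of one ingredient, rescaling and recentering every clip to a shared shoulder width and origin, is shown below; the joint indices and the function name `normalize_pose` are hypothetical, and the actual commit may use a different normalization.

```python
import numpy as np

# Hypothetical keypoint indices for the two shoulders; the real project
# addresses pose components by name, not by hard-coded index.
LEFT_SHOULDER, RIGHT_SHOULDER = 11, 12

def normalize_pose(frames: np.ndarray) -> np.ndarray:
    """Rescale and recenter a pose sequence so every clip shares the same
    shoulder width and mid-shoulder origin. This discards the original
    signer's body proportions, a crude form of anonymization.

    frames: (n_frames, n_keypoints, 2)
    """
    mid = (frames[:, LEFT_SHOULDER] + frames[:, RIGHT_SHOULDER]) / 2.0
    width = np.linalg.norm(
        frames[:, LEFT_SHOULDER] - frames[:, RIGHT_SHOULDER], axis=-1
    ).mean()
    return (frames - mid[:, None, :]) / width
```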
https://github.com/sign-language-processing/spoken-to-signed-translation/assets/5757359/12c1dc04-043d-4709-845d-303f3407b2ce
We note that the database lookup time was 9 seconds; this was optimized to 1-2 seconds and could be improved further.
We recognize that sentences should be split. This would affect both the database search (search only within one sentence at a time) and the video: the hands would lower and raise at sentence boundaries instead of being cropped. (Possibly, generate every sentence independently, then join the clips.)