Our team has been dragging its feet on transcribing our participant videos; it's labor-intensive, tedious, and takes forever. But the transcripts on their own don't make sense when you read them, which makes it really hard to properly analyze our own research.
After Otter auto-transcribes a video file, the transcript has to be manually cross-referenced with the video to correct typos and sentence structure, and to note strong emotions, gestures, screen-shares, and other relevant details the auto-transcript didn't pick up. Only then can coding, analysis, and synthesis begin.
(It would be a DREAM if we could caption the video files themselves with SRT, but we might be a long way from that.)
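For what it's worth, the SRT format itself is simple enough that a small script could bridge part of the gap once a transcript has timing data. Below is a minimal sketch that turns timed transcript segments into a .srt caption file; the segments list here is hypothetical and Otter's real export format may differ.

```python
# Hypothetical transcript segments: (start_seconds, end_seconds, text).
# These are made-up values for illustration, not Otter's export format.
segments = [
    (0.0, 3.2, "So walk me through how you'd normally do this."),
    (3.5, 7.1, "[laughs] Honestly, I usually just give up here."),
]

def to_srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as the SRT timestamp HH:MM:SS,mmm."""
    total_ms = round(seconds * 1000)
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02}:{minutes:02}:{secs:02},{ms:03}"

# Write numbered SRT cues: index, "start --> end" line, text, blank line.
with open("session.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(segments, start=1):
        f.write(f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text}\n\n")
```

The resulting session.srt can be loaded as a caption track by most video players, so annotations added during cleanup (emotions, gestures, screen-shares) would stay in sync with the footage.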
Action Items
- Folder of all studies, including videos and transcripts
- Stakeholders, developers, product managers
- Designers