hannolans opened 10 years ago
A first simple check would be to test whether an SRT or WebVTT file is provided, but sadly the absence of subtitles alone doesn't mean a video should fail: it could be a video without dialogue or important sounds. To go further, we could analyse the video itself. For example, a certain failure would be a talking-heads video with no captions provided. To analyse video, we could render it in a canvas and take captures; there seem to be libraries for face detection and further image analysis: http://wesbos.com/html5-video-face-detection-canvas-javascript/ http://libccv.org/ An even better way would be to analyse the audio track with the Web Audio API. And if the browser QUAIL is running in is WebKit-based, we could use the Web Speech API to do real-time speech-to-text in JavaScript. The speech gets analysed (in Chrome, by Google's servers) and you get the transcription back. Google has a session limit of 60 seconds, but that should be enough for us to detect whether valid captions are provided. http://stiltsoft.com/blog/2013/05/google-chrome-how-to-use-the-web-speech-api/
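A minimal sketch of that speech-detection idea, assuming the prefixed `webkitSpeechRecognition` constructor is available (Chrome only) and that the video's audio is audible to the recognizer, which normally listens to the microphone; the `onResult` callback is hypothetical:

```js
// Sketch: use the Web Speech API to guess whether a video contains dialogue.
function detectSpeechInVideo(onResult) {
  if (!('webkitSpeechRecognition' in window)) {
    onResult(null); // No speech recognition available; test is inapplicable.
    return;
  }
  var recognition = new webkitSpeechRecognition();
  recognition.continuous = true;      // Keep listening for the whole sample.
  recognition.interimResults = false; // Only report finalized transcriptions.
  recognition.onresult = function (event) {
    // Any transcribed text suggests the video contains dialogue,
    // so missing captions would be a failure.
    var transcript = event.results[event.results.length - 1][0].transcript;
    onResult(transcript.length > 0);
    recognition.stop();
  };
  recognition.onerror = function () { onResult(null); };
  recognition.start();
  // Google limits a session to about 60 seconds; stop well before that.
  setTimeout(function () { recognition.stop(); }, 30000);
}
```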
So the test would be:
Talked with Arjan, and this test had better handle only the detection of (valid) caption files, i.e. whether a captioning technique is used at all. Video analysis could be handled in "F8: Failure of Success Criterion 1.2.2 due to captions omitting some dialogue or important sound effects". Added a new issue for that failure: #152
OK, that leaves this test to discover caption files:
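A rough sketch of what that discovery could look like for HTML5 video, assuming jQuery (which QUAIL uses); the selectors and the review message are simplifications, not the actual QUAIL reporting API:

```js
// Pass if the video element has a captions/subtitles track,
// or a nearby link to an external caption file (.vtt or .srt).
function videoHasCaptionFile($video) {
  if ($video.find('track[kind="captions"], track[kind="subtitles"]').length) {
    return true;
  }
  return $video.parent().find('a[href$=".vtt"], a[href$=".srt"]').length > 0;
}

$('video').each(function () {
  if (!videoHasCaptionFile($(this))) {
    // We can't prove a failure: the video might contain no dialogue,
    // so flag it for human review rather than failing outright.
    console.log('Needs review: no caption file found', this);
  }
});
```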
Great that HTML5 video is covered. I think object/embed is not covered yet.
Is, for example, this video covered? http://www.rijksoverheid.nl/documenten-en-publicaties/videos/2014/01/24/persconferentie-na-ministerraad-24-januari-2014.html
The HTML includes an `<object>` embed with a `<param>` referencing the video file.
A test would be:
We could also check for .mp4 in the `param`.
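A hedged sketch of that object/embed check; the list of file extensions and the attributes inspected are assumptions, not an exhaustive set:

```js
// Look inside <object>/<embed> elements for src/data attributes or
// <param> values that point at video files such as .mp4 or .wmv.
var videoExtensions = /\.(mp4|m4v|wmv|flv|mov|ogv|webm)(\?|$)/i;

$('object, embed').each(function () {
  var $el = $(this);
  var sources = [$el.attr('src') || '', $el.attr('data') || ''];
  $el.find('param').each(function () {
    sources.push($(this).attr('value') || '');
  });
  var isVideo = sources.some(function (value) {
    return videoExtensions.test(value);
  });
  if (isVideo) {
    console.log('Embedded video found; check for captions', this);
  }
});
```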
I'm starting a `video-captions` branch to move some code into more components rather than putting it all in `videoEmbeddedOrLinkedNeedCaptions`.
Great idea. We could then also add a condition to test whether the video is live or recorded.
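For HTML5 video there is at least one signal we could use for that condition: `HTMLMediaElement.duration` is `Infinity` for unbounded live streams once metadata has loaded. A sketch, assuming we only need the HTML5 case:

```js
// Returns true for live streams, false for recorded video,
// or null when the answer isn't known yet.
function isLiveVideo(video) {
  if (video.readyState < HTMLMediaElement.HAVE_METADATA) {
    return null; // Duration is unknown until metadata loads.
  }
  return video.duration === Infinity;
}
```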
Merged in the `video-captions` branch. Any additional use cases we need to capture?
The objective of this technique is to provide a way for people who have hearing impairments, or otherwise have trouble hearing the dialogue in synchronized media material, to be able to view the material and see the dialogue and sounds, without requiring people who are not deaf to watch the captions. With this technique, all of the dialogue and important sounds are embedded as text in a fashion that causes the text not to be visible unless the user requests it. As a result, the captions are visible only when needed. This requires special support for captioning in the user agent.
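In HTML5 terms, that user-agent support is the text track model: a `<track kind="captions">` without the `default` attribute stays hidden until the user (or a script) requests it. A minimal sketch of toggling captions on demand, e.g. from a user-activated "CC" button:

```js
// Toggle the first caption/subtitle track between hidden and showing.
function toggleCaptions(video) {
  for (var i = 0; i < video.textTracks.length; i++) {
    var track = video.textTracks[i];
    if (track.kind === 'captions' || track.kind === 'subtitles') {
      track.mode = (track.mode === 'showing') ? 'hidden' : 'showing';
      return;
    }
  }
}
```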
Procedure