We have three different types of setups for the primary camera:
Room 1 (keynote, with BM PCIe camera audio+video capture)
Room 2,3,4,5 (BM USB audio+video capture)
Room 6 (Opsis video capture)
Each of these may require a slightly different audio/video delay, implemented somewhere in the vocto pipeline. All of these setups have been used at previous LCAs, so clues about suitable video/audio delay values will be in a git repo.
These values should still be tested. A suggested procedure:
Build a room setup, with a monitor acting as the projector
Plug a laptop into the Opsis so that its output is captured.
Point the camera at the monitor, so in Vocto all inputs are showing the same thing.
Feed audio from the laptop into the camera.
Play a YouTube video of a talking head (documentaries with interviews work well). [1]
Verify that all inputs are in lip sync. If any are out, the video/audio delays will need to be adjusted accordingly. Experiment with values locally before committing them to the ansible repo.
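When experimenting locally, it helps to convert a measured lip-sync offset into the units the pipeline wants. A minimal sketch (the function names and the 25 fps assumption are mine, not from any LCA repo):

```python
def delay_ns(offset_ms):
    """Convert a measured offset in milliseconds to nanoseconds,
    the unit GStreamer-style pipeline delays are usually given in."""
    return offset_ms * 1_000_000

def delay_frames(offset_ms, fps=25):
    """Convert a measured offset in milliseconds to the nearest
    whole frame count at the given frame rate (25 fps assumed)."""
    return round(offset_ms * fps / 1000)

# e.g. a 120 ms offset:
print(delay_ns(120))      # 120000000 ns
print(delay_frames(120))  # 3 frames at 25 fps
```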
Room 1's values will differ from those of rooms 2, 3, 4 & 5, which will in turn differ from room 6's.
[1] Alternatively, use some kind of A/V sync tool or video, such as Carl's clocky thing.
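One common way to experiment with an audio delay locally, before touching the voctomix/ansible configuration, is a throwaway gst-launch pipeline that holds audio back in a queue. This is a sketch only: the device paths and the 120 ms (120000000 ns) value are assumptions to be replaced with whatever the actual capture hardware and measured offset require.

```shell
# Preview video immediately, but delay audio by ~120 ms using a queue's
# min-threshold-time (nanoseconds). Tune the value until lip sync looks right.
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! videoconvert ! autovideosink \
  alsasrc device=hw:1 ! audioconvert \
    ! queue max-size-time=0 max-size-buffers=0 max-size-bytes=0 \
            min-threshold-time=120000000 \
    ! autoaudiosink
```

Once a value looks right here, the same delay can be carried over into the vocto pipeline configuration and committed to the ansible repo.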