This repository contains the Observation Framework component of the "WAVE Streaming Media Test Suite - Devices". The Observation Framework determines pass or fail results based on observations of screen recordings of tests that are run on a device by the Test Runner.
The test suite implements the tests and observations as defined in the CTA WAVE Device Playback Capabilities specification. For more information:
The Device Observation Framework must work together with the Test Runner. The Test Runner should be set up prior to the Device Observation Framework. Please follow the instructions in the main WAVE Streaming Media Test Suite README.
Check that the Test Runner is functioning correctly and able to run tests prior to installing the Observation Framework.
Setting up inside a Docker container
The dpctf-deploy repo contains scripts to set up the Device Observation Framework inside a Docker container, as well as a script to run the Device Observation Framework analysis. Instructions can be found in its README.
Setting up without Docker
The Device Observation Framework can also be installed without Docker; instructions can be found in deploy_without_docker.
Ahead of running the Device Observation Framework, the user MUST set up the camera and the device under test (DUT) carefully to record the tests; instructions can be found in the following section. Once the camera and DUT set-up is correct, Test Runner sessions can be analysed. See https://web-platform-tests.org/running-tests/ for instructions on how to run a test session. Prior to starting the session, begin the camera recording (ensuring that the camera is set to record at around 120 fps). Record the Test Runner session from beginning to end and then stop the camera recording.
Only one session may be contained in a single recording. A single session may contain multiple tests.
Once the recording is complete, follow the camera manufacturer's instructions to transfer the recorded file to the PC where Observation Framework is installed.
The command to run the Device Observation Framework can be found in the README.
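For illustration only (the exact entry point and parameter names are defined in that README; `observation_framework.py` and `--input` are assumptions here), an invocation might look like:

```shell
python observation_framework.py --input <path-to-recording>/recording.mp4
```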
You MUST add the .mp4 extension to the file name.
where file specifies the path to the recording to be analysed:
If the session is recorded to a single file then specify the path and recording filename for the file parameter.
If the camera splits a single session's recording into multiple files then specify the path to the folder containing the recorded files. Note that:
The Observation Framework will analyse the recording and post the results to the Test Runner for viewing on Test Runner's results pages. Note that the observation processing may take considerable time, depending on the duration of the session recording.
When a selected test includes the observation "The presented sample matches the one reported by the currentTime value within the tolerance of the sample duration.", a CSV file containing observation data will be generated at logs/<session-id>/<test-path>_time_diff.csv. This file contains the current times and the calculated time differences between the current time and the media time.
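For a quick look at this data, a few lines of Python are enough; this sketch just prints each row and makes no assumption about the column layout beyond it being plain CSV:

```python
import csv

# Substitute the real session id and test path for the placeholders.
path = "logs/<session-id>/<test-path>_time_diff.csv"

with open(path, newline="") as f:
    for row in csv.reader(f):
        print(row)  # current time and current-time/media-time difference
```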
At the end of the process, the Observation Framework will rename the recording file to:
<file_name>_dpctf_<session-id>.<extension>
The Observation Framework can be run with specific modes enabled by passing optional arguments.
optional arguments:

--range {id(file_index):start(s):duration(s)}
    Search QR codes to crop the QR code area for better detection. QR code area detection covers both the mezzanine QR codes and the Test Status QR code.
--log debug
    Logging level to write to the log file.
--scan intensive
    Scan depth for QR code detection.
--mode debug
    System mode, for development purposes only.
--ignore_corrupted video
    Specific condition to ignore, to support recording devices that produce corrupted video or audio.
The range argument is optional for video-only tests. However, when the first test is an audio-only test, it is important to set the scan range so that the process can find the mezzanine QR code area correctly for mixed video-and-audio tests. Setting the range is also useful to speed up processing when observing audio-only tests. The range argument requires three values separated by ":": {id(file_index):start(s):duration(s)}.
For example, --range 0:20:2 scans for the QR code area in the first recording file, starting the scan at 20 seconds and ending it at 22 seconds if the QR code area is not detected.
The log argument specifies the logging level; the default value is "info" when not specified. When --log debug is selected, the full QR code detection is extracted to a CSV file at logs/<session-id>/qr_code_list.csv, and the system writes more information to the terminal as well as to a log file at logs/<session-id>/session.log. This includes information such as the decoding of QR codes:
The scan argument specifies the scan method to be used; the default value is "general" when not specified. --scan intensive makes QR code recognition more robust by adding an adaptive-threshold scan; however, this increases processing time. This option is intended for situations where it is difficult to take clear recordings, such as testing on small-screen devices like mobile phones.
The mode argument specifies the Observation Framework processing mode, which can be set to debug. In debug mode the observation process reads the configuration files from the configuration folder and saves observation results locally instead of importing them back to the Test Runner. Running in debug mode is useful when debugging a recording taken by someone else without a Test Runner, or when debugging a previous recording whose test id is no longer valid for the current Test Runner set-up. More detailed instructions can be found here.
Although its use is not recommended, the ignore_corrupted argument specifies a special condition to be ignored by the Observation Framework. This feature was added to work around some cameras that produce corrupted captures. When --ignore_corrupted video is set, the Observation Framework ignores corrupted recording frames and carries on reading subsequent frames instead of ending the process early. The impact of using this option on audio testing is to be confirmed; it might cause the audio tests and the A/V sync test to fail.
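Putting these options together, a debug-level analysis with an intensive scan and an explicit scan range might be launched as follows (again, the observation_framework.py entry point and --input parameter are assumptions; check the README for the exact invocation):

```shell
python observation_framework.py --input recording.mp4 --log debug --scan intensive --range 0:20:2
```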
The Observation Framework operates on camera recordings of the device's screen whilst running tests under the Test Runner. These test materials contain QR codes which must be accurately captured for the Observation Framework to interpret.
NOTE Audio observations are not in scope for the initial release; however, Section 9 tests have already been generated with the correct audio content. It is recommended that users DO NOT capture audio, either by turning off audio recording on the camera or by muting the device before recording a test. Observation results will show "NOT_RUN" in this case. However, when the correct audio is recorded along with the video, the Observation Framework processes audio observations, and the observation results will show either "PASS" or "FAIL".
For the Phase 1 Observation Framework a variety of cameras should be suitable. (NOTE: this will not be the case for Phase 2 which will likely require very specific camera model(s)... TBD.)
The camera requirements are:
The set-up needs to be in a light-controlled environment, with the camera configured to record high-quality footage that allows consistent QR code detection. It is highly unlikely that simply setting up the equipment on a desk in a standard office environment will produce satisfactory results!
More detailed guidance and example videos are available in the document how_to_take_clear_recordings.pptx.
For the camera/device set up:
Once the camera/device are set up, DO NOT change or alter settings during the recording. If changes are necessary, then a new recording shall be taken.
Note: Minimizing the time between the start of the recording and the appearance of the pre-test QR code helps the Device Observation Framework process faster and deliver test results sooner.
The QR codes outlined in GREEN were successfully decoded. Those outlined in RED failed to be decoded:
For the initial set-up, we recommend that the user try running a sequential track playback test.
From Test Runner select and run the "/
Once the recording is taken, the following steps should be followed to verify the camera setup and the recording:
At camera frame N there were X consecutive camera frames where no mezzanine QR codes were detected. Device Observation Framework is exiting, and the remaining tests are not observed.
The above steps can be repeated, if necessary, to find the best set-up for the selected device and camera. For small-screen devices, such as mobile phones, it is more difficult to find a good set-up. A better camera or a better lens, such as a micro lens that can capture small details, might be required for testing on smaller-screen devices.
How to verify your camera, and recording instructions for a combined audio and video synchronization test can be found here.
Check that Test Runner is installed and running without problems and that it is visible to the Observation Framework.
If a large number of expected frames containing QR codes are missing, this indicates that something is seriously wrong, and the Observation Framework will terminate the session analysis with an error result. (The threshold for this can be set in the "config.ini" file with the 'missing_frame_threshold' parameter.)
If this occurs, check the quality of the recorded video. Ensure that the camera/device set up instructions described earlier have been followed.
More information about Debugging Observation Failures can be found here.
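For reference, such a setting would live in "config.ini"; the sketch below is illustrative only (the section name and the value shown are assumptions, not the shipped defaults):

```ini
[GENERAL]
; abort the analysis after this many consecutive frames without QR codes (illustrative value)
missing_frame_threshold = 10
```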
When new tests are added to the dpctf-tests repository, support for these will also need to be added to the Observation Framework. The scale of changes required will depend on how much the new tests diverge from existing tests.
For example, the 'playback-of-encrypted-content' test uses the same Observation Framework test code and observations as the (unencrypted) 'sequential_track_playback' test. To add such a test simply requires adding a new test name mapping to the existing test module and class name in the "of_testname_map.json" file. For example:
"playback-of-encrypted-content.html": [
"sequential_track_playback",
"SequentialTrackPlayback"
],
For example, the 'regular-playback-of-a-cmaf-presentation' test uses the same test logic as the 'sequential_track_playback' test. However, it requires a different list of observations. Create a new test code module containing a new test class derived from an existing test. Then override the method to provide the correct list of observations, for example:
class RegularPlaybackOfACmafPresentation(SequentialTrackPlayback):
    def _init_observations(self) -> None:
        self.observations = [...]
Add the new test name, python module, and class name to the "of_testname_map.json" file.
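For instance, the corresponding mapping entry might look like the following (the module file name shown is an assumption based on the naming pattern above):

```json
"regular-playback-of-a-cmaf-presentation.html": [
    "regular_playback_of_a_cmaf_presentation",
    "RegularPlaybackOfACmafPresentation"
],
```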
For example, the 'random-access-to-time' test uses the same observations as the 'sequential_track_playback' test. However, it requires different parameters to be passed. Create a new test code module containing a new test class derived from an existing test with the same observations. Then override the appropriate methods to provide the correct parameters for the observations, for example:
class RandomAccessToTime(SequentialTrackPlayback):
    def _init_parameters(self) -> None:
        [...]

    def _get_first_frame_num(self, frame_rate: Fraction) -> int:
        [...]

    def _get_expected_track_duration(self) -> float:
        [...]
Add the new test name, python module, and class name to the "of_testname_map.json" file.
Create new observation class(es), either derived from an existing observation if an appropriate one exists, or derived from the 'Observation' base class. Override methods as needed and provide a make_observation() method. For example:
class EverySampleRendered(Observation):
    def __init__(self, global_configurations: GlobalConfigurations, name: str = None):
        [...]

    def make_observation(
        self,
        test_type,
        mezzanine_qr_codes: List[MezzanineDecodedQr],
        _unused,
        parameters_dict: dict,
    ) -> Dict[str, str]:
        [...]
Create a new test code module and class as described in (c) above. Implement a make_observations() method that calls the required observations and returns a list of pass/fail results. For example:
def make_observations(
    self,
    mezzanine_qr_codes: List[MezzanineDecodedQr],
    test_status_qr_codes: List[TestStatusDecodedQr],
) -> List[dict]:
    [...]
Add the new test name, python module, and class name to the "of_testname_map.json" file.
Installation and usage instructions (in this README).
Installation scripts for Linux shells and Windows batch file.
End-to-end Observation Framework functionality.
Analysis of multiple tests in one session recording.
Result reporting to Test Runner.
QR code based video tests implemented for:
White noise based audio tests implemented for:
NOTE No audio switching for tests 8.8, 8.9, 8.13, 8.14 and 9.4