This PR makes the following changes:

- Renames the existing Decord decoder kind to `DecordAccurate`, because its API calls use accurate seeking.
- Adds a new Decord decoder kind, `DecordAccurateBatch`, which uses Decord's batch APIs. We believe this API is also accurate. (The distinction is sketched in the first example after this list.)
- Adds a Decord benchmark kind to the README graph.
- Renames the existing `TorchCodecCore` decoder kind to `TorchCodecCoreNonBatch`.
- Adds the decoder kind `TorchCodecCore`. While it has the same name as a previous decoder kind, it uses the best core API for each scenario, so we can directly compare it to `TorchCodecPublic`. Any systematic difference between the two is likely caused by the logic in `VideoDecoder` itself.
- Removes all of the fine-grained calls to `timeit` inside the experiments. If we want that data, we should create separate experiments for it. In general, if we run something for N iterations and time how long the N iterations take, we cannot also time each of the N iterations: the cost of the fine-grained timers would be added to the overall time. So if we want fine-grained timers, we can't time the batch, and if we time the batch, we can't use fine-grained timers. (See the second example after this list.)
- Refactors `benchmark_decoders.py` so that we have a registry of decoder kinds, and we consult that registry to know what to run. This eliminates a lot of bespoke logic, and adding a new decoder kind is now easy: add a new entry to the registry, and the rest of the code just works. As a bonus, this unifies how options for decoder kinds are specified and added. (See the third example after this list.)
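To illustrate the per-frame vs. batch distinction, here is a minimal sketch assuming Decord's `VideoReader` surface; `"video.mp4"` is a placeholder, and this is not the benchmark code itself:

```python
import decord

vr = decord.VideoReader("video.mp4")  # placeholder path

# DecordAccurate-style: one API call per frame, with accurate seeking.
vr.seek_accurate(0)
frames = [vr.next() for _ in range(10)]

# DecordAccurateBatch-style: one call for all frames via the batch API.
batch = vr.get_batch(list(range(10)))
```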
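To make the timing trade-off concrete, here is a minimal sketch of the two mutually exclusive strategies; `decode_one_frame` is a hypothetical stand-in for the real per-iteration work:

```python
import timeit

def decode_one_frame():
    pass  # hypothetical stand-in for the work being measured

N = 100

# Strategy 1: time the whole batch. The timer is read only twice for
# the entire run, so timer overhead is negligible relative to the total.
batch_seconds = timeit.timeit(decode_one_frame, number=N)

# Strategy 2: time each iteration. The timer is now read 2*N times
# inside the measured region, so summing these samples inflates any
# batch total derived from them.
per_iteration_seconds = [
    timeit.timeit(decode_one_frame, number=1) for _ in range(N)
]
```

Summing the Strategy 2 samples does not reproduce Strategy 1, because the sum includes the per-call timer overhead; that is why the fine-grained `timeit` calls were removed from the batched experiments.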
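The sketch below illustrates the registry pattern under assumed names; `DecoderKind`, `decoder_registry`, and the factory functions are hypothetical, and the real registry in `benchmark_decoders.py` differs in detail:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class DecoderKind:
    display_name: str
    # Factory that turns a user-supplied options string into a runnable decoder.
    create_decoder: Callable[[str], object]
    default_options: str = ""

def make_decord_accurate(options: str):
    ...  # construct the Decord-based decoder with accurate seeking

def make_torchcodec_public(options: str):
    ...  # construct the decoder backed by the public VideoDecoder API

decoder_registry: Dict[str, DecoderKind] = {
    "decord_accurate": DecoderKind("DecordAccurate", make_decord_accurate),
    "torchcodec_public": DecoderKind("TorchCodecPublic", make_torchcodec_public),
}

# The benchmark driver needs no bespoke per-decoder logic: it looks up
# each requested kind in the registry and calls its factory.
def build_decoders(requested: str):
    decoders = {}
    for spec in requested.split(","):
        kind, _, options = spec.partition(":")
        entry = decoder_registry[kind]
        decoders[entry.display_name] = entry.create_decoder(
            options or entry.default_options
        )
    return decoders
```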
The following results were run with:

These are four different calls of the above:

Some observations:

1. The sampler-inspired experiments (random and uniform) are remarkably consistent across all decoders.
2. 1 next and 10 next are also remarkably consistent across all decoders.
3. 100 next is consistent across:
   a. `DecordAccurate`
   b. `DecordAccurateBatch`
   c. `TorchVision`
   d. `TorchCodecCoreBatch`
4. 100 next has remarkable variation across:
   a. `TorchAudio`
   b. `TorchCodecCoreNonBatch`
   c. `TorchCodecCore`
   d. `TorchCodecPublic`
5. `TorchCodecCore` is consistently slightly faster than `TorchCodecPublic`, which means we have an opportunity to shave some time off the logic in the public API.
6. While both `TorchCodecCore` and `TorchCodecPublic` vary across runs, they notably always move together within a run: if `TorchCodecCore` has a "good" run, then so does `TorchCodecPublic`. That suggests something systematic determines whether a run is "good" or not. Maybe something to do with how the video gets laid out in memory?
7. `TorchVision` is consistently the best performer in 100 next.