Breakthrough opened 6 days ago
The table of results looks promising and the approach is interesting. They haven't published their code yet (coming soon according to their homepage). Ideally, they could also release a pre-trained SVM model that could simply be imported and used.
The bit about this that I don't understand is how they are using temporal information. Looking through this, here is my understanding of how this algorithm works:
The `d_color` metric is simply the correlation between sequential frames' RGB histograms. This is the exact same metric used by `HistogramDetector` in PySceneDetect. `d_struct` is calculated by taking, for each pixel of the current frame, the maximum value of either the grayscale image or the image after a Canny edge filter (equation 3 in section 4.1). The result is then compared to the previous frame's equation 3 output by way of SSIM (I think `skimage` has an implementation of this).
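For concreteness, here is a rough sketch of my reading of those two metrics, using OpenCV and `skimage`. The histogram bin count and Canny thresholds are my own guesses, since the paper does not specify them:

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity

def d_color(prev_frame, curr_frame, bins=256):
    """Correlation between consecutive frames' RGB histograms.
    Per-channel histograms averaged together; binning is a guess."""
    scores = []
    for ch in range(3):
        h1 = cv2.calcHist([prev_frame], [ch], None, [bins], [0, 256])
        h2 = cv2.calcHist([curr_frame], [ch], None, [bins], [0, 256])
        scores.append(cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL))
    return float(np.mean(scores))

def structure_map(frame, canny_lo=100, canny_hi=200):
    """Equation 3 as I read it: the per-pixel max of the grayscale
    image and its Canny edge map. Thresholds are placeholders."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    return np.maximum(gray, edges)

def d_struct(prev_frame, curr_frame):
    """SSIM between the previous and current frames' structure maps."""
    return structural_similarity(structure_map(prev_frame),
                                 structure_map(curr_frame))
```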
What I don't understand is the temporal information. They write:
> Regarding temporal information, we hypothesize that video changes are relatively stable over time. By estimating a Gaussian distribution of changes from past frames, if the current frame’s change exceeds the 3σ confidence interval, we consider it a significant transition.
So, I get that they are looking backwards X number of frames, finding the standard deviation, and calculating the current frame's Z-score. However, are they calculating the Z-score for both of the above metrics? An average of the two? Some other metric? It isn't clear. Additionally, it doesn't seem like this temporal information is used by the SVM since they explicitly say that the SVM only takes the two parameters calculated above. So, is something marked as a scene if the SVM classifies it as such or the frame's Z-score exceeds 3? I am not sure how to incorporate the temporal information here.
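One possible reading is sketched below; the window length, the choice of which change value to track, and the OR-combination with the SVM are all my assumptions, not anything stated in the paper:

```python
from collections import deque
import numpy as np

class TemporalGate:
    """Sliding-window 3-sigma test on recent frame-change values.
    Window size and the tracked metric are assumptions on my part."""

    def __init__(self, window=60, sigmas=3.0):
        self.history = deque(maxlen=window)
        self.sigmas = sigmas

    def is_transition(self, change):
        flagged = False
        if len(self.history) >= 2:
            mu = np.mean(self.history)
            sd = np.std(self.history)
            # Flag if the current change falls outside the 3-sigma
            # interval estimated from the past window of frames.
            flagged = sd > 0 and abs(change - mu) > self.sigmas * sd
        self.history.append(change)
        return flagged

# How this combines with the SVM is the unclear part; a simple OR is
# one guess:  cut = svm_says_cut or gate.is_transition(1.0 - dc)
```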
This is an area that would be quite onerous to do on our own, so if they do release a model, it would be a tremendous help. When describing how their SVM is trained, they write (from 4.1):
> We treat image pairs from the same video source as negative examples and pairs from different video sources as positive examples.
I am assuming here that they are using their giant dataset for this. If I am trying to extract their data generation method from their very brief description, then it would be something like this:
1. Sample a pair of frames from the same (single-scene) video, compute their differences (`d_color` and `d_struct`), and label the pair as a negative example.
2. Sample a pair of frames from two different videos, compute their differences (`d_color` and `d_struct`), and label the pair as a positive example.

The same method could be used to generate your test data as well. This would only be possible with a curated dataset like theirs that consists of single-scene videos.
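If that reading is right, the pair generation plus training could look something like the sketch below with scikit-learn. The pair counts, sampling strategy, and SVC parameters (kernel, C) are all placeholders, since the paper specifies none of them; `d_color`/`d_struct` are from the earlier sketch:

```python
import random
import numpy as np
from sklearn.svm import SVC

def make_training_set(videos, pairs_per_class=1000):
    """videos: a list of frame lists, one per single-scene clip
    (hypothetical input format). Pair counts are arbitrary."""
    X, y = [], []
    for _ in range(pairs_per_class):
        # Negative: two frames from the same video (no transition).
        vid = random.choice(videos)
        a, b = random.sample(range(len(vid)), 2)
        X.append([d_color(vid[a], vid[b]), d_struct(vid[a], vid[b])])
        y.append(0)
        # Positive: frames from two different videos (a "transition").
        v1, v2 = random.sample(videos, 2)
        f1, f2 = random.choice(v1), random.choice(v2)
        X.append([d_color(f1, f2), d_struct(f1, f2)])
        y.append(1)
    return np.array(X), np.array(y)

# Kernel and regularization are unknowns; RBF with defaults is a guess.
X, y = make_training_set(videos)
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
```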
Their dataset seems to be a giant list of YouTube videos with timestamps denoting the start and stop points that segment which part of each video is included in the dataset.
If we wanted to train our own SVM on this dataset, it would be a huge task to reconstruct it from the YouTube URLs and timestamps. Additionally, they don't really give any insight into the SVM parameters. I am far from an ML expert, so having some additional information on any of the options used for the SVM would be helpful for replication.
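For what it's worth, if someone did attempt a reconstruction, yt-dlp's `--download-sections` flag can fetch just the annotated range of each video. The `(url, start, end)` record layout below is hypothetical and would need adapting to however their metadata is actually shipped:

```python
import subprocess

def fetch_clip(url, start, end, out_path):
    """Download only the [start, end] section of a YouTube video
    via yt-dlp. Timestamps are HH:MM:SS strings."""
    subprocess.run(
        ["yt-dlp", "--download-sections", f"*{start}-{end}",
         "-o", out_path, url],
        check=True,
    )

# e.g. fetch_clip("https://youtube.com/watch?v=...", "00:01:05",
#                 "00:01:40", "clip.mp4")
```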
> The table of results looks promising and the approach is interesting. They haven't published their code yet (coming soon according to their homepage). Ideally, they could also release a pre-trained SVM model that could simply be imported and used.
I'm curious what it would look like if we plotted the values for `d_color` and `d_struct` on a few videos to see if any obvious patterns emerge. If the data can be fitted by a typical kernel function, we may just need to get the coefficients correct. I also ran across the additive chi-squared kernel, which seems like a feature map that can be trained in linear time, but does require training. The scikit-image SSIM function is `skimage.metrics.structural_similarity`.
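Something like the sketch below (reusing the metric functions from earlier in the thread) would produce the per-frame traces I have in mind; purely illustrative:

```python
import cv2
import matplotlib.pyplot as plt

def plot_metrics(video_path):
    """Plot per-frame d_color / d_struct so we can eyeball whether
    cuts separate cleanly in this two-feature space."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    dc, ds = [], []
    while ok:
        ok, curr = cap.read()
        if not ok:
            break
        dc.append(d_color(prev, curr))   # from the earlier sketch
        ds.append(d_struct(prev, curr))  # from the earlier sketch
        prev = curr
    cap.release()

    fig, (ax1, ax2) = plt.subplots(2, sharex=True)
    ax1.plot(dc)
    ax1.set_ylabel("d_color")
    ax2.plot(ds)
    ax2.set_ylabel("d_struct")
    ax2.set_xlabel("frame")
    plt.show()
```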
> I am not sure how to incorporate the temporal information here.
This section is very unclear to me as well, and you raise some good questions. Hopefully they will publish some more information soon.
> I'm curious what it would look like if we plotted the values for `d_color` and `d_struct` on a few videos to see if any obvious patterns emerge.
I am pretty sure that `d_color` is literally the same metric that `HistogramDetector` already uses. Calculating `d_struct` doesn't look too bad, but it would probably mean adding a `skimage` dependency.
In doing some searching around about this, I also ran across CLIP. This is a pre-trained deep-learning transformer that can measure similarity between two images. I found a Stack Overflow answer that has a great explanation with some examples. This might be an alternative similarity metric to SSIM. It would also require new dependencies though, and I have no idea what the computational efficiency would be.
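For reference, a minimal sketch of a CLIP-based frame similarity using the `transformers` library might look like the following; the checkpoint choice is arbitrary and I have no idea how it compares to SSIM on speed:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Any CLIP checkpoint would do; this one is just a common default.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(img_a, img_b):
    """Cosine similarity between CLIP image embeddings of two frames
    (PIL images or RGB numpy arrays)."""
    inputs = processor(images=[img_a, img_b], return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return float(feats[0] @ feats[1])
```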
Koala-36M proposes a significantly improved model for scene transition detection (paper: HTML or PDF).
See section 4.1, which uses an SVM classifier. The performance degradation is just over a 2x slowdown; however, accuracy, precision, and recall show marked improvements across the board that likely warrant this change for the majority of users.