I am trying to adjust the code so that a track's features are stored at the end of a video and re-loaded in the next video. The crux is to keep it unsupervised.
My current plan was to:
Change a track's "Dead" status to "Saved", so that before a new track is initialized, the feature extractor checks the saved (dead) tracks' features and tries to link one of them to the otherwise unmatchable detection.
This would be an alternative to referencing track sequences image-to-video after the fact.
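To make the plan concrete, here is a minimal sketch of the "check saved tracks before initializing a new one" step. All names (`revive_or_init`, `saved_tracks`, the threshold) are my own illustrations, not from the tracker's code, and the cosine-distance threshold is an arbitrary placeholder:

```python
import numpy as np

def revive_or_init(detection_feature, saved_tracks, max_cosine_distance=0.3):
    """Try to re-link an unmatched detection to a previously saved track
    before creating a new one. `saved_tracks` maps track_id -> feature
    gallery, an (N, d) array of past features for that track."""
    best_id, best_dist = None, max_cosine_distance
    f = detection_feature / np.linalg.norm(detection_feature)
    for track_id, gallery in saved_tracks.items():
        g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        dist = float(np.min(1.0 - g @ f))  # closest cosine distance in gallery
        if dist < best_dist:
            best_id, best_dist = track_id, dist
    return best_id  # None means: fall back to initializing a fresh track
```

The caller would then reuse the returned track id when it is not `None`, and only otherwise allocate a new track.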
From my understanding of the code:
The first cost matrix is built by nearest-neighbor matching of the features against the targets (a function definition in the code).
It is then converted to a gated cost matrix (another function, using the gating_distance), which invalidates appearance matches that are implausible under the motion prediction.
The matching_cascade (together with the min_cost_matching above) then performs the assignment over the cost matrix.
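To check my understanding, the three steps above can be sketched roughly as follows. The function names and the gating mask are simplified stand-ins, not the exact signatures from the tracker:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

GATED_COST = 1e5  # large placeholder cost for gated-out pairs

def appearance_cost(track_galleries, detection_features):
    """Cosine distance between each track's feature gallery and each detection."""
    costs = np.zeros((len(track_galleries), len(detection_features)))
    for i, gallery in enumerate(track_galleries):
        g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        for j, feat in enumerate(detection_features):
            f = feat / np.linalg.norm(feat)
            costs[i, j] = np.min(1.0 - g @ f)
    return costs

def gate(cost_matrix, motion_infeasible):
    """motion_infeasible: boolean mask, e.g. from a Mahalanobis gating distance."""
    gated = cost_matrix.copy()
    gated[motion_infeasible] = GATED_COST
    return gated

def match(cost_matrix, max_cost=0.3):
    """Minimum-cost assignment over the gated matrix, dropping too-costly pairs."""
    rows, cols = linear_sum_assignment(cost_matrix)
    return [(r, c) for r, c in zip(rows, cols) if cost_matrix[r, c] <= max_cost]
```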
For this implementation to work I had a few questions:
Is it possible to adjust the weights/costs required to create a new track, so that the matcher prefers re-using a "Saved" track over initializing a new one?
Could the feature extractor's output be summarized, collapsing the feature list to a few key features (e.g. front, side, back views)? Alternatively, a detector that classifies the subject's orientation would also work.
How can I increase the feature extractor's output dimension from 128 to something larger?
Is there another filter/matcher that would be more suitable for such a task?
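For the core goal of persisting features across videos, this is the kind of storage scheme I had in mind (my own suggestion, not part of the tracker; `.npz` is just a convenient container):

```python
import numpy as np

def save_track_features(tracks, path):
    """Persist each track's feature gallery at the end of a video.
    `tracks` maps an integer track id to an (N, d) feature array."""
    np.savez(path, **{f"track_{tid}": feats for tid, feats in tracks.items()})

def load_track_features(path):
    """Re-load the galleries at the start of the next video."""
    with np.load(path) as data:
        return {int(name.split("_")[1]): data[name] for name in data.files}
```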
It is possible to do this, though a "saved" track is usually referred to as a "lost" track. However, I doubt this alone will give you the functionality you are looking for. It is preferable to have four states: new, tracked, lost, and removed. Tracks that are "tracked" take precedence, then "lost", and finally "new".
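A minimal sketch of that state precedence (the names loosely follow trackers that use this scheme; the dict-based track representation is just for illustration):

```python
from enum import IntEnum

class TrackState(IntEnum):
    # Lower value = higher matching priority
    TRACKED = 0
    LOST = 1
    NEW = 2
    REMOVED = 3

def matching_order(tracks):
    """Sort candidate tracks so 'tracked' are matched first, then 'lost',
    then 'new'; 'removed' tracks are excluded entirely."""
    return sorted(
        (t for t in tracks if t["state"] != TrackState.REMOVED),
        key=lambda t: t["state"],
    )
```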
I do not know enough about the feature extractor here.
You can do this by using a different network model; each model has a predetermined feature size.
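To be clear, the proper way to change the feature size is to retrain the network with a wider final layer; you cannot just reshape the output. The sketch below only illustrates the dimensionality change with a plain linear projection standing in for that retrained layer (all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_projection(in_dim=128, out_dim=256):
    # Stand-in for a retrained final layer mapping 128-d -> 256-d features.
    return rng.normal(size=(in_dim, out_dim)) / np.sqrt(in_dim)

def embed(features, W):
    # Project and L2-normalize, as appearance features usually are.
    out = features @ W
    return out / np.linalg.norm(out, axis=1, keepdims=True)
```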
A minimum-cost perfect matching is "perfect" by definition: every row is matched to a column, and the sum of the weights over the matched pairs is minimal.
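This means that if you want "saved" tracks preferred over creating new ones, you can encode the preference directly in the costs, e.g. by giving each detection a "new track" option with a fixed penalty. A sketch using SciPy's `linear_sum_assignment` (the penalty value and the large sentinel cost are arbitrary placeholders):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9  # effectively forbids a detection taking another detection's new-track slot

def assign_with_new_track_penalty(cost_to_saved, new_track_cost=0.5):
    """cost_to_saved: (n_detections, n_saved_tracks) appearance distances.
    Each detection also gets its own 'new track' column at a fixed cost,
    so a saved track wins whenever its cost is below that penalty."""
    n_det, n_saved = cost_to_saved.shape
    new_cols = np.full((n_det, n_det), BIG)
    np.fill_diagonal(new_cols, new_track_cost)
    full = np.hstack([cost_to_saved, new_cols])
    rows, cols = linear_sum_assignment(full)
    # col < n_saved -> revived saved track; otherwise a brand-new track (None)
    return [(r, c if c < n_saved else None) for r, c in zip(rows, cols)]
```

Raising `new_track_cost` makes the matcher re-use saved tracks more aggressively; lowering it makes new tracks cheaper.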