Open ecatanzani opened 1 year ago
Hi @ecatanzani, yes, in order for it to work, this while loop needs to be moved out of the forward function: https://github.com/facebookresearch/co-tracker/blob/4f297a92fe1a684b1b0980da138b706d62e45472/cotracker/models/core/cotracker/cotracker.py#L264
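To illustrate the restructuring (this is a hypothetical sketch, not the actual CoTracker API): instead of looping over windows inside `forward`, a caller-side buffer can collect frames from a live stream and hand a full window to the model each time one is ready. The class name, `window_len`, and `stride` below are all made up for illustration:

```python
from collections import deque

# Hypothetical sketch: move the sliding-window loop out of `forward` so
# frames can be pushed one at a time from a live stream. Window length
# and stride are placeholders; they would mirror the model's own window
# size and overlap.
class StreamingWindow:
    def __init__(self, window_len=8, stride=4):
        self.window_len = window_len          # frames per window
        self.stride = stride                  # frames to advance between windows
        self.buffer = deque(maxlen=window_len)
        self._since_last = 0

    def push(self, frame):
        """Add one incoming frame; return a full window when one is ready."""
        self.buffer.append(frame)
        self._since_last += 1
        if len(self.buffer) == self.window_len and self._since_last >= self.stride:
            self._since_last = 0
            return list(self.buffer)          # hand this window to the model
        return None
```

With something like this, frames read from `cv2.VideoCapture(0)` could be pushed one by one, and the model's `forward` would be called once per returned window.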
We will try to add this feature soon
Hi, thanks for your fast reply. I'm struggling to implement the modifications. If you manage, could you please share example code that feeds the tracker frames from an input stream (for example, from cv2.VideoCapture(0))? Thanks again for your great work.
@ecatanzani I tried the suggested solution here: https://github.com/facebookresearch/co-tracker/compare/main...whymatter:co-tracker:streaming. See the limitations in the Readme.
Hi @nikitakaraevv, I'm curious about the progress of adding these features as mentioned in the reply. Could you possibly provide an estimated timeline for implementation? Thanks for your work!
Hi @Danie1Nash, we are planning to release an updated version of CoTracker in late November. It will also support loading chunks of videos.
@nikitakaraevv It would be nice if the new version let us enable/control a strategy that automatically allocates new points/grids for newly appearing content on the fly. As new content appears and old content disappears, tracked points get decimated over time, so it would be great to have a strategy that automatically seeds new points in newly uncovered areas (optionally restricted by an additional mask).
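One possible shape for such a strategy (purely hypothetical, not part of CoTracker): divide the frame into a coarse grid, count currently-visible tracked points per cell, and propose the centers of empty cells as new query points. The function and its parameters below are invented for illustration:

```python
# Hypothetical re-seeding strategy (not part of CoTracker): split the frame
# into a coarse grid, mark cells containing a visible tracked point, and
# return the centers of empty cells as candidate new query points.
def reseed_points(points, visible, width, height, grid=4):
    """points: list of (x, y); visible: parallel list of bools.
    Returns (x, y) centers of grid cells with no visible point."""
    cw, ch = width / grid, height / grid
    occupied = set()
    for (x, y), vis in zip(points, visible):
        if vis and 0 <= x < width and 0 <= y < height:
            occupied.add((int(x // cw), int(y // ch)))
    return [
        ((i + 0.5) * cw, (j + 0.5) * ch)
        for i in range(grid)
        for j in range(grid)
        if (i, j) not in occupied
    ]
```

A mask, as suggested above, could be applied by filtering the returned centers before adding them as queries.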
Hello, it is now the end of November. May I know when the version supporting live video will be released?
Hi @supengufo, I'm working to release it as soon as possible :) I need a few more days to get everything ready.
Any news on this?
Would love an update! I'm planning on incorporating this into a project and am considering how I would cut up live video to have object tracking throughout several contiguous clips.
Hi @bhack, @alejandroarmas, @supengufo, we found some bugs along the way and had to retrain the model several times before the release. The new model is finally available and it supports live video in online mode, please see https://github.com/facebookresearch/co-tracker?tab=readme-ov-file#online-mode and the online demo. Let me know if this helps or if you have any other questions!
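For anyone following along, the online mode linked above processes the stream in overlapping chunks (windows of `2 * step` frames, advancing by `step`). Here is a small helper sketching that chunking; the predictor call is only indicated in a comment, since the exact API should be checked against the README and the installed version:

```python
# Sketch of how online mode consumes a stream in overlapping chunks,
# mirroring the sliding-window loop described in the README: each window
# covers 2 * step frames and the start index advances by `step`.
def online_chunks(num_frames, step):
    """Yield (start, end) frame ranges for the online sliding windows."""
    for start in range(0, num_frames - step, step):
        yield start, min(start + 2 * step, num_frames)

# Per the README (verify against your installed version), each chunk would
# then be fed to the online predictor, e.g.:
# for start, end in online_chunks(video.shape[1], model.step):
#     pred_tracks, pred_visibility = model(video_chunk=video[:, start:end])
```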
Thanks, was the bug also in the original version?
No, it was related to Virtual Tracks introduced in the new version. We also fixed a lot of small things, such as coordinate mappings when the image or feature resolution changes. This did not really affect the performance of the model.
Do you think it also has an impact on https://github.com/facebookresearch/co-tracker/issues/15?
Hi @bhack, I think the model still works well only with the training resolution unfortunately.
@bhack proposed a great suggestion to dynamically allocate new points on the fly. That's very important for practical applications. Have you considered supporting that? @nikitakaraevv
Hi @nnop, yes! We're currently working on multiple ideas at the same time, including this one.
That's great! Is there a rough timeline we could expect?
We are still researching it, so we can't really promise anything yet.
Hi, I was wondering if there is an update on this idea of dynamically allocating new points?
Thanks!
Hi, is there any possibility of porting the current CoTracker capabilities to live video (e.g., a camera stream)? According to the paper, tracking is computed iteratively by splitting the input video into sub-windows; I'm wondering whether applying this scheme to live video has been considered.
Thanks for your work