-
Dear author,
Thank you very much for your great work!
I want to ask where I can find the Supplementary Materials referenced in your main paper. I have been trying to locate them, but I cannot find them in you…
-
I am currently participating in the challenge you are hosting and trying to submit the submission.json file to CodaLab, but it reports that the JSON file is an invalid file. So I zipped the file directly and resub…
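For reference, this is roughly how I packaged the file before resubmitting; the filenames are placeholders from my own setup, and I am assuming the scoring program expects submission.json at the root of the archive:

```python
import zipfile

# Sketch of my packaging step (assumption: CodaLab wants the JSON at the
# top level of the zip, with no enclosing folder).
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("submission.json", arcname="submission.json")
```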
-
https://yangyzzzz.github.io/post/866b08a7.html
Please message me privately if you would like to exchange blog links.
-
Hi @csuhan,
I am trying to reproduce the results presented in Table 4 of the OneLLM paper (CVPR 2024). While I was able to reproduce the results on the MUSIC-AVQA dataset, I am struggling to achiev…
-
You said that you would release the code after the paper was published. Could you share it now?
-
Dear authors:
Thanks for your wonderful work! I wonder if and when the data you used to train the model will be packaged and open-sourced. From your paper, it appears to be large and carefully organized:
…
-
PUDD: Towards Robust Multi-modal Prototype-based Deepfake Detection
https://arxiv.org/abs/2406.15921
-
### Description
The download link for the base subdataset in the `ChesapeakeCVPR` dataset returns a 403. Permalink to the affected code:
https://github.com/microsoft/torchgeo/blob/05ea5138fd9bbf995a6c51505c…
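For reference, a minimal sketch of how I hit the 403; the root path is a placeholder and I am assuming the default constructor arguments otherwise:

```python
# Triggering the download of the base subdataset, which currently fails
# with HTTP 403 (paths below are placeholders, not from my actual setup).
from torchgeo.datasets import ChesapeakeCVPR

ds = ChesapeakeCVPR(root="data/chesapeake", download=True, checksum=True)
```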
-
Congratulations on your work being accepted at CVPR 2024.
I'm interested in your work; when do you plan to open-source the code? :)
-
Can you provide your code for the video feature extraction? Nice work at CVPR 😄