-
Thank you for the great repository! Could you add our CVPR 2023 paper that applies image-metadata pretraining to zero-shot image splice detection?
Paper name: EXIF as Language: Learning Cross-Modal…
-
Could you add our CVPR 2024 paper about vision-language pretraining, "Iterated Learning Improves Compositionality in Large Vision-Language Models", to this repo?
Paper link: https://arxiv.org/abs/…
-
# [CVPR 2024][Highlight] HOLD paper review - Jungwoan's Development Diary
HOLD: Category-agnostic 3D Reconstruction of Interacting Hands and Objects from Video
https://on-jungwoan.github.io/dl_paper/hold/
-
Hi! I am currently working on a project about scene understanding and have read your ICCV 2019 paper on 3D Scene Graph. It was quite impressive and appealing. I noticed that this work was built upon y…
-
Dear authors:
Thanks for your wonderful work! I wonder if and when the data you used to train the model will be packaged and open-sourced, since from your paper it appears to be large and carefully organized:
…
-
PUDD: Towards Robust Multi-modal Prototype-based Deepfake Detection
https://arxiv.org/abs/2406.15921
-
First of all, thank you for releasing the code for your CVPR paper.
I am studying depth estimation. Could I get the code for the MSL depth estimation part of the paper?
Thank you in advance for your r…
-
Dear author
Thank you very much for your great work!
Could you tell me where I can find the Supplementary Materials referenced in your main paper? I have been looking for them but cannot find them in you…
-
With the latest Typst v0.12 update, it should now be possible and straightforward to insert a teaser figure above the abstract for the CVPR template, as this version supports multi-column floating fig…
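For reference, here is a minimal sketch of what that could look like, assuming Typst v0.12's parent-scoped floating placement in a two-column layout; the file name `teaser.png` and the caption text are placeholders, not part of the official template:

```typst
// Hedged sketch: a full-width teaser floated above a two-column body.
// Assumes Typst v0.12; "teaser.png" and the caption are placeholders.
#set page(columns: 2)

#place(
  top,               // floating placement must be anchored top or bottom
  scope: "parent",   // span both columns (placed relative to the page)
  float: true,       // float above the column content instead of inlining
  figure(
    image("teaser.png", width: 100%),
    caption: [Teaser figure spanning the full page width above the abstract.],
  ),
)
```

With `scope: "parent"` the float escapes the two-column flow, which is what a teaser above the abstract needs; the CVPR template itself would presumably wire this into its title/abstract block.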