-
A list of resources to be kept with the codebase that shouldn't be under version control
-
Hello!
I read your CVPR paper "Generative Flows with Invertible Attentions" and it is really impressive work on normalizing flows.
Now I am trying to study and implement your work, …
-
Hello! We'd like to introduce our CVPR 2023 paper "Query-Dependent Video Representation for Moment Retrieval and Highlight Detection", which addresses cross-modal moment retrieval.
Code: https://…
-
Thanks for sharing the paper and code. It is great work.
I noticed your work is closely related to MATE (Masked Autoencoders are Online 3D Test-Time Learners, ICCV'23) and BFTT3D (Backpropagation-f…
-
https://github.com/NVlabs/RADIO
The code and model weights for the paper *[CVPR 2024] AM-RADIO: Agglomerative Vision Foundation Model - Reduce All Domains Into One* have been released by NVIDIA
> RADI…
-
When I was developing a semi-supervised detection library, I followed your work closely. I was sorry to see that your paper was withdrawn from ICLR, but congratulations on getting it accepted at CVPR!…
-
Hi @GeorgeCazenavette
I hope all is well. I am wondering if it would be possible for you to upload the Torch tensors containing the distilled dataset for the GLaD paper (CVPR 2023) distillation me…
-
We see that there is an SE block in Figure 2 of the CVPR 2019 paper, but it does not appear in this repo's figure, nor in 'resnetDLAS_A' in resnet_caffe_DLAS.py. I wonder if the performance shown in the paper's Table 3 …
-
Thanks @shallowdream204 for sharing the final checkpoints of SwinIR (#10)!
I have a few follow-up questions, though:
- Please share the reproducible training code for the proposed self-training scheme…
-
I wonder if it would be okay for you to release the training and evaluation process and also the checkpoint? For my part, when I train, I just train from scratch without adding the OpenScene or LLaMA checkpoi…