-
Hi, I'd like to request adding these two papers:
SVFormer: Semi-supervised Video Transformer for Action Recognition, [pdf](https://arxiv.org/abs/2211.13222)
Zhen Xing, Qi Dai, Han Hu, Jingjing Chen, Zuxuan W…
-
### Data Owner Name
FileGuard
### Data Owner Country/Region
China
### Data Owner Industry
Web3 / Crypto
### Website
https://fileguard.io
### Social Media
```text
N/A
```
### Total amount of…
-
We would like to introduce a new paper regarding novel class discovery (open set recognition).
1. Difficulty-Aware Simulator for Open Set Recognition (ECCV 2022)
The arXiv version is available:
[h…
-
Hi, Thank you for your excellent work.
I am curious about the prompt templates (e.g., 'a photo of a {}') used for the tags during training. Are these templates similar to those utilized in CLIP? However, t…
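For context, CLIP-style zero-shot classification typically fills a set of natural-language templates with each class tag and averages over the resulting prompts. A minimal sketch of that filling step is below; the specific template strings are illustrative assumptions, not the templates actually used in the work being asked about.

```python
# Illustrative CLIP-style prompt templates. These particular strings are
# assumptions for the sketch; CLIP's published template list is longer.
TEMPLATES = [
    "a photo of a {}.",
    "a photo of the {}.",
    "a blurry photo of a {}.",
]

def build_prompts(tag: str) -> list[str]:
    """Fill each template with a class tag to produce text prompts."""
    return [template.format(tag) for template in TEMPLATES]

print(build_prompts("cat"))
# → ['a photo of a cat.', 'a photo of the cat.', 'a blurry photo of a cat.']
```

In the full pipeline, each prompt would be passed through the text encoder and the resulting embeddings averaged to form one classifier weight per tag.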
-
Hi @hzwer,
Could you add the following entry?
ECCV 2022 (Oral), [Secrets of Event-based Optical Flow](https://github.com/tub-rip/event_based_optical_flow), stars 71 (as of May 2023)
This is…
-
I noticed there is another model called X-CLIP by Yiwei Ma et al., [arXiv:2207.07285](https://arxiv.org/abs/2207.07285).
Their paper was submitted to arXiv in July 2022, while your paper (Bolin Ni …
-
When I view your paper on arXiv, I can't find the supplementary material.
![image](https://user-images.githubusercontent.com/96899552/199146583-c576a857-df7b-4043-bc0a-4ffde243736d.png)
I want to know the thre…
-
Thank you for your great work and for sharing it!
I would like to run your code on the COCO dataset.
Do you use the same hyperparameters for COCO training as for VOC?
If not, can you share you…
-
In README:
> The PyTorch implementation of [Multimodal Transformer for Automatic 3D Annotation and Object Detection](https://arxiv.org/abs/2207.09805), which has been accepted by ECCV2023.
I belie…