-
Hi, can you give me instructions on how to set up my own custom dataset? I have about 100 folders of images, with each folder containing the frames from a video clip. I also have captions for each…
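The question above is cut off, but for the layout it describes (one folder of frames per clip, plus a caption per clip), a minimal loading sketch might look like the following. The `ClipFolderDataset` class, the `data/clips/<clip_id>/` layout, and the `captions.json` file are hypothetical names used only for illustration, not part of this repo.

```python
# Hypothetical sketch: load one folder of frames per clip together with its caption.
# Assumes a layout like  data/clips/<clip_id>/*.jpg  and a captions.json mapping
# clip_id -> caption; both names are made up for illustration.
import json
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class ClipFolderDataset(Dataset):
    def __init__(self, root, caption_file, transform=None):
        self.root = Path(root)
        self.captions = json.loads(Path(caption_file).read_text())
        self.clip_ids = sorted(self.captions.keys())
        self.transform = transform

    def __len__(self):
        return len(self.clip_ids)

    def __getitem__(self, idx):
        clip_id = self.clip_ids[idx]
        frame_paths = sorted((self.root / clip_id).glob("*.jpg"))
        frames = [Image.open(p).convert("RGB") for p in frame_paths]
        if self.transform is not None:
            frames = [self.transform(f) for f in frames]
        return frames, self.captions[clip_id]
```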
-
# Notary Application
## Core Information
- Name: James Hoang
- Affiliated organization: PiKNiK
- On-chain address to be notarized (recommend using a new address): f1kqdiokoeubyse4qpihf7yrpl7czx4…
-
This is a: **BUG**
## Details
Have you seen this already? Everything seems to be working fine, but this feels like something might be wrong:
```
Duplicate config variable in conditional 3 global /…
-
Hi,
I have followed the latest commands to generate clip_annotations.json and run training. During inference on the validation set, I encountered quite a few of these warnings:
```
Infer.py:145: Runti…
-
I think SpaceTimeTransformer can be used to extend the CLIP model to process videos.
For example, 'openai/clip-vit-base-patch32' is based on a text_transformer and a ViT backbone.
I am trying th…
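The post above is truncated, but as a rough sketch of the idea it raises: the two towers of 'openai/clip-vit-base-patch32' can be inspected through Hugging Face `transformers`, and a video extension would swap the vision tower for a space-time encoder. The replacement step is only indicated in a comment below and is an assumption, not this repo's actual integration:

```python
# Sketch only: inspect the two towers of 'openai/clip-vit-base-patch32'.
# Replacing the vision tower with a space-time (video) transformer is only
# described in the final comment; it is an assumption, not this repo's code.
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

print(model.config.text_config.hidden_size)    # text transformer width (512 for this checkpoint)
print(model.config.vision_config.hidden_size)  # ViT-B/32 width (768 for this checkpoint)
print(model.config.projection_dim)             # shared embedding size of the two towers

# A video extension would keep model.text_model and model.text_projection,
# and replace model.vision_model with a space-time transformer whose pooled
# output is projected back to model.config.projection_dim.
```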
-
## Announcements
- If you are a currently enrolled student and are interested in the PaddlePaddle community's remote internship program, you are very welcome to apply for a remote internship based on this material: [Baidu PaddlePaddle Framework Internship Program](https://github.com/PaddlePaddle/community/blob/master/contributors/paddle_contributor_remote_intern_program.pdf). At 19:00 on the evening of March 3rd, we will…
-
Hi,
I would like to ask what the relation is between your proposed cross-frame attention and the ones in IFC [1] and TEViT [2]; as far as I can tell, neither of these papers is cited. In addition, the text to…
-
### News
- Conferences
  - Interspeech 2022: Notification: Congratulations, everyone. See you in Songdo!
  - CVPR 2022: It finally starts today!!! Please stop by our booth! (6.19 ~ 24)
  - Naver's presentation schedule (17 presentations): https://naver-career.gitbook.io/e…
-
Hello, I am very interested in your research.
In the evaluation method, after you sort the predicted scores you do not use them any further; you only use the real [labels](https://github.com/TencentARC…
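For context on why the sorted scores can be discarded afterwards: in the usual ranking-metric pattern, the scores only fix the ordering, and the metric itself is then computed from the ground-truth labels in that order. A hedged sketch of this pattern (average precision here, not necessarily this repository's exact evaluation code):

```python
# Illustrative only: predicted scores are used solely to order the samples;
# the metric value is then computed from the ground-truth labels in that order.
import numpy as np


def average_precision(scores, labels):
    order = np.argsort(-np.asarray(scores))    # sort by descending score
    labels = np.asarray(labels)[order]         # labels re-ordered; scores no longer needed
    hits = np.cumsum(labels)                   # number of positives seen so far
    precision_at_k = hits / (np.arange(len(labels)) + 1)
    return float((precision_at_k * labels).sum() / max(labels.sum(), 1))


print(average_precision([0.9, 0.2, 0.75, 0.4], [1, 0, 1, 0]))  # 1.0: both positives ranked first
```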
-
Hello, when I have finished ***train***, how do I ***test*** or ***eval*** it? I ran into this issue. Can you help me?
```
VIDIOC_REQBUFS: Inappropriate ioctl for device
Traceback (most recent call last):
F…
```
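The traceback above is cut off, but `VIDIOC_REQBUFS: Inappropriate ioctl for device` is the message Linux V4L2 prints when something (commonly OpenCV's `cv2.VideoCapture`) tries to use a path or device as a capture camera. A hypothetical sanity check, assuming the eval script reads video through OpenCV (the file name below is made up):

```python
# Hypothetical diagnostic: VIDIOC_REQBUFS errors usually come from V4L2 when
# OpenCV is asked to open something that is not a working capture device.
# Checking whether the source opens at all is a quick sanity test; the
# "my_clip.mp4" path below is made up for illustration.
import cv2

source = "my_clip.mp4"   # a video file path, not a camera index like 0
cap = cv2.VideoCapture(source)
if not cap.isOpened():
    raise RuntimeError(f"Could not open video source: {source!r}")
ok, frame = cap.read()
print("first frame read:", ok, None if frame is None else frame.shape)
cap.release()
```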