-
https://github.com/Hzzone/PseCo/blob/e68019ae6ec3c5401f62f2553721915d4325c335/fscd_lvis/4_1_train_roi_head.py#L79
Hi, it seems that wandb.init() has no 'print_freq' attribute?
Is there any w…
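One common cause (a hedged sketch, not the repo's actual code): the training loop may expect `print_freq` on the parsed CLI arguments rather than on the object returned by `wandb.init()`. Defining it as a command-line argument, with a `getattr` fallback, sidesteps the missing attribute:

```python
import argparse

# Hypothetical sketch: expose print_freq as a CLI argument so the
# training loop reads it from args rather than from the wandb run object.
parser = argparse.ArgumentParser()
parser.add_argument("--print_freq", type=int, default=50,
                    help="log every N iterations")
args = parser.parse_args([])  # empty list here: use the defaults

# Fall back to a default if the attribute is absent for any reason.
print_freq = getattr(args, "print_freq", 50)
print(print_freq)  # 50
```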
-
Hello,
could you please give the code to reproduce the results on LVIS minival?
Thanks,
-
- https://arxiv.org/abs/2108.06753
- 2021
Object proposals have become an integral preprocessing step in many vision pipelines, including object detection, weakly supervised detection, object discovery, and tracking.
Compared with learning-free methods, learning-based proposals have recently grown in popularity, driven by the rising interest in object detection. …
e4exp updated
3 years ago
-
# Summary
CLIP (CLIP ViT) is very good at zero-shot image classification, but it has been hard to apply to dense prediction tasks: because the alignment is performed with image-level representations, the text is not matched at the region level. Recently, regi…
-
Hey
Thank you for releasing nanoowl, I think it's really helpful for my ongoing work. Is there a way to fine-tune the weights on my own data?
Instructions on how to train / fine-tune would be gre…
-
Hi!
I want to ask: did you try using a PE with a learnable layer instead of the sinusoidal position encoder? If yes, how did it behave?
Also, as I understand from the paper, I'm interested in the final version of vis…
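For reference on what a learnable variant would replace, the fixed sinusoidal encoding from "Attention Is All You Need" can be sketched in plain Python (a learnable PE would instead be a trainable embedding table of the same `max_len x d_model` shape):

```python
import math

def sinusoidal_pe(max_len, d_model):
    """Fixed sinusoidal positional encoding:
    pe[pos][2i]   = sin(pos / 10000^(2i/d_model))
    pe[pos][2i+1] = cos(pos / 10000^(2i/d_model))
    """
    pe = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

pe = sinusoidal_pe(max_len=16, d_model=8)
print(pe[0][:2])  # position 0: [0.0, 1.0]
```

Unlike this fixed table, a learnable PE starts from random values and is updated by gradient descent, which can adapt to the data but may generalize worse to sequence lengths unseen during training.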
-
## ❓ Questions and Help
The [LVIS website](https://www.lvisdataset.org/dataset) states that the validation set has 20k images, but when I downloaded it, there were only 5k images.
Where can I dow…
-
I'd like to track the validation set loss for finetuning evaluation on a custom dataset (i.e., as shown in the original maskrcnn trainer [here](https://github.com/facebookresearch/maskrcnn-benchmark/…
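The usual pattern (a framework-agnostic sketch with stand-in functions; the real maskrcnn-benchmark/Detectron2 hook APIs differ) is to run the model on the validation loader every N iterations and record the mean loss alongside the training curve:

```python
def train_step(batch):           # stand-in for a real training step
    return 1.0 / (batch + 1)     # pretend the loss decreases

def val_loss(val_batches):       # stand-in for eval-mode forward passes
    losses = [1.0 / (b + 2) for b in val_batches]
    return sum(losses) / len(losses)

EVAL_PERIOD = 5                  # evaluate every 5 iterations
val_batches = range(4)
history = []
for it in range(1, 11):
    _ = train_step(it)
    if it % EVAL_PERIOD == 0:    # periodic validation, like an eval hook
        history.append((it, val_loss(val_batches)))

print(history)
```

In a real trainer this would be a hook that switches the model to eval mode, disables gradients, and logs the averaged loss to the same logger as the training loss so both curves can be compared.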
-
So in the config file I can see NUM_CLASSES: 22047
https://github.com/facebookresearch/Detic/blob/main/configs/Detic_LCOCOI21k_CLIP_SwinB_896b32_4x_ft4x_max-size.yaml
But it uses only 1023.
How ca…
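One plausible reading (a hedged sketch; the names here are hypothetical, not Detic's actual code): the 22047-row classifier weight covers the full concept vocabulary, and at evaluation time only the rows matching the target dataset's classes are selected:

```python
# Hypothetical sketch of vocabulary sub-selection: a large class-embedding
# table is indexed down to the classes the evaluation dataset actually uses.
full_vocab = {f"class_{i}": i for i in range(22047)}      # name -> row index
weights = [[float(i)] for i in range(22047)]              # toy 1-d embeddings

target_classes = ["class_3", "class_100", "class_22046"]  # dataset vocabulary
rows = [full_vocab[name] for name in target_classes]
sub_weights = [weights[r] for r in rows]                  # 3 x 1 sub-matrix

print(len(sub_weights), sub_weights[0])  # 3 [3.0]
```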
-
### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### YOLOv8 Component
_No response_
### Bug
…