-
# Baseline Experiments - Korean Dataset
- Summary of baseline experiment results
- Summary of experiment results on the Korean dataset
- Uses [koBART(gogamza/kobart-base-v2)](https://huggingface.co/gogamza/kobart-base-v2)
# Experimental Plan
- [x] koBA…
-
Do you use open-source data for the video dataset, or did you collect and organize it yourself?
-
## Keyword: chest
### Revisiting Computer-Aided Tuberculosis Diagnosis
- **Authors:** Yun Liu, Yu-Huan Wu, Shi-Chen Zhang, Li Liu, Min Wu, Ming-Ming Cheng
- **Subjects:** Computer Vision and Pattern…
-
Hi there.
I am new to `accelerate`, and I've found that it really improves my development productivity. Thanks for your great work.
However, I have some problems when using `accelerator.gather`.
I…
-
Sik-Ho Tang. [Review — Unsupervised Learning of Visual Representations using Videos](https://sh-tsang.medium.com/review-unsupervised-learning-of-visual-representations-using-videos-abee72149f77).
-
Hi, thanks for your excellent work. I was wondering why prototype classification and representation learning are implemented in two separate embedding spaces, i.e., `bottleneck_dim` $\neq$ `in_dim` (if I didn't m…
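To make the question concrete, here is a minimal numpy sketch of the two-space setup being asked about: backbone features live in an `in_dim` space, while prototype classification happens after a projection into a `bottleneck_dim` space. The parameter names come from the question; the projection and cosine-similarity details are assumptions for illustration, not the repository's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
in_dim, bottleneck_dim, n_prototypes = 768, 256, 100

# Backbone features: representation learning happens in the in_dim space.
x = rng.standard_normal((4, in_dim))  # batch of 4 feature vectors

# Linear projection into the (different) bottleneck space.
W_proj = rng.standard_normal((in_dim, bottleneck_dim)) / np.sqrt(in_dim)
z = x @ W_proj  # shape (4, bottleneck_dim)

# Prototype classification operates in the bottleneck space:
# cosine similarity between normalized embeddings and prototypes.
prototypes = rng.standard_normal((n_prototypes, bottleneck_dim))
z_n = z / np.linalg.norm(z, axis=1, keepdims=True)
p_n = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
logits = z_n @ p_n.T  # shape (4, n_prototypes), values in [-1, 1]
```

If `bottleneck_dim == in_dim`, the projection could be dropped and both objectives would share one space, which is exactly the design choice the question is probing.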
-
### Description
Running generate.py in the AltDiffusion-m18 directory reports the following error:
******************** text2img altdiffusion-m18
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 865.91 M params.
No …
-
Could I ask about the STAN-self-B/16 training time reported in your paper?
I'm really astonished at frame number@12 and batch size@128, which means one forward pass needs to process 1536 images, and images also wi…
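The image count per forward pass quoted above follows directly from the stated frame number and batch size; a trivial check:

```python
frames_per_clip = 12   # frame number from the question
batch_size = 128       # batch size from the question

# Each clip contributes frames_per_clip images, so one forward pass sees:
images_per_forward = frames_per_clip * batch_size
print(images_per_forward)  # 1536
```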
-
Hello Authors,
Firstly, I would like to extend my sincere appreciation for your exceptional work on "Bottom Up Top Down Detection Transformers for Language Grounding in Images and Point Clouds". Yo…
-
Hello! When I was fine-tuning the model recently, I noticed that the training script didn't seem to support multi-loss-cocosoda.
I set "model_type=multi-loss-cocosoda" in run_fine_tune.sh. And the …