-
https://arxiv.org/abs/2304.12210
-
#Ranked 1st on the KITTI depth completion benchmark
#RGB-D input (note: the network takes RGB plus sparse depth, not RGB alone)
Institute: MIT
URL: https://arxiv.org/pdf/1807.00275.pdf
Git: https://github.com/fangchangma/self-superv…
-
We have trained an SSL model based on NEST. How do we fine-tune this pretrained model with a CTC loss function?
The pretraining scripts are as follows: [NEST](https://github.com/NVIDIA/NeMo/blob/main…
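NeMo ships its own configs and scripts for CTC fine-tuning, so treat the following only as a framework-agnostic sketch of the general recipe: restore the pretrained encoder weights, attach a fresh linear CTC head, and optimize `torch.nn.CTCLoss`. The `Encoder` class, checkpoint name, dimensions, and vocabulary size below are placeholders, not NeMo APIs.

```python
import torch
import torch.nn as nn

# Hypothetical encoder standing in for the NEST-pretrained model;
# replace with the actual model class and checkpoint from your run.
class Encoder(nn.Module):
    def __init__(self, feat_dim=80, hidden_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, x):           # x: (batch, time, feat_dim)
        return self.net(x)          # (batch, time, hidden_dim)

vocab_size = 129                    # assumption: tokenizer size + 1, index 0 = CTC blank
encoder = Encoder()
# Load the SSL checkpoint; strict=False tolerates pretraining-only heads.
encoder.load_state_dict(torch.load("ssl_pretrained.pt"), strict=False)

decoder = nn.Linear(512, vocab_size)           # fresh CTC head, trained from scratch
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4
)

def training_step(feats, feat_lens, targets, target_lens):
    # feats: (batch, time, feat_dim); targets: (batch, max_target_len) int labels
    log_probs = decoder(encoder(feats)).log_softmax(dim=-1)
    log_probs = log_probs.transpose(0, 1)      # CTCLoss expects (time, batch, vocab)
    loss = ctc_loss(log_probs, targets, feat_lens, target_lens)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```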
-
Hello,
I'm experiencing challenges training the model on a custom dataset consisting of medical images.
Environment & Setup:
GPUs: 8 x A100 40GB
Batch Size: 32 per GPU (Total: 256)
Learni…
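The post is cut off mid-setup, but one quick consistency check on a configuration like this is the effective global batch size and, if the learning rate was tuned at a different batch size, the linear scaling heuristic (Goyal et al., 2017). A minimal sketch, where everything except the GPU count and per-GPU batch size is an assumed placeholder:

```python
# Sanity-check the effective batch size from the setup above and apply the
# common linear LR scaling heuristic. base_lr and base_batch are assumptions
# for illustration, not values from the post.
num_gpus = 8
per_gpu_batch = 32
global_batch = num_gpus * per_gpu_batch      # 8 * 32 = 256, as stated above

base_lr = 1e-3                               # hypothetical LR tuned at base_batch
base_batch = 64                              # hypothetical reference batch size
scaled_lr = base_lr * global_batch / base_batch
print(global_batch, scaled_lr)               # 256 0.004
```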
-
Hi Wan,
Thanks for your great work. I am running your code with the command
`python3 network/run_engine.py --initial_model ./pretrained/synthetic.pth --mode Train --model_dir ./output --tag self-su…
-
Thank you for making UNI publicly available.
Since you may have extensive experience training UNI and other models in a self-supervised paradigm, I would like to ask if you have trained and compar…
-
I have 16,000 images in my unlabeled dataset and the batch_size is set to 32, yet it takes almost 20 minutes to train one epoch. What could be the reason for that?
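For a rough sense of scale: 16,000 images at batch size 32 is 500 optimizer steps per epoch, so 20 minutes comes out to about 2.4 s per step. A back-of-envelope check like the sketch below, using only the numbers from the post, is a common first step in deciding whether the GPU or the input pipeline is the bottleneck:

```python
# Quick sanity check on the numbers in the question (all taken from the post).
images, batch_size = 16_000, 32
steps_per_epoch = images // batch_size        # 500 optimizer steps per epoch
epoch_seconds = 20 * 60
print(epoch_seconds / steps_per_epoch)        # ~2.4 s per step
```

If the per-step time drops sharply when training on a small synthetic dataset held in memory, the data loader (decoding, augmentation, disk IO) is the likely culprit rather than the model itself.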
-
### Link to the paper
[[arXiv:2010.09893] LT-GAN: Self-Supervised GAN with Latent Transformation Detection](https://arxiv.org/abs/2010.09893)
### Authors / Affiliations
Parth Patel, Nupur Kumari, Mayank Singh, Balaji…
-
![screenshot](https://github.com/user-attachments/assets/c90c5cce-2939-4304-ab42-ff1d3c4dc8ce)
Hi, maybe something is wrong in my steps. It looks like train.pt and the other files are empty?…
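If it helps to narrow things down, a quick check along these lines shows whether the files are truly empty on disk or only failing to load; `train.pt` is the only filename taken from the post, so adjust the list to match your output directory:

```python
import os
import torch

# Check whether the preprocessed files were actually written, and if so,
# whether they deserialize at all.
for name in ["train.pt"]:
    size = os.path.getsize(name)
    print(f"{name}: {size} bytes")
    if size > 0:
        data = torch.load(name, map_location="cpu")
        print(type(data))
```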
-
Hi,
Self-supervised pretraining for speech representation is a promising technique for developing ASR in resource-constrained languages with little transcribed data, and SimCLR has been applied with success…
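Since the post names SimCLR, here is a minimal sketch of its NT-Xent contrastive objective in PyTorch, with random tensors standing in for the outputs of a speech encoder on two augmented views of the same utterance; the encoder and augmentation pipeline are out of scope here and would be assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent (SimCLR) loss for a batch of paired views.

    z1, z2: (batch, dim) embeddings of two augmented views of the same inputs
    (for speech, e.g. two differently augmented crops of one utterance).
    """
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim), unit norm
    sim = z @ z.t() / temperature                        # (2B, 2B) cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # For row i, the positive sits at i+B (and vice versa for the second half).
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets.to(z.device))

# Toy usage with random "embeddings" standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```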