-
Thank you for your great work.
Could you please post the training scripts used to train the ViT model on ImageNet-21k from scratch, e.g., the learning rate, weight decay, number of training steps, etc.?
Th…
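In the meantime, the hyperparameters reported in the original ViT paper (Dosovitskiy et al., 2021) for ImageNet-21k pre-training are a reasonable starting point. The sketch below wires them into a standard PyTorch optimizer and scheduler; the exact values, the step count, and the placeholder model are assumptions taken from the paper, not from this repository.

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

# Placeholder for the repo's ViT-B/16 with a 21k-class head (illustrative only).
model = torch.nn.Linear(768, 21843)

# Values reported in the ViT paper for ImageNet-21k pre-training:
# Adam with betas=(0.9, 0.999), weight decay 0.1, batch size 4096,
# base LR 1e-3, linear warmup for 10k steps, then linear decay.
total_steps = 100_000   # assumption; depends on epochs and dataset size
warmup_steps = 10_000

optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-3, betas=(0.9, 0.999), weight_decay=0.1
)

def lr_lambda(step: int) -> float:
    if step < warmup_steps:
        return step / max(1, warmup_steps)                     # linear warmup
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return max(0.0, 1.0 - progress)                            # linear decay to 0

scheduler = LambdaLR(optimizer, lr_lambda)
```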
-
I am currently trying to run the program, and I need to convert the ffhq.pkl model to a .pt one.
When I enter `python stylegan_nada\convert_weight.py --repo stylegan_ada --gen models/ffhq.pkl` it sa…
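If the converter keeps failing, one workaround is to extract a `.pt` state dict from the pickle directly. A minimal sketch, assuming the official StyleGAN2-ADA-PyTorch pickle layout (which stores `'G'`, `'D'`, and `'G_ema'` network objects) and that the stylegan2-ada-pytorch repo is on `PYTHONPATH`, since unpickling resolves references to its `dnnlib`/`torch_utils` modules:

```python
import pickle
import torch

# Unpickling needs the stylegan2-ada-pytorch repo importable, because the
# pickle stores references to its dnnlib/torch_utils modules.
with open("models/ffhq.pkl", "rb") as f:
    data = pickle.load(f)

# Official pickles hold 'G', 'D', and 'G_ema'; the EMA generator is the
# one usually wanted for inference.
torch.save(data["G_ema"].state_dict(), "models/ffhq.pt")
```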
-
As far as I can see, the code for training the Vision Transformer evaluated in the paper is not included.
Could you state or link the training details for the evaluated ViT?
Was it trai…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
When I try to load the Stable Diffusion v2.0 checkp…
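If the truncated error is the common config-mismatch failure (the webui attempting to load an SD 2.x checkpoint with the SD 1.x config), the usual fix is to place the matching v2 inference config next to the checkpoint with the same basename. A sketch, assuming a 768-v checkpoint and the default AUTOMATIC1111 model directory; the paths are illustrative:

```python
import urllib.request

# Illustrative path; use your actual checkpoint location.
ckpt = "models/Stable-diffusion/768-v-ema.ckpt"

# v2 inference config from Stability AI's repo; use v2-inference.yaml
# instead for the 512 "base" checkpoint.
yaml_url = ("https://raw.githubusercontent.com/Stability-AI/stablediffusion/"
            "main/configs/stable-diffusion/v2-inference-v.yaml")

# The webui picks up a .yaml that shares the checkpoint's basename.
urllib.request.urlretrieve(yaml_url, ckpt.rsplit(".", 1)[0] + ".yaml")
```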
-
Supporting new methods:
1. Support new backbone architectures ([LITv2](https://arxiv.org/abs/2205.13213)).
2. Refactor code structure and weight initialization in various network modules (using `BaseMo…
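Assuming the truncated `BaseMo…` in item 2 refers to MMCV's `BaseModule`, the refactored initialization pattern typically looks like the sketch below: modules declare an `init_cfg` instead of hand-writing init functions, and the runner (or the caller) invokes `init_weights()`. The class and layer names here are illustrative.

```python
import torch.nn as nn
from mmcv.runner import BaseModule  # mmengine.model.BaseModule in newer stacks

class ToyNeck(BaseModule):  # illustrative module name
    def __init__(self, in_ch=256, out_ch=256,
                 init_cfg=dict(type='Kaiming', layer='Conv2d')):
        # The declared init_cfg replaces ad-hoc per-module init code.
        super().__init__(init_cfg=init_cfg)
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)

neck = ToyNeck()
neck.init_weights()  # applies the declared init_cfg
```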
-
I have been trying to get this project to run properly, but I keep running into many problems. Would it be possible to provide a Dockerfile to containerize this project?
-
**Original Post by @sayakpaul**
I agree that rescaling to [0, 1] is way simpler and easier to do, but a significant number of models could be supported off-the-shelf with this consideration, I belie…
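For concreteness, "rescaling to [0, 1]" here just means dividing raw uint8 pixel values by 255 before any mean/std normalization. A minimal sketch:

```python
import numpy as np

def rescale(image: np.ndarray, scale: float = 1 / 255) -> np.ndarray:
    """Map uint8 pixel values in [0, 255] to floats in [0, 1]."""
    return image.astype(np.float32) * scale

pixels = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
assert rescale(pixels).max() <= 1.0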
-
Hello, I validated GCViT using your public checkpoints, and the accuracy for tiny, xtiny, and xxtiny is very low (top-1 error 99.9, top-5 error 99.75).
I wonder whether you published the wrong checkpoints or my…
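Near-chance accuracy (top-1 error ~99.9 over 1000 classes) usually means the weights never actually loaded, rather than the checkpoint being subtly wrong. A quick diagnostic sketch, assuming a timm-style model name and that the checkpoint may nest its weights under a `state_dict` or `model` key:

```python
import timm
import torch

model = timm.create_model("gcvit_tiny", pretrained=False)  # assumed model name

state = torch.load("gcvit_tiny.pth", map_location="cpu")
# Checkpoints often nest the weights; unwrap common containers.
for key in ("state_dict", "model"):
    if isinstance(state, dict) and key in state:
        state = state[key]

missing, unexpected = model.load_state_dict(state, strict=False)
# Non-empty lists here would explain near-random accuracy.
print("missing:", missing)
print("unexpected:", unexpected)
```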
-
### Describe the bug
I tried to train VITS from scratch for Bangla by following this tutorial: https://github.com/coqui-ai/TTS/blob/dev/notebooks/Tutorial_2_train_your_first_TTS_model.ipynb
I tri…
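A common failure when adapting that tutorial to a non-Latin script is leaving the phonemizer and character set at their English defaults, so the text encoder never sees the Bangla characters. A hedged sketch of the relevant part of the config (the Bangla character string and the cleaner choice are illustrative placeholders, not a complete inventory):

```python
from TTS.tts.configs.shared_configs import CharactersConfig
from TTS.tts.configs.vits_config import VitsConfig

config = VitsConfig(
    use_phonemes=False,             # skip the espeak phonemizer for Bangla here
    text_cleaner="basic_cleaners",
    characters=CharactersConfig(
        pad="<PAD>", eos="<EOS>", bos="<BOS>", blank="<BLNK>",
        # Illustrative subset; list every character appearing in your transcripts.
        characters="অআইঈউঊএঐওঔকখগঘঙচছজঝঞটঠডঢণতথদধনপফবভমযরলশষসহ়ঁংঃািীুূেৈোৌ্",
        punctuations="!,.?-: ",
    ),
)
```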
-
Hi, thanks for the great work.
Due to the needs of a specific task, I want to train CLIP from scratch without using BPE encoding or the 77-token length limit. How should I do this?
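One way to sketch this (assuming the Hugging Face `transformers` CLIP implementation rather than OpenAI's repo): build an untrained `CLIPModel` whose text config raises `max_position_embeddings` past 77 and sets `vocab_size` to match whatever non-BPE tokenizer (character- or word-level) you plug in. The sizes below are assumptions for illustration:

```python
from transformers import CLIPConfig, CLIPModel, CLIPTextConfig, CLIPVisionConfig

# Assumed sizes; vocab_size must match your own (non-BPE) tokenizer.
text_cfg = CLIPTextConfig(vocab_size=8000, max_position_embeddings=256)
vision_cfg = CLIPVisionConfig()  # default ViT-style image tower

config = CLIPConfig(
    text_config=text_cfg.to_dict(),
    vision_config=vision_cfg.to_dict(),
)
model = CLIPModel(config)  # randomly initialized, ready for from-scratch training
```

Since the position-embedding table and token-embedding matrix are created from the config, nothing else in the model needs patching; the remaining work is pairing it with your tokenizer and a contrastive training loop.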