-
3.3 Download the pretrained weights of the minimind language model ([Baidu Netdisk](https://pan.baidu.com/s/1LE1SPoPYGS7VNtT1tpf7DA?pwd=6666) or [HuggingFace](https://huggingface.co/datasets/jingyaogong/minimind-v_dataset/tree/main/out)) and place them in the ./out/ directory, then…
-
### System Info
```shell
vault.habana.ai/gaudi-docker/1.17.0/ubuntu22.04/habanalabs/pytorch-installer-2.3.1:latest
```
### Information
- [X] The official example scripts
- [ ] My own modified scri…
-
Hello, could you please provide the parameters of the pretrained model?
-
Hi,
I noticed that the `--version` arg in both the pretrain and finetune scripts is passed with **v1**, which is different from the original LLaVA&LLaVA-1.5 and other LLaVA style projects. Do you h…
-
Is there a way to pretrain the M_3 models?
-
Thank you for your excellent work. If I want to use this data for pretraining and conduct a rigorous comparison with the DCLM-BASELINE 7B model mentioned here, what hyper-parameters should I use? Coul…
-
Will the pretrained model be released?
-
I have downloaded and checked the pretraining data; however, only about 1/4 of the 2M images are present in the downloaded image set. Where can I find the remaining images?
-
I'm training a HuBERT model from scratch on 8 kHz audio speech data, as described in the paper, and the first iteration succeeded. I've started the second iteration, where first-iteration features were us…
-
Hello,
I'm trying to pretrain with the FMoW dataset (FMoW stands for 'Functional Map of the World - Sentinel-2 corresponding images').
However, when I execute the command below, the following…