In Section 4.3 (Unsupervised Contrastive Pretraining) of the paper "Nomic Embed: Training a Reproducible Long Context Text Embedder", the learning rate is given as 2e-5, but in the file contrastors/src/contrastors/configs/train/contrastive_pretrain.yaml it is set to 2e-4. Which value is correct?