Closed ratthachat closed 1 year ago
Thank you for your interest.
Models such as molt5-small and molt5-large were pre-trained using unsupervised learning objectives outlined in our research paper. However, these models haven't been fine-tuned for any specific task yet.
Nevertheless, you can fine-tune these pre-trained models for any particular task you're interested in. While we don't currently have pre-trained weights for SMILES-to-SMILES, with appropriate training data you should be able to fine-tune our pre-trained models to achieve your desired results.
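To make this concrete, here is a minimal sketch of what fine-tuning molt5-small on a SMILES-to-SMILES task could look like with the Hugging Face `transformers` library. This is an illustrative assumption, not an official recipe: the `make_examples` helper, the toy SMILES pair, and the single-loss demonstration are all hypothetical, and a real run would use your own paired dataset plus a proper training loop or `Trainer`.

```python
# Hedged sketch: fine-tuning molt5-small for a SMILES-to-SMILES task.
# Assumptions: `transformers` (with a PyTorch backend) is installed, and
# `pairs` is your own list of (input_smiles, target_smiles) training data.

def make_examples(pairs, prefix=""):
    """Format (input, target) SMILES pairs as text-to-text examples.

    `prefix` is an optional task prefix (a common T5 convention); it is
    an assumption here, not something the MolT5 checkpoints require.
    """
    return [{"input_text": prefix + src, "target_text": tgt} for src, tgt in pairs]


if __name__ == "__main__":
    from transformers import T5Tokenizer, T5ForConditionalGeneration

    tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-small")
    model = T5ForConditionalGeneration.from_pretrained("laituan245/molt5-small")

    # Toy pair for illustration only, not real training data.
    examples = make_examples([("CCO", "CC(=O)O")])
    enc = tokenizer(examples[0]["input_text"], return_tensors="pt")
    labels = tokenizer(examples[0]["target_text"], return_tensors="pt").input_ids

    # Standard seq2seq cross-entropy loss; an optimizer step would follow
    # in an actual fine-tuning loop.
    loss = model(**enc, labels=labels).loss
    loss.backward()
```

The same pattern applies to any text-to-text task: only the (input, target) pairs change, since the checkpoint itself is task-agnostic.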
We hope this information sufficiently answers your question.
Hi, first of all, thank you very much for your great work!
My question is about: https://huggingface.co/laituan245/molt5-small. All the other weights are either smiles2caption or caption2smiles, but what about this molt5-small? For which purpose should we use it?
(BTW, are there any pre-trained weights for SMILES-to-SMILES?)