Hi,

Niels here from the open-source team at Hugging Face. I discovered your work through the paper page: https://huggingface.co/papers/2408.15980.

Thanks for making the models and dataset available on the 🤗 hub. Some small suggestions on how to improve the discoverability of your work:

- We usually recommend pushing separate checkpoints to separate model repositories (so that things like download stats work). In this case, you could for instance have a mlfu7/crossmae-vit-base model repository (with tags like "image-classification", "robotics", etc.), as well as a mlfu7/llama-7b-lora model repository, and so on. See here for a guide: https://huggingface.co/docs/hub/models-uploading. For custom PyTorch models, the easiest approach is to leverage the PyTorchModelHubMixin class.
- The various model repos (as well as the dataset and paper) can then be grouped together into a collection. See e.g. here for current trending collections: https://huggingface.co/collections.

Let me know if you need any help regarding this!
Cheers,
Niels
ML Engineer @ HF 🤗