thu-ml / MMTrustEval

A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust)
https://multi-trust.github.io/
Creative Commons Attribution Share Alike 4.0 International

Improve discoverability of your work on Hugging Face #2

Open NielsRogge opened 1 month ago

NielsRogge commented 1 month ago

Hi,

Niels here from the open-source team at Hugging Face. It's great to see you're releasing models + data on HF. I discovered your work through the paper page: https://huggingface.co/papers/2406.07057 (feel free to claim the paper so that it appears on your HF account!).

However, there are a couple of things that could improve the discoverability of the leaderboard + dataset, which I've listed below.

Gated access to dataset

Would you be interested in making your dataset available on the hub?
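For reference, the upload itself is only a couple of lines. A minimal sketch, assuming the annotations can be read with the datasets library (the local file name and the repo id here are placeholders, not your actual paths):

from datasets import load_dataset

# read a local annotation file (placeholder path) into a DatasetDict
ds = load_dataset("json", data_files="multitrust_annotations.json")

# create the repo on the Hub and upload the data; gated access can then
# be enabled from the dataset's settings page
ds.push_to_hub("thu-ml/mmtrust-eval")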

We support gated access which requires people to fill in their contact information (similar to how models like LLaMa-3 are gated): https://huggingface.co/docs/hub/en/datasets-gated

This way, people could load the dataset in 2 lines of code, like so (assuming they were granted access through the manual approval and are logged in with their HF account):

from datasets import load_dataset

dataset = load_dataset("thu-ml/mmtrust-eval")
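The login step is a one-liner as well; a small sketch (the call prompts interactively for an access token):

from huggingface_hub import login

# prompts for a Hugging Face access token, which is needed to
# download gated repos
login()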

Leaderboard as an app on Spaces

Secondly, if you're interested, the leaderboard could be hosted on Spaces as a Streamlit, Gradio, or Docker app. See here for more info: https://huggingface.co/docs/hub/en/spaces-overview
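To give an idea of how little code that takes, here is a minimal Gradio sketch for a static leaderboard table (the model names and scores below are placeholders, not real MultiTrust results):

import gradio as gr
import pandas as pd

# placeholder rows; the real table would hold the MultiTrust scores
df = pd.DataFrame({
    "Model": ["model-a", "model-b"],
    "Trustworthiness": [0.0, 0.0],
})

with gr.Blocks() as demo:
    gr.Markdown("# MultiTrust Leaderboard")
    gr.Dataframe(value=df)

demo.launch()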

Let me know if you need any help regarding this!

Cheers,

Niels ML Engineer @ HF 🤗

zycheiheihei commented 1 month ago

Thanks for your advice! We have put these on our agenda : )

hxhcreate commented 1 month ago

+1, a dataset on Hugging Face might be easier to access than Google Drive