huggingface / OBELICS

Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M documents, 115B text tokens and 353M images.
https://huggingface.co/datasets/HuggingFaceM4/OBELICS
Apache License 2.0

When will the trained model be released? #3

Open chenxshuo opened 10 months ago

chenxshuo commented 10 months ago

Hi there,

thank you very much for this awesome project! I was wondering whether you are going to release the model trained on this dataset in the near future. If so, when?

Best regards

vishaal27 commented 10 months ago

I have the same question. I see that some of the model links are public: https://huggingface.co/HuggingFaceM4, but clicking them returns a 404. Is there a planned date for making them public?

VictorSanh commented 10 months ago

Hey @chenxshuo & @vishaal27, thanks for your interest! We will be announcing officially around mid-week. The links will all become public at that time :)

vishaal27 commented 10 months ago

@VictorSanh Thanks for the model release, it looks super exciting. I was just wondering if you have a profiling table of load times, inference times for a single moderately sized sequence, and required GPU memory for both the 9B and 80B models. On how many GPUs (and with what specs) did you run the 80B evals? And were all evals done in fp16/bf16?
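
In the absence of an official profiling table, a rough back-of-envelope sketch for the GPU memory question: in fp16/bf16 each parameter takes 2 bytes, so the weights alone give a lower bound on required memory (this ignores activations, the KV cache, and any framework overhead, so actual usage will be higher):

```python
def param_memory_gib(num_params: float, bytes_per_param: int = 2) -> float:
    """Lower bound on GPU memory (GiB) for model weights alone.

    Assumes fp16/bf16 (2 bytes per parameter) by default; ignores
    activations, KV cache, and runtime overhead.
    """
    return num_params * bytes_per_param / 2**30


# Weights-only estimates for the two model sizes discussed above
print(f"9B  params in bf16: ~{param_memory_gib(9e9):.1f} GiB")
print(f"80B params in bf16: ~{param_memory_gib(80e9):.1f} GiB")
```

By this estimate the 9B model fits on a single 24 GB or 40 GB card, while the 80B model (~150 GiB of weights) needs to be sharded across multiple GPUs, e.g. with `device_map="auto"` in `transformers`.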