NatanFreeman opened 6 months ago
It would be nice to have:
A model card would make sense. If there weren't any new techniques per se, then I wouldn't see the need for yet another paper. If there are, then sure!
I'm happy as long as the code is up to date and the science is released even if not in an academic setting.
> A model card would make sense. If there weren't any new techniques per se, then I wouldn't see the need for yet another paper. If there are, then sure!
Agree that model card (what data Grok was trained on) is crucial for this being truly "open source"
It looks like Grok had a model card back from last November: https://x.ai/model-card/
> **Training data:** The training data used for the release version of Grok-1 comes from both the Internet up to Q3 2023 and the data provided by our AI Tutors.
I doubt we'll get an answer any more detailed than "the Internet" and "whatever synthetic data our employees made"
Is there a scientific paper accompanying this release? I've searched but couldn't find one. I find it odd that the weights would be released but not the research.
Because there's no research underlying it? There's nothing new or surprising in the model so far; it's the same architecture as other MoE LLMs, with different data and training compute. Not every piece of software needs a paper that's going to be rejected at conferences and stay in pre-print stage on arXiv. :)
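For readers who haven't seen one: the "same architecture as other MoE LLMs" remark refers to the standard top-k routed mixture-of-experts feed-forward block. Below is a minimal NumPy sketch of that generic pattern, purely illustrative; the expert count, shapes, and ReLU experts are made up for the example and are not Grok-1's actual implementation.

```python
import numpy as np

def moe_layer(x, w_gate, experts, k=2):
    """Route each token to its top-k experts and mix their outputs.

    x:       (tokens, d_model) activations
    w_gate:  (d_model, n_experts) router weights
    experts: list of (w_in, w_out) per-expert MLP weights
    """
    logits = x @ w_gate                           # (tokens, n_experts) router scores
    topk = np.argsort(logits, axis=-1)[:, -k:]    # indices of each token's top-k experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, topk[t]]
        weights = np.exp(sel - sel.max())
        weights /= weights.sum()                  # softmax over the selected experts only
        for w, e in zip(weights, topk[t]):
            w_in, w_out = experts[e]
            h = np.maximum(x[t] @ w_in, 0.0)      # illustrative ReLU MLP expert
            out[t] += w * (h @ w_out)             # weighted mix of expert outputs
    return out

rng = np.random.default_rng(0)
d_model, n_experts, d_ff = 8, 4, 16
x = rng.normal(size=(3, d_model))
w_gate = rng.normal(size=(d_model, n_experts))
experts = [(rng.normal(size=(d_model, d_ff)), rng.normal(size=(d_ff, d_model)))
           for _ in range(n_experts)]
y = moe_layer(x, w_gate, experts)
print(y.shape)  # (3, 8)
```

Only k of the n experts run per token, which is what lets MoE models grow total parameter count without a proportional increase in per-token compute.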
> Is there a scientific paper accompanying this release? I've searched but couldn't find one. I find it odd that the weights would be released but not the research.
> Because there's no research underlying it? There's nothing new or surprising in the model so far; it's the same architecture as other MoE LLMs, with different data and training compute. Not every piece of software needs a paper that's going to be rejected at conferences and stay in pre-print stage on arXiv. :)
Disagree. I think @Explosion-Scratch did a good job pointing out why a paper would be useful in this case.
That's a technical report at best though.
> That's a technical report at best though.
Call it what you want, the issue is that it doesn't exist.
The release absolutely needs some experimental details on μTransfer for an MoE model this large; anyone who looks closely will notice the several "weird" multipliers here: https://github.com/xai-org/grok-1/blob/7050ed204b8206bb8645c7b7bbef7252f79561b0/run.py#L31-L47
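For context on why such multipliers might look "weird": under μP/μTransfer, several points in a transformer's forward pass pick up width-dependent scale factors so that hyperparameters tuned on a small model transfer to a large one. The sketch below shows the standard μP placements (scaled embeddings, 1/d attention logits instead of 1/√d, width-compensated output logits). This is a guess at an explanation under standard μP rules, with illustrative constants; xAI has not confirmed that this is their recipe.

```python
import numpy as np

# Sketch of where muP-style multipliers typically appear in a transformer
# forward pass. Standard muTransfer placements; constants are illustrative
# assumptions, NOT values taken from Grok-1's run.py.

d_model = 64                           # hypothetical width
embed_mult = 10.0                      # muP often scales up embedding outputs
output_mult = 1.0 / np.sqrt(d_model)   # one common width-compensating logit scale

def attention_scores(q, k):
    # muP scales attention logits by 1/d rather than the usual 1/sqrt(d),
    # keeping logits O(1) as width grows.
    return (q @ k.T) / q.shape[-1]

rng = np.random.default_rng(0)
tok_emb = rng.normal(size=(5, d_model))
h = embed_mult * tok_emb               # scaled embeddings entering the blocks

q = rng.normal(size=(5, d_model))
k = rng.normal(size=(5, d_model))
scores = attention_scores(q, k)        # (5, 5), 1/d-scaled

w_unembed = rng.normal(size=(d_model, 100))
logits = output_mult * (h @ w_unembed) # width-compensated output logits
print(scores.shape, logits.shape)      # (5, 5) (5, 100)
```

If the constants in run.py do come from μP, the interesting experimental detail would be which base width they were transferred from, which is exactly the kind of thing a technical report would document.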