xai-org / grok-1

Grok open release
Apache License 2.0

Is there a scientific paper? #23

Open · NatanFreeman opened 6 months ago

NatanFreeman commented 6 months ago

Is there a scientific paper accompanying this release? I've searched but couldn't find one. I find it odd that the weights would be released but not the research.

Explosion-Scratch commented 6 months ago

It would be nice to have:

JudeDavis1 commented 6 months ago

A model card would make sense. If there weren't any new per se techniques, I wouldn't see the need for yet another paper. If there is, then sure!

JudeDavis1 commented 6 months ago

I'm happy as long as the code is up to date and the science is released, even if not in an academic setting.

AlexanderPuckhaber commented 6 months ago

> A model card would make sense. If there weren't any new per se techniques, I wouldn't see the need for yet another paper. If there is, then sure!

Agreed that a model card (saying what data Grok was trained on) is crucial for this to be truly "open source".

It looks like Grok has had a model card dating back to last November: https://x.ai/model-card/

> Training data: The training data used for the release version of Grok-1 comes from both the Internet up to Q3 2023 and the data provided by our AI Tutors.

I doubt we'll get an answer any more detailed than "the Internet" and "whatever synthetic data our employees made".

Qu3tzal commented 6 months ago

> Is there a scientific paper accompanying this release? I've searched but couldn't find one. I find it odd that the weights would be released but not the research.

Because there's no research underlying it? There's nothing new or surprising in the model so far; it's the same architecture as other MoE LLMs, just with different data and training compute. Not every piece of software needs a paper that will be rejected at conferences and stay a preprint on arXiv. :)
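(For readers less familiar with the claim being made: "the same architecture as other MoE LLMs" refers to the now-standard top-k expert routing; the repo describes Grok-1 as a mixture of 8 experts with 2 active per token. Below is a minimal, purely illustrative sketch of such a layer; the names, shapes, and dense dispatch are assumptions for clarity and are not taken from this repo.)

```python
import jax
import jax.numpy as jnp

def moe_layer(x, gate_w, expert_ws, k=2):
    """Sketch of standard top-k mixture-of-experts routing (illustrative only).

    x:         (tokens, d_model) activations
    gate_w:    (d_model, n_experts) router weights
    expert_ws: (n_experts, d_model, d_model) one weight matrix per expert
    """
    gate_logits = x @ gate_w                              # (tokens, n_experts)
    top_vals, top_idx = jax.lax.top_k(gate_logits, k)     # pick k experts per token
    weights = jax.nn.softmax(top_vals, axis=-1)           # renormalize over top-k
    # Dense computation of every expert for clarity; real MoE kernels
    # dispatch each token only to its selected experts.
    expert_out = jnp.einsum('td,edf->tef', x, expert_ws)  # (tokens, n_experts, d_model)
    chosen = jnp.take_along_axis(expert_out, top_idx[:, :, None], axis=1)
    return (weights[:, :, None] * chosen).sum(axis=1)     # (tokens, d_model)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
x = jax.random.normal(k1, (4, 8))           # 4 tokens, toy width 8
gate_w = jax.random.normal(k2, (8, 4))      # 4 toy experts
expert_ws = jax.random.normal(k3, (4, 8, 8))
print(moe_layer(x, gate_w, expert_ws).shape)  # (4, 8)
```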

NatanFreeman commented 6 months ago

> > Is there a scientific paper accompanying this release? I've searched but couldn't find one. I find it odd that the weights would be released but not the research.
>
> Because there's no research underlying it? There's nothing new or surprising in the model so far; it's the same architecture as other MoE LLMs, just with different data and training compute. Not every piece of software needs a paper that will be rejected at conferences and stay a preprint on arXiv. :)

Disagree. I think @Explosion-Scratch did a good job pointing out why a paper would be useful in this case.

Qu3tzal commented 6 months ago

That's a technical report at best, though.

NatanFreeman commented 6 months ago

> That's a technical report at best, though.

Call it what you want; the issue is that it doesn't exist.

yzlnew commented 6 months ago

This absolutely needs some experimental details on μTransfer for an MoE model this large; anyone who has looked closely will have noticed several "weird" multipliers here: https://github.com/xai-org/grok-1/blob/7050ed204b8206bb8645c7b7bbef7252f79561b0/run.py#L31-L47
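(For context, a minimal sketch of how μP/μTransfer-style multipliers typically enter a forward pass. The names mirror the config fields in run.py, `embedding_multiplier_scale` and `output_multiplier_scale`, but the values, model, and shapes here are made up for illustration.)

```python
import jax
import jax.numpy as jnp

emb_size = 64
vocab_size = 1000
# Illustrative μP-style multipliers; names mirror run.py's config fields,
# values are assumptions for this sketch.
embedding_multiplier_scale = jnp.sqrt(float(emb_size))  # boost embedding outputs
output_multiplier_scale = 1.0 / jnp.sqrt(3.0)           # damp pre-softmax logits

key = jax.random.PRNGKey(0)
embedding = jax.random.normal(key, (vocab_size, emb_size))  # init scale ~1

def forward(token_ids):
    # Scale embeddings up so activations stay O(1) as width grows.
    h = embedding[token_ids] * embedding_multiplier_scale
    # ... transformer blocks would go here ...
    # Tied unembedding, with the logits damped by the output multiplier.
    return (h @ embedding.T) * output_multiplier_scale

print(forward(jnp.array([1, 2, 3])).shape)  # (3, 1000)
```

Under μP, multipliers like these are what let hyperparameters tuned on a small proxy model transfer to the full-width model, which is why documenting them (and the tuning runs behind them) would be genuinely useful here.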

AsureDay commented 5 months ago

[image attachment]