-
Hi,
I have found that we are using an asynchronous logging mechanism (on a separate thread) when generating accuracy log entries (the content of responses, either the first token or a sample response). This mechanism is working…
-
e.g., https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.qmc.Halton.html#scipy.stats.qmc.Halton
This is likely to be better maintained than the MLCommons code.
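For reference, SciPy's quasi-Monte Carlo module exposes the Halton sampler directly. A minimal sketch (the dimension, sample count, and bounds below are illustrative, not values from any MLCommons benchmark):

```python
from scipy.stats import qmc

# Draw quasi-random points from a 2-D Halton sequence.
# scramble=False gives the deterministic, unscrambled sequence.
sampler = qmc.Halton(d=2, scramble=False)
points = sampler.random(n=4)  # shape (4, 2), values in [0, 1)

# Rescale the unit-cube points to an arbitrary box, e.g. [0, 10) x [-5, 5).
scaled = qmc.scale(points, l_bounds=[0, -5], u_bounds=[10, 5])
```

Because `scipy.stats.qmc` is maintained upstream, swapping it in would also remove the need to maintain a local Halton implementation.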
-
### 🐛 Describe the bug
We tried to save and load the torch.exported dlrm_v2 model (97.5 GB); the model repository is https://github.com/mlcommons/inference/tree/master/recommendation/dlrm_v2/pytorch…
-
The following links lead to documentation that does not work:
1. https://mlcommons.org/en/mlcube/
2. https://mlcommons.github.io/mlcube/getting-started/mnist.html
3. Try out building your own MLCube
-
During dataset submission, we need to download the data preparation MLCube to make sure it can be executed correctly. While this is happening, we're not letting the user know, and instead we're leavin…
-
Is there any reason why we have an [accuracy upper limit for LLAMA2 Tokens per sample](https://github.com/mlcommons/inference/blob/master/tools/submission/submission_checker.py#L109) but not for GPT-J…
-
In https://github.com/MIT-LCP/physionet-build/pull/640 we added support for Schema.org metadata. Croissant is an extension to this schema that captures additional information for finding and using the…
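As a point of reference, a plain Schema.org `Dataset` record is just JSON-LD, and Croissant layers additional vocabulary (e.g. resource and record-set descriptions) on top of that base. A minimal sketch of the base record (field values are invented placeholders):

```python
import json

# Minimal Schema.org Dataset record in JSON-LD form. Croissant extends this
# base schema; the extra Croissant fields are not shown here.
record = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "example-dataset",                 # placeholder name
    "description": "An illustrative dataset record.",
    "url": "https://example.org/dataset",      # placeholder URL
}

print(json.dumps(record, indent=2))
```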
-
Benchmark run
-
At least map them to the corresponding values:
https://github.com/mlcommons/croissant