-
While running one of the benchmarks, an internal exception was raised while decoding a chunk. The failure is intermittent.
```bash
jswinski@eragon [h5coro] (concurrent_list_group):~/meta/h5coro…
```
-
I have a dataset where the vast majority of values are nodata values (https://github.com/mthh/contour-rs/issues/16).
The bounding box is substantially larger than what would be required if the data …
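A minimal sketch of the cropping idea behind the issue above: when most cells are nodata, compute the tight bounding box of the valid cells and contour only that sub-grid instead of the full extent. The `NODATA` sentinel, function name, and grid are made-up illustrations, not contour-rs API.

```python
# Hypothetical helper: find the tight bounding box of non-nodata cells in a
# row-major grid, so downstream processing (e.g. contouring) can work on the
# cropped region rather than the full raster extent.
NODATA = -9999.0  # assumed sentinel value for illustration

def tight_bbox(grid, nodata=NODATA):
    """Return (row_min, row_max, col_min, col_max) covering all valid cells,
    or None if every cell is nodata."""
    rows = [i for i, row in enumerate(grid) if any(v != nodata for v in row)]
    if not rows:
        return None
    cols = [j for row in grid for j, v in enumerate(row) if v != nodata]
    return (min(rows), max(rows), min(cols), max(cols))

grid = [
    [NODATA, NODATA, NODATA, NODATA],
    [NODATA, 1.5,    2.0,    NODATA],
    [NODATA, NODATA, 3.0,    NODATA],
]
print(tight_bbox(grid))  # (1, 2, 1, 2)
```

Cropping to this box before contouring keeps the output geometry from being padded out to the full (mostly empty) raster.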
-
### Motivation
We want to start tracking performance numbers for vLLM on more realistic workloads. Thanks to our sponsors (#4925), we are getting a pool of hardware resources ready to run the testing …
-
Is [this](https://zenodo.org/record/6962043#.Y8gir9LP1nM) the same dataset? If not, why was it not used for the benchmark?
-
We should preload more datasets. As some datasets are quite large, we should implement a feature that only adds a reference to a dataset and downloads it at runtime, with no further action required from…
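One way the reference-only idea could look, as a sketch: registering a dataset records nothing but its URL, and the file is downloaded lazily on first access. The class and method names (`DatasetRegistry`, `register`, `fetch`) and the cache layout are assumptions for illustration, not an existing API.

```python
import os
import urllib.request

class DatasetRegistry:
    """Hypothetical registry: hold remote references, download on first use."""

    def __init__(self, cache_dir="dataset_cache"):
        self.cache_dir = cache_dir
        self.refs = {}  # dataset name -> remote URL

    def register(self, name, url):
        # Record a reference only; nothing is downloaded here.
        self.refs[name] = url

    def fetch(self, name):
        # Return a local path, downloading the file on first access.
        path = os.path.join(self.cache_dir, name)
        if not os.path.exists(path):
            os.makedirs(self.cache_dir, exist_ok=True)
            urllib.request.urlretrieve(self.refs[name], path)
        return path
```

Registering many large datasets then stays cheap; only the ones a run actually touches get pulled down.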
-
Hi, I saw that you benchmark everyone's code in your environment (the same one as the **old** official evaluation env).
Could you please run my code on that server against both the default and 10K datasets? It…
-
Thanks for sharing this. It seems to me this is a model trained for test purposes only, using your benchmark system. According to your paper, you are utilizing a custom dataset (around 14 hours of manua…
-
## Adding a Dataset
- **Name:** Mostly Basic Python Problems Dataset
- **Description:** The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by e…
-
On the 17 Dec 2014 benchmarking call, we discussed benchmarking tools to convert VCF and other sources of data into HGVS nomenclature (including entries made by tools that annotate VCF files).
This t…
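To make the conversion task above concrete, here is an illustrative sketch of the simplest case only: rendering a VCF single-nucleotide substitution in HGVS genomic (`g.`) notation. The accession and coordinates are made-up examples, and real converters must additionally handle indels, normalization, and alignment, which this deliberately does not attempt.

```python
# Illustrative sketch: format a VCF SNV as an HGVS genomic substitution,
# e.g. chrom=NC_000001.10, pos=12345, ref=A, alt=G -> "NC_000001.10:g.12345A>G".
def vcf_snv_to_hgvs(accession, pos, ref, alt):
    # Only single-nucleotide substitutions are handled in this sketch.
    if len(ref) != 1 or len(alt) != 1:
        raise ValueError("only single-nucleotide substitutions handled here")
    return f"{accession}:g.{pos}{ref}>{alt}"

print(vcf_snv_to_hgvs("NC_000001.10", 12345, "A", "G"))
# NC_000001.10:g.12345A>G
```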
-
We should consider switching to a dynamic, information-rich homepage to replace the big grey image.
The goal would be to actively present useful/interesting information to our users, rather than m…