-
Hello, are there any statistics on the inference time for the experiments in 'Open Domain Question Answering over Tables via Dense Retrieval'?
-
## 🐛 Bug
Unexpectedly, the `opacus_cifar10` benchmark runs training successfully with our benchmarking script, but it errors out when run with PyTorch's own benchmarking script.
```bash
# Command for running …
-
## 🐛 Bug
Running the upstreamed benchmarking scripts with the following command results in an unexpected error.
```bash
python xla/benchmarks/experiment_runner.py \
--suite-name torchbe…
-
## 🐛 Bug
Training for `hf_GPT2` and `hf_GPT2_large` fails to run on Dynamo. See the error below:
```bash
python xla/benchmarks/experiment_runner.py \
--suite-name torchbench --accelerator cuda…
-
Rather than re-typing everything, I'm simply providing a link to my issue in the `insanely-fast-whisper` repository, which asks that the same type of information be put in the README:
[Se…
-
There is a bug in the seed checker:
On this line we see that
https://github.com/mlcommons/logging/blob/9ede9c6f2d1c8e6c02b6442ee15c64b29c8ecebb/mlperf_logging/package_checker/package_checker.py#L1…
-
## 🐛 Bug
After the `Background_Matting` model is converted to `bfloat16`, running it (see the command below) fails with the following error:
```bash
python xla/benchmarks/experiment_runner.py …
-
## 🐛 Bug
I noticed that when I execute some code (see further below) on a TPU VM v3-8 (inside a Python 3.10.12 venv with torch 2.1.2+cu121 and torch_xla 2.1.0), uncommenting each time either the `xm.xrt…
-
If a pretrained model is used, maintain a list of approved checkpoints and make sure the submission uses one of them.
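Below is a minimal sketch of how such a check could be automated, assuming approved checkpoints are published as SHA-256 digests; the function names, example path, and placeholder digest are hypothetical and not part of any existing checker.
```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def checkpoint_is_approved(checkpoint: Path, approved_hashes: set[str]) -> bool:
    """Return True only if the checkpoint's digest appears in the approved list."""
    return sha256_of(checkpoint) in approved_hashes


# Example usage (path and digest are placeholders):
# ok = checkpoint_is_approved(Path("pretrained/model.ckpt"), {"<approved sha256>"})
```
Hashing the submitted file, rather than trusting its name, ensures the submission actually uses an approved checkpoint and not a renamed or modified one.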
-
Hi,
I wanted to use the Jetson Nano for inference. While simply testing with the object-detection tutorial, I discovered that the normal TensorFlow models don't work. Do you have any experience or a hint on how to us…