-
In axolotl, there's a config parameter you can set:
`train_on_inputs: false`
It changes the way the loss is calculated when training a LoRA -> i.e. it ignores the loss on input tokens and only tra…
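A minimal sketch of the effect being described: label positions covering the prompt are set to `-100`, the default `ignore_index` of PyTorch's `CrossEntropyLoss`, so only completion tokens contribute to the loss. The helper name and token ids below are illustrative, not axolotl's actual internals.

```python
IGNORE_INDEX = -100  # default ignore_index of torch.nn.CrossEntropyLoss


def mask_input_labels(input_ids, prompt_len):
    """Return labels with the first `prompt_len` (prompt) tokens masked out,
    so the loss is computed only on the completion tokens."""
    labels = list(input_ids)
    for i in range(min(prompt_len, len(labels))):
        labels[i] = IGNORE_INDEX
    return labels


# Example: a 4-token prompt followed by a 3-token completion.
input_ids = [101, 2009, 2003, 1037, 7099, 3430, 102]
labels = mask_input_labels(input_ids, prompt_len=4)
print(labels)  # [-100, -100, -100, -100, 7099, 3430, 102]
```

With `train_on_inputs: true`, the labels would instead equal the full `input_ids`, and the model would also be penalized for its predictions over the prompt.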
-
**Hello! I used auto-gptq to quantize the `llama-2-7b-instruct` model to `llama-2-7b-instruct-4bit-128g`, and I tried to compare the speed between them, but the result is very strange. The storage of the qu…
-
This issue will be kept open for op benchmark result posting for Mobile OS (Android & iOS).
For correct output values, refer to the output in [Desktop OS op benchmark results](./3).
-
Hi danjust,
I am an NTU student in Taiwan. I have recently been studying NN model performance prediction, and I found your paper. I think it's a good approach to predicting inference and training time.
I guess the first step …
-
## Issue
In order to ensure our benchmark fulfills the goals of measuring training-time domain authorization, we need to understand what that means!
## ToDo:
- [x] @janweh develop initial conceptual …
-
Wav2letter doesn't handle clips longer than 30 seconds properly. The current benchmark filters such audio files out of the test/train set.
Maybe VAD can be used to break a longer audio file and concatenate the t…
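The VAD idea above could be sketched as follows. This is a deliberately simple energy-threshold stand-in for a real voice-activity detector (a production system would use something like WebRTC VAD); the function name, frame size, and threshold are all illustrative assumptions.

```python
def split_on_silence(samples, frame_size=160, threshold=0.01):
    """Return (start, end) sample ranges of voiced segments, using
    per-frame mean energy as a crude stand-in for a real VAD."""
    segments = []
    start = None
    for i in range(0, len(samples), frame_size):
        frame = samples[i:i + frame_size]
        energy = sum(s * s for s in frame) / max(len(frame), 1)
        if energy >= threshold:
            if start is None:
                start = i  # voiced region begins
        elif start is not None:
            segments.append((start, i))  # voiced region ends at silence
            start = None
    if start is not None:
        segments.append((start, len(samples)))
    return segments


# Example: silence, then 2 frames of "speech", then silence.
samples = [0.0] * 160 + [0.5] * 320 + [0.0] * 160
print(split_on_silence(samples))  # [(160, 480)]
```

Each returned range could then be transcribed separately and the transcripts concatenated, which is the workaround the issue suggests for the 30-second limit.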
-
Hi, thanks for sharing your great work.
I've finished setting up my environment using the repo you created.
I would like to ask whether you have any demo code that runs inference with the OVAD model.
Thank…
-
## 🐛 Bug
## To Reproduce
Steps to reproduce the behavior:
1. I want to train the model with my own dataset in vac_cocostyle format.
1. I found that my .json file has the same structure as the o…
-
## Motivation
`TorchBench` is a collection of open-source benchmarks used to evaluate PyTorch performance. It provides a standardized API for benchmark drivers, both for evaluation (eager/jit) and tr…
-
```
./inductor_xpu_test.sh torchbench amp_fp16 inference accuracy xpu 0 static 1 0 torchrec_dlrm
```
```
Traceback (most recent call last):
File "/home/jovyan/pytorch/benchmarks/dynamo/torchb…