-
I am running several evaluations. Most have succeeded, but the last process raises an error:
```
Inner exception:
File "/mnt/data/conda/envs/lora/lib/python3.10/threading.py", line 973, in _…
-
Create task groups so grouped tasks run together. Every task that has more than one config should be grouped. Some tasks have many different configurations, and we have to write them indi…
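A minimal sketch of the grouping idea above: collect task variants that share a base name, and only treat bases with more than one configuration as groups. The naming scheme `"<base>-<config>"` and the function name are assumptions for illustration, not the harness's actual API.

```python
# Group task names by base name; only multi-config bases become groups.
# Assumes hypothetical task names of the form "<base>-<config>".
from collections import defaultdict

def group_tasks(task_names):
    """Return {base_name: [variants]} for bases with more than one config."""
    groups = defaultdict(list)
    for name in task_names:
        base, _, _config = name.rpartition("-")
        # Tasks without a "-" have no config suffix; keep them under their own name.
        groups[base or name].append(name)
    return {base: names for base, names in groups.items() if len(names) > 1}

print(group_tasks(["ceval-valid-physics", "ceval-valid-law", "arc_easy"]))
```

Tasks with a single configuration (like `arc_easy` here) fall through and would still be written individually.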
-
### Describe the bug
Why, when I evaluate a model, are so many tasks created, with each task loading the model once, so that the GPU eventually runs out of memory?
### Environment information
```
{'CUDA available': True,
 'CUDA_HOME': '/usr/local/cuda-11.7',
 'GCC': 'gcc (GCC) 10.2.0',
 'GPU 0,1,2,3': 'NVID…
```
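The fix for the out-of-memory pattern described above is to load the model once and share it across all tasks, rather than re-loading per task. This is only a sketch of the structure; `load_model` and `run_task` are hypothetical stand-ins for the harness's real loading and evaluation routines.

```python
def load_model(path):
    # Stand-in for the real (expensive) model load; in practice this is
    # what allocates GPU memory, so it should happen exactly once.
    return {"path": path}

def run_task(model, task):
    # Stand-in for evaluating one task with an already-loaded model.
    return f"ran {task} with {model['path']}"

def evaluate_all(task_names, model_path):
    model = load_model(model_path)  # single load, single copy of the weights
    return {t: run_task(model, t) for t in task_names}

print(evaluate_all(["task_a", "task_b"], "my-model"))
```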
-
Hi, thanks for the great collection of datasets.
But it seems that not all datasets in it are correctly preprocessed. MultiRC requires the paragraph, question, and individual answers to be concatenated together f…
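A hedged sketch of the MultiRC format described above: each candidate answer is scored individually against the paragraph and question concatenated together. The field names mirror the SuperGLUE MultiRC schema, but the exact template here is an assumption, not the repository's actual preprocessing.

```python
def multirc_inputs(example):
    """Yield one (text, label) pair per individual candidate answer.

    Assumed schema: {"paragraph": str, "question": str,
                     "answers": [{"text": str, "label": 0 or 1}, ...]}
    """
    for answer in example["answers"]:
        # Concatenate paragraph + question + one answer into a single input.
        text = (f"{example['paragraph']}\n"
                f"Question: {example['question']}\n"
                f"Answer: {answer['text']}")
        yield text, answer["label"]
```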
-
I've reproduced the whole StackLLaMA pipeline using the changes in #398 #399 #400
Here is the [corresponding wandb report](https://wandb.ai/mnoukhov/trl/reports/StackLLaMA-Repro--Vmlldzo0NTM1MDk2)…
-
Hello, I want to evaluate some 7B models on a set of tasks using multiple GPUs on a cluster. Right now I use the [master branch's latest commit ](https://github.com/EleutherAI/lm-evaluation-harness/commit/b281b092…
-
Hi, I'd like to run a 65B LLaMA with LOMO; what config should I use to run the training on an 8*RTX 3090 machine?
It would be very nice if you could add config/args_lomo.yaml and config/ds_config.json for 6…
-
```shell
python main.py \
    --model hf-causal-experimental \
    --model_args pretrained=../LLaMA-Efficient-Tuning/model/Baichuan-13B-Instruction \
    --tasks Ceval-valid-* \
    --device cuda:0…
```
-
Hi
How can I train the model when I provide multiple datasets? It concatenates the datasets, but each batch must contain examples from only one task. How do you shuffle such batches?
Currently I get error o…
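One common way to satisfy the constraint above is to shuffle within each dataset, cut each into batches, and then shuffle at the batch level so tasks interleave while each batch stays task-homogeneous. This is a minimal sketch under assumed inputs (a dict of task name to example list), not any library's built-in sampler.

```python
import random

def task_batches(datasets, batch_size, seed=0):
    """Build shuffled batches where every batch holds a single task.

    datasets: dict mapping task_name -> list of examples.
    Returns a list of batches; each batch is a list of (task, example) pairs.
    """
    rng = random.Random(seed)
    batches = []
    for task, examples in datasets.items():
        idx = list(range(len(examples)))
        rng.shuffle(idx)  # shuffle examples within this task
        for i in range(0, len(idx), batch_size):
            batches.append([(task, examples[j]) for j in idx[i:i + batch_size]])
    rng.shuffle(batches)  # interleave tasks at the batch level
    return batches
```

Each batch sees a single task's examples, while the final shuffle mixes which task each training step draws from.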
-
:/