-
It looks like the GPU in Colab is not being engaged. I tried the A100, V100, and T4 GPU and TPU hardware settings in Colab. Command:
`python train.py spacetimeformer mnist --embed_method spatio-temporal …
-
Getting this when using `blip = true`:
```
ERROR:root:Exception during BLIP captioning
Traceback (most recent call last):
File "/content/gdrive/MyDrive/captionr/captionr/captionr_class.py", line …
-
In the inner loop of FlashAttention-2, each update of O requires the corresponding block of V. I adopted a different implementation approach: for each block of Q, after calculating the complete attention scor…
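For contrast with the approach described above, here is a minimal NumPy sketch of the standard FlashAttention-2 inner loop, where each K/V block updates a running max, normalizer, and unnormalized output per Q block (block sizes and names are illustrative, not the library's API):

```python
import numpy as np

def flash_attention(Q, K, V, block_q=2, block_k=2):
    """Blockwise attention with online softmax (FlashAttention-2 style).

    For each Q block, iterate over K/V blocks, maintaining a running
    row max (m), running normalizer (l), and unnormalized output (Oi);
    each K/V block loaded updates Oi once, which is the per-block V
    dependency mentioned above.
    """
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q, dtype=np.float64)
    for qs in range(0, n, block_q):
        Qi = Q[qs:qs + block_q]
        m = np.full(Qi.shape[0], -np.inf)   # running row-wise max
        l = np.zeros(Qi.shape[0])           # running softmax denominator
        Oi = np.zeros((Qi.shape[0], d))     # unnormalized output accumulator
        for ks in range(0, n, block_k):
            Kj, Vj = K[ks:ks + block_k], V[ks:ks + block_k]
            S = Qi @ Kj.T * scale
            m_new = np.maximum(m, S.max(axis=1))
            P = np.exp(S - m_new[:, None])
            alpha = np.exp(m - m_new)       # rescale factor for old state
            l = alpha * l + P.sum(axis=1)
            Oi = alpha[:, None] * Oi + P @ Vj
            m = m_new
        O[qs:qs + block_q] = Oi / l[:, None]
    return O
```

The result matches a plain softmax-attention computation, so it can serve as a reference point when checking an alternative blocking scheme.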
-
### Your current environment
```text
# Using pip install vllm
vllm==v0.5.1
```
### 🐛 Describe the bug
```text
# My python script to test long text
def run_Mixtral():
    tokenizer = A…
-
```
>> dbnames = {'ibug'};
>> train_model
Attention: if an error occurs, please ensure you used the correct version of parallel initialization.
Starting parallel pool (parpool) using the 'local' profile .…
haipz updated 7 years ago
-
**Describe the bug**
Local cache is not working in my CI pipeline, even though I have `skipLocalCache: false` in my config.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a lage.con…
-
Hi, I'm getting this error and I can't figure it out:
[UltraSinger] Creating C:\song\JENNIE - You & Me\JENNIE - You & Me.txt from transcription.
[UltraSinger] Calculating silence parts for linebreak…
-
python3 train.py --seed 1 --bS 16 --accumulate_gradients 2 --bert_type_abb uS --fine_tune --lr 0.001 --lr_bert 0.00001 --max_seq_leng 222 --do_train
BERT-type: uncased_L-12_H-768_A-12
Batch_…
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…
-
Code to reproduce:
```python
import trl
from unsloth import FastLanguageModel
import torch
from tqdm import tqdm
from transformers import AutoTokenizer
from datasets import load_dataset
fr…