-
![image](https://github.com/yongzhuo/ChatGLM2-SFT/assets/3400895/9fe50607-2636-4f3b-ae3f-086ad5b8268c)
Why does it report the above error when running?
-
Hi there! I found your colab coincidentally and wanted to try hallucinating some binders using it for the protein "8DYS" on PDB.
However, when using the recommended settings and initializing with …
-
Hi,
My GPU is an NVIDIA RTX 3080 with 10 GB of CUDA memory.
When I run "sh ./scripts/colorize.real.sh" it fails with CUDA out of memory.
Where can I modify the batch size?
Thanks
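A common workaround when an exact batch-size flag isn't obvious (the option names in this repo's script aren't shown in the snippet, so this is a general sketch, not the script's actual interface): halve the per-device batch and double gradient-accumulation steps, which keeps the effective batch unchanged while roughly halving per-step activation memory.

```python
# Illustrative arithmetic only: `per_device`, `accum_steps`, and `num_gpus`
# are hypothetical names, not options of colorize.real.sh.
def effective_batch(per_device: int, accum_steps: int, num_gpus: int = 1) -> int:
    """Effective (optimizer-step) batch size under gradient accumulation."""
    return per_device * accum_steps * num_gpus

print(effective_batch(4, 1))  # 4
print(effective_batch(2, 2))  # 4 -- same effective batch, less memory per step
```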
-
This may, of course, be a bug within `cellxgene_census`. However, the Python and R APIs should be working similarly enough that if one API is not OOMing and another is OOMing, on the same dataset and …
-
Please, how can I solve this problem?
RuntimeError: CUDA out of memory. Tried to allocate 74.00 MiB (GPU 0; 1.96 GiB total capacity; 1.46 GiB already allocated; 71.50 MiB free; 38.53 MiB cached)
My grap…
-
CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 24.00 GiB total capacity; 22.99 GiB already
allocated; 0 bytes free; 23.20 GiB reserved in total by PyTorch) If reserved memory is >> allocated…
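When reserved memory far exceeds allocated memory, as here, PyTorch's own error message suggests fragmentation. One documented mitigation (the 128 MiB value is just an example, not a recommendation from this thread) is to cap the allocator's split size before launching training:

```shell
# PYTORCH_CUDA_ALLOC_CONF is a real, documented PyTorch env var;
# max_split_size_mb limits block splitting to reduce fragmentation.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

If that is not enough, reducing the batch size remains the more reliable fix.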
-
(chatglm3-finetune) root@g101:/data/ChatGLM3/chatglm3-finetune# python finetune.py --dataset_path ./alpaca --lora_rank 4 --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --max_steps 520…
-
batch_size=4, 3090 Ti with 24 GB of video memory. Why does the code report "CUDA out of memory" after several iterations?
-
Hi, may I ask how much GPU memory is needed to train the model? Is there any way to modify the batch size?
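A rough rule of thumb for the memory question (an assumption I'm adding, not a figure from this thread): full fine-tuning with Adam in fp32 needs about 16 bytes per parameter (4 for weights, 4 for gradients, 8 for the two Adam moment estimates), before counting activations.

```python
def training_memory_gb(num_params: int, bytes_per_param: int = 16) -> float:
    """Rough weight+grad+optimizer-state footprint for fp32 Adam training.

    Ignores activations, so the true requirement is higher and grows
    with batch size and sequence length.
    """
    return num_params * bytes_per_param / 1024**3

# e.g. a hypothetical 1.3B-parameter model:
print(round(training_memory_gb(1_300_000_000), 1))  # ~19.4 GB before activations
```

This is also why lowering the batch size helps: it shrinks only the activation term, which is usually the part you can control without changing the model.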
-
Hi.
I'm trying to train this model on a single P100 with 16 GB memory but seem to be running out of memory with a batch size of 2. Do I need more than 16 GB for this model? How can I reduce the GPU…