-
If I set LoRA training to CPU in Kohya, will it proceed?
-
Hello everyone,
I'm working on training a YOLO model for object detection and plan to use a Google Coral Dev Board for inference. As the Coral documentation recommends, the model should be in the T…
-
My CPU gets maxed out as soon as I start training, but my GPU isn't used at all. I've installed the NVIDIA drivers, but I can't figure out what I'm doing wrong.
The training takes a ridiculous amou…
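If the trainer is PyTorch-based (an assumption, the post doesn't say which framework it uses), a quick sanity check is whether PyTorch can see the GPU at all. A CPU-only wheel of PyTorch is a common cause of exactly this symptom:

```python
import torch

# Minimal sketch: if the first line prints False, training silently runs on
# the CPU even with NVIDIA drivers installed. torch.version.cuda is None on
# CPU-only builds of PyTorch, which would explain the unused GPU.
print(torch.cuda.is_available())   # True only if PyTorch can reach the GPU
print(torch.version.cuda)          # None on a CPU-only build
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```

If `torch.version.cuda` is `None`, reinstalling a CUDA-enabled PyTorch build is usually the fix.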
-
There are parts of init training, while it is loading the model to be used, that use too much VRAM due to the model size but would otherwise work for training once initialization is done. For example w…
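One generic way to dodge a VRAM spike during initialization (a hedged sketch in plain PyTorch, not tied to any specific trainer) is to map the checkpoint to CPU first so the weights sit in host RAM, then move the model once after loading:

```python
import io
import torch

# Hedged sketch: loading checkpoint weights straight onto the GPU can hold
# two copies of a large model at once during init. Mapping to CPU first keeps
# the checkpoint tensors in host RAM until a single final move.
model = torch.nn.Linear(8, 8)                  # stand-in for a large model
buf = io.BytesIO()
torch.save(model.state_dict(), buf)            # stand-in for a checkpoint file
buf.seek(0)

state = torch.load(buf, map_location="cpu")    # weights land in host RAM
model.load_state_dict(state)
# model.to("cuda")  # one move after loading, if a GPU is actually available
print(next(model.parameters()).device)         # cpu
```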
-
Dear Kun Wu,
I hope this message finds you well.
I have a few questions regarding your paper, "Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture." Whe…
-
Amazing work and a fantastic resource, thanks for sharing - this should jump-start the use of LLMs on low-resource devices.
Quick question - is there a guide to convert existing models to bitnet…
-
- PyTorch-Forecasting version: 1.0.0
- PyTorch version: 2.0.1+cpu
- Python version: 3.9
- Operating System: Windows 11
### Expected behavior
I executed the code `Baseline().predict(val_dataloader…`
-
If I set **use_gpu=false**, or CUDA is not available, I get the following KeyError:
```
Traceback (most recent call last):
  File "D:\efficientdet\trainer.py", line 28, in train
    gtf.Train_Da…
```
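A hedged sketch of the fallback pattern that avoids this class of error (plain PyTorch, not the actual efficientdet/Monk API): choose the device up front and degrade to CPU instead of raising when CUDA is unavailable.

```python
import torch

# Illustrative only: `use_gpu` mirrors the flag from the post. Selecting the
# device with an explicit CPU fallback means the rest of the pipeline never
# needs a CUDA-only code path.
use_gpu = False
device = torch.device("cuda" if use_gpu and torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)       # stand-in model
x = torch.randn(1, 4, device=device)
print(model(x).shape)                          # torch.Size([1, 2])
```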
-
Hello @siddk, I loaded the "prism-dinosiglip-224px+7b" weights you uploaded on huggingface and tried to decode predicted texts from CausalLMOutputWithPast in the training code as follows:
```
…
```
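For reference, decoding text from a `CausalLMOutputWithPast` usually amounts to an argmax over the vocabulary dimension of `output.logits`, followed by `tokenizer.batch_decode(pred_ids, skip_special_tokens=True)`. The toy below mimics that pattern with a stand-in vocabulary; all names are illustrative, not from the Prismatic repo:

```python
# Toy sketch of greedy decoding from causal-LM logits. In real code, `logits`
# would be output.logits (batch, seq_len, vocab) and the join would be
# tokenizer.batch_decode on the argmax token ids.
vocab = ["<pad>", "hello", "world", "!"]   # stand-in vocabulary
logits = [                                 # (seq_len, vocab) toy values
    [0.1, 5.0, 0.2, 0.0],                  # position 0 -> "hello"
    [0.0, 0.1, 5.0, 0.3],                  # position 1 -> "world"
    [0.2, 0.0, 0.1, 5.0],                  # position 2 -> "!"
]
pred_ids = [max(range(len(row)), key=row.__getitem__) for row in logits]
text = " ".join(vocab[i] for i in pred_ids)
print(text)                                # hello world !
```

Note that for a causal LM the logit at position *t* predicts token *t + 1*, so the decoded sequence is shifted by one relative to the input.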
-
Hi! Thanks for your great work. Could you provide a script for QLoRA fine-tuning?