-
I deploy a local Ollama instance in my docker-compose setup and add the Gramps Web environment variables:
GRAMPSWEB_LLM_BASE_URL: "http://ollama:11434/v1"
GRAMPSWEB_LLM_MODEL: FacebookAI/xlm-roberta-base
OPENAI…
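For context, a minimal docker-compose sketch of such a setup might look like the following. The service names and image tags are assumptions for illustration, not taken from the original post; only the two `GRAMPSWEB_LLM_*` variables come from it.

```yaml
services:
  ollama:
    image: ollama/ollama          # assumed image tag
    ports:
      - "11434:11434"

  grampsweb:
    image: grampsweb/grampsweb    # assumed image name
    environment:
      GRAMPSWEB_LLM_BASE_URL: "http://ollama:11434/v1"
      GRAMPSWEB_LLM_MODEL: "FacebookAI/xlm-roberta-base"
    depends_on:
      - ollama
```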
-
Hey~ sorry that my problem is a bit specific: my local machine has dual RTX 4090 GPUs. After I successfully installed and deployed fluxgym, I got an error after deb…
-
Should support for running the embedding models on multiple GPUs be prioritized? Here are the pros and cons as I see them (not necessarily equally weighted in importance):
## Pros
- Allows users…
-
### **_Description_**
Hi @lengstrom, thanks for your wonderful work!
My goal is to train a ResNet18 on ImageNet on my server using a multi-GPU training strategy to speed up the training proc…
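As background, the core idea behind multi-GPU data-parallel training (which frameworks such as PyTorch's `DistributedDataParallel` implement for you) can be sketched in plain Python: split each batch into shards, compute per-worker gradients independently, then average them before the weight update. This is a conceptual toy sketch only; none of the names below come from the repository being discussed.

```python
# Toy data-parallel gradient averaging for a scalar model y = w * x
# with squared-error loss, so dL/dw = 2 * x * (w*x - y).
# All names here are illustrative, not from any real codebase.

def shard_grad(w, shard):
    """Average gradient of the squared error over one shard of the batch."""
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

def data_parallel_step(w, batch, num_workers, lr=0.1):
    """Split the batch across 'GPUs', compute gradients independently,
    then average them -- the essence of DDP's all-reduce step."""
    shards = [batch[i::num_workers] for i in range(num_workers)]
    grads = [shard_grad(w, shard) for shard in shards if shard]
    avg_grad = sum(grads) / len(grads)
    return w - lr * avg_grad

# Toy dataset generated from the true weight w = 3.
data = [(x, 3 * x) for x in [1.0, 2.0, 3.0, 4.0]]
w = 0.0
for _ in range(100):
    w = data_parallel_step(w, data, num_workers=2)
print(round(w, 3))  # converges toward 3.0
```

In a real setting each shard lives on its own device and the gradient average is performed by a collective all-reduce, but the arithmetic is the same.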
-
Do you want to keep your inference going while those long training jobs are running? The multi-stream, multi-model, multi-GPU version of TrainYourOwnYOLO ([**now available here**](https://github.com…
-
How can I run the evaluation with a larger batch size to better utilize all my GPUs in the multi-GPU setting?
-
Sorry for the inconvenience, but this would be an awesome way to speed up training. I changed line 22 of code/train.py to gpus=-1, which is the value that tells PyTorch Lightning to use all availa…
-
When I try to use `model = multi_gpu_model(model, gpus=3)` on my data, an error occurs:
> tensorflow.python.framework.errors_impl.InvalidArgumentError: Can't concatenate scalars (use tf.stac…
-
I am using the diffusers-based multi-concept fine-tuning script for the cat and wooden_pot case, but I am not able to reproduce the kind of results shown in the image below.
![image](https://user-imag…
-
1. ~cleaning~
2. ml train/valid/test
   a. output TOP-K results
3. show QA accuracy progress and stop early if accuracy variation goes low
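The early-stopping criterion in step 3 can be sketched as follows: keep a history of per-epoch accuracies and stop once the variation over a recent window drops below a threshold. The window size and threshold below are illustrative assumptions, not values from the plan.

```python
from statistics import pstdev

def should_stop(accuracies, window=5, min_std=0.002):
    """Stop once the last `window` accuracy readings barely vary.
    `window` and `min_std` are illustrative values, not from the plan."""
    if len(accuracies) < window:
        return False
    return pstdev(accuracies[-window:]) < min_std

# Simulated QA accuracy per epoch: it plateaus, so the variation
# over the trailing window eventually goes low and training stops.
history = []
stopped_at = None
for epoch, acc in enumerate(
    [0.60, 0.70, 0.76, 0.80, 0.81, 0.812, 0.811, 0.812, 0.8115, 0.8112]
):
    history.append(acc)
    if should_stop(history):
        stopped_at = epoch
        break
print(stopped_at)  # stops on the plateau, at epoch 8
```

Using a windowed standard deviation rather than a single epoch-to-epoch delta makes the criterion robust to one-off noisy readings.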