-
### System Info
```Shell
- `Accelerate` version: 0.31.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- `accelerate` bash location: /remote-home/xhwang/anaconda3/envs/gloq/bin/accelerat…
-
Hi,
I am curious about how different it is to use multiple MIG instances instead of multiple non-MIG GPUs (such as V100) in terms of parallelism, memory sharing, etc. I didn't receive the same outputs in…
-
Is the only way to choose a different device to change this bit of code in reikna.py?
```
ocldevice = self.api.get_platforms()[0].get_devices()[0]  # Get the first available device
…
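If reikna's OpenCL backend is in use, one way to avoid hard-coding `[0][0]` is to make the platform and device indices parameters. This is only a sketch: `pick_device` is a hypothetical helper, and it assumes an `api` object exposing `get_platforms()` exactly as in the line quoted above.

```python
def pick_device(api, platform_idx=0, device_idx=0):
    """Select an OpenCL device by platform and device index.

    Mirrors the hard-coded get_platforms()[0].get_devices()[0] line,
    but lets the caller choose other indices. `api` is assumed to be
    reikna's cluda.ocl_api() (or any object with the same interface).
    """
    platforms = api.get_platforms()
    if platform_idx >= len(platforms):
        raise IndexError(f"only {len(platforms)} OpenCL platform(s) found")
    devices = platforms[platform_idx].get_devices()
    if device_idx >= len(devices):
        raise IndexError(f"only {len(devices)} device(s) on platform {platform_idx}")
    return devices[device_idx]
```

With reikna this would typically be followed by something like `api.Thread(pick_device(api, 0, 1))` to build a compute thread on the second device of the first platform (assuming such a device exists on your machine).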
-
Thanks to the authors for sharing this work. I have converted the entire GPS-Gaussian network into TensorRT engines with fp16 optimization enabled. However, when testing at 2048x1024 resolution, inference alone takes about 60 ms, which is not the speed real-time inference should have. What could be going wrong?
Here is the trtexec output:
>&&&& RUNNING TensorRT.trtexec [TensorRT v100100] # /home/lisi/pr…
-
Hello, could you tell me how to use yolo-manba-seg.yaml? When I used the command: `python mbyolo_train.py --task train --data ultralytics/cfg/datasets/coco123.yaml --config ultralytics/cfg/models…
-
in load_pretrained_model
model = CambrianLlamaForCausalLM.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 3531, in from_pretrained
) =…
-
I have one computer (Windows 10) with multiple GPUs (e.g. two RTX 2070s). How can I use all of them to render? Vulkan 1.1 supports multi-GPU through device groups!
Thanks very much!
-
Launch script:
#!/bin/bash
#SBATCH --job-name=sft_sql_codes # name
#SBATCH --nodes=1 # nodes
#SBATCH -w wuhan-gpu-[17]
#SBATCH --ntasks-per-node=1 …
-
### Motivation
Currently, device_map="auto" only supports a single-node, multi-GPU setup (https://github.com/huggingface/transformers/issues/24747). If you have access to 8xA100 80GB/40GB, things ar…
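On a single node, a common companion to `device_map="auto"` is the `max_memory` argument of `from_pretrained`, which caps how much each GPU may hold and lets the rest offload to CPU. The helper below only builds that dictionary as a sketch; the model name and memory figures are placeholders, not values from this issue.

```python
def make_max_memory(n_gpus, per_gpu="75GiB", cpu="200GiB"):
    """Build a max_memory mapping for device_map="auto".

    Keys 0..n_gpus-1 cap each GPU; the "cpu" key allows offloading
    whatever does not fit on the GPUs. Values use the human-readable
    size strings that accelerate/transformers accept.
    """
    mapping = {i: per_gpu for i in range(n_gpus)}
    mapping["cpu"] = cpu
    return mapping

# Hypothetical usage (requires transformers + accelerate installed):
# model = AutoModelForCausalLM.from_pretrained(
#     "some/model",
#     device_map="auto",
#     max_memory=make_max_memory(8, per_gpu="75GiB"),
# )
```

Leaving a few GiB of headroom below the physical 80GB per A100 (hence "75GiB") is a common convention, since activations and CUDA context also consume memory.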
-
**The problem of distributed training blocking**
**Steps to Reproduce**
1. Minimal code block
from otx.engine import Engine
engine = Engine(model="yolox_s", data_root="pwd")
engin…