-
Does bolt support batch inference? Could I run inference on 2 or more sentences at the same time?
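I don't know bolt's exact API, but if it only exposes single-sentence inference you can still group sentences yourself and swap in a true batched call later. A minimal, framework-agnostic sketch — `predict_one` is a hypothetical stand-in for whatever bolt call you actually use:

```python
from typing import Callable, Iterable, List


def batched(items: List[str], batch_size: int) -> Iterable[List[str]]:
    """Yield successive chunks of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]


def infer_sentences(sentences: List[str],
                    predict_one: Callable[[str], str],
                    batch_size: int = 2) -> List[str]:
    """Run predict_one over sentences, batch_size at a time.

    predict_one is a placeholder for the real inference call; if the
    library exposes a batched entry point, replace the inner loop with
    a single call on the whole chunk.
    """
    results: List[str] = []
    for batch in batched(sentences, batch_size):
        results.extend(predict_one(s) for s in batch)
    return results
```

For example, `infer_sentences(["a", "b", "c"], str.upper, batch_size=2)` processes two sentences per chunk.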
-
Hi, I have a problem using Vicuna-13b-v1.3 for **inference with multiple GPUs**. Could anyone please provide an example of code for multi-GPU inference without the CLI? On the other hand, …
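For reference, one common way to run a Hugging Face checkpoint like Vicuna-13B across several GPUs without any CLI is `device_map="auto"`, which lets accelerate shard the layers over the visible devices. This is only a hedged sketch, not the FastChat authors' method; it assumes `transformers` and `accelerate` are installed, and the imports are local so the file can be read without them:

```python
def generate_multi_gpu(prompt: str,
                       model_path: str = "lmsys/vicuna-13b-v1.3",
                       max_new_tokens: int = 64) -> str:
    """Sketch: shard the model across all visible GPUs and generate.

    device_map="auto" asks accelerate to place layers on the available
    GPUs; tokenized inputs go to the device of the first shard.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        torch_dtype=torch.float16,
        device_map="auto",  # accelerate splits layers across GPUs
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Check GPU memory with `nvidia-smi` while loading to confirm the shards actually landed on more than one device.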
-
Thank you for this model. I am trying to train it on the totaltextdata set. It trains on some batches but then suddenly stops, showing this (this is the important part of the error):
```
2024-10-30 10:49…
-
Thanks for your interesting work and for sharing the code.
In the README, you only provide examples of how to generate captions for one image at a time (batch size = 1). Could you (@Yushi-Hu) expl…
-
I tried batch inference in XTTS: I pad each text sequence to the max length in the batch and also add the attention mask for it. But for the shorter sequences,
I am getting some random…
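For what it's worth, the pad-and-mask step itself is easy to sanity-check in isolation before blaming the model. A minimal plain-Python sketch, with no XTTS specifics — `pad_id=0` is an assumption, use the model's actual pad token id:

```python
from typing import List, Tuple


def pad_batch(token_seqs: List[List[int]],
              pad_id: int = 0) -> Tuple[List[List[int]], List[List[int]]]:
    """Right-pad every sequence to the batch max length and build the
    matching attention mask (1 = real token, 0 = padding).

    If the model was trained with left padding (common for decoder-only
    stacks), flip the concatenation order for both lists.
    """
    max_len = max(len(seq) for seq in token_seqs)
    padded, masks = [], []
    for seq in token_seqs:
        pad = [pad_id] * (max_len - len(seq))
        padded.append(seq + pad)
        masks.append([1] * len(seq) + [0] * len(pad))
    return padded, masks
```

In my experience, random output for the shorter sequences usually means either the mask is never applied inside attention, or the padding side (left vs right) does not match what the model saw in training.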
-
Hi,
I am running train.py, and this line of code complains that the input is a directory: `complex_names_all = read_strings_from_txt(self.split_path)`
The original code snippet, read_strings_from_t…
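I can't see the full snippet, but a defensive version of such a helper usually looks like the following. The name and behavior are assumed from the call site, so adjust it to the repo's actual signature:

```python
from pathlib import Path
from typing import List


def read_strings_from_txt(path: str) -> List[str]:
    """Read one string per non-empty line from a text file.

    Fails with a clear message when the path is a directory, which
    matches the complaint seen when split_path points at a folder
    instead of the split .txt file.
    """
    p = Path(path)
    if p.is_dir():
        raise IsADirectoryError(
            f"{p} is a directory; expected a .txt split file "
            f"(found: {sorted(q.name for q in p.glob('*.txt'))})"
        )
    with p.open("r", encoding="utf-8") as handle:
        return [line.strip() for line in handle if line.strip()]
```

So the first thing to check is what `self.split_path` actually resolves to; it likely needs to name the split file itself, not its parent directory.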
-
The spec-infer works well for batch sizes 1, 2, 4, 8, and 16, but when I change the batch size to 32 it fails with "stack smashing detected":
```
+ ncpus=16
+ ngpus=1
+ fsize=30000
+ zsize=60000
+ max_se…
-
Hi There,
I need to infer with a batch size of 2. I have exported the model to ONNX format using the command:
`yolo export model=best.pt format=onnx simplify=True opset=11 dynamic=True`
Th…
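Since the export used `dynamic=True`, the ONNX input's batch axis should already accept any size; the remaining step is usually stacking the preprocessed images into one NCHW tensor. A numpy-only sketch of that stacking — the 640 input size and the divide-by-255 normalization are assumptions based on common YOLO defaults, not taken from this issue:

```python
import numpy as np


def to_batch(images, size=640):
    """Stack HWC uint8 images (already resized/letterboxed to
    size x size) into a single NCHW float32 batch in [0, 1], the
    layout a dynamic-batch ONNX YOLO export typically expects.
    """
    batch = []
    for img in images:
        assert img.shape == (size, size, 3), "resize/letterbox first"
        chw = img.transpose(2, 0, 1).astype(np.float32) / 255.0
        batch.append(chw)
    return np.stack(batch, axis=0)  # shape: (N, 3, size, size)
```

The result can then be fed to an `onnxruntime.InferenceSession` in one call; check `session.get_inputs()[0].name` for the actual input name rather than assuming it.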
-
Hi CLIP authors,
so I was trying to run CLIP using the code below, but when I do this in a batch, I have trouble with the model.encode_text method, as it shows the error message below
…
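In case it helps: OpenAI's `clip.tokenize` accepts a list of strings and returns one `(batch, context_length)` tensor, so `encode_text` can consume the whole batch directly; the usual mistake is tokenizing strings one by one and stacking them oddly. A hedged sketch (assumes the `clip` and `torch` packages are installed; imports are kept local so the snippet can be read without them):

```python
def encode_texts(texts, model_name="ViT-B/32", device="cpu"):
    """Batch-encode a list of strings with CLIP's text tower and
    L2-normalize the features for cosine-similarity use."""
    import clip
    import torch

    model, _ = clip.load(model_name, device=device)
    tokens = clip.tokenize(texts).to(device)  # shape: (len(texts), 77)
    with torch.no_grad():
        features = model.encode_text(tokens)  # one row per input text
    return features / features.norm(dim=-1, keepdim=True)
```

If the error persists with this shape, posting the full traceback would make it easier to tell whether the problem is the tokenization or the device placement.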
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
…