-
# Overview
The goal is to add support for efficient batch processing of inputs to the MLX-VLM library. This will allow users to process multiple images and text prompts simultaneously to generate cor…
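As a sketch of what batched input preparation might look like, the snippet below left-pads variable-length token sequences to a common length so a batch can be stacked for a single forward pass. This is illustrative only; `pad_batch`, `pad_id`, and the token lists are hypothetical names and not part of the MLX-VLM API.

```python
# Hypothetical sketch: left-pad a batch of tokenized prompts so they can
# be stacked and decoded together. Not actual MLX-VLM code.

def pad_batch(token_lists, pad_id=0):
    """Left-pad variable-length token sequences and build an attention mask."""
    max_len = max(len(t) for t in token_lists)
    padded, mask = [], []
    for toks in token_lists:
        n_pad = max_len - len(toks)
        # pad on the left so generation continues from the real tokens
        padded.append([pad_id] * n_pad + list(toks))
        # 0 marks padding positions, 1 marks real tokens
        mask.append([0] * n_pad + [1] * len(toks))
    return padded, mask
```

Left-padding (rather than right-padding) keeps the most recent tokens aligned at the end of every row, which is the usual choice for batched autoregressive decoding.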
-
To allow multi-tenant inference:
- [ ] Explore vLLM/HuggingFace TGI
- [ ] Fallback: implement a baseline FastAPI service with batch processing
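One possible shape for the FastAPI fallback is a micro-batcher that queues incoming requests and has a worker drain them in groups, so concurrent tenants share a single batched model call. The sketch below is stdlib-only; `MicroBatcher` and `process_batch` are hypothetical names, not APIs from vLLM, TGI, or FastAPI.

```python
import asyncio

# Hypothetical micro-batcher: requests are queued, and a worker drains the
# queue in groups of up to `max_batch`, so concurrent callers share one
# batched model invocation.

class MicroBatcher:
    def __init__(self, process_batch, max_batch=8):
        self.process_batch = process_batch  # fn: list of inputs -> list of outputs
        self.max_batch = max_batch
        self.queue = asyncio.Queue()

    async def submit(self, item):
        # Each caller gets a future resolved when its batch completes.
        fut = asyncio.get_running_loop().create_future()
        await self.queue.put((item, fut))
        return await fut

    async def worker(self):
        while True:
            # Block for the first item, then opportunistically grab more.
            item, fut = await self.queue.get()
            batch = [(item, fut)]
            while len(batch) < self.max_batch and not self.queue.empty():
                batch.append(self.queue.get_nowait())
            outputs = self.process_batch([i for i, _ in batch])
            for (_, f), out in zip(batch, outputs):
                f.set_result(out)
```

In a FastAPI endpoint, each request handler would `await batcher.submit(payload)`, and a single background task would run `worker()` for the life of the app.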
-
Output:
```
Benchmarking instance: batch_processing
Traceback (most recent call last):
  File "/local/scratch/a/peng372/github/gdplib/benchmark.py", line 109, in
    benchmark(model, strategy, …
```
-
using teams 4.0.1
I ran a script and generated 1000 orphaned items (in an item set)
**Question 1**
Is there a way to create an item in a team via the API?
If not (my case),
I try to move…
-
Hi there,
I am using version 1.2.4 of SPCal, and when doing batch processing of 8 samples the software crashes and closes itself. I was previously using version 1.1.11 and batch processing worked …
-
v. 2.41.5 (and recent previous versions)
**Describe the bug**
File Batch processing not executing Event Commands
**Expected behavior**
Event commands should be executed during File Batch processing
…
-
I hope a batch processing feature can be added.
-
Is there no batch processing of folders? Only single images?
-
Hi, thanks for your attention.
I was using main.exe to batch process my *.mp3 files.
This is my command:
```
D:\Whisper\cli>main.exe -m D:\Whisper\medium\ggml-medium.en.bin -gpu -otxt…
```
-
Currently, handling of invalid messages with the available DLQ only works if the strategy has a direct processing chain and does not include batching. Consider the following:
```
0) consume ->…
```
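One way to picture the problem: when a whole batch fails, the consumer cannot tell which message was invalid, so dead-lettering the entire batch would discard good messages. A common workaround is to fall back to per-message processing and route only the offending messages to the DLQ. The sketch below is not tied to any particular framework; `process_with_dlq` and its callbacks are hypothetical names.

```python
# Illustrative sketch: try the fast batched path first; on failure, retry
# each message individually so only truly invalid ones are dead-lettered.

def process_with_dlq(messages, process_batch, process_one, dlq):
    try:
        process_batch(messages)
        return
    except Exception:
        pass  # batch failed: we don't know which message is at fault
    for msg in messages:
        try:
            process_one(msg)
        except Exception:
            dlq.append(msg)  # only the offending message goes to the DLQ
```

Note the at-least-once caveat: on the fallback path, messages that already succeeded inside the failed batch attempt may be processed again, so the per-message handler needs to be idempotent.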