-
Hi! Thanks for your work!
I run batch inference (~1000 images) on a V100 GPU.
At first, inference is quick (~1s per image). However, for later images it becomes slower and slower (~30s for the 300th image).
The …
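The snippet is truncated, so the actual cause isn't shown, but a common reason inference slows down progressively is retaining a reference to every per-image result (in PyTorch, keeping GPU tensors and their autograd history alive across iterations). A minimal pure-Python sketch of the bounded-memory batching pattern; `run_model` is a hypothetical per-batch inference function:

```python
def batched(items, batch_size):
    """Yield fixed-size chunks so per-iteration memory stays bounded."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def run_inference(images, batch_size, run_model):
    # In PyTorch, wrap the model call in `with torch.no_grad():` and
    # move outputs off the GPU (`.detach().cpu()`) before storing them;
    # retaining live GPU tensors or autograd graphs across iterations
    # is a classic cause of inference that slows down over time.
    results = []
    for batch in batched(images, batch_size):
        outputs = run_model(batch)  # hypothetical per-batch inference
        results.extend(outputs)
    return results
```

For example, `run_inference(list(range(10)), 4, lambda b: [x * 2 for x in b])` processes the inputs in chunks of 4, 4, and 2.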
-
I have set up my own instance on Ubuntu 22.04.5 LTS, following the batch setup instructions. While I can run tests interactively via the web interface without any issues, I encounter a problem when su…
-
### Bug Description
Yes: the [documentation](https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings/#batch-size) claims the default is 10,
but it [defaults to 100 in code](https://github.com/ru…
-
The batch builder in the block-producer rework should take its inputs as `Arc` to avoid cloning them.
https://github.com/0xPolygonMiden/miden-node/pull/530#discussion_r1819509446
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
* #12641 Contains a _very much in progress_ draft implementation of this.
* https://github.com/bep/hugotestingreact contains some React JSX tests that use the above PR.
Naming is hard, but the en…
-
Your paper states that you trained on 8 V100s with a batch size of 16. Does this mean each GPU had a batch of 2? I am trying to get a sense of how much VRAM this model uses for someone with less …
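For reference, the usual convention in data-parallel training is that the reported batch size is global, so each of the 8 GPUs would see 16 / 8 = 2 samples. That convention is an assumption here (some papers report per-GPU batch size instead), but the arithmetic is:

```python
def per_device_batch(global_batch, num_devices):
    """Per-GPU batch size under the data-parallel convention that the
    reported batch size is global, split evenly across devices."""
    if global_batch % num_devices != 0:
        raise ValueError("global batch must divide evenly across devices")
    return global_batch // num_devices

print(per_device_batch(16, 8))  # -> 2
```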
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Bug description
I use DLC with the PyTorch engine, and when I set the batch size for training in `pytorch_config.ya…
-
Thanks for your awesome work and selfless open-sourcing!
Could you please provide code for batch inference on simple tasks like video captioning? It would be very useful for testing.
Sincerely hope f…
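Absent an official script, batch captioning can be sketched generically: collate N videos, run one forward pass per batch, and map the outputs back to their ids. Everything below (`load_video`, `caption_model`) is a hypothetical stand-in for the repository's actual API:

```python
def caption_in_batches(video_ids, caption_model, load_video, batch_size=4):
    """Return {video_id: caption}, processing videos batch_size at a time.

    `caption_model` (takes a list of loaded videos, returns a list of
    captions) and `load_video` are hypothetical stand-ins for the repo's
    real inference and loading functions.
    """
    captions = {}
    for i in range(0, len(video_ids), batch_size):
        batch_ids = video_ids[i:i + batch_size]
        frames = [load_video(vid) for vid in batch_ids]
        outputs = caption_model(frames)  # one forward pass per batch
        captions.update(zip(batch_ids, outputs))
    return captions
```

Batching the forward pass is what gives the throughput win over captioning one video at a time; the id-to-caption mapping just keeps results attributable afterward.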
-
## What problem does this solve or what need does it fill?
There is no handy and efficient way to spawn many similar entities when using [ChildBuilder](https://dev-docs.bevyengine.org/bevy/hierarch…