-
**Problem**
The current blake3 crate leaves a lot of single-core performance on the table for message sizes below 8 KiB.
Namely, it doesn't SIMD parallelize hashing for small messages.
As a P…
-
**Is your feature request related to a problem? Please describe.**
By default, Agones batches allocations in 500 ms windows to improve throughput. In some scenarios, that amount of wait time is ove…
-
-
Is this code "optimal" for batched inference and preprocessing?
-
Relevant docs can be found here.
https://nvidia.github.io/TensorRT-LLM/batch_manager.html#get-and-send-callbacks
In-flight batching is the most beneficial feature in CUDA systems for LLM inferenci…
-
So there is the option for generic loop-style batching vs a liquid specific batching maybe even the option to batch using the pegin mechanism for very high-value pegins that we wouldn't have the worki…
-
Commonly used textures (mainly coins, shadows, and particles) should be loaded once per frame instead of once per object.
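One way to realize this is a shared texture cache that each object consults instead of loading its own copy. A minimal sketch (the `TextureCache` class and the loader callback are illustrative, not from the codebase):

```javascript
// Texture cache sketch: the underlying loader runs at most once per texture
// name; every object that asks for the same name gets the shared instance.
class TextureCache {
  constructor(load) {
    this.load = load;        // underlying loader, e.g. a disk/GPU upload
    this.cache = new Map();  // texture name -> loaded texture object
  }
  get(name) {
    if (!this.cache.has(name)) {
      this.cache.set(name, this.load(name));
    }
    return this.cache.get(name);
  }
}

// Usage: many objects rendered in one frame share the same texture objects.
let loads = 0;
const cache = new TextureCache((name) => { loads++; return { name }; });
const coinA = cache.get("coin");
const coinB = cache.get("coin"); // cache hit, no second load
cache.get("shadow");
console.log(loads);              // 2: "coin" loaded once, "shadow" once
console.log(coinA === coinB);    // true: the texture object is shared
```

The same pattern works whether the cache lives for the whole session or is rebuilt per frame; the key point is that the lookup is per name, not per object.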
-
## Bug description
I want to retrieve 10,000 accounts (by IDs) from the database.
Due to the nature of how GraphQL resolvers work, I can't simply do a `findMany` with something like
```javascript
pr…
```
-
## Question
**Describe your question in ONE SENTENCE.**
## Further Information
Describe your question in greater length here.
## Screenshots
If applicable, add screenshots to help explain you…
-
# Batching Mechanisms for Distributed Executors
To implement efficient distributed executors for composite schemas, we need robust batching mechanisms. While introducing explicit batching fields fo…
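A common starting point for such a mechanism is DataLoader-style batching: collect every key requested within one tick of the event loop, then issue a single batched fetch downstream. A minimal sketch, assuming a hypothetical `batchFetch` callback standing in for the real downstream call (e.g. one query against a source schema):

```javascript
// DataLoader-style batching sketch: load() calls made in the same tick are
// coalesced into one call to batchFetch (a stand-in for a real backend query).
class BatchLoader {
  constructor(batchFetch) {
    this.batchFetch = batchFetch;
    this.queue = [];        // pending { key, resolve } entries
    this.scheduled = false; // whether a flush is already queued
  }
  load(key) {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        // Flush on the microtask queue so all loads from this tick batch together.
        queueMicrotask(() => this.flush());
      }
    });
  }
  async flush() {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    // One downstream call for the whole batch; results align by index.
    const results = await this.batchFetch(batch.map((e) => e.key));
    batch.forEach((e, i) => e.resolve(results[i]));
  }
}

// Usage: three loads issued in one tick produce a single batched fetch.
const calls = [];
const loader = new BatchLoader(async (keys) => {
  calls.push(keys);
  return keys.map((k) => ({ id: k }));
});
Promise.all([loader.load(1), loader.load(2), loader.load(3)]).then((rows) => {
  console.log(calls.length);          // 1: one batched call for all three keys
  console.log(rows.map((r) => r.id)); // [1, 2, 3]
});
```

Real implementations add per-request caching and error propagation on top, but the queue-then-flush core is the batching mechanism itself.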