-
### System Info
```Shell
- `Accelerate` version: 0.29.3
- Platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35
- `accelerate` bash location: /(...)/.venv/bin/accelerate
- Python version: 3.10.…
-
On the server-side, Seq imposes two size limits on incoming events:
* The event size limit, which applies to the JSON-encoded byte size of individual events, and
* The batch size limit, which ap…
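A minimal sketch of how a client might respect both limits before shipping events to Seq. The limit values and helper names here are illustrative assumptions, not Seq's actual defaults or API:

```python
import json

# Assumed, illustrative limits -- not Seq's real defaults.
EVENT_SIZE_LIMIT = 256 * 1024        # max JSON-encoded bytes per event
BATCH_SIZE_LIMIT = 10 * 1024 * 1024  # max total bytes per batch

def event_size(event: dict) -> int:
    """JSON-encoded byte size of a single event."""
    return len(json.dumps(event).encode("utf-8"))

def split_batches(events):
    """Skip events over the per-event limit; group the rest into
    batches that stay under the batch limit."""
    batch, batch_bytes = [], 0
    for ev in events:
        size = event_size(ev)
        if size > EVENT_SIZE_LIMIT:
            continue  # this event alone exceeds the event size limit
        if batch and batch_bytes + size > BATCH_SIZE_LIMIT:
            yield batch
            batch, batch_bytes = [], 0
        batch.append(ev)
        batch_bytes += size
    if batch:
        yield batch
```

The key point is that the two checks are independent: an event can pass the batch check yet still be rejected on its own size, and vice versa.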
-
## Description
I have noticed that the performance metrics (precision, recall, accuracy) of my model differ between when it is in memory (just after training, before serialization) and after loa…
-
### What type of bug is this?
Performance issue
### What subsystems and features are affected?
Compression
### What happened?
Running query from https://docs.timescale.com/tutorials/latest/blockc…
-
Hi, thank you for your contribution!
It seems that the "factor" in "model.py" is None at first, is set to a fixed value after the first batch of the first epoch, and is kept fixed during the training pro…
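A minimal sketch of the behavior described above, with hypothetical names (the real "model.py" may differ): `factor` starts as None, is initialized from the first batch, and is never updated afterwards:

```python
class ScaledModel:
    """Illustrative only: mimics a `factor` that is None at construction,
    set from the first batch, and kept fixed afterwards."""

    def __init__(self):
        self.factor = None  # not set until the first batch arrives

    def forward(self, batch):
        if self.factor is None:
            # Initialized once, from the first batch of the first epoch...
            self.factor = max(batch)
        # ...and kept fixed for the rest of training.
        return [x / self.factor for x in batch]
```

One consequence of this pattern is that the fixed value depends on whichever batch happens to come first, which may matter if the data loader shuffles.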
-
###### Which version and edition of Flyway are you using?
9.7.0 Community Edition
###### Which client are you using? (Command-line, Java API, Maven plugin, Gradle plugin)
Gradle plugin and comman…
-
### System Info
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: MULTI_GPU
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_trainin…
-
Since #856 got merged, I was wondering if we could send multiple files in one go to faster-whisper, something like:
```py
from faster_whisper import WhisperModel, BatchedInferencePipeline
…
```
-
## Proving fee
Currently we charge:
- Base fee
- Priority fee, if the user chooses to pay one
- L1 fee
We have neglected another important fee, though: the proving fee.
On our latest profiling map, …
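To make the fee components above concrete, here is a hypothetical sketch of the total charge once the proving fee is included. The function name and values are illustrative, not the project's actual fee model:

```python
def total_fee(base_fee: int, l1_fee: int, proving_fee: int,
              priority_fee: int = 0) -> int:
    """Sum the fee components: base, optional priority, L1, and proving."""
    return base_fee + priority_fee + l1_fee + proving_fee

# Without a priority fee, the user still pays base + L1 + proving:
total_fee(100, 30, 25)                    # -> 155
# Opting into a priority fee adds it on top:
total_fee(100, 30, 25, priority_fee=10)   # -> 165
```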
-
To follow up on the discussion started here:
https://github.com/flux-framework/flux-sched/issues/1009#issuecomment-1610039068
I'm trying to get this working so that I can run a batch job that has…