Veldhoen closed this issue 3 months ago
Notes:
- Python 3.9 (OpenShift cluster default)
- Python 3.12 (installed via miniconda)
- batch_size = 16: fastest performance observed so far. I will experiment with a higher batch_size, since the model wasn't using all of the available GPU memory.
- UPDATE: batch_size = 24 works best (91% GPU memory usage, fastest overall).
- batch_size = 26 was <1 s faster for a 1 hr audio file and consumed around 97% GPU memory, but overall it was still slower by <1 s than batch_size = 24, and it was very close to running OOM => NOT SUITABLE.

Next step: run the WhisperX evaluation using batch_size = 24.
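The selection rule implied by the notes above (pick the fastest batch size among those that keep some GPU-memory headroom) can be sketched as a small helper. The numbers below are illustrative placeholders based on the notes, not the actual measurements:

```python
def pick_batch_size(results, mem_limit=0.95):
    """Return the batch size with the lowest total runtime among
    candidates that stay under a GPU-memory safety limit."""
    safe = {bs: r for bs, r in results.items() if r["gpu_mem"] <= mem_limit}
    return min(safe, key=lambda bs: safe[bs]["total_seconds"])

# Placeholder sweep results: fraction of GPU memory used and total runtime.
sweep = {
    16: {"gpu_mem": 0.75, "total_seconds": 310.0},
    24: {"gpu_mem": 0.91, "total_seconds": 295.0},
    26: {"gpu_mem": 0.97, "total_seconds": 295.5},  # near-OOM => excluded
}

print(pick_batch_size(sweep))  # 26 is filtered out by the memory cap
```

With these placeholder numbers the helper selects 24, matching the conclusion above: 26 is excluded for being too close to OOM regardless of its runtime.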
Deploy and run WhisperX
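A minimal sketch of running WhisperX with the chosen batch size, following the usage pattern from the WhisperX README; the model name, audio path, and compute type here are assumptions, not values taken from this issue:

```python
import whisperx  # assumes WhisperX is installed (pip install whisperx)

device = "cuda"

# "large-v2" and float16 are assumptions; substitute the model and
# precision actually used in the evaluation.
model = whisperx.load_model("large-v2", device, compute_type="float16")

audio = whisperx.load_audio("audio.wav")  # hypothetical file path
result = model.transcribe(audio, batch_size=24)  # batch size chosen above
print(result["segments"])
```

Requires a CUDA-capable GPU; this is a sketch of the transcription call only, not the full deployment.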