Closed · jkrukowski closed this pull request 7 months ago
Can the model now be used completely offline? It seems that `loadTokenizer` still needs to connect to Hugging Face (hf) to download the tokenizer.
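For context, a fully offline path would mean resolving the tokenizer from disk before touching the network at all. A minimal sketch of that idea in Swift, where the `TokenizerSource` helper and the `tokenizer.json` layout are assumptions for illustration, not WhisperKit API:

```swift
import Foundation

// Hypothetical helper: prefer a tokenizer file that already exists on disk,
// and only fall back to a network download when it is missing.
struct TokenizerSource {
    let modelFolder: URL

    /// Returns the local tokenizer.json if present; nil means a download would be required.
    func localTokenizerConfig() -> URL? {
        let candidate = modelFolder.appendingPathComponent("tokenizer.json")
        return FileManager.default.fileExists(atPath: candidate.path) ? candidate : nil
    }
}

// Usage idea: only reach out to the Hugging Face Hub when nothing is found locally.
// if let config = TokenizerSource(modelFolder: folder).localTokenizerConfig() {
//     // build the tokenizer from `config`, no network access needed
// } else {
//     // fall back to downloading from the Hub
// }
```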
I'd like to avoid requiring `WhisperMLModel` for these, because we will soon be working with MLX models that will follow the same `TextDecoding` protocols but won't have associated `MLModel`s. Everything else looks good.
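As an illustration of that direction, here is a hedged sketch of a decoding protocol with no Core ML requirement, so a Core ML-backed type and a future MLX-backed type can both conform. Every name except `MLModel` is made up for this sketch and is not the actual WhisperKit protocol:

```swift
import CoreML

// Illustrative only: a decoding protocol with no MLModel requirement.
protocol TextDecodingLike {
    func decodeLogits(from tokens: [Int]) async throws -> [Float]
}

// A Core ML-backed conformer keeps its MLModel as a private implementation detail.
final class CoreMLTextDecoder: TextDecodingLike {
    private let model: MLModel
    init(model: MLModel) { self.model = model }

    func decodeLogits(from tokens: [Int]) async throws -> [Float] {
        // Real code would build an MLFeatureProvider from `tokens` and call
        // `model.prediction(from:)`; elided in this sketch.
        return []
    }
}

// An MLX-backed conformer would hold MLX weights instead, with no MLModel
// (and no conditional casting) anywhere in the protocol.
final class MLXTextDecoder: TextDecodingLike {
    func decodeLogits(from tokens: [Int]) async throws -> [Float] {
        return []
    }
}
```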
Not much is left in this PR after this is removed, so I'll close it for now.
- `WhisperMLModel` conformance, no more conditional casting
- `setupModels` -> static `setupModelFolder`, which makes `modelFolder` non-optional
- `download` param, I think it's not needed: when the `modelFolder` param is `nil` we download, otherwise we load the model from the directory
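The download-when-`nil` rule above could look roughly like the following; the signature, the error case, and the caches-directory destination are assumptions for illustration, not the actual implementation:

```swift
import Foundation

enum ModelFolderError: Error {
    case missingFolder(URL)
}

struct ModelSetup {
    /// Resolves a non-optional model folder: a provided folder is used as-is,
    /// a nil folder triggers a download into a caches subdirectory.
    static func setupModelFolder(
        modelFolder: URL?,
        download: (URL) async throws -> Void  // placeholder for the real downloader
    ) async throws -> URL {
        if let modelFolder {
            guard FileManager.default.fileExists(atPath: modelFolder.path) else {
                throw ModelFolderError.missingFolder(modelFolder)
            }
            return modelFolder
        }
        let destination = FileManager.default
            .urls(for: .cachesDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("whisper-model", isDirectory: true)
        try await download(destination)
        return destination
    }
}
```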
Possible future improvements:

- load the models (`featureExtractor`, `audioEncoder` and `textDecoder`) in parallel? (sketched below)
- a `forceDownload` param which, in case `modelFolder` is provided, will re-download
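On the parallel-loading idea, a hedged sketch using `async let` so the three components load concurrently; the component types and the model file names are placeholders, not the real WhisperKit types:

```swift
import Foundation

// Placeholder components standing in for the real feature extractor,
// audio encoder and text decoder; each pretends to load from a compiled model file.
struct FeatureExtractor { init(contentsOf url: URL) async throws {} }
struct AudioEncoder { init(contentsOf url: URL) async throws {} }
struct TextDecoder { init(contentsOf url: URL) async throws {} }

struct LoadedModels {
    let featureExtractor: FeatureExtractor
    let audioEncoder: AudioEncoder
    let textDecoder: TextDecoder
}

/// Loads the three components concurrently with `async let` instead of one after another.
func loadModels(in folder: URL) async throws -> LoadedModels {
    async let featureExtractor = FeatureExtractor(contentsOf: folder.appendingPathComponent("MelSpectrogram.mlmodelc"))
    async let audioEncoder = AudioEncoder(contentsOf: folder.appendingPathComponent("AudioEncoder.mlmodelc"))
    async let textDecoder = TextDecoder(contentsOf: folder.appendingPathComponent("TextDecoder.mlmodelc"))
    return try await LoadedModels(
        featureExtractor: featureExtractor,
        audioEncoder: audioEncoder,
        textDecoder: textDecoder
    )
}
```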
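And for the `forceDownload` idea, a tiny sketch of the intended semantics: even when `modelFolder` is provided, a true flag re-downloads into that folder before anything is loaded from disk (again, the names are illustrative):

```swift
import Foundation

/// Hypothetical resolution step: `forceDownload` overrides an existing local folder
/// by re-downloading into it first.
func resolveModelFolder(
    modelFolder: URL,
    forceDownload: Bool,
    download: (URL) async throws -> Void  // placeholder for the real downloader
) async throws -> URL {
    if forceDownload {
        try await download(modelFolder)  // refresh the folder contents
    }
    return modelFolder  // freshly downloaded or pre-existing contents
}
```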