Open KamilCSPS opened 1 month ago
You can change the filename prefix "model". The suffix is hard-coded.
// will fetch model_br_quantized.onnx, using latest main / v3 branch
await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', {model_file_name: 'model_br'})
Another workaround: download the compressed model, decompress it, and cache the decompressed model under the path "model_quantized.onnx".
Question
Hi,
I just can't find a configuration option to point to a specific model file path, e.g. to use .onnx.br instead of .onnx.
I can run the model (distilbert-base-cased-distilled-squad) offline without any issue and it works. But I want to deploy it compressed with Brotli. All I can see in the config files are references to the model's folder, not the actual file paths, e.g. "model_quantized.onnx".
Any help is appreciated.