Export Hugging Face models to Core ML and TensorFlow Lite
Apache License 2.0
Can't convert fine-tuned OpenLLaMA model - For mlprogram, inputs with infinite upper_bound is not allowed. Please set upper_bound to a positive value in "RangeDim()" for the "inputs" param in ct.convert(). #84
Hi! When trying to convert a fine-tuned OpenLLaMA model with the following options:
python -m exporters.coreml --use_past --compute_units all --preprocessor tokenizer --model=./merged_finetuned_open_llama_3b_v2_shakespeare ./models/shakespearellama.mlpackage
I get the following error:
File "/opt/anaconda3/envs/pytorchNightly/lib/python3.11/site-packages/coremltools/converters/_converters_entry.py", line 871, in _validate_conversion_arguments raise ValueError(err_msg_infinite_bound) ValueError: For mlprogram, inputs with infinite upper_bound is not allowed. Please set upper_bound to a positive value in "RangeDim()" for the "inputs" param in ct.convert().
I am using the latest version of PyTorch and coremltools 8.0b1.
Running on macOS 14.6.
The code for the fine-tuning is basically the same code that was used in the WWDC24 session "Train your machine learning and AI models on Apple GPUs".
(Also, when removing the '--use_past' flag I get a different error when validating the model.)
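For context, my understanding is that the error comes from the flexible sequence-length input: when the conversion target is an ML Program, coremltools requires every RangeDim to have a finite upper_bound. Calling coremltools directly, the requirement looks roughly like the sketch below; the traced model path, input name, and 128-token bound are placeholder assumptions on my side, not anything the exporters CLI currently exposes:

```python
import numpy as np
import torch
import coremltools as ct

# Placeholder: a TorchScript trace of the model being converted.
traced = torch.jit.load("traced_model.pt")

# A finite upper_bound (here 128 tokens) instead of the default unbounded
# dimension, which is what triggers the ValueError for mlprogram targets.
seq_len = ct.RangeDim(lower_bound=1, upper_bound=128, default=1)

mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.TensorType(name="input_ids", shape=(1, seq_len), dtype=np.int32)],
)
```

What I haven't found is a way to pass an equivalent bound through the exporters command line, which is why I'm opening this issue.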