[Open] FrancescoSaverioZuppichini opened this issue 1 year ago
@FrancescoSaverioZuppichini
Can you please try the API call below:

```python
import onnx

def convert_float32_to_float16(fp32_model_path, fp16_model_path):
    from onnxmltools.utils.float16_converter import convert_float_to_float16
    from onnxmltools.utils import save_model

    model = onnx.load(fp32_model_path)
    new_onnx_model = convert_float_to_float16(model, keep_io_types=True)
    save_model(new_onnx_model, fp16_model_path)

convert_float32_to_float16("fp32.onnx", "fp16_1.onnx")
```
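For intuition about what this conversion does to the weights (and why `keep_io_types=True` can matter for accuracy at the boundaries), here is a pure-NumPy sketch of the float32→float16 cast applied to a tensor. The values are made up for illustration and this does not use ONNX at all:

```python
import numpy as np

# Hypothetical weight tensor: a value near 1, the largest finite
# float16 value, and a value too small for float16 to represent.
weights_fp32 = np.array([1.0001, 65504.0, 1e-8], dtype=np.float32)

# The cast the converter performs on initializers.
weights_fp16 = weights_fp32.astype(np.float16)

# Round-trip back to float32 to inspect what the cast discarded:
# 1.0001 rounds to 1.0, 65504.0 survives exactly, 1e-8 underflows to 0.
roundtrip = weights_fp16.astype(np.float32)
print(roundtrip.tolist())  # [1.0, 65504.0, 0.0]
```

Small relative rounding errors like these are usually harmless, but underflow and overflow are why a blind cast can change model outputs, and why the auto mixed-precision tool validates as it converts.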
If it still doesn't work, you could share your ONNX model with me via Google Drive (if the model is very large), or just email it to me as an attachment (xiaowuhu@microsoft.com) if it is smaller than 200 MB.
Hey guys,
I hope you are doing great.
`onnxconverter_common.auto_mixed_precision.auto_convert_mixed_precision` takes forever; I let it run for 15 minutes and it was still only about halfway through. Any idea why? My code:
Thanks a lot :)
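One likely reason for the long runtime: the auto mixed-precision converter validates candidate float16 assignments by repeatedly re-running the model and comparing outputs against the float32 baseline, so conversion cost scales with model size times the number of attempts. Here is a toy pure-NumPy sketch of such a validate-and-retry loop. It is not the library's actual algorithm, and all names (`nodes`, `run`, the greedy strategy) are invented for illustration:

```python
import numpy as np

# Toy stand-in for a model: four named "nodes", each a matmul weight.
rng = np.random.default_rng(0)
nodes = {f"node_{i}": rng.standard_normal((8, 8)).astype(np.float32)
         for i in range(4)}
x = rng.standard_normal((1, 8)).astype(np.float32)

def run(precisions):
    """Run the toy model, casting each node's weight per the precision map."""
    out = x
    for name, w in nodes.items():
        if precisions[name] == "fp16":
            w = w.astype(np.float16).astype(np.float32)
        out = out @ w
    return out

baseline = run({name: "fp32" for name in nodes})

# Greedy search: try fp16 for one node at a time, and keep it only if the
# outputs still match the fp32 baseline within tolerance. Every attempt
# re-runs the whole model, which is why this kind of search gets slow on
# large graphs.
precisions = {name: "fp32" for name in nodes}
for name in nodes:
    trial = dict(precisions, **{name: "fp16"})
    if np.allclose(run(trial), baseline, rtol=1e-2, atol=1e-2):
        precisions = trial

print(precisions)
```

If you only need a full float16 model rather than a validated mixed-precision one, the direct `convert_float_to_float16` call suggested above skips this search entirely and should finish much faster.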