Closed MoonKraken closed 1 year ago
Hello @Me163,

This is not caused by a device issue, but by the lack of support for bfloat16 in NumPy. I have added a small utility function that should handle this case and cast the tensors to their bfloat16 representation. Note that these would have to be reloaded as `Kind::BFloat16` in the tch-rs crate.
The candidate PR is at https://github.com/guillaume-be/rust-bert/pull/396 - could you please give this a try and let me know if this helps?
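For reference, the core idea is that bfloat16 is simply the top 16 bits of an IEEE-754 float32, which is why the tensors can be stored in a NumPy-compatible integer representation and reinterpreted as `Kind::BFloat16` on the Rust side. Here is a minimal plain-Python sketch of that encoding (an illustration only, not the actual PR code; the function names are hypothetical):

```python
import struct

def float_to_bfloat16_bits(x):
    # bfloat16 keeps the sign, the 8-bit exponent, and the top 7 mantissa
    # bits of a float32 -- i.e. the upper 16 bits (truncation, no rounding).
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16

def bfloat16_bits_to_float(bits16):
    # Widening back to float32 is exact: pad the low 16 bits with zeros.
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]
```

Values whose float32 mantissa fits in 7 bits (e.g. 1.0 or 2.5) round-trip exactly; others lose low-order precision, which is the expected bfloat16 behaviour.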
I am trying to use convert_model.py to convert a model to .ot so that I can use it with rust_bert, but I am getting the following error:

I'm on an M1 MBP, and a quick Google search suggested that this issue may be expected on this hardware, but I don't know enough about NumPy or PyTorch to know how to fix it. TIA!