Closed by ghost 3 years ago.
Sorry, ARM is not an architecture we have ever tested on or supported. I am actually amazed you got Marian to compile there.
Out of interest, what machine are you using? Apple M1? Surface Go? AWS server?
Yes, I understand. But I think the same problem will occur on any 32-bit machine.
Is this a BIN model or an NPY? I think the NPY file should be readable across all platforms, since NPY is a standard.
If the NPY file can be loaded, maybe you can try to use the ARM build to convert it to BIN?
Apple Watch, arm64_32 (ARMv8).
I use a BIN model. Thanks! I'll try NPY.
Where does the *.bin file come from?
I think the BIN model is more or less a direct memory dump; that's kind of their point :) . I would consider BIN models inherently non-portable. If NPY fails as well, then that's something worth debugging.
Yes, as Frank said, the *.bin models are meant to be memory-mapped on 64-bit machines (apart from the small header).
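To make the portability point concrete, here is a minimal Python sketch. This is not Marian's actual header format, just an illustration of why dumping native `size_t` fields to disk ties the file to one pointer width, while fixed-width types stay portable:

```python
import struct

# "N" is the platform's native size_t: 8 bytes on a 64-bit build, but 4 on
# ILP32 targets such as arm64_32 or WASM. A raw memory dump of such fields
# is therefore only readable by the same kind of machine that wrote it.
native_size_t = struct.calcsize("@N")  # platform-dependent: 4 or 8
fixed_u64 = struct.calcsize("<Q")      # always 8 bytes, portable

print(native_size_t, fixed_u64)
```

A portable on-disk format would pin every header field to a fixed-width type (e.g. uint64_t) rather than size_t, so the reader's pointer width never matters.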
And careful: it's .npz not .npy.
I think *.npy won't do anything, hopefully it will complain :)
Indeed NPZ, not NPY :) been away from Marian for too long already!!
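For context, NPZ portability comes from the format itself: an .npz is a zip archive of .npy members, and each member's header records dtype, shape, and byte order explicitly. A quick sketch with an illustrative parameter name (Wemb here is just an example, not taken from any particular model):

```python
import io
import numpy as np

# Save a toy parameter to an in-memory NPZ, then load it back. Because the
# NPY header stores dtype/shape/endianness, a 32-bit reader interprets the
# same bytes the same way a 64-bit writer produced them.
buf = io.BytesIO()
np.savez(buf, Wemb=np.arange(6, dtype=np.float32).reshape(2, 3))
buf.seek(0)

model = np.load(buf)
print(model["Wemb"].shape, model["Wemb"].dtype)  # (2, 3) float32
```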
I guess a .bin file created from a .npz file on an ARM machine would have a chance to work then?
It should, or if not, it should be an easy fix if the NPZ works.
We've been compiling Marian for WASM in https://github.com/browsermt/marian-dev/pull/6 . WASM also has a 32-bit size_t. This https://github.com/marian-nmt/marian-dev/pull/779 should fix one problem for you. For what it's worth, an npz trained on an amd64 machine just worked on WASM.
BTW you shouldn't need to retrain the model for this. You just need to convert the BIN model to NPZ. Not sure if marian-convert has a function to go that way, but it should be easy to hack if not.
Thanks to all! The NPZ model works. I also needed to undefine the FASTOPT feature.
marian-convert can work both ways, based on the suffix.
Hi! I have a model trained on a 64-bit Linux (Intel) machine.
But when I try to use this model on arm64 with 32-bit logic/pointers, I always get an exception while loading the BIN model.
This happens because the training machine uses some 64-bit data types (like 'size_t' and 'long') in the model files, but arm64_32 has different type sizes. For example, size_t is a 4-byte value there instead of the 8 bytes on the training machine.
So, when I try to read this model on ARM (32-bit data types), the model-loading class uses the wrong offset (in 'template get'): 4 bytes instead of the 8 bytes in the model file, and fails.
Is there a solution to this problem? Thanks!
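The wrong-offset failure described above can be reproduced in miniature. This uses a hypothetical header layout (not Marian's real one): four fields written as the writer's 8-byte size_t, then re-read with the 4-byte stride an ILP32 reader would assume:

```python
import struct

# Writer (64-bit machine): four header fields, each a native 8-byte size_t.
fields = (3, 100, 200, 300)
blob = struct.pack("<4Q", *fields)

# A reader assuming a 4-byte size_t walks the same bytes with half the
# stride, so it sees the low/high halves of each 64-bit field as separate
# values -- every other "field" is a stray high word.
misread = struct.unpack_from("<4I", blob, 0)
print(misread)  # (3, 0, 100, 0)
```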