I'm wondering whether MobileSAM can reach the inference performance reported in the original paper when running on the MNN backend. The paper reports roughly 8 ms for the encoding part and 4 ms for decoding.
I've tested it with the Interpreter and Session API instead of the Module API, but I can't get the encoder down to 8 ms once the encoding transform is applied :)
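For context, here is a minimal sketch of the kind of Interpreter/Session timing loop I mean. The model filename, the 1x3x1024x1024 input shape (SAM's default image size), the thread count, and the CPU backend are all assumptions on my side, not a definitive setup:

```cpp
// Minimal MNN Interpreter/Session benchmark sketch (paths/shapes are assumptions).
#include <MNN/Interpreter.hpp>
#include <MNN/Tensor.hpp>
#include <chrono>
#include <cstdio>
#include <memory>

int main() {
    // "mobile_sam_encoder.mnn" is a placeholder for the exported encoder model.
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("mobile_sam_encoder.mnn"));

    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_CPU;  // try MNN_FORWARD_OPENCL / _VULKAN on device
    config.numThread = 4;
    auto session = net->createSession(config);

    // Assumed SAM-style encoder input: 1x3x1024x1024; adjust to your export.
    auto input = net->getSessionInput(session, nullptr);
    net->resizeTensor(input, {1, 3, 1024, 1024});
    net->resizeSession(session);

    // Fill the input with dummy data through a host-side tensor copy.
    MNN::Tensor inputHost(input, input->getDimensionType());
    for (int i = 0; i < inputHost.elementSize(); ++i) {
        inputHost.host<float>()[i] = 0.5f;
    }
    input->copyFromHostTensor(&inputHost);

    // Warm-up run so one-time setup cost isn't counted in the timing.
    net->runSession(session);

    auto output = net->getSessionOutput(session, nullptr);
    MNN::Tensor outputHost(output, output->getDimensionType());

    const int kRuns = 20;
    auto begin = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < kRuns; ++i) {
        net->runSession(session);
        // Copying the output back forces asynchronous backends to finish.
        output->copyToHostTensor(&outputHost);
    }
    auto end = std::chrono::high_resolution_clock::now();
    double ms = std::chrono::duration<double, std::milli>(end - begin).count() / kRuns;
    printf("encoder: %.2f ms/run\n", ms);
    return 0;
}
```

Even with a warm-up run and averaging over multiple iterations like this, the encoder stays above 8 ms for me.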