wangzhaode / mnn-segment-anything

segment-anything based mnn

mobile-sam mnn inference performance #1

Open · MaybeShewill-CV opened this issue 1 year ago

MaybeShewill-CV commented 1 year ago

I wonder whether mobile-sam can reach the inference performance reported in the original paper when using the MNN backend. The paper states the image encoder takes about 8 ms and the mask decoder about 4 ms.

I've tested it with the Interpreter/Session API instead of the Module API, but I can't get anywhere near 8 ms for the encoder forward pass. :)
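
For reference, a minimal timing sketch of what such an Interpreter/Session measurement might look like. The model file name, input shape, and CUDA backend choice are assumptions rather than something taken from this repo; the point is that on asynchronous GPU backends the output copy to host should be inside the timed region so the measurement covers the full encoder forward pass.

```cpp
// Hypothetical timing sketch using MNN's Interpreter/Session API (not the Module API).
// Model file name and input shape are assumptions, not taken from this repo.
#include <MNN/Interpreter.hpp>
#include <MNN/Tensor.hpp>
#include <chrono>
#include <cstdio>
#include <memory>

int main() {
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("mobile_sam_encoder.mnn"));  // assumed file name

    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_CUDA;  // GPU backend; MNN falls back to CPU if unavailable
    MNN::Session* session = net->createSession(config);

    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    // MobileSAM's encoder expects a 1024x1024 RGB image (NCHW layout assumed here).
    net->resizeTensor(input, {1, 3, 1024, 1024});
    net->resizeSession(session);

    // Fill the input with preprocessed image data via a host-side tensor.
    MNN::Tensor inputHost(input, MNN::Tensor::CAFFE);
    // ... write preprocessed pixels into inputHost.host<float>() ...
    input->copyFromHostTensor(&inputHost);

    // Warm up once so kernel compilation and memory allocation are excluded from timing.
    net->runSession(session);

    auto t0 = std::chrono::high_resolution_clock::now();
    net->runSession(session);
    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    MNN::Tensor outputHost(output, output->getDimensionType());
    // Copying to host forces synchronization on asynchronous GPU backends,
    // so the measured time covers the whole encoder forward pass.
    output->copyToHostTensor(&outputHost);
    auto t1 = std::chrono::high_resolution_clock::now();

    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    printf("encoder forward: %.2f ms\n", ms);
    return 0;
}
```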

wangzhaode commented 1 year ago

The mobile-sam speed test was run on a single GPU. Which device did you test on?

MaybeShewill-CV commented 1 year ago

@wangzhaode a single RTX 3080 :)