Closed MarkWijkhuizen closed 1 year ago
Uh, they have the same name. The M ones are from GitHub microsoft/Cream/EfficientViT, paper 2305.07027 "EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention", and I put the wrong one in...
What a coincidence that two models with identical names were released within a month of each other. Then again, with a name like EfficientViT it is not exactly surprising... If you are in a position to add the other EfficientViT models, that would be highly appreciated!
It's almost done, just uploading. Just wondering where I should put them, as the B and M series are actually totally different.
Thanks for picking it up so quickly!
Where to put them is a good question.
The B and M models are from different authors and have different architectures, so this should be stated explicitly.
From a programming point of view it would be convenient to have them under the same EfficientViT class, since their purpose is the same: small, efficient vision transformers.
As the project admin you know best what to do :)
Ya, currently they are both placed under keras_cv_attention_models/efficientvit and accessible through kecam.efficientvit, and also separately accessible as kecam.efficientvit_m and kecam.efficientvit_b. Let's keep it that way.
Perfect, pulled the updates locally and everything works smoothly. Many thanks for adding the EfficientViT-B models!
The original EfficientViT paper mentions four models, namely EfficientViT-B0/B1/B2/B3.
These models are published on the paper's official GitHub repository.
In the latest release, EfficientViT-M1/M2/M3/M4/M5 were added, which differ from the official models.
Would it be possible to add EfficientViT-B0/B1/B2/B3 from their official GitHub repository?