macsmy closed this issue 11 months ago.
I've got a similar issue earlier in another repo: EfficientNetV2B0 to TFLite #99. You may try:

```python
from keras_cv_attention_models import efficientformer, model_surgery

mm = efficientformer.EfficientFormerL1()
mm = model_surgery.convert_to_fused_conv_bn_model(mm)  # fuse conv + bn, technically removes all BN layers
mm = model_surgery.convert_gelu_and_extract_patches_for_tflite(mm)  # gelu -> gelu/app, you may have already done this
# Other process converting tflite
...
```
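For reference, the `gelu -> gelu/app` step above swaps the exact erf-based GELU for its tanh approximation, which is built only from primitive ops that TFLite handles better. A minimal pure-Python sketch of the two forms (function names here are illustrative, not from the repo):

```python
import math

def gelu_exact(x):
    # Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh_approx(x):
    # tanh approximation ("gelu/app"): uses only mul/add/pow/tanh
    inner = math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)
    return 0.5 * x * (1.0 + math.tanh(inner))

# The two agree to within ~3e-4 over a typical activation range
xs = [i / 10.0 for i in range(-50, 51)]
max_err = max(abs(gelu_exact(x) - gelu_tanh_approx(x)) for x in xs)
```

The approximation error is small enough that classification accuracy is usually unaffected.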
But it seems `LayerNorm` is also not listed in the supported ops of the TFLite GPU delegate, so I'm not sure if it works... You may also just remove the usage of `layer_norm` in `efficientformer.py` L28, L38, L94, modifying the next layer's inputs accordingly. Check whether it's `LayerNorm` causing the trouble, or something else like `mhsa_with_multi_head_position`.
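If `LayerNorm` does turn out to be the blocker, one common workaround is to express it through primitive ops (mean, subtract, multiply, rsqrt, add) that delegates generally do support. A minimal pure-Python sketch of that decomposition, assuming the standard LayerNorm formula (the function name is illustrative):

```python
import math

def layer_norm_primitive(x, gamma=1.0, beta=0.0, eps=1e-5):
    # LayerNorm over the last axis, written only with MEAN/SUB/MUL/ADD/RSQRT
    n = len(x)
    mean = sum(x) / n                         # MEAN
    centered = [v - mean for v in x]          # SUB
    var = sum(c * c for c in centered) / n    # MUL + MEAN
    rstd = 1.0 / math.sqrt(var + eps)         # RSQRT
    return [c * rstd * gamma + beta for c in centered]  # MUL + ADD

out = layer_norm_primitive([1.0, 2.0, 3.0, 4.0])
```

The output has (approximately) zero mean and unit variance, matching a standard `LayerNormalization` layer with default `gamma`/`beta`.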
Thanks, however `LayerNorm` works. I believe the problem is with the FullyConnected layers.
You mean the `Dense` layers? If you can help confirm that a model like `mm = efficientformer.EfficientFormerL1(num_classes=0)` without the output `Dense` layer works, I think we can add a function like `convert_dense_to_conv2d`.
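The idea behind a hypothetical `convert_dense_to_conv2d` is that a `Dense` layer is numerically identical to a 1x1 `Conv2D` whose kernel is the dense weight matrix reshaped from `(in, out)` to `(1, 1, in, out)`. A pure-Python sketch of that equivalence (illustrative only, not the repo's implementation):

```python
def dense(x, w, b):
    # x: [in], w: [in][out], b: [out]  ->  [out]
    return [sum(x[i] * w[i][o] for i in range(len(x))) + b[o]
            for o in range(len(b))]

def conv1x1(fmap, kernel, b):
    # fmap: [H][W][in], kernel: [1][1][in][out]  ->  [H][W][out]
    # A 1x1 conv is just the dense matmul applied at every spatial position.
    return [[dense(px, kernel[0][0], b) for px in row] for row in fmap]

w = [[1.0, 2.0], [3.0, 4.0]]   # dense weight, shape (in=2, out=2)
b = [0.5, -0.5]
x = [1.0, 1.0]

dense_out = dense(x, w, b)
conv_out = conv1x1([[x]], [[w]], b)  # kernel = weight reshaped to (1, 1, 2, 2)
```

On a `1x1xC` feature map the two produce the same values, which is why swapping `Dense` for `Conv2D` can sidestep delegate issues with the FullyConnected op without retraining.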
A similar issue was solved in "Converting EfficientFormer into tflite doesn't work" #137.
Hi! Thanks for the great repo! I have converted the EfficientFormer model to TFLite, but applying both the XNNPACK and GPU delegates fails. Do you know what the issue could be? I'm using the latest TensorFlow version for the conversion.