Found an excellent resource for model implementations at https://github.com/leondgarse/keras_cv_attention_models#recognition-models, which should accelerate trying out new models.
I've been catching up on advances from the past two or so years. While ConvNextV2 is intriguing due to model size, I think a better approach for this use case will be to focus not only on model size but also on the pretraining. In particular, the DINO/DINOv2 and CLIP-related pretraining approaches look especially helpful due to their robustness to distribution shift. Not only is the pretraining material much closer to our target distribution, but the resulting models are generally much stronger.
To test this theory, I tried a DINOv2 finetune (dense head only at first, then gradual weight changes on the last 15 layers; sketched below) and got excellent results, better than I had seen from some of my more half-hearted attempts with e.g. Inception and ResNet variants. The only challenge is that the smallest model available uses a whopping 47.23G FLOPS (vs. 0.72G for, say, EfficientNetV1 B0). Somewhat surprisingly, I was able to convert it to TensorflowJS successfully, but inference was slow, on the order of several seconds per image. Still, it was a useful experiment for demonstrating the effectiveness of a stronger model. The dataset has also grown somewhat, so it's not quite apples to apples, but notice the improvement in the DET and ROC curves. SQRXR 112 (EfficientNet Lite L0-based):
SQRXR 119 (DINOv2):
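For reference, here's a minimal sketch of that finetune recipe (dense head first on a frozen backbone, then letting the last ~15 layers move at a low learning rate). The exact backbone constructor, module name, and datasets below are illustrative assumptions, not the precise setup used:

```python
import tensorflow as tf
# In keras_cv_attention_models the DINOv2 variants sit alongside the BEiT/ViT family;
# the exact module and model name here are assumptions.
from keras_cv_attention_models import beit

# Headless backbone (num_classes=0 returns features instead of logits in this library).
backbone = beit.DINOv2_ViT_Small14(num_classes=0)

# Stage 1: freeze the backbone and train only a new dense head.
backbone.trainable = False
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling1D(),  # assumes the headless output is a token sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(train_ds, validation_data=val_ds, epochs=5)  # train_ds/val_ds: your tf.data pipelines

# Stage 2: allow weight changes on the last ~15 layers only, at a much lower learning rate.
backbone.trainable = True
for layer in backbone.layers[:-15]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```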
I'm going to check out an EVA02-based model next since it is CLIP-based, but it still weighs in at 4.72G FLOPS, so in theory it's going to be a few times slower than the current model.
So I tried out the smallest EVA02, EVA02TinyPatch14. I trained several finetune variants, changing how many layers of the graph I retuned. Results were OK, but the final DET graphs showed poor and/or uneven performance. My hypothesis is that the next size up of EVA02 will recover more of the smoothness and performance I observed in the mega-sized DINOv2.
For comparison, here's a DET output from EVA02Tiny (SQRXR 120):
Now compare that to the currently deployed EfficientNetLite L0-based approach (SQRXR 112):
Results from the larger EVA02 model were decent but not enough to justify the performance penalty. SQRXR 121, EVA02Small:
EfficientFormerV2S2 seems like a candidate for incremental improvement. (SQRXR 122)
The bottom of the DET curve is still a little squiggly, and I'm not a fan of the FPR in the "trusted" zone. Still good though, and inference only increased from about 68ms for SQRXR 112 to 82ms for this one, SQRXR 122.
I ran a few more experiments.
I tried working with EfficientViT B2 as SQRXR 128. Training went well and the overall results were promising. Unfortunately, the resulting model was difficult both to reload and to convert to TF.js. The use of 'hard_swish' did not play well, but I was able to coax it into a custom layer instead of a function and got the model to reload; however, the use of the PartitionedCall op ultimately meant TF.js couldn't handle it. Might be something to return to, as there may be a way to coax the model into not emitting a PartitionedCall, but it's not obvious.
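For anyone hitting the same reload problem, the workaround is roughly the following; the class name and file path are illustrative, but the idea is to express hard_swish as a registered Keras layer rather than a bare function so deserialization can find it:

```python
import tensorflow as tf

@tf.keras.utils.register_keras_serializable(package="sqrxr")
class HardSwish(tf.keras.layers.Layer):
    """hard_swish expressed as a layer: x * relu6(x + 3) / 6."""
    def call(self, inputs):
        return inputs * tf.nn.relu6(inputs + 3.0) / 6.0

# Reload a saved model whose hard_swish activations were swapped for the layer above;
# the path is just a placeholder.
model = tf.keras.models.load_model(
    "efficientvit_b2_sqrxr128.h5",
    custom_objects={"HardSwish": HardSwish},
)
```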
Next up, RepViT M11 as SQRXR 129. Training was OK but it did not seem to provide much advantage over, say, SQRXR 127.
Next - Levit 256 as SQRXR 130: marginal advantage on ROC over baseline 112, less good on DET.
CMT XS Torch as SQRXR 131: Marginal advantage on ROC AUC, disadvantage on DET.
TinyViT 11 as SQRXR 132: Marginal advantage on ROC AUC, disadvantage on DET.
EfficientNetV2 B0 as SQRXR 133: No advantage.
EfficientFormerV2S0 as a smaller variant of an earlier experiment. SQRXR 134: No improvement, and unsurprisingly not as good as V2S2.
Tried a somewhat different training regime with the current EfficientNetLite L0, using some of the other recent advances like swapping the optimizer out for AdamW. SQRXR 135: about the same, but the DET curve is a bit more gnarly at the beginning. No clear advantage. Still, I think there may be something to the training-technique approach here.
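The optimizer swap itself is just a one-liner; a sketch below, assuming a recent TF/Keras where AdamW is built in (the learning rate and weight decay values are illustrative, not the ones used):

```python
import tensorflow as tf

# AdamW = Adam with decoupled weight decay.
optimizer = tf.keras.optimizers.AdamW(learning_rate=1e-3, weight_decay=1e-4)

# `model` is the existing EfficientNetLite L0-based Keras model.
model.compile(optimizer=optimizer,
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```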
GCViT XTiny - a bit bigger model, and it shows in the performance. SQRXR 136: definite improvement. Scanning speed is slow but sort-of tolerable. Might be useful as a bigger-model option. The adventurous can try it out on the test branch while it sticks around: https://github.com/wingman-jr-addon/wingman_jr/tree/sqrxr-136
I've been trying this out ... and I'm not sure it's fast enough or good enough to become the next top model yet. I might need to keep searching.
https://github.com/facebookresearch/ConvNeXt-V2/issues/3 https://github.com/edwardyehuang/iSeg/tree/master/backbones