I trained a model with faster_rcnn_fbnetv3a_C4.yaml, exported it to *.pt, and ran it on Android without any issue.
I then tried to train a model for smaller objects, so I changed the config to mask_rcnn_fbnetv3g_fpn.yaml. It works perfectly in the Python environment, but when I try to use it on Android, I keep getting the error below:
FATAL EXCEPTION: Thread-2
Process: org.pytorch.demo.objectdetection, PID: 4185
com.facebook.jni.CppException: Output channel size of weight and bias must match.
Debug info for handle(s): debug_handles:{-1}, was not found.
Exception raised from apply_impl at /home/agunapal/pytorch/aten/src/ATen/native/quantized/cpu/qconv.cpp:899 (most recent call first):
(no backtrace available)
at org.pytorch.LiteNativePeer.forward(Native Method)
at org.pytorch.Module.forward(Module.java:52)
at org.pytorch.demo.objectdetection.MainActivity.run(MainActivity.java:331)
at java.lang.Thread.run(Thread.java:1012)