lankastersky opened this issue 6 years ago
Yes, the official pre-trained model cannot be used directly on the device; that is why I created this project.
Refer to step 3 in the "Preparing the models" section: you need to modify the export script to cast the INT64 output to INT32 before slicing:
```python
# Cast the INT64 predictions to INT32 before slicing; the on-device
# TensorFlow runtime does not register an INT64 kernel for 'Slice'.
semantic_predictions = tf.slice(
    tf.cast(predictions[common.OUTPUT_TYPE], tf.int32),
    [0, 0, 0],
    [1, resized_image_size[0], resized_image_size[1]])
```
The app crashes with the frozen graph mobilenetv2_coco_voc_trainaug from the DeepLab model zoo (https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md):
```
FATAL EXCEPTION: ModernAsyncTask #3
Process: com.dailystudio.deeplab, PID: 4546
java.lang.RuntimeException: An error occurred while executing doInBackground()
    at android.support.v4.content.ModernAsyncTask$3.done(ModernAsyncTask.java:161)
    at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:383)
    at java.util.concurrent.FutureTask.setException(FutureTask.java:252)
    at java.util.concurrent.FutureTask.run(FutureTask.java:271)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
    at java.lang.Thread.run(Thread.java:764)
Caused by: java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'Slice' with these attrs.
Registered devices: [CPU], Registered kernels:
    device='CPU'; T in [DT_BOOL]
    device='CPU'; T in [DT_FLOAT]
    device='CPU'; T in [DT_INT32]
```
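The error above means the frozen graph still contains a `Slice` op whose `T` attribute is `DT_INT64`, which the mobile runtime has no kernel for. Before deploying, you can scan the exported `GraphDef` for such nodes. Here is a rough sketch of that check; instead of loading a real `frozen_inference_graph.pb`, it builds a tiny in-memory graph that mimics the un-patched export (slicing an int64 tensor), which is my own stand-in, not the actual DeepLab export script:

```python
import tensorflow as tf
from tensorflow.core.framework import types_pb2

# Mimic the un-patched export: slice an int64 prediction tensor.
# (In practice you would ParseFromString() your frozen_inference_graph.pb
# into a GraphDef instead of building one here.)
@tf.function
def export(pred):
    return tf.slice(pred, [0, 0, 0], [1, 2, 2])

concrete = export.get_concrete_function(
    tf.TensorSpec([1, 4, 4], tf.int64))
graph_def = concrete.graph.as_graph_def()

# Collect every Slice node whose dtype attr 'T' is DT_INT64 — any hit
# here would reproduce the "No OpKernel was registered" crash on device.
int64_slices = [n.name for n in graph_def.node
                if n.op == "Slice" and n.attr["T"].type == types_pb2.DT_INT64]
print(int64_slices)
```

If the list is non-empty, the cast to INT32 from step 3 has not been applied (or did not take effect), and the graph will crash on the device.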