dailystudio / ml

ML related stuff
Apache License 2.0

No OpKernel was registered to support Op 'Slice' with these attrs exception #3

Open lankastersky opened 6 years ago

lankastersky commented 6 years ago

The app crashes with the frozen graph mobilenetv2_coco_voc_trainaug from the DeepLab model zoo (https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md):

FATAL EXCEPTION: ModernAsyncTask #3
Process: com.dailystudio.deeplab, PID: 4546
java.lang.RuntimeException: An error occurred while executing doInBackground()
    at android.support.v4.content.ModernAsyncTask$3.done(ModernAsyncTask.java:161)
    at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:383)
    at java.util.concurrent.FutureTask.setException(FutureTask.java:252)
    at java.util.concurrent.FutureTask.run(FutureTask.java:271)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
    at java.lang.Thread.run(Thread.java:764)
Caused by: java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'Slice' with these attrs. Registered devices: [CPU], Registered kernels:
    device='CPU'; T in [DT_BOOL]
    device='CPU'; T in [DT_FLOAT]
    device='CPU'; T in [DT_INT32]

     [[Node: SemanticPredictions = Slice[Index=DT_INT32, T=DT_INT64](ArgMax, SemanticPredictions/begin, SemanticPredictions/size)]]
    at org.tensorflow.Session.run(Native Method)
    at org.tensorflow.Session.access$100(Session.java:48)
    at org.tensorflow.Session$Runner.runHelper(Session.java:298)
    at org.tensorflow.Session$Runner.runAndFetchMetadata(Session.java:260)
    at org.tensorflow.contrib.android.TensorFlowInferenceInterface.run(TensorFlowInferenceInterface.java:220)
    at org.tensorflow.contrib.android.TensorFlowInferenceInterface.run(TensorFlowInferenceInterface.java:197)
    at com.dailystudio.deeplab.ml.DeeplabModel.segment(DeeplabModel.java:117)
    at com.dailystudio.deeplab.SegmentBitmapsLoader.loadInBackground(SegmentBitmapsLoader.java:92)
    at com.dailystudio.deeplab.SegmentBitmapsLoader.loadInBackground(SegmentBitmapsLoader.java:27)
    at android.support.v4.content.AsyncTaskLoader.onLoadInBackground(AsyncTaskLoader.java:306)
    at android.support.v4.content.AsyncTaskLoader$LoadTask.doInBackground(AsyncTaskLoader.java:59)
    at android.support.v4.content.AsyncTaskLoader$LoadTask.doInBackground(AsyncTaskLoader.java:47)
    at android.support.v4.content.ModernAsyncTask$2.call(ModernAsyncTask.java:138)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        ... 3 more
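
The kernel list in the message narrows down the cause: the node SemanticPredictions is a Slice over the INT64 output of ArgMax, but the prebuilt Android TensorFlow library only registers Slice kernels for DT_BOOL, DT_FLOAT, and DT_INT32. A minimal sketch (assuming TF 1.x) showing where the INT64 comes from:

    import tensorflow as tf

    # tf.argmax returns DT_INT64 by default; that int64 tensor is what the
    # graph's final Slice (node 'SemanticPredictions') consumes on device.
    logits = tf.zeros([1, 65, 65, 21])  # dummy logits; 21 = PASCAL VOC classes
    preds_default = tf.argmax(logits, axis=3)                      # dtype: int64
    preds_int32 = tf.argmax(logits, axis=3, output_type=tf.int32)  # dtype: int32
    print(preds_default.dtype, preds_int32.dtype)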
dailystudio commented 5 years ago

Yes, the official pre-trained models cannot be used directly on the device; that is why I created this project.

As described in step 3 of the "Preparing the models" section, you need to modify the export script so the prediction tensor is cast from INT64 to INT32 before it is sliced:

semantic_predictions = tf.slice(
    # Cast the ArgMax output from INT64 to INT32 so the mobile runtime's
    # Slice kernel (registered only for bool/float/int32) can execute it.
    tf.cast(predictions[common.OUTPUT_TYPE], tf.int32),
    [0, 0, 0],
    [1, resized_image_size[0], resized_image_size[1]])
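
After re-running the export script and freezing the graph, you can confirm the fix took effect before deploying. A quick sketch, assuming TF 1.x and the standard frozen_inference_graph.pb file name from the model zoo:

    import tensorflow as tf

    # Load the re-exported frozen graph and check that the SemanticPredictions
    # output is now DT_INT32, which the Android runtime can slice.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:  # assumed path
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        output = graph.get_tensor_by_name('SemanticPredictions:0')
        print(output.dtype)  # expect <dtype: 'int32'> after the cast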