tensorflow / tfjs

A WebGL accelerated JavaScript library for training and deploying ML models.
https://js.tensorflow.org
Apache License 2.0

Deeplab/Segmentation: Error: The dtype of dict['ImageTensor'] provided in model.execute(dict) must be int32, but was float32 #3723

Closed: un1crom closed this issue 4 years ago

un1crom commented 4 years ago

TensorFlow.js version

"@tensorflow/tfjs-node": "^2.0.1" "@tensorflow/tfjs": "^2.0.1"

Browser version

Node.js and Express.js, no browser

Describe the problem or feature request

When using a locally downloaded and served model, the index.js file in the NPM/node module requires me to cast the tensor to int32, or it errors out about float32 no matter what I pass it.

Code to reproduce the bug / link to feature request

model loading

const loadModelDeepLab = async () => {
  const modelName = 'pascal';   // set to your preferred model, either `pascal`, `cityscapes` or `ade20k`
  const quantizationBytes = 2;  // either 1, 2 or 4
  const url = 'https://tfhub.dev/tensorflow/tfjs-model/deeplab/pascal/1/default/1/model.json?tfjs-format=file';
  return await deeplab.load({modelUrl: url, base: modelName, quantizationBytes});
};
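
For context, this is the call pattern that triggers the error; a minimal sketch, where the @tensorflow-models/deeplab package name, the segment() call, and the input.png path are my assumptions, not from the original report:

const tf = require('@tensorflow/tfjs-node');
const deeplab = require('@tensorflow-models/deeplab');
const fs = require('fs');

const run = async () => {
  const model = await loadModelDeepLab();
  // decodeImage yields an int32 tensor, matching the dump below,
  // yet predict/execute still complains about float32
  const image = tf.node.decodeImage(fs.readFileSync('input.png'), 3);
  const result = await model.segment(image);
  image.dispose();
  return result;
};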

info on the tensor

Tensor {
  kept: false,
  isDisposedInternal: false,
  shape: [ 1000, 1000, 3 ],
  dtype: 'int32',
  size: 3000000,
  strides: [ 3000, 3 ],
  dataId: {},
  id: 2,
  rankType: '3',
  scopeId: 0 }

the error output

"(node:35252) UnhandledPromiseRejectionWarning: Error: The dtype of dict['ImageTensor'] provided in model.execute(dict) must be int32, but was float32
"

I manually changed the offending code to the following, and everything runs:

    SemanticSegmentation.prototype.predict = function (input) {
        var _this = this;
        return tf.tidy(function () {
            var data = utils_1.toInputTensor(input);
            // cast the input to int32 before execute; the graph's
            // ImageTensor input rejects float32
            return tf.squeeze(_this.model.execute(tf.cast(data, "int32")));
        });
    };
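
Alternatively, rather than patching the compiled file inside node_modules, the same cast can be applied by calling the underlying graph model directly; a sketch assuming the model returned by loadModelDeepLab above (the .model property is internal, so this mirrors the patch rather than a public API):

const run = async (inputTensor) => {
  const deeplabModel = await loadModelDeepLab();
  // same cast as the patch above, applied from the call site
  return tf.tidy(() =>
    tf.squeeze(deeplabModel.model.execute(tf.cast(inputTensor, 'int32'))));
};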
rthadur commented 4 years ago

@un1crom please check for a related Stack Overflow issue here

google-ml-butler[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you.

google-ml-butler[bot] commented 4 years ago

Closing as stale. Please @mention us if this needs more attention.

parthlathiya2697 commented 3 years ago

I got the same error.

My scenario: I fine-tuned a pre-trained model from the TensorFlow Model Zoo via transfer learning with the TensorFlow API, exported it as a SavedModel (model.pb file), and converted it to tfjs format (model.json and sharded .bin files).

When I try to run this model.json in JavaScript (web), it gives the error below:

Uncaught (in promise) Error: The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32

When I tried someone else's working converted model (model.json and sharded .bin files) in my JavaScript (web) code, it worked.

Conclusion: there is something wrong with my converted model. I converted it using tensorflowjs_converter. My original model (model.pb) works accurately in Python, too.

I'm still trying to convert my model.pb file with different tensorflowjs_converter versions, as it seems to be a converter versioning issue.
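
One way to narrow this down is to load the converted model and print the input dtype it declares; a minimal sketch under tfjs-node (the web_model path is a placeholder):

const tf = require('@tensorflow/tfjs-node');

(async () => {
  const model = await tf.loadGraphModel('file://./web_model/model.json');
  // each entry reports the declared name, shape, and dtype of a model input
  model.inputs.forEach((t) => console.log(t.name, t.shape, t.dtype));
})();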

maxbauer commented 3 years ago

Same problem for me when trying to run the TensorFlow SavedModel Import Demo with a custom SSD MobileNet v2 model.

@parthlathiya2697 it worked for me with the following cast:

const integerTensor = imageToPredict.toInt();
return this.model.executeAsync(integerTensor); //execute() doesn't work for me here
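
Note that tf.tidy cannot wrap executeAsync (tidy rejects functions that return a Promise), so the intermediate int32 tensor has to be disposed manually; a sketch reusing the names above:

async function predict(model, imageToPredict) {
  const integerTensor = imageToPredict.toInt();
  try {
    // executeAsync is needed when the graph contains control-flow ops,
    // as SSD detection models typically do
    return await model.executeAsync(integerTensor);
  } finally {
    integerTensor.dispose();
  }
}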
dubrovin-sudo commented 1 year ago

System information

!pip install tensorflowjs

Describe the current behavior

After the conversion, the input data type changed from float32 to int32.

Describe the expected behavior

The input data type remains the same (float32).

Standalone code to reproduce the issue

!tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    --signature_name=serving_default \
    --saved_model_tags=serve \
    /content/gdrive/MyDrive/customTF2/data/inference_graph/saved_model \

Other info / logs

TFmodel.signatures['serving_default'].output_dtypes
{'detection_anchor_indices': tf.float32,
 'raw_detection_scores': tf.float32,
 'detection_classes': tf.float32,
 'num_detections': tf.float32,
 'raw_detection_boxes': tf.float32,
 'detection_boxes': tf.float32,
 'detection_multiclass_scores': tf.float32,
 'detection_scores': tf.float32}
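
The dict above lists the SavedModel's output dtypes, which stay float32; the change reported here is on the input side. My understanding (an assumption worth verifying) is that TF2 Model Zoo detection models declare a uint8 image input, and since tfjs has no uint8 dtype the converter maps it to int32. In the browser, tf.browser.fromPixels already returns an int32 tensor, so feeding it directly avoids the mismatch; a sketch:

import * as tf from '@tensorflow/tfjs';

async function runDetector(model, imgElement) {
  // fromPixels yields an int32 tensor; expandDims adds the batch dimension
  const input = tf.browser.fromPixels(imgElement).expandDims(0);
  const result = await model.executeAsync(input);
  input.dispose();
  return result;
}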

Colab link

hozeis commented 1 year ago

Has there been a fix or bypass for this issue? I created a model using transfer learning from the modelzoo/mask-rcnn model.