tensorflow / tfjs

A WebGL accelerated JavaScript library for training and deploying ML models.
https://js.tensorflow.org
Apache License 2.0

Object detection model gives output of '1' or '0' instead of real prediction values #7102

Closed Nnamaka closed 1 year ago

Nnamaka commented 1 year ago

System information

Describe the current behavior
I get actual output values from my custom detection model only when using the remote debugger on my PC, as in the screenshot below.

Screenshot (112)

But when I stop using the remote debugger, the output values are either '1' or '0'. These are not reasonable output values I expect from the detection model because I can't use these outputs to calculate bounding boxes or know the classes.

Screenshot (120)

This is how I load my detection model:

const modelJson = require("./assets/tfjsexport/model.json");

const modelWeights = [
  require("./assets/tfjsexport/group1-shard1of3.bin"),
  require("./assets/tfjsexport/group1-shard2of3.bin"),
  require("./assets/tfjsexport/group1-shard3of3.bin"),
];

const roiModel = await tf
  .loadGraphModel(bundleResourceIO(modelJson, modelWeights))
  .catch((e) => {
    console.log("[LOADING ERROR] info:", e);
  });
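As an aside, because the .catch handler above logs the error and resolves to undefined, inference can later run against a missing model. A minimal sketch of a guard, assuming nothing beyond plain JavaScript (the assertModelLoaded helper is hypothetical, not part of tfjs):

```javascript
// Hypothetical guard: fail fast if loadGraphModel resolved to undefined
// because a .catch handler swallowed the loading error.
function assertModelLoaded(model) {
  if (model == null) {
    throw new Error("Model failed to load; check the [LOADING ERROR] log.");
  }
  return model;
}

// Usage sketch (in the app, with tfjs available):
// const roiModel = assertModelLoaded(
//   await tf.loadGraphModel(bundleResourceIO(modelJson, modelWeights))
//     .catch((e) => console.log("[LOADING ERROR] info:", e))
// );
```

This keeps the load failure visible at the call site instead of surfacing later as a confusing inference error.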

And this is how I get the prediction values:

const predictionsData = await model.executeAsync(imagesTensor);

Describe the expected behavior
I expect the model to output reasonable values like the ones below even when I don't use the remote debugger.

Screenshot (113)

Standalone code to reproduce the issue
My custom-trained model and code are hosted on GitHub here. The model.json and its weights (*.bin) are inside the assets/tfjsexport directory. I use the Expo Go app to run the code.

On your terminal, expo start starts the server. After that, you can scan the QR code with the Expo Go app on your mobile device.

Other info / logs
I retrieve the prediction results with model.executeAsync(). Then I console-log the detected classes and scores.

export const makePredictions = async (model, imagesTensor) => {
  console.log('inferring model...');
  const predictionsData = await model.executeAsync(imagesTensor);
  imagesTensor.dispose();

  return predictionsData;
};

// ...

const predictions = await makePredictions(model, tensorImage);

console.log('Gotten final results');

// arraySync() and dataSync() are synchronous, so no await is needed here.
const boxes = predictions[6].arraySync();
const scores = predictions[4].arraySync();
const classes = predictions[1].dataSync();

console.log(classes);
console.log(scores);
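For context, once real (non-binary) values come back, the post-processing is plain JavaScript. A minimal sketch of score-threshold filtering and box scaling, assuming the boxes are normalized [ymin, xmin, ymax, xmax] rows (the actual output order and layout depend on the exported model; filterDetections is a hypothetical helper):

```javascript
// Hypothetical post-processing: keep detections above a score threshold
// and scale normalized [ymin, xmin, ymax, xmax] boxes to pixel coordinates.
function filterDetections(boxes, scores, classes, imgW, imgH, minScore = 0.5) {
  const kept = [];
  scores.forEach((score, i) => {
    if (score < minScore) return;
    const [ymin, xmin, ymax, xmax] = boxes[i];
    kept.push({
      class: classes[i],
      score,
      box: {
        x: xmin * imgW,
        y: ymin * imgH,
        w: (xmax - xmin) * imgW,
        h: (ymax - ymin) * imgH,
      },
    });
  });
  return kept;
}
```

With scores of all 1s and 0s, as reported above, this kind of thresholding degenerates, which is why the binary outputs make bounding-box calculation impossible.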

Expo SDK version: 47.0.8

package.json

"dependencies": {
  "@react-native-async-storage/async-storage": "~1.17.3",
  "@tensorflow/tfjs": "4.0.0",
  "@tensorflow/tfjs-react-native": "^0.8.0",
  "expo": "~47.0.8",
  "expo-camera": "~13.1.0",
  "expo-gl": "~12.0.0",
  "expo-gl-cpp": "^11.4.0",
  "expo-image-manipulator": "~11.0.0",
  "expo-status-bar": "~1.4.2",
  "react": "18.1.0",
  "react-native": "0.70.5",
  "react-native-barcode-mask": "^1.2.4",
  "react-native-fs": "^2.20.0",
  "react-native-mlkit-ocr": "^0.2.5"
},
Nnamaka commented 1 year ago

Note: I can confirm that my model.json (and its weights) gives great prediction results. I know this because when I use the remote debugger, I get actual prediction values that I use to calculate bounding boxes for the detected regions, and they are correct.

I have tried a few tensorflow/tfjs versions, but the issue persists. I also looked at similar issues (#5162, #6507, #5145, #4339) for hints on how to tackle this, but I couldn't find a solution to my problem.

@vladmandic, in #5145, had WEBGL_FORCE_F16_TEXTURES set to true via tf.ENV.set('WEBGL_FORCE_F16_TEXTURES', true), but mine is set to false.
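For reference, a minimal sketch of how one might decide whether to adjust the WebGL precision flags before loading the model. The shouldForceFloat32 decision helper is hypothetical; the tf calls in the comments assume the standard tfjs flag API (tf.env().set / tf.env().features):

```javascript
// Hypothetical helper: given a plain object of flag values (such as
// tf.env().features), decide whether to force full float32 precision.
// Half-precision (F16) textures on some mobile GPUs can clip or round
// model outputs, which is one possible cause of degenerate predictions.
function shouldForceFloat32(flags) {
  const f16Forced = flags.WEBGL_FORCE_F16_TEXTURES === true;
  const f32Capable = flags.WEBGL_RENDER_FLOAT32_CAPABLE === true;
  return f16Forced || !f32Capable;
}

// Usage sketch (in the app, with tfjs available):
// await tf.ready();
// if (shouldForceFloat32(tf.env().features)) {
//   tf.env().set('WEBGL_FORCE_F16_TEXTURES', false);
//   tf.env().set('WEBGL_RENDER_FLOAT32_CAPABLE', true);
// }
```

Logging tf.env().features with and without the remote debugger attached would also show whether the two environments resolve the flags differently.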

Please, what am I missing?

My model and my code are hosted on my GitHub here.

gaikwadrahul8 commented 1 year ago

Hi, @Nnamaka

I apologize for the delayed response. I have been trying to replicate the issue you are experiencing, but I am getting errors related to package dependency conflicts, so I am unable to replicate the same issue on my end.

Could you please update or upgrade your package dependencies and see if that resolves the issue? If not, could you please provide a reproducible GitHub repo link so that I can try to replicate the issue on my end? This will help us to further investigate the issue.

Thank you for your patience and cooperation.

github-actions[bot] commented 1 year ago

This issue has been marked stale because it has had no activity in the last 7 days. It will be closed if no further activity occurs. Thank you.

github-actions[bot] commented 1 year ago

This issue was closed due to lack of activity after being marked stale for the past 7 days.

google-ml-butler[bot] commented 1 year ago

Are you satisfied with the resolution of your issue?