Closed: ray-1337 closed this issue 2 years ago.
@ray-1337 is it possible to share the model?
Sorry, I did not find a model.json file. Are you running any of the examples from the above repo?
I don't know if this helps, but I seemed to be experiencing the same thing (can't dispose of tensors), and seem to have fixed it by changing calls from myTensor.dispose() to tf.dispose(myTensor) and also changing myModel.dispose() to tf.dispose(myModel).
I am only using tf.tidy() in a few limited spots, and that isn't leaking.
I monitored tf.memory().numTensors before and after, and now have 0 leaks when using tf.dispose(thingToBeDisposed).
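For reference, a minimal sketch of that pattern (the function name, model, and input shape here are placeholders, not taken from the code above):

import * as tf from '@tensorflow/tfjs-node';

async function runOnce(model: tf.LayersModel) {
  const before = tf.memory().numTensors;
  const input = tf.zeros([1, 299, 299, 3]);
  // Intermediate tensors created inside tf.tidy are released automatically;
  // only the returned prediction tensor survives.
  const output = tf.tidy(() => model.predictOnBatch(input) as tf.Tensor);
  const values = output.dataSync();
  // Free the tensors kept alive outside the tidy via tf.dispose(...)
  // rather than calling .dispose() on each one.
  tf.dispose(input);
  tf.dispose(output);
  console.log('leaked tensors:', tf.memory().numTensors - before);
  return values;
}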
Is that even clearing the Node.js memory as well? EDIT: I don't think so.
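One rough way to check (a sketch, not taken from the code in this thread) is to compare TFJS's own tensor bookkeeping with the Node.js process memory before and after disposing. Note that with the node backend the tensor data lives in native memory rather than on the V8 heap, so heapUsed may barely move even when tensors are freed; rss is usually the more telling number.

import * as tf from '@tensorflow/tfjs-node';

function snapshot(label: string) {
  const {numTensors, numBytes} = tf.memory();     // tensors tracked by the TFJS backend
  const {heapUsed, rss} = process.memoryUsage();  // the Node.js process itself
  console.log(label, {
    numTensors,
    numBytes,
    heapUsedMB: Math.round(heapUsed / 1e6),
    rssMB: Math.round(rss / 1e6),
  });
}

const t = tf.zeros([1, 299, 299, 3]);
snapshot('before dispose');
tf.dispose(t);
snapshot('after dispose');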
@ray-1337 thank you for reporting the issue. I took a look at the code you provided, and a couple of things caught my eye.
Here is an example I tried, which only loads the model once; the heap size seems to stay stable. Please give it a try and let me know if it works for you.
import * as tf from '@tensorflow/tfjs-node';
import {Tensor, Tensor3D} from '@tensorflow/tfjs-node';
import * as fs from 'fs';

let counter = 0;
let model: tf.LayersModel;

async function loadModel() {
  return await tf.loadLayersModel(
      `file://./nsfwjs/example/nsfw_demo/public/model/model.json`);
}

async function inference() {
  // Dummy input standing in for a decoded image.
  let imageData = tf.zeros([1, 299, 299, 3]);

  // All intermediate tensors created inside tf.tidy are released automatically;
  // only the returned prediction tensor survives.
  let model_checking = tf.tidy(() => {
    let normalized = tf.scalar(255);
    let img = imageData.toFloat().div(normalized) as Tensor3D;
    let RB = tf.image.resizeBilinear(img, [299, 299], true);
    let batched = RB.reshape([1, 299, 299, 3]);
    return model.predictOnBatch(batched) as Tensor;
  });

  const classes = ['Cat', 'Dog'];
  let values = model_checking.dataSync();
  // const topK = Math.min(classes.length, values.length);
  // for (let i = 0; i < values.length; i++) value_index.push({ value:
  //   values[i], index: i });
  // value_index.sort((a, b) => b.value - a.value);
  // const topk = new Float32Array(topK), topkI = new Int32Array(topK);
  // for (let i = 0; i < topK; i++) {
  //   topk[i] = value_index[i].value;
  //   topkI[i] = value_index[i].index;
  // };
  // console.log({ className: classes[topkI[0]], probability: topk[0] });

  // Explicitly dispose the tensors that were kept alive outside tf.tidy.
  imageData.dispose();
  tf.dispose(model_checking);

  console.log(tf.memory());
  console.log(process.memoryUsage());
}

async function main() {
  model = await loadModel();
  for (let i = 0; i < 1000; i++) {
    await inference();
  }
  model.dispose();
  console.log(tf.memory());
}

main();
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you.
Closing as stale. Please @mention us if this needs more attention.
System information
- Node.js (v16.13.1)
- @tensorflow/tfjs-node (v3.12.0 - v3.13.0) (might still happen in previous versions)
- npm (v8.3.1)
- Linux, Ubuntu 20.0.4 (Server)

Describe the current behavior
The memory usage keeps increasing while/after predicting content; tidying/disposing the tensors does not change anything.

Describe the expected behavior
The memory usage should stay flat or decrease.
Standalone code to reproduce the issue
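(A rough sketch of the prediction loop described above; the model path is a placeholder and this is an illustration, not the actual reproduction code.)

import * as tf from '@tensorflow/tfjs-node';

async function main() {
  // Placeholder path; the report uses an nsfwjs layers model.
  const model = await tf.loadLayersModel('file://./model/model.json');
  for (let i = 0; i < 1000; i++) {
    const input = tf.zeros([1, 299, 299, 3]);
    const out = tf.tidy(() => model.predictOnBatch(input) as tf.Tensor);
    out.dataSync();
    tf.dispose([input, out]);
    // Log both the tensor count and the process memory to watch for growth.
    console.log(tf.memory().numTensors, process.memoryUsage().rss);
  }
  model.dispose();
}

main();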
Other info / logs
model.predict(tf.zeros([1, 299, 299, 3]))
5 hours later: