Open lebron8dong opened 1 year ago
Hi, @lebron8dong
Apologies for the delayed response. The apply() method executes the layer's computation and returns tf.Tensor(s) when you call it with tf.Tensor(s). If you call it with tf.SymbolicTensor(s) instead, it prepares the layer for future execution. You can refer to the code snippet below:
const lstm = tf.layers.lstm({units: 8, returnSequences: true});
// Create an input with 10 time steps.
const input = tf.input({shape: [10, 20]});
const output = lstm.apply(input);
console.log(JSON.stringify(output.shape));
// [null, 10, 8]: 1st dimension is unknown batch size; 2nd dimension is the
// same as the sequence length of `input`, due to `returnSequences`: `true`;
// 3rd dimension is the `LSTMCell`'s number of units.
model.predict()
is used to run inference after training, on data the model hasn't seen (a test or validation dataset); you can refer to the code snippet below. I'm sorry, but I didn't follow your second question — could you please elaborate so I can try to answer it?
If you're looking to measure execution time in TFJS, have a look at the official documentation sections on Performance-Timing and tf.util.now; I hope that helps you resolve your issue. Thank you!
// Define a model for linear regression.
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
// Prepare the model for training: Specify the loss and the optimizer.
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
// Generate some synthetic data for training.
const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
const ys = tf.tensor2d([1, 3, 5, 7], [4, 1]);
// Train the model using the data then do inference on a data point the
// model hasn't seen:
await model.fit(xs, ys);
model.predict(tf.tensor2d([5], [1, 1])).print();
@gaikwadrahul8
I want to run inference at layer granularity and measure the running time of each layer during model inference.
But when I add up the running times of the individual layers, the total is much smaller than the time taken by model.predict().
// Model inference at layer granularity.
for (let i = 0; i < model.layers.length; i++) {
  if (i === 0) {
    const startTime = tf.util.now();
    testdata = model.layers[i].call(testdata);
    const endTime = tf.util.now();
    timeData.push(endTime - startTime);
    console.log(`Layer ${i} took ${endTime - startTime}ms`);
  } else {
    const startTime = tf.util.now();
    testdata = model.layers[i].apply(testdata);
    const endTime = tf.util.now();
    timeData.push(endTime - startTime);
    console.log(`Layer ${i} took ${endTime - startTime}ms`);
  }
}
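For comparison, the same per-stage timing pattern works as expected when the stages are synchronous. This is a minimal sketch in plain Node.js, with simple array functions as hypothetical stand-ins for layers (no TFJS involved); with synchronous work, the per-stage times do sum to the end-to-end time, which is exactly what breaks down once the kernels run asynchronously on the GPU:

```javascript
// Per-stage timing of a pipeline of synchronous stand-in "layers".
const layers = [
  x => x.map(v => v * 2),           // stand-in for a dense layer
  x => x.map(v => v + 1),           // stand-in for a bias add
  x => x.map(v => Math.max(v, 0)),  // stand-in for a ReLU
];

let data = [1, -2, 3];
const timings = [];
for (let i = 0; i < layers.length; i++) {
  const start = performance.now();
  data = layers[i](data);  // work completes before the call returns
  const end = performance.now();
  timings.push(end - start);
  console.log(`Layer ${i} took ${(end - start).toFixed(3)}ms`);
}
console.log(data);  // [ 3, 0, 7 ]
```

Because each stand-in layer finishes before returning, reading the clock right after the call is valid here; the question is what changes when the backend is asynchronous.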
@gaikwadrahul8 This can also be a feature request.
Hi, @lebron8dong
Apologies for the delayed response. I'm not sure at the moment whether this issue will be considered a feature request.
@Linchenn, Could you please look into this issue? Thank you!
When you print out the result after calling predict(), it waits for the GPU to finish all of the computation; when you only call apply(), it returns to JS without waiting for the GPU to finish.
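This asynchrony can be sketched in plain Node.js, without TFJS. Here runLayer is a hypothetical stand-in for a GPU-backed kernel: it resolves some milliseconds after the JS call returns, the way TFJS work finishes on the GPU after apply() has already handed control back to JS:

```javascript
// Stand-in for a GPU-backed layer: resolves `ms` milliseconds after the call.
function runLayer(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function measure() {
  // Wrong: stop the clock as soon as the call returns. The work is only
  // queued, so this mirrors timing apply() without syncing with the GPU.
  let start = performance.now();
  runLayer(50);  // deliberately not awaited
  const wrongElapsed = performance.now() - start;

  // Right: await completion before reading the clock, the way printing a
  // predict() result forces a wait for the GPU to finish.
  start = performance.now();
  await runLayer(50);
  const rightElapsed = performance.now() - start;

  console.log(`unsynced: ${wrongElapsed.toFixed(1)}ms, synced: ${rightElapsed.toFixed(1)}ms`);
  return {wrongElapsed, rightElapsed};
}

const done = measure();
```

The unsynced measurement comes back near zero while the synced one reflects the real cost, which is why summing per-layer apply() timings undercounts the time that predict() reports.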
@pyu10055 Thank you very much, I get it. So how should I measure the execution time of each layer during model inference? There is no API to get it directly. How can I implement this correctly at layer granularity? I think this method could measure the execution time of each layer. I have proposed a feature request, but I haven't received a response yet.
I use tf.util.now() to measure the model's running time.