[Closed] rodyherrera closed this issue 2 months ago.
Hi @rodyherrera,
I apologize for the delay in my response. As far as I know, the @tensorflow-models/handpose model is not officially compatible with the tensorflow backend in a Node.js environment, which may be why you're encountering the error `Error: Kernel 'RotateWithOffset' not registered for backend 'tensorflow'`.
If you don't want to use a GPU for some reason, I would say the safest and most reliable option for now is the CPU backend (`await tf.setBackend('cpu');`) with @tensorflow-models/handpose in Node.js. This ensures the model works as intended.
Alternatively, could you please try setting `IS_NODE` to `false` before you load the model, to avoid the above error?

`tf.env().set('IS_NODE', false);`
Thank you for your cooperation and patience.
This issue has been marked stale because it has had no recent activity for 7 days. It will be closed if no further activity occurs. Thank you.
This issue was closed due to lack of activity after being marked stale for the past 7 days.
System information
- Have I written custom code: Yes
- OS Platform and Distribution: Linux Ubuntu 22.04
- TensorFlow.js installed from: NPM
- TensorFlow.js version: 4.20.0
Describe the current behavior
I'm running the @tensorflow-models/handpose 0.1.0 model on Node.js v20.15.0 using @tensorflow/tfjs-node 4.20.0. My server doesn't have a GPU, so with the cpu backend the model works, but it takes 6 seconds to make a prediction, which is absurd. If I use 'tensorflow' as the backend, predictions are dramatically faster, but I get the following error: `Error: Kernel 'RotateWithOffset' not registered for backend 'tensorflow'`.
@tensorflow-models/handpose doesn't allow running the tensorflow backend, but cpu or gpu works. When I start my backend server, the TensorFlow library itself prints a message recommending the tensorflow backend if I don't use it. The only way to avoid that message is to change the backend to tensorflow, and as the message says (and as I mentioned above), yes, performance improves dramatically, but then the model stops working, really...?

Here is the full error:
As you can see, at first the model makes predictions and detects, in this case, that there is no gesture because there is no hand in the images sent to it. However, when it receives a photo that DOES contain a hand, that is when the error occurs.
Describe the expected behavior
When using the 'cpu' backend, the model makes the prediction correctly, the only difference being that the time it takes is ridiculous. Below is a screenshot of how the model's prediction is returned when, in this case, an image of an open hand is sent to it.
The first detection returns an object whose 'gesture' key has the value 'Handpose::HandsOpen'; basically, with the 'cpu' backend it works as expected. But if I change the backend to 'tensorflow', it stops making predictions, throwing the error described above.
Standalone code to reproduce the issue
Is there a way to fix this so I can use the model with tensorflow as the backend, as the library itself recommends at startup when using cpu?