Closed: jacobgil closed this issue 6 years ago.
I haven't tried it explicitly, though I think I've run the demo on a device that used the CPU backend (an iPad Pro?). That seems like an extremely odd problem, though...
I'm not 100% certain, but I'm pretty sure you'd want to set the backend before downloading the model (I'd assume that post-download, the model's tensors have already been created). I'm not sure why it would silently crash, and to be honest I wouldn't know where to even start looking into that...
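Something like this is the ordering I have in mind (a rough, untested sketch using the `downloadModel`/`yolo` usage from the README):

```js
import * as tf from '@tensorflow/tfjs';
import yolo, { downloadModel } from 'tfjs-yolo-tiny';

async function run(inputImage) {
  // Pick the backend first, before any model tensors exist.
  await tf.setBackend('cpu');

  // Then download the model; its weights get created on the chosen backend.
  const model = await downloadModel();

  // Run detection as usual.
  const boxes = await yolo(inputImage, model);
  return boxes;
}
```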
I'm taking a stab in the dark here. I'm assuming it crashes because setting the backend throws out the old environment, including any tensors created on it (silently crashing is still bad, though; a ticket should be opened in tfjs if that's the case). So any tensors created implicitly on import get destroyed. AFAIK, YOLO_ANCHORS is the only place where I create tensors implicitly (sorry!). Check it out here: https://github.com/ModelDepot/tfjs-yolo-tiny/blob/master/src/postprocess.js#L6. Try either calling tf.setBackend before that module gets imported, or lazy-loading it, and hopefully that'll work?
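Here's a rough sketch of what I mean by lazy-loading it (a hypothetical refactor of postprocess.js; the anchor values below are the standard YOLOv2 ones shown only for illustration, use whatever the file actually defines):

```js
import * as tf from '@tensorflow/tfjs';

// Plain numbers only; no tensor is created at import time.
const YOLO_ANCHOR_VALUES = [
  [0.57273, 0.677385], [1.87446, 2.06253], [3.33843, 5.47434],
  [7.88282, 3.52778], [9.77052, 9.16828],
];

let yoloAnchors = null;

// The tensor is only created on first use, i.e. after tf.setBackend() has run.
export function getYoloAnchors() {
  if (!yoloAnchors) {
    yoloAnchors = tf.tensor2d(YOLO_ANCHOR_VALUES);
  }
  return yoloAnchors;
}
```

Call sites would then use getYoloAnchors() instead of the YOLO_ANCHORS constant.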
Let me know how it goes! I wish I could help more, but unfortunately I'm still a bit tied up with other work :(
It turns out that calling tf.setBackend('cpu') works, but then inference is extremely slow (10 seconds!). With webgl it is much faster, even though it's still running on a CPU.
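For what it's worth, this is roughly how I timed it (a quick sketch; `model` comes from downloadModel() and `inputImage` from the webcam capture as in the README usage, and it assumes tf.getBackend() is available in the version in use):

```js
import * as tf from '@tensorflow/tfjs';
import yolo from 'tfjs-yolo-tiny';

// model: result of downloadModel(); inputImage: a captured frame.
async function timeDetection(model, inputImage) {
  console.log('Active backend:', tf.getBackend()); // 'cpu' or 'webgl'

  const t0 = performance.now();
  const boxes = await yolo(inputImage, model);
  console.log(`Detection took ${Math.round(performance.now() - t0)} ms`);
  return boxes;
}
```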
Closing the issue; I think it's a general TensorFlow.js problem.
Hi, I'm trying to run this on the cpu backend as opposed to the default webgl backend.
When I call tf.setBackend('cpu') before downloading the model, it seems to silently crash and gets stuck. When I call it after downloading the model, I get errors about changing the backend twice.
Were you able to experiment with the cpu backend?