apache / tvm

Open deep learning compiler stack for CPU, GPU, and specialized accelerators
https://tvm.apache.org/
Apache License 2.0

What's the efficiency when compiled model runs on the web? #541

Closed HighCWu closed 7 years ago

HighCWu commented 7 years ago

I was very excited when I found that NNVM can compile an NN model to run on the web through TVM. I also know that the 'webdnn' project uses WebGL and some optimizations to make models run faster in the browser. Compared with 'webdnn', does TVM use any OpenGL functions in its code so that the model gets GPU acceleration when it runs on the web? If it does, or if this support will be added, it will help a lot when I try to show my model's results to my users. The reason I do not choose 'webdnn' is that that project supports few DL frameworks while yours supports many. I am looking forward to this project's performance; it will push machine learning forward.

tqchen commented 7 years ago

You can explore the web backend here https://github.com/dmlc/tvm/tree/master/web

For now, we have not yet invested effort in WebGL code generation because of OpenGL's limitations in expressing compute kernels. We do support Metal code generation, which effectively makes it work with WebGPU.

The WebAssembly code generation takes advantage of asm.js, so it will likely also give you fast performance, since it shares the same scheduling strategies as the CPU kernels.

tqchen commented 7 years ago

We also provide an RPC interface that lets you benchmark and tune your code from Python by connecting directly to your browser.
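The RPC workflow described above boils down to: ship a compiled kernel to the device (here, the browser), run it remotely a number of times, and report timings back to the Python tuning loop. The real interface lives in TVM's `tvm.rpc` module and its `time_evaluator`; the pure-Python sketch below only mimics the shape of that timing loop, and the `time_evaluator`/`dummy_kernel` names are illustrative, not TVM's API:

```python
import time

def time_evaluator(func, number=10, repeat=3):
    """Mimic the shape of TVM's remote time_evaluator: run `func`
    `number` times per measurement, take `repeat` measurements, and
    return the mean per-call time of each measurement in seconds."""
    results = []
    for _ in range(repeat):
        start = time.perf_counter()
        for _ in range(number):
            func()
        results.append((time.perf_counter() - start) / number)
    return results

# Stand-in for a compiled kernel that would actually run in the browser.
def dummy_kernel():
    sum(i * i for i in range(1000))

costs = time_evaluator(dummy_kernel, number=5, repeat=3)
best = min(costs)  # best mean per-call latency, the number a tuner would minimize
```

In the real setup, the measurement happens on the remote end (the browser session), so the Python side only sees the aggregated timings and can feed them into a tuning loop.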

HighCWu commented 7 years ago

As far as I know, WebGPU is currently only supported in Apple's Safari developer preview, and I do not know when Apple's standard will be widely adopted. I also have little confidence in the CPU kernel's speed when it runs the model, but I think I can give it a try and test it first. Thank you for your answer.

tqchen commented 7 years ago

There is also an ongoing effort on WebGL; we will post an announcement when there are results. Closing this for now.