taichi-dev / taichi

Productive, portable, and performant GPU programming in Python.
https://taichi-lang.org
Apache License 2.0
25.52k stars 2.29k forks

[Taichi.js] JavaScript Backend via Emscripten #394

Closed yuanming-hu closed 2 years ago

yuanming-hu commented 4 years ago

**Is your feature request related to a problem? Please describe.** Allowing Taichi to generate JavaScript code will enable many more people to play with state-of-the-art computer graphics in their browsers.

**Describe the solution you'd like** More investigation is needed. Emscripten or WASM seem like good ways to go.

The kernel code will still be written in Python, yet a ti.export function will be added to dump a kernel into compiled JavaScript. Users can then load the generated JS and run it in an HTML5 page.

The JavaScript backend does not have to support full Taichi functionality. For example, we can omit some sparse data structure support.

Discussions on/contributions to this are warmly welcome! :-)

r03ert0 commented 4 years ago

I'd be happy to help! Do you think something like gpu.js could be used to accelerate some computations?

yuanming-hu commented 4 years ago

Good question. I guess one thing to discuss here is "should we do pure JavaScript or leverage WebGL (fragment/compute shaders)?"

If we go WebGL, we get higher performance, but the computational pattern also gets restricted to pure array operations (e.g. a[i, j] = ti.sqrt(b[i, j])). This means we need some language restrictions on the frontend, and not every Taichi program can be compiled to WebGL. Not sure how compute shaders (see https://github.com/9ballsyndrome/WebGL_Compute_shader and https://www.khronos.org/registry/webgl/specs/latest/2.0-compute/) help with this.

If we go JavaScript, it will run slower, but we can support many more computational patterns. It's also easier, since we can probably directly translate the generated LLVM IR into JavaScript. I would suggest starting with this path.

yuanming-hu commented 4 years ago

Let's narrow the scope down to generating JavaScript via Emscripten until WebGL compute shaders are mature.

yuanming-hu commented 4 years ago

It seems that Emscripten itself is switching to the LLVM WASM backend. https://v8.dev/blog/emscripten-llvm-wasm

So one decision to be made: do we directly generate WASM via LLVM or go through Emscripten?

The former saves us from adding a dependency on Emscripten. The latter can generate JavaScript as well, which has better compatibility. Emscripten also seems better documented than the LLVM WASM backend.

A question to Web experts: how well supported is WASM on current browsers? If everyone's browser already supports WASM (https://caniuse.com/#feat=wasm) then maybe we should directly use the LLVM WASM backend?

An old Rust thread on WASM: https://github.com/rust-lang/rust/issues/33205

Inputs are welcome!

WenheLI commented 4 years ago

> So one decision to be made: do we directly generate WASM via LLVM or go through Emscripten? [...] A question to Web experts: how well supported is WASM on current browsers?

@yuanming-hu I think directly exporting Taichi to WASM should be fine. The majority of browsers already support it, and asm.js could be used as a fallback when WASM is unavailable. So WASM should work in most cases.
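To make the fallback check above concrete, here is a minimal sketch of feature-detecting WASM and running a module. The module bytes are a hand-assembled `add` function standing in for a Taichi-generated kernel, and `runKernel` is a hypothetical name, not part of any existing API:

```javascript
// Feature-detect WebAssembly, as suggested for the asm.js fallback path.
const hasWasm =
  typeof WebAssembly === "object" &&
  typeof WebAssembly.instantiate === "function";

// Minimal hand-written binary module:
//   (func (export "add") (param i32 i32) (result i32)
//     local.get 0  local.get 1  i32.add)
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                          // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                    // body: local.get, local.get, i32.add
]);

async function runKernel(a, b) {
  if (!hasWasm) {
    // Fallback path: an asm.js / plain-JS build would be loaded here instead.
    return a + b;
  }
  const { instance } = await WebAssembly.instantiate(bytes);
  return instance.exports.add(a, b);
}
```

In a real page the bytes would come from fetching the compiled .wasm file, where `WebAssembly.instantiateStreaming` avoids buffering the whole file first.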

archibate commented 4 years ago

In comparison to the Taichi -> LLVM -> WASM approach, it's worth mentioning that we already have some nice progress on the Taichi -> C -> WASM approach: https://github.com/taichi-dev/taichi.js

WenheLI commented 4 years ago

> In comparison to the Taichi -> LLVM -> WASM approach, it's worth mentioning that we already have some nice progress on the Taichi -> C -> WASM approach: https://github.com/taichi-dev/taichi.js

Cool! Could you share some insights about the future plan?

archibate commented 4 years ago

> Cool! Could you share some insights about the future plan?

Here's my plan:

  1. Release the C backend (which the Emscripten path is based on) on Windows and OS X too.
  2. Make Taichi.js a powerful tool for creating heavy Web VFX.
  3. Set up a server that compiles Taichi kernels into WASM to run on the client, so that people can play with Taichi online without installing Python.
  4. We may even consider utilizing WebGL, via the OpenGL backend, once compute shaders are mature there.

WenheLI commented 4 years ago

> Here's my plan: [...]

This is happening under https://github.com/taichi-dev/taichi.js? I'm interested in this project; is there a good starting point for collaboration?

archibate commented 4 years ago

> This is happening under https://github.com/taichi-dev/taichi.js?

Yes, except for 1 is actually happening under https://github.com/taichi-dev/taichi.

> I'm interested in this project; is there a good starting point for collaboration?

Oh, that would be great! Here's what we could do at the moment:

  1. Add more examples to our online demo.
  2. Set up a server that compiles Taichi and outputs WASM, possibly based on a Jupyter notebook.
  3. Fall back to asm.js when WASM is not available.
  4. Add API documentation for this project.

yuanming-hu commented 4 years ago

Thanks for all the discussions here!

On the compiler side, so far there are two approaches to generating WASM/JS.

On the web development side, a cool thing we can do is host a TaichiHub website that allows users to share their WASM/JS programs generated by Taichi. Good references are https://allrgb.com/ and https://www.shadertoy.com/. I can help raise some money for hosting the TaichiHub website if that's necessary :-)

archibate commented 4 years ago

> Good references are https://allrgb.com/ and https://www.shadertoy.com/. I can help raise some money for hosting the TaichiHub website if that's necessary :-)

Hi, everyone! Here's my recent progress on TaichiHub: http://142857.red:3389/

WenheLI commented 4 years ago

>> Good references are https://allrgb.com/ and https://www.shadertoy.com/. I can help raise some money for hosting the TaichiHub website if that's necessary :-)
>
> Hi, everyone! Here's my recent progress on TaichiHub: http://142857.red:3389/

We can host the web app & service on vercel. It provides a global CDN and it's free! If you think it's a good option, I can help with the deployment.

yuanming-hu commented 4 years ago

> We can host the web app & service on vercel. It provides a global CDN and it's free! If you think it's a good option, I can help with the deployment.

Free services are always good :-) We do need to run a Python program on the server and potentially host a database to store the shader data - does vercel support that?

WenheLI commented 4 years ago

>> We can host the web app & service on vercel. It provides a global CDN and it's free! If you think it's a good option, I can help with the deployment.
>
> Free services are always good :-) We do need to run a Python program on the server and potentially host a database to store the shader data - does vercel support that?

vercel provides serverless functions, and it supports database connections and a Python runtime. We could use MongoDB to store the data, since MongoDB also provides a free hosted service. We can discuss the capabilities in detail, but in theory it is totally doable.

rexwangcc commented 4 years ago

> We can host the web app & service on vercel. It provides a global CDN and it's free! If you think it's a good option, I can help with the deployment.

@WenheLI TIL that zeit.co/now has been re-branded to vercel!

@yuanming-hu if we go with a serverless solution instead of containers or a hosted service, it looks like https://vercel.com/docs/serverless-functions/supported-languages provides services similar to AWS Lambda Functions. (We might deploy the website on it later as well, in case we need to speed up access.)

yuanming-hu commented 4 years ago

> TIL that zeit.co/now has been re-branded to vercel!
>
> @yuanming-hu if we go with a serverless solution instead of containers or a hosted service, it looks like https://vercel.com/docs/serverless-functions/supported-languages provides services similar to AWS Lambda Functions.

Wow, that sounds like a really nice fit! They also seem to support Python dependencies: https://vercel.com/docs/runtimes#official-runtimes/python/python-dependencies, so we can just add taichi to requirements.txt. The global CDN feature also sounds nice: it seems that China/US connections always have a 300+ms ping. @archibate what do you think?

archibate commented 4 years ago

> @archibate what do you think?

It would be nice to have a free server! My concern is whether vercel supports installing Emscripten (emcc) as a dependency. Here's a list of requirements for hosting TaichiHub:

  1. A /tmp directory: the ActionRecorder needs a place to emit C source files.
  2. Emscripten installed: the server should launch emcc -c /tmp/hello.c -o /tmp/hello.js when requested.
  3. Somewhere to store user shaders, so that they can be shared and shown in the gallery.
  4. A cache for the compiled .js and .wasm files of gallery shaders; otherwise it wastes resources when ~10 users request the same shader.
  5. Reasonable performance on the free edition, ideally <5s per compilation.

If they provide all of this, congrats! We can host TaichiHub there.

WenheLI commented 4 years ago

@archibate We only need to investigate whether vercel allows installing external packages. As for the cache, we can use a database to handle it; persistent storage can also be achieved through the database.

And the worst case is that we cannot host compilation on vercel; we can still host the frontend on it, which gives good speed across the world.

archibate commented 4 years ago

> And the worst case is that we cannot host compilation on vercel; we can still host the frontend on it, which gives good speed across the world.

Separating the frontend and the backend is a nice idea! So here's our workflow:

  1. The user requests the frontend webpages from vercel (accelerated by the CDN).
  2. The user clicks the RUN button, sending a request to the vercel server (accelerated by the CDN).
  3. The vercel server checks mongodb for cached WASM; if it is not cached:
  4. The vercel server sends a request to our non-free backend server.
  5. The backend server returns a WASM file as the response.
  6. The vercel server caches that WASM file in mongodb.
  7. The vercel server returns the WASM file to the user's client for execution.

The backend server could also be protected with a password so that only the vercel server can invoke it. WDYT? If we reach an agreement, I'll turn my current setup on 142857.red into a backend server. Would you mind helping me move the frontend to vercel?

We may even make the frontend server non-Python (non-Flask), since its only job is to respond with HTML and redirect requests to our backend server, where Emscripten and Taichi are hosted.
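The caching steps above (3 through 7) can be sketched as follows. This is only an illustration: the Map stands in for mongodb, `compileOnBackend` is a stub in place of the real backend compile server, and all names here are hypothetical:

```javascript
// Sketch of the proposed flow: check the cache, otherwise ask the
// backend compile server, cache the result, and return it to the client.
const wasmCache = new Map(); // stand-in for mongodb

async function compileOnBackend(source) {
  // Placeholder for an authenticated HTTP call to the non-free backend
  // server that actually runs Taichi + Emscripten and returns WASM bytes.
  return Buffer.from("wasm:" + source); // fake payload for illustration
}

async function handleRunRequest(kernelSource) {
  const key = kernelSource; // a content hash would be used in practice
  if (wasmCache.has(key)) {
    return wasmCache.get(key);                       // step 3: cached, skip the backend
  }
  const wasm = await compileOnBackend(kernelSource); // steps 4-5: compile remotely
  wasmCache.set(key, wasm);                          // step 6: cache the result
  return wasm;                                       // step 7: return to the client
}
```

With this shape, a second request for the same shader is served from the cache without touching the backend server at all.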

WenheLI commented 4 years ago

> Separating the frontend and the backend is a nice idea! [...] Would you mind helping me move the frontend to vercel?

Sounds like a plan; we can definitely do it.