Closed: yuanming-hu closed this issue 2 years ago.
I'd be happy to help! Do you think something like gpu.js could be used to accelerate some computations?
Good question. I guess one thing to discuss here is "should we do pure JavaScript or leverage WebGL (fragment/compute shaders)?"
If we go WebGL, we get higher performance, but the computational pattern also gets restricted to pure array operations (e.g. `a[i, j] = ti.sqrt(b[i, j])`). This means we need some language restrictions on the frontend, and not every Taichi program can be compiled to WebGL. I'm not sure how compute shaders (see https://github.com/9ballsyndrome/WebGL_Compute_shader and https://www.khronos.org/registry/webgl/specs/latest/2.0-compute/) help with this.
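To make the restriction concrete, here is a pure-Python sketch (not actual Taichi code) contrasting an element-wise pattern that maps naturally to fragment shaders with a global reduction that does not:

```python
import math

def elementwise_sqrt(b):
    # Shader-friendly: each output cell depends only on the matching input
    # cell, so every (i, j) could be computed by an independent GPU thread.
    return [[math.sqrt(x) for x in row] for row in b]

def total_sum(b):
    # Not directly shader-friendly: a global reduction needs cross-thread
    # communication (or a multi-pass tree reduction) on the GPU.
    return sum(x for row in b for x in row)
```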
If we go JavaScript, it will run slower, but we can support many more computational patterns. It's also easier, since we can probably directly translate the generated LLVM IR into JavaScript. I would suggest starting with this path.
Let's narrow the scope to generating JavaScript via Emscripten until WebGL compute shaders are mature.
It seems that Emscripten itself is switching to the LLVM WASM backend. https://v8.dev/blog/emscripten-llvm-wasm
So one decision to be made: do we directly generate WASM via LLVM or go through Emscripten?
The former saves us from adding a dependency on Emscripten. The latter can generate JavaScript as well, which has better compatibility. Emscripten also seems better documented than the LLVM WASM backend.
A question to Web experts: how well supported is WASM on current browsers? If everyone's browser already supports WASM (https://caniuse.com/#feat=wasm) then maybe we should directly use the LLVM WASM backend?
An old Rust thread on WASM: https://github.com/rust-lang/rust/issues/33205
Inputs are welcome!
@yuanming-hu I think directly exporting Taichi to WASM should be fine. The majority of browsers already support this feature, and `asm.js` can be used as a fallback. Therefore, it should be fine to use WASM in most cases.
In comparison with the `Taichi -> LLVM -> WASM` approach, it's worth mentioning that we already have some nice progress on the `Taichi -> C -> WASM` approach: https://github.com/taichi-dev/taichi.js
Cool! Could you share some insights into the future plan?
Here's my plan:
- Release the C backend (on which the Emscripten path is based) on Windows and OS X too.
- Make Taichi.js a powerful tool for creating heavy Web VFX.
- Set up a server that compiles Taichi kernels into WASM to run on the client, so that people can play with Taichi online without installing Python.
- We may even consider utilizing WebGL once compute shaders are mature there, via the OpenGL backend.
This is happening under https://github.com/taichi-dev/taichi.js ? Interested in this project, just wondering if there is any starting point for collaboration?
This is happening under https://github.com/taichi-dev/taichi.js ?
Yes, except that item 1 is actually happening under https://github.com/taichi-dev/taichi.
Interested in this project, just wondering if there is any starting point for collaboration?
Oh that would be great! Here's something we could do at this moment:
Thanks for all the discussions here!
On the compiler side, so far there are two approaches to generating WASM/JS:

1. `Taichi -> C -> WASM`: an initial step in this direction is nicely done by @archibate.
2. `Taichi -> LLVM -> WASM`: this direction has not started, but it has great potential. Specifically, since the LLVM backend already has good support for all the feature extensions (especially sparse computation), following this path will allow users to demonstrate really cool sparse computation tasks in the browser.

On the web development side, a cool thing we can do is host a TaichiHub website that allows users to share their WASM/JS programs generated by Taichi. Good references are https://allrgb.com/ and https://www.shadertoy.com/. I can help raise some money for hosting the TaichiHub website if that's necessary :-)
Hi, everyone! Here's my recent progress on TaichiHub: http://142857.red:3389/
We can host the web & service on `vercel`. It provides a global CDN and it's free! If you think it is a good option, I can help with the deployment.
Free services are always good :-) We do need to run a Python program on the server and potentially host a database to store the shader data - does `vercel` support that?
`vercel` provides serverless functions. It does support database connections & a Python runtime. We could use `mongodb` to store the data, as `mongodb` also provides a free hosting service. We can discuss the capabilities in detail, but in theory it is totally doable.
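For reference, a `vercel` Python serverless function is just a class named `handler` subclassing `BaseHTTPRequestHandler`. A minimal sketch (the real function would query `mongodb` and invoke the compiler; the response body here is a placeholder):

```python
from http.server import BaseHTTPRequestHandler

# vercel's Python runtime looks for a class literally named `handler`.
class handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Placeholder response; a real handler would look up / compile a shader.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"taichi wasm service placeholder")
```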
@WenheLI TIL that zeit.co/now has been re-branded to `vercel`!
@yuanming-hu if we go with a serverless solution instead of containers or a hosted service, it looks like https://vercel.com/docs/serverless-functions/supported-languages provides services similar to AWS Lambda functions. (We might deploy the website on it later as well, in case we need to improve access speed.)
Wow, sounds like a really nice fit! They also seem to support Python dependencies: https://vercel.com/docs/runtimes#official-runtimes/python/python-dependencies - we can then just add `taichi` to `requirements.txt`. The global CDN feature also sounds nice - it seems that China/US always has a 300+ms ping. @archibate what do you think?
It would be nice to have a free server! My concern is whether `vercel` supports installing Emscripten (`emcc`) as a dependency.
Here's a list of requirements to host TaichiHub:

- A writable `/tmp` directory; the `ActionRecorder` needs a place to emit C source files.
- The ability to run `emcc -c /tmp/hello.c -o /tmp/hello.js` when requested.
- Caching of the generated `.js` and `.wasm` files for gallery shaders; otherwise it wastes resources if ~10 users request the same shader.
- Enough compute in the `free edition`, ideally <5s for each compilation.

If they provide this support, congrats! We can host TaichiHub there.
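A minimal sketch of the per-request compilation step, assuming Emscripten is on `PATH` (the open question above); the paths reproduce the example command, and any extra `emcc` flags are omitted:

```python
import os
import subprocess

def emcc_command(c_path, out_dir="/tmp"):
    # Build the emcc invocation from the quoted example:
    #   emcc -c /tmp/hello.c -o /tmp/hello.js
    base = os.path.splitext(os.path.basename(c_path))[0]
    js_path = os.path.join(out_dir, base + ".js")
    return ["emcc", "-c", c_path, "-o", js_path]

def compile_kernel(c_path):
    # Run emcc; this is what the server would do on a cache miss.
    subprocess.run(emcc_command(c_path), check=True)
```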
@archibate We only need to investigate whether `vercel` allows installing external packages. For the cache, we can use a database to handle it; persistent storage can also be achieved with the database.
And in the worst case, if we cannot host compilation on `vercel`, we can still host the frontend on it. It gives good speed across the world.
Separating frontend and backend is a nice idea! So here's our workflow:

- The user requests `vercel` for the frontend webpages (accelerated by CDN).
- The user clicks the `RUN` button to send a request to the `vercel` server (accelerated by CDN).
- The `vercel` server checks `mongodb` for cached WASM; if not cached:
  - The `vercel` server sends a request to our non-free backend server.
  - The backend server returns a WASM file as the response.
  - The `vercel` server caches that WASM file to `mongodb`.
- The `vercel` server returns the WASM file to the user client for execution.

The backend server could also be protected with a password so that only the `vercel` server can invoke it. WDYT?

If we reach an agreement, I'll transform my current setup on 142857.red into a backend server; would you mind helping me move the frontend to `vercel`?

We may even make the frontend server non-Python (non-Flask), as its only job is to respond with HTML and redirect requests to our backend server, where Emscripten and Taichi are hosted.
Sounds like a plan; we can definitely do it!
Is your feature request related to a problem? Please describe.
Allowing Taichi to generate JavaScript code will enable many more people to play with state-of-the-art computer graphics in their browsers.

Describe the solution you'd like
More investigation is needed. Emscripten or WASM seem like good ways to go.
The kernel code will still be written in Python, yet a `ti.export` function will be added to dump a kernel into compiled JavaScript. Then users can load the generated JS and run it in HTML5.

The JavaScript backend does not have to support full Taichi functionality. For example, we can omit some sparse data structure support.
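As a hypothetical usage sketch: `ti.export` is the proposed API here and does not exist yet, so its signature and the output filename are assumptions.

```python
import taichi as ti

ti.init()

pixels = ti.field(ti.f32, shape=(512, 512))

@ti.kernel
def paint():
    for i, j in pixels:
        pixels[i, j] = (i + j) * 0.001

# Hypothetical: dump the compiled kernel as JavaScript/WASM for the browser.
ti.export(paint, "paint.js")
```

The generated `paint.js` could then be loaded from an HTML5 page like any other script.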
Discussions on/contributions to this are warmly welcome! :-)