kant2002 opened 1 year ago
Noob question here: what is the difference between the NativeAOT that we currently have in the runtime repo and this NativeAOT-LLVM? Are they doing the same thing? Which one is better? I mean, what's the actual difference?
Pardon these very basic questions!
NativeAOT-LLVM is a version of NativeAOT that can compile to LLVM IR and WASM. You can think of it as the implementation of -r browser-wasm (and -r wasi-wasm) for NativeAOT.
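In practice, using it looks roughly like this (a sketch only: the package name, feed, and exact steps are assumptions on my part, and the repo's docs are the authoritative source):

```
# Sketch, not authoritative: package/feed names are assumptions,
# check the NativeAOT-LLVM docs in this repo for the real steps.
dotnet new console -o hellowasi && cd hellowasi
dotnet add package Microsoft.DotNet.ILCompiler.LLVM --prerelease
dotnet publish -r wasi-wasm -c Release    # or: -r browser-wasm
```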
Hey @SingleAccretion, thanks for the reply. If I am not wrong, machine code is generated from LLVM IR. And we are already generating code for NativeAOT, as we can see in the runtime repo, so can't we use that to do the same for browser-wasm and wasi-wasm? Why do we need this LLVM version of NativeAOT? Does the NativeAOT we have in runtimelab have shortcomings?
I might not understand the full picture here, sorry for that!
> And we are already generating code for NativeAOT, as we can see in the runtime repo, so can't we use that to do the same for browser-wasm and wasi-wasm? Why do we need this LLVM version of NativeAOT?
WASM is a codegen target of its own, like x64, x86, arm64, etc. Upstream RyuJIT doesn't support targeting WASM, and neither does the upstream NativeAOT runtime. This experimental branch adds the necessary support. Using LLVM for codegen is certainly a choice (we could go straight to WASM), but it has both historical and technical advantages, like architecturally allowing for the support of targets other than WASM.
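To make the "LLVM for codegen" point concrete: LLVM's own toolchain already knows how to lower IR to WASM (as well as x64, arm64, and others), so emitting LLVM IR lets the compiler reuse that backend. A minimal sketch, assuming a module.ll holding compiler-emitted IR:

```
# Sketch: lower LLVM IR to a WASM object file, then link it into a module.
# module.ll is a placeholder for whatever IR the compiler produced.
llc -mtriple=wasm32-unknown-unknown -filetype=obj module.ll -o module.o
wasm-ld module.o -o module.wasm --no-entry --export-all
```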
Thanks @SingleAccretion, now I understand it! So this NativeAOT-LLVM is specifically for the WASM/WASI app model.
@ShreyasJejurkar also for WASM in the browser. WASI is a very recent addition and we haven't taken it that far yet.
Yeah @kant2002, WASI is new. I also want to know: did we benchmark our current WASM against NativeAOT WASM? Is the difference significant in terms of perf and size?
Size is worse than the WASI SDK for now; perf is better (at least in number-crunching cases). I took this sample https://github.com/hanabi1224/Programming-Language-Benchmarks/blob/main/bench/algorithm/lru/2.cs from https://programming-language-benchmarks.vercel.app/rust-vs-csharp and ran it with the parameters 100 10000000:
> date ; wasmtime artifacts\bin\hellowasi\debug_wasi-wasm\AppBundle\hellowasi.wasm --mapdir .::. -- 100 10000000; date
Mon, May 29, 2023 2:47:43 PM
969216
9030784
Mon, May 29, 2023 2:48:32 PM
> date ; wasmer C:\d\github\nativeaotwasm\artifacts\publish\hellowasi\debug_wasi-wasm\hellowasi.wasm --mapdir .:. -- 100 10000000; date
Mon, May 29, 2023 2:57:24 PM
969216
9030784
Mon, May 29, 2023 2:57:52 PM
Unfortunately I cannot make both run on the same runtime, but I think the numbers should still apply: the two runs above took roughly 49 and 28 seconds of wall-clock time, respectively.
Just to give an idea of the areas that can be improved in NativeAOT-LLVM, based on @SingleAccretion's words:
0) Build the runtime first. If you are in doubt about how to do that, go to https://discord.gg/csharp and the #allow-unsafe-blocks channel and ask questions; we will try to help.
List of tasks from the POV of @SingleAccretion:
1) Finishing exceptions support for browser targets (https://github.com/dotnet/runtimelab/issues/2169).
2) Looking at implementing finalization (https://github.com/dotnet/runtimelab/issues/2240).
3) Looking at enabling the smoke tests that are disabled right now. <-- May be an attractive option for doing smaller things.
4) Enabling some libraries tests and fixing bugs.
5) Looking at more efficient shadow stack allocation algorithms and other codegen optimizations, e.g. https://github.com/dotnet/runtimelab/issues/2000 (also may be attractive, if not correctness-related).
6) Looking at threading. This is a bit hard because LLVM is quite unlike .NET in its memory model.
7) Writing jit-analyze-like tooling to assess codegen changes.
8) Compile-time improvements: parallel compilation and looking at ILC profiles to optimize things.
9) For WASI specifically, we don't write the export list at the moment, so you also need -Wl,--exports (see the sketch after this list). Another thing to fix.
Any of these could be done in parallel.
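For item 9, here is a sketch of what passing exports manually through the linker can look like. MyExport is a placeholder name, --export= is the wasm-ld spelling of the flag, and the exact invocation your build uses may differ:

```
# Sketch for item 9: force a symbol into the WASM export list by hand,
# since the compiler doesn't write the export list itself yet.
# MyExport is a placeholder; adjust for your actual entry points.
clang --target=wasm32-wasi main.o -o app.wasm -Wl,--export=MyExport
```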