Open titzer opened 1 year ago
Support was recently added to Binaryen, but hasn't been added to LLVM yet.
I'm curious if there is any example of "high level language being compiled to WebAssembly with multi-memory support"? I searched for a long time but still haven't found any. :(
(sorry, misread the above comment)
Support was recently added to Binaryen, but hasn't been added to LLVM yet.
Is anybody working on LLVM support? LLVM IR has address spaces, that can be used to represent multiple memories.
No, nobody is currently working on multimemory in LLVM, although Igalia's work adding support for tables is very similar to what would need to happen to support multiple memories as well.
I'm curious if there is any example of "high level language being compiled to WebAssembly with multi-memory support"? I searched for a long time but still haven't found any. :(
I don't know of any either, probably because of the current lack of implementations. Hopefully we will soon break the chicken-and-egg problem. Did you have a particular thing you wanted to learn from an example?
I have come up with a technique leveraging WebAssembly's multi-memory support and now need some test cases. I have an application written in a high-level language that I want to rewrite to work with multiple memories and then compile to wasm.
That high-level coding method is what I wanted to learn.
Igalia's work adding support for tables is very similar to what would need to happen to support multiple memories as well.
What does this work look like? LLVM has an addrspace attribute which in some cases has exactly the same meaning as multiple memories (think OpenCL before version 2), though I've also read about it being used to support GC objects.
One of the more compelling use cases I've stumbled on is virtualizing interfaces that use memory. E.g. implementing a Wasm module that has an imported memory from the "user", which it may read and/or write, and then a private memory that is used to store additional internal state and possibly communicate with other modules.
AFAICT it would be possible to write such a module in C with address space annotations.
Yes, that's what I want to learn. How do these address space annotations work?
With Clang and C/C++ it is __attribute__((address_space(N))) before the type, though the N for the purposes of multiple memories needs to be a constant. Example:

```c
int incr_from_mem3(__attribute__((address_space(3))) int *ptr) {
    return (*ptr) + 1;
}
```

(Edit) Even though this would lead to addrspace in the LLVM IR, the Wasm backend would quietly ignore it at the moment, though it should not be too hard to enable that.
I see, thanks for explanation.
Since address spaces need to be statically allocated by the LLVM backend for WebAssembly, it would not be scalable to try to use them to support multiple memories directly. Tables are modeled in LLVM IR as global arrays in a special address space so that an arbitrary number of them may be created. The Wasm object file format used with LLVM was also extended with additional relocation types for tables. The same patterns would work well for modeling multi-memory as well.
I actually find that take somewhat surprising; given that address spaces also need to be statically allocated in the wasm module, requiring the same static allocation at the LLVM IR level seems like it should scale exactly as well in LLVM as it would in wasm itself? Tables are different in the sense that there's not really any obvious analog in the IR already (not just for tables, but also for the references they contain).
I'm going to second what @dschuff said: aren't memories statically declared? Why would they need the same dynamic treatment tables get?
By "statically allocated in the backend," I mean statically allocated when LLVM is compiled, not when the user program is compiled. So if you had a 1:1 mapping between address spaces and memories, then when you compile LLVM, you would have to determine what the maximum number of memories an LLVM IR module could reference at that point. In contrast, the scheme used for tables allows user programs to use an arbitrary number of tables.
Is this discussion just about the LLVM internal representation? At the C or C++ level, would these still be address space annotations on pointer types?
So if you had a 1:1 mapping between address spaces and memories, then when you compile LLVM, you would have to determine what the maximum number of memories an LLVM IR module could reference at that point.
There is a hard limit on the number of memories; the memory index is one byte, I think.
Is this discussion just about the LLVM internal representation? At the C or C++ level, would these still be address space annotations on pointer types?
At the C or C++ level these would most likely be new annotations like __attribute__((wasm_memory)), since clang would also have to check a bunch of semantic restrictions (such as ensuring that the arrays are not address-taken) just like it does for tables.
There is a hard limit on the number of memories; the memory index is one byte, I think.
No, just like all other indices in Wasm, memory indices are LEB128 values.
At the C or C++ level these would most likely be new annotations like __attribute__((wasm_memory)), since clang would also have to check a bunch of semantic restrictions (such as ensuring that the arrays are not address-taken) just like it does for tables.
Oh, so you mean they would be globally-declared (non-address taken) arrays into which the program would index with integers?
Yes, exactly.
"not address-taken" sounds like a very severe restriction for memory as C/C++ applications usually access memory via pointers. i suspect it's worse than having a static limit on the number of memories. am i missing something?
It's definitely a severe restriction compared to what you can do with other constructs in C/C++, but that's ok because a program would only need to use this feature to do something very specific to WebAssembly, and in that case having the source language construct match the underlying construct as closely as possible is a good thing.
where did your assumption "a program would only need to use this feature to do something very specific to WebAssembly" come from? i feel it's false in general as i've heard people wanting to be able to "just" annotate and re-compile their existing libraries to make it operate on non-default memory addresses.
But that’s not something you can do in portable C/C++, so it only makes sense to expect to be able to do that if you’re targeting WebAssembly (or some other specific platform that could provide similar functionality).
right. my point is that there could be a middle ground which is more convenient to users than the extremes like "portable C" and table-like accessors.
To turn this around, what would it even mean to take the address of something that lowers to a WebAssembly memory? How do you envision the address would be represented and how do you envision it could be used?
i've heard people wanting to be able to "just" annotate and re-compile their existing libraries to make it operate on non-default memory addresses.
In this case you can just compile the library normally to use a single memory, then use something like wasm-merge to merge it into the rest of the application, which would have a different memory. If you need to copy data from one memory to the other on the boundary, you could use the __attribute__((wasm_memory)) feature described above to write glue code that does the copy.
To turn this around, what would it even mean to take the address of something that lowers to a WebAssembly memory? How do you envision the address would be represented and how do you envision it could be used?
i don't pretend to have a clear vision!
although i'm not an llvm expert, address spaces seems like the closest construct llvm currently has. as you said it has its drawbacks though.
In this case you can just compile the library normally to use a single memory, then use something like wasm-merge to merge it into the rest of the application, which would have a different memory. If you need to copy data from one memory to the other on the boundary, you could use the __attribute__((wasm_memory)) feature described above to write glue code that does the copy.
maybe. i suspect users likely want the library to place some part of its data (e.g. the C stack) on the default memory though.
Does anyone know the current status of multi-memory support in toolchains, e.g. LLVM? After a cursory search of LLVM commits, I didn't turn up anything.