Closed: mikesir87 closed this issue 6 months ago.
I probably do not understand this very well. But I will still ask.
In the browser, we use import functions to allow the WASM binary to interact with the browser environment. For example, by passing console.log as an import, the WASM module can use that imported function to log to the browser console. The concept can be taken further to expose a full set of graphics functions, allowing a WASM binary to run a legacy game (stuff I am really into) by giving the game low-level OpenGL access to an HTML canvas element.
How does this work in Docker+WASM with the WasmEdge runtime? Do I have to write a WasmEdge plugin to create this link to the outside world? Is this technically part of the so-called containerd shim provided by WasmEdge? What kind of functionality is provided by the WasmEdge runtime? I looked at the API documentation, but there was nothing there that resembled, for example, networking functionality or anything of the sort.
I would like to understand this stuff more and how to make the most out of it. Thanks!
Hi @diraneyya! Glad you asked!
WasmEdge implements the WASI standard that gives you access to the outside world. There is still active work happening on this specification. You can follow Wasm spec work more broadly as it happens under the Bytecode Alliance.
@juntao and his team can give you more information about the WasmEdge runtime specifically.
Thanks @chris-crone and hi @diraneyya !
Yes, WASI specifies how WebAssembly runtimes, including WasmEdge, access the operating system. That includes standard POSIX / libc APIs to access the file system, network, etc.
So, you can just create a Rust program to open a file, open a socket connection, or write to the console (e.g., the print!() and dbg!() macros), compile it to the wasm32-wasi target, and it will run in WasmEdge. See
https://wasmedge.org/book/en/write_wasm/rust.html
You can also do this with other languages supported in WasmEdge including JavaScript.
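To make that concrete, here is a minimal sketch (my own illustration, not code from the linked book): a plain Rust program using only std APIs, which the wasm32-wasi target lowers to WASI calls.

```rust
use std::fs;

// Writes a file and reads it back. Plain std::fs calls like these are
// compiled down to WASI imports when targeting wasm32-wasi.
fn write_and_read_back(path: &str, msg: &str) -> std::io::Result<String> {
    fs::write(path, msg)?;
    fs::read_to_string(path)
}

fn main() -> std::io::Result<()> {
    let text = write_and_read_back("hello.txt", "hello from wasm\n")?;
    print!("{text}");
    Ok(())
}
```

Build with `cargo build --target wasm32-wasi` and run the resulting .wasm with wasmedge, granting a preopened directory for file access, e.g. something like `wasmedge --dir /:. app.wasm` (the file name here is a placeholder).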
Hey folks - heads up that I'm not able to get the tech preview to run on an Intel Mac. I am a Docker Desktop user and the stable version works fine. Let me know if you would like any debug information to assist. I'm really excited to see this and would love to try it out.
Hi @lachie83! Sorry to hear it isn't working. We'll reach out via email to get more details
Thanks Chris - happy to help.
@lachie83, @chris-crone, FWIW - I was able to run it on first shot.
Here are some details about my setup in case they help debug:
$ docker --version
Docker version 20.10.18, build b40c2f6
$
$ neofetch --stdout | egrep 'OS|Host|Kernel'
OS: macOS 11.6 20G165 x86_64
Host: MacBookPro16,1
Kernel: 20.6.0
Looks like similar hardware and kernel, different OS:
$ neofetch --stdout | egrep 'OS|Host|Kernel'
OS: macOS 11.7.1 20G918 x86_64
Host: MacBookPro16,1
Kernel: 20.6.0
$ docker --version
Docker version 20.10.18, build b40c2f6
Well done @mikesir87 and team, this is very exciting work. I'm looking forward to seeing this work as a driver for WASM through the ecosystem, e.g., resolving/landing https://github.com/opencontainers/image-spec/pull/964, and seeing https://wasmedge.org/book/en/use_cases/kubernetes/docker/containerd.html filled out.
I for one welcome our new micro-microservice overlords.
It wasn't mentioned in the blog post, but where is io.containerd.wasmedge.v1? I saw a couple of older projects implementing the containerd Runtime V2 shim for other WASM runtimes (which is what I understand was done here, based on the blog post), but couldn't find this work on GitHub or the wider Internet.
So I'm assuming it's not public yet, but will be?
Can someone confirm whether this is the same containerd shim used under https://learn.microsoft.com/en-us/azure/aks/use-wasi-node-pools?
Just want to make sure that when developing a generic WASI app, it runs in Docker WASM.
Hi @TBBle
Thank you for your kind words. Please do let us know if you encounter any issues. We cannot wait to see the Wasm microservices you create. :)
The containerd shim for WasmEdge is here:
https://github.com/second-state/runwasi
We are in the process of merging it upstream to the DeisLabs runwasi project, and hopefully eventually to the containerd project itself.
It is not the same. But all standards-compliant WASI apps should work in both environments. If not, let us know. :)
Can someone confirm whether this is the same containerd shim used under https://learn.microsoft.com/en-us/azure/aks/use-wasi-node-pools?
Just want to make sure that when developing a generic WASI app, it runs in Docker WASM.
Just chiming in quickly to confirm what @juntao says here. The wasmedge team did in fact use the core https://github.com/deislabs/runwasi shim to quickly provide wasi support in containerd with the wasmedge runtime and we're thrilled it worked perfectly for the Docker Desktop preview. As a result, @tigerinus, our path for bringing the two together and moving the shim upstream to the containerd project is pretty easy and we look forward to working on that together.
In fact, the community's collective work on https://github.com/deislabs/runwasi -- including enabling any number of open source runtimes like SecondState's wasmedge -- will continue rapidly. If you're interested, you can track that repo (which we hope to move into the containerd repo as soon as we can). Using the Docker Desktop + Wasm preview with https://github.com/second-state/runwasi will not be a problem when moving to future shims. It's a great way to get started here!
Great work everyone and very exciting!
I'd be keen to see the following from Docker Desktop going forward:
1. docker buildx build to eventually create native Wasm artifacts rather than shoe-horning them into OCI images.
Anything else, @nigelpoulton? I mean, make it at least HARD.
Mind you, on 1 I am of course speaking for myself here, not Docker. :-)
Quick question: can we leverage HPA to autoscale a WASM module like a traditional Pod?
You will be able to use all the normal k8s mechanisms with either version of https://github.com/deislabs/runwasi as we roll forward. At the moment, however, telemetry isn't working well -- we have to emit things to enable this. We should soon have a new version of the runwasi shim (and SecondState will work on theirs until we coalesce) that will emit all the telemetry needed for any k8s scaling scenario.
The ONLY thing that will not yet work is mixed containers/wasm inside one pod. This precludes the usage of service meshes in k8s for the time being. Nailing that scenario is in the backlog, but we have no roadmap timing for that yet. Ironically, while service meshes have lots of headspace, we have very little evidence that lots of people use them: https://www.datadoghq.com/container-report/ says only about 10% of people use meshes... ymmv.
Although full-on service meshes might not have high penetration, I note that mixed containers/wasm is also an important use case to implement sidecars for telemetry collection, e.g. Datadog Agent, OTel Agent, Jaeger Agent. I don't know in general how the split is between (possibly auto-injected) sidecar agents and host agents (e.g., HostPort-based Daemonsets), but Datadog specifically excluded their own Agent sidecar from that report, so presumably it's widespread enough that it matters in such data. (Since they use data from their Agent for the report, presumably it'd have been present on 100% of hosts in their report if they didn't exclude it, but it would have been interesting to see the split between deployment style.)
I'll note that anecdotally the last two k8s-based applications I was involved in building both relied on auto-injected telemetry sidecars (once with Jaeger Operator, once with OTel Operator), but since I was the common factor there, that's pretty low-quality signal even for anecdotal data.
Thanks @squillace for your reply.
Previously, we played around with deislabs/krustlet. With Krustlet, we are able to deploy WASM modules into a K8s cluster. But we are NOT able to enable the HPA feature (for autoscaling).
Two questions:
in reverse order:
Turns out that containerd shims are the right abstraction for that. Krustlet might still become very useful for key situations.
Are you using Krustlet successfully aside from the autoscaling?
@TBBle:
I note that mixed containers/wasm is also an important use case to implement sidecars for telemetry collection, e.g. Datadog Agent, OTel Agent, Jaeger Agent.
Oh yes -- because that's how they work now. With Wasm components, you'd be moving toward an in-proc OTEL component that exports such information. Sidecars are horrible unless you have only one or two -- not because the model doesn't work, but because it's just harder to use. BUT we use them!!!! For example, we've test implemented Dapr sidecars as components and WOW does that change things. It's almost as if kubernetes is good. :-P
That said....
I'll note that anecdotally the last two k8s-based applications I was involved in building both relied on auto-injected telemetry sidecars
I'm expecting that we'll make this happen. But it's not going to happen this month. First, we make these solid, enable componentized runtimes so that people can choose theirs to use, and upstream the shim work to the ContainerD project in the CNCF.
in reverse order: 2. #426 (comment) pretty much explains it. SecondState used the deislabs runwasi to implement wasmedge because it has special features that aren't available elsewhere. You can see the diff here. But the larger point is that this is merely an early set of experiments by us all in the wasm ecosystem. Both SecondState and Microsoft (Deislabs) will be working to bring multiple runtimes to the runwasi shim as we go forward. You might need to know now that there's a difference, but we assume that in the future you won't need to.
1. Krustlet was our second attempt at deislabs (after modeling them as a CRI implementation in https://github.com/deislabs/wok) to integrate wasm into Kubernetes smoothly. Krustlet DID work, as you note, but it brought with it a ton of overhead as well as making it harder to model wasm as pods easily. In the end, it is used as a Rust kubelet and it implements a Rust oci-distribution crate (which will be very useful moving to OCI Artifacts) as well as krator, the state engine. But neither the Fermyon maintainers, nor the Cosmonic maintainer, nor we here at Microsoft are still working on krustlet. It's just too much effort to bring it up to production quality, and then you still have to treat your nodes like pets with taints and tolerations and all that messiness.
Turns out that containerd shims are the right abstraction for that. Krustlet might still become very useful for key situations.
Are you using Krustlet successfully aside from the autoscaling?
Other features work well. BTW, I recall there is an open issue mentioning this limitation too: https://github.com/krustlet/krustlet/issues/470
Hey guys, great work!
Have one question: how do I pass arguments to the underlying runtime? In my case I want to access files on my local machine. If I do this with wasmedge separately, I'd just have to add, say, --dir /:/
to have access to everything from WASI.
I tried mounting with -v when I run the "wasm container", but that was not enough.
E.g., using singlestore-labs' python I could do something like this:
alexandrov:\>wasmedge --dir /:$(pwd) --env PYTHONPATH=/python311.zip/python3.11 --env PYTHONHOME=/python311.zip/python3.11 ./python3.11.wasm -c 'import os; print(os.listdir(""));'
['python3.11.wasm', 'python311.zip']
alexandrov:\>tree
.
├── python3.11.wasm
└── python311.zip
0 directories, 2 files
The key thing here is --dir /:$(pwd)
along with the PYTHONPATH and PYTHONHOME variables.
So I built myself an image that has just python3.11.wasm and python311.zip, but couldn't make it work.
Even running the interpreter fails, as it does not pick up the env variables. I tried passing them to the container, as usual, but that did not propagate them to the wasm module.
docker run -d --runtime=io.containerd.wasmedge.v1 --platform wasi/wasm32 python-wasm -v /:$(pwd)/python-wasm -e PYTHONPATH=/python311.zip/python3.11 -e PYTHONHOME=/python311.zip/python3.11
Any help or ideas would be greatly appreciated.
Hi @assambar
The Wasm file is packaged in a SCRATCH OCI image. It has access to the / directory of the container. So you can mount a directory to the container and then access it from inside the Wasm app.
Let us know if it works. Thanks!
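For what it's worth, a sketch of that invocation under those assumptions (the image name and env values come from the earlier example; the /data mount target is my own placeholder). One thing to double-check: docker only parses flags that appear before the image name, so in the earlier command the -v and -e flags after python-wasm were being handed to the Wasm module as arguments instead of being applied to the container.

```shell
# Flags must come before the image name; anything after "python-wasm"
# is passed to the Wasm module as its own arguments.
docker run -d \
  --runtime=io.containerd.wasmedge.v1 \
  --platform wasi/wasm32 \
  -v "$(pwd):/data" \
  -e PYTHONPATH=/python311.zip/python3.11 \
  -e PYTHONHOME=/python311.zip/python3.11 \
  python-wasm
```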
You will be able to use all the normal k8s mechanisms with either version of https://github.com/deislabs/runwasi as we roll forward. At the moment, however, telemetry isn't working well -- we have to emit things to enable this. We should soon have a new version of the runwasi shim (and SecondState will work on theirs until we coalesce) that will emit all the telemetry needed for any k8s scaling scenario.
The ONLY thing that will not yet work is mixed containers/wasm inside one pod. This precludes the usage of service meshes in k8s for the time being. Nailing that scenario is in the backlog, but we have no roadmap timing for that yet. Ironically, while service meshes have lots of headspace, we have very little evidence that lots of people use them: https://www.datadoghq.com/container-report/ says only about 10% of people use meshes... ymmv.
Hi @squillace & Community Members,
Besides autoscaling, are there any other known limitations? We can't wait to roll out this feature to our production env :)
Suppose we have K8s as our application hosting platform. To enable users/developers to deploy a WASM-based application (i.e., a WASM module) to our K8s env, are there any special settings/configurations required for my K8s cluster?
OR
all I have to do is package the WASM file/module in a Docker image (i.e., a SCRATCH OCI image)?
Another question: once the Docker image (containing the WASM file) is deployed to K8s, is it running in a container like any other traditional non-wasm Docker image? From the user's perspective, is it totally transparent?
Thanks in advance.
Hi @juntao
It has access to the / directory of the container. So you can mount a directory to the container and then access it from inside the Wasm app.
I still could not get this behavior to work for me. Created a simple repository with a repro here - https://github.com/assambar/lsr-wasm-docker
Just using a Rust program which lists recursively from /, when I run it inside the container I get nothing. I added more info on what I'm doing in the repo above.
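The listing logic is roughly the following sketch (my own reconstruction for illustration, not the actual code from that repo):

```rust
use std::fs;
use std::path::Path;

// Recursively collect every path under `dir`, including `dir` itself.
fn list_recursive(dir: &Path, out: &mut Vec<String>) {
    out.push(dir.display().to_string());
    if let Ok(entries) = fs::read_dir(dir) {
        for entry in entries.filter_map(Result::ok) {
            let path = entry.path();
            if path.is_dir() {
                list_recursive(&path, out);
            } else {
                out.push(path.display().to_string());
            }
        }
    }
}

fn main() {
    let mut paths = Vec::new();
    list_recursive(Path::new("/"), &mut paths);
    for p in &paths {
        println!("{p}");
    }
}
```

Run inside the Wasm container, this prints nothing below / unless the runtime preopens that directory for the module.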
This is the version I have
docker --version
Docker version 20.10.18, build b40c2f6
Thanks!
@assambar I just opened a PR to make this work, see here: https://github.com/second-state/runwasi/pull/23
My output with this code:
$ make demo-contents-not-seen
cargo build --release
Finished release [optimized] target(s) in 0.00s
docker build --platform wasi/wasm -f Dockerfile . -t lsr-wasm
[+] Building 0.0s (6/6) FINISHED
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 168B 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 2.69kB 0.0s
=> CACHED [1/2] COPY target/wasm32-wasi/release/lsr.wasm / 0.0s
=> CACHED [2/2] COPY target/wasm32-wasi/release/ /t-w-r/ 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => exporting manifest sha256:96faf496b2dcc33348802c8c77c3f17164f04f697cbb3eda5943b0ce8abdaf92 0.0s
=> => exporting config sha256:3c909aecd3975ec564e5b68ca45454dbdead748bce263801115cb452d87c9e35 0.0s
=> => naming to docker.io/library/lsr-wasm:latest 0.0s
=> => unpacking to docker.io/library/lsr-wasm:latest 0.0s
docker run --name lsr-docker-run --runtime=io.containerd.wasmwasi.v1 --platform wasi/wasm lsr-wasm
Starting from root_dir "/"
/
/lsr.wasm
/etc
/etc/hostname
/etc/hosts
/etc/resolv.conf
/t-w-r
/t-w-r/lsr.d
/t-w-r/build
/t-w-r/lsr.wasm
/t-w-r/deps
/t-w-r/deps/libwalkdir-e3683a9a82f3afb8.rmeta
/t-w-r/deps/lsr-7938fa61da9ece80.wasm
/t-w-r/deps/libsame_file-3faf241ee7934e00.rmeta
/t-w-r/deps/walkdir-e3683a9a82f3afb8.d
/t-w-r/deps/libwalkdir-e3683a9a82f3afb8.rlib
/t-w-r/deps/libsame_file-3faf241ee7934e00.rlib
/t-w-r/deps/lsr-7938fa61da9ece80.d
/t-w-r/deps/same_file-3faf241ee7934e00.d
/t-w-r/.cargo-lock
/t-w-r/.fingerprint
/t-w-r/.fingerprint/same-file-3faf241ee7934e00
/t-w-r/.fingerprint/same-file-3faf241ee7934e00/invoked.timestamp
/t-w-r/.fingerprint/same-file-3faf241ee7934e00/dep-lib-same-file
/t-w-r/.fingerprint/same-file-3faf241ee7934e00/lib-same-file
/t-w-r/.fingerprint/same-file-3faf241ee7934e00/lib-same-file.json
/t-w-r/.fingerprint/walkdir-e3683a9a82f3afb8
/t-w-r/.fingerprint/walkdir-e3683a9a82f3afb8/invoked.timestamp
/t-w-r/.fingerprint/walkdir-e3683a9a82f3afb8/lib-walkdir
/t-w-r/.fingerprint/walkdir-e3683a9a82f3afb8/dep-lib-walkdir
/t-w-r/.fingerprint/walkdir-e3683a9a82f3afb8/lib-walkdir.json
/t-w-r/.fingerprint/lsr-7938fa61da9ece80
/t-w-r/.fingerprint/lsr-7938fa61da9ece80/invoked.timestamp
/t-w-r/.fingerprint/lsr-7938fa61da9ece80/bin-lsr.json
/t-w-r/.fingerprint/lsr-7938fa61da9ece80/dep-bin-lsr
/t-w-r/.fingerprint/lsr-7938fa61da9ece80/bin-lsr
/t-w-r/examples
/t-w-r/incremental
docker export lsr-docker-run -o lsr-contents.tar
tar -tvf lsr-contents.tar
drwxr-xr-x 0/0 0 2022-11-23 17:57 etc/
-rw-r--r-- 0/0 0 2022-11-23 17:57 etc/hostname
-rw-r--r-- 0/0 0 2022-11-23 17:57 etc/hosts
-rw-r--r-- 0/0 0 2022-11-23 17:57 etc/resolv.conf
-rwxrwxr-x 0/0 2185909 2022-11-23 17:56 lsr.wasm
drwxr-xr-x 0/0 0 2022-11-23 17:56 t-w-r/
-rw-rw-r-- 0/0 0 2022-11-23 17:56 t-w-r/.cargo-lock
drwxrwxr-x 0/0 0 2022-11-23 17:56 t-w-r/.fingerprint/
drwxrwxr-x 0/0 0 2022-11-23 17:56 t-w-r/.fingerprint/lsr-7938fa61da9ece80/
-rw-rw-r-- 0/0 16 2022-11-23 17:56 t-w-r/.fingerprint/lsr-7938fa61da9ece80/bin-lsr
-rw-rw-r-- 0/0 418 2022-11-23 17:56 t-w-r/.fingerprint/lsr-7938fa61da9ece80/bin-lsr.json
-rw-rw-r-- 0/0 24 2022-11-23 17:56 t-w-r/.fingerprint/lsr-7938fa61da9ece80/dep-bin-lsr
-rw-rw-r-- 0/0 48 2022-11-23 17:56 t-w-r/.fingerprint/lsr-7938fa61da9ece80/invoked.timestamp
drwxrwxr-x 0/0 0 2022-11-23 17:56 t-w-r/.fingerprint/same-file-3faf241ee7934e00/
-rw-rw-r-- 0/0 8 2022-11-23 17:56 t-w-r/.fingerprint/same-file-3faf241ee7934e00/dep-lib-same-file
-rw-rw-r-- 0/0 48 2022-11-23 17:56 t-w-r/.fingerprint/same-file-3faf241ee7934e00/invoked.timestamp
-rw-rw-r-- 0/0 16 2022-11-23 17:56 t-w-r/.fingerprint/same-file-3faf241ee7934e00/lib-same-file
-rw-rw-r-- 0/0 376 2022-11-23 17:56 t-w-r/.fingerprint/same-file-3faf241ee7934e00/lib-same-file.json
drwxrwxr-x 0/0 0 2022-11-23 17:56 t-w-r/.fingerprint/walkdir-e3683a9a82f3afb8/
-rw-rw-r-- 0/0 8 2022-11-23 17:56 t-w-r/.fingerprint/walkdir-e3683a9a82f3afb8/dep-lib-walkdir
-rw-rw-r-- 0/0 48 2022-11-23 17:56 t-w-r/.fingerprint/walkdir-e3683a9a82f3afb8/invoked.timestamp
-rw-rw-r-- 0/0 16 2022-11-23 17:56 t-w-r/.fingerprint/walkdir-e3683a9a82f3afb8/lib-walkdir
-rw-rw-r-- 0/0 430 2022-11-23 17:56 t-w-r/.fingerprint/walkdir-e3683a9a82f3afb8/lib-walkdir.json
drwxrwxr-x 0/0 0 2022-11-23 17:56 t-w-r/build/
drwxrwxr-x 0/0 0 2022-11-23 17:56 t-w-r/deps/
-rw-rw-r-- 0/0 55414 2022-11-23 17:56 t-w-r/deps/libsame_file-3faf241ee7934e00.rlib
-rw-rw-r-- 0/0 39295 2022-11-23 17:56 t-w-r/deps/libsame_file-3faf241ee7934e00.rmeta
-rw-rw-r-- 0/0 235082 2022-11-23 17:56 t-w-r/deps/libwalkdir-e3683a9a82f3afb8.rlib
-rw-rw-r-- 0/0 143945 2022-11-23 17:56 t-w-r/deps/libwalkdir-e3683a9a82f3afb8.rmeta
-rw-rw-r-- 0/0 222 2022-11-23 17:56 t-w-r/deps/lsr-7938fa61da9ece80.d
-rwxrwxr-x 0/0 2185909 2022-11-23 17:56 t-w-r/deps/lsr-7938fa61da9ece80.wasm
-rw-rw-r-- 0/0 1023 2022-11-23 17:56 t-w-r/deps/same_file-3faf241ee7934e00.d
-rw-rw-r-- 0/0 1691 2022-11-23 17:56 t-w-r/deps/walkdir-e3683a9a82f3afb8.d
drwxrwxr-x 0/0 0 2022-11-23 17:56 t-w-r/examples/
drwxrwxr-x 0/0 0 2022-11-23 17:56 t-w-r/incremental/
-rw-rw-r-- 0/0 117 2022-11-23 17:56 t-w-r/lsr.d
hrwxrwxr-x 0/0 0 2022-11-23 17:56 t-w-r/lsr.wasm link to t-w-r/deps/lsr-7938fa61da9ece80.wasm
docker rm lsr-docker-run
lsr-docker-run
rm lsr-contents.tar
@rumpl Thanks for the quick fix and merge into main. I managed to build and use this locally with containerd.
However, I cannot seem to find the proper way to do this with docker-desktop + wasm. Where do you place your re-built containerd-shim-wasmedge-v1 binary to make it work?
I'm using Windows with WSL2 backend, so maybe that's the issue here, but still wanted to ask. Thanks!
@assambar This will be available in the next version of Docker Desktop that will be released next week.
I have installed the Docker Desktop preview and the WasmEdge docker shim. When I run the Docker + Wasm example, I encountered the following error:
Using default tag: latest
latest: Pulling from michaelirwin244/wasm-example
operating system is not supported
And this is my docker version and my os:
$ docker --version
Docker version 20.10.21, build baeda1f
$ neofetch --stdout | egrep 'OS|Host|Kernel'
OS: Ubuntu 20.04.4 LTS x86_64
Host: CVM 3.0
Kernel: 5.4.0-109-generic
Dear @jungan21, re: https://github.com/docker/roadmap/issues/426#issuecomment-1325192701.
Besides autoscaling, are there any other known limitations? We can't wait to roll out this feature to our production env :)
If you take security and compliance seriously, then it is NOT anywhere near production quality yet. Here is the implementation list for OCI compliance; while not all are actually required for solidity and confidence, a good portion ARE. https://github.com/deislabs/runwasi/issues/23. It appears that we've already made great progress thanks to @juntao's team: https://github.com/deislabs/runwasi/pull/26. And a good portion of the OCI work appears possible due to an offer from a community member.
How long? If the world spins perfectly, half a year. If it doesn't, DEFINITELY within the year.
Suppose we have K8s as our application hosting platform. To enable users/developers to deploy a WASM-based application (i.e., a WASM module) to our K8s env, are there any special settings/configurations required for my K8s cluster?
Your distro will need to bootstrap the shim onto the nodes. For example, using the spin and slight shims with Azure AKS does that for you with the az aks nodepool add
command: https://learn.microsoft.com/en-us/azure/aks/use-wasi-node-pools. If you want a k3s example, have a look at: https://github.com/deislabs/containerd-wasm-shims/blob/main/deployments/k3d/README.md#how-to-run-the-example. There will be a cluster-api template, too.
For most people, the distro will handle this.
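For illustration, the AKS invocation looks something like the following (resource names are placeholders; see the linked doc for the authoritative flags):

```shell
# Adds a node pool whose nodes are bootstrapped with a WASI shim.
# Names below are placeholders; --workload-runtime is per the AKS docs.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name mywasipool \
  --node-count 1 \
  --workload-runtime WasiWasm
```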
all I have to do is package the WASM file/module in a Docker image (i.e., a SCRATCH OCI image)?
At the moment, yes. Use docker buildx and the wasi/wasm target out of tooling courtesy. :-) And it's likely that within the year you'll be able to use the OCI Artifacts work that Docker Hub just announced to deliver the same workloads.
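A minimal sketch of such an image, assuming a module built to target/wasm32-wasi/release/app.wasm (file and tag names are illustrative):

```dockerfile
# Scratch base: the image carries nothing but the compiled Wasm module.
FROM scratch
COPY target/wasm32-wasi/release/app.wasm /app.wasm
ENTRYPOINT [ "/app.wasm" ]
```

Built with something like docker buildx build --platform wasi/wasm -t my-wasm-app . and run with the --runtime=io.containerd.wasmedge.v1 flag, as elsewhere in this thread.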
Another question: once the Docker image (containing the WASM file) is deployed to K8s, is it running in a container like any other traditional non-wasm Docker image? From the user's perspective, is it totally transparent?
NO -- the shim does NOT run in a container in the normal sense. The scratch container is only used to deliver the artifacts; once the shim pulls the image, it extracts the artifacts, lays them into the namespace, and invokes the runtime, passing them as arguments. This means that the wasm sandbox is used to handle requests, but in addition you will ALSO get the standard cgroups, namespaces, and other security support that a container runtime normally has. Double the security boundaries, double the fun.
Does that all make sense?
The scratch container is only used to deliver the artifacts; once the shim pulls the image, it extracts the artifacts and lays them into the namespace and invokes the runtime, passing them as arguments.
If the runtime is getting all the same namespace/cgroup setup that a container normally gets, this sounds exactly like "a container in the normal sense", or at least much closer than something like kata-containers which uses different isolation technology.
I had understood the difference being that a traditional container:
while the runwasi approach:
Are there other differences we'd need to be aware of?
@assambar This will be available in the next version of Docker Desktop that will be released next week.
That's great news! Where should I subscribe to get access to that?
Hi @TBBle,
If the runtime is getting all the same namespace/cgroup setup that a container normally gets, this sounds exactly like "a container in the normal sense", or at least much closer than something like kata-containers which uses different isolation technology.
if using posix kernel features is all that's required to be a container, then everything that uses them is a container. :-) I'd put it a different way, but obviously ymmv. wasm is a vm spec. Just like other vm specs. In that sense, the inner processes do not use syscalls directly. This is the fundamentally differentiating concept between katacontainers, gvisor/firecracker, jvms/mscorlib, wasm, and "docker containers". IMHO.
What all vm-style "containers" do is create an inner sandbox that possesses everything necessary to run the process, but the inner process does not touch anything outside the sandbox. Of course, unless the sandbox runtime gives permission to do so! So it's possible to have a "secure" wasm destroy the host system if the host system gives it kernel access. Yay! So don't do that. In this sense, it absolutely IS much closer to a katacontainer-style protection but based on a virtual syscall system (wasi) and not on posix or any specific one. Furthermore, one can limit what syscalls the module can even use; this is much harder to do the closer to the underlying system you get. (if you have to bake different node images, it's harder.)
When you ask, what else "we'd need to be aware of" I'm not clear what you'd like to know! Are you asking about the difference between runwasi and runc shims? or crun? or https://github.com/kata-containers/runtime/issues/485? or these kinds of things?
if using posix kernel features is all that's required to be a container, then everything that uses them is a container.
I'd tend to go wider than that: Things that are managed by containerd runtime shims are containers. ^_^
Not everything that uses the namespace APIs (which aren't POSIX, are they? I thought they were Linux-specific...) is a container, but I expect at this point that the overwhelming majority of in-field usage of those APIs is for containerisation.
Anyway, your comment makes sense. I hadn't really thought about WASM as a VM in this context, and see that it's closer to katacontainers than namespace-based containers in underlying tech.
As far as "differences we'd need to be aware of", this was coming from the same user-experience side as the original question: given that the three dot-points I listed seemed to be nicely in parallel if you replace "forks into..." with "enters an isolated sandbox", and hence mostly ignorable by the user, I don't understand why the answer to the question
Once the Docker image (containing WASM file) is deployed to K8s, it's running in a Container like other traditional non-wasm Docker image? from user's perspective, it's totally transparent?
was "No".
It seems to be transparent, in the same way that from a user's perspective kata-containers is mostly transparent: you provided an appropriate image reference in a PodSpec, that image's process is now executing and doing its job, interacting with other processes in its Pod and nominally isolated from other Pods (barring bugs or known limitations in the specific sandbox tech), and the CRI interactions are consistent across techs, providing consistent behaviours from the kubelet up.
So my question was along the lines of: what other things does runwasi do differently enough to catch a k8s user (application deployer) unawares? I'm mostly interested in fundamental differences myself, i.e. things that will remain different, rather than current known limitations, e.g., mixed WASM/non-WASM Pods, since the "for now" state was well-described earlier in the same answer.
From the above, my existing understanding was that the user-facing differences would all be abstracted away in the image format and the containerd configuration for the relevant CRI runtime_handler
field, both only interesting to the user at initial Pod creation-time.
in reverse order: 2. #426 (comment) pretty much explains it. SecondState used the deislabs runwasi to implement wasmedge because it has special features that aren't available elsewhere. You can see the diff here. But the larger point is that this is merely an early set of experiments by us all in the wasm ecosystem. Both SecondState and Microsoft (Deislabs) will be working to bring multiple runtimes to the runwasi shim as we go forward. You might need to know now that there's a difference, but we assume that in the future you won't need to.
- Krustlet was our second attempt at deislabs (after modeling them as a CRI implementation in https://github.com/deislabs/wok) to integrate wasm into Kubernetes smoothly. Krustlet DID work, as you note, but it brought with it a ton of overhead as well as making it harder to model wasm as pods easily. In the end, it is used as a rust kubelet and it implements a rust oci-distribution crate (which will be very useful moving to OCI Artifacts) as well as krator, the state engine. But neither the Fermyon maintainers, nor the Cosmonic maintainer, nor we here at Microsoft are still working on krustlet. It's just too much effort to bring it up to production quality and then you still have to treat your nodes like pets with taints and tolerations and all that messiness.
Turns out that containerd shims are the right abstraction for that. Krustlet might still become very useful for key situations.
Are you using Krustlet successfully aside from the autoscaling?
Hi @squillace, I am wondering why WOK didn't work? Or is anything in krustlet better? I did some research and found it should be capable of doing that using CRI.
So my question was along the lines of: what other things does runwasi do differently enough to catch a k8s user (application deployer) unawares? I'm mostly interested in fundamental differences myself, i.e. things that will remain different, rather than current known limitations, e.g., mixed WASM/non-WASM Pods, since the "for now" state was well-described earlier in the same answer.
@TBBle As far as I know, runwasi is still in its infancy. Only basic functionality is available so far. That is where users may be surprised. We have not yet added features such as rootless operation, which is becoming commonplace these days. You can follow this issue for more information; I will help support it. https://github.com/deislabs/runwasi/issues/23
@TBBle:
Oh, I see what you're getting at now. Yes, from the user's pov it's like a container, a katacontainer, and so on. It operates differently, but YES, a user should just deploy yaml and it should look pretty much the same. The feature set is different, which is why we do it, but the experience is mostly the same -- as we think it should be.
To then get at your driving question:
what other things does runwasi do differently enough to catch a k8s user (application deployer) unawares? I'm mostly interested in fundamental differences myself, i.e. things that will remain different, rather than current known limitations, e.g., mixed WASM/non-WASM Pods, since the "for now" state was well-described earlier in the same answer.
Off the top of my head, I'd say the following things are likely to remain different-ish:
There are other differences to be aware of, and they affect where you'll use wasm and where you'll use containers. Containers are built for long-running processes; wasm, for short, fast, quick processes where you are not going to accumulate lots of state. So the requirement to scale out with wasm is going to be far less, as each request allocates memory, starts a module, and then reclaims that memory. Container workloads do not behave the same way. Somewhere there's a very good Fermyon post where they illustrate the lower, steadier resource consumption for a wasm workload versus a container one. Neither is better, but as we learn more about wasm in this environment we'll have a better understanding of how resources are consumed.
Hope this helps!
in response to @MrZLeo:
Hi @squillace , I am wondering why WOK didn't work? Or anything in krustlet is better? I do some research and found it should be capable to do that thing using CRI.
in reverse order: CRI is so completely hard-coded to containers themselves that we got exhausted faking APIs that have no meaning. Full stop. In theory, a CRI that is a "compute" runtime interface and not a container one would be perfect here. Kubernetes is designed to have gears replaced; this is one area that needs a CRI v2. :-)
Krustlet worked fine, but it models nodes so that they become "special". If you need taints and tolerations for workloads, ultimately those nodes are now pets. Of course, some cattle are more "petty" than others; still, the direction we want to move is toward a nodeless or node-agnostic experience. Krustlet, however, does have lots of potential for special cases. I'd love to use it to experiment with disconnected and semiconnected nodes at some point. Why not?
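To illustrate the "pet node" problem above: a Krustlet-style setup needs a taint on the Wasm-capable node and a matching toleration on every Wasm workload, which is what makes those nodes "special". A minimal sketch -- the taint key, value, node name, and image are assumptions for illustration, not from this thread:

```yaml
# Hypothetical taint applied to a Wasm-only node:
#   kubectl taint nodes wasm-node-1 kubernetes.io/arch=wasm32-wasi:NoSchedule
# Every Wasm Pod must then carry a matching toleration to land on it:
apiVersion: v1
kind: Pod
metadata:
  name: wasm-app
spec:
  tolerations:
    - key: kubernetes.io/arch
      operator: Equal
      value: wasm32-wasi
      effect: NoSchedule
  containers:
    - name: app
      image: example.registry.io/wasm-app:v1   # illustrative image name
```

Once workloads are pinned to specific nodes this way, those nodes can no longer be treated as interchangeable cattle -- which is the motivation for the node-agnostic direction described here.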
@utam0k is correct: we have a whole list of OCI spec features that are not yet implemented in runwasi in addition to runtime swapping and so on. It'll be cool to collaborate with @utam0k and SecondState to bring them into the mix so that things just start working -- and working correctly. Docker Desktop support is a huge step in the right direction here.
@squillace
Thanks for replying! Learning the story behind it helps me a lot. But I am still confused about the nodeless design here. Does it mean we don't need the abstraction of a Pod in WASM?
As far as I understand it, a Pod == a WASM virtual machine. Am I wrong? :(
ah yes: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/. A pod is a representation of a process, more or less -- there can be multiple processes (containers) in a pod. A node is a VM, more or less (because it could be bare metal and so on). All of these can be faked or modified if you know what you're doing. But generally a node is a VM and a pod is a container (or containers) / process.
Cf. https://kubernetes.io/docs/concepts/architecture/nodes/ and https://kubernetes.io/docs/concepts/workloads/pods/
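To make the pod-to-runtime mapping concrete, the containerd-shim route (runwasi + WasmEdge) usually surfaces to a k8s user as a RuntimeClass, so the YAML looks like any other pod apart from one field. A minimal sketch, assuming a handler named `wasmedge` has been registered with containerd -- all names below are illustrative, not from this thread:

```yaml
# RuntimeClass mapping a name users can reference to a containerd handler
# (e.g. a shim binary like containerd-shim-wasmedge-v1 on the node):
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmedge
handler: wasmedge
---
# The pod itself stays a normal pod; only runtimeClassName changes:
apiVersion: v1
kind: Pod
metadata:
  name: wasm-app
spec:
  runtimeClassName: wasmedge
  containers:
    - name: app
      image: example.registry.io/wasm-app:v1   # illustrative image name
```

This is what "just deploy YAML and it looks pretty much the same" means in practice: the pod abstraction stays, and the node carrying the shim can remain interchangeable.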
The example from https://docs.docker.com/desktop/wasm/#running-a-multi-service-application-with-wasm gives an error.
Docker version: 20.10.21, build baeda1f.
OS: macOS
➜ microservice-rust-mysql git:(main) docker compose up
=> ERROR [buildbase 3/4] RUN <<EOT bash 3.0s
------
> [buildbase 3/4] RUN <<EOT bash:
#0 1.954 + apt-get update
#0 2.453 Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]
#0 2.577 Get:2 http://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB]
#0 2.640 Get:3 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
#0 2.656 Err:1 http://deb.debian.org/debian bullseye InRelease
#0 2.656 At least one invalid signature was encountered.
#0 2.707 Err:2 http://deb.debian.org/debian-security bullseye-security InRelease
#0 2.707 At least one invalid signature was encountered.
#0 2.875 Err:3 http://deb.debian.org/debian bullseye-updates InRelease
#0 2.875 At least one invalid signature was encountered.
#0 2.887 Reading package lists...
#0 2.919 W: GPG error: http://deb.debian.org/debian bullseye InRelease: At least one invalid signature was encountered.
#0 2.919 E: The repository 'http://deb.debian.org/debian bullseye InRelease' is not signed.
#0 2.919 W: GPG error: http://deb.debian.org/debian-security bullseye-security InRelease: At least one invalid signature was encountered.
#0 2.919 E: The repository 'http://deb.debian.org/debian-security bullseye-security InRelease' is not signed.
#0 2.919 W: GPG error: http://deb.debian.org/debian bullseye-updates InRelease: At least one invalid signature was encountered.
#0 2.919 E: The repository 'http://deb.debian.org/debian bullseye-updates InRelease' is not signed.
------
failed to solve: executor failed running [/bin/sh -c <<EOT bash
set -ex
apt-get update
apt-get install -y \
git \
clang
rustup target add wasm32-wasi
EOT]: exit code: 100
@vsilent You will need Docker Desktop 4.15 with containerd support turned on in settings. Thanks.
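For reference, with Docker Desktop 4.15+ and containerd image support enabled, the Wasm service in a compose file declares a Wasm runtime and platform alongside ordinary container services. A sketch assuming the WasmEdge shim name `io.containerd.wasmedge.v1`; the image name and port are illustrative, not taken from the demo repository:

```yaml
services:
  server:
    image: example/rust-wasm-service:latest   # illustrative image name
    platform: wasi/wasm32                     # Wasm target instead of linux/amd64 etc.
    runtime: io.containerd.wasmedge.v1        # hand the workload to the WasmEdge shim
    ports:
      - "8080:8080"
```

If the runtime/platform fields are rejected or the build fails as in the log above, that usually indicates the containerd image store has not been enabled in Docker Desktop's settings.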
@MrZLeo happy new year (by some counts)! You still have the abstraction of pods, not because we couldn't have gotten rid of them, but because a) they're the normal way for k8s people to think about a parent process or "app", and b) we want a normal k8s "view" of the world.
What we're aiming at is to give you merely "another" container-type experience but without worrying about which type of node it's running on. So it's not really a nodeless experience, and in that I misspoke. But "node-agnostic" experience is the correct way to think about it. For each pod you have "at least" one wasm runtime instance, yes. You could have more, but at least one.
@squillace happy new year! It's nice to have your message. I understand it now! : ) Thanks a lot!!!
Tell us about your request As a developer, I would like to have the ability to build, share, and run Wasm applications using Docker Desktop.
Which service(s) is this request for? Docker Desktop
Additional context This post is intended to gather feedback on the new Docker+Wasm integration. Feel free to share issues or ideas on how we can make the product better!
With Docker Desktop on macOS regular containers are run inside a Linux VM. Is it possible for the WASM containers to run without a Linux VM?
I think this is in theory possible but would probably require significant changes in Docker Desktop -- correct me if I am wrong here -- my understanding is that Docker Desktop is set up to run everything in a VM on macOS at this time.