N7KnightOne opened 3 days ago
This is probably out of scope for Wolf; it feels to me like this should be a separate piece of software that leverages Wolf internally.
Wolf currently allows multiple users to share a single host: it automatically provisions virtual desktops and devices, and routes the events from each client to the right Docker container. To simplify:
```mermaid
flowchart TD
subgraph Wolf
M("Moonlight server")
uinput
GPU
Docker
end
U1("User 1") --> M
U2("User 2") --> M
Docker --> C1("Container 1")
Docker --> C2("Container 2")
```
With k8s you could scale that further so that multiple clients can be mapped to multiple machines:
```mermaid
flowchart TD
subgraph K8S
M("Moonlight server")
subgraph Pod1
subgraph Wolf1
uinput1
GPU1
Docker1
C1("Container 1")
end
end
subgraph Pod2
subgraph Wolf2
uinput2
GPU2
Docker2
C2("Container 2")
end
end
end
U1("User 1") --> M
U2("User 2") --> M
M --> Wolf1
M --> Wolf2
```
I hope this makes sense. I don't exclude that we could work on this in the future once Wolf is more mature and stable. Controlling multiple machines is definitely outside the scope of Wolf and would be better positioned in a separate program, but there's quite a lot of logic that could be re-used IMHO.
I think the main thing about a kubernetes integration would be having Wolf talk to the kubernetes API to create pods, rather than having it use a passed-through docker socket. Would that be feasible at all, in terms of functionality & code structure?
Edit: Looking at `docker.hpp`, that doesn't seem too far off from fitting the Kubernetes API as well. I have no insight into whether all the other parts of Wolf could work with Kubernetes, though, in terms of passing through devices, communicating between containers, and such.
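To make the parallel concrete, here's a minimal sketch (illustrative only; the function names, image names, and field selection are my assumptions, not Wolf's actual data model) of how the information a Docker "create container" call carries maps onto a Kubernetes Pod manifest:

```python
# Hypothetical sketch: the data Wolf's Docker runner needs (image, env,
# host mounts) maps fairly directly onto a Kubernetes Pod manifest.
# Names and structure below are illustrative, not Wolf's actual API.

def docker_payload(image, env, mounts):
    """Roughly what a Docker-API 'create container' body carries."""
    return {
        "Image": image,
        "Env": [f"{k}={v}" for k, v in env.items()],
        "HostConfig": {"Binds": [f"{src}:{dst}" for src, dst in mounts]},
    }

def pod_manifest(name, image, env, mounts):
    """The same information expressed as a Kubernetes Pod."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "env": [{"name": k, "value": v} for k, v in env.items()],
                "volumeMounts": [
                    {"name": f"vol-{i}", "mountPath": dst}
                    for i, (_, dst) in enumerate(mounts)
                ],
            }],
            # hostPath volumes play the role of Docker bind mounts
            "volumes": [
                {"name": f"vol-{i}", "hostPath": {"path": src}}
                for i, (src, _) in enumerate(mounts)
            ],
        },
    }
```

The payloads carry the same facts in different shapes, which is why a Kubernetes-targeting runner alongside the Docker one seems plausible.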
> I think the main thing about a kubernetes integration would be having Wolf talk to the kubernetes API to create pods, rather than having it use a passed-through docker socket. Would that be feasible at all, in terms of functionality & code structure?
Yep, that could easily be a different `Runner`; the problem is more about the locality of things.
> I have no insight into whether all the other parts of wolf could work with Kubernetes though, in terms of passing through devices and communicating between containers and such.
That's exactly my point, and why I've included `uinput` and `GPU` in the simple charts above (sorry, I should have been more specific). I'm not an expert on k8s, but I know that we can't have Wolf running in one pod and an app (Steam, Firefox, ...) in another one, because we can't share the local virtual input devices (`uinput`) and the Wayland socket (`GPU`) between different machines.
That's why I think the proper solution would be to decouple the Moonlight protocol from the rest. This way you'd have:

`Users --> Moonlight server --> Multiple machines --> Wolf [Multiple desktops/instances --> Steam]`
Wolf can "only" do the last step, but you need something else that coordinates the multiple Wolf instances.
I hope this clears things up; I'm very open to ideas and feedback
Is it 'just' unix sockets that need to be shared, or is there more to it? I think it would be possible to have separate kubernetes Pods for the different bits with a socket shared between them, with the caveat that it's a bit unusual and would only work as long as the Pods are still on the same Node (but that's honestly fine).
Wolf takes a docker socket and spins stuff up on the fly - is that required, or is it possible to have all the containers started ahead of time instead? If so, having one Pod with all of the Wolf bits inside of it would also be a reasonable solution that should work on Kubernetes without (many) changes today.
Something relevant here that isn't yet entirely clear to me: in the existing Docker setup, does a single Wolf instance support multiple clients running different things at the same time, or is it Moonlight sending clients to separate Wolf instances? That would influence what the "best practice" way to fit this into Kubernetes is.
> Is it 'just' unix sockets that need to be shared, or is there more to it? I think it would be possible to have separate kubernetes Pods for the different bits with a socket shared between them, with the caveat that it's a bit unusual and would only work as long as the Pods are still on the same Node (but that's honestly fine).
It's not just sockets: we are talking to and sharing virtual devices under `/dev/input/`, plus the Wayland socket, which can't be shared over the network. Wolf has been developed under the assumption that you are on a single machine; lifting that assumption to manage multiple machines is out of scope for Wolf.
> Wolf takes a docker socket and spins stuff up on the fly - is that required, or is it possible to have all the containers started ahead of time instead? If so, having one Pod with all of the Wolf bits inside of it would also be a reasonable solution that should work on Kubernetes without (many) changes today.
It is required, because you don't know ahead of time what the client settings are going to be (resolution, FPS, how many and which devices to provision; think of joypads, for example) and, most importantly, how many and which apps you want to start.
Wolf automatically spins containers up and down on demand; that's the main point. You want to start Steam on your phone while someone else in the household starts RetroArch on another device? Wolf will do that, automagically and transparently for the end user.
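To illustrate why the containers can't be pre-created, here's a minimal sketch (the class, function, field names, and image naming are all my invention for illustration, not Wolf's real data model): the container spec is derived from per-session client settings that are only known at connection time.

```python
# Illustrative sketch (NOT Wolf's real data model): the container spec
# depends on per-session client settings, so it can only be built once
# a client actually connects.

from dataclasses import dataclass

@dataclass
class SessionRequest:
    app: str       # e.g. "steam" or "retroarch"
    width: int
    height: int
    fps: int
    joypads: int   # how many virtual gamepads to provision

def container_spec(req: SessionRequest) -> dict:
    """Derive a per-session container spec from the client's settings."""
    # one virtual input device per requested joypad (paths are illustrative)
    devices = [f"/dev/input/event{i}" for i in range(req.joypads)]
    return {
        "image": f"ghcr.io/example/{req.app}",  # hypothetical image name
        "env": {"DISPLAY_MODE": f"{req.width}x{req.height}@{req.fps}"},
        "devices": devices,
    }
```

Two clients asking for different apps, resolutions, or pad counts would yield two different specs, which is the on-demand behavior described above.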
> Something relevant here that isn't yet entirely clear to me - in the existing Docker setup, does a single Wolf instance support multiple clients running different things at the same time, or is it moonlight sending clients to separate Wolf instances? That would influence what the "best practice" way to fit this into Kubernetes is.
There's a single Wolf instance that will manage and control multiple containers. You can read more about how it works from a high-level POV in the docs here.
> Wolf has been developed under the assumption that you are on a single machine, lifting that assumption to manage multiple machines is out of scope for Wolf.
This is absolutely fair. I want to clarify that running in Kubernetes doesn't mean you have to split across multiple machines. You can pin things to one node, and you still get the other benefits of Kubernetes when doing that.
My suggestion for a first pass at this request would be adding a Kubernetes backend that spins up a Pod for each container that's needed, and uses pod affinity to ensure they all land on the same node, then just mounts host paths in the same way that the current Docker setup does. I expect that should Just Work without needing changes in other parts of Wolf (fingers crossed).
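The suggested first pass could look roughly like this sketch, expressed as the Pod manifest the hypothetical Kubernetes backend might create (label keys, paths, and helper names are assumptions on my part): `podAffinity` pins the app Pod to the same node as the Wolf pod, and `hostPath` volumes stand in for the current bind mounts.

```python
# Rough sketch of the Pod a Kubernetes backend might create: co-located
# with the Wolf pod via podAffinity, with hostPath mounts for the virtual
# input devices and the Wayland socket. Label keys, socket paths, and the
# helper name are assumptions, not an existing implementation.

def app_pod(name: str, image: str, wolf_label: str = "app=wolf") -> dict:
    key, value = wolf_label.split("=")
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            # Require scheduling on the same node as the Wolf pod, since
            # uinput devices and the Wayland socket are node-local.
            "affinity": {
                "podAffinity": {
                    "requiredDuringSchedulingIgnoredDuringExecution": [{
                        "labelSelector": {"matchLabels": {key: value}},
                        "topologyKey": "kubernetes.io/hostname",
                    }]
                }
            },
            "containers": [{
                "name": name,
                "image": image,
                "volumeMounts": [
                    {"name": "input", "mountPath": "/dev/input"},
                    {"name": "wayland", "mountPath": "/tmp/sockets"},
                ],
            }],
            # hostPath volumes mirror the bind mounts the Docker runner uses
            "volumes": [
                {"name": "input", "hostPath": {"path": "/dev/input"}},
                {"name": "wayland", "hostPath": {"path": "/tmp/sockets"}},
            ],
        },
    }
```

With `topologyKey: kubernetes.io/hostname`, the scheduler can only place this Pod on a node already running a pod matching the Wolf label, which is what keeps the node-local device sharing working.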
That doesn't sound too bad, and it'll probably fit well as a different `Runner` implementation. Sounds like this would give me a nice excuse to finally dip my toes into k8s... 😅
I didn't expect you'd be interested in implementing it yourself, but that would be pretty awesome :D Unfortunately I can't really write the code myself (or I'd already be doing so, lol) but if you need any help around Kube at all don't hesitate to ask!
Thank you for the great discussion btw :)
Thank you for sticking around and bringing up some good points! There are a few issues that I would like to tackle first, so I'm not sure when I'll have time to look into this; I'll keep this open in case anyone else wants to give it a shot.
Please consider integrating with the Kubernetes API to deploy Steam, Firefox, etc. in additional pods.