In the future I want to see matchbox find its niche as MMO-capable, with native and web client libraries. I dismissed naia since it isn't async and won't scale. That left only wasm-peers, which was elegant but only offered a WASM client, and the maintainer didn't have the bandwidth to maintain it. Currently matchbox is well tailored to WebRTC, and the underlying behavior has parity for native and web. However, the topology of the server would need to mature, since it is currently tailored for small groupings. Regardless, I see matchbox as being the biggest fish in a small pond (x-platform MMO-capable networking).
So yes re: authentication and server-side validation. I'd also like a server library as a scaffold to make my own network abstraction layer on top of matchbox.
Suggested roadmap:
Provide server library --> Create external, general-purpose server CLI --> Remove matchbox_server
So what I'd like to know:
- Are there things you'd like to add to the signalling endpoint itself? Authentication, validating claims, etc.?
- Are you bothered by running more than one process?
- Other things?
> matchbox_server to handle only peer signalling, with matchmaking implemented alongside - whether as a separate binary or bundled with the matchbox_server (fairly easy if we expose a function akin to `add_signalling_server(app: WebFrameworkApp) -> WebFrameworkApp`).

> matchbox for a client-server topology in my next project. At present it doesn't seem like we're too strongly bound to full-mesh; I've even prototyped client-server by putting the server in many rooms.

Okay. As an immediate short-term plan, I'd like us to split out a server library module from matchbox_server, preferably keeping the `next` implementation in the separate binary or top-level module.

Maybe this was just rephrasing what you said, @simbleau, but I just want to make sure we're not talking past each other.
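To make the quoted `add_signalling_server` idea concrete, here is a minimal sketch assuming an axum-based app; `signalling_router` is a hypothetical helper, not an existing matchbox API:

```rust
// Hypothetical sketch only: exposing signalling as a function that extends
// an existing axum application, so matchmaking or other services can live
// in the same process. Nothing here exists in matchbox today.
use axum::{routing::get, Router};

// Assumed helper; the real WebSocket signalling routes would go here.
fn signalling_router() -> Router {
    Router::new().route("/health", get(|| async { "ok" }))
}

// Akin to the suggested `add_signalling_server(app) -> app`.
fn add_signalling_server(app: Router) -> Router {
    app.nest("/signalling", signalling_router())
}

fn main() {
    // Build an app and mount the signalling routes onto it.
    let _app: Router = add_signalling_server(Router::new());
}
```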
About crate names: `matchbox_server` is what we've currently called the binary application, both on crates.io and the docker image. I guess ideally, we would have called the server library crate `matchbox_server`, and the standard server CLI binary would have been `matchbox_server_cli`, but perhaps it's weird to repurpose a binary crate as a library crate?

Another option would be to call them `matchbox_signalling_server` and `matchbox_signalling_server_cli` instead? Quite a mouthful, though 🤔
I had no idea `cargo install matchbox_server` was a thing, is that documented anywhere? If not, I don't see anything controversial about yanking the builds or pushing a new, updated README directing users to use XYZ in the future.
I think `matchbox-cli` is a good choice.

Run server: `matchbox-cli server <port> [--matchmaking] [-v]`

Test server (connect and disconnect): `matchbox-cli connect <url> [--keep-alive <seconds=5>] [-v]`
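For illustration, here is a rough clap-based skeleton of what those two subcommands could look like; all names, flags, and defaults are taken from the commands above or assumed, and nothing here exists yet:

```rust
// Hypothetical CLI skeleton for the commands proposed above.
// Assumes clap with the `derive` feature enabled.
use clap::{Parser, Subcommand};

#[derive(Parser)]
#[command(name = "matchbox-cli")]
struct Cli {
    /// Verbose logging
    #[arg(short, global = true)]
    verbose: bool,

    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Run a signalling server on the given port
    Server {
        port: u16,
        /// Also run the bundled matchmaking service
        #[arg(long)]
        matchmaking: bool,
    },
    /// Connect to a server, then disconnect (for testing)
    Connect {
        url: String,
        /// Seconds to stay connected before disconnecting
        #[arg(long, default_value_t = 5)]
        keep_alive: u64,
    },
}

fn main() {
    let cli = Cli::parse();
    if cli.verbose {
        println!("verbose logging enabled");
    }
    match cli.command {
        Command::Server { port, matchmaking } => {
            println!("would start signalling server on port {port} (matchmaking: {matchmaking})");
        }
        Command::Connect { url, keep_alive } => {
            println!("would connect to {url} and disconnect after {keep_alive}s");
        }
    }
}
```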
Ah, yeah, that sounds way better. Nice to have some convenient test tools, and I guess there are no big reasons to have separate binaries for the signalling server and testing tools.
Perhaps even have the binary itself be just `matchbox` (without `-cli`) and the crate name be `matchbox_cli`.
EDIT:

> I had no idea `cargo install matchbox_server` was a thing, is that documented anywhere?

It's in the tutorial series at least: https://johanhelsing.studio/posts/extreme-bevy
I've got a slightly different take here: I think we'll find ourselves with one signalling library / framework (`matchbox_signalling`) and a number of executable signalling servers built using it - e.g. `matchbox_server_full_mesh` or `matchbox_server_client_server`.

This is with the view that users will create their own if they desire non-trivial behaviour. It might be nice to (if possible) have each of these derived binaries host some testing code too.
I agree with that take. I'd rather have a `matchbox_signalling` crate as well to help with general-purpose signalling, and then have examples within that for client/server, full mesh, or one-to-one.
But I think there's no getting around providing a CLI for signalling if you expect users to adopt this library. Expecting everyone to understand WebRTC (which is hard) is too much. We should, to some degree, provide some user-experience attractions which make matchbox unique.
Yeah, I would have to agree there, and it would be really nice if we could auto-magically build them from whatever the user specifies signalling-server-wise. How we go about doing that I'm less sure of.
We could use a shared config type, like `WebRtcSocketConfig`, to create a signalling server as well. Something like `config.full_mesh_signaller() -> (Signaller<FullMesh>, SignallerFut)`.
> We could use a shared config type, like `WebRtcSocketConfig`, to create a signalling server as well. Something like `config.full_mesh_signaller() -> (Signaller<FullMesh>, SignallerFut)`
That would be really nice actually; it almost forces the user into having a shared library if they want a custom signalling server, but that's fine I reckon.
I'm not sure I understand what you want to do automatically?
Also, I'm generally not too keen on making the socket and/or protocol have opinions on topology.
Having a stupidly simple signalling protocol is a good thing. What would be the benefit of adding this as a config in the socket?
Can't it just be: if they want client-server, run `matchbox signalling-server --topology client-server`, or `matchbox signalling-server` for full mesh?

Then the socket/protocol is the same as usual? If you build a client-server app, you know peer = server?
Maybe it was confusing; I didn't mean we should use `WebRtcSocketConfig`, but rather something similar, for a signalling library.

So, we would have a new type like `SignallingConfig` and have 2 convenience functions: `config.full_mesh()` or `config.client_server()`. We could also have a `Signaller::new_full_mesh(config)` similar to `WebRtcSocket::new_with_config(..)`.
Options in the config could be: `max_peers` (which would be 2 for one-to-one topology + full mesh), `authentication` (query args, header authentication, etc.), `port`, `hostname`, `tls`, and things of that nature.
The return signature would be `config.full_mesh() -> (Signaller<FullMesh>, SignallerFut)`, where `SignallerFut` is the axum bind you need to await to run the signalling service, in a similar pattern to the socket today.

The reason for the generic `Signaller<FullMesh>` is to use a typestate pattern to change the API and prevent it from being misused, since the way the API is used would differ depending on the topology. Here's a decent video explaining that further: https://www.youtube.com/watch?v=_ccDqRTx-JU
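A minimal sketch of that shape, with every type name hypothetical (taken from the proposal above rather than from an existing matchbox API):

```rust
// Hypothetical sketch of the proposed SignallingConfig / typestate idea;
// none of these types exist in matchbox today.
use std::future::Future;
use std::marker::PhantomData;

// Topology markers: the typestate parameter decides which methods the
// resulting signaller exposes, so the API can't be misused across topologies.
pub struct FullMesh;
pub struct ClientServer;

pub struct SignallingConfig {
    pub port: u16,
    pub max_peers: Option<usize>,
}

pub struct Signaller<Topology> {
    _topology: PhantomData<Topology>,
}

impl SignallingConfig {
    // Mirrors how `WebRtcSocket::new_with_config` returns a socket plus a
    // future you must await; here the future would be the bound axum server.
    pub fn full_mesh(self) -> (Signaller<FullMesh>, impl Future<Output = ()>) {
        (Signaller { _topology: PhantomData }, async move {
            // Bind and run the full-mesh signalling service on self.port,
            // limiting rooms to self.max_peers.
            let _ = (self.port, self.max_peers);
        })
    }

    pub fn client_server(self) -> (Signaller<ClientServer>, impl Future<Output = ()>) {
        (Signaller { _topology: PhantomData }, async move {
            // Bind and run the client-server (host/clients) signalling service.
            let _ = (self.port, self.max_peers);
        })
    }
}
```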
--
So, just to clarify, no changes to socket/protocol would be necessary.

We would then use this signaller API to build the matchbox ... CLI you described, @johanhelsing. It just seems more important that the API comes before the CLI.
Closing this in favor of more granular issues
This is a pretty broad issue, continued from the discussion in #83. That issue interleaves several different issues, so here are the relevant parts quoted:
Also some history in #4
Some initial thoughts
I think the output from this issue is:
matchbox_server and project structure
About moving `matchbox_server` to a separate repo: In the immediate future, this is a very clear "no" from me. As it is, we're still adding new requests and events (i.e. the addition of `PeerLeft` in #84); the API is clearly not done. I'm not saying never, but right now, I think the extra overhead of managing dependent PRs across multiple repos and communicating compatible and incompatible versions far outweighs the drawbacks of using a monorepo.

About moving things within the repo, and the name `matchbox_server` being potentially misleading: This is open for discussion. At the very least, it sounds like we should add a readme to the `matchbox_server` folder.

Current role of matchbox_server
Currently, `matchbox_server` is the signalling server. You could easily build your own compatible server, since the signalling protocol is so simple, but you'd have to keep up with any changes we make across breaking (0.x) releases.

I'm currently running it unmodified for all my own matchbox-related projects.
Gather requirements for why people don't want to use the existing server (as-is)
As seen in #4, and in @garryod's https://github.com/johanhelsing/matchbox/issues/83#issuecomment-1428718711, there is some interest in being able to run parts of the signalling server yourself in your own server process. I'd like to better understand why. In my own projects, the feature set of matchbox_server has been sufficient and more or less unchanged since 0.1. I run matchmaking as a separate process, and when I have a group of peers I'd like to match, I just hand them a randomly generated id, and they connect to `wss://matchbox_server.example.com/<session_uuid>`.

Perhaps this approach should be used by the demo as well? Write a tiny "next" matchmaking service and remove the feature from matchbox_server?
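As a tiny illustration of that flow (the hostname is the example one above; the only assumption is using the uuid crate for the random id):

```rust
// A minimal sketch of matchmaking-as-a-separate-service: hand each matched
// group a random room id and point them at the unmodified signalling server.
// Requires the `uuid` crate with the "v4" feature.
use uuid::Uuid;

fn room_url_for_new_match() -> String {
    let session_id = Uuid::new_v4();
    format!("wss://matchbox_server.example.com/{session_id}")
}

fn main() {
    // Each peer in the matched group would receive this URL and connect
    // to it with their matchbox socket.
    println!("{}", room_url_for_new_match());
}
```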
To me, this feels clean: the scope of `matchbox_server` is very limited, there's little room for bugs, it's easy to reason about or port to a new stack, and everyone is running the same (well-tested) thing.

So what I'd like to know:
This is not me saying "you're doing it wrong"; I'd simply like to get a better idea of where this is coming from before making a decision.