Open aeplay opened 3 months ago
/bounty $2000
/attempt #301 with your implementation plan
/claim #301 in the PR body to claim the bounty

Thank you for contributing to gardencmp/jazz!
| Attempt | Started (GMT+0) | Solution |
|---|---|---|
| 🟢 @ayewo | Aug 12, 2024, 4:59:43 PM | #597 |
Hi @aeplay
I see your project just joined Algora, so welcome!
Different projects pick different styles of working so I'm curious, how do you want attempts at this bounty to shake out:
The risk with style 1. is that the assigned dev might take too long to show progress (if they are inexperienced, or experienced but busy with a day job).
The risk with style 2. is that, since there is only one bounty reward, anyone willing to work on this risks getting blindsided by other devs who open PRs to claim the bounty.
Hey @ayewo good question! This is what I'm most curious about in this bounty model as well.
For this task I would say first-come first-serve, since it is quite a detailed project and I would hate anyone to waste their effort. There is no super urgent deadline for it, so I would be happy to let the first serious contender iterate on it with my input.
Excellent! In that case, I'd like to /attempt #301 this.
| Algora profile | Completed bounties | Tech | Active attempts | Options |
|---|---|---|---|---|
| @ayewo | 22 bounties from 5 projects | TypeScript, Rust, JavaScript & more | | Cancel attempt |
@ayewo yes! Let's gooo
Please ask for clarifications here, I'm in GMT+1 and mostly available during normal work hours, but also during other times on my phone for quick answers
@aeplay I'm also in GMT+1 :) and just joined your Discord.
I take it you prefer clarifications happen here in the open, right?
yes please, don't worry about making this issue noisy, that's what it's for
Roger that. I'd appreciate it if you could assign the issue to me; otherwise there will be drive-by attempts from other devs feigning ignorance of our conversation above.
done, thanks for walking me through this
Hi @aeplay, I'm interested in the client and CI/CD workflow related parts of the project. If it's okay, can we split the project, @ayewo?
Hey @DhairyaMajmudar I appreciate your offer but would like to keep this focused on one person attempting it. Thank you!
That's fine!
And @ayewo just clarifying: there is no CI/CD aspect to this project - it's all meant to be run manually.
@aeplay Yes, understood. You want this to be single-thread and launched locally.
(I built a microbenchmark recently using a combination of PowerShell (on Windows) and Bash (on Linux), but they were each executed remotely on EC2 instances using Terraform.)
Hint: Next time, you may want to keep applications open a bit longer so you can evaluate a few applicants. It doesn't have to be first come first serve or battle royale.
Makes sense, but this time I wanted to move quickly and @ayewo seemed eager and capable so I just went with him
I'd like to share some progress on the research I've done so far and ask a few questions.
I looked into the HTTP protocol versions supported by servers A, B, and C and it seems that only Caddy supports all three versions of the HTTP protocol natively (i.e. HTTP/1.1, HTTP/2 and HTTP/3).
| # | Server | HTTP/1.1 | HTTP/2 | HTTP/3 |
|---|---|---|---|---|
| A | Node.js v22.6.0 | ✅ | ✅ | ❌ |
| B | uWebSockets.js v20.47.0 | ✅ | ❌ | ⚠️ (experimental, development paused) |
| C | Caddy v2.8.4 | ✅ | ✅ | ✅ (and HTTP/2 over cleartext (H2C)) |
Node.js doesn't yet support HTTP/3 natively, but I came across a third-party repo (https://github.com/endel/webtransport-nodejs) that claims to offer HTTP/3 support; I didn't look too closely.
Since you also want to test against 3 different protocols:
I tried to map servers A, B, C to the 3 protocols to see what is possible:
| # | Server | Layer 7 | Layer 6[^2] | Layer 4 | Supported |
|---|---|---|---|---|---|
| A1 | Node.js | HTTP/1.1 + WebSocket (WS) | TLSv1.3 (Optional) | TCP | ✔️ |
| A2 | | HTTP/2 + Server-Sent Events (SSE) | TLSv1.3 (Optional) | TCP | ✔️ |
| A3 | | HTTP/3 + SSE | TLSv1.3 (Mandatory) | UDP (QUIC) | ❌ |
| B1 | uWebSockets.js | HTTP/1.1 + WS | TLSv1.3 (Optional) | TCP | ✔️ |
| B2 | | HTTP/2 + SSE | TLSv1.3 (Optional) | TCP | ❌ |
| B3 | | HTTP/3 + SSE | TLSv1.3 (Mandatory) | UDP (QUIC) | ❌ |
| C1 | Caddy | HTTP/1.1 + WS | TLSv1.3 (Optional) | TCP | ✔️ |
| C2 | | HTTP/2 + SSE | TLSv1.3 (Optional) | TCP | ✔️ |
| C3 | | HTTP/3 + SSE | TLSv1.3 (Mandatory) | UDP (QUIC) | ✔️ |
Is my understanding of the server to protocol mapping correct?
In the Simulation spec section, you wrote:
- Subscribing to and mutating a CoValue
- structured: 50 byte incoming SSE messages/WebSocket packets, mutations are 50 byte outgoing messages as a request/WebSocket packet
- assume one client creating a mutation that is published to 10 other clients
For the actual test, does this imply that after each server is started, 10 clients will be spawned that will subscribe to a CoValue, then 1 client will mutate the CoValue triggering a notification by the server to those 10 clients?
[^1]: The emojis also link to relevant docs.
[^2]: HTTP and TLS are both layer 4 protocols in the TCP/IP model, but I opted for the OSI model here to keep things clear.
Hey @ayewo, thanks for sharing your research results in such a well-structured format.
It matches what I was aware of. For uWebSockets.js, can you please try the experimental HTTP/3 support and let me know how it goes?
Your understanding is correct, and just to be clear, I am not expecting you to do anything with Jazz/actual CoValues, we are just simulating their traffic patterns by sending (client -> server) and then broadcasting (server -> 10 clients) random data
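The traffic pattern described here (one client creates a mutation, the server broadcasts random data to 10 subscribed clients) can be sketched without any real sockets. The `CoValueChannel` class below is a hypothetical stand-in for the server's fan-out logic, and the 50-byte payload follows the structured-CoValue size from the spec:

```typescript
// Minimal sketch of the simulated traffic pattern: one client publishes
// a 50-byte mutation, the server fans it out to 10 subscribers.
// No real transport here, just the shape of the flow.
import { randomBytes } from "node:crypto";

type Subscriber = (payload: Buffer) => void;

class CoValueChannel {
  private subscribers: Subscriber[] = [];

  subscribe(fn: Subscriber): void {
    this.subscribers.push(fn);
  }

  // A mutation from one client is broadcast to every subscriber.
  mutate(payload: Buffer): number {
    for (const fn of this.subscribers) fn(payload);
    return this.subscribers.length;
  }
}

const channel = new CoValueChannel();
const received: Buffer[] = [];
for (let i = 0; i < 10; i++) {
  channel.subscribe((payload) => received.push(payload));
}

const mutation = randomBytes(50); // 50-byte random packet, per the spec
const delivered = channel.mutate(mutation);
console.log(delivered, received.length, received[0].length); // 10 10 50
```

In the real benchmark the `subscribe` calls would be WebSocket connections or SSE streams, and `mutate` would be an incoming request or WebSocket packet.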
Some more clarifications:
So the full mapping would look like this
| # | Server | Layer 7 | Layer 6 | Layer 4 | Port | Supported |
|---|---|---|---|---|---|---|
| A1 | Node.js | WebSockets only | TLSv1.3 (Optional) | TCP | 3001 | ✔️ |
| A2 | | HTTP/1 + Server-Sent Events (SSE) | TLSv1.3 (Optional) | TCP | 3002 | ✔️ |
| A3 | | HTTP/2 + Server-Sent Events (SSE) | TLSv1.3 (Optional) | TCP | 3003 | ✔️ |
| A4 | | HTTP/3 + SSE | TLSv1.3 (Mandatory) | UDP (QUIC) | | ❌ |
| B1 | uWebSockets.js | WebSockets only | TLSv1.3 (Optional) | TCP | 4001 | ✔️ |
| B2 | | HTTP/1 + SSE | TLSv1.3 (Optional) | TCP | 4002 | ✔️ |
| B3 | | HTTP/2 + SSE | TLSv1.3 (Optional) | TCP | 4003 | ✔️ |
| B4 | | HTTP/3 + SSE | TLSv1.3 (Mandatory) | UDP (QUIC) | 4004 | ⚠️ (try) |
| C1 | Caddy (in front of Node.JS) | HTTP/3 + SSE | TLSv1.3 (Mandatory) | UDP (QUIC) | 5001 | ✔️ |
Thanks for the confirmation.
Please note that I have updated the 2nd table to remove the port numbers. I imagine each of the servers A, B & C will be started as a standalone process, so they could simply listen on the same port, i.e. `localhost:3000`, instead of the individual ports (`localhost:3001`, `localhost:4001`, etc.) that I originally used.
yeah makes sense re ports - we can run the different cases in succession
Just double-checked re uWebSockets and HTTP/2 - you're right, that's surprising. Remove that case then, but please try HTTP/1 + SSE.
- Yes, this was in fact one of my follow-up questions. I imagine that the set-up in A1 is essentially your baseline, i.e. what you are currently using today. The others (A2-A3, B1-B3 and C1-C3) are what the synthetic benchmark would be uncovering, correct?
This is exactly the case, correct
Another question: should I assume all protocol combinations will use TLS in the benchmarks? TLS is optional in HTTP/1.1 and HTTP/2 (h2c), but HTTP/3 will not work over plaintext, which is why it is the only web protocol where TLS is mandatory.
yes please assume and use TLS for everything (local certs are ok), because one thing I am interested in is how long it takes to bootstrap a connection - which is most noticed on interrupted connections. I'm expecting Websockets + TLS to be the longest and HTTP3 + SSE + TLS to be the fastest in this regard.
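To make the "time to bootstrap a connection" metric concrete, here is a rough, hedged sketch of one way to time a connection set-up locally with Node built-ins. It uses plain HTTP for brevity so it runs without any cert setup; the actual benchmark would use `https` with local certs (e.g. generated by mkcert), and this timing approach is an illustrative assumption, not the project's harness:

```typescript
// Sketch: time a full local connect + request/response round trip.
// Plain HTTP stands in for HTTPS here to avoid cert setup; with TLS,
// the handshake would be included in the same measured window.
import http from "node:http";
import { performance } from "node:perf_hooks";
import type { AddressInfo } from "node:net";

// throwaway local server
const server = http.createServer((_req, res) => res.end("ok"));
await new Promise<void>((resolve) => server.listen(0, "127.0.0.1", resolve));
const { port } = server.address() as AddressInfo;

// measure from "start connecting" to "response fully read"
const start = performance.now();
const status = await new Promise<number>((resolve) => {
  http.get({ host: "127.0.0.1", port, path: "/" }, (res) => {
    res.resume();
    res.on("end", () => resolve(res.statusCode ?? 0));
  });
});
const elapsedMs = performance.now() - start;
server.close();
console.log(status, elapsedMs >= 0); // 200 true
```

Comparing this number across WebSocket + TLS vs HTTP/3 + SSE + TLS set-ups is what would surface the reconnection-cost difference mentioned above.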
Got it.
More questions.
The Simulation spec talks about simulating the transfer of structured and binary data. But looking at the main differences (source) between WebSockets and SSE in the table below:
| WebSockets | Server-Sent Events |
|---|---|
| Two-way message transmission | One-way message transmission (server to client) |
| Supports binary and UTF-8 data transmission | Supports UTF-8 data transmission only |
| Supports a large number of connections per browser | Supports a limited number of connections per browser (six) |
SSE only supports UTF-8 data transmission. I guess that, for SSE, this implies using base64 to encode and decode binary in each direction?
What about the 50MB limit? Is it the final payload size prior to base64 encoding?
For the client that will interact with the sync server using browser-native APIs (WebSocket, fetch, EventSource), is using a (headless) Chrome instance from `playwright` sufficient? Or do you want the browser client to be configurable? In other words, does the tester get to use only Chrome, or can they pick from any of the browsers supported by `playwright`, i.e. Chrome, Edge, Safari (WebKit) or Firefox, as long as those browser-native APIs are properly supported?
- Use base64 encoding everywhere
- 50MB prior to encoding
Re: 1 & 2
Can you relax this so that base64 encoding is not necessary for loading/creating binary CoValues?
In other words, base64 encoding would only be used for delivering subscription events over a WebSocket or SSE.
It's much easier to split a 50MB binary file, as is, and stream it in 100KB chunks in either direction (server->client and client->server) than to do so with base64 encoding added to the mix.
Hey @ayewo sorry for the late reply.
Yes happy to relax this.
Ideally (to be most similar to cojson) you could base64 encode the individual chunks - but if it's simpler to have them binary wherever possible just do that - it's not really relevant to the main concern.
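The compromise described here (base64-encode the individual 100KB chunks rather than the whole file) could be sketched as below. The helper names and the 250KB payload are illustrative assumptions, scaled down from the 50MB case:

```typescript
// Sketch: split a binary CoValue into 100KB chunks and base64-encode
// each chunk individually, then round-trip it back to verify.
import { randomBytes } from "node:crypto";

const CHUNK_SIZE = 100 * 1024; // 100KB per chunk, per the spec

function* chunkAndEncode(data: Buffer): Generator<string> {
  for (let offset = 0; offset < data.length; offset += CHUNK_SIZE) {
    yield data.subarray(offset, offset + CHUNK_SIZE).toString("base64");
  }
}

function decode(chunks: string[]): Buffer {
  return Buffer.concat(chunks.map((c) => Buffer.from(c, "base64")));
}

// Round-trip a 250KB payload (3 chunks: 100 + 100 + 50 KB).
const original = randomBytes(250 * 1024);
const encoded = [...chunkAndEncode(original)];
const decoded = decode(encoded);
console.log(encoded.length, decoded.equals(original)); // 3 true
```

Each base64 chunk is valid UTF-8, so it can travel over SSE unchanged, at the cost of roughly 33% size overhead per chunk.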
Thank you
Hey @aeplay
Brief status update:
Right now, I’ve got all 3 use cases working for text and binary CoValues over:
Still trying to finish the WebSocket implementation, and hoping I can re-use most of the code from the SSE browser client to build the browser client that will interact with the WebSocket server.
(PS: I’ve been really poor at sharing updates on my progress because I have been dealing with regular interruptions. So sorry about that.)
Hey @ayewo no worries, thanks for your update - looking forward to it!
Hey @ayewo any updates on this? :)
@aeplay Still working on it :).
I should open a PR tomorrow or Wednesday, God willing.
Turns out my estimate was off by a few days as there are parts of the benchmarks’ plumbing that are not yet finished. Sorry about that.
When I started, I must have interpreted the Deliverable section as saying that a PR should only be opened when the code is close to done. But re-reading it now, I realize I should have simply asked for your preference:
Right now, I have most things working. The only requirement I haven't touched at all are simulating the various Network conditions: I, II, III, IV.
That sounds wonderful, I would love to see a WIP draft PR. Thank you!
💡 @ayewo submitted a pull request that claims the bounty. You can visit your bounty board to reward.
I've opened a draft PR and included basic instructions on how to set it up locally for testing at the end.
Any news @ayewo ?
Hey @aeplay Sorry I haven't been able to share any updates yet.
I've been AFK for a few weeks because I traveled. I should be returning home this weekend, God willing.
In order to get an idea how best to proceed with #233, it would be good to have ballpark numbers of the performance characteristics of WebSockets vs HTTP requests + Server-Sent Events for our needs.
Setup
We can get this data - completely decoupled from the internals of Jazz - by creating some synthetic microbenchmarks.
Simulation details
Original data use that needs to be simulated
Currently, Jazz uses WebSockets to sync CoValue state between the client and syncing/persistence server.
The communication typically consists of three scenarios:
Loading a CoValue
Creating a CoValue
Subscribing to and mutating a CoValue
Websockets vs Requests & SSE
Currently, 1., 2. and 3. happen over WebSockets, with 1 packet per request/response/incoming update
For using Requests and SSE instead, we would use Requests & Responses for 1. and 2., while for 3. we listen to incoming updates with Server-Sent Events and publish outgoing updates as a Request with no expected Response.
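The request/SSE variant of case 3. can be sketched with Node built-ins. In this hedged sketch, a browser client would use `EventSource` (for the stream) and `fetch` (for the outgoing mutation) instead of `http.get`; the single `data:` frame and the route handling are illustrative assumptions:

```typescript
// Sketch of the SSE direction: the server pushes updates as `data:`
// frames on a GET stream; mutations arrive as requests with no
// expected response body.
import http from "node:http";
import type { AddressInfo } from "node:net";

const server = http.createServer((req, res) => {
  if (req.method === "GET") {
    // Subscription stream: emit one SSE-framed update, then close.
    res.writeHead(200, { "Content-Type": "text/event-stream" });
    res.end("data: update-1\n\n");
  } else {
    // Mutation: outgoing request with no expected response.
    res.writeHead(204).end();
  }
});
await new Promise<void>((resolve) => server.listen(0, "127.0.0.1", resolve));
const { port } = server.address() as AddressInfo;

// Client side: subscribe and read one SSE frame off the stream.
const frame = await new Promise<string>((resolve) => {
  http.get({ host: "127.0.0.1", port, path: "/" }, (res) => {
    let raw = "";
    res.on("data", (c) => (raw += c));
    res.on("end", () => resolve(raw));
  });
});
server.close();
const payload = frame.split("\n")[0].replace("data: ", "");
console.log(payload); // update-1
```

Cases 1. and 2. would be ordinary request/response pairs against the same server.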
Simulation spec
There are roughly two classes of CoValues: structured CoValues (thousands of <50 byte edits) and binary-data CoValues (few edits that are each 100kB).
Since we are only interested in the data transmission performance, we can model the scenarios using packets containing random data:
Loading a CoValue
Creating a CoValue
Subscribing to and mutating a CoValue
No extra HTTP headers should be set (other than what browsers set by default, and these should be minimised if possible)
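The two CoValue classes above can be modelled as random packets like this; the helper names and the scaled-down edit counts are illustrative assumptions:

```typescript
// Sketch: generate the two CoValue traffic classes as random data.
// Structured: many small (<50 byte) edits; binary: few 100KB edits.
import { randomBytes } from "node:crypto";

function structuredEdits(count: number): Buffer[] {
  // each edit is a 50-byte random packet
  return Array.from({ length: count }, () => randomBytes(50));
}

function binaryEdits(count: number): Buffer[] {
  // each edit is a 100KB random packet
  return Array.from({ length: count }, () => randomBytes(100 * 1024));
}

const structured = structuredEdits(1000); // "thousands" in the real run
const binary = binaryEdits(5);
console.log(structured.length, structured[0].length, binary[0].length);
// → 1000 50 102400
```

Using random bytes keeps the benchmark decoupled from Jazz internals while preserving the payload sizes that matter for transmission performance.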
Target metrics
The main variables we are interested in are
Loading a CoValue
Creating a CoValue
Subscribing to and mutating a CoValue
Variables
It would be good to get results for the metrics above assuming
Different network conditions
Different protocols
You don't need to actually deploy a server anywhere if you can simulate these conditions locally; just make sure to note down your hardware specs and use exactly one thread/core for the server
Dimensions summary
So in total we have the following dimensions:
Deliverable
I realise this spec is a lot, so feel free to ask lots of clarifying questions before & after accepting the task!