libp2p / go-libp2p

libp2p implementation in Go

webrtc: investigate resource usage #1917

Closed marten-seemann closed 7 months ago

marten-seemann commented 1 year ago

@ckousik reports that running WebRTC is expensive, and a significant amount of CPU can be spent on handling a comparatively small number of WebRTC connections.

We should investigate how expensive it actually is. Our current DoS mitigation assumes that handling connections doesn't come with a large overhead in terms of CPU, and that the main cost is imposed by handling streams, in particular by the memory consumed on streams. This might not hold true for WebRTC.

If that's the case, we'll need to impose strict limits on the number of concurrent WebRTC connections that a node is willing to handle.
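
For reference, if strict limits do turn out to be needed, a rough sketch of how a node operator could tighten connection limits today via go-libp2p's resource manager (this caps all connections, not just WebRTC ones; a per-transport cap would need new support, and the numbers below are purely illustrative):

```go
package main

import (
	"log"

	"github.com/libp2p/go-libp2p"
	rcmgr "github.com/libp2p/go-libp2p/p2p/host/resource-manager"
)

func main() {
	// Start from the autoscaled defaults and override only the
	// system-wide connection counts (illustrative values).
	limits := rcmgr.PartialLimitConfig{
		System: rcmgr.ResourceLimits{
			Conns:        64,
			ConnsInbound: 32,
		},
	}
	concrete := limits.Build(rcmgr.DefaultLimits.AutoScale())

	rm, err := rcmgr.NewResourceManager(rcmgr.NewFixedLimiter(concrete))
	if err != nil {
		log.Fatal(err)
	}

	h, err := libp2p.New(libp2p.ResourceManager(rm))
	if err != nil {
		log.Fatal(err)
	}
	defer h.Close()
}
```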

BigLep commented 1 year ago

@ckousik : can we link to the benchmarks that were run?

ddimaria commented 1 year ago

@ckousik is picking this up today

ddimaria commented 1 year ago

@ckousik built out the test for this. He was blocked while waiting for hosted resources, but was given the green light by @p-shahi to acquire our own instances over the break, so he will pick this work back up this week.

BigLep commented 1 year ago

Just curious what the status is here.

p-shahi commented 1 year ago

I don't think it's all that different from @ddimaria's update

ddimaria commented 1 year ago

@ckousik can you provide an update here? @marten-seemann: so that the output matches up with other transport metrics you've collected in the past, can you point @ckousik to that output?

BigLep commented 1 year ago

What are the done criteria here?

@ckousik : what did you have in mind? @marten-seemann : what are you expecting?

ckousik commented 1 year ago

Libp2p WebRTC comparitive performance tests Datadog.pdf

BigLep commented 1 year ago

@ckousik : thanks for sharing. We're ultimately going to need @marten-seemann to weigh in.

A couple of things:

  1. Are there any next steps you'd recommend of these findings?
  2. Was there anything that surprised you?
  3. It would be great to attach the notebook so someone can review the setup and confirm/verify the experiment methodology. (I personally don't have a Datadog account.)

ckousik commented 1 year ago
  1. As for next steps, Glen is working on optimisation passes over the current PR.
  2. A couple of things stood out to me:
    • Pion has issues with datachannel ID reuse. We have a workaround for this, but are holding off on investigating the issue in Pion. The corresponding issue in Pion can be found here: https://github.com/pion/webrtc/issues/2258
    • Pion rate-limits the creation of streams: at most 16 SCTP streams can be pending acceptance at any given time. This value is hardcoded and not configurable, so we have to ramp up the number of connections and streams gradually (see the sketch after this list).
  3. @BigLep Glen is also going to be running the tests and verifying them. Is there anything you would prefer in place of Datadog?
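
To illustrate the ramp-up mentioned in 2: a minimal sketch of how a benchmark could pace stream creation so that no more than 16 streams are awaiting acceptance at once. The 16 mirrors Pion's hardcoded limit; the callback is a placeholder for whatever the benchmark actually does, not the real benchmark code.

```go
package main

import "sync"

// rampUpStreams opens n streams but keeps at most maxPending of them
// awaiting acceptance by the remote side at any one time, mirroring
// Pion's hardcoded limit of 16 unaccepted SCTP streams. open is a
// placeholder for opening a stream and waiting until it is accepted.
func rampUpStreams(n, maxPending int, open func(i int)) {
	sem := make(chan struct{}, maxPending)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		sem <- struct{}{} // blocks once maxPending opens are in flight
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			defer func() { <-sem }() // free a slot once this stream is accepted
			open(i)
		}(i)
	}
	wg.Wait()
}

func main() {
	// Usage sketch: open 1000 streams with at most 16 pending acceptance.
	rampUpStreams(1000, 16, func(i int) { /* open stream i, wait for acceptance */ })
}
```
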
BigLep commented 1 year ago

Thanks @ckousik .

For 3, the key thing is to enable reproducibility. This is two-fold:

  1. Easily enable others to spot-check the methodology. It can be useful to make sure that the test parameters, configuration, etc. are what we expect.
  2. If someone looks at this issue six months from now, and assuming we're all gone, they should be able to understand how we arrived at these results. At a minimum, let's have a way to see the config/code that was used. Attaching an .ipynb notebook is fine, or a gist, etc. We just want to avoid the case of future folks not having access to your Datadog and then not being able to verify / understand what was executed.

ckousik commented 1 year ago

Sorry, I had misunderstood the notebook as a DataDog notebook. The test code is present here: https://github.com/little-bear-labs/libp2p-webrtc-bench

p-shahi commented 1 year ago

> The test code is present here: https://github.com/little-bear-labs/libp2p-webrtc-bench

Looks like this is now included in the PR itself, right? Maybe you can archive this repo?

> We have a workaround for this, but are holding off on investigating the issue in Pion.

Was this workaround in the go-libp2p PR (I couldn't find a reference to https://github.com/pion/webrtc/issues/2258 in a comment) or elsewhere? Can you link to it?

ckousik commented 1 year ago

@p-shahi We manually assign stream IDs here: https://github.com/libp2p/go-libp2p/pull/1999/files#diff-f3e8c67f01e1cd4597f5d58558db1e0e28f21be14b640d8e31282eb9580476aaR310-R320. I'll add a comment linking to pion/webrtc#2258.
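
For readers without the diff handy, a rough sketch of the kind of workaround being referred to, using pion/webrtc's negotiated-datachannel API to pin the stream ID instead of letting Pion allocate one. The actual PR code differs; the ID value and setup here are illustrative only.

```go
package main

import (
	"log"

	"github.com/pion/webrtc/v3"
)

func main() {
	pc, err := webrtc.NewPeerConnection(webrtc.Configuration{})
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Close()

	// Pin the SCTP stream ID ourselves instead of letting Pion pick one,
	// sidestepping the datachannel ID reuse issue (pion/webrtc#2258).
	// A negotiated channel with the same ID must be created on both peers.
	negotiated := true
	var id uint16 = 42 // illustrative: the transport tracks the next free ID itself
	dc, err := pc.CreateDataChannel("stream", &webrtc.DataChannelInit{
		Negotiated: &negotiated,
		ID:         &id,
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = dc
}
```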

p-shahi commented 1 year ago

Resource usage investigation is being done as part of this pr https://github.com/libp2p/go-libp2p/pull/1999

p-shahi commented 1 year ago

Status update: @glendc to provide a report on Pion bottlenecks sometime in the next two weeks.

sukunrt commented 8 months ago

We have one fix in: https://github.com/pion/mdns/pull/172. We still need to:

  1. Remove the closed datachannels from the pion/webrtc.PeerConnection object.
  2. Fix a memory leak in pion/sctp.

sukunrt commented 8 months ago

Tracking issue for item 1: https://github.com/pion/webrtc/issues/2672

sukunrt commented 7 months ago

For item 1 (removing the closed datachannels from the pion/webrtc.PeerConnection object): the PR in webrtc decreases memory usage from 31 MB to 17 MB when running 10 connections and echoing 1 MB over 100 different streams (1 GB of data transferred in total). The rest of the memory use is fixed-size allocations that would take more effort to reduce (1 MB buffers in sctp and 1 MB buffers for reads from ICE connections).

The benchmarks are in the webrtc-bm-2 branch.

memprofile_2_10_100.pb.gz memprofile_10_100.pb.gz

sukunrt commented 7 months ago

Setup: two EC2 instances, c5.xlarge (4 cores, 8 GB RAM), in us-east-1 and us-west-2. Ping time is 60 ms. The BDP is roughly 40 MB assuming a 5 Gbps link (5 Gbit/s × 60 ms ≈ 37.5 MB).

sukunrt commented 7 months ago

I quite like the idea of a 100 kB receive buffer. The performance seems acceptable for now, and the peer can enqueue 10x less data on the SCTP layer.

SgtPooki commented 7 months ago

> • Per percent of CPU used we get roughly 6-8 Mb/s throughput. So 3% CPU usage gives us 25 Mb/s throughput.
> • Single stream throughput is limited by maxBufferedAmount. For a 100 kB buffered amount we get 12 Mb/s and for 32 kB we get 3 Mb/s throughput. This throughput is lower for higher latencies, as we write the same amount on the channel but have to wait longer for the ACK.

Does this mean we can limit CPU usage by using maxBufferedAmount, or is there a better way to accomplish that?

MarcoPolo commented 7 months ago

Applications can limit their throughput if they want to limit CPU usage. For now we aren't going to expose maxBufferedAmount to the go-libp2p user.
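
As an aside, a minimal sketch of what such application-side throttling could look like, wrapping writes to a stream with a token-bucket limiter from golang.org/x/time/rate. The rate value and helper names are illustrative, not anything go-libp2p provides.

```go
package throttle

import (
	"context"
	"io"

	"golang.org/x/time/rate"
)

// throttledWriter caps the byte rate of writes to the underlying
// writer (e.g. a libp2p stream) using a token bucket.
type throttledWriter struct {
	w   io.Writer
	lim *rate.Limiter
	ctx context.Context
}

func (t *throttledWriter) Write(p []byte) (int, error) {
	// Wait until the bucket has enough tokens for this chunk.
	// Note: chunks larger than the burst size would return an error.
	if err := t.lim.WaitN(t.ctx, len(p)); err != nil {
		return 0, err
	}
	return t.w.Write(p)
}

// NewThrottledWriter limits writes to roughly bytesPerSec.
func NewThrottledWriter(ctx context.Context, w io.Writer, bytesPerSec int) io.Writer {
	return &throttledWriter{
		w:   w,
		lim: rate.NewLimiter(rate.Limit(bytesPerSec), bytesPerSec),
		ctx: ctx,
	}
}
```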

MarcoPolo commented 7 months ago

The source code for @sukunrt's test is here: https://github.com/libp2p/go-libp2p/tree/webrtc-echobm/examples/echobm

> recv buf: 100kB, maxBufferedAmount: 32kB

I agree that this one seems like a good option. It is fast enough while using little CPU. If use cases appear that need to optimize past 12 Mb/s on WebRTC connections, we can expose an option to tune the recv buffer and the maxBufferedAmount. But defaulting to conservative resource usage seems better to me.

MarcoPolo commented 7 months ago

I think we can close this issue, as @sukunrt's report and code are enough.

sukunrt commented 7 months ago

@SgtPooki the better way to do that is to limit the SCTP receive buffer to 100 kB, as this affects all streams on a connection.

Apologies for the poor naming; I used the term we use in the code. The maxBufferedAmount is the send buffer we have per stream. So increasing maxBufferedAmount increases throughput per stream and also the CPU used, since we need more CPU for the higher throughput. Just changing this number still won't increase the per-connection (sum of all streams) CPU used, because that is limited by the receive buffer, which is shared across all streams. The receive buffer is shared across all streams because SCTP doesn't have per-stream flow control.
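
For readers unfamiliar with the mechanism: maxBufferedAmount-style send backpressure in pion/webrtc is typically built on the data channel's buffered-amount callbacks, roughly as sketched below. The threshold values are illustrative, not the ones go-libp2p ships, and this is a generic pattern rather than the transport's actual code.

```go
package example

import "github.com/pion/webrtc/v3"

const (
	maxBufferedAmount          = 32 * 1024 // illustrative per-stream send budget
	bufferedAmountLowThreshold = 16 * 1024
)

// sendWithBackpressure writes chunks to a data channel but stops
// queueing once maxBufferedAmount bytes are buffered, resuming when
// pion signals that the buffer drained below the low threshold.
func sendWithBackpressure(dc *webrtc.DataChannel, chunks [][]byte) error {
	drained := make(chan struct{}, 1)
	dc.SetBufferedAmountLowThreshold(bufferedAmountLowThreshold)
	dc.OnBufferedAmountLow(func() {
		select {
		case drained <- struct{}{}:
		default:
		}
	})

	for _, chunk := range chunks {
		if dc.BufferedAmount() > maxBufferedAmount {
			<-drained // wait for the send buffer to drain
		}
		if err := dc.Send(chunk); err != nil {
			return err
		}
	}
	return nil
}
```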

vyzo commented 7 months ago

This should be done through the resource manager; don't hardcode values, please.

sukunrt commented 7 months ago

That's a much larger change, since the underlying SCTP connection's receive buffer is a fixed-size buffer and doesn't support window updates.