Open omegablitz opened 1 year ago
Wouldn't port_reuse and calling listen_on with the corresponding interface address suffice?
With port_reuse turned on, the dial address is retrieved from PortReuse::local_dial_addr, which is always Ipv4Addr::UNSPECIFIED.
Do you see a way for how we can support your usecase without adding an additional configuration option?
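To make the consequence concrete, here is a minimal Python sketch (not libp2p code, just the underlying socket behavior as I understand it): dialing from a socket bound to the unspecified address hands source-IP selection to the kernel, which is exactly what makes a source-IP-sensitive setup impossible to control.

```python
import socket

# Stand-in listener for the remote peer, bound to a concrete address.
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]

# Dial from a socket bound to the unspecified address (0.0.0.0, i.e. the
# equivalent of Ipv4Addr::UNSPECIFIED): the kernel, not the caller, picks
# the source IP at connect time.
dialer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
dialer.bind(("0.0.0.0", 0))
dialer.connect(("127.0.0.1", port))

# The kernel chose the source address on its own; the caller had no say.
assert dialer.getsockname()[0] == "127.0.0.1"

dialer.close()
listener.close()
```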
Hmm, the only possibility I can think of is to change port-reuse to bind to the listening address (including ip), but I saw that this was changed intentionally in #2382.
It also may be possible to unify the configuration of outbound dial addresses and port reuse instead of having 2 separate configuration options - but this is likely to be a more involved change.
Can you give a bit more context on what you are trying to achieve and how the change would help you?
I don't see a way to implement what you need without one of those two changes, and both seem like a bad idea.
Sure!
I've partitioned my loopback addresses locally into different subnets, eg 127.100.0.0/16, 127.101.0.0/16, 127.102.0.0/16, etc.
I've configured all traffic from 127.100.0.0/16 -> 127.100.0.0/16 to have latency a, 127.100.0.0/16 -> 127.101.0.0/16 to have latency b, 127.100.0.0/16 -> 127.102.0.0/16 to have latency c, etc. for all subnet pairs.
I'm running at least 1 libp2p swarm per subnet, listening on an address within that subnet (eg 127.100.0.1, 127.100.0.2, 127.101.0.1, etc.).
I'm trying to get the swarms to communicate with each other with the given latencies, but the problem is that even though the server-side socket is bound properly, the dialer-side socket will not be bound to the listener address (which is in the desired subnet), which breaks the latency simulation.
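For what it's worth, the dialer-side behavior I'm after is plain bind-before-connect at the socket level. A minimal Python sketch (not libp2p code; it uses 127.0.0.1 so it runs anywhere, whereas my setup would bind an address in one of the 127.100.0.0/16-style subnets):

```python
import socket

# Stand-in for the listening swarm's transport.
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]

# Bind the dialing socket to a chosen source address before connecting
# (port 0 = any ephemeral port). With this, traffic shaping keyed on the
# source IP sees the intended subnet.
dialer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
dialer.bind(("127.0.0.1", 0))  # in my setup: a subnet address such as 127.100.0.1 (illustrative)
dialer.connect(("127.0.0.1", port))

conn, peer = listener.accept()
# The listener observes the connection coming from the bound source address.
assert peer == dialer.getsockname()

conn.close()
dialer.close()
listener.close()
```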
I agree that figuring out a clean abstraction for implementing this isn't straightforward. I'll try to think about it some more, and will close this issue in the meantime if no one else has any suggestions.
Could this perhaps be solved by running them inside (docker) containers, where you only expose that one particular interface to the swarm?
Yes, that's actually what I was doing previously via testground. I wanted to move away from that in order to avoid needing to do any sort of container coordination, to make the tests simpler.
In my current test, each swarm is being run from the same process, which makes both running the test and aggregating results/logs simple.
We've moved away from testground for our interop-tests and are now using docker-compose. It works really well for us. Compose has a feature that allows the entire suite of containers to fail with the exit code of a particular container: https://github.com/libp2p/test-plans/blob/b2504be8cb2fa22b57c985721a3d99b96643c201/multidim-interop/src/compose-runner.ts#L42
What's the status of this proposal? Is it a nice-to-have feature or not planned?
Description
I want to be able to configure the bound address of outgoing tcp connections - eg 127.0.0.2:0. Currently, the only supported binding is related to port reuse.

Motivation
In a local test I've created, I'm trying to shape traffic by IP ranges. There currently isn't a way to configure the outbound IPs used for dialing in the default libp2p tcp transport.
A concrete example: one swarm listening on 127.0.0.1:5000 and one listening on 127.0.0.2:5000. Desired behavior: all outbound connections from the former to be from (configurable) 127.0.0.1:0, and all outbound connections from the latter to be from (configurable) 127.0.0.2:0.

Requirements
Add a config option here for a configurable outbound bind address.

Open questions
Are you planning to do it yourself in a pull request?
Yes