perfsonar / bwctl

A scheduling and policy framework for measurement tools
Apache License 2.0

BWCTL FW checker #31

Closed igarny closed 3 years ago

igarny commented 8 years ago

Hi Mark, guys,

I believe it was discussed, and it is in the plans, to have some way of checking whether the local firewall (FW) configuration complies with the requirements of the measurement tools. Obviously, if we are considering a wider set of tools, there is no way we can know out of the box about all of the varieties of port requirements. IMHO this means we need to have some XML-based descriptions of the tools with their incoming and outgoing ports.

Besides this, if we intend to really prove that the measurement can be produced, we should also check whether any of the incoming ports is already reserved by another tool on the running system (here there is a dependency on whether it is a single control port or a pool of ports). IMHO these FW checks would be enough to affirm that the local toolkit instance is ready to participate in the suggested measurement. Please consider also https://github.com/perfsonar/bwctl/issues/30 . The information for this could also go into the XML config.

Execution of test runs of the measurements I do not see as a task for perfSONAR. It would be sufficient for the user/administrator to be able to invoke/request such tests, but it should also be their responsibility to set up the parameters for such a test. Obviously a 5-second throughput test would give nobody a proper understanding of the network performance, but it is enough for an FW test on the network path. Again, I see the initiation of such tests only at the discretion of the user. Summary:

Best regards, Ivan

mfeit-internet2 commented 8 years ago

...we need to have some xml based descriptions of the tools with incoming and outgoing ports.

One of the facets of the new architecture is that the scheduler and tools aren't tightly coupled like they are in the old one. Tools enumerate themselves to the scheduler by providing a standardized blob of JSON.
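The actual schema of that enumeration blob isn't shown in this thread; as a purely hypothetical sketch, a tool might describe its port requirements to the scheduler with something like the following (all field names invented for illustration):

```json
{
  "tool": "iperf3",
  "control-port": { "protocol": "tcp", "default": 5201, "configurable": true },
  "data-ports": { "protocol": "tcp", "count": 2, "configurable": true },
  "directions": ["incoming", "outgoing"]
}
```

The key property, per the comment above, is the `configurable` flag: tools that can be told explicitly which ports to use let the scheduler assign them from its pool rather than demanding fixed ones.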

Very early versions will feature a one-test-at-a-time model, but the eventual goal is to do resource modeling in a way that will let us run as many as possible concurrently without them stepping on each other's toes. A tool running a test will declare what it needs (e.g., "an entire processor core and two TCP ports on an interface that can see IP 1.2.3.4") and the scheduler will assign resources from a (configurable) pool. We can arrange things so tools that must have a particular port can be accommodated, but I would recommend that we favor tools that can be told explicitly what ports to use. (The tools we support now all allow that.)
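The pool idea described above can be sketched in a few lines. This is not the scheduler's actual API, just an illustration, with all names invented, of handing out ports from a configurable range and returning them when a test finishes:

```python
# Illustrative sketch only -- not part of bwctl or its scheduler.
class PortPool:
    """A configurable pool of TCP ports assigned to concurrent tests."""

    def __init__(self, low, high):
        self.free = set(range(low, high + 1))
        self.in_use = {}                      # test id -> assigned ports

    def allocate(self, test_id, count):
        """Reserve `count` ports for a test; fail if the pool is exhausted."""
        if count > len(self.free):
            raise RuntimeError("port pool exhausted")
        ports = [self.free.pop() for _ in range(count)]
        self.in_use[test_id] = ports
        return ports

    def release(self, test_id):
        """Return a finished test's ports to the pool."""
        self.free.update(self.in_use.pop(test_id, []))


pool = PortPool(5001, 5010)
ports = pool.allocate("throughput-1", 2)   # e.g. one control + one data port
pool.release("throughput-1")
```

A real implementation would also track non-port resources (cores, interfaces) per the "resource modeling" goal, but the allocate/release lifecycle is the core of keeping concurrent tests off each other's ports.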

Execution of test runs of the measurements I do not see as a task for perfSONAR. It would be sufficient for the user/administrator to be able to invoke/request such tests...

If I understand what you're getting at, verifying that ports are usable before using them for tests is absolutely perfSONAR's bailiwick. Putting a locally-unusable port in the pool is a configuration error, and there's no reason we can't test that and raise a red flag if that's happening. Anything we can identify as wrong automagically instead of leaving the end user to figure it out is a better thing.
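The local part of that check, confirming that a configured port isn't already held by another process before flagging it usable, can be sketched with a simple bind test. This is an illustration of the idea, not bwctl code; the helper name is invented:

```python
# Illustrative sketch only -- not the actual perfSONAR check.
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if we can bind the TCP port, i.e. no other tool holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR avoids false alarms from sockets lingering in TIME_WAIT;
        # an actively listening socket on the port still makes bind() fail.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Flag configuration errors: any pool port that is locally unusable.
configured_pool = [5001, 5002, 5003]     # hypothetical example ports
red_flags = [p for p in configured_pool if not port_is_free(p)]
```

Note this only validates local usability; whether the port is reachable across the path (the TechEx point below) is a separate, two-ended question.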

The related thing that came up at TechEx was having a way for two nodes to check whether the ports being used for a test can actually see each other, although I think resource modeling and better diagnostic output than what we have now may reduce the need for that.

igarny commented 8 years ago

Hi Mark,

Thanks for your attention!

On your second comment:

"Putting a locally-unusable port in the pool is a configuration error, and there's no reason we can't test that and raise a red flag if that's happening"

I believe you are missing the sentence below and the two or three above it.

(quote from initial text) "IMHO these checks of FW would be enough to affirm that the local toolkit instance is ready to participate in the suggested measurement."

I mean that everything local to the toolkit should be considered, but I was afraid that you (the team) would want to go beyond that and test the connectivity as a whole.

Best regards, Ivan

mfeit-internet2 commented 8 years ago

Yeah, not going that far, at least not in the scheduler.

There's been some interest in doing tests of end-to-end connectivity, and that's something that can be addressed later by developing tests and tools that plug into the scheduler.