CNES / openbach-extra

GNU General Public License v3.0

Possible name collision between 'interval' parameter from iperf.py and a scenario parameter named 'interval'? #2

Open godelc7 opened 1 year ago

godelc7 commented 1 year ago

This issue is about the iperf job, which can be found in externals_jobs/stable_jobs/transport/iperf.

This job has an interval parameter, which is used to set the interval between each generation of metrics by the underlying iperf2 command line tool. But when I use it, let's say with interval=3, it instead creates an OpenBACH job instance that is rescheduled every 3 seconds after the beginning of the first run. This effectively creates an indefinitely running job. I guess this wasn't the intention behind the 'interval' parameter of the iperf job.

The iperf3 job has a similar parameter, which is named metrics-interval. On the one hand this is much more precise from a semantic point of view, and on the other hand it does exactly what it should, which is to call the iperf3 command line tool with the '-i/--interval' option.
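For comparison, here is how the metrics interval would be set when starting the iperf3 job from a scenario. This is only a minimal sketch: metrics-interval is the parameter named above and is assumed to map to a metrics_interval keyword of configure(); the other names ('AN_IPERF3_SCENARIO', 'ENTITY_NAME', server_mode) are placeholders and may not match the iperf3 job's actual arguments.

```python
from scenario_builder import Scenario

scenario = Scenario('AN_IPERF3_SCENARIO')

# Sketch only: metrics_interval should end up as iperf3's '-i/--interval',
# i.e. a metrics reporting period, not a job rescheduling period.
iperf3_server = scenario.add_function('start_job_instance')
iperf3_server.configure(
        'iperf3', 'ENTITY_NAME',
        server_mode=True,       # placeholder, the actual mode argument may differ
        metrics_interval=3)     # report metrics every 3 seconds
```

With the iperf job, the equivalent interval=3 ends up being interpreted as a rescheduling interval instead, as the reproduction script below shows.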

Maybe the interval parameter of the iperf2 job should be renamed to metrics-interval also?

[Steps to Reproduce the erroneous behavior] Create a Python script with the following content and execute it. It will create never-ending iperf2 jobs.

```python
from scenario_builder import Scenario
# import path for ScenarioObserver assumed from the openbach-extra API scripts
from auditorium_scripts.scenario_observer import ScenarioObserver

scenario = Scenario('A_SCENARIO_NAME')

# Server side: interval=3 is expected to set iperf2's metrics reporting
# interval, but reschedules the job instance every 3 seconds instead.
iperf_server = scenario.add_function('start_job_instance')
iperf_server.configure('iperf', 'ENTITY_NAME', server_mode=True, interval=3)

# Client side, started 1 second after the server is launched
iperf_client = scenario.add_function(
        'start_job_instance',
        wait_launched=[iperf_server],
        wait_delay=1)
iperf_client.configure('iperf', 'ENTITY_NAME', client_mode_server_ip='SERVER_IP')  # placeholder IP

observer = ScenarioObserver()
args = ['PRJ_NAME', 'run']
observer.parse(args)
observer.launch_and_wait(scenario)
```

Kniyl commented 1 year ago

You’re perfectly right, we changed the interval argument in iperf3 back in June 2020 for the exact same reason and forgot to do it for iperf2 ever since.

Will fix ASAP!

godelc7 commented 1 year ago

Hi,

Since @Kniyl committed a fix here, should I close this issue?