Closed Tiboris closed 1 year ago
Oh, I forgot to rewrite the calls in the test code. Will do.
Failure caused by:
Error:
Problem: package mrack-1.12.3-4.el8.noarch requires python3-mrack-beaker = 1.12.3-4.el8, but none of the providers can be installed
- cannot install the best candidate for the job
- nothing provides beaker-client needed by python3-mrack-beaker-1.12.3-4.el8.noarch
A possible resolution is to declare the package as a weak dependency (Suggests or Recommends) instead of Requires, to work around such errors.
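As an illustration of the suggested workaround (using the package name from the error above; whether this fits mrack's packaging is for the maintainers to decide), an RPM spec can downgrade the hard dependency to a weak one:

```spec
# Hard dependency: dnf fails the whole transaction if beaker-client
# cannot be found in any enabled repository.
Requires: beaker-client

# Weak dependency: installed by default when available, but its absence
# is not fatal.
Recommends: beaker-client

# Weaker hint: not installed by default, only listed as a suggestion.
Suggests: beaker-client
```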
/packit test
Depends on https://github.com/neoave/mrack/pull/233
Hey @Tiboris, these changes look good at first sight. Could you help me understand the different sleep times that you've put in place and what they solve?
- `asyncio.sleep(slow_down)` in `_provision_base`: this slows down the provisioning at the beginning using a variable sleep time, so we prevent mrack executions from provisioning at the same time.
- `await asyncio.sleep(timeout * 10)` in `_provision_base`: sleep time while waiting to check whether resources are available again.
- `await asyncio.sleep(cooldown)` in `strategy_retry`: in this case we haven't been able to provision all resources despite the previous waits, so we are probably in a race condition with concurrent runs, and thus we free all resources and try again in `cooldown` minutes.

Please correct me or complement the above :slightly_smiling_face:
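The three sleeps discussed above can be sketched roughly as follows. This is a minimal standalone illustration, not mrack's actual implementation: `provision_host`, the parameter names, and the random failure model are all assumptions made for the sketch; only the placement of the three sleeps mirrors the review comments.

```python
import asyncio
import random


async def provision_host(name: str) -> bool:
    """Stand-in for a real provider call; randomly fails to mimic contention."""
    await asyncio.sleep(0)  # pretend to talk to a cloud API
    return random.random() > 0.3


async def provision_base(hosts, slow_down: float, timeout: float):
    # 1) Variable initial sleep so that concurrent runs do not all start
    #    provisioning at exactly the same moment.
    await asyncio.sleep(slow_down)
    provisioned = []
    for host in hosts:
        ok = await provision_host(host)
        if not ok:
            # 2) Wait before checking whether resources are available again.
            await asyncio.sleep(timeout * 10)
            ok = await provision_host(host)
        if ok:
            provisioned.append(host)
    return provisioned


async def strategy_retry(hosts, attempts: int = 3, cooldown: float = 60.0,
                         max_jitter: float = 2.0, timeout: float = 0.01):
    for attempt in range(attempts):
        got = await provision_base(hosts, random.uniform(0, max_jitter), timeout)
        if len(got) == len(hosts):
            return got
        # 3) Not everything was provisioned: we are probably racing with
        #    concurrent runs, so free everything and retry after the cooldown.
        await asyncio.sleep(cooldown)
    raise RuntimeError("provisioning failed after retries")
```

The jitter in step 1 is the key idea behind the "Do not use same sleep for every mrack run" commit: a per-run random delay spreads concurrent executions apart instead of letting them hammer the provider in lockstep.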
@dav-pascual thanks for review and questions.
> `await asyncio.sleep(cooldown)` in `strategy_retry`: in this case we haven't been able to provision all resources despite the previous waits, so we probably are in a race condition with concurrent runs, and thus we free all resources and try again in `cooldown` minutes.

I would like to correct this one: the actual cooldown is now in seconds. This can be changed if needed and if it makes sense.
To answer the other questions:
With concurrent runs it is hard to say how many hosts, or what percentage, will be provisioned; I am afraid it is very inexact here. Removing all resources is destructive, I agree, but it helps jobs in the case where every run has only 1 host provisioned (also a speculative statement, with a small chance of reproducing).
I will use your very nicely formulated assumptions, which are basically 99% correct, and add comments to the code based on them.
Again thanks!
@Tiboris Thanks for adding the comments :slightly_smiling_face:
Your answer makes sense and the rest looks good, so I will approve the PR.
Thanks for the review @dav-pascual
feat: Do not use same sleep for every mrack run
feat(AWS): Add utilization check method
feat(OpenStack): Add utilization check method
refactor(OpenStack): make private openstack methods truly private
Signed-off-by: Tibor Dudlák tdudlak@redhat.com
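The "Add utilization check method" commits for AWS and OpenStack could conceptually reduce to a check like the one below. This is a hypothetical sketch: the function name, parameters, and 80% threshold are assumptions for illustration, not mrack's actual provider API.

```python
def can_provision(used: int, required: int, quota: int,
                  threshold: float = 0.8) -> bool:
    """Return True when provisioning `required` extra units would keep
    utilization (used + required) / quota at or below the threshold."""
    if quota <= 0:
        # No quota known (or zero quota): refuse rather than divide by zero.
        return False
    return (used + required) / quota <= threshold
```

A check like this lets a run back off (the `timeout * 10` sleep above) before asking the cloud for resources it cannot get, instead of failing mid-provisioning and having to release everything.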