athiruma opened this issue 4 months ago
This issue is currently awaiting triage.
If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/kind feature
Describe the solution you'd like:
By using Capacity Blocks for ML, one can obtain a significant discount compared to On-Demand GPU instances. In addition, CapacityReservations can be used to allocate additional On-Demand instances to the cluster when On-Demand availability is poor.
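To illustrate what is being asked for at the EC2 API level, here is a minimal sketch (aws-sdk-go-v2, not CAPA's actual implementation) of launching an instance into a Capacity Block: the request targets the block's CapacityReservationId and additionally sets the "capacity-block" market type. The AMI, instance type, and reservation ID below are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	"github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatalf("load AWS config: %v", err)
	}
	client := ec2.NewFromConfig(cfg)

	// Launch a single GPU instance into a Capacity Block. Both the
	// reservation target and the "capacity-block" market type are required.
	out, err := client.RunInstances(context.TODO(), &ec2.RunInstancesInput{
		ImageId:      aws.String("ami-0123456789abcdef0"), // placeholder AMI
		InstanceType: types.InstanceType("p5.48xlarge"),   // placeholder GPU type
		MinCount:     aws.Int32(1),
		MaxCount:     aws.Int32(1),
		InstanceMarketOptions: &types.InstanceMarketOptionsRequest{
			MarketType: types.MarketType("capacity-block"),
		},
		CapacityReservationSpecification: &types.CapacityReservationSpecification{
			CapacityReservationTarget: &types.CapacityReservationTarget{
				CapacityReservationId: aws.String("cr-0123456789abcdef0"), // placeholder Capacity Block ID
			},
		},
	})
	if err != nil {
		log.Fatalf("run instances: %v", err)
	}
	log.Printf("launched instance %s", aws.ToString(out.Instances[0].InstanceId))
}
```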
Anything else you would like to add: If the user attaches an On-Demand CapacityReservationId to the cluster and the reservation expires, the cluster falls back to normal On-Demand instances. For Capacity Blocks, however, the instances start getting deleted, because these are GPU instances that need to be allocated to other users.
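For contrast, a standard On-Demand Capacity Reservation is targeted with the same CapacityReservationTarget but without any market type; when that reservation expires, the instance keeps running as regular On-Demand capacity, whereas instances in a Capacity Block are reclaimed when the block ends. Again a hedged sketch with placeholder IDs, not CAPA's implementation:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	"github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

// onDemandReservationInput builds a RunInstances request pinned to a standard
// On-Demand Capacity Reservation. Unlike the Capacity Block example above, no
// market type is set; if the reservation expires, the instance keeps running
// and is simply billed as regular On-Demand capacity.
func onDemandReservationInput(reservationID string) *ec2.RunInstancesInput {
	return &ec2.RunInstancesInput{
		ImageId:      aws.String("ami-0123456789abcdef0"), // placeholder AMI
		InstanceType: types.InstanceType("p4d.24xlarge"),  // placeholder GPU type
		MinCount:     aws.Int32(1),
		MaxCount:     aws.Int32(1),
		CapacityReservationSpecification: &types.CapacityReservationSpecification{
			CapacityReservationTarget: &types.CapacityReservationTarget{
				CapacityReservationId: aws.String(reservationID),
			},
		},
	}
}

func main() {
	input := onDemandReservationInput("cr-0fedcba9876543210") // placeholder reservation ID
	fmt.Printf("would launch %s into %s\n",
		string(input.InstanceType),
		aws.ToString(input.CapacityReservationSpecification.CapacityReservationTarget.CapacityReservationId))
}
```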
Environment:
- Kubernetes version: (use kubectl version):
- OS (e.g. from /etc/os-release):