fridex opened this issue 4 years ago
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@sesheta: Closing this issue.
@fridex: This issue is currently awaiting triage.
One of the @thoth-station/devsops will take care of the issue, and will accept the issue by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/triage accepted
/remove-lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
@sesheta: Closing this issue.
/lifecycle frozen
From my experience, scaling a web-server-like load on CPU usage is not super reliable. Scaling on request rate is probably a better idea, and that metric is probably already available from whatever Python web server implementation is used.
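If we go the request-rate route later, it typically means exposing a per-pod request-rate metric from the application (e.g., via a Prometheus exporter in the Python web server) and making it available through the custom metrics API (e.g., a Prometheus adapter). A hypothetical sketch, assuming a pod metric named http_requests_per_second is already served by the custom metrics API; the metric name, target value, resource names, and availability of autoscaling/v2 on the cluster are all assumptions:

```yaml
# Hypothetical sketch only: request-rate based HorizontalPodAutoscaler.
# Assumes a custom metrics pipeline (e.g., Prometheus + adapter) already exposes
# a per-pod metric named "http_requests_per_second"; older clusters may only
# support the autoscaling/v2beta* API versions, and the scale target name/kind
# is a placeholder, not the actual user API deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-api
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: user-api
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "10"
```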
Acceptance criteria
Resource
story points: 3pt
Is your feature request related to a problem? Please describe.
As reported originally in https://github.com/thoth-station/user-api/issues/739, we could configure an autoscaler for the user API.
Describe the solution you'd like
Configure an autoscaler so that the user API is automatically scaled up if too many requests are made at the same time. As a start, we can use CPU utilization and observe how the API behaves. Later, we can use metrics such as the number of connections to scale the replicas up/down.
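A minimal sketch of the CPU-based starting point, assuming the user API runs as a DeploymentConfig named user-api with CPU requests set on its containers; the name, replica bounds, threshold, and scale-target apiVersion below are assumptions, not the actual deployment:

```yaml
# Hypothetical sketch only: CPU-based HorizontalPodAutoscaler for the user API.
# Assumes a DeploymentConfig named "user-api" whose pods declare CPU requests;
# the scaleTargetRef apiVersion depends on the OpenShift/Kubernetes version in use.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: user-api
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: user-api
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
```

The OpenShift documentation linked below also describes creating an equivalent autoscaler with the oc autoscale command instead of a manifest.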
Describe alternatives you've considered
Keep multiple user API replicas constantly scaled up, but that does not adjust automatically based on load.
Additional context
https://docs.openshift.com/container-platform/3.9/dev_guide/pod_autoscaling.html
https://github.com/thoth-station/user-api/issues/739