kthcloud / console

kthcloud web console
https://cloud.cbh.kth.se
MIT License

Show replica status #272

Closed: pierrelefevre closed this issue 1 month ago

pierrelefevre commented 1 month ago

For deployments with multiple replicas, it is difficult to grok the overall health of all replicas from a single status indicator. Therefore, we now expose the number of replicas in each status. A component showing this is needed; it could perhaps live by the top bar, as in the crude Photoshop mockup below (less wide would look nicer).

[Screenshot: mockup from 2024-05-15]

Inspiration: Rancher replica status overview

[Images: Rancher replica status overview and expanded view]

It looks like this in the deployment object:

  "replicaStatus": {
    "desiredReplicas": 1,
    "readyReplicas": 1,
    "availableReplicas": 1,
    "unavailableReplicas": 0
  }
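
For reference, a minimal TypeScript sketch of this shape (the interface name, and TypeScript itself, are assumptions; the comments describe the corresponding Kubernetes Deployment status fields, which is where these values come from):

  // Sketch of the replicaStatus object attached to a deployment.
  // Field semantics follow the corresponding Kubernetes Deployment status fields.
  interface ReplicaStatus {
    desiredReplicas: number;     // how many replicas the deployment wants to run
    readyReplicas: number;       // replicas whose pods currently pass their readiness checks
    availableReplicas: number;   // ready replicas that have stayed ready for at least minReadySeconds
    unavailableReplicas: number; // replicas still required to reach the desired state but not yet available
  }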

Some conditions:

Phillezi commented 1 month ago

Hi!

I started to work on this, but I have a few questions: which statuses are wanted, and what should they represent?

I currently have these:

[Screenshots of the proposed statuses and an updated mockup]

pierrelefevre commented 1 month ago

Adding @saffronjam to the loop. What do you think makes the most sense? I'm not sure what the difference between ready and available is here.

saffronjam commented 1 month ago

Statuses are taken directly from Kubernetes. Here's a post describing them :) https://stackoverflow.com/questions/66317251/couldnt-understand-availablereplicas-readyreplicas-unavailablereplicas-in-dep

pierrelefevre commented 1 month ago

> Hi!
>
> I started to work on this, but I have a few questions: which statuses are wanted, and what should they represent?
>
> I currently have these:
>
>   • Ready: taken from readyReplicas; replicas that are ready, with no current action (shown in green)
>   • Occupied: availableReplicas - readyReplicas; replicas that are doing something (shown in blue). Does this make sense? Edit: Busy might be a better name for this.
>   • Unavailable: taken from unavailableReplicas; replicas that are unavailable
>   • Wanted: desiredReplicas - (availableReplicas + unavailableReplicas); replicas that the user wants but that do not exist
>
> [Screenshots of the proposed statuses and an updated mockup]

Nice updated mockup! I think this makes sense. Maybe we could use Desired instead of Wanted, since that seems to be the wording used in many of the K8s docs, but other than that it looks great :)
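
For reference, a minimal TypeScript sketch of the mapping described in the quoted list above, using the ReplicaStatus interface sketched earlier in the thread; the function name is illustrative, not the console's actual code, and the arithmetic simply mirrors the proposal:

  // Derive the four proposed categories from the deployment's replicaStatus.
  function categorizeReplicas(rs: ReplicaStatus) {
    return {
      ready: rs.readyReplicas,                        // shown in green
      busy: rs.availableReplicas - rs.readyReplicas,  // "Occupied"/"Busy", shown in blue
      unavailable: rs.unavailableReplicas,
      desired:                                        // "Wanted"/"Desired"
        rs.desiredReplicas - (rs.availableReplicas + rs.unavailableReplicas),
    };
  }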

Phillezi commented 1 month ago

Thanks!

Yes, Desired may be better. I initially wanted to differentiate it from desiredReplicas, since what is displayed is the amount desiredReplicas - (availableReplicas + unavailableReplicas).

I will take a look at the k8s ReplicaSet status docs to make sure they match my interpretation.

Phillezi commented 1 month ago

Hi again,

I can't get the status to show for multiple replicas. I created a deployment to test without hard-coded values, and I get the same replicaStatus regardless of the number of replicas on the deployment.

{
        "id": "0fc7e863-8cb5-4ecd-b49c-ad8f8de673f5",
        "name": "joyfully-cautiously-minus",
        "type": "prebuilt",
        "ownerId": "4efea96b-2d6b-41f6-96a2-656f18d6f8d1",
        "zone": "se-flem-2",
        "createdAt": "2024-05-20T18:30:57.963Z",
        "updatedAt": "2024-05-20T18:35:39.328Z",
        "accessedAt": "2024-05-20T18:35:52.479Z",
        "cpuCores": 0.2,
        "ram": 0.5,
        "replicas": 4,
        "url": "https://joyfully-cautiously-minus.app.cloud.cbh.kth.se",
        "envs": [
            {
                "name": "NAME",
                "value": "THIS_IS_A_DUMMY"
            },
            {
                "name": "PORT",
                "value": "8080"
            }
        ],
        "volumes": [],
        "initCommands": [],
        "args": [],
        "private": false,
        "internalPort": 8080,
        "image": "nginx:latest",
        "healthCheckPath": "/healthz",
        "status": "resourceRunning",
        "replicaStatus": {
            "desiredReplicas": 1,
            "readyReplicas": 1,
            "availableReplicas": 1,
            "unavailableReplicas": 0
        },
        "pingResult": 502,
        "integrations": [],
        "teams": [],
        "storageUrl": "..."
    },

saffronjam commented 1 month ago

That's because we are "cheating" a little bit. Having more replicas just edits your scaler (HPA) so that you could have more given your load. So if you add more load, like http requests, more pods will probably be created.

A tool like siege is useful! ☺️

Phillezi commented 1 month ago

> That's because we are "cheating" a little bit. Having more replicas just edits your scaler (HPA) so that you could have more given your load. So if you add more load, like http requests, more pods will probably be created.
>
> A tool like siege is useful! ☺️

Ahh ok, I see. I just thought that it changed the minimum replicas on the HPA, but this makes more sense.

Thanks, I will look into siege and try it :)
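
For anyone reproducing this, a rough TypeScript sketch (Node 18+, built-in fetch) that generates concurrent HTTP load against the test deployment URL from the JSON above so the HPA scales up pods; the concurrency and duration values are arbitrary placeholders, and siege accomplishes the same thing from the command line:

  // Hammer the deployment with concurrent requests so the HPA sees load.
  // URL is the test deployment from this thread; adjust for your own.
  const url = "https://joyfully-cautiously-minus.app.cloud.cbh.kth.se";
  const concurrency = 50;    // parallel workers
  const durationMs = 60_000; // keep the load up for one minute

  async function worker(): Promise<void> {
    const end = Date.now() + durationMs;
    while (Date.now() < end) {
      try {
        const res = await fetch(url);
        await res.arrayBuffer(); // drain the body so connections are reused
      } catch {
        // ignore network errors and keep generating load
      }
    }
  }

  // Run all workers in parallel and wait for them to finish.
  Promise.all(Array.from({ length: concurrency }, () => worker())).then(() =>
    console.log("done generating load"),
  );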