owncloud / ocis

:atom_symbol: ownCloud Infinite Scale Stack
https://doc.owncloud.com/ocis/next/

healthcheck not failing even when service registration is not successful #8783

Closed wkloucek closed 3 weeks ago

wkloucek commented 7 months ago

Describe the bug

For some reason, none of the services can complete their service registration:

...
appregistry-5c7cd79696-vvzzm appregistry {"level":"error","service":"app-registry","error":"Failed to store data in bucket 'ONSXE5TJMNSS24TFM5UXG5DSPFPWG33NFZXXO3TDNRXXKZBOMFYGSLTBOBYC24TFM5UXG5DSPE======': nats: connection closed","time":"2024-04-05T05:25:12Z","line":"github.com/owncloud/ocis/v2/ocis-pkg/registry/register.go:33","message":"registration error for external service com.owncloud.api.app-registry"}
authmachine-6b75db9859-rjnkk authmachine {"level":"error","service":"auth-machine","error":"Failed to store data in bucket 'ONSXE5TJMNSS24TFM5UXG5DSPFPWG33NFZXXO3TDNRXXKZBOMFYGSLTBOV2GQLLNMFRWQ2LOMU======': nats: connection closed","time":"2024-04-05T05:25:13Z","line":"github.com/owncloud/ocis/v2/ocis-pkg/registry/register.go:33","message":"registration error for external service com.owncloud.api.auth-machine"}
gateway-7f75b8d7d-smmkt gateway {"level":"error","service":"gateway","error":"Failed to store data in bucket 'ONSXE5TJMNSS24TFM5UXG5DSPFPWG33NFZXXO3TDNRXXKZBOMFYGSLTHMF2GK53BPE======': nats: connection closed","time":"2024-04-05T05:25:13Z","line":"github.com/owncloud/ocis/v2/ocis-pkg/registry/register.go:33","message":"registration error for external service com.owncloud.api.gateway"}
frontend-599d689b84-d6bj7 frontend {"level":"error","service":"frontend","error":"Failed to store data in bucket 'ONSXE5TJMNSS24TFM5UXG5DSPFPWG33NFZXXO3TDNRXXKZBOO5SWELTGOJXW45DFNZSA====': nats: connection closed","time":"2024-04-05T05:25:13Z","line":"github.com/owncloud/ocis/v2/ocis-pkg/registry/register.go:33","message":"registration error for external service com.owncloud.web.frontend"}
authservice-59696b97d8-wgbq7 authservice {"level":"error","service":"auth-service","error":"Failed to store data in bucket 'ONSXE5TJMNSS24TFM5UXG5DSPFPWG33NFZXXO3TDNRXXKZBOMFYGSLTBOV2GQLLTMVZHM2LDMU======': nats: connection closed","time":"2024-04-05T05:25:14Z","line":"github.com/owncloud/ocis/v2/ocis-pkg/registry/register.go:33","message":"registration error for external service com.owncloud.api.auth-service"}
gateway-7f75b8d7d-z85gw gateway {"level":"error","service":"gateway","error":"Failed to store data in bucket 'ONSXE5TJMNSS24TFM5UXG5DSPFPWG33NFZXXO3TDNRXXKZBOMFYGSLTHMF2GK53BPE======': nats: connection closed","time":"2024-04-05T05:25:14Z","line":"github.com/owncloud/ocis/v2/ocis-pkg/registry/register.go:33","message":"registration error for external service com.owncloud.api.gateway"}
groups-664b5c5459-jpsqq groups {"level":"error","service":"groups","error":"Failed to store data in bucket 'ONSXE5TJMNSS24TFM5UXG5DSPFPWG33NFZXXO3TDNRXXKZBOMFYGSLTHOJXXK4DT': nats: connection closed","time":"2024-04-05T05:25:15Z","line":"github.com/owncloud/ocis/v2/ocis-pkg/registry/register.go:33","message":"registration error for external service com.owncloud.api.groups"}
...

The only service that actually fails and restarts is the appprovider. All other services pretend to be healthy and stay running.

$ kubectl get pods -n ocis
NAME                                       READY   STATUS      RESTARTS          AGE
antivirus-7c77fb964b-g8r4l                 1/1     Running     0                 6d16h
antivirus-7c77fb964b-wqdh8                 1/1     Running     0                 6d16h
appprovider-office-6c6846cf9f-c4wcs        1/1     Running     980 (5m41s ago)   6d16h
appregistry-5c7cd79696-vvzzm               1/1     Running     0                 6d16h
audit-57484b77c5-grq5f                     1/1     Running     0                 6d16h
audit-57484b77c5-s48lv                     1/1     Running     0                 6d16h
authmachine-6b75db9859-jvdz4               1/1     Running     0                 6d16h
authmachine-6b75db9859-rjnkk               1/1     Running     0                 6d16h
authservice-59696b97d8-h9p9c               1/1     Running     0                 6d16h
authservice-59696b97d8-wgbq7               1/1     Running     0                 6d16h
clientlog-fdfb746f8-bj7l9                  1/1     Running     7 (4d7h ago)      4d7h
clientlog-fdfb746f8-pc2pn                  1/1     Running     0                 6d16h
eventhistory-7bdddbcc76-mrp59              1/1     Running     0                 6d16h
eventhistory-7bdddbcc76-q2r4b              1/1     Running     0                 6d16h
frontend-599d689b84-d6bj7                  1/1     Running     0                 6d16h
frontend-599d689b84-p872k                  1/1     Running     8 (4d7h ago)      4d7h
gateway-7f75b8d7d-smmkt                    1/1     Running     0                 6d16h
gateway-7f75b8d7d-z85gw                    1/1     Running     0                 6d16h
graph-bb5764595-nf9xn                      1/1     Running     0                 6d16h
graph-bb5764595-rqv9p                      1/1     Running     0                 6d16h
groups-664b5c5459-jpsqq                    1/1     Running     0                 6d16h
groups-664b5c5459-mfck5                    1/1     Running     8 (4d7h ago)      4d7h
ldap-main-0                                1/1     Running     0                 6d16h
ldap-secondary-0                           1/1     Running     0                 6d16h
nats-0                                     2/2     Running     0                 21h
nats-1                                     2/2     Running     0                 21h
nats-2                                     2/2     Running     0                 21h
nats-box-5d5787998c-zqdlx                  1/1     Running     0                 6d17h
notifications-5bc9cb698c-6mcdf             1/1     Running     0                 6d16h
ocdav-66fd6cdd68-9pcsm                     1/1     Running     0                 6d16h
ocdav-66fd6cdd68-vbzf9                     1/1     Running     0                 6d16h
ocs-56cc5f7777-4djwq                       1/1     Running     8 (4d7h ago)      4d7h
ocs-56cc5f7777-vbr6s                       1/1     Running     0                 6d16h
policies-648db4488b-mq585                  1/1     Running     0                 6d16h
policies-648db4488b-vfztq                  1/1     Running     0                 6d16h
postprocessing-6bfb9c75ff-dfhmb            1/1     Running     0                 6d16h
postprocessing-6bfb9c75ff-hmdhx            1/1     Running     0                 6d16h
proxy-574c44c499-xhbdm                     1/1     Running     0                 6d16h
proxy-574c44c499-zvb48                     1/1     Running     0                 6d16h
search-86969f87cc-dnmjv                    1/1     Running     0                 6d16h
settings-7b79f98c45-pdfqh                  1/1     Running     0                 6d16h
settings-7b79f98c45-qvbnq                  1/1     Running     0                 6d16h
sharing-fcd5df4b9-tcn2c                    1/1     Running     0                 6d16h
sharing-fcd5df4b9-x5f56                    1/1     Running     0                 6d16h
sse-567f9f4658-mfl5x                       1/1     Running     0                 6d16h
sse-567f9f4658-zm8nr                       1/1     Running     0                 6d16h
storagepubliclink-69dbfb86df-2c9ns         1/1     Running     0                 6d16h
storagepubliclink-69dbfb86df-6wbpr         1/1     Running     0                 6d16h
storageshares-765cb77cf4-26bpc             1/1     Running     0                 6d16h
storageshares-765cb77cf4-cghkg             1/1     Running     0                 6d16h
storagesystem-557cc5f9bb-bt8ph             1/1     Running     8 (4d7h ago)      4d7h
storagesystem-557cc5f9bb-hx748             1/1     Running     0                 6d16h
storageusers-85697664c7-kzjds              1/1     Running     0                 6d16h
storageusers-85697664c7-wbqk5              1/1     Running     0                 6d16h
thumbnails-8d4d7bd56-lt5zx                 1/1     Running     0                 6d16h
thumbnails-cleanup-28538116-nfv4r          0/1     Completed   0                 130m
thumbnails-cleanup-28538176-26h7g          0/1     Completed   0                 70m
thumbnails-cleanup-28538236-8fzvg          0/1     Completed   0                 10m
userlog-74fd6d7b99-7jp6r                   1/1     Running     0                 6d16h
userlog-74fd6d7b99-wlfkx                   1/1     Running     0                 6d16h
users-b897bfdf-6wnvm                       1/1     Running     0                 6d16h
users-b897bfdf-nlrkx                       1/1     Running     0                 6d16h
web-6945f454d4-5f2qb                       1/1     Running     8 (4d7h ago)      4d7h
web-6945f454d4-tmvkx                       1/1     Running     0                 6d16h
webdav-58f7f68c5b-m6629                    1/1     Running     0                 6d16h
webdav-58f7f68c5b-wm8rc                    1/1     Running     0                 6d16h
webfinger-7465cfdcd9-2drtz                 1/1     Running     0                 6d16h
webfinger-7465cfdcd9-tvf5f                 1/1     Running     0                 6d16h
wopi-wopiserver-699df58bc7-7tqkk           1/1     Running     10 (4d7h ago)     6d16h
wopi-wopiserver-699df58bc7-qbtbz           1/1     Running     10 (4d7h ago)     6d16h

Steps to reproduce

I have no reproducer to get into that state. (Actually, recreating all oCIS pods got me out of this state.)

Expected behavior

No service returns a positive health status when the service registration doesn't work.

Actual behavior

All pods are marked as healthy.

Setup

I have oCIS 5.0.0 installed via the oCIS Helm Chart (https://github.com/owncloud/ocis-charts/commit/fe5697d0a8a0431d7b0b39e4928beb66f2276baf) with an external NATS cluster as service registry, store, and cache.

Additional context

The oCIS chart uses the /healthz endpoint to determine the status of an oCIS pod: https://github.com/owncloud/ocis-charts/blob/fe5697d0a8a0431d7b0b39e4928beb66f2276baf/charts/ocis/templates/_common/_tplvalues.tpl#L207-L220

for some context:

wkloucek commented 7 months ago

@micbar I would like to escalate this, for more details see https://github.com/owncloud/enterprise/issues/6547

The NATS client seems to be stuck in a disconnected state.

wkloucek commented 7 months ago

Reminds me of https://github.com/owncloud/ocis/issues/7056

butonic commented 7 months ago

I am wondering what our NATS clients, using the default options, do after 60 reconnect attempts have been reached:

// Default Constants
const (
    Version                   = "1.33.1"
    DefaultURL                = "nats://127.0.0.1:4222"
    DefaultPort               = 4222
    DefaultMaxReconnect       = 60
    DefaultReconnectWait      = 2 * time.Second
    DefaultReconnectJitter    = 100 * time.Millisecond
    DefaultReconnectJitterTLS = time.Second
    DefaultTimeout            = 2 * time.Second
    DefaultPingInterval       = 2 * time.Minute
    DefaultMaxPingOut         = 2
    DefaultMaxChanLen         = 64 * 1024       // 64k
    DefaultReconnectBufSize   = 8 * 1024 * 1024 // 8MB
    RequestChanLen            = 8
    DefaultDrainTimeout       = 30 * time.Second
    DefaultFlusherTimeout     = time.Minute
    LangString                = "go"
)

hm ... that would explain why the nats js registry logs the "registration error for external service ..." with the "nats: connection closed" 🤔
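A minimal sketch of what that looks like to callers, assuming the stock nats.go client with its default options (the URL and subject below are illustrative, not the ocis configuration): once the 60 reconnect attempts are exhausted, the connection transitions to CLOSED and the library never re-establishes it on its own, so every later operation returns nats.ErrConnectionClosed.

package main

import (
	"errors"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// With the defaults the client retries 60 times (2s apart) and then
	// gives up: the connection is closed for good.
	nc, err := nats.Connect("nats://nats.ocis.svc:4222")
	if err != nil {
		log.Fatal(err)
	}

	// Any operation on a closed connection fails with nats.ErrConnectionClosed,
	// which surfaces as the "nats: connection closed" seen in the registry logs.
	if err := nc.Publish("demo.subject", []byte("hello")); err != nil {
		if errors.Is(err, nats.ErrConnectionClosed) || nc.IsClosed() {
			log.Println("connection is closed for good; only a new Connect() helps")
		}
	}
}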

butonic commented 7 months ago

The registry should kill the service if the underlying store cannot recover its connection.

butonic commented 7 months ago

At least the service should not report as healthy if the NATS client connection was closed.

wkloucek commented 7 months ago

The reproducer is:

If you look at the outgoing connections in the oCIS pods, you don't see any attempted or active connection to NATS, see https://github.com/owncloud/ocis/issues/8783#issuecomment-2042992165

kobergj commented 7 months ago

We decided to fix this as follows:

This should cause ocis to automatically reconnect on the next operation when the connection is closed.

butonic commented 7 months ago

For reference: https://docs.nats.io/using-nats/developer/connecting/reconnect
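For orientation, a hedged sketch of how the reconnect behaviour described in those docs maps onto nats.go connection options (URL, wait times, and log messages are illustrative, not the values oCIS actually uses):

package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://nats.ocis.svc:4222",
		nats.MaxReconnects(-1),            // keep reconnecting forever instead of giving up after 60 attempts
		nats.ReconnectWait(2*time.Second), // pause between attempts
		nats.RetryOnFailedConnect(true),   // also retry the initial connect
		nats.DisconnectErrHandler(func(_ *nats.Conn, err error) {
			log.Printf("nats disconnected: %v", err) // fired on every drop
		}),
		nats.ReconnectHandler(func(c *nats.Conn) {
			log.Printf("nats reconnected to %s", c.ConnectedUrl())
		}),
		nats.ClosedHandler(func(_ *nats.Conn) {
			// fired only when the client stops trying for good
			log.Println("nats connection closed permanently")
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
}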

kobergj commented 7 months ago

Here is the micro PR: https://github.com/go-micro/plugins/pull/139

Just a draft for now as I need to test it properly

kobergj commented 7 months ago

@butonic reading the docs you posted: should we rather use DisconnectedErrCB? Or react to both?

wkloucek commented 6 months ago

@kobergj does #8880 also make the healthcheck fail when service registration is not possible? I think we really must reflect a non working service in the health status, even if we now try to reconnect to the service registry forever.

kobergj commented 6 months ago

Yes. It will correctly reconnect until MaxRetries is reached. If it cannot reconnect, it will call os.Exit, killing the pod.

We still need to add the same mechanic for the events handlers, but service registration is supposed to work properly now.
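A rough sketch of the retry-then-exit behaviour described here, under the assumption that the retry loop wraps the initial registry connection (function name, retry count, and wait time are illustrative, not the ocis implementation):

package main

import (
	"log"
	"os"
	"time"

	"github.com/nats-io/nats.go"
)

// maxRetries and retryWait are illustrative values, not the ones used in ocis.
const (
	maxRetries = 60
	retryWait  = 2 * time.Second
)

// connectOrDie retries the registry connection a bounded number of times and
// terminates the process (and hence the pod) if it never becomes reachable.
func connectOrDie(url string) *nats.Conn {
	for i := 0; i < maxRetries; i++ {
		nc, err := nats.Connect(url)
		if err == nil {
			return nc
		}
		log.Printf("service registry not reachable (attempt %d/%d): %v", i+1, maxRetries, err)
		time.Sleep(retryWait)
	}
	// Give up: exiting lets Kubernetes restart the pod and surface the failure.
	os.Exit(1)
	return nil
}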

micbar commented 6 months ago

@kobergj sorry to ask again, but does the health endpoint switch to not healthy during those reconnect attempts?

kobergj commented 6 months ago

No - we have no control over that. Reconnects are done by the nats pkg. We would need to implement some magic (call nats, use the disconnect handler, ...) to achieve something like that.

But I would not recommend that. If we implement something like that and nats goes down, it would take all services with it. I would say it is better if just nats is dying and other services reconnect when it is back.

wkloucek commented 6 months ago

I'd really like to have a way to see the health of an oCIS deployment WITHOUT looking at the logs. This is not possible with the current approach. Even when looking at the logs, you might not see the state, because those reconnects might happen without any logs being issued.

Currently an oCIS installation on Kubernetes is a black box to me, and I only know it's working once I have logged in, uploaded and downloaded a file, created shares, ...

micbar commented 6 months ago

@wkloucek Let me try to formulate what we decided together with you some time ago.

Service Health

We consider a service healthy when the service itself and its core functions are available.

Dependencies

A lot of services have dependencies, e.g. NATS, the reva gateway, or the S3 connection. We do not indicate an unhealthy service when one of its dependencies is not working. Our understanding was that if we did, the whole service mesh could show as unhealthy even though only one dependency needs to be fixed / restarted.

This case with the NATS connection

In this case, the oCIS service is healthy and can work as soon as the NATS service becomes healthy again.

kobergj commented 6 months ago

I'd really like to have a way to see the health of a oCIS deployment WITHOUT looking at the logs. This is not possible with the current approach.

@wkloucek I don't understand. Isn't this story the first step on this path? Or would you like ocis dying instead of reconnecting, as @micbar suggested?

wkloucek commented 6 months ago

Our understanding was, if we do that, the whole service mesh could show as not healthy but only one dependency needs to be fixed / restarted.

But even if failing healthchecks are considered bad in this case, we still should reflect unmet dependencies in the readiness status.

Or would you like ocis dying instead reconnecting

This is not contradictory for me. oCIS services can do reconnects and at the same time report themselves as unhealthy or not ready. A supervisor like Kubernetes would let this service run for some time and only restart it after x failing healthchecks. (Basically this is what oCIS does right now: reconnect x times and then exit with status code 1.)
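A minimal sketch of this combination, assuming the reconnect callbacks from nats.go are wired to a flag read by the health/readiness endpoint (names, port, and endpoint path are illustrative, not the ocis debug server):

package main

import (
	"log"
	"net/http"
	"sync/atomic"

	"github.com/nats-io/nats.go"
)

// registryHealthy is flipped by the NATS connection callbacks and read by the
// readiness endpoint. The wiring is illustrative only.
var registryHealthy atomic.Bool

func main() {
	nc, err := nats.Connect("nats://nats.ocis.svc:4222",
		nats.MaxReconnects(-1), // keep reconnecting ...
		nats.DisconnectErrHandler(func(_ *nats.Conn, err error) {
			registryHealthy.Store(false) // ... but report not ready meanwhile
		}),
		nats.ReconnectHandler(func(_ *nats.Conn) {
			registryHealthy.Store(true)
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	registryHealthy.Store(true)

	// The probe reflects the registry connection; a supervisor like Kubernetes
	// only restarts (or stops routing to) the pod after x consecutive failures.
	http.HandleFunc("/readyz", func(w http.ResponseWriter, _ *http.Request) {
		if registryHealthy.Load() {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable)
	})
	log.Fatal(http.ListenAndServe(":9205", nil))
}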

kobergj commented 6 months ago

Ok I see. So what would be the ideal behaviour?

For the single binary it is convenient that ocis exits when reconnect fails.

micbar commented 6 months ago

For the single binary it is convenient that ocis exits when reconnect fails.

We can make this work for the single process only.

wkloucek commented 6 months ago

For Kubernetes I think it would be desirable to:

If you make the "our problem" vs. "someone else's problem" distinction, you also avoid this:

We do not indicate an unhealthy service when one of its dependencies is not working. Our understanding was, if we do that, the whole service mesh could show as not healthy but only one dependency needs to be fixed / restarted.

As a drastic example: we forgot to deploy NATS but configured it to be used as store, cache, and service registry. All oCIS services would be healthy but not ready.

micbar commented 2 months ago

Seems not to be fixed.

wkloucek commented 2 months ago

@d7oc shared this log:

proxy-5b9c485cf7-vk8rs 2024-09-02T07:20:05Z ERR Failed to list all roles error="{\"id\":\"ocis-settings\",\"code\":404,\"detail\":\"internal error: list container:Failed to store data in bucket 'xxx': nats: no response from stream\",\"status\":\"Not Found\"}" line=github.com/owncloud/ocis/v2/services/proxy/pkg/userroles/oidcroles.go:211 service=proxy
proxy-5b9c485cf7-vk8rs 2024-09-02T07:20:05Z ERR Error mapping role names to role ids error="{\"id\":\"ocis-settings\",\"code\":404,\"detail\":\"internal error: list container:Failed to store data in bucket 'xxx': nats: no response from stream\",\"status\":\"Not Found\"}" line=github.com/owncloud/ocis/v2/services/proxy/pkg/userroles/oidcroles.go:78 request-id=proxy-5b9c485cf7-vk8rs/gXxOX3jQZQ-021519 service=proxy userid=xxx
proxy-5b9c485cf7-vk8rs 2024-09-02T07:20:05Z ERR Could not get user roles error="{\"id\":\"ocis-settings\",\"code\":404,\"detail\":\"internal error: list container:Failed to store data in bucket 'xxx': nats: no response from stream\",\"status\":\"Not Found\"}" line=github.com/owncloud/ocis/v2/services/proxy/pkg/middleware/account_resolver.go:170 service=proxy

Restarting all pods multiple times fixed it in the end.

Please note that this looks like a cache implementation and not the service registry

micbar commented 2 months ago

@wkloucek @d7oc @butonic Seems like it could not find the NATS service. Failed to store data in bucket: not found

d7oc commented 2 months ago

The NATS service was still there and I didn't find evidence to believe it was broken. In the end I just restarted it anyway alongside the other pods.

wkloucek commented 2 months ago

I'd propose:

So far I'd be keen to close this issue, since we recycled an issue that is similar but not the same.

d7oc commented 2 months ago

I would agree here, as I also don't see how we can work on this issue without further traces. As long as there is nothing directly known or visible in the code that might have caused this, I would also vote to close this issue and create a new one on its next appearance.

butonic commented 2 months ago

If restarting pods fixes it, I assume we are suffering from the completely broken nats-js-kv implementation in stable5. We need to backport these: https://github.com/owncloud/ocis/issues/8589#issuecomment-2271003646

jvillafanez commented 1 month ago

Could https://github.com/owncloud/ocis/pull/9048 help? I assume that if the service registration fails, the "regular" server stops, so the PR would guarantee that the debug server stops at some point too, causing the healthcheck to fail. We can expect delays (re-registration attempts) or race conditions, but that should be acceptable because the healthcheck would eventually fail.

The PR needs an update, and it isn't fully finished because it needs changes in reva, but maybe it can help for some services.
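A hedged sketch of the coupling described above, assuming the "regular" server and the debug server (which serves /healthz) are tied together via a shared errgroup so that when one exits, the other is shut down and the probe starts failing (addresses and structure are illustrative, not the PR's actual code):

package main

import (
	"context"
	"errors"
	"log"
	"net/http"

	"golang.org/x/sync/errgroup"
)

// runServer runs an HTTP server until it fails or the shared context is cancelled.
func runServer(ctx context.Context, srv *http.Server) error {
	errCh := make(chan error, 1)
	go func() { errCh <- srv.ListenAndServe() }()
	select {
	case <-ctx.Done():
		return srv.Shutdown(context.Background())
	case err := <-errCh:
		return err
	}
}

func main() {
	g, ctx := errgroup.WithContext(context.Background())

	regular := &http.Server{Addr: ":9200"} // illustrative addresses
	debug := &http.Server{Addr: ":9205"}   // would serve /healthz

	// If the regular server exits (e.g. because service registration gave up),
	// the shared context is cancelled and the debug server shuts down too, so
	// the healthcheck fails instead of reporting a zombie pod as healthy.
	g.Go(func() error { return runServer(ctx, regular) })
	g.Go(func() error { return runServer(ctx, debug) })

	if err := g.Wait(); err != nil && !errors.Is(err, http.ErrServerClosed) {
		log.Fatal(err)
	}
}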

butonic commented 1 month ago

I found an interesting discussion on health and readiness checks with insights from Kubernetes devs and operators: https://www.reddit.com/r/kubernetes/comments/wayj42/what_should_readiness_liveness_probe_actually/

butonic commented 1 month ago

related: https://github.com/owncloud/ocis/issues/6774

butonic commented 1 month ago

Moved to blocked because @dragonchaser and @fschade will look into https://github.com/owncloud/ocis/issues/9821 first.

dragonchaser commented 4 weeks ago

It is not blocked, I am not involved with k6