huggingface / dataset-viewer

Backend that powers the dataset viewer on Hugging Face dataset pages through a public API.
https://huggingface.co/docs/dataset-viewer
Apache License 2.0

what happened to the pods? #388

Closed: severo closed this issue 2 years ago

severo commented 2 years ago
$ k get pods -w
...
datasets-server-prod-datasets-worker-776b774978-g7mpk   1/1     Evicted       0             73m
datasets-server-prod-datasets-worker-776b774978-cdb4b   0/1     Pending       0             1s
datasets-server-prod-datasets-worker-776b774978-cdb4b   0/1     Pending       0             1s
datasets-server-prod-datasets-worker-776b774978-cdb4b   0/1     OutOfmemory   0             1s
datasets-server-prod-datasets-worker-776b774978-7hw4j   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-7hw4j   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-7hw4j   0/1     OutOfmemory   0             0s
datasets-server-prod-datasets-worker-776b774978-qtmtd   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-qtmtd   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-qtmtd   0/1     OutOfmemory   0             0s
datasets-server-prod-datasets-worker-776b774978-54zr6   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-54zr6   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-54zr6   0/1     OutOfmemory   0             0s
datasets-server-prod-datasets-worker-776b774978-rxcb2   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-rxcb2   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-rxcb2   0/1     OutOfmemory   0             0s
datasets-server-prod-datasets-worker-776b774978-d8m42   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-d8m42   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-d8m42   0/1     OutOfmemory   0             0s
datasets-server-prod-datasets-worker-776b774978-xx7hv   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-xx7hv   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-xx7hv   0/1     OutOfmemory   0             1s
datasets-server-prod-datasets-worker-776b774978-x7xzb   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-x7xzb   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-x7xzb   0/1     OutOfmemory   0             0s
datasets-server-prod-datasets-worker-776b774978-m5dqs   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-m5dqs   0/1     Pending       0             0s
datasets-server-prod-datasets-worker-776b774978-m5dqs   0/1     Init:0/3      0             0s
datasets-server-prod-datasets-worker-776b774978-g7mpk   0/1     Error         0             73m
datasets-server-prod-datasets-worker-776b774978-m5dqs   0/1     Init:1/3      0             3s
datasets-server-prod-datasets-worker-776b774978-m5dqs   0/1     Init:2/3      0             4s

Worker logs from the right pane of the same terminal split:

DEBUG: 2022-06-16 18:42:46,966 - datasets_server.worker - try to process a split job
DEBUG: 2022-06-16 18:42:47,011 - datasets_server.worker - job assigned: 62ab6804a502851c834d7e43 for split 'test' from dataset 'luozhouyang/dureader' with config 'robust'
INFO: 2022-06-16 18:42:47,012 - datasets_server.worker - compute split 'test' from dataset 'luozhouyang/dureader' with config 'robust'
Downloading builder script: 100%|██████████| 8.67k/8.67k [00:00<00:00, 4.85MB/s]
Downloading metadata: 100%|██████████| 2.85k/2.85k [00:00<00:00, 1.43MB/s]
Downloading builder script: 100%|██████████| 8.67k/8.67k [00:00<00:00, 5.07MB/s]
Downloading metadata: 100%|██████████| 2.85k/2.85k [00:00<00:00, 1.18MB/s]
Downloading builder script: 100%|██████████| 8.67k/8.67k [00:00<00:00, 4.52MB/s]
Downloading metadata: 100%|██████████| 2.85k/2.85k [00:00<00:00, 1.76MB/s]
Downloading and preparing dataset dureader/robust (download: 19.57 MiB, generated: 57.84 MiB, post-processed: Unknown size, total: 77.41 MiB) to /cache/datasets/luozhouyang___dureader/robust/1.0.0/bdab4855e88c197f2297db78cfc86259fb874c2b977134bbe80d3af8616f33b1...
Downloading data:   1%|          | 163k/20.5M [01:45<3:40:25, 1.54kB/s]
DEBUG: 2022-06-16 18:44:44,235 - datasets_server.worker - job finished with error: 62ab6804a502851c834d7e43 for split 'test' from dataset 'luozhouyang/dureader' with config 'robust'
DEBUG: 2022-06-16 18:44:44,236 - datasets_server.worker - try to process a split job
DEBUG: 2022-06-16 18:44:44,281 - datasets_server.worker - job assigned: 62ab6804a502851c834d7e45 for split 'test' from dataset 'openclimatefix/nimrod-uk-1km' with config 'sample'
INFO: 2022-06-16 18:44:44,281 - datasets_server.worker - compute split 'test' from dataset 'openclimatefix/nimrod-uk-1km' with config 'sample'
Downloading builder script: 100%|██████████| 15.2k/15.2k [00:00<00:00, 6.04MB/s]
Downloading builder script: 100%|██████████| 15.2k/15.2k [00:00<00:00, 7.65MB/s]
2022-06-16 18:44:46.305062: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "NOT_FOUND: Could not locate the credentials file.". Retrieving token from GCE failed with "FAILED_PRECONDITION: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Could not resolve host: metadata".
2022-06-16 18:44:46.389820: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2022-06-16 18:44:46.389865: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2022-06-16 18:44:46.390005: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on t
severo commented 2 years ago

cc @XciD

severo commented 2 years ago
NAME                                                    READY   STATUS        RESTARTS      AGE
datasets-server-prod-admin-79798989fb-scmjw             1/1     Running       0             141m
datasets-server-prod-api-6f4477cc64-2tzn6               1/1     Running       0             141m
datasets-server-prod-api-6f4477cc64-6pjnq               1/1     Running       0             140m
datasets-server-prod-api-6f4477cc64-97gsc               1/1     Running       0             141m
datasets-server-prod-api-6f4477cc64-db6m8               1/1     Running       0             140m
datasets-server-prod-datasets-worker-776b774978-54zr6   0/1     OutOfmemory   0             23m
datasets-server-prod-datasets-worker-776b774978-7hw4j   0/1     OutOfmemory   0             23m
datasets-server-prod-datasets-worker-776b774978-cdb4b   0/1     OutOfmemory   0             23m
datasets-server-prod-datasets-worker-776b774978-cgtw2   1/1     Running       1 (20m ago)   97m
datasets-server-prod-datasets-worker-776b774978-cmth8   1/1     Running       0             97m
datasets-server-prod-datasets-worker-776b774978-d8m42   0/1     OutOfmemory   0             23m
datasets-server-prod-datasets-worker-776b774978-g7mpk   0/1     Error         0             97m
datasets-server-prod-datasets-worker-776b774978-m5dqs   1/1     Running       0             23m
datasets-server-prod-datasets-worker-776b774978-q29z6   1/1     Running       0             97m
datasets-server-prod-datasets-worker-776b774978-qtmtd   0/1     OutOfmemory   0             23m
datasets-server-prod-datasets-worker-776b774978-rxcb2   0/1     OutOfmemory   0             23m
datasets-server-prod-datasets-worker-776b774978-x7xzb   0/1     OutOfmemory   0             23m
datasets-server-prod-datasets-worker-776b774978-xx7hv   0/1     OutOfmemory   0             23m
severo commented 2 years ago
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
datasets-server-prod-admin             1/1     1            1           30d
datasets-server-prod-api               4/4     4            4           31d
datasets-server-prod-datasets-worker   4/4     4            4           31d
datasets-server-prod-reverse-proxy     2/2     2            2           31d
datasets-server-prod-splits-worker     56/56   56           56          31d
XciD commented 2 years ago

Some nodes reach a pressure condition (memory or disk). When this happens, Kubernetes will evict some pods to lower the pressure.
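
For context, the pressure condition and the resulting evictions can be checked directly with kubectl; a minimal sketch, assuming the standard MemoryPressure/DiskPressure node conditions and the Evicted event reason:

# List the pressure-related conditions reported by each node
k describe nodes | grep -E 'Name:|MemoryPressure|DiskPressure'
# Show the eviction events emitted by the kubelet while freeing resources
k get events --all-namespaces --field-selector reason=Evicted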

severo commented 2 years ago

OK, thanks. Is it normal that the pods marked as OutOfmemory (and Error) were still in the list? Is it so that we know they crashed, rather than hiding them silently? I had to terminate them using your magic command:

k get pod | grep OutOfmemory | cut -d ' ' -f 1 | xargs -I % kubectl delete pod/% --force
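
An equivalent cleanup, assuming the OutOfmemory and Error pods all end up in the Failed phase, can use a field selector instead of grep:

# Preview, then delete, the pods that have reached the Failed phase
k get pods --field-selector=status.phase==Failed
k delete pods --field-selector=status.phase==Failed
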
XciD commented 2 years ago

Yes, I think it's for you to know that you had an issue.

severo commented 2 years ago

OK, nice.

severo commented 2 years ago

By the way, about "Evicted":

When a node reaches its disk or memory limit, a flag is set on the Kubernetes node to indicate that it is under pressure. This flag also blocks new allocations on the node, and, following this, an eviction process is started to free some resources.
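
The flag is mirrored as a NoSchedule taint on the node, which is what blocks new allocation; a quick way to see it, assuming the standard node.kubernetes.io/memory-pressure and node.kubernetes.io/disk-pressure taint names:

# Print each node followed by the keys of its taints, if any
k get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'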