canonical / charm-microk8s

Charm that deploys MicroK8s
Apache License 2.0

Failed to install package linux-modules-extra-6.8.0-31-generic, charm may misbehave #135

Open mcfly722 opened 1 month ago

mcfly722 commented 1 month ago

Summary

The microk8s charm does not deploy at all (tried both the edge and stable channels).

The following errors appear in the logs:

unit-microk8s-0: 12:44:02 WARNING unit.microk8s/0.install E: Unable to locate package linux-modules-extra-6.8.0-31-generic
unit-microk8s-0: 12:44:02 WARNING unit.microk8s/0.install E: Couldn't find any package by glob 'linux-modules-extra-6.8.0-31-generic'
unit-microk8s-0: 12:44:02 WARNING unit.microk8s/0.install E: Couldn't find any package by regex 'linux-modules-extra-6.8.0-31-generic'
unit-microk8s-0: 12:44:02 WARNING unit.microk8s/0.juju-log failed to install package linux-modules-extra-6.8.0-31-generic, charm may misbehave
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-microk8s-0/charm/src/util.py", line 38, in install_required_packages
    run(["apt-get", "install", "--yes", package])
  File "/var/lib/juju/agents/unit-microk8s-0/charm/src/util.py", line 19, in run
    return subprocess.run(*args, **kwargs)
  File "/usr/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['apt-get', 'install', '--yes', 'linux-modules-extra-6.8.0-31-generic']' returned non-zero exit status 100.
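
For context, here is a minimal sketch of what the failing install path in src/util.py appears to do, reconstructed purely from the traceback above. The `run` and `install_required_packages` names come from the traceback; the package list, the error handling, and the kernel-name derivation are assumptions.

```python
# Sketch reconstructed from the traceback above; not the charm's actual source.
import logging
import subprocess

LOG = logging.getLogger(__name__)


def run(*args, **kwargs):
    # Thin wrapper around subprocess.run (util.py line 19 in the traceback);
    # check=True is assumed here so a non-zero apt-get exit raises.
    kwargs.setdefault("check", True)
    return subprocess.run(*args, **kwargs)


def install_required_packages():
    # Assumption: the package name is derived from the running kernel. Inside
    # an LXD container `uname -r` reports the *host* kernel (6.8.0-31-generic
    # here), which the container's jammy archive cannot resolve, so apt-get
    # exits with status 100.
    packages = ["linux-modules-extra-6.8.0-31-generic"]
    for package in packages:
        try:
            run(["apt-get", "install", "--yes", package])
        except subprocess.CalledProcessError:
            # Matches the warning plus traceback seen in the unit log.
            LOG.warning(
                "failed to install package %s, charm may misbehave",
                package,
                exc_info=True,
            )
```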

What Should Happen Instead?

The charm should deploy successfully and reach a working state.

Reproduction Steps

Create a microk8s-model on the LXD cloud, then:

juju deploy --model microk8s-model microk8s --channel 1.28/stable --constraints 'cores=2 mem=4G root-disk=40G'

Environment

MicroK8s charm track: latest/edge, 1.28/stable
Juju version: 3.4.5
Cloud: LXD

Additional info, logs

Model         Controller            Cloud/Region          Version  SLA          Timestamp
microk8s-mo~  controller-openstack  lxd-microk8s/default  3.4.5    unsupported  13:00:49Z

App       Version  Status       Scale  Charm     Channel      Rev  Exposed  Message
microk8s           maintenance      1  microk8s  latest/edge  232  no       waiting for node

Unit         Workload     Agent      Machine  Public address  Ports      Message
microk8s/0*  maintenance  executing  0        10.88.56.23     16443/tcp  (leader-elected) waiting for node

Machine  State    Address      Inst id        Base          AZ  Message
0        started  10.88.56.23  juju-807c90-0  ubuntu@22.04      Running

After some time, the following errors occur in a loop:

unit-microk8s-0: 12:54:31 WARNING unit.microk8s/0.juju-log could not retrieve status of node juju-807c90-0: Command '['/snap/microk8s/current/kubectl', '--kubeconfig=/var/snap/microk8s/current/credentials/kubelet.config', 'get', 'node', 'juju-807c90-0', '-o', "jsonpath={.status.conditions[?(@.type=='Ready')]}"]' returned non-zero exit status 1.
unit-microk8s-0: 12:54:33 DEBUG unit.microk8s/0.juju-log Execute: /snap/microk8s/current/kubectl --kubeconfig=/var/snap/microk8s/current/credentials/kubelet.config get node juju-807c90-0 -o 'jsonpath={.status.conditions[?(@.type=='"'"'Ready'"'"')]}' (args=(['/snap/microk8s/current/kubectl', '--kubeconfig=/var/snap/microk8s/current/credentials/kubelet.config', 'get', 'node', 'juju-807c90-0', '-o', "jsonpath={.status.conditions[?(@.type=='Ready')]}"],), kwargs={'capture_output': True, 'check': True})
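
The looped messages correspond to a node-readiness poll along these lines. A sketch: the kubectl command, kubeconfig path, and subprocess kwargs are copied from the log line above, while the function name and the parsing of the result are assumptions.

```python
# Sketch of the readiness poll implied by the log lines above; the kubectl
# invocation mirrors the logged command, everything else is assumed.
import json
import subprocess


def node_is_ready(node_name: str) -> bool:
    cmd = [
        "/snap/microk8s/current/kubectl",
        "--kubeconfig=/var/snap/microk8s/current/credentials/kubelet.config",
        "get", "node", node_name,
        "-o", "jsonpath={.status.conditions[?(@.type=='Ready')]}",
    ]
    try:
        result = subprocess.run(cmd, capture_output=True, check=True)
    except subprocess.CalledProcessError:
        # kubectl exits non-zero while the API server is unreachable or the
        # node object does not exist yet, matching the repeated warnings.
        return False
    # Assumption: the jsonpath output is the JSON-encoded Ready condition.
    condition = json.loads(result.stdout or "{}")
    return condition.get("status") == "True"
```

As long as this check keeps failing, the unit stays in `maintenance` with the "waiting for node" message shown in the status output above.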

Can you suggest a fix?

Could the package be excluded from installation, or replaced with another one?
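
As one possible direction (a hypothetical sketch, not the charm's current behaviour), the install step could probe apt for the kernel-specific package and skip it gracefully when the configured sources do not carry it, e.g. inside an LXD container whose reported kernel is the host's:

```python
# Hypothetical mitigation sketch: skip kernel-extra packages that the
# container's apt sources cannot resolve instead of failing the install.
import subprocess


def package_available(package: str) -> bool:
    # `apt-cache policy <pkg>` prints a candidate version only when the
    # package exists in the configured sources.
    out = subprocess.run(
        ["apt-cache", "policy", package], capture_output=True, text=True
    ).stdout
    return "Candidate:" in out and "Candidate: (none)" not in out


def install_if_available(package: str) -> None:
    if not package_available(package):
        print(f"skipping {package}: not available in the configured apt sources")
        return
    subprocess.run(["apt-get", "install", "--yes", package], check=True)
```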

Are you interested in contributing with a fix?

mcfly722 commented 1 month ago

Even after deploying and integrating a worker node, there is no success:

juju status --relations

Model         Controller            Cloud/Region          Version  SLA          Timestamp
microk8s-mo~  controller-openstack  lxd-microk8s/default  3.4.5    unsupported  11:36:28Z

App              Version  Status       Scale  Charm     Channel      Rev  Exposed  Message
microk8s                  maintenance      1  microk8s  latest/edge  232  no       waiting for node
microk8s-worker           waiting          1  microk8s  latest/edge  232  no       waiting for control plane

Unit                Workload     Agent      Machine  Public address  Ports      Message
microk8s-worker/0*  waiting      idle       1        10.88.56.26                waiting for control plane
microk8s/0*         maintenance  executing  0        10.88.56.23     16443/tcp  (leader-elected) waiting for node

Machine  State    Address      Inst id        Base          AZ  Message
0        started  10.88.56.23  juju-f74e97-0  ubuntu@22.04      Running
1        started  10.88.56.26  juju-f74e97-1  ubuntu@22.04      Running

Integration provider  Requirer                       Interface      Type     Message
microk8s-worker:peer  microk8s-worker:peer           microk8s-peer  peer
microk8s:peer         microk8s:peer                  microk8s-peer  peer
microk8s:workers      microk8s-worker:control-plane  microk8s-info  regular

In the logs I see the following events looping:

unit-microk8s-0: 11:29:52 DEBUG unit.microk8s/0.juju-log Execute: /snap/microk8s/current/kubectl --kubeconfig=/var/snap/microk8s/current/credentials/kubelet.config get node juju-f74e97-0 -o 'jsonpath={.status.conditions[?(@.type=='"'"'Ready'"'"')]}' (args=(['/snap/microk8s/current/kubectl', '--kubeconfig=/var/snap/microk8s/current/credentials/kubelet.config', 'get', 'node', 'juju-f74e97-0', '-o', "jsonpath={.status.conditions[?(@.type=='Ready')]}"],), kwargs={'capture_output': True, 'check': True})
unit-microk8s-0: 11:29:52 WARNING unit.microk8s/0.juju-log could not retrieve status of node juju-f74e97-0: Command '['/snap/microk8s/current/kubectl', '--kubeconfig=/var/snap/microk8s/current/credentials/kubelet.config', 'get', 'node', 'juju-f74e97-0', '-o', "jsonpath={.status.conditions[?(@.type=='Ready')]}"]' returned non-zero exit status 1.