kubesphere / kubekey

Install Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, plus related cloud-native add-ons. Supports all-in-one, multi-node, and HA deployments 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0
2.33k stars 547 forks

custom script install crio Bash is empty #1614

Closed willzhang closed 1 year ago

willzhang commented 1 year ago

Which version of KubeKey has the issue?

v3.0.1

What is your OS environment?

ubuntu 22.04

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.72.51, internalAddress: 192.168.72.51, user: root, password: "123456"}
  roleGroups:
    etcd:
    - node1
    control-plane: 
    - node1
    worker:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.25.3
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  system:
    preInstall:
      - name: install crio
        bash: /bin/bash -x init-crio.sh
        materials:
          - ./init-crio.sh
          - crio.tar.gz
  registry:
    privateRegistry: "192.168.72.15"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
    auths:
      "192.168.72.15":
        username: "admin"
        password: "Harbor12345"
        skipTLSVerify: true
        plainHTTP: true
  addons: []

A clear and concise description of what happened.

I want to install CRI-O using preInstall:

kk create cluster -f config-sample.yaml -a kubekey-artifact.tar.gz --with-packages -y

and I put the files in the current directory:

root@ubuntu:~/k8s_v1.25.3# ll
total 915028
drwxr-xr-x 1 root root       158 Nov 18 00:34 ./
drwx------ 1 root root       196 Nov 18 00:42 ../
-rw-r--r-- 1 root root      1290 Nov 18 00:34 config-sample.yaml
-rw-r--r-- 1 root root  98724202 Nov 18 00:18 cri-o.tar.gz
-rwxr-xr-x 1 root root       868 Nov 18 00:21 init-crio.sh*
drwxr-xr-x 1 root root       224 Nov 18 00:26 kubekey/
-rw-r--r-- 1 root root 758599552 Nov 17 22:23 kubekey-artifact.tar.gz

init-crio.sh

root@ubuntu:~/k8s_v1.25.3# cat init-crio.sh 
#!/bin/bash
registry_username=admin
registry_password=Harbor12345
registry_domain=192.168.72.15
registry_port=80
TARBALL=cri-o.tar.gz
TMPDIR="$(mktemp -d)"
trap 'rm -rf -- "$TMPDIR"' EXIT

tar xfz "./$TARBALL" --strip-components=1 -C "$TMPDIR"
pushd "$TMPDIR"
echo Installing CRI-O
./install
popd
# Using another network plugin (calico), so remove CRI-O's default bridge config.
rm -rf /etc/cni/net.d/10-crio-bridge.conf

base64pwd=$(echo -n "${registry_username}:${registry_password}" | base64)
logger "username: $registry_username, password: $registry_password, base64pwd: $base64pwd"
cat > /etc/crio/config.json << eof
{
        "auths": {
                "$registry_domain:$registry_port": {
                        "auth": "$base64pwd"
                }
        }
}
eof

systemctl enable --now crio.service
# check_status is not defined in this script; verify the unit state directly instead
systemctl is-active --quiet crio
logger "init crio success"
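
For reference, the `auth` value that the generated `/etc/crio/config.json` carries is just the base64 encoding of `username:password`. A quick way to verify it matches what the script will write (values taken from the script above):

```shell
# Encode the registry credentials the same way init-crio.sh does.
# echo -n matters: a trailing newline would change the encoding.
registry_username=admin
registry_password=Harbor12345
echo -n "${registry_username}:${registry_password}" | base64
# → YWRtaW46SGFyYm9yMTIzNDU=
```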

Relevant log output

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
00:42:35 CST success: [node1]
00:42:35 CST [ConfigureOSModule] configure the ntp server for each node
00:42:35 CST skipped: [node1]
00:42:35 CST [CustomScriptsModule Phase:PreInstall] Phase:PreInstall(0/1) script:install crio
00:42:35 CST message: [node1]
custom script install crio Bash is empty
00:42:35 CST failed: [node1]
error: Pipeline[CreateClusterPipeline] execute failed: Module[CustomScriptsModule Phase:PreInstall] exec failed: 
failed: [node1] [Phase:PreInstall(0/1) script:install crio] exec failed after 1 retires: custom script install crio Bash is empty
root@ubuntu:~/k8s_v1.25.3#

Additional information

No response

24sama commented 1 year ago

Maybe the quotes are missing. Please try this:

system:
    preInstall:
      - name: install crio
        bash: "/bin/bash -x init-crio.sh"
        materials:
          - ./init-crio.sh
          - crio.tar.gz
willzhang commented 1 year ago

I have tried that, but the result is the same:

root@node1:~/k8s_v1.25.3# cat config-sample.yaml 

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.72.51, internalAddress: 192.168.72.51, user: root, password: "123456"}
  roleGroups:
    etcd:
    - node1
    control-plane: 
    - node1
    worker:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.25.3
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  system:
    preInstall:
      - name: install crio
        bash: "/bin/bash -x init-crio.sh"
        materials:
          - ./init-crio.sh
          - cri-o.tar.gz
  registry:
    privateRegistry: "192.168.72.15"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
    auths:
      "192.168.72.15":
        username: "admin"
        password: "Harbor12345"
        skipTLSVerify: true
        plainHTTP: true
  addons: []

Here are the debug logs; the detailed preInstall logs are not visible:

root@node1:~/k8s_v1.25.3# kk create cluster -f config-sample.yaml -a kubekey-artifact.tar.gz --with-packages -y --debug

 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

09:58:17 CST [GreetingsModule] Greetings
09:58:20 CST command: [node1]
sudo -E /bin/bash -c "echo 'Greetings, KubeKey!'"
09:58:20 CST stdout: [node1]
Greetings, KubeKey!
09:58:20 CST message: [node1]
Greetings, KubeKey!
09:58:20 CST success: [node1]
09:58:20 CST [NodePreCheckModule] A pre-check on nodes
09:58:20 CST command: [node1]
which sudo
09:58:20 CST stdout: [node1]
/usr/bin/sudo
09:58:20 CST command: [node1]
sudo -E /bin/bash -c "which curl"
09:58:20 CST stdout: [node1]
/usr/bin/curl
09:58:21 CST command: [node1]
sudo -E /bin/bash -c "which openssl"
09:58:21 CST stdout: [node1]
/usr/bin/openssl
09:58:21 CST command: [node1]
sudo -E /bin/bash -c "which ebtables"
09:58:21 CST stdout: [node1]
/usr/sbin/ebtables
09:58:21 CST command: [node1]
sudo -E /bin/bash -c "which socat"
09:58:21 CST stdout: [node1]
/usr/bin/socat
09:58:21 CST command: [node1]
sudo -E /bin/bash -c "which ipset"
09:58:21 CST stdout: [node1]
/usr/sbin/ipset
09:58:21 CST command: [node1]
sudo -E /bin/bash -c "which ipvsadm"
09:58:21 CST stdout: [node1]
/usr/sbin/ipvsadm
09:58:21 CST command: [node1]
sudo -E /bin/bash -c "which conntrack"
09:58:21 CST stdout: [node1]
/usr/sbin/conntrack
09:58:21 CST command: [node1]
sudo -E /bin/bash -c "which chronyd"
09:58:21 CST stdout: [node1]
/usr/sbin/chronyd
09:58:21 CST command: [node1]
sudo -E /bin/bash -c "docker version --format '{{.Server.Version}}'"
09:58:21 CST stdout: [node1]
/bin/bash: line 1: docker: command not found
09:58:21 CST stderr: [node1]
Failed to exec command: sudo -E /bin/bash -c "docker version --format '{{.Server.Version}}'" 
/bin/bash: line 1: docker: command not found: Process exited with status 127
09:58:21 CST command: [node1]
sudo -E /bin/bash -c "containerd --version | cut -d ' ' -f 3"
09:58:21 CST stdout: [node1]
/bin/bash: line 1: containerd: command not found
09:58:21 CST command: [node1]
sudo -E /bin/bash -c "which showmount"
09:58:21 CST stderr: [node1]
Failed to exec command: sudo -E /bin/bash -c "which showmount" 
: Process exited with status 1
09:58:21 CST command: [node1]
sudo -E /bin/bash -c "which rbd"
09:58:21 CST stderr: [node1]
Failed to exec command: sudo -E /bin/bash -c "which rbd" 
: Process exited with status 1
09:58:21 CST command: [node1]
sudo -E /bin/bash -c "which glusterfs"
09:58:21 CST stderr: [node1]
Failed to exec command: sudo -E /bin/bash -c "which glusterfs" 
: Process exited with status 1
09:58:21 CST command: [node1]
date +"%Z %H:%M:%S"
09:58:21 CST stdout: [node1]
CST 09:58:21
09:58:21 CST success: [node1]
09:58:21 CST [ConfirmModule] Display confirmation form
+-------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name  | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+-------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| node1 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        |            |            |             |                  | CST 09:58:21 |
+-------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

09:58:21 CST success: [LocalHost]
09:58:21 CST [UnArchiveArtifactModule] Check the KubeKey artifact md5 value
09:58:28 CST success: [LocalHost]
09:58:28 CST [UnArchiveArtifactModule] UnArchive the KubeKey artifact
09:58:28 CST skipped: [LocalHost]
09:58:28 CST [UnArchiveArtifactModule] Create the KubeKey artifact Md5 file
09:58:28 CST skipped: [LocalHost]
09:58:28 CST [RepositoryModule] Get OS release
09:58:28 CST command: [node1]
sudo -E /bin/bash -c "cat /etc/os-release"
09:58:28 CST stdout: [node1]
PRETTY_NAME="Ubuntu 22.04 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04 (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
09:58:28 CST success: [node1]
09:58:28 CST [RepositoryModule] Sync repository iso file to all nodes
09:58:28 CST command: [node1]
sudo -E /bin/bash -c "if [ -d /tmp/kubekey ]; then rm -rf /tmp/kubekey ;fi && mkdir -m 777 -p /tmp/kubekey"
09:58:30 CST scp local file /root/k8s_v1.25.3/kubekey/repository/amd64/ubuntu/22.04/ubuntu-22.04-amd64.iso to remote /tmp/kubekey/ubuntu-22.04-amd64.iso success
09:58:30 CST success: [node1]
09:58:30 CST [RepositoryModule] Mount iso file
09:58:30 CST command: [node1]
sudo mount -t iso9660 -o loop /tmp/kubekey/ubuntu-22.04-amd64.iso /tmp/kubekey/iso
09:58:30 CST stdout: [node1]
mount: /tmp/kubekey/iso: WARNING: source write-protected, mounted read-only.
09:58:30 CST success: [node1]
09:58:30 CST [RepositoryModule] New repository client
09:58:30 CST success: [node1]
09:58:30 CST [RepositoryModule] Backup original repository
09:58:30 CST command: [node1]
sudo -E /bin/bash -c "mv /etc/apt/sources.list /etc/apt/sources.list.kubekey.bak"
09:58:30 CST command: [node1]
sudo -E /bin/bash -c "mv /etc/apt/sources.list.d /etc/apt/sources.list.d.kubekey.bak"
09:58:30 CST command: [node1]
sudo -E /bin/bash -c "mkdir -p /etc/apt/sources.list.d"
09:58:30 CST success: [node1]
09:58:30 CST [RepositoryModule] Add local repository
09:58:30 CST command: [node1]
sudo -E /bin/bash -c "rm -rf /etc/apt/sources.list.d/*"
09:58:31 CST command: [node1]
sudo -E /bin/bash -c "echo 'deb [trusted=yes]  file:///tmp/kubekey/iso   /' > /etc/apt/sources.list.d/kubekey.list"
09:58:32 CST command: [node1]
sudo apt-get update
09:58:32 CST stdout: [node1]
Get:1 file:/tmp/kubekey/iso  InRelease
Ign:1 file:/tmp/kubekey/iso  InRelease
Get:2 file:/tmp/kubekey/iso  Release
Ign:2 file:/tmp/kubekey/iso  Release
Get:3 file:/tmp/kubekey/iso  Packages
Ign:3 file:/tmp/kubekey/iso  Packages
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Get:3 file:/tmp/kubekey/iso  Packages
Ign:3 file:/tmp/kubekey/iso  Packages
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Get:3 file:/tmp/kubekey/iso  Packages
Ign:3 file:/tmp/kubekey/iso  Packages
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Get:3 file:/tmp/kubekey/iso  Packages [53.8 kB]
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Reading package lists... Done
09:58:32 CST stdout: [node1]
Get:1 file:/tmp/kubekey/iso  InRelease
Ign:1 file:/tmp/kubekey/iso  InRelease
Get:2 file:/tmp/kubekey/iso  Release
Ign:2 file:/tmp/kubekey/iso  Release
Get:3 file:/tmp/kubekey/iso  Packages
Ign:3 file:/tmp/kubekey/iso  Packages
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Get:3 file:/tmp/kubekey/iso  Packages
Ign:3 file:/tmp/kubekey/iso  Packages
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Get:3 file:/tmp/kubekey/iso  Packages
Ign:3 file:/tmp/kubekey/iso  Packages
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Get:3 file:/tmp/kubekey/iso  Packages [53.8 kB]
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Get:4 file:/tmp/kubekey/iso  Translation-en_US
Ign:4 file:/tmp/kubekey/iso  Translation-en_US
Get:5 file:/tmp/kubekey/iso  Translation-en
Ign:5 file:/tmp/kubekey/iso  Translation-en
Reading package lists... Done
09:58:32 CST success: [node1]
09:58:32 CST [RepositoryModule] Install packages
09:58:34 CST command: [node1]
sudo -E /bin/bash -c "apt install -y socat conntrack ipset ebtables chrony ipvsadm"
09:58:34 CST stdout: [node1]
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
chrony is already the newest version (4.2-2ubuntu2).
conntrack is already the newest version (1:1.4.6-2build2).
ebtables is already the newest version (2.0.11-4build2).
ipset is already the newest version (7.15-1build1).
ipvsadm is already the newest version (1:1.31-1build2).
socat is already the newest version (1.7.4.1-3ubuntu4).
0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
09:58:34 CST stdout: [node1]
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
chrony is already the newest version (4.2-2ubuntu2).
conntrack is already the newest version (1:1.4.6-2build2).
ebtables is already the newest version (2.0.11-4build2).
ipset is already the newest version (7.15-1build1).
ipvsadm is already the newest version (1:1.31-1build2).
socat is already the newest version (1.7.4.1-3ubuntu4).
0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
09:58:34 CST success: [node1]
09:58:34 CST [RepositoryModule] Reset repository to the original repository
09:58:34 CST command: [node1]
sudo -E /bin/bash -c "rm -rf /etc/apt/sources.list.d"
09:58:34 CST command: [node1]
sudo -E /bin/bash -c "mv /etc/apt/sources.list.kubekey.bak /etc/apt/sources.list"
09:58:34 CST command: [node1]
sudo -E /bin/bash -c "mv /etc/apt/sources.list.d.kubekey.bak /etc/apt/sources.list.d"
09:58:34 CST success: [node1]
09:58:34 CST [RepositoryModule] Umount ISO file
09:58:34 CST command: [node1]
sudo -E /bin/bash -c "umount /tmp/kubekey/iso"
09:58:34 CST success: [node1]
09:58:34 CST [NodeBinariesModule] Download installation binaries
09:58:34 CST message: [localhost]
downloading amd64 kubeadm v1.25.3 ...
09:58:35 CST message: [localhost]
kubeadm is existed
09:58:35 CST message: [localhost]
downloading amd64 kubelet v1.25.3 ...
09:58:36 CST message: [localhost]
kubelet is existed
09:58:36 CST message: [localhost]
downloading amd64 kubectl v1.25.3 ...
09:58:37 CST message: [localhost]
kubectl is existed
09:58:37 CST message: [localhost]
downloading amd64 helm v3.9.0 ...
09:58:37 CST message: [localhost]
helm is existed
09:58:37 CST message: [localhost]
downloading amd64 kubecni v0.9.1 ...
09:58:38 CST message: [localhost]
kubecni is existed
09:58:38 CST message: [localhost]
downloading amd64 crictl v1.24.0 ...
09:58:38 CST message: [localhost]
crictl is existed
09:58:38 CST message: [localhost]
downloading amd64 etcd v3.4.13 ...
09:58:39 CST message: [localhost]
etcd is existed
09:58:39 CST message: [localhost]
downloading amd64 containerd 1.6.4 ...
09:58:39 CST message: [localhost]
containerd is existed
09:58:39 CST message: [localhost]
downloading amd64 runc v1.1.1 ...
09:58:39 CST message: [localhost]
runc is existed
09:58:39 CST success: [LocalHost]
09:58:39 CST [ConfigureOSModule] Get OS release
09:58:39 CST command: [node1]
sudo -E /bin/bash -c "cat /etc/os-release"
09:58:39 CST stdout: [node1]
PRETTY_NAME="Ubuntu 22.04 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04 (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
09:58:39 CST success: [node1]
09:58:39 CST [ConfigureOSModule] Prepare to init OS
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "useradd -M -c 'Kubernetes user' -s /sbin/nologin -r kube || :"
09:58:40 CST stdout: [node1]
useradd: user 'kube' already exists
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "useradd -M -c 'Etcd user' -s /sbin/nologin -r etcd || :"
09:58:40 CST stdout: [node1]
useradd: user 'etcd' already exists
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "mkdir -p /usr/local/bin"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "chown kube -R /usr/local/bin"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "mkdir -p /etc/kubernetes"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "chown kube -R /etc/kubernetes"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "mkdir -p /etc/kubernetes/pki"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "chown kube -R /etc/kubernetes/pki"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "mkdir -p /etc/kubernetes/manifests"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "chown kube -R /etc/kubernetes/manifests"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "mkdir -p /usr/local/bin/kube-scripts"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "chown kube -R /usr/local/bin/kube-scripts"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "chown kube -R /usr/libexec/kubernetes"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "mkdir -p /etc/cni/net.d && chown kube -R /etc/cni"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "mkdir -p /opt/cni/bin && chown kube -R /opt/cni"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "mkdir -p /var/lib/calico && chown kube -R /var/lib/calico"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "mkdir -p /var/lib/etcd && chown etcd -R /var/lib/etcd"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "if [ -d /tmp/kubekey ]; then rm -rf /tmp/kubekey ;fi && mkdir -m 777 -p /tmp/kubekey"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "hostnamectl set-hostname node1 && sed -i '/^127.0.1.1/s/.*/127.0.1.1      node1/g' /etc/hosts"
09:58:40 CST success: [node1]
09:58:40 CST [ConfigureOSModule] Generate init os script
09:58:40 CST scp local file /root/k8s_v1.25.3/kubekey/node1/initOS.sh to remote /tmp/kubekey/usr/local/bin/kube-scripts/initOS.sh success
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "mv -f /tmp/kubekey/usr/local/bin/kube-scripts/initOS.sh /usr/local/bin/kube-scripts/initOS.sh"
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "rm -rf /tmp/kubekey/*"
09:58:40 CST success: [node1]
09:58:40 CST [ConfigureOSModule] Exec init os script
09:58:40 CST command: [node1]
sudo -E /bin/bash -c "chmod +x /usr/local/bin/kube-scripts/initOS.sh"
09:58:44 CST command: [node1]
sudo -E /bin/bash -c "/usr/local/bin/kube-scripts/initOS.sh"
09:58:44 CST stdout: [node1]
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
09:58:44 CST stdout: [node1]
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
09:58:44 CST success: [node1]
09:58:44 CST [ConfigureOSModule] configure the ntp server for each node
09:58:44 CST skipped: [node1]
09:58:44 CST [CustomScriptsModule Phase:PreInstall] Phase:PreInstall(0/1) script:install crio
09:58:44 CST message: [node1]
custom script install crio Bash is empty
09:58:44 CST failed: [node1]
error: Pipeline[CreateClusterPipeline] execute failed: Module[CustomScriptsModule Phase:PreInstall] exec failed: 
failed: [node1] [Phase:PreInstall(0/1) script:install crio] exec failed after 1 retires: custom script install crio Bash is empty

Script locations:

root@node1:~/k8s_v1.25.3# pwd
/root/k8s_v1.25.3
root@node1:~/k8s_v1.25.3# ll
total 915036
drwxr-xr-x 1 root root       234 Nov 18 09:57 ./
drwx------ 1 root root       196 Nov 18 09:58 ../
-rw-r--r-- 1 root root       235 Nov 18 09:53 99-crio.conf
-rw-r--r-- 1 root root      1284 Nov 18 09:52 config-sample.yaml
-rw-r--r-- 1 root root  98724202 Nov 18 09:53 cri-o.tar.gz
-rwxr-xr-x 1 root root       868 Nov 18 09:53 init-crio.sh*
drwxr-xr-x 1 root root       144 Nov 18 09:55 kubekey/
-rw-r--r-- 1 root root 758599552 Nov 17 22:23 kubekey-artifact.tar.gz
root@node1:~/k8s_v1.25.3# 

Where will kk copy the preInstall files to?

root@ubuntu:~/k8s_v1.25.3# ll kubekey
total 4
drwxr-xr-x 1 root root 144 Nov 18 10:08 ./
drwxr-xr-x 1 root root 158 Nov 18 10:08 ../
-rw-r--r-- 1 root root  32 Nov 18 10:08 artifact.md5
drwxr-xr-x 1 root root  12 Nov 18 10:08 cni/
drwxr-xr-x 1 root root  10 Nov 18 10:08 containerd/
drwxr-xr-x 1 root root  14 Nov 18 10:08 crictl/
drwxr-xr-x 1 root root  14 Nov 18 10:08 etcd/
drwxr-xr-x 1 root root  12 Nov 18 10:08 helm/
drwxr-xr-x 1 root root  50 Nov 18 10:08 images/
drwxr-xr-x 1 root root  14 Nov 18 10:08 kube/
drwxr-xr-x 1 root root  62 Nov 18 10:08 logs/
drwxr-xr-x 1 root root  18 Nov 18 10:08 node1/
drwxr-xr-x 1 root root  10 Nov 18 10:08 repository/
drwxr-xr-x 1 root root  12 Nov 18 10:08 runc/
root@ubuntu:~/k8s_v1.25.3# ll kubekey/node1/
total 8
drwxr-xr-x 1 root root   18 Nov 18 10:08 ./
drwxr-xr-x 1 root root  144 Nov 18 10:08 ../
-rw-r--r-- 1 root root 4532 Nov 18 10:08 initOS.sh
root@ubuntu:~/k8s_v1.25.3# 
root@ubuntu:~/k8s_v1.25.3# ll /usr/local/bin/kube-scripts/
total 8
drwxr-xr-x 1 kube root   18 Nov 18 10:08 ./
drwxr-xr-x 1 kube root   28 Nov 18 10:08 ../
-rwxr-xr-x 1 root root 4532 Nov 18 10:08 initOS.sh*
root@ubuntu:~/k8s_v1.25.3# 
24sama commented 1 year ago

Oh, I found it. https://github.com/kubesphere/kubekey/blob/d0c8a9133683a1b0dca6b8c7fd9a3ca847b297ff/cmd/kk/apis/kubekey/v1alpha2/cluster_types.go#L89-L93

The script field is read from the `shell` key rather than `bash`, so the config needs to be:

system:
    preInstall:
      - name: install crio
        shell: "/bin/bash -x init-crio.sh"               <--- "shell"
        materials:
          - ./init-crio.sh
          - cri-o.tar.gz
24sama commented 1 year ago

Here is a PR to fix it. https://github.com/kubesphere/kubekey/pull/1615

willzhang commented 1 year ago

I must configure an absolute path; otherwise kk cannot find init-crio.sh, even though it can find initOS.sh:

system:
    preInstall:
      - name: install crio
        shell: "/bin/bash -x /tmp/kubekeyPreInstall-0-script/init-crio.sh"               <--- "shell"
        materials:
          - ./init-crio.sh
          - cri-o.tar.gz
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
10:24:09 CST success: [node1]
10:24:09 CST [ConfigureOSModule] configure the ntp server for each node
10:24:09 CST skipped: [node1]
10:24:09 CST [CustomScriptsModule Phase:PreInstall] Phase:PreInstall(0/1) script:install crio
Copy 0/2 materials: Scp -fr ./init-crio.sh root@192.168.72.51:/tmp/kubekeyPreInstall-0-script/init-crio.sh done, take 364.149126ms
Copy 1/2 materials: Scp -fr cri-o.tar.gz root@192.168.72.51:/tmp/kubekeyPreInstall-0-script/cri-o.tar.gz done, take 2.142264927s
10:24:12 CST message: [node1]
Exec Bash: /bin/bash -x init-crio.sh err:Failed to exec command: sudo -E /bin/bash -c "/bin/bash -x init-crio.sh" 
/bin/bash: init-crio.sh: No such file or directory: Process exited with status 127
10:24:12 CST failed: [node1]
error: Pipeline[CreateClusterPipeline] execute failed: Module[CustomScriptsModule Phase:PreInstall] exec failed: 
failed: [node1] [Phase:PreInstall(0/1) script:install crio] exec failed after 1 retires: Exec Bash: /bin/bash -x init-crio.sh err:Failed to exec command: sudo -E /bin/bash -c "/bin/bash -x init-crio.sh" 
/bin/bash: init-crio.sh: No such file or directory: Process exited with status 127
root@ubuntu:/tmp/kubekeyPreInstall-0-script# ls
cri-o.tar.gz  init-crio.sh
root@ubuntu:/tmp/kubekeyPreInstall-0-script# sudo -E /bin/bash -c "/bin/bash -x init-crio.sh" 
+ registry_username=admin
+ registry_password=Harbor12345
+ registry_domain=192.168.72.15
+ registry_port=80
+ TARBALL=cri-o.tar.gz
++ mktemp -d
+ TMPDIR=/tmp/tmp.NnhJFAkJdx
+ trap 'rm -rf -- "$TMPDIR"' EXIT
+ tar xfz ./cri-o.tar.gz --strip-components=1 -C /tmp/tmp.NnhJFAkJdx
+ pushd /tmp/tmp.NnhJFAkJdx
/tmp/tmp.NnhJFAkJdx /tmp/kubekeyPreInstall-0-script
+ echo Installing CRI-O
Installing CRI-O
+ ./install
++ install -d -m 755 /etc/cni/net.d
++ install -D -m 755 -t /opt/cni/bin cni-plugins/bandwidth cni-plugins/bridge cni-plugins/dhcp cni-plugins/firewall cni-plugins/host-device cni-plugins/host-local cni-plugins/ipvlan cni-plugins/loopback cni-plugins/macvlan cni-plugins/portmap cni-plugins/ptp cni-plugins/sbr cni-plugins/static cni-plugins/tuning cni-plugins/vlan cni-plugins/vrf
++ install -D -m 644 -t /etc/cni/net.d contrib/10-crio-bridge.conf
++ install -D -m 755 -t /usr/local/bin bin/conmon
++ install -D -m 755 -t /usr/local/bin bin/crictl
++ install -d -m 755 /usr/local/share/bash-completion/completions
++ install -d -m 755 /usr/local/share/fish/completions
++ install -d -m 755 /usr/local/share/zsh/site-functions
++ install -d -m 755 /etc/containers
++ install -D -m 755 -t /usr/local/bin bin/crio-status
++ install -D -m 755 -t /usr/local/bin bin/crio
++ install -D -m 644 -t /etc etc/crictl.yaml
++ install -D -m 644 -t /usr/local/share/oci-umount/oci-umount.d etc/crio-umount.conf
++ install -D -m 644 -t /etc/crio etc/crio.conf
++ install -D -m 644 -t /etc/crio/crio.conf.d etc/10-crun.conf
++ install -D -m 644 -t /usr/local/share/man/man5 man/crio.conf.5
++ install -D -m 644 -t /usr/local/share/man/man5 man/crio.conf.d.5
++ install -D -m 644 -t /usr/local/share/man/man8 man/crio-status.8
++ install -D -m 644 -t /usr/local/share/man/man8 man/crio.8
++ install -D -m 644 -t /usr/local/share/bash-completion/completions completions/bash/crio
++ install -D -m 644 -t /usr/local/share/fish/completions completions/fish/crio.fish
++ install -D -m 644 -t /usr/local/share/zsh/site-functions completions/zsh/_crio
++ install -D -m 644 -t /etc/containers contrib/policy.json
++ install -D -m 644 -t /usr/local/lib/systemd/system contrib/crio.service
++ install -D -m 755 -t /usr/local/bin bin/pinns
++ install -D -m 755 -t /usr/local/bin bin/crun
++ command -v runc
/usr/local/bin/runc
++ '[' -n '' ']'
+ popd
/tmp/kubekeyPreInstall-0-script
+ rm -rf /etc/cni/net.d/10-crio-bridge.conf
++ echo -n admin:Harbor12345
++ base64
+ base64pwd=YWRtaW46SGFyYm9yMTIzNDU=
+ logger 'username: admin, password: Harbor12345, base64pwd: YWRtaW46SGFyYm9yMTIzNDU='
+ cat
+ systemctl enable --now crio.service
+ rm -rf -- /tmp/tmp.NnhJFAkJdx
24sama commented 1 year ago

Looks like it needs to use the absolute path at present.
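
As a workaround that avoids hard-coding the `/tmp/kubekeyPreInstall-0-script` path in the config, the script itself can change into its own directory before doing anything else, so relative references such as `./cri-o.tar.gz` resolve no matter which working directory kk invokes it from. A minimal sketch (not KubeKey-specific):

```shell
#!/bin/bash
# cd to the directory this script lives in, so relative paths in the
# script (e.g. ./cri-o.tar.gz) work regardless of the caller's cwd.
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$script_dir" || exit 1
echo "running from: $PWD"
```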

willzhang commented 1 year ago

Looks like it needs to use the abs path at present.

Thanks a lot anyway.

willzhang commented 1 year ago

KubeKey v3.0.5 fixed this:


  system:
    preInstall:
      - name: install crio
        bash: "xxx"
        materials: