aenix-io / kubefarm

Automated Kubernetes deployment and a farm of PXE-bootable servers
Apache License 2.0

New installations of version 1.9.2 yield errors #2

Closed: sfxworks closed this issue 3 years ago

sfxworks commented 3 years ago

Hello,

I deviated from my custom setup to try the newest version and received an error from the kube-apiserver. Possibly due to a Kubernetes binary version difference or new flag requirements? Here are the results:

kubectl get pods
NAME                                                      READY   STATUS             RESTARTS   AGE
cluster1-kubernetes-admin-6fb799f984-jds44                0/1     Running            0          3m26s
cluster1-kubernetes-apiserver-7bb6b845d8-mtl48            0/1     CrashLoopBackOff   4          3m26s
cluster1-kubernetes-apiserver-7bb6b845d8-q8fzp            0/1     CrashLoopBackOff   4          3m26s
cluster1-kubernetes-controller-manager-5bf87785cb-vfbcp   1/1     Running            0          3m26s
cluster1-kubernetes-controller-manager-5bf87785cb-vxjzc   1/1     Running            0          3m26s
cluster1-kubernetes-etcd-0                                1/1     Running            0          3m26s
cluster1-kubernetes-etcd-1                                1/1     Running            0          3m26s
cluster1-kubernetes-etcd-2                                1/1     Running            1          3m26s
cluster1-kubernetes-kubeadm-tasks-c598t                   1/1     Running            0          3m26s
cluster1-kubernetes-scheduler-5cf7cb57c7-jrc4d            1/1     Running            0          3m26s
cluster1-kubernetes-scheduler-5cf7cb57c7-wgq8q            1/1     Running            0          3m26s
cluster1-ltsp-585c59599-96wwd                             3/3     Running            0          3m26s

kubectl logs -f cluster1-kubernetes-apiserver-7bb6b845d8-mtl48
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0227 13:45:20.901095       1 server.go:632] external host was not specified, using 10.244.0.67
Error: [service-account-issuer is a required flag, --service-account-signing-key-file and --service-account-issuer are required flags]
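
To check the binary-version theory, one quick sanity check is to look at which kube-apiserver image the chart actually deployed (a sketch, using the pod name from the listing above):

# inspect the image of the first container in the crashing apiserver pod
kubectl get pod cluster1-kubernetes-apiserver-7bb6b845d8-mtl48 \
  -o jsonpath='{.spec.containers[0].image}'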

Here is the values file that I used:

# ------------------------------------------------------------------------------
# Kubernetes control-plane
# ------------------------------------------------------------------------------
kubernetes:
  enabled: true
  # See all available options for kubernetes-in-kubernetes chart on:
  # https://github.com/kvaps/kubernetes-in-kubernetes/blob/master/deploy/helm/kubernetes/values.yaml

  persistence:
    enabled: true
    storageClassName: longhorn

  apiServer:
    service:
      type: LoadBalancer
      annotations: {}
      loadBalancerIP:

    # Specify external endpoints here to be able to access the Kubernetes cluster externally
    certSANs:
#      ipAddresses:
#      - 10.9.8.10
      dnsNames:
      - farmcluster.sfxworks.net
    #extraArgs:
    #  # advertise-address is required for kube-proxy
    #  advertise-address: 10.9.8.10

# ------------------------------------------------------------------------------
# Kubeadm token generator
# (tokens are needed to join nodes to the cluster)
# ------------------------------------------------------------------------------
tokenGenerator:
  enabled: true
  tokenTtl: 24h0m0s
  schedule: "0 */12 * * *"

# ------------------------------------------------------------------------------
# Network boot server configuration
# ------------------------------------------------------------------------------
ltsp:
  enabled: true
  image:
    repository: docker.io/kvaps/kubefarm-ltsp
    tag: v0.7.0
    PullPolicy: IfNotPresent
    PullSecrets: []
  replicaCount: 1

  publishDHCP: true
  service:
    enabled: true
    type: LoadBalancer
    loadBalancerIP:
    labels: {}
    annotations: {}

  labels: {}
  annotations: {}
  podLabels: {}
  podAnnotations: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  sidecars: []
  extraVolumes: []

  config:

    # Disable ubuntu apt auto-update task
    disableAutoupdate: true

    # from /usr/share/zoneinfo/<timezone> (e.g. Europe/Moscow)
    #timezone: UTC

    # SSH-keys authorized to access the nodes
    sshAuthorizedKeys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8+LqzvVcJShK7W+sZgCQ7kBiUGbZBIXkhpsaFTqICUSRNZ5W8RpK7YHxnHlyN1GgOUzFa1kGpq6/ZX5hPN8QivHV338EzJ5zd7uYTwu1mtxJtKVmj6Ru9Sz/B/IBqqcttQnaMdIDkyFzV5L8M+eVSA0Pxlvr1pftRcRJwpL/ie6jbRgkpD0LpDaskbcbJbQUn8DSesb43XgoG/maUQuCZU94FjQfVhNJe8NoV1BM2WTaaZKWmeQcKwZdECgYSDzQ3Ed+tfAEmt53ZOxEwXS2QZUTnMt/PvrCxKEC+CHctCifR7Eba3hhTAx/qgvm/5Er8hm2wwtSdMlPoIqOIORHz

    # Hashed password for root, use `openssl passwd -1` to generate one
    #rootPasswd: $1$jaKnTiEb$IhpsNUfssXQ8eQg8orald0 # hackme

    # Sysctl settings for the nodes
    sysctls:
      net.ipv4.ip_forward: 1
      net.ipv6.conf.all.forwarding: 1

    # Modules to load during startup
    modules:
      - br_netfilter
      #- ip_vs
      #- ip_vs_rr
      #- ip_vs_wrr
      #- ip_vs_sh

    # Docker configuration file
    dockerConfig:
      exec-opts: [ native.cgroupdriver=systemd ]
      iptables: false
      ip-forward: false
      bridge: none
      log-driver: journald
      storage-driver: overlay2

    # Extra options for ltsp.conf
    options:
      MENU_TIMEOUT: 0
      #KERNEL_PARAMETERS: "forcepae console=tty1 console=ttyS0,9600n8"

    # Extra sections for ltsp.conf
    sections: {}
      #init/: |
      #  cp -f /etc/ltsp/journald.conf /etc/systemd/journald.conf
      #initrd-bottom/: |
      #  echo "Hello World!"

    # These files will be copied into /etc/ltsp directory for all clients
    # Note: all *.service files will be copied and enabled via systemd
    extraFiles: {}
      #journald.conf: |
      #   [Manager]
      #   SystemMaxUse=200M
      #   RuntimeMaxUse=200M

    # Optionally you might want to map additional ConfigMaps or Secrets;
    # they will be projected into the /etc/ltsp directory
    extraProjectedItems: []
      #- secret:
      #    name: ipa-join-token

# ------------------------------------------------------------------------------
# Nodes configuration
# ------------------------------------------------------------------------------
nodePools:
  -
    # DHCP range for the node pool, required for issuing leases.
    # See the --dhcp-range option syntax in the dnsmasq man page.
    # Note: the range will automatically be appended with the set:{{ .Release.Name }}-ltsp option.
    #
    # examples:
    #   172.16.0.0,static,infinite
    #   172.16.0.1,172.16.0.100,255.255.255.0,172.16.0.255,1h
    #
    # WARNING setting broadcast-address is required! (see: https://www.mail-archive.com/dnsmasq-discuss@lists.thekelleys.org.uk/msg14137.html)
    range: "192.168.0.100,192.168.0.199,255.255.255.0,192.168.0.255,1h"

    # DHCP configuration for each node
    nodes: []
      #- name: node1
      #  mac: 02:00:ac:10:00:0a
      #  ip: 172.16.0.10

    # Extra tags applied to this node pool; tags may contain additional
    # DHCP and LTSP options as well as node labels and taints
    tags:
      #- debug
      #- foo

    # Extra options for dnsmasq. This section can be used to set up Circuit ID matching.
    # Note: Symbol '?' will automatically be replaced by the '{{ .Release.Name }}-ltsp' tag.
    extraOpts: []
      #- dhcp-circuitid: "set:port_0,ge-0/0/0.0:staging"
      #- dhcp-circuitid: "set:port_1,ge-0/0/1.0:staging"
      #- dhcp-range: "set:?,tag:port_0,192.168.11.10,192.168.11.10,255.255.255.0"
      #- dhcp-range: "set:?,tag:port_1,192.168.11.11,192.168.11.11,255.255.255.0"
      #- tag-if: "set:bar,tag:?,tag:port_0"
      #- tag-if: "set:bar,tag:?,tag:port_1"

# ------------------------------------------------------------------------------
# Extra options can be specified for each tag
# ("all" options are aplicable for any node)
# ------------------------------------------------------------------------------
tags:
  dhcpOptions:
    # dnsmasq options
    # see all available options list (https://git.io/JJ0dH)
    all:
      router: 192.168.0.1
      dns-server: 1.1.1.1
      broadcast: 192.168.0.255
#      domain-search: sfxworks.net

  ltspOptions:
    all:
      FSTAB_KUBELET: "tmpfs /var/lib/kubelet tmpfs x-systemd.wanted-by=kubelet.service 0 0"
      FSTAB_DOCKER: "tmpfs /var/lib/docker tmpfs x-systemd.wanted-by=docker.service 0 0"
    debug:
      DEBUG_SHELL: "1"

  kubernetesLabels:
    all: {}
    #foo:
    #  label1: value1
    #  label2: value2

  kubernetesTaints:
    all: {}
    #foo:
    #  - effect: NoSchedule
    #    key: foo
    #    value: bar
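
The chart itself was installed with Helm; a minimal sketch of the command, assuming a local checkout of this repository with the chart under deploy/helm/kubefarm and the file above saved as values.yaml:

# release name, chart path and values file name are assumptions; adjust to your setup
helm install cluster1 ./deploy/helm/kubefarm -f values.yaml
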
kvaps commented 3 years ago

Hi, thanks for the report. I'm going to fix it in the next release.

For now please use the following values:

kubernetes:
  apiServer:
    extraArgs:
      service-account-issuer: https://kubernetes.default.svc.cluster.local
      service-account-signing-key-file: /pki/sa/tls.key
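
A minimal sketch of applying these overrides, assuming the release is named cluster1 and the snippet above is saved as apiserver-fix.yaml (adjust the chart path and namespace to your installation):

# --reuse-values keeps the current release settings and layers the fix on top
helm upgrade cluster1 ./deploy/helm/kubefarm --reuse-values -f apiserver-fix.yaml
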
sfxworks commented 3 years ago

The values above worked. Thank you.

kvaps commented 3 years ago

Should be fixed in v0.10.0 by hardcoding the --service-account-issuer and --service-account-signing-key-file flags, ref https://github.com/kvaps/kubernetes-in-kubernetes/commit/c3573df02324632b7969728ae126b3b5b85b9881
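
After upgrading, one quick way to confirm the apiserver picked up both flags (a sketch, assuming the deployment is named cluster1-kubernetes-apiserver as the pod names above suggest):

# both service-account flags should show up in the rendered container command/args
kubectl get deployment cluster1-kubernetes-apiserver -o yaml | grep service-account-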