kubernetes / kops

Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
https://kops.sigs.k8s.io/
Apache License 2.0

Task did not have an address *openstacktasks.FloatingIP panic: runtime error: invalid memory address or nil pointer dereference #11889

Closed wesselvdv closed 2 years ago

wesselvdv commented 3 years ago

/kind bug

1. What kops version are you running? The command `kops version` will display this information. 1.20.1

2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag. 1.21.2

3. What cloud provider are you using? OpenStack

4. What commands did you run? What is the simplest way to reproduce this issue?

```sh
kops -v 10 create cluster \
       --cloud openstack \
       --name dev.k8s.local \
       --state s3://kops-dev \
       --zones rc3-a,rc3-b,rc3-c \
       --network-cidr 10.6.0.0/24 \
       --image "ubuntu-20.04" \
       --master-count=3 \
       --node-count=4 \
       --master-size m1_c2_m4_d20 \
       --node-size m1_c2_m8_d20 \
       --topology private \
       --ssh-public-key ~/.ssh/id_rsa.pub \
       --networking weave \
       --os-ext-net public \
       --api-loadbalancer-type internal \
       --container-runtime '' \
       --os-kubelet-ignore-az=true \
       --os-octavia=false
```

5. What happened after the commands executed? The cluster is deployed successfully, but errors are shown in the CLI. This also seems to prevent the Terraform kops provider from working.

6. What did you expect to happen? That no errors would be shown.
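For readers less familiar with Go: the panic in the title is the generic nil-pointer failure mode, i.e. some code in the OpenStack task path dereferences an address/ID field that was never populated. The snippet below is only a minimal, hypothetical sketch of that failure class; the type, field, and function names are illustrative and not the actual kops internals.

```go
// Minimal, hypothetical sketch of the reported failure class: rendering a
// task whose address/ID pointer was never set panics with
// "invalid memory address or nil pointer dereference".
// Names are illustrative only, not actual kops code.
package main

import "fmt"

type FloatingIP struct {
	Name *string
	ID   *string // stays nil when the task "did not have an address"
}

func render(fip *FloatingIP) string {
	// No nil check before dereferencing ID, so this panics when ID is nil.
	return fmt.Sprintf("floating IP %s has ID %s", *fip.Name, *fip.ID)
}

func main() {
	name := "api-loadbalancer"
	fip := &FloatingIP{Name: &name} // ID left unset
	fmt.Println(render(fip))        // panic: invalid memory address or nil pointer dereference
}
```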

7. Please provide your cluster manifest. Execute `kops get --name my.example.com -o yaml` to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2021-06-29T08:06:42Z"
  generation: 3
  name: dev.k8s.local
spec:
  api:
    loadBalancer:
      type: Internal
  assets:
    fileRepository: <snip>
  authorization:
    rbac: {}
  channel: stable
  cloudConfig:
    openstack:
      blockStorage:
        bs-version: v3
        csiPluginImage: docker.io/k8scloudprovider/cinder-csi-plugin:v1.20.3
        ignore-volume-az: true
        override-volume-az: nova
      loadbalancer:
        floatingNetwork: public
        floatingNetworkID: 586f0e36-993f-40f0-9ebc-e8329ec06d22
        method: ROUND_ROBIN
        provider: haproxy
        useOctavia: false
      monitor:
        delay: 1m
        maxRetries: 3
        timeout: 30s
      router:
        externalNetwork: public
  cloudControllerManager:
    image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:v1.20.2
  cloudProvider: openstack
  configBase: s3://kops-dev/dev.k8s.local
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-rc3-a
      name: a
    - instanceGroup: master-rc3-b
      name: b
    - instanceGroup: master-rc3-c
      name: c
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-rc3-a
      name: a
    - instanceGroup: master-rc3-b
      name: b
    - instanceGroup: master-rc3-c
      name: c
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    oidcClientID: <snip>
    oidcGroupsClaim: groups
    oidcIssuerURL: <snip>
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.20.8
  masterPublicName: api.dev.k8s.local
  networkCIDR: 10.6.0.0/24
  networking:
    weave: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 10.6.0.32/27
    name: rc3-a
    type: Private
    zone: rc3-a
  - cidr: 10.6.0.64/27
    name: rc3-b
    type: Private
    zone: rc3-b
  - cidr: 10.6.0.96/27
    name: rc3-c
    type: Private
    zone: rc3-c
  - cidr: 10.6.0.0/30
    name: utility-rc3-a
    type: Utility
    zone: rc3-a
  - cidr: 10.6.0.4/30
    name: utility-rc3-b
    type: Utility
    zone: rc3-b
  - cidr: 10.6.0.8/30
    name: utility-rc3-c
    type: Utility
    zone: rc3-c
  topology:
    dns:
      type: Public
    masters: private
    nodes: private
  useHostCertificates: true

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-06-29T08:06:42Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: dev.k8s.local
  name: master-rc3-a
spec:
  image: ubuntu-20.04
  machineType: m1_c2_m4_d20
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-rc3-a
  role: Master
  subnets:
  - rc3-a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-06-29T08:06:42Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: dev.k8s.local
  name: master-rc3-b
spec:
  image: ubuntu-20.04
  machineType: m1_c2_m4_d20
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-rc3-b
  role: Master
  subnets:
  - rc3-b

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-06-29T08:06:42Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: dev.k8s.local
  name: master-rc3-c
spec:
  image: ubuntu-20.04
  machineType: m1_c2_m4_d20
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-rc3-c
  role: Master
  subnets:
  - rc3-c

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-06-29T08:06:42Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: dev.k8s.local
  name: nodes-rc3-a
spec:
  image: ubuntu-20.04
  machineType: m1_c2_m8_d20
  maxSize: 2
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-rc3-a
  role: Node
  subnets:
  - rc3-a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-06-29T08:06:42Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: dev.k8s.local
  name: nodes-rc3-b
spec:
  image: ubuntu-20.04
  machineType: m1_c2_m8_d20
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-rc3-b
  role: Node
  subnets:
  - rc3-b

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-06-29T08:06:42Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: dev.k8s.local
  name: nodes-rc3-c
spec:
  image: ubuntu-20.04
  machineType: m1_c2_m8_d20
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-rc3-c
  role: Node
  subnets:
  - rc3-c
```

8. Please run the commands with most verbose logging by adding the `-v 10` flag. Paste the logs into this report, or into a gist, and provide the gist link here.

9. Anything else we need to know?

johngmyers commented 3 years ago

/area provider/openstack

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot commented 2 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes/kops/issues/11889#issuecomment-980101756):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues and PRs according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.