kubernetes-sigs / cluster-api-provider-gcp

The GCP provider implementation for Cluster API
Apache License 2.0

Cannot create VMs for Machines. /root/.kube/config missing #66

Closed: oxddr closed 5 years ago

oxddr commented 5 years ago

I've created a cluster from HEAD. The initial machines defined in machines.yaml were created successfully. However, when I tried to create a MachineSet [1], the VMs for the underlying Machines were not created. Based on the logs, there seems to be a problem while generating the kubeadm token:

```
E1130 12:14:10.410996       1 machineactuator.go:785] unable to create token: exit status 1 [failed to load admin kubeconfig [open /root/.kube/config: no such file or directory]
```
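
For context, here is a minimal, hypothetical Go sketch of what such a token-creation call might look like. The function name and exact invocation are assumptions for illustration, not the actuator's actual code; the point is that `kubeadm token create` has to load an admin kubeconfig to reach the API server, so the call fails on any machine where that file does not exist:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createBootstrapToken shells out to kubeadm to mint a bootstrap token.
// kubeadm needs an admin kubeconfig to talk to the API server; if the
// file at kubeconfigPath is absent on the machine running the actuator,
// this fails with the "failed to load admin kubeconfig" error above.
func createBootstrapToken(kubeconfigPath string) (string, error) {
	cmd := exec.Command("kubeadm", "token", "create", "--kubeconfig", kubeconfigPath)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("unable to create token: %v [%s]", err, strings.TrimSpace(string(out)))
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	token, err := createBootstrapToken("/root/.kube/config")
	if err != nil {
		fmt.Println(err) // reproduces the failure when the kubeconfig is missing
		return
	}
	fmt.Println("token:", token)
}
```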

[1]

```yaml
apiVersion: "cluster.k8s.io/v1alpha1"
kind: MachineSet
metadata:
  name: ms3
spec:
  replicas: 3
  selector:
    matchLabels:
      foo: bar
  template:
    metadata:
      labels:
        foo: bar
    spec:
      providerConfig:
        value:
          apiVersion: "gceproviderconfig/v1alpha1"
          kind: "GCEMachineProviderConfig"
          roles:
          - Node
          zone: "us-central1-f"
          machineType: "n1-standard-1"
          os: "ubuntu-1604-lts"
          disks:
          - initializeParams:
              diskSizeGb: 30
              diskType: "pd-standard"
      versions:
        kubelet: 1.12.0
```
oxddr commented 5 years ago

For now, my workaround has been to copy /etc/kubernetes/admin.conf over from the master machine to the worker node running the clusterapi stack.
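
To make the workaround's effect concrete, here is a small illustrative Go sketch (the resolveKubeconfig helper and the candidate paths are assumptions, not project code) that falls back to the copied admin.conf when /root/.kube/config is absent:

```go
package main

import (
	"fmt"
	"os"
)

// resolveKubeconfig returns the first admin kubeconfig that exists,
// preferring /root/.kube/config and falling back to the admin.conf
// copied over from the master node.
func resolveKubeconfig() (string, error) {
	candidates := []string{"/root/.kube/config", "/etc/kubernetes/admin.conf"}
	for _, p := range candidates {
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("no admin kubeconfig found in %v", candidates)
}

func main() {
	path, err := resolveKubeconfig()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using kubeconfig:", path)
}
```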

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with `/close`.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with `/remove-lifecycle rotten`. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with `/close`.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

fejta-bot commented 5 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with `/reopen`. Mark the issue as fresh with `/remove-lifecycle rotten`.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot commented 5 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/issues/66#issuecomment-506275949):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.