Hello there!
Long-time lurker, first-time issue reporter...
During my first attempt at deploying with this project, I ran into the following error:
│ Error: remote-exec provisioner error
│
│ with null_resource.wait_for_cloud_init,
│ on main.tf line 184, in resource "null_resource" "wait_for_cloud_init":
│ 184: provisioner "remote-exec" {
│
│ error executing "/tmp/terraform_658460362.sh": Process exited with status 1
After connecting to the Admin node with SSH, I found the following when running an 'apt update':
root@eksa-um5ltb-admin:~# apt update
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:3 http://archive.ubuntu.com/ubuntu focal InRelease
Hit:4 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8993 B]
Err:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
Reading package lists... Done
W: GPG error: https://packages.cloud.google.com/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
E: The repository 'https://apt.kubernetes.io kubernetes-xenial InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
After some digging, I found that the current Kubernetes key(s) specified in the setup.cloud-init.tftpl file have expired. To address this, I added the following new key starting at line 13 of setup.cloud-init.tftpl:
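For anyone hitting the same error, the shape of the change is roughly as follows. This is only a sketch of the relevant cloud-init `apt` section; the source name and the key body below are placeholders for illustration, not the actual key I pasted in (fetch the current one from the upstream repo before using this):

```yaml
# Hypothetical excerpt of setup.cloud-init.tftpl (cloud-init apt module).
# The key block is a placeholder -- substitute the current, unexpired
# Kubernetes apt signing key here.
apt:
  sources:
    kubernetes.list:
      source: "deb https://apt.kubernetes.io/ kubernetes-xenial main"
      key: |
        -----BEGIN PGP PUBLIC KEY BLOCK-----
        <current Kubernetes apt repository signing key goes here>
        -----END PGP PUBLIC KEY BLOCK-----
```

With an inline `key`, cloud-init installs the key itself at first boot, so `apt update` can verify the repo's InRelease signature without any manual `apt-key` step on the Admin node.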
After making this change, my subsequent deployment attempt appears to have been successful, though I'm not 100% sure how to verify this. When I run 'kubectl get nodes' from the Admin node I get this output, which seems good:
root@eksa-82kcra-admin:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
139.178.88.179 Ready control-plane,master 24m v1.23.7-eks-7709a84
139.178.88.180 Ready <none> 18m v1.23.7-eks-7709a84
Please let me know if there is any additional info/action I can provide that would be beneficial to updating this repo to address this expired key issue.
Thanks,