Adding `\\` to the local-exec did not help.
Replacing the command in the local-exec with `scp -r` and removing the `*` did not help (this command is simpler and should be used in any case).
I also tried `unset SSH_AUTH_SOCK`, to no avail.
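For reference, the provisioner looked roughly like the block below. This is a reconstruction from the command shown in the error output; the key path and host IP are taken from the error, but how they are actually wired up in the real main.tf (variables, resource attributes) is an assumption.

```hcl
# Reconstruction of the failing step, based on the command in the error below.
resource "null_resource" "get_kubeconfig" {
  provisioner "local-exec" {
    command = <<-EOT
      [[ -d ./auth ]] || mkdir -p ./auth
      /usr/bin/scp -r -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /Users/mjohansson/.ssh/id_rsa_mos-mx94n \
        root@147.75.45.97:/tmp/artifacts/install/auth/ ./auth/
    EOT
  }
}
```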
```
│ Error: local-exec provisioner error
│
│ with null_resource.get_kubeconfig,
│ on main.tf line 163, in resource "null_resource" "get_kubeconfig":
│ 163: provisioner "local-exec" {
│
│ Error running command ' [[ -d ./auth ]] || mkdir -p ./auth
│ /usr/bin/scp -r -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /Users/mjohansson/.ssh/id_rsa_mos-mx94n root@147.75.45.97:/tmp/artifacts/install/auth/ ./auth/
│ ': exit status 255. Output: Warning: Permanently added '147.75.45.97' (ED25519) to the list of known hosts.
│ Load key "/Users/mjohansson/.ssh/id_rsa_mos-mx94n": invalid format
│ root@147.75.45.97: Permission denied (publickey).
│ /usr/bin/scp: Connection closed
```
The RSA key starts with `-----BEGIN OPENSSH PRIVATE KEY-----`.
Changing the local SSH key storage format to PEM (from OpenSSH) resolved the problem, and `terraform apply` completed successfully.
It is worth noting that my other SSH keys, not created by Terraform, are in OpenSSH format and do not have this problem.
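Since the failing key is the one Terraform generates, the fix amounts to writing that key out in PEM encoding. A minimal sketch, assuming the key comes from the hashicorp/tls provider and is written to disk with hashicorp/local; the resource names and file path here are illustrative, not the actual configuration:

```hcl
resource "tls_private_key" "cluster_ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "local_sensitive_file" "cluster_ssh_key" {
  # private_key_pem yields "-----BEGIN RSA PRIVATE KEY-----" (PEM), which worked
  # here, instead of private_key_openssh ("-----BEGIN OPENSSH PRIVATE KEY-----"),
  # which triggered the "invalid format" error.
  content         = tls_private_key.cluster_ssh.private_key_pem
  filename        = pathexpand("~/.ssh/id_rsa_mos-mx94n")
  file_permission = "0600"
}
```

An existing OpenSSH-format key can also be rewritten in place with `ssh-keygen -p -m PEM -f <keyfile>` (it prompts for the passphrase).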
After an otherwise successful provision, the last step of the deployment is to copy the kubeconfig for the OpenShift cluster locally. This step failed when run from my MBP (macOS Sonoma 14.5).
I tried to SSH using the same command locally from my shell, but it failed. I then ran `ssh-add -K`, as my agent was not initialized, but that did not help either. I edited the wildcard in the `scp` arguments and that helped: