Open saidmasoud opened 6 years ago
I had a very similar issue. I was able to log in over SSH by using the auto-generated key. I removed references to `aws_ssh_key_id` in `driver` and `ssh_key` in `transport`, and that seemed to work for me.
oh wow that actually worked, thanks @dancfox !
I still believe this is an issue that needs to be addressed, as users should be able to use an already-created SSH key if the option is available to them.
The intention in current versions is that if one does not provide aws_ssh_key_id, then we will auto-generate and use one, but otherwise we still respect one being set. This could be a bug or misconfiguration but we can certainly try to repro.
@cheeseplus yeah that makes sense, it seems like there may indeed be a bug when users provide an SSH key, but at least I can work with auto-gened SSH keys for now. Let me know if I can help repro the issue in any way!
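For reference, the workaround above amounts to omitting both key settings so kitchen-ec2 generates and wires up its own key pair; a minimal sketch (the region value is a placeholder, not taken from this thread):

```yaml
driver:
  name: ec2
  region: us-east-1
  # no aws_ssh_key_id: kitchen-ec2 auto-generates an EC2 key pair

transport:
  name: ssh
  # no ssh_key: the auto-generated private key is used automatically
```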
What kind of SSH key is it? It's possible you've got some newer algorithm that either net-ssh doesn't support in the version we use or that we haven't set things up correctly for.
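One quick way to check the key type (a sketch, assuming OpenSSH's `ssh-keygen` is available) is to ask for the key's fingerprint, which also reports the algorithm and bit length. The key path here is a throwaway demo key, not one from this thread:

```shell
# Generate a throwaway 2048-bit RSA key just to demonstrate the check;
# in practice, point -f at your real .pem file instead.
rm -f /tmp/probe_key /tmp/probe_key.pub
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/probe_key -q

# Prints: <bits> SHA256:<fingerprint> <comment> (RSA)
ssh-keygen -l -f /tmp/probe_key
```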
@coderanger is this what you're looking for? The key was generated in AWS EC2 by kops:

```
$ openssl rsa -text -noout -in ~/Downloads/dev-fluentd.pem
Private-Key: (2048 bit)
<REDACTED>
```
I just ran into a similar error. It seemed that I had an orphaned instance that kitchen thought was still there when running `kitchen converge`. I deleted the *.yml file that was associated with the build (.kitchen/default-
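For anyone hitting this orphaned-instance variant: removing the per-instance state file under `.kitchen/` makes Test Kitchen forget the instance. A sketch; the file name `default-centos-8.yml` is hypothetical, so check `.kitchen/` for your actual instance name:

```shell
# Remove the stale state file so Test Kitchen no longer believes
# the orphaned EC2 instance exists (file name is hypothetical).
rm -f .kitchen/default-centos-8.yml
```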
Same issue as OP and dancfox's workaround (https://github.com/test-kitchen/kitchen-ec2/issues/398#issuecomment-387475145) still solves it.
Running Test Kitchen version 2.3.3
I am experiencing the same issue with a CentOS 8.2 AMI:

```yaml
---
driver:
  name: ec2
  aws_ssh_key_id: <%= ENV['AWS_SSH_KEYNAME'] %>
  region: us-east-1
  instance_type: <%= ENV['AWS_INSTANCE_TYPE'] %>
  spot_price: <%= ENV['AWS_SPOT_PRICE'] %>
  associate_public_ip: true
  interface: public
  subnet_id: <%= ENV['AWS_SUBNET_ID'] %>
  security_group_ids: <%= ENV['AWS_SG_ID'] %>
  retryable_tries: 200
  shared_credentials_profile: demo
  user_data: user_data_centos_8.sh

provisioner:
  name: shell
  log_level: 5
  max_retries: 3
  wait_for_retry: 30
  retry_on_exit_code: # will retry if winrm is unable to connect to the ec2 instance
    - -1 # Generic error during Chef execution
    - 1  # Generic error during Chef execution
  #script: 'bootstrap.sh'

verifier:
  name: inspec
  format: documentation
  reporter:
    - cli
    - html:./inspec_output.html

transport:
  name: ssh
  ssh_key: ~/.ssh/<%= ENV['AWS_SSH_KEYNAME'] %>.pem
  max_wait_until_ready: 900
  connect_timeout: 60
  connection_retries: 10
  connection_retry_sleep: 10
  username: centos

platforms:
  - name: centos-8
    driver:
      image_id: <%= ENV['AWS_AMI_ID'] %>

suites:
  - name: default
    verifier:
      inspec_tests:
        - test/os_spec.rb
```
```
[SSH] opening connection to centos@54.165.54.234<{:user_known_hosts_file=>"/dev/null", :port=>22, :compression=>false, :compression_level=>0, :keepalive=>true, :keepalive_interval=>60, :keepalive_maxcount=>3, :timeout=>15, :keys_only=>true, :keys=>["/Users/user1/.ssh/demo.pem"], :auth_methods=>["publickey"], :verify_host_key=>:never, :logger=>#<Logger:0x00007fc2b0b18fd0 @level=4, @progname=nil, @default_formatter=#<Logger::Formatter:0x00007fc2b0b18f80 @datetime_format=nil>, @formatter=nil, @logdev=#<Logger::LogDevice:0x00007fc2b0b18f30 @shift_period_suffix=nil, @shift_size=nil, @shift_age=nil, @filename=nil, @dev=#<IO:<STDERR>>, @mon_mutex=#<Thread::Mutex:0x00007fc2b0b18ee0>, @mon_mutex_owner_object_id=70237082404760, @mon_owner=nil, @mon_count=0>>, :password_prompt=>#<Net::SSH::Prompt:0x00007fc2b0b18eb8>, :user=>"centos"}>
```
I was able to launch the same ami manually and I can manually SSH into the box. So, kitchen-ec2 must be doing something wrong or there is a misconfiguration somewhere.
any news about this one?
It looks like it has been resolved in net-ssh 7.0.0+, but Chef requires < 7.0. Found some breadcrumbs here. To my understanding, a proper solution is to bump the net-ssh requirement in Chef to 7.0+. Alternatively, kitchen can use the same net-ssh patch as Vagrant does: https://github.com/hashicorp/vagrant/blob/main/lib/vagrant/patches/net-ssh.rb
Ran into this as well, on our platform we're stuck with RSA keys for now. Would make a lot of sense to bump net-ssh to 7.0 instead of backporting all kind of patches.
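If anyone wants to experiment, one way to test the theory is a Gemfile that installs Test Kitchen alongside a newer net-ssh. This is a sketch, not a confirmed fix; bundler may refuse to resolve it while Chef or Test Kitchen still pin net-ssh below 7.0:

```ruby
# Hypothetical Gemfile for testing kitchen-ec2 against net-ssh >= 7.0.
# Resolution will fail if a dependency still constrains net-ssh < 7.0.
source 'https://rubygems.org'

gem 'test-kitchen'
gem 'kitchen-ec2'
gem 'net-ssh', '>= 7.0'
```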
In my case, during our validation we create a temporary SSH key:

```shell
aws ec2 create-key-pair --key-name tmp-kitchen --key-type ed25519 | jq -r ".KeyMaterial" > tmp-kitchen.pem
export AWS_SSH_KEYNAME=tmp-kitchen
```

I was able to get CentOS 8.x to work with this setup.
The above works with a different environment variable for me:

```shell
export AWS_SSH_KEY_ID=tmp-kitchen
```

Then adding the transport config to the kitchen config:

```yaml
transport:
  ssh_key: tmp-kitchen.pem
```

YMMV ;)
I seem to be facing a similar issue. I am spinning up a Windows EC2 instance using Test Kitchen, and I can SSH to this instance without a password when I connect manually. But when I run `kitchen converge`, it says `Failed to complete #converge action: [password is a required option]`. I have specified the exact path to `id_rsa` under `transport > ssh_key`, but it does not seem to work.
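The `[password is a required option]` error comes from the WinRM transport, which Test Kitchen selects by default for Windows platforms; if the intent is to use SSH, it may help to force the transport explicitly. A sketch, with the username as a placeholder that depends on the AMI:

```yaml
transport:
  name: ssh               # force SSH instead of the WinRM default for Windows
  username: administrator # hypothetical; depends on the AMI
  ssh_key: ~/.ssh/id_rsa
```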
OS: macOS 10.12.4

When creating an EC2 instance via `kitchen`, I cannot SSH into the host. I can, however, manually SSH into the host on the command line. I have looked up previously related issues in this repo, and none of the solutions listed helped me out. My `~/.ssh/config` file has nothing special in it.

Software versions:

`.kitchen.yml`:

`kitchen create` output:

`kitchen converge` output:

`~/.ssh/config`:

Manual SSH attempt: