cloudfoundry-attic / bosh-init

bosh-init is a tool used to create and update the Director VM
Apache License 2.0

bosh-init deploy fails when using OpenStack for VMs #121

Closed sxd closed 7 years ago

sxd commented 7 years ago

Hi,

I'm trying to deploy with bosh-init deploy bosh.yaml, and it fails while trying to log in over SSH with this error:

Command 'deploy' failed: Deploying: Creating instance 'bosh/0': Waiting until instance is ready: Starting SSH tunnel: Failed to connect to remote server: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
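One way to narrow an error like this down is to attempt the same SSH login by hand that bosh-init's tunnel attempts, with verbose output. This is only a sketch; the key path, user, and floating IP below are taken from the manifest further down and should be adjusted to your environment:

```shell
# Try the same SSH login bosh-init attempts, forcing the named key only.
# -v prints the authentication exchange, which shows whether the server
# ever offered/accepted the publickey method.
ssh -i ./bosh.pem -o IdentitiesOnly=yes -v vcap@138.219.231.251
```

If the server rejects the key here too, the problem is on the VM side (key never provisioned, wrong user, etc.) rather than in bosh-init itself.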

Here is what we have already checked:

Another fact: the cloud itself is functional, since it has been in use by many users during our tests, across both regions and versions.

Versions: bosh-init: version 0.0.99-1c660f1-2016-11-11T23:51:52Z

I'll include the bosh.yaml that we use.

name: bosh

releases:
- name: bosh
  url: https://bosh.io/d/github.com/cloudfoundry/bosh?v=260.6
  sha1: 1506526f39f7406d97ac6edc7601e1c29fce5df5
- name: bosh-openstack-cpi
  url: https://bosh.io/d/github.com/cloudfoundry-incubator/bosh-openstack-cpi-release?v=30
  sha1: 2fff8e1c241a91267ddd099a553c1339d2709821

resource_pools:
- name: vms
  network: private
  stemcell:
   # url: http://rpm.linets.cl/bosh_3312.tar.gz
   # sha1: dfe7facef0f2b042a216fcf225369afbef68d96f
   url: https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=3312.18
   sha1: a9aa1cc80b3e15869a2a1d543127e0f76d005a6e
   # url: https://s3.amazonaws.com/bosh-core-stemcells/openstack/bosh-stemcell-3312.7-openstack-kvm-centos-7-go_agent.tgz
   # sha1: 00f06918a036783af9586ec2c225e72b4bf87fa5
  cloud_properties:
    instance_type: beebop.large

disk_pools:
- name: disks
  disk_size: 20_000

networks:
- name: private
  type: manual
  subnets:
  - range: 10.0.1.0/24 # <--- Replace with a private subnet CIDR
    gateway: 10.0.1.1 # <--- Replace with a private subnet's gateway
    dns: [8.8.8.8] # <--- Replace with your DNS
    cloud_properties: {net_id: 58808f16-519d-4cf3-9cf6-d24911612190} # <--- # Replace with private network UUID
- name: public
  type: vip

jobs:
- name: bosh
  instances: 1

  templates:
  - {name: nats, release: bosh}
  - {name: postgres, release: bosh}
  - {name: blobstore, release: bosh}
  - {name: director, release: bosh}
  - {name: health_monitor, release: bosh}
  - {name: registry, release: bosh}
  - {name: openstack_cpi, release: bosh-openstack-cpi}

  resource_pool: vms
  persistent_disk_pool: disks

  networks:
  - name: private
    static_ips: [10.0.1.254] # <--- Replace with a private IP
    default: [dns, gateway]
  - name: public
    static_ips: [138.219.231.251] # <--- Replace with a floating IP

  properties:
    nats:
      address: 127.0.0.1
      user: nats
      password: nats-password # <--- Uncomment & change

    postgres: &db
      listen_address: 127.0.0.1
      host: 127.0.0.1
      user: postgres
      password: postgres-password # <--- Uncomment & change
      database: bosh
      adapter: postgres

    registry:
      address: 10.0.1.254 # <--- Replace with a private IP
      host: 10.0.1.254 # <--- Replace with a private IP
      db: *db
      http:
        user: admin
        password: admin # <--- Uncomment & change
        port: 25777
      username: admin
      password: admin # <--- Uncomment & change
      port: 25777
      endpoint: http://admin:admin@10.0.1.254:25777

    blobstore:
      address: 10.0.1.254 # <--- Replace with a private IP
      port: 25250
      provider: dav
      director:
        user: director
        password: director-password # <--- Uncomment & change
      agent:
        user: agent
        password: agent-password # <--- Uncomment & change

    director:
      address: 127.0.0.1
      name: my-bosh
      db: *db
      cpi_job: openstack_cpi
      max_threads: 3
      user_management:
        provider: local
        local:
          users:
            - {name: admin, password: admin}
            - {name: hm, password: hm-password}
          # - {name: admin, password: admin} # <--- Uncomment & change
          # - {name: hm, password: hm-password} # <--- Uncomment & change

    hm:
      director_account:
        user: hm
        password: hm-password # <--- Uncomment & change
      resurrector_enabled: true

    openstack: &openstack
      auth_url: https://cloud.beebop.sh:5000/v2.0 # <--- Replace with OpenStack Identity API endpoint
      # project: <some project> # <--- Replace with OpenStack project name
      tenant: <some tenant>
      region: <maule>
      # domain: default # <--- Replace with OpenStack domain name
      username: <hidden> # <--- Replace with OpenStack username
      api_key: <foo> # <--- Replace with OpenStack password
      default_key_name: bosh
      default_security_groups: [bosh]
      human_readable_vm_names: true
      agent: {mbus: "nats://nats:nats-password@10.0.1.254:4222"} # <--- Uncomment & change

    ntp: &ntp [0.pool.ntp.org, 1.pool.ntp.org]

cloud_provider:
  template: {name: openstack_cpi, release: bosh-openstack-cpi}

  ssh_tunnel:
    host: 138.219.231.251 # <--- Replace with a floating IP
    port: 22
    user: vcap
    private_key: ./bosh.pem # Path relative to this manifest file

    mbus: "https://mbus:mbus-password@138.219.231.251:6868" # <--- Uncomment & change

  properties:
    openstack: *openstack
    agent: {mbus: "https://mbus:mbus-password@0.0.0.0:6868"} # <--- Uncomment & change
    blobstore: {provider: local, path: /var/vcap/micro_bosh/data/cache}
    ntp: *ntp
Infra-Red commented 7 years ago

Hi @sxd! Are you able to connect to another VM created with this bosh.pem key? Each time you click "Download Key Pair" in the Horizon dashboard, OpenStack generates a new bosh key for you, so this error could simply be a mismatch between the keys.
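One quick way to check for such a mismatch is to compare the fingerprint of the local private key with the fingerprint OpenStack has on record. A sketch, assuming the key name bosh and the path ./bosh.pem from the manifest above:

```shell
# Fingerprint of the local private key.
# OpenStack typically reports MD5 colon-separated fingerprints,
# so ask ssh-keygen for the same format.
ssh-keygen -E md5 -lf ./bosh.pem

# Fingerprint OpenStack has on record for the "bosh" keypair
openstack keypair show bosh -c fingerprint
```

If the two fingerprints differ, the VM was booted with a different public key than the one you are presenting.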

sxd commented 7 years ago

Hi @Infra-Red, yes, we actually tested the key more than once, in both regions and OpenStack versions, using VMs created with Ubuntu 16.04 and Ubuntu 14.04.

We create all our keys manually, specifically to avoid any issue with OpenStack handing us a new keypair, so the key is always the same.

We have already ruled out any problem with the keypair: a keypair mismatch, a copying mistake, or anything related.

sxd commented 7 years ago

In the end the solution was to open the security group: the instance couldn't reach the metadata server, so it never downloaded the SSH key. I hope that in the future there will be a way to debug this kind of problem with the instances.
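For anyone hitting the same symptom, this failure mode can be checked directly. A sketch, assuming the security group name bosh from the manifest above; the exact rule you need depends on which direction your group was blocking:

```shell
# From inside the instance (e.g. via the Horizon console):
# can it reach the metadata service at all?
curl -s --max-time 5 http://169.254.169.254/openstack/latest/meta_data.json

# From a workstation: allow outbound TCP from the "bosh" security group,
# so the agent can fetch its SSH key from the metadata server
openstack security group rule create --egress --protocol tcp bosh
```

If the curl times out, cloud-init/the agent never received the public key, which produces exactly the "no supported methods remain" SSH failure above.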

I'll close this issue.