hashicorp / packer

Packer is a tool for creating identical machine images for multiple platforms from a single source configuration.
http://www.packer.io

Packer build with lxd builder and ansible provisioner stuck #9034

Closed Stavroswd closed 3 years ago

Stavroswd commented 4 years ago

Overview of the Issue

Hi guys, I'm trying to build an image for lxd with packer, using ansible for provisioning. During the build, for some reason, the process gets stuck at the gathering-facts task. Even after disabling this step, it starts running the playbook tasks but can't continue.

Reproduction Steps

Run the build command with the provided config files.

sudo packer build build-config.json

Versions

packer => 1.5.5, ansible => 2.9.6, lxd => 3.0.3, lxc => client: 3.0.3, server: 3.0.3

Related files

build-config.json:

{ 
  "builders": [
    {
      "type": "lxd",
      "name": "lxd-image",
      "image": "ubuntu:18.04",
      "output_image": "lxd-image",
      "publish_properties": {
        "description": "Building and provision image."
      }
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "build/modules/vagrant-box-commandcenter/provision.yml",
      "user": "lxd-image",  
      "ansible_env_vars": [ "ANSIBLE_HOST_KEY_CHECKING=False", "ANSIBLE_SSH_ARGS='-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s'", "ANSIBLE_NOCOLOR=True" ], 
      "extra_arguments": [ "-vvvv" ]
    }
  ]
}

Ansible playbook:

- hosts: default
  become: yes
  gather_facts: no
  roles:
    - role: install-ruby

Packer verbose build output:

==> lxd-image: Creating container...
==> lxd-image: Provisioning with Ansible...
==> lxd-image: Executing Ansible: ansible-playbook --extra-vars packer_build_name=lxd-image packer_builder_type=lxd -o IdentitiesOnly=yes -i /tmp/packer-provisioner-ansible670992089 /devops/repo/namespaces/3-operations/build/modules/vagrant-box-commandcenter/provision.yml -e ansible_ssh_private_key_file=/tmp/ansible-key831963874 -vvvv
    lxd-image: ansible-playbook 2.9.6
    lxd-image:   config file = /devops/repo/namespaces/3-operations/ansible.cfg
    lxd-image:   configured module search path = [u'/home/vagrant/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
    lxd-image:   ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
    lxd-image:   executable location = /usr/local/bin/ansible-playbook
    lxd-image:   python version = 2.7.17 (default, Nov  7 2019, 10:07:09) [GCC 7.4.0]
    lxd-image: Using /devops/repo/namespaces/3-operations/ansible.cfg as config file
    lxd-image: setting up inventory plugins
    lxd-image: host_list declined parsing /tmp/packer-provisioner-ansible670992089 as it did not pass its verify_file() method
    lxd-image: script declined parsing /tmp/packer-provisioner-ansible670992089 as it did not pass its verify_file() method
    lxd-image: auto declined parsing /tmp/packer-provisioner-ansible670992089 as it did not pass its verify_file() method
    lxd-image: Parsed /tmp/packer-provisioner-ansible670992089 inventory source with ini plugin
    lxd-image: Loading callback plugin yaml of type stdout, v2.0 from /usr/local/lib/python2.7/dist-packages/ansible/plugins/callback/yaml.pyc
    lxd-image:
    lxd-image: PLAYBOOK: provision.yml ********************************************************
    lxd-image: Positional arguments: /devops/repo/namespaces/3-operations/build/modules/vagrant-box-commandcenter/provision.yml
    lxd-image: become_method: sudo
    lxd-image: inventory: (u'/tmp/packer-provisioner-ansible670992089',)
    lxd-image: forks: 10
    lxd-image: tags: (u'all',)
    lxd-image: extra_vars: (u'packer_build_name=lxd-image packer_builder_type=lxd -o IdentitiesOnly=yes', u'ansible_ssh_private_key_file=/tmp/ansible-key831963874')
    lxd-image: verbosity: 4
    lxd-image: connection: smart
    lxd-image: timeout: 10
    lxd-image: 1 plays in /devops/repo/namespaces/3-operations/build/modules/vagrant-box-commandcenter/provision.yml
    lxd-image:
    lxd-image: PLAY [default] *****************************************************************
    lxd-image: META: ran handlers
    lxd-image:
    lxd-image: TASK [0-tools/ruby/modules/install-ruby/roles/ruby-equipped-user : Install build tools] ***
    lxd-image: task path: /devops/repo/namespaces/0-tools/ruby/modules/install-ruby/roles/ruby-equipped-user/tasks/main.yml:1
    lxd-image: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: lxd-image
    lxd-image: <127.0.0.1> SSH: EXEC ssh -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=37521 -o 'IdentityFile="/tmp/ansible-key831963874"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="lxd-image"' -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/e3ac3f2d4b 127.0.0.1 '/bin/sh -c '"'"'echo ~lxd-image && sleep 0'"'"''

Operating system and Environment details

OS: Ubuntu 18.04 generic.

Simply running my ansible playbook against a running container works perfectly, so it must be something with the lxd builder. Has anyone had the same issue with the lxd builder and can help?

SwampDragons commented 4 years ago

Hi, thanks for reaching out. I think this may be solved by setting "use_proxy": false, an option introduced in this PR https://github.com/hashicorp/packer/pull/8625. It's going out with 1.5.6, but you can try it out in the nightly build and see if that fixes things for you.
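
For anyone skimming: a minimal sketch of where that option sits in a template, assuming Packer 1.5.6 or a nightly (only the use_proxy line is new; the rest mirrors the provisioner block from the original report):

"provisioners": [
  {
    "type": "ansible",
    "use_proxy": false,
    "user": "lxd-image",
    "playbook_file": "build/modules/vagrant-box-commandcenter/provision.yml"
  }
]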

SwampDragons commented 4 years ago

Actually, on second thought: LXD doesn't use the SSH communicator, so I don't think it'll work for you.

I've seen issues with ansible hanging on the "gathering facts" stage because it can't figure out the correct python interpreter: manually setting the path to Python for Ansible has helped some people move forward. See https://github.com/hashicorp/packer/issues/7667 for more info. Example:

      "extra_arguments": [
        "--extra-vars",
        "ansible_python_interpreter=/usr/bin/python"
      ],

Stavroswd commented 4 years ago

@SwampDragons thanks for your response! Is there anything planned in the near future to fix this? Running a local environment with lxd/lxc containers is very smooth and building images with packer for it would be nice!

SwampDragons commented 4 years ago

There's no plan, currently. Both the LXD builder and Ansible provisioner are "community supported" plugins, which just means that the HashiCorp developers who do much of the work on Packer don't spend a lot of time on them beyond reviewing PRs.

If the Ansible run works by itself, I'd recommend using the shell-local provisioner to call Ansible directly rather than using the Ansible provisioner. I think you'll be able to access all the relevant instance information to create an inventory file using the "build" template engine: https://www.packer.io/docs/templates/engine.html#build
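
An untested sketch of that approach, assuming the builder exposes the documented engine variables (Host, Port, and User appear in the engine docs and in error output later in this thread; whether the LXD builder actually populates them is not confirmed here, and you would still need to point Ansible at the right credentials):

"provisioners": [
  {
    "type": "shell-local",
    "inline": [
      "echo \"default ansible_host={{ build `Host` }} ansible_port={{ build `Port` }} ansible_user={{ build `User` }}\" > packer-inventory.ini",
      "ansible-playbook -i packer-inventory.ini provision.yml"
    ]
  }
]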

odbaeu commented 4 years ago

Hello! As far as I can judge, I've got the same issue, but with type "amazon-ebs". This doesn't seem to be related to LXD only.

Things I found out during troubleshooting:

Workaround: after adding ansible_python_interpreter as described above, the issue is solved.
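
Concretely, in the apache.json below this means extending extra_arguments roughly like so (a sketch; /usr/bin/python matches what the interpreter discovery finds in the debug output):

"extra_arguments": [
  "--extra-vars", "ansible_python_interpreter=/usr/bin/python",
  "-vvvv"
],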

Versions: ansible-playbook 2.9.7, python 2.7.17 (remote), Packer v1.5.5

This is my apache.json

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "ssh_pty": true,
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "eu-central-1",
    "instance_type": "t3a.small",
    "source_ami_filter": {
      "filters": {
        "virtualization-type": "hvm",
        "name": "amzn2-ami-hvm-2*-x86_64-ebs",
        "root-device-type": "ebs"
      },
      "owners": ["amazon"],
      "most_recent": true
    },
    "ssh_username": "ec2-user",
    "ssh_timeout": "5m",
    "ami_name": "playground/apache {{timestamp}}"
  }],

  "provisioners": [
    {
      "type": "shell",
      "inline": ["echo \"********\" >> ~/.ssh/authorized_keys"]
    },
    {
      "type": "ansible",
      "groups": [ "apache" ],
      "user": "ec2-user",
      "extra_arguments": [
        "-vvvv"
      ],
      "sftp_command": "/usr/libexec/openssh/sftp-server -e",
      "playbook_file": "../../ansible/apache.yml"
    }
  ]
}

apache.yml

---
- name: Hello World!
  hosts: all
  gather_facts: no

  tasks:
  - name: Hello World!
    debug:
      msg: Hello, world!

  - name: Touch file
    file:
      path: /tmp/98t778t67878z9fs
      state: touch

Debug output

$ packer build apache.json 
amazon-ebs: output will be in this color.

==> amazon-ebs: Prevalidating any provided VPC information
==> amazon-ebs: Prevalidating AMI Name: playground/apache 1587770901
    amazon-ebs: Found Image ID: ami-0dbf78a1d4a0612e2
==> amazon-ebs: Creating temporary keypair: packer_5ea37615-a974-7078-cfb0-6e1b76e66019
==> amazon-ebs: Creating temporary security group for this instance: packer_5ea37617-9644-65bb-6f95-0abcdeb02d23
==> amazon-ebs: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
    amazon-ebs: Adding tag: "Name": "Packer Builder"
    amazon-ebs: Instance ID: i-0e12b217c45f0daa0
==> amazon-ebs: Waiting for instance (i-0e12b217c45f0daa0) to become ready...
==> amazon-ebs: Using ssh communicator to connect: 18.194.xxx.xxx
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Provisioning with shell script: /tmp/packer-shell468573963
==> amazon-ebs: Provisioning with Ansible...
==> amazon-ebs: Executing Ansible: ansible-playbook --extra-vars packer_build_name=amazon-ebs packer_builder_type=amazon-ebs -o IdentitiesOnly=yes -i /tmp/packer-provisioner-ansible933678927 /home/nell/projects/playground/ansible/apache.yml -e ansible_ssh_private_key_file=/tmp/ansible-key395755440 -vvvv
    amazon-ebs: ansible-playbook 2.9.7
    amazon-ebs:   config file = /etc/ansible/ansible.cfg
    amazon-ebs:   configured module search path = [u'/home/nell/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
    amazon-ebs:   ansible python module location = /usr/lib/python2.7/dist-packages/ansible
    amazon-ebs:   executable location = /usr/bin/ansible-playbook
    amazon-ebs:   python version = 2.7.17 (default, Apr 15 2020, 17:20:14) [GCC 7.5.0]
    amazon-ebs: Using /etc/ansible/ansible.cfg as config file
    amazon-ebs: setting up inventory plugins
    amazon-ebs: host_list declined parsing /tmp/packer-provisioner-ansible933678927 as it did not pass its verify_file() method
    amazon-ebs: script declined parsing /tmp/packer-provisioner-ansible933678927 as it did not pass its verify_file() method
    amazon-ebs: auto declined parsing /tmp/packer-provisioner-ansible933678927 as it did not pass its verify_file() method
    amazon-ebs: Parsed /tmp/packer-provisioner-ansible933678927 inventory source with ini plugin
    amazon-ebs: Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/default.pyc
    amazon-ebs:
    amazon-ebs: PLAYBOOK: apache.yml ***********************************************************
    amazon-ebs: Positional arguments: /home/nell/projects/playground/ansible/apache.yml
    amazon-ebs: become_method: sudo
    amazon-ebs: inventory: (u'/tmp/packer-provisioner-ansible933678927',)
    amazon-ebs: forks: 5
    amazon-ebs: tags: (u'all',)
    amazon-ebs: extra_vars: (u'packer_build_name=amazon-ebs packer_builder_type=amazon-ebs -o IdentitiesOnly=yes', u'ansible_ssh_private_key_file=/tmp/ansible-key395755440')
    amazon-ebs: verbosity: 4
    amazon-ebs: connection: smart
    amazon-ebs: timeout: 10
    amazon-ebs: 1 plays in /home/nell/projects/playground/ansible/apache.yml
    amazon-ebs:
    amazon-ebs: PLAY [Hello World!] ************************************************************
    amazon-ebs: META: ran handlers
    amazon-ebs:
    amazon-ebs: TASK [Hello World!] ************************************************************
    amazon-ebs: task path: /home/nell/projects/playground/ansible/apache.yml:7
    amazon-ebs: ok: [default] => {
    amazon-ebs:     "msg": "Hello, world!"
    amazon-ebs: }
    amazon-ebs:
    amazon-ebs: TASK [Touch file] **************************************************************
    amazon-ebs: task path: /home/nell/projects/playground/ansible/apache.yml:11
    amazon-ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: ec2-user
    amazon-ebs: <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=37665 -o 'IdentityFile="/tmp/ansible-key395755440"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o ControlPath=/home/nell/.ansible/cp/8cc5cbdf4b 127.0.0.1 '/bin/sh -c '"'"'echo ~ec2-user && sleep 0'"'"''
    amazon-ebs: <127.0.0.1> (0, '/home/ec2-user\r\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n  7 Dec 2017\r\ndebug1: Reading configuration data /home/nell/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/nell/.ansible/cp/8cc5cbdf4b" does not exist\r\ndebug2: resolving "127.0.0.1" port 37665\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 127.0.0.1 [127.0.0.1] port 37665.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: fd 3 clearing O_NONBLOCK\r\ndebug1: Connection established.\r\ndebug3: timeout: 10000 ms remain after connect\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /tmp/ansible-key395755440 type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /tmp/ansible-key395755440-cert type -1\r\ndebug1: Local version string SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3\r\ndebug1: Remote protocol version 2.0, remote software version Go\r\ndebug1: no match: Go\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: Authenticating to 127.0.0.1:37665 as \'ec2-user\'\r\ndebug3: put_host_port: [127.0.0.1]:37665\r\ndebug3: hostkeys_foreach: reading file "/home/nell/.ssh/known_hosts"\r\ndebug3: send packet: type 20\r\ndebug1: SSH2_MSG_KEXINIT sent\r\ndebug3: receive packet: type 20\r\ndebug1: SSH2_MSG_KEXINIT received\r\ndebug2: local client KEXINIT proposal\r\ndebug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c\r\ndebug2: host key algorithms: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa\r\ndebug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com\r\ndebug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com\r\ndebug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1\r\ndebug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1\r\ndebug2: compression ctos: zlib@openssh.com,zlib,none\r\ndebug2: compression stoc: zlib@openssh.com,zlib,none\r\ndebug2: languages ctos: \r\ndebug2: languages stoc: \r\ndebug2: first_kex_follows 0 \r\ndebug2: reserved 0 \r\ndebug2: peer server KEXINIT proposal\r\ndebug2: KEX algorithms: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group14-sha1\r\ndebug2: host key algorithms: ssh-rsa\r\ndebug2: ciphers ctos: aes128-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr\r\ndebug2: ciphers stoc: 
aes128-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr\r\ndebug2: MACs ctos: hmac-sha2-256-etm@openssh.com,hmac-sha2-256,hmac-sha1,hmac-sha1-96\r\ndebug2: MACs stoc: hmac-sha2-256-etm@openssh.com,hmac-sha2-256,hmac-sha1,hmac-sha1-96\r\ndebug2: compression ctos: none\r\ndebug2: compression stoc: none\r\ndebug2: languages ctos: \r\ndebug2: languages stoc: \r\ndebug2: first_kex_follows 0 \r\ndebug2: reserved 0 \r\ndebug1: kex: algorithm: curve25519-sha256@libssh.org\r\ndebug1: kex: host key algorithm: ssh-rsa\r\ndebug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none\r\ndebug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none\r\ndebug3: send packet: type 30\r\ndebug1: expecting SSH2_MSG_KEX_ECDH_REPLY\r\ndebug3: receive packet: type 31\r\ndebug1: Server host key: ssh-rsa SHA256:YCUYe4XIS5N0SPRW3SJygQuAllornJdvoOtwspyxf2Q\r\ndebug3: put_host_port: [127.0.0.1]:37665\r\ndebug3: put_host_port: [127.0.0.1]:37665\r\ndebug3: hostkeys_foreach: reading file "/home/nell/.ssh/known_hosts"\r\ndebug1: checking without port identifier\r\ndebug3: hostkeys_foreach: reading file "/home/nell/.ssh/known_hosts"\r\nWarning: Permanently added \'[127.0.0.1]:37665\' (RSA) to the list of known hosts.\r\ndebug3: send packet: type 21\r\ndebug2: set_newkeys: mode 1\r\ndebug1: rekey after 134217728 blocks\r\ndebug1: SSH2_MSG_NEWKEYS sent\r\ndebug1: expecting SSH2_MSG_NEWKEYS\r\ndebug3: receive packet: type 21\r\ndebug1: SSH2_MSG_NEWKEYS received\r\ndebug2: set_newkeys: mode 0\r\ndebug1: rekey after 134217728 blocks\r\ndebug2: key: /home/nell/.ssh/id_rsa (0x56181641b1b0), agent\r\ndebug2: key: nell@dnevm (0x56181641b250), agent\r\ndebug2: key: /tmp/ansible-key395755440 ((nil)), explicit\r\ndebug3: send packet: type 5\r\ndebug3: receive packet: type 6\r\ndebug2: service_accept: ssh-userauth\r\ndebug1: SSH2_MSG_SERVICE_ACCEPT received\r\ndebug3: send packet: type 50\r\ndebug3: receive packet: type 51\r\ndebug1: Authentications that can continue: publickey\r\ndebug3: start over, passed a different list publickey\r\ndebug3: preferred gssapi-with-mic,gssapi-keyex,hostbased,publickey\r\ndebug3: authmethod_lookup publickey\r\ndebug3: remaining preferred: ,gssapi-keyex,hostbased,publickey\r\ndebug3: authmethod_is_enabled publickey\r\ndebug1: Next authentication method: publickey\r\ndebug1: Offering public key: RSA SHA256:TdXnBnvYPK6JQxEnMLj0XMGvxbH5OOgrm1DRRqfAwDE /home/nell/.ssh/id_rsa\r\ndebug3: send_pubkey_test\r\ndebug3: send packet: type 50\r\ndebug2: we sent a publickey packet, wait for reply\r\ndebug3: receive packet: type 51\r\ndebug1: Authentications that can continue: publickey\r\ndebug1: Offering public key: RSA SHA256:s/JXdG/GwI7slvMt0DAFoOZsTtcS/GHMCWaDavgwcf4 nell@dnevm\r\ndebug3: send_pubkey_test\r\ndebug3: send packet: type 50\r\ndebug2: we sent a publickey packet, wait for reply\r\ndebug3: receive packet: type 51\r\ndebug1: Authentications that can continue: publickey\r\ndebug1: Trying private key: /tmp/ansible-key395755440\r\ndebug3: sign_and_send_pubkey: RSA SHA256:TCegRNZQA9Q32xCun0oC1+/nfPZYCzbMwD4EXyioJXE\r\ndebug3: send packet: type 50\r\ndebug2: we sent a publickey packet, wait for reply\r\ndebug3: receive packet: type 52\r\ndebug1: Authentication succeeded (publickey).\r\nAuthenticated to 127.0.0.1 ([127.0.0.1]:37665).\r\ndebug1: setting up multiplex master socket\r\ndebug3: muxserver_listen: temporary control path /home/nell/.ansible/cp/8cc5cbdf4b.KxmEbig4hC4ijwaA\r\ndebug2: fd 5 setting 
O_NONBLOCK\r\ndebug3: fd 5 is O_NONBLOCK\r\ndebug3: fd 5 is O_NONBLOCK\r\ndebug1: channel 0: new [/home/nell/.ansible/cp/8cc5cbdf4b]\r\ndebug3: muxserver_listen: mux listener channel 0 fd 5\r\ndebug2: fd 3 setting TCP_NODELAY\r\ndebug3: ssh_packet_set_tos: set IP_TOS 0x08\r\ndebug1: control_persist_detach: backgrounding master process\r\ndebug2: control_persist_detach: background process is 4868\r\ndebug2: fd 5 setting O_NONBLOCK\r\ndebug1: forking to background\r\ndebug1: Entering interactive session.\r\ndebug1: pledge: id\r\ndebug2: set_control_persist_exit_time: schedule exit in 60 seconds\r\ndebug1: multiplexing control connection\r\ndebug2: fd 6 setting O_NONBLOCK\r\ndebug3: fd 6 is O_NONBLOCK\r\ndebug1: channel 1: new [mux-control]\r\ndebug3: channel_post_mux_listener: new mux channel 1 fd 6\r\ndebug3: mux_master_read_cb: channel 1: hello sent\r\ndebug2: set_control_persist_exit_time: cancel scheduled exit\r\ndebug3: mux_master_read_cb: channel 1 packet type 0x00000001 len 4\r\ndebug2: process_mux_master_hello: channel 1 slave version 4\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_master_read_cb: channel 1 packet type 0x10000004 len 4\r\ndebug2: process_mux_alive_check: channel 1: alive check\r\ndebug3: mux_client_request_alive: done pid = 4870\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_master_read_cb: channel 1 packet type 0x10000002 len 348\r\ndebug2: process_mux_new_session: channel 1: request tty 0, X 0, agent 0, subsys 0, term "xterm-256color", cmd "/bin/sh -c \'echo ~ec2-user && sleep 0\'", env 10\r\ndebug3: process_mux_new_session: got fds stdin 7, stdout 8, stderr 9\r\ndebug2: fd 8 setting O_NONBLOCK\r\ndebug2: fd 9 setting O_NONBLOCK\r\ndebug1: channel 2: new [client-session]\r\ndebug2: process_mux_new_session: channel_new: 2 linked to control channel 1\r\ndebug2: channel 2: send open\r\ndebug3: send packet: type 90\r\ndebug3: receive packet: type 91\r\ndebug2: channel_input_open_confirmation: channel 2: callback start\r\ndebug2: client_session2_setup: id 2\r\ndebug1: Sending environment.\r\ndebug1: Sending env LC_MEASUREMENT = de_DE.UTF-8\r\ndebug2: channel 2: request env confirm 0\r\ndebug3: send packet: type 98\r\ndebug1: Sending env LC_PAPER = de_DE.UTF-8\r\ndebug2: channel 2: request env confirm 0\r\ndebug3: send packet: type 98\r\ndebug1: Sending env LC_MONETARY = de_DE.UTF-8\r\ndebug2: channel 2: request env confirm 0\r\ndebug3: send packet: type 98\r\ndebug1: Sending env LANG = en_US.UTF-8\r\ndebug2: channel 2: request env confirm 0\r\ndebug3: send packet: type 98\r\ndebug1: Sending env LC_NAME = de_DE.UTF-8\r\ndebug2: channel 2: request env confirm 0\r\ndebug3: send packet: type 98\r\ndebug1: Sending env LC_ADDRESS = de_DE.UTF-8\r\ndebug2: channel 2: request env confirm 0\r\ndebug3: send packet: type 98\r\ndebug1: Sending env LC_NUMERIC = de_DE.UTF-8\r\ndebug2: channel 2: request env confirm 0\r\ndebug3: send packet: type 98\r\ndebug1: Sending env LC_TELEPHONE = de_DE.UTF-8\r\ndebug2: channel 2: request env confirm 0\r\ndebug3: send packet: type 98\r\ndebug1: Sending env LC_IDENTIFICATION = de_DE.UTF-8\r\ndebug2: channel 2: request env confirm 0\r\ndebug3: send packet: type 98\r\ndebug1: Sending env LC_TIME = de_DE.UTF-8\r\ndebug2: channel 2: request env confirm 0\r\ndebug3: send packet: type 98\r\ndebug1: Sending command: /bin/sh -c \'echo 
~ec2-user && sleep 0\'\r\ndebug2: channel 2: request exec confirm 1\r\ndebug3: send packet: type 98\r\ndebug3: mux_session_confirm: sending success reply\r\ndebug2: channel_input_open_confirmation: channel 2: callback done\r\ndebug2: channel 2: open confirm rwindow 2097152 rmax 32768\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: receive packet: type 99\r\ndebug2: channel_input_status_confirm: type 99 id 2\r\ndebug2: exec request accepted on channel 2\r\ndebug3: receive packet: type 98\r\ndebug1: client_input_channel_req: channel 2 rtype exit-status reply 0\r\ndebug3: mux_exit_message: channel 2: exit message, exitval 0\r\ndebug3: receive packet: type 97\r\ndebug2: channel 2: rcvd close\r\ndebug2: channel 2: output open -> drain\r\ndebug2: channel 2: close_read\r\ndebug2: channel 2: input open -> closed\r\ndebug3: channel 2: will not send data after close\r\ndebug2: channel 2: obuf empty\r\ndebug2: channel 2: close_write\r\ndebug2: channel 2: output drain -> closed\r\ndebug2: channel 2: send close\r\ndebug3: send packet: type 97\r\ndebug2: channel 2: is dead\r\ndebug2: channel 2: gc: notify user\r\ndebug3: mux_master_session_cleanup_cb: entering for channel 2\r\ndebug2: channel 1: rcvd close\r\ndebug2: channel 1: output open -> drain\r\ndebug2: channel 1: close_read\r\ndebug2: channel 1: input open -> closed\r\ndebug2: channel 2: gc: user detached\r\ndebug2: channel 2: is dead\r\ndebug2: channel 2: garbage collecting\r\ndebug1: channel 2: free: client-session, nchannels 3\r\ndebug3: channel 2: status: The following connections are open:\r\n  #1 mux-control (t16 nr0 i3/0 o1/16 fd 6/6 cc -1)\r\n  #2 client-session (t4 r0 i3/0 o3/0 fd -1/-1 cc -1)\r\n\r\ndebug2: channel 1: obuf empty\r\ndebug2: channel 1: close_write\r\ndebug2: channel 1: output drain -> closed\r\ndebug2: channel 1: is dead (local)\r\ndebug2: channel 1: gc: notify user\r\ndebug3: mux_master_control_cleanup_cb: entering for channel 1\r\ndebug2: channel 1: gc: user detached\r\ndebug2: channel 1: is dead (local)\r\ndebug2: channel 1: garbage collecting\r\ndebug1: channel 1: free: mux-control, nchannels 2\r\ndebug3: channel 1: status: The following connections are open:\r\n  #1 mux-control (t16 nr0 i3/0 o3/0 fd 6/6 cc -1)\r\n\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: set_control_persist_exit_time: schedule exit in 60 seconds\r\ndebug2: Received exit status from master 0\r\n')
    amazon-ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: ec2-user
    amazon-ebs: <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=37665 -o 'IdentityFile="/tmp/ansible-key395755440"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o ControlPath=/home/nell/.ansible/cp/8cc5cbdf4b 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ec2-user/.ansible/tmp `"&& mkdir /home/ec2-user/.ansible/tmp/ansible-tmp-1587770964.8-4865-98404056827172 && echo ansible-tmp-1587770964.8-4865-98404056827172="` echo /home/ec2-user/.ansible/tmp/ansible-tmp-1587770964.8-4865-98404056827172 `" ) && sleep 0'"'"''
    amazon-ebs: <127.0.0.1> (0, 'ansible-tmp-1587770964.8-4865-98404056827172=/home/ec2-user/.ansible/tmp/ansible-tmp-1587770964.8-4865-98404056827172\r\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n  7 Dec 2017\r\ndebug1: Reading configuration data /home/nell/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4870\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
    amazon-ebs: <default> Attempting python interpreter discovery
    amazon-ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: ec2-user
    amazon-ebs: <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=37665 -o 'IdentityFile="/tmp/ansible-key395755440"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o ControlPath=/home/nell/.ansible/cp/8cc5cbdf4b 127.0.0.1 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
    amazon-ebs: <127.0.0.1> (0, 'PLATFORM\r\nLinux\r\nFOUND\r\n/usr/bin/python\r\n/usr/bin/python2.7\r\n/usr/bin/python\r\nENDFOUND\r\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n  7 Dec 2017\r\ndebug1: Reading configuration data /home/nell/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4870\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
    amazon-ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: ec2-user
    amazon-ebs: <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=37665 -o 'IdentityFile="/tmp/ansible-key395755440"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o ControlPath=/home/nell/.ansible/cp/8cc5cbdf4b 127.0.0.1 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''

SwampDragons commented 4 years ago

@odbaeu I think you're facing #7667, and #8625 should solve things for you. That'll be released in v1.5.6 of Packer, and you will be able to get it working by setting the new template option "use_proxy": false in your config. This feature is already available in our nightly builds if you want to test it out: https://github.com/hashicorp/packer/releases/tag/nightly

odbaeu commented 4 years ago

Thank you @SwampDragons for your reply! I downloaded the nightly build 89f3aa0 but I have the exact same problem. When I remove ansible_python_interpreter=... the build gets stuck at the same task.

SwampDragons commented 4 years ago

Bummer. Can you set the env var PACKER_DEBUG=1 and share the logs in a gist?

odbaeu commented 4 years ago

Here's the gist: https://gist.github.com/odbaeu/6357a31d712a43c670f5878557cec029

SwampDragons commented 4 years ago

Looking at those logs, the proxy adapter is still getting set up. Did you remember to set "use_proxy": false in your config?

SwampDragons commented 4 years ago

Relevant log lines:

==> amazon-ebs: Provisioning with Ansible...
    amazon-ebs: Setting up proxy adapter for Ansible....

odbaeu commented 4 years ago

Hmm... I tested "use_proxy": false before and it did not work, so I didn't pay attention to it in this test anymore. I re-ran the test with "use_proxy": false and it worked without any problems. I don't know why the previous test failed, and I could not reproduce it. Maybe I accidentally used packer v1.5.5 for the previous test. Thank you for your help!

Will "use_proxy": true be supported in future? I think that it is an important feature.

SwampDragons commented 4 years ago

use_proxy: true is the default. In the future I'll change the defaults so that we only use the proxy if told to do so, or if there's no instance IP address and we have to use a proxy to make Ansible work. But yes, the proxy capability isn't going anywhere.

worxli commented 4 years ago

Is there any working configuration for LXD and ansible at the moment? As far as I can see, plain LXD/ansible doesn't work because the proxy doesn't forward lxc exec output to ssh correctly. And disabling the proxy doesn't work because the LXD builder doesn't use the SSH communicator.

SwampDragons commented 4 years ago

There's no plan to work on this, currently -- the LXD builder and ansible provisioner are both community-supported, so the best way to see a fix is probably to make a PR.

worxli commented 4 years ago

Do you have any pointers on how this should be fixed? Which component is the one that has the bug?

SwampDragons commented 4 years ago

I suspect the bug is in the ansible provisioner's ssh_proxy that we use to connect to the LXD instance.

worxli commented 4 years ago

A bit off-topic: is there any reason you know of why the LXD builder doesn't use the lxd go library (github.com/lxc/lxd)?

SwampDragons commented 4 years ago

Nope; @ChrisLundquist wrote the builders and may have some insight, though.

worxli commented 4 years ago

@ChrisLundquist do you remember?

ChrisLundquist commented 4 years ago

A few reasons:

When writing the LXD builder, I noted that the ansible provisioner didn't play nicely. I can't quite recall why, but I vaguely recall the ansible provisioner had a lot of hacks around SSHing. The LXD builder mostly just execs things in the container. I never dug deep enough into fixing it. https://github.com/hashicorp/packer/pull/3625#issuecomment-271796328

Hope This Helps, Chris Lundquist

worxli commented 4 years ago

Cool, thanks for the heads-up. As this issue mentions, there are issues again with the ansible provisioner. I wasn't yet able to pinpoint the exact problem, but I thought I'd try the API interface and see if that fixes things when executing commands in LXD containers.

What I noticed was that ansible got stuck when SSHing via the packer proxy. Extracting the ansible/SSH command arguments and just executing something like ssh -i /tmp/packerxx 127.0.0.1 -p PROXYPORT date (via the proxy) was also hanging until I hit enter. This made me assume the issue is with the proxy, and maybe with the LXD exec parts.

gowthamakanthan commented 4 years ago

Hello @SwampDragons,

I'm using the following versions of packer and ansible to build an image in AWS. I followed your comments and tried to use "use_proxy" to overcome the ansible gather_facts hanging issue; however, it fails with "unknown configuration key". Please check.

Packer version: 1.5.5
Ansible version: 2.9.7

+ docker run --rm -t -v /var/lib/jenkins:/jenkins -v /var/lib/jenkins/jobs/DocStore/jobs/Create_AWS_Test4_AMI/workspace:/git/aws-infra -w /git/aws-infra -v /var/lib/jenkins/jobs/DocStore/jobs/Create_AWS_Test4_AMI/workspace/ansible/cloudwatch-agent:/usr/share/ansible -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION=us-east-1 -e NO_PROXY -e USER=some-non-root-user -e ANSIBLE_REMOTE_TMP=/tmp/ dockerhub.cisco.com/xse-docker/packer-ansible:1.5.5-2.9.7 build -var-file packer/constants.json -var-file packer/constants-dev.json -var-file docstore/packer/variables-ci.json -var environment=test4 -only=primary docstore/packer/build-docstore.json
primary: output will be in this color.

1 error occurred:
    * unknown configuration key: "use_proxy"; raws is []interface {}{map[string]interface {}{"ansible_env_vars":[]interface {}{"BECOME_ALLOW_SAME_USER=True"}, "extra_arguments":[]interface {}{"--extra-vars", "remote_tmp=/tmp/", "-vvvv"}, "playbook_file":"docstore/packer/docstore_customise.yml", "use_proxy":false, "user":"centos"}, map[string]interface {}{"packer_build_name":"primary", "packer_builder_type":"amazon-ebs", "packer_debug":false, "packer_force":false, "packer_on_error":"", "packer_template_path":"/git/aws-infra/doc/packer/build-docstore.json", "packer_user_variables":map[interface {}]interface {}{"":"", "aws_account_id":"", "aws_encryption_key_primary_region":"", "aws_encryption_key_secondary_region":"", "aws_kms_key":"", "aws_region_primary":"us-east-1", "aws_region_secondary":"us-east-1", "aws_vpc":"", "base_image_filter_centos7":"base_*", "copy_to_regions":"", "date":"20200726", "datetime":"20200726-0706", "docstore_build_dir":".", "encrypt":"true", "environment":"test4", "ssh_bastion_host":"", "ssh_bastion_private_key_file":"/jenkins/.ssh/id_rsa_idev_hop_nopass", "ssh_bastion_username":"jenkins", "ssh_username_centos7":"centos"}}} 

 and ctx data is map[interface {}]interface {}{"ConnType":"Build_Type. To set this dynamically in the Packer template, you must use the `build` function", "Host":"Build_Host. To set this dynamically in the Packer template, you must use the `build` function", "ID":"Build_ID. To set this dynamically in the Packer template, you must use the `build` function", "PackerHTTPAddr":"Build_PackerHTTPAddr. To set this dynamically in the Packer template, you must use the `build` function", "PackerRunUUID":"Build_PackerRunUUID. To set this dynamically in the Packer template, you must use the `build` function", "Password":"Build_Password. To set this dynamically in the Packer template, you must use the `build` function", "Port":"Build_Port. To set this dynamically in the Packer template, you must use the `build` function", "SSHPrivateKey":"Build_SSHPrivateKey. To set this dynamically in the Packer template, you must use the `build` function", "SSHPublicKey":"Build_SSHPublicKey. To set this dynamically in the Packer template, you must use the `build` function", "SourceAMIName":"Build_SourceAMIName. To set this dynamically in the Packer template, you must use the `build` function", "User":"Build_User. To set this dynamically in the Packer template, you must use the `build` function", "WinRMPassword":"{{.WinRMPassword}}"}

Kindly let me know if this needs to be raised as a new issue.

SwampDragons commented 4 years ago

@gowthamakanthan the use_proxy option was not added until Packer version 1.5.6, so you just need to upgrade :)
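
(A quick way to confirm which Packer actually runs inside that Jenkins image, assuming the image's entrypoint is the packer binary, as the build command above implies:

docker run --rm dockerhub.cisco.com/xse-docker/packer-ansible:1.5.5-2.9.7 version

The image tag itself already suggests it is pinned to Packer 1.5.5.)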

gowthamakanthan commented 4 years ago

@SwampDragons That worked, thank you.

fred-gb commented 4 years ago

Hello,

Ubuntu 20.04, LXD Snap 4.4, Packer 1.6.1

I don't know if it's related, but after adding "use_proxy": false

I have this error:

PACKER_DEBUG=1 packer build -debug ansible.json
Debug mode enabled. Builds will not be parallelized.
ubuntu-2004-qw: output will be in this color.

==> ubuntu-2004-qw: Creating container...
==> ubuntu-2004-qw: Pausing after run of step 'stepLxdLaunch'. Press enter to continue. 
==> ubuntu-2004-qw: Pausing before the next provisioner . Press enter to continue. 
==> ubuntu-2004-qw: Provisioning with Ansible...
==> ubuntu-2004-qw: Pausing before cleanup of step 'stepLxdLaunch'. Press enter to continue. 
==> ubuntu-2004-qw: Unregistering and deleting deleting container...
Build 'ubuntu-2004-qw' errored: unexpected EOF

==> Some builds didn't complete successfully and had errors:
--> ubuntu-2004-qw: unexpected EOF

==> Builds finished but no artifacts were created.

ansible.json:

{
    "variables": {
    },
    "builders": [
        {
          "type": "lxd",
          "name": "ubuntu-2004-qw",
          "image": "ubuntu-minimal:focal",
          "output_image": "ubuntu-2004-qw",
          "publish_properties": {
            "description": "Image LXD ubuntu bionic ansible ready"
          }
      }
    ],
    "provisioners": [
      {
        "type": "ansible",
        "user": "root",
        "use_proxy": false,
        "playbook_file": "config.yml",
        "extra_arguments": ["-vvvv"]
      }
    ]
}

and config.yml:

- name: 'Provision Image'
  hosts: default
  become: true

  tasks:
    - name: install Apache
      package:
        name: 'apache2'
        state: present

Where is the problem? No luck with packer :'(

SwampDragons commented 4 years ago

The use_proxy option isn't going to work with LXD, since LXD uses its own non-SSH, non-WinRM communicator. We'll need to figure out some other solution for the adapter; in the meantime you can try downgrading ansible to < 2.8, or try setting the python path as described in this example: https://github.com/hashicorp/packer/issues/7667#issuecomment-498918788
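
For the downgrade route, the goal is an Ansible release from before automatic interpreter discovery landed in 2.8. With a pip-managed install that would be something like:

pip install 'ansible<2.8'

(Adjust accordingly if your Ansible came from a distro package rather than pip.)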

fred-gb commented 4 years ago

Hello, thanks! I found this workaround for the ansible provisioner and LXD.

ansible.json

    "provisioners": [
      {
        "type": "ansible",
        "user": "root",
        "inventory_file": "./inventory.ini",
        "playbook_file": "./config.yml",
        "extra_arguments": [
          "-vvvvv"
        ]
      }
    ]

config.yml

- name: Complete common container build
  hosts: packer-ubuntu-2004-qw
  tasks:
    - name: install nginx
      apt:
        name: nginx
        state: present
        update_cache: yes

inventory.ini

[lxd]
packer-ubuntu-2004-qw ansible_host=target-lxd-server:packer-ubuntu-2004-qw

[lxd:vars]
ansible_connection=lxd

I used the ansible lxd connection plugin instead of standard SSH.
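
As a quick sanity check that the lxd connection works outside of Packer, something like this should run a command in the container directly (using the inventory above; the raw module avoids needing Python inside the container):

ansible -i inventory.ini lxd -m raw -a "uname -a"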

SwampDragons commented 4 years ago

Awesome! I'll see if I can make the Ansible provisioner smart enough to set the ansible_connection to lxd for you, like we do for WinRM.

fred-gb commented 4 years ago

Thanks

aacebedo commented 4 years ago

Same problem here with the docker builder and ansible provisioner. Setting ansible_connection to docker does not solve the problem: the provisioner cannot run, as it uses 127.0.0.1 as the container name for the "docker exec" command, and that obviously won't work. Setting the python interpreter has no effect either.

I am unable to trace and find the reason why the ssh connection just stops and stays stuck when using the proxy.

ghost commented 3 years ago

This issue has been automatically migrated to hashicorp/packer-plugin-ansible#25 because it looks like an issue with that plugin. If you believe this is not an issue with the plugin, please reply to hashicorp/packer-plugin-ansible#25.

ghost commented 3 years ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.