I have something like this:
provisioner:
  name: ansible_push
  playbook: tests/integration/default/default.yml # <----- I'd like to move this
  ansible_config: tests/ansible.cfg
  chef_bootstrap_url: nil
verifier:
  name: inspec
platforms:
  - name: centos-7.3
suites:
  - name: default
    verifier:
      inspec_tests:
        - tests/integration/default
    provisioner:
      playbook: tests/integration/default/default.yml
I'd like to add multiple suites, and point each suite to its own playbook (just like you can set your run_list for Chef under suites). Is this supported?

Yes, that is built-in functionality in Kitchen; the suite-level provisioner block in your example above does exactly that. Hope that helps.
awesome, thanks!
@ahelal: I'm also having an issue getting the right number of machines to show up in kitchen list. I have this .kitchen.yml:
---
driver:
  name: vagrant
provisioner:
  name: ansible_push
  ansible_config: tests/ansible.cfg
  chef_bootstrap_url: nil
verifier:
  name: inspec
platforms:
  - name: swarm-manager
    driver:
      customize:
        memory: 1024
      box: bento/centos-7.4
      network:
        - ['private_network', {ip: '192.168.33.33'}]
  - name: swarm-worker
    driver:
      customize:
        memory: 1024
      box: bento/centos-7.4
      network:
        - ['private_network', {ip: '192.168.33.53'}]
suites:
  - name: manager
    verifier:
      inspec_tests:
        - tests/integration/manager
    provisioner:
      playbook: tests/integration/manager/manager.yml
      groups:
        docker-swarm-manager:
          - manager
  - name: worker
    verifier:
      inspec_tests:
        - tests/integration/worker
    provisioner:
      playbook: tests/integration/worker/worker.yml
      groups:
        docker-swarm-join:
          - worker
Now if I run kitchen list:
Instance               Driver   Provisioner  Verifier  Transport  Last Action    Last Error
manager-swarm-manager  Vagrant  AnsiblePush  Inspec    Ssh        <Not Created>  <None>
manager-swarm-worker   Vagrant  AnsiblePush  Inspec    Ssh        <Not Created>  <None>
worker-swarm-manager   Vagrant  AnsiblePush  Inspec    Ssh        <Not Created>  <None>
worker-swarm-worker    Vagrant  AnsiblePush  Inspec    Ssh        <Not Created>  <None>
I'm expecting only two boxes to show up -- any ideas how I'm supposed to format this differently? I thought maybe I could move the stuff from platforms into suites, but that resulted in no boxes in kitchen list.
This is Kitchen functionality, not ansiblepush: every suite is combined with every platform into an instance named <suite>-<platform>, which is why you get 4. If I understand your requirement correctly, I guess this is what you want:
---
driver:
  name: vagrant
provisioner:
  name: ansible_push
  ansible_config: tests/ansible.cfg
  chef_bootstrap_url: nil
verifier:
  name: inspec
platforms:
  - name: swarm-manager
    driver:
      customize:
        memory: 1024
      box: bento/centos-7.4
    network:
      - ['private_network', {ip: '192.168.33.33'}]
    verifier:
      inspec_tests:
        - tests/integration/manager
    provisioner:
      playbook: tests/integration/manager/manager.yml
      groups:
        docker-swarm-manager:
          - manager
  - name: swarm-worker
    driver:
      customize:
        memory: 1024
      box: bento/centos-7.4
    network:
      - ['private_network', {ip: '192.168.33.53'}]
    verifier:
      inspec_tests:
        - tests/integration/worker
    provisioner:
      playbook: tests/integration/worker/worker.yml
      groups:
        docker-swarm-join:
          - worker
suites:
  - name: default
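For reference, stock Test Kitchen can also trim the suite-by-platform matrix while keeping two suites: a suite's excludes list skips the named platforms. A minimal sketch reusing the names above:

  suites:
    - name: manager
      excludes:
        - swarm-worker    # manager suite runs only on the swarm-manager platform
    - name: worker
      excludes:
        - swarm-manager   # worker suite runs only on the swarm-worker platform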
Thanks -- strangely though, the network settings don't seem to be applying.
My fault, the network key needs to be indented one level deeper, under driver, e.g.:
  - name: swarm-worker
    driver:
      customize:
        memory: 1024
      box: bento/centos-7.4
      network:
        - ['private_network', {ip: '192.168.33.53'}]
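A quick way to see what actually got merged is kitchen diagnose, which prints the fully resolved configuration for each instance and makes indentation slips like this easy to spot.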
oic, thanks! One more:
---
driver:
  name: vagrant
provisioner:
  name: ansible_push
  ansible_config: tests/ansible.cfg
  chef_bootstrap_url: nil
  groups:
    consul_server:
      - swarm-manager
      - swarm-worker-01
      - swarm-worker-02
    docker-swarm-join:
      - swarm-worker-01
      - swarm-worker-02
verifier:
  name: inspec
platforms:
  - name: swarm-manager
    driver:
      customize:
        memory: 1024
      box: bento/centos-7.4
      network:
        - ['private_network', {ip: '192.168.33.33'}]
    verifier:
      inspec_tests:
        - tests/integration/manager
    provisioner:
      playbook: tests/integration/manager/manager.yml
      mygroup:
        - docker-swarm-manager
  - name: swarm-worker-01
    driver:
      customize:
        memory: 1024
      box: bento/centos-7.4
      network:
        - ['private_network', {ip: '192.168.33.53'}]
    verifier:
      inspec_tests:
        - tests/integration/worker
    provisioner:
      playbook: tests/integration/worker/worker.yml
  - name: swarm-worker-02
    driver:
      customize:
        memory: 1024
      box: bento/centos-7.4
      network:
        - ['private_network', {ip: '192.168.33.73'}]
    verifier:
      inspec_tests:
        - tests/integration/worker
    provisioner:
      playbook: tests/integration/worker/worker.yml
suites:
  - name: default
I've refactored my groups. My consul playbook builds a comma-delimited list of IPs from the consul_server group -- so I've got that group defined 👆. However, it's acting like the group only has one node in it (just the one that Ansible is currently running against during a converge). How can I get it to find all the nodes in the group?
I see it's resolving groups though 🤔:

"groups": {
    "all": [
        "swarm-worker-01",
        "swarm-worker-02",
        "swarm-manager"
    ],
    "consul_server": [
        "swarm-manager",
        "swarm-worker-01",
        "swarm-worker-02"
    ],
    "docker-swarm-join": [
        "swarm-worker-01",
        "swarm-worker-02"
    ],
    "docker-swarm-manager": [
        "swarm-manager"
    ],
    "ungrouped": []
}
Not sure what the problem is, but my best guess is to run:

  kitchen converge
  kitchen verify

That will spin up all instances first, then run the tests.
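One more thing worth checking: if the playbook derives those IPs from per-host facts, hostvars only carries facts for hosts that have already run a play, which would look exactly like a one-node group. A hypothetical expression that relies only on the generated inventory (assuming ansible_push writes an ansible_host for each instance) sidesteps that:

  # hypothetical variable: join the address of every host in the consul_server group
  consul_join_addrs: "{{ groups['consul_server'] | map('extract', hostvars, 'ansible_host') | list | join(',') }}"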
@ahelal: is it possible to specify a shared playbook at the top level and append additional playbooks to run within the suites? example:
---
driver:
  name: vagrant
provisioner:
  name: ansible_push
  playbook: tests/integration/default/default.yml
  ansible_config: tests/ansible.cfg
  chef_bootstrap_url: nil
verifier:
  name: inspec
platforms:
  - name: centos-7.4
  - name: oracle-6.8
suites:
  - name: dba
    provisioner:
      playbooks:
        - tests/integration/dba/dba.yml
    verifier:
      sudo: true
      sudo_options: "-u dba"
      inspec_tests:
        - tests/integration/dba
So I want both the default.yml and dba.yml playbooks to run when I converge.
Hey, nope, the way it works is override: the suite playbook replaces the top-level one. I'd suggest taking a role approach instead, so in your play you can compose the desired end state.
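To make that concrete, here is a sketch of the role approach, assuming the shared tasks are factored into roles (role names hypothetical):

  # tests/integration/dba/dba.yml: one play composing the shared baseline
  # plus the suite-specific pieces, instead of chaining two playbooks
  - hosts: all
    roles:
      - common   # hypothetical role carrying what default.yml does today
      - dba      # suite-specific role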