Hello!
For creating infrastructure, I'd recommend against manually removing/creating VMs; instead, nuke and recreate everything.
Ansible is idempotent, so you can reapply playbooks as many times as you like; see e.g. https://github.com/christophetd/Adaz/blob/main/doc/operations.md#adding-users-groups-ous-after-the-lab-has-been-instantiated
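In practice that looks roughly like this, run from the repo root (the playbook and inventory names below are my assumptions; check the operations doc linked above for the exact commands):

```sh
# Re-apply the provisioning by hand; safe to repeat since playbooks are idempotent.
# Playbook/inventory names are assumed, not necessarily the project's actual ones.
cd ansible
ansible-playbook -i inventory workstations.yml
```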
Yes, but then you lose things like GPOs and the Elasticsearch data to compare before/after. I am doing hardening, which can break things on client workstations.
Nuking the workstation is not enough: the new VM doesn't get the Ansible playbook run against it, because the virtual resource already exists in the Terraform state.
Have you tried just renaming the workstation in the domain.yaml file? That should recreate a properly configured workstation.
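Something along these lines, just to illustrate the idea (the actual schema of domain.yaml may differ):

```yaml
# Renaming the entry makes Terraform see a brand-new workstation to create,
# which goes through the full provisioning again. Field names are assumptions.
workstations:
  - name: WKS-2   # renamed from WKS-1
```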
Yes, it creates the VM but never runs the Ansible playbook. Since I destroy null_resource.provision_workstation_once_dc_has_been_created every time, it works every time :)
I guess I am officially a noob on this... :)
When using your config, without changing anything except the Azure region, everything works:
- Kibana ready and receiving logs
- DC VM with AD accounts
- WKS domain-joined and forwarding logs
If I add/remove a WKS VM and do another terraform apply, the VM is created again, but there is no domain join etc. Nothing in the console output about an error; it just doesn't happen. Tried multiple times, same result.
It appears Ansible is called through the resource "provision_workstation_once_dc_has_been_created". After the first run this resource stays provisioned, so Ansible is not called on new VMs later.
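From what I understand of Terraform, that's expected: a null_resource only re-runs its provisioners when the resource itself is recreated, e.g. because one of its triggers changed. A minimal sketch of the pattern, with resource and command names that are my assumptions rather than the project's actual code:

```hcl
# A null_resource re-runs its provisioners only when recreated. Keying
# "triggers" on the workstation VM IDs recreates it (and re-runs Ansible)
# whenever a workstation is added or replaced.
resource "null_resource" "provision_workstation_once_dc_has_been_created" {
  triggers = {
    # assumed resource name; use whatever the project calls its workstation VMs
    workstation_ids = join(",", azurerm_windows_virtual_machine.workstation[*].id)
  }

  provisioner "local-exec" {
    command = "ansible-playbook workstations.yml" # assumed command
  }
}
```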
During hardening and other tests, it's neat to get a new VM and try again.
I found at least a workaround: destroy it, then create the WKS VM again:
terraform destroy --target null_resource.provision_workstation_once_dc_has_been_created
Then Ansible is applied again. But it may break other already-deployed WKS, I guess (did not try).
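As an alternative (also untried), tainting the resource should achieve the same thing without an explicit destroy, since Terraform then recreates it, and re-runs its provisioner, on the next apply:

```sh
# Mark the null_resource for recreation; the next apply re-runs its provisioner.
terraform taint null_resource.provision_workstation_once_dc_has_been_created
terraform apply
```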