auscop opened this issue 4 years ago
Hi,
I have the same problem. We have an inventory with 500 hosts and Rundeck refreshes the node list every time. This puts a heavy load on the pod (in our case we are running Rundeck in Kubernetes) and causes other problems.
Also, when a node can't execute the Ansible script that checks its status (scheduled on every refresh):
/usr/bin/python /home/<user>/.ansible-<user>/tmp/ansible-tmp-1589815139.1577857-18710571914723/AnsiballZ_setup.py
the process gets stuck, and when the next run is due, the Rundeck jobs defined in the project are not executed. Rundeck waits for the first process to finish, so it stays stuck forever...
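The only partial mitigation I can think of is to bound fact gathering so a single dead host cannot hang the run forever. A sketch of what I mean in ansible.cfg (the values are just examples, and I have not verified that the plugin's gather run respects all of them):

```ini
# ansible.cfg -- illustrative timeouts so a hung AnsiballZ_setup.py run
# eventually fails instead of blocking the next refresh (example values)
[defaults]
# SSH connection timeout in seconds
timeout = 30
# Give up on fact gathering (the setup module) after this many seconds
gather_timeout = 60

[ssh_connection]
# Fewer connection retries so unreachable hosts fail fast
retries = 1
```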
Have you found the cause? Are you seeing this problem too?
Daniel.
@dfradejas I finally made it a little better by setting the "Cache Delay" on the "Edit Nodes" configuration tab to 28800 seconds (8 hours). It can still be an issue, but some improvement is better than none. It is still not a very efficient way of building a node list, and I would like to understand why it was done this way.
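For anyone who prefers to set this outside the GUI, the same cache behaviour can be driven from the project configuration. A minimal sketch, assuming the standard Rundeck project.properties keys for the node resource cache (the values are just the ones I used, not recommendations):

```properties
# project.properties -- node resource cache (illustrative values)
# Keep serving cached node data instead of re-running the Ansible source
# on every page that touches nodes.
project.nodeCache.enabled=true
# Seconds before the cached node set is considered stale (28800 = 8 hours).
project.nodeCache.delay=28800
# Don't block the first page load while the source runs; load it asynchronously.
project.nodeCache.firstLoadSynch=false
```

Even with this, the source still re-runs when the cache expires, so it only spaces the rebuilds out rather than removing them.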
Was a way to improve this ever found? I've set it to 86400 and it's still rebuilding the entire list whenever I click on anything even remotely related to nodes.
I have dynamic inventory with almost 1000 nodes and "Ansible Resource Model Source" is practically unusable. Same problem as #238
Same here...
me too
I may have a hard time documenting my issue, as my Rundeck server is not connected to the Internet and I am unable to upload any output. Apologies.
Rundeck version: 3.1.2.20190927
Ansible version: 2.6.1
Python version: 2.7.5
Our Ansible inventory is fairly complex: approximately 1200 hosts, many host groups (built dynamically), INI files for var definitions, etc. I have a number of projects; each project uses the same Ansible inventory file, and filters are used within the Ansible Resource Model source to limit the hosts within each project.
From what I have read, Rundeck keeps its own inventory that is built from the supplied Ansible inventory. When filters are used within the Ansible Resource Model source, Rundeck seems to build its inventory very inefficiently.
As an example, say I have 1000 hosts with a naming convention of A001 to A100, B001 to B100, C001 to C100, all the way through to J100 (10 x 100 = 1000). If I select only the "B" hosts in the Ansible Resource Model filter (B*) and look at the output of service.log, it is rapidly spewing data, and the list appears to be built by searching my entire inventory of 1000 hosts one host at a time: it finds the first B host (e.g. B001), then searches all 1000 hosts again and appends the next one (B002), then searches the 1000 hosts again and appends B003, and so on. So to build my complete list of B hosts, the inventory is searched 100 times.
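To put rough numbers on what the log suggests, here is a purely hypothetical Python sketch of the two approaches; the function names and the rescan-per-match behaviour are my reading of service.log, not the plugin's actual code:

```python
# Hypothetical illustration of the rebuild cost described above.
# "rescan_per_match" mimics what the log output suggests: for every matched
# host, the full 1000-host inventory is walked again, so selecting the 100
# "B" hosts costs roughly 100 x 1000 comparisons.
import fnmatch

inventory = ["{0}{1:03d}".format(letter, i)
             for letter in "ABCDEFGHIJ" for i in range(1, 101)]

def rescan_per_match(hosts, pattern):
    """Append one match at a time, rescanning the whole inventory each round."""
    found = []
    while True:
        next_host = None
        for host in hosts:                      # full scan every round
            if fnmatch.fnmatch(host, pattern) and host not in found:
                next_host = host
                break
        if next_host is None:
            return found
        found.append(next_host)                 # one new host per full scan

def single_pass(hosts, pattern):
    """Filter the inventory once; one scan regardless of how many hosts match."""
    return [host for host in hosts if fnmatch.fnmatch(host, pattern)]

assert rescan_per_match(inventory, "B*") == single_pass(inventory, "B*")
```

Both return the same 100 "B" hosts; the difference is that the first walks the 1000-host inventory once per match (on the order of 100 x 1000 comparisons), while the second walks it once.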
This takes a huge amount of CPU, as dozens and dozens of "ansible-playbook gather-hosts.yml" processes are running while it happens. That would not be a huge deal if it only ran after editing the Ansible Resource Model source, but every time you click into the project's nodes page it starts off again. If you are working within a number of projects, Rundeck becomes unresponsive in a very short time.
An example process is: /opt/ansible/bin/python /usr/local/bin/ansible-playbook gather-hosts.yml --inventory-file=/var/lib/rundeck/ansible/environment/prod/inventory/1_hosts/all-hosts.ini -l B* --extra-vars=@/tmp/rundeck..... I have seen over 60 of these running at one time while the inventory is built.
Is there a way to stop Rundeck constantly rebuilding its inventory every time you enter a project's nodes page or click on the Edit Nodes page? Ideally it would only rebuild if a node's config file has been updated. Is there a way to have Rundeck filter more efficiently?
Help would be appreciated... Stay safe.
Austin