widhalmt closed this 7 months ago
Thanks for the contribution!
I'm not sure we should really collect all loaded configurations from the API. The amount of data we collect would grow a lot in that case. Collecting the current stage ID, for example, makes perfect sense, but I would refrain from collecting the complete API for now.
Can you give me some background on what you might need this for, other than "what we have, we have"?
Hm... yes, you're right. In fact, the "currently active" configuration would be totally sufficient. But finding the currently active stage and collecting just that is beyond my Go knowledge for now.
I would actually not collect the complete config of the active stage, but only the ID of the stage.
I have to check whether there is further "metadata" within the API we could collect.
But the config itself below the active stage would definitely be too much in my opinion.
If we need to reproduce errors in testing or for further investigation, a copy of the full configuration would be very useful. I guess using compression for the tarball would massively shrink the space we need for transport.
I have added the active stage from `_api` and `director`. Also added a list of the existing stage directories.
I would still not add the whole directory itself, including its content. That is way too much.
If you want to reproduce a system based on the support-collector data, you can already use `/etc/icinga2/*`. If the loaded stage is really needed, you should ask for it separately.
This PR should add the configuration that's sent to the local node. This should collect configuration on every kind of Icinga 2 node, including configuration that is managed by Director or via the API.
Please take into account that I'm totally new to the code of this tool. Don't expect that I know what I'm doing.