@Erbenos any thoughts on this one?
One thing I noticed in a very brief look was that the crash is in a libpcp_pmda namespace routine and this PMDA is multi-threaded. It would be worthwhile auditing the code to check that different threads cannot read and write global PMDA data structures concurrently, as this will cause problems (things like the namespace tree, the metrics and indom tables, etc. are global state and need to be updated either by one thread only - the usual case - or under some lock protection, which the PMDA must provide).
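To illustrate the class of hazard being described, here is a minimal, generic sketch (not pmdastatsd code; the table, reader and writer names are made up): one thread walking a shared table while another grows or rehashes it can dereference freed memory, unless both sides take the same lock.

/* Generic sketch of the hazard (hypothetical names, not pmdastatsd code):
 * a reader iterating a shared table while a writer grows/rehashes it.
 * Without the mutex the reader can walk memory that realloc() just freed;
 * holding the mutex on both sides serializes the accesses. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static char **table;          /* shared "metric name" table */
static size_t table_len;

static void *writer(void *arg)
{
    char buf[64];
    for (int i = 0; i < 10000; i++) {
        snprintf(buf, sizeof(buf), "statsd.metric.%d", i);
        pthread_mutex_lock(&table_lock);
        /* realloc may move the whole array - the same class of problem as
         * rebuilding the metric tables while another thread reads them */
        table = realloc(table, (table_len + 1) * sizeof(*table));
        table[table_len++] = strdup(buf);
        pthread_mutex_unlock(&table_lock);
    }
    return NULL;
}

static void *reader(void *arg)
{
    for (int pass = 0; pass < 10000; pass++) {
        pthread_mutex_lock(&table_lock);
        for (size_t m = 0; m < table_len; m++)   /* cf. the e_nmetrics loop */
            (void)strlen(table[m]);
        pthread_mutex_unlock(&table_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t w, r;
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return 0;
}

Remove either pthread_mutex_lock/unlock pair and the reader will eventually crash, which is the same failure mode as an unprotected rehash.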
It probably didn't occur to me at the time I was writing it that this applies to the pmdaRehash function.
It's clear that the pmdaRehash call is not guarded against race conditions in https://github.com/performancecopilot/pcp/blob/b71b90bc724337e4f3dae7be15690c98dc1d1886/src/pmdas/statsd/src/pmda-callbacks.c#L463
The guarding mutex for that is the one in
Off the top of my head, I think statsd_possible_reload's pthread_mutex_unlock should move to the end of the function, and the locking/unlocking of that mutex in statsd_map_stats should be removed (since it's called by statsd_possible_reload, the caller would now manage it). The same goes for another callee deeper down, reset_stat (EDIT: never mind, that's a different mutex). That should avoid the race conditions that could arise above, though the locking would now be much less granular.
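As a rough sketch of that restructuring (function and variable names below are placeholders, not the actual pmdastatsd symbols; it assumes a plain non-recursive pthread mutex): the caller takes the mutex once and releases it only at the end of the function, while the callee assumes the lock is already held instead of locking itself.

/* Sketch of caller-managed locking (placeholder names). */
#include <pthread.h>

static pthread_mutex_t stats_lock = PTHREAD_MUTEX_INITIALIZER;
static int reload_needed = 1;

/* callee: previously locked/unlocked stats_lock itself; now it simply
 * assumes the caller already holds it */
static void map_stats_locked(void)
{
    /* ... rebuild the PCP metric table, rehash, etc. ... */
}

/* caller: lock once, unlock only at the very end of the function, so the
 * whole check + remap is atomic w.r.t. other users of stats_lock */
static void possible_reload(void)
{
    pthread_mutex_lock(&stats_lock);
    if (reload_needed) {
        map_stats_locked();
        reload_needed = 0;
    }
    pthread_mutex_unlock(&stats_lock);
}

int main(void)
{
    possible_reload();
    return 0;
}

The trade-off is exactly the one noted above: the critical section becomes coarser, but the reload and rehash become atomic with respect to other threads using the same lock.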
The earliest I can do the change is Saturday. Then I can also look into it some more; this is just a quick analysis.
Perhaps the author of the issue can try changing that code on their system and see if the issue persists in their scenario? I can supply the diff/patch.
@Erbenos thanks for taking a look!
| The earliest I can do the change is Saturday.
That'd be great - no huge rush, though we have a release planned for the 17th of this month. If it can make it in time for that, it'd be fantastic; if not, the next one will follow soon enough (towards the end of October).
Hello, yes, I should be able to build a package based on a given patch and test it in my scenario.
Upon a further look, I don't see any obvious code that would lead to a race condition there, so changing the code as I described above would be completely blind, which is not really something I want to do. I guess I'll do my best to reproduce it locally.
I can provide e.g. the collected stats if that helps with mimicking my reproducer.
I managed to simplify the reproducer, in the sense that it should be enough to:
deploy Foreman (https://theforeman.org/)
maybe also Katello on "top" of it (https://theforeman.org/plugins/katello/)
enable collecting all metrics like https://github.com/pmoravec/sat-perf-correlation/blob/main/sat6-perf-monitor.yaml#L162-L188 BUT allow all:
- name: Enable statsd telemetry in Satellite
  ansible.builtin.command: satellite-installer --foreman-telemetry-prometheus-enabled false --foreman-telemetry-statsd-enabled true

- name: Update 5_telemetry.rb source code to make allowed_labels configurable
  ansible.builtin.blockinfile:
    path: /usr/share/foreman/config/initializers/5_telemetry.rb
    insertbefore: 'telemetry.add_allowed_tags!(allowed_labels)'
    block: 'allowed_labels.merge!(SETTINGS[:telemetry][:allowed_labels]) if SETTINGS[:telemetry] && SETTINGS[:telemetry][:allowed_labels]'
  notify: "Restart foreman"

- name: Add needed allowed_labels
  ansible.builtin.blockinfile:
    path: /etc/foreman/settings.yaml
    insertbefore: " # Rails logs end up in logger named 'telemetry' when enabled"
    block: |
      {% filter indent(width=2, first=true) %}
      :allowed_labels:
        :controller:
handlers:
and query each and every API endpoint - just to generate as many different metrics as possible, like:
hname=$(hostname -f)
for endpoint in /katello/api/activation_keys /katello/api/alternate_content_sources /katello/api/ansible_collections /katello/api/ansible_collections/compare /ansible/api/ansible_inventories/hosts /ansible/api/ansible_inventories/hostgroups /ansible/api/ansible_playbooks/fetch /ansible/api/ansible_roles /ansible/api/ansible_roles/fetch /ansible/api/ansible_variables /api/architectures /api/compliance/arf_reports /api/audits /api/auth_source_externals /api/auth_source_internals /api/auth_source_ldaps /api/auth_sources /api/bookmarks /katello/api/capsules /api/common_parameters /api/compute_profiles /api/compute_resources /api/config_reports /foreman_virt_who_configure/api/v2/configs /katello/api/content_credentials /katello/api/content_exports /katello/api/content_imports /katello/api/content_view_versions /katello/api/content_views /api/dashboard /katello/api/debs /katello/api/debs/compare /api/v2/discovered_hosts /api/v2/discovery_rules /api/bootdisk /api/bootdisk/generic /katello/api/docker_manifest_lists /katello/api/docker_manifest_lists/compare /katello/api/docker_manifests /katello/api/docker_manifests/compare /katello/api/docker_tags /katello/api/docker_tags/compare /api/domains /katello/api/errata /katello/api/errata/compare /api/fact_values /katello/api/files /katello/api/files/compare /api/filters /foreman_tasks/api/tasks/summary /foreman_tasks/api/tasks /katello/api/content_units /katello/api/content_units/compare /katello/api/ostree_refs /katello/api/python_packages /api /api/status /katello/api/host_collections /api/host_statuses /api/hostgroups /api/hosts /api/http_proxies /api/instance_hosts /api/job_invocations /api/job_templates /api/job_templates/revision /katello/api/environments /api/locations /api/mail_notifications /api/media /api/models /katello/api/module_streams /katello/api/module_streams/compare /api/operatingsystems /katello/api/organizations /api/compliance/oval_contents /api/compliance/oval_policies /katello/api/package_groups /katello/api/package_groups/compare /katello/api/packages /katello/api/packages/compare /api/permissions /api/permissions/resource_types /katello/api/ping /katello/api/status /api/ping /api/statuses /api/plugins /api/compliance/policies /api/preupgrade_reports /katello/api/products /api/provisioning_templates /api/provisioning_templates/revision /api/ptables /api/ptables/revision /api/realms /foreman_tasks/api/recurring_logics /api/register /api/remote_execution_features /api/report_templates /api/report_templates/revision /katello/api/repositories /katello/api/repositories/compare /katello/api/repositories/repository_types /katello/api/content_types /katello/api/repository_sets /api/roles /api/compliance/scap_content_profiles /api/compliance/scap_contents /api/settings /api/smart_proxies /katello/api/srpms /katello/api/srpms/compare /api/bootdisk /api/subnets /katello/api/subscriptions /katello/api/sync_plans /api/compliance/tailoring_files /api/template_kinds /api/usergroups /api/users /api/current_user /api/users/extlogin /api/webhook_templates /api/webhooks /api/webhooks/events; do
  curl -u admin:PASSWORD https://${hname}${endpoint}
done
I expect you could use just an httpd/nginx server that reports several statsd metrics per requested URI, and fire many tens of URI requests at it in a loop.
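If it helps, an even simpler traffic generator (a sketch of mine, not part of the reproducers above) can skip the web server entirely and just fire many distinct metric names at the agent over UDP; it assumes pmdastatsd is listening on the default statsd port 8125 on localhost, so adjust if your configuration differs.

/* Sketch: flood pmdastatsd with many unique counter metrics over UDP,
 * assuming the default statsd port 8125 on localhost. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8125);            /* assumed default statsd port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    char payload[128];
    for (int round = 0; round < 100; round++) {
        for (int i = 0; i < 5000; i++) {
            /* unique metric names mimic many different API endpoints */
            int n = snprintf(payload, sizeof(payload),
                             "reproducer.endpoint_%d.hits:1|c", i);
            sendto(fd, payload, n, 0, (struct sockaddr *)&addr, sizeof(addr));
        }
        usleep(100000);                      /* 100 ms between rounds */
    }
    close(fd);
    return 0;
}

Running something like this in a loop alongside pmcd/pmlogger restarts would approximate the "many new metrics appearing at once" pattern without the full Foreman stack.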
I did some analysis of the source code during the week and I think I found a sequence of events that may lead to the segfault in the hashing procedure. The issue isn't so much faulty locking as "uncommitted" metrics (the check is at https://github.com/performancecopilot/pcp/blob/87dd059ba5196f3d573ea20d47e6a77d8701df2c/src/pmdas/statsd/src/pmda-callbacks.c#L358) that the agent holds in its internal representation being mapped into PCP through https://github.com/performancecopilot/pcp/blob/87dd059ba5196f3d573ea20d47e6a77d8701df2c/src/pmdas/statsd/src/pmda-callbacks.c#L98.
Creating an internal metric record may happen in multiple steps, most notably when using labels/tags (metric creation and appending the associated label are two separate operations, each of them locking/unlocking the same mutex that is used to synchronize the mapping of metrics into PCP): https://github.com/performancecopilot/pcp/blob/87dd059ba5196f3d573ea20d47e6a77d8701df2c/src/pmdas/statsd/src/aggregator-metrics.c#L126. When the creation of the metric succeeds but processing of its labels fails, the metric is deleted from the internal hashtable and the associated memory is freed (and because this memory also holds the strings that the PCP metric representation is built with, it's understandable that one would get a segfault when PCP tries to read it). The reason these "uncommitted" metrics would pass the check mentioned earlier is that the flag isn't correctly set when memory for the internal metric representation is allocated at https://github.com/performancecopilot/pcp/blob/87dd059ba5196f3d573ea20d47e6a77d8701df2c/src/pmdas/statsd/src/aggregator-metrics.c#L299: the "pernament" flag (yes, the spelling is incorrect) of the metric struct is left uninitialized.
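To make the uninitialized-flag scenario concrete, here is a simplified, hypothetical illustration of the pattern (not the agent's actual code; the struct and function names are made up): a struct allocated with malloc() leaves the flag holding whatever garbage was in that memory, so a later "is this metric fully committed?" check can randomly pass for a half-built or already-freed record; zero-initializing the allocation, or setting the flag explicitly, avoids that.

/* Simplified illustration of an uninitialized "committed" flag
 * (hypothetical names, not pmdastatsd code). */
#include <stdlib.h>
#include <string.h>

struct metric {
    char *name;
    int   pernament;   /* sic - mirrors the misspelled field in the agent */
};

static struct metric *metric_create_buggy(const char *name)
{
    struct metric *m = malloc(sizeof(*m));
    m->name = strdup(name);
    /* m->pernament is never set here, so it may already look "true" */
    return m;
}

static struct metric *metric_create_fixed(const char *name)
{
    struct metric *m = calloc(1, sizeof(*m));   /* flag starts as 0 */
    m->name = strdup(name);
    m->pernament = 0;   /* set explicitly once the record is committed */
    return m;
}

int main(void)
{
    struct metric *a = metric_create_buggy("statsd.example");
    struct metric *b = metric_create_fixed("statsd.example");
    free(a->name); free(a);
    free(b->name); free(b);
    return 0;
}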
I haven't verified it yet; it should be doable with a debugger by setting the right breakpoints and simulating such states, but the main issue for me right now is that I am having problems getting a PCP environment set up to do so.
@Erbenos let me know if there's anything I can help with the environment - and if you have a patch in mind, I may be able to do a build/test cycle using the internal system @pmoravec is working with.
FYI I have a standalone reproducer where you need just RHEL8 with foreman installed on top of it. Roughly speaking, follow https://theforeman.org/manuals/3.11/index.html#2.1Installation and then enable foreman->statsd->pcp monitoring of everything, like the Ansible playbook in https://github.com/pmoravec/sat-perf-correlation does. In particular, run on a RHEL8 system:
# ensure DNS recognizes FQDN of the host, or run:
echo $(ip a | grep -v "127.0.0.1/8" | grep -m1 "inet " | cut -d/ -f1 | awk '{ print $2 }') $(hostname -f) $(hostname -s) >> /etc/hosts
subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms
dnf -y install https://yum.puppet.com/puppet7-release-el-8.noarch.rpm
dnf -y install https://yum.theforeman.org/releases/3.11/el8/x86_64/foreman-release.rpm
dnf -y module enable postgresql:13
dnf -y module enable foreman:el8
dnf install "puppet-agent-7.28*"
dnf -y install foreman-installer
# RECALL THE PASSWORD from the foreman-installer output
foreman-installer
PASSWORD=whatever-installer-prints-to-you # customize per your installation
curl -k -u admin:${PASSWORD} https://$(hostname -f)/apidoc/v2 | json_reformat | grep -e api_url -e GET | grep -B1 GET | grep api_url | sed "s/:id/1/g" | sed "s/:organization_id/1/g" | sed "s/:location_id/1/g" | sed "s/:user_id/1/g" | cut -d\" -f4 > api_endpoints.txt
dnf -y install python3-policycoreutils pcp pcp-pmda-statsd pcp-system-tools pcp-pmda-openmetrics foreman-pcp
ln -s /etc/pcp/proc/foreman-hotproc.conf /var/lib/pcp/pmdas/proc/hotproc.conf
semanage permissive -a pcp_pmcd_t # maybe ridiculous but I got some AVCs otherwise..
# Install statsd and hotproc, openmetrics might be redundant
for d in /var/lib/pcp/pmdas/proc /var/lib/pcp/pmdas/statsd /var/lib/pcp/pmdas/openmetrics; do cd $d; ./Install; cd; done
# log statsd metrics
sed -i '/^\[access\]/i log advisory on default {\n statsd\n}\n' /var/lib/pcp/config/pmlogger/config.default
systemctl enable pmcd pmlogger
systemctl start pmcd pmlogger
# Configure statsd telemetry to report everything in foreman
sed -i '/^ controller: \[/a \ \ \ \".*\",' /usr/share/foreman/config/initializers/5_telemetry.rb
sed -i '/^ action: \[/a \ \ \ \".*\",' /usr/share/foreman/config/initializers/5_telemetry.rb
sed -i '/^ class: \[/a \ \ \ \".*\",' /usr/share/foreman/config/initializers/5_telemetry.rb
# Enable statsd telemetry in foreman
foreman-installer --foreman-telemetry-prometheus-enabled false --foreman-telemetry-statsd-enabled true
# restart foreman related processes, just for sure
systemctl restart foreman dynflow-sidekiq@*
Now, the generated api_endpoints.txt file has 250-ish URIs that we will query. Some endpoints are wrong, some others will fail; it doesn't matter. Just generate as "wide" a range of requests to foreman as possible:
PASSWORD=the-foreman-password-from-installer
hname=$(hostname -f)
for endpoint in $(cat api_endpoints.txt); do
curl -k -u admin:${PASSWORD} https://${hname}${endpoint}
done
Run that in a loop:
while true; do date; ./reproducer_for_pmdastatsd_segfault.sh > /dev/null 2>&1; sleep 5; done
and wait for the segfault. Though I hit it only sporadically.
The machines available to me all run on ARM; emulating different architectures works but is basically unusable for me because it's insanely slow. Hence the above case is too difficult for me to set up.
On another note, did you look into the agent's logs from before it errors out, and could you perhaps share them? Perhaps that could identify some relevant code paths, especially if the segfault happens in proximity to a similar type of message across multiple reproductions. Try to set logging to be as verbose as possible, per its documentation.
I was testing the recent binary that @natoscott shipped on my testing machine. Running my reproducer for 5 hours, no segfault! (while previously it was a matter of 5-10 minutes).
I am not sure what patch I was previously testing that didn't work (see https://github.com/performancecopilot/pcp/pull/2069#issuecomment-2352643292). Now I do see a fully stable pmdastatsd. Great work with the fix, thanks!
I'll celebrate when the stability is measured in months, not hours 🤣
| I'll celebrate when the stability is measured in months, not hours 🤣
Hold my calendar :D I ran a one-day test with frequent and random restarts of pmcd and pmlogger (since I noticed the segfaults occur most often within 5-10 minutes after a restart; once pmdastatsd survives for, say, an hour, it will most probably survive my reproducer forever). Even in that reproducer, the fixed pmdastatsd has not segfaulted yet.
I would call it successful. But if you wish, tell me how long the test should keep running - leaving it running that long is no problem.
There is no need; if you believe the issue has been addressed, feel free to close it. It feels nice that someone is actually using it after such a long time and that it does serve some purpose.
| There is no need; if you believe the issue has been addressed, feel free to close it. It feels nice that someone is actually using it after such a long time and that it does serve some purpose.
I think so. Thanks for the fix!
When sending a few thousand metrics via statsd to PCP, I get random segfaults of pmdastatsd. I tuned my reproducer to hit this every 30 minutes on average. The reproducer requires foreman+katello+pulp installed, so it is hard to reproduce on your own, I think. But I can provide the full 250MB coredump if required.

Backtrace of the segfaulting thread:
Some variables from the backtrace:
or:
It seems to me the rehash was slowly happening at a time when some other thread updated the hash table (the
for (m = 0; m < pmda->e_nmetrics; m++) {
was processing the 4992nd metric out of 5091)?

Relevant PCP version: