nicolargo / glances

Glances an Eye on your system. A top/htop alternative for GNU/Linux, BSD, Mac OS and Windows operating systems.
http://nicolargo.github.io/glances/

Unable to monitor multiple podman sockets #2871

Open Xelaph opened 3 months ago

Xelaph commented 3 months ago

Check the bug: I could not find a similar bug.

Describe the bug: On my server, I have multiple users that run podman containers. I would like to monitor all containers through one Glances instance. I tried adding an extra socket through the configuration file, but that did not seem to work.

To Reproduce: Steps to reproduce the behavior:

  1. Start Glances with podman sockets bound at /run/user/1000/podman/podman.sock and /run/user/1001/podman/podman.sock.
  2. Add podman_sock=unix:///run/user/1001/podman/podman.sock to the configuration file
  3. Only the one bound at /run/user/1000/podman/podman.sock gets used
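For reference, the change in step 2 amounts to a glances.conf fragment like the following (the `[containers]` section name is assumed from the Glances documentation; `podman_sock` is the key used above):

```ini
[containers]
# Note: only one podman_sock value is honored today, so this entry
# replaces the default /run/user/1000 socket instead of adding to it.
podman_sock=unix:///run/user/1001/podman/podman.sock
```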

Expected behavior: I want to be able to add multiple podman sockets so that I can monitor all containers on my server.
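Since only one socket is honored today, one user-space workaround is to poll each socket separately and merge the results outside of Glances. A minimal sketch, assuming the Docker SDK for Python is installed and that podman's Docker-compatible API service is running on each socket (the socket paths are the ones from this report; everything else is illustrative):

```python
# Sketch: poll several podman sockets and merge their container lists.
# Socket paths are taken from the report above; the helper names are
# hypothetical, not part of Glances.

SOCKETS = [
    "unix:///run/user/1000/podman/podman.sock",
    "unix:///run/user/1001/podman/podman.sock",
]


def merge_by_id(container_lists):
    """Merge lists of container dicts, de-duplicating on their 'id' key."""
    seen = {}
    for containers in container_lists:
        for c in containers:
            seen.setdefault(c["id"], c)
    return list(seen.values())


def poll_all(socket_urls):
    """Query each socket via the Docker-compatible API and merge results."""
    import docker  # assumption: the Docker SDK (pip install docker) is available

    lists = []
    for url in socket_urls:
        client = docker.DockerClient(base_url=url)
        lists.append(
            [{"id": c.id, "name": c.name, "status": c.status}
             for c in client.containers.list()]
        )
    return merge_by_id(lists)


# Usage (requires `podman system service` to be running for each user):
#   for c in poll_all(SOCKETS):
#       print(c["name"], c["status"])
```

This only demonstrates the merging idea; the containers would still not appear in the Glances UI.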

Environment (please complete the following information)

===============================================================================
Glances 4.1.2 (/app/glances/__init__.py)
Python 3.11.9 (/venv/bin/python3)
PsUtil 6.0.0 (/venv/lib/python3.11/site-packages/psutil/__init__.py)

alert [OK] 0.00002s []
amps [OK] 0.00013s key=name [{'count': 0, 'countmax': None, 'countmin': 1.0, 'key': 'name', 'name': 'Dropbox', 'refresh': 3.0, 'regex': True, 'result': None, 'timer': 2.4659695625305176}, ...]
cloud [NA]
connections [NA]
containers [OK] 0.19756s key=name [{'command': '', 'cpu_percent': 17.09026645977991, 'created': '2024-07-04T12:37:08.850570835+02:00', 'engine': 'docker', 'id': '7120454bf596141087533e676f2835d56d50a38e4bd9a96c9f0a2e3120838036', 'image': '---', 'io_rx': 648540160, 'io_wx': 27496407040, 'key': 'name', 'memory_percent': None, 'memory_usage': 453464064, 'name': '----', 'network_rx': 0, 'network_tx': 0, 'status': 'running', 'uptime': '6 days'}, ...]
core [OK] 0.00081s {'log': 16, 'phys': 8}
cpu [OK] 0.00032s {'cpucore': 16, 'ctx_switches': 0, 'ctx_switches_gauge': 5555183355, 'guest': 0.2, 'idle': 94.5, 'interrupts': 0, 'interrupts_gauge': 5792440849, 'iowait': 0.3, 'irq': 0.2, 'nice': 0.0, 'soft_interrupts': 0, 'soft_interrupts_gauge': 2788712335, 'steal': 0.0, 'syscalls': 0, 'syscalls_gauge': 0, 'system': 2.6, 'time_since_update': 2.2919604778289795, 'total': 5.2, 'user': 2.3}
diskio [OK] 0.00030s key=disk_name [{'disk_name': 'sda', 'key': 'disk_name', 'read_bytes': 0, 'read_bytes_gauge': 50618880, 'read_count': 0, 'read_count_gauge': 2748, 'time_since_update': 2.2919132709503174, 'write_bytes': 0, 'write_bytes_gauge': 29228150784, 'write_count': 0, 'write_count_gauge': 487402}, ...]
folders [OK] 0.00001s []
fs [OK] 0.00035s key=mnt_point [{'device_name': '/dev/nvme0n1p5', 'free': 98028691456, 'fs_type': 'btrfs', 'key': 'mnt_point', 'mnt_point': '/etc/os-release', 'percent': 7.3, 'size': 107374182400, 'used': 7711367168}, ...]
gpu [OK] 0.00470s key=gpu_id [{'fan_speed': None, 'gpu_id': 'amd0', 'key': 'gpu_id', 'mem': 75, 'name': 'AMD GPU', 'proc': 1, 'temperature': 41}, ...]
help [OK] 0.00000s None
ip [OK] 0.00092s {'address': '', 'gateway': '', 'mask': '', 'mask_cidr': '', 'public_address': '', 'public_info_human': ''}
irq [NA]
load [OK] 0.00002s {'cpucore': 16, 'min1': 0.4072265625, 'min15': 0.55810546875, 'min5': 0.603515625}
mem [OK] 0.00010s {'active': 10278461440, 'available': 22804983808, 'buffers': 1122304, 'cached': 23118716928, 'free': 22804983808, 'inactive': 19953504256, 'percent': 31.9, 'shared': 115191808, 'total': 33499123712, 'used': 10694139904}
memswap [OK] 0.00015s {'free': 3488608256, 'percent': 59.4, 'sin': 205873152, 'sout': 5290233856, 'time_since_update': 2.292628526687622, 'total': 8589930496, 'used': 5101322240}
network [OK] 0.00153s key=interface_name [{'alias': None, 'bytes_all': 0, 'bytes_all_gauge': 28604466, 'bytes_all_rate_per_sec': 0.0, 'bytes_recv': 0, 'bytes_recv_gauge': 14302233, 'bytes_recv_rate_per_sec': 0.0, 'bytes_sent': 0, 'bytes_sent_gauge': 14302233, 'bytes_sent_rate_per_sec': 0.0, 'interface_name': 'lo', 'key': 'interface_name', 'speed': 0, 'time_since_update': 2.2081403732299805}, ...]
now [OK] 0.00002s {'custom': '2024-07-11 08:22:33 UTC', 'iso': '2024-07-11T08:22:33+00:00'}
percpu [OK] 0.00035s key=cpu_number [{'cpu_number': 0, 'dpc': None, 'guest': 0.9, 'guest_nice': 0.0, 'idle': 89.0, 'interrupt': None, 'iowait': 0.0, 'irq': 0.0, 'key': 'cpu_number', 'nice': 0.0, 'softirq': 0.4, 'steal': 0.0, 'system': 3.9, 'total': 11.0, 'user': 6.6}, ...]
ports [OK] 0.00000s []
processcount [OK] 0.00117s {'pid_max': 0, 'running': 1, 'sleeping': 3, 'thread': 63, 'total': 4}
processlist [OK] 0.00001s []
psutilversion [OK] 0.00001s '6.0.0'
quicklook [OK] 0.00027s {'cpu': 5.2, 'cpu_hz': 4673000000.0, 'cpu_hz_current': 4220344749.9999995, 'cpu_log_core': 16, 'cpu_name': 'AMD Ryzen 7 5700G with Radeon Graphics', 'cpu_phys_core': 8, 'load': 3.5, 'mem': 31.9, 'percpu': [{...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}], 'swap': 59.4}
raid [NA]
sensors [OK] 0.00000s key=label [{'critical': 94, 'key': 'label', 'label': 'Composite', 'type': <SensorType.CPU_TEMP: 'temperature_core'>, 'unit': 'C', 'value': 44, 'warning': 89}, ...]
smart [NA]
system [OK] 0.00000s {'hostname': ---, 'hr_name': 'Fedora Linux 40 64bit / Linux 6.9.7-200.fc40.x86_64', 'linux_distro': 'Fedora Linux 40', 'os_name': 'Linux', 'os_version': '6.9.7-200.fc40.x86_64', 'platform': '64bit'}
uptime [OK] 0.00009s {'seconds': 596733}
version [OK] 0.00001s '4.1.2'
wifi [OK] 0.00005s []

Total time to update all stats: 0.21058s

RazCrimson commented 3 months ago

It's not a bug: multiple podman sockets aren't currently supported.

But we plan to: https://github.com/nicolargo/glances/pull/2471

Will try to finish that PR this week.
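Until that PR lands, a possible stopgap (an untested sketch, not an official recommendation) is to run one Glances instance per podman socket, each with its own config file. The `-C` (config path), `-w` (web server), and `-p` (port) flags are standard Glances options; the `[containers]` section name is an assumption:

```shell
#!/bin/sh
# Stopgap sketch: generate one config file per podman socket, then
# start a separate Glances instance for each one.
for uid in 1000 1001; do
    conf="/tmp/glances-podman-${uid}.conf"
    cat > "$conf" <<EOF
[containers]
podman_sock=unix:///run/user/${uid}/podman/podman.sock
EOF
    # Start each instance on its own port so they don't clash, e.g.:
    # glances -C "$conf" -w -p $((61200 + uid))
done
```

The obvious drawback is that the containers show up in two separate UIs instead of one merged view.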

github-actions[bot] commented 1 week ago

This issue is stale because it has been open for 3 months with no activity.