Closed: yochananmarqos closed this issue 1 year ago.
Can you add `threading.` at the beginning of the line and try again? But do not remove the empty spaces (indentation) at the beginning of the line.
Line 2641 in the Libsysmon.py file:
Thread(target=gpu_load_nvidia_func, daemon=True).start()
Can you share screenshots for both GPUs if it works?
Note: Integrated Intel GPUs provide limited information in their driver directories.
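For context, prefixing `threading.` is needed presumably because Libsysmon.py uses `import threading` rather than `from threading import Thread`, so the bare name `Thread` is undefined. A minimal sketch of the difference:

```python
import threading

def gpu_load_nvidia_func():
    """Placeholder standing in for the real NVIDIA polling function in Libsysmon.py."""
    pass

# With only "import threading", the bare name "Thread" is undefined:
try:
    Thread(target=gpu_load_nvidia_func, daemon=True).start()
except NameError as error:
    print(error)  # name 'Thread' is not defined

# Qualifying the name with the module works:
thread = threading.Thread(target=gpu_load_nvidia_func, daemon=True)
thread.start()
thread.join()
```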
Like this?
threading.Thread(target=gpu_load_nvidia_func, daemon=True).start()
That produces no output; however, there are still no stats at all.
What about Intel GPU?
there are still no stats at all.
:wink:
Can you share output of this command?
nvidia-smi --query-gpu=gpu_name,gpu_bus_id,driver_version,utilization.gpu,utilization.memory,utilization.encoder,utilization.decoder,memory.total,memory.free,memory.used,temperature.gpu,clocks.current.graphics,clocks.max.graphics,clocks.current.memory,clocks.max.memory,power.draw,power.limit --format=csv
name, pci.bus_id, driver_version, utilization.gpu [%], utilization.memory [%], utilization.encoder [%], utilization.decoder [%], memory.total [MiB], memory.free [MiB], memory.used [MiB], temperature.gpu, clocks.current.graphics [MHz], clocks.max.graphics [MHz], clocks.current.memory [MHz], clocks.max.memory [MHz], power.draw [W], power.limit [W]
NVIDIA GeForce RTX 3060 Laptop GPU, 00000000:01:00.0, 535.54.03, 0 %, 1 %, 0 %, 0 %, 6144 MiB, 4821 MiB, 1116 MiB, 42, 472 MHz, 2100 MHz, 6000 MHz, 7001 MHz, 22.82 W, [N/A]
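Output like this can be parsed in Python roughly as follows. This is a minimal sketch, not the project's actual parsing code; the sample string is a shortened copy of the output above. Note the `[N/A]` value, which nvidia-smi prints for fields the GPU or driver does not support:

```python
import csv
import io

# Shortened sample of the nvidia-smi CSV output above; in the application this
# would come from running the nvidia-smi query command via subprocess.
sample = (
    "name, pci.bus_id, driver_version, utilization.gpu [%], "
    "memory.total [MiB], power.limit [W]\n"
    "NVIDIA GeForce RTX 3060 Laptop GPU, 00000000:01:00.0, 535.54.03, 0 %, "
    "6144 MiB, [N/A]\n"
)

# skipinitialspace strips the space nvidia-smi prints after each comma.
reader = csv.reader(io.StringIO(sample), skipinitialspace=True)
header = next(reader)
row = next(reader)
gpu = dict(zip(header, row))

# Unsupported fields are the literal string "[N/A]", so values must be
# checked before converting them to numbers.
print(gpu["power.limit [W]"])  # [N/A]
```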
Can you follow these steps? In the `src` folder, run:
`python3 ./run_from_source.py`
The last version may be deleted.
There are some stats now; however, the GPU & Memory usage graphs are not populating:
card1 (Intel):
card0 (NVIDIA):
Output clicking on card0 (NVIDIA):
Traceback (most recent call last):
File "/home/yochanan/tmp/system-monitoring-center-f3e6fc0c2492f7c7c090ce4158f0d7a72f5eccab/src/MainWindow.py", line 950, in on_row_activated
Gpu.loop_func()
File "/home/yochanan/tmp/system-monitoring-center-f3e6fc0c2492f7c7c090ce4158f0d7a72f5eccab/src/Gpu.py", line 253, in loop_func
gpu_load_memory_frequency_power_dict = Libsysmon.get_gpu_load_memory_frequency_power(gpu_pci_address, device_vendor_id, selected_gpu_number, gpu_list, gpu_device_path_list, gpu_device_sub_path_list)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yochanan/tmp/system-monitoring-center-f3e6fc0c2492f7c7c090ce4158f0d7a72f5eccab/src/Libsysmon.py", line 2641, in get_gpu_load_memory_frequency_power
Thread(target=gpu_load_nvidia_func, daemon=True).start()
^^^^^^
NameError: name 'Thread' is not defined
Traceback (most recent call last):
File "/home/yochanan/tmp/system-monitoring-center-f3e6fc0c2492f7c7c090ce4158f0d7a72f5eccab/src/MainWindow.py", line 1127, in main_gui_tab_loop
Gpu.loop_func()
File "/home/yochanan/tmp/system-monitoring-center-f3e6fc0c2492f7c7c090ce4158f0d7a72f5eccab/src/Gpu.py", line 253, in loop_func
gpu_load_memory_frequency_power_dict = Libsysmon.get_gpu_load_memory_frequency_power(gpu_pci_address, device_vendor_id, selected_gpu_number, gpu_list, gpu_device_path_list, gpu_device_sub_path_list)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yochanan/tmp/system-monitoring-center-f3e6fc0c2492f7c7c090ce4158f0d7a72f5eccab/src/Libsysmon.py", line 2641, in get_gpu_load_memory_frequency_power
Thread(target=gpu_load_nvidia_func, daemon=True).start()
^^^^^^
NameError: name 'Thread' is not defined
Finally, can you add `threading.` again? Line:
Thread(target=gpu_load_nvidia_func, daemon=True).start()
Can you test SMC v2.18.1?
card1 (Intel):
card0 (NVIDIA):
Output clicking on card0 (NVIDIA):
Traceback (most recent call last):
File "/usr/share/system-monitoring-center/systemmonitoringcenter/MainWindow.py", line 950, in on_row_activated
Gpu.loop_func()
File "/usr/share/system-monitoring-center/systemmonitoringcenter/Gpu.py", line 609, in loop_func
gpu_load_memory_frequency_power_dict = Libsysmon.get_gpu_load_memory_frequency_power(gpu_pci_address, device_vendor_id, selected_gpu_number, gpu_list, gpu_device_path_list, gpu_device_sub_path_list)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/share/system-monitoring-center/systemmonitoringcenter/Libsysmon.py", line 2649, in get_gpu_load_memory_frequency_power
gpu_load_memory_frequency_power_dict = process_gpu_tool_output_nvidia(gpu_pci_address)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/share/system-monitoring-center/systemmonitoringcenter/Libsysmon.py", line 3117, in process_gpu_tool_output_nvidia
if gpu_tool_output != "-":
^^^^^^^^^^^^^^^
NameError: name 'gpu_tool_output' is not defined
Note: Integrated Intel GPUs provide limited information in their driver directories.
There will not be improvements for integrated Intel GPUs.
This bug is also fixed, but there is no new release yet. The error is probably printed only once, and it does not affect the GPU information shown.
But the max GPU power is not shown; only the current power consumption is shown.
Can you share output of these commands?
nvidia-smi --query-gpu=gpu_name,gpu_bus_id,driver_version,utilization.gpu,utilization.memory,utilization.encoder,utilization.decoder,memory.total,memory.free,memory.used,temperature.gpu,clocks.current.graphics,clocks.max.graphics,clocks.current.memory,clocks.max.memory,power.draw,power.limit,power.limit.enforced --format=csv
nvidia-smi --query-gpu=gpu_name,gpu_bus_id,driver_version,utilization.gpu,utilization.memory,utilization.encoder,utilization.decoder,memory.total,memory.free,memory.used,temperature.gpu,clocks.current.graphics,clocks.max.graphics,clocks.current.memory,clocks.max.memory,power.draw,power.limit,enforced.power.limit --format=csv
Field "power.limit.enforced" is not a valid field to query.
name, pci.bus_id, driver_version, utilization.gpu [%], utilization.memory [%], utilization.encoder [%], utilization.decoder [%], memory.total [MiB], memory.free [MiB], memory.used [MiB], temperature.gpu, clocks.current.graphics [MHz], clocks.max.graphics [MHz], clocks.current.memory [MHz], clocks.max.memory [MHz], power.draw [W], power.limit [W], enforced.power.limit [W]
NVIDIA GeForce RTX 3060 Laptop GPU, 00000000:01:00.0, 535.54.03, 3 %, 2 %, 0 %, 0 %, 6144 MiB, 4777 MiB, 1160 MiB, 39, 465 MHz, 2100 MHz, 6000 MHz, 7001 MHz, 21.95 W, [N/A], 80.00 W
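Since `power.limit` is reported as `[N/A]` on this laptop GPU while `enforced.power.limit` is available, a fallback along these lines could pick whichever limit is present. This is a hypothetical sketch; `pick_power_limit` is not a function in SMC:

```python
def pick_power_limit(power_limit, enforced_power_limit):
    """Prefer power.limit, fall back to enforced.power.limit when unsupported."""
    # nvidia-smi reports "[N/A]" for fields the GPU/driver does not support.
    if power_limit not in ("", "[N/A]"):
        return power_limit
    if enforced_power_limit not in ("", "[N/A]"):
        return enforced_power_limit
    return "-"  # neither value available

# Values from the output above: power.limit is [N/A], enforced limit is known.
print(pick_power_limit("[N/A]", "80.00 W"))  # 80.00 W
```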
Can you test the latest source code?
Can you share this information?
There were no errors.
Can you try again?
:+1:
What about decoding engine during a video playing?
The Video Encoder / Decoder graphs do not show anything while nvtop does.
Can you share output of the following commands? You can share only elapsed time output parts (last 3-4 lines) of them.
A lot of new parameters were added. I do not know whether running the command takes a long time; I could not learn the result from this thread.
time nvidia-smi --query-gpu=gpu_name,gpu_bus_id,driver_version,utilization.gpu,utilization.memory,utilization.encoder,utilization.decoder,memory.total,memory.free,memory.used,temperature.gpu,clocks.current.graphics,clocks.max.graphics,clocks.current.memory,clocks.max.memory,power.draw,power.limit --format=csv
time nvidia-smi --query-gpu=gpu_name,gpu_bus_id,driver_version,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used,temperature.gpu,clocks.current.graphics,clocks.max.graphics,power.draw --format=csv
You can write here if you are bored. Decoding/encoding load bugs may be fixed in another version when another user provides outputs. My GPU is very old and does not support many features because of its driver. The GPU tab changes very rarely because it is one of the hardest parts of the application, and there were very detailed changes this time. There are a lot of GPUs and models to cover.
name, pci.bus_id, driver_version, utilization.gpu [%], utilization.memory [%], utilization.encoder [%], utilization.decoder [%], memory.total [MiB], memory.free [MiB], memory.used [MiB], temperature.gpu, clocks.current.graphics [MHz], clocks.max.graphics [MHz], clocks.current.memory [MHz], clocks.max.memory [MHz], power.draw [W], power.limit [W]
NVIDIA GeForce RTX 3060 Laptop GPU, 00000000:01:00.0, 535.54.03, 8 %, 11 %, 0 %, 0 %, 6144 MiB, 4923 MiB, 1014 MiB, 42, 390 MHz, 2100 MHz, 810 MHz, 7001 MHz, 18.32 W, [N/A]
nvidia-smi --format=csv 0.00s user 0.01s system 54% cpu 0.013 total
name, pci.bus_id, driver_version, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB], temperature.gpu, clocks.current.graphics [MHz], clocks.max.graphics [MHz], power.draw [W]
NVIDIA GeForce RTX 3060 Laptop GPU, 00000000:01:00.0, 535.54.03, 1 %, 1 %, 6144 MiB, 4907 MiB, 1030 MiB, 42, 510 MHz, 2100 MHz, 22.97 W
nvidia-smi --format=csv 0.00s user 0.00s system 48% cpu 0.011 total
0.013 total
vs
0.011 total
There is no big change.
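The comparison above can be reproduced with a small helper like the one below. This is a hypothetical sketch; `time_command` is not part of SMC, and the Python interpreter stands in for `nvidia-smi` so the example runs anywhere:

```python
import subprocess
import sys
import time

def time_command(cmd, runs=3):
    """Average wall-clock time of a command over a few runs (rough benchmark)."""
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run(cmd, stdout=subprocess.DEVNULL, check=True)
    return (time.perf_counter() - start) / runs

# The Python interpreter is used as a stand-in command here; on the reporter's
# machine both the long and short nvidia-smi queries finished in about 0.01 s,
# so the extra fields cost nothing noticeable in the monitoring loop.
elapsed = time_command([sys.executable, "-c", "pass"])
print(elapsed > 0.0)  # True
```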
What is reported if you use this command during video playback?
nvidia-smi --query-gpu=utilization.encoder,utilization.decoder --format=csv
Does nvtop require root privileges for getting Intel GPU information?
> What is reported if you use this command during video playback?
> `nvidia-smi --query-gpu=utilization.encoder,utilization.decoder --format=csv`
Sorry, turns out I'm not seeing any Encoder / Decoder activity at all. If I remember correctly, MPV needs to be configured manually to use GPU acceleration. I used to have it working in Chromium based browsers, but I've given up on fiddling. Any suggestions?
> Does nvtop require root privileges for getting Intel GPU information?
No.
There may be problems with nvidia-smi. Video decoding load changes when a video is watched (using hardware acceleration from a web browser, from the HDD, etc.). Maybe the other GPU is used. I do not know.
Currently there is no code change; the nvidia-smi decoding/encoding information is shown directly on the GUI. It looks like it is reported as 0.
You can close this issue, or you can keep it open if you want to hear different suggestions about this decoding load issue.
Additionally, after some time, a tool like nvtop may be used for getting more detailed GPU information. This depends on the popularity of the application.
As far as this issue is concerned, it was addressed and working well, thank you.
You can open new issues for bugs, new features, etc.
After adding another GPU tool for getting GPU information, fixing GPU-related problems may not be that difficult. Currently, System Monitoring Center uses its own code for these features.
The `xrandr` dependency is not required for SMC v2.
I just updated to 2.18.0 and the GPU tab shows nothing for both my integrated Intel graphics and dedicated NVIDIA graphics on a hybrid laptop. Note that I have external monitors connected, so the Intel graphics are not used at all.
`inxi -Gazy`

```
Graphics:
  Device-1: Intel Alder Lake-P Integrated Graphics vendor: CLEVO/KAPOK
    driver: i915 v: kernel arch: Gen-12.2 process: Intel 10nm built: 2021-22+
    ports: active: DP-2 off: eDP-2 empty: DP-3,DP-4,DP-5 bus-ID: 00:02.0
    chip-ID: 8086:46a6 class-ID: 0300
  Device-2: NVIDIA GA106M [GeForce RTX 3060 Mobile / Max-Q] vendor: CLEVO/KAPOK
    driver: nvidia v: 535.54.03 alternate: nouveau,nvidia_drm non-free: 535.xx+
    status: current (as of 2023-07) arch: Ampere code: GAxxx
    process: TSMC n7 (7nm) built: 2020-22 pcie: gen: 2 speed: 5 GT/s lanes: 8
    link-max: gen: 4 speed: 16 GT/s lanes: 16 ports: active: none
    off: DP-1,HDMI-A-1 empty: eDP-1 bus-ID: 01:00.0 chip-ID: 10de:2520
    class-ID: 0300
  Device-3: Logitech Webcam C270 driver: snd-usb-audio,uvcvideo type: USB
    rev: 2.0 speed: 480 Mb/s lanes: 1 mode: 2.0 bus-ID: 3-1.1:4
    chip-ID: 046d:0825 class-ID: 0102 serial:
```