Open leelewin opened 1 year ago
As an aside, you have until the end of the month to upgrade to 20.04, since support for 18.04 is ending.
What is the output of the following?
nvidia-smi --query-gpu=utilization.gpu --format=csv
Because I switched the GPU to Intel GPU, the nvidia-smi didn't work anymore.
$ nvidia-smi --query-gpu=utilization.gpu --format=csv
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
I found a workaround: remove the nvgpu entry from the custom_text pair in /home/xxxx/.indicator-sysmonitor.json. That fixes the problem, but only temporarily, and I am sure there is a bug in the software.
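If you prefer not to hand-edit the file, a small helper can strip the sensor token. This is a hypothetical sketch: it assumes the settings JSON has a `custom_text` key whose value contains a `{nvgpu}` placeholder, as described above; check your own ~/.indicator-sysmonitor.json before running anything like it.

```python
import re

def strip_nvgpu(custom_text):
    """Remove a {nvgpu} placeholder plus any stray '|' separator next to it.

    Hypothetical helper: the key and placeholder names are assumed from
    this thread, not taken from the project's source.
    """
    return re.sub(r'\s*\|?\s*\{nvgpu\}\s*\|?', '', custom_text).strip()

# Example with a layout like the one reported in this thread:
print(strip_nvgpu('{cpu} | {net} | {mem} | {nvgpu} |'))
# prints: {cpu} | {net} | {mem}
```

To apply it, load the file with `json.load`, run the helper over the `custom_text` value, and write the result back with `json.dump`.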
I have the same problem.
I uninstalled my NVIDIA drivers and indicator-sysmonitor stopped working.
I suppose the solution is to remove the "GPU sensors" from the indicator-sysmonitor configuration (no drivers, so they are no longer supported).
How can I change the configuration of indicator-sysmonitor without starting it?
indicator-sysmonitor stores its settings in two locations - you can remove those files before starting it to reset things:
./sensors.py:52: SETTINGS_FILE = os.getenv("HOME") + '/.indicator-sysmonitor.json'
./preferences.py:222: SETTINGS_FILE = os.getenv("HOME") + '/.cache/indicator-sysmonitor/preferences.json'
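Both files can be removed in one go; a minimal sketch, assuming the paths are exactly the two quoted above (adjust if your version stores them elsewhere). Note this resets all preferences, not just the GPU sensor.

```python
import os

# Settings locations quoted from sensors.py and preferences.py above.
paths = [
    os.path.expanduser('~/.indicator-sysmonitor.json'),
    os.path.expanduser('~/.cache/indicator-sysmonitor/preferences.json'),
]

for path in paths:
    if os.path.exists(path):
        os.remove(path)  # reset: the app recreates defaults on next start
        print('removed', path)
    else:
        print('not found', path)
```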
Great, it solved my problem
OS: Ubuntu 18, DE: GNOME 3.28, GPU 1: Intel HD Graphics, GPU 2: NVIDIA GeForce
indicator-sysmonitor preferences, customized output: | cpu | net | memory | nvgpu |
After switching from the NVIDIA GPU to the Intel GPU using NVIDIA X Server Settings and restarting the computer, the application stopped working. Launching it manually in the terminal produced an error as well.
ERROR MESSAGE:
$ indicator-sysmonitor
INFO:root:start
INFO:root:Menu shown
INFO:root:Fetcher started
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/indicator-sysmonitor/sensors.py", line 674, in run
    data = self.fetch()
  File "/usr/lib/indicator-sysmonitor/sensors.py", line 668, in fetch
    return self.mgr.get_results()
  File "/usr/lib/indicator-sysmonitor/sensors.py", line 280, in get_results
    value = instance.get_value(sensor)
  File "/usr/lib/indicator-sysmonitor/sensors.py", line 344, in get_value
    return "{:02.0f}%".format(self._fetch_gpu())
  File "/usr/lib/indicator-sysmonitor/sensors.py", line 347, in _fetch_gpu
    result = subprocess.check_output(['nvidia-smi', '--query-gpu=utilization.gpu', '--format=csv'])
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['nvidia-smi', '--query-gpu=utilization.gpu', '--format=csv']' returned non-zero exit status 9.
Traceback (most recent call last):
  File "/usr/bin/indicator-sysmonitor", line 266, in <module>
    app = IndicatorSysmonitor()
  File "/usr/bin/indicator-sysmonitor", line 101, in __init__
    self.load_settings()
  File "/usr/bin/indicator-sysmonitor", line 174, in load_settings
    self.update_indicator_guide()
  File "/usr/bin/indicator-sysmonitor", line 135, in update_indicator_guide
    guide = self.sensor_mgr.get_guide()
  File "/usr/lib/indicator-sysmonitor/sensors.py", line 204, in get_guide
    data = self._fetcher.fetch()
  File "/usr/lib/indicator-sysmonitor/sensors.py", line 668, in fetch
    return self.mgr.get_results()
  File "/usr/lib/indicator-sysmonitor/sensors.py", line 280, in get_results
    value = instance.get_value(sensor)
  File "/usr/lib/indicator-sysmonitor/sensors.py", line 344, in get_value
    return "{:02.0f}%".format(self._fetch_gpu())
  File "/usr/lib/indicator-sysmonitor/sensors.py", line 347, in _fetch_gpu
    result = subprocess.check_output(['nvidia-smi', '--query-gpu=utilization.gpu', '--format=csv'])
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['nvidia-smi', '--query-gpu=utilization.gpu', '--format=csv']' returned non-zero exit status 9.
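The underlying bug is that sensors.py calls subprocess.check_output unguarded, so any nvidia-smi failure (exit status 9 here) kills the fetcher thread. A defensive sketch of that call might look like the following; this is illustrative, not the project's actual code.

```python
import subprocess

def fetch_gpu_utilization():
    """Return GPU utilization as a float, or None when nvidia-smi fails.

    Sketch of a guarded version of sensors.py's _fetch_gpu; the function
    name and return convention here are assumptions for illustration.
    """
    try:
        out = subprocess.check_output(
            ['nvidia-smi', '--query-gpu=utilization.gpu',
             '--format=csv,noheader,nounits'],
            stderr=subprocess.DEVNULL)
    except (subprocess.CalledProcessError, OSError):
        # Covers exit status 9 ("couldn't communicate with the NVIDIA
        # driver") and a missing nvidia-smi binary.
        return None
    return float(out.decode().splitlines()[0])

value = fetch_gpu_utilization()
print('GPU n/a' if value is None else '{:02.0f}%'.format(value))
```

With a guard like this the indicator could display "n/a" for the GPU sensor instead of crashing when the NVIDIA driver is removed or the machine is switched to the Intel GPU.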