Open TilCreator opened 3 years ago
Color change if a value goes over a certain threshold would be nice. Is there a good way of implementing that for every single value, or should this just be implemented for graphics pipe and vram only?
Since we already have Nvidia I guess it makes sense to have this too.
> get some more example dump data
I might be able to test what happens with the GPU unplugged, or maybe by unloading the kernel module at a TTY and SSHing in.
> Color change

The only way to achieve it would be to have each bit of info displayed in a separate widget (for separate colours), or to just apply the colour to the whole block. What's graphics pipe, is that the utilization?
Graphics pipe is similar to utilization, yup.
I implemented the radeontop block in block_add_radeontop, but after this PR fluff radeontop hard; now I'm using LACT.
Would it be possible to assign the issue to me?
> Would it be possible to assign the issue to me?
Sure, does it change anything for you? :smiley:
> after this pr fluff radeontop hard, now I'm using LACT.
The maintainer's response was a bit disappointing, wasn't it? One point for using radeontop would be that it seems to be more prevalent and has packages in Ubuntu, whereas LACT would require those users to build from source (a barrier to entry, assuming those users are averse to doing such things). How hairy is the parsing code if we stuck to radeontop?
Another idea: perhaps we could call this block `gpu`, and add a `driver` option to choose between `nvidia-smi`, `radeontop`, `lact-cli` (search for `driver` in blocks.md to see other blocks operating similarly). That way we can avoid code duplication and also avoid having to create a new block if a different tool is needed in the future (for example, only implementing `nvidia-smi` and `lact-cli` for now and leaving `radeontop` for the future). In this case the Nvidia block would be deprecated in favour of using the `gpu` block with `driver` set to `nvidia-smi`.
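As a sketch, the proposed config could look something like this (nothing here is implemented; the `gpu` block name, the `driver` values, and the format placeholders are just illustrations of the suggestion above):

```toml
[[block]]
block = "gpu"
# Hypothetical: which backend tool to query for the data.
driver = "nvidia-smi"  # or "radeontop", "lact-cli"
# Hypothetical placeholder names, for illustration only.
format = "{utilization} {mem_used}/{mem_total}"
```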
> Sure, does it change anything for you? :smiley:
Nope, but thx ^^
I dislike `radeontop` not only because of the maintainer (and the current release being able to crash systems), but also because it's missing a lot of features compared to LACT (and vice versa): radeontop can measure the load on specific hardware units, while LACT can also measure power (in watts), temperature, GPU voltage and GPU usage (as reported by the driver). For me LACT is much preferable, because I don't think anyone has a status bar long enough to display all the hardware-specific load stats ^^. The next problem is that those utilities have very different data fields that are hard to join, so I don't think it is worth creating a general gpu block, even though I would prefer it as a user.
As soon as https://github.com/ilyazzz/LACT/issues/23 is closed I will also create build files for NixOS and Arch, so hopefully that utility will be more easily available soon.
An alternative to using LACT as a daemon would be to use LACT directly. That would limit the functionality a bit, since GPU power states etc. cannot be changed without running as root (and I also don't know if LACT is written to be able to do this). I do like the idea, so I will play a bit with it.
Hello, I've randomly stumbled across my project being mentioned and I'd like to help where I can.
Some clarifications:
Currently, the CLI is not very usable. While it would be possible to implement something like a machine-readable json interface with it, I'm mainly focusing on the GUI.
As LACT is also written in rust, a much better solution would be to talk to the socket directly. There is a rust interface for it implemented here, and you can see how it's used in the CLI or the GUI (example).
Though if you do decide to use it this way please inform me. Since currently the ABI is only used to communicate between LACT's own GUI/CLI and the daemon, it often changes between versions. If I'm to expose it to third-party applications a stable ABI would be needed.
Hi, my current method is already using the socket, but it's more experimentation at this point, far from finished. One idea was also to use the daemon directly (run it as a subprocess of i3status-rs), so that a user who only wants to see their GPU usage (and not set fan profiles etc.) doesn't need to install or run the daemon. How bad exactly is that idea? ^^
If you only want basic stuff like GPU usage then running an entire LACT daemon from within i3status-rs would be very overkill IMO. You can just read the sysfs directly (e.g. `cat /sys/class/drm/card*/device/gpu_busy_percent` from a script).
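In Rust (i3status-rs's own language), that sysfs read could look roughly like the sketch below. The hardcoded `card0` path is an assumption: the card index varies between systems, and `gpu_busy_percent` only exists for amdgpu devices.

```rust
use std::fs;

/// Parse the contents of gpu_busy_percent: an integer (0-100)
/// followed by a trailing newline.
fn parse_busy_percent(raw: &str) -> Option<u8> {
    raw.trim().parse().ok()
}

/// Read GPU utilization from sysfs. The card index is assumed;
/// a real block would need to discover the right card.
fn read_busy_percent() -> Option<u8> {
    let raw = fs::read_to_string("/sys/class/drm/card0/device/gpu_busy_percent").ok()?;
    parse_busy_percent(&raw)
}
```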
Yes, but I would still like to use LACT, mostly so I don't have to maintain the direct reading ^^ The goal is also to allow the user to display everything that LACT is able to provide, and maybe even cycle through profiles by clicking on the block (which wouldn't be possible when running as an unprivileged user, of course).
Did you end up making the block?
I started, but lost motivation, so maybe I will continue work in a few days or it will take a few weeks...
I'm still a little unsure about the implementation: one way would be to just copy some code from LACT, the other would be to implement the LACT socket protocol and require the daemon.
The copy way would be smaller, because it's just a few reads from `/sys`.
The LACT impl would be easier and would also enable changing the GPU power profile, but it would be quite big (last time I tried it, it required the whole daemon as a dependency just to use the socket) and would require the user to run the daemon.
I like the LACT impl more, because it's easier and doesn't require much maintenance.
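For the "copy way", besides the raw `/sys` reads the block would mostly need to turn byte counts (e.g. from `mem_info_vram_used` / `mem_info_vram_total`) into human-readable units. A minimal sketch of that formatting, roughly mirroring what `numfmt --to=iec` does in a shell script:

```rust
/// Format a byte count with binary (IEC) prefixes, similar in
/// spirit to `numfmt --to=iec` (exact rounding may differ).
fn format_iec(bytes: u64) -> String {
    const UNITS: [&str; 7] = ["B", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei"];
    let mut value = bytes as f64;
    let mut unit = 0;
    // Divide down by 1024 until the value fits under the next prefix.
    while value >= 1024.0 && unit < UNITS.len() - 1 {
        value /= 1024.0;
        unit += 1;
    }
    if unit == 0 {
        format!("{}{}", bytes, UNITS[unit])
    } else {
        format!("{:.1}{}", value, UNITS[unit])
    }
}
```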
It would be possible to move the GPU handling code out of LACT into a separate library. That would let you access the GPU information without manually reading from `/sys`, but also without requiring a running daemon.
That would be nice. :+1: I would probably still want to include the socket connection as an optional feature to be able to switch the gpu power profile.
@TilCreator I've created a very basic prototype of the library. It currently only has very barebones functionality, but it should give you a general idea of how the library will look.
I will upload it to crates.io once it's more complete.
thx so much for that, I will try to not take too long with my code ^^
@TilCreator Did you find time to try out the crate?
Not yet, sorry, hopefully soon
@TilCreator Any updates?
Since this seems to be the only issue about AMD GPUs, I'll just toss in the quick minimal custom block definition & script that I ended up scrambling together to at least get the utilization stats, since there currently seems to be no other way to get them:
...
```toml
[[block]]
block = "custom"
command = "amd-gpu.sh"
json = true
interval = 1
```
```sh
#!/bin/sh
DEVICE=card0
DEV_PATH="/sys/class/drm/${DEVICE}/device"
PERCENT=$(cat "${DEV_PATH}/gpu_busy_percent")
VRAM_USED=$(numfmt --to=iec < "${DEV_PATH}/mem_info_vram_used")
VRAM_TOTAL=$(numfmt --to=iec < "${DEV_PATH}/mem_info_vram_total")
echo '{"icon": "gpu", "text": "'"${PERCENT}"'% '"${VRAM_USED}"'/'"${VRAM_TOTAL}"'"}'
```
It'd be awesome though to get better support for AMD GPUs.
@ljoonal a quick WIP based on your script: https://github.com/MaxVerevkin/i3status-rust/tree/amd_gpu
^ That seems good enough for now so I added it in 4f6a07e615cd6e5d9ce4890baeabc7cc8b1d6f89.
Actually it's better, since it gets the data from sysfs, but I guess there's no fan control that way if we were to add it in the future?
FYI LACT now does have a JSON API. However if the goal of the block is only to show data, and not change anything, it's probably better to keep reading the sysfs directly to avoid depending on a separate service.
> but I guess no fan control that way if we were to add it in the future?
We can add `driver = "lact"` in the future to add extra features.
Linux AMDGPU Control Application as block
old
The radeontop utility has a mode for dumping the acquired data; this can be parsed and used for a block, similar to the nvidia_gpu block. Example data:
My approach would be to parse this output and provide every value as a format option. Any further features? I would really appreciate getting some more example dump data, especially for when no GPU is available or when more than one GPU is available.
I will gladly implement this after some discussion.
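For reference, a minimal sketch of how such a dump line might be parsed. The field layout assumed here (comma-separated `name value%` pairs, optionally prefixed by a timestamp and followed by extra tokens such as an absolute size) is an assumption based on radeontop's dump mode and should be checked against a real dump before use:

```rust
use std::collections::HashMap;

/// Parse one (assumed-format) radeontop dump line into `field -> value`.
/// Non-percentage numeric fields like "bus 03" are also picked up.
fn parse_dump_line(line: &str) -> HashMap<String, f64> {
    let mut values = HashMap::new();
    // Strip a leading "<timestamp>: " prefix if present.
    let data = line.split_once(": ").map_or(line, |(_, rest)| rest);
    for field in data.split(',') {
        let mut tokens = field.split_whitespace();
        let (Some(name), Some(value)) = (tokens.next(), tokens.next()) else {
            continue;
        };
        if let Ok(number) = value.trim_end_matches('%').parse::<f64>() {
            values.insert(name.to_string(), number);
        }
    }
    values
}
```

The sample input in any real test should of course come from an actual `radeontop -d` dump rather than a hand-written line like the hypothetical one used here.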