Open jan146 opened 3 years ago
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I realised that if the delay in zscroll was increased, the spam slowed down. Maybe an issue with zscroll?
I think I might have figured out why the issue is happening, but I haven't thought of a good fix for it yet.
Basically, the script uses playerctl to get the artist and title every X seconds (X is 0.1 by default, and making this delay longer just makes the scrolling a lot uglier, which is not a worthy compromise in my opinion). Additionally, playerctl uses dbus to fetch this information, hence the flood of dbus messages.
You can easily test this yourself with the watch command:
watch -n 0.1 "playerctl -p spotify metadata -f \"{{ title }} - {{ artist }}\""
This will print the title and artist of the currently playing song, just as the script does, and update it every 0.1 seconds.
While this is running, you can run sudo dbus-monitor --system in a separate tty session and you will see the same result as with the script: dbus is flooded with messages and cpu usage rises.
One solution would be to use two delays: one for scrolling the text and another one for updating it (checking playerctl). I'm working on a solution of my own, but I won't give any guarantees yet.
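The two-delay idea could be sketched roughly like this (a minimal sketch, not the actual module script; `get_status` is a stand-in for whatever playerctl call the script really makes, and the variable names are made up):

```shell
#!/usr/bin/env bash
# Sketch: redraw the text every SCROLL_DELAY seconds, but only re-query the
# player (i.e. touch dbus) every UPDATE_DELAY seconds.
SCROLL_DELAY=0.1   # how often the scrolled text is re-emitted
UPDATE_DELAY=1.0   # how often playerctl/dbus is actually queried

# Hypothetical stand-in for the playerctl query used by the script.
get_status() {
    playerctl -p spotify metadata -f "{{ title }} - {{ artist }}" 2>/dev/null
}

scroll_loop() {
    local max_ticks=${1:-0}   # 0 = run forever, as the bar module would
    local ticks_per_update tick=0 status=""
    # Round the ratio instead of truncating, to dodge float error (1.0/0.1
    # can come out as 9.999... in awk).
    ticks_per_update=$(awk -v u="$UPDATE_DELAY" -v s="$SCROLL_DELAY" \
        'BEGIN { printf "%d", (u / s) + 0.5 }')
    (( ticks_per_update < 1 )) && ticks_per_update=1
    while (( max_ticks == 0 || tick < max_ticks )); do
        if (( tick % ticks_per_update == 0 )); then
            status=$(get_status)      # the only place that touches dbus
        fi
        printf '%s\n' "$status"       # zscroll re-reads this each tick
        tick=$(( tick + 1 ))
        sleep "$SCROLL_DELAY"
    done
}

# Usage: scroll_loop    # run indefinitely
```

This keeps the scroll animation smooth while cutting the dbus traffic by a factor of UPDATE_DELAY/SCROLL_DELAY, at the cost of the displayed song lagging up to UPDATE_DELAY behind a track change.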
A better solution would be to have the spotify app or its dbus interface "message" the script automatically when the song changes, so the script doesn't have to poll its status all the time. That would obviously beat the first solution, but I have no real idea how to do this yet; maybe somebody else does.
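For the event-driven route, newer playerctl releases (2.x, if I remember correctly) ship a follow mode (`-F`/`--follow`) that blocks on dbus and only prints a new line when the metadata actually changes, which sounds like exactly this. A rough sketch, wrapped in a function for clarity:

```shell
# Event-driven alternative: with --follow, playerctl subscribes to dbus
# property-change signals instead of polling, so each printed line
# corresponds to one real metadata change. Check `playerctl --help` to
# confirm your version has the flag.
follow_metadata() {
    playerctl -p spotify metadata -f "{{ title }} - {{ artist }}" --follow
}

# Usage idea (from the bar script): follow_metadata | zscroll ...
```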
Any feedback is greatly appreciated, thanks in advance.
@jan146 Thanks for taking such a deep dive into this problem. I never constantly monitor my idle CPU usage (and I should, thanks for the tip!) and never tried to measure the computation load of this module.
I'm a student and so I wasn't able to get back quickly but let me take a look at what you're proposing.
Solution 1 is obviously a naive way to deal with it. Keep us updated with what you come up with. As far as solution 2 is concerned, I'm as lost as you are. I want the module to work with playerctl in general, not just Spotify, so I'll try to take a look at what the playerctl developers may have discussed regarding CPU load.
@brokenax3 Nice observation! I'll definitely keep that in mind.
After doing some more testing, I found that communicating with dbus directly resulted in much lower resource usage. Instead of using playerctl, I used the dbus-send command to get info about the player status, and usage went from ~8% down to ~2% at most. For some reason, playerctl sends a message to both the session and the system bus, which already unnecessarily doubles resource usage. Additionally, each of these messages burns roughly twice as many resources as a plain dbus-send request, again for some reason unknown to me. Consequently, playerctl uses roughly four times as many resources.
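For reference, a single direct query on the session bus only could look like this (these are the standard MPRIS bus name, object path, and interface; this is my sketch of the approach, not necessarily the exact command used above, and spotify must be running for it to return a reply):

```shell
# One dbus message, session bus only, no playerctl in between.
player_status() {
    dbus-send --print-reply --session \
        --dest=org.mpris.MediaPlayer2.spotify \
        /org/mpris/MediaPlayer2 \
        org.freedesktop.DBus.Properties.Get \
        string:org.mpris.MediaPlayer2.Player string:PlaybackStatus
}
```

The reply contains a variant string such as "Playing" or "Paused", which the bar script can parse with grep/sed.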
In a nutshell, the issue lies somewhere within playerctl, but finding it will most likely prove more difficult than just querying dbus directly.
I am using this solution (along with some other performance optimizations) in my own project. I originally planned to fork this one, since I made pretty much the same thing, but I don't require anything in particular from this repo.
I hope I have pointed you towards a possible solution, or at least narrowed down the problem, so it can be fixed sometime in the future.
Thank you for your time. Kind regards, Jan
My CPU usage was going haywire with the playerctl-based script. I was initially using the first solution, where I added a sleep 0.5 to the get-status script, but that also delayed the zscroll effect because of how the two are connected, and it was pretty ugly.
Using the dbus-send method fixed the issue. Average CPU usage went from ~12% to ~6% for me.
Thank you, @jan146
I did try a playerctl-based async version here: https://gist.github.com/LDAP/6b7393b61fb218b7a2ff54690a6f47ac. It has hardly any CPU usage.
You guys might be surprised, but it's as easy as adding a simple interval for the module in your polybar configuration.
This reduces the CPU usage very noticeably and I even created a PR for it here: #23
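Something along these lines (the module name and script path below are placeholders, and the exact keys depend on your setup; interval is the standard custom/script option that controls how often polybar re-runs the exec command):

```ini
[module/spotify]
type = custom/script
; placeholder path - point this at your own status script
exec = ~/.config/polybar/scripts/spotify_status.sh
; re-run the script once per second instead of as fast as possible
interval = 1
```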
@DashCruft Thanks for the quick PR. If you got any numbers on how much of an improvement this change made, please feel free to post them here.
After I added this module to my polybar configuration, I noticed that my idle cpu usage was slightly higher than usual. After disabling the module, my cpu usage was about 10% lower on average.
After looking a bit more into it, I found the processes that were the culprit:
Each of them usually consumes ~5% of my cpu (i5-8600k). After looking up what these processes could be (before I realized they belonged to this polybar module), I found out that they were spamming/flooding dbus with messages (seen by running sudo dbus-monitor --system). The messages look something like this (see the attached screenshot). Again, after disabling the spotify module, the messages no longer appear. I believe I have properly edited both .sh files, but I will include them in the attachments just in case, along with the polybar module. Since GitHub doesn't support uploading .sh files, I will upload screenshots of the files instead.
Any help is greatly appreciated, thanks in advance.