chjj / compton

A compositor for X11.

The CPU usage of the newer version of Compton increased significantly #163

Closed guotsuan closed 10 years ago

guotsuan commented 10 years ago

Hi,

I want to report a significant increase in CPU usage by the newer version of compton. Although I am using the richardgv-dev branch, I am reporting this just in case.

After I updated compton from commit gd897740 to commit g9e05391, I noticed that compton's CPU usage increased significantly.

I ran the benchmarks under almost the same conditions (same desktop, same configuration file).

gd897740:

0.38user 0.71system 1:06.90elapsed 1%CPU (0avgtext+0avgdata 38520maxresident)k 0inputs+0outputs (0major+7580minor)pagefaults 0swaps

g9e05391:

15.56user 43.98system 1:06.93elapsed 88%CPU (0avgtext+0avgdata 38544maxresident)k 0inputs+0outputs (0major+7587minor)pagefaults 0swaps

The command was: /usr/bin/time compton --blur-kern=$blur_kern --backend glx --benchmark 1000 (shell timing: 15.57s user 43.99s system 88% cpu 1:06.93 total)
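(In other words, roughly (15.56 + 43.98) / 66.93 ≈ 89 % of one core with g9e05391, versus about (0.38 + 0.71) / 66.90 ≈ 1.6 % with gd897740.)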

I used the glx backend, with the blur kernel (--blur-kern) generated by compton-convgen.py --dump-compton -f=sigma=11.0 gaussian 15.

Arch Linux, kernel 3.12.6, nvidia driver 331.20, card: GTS 450, xorg-server 1.14.5-2

My compton configuration is:

# Shadow
shadow = false;
no-dnd-shadow = true;
no-dock-shadow = true;
clear-shadow = true;
shadow-radius = 7;
shadow-offset-x = -7;
shadow-offset-y = -7;
# shadow-opacity = 0.7;
# shadow-red = 0.0;
# shadow-green = 0.0;
# shadow-blue = 0.0;
shadow-exclude = [ "name = 'Notification'", "class_g = 'Conky'", "class_g ?= 'Notify-osd'" ];
# shadow-exclude = "n:e:Notification";
shadow-ignore-shaped = false;

# Opacity
menu-opacity = 0.8;
inactive-opacity = 1.0;
# active-opacity = 0.8;
frame-opacity = 1.0;
inactive-opacity-override = false;
alpha-step = 0.06;
# inactive-dim = 0.2;
# inactive-dim-fixed = true;
blur-background = true;
blur-background-frame = false;
blur-background-fixed = true;
blur-background-exclude = ["window_type = 'desktop'" ];

# Fading
fading = false;
# fade-delta = 30;
fade-in-step = 0.03;
fade-out-step = 0.03;
no-fading-openclose = true;
fade-exclude = [ ];

# Other
backend = "glx"

vsync = "opengl-swc";
dbe = false;
paint-on-overlay = true;
sw-opti = true;
unredir-if-possible = true;
focus-exclude = [ ];
detect-transient = true;
detect-client-leader = true;
invert-color-include = [ ];

# GLX backend
glx-no-stencil = true;
#glx-copy-from-front = false;
# glx-use-copysubbuffermesa = true;
glx-no-rebind-pixmap = true;
glx-swap-method = "undefined";

# Window type settings
wintypes:
{
  tooltip = { fade = true; shadow = false; opacity = 0.75; focus = true; };
};

I hope that my report is useful. Let me know if you need any other information.

Best wishes

richardgv commented 10 years ago

Thanks for the report! I changed glFlush() to glFinish() in fbd70e146c, hoping it would be beneficial for VSync, but I didn't realize it would raise CPU usage so significantly. (It's weird indeed, though: glFinish() should only freeze the process instead of letting it waste CPU cycles.) 3e783f3 (richardgv-dev) reverts the change. How does it work for you?
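
As a rough, simplified sketch (not compton's actual code; the helper name is just for illustration), the difference in question is:

#include <GL/gl.h>

/* glFlush() only submits the queued GL commands and returns immediately,
 * while glFinish() blocks until the GPU has executed them all. If the
 * driver implements that wait as a busy-wait, the process burns CPU
 * instead of sleeping, which would explain the numbers above. */
static void end_of_paint(int use_glfinish) {
  if (use_glfinish)
    glFinish();   /* wait for all queued commands to complete */
  else
    glFlush();    /* flush the command queue and return right away */
}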

guotsuan commented 10 years ago

Yes, I have just tested the new commit 3e783f3. I think the CPU usage is back to normal now :) It is great. Thank you very much.

0.54user 0.25system 1:06.98elapsed 1%CPU (0avgtext+0avgdata 47952maxresident)k 0inputs+0outputs (0major+8003minor)pagefaults 0swaps

Could the issue be caused by my choice of VSync, which does not seem to benefit from the change from glFlush() to glFinish(), but rather suffers from it?

richardgv commented 10 years ago

Could the issue be caused by my choice of VSync, which does not seem to benefit from the change from glFlush() to glFinish(), but rather suffers from it?

  • This problem only occurs when you have VSync enabled, since we only call glFinish() in that case (or when you are using xr_glx_hybrid). It doesn't imply that your VSync choice is bad or anything.
  • glFinish() doesn't have the best reputation. It's necessary for the xr_glx_hybrid backend to operate correctly, but I guess it's a bad idea to enable it on other backends.
  • opengl-swc works well enough with glFlush() that glFinish() wasn't needed in the first place.
guotsuan commented 10 years ago

I see. Thank you again for the explanation. Have a nice day!

ghost commented 10 years ago

Huh… ever since using https://github.com/chjj/compton/commit/3e783f3, I often get some kind of short white, lightning-like flashes (it's hard to describe, I hope you get my point) on the screen when closing windows, switching workspaces, etc. (using xr_glx_hybrid).

By enabling fading, you can even see some light "flickering" when performing the described tasks – the more fading time you set, the better you can observe this issue.

Since I don't get any of these things using the master branch's https://github.com/chjj/compton/commit/fbd70e146c6fa46250dc2b435afb347c3cf54539, I'd say this is probably caused by the glFlush/glFinish change, because that commit and the "just-a-typo" commit seem to be the only two differences compared with https://github.com/chjj/compton/commit/fbd70e146c6fa46250dc2b435afb347c3cf5453, and there, as said before, everything was fine.

For good measure, I downgraded to https://github.com/chjj/compton/commit/fbd70e146c6fa46250dc2b435afb347c3cf5453, and tada, the problem disappears. :-)

That's why I posted in this thread, because it already covers the aspects of https://github.com/chjj/compton/commit/3e783f3.

Btw. this also happens with nothing but the backend defined, so this doesn't seem like a user config problem…

ghost commented 10 years ago

Never mind, I just missed the fact that you also introduced the new bool vsync-use-glfinish, and even without knowing C, I can see how this works now in https://github.com/chjj/compton/commit/3e783f3. :-)

Thanks and sorry for all the noise, and happy new year.

ghost commented 10 years ago

Oh man, now it gets really strange: if I run compton https://github.com/chjj/compton/commit/3e783f3 via compton --vsync-use-glfinish, I get no errors, but I also don't see any improvement compared to running it without this flag, i.e. the problems described two comments above remain.

However, if I downgrade again to https://github.com/chjj/compton/commit/fbd70e146c6fa46250dc2b435afb347c3cf5453 and run compton, these flickering problems are just gone, although I didn't change anything in the config. o.O

I don't know what to say and I really don't want to offend you, but are you sure that you implemented vsync-use-glfinish completely correctly?

richardgv commented 10 years ago

@cju:

Unfortunately, with compton (git-v0.1_beta2-5-g3e783f3-2013-12-26) and compton --backend xr_glx_hybird --vsync-use-glfinish I can't reproduce the flickering issue within 10 minutes (but I can if I take --vsync-use-glfinish off). The implementation of --vsync-use-glfinish is so simple that it isn't too likely to have a bug in it... You could run gdb --args ./compton --backend xr_glx_hybird --vsync-use-glfinish in screen/tmux, set a breakpoint on compton.c:1946, run it and check whether glFinish() is executed. (You need to attach screen/tmux from a virtual console (or another X screen?) since your X will appear locked up while compton is interrupted.)

And it's so annoying that nvidia-drivers demands glFinish() for OpenGL to catch everything X Render painted. Can't the card/driver do proper queuing? The CPU usage is as high as hell with --vsync-use-glfinish.
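
For reference, the synchronization sequence in question in the xr_glx_hybrid path goes roughly like this (simplified from the diff quoted further down in this thread; the comments are explanatory and not from the source):

  XSync(ps->dpy, False);          /* wait until the X server has processed all X Render painting */
  if (ps->o.vsync_use_glfinish)
    glFinish();                   /* on nvidia, apparently required so GL actually sees those results */
  else
    glFlush();
  glXWaitX();                     /* make GL wait for X rendering before executing later GL calls */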

ghost commented 10 years ago

Thanks for your quick response, although the flickering persists on my end. ¬¬

Perhaps this also has to do with i3/urxvt/X itself, but what do I know? For now, I guess I can live with it since this isn't a total catastrophe, and obviously all my problems are caused by nvidia… on a 4-year-old standard Intel laptop chipset, compton runs just f***ing amazingly; it gets somehow kind of frustrating with nvidia, since the KWin guys also have massive problems with it, btw. :-( (Note to myself: next time, buy a mainboard with onboard graphics.)

Anyway, once again, thanks for your efforts.

richardgv commented 10 years ago

@cju:

Eeeh, if the old version works, it most likely isn't a problem in other parts... But I can't reproduce it here, and the debugging needs to be performed on your end.

ghost commented 10 years ago

Well, this is kind of crazy: I downgraded again to the old version, restarted X and compton, and I didn't notice any strong flickering like with the new version. Then I wanted to be really sure and rebooted, and there the flickering occurred again, although I was running the old version! Then I upgraded: the flickering stayed. Then I downgraded again: the flickering seems to have passed!

Since this was way too confusing, I created a new user, compiled the dev branch there, but this time built a proper package, installed it and ran compton with just the hybrid backend; well, it was flickering. Then I enabled the given vsync option and the flickering virtually passed away, but not totally: if you watch closely you can still see it during fading operations (I probably missed that earlier, I don't know anymore).

Anyway, I'll keep testing with the new user over the next few days, and if I discover anything new, I'll let you know – btw. your proposed test showed that glFinish is executed, so this is definitively not your fault, sorry for suspecting you. ;-)

richardgv commented 10 years ago

@cju: Thanks for all the experiments! But now I have no idea what the correct way is to ensure GL synchronizes correctly with X Render...

ghost commented 10 years ago

I'm really sorry that I have to bring this up again, but now there is one new testing result:

(To have a clean testing environment, I again created a new user, so there practically can't be any interdependencies from all the previous stuff. Then I tested for quite a while to be really sure this time. I built a master-branch package and a dev-branch package, which I installed alternately, so pacman made sure there was always only one clean compton version on my system at a time. I didn't create or use a config, so nothing could interfere from that side.)

So something is working differently in these two versions, although they should behave exactly the same when setting the glfinish flag in the dev version, shouldn't they? I am at my wit's end.

richardgv commented 10 years ago

@cju:

$ git diff master richardgv-dev

diff --git a/src/common.h b/src/common.h
index 5ac37ac..7786f82 100644
--- a/src/common.h
+++ b/src/common.h
@@ -513,6 +513,9 @@ typedef struct {
   bool dbe;
   /// Whether to do VSync aggressively.
   bool vsync_aggressive;
+  /// Whether to use glFinish() instead of glFlush() for (possibly) better
+  /// VSync yet probably higher CPU usage.
+  bool vsync_use_glfinish;

   // === Shadow ===
   /// Enable/disable shadow for specific window types.
@@ -1491,6 +1494,11 @@ parse_backend(session_t *ps, const char *str) {
       ps->o.backend = i;
       return true;
     }
+  // Keep compatibility with an old revision containing a spelling mistake...
+  if (!strcasecmp(str, "xr_glx_hybird")) {
+    ps->o.backend = BKEND_XR_GLX_HYBRID;
+    return true;
+  }
   printf_errf("(\"%s\"): Invalid backend argument.", str);
   return false;
 }
diff --git a/src/compton.c b/src/compton.c
index 95f6b77..3ddf46f 100644
--- a/src/compton.c
+++ b/src/compton.c
@@ -1905,7 +1905,10 @@ paint_all(session_t *ps, XserverRegion region, XserverRegion region_real, win *t
     XSync(ps->dpy, False);
 #ifdef CONFIG_VSYNC_OPENGL
     if (ps->glx_context) {
-      glFinish();
+      if (ps->o.vsync_use_glfinish)
+        glFinish();
+      else
+        glFlush();
       glXWaitX();
     }
 #endif
@@ -1939,7 +1942,10 @@ paint_all(session_t *ps, XserverRegion region, XserverRegion region_real, win *t
 #ifdef CONFIG_VSYNC_OPENGL
     case BKEND_XR_GLX_HYBRID:
       XSync(ps->dpy, False);
-      glFinish();
+      if (ps->o.vsync_use_glfinish)
+        glFinish();
+      else
+        glFlush();
       glXWaitX();
       paint_bind_tex_real(ps, &ps->tgt_buffer,
           ps->root_width, ps->root_height, ps->depth,
@@ -5421,6 +5427,7 @@ get_cfg(session_t *ps, int argc, char *const *argv, bool first_pass) {
     { "unredir-if-possible-exclude", required_argument, NULL, 308 },
     { "unredir-if-possible-delay", required_argument, NULL, 309 },
     { "write-pid-path", required_argument, NULL, 310 },
+    { "vsync-use-glfinish", no_argument, NULL, 311 },
     // Must terminate with a NULL entry
     { NULL, 0, NULL, 0 },
   };
@@ -5668,6 +5675,7 @@ get_cfg(session_t *ps, int argc, char *const *argv, bool first_pass) {
         // --write-pid-path
         ps->o.write_pid_path = mstrcpy(optarg);
         break;
+      P_CASEBOOL(311, vsync_use_glfinish);
       default:
         usage(1);
         break;

There's nothing I can see in the 18 lines of differences between 14ef6152bc8b7d9f586f3f9fdfe856609236e430 and 3e783f3e1e0dc4f4c0b22abee43f546031c4f122, unless the hash contains a magic spell. :-D I hope you compiled with the same CFLAGS?

And just to confirm, how is 9e053910f28806ae23879c9e636749b44e265b4f working? And compton doesn't print anything out, right?

ghost commented 10 years ago

Yes, the compilation process for all tested versions was exactly the same (this time, as explained above, I let makepkg do all the work for me, so this should be pretty safe).

https://github.com/chjj/compton/commit/9e053910f28806ae23879c9e636749b44e265b4f works of course, but also with occasional flickering. :(

Hm. Apart from that, is there any general difference between the master and the dev branch that I should know about?

(The only difference from the master branch I still see is the if statement that provides compatibility with the old spelling of "hybrid", but afaics this part just can't cause the occasional flickering, right?)

I always run compton in a terminal to observe its output, but as always, there's nothing but the well-known BadWindow errors (error 3) when opening a new window, and those were always there, on every machine, anyway…

ghost commented 10 years ago

Oh boy! Although I was so sure this time, the flickering ISN'T dependent on which branch I use. I'm really sorry, it's not your fault at all, but at least I now know how to reproduce the flickering, no matter which version I use:

The "magic trick" to cause flickering with fading enabled when switching workspaces is to wait at least about 30 seconds and do nothing before you switch!

If you keep switching workspaces every few seconds or faster, the flickering doesn't occur, but if you wait some time doing nothing and then switch again, it flickers again! Btw. the more fading time you set, the more it flickers; the flickering frequency seems to stay the same, though – I don't know if that is relevant…

(So I guess it was simply by chance that I just didn't wait long enough when testing the master branch and obviously waited long enough when testing the dev branch. I feel kind of stupid, but I had no clue that time could play a role in this case… Well, at least this also explains why I got so many different results over the last few days. ¬¬)

So I guess this is caused by some resource that goes to sleep (or at least clocks down) when it isn't needed constantly. Does that sound plausible to you? If so, where can I start debugging in this direction?

richardgv commented 10 years ago

@cju:

nvidia-drivers might put your GPU to sleep when idling. Changing "PowerMizer" -> "PowerMizer settings" -> "Preferred Mode" in nvidia-settings to "Prefer Maximum Performance" will keep your card at its maximum performance level (and cost you more in electricity).

Your kernel may also put the CPU into power saving. The way to keep your CPU awake depends on your operating system and the CPU governor selected in the kernel. Just ask Google. :-)

ghost commented 10 years ago

Well, I set both the CPU (performance governor) and the GPU (PowerMizer maximum performance) to full power, but that didn't help at all with the flickering… too bad, but worth a shot, of course.

I have another suspicion: could it be that, during the fading period from one workspace to another, the detection of the currently focused window sort of fails, since fading is a rather long transition (so it toggles, and that results in flickering)? It's just a guess, but I also get this issue with popups (like a Firefox dialog, for example) when closing them… If you watch really closely, you can also see it without fading, but then it's just one short flicker you'd hardly even notice if you weren't already suspecting it…

richardgv commented 10 years ago

@cju:

I have another suspicion: could it be that, during the fading period from one workspace to another, the detection of the currently focused window sort of fails, since fading is a rather long transition (so it toggles, and that results in flickering)? It's just a guess, but I also get this issue with popups (like a Firefox dialog, for example) when closing them… If you watch really closely, you can also see it without fading, but then it's just one short flicker you'd hardly even notice if you weren't already suspecting it…

  1. Does it occur on other backends? If it's something other than a pure renderer issue, it most likely will occur on other backends as well.
  2. You could trace compton with apitrace. Please keep --glx-no-rebind-pixmap disabled when doing so. It may get us some more info to diagnose, though I suppose it's more likely that the issue will disappear while you are tracing it with apitrace.
  3. Adding a glFlush() or glFinish() call after paint_bind_tex_real(ps, &ps->tgt_buffer, ... (now in src/compton.c, line 1950) might help.
ghost commented 10 years ago
  1. It seems like it doesn't occur with the glx backend; however, with xrender, it does. Hm. I don't know, maybe I'm asking too much, but is it possible in hybrid mode to let glx also perform this step, or would this inevitably lead to the well-known issues https://github.com/chjj/compton/issues/155 and https://github.com/chjj/compton/issues/152 again?
  2. Do I still need to do this, since this actually seems to be a rendering issue, or is it pointless then?
  3. I'm not done with testing, sorry, it was a really long day today, but I at least added the following lines:
#1950   paint_bind_tex_real( … );
1951    if (ps->o.vsync_use_glfinish)
1952       glFinish();
1953    else
1954       glFlush();
#1955   glx_render( … );

So to be sure before I start extensive testing tomorrow: Did you mean it like that? Could you please explain briefly why this would be needed again at this place? Noobish guess: Could even an additional glXWaitX() do any good here?

As always: thanks you very much.

richardgv commented 10 years ago

It seems like it doesn't occur with the glx backend; however, with xrender, it does.

If it happens with the X Render backend as well... then it isn't a simple problem, I guess, and I have been thinking along the wrong path since the beginning... It's hard to imagine that the X Render backend has such an obvious issue, though.

There's one situation that may cause a somewhat flicker-like effect: your WM adjusting the window stacking order. However, that typically happens only once, at the start of fading.

And, does it flicker with xcompmgr?

Do I still need to do this, since this actually seems to be a rendering issue, or is it pointless then?

If it happens with the X Render backend as well... well, a tracing result still helps to some degree.

If you can no longer reproduce the issue with apitrace, please record a video showing the problem if possible. I need to see what is displayed when the flicker occurs and re-evaluate the problem.

So to be sure before I start extensive testing tomorrow: Did you mean it like that? Could you please explain briefly why this would be needed again at this place?

Yes. I thought glXBindTexImageEXT() might have a synchronization issue -- which is not possible if it occurs on the X Render backend, though.

Noobish guess: Could even an additional glXWaitX() do any good here?

Probably. Go ahead and try.

ghost commented 10 years ago

Well, your astonishment about this also happening with xrender made me test it once again. The result: this doesn't seem to be the same flickering, but only "glitches" caused by bad vsync. I didn't notice the difference at first because, when you use inactive-dim (which I really like, btw.) and switch between windows and such, the failing vsync looks like a short flickering; with longer fading times, it becomes clearer. Oh dear. ¬¬

Anyway, all this stuff probably doesn't matter anymore, since I added the lines above plus the glXWaitX(), and it looks like you were right: it doesn't flicker anymore. :smile:

#1950   paint_bind_tex_real( … );
1951    if (ps->o.vsync_use_glfinish)
1952       glFinish();
1953    else
1954       glFlush();
1955    glXWaitX();
#1956   glx_render( … );

So if this isn't likely to do any kind of harm to any other users, is there a chance that you could merge these lines into the next commit? Please? ;-)

richardgv commented 10 years ago

@cju:

So if this isn't likely to do any kind of harm to any other users, is there a chance that you could merge these lines into the next commit? Please? ;-)

  1. I'm glad that you found a working solution (workaround). :-)
  2. Could you confirm whether the issue has gone away completely or is just happening very rarely?
  3. Is --vsync-use-glfinish still necessary right now?
  4. I hope you didn't spot much of a further increase in CPU usage, did you?
ghost commented 10 years ago
  1. Me too. ^^
  2. Of course I don't know if this is totally bullet-proof, but I haven't managed to reproduce the flickering since then.
  3. It seems so. But since it isn't activated by default and you have to opt in manually, this shouldn't be a problem for other users?
  4. Nope. But there's one thing I forgot to tell you about this: at the beginning, I also had increased CPU usage with glfinish, but ever since the update of mesa (https://www.archlinux.org/packages/extra/x86_64/mesa/) last week, this issue seems gone. I'm not totally sure it was really the mesa update, but at that time there weren't any other updates that had anything to do with graphics. So from that moment on, I didn't notice any difference between glflush and glfinish anymore concerning the CPU usage… In the changelog they say the following, perhaps you are able to make use of this:

    Revert GLXContextID typedef from glx.h (FS#38392) git-svn-id: file:///srv/repos/svn-packages/svn@203235 eb2447ed-0c53-47e4-bac8-5bc4a241df78

richardgv commented 10 years ago

@cju:

Of course I don't know if this is totally bullet-proof, but I haven't managed to reproduce the flickering since then.

I will add it after a few days if we find no other issues.

It seems so. But since it isn't activated by default and you have to opt in manually, this shouldn't be a problem for other users?

Yeah, I'm just asking.

Nope. But there's one thing I forgot to tell you about this: at the beginning, I also had increased CPU usage with glfinish, but ever since the update of mesa (https://www.archlinux.org/packages/extra/x86_64/mesa/) last week, this issue seems gone. I'm not totally sure it was really the mesa update, but at that time there weren't any other updates that had anything to do with graphics. So from that moment on, I didn't notice any difference between glflush and glfinish anymore concerning the CPU usage… In the changelog they say the following, perhaps you are able to make use of this:

Eeeeeeh... Well, I saw a vast increase in CPU usage from glFinish() here with nvidia-drivers. Thanks for the info, but at least "Revert GLXContextID typedef from glx.h" isn't going to have such a great effect.

ghost commented 10 years ago

I will add it after a few days if we find no other issues.

So, did you find anything?

richardgv commented 10 years ago

@cju:

None, but seemingly nobody else has encountered the issue either... Anyway, it's been added to the richardgv-dev branch. The CPU usage still isn't very tolerable here, though.

ghost commented 10 years ago

Thank you very much. The fact that nobody else has encountered this issue is probably because only very few people use xr_glx_hybrid AND nvidia AND things like urxvt at the same time, since there's actually no need to use it if everything runs fine with glx, I suppose…

Concerning the CPU usage: may I ask which distro you use? Because on Wheezy, I got the high usage when using glFinish() too, but as I said before, this problem disappeared on Arch within the last week. I'd bet this has at least something to do with all the new X stuff that came in here, since I didn't change anything else – but wait, there is actually one thing I did: I set my graphics card to fixed frequencies, but I guess that just can't have such effects, right? I would try it out myself, of course, but I'm not at home at the moment, sorry.

richardgv commented 10 years ago

@cju:

Thank you very much. The fact that nobody else has encountered this issue is probably because only very few people use xr_glx_hybrid AND nvidia AND things like urxvt at the same time, since there's actually no need to use it if everything runs fine with glx, I suppose…

Indeed.

Concerning the CPU usage: may I ask which distro you use? Because on Wheezy, I got the high usage when using glFinish() too, but as I said before, this problem disappeared on Arch within the last week. I'd bet this has at least something to do with all the new X stuff that came in here, since I didn't change anything else – but wait, there is actually one thing I did: I set my graphics card to fixed frequencies, but I guess that just can't have such effects, right? I would try it out myself, of course, but I'm not at home at the moment, sorry.

I use Gentoo ~amd64, xorg-server-1.15.0 (no good with 1.14.5 either), nvidia-drivers-331.38 (it happens with 331.20 as well), mesa-9.2.5 (upgrade to 10.0.2 is not fun, quite a few package breakages).

If by setting your graphics card to fixed frequencies you mean using "Prefer Maximum Performance" in nvidia-settings, which pins the performance level to 3, then no, it doesn't help here, unfortunately.

ghost commented 10 years ago

I'm sorry I have to bring this up again, but I had some spare time over the last few days, so I decided to test some window managers other than i3.

The result was that the flickering issue only seems to occur with i3; with dwm or awesome, for example, this problem just doesn't appear… I tested with the old version ebfd4c9, where this problem is still present, of course. ;-)

So since this is sooo specific, I guess it would be best to undo this fix (or at least make it optional), because obviously nobody other than me seems to need it, yet everybody is affected without gaining any advantage, and that makes me feel kind of bad about it… If I stick with i3, I can patch compton myself very easily via the PKGBUILD, so that would be alright.

richardgv commented 10 years ago

@cju:

Hi, cju, thanks for the time you've spent testing compton! As of 44faf1ae9a0bfad2f23a6b92296f8f8ac944886f, I could still see the flickering issue with fvwm-2.6.5 unless I enabled --vsync-use-glfinish, so I suppose it might be slightly too early to state that the issue only occurs on i3. It's a complicated timing issue that I still don't understand (and likely never will until I get assistance from Xorg developers, or somebody else very familiar with synchronization between X Render and OpenGL). Presently, unless --vsync-use-glfinish is enabled, I don't see that the workaround has any other visible effects on the operation of compton.

ghost commented 10 years ago

Two things I'd like to add:

1) Due to https://github.com/chjj/compton/issues/181, I'm now using xr_glx_hybrid again, and I also need --vsync-use-glfinish to prevent flickering, no matter what WM I use. But the workaround described under https://github.com/chjj/compton/issues/163#issuecomment-31966712 is really only needed in i3; I tried six (!) other WMs, and none but i3 needs those lines, so could you please remove them? Since I'm going to change my WM anyway, there's nobody left who would gain any advantage from these lines…

2) It's a pretty dumb question, but why is xr_glx_hybrid written with underscores and not with dashes like opengl-swc and all the other options? When writing a config file, this is a bit confusing, so could you please unify the spelling (I'd prefer the dashes :smile: )? Or is there a specific reason why the backend has to have these underscores? Alternatively, you could allow both spellings, i.e. xr_glx_hybrid and xr-glx-hybrid, like is allowed for kernel modules. That would be very nice, too.

richardgv commented 10 years ago

Due to https://github.com/chjj/compton/issues/181, I'm now using xr_glx_hybrid again, and I also need --vsync-use-glfinish to prevent flickering, no matter what WM I use. But the workaround described under https://github.com/chjj/compton/issues/163#issuecomment-31966712 is really only needed in i3; I tried six (!) other WMs, and none but i3 needs those lines, so could you please remove them? Since I'm going to change my WM anyway, there's nobody left who would gain any advantage from these lines…

I don't want to see those i3 users shouting at me later. I tested with --benchmark and the performance hit is not too significant here -- Less than 10%. It seems worthwhile to keep the workaround code in order to save me explaining to i3 users again and again. :-D Do you spot any significant decrease in performance there, or just want "the purity"?

It's a pretty dumb question, but why is xr_glx_hybrid written with underscores and not with dashes like opengl-swc and all the other options? When writing a config file, this is a bit confusing, so could you please unify the spelling (I'd prefer the dashes :smile: )? Or is there a specific reason why the backend has to have these underscores? Alternatively, you could allow both spellings, i.e. xr_glx_hybrid and xr-glx-hybrid, like is allowed for kernel modules. That would be very nice, too.

I forgot why I used underscores. I must have been insane that night. :-D compton (9950d08, richardgv-dev) now accepts xr-glx-hybrid as an alias. Thanks for the suggestion!
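
(For the curious: a minimal sketch of how such an alias could be handled in parse_backend(), modelled on the existing compatibility branch shown in the diff earlier in this thread -- this is not necessarily how 9950d08 actually implements it.)

  // Accept the dashed spelling as an alias, alongside the old misspelling.
  if (!strcasecmp(str, "xr-glx-hybrid") || !strcasecmp(str, "xr_glx_hybird")) {
    ps->o.backend = BKEND_XR_GLX_HYBRID;
    return true;
  }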

ghost commented 10 years ago

Thanks for the feel-good alias! :)

I don't want to see those i3 users shouting at me later. I tested with --benchmark and the performance hit is not too significant here -- Less than 10%. It seems worthwhile to keep the workaround code in order to save me explaining to i3 users again and again. :-D Do you spot any significant decrease in performance there, or just want "the purity"?

Well, of course my GPU/CPU doesn't burn like hell with the workaround, and I can confirm your performance figures, but I noticed that with the workaround my average GPU temperature, after hours of doing the same things, is consistently 1-2°C higher than without it. Not a big deal, I know, but somehow it bothers me…

I completely understand why you'd like to keep this, and this is probably a bit annoying for you, but keep in mind that nobody other than me has ever reported this. Also, there aren't that many i3 users out there with my hardware/software combo, so it isn't very likely that somebody will bring this up again – but there are many others who use hybrid and get no benefit from this. And yes, I like purity, touché. :-D

How about a deal: you remove those few lines, and if there reeeaaally ever is somebody (i.e. another i3 user) who would actually profit from them, I'll kill him I'll convince him to use another WM, he can yell at me, you can just reimplement these lines straightaway, and I'll officially apologize for making so much noise. :-D What do you say?

richardgv commented 10 years ago

Nope, I can't make deals on an open-source project. :-D But I will make it optional if I can't resolve the flickering issue through other means. The X Sync fence patch in #181 doesn't resolve this problem, unfortunately. I plan to seek support from nVidia later. Lend me some of your luck so they will answer my question. :-D

ghost commented 10 years ago

Hehe, I knew you'd say something like that! :-D But don't get me wrong, I'm totally fine with making it optional, so: thank you very much.

Believe me, you don't want any of my luck, because there's almost none concerning things like these. ;-)