ControlSystemStudio / cs-studio

Control System Studio is an Eclipse-based collection of tools to monitor and operate large-scale control systems, such as those found in the accelerator community.
https://controlsystemstudio.org/
Eclipse Public License 1.0

CS-Studio high CPU on QuantumRenderer #2396

Open · berryma4 opened this issue 6 years ago

berryma4 commented 6 years ago

I have a CS-Studio instance that seems to get into a bad state over time (high CPU). I am running Debian Jessie with Oracle Java 1.8.0_151. The thread dump shows this as the running thread:

"QuantumRenderer-0" #3147 daemon prio=6 os_prio=0 tid=0x00007f706c98b800 nid=0x93f runnable [0x00007f70d5d2f000]
   java.lang.Thread.State: RUNNABLE
    at com.sun.prism.es2.GLContext.nReadPixelsInt(Native Method)
    at com.sun.prism.es2.GLContext.readPixels(GLContext.java:549)
    at com.sun.prism.es2.ES2RTTexture.readPixels(ES2RTTexture.java:328)
    at com.sun.prism.es2.ES2RTTexture.readPixels(ES2RTTexture.java:336)
    at com.sun.javafx.tk.quantum.UploadingPainter.run(UploadingPainter.java:155)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at com.sun.javafx.tk.RenderJob.run(RenderJob.java:58)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at com.sun.javafx.tk.quantum.QuantumRenderer$PipelineRunnable.run(QuantumRenderer.java:125)
    at java.lang.Thread.run(Thread.java:748)

   Locked ownable synchronizers:
    - <0x000000009948cbb8> (a java.util.concurrent.ThreadPoolExecutor$Worker)
    - <0x000000009975ff28> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
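
For reference, a dump like the one above, including the locked ownable synchronizers, can be captured from the running JVM with the stock JDK tools; <pid> is a placeholder for the CS-Studio process id, which jps can list:

    # Print a thread dump with extra lock information (-l)
    # for the JVM identified by <pid>
    jstack -l <pid>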

I did see a very similar topic on Stack Overflow: https://stackoverflow.com/questions/25906033/javafx-8-quantumrenderer-high-cpu-usage

I'll check whether -Dprism.vsync=false helps. This will be hard to verify, since the problem is difficult to reproduce.
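For anyone trying the same thing: on an Eclipse-based product such as CS-Studio, a JVM system property like this is typically added after -vmargs in the launcher's .ini file. A minimal sketch (the exact file name varies by build, and -vmargs must be the last launcher argument, since everything after it is handed to the JVM):

    -vmargs
    -Dprism.vsync=false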

kasemir commented 6 years ago

Which JavaFX-based views do you have open?

I've recently started to run with -Dprism.verbose=true -Dprism.showdirty=true. This way the JFX displays highlight the region that was last redrawn, so I can spot which part of the screen keeps updating.
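As a sketch, the same flags can also be handed straight to the JVM when testing a standalone JavaFX application (the jar name here is only a placeholder):

    java -Dprism.verbose=true -Dprism.showdirty=true -jar some-display-app.jar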

With the SWT-hosted JFX displays, most of the overall CPU load tends to go into the complete update of the FXCanvas, but at least we can start to verify that the JFX displays themselves don't refresh unnecessarily, and thus will perform better once they run on a standalone JFX stage.
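For context, here is a minimal, self-contained sketch (not CS-Studio code) of the SWT-hosted setup described above, assuming SWT and the jfxswt library are on the classpath. The JFX scene renders off-screen and its pixels are uploaded into the FXCanvas widget; that upload is the path serviced by the UploadingPainter in the original stack trace:

    import javafx.embed.swt.FXCanvas;
    import javafx.scene.Scene;
    import javafx.scene.control.Label;
    import javafx.scene.layout.BorderPane;
    import org.eclipse.swt.SWT;
    import org.eclipse.swt.layout.FillLayout;
    import org.eclipse.swt.widgets.Display;
    import org.eclipse.swt.widgets.Shell;

    public class FXCanvasDemo {
        public static void main(String[] args) {
            Display display = new Display();
            Shell shell = new Shell(display);
            shell.setLayout(new FillLayout());

            // FXCanvas bridges SWT and JavaFX: the JFX scene is rendered
            // off-screen and its pixels are copied into this SWT widget,
            // which is the repaint path discussed above.
            FXCanvas canvas = new FXCanvas(shell, SWT.NONE);
            canvas.setScene(new Scene(new BorderPane(new Label("JFX inside SWT"))));

            shell.open();
            while (!shell.isDisposed())
                if (!display.readAndDispatch())
                    display.sleep();
            display.dispose();
        }
    }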

For what it's worth, I get this prism.verbose output both on Java 8 with JFX inside SWT and with standalone JFX on Java 9:

Prism pipeline init order: es2 sw 
Using java-based Pisces rasterizer
Using dirty region optimizations
Not using texture mask for primitives
Not forcing power of 2 sizes for textures
Using hardware CLAMP_TO_ZERO mode
Opting in for HiDPI pixel scaling
Prism pipeline name = com.sun.prism.es2.ES2Pipeline
Loading ES2 native library ... prism_es2
    succeeded.
GLFactory using com.sun.prism.es2.X11GLFactory
(X) Got class = class com.sun.prism.es2.ES2Pipeline
Initialized prism pipeline: com.sun.prism.es2.ES2Pipeline
Maximum supported texture size: 16384
Maximum texture size clamped to 4096
Non power of two texture support = true
Maximum number of vertex attributes = 16
Maximum number of uniform vertex components = 16384
Maximum number of uniform fragment components = 16384
Maximum number of varying components = 128
Maximum number of texture units usable in a vertex shader = 32
Maximum number of texture units usable in a fragment shader = 32
Graphics Vendor: Intel Open Source Technology Center
       Renderer: Mesa DRI Intel(R) Haswell Desktop 
        Version: 3.0 Mesa 17.0.1
 vsync: true vpipe: true

Performance-wise, when I run the display builder examples/fishtank in RCP CS-Studio and in Phoebus, the latter uses only 1/4 of the CPU and 1/2 of the memory.

berryma4 commented 6 years ago

It seems related to the Save Set Restore UI. I'll have to try the prism options to check what's updating. Thank you!