jtpio / jupyterlab-system-monitor

JupyterLab extension to display system metrics
BSD 3-Clause "New" or "Revised" License

Memory consumption shows 0 #6

Closed ghostshad closed 4 years ago

ghostshad commented 5 years ago

Hello,

I'm running the latest jupyterhub from conda-forge.

I've installed the system monitor module by running:

jupyter labextension install jupyterlab-topbar-extension jupyterlab-system-monitor

The widget appeared in the top bar and shows Mem: 0.0 B.

The /resuse endpoint returns JSON with a meaningful rss value.

Could you help to investigate the issue?

jtpio commented 5 years ago

@ghostshad thanks for the report.

For the endpoint do you mean /metrics?

Are there any errors in the dev tools console?

ghostshad commented 5 years ago

Well, /metrics is working as well.

I've reduced the set of extensions in use and no longer see any errors in the console, but the memory still shows 0.00.

I've checked the equivalent plugin for classic notebooks, which uses nbresuse; it works and pulls its data from the /resuse endpoint.

Currently I can see the requests to /metrics in the dev console. The response includes a process_virtual_memory_bytes entry with a float value that looks sane.
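
As a side note, here is a minimal sketch (assuming a locally running server on port 8888 and a placeholder token) of how the same /metrics endpoint can be queried by hand, to compare its payload with what the top-bar indicator shows; the exact payload format depends on the installed nbresuse version.

# Sketch only: fetch the /metrics endpoint and print the raw payload.
# BASE_URL and TOKEN are placeholders for your own server address and token.
from urllib.request import urlopen

BASE_URL = "http://localhost:8888"  # hypothetical local server address
TOKEN = "<your-token>"              # placeholder, not a real token

with urlopen(f"{BASE_URL}/metrics?token={TOKEN}") as response:
    print(response.read().decode("utf-8"))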

felihong commented 4 years ago

Hi @jtpio ,

I'm encountering a similar issue here after installing the extension:

pip install nbresuse
jupyter labextension install jupyterlab-topbar-extension jupyterlab-system-monitor

I can see the memory indicator at the top right, however it remains 0 the whole time.

(screenshot: memory indicator showing 0)

Any ideas? Thanks.

stefanvangastel commented 4 years ago

Same here, running JupyterHub with Kubernetes spawners.

jtpio commented 4 years ago

@Felihong @stefanvangastel @ghostshad is the value displayed in the JupyterLab status bar?

(screenshot: memory usage shown in the JupyterLab status bar)

stefanvangastel commented 4 years ago

@jtpio No, nothing at all next to the kernel icon. I do see an XHR call to /metrics returning the metrics just fine (I guess; which value is used by the status bar?)

jtpio commented 4 years ago

OK that makes sense, since the values in the status bar and in the top bar should be the same.

It looks like there are already a couple of open issues in other repositories, for example this one (and the others it links to): https://github.com/yuvipanda/nbresuse/issues/17

stefanvangastel commented 4 years ago

@jtpio Never mind, somewhere along the way the pip install nbresuse got lost in my Dockerfiles... added it and now it works just fine!

jtpio commented 4 years ago

great!

octavd commented 4 years ago

Hello @stefanvangastel, could you please tell me how you configured this for KubeSpawner?

Currently I have:

c.KubeSpawner.args = [
    "--ResourceUseDisplay.track_cpu_percent=True",
    "--ResourceUseDisplay.mem_limit=17179869184"
]

It's always showing 0...

@jtpio, I've also tried with:

c = get_config()
c.NotebookApp.ResourceUseDisplay.mem_limit = 17179869184
c.NotebookApp.ResourceUseDisplay.track_cpu_percent = True

and it's the same.

Could you please help?
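
For reference, a minimal sketch of the equivalent settings in a jupyter_notebook_config.py on the single-user server, assuming nbresuse picks up its ResourceUseDisplay traits from the standard notebook configuration (the same options as the command-line flags above):

# jupyter_notebook_config.py (sketch) - mirrors the --ResourceUseDisplay.* flags
# passed on the single-user server command line.
c = get_config()

# Memory limit in bytes: 17179869184 bytes = 16 GiB.
c.ResourceUseDisplay.mem_limit = 17179869184

# Enable CPU usage tracking.
c.ResourceUseDisplay.track_cpu_percent = True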

octavd commented 4 years ago

I've solved the memory issue and it displays now, but the CPU won't display at all. I'm using KubeSpawner.

I tried:

c.KubeSpawner.args = [
    "--ResourceUseDisplay.track_cpu_percent=True",
    "--ResourceUseDisplay.cpu_limit=0.75"
]

and got nothing.

Also with:

c.NotebookApp.ResourceUseDisplay.track_cpu_percent = True
c.NotebookApp.ResourceUseDisplay.cpu_limit = 0.75

or:

c.KubeSpawner.ResourceUseDisplay.track_cpu_percent = True
c.KubeSpawner.ResourceUseDisplay.cpu_limit = 0.75

and still nothing.

If I open /metrics it shows:

{"rss": 66908160, "limits": {"memory": {"rss": 536870912, "warn": false}, "cpu": {"cpu": 0.75, "warn": false}}, "cpu_percent": 0.0, "cpu_count": 8}

Any ideas?
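
As a diagnostic sketch (again with placeholder server address and token), the endpoint can be polled a few times while the kernel is busy, to see whether cpu_percent ever moves away from 0.0:

# Sketch only: poll /metrics and print cpu_percent from the JSON payload.
# BASE_URL and TOKEN are placeholders; the payload shape matches the one quoted above.
import json
import time
from urllib.request import urlopen

BASE_URL = "http://localhost:8888"  # hypothetical local server address
TOKEN = "<your-token>"              # placeholder, not a real token

for _ in range(5):
    with urlopen(f"{BASE_URL}/metrics?token={TOKEN}") as response:
        metrics = json.loads(response.read().decode("utf-8"))
    print(metrics.get("cpu_percent"))
    time.sleep(1)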

jtpio commented 4 years ago

@octavd which version of jupyterlab-system-monitor are you using? The latest that includes the CPU indicator is 0.6.0.

octavd commented 4 years ago

Oh, so that's why it wasn't working. :) Thank you very much, @jtpio!

jtpio commented 4 years ago

Closing as answered.

Don't hesitate to open a new issue to discuss more.