Open medwatt opened 2 weeks ago
I see similar spikes when in a buffer that molten is attached to. I suspect that this is b/c we're constantly polling the kernel in the background.
Fixing requires a large rewrite. Something I just haven't really had time to do. In an ideal world, this plugin would be written in rust, and I could ship it with luarocks.
I might never seriously start to work on that though, I don't have much motivation to do it.
@benlubas, thanks for letting me know. The thing is, most of the time I just have a jupyter notebook opened in the background. I was thinking of switching my notebook activities to neovim (for obvious reasons). However, consuming 4-5 % of CPU usage while doing nothing is too much. I guess I'll stick to jupyter notebooks for now, until something more optimized comes along.
I believe this is the function (in `runtime.py`) where the constant polling is done. Is there any reason why this code wasn't written with `asyncio`?
```python
def tick(self, output: Optional[Output]) -> bool:
    did_stuff = False
    assert isinstance(
        self.kernel_client,
        jupyter_client.blocking.client.BlockingKernelClient,
    )
    if not self.is_ready():
        try:
            self.kernel_client.wait_for_ready(timeout=0)
            self.state = RuntimeState.IDLE
            did_stuff = True
        except RuntimeError:
            return False
    if output is None:
        return did_stuff
    while True:
        try:
            message = self.kernel_client.get_iopub_msg(timeout=0)
            if "content" not in message or "msg_type" not in message:
                continue
            did_stuff_now = self._tick_one(output, message["msg_type"], message["content"])
            did_stuff = did_stuff or did_stuff_now
            if output.status == OutputStatus.DONE:
                break
        except EmptyQueueException:
            break
    return did_stuff
```
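For comparison, an event-driven version of this loop would await messages instead of spinning with `timeout=0`. The following is only a minimal sketch of that pattern: an `asyncio.Queue` stands in for the kernel's iopub channel (with jupyter_client's real `AsyncKernelClient`, `await client.get_iopub_msg()` would play the role of `await queue.get()`), and `drain_iopub` and the `None` sentinel are made-up names for this illustration, not molten code:

```python
import asyncio
from typing import Callable

async def drain_iopub(queue: "asyncio.Queue",
                      handle: Callable[[str, dict], None]) -> int:
    """Await iopub-style messages until a sentinel None arrives.

    Returns the number of messages actually handled. The coroutine
    suspends on `await queue.get()` -- no CPU is used while the
    (fake) kernel is silent, unlike a timeout=0 polling loop.
    """
    handled = 0
    while True:
        message = await queue.get()   # suspends until a message arrives
        if message is None:           # sentinel used by this sketch only
            return handled
        if "content" not in message or "msg_type" not in message:
            continue                  # skip malformed messages, as tick() does
        handle(message["msg_type"], message["content"])
        handled += 1
```

Because the loop awaits rather than polls, it costs nothing while idle; the open question in this thread is how to drive such a loop alongside the running nvim process.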
The reason is that I didn't write it; it came from magma.
The two things you could look into, if you're willing to contribute, are switching from the blocking kernel client to a non-blocking one, and switching from polling to not polling.
Potentially the issue with async is communicating back to the running nvim process; we might have/want to totally change the way that's done. But it should definitely be doable.
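One pattern that might help with the hand-off back to nvim is pynvim's `nvim.async_call`, which schedules a callable onto Neovim's event-loop thread from another thread. A rough sketch of that idea, with `pump_messages` and `render_output` as made-up names (not molten's real API), using a plain thread in place of a real kernel listener:

```python
import threading

def pump_messages(nvim, messages, render_output):
    """Forward kernel messages from a background thread to nvim's loop.

    Buffer/UI updates must happen on Neovim's main event loop, so each
    message is handed off via nvim.async_call, which is the pynvim
    mechanism for safe cross-thread scheduling.
    """
    def worker():
        for msg in messages:
            # render_output(msg) will run later, on nvim's own loop thread.
            nvim.async_call(render_output, msg)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join()  # in a real plugin the thread would run for the kernel's lifetime
```

The sketch assumes the background side only ever touches nvim through `async_call`; everything else about how molten would restructure its message flow is up for grabs.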
Btw, there's more than just that function, iirc.
Also, you might be interested in neopyter. Pretty cool plugin too.
make sure to do the following:
- read the README
- check existing issues (the `config problem` tag is helpful)
- try with the latest version of molten and image.nvim, latest releases and then main/master branches
- run `:UpdateRemotePlugins`
pacman
`:checkhealth molten`
```
molten-nvim ~
- OK NeoVim >=0.9
- OK Python >=3.10
- OK Python module pynvim found
- OK Python module jupyter-client found
- WARNING Optional python module cairosvg not found
  - ADVICE:
    - pip install cairosvg
- WARNING Optional python module pnglatex not found
  - ADVICE:
    - pip install pnglatex
- WARNING Optional python module plotly not found
  - ADVICE:
    - pip install plotly
- WARNING Optional python module kaleido not found
  - ADVICE:
    - pip install kaleido
- OK Python module pyperclip found
- OK Python module nbformat found
- OK Python module pillow found
```
`:checkhealth provider` (the python parts)
```
provider.python: require("provider.python.health").check()

Python 3 provider (optional) ~
- `g:python3_host_prog` is not set. Searching for python3 in the environment.
- Executable: /usr/bin/python3
- Python version: 3.12.6
- pynvim version: 0.5.0
- OK Latest pynvim is installed.

Python virtualenv ~
- OK no $VIRTUAL_ENV
```
Description
There's non-negligible CPU usage when using molten, even when doing nothing. The CPU usage drops to zero when focus shifts to something else. See the video below.
https://github.com/user-attachments/assets/aa9e711e-a269-48b1-aa8e-86b2418a4582
Reproduction Steps
Steps to reproduce the behavior. ie. open this file, run this code and wait for the output window to open, then do x
Optionally you can include a minimal config to reproduce the issue. This will help me figure things out much more quickly. You can find a sample minimal config here. If you include one, please also include the output of `pip freeze` from the python3 host program that you specify in the config.
Expected Behavior
A clear and concise description of what you expected to happen.