Closed jmanuel1 closed 5 months ago
I'm sorry to hear that @jmanuel1. Never happened to me in the current version of the plugin, but this doesn't mean that there is no bug there. It's certainly possible that one particular note breaks the code for some reason.
It's going to be a little tricky to debug without an example note that enables reproduction of the bug. And because the plugin processes multiple notes in parallel (so not even necessarily the 250th in line) it's going to be tricky to figure out which note fails (unless we find a good error message). It also occurred to me that I personally don't use HTML notes, and it's generally a good idea to do some testing around them (will work on it).
Tools-->Jarvis-->Exclude notebook from note DB. It's reversible, anyway.
I haven't enabled debugging. I also haven't checked the size of my notes... I'll get back to you on both of these.
Here are the logs I managed to capture that seemed relevant.
From DevTools: all-logs.txt, jarvis-logs.txt
Unfortunately, I couldn't just right-click and select "save as" in the devtools console, so I copied the above logs manually. They're a bit soupy.
From the profile directory: profile-folder-logs.txt
@alondmnt
I've been moving my largest notes to a folder I made specifically to exclude them from the database, and now the update stops at 450 notes.
EDIT: Now it's 750.
Hi @jmanuel1, sorry for the long wait (I was preoccupied with... life). I went over the logs, and although I could not spot any explicit error, I think the dynamics of the DB construction are visible in them. The logs make it easy to extract the last batch of notes that was being processed, in order to identify the culprit note(s) that stop the update from completing.
Here's how to read it:
.... Got message (3): joplin.data.get ['notes'] {fields: Array(5), page: 1, limit: 50}
(get the next 50 notes)
.... Got message (3): joplin.data.get (3) ['notes', 'd2e4331630a146be9bfc070704b5d6e2', 'tags'] {fields: Array(1)}
(get the tags of one note in the batch)
So the last group of 'tags' messages contains all the notes that were being processed when Jarvis failed for some reason. 'd2e4331630a146be9bfc070704b5d6e2' in this case is the note ID, and you can use it to locate the note and exclude it from the Jarvis DB, or share it if you wish, so I can try to figure out what went wrong. I understand that a batch of 50 candidate notes may be a little too ambiguous, so I'm attaching a custom Jarvis version that processes notes one by one (this is the only change from the current v0.7.0).
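For context, the log lines above suggest a pagination loop over the Joplin data API. A minimal sketch of that loop follows; `getPage` is a stand-in for `joplin.data.get`, which only exists inside a running plugin, so this is an illustration of the pattern rather than Jarvis's actual code:

```javascript
// Page through all notes, `limit` at a time, mirroring the
// joplin.data.get ['notes'] {page, limit} messages in the logs.
async function collectAllNotes(getPage, limit = 50) {
  const notes = [];
  let page = 1;
  while (true) {
    // In a real plugin this call would be:
    // await joplin.data.get(['notes'], { fields: [...], page, limit })
    const res = await getPage(page, limit);
    notes.push(...res.items);
    if (!res.has_more) break; // Joplin's paginated responses set has_more
    page += 1;
  }
  return notes;
}
```

Each returned note's ID is then used for the follow-up `['notes', id, 'tags']` requests seen in the logs.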
Once you've identified a note that failed, you may add the exclude.from.jarvis tag to it, and the next time the update runs it will be ignored. You may repeat this as many times as necessary.
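The exclusion check itself is simple. A sketch of what it presumably looks like, assuming the plugin receives a note's tags as an array of tag objects (as returned by the `['notes', id, 'tags']` endpoint); the function name is illustrative:

```javascript
// Return true when a note carries the exclude.from.jarvis tag and
// should therefore be skipped during the DB update.
function shouldSkipNote(tags, excludeTag = 'exclude.from.jarvis') {
  return tags.some((t) => t.title === excludeTag);
}
```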
Hope this helps.
Unzip this file and manually install the plugin: joplin.plugin.alondmnt.jarvis.jpl.zip
@alondmnt Sorry for letting this issue drop by the wayside. I do have something of note, though: when I installed the custom Jarvis version that processes notes one at a time, it seems like the process started to finish! I haven't actually seen it finish since it takes a while, but at the very least, the database update gets very far.
I might try to un-ignore some notes to see if the update process gets stuck on any of them, and also record the process to get evidence that it completes.
Thanks for the update @jmanuel1.
Note that if you use the latest Jarvis v0.8.2 there's a new setting, Notes: Parallel jobs, where you can select the number of notes that will be processed concurrently during a database update. Setting it to 1 is the same as the custom version that I posted above.
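A parallel-jobs setting like this is typically implemented by letting a fixed number of workers pull items off a shared index. A minimal sketch of that pattern (names are illustrative, not Jarvis's actual code):

```javascript
// Process `items` with at most `jobs` concurrent workers.
// Each worker repeatedly claims the next unprocessed index; because
// JavaScript is single-threaded, `next++` needs no locking.
async function processInParallel(items, worker, jobs = 10) {
  const results = new Array(items.length);
  let next = 0;
  async function run() {
    while (next < items.length) {
      const i = next++;
      results[i] = await worker(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(jobs, items.length) }, run)
  );
  return results;
}
```

With `jobs = 1` this degenerates to strictly sequential processing, which is why setting Parallel jobs to 1 reproduces the one-by-one custom build.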
I installed the latest Jarvis and didn't change the Parallel jobs setting (it's 10), and Jarvis completed updating the database. So, it seems like my issue's fixed. Thanks for the help.
I have the same issue. I set Parallel jobs to 1 for debugging.
It seems like there is a memory leak and a memory limit of some sort: as the DB is updated, RAM usage keeps growing (about 100 MB per note) until it hits a limit of around 4 GB. At that point systemd-coredump is invoked (Joplin freezes in the meantime) and the generation stops. Updating the DB again gets stuck at the same point. After restarting Joplin I can process more notes, but the more I generate, the more quickly RAM grows.
In the console I get:
Uncaught (in promise) Error: Failed to link vertex and fragment shaders.
at BA (/home/dave/.config/joplin-desktop/tmp/plugin_joplin.plugin.alondmnt.jarvis.js:2:1077496)
at ik.createProgram (/home/dave/.config/joplin-desktop/tmp/plugin_joplin.plugin.alondmnt.jarvis.js:2:1143235)
at /home/dave/.config/joplin-desktop/tmp/plugin_joplin.plugin.alondmnt.jarvis.js:2:1172657
at /home/dave/.config/joplin-desktop/tmp/plugin_joplin.plugin.alondmnt.jarvis.js:2:1173072
at Dz.getAndSaveBinary (/home/dave/.config/joplin-desktop/tmp/plugin_joplin.plugin.alondmnt.jarvis.js:2:1176303)
at Dz.runWebGLProgram (/home/dave/.config/joplin-desktop/tmp/plugin_joplin.plugin.alondmnt.jarvis.js:2:1172111)
at yB (/home/dave/.config/joplin-desktop/tmp/plugin_joplin.plugin.alondmnt.jarvis.js:2:1232377)
at Object.kernelFunc (/home/dave/.config/joplin-desktop/tmp/plugin_joplin.plugin.alondmnt.jarvis.js:2:1385239)
at e (/home/dave/.config/joplin-desktop/tmp/plugin_joplin.plugin.alondmnt.jarvis.js:2:375789)
at /home/dave/.config/joplin-desktop/tmp/plugin_joplin.plugin.alondmnt.jarvis.js:2:376684
(A quick search shows that this is a typical out-of-GPU-RAM error.)
Nothing relevant in the logs.
Do I have... too much stuff?
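(Editorial note: the per-note growth up to a hard limit described above is characteristic of GPU-backed tensors that are never released; in real TensorFlow.js code the usual remedy is wrapping work in tf.tidy() or calling tensor.dispose(). A toy illustration of the leak versus the scoped fix, with a plain counter standing in for GPU memory:)

```javascript
// Hypothetical stand-in for GPU buffers: each allocation bumps a
// counter, dispose() releases it. Not real TensorFlow.js code.
let liveBuffers = 0;
function allocate() {
  liveBuffers += 1;
  return { dispose: () => { liveBuffers -= 1; } };
}

// Leaky pattern: one intermediate per note is kept forever,
// mirroring the reported ~100 MB-per-note growth.
function embedLeaky(notes) {
  return notes.map(() => allocate());
}

// Scoped pattern: copy the result out, then release the intermediate,
// so memory usage stays flat no matter how many notes are processed.
function embedScoped(notes) {
  return notes.map(() => {
    const t = allocate();
    const result = { embedding: 'data copied off the GPU' };
    t.dispose();
    return result;
  });
}
```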
Hi @ElDavoo, sorry for the late response.
It's possible that the issue is caused by HTML notes, if you have any. HTML notes can lead to thousands of blocks being sent to the embedding model (due to processing that is intended for Markdown and incompatible with HTML), and may overload memory. I just published v0.9.1, which skips HTML notes and should avoid this problem. Please check it out.
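Joplin notes carry a markup_language field (1 for Markdown, 2 for HTML), so a skip like the one described can be sketched as a simple filter; whether Jarvis uses exactly this field and constant is an assumption here:

```javascript
// Joplin's note objects expose markup_language: 1 = Markdown, 2 = HTML.
const MARKUP_LANGUAGE_HTML = 2;

// Keep only Markdown notes so HTML notes never reach the embedding model.
function filterMarkdownNotes(notes) {
  return notes.filter((n) => n.markup_language !== MARKUP_LANGUAGE_HTML);
}
```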
I expected the logs to show messages like
Got message (3): joplin.data.get (3) ['notes', 'd2e4331630a146be9bfc070704b5d6e2', 'tags'] {fields: Array(1)}
which may enable you to identify the culprit note (the last one before Joplin gets stuck). Maybe you need to enable debugging? Once you identify a note that causes the problem, you can tag it with exclude.from.jarvis.
There could be an issue with the Universal Sentence Encoder model on your machine (the default model for embeddings). You could try to serve other offline models using Ollama.
I have a lot of notes, most of which were imported from Evernote as HTML. When Jarvis starts to update the note database, it gets to a certain number of notes, usually 250, before not progressing any further. I haven't noticed any logs in DevTools from Jarvis, and the "Universal Sentence Encoder" file exists and has some data in it. And occasionally the related notes search comes up and I can actually use it, but not often.
I wonder if one of my notes is causing the issue, but since I didn't find logs, I wouldn't know which one.