Closed: derritter88 closed this issue 2 years ago.
I am not sure, but from my point of view the tagging makes the Preview Generator app do something that causes it to crash?! And recognize writes everything to the database multiple times.
I have disabled the preview generation for category "Movie" - let's see how it will do.
So now it simply fails with:
root@wwwubuntu:/var/www/cloud# sudo -u www-data php ./occ recognize:classify -vvv
Failed to classify images
The Nextcloud log contains no entries that indicate why it fails.
This may be due to your CPU architecture. You can try running the classifier script directly to check:
$ apps/recognize/bin/node apps/recognize/src/classifier_imagenet.js path/to/a/photo.jpg
I do not think that this is a CPU architecture issue, as I pass through to my VM what kind of CPU I am using, and it also has the mentioned AVX flag:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7272 12-Core Processor
Stepping: 0
CPU MHz: 2894.363
BogoMIPS: 5788.72
Virtualization: AMD-V
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 4 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
Also a test run worked fine:
sudo -u www-data apps/recognize/bin/node apps/recognize/src/classifier_imagenet.js /mnt/cloud/nextcloud/3A60C52D-9415-4F28-A2B7-71A8CBD7A9E3/files/IMG_20211102_222424.jpg
2021-11-03 10:54:07.618633: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-11-03 10:54:09.609194: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 36000000 exceeds 10% of free system memory.
2021-11-03 10:54:09.665774: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 144000000 exceeds 10% of free system memory.
2021-11-03 10:54:09.696105: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 144000000 exceeds 10% of free system memory.
2021-11-03 10:54:09.719646: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 144000000 exceeds 10% of free system memory.
2021-11-03 10:54:09.742251: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 144000000 exceeds 10% of free system memory.
{
className: 'caldron',
probability: 0.557533860206604,
rule: { label: 'cooking', threshold: 0.1 }
}
{
className: 'cup',
probability: 0.08837056905031204,
rule: { label: 'cup', threshold: 0.1 }
}
{
className: 'coffee mug',
probability: 0.043676361441612244,
rule: { label: 'coffee', threshold: 0.2, categories: [ 'beverage' ] }
}
{
className: 'tiger cat',
probability: 0.01587754860520363,
rule: {
label: 'cat',
priority: 5,
threshold: 0.1,
categories: [ 'animal' ]
}
}
{
className: 'egyptian cat',
probability: 0.009242596104741096,
rule: {
label: 'cat',
priority: 5,
threshold: 0.1,
categories: [ 'animal' ]
}
}
{
className: 'pitcher',
probability: 0.008557061664760113,
rule: { label: 'beverage', threshold: 0.1 }
}
{
className: 'teapot',
probability: 0.006404006388038397,
rule: { threshold: 0.1, categories: [ 'beverage' ] }
}
["cooking"]
Nice! That's good. Then we can try to increase the log verbosity to debug, to see what's happening when the app does the same thing.
Do you mean the Nextcloud log itself or just the app? I already run the command with -vvv.
The Nextcloud log, yeah; the command doesn't print any logs at the moment (which I'm aware isn't ideal).
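For reference, a minimal way to switch the server log level to debug and back, assuming the standard occ tooling and config.php-based logging:
$ sudo -u www-data php ./occ config:system:set loglevel --value=0 --type=integer   # 0 = debug
$ sudo -u www-data php ./occ config:system:set loglevel --value=2 --type=integer   # back to the default (warning)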
The problem with setting the general log level to debug is that I get countless "deprecated" entries and the log rotates a lot:
{"reqId":"YYJe_HfHlXE-HTCFlBOEDAAAVBE","level":0,"time":"2021-11-03T11:05:48+01:00","remoteAddr":"192.168.10.2","user":"3A60C52D-9415-4F28-A2B7-71A8CBD7A9E3","app":"video_converter","method":"GET","url":"/apps/logreader/poll?lastReqId=YYJe-3fHlXE-HTCFlBOECwAAVxE","message":"/appinfo/app.php is deprecated, use \\OCP\\AppFramework\\Bootstrap\\IBootstrap on the application class instead.","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0","version":"22.2.0.2"}
So I left it running at debug level and it crashed again, once more with an SQL exception.
Classifying photos of user 4479F707-255F-465B-9468-9D92669CDB71
Failed to classify images
An exception occurred while executing a query: SQLSTATE[HY000]: General error: 7 server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The debug log of Nextcloud does not show anything useful.
Do you still get errors in Postgres? That may be the reason the connection is terminated.
Next step of trying: Bypass my DB loadbalancer and connect directly to the master DB.
Do you still get errors in Postgres? That may be the reason the connection is terminated.
I am not sure if this is a Postgres error or an HAProxy one.
How does recognize communicate with Postgres? Does it run some kind of batch job, i.e. recognize everything before inserting anything into the DB, or does it write synchronously as it goes?
There are two points of communication with the DB. During a classification run, each file is tagged separately, but there is a sort of batch job that adds the unrecognized tag to all files that have been processed but yielded no tags.
Hmmm, it might be possible that it runs into a database timeout, as my users have tons of pictures? I will keep an eye on the direct Postgres communication to verify whether or not it has another issue.
On HAProxy the regular timeout is set to 30 minutes.
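For reference, the relevant client/server timeouts usually live in haproxy.cfg; a quick way to double-check what is actually configured, assuming the default config path:
$ grep -nE 'timeout (client|server|connect)' /etc/haproxy/haproxy.cfg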
I spoke too soon, it appears I already refactored the latter function to tag files individually: https://github.com/marcelklehr/recognize/blob/master/lib/Service/TagManager.php#L62
Is this also part of the current version of your app from the Nextcloud store?
Ah. Good point :D let's see
But why did I see so many "tag already exists" database entries?
If you still see those, that's definitely something we should investigate. Possibly that's a nextcloud bug.
Yup, it still shows up in the DB log:
2021-11-03 11:38:59.521 UTC [3509011] nextcloud@nextcloud STATEMENT: INSERT INTO "oc_systemtag_object_mapping" ("objectid", "objecttype", "systemtagid") VALUES($1, $2, $3)
2021-11-03 11:38:59.522 UTC [3509011] nextcloud@nextcloud ERROR: duplicate key value violates unique constraint "oc_systemtag_object_mapping_pkey"
2021-11-03 11:38:59.522 UTC [3509011] nextcloud@nextcloud DETAIL: Key (objecttype, objectid, systemtagid)=(files, 9710849, 1) already exists.
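To see whether that mapping already exists from an earlier run, the table from the error can be queried directly; a sketch, assuming local superuser access to the nextcloud database named in the log:
$ sudo -u postgres psql nextcloud -c "SELECT objecttype, objectid, systemtagid FROM oc_systemtag_object_mapping WHERE objectid = '9710849' AND systemtagid = 1;"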
Could this come from my testing during spring/summer when I tried to have my GPU working with recognize?
Could this come from my testing during spring/summer when I tried to have my GPU working with recognize?
I doubt it.
Hmmm... not sure what is happening or why, then.
Additionally: after I changed to direct DB connections it keeps running and running and so on - no timeout so far.
So maybe it does some asynchronous stuff in the background?
So no timeout but the "generic" issue again:
Classifying photos of user 4479F707-255F-465B-9468-9D92669CDB71
Failed to classify images
Classifier process error
Maybe you're running out of memory?
Nope - my VM has 24 GB of dedicated RAM. Yesterday, when recognize was able to run, my VM consumed around 14 to 15 GB of RAM - so it should have enough space.
I have also checked PHP-FPM log and Apache logs - nothing would indicate that I am running out of memory.
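One quick way to rule the kernel OOM killer in or out is to check the kernel log right after a crash; a small sketch, assuming dmesg/journalctl are available:
$ dmesg -T | grep -i 'out of memory'
$ journalctl -k --since "2 hours ago" | grep -i 'oom'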
Is there any kind of reset for recognize?
Even manually, like dropping tables in the DB and removing files from the server?!
Maybe it would make sense to have a clean starting point.
In the settings there's a button that resets the tags
By the way, you might be right regarding free memory - have a look at this:
I need to put it in a separate file as it is too big...
I will keep an eye on the RAM usage - currently it's ~16 GB out of 24 GB
So it might be a memory issue...
Are there any details on how much RAM recognize or TensorFlow would like to have? I am upgrading my VM to 32 GB of RAM now.
Are there any details on how much RAM recognize or TensorFlow would like to have?
It shouldn't require more than 4-5GB. The script may have a memory leak, though.
I have restarted the server for the memory change and the memory consumption jumps around. It was around 10 GB of RAM - now it is around 15 GB.
I will continue to monitor it.
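To get numbers rather than an impression, the resident memory of the node processes can be logged periodically; a minimal sketch, assuming the classifiers run as plain node processes:
$ while sleep 60; do date; ps -o pid,rss,cmd -C node; done | tee node-rss.log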
The script consumes a lot of RAM - after an hour of running, my memory usage is at ~26 GB.
Okay, so the memory consumption went up to ~31 GB and fell back down to 11 GB. Then the script crashed.
From my point of view it looks like the script consumes more and more RAM until the OS says "no more".
From my point of view it looks like the script consumes more and more RAM until the OS says "no more".
Alright, then I'll have to find the memory leak. Which of the three classifier scripts is the culprit?
Which of the three classifier scripts is the culprit?
How can I determine this?
If you use htop, for example, it should show the process command line, e.g. recognize/bin/node recognize/src/classifier_imagenet.js -
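Alternatively, a one-shot listing sorted by resident memory shows which classifier is currently the heavy one; a sketch using plain ps:
$ ps -eo rss,cmd --sort=-rss | grep 'classifier_' | grep -v grep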
Thanks! It's exactly that script which is running and consuming RAM & CPU:
/var/www/cloud/recognize/bin//node /var/www/cloud/apps/recognize/src/classifier_imagenet.js -
Additionally, I have set the log rotation size to 10 GB and turned debug logging on. So I should be able to determine which picture would be the issue.
Unfortunately there is no really useful information in the debug log. What would an error message look like in the log?
So I should be able to determine which picture would be the issue.
There is likely no single picture causing this. It's a software bug that causes memory to accumulate without ever being freed.
Ahhh... okay. Just a question: does recognize only do its job on users' pictures, or also on the resized preview images generated by the preview app?
It should only process pictures in users' files
I'm also suffering from the same issue. I'm using an Ubuntu VM inside Proxmox, with 6 GB of total RAM and 8 cores.
The system runs so low on RAM that it starves the rest of the system; Prometheus exporters die and don't respond until the memory is released, and you can see the gaps in the graph below.
I'll also disable the plugin until the issue is resolved.
One interesting note: the number of remaining images to be processed has stayed the same since the issue started, so it seems there is a certain file that the app can't process - it could be a big file or something like that.
One interesting note: the number of remaining images to be processed has stayed the same since the issue started, so it seems there is a certain file that the app can't process - it could be a big file or something like that.
Nice find, thanks!
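If a single oversized file is the trigger, one way to narrow it down is to look for unusually large images under the data directory; a sketch, with path/to/nextcloud-data standing in for the real location:
$ find path/to/nextcloud-data -type f \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.png' \) -size +100M -exec ls -lh {} \;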
Describe the bug
After some time recognize throws an error and stops working (see additional context).
To Reproduce
Steps to reproduce the behavior:
Recognize (please complete the following information):
Server (please complete the following information):
Additional context
Error message from bash:
Nextcloud log:
+ directly after such an error:
PostgreSQL log (literally millions of entries, log = 1.4 GB):
From within the PostgreSQL logs I cannot find anything that indicates the connection being closed.