Open pgudge opened 6 years ago
Wow, I've never encountered that error before, and I have some fairly large albums myself, but increasing the node memory allocation as you mention might help in your case. I found a mention of the --max-old-space-size=<size> argument that you could try passing to node. I would try it myself, but I'm not sure I could reproduce the issue.
For the argument to take effect inside the Docker container, you could try using the NODE_OPTIONS environment variable to pass it through to node. The docker run command might look something like docker run -e 'NODE_OPTIONS=--max-old-space-size=4096' jwater7/responsive-photo-gallery for roughly a 4 GB heap limit. I'm not sure what the default size is, so you might have to play around with it a bit.
If you get that to work you won't need this, but as an alternative you could override the command the container runs by passing a different one to docker run, e.g. building the frontend and then starting node with --max-old-space-size=4096 directly (see the sketch below; the && chaining has to be quoted so it runs inside the container rather than being split by the host shell).
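Putting both options together, the commands might look something like this (a sketch, assuming the image's default command can be overridden and a POSIX shell is available inside the container):

```sh
# Option 1: raise the V8 old-space limit through an environment variable
# (the value is in megabytes, so 4096 is roughly a 4 GB heap limit)
docker run -e 'NODE_OPTIONS=--max-old-space-size=4096' jwater7/responsive-photo-gallery

# Option 2: override the container's command; the && chain is quoted so it
# runs inside the container instead of being interpreted by the host shell
docker run -it jwater7/responsive-photo-gallery \
  sh -c 'npm run build-frontend && node --max-old-space-size=4096 ./bin/www'
```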
If this continues to be a problem, maybe we should look into using a different node module instead of exif-reader. Please let me know how it goes and thanks for the nice writeup.
Thanks for the info. I added that option, and indeed it ran for longer: 7 hours this time! I just checked and it looks like it hit an exception during thumbnail generation, something to do with dates:
{"log":" returnMetadata['modifyDate'] = new Date(metadata.format.tags.creation_time); //'2018-03-26 18:07:46'\n","stream":"stderr","time":"2018-09-03T07:41:47.320204321Z"}
{"log":" ^\n","stream":"stderr","time":"2018-09-03T07:41:47.320211442Z"}
{"log":"\n","stream":"stderr","time":"2018-09-03T07:41:47.320214757Z"}
{"log":"TypeError: Cannot read property 'creation_time' of undefined\n","stream":"stderr","time":"2018-09-03T07:41:47.320217784Z"}
{"log":" at /usr/src/app/node_modules/fast-image-processing/index.js:233:68\n","stream":"stderr","time":"2018-09-03T07:41:47.320221026Z"}
{"log":" at handleCallback (/usr/src/app/node_modules/fluent-ffmpeg/lib/ffprobe.js:106:9)\n","stream":"stderr","time":"2018-09-03T07:41:47.320224209Z"}
{"log":" at handleExit (/usr/src/app/node_modules/fluent-ffmpeg/lib/ffprobe.js:223:11)\n","stream":"stderr","time":"2018-09-03T07:41:47.320227398Z"}
{"log":" at Socket.\u003canonymous\u003e (/usr/src/app/node_modules/fluent-ffmpeg/lib/ffprobe.js:248:9)\n","stream":"stderr","time":"2018-09-03T07:41:47.320230568Z"}
{"log":" at Socket.emit (events.js:187:15)\n","stream":"stderr","time":"2018-09-03T07:41:47.320234175Z"}
{"log":" at Pipe._handle.close (net.js:598:12)\n","stream":"stderr","time":"2018-09-03T07:41:47.320255442Z"}
{"log":"\n","stream":"stderr","time":"2018-09-03T07:42:49.574190899Z"}
{"log":"(sharp:56): GLib-CRITICAL **: g_hash_table_lookup: assertion 'hash_table != NULL' failed\n","stream":"stderr","time":"2018-09-03T07:42:49.574213438Z"}
{"log":"\n","stream":"stderr","time":"2018-09-03T07:42:49.574504Z"}
{"log":"(sharp:56): GLib-CRITICAL **: g_hash_table_lookup: assertion 'hash_table != NULL' failed\n","stream":"stderr","time":"2018-09-03T07:42:49.574513253Z"}
{"log":"\n","stream":"stderr","time":"2018-09-03T07:42:49.5842583Z"}
{"log":"(sharp:56): GLib-CRITICAL **: g_hash_table_lookup: assertion 'hash_table != NULL' failed\n","stream":"stderr","time":"2018-09-03T07:42:49.584267493Z"}
{"log":"\n","stream":"stderr","time":"2018-09-03T07:42:49.584536576Z"}
{"log":"(sharp:56): GLib-CRITICAL **: g_hash_table_lookup: assertion 'hash_table != NULL' failed\n","stream":"stderr","time":"2018-09-03T07:42:49.584545541Z"}
{"log":"\n","stream":"stderr","time":"2018-09-03T07:42:49.766828416Z"}
{"log":"(sharp:56): GLib-CRITICAL **: g_hash_table_lookup: assertion 'hash_table != NULL' failed\n","stream":"stderr","time":"2018-09-03T07:42:49.766856065Z"}
...
...
{"log":"Segmentation fault (core dumped)\n","stream":"stderr","time":"2018-09-03T07:56:15.656834537Z"}
{"log":"npm ERR! code ELIFECYCLE\n","stream":"stderr","time":"2018-09-03T07:56:15.658551152Z"}
{"log":"npm ERR! errno 139\n","stream":"stderr","time":"2018-09-03T07:56:15.658730459Z"}
{"log":"npm ERR! responsive-photo-gallery@0.2.3 start: `node ./bin/www`\n","stream":"stderr","time":"2018-09-03T07:56:15.659375497Z"}
{"log":"npm ERR! Exit status 139\n","stream":"stderr","time":"2018-09-03T07:56:15.659443863Z"}
{"log":"npm ERR! \n","stream":"stderr","time":"2018-09-03T07:56:15.659517228Z"}
{"log":"npm ERR! Failed at the responsive-photo-gallery@0.2.3 start script.\n","stream":"stderr","time":"2018-09-03T07:56:15.659525997Z"}
{"log":"npm ERR! This is probably not a problem with npm. There is likely additional logging output above.\n","stream":"stderr","time":"2018-09-03T07:56:15.659608165Z"}
Sorry about the log format; it's taken from the Docker container's log file. The lines above the ... are repeated thousands of times. It looks like it has created all the thumbs in persistent/storage/thumbs/NAS/.
Could you describe your albums a bit? Are there lots of small videos, or just a couple of larger ones and the rest photos, etc.?
Video support is still pretty new and I'm working out the kinks. I'm committing some changes that should at least help with the date TypeErrors in your log. My hunch is that node's behavior changed a bit when I updated, and it's now running LOTS of ffmpeg processes concurrently, so I also added some limits there. Hopefully that fixes what you're seeing if you docker pull the latest image.
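For reference, the two changes boil down to something like this (only a sketch, not the actual commit: fluent-ffmpeg's ffprobe() is real, but makeLimiter, probeLimit, and videoMetadata are invented names here):

```js
// Sketch only: guard the missing-tags case and cap concurrent ffprobe runs.
const ffmpeg = require('fluent-ffmpeg');

// Minimal concurrency limiter: at most `max` tasks run at the same time.
function makeLimiter(max) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= max || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task().then(
      (value) => { active--; resolve(value); next(); },
      (err) => { active--; reject(err); next(); }
    );
  };
  return (task) => new Promise((resolve, reject) => {
    queue.push({ task, resolve, reject });
    next();
  });
}

const probeLimit = makeLimiter(2); // e.g. only two ffprobe processes at once

function videoMetadata(filename) {
  return probeLimit(() => new Promise((resolve, reject) => {
    ffmpeg.ffprobe(filename, (err, metadata) => {
      if (err) return reject(err);
      // Some videos have no format.tags at all, which is what caused the
      // "Cannot read property 'creation_time' of undefined" TypeError.
      const tags = (metadata && metadata.format && metadata.format.tags) || {};
      const returnMetadata = {};
      if (tags.creation_time) {
        returnMetadata.modifyDate = new Date(tags.creation_time); // e.g. '2018-03-26 18:07:46'
      }
      resolve(returnMetadata);
    });
  }));
}
```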
I'm thinking that for video it might be time to implement a small database, or at least cache the metadata in some better form.
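As a rough illustration of what that could look like (purely hypothetical: the file location, key scheme, and function names below are made up), even a flat JSON cache keyed by path and mtime would avoid re-probing unchanged files:

```js
// Hypothetical sketch of the metadata cache idea, not an existing feature.
const fs = require('fs');
const path = require('path');

const CACHE_FILE = path.join(__dirname, 'metadata-cache.json'); // invented location
let cache = {};
try {
  cache = JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'));
} catch (e) {
  // first run or unreadable cache: start empty
}

function cachedMetadata(filename, probe) {
  const stat = fs.statSync(filename);
  const key = filename + ':' + stat.mtimeMs; // invalidated when the file changes
  if (cache[key]) return Promise.resolve(cache[key]);
  return probe(filename).then((metadata) => {
    cache[key] = metadata;
    fs.writeFileSync(CACHE_FILE, JSON.stringify(cache)); // persist after each probe
    return metadata;
  });
}

// usage, with videoMetadata from the sketch above:
// cachedMetadata('/photos/clip.mp4', videoMetadata).then(console.log);
```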
Please let me know how it goes if you feel like trudging forward with this; I'll be interested to hear whether any of the latest changes help. Thanks.
I did end up committing a limiter on the number of thumbnails (previews) generated per album, and I also modified the album view to show more progress as images load, so this should no longer be an issue as long as people take care not to put too many images in one album.
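In spirit, the limiter is just a cap on how many previews get generated for one album; a minimal sketch (MAX_PREVIEWS and previewList are invented names, and the real commit may differ):

```js
// Sketch: cap how many preview thumbnails are generated for a single album.
const MAX_PREVIEWS = 200; // invented value for illustration

function previewList(imageFilenames) {
  // Thumbnail only the first MAX_PREVIEWS images; the album view now shows
  // progress as these load instead of waiting for the whole set.
  return imageFilenames.slice(0, MAX_PREVIEWS);
}
```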
Hi,
Thanks for the container.
When mounting a large photo library from a remote NAS, the container runs for about 5 minutes and then dies with:
What's the best way to give more RAM to node, so that it can continue?
Thanks, Paul.