quru / qis

Dynamic image server for web and print
https://quruimageserver.com
GNU Affero General Public License v3.0
89 stars · 7 forks

How to configure the image max file size? #32

Closed martnst closed 4 years ago

martnst commented 4 years ago

Hi, when uploading larger images we get an error looking like this.

[screenshot of the upload error dialog]

qururoland commented 4 years ago

Hi Martin, not sure what has happened here. Normally the errors are described in understandable English, so it strikes me that this is a new error. Can you tell us how big the file is and what type it is, and I will see if I can reproduce it.

fozcode commented 4 years ago

The default max upload size is 1GB and defined here: https://github.com/quru/qis/blob/758c4486566e2fa331fb0eb0bb227d9ef194bf7a/src/imageserver/conf/base_settings.py#L152

It can be changed by setting a new value in /opt/qis/conf/local_settings.py

I would guess the error message is missing because the error is raised from Flask rather than from user code, but yes, it should be shown either way.
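A minimal sketch of such an override in /opt/qis/conf/local_settings.py. Note that `MAX_UPLOAD_SIZE` below is a placeholder name; copy the exact setting name from the linked line of base_settings.py. The value appears to be in bytes, matching the 1GB default.

```python
# /opt/qis/conf/local_settings.py
# MAX_UPLOAD_SIZE is a placeholder name - use the exact setting name
# defined at the linked line of base_settings.py. The value is in bytes.
MAX_UPLOAD_SIZE = 2 * 1024 * 1024 * 1024  # raise the limit to 2GB
```

You will likely need to restart the application server for the change to take effect.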

martnst commented 4 years ago

I see. For us it seems to be related to image size, as it works with very small images of around 1 MB.

martnst commented 4 years ago

We have QIS running via docker-compose on a DigitalOcean Droplet.

$ cat docker-compose.yml

#
# Sample QIS docker-compose launch file
#
# Launches containers running:
# QIS Application Server, Postgres and Memcached
#
# Required environment variables:
#
#   QIS_HOSTNAME - host name that the QIS web server should serve from
#   QIS_DATA_DIR - host path in which to store QIS images, database, and log files
#
# The data directory ends up containing $QIS_DATA_DIR/images
#                                       $QIS_DATA_DIR/data
#                                       $QIS_DATA_DIR/logs
#
# The environment variables can be persisted in a text file called .env.
# See https://docs.docker.com/compose/env-file/
#

version: "2"

services:
  qis_db:
    image: quru/qis-postgres
    volumes:
      - "$QIS_DATA_DIR/data:/opt/qis/data"
    mem_limit: 512M
    environment:
      - PG_USER=qis
      - PG_PASSWORD=qisdbpass
      - PG_SHARED_BUFFERS=256MB
      - PG_EFFECTIVE_CACHE_SIZE=512MB

  qis_cache:
    image: quru/qis-memcached
    mem_limit: 512M
    environment:
      - MEMCACHED_SIZE=512

  qis_as:
    image: quru/qis-as
    ports:
      - "8080:80"
      - "4443:443"
    volumes:
      - "$QIS_DATA_DIR/images:/opt/qis/images"
      - "$QIS_DATA_DIR/logs/qis:/opt/qis/logs"
      - "$QIS_DATA_DIR/logs/apache:/var/log/apache2"
    environment:
      - HOSTNAME=$QIS_HOSTNAME
      - DB_USER=qis
      - DB_PASSWORD=qisdbpass
      - HTTP_PROCESSES=2
      - HTTPS_PROCESSES=2
    depends_on:
      - qis_cache
      - qis_db

I noticed that memcached is not running anymore. Could that be the cause?

CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS                  PORTS                                         NAMES
79a1e4a7295d        quru/qis-as         "/run-qis.sh"        4 months ago        Up 4 months (healthy)   0.0.0.0:8080->80/tcp, 0.0.0.0:4443->443/tcp   qis_qis_as_1
e5d2dec1e49e        quru/qis-postgres   "/run-postgres.sh"   4 months ago        Up 4 months (healthy)   5432/tcp                                      qis_qis_db_1
martnst commented 4 years ago

I just restarted the full stack, so that quru/qis-memcached is now running as well. However, I am still getting this error when trying to upload a 1.5 MB JPEG image.

fozcode commented 4 years ago

Are you seeing any corresponding errors in the $QIS_DATA_DIR/logs/qis/qis.log file?

Actually my guess here is that if Memcached died on its own and only small image uploads are working that your droplet doesn't have enough RAM. I don't think DO droplets come with any swap space by default and that will cause services to get killed and operations to fail unexpectedly if QIS + Postgres + Memcached try to use more RAM than is available. The docker compose config above is aimed at servers with 4GB or more. You might get away with 2GB but not any less.
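The arithmetic behind that sizing: the compose file above caps Postgres and Memcached at 512 MB each, and the QIS app server container has no mem_limit at all, so on a 1 GB droplet the two support services alone can consume all available RAM. A quick sanity check:

```python
# Memory budget implied by the docker-compose file above (values in MB).
# The qis_as container has no mem_limit, so its usage comes on top of this.
mem_limits_mb = {
    "qis_db": 512,     # Postgres mem_limit
    "qis_cache": 512,  # Memcached mem_limit
}
support_total = sum(mem_limits_mb.values())
print(support_total)  # 1024 MB - already a whole 1 GB droplet
```

This is why the comment above recommends 4 GB, or 2 GB at a push.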

martnst commented 4 years ago

@fozcode no error is logged actually.

[screenshot of the log showing no errors]

martnst commented 4 years ago

The memory was in fact a bit low:

              total        used        free      shared  buff/cache   available
Mem:           985M        549M         93M         35M        342M        259M
Swap:            0B          0B          0B

So I created a 4GB swap file for the time being (might scale the droplet later). I restarted the qis_as docker container, but the file upload still behaves oddly and no errors are logged:

tail -f data/logs/qis/qis.log 
2020-02-24 11:08:03,901 qis_8      INFO     Quru Image Server v4.1.4 engine startup
2020-02-24 11:08:03,902 qis_8      INFO     Using settings base_settings + local_settings.py
2020-02-24 11:08:03,928 qis_8      INFO     Cache usage currently 8263 out of 536870912 bytes (0%), holding 4 objects (via pylibmc).
2020-02-24 11:08:03,907 qis_11     INFO     Quru Image Server v4.1.4 engine startup
2020-02-24 11:08:03,908 qis_11     INFO     Using settings base_settings + local_settings.py
2020-02-24 11:08:03,922 qis_11     INFO     Cache usage currently 8263 out of 536870912 bytes (0%), holding 4 objects (via pylibmc).
2020-02-24 11:08:03,903 qis_9      INFO     Quru Image Server v4.1.4 engine startup
2020-02-24 11:08:03,908 qis_9      INFO     Using settings base_settings + local_settings.py
2020-02-24 11:08:03,925 qis_9      INFO     Cache usage currently 8263 out of 536870912 bytes (0%), holding 4 objects (via pylibmc).
2020-02-24 11:08:03,905 qis_10     INFO     Quru Image Server v4.1.4 engine startup
2020-02-24 11:08:03,911 qis_10     INFO     Using settings base_settings + local_settings.py
2020-02-24 11:08:03,936 qis_10     INFO     Cache usage currently 8263 out of 536870912 bytes (0%), holding 4 objects (via pylibmc).
2020-02-24 11:08:04,058 qis_11     WARNING  Waiting to obtain global lock
2020-02-24 11:08:04,059 qis_9      WARNING  Waiting to obtain global lock
2020-02-24 11:08:04,069 qis_10     WARNING  Waiting to obtain global lock
2020-02-24 11:08:04,080 qis_8      INFO     Cache control database opened.
2020-02-24 11:08:04,174 qis_8      INFO     Management + stats database opened
2020-02-24 11:08:04,175 qis_8      INFO     Housekeeping task scheduler started
2020-02-24 11:08:04,247 qis_8      INFO     Loaded imaging library: Pillow version: 5.2.0
2020-02-24 11:08:04,382 qis_11     INFO     Cache control database opened.
2020-02-24 11:08:04,536 qis_11     INFO     Management + stats database opened
2020-02-24 11:08:04,537 qis_11     INFO     Housekeeping task scheduler started
2020-02-24 11:08:04,607 qis_11     INFO     Loaded imaging library: Pillow version: 5.2.0
2020-02-24 11:08:04,683 qis_9      INFO     Cache control database opened.
2020-02-24 11:08:04,865 qis_9      INFO     Management + stats database opened
2020-02-24 11:08:04,866 qis_9      INFO     Housekeeping task scheduler started
2020-02-24 11:08:04,910 stats_122  INFO     Stats server running
2020-02-24 11:08:04,923 tasks_125  INFO     Task server running
2020-02-24 11:08:05,027 qis_9      INFO     Loaded imaging library: Pillow version: 5.2.0
2020-02-24 11:08:05,108 qis_10     INFO     Cache control database opened.
2020-02-24 11:08:05,315 qis_10     INFO     Management + stats database opened
2020-02-24 11:08:05,316 qis_10     INFO     Housekeeping task scheduler started
2020-02-24 11:08:05,409 qis_10     INFO     Loaded imaging library: Pillow version: 5.2.0
2020-02-24 11:08:30,013 qis_8      INFO     Folder permissions initialising with version 9
2020-02-24 11:08:30,015 qis_8      INFO     Portfolio permissions initialising with version 8
2020-02-24 11:09:00,705 qis_9      INFO     Folder permissions initialising with version 9
2020-02-24 11:09:00,708 qis_9      INFO     Portfolio permissions initialising with version 8
2020-02-24 11:09:05,141 tasks_125  INFO     Task 'Remove expired portfolio exports' starting on thread 1
2020-02-24 11:09:05,147 tasks_125  INFO     Task 'Remove expired portfolio exports' completed
$ free -mh
              total        used        free      shared  buff/cache   available
Mem:           985M        488M        108M         34M        389M        321M
Swap:          4.0G        4.9M        4.0G
martnst commented 4 years ago

At this point I doubt this issue is related to ram. This is the 30-day graph for the memory usage:

[screenshot: 30-day memory usage graph]

Now I have scaled the droplet to have 4GB of RAM, but I still see the same behaviour. Tested with a 1.2 MB JPEG.

fozcode commented 4 years ago

Agreed, probably not the RAM after all. Some errors end up in the Apache log files. Please can you check $QIS_DATA_DIR/logs/apache/error.log

martnst commented 4 years ago

Nothing useful either, I would say:

[Mon Feb 24 11:07:59.513353 2020] [ssl:warn] [pid 7:tid 140686177240960] AH01909: images1.ebalance.dev:443:0 server certificate does NOT include an ID which matches the server name
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
[Mon Feb 24 11:07:59.526909 2020] [ssl:warn] [pid 7:tid 140686177240960] AH01909: images1.ebalance.dev:443:0 server certificate does NOT include an ID which matches the server name
[Mon Feb 24 11:07:59.527046 2020] [wsgi:warn] [pid 7:tid 140686177240960] mod_wsgi: Compiled for Python/3.5.1+.
[Mon Feb 24 11:07:59.527058 2020] [wsgi:warn] [pid 7:tid 140686177240960] mod_wsgi: Runtime using Python/3.5.2.
[Mon Feb 24 11:07:59.527082 2020] [core:warn] [pid 7:tid 140686177240960] AH00098: pid file /var/run/apache2/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
[Mon Feb 24 11:07:59.530579 2020] [mpm_event:notice] [pid 7:tid 140686177240960] AH00489: Apache/2.4.18 (Ubuntu) OpenSSL/1.0.2g mod_wsgi/4.3.0 Python/3.5.2 configured -- resuming normal operations
[Mon Feb 24 11:07:59.530615 2020] [core:notice] [pid 7:tid 140686177240960] AH00094: Command line: '/usr/sbin/apache2 -D FOREGROUND'
[Mon Feb 24 11:08:02.907717 2020] [wsgi:error] [pid 79:tid 140686177240960] A logging server is already running.
[Mon Feb 24 11:08:02.909047 2020] [wsgi:error] [pid 76:tid 140686177240960] A logging server is already running.
[Mon Feb 24 11:08:02.917592 2020] [wsgi:error] [pid 78:tid 140686177240960] A logging server is already running.
[Mon Feb 24 11:08:04.030476 2020] [wsgi:error] [pid 9:tid 140686177240960] /opt/qis/lib/python3.5/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
[Mon Feb 24 11:08:04.030536 2020] [wsgi:error] [pid 9:tid 140686177240960]   """)
[Mon Feb 24 11:08:04.034018 2020] [wsgi:error] [pid 8:tid 140686177240960] /opt/qis/lib/python3.5/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
[Mon Feb 24 11:08:04.034079 2020] [wsgi:error] [pid 8:tid 140686177240960]   """)
[Mon Feb 24 11:08:04.036735 2020] [wsgi:error] [pid 11:tid 140686177240960] /opt/qis/lib/python3.5/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
[Mon Feb 24 11:08:04.036764 2020] [wsgi:error] [pid 11:tid 140686177240960]   """)
[Mon Feb 24 11:08:04.048761 2020] [wsgi:error] [pid 10:tid 140686177240960] /opt/qis/lib/python3.5/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
[Mon Feb 24 11:08:04.048799 2020] [wsgi:error] [pid 10:tid 140686177240960]   """)
[Mon Feb 24 11:08:29.981278 2020] [wsgi:error] [pid 174:tid 140685745092352] A stats server is already running.
[Mon Feb 24 11:08:29.994127 2020] [wsgi:error] [pid 176:tid 140685745092352] A task server is already running.
[Mon Feb 24 11:09:00.675799 2020] [wsgi:error] [pid 186:tid 140685770270464] A stats server is already running.
[Mon Feb 24 11:09:00.690271 2020] [wsgi:error] [pid 188:tid 140685770270464] A task server is already running.
[Mon Feb 24 11:29:35.795716 2020] [ssl:warn] [pid 19:tid 140665101346688] AH01909: images1.ebalance.dev:443:0 server certificate does NOT include an ID which matches the server name
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.19.0.4. Set the 'ServerName' directive globally to suppress this message
[Mon Feb 24 11:29:35.809848 2020] [ssl:warn] [pid 19:tid 140665101346688] AH01909: images1.ebalance.dev:443:0 server certificate does NOT include an ID which matches the server name
[Mon Feb 24 11:29:35.817609 2020] [wsgi:warn] [pid 19:tid 140665101346688] mod_wsgi: Compiled for Python/3.5.1+.
[Mon Feb 24 11:29:35.817689 2020] [wsgi:warn] [pid 19:tid 140665101346688] mod_wsgi: Runtime using Python/3.5.2.
[Mon Feb 24 11:29:35.826229 2020] [mpm_event:notice] [pid 19:tid 140665101346688] AH00489: Apache/2.4.18 (Ubuntu) OpenSSL/1.0.2g mod_wsgi/4.3.0 Python/3.5.2 configured -- resuming normal operations
[Mon Feb 24 11:29:35.826290 2020] [core:notice] [pid 19:tid 140665101346688] AH00094: Command line: '/usr/sbin/apache2 -D FOREGROUND'
[Mon Feb 24 11:29:41.271219 2020] [wsgi:error] [pid 96:tid 140665101346688] A logging server is already running.
[Mon Feb 24 11:29:41.277849 2020] [wsgi:error] [pid 98:tid 140665101346688] A logging server is already running.
[Mon Feb 24 11:29:41.285117 2020] [wsgi:error] [pid 99:tid 140665101346688] A logging server is already running.
[Mon Feb 24 11:29:42.375128 2020] [wsgi:error] [pid 23:tid 140665101346688] /opt/qis/lib/python3.5/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
[Mon Feb 24 11:29:42.375184 2020] [wsgi:error] [pid 23:tid 140665101346688]   """)
[Mon Feb 24 11:29:42.377915 2020] [wsgi:error] [pid 20:tid 140665101346688] /opt/qis/lib/python3.5/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
[Mon Feb 24 11:29:42.377956 2020] [wsgi:error] [pid 20:tid 140665101346688]   """)
[Mon Feb 24 11:29:42.378479 2020] [wsgi:error] [pid 22:tid 140665101346688] /opt/qis/lib/python3.5/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
[Mon Feb 24 11:29:42.378534 2020] [wsgi:error] [pid 22:tid 140665101346688]   """)
[Mon Feb 24 11:29:42.395674 2020] [wsgi:error] [pid 21:tid 140665101346688] /opt/qis/lib/python3.5/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
[Mon Feb 24 11:29:42.395722 2020] [wsgi:error] [pid 21:tid 140665101346688]   """)
[Mon Feb 24 11:29:56.191045 2020] [wsgi:error] [pid 186:tid 140664659236608] A stats server is already running.
[Mon Feb 24 11:29:56.200391 2020] [wsgi:error] [pid 188:tid 140664659236608] A task server is already running.
[Mon Feb 24 11:29:58.245742 2020] [wsgi:error] [pid 190:tid 140664694380288] A stats server is already running.
[Mon Feb 24 11:29:58.254287 2020] [wsgi:error] [pid 192:tid 140664694380288] A task server is already running.
[Mon Feb 24 11:31:15.801925 2020] [wsgi:error] [pid 221:tid 140664694380288] A stats server is already running.
[Mon Feb 24 11:31:15.820738 2020] [wsgi:error] [pid 223:tid 140664694380288] A task server is already running.
fozcode commented 4 years ago

I think the next thing to check is what happens on the client side. Are you familiar with using web browser developer tools? There are 2 things to look at:

1) When you click Upload now, are there any errors shown in the Console?

[screenshot: browser developer tools, Console tab]

2) Open the Network tab, then click Upload now and check whether the upload operation returns an error. In the screenshot below you can see the upload as a POST with a status of (pending). The status should change to something else; when it does, click on that line and take a look at the response text.

[screenshot: Network tab showing the upload POST with status (pending)]

martnst commented 4 years ago

Hi @fozcode ,

I am actually getting a 413 Request Entity Too Large error.

<html>
<head><title>413 Request Entity Too Large</title></head>
<body bgcolor="white">
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.14.0 (Ubuntu)</center>
</body>
</html>
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->

This brings us back to the original question: how can I increase the size?

Oh, looking at the header… it might be my nginx instance sitting in front of it to handle HTTPS.

martnst commented 4 years ago

✅ Fixed, see https://www.daveperrett.com/articles/2009/11/18/nginx-error-413-request-entity-too-large/
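For anyone else hitting this: the fix in the linked article is nginx's `client_max_body_size` directive, which defaults to 1m, so any larger upload is rejected with a 413 before it ever reaches QIS. A minimal sketch for the reverse-proxy config (the 100M value is just an example; pick a limit that suits your images):

```nginx
# In the http, server, or location block of the nginx config
# that proxies to QIS, raise the allowed request body size
# (nginx's default of 1m rejects larger uploads with a 413):
client_max_body_size 100M;
```

Then reload nginx (e.g. `nginx -s reload` or `systemctl reload nginx`) to apply the change.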

Thanks for your support @fozcode

fozcode commented 4 years ago

Great, glad you got it working 👍