Hi,

I would like to test the local stack with debug mode off (`DEBUG=0`) and with a fake domain `qfieldcloud.local` configured in my laptop's (host) `/etc/hosts` file with:

```
127.0.0.1 qfieldcloud.local
```
At first, I configured the environment file to keep `DEBUG=1`, but I would like to use `https://qfieldcloud.local/api/v1/` instead of `http://app:8000/api/v1/` to access the `app` container from within the `worker_wrapper` container, as described in https://github.com/opengisch/QFieldCloud/issues/873#issuecomment-2192048554. This way I would be able to use `DEBUG=0`.
As visible in the `COMPOSE_FILE` variable, I use a mix of the original / local / prod / mdouchin Docker Compose files. This lets me test a local standalone stack (with MinIO, db, etc.) without modifying the source files (prod & local) or creating a dedicated file like in #844 (I think the approach in #844 is not easily maintainable as the source code evolves between versions). I would also like a local environment as close as possible to the production stack (no debug mode, no extra ports, etc.).
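For context, the relevant `.env` entries might look roughly like this. This is a hypothetical sketch, not my actual file: the variable names come from the QFieldCloud repository, but the exact values and file list here are assumptions.

```shell
# Hypothetical sketch of the relevant .env entries (values are assumptions)
DEBUG=0
QFIELDCLOUD_HOST=qfieldcloud.local
QFIELDCLOUD_WORKER_QFIELDCLOUD_URL=https://qfieldcloud.local/api/v1/
COMPOSE_PATH_SEPARATOR=:
COMPOSE_FILE=docker-compose.yml:docker-compose.override.local.yml:docker-compose.override.prod.yml:docker-compose.override.mdouchin.yml
```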
The `docker-compose.override.mdouchin.yml` file contains:
This stack works great as long as I do not set `QFIELDCLOUD_WORKER_QFIELDCLOUD_URL=https://qfieldcloud.local/api/v1/`. But when I do set that variable, I get the following error when I upload a project from QGIS, in the "Process QGIS Project File" job:
"error": "HTTPSConnectionPool(host='qfieldcloud.local', port=443):
Max retries exceeded with url: /api/v1/files/6e2cc4d1-573a-4fe5-9593-a52f4957ed83/?skip_metadata=1
(Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x76b996d61f10>: Failed to establish a new connection: [Errno 111] Connection refused'))",
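The "Connection refused" part suggests that, inside the container running the job, `qfieldcloud.local` resolves to an address where nothing listens on port 443 (for instance the container's own loopback). One possible workaround, assuming nginx publishes port 443 on the Docker host, would be to map the fake domain to the host gateway in the override file. This is an untested sketch, not a confirmed fix; the service name is taken from the stack above:

```yaml
services:
  worker_wrapper:
    extra_hosts:
      # Assumption: nginx publishes 443 on the Docker host, so
      # host-gateway reaches it from inside the container.
      - "qfieldcloud.local:host-gateway"
```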
Indeed, I need to add the `-k` flag to reach the `app` container from the `worker_wrapper` container through the domain `qfieldcloud.local` when I test access with `docker compose exec`:
```shell
# Problem (the SSL one)
docker compose exec worker_wrapper curl https://qfieldcloud.local/api/v1/
# returns:
#   curl: (60) SSL certificate problem: unable to get local issuer certificate
#   More details here: https://curl.haxx.se/docs/sslcerts.html
#
#   curl failed to verify the legitimacy of the server and therefore could not
#   establish a secure connection to it. To learn more about this situation and
#   how to fix it, please visit the web page mentioned above.

# If I add the -k flag, the API is reachable from worker_wrapper
docker compose exec worker_wrapper curl https://qfieldcloud.local/api/v1/ -k
# displays:
#   {"code":"not_authenticated","message":"Not authenticated","debug":{"view":"<rest_framework.routers.APIRootView object at 0x77fc01029f30>","args":[],"kwargs":{},"request":"<rest_framework.request.Request: GET '/api/v1/'>","detail":""}}
```
Has anyone already tested this kind of stack, with a fake domain configured on the host in `/etc/hosts`?