theycallmemac opened this issue 5 years ago
Go to localhost:3000/api and see if the backend server is up. You can also try logging in at localhost:3000/api/auth/token/obtain.

On Jan 4, 2019 16:18, "James McDermott" notifications@github.com wrote:
Have used the docker compose method:
wget https://raw.githubusercontent.com/hooram/ownphotos/dev/docker-compose.yml
docker-compose up -d
Once it's finished, I go to localhost:3000, login with admin as user and admin as password and the following red box appears underneath.
[screenshot: https://user-images.githubusercontent.com/16108563/50692372-81460a80-102b-11e9-8c7a-5c6703e317b3.png]
I've checked the ownphotos-backend container and it seems to be running fine. Any ideas what I'm doing wrong?
localhost:3000/api isn't up. What should I do from here?
Check that the ownphotos-backend container is up with docker ps, and whether it exists but is not running with docker ps -a. Run docker-compose up -d and make sure it says "recreating" or "starting", then run docker logs ownphotos-backend --tail 50 --follow and see if it's failing on something.

Container is up. The logs seem to say that there's a worker timeout in ownphotos-backend, based on what I see.
The worker crashes; it's a bug I started to see too. @hooram perhaps we can catch why it's crashing? For now do:
sudo docker stop ownphotos-backend
sudo docker start ownphotos-backend
We should open a bug so the rq worker recovers automatically, perhaps with another process in case of a failure.
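A rough sketch of that watchdog idea, purely hypothetical and untested, polling the container state and restarting it when it dies:

#!/bin/bash
# Watchdog sketch: restart ownphotos-backend whenever it stops running
while true; do
  state=$(docker inspect -f '{{.State.Running}}' ownphotos-backend 2>/dev/null)
  if [ "$state" != "true" ]; then
    echo "$(date): ownphotos-backend not running, starting it"
    sudo docker start ownphotos-backend
  fi
  sleep 60
done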
Huh, I just got it too, and when I ran the restart I got:
* Restarting nginx nginx [ OK ]
System check identified some issues:
WARNINGS:
api.LongRunningJob.result: (postgres.E003) JSONField default should be a callable instead of an instance so that it's not shared between all field instances.
HINT: Use a callable instead, e.g., use `dict` instead of `{}`.
No changes detected in app 'api'
System check identified some issues:
WARNINGS:
api.LongRunningJob.result: (postgres.E003) JSONField default should be a callable instead of an instance so that it's not shared between all field instances.
HINT: Use a callable instead, e.g., use `dict` instead of `{}`.
Operations to perform:
Apply all migrations: admin, api, auth, contenttypes, database, sessions
Running migrations:
No migrations to apply.
Traceback (most recent call last):
  File "/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 85, in _execute
    return self.cursor.execute(sql, params)
psycopg2.IntegrityError: duplicate key value violates unique constraint "api_albumthing_title_owner_id_131b9f89_uniq"
DETAIL: Key (title, owner_id)=(man made, 2) already exists.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    execute_from_command_line(sys.argv)
  File "/venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "/venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/venv/lib/python3.5/site-packages/django/core/management/base.py", line 316, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/venv/lib/python3.5/site-packages/django/core/management/base.py", line 353, in execute
    output = self.handle(*args, **options)
  File "/venv/lib/python3.5/site-packages/django/core/management/commands/shell.py", line 92, in handle
    exec(sys.stdin.read())
  File "<string>", line 2, in <module>
  File "/venv/lib/python3.5/site-packages/django/db/models/query.py", line 663, in delete
    deleted, _rows_count = collector.delete()
  File "/venv/lib/python3.5/site-packages/django/db/models/deletion.py", line 290, in delete
    {field.name: value}, self.using)
  File "/venv/lib/python3.5/site-packages/django/db/models/sql/subqueries.py", line 107, in update_batch
    self.get_compiler(using).execute_sql(NO_RESULTS)
  File "/venv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 1383, in execute_sql
    cursor = super().execute_sql(result_type)
  File "/venv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 1065, in execute_sql
    cursor.execute(sql, params)
  File "/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 100, in execute
    return super().execute(sql, params)
  File "/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 68, in execute
    return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
  File "/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
    return executor(sql, params, many, context)
  File "/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 85, in _execute
    return self.cursor.execute(sql, params)
  File "/venv/lib/python3.5/site-packages/django/db/utils.py", line 89, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 85, in _execute
    return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: duplicate key value violates unique constraint "api_albumthing_title_owner_id_131b9f89_uniq"
DETAIL: Key (title, owner_id)=(man made, 2) already exists.
If you suspect this is an IPython bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@python.org
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
%config Application.verbose_crash=True
The only way to fix that is the same as in #48: you need to enter adminer and delete the long-running job entry.
@hooram Can you point me to how to delete this on startup, right after the migration command? This bug keeps coming up.
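Something along these lines in the entrypoint, right after the migrate step? A minimal untested sketch, assuming the model is api.models.LongRunningJob (the name shows up in the system check warnings above) and that stale job rows can simply be dropped:

# Hypothetical cleanup, piped into manage.py shell inside the container
sudo docker exec -i ownphotos-backend python manage.py shell <<'EOF'
from api.models import LongRunningJob
# Drop leftover job entries so startup doesn't trip over them
LongRunningJob.objects.all().delete()
EOF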
This happens when you try to access it from another host, not localhost:3000. E.g. if the server IP is 192.168.1.10 and I go to 192.168.1.10:3000, this happens. If I ssh into the server and port-forward 3000 to 127.0.0.1:3000, then access the site at http://localhost:3000, it works perfectly.
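For reference, the tunnel I use (server address as in the example above):

# Forward local port 3000 to the server's 127.0.0.1:3000, then browse http://localhost:3000
ssh -L 3000:127.0.0.1:3000 user@192.168.1.10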
@xternaal Ownphotos has to have the origin set. AKA you need to specify in the docker-compose file from where you are accessing the server. This is why it works for you on localhost, but not using a different hostname.
You need to update this line to include your hostname, in your case the IP; take a look at the compose file: https://github.com/hooram/ownphotos/blob/dev/docker-compose.yml#L32
Here is the line and the comment:
environment:
# This is the public-facing path to the backend host. If your website is ownphotos.org then this should be "ownphotos.org".
# The default here assumes you are running on localhost on port 3000, as given in the ownphotos-proxy service
- BACKEND_HOST=localhost:3000
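For example, if your server is reachable at 192.168.1.10 (use your own address), change the line and recreate the containers so the new value takes effect:

# Hypothetical one-liner; you can of course edit docker-compose.yml by hand instead
sed -i 's/BACKEND_HOST=localhost:3000/BACKEND_HOST=192.168.1.10:3000/' docker-compose.yml
docker-compose up -d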
This is not the case for @theycallmemac, because he is using localhost.
So what I've figured out is that there seem to be some proxy settings that refuse connections from anything that isn't accessed via $BACKEND_HOST. You seem to be able to get around this by putting your Traefik address or host IP in there.
What I haven't figured out is whether the reason my profile picture (or any profile settings) doesn't save is due to this backend problem, or because of a networking problem with Redis or wherever the Celery workers are located (I am assuming it's Celery, as I thought I saw some familiar log info show up).
More or less, the docker-compose script is all kinds of messed up. It would be great if we got a listing of all the ports that are used so I could explicitly route ports to the correct locations.
This seems like a really neat project; I just gotta dig into the source code to figure all this out, I guess.
@usmcamp0811 It's not a bug, it's by design. It's because of Cross-Origin Resource Sharing (CORS). More info here. The value of BACKEND_HOST defines where your origin is, and lets the frontend and backend talk to each other.
If it's set incorrectly, you can open your browser console and see a message that a call was blocked due to CORS.
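You can also see the same thing from the command line; the hostname here is a placeholder, use your BACKEND_HOST value:

# Dump the response headers and look for Access-Control-Allow-Origin;
# if it is missing or doesn't match the page origin, the browser blocks the call
curl -s -D - -o /dev/null -H "Origin: http://192.168.1.10:3000" http://192.168.1.10:3000/api/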
Regarding ports, there is only one port you need, 80, defined here: https://github.com/hooram/ownphotos/blob/dev/docker-compose.yml#L14 I routed port 3000 to 80 because that is how @hooram originally set up the frontend.
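You can confirm that single published mapping with:

# Container name as in the docker ps listing further down this thread
docker port ownphotos-proxy
# expected: 80/tcp -> 0.0.0.0:3000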
@grafixoner "Illegal instruction" means you are running on a machine that is not 64-bit Intel. Are you using a Raspberry Pi or something like that?
Will note there are too many separate issues in this issue.
People whose issue is resolved, please report so, and also how or which comment helped you solve it.
If you think you have a different issue, please open a new one. "No connection to backend server" can have many causes.
This calls for opening a troubleshooting section in the wiki.
Noted, and opening a new issue. It's not an odd architecture; it is x86_64 with an older Core 2 Duo CPU. I'll open a new issue so we can troubleshoot it there.
Edit, new issue at: #54
I also get a 'No connection to backend server' error. I tried to stop and start the docker container, but that doesn't solve it. My log:
Running backend server...
[2019-02-04 21:06:34 +0000] [79] [INFO] Starting gunicorn 19.8.1
[2019-02-04 21:06:34 +0000] [79] [INFO] Listening at: http://0.0.0.0:8001 (79)
[2019-02-04 21:06:34 +0000] [79] [INFO] Using worker: sync
[2019-02-04 21:06:34 +0000] [86] [INFO] Booting worker with pid: 86
/miniconda/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
/miniconda/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
21:06:41 Registering birth of worker d7a850934783.77
21:06:41 RQ worker 'rq:worker:d7a850934783.77' started, version 0.12.0
21:06:41 *** Listening on default...
21:06:41 Sent heartbeat to prevent worker timeout. Next one should arrive within 420 seconds.
21:06:41 Cleaning registries for queue: default
21:06:41 *** Listening on default...
21:06:41 Sent heartbeat to prevent worker timeout. Next one should arrive within 420 seconds.
I left the instance running overnight, and just checked the logs: a lot of 'Sent heartbeat' messages, and the moment I tried to log in again I got the 'Not Found' message.
07:14:58 Sent heartbeat to prevent worker timeout. Next one should arrive within 420 seconds.
07:21:43 Sent heartbeat to prevent worker timeout. Next one should arrive within 420 seconds.
07:28:28 Sent heartbeat to prevent worker timeout. Next one should arrive within 420 seconds.
07:35:13 Sent heartbeat to prevent worker timeout. Next one should arrive within 420 seconds.
07:41:58 Sent heartbeat to prevent worker timeout. Next one should arrive within 420 seconds.
Not Found: /robots.txt
Not sure if this is relevant information ;-)
I think there might be a worker hanging issue, which might be worth opening a separate issue for. But the issue that OP @theycallmemac reported is a failure on startup.
Also, @theycallmemac, is there any progress on your end? Otherwise I will close the issue. The docker image has improved since Jan 4th, and pulling latest might fix your issue.
Hi, I am getting a similar issue. I have a Synology NAS running DSM version "DSM 6.2.1-23824 Update 4". I installed ownphotos using docker-compose. I am able to get the frontend working: I changed BACKEND_HOST under the frontend section in docker-compose.yml to the private IP of the NAS server and can access the frontend at 10.0.0.41:3000, but I am getting "No connection to backend server".
I also tried hitting /api directly by sshing into the backend docker container, and got no response for http://localhost/api either.
FYI, just before checking this, I had deleted and re-imported all 3 images: backend, frontend, and proxy.
root@5a5cb9ca9e3b:/code/logs# cat gunicorn.log
[2019-02-12 23:25:33 +0000] [102] [INFO] Starting gunicorn 19.8.1
[2019-02-12 23:25:33 +0000] [102] [INFO] Listening at: http://0.0.0.0:8001 (102)
[2019-02-12 23:25:33 +0000] [102] [INFO] Using worker: sync
[2019-02-12 23:25:33 +0000] [108] [INFO] Booting worker with pid: 108
/miniconda/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
[2019-02-12 23:26:03 +0000] [102] [CRITICAL] WORKER TIMEOUT (pid:108)
[2019-02-12 23:26:04 +0000] [108] [INFO] Worker exiting (pid: 108)
[2019-02-12 23:26:04 +0000] [137] [INFO] Booting worker with pid: 137
/miniconda/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
[2019-02-12 23:26:34 +0000] [102] [CRITICAL] WORKER TIMEOUT (pid:137)
[2019-02-12 23:26:34 +0000] [137] [INFO] Worker exiting (pid: 137)
[2019-02-12 23:26:35 +0000] [151] [INFO] Booting worker with pid: 151
/miniconda/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
[2019-02-12 23:27:05 +0000] [102] [CRITICAL] WORKER TIMEOUT (pid:151)
[2019-02-12 23:27:05 +0000] [151] [INFO] Worker exiting (pid: 151)
[2019-02-12 23:27:06 +0000] [162] [INFO] Booting worker with pid: 162
/miniconda/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
[2019-02-12 23:27:36 +0000] [102] [CRITICAL] WORKER TIMEOUT (pid:162)
[2019-02-12 23:27:37 +0000] [173] [INFO] Booting worker with pid: 173
/miniconda/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
[2019-02-12 23:28:08 +0000] [102] [CRITICAL] WORKER TIMEOUT (pid:173)
[2019-02-12 23:28:08 +0000] [173] [INFO] Worker exiting (pid: 173)
[2019-02-12 23:28:09 +0000] [233] [INFO] Booting worker with pid: 233
/miniconda/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
[2019-02-12 23:28:39 +0000] [102] [CRITICAL] WORKER TIMEOUT (pid:233)
[2019-02-12 23:28:40 +0000] [233] [INFO] Worker exiting (pid: 233)
[2019-02-12 23:28:40 +0000] [344] [INFO] Booting worker with pid: 344
/miniconda/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
Unauthorized: /api/rqavailable/
Regards, Vishal Kadam
@vkadam I am not sure it would work with an IP; you might need a hostname. What does your browser say regarding CORS? Ctrl+Shift+J in Chrome and Ctrl+Shift+K in Firefox.
I'm definitely not getting the CORS error; it's able to make the request. Another update: I kept docker running overnight and in the morning I was able to log in. Then the next requests failed, with nothing different in the log statements from what's already posted. After that, again for some time some requests pass, and then at some point it fails again. It looks to me like requests pass at some interval; I'm unable to get a consistent result now.
Also tried re-creating all the containers, no difference. When I create the containers using the same docker-compose.yml on a Mac it works well. It's something specific to the Synology NAS. If you can point me to where I should look, I can provide more data for troubleshooting.
@vkadam, are you able to browse through fine otherwise? The periodic 401 responses from the server are intended, because the jwt access token has a short expiration time. In the frontend, every time a request gets 401'd, it tries to refresh the token and retries the same request (see https://github.com/hooram/ownphotos-frontend/blob/dev/src/api_client/apiClientDeploy.js#L43).
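Roughly this flow, if you want to poke at it from the command line. The obtain endpoint was mentioned earlier in the thread; the refresh path and the JSON field names here are guesses, so check the frontend code:

# 1. Obtain a short-lived token with the login credentials
curl -s -X POST http://localhost:3000/api/auth/token/obtain/ -H 'Content-Type: application/json' -d '{"username": "admin", "password": "admin"}'
# 2. When requests start returning 401, trade the token for a fresh one
curl -s -X POST http://localhost:3000/api/auth/token/refresh/ -H 'Content-Type: application/json' -d '{"token": "<token from the first call>"}'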
I am getting intermittent request timeout errors. As I said, if I don't make any requests (leave the browser tab idle) for some time, then some requests pass before they start failing again.
Hi, I have this log:
File "/miniconda/lib/python3.6/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
self.wsgi = self.app.wsgi()
File "/miniconda/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/miniconda/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
return self.load_wsgiapp()
File "/miniconda/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
return util.import_app(self.app_uri)
File "/miniconda/lib/python3.6/site-packages/gunicorn/util.py", line 350, in import_app
__import__(module)
File "/code/ownphotos/wsgi.py", line 16, in
application = get_wsgi_application()
File "/miniconda/lib/python3.6/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application
django.setup(set_prefix=False)
File "/miniconda/lib/python3.6/site-packages/django/init.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/miniconda/lib/python3.6/site-packages/django/apps/registry.py", line 112, in populate
app_config.import_models()
File "/miniconda/lib/python3.6/site-packages/django/apps/config.py", line 198, in import_models
self.models_module = import_module(models_module_name)
File "/miniconda/lib/python3.6/importlib/init.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/code/api/models.py", line 41, in
from django_cryptography.fields import encrypt
File "/miniconda/lib/python3.6/site-packages/django_cryptography/fields.py", line 10, in
from django_cryptography.core.signing import SignatureExpired
File "/miniconda/lib/python3.6/site-packages/django_cryptography/core/signing.py", line 20, in
from ..utils.crypto import constant_time_compare, salted_hmac
File "/miniconda/lib/python3.6/site-packages/django_cryptography/utils/crypto.py", line 12, in
from ..conf import CryptographyConf
File "/miniconda/lib/python3.6/site-packages/django_cryptography/conf.py", line 9, in
class CryptographyConf(AppConf):
File "/miniconda/lib/python3.6/site-packages/appconf/base.py", line 74, in new
new_class._configure()
File "/miniconda/lib/python3.6/site-packages/appconf/base.py", line 105, in _configure
cls._meta.configured_data = obj.configure()
File "/miniconda/lib/python3.6/site-packages/django_cryptography/conf.py", line 35, in configure
force_bytes(self.configured_data['KEY'] or settings.SECRET_KEY))
File "/miniconda/lib/python3.6/site-packages/django/utils/encoding.py", line 105, in force_bytes
return s.encode(encoding, errors)
UnicodeEncodeError: 'utf-8' codec can't encode characters in position 25-26: surrogates not allowed
[2019-04-04 01:01:59 +0000] [141] [INFO] Worker exiting (pid: 141)
[2019-04-04 01:01:59 +0000] [142] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/miniconda/lib/python3.6/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
    worker.init_process()
  File "/miniconda/lib/python3.6/site-packages/gunicorn/workers/ggevent.py", line 203, in init_process
    super(GeventWorker, self).init_process()
  File "/miniconda/lib/python3.6/site-packages/gunicorn/workers/base.py", line 129, in init_process
    self.load_wsgi()
  File "/miniconda/lib/python3.6/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/miniconda/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/miniconda/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
    return self.load_wsgiapp()
  File "/miniconda/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/miniconda/lib/python3.6/site-packages/gunicorn/util.py", line 350, in import_app
    __import__(module)
  File "/code/ownphotos/wsgi.py", line 16, in <module>
    application = get_wsgi_application()
  File "/miniconda/lib/python3.6/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application
    django.setup(set_prefix=False)
  File "/miniconda/lib/python3.6/site-packages/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/miniconda/lib/python3.6/site-packages/django/apps/registry.py", line 112, in populate
    app_config.import_models()
  File "/miniconda/lib/python3.6/site-packages/django/apps/config.py", line 198, in import_models
    self.models_module = import_module(models_module_name)
  File "/miniconda/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/code/api/models.py", line 41, in <module>
    from django_cryptography.fields import encrypt
  File "/miniconda/lib/python3.6/site-packages/django_cryptography/fields.py", line 10, in <module>
    from django_cryptography.core.signing import SignatureExpired
  File "/miniconda/lib/python3.6/site-packages/django_cryptography/core/signing.py", line 20, in <module>
    from ..utils.crypto import constant_time_compare, salted_hmac
  File "/miniconda/lib/python3.6/site-packages/django_cryptography/utils/crypto.py", line 12, in <module>
    from ..conf import CryptographyConf
  File "/miniconda/lib/python3.6/site-packages/django_cryptography/conf.py", line 9, in <module>
    class CryptographyConf(AppConf):
  File "/miniconda/lib/python3.6/site-packages/appconf/base.py", line 74, in __new__
    new_class._configure()
  File "/miniconda/lib/python3.6/site-packages/appconf/base.py", line 105, in _configure
    cls._meta.configured_data = obj.configure()
  File "/miniconda/lib/python3.6/site-packages/django_cryptography/conf.py", line 35, in configure
    force_bytes(self.configured_data['KEY'] or settings.SECRET_KEY))
  File "/miniconda/lib/python3.6/site-packages/django/utils/encoding.py", line 105, in force_bytes
    return s.encode(encoding, errors)
UnicodeEncodeError: 'utf-8' codec can't encode characters in position 25-26: surrogates not allowed
[2019-04-04 01:01:59 +0000] [142] [INFO] Worker exiting (pid: 142)
[2019-04-04 01:02:01 +0000] [128] [INFO] Shutting down: Master
[2019-04-04 01:02:01 +0000] [128] [INFO] Reason: Worker failed to boot.
Restarting nginx nginx
...done.
Requirement already satisfied: gevent in /miniconda/lib/python3.6/site-packages (1.4.0)
Requirement already satisfied: greenlet>=0.4.14; platform_python_implementation == "CPython" in /miniconda/lib/python3.6/site-packages (from gevent) (0.4.15)
/miniconda/lib/python3.6/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
What could be the problem? Regards
@JuezFenix Maybe you have a non-ascii character in your secret key variable?
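You can check from inside the backend container; this fails the same way force_bytes does in the traceback. The SECRET_KEY variable name is an assumption, adjust to however yours is set:

# Raises UnicodeEncodeError if the key contains surrogate characters
docker exec ownphotos-backend python -c 'import os; os.environ["SECRET_KEY"].encode("utf-8"); print("key encodes cleanly")'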
Solved that, but I still have the same problem: no connection to backend.
I've been away from this issue and didn't have time to pursue it. Does any of this look wrong to anyone here?
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c6e2c2d5dbef guysoft/ownphotos-proxy "nginx -g 'daemon of…" 3 minutes ago Up 3 minutes 0.0.0.0:3000->80/tcp ownphotos-proxy
71ab2cbc579b hooram/ownphotos-frontend:dev "./run.sh" 3 minutes ago Up 3 minutes 3000/tcp ownphotos-frontend
8233c3a10cf2 hooram/ownphotos:dev "/bin/sh -c ./entryp…" 3 minutes ago Up 3 minutes 80/tcp, 5000/tcp ownphotos-backend
a9aad63ef769 redis "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 6379/tcp ownphotos-redis
169990f7703b postgres "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 5432/tcp ownphotos-db
run.sh can only do the replacement the first time; it can't change to the right server IP in apiClient.js.
What is run.sh trying to do?
It tries to replace the backend IP setting in apiClient.js, but it fails, so I need to change the IP in apiClient.js directly.
PS: I have installed it on my NAS; I am not using localhost:3000.
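What I do is roughly this; the placeholder string and the file path inside the frontend container are guesses from memory, so check your own image:

# Hypothetical manual edit inside the running container
docker exec ownphotos-frontend sed -i 's#localhost:3000#<your-nas-ip>:3000#g' src/api_client/apiClient.js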
Okay, I'll make the change to my apiClient.js directly and see what results I get.
I think it should also work if you recreate the container. You shouldn't really keep state and change stuff inside the container.
@guysoft I've recreated the container multiple times to no avail
Not sure where to go from here.
I think we (or I) have a misunderstanding / different assumption about the BACKEND_HOST environment variable. I assumed BACKEND_HOST could be changed as an environment value, and that after starting again it would use the new setting. But in this case, it has to be set at setup time, and it will not change after that.
In my .yml: environment:
@ffchung you need to run docker-compose up -d so the container gets recreated with the new value. Just editing the file does not update the container.
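In other words, after editing the file:

# Recreate with the new value from docker-compose.yml
docker-compose up -d
# If compose doesn't pick up the change, force it; the service name is an
# assumption, check your docker-compose.yml
docker-compose up -d --force-recreate frontend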
@guysoft I want to gently push back on your assertion that CORS checking is needed here at all.
CORS is important when setting up your services architecture manually on multiple servers.
However, Docker Compose, by nature, runs a service architecture on a single server...in effect, a "virtual microservices" environment. In other words, the "frontend" and "backend" and "db" and "async events handler thingy" are all on the same server. This environment is provisioned via an "internal network," as explained in https://docs.docker.com/compose/networking. By default, all services within a docker-compose file can access each other over this "internal network."
So, there is no need for CORS checking... in fact, there is no need to specify ports of services as env vars.
Have a look at https://github.com/pahaz/docker-compose-django-postgresql-redis-example/blob/master/docker-compose.yml for an example of how to use the depends_on setting to get the kind of interoperability between services that we're aiming at here (see the sketch below).
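For instance, something like this should work across the internal network; the service names are hypothetical, port 8001 is where gunicorn listens per the logs above, and this assumes curl exists in the image:

# Services reach each other by service name, no published port or CORS involved
docker-compose exec frontend curl -s http://backend:8001/api/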
Thoughts?
CORS is a browser feature, not a server feature. It's the browser that checks whether CORS passes, and it uses the response headers to decide what to check. So I am not sure what you are trying to push. You could, for example, write a browser extension or a browser that ignores CORS altogether (here is an implementation of that for Chrome).
Getting this to work on a Synology NAS would open this up to a much wider audience, although there is big competition from Moments, which is quite similar to Google Photos. Still, being able to actually contribute and improve features is a huge advantage, and it could be used in parallel =) I would be open to trying again if there is any more progress on making this work.