Closed: krossekrabbe closed this issue 4 months ago.
I just tried the bare example docker-compose.yml and it works (https://github.com/logto-io/logto/blob/master/docker-compose.yml).
So it seems to be somehow related to the environment I have set it up in. Maybe the reverse proxy or something. I will try to narrow down the issue a bit more, but if anyone has an idea in the meanwhile, please let me know.
@xiaoyijun Minimal reproduction repository, here you go: https://github.com/kaiwa/logto-unauthorized-example/tree/master (I have used Caddy instead of nginx for simplicity here, but the problem was the same with nginx.)
It is a very simple reverse proxy setup. Admin is at http://admin.localhost. As soon as you have created your admin user, logged in, and then navigated to some page, you should see the "Unauthorized" error.
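For reference, a minimal sketch of what the Caddyfile in such a setup looks like (the service name logto and Logto's default ports, 3001 for the core API and 3002 for the admin console, are assumptions here, not taken from the linked repo):

http://auth.localhost {
    # Forward the auth endpoint to Logto's core service
    reverse_proxy logto:3001
}

http://admin.localhost {
    # Forward the admin console to Logto's admin port
    reverse_proxy logto:3002
}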
Please let me know as soon as you can confirm the issue.
Am experiencing this too - it was working fine. Then I pulled the latest Docker image, ran DB migrations from the CLI, and now I get the same error.
Have you tried clearing the browser cache?
I have this problem too, but on a raw Node.js setup.
@xiaoyijun
Tested several Docker images now:

- ✅ Tag 1.15 is working
- ✅ Tag 1.16 is working
- ❌ Tag 1.17 is not working
- ❌ Tag latest is not working
- ❌ Tag edge is not working
@kaiwa Thank you for your feedback. I am looking into it.
> Am experiencing this too - it was working fine. Then I pulled the latest Docker image, ran DB migrations from the CLI, and now I get the same error.

> Have you tried clearing the browser cache?
Yes, clearing storage of all types has not fixed this. I am on version 1.17.
I am also using a reverse proxy (Traefik). 1.16 works with no problem.
Hi everyone,
I finally found the issue. When Logto validates the token, it requests the relevant configuration from oidc-config:
const oidcConfigUrl = appendPath(issuer, '/.well-known/openid-configuration');
const configuration = await ky
  .get(
    oidcConfigUrl // e.g. 'http://auth.localhost/oidc/.well-known/openid-configuration'
  )
  .json();
In the Logto container, requests to 'http://auth.localhost' are directed to 127.0.0.1:80, which is the address of the Logto container itself. The request should actually be made to the Docker network gateway so that it can be proxied by Caddy.
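You can confirm this from inside the container with a quick lookup (a diagnostic sketch of my own, not Logto code; *.localhost names conventionally resolve to the loopback address):

// Diagnostic sketch: run inside the Logto container (e.g. as check.mjs).
// Any *.localhost name conventionally resolves to loopback, so the
// well-known request never leaves the container.
import dns from 'node:dns/promises';

const { address } = await dns.lookup('auth.localhost');
console.log(address); // expected: 127.0.0.1, i.e. the Logto container itself, not Caddy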
To fix this, map admin.localhost and auth.localhost, inside the Logto container, to the IP address where the Caddy proxy is reachable (here the default Docker bridge gateway, 172.17.0.1):
logto:
  image: local/logto:latest
  entrypoint: ["sh", "-c", "npm run cli db seed -- --swe && npm start"]
  environment:
    - TRUST_PROXY_HEADER=1
    - DB_URL=postgres://postgres:p0stgr3s@host.docker.internal:5432/logto
    # Mandatory for GitPod to map host env to the container, thus GitPod can dynamically configure the public URL of Logto;
    # Or, you can leverage it for local testing.
    - ENDPOINT=http://auth.localhost
    - ADMIN_ENDPOINT=http://admin.localhost
  extra_hosts:
    - "auth.localhost:172.17.0.1"
    - "admin.localhost:172.17.0.1"
And in versions < 1.17.0, the configuration is read from the local database, so this issue does not occur.
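A variant worth noting (my own suggestion, not part of the fix above): recent Docker releases accept the special value host-gateway in extra_hosts, which resolves to the host gateway address without hard-coding 172.17.0.1. This assumes the proxy publishes its ports on the host:

extra_hosts:
  - "auth.localhost:host-gateway"
  - "admin.localhost:host-gateway"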
Do you have a suggestion for how to handle the same issue in a version installed by npm-init, without a reverse proxy? I am using the same domain for ENDPOINT and ADMIN_ENDPOINT, just different ports; is that okay?
@aladin-bilalagic
> I am using the same domain for ENDPOINT and ADMIN_ENDPOINT, just different ports; is that okay?
Yes, you can use different ports, but remember to specify the port in the URL.
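For example (hypothetical domain; Logto's default ports 3001 and 3002 assumed):

ENDPOINT=http://logto.example.com:3001
ADMIN_ENDPOINT=http://logto.example.com:3002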
It's not working for me.
nginx:
server {
    listen 443 ssl;
    server_name dev.logtoserve.com;

    ssl_certificate /etc/nginx/conf.d/pem/localhost+3.pem;
    ssl_certificate_key /etc/nginx/conf.d/pem/localhost+3-key.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://192.168.99.142:13001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

server {
    listen 443 ssl;
    server_name dev.logtoadmin.com;

    ssl_certificate /etc/nginx/conf.d/pem/localhost+3.pem;
    ssl_certificate_key /etc/nginx/conf.d/pem/localhost+3-key.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://192.168.99.142:13002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
docker-compose.yml:
# This compose file is for demonstration only, do not use in prod.
version: "3.9"
services:
  app:
    depends_on:
      postgres:
        condition: service_healthy
    image: svhd/logto:${TAG-latest}
    entrypoint: ["sh", "-c", "npm run cli db seed -- --swe && npm start"]
    ports:
      - 13001:3001
      - 13002:3002
    environment:
      - TRUST_PROXY_HEADER=1
      - DB_URL=postgres://postgres:p0stgr3s@postgres:5432/logto
      # Mandatory for GitPod to map host env to the container, thus GitPod can dynamically configure the public URL of Logto;
      # Or, you can leverage it for local testing.
      - ENDPOINT=https://dev.logtoserve.com
      - ADMIN_ENDPOINT=https://dev.logtoadmin.com
    extra_hosts:
      - "dev.logtoserve.com:172.17.0.1"
      - "dev.logtoadmin.com:172.17.0.1"
  postgres:
    image: postgres:14-alpine
    user: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: p0stgr3s
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
Also, I used npm-init to create my project. That does not work either.
@xiaoyijun Thank you!!! π
@xiaoyijun Thank you so much! Problem resolved!!!

logto:
  image: svhd/logto:1.19.0
  container_name: lobe-logto
  depends_on:
    postgresql:
      condition: service_healthy
  environment:
    # ...
  extra_hosts:
    - "lobe-auth-ui.xxx.com:172.17.0.1"

Just add the extra_hosts part.
I am stuck on the same problem with v1.21. I use Traefik, but I am not sure which network/gateway address I need to put into extra_hosts. I have a dedicated Traefik service network and a Logto network to which the DB and app are connected. The examples above show the address that leads to my bridge network. Which address do I need to use for extra_hosts?
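One way to read the gateway of a specific network directly from Docker (a generic Docker CLI sketch; the network name traefik-public is a placeholder, not from this thread):

# Prints the IPv4 gateway of the named Docker network
docker network inspect traefik-public --format '{{ (index .IPAM.Config 0).Gateway }}'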
Describe the bug
Fresh Docker install: I set up the admin account and log in successfully. As soon as I switch to a page that loads data (e.g. the Applications page), the error message "Unauthorized. Please check credentials and its scope." appears and the API calls fail.
The API response is:
Expected behavior
No error
How to reproduce?
Context
Behind reverse proxy.
Env:
nginx config
Screenshots