Closed mattlathrop closed 1 year ago
Have you self-hosted the app? Have you added a secret key to the env system?
Hello, I have the same error with a SECRET_KEY defined.
.env
# App
TZ=Europe/Paris
SECRET_KEY=7e885874-a543-11ec-a459-00155d553b9b
# URLs
PUBLIC_URL=http://localhost:3000
PUBLIC_SERVER_URL=http://localhost:3100/api
# Database
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USERNAME=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DATABASE=postgres
POSTGRES_SSL_CERT=
# Auth
JWT_SECRET=change-me
JWT_EXPIRY_TIME=604800
# Google
PUBLIC_GOOGLE_CLIENT_ID=change-me
GOOGLE_CLIENT_SECRET=change-me
GOOGLE_API_KEY=change-me
# SendGrid (Optional)
SENDGRID_API_KEY=
SENDGRID_FORGOT_PASSWORD_TEMPLATE_ID=
SENDGRID_FROM_NAME=
SENDGRID_FROM_EMAIL=
logs:
[client] wait - compiling /[username]/[slug]/printer (client and server)...
[client] event - compiled client and server successfully in 1084 ms (12461 modules)
[client] wait - compiling / (client and server)...
[client] event - compiled client and server successfully in 940 ms (12620 modules)
[client] error - Error: socket hang up
[client] wait - compiling /_error (client and server)...
[client] wait - compiling / (client and server)...
[client] event - compiled client and server successfully in 2.1s (12461 modules)
[client] event - compiled client and server successfully (12505 modules)
[server] [Nest] 21116 - 03/17/2022, 1:00:48 PM ERROR [ExceptionsHandler] page.waitForSelector: Timeout 30000ms exceeded.
[server] =========================== logs ===========================
[server] waiting for selector "html.wf-active" to be visible
[server] ============================================================
[server] page.waitForSelector: Timeout 30000ms exceeded.
[server] =========================== logs ===========================
[server] waiting for selector "html.wf-active" to be visible
[server] ============================================================
[server] at PrinterService.printAsPdf (/home/enzo/Reactive-Resume/server/src/printer/printer.service.ts:35:16)
[server] [Nest] 21116 - 03/17/2022, 1:01:04 PM ERROR [ExceptionsHandler] Request failed with status code 400
[server] Error: Request failed with status code 400
[server] at createError (/home/enzo/Reactive-Resume/node_modules/.pnpm/axios@0.26.0/node_modules/axios/lib/core/createError.js:16:15)
[server] at settle (/home/enzo/Reactive-Resume/node_modules/.pnpm/axios@0.26.0/node_modules/axios/lib/core/settle.js:17:12)
[server] at IncomingMessage.handleStreamEnd (/home/enzo/Reactive-Resume/node_modules/.pnpm/axios@0.26.0/node_modules/axios/lib/adapters/http.js:322:11)
[server] at IncomingMessage.emit (node:events:538:35)
[server] at endReadableNT (node:internal/streams/readable:1345:12)
[server] at processTicksAndRejections (node:internal/process/task_queues:83:21)
Please use PUBLIC_SERVER_URL=http://localhost:3100/
instead of PUBLIC_SERVER_URL=http://localhost:3100/api.
It will work as expected.
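To make the suffix advice concrete, here is a small sketch (a hypothetical helper, not code from Reactive Resume) of how a base URL with a trailing slash or an /api suffix changes the final request URL once an endpoint path is appended:

```typescript
// Hypothetical helper: join a configured base URL with an endpoint
// path, collapsing trailing/leading slashes. An /api suffix on the
// base ends up as a prefix on every request the client makes.
const joinUrl = (base: string, path: string): string =>
  `${base.replace(/\/+$/, "")}/${path.replace(/^\/+/, "")}`;

console.log(joinUrl("http://localhost:3100/", "printer/user/slug"));
// → http://localhost:3100/printer/user/slug
console.log(joinUrl("http://localhost:3100/api", "printer/user/slug"));
// → http://localhost:3100/api/printer/user/slug
```

Whether the /api prefix belongs in the URL depends on whether a reverse proxy in front of the server strips it, which is why the same value works for some setups and not others.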
I have the same issue with PUBLIC_SERVER_URL=http://localhost:3100/
With Docker I have the same issue.
docker-compose.yml
version: '3'

services:
  postgres:
    image: postgres
    container_name: postgres
    ports:
      - 5432:5432
    env_file: .env
    volumes:
      - pgdata:/var/lib/postgresql/data

  traefik:
    image: traefik
    container_name: traefik
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  server:
    image: amruthpillai/reactive-resume:server-latest
    container_name: server
    env_file: .env
    environment:
      - POSTGRES_HOST=postgres
    depends_on:
      - traefik
      - postgres
    labels:
      - traefik.enable=true
      - traefik.http.routers.server.entrypoints=web
      - traefik.http.routers.server.rule=Host(`ec2-52-58-184-203.eu-central-1.compute.amazonaws.com`) && PathPrefix(`/api/`)
      - traefik.http.routers.server.middlewares=server-stripprefix
      - traefik.http.middlewares.server-stripprefix.stripprefix.prefixes=/api
      - traefik.http.middlewares.server-stripprefix.stripprefix.forceslash=true

  client:
    image: amruthpillai/reactive-resume:client-latest
    container_name: client
    env_file: .env
    depends_on:
      - traefik
      - server
    labels:
      - traefik.enable=true
      - traefik.http.routers.client.rule=Host(`ec2-52-58-184-203.eu-central-1.compute.amazonaws.com`)
      - traefik.http.routers.client.entrypoints=web

volumes:
  pgdata:
.env
# App
TZ=Europe/Paris
SECRET_KEY=c106bff8-a640-11ec-aa72-06ab43eb872a
# URLs
PUBLIC_URL=http://ec2-52-58-184-203.eu-central-1.compute.amazonaws.com/
PUBLIC_SERVER_URL=http://ec2-52-58-184-203.eu-central-1.compute.amazonaws.com/
# Database
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USERNAME=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DATABASE=postgres
POSTGRES_SSL_CERT=
# Auth
JWT_SECRET=6b1a15c4-a544-11ec-a710-06ab43eb872a
JWT_EXPIRY_TIME=604800
# Google
PUBLIC_GOOGLE_CLIENT_ID=change-me
GOOGLE_CLIENT_SECRET=change-me
GOOGLE_API_KEY=change-me
# SendGrid (Optional)
SENDGRID_API_KEY=
SENDGRID_FORGOT_PASSWORD_TEMPLATE_ID=
SENDGRID_FROM_NAME=
SENDGRID_FROM_EMAIL=
Same with PUBLIC_SERVER_URL=http://ec2-52-58-184-203.eu-central-1.compute.amazonaws.com/api/
# App
TZ=Europe/Paris
SECRET_KEY=c106bff8-a640-11ec-aa72-06ab43eb872a
# URLs
PUBLIC_URL=http://ec2-52-58-184-203.eu-central-1.compute.amazonaws.com
PUBLIC_SERVER_URL=http://ec2-52-58-184-203.eu-central-1.compute.amazonaws.com
# Database
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USERNAME=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DATABASE=postgres
POSTGRES_SSL_CERT=
# Auth
JWT_SECRET=6b1a15c4-a544-11ec-a710-06ab43eb872a
JWT_EXPIRY_TIME=604800
# Google
PUBLIC_GOOGLE_CLIENT_ID=change-me
GOOGLE_CLIENT_SECRET=change-me
GOOGLE_API_KEY=change-me
# SendGrid (Optional)
SENDGRID_API_KEY=
SENDGRID_FORGOT_PASSWORD_TEMPLATE_ID=
SENDGRID_FROM_NAME=
SENDGRID_FROM_EMAIL=
If you notice, I have removed the forward slashes from the end of the URLs, which should make this work for you.
Please try modifying your Docker file accordingly.
I am self hosting as well. Running the instance in docker. I have set my PUBLIC_URL and PUBLIC_SERVER_URL values to the same address now and have no trailing slashes and it still does not work. I also have a SECRET_KEY defined.
Can you share the .env file?
Here is a partial Kubernetes manifest (the Docker containers are running in Kubernetes):
containers:
  - name: reactive-client
    image: amruthpillai/reactive-resume:client-latest
    ports:
      - containerPort: 3100
        name: reactive-client
    env:
      - name: POSTGRES_HOST
        value: localhost
      - name: POSTGRES_PORT
        value: "5432"
      - name: POSTGRES_USERNAME
        value: postgres
      - name: POSTGRES_PASSWORD
        value: postgres
      - name: POSTGRES_DATABASE
        value: postgres
      - name: GOOGLE_API_KEY
        value: <the key>
      - name: JWT_EXPIRY_TIME
        value: "604800"
      - name: JWT_SECRET
        value: <long random string>
      - name: SECRET_KEY
        value: <long random string>
      - name: PUBLIC_URL
        value: https://subdomain.domain.com
      - name: PUBLIC_URL_API
        value: https://subdomain.domain.com
  - name: reactive-server
    image: amruthpillai/reactive-resume:server-latest
    ports:
      - containerPort: 3000
        name: reactive-server
    env:
      - name: POSTGRES_HOST
        value: localhost
      - name: POSTGRES_PORT
        value: "5432"
      - name: POSTGRES_USERNAME
        value: postgres
      - name: POSTGRES_PASSWORD
        value: postgres
      - name: POSTGRES_DATABASE
        value: postgres
      - name: GOOGLE_API_KEY
        value: <the key>
      - name: JWT_EXPIRY_TIME
        value: "604800"
      - name: JWT_SECRET
        value: <long random string>
      - name: SECRET_KEY
        value: <long random string>
      - name: PUBLIC_URL
        value: https://subdomain.domain.com
      - name: PUBLIC_URL_API
        value: https://subdomain.domain.com
  - name: postgres
    image: postgres
    ports:
      - containerPort: 5432
        name: postgres
    volumeMounts:
      - name: postgres
        mountPath: /var/lib/postgresql/data
        subPath: postgres
    env:
      - name: POSTGRES_HOST
        value: localhost
      - name: POSTGRES_PORT
        value: "5432"
      - name: POSTGRES_USERNAME
        value: postgres
      - name: POSTGRES_PASSWORD
        value: postgres
      - name: POSTGRES_DATABASE
        value: postgres
I also have the following Traefik rules on my load balancer:
[http.routers]
  [http.routers.resume-rtr]
    entryPoints = ["https"]
    rule = "Host(`subdomain.domain.ax`)"
    service = "resume-svc"
    middlewares = ["chain-no-auth"]
    [http.routers.resume-rtr.tls]
      certresolver = "dns-route53"

[http.services]
  [http.services.resume-svc]
    [http.services.resume-svc.weighted.healthCheck]
    [[http.services.resume-svc.weighted.services]]
      name = "resume-svc-n1"
  [http.services.resume-svc-n1]
    [http.services.resume-svc-n1.loadBalancer]
      [http.services.resume-svc-n1.loadBalancer.healthCheck]
        path = "/"
        interval = "10s"
        timeout = "3s"
      [[http.services.resume-svc-n1.loadBalancer.servers]]
        url = "http://10.0.1.218:3000"

[http.routers]
  [http.routers.resume-api-rtr]
    entryPoints = ["https"]
    rule = "Host(`subdomain.domain.ax`) && PathPrefix(`/api`)"
    service = "resume-api-svc"
    middlewares = ["chain-no-auth", "middlewares-api-stripprefix"]
    [http.routers.resume-api-rtr.tls]
      certresolver = "dns-route53"

[http.services]
  [http.services.resume-api-svc]
    [http.services.resume-api-svc.loadBalancer]
      passHostHeader = true
      [[http.services.resume-api-svc.loadBalancer.servers]]
        url = "http://10.0.1.218:3100"
To shed some extra light: using Postman to run a GET request against both
subdomain.domain.ax/api/printer/<username>/test
and 10.0.1.218:3100/printer/<username>/test
produces the same result as the GUI.
Can confirm I'm also getting the same issue.
The following error occurs:
today at 13:10:38[Nest] 42 - 03/29/2022, 1:10:38 PM ERROR [ExceptionsHandler] page.waitForSelector: Timeout 30000ms exceeded.
today at 13:10:38=========================== logs ===========================
today at 13:10:38waiting for selector "html.wf-active" to be visible
today at 13:10:38============================================================
today at 13:10:38page.waitForSelector: Timeout 30000ms exceeded.
today at 13:10:38=========================== logs ===========================
today at 13:10:38waiting for selector "html.wf-active" to be visible
today at 13:10:38============================================================
today at 13:10:38 at PrinterService.printAsPdf (/app/server/dist/printer/printer.service.js:40:20)
My compose file:
###################
##Reactive Resume##
###################
  reactiveresumeserver:
    image: amruthpillai/reactive-resume:server-latest
    container_name: ReactiveResume-Server
    environment:
      - PUBLIC_URL=https://resume.$DOMAINNAME
      - POSTGRES_HOST=reactiveresumedb
      - POSTGRES_DATABASE=$RR_POSTGRES_DATABASE
      - POSTGRES_PASSWORD=$RR_POSTGRES_PASSWORD
      - POSTGRES_USER=$RR_POSTGRES_USERNAME
      - TZ=$TZ
      - SECRET_KEY=$RR_SECRET_KEY
      - JWT_SECRET=$RR_JWT_SECRET
      - JWT_EXPIRY_TIME=$RR_JWT_EXPIRY_TIME
      - PUBLIC_GOOGLE_CLIENT_ID=$RR_PUBLIC_GOOGLE_CLIENT_ID
      - GOOGLE_CLIENT_SECRET=$RR_GOOGLE_CLIENT_SECRET
      - GOOGLE_API_KEY=$RR_GOOGLE_API_KEY
    depends_on:
      - reactiveresumedb
    networks:
      pihole:
        ipv4_address: '172.22.0.140'
      isolated:
    labels:
      - autoheal=true
      - "traefik.enable=true"
      ## HTTP Routers
      - "traefik.http.routers.resumeserver-rtr.entrypoints=https"
      - "traefik.http.routers.resumeserver-rtr.rule=Host(`resume.$DOMAINNAME`) && PathPrefix(`/api/`)"
      - "traefik.http.routers.resumeserver-rtr.tls=true"
      ## Middlewares
      - "traefik.http.routers.resumeserver-rtr.middlewares=chain-no-auth@file, resume-api" # No Authentication
      # - "traefik.http.routers.resumeserver-rtr.middlewares=chain-basic-auth@file" # Basic Authentication
      # - "traefik.http.routers.resumeserver-rtr.middlewares=chain-oauth@file" # Google OAuth 2.0
      # - "traefik.http.routers.resumeserver-rtr.middlewares=chain-authelia@file" # Authelia
      - "traefik.http.middlewares.resume-api.stripprefix.prefixes=/api"
      - "traefik.http.middlewares.resume-api.stripprefix.forceslash=true"
      ## HTTP Services
      - "traefik.http.routers.resumeserver-rtr.service=resumeserver-svc"
      - "traefik.http.services.resumeserver-svc.loadbalancer.server.port=3100"
    volumes:
      - $USERDIR/ReactiveResume/uploads:/app/server/dist/assets/uploads
    healthcheck:
      test: curl -fSs http://localhost:3100/health || exit 1
      interval: 30s
      timeout: 5s
      retries: 3
    restart: always

  reactiveresumeclient:
    image: amruthpillai/reactive-resume:client-latest
    container_name: ReactiveResume-Client
    environment:
      - PUBLIC_SERVER_URL=https://resumeserver.$DOMAINNAME
      - TZ=$TZ
      # - SECRET_KEY=$RR_SECRET_KEY
      # - JWT_SECRET=$RR_JWT_SECRET
      # - JWT_EXPIRY_TIME=$RR_JWT_EXPIRY_TIME
      - PUBLIC_GOOGLE_CLIENT_ID=$RR_PUBLIC_GOOGLE_CLIENT_ID
      - GOOGLE_CLIENT_SECRET=$RR_GOOGLE_CLIENT_SECRET
      - GOOGLE_API_KEY=$RR_GOOGLE_API_KEY
      - PUBLIC_FLAG_DISABLE_SIGNUPS=true
    depends_on:
      - reactiveresumeserver
      - reactiveresumedb
    networks:
      pihole:
        ipv4_address: '172.22.0.141'
    labels:
      - autoheal=true
      - "traefik.enable=true"
      ## HTTP Routers
      - "traefik.http.routers.resume-rtr.entrypoints=https"
      - "traefik.http.routers.resume-rtr.rule=Host(`resume.$DOMAINNAME`)"
      - "traefik.http.routers.resume-rtr.tls=true"
      ## Middlewares
      - "traefik.http.routers.resume-rtr.middlewares=chain-no-auth@file" # No Authentication
      # - "traefik.http.routers.resume-rtr.middlewares=chain-basic-auth@file" # Basic Authentication
      # - "traefik.http.routers.resume-rtr.middlewares=chain-oauth@file" # Google OAuth 2.0
      # - "traefik.http.routers.resume-rtr.middlewares=chain-authelia@file" # Authelia
      ## HTTP Services
      - "traefik.http.routers.resume-rtr.service=resume-svc"
      - "traefik.http.services.resume-svc.loadbalancer.server.port=3000"
      ## Flame Dashboard
      - flame.type=application # "app" works too
      - flame.name=Reactive Resume
      - flame.icon=https://raw.githubusercontent.com/modem7/MiscAssets/master/Icons/rxresume.png
    healthcheck:
      test: curl -fSs 127.0.0.1:3000 || exit 1
      interval: 30s
      timeout: 5s
      retries: 3
    restart: always

  reactiveresumedb:
    image: postgres:alpine
    container_name: ReactiveResume-DB
    environment:
      - TZ=$TZ
      - POSTGRES_DB=$RR_POSTGRES_DATABASE
      - POSTGRES_PASSWORD=$RR_POSTGRES_PASSWORD
      - POSTGRES_USER=$RR_POSTGRES_USERNAME
    networks:
      - isolated
    volumes:
      - $USERDIR/ReactiveResume/db:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $RR_POSTGRES_USERNAME"]
      interval: 30s
      timeout: 5s
      retries: 3
    restart: always
Same issue over here, also self-hosted.
Please use PUBLIC_SERVER_URL=http://localhost:3100/
instead of PUBLIC_SERVER_URL=http://localhost:3100/api.
It will work as expected.
Hi @chandiwalaaadhar, I have the same issue too with PDF download.
My env:
The following log:
server | [Nest] 42 - 04/06/2022, 10:01:18 AM ERROR [ExceptionsHandler] page.waitForSelector: Timeout 30000ms exceeded.
server | =========================== logs ===========================
server | waiting for selector "html.wf-active" to be visible
server | ============================================================
server | page.waitForSelector: Timeout 30000ms exceeded.
server | =========================== logs ===========================
server | waiting for selector "html.wf-active" to be visible
server | ============================================================
server | at PrinterService.printAsPdf (/app/server/dist/printer/printer.service.js:40:20)
The second log is printed from https://github.com/AmruthPillai/Reactive-Resume/issues/721#issuecomment-1071938664
@swoiow Haven't checked this with Docker yet, will check once and look for a resolution
Ok, thanks! Looking forward to the good news.
Ran into this issue as well. The fix in my case was a firewall change: my self-managed stack runs behind an Nginx Proxy Manager instance which was configured to only allow certain IPs access to the service. The instance IP that Reactive Resume itself was running on was denied access, which led to the print function waiting for a page that would never load. I suggest you look at the URL the server is trying to connect to and make sure it's accessible.
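A quick way to see exactly which host and port must be allowed through such a firewall is to parse the configured PUBLIC_URL. This is only an illustrative sketch using Node's built-in URL class, not part of Reactive Resume; resume.example.com is a placeholder:

```typescript
// Hypothetical diagnostic: given the PUBLIC_URL that the server's
// headless browser will load, print the host:port that the server's
// own IP must be allowed to reach.
const allowTarget = (publicUrl: string): string => {
  const u = new URL(publicUrl);
  // URL.port is empty when the scheme's default port is used.
  const port = u.port || (u.protocol === "https:" ? "443" : "80");
  return `${u.hostname}:${port}`;
};

console.log(allowTarget("https://resume.example.com"));
// → resume.example.com:443
```

If the server's own egress IP cannot reach that host:port, the printer page never loads and the waitForSelector timeout above is exactly what you see.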
@kgotso I am not very good when it comes to Nginx. Is there a setting in the compose file I could add to make it work?
@Luk164 Is there a setting in the compose file I could add to make it work?
This issue was happening for me as well in v3.6.5 (latest). I suspect this is because the server instance is unable to reach the configured PUBLIC_URL. In my case, I was running the server/client instances behind a reverse proxy, so I was using my domain URL as PUBLIC_URL.
Once I fixed the PUBLIC_URL for the server, I was able to export PDFs successfully. I also made sure both server and client are on the same network.
Here is my config in docker-compose yaml. Hope this helps.
networks:
  frontend:
    external: true
  backend:
    external: true
...
  postgres:
    networks:
      - backend
...
  resume-server:
    environment:
      - PUBLIC_URL=http://resume-client:3000
      - PUBLIC_SERVER_URL=https://resume-server.mydomain.com
    networks:
      - frontend
      - backend
...
  resume-client:
    environment:
      - PUBLIC_URL=https://resume.mydomain.com
      - PUBLIC_SERVER_URL=https://resume-server.mydomain.com
    networks:
      - frontend
Note: my reverse proxy is configured to forward requests for https://resume.mydomain.com to http://resume-client:3000, and requests for https://resume-server.mydomain.com to http://resume-server:3100.
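Under those assumptions (hostnames and upstream container names are the poster's, so adjust to your own), the reverse-proxy side of that setup might look roughly like this in Nginx:

```nginx
# Sketch only: one server block per public hostname, each proxying to
# the matching container. TLS directives are omitted for brevity.
server {
    server_name resume.mydomain.com;
    location / {
        proxy_pass http://resume-client:3000;
    }
}

server {
    server_name resume-server.mydomain.com;
    location / {
        proxy_pass http://resume-server:3100;
    }
}
```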
Hello, I would like to know if anyone has managed to solve this problem. I still can't download as PDF, but JSON export works. Note that this is running locally. Here is the error message. Thanks.
@reactive-resume/client:dev: event - compiled client and server successfully in 3.1s (13133 modules)
@reactive-resume/server:dev: [Nest] 7498 - 29/10/2022 18:29:49 ERROR [ExceptionsHandler] page.waitForSelector: Timeout 30000ms exceeded.
@reactive-resume/server:dev: =========================== logs ===========================
@reactive-resume/server:dev: waiting for selector "html.wf-active" to be visible
@reactive-resume/server:dev: ============================================================
@reactive-resume/server:dev: page.waitForSelector: Timeout 30000ms exceeded.
@reactive-resume/server:dev: =========================== logs ===========================
@reactive-resume/server:dev: waiting for selector "html.wf-active" to be visible
@reactive-resume/server:dev: ============================================================
@reactive-resume/server:dev: at PrinterService.printAsPdf (/home/user/Desk/project_Resum/Reactive-Resume-main/server/src/printer/printer.service.ts:36:16)
@reactive-resume/client:dev: wait - compiling /dashboard...
@reactive-resume/client:dev: wait - compiling /[username]/[slug]/printer (client and server)...
@reactive-resume/client:dev: event - compiled client and server successfully in 2.8s (13147 modules)
@reactive-resume/client:dev: event - compiled client and server successfully (13149 modules)
I am also experiencing this issue, accessing locally, not through a reverse proxy.
I experience the same issue (both client and server apps behind a reverse proxy), PDF export is really a core feature, hope it will be fixed soon. Not sure what is wrong, the public url is accessible from the server (tried with CURL).
If the server public URL is accessible through curl, that's good. Now, the next step would be to figure out which URL the client is trying to access. As you can see here: https://github.com/AmruthPillai/Reactive-Resume/blob/main/client/services/axios.ts, it always looks for the env SERVER_URL if it exists. If it does, and it matches the public server URL, then it should work. Otherwise, it would revert to /api, which means it would look for the /api route on the client's own domain (which is what is happening in your case, I believe).
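That fallback can be sketched like this (a simplified illustration, not a verbatim copy of client/services/axios.ts):

```typescript
// Use SERVER_URL when it is set; otherwise fall back to the relative
// /api route on the client's own domain.
const apiBaseUrl = (env: { SERVER_URL?: string }): string =>
  env.SERVER_URL && env.SERVER_URL.length > 0 ? env.SERVER_URL : "/api";

console.log(apiBaseUrl({ SERVER_URL: "https://resume-server.mydomain.com" }));
// → https://resume-server.mydomain.com
console.log(apiBaseUrl({}));
// → /api
```

So if /api on the client's domain is not routed to the server by your proxy, every API call from the client silently goes to the wrong place.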
Still does not work. So I have the following setup: http://resume.mydomain.local and http://resume.mydomain.local/api, with a load balancer (Traefik) in front of them, so http://resume.mydomain.local points to client:3000 and http://resume.mydomain.local/api points to server:3100. The server is able to access the client URL, the client is able to access the server URL, but the error still persists.
The only log I get is:
> server_1 | @reactive-resume/server:start: [Nest] 83 - 11/10/2022, 10:20:35 AM ERROR [ExceptionsHandler] page.waitForSelector: Timeout 30000ms exceeded.
> server_1 | @reactive-resume/server:start: =========================== logs ===========================
> server_1 | @reactive-resume/server:start: waiting for selector "html.wf-active" to be visible
> server_1 | @reactive-resume/server:start: ============================================================
> server_1 | @reactive-resume/server:start: page.waitForSelector: Timeout 30000ms exceeded.
> server_1 | @reactive-resume/server:start: =========================== logs ===========================
> server_1 | @reactive-resume/server:start: waiting for selector "html.wf-active" to be visible
> server_1 | @reactive-resume/server:start: ============================================================
> server_1 | @reactive-resume/server:start: at PrinterService.printAsPdf (/app/server/dist/printer/printer.service.js:41:20)
To resolve this problem: it works, but without HTTPS :(
My Portainer Stack config (change example.com to your domain):
version: "3.8"
services:
  example.com:
    image: martadinata666/reactive-resume:latest
    env_file:
      - stack.env
    environment:
      - TZ=UTC
    networks:
      - reactiveresume
    ports:
      - 3000:3000
      - 3100:3100
    volumes:
      - /srv/reactive_resume_martadinata/uploads:/home/debian/reactiveresume/server/dist/assets/uploads
      - /srv/reactive_resume_martadinata/exports:/home/debian/reactiveresume/server/dist/assets/exports # photo and export storage
    restart: unless-stopped

  db:
    image: postgres:13
    # user: 1000:1000 # debian image had user override feature, not working on alpine image
    environment:
      - POSTGRES_USER=reactiveresume
      - POSTGRES_PASSWORD=reactiveresumepass
      - POSTGRES_DB=reactiveresume
    restart: unless-stopped
    volumes:
      - /srv/reactive_resume_martadinata/db:/var/lib/postgresql/data
    networks:
      - reactiveresume

volumes:
  db:

networks:
  reactiveresume:
    name: reactiveresume
stack.env:
TZ=UTC
SECRET_KEY=12345
POSTGRES_HOST=db
POSTGRES_PORT=5432
POSTGRES_USER=reactiveresume
POSTGRES_PASSWORD=reactiveresumepass
POSTGRES_DB=reactiveresume
POSTGRES_SSL_CERT=
JWT_SECRET=12345
JWT_EXPIRY_TIME=604800
GOOGLE_API_KEY=
PUBLIC_FLAG_DISABLE_SIGNUPS=false
PUBLIC_URL=http://example.com:3000
PUBLIC_SERVER_URL=http://example.com:3100
@vinogradovnet Thanks for the help, but I want to hide the app behind a reverse proxy, so only ports 80 and 443 are available from outside. Besides that, it should also work with HTTPS (which it doesn't, and I am not referring to your solution). So I believe this issue should be fixed in the amruthpillai/reactive-resume images.
If you call the PDF download API externally, there is an /api prefix. But when the client calls the API internally, there is no /api prefix. This is what causes the error.
@Jack-Kingdom No, the client calls the PDF export API through the proxy, not internally. So what should the Docker configuration be with both client and server running behind a reverse proxy, under a single domain?
@grozandrei I rechecked the code and found that whether the client SSR requests the page through the proxy depends on the env config SERVER_URL that the user provides.
If you want the client SSR to request the page from the internal network or with a different domain, you can provide a SERVER_URL with domain and port, for example: http://example.com:80.
But if you want the client SSR to request the page through a proxy on the same domain, you must provide SERVER_URL with the /api suffix, because that is the rule you added in the proxy. For example: http://example.com:80/api.
Still unable to get PDF downloads working. I have also tried using martadinata666's container, because it has client and server together, but it gave me the same result. I am using the most basic setup, copied from the repository:
version: "3.8"
services:
  postgres:
    image: postgres:alpine
    restart: always
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      start_period: 15s
      interval: 30s
      timeout: 30s
      retries: 3
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

  server:
    image: amruthpillai/reactive-resume:server-latest
    # build:
    #   context: .
    #   dockerfile: ./server/Dockerfile
    restart: always
    ports:
      - 3100:3100
    depends_on:
      - postgres
    environment:
      - PUBLIC_URL=http://localhost:3000
      - PUBLIC_SERVER_URL=http://localhost:3100
      - PUBLIC_GOOGLE_CLIENT_ID=
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - SECRET_KEY=change-me-to-something-secure
      - POSTGRES_HOST=postgres
      - POSTGRES_PORT=5432
      - POSTGRES_SSL_CERT=
      - JWT_SECRET=change-me-to-something-secure
      - JWT_EXPIRY_TIME=604800
      - GOOGLE_CLIENT_SECRET=
      - GOOGLE_API_KEY=
      - MAIL_FROM_NAME=Reactive Resume
      - MAIL_FROM_EMAIL=noreply@rxresu.me
      - MAIL_HOST=
      - MAIL_PORT=
      - MAIL_USERNAME=
      - MAIL_PASSWORD=
      - STORAGE_BUCKET=
      - STORAGE_REGION=
      - STORAGE_ENDPOINT=
      - STORAGE_URL_PREFIX=
      - STORAGE_ACCESS_KEY=
      - STORAGE_SECRET_KEY=
      - PDF_DELETION_TIME=

  client:
    image: amruthpillai/reactive-resume:client-latest
    # build:
    #   context: .
    #   dockerfile: ./client/Dockerfile
    restart: always
    ports:
      - 3000:3000
    depends_on:
      - server
    environment:
      - PUBLIC_URL=http://localhost:3000
      - PUBLIC_SERVER_URL=http://localhost:3100
      - PUBLIC_GOOGLE_CLIENT_ID=

volumes:
  pgdata:
Hello, has anyone managed to solve the problem of downloading PDFs locally? Thanks.
Same problem here. Has anyone found a solution?
It seems there may be multiple reasons people encounter this, but I found a solution to my issue. It had to do with how my reverse proxy was configured and the fact that I was trying to serve both apps off the same port of the same domain. The issue is that the server calls its public URL for its own printer endpoint differently than the endpoint that returns a resume to the browser. So it must be able to resolve its own traffic back to itself locally using the same DNS and routing as over a public network.

In my Docker Compose file, I have it configured to serve the client from port 3180 and the server from port 3190. My URL env variables are configured like this:

PUBLIC_URL=https://<sub>.<domain>.com/
PUBLIC_SERVER_URL=https://<sub>.<domain>.com/api

These variables are identical between the server and client containers.

In my Nginx reverse proxy, I configured traffic to <sub>.<domain>.com to be directed to the client on http://<docker_host>:3180, but then I needed to add a custom location to the same Nginx config specifically for https://<sub>.<domain>.com/api/, forwarding all /api traffic to http://<docker_host>:3190/ without the /api suffix. Here is what that extra rule looks like:

location /api/ {
    proxy_pass http://<docker_host>:3190/;
}
If you're using Nginx Proxy Manager, this is what the extra config looks like:
Can anyone provide a working example with a traefik reverse proxy in front, using a local domain name?
Running locally on Docker, it isn't possible to use the print function with localhost.
Any news on this with local usage?
I've also had a 500 error when trying to generate a PDF locally with Docker. I checked the Docker logs from the server container: to print the PDF, the server tries to connect to localhost:3000 and gets connection-refused errors.
@Etoile2 @joaquinvacas For all of you running locally with Docker (by just running docker compose up -d), here is how to make PDF export work:
diff --git a/docker-compose.yml b/docker-compose.yml
index 7d92ffd9..ca32537e 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -4,8 +4,7 @@ services:
   postgres:
     image: postgres:alpine
     restart: always
-    ports:
-      - 5432:5432
+    network_mode: host
     volumes:
       - pgdata:/var/lib/postgresql/data
     healthcheck:
@@ -25,8 +24,7 @@ services:
     #   context: .
     #   dockerfile: ./server/Dockerfile
     restart: always
-    ports:
-      - 3100:3100
+    network_mode: host
     depends_on:
       - postgres
     environment:
@@ -37,7 +35,7 @@ services:
       - POSTGRES_USER=postgres
       - POSTGRES_PASSWORD=postgres
       - SECRET_KEY=change-me-to-something-secure
-      - POSTGRES_HOST=postgres
+      - POSTGRES_HOST=localhost
       - POSTGRES_PORT=5432
       - POSTGRES_SSL_CERT=
       - JWT_SECRET=change-me-to-something-secure
@@ -64,8 +62,7 @@ services:
     #   context: .
     #   dockerfile: ./client/Dockerfile
     restart: always
-    ports:
-      - 3000:3000
+    network_mode: host
     depends_on:
       - server
     environment:
@@ -74,4 +71,4 @@ services:
       - PUBLIC_GOOGLE_CLIENT_ID=
Then docker compose up -d and it should work after refreshing the page in the browser. The trick is basically to use network_mode: host on all services and POSTGRES_HOST=localhost.

:warning: This is a workaround only for temporary use! It is not secure, and it will expose the whole app to everyone who can access your machine on your network or from the internet.
As I understand it, for local use with Docker the proper URL for the PDF printer is http://client:3000, not http://localhost:3000, because we are making the request from within the server container, and localhost means the server container itself.

So all this could probably be avoided by adding a new optional env var, e.g. PDF_PRINTER_URL, that, if set, could be used in https://github.com/AmruthPillai/Reactive-Resume/blob/main/server/src/printer/printer.service.ts#L50 like this:

const url = this.configService.get<string>('app.pdf_printer_url', this.configService.get<string>('app.url'));

(I'm not a JS dev, so sorry for the garbage code.) This would allow the backend to return correct URLs to the client app and use this new PDF printer URL to make internal calls to the client container from within the server container.
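The proposed fallback could be sketched as follows (hypothetical: PDF_PRINTER_URL is only a suggestion in this thread, not an existing option):

```typescript
// If the hypothetical PDF_PRINTER_URL is set, the server would use it
// for its internal printer call; otherwise it falls back to the public
// app URL, which matches the current behavior.
const printerUrl = (env: { PDF_PRINTER_URL?: string; PUBLIC_URL: string }): string =>
  env.PDF_PRINTER_URL ?? env.PUBLIC_URL;

console.log(printerUrl({ PDF_PRINTER_URL: "http://client:3000", PUBLIC_URL: "http://localhost:3000" }));
// → http://client:3000
console.log(printerUrl({ PUBLIC_URL: "http://localhost:3000" }));
// → http://localhost:3000
```

This keeps the public-facing URLs untouched while letting server-to-client traffic stay on the Docker network.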
Having the same issue when exporting to pdf:
@reactive-resume/server:start: [Nest] 103 - 02/24/2023, 2:57:33 PM ERROR [ExceptionsHandler] page.waitForSelector: Timeout 30000ms exceeded.
@reactive-resume/server:start: =========================== logs ===========================
@reactive-resume/server:start: waiting for locator('html.wf-active') to be visible
@reactive-resume/server:start: ============================================================
@reactive-resume/server:start: page.waitForSelector: Timeout 30000ms exceeded.
@reactive-resume/server:start: =========================== logs ===========================
@reactive-resume/server:start: waiting for locator('html.wf-active') to be visible
@reactive-resume/server:start: ============================================================
@reactive-resume/server:start: at PrinterService.printAsPdf (/app/server/dist/printer/printer.service.js:59:24)
I'm not using a reverse proxy. It's running on my server, which I access on my local network via http://nuc.localdomain.be:3111. localdomain.be is fake; I'm using another domain in reality. I use 3111 as the port because 3000 is taken.
Docker-compose:
version: "3.8"
services:
  postgres:
    image: postgres:alpine
    restart: always
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      start_period: 15s
      interval: 30s
      timeout: 30s
      retries: 3
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

  server:
    image: amruthpillai/reactive-resume:server-latest
    # build:
    #   context: .
    #   dockerfile: ./server/Dockerfile
    restart: always
    ports:
      - 3100:3100
    volumes:
      - /home/niek/docker/reactive-resume/uploads:/app/server/dist/assets/uploads
      - /home/niek/docker/reactive-resume/exports:/app/server/dist/assets/exports
    depends_on:
      - postgres
    environment:
      - PUBLIC_URL=http://nuc.localdomain.be:3111
      - PUBLIC_SERVER_URL=http://nuc.localdomain.be:3100
      - PUBLIC_GOOGLE_CLIENT_ID=
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - SECRET_KEY=<redacted>
      - POSTGRES_HOST=postgres
      - POSTGRES_PORT=5432
      - POSTGRES_SSL_CERT=
      - JWT_SECRET=<redacted>
      - JWT_EXPIRY_TIME=604800
      - GOOGLE_CLIENT_SECRET=
      - GOOGLE_API_KEY=
      - MAIL_FROM_NAME=Reactive Resume
      - MAIL_FROM_EMAIL=noreply@rxresu.me
      - MAIL_HOST=
      - MAIL_PORT=
      - MAIL_USERNAME=
      - MAIL_PASSWORD=
      - STORAGE_BUCKET=
      - STORAGE_REGION=
      - STORAGE_ENDPOINT=
      - STORAGE_URL_PREFIX=
      - STORAGE_ACCESS_KEY=
      - STORAGE_SECRET_KEY=
      - PDF_DELETION_TIME=

  client:
    image: amruthpillai/reactive-resume:client-latest
    # build:
    #   context: .
    #   dockerfile: ./client/Dockerfile
    restart: always
    ports:
      - 3111:3000
    depends_on:
      - server
    environment:
      - PUBLIC_URL=http://nuc.localdomain.be:3111
      - PUBLIC_SERVER_URL=http://nuc.localdomain.be:3100
      - PUBLIC_GOOGLE_CLIENT_ID=

volumes:
  pgdata:
Creation of an account works and I can start creating a resume, but exporting to PDF fails. I don't like the solution above (running the container on the host network), so what's a better option?
So the common misunderstanding with the PDF issue is that the backend generates the PDF by making an HTTP request to the frontend: the API must be able to request the public URL itself. It sounds like in your case you are accessing it locally, and the API is unable to resolve the domain to the internal address. This is why it's recommended to use a reverse proxy with a domain pointing to it. If you want to keep your current setup, you need to make the domain resolvable via internal DNS.
In my case it was on a VPS using Nginx Proxy Manager as the reverse proxy, but I had a whitelist of which IPs could access the frontend, which resulted in the API not being able to connect to itself. I think this just needs to be documented better so that everyone understands the issue. PDF download should work if networking and routing are correct. @AmruthPillai, can you fact-check me here please?
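A quick way to verify the point above is to check, from inside the server container, that the hostname in PUBLIC_URL actually resolves. This is only a sketch; the container name is an assumption you must adjust to your deployment:

```shell
# Open a shell inside the server container first (name is an assumption):
#   docker exec -it rxresume-server sh
# Then check that the host part of your PUBLIC_URL resolves:
host="localhost"   # substitute the hostname from your PUBLIC_URL
if getent hosts "$host" >/dev/null 2>&1; then
  echo "$host resolves"
else
  echo "$host does NOT resolve; fix internal DNS, extra_hosts, or use a reverse proxy"
fi
```

If the lookup fails, the printer service cannot reach the frontend and the `page.waitForSelector` timeout above is the symptom you will see.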
The docker container is running on my proxmox server, a different server in my LAN.
http://nuc.localdomain.be resolves to that proxmox server.
I can access the client on port 3111 and the server on port 3100 (the latter returning {"statusCode":404,"message":"Cannot GET /","timestamp":"2023-02-25T11:11:54.873Z","path":"/"}, which I understand is expected), so I see no reason why the backend wouldn't be able to talk to the frontend?
Thank you @funkybunch, it works for me.
But I also needed to change the ownership of the folder /home/debian/reactiveresume/server/dist/assets/ to debian:debian inside the container.
It seems there may be multiple reasons people encounter this, but I found a solution to my issue. It had to do with how my reverse proxy was configured and the fact that I was trying to serve both apps off the same port of the same domain. The issue is that the server calls its public URL to reach its own printer endpoint, which is different from the endpoint that returns a resume to the browser. So the server must be able to route its own traffic back to itself locally using the same DNS and routing as over the public network.
In my Docker Compose file, I serve the client from port 3180 and the server from port 3190. My URL env variables are configured like this:
PUBLIC_URL=https://<sub>.<domain>.com/
PUBLIC_SERVER_URL=https://<sub>.<domain>.com/api
These variables are identical between the server and client containers.
In my Nginx reverse proxy, I direct traffic for <sub>.<domain>.com to the client on http://<docker_host>:3180, but I also needed to add a custom location to the same Nginx config specifically for https://<sub>.<domain>.com/api/, forwarding all /api traffic to http://<docker_host>:3190/ without the /api suffix. Here is what that extra rule looks like:
location /api/ { proxy_pass http://<docker_host>:3190/; }
If you're using Nginx Proxy Manager, this is what the extra config looks like:
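For reference, the Nginx Proxy Manager custom location boils down to the same rule as above. A sketch of what NPM generates, with `<docker_host>` and the port as placeholders to adjust:

```nginx
# NPM "Custom Location": location path /api/, forwarded to the server container.
location /api/ {
    proxy_pass http://<docker_host>:3190/;  # trailing slash strips the /api prefix
    proxy_set_header Host $host;            # optional: preserve the original Host header
}
```

The trailing slash on the `proxy_pass` URL is what removes the `/api` prefix before the request reaches the server container.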
I still have the same problem, but using Traefik as a reverse proxy. Here is my setup:
version: '3.8'
services:
  postgres:
    image: postgres:15.2-alpine
    restart: unless-stopped
    expose:
      - 5432
    volumes:
      - resume-postgres:/var/lib/postgresql/data
    networks:
      - resume
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      start_period: 15s
      interval: 30s
      timeout: 30s
      retries: 3
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=*****
  server:
    image: amruthpillai/reactive-resume:server-3.7.1
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.server.entrypoints=web"
      - "traefik.http.routers.server.rule=Host(`resume.mydomain.lcl`) && PathPrefix(`/api/`)"
      - "traefik.http.routers.server.middlewares=path-strip"
      - "traefik.http.middlewares.path-strip.stripprefix.prefixes=/api"
      - "traefik.http.middlewares.path-strip.stripprefix.forceSlash=false"
    environment:
      - PUBLIC_URL=http://resume.mydomain.lcl
      - PUBLIC_SERVER_URL=http://resume.mydomain.lcl/api
      - SERVER_URL=http://resume.mydomain.lcl/api
      - PUBLIC_GOOGLE_CLIENT_ID=
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=*****
      - SECRET_KEY=*****
      - POSTGRES_HOST=postgres
      - POSTGRES_PORT=5432
      - POSTGRES_SSL_CERT=
      - JWT_SECRET=*****
      - JWT_EXPIRY_TIME=604800
      - GOOGLE_CLIENT_SECRET=
      - GOOGLE_API_KEY=
      - MAIL_FROM_NAME=Reactive Resume
      - MAIL_FROM_EMAIL=noreply@resume.mydomain.lcl
      - MAIL_HOST=
      - MAIL_PORT=
      - MAIL_USERNAME=
      - MAIL_PASSWORD=
      - STORAGE_S3_ENABLED=false
      - STORAGE_BUCKET=
      - STORAGE_REGION=
      - STORAGE_ENDPOINT=
      - STORAGE_URL_PREFIX=
      - STORAGE_ACCESS_KEY=
      - STORAGE_SECRET_KEY=
    volumes:
      - ./resume/uploads:/app/server/dist/assets/uploads
      - ./resume/exports:/app/server/dist/assets/exports
    expose:
      - 3100
    networks:
      - resume
    depends_on:
      - postgres
  client:
    image: amruthpillai/reactive-resume:client-3.7.1
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.client.entrypoints=web"
      - "traefik.http.routers.client.rule=Host(`resume.mydomain.lcl`)"
    environment:
      - SIGNUPS_VERIFY=false
      - PUBLIC_URL=http://resume.mydomain.lcl
      - PUBLIC_SERVER_URL=http://resume.mydomain.lcl/api
      - SERVER_URL=http://resume.mydomain.lcl/api
      - PUBLIC_GOOGLE_CLIENT_ID=
    expose:
      - 3000
    networks:
      - resume
    depends_on:
      - server
  traefik:
    image: traefik:v2.9.10
    restart: unless-stopped
    command:
      - "--log.level=INFO"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--accesslog=true"
      - "--accesslog.filePath=/logs/access.log"
    expose:
      - 80
    environment:
      - VIRTUAL_HOST=resume.mydomain.lcl
      - VIRTUAL_PORT=80
    networks:
      - proxy
      - resume
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik/logs/:/logs/
volumes:
  resume-postgres:
    driver: local
networks:
  resume: {}
  proxy:
    external: true
Hi all,
good thread, with very helpful discussion and attempts to understand the actual problem.
I got the whole thing working behind a Traefik reverse proxy.
Many thanks @grozandrei for your Traefik example. I adjusted the path-strip labels slightly and now it works flawlessly. Even the PDF download works correctly.
Note: make sure that you expose both the client and the server container behind the same (sub)domain. Otherwise you'll receive CORS errors, as the Same-Origin Policy (SOP) prevents access from domain A (client) to domain B (server). So let both run on the same domain and tell your reverse proxy (here Traefik) that the server container handles all /api requests.
My docker-compose.yml file:
version: "3.8"
services:
  postgres:
    image: postgres:alpine
    container_name: rxresume-db
    restart: always
    expose:
      - 5432
    volumes:
      - ${DOCKER_VOLUME_STORAGE:-/mnt/docker-volumes}/rxresume/postgresql:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      start_period: 15s
      interval: 30s
      timeout: 30s
      retries: 3
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    networks:
      - proxy
  server:
    image: amruthpillai/reactive-resume:server-latest
    container_name: rxresume-server
    restart: always
    expose:
      - 3100
    depends_on:
      - postgres
    environment:
      - PUBLIC_URL=https://resume.example.com
      - PUBLIC_SERVER_URL=https://resume.example.com/api # only change the subdomain, leave /api as is
      - SERVER_URL=https://resume.example.com/api # only change the subdomain, leave /api as is
      - PUBLIC_GOOGLE_CLIENT_ID=
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - SECRET_KEY=change-me-to-something-secure
      - POSTGRES_HOST=postgres
      - POSTGRES_PORT=5432
      - POSTGRES_SSL_CERT=
      - JWT_SECRET=change-me-to-something-secure
      - JWT_EXPIRY_TIME=604800
      - GOOGLE_CLIENT_SECRET=
      - GOOGLE_API_KEY=
      - MAIL_FROM_NAME=Reactive Resume
      - MAIL_FROM_EMAIL=noreply@rxresu.me
      - MAIL_HOST=
      - MAIL_PORT=
      - MAIL_USERNAME=
      - MAIL_PASSWORD=
      - STORAGE_BUCKET=
      - STORAGE_REGION=
      - STORAGE_ENDPOINT=
      - STORAGE_URL_PREFIX=
      - STORAGE_ACCESS_KEY=
      - STORAGE_SECRET_KEY=
      - PDF_DELETION_TIME=
    networks:
      - proxy
    labels:
      - traefik.enable=true
      - traefik.http.routers.rxresume-server.rule=Host(`resume.example.com`) && PathPrefix(`/api`) # only change the subdomain, leave /api as is
      - traefik.http.services.rxresume-server.loadbalancer.server.port=3100
      - traefik.docker.network=proxy
      # Part for optional traefik middlewares
      - traefik.http.routers.rxresume-server.middlewares=path-strip # may add local-ipwhitelist@file for access control
      - traefik.http.middlewares.path-strip.stripprefix.prefixes=/api
      - traefik.http.middlewares.path-strip.stripprefix.forceSlash=false
  client:
    image: amruthpillai/reactive-resume:client-latest
    container_name: rxresume-client
    restart: always
    expose:
      - 3000
    depends_on:
      - server
    environment:
      - PUBLIC_URL=https://resume.example.com
      - PUBLIC_SERVER_URL=https://resume.example.com/api # only change the subdomain, leave /api as is
      - SERVER_URL=https://resume.example.com/api # only change the subdomain, leave /api as is
      - PUBLIC_GOOGLE_CLIENT_ID=
    networks:
      - proxy
    labels:
      - traefik.enable=true
      - traefik.http.routers.rxresume-client.rule=Host(`resume.example.com`)
      - traefik.http.services.rxresume-client.loadbalancer.server.port=3000
      - traefik.docker.network=proxy
      # Part for optional traefik middlewares
      #- traefik.http.routers.rxresume-client.middlewares=local-ipwhitelist@file # may enable this middleware for access control
networks:
  proxy:
    external: true
I've added the compose example also to my GitHub repository: https://github.com/Haxxnet/Compose-Examples/tree/main/examples/rxresume
For all people running a Nginx Proxy Manager reverse proxy, check out the great response from @funkybunch: https://github.com/AmruthPillai/Reactive-Resume/issues/721#issuecomment-1405283786
@l4rm4nd
Still not working. The only change I see in your example is the removal of the trailing slash in the path prefix: PathPrefix(`/api`) instead of PathPrefix(`/api/`).
This is my compose file; can you please check what I am missing here?
version: '3.8'
services:
  postgres:
    image: postgres:15.2-alpine
    restart: unless-stopped
    expose:
      - 5432
    volumes:
      - resume-postgres:/var/lib/postgresql/data
    networks:
      - resume
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      start_period: 15s
      interval: 30s
      timeout: 30s
      retries: 3
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres-password
  server:
    image: amruthpillai/reactive-resume:server-3.7.2
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.server.entrypoints=web"
      - "traefik.http.routers.server.rule=Host(`resume.mydomain.lcl`) && PathPrefix(`/api`)"
      - "traefik.http.routers.server.middlewares=path-strip"
      - "traefik.http.services.server.loadbalancer.server.port=3100"
      - "traefik.http.middlewares.path-strip.stripprefix.prefixes=/api"
      - "traefik.http.middlewares.path-strip.stripprefix.forceSlash=false"
    environment:
      - PUBLIC_URL=http://resume.mydomain.lcl
      - PUBLIC_SERVER_URL=http://resume.mydomain.lcl/api
      - SERVER_URL=http://resume.mydomain.lcl/api
      - PUBLIC_GOOGLE_CLIENT_ID=
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres-password
      - SECRET_KEY=secret-key
      - POSTGRES_HOST=postgres
      - POSTGRES_PORT=5432
      - POSTGRES_SSL_CERT=
      - JWT_SECRET=secret-key
      - JWT_EXPIRY_TIME=604800
      - GOOGLE_CLIENT_SECRET=
      - GOOGLE_API_KEY=
      - MAIL_FROM_NAME=Reactive Resume
      - MAIL_FROM_EMAIL=noreply@resume.mydomain.lcl
      - MAIL_HOST=
      - MAIL_PORT=
      - MAIL_USERNAME=
      - MAIL_PASSWORD=
      - STORAGE_S3_ENABLED=false
      - STORAGE_BUCKET=
      - STORAGE_REGION=
      - STORAGE_ENDPOINT=
      - STORAGE_URL_PREFIX=
      - STORAGE_ACCESS_KEY=
      - STORAGE_SECRET_KEY=
    volumes:
      - ./resume/uploads:/app/server/dist/assets/uploads
      - ./resume/exports:/app/server/dist/assets/exports
    expose:
      - 3100
    networks:
      - resume
    depends_on:
      - postgres
  client:
    image: amruthpillai/reactive-resume:client-3.7.2
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.client.entrypoints=web"
      - "traefik.http.routers.client.rule=Host(`resume.mydomain.lcl`)"
      - "traefik.http.services.client.loadbalancer.server.port=3000"
    environment:
      - SIGNUPS_VERIFY=false
      - PUBLIC_URL=http://resume.mydomain.lcl
      - PUBLIC_SERVER_URL=http://resume.mydomain.lcl/api
      - SERVER_URL=http://resume.mydomain.lcl/api
      - PUBLIC_GOOGLE_CLIENT_ID=
    expose:
      - 3000
    networks:
      - resume
    depends_on:
      - server
  traefik:
    image: traefik:v2.10.1
    restart: unless-stopped
    command:
      - "--log.level=INFO"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--accesslog=true"
      - "--accesslog.filePath=/logs/access.log"
    expose:
      - 80
    environment:
      - VIRTUAL_HOST=resume.mydomain.lcl
      - VIRTUAL_PORT=80
    networks:
      - proxy
      - resume
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik/logs/:/logs/
volumes:
  resume-postgres:
    driver: local
networks:
  resume: {}
  proxy:
    external: true
Currently not at home yet, but I will try your example.
Have you tried the latest rxresume images? Have you tried running it behind Traefik with SSL and therefore HTTPS URLs? What exactly is not working? Does the login work, or is that already broken? What do the developer tools and console say? Any errors there?
@grozandrei
As usual, it's DNS causing the errors. I tried your example and defined a locally available domain only. Login and registration work flawlessly, but exporting a PDF fails because the containers cannot resolve the domain specified in the environment variables SERVER_URL and PUBLIC_SERVER_URL. You will see a container log like:
ERROR [ExceptionsHandler] page.goto: net::ERR_NAME_NOT_RESOLVED at http://resume.example.com/asd/my-resume-example/printer?secretKey=c74d1f4a-01a0-4477-af2f-bf8513accb20
Therefore, I pointed the containers at my local DNS server, which resolves the domain to the correct IP address of the server running the Traefik reverse proxy. Afterwards, the PDF export works correctly.
Here is my compose example, which works by pointing the containers at your local DNS server for proper DNS resolution:
version: "3.8"
services:
  postgres:
    image: postgres:alpine
    container_name: rxresume-db
    restart: always
    expose:
      - 5432
    volumes:
      - ${DOCKER_VOLUME_STORAGE:-/mnt/docker-volumes}/rxresume/postgresql:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      start_period: 15s
      interval: 30s
      timeout: 30s
      retries: 3
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    networks:
      - proxy
  server:
    image: amruthpillai/reactive-resume:server-latest
    container_name: rxresume-server
    restart: always
    expose:
      - 3100
    depends_on:
      - postgres
    environment:
      - PUBLIC_URL=http://resume.example.com
      - PUBLIC_SERVER_URL=http://resume.example.com/api # only change the subdomain, leave /api as is
      - SERVER_URL=http://resume.example.com/api # only change the subdomain, leave /api as is
      - PUBLIC_GOOGLE_CLIENT_ID=
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - SECRET_KEY=c74d1f4a-01a0-4477-af2f-bf8513accb20
      - POSTGRES_HOST=postgres
      - POSTGRES_PORT=5432
      - POSTGRES_SSL_CERT=
      - JWT_SECRET=c74d1f4a-01a0-4477-af2f-bf8513accb20
      - JWT_EXPIRY_TIME=604800
      - GOOGLE_CLIENT_SECRET=
      - GOOGLE_API_KEY=
      - MAIL_FROM_NAME=Reactive Resume
      - MAIL_FROM_EMAIL=noreply@rxresu.me
      - MAIL_HOST=
      - MAIL_PORT=
      - MAIL_USERNAME=
      - MAIL_PASSWORD=
      - STORAGE_BUCKET=
      - STORAGE_REGION=
      - STORAGE_ENDPOINT=
      - STORAGE_URL_PREFIX=
      - STORAGE_ACCESS_KEY=
      - STORAGE_SECRET_KEY=
      - PDF_DELETION_TIME=
    dns:
      - 192.168.178.99
    networks:
      - proxy
    labels:
      - traefik.enable=true
      - traefik.http.routers.rxresume-server.rule=Host(`resume.example.com`) && PathPrefix(`/api`) # only change the subdomain, leave /api as is
      - traefik.http.services.rxresume-server.loadbalancer.server.port=3100
      - traefik.docker.network=proxy
      # Part for optional traefik middlewares
      - traefik.http.routers.rxresume-server.middlewares=path-strip # may add local-ipwhitelist@file for access control
      - traefik.http.middlewares.path-strip.stripprefix.prefixes=/api
      - traefik.http.middlewares.path-strip.stripprefix.forceSlash=false
      - traefik.http.routers.rxresume-server.entrypoints=web
  client:
    image: amruthpillai/reactive-resume:client-latest
    container_name: rxresume-client
    dns:
      - 192.168.178.99
    restart: always
    expose:
      - 3000
    depends_on:
      - server
    environment:
      - PUBLIC_URL=http://resume.example.com
      - PUBLIC_SERVER_URL=http://resume.example.com/api # only change the subdomain, leave /api as is
      - SERVER_URL=http://resume.example.com/api # only change the subdomain, leave /api as is
      - PUBLIC_GOOGLE_CLIENT_ID=
    networks:
      - proxy
    labels:
      - traefik.enable=true
      - traefik.http.routers.rxresume-client.rule=Host(`resume.example.com`)
      - traefik.http.services.rxresume-client.loadbalancer.server.port=3000
      - traefik.http.routers.rxresume-client.entrypoints=web
      - traefik.docker.network=proxy
      # Part for optional traefik middlewares
      #- traefik.http.routers.rxresume-client.middlewares=local-ipwhitelist@file # may enable this middleware for access control
  traefik:
    image: traefik:v2.10.1
    container_name: rxresume-traefik
    restart: unless-stopped
    dns:
      - 192.168.178.99
    command:
      - "--log.level=INFO"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - 80:80
      - 8080:8080
    environment:
      - VIRTUAL_HOST=resume.example.com
      - VIRTUAL_PORT=80
    networks:
      - proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
networks:
  proxy:
    external: true
Alternatively, you may use the Docker Compose extra_hosts feature to tell the client and server containers that the domain resolves to your server's IP address. Just specify your domain name resume.mydomain.lcl and the IP address of the server where Traefik is running. In my case, the server had the internal IP address 192.168.178.63:
version: "3.8"
services:
  postgres:
    image: postgres:alpine
    container_name: rxresume-db
    restart: always
    expose:
      - 5432
    volumes:
      - ${DOCKER_VOLUME_STORAGE:-/mnt/docker-volumes}/rxresume/postgresql:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      start_period: 15s
      interval: 30s
      timeout: 30s
      retries: 3
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    networks:
      - proxy
  server:
    image: amruthpillai/reactive-resume:server-latest
    container_name: rxresume-server
    restart: always
    extra_hosts:
      - "resume.mydomain.lcl:192.168.178.63"
    expose:
      - 3100
    depends_on:
      - postgres
    environment:
      - PUBLIC_URL=http://resume.mydomain.lcl
      - PUBLIC_SERVER_URL=http://resume.mydomain.lcl/api # only change the subdomain, leave /api as is
      - SERVER_URL=http://resume.mydomain.lcl/api # only change the subdomain, leave /api as is
      - PUBLIC_GOOGLE_CLIENT_ID=
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - SECRET_KEY=c74d1f4a-01a0-4477-af2f-bf8513accb20
      - POSTGRES_HOST=postgres
      - POSTGRES_PORT=5432
      - POSTGRES_SSL_CERT=
      - JWT_SECRET=c74d1f4a-01a0-4477-af2f-bf8513accb20
      - JWT_EXPIRY_TIME=604800
      - GOOGLE_CLIENT_SECRET=
      - GOOGLE_API_KEY=
      - MAIL_FROM_NAME=Reactive Resume
      - MAIL_FROM_EMAIL=noreply@rxresu.me
      - MAIL_HOST=
      - MAIL_PORT=
      - MAIL_USERNAME=
      - MAIL_PASSWORD=
      - STORAGE_BUCKET=
      - STORAGE_REGION=
      - STORAGE_ENDPOINT=
      - STORAGE_URL_PREFIX=
      - STORAGE_ACCESS_KEY=
      - STORAGE_SECRET_KEY=
      - PDF_DELETION_TIME=
    networks:
      - proxy
    labels:
      - traefik.enable=true
      - traefik.http.routers.rxresume-server.rule=Host(`resume.mydomain.lcl`) && PathPrefix(`/api`) # only change the subdomain, leave /api as is
      - traefik.http.services.rxresume-server.loadbalancer.server.port=3100
      - traefik.docker.network=proxy
      # Part for optional traefik middlewares
      - traefik.http.routers.rxresume-server.middlewares=path-strip # may add local-ipwhitelist@file for access control
      - traefik.http.middlewares.path-strip.stripprefix.prefixes=/api
      - traefik.http.middlewares.path-strip.stripprefix.forceSlash=false
      - traefik.http.routers.rxresume-server.entrypoints=web
  client:
    image: amruthpillai/reactive-resume:client-latest
    container_name: rxresume-client
    restart: always
    extra_hosts:
      - "resume.mydomain.lcl:192.168.178.63"
    expose:
      - 3000
    depends_on:
      - server
    environment:
      - PUBLIC_URL=http://resume.mydomain.lcl
      - PUBLIC_SERVER_URL=http://resume.mydomain.lcl/api # only change the subdomain, leave /api as is
      - SERVER_URL=http://resume.mydomain.lcl/api # only change the subdomain, leave /api as is
      - PUBLIC_GOOGLE_CLIENT_ID=
    networks:
      - proxy
    labels:
      - traefik.enable=true
      - traefik.http.routers.rxresume-client.rule=Host(`resume.mydomain.lcl`)
      - traefik.http.services.rxresume-client.loadbalancer.server.port=3000
      - traefik.http.routers.rxresume-client.entrypoints=web
      - traefik.docker.network=proxy
      # Part for optional traefik middlewares
      #- traefik.http.routers.rxresume-client.middlewares=local-ipwhitelist@file # may enable this middleware for access control
  traefik:
    image: traefik:v2.10.1
    container_name: rxresume-traefik
    restart: unless-stopped
    command:
      - "--log.level=INFO"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - 80:80
      - 8080:8080
    environment:
      - VIRTUAL_HOST=resume.mydomain.lcl
      - VIRTUAL_PORT=80
    networks:
      - proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
networks:
  proxy:
    external: true
@l4rm4nd It's not about DNS; the domain gets resolved, and I can execute a curl from both the client and server containers. I tried plain HTTP first, because something was not working with HTTPS, but it should also work over plain HTTP.
The only log I get is from the server container:
server_1 | server start: [Nest] 66 - 05/02/2023, 10:49:43 AM ERROR [ExceptionsHandler] page.waitForSelector: Timeout 30000ms exceeded.
server_1 | server start: =========================== logs ===========================
server_1 | server start: waiting for locator('html.wf-active') to be visible
server_1 | server start: ============================================================
server_1 | server start: page.waitForSelector: Timeout 30000ms exceeded.
server_1 | server start: =========================== logs ===========================
server_1 | server start: waiting for locator('html.wf-active') to be visible
server_1 | server start: ============================================================
server_1 | server start: at PrinterService.printAsPdf (/app/server/dist/printer/printer.service.js:59:24)
Login works fine; only the export-to-PDF feature fails. I am wondering why there aren't more explanatory logs on the server side that would reveal what is going wrong and causing the timeout exception.
Then I am afraid that I won't be able to help further.
The docker-compose.yml example above with extra_hosts works flawlessly on my end, even with a fictive domain like example.com. I just added a new entry in /etc/hosts to resolve the fictive domain name during testing.
If I use my real domain plus an already existing Traefik instance with HTTPS, everything works fine too. The PDF export completes in less than 5 seconds, as advertised.
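For anyone reproducing the test above: the /etc/hosts entry is simply the fictive domain mapped to the Docker host's IP. The values here are examples from this thread; substitute your own domain and IP:

```
# /etc/hosts on the machine you browse from (and/or via extra_hosts in the containers)
192.168.178.63  resume.example.com
```

This makes the browser and the containers agree on where the domain points, which is exactly what the printer service needs.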
I also cannot get it to work with the above examples using Traefik.
These are the warnings and errors that pop up in the developer console when I try to export a PDF:
The resource <URL> was preloaded using link preload but not used within a few seconds from the window's load event. Please make sure it has an appropriate `as` value and it is preloaded intentionally.
(the same warning is repeated 21 times)
_app-43dbbd28d75c6765.js:195
GET https://resume.mydomain.com/api/printer/name/my-name?lastUpdated=1683091784 500
(anonymous) @ _app-43dbbd28d75c6765.js:195
xhr @ _app-43dbbd28d75c6765.js:195
eK @ _app-43dbbd28d75c6765.js:195
Promise.then (async)
request @ _app-43dbbd28d75c6765.js:195
e2.<computed> @ _app-43dbbd28d75c6765.js:195
(anonymous) @ _app-43dbbd28d75c6765.js:195
i @ _app-43dbbd28d75c6765.js:195
fn @ _app-43dbbd28d75c6765.js:210
u @ _app-43dbbd28d75c6765.js:210
c @ _app-43dbbd28d75c6765.js:210
t.executeMutation @ _app-43dbbd28d75c6765.js:210
(anonymous) @ _app-43dbbd28d75c6765.js:210
Promise.then (async)
t.execute @ _app-43dbbd28d75c6765.js:210
r.mutate @ _app-43dbbd28d75c6765.js:210
Y @ build-90afc5a151a44d66.js:1
await in Y (async)
eU @ framework-e23f030857e925d4.js:9
eH @ framework-e23f030857e925d4.js:9
(anonymous) @ framework-e23f030857e925d4.js:9
re @ framework-e23f030857e925d4.js:9
rn @ framework-e23f030857e925d4.js:9
(anonymous) @ framework-e23f030857e925d4.js:9
oP @ framework-e23f030857e925d4.js:9
eF @ framework-e23f030857e925d4.js:9
ro @ framework-e23f030857e925d4.js:9
nU @ framework-e23f030857e925d4.js:9
nD @ framework-e23f030857e925d4.js:9
_app-43dbbd28d75c6765.js:210 {statusCode: 500, message: 'Internal server error'}
(anonymous) @ _app-43dbbd28d75c6765.js:210
Promise.catch (async)
t.execute @ _app-43dbbd28d75c6765.js:210
r.mutate @ _app-43dbbd28d75c6765.js:210
Y @ build-90afc5a151a44d66.js:1
await in Y (async)
eU @ framework-e23f030857e925d4.js:9
eH @ framework-e23f030857e925d4.js:9
(anonymous) @ framework-e23f030857e925d4.js:9
re @ framework-e23f030857e925d4.js:9
rn @ framework-e23f030857e925d4.js:9
(anonymous) @ framework-e23f030857e925d4.js:9
oP @ framework-e23f030857e925d4.js:9
eF @ framework-e23f030857e925d4.js:9
ro @ framework-e23f030857e925d4.js:9
nU @ framework-e23f030857e925d4.js:9
nD @ framework-e23f030857e925d4.js:9
build-90afc5a151a44d66.js:1 Uncaught (in promise) {statusCode: 500, message: 'Internal server error'}
Y @ build-90afc5a151a44d66.js:1
await in Y (async)
eU @ framework-e23f030857e925d4.js:9
eH @ framework-e23f030857e925d4.js:9
(anonymous) @ framework-e23f030857e925d4.js:9
re @ framework-e23f030857e925d4.js:9
rn @ framework-e23f030857e925d4.js:9
(anonymous) @ framework-e23f030857e925d4.js:9
oP @ framework-e23f030857e925d4.js:9
eF @ framework-e23f030857e925d4.js:9
ro @ framework-e23f030857e925d4.js:9
nU @ framework-e23f030857e925d4.js:9
nD @ framework-e23f030857e925d4.js:9
I've just copied the above docker-compose example (the one with extra_hosts) again and pasted it into a new, blank Linux VM with Docker. The only changes I made were the domain name and the extra_hosts IP address. As soon as the containers were up, I registered a new user account, logged in, and created a new resume. I also preloaded the resume with the available example data. Hitting the export button generated the PDF promptly.
I even tried accessing the traefik instance from another computer on the LAN. That also worked flawlessly, and I could generate new resumes and export them as PDF.
I've used multiple browsers (MS Edge, Mozilla Firefox, and Google Chrome). With each browser it just worked. Not sure how to troubleshoot further, sorry.
Make sure to create a new resume as soon as you've changed some configs. It helped me during initial troubleshooting: older resumes kept hitting errors while new ones did not.
> Make sure to create a new resume as soon as you've changed some configs. It helped me during initial troubleshooting as older resumes were hitting errors again and new ones not.
This was the ticket for me. I recreated my resume from scratch by copying and pasting the info from my existing one into a new template, and PDF export started working without the need for dns or extra_hosts entries. Duplicating or uploading a saved JSON did not work; I had to recreate from a blank template. I was also able to remove the extra SERVER_URL from the environment variables.
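When the export fails, one quick way to see the raw server response instead of the minified client-side stack trace is to call the printer endpoint directly. The URL pattern below follows the failing GET shown in the console log above; the domain, the name segment, and the lastUpdated value are placeholders you would replace with your own:

```shell
# Hypothetical direct request to the printer endpoint (pattern taken from
# the failing GET in the console log); substitute your own domain and slug.
curl -sS -D /dev/stderr -o /tmp/resume.pdf \
  "https://resume.mydomain.com/api/printer/name/my-name?lastUpdated=1683091784"

# A working export starts with the PDF magic bytes; an error returns a
# small JSON body like {"statusCode":500,...} instead.
head -c 5 /tmp/resume.pdf
```

If the response is the 500 JSON body, the problem is on the server side (typically the headless browser timing out on `html.wf-active`), not in the web client.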
Describe the bug
PDF download gives a 500 Internal Server Error.

To Reproduce

Expected behavior
The PDF should be created and downloaded.

Additional context