JohanE67 opened this issue 6 months ago
Hi @JohanE67
thank you for raising this issue.
The gallery is a Single Page Application (SPA) with URL-based routing. So for https://[my-domain]/homegallery/years/2015 the server must know that /homegallery is a static prefix which is excluded from the SPA route /years/2015.
To handle this, the server command has an appropriate parameter, --base-path:
$ ./gallery.js server -h
gallery.js server
Start web server
Options:
--version Show version number [boolean]
-l, --log-level Console log level
[string] [choices: "trace", "debug", "info", "warn", "error", "silent"]
--log-json-format Log output format in json [boolean]
-L, --log-file Log file
--log-file-level Log file level
[string] [choices: "trace", "debug", "info", "warn", "error"] [default:
"debug"]
-c, --config Configuration files
-s, --storage Storage directory
-d, --database Database filename
-e, --events Events filename
-H, --host Listening host IP address
[string] [default: 0.0.0.0]
-p, --port Listening TCP port [number] [default: 3000]
-K, --key SSL key file
-C, --cert SSL certificate file
-b, --base-path Base path of static page. e.g. "/gallery"
[string] [default: /]
-U, --user User and password for basic authentication.
Format is username:password. Password schema
can be {SHA}, otherwise it is plain [array]
-R, --ip-whitelist-rule, --rule IP whitelist rule in format type:network.
E.g. allow:192.168.0/24 or deny:all. First
matching rule wins. [array]
--open-browser Open browser on server start
[boolean] [default: true]
--remote-console-token Enable remote console with given debug auth
token [string]
--watch-sources Watch source files for changes
[boolean] [default: true]
-h, --help Show help [boolean]
--level [default: debug]
With your Docker setup you can add an environment variable GALLERY_BASE_PATH set to /homegallery.
When the base path is set, you always need to access the gallery below /homegallery. So it is not possible to have an nginx proxy with /homegallery in front of it and, in parallel, access the gallery server directly within your home network with base path /.
So, I added the GALLERY_BASE_PATH variable and removed the rewrite /homegallery(.*) $1 break; from the proxy configuration. This does not solve the problem, however.
When I now go to https://[my-domain]/homegallery/years/2015, the log shows:
[2024-01-08 07:53:21.755]: server.request 304 GET /homegallery/years/2015 9ms
But I still see only "Your Home Gallery is loading..."
When I go to http://192.168.1.10:3030/homegallery/years/2015, the log shows:
[2024-01-08 07:57:08.668]: server.request 304 GET /homegallery/years/2015 7ms
[2024-01-08 07:57:08.744]: server.api.events Add new client 39e059c3-3978-4872-99a8-443a6a08e3ff
[2024-01-08 07:57:08.751]: server.api.events Events file /data/config/events.db does not exist yet. It will be created on the first manual tag
[2024-01-08 07:57:08.754]: server.request warn 404 GET /api/events.json 7ms
[2024-01-08 07:57:08.767]: server.request 304 GET /api/database/tree/root.json 6ms
However, now the initial page is shown, not the year. Also, the URL in the address bar switches back to plain http://192.168.1.10:3030/. Clicking through to the year, it appears that http://192.168.1.10:3030/years/2015 still works.
So it looks like the base-path directive is being ignored??
Hi @JohanE67
I've checked a simple example and it seems that GALLERY_BASE_PATH requires a trailing slash. The base path /homegallery/ ensures that every web resource gets the base path as prefix. Further, the base path must be in sync with the nginx.conf location directive.
Here is my working configuration:
Browser --> Nginx: http://localhost:3002/homegallery/index.html
Nginx --> gallery: http://gallery:3000/index.html
docker-compose.yml:
version: "3.9"
services:
  api:
    # custom build via
    #build: packages/api-server
    image: xemle/home-gallery-api-server
    environment:
      # TensorflowJS backends
      # - cpu: slowest and best support
      # - wasm: good performance for arm64 and amd64 platforms
      # - node: best performance on amd64 platform
      #- BACKEND=cpu
      - BACKEND=wasm
      #- BACKEND=node
  gallery:
    # custom build via
    #build: .
    image: xemle/home-gallery
    environment:
      - GALLERY_API_SERVER=http://api:3000
      - GALLERY_API_SERVER_CONCURRENT=1 # for SoC devices like Raspberry Pi. Use 5 otherwise
      - GALLERY_API_SERVER_TIMEOUT=60 # for SoC devices like Raspberry Pi. Use 30 otherwise
      #- GALLERY_USE_NATIVE=ffprobe,ffmpeg,vipsthumbnail # On issues with sharp resizer
      - GALLERY_OPEN_BROWSER=false
      # Use polling for safety of possible network mounts. Try 0 to use inotify via fs.watch
      - GALLERY_WATCH_POLL_INTERVAL=300
      - GALLERY_BASE_PATH=/homegallery/
    volumes:
      - ./data:/data
      # Mount your media directories below /data
      - ${HOME}/Pictures:/data/Pictures
    ports:
      - "3000:3000"
    user: "${CURRENT_USER}"
    entrypoint: ['node', '/app/gallery.js']
    command: ['run', 'server']
  nginx:
    image: nginx:latest
    ports:
      - "3002:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./data/html:/usr/share/nginx/html
    restart: always
nginx.conf:
events {
}

http {
  server {
    root /usr/share/nginx/html;

    location /homegallery {
      # Set to external URL
      return 307 http://localhost:3002/homegallery/;
    }

    location /homegallery/ {
      proxy_pass http://gallery:3000/;
    }
  }
}
after lots and lots and LOTS of trying, I cannot for the life of me get this to work. I'm using HomeGallery through Docker with my own local API instance. Reverse proxying is handled by Caddy, though in my testing I found that issues also arise when accessing the service directly via IP:3000. Configuration is done via the gallery.config.yml file mounted in /data/config/. Here's my server block from this file:
server:
  #port: 3000
  #host: '0.0.0.0'
  # security configuration for https
  # key: '{configDir}/server.key'
  # cert: '{configDir}/server.crt'
  #openBrowser: true
  basePath: /gallery/
  watchSources: true
I have not found a way to confirm that the container is properly loading this configuration, but reading the file from a shell inside the container shows the correct settings. I have also not found a way to read the HTTP access logs of the app. But I have the access logs from my browser, when accessing the website with this config (presumably) loaded.
When loading server:3000/ without a path, I land on a "Your Home Gallery is loading..." screen that never goes away. Resources loaded are
This looks promising, but something in the JS must not properly load the following required resources, so I get stuck on the loading screen.
When loading server:3000/gallery/ (where I would expect a working app), I am immediately redirected to server:3000/, which then loads all the UI and images, and I have a working app in front of me. Resources loaded (excerpt) are
In the entire list, I cannot find a single entry that was loaded from /gallery/. Not the images, not the static files, not the API calls. It all comes from the root, where I cannot even make contact when loading it directly. How can this be? I've spent hours trying to fix it, but cannot pinpoint the source of the issue. Note that whenever I tested this, the cache was disabled through the developer tools, so I am discarding "cached file locations" or anything of the sort as a possible root cause.
Thanks in advance for any help you can provide.
Hi @bytebone
thank you for your input and your patience so far. I am sorry that it has not been successful until now.
I'm using homegallery through docker with my own local API instance. Reverse Proxy is handled by Caddy
I tried to reproduce your setup. From your comment it is not 100% clear how your setup is created. So HomeGallery server, local API and Caddy are running inside Docker via docker compose? I would like to reproduce this setting with Caddy - I have never played around with Caddy but heard it should be awesome. Would you mind providing your docker-compose.yml and Caddy configuration?
Can you reproduce my previously provided docker-compose.yml with nginx? Is there a reason to use Caddy instead of nginx?
though in my testing I found that issues also arise when accessing the service directly via IP:3000
You can only serve the server with one defined base path. Either / as default or /gallery/, but not both at the same time, due to the SPA constraints and the <base /> HTML tag which is not served dynamically.
If you use a base path, the browser sends a request to http://<outer host>/gallery/ towards the proxy (Caddy, nginx, you name it). The proxy should strip the base path /gallery/ and forward the request to the internal service http://<container ip>:3000. The HTML response contains the <base href="/gallery/" /> tag, which is honored by subsequent requests, e.g. http://<outer host>/gallery/App.[...].js, which should reach the container server again as http://<container ip>:3000/App.[...].js through the proxy.
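To illustrate the mechanism, the index.html served with a base path configured would contain markup along these lines (a sketch, not the exact page HomeGallery ships):

```
<!-- Sketch: page served when the base path is /gallery/ -->
<head>
  <!-- All relative URLs on this page resolve against /gallery/ -->
  <base href="/gallery/" />
</head>
<body>
  <!-- The browser requests this as http://<outer host>/gallery/App.js,
       which the proxy forwards upstream as /App.js -->
  <script src="App.js"></script>
</body>
```

This is why the base path cannot be toggled per request: the <base> tag is baked into the served page.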
I tested the http proxy settings via curl and checked if the response is correct.
If you provide your docker-compose.yml
I can have a look...
I have also not found a way to read the HTTP access logs of the app
In the mounted /data/config volume you will find gallery.log, which should contain request information like this (piped through jq):
{
  "level": 30,
  "time": 1651569028169,
  "pid": 51670,
  "hostname": "....",
  "module": "server.request",
  "req": {
    "id": 9,
    "method": "GET",
    "url": "/App.js",
    "query": {},
    "params": {},
    "headers": {
      "host": "localhost:3000",
      "user-agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:99.0) Gecko/20100101 Firefox/99.0",
      ...
    },
    "remoteAddress": "127.0.0.1",
    "remotePort": 58016
  },
  "res": {
    "statusCode": 200,
    "headers": {
      ...
    }
  },
  "responseTime": 40,
  "msg": "200 GET /App.js 40ms"
}
Thank you for the elaborate and prompt response.
The proxy should strip the base path /gallery/ and forwards the request to the internal service
This, combined with the reminder that when using a base path I kind of have to access the app through the reverse proxy instead of the local IP, gave me the "click" I needed to make it work. I've updated my Caddyfile to remove the basePath before handing requests to the app, and voilà! It works.
I'll still try to answer your questions as best as I can:
Is there a reason to use Caddy instead of nginx?
I've run NPM (Nginx Proxy Manager) for a while and am unhappy with its high memory usage. Plus, Caddy has been on my list of things to try for a while, so this second server of mine is the perfect playground to do so.
your comment it is not 100% clear how your setup is created
Home Gallery app and API, as well as Caddy, are all running on Docker. The two HG services are in one "HG-exclusive" network; the frontend app additionally joins the existing, Caddy-managed caddy network.
compose.yml:
version: "3.9"
services:
  api:
    image: xemle/home-gallery-api-server
    container_name: homegallery-api
    environment:
      - BACKEND=node
    networks:
      - internal
  app:
    image: xemle/home-gallery
    container_name: homegallery-app
    environment:
      - GALLERY_API_SERVER=http://api:3000
      - GALLERY_API_SERVER_CONCURRENT=5 # for SoC devices like Raspberry Pi. Use 5 otherwise
      - GALLERY_API_SERVER_TIMEOUT=30 # for SoC devices like Raspberry Pi. Use 30 otherwise
      - GALLERY_OPEN_BROWSER=false
      - GALLERY_WATCH_POLL_INTERVAL=0
    volumes:
      - /mnt/user/appdata/homegallery:/data
      - /mnt/user/pictures1:/data/pictures/source1:ro
      - /mnt/user/pictures2:/data/pictures/source2:ro
    ports:
      - "3000:3000"
    networks:
      - internal
      - caddy
    entrypoint: ['node', '/app/gallery.js']
    command: ['run', 'server']
networks:
  internal:
    name: hg_internal
  caddy:
    external: true
Caddyfile (before fixing it as described above):
http://my.domain.com {
  redir /gallery /gallery/
  handle /gallery/* {
    reverse_proxy http://homegallery-app:3000
  }
}
To fix the issue, replace the handle directive with handle_path. The former passes the path on to the container (used in the case of *arr apps, for example), while the latter strips it.
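Putting it together, the corrected Caddyfile looks something like this (a sketch based on the block above; handle_path strips the matched /gallery prefix before proxying):

```
http://my.domain.com {
  redir /gallery /gallery/
  # handle_path forwards /gallery/App.js upstream as /App.js
  handle_path /gallery/* {
    reverse_proxy http://homegallery-app:3000
  }
}
```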
(Sidenote: Caddy hands all traffic to a Cloudflare tunnel, which handles SSL encryption at the CF edge, which is why I'm completely skipping encryption on my server)
Also good to know that this gallery.log has the access logs. I saw it plenty of times when editing the config, but never had the thought to actually look inside.
Hi @bytebone I am very happy that you were able to solve your problem. Thank you for sharing your insights.
Have fun with your digital memories!
Unfortunately, I'm still unable to get it to work. Even using the nginx proxy setup you provided, I'm getting only the login form and then the "Your Home Gallery is loading..." page. Could be something in the "swag" setup I'm using, but I'm unable to figure out what it is...
One thing I notice in the logs is that, upon a request through the proxy I now get this:
[2024-01-10 15:53:18.969]: server.request 200 GET / 21ms
[2024-01-10 15:53:19.021]: server.request 200 GET / 35ms
[2024-01-10 15:53:19.029]: server.request 200 GET / 37ms
[2024-01-10 15:53:19.094]: server.request 200 GET / 32ms
[2024-01-10 15:53:19.103]: server.request 200 GET / 38ms
So it looks as if the calls are being made, but without the items to "GET". I'm giving up for now, for lack of time. Will pick up again later.
Hi @JohanE67
I am sorry that your setup is not working yet. Would you mind providing your configs for nginx, docker compose, and your gallery.config.yml with the required sections for the gallery server?
In your first post you have the nginx location directive:
location ^~ /homegallery/ {
Maybe the ^~ between location and /homegallery/ is important here. In my provided example it is only location /homegallery/ {.
[2024-01-10 15:53:18.969]: server.request 200 GET / 21ms
If all requests lead to a GET /, there is a configuration problem in nginx.
You can check your settings by calling https://[my-domain]/homegallery/api/database/tree/root.json in a browser; this should lead to the following log:
[2024-01-10 15:53:18.969]: server.request 200 GET /api/database/tree/root.json 21ms
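The same check can be run from the command line with curl (a sketch; substitute your real domain):

```
# Request the database root through the proxy. A correct setup returns JSON
# and produces a "GET /api/database/tree/root.json" entry in gallery.log.
curl -i https://[my-domain]/homegallery/api/database/tree/root.json

# For comparison, bypass the proxy and query the gallery server directly:
curl -i http://192.168.1.10:3030/api/database/tree/root.json
```

If the proxied request logs only GET /, the proxy is not passing the stripped path through.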
As I mentioned in the first post, I'm using the LinuxServer/Swag container. The main nginx config is this:
## Version 2022/08/16 - Changelog: https://github.com/linuxserver/docker-baseimage-alpine-nginx/commits/master/root/defaults/nginx/nginx.conf.sample

### Based on alpine defaults
# https://git.alpinelinux.org/aports/tree/main/nginx/nginx.conf?h=3.15-stable

user abc;

# Set number of worker processes automatically based on number of CPU cores.
include /config/nginx/worker_processes.conf;

# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;

# Configures default error logger.
error_log /config/log/nginx/error.log;

# Includes files with directives to load dynamic modules.
include /etc/nginx/modules/*.conf;

# Include files with config snippets into the root context.
include /etc/nginx/conf.d/*.conf;

events {
    # The maximum number of simultaneous connections that can be opened by
    # a worker process.
    worker_connections 1024;
}

http {
    # Includes mapping of file name extensions to MIME types of responses
    # and defines the default type.
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Name servers used to resolve names of upstream servers into addresses.
    # It's also needed when using tcpsocket and udpsocket in Lua modules.
    #resolver 1.1.1.1 1.0.0.1 2606:4700:4700::1111 2606:4700:4700::1001;
    include /config/nginx/resolver.conf;

    # Don't tell nginx version to the clients. Default is 'on'.
    server_tokens off;

    # Specifies the maximum accepted body size of a client request, as
    # indicated by the request header Content-Length. If the stated content
    # length is greater than this size, then the client receives the HTTP
    # error code 413. Set to 0 to disable. Default is '1m'.
    client_max_body_size 0;

    # Sendfile copies data between one FD and other from within the kernel,
    # which is more efficient than read() + write(). Default is off.
    sendfile on;

    # Causes nginx to attempt to send its HTTP response head in one packet,
    # instead of using partial frames. Default is 'off'.
    tcp_nopush on;

    # all ssl related config moved to ssl.conf
    include /config/nginx/ssl.conf;

    # Enable gzipping of responses.
    #gzip on;

    # Set the Vary HTTP header as defined in the RFC 2616. Default is 'off'.
    gzip_vary on;

    # Helper variable for proxying websockets.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    # Sets the path, format, and configuration for a buffered log write.
    access_log /config/log/nginx/access.log;

    # Includes virtual hosts configs.
    include /etc/nginx/http.d/*.conf;
    include /config/nginx/site-confs/*.conf;
}

daemon off;
pid /run/nginx.pid;
The site config that is included in the above file (/config/nginx/site-confs/default.conf) is:
## Version 2022/10/03 - Changelog: https://github.com/linuxserver/docker-swag/commits/master/root/defaults/nginx/site-confs/default.conf.sample

# redirect all traffic to https
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        return 301 https://$host$request_uri;
    }
}

# main server block
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    server_name _;

    root /config/www;
    index index.html index.htm index.php;

    # enable subfolder method reverse proxy confs
    include /config/nginx/proxy-confs/*.subfolder.conf;

    # enable for ldap auth (requires ldap-location.conf in the location block)
    #include /config/nginx/ldap-server.conf;

    # enable for Authelia (requires authelia-location.conf in the location block)
    #include /config/nginx/authelia-server.conf;

    location / {
        # enable for basic auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable for ldap auth (requires ldap-server.conf in the server block)
        #include /config/nginx/ldap-location.conf;

        # enable for Authelia (requires authelia-server.conf in the server block)
        #include /config/nginx/authelia-location.conf;

        try_files $uri $uri/ /index.html /index.php$is_args$args =404;
    }

    location ~ ^(.+\.php)(.*)$ {
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }

    # deny access to .htaccess/.htpasswd files
    location ~ /\.ht {
        deny all;
    }
}

# enable subdomain method reverse proxy confs
include /config/nginx/proxy-confs/*.subdomain.conf;

# enable proxy cache for auth
proxy_cache_path cache/ keys_zone=auth_cache:10m;
The proxy configs are included via the file above. For Home Gallery I created config/nginx/proxy-confs/homegallery.subfolder.conf, which currently looks like this (following the suggestions from your first response):
location /homegallery {
    return 307 $scheme://$host/homegallery/;
}

location /homegallery/ {
    #include /config/nginx/proxy.conf;
    #include /config/nginx/resolver.conf;

    set $upstream_app home-gallery;
    set $upstream_port 3000;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port/;
}
So I changed 301 to 307 in return 307 $scheme://$host/homegallery/;, removed the ^~ modifier, and removed the two includes. When I add the includes, proxying does not work at all...
So I think that basically I have the same setup as you. Still, it's not working. When I try the call to https://[my-domain]/homegallery/api/database/tree/root.json, I get this in the log:
[2024-01-13 09:01:13.126]: server.request 200 GET / 37ms
[2024-01-13 09:01:13.204]: server.request 200 GET / 54ms
[2024-01-13 09:01:13.220]: server.request 200 GET / 63ms
[2024-01-13 09:01:13.262]: server.request 200 GET / 29ms
[2024-01-13 09:01:13.272]: server.request 200 GET / 42ms
Adding the trailing slash fixed the issue for me.
The documentation should be updated, I think... Just putting an example in the YML file with the trailing slash would be enough.
My NGINX config is:
location /gallery/ {
    proxy_pass http://127.0.0.1:3022/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
and in the config yml I have
BasePath: /gallery/
I am running Home Gallery in a Docker container setup, and within my network it's all working as expected.
Now, for remote access, I am attempting to add Home Gallery to my remote proxy setup that I have running for several other web apps. I can see that the proxying is working, in the sense that the requests arrive at the home-gallery container. However, it stops at the initial "GET" request and nothing ever appears in the browser but the black screen with the house and the text "Your Home Gallery is loading...".
Here's my docker-compose.yml:
And here's the relevant proxy setup (this is using the LinuxServer/Swag container; filename "proxy-confs/homegallery.subfolder.conf"):
When looking in the home-gallery container log when attempting to open the gallery from the local network (http://192.168.1.10:3030), I see this:
And the photostream appears in the browser.
When going through the reverse proxy (https://[my-domain]/homegallery/), I see only this:
[2024-01-04 14:15:33.587]: server.request 200 GET / 37ms
And am stuck at the "loading..." page. The "rewrite" in the proxy conf appears to be working correctly, for when I retrieve a specific year, for example (https://[my-domain]/homegallery/years/2015), I will see this in the log:
[2024-01-04 14:19:56.539]: server.request 304 GET /years/2015 10ms
And then nothing... While this works fine when running on the local network. I hope someone can give a hint on where this is going wrong. Thanks, Johan