neuronflow closed this issue 1 year ago
I experimented with setting the file access rights of the datasets folder to 777 and with changing the file ownership, but with no success.
in the browser console I found the following:
```
Requesting data from layer "color" failed. Some rendered areas might remain empty. Retrying... Failed to fetch - Url: https://..../datasets/iterm/C00/layers/color/data?token=EdackSBoGkD9tG2IZoQG-Q
n @ vendors~main.js?nocache=43f7cbf99e49d0680efed3290f3e61754245be4c:2
vendors~main.js?nocache=43f7cbf99e49d0680efed3290f3e61754245be4c:2 TypeError: Failed to fetch
    at Function.h (fetch_buffer_with_headers.worker.worker.js:1:7721)
    at r (fetch_buffer_with_headers.worker.worker.js:1:3063)
n @ vendors~main.js?nocache=43f7cbf99e49d0680efed3290f3e61754245be4c:2
vendors~main.js?nocache=43f7cbf99e49d0680efed3290f3e61754245be4c:2 Requesting data from layer "color" failed. Some rendered areas might remain empty. Retrying... Failed to fetch - Url: https://http://.....de//data/datasets/iterm/C00/layers/color/data?token=EdackSBoGkD9tG2IZoQG-Q
n @ vendors~main.js?nocache=43f7cbf99e49d0680efed3290f3e61754245be4c:2
vendors~main.js?nocache=43f7cbf99e49d0680efed3290f3e61754245be4c:2 TypeError: Failed to fetch
    at Function.h (fetch_buffer_with_headers.worker.worker.js:1:7721)
    at r (fetch_buffer_with_headers.worker.worker.js:1:3063)
n @ vendors~main.js?nocache=43f7cbf99e49d0680efed3290f3e61754245be4c:2
vendors~main.js?nocache=43f7cbf99e49d0680efed3290f3e61754245be4c:2 Requested api version: 3 which is the latest version.
view#994,1764,561,0,27.622:1 Access to XMLHttpRequest at 'https://api.airbrake.io/api/v3/projects/insert-valid-projectID-here/notices?key=insert-valid-projectKey-here' from origin 'http://......de' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
api.airbrake.io/api/v3/projects/insert-valid-projectID-here/notices?key=insert-valid-projectKey-here:1 Failed to load resource: net::ERR_FAILED
```
I notice these are HTTPS requests; currently my server is only reachable via VPN and does not have an SSL certificate.
Hi @neuronflow!
> I am trying to deploy webknossos on a local VM [...]
So, the docker image is running on localhost, correct?
> in the browser console I found the following:

Can you switch to the network tab of the browser dev tools? There you should see the failing requests, including status codes and maybe an error message. A screenshot would be helpful. It would also be interesting to see whether the correct base URL is used. I think the default domain for requesting the data is localhost, but in case you started the docker image on another server, this will be incorrect.
Hi, thanks for the quick response!
I find the https://http://adress requests quite weird (see above)?
My docker-compose: I use Caddy as a reverse proxy, which will also handle the SSL certificates once the server goes online.
This is pretty much the standard docker-compose proposed in the tutorial; I just replaced nginx with Caddy. Further, I had to adjust the user id. For postgres I needed to use `ports: ["5432"]`; with the default mapping it would always complain about the port already being taken (I tried different host ports).
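As far as I understand the ports short syntax, the difference is this (a sketch, only the postgres service is shown; omitting the host port makes Docker pick a free ephemeral host port, which avoids the "port already taken" conflict):

```yaml
# Sketch: two ways to publish the postgres port in docker-compose.
services:
  postgres:
    ports:
      # Fixed mapping: fails if host port 5432 is already in use
      # - "5432:5432"
      # Container port only: Docker chooses a free ephemeral host port
      - "5432"
```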
```yaml
# run this with: CURRENT_UID=$(id -u):$(id -g) docker-compose up
version: "3"
services:
  ##################################################################################################
  caddy:
    restart: unless-stopped
    # docker run -it -p 80:80 -p 443:443 -p 2019:2019 --rm --name perception_caddy perception_caddy
    image: caddy:alpine
    container_name: caddy
    hostname: caddy
    user: root
    # user: ${CURRENT_UID}
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    extra_hosts:
      - dockerhost:${DOCKERHOST}
    volumes:
      # Note: as of the latest caddy/caddy images, these locations are now /config/caddy and
      # /data/caddy. See https://github.com/caddyserver/caddy-docker#️-a-note-about-persisted-data
      # - "./caddy_secrets/data_lets_encrypt_storage:/data"
      # - "./caddy_secrets/config_storage:/config"
      - $PWD/caddy/caddy_file/Caddyfile:/etc/caddy/Caddyfile
      - $PWD/site:/srv
      - $PWD/caddy/caddy_data:/data
      - $PWD/caddy/caddy_config:/config
    # sysctls:
    #   - net.ipv4.ip_unprivileged_port_start=0
    # cap_add:
    #   - CAP_NET_BIND_SERVICE
  ##################################################################################################
  webknossos:
    restart: unless-stopped
    image: scalableminds/webknossos:${DOCKER_TAG:-22.05.1}
    ports:
      # - "127.0.0.1:9000:9000"
      - "9000:9000"
    depends_on:
      postgres:
        condition: service_healthy
      fossildb:
        condition: service_healthy
      redis:
        condition: service_healthy
    command:
      - -Dconfig.file=conf/application.conf
      - -Djava.net.preferIPv4Stack=true
      - -Dtracingstore.fossildb.address=fossildb
      - -Dtracingstore.redis.address=redis
      - -Ddatastore.redis.address=redis
      - -Dslick.db.url=jdbc:postgresql://postgres/webknossos
      - -DwebKnossos.sampleOrganization.enabled=false
      - -Dtracingstore.publicUri=https://${PUBLIC_HOST}
      - -Ddatastore.publicUri=https://${PUBLIC_HOST}
    volumes:
      - ./webk_binaries:/webknossos/binaryData
    environment:
      - POSTGRES_URL=jdbc:postgresql://postgres/webknossos
      - VIRTUAL_HOST=${PUBLIC_HOST}
      - LETSENCRYPT_HOST=${PUBLIC_HOST}
    user: ${USER_UID:-1001}:${USER_GID:-1001}

  # Postgres
  postgres:
    restart: unless-stopped
    image: postgres:10-alpine
    environment:
      POSTGRES_DB: webknossos
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -h 127.0.0.1 -p 5432"]
      interval: 2s
      timeout: 5s
      retries: 30
    ports:
      # - "127.0.0.1:5132:5432"
      - "5432"
    volumes:
      - "./persistent/postgres:/var/lib/postgresql/data/"

  psql:
    restart: unless-stopped
    extends: postgres
    command: psql -h postgres -U postgres webknossos
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      PGPASSWORD: postgres

  # FossilDB
  fossildb:
    restart: unless-stopped
    image: scalableminds/fossildb:master__410
    command:
      - fossildb
      - -c
      - skeletons,skeletonUpdates,volumes,volumeData,volumeUpdates
    user: 0:0
    volumes:
      - "./persistent/fossildb/data:/fossildb/data"
      - "./persistent/fossildb/backup:/fossildb/backup"

  # Redis
  redis:
    restart: unless-stopped
    image: redis:5.0
    command:
      - redis-server
    healthcheck:
      test:
        - CMD
        - bash
        - -c
        - "exec 3<> /dev/tcp/127.0.0.1/6379 && echo PING >&3 && head -1 <&3 | grep PONG"
      timeout: 1s
      interval: 5s
      retries: 10
```
In the console it seems that psql exits from time to time:
```
pwild_website_psql_1 exited with code 0
webknossos_1 | 2023-01-30 10:08:37,295 [INFO] com.scalableminds.webknossos.datastore.services.DataSourceService - Finished scanning inbox (binaryData): 3 active, 0 inactive
webknossos_1 | 2023-01-30 10:08:37,308 [INFO] controllers.WKRemoteDataStoreController - Received dataset list from datastore 'localhost': 3 active, 0 inactive datasets
pwild_website_psql_1 exited with code 0
```
As requested, the network tab:
> I find the https://http://adress requests quite weird (see above)?
I'm not sure what you are referring to with "see above"; I don't see https://http://adress mentioned anywhere else.
Could you click on a failed request in the network tab? Then the full URL should be shown (I suspect the domain is incorrect). This URL needs to be accessible from your browser, otherwise the data loading won't work.
Also, what is the PUBLIC_HOST variable in your setup? It should not include http:// or https:// prefixes.
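To illustrate what a valid value looks like, here is a small sketch (the `check_public_host` helper is made up for illustration, not part of webKnossos): PUBLIC_HOST should be a bare hostname, with no scheme and no trailing slash.

```shell
# Hypothetical helper: flag PUBLIC_HOST values that contain a scheme
# prefix or a trailing slash, both of which break URL construction.
check_public_host() {
  case "$1" in
    http://*|https://*) echo "invalid: remove the http(s):// prefix" ;;
    */)                 echo "invalid: remove the trailing slash" ;;
    *)                  echo "ok" ;;
  esac
}

check_public_host "http://itmrdw1.helmholtz-muenchen.de/"  # -> invalid: remove the http(s):// prefix
check_public_host "pwild.helmholtz-munich.de"              # -> ok
```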
Here you can also see the https://http://adress, like in the logs above:
- https://http//itmrdw1.helmholtz-muenchen.de <- this is the server (note the combined https, http)
- Origin: http://pwild.helmholtz-munich.de <- this is the domain
Currently, while developing, these are only reachable via VPN. As long as that is the case, I cannot obtain a Let's Encrypt SSL certificate, as far as I understand. Further, we have a second domain: pwild.helmholtz-muenchen.de
Currently I had this, which was obviously wrong: `export PUBLIC_HOST=http://itmrdw1.helmholtz-muenchen.de/`
I have now set it to: `export PUBLIC_HOST=pwild.helmholtz-munich.de`
resulting in:
I notice the request goes to https://...
Side question: How should I set PUBLIC_HOST when dealing with multiple domains, or should one domain redirect to the other?
The easiest way to set up webKnossos is with HTTPS, because browser security mechanisms require HTTPS when fetching from remote domains. That is why the default installation bundles nginx with Let's Encrypt and presets the configuration to HTTPS.
If you need to use HTTP, all of webKnossos needs to run on one domain. Also, you need to change these lines in the docker-compose to http: https://github.com/scalableminds/webknossos/blob/master/tools/hosting/docker-compose.yml#L23-L24
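Sketched against the compose file above, the change would look roughly like this (only the two affected lines of the webknossos command list; everything else stays unchanged):

```yaml
command:
  # ... other -D options unchanged ...
  # For a plain-HTTP, single-domain setup, the public URIs must use http://
  - -Dtracingstore.publicUri=http://${PUBLIC_HOST}
  - -Ddatastore.publicUri=http://${PUBLIC_HOST}
```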
If this becomes too much trouble, you can try out webKnossos on webknossos.org.
Thanks, this worked!
I agree, and it makes sense that browsers require SSL. I will definitely work with SSL for deployment.
webknossos.org is, unfortunately, no option for us as the datasets are too large.
```
pwild_website_psql_1 exited with code 0
webknossos_1 | 2023-01-31 22:42:54,891 [INFO] com.scalableminds.webknossos.datastore.services.DataSourceService - Finished scanning inbox (binaryData): 3 active, 0 inactive
webknossos_1 | 2023-01-31 22:42:54,901 [INFO] controllers.WKRemoteDataStoreController - Received dataset list from datastore 'localhost': 3 active, 0 inactive datasets
pwild_website_psql_1 exited with code 0
webknossos_1 | 2023-01-31 22:43:54,912 [INFO] com.scalableminds.webknossos.datastore.services.DataSourceService - Finished scanning inbox (binaryData): 3 active, 0 inactive
webknossos_1 | 2023-01-31 22:43:54,922 [INFO] controllers.WKRemoteDataStoreController - Received dataset list from datastore 'localhost': 3 active, 0 inactive datasets
pwild_website_psql_1 exited with code 0
webknossos_1 | 2023-01-31 22:44:54,930 [INFO] com.scalableminds.webknossos.datastore.services.DataSourceService - Finished scanning inbox (binaryData): 3 active, 0 inactive
webknossos_1 | 2023-01-31 22:44:54,937 [INFO] controllers.WKRemoteDataStoreController - Received dataset list from datastore 'localhost': 3 active, 0 inactive datasets
pwild_website_psql_1 exited with code 0
```
Are these regular psql exits normal?
> webknossos.org is, unfortunately, no option for us as the datasets are too large.
In that case, an option would be to set up a static file server (nginx, apache, caddy) to serve your data as OME-Zarr. webknossos.org and other tools will then be able to access that data. The benefit is that on webknossos.org we manage all server infrastructure, maintenance and upgrades for you.
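A minimal sketch of such a static file server as an additional compose service, assuming the OME-Zarr data lives in ./zarr_data (the service name and paths are made up; note that cross-origin access from webknossos.org additionally requires CORS headers such as Access-Control-Allow-Origin in the server configuration):

```yaml
# Hypothetical static file server for OME-Zarr data, reusing the caddy image
zarr-server:
  image: caddy:alpine
  restart: unless-stopped
  ports:
    - "8080:80"
  volumes:
    - ./zarr_data:/srv
  # caddy's built-in static file server, serving /srv on container port 80
  command: caddy file-server --root /srv --listen :80
```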
> Are these regular psql exits normal?
Not really. Actually, the psql service does not need to be started at all; it is only there for debugging purposes, to quickly connect to the database (you can run it on demand, e.g. with `docker-compose run psql`).
Closing this, as setting up https/single-domain http solved the issue. Feel free to reopen or open another issue if you encounter further problems.
Context
I am trying to deploy webknossos on a local VM. I preprocessed my data using wkcuber, but when I load it in webknossos it just displays grey. The same happens for two other datasets.
Expected Behavior
I expected to see the 3D microscopy that is also visible in the TIFF stack.