magno23 opened this issue 3 years ago
Sorry it's taking a bit of time; I haven't had a chance to test this, and I don't have a spare router around that I can use to create a non-internet network with the devices connected to it. Just a quick question on this, though: does this non-internet network have its own DNS? In other words, is the NTP time server you have locally on this non-internet network, so that the routing table contains both the NTP time server and the RPi4?
I know you described your setup, but it would help if you could instead use something like Draw.io (https://app.diagrams.net/) and give a visual description of it.
As an example of what I mean by a visual description, the image below was simply googled to show how you could draw your setup:
I have the exact same problem. With an older version of Screenly it works without a problem (on an RPi3).
But now I wanted to do the same on an RPi4, in an environment where the Raspberry Pis don't have internet access. The lack of internet access is enforced by our firewall, and this won't change.
This Raspberry Pi would use our internal DNS and NTP servers, which every other PC with internet access uses as well.
Why does it need internet access now when the older version didn't need it?
@iDazai
Why does it need internet access now when the older version didn't need it?
As far as I know, we did not force it to need internet access; if there is an issue there now, it was definitely not intentional. It may be something to do with Docker, since an internal web error means the web server is not running, yet the docker container ls command shows all containers running... so no idea as of yet. We are trying to figure that out, but we need as much info as possible from your end, since this offline setup is rare.
Can you send a picture of the error on the screen, as well as the output of cat /etc/resolv.conf?
I just tested my Pi4 on the master branch and uploaded offline content (an image and a video), and both show and play without any issue. So if we go down a list of troubleshooting steps, this proves that it is not a Docker container problem, and it also proves that Screenly plays uploaded offline content just fine.
This is why I wanted to see a picture of the error, along with the contents of /etc/resolv.conf. And now that I remember, can you also show the contents of your /etc/systemd/timesyncd.conf file?
One final thing for now: export the output of systemctl to pastebin and share it; I'm wondering whether all services are loaded.
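If it helps, all of that can be gathered in one go; nothing Screenly-specific here, just redirecting each output to a file you can then paste:

# Collect the requested diagnostics into plain text files, then upload them to pastebin.
cat /etc/resolv.conf            > resolv.txt
cat /etc/systemd/timesyncd.conf > timesyncd.txt
systemctl --no-pager            > systemctl.txt
docker container ls -a          > containers.txt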
@ealmonte32
I installed the latest Raspbian Lite from January 11th, 2021. I also activated/installed SSH, VNC and fail2ban. Then I installed Screenly OSE from the Dev/Master branch, answering "No" to "manage your network" and "full system upgrade". This is what I saw after the installation:
After the first reboot I saw nothing. After the second reboot I saw the attached "Site can't be reached" error.
GitHub rotates this screenshot for some reason.
I can also bring up the console. In another issue I saw that I shouldn't be able to do this? On some reboots it just shows the Screenly splash screen and then nothing else. When I bring up the console and enter the user ID, it won't jump to the password prompt. I can ping the Pi on the internal network, but I can't connect to the web interface; it just says the connection failed.
docker container ls: https://pastebin.com/SenZ9Hyi
resolv.conf: https://pastebin.com/FTGNtxTs
systemctl: https://pastebin.com/E1wM0Zys
timesyncd.conf: https://pastebin.com/RAXFW7Zr
I'll try running the install script again and see if it changes anything.
Ok, just a few thoughts:
Based on your pastebin data, you don't seem to have the screenly/srly-ose-nginx:latest container up and running. This is what runs as the web server and serves the data; it is not to be confused with the srly-ose-server container.
If your nginx container is not running, that explains why you cannot reach the IP address of the Pi via URL.
Then, the container below was restarting after all the others had been online for 8 minutes. Did you do that intentionally with docker container restart? If you didn't, the logs would show what error is causing the container to not start:
0d370b610168 screenly/srly-ose-viewer:latest-pi4 "/usr/bin/entry.sh b…" 25 minutes ago Restarting screenly_srly-ose-viewer_1
To get the logs from this specific container, just type: docker container logs screenly_srly-ose-viewer_1
With regards to the nameserver/resolv.conf: I assume you are obfuscating this, but just to make sure, are you able to ping the name server from the Pi? Not from your PC, but from within the Pi itself: SSH into the Pi and from there try to ping your nameserver. From your previous pastebin I believe your gateway for this was 10.213.8.254, so I will assume this is the router/AP that does not have internet access. This router/AP obviously does need to have a route to the NTP server, so that the Pi on that same network can reach it. The route would look something like:
Pi --> (router/DEVICEACCESS AP) --> ARP table contains the address of the NTP server

thus:

Pi --> router --> NTP server
Does it look something like that?
One last thing: if you don't mind rebuilding the container images on the Pi4, just so we know for sure you are running the latest containers with any updated packages/info, there is a rebuild_containers.sh script that can do this for you. If you decide to go this route, download it with wget from this link, then make it executable by running chmod +x rebuild_containers.sh. It will take some time, maybe less than an hour, but it will rebuild/recreate all containers. Make sure you have the latest GitHub repo cloned to the Pi, so that your /home/pi/screenly folder contains the latest files from the master branch.
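Roughly, those steps look like this; the raw URL below is just an example path assuming the script sits next to upgrade_containers.sh under bin/ on the master branch, so use the actual link above if it differs:

# Assumed URL: same bin/ directory as upgrade_containers.sh; verify against the real link first.
wget https://raw.githubusercontent.com/Screenly/screenly-ose/master/bin/rebuild_containers.sh
chmod +x rebuild_containers.sh
./rebuild_containers.sh   # rebuilds/recreates all containers; can take close to an hour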
Update & possible solution
I just realized something after posting the above. I was looking at an error I was getting on one of the containers, and while troubleshooting I realized that the first "URL" the viewer tries to open is the local hostname of the container. For example, below is what happens when you restart the viewer container with, say, one asset to load; in this example I have the Screenly weather widget URL:
Loading browser...
Generating asset-list...
Current url is http://srly-ose-nginx:80/splash-page
Current url is http://srly-ose-nginx:80/static/img/loading.png
Showing asset https://weather.srly.io?lang=en&24h=0&wind_speed=0 (webpage)
Current url is https://weather.srly.io?lang=en&24h=0&wind_speed=0
Sleeping for 60
As you can see, the first URLs the Pi loads use the local hostname of the container at that moment. That made me realize: when you move the Pi from your network with internet to the network without internet, the IP changes and your whole DNS and DHCP setup changes, but the containers were built with the old IP info.
This also reminds me: if you run rebuild_containers.sh, that has to be done with the device on the network with internet.
Then, after you do that and all containers are 100% up to date, online and working, when you move the device to the network without internet you need to run the other, non-image-rebuilding script: https://raw.githubusercontent.com/Screenly/screenly-ose/master/bin/upgrade_containers.sh
In that script, replace the 8.8.8.8 IP with the IP of your local DNS/gateway/router, so that the new local IP of the Pi gets inserted when the containers are brought up. So instead of 8.8.8.8, use your 10.213.8.254. That lets the route lookup succeed and pick up the local IP of the Pi, since it no longer needs to reach Google's DNS but only your local subnet... get me?
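To make the edit concrete, here is a minimal sketch of the kind of change I mean. It assumes the script derives the Pi's own address with an ip route get style lookup against 8.8.8.8; check your copy of upgrade_containers.sh for the exact line before editing:

# Hypothetical original line (your copy may differ):
#   export MY_IP=$(ip -4 route get 8.8.8.8 | grep -oP 'src \K[\d.]+')
# Edited for the offline subnet: point the lookup at the local gateway instead,
# so discovering the Pi's own address never needs a route to the internet.
export MY_IP=$(ip -4 route get 10.213.8.254 | grep -oP 'src \K[\d.]+' | head -n 1)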
This whole thing is pretty much a networking situation where you need to go piece by piece and find the places where being offline conflicts with containers that were already built.
@ealmonte32
I think you are close to a solution.
I did what you said: connected the Pi to the internet and ran rebuild_containers.sh with no errors, then connected the Pi to my network with no internet access and ran upgrade_containers.sh with 8.8.8.8 replaced by my DNS server.
Now when I boot Screenly it shows the screen with the IP to manage content, and after a couple of seconds my page is displayed as intended.
If I connect via browser from my computer, this is the page that I get:
If I go, for example, into settings and then back to the schedule overview, the page shows like this:
I did it on a new SD card just now. This time installing Docker didn't take long, and during the installation I already saw the Screenly splash screen with the IP address (on the internet subnet). What I noticed, though, is that the RPi responds extremely slowly. For example, after a ping I can't type another command because the process never seems to finish; CTRL+C doesn't help either.
After some time I got this error on the RPi:
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
After another SD card flash, changing my keyboard layout, installing fail2ban, etc., I put it on the subnet without internet access.
I had previously downloaded upgrade_containers.sh and changed the DNS server, but I got the following error:
Error response from daemon: Cannot restart container 7825aa791bf1: driver failed programming external connectivity on endpoint screenly_srly-ose-nginx_1 (327f5b5031a5b312520a490677c17f54593746c02d4c8fc5703e3c7f9172e53b): Bind for 0.0.0.0:80 failed: port is already allocated
Some other remarks:
The image screenly/srly-ose-viewer:latest-pi4 always restarts.
I can ping the DNS from the Pi.
The first time, I could connect via SSH. After a reboot I couldn't anymore. After another reboot I was able to SSH into it again.
I don't know what else I'm supposed to do to make it work.
@magno23 I experienced the same, but it was because after rebooting the Pi the device and containers had not all fully loaded yet, and the browser cache was not letting static data get updated. I simply opened an incognito window and everything showed up fine. Also, yes, I thought about it and it all makes sense now; the solution for offline use is to build the containers the way I suggested.
@iDazai
Since you said you used a new SD card, I assume it was a fresh install of Raspbian OS Lite, and that you essentially ran the bash install script and got all the latest files from the repo. After it completes, I assume you restarted the Pi; this needs to happen so the system properly finishes the installation.
After rebooting the Pi, you should have waited until everything loaded and tested that it was working while on the network with internet access. Then you would have shut it down, moved it over to the offline network (you still need DHCP/DNS on that offline network), and edited the upgrade_containers.sh script to use the gateway/router IP address instead of Google's. After that completes, you should have restarted the Pi and it would have worked.
If you are getting container errors, I would first stop all containers: docker container stop $(docker container ls -aq)
If that fails or gives an error, I would restart the Docker service, which in the past has fixed such errors for me: sudo systemctl restart docker.service
Once the Docker service has restarted and you are back at the console prompt, try running the edited upgrade_containers.sh script again and see if it works this time. You should then get the new offline-network IP address, which would show properly on the splash screen after a reboot and be passed along to each container, and everything should work offline.
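Put together, the recovery sequence looks roughly like this (upgrade_containers.sh being your locally edited copy with the local gateway in place of 8.8.8.8):

docker container stop $(docker container ls -aq)   # stop every container first
sudo systemctl restart docker.service              # restart the Docker daemon if stopping misbehaved
bash upgrade_containers.sh                         # re-run the edited script, then reboot the Pi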
I did it exactly as you described.
New SD card, latest Raspbian OS lite, bash install, restart.
After that restart, while still on the internet, I can't access that srly-ose-nginx site.
I downloaded upgrade_containers.sh, changed it to use my internal DNS server and connected to the no-internet subnet.
I have an internal IP and can ping the DNS from the Pi, but somehow SSH is disabled. So I have to bring up the console on the Pi (which I shouldn't be able to, according to another issue) and execute bash upgrade_containers.sh (I also tried it after stopping all containers), and I get this:
Pulling redis              ... error
Pulling srly-ose-server    ... error
Pulling srly-ose-websocket ... error
Pulling srly-ose-nginx     ... error
Pulling srly-ose-celery    ... error
Pulling srly-ose-viewer    ... error

ERROR: for srly-ose-websocket  Get https://registry-1.docker.io/v2/: read tcp 192.168.10.175:58920->23.22.155.84:443: read: connection reset by peer
ERROR: for redis  Get https://registry-1.docker.io/v2/: read tcp 192.168.10.175:58914->23.22.155.84:443: read: connection reset by peer
ERROR: for srly-ose-celery  Get https://registry-1.docker.io/v2/: read tcp 192.168.10.175:58918->23.22.155.84:443: read: connection reset by peer
ERROR: for srly-ose-viewer  Get https://registry-1.docker.io/v2/: read tcp 192.168.10.175:50422->52.55.168.20:443: read: connection reset by peer
ERROR: for srly-ose-nginx  Get https://registry-1.docker.io/v2/: read tcp 192.168.10.175:58916->23.22.155.84:443: read: connection reset by peer
ERROR: for srly-ose-server  Get https://registry-1.docker.io/v2/: read tcp 192.168.10.175:58922->23.22.155.84:443: read: connection reset by peer
ERROR: Get https://registry-1.docker.io/v2/: read tcp $RPi:58920->23.22.155.84:443: read: connection reset by peer
Get https://registry-1.docker.io/v2/: read tcp $RPi:58914->23.22.155.84:443: read: connection reset by peer
Get https://registry-1.docker.io/v2/: read tcp $RPi:58918->23.22.155.84:443: read: connection reset by peer
Get https://registry-1.docker.io/v2/: read tcp $RPi:50422->52.55.168.20:443: read: connection reset by peer
Get https://registry-1.docker.io/v2/: read tcp $RPi:58916->23.22.155.84:443: read: connection reset by peer
Get https://registry-1.docker.io/v2/: read tcp $RPi:58922->23.22.155.84:443: read: connection reset by peer
screenly_srly-ose-server_1 is up-to-date
screenly_redis_1 is up-to-date
screenly_srly-ose-websocket_1 is up-to-date
screenly_srly-ose-viewer_1 is up-to-date
screenly_srly-ose-celery_1 is up-to-date
7222a1843721_screenly_srly-ose-nginx_1 is up-to-date
You said that this script should be run offline, so can you tell me why it tries to communicate with a public IP address?
Get https://registry-1.docker.io/v2/: read tcp $RPi:58914->**23.22.155.84**:443: read: connection reset by peer
Another weird thing I noticed.
After I ran upgrade_containers.sh with the internal DNS configured, I changed to the internet subnet again and ran rebuild_containers.sh. Suddenly, I saw the Screenly splash screen with the internal IP (the one with no internet access).
Hard for me to grasp the inner workings of it...
@iDazai
Ok, let's get the Docker stuff out of the way in a very simplified way.
If you look over the script, focus on these lines:
sudo -E docker-compose \
-f /home/pi/screenly/docker-compose.yml \
-f /home/pi/screenly/docker-compose.override.yml \
pull
sudo -E docker-compose \
-f /home/pi/screenly/docker-compose.yml \
-f /home/pi/screenly/docker-compose.override.yml \
up -d
What this does is tell docker-compose: use these files (hence the -f references) to build these containers. If you open those .yml files, you can see what each one does, how it calls the Dockerfiles to build the containers, etc.; you can get a pretty good idea just by looking them over. The first step is pulling the images, and if you don't have them locally, it goes out to the Docker Hub registry to download them. The up step just brings them up (starts them, like turning them on), and the -d flag means do it in the background.
If you look at some of the files, for example, the nginx Dockerfile needs the alpine image. It is a very small image because the OS is tiny, but it runs the nginx web server all on its own. Other images are pulled from raspbian or even screenly. This is the Screenly Docker Hub account, which holds ready-made images you can pull: https://hub.docker.com/search?q=screenly&type=image
So, now that you have a little overview of what is going on, think about the process: initially, you must have internet in order to pull these images and build the containers.
After they are stored and built locally, you are simply bringing them up.
Maybe, in your specific offline situation, we simply need to comment out and skip the pulling part, since the images are already on the device, and just use the up -d section, as shown in the sketch below.
This will only work if the original pulling and building of images completed 100%, so if you type docker container ls -a it needs to show ALL containers required for Screenly to function, which are: srly-ose-server, srly-ose-viewer, srly-ose-celery, srly-ose-websocket, srly-ose-nginx and redis.
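Here is a minimal sketch of what skipping the pull would look like, using the same compose files as the snippet above; it is just the existing up step with the pull left out, not a tested recipe:

# Skip the pull (images are already local); only (re)create and start the containers.
# sudo -E docker-compose \
#     -f /home/pi/screenly/docker-compose.yml \
#     -f /home/pi/screenly/docker-compose.override.yml \
#     pull
sudo -E docker-compose \
    -f /home/pi/screenly/docker-compose.yml \
    -f /home/pi/screenly/docker-compose.override.yml \
    up -d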
Now, focus on that upgrade_containers.sh script. If you look it over, it simply gives the system the environment variables needed for the Screenly Dockerfiles to compose properly, such as MY_IP, VIEWER_MEMORY, DOCKER_TAG, etc.: any of the export variables.
So we already know that when the Pi is offline, we still need the MY_IP env variable. This is where you need an active network device that will respond to that route command, so that MY_IP ends up holding the default IP address of the Pi and passes it on to the containers.
This is the tricky part, though: remember we skipped the "pulling" of the images. If you just bring the images up and nothing is changed, the MY_IP variable won't get the new offline IP. This is where your testing comes in, because I don't have an offline network to test all this on; I simply know how it should theoretically work.
The reason this should work is that the images are already downloaded locally, so the device no longer needs to go out to the internet to get them. The one important step is that the containers now need to pick up the new, modified IP address that you gave them through the script.
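A quick way to sanity-check that part on the offline subnet, assuming the script's lookup is based on something like ip route get (the exact command in the script may differ):

# Ask the kernel which source address it would use to reach the local gateway.
# The "src" field in the output is the Pi's own address on the offline network,
# which is the value MY_IP should end up holding.
ip -4 route get 10.213.8.254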
So, in general, as I tested: if you do everything properly to begin with and end up with a working screenly-ose, taking the device offline should still let it work. You just need a way to get the new IP address to show on the splash screen after it has been put offline, which is completely doable; again, the trial and error for the specific steps is on your end.
I am not an expert on the inner workings and CLI of docker and docker-compose, but I hope this gives you an overview of how it is all put together.
So, in the end, regarding the new offline IP address you wrote about seeing: do you sort of understand now why the splash screen showed this IP after running the rebuild script?
Alright, I can clarify the issue here, as it seems they are experiencing the same issue I am: the viewer crashes on startup after failing to check GitHub for updates when there is no internet connection, and by default Screenly's Docker container is set to die on any error, regardless of what it was.
Here's the error I'm seeing:
Loading browser...
Generating asset-list...
Current url is http://srly-ose-nginx:80/splash-page
Current url is http://srly-ose-nginx:80/static/img/loading.png
Viewer crashed.
Traceback (most recent call last):
File "viewer.py", line 464, in <module>
main()
File "viewer.py", line 459, in main
asset_loop(scheduler)
File "viewer.py", line 316, in asset_loop
is_up_to_date()
File "/usr/src/app/lib/github.py", line 78, in is_up_to_date
latest_sha, retrieved_update = fetch_remote_hash()
File "/usr/src/app/lib/github.py", line 55, in fetch_remote_hash
'https://api.github.com/repos/screenly/screenly-ose/git/refs/heads/{}'.format(branch)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /repos/screenly/screenly-ose/git/refs/heads/master (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xf2b4b0b0>: Failed to establish a new connection: [Errno 113] No route to host',))
Traceback (most recent call last):
File "viewer.py", line 464, in <module>
main()
File "viewer.py", line 459, in main
asset_loop(scheduler)
File "viewer.py", line 316, in asset_loop
is_up_to_date()
File "/usr/src/app/lib/github.py", line 78, in is_up_to_date
latest_sha, retrieved_update = fetch_remote_hash()
File "/usr/src/app/lib/github.py", line 55, in fetch_remote_hash
'https://api.github.com/repos/screenly/screenly-ose/git/refs/heads/{}'.format(branch)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /repos/screenly/screenly-ose/git/refs/heads/master (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xf2b4b0b0>: Failed to establish a new connection: [Errno 113] No route to host',))
@jallen2281
Do me a favor: can you change the code in your viewer.py at this line (314)?
def asset_loop(scheduler):
disable_update_check = getenv("DISABLE_UPDATE_CHECK", True)
if not disable_update_check:
is_up_to_date()
asset = scheduler.get_next_asset()
Notice the True default instead of False in the original code. This way we try to disable the is_up_to_date() check.
You don't need to rebuild your containers to test this; simply get into the viewer container by running docker exec -it screenly_srly-ose-viewer_1 bash (I think that's the container name; you can press Tab for auto-completion to check).
I forgot to mention: you can do this inside the container itself, or you can edit viewer.py on the Pi using nano or vi or whatever, then use docker cp to copy the file from your working directory into the container and restart the container so it uses the modified viewer.py...
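As a rough sketch, that copy-and-restart step could look like the following; the container name and the /usr/src/app path are assumptions based on the container list and traceback earlier in this thread, so adjust them if yours differ:

# Assumed container name (tab-complete to confirm) and assumed in-container path.
docker cp viewer.py screenly_srly-ose-viewer_1:/usr/src/app/viewer.py
docker container restart screenly_srly-ose-viewer_1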
@ealmonte32 Hey, sorry it's taken me a while to get back to this, but I was on vacation. Anyway, yes, that is essentially what I did to get past that one, which fixes the viewer, but then you get a similar error in the server container:
ConnectionError: HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /repos/screenly/screenly-ose/git/refs/heads/master (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xf28f6070>: Failed to establish a new connection: [Errno 113] No route to host',))
Exception on / [GET]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask_restful/__init__.py", line 272, in error_router
return original_handler(e)
File "/usr/local/lib/python2.7/dist-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/src/app/lib/auth.py", line 248, in decorated
return settings.auth.authenticate_if_needed() or orig(*args, **kwargs)
File "server.py", line 1639, in viewIndex
return template('index.html', ws_addresses=ws_addresses, player_name=player_name, is_demo=is_demo)
File "server.py", line 233, in template
context['up_to_date'] = is_up_to_date()
File "/usr/src/app/lib/github.py", line 78, in is_up_to_date
latest_sha, retrieved_update = fetch_remote_hash()
File "/usr/src/app/lib/github.py", line 55, in fetch_remote_hash
'https://api.github.com/repos/screenly/screenly-ose/git/refs/heads/{}'.format(branch)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /repos/screenly/screenly-ose/git/refs/heads/master (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xf28f6050>: Failed to establish a new connection: [Errno 110] Connection timed out',))
I looked at this briefly but didn't notice the same check anywhere except in the viewer.
@ealmonte32 Alright, I had a few moments to look at this and found a workaround for now. Just comment out the following line in server.py, which checks for updates whenever a template is rendered:
def template(template_name, **context):
"""Screenly template response generator. Shares the
same function signature as Flask's render_template() method
but also injects some global context."""
# Add global contexts
context['date_format'] = settings['date_format']
context['default_duration'] = settings['default_duration']
context['default_streaming_duration'] = settings['default_streaming_duration']
context['template_settings'] = {
'imports': ['from lib.utils import template_handle_unicode'],
'default_filters': ['template_handle_unicode'],
}
#context['up_to_date'] = is_up_to_date() <----------COMMENT THIS LINE FOR OFFLINE USE
context['use_24_hour_clock'] = settings['use_24_hour_clock']
return render_template(template_name, context=context)
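As with viewer.py, the edited server.py can be copied into the running server container and the container restarted, rather than rebuilding anything; the container name and path below are assumptions based on the names and tracebacks earlier in this thread, so adjust as needed:

# Assumed container name and in-container path; adjust if yours differ.
docker cp server.py screenly_srly-ose-server_1:/usr/src/app/server.py
docker container restart screenly_srly-ose-server_1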
@jallen2281 If these two edits solve the issue for the rare offline use cases, then I think users can proceed with the changes, since they don't break anything; hopefully users who need this can follow along. When you tested, did you upload your own content (images/videos, etc.)?
@ealmonte32 I'm using it to serve a locally hosted page (a node app) running on the same pi actually. It sits on a secure subnet, so there is no internet gateway. That said, it works as normal within the confines of the subnet (uploaded or network content).
Cool.. sounds good.
I tried the new changes too. I changed server.py and viewer.py and copied them into their respective containers. Picture upload and display worked for me without problems (after I set the correct time on the RPi 😅).
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@magno23 Can you close this ? Trying to cleanup stale and resolved issues. Thanks.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Originally posted in https://github.com/Screenly/screenly-ose/issues/1426#issuecomment-774110312
I'm trying to use Screenly on a network without internet access. I get "Internal Server Error" on the screen and in the browser when I try to access Screenly. If I connect to a network with internet, everything works OK.
Here is the output of docker container ls, journalctl and ip route:
journalctl -> https://pastebin.com/hrYgYfuE
docker container ls -> https://pastebin.com/j3kfEynv
ip route -> https://pastebin.com/1mHUMRNn
Here is what I did for installation and config:
Fresh install of Screenly on Buster Lite, development branch and no network manager
Enable SSH
Connect to a network without internet access
Connect to the Pi via SSH and edit the NTP setting in /etc/systemd/timesyncd.conf to use my own server
Access Screenly via browser from my PC (on the same network) and get the message "Internal Server Error"; that same message appears on the Pi screen