dcaputo-harmoni / open-balena-admin

Open Balena Admin

User can log in to the dashboard correctly, but the app won't fetch any data from the openBalena server. #2

saveriogzz opened this issue 2 years ago

saveriogzz commented 2 years ago

hey @dcaputo-harmoni, I was able to start up the admin and open it on my local machine at http://localhost:8080/. However, the admin is not fetching anything from the openBalena server!

(Screenshot: empty admin dashboard)

Is there perhaps something I am overlooking?

Summary of my configuration:

- openBalena running on a remote server
- custom domain configured
- openBalenaAdmin running on the same server
- opening the dashboard on my local machine at http://localhost:8080/

Thanks!!

dcaputo-harmoni commented 2 years ago

@saveriogzz when you say that open-balena-admin is running on the same remote server as open-balena, I just want to clarify a few things. open-balena-admin includes three packages: postgrest, ui and remote. There is no dashboard server; dashboard is just a hostname alias that points to open-balena-ui, which runs all web interface functions (including the main dashboard and device dashboards). I will clarify the documentation to this effect - basically there are two hostnames that point to open-balena-ui, admin.<yourhostname> and dashboard.<yourhostname>.

So back to your issue: if you installed open-balena-admin on your remote host, you already have a running version of open-balena-ui there, so you should not need to access it via a separate running instance of open-balena-ui on localhost. That said, it is still possible to run a separate instance of open-balena-ui on localhost, but you would need to configure its environment variables per the instructions in the open-balena-ui github repo. As long as you point it to the correct postgrest server hostname on your remote openbalena server, it should work.
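
A rough sketch of what such a local instance could look like - the image name in angle brackets and the environment variable names are placeholders, not the actual ones; take the real names from the open-balena-ui README:

```sh
# Illustrative only: substitute the real image name and the environment
# variable names documented in the open-balena-ui repo.
docker run -d --name open-balena-ui-local \
  -p 8080:8080 \
  -e POSTGREST_HOST=postgrest.mysubdomain.mydomain.somewhere \
  -e REMOTE_HOST=remote.mysubdomain.mydomain.somewhere \
  <open-balena-ui-image>
```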

dcaputo-harmoni commented 2 years ago

@saveriogzz just checking in, were you able to resolve this issue?

saveriogzz commented 2 years ago

hey @dcaputo-harmoni, sorry for the late answer and thanks for checking in! Unfortunately I couldn't make this work yet. The services start up and run correctly, but I cannot access the dashboard at http(s)://dashboard.mysubdomain.mydomain.somewhere. I am not sure whether the reason is that my openBalena server's ports 80 and 443 are only accessible from behind my company's VPN, or that I am editing my /etc/hosts file incorrectly. At the moment, my /etc/hosts looks something like:

```
# Generated by ansible
127.0.0.1   localhost
131.180.123.321 mysubdomain.mydomain.somewhere  mysubdomain

127.0.0.1 mysubdomain.mydomain.somewhere
127.0.0.1 api.mysubdomain.mydomain.somewhere
127.0.0.1 registry.mysubdomain.mydomain.somewhere
...
...
```

Running `./open-balena-admin/scripts/compose ps` gives me the output in the attached screenshot. Should all those 0.0.0.0 bindings be 127.0.0.1 instead?

Thanks again :beers:

dcaputo-harmoni commented 2 years ago

OK, so if I'm understanding your setup correctly, you are running open-balena-admin on a remote server (this would need to be the same server running open-balena) - so in this case you should not need to edit your hosts file at all. You would instead need to set up A or CNAME records for your domain that point to the servers hosting each of these containers, the same way you set up the open-balena DNS records. I've updated the main README file for this repo to show which records point to which servers - hopefully that helps.
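
For illustration, assuming both open-balena and open-balena-admin live on one server with public IP 203.0.113.10 (a placeholder address), the extra records could look roughly like this:

```
; A records pointing straight at the server...
admin.mysubdomain.mydomain.somewhere.      IN  A      203.0.113.10
dashboard.mysubdomain.mydomain.somewhere.  IN  A      203.0.113.10
; ...or CNAMEs to a name that already resolves to that server
postgrest.mysubdomain.mydomain.somewhere.  IN  CNAME  mysubdomain.mydomain.somewhere.
remote.mysubdomain.mydomain.somewhere.     IN  CNAME  mysubdomain.mydomain.somewhere.
```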

saveriogzz commented 2 years ago

Thanks @dcaputo-harmoni! I was not thinking about CNAMEs... dumb of me! I will set them up and keep you posted here!

saveriogzz commented 2 years ago

hey! I added the CNAMEs with the following values:

admin.mydomain.com
dashboard.mydomain.com
postgrest.mydomain.com
remote.mydomain.com

But I still get a 503 Service Unavailable error... I guess there must be something wrong in the network configuration. I also tried to edit the ui service port value in compose/services.yml to be 127.0.0.1:8080:8080, but with the same result.
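
For reference, that edit corresponds to something like the snippet below (a sketch, assuming the service is named ui). Note that binding to 127.0.0.1 makes the port reachable only from the host itself, so the plain 8080:8080 mapping is what external access needs:

```yaml
services:
  ui:
    ports:
      - "127.0.0.1:8080:8080"   # loopback only - not reachable from other machines
      # - "8080:8080"           # binds all interfaces - required for external access
```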

dcaputo-harmoni commented 2 years ago

@saveriogzz can you access it via the public IP of the admin server directly? As in http://<public ip>:8080 If not, something is likely wrong in your server networking setup - perhaps the IP is not publicly exposed.

saveriogzz commented 2 years ago

> @saveriogzz can you access it via the public IP of the admin server directly? As in http://<public ip>:8080
>
> If not, something is likely wrong in your server networking setup - perhaps the IP is not publicly exposed.

It's indeed not publicly exposed, but I should be able to reach it when using the VPN!

dcaputo-harmoni commented 2 years ago

What happens when you are connected to the VPN and you try to access the IP (not the hostname) directly via your browser on port 8080? Can you ping the IP?
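
For example, something along these lines while on the VPN (placeholder IP shown; use the admin server's actual address):

```sh
ping 203.0.113.10                   # placeholder - substitute your server's IP
curl -v http://203.0.113.10:8080/   # should return the open-balena-ui page if the port is reachable
```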

jamwest commented 1 year ago

I have been having the same issue with the 503 service unavailable. open-balena-admin is running on the same remote host (EC2) as open-balena. My fix was that I had to set custom TCP rules for inbound traffic on 8080, 8000, and 10000-10009.

bernhardkaindl commented 1 year ago

> I have been having the same issue with the 503 service unavailable. open-balena-admin is running on the same remote host (EC2) as open-balena. My fix was that I had to set custom TCP rules for inbound traffic on 8080, 8000, and 10000-10009.

Could you share how to set these rules using an example?

jamwest commented 1 year ago

Sure thing.

1. Go to the Security tab of your EC2 instance.
2. Follow the link under Security groups.
3. In the Inbound rules section, click on Edit inbound rules.
4. Add the following rules:
   - Type: Custom TCP, Port range: 8080, Source: 0.0.0.0/0
   - Type: Custom TCP, Port range: 8000, Source: 0.0.0.0/0
   - Type: Custom TCP, Port range: 10000-10009, Source: 0.0.0.0/0
5. `Save rules`

As long as your A and CNAME records are set up correctly, you should be able to go to `http://admin.{your-domain-name}:8080`.
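
If you prefer the AWS CLI over the console, the equivalent rules can be added roughly like this (sg-0123456789abcdef0 is a placeholder for your instance's security group ID):

```sh
SG=sg-0123456789abcdef0   # placeholder security group ID
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port 8080        --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port 8000        --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port 10000-10009 --cidr 0.0.0.0/0
```
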
sajid-mulytic commented 1 year ago

Hi @dcaputo-harmoni I can access http://admin.{your-domain-name}:8080. Is there any way to make it HTTPS instead of HTTP? I couldn't find anything in the documents. Any suggestions? Thanks.

dcaputo-harmoni commented 1 year ago

Hi @sajid-mulytic, if you are using docker-compose to run it, there is no pre-baked way to access it via HTTPS. However, if you deploy using the included helm / K8S scripts, there is an SSL ingress controller that will handle this for you. I suspect this is a feature others would be interested in, so if you want to take a stab at modifying the build script to include a secure option that modifies the services.yml file, please submit a PR.

sajid-mulytic commented 1 year ago

Hi @dcaputo-harmoni, thanks for your suggestion. I do not have much knowledge about Kubernetes at the moment, so I am sticking with HTTP for now. I will also try modifying the services.yml as you mentioned.

bernhardkaindl commented 1 year ago

Hi @sajid-mulytic if you also have port 80 available (at least once) on your openbalena-admin host, I'd recommend using Caddy (or some other reverse proxy which can obtain the needed certificate for HTTPS automatically).

For this I'd recommend taking a look at a reverse proxy like caddy-docker-proxy.

The easiest way to provide HTTPS to the outside world is a reverse proxy that accepts the HTTPS connections from the outside world (of course with a valid certificate for the HTTPS address used) and forwards the connection(s) to the respective container(s).

Caddy is one of the many tools for this. It's lightweight and serves public DNS names over HTTPS using certificates from a public ACME CA such as Let's Encrypt or ZeroSSL: sites are then served over HTTPS automatically. It just works.

In the simplest case (only one domain to be served) it is as easy as:

caddy reverse-proxy --from example.com --to localhost:9000

You can just try this line after installing Caddy (it's written in Go, so it's just one static binary - I didn't know about this), or you can run it as a service using docker, which is very powerful:

https://github.com/lucaslorentz/caddy-docker-proxy does this as a separate docker service, and you use it as the central arrival point for all HTTPS:443 connections on your host (not just one service to be provided over HTTPS, but any docker service you want).

While you need to make a very tiny modification to the services.yml (and to the docker-compose files of any other services besides open-balena-admin that you might want to access over HTTPS), it's really easy to add these 8 lines to them, and caddy-docker-proxy takes care of the rest:

caddy-docker-proxy uses a common docker network called caddy, to which you connect all the docker services you want served over HTTPS using Caddy. caddy-docker-proxy then looks at the docker labels of these docker services and serves those that are configured for it via its reverse proxy over HTTPS, with automatic certificates - it does it all for you.

You run `docker network create caddy` to create the caddy network I mentioned before, then you take the caddy/docker-compose.yml example from the README.md, and finally you add the lines below `image` to the open-balena-ui service in your services.yml:

```yaml
version: '3.7'
services:
  open-balena-ui:
    image: open-balena-ui-image-url   # keep the image line you already have
    networks:
      - caddy                         # join the shared caddy network
    labels:
      caddy: whoami.example.com       # hostname Caddy should serve (use your dashboard/admin DNS name)
      caddy.reverse_proxy: "{{upstreams 8080}}"  # proxy to this container's port 8080

networks:
  caddy:
    external: true                    # the network created with `docker network create caddy`
```

These lines put the open-balena-ui service on the docker network caddy, and the running caddy-docker-proxy container will then find the labels. Replace whoami.example.com with the name of the DNS A record you want to serve HTTPS for, and 8080 with the open-balena-ui port number you want to serve, and Caddy will be configured to just do it (and you can do the same for any other docker service you might want to provide over HTTPS in the future).
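
For completeness, the caddy-docker-proxy service itself is started from its own compose file. A minimal sketch based on the project's README (check the upstream README for the current image tag and options) might look like:

```yaml
version: '3.7'
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine  # tag from the upstream README; verify before use
    ports:
      - "80:80"     # HTTP, needed for ACME challenges and redirects
      - "443:443"   # HTTPS
    environment:
      - CADDY_INGRESS_NETWORKS=caddy   # only inspect containers on the caddy network
    networks:
      - caddy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Caddy read container labels
      - caddy_data:/data                           # persists issued certificates
    restart: unless-stopped

networks:
  caddy:
    external: true

volumes:
  caddy_data: {}
```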

RafSchandevyl commented 11 months ago

Hi, I have the same problem. I can log in, but no data is fetched. All services are running on an AWS EC2 instance. My DB is running on RDS on Amazon and my repository has been moved to native S3. Following the logs, I'm getting a connection timeout on postgrest. Any idea where to look?