vrtmrz / obsidian-livesync

Can not connect to CouchDB in Synology Docker #79

Closed. Cityjohn closed this issue 2 years ago

Cityjohn commented 2 years ago

For a non-profit it would be VERY helpful if we could run several CouchDB databases for Obsidian collaboration, but it is turning out to be a bit more difficult to set up than I thought, and I'm having a major issue setting up CouchDB on my Synology for LiveSync:

I set up the docker image using the following command:

sudo docker run -d --name VV -it -e COUCHDB_USER=.nectifecon -e COUCHDB_PASSWORD=[password] -v /volume1/docker/couchdb_ini/local.ini:/opt/couchdb/etc/local.ini -p 5984:5984 couchdb:latest

Then I opened port 5984 on my Synology server and tested it; it is open.

Checked my container which looks like this:

[screenshot]

But I still always get the same error no matter what URI I use: I tried the local network IP and the external IP, using either http or https with different ports. I thought https://192.168.178.12:5984/ should be the correct one.

[screenshot]

Anyone have any idea what I'm doing wrong?

vactomas commented 2 years ago

You might want to try opening the URL in your browser; that will give you an idea of whether the container is working properly. You can also check the logs of the CouchDB container to see if there are any errors there.
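
For example, a quick check from another machine on the LAN might look like this (a minimal sketch, assuming the container name VV and the address 192.168.178.12:5984 from the original post):

    # A healthy CouchDB answers the root URL with a small JSON banner,
    # e.g. {"couchdb":"Welcome", ...}. A timeout points at networking,
    # an HTTP error points at the container itself.
    curl http://192.168.178.12:5984/

    # Check the container's own log for startup or bind errors.
    docker logs --tail 50 VV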

Cityjohn commented 2 years ago

Good points. I tried to connect via browser and it timed out. The logs did show me that a few hours ago some connections sort of got through, but not quite, so I figured I'd try several more combinations of URIs, and I got one to work.

I think I found the issues: http:// works to connect instead of https://, and the second issue was that VV is an illegal database name, since capital letters are not allowed.

[screenshot]
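
For reference, CouchDB only accepts lowercase database names (the documented pattern is ^[a-z][a-z0-9_$()+/-]*$), which is why VV was rejected. A minimal illustration with placeholder credentials:

    # Uppercase name: rejected with {"error":"illegal_database_name", ...}
    curl -X PUT http://admin:password@192.168.178.12:5984/VV

    # Lowercase name: accepted with {"ok":true}
    curl -X PUT http://admin:password@192.168.178.12:5984/vv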

How does one get https:// to work and is it really necessary for secure operations?

vactomas commented 2 years ago

You would need to configure a reverse proxy, Caddy for example, and use a local certificate. There is, however, an issue with self-signed certificates and mobile devices that is described in the docs. In theory, if it is your own network with just your devices, it shouldn't be an issue. That said, TLS (https) encrypts the traffic, so no one should be able to read it while it is in transit.
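
A minimal sketch of that setup with Caddy, assuming a made-up LAN hostname (couchdb.lan) and the CouchDB address mentioned earlier in the thread; Caddy's internal CA issues the local, self-signed certificate, which is exactly the kind mobile devices may complain about:

    # Caddyfile
    couchdb.lan {
        # Certificate from Caddy's internal CA (self-signed chain).
        tls internal
        # Forward everything to the CouchDB container.
        reverse_proxy 192.168.178.12:5984
    }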

vrtmrz commented 2 years ago

Thank you for using this plugin! And thank you for supporting me, @vactomas.

@Cityjohn

issue was that VV is an illegal database name since capital letters are not allowed.

Thank you! I'll add this to the configuration check.

How does one get https:// to work and is it really necessary for secure operations?

It isn't required if you are using only a PC or Mac and running on the intranet. But if you want to expose the database over the internet, you have to. On Synology, you can set up the reverse proxy in Control Panel -> Login Application -> Details (if it's the same as DSM 6). The KB is here.
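
For what it's worth, DSM's reverse proxy is nginx-based under the hood, so the rule created in the GUI amounts to roughly the following sketch (the hostname and certificate paths are placeholders, not something from this thread):

    server {
        listen 443 ssl;
        server_name couchdb.example.com;

        # In DSM the certificate is the one managed under Control Panel -> Security -> Certificate.
        ssl_certificate     /path/to/fullchain.pem;
        ssl_certificate_key /path/to/privkey.pem;

        location / {
            proxy_pass http://localhost:5984;
            proxy_set_header Host $host;
            # Helpful for long-lived replication connections.
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 3600s;
        }
    }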

Cityjohn commented 2 years ago

@vrtmrz

Quick question while I try to set this up safely. I couldn't find this anywhere, and I'm not very wise yet when it comes to networking or Docker.

Is it even possible to set up 10 or so CouchDB containers on the same public IP with a reverse proxy on the same port 5984? Will the different LiveSync Obsidian clients be able to differentiate between them just by the database name, or should I use different ports?

I am now trying to set up the reverse proxy so that I can use the Synology Let's Encrypt TLS certificates. I'm trying to do this by pointing all the vault.domain.com names to my home public IP address, and then hopefully my Synology proxy server will send traffic to the right Docker containers.

vrtmrz commented 2 years ago

@Cityjohn You can share one CouchDB, but you have to set up the _users database. (The Docker image runs in admin mode.)
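
A rough sketch of that per-user setup against the CouchDB HTTP API (the user name, password, and database name below are placeholders):

    # Create a user account in the _users system database.
    curl -X PUT http://admin:password@localhost:5984/_users/org.couchdb.user:alice \
         -H "Content-Type: application/json" \
         -d '{"name": "alice", "password": "secret", "roles": [], "type": "user"}'

    # Restrict a vault database so that only this user (besides admins) can access it.
    curl -X PUT http://admin:password@localhost:5984/vault_alice/_security \
         -H "Content-Type: application/json" \
         -d '{"admins": {"names": [], "roles": []}, "members": {"names": ["alice"], "roles": []}}'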

When using a reverse proxy, you usually use one common external port, 443 (HTTPS), and transfer the access to each internal server (hosted in Docker) by a different URI. This way, you can serve each CouchDB with a different configuration and different administration passwords, and let users make multiple vaults. So if you want to serve multiple CouchDB instances, you can run multiple containers with the -p option, like -p 5984:5984 and -p 5985:5984. The proxy should then transfer requests from https://external.example.net/user1/{*} to http://docker_host:5984/{*}, and from https://external.example.net/user2/{*} to http://docker_host:5985/{*}, but it is sometimes complicated.

So using Caddy or some other reverse proxy between the databases and Synology is recommended (if you want to use multiple CouchDB instances).
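
As a sketch, the path-based routing described above looks like this in a plain Caddyfile (the hostnames and ports are the placeholders from the example URIs above; this is an alternative to the label-driven setup shown in the compose fragment below):

    external.example.net {
        # handle_path strips the /user1 or /user2 prefix before proxying,
        # so each CouchDB sees clean root-relative paths.
        handle_path /user1/* {
            reverse_proxy docker_host:5984
        }
        handle_path /user2/* {
            reverse_proxy docker_host:5985
        }
    }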

If you want to add a secondary database in docker-compose.yml, you can add the second one's configuration like this:

    couchserver2:
        image: couchdb
        ports:
            - "5985:5984"
        environment:
            - COUCHDB_USER=${COUCHDB_USER_2}
            - COUCHDB_PASSWORD=${COUCHDB_PW_2}
        volumes:
            # The files' owner will become id 5984 when you launch the image,
            # because CouchDB writes its on-the-fly configuration into local.ini.
            # So when you want to perform a git pull or change something, you have to change the owner back.
            - ./data2/couchdb:/opt/couchdb/data
            - ./conf2/local.ini:/opt/couchdb/etc/local.ini
        networks:
            - caddy
        labels:
            caddy: ${COUCHDB_SERVER}
            # To avoid being crawled by malicious web crawlers:
            # - make the index page forbidden
            # - put CouchDB into a subdirectory
            caddy.handle_path_2: /secondary/*
            caddy.handle_path_2.0_reverse_proxy: "{{upstreams 5984}}"
        restart: always

(And you have to remove this line; the internal Caddy doesn't have to get an SSL certificate.)
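
For completeness, a minimal way to bring the second service up and check it, assuming a .env file that defines the variables used in the fragment above (values are placeholders):

    # .env (placeholders)
    #   COUCHDB_USER_2=obsidian2
    #   COUCHDB_PW_2=change-me
    #   COUCHDB_SERVER=external.example.net

    docker compose up -d couchserver2

    # The second instance is published on host port 5985.
    curl http://localhost:5985/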

But if you simply want to use one database, you can point the reverse proxy on the Synology directly at your CouchDB.

Cityjohn commented 2 years ago

Maybe I'm not seeing something but these are the steps I took:

I created a map of all the domain names and where they should end up

[screenshot]

Then I checked if the DNS was correctly pointing to my public IP

[screenshot]

Then I checked if my modem has 443 open and redirects to my Synology

[screenshot]

Then I created a Docker CouchDB container with -p 6003:5984 and set the reverse proxy to redirect to localhost on container port 6003

[screenshot]

Then I created a Let's Encrypt certificate for all services

[screenshot]

and then...

[screenshot]

But when I do a local connection, it connects within half a second

[screenshot]

Am I missing something key in the networking?

Could it be that Synology's own Let's Encrypt certificate is not working?

I'm going to try it with Caddy now, but I'm not sure it will resolve the connection issue. Is there a way for me to check where the connection issue lies?
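
One way to narrow it down (a sketch, using the vault.domain.com placeholder and the 6003 port mapping from the steps above, plus the LAN address mentioned earlier in the thread):

    # End to end through DNS, the router, the Synology proxy and the certificate;
    # -v shows exactly which stage fails (resolve, connect, TLS handshake, HTTP).
    curl -v https://vault.domain.com/

    # Straight at the container on the LAN, bypassing the proxy entirely,
    # to separate proxy/certificate problems from container problems.
    curl -v http://192.168.178.12:6003/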

Cityjohn commented 2 years ago

Aha, wait, I think I have to make a Let's Encrypt certificate for each domain name, but Synology won't allow me to because of a nondescript error.

[screenshot]

Cityjohn commented 2 years ago

Great, it worked after turning off UPnP on my router and opening port 443 manually.

Now when I use these settings to connect to the database:

[screenshot]

I get an immediate error:

[screenshot]

Where do I open inspector?

Cityjohn commented 2 years ago

I DID IT!

[screenshot]

The problem was that in the Synology GUI, after you have issued the certificate, you also have to configure it so that it applies to the specific domain name, so that you do not get a NET::ERR_CERT_COMMON_NAME_INVALID error.
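
A quick way to confirm which certificate is actually being served for a given domain name (a sketch, using the vault.domain.com placeholder from earlier):

    # Ask the server for the certificate it presents for this exact hostname (SNI)
    # and print its subject, issuer and validity; the domain has to be covered by
    # the subject/SAN, otherwise the browser raises NET::ERR_CERT_COMMON_NAME_INVALID.
    openssl s_client -connect vault.domain.com:443 -servername vault.domain.com </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -issuer -dates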

Now I can move on to testing how these databases work haha.

Cityjohn commented 2 years ago

Works like a charm; I'm so happy. I hope this helps others with Synology who are finding it hard to get it running, as I imagine a lot of people will want to use this system on their Synology servers.

Just to elaborate on the previous statement: when creating a certificate in the Synology GUI, when you press Configure you can select which third-party signed certificate belongs to which domain name, which is important.

Blue is correct, and red is incorrect and will yield the invalid name error:

[screenshot]