Open FrancisRalph opened 1 week ago
I gave it some thought and here's what I think can be done:
docker compose -p "subdomain_key" up -d
where -p specifies the project name, ensuring a separate set of containers is spun up for each subdomain. Docker Compose also creates a default bridge network for each project, which simplifies the networking. For option 2b, I would need to do more research on how to ensure each set of containers uses its own host directory.
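As a rough sketch (the project names and the idea of reusing one compose file are my assumptions), two isolated stacks could be brought up like this:

```shell
# One isolated stack per subdomain: each -p value gets its own container
# name prefix, default bridge network, and volume namespace.
docker compose -p subdomain1 up -d
docker compose -p subdomain2 up -d

# Confirm each project received its own default network
# (subdomain1_default, subdomain2_default).
docker network ls --filter name=subdomain
```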
Actually, I'll try setting up my own Terraform module that uses GCP Cloud Run.
Hi @FrancisRalph. Thanks for the feedback. It sounds like the goal is for each user to have their own instance of Actual Server running, and not for multiple users to be able to work in the same Actual instance together, right?
A couple of thoughts spring to mind:
An Actual Server instance can support multiple files. Depending on who the other users are (if you feel comfortable sharing your instance’s password with them), you may be able to make this work with a single instance, using multiple files.
To implement what you’re describing, you wouldn’t necessarily need to replicate the Caddy and DuckDNS containers per Actual instance. In theory (I haven’t tested this), your separate Actual Server instances would just need to be configured to listen on different ports via environment variables in separate systemd service files (with renamed actual_server container names). The reverse proxy configuration in your Caddy instance would then point to the different containers on their respective ports via the Caddyfile. Something like this, in the relevant section of locals.tf:
${var.actual_fqdn1} {
    encode gzip zstd
    reverse_proxy actual_server1:5006
}

${var.actual_fqdn2} {
    encode gzip zstd
    reverse_proxy actual_server2:5007
}
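For the container side, a rough sketch of what the two renamed services might run (I'm assuming actual-server honors an ACTUAL_PORT environment variable, and the network and container names here are illustrative):

```shell
# Two Actual Server containers on the same Docker network as Caddy, each
# listening on its own port. The network name, container names, and the
# ACTUAL_PORT variable are assumptions, not tested config.
docker run -d --name actual_server1 --network caddy_net \
  -e ACTUAL_PORT=5006 actualbudget/actual-server
docker run -d --name actual_server2 --network caddy_net \
  -e ACTUAL_PORT=5007 actualbudget/actual-server
```

Since Caddy addresses the containers by name on the shared network, no host ports need to be published for the Actual containers themselves.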
You wouldn’t need multiple DuckDNS subdomains either, as you can use nested subdomains under the same DuckDNS subdomain (e.g. budget1.example.duckdns.org, budget2.example.duckdns.org). Your DuckDNS container would still just be updating your example.duckdns.org subdomain with the public IP address of your Compute Engine instance.
Using Cloud Run is something I considered too, but the criteria for free tier seemed more straightforward for Compute Engine (at least, when I first glanced at it). If I find the time, I may try to get an example of this working in Cloud Run as well, perhaps in a different branch.
A map of FQDNs to ports, along with a for_each, would probably make it fairly straightforward to implement what I described in #2. It might take me a bit to get around to it, but I’ll try to give it a shot soon.
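As a sketch of that idea (untested, and the variable and local names are hypothetical), a for expression over such a map could render the Caddy site blocks:

```terraform
locals {
  # Hypothetical map of FQDN => backend container:port
  actual_instances = {
    "budget1.example.duckdns.org" = "actual_server1:5006"
    "budget2.example.duckdns.org" = "actual_server2:5007"
  }

  # Render one Caddy site block per Actual instance.
  caddyfile_sites = join("\n", [
    for fqdn, backend in local.actual_instances : <<-EOT
      ${fqdn} {
          encode gzip zstd
          reverse_proxy ${backend}
      }
    EOT
  ])
}
```

A resource-level for_each over the same map could then create the matching container resources, keeping the FQDNs and ports defined in one place.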
Ah, I see, thanks for the corrections!
I managed to run Actual on Cloud Run, it was quite straightforward: https://github.com/FrancisRalph/actual-budget-gcp-cloud-run
One issue is that it is accessible to the public with no authentication. Adding authentication would require other GCP resources (e.g. a load balancer) that won't stay within the free tier. So I have set max instances to 1, limited concurrent requests to 5, and set a $1 budget alert.
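For reference, the scale and concurrency guardrails can be set at deploy time; something like this (the service name, image tag, and region are placeholders, and the budget alert is configured separately in Cloud Billing):

```shell
# Cap scale and concurrency so an unauthenticated public endpoint
# can't run up costs. Service name, image, and region are placeholders.
gcloud run deploy actual-budget \
  --image=docker.io/actualbudget/actual-server:latest \
  --region=us-central1 \
  --max-instances=1 \
  --concurrency=5 \
  --allow-unauthenticated
```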
Thanks for taking the time to make and document this!
It would be great to be able to spin up multiple Actual Budget containers (as well as their respective Caddy and DuckDNS containers) within the same instance, so that multiple users can each use their own Actual Budget while staying within the Always Free tier.
I would try to implement it myself, but I'm not knowledgeable enough about Docker to give it a try right away.