Open optroodt opened 6 months ago
Unfortunately, this is going to be hard to implement due to limitations in the framework I'm using. NextJS only allows specifying the baseUrl at build time; it can't be configured at runtime with env variables.
I'm curious, why did you opt into hosting stuff under relative paths versus subdomains?
I use Duck DNS, which gives you a limited number of free DNS entries. I would have to automatically renew multiple certificates and remember the domains. I just found that hosting under a path would be quicker (and dirtier... 😅).
Ultimately I guess I will have to set up another domain. :)
Thanks for the quick response!
hmmmm, the limited number of DNS entries is probably solved by just using DuckDNS for the dynamic DNS and then having another domain with a CNAME to the DuckDNS one. That other domain can then have as many subdomains as you want. As for the certs, yeah, you'll have to manage them (unless you're using something like Caddy for auto certs). Putting stuff behind Cloudflare can also help you avoid the need to manage the certs.
Sorry for intruding on your homelab setup 😅 While not strictly related to hoarder, I love homelabbing and if you have any questions about what I mentioned above, I'm more than happy to help on the hoarder discord ;)
As for the feature request itself, to be honest, it's unlikely to happen anytime soon given the framework limitation, sorry :(
No worries, feel free to close the issue if you like (or keep it open to see if there is more interest).
I use certbot for cert renewal and I've set up a new domain just for hoarder, it works perfectly. 👌
Thanks again, really appreciate your work and keep it up!
Besides having fewer certificates and hosts to manage, I have a different reason for hosting under a relative path: security by obscurity.
I know you should not rely on that, but it has already proven effective at reducing the attack surface from the internet, where you constantly get crawled. Since I have been doing this for several years, the crawler and "hacking" attempts are down to a level where they no longer interfere that much with the limited internet bandwidth I have at home.
Yes, security by obscurity is a bad idea as your only line of defense, but if the obscured service you host is itself secured to the state of the art, it is no problem at all...
TL;DR: @MohamedBassem, is there maybe a way to make this possible in the future, or is it not on the list of things you plan to do?
What do you mean by security by obscurity? AFAIK there is no way to find all subdomains for a domain, just as you can't query all paths served for a given domain, so I don't understand the benefit you are seeing. I found some tools that brute-force search for subdomains, but if you simply don't name it "hoarder.yourdomain.com" and pick something random instead, that should not be an issue anymore?
In principle, general mistrust: depending on the DNS config of your machine, those requests go over the network in cleartext. There is also, in theory, AXFR, which might expose information if a DNS server is misconfigured.
Sure, I can use superlongyouneverguess.example.com, but that is cumbersome to type and is just another kind of obscurity game.
In contrast, with relative-path support I could host a statically generated webpage (or something well maintained like Nextcloud) at www.example.com, which, when scanned, will most likely not be attacked. Then I could host my key vault under www.example.com/mypasswords/ and other services under www.example.com/hoarder/.
I only need to publicly maintain one SSL certificate, and everything is fine.
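A minimal sketch of the layout I mean, with a single server block and a single certificate (the backend ports and certificate paths here are made up for illustration):

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;

    # one certificate covers everything hosted under this domain
    ssl_certificate     /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;

    # statically generated site at the root
    root /var/www/html;

    # key vault under a path (port is illustrative)
    location /mypasswords/ {
        proxy_pass http://127.0.0.1:8200/;
    }

    # hoarder under its own path (port is illustrative)
    location /hoarder/ {
        proxy_pass http://127.0.0.1:3000/;
    }
}
```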
For me it is around the simplicity of exposing - and by extension securing - publicly accessible services.
I find the cognitive load of securing a single (sub)domain via some sort of tunnel far lower than doing the same thing for every exposed domain.
For this reason I try to keep what services I can behind a single domain exposed through a tunnel.
(Of course this isn't the only self-hosted application I run that doesn't support a base URL - those I've just been putting behind Tailscale - but I do like having some things public in case I can't use my VPN for whatever reason.)
First of all, I really like this project, well done!
I have multiple dockerized apps that I host under a single domain. I'm using Nginx to proxy requests to the correct container, based on the path.
For example, requests to https://mydomain.com/app1 are proxied to a container on http://127.0.0.1:2000, requests to https://mydomain.com/hoarder are proxied to http://127.0.0.1:3000, and so on. The rules in Nginx look like this:
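(A simplified sketch of the shape of these rules; the addresses match the example above, and details such as proxy headers are left out.)

```nginx
# inside the server block: /app1 -> container listening on port 2000
location /app1/ {
    proxy_pass http://127.0.0.1:2000/;  # trailing slash strips the /app1/ prefix
}

# /hoarder -> container listening on port 3000
location /hoarder/ {
    proxy_pass http://127.0.0.1:3000/;
}
```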
The Hoarder app has no setting for this base path, so I'm running into various issues: it requests paths like /_next/, /providers, /dashboard and others from the root of the domain instead of under /hoarder.
I can add multiple locations to my Nginx config (see the sketch at the end of this post), but this is cumbersome and error-prone (i.e. a higher probability of conflicts between apps using the same paths). It would be much more convenient if there were a setting that lets you define the base path. Any URL generated by the app would then be relative to this base path (e.g. /hoarder/_next/..., /hoarder/dashboard/...). Thanks!
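For illustration, the kind of multi-location workaround I mean looks roughly like this (the Hoarder address and paths are the ones from above; a real config would need more, e.g. proxy headers):

```nginx
# Workaround: also route the app's root-level paths to the Hoarder container.
# These locations live at the root of the domain, so any other app that uses
# the same paths would conflict with them.
location /_next/ {
    proxy_pass http://127.0.0.1:3000;
}
location /providers {
    proxy_pass http://127.0.0.1:3000;
}
location /dashboard {
    proxy_pass http://127.0.0.1:3000;
}
```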