Open · danwwilson opened this issue 4 years ago
Interesting. Why the need to start and stop the containers in the first place, though? Leaving the server up doesn't take many resources. (In my Caddyfile I'll often have multiple RStudio containers on different subdomains of the same server, e.g. https://rstudio.example.com, https://rstudio2.example.com, etc.) Launching containers on demand sounds like something you might set up with a Kubernetes system, but I haven't looked into it.
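For what it's worth, the relevant part of that Caddyfile is just one `reverse_proxy` block per subdomain, roughly like this (Caddy v2 syntax; `rstudio1`/`rstudio2` are placeholder container names, and this assumes Caddy shares a Docker network with them):

```
rstudio.example.com {
    # proxy to an RStudio container reachable as "rstudio1" on the shared network
    reverse_proxy rstudio1:8787
}

rstudio2.example.com {
    reverse_proxy rstudio2:8787
}
```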
It's probably not a particularly common use case. I typically launch a container for an analysis job and then kill it once the project is over. Sometimes I'll only need a single container active, sometimes I'll need 10 containers active.
At the moment I just have some commands in a text file that pass in some volumes, a name, a password, etc., and I copy and paste them into a terminal to launch the container. I was just hoping to find a way that might reduce the friction slightly.
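For context, the commands look roughly like this (the container name, host path, and password are placeholders):

```sh
# Launch one RStudio container for a project; name, volume, and password vary per job.
docker run -d \
  --name my-analysis \
  -e PASSWORD=changeme \
  -p 8787:8787 \
  -v "$HOME/projects/my-analysis":/home/rstudio/my-analysis \
  rocker/rstudio
```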
Feel free to close the issue if it's beyond the reasonable scope of the rocker project.
I don’t know if this is possible or not, but I’ve seen your guidance on using Caddy as a proxy server and am curious whether it could be extended to launch a container when navigating to an endpoint.
For example, if I navigate to
rstudio.example.com/8787
it would launch a container (maybe from a Docker Compose file) exposing the RStudio instance on port 8787, but keeping the URL nice. If I then went to rstudio.example.com/8788
it would launch a new container on port 8788, again keeping the nice URL. Both containers would be available until such time as I stop them. The stopping of containers I’d be happy to manage via the command line, but I’m looking for a nice easy way to launch them.
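To make the intent concrete, here is a rough sketch of what I imagine, assuming a Compose file with one pre-declared service per external port (service names, password, and ports are just placeholders). Each one could be started with e.g. `docker compose up -d rstudio-8788`; the missing piece is something that runs that command when the URL is first hit, since Caddy itself only proxies and doesn't start containers:

```yaml
# Hypothetical docker-compose.yml: one RStudio service per external port.
# Caddy would proxy rstudio.example.com/8787 and /8788 to these.
services:
  rstudio-8787:
    image: rocker/rstudio
    environment:
      - PASSWORD=changeme
    ports:
      - "8787:8787"
  rstudio-8788:
    image: rocker/rstudio
    environment:
      - PASSWORD=changeme
    ports:
      - "8788:8787"
```

On the Caddy side I'd guess path-based blocks (e.g. `handle_path /8787/* { reverse_proxy localhost:8787 }`) could keep the URLs tidy, though I'm not sure how happy RStudio Server is behind a sub-path.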
Thanks for all the great work on the images, they’ve already proven super useful so far.