CraziFuzzy opened this issue 7 years ago
This does bring up another issue I've been meaning to look into. I think it might be wise to include the docker image in the actual project, so it is generated much like the .deb, .rpm, etc. That way, the actual build would be IN the image, there would be no install-on-start step, and the user would simply pull an updated image from Docker when an update is available.
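As a rough sketch of what "build baked IN the image" could look like (the base image, paths, and entrypoint here are all illustrative, not the actual OpenDCT layout), the Dockerfile would COPY the build output in at image build time instead of downloading at launch:

```dockerfile
# Hypothetical sketch: the build output produced by the normal project build
# is copied in when the image is built, so launching the container needs
# no internet connection at all.
FROM ubuntu:16.04
# Build output from the gradle build (path is made up for illustration)
COPY build/distributions/opendct /opt/opendct
# Only configuration and logs live on persistent storage
VOLUME /etc/opendct /var/log/opendct
ENTRYPOINT ["/opt/opendct/run"]
```

Updating the app then just means pulling a newer image tag, which is the normal Docker workflow anyway.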
This of course depends on there being a Docker plugin for Gradle to handle building the image.
Looks like there IS a plugin in the wild: https://github.com/Transmode/gradle-docker I just don't know enough about gradle (anything, really) to make sense of it.
The other issue with the docker image not including the build is that it forces us to put the application (/opt/opendct) in appdata space, which shouldn't really be necessary, just to persist the install in case there is no internet when the container is launched (this happened to me a few times, where OpenDCT would not launch because there was no internet to install it into the container's ramdisk). Getting the version IN the image just makes the entire container that much simpler, and it doesn't make it any harder for the end user to update versions.
I believe you end up needing an internet connection regardless of how you launch the container. This is a complaint I hear about Docker sometimes (but I think it's specific to unRAID). For reference, there are tons of containers that work the way you have OpenDCT working right now. Take a look at all of the ones that pull the latest git commit on every launch like SickRage for example. The install doesn't disappear when you restart the container. It only disappears if you recreate the container or make changes to the container.
The first few images I made were with the install baked in. Worked great without internet connectivity. It was only when I changed it to get the newest version on launch that it would hang up (because it couldn't download). For me, the issues came after a power outage: my server would often be up before the cable modem synced, so the container would launch and then just quit out. Changing the image to write to a share made it better, because then the install was still there and it could just ignore the failed install, but that's still a kludge at best.
The only Docker containers I have that run their binaries outside the image are SageTV and OpenDCT. Both of which, I believe, would benefit from pairing the docker image with the build.
Here are just a few examples from a popular repo:
https://github.com/linuxserver/docker-sickrage/blob/master/root/etc/cont-init.d/30-install
https://github.com/linuxserver/docker-plex/blob/master/root/etc/cont-init.d/50-plex-update
https://github.com/linuxserver/docker-couchpotato/blob/master/root/etc/cont-init.d/30-install
https://github.com/linuxserver/docker-radarr/blob/master/Dockerfile
What do they all have in common? They all download their binary fresh from the source and install it within the container, then redirect configuration to persistent storage. The way you have OpenDCT packaged today, it's doing the same thing. The actual executable binary is being installed inside of the container, and the configuration and logging are being placed on persistent storage. I do not see a difference. The scripting in the OpenDCT container should be updated to just go with the current install when the internet is not available.
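The "just go with the current install" behavior could look something like the sketch below. The function name, paths, and the idea of passing the download step in as a command are all made up for illustration; the point is only the fallback logic: try to update, and if the download fails but a previous install exists, launch that instead of quitting.

```shell
#!/bin/sh
# Hypothetical init-script sketch: tolerate a missing internet connection
# by falling back to whatever install is already present.

start_opendct() {
    install_dir="$1"   # e.g. /opt/opendct (illustrative)
    download_cmd="$2"  # whatever command fetches/unpacks the latest build

    if $download_cmd "$install_dir"; then
        # Download worked: we are now on the latest build.
        echo "updated to latest build"
    elif [ -d "$install_dir" ]; then
        # No internet, but a previous install is on persistent storage:
        # ignore the failed update and start what we already have.
        echo "offline, starting existing install"
    else
        # First launch with no network: nothing we can do.
        echo "no existing install and no network" >&2
        return 1
    fi
}
```

This keeps the container from quitting out after a power outage where the server comes up before the modem syncs, which is exactly the failure mode described above.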
SageTV is still an outlier because its configuration prevents us from even being able to do clever symlinks for specific files; it breaks them as it saves new data. Also, plugins are fairly unstructured, which is its own problem that I want to address eventually. Until someone fixes these problems, SageTV will be a special case.
Having the image is a convenience, and I will take a closer look at it eventually, but part of the reason I have not done anything in this direction yet is that I would like to still be able to update all platforms from one OS, without needing to sync between Windows and Linux for every update.
Could the original problem (opendct.properties being overwritten on install) be addressed by marking opendct.properties as a config file when building the rpm/deb? That way, an rpm/deb upgrade of the package would only overwrite opendct.properties if it has not been modified.
Nevermind, it looks like it is already being marked as a config file!
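For reference, this is roughly what the marking looks like in the two packagers (the path here is illustrative; the real install location may differ):

```
# RPM spec, %files section: keep the user's edited copy across upgrades
%config(noreplace) /etc/opendct/conf/opendct.properties

# Debian package: list the same path in DEBIAN/conffiles
/etc/opendct/conf/opendct.properties
```

With `%config(noreplace)`, rpm writes the new version as `.rpmnew` instead of clobbering a modified file; dpkg prompts (or keeps the old file, depending on options) for anything listed in conffiles.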
I also changed the Linux installer to lay down opendct.properties.default (instead of opendct.properties) for anyone who wants something to work with before actually running OpenDCT. opendct.properties has always been fully buildable just by running OpenDCT; the out-of-the-box properties file is just a requested convenience.
Need to ensure opendct.properties is not included in the install, so it is retained on reinstalls.
If defaults are wanted in file form, instead of just in the code (SageTV simply sets defaults in code the first time a property is accessed), then it needs to be a different file that is copied to opendct.properties on launch if the .properties file is missing, and ignored if the .properties file is already there.
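That copy-if-missing step is small enough to sketch directly. The function name and config directory below are made up; the behavior is exactly what's described above: seed from the .default file once, and never touch an existing opendct.properties.

```shell
#!/bin/sh
# Hypothetical launch-time sketch: seed opendct.properties from the
# shipped defaults only when the user does not already have one.

seed_properties() {
    conf_dir="$1"   # e.g. /etc/opendct/conf (illustrative)
    if [ ! -f "$conf_dir/opendct.properties" ]; then
        cp "$conf_dir/opendct.properties.default" "$conf_dir/opendct.properties"
        echo "seeded defaults"
    else
        # User already has a properties file: leave it alone.
        echo "kept existing properties"
    fi
}
```

Because the .default file is the only one laid down by the installer, reinstalls and upgrades can overwrite it freely without ever clobbering the user's live opendct.properties.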