nalipaz closed this 7 years ago
For those who need a solution now, you can use my image at https://quay.io/repository/nalipaz/unison
Dockerfile:

```dockerfile
FROM quay.io/nalipaz/unison
```

Vagrantfile:

```ruby
# ...
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "unison" do |unison|
    unison.vm.provider "docker" do |d|
      d.image = "quay.io/nalipaz/unison"
      d.env = {
        "UNISON_VERSION" => "2.48.3",
      }
    end
  end
end
# ...
```
@nalipaz Thanks for finding a solution to this, this is great. I think what we can do is make an environment variable `OCAML_VERSION`, in much the same way that the Dockerfile supports a `UNISON_VERSION`. Then depending on the `OCAML_VERSION` we run a different apt-get. The easiest way to do this would be to move the giant `RUN` that contains all the `apt-get`s into a `dependencies_install.sh`, which can then run different `apt-get install`s depending on which of the two versions of OCaml is selected, and error if another version is there.
If you're up for doing this that'd be great, otherwise if not let me know and I can do it. What do you think?
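The dispatch described above might be sketched roughly like this; this is a hypothetical `dependencies_install.sh`, and the package names are placeholders, not the repo's actual ones:

```shell
#!/bin/sh
# Hypothetical sketch of dependencies_install.sh: pick the apt packages
# for the selected OCaml version, or fail fast on anything else.

select_ocaml_packages() {
  case "$1" in
    4.01*) echo "ocaml-4.01 ocaml-native-compilers" ;;  # placeholder names
    4.02*) echo "ocaml-4.02 ocaml-native-compilers" ;;  # placeholder names
    *)
      echo "unsupported OCAML_VERSION: $1" >&2
      return 1
      ;;
  esac
}

# The real script would then run something like:
#   apt-get update && apt-get install -y $(select_ocaml_packages "$OCAML_VERSION")
select_ocaml_packages "${OCAML_VERSION:-4.02}"
```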
@leighmcculloch good idea. I will look into it sometime this week. Tied up today.
Okay, so I worked on this for a bit today. However, I realized the plan you outlined unfortunately won't really work. The reason is that the images are stored statically on the server, so these apt-get commands get run on the build server, and a run-time environment variable will not have any effect on a command that has already been executed during the build.
So the way I see it, we have a couple of options here:

1. `FROM leighmcculloch/unison:ocaml-4.1` and similar, then use a build script to create both versions 4.1 and 4.2. I do this in one of my repos by using Travis CI and then pushing the images to hub.docker.com and quay.io.

I feel that solution 1 is the best overall solution and what most repos are doing these days.
As to the changes I have committed, I have added the following:

- A `/container` directory and a single `COPY container /` instruction. This simplifies all the `COPY` commands that were there and makes further additions of files or other changes simpler, in my opinion.
- The scripts now live in `/usr/local/bin/` so that they can be called without a path.
- An `ENV OCAML_MINOR_VERSION=4.02` default variable.
- A `dependencies-install.sh` file where we run all the OCaml and other dependency installation commands.

In my testing, the above all works, but you can still only get the version that uses ocaml-4.2, since the image is being built on the server and not the local machine, so it isn't a final solution for sure. However, this does lay the groundwork for getting a final solution in place. Again, I suggest option 1 from my suggestions. If you would like me to throw together a `.travis.yml` for this, I could add it to this PR.
Okay, I just thought of a fourth option:

4. Use a loop during the compile that installs OCaml 4.02 and then compiles Unison, which is what is already done. Then purge OCaml 4.02, install OCaml 4.01, and compile Unison again, naming each of the resulting binaries appropriately. Then we use the runtime environment variable to direct the command to the right binary. It seems like the image might be getting a little crazy with a total of four compiled binaries in it, but it would certainly work.
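Option 4's runtime dispatch might look something like the following sketch; the binary naming scheme and install path are hypothetical, not necessarily what the commits use:

```shell
#!/bin/sh
# Hypothetical runtime dispatch for option 4: several precompiled
# binaries live side by side, and env vars pick one at run time.

unison_binary() {
  # e.g. /usr/local/bin/unison-2.48.3-ocaml-4.02 -- naming is illustrative
  echo "/usr/local/bin/unison-${UNISON_VERSION}-ocaml-${OCAML_VERSION}"
}

UNISON_VERSION="${UNISON_VERSION:-2.48.3}"
OCAML_VERSION="${OCAML_VERSION:-4.02}"

# The container entrypoint would then do:
#   exec "$(unison_binary)" "$@"
unison_binary
```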
These last commits do option 4.
I tagged the other changes before adding those in case we don't like this idea and for historical reference.
Ah, I see what you mean. Option 1, creating a tag for each OCAML and UNISON version combination (so four tags all up), is the way to go, I think. And using a script to iterate the options, then build and push each one, makes the most sense. What do you think? Does that make the most sense to you too?
I do think that makes the most sense, and I would even extend that to the Unison versions going forward, but I'm not sure how that might work with backwards compatibility. Maybe keep the `latest` tag doing both versions and then specific versions could be in their own tags?
I'd be inclined to try not to break backwards compatibility for the default Unison version, by making `latest` point to it and the default OCAML version; backwards compatibility will then be broken only for those with custom Unison versions, but the upgrade path is simple at least. It's not ideal.
So, I will do that and get something up tonight most likely.
Okay, got through that. I tested, and builds work, with the exception of pushing to Docker Hub and the fact that there may be an existing bug in the way the 2.40 version of Unison compiles and the way this Docker image tries to use it.
Firstly, the `.travis.yml` loops through each variation of the Unison and OCaml versions to produce 4 different images, tagging each and pushing them to Docker Hub. You will need to set up a Travis CI account connected to this repo and set your Docker Hub credentials in the Travis repo settings as secure variables: `DOCKER_HUB_EMAIL`, `DOCKER_HUB_USER`, and `DOCKER_HUB_PASS`.
Next, Unison version 2.40 seems to be left without a `unison-fsmonitor` file upon completion of the compile, which means that it errors during the build. That, to me, seems like something that should be handled outside of this PR.
Lastly, some notes on the changes that I made:

- Changed `ENV UNISON_VERSION` and `ENV OCAML_VERSION` to `ARG`s so that we can pass them in using `--build-arg` during the build on Travis CI.
- Updated the `cp` commands in `unison-install.sh`, then removed the `unison-link.sh` file and its references, since we no longer need to symbolically link to the binary.

To see the builds that ran, you can look at https://travis-ci.org/nalipaz/docker-unison/builds/140716579. Note that they all failed due to missing hub.docker.com credentials. Builds 9.1 and 9.2 technically succeeded aside from the missing credentials; builds 9.3 and 9.4 failed due to the missing unison-fsmonitor file mentioned earlier.
Let me know if you have any questions.
Adding a few more commits to make tagging a little better and update the readme.
Okay, so one last update to this. I decided to go ahead with fixing the missing `unison-fsmonitor` file in Unison version 2.40 by simply not attempting to `cp` it when it isn't there. I also moved the Travis environment variable for DOCKER_REPO_USERNAME out of the `.travis.yml` and into the web interface settings so that I could test all this using my own github.com, hub.docker.com, and travis-ci.org repos. I was able to successfully build using the last push.
I plan to delete all these repos if this PR or some form of it gets integrated into this repo.
Here are the needed web settings for the travisci.org repo:
Where you would of course use your docker hub info.
Best!
@nalipaz This is fantastic, an amazing contribution! I really like how you've implemented everything. I dropped one question about the format of the tagging names, but also happy with how it is. I use TravisCI on other projects so it won't be hard for me to link up.
I'm going to look into if there's a way to keep automated builds also, mostly because it adds some trust for people who want to make sure they are getting the same image as the Dockerfile they are looking at, but I'll do that after this is merged.
Added ocaml into the tag. I thought about the possibility of doing that too, but couldn't decide on brevity or clarity...
As to automated builds vs. pushing to a basic repo on hub.docker.com: I was not able to find a way to do that other than to set up an automated-build repo, turn off automated builds, and then just push to it. It will look like builds are automatic, but because you have turned them off they won't be. The other option, which I thought about exploring at one point, was to take Travis CI out of the picture and use Docker Hub's build hooks.
The issue I find with the build hooks is that, from what I can gather, there is no way to keep the code DRY, since we would need to duplicate the Dockerfile in 4 different directories to achieve the desired results. However, you could symlink the Dockerfile into 3 additional directories, I would guess, and then just add your `hooks/build` file in each of those directories. You would then need to set up the hub.docker.com repo to have the 4 tags in the settings, with each pointing to the corresponding directory on GitHub.
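That symlink arrangement could be set up along these lines; this is an unexplored idea rather than what the PR does, and the directory names and build arguments are purely illustrative:

```shell
#!/bin/sh
# Hypothetical hub.docker.com layout: one directory per extra tag, each
# symlinking the single Dockerfile so the build stays DRY.
set -e

for dir in 2.48.3-ocaml-4.01 2.40.102-ocaml-4.01 2.40.102-ocaml-4.02; do
  mkdir -p "$dir/hooks"
  ln -sf ../Dockerfile "$dir/Dockerfile"
  # each hooks/build passes the right build args for its tag
  printf '#!/bin/sh\ndocker build --build-arg OCAML_VERSION=%s -t "$IMAGE_NAME" .\n' \
    "${dir##*-}" > "$dir/hooks/build"
  chmod +x "$dir/hooks/build"
done
```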
The inflexibility of the build process on hub.docker.com, and the fact that the build queue went down for an entire day a few weeks back, are the reasons I chose quay.io for my repo. I push my images to both quay.io and hub.docker.com and keep my build process on Travis CI. I feel (in my gut and through mere anecdotal evidence) that Travis CI and quay.io are more stable than hub.docker.com, which is why I use hub.docker.com only as a backup repository for my own projects, not as a build server or my main image repo.
@nalipaz: I want to run some testing locally with it, so I'm just looking it over for the moment, but I hope to merge this soon.
@leighmcculloch sure, no probs. I know there are a lot of changes here and this is already in production usage for many people. Best to take your time with it. Luckily HEAD isn't moving much, so there won't be much in the way of merge issues if this lingers a while. Thanks for the attention.
I have created another version of this Docker image which uses ocaml-4.01, due to an issue I am seeing on a Debian Jessie host. See http://permalink.gmane.org/gmane.network.unison.general/10650, which describes the issue well. Debian Jessie has ocaml-4.01 in its repos, so whenever I install Unison from the Debian Jessie repos it is likely compiled with that.
This PR makes Unison compile using ocaml-4.01. I don't know whether it is necessary that you merge this as is, or whether you want to create another version somehow. But the OCaml version needs to be the same when compiling on both the host and the guest.