Closed · pokle closed this issue 10 years ago
I've alleviated the problem somewhat by pointing index.docker.io's build settings to /cassandra.
But that leaves us with no way to build opscenter. I did a bit of digging, and it seems that the best way forward is to create a new GitHub repository for opscenter.
Ok, so you can give a Dockerfile path in a repo, but you can't bind two Dockerfiles (i.e. 2 different Docker images) located in the same repo?
Correct. I've now successfully built an image on index.docker.io.
And it uses the name of the github repo as the image name. So that's another reason to split up the repos.
So, since you did all the opscenter work, would you like to create a github repo for it on your github?
BTW, what's the reason for including ssh on the Cassandra image? Is it mostly for ease of playing around with Cassandra?
On Sat, Feb 22, 2014 at 8:05 PM, Nicolas Colomer notifications@github.com wrote:
Ok so you can give a Dockerfile path in a repo but you can't bind two Dockerfile (ie. 2 different docker images) located in the same repo?
And it uses the name of the github repo as the image name. So that's another reason to split up the repos.
I read in the documentation that "If you want to have more then one Dockerfile per Github repo, you will need to create more then one build, each targeting a different docker repository. Same goes with building multiple branches on the same Github repo.". Do you think this is broken?
So, since you did all the opscenter work, would you like to create a github repo for it on your github?
IMO, the 2 containers go together, so I'd prefer you host them both so that users can see they are maintained by the same person (it's all a matter of trust) :-)
BTW, what's the reason for including ssh on the Cassandra image? Is it mostly for ease of playing around with Cassandra?
The main reason is that I rely on OpsCenter agent auto-installation (gives better results than manual agent installation) and this feature needs to access Cassandra nodes via SSH to run. And as you said, it can provide a way to play with Cassandra, watch logs, and so on.
I tried setting up the opscenter build on index.docker.io again, and succeeded this time! In a few minutes, there should be a build available with the image name poklet/opscenter.
IMO, the 2 containers go together, so I'd prefer you host them both so that users can see they are maintained by the same person (it's all a matter of trust) :-)
Yes, that's a very good point. Agreed.
Regarding ssh: My concern with ssh on the image is that I wanted to build an image that was simple and tiny for people to learn from. That's also why I think documentation is important. I feel that the current Dockerfile has become too large for learners to learn from.
I guess there might be other ways to keep the Dockerfile simple. One way could be to pull out a 'cassandra-base' image that contains all the Java, ssh, and other stuff, leaving only the meat in the cassandra & opscenter containers.
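To sketch the base-image idea: the shared layers go into one Dockerfile, and each service image builds `FROM` it. This is purely illustrative — `poklet/cassandra-base` and the package list are assumptions, not an existing image:

```dockerfile
# Hypothetical cassandra-base/Dockerfile: Java, ssh, and common setup live
# here once, so the per-service Dockerfiles stay small.
FROM ubuntu:12.04
RUN apt-get update && \
    apt-get install -y openjdk-7-jre-headless openssh-server

# A service image would then start from the base, e.g. cassandra/Dockerfile:
#   FROM poklet/cassandra-base
#   RUN <install and configure cassandra only>
```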
I'm happy to leave it the way it is right now, and perhaps we'll have some more insights about making it simpler later.
One thing that I want to focus on is writing some automated tests to ensure that we can continue making changes without worrying that we've broken something.
I tried setting up the opscenter build on index.docker.io again, and succeeded this time! In a few minutes, there should be a build available with the image name poklet/opscenter.
Good news!
Regarding ssh: My concern with ssh on the image is that I wanted to build an image that was simple and tiny for people to learn from. That's also why I think documentation is important. I feel that the current Dockerfile has become too large for learners to learn from.
Actually, I think the Dockerfile can be explained step by step quite easily, as is done in the Docker documentation. Nonetheless, the idea of breaking it down into 2 containers seems fine too :)
One thing that I want to focus on is writing some automated tests to ensure that we can continue making changes without worrying that we've broken something.
Do you already have an idea of how to achieve this?
Regarding tests - For some of the other containers I've built, I've written simple tests using just bash. But that gets very annoying quickly. What would you use? Something like rspec? But I don't feel like pulling in a whole world of Ruby pain into the project :-)
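The plain-bash approach might look like this minimal sketch (not this project's actual tests; in practice the command under test would be a `docker run` / `docker logs` invocation, and `echo` stands in here so the script is self-contained):

```shell
#!/bin/bash
# Minimal bash test harness: one assertion helper plus an example check.
fails=0
assert_contains() {   # assert_contains <haystack> <needle> <label>
  case "$1" in
    *"$2"*) echo "PASS: $3" ;;
    *)      echo "FAIL: $3 (expected output containing '$2')"
            fails=$((fails+1)) ;;
  esac
}

out=$(echo "Cassandra is up")   # e.g. out=$(docker logs cassandra-test)
assert_contains "$out" "Cassandra" "node reports startup"

[ "$fails" -eq 0 ] && echo "all tests passed"
```

This works, but as noted above it gets tedious as the helper set grows — which is where a dedicated shell-testing tool earns its keep.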
But I don't feel like pulling in a whole world of Ruby pain into the project :-)
Well, I understand ;)
I discovered bats some time ago, which seems to be a good solution for shell unit tests. Although I haven't tried it yet, it looks simple and sufficient for tests like these, I think.
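For reference, a bats test file looks roughly like this. It's a hypothetical sketch — it needs bats and docker installed, and the container name is made up; only the image name `poklet/cassandra` comes from this thread:

```bash
#!/usr/bin/env bats
# Hypothetical smoke test for the cassandra image (requires bats + docker).

setup() {
  docker run -d --name cass-under-test poklet/cassandra
}

teardown() {
  docker rm -f cass-under-test
}

@test "container stays up after start" {
  run docker inspect -f '{{.State.Running}}' cass-under-test
  [ "$status" -eq 0 ]
  [ "$output" = "true" ]
}
```

bats wraps each `@test` block in its own setup/teardown, and `run` captures the command's exit code and output into `$status` and `$output`, which keeps the assertions readable compared to raw bash.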
Thanks, I'll try bats out :-)
The work to split the repo into two docker images has broken the build at index.docker.io.