Open zicklag opened 5 years ago
Hi @zicklag! I recently started rewriting it in Go on this fork: https://github.com/lizardfs/lizardfs-docker-volume-plugin. I still haven't managed to pass all the tests, but it's a start. Some reviewing would be highly appreciated :-)
Hey @jadolg, that looks great! I'm not really experienced with Go, but it looks like you captured the intent of the JavaScript code correctly in your port.
Also, a note: I'm not 100% sure that the tests were passing on the JavaScript version of the plugin. I can't remember whether some of them were failing the last time I checked. The tests could probably use some refactoring. They were our rough first attempt at Docker plugin testing, and I'm not sure whether some of the tests would only pass sometimes due to timing issues, or whether we worked all of that out.
So maybe don't depend on the tests until you verify they are doing their job. :)
This was also my first attempt at building a plugin for Docker. I started it as an experiment to see if I could replicate the behavior of your plugin step by step, basically to learn how Docker does it, and it got so interesting that I ended up coding almost everything :') Then I aimed to pass all the tests, so I implemented the root_volume capability. Now I'm struggling a little bit to pass the timeout tests, and I'm not sure why they're failing just yet. I'll put some time into it today and try to add CI/CD to the project.
Ah, yes, the timeout test would be the most likely one not to be testing right. I can't remember if I got the time measurement right in the test code. You should be able to test that one manually by setting the master server setting to a dummy value and measuring how long the plugin hangs when trying to do things like a `docker volume ls`. ( If I remember right, that is; it's been a while since I actually used this :smiley: )
Obviously the test should be fixed if it is broken so that you don't have to manually test it, though.
I'm not sure if it's related, but I remember reporting that each container using the plugin seems to spawn a mount process, and when the container is restarted for whatever reason, the old mount process stays active in the system. If something is really wrong and the container keeps restarting, it can lead to too many processes, etc.
That's definitely a problem, but I don't think that would cause that particular test to fail. That will be something to look out for in your fork, though, @jadolg. See #9. I'm not sure if your plugin has inherited that issue.
For now, I just made a copy of what was already working and I'm going to test this also.
@zicklag it has. lfsmount is marked as defunct when a container exits, whether or not the container failed. I'm searching for a proper way to get rid of the zombies.
This is a tracking issue for rewriting the plugin in Rust. Not sure when/if we'll get time to do it, but if we start using it more, it's something we will want to do.