rtyshyk opened this issue 6 years ago
Hello, yeah, I'm aware of that kind of situation. To avoid it we need to find a way to check for and keep only active containers! I will try to find something for that!
On 6 March 2018 at 12:40, "Roman" notifications@github.com wrote:
Hi,
Thanks for a great tool.
I noticed that there are no checks for container state.
As a result, a container's upstream may contain two IP addresses when an upgrade is not yet approved (the old container is still alive), or the IP part may be missing from the upstream entirely when the config is generated while the new container is still starting.
The missing IP address causes an invalid nginx config, as well as a duplicate entry for the same container with the right IP address (on the next config update).
I might be mistaken somewhere, but in general I hit this issue once or twice a week; it is more or less reproducible with a heavy container whose entrypoint takes some time to start.
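Purely as an illustration of the two failure modes described above (the upstream name, IP addresses and port are made up; the exact output depends on the RAP template):

    # Stale entry: the old and the new container both end up in the upstream
    upstream myapp.example.com {
        server 10.42.1.10:8080;   # old container, still alive because the upgrade is not approved
        server 10.42.1.27:8080;   # new container
    }

    # Config generated while the new container is still starting: the IP part is missing,
    # which nginx rejects as an invalid server directive
    upstream myapp.example.com {
        server :8080;
    }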
go-rancher-gen has "state" and "health" properties for a container, so one more if statement can be added to the dynamic container configuration.
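A minimal sketch of that extra guard, written as a Go text/template condition of the kind rancher-gen uses. The field names only mirror the "state" and "health" properties mentioned above, and the surrounding range and accessors ($container, .Containers, .Address, .Port) are placeholders for whatever the existing template already uses, so treat this as an assumption rather than the actual template code:

    {{range $container := .Containers}}
    {{/* only emit an entry for containers that are running and healthy;
         an empty health value is assumed to mean "no healthcheck configured" */}}
    {{if and (eq $container.State "running") (or (eq $container.Health "healthy") (eq $container.Health ""))}}
    server {{$container.Address}}:{{$container.Port}};
    {{end}}
    {{end}}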
OK, I was not sure the property was already available in rancher-gen! It is quite easy then; I will try to push something this afternoon.
Looking at the actual code, it seems there is already a check and it should only pick up healthy and running containers... Look at line 316. I will try to look into that again (it was merged from a PR some time ago).
I just made some more tests and it seems to work as expected... While upgrading, it only shows working and healthy containers... Do you have some logs? Are you sure you are running the latest RAP version?
Sorry, my bad, I did not find anything related in the changelog for version 0.9.2, which I use. I will try to update and let you know. Actually, I use a fork of v0.9.2 with some custom LDAP and so on.
Is it a public fork? We may pull some commits from it.
It is not public.
We use it for our dev infrastructure. I added the following features (a rough sketch of the nginx side is shown after this list):
- basic auth based on LDAP (https://github.com/kvspb/nginx-auth-ldap); LDAP can be enabled or skipped for the whole RAP instance or for a specific container
- a "secret link" to bypass LDAP authentication (e.g. to share something with someone who has no LDAP account)
- bypassing LDAP basic auth when the request comes from the local Docker network
Is it interesting to you?
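To make the list above concrete, here is a rough sketch of how the nginx side of those features could look with the kvspb/nginx-auth-ldap module. The server names, LDAP URL, credentials, upstream name and network range are all placeholder assumptions, not the actual config from the fork:

    # http {} block: define the LDAP server once
    ldap_server corp_ldap {
        url "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid?sub?(objectClass=person)";
        binddn "cn=readonly,dc=example,dc=com";
        binddn_passwd "readonly-password";
        require valid_user;
    }

    server {
        listen 80;
        server_name app.example.com;

        # "secret link": an unauthenticated location for sharing with people who have no LDAP account
        location /share/ {
            proxy_pass http://app_upstream;
        }

        location / {
            satisfy any;            # pass if EITHER the allow rule OR LDAP auth succeeds
            allow 172.17.0.0/16;    # requests from the local Docker network bypass basic auth
            deny all;
            auth_ldap "Restricted";
            auth_ldap_servers corp_ldap;
            proxy_pass http://app_upstream;
        }
    }

The per-container on/off switch described in the list would then presumably come down to the template emitting or skipping the auth_ldap lines for that particular vhost.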
Yeah, the online update was merged between v0.9.2 and v0.9.3 (almost a year passed between the two) and I merged quite a few PRs! I need to find a way to auto-tag / auto-update versions!
Yeah, if it is not too critical, I would appreciate it if you opened a PR for that! I am quite sure some other people would be interested too.
I merged the latest changes into my fork; it looks OK.
Should I create the PR on GitLab or GitHub?
Gitlab.com is easier for me to merge, but if you are more used to Github.com I will move it!
Done, please have a look https://gitlab.com/adi90x/rancher-active-proxy/merge_requests/32
I guess this issue might be closed; I hope that keeping only healthy containers will help with the issue.
Closing this issue; the PR is posted on GitLab. Do not hesitate to reach out if you have any other issue!
The issue is still reproducible (from time to time).
I get exactly the same upstream + hosts configuration (the {{ define "server" }} section) twice.
The two copies even contain the same IP addresses: one container's IP address is marked $alive and the other is not (it is commented out).
I am completely out of ideas as to how this is possible :( Do you have any?
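For illustration, the generated config described here would look something like this (names, IPs and comment format are made up; the real blocks come from the {{ define "server" }} template). If both copies keep the same upstream name, nginx itself will reject the file with a duplicate-upstream error:

    upstream myapp.example.com {
        server 10.42.1.27:8080;      # IP marked alive
        # server 10.42.1.10:8080;    # stale IP, commented out
    }
    server {
        server_name myapp.example.com;
        # ... proxy settings ...
    }

    # the exact same upstream + server block appears a second time
    upstream myapp.example.com {
        server 10.42.1.27:8080;
        # server 10.42.1.10:8080;
    }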