Hi,
I have made some tests using the first production-ready version of DSHM. With a 2-node installation (2 OpenResty + 2 DSHM) on two 2 vCPU / 4 GB RAM virtual machines (DSHM configured with a 512 MB heap), I ran a 1000 requests/s scenario with OIDC session caching. The sizing of the DSHM heap depends on your webapp session size (with huge sessions you need more heap).
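As a rough illustration, here is a sketch of container resource settings consistent with a 512 MB heap; the exact numbers are assumptions, not measured requirements, and must be tuned to your session sizes:

```yaml
# Hedged sketch: resource requests/limits for a DSHM pod with a 512 MB heap.
# These values are illustrative assumptions, not measured requirements.
resources:
  requests:
    memory: "768Mi"
    cpu: "500m"
  limits:
    memory: "1Gi"
    cpu: "2"
```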
Concerning readiness and liveness, you can use a curl/telnet or netcat command to check that the port is open (when the port is open, DSHM is ready), connect to the port and send a QUIT command, then expect the connection to be closed. In this case the server is responding, and is alive and ready.
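As a concrete illustration, a minimal sketch of Kubernetes probes built on that TCP check; the port (4321) and the delay/period values are assumptions to adjust to your own DSHM configuration:

```yaml
# Hedged sketch: TCP-based probes for a DSHM container.
# Assumptions: DSHM listens on port 4321; delays/periods are illustrative.
readinessProbe:
  tcpSocket:
    port: 4321
  initialDelaySeconds: 10
  periodSeconds: 5
livenessProbe:
  tcpSocket:
    port: 4321
  initialDelaySeconds: 20
  periodSeconds: 10
```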
Regards,
Thanks a lot for your detailed answer. For the probes, I have enabled the Hazelcast REST API:
hazelcast:
  cluster-name: mycluster
  network:
    rest-api:
      enabled: true
I used an HTTP GET probe on the path /hazelcast/health/node-state, and it seems to be working. Do you see any reason not to use this health check method?
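For reference, a minimal sketch of such a probe in a Kubernetes container spec; the Hazelcast port (default 5701) and the timing values are assumptions to adjust:

```yaml
# Hedged sketch: HTTP probes against the Hazelcast health endpoint.
# Assumptions: Hazelcast listens on its default port 5701; timings are illustrative.
readinessProbe:
  httpGet:
    path: /hazelcast/health/node-state
    port: 5701
  initialDelaySeconds: 10
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /hazelcast/health/node-state
    port: 5701
  initialDelaySeconds: 30
  periodSeconds: 10
```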
Another point which troubles me and triggers my security antenna: the Dockerfile clones a different repository than this one (RUN git clone https://github.com/revomatico/ngx-distributed-shm).
Can you explain why, and let me know which repo is the "right" one? (I can propose a PR to change it if you want.)
Your solution seems good.
I released the 1.0.4 version with an official Docker build. You can pull the official image from quay.io:
docker pull quay.io/grrolland/ngx-distributed-shm:1.0.4
The Dockerfile is fixed.
See PR #8.
If that's OK for you, I will close the issue.
Indeed, you can close this issue. I have worked on this topic a little more, and we had to build our own Docker image because we rely on a curated base image for Java apps. I liked the fact that a single docker build command did both the Maven build and the Docker image creation; apparently you have removed that, but it is OK as I have my own now. I have also removed the STOPSIGNAL SIGRTMIN+3 directive from the Dockerfile, which gave me a headache by preventing the SIGTERM signal from reaching the app when a Kubernetes pod is being stopped. I needed that to enable graceful shutdown on Hazelcast:
- -Dhazelcast.shutdownhook.policy=GRACEFUL
- -Dhazelcast.graceful.shutdown.max.wait=20
These are, in my opinion, required whenever you need to update a StatefulSet in Kubernetes (see this blog). There are also some Java options required in order to avoid error messages at startup:
JDK_JAVA_OPTIONS="--add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED"
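For illustration, a minimal sketch of how these options could be wired into a Kubernetes pod spec; the image tag, passing all flags through JDK_JAVA_OPTIONS, and the terminationGracePeriodSeconds value (chosen slightly above hazelcast.graceful.shutdown.max.wait=20) are assumptions, not taken from this repository's manifests:

```yaml
# Hedged sketch: wiring the shutdown and JVM options into a pod spec.
# Image tag, env-based option passing and the 25s grace period are
# illustrative assumptions.
spec:
  terminationGracePeriodSeconds: 25
  containers:
    - name: dshm
      image: quay.io/grrolland/ngx-distributed-shm:1.0.4
      env:
        - name: JDK_JAVA_OPTIONS
          value: >-
            -Dhazelcast.shutdownhook.policy=GRACEFUL
            -Dhazelcast.graceful.shutdown.max.wait=20
            --add-modules java.se
            --add-exports java.base/jdk.internal.ref=ALL-UNNAMED
            --add-opens java.base/java.lang=ALL-UNNAMED
            --add-opens java.base/java.nio=ALL-UNNAMED
            --add-opens java.base/sun.nio.ch=ALL-UNNAMED
            --add-opens java.management/sun.management=ALL-UNNAMED
            --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED
```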
Sorry, but you did not enable GitHub Discussions, so I keep using the issue.
Do you intend to create a new release? That is quite important, so that we don't have to rely on master (which is a moving target). On the CI part, I would suggest you stay within GitHub and use GitHub Actions as well as their container registry.
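To make that suggestion concrete, here is a hedged sketch of such a workflow, assuming a tag-triggered build pushed to the GitHub Container Registry; the workflow name, trigger and action versions are illustrative, not this repository's actual pipeline:

```yaml
# Hedged sketch: build and push the Docker image to ghcr.io on tag pushes.
# Workflow name, trigger and action versions are assumptions.
name: docker-release
on:
  push:
    tags: ['*']
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v2
      - uses: docker/setup-buildx-action@v1
      - uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.ref_name }}
```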
I'll take your advice and refactor the build system early next week. I'll integrate the Hazelcast and Java options. Thanks for the feedback!
Hi, I've taken some time to rework the build system. I have added the Hazelcast and Java options. I use GitHub Actions to release: the artifacts are published to GitHub Packages, and the Docker image to the GitHub Container Registry. The quay.io registry is still populated with the same versions/tags. I've opened GitHub Discussions. Feel free to use it.
I now close this issue. Regards.
Hello, we are thinking of using DSHM with OpenResty in order to cache our web app sessions across multiple instances in a Kubernetes environment. You provide some Kubernetes manifest examples, which is nice, but in terms of production readiness I was wondering if you could provide some guidance on the RAM requirements for the container (requests and limits). I would also like to ask whether it is possible to set up some kind of readiness and liveness probe as well?