I'm currently using an on-demand EC2 instance in AWS to host a RabbitMQ server, and it's been working well. Then I stumbled upon this Kubernetes integration and decided to try it out. From what I've observed, regardless of how many replicas I specify when I create the instance, deleting one of the pods takes everything down, including the admin site. Is this by design? I was hoping it would behave like a regular Deployment, where the application stays up as long as at least one pod is still running.
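For context, I create the cluster with a manifest along these lines. This is only a minimal sketch assuming the RabbitMQ Cluster Operator's RabbitmqCluster resource; the name and replica count are placeholders, not my exact config:

```yaml
# Sketch of the RabbitmqCluster I'm applying (placeholder name and values)
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: my-rabbitmq
spec:
  replicas: 3   # I expected 3 replicas to tolerate losing one pod
```

My test is simply `kubectl delete pod` on one of the RabbitMQ pods, after which nothing responds, admin site included, until that pod is recreated.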