@HoustonPutman
The same issue is happening to me.
The HPA mentioned it would scale to 14 pods, but 58 kept running.
For me the leader election was successful, but I could see a lot of down replicas, which is causing query issues and errors like "shards are down".
Ok, so y'alls issues seem somewhat related.
I have seen problems with Solr failing to delete bad replicas during an unsuccessful migration, and that's why you are seeing such a large increase in the number of replicas.
> So I suspect something is wrong with the scale-down/up/migration of the shards. Every pod gets restarted during the scale-down...
This is definitely a problem, and it is related to the fact that you are addressing your Solr nodes through the ingress. So that Solr traffic is not all directed through the ingress (which would slow things down considerably), we essentially use /etc/hosts on the pods to map each ingress address to the IP of the pod it points to. And since you are scaling down, some of the /etc/hosts entries are being removed, thus requiring full restarts every time.
An easy solution would be to only update /etc/hosts when an IP is changed or added. It doesn't really matter if unused entries are left there.
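A rough sketch of that idea (a hypothetical helper, not the operator's actual code): merge the desired ingress-to-IP mappings into the existing set without ever removing entries, and only report a change when something was added or updated, so an unchanged result can skip the pod update entirely.

```go
package hosts

// MergeHostAliases merges the desired ingress-hostname -> pod-IP mappings
// into the existing set without ever removing entries. It reports whether
// anything was actually added or updated, so callers can skip the pod
// update (and the resulting rolling restart) when nothing changed.
func MergeHostAliases(existing, desired map[string]string) (map[string]string, bool) {
	merged := make(map[string]string, len(existing)+len(desired))
	for host, ip := range existing {
		merged[host] = ip
	}

	changed := false
	for host, ip := range desired {
		if current, ok := merged[host]; !ok || current != ip {
			merged[host] = ip // new host or changed IP: an update is required
			changed = true
		}
	}
	// Entries only present in `existing` (e.g. for scaled-down pods) are
	// kept; stale /etc/hosts lines are harmless, as noted above.
	return merged, changed
}
```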
Anyways, we should definitely have an integration test that stresses the HPA with ingresses, because this seems like a very iffy edge case.
> The same issue is happening to me
@sabaribose I think this is separate, because you are not using an ingress, but using the headless service.
I think yours comes from the BalanceReplicas command not queueing a retry when it fails. But I will do more investigation here.
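If that turns out to be the cause, the fix could follow the standard controller-runtime retry pattern, roughly sketched below (`balanceReplicas` is a stand-in for the operator's actual call, not its real API):

```go
package controllers

import (
	"context"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// balanceReplicas stands in for the operator's call to Solr's
// BalanceReplicas command; assume it returns an error on failure.
func balanceReplicas(ctx context.Context) error { return nil }

// reconcileBalance retries a failed balance instead of dropping it: the
// returned Result requeues the reconcile after a delay, which gives a
// fixed retry interval rather than the exponential backoff that
// returning the error would trigger.
func reconcileBalance(ctx context.Context) (ctrl.Result, error) {
	if err := balanceReplicas(ctx); err != nil {
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}
	return ctrl.Result{}, nil
}
```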
I installed the Solr Operator 0.8.0 with Solr image 9.4.1 on AKS, using this video as a guideline: Rethinking Autoscaling for Apache Solr using Kubernetes - Berlin Buzzwords 2023.
The setup uses persistent disks.
I created 2 indexes and put some data in them:
- index test: 3 shards and 2 replicas
- index test2: 6 shards and 2 replicas
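(For reference, layouts like these would come from standard Collections API CREATE calls, roughly as below; the base URL is illustrative, and the parameters are the usual numShards/replicationFactor:)

```go
package main

import (
	"fmt"
	"net/http"
)

// Create the two test collections via Solr's Collections API.
func main() {
	base := "http://localhost:8983/solr/admin/collections"
	for _, q := range []string{
		"?action=CREATE&name=test&numShards=3&replicationFactor=2",
		"?action=CREATE&name=test2&numShards=6&replicationFactor=2",
	} {
		resp, err := http.Get(base + q)
		if err != nil {
			fmt.Println("create failed:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println(q, "->", resp.Status)
	}
}
```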
I configured an HPA and stressed the cluster a bit to make sure the cluster would scale up from 5 to 11 nodes. Scaling up went fine. Shards for the 2 indexes got moved to the new nodes.
During scaling down, however, some shards get a lot of "down" replicas:
![Down_shards](https://github.com/apache/solr-operator/assets/47669630/dad82a69-ab65-4343-8c1f-2f5d99133a49)
The HPA mentioned it would scale down to 5 pods, but 6 kept running.
The logs are of course revealing: in the Overseer, there are still items in the work queue.
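(To confirm that, the queue can be inspected directly in ZooKeeper; a sketch with the go-zookeeper client, assuming the standard /overseer/queue path and a reachable ZK address:)

```go
package main

import (
	"fmt"
	"time"

	"github.com/go-zookeeper/zk"
)

// List any items still sitting in the Overseer's work queue. Adjust the
// ZooKeeper address (and any chroot) to match your cluster.
func main() {
	conn, _, err := zk.Connect([]string{"localhost:2181"}, 10*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	items, _, err := conn.Children("/overseer/queue")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d item(s) still queued: %v\n", len(items), items)
}
```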
On the disks for the given shards, I can see the shard folders. They all seemed empty, though.
So I suspect something is wrong with the scale-down/up/migration of the shards. Every pod gets restarted during the scale-down...
What could be causing the number of down shards to be so huge?
PS: I did the same test on a kind cluster, with the same results.