Deployment: Kubernetes cluster (with ZooKeeper) using druid-operator
Metadata store: MySQL
Deep storage: multi-attach PVC
Druid version: 30.0.0
I am running a cluster based on tiny-cluster.yaml, with the addition of 3 MiddleManager nodes. When I run batch ingestion through the tasks API, segments are not being created or published, even though the task reports success. The deep storage PVC is empty, the Historical load/drop queue is also empty, and I don't see any errors in the logs.
I am able to create directories on the PVC manually, so I don't think the PVC itself is the problem; my deep storage settings are pasted below. What am I doing wrong here?
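The deep storage section of my common.runtime.properties is essentially the tiny-cluster defaults, with the PVC mounted at the storage directory (the paths below are illustrative, not a copy of my exact file):

```properties
# Deep storage on the shared multi-attach PVC (mount path is illustrative)
druid.storage.type=local
druid.storage.storageDirectory=/druid/deepstorage

# Task logs written to the same volume
druid.indexer.logs.type=file
druid.indexer.logs.directory=/druid/data/indexing-logs
```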
Also, I am ingesting JSON only, submitted via a Python library. Are there any guidelines on the correct way to format the data to pass into the ingestion spec? A simplified sketch of what I'm doing is below.
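This is roughly the submission path (a minimal sketch only: the router URL, datasource name, and columns are placeholders, not my real spec):

```python
# Minimal sketch of submitting a native batch (index_parallel) task to the
# router, which proxies /druid/indexer/v1/task to the Overlord.
# URL, datasource, and columns are illustrative placeholders.
import json
import requests

rows = [
    {"time": "2024-01-01T00:00:00Z", "channel": "#en", "added": 17},
    {"time": "2024-01-01T00:01:00Z", "channel": "#fr", "added": 3},
]

task_spec = {
    "type": "index_parallel",
    "spec": {
        "dataSchema": {
            "dataSource": "example_datasource",
            "timestampSpec": {"column": "time", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["channel", "added"]},
            "granularitySpec": {
                "segmentGranularity": "day",
                "queryGranularity": "none",
                "rollup": False,
            },
        },
        "ioConfig": {
            "type": "index_parallel",
            # Inline input source: one JSON object per line (newline-delimited JSON).
            "inputSource": {"type": "inline", "data": "\n".join(json.dumps(r) for r in rows)},
            "inputFormat": {"type": "json"},
        },
        "tuningConfig": {"type": "index_parallel"},
    },
}

resp = requests.post("http://druid-router:8888/druid/indexer/v1/task", json=task_spec)
print(resp.status_code, resp.json())  # expect {"task": "<task id>"} on success
```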
In https://github.com/druid-io/druid-operator/issues/277, it is suggested to use HDFS instead of local deep storage for a clustered environment. Is that still the case?