canonical / kafka-bundle


Kafka is stuck on allocating in AWS deployments #81

Open kaskavel opened 6 months ago

kaskavel commented 6 months ago

The Solutions QA team has a failed run where the kafka charm is stuck in allocating until the deployment times out after 4 hours.

Failed run: https://solutions.qa.canonical.com/testruns/743079df-d98a-4ab0-8d10-8e7a269682a0
Jenkins console: https://oil-jenkins.canonical.com/job/fce_build_layer/14815/console

Steps to reproduce

  1. Bootstrap a Juju 3 controller on AWS.
  2. Use the controller to deploy this bundle (see the command sketch after this list).
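
A minimal command-level sketch of these steps, assuming AWS credentials are already added to the Juju client; the region, model name and bundle path are placeholders:

```
# Reproduction sketch; region, model name and bundle file name are assumptions.
juju bootstrap aws/us-east-1 aws-controller   # Juju 3 controller on AWS
juju add-model kafka-test
juju deploy ./kafka-bundle.yaml               # placeholder path for this bundle
juju status --watch 5s                        # kafka units stay in "allocating"
```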

Expected behavior

The charms in the bundle deploy successfully.

Actual behavior

(screenshot attached in the original issue: the kafka units remain in allocating)

Versions

Operating system: Jammy

Juju: 3.3/stable

Juju agent:

Charm revision:

LXD:

Log output

Juju debug log:

Logs: https://oil-jenkins.canonical.com/artifacts/743079df-d98a-4ab0-8d10-8e7a269682a0/index.html
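
The full run artifacts are behind the link above. Against a live model, the relevant portion of the Juju debug log could be pulled with something like the following; the unit selection is an assumption:

```
# Replay the model's log, limited to the kafka units shown in the status above.
juju debug-log --replay --include kafka/0 --include kafka/1 --include kafka/2
```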

Additional context

The problem probably lies in the available storage:

```
Storage Unit  Storage ID  Type        Mountpoint  Size  Status   Message
kafka/0       data/0      filesystem                    pending  filesystem is not big enough (7755M < 10240M)
kafka/1       data/1      filesystem                    pending  filesystem is not big enough (7755M < 10240M)
kafka/2       data/2      filesystem                    pending  filesystem is not big enough (7755M < 10240M)
```
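
The same pending state can be inspected per storage instance; a short sketch using the storage IDs from the table above:

```
juju storage              # lists all storage instances with status and message
juju show-storage data/0  # detailed view of the filesystem stuck in "pending" on kafka/0
```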

We have tried reducing the required storage via an overlay:

```yaml
applications:
  kafka:
    storage:
      data:
        minimum-size: 1G
        multiple:
          range: 1-
```
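
The overlay was presumably applied at deploy time with something along these lines (file names are placeholders):

```
juju deploy ./kafka-bundle.yaml --overlay reduce-storage.yaml
```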

But we are still hitting the issue.

github-actions[bot] commented 6 months ago

https://warthogs.atlassian.net/browse/DPE-3686