Closed: kapyaar closed this issue 1 week ago
The issue may not be directly related to the Bitnami container image, but rather to how the application is being utilized, configured in your specific environment, or tied to a specific scenario that is not easy to reproduce on our side.
If you think that's not the case and are interested in contributing a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.
Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.
If you have any questions about the application, customizing its content, or using the technology and infrastructure, we highly recommend that you refer to the forums and user guides provided by the project responsible for the application or technology.
With that said, we'll keep this ticket open until the stale bot automatically closes it, in case someone from the community contributes valuable insights.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Name and Version
bitnami/kafka:3.7.0
What architecture are you using?
amd64
What steps will reproduce the bug?
I am trying to get a working system for handling high throughput and am testing with Docker on Windows. The setup works for manual testing, but if I use K6 to run a load test, it stops after publishing around 28,000 messages.
With the following docker-compose file [EDIT: added the section that creates the topic]:
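The original compose file was not captured in this report. The following is a hypothetical reconstruction of a single-node bitnami/kafka:3.7.0 setup in KRaft mode with a sidecar that creates the topic; all service names, the topic name, and port mappings are assumptions.

```yaml
# Hypothetical sketch only -- service names, topic name, and ports are assumptions.
services:
  kafka:
    image: bitnami/kafka:3.7.0
    ports:
      - "9092:9092"
    environment:
      # Single-node KRaft mode (no ZooKeeper)
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
  # Section that creates the topic, as mentioned in the EDIT above
  init-kafka:
    image: bitnami/kafka:3.7.0
    depends_on:
      - kafka
    entrypoint: ["/bin/bash", "-c"]
    command: >
      kafka-topics.sh --bootstrap-server kafka:9092 --create --if-not-exists
      --topic test-topic --partitions 3 --replication-factor 1
  app:
    build: .
    ports:
      - "8080:80"
    depends_on:
      - kafka
```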
And a Dockerfile using Nginx Unit (FROM unit:1.32.1-php8.2) with the opcache and rdkafka extensions enabled:
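The Dockerfile itself was not included; a minimal sketch matching that description might look like the following. The package and PECL install steps are assumptions (the unit PHP images are derived from the official PHP images, so the `docker-php-ext-enable` helper should be available), and the copy paths are illustrative.

```dockerfile
# Hypothetical sketch -- install steps and paths are assumptions.
FROM unit:1.32.1-php8.2

# rdkafka PECL extension needs librdkafka headers to build
RUN apt-get update && apt-get install -y --no-install-recommends librdkafka-dev \
    && pecl install rdkafka \
    && docker-php-ext-enable rdkafka opcache \
    && rm -rf /var/lib/apt/lists/*

# Application code and Unit configuration (paths are illustrative)
COPY src/ /www/
COPY unit-config.json /docker-entrypoint.d/
```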
And Producer.php
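The contents of Producer.php were not captured. A minimal sketch using the php-rdkafka extension is shown below; the broker address, topic name, and response handling are assumptions.

```php
<?php
// Hypothetical sketch of Producer.php -- broker address and topic name are assumptions.
// Requires the rdkafka PECL extension.
$conf = new RdKafka\Conf();
$conf->set('bootstrap.servers', 'kafka:9092');

$producer = new RdKafka\Producer($conf);
$topic = $producer->newTopic('test-topic');

// Publish the request body (~200 bytes per message in the k6 test)
$payload = file_get_contents('php://input');
$topic->produce(RD_KAFKA_PARTITION_UA, 0, $payload);
$producer->poll(0);

// Wait for delivery before returning; without flush(), messages can be lost
$result = $producer->flush(10000);
if (RD_KAFKA_RESP_ERR_NO_ERROR !== $result) {
    http_response_code(500);
    echo 'Flush failed';
} else {
    echo 'OK';
}
```

Note that constructing a new `RdKafka\Producer` per request, as sketched here, opens a fresh broker connection on every call; under a load test that can exhaust connections or ports quickly, so the original code may well differ on this point.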
For K6 load testing, I use the script.js below, generating a string that is about 200 bytes long, with the following setup.
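The script.js referenced above was not captured either. A hypothetical k6 script matching the description (POSTing a ~200-byte string to the PHP producer endpoint) could look like this; the endpoint URL, VU count, and duration are assumptions.

```javascript
// Hypothetical sketch of script.js -- URL, VUs, and duration are assumptions.
// Runs under the k6 runtime: `k6 run script.js`
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 50,
  duration: '1m',
};

// String of roughly 200 bytes, as described above
const payload = 'x'.repeat(200);

export default function () {
  const res = http.post('http://localhost:8080/producer.php', payload);
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```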
With this config, I run:
docker-compose up --build
Then, k6 run script.js
After about 28,000 publishes, it stops publishing, and I get the following error.
K6 report is shown below.
What is the expected behavior?
The expected behavior is that the Kafka server handles the producer messages without any error. Instead, it looks like some resource is being used up and not released for a while: if I wait for a minute or so and restart the test, it repeats the same behavior, again stopping around 28,000 messages published.
What do you see instead?
Additional information
I wonder if I am missing some configuration steps? I am testing this with Docker on Windows.