Closed metalshanked closed 1 year ago
Hi @Haarolean, I'd like to take this one up. How can I start?
I went through the code and figured out that the limit is set in ui/controller/MessagesController.java.
The default and max limits are defined as constants at the top of the class.
The effective limit is the default when no value is supplied; otherwise it is the lower of the max limit and the user's input.
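A minimal sketch of that resolution logic (the method and constant names here are illustrative, not the exact ones from MessagesController.java; the max of 100 matches what this issue reports, the default of 20 is an assumption):

```java
public class MessageLimits {
    // Illustrative constants; only the max of 100 is confirmed by this issue.
    private static final int DEFAULT_LOAD_RECORDS_LIMIT = 20;
    private static final int MAX_LOAD_RECORDS_LIMIT = 100;

    // Null -> default; otherwise the lower of the user's value and the max.
    static int resolveLimit(Integer userLimit) {
        return userLimit == null
                ? DEFAULT_LOAD_RECORDS_LIMIT
                : Math.min(userLimit, MAX_LOAD_RECORDS_LIMIT);
    }

    public static void main(String[] args) {
        System.out.println(resolveLimit(null)); // -> 20 (default)
        System.out.println(resolveLimit(50));   // -> 50 (below max)
        System.out.println(resolveLimit(400));  // -> 100 (clamped to max)
    }
}
```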
I'm guessing that the limit is put in place for a specific reason.
For larger limits, a single such API call can take an indefinite amount of time to return, since the consumer needs to seek through messages and there's no way to read in "bulk". So, time will increase linearly with the limit.
So should the course of action be to remove the limit completely or rather just increase it to a larger value?
Maybe rather than raising the default limit, we can make it configurable, so that people can set an environment variable to increase the max limit, knowing that it might have side effects, while the safer config stays as the default.
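A sketch of that suggestion: read an optional override from the environment and fall back to the current hard-coded max. The variable name here is hypothetical, not an existing kafka-ui setting:

```java
public class ConfigurableLimit {
    private static final int HARD_CODED_MAX = 100;

    // Returns the env override when set and valid, else the safe default.
    static int maxLimit() {
        String raw = System.getenv("MESSAGES_MAX_PAGE_SIZE"); // hypothetical name
        if (raw == null) {
            return HARD_CODED_MAX;
        }
        try {
            return Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            return HARD_CODED_MAX; // fall back on malformed input
        }
    }

    public static void main(String[] args) {
        System.out.println(maxLimit());
    }
}
```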
@sarkarshuvojit thanks for the interest in contributing, let's figure out first if we'd really want to do this.
@metalshanked could you please elaborate why would you want more than 100 results on a page, considering there's pagination present (might be broken at the time, but that's another topic)?
@sarkarshuvojit meanwhile you can pick any other issue from "up for grabs" board.
For one of my use cases, I am using Kafka UI as an API to scan records from Kafka, paginating through the responses for a bounded set (say 20,000 messages). Since the limit is currently 100, I am bound by however long that many requests take. If the limit could be increased, it would drastically reduce the total time to paginate through the responses.
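The arithmetic behind that use case can be sketched as a request count (illustrative; it ignores variation in per-request overhead):

```java
public class ScanCost {
    // Number of paginated requests needed to scan totalMessages records.
    static long requestsNeeded(long totalMessages, long pageSize) {
        return (totalMessages + pageSize - 1) / pageSize; // ceiling division
    }

    public static void main(String[] args) {
        // Scanning 20,000 messages at the fixed limit of 100 per page:
        System.out.println(requestsNeeded(20_000, 100));   // -> 200 requests
        // With a raised limit of 1,000 per page:
        System.out.println(requestsNeeded(20_000, 1_000)); // -> 20 requests
    }
}
```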
@sarkarshuvojit @Haarolean - Is there a way I can increase the max limit for API calls? Thanks!
@Haarolean @iliax - Apologies, I updated to 0.7.0 today and wanted to understand the configs required to increase the limit.
I set high values in KAFKA_CLUSTERS_0_DEFAULT_MAX_PAGE_SIZE and KAFKA_CLUSTERS_0_DEFAULT_PAGE_SIZE.
Then in my query I tried the pageSize parameter once, and when that did not seem to do anything, I switched back to the limit query parameter, which does not seem to pull the increased number either.
E.g.:
/messages?keySerde=String&valueSerde=String&limit=400&page=1
Is this the correct way to do this?
Thanks!
Hello @metalshanked, please use the KAFKA_POLLING_DEFAULT_MAX_PAGE_SIZE and KAFKA_POLLING_DEFAULT_PAGE_SIZE env vars to configure this. About the URL params - yes, you are using them right.
Thanks @iliax. I tried setting KAFKA_POLLING_DEFAULT_PAGE_SIZE to 1000 and KAFKA_POLLING_DEFAULT_MAX_PAGE_SIZE to 5000.
But it seems to use the KAFKA_POLLING_DEFAULT_PAGE_SIZE setting (1000) while ignoring KAFKA_POLLING_DEFAULT_MAX_PAGE_SIZE.
Scenario: this call gives 1000 records instead of 2000:
/messages?keySerde=String&valueSerde=String&limit=2000&page=1
Can you please advise on what I am doing wrong? Thanks!
@metalshanked sorry for the confusion, I gave you the wrong env var name :) it is KAFKA_POLLING_MAX_PAGE_SIZE, not KAFKA_POLLING_DEFAULT_MAX_PAGE_SIZE. Please try it.
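Putting the two working variables from this thread together, a sketch for a Docker-based deployment (the image name and port are assumptions; adjust to your setup):

```shell
# KAFKA_POLLING_DEFAULT_PAGE_SIZE applies when no ?limit= is given;
# KAFKA_POLLING_MAX_PAGE_SIZE caps what ?limit= is allowed to request.
docker run -p 8080:8080 \
  -e KAFKA_POLLING_DEFAULT_PAGE_SIZE=1000 \
  -e KAFKA_POLLING_MAX_PAGE_SIZE=5000 \
  provectuslabs/kafka-ui:latest
```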
Thanks much @iliax ! It is working!
The per-page limit on the messages API seems to be fixed at 100. E.g., the call below outputs only 100 messages at a time even with limit=200:
/messages?keySerde=String&valueSerde=String&limit=200&page=1
Is there a way to increase the limit of messages fetched per page?
Thanks in advance!