doujiang24 / lua-resty-kafka

Lua Kafka client driver for OpenResty, based on the cosocket API
BSD 3-Clause "New" or "Revised" License

when max_buffering is set too large, the memory will not be released. #156

Closed lzle closed 1 year ago

lzle commented 1 year ago

When max_buffering is set too large, the memory is not released. Example below: after the request completes, the nginx process keeps holding about 10GB of memory and does not release it until the process exits. Theoretically, the memory should drop after the request, because the table is already empty. Can you explain this? Thanks!

location / {
    content_by_lua '
        local kafka_buffer = require("resty.kafka.ringbuffer")
        local buffer = kafka_buffer:new(200, 10240)

        -- 1MB
        local message = string.rep("a", 1024 * 1024)
        for i = 1, 10240 do
            local ok, err = buffer:add("topic", "key", message .. i)
            if not ok then
               ngx.say("add err:", err)
            end
        end

        for i = 1, 10240 do
            buffer:pop()
        end

        ngx.say("ok")
    ';
}
doujiang24 commented 1 year ago

@lzle have you tried forcing a GC? After the pops, call collectgarbage("collect")
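
For instance, a rough sketch of where that call would go in the example above (OpenResty config fragment; the add loop is elided for brevity):

```
location / {
    content_by_lua_block {
        local kafka_buffer = require("resty.kafka.ringbuffer")
        local buffer = kafka_buffer:new(200, 10240)

        -- ... add messages as in the example above ...

        for i = 1, 10240 do
            buffer:pop()
        end

        -- force a full GC cycle so the dropped messages are freed now,
        -- instead of waiting for the collector to reach its threshold
        collectgarbage("collect")

        ngx.say("ok")
    }
}
```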

lzle commented 1 year ago

Yes, calling collectgarbage("collect") does release the memory, but I don't know whether a forced full GC causes other problems, such as a stop-the-world pause that stalls request processing. Is there a better way? And why didn't the automatic garbage collection mechanism reclaim the memory? Thanks!

doujiang24 commented 1 year ago

@lzle it's normal GC behavior: by default, Lua only starts a new collection cycle once memory use grows to roughly 2x the size left after the previous cycle, so a large transient allocation can sit unreclaimed for a long time. You could investigate how the Lua GC works.
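
A rough standalone sketch of this behavior (plain Lua/LuaJIT, not tied to lua-resty-kafka; the sizes here are illustrative assumptions). It shows dead memory lingering after references are dropped, a forced full collection reclaiming it, and collectgarbage("setpause", ...) as a way to make the automatic collector more eager instead of forcing full cycles:

```lua
-- Build ~1MB of short-lived strings to act as garbage.
local function make_garbage()
    local t = {}
    for i = 1, 1000 do
        t[i] = string.rep("a", 1024) .. i  -- ~1KB each, unique so not shared
    end
    return t
end

local before = collectgarbage("count")  -- KB currently in use

local buf = make_garbage()
buf = nil  -- drop the only reference; the strings are now dead but not yet freed

-- Option 1: force a full collection (this is the stop-the-world concern above)
collectgarbage("collect")
local after_full = collectgarbage("count")
assert(after_full < before + 1024)  -- the ~1MB of dead strings is gone

-- Option 2: tune the automatic GC instead of forcing it.
-- The default pause is 200 (wait for 2x growth before a new cycle);
-- 100 means "start the next cycle as soon as the previous one ends".
collectgarbage("setpause", 100)
```

Lowering the pause trades some extra CPU for a smaller memory high-water mark, which may be preferable to periodic full collections in a latency-sensitive worker.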