peter-lawrey / Java-Chronicle

Java Indexed Record Chronicle
1.22k stars · 193 forks

Garbage collecting processed data. #21

Open M2- opened 11 years ago

M2- commented 11 years ago

Hello,

Currently I'm using your software to push roughly 400,000 integers through 9 different Excerpt gateways, which means about 45,000 nodes per excerpt. This system works as a crossfire between read and write for two different applications: one pushes the data, while the other waits for data to arrive and extracts it from the excerpt. The problem is that with so much data being pushed and potentially queued, the Excerpt heap gets pretty large, and it needs to be cleared of what has already been read by the reading portion of the system (data that'll never be read or used again). So once data has been read from the excerpt gateway, how does it get deleted, or basically garbage collected, without clearing potentially queued data? I can't seem to find such a method.

Thank you for your time.

peter-lawrey commented 11 years ago

You clear data en masse by closing the Chronicle and deleting the files. Even if you are writing 450,000 integers per second, this will not fill a 2 TB drive in a day.
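As a minimal sketch of the approach above, assuming the Chronicle's backing files live at a known base path (the `.data`/`.index` naming and the path here are illustrative, not a documented API):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class ChronicleCleanup {

    // Deletes the files backing an already-closed Chronicle.
    // basePath is the same path the Chronicle was created with
    // (hypothetical example; file suffixes are illustrative).
    static void deleteChronicleFiles(String basePath) throws IOException {
        Files.deleteIfExists(new File(basePath + ".data").toPath());
        Files.deleteIfExists(new File(basePath + ".index").toPath());
    }

    public static void main(String[] args) throws IOException {
        String base = Files.createTempDirectory("chron").resolve("queue").toString();
        // Simulate the files a chronicle would leave behind on disk.
        new File(base + ".data").createNewFile();
        new File(base + ".index").createNewFile();
        deleteChronicleFiles(base);
        System.out.println(new File(base + ".data").exists());
    }
}
```

The key point is that deletion happens at the file level, after close, rather than per-entry inside a live Chronicle.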


M2- commented 11 years ago

Thanks for answering.

How about something on a more precise scale: deleting indexes that have been read, without resetting the entire excerpt and possibly losing queued data? And it pushes 450,000 integers every 20 ms (30 FPS). Overall, its function is to break down an image's ARGB array from its int buffer and push it to each 'frame packet' gateway. So it breaks the image down into 9 different packets of ARGB array data and sends them to be read and reassembled by the client side of the system. Basically, this allows inter-process communication for real-time applet graphics streaming. The idea is to connect to the heap with multiple connections, and to serialize a gateway's location so the entire read portion of the system can be transferred to another client ('tab transferring' from client to client), continuing the read and image reassembly to ultimately display the given frame, 30 times a second, giving a real-time effect, compliments of your low latency.
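The frame-splitting step described above (one ARGB int buffer cut into 9 packets, one per gateway, then reassembled on the reading side) can be sketched as follows; the class and method names are illustrative, not part of Chronicle:

```java
import java.util.Arrays;

public class FrameSplitter {

    // Splits a frame's ARGB int buffer into `parts` roughly equal packets,
    // one per Excerpt gateway as described above.
    static int[][] split(int[] argb, int parts) {
        int[][] packets = new int[parts][];
        int chunk = (argb.length + parts - 1) / parts; // ceiling division
        for (int i = 0; i < parts; i++) {
            int from = Math.min(i * chunk, argb.length);
            int to = Math.min(from + chunk, argb.length);
            packets[i] = Arrays.copyOfRange(argb, from, to);
        }
        return packets;
    }

    // Reassembles the packets on the reading/client side.
    static int[] join(int[][] packets, int totalLength) {
        int[] out = new int[totalLength];
        int pos = 0;
        for (int[] p : packets) {
            System.arraycopy(p, 0, out, pos, p.length);
            pos += p.length;
        }
        return out;
    }
}
```

With a 400,000-pixel frame and 9 gateways, each packet carries roughly 45,000 ints, matching the figures in the thread.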

peter-lawrey commented 11 years ago

Assuming you mean every 33.3 ms (30 FPS) rather than every 20 ms (50 FPS), and you are sending 30 x 450,000 x 32-bit values every second, that translates to 54 MB/s. While that is still an option for Chronicle, I suspect a solution based on compressing the video stream would be more appropriate. I couldn't suggest what it is, as I am not an expert in that area.
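The bandwidth figure follows directly from the stated rates: 30 frames/s x 450,000 ints/frame x 4 bytes/int:

```java
public class Bandwidth {
    public static void main(String[] args) {
        // frames per second * ints per frame * bytes per int
        long bytesPerSecond = 30L * 450_000 * 4;
        System.out.println(bytesPerSecond / 1_000_000 + " MB/s"); // 54 MB/s
    }
}
```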


filinger commented 10 years ago

This issue is still in open status. Does this mean there is still no way to truncate a Chronicle in a running application?

peter-lawrey commented 10 years ago

Not yet. Chronicle 2.0 will be released soon, with improved functionality already available. A key feature of Chronicle 2.1 will be file rolling.


filinger commented 10 years ago

I am glad to hear it. Actually, I have just developed a file rolling system myself. Had some trouble with asynchronous chronicle closing though, so it is not perfectly stable at the moment. All my hopes are on 2.1 then. : )

peter-lawrey commented 10 years ago

File rolling is not easy, but doable for local chronicles. Where rolling is really needed is in TCP replication: you want the clients to roll as well. This is the reason I want to add built-in support.
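As a concept sketch of what file rolling means here (pure stdlib, not the Chronicle API; the class and file-naming scheme are illustrative): writes go to a file named after the current period, and when the period changes, subsequent writes open a new file, so old periods can be deleted or archived independently:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.LocalDate;

public class RollingWriter {
    private final Path dir;
    private LocalDate current; // period the open file belongs to
    private Path file;

    RollingWriter(Path dir) {
        this.dir = dir;
    }

    // Appends a line, rolling to a new day-named file when the date changes.
    void append(String line, LocalDate today) throws IOException {
        if (!today.equals(current)) {
            current = today;                       // period changed: roll
            file = dir.resolve(today + ".log");    // e.g. 2013-09-06.log
        }
        Files.writeString(file, line + System.lineSeparator(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```

The replication point above is that when a server rolls to a new file, its TCP clients must roll in step, which is why built-in support is preferable to each application rolling on its own.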

Btw, working on the javadocs for Chronicle 2.0, something that has been missing until now.

peter-lawrey commented 8 years ago

@johnkajava Your best option is to switch to the newer version at https://github.com/OpenHFT/Chronicle-Queue