sunnyszy / lrb

A C++11 simulator for a variety of CDN caching policies.
BSD 2-Clause "Simplified" License

Cache size of zero #12

Open taliakatz opened 3 years ago

taliakatz commented 3 years ago

Hey, I wonder why every run of the simulator reports a cache size of zero. I'm getting the following line: cache size: 0/1099511627776 (0). Do you have an idea why this happens?

sunnyszy commented 3 years ago

Hi @taliakatz ,

Thanks for your feedback. From your report, it seems that the cache never admits an object. Can you share the command and arguments you used to run the simulator?

taliakatz commented 3 years ago

Yes, of course @sunnyszy. For example, I ran the command with the trace file wiki2019_remapped.tr, cache type S4LRU, and a cache size of 58720256 (it also happened with a bigger size or a different cache type).

With the cache type AdaptSize it wasn't zero (same trace file as above and a cache size of 1099511627776).

sunnyszy commented 3 years ago

Hi @taliakatz ,

In S4LRU, each segment gets only 1/4 of the cache size, so an object may be larger than a single segment and therefore can never be admitted.

To verify that, can you share the first 10 lines of your wiki2019_remapped.tr?
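For context, here is a minimal sketch of the admission constraint being described, assuming a segmented design where each of the four segments gets capacity/4 (the class and member names are illustrative, not the simulator's actual code):

```cpp
#include <cstdint>

// Illustrative sketch, not the simulator's actual S4LRU class.
class S4LRUSketch {
public:
    explicit S4LRUSketch(uint64_t cache_size)
        : _segment_capacity(cache_size / 4) {}  // 4 segments, 1/4 each

    // An object lives inside a single segment, so anything larger than
    // one segment's capacity can never be admitted.
    bool admissible(uint64_t object_size) const {
        return object_size <= _segment_capacity;
    }

private:
    uint64_t _segment_capacity;
};
```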

taliakatz commented 3 years ago

Hi @sunnyszy , Those are the first 10 lines (as I see, the sizes are less than 1/4 of the cache size):

0 0 2994 0
0 1 5219 1
0 2 27840 1
0 3 27646 0
0 4 1090 0
0 5 6459 1
0 6 22497 1
0 7 6179 1
0 8 88691 1
0 9 7684 0

I also wanted to ask: how does the cache size that I set for each run affect the results?
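The trace lines above appear to follow the simulator's space-separated request format; assuming a layout of timestamp, object id, object size, plus one extra feature column (an interpretation of the data, not confirmed in this thread), a minimal reader would look like this:

```cpp
#include <cstdint>
#include <fstream>
#include <iostream>

int main() {
    // Assumed layout per line: timestamp, object id, size, extra feature.
    std::ifstream infile("wiki2019_remapped.tr");
    uint64_t t, id, size, extra;
    while (infile >> t >> id >> size >> extra) {
        std::cout << "t=" << t << " id=" << id
                  << " size=" << size << " extra=" << extra << '\n';
    }
}
```

Reading the sizes this way, 58720256 / 4 = 14680064 bytes per segment, and the largest object shown is 88691 bytes, which is consistent with taliakatz's observation that the sizes are well below 1/4 of the cache size.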

taliakatz commented 3 years ago

Hey, did you see the comment above?

sunnyszy commented 3 years ago

Hi @taliakatz ,

Sorry for the slow response. I was busy these past two days.

I verified the bug: S4LRU does not keep track of its current cache size state. The hit/miss ratio logging should still be correct, though; can you verify that? A quick workaround is simply to ignore the current cache size in the log.

Please leave this issue open until I fix this bug.
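A minimal sketch of the bookkeeping such a fix would need, assuming the current-size counter simply has to be updated on every admission and eviction across the segments (names are illustrative, not the actual patch):

```cpp
#include <cstdint>

// Illustrative sketch of size accounting for a segmented cache; not the
// simulator's actual fix.
class SizeTrackingCache {
public:
    void on_admit(uint64_t object_size) { _current_size += object_size; }
    void on_evict(uint64_t object_size) { _current_size -= object_size; }

    // This counter is what a "cache size: X/Y" log line should report.
    uint64_t current_size() const { return _current_size; }

private:
    uint64_t _current_size = 0;  // bytes currently stored, all segments
};
```

If the counter is never incremented on admission, the log reports 0 even while hit/miss accounting stays correct, matching the behavior reported above.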