sunnyszy / lrb

A C++11 simulator for a variety of CDN caching policies.
BSD 2-Clause "Simplified" License
78 stars 28 forks

LRB result not printed #9

Closed lynnliu030 closed 3 years ago

lynnliu030 commented 3 years ago

Hi, I was using the CLI to run the wiki_18.tr trace, but I found that the final LRB result is not printed for 256 GB, and it has now been running for a very long time (the other algorithms below have all finished).
(screenshot) Result folder: (screenshot) Log folder (the last line of the LRB log): (screenshot) Is it still running, or is there a problem that makes it stuck? When I run the 512 GB case, the total seq is 2800000000.

sunnyszy commented 3 years ago

Hi @lynnliu030 ,

I tried to replicate your bug, but I was able to finish the LRB simulation. LRB is slower than the other algorithms because of its training and prediction overhead. Please tell me more about your simulation output: did you see the simulator hang at 1374 million requests (shown in your screenshot)?

And can you verify your git repo is at the latest commit (161ba3dfa81ba11a0d3c11fe44ceec001774483d)?

I've attached part of my simulation output below. Note there is a difference in byte miss ratio compared with your screenshot:

```
seq: 1371000000 cache size: 274875577909/274877766207 (0.999992) delta t: 16.73 segment bmr: 0.252941 rss: 3589062656 in/out metadata: 3223551 / 6890724 memory_window: 167772160 n_training: 84460 training_time: 660.204 ms inference_time: 50.9926 us

seq: 1372000000 cache size: 274873370730/274877766207 (0.999984) delta t: 16.122 segment bmr: 0.246394 rss: 3589062656 in/out metadata: 3220091 / 6887355 memory_window: 167772160 n_training: 75011 training_time: 657.676 ms inference_time: 50.9189 us

seq: 1373000000 cache size: 274875530802/274877766207 (0.999992) delta t: 16.998 segment bmr: 0.258048 rss: 3589062656 in/out metadata: 3214794 / 6889835 memory_window: 167772160 n_training: 20603 training_time: 676.333 ms inference_time: 50.9189 us

seq: 1374000000 cache size: 274876579618/274877766207 (0.999996) delta t: 15.959 segment bmr: 0.254206 rss: 3589062656 in/out metadata: 3217166 / 6884846 memory_window: 167772160 n_training: 104809 training_time: 683.517 ms inference_time: 50.9189 us

seq: 1375000000 cache size: 274877423021/274877766207 (0.999999) delta t: 15.465 segment bmr: 0.238582 rss: 3589062656 in/out metadata: 3218160 / 6880738 memory_window: 167772160 n_training: 27269 training_time: 676.267 ms inference_time: 50.9229 us
```
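The progress lines above share a fixed layout, so a small helper can pull a single field, such as the segment byte miss ratio, out of a line for plotting or comparison. A minimal sketch (the function name is mine, not part of the simulator):

```cpp
#include <cassert>
#include <cmath>
#include <string>

// Extract the "segment bmr" field from one progress line of the simulator
// log. The field name and layout follow the log excerpts in this thread.
// Returns -1.0 if the field is not present.
double parse_segment_bmr(const std::string &line) {
    const std::string key = "segment bmr: ";
    std::size_t pos = line.find(key);
    if (pos == std::string::npos) return -1.0;
    // std::stod stops at the first character that is not part of the
    // number, so the trailing " rss: ..." text is ignored.
    return std::stod(line.substr(pos + key.size()));
}
```

For example, feeding it the first log line above yields 0.252941.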

lynnliu030 commented 3 years ago

Hi @sunnyszy, sorry, I think I was able to finish the LRB simulation (with a total seq of 2800000000); it just takes much longer than the other algorithms. Below are my simulation results in comparison to yours. I am not sure why there is a difference either; did you use the same command as I did above (i.e., the same cache size and memory window)?

```
seq: 1371000000
cache size: 274876572875/274877766207 (0.999996)
delta t: 21.186
segment bmr: 0.252522
rss: 3647295488
in/out metadata: 3217877 / 6896398
memory_window: 167772160
n_training: 98253
training_time: 626.092 ms
inference_time: 58.2069 us

seq: 1372000000
cache size: 274875880187/274877766207 (0.999993)
delta t: 18.363
segment bmr: 0.24618
rss: 3647295488
in/out metadata: 3220196 / 6887250
memory_window: 167772160
n_training: 89436
training_time: 626.693 ms
inference_time: 57.3094 us

seq: 1373000000
cache size: 274877574512/274877766207 (0.999999)
delta t: 20.805
segment bmr: 0.258833
rss: 3647295488
in/out metadata: 3215161 / 6889468
memory_window: 167772160
n_training: 32961
training_time: 643.081 ms
inference_time: 57.6939 us
```
sunnyszy commented 3 years ago

@lynnliu030 Great to hear that! I think the difference is due to the random sampling (in training and eviction). The C++ standard library's random facilities are partly implementation-defined, so results can (and do) differ across standard library implementations even with the same seed.

Given the difference is small, I think we can ignore it.