Open clark-chen opened 7 years ago
What is the error? Any error message? Any log?
My rcS file is:
cd /parsec/install/bin
/sbin/m5 switchcpu
/sbin/m5 dumpstats
/sbin/m5 resetstats
./blackscholes 2 /parsec/install/inputs/blackscholes/in_16.txt /parsec/install/inputs/blackscholes/prices.txt
echo "Done :D"
/sbin/m5 exit
/sbin/m5 exit
I also tried not running the rcS and entering the commands step by step; when it comes to /sbin/m5 switchcpu, the system exits. The output is:
REAL SIMULATION
info: Entering event queue @ 0. Starting simulation...
warn: Prefetch instructions in Alpha do not do anything
warn: Prefetch instructions in Alpha do not do anything
hack: be nice to actually delete the event here
Exiting @ tick 45593909349500 because switchcpu
So I commented out the "switchcpu" line, but when I run two rcS scripts in two VMs (e.g. blackscholes.rcS and bodytrack.rcS), once vm1 finishes the blackscholes benchmark the whole system exits without finishing the bodytrack.rcS benchmark.
I looked at ./m5out/stats.txt (I want to do some research on the L2 cache):
system.l2_cntrl0.L2cacheMemory.num_data_array_reads 0 # number of data array reads
system.l2_cntrl0.L2cacheMemory.num_data_array_writes 0 # number of data array writes
system.l2_cntrl0.L2cacheMemory.num_tag_array_reads 0 # number of tag array reads
system.l2_cntrl0.L2cacheMemory.num_tag_array_writes 0 # number of tag array writes
system.l2_cntrl0.L2cacheMemory.num_tag_array_stalls 0 # number of stalls caused by tag array
system.l2_cntrl0.L2cacheMemory.num_data_array_stalls 0 # number of stalls caused by data array
Thanks for your answer.
Yes. The problem is with "m5 exit": when one VM reaches it, the whole simulation terminates. This is a common issue when simulating VMs, and a simple solution is to leave it as is! I mean, your simulation time and your results end with the shortest benchmark. The other solution is to repeat the short benchmark several times (in the rcS) so that your simulation ends with the longest benchmark.
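A sketch of that repeated-run workaround in the short benchmark's rcS, based on the script posted above (the loop count of 5 is an assumption; tune it so this VM stays busy until the longest benchmark in the other VM finishes):

```shell
cd /parsec/install/bin
/sbin/m5 dumpstats
/sbin/m5 resetstats
# Re-run the short benchmark several times so this VM does not reach
# "m5 exit" (and end the whole simulation) before the other VM is done.
for i in 1 2 3 4 5; do
    ./blackscholes 2 /parsec/install/inputs/blackscholes/in_16.txt /parsec/install/inputs/blackscholes/prices.txt
done
echo "Done :D"
/sbin/m5 exit
```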
I commented out the switchcpu and exit commands in the rcS, and I used larger benchmarks.
with command: ./build/ALPHA/gem5.fast configs/example/hypervisor.py --topology=Mesh --vm-cpu-placements="1-0:0-1" --vm-mem-sizes="512MB:512MB" --mesh-rows=1 --l2cache --l2_size="512kB" --num-l2caches=1 --num-dirs=1 --vm-scripts="path to blackscholes.rcS:canneal.rcS"
In m5out/config.dot.pdf I did not see the L2 cache module, and m5out/stats.txt contains many
---------- Begin Simulation Statistics ----------
---------- End Simulation Statistics ----------
blocks. The part about the L2 is:
system.l2_cntrl0.L2cacheMemory.num_data_array_reads 0 # number of data array reads
system.l2_cntrl0.L2cacheMemory.num_data_array_writes 0 # number of data array writes
system.l2_cntrl0.L2cacheMemory.num_tag_array_reads 0 # number of tag array reads
system.l2_cntrl0.L2cacheMemory.num_tag_array_writes 0 # number of tag array writes
system.l2_cntrl0.L2cacheMemory.num_tag_array_stalls 0 # number of stalls caused by tag array
system.l2_cntrl0.L2cacheMemory.num_data_array_stalls 0 # number of stalls caused by data array
So I do not know how to see the L2 cache statistics and the L2 cache miss rate.
Thanks for your answer :)
I guess there should be a --caches option which enables the caches... could you try that?
I am afraid not. Using:
./build/ALPHA/gem5.opt -d ./m5out/aaa configs/example/hypervisor.py --kernel=vmlinux --disk-image=linux-parsec2.1.img --topology=Mesh --vm-cpu-placements="1-0:0-1" --vm-mem-sizes="512MB:512MB" --mesh-rows=1 --caches --l1d_size=32kB --l1i_size=32kB --l2cache --l2_size=2MB --num-l2caches=1 --num-dirs=1 --vm-scripts="body.rcS:black.rcS"
In stats.txt, as we know, the first
"---------- Begin Simulation Statistics ----------
---------- End Simulation Statistics ----------"
block is the start of the simulator. The statistics about the L1 and L2 caches are:
system.l1_cntrl0.L1DcacheMemory.num_data_array_reads 0 # number of data array reads
system.l1_cntrl0.L1DcacheMemory.num_data_array_writes 0 # number of data array writes
system.l1_cntrl0.L1DcacheMemory.num_tag_array_reads 0 # number of tag array reads
system.l1_cntrl0.L1DcacheMemory.num_tag_array_writes 0 # number of tag array writes
system.l1_cntrl0.L1DcacheMemory.num_tag_array_stalls 0 # number of stalls caused by tag array
system.l1_cntrl0.L1DcacheMemory.num_data_array_stalls 0 # number of stalls caused by data array
system.l1_cntrl0.L1IcacheMemory.num_data_array_reads 0 # number of data array reads
system.l1_cntrl0.L1IcacheMemory.num_data_array_writes 0 # number of data array writes
system.l1_cntrl0.L1IcacheMemory.num_tag_array_reads 0 # number of tag array reads
system.l1_cntrl0.L1IcacheMemory.num_tag_array_writes 0 # number of tag array writes
system.l1_cntrl0.L1IcacheMemory.num_tag_array_stalls 0 # number of stalls caused by tag array
system.l1_cntrl0.L1IcacheMemory.num_data_array_stalls 0 # number of stalls caused by data array
system.l2_cntrl0.L2cacheMemory.num_data_array_reads 0 # number of data array reads
system.l2_cntrl0.L2cacheMemory.num_data_array_writes 0 # number of data array writes
system.l2_cntrl0.L2cacheMemory.num_tag_array_reads 0 # number of tag array reads
system.l2_cntrl0.L2cacheMemory.num_tag_array_writes 0 # number of tag array writes
system.l2_cntrl0.L2cacheMemory.num_tag_array_stalls 0 # number of stalls caused by tag array
system.l2_cntrl0.L2cacheMemory.num_data_array_stalls 0 # number of stalls caused by data array
system1.l1_cntrl1.L1DcacheMemory.num_data_array_reads 0 # number of data array reads
system1.l1_cntrl1.L1DcacheMemory.num_data_array_writes 0 # number of data array writes
system1.l1_cntrl1.L1DcacheMemory.num_tag_array_reads 0 # number of tag array reads
system1.l1_cntrl1.L1DcacheMemory.num_tag_array_writes 0 # number of tag array writes
system1.l1_cntrl1.L1DcacheMemory.num_tag_array_stalls 0 # number of stalls caused by tag array
system1.l1_cntrl1.L1DcacheMemory.num_data_array_stalls 0 # number of stalls caused by data array
system1.l1_cntrl1.L1IcacheMemory.num_data_array_reads 0 # number of data array reads
system1.l1_cntrl1.L1IcacheMemory.num_data_array_writes 0 # number of data array writes
system1.l1_cntrl1.L1IcacheMemory.num_tag_array_reads 0 # number of tag array reads
system1.l1_cntrl1.L1IcacheMemory.num_tag_array_writes 0 # number of tag array writes
system1.l1_cntrl1.L1IcacheMemory.num_tag_array_stalls 0 # number of stalls caused by tag array
system1.l1_cntrl1.L1IcacheMemory.num_data_array_stalls 0 # number of stalls caused by data array
system1 does not have any L2 statistics.
The L2 is shared between VMs (systems); that is why there is only one l2_cntrl, shared between the two VMs (systems).
Thanks, but the result information about the L1 and L2 is all 0. Is something wrong with my disk image? It runs well in gem5 full-system mode. I downloaded the pre-compiled image from http://www.cs.utexas.edu/~parsec_m5/ . Do I need to build the Linux image with the PARSEC benchmarks myself?
Did you check m5out/stats.txt? The pre-compiled PARSEC benchmarks are just fine.
Yes. When I start the simulator, m5out/stats.txt is overwritten, and every time the L1 and L2 cache statistics are 0. Could something have gone wrong when you uploaded your code?
Are they (L1 and L2 statistics) zero without hypervisor.py too?
When I use fs.py instead of hypervisor.py and run
./build/ALPHA/gem5.opt -d ./m5out/aaa ./configs/example/fs.py -n 4 --mem-size="512MB" --caches --l2cache --l2_size="2MB" --script=body.rcS
in the gem5v directory, it outputs some errors:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/ck/gem5v/src/python/m5/main.py", line 359, in main
    exec filecode in scope
  File "./configs/example/fs.py", line 144, in <module>
    CacheConfig.config_cache(options, test_sys)
  File "/home/ck/gem5v/configs/common/CacheConfig.py", line 49, in config_cache
    system.l2 = L2Cache(clock = options.clock,
NameError: global name 'L2Cache' is not defined
And when I copy fs.py from upstream gem5, it also fails:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/ck/gem5v/src/python/m5/main.py", line 359, in main
    exec filecode in scope
  File "./configs/example/fs1.py", line 54, in <module>
    from ruby import Ruby
ImportError: No module named ruby
Yes. I changed configs/common/Caches.py, renaming class L1 to L1Cache and class L2 to L2Cache. Now
./build/ALPHA/gem5.opt -d ./m5out/aaa ./configs/example/fs.py -n 4 --mem-size="512MB" --caches --l2cache --l2_size="2MB" --script=body.rcS
runs well, and stats.txt has L2 statistics:
system.l2.overall_miss_rate::cpu0.inst 0.017836 # miss rate for overall accesses
system.l2.overall_miss_rate::cpu0.data 0.452114 # miss rate for overall accesses
system.l2.overall_miss_rate::cpu1.inst 0.017818 # miss rate for overall accesses
system.l2.overall_miss_rate::cpu1.data 0.173145 # miss rate for overall accesses
system.l2.overall_miss_rate::cpu2.inst 0.018340 # miss rate for overall accesses
system.l2.overall_miss_rate::cpu2.data 0.169664 # miss rate for overall accesses
system.l2.overall_miss_rate::cpu3.inst 0.006199 # miss rate for overall accesses
system.l2.overall_miss_rate::cpu3.data 0.144361 # miss rate for overall accesses
system.l2.overall_miss_rate::total 0.300305 # miss rate for overall accesses
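For reference, a quick way to pull just these miss-rate lines out of the stats file (the m5out/aaa path is the -d output directory from the command above; adjust it to your own run):

```shell
# Print each L2 overall miss-rate stat name with its value,
# dropping the trailing "# miss rate for overall accesses" comment.
grep 'system.l2.overall_miss_rate' m5out/aaa/stats.txt | awk '{print $1, $2}'
```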
Then I tested hypervisor.py again; unfortunately, the L1 and L2 results are still 0.
I have got your code, and I read "Configure and run parsec 2.1 benchmark in gem5" at http://pfzuo.github.io/2016/06/06/Configure-and-run-parsec-2.1-benchmark-in-GEM5/ . I can run the benchmark in gem5 successfully, but it fails in gem5v. The command is:
./build/ALPHA/gem5.fast configs/example/hypervisor.py --topology=Mesh --vm-cpu-placements="1-0:0-1" --vm-mem-sizes="512MB:512MB" --mesh-rows=1 --l2cache --l2_size="2MB" --num-l2caches=1 --num-dirs=1 --vm-scripts="path to blackscholes.rcS:canneal.rcS"
In the rcS, when "/sbin/m5 switchcpu" executes, the system quits. Why? Last but not least, thank you very much for your answer.