rkgithubs opened this issue 5 years ago
That error just means it failed at some point: it is not very specific. How much was written? Is there a partial trace there? Are you sure it didn't just run out of disk space or something?
It works for me:
$ cmake .
...
-- Version number: 7.90.18009
...
$ rm -rf drmemtrace.python*
$ bin64/drrun -t drcachesim -offline -- python ~/dr/test/hello.py
Hello, world!
$ ls -t
drmemtrace.python2.7.145100.2798.dir
...
$ bin64/drrun -t drcachesim -indir drmemtrace.python*.dir
Cache simulation results:
Core #0 (1 thread(s))
L1I stats:
Hits: 22,131,150
Misses: 94,739
Invalidations: 0
Miss rate: 0.43%
L1D stats:
Hits: 9,630,805
Misses: 264,796
Invalidations: 0
Prefetch hits: 67,599
Prefetch misses: 197,197
Miss rate: 2.68%
Core #1 (0 thread(s))
Core #2 (0 thread(s))
Core #3 (0 thread(s))
LL stats:
Hits: 328,422
Misses: 31,113
Invalidations: 0
Prefetch hits: 176,129
Prefetch misses: 21,068
Local miss rate: 8.65%
Child hits: 31,829,554
Total miss rate: 0.10%
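To answer the "how much was written" and disk-space questions concretely, the partial offline traces can be inspected from the shell. This is a sketch assuming the default drmemtrace.*.dir naming shown above and a raw/ subdirectory holding the per-thread files:

```shell
# Inspect any partial offline traces in the current directory.
for dir in drmemtrace.*.dir; do
    [ -d "$dir" ] || continue      # skip cleanly if no traces exist here
    echo "== $dir =="
    du -sh "$dir"                  # total size of the trace directory
    ls -l "$dir/raw"               # raw per-thread trace files, if present
done
df -h .                            # free space on this filesystem
```

If the raw files stop growing well before the app exits, that points at a write failure rather than a size limit.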
There is plenty of disk space, and a partial trace is collected for each run of the Python app. I was wondering the same thing: is there a trace buffer size limitation for arbitrarily long programs? I was running a Python application doing sample HPC clustering.
No, there is no limit. I would suggest augmenting the error line to include the return value, which will give the error code.
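Until the tool's error line is augmented, the numeric error code can be recovered from the shell with a generic wrapper like this (a sketch; `app.py` and the drrun invocation in the comment are placeholders for the actual run):

```shell
# Generic pattern: run a command, keep its stderr, and report the
# numeric exit status alongside the failure, instead of a bare "it failed".
run_and_report() {
    "$@" 2> cmd.err
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "command failed with exit status $status" >&2
        cat cmd.err >&2
    fi
    return "$status"
}

# Example with the tracer (paths are placeholders):
# run_and_report bin64/drrun -t drcachesim -offline -- python app.py
```

The exit status narrows down whether the failure came from the tool itself or from the traced application.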
I'm trying to collect a trace for a Python application using the drrun drcachesim tool. It can collect the cache simulation results but fails to collect an offline trace. The Python app takes some input parameters, which I have removed from the post; since the cache report is collected successfully, I think they are irrelevant to the error.
Collection completed without the -offline flag and reported the cache simulation stats.