Hi! I'm trying to reproduce results from asplos20.
This repo does not explain how to run the benchmarks in a Linux setting, but I managed to set them up on Linux after some struggling. (One of the things was setting `occlum = no` in the musl configure.)
Here are some of the errors I found, along with a few questions about performance.
For the fish benchmark, the `cp busybox/busybox workspace/bin` command in the `run_fish_test.sh` script renames `busybox` to `bin` instead of copying it into the `bin` folder, because that folder does not exist. Correcting this (e.g. by creating `workspace/bin` first) gives much larger performance numbers than those reported in the paper.
In the `pipe` benchmark, the `&buf` argument to the `write` syscall should be `buf`. It doesn't seem to affect performance, though. A sketch of the fix is below.
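For reference, a minimal sketch of the corrected call, assuming `buf` is declared as a plain `char` array in the benchmark (the name and size here are illustrative):

```c
#include <unistd.h>

#define BUF_SIZE 4096

static char buf[BUF_SIZE];

/* Simplified write step of the pipe benchmark. If buf is an array,
 * &buf and buf point to the same address (only the pointer type
 * differs), which is presumably why the typo has no visible effect
 * on behavior or performance. */
ssize_t send_chunk(int fd)
{
    /* was: write(fd, &buf, BUF_SIZE); */
    return write(fd, buf, BUF_SIZE);
}
```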
Now for performance. First, the artifact does not include the file benchmark, so I wrote one myself by modifying the pipe benchmark (a rough sketch is below). In my case, the `lighttpd` benchmark and the file benchmark both show Linux with much higher throughput (roughly 3x ~ 5x), whereas in the paper the results look almost the same. So I wonder how the benchmarks were run on Linux.
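For context, the file benchmark I wrote is roughly along these lines; the file name, buffer size, and total volume below are placeholders rather than the exact values I used:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define BUF_SIZE  (64 * 1024)   /* placeholder buffer size */
#define TOTAL_MB  1024          /* placeholder total volume */

int main(void)
{
    static char buf[BUF_SIZE];
    memset(buf, 'a', sizeof(buf));

    int fd = open("testfile", O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }

    long total = (long)TOTAL_MB * 1024 * 1024;
    long written = 0;

    struct timeval start, end;
    gettimeofday(&start, NULL);

    /* Sequential write loop, mirroring the sender side of the pipe
     * benchmark but targeting a regular file instead of a pipe. */
    while (written < total) {
        ssize_t n = write(fd, buf, sizeof(buf));
        if (n < 0) { perror("write"); return 1; }
        written += n;
    }
    fsync(fd);
    close(fd);

    gettimeofday(&end, NULL);
    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_usec - start.tv_usec) / 1e6;
    printf("throughput: %.1f MB/s\n",
           (written / (1024.0 * 1024.0)) / secs);
    return 0;
}
```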
I'd appreciate your answers!