fengggli / comanche

fs-integration #13

Open fengggli opened 5 years ago

fengggli commented 5 years ago

TODO

fengggli commented 5 years ago

with fio

  1. Capture all the system calls of fio (the sync/psync engine at https://github.com/fengggli/fio/blob/v3.1/engines/sync.c) so that I can intercept them.
  2. Currently the client sends a request to the server to allocate iomem and map it to a local address.
  3. Call stack.

  4. __attribute__((constructor)) can be useful (see the preload sketch in the next comment).

fengggli commented 5 years ago

intercept:

https://stackoverflow.com/a/4586534/6261848: use --wrap in gcc/ld (a sketch follows)
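A minimal sketch of the --wrap approach from that answer (the function choice here is illustrative; for fio the actual syscall wrappers of the engine would need wrapping):

```c
/* wrap_demo.c -- sketch of link-time interception with ld's --wrap:
 *   gcc main.c wrap_demo.c -Wl,--wrap=open64
 * The linker redirects open64() calls to __wrap_open64(); the original
 * symbol stays reachable as __real_open64(). Unlike LD_PRELOAD this only
 * affects objects linked with the flag, not already-built binaries. */
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <sys/types.h>

int __real_open64(const char *path, int flags, ...);

int __wrap_open64(const char *path, int flags, ...)
{
    mode_t mode = 0;
    if (flags & O_CREAT) {   /* the mode argument exists only with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    fprintf(stderr, "[wrap] open64(%s, 0x%x)\n", path, flags);
    return __real_open64(path, flags, mode);
}
```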

https://www.apriorit.com/dev-blog/537-using-constructor-attribute-with-ld-preload

https://www.tldp.org/HOWTO/pdf/C++-dlopen.pdf

https://stackoverflow.com/a/43005999
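Combining the last three links: a minimal sketch of an LD_PRELOAD shim that intercepts open64 and uses __attribute__((constructor)) for one-time initialization (names and the init work are illustrative, not the actual ustack preload code):

```c
/* preload_shim.c -- build and run with:
 *   gcc -shared -fPIC -o libshim.so preload_shim.c -ldl
 *   LD_PRELOAD=./libshim.so ./test-preload */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <sys/types.h>

typedef int (*open64_fn)(const char *, int, ...);
static open64_fn real_open64;

/* Runs when the library is loaded, before main(); a good place for
 * one-time setup such as connecting to the kvfs daemon. */
__attribute__((constructor)) static void shim_init(void)
{
    real_open64 = (open64_fn)dlsym(RTLD_NEXT, "open64");
    fprintf(stderr, "[shim] loaded\n");
}

int open64(const char *path, int flags, ...)
{
    mode_t mode = 0;
    if (flags & O_CREAT) {   /* the mode argument exists only with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    fprintf(stderr, "[shim] open64(%s, 0x%x)\n", path, flags);
    return real_open64(path, flags, mode);
}
```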

(py36) lifen@sievert(:):~/Workspace/vagrantvm/vagrant-ubuntu18-spdk1810/comanche/build$ ltrace ./src/fuse/ustack/unit_test/test-preload
__libc_start_main(0x4005a0, 1, 0x7ffe6d64cce8, 0x400730 <unfinished ...>
open64("foobar.dat", 65, 0700)                                                                                                                     = 3
close(3)                                                                                                                                           = 0
__fprintf_chk(0x7f4ad696e540, 1, 0x4007d4, 0x4007ce[LOG]:done!
)                                                                                               = 21
+++ exited (status 0) +++

I checked the client-side system call traces: all mmap calls are made without MAP_HUGETLB. I could intercept the mmap call; if it has MAP_HUGETLB set, use special memory, otherwise pass it through unchanged (see the sketch after the list below).

with fio

  1. fio's thread_main is started as a forked process.
  2. With mem=mmaphuge, the flags passed to mmap are 0x40022, i.e. MAP_HUGETLB (0x40000) | MAP_ANONYMOUS (0x20) | MAP_PRIVATE (0x02).
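A minimal sketch of the mmap interception described above (the special-memory allocator is a placeholder; the real version would map daemon-allocated iomem instead):

```c
/* mmap_shim.c -- route MAP_HUGETLB mappings to special memory,
 * pass everything else through to the real mmap. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/types.h>

typedef void *(*mmap_fn)(void *, size_t, int, int, int, off_t);
static mmap_fn real_mmap;

/* Placeholder: a real implementation would ask the kvfs daemon to
 * allocate iomem and map it into the local address space. */
static void *alloc_special_iomem(size_t len)
{
    return real_mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
}

void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t off)
{
    if (!real_mmap)
        real_mmap = (mmap_fn)dlsym(RTLD_NEXT, "mmap");

    /* fio with mem=mmaphuge passes 0x40022 =
     * MAP_HUGETLB (0x40000) | MAP_ANONYMOUS (0x20) | MAP_PRIVATE (0x02) */
    if (flags & MAP_HUGETLB) {
        fprintf(stderr, "[shim] mmap len=%zu flags=0x%x -> iomem\n", len, flags);
        return alloc_special_iomem(len);
    }
    return real_mmap(addr, len, prot, flags, fd, off);
}
```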

Note

Get the performance of a simple workload first; a minimal benchmark sketch follows below.

  1. Some guidelines about POSIX file operations: https://www.classes.cs.uchicago.edu/archive/2017/winter/51081-1/LabFAQ/lab2/fileio.html
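For a baseline, something like this minimal whole-file write-throughput test (file name and sizes are arbitrary) could be measured before moving to fio:

```c
/* write_bench.c -- minimal whole-file write-throughput baseline. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const size_t block = 4096;
    const size_t blocks = 1 << 16;              /* 256 MiB total */
    char *buf = malloc(block);
    memset(buf, 0xab, block);

    int fd = open("bench.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < blocks; i++) {
        if (write(fd, buf, block) != (ssize_t)block) {
            perror("write");
            return 1;
        }
    }
    fsync(fd);                                  /* include the flush in the timing */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MiB/s\n", (double)(block * blocks) / sec / (1 << 20));
    return 0;
}
```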
fengggli commented 5 years ago

whole-file write-throughput

Performance

[figure: write_throughput]

fengggli commented 5 years ago
comparison between kvfs-ustack and kvfs-naive
  1. Since both currently use the kvstore backend with a cache (lock/unlock), they are actually both fast.
  2. The problem is that an intercepted write doesn't know whether the file was opened with O_SYNC or not. I shall check that on the server side and save that information in the filemeta. Now kvfs-daemon can read the ioflag; a client-side sketch of tracking the flags per fd follows.
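On the client side, the same information could also be kept by the interceptor itself; a minimal sketch (names illustrative, not the actual kvfs code) of recording open flags per fd so that an intercepted write can check for O_SYNC:

```c
/* fd_flags.c -- remember open flags per fd so an intercepted write()
 * can tell whether the file was opened with O_SYNC. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <sys/types.h>
#include <unistd.h>

#define MAX_FD 1024
static int fd_flags[MAX_FD];        /* indexed by fd; 0 = untracked */

typedef int (*open_fn)(const char *, int, ...);
typedef ssize_t (*write_fn)(int, const void *, size_t);

int open(const char *path, int flags, ...)
{
    static open_fn real_open;
    if (!real_open)
        real_open = (open_fn)dlsym(RTLD_NEXT, "open");

    mode_t mode = 0;
    if (flags & O_CREAT) {
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    int fd = real_open(path, flags, mode);
    if (fd >= 0 && fd < MAX_FD)
        fd_flags[fd] = flags;       /* save the ioflag for later */
    return fd;
}

ssize_t write(int fd, const void *buf, size_t count)
{
    static write_fn real_write;
    if (!real_write)
        real_write = (write_fn)dlsym(RTLD_NEXT, "write");

    ssize_t n = real_write(fd, buf, count);
    if (n > 0 && fd >= 0 && fd < MAX_FD && (fd_flags[fd] & O_SYNC)) {
        /* Opened with O_SYNC: force the data through the cache here
         * (plain fsync as a stand-in for the kvfs flush path). */
        fsync(fd);
    }
    return n;
}
```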