Automatically exported from code.google.com/p/inferno-os

sys->microsec for complex benchmarking #271

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
Is there a reason why sys->microsec() doesn't exist (a portability issue, perhaps)? I've
tried to add it, and it works just fine:

--- module/sys.m.orig   2011-08-26 17:07:24.000000000 +0300
+++ module/sys.m        2011-08-26 17:10:01.000000000 +0300
@@ -137,6 +137,7 @@
        iounit:         fn(fd: ref FD): int;
        listen:         fn(c: Connection): (int, Connection);
        millisec:       fn(): int;
+       microsec:       fn(): big;
        mount:          fn(fd: ref FD, afd: ref FD, on: string, flags: int, spec: string): int;
        open:           fn(s: string, mode: int): ref FD;
        pctl:           fn(flags: int, movefd: list of int): int;
--- emu/port/inferno.c.orig     2011-08-26 17:09:25.000000000 +0300
+++ emu/port/inferno.c  2011-08-26 17:15:02.000000000 +0300
@@ -127,6 +127,15 @@
 }

 void
+Sys_microsec(void *fp)
+{
+       F_Sys_microsec *f;
+
+       f = fp;
+       *f->ret = osusectime();
+}
+
+void
 Sys_open(void *fp)
 {
        int fd;
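
For reference, here is a minimal Limbo sketch of calling the new built-in, assuming the patch above is applied and emu has been rebuilt (the Bench module name is made up for illustration):

implement Bench;

include "sys.m";
	sys: Sys;
include "draw.m";

Bench: module {
	init: fn(ctxt: ref Draw->Context, argv: list of string);
};

init(nil: ref Draw->Context, nil: list of string)
{
	sys = load Sys Sys->PATH;

	t0 := sys->microsec();
	# ... code under measurement goes here ...
	t1 := sys->microsec();

	sys->print("elapsed: %bd us\n", t1 - t0);
}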

I have a data flow of about 10000 items/sec. Each item must be processed by up to 
20 algorithms (running in separate threads) before the next item can be processed. 
Some of the algorithms are complex enough that I need a way to detect the slow ones 
in order to optimize them.

Simple benchmarking like "run a single algorithm 10000 times in a loop and check 
how much time it took" is not acceptable, for two reasons: 1) some algorithms 
interact with one another, so I have to run them all simultaneously to get correct 
measurements; 2) I need the benchmark running live on a production system, because 
some algorithms may slow down over time (they collect data from already processed 
items and use it when processing subsequent items).

So I don't see any other solution except to measure how much time EACH algorithm 
spends on EACH data item. Keeping in mind that the expected number of such 
measurements is about 200_000/sec at 100% CPU (core) load, sys->millisec() surely 
isn't precise enough for this, but sys->microsec() should be good enough.

Of course, sys->microsec() won't give 100% accurate numbers, because all the 
algorithms run simultaneously on the same CPU core. What I really need is a way to 
find out how much "CPU time" each thread has used, but AFAIK there is no way to get 
that information in hosted Inferno (/prog/N/status always shows the CPU time as 0:00.0).

Original issue reported on code.google.com by powerman...@gmail.com on 26 Aug 2011 at 3:29