DependableSystemsLab / LLFI

LLFI is an LLVM-based fault injection tool that injects faults into the LLVM IR of the application source code. Faults can be injected at specific program points, and their effects can be easily traced back to the source code. Please refer to the paper below. NOTE: If you publish a paper using LLFI, please add it to PaperLLFI.bib
http://blogs.ubc.ca/karthik/2014/02/23/quantifying-the-accuracy-of-high-level-fault-injection-techniques/

Fault injection summary #4

Closed ShadenSmith closed 10 years ago

ShadenSmith commented 10 years ago

During fault injection, each run is now labeled with its index to provide the user a progress report. A brief summary is also printed after all runs are completed, in the form of a histogram of process exit codes. Here is a small example:

99:  /home/shaden/src/inject/test/llfi/sum-faultinjection.exe 50
     program finish -11
     time taken 1 

========== SUMMARY ==========
Return codes:
    0:    34
    1:     4
    2:    32
   16:     1
  -11:    27
   -7:     2
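
A summary like the one above could also be produced outside the injector. Below is a minimal, hypothetical sketch (not the code from this change) that assumes the injector's per-run output, including the "program finish <code>" lines shown above, is piped to it on standard input:

    #!/usr/bin/env python
    # Hypothetical sketch: build a histogram of process exit codes from
    # per-run "program finish <code>" lines in the injector's output.
    import sys
    from collections import Counter

    def summarize(lines):
        codes = Counter()
        for line in lines:
            line = line.strip()
            if line.startswith("program finish"):
                # e.g. "program finish -11" -> -11
                codes[int(line.split()[-1])] += 1
        return codes

    if __name__ == "__main__":
        histogram = summarize(sys.stdin)
        print("========== SUMMARY ==========")
        print("Return codes:")
        for code, count in sorted(histogram.items()):
            print("%5d: %5d" % (code, count))

Piping the injector's console output through a script of this kind would reproduce the histogram without modifying the injection engine itself.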
jshwei commented 10 years ago

Hi Shaden,

Thanks for your contribution. The architecture we want to keep is that fault injection is responsible only for injecting the faults; for analysis, we want separate analysis scripts. Currently we don't have analysis scripts in the project because we think there are many ways to analyze fault injection results, so we leave that job to the users.

What you are doing makes me think about having some sort of analysis pool that contains a list of analysis scripts. So if you can put these changes in a separate file, I can merge them, and users will be able to get the return-code summary with your script. You are also welcome to keep adding analysis tools to the pool.

Thanks, Jiesheng

ShadenSmith commented 10 years ago

Hi Jiesheng,

Thank you for the feedback. I agree that analysis should not be inserted into the fault injection engine. Your idea for a pool of analysis scripts is very good and I will give that some thought as I continue my experiments.

However, I don't think the changes I proposed fall under the category of analysis. Currently, the return code and execution time of every run in an experiment are printed for the user to view. For non-trivial run counts, the user can easily have several thousand lines printed to their screen. My changes simply aggregate the information that is already printed and present it in a more manageable form that the user can easily digest.

Another option is disabling the output of the fault injector unless verbose mode is enabled. We could instead print a simple ASCII progress bar and (optionally) a summary at the end.
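
For illustration only, a progress bar along those lines could be as simple as the following sketch (a hypothetical helper, not code from this pull request), assuming the injection loop knows the current run index and the total number of runs:

    import sys

    def show_progress(current, total, width=40):
        # Hypothetical sketch: draw a single-line ASCII progress bar,
        # e.g. [#####...............] 12/100
        filled = int(width * current / total)
        bar = "#" * filled + "." * (width - filled)
        sys.stdout.write("\r[%s] %d/%d" % (bar, current, total))
        sys.stdout.flush()
        if current == total:
            sys.stdout.write("\n")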

Ultimately, I think some changes should be made to the output of the fault injector. It is currently not feasible to track progress or discern the general trend of one's runs without further analysis, even though the required information is already being presented.

Best, Shaden

jshwei commented 10 years ago

Hi Shaden,

Sorry, I just saw your reply here. I totally agree with you. I think we should probably show only the progress bar (or at least the run number that you currently add in this change) in the normal case, and provide the summary of return codes and execution times in verbose mode.

Thanks, Jiesheng