jpfairbanks / SemanticModels.jl

A Julia package for representing and manipulating model semantics
MIT License

Use Documenter.jl for documentation #4

Closed jpfairbanks closed 5 years ago

jpfairbanks commented 5 years ago

We should use Documenter.jl (https://juliadocs.github.io/Documenter.jl/stable/man/guide/) to generate the docs for this package.
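For reference, a minimal docs/make.jl in the style of that guide might look like the sketch below; the page layout shown is a placeholder, not the package's actual docs structure.

```julia
# docs/make.jl -- minimal sketch following the Documenter.jl guide.
# The sitename/modules values come from this repo; the page list is a placeholder.
using Documenter, SemanticModels

makedocs(
    sitename = "SemanticModels.jl",
    modules  = [SemanticModels],
    pages    = ["Home" => "index.md"],
)
```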

jpfairbanks commented 5 years ago

The last step is to enable publishing.
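Presumably that means adding a `deploydocs` call at the end of docs/make.jl, as in this sketch (repository URL inferred from this repo):

```julia
# Appended after makedocs(...) in docs/make.jl; when run from CI with the
# proper deploy credentials, this pushes the built site to the gh-pages branch.
deploydocs(repo = "github.com/jpfairbanks/SemanticModels.jl.git")
```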

jpfairbanks commented 5 years ago

Our Travis builds take longer than 10 minutes, which will make this hard.

jpfairbanks commented 5 years ago

We need to either reduce the build time or host our own CI infrastructure.

jpfairbanks commented 5 years ago

@mehalter has gotten the Travis builds passing. The last step is to get automatic deployment of the docs working.

infvie commented 5 years ago

Doc fixes: Intended Use Cases

infvie commented 5 years ago

Doc fixes: Approaches

infvie commented 5 years ago

Doc fixes: dubstep

infvie commented 5 years ago
  1. Collect a sample of known “good” inputs matched with their corresponding “good” outputs, and a sample of known “bad” inputs matched with their corresponding “bad” outputs.
    • “Good” here means: given these inputs, the model's outputs/predictions correspond to expected or observed empirical reality, within an acceptable error tolerance.
    • Edge cases to note but not consider heavily at this point:
      • To produce the “good” input to “bad” output case, we can simply corrupt the “good” inputs at various points along the computation.
      • If the assumption that the code is correct and free of bugs holds, it is reasonable to assume we will not observe a “bad” input producing a “good” output.
  2. Run the simulation to collect a sample of known “good” outputs.
  3. Instrument the code to log all SSA assignments from the function calls (see the tracing sketch after this list).
  4. Train an RNN on the sequences of [(func, var, val)...], where the labels are “good input” / “bad input” (see the classifier sketch after this list).
    • By definition, any SSA “sentence” generated by a known “good” input is assumed to be “good”; the labels propagate down from inputs to their traces.
  5. Partial evaluations of the RNN tell you where the computation went wrong.
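For step 3, here is a minimal sketch of call-level tracing with Cassette.jl. The choice of Cassette and the toy `simulate` entry point are assumptions for illustration; this records (function, arguments) tuples per dynamic call rather than true SSA assignments, which would need an additional IR pass.

```julia
using Cassette

Cassette.@context TraceCtx

# Record every dynamically dispatched call as a (function, arguments) tuple,
# then keep recursing so that nested calls are logged as well.
function Cassette.overdub(ctx::TraceCtx, f, args...)
    push!(ctx.metadata, (f, args))
    if Cassette.canrecurse(ctx, f, args...)
        return Cassette.recurse(ctx, f, args...)
    else
        return Cassette.fallback(ctx, f, args...)
    end
end

# Toy stand-in for the model being instrumented.
simulate(x) = sum(sin, 1:x)

trace = Any[]                                            # the "SSA sentence"
result = Cassette.overdub(TraceCtx(metadata = trace), simulate, 10)
```

For step 4, a tiny classifier sketch, assuming the Flux.jl 0.13/0.14-era recurrent API and a hypothetical fixed-length Float32 encoding of each (func, var, val) event:

```julia
using Flux

nfeatures = 16  # hypothetical width of the encoded (func, var, val) event
model = Chain(RNN(nfeatures => 32), Dense(32 => 1, sigmoid))

# Score one trace (a sequence of encoded events); the final output is the
# model's estimate that the trace came from a "good" input.
function score(trace::Vector{Vector{Float32}})
    Flux.reset!(model)
    y = 0.0f0
    for x in trace
        y = model(x)[1]
    end
    return y
end
```

Step 5 then amounts to evaluating the classifier on successive prefixes of a trace and looking for the point where the score drops; that prefix boundary is the “where did it go wrong” signal.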