probcomp / Venturecxx

Primary implementation of the Venture probabilistic programming system
http://probcomp.csail.mit.edu/venture/
GNU General Public License v3.0

Revisit the wisdom of the pure-functional appearance of the inference programming language #131

Open axch opened 9 years ago

axch commented 9 years ago

Either produce a more compelling rationale for it, or a plan for how to let it be imperative, or a plan for making the syntax familiar-looking without committing to imperative vs pure semantics.

Why? Everyone (notably including BenZ, and various summer school students as channeled by Anthony) is constantly confused by do, return, and run, by which things are inference actions and which are not, and so on.
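For concreteness, here is a minimal sketch (not Venture's actual implementation) of why a pure-functional inference language surfaces do, return, and run at all. The assumption here is that an inference action is modeled as a pure function from a trace to a (value, trace) pair; the names return_, bind, do, and run are illustrative only.

```python
def return_(value):
    # Lift a plain value into an action that leaves the trace unchanged.
    return lambda trace: (value, trace)

def bind(action, k):
    # Sequence: run `action`, pass its result to `k`, run the action k builds.
    def composed(trace):
        value, trace1 = action(trace)
        return k(value)(trace1)
    return composed

def do(*actions):
    # Chain several actions in order, keeping the last result (a do-block).
    def composed(trace):
        value = None
        for action in actions:
            value, trace = action(trace)
        return value, trace
    return composed

def run(action, trace):
    # Only here does anything happen to a concrete trace.
    return action(trace)
```

Under this reading, an action is inert data until run is applied to it, which is exactly the property that confuses users who expect calling an inference procedure to have an immediate effect.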

The current state is partially a historical artifact, in that I originally made the inference language functional because it was being traced by Lite PETs, and I assumed that they required functional semantics from the SPs (or rather, didn't want to convince myself otherwise at the time).

One of the still-hypothetical benefits of functional style would be the ability to actually trace inference in a PET and do inference on its choices.

lenaqr commented 9 years ago

> trace inference in a PET and do inference on its choices.

I think this is almost feasible now, given some way to "bring the engine to the traces" (the opposite of a3b50b0e2a729c195b97ecbbd93816817cb7a251). One way I envision it: add a likelihood-free modeling SP, run_in_temp_model, that creates a new model, runs the action on it, and throws it away. This would basically be an ST-monad-like thing, where the mutation is confined to the temporary model and is not allowed to escape. The prerequisites are to expose the SPs that create and compose inference actions to the modeling language and, as I mentioned, to provide some way for run_in_temp_model to expose the engine interface in order to run those actions.
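As a concrete illustration of the ST-monad-like confinement described above, here is a hypothetical sketch; run_in_temp_model is the name proposed in this comment, but everything else (the deep copy, the action being a plain callable) is an assumption about how it might work, not existing Venture API.

```python
import copy

def run_in_temp_model(model, inference_action):
    # Hypothetical sketch of the proposal above, not existing Venture code.
    # Mutation is confined, ST-monad style, to a throwaway copy of the model.
    temp_model = copy.deepcopy(model)       # fresh model the action may mutate
    result = inference_action(temp_model)   # "bring the engine to the trace"
    # temp_model is discarded here, so the mutation never escapes the call.
    return result
```

The point of the analogy is that the caller only ever observes the returned value, never the mutated temporary model, which is what would make the construct safe to expose to the modeling language.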

axch commented 8 years ago

The program-management decision is whether to view this as a goal at all and, if so, how urgent it is.

raxraxraxraxrax commented 8 years ago

Unless "the ability to actually trace inference in a PET and do inference on its choices" is something that we actively want for research purposes, my naive take is that this should not be high priority.

axch commented 8 years ago

Well, so, that's the question. The status quo is a horrid mishmash of functional and imperative style. It started functional because of history and the PET business (which, indeed, we do not seem to actively want). It has been growing more imperative, grudgingly on my part, because that's what all current users (and, presumably, @vkmvkmvkmvkm) are more used to; I prefer functional style and find it easier to think about. The question is, do we

The reason I say that (d) may not be impossible is that Haskell did (d) for a while in its history, which may be why its IO monad has so many unrelated operations. That solution has gotten very ugly in Haskell, but it may be acceptable to us if we persist in the somewhat different hair shirt of aggressively punting all operations that don't have to do with probability to Python.