hpoit opened 8 years ago
Hi @tawheeler, @sbromberger. FYI, from a functional perspective, this is what I just sketched out.
**Tuffy Program**
- Goal: scale relational operations during the grounding phase of MLN inference through an RDBMS
- Method: use a hybrid solution of RDBMS-based grounding and in-memory search
- Method: use partitioning to further improve the space and time efficiency of MLN inference
**General Functionalities**
1a. Symbol table to convert all logic constants into integer IDs
1b. Consolidate MLN clauses of the same pattern
1c. PostgreSQL to store input and intermediate data, e.g. the ground Markov network object
2a. Efficient RDBMS-based grounding via SQL queries with KBMC, and
2b. Lazy reference (in-memory search) for MLN formula grounding, resulting in a Markov random field
2c. Partitioning and inferring on the MRF
3a. MAP inference with WalkSAT
3b. Marginal inference with MC-SAT
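For step 3a, a minimal sketch of the standard WalkSAT local search over a ground CNF might look like the following. This is the textbook algorithm, not Tuffy's implementation; the function name, the DIMACS-style literal encoding (positive/negative ints), and the parameter defaults are all assumptions for illustration:

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10_000, seed=0):
    """WalkSAT local search for CNF satisfiability.

    clauses: list of clauses; each clause is a list of nonzero ints,
             where literal v means variable v is true, -v means false.
    Returns a satisfying assignment {var: bool}, or None on timeout.
    """
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}

    def sat(lit):
        return assign[abs(lit)] == (lit > 0)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign          # all clauses satisfied
        clause = rng.choice(unsat) # pick a broken clause
        if rng.random() < p:
            # noise step: flip a random variable from the broken clause
            var = abs(rng.choice(clause))
        else:
            # greedy step: flip the variable that leaves the fewest
            # clauses unsatisfied after the flip
            def unsat_after_flip(v):
                assign[v] = not assign[v]
                count = sum(1 for c in clauses if not any(sat(l) for l in c))
                assign[v] = not assign[v]
                return count
            var = min((abs(l) for l in clause), key=unsat_after_flip)
        assign[var] = not assign[var]
    return None
```

For weighted MAP inference over an MRF, Tuffy uses the weighted variant (MaxWalkSAT), which scores flips by the total weight of broken clauses rather than their count; the control flow is otherwise the same.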
**Felix Program**
- Goal: efficiently perform inference in Markov Logic by exploiting common subtasks in text-processing tasks
- Method: use specialized algorithms for each task
**General Functionalities**
I am counting on you guys for a peer review from time to time as I move forward. Thank you.
Hi @hpoit. This is very MLN-specific so I can't really say whether it is a good approach or not. I would recommend against implementing things like SQL backends before you have a very basic version of everything else working first.
I'd start with:
- Representations
- MLN tasks
- Functionalities
- Distribution