Closed ngoodman closed 5 years ago
closed by #71
Hi, I have nothing to comment specifically about the new organisation of the book; you decide what's best :-)
Just one question: there used to be - IIRC - a chapter about reasoning by analogy. Is it gone or has the content just been shuffled around?
Thx! V
pretty much all the content is still there -- just moved around a little.
however, there was never a chapter on analogy (though various little bits on reasoning relate to it). it would be great to add more material on analogical reasoning -- but also great for someone to do more research on the connection between bayesian cognitive models and analogical reasoning!
Oh ok, I confused my hopes with memory then :-) or maybe I was just thinking about the little bits you mention. I was actually thinking of having, in probabilistic programming, relations similar to class-instance in classical programming. The connection with analogical reasoning would be the class serving as a kind of pattern, with all instances then being analogous to each other. Dunno how relevant it is, but it might be useful...?
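The class-as-pattern idea above can be sketched in plain JavaScript (all names here are hypothetical illustrations, not anything from the book; in a probabilistic-programming setting the "class" would instead be a prior over shared parameters):

```javascript
// A "class" here is just a generative pattern: a function that
// fixes shared structure and returns an instance-maker.
// Instances built from the same pattern are analogous to each
// other by construction -- they share the pattern's structure.
function makePattern(sharedParams) {
  return function makeInstance(instanceParams) {
    // instance-specific details are layered over the shared pattern
    return { ...sharedParams, ...instanceParams };
  };
}

// The pattern fixes what all instances have in common...
const bird = makePattern({ wings: 2, laysEggs: true });

// ...while each instance fills in its own details.
const robin = bird({ name: "robin", canFly: true });
const penguin = bird({ name: "penguin", canFly: false });

// The analogy: both instances inherit the pattern's structure.
console.log(robin.wings === penguin.wings); // true
```

In a hierarchical Bayesian model the same move appears as shared latent parameters drawn once at the "class" level, with each observation (instance) conditioned on them.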
I'm intending to do a major (and long overdue) re-organization shortly. My proposed new outline is:
Introduction — A brief introduction to the philosophy.
Basics:
- Generative models — Representing working models with probabilistic programs.
- Conditioning — Asking questions of models by conditional inference.
- Causal and statistical dependence
- Conditional dependence — conditional dependence, explaining away, screening off, etc.
- Bayesian data analysis — Making scientific inferences about data and models.
- Algorithms for inference — The landscape of inference methods, efficiency tradeoffs of different algorithms.
- Rational process models — From competence to process.
- [Models for sequences of observations — Generative models of the relations between data points. **Delete: move iid, exch to learning section; move hmm, pcfg elsewhere.]
Learning:
- Learning as conditional inference — How inferences change as data accumulate. (Include iid, exch, etc.)
- Learning compositional hypotheses — RR, PLOT, etc. (include PCFG here?)
- Learning continuous functions — Deep probabilistic models. GPs?
- Hierarchical models — The power of abstraction.
- Occam's Razor — Penalizing extra model flexibility.
- Mixture models — Models for inferring the kinds of things.
- Non-parametric models — What to do when you don't know how many kinds there are. **Get rid of this chapter? [add small summary of non-parametrics (dirichlet-discrete to dirichlet process via sequential sampling) to ch 11; get rid of ch 12.]
Social reasoning:
- Agents as probabilistic programs — One-shot decision problems, softmax choice.
- Sequential decisions — Markov Decision Processes and Partially Observable Markov Decision Processes.
- Inference about inference — social cognition, pragmatics.
Appendix:
- JavaScript basics — A very brief primer on JavaScript.