Not currently. That's a great question; it actually requires some non-trivial changes to the modeling methodology. We put some effort into investigating the modeling question (see e.g. http://arxiv.org/pdf/1409.4011v1.pdf if you're interested), but I think finding the best solution is still an open research problem.
That's pretty interesting. For now, would you recommend I decouple the variables into different models and tune them separately, or is the current setup sufficient, albeit slower than the ideal solution?
A reasonably effective hack is to put in a default value (e.g. 0) for a variable when it's unobserved, and/or to add a categorical variable indicating whether or not that variable is observed. With many such conditional variables, though, this becomes quite inefficient.
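A minimal sketch of that hack, assuming Spearmint's Python objective interface (the variable names `use_dropout` and `dropout_rate` and the toy objective are hypothetical; the corresponding config.json would declare `use_dropout` as an ENUM with options `["on", "off"]` and `dropout_rate` as a FLOAT):

```python
def main(job_id, params):
    # Spearmint passes each variable as an array; ENUM values arrive as
    # strings (assumed here).
    active = params['use_dropout'][0] == 'on'

    # When the indicator says "off", pin the conditional variable to a fixed
    # default (here 0.0) so the model sees a constant rather than a
    # meaningless sampled value.
    rate = float(params['dropout_rate'][0]) if active else 0.0

    # Stand-in objective for illustration; replace with real training/eval.
    return (rate - 0.3) ** 2
```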
Jasper
Thanks, that sounds like what I have been doing, but the optimizer still explores the variable's space even when the variable is set to be unobserved. That would be quite slow when exploring many different neural network topologies.
Is there a way to make a variable tunable only when another variable's enum takes a particular value?