ardila opened this issue 11 years ago
I don't know which is better -- but I would start with the theano one ...
@daseibert I think you should start with the NYU one (seems like it has the best performance from what I can tell), and I will start with the theano one (seems like it would integrate the best).
I think this plan makes sense even once I am able to make the HMO process easily available to everybody, because we might very well want to use dropout networks and combine them with architectural optimizations going forward.
This sounds good. I'll plan to look at getting the NYU DropConnect code (https://cs.nyu.edu/~wanli/dropc/) set up on honeybadger.
I agree that getting multiple models set up is a good idea even if we could run HMO ourselves. Having multiple models that (conceivably) give similar performance on object recognition, neural fitting, or behavioral consistency will likely give us further insight into what matters and what doesn't for these measures.
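For anyone comparing the two regularizers: DropConnect (the NYU code linked above) zeroes individual weights during training, whereas dropout zeroes whole activations. Here is a minimal NumPy sketch of a DropConnect forward pass through a ReLU layer; the function and variable names are illustrative and not taken from the dropc codebase:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropconnect_forward(x, W, b, p_drop=0.5):
    """One training-time forward pass through a ReLU layer with
    DropConnect: each weight (not each unit) is dropped independently."""
    mask = rng.random(W.shape) >= p_drop        # keep each weight with prob. 1 - p_drop
    return np.maximum(0.0, x @ (mask * W) + b)  # ReLU of the masked affine map
```

At test time the DropConnect paper approximates an average over masks (via moment matching) rather than using the simple rescaling dropout uses, which is part of why that implementation is more involved than a plain dropout layer.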
Because @daseibert and I have not been able to get access to a running model that we can use for our projects, and because we are interested in having the ability to quickly train a neural network with regularization on new data (instead of having to run an HMO screen, which we also cannot do), we are planning to work on setting up one of the following existing pieces of code:
- The NYU DropConnect implementation (mostly CUDA): https://cs.nyu.edu/~wanli/dropc/
- The theano dropout implementation (mostly Python)
I have not yet decided which would be best to attempt, but it seems the theano implementation would most closely integrate with our existing codebase. @yamins81 Is this true, and which one would you recommend?
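For concreteness, a dropout layer of the kind the theano option provides can be sketched in a few lines. This is an illustrative example using standard Theano APIs, not code from either project, and it uses the "inverted" scaling convention (scale at train time) rather than rescaling weights at test time:

```python
import theano
import theano.tensor as T
from theano.tensor.shared_randomstreams import RandomStreams

srng = RandomStreams(seed=42)

def dropout(activations, p_drop=0.5):
    # Keep each unit with probability 1 - p_drop, zero it otherwise.
    mask = srng.binomial(size=activations.shape, n=1, p=1.0 - p_drop,
                         dtype=theano.config.floatX)
    # Scale the survivors so expected activations match test time,
    # where the mask is simply omitted.
    return activations * mask / (1.0 - p_drop)

x = T.matrix('x')
train_layer = theano.function([x], dropout(x))
```

Because the mask is built from symbolic Theano ops, it composes with the rest of a computation graph, which is presumably why the theano implementation would integrate most easily with an existing Theano-based codebase.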