leelabcnbc / thesis-yimeng-v2

good parts of thesis-yimeng-v1, better refactoring.

meeting 20190531 #3

Open zym1010 opened 5 years ago

zym1010 commented 5 years ago
hal-rock commented 5 years ago

@zym1010 the paper *Neural population control via deep image synthesis* could possibly be helpful too; I mentioned it in that presentation, but towards the end, I think after you left. Notably, there is this part in their methods about training the network they used for transfer learning:

> In addition, the input images were transformed using an eccentricity-dependent function that mimics the known spatial sampling properties of the primate retinae. We termed this the "retinae transformation". We had previously found that training deep convolutional ANN models with retinae-transformed images improves the neural prediction accuracy of V4 neural sites (an increase in explained variance by ∼5−10%). The "retinae transformation" was implemented by a fish-eye transformation that mimics the eccentricity-dependent sampling performed in primate retinae (code available at https://github.com/dicarlolab/retinawarp).
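
If it helps as intuition, here is a rough sketch of what an eccentricity-dependent fish-eye resampling could look like. This is my own illustration, not the actual retinawarp code, and `strength` is a made-up parameter:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fisheye_warp(img, strength=2.0):
    """Resample img (H, W) or (H, W, C): magnify the center, compress the periphery.

    strength > 0 controls how strongly the periphery is compressed (an assumption,
    not a parameter from the paper or the retinawarp repo).
    """
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # normalized coordinates in [-1, 1], centered on the image
    y = (yy - (h - 1) / 2) / ((h - 1) / 2)
    x = (xx - (w - 1) / 2) / ((w - 1) / 2)
    r = np.sqrt(x ** 2 + y ** 2)
    # compressive radial remapping: the source radius grows slower than the
    # output radius near the center, so the "fovea" is sampled densely
    r_src = np.sinh(strength * r) / np.sinh(strength)
    scale = np.where(r > 1e-8, r_src / np.maximum(r, 1e-8), 1.0)
    src_y = (y * scale) * ((h - 1) / 2) + (h - 1) / 2
    src_x = (x * scale) * ((w - 1) / 2) + (w - 1) / 2
    if img.ndim == 2:
        return map_coordinates(img, [src_y, src_x], order=1, mode='nearest')
    return np.stack(
        [map_coordinates(img[..., c], [src_y, src_x], order=1, mode='nearest')
         for c in range(img.shape[-1])], axis=-1)
```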

The 5-10% increase in R^2 would correspond to a decent increase in R, if this works for V1 as well (rough numbers below). They also mention another mapping from the model's features that they use for transfer learning, which gives a similar boost, but I haven't examined your transfer learning code closely enough to tell whether you already use it.
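
For a back-of-the-envelope sense of that, assuming the 5-10% means absolute percentage points of R^2 (the quote doesn't say) and a hypothetical starting R^2 of 0.50:

```python
import math

# hypothetical baseline R^2 of 0.50, plus 5 or 10 absolute points
for delta in (0.05, 0.10):
    r2_old, r2_new = 0.50, 0.50 + delta
    print(f"R^2 {r2_old:.2f} -> {r2_new:.2f}  |  "
          f"R {math.sqrt(r2_old):.3f} -> {math.sqrt(r2_new):.3f}")
# R goes from ~0.707 to ~0.742 (+5 pts) or ~0.775 (+10 pts)
```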

zym1010 commented 5 years ago

@hal-rock thanks for letting me know! I will check that paper later.

Regarding my transfer learning setup, I'm basically using DiCarlo's old setup, with a factorized fully connected layer as in https://github.com/david-klindt/NIPS2017.
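
Roughly, the readout looks like this. This is a sketch in PyTorch of the factorized spatial-mask-times-feature-weights idea, not the actual code from either repo:

```python
import torch
import torch.nn as nn

class FactorizedReadout(nn.Module):
    """Each neuron's weights over a (C, H, W) feature map are factorized into
    a spatial mask (H, W) and a feature vector (C,), instead of a full
    C*H*W fully connected layer."""

    def __init__(self, in_channels, in_height, in_width, n_neurons):
        super().__init__()
        # per-neuron spatial mask and per-neuron feature weights
        self.spatial = nn.Parameter(torch.randn(n_neurons, in_height, in_width) * 0.01)
        self.features = nn.Parameter(torch.randn(n_neurons, in_channels) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_neurons))

    def forward(self, x):
        # x: (batch, C, H, W) feature maps from the core network
        # pool each channel with the neuron-specific spatial mask ...
        pooled = torch.einsum('bchw,nhw->bnc', x, self.spatial)
        # ... then weight the channels with the neuron-specific feature vector
        return (pooled * self.features).sum(-1) + self.bias  # (batch, n_neurons)

# e.g. readout = FactorizedReadout(64, 14, 14, n_neurons=100)
#      responses = readout(torch.randn(8, 64, 14, 14))  # -> (8, 100)
```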

I think using the retinae transformation would probably require retraining those ImageNet models, and since my main research question is about the usefulness of recurrent networks, I probably won't add that transformation.

hal-rock commented 5 years ago

Yes, their method does require retraining the model from scratch, and (I think) there's no particular reason to expect the difference between recurrent and feedforward models to change under the transform, so that makes sense.

I guess I just wasn't sure to what extent you were looking at the usefulness of recurrence versus trying to make the best models (of which recurrence would be a part), but now I understand. Sorry to bother you, then.

zym1010 commented 5 years ago

@hal-rock no bother at all! Knowing about more papers is always a good thing.