ajinkya-kulkarni opened this issue 1 year ago
Hi, thanks for the interest. Yes, you can use RGB images, provided you adjust the input layer here https://github.com/carloalbertobarbano/forward-forward-pytorch/blob/a3aa3133b72166607a5f3c74a1cd4d4b5c25c542/mnist_ff.py#L162 with the correct size.
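For reference, a minimal sketch of what that change amounts to for CIFAR-10-sized inputs (the hidden size of 2000 here is illustrative, not necessarily the repo's exact value):

```python
import torch.nn as nn

# The first fully-connected layer must match the flattened input size.
# MNIST:    1 x 28 x 28 = 784
# CIFAR-10: 3 x 32 x 32 = 3072
mnist_input = nn.Linear(28 * 28, 2000)      # original MNIST-sized layer
cifar_input = nn.Linear(3 * 32 * 32, 2000)  # adjusted for 32x32 RGB images

# The images also need to be flattened before the first layer, e.g.
# x = x.view(x.size(0), -1)   # (B, 3, 32, 32) -> (B, 3072)
```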
Ahh thanks, I will try it.
Another point: what is the function of `hard_negatives` and `steps_per_block`? I was not able to understand these two parameters.
I am going a bit by memory since I haven't worked with this for some time, but:
- `hard_negatives` is used to select the more difficult (hard) samples, i.e. the ones that are not correctly classified by the linear classifier (it is mentioned in Hinton's paper).
- `steps_per_block` is something I introduced that allows blocks in the network to be frozen sequentially (e.g. train the whole network for N steps, then freeze the 1st block and train the remaining blocks, then freeze the 2nd block and train the rest, and so on). The idea, I guess, was to stabilize the input to the following blocks so they could converge, but I'm not sure it had any advantage over just training the whole network at the same time. A rough sketch of both ideas is below.
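This is only a simplified illustration of the two ideas, not the repo's exact code; `train_step` is a hypothetical per-step training function and the real selection/freezing logic in `mnist_ff.py` may differ:

```python
import torch

def pick_hard_negative_labels(logits, true_labels):
    """Use the (wrong) labels the linear classifier is most confident about
    as negative labels, instead of sampling them uniformly at random."""
    masked = logits.clone()
    # Mask out the true class so the argmax is always an incorrect label.
    masked.scatter_(1, true_labels.unsqueeze(1), float("-inf"))
    return masked.argmax(dim=1)  # most confusing wrong label per sample

def train_with_block_freezing(blocks, steps_per_block, train_step):
    """Train all blocks for N steps, then freeze the 1st block and keep
    training the rest, then freeze the 2nd block, and so on."""
    for num_frozen in range(len(blocks)):
        for block in blocks[:num_frozen]:
            for p in block.parameters():
                p.requires_grad = False
        for _ in range(steps_per_block):
            train_step(blocks)
```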
Hi, thanks for the repo! I was wondering if this package is compatible with a generic RGB dataset with labels for each image, something like CIFAR10 for example.