Closed: rytp closed this issue 8 years ago
Go for it! It's probably easiest to make a new subclass of whatever `learning_fn` you are normally using, and then make the constructor accept a new argument for the sheet whose activity you want to use for modulating this projection's plasticity. For instance, if you are usually using Hebbian learning, you'll probably first want to switch to the unoptimized version (to make it easier to work with, though a good bit slower), e.g. using `learning_fn=CFPLF_Plugin(single_cf_fn=Hebbian())`. As long as that works ok, you can then subclass `Hebbian` so that the information is available for you to use during learning:
```python
class ModulatedHebbian(LearningFn):
    def __init__(self, modulator_sheet=None, **params):
        super(ModulatedHebbian, self).__init__(**params)
        self.modulator = modulator_sheet

    def __call__(self, input_activity, unit_activity, weights, single_connection_learning_rate):
        weights += single_connection_learning_rate * unit_activity * input_activity * self.modulator.activity
```
However, it seems like there are likely to be issues to do with slicing the modulator's activity array to match what this CF sees; you might want to first debug it with fully connected projections (all of the same shape) to avoid that issue. Once that works, you'll probably need to delve into the part of CFProjection that creates the appropriate slice of the input sheet for use with this CF, which is some grotty code that none of us touch anymore and thus no one remembers. You may need to subclass CFProjection as well to get access to that code, and in any case be careful about making sure sheet sizes, etc. line up. You're mixing up data from very different parts of the simulation, so be sure you know what you're trying to do!
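To make the slicing problem concrete, here is a minimal NumPy sketch of the cropping step, assuming the modulator sheet has the same shape and density as the input sheet so the CF's slice can be reused (all variable names and numbers here are hypothetical, not Topographica API):

```python
import numpy as np

# Hypothetical numbers: a CF that sees rows 2..5 and columns 3..6 of a
# 10x10 input sheet, written as the (rr1, rr2, cc1, cc2) tuple that the
# optimized code in this thread unpacks from input_sheet_slice.
rr1, rr2, cc1, cc2 = 2, 5, 3, 6

weights = np.ones((rr2 - rr1, cc2 - cc1))   # this CF's 3x3 weight matrix
modulator_activity = np.zeros((10, 10))     # assumed same shape/density as the input sheet

# Crop the modulator's activity down to the region this CF sees, so the
# elementwise product in the learning rule is shape-compatible with `weights`.
local_mod = modulator_activity[rr1:rr2, cc1:cc2]
```

If the modulator sheet has a different density, the slice bounds would first have to be rescaled into the modulator's coordinate system, which is exactly the grotty part mentioned above.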
Thanks for the reply. I implemented an optimized version of the code without the per-CF plugin. I have a question about which time instant of the modulator sheet's activity is used to update the weights:
```python
class CFPLF_Hetero_opt(CFPLearningFn):
    def __init__(self, modulator_sheet=None, **params):
        super(CFPLF_Hetero_opt, self).__init__(**params)
        self.modulator = modulator_sheet

    def __call__(self, iterator, input_activity, output_activity, learning_rate, **params):
        rows, cols = output_activity.shape
        cfs = iterator.flatcfs
        num_cfs = len(cfs)  # pyflakes:ignore (passed to weave C code)
        single_connection_learning_rate = self.constant_sum_connection_rate(iterator.proj_n_units, learning_rate)
        if single_connection_learning_rate == 0:
            return
        modu = 0.1 * self.modulator.activity  # pyflakes:ignore (passed to weave C code)
        irows, icols = input_activity.shape
        cf_type = iterator.cf_type  # pyflakes:ignore (passed to weave C code)
        code = c_header + """
            DECLARE_SLOT_OFFSET(weights,cf_type);
            DECLARE_SLOT_OFFSET(input_sheet_slice,cf_type);
            DECLARE_SLOT_OFFSET(mask,cf_type);
            DECLARE_SLOT_OFFSET(_norm_total,cf_type);
            DECLARE_SLOT_OFFSET(_has_norm_total,cf_type);

            %(cfs_loop_pragma)s
            for (int r=0; r<num_cfs; ++r) {
                double load = output_activity[r];
                double unit_activity = load;
                double hetero = modu[r];

                if (load != 0) {
                    load *= single_connection_learning_rate;

                    PyObject *cf = PyList_GetItem(cfs,r);

                    LOOKUP_FROM_SLOT_OFFSET(float,weights,cf);
                    LOOKUP_FROM_SLOT_OFFSET(int,input_sheet_slice,cf);
                    LOOKUP_FROM_SLOT_OFFSET(float,mask,cf);

                    UNPACK_FOUR_TUPLE(int,rr1,rr2,cc1,cc2,input_sheet_slice);

                    double total = 0.0;

                    // modify non-masked weights
                    npfloat *inpj = input_activity+icols*rr1+cc1;
                    for (int i=rr1; i<rr2; ++i) {
                        npfloat *inpi = inpj;
                        for (int j=cc1; j<cc2; ++j) {
                            // The mask is floating point, so we have to
                            // use a robust comparison instead of testing
                            // against exactly 0.0.
                            if (*(mask++) >= 0.000001) {
                                *weights += load * *inpi * (unit_activity) * (hetero);
                                if (*weights<0) { *weights = 0; }
                                total += fabs(*weights);
                            }
                            ++weights;
                            ++inpi;
                        }
                        inpj += icols;
                    }

                    // store the sum of the cf's weights
                    LOOKUP_FROM_SLOT_OFFSET(double,_norm_total,cf);
                    _norm_total[0]=total;
                    LOOKUP_FROM_SLOT_OFFSET(int,_has_norm_total,cf);
                    _has_norm_total[0]=1;
                }
            }
        """ % c_decorators

        inline(code, ['input_activity', 'output_activity', 'num_cfs',
                      'icols', 'cfs', 'single_connection_learning_rate',
                      'modu', 'cf_type'],
               local_dict=locals(),
               headers=['<structmember.h>'])
```

(Note that `modu` must be included in the `inline` argument list, since the C code reads it; an undefined name such as `unit_threshold` there would fail at the `locals()` lookup.)
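For readers who don't want to parse the weave code, the per-CF update it performs is, up to masking and slicing, equivalent to this plain NumPy sketch (names here are illustrative, not Topographica API). Note that, as written, `unit_activity` enters twice, once via `load` and once in the product, and the sketch reproduces that faithfully:

```python
import numpy as np

def hetero_update(weights, local_input, unit_activity, hetero,
                  single_connection_learning_rate):
    """One CF's update, mirroring the inner C loop above:
    dw = lr * unit_activity**2 * hetero * local_input,
    with negative weights clipped to zero and the norm total returned."""
    load = single_connection_learning_rate * unit_activity
    weights = weights + load * local_input * unit_activity * hetero
    np.clip(weights, 0.0, None, out=weights)   # if (*weights < 0) *weights = 0;
    norm_total = np.abs(weights).sum()         # total += fabs(*weights);
    return weights, norm_total
```

This makes it easy to sanity-check the C code against small arrays before worrying about slicing and masks.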
@rytp It depends on the kind of sheet you are using: a regular `JointNormalizingCFSheet` will learn on each event, while a `SettlingCFSheet` will wait until the end of an iteration to apply learning. It should be fairly straightforward to implement a subclass of `JointNormalizingCFSheet` which applies learning whenever you want it to.
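The difference between the two schedules can be illustrated with a toy mock (these classes are simplified stand-ins, not the real Topographica sheets): an event-wise sheet applies learning on every input event, while a settling sheet gates learning until the end of an iteration.

```python
class MockEventWiseSheet:
    """Stand-in for an event-wise schedule: learning is applied on every event."""
    def __init__(self):
        self.learn_calls = 0

    def input_event(self, activity):
        self.learn_calls += 1  # learning applied immediately

class MockSettlingSheet:
    """Stand-in for a settling schedule: learning is applied once per
    iteration, after all settling steps have been processed."""
    def __init__(self, settling_steps):
        self.settling_steps = settling_steps
        self._step = 0
        self.learn_calls = 0

    def input_event(self, activity):
        self._step += 1
        if self._step == self.settling_steps:  # end of the iteration
            self.learn_calls += 1
            self._step = 0
```

A custom subclass would gate its call into the learning machinery the same way the settling mock gates it here, using whatever condition defines "time to learn" for your model.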
Hi,
I would like to model hetero-synaptic plasticity using Topographica, where:

```python
weights += single_connection_learning_rate * unit_activity * input_activity * other_sheet_corresponding_neuron_activity
```

Any suggestions?