majidaldo closed this issue 9 years ago
Yes, this is a good idea. I think I've fixed up the code a bit to do this, but please reopen if this change didn't work.
Hmm, on second thought, it would be more flexible to have a `load_weights()` and a separate `load_network()` that calls `load_weights()`, for cases where you want to transfer weights. Maybe there should also be a `load_param(param)`/`save_param(param)` pair for each set of parameters. I'm just thinking out loud about loading here.
In a way, this is the way it's currently set up. There is a feedforward.Network.load_params() method that loads only the parameter values (weights and biases) from a pickled network. Then there is a feedforward.load() module-level function that loads a pickle file and creates a new network from it.
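To make the distinction concrete, here is a minimal sketch of that two-level design. Only the names `load_params` and a module-level `load` come from the comment above; everything else (the pickling strategy, the `params` dict, the class layout) is an assumption for illustration, not the library's actual implementation.

```python
import pickle


class Network:
    """Toy stand-in for a feedforward network (hypothetical sketch)."""

    def __init__(self, layers):
        self.layers = layers
        self.params = {}  # stand-in for weights and biases, keyed by name

    def save(self, path):
        # Pickle the whole network object to disk.
        with open(path, 'wb') as f:
            pickle.dump(self, f)

    def load_params(self, path):
        # Method: copy only the parameter values from a pickled network
        # into this already-constructed network (weight transfer).
        with open(path, 'rb') as f:
            other = pickle.load(f)
        self.params.update(other.params)


def load(path):
    # Module-level function: unpickle the file and return a whole
    # new network object, architecture and all.
    with open(path, 'rb') as f:
        return pickle.load(f)
```

The difference matters for the weight-transfer use case: `load_params` keeps the receiving network's own architecture and configuration, while the module-level `load` reconstructs everything from the pickle.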
Ah, OK. I was wondering what that was doing in the module, as I don't recall any code using it.
If you create an RNN, save it, and then load it, it might not operate (train, predict, etc.). That's because some compilation info is not reinitialized.
In `recurrent.Network.setup_encoder` there is `batch_size = kwargs.get('batch_size', 64)`. So if the network were created with `batch_size=32`, that value does not get recreated, since `setup_encoder` is not invoked on loading. You may wish to reevaluate how nets are saved in order to better generalize saving any network. I suggest saving all the args and kwargs that went into creating the net and calling `__init__` with them on loading.
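The suggested fix could look something like this sketch (all names here are hypothetical, not the library's actual API): record the constructor arguments at creation time, save them alongside the parameter values, and rebuild the network on load by calling `__init__` again, so that any derived compilation state is recreated.

```python
import pickle


class Network:
    """Toy stand-in for a recurrent network (hypothetical sketch)."""

    def __init__(self, layers, batch_size=64):
        # Remember the exact arguments used to build this network so
        # that loading can re-run __init__ rather than restoring raw
        # object state.
        self._init_kwargs = {'layers': layers, 'batch_size': batch_size}
        self.layers = layers
        self.batch_size = batch_size  # would feed into compilation
        self.params = {}  # stand-in for trained weights and biases

    def save(self, path):
        # Save the constructor kwargs plus the parameter values, not
        # the whole object graph.
        with open(path, 'wb') as f:
            pickle.dump({'init_kwargs': self._init_kwargs,
                         'params': self.params}, f)

    @classmethod
    def load(cls, path):
        with open(path, 'rb') as f:
            data = pickle.load(f)
        # Rebuild via __init__ so derived state (e.g. anything compiled
        # against batch_size) is set up again, then restore parameters.
        net = cls(**data['init_kwargs'])
        net.params.update(data['params'])
        return net
```

With this pattern, a network created with `batch_size=32` comes back with `batch_size=32` after a save/load round trip, because `__init__` runs again with the original arguments instead of the default being silently reapplied.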