tensorlayer / TensorLayer

Deep Learning and Reinforcement Learning Library for Scientists and Engineers
http://tensorlayerx.com

Easy way to add regularization loss when network has multiple outputs? #110

Closed: taochenshh closed this issue 7 years ago

taochenshh commented 7 years ago

I see from the API documentation that I can add a regularization loss like this:

```python
cost = (cost
        + tl.cost.maxnorm_regularizer(1.0)(network.all_params[0])
        + tl.cost.maxnorm_regularizer(1.0)(network.all_params[2]))
```

But suppose I have a neural network with one input layer and multiple output layers, say out1 and out2. In that case out1.all_params and out2.all_params will share some parameters, so I cannot use the method above directly. Of course, I could combine the two lists and deduplicate them with a set, but is there a more elegant way to handle this in TensorLayer? By the way, why are the weight and bias variables not created with the keyword regularizer=, which would add each variable's penalty to tf.GraphKeys.REGULARIZATION_LOSSES?
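For context, the regularizer= mechanism the question refers to is plain TensorFlow 1.x, not TensorLayer; a minimal sketch of how it works (variable name and shape are illustrative):

```python
import tensorflow as tf

# Passing regularizer= at variable creation registers the penalty in
# the tf.GraphKeys.REGULARIZATION_LOSSES collection automatically.
w = tf.get_variable(
    'w', shape=[784, 256],
    regularizer=tf.contrib.layers.l2_regularizer(1e-4))

# Later, collect every registered penalty in one call and sum them.
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
total_reg = tf.add_n(reg_losses)
```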

zsdonghao commented 7 years ago

You may find the function tl.layers.list_remove_repeat helpful, and instead of adding the losses one by one, you can use a for loop.

As I understand it, the following script should solve your problem:

```python
cost = 0
for p in tl.layers.list_remove_repeat(net1.all_params + net2.all_params):
    cost = cost + tl.cost.maxnorm_regularizer(1.0)(p)
```
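To make the pattern concrete, here is a minimal end-to-end sketch assuming TensorLayer 1.x's graph API; the layer names and sizes are illustrative, not from the original thread:

```python
import tensorflow as tf
import tensorlayer as tl

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')

# One shared trunk feeding two output heads.
net_in = tl.layers.InputLayer(x, name='input')
shared = tl.layers.DenseLayer(net_in, n_units=256, act=tf.nn.relu, name='shared')
out1 = tl.layers.DenseLayer(shared, n_units=10, name='out1')
out2 = tl.layers.DenseLayer(shared, n_units=10, name='out2')

# out1.all_params and out2.all_params both contain the shared trunk's
# variables; deduplicate first so each parameter is penalized only once.
params = tl.layers.list_remove_repeat(out1.all_params + out2.all_params)
reg = tf.add_n([tl.cost.maxnorm_regularizer(1.0)(p) for p in params])
```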

taochenshh commented 7 years ago

Thanks, this solved my problem.