A slim TensorFlow wrapper that provides syntactic sugar for tensor variables. This library will be helpful for practical deep learning researchers rather than beginners.
This makes it possible to disable summaries in a `sg_context` scope using `with sg_context(summary=False):`.
The motivation is that I otherwise end up with 1000+ summaries; it takes a long time to load in TensorBoard, and I can't open the tabs anyway because the browser just freezes.
To implement this, a `sg_summary_func` decorator, similar to `sg_sugar_func` and `sg_layer_func`, was added. This decorator simplifies the summary implementations quite a lot and allows `sg_context` to affect the summary functions.
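The mechanism can be sketched in plain Python. Everything below (the thread-local state, the option names, the `sg_summary_value` stand-in) is an illustrative assumption, not sugartensor's actual implementation:

```python
import threading
from contextlib import contextmanager
from functools import wraps

# Hypothetical thread-local store standing in for sg_context's state.
_context = threading.local()

@contextmanager
def sg_context(**opts):
    # Push options (e.g. summary=False) for the duration of the with-block.
    prev = getattr(_context, 'opts', {})
    _context.opts = {**prev, **opts}
    try:
        yield
    finally:
        _context.opts = prev

def sg_summary_func(func):
    # Wrap a summary function so it becomes a no-op when the enclosing
    # sg_context scope has summary=False.
    @wraps(func)
    def wrapper(tensor, **kwargs):
        if not getattr(_context, 'opts', {}).get('summary', True):
            return  # summaries disabled in this scope
        return func(tensor, **kwargs)
    return wrapper

recorded = []

@sg_summary_func
def sg_summary_value(tensor, prefix='train'):
    # Stand-in for a real summary op; records what would be summarized.
    recorded.append((prefix, tensor))

sg_summary_value('loss')          # summary recorded
with sg_context(summary=False):
    sg_summary_value('loss')      # skipped: summaries disabled in scope
```

The point of routing all summary functions through one decorator is that the scope check lives in a single place instead of being repeated in every summary implementation.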
Unfortunately, this introduces two small API changes.
If `prefix=None` is set, the default prefix is now used, where before it removed the prefix. To remove the prefix, `prefix=''` should now be used. I hope this is okay; to my mind it actually makes more sense.
The required `gradient` argument in `sg_summary_gradient` must now be provided as a keyword argument. This is similar to `sg_ce` and the other loss functions.
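In Python, a required keyword argument can be enforced with keyword-only syntax (a bare `*` in the signature). A minimal sketch of what the new calling convention looks like; the body here is a placeholder, not the real implementation:

```python
def sg_summary_gradient(tensor, *, gradient, prefix=None):
    # The bare `*` makes `gradient` keyword-only: callers must write
    # sg_summary_gradient(t, gradient=g); passing it positionally
    # raises a TypeError.
    return (tensor, gradient, prefix)

sg_summary_gradient('w', gradient='dw')  # OK: keyword form
try:
    sg_summary_gradient('w', 'dw')       # positional form is rejected
except TypeError as e:
    print('rejected:', e)
```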