NETWORK API REFACTORING - TO DO LIST
Finished work:
Unittests Status:
test_layers_flow_control.py => removed in favour of eager mode
test_layers_importer.py
[x] refactored
[ ] tested
test_layers_merge.py
[x] refactored
[ ] tested
test_layers_normalization.py
[x] refactored
[ ] tested
test_layers_padding.py
[x] refactored
[ ] tested
test_layers_pooling.py
[x] refactored
[ ] tested
test_layers_recurrent.py
[x] refactored
[ ] tested
test_layers_reshape.py
[x] refactored
[ ] tested
test_layers_spatial_transformer.py
[x] refactored
[ ] tested
test_layers_stack.py
[x] refactored
[ ] tested
test_layers_super_resolution.py
[x] refactored
[ ] tested
test_layers_time_distributed.py
[x] refactored
[ ] tested
test_logging.py
[x] refactored
[x] tested
test_logging_hyperdash.py
[x] refactored
[x] tested
test_mnist_simple.py
[x] refactored
[ ] tested
test_model_compilednetwork.py
[x] refactored
[ ] tested
test_models.py
[x] refactored
[ ] tested
test_network_custom_2d.py
[x] refactored
[x] tested
test_network_custom_input_layers.py
[x] refactored
[x] tested
test_network_custom_multiple_inputs.py
[x] refactored
[x] tested
test_network_custom_multiple_outputs.py
[x] refactored
[x] tested
test_network_sequential_1d.py
[x] refactored
[x] tested
test_network_sequential_2d.py
[x] refactored
[x] tested
test_network_sequential_3d.py
[x] refactored
[x] tested
test_network_sequential_rnn.py
[x] refactored
[x] tested
test_optimizer_amsgrad.py
[x] refactored
[ ] tested
test_pydocstyle.py
[ ] refactored
[ ] tested
test_reuse_mlp.py
[x] refactored
[ ] tested
test_tf_layers.py
[x] refactored
[ ] tested
test_timeout.py
[x] refactored
[x] tested
test_utils_predict.py
[x] refactored
[ ] tested
test_yapf_format.py
[x] refactored
[x] tested
Work to be done
Layers
contrib
ROIPoolingLayer:
[x] refactored
[ ] tested
convolution/
AtrousConv1dLayer:
[x] refactored => AtrousConv1dLayer and AtrousConv2dLayer are removed, use Conv1d/2d with dilation_rate instead.
[x] tested
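Since the atrous layers are folded into Conv1d/2d via dilation_rate, here is a framework-free sketch (illustration only, not the TensorLayer API) of what a dilation rate does to a 1-D convolution, i.e. why a separate atrous layer is redundant:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation_rate=1):
    """'VALID' 1-D convolution whose kernel taps are `dilation_rate` apart."""
    x = np.asarray(x, dtype=float)
    kernel = np.asarray(kernel, dtype=float)
    span = (len(kernel) - 1) * dilation_rate + 1  # receptive-field width
    return [float(np.dot(x[i:i + span:dilation_rate], kernel))
            for i in range(len(x) - span + 1)]

x = list(range(8))                                    # [0, 1, ..., 7]
print(dilated_conv1d(x, [1, 1, 1], dilation_rate=1))  # [3.0, 6.0, ..., 18.0]
print(dilated_conv1d(x, [1, 1, 1], dilation_rate=2))  # [6.0, 9.0, 12.0, 15.0]
```

With dilation_rate=1 this is an ordinary convolution; a larger rate only widens the receptive field, which is exactly what Conv1d/2d's dilation_rate argument controls.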
spatial_transformer.py
SpatialTransformer2dAffineLayer: see test_layers_spatial_transformer.py
[x] refactored
[x] tested
stack.py
StackLayer:
[x] refactored
[ ] tested
UnStackLayer: => needs to be checked, currently not working
[ ] refactored
[ ] tested
time_distribution.py
TimeDistributedLayer:
[x] refactored
[ ] tested
TensorLayer Hub
tl.models => became tl.hub
[ ] VGG16
[ ] VGG19
[ ] MobileNet
[ ] SqueezeNet
Examples
basic_tutorials
[x] refactored
[ ] tested
data_process
[x] refactored
[x] tested
database
[x] refactored
[ ] tested
deprecated_tutorials
[ ] refactored
[ ] tested
distributed_training
[x] refactored
[ ] tested
keras_tfslim
[x] refactored
[ ] tested
pretrained_cnn
[x] refactored
[ ] tested
quantized_net
[x] refactored
[ ] tested
reinforcement_learning
[x] refactored
[ ] tested
text_classification
[x] refactored
[ ] tested
text_generation
[x] refactored
[ ] tested
text_ptb
[x] refactored
[ ] tested
text_word_embedding
[x] refactored
[ ] tested
More TODO
[ ] add scope_name into the layer name
[ ] don't check duplicated layer name in core layer
Test code:
plh = tf.placeholder(tf.float16, (100, 32))
net = tl.layers.InputLayer(name='in')(plh)
with tf.variable_scope('test'):
    net = tl.layers.DenseLayer(n_units=50, act=tf.nn.relu, name="dense")(net)
with tf.variable_scope('test', reuse=True):
    net = tl.layers.DenseLayer(n_units=50, act=tf.nn.relu, name="dense")(net)
print(net['test/dense'])
print(net['test/dense_2'])
assert len(net.all_weights) == 2
print(net.all_params)  # gives a warning, but still works
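The naming behaviour the test above expects can be sketched framework-free (a hypothetical helper, not the TensorLayer internals): the active scope is prefixed to the layer name, and duplicates get an automatic "_2", "_3", ... suffix instead of triggering a duplicated-name error.

```python
class NameRegistry:
    """Scope-prefixed layer names with automatic de-duplication (sketch)."""

    def __init__(self):
        self._counts = {}

    def register(self, scope, name):
        # prefix the enclosing scope, if any
        full = f"{scope}/{name}" if scope else name
        n = self._counts.get(full, 0) + 1
        self._counts[full] = n
        # first use keeps the plain name; later uses get "_2", "_3", ...
        return full if n == 1 else f"{full}_{n}"

reg = NameRegistry()
print(reg.register("test", "dense"))   # test/dense
print(reg.register("test", "dense"))   # test/dense_2
```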
How should manual compile handle this?
with tf.variable_scope('test'):
    model.add(tl.layers.DenseLayer(n_units=50, act=tf.nn.relu, name="seq_layer_9"))
with tf.variable_scope('test', reuse=True):
    model.add(tl.layers.DenseLayer(n_units=50, act=tf.nn.relu, name="seq_layer_9"))
plh = tf.placeholder(tf.float16, (100, 32))
net = model.build(plh)
print(net['test/seq_layer_9'])
Rename
[ ] assign_params --> assign_weights
[ ] print_params --> print_weights
[ ] print_layers --> print_outputs
[ ] all_layers --> all_outputs
[ ] all_params --> all_weights
[ ] local_weights
[ ] count_params --> count_all_weights
[ ] count_local_weights
[ ] restore_params --> restore_weights in tl.models/hub
[ ] get_layers_with_name --> get_outputs_with_name
? Open question: raise an exception with a hint when users use the old names?
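One way to keep the old names alive during the transition is a deprecation alias: the old attribute (e.g. all_params) warns, then delegates to the new one (all_weights). A hedged sketch with illustrative class/attribute names:

```python
import warnings

class Network:
    """Sketch of a network exposing a deprecated alias for a renamed attribute."""

    def __init__(self, weights):
        self.all_weights = weights

    @property
    def all_params(self):
        # old name: warn, then delegate to the new name
        warnings.warn("all_params is deprecated; use all_weights",
                      DeprecationWarning, stacklevel=2)
        return self.all_weights

net = Network([1, 2])
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    params = net.all_params
print(params)        # [1, 2] -- the old name still works
print(len(caught))   # 1 -- but it emits a DeprecationWarning
```

This matches the "give a warning, but still works" behaviour the test code above already relies on for all_params.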
Documentation
Simplify layer names, e.g. DenseLayer --> Dense, then add code such as:
def BatchNormLayer(*args, **kwargs):
    raise Exception("BatchNormLayer(net, is_train=True, name='bn') --> BatchNorm(name='bn')(net, is_train=True)")
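The "raise with a migration hint" idea above can be generalised with a small stub factory, so each removed or renamed layer class does not need a hand-written stub (a sketch, not the shipped code):

```python
def removed_layer(old_name, hint):
    """Build a stand-in callable that fails loudly with a migration hint."""
    def _stub(*args, **kwargs):
        raise Exception(f"{old_name} was removed. {hint}")
    _stub.__name__ = old_name
    return _stub

# one line per legacy name, instead of one def per name
BatchNormLayer = removed_layer(
    "BatchNormLayer",
    "BatchNormLayer(net, is_train=True, name='bn') --> BatchNorm(name='bn')(net, is_train=True)",
)

try:
    BatchNormLayer(None, is_train=True, name='bn')
except Exception as e:
    print(e)   # the message carries the old-to-new migration hint
```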