waleedka / hiddenlayer

Neural network graphs and training metrics for PyTorch, Tensorflow, and Keras.
MIT License

Unexpected data type for grouping rules? (Codebase merge issue 2 of 3) #2

Closed: philferriere closed this issue 6 years ago

philferriere commented 6 years ago

Hi @waleedka, please see: https://github.com/waleedka/weightwatcher/blob/aa8f1546f27401b61296d96be8095e290318f8b3/ww/builder_pytorch.py#L44-L87

waleedka commented 6 years ago

This code isn't used yet; I was just exploring an idea.

I think of node grouping and transformations as a two-step process:

  1. Framework-specific code that maps all layer names to a standard naming convention. This code should be as small as possible and handle only the differences between frameworks. For example, it converts "Linear", "fully connected", "fc", etc. to "linear", and converts "Convolution", "Conv", "Conv2d", etc. to "conv" (see the sketch after this list).

  2. Framework-agnostic code that merges and transforms layers for better visualization. This works on the result of step 1. For example, it merges a conv/bn/relu sequence into one layer for a cleaner visualization.
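To make step 1 concrete, here's a minimal sketch of a dict-based name mapping. The table and helper below are hypothetical illustrations, not the repo's actual code:

```python
# Framework-specific aliases mapped to a standard vocabulary (step 1).
# This mapping and helper are illustrative assumptions, not the real API.
NAME_MAP = {
    "Linear": "linear", "fully connected": "linear", "fc": "linear",
    "Convolution": "conv", "Conv": "conv", "Conv2d": "conv",
    "BatchNorm2d": "bn", "ReLU": "relu",
}

def normalize_op(op_name):
    """Map a framework-specific layer name to the standard convention."""
    return NAME_MAP.get(op_name, op_name.lower())
```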

Ideally, the grouping rules in step 2 should be user-customizable using a simple syntax. For example, if someone doesn't want linear and ReLU merged, they can easily remove the "linear/relu" rule. Similarly, if someone often uses Conv followed by sigmoid and wants to merge them, they should be able to add a "conv/sigmoid" rule easily.

The current syntax is simply "a/b/c", which catches layers that directly follow each other. I was experimenting with adding more rule types, such as sibling layers and so on.
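As a rough illustration of how an "a/b/c" rule could be matched and merged, here's a hypothetical sketch that treats the graph as a plain list of standardized names (a simplifying assumption; the real graph isn't a flat list):

```python
# Hypothetical sketch, not the actual implementation.
def match_sequence(rule, ops, start):
    """Return (start, end) if the rule's layer names appear consecutively
    in `ops` beginning at index `start`, else None."""
    pattern = rule.split("/")            # "conv/bn/relu" -> ["conv", "bn", "relu"]
    end = start + len(pattern)
    return (start, end) if ops[start:end] == pattern else None

def apply_rule(rule, ops):
    """Collapse every matched span into a single merged node name."""
    i = 0
    while i < len(ops):
        span = match_sequence(rule, ops, i)
        if span:
            ops[span[0]:span[1]] = [rule]  # ["conv","bn","relu"] -> ["conv/bn/relu"]
        i += 1
    return ops
```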

Then, from that I was thinking that if we build a nice parser for these rules then we might be able to use it for the mapping in 1 as well. For example, to map the TF graph to the standard graph we use in step 2 we need to do things like merging Conv and weights and biases nodes into one, or to remove all "assign" nodes. These were the two additional lines I added, but I haven't gotten the chance to continue down that path.
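For the "assign" case, the simplest form of such a rule might just be a prune pass. Again a hypothetical sketch, not the repo's actual syntax:

```python
# Assumed prune rule: drop TF bookkeeping nodes before the
# framework-agnostic pass. The set below is illustrative.
PRUNE_OPS = {"assign"}

def prune(ops):
    return [op for op in ops if op not in PRUNE_OPS]
```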

philferriere commented 6 years ago

Ok, good to know.

I was able to get decent graphs for a variety of models from the tf-slim model zoo using three sets of rules. These rules are applied sequentially to simplify the graph:

https://github.com/waleedka/weightwatcher/blob/aa8f1546f27401b61296d96be8095e290318f8b3/ww/graph.py#L222-L231
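For concreteness, the sequential application could look roughly like this. The driver below is an assumption that reuses the apply_rule sketch from earlier in this thread; the real rule sets are at the link above:

```python
# Hypothetical driver: each rule set simplifies the graph in turn.
# `apply_rule` is the sequence-merging sketch from earlier in this thread.
def simplify(ops, rule_sets):
    for rule_set in rule_sets:           # three sets, applied in order
        for rule in rule_set:
            ops = apply_rule(rule, ops)
    return ops
```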

Rules are passed to DirectedGraph()'s constructor:

https://github.com/waleedka/weightwatcher/blob/aa8f1546f27401b61296d96be8095e290318f8b3/ww/graph.py#L28-L37

You could use our framework-specific rules, use your own, or pass in a modified copy of ours; it's up to the caller.
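For example, a hedged sketch of how a caller might do that (the keyword name `rules` and the rule strings here are assumptions; the actual constructor signature is at the link above):

```python
# Hypothetical usage only; see graph.py for the real constructor.
my_rules = [
    "conv/bn/relu",   # merge conv -> batchnorm -> relu chains
    "linear/relu",    # merge linear -> relu pairs
]
dg = DirectedGraph(model, rules=my_rules)
```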

As a general rule, I strongly support the split between framework-specific and framework-agnostic code. In practice, though, when the framework-specific portion is very small, I tend to weigh that benefit against unnecessarily complicating the source code.

Here's what that means in our case. The rules currently passed in are the following:

TF CODE:

https://github.com/waleedka/weightwatcher/blob/aa8f1546f27401b61296d96be8095e290318f8b3/ww/builder_tf.py#L48-L74

PYTORCH CODE:

https://github.com/waleedka/weightwatcher/blob/aa8f1546f27401b61296d96be8095e290318f8b3/ww/builder_pytorch.py#L38-L53

The set of entries is so small that refactoring the code (beyond passing the rules as parameters) might feel somewhat artificial. However, if we discover rules so unusual that they completely break our current implementation, then yes, we absolutely should revisit the matter.