The code used for that paper was based on TF Lattice 1.x, and we have since updated the code base (see the release log). The current tfl.layers.Linear layer can implement the linear embedding described in the paper and a bit more: it lets you define per-dimension monotonicity, and it also lets you fix the L1 or L2 norm of the weights (by setting normalization_order to 1 or 2). If you use normalization_order=1 and also set monotonicities to 1, you get a "weighted average" layer, which guarantees that the output of the linear embedding stays within the range of the layer's inputs.
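A minimal sketch of that weighted-average configuration (the input size, use_bias=False, and training setup are illustrative assumptions, not from this thread):

```python
import tensorflow as tf
import tensorflow_lattice as tfl

# A 4-input "weighted average" linear layer (4 is an arbitrary choice).
# monotonicities=[1, 1, 1, 1] keeps every weight non-negative, and
# normalization_order=1 constrains the L1 norm of the weights to 1, so
# the weights form a convex combination. use_bias=False keeps the
# output inside the range spanned by the inputs.
linear = tfl.layers.Linear(
    num_input_dims=4,
    monotonicities=[1, 1, 1, 1],
    normalization_order=1,
    use_bias=False,
)

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), linear])
model.compile(loss='mse', optimizer='adam')
```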
Re meta learning: these layers are like any other layer you would use for the task, with the added benefit of letting you encode domain knowledge, such as monotonicity and other shape constraints, into the model design.
First, thank you for sharing this amazing library!
I have two questions regarding TensorFlow Lattice.
After reading the "Deep Lattice Networks and Partial Monotonic Functions" paper, I am trying to implement the deep lattice network it introduces, and I wonder whether "tfl.layers.Linear" is equivalent to the "Linear Embedding Layer" mentioned in the paper.
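For reference, this is roughly the kind of stack I am trying to build (a minimal sketch; the two-feature setup, keypoints, and layer sizes are my own illustrative assumptions, not from the paper):

```python
import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

# Two features in [0, 1]: per-feature PWL calibration, a two-dimensional
# monotonic linear embedding, then a small lattice on top.
inputs = tf.keras.Input(shape=(2,))
calibrated = tfl.layers.PWLCalibration(
    input_keypoints=np.linspace(0.0, 1.0, num=5),
    units=2,  # one calibrator per feature
    monotonicity='increasing',
    output_min=0.0,
    output_max=1.0,
)(inputs)
# Each tfl.layers.Linear emits a single output, so the "linear
# embedding" is one Linear layer per embedding dimension; the
# weighted-average constraints keep each output in [0, 1] for the
# lattice that follows.
embedding = tf.keras.layers.Concatenate()([
    tfl.layers.Linear(
        num_input_dims=2,
        monotonicities=[1, 1],
        normalization_order=1,
        use_bias=False,
    )(calibrated)
    for _ in range(2)
])
output = tfl.layers.Lattice(
    lattice_sizes=[2, 2],
    monotonicities=['increasing', 'increasing'],
)(embedding)
model = tf.keras.Model(inputs=inputs, outputs=output)
```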
And my last question: can this network be used for meta learning as well?
Thank you 😄