They present an optimization problem (think of a QP, a quadratic program) as a neural network layer whose output can be fed as input to another layer (such as a dense layer). They differentiate the KKT conditions of the optimization problem to obtain the gradients required for backpropagation.
These layers encode constraints and complex dependencies between the hidden states that traditional convolutional and fully-connected layers often cannot capture.
Link to the paper by Brandon Amos and J. Zico Kolter
Link to oral presentation
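The mechanics are easiest to see in the equality-constrained special case, where the KKT conditions reduce to a single linear system. Below is a minimal PyTorch sketch (my own illustration, not the authors' code; the function name `qp_layer` is made up here, and the paper's full method also handles inequality constraints via a batched interior-point solver). Backpropagating through `torch.linalg.solve` on the KKT system yields the implicit gradients with respect to the QP parameters.

```python
import torch

def qp_layer(Q, p, A, b):
    """Equality-constrained QP as a differentiable layer:
        minimize_z 0.5 * z^T Q z + p^T z   subject to   A z = b.
    The KKT conditions form the linear system
        [Q A^T] [z ]   [-p]
        [A  0 ] [nu] = [ b],
    so solving it with torch.linalg.solve lets autograd
    backpropagate through the optimality conditions."""
    n, m = Q.shape[0], A.shape[0]
    K = torch.cat([
        torch.cat([Q, A.T], dim=1),
        torch.cat([A, torch.zeros(m, m)], dim=1),
    ], dim=0)
    rhs = torch.cat([-p, b])
    sol = torch.linalg.solve(K, rhs)
    return sol[:n]  # primal solution z*; sol[n:] holds the multipliers

# Toy usage: gradients flow back to every QP parameter.
torch.manual_seed(0)
L = torch.randn(3, 3)
Q = (L @ L.T + torch.eye(3)).requires_grad_()  # positive definite
p = torch.randn(3, requires_grad=True)
A = torch.randn(1, 3, requires_grad=True)
b = torch.randn(1, requires_grad=True)

z = qp_layer(Q, p, A, b)   # forward pass: solve the QP
z.sum().backward()         # backward pass: implicit KKT gradients
print(z)
print(p.grad)
```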