This block of code constrains the norms of the rows of the weight matrix:

https://github.com/mdenil/dropout/blob/master/mlp.py#L245-L254

It should constrain the norms of the columns, as described in the original paper:

> Instead of penalizing the squared length (L2 norm) of the whole weight vector, we set an upper bound on the L2 norm of the incoming weight vector for each individual hidden unit. If a weight-update violates this constraint, we renormalize the weights of the hidden unit by division.
Given the matrix orientation used in the code, each column corresponds to a hidden unit (see: https://github.com/mdenil/dropout/blob/master/mlp.py#L38-L41), so it is the columns, not the rows, that should be constrained.
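In effect, the fix is to take the per-unit norms along axis 0 (down each column) rather than axis 1. A minimal NumPy sketch of the column-wise max-norm constraint, assuming the `W[n_in, n_out]` orientation described above (the function name and `max_norm` value are illustrative, not taken from the repo):

```python
import numpy as np

def constrain_column_norms(W, max_norm=3.0):
    """Renormalize each column of W so its L2 norm is at most max_norm.

    With the W[n_in, n_out] orientation, each column holds the incoming
    weights of one hidden unit, so the max-norm constraint from the
    dropout paper is applied column-wise.
    """
    col_norms = np.linalg.norm(W, axis=0)  # one norm per hidden unit
    # Scale factor is 1 for columns already within the bound,
    # max_norm / norm for columns that exceed it.
    scale = np.minimum(1.0, max_norm / np.maximum(col_norms, 1e-12))
    return W * scale  # scale broadcasts across rows, rescaling each column

# Columns whose norm exceeds the bound are projected back onto it;
# columns already within the bound are left unchanged.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 4)) * 10
W_constrained = constrain_column_norms(W, max_norm=3.0)
```

Applying the same logic with `axis=1` would instead bound the outgoing weights of each input unit, which is the row-wise behavior the linked code currently implements.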