lululxvi / deepxde

A library for scientific machine learning and physics-informed learning
https://deepxde.readthedocs.io
GNU Lesser General Public License v2.1

Intermediate Layer activations #1517

Open neural-everything opened 11 months ago

neural-everything commented 11 months ago

Hi @lululxvi, I can't seem to find a way to get the outputs of each layer, either as a callback or some way to write them to a file. Thanks in advance.

praksharma commented 10 months ago

If you mean the output from each neuron, you should look for the code that does the forward pass. The forward pass is defined in the class definition of the neural network. If you look at any example, you will find this line:

model = dde.Model(data, net)

This should be your first clue. Now let us look at the definition of Model's __init__().

    def __init__(self, data, net):
        self.data = data
        self.net = net  # the network is stored as an attribute of the Model

Bingo! You can access the neural network through the net attribute, i.e. model.net (it is an attribute, not a method).

Let us assume that you are using a fully connected neural network, i.e. dde.nn.FNN(). If you just type model.net, you get the network's architecture:

FNN(
  (linears): ModuleList(
    (0): Linear(in_features=1, out_features=50, bias=True)
    (1): Linear(in_features=50, out_features=50, bias=True)
    (2): Linear(in_features=50, out_features=50, bias=True)
    (3): Linear(in_features=50, out_features=2, bias=True)
  )
)
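
For context, a network with exactly this architecture is defined by the layer sizes [1, 50, 50, 50, 2]; the snippet below is only a sketch, and the activation and initializer are illustrative choices, not taken from your setup.

import deepxde as dde

# Layer sizes [1, 50, 50, 50, 2] reproduce the printed architecture above;
# "tanh" and "Glorot normal" are placeholder choices.
net = dde.nn.FNN([1, 50, 50, 50, 2], "tanh", "Glorot normal")
print(net)  # shows the FNN(...) / ModuleList(...) structure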

The forward pass for FNN is simply dde.Model.net.forward(), or in our case model.net.forward() (see the FNN source in the repository).

Now you need to modify this function to print or save the output of each layer. Simply add two print statements: one inside the loop for the intermediate layers and one for the last layer (marked with comments below).

    def forward(self, inputs):
        x = inputs
        if self._input_transform is not None:
            x = self._input_transform(x)
        for j, linear in enumerate(self.linears[:-1]):
            x = (
                self.activation[j](linear(x))
                if isinstance(self.activation, list)
                else self.activation(linear(x))
            )
            print(x)  # added: output of each intermediate (hidden) layer
        x = self.linears[-1](x)
        if self._output_transform is not None:
            x = self._output_transform(inputs, x)
        print(x)  # added: output of the last layer
        return x

I haven't tried it, but it should do the job. Just remember that this might slow down training and fill your terminal with thousands of lines, so it is better to redirect the script's output to a file:

python your_problem.py >> output.txt
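
If you would rather not edit the DeepXDE source, and you are on the PyTorch backend, another option is to register forward hooks on the Linear layers of model.net and write the captured activations to a file. The sketch below assumes the PyTorch backend and that model has already been built as above; activations and make_hook are illustrative names, not DeepXDE API.

import numpy as np

activations = {}  # illustrative container for the captured layer outputs

def make_hook(name):
    # A PyTorch forward hook receives (module, input, output) after each forward pass.
    def hook(module, inputs, output):
        activations[name] = output.detach().cpu().numpy()
    return hook

# model.net.linears is the ModuleList shown in the architecture printout.
for i, layer in enumerate(model.net.linears):
    layer.register_forward_hook(make_hook(f"linear_{i}"))

# ... compile and train as usual: model.compile(...), model.train(...) ...

# Afterwards, dump the most recent activations of every layer to disk.
np.savez("activations.npz", **activations)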