Closed ZeyuSun closed 3 years ago
Support multiple input of different dimensions
This is common in multimodal learning. For example, suppose that for each MNIST image `x`, we extract 3 additional features stored in `z`. The following code should work.
```python
x_shape = (1, 28, 28)
z_shape = (3,)
summary(model, [x_shape, z_shape])
```
However, the current implementation converts `[x_shape, z_shape]` to a numpy array, which assumes that `x_shape` and `z_shape` have the same length: https://github.com/sksq96/pytorch-summary/blob/345d898d84507b848e92dab4629e03405e19afce/torchsummary/torchsummary.py#L101-L102
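A minimal sketch of the failure and a possible workaround (the fix shown is hypothetical, not the library's actual code): shapes of different lengths form a ragged list, which numpy cannot stack into a regular array, but iterating over the shapes one at a time sidesteps the conversion entirely.

```python
import numpy as np

# The two input shapes have different lengths (ragged), so they cannot
# be stacked into one regular numpy array.
x_shape = (1, 28, 28)
z_shape = (3,)
try:
    arr = np.array([x_shape, z_shape], dtype=np.int64)
    print("converted:", arr)
except ValueError as err:
    print("conversion failed:", err)  # raised because the list is ragged

# Sketch of a fix (hypothetical): compute the per-input sizes shape by
# shape instead of converting the whole list at once.
total_input_size = sum(abs(int(np.prod(s))) for s in [x_shape, z_shape])
print(total_input_size)  # 1*28*28 + 3 = 787
```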
I realized I was not using the latest version.