I noticed this doesn't support torch.nn.MultiheadAttention. I plan to fix this myself, so no help is needed; this is just a placeholder to make people aware of the issue.
import torch
from torch import nn
from torchsummaryX import summary
embedding = torch.nn.MultiheadAttention(2, 2)
x = torch.zeros((2, 2, 2))
summary(embedding, x, x, x)
A similar setup that wraps the attention layer in a module fails the same way:
import torch
from torch import nn
from torchsummaryX import summary
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = torch.nn.MultiheadAttention(2, 2)

    def forward(self, x):
        return self.embedding(x, x, x)
model = Model()
x = torch.zeros((2, 2, 2))
summary(model, x)
Output: both snippets raise an error due to an empty DataFrame (the summary table ends up with no rows for the attention layer).
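For context, here is a minimal sketch of the failure mode, assuming torchsummaryX aggregates one row per hooked submodule call into a pandas DataFrame (the `rows` list and column names below are hypothetical, not the library's actual internals): if no hook fires for MultiheadAttention, the frame stays empty and any positional lookup on it raises.

```python
import pandas as pd

# Hypothetical stand-in for the per-layer rows torchsummaryX collects via
# forward hooks; for MultiheadAttention no rows are recorded (assumption).
rows = []
df = pd.DataFrame(rows, columns=["Layer", "Params"])

try:
    df.iloc[0]  # positional access on an empty frame is out of bounds
except IndexError:
    print("empty dataframe")
```

This mirrors the reported error: the fix would presumably be either registering hooks that cover MultiheadAttention or handling the empty-frame case gracefully.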