lkurlandski opened this issue 1 year ago
Are these results to be expected? I don't remember reading about anything like this in the paper. If so, is there a recommended batch size to use when evaluating the model?
The predictions should be the same regardless of batch size. So that is weird.
Are X_1 and X_2 real files / data, or made up test data? Are all 3 data points the same length without any padding?
Can you print out / look at `model(X_*)[0].data[:,1][1]` without any softmax step there?
We've got a heavy load of priorities going into the new year, but at a minimum I will help debug over GitHub, and I will try to find time to mess with this myself.
Thanks for the rapid response and any effort on your part!
X_1 and X_2 are derived from a real malicious file from the SOREL-20M dataset. Each element in X_1 and X_2 is a perturbed variant of the original sample, so X_1[0], X_1[1], X_2[0], and X_2[1] are all slightly perturbed malware tensors with a common ancestor. All have the same dimensions and differ from one another by at most 1024 elements.
All three data points are the same length and none of them have any padding.
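For completeness, here is a quick sketch of how one might verify that claim. It assumes the attached tensors have been saved locally; the `X_1.pt` / `X_2.pt` filenames are placeholders for the attachments below, not paths from the report.

```python
import itertools
import torch

# Load the attached tensors (filenames assumed from the attachments below).
X_1 = torch.load("X_1.pt")
X_2 = torch.load("X_2.pt")

# All four variants share a shape and should differ pairwise in at most 1024 positions.
variants = {"X_1[0]": X_1[0], "X_1[1]": X_1[1], "X_2[0]": X_2[0], "X_2[1]": X_2[1]}
for (name_a, a), (name_b, b) in itertools.combinations(variants.items(), 2):
    print(name_a, "vs", name_b, "->", (a != b).sum().item(), "differing elements")
```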
```
>>> print(X_1.shape)
torch.Size([2, 1971368])
>>> print(X_2.shape)
torch.Size([2, 1971368])
```
The logits are also different:
```
>>> print(model(X_1)[0].data[:,1][1])
tensor(-1.5864)
>>> print(model(X_2)[0].data[:,1][1])
tensor(-1.8645)
```
The batch size and which examples are in each batch can impact how a model learns during training. During evaluation, however, they should only impact the speed at which data is processed, not the model's predictions (as far as I am aware). I have found that the outputs of this model for a single example are affected by properties of the other examples in its batch: evaluating an example individually (batch_size=1) can produce different predictions than including it in a larger batch.
I believe this is a result of padding added in `LowMemConv.LowMemConvBase.seq2fix`, specifically the line in that method that pads the input.
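For intuition, here is a toy sketch (my own illustration, not code from this repository, and not necessarily the exact mechanism in `seq2fix`) of how padding a sequence before a convolution followed by temporal max pooling can change the pooled features for the very same sample:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
conv = nn.Conv1d(in_channels=1, out_channels=8, kernel_size=16)

x = torch.randn(1, 1, 100)    # one sample
x_padded = F.pad(x, (0, 60))  # same sample, padded as it might be inside a batch

# Global max pooling over time, as MalConv-style models use.
out = conv(x).max(dim=2).values
out_padded = conv(x_padded).max(dim=2).values

# Any nonzero difference means the padded region leaked into the pooled features.
print((out - out_padded).abs().max())
```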
I included a minimal working example demonstrating this behavior below.
Tensors: X_1.pt.txt X_2.pt.txt
Environment: environment.yml.txt
Example:
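The original script is not reproduced here, so the following is a minimal sketch of the kind of check described above. The way `model` is obtained and the `X_1.pt` filename are assumptions, not details from the report; `model` is assumed to be a trained MalConv2-style model that has already been loaded.

```python
import torch

# Assumes `model` is a trained MalConv2-style model already loaded elsewhere,
# and that the attached tensor has been saved as X_1.pt (placeholder name).
X_1 = torch.load("X_1.pt")  # shape (2, 1971368)

model.eval()
with torch.no_grad():
    # Logit for the second sample when it is scored as part of a batch of two.
    logit_in_batch = model(X_1)[0].data[:, 1][1]
    # Logit for the same sample when it is scored alone (batch_size=1).
    logit_alone = model(X_1[1].unsqueeze(0))[0].data[:, 1][0]

# At evaluation time these should be identical, but per the report above they differ.
print(logit_in_batch.item(), logit_alone.item())
```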