Open yulonglin opened 4 months ago
The output shape in one of the examples is wrong: the batch size is 2, not 1, so the expected shape is torch.Size([2, 2]). Further, there is no need to import RobertaClassificationHead in that example, since it is accessed via torchtext.models.RobertaClassificationHead anyway.
How to reproduce the issue

Run the following:
>>> import torch, torchtext
>>> from torchtext.models import RobertaClassificationHead
>>> from torchtext.functional import to_tensor
>>> xlmr_large = torchtext.models.XLMR_LARGE_ENCODER
>>> classifier_head = torchtext.models.RobertaClassificationHead(num_classes=2, input_dim=1024)
>>> model = xlmr_large.get_model(head=classifier_head)
>>> transform = xlmr_large.transform()
>>> input_batch = ["Hello world", "How are you!"]
>>> model_input = to_tensor(transform(input_batch), padding_value=1)
>>> output = model(model_input)
>>> output.shape
Expected result

torch.Size([2, 2])
Result given in docstring

torch.Size([1, 2])
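For reference, a minimal sketch of how the docstring example could read with both fixes applied, assuming the docstring otherwise matches the reproduction above: the redundant RobertaClassificationHead import is dropped, and the reported shape reflects the batch size of 2.

>>> import torch, torchtext
>>> from torchtext.functional import to_tensor
>>> xlmr_large = torchtext.models.XLMR_LARGE_ENCODER
>>> # The head is reachable via torchtext.models, so no separate import is needed
>>> classifier_head = torchtext.models.RobertaClassificationHead(num_classes=2, input_dim=1024)
>>> model = xlmr_large.get_model(head=classifier_head)
>>> transform = xlmr_large.transform()
>>> input_batch = ["Hello world", "How are you!"]
>>> model_input = to_tensor(transform(input_batch), padding_value=1)
>>> output = model(model_input)
>>> output.shape  # (batch_size, num_classes): two inputs, two classes
torch.Size([2, 2])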