Closed lu-renjie closed 1 year ago
I used the "thop" package to calculate the FLOPs of one forward pass of the short-term transformer. Below is a demo:

```python
import torch
from thop import profile
from transformers import PretrainedConfig

config = PretrainedConfig.from_pretrained('bert_config/bert-base-uncased')
config.bev_dim = BEV_DIM
config.hidden_size = 768
config.num_x_layers = 4
config.use_lang2visn_attn = False
model = LocalBEVEncoder(config)

txt_embeds = torch.zeros([1, 80, 768])
txt_masks = torch.ones([1, 80]).bool()
bev_fts = torch.zeros([1, BEV_DIM * BEV_DIM, 768])
bev_pos_fts = torch.zeros([1, BEV_DIM * BEV_DIM, 10])
bev_masks = torch.ones([1, BEV_DIM * BEV_DIM]).bool()
bev_nav_masks = torch.ones([1, BEV_DIM * BEV_DIM]).bool()

flops, params = profile(model, inputs=(txt_embeds, txt_masks, bev_fts, bev_pos_fts,
                                       bev_masks, bev_nav_masks, None, None))
print('FLOPs = ' + str(flops / 1000**3) + 'G')
print('Params = ' + str(params / 1000**2) + 'M')
```
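As a sanity check on the unit conversions in the snippet above, the FLOPs and parameter count of a single linear layer can be estimated by hand with the same `1000**3` (GFLOPs) and `1000**2` (MParams) divisors. This is only an illustrative sketch; the dimensions below are placeholders, not the actual model's shapes.

```python
# Hand-estimating FLOPs/params for one linear layer, to check the
# unit conversions used with thop. Dimensions are illustrative.

def linear_flops(batch, in_dim, out_dim):
    # One multiply-accumulate per weight per token in the batch
    # (thop counts a MAC as one FLOP by default).
    return batch * in_dim * out_dim

def linear_params(in_dim, out_dim, bias=True):
    # Weight matrix plus optional bias vector.
    return in_dim * out_dim + (out_dim if bias else 0)

flops = linear_flops(batch=80, in_dim=768, out_dim=768)
params = linear_params(768, 768)
print('FLOPs = ' + str(flops / 1000**3) + 'G')   # 0.04718592G
print('Params = ' + str(params / 1000**2) + 'M') # 0.590592M
```

Note the exponent forms `1000**3` and `1000**2`: plain markdown swallows the `**`, which is why the original comment rendered them as `10003` and `10002`.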
Ok, thank you.
Received, thank you.
Please consider giving me a star. Thanks :D
How did you calculate the FLOPs reported in your paper?