allenai / longformer

Longformer: The Long-Document Transformer
https://arxiv.org/abs/2004.05150
Apache License 2.0

Generating Embeddings #164

Open MarwaEssam opened 3 years ago

MarwaEssam commented 3 years ago

I am trying to run the model and, as a first step, I need to generate embeddings for text that I can later use for classification. I tried generating the embedding from the CLS vector of the last hidden state of the model, but every time I get a different embedding vector for the same sentence. Isn't this supposed to be fixed, since I am in evaluation mode?

Here is my code:

    import torch
    from transformers import LongformerConfig, LongformerModel, LongformerTokenizerFast

    config = LongformerConfig.from_pretrained('longformer-base-4096/')
    config.attention_mode = 'sliding_chunks'
    model = LongformerModel.from_pretrained('longformer-base-4096/', config=config)
    model.eval()  # disables dropout; does not change how the weights were loaded
    tokenizer = LongformerTokenizerFast.from_pretrained('roberta-base')
    tokenizer.model_max_length = 4096
    SAMPLE_TEXT = ' '.join(["What is the capital of Egypt? Cairo is the capital of Egypt."] * 100)
    tok = tokenizer.encode(SAMPLE_TEXT, truncation=True)
    input_ids = torch.tensor(tok).unsqueeze(0)
    attention_mask = torch.ones(input_ids.shape, dtype=torch.long,
                                device=input_ids.device)
    output = model(input_ids, attention_mask=attention_mask, output_hidden_states=True)
    result = output.last_hidden_state.squeeze()[0]  # embedding of the first (<s>/CLS) token

I also tried getting it this way: embedding = output.hidden_states[-1].squeeze()[0]

I also tried embedding = output[0][0][0], as well as result = model.embeddings(input_ids)[0][0].

I still get different embeddings each time. Any help?
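
For reference, here is a minimal determinism check for the setup above (a sketch, not code from the repo). Note that output.last_hidden_state, output.hidden_states[-1], and output[0] all name the same tensor, so switching between them cannot change the result, while model.embeddings(input_ids) is the output of the pre-encoder embedding layer, a different quantity altogether:

    # Two forward passes over identical inputs should agree exactly in eval mode.
    # If they agree within one run but differ across separate runs of the script,
    # the weights themselves are changing at load time (e.g. random initialization).
    with torch.no_grad():
        emb1 = model(input_ids, attention_mask=attention_mask)[0][0, 0]
        emb2 = model(input_ids, attention_mask=attention_mask)[0][0, 0]
    print(torch.allclose(emb1, emb2))  # expected: True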

P.S. I get the following warning at the beginning of the run (truncated here for readability; the elided middle repeats the same attention self.query/key/value weights and biases, their *_global counterparts, and the attention.output, intermediate, and output dense/LayerNorm parameters for roberta.encoder.layer.0 through roberta.encoder.layer.11):

Some weights of the model checkpoint at longformer-base-4096/ were not used when initializing LongformerModel: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.query.weight', ..., 'roberta.encoder.layer.11.output.LayerNorm.bias', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias', 'lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
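
For context, that warning says that none of the roberta.*-prefixed weights in the checkpoint were loaded into transformers' LongformerModel, which leaves the encoder randomly initialized and would explain getting different embeddings on every fresh load. The release checkpoints in this repo are meant to be loaded through the repo's own Longformer class, as in the README. A sketch of that path, adapted to the example above (assuming the weight mismatch is indeed the cause; pad_to_window_size comes from longformer.sliding_chunks):

    import torch
    from longformer.longformer import Longformer, LongformerConfig
    from longformer.sliding_chunks import pad_to_window_size
    from transformers import RobertaTokenizer

    config = LongformerConfig.from_pretrained('longformer-base-4096/')
    config.attention_mode = 'sliding_chunks'
    model = Longformer.from_pretrained('longformer-base-4096/', config=config)
    model.eval()

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
    tokenizer.model_max_length = model.config.max_position_embeddings

    text = ' '.join(["What is the capital of Egypt? Cairo is the capital of Egypt."] * 100)
    input_ids = torch.tensor(tokenizer.encode(text, truncation=True)).unsqueeze(0)
    attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device)
    attention_mask[:, 0] = 2  # global attention on <s>, as the README suggests for classification
    # sliding_chunks needs the sequence length padded to a multiple of the attention window
    input_ids, attention_mask = pad_to_window_size(
        input_ids, attention_mask, config.attention_window[0], tokenizer.pad_token_id)

    with torch.no_grad():
        cls_embedding = model(input_ids, attention_mask=attention_mask)[0][:, 0]  # stable across runs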

BinchaoPeng commented 3 years ago

I also want to get feature vectors from Longformer. Did you manage to make it work? Could you tell me how to implement the steps? Thanks!