atulkum / pointer_summarizer

pytorch implementation of "Get To The Point: Summarization with Pointer-Generator Networks"
Apache License 2.0

strange view() operation in ReduceState module #11

Open jamestang0219 opened 5 years ago

jamestang0219 commented 5 years ago

We want to reduce the forward and backward states, so we need to concatenate them and then pass the result through an nn.Linear module that maps 2 * hidden_dim down to hidden_dim.

In your code, h is 2 x B x hidden_dim, and you apply view() directly to h and c. The result concatenates the first example's forward state with the second example's forward state, instead of concatenating the first example's forward state with the first example's backward state.

In my opinion, we should use h.transpose(0, 1).contiguous().view(-1, config.hidden_dim*2).

https://github.com/atulkum/pointer_summarizer/blob/454a2f6f10865531f724cbe5064a927d66dfa1b7/training_ptr_gen/model.py#L75
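
A minimal sketch of the difference (the toy shapes and tensors below are my own; only the view/transpose calls come from the discussion above):

```python
import torch

# h from a bidirectional LSTM is (num_directions=2, B, hidden_dim); we want a
# (B, 2 * hidden_dim) tensor where each row holds the forward and backward
# state of the SAME example.
B, hidden_dim = 3, 4
h = torch.randn(2, B, hidden_dim)  # h[0] = forward states, h[1] = backward states

# Buggy: view() follows memory order (direction-major), so row 0 ends up as
# [forward state of example 0, forward state of example 1].
wrong = h.view(-1, hidden_dim * 2)

# Correct: move the batch dimension first, then flatten the direction axis,
# so row b is [forward state of example b, backward state of example b].
right = h.transpose(0, 1).contiguous().view(-1, hidden_dim * 2)

assert torch.equal(right[0], torch.cat([h[0, 0], h[1, 0]]))
```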

atulkum commented 5 years ago

Thanks for reviewing the code. You are correct; I have fixed the code. I will update the results after the re-run.

jamestang0219 commented 5 years ago

@atulkum One more question: in your code, step_coverage_loss is the sum over all elements of the element-wise minimum of attn_dist and coverage. https://github.com/atulkum/pointer_summarizer/blob/5e511697d5f00cc474370fd76ac1da450ffd4d2e/training_ptr_gen/train.py#L99 And coverage is updated as coverage + attn_dist. https://github.com/atulkum/pointer_summarizer/blob/5e511697d5f00cc474370fd76ac1da450ffd4d2e/training_ptr_gen/model.py#L124 So shouldn't step_coverage_loss always be the sum of attn_dist, i.e. 1.0?
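
A tiny numeric sketch of the concern (the toy tensors are assumptions; only the min/sum formulation mirrors the linked code): once coverage already includes attn_dist, it is element-wise >= attn_dist, so the min collapses to attn_dist and the loss is constant.

```python
import torch

attn_dist = torch.softmax(torch.randn(1, 6), dim=1)  # current step's attention, sums to 1
prev_coverage = torch.rand(1, 6)                     # attention accumulated over earlier steps

# If coverage is updated BEFORE the loss is taken, the min() degenerates:
coverage = prev_coverage + attn_dist
step_coverage_loss = torch.sum(torch.min(attn_dist, coverage), dim=1)
print(step_coverage_loss)  # ~tensor([1.]) no matter what the model does
```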

atulkum commented 5 years ago

I think you are right. The order of updating the coverage is not correct: the coverage should be updated only after the coverage loss has been calculated. I think I made this mistake while refactoring the code. I will fix it.

        if config.is_coverage:
            coverage = coverage.view(-1, t_k)
            coverage = coverage + attn_dist
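
A sketch of the corrected ordering (variable names follow the thread; shapes are assumed): compute the loss against the coverage accumulated from previous steps, then fold in the current attention.

```python
import torch

attn_dist = torch.softmax(torch.randn(4, 50), dim=1)  # B x t_k, current decoder step
coverage = torch.rand(4, 50)                          # B x t_k, sum of PREVIOUS attn_dists

# 1) The coverage loss is computed against the coverage from previous steps ...
step_coverage_loss = torch.sum(torch.min(attn_dist, coverage), dim=1)

# 2) ... and only afterwards is the current attention folded in for the next step.
coverage = coverage + attn_dist
```
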
atulkum commented 5 years ago

Thanks for reviewing the code. I have fixed the bug. https://github.com/atulkum/pointer_summarizer/blob/master/training_ptr_gen/train.py#L91 https://github.com/atulkum/pointer_summarizer/blob/master/training_ptr_gen/train.py#L100

jamestang0219 commented 5 years ago

@atulkum Have you ever tried setting is_coverage to True? It very easily causes the loss to become NaN, and a lower learning rate does not help with this issue.

atulkum commented 5 years ago

I turned on is_coverage=True after training for 500k iterations. Setting is_coverage=True from the beginning makes the training unstable.

jamestang0219 commented 5 years ago

@atulkum I think this operation may cause the NaN: https://github.com/atulkum/pointer_summarizer/blob/fd8dda35390d058c1745b9495634ea0ddadf71ad/training_ptr_gen/model.py#L95 Computing the attention memory in every decoder step creates many computation-graph branches in the torch backend, but in fact we only need to compute it once after obtaining encoder_outputs.
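
A hedged sketch of the suggestion (W_h, encoder_outputs, and the shapes are assumed, mirroring the linked line rather than copying the repo's code): apply the projection once after encoding and reuse it in every decoder step.

```python
import torch
import torch.nn as nn

hidden_dim = 256
W_h = nn.Linear(hidden_dim * 2, hidden_dim * 2, bias=False)

encoder_outputs = torch.randn(8, 400, hidden_dim * 2)  # B x t_k x 2*hidden_dim

# Project the attention "memory" once, right after encoding ...
encoder_feature = W_h(encoder_outputs)

# ... and reuse it in every decoder step instead of recomputing W_h(encoder_outputs).
for t in range(5):
    # the attention module would consume encoder_feature here
    pass
```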

atulkum commented 5 years ago

You are right that it increases the number of computation-graph branches, but that won't cause NaN. If you are getting NaN, it is probably coming from somewhere else. I tested this on PyTorch 0.4 and it does not produce NaN.

I am changing this so it is called only once (thanks for catching this) and will test it again.

jamestang0219 commented 5 years ago

@atulkum I set is_coverage to True after 500k steps, but at the beginning of retraining it always gives NaN. I'll continue to test. Thank you again.

atulkum commented 5 years ago

After how many iterations (with is_coverage = True) are you getting NaN? Did you initialize model_file_path in the code? https://github.com/atulkum/pointer_summarizer/blob/master/training_ptr_gen/train.py#L141

You can try to debug it on CPU.

My GPU is busy right now, but I just tested it on CPU for around 20 epochs and did not get NaN or any exploding gradients.

jamestang0219 commented 5 years ago

Thanks for the suggestion. I initialized model_file_path, but after no more than 100 iterations it gets NaN :(

atulkum commented 5 years ago

I have uploaded a model here. I retrained it with is_coverage = True for 200k iterations but did not get NaN.

For retraining you should do 3 things (see the sketch below):

1) In config.py, set is_coverage = True.
2) In config.py, set max_iterations = 600000.
3) Make sure that in train.py you are calling trainIters with the full absolute path for model_file_path.
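
Roughly, assuming the flags named above (this is a sketch of the steps, not the repo's exact code):

```python
# config.py -- the two flags named in the steps above
is_coverage = True
max_iterations = 600000

# train.py -- resume from the saved checkpoint; the path is a placeholder and
# the exact trainIters call site is not reproduced here:
# trainIters(..., model_file_path="/absolute/path/to/saved_checkpoint")
```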

I am also curious why you get NaN. Can you debug it and pinpoint the code that is causing the NaN?

BigwhiteRookie commented 5 years ago

Thanks for your code, but I have a question: when I run train.py I get an error, AttributeError: 'generator' object has no attribute 'next'. I don't understand it; the traceback points to batcher.py line 209.
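
This looks like a Python 2 vs. Python 3 difference rather than a model bug: generators have no .next() method in Python 3, so the call at that line would need the built-in next() instead (the generator below is illustrative, not the one in batcher.py):

```python
def example_generator():
    yield 1

gen = example_generator()

# Python 2 style -- raises AttributeError on Python 3:
# value = gen.next()

# Portable: the built-in next() works on both Python 2.6+ and Python 3.
value = next(gen)
```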

karansaxena commented 5 years ago

@jamestang0219 Were you able to pinpoint the NaN problem? I trained without coverage up to 200k iterations; beyond that I am using coverage, but I run into NaN soon after.