nusnlp / crosentgec

Code for cross-sentence grammatical error correction using multilayer convolutional seq2seq models (ACL 2019)
GNU General Public License v3.0
50 stars · 12 forks

Embedded model error message #3

Closed harrydeng8 closed 5 years ago

harrydeng8 commented 5 years ago

I received the following error when running the decoder with "./decode.sh conll13st-test models/crosent/model1 models/dicts 1":

++ CUDA_VISIBLE_DEVICES=1
++ python fairseq/interactive_multi.py --no-progress-bar --path models/crosent/model1/checkpoint_best.pt --beam 12 --nbest 12 --replace-unk --source-lang src --target-lang trg --input-files models/crosent/model1/outputs/tmp.conll13st-test.1565940766/input.src models/crosent/model1/outputs/tmp.conll13st-test.1565940766/input.ctx --num-shards 12 --task translation_ctx models/dicts

Traceback (most recent call last):
  File "fairseq/interactive_multi.py", line 195, in &lt;module&gt;
    main(args)
  File "fairseq/interactive_multi.py", line 102, in main
    models, model_args = utils.load_ensemble_for_inference(model_paths, task)
  File "/home/hdeng/nsu/fairseq/fairseq/utils.py", line 163, in load_ensemble_for_inference
    model = task.build_model(state['args'])
  File "/home/hdeng/nsu/fairseq/fairseq/tasks/fairseq_task.py", line 43, in build_model
    return models.build_model(args, self)
  File "/home/hdeng/nsu/fairseq/fairseq/models/__init__.py", line 25, in build_model
    return ARCH_MODEL_REGISTRY[args.arch].build_model(args, task)
  File "/home/hdeng/nsu/fairseq/fairseq/models/fconv_dualenc_gec_gatedaux.py", line 76, in build_model
    encoder_embed_dict = utils.parse_embedding(args.encoder_embed_path)
  File "/home/hdeng/nsu/fairseq/fairseq/utils.py", line 267, in parse_embedding
    embed_dict[pieces[0]] = torch.Tensor([float(weight) for weight in pieces[1:]])
  File "/home/hdeng/nsu/fairseq/fairseq/utils.py", line 267, in &lt;listcomp&gt;
    embed_dict[pieces[0]] = torch.Tensor([float(weight) for weight in pieces[1:]])
ValueError: could not convert string to float: 'Not'

harrydeng8 commented 5 years ago

I checked your download.sh and found a line that creates the embeddings directory ("mkdir -p models/embed") but does not download any model into it.

I then downloaded a file with the following command and received the error shown above: "curl -L -o models/embed/model.vec https://tinyurl.com/yd6wvhgw/mlconvgec2018/models/models/embeddings/wiki_model.vec".

Please kindly advise and/or update download.sh.

Thanks!
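For reference, the ValueError in the trace ("could not convert string to float: 'Not'") suggests that the downloaded model.vec is not word vectors at all but the text of an error page (e.g. "Not Found") that curl saved to disk. A rough sketch of a sanity check for a word2vec-text-format .vec file (the header handling and function name are my assumptions, not code from the repo):

```python
def check_vec_file(path, max_rows=5):
    """Return (ok, message) after validating the first few rows of a
    word2vec-text .vec file: each data row is a token followed by floats."""
    with open(path, encoding="utf-8", errors="replace") as f:
        for i, line in enumerate(f):
            if i >= max_rows:
                break
            pieces = line.rstrip().split(" ")
            # optional "<vocab_size> <dim>" header on the first line
            if i == 0 and len(pieces) == 2 and all(p.isdigit() for p in pieces):
                continue
            try:
                [float(w) for w in pieces[1:]]  # same parse as fairseq's parse_embedding
            except ValueError as e:
                return False, "row %d: %s" % (i, e)
    return True, "first rows parse as word vectors"
```

Running this on a file that curl filled with "Not Found" fails at the first row, matching the traceback.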

shamilcm commented 5 years ago

I have updated download.sh. I do not have access to the environment right now, so I made the edit directly on GitHub. Can you check whether it works? The command you posted above was almost correct; the issue was an extra 'models/' in the path. The correct URL is https://tinyurl.com/yd6wvhgw/mlconvgec2018/models/embeddings/wiki_model.vec

harrydeng8 commented 5 years ago

Thank you for updating the model path. We ran into another error; could you see what we are still missing?

Traceback (most recent call last):
  File "fairseq/interactive_multi.py", line 195, in &lt;module&gt;
    main(args)
  File "fairseq/interactive_multi.py", line 182, in main
    results += process_batch(batch)
  File "fairseq/interactive_multi.py", line 168, in process_batch
    maxlen=int(args.max_len_a * tokens.size(1) + args.max_len_b),
  File "/home/hdeng/nsu/fairseq/fairseq/multiinput_sequence_generator.py", line 96, in generate
    return self._generate(src_tokens, src_lengths, ctx_tokens, ctx_lengths, beam_size, maxlen, prefix_tokens)
  File "/home/hdeng/nsu/fairseq/fairseq/multiinput_sequence_generator.py", line 121, in _generate
    src_lengths.expand(beam_size, src_lengths.numel()).t().contiguous().view(-1),
  File "/home/hdeng/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/hdeng/nsu/fairseq/fairseq/models/fconv_dualenc_gec_gatedaux.py", line 214, in forward
    x = conv(x)
  File "/home/hdeng/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/hdeng/nsu/fairseq/fairseq/modules/conv_tbc.py", line 30, in forward
    return input.contiguous().conv_tbc(self.weight, self.bias, self.padding[0])
AttributeError: 'Tensor' object has no attribute 'conv_tbc'

shamilcm commented 5 years ago

Are you using Python 3.6 and PyTorch 0.4.1? That is the only tested configuration so far.
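This AttributeError is the usual symptom of running the code on a newer PyTorch: in 0.4.x `conv_tbc` was available as a Tensor method, while later releases expose it only as `torch.conv_tbc`. A small helper to flag an untested environment; the accepted version tuples come from this thread, not from the repo's documentation:

```python
def is_tested_env(python_version, torch_version):
    """True for the configurations reported in this thread:
    Python 3.6 (officially tested) or 3.7 (reported working), PyTorch 0.4.x."""
    major, minor = python_version[:2]
    return major == 3 and minor in (6, 7) and torch_version.startswith("0.4.")
```

Before launching, one could check e.g. `is_tested_env(sys.version_info, torch.__version__)`.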


harrydeng8 commented 5 years ago

Python 3.7 and PyTorch 0.4.1 are working without error. Will multiple GPUs help with speed? Would you suggest running under Python 3.6 instead?

Thanks!

shamilcm commented 5 years ago

Python 3.6 is the environment we used and tested; I am not sure whether there is any issue with Python 3.7. I haven't tested multi-GPU training with this code, so I am not sure about the speedup. If the latency between GPUs is low, I do not see a reason why multiple GPUs should not help. When you split across GPUs, make sure you divide the batch size by the number of GPUs used.

If you use the crosent model, you can use a batch size of 96 split across 2 or 4 GPUs without any delaying of updates.
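The batch-splitting advice above is simple arithmetic; a sketch (the function name is mine, not from the repo):

```python
def per_gpu_batch_size(global_batch, num_gpus):
    """Divide a global batch size evenly across GPUs, e.g. 96 -> 48 on 2 GPUs
    or 24 on 4 GPUs, so updates are not delayed by gradient accumulation."""
    if global_batch % num_gpus != 0:
        raise ValueError("global batch size must be divisible by the GPU count")
    return global_batch // num_gpus
```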

-Shamil


harrydeng8 commented 5 years ago

Thank you!

harrydeng8 commented 5 years ago

Hi, Shamil,

Following your email, where should I set the batch size or the batch-size split?

I could only find buffer size in your code.

Thank you!

Harry


shamilcm commented 5 years ago

--max-sentences is the argument for setting batch size. When I was referring to speed up with multi-gpu, I was referring to training. I do not think that the inference code supports multi-gpu.

harrydeng8 commented 5 years ago

When I was testing your trained model, it used only about 40% of the GPU, while CPU usage was at 100% even though I have 16 cores across 2 CPUs.

I was wondering if increasing the batch size might help speed up the inference process.

Do you agree, and do you have any other suggestions for speeding it up?

Thanks,

Harry


harrydeng8 commented 5 years ago

Dear Shamil,

We are running your reranker process and got the following error:

  File "nsu/tools/nbest-reranker/lib/pytorch_pretrained_bert/file_utils.py", line 20, in &lt;module&gt;
    import boto3
ModuleNotFoundError: No module named 'boto3'

Do we need to install this boto3 SDK?

Thanks!

Harry


shamilcm commented 5 years ago

You would need the pytorch_pretrained_bert dependencies: the Python modules boto3, tqdm, and requests.
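A quick stdlib-only way to see which of those modules still need installing (a convenience sketch, not part of the repo):

```python
import importlib.util

def missing_modules(names=("boto3", "tqdm", "requests")):
    """Return the subset of the named top-level modules that are not importable."""
    return [n for n in names if importlib.util.find_spec(n) is None]
```

Whatever `missing_modules()` returns can then be pip-installed.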


harrydeng8 commented 5 years ago

Hi, Shamil,

Thank you!

In addition, when we use your crosentgec model on a single sentence, would it be the same as the model from your paper “Neural Quality Estimation of Grammatical Error Correction”?

Best,

Harry


harrydeng8 commented 5 years ago

Hi, Shamil,

Could you help me with the following error during the decode process of your trained model?

It used to run fine; then I installed spaCy for ERRANT, and something went wrong.

Thank you!

Harry

python fairseq/interactive_multi.py --no-progress-bar --path --beam 12 --nbest 12 --replace-unk --source-lang src --target-lang trg --input-files models/crosent/outputs/tmp.conll14st-test.1572512694/input.src models/crosent/outputs/tmp.conll14st-test.1572512694/input.ctx --num-shards 12 --task translation_ctx models/dicts

usage: interactive_multi.py [-h] [--no-progress-bar] [--log-interval N]
                            [--log-format {json,none,simple,tqdm}] [--seed N]
                            [--task TASK]
                            [--skip-invalid-size-inputs-valid-test]
                            [--max-tokens N] [--max-sentences N]
                            [--gen-subset SPLIT] [--num-shards N]
                            [--shard-id ID] [--path FILE]
                            [--remove-bpe [REMOVE_BPE]] [--cpu] [--quiet]
                            [--beam N] [--nbest N] [--max-len-a N]
                            [--max-len-b N] [--min-len N] [--no-early-stop]
                            [--unnormalized] [--no-beamable-mm]
                            [--lenpen LENPEN] [--unkpen UNKPEN]
                            [--replace-unk [REPLACE_UNK]] [--score-reference]
                            [--prefix-size PS] [--sampling]
                            [--sampling-topk PS] [--sampling-temperature N]
                            [--buffer-size N]
                            [--input-files INPUT_FILES [INPUT_FILES ...]]

interactive_multi.py: error: argument --path: expected one argument
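That usage error is argparse reporting that `--path` received no value: in the command above nothing follows `--path` before `--beam` (most likely because the model-directory argument to decode.sh expanded to an empty string, though that is my guess), so argparse treats the next token as a new option. A minimal sketch reproducing the behaviour:

```python
import argparse

parser = argparse.ArgumentParser(prog="interactive_multi.py")
parser.add_argument("--path")            # expects exactly one value
parser.add_argument("--beam", type=int)

try:
    # "--beam" is itself an option, so "--path" is left without its value;
    # argparse prints "error: argument --path: expected one argument" and exits
    parser.parse_args(["--path", "--beam", "12"])
except SystemExit:
    pass

# with the model path supplied, parsing succeeds
args = parser.parse_args(
    ["--path", "models/crosent/model1/checkpoint_best.pt", "--beam", "12"]
)
```

So the fix is to make sure a non-empty checkpoint path is passed to the script.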
