arne-cl closed this issue 9 years ago
I think there is an incorrect default value in the make_segmentation_crfpp_template.py
script (it's fixed in the pending PR).
Can you try changing the default number to 12 instead of 13?
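For context, the fix amounts to a one-number change to the script's default argument. Below is a minimal sketch of what such a default looks like; the option name `--num-features` is a hypothetical stand-in, and the actual argument in make_segmentation_crfpp_template.py may differ:

```python
import argparse

# Hypothetical reconstruction of the relevant part of
# make_segmentation_crfpp_template.py; the real option name may differ.
parser = argparse.ArgumentParser(
    description="Generate a CRF++ feature template for discourse segmentation.")
# The reported bug: the default was 13, one more than the number of
# feature columns actually produced; the fix changes it to 12.
parser.add_argument("--num-features", type=int, default=12)

args = parser.parse_args([])
print(args.num_features)  # 12
```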
Thank you Aoife. tune_segmentation_model works now, after I applied your change like this:
git cherry-pick 3c9aad35446a1d7b6a49b977cff95b10458e8cdc
python3 setup.py install
Help me, I am getting the same error.
Where is the tune_segmentation_model file, and how do I change it?
Dear @ankur220693,
the problem should be solved in my fork: https://github.com/arne-cl/discourse-parsing
I also provide a Docker container for this: https://github.com/NLPbox/heilman-sagae-2015-docker
If you have Docker installed on your machine (and you trust random people on the internet), then you can simply run the parser from Docker Hub with one line:
$ cat /tmp/input.txt
Although they didn't like it, they accepted the offer.
$ docker run -v /tmp:/tmp -ti nlpbox/heilman-sagae-2015:2018-05-12-1 /tmp/input.txt
Loading tagger from /opt/zpar-0.7/models/english/tagger
Loading model... done.
Loading constituency parser from /opt/zpar-0.7/models/english/conparser
Loading scores... done. (21.7049s)
{"scored_rst_trees": [{"score": -0.9662282971887425, "tree": "(ROOT (satellite:contrast (text 0)) (nucleus:span (text 1)))"}], "edu_tokens": [["Although", "they", "did", "n't", "like", "it", ","], ["they", "accepted", "the", "offer", "."]]}
(On the first run, this will take a long time and download half of the internet, but on subsequent runs it works like a normal local installation.)
Best regards, Arne
Thanks for your reply, I will try this approach. Test_crf shows an error, while I have train_crf. I will reach out to you if I get stuck further. Basically, I am running a POS tagger for Indian languages; I have 20K tagged test and train data.
The system gets stuck at "Loading scores...". Is there any problem with Indian languages? My input text is in another language.
crf_test failure!!
SOLVED... thanks a lot!
After running extract_segmentation_features, I couldn't get tune_segmentation_model to work. The problem seems to be that crf_learn doesn't create any segmentation_model files in the first place (but doesn't give any error message either):
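Since the segmentation fix earlier in this thread was about a feature count (13 vs. 12), one plausible cause of crf_learn silently producing no model is a CRF++ template that references a data column the training file does not contain. The following is a hedged diagnostic sketch; validate_crfpp_template is a hypothetical helper, not part of the repository:

```python
import re

def validate_crfpp_template(template_lines, num_columns):
    """Return the %x[row,col] macros whose column index is out of range.

    CRF++ templates address training-data columns with %x[row,col]
    macros; if a template references a column beyond what the data
    provides, training can go wrong without a helpful message.
    """
    bad = []
    for line in template_lines:
        for _row, col in re.findall(r"%x\[(-?\d+),(\d+)\]", line):
            if int(col) >= num_columns:
                bad.append((line.strip(), int(col)))
    return bad

# A macro addressing column 12 is out of range if the training data
# only has 12 columns (valid indices 0-11):
print(validate_crfpp_template(["U01:%x[0,12]"], 12))
# → [('U01:%x[0,12]', 12)]
```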