cws7777 closed this issue 2 years ago
Hi, thanks for your interest! Our fine-tuned models can be used for parsing plain text. Please follow the instructions here to prepare your data and run the parsing scripts.
Thanks a lot!!! :) I'll try! :D
Hi, I'm trying to download the pre-trained AMR parsing models (2.0 & 3.0) from the OneDrive links you shared in the README.md. When the download reaches around 1.2 GB, it disconnects with a "server/network error" and has to start over from the beginning. Do you run into any problem when you download the models?
Hi @cws7777, it works well on my side. Did you try multiple times, and did they all return the same error?
Hm.. maybe our internet connection isn't that stable then.. :'(
I first tried to download the 2.0 model (zip file) last Friday, and it gave me the error. So I tried the 3.0 model, but it gave me the same error.. :'( I tried again this morning, and it happened once more..
I'll try more and let you know! :) Thanks for quick reply!
Ah, I have one more question, about running inference on my own data. Since the pre-trained model includes a config.json, should I just run the command bash inference_amr.sh /path/to/fine-tuned/AMRBART/ gpu_id with the directory set to the pre-trained model? Or do I also need to edit the inference_amr.sh file, e.g. the tokenizer and the train/eval/test_data_file settings?
Hi @cws7777, that's weird, but don't worry — @muyeby is now uploading the models to Google Drive. Please give us some time, and you can try again later.
Oh.. wow..! Thank you so much for helping me out! :) I really appreciate it!!
Hi @cws7777, we have uploaded the models to the Hugging Face hub and updated the README, so you can try again. To run inference on your own data, you should edit the inference_amr.sh file and prepare your input file in the same format as the examples.
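For anyone else preparing input, here is a minimal sketch of writing a JSON Lines file. The field names "sent" and "amr" are placeholders, not confirmed against the repo — check examples/data4parsing.jsonl for the exact schema the parsing script expects:

```python
import json

# Placeholder field names -- verify against examples/data4parsing.jsonl,
# which defines the exact schema the parsing script expects.
sentences = ["The boy wants to go.", "It is raining."]

with open("data4parsing.jsonl", "w", encoding="utf-8") as f:
    for s in sentences:
        # JSON Lines format: one JSON object per line
        f.write(json.dumps({"sent": s, "amr": ""}) + "\n")
```

The "amr" field is left empty here on the assumption that the parser fills it in; adjust the keys to whatever the example file actually uses.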
Hi @muyeby, I also have a question about generating graphs from our own text data. I saw your uploads to the Hugging Face hub, and I tried to run the code on the example data to see the output. Specifically, I ran "bash inference_amr.sh xfbai/AMRBART-large-finetuned-AMR2.0-AMRParsing 0" and set "datacate=examples". However, I wasn't able to generate graphs for data in the data/data4parsing.jsonl format. Could you help me generate outputs for "data/data4parsing.jsonl"?
@muyeby Thanks for uploading! I will check it out and try! :)
@xu1998hz What kind of output are you getting?? Did you use the same format as the examples??
@cws7777 Yes, I used the example file to try to run it. Are you able to produce the output?
I don't get any output.
@xu1998hz No, not yet. I just downloaded the fine-tuned models from Hugging Face!
Cool, please let me know if you can successfully generate text!
Did you just modify datacate and run the command bash inference_amr.sh xfbai/AMRBART-large-finetuned-AMR2.0-AMRParsing 0?
I also moved the examples folder under data.
@xu1998hz Ah ha, didn't it give an error? Did you modify the Tokenizer in the inference_amr.sh file as well??
@muyeby Can you give us an example of the detailed settings in inference_amr.sh? I don't know why, but it keeps giving me an error..! :'(
Hi @cws7777 @xu1998hz, I have updated the scripts. Now you can modify the tokenizer path and run the command bash inference_amr.sh xfbai/AMRBART-large-finetuned-AMR2.0-AMRParsing 0 to get the output for examples/data4parsing.jsonl.
If you still get any errors, please post them here.
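To summarize the settings people in this thread had to touch, here is a sketch; "Tokenizer" and "datacate" are the names mentioned above, but confirm them against your own copy of inference_amr.sh, as the script may differ:

```shell
# Sketch only -- verify variable names against your copy of inference_amr.sh.
#
# Inside inference_amr.sh, the settings discussed in this thread:
#   Tokenizer=...        # path or hub id of the tokenizer to load
#   datacate=examples    # points the script at data/examples (data4parsing.jsonl)
#
# Then run it, passing the fine-tuned model and a GPU id:
bash inference_amr.sh xfbai/AMRBART-large-finetuned-AMR2.0-AMRParsing 0
```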
Thanks a lot! It works perfectly. @muyeby
Thanks for the updated scripts! :) I think the script itself runs fine now, but my CUDA setup is the problem, I think..! :'(
When I ran conda env update --name <env> --file requirements.yml to create the environment, it failed with a ResolvePackageNotFound error. So I installed the packages in requirements.yml one by one, skipping nvidia::cudatoolkit=11.1.1 and pytorch::pytorch=1.8.1=py3.8_cuda11.1_cudnn8.0.5_0, and now PyTorch can't find the GPUs on my machine....
I think I have to figure out the right versions somehow..
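For reference, the pinned spec that was skipped encodes exactly which Python/CUDA/cuDNN build of PyTorch the repo expects, which is why installing other builds left a CPU-only torch with no visible GPUs. A small sketch (a hypothetical helper, not part of the repo) that reads those versions out of the conda build string:

```python
def parse_build(build):
    """Split a conda build string like 'py3.8_cuda11.1_cudnn8.0.5_0'
    into its python/cuda/cudnn version components."""
    info = {}
    for part in build.split("_"):
        if part.startswith("py") and not part.startswith("pytorch"):
            info["python"] = part[2:]
        elif part.startswith("cudnn"):
            info["cudnn"] = part[5:]
        elif part.startswith("cuda"):
            info["cuda"] = part[4:]
    return info

# Build string from the pinned pytorch package in requirements.yml
print(parse_build("py3.8_cuda11.1_cudnn8.0.5_0"))
# -> {'python': '3.8', 'cuda': '11.1', 'cudnn': '8.0.5'}
```

So any replacement install needs a CUDA 11.1 build of PyTorch 1.8.1 for the GPUs to be found.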
@xu1998hz Did you follow the same steps as in the README.md for creating the env & installing requirements.yml?
@muyeby @xu1998hz I figured out the problem above, and it went through well.
Thanks for the help. :)
Hi, first of all, thank you for your great work! I recently got interested in AMR and found your work among the accepted papers at ACL.
I have a question about the fine-tuned models you shared for AMR parsing. Can those models be used on plain text to generate AMR graphs?? Also, do you provide code for generating AMR graphs from plain text, like the SPRING model does?
Thanks! :) Hope you have a good one!