huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

summarization code is incomplete #2186

Closed ghost closed 4 years ago

ghost commented 4 years ago

Hi, in the summarization code you have removed the entire training part. Why is that? Solely evaluating an existing model does not really serve any purpose. While I really find this repo great, incomplete work like this summarization folder definitely detracts from the quality of this repo. I would greatly appreciate it if you either removed this summarization folder or implemented it properly.

TheEdoardo93 commented 4 years ago

I'm sorry that you're angry with the Transformers library and its authors, but I do not share your opinion. This framework is well documented, actively developed, and regularly updated (the most important qualities of any library).

However, if you want to examine and/or train the model for the summarization task, you can refer here, as noted in the README.md.

I do agree that, for completeness, a Python script that allows training a summarization model on a custom dataset would be useful for many people.

ghost commented 4 years ago

Hi, I really believe you would be better off removing this folder entirely. Previously the training part was also included, but it was not complete; after weeks of waiting, you decided to remove it entirely. Why is this?

Please reconsider your decision to include such code in the repository. Have you ever asked yourself what the point is of evaluating an already trained model while not allowing the user to actually train such models? If the user needs to revert to the PreSumm repo to train, then let the user evaluate there as well; there is no point in including code that is not complete. This is a bad choice and hurts the reputation of the repo in the long run.

ghost commented 4 years ago

@thomwolf I am putting Thomas in cc. I really believe adding such incomplete code is improper and hurts the reputation of this repository in the long run.

ohmeow commented 4 years ago

@juliahane ... I gotta say that while I understand your frustration and what you are requesting, your attitude completely sucks and is unlikely to solicit a response from the huggingface team.

There is no way in hell you can rationalize asking the team to do away with the inference code based on the pre-trained model just because you can't fine-tune it for your own dataset. The code is complete insofar as what it intends to do.

Now ... I'd love to have this be finetunable and would love to see what the HF team produces. In the meantime, I stepped through their code and figured out what you need to do in modeling_bertabs.py to make this just so. I'm glad to share with you, but in the words of Base Commander Colonel Nathan Jessup, "You gotta ask nicely."
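For readers wondering what "making this fine-tunable" would roughly involve: the evaluation-only script could, in principle, be wrapped in a standard PyTorch training loop. The sketch below is hypothetical and does not use the real `modeling_bertabs.py` API; `SummarizationModel` and the toy data are stand-ins for illustration only.

```python
# Minimal sketch of a fine-tuning loop, assuming a generic encoder/decoder
# summarizer. `SummarizationModel` is a placeholder, NOT the real BertAbs class.
import torch
import torch.nn as nn

class SummarizationModel(nn.Module):
    """Toy stand-in for an abstractive summarization model."""
    def __init__(self, vocab_size=32, hidden=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.proj = nn.Linear(hidden, vocab_size)

    def forward(self, src):
        # Returns per-token logits over the vocabulary: (batch, seq, vocab)
        return self.proj(self.embed(src))

model = SummarizationModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy batches of token ids standing in for (article, summary) pairs.
src = torch.randint(0, 32, (4, 10))
tgt = torch.randint(0, 32, (4, 10))

model.train()
for step in range(3):  # a few toy optimization steps
    optimizer.zero_grad()
    logits = model(src)
    # Flatten to (batch * seq, vocab) vs (batch * seq,) for cross-entropy.
    loss = loss_fn(logits.view(-1, logits.size(-1)), tgt.view(-1))
    loss.backward()
    optimizer.step()
```

The real work, of course, is adapting this skeleton to the actual model's forward signature, masking, and data pipeline, which is exactly the part the evaluation-only script omits.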

ghost commented 4 years ago

Hi, thanks for your response. If I did not sound nice, I apologize for it. Unfortunately, I believe every single word of what I said. Adding summarization just for evaluation does not help; in that case, let people revert to PreSumm for evaluation as well. I really don't get the point of adding code that only loads and calls pretrained models. Unfortunately, I really believe such an attitude from your team will hurt the Hugging Face name in the long run. People see your repo as the greatest repo for deep learning, but if you start adding code like this, which does not train and is pointless from my view, it will change people's minds. I am sorry, but this is the truth. Adding summarization code without allowing the user to train is pointless. I also expect you to be more welcoming towards complaints. There is really no point in loading pretrained models from another repo and letting the user call them. This is a legitimate complaint, and your attitude of guarding against it completely sucks.


ghost commented 4 years ago

I would also like to add this point. Previously, Hugging Face added summarization code that had an evaluation part, but it was not fully implemented and the code was failing in several places; basically, Hugging Face uploaded entirely untested code. I respect good code. Everyone should respect writing good code that is tested. Later, you removed the training part and left the evaluation part, which is still not complete code and really serves the user no functionality other than calling already trained models. Both acts, adding code that breaks in at least ten places and is not complete in any sense (like adding flags and then not writing the conditions for them), really hurt your name in the long run, resulting in people losing trust in Hugging Face.


stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.