HiroakiMikami / mlprogram

PyTorch library for synthesizing programs from natural language
MIT License

TreeGen: A Tree-Based Transformer Architecture for Code Generation #39

Open · HiroakiMikami opened this issue 4 years ago

HiroakiMikami commented 4 years ago
brando90 commented 3 years ago

Hi! Maybe I can help. What are the suggested next steps to get this working, @HiroakiMikami?

brando90 commented 3 years ago

Hi @HiroakiMikami, apologies for the spam. Let me know if there is something specific you suggest looking into to help get the PyTorch TreeGen implementation working!

brando90 commented 3 years ago

Hi @HiroakiMikami, apologies for the spam! Let me know if you do want help.

HiroakiMikami commented 3 years ago

Sorry for the late reply.

I do not place much value on paper reproduction (such as TreeGen) now, for two reasons. The first is that using a large model with NLP techniques may be a more effective and simpler approach than AST-based program generation (e.g., http://arxiv.org/abs/2105.09938v1). The second is that I found that the quality and characteristics of the dataset are more critical than model differences for many use cases.

So, the status of these paper-reproduction issues is pending.

Thank you for your comments!

brando90 commented 3 years ago
> The second is that I found that the quality and characteristics of the dataset are more critical than model differences for many use cases.

Hi Hiroaki,

I appreciate your response. I was wondering if you could clarify what you meant by that. Do you mean that all models perform relatively similarly across all datasets? I wasn't quite sure how to interpret your response.

I am also curious: what type of model would you prefer, then? NLP-based ones? Grammar/syntax-based ones? Or something else?

Thanks again for your time!

brando90 commented 3 years ago

Another question, out of curiosity: if you do not value reproduction of papers anymore, what do you value? Relatedly, does this make your mlprogram repo obsolete for you (besides the fact that it makes it easy for you to run experiments)?

Thanks for your time! It's appreciated.

HiroakiMikami commented 3 years ago

> Do you mean that all models perform relatively similarly across all datasets?

That's almost right. I think all models show similar performance if the computational resources used (e.g., model parameters, FLOPs, and training epochs) are roughly the same.

Also, I think dataset quality (e.g., the number of annotation mistakes, the quality of the code itself) is important. The performance of program synthesis may be limited by the quality of the dataset, not by the DNN model structure.

> What type of model would you prefer, then? NLP-based ones? Grammar/syntax-based ones?

I think NLP-based models are enough for program synthesis. Grammar/syntax-based models reduce syntax errors in the outputs, but their inference procedure is very complex and cannot utilize the GPU efficiently. So using NLP-based models and filtering out code with invalid syntax (like the CodeXGLUE baselines do) may be more efficient than using grammar/syntax-based models; a rough sketch of that filtering step is below.
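
A minimal sketch of that filtering idea, assuming the candidates come from some NLP-style generator (the function name and samples here are just illustrative, not part of mlprogram or CodeXGLUE):

```python
import ast

def filter_valid_python(candidates):
    """Keep only candidates that parse as valid Python syntax."""
    valid = []
    for code in candidates:
        try:
            ast.parse(code)  # raises SyntaxError on invalid code
            valid.append(code)
        except SyntaxError:
            continue
    return valid

# Hypothetical candidates, e.g. sampled from a sequence model
samples = [
    "def add(a, b):\n    return a + b",  # valid, kept
    "def broken(:\n    pass",            # invalid, filtered out
]
print(filter_valid_python(samples))
```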

> If you do not value reproduction of papers anymore, what do you value? Relatedly, does this make your mlprogram repo obsolete for you (besides the fact that it makes it easy for you to run experiments)?

I made and maintain this repository to make my experiments easy, so the purpose of mlprogram has not changed. But I should probably use the transformers library as a model zoo, along the lines of the sketch below.
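
As a rough illustration of using transformers as a model zoo (the "gpt2" checkpoint is only a placeholder; any causal LM trained on code could be substituted, and this is not what mlprogram currently does):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; swap in a code-pretrained causal LM as needed.
checkpoint = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Generate a code completion from a natural-language-style prompt.
prompt = "# return the sum of two numbers\ndef add(a, b):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```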