Garbage in garbage out: interpreting code generators
Since these models learn from data, we must use data to evaluate them; data is the most important commodity. Reporting accuracy alone is not enough: if you do not also evaluate the data itself, a biased dataset can make your accuracy profoundly misleading.
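To make the point concrete, here is a minimal sketch (with assumed toy labels, not from the original notes) of how a skewed evaluation set inflates accuracy: a trivial model that always predicts the majority label still scores well.

```python
# Hypothetical illustration: on an imbalanced evaluation set, a model that
# always predicts the majority label still reports high accuracy.
labels = ["correct"] * 95 + ["buggy"] * 5   # assumed 95/5 label skew
predictions = ["correct"] * 100              # trivial majority-class "model"

accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
print(accuracy)  # 0.95, despite never detecting a single buggy sample
```

The 95% figure says nothing about the model's real ability; it only reflects the skew of the data, which is exactly why data evaluation must accompany any accuracy number.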
The interpretability strategy consists of two important topics:
Data exploration: you cannot explain a model without analyzing its data
Approach evaluation: standard machine learning evaluation plus
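As a starting point for the data-exploration step, a minimal sketch (assuming a toy in-memory corpus; field names and checks are illustrative, not from the original notes) might look for duplicates and length skew, two common sources of bias in code datasets:

```python
# Minimal data-exploration sketch on an assumed toy code corpus:
# check for exact duplicates and token-length skew before trusting
# any downstream accuracy number.
from collections import Counter

corpus = [
    "def add(a, b): return a + b",
    "def add(a, b): return a + b",      # exact duplicate
    "def sub(a, b): return a - b",
    "def mul(a, b): return a * b",
]

counts = Counter(corpus)
duplicates = {snippet: n for snippet, n in counts.items() if n > 1}
lengths = [len(snippet.split()) for snippet in corpus]

print(f"duplicate snippets: {len(duplicates)}")
print(f"token-length range: {min(lengths)}-{max(lengths)}")
```

A real pipeline would add near-duplicate detection and train/test overlap checks, but even this simple pass can reveal leakage that would otherwise inflate evaluation scores.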
Unconditioned models are very complex, which is why we evaluate conditioned models instead.
We agreed to pair-program the GPT baseline.