I'm trying to finetune on the concode task using 'code' as both input and output, instead of 'nl' and 'code'. I wanted to know whether we can directly reuse the finetuned concode checkpoints, and I'd also appreciate some more information about using the tokenizers and embeddings.
Also, what changes need to be made here to load the concode model instead of the base CodeT5 model?
parser.add_argument("--model_tag", type=str, default='codet5_base', choices=['roberta', 'codebert', 'bart_base', 'codet5_small', 'codet5_base', 'codet5_large'])
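For reference, here is a minimal sketch of what I'm attempting on my side with the Hugging Face transformers API. The checkpoint path and the example source string are just placeholders, not taken from the repo; I'm assuming the concode finetuning script saves a standard pytorch_model.bin state dict:

```python
import torch
from transformers import RobertaTokenizer, T5ForConditionalGeneration

# CodeT5 ships with a RoBERTa-style tokenizer; I assume it can be
# reused as-is for code-to-code input/output.
tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base')

# Start from the base CodeT5 architecture, then overwrite its weights
# with the concode-finetuned checkpoint (placeholder path below).
model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base')
state_dict = torch.load('saved_models/concode/checkpoint-best-bleu/pytorch_model.bin')
model.load_state_dict(state_dict)

# Using 'code' as both input and output, instead of 'nl' -> 'code':
source = "public int add(int a, int b) { return a + b; }"
input_ids = tokenizer(source, return_tensors='pt').input_ids
generated = model.generate(input_ids, max_length=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

Is this roughly the right way to load the finetuned weights, or does run_exp.py expect the checkpoint to be wired in through --model_tag somehow?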
Thanks!!