We used an NVIDIA Tesla T4 GPU to train all the models.
> transfer learning in RQ2

Source domain training (training on a bug fix corpus) takes ~3-7 days; target domain training (training on a vulnerability fix corpus) takes ~1 hour.
> pre-training + target domain training in RQ3

Pre-training takes ~3-7 days, and target domain training takes a similar amount of time as in RQ2 (~1 hour).
How about your other method, SeqTrans? How long does it take to train using the specifications mentioned in your paper ("Intel Xeon E5 processor, four NVIDIA 3090 GPUs, and 1 TB RAM")?
> How about your other method, SeqTrans

We're not the authors of SeqTrans.
Very sorry, my mistake.
Can you share the trained models for your research questions? It would be nice if I didn't have to wait that long to test some of my ideas.
I am now contacting Zenodo about hosting all the models (the compressed version is 273 GB).
For now, I have uploaded them to OneDrive:
I will update the repo with the model directory structure.
How long can you keep these models accessible via OneDrive? I don't think I can transfer the whole thing to my server for now.
The link will be valid until Sep 2022. The models can be stored on OneDrive at least until Oct 2023; in the meantime, I will try to find a permanent location to host the models.
Greetings @chenzimin, many thanks for the research work. I would like a downloadable copy of the models. Thanks.
The models have been pushed to Zenodo; see the README.
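In case it helps anyone pulling the archives onto a server: below is a minimal sketch of scripting the download through Zenodo's public REST API (`https://zenodo.org/api/records/<id>`) instead of the web UI. The record ID and output directory are placeholders, not values from this thread (take the real record ID from the README), and the exact metadata field names may differ slightly between Zenodo record versions.

```python
import os
import requests

RECORD_ID = "1234567"  # hypothetical placeholder; use the record ID linked in the README
OUT_DIR = "models"     # hypothetical local target directory

# Fetch the record metadata from Zenodo's public REST API.
resp = requests.get(f"https://zenodo.org/api/records/{RECORD_ID}")
resp.raise_for_status()
record = resp.json()

os.makedirs(OUT_DIR, exist_ok=True)

# Each entry in "files" describes one archive: its name, size, and download link.
for f in record.get("files", []):
    name = f["key"]
    url = f["links"]["self"]
    dest = os.path.join(OUT_DIR, name)
    print(f"downloading {name} ({f.get('size', '?')} bytes) ...")
    # Stream to disk so the multi-hundred-GB archives never have to fit in memory.
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(dest, "wb") as out:
            for chunk in r.iter_content(chunk_size=1 << 20):
                out.write(chunk)
```

For a transfer of this size it is also worth comparing the downloaded files against the checksums Zenodo publishes in the record metadata before unpacking.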
Can you share the hardware specifications used for the experiments, along with the time it takes for transfer learning in RQ2 and for pre-training + target domain training in RQ3?