albertan017 / LLM4Decompile

Reverse Engineering: Decompiling Binary Code with Large Language Models
https://arxiv.org/abs/2403.05286
MIT License

details of splitting exebench for train and evaluation #28

Open wang-yongpan opened 1 month ago

wang-yongpan commented 1 month ago

Hello, I'm impressed by the Decompile model you released.

I would like to know the details of how ExeBench is split for training and evaluation, because I want to reproduce your evaluation results for a better application.

Thank you.

albertan017 commented 1 month ago

We use all of the training splits (see the loading sketch after the lists below):

train_synth_compilable
train_real_compilable
train_synth_simple_io
train_real_simple_io
train_synth_rich_io

And test on its test set:

test_synth
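
For reference, here is a minimal sketch of loading these splits with the Hugging Face datasets library. The dataset ID jordiae/exebench is an assumption based on the public Hub copy; adjust it to wherever your copy lives.

# Minimal sketch (assumed dataset ID): load the ExeBench splits used for
# training and evaluation. Depending on your datasets version, you may also
# need to pass trust_remote_code=True to load_dataset.
from datasets import load_dataset

train_splits = [
    'train_synth_compilable',
    'train_real_compilable',
    'train_synth_simple_io',
    'train_real_simple_io',
    'train_synth_rich_io',
]

train_sets = {name: load_dataset('jordiae/exebench', split=name) for name in train_splits}
test_set = load_dataset('jordiae/exebench', split='test_synth')

print({name: len(ds) for name, ds in train_sets.items()})
print('test_synth:', len(test_set))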

Good luck with your project!

wang-yongpan commented 1 month ago

OK, I get it. Thanks for your reply.

wang-yongpan commented 3 weeks ago

Hi,

I found that ExeBench provides three optimization options (O0, O3, Os). How can I evaluate your tool across the optimization levels O0, O1, O2, and O3, as in your experiments?

albertan017 commented 3 weeks ago

Unfortunately, you will have to compile the dataset yourself, as we are not authorized to redistribute someone else's dataset. For more details on the issues we faced, please refer to Appendix A of our paper.
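
As a rough illustration only (not our exact pipeline), recompiling a sample at different optimization levels could look like the sketch below. It assumes gcc and objdump are installed and that concatenating a row's synth_deps and func_def yields a compilable translation unit, which does not hold for every sample.

# Rough sketch, not the exact pipeline from the paper: compile one ExeBench
# sample at several optimization levels and disassemble the result.
# Assumes `row` is a dataset record with 'synth_deps' and 'func_def' fields.
import os
import subprocess
import tempfile

def compile_and_disassemble(row, opt_levels=('O0', 'O1', 'O2', 'O3')):
    asm = {}
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, 'sample.c')
        with open(src, 'w') as f:
            f.write((row.get('synth_deps') or '') + '\n' + row['func_def'])
        for opt in opt_levels:
            obj = os.path.join(tmp, f'sample_{opt}.o')
            # Compile to an object file; many samples fail at this step,
            # which matches the difficulties described in Appendix A.
            ret = subprocess.run(['gcc', '-c', f'-{opt}', src, '-o', obj],
                                 capture_output=True)
            if ret.returncode != 0:
                asm[opt] = None
                continue
            disasm = subprocess.run(['objdump', '-d', obj],
                                    capture_output=True, text=True)
            asm[opt] = disasm.stdout
    return asm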

wang-yongpan commented 3 weeks ago

OK, how can I obtain the source code of ExeBench? I cannot find it on Hugging Face.

albertan017 commented 3 weeks ago

You can find it here.

wang-yongpan commented 3 weeks ago

I cannot find the source code at that link. I only found the following splits:

train_not_compilable: 2.357M
train_synth_compilable: 2.308373M
train_real_compilable: 0.675074M
train_synth_simple_io: 0.550116M
train_real_simple_io: 0.043769M
train_synth_rich_io: 0.097250M
valid_synth: 5k
valid_real: 2.133k
test_synth: 5k
test_real: 2.134k

Do you mean the source code is contained in these?

albertan017 commented 3 weeks ago

That's all they provided...

wang-yongpan commented 3 weeks ago

OK. I noticed that your paper reports the re-executability rate on the ExeBench dataset. Can I ask how you measured it?

albertan017 commented 3 weeks ago

In examples/basic.py, you can see:

synth_wrapper = Wrapper(
    c_deps=row['synth_deps'] + '\n' + row['synth_io_pairs']['dummy_funcs'][0] + '\n',
    func_c_signature=row['func_head_types'].replace('extern', ''),
    func_assembly=row['asm']['code'][0],
    cpp_wrapper=row['synth_exe_wrapper'],
)

It requires func_assembly. So we set func_assembly to None and append func_def to c_deps instead:

synth_wrapper = Wrapper(
    c_deps=row['synth_deps'] + '\n' + row['synth_io_pairs']['dummy_funcs'][0] + '\n' + row['func_def'],
    func_c_signature=row['func_head_types'].replace('extern', ''),
    func_assembly=None,
    cpp_wrapper=row['synth_exe_wrapper'],
)

We made some additional changes to the code for our specific needs, but that's essentially how you can modify it. However, we were only able to compile about half of the samples with these modifications. If you have any better solutions, we would really appreciate your insights!
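
For completeness, here is a sketch of how the re-executability check then proceeds, following the pattern in exebench's examples/basic.py. The diff_io and exebench_dict_to_dict helpers are the ones used in that example, so treat their exact signatures as an assumption if your version differs.

# Sketch of the re-executability check for one sample. Assumes `synth_wrapper`
# was built as above, and that the helpers come from the exebench package as in
# examples/basic.py (e.g. from exebench import diff_io, exebench_dict_to_dict).
passed = 0
total = len(row['synth_io_pairs']['input'])
for i in range(total):
    observed = synth_wrapper(exebench_dict_to_dict(row['synth_io_pairs']['input'][i]))
    expected = exebench_dict_to_dict(row['synth_io_pairs']['output'][i])
    if diff_io(observed_output=observed, expected_output=expected):
        passed += 1
# One possible criterion: count the sample as re-executable only if every
# recorded I/O pair matches.
print(f'{passed}/{total} I/O pairs matched')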