kukas / deepcompyle

Pretraining transformers to decompile Python bytecodes, WORK IN PROGRESS

70_inference.sh: inference results were empty. What could be the reason? #2

Open testdm20 opened 2 months ago

testdm20 commented 2 months ago

During the training process, the following metrics were displayed: {'eval_loss': 2.022883892059326, 'eval_bleu': 54.0062, 'eval_gen_len': 96.5, 'eval_runtime': 1.5432, 'eval_samples_per_second': 20.736, 'eval_steps_per_second': 0.648, 'epoch': 2574.89}.

When running inference using the 70_inference.sh file, the results were as follows. Debugging the code showed that the inference results were empty. What could be the reason?

100%|██████████| 13/13 [00:46<00:00, 3.58s/it]
predict metrics:
  predict_bleu                   = 0.0
  predict_gen_len                = 1024.0
  predict_loss                   = 8.9258
  predict_model_preparation_time = 0.0019
  predict_runtime                = 0:00:51.68
  predict_samples                = 402
  predict_samples_per_second     = 7.779
  predict_steps_per_second       = 0.252
[INFO|modelcard.py:449] 2024-08-16 03:06:50,357 >> Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Translation', 'type': 'translation'}}
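One hint in the log above (my reading, not a conclusion from the thread): predict_gen_len equals 1024.0, which is presumably the maximum generation length. Hitting the cap on every sample usually means the model never emits an end-of-sequence token, and post-processing that strips padding/EOS can then yield empty strings. A minimal, hypothetical sketch of a check on the raw generated token IDs (all names here are illustrative, not from the repo):

```python
def diagnose_generation(pred_ids, eos_id, pad_id, max_len):
    """Inspect one sequence of generated token IDs for common
    failure modes behind empty decoded predictions."""
    report = {}
    # Generation ran to the length cap without ever producing EOS
    report["hit_max_len"] = len(pred_ids) >= max_len and eos_id not in pred_ids
    non_pad = [t for t in pred_ids if t != pad_id]
    # Output is nothing but padding tokens
    report["all_padding"] = len(non_pad) == 0
    # Tokens left after stripping pad/EOS; zero means decoding yields ""
    content = [t for t in non_pad if t != eos_id]
    report["empty_after_decode"] = len(content) == 0
    return report

pad, eos, max_len = 0, 2, 1024
# Pathological output: 1024 padding tokens, no EOS
print(diagnose_generation([pad] * max_len, eos, pad, max_len))
# Healthy output: a few content tokens, then EOS, then padding
print(diagnose_generation([5, 7, 9, eos] + [pad] * 10, eos, pad, max_len))
```

If `hit_max_len` is true across the prediction set, the usual suspects are a wrong `eos_token_id` in the generation config or a tokenizer mismatch between training and inference.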

kukas commented 2 months ago

Hey! Thanks for the message and for the interest in the project!

The project is a work in progress; the models I trained have not worked yet. The models might be too small to learn decompilation, or there might be another problem. I am not sure, because I have not looked at it for a few months.

If you want to try to fix it, I recommend trying to train a dummy model to:

There is some work on that in 61_train_dummy_model.sh, so you could pick up where I left off.
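For a dummy-model sanity check along these lines, Python's standard dis module can generate (bytecode, source) pairs for a toy dataset. This is a hypothetical sketch of such a pair generator, not code from the repo; the rendering of instructions is my own simplification:

```python
import dis

def make_pair(src: str) -> tuple[str, str]:
    """Compile a source snippet and render its bytecode as text,
    yielding one (bytecode, source) training pair for a dummy model."""
    code = compile(src, "<dummy>", "exec")
    # One "OPNAME argrepr" line per instruction (argrepr may be empty)
    lines = [f"{ins.opname} {ins.argrepr}".strip()
             for ins in dis.get_instructions(code)]
    return "\n".join(lines), src

bytecode, source = make_pair("x = 1 + 2")
print(bytecode)
```

Overfitting a tiny model on a handful of such pairs should drive training loss near zero; if it does not, the problem is in the pipeline rather than model capacity.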

Do you mind if I ask how you found this project and why you are interested? :-)