toptechie156 closed this issue 7 months ago
Hi, you will need to change this line so that the details are saved in addition to the score; right now they are assigned to an unused variable: https://github.com/bigcode-project/bigcode-evaluation-harness/blob/094c7cc197d13a53c19303865e2056f1c7488ac1/bigcode_eval/tasks/humaneval.py#L98 For example:
import json
results, details = compute_code_eval(...)
with open("details.json", "w") as f:
    json.dump(details, f)  # `details` is a plain dict, so write it with the json module
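For context, a minimal sketch of what the edited method in bigcode_eval/tasks/humaneval.py could look like after this change (the surrounding code is paraphrased from the linked file, and "details.json" is just an example output path):

import json

def process_results(self, generations, references):
    # Keep the per-problem details instead of discarding them into `_`
    results, details = compute_code_eval(
        references=references,
        predictions=generations,
    )
    # `details` maps each problem index to a list of
    # (completion_id, result_dict) entries, one per generated sample
    with open("details.json", "w") as f:
        json.dump(details, f, indent=2)
    return results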
I'm running HumanEval for the codellama/CodeLlama-7b-Instruct-hf model using the following command:
accelerate launch main.py --model codellama/CodeLlama-7b-Instruct-hf --max_length_generation 512 --tasks humaneval --temperature 0.2 --n_samples 1 --batch_size 1 --precision fp16 --load_in_4bit --allow_code_execution --save_generations --save_references --limit 5
Currently I can only access the final score of my run, in the file evaluation_results.json.
I need to check for which problems my model generated correct code (unit tests passed) and for which problems the test cases failed (and see what the output of the tests was).
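Following the suggestion above, once details.json has been written, a short script like this can list which problems passed and which failed. This is a sketch: it assumes the dict-of-lists layout produced by the harness's code_eval metric, where each entry carries a "passed" flag and a "result" string holding the test output:

import json

with open("details.json") as f:
    details = json.load(f)

for task, samples in details.items():
    # Each sample is a (completion_id, result_dict) pair
    for _, res in samples:
        status = "PASS" if res["passed"] else "FAIL"
        # `result` is "passed" or the error/assertion message from the unit tests
        print(f"problem {task}: {status} -> {res['result']}")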