ChrisTimperley / RepairChain

AIxCC: automated vulnerability repair via LLMs, search, and static analysis
Apache License 2.0

added hardening to llm calls, changed some logging information and added gemini #62

Closed rubengmartins closed 2 months ago

rubengmartins commented 2 months ago

We will need to decide which strategies to use and how many LLM patches we want.

determine_patch_generation_strategy.py is the file where we may still want some changes.

I tried Gemini with my personal account, and at first sight it seems to handle JSON much better than Claude. Although we cannot test it ourselves, Anh was able to use it by just changing the model name, so it should work as expected in production. I think we should include it in our final set of models when submitting the final version.
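For reference, a minimal sketch of what "just changing the model name" could look like if the LLM calls go through a litellm-style interface (the actual RepairChain call sites may differ; the model identifiers and helper below are illustrative, not the code in this PR):

```python
# Hypothetical sketch: swapping providers by model name via litellm.
# Model identifiers are illustrative; the real call sites may differ.
from __future__ import annotations

import json

import litellm

MODELS = ["claude-3-5-sonnet-20240620", "gemini/gemini-1.5-pro"]


def request_patch_json(model: str, prompt: str) -> dict | None:
    """Ask a model for a JSON-formatted patch description."""
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    content = response.choices[0].message.content
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        return None  # malformed JSON; the caller decides whether to retry
```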

The other change is returning None when the call to the LLM fails. This is important: it avoids getting stuck sending many requests, which could happen when I was returning an empty string ("").
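As an illustration of that hardening, a minimal sketch (function names and retry policy are hypothetical, not the actual RepairChain code) of returning None instead of "" when the call fails:

```python
# Hypothetical sketch of the hardening described above: return None on failure
# instead of "", so callers can stop instead of re-issuing requests forever.
from __future__ import annotations

import logging

import litellm

logger = logging.getLogger(__name__)


def call_llm(model: str, prompt: str, max_retries: int = 3) -> str | None:
    """Return the model's reply, or None if every attempt fails."""
    for attempt in range(1, max_retries + 1):
        try:
            response = litellm.completion(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception as exc:  # e.g. rate limits, timeouts, auth errors
            logger.warning(
                "LLM call failed (attempt %d/%d): %s", attempt, max_retries, exc
            )
    return None  # explicit failure; callers skip this strategy rather than resubmit


# Caller side:
# patch = call_llm(model, prompt)
# if patch is None:
#     ...move on instead of sending more requests...
```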

Besides that, there are some minor changes to the logging information.

This should be the last pull request from my side for the competition.

rubengmartins commented 2 months ago

We never got a key for Gemini.

Yesterday, I created a personal account (they give $300 in free credits). However, I did this using the api_key method.

In the configuration file, they use Vertex AI. This needs a different setup that I would need to check.
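For context, a sketch of how the two setups differ, assuming a litellm-style interface; the environment variable names, model identifiers, and parameters below are illustrative and would need to be checked against the actual configuration:

```python
# Hypothetical sketch contrasting the API-key setup (personal account) with
# the Vertex AI setup (configuration file). Values are placeholders.
import os

import litellm

prompt = [{"role": "user", "content": "Hello"}]

# Option 1: API key, as used with the personal account.
os.environ["GEMINI_API_KEY"] = "<key>"
litellm.completion(model="gemini/gemini-1.5-pro", messages=prompt)

# Option 2: Vertex AI, as in the configuration file; authenticates through a
# GCP service account and project/location rather than an API key.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"
litellm.completion(
    model="vertex_ai/gemini-1.5-pro",
    messages=prompt,
    vertex_project="<gcp-project-id>",
    vertex_location="us-central1",
)
```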

I will check with Anh whether he is just changing the model name, since they run the fuzzing with Gemini in production. He also does not have access to Gemini.

rubengmartins commented 2 months ago

I am running some tests on mock-cp, asking for a larger number of patches, to check the stability of Gemini and to gather statistics about the different options.

Hopefully, I can do the same for nginx to get a better idea of which options to use.