Open · HushShushHush opened this issue 2 months ago
@HushShushHush Hi! Thanks for giving it a try. Support for various LLMs is in the beta version. If you have no special requirement, I strongly recommend using ChatGPT or GPT-4, which are the models currently tested sufficiently. Supporting other LLMs requires a great deal of effort, and I am still working on it.
Hello,
I modified config.rs and incoder.py to invoke a self-built LLM, InCoder-1B. During fuzz-driver generation, however, the Docker container always reports an error: ERROR [prompt_fuzz::program] Detected corrupt state of your last execution, please remove your library output in the output directory and restart, even though I removed the corresponding output before each run. Are there any other scripts that need to be modified?
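For context, this is roughly the cleanup I performed before each retry. The directory name below is an assumption based on the error message; the actual path depends on the library configured in config.rs:

```shell
# Hypothetical cleanup sketch: the error asks to remove the library output
# before restarting. "output/demo_library" is a placeholder path; substitute
# the output directory that your PromptFuzz config actually writes to.
OUT_DIR="output/demo_library"
mkdir -p "$OUT_DIR"   # simulate a stale output directory from the last run
rm -rf "$OUT_DIR"     # remove the (possibly corrupt) previous state
[ ! -d "$OUT_DIR" ] && echo "clean"
```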
Thanks.