meta-llama / PurpleLlama

Set of tools to assess and improve LLM security.

How can I get the Python file of the model architecture, such as 'model.py'? #59

Open LaosGAmin opened 1 month ago

LaosGAmin commented 1 month ago

I want to make some changes to the model, but using the transformers package seems a bit inconvenient.

laurendeason commented 3 weeks ago

Apologies for the delayed response! Could you provide additional information about what you are trying to accomplish? If I understand correctly, you wish to modify an LLM that is being used (as a model under test, judge llm, etc) within the benchmarks. Currently, the benchmarks can be run on existing models hosted by OPENAI, ANYSCALE, or TOGETHER. In order to use a self-hosted model (such as a model that you have modified locally), you can follow the instructions here: https://github.com/meta-llama/PurpleLlama/tree/main/CybersecurityBenchmarks#how-to-run-benchmarks-for-self-hosted-models.
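For context, a minimal sketch of that workflow, assuming the Hugging Face transformers package: modify a checkpoint locally, save it, and then serve the saved directory as a self-hosted model for the benchmarks per the linked instructions. The model name and output path below are illustrative, not part of PurpleLlama.

```python
# Sketch only: load a checkpoint, apply local modifications, and save the result
# so it can be self-hosted and configured as the model under test.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # illustrative checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# ... apply your architectural or weight modifications here ...

# The saved directory can then be served by your own hosting stack and
# pointed at by the benchmark run command described in the linked README section.
model.save_pretrained("./my-modified-model")
tokenizer.save_pretrained("./my-modified-model")
```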

LaosGAmin commented 3 weeks ago

I think I need to wrap the model's layers or blocks with a wrapper function, so I need the source code for the particular model.
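A separate 'model.py' may not be needed for this: a transformers model is a regular torch.nn.Module, so its blocks can be wrapped and swapped in place, and the architecture source itself lives inside the transformers library (e.g. modeling_llama.py). Below is a minimal sketch assuming PyTorch, transformers, and a Llama-style layout where the decoder blocks sit at model.model.layers; the checkpoint name is illustrative.

```python
# Sketch only: wrap each decoder block of a Hugging Face model without a standalone model.py.
import inspect

import torch.nn as nn
from transformers import AutoModelForCausalLM


class BlockWrapper(nn.Module):
    """Wraps a single decoder block so extra logic can run around its forward pass."""

    def __init__(self, block: nn.Module):
        super().__init__()
        self.block = block

    def forward(self, *args, **kwargs):
        # Pre-processing / instrumentation would go here.
        out = self.block(*args, **kwargs)
        # Post-processing would go here.
        return out


model_name = "meta-llama/Llama-2-7b-hf"  # illustrative checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)

# If you do want to read the architecture code, this prints the path of the
# modeling file that defines the loaded model class inside the transformers package.
print(inspect.getsourcefile(type(model)))

# Replace each decoder block with its wrapped version in place.
for i, block in enumerate(model.model.layers):
    model.model.layers[i] = BlockWrapper(block)
```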