

PentestGPT

A GPT-empowered penetration testing tool.
Explore the docs »

Design Details · View Demo · Report Bug or Request Feature


General Updates

Quick Start

  1. Create a virtual environment if necessary. (virtualenv -p python3 venv, source venv/bin/activate)
  2. Install the project with pip3 install git+https://github.com/GreyDGL/PentestGPT
  3. Ensure that you have linked a payment method to your OpenAI account. Export your API key with export OPENAI_API_KEY='<your key here>'. If you need a custom API base, export it with export OPENAI_BASEURL='https://api.xxxx.xxx/v1'.
  4. Test the connection with pentestgpt-connection
  5. For Kali users: use tmux as the terminal environment. You can do so by simply running tmux in the native terminal.
  6. To start: pentestgpt --logging
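
The steps above can be collected into a single shell session. This is an untested sketch: the API key and base URL are placeholder values, and the pip install requires network access.

```shell
# 1-2. Create a virtual environment and install PentestGPT
virtualenv -p python3 venv
source venv/bin/activate
pip3 install git+https://github.com/GreyDGL/PentestGPT

# 3. Configure the OpenAI API (placeholder values -- use your own)
export OPENAI_API_KEY='<your key here>'
export OPENAI_BASEURL='https://api.xxxx.xxx/v1'   # only if you need a custom base

# 4. Verify the connection
pentestgpt-connection

# 5-6. (Kali users) start tmux first, then launch PentestGPT
tmux
pentestgpt --logging
```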

Getting Started

Common Questions

Installation

PentestGPT is tested under Python 3.10. Other Python3 versions should work but are not tested.

Install with pip

PentestGPT relies on the OpenAI API to achieve high-quality reasoning. You may refer to the installation video here.

  1. Install the latest version with pip3 install git+https://github.com/GreyDGL/PentestGPT
    • You may also clone the project to a local environment and install it for easier customization and development:
      • git clone https://github.com/GreyDGL/PentestGPT
      • cd PentestGPT
      • pip3 install -e .
  2. To use the OpenAI API:
    • Ensure that you have linked a payment method to your OpenAI account.
    • Export your API key with export OPENAI_API_KEY='<your key here>'.
    • If you need a custom API base, export it with export OPENAI_BASEURL='https://api.xxxx.xxx/v1'.
  3. To verify that the connection is configured properly, run pentestgpt-connection. After a while, you should see some sample conversation with ChatGPT.

    • A sample output is below:

      You're testing the connection for PentestGPT v 0.11.0
      #### Test connection for OpenAI api (GPT-4)
      1. You're connected with OpenAI API. You have GPT-4 access. To start PentestGPT, please use <pentestgpt --reasoning_model=gpt-4>
      #### Test connection for OpenAI api (GPT-3.5)
      1. You're connected with OpenAI API. You have GPT-3.5 access. To start PentestGPT, please use <pentestgpt --reasoning_model=gpt-3.5-turbo-16k>

    • Note: if you have not linked a payment method to your OpenAI account, you will see error messages.
  4. The ChatGPT cookie solution is deprecated and not recommended. You may still use it by running pentestgpt --reasoning_model=gpt-4 --useAPI=False.
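
The API configuration in step 2 can be sketched as a short shell snippet. The key and base URL below are placeholder values, not working credentials:

```shell
# Placeholder credentials -- replace with your own key and, if needed,
# the base URL of an OpenAI-compatible endpoint.
export OPENAI_API_KEY='sk-your-key-here'
export OPENAI_BASEURL='https://api.openai.com/v1'

# Exported variables are visible to child processes such as
# pentestgpt-connection and pentestgpt.
echo "Key set: ${OPENAI_API_KEY:+yes}"   # prints "Key set: yes"
```

Putting the exports in your shell profile (e.g. ~/.bashrc) makes them persist across sessions.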

Build from Source

  1. Clone the repository to your local environment.
  2. Ensure that poetry is installed. If not, please refer to the poetry installation guide.

Usage

  1. We recommend running one of the following, depending on your API access:

    • (recommended) pentestgpt --reasoning_model=gpt-4-turbo to use the latest GPT-4-turbo API.
    • pentestgpt --reasoning_model=gpt-4 if you have access to the GPT-4 API.
    • pentestgpt --reasoning_model=gpt-3.5-turbo-16k if you only have access to the GPT-3.5 API.
  2. To start, run pentestgpt --args.

    • --help shows the help message.
    • --reasoning_model is the reasoning model you want to use.
    • --parsing_model is the parsing model you want to use.
    • --useAPI sets whether to use the OpenAI API. By default it is set to True.
    • --log_dir is the customized log output directory. The location is a relative path.
    • --logging defines whether you would like to share the logs with us. By default it is set to False.
  3. The tool works similarly to msfconsole. Follow the guidance to perform penetration testing.

  4. In general, PentestGPT takes commands similar to ChatGPT. There are several basic commands.

    1. The commands are:
      • help: show the help message.
      • next: key in the test execution result and get the next step.
      • more: let PentestGPT explain more details of the current step. Also, a new sub-task solver will be created to guide the tester.
      • todo: show the todo list.
      • discuss: discuss with the PentestGPT.
      • google: search on Google. This function is still under development.
      • quit: exit the tool and save the output as log file (see the reporting section below).
    2. You can use <SHIFT + right arrow> to end your input (ENTER inserts a new line).
    3. You may always use TAB to autocomplete the commands.
    4. When you're given a drop-down selection list, you can use the arrow keys to navigate the list. Press ENTER to select an item. Similarly, use <SHIFT + right arrow> to confirm the selection. The user can submit info about:
      • tool: output of the security test tool used
      • web: relevant content of a web page
      • default: whatever you want, the tool will handle it
      • user-comments: user comments about PentestGPT operations
  5. In the sub-task handler initiated by more, users can execute more commands to investigate a specific problem:

    1. The commands are:
      • help: show the help message.
      • brainstorm: let PentestGPT brainstorm on the local task for all the possible solutions.
      • discuss: discuss with PentestGPT about this local task.
      • google: search on Google. This function is still under development.
      • continue: exit the subtask and continue the main testing session.
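
Putting the flags above together, a typical launch might look like the following. The model names are the ones documented above; the log directory name is only an example:

```shell
# GPT-4-turbo for reasoning, GPT-3.5 for parsing, custom log directory,
# and log sharing enabled.
pentestgpt --reasoning_model=gpt-4-turbo --parsing_model=gpt-3.5-turbo-16k --log_dir=logs --logging
```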

Report and Logging

  1. [Update] If you would like us to collect the logs to improve the tool, please run pentestgpt --logging. We will only collect the LLM usage, without any information related to your OpenAI key.
  2. After finishing the penetration testing, a report will be automatically generated in the logs folder (if you quit with the quit command).
  3. The report can be printed in a human-readable format by running python3 utils/report_generator.py <log file>. A sample report sample_pentestGPT_log.txt is also uploaded.
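
For example, the uploaded sample log can be rendered with the command below (run from the repository root, assuming the sample file sits there as described above):

```shell
python3 utils/report_generator.py sample_pentestGPT_log.txt
```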

Custom Model Endpoints and Local LLMs

PentestGPT now supports local LLMs, but the prompts are only optimized for GPT-4.

Citation

Please cite our paper as:

@inproceedings {299699,
author = {Gelei Deng and Yi Liu and V{\'\i}ctor Mayoral-Vilches and Peng Liu and Yuekang Li and Yuan Xu and Tianwei Zhang and Yang Liu and Martin Pinzger and Stefan Rass},
title = {{PentestGPT}: Evaluating and Harnessing Large Language Models for Automated Penetration Testing},
booktitle = {33rd USENIX Security Symposium (USENIX Security 24)},
year = {2024},
isbn = {978-1-939133-44-1},
address = {Philadelphia, PA},
pages = {847--864},
url = {https://www.usenix.org/conference/usenixsecurity24/presentation/deng},
publisher = {USENIX Association},
month = aug
}

License

Distributed under the MIT License. See LICENSE.txt for more information. This tool is for educational purposes only, and the authors do not condone any illegal use. Use at your own risk.

Contact the Contributors!
