mariocandela / beelzebub

A secure low code honeypot framework, leveraging AI for System Virtualization.
https://beelzebub-honeypot.com
MIT License

Integration with API together.ai #118

Closed: sashabal closed this issue 2 months ago

sashabal commented 2 months ago

Is your feature request related to a problem? Please describe.
There are services offering LLM models with a free starting credit for testing, for example together.ai.

Describe the solution you'd like
If possible, please implement integration with the together.ai LLM API (e.g. meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo).

Additional context
Example request in Python:

import os
from together import Together

client = Together(api_key=os.environ.get('TOGETHER_API_KEY'))

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo",
    messages=[{"role": "user", "content": "Hello"}],  # placeholder prompt
    max_tokens=512,
    temperature=0.7,
    top_p=0.7,
    top_k=50,
    repetition_penalty=1,
    stop=["<|eot_id|>"],
    stream=True
)
# With stream=True the call returns an iterator of chunks,
# not a single completion object:
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")

Example curl request:

curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
    "messages": [],
    "max_tokens": 512,
    "temperature": 0.7,
    "top_p": 0.7,
    "top_k": 50,
    "repetition_penalty": 1,
    "stop": "[\"<|eot_id|>\"]",
    "stream": true
  }'
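Since the together.ai endpoint is plain HTTPS/JSON, a framework-side integration needs nothing beyond the standard library. A minimal sketch of the same request (the function name `build_together_request` and the default prompt/model values are illustrative, not part of the beelzebub code base):

```python
import json
import os
import urllib.request

TOGETHER_CHAT_URL = "https://api.together.xyz/v1/chat/completions"

def build_together_request(prompt, model="meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo"):
    """Build (but do not send) a chat-completion request for together.ai."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
        "temperature": 0.7,
        "stop": ["<|eot_id|>"],  # note: a JSON array of strings
    }
    return urllib.request.Request(
        TOGETHER_CHAT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('TOGETHER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (requires a valid TOGETHER_API_KEY):
# with urllib.request.urlopen(build_together_request("Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```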
mariocandela commented 2 months ago

Hi @sashabal,

Nice to meet you!

This feature is not part of the product roadmap, but I will do everything possible to work on it in the coming months.

If you need it immediately, you could run Llama locally or implement the feature yourself; I'm happy to guide you through the beelzebub code base :smile:

Cheers

Mario

sashabal commented 2 months ago

OK, thank you.