Josh-XT / AGiXT

AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.
https://AGiXT.com
MIT License

Can't seem to get this up and running, am I doing this right? #16

Closed — OverwriteDev closed this issue 1 year ago

OverwriteDev commented 1 year ago

Hey there! This is an interesting project. I can't seem to get things up and running though. Would you mind having a look at the steps I've taken below and let me know if I've missed something?

1- I followed your installation instructions. I prefer using virtual environments so I used Miniconda to create one.

2- I made a copy of the .env file and modified it as indicated below.

# INSTANCE CONFIG
# Memory is based on AGENT_NAME
AGENT_NAME=My-Agent-Name

# babyagi - Objective settings for running in terminal
OBJECTIVE=Create a chatbot
INITIAL_TASK=Develop an initial task list.

# AI_PROVIDER can currently be openai, llamacpp (local only), or oobabooga (local only)
AI_PROVIDER=oobabooga

# AI_PROVIDER_URI is only needed for custom AI providers such as Oobabooga Text Generation Web UI
AI_PROVIDER_URI=http://127.0.0.1:7860

# If you're using LLAMACPP, you can set the path to the llama binary here.
# If llamacpp is not in the llama folder of the project, you can set the path here.
# Example for Windows: LLAMACPP_PATH=C:\llama\main.exe
# Example for Linux: LLAMACPP_PATH=/path/to/llama/main
#LLAMACPP_PATH=llama/main

# Bing Conversation Style if using Bing. Options are creative, balanced, and precise
#BING_CONVERSATION_STYLE=creative

# ChatGPT settings
#CHATGPT_USERNAME=
#CHATGPT_PASSWORD=

# Enables or disables the AI to use command extensions.
COMMANDS_ENABLED=True

# Memory Settings
# No memory means it will not remember anything or use any memory.
NO_MEMORY=False

# Long term memory means it uses a file of its conversations to remember things from previous sessions.
USE_LONG_TERM_MEMORY_ONLY=False

# AI Model can either be gpt-3.5-turbo, gpt-4, text-davinci-003, vicuna, etc
# This determines what prompts are given to the AI and determines which model is used for certain providers.
AI_MODEL=vicuna

# Temperature for AI, leave default if you don't know what this is
AI_TEMPERATURE=0.5

# Maximum number of tokens for AI response, default is 2000
MAX_TOKENS=2000

# Working directory for the agent
WORKING_DIRECTORY=WORKSPACE

# Extensions settings

# OpenAI settings for running OpenAI AI_PROVIDER
#OPENAI_API_KEY=

# Huggingface settings
#HUGGINGFACE_API_KEY=
#HUGGINGFACE_AUDIO_TO_TEXT_MODEL=facebook/wav2vec2-large-960h-lv60-self

# Selenium settings
SELENIUM_WEB_BROWSER=chrome

# Twitter settings
#TW_CONSUMER_KEY=my-twitter-consumer-key
#TW_CONSUMER_SECRET=my-twitter-consumer-secret
#TW_ACCESS_TOKEN=my-twitter-access-token
#TW_ACCESS_TOKEN_SECRET=my-twitter-access-token-secret

# Github settings
#GITHUB_API_KEY=
#GITHUB_USERNAME=

# Sendgrid Email settings
#SENDGRID_API_KEY=
#SENDGRID_EMAIL=

# Microsoft 365 settings
#MICROSOFT_365_CLIENT_ID=
#MICROSOFT_365_CLIENT_SECRET=
#MICROSOFT_365_REDIRECT_URI=

# Voice (Choose one: ElevenLabs, Brian, Mac OS)
# BrianTTS
USE_BRIAN_TTS=True

# Mac OS
#USE_MAC_OS_TTS=False

# ElevenLabs (If API key is not empty, it will be used)
#ELEVENLABS_API_KEY=
#ELEVENLABS_VOICE=Josh
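As a sanity check on the config above, here is a minimal stdlib-only sketch of reading a .env file the way most loaders do (the project itself may use python-dotenv or something else entirely — this is just to confirm what values the keys resolve to):

```python
# Minimal sketch of parsing a .env file with the standard library only.
# The real project may load these values differently; keys shown match the config above.
def read_env(path=".env"):
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blank lines and comments
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values
```

Printing `read_env()["AI_PROVIDER_URI"]` should show `http://127.0.0.1:7860` if the file was copied and edited correctly.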

3- Since I'm using oobabooga, I set up my startup script as follows:

@echo off

@echo Starting the web UI...

cd /D "%~dp0"

set MAMBA_ROOT_PREFIX=%cd%\installer_files\mamba
set INSTALL_ENV_DIR=%cd%\installer_files\env

if not exist "%MAMBA_ROOT_PREFIX%\condabin\micromamba.bat" (
  call "%MAMBA_ROOT_PREFIX%\micromamba.exe" shell hook >nul 2>&1
)
call "%MAMBA_ROOT_PREFIX%\condabin\micromamba.bat" activate "%INSTALL_ENV_DIR%" || ( echo MicroMamba hook not found. && goto end )
cd text-generation-webui

call python server.py --model anon8231489123_vicuna-13b-GPTQ-4bit-128g --auto-devices --chat --wbits 4 --groupsize 128 --listen --no-stream

:end
pause

4- In the oobabooga UI I have things set up as follows, and everything seems to start up from this end:

(screenshots of the oobabooga UI settings)

5- I start up app.py and npm, and they both seem to start fine:

(screenshots of app.py and npm starting up)

This is where I start to run into an issue, and I'm not sure how to approach solving it. When I create a new agent, a blank file with the new agent's name is created in the memories folder. When I click Start Task, it creates a new agent called 'Agent-LLM', populates that with the task I entered, and I get a KeyError in the app.py window:
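For context, a KeyError like this usually means the code indexed a dict with a name that isn't in it — here, plausibly an agent-config lookup that falls back to the default 'Agent-LLM' name. A hypothetical illustration (the dict and key names are made up, not AGiXT's actual code):

```python
# Hypothetical illustration of the kind of lookup that raises KeyError when
# an agent name is missing from the loaded config. Names here are invented.
agents = {"Agent-LLM": {"task": "Create a chatbot"}}

try:
    cfg = agents["My-Agent-Name"]  # missing key -> raises KeyError
except KeyError as missing:
    # .get() with a default avoids the crash but hides the misconfiguration
    cfg = agents.get("My-Agent-Name", {})
    print(f"missing agent config: {missing}")
```

The traceback's quoted key tells you which name the code looked up, which is the first clue to which setting didn't propagate.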

(screenshots of the agent files and the KeyError traceback)

Any assistance getting this up and running would be very much appreciated.

Josh-XT commented 1 year ago

Hello! I think you'll need to run your oobabooga server with the following:

conda activate textgen
python3 server.py --model YOUR-MODEL --listen --no-stream

This will start the Oobabooga server so that it will accept API calls.
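Before pointing Agent-LLM at it, it can help to confirm something is actually listening at the URI from AI_PROVIDER_URI. A simple TCP reachability check (not a full API call, since the exact endpoint depends on the oobabooga version):

```python
# Minimal sketch: check that something is listening at the configured host/port.
# http://127.0.0.1:7860 matches the AI_PROVIDER_URI above; adjust if yours differs.
import socket
from urllib.parse import urlparse

def is_listening(uri="http://127.0.0.1:7860", timeout=2.0):
    parsed = urlparse(uri)
    try:
        with socket.create_connection((parsed.hostname, parsed.port or 80), timeout):
            return True
    except OSError:
        return False
```

If this returns False, the server either isn't running or was started without --listen, so the API port was never opened.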

OverwriteDev commented 1 year ago

Hey there! Just as I was making progress, Oobabooga updated and broke. Got it sorted and made adjustments for the new version; I get information back from Oobabooga now. Now I just have to figure out the syntax for the commands side of things. Thanks for the help!