Please describe the purpose of this pull request.
Create a mock_llm helper class that can intercept the public API calls to LLMs when a static response is sufficient for the unit test. Right now the default is to use the public API, but once the mock is stable it should become the default. The modifications to test_client.py are only there to show what the integration would look like; they are not intended to land as part of this PR.
How to test
Normal test run (calls the public API):
poetry run pytest -s -vv tests/test_client.py::test_agent_interactions
With the mock API:
poetry run pytest -s -vv tests/test_client.py::test_agent_interactions --llm-api=mock
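The `--llm-api` switch would typically be wired up in `conftest.py` via `pytest_addoption`. A sketch under the assumption that the option currently defaults to the public API (option and fixture names here are illustrative, not necessarily the exact implementation):

```python
# Hypothetical conftest.py sketch for the --llm-api flag.
import pytest


def pytest_addoption(parser):
    # Default to the public API for now; once the mock is stable,
    # the default should flip to "mock".
    parser.addoption(
        "--llm-api",
        action="store",
        default="public",
        choices=("public", "mock"),
        help="Which LLM backend the tests should hit.",
    )


@pytest.fixture
def llm_api(request):
    # Tests (or a server fixture) read this to decide whether to
    # start the mock LLM server thread before running.
    return request.config.getoption("--llm-api")
```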
Have you tested this PR?
(letta-py3.12) carenthomas@Jeffs-MacBook-Pro-2 MemGPT % poetry run pytest -s -vv tests/test_client.py::test_agent_interactions --llm-api=mock
Initializing database...
====================================================================== test session starts =======================================================================
platform darwin -- Python 3.12.7, pytest-8.3.3, pluggy-1.5.0 -- /Users/carenthomas/Library/Caches/pypoetry/virtualenvs/letta-2EtcMsTd-py3.12/bin/python
cachedir: .pytest_cache
rootdir: /Users/carenthomas/Documents/MemGPT/tests
configfile: pytest.ini
plugins: asyncio-0.23.8, order-1.3.0, anyio-3.7.1
asyncio: mode=Mode.AUTO
collected 1 item
tests/test_client.py::test_agent_interactions[client0] Starting mock llm api server thread
tests.conftest
Running: uvicorn server:mock_llm_app --host localhost --port 8000
INFO: Started server process [70024]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on https://localhost:8000 (Press CTRL+C to quit)
Starting server thread
Starting server...
Running: uvicorn server:app --host localhost --port 8283
INFO: Started server process [70024]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:8283 (Press CTRL+C to quit)
Running client tests with server: http://localhost:8283
INFO:httpx:HTTP Request: GET http://testserver/configs "HTTP/1.1 200 OK"
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): localhost:8283
INFO: ::1:59367 - "POST /v1/tools HTTP/1.1" 307 Temporary Redirect
DEBUG:urllib3.connectionpool:http://localhost:8283 "POST /v1/tools HTTP/1.1" 307 0
INFO: ::1:59367 - "POST /v1/tools/ HTTP/1.1" 200 OK
DEBUG:urllib3.connectionpool:http://localhost:8283 "POST /v1/tools/ HTTP/1.1" 200 1510
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): localhost:8283
INFO: ::1:59368 - "POST /v1/tools HTTP/1.1" 307 Temporary Redirect
DEBUG:urllib3.connectionpool:http://localhost:8283 "POST /v1/tools HTTP/1.1" 307 0
INFO: ::1:59368 - "POST /v1/tools/ HTTP/1.1" 200 OK
DEBUG:urllib3.connectionpool:http://localhost:8283 "POST /v1/tools/ HTTP/1.1" 200 1989
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): localhost:8283
INFO: ::1:59369 - "POST /v1/agents HTTP/1.1" 307 Temporary Redirect
DEBUG:urllib3.connectionpool:http://localhost:8283 "POST /v1/agents HTTP/1.1" 307 0
2024-10-24 23:25:03,303 - Letta.letta.server.server - DEBUG - Attempting to find user: user-00000000-0000-4000-8000-000000000000
DEBUG:chromadb.auth.registry:Registering provider: token_config
DEBUG:chromadb.auth.registry:Registering provider: user_token_config
DEBUG:chromadb.auth.registry:Registering provider: token
DEBUG:chromadb.auth.registry:Registering provider: token
DEBUG:chromadb.config:Starting component System
DEBUG:chromadb.config:Starting component Posthog
DEBUG:chromadb.config:Starting component OpenTelemetryClient
DEBUG:chromadb.config:Starting component SimpleAssignmentPolicy
DEBUG:chromadb.config:Starting component SqliteDB
DEBUG:chromadb.config:Starting component QuotaEnforcer
DEBUG:chromadb.config:Starting component LocalSegmentManager
DEBUG:chromadb.config:Starting component SegmentAPI
2024-10-24 23:25:04,331 - Letta.letta.server.server - DEBUG - Created new agent from config: <letta.agent.Agent object at 0x120f6ca10>
INFO: ::1:59369 - "POST /v1/agents/ HTTP/1.1" 200 OK
DEBUG:urllib3.connectionpool:http://localhost:8283 "POST /v1/agents/ HTTP/1.1" 200 8436
AGENT ID agent-513480df-6a45-43a1-9461-7935db7369bb
Sending message Hello, agent!
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): localhost:8283
2024-10-24 23:25:04,407 - Letta.letta.server.server - DEBUG - Checking for agent user_id=user-00000000-0000-4000-8000-000000000000 agent_id=agent-513480df-6a45-43a1-9461-7935db7369bb
2024-10-24 23:25:04,408 - Letta.letta.server.server - DEBUG - Agent not loaded, loading agent user_id=user-00000000-0000-4000-8000-000000000000 agent_id=agent-513480df-6a45-43a1-9461-7935db7369bb
2024-10-24 23:25:04,408 - Letta.letta.server.server - DEBUG - Grabbing agent user_id=user-00000000-0000-4000-8000-000000000000 agent_id=agent-513480df-6a45-43a1-9461-7935db7369bb from database
2024-10-24 23:25:04,409 - Letta.letta.server.server - DEBUG - Creating an agent object
2024-10-24 23:25:04,426 - Letta.letta.server.server - DEBUG - Adding agent to the agent cache: user_id=user-00000000-0000-4000-8000-000000000000, agent_id=agent-513480df-6a45-43a1-9461-7935db7369bb
2024-10-24 23:25:04,428 - Letta.letta.server.server - DEBUG - Got input messages: [Message(id='message-2b53e30a-7a59-4950-b219-621d981f6010', role=<MessageRole.user: 'user'>, text='{\n "type": "user_message",\n "message": "Hello, agent!",\n "time": "2024-10-24 11:25:04 PM PDT-0700"\n}', user_id='user-00000000-0000-4000-8000-000000000000', agent_id='agent-513480df-6a45-43a1-9461-7935db7369bb', model=None, name=None, created_at=datetime.datetime(2024, 10, 25, 6, 25, 4, 428602, tzinfo=datetime.timezone.utc), tool_calls=None, tool_call_id=None)]
2024-10-24 23:25:04,429 - Letta.letta.server.server - DEBUG - Checking for agent user_id=user-00000000-0000-4000-8000-000000000000 agent_id=agent-513480df-6a45-43a1-9461-7935db7369bb
2024-10-24 23:25:04,429 - Letta.letta.server.server - DEBUG - Starting agent step
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): localhost:8000
INFO: ::1:59371 - "POST /v1/chat/completions HTTP/1.1" 200 OK
DEBUG:urllib3.connectionpool:https://localhost:8000 "POST /v1/chat/completions HTTP/1.1" 200 746
2024-10-24 23:25:04,474 - Letta.letta.server.server - DEBUG - Calling step_yield()
2024-10-24 23:25:04,476 - Letta.letta.server.server - DEBUG - Checking for agent user_id=user-00000000-0000-4000-8000-000000000000 agent_id=agent-513480df-6a45-43a1-9461-7935db7369bb
2024-10-24 23:25:04,477 - Letta.letta.server.server - DEBUG - Checking for agent user_id=user-00000000-0000-4000-8000-000000000000 agent_id=agent-513480df-6a45-43a1-9461-7935db7369bb
INFO: ::1:59370 - "POST /v1/agents/agent-513480df-6a45-43a1-9461-7935db7369bb/messages HTTP/1.1" 200 OK
DEBUG:urllib3.connectionpool:http://localhost:8283 "POST /v1/agents/agent-513480df-6a45-43a1-9461-7935db7369bb/messages HTTP/1.1" 200 1112
Response {
  "messages": [
    {
      "id": "message-b1e5dab3-0316-42d8-a379-bcd2fabd9449",
      "role": "assistant",
      "text": "User has greeted me. Time to establish a connection and gauge their mood.",
      "user_id": "user-00000000-0000-4000-8000-000000000000",
      "agent_id": "agent-513480df-6a45-43a1-9461-7935db7369bb",
      "model": "memgpt-openai",
      "name": null,
      "created_at": "2024-10-25T06:25:04.449629",
      "tool_calls": [
        {
          "id": "call_HGVehbDCxJzRZqzMNis4CUDY",
          "type": "function",
          "function": {
            "name": "send_message",
            "arguments": "{\n \"message\": \"Hello! It's great to meet you! How are you doing today?\"\n}"
          }
        }
      ],
      "tool_call_id": null
    },
    {
      "id": "message-e40dd390-ecdb-4217-af5f-171c1c150e82",
      "role": "tool",
      "text": "{\n \"status\": \"OK\",\n \"message\": \"None\",\n \"time\": \"2024-10-24 11:25:04 PM PDT-0700\"\n}",
      "user_id": "user-00000000-0000-4000-8000-000000000000",
      "agent_id": "agent-513480df-6a45-43a1-9461-7935db7369bb",
      "model": "memgpt-openai",
      "name": "send_message",
      "created_at": "2024-10-25T06:25:04.449970",
      "tool_calls": null,
      "tool_call_id": "call_HGVehbDCxJzRZqzMNis4CUDY"
    }
  ],
  "usage": {
    "completion_tokens": 48,
    "prompt_tokens": 2370,
    "total_tokens": 2418,
    "step_count": 1
  }
}
[Message(id='message-b1e5dab3-0316-42d8-a379-bcd2fabd9449', role=<MessageRole.assistant: 'assistant'>, text='User has greeted me. Time to establish a connection and gauge their mood.', user_id='user-00000000-0000-4000-8000-000000000000', agent_id='agent-513480df-6a45-43a1-9461-7935db7369bb', model='memgpt-openai', name=None, created_at=datetime.datetime(2024, 10, 25, 6, 25, 4, 449629), tool_calls=[ToolCall(id='call_HGVehbDCxJzRZqzMNis4CUDY', type='function', function=ToolCallFunction(name='send_message', arguments='{\n "message": "Hello! It\'s great to meet you! How are you doing today?"\n}'))], tool_call_id=None), Message(id='message-e40dd390-ecdb-4217-af5f-171c1c150e82', role=<MessageRole.tool: 'tool'>, text='{\n "status": "OK",\n "message": "None",\n "time": "2024-10-24 11:25:04 PM PDT-0700"\n}', user_id='user-00000000-0000-4000-8000-000000000000', agent_id='agent-513480df-6a45-43a1-9461-7935db7369bb', model='memgpt-openai', name='send_message', created_at=datetime.datetime(2024, 10, 25, 6, 25, 4, 449970), tool_calls=None, tool_call_id='call_HGVehbDCxJzRZqzMNis4CUDY')]
PASSEDDEBUG:urllib3.connectionpool:Starting new HTTP connection (1): localhost:8283
INFO: ::1:59372 - "DELETE /v1/agents/agent-513480df-6a45-43a1-9461-7935db7369bb HTTP/1.1" 200 OK
DEBUG:urllib3.connectionpool:http://localhost:8283 "DELETE /v1/agents/agent-513480df-6a45-43a1-9461-7935db7369bb HTTP/1.1" 200 4
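Because the mock backend is deterministic, a test can make exact structural assertions on the response instead of loose ones. A hypothetical sketch (the `response` dict mirrors the shape printed above; the helper name is an assumption, not part of this PR):

```python
# Hypothetical assertion sketch: with a static mock response, the test
# can check exact structure. The dict below mirrors the output above.
response = {
    "messages": [
        {
            "role": "assistant",
            "tool_calls": [
                {"type": "function", "function": {"name": "send_message"}}
            ],
        },
        {"role": "tool", "name": "send_message"},
    ],
    "usage": {"step_count": 1},
}


def assert_mock_greeting(resp: dict) -> None:
    # Deterministic mock output makes exact assertions safe.
    assistant, tool = resp["messages"]
    assert assistant["role"] == "assistant"
    assert assistant["tool_calls"][0]["function"]["name"] == "send_message"
    assert tool["role"] == "tool"
    assert resp["usage"]["step_count"] == 1


assert_mock_greeting(response)
```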
Related issues or PRs
Please link any related GitHub issues or PRs.
Is your PR over 500 lines of code?
If so, please break up your PR into multiple smaller PRs so that we can review them quickly, or provide justification for its length.
Additional context
Add any other context or screenshots about the PR here.