stitionai / devika

Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective. Devika aims to be a competitive open-source alternative to Devin by Cognition AI.

Use WebSocket instead of polling #16

Closed. rohittp0 closed this issue 5 months ago.

rohittp0 commented 6 months ago

I was looking at the logs and saw:

24.03.21 23:01:49: root: INFO   : /api/token-usage GET
24.03.21 23:01:49: root: DEBUG  : /api/token-usage GET - Response: {"token_usage":0}

24.03.21 23:01:49: root: INFO   : /api/project-list GET
24.03.21 23:01:49: root: DEBUG  : /api/project-list GET - Response: {"projects":["test"]}

24.03.21 23:01:49: root: INFO   : /api/model-list GET
24.03.21 23:01:49: root: DEBUG  : /api/model-list GET - Response: {"models":[["Claude 3 Opus","claude-3-opus-20240229"],["Claude 3 Sonnet","claude-3-sonnet-20240229"],["Claude 3 Haiku","claude-3-haiku-20240307"],["GPT-4 Turbo","gpt-4-0125-preview"],["GPT-3.5","gpt-3.5-turbo-0125"],["gemma:latest","9B - Q4_0"],["mistral:latest","7B - Q4_0"]]}

24.03.21 23:01:49: root: INFO   : /api/get-agent-state POST
24.03.21 23:01:49: root: DEBUG  : /api/get-agent-state POST - Response: {"state":null}

24.03.21 23:01:49: root: INFO   : /api/get-messages POST
24.03.21 23:01:49: root: DEBUG  : /api/get-messages POST - Response: {"messages":[]}

24.03.21 23:01:50: root: INFO   : /api/token-usage GET
24.03.21 23:01:50: root: DEBUG  : /api/token-usage GET - Response: {"token_usage":0}

24.03.21 23:01:50: root: INFO   : /api/project-list GET
24.03.21 23:01:50: root: DEBUG  : /api/project-list GET - Response: {"projects":["test"]}

24.03.21 23:01:50: root: INFO   : /api/model-list GET
24.03.21 23:01:50: root: DEBUG  : /api/model-list GET - Response: {"models":[["Claude 3 Opus","claude-3-opus-20240229"],["Claude 3 Sonnet","claude-3-sonnet-20240229"],["Claude 3 Haiku","claude-3-haiku-20240307"],["GPT-4 Turbo","gpt-4-0125-preview"],["GPT-3.5","gpt-3.5-turbo-0125"],["gemma:latest","9B - Q4_0"],["mistral:latest","7B - Q4_0"]]}

24.03.21 23:01:50: root: INFO   : /api/get-agent-state POST
24.03.21 23:01:50: root: DEBUG  : /api/get-agent-state POST - Response: {"state":null}

24.03.21 23:01:50: root: INFO   : /api/get-messages POST
24.03.21 23:01:50: root: DEBUG  : /api/get-messages POST - Response: {"messages":[]}

[... the same set of requests repeats every second; the remaining log lines are identical apart from the timestamps ...]

A lot of requests are being made continuously. I think it would be better, performance-wise, to use a WebSocket instead of polling once per second. Any thoughts? Was this done intentionally, or am I missing something here?
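For illustration, here is a minimal sketch of what the frontend side could look like if the backend pushed updates over Socket.IO instead of being polled. The event names, payload shapes, and port are placeholders, not the current API, and the backend would need a matching Socket.IO endpoint (e.g. via Flask-SocketIO):

import { io } from "socket.io-client";

// Connect to the backend's (hypothetical) Socket.IO endpoint.
const socket = io("http://127.0.0.1:1337");

// Instead of polling /api/token-usage, /api/get-agent-state and
// /api/get-messages every second, react to pushed events.
socket.on("token-usage", (data: { token_usage: number }) => {
  console.log("token usage:", data.token_usage);
});

socket.on("agent-state", (state: unknown) => {
  console.log("agent state:", state);
});

socket.on("messages", (data: { messages: unknown[] }) => {
  console.log("messages:", data.messages);
});

With something like this, the server only emits when state actually changes, so the per-second request churn shown in the log above goes away.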

rohittp0 commented 6 months ago

Looking at the logs, many of these requests could also be optimized away without a WebSocket at all, by only making them when the relevant UI components (drop-down lists, etc.) are actually opened.
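For example (a rough sketch, not the current frontend code; the endpoint path is taken from the log above, while the caching helper and event wiring are assumptions), the model list could be fetched lazily and cached:

// Cache the model list so it is fetched at most once instead of every second.
let modelListCache: [string, string][] | null = null;

async function getModelList(): Promise<[string, string][]> {
  if (modelListCache !== null) return modelListCache;
  const res = await fetch("http://127.0.0.1:1337/api/model-list");
  const body: { models: [string, string][] } = await res.json();
  modelListCache = body.models;
  return modelListCache;
}

// Wire it to the drop-down's open/focus handler instead of a polling timer, e.g.:
// dropdown.addEventListener("focus", async () => renderOptions(await getModelList()));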