Closed: ishaan-jaff closed this PR 1 year ago.
cc @betogaona7 @giorgiop can you please take a look at this PR?
I hope it's useful to you. If this initial commit looks good, I can add docs and more tests too.
Perhaps there is a default setting that enforces a maximum response length? Here is the log; notice that the JSON is incomplete:
```
INFO:__main__:create tale sections
Error getting the JSON. Error: Expecting ',' delimiter: line 1 column 1043 (char 1042)
Result: {"classes": [{"class_name": "Payload", "class_docstring": "This class represents the payload for updating user alerts. It contains an instance of the `Update` class, which is formatted according to the Update structure."}, {"class_name": "Update", "class_docstring": "This class represents the update to be performed on a user alert. It has multiple members, including 'AlertID', 'Threshold', 'AlertTrigger', 'Value', 'Frequency', 'Active', and 'EmailAlert'. Each member is a string."}], "methods": [{"method_name": "requestHandler", "method_docstring": "This method handles requests for updating alerts. It takes in three parameters: a context (ctx), a Request instance (r), and a reference to an instance of `coreApi.CoreApi` (api). It unmarshals the request body into an `Update` instance and tries to update the instrument price alert with the update. If the update fails, it returns an error response with a status code of 500 and an error message. If the update is successful, it returns a new JSON response with a status code of 200."}, {"method_name": "Register", "method_docstring": "This method registers a new HTTP
Returning empty JSON due to a failure
```
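If the cutoff really is a completion-length limit, one quick check is to set `max_tokens` explicitly instead of relying on the default. This is only a sketch of that check, assuming the langchain `ChatLiteLLM` wrapper from this PR; the model name and token budget are placeholders, not values taken from devtale:

```python
from langchain.chat_models import ChatLiteLLM
from langchain.schema import HumanMessage

# Sketch: raise the completion budget explicitly so a long JSON answer is not
# cut off by a default limit. 2048 is an arbitrary placeholder value.
chat = ChatLiteLLM(model="gpt-3.5-turbo", max_tokens=2048, temperature=0)

reply = chat([HumanMessage(content="Return the documentation as a single JSON object.")])
print(reply.content)
```

If the JSON is still truncated with a generous `max_tokens`, the limit is more likely coming from the prompt/context size than from the completion setting.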
Oh, I forgot. Regarding this:

> Begins using `ChatLiteLLM()` for using any LLM as a drop-in replacement for gpt-3.5-turbo
As a side note, we found that models released before GPT-4, such as GPT-3.5, Davinci, and so on, were not capable of generating docstrings correctly, so they were not reliable for `get_unit_tale` and `extract_code_elements`. Please keep this in mind when using other models: there is a high probability that they will fail if GPT-4 is replaced in these two functions.
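One way to respect that constraint while still adopting `ChatLiteLLM()` would be to keep GPT-4 pinned for those two calls and make only the remaining calls configurable. A minimal sketch, not devtale's actual wiring (the `claude-2` string is just an illustrative litellm-supported placeholder):

```python
from langchain.chat_models import ChatLiteLLM

# Keep GPT-4 for the docstring-generation steps flagged above
# (get_unit_tale and extract_code_elements), and let everything else use a
# configurable model routed through litellm.
docstring_llm = ChatLiteLLM(model="gpt-4", temperature=0)
general_llm = ChatLiteLLM(model="claude-2", temperature=0)  # placeholder choice
```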
Closing due to inactivity.
Addressing this issue: https://github.com/mystral-ai/devtale/issues/27
This PR:

- Begins using `ChatLiteLLM()` for using any LLM as a drop-in replacement for gpt-3.5-turbo
- Leverages https://github.com/BerriAI/litellm/ for both features in this PR

`ChatLiteLLM()` is integrated into langchain and allows you to call all models using the `ChatOpenAI` I/O interface: https://python.langchain.com/docs/integrations/chat/litellm

Here's an example of how to use `ChatLiteLLM()`:
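(A minimal sketch along the lines of the langchain docs linked above; the model name and prompt are placeholders, not devtale's actual calls.)

```python
from langchain.chat_models import ChatLiteLLM
from langchain.schema import HumanMessage

# Same ChatOpenAI-style I/O interface; swap the model string to call any
# litellm-supported provider (e.g. "gpt-3.5-turbo", "gpt-4", "claude-2").
chat = ChatLiteLLM(model="gpt-3.5-turbo")

messages = [HumanMessage(content="Write a one-line docstring for a function that adds two numbers.")]
print(chat(messages).content)
```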