MaartenGr closed this issue 9 months ago
@lfoppiano Have you set chat=True? That is necessary to use a chat model as opposed to a completion model. You can read more about this in the docstrings.
Not initially, sorry. The completion model is deprecated anyway.
I just tried:
client = openai.OpenAI()
lc_chatgpt = OpenAI(client, model="gpt-3.5-turbo", chat=True)
I get a similar error:
Traceback (most recent call last):
File "/Users/lfoppiano/development/projects/concepts-visualisation/concepts_visualisation/openalex/keyword/extract_keywords_keyllm.py", line 116, in <module>
process_single(input_json, output_json)
File "/Users/lfoppiano/development/projects/concepts-visualisation/concepts_visualisation/openalex/keyword/extract_keywords_keyllm.py", line 40, in process_single
keywords_abstracts = kw_model.extract_keywords(abstracts, embeddings=embeddings_abstracts, threshold=0.9)
File "/Users/lfoppiano/anaconda3/envs/nii/lib/python3.10/site-packages/keybert/_llm.py", line 94, in extract_keywords
out_cluster_keywords = self.llm.extract_keywords(
File "/Users/lfoppiano/anaconda3/envs/nii/lib/python3.10/site-packages/keybert/llm/_openai.py", line 181, in extract_keywords
keywords = response["choices"][0]["message"]["content"].strip()
TypeError: 'ChatCompletion' object is not subscriptable
Perhaps, when using ChatCompletion, the responses should be extracted with response.choices instead of response['choices'].
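For context, a minimal sketch of why this fails: since openai>=1.0 the client returns typed objects (e.g. ChatCompletion) that support attribute access but not dict-style subscripting. The extract_content helper below is hypothetical, not KeyBERT's actual code, and the simulated responses stand in for real API output:

```python
from types import SimpleNamespace

def extract_content(response):
    """Return the first choice's message content from either the old
    dict-style response (openai<1.0) or the new typed object (openai>=1.0)."""
    if isinstance(response, dict):
        # legacy SDK: plain dict, subscript access works
        return response["choices"][0]["message"]["content"].strip()
    # new SDK: typed object, attribute access is required
    return response.choices[0].message.content.strip()

# Simulated responses for illustration only (no API call is made here):
old_style = {"choices": [{"message": {"content": " keywords "}}]}
new_style = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content=" keywords "))]
)

print(extract_content(old_style))  # keywords
print(extract_content(new_style))  # keywords
```

Attempting `new_style["choices"]` instead would raise the same TypeError shown in the traceback above.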
@lfoppiano Thanks for trying it out! It seems there was still an issue, which I believe I just fixed. Could you test it?
It works! 👍
Thanks!
I've tried to test it, but I got something else:
Then I tried the following:
I've specified the model because the default is gpt-3.5-turbo-instruct, which sounds old. Here is the output:
If I specify model="gpt-3.5-turbo", I get the following error: