csensemakers / desci-sense


Add support for concurrent LLM calls #113

Open ronentk opened 6 months ago

ronentk commented 6 months ago

https://github.com/openai/openai-python#async-usage

https://openrouter.ai/docs#models

This will be useful for supporting keyword extraction alongside semantics extraction.
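As a minimal sketch of what the linked async-usage pattern enables, the two extractions can be issued concurrently with `asyncio.gather` rather than sequentially. The function names and return shapes below are hypothetical stand-ins; in the real code these would be async LLM calls (e.g. via the OpenAI async client or OpenRouter):

```python
import asyncio

# Hypothetical stand-ins for the real semantics and keyword extraction
# chains; each simulates the latency of an async LLM call.
async def extract_semantics(post: str) -> dict:
    await asyncio.sleep(0.1)  # stands in for LLM round-trip time
    return {"semantics": f"semantics for: {post}"}

async def extract_keywords(post: str) -> dict:
    await asyncio.sleep(0.1)  # stands in for LLM round-trip time
    return {"keywords": ["keyword1", "keyword2"]}

async def process_post(post: str) -> dict:
    # Run both extractions concurrently instead of one after the other,
    # so total latency is roughly max(a, b) rather than a + b.
    semantics, keywords = await asyncio.gather(
        extract_semantics(post),
        extract_keywords(post),
    )
    return {**semantics, **keywords}

result = asyncio.run(process_post("example post"))
print(result)
```

With real LLM calls the same structure applies: both requests are in flight at once, so the combined call takes about as long as the slower of the two.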

ronentk commented 6 months ago

@pepoospina I updated the API code to call the concurrent method that extracts both semantics and keywords.

https://github.com/csensemakers/desci-sense/blob/d945cec1ab2b5708cdd1f4519349a713ef44db57/nlp/desci_sense/shared_functions/main.py#L28

Some new dependencies are needed, so I updated the requirements file as well. Let me know if you have issues running this from the app.

ronentk commented 6 months ago

We might want to think in the future about how to optimize the speed even further - for example, we could cache the metadata and save on redundant calls to the Citoid API. But if we assume that, for now, the user will only run the parser once after finishing the post, I think we can postpone this.