vishwasg217 / fin-sight

FinSight - Financial Insights at Your Fingertip: FinSight is a cutting-edge AI assistant tailored for portfolio managers, investors, and finance enthusiasts. It streamlines the process of gaining crucial insights and summaries about a company in a user-friendly manner.
https://finsight-report.streamlit.app/

Local AI #7

Open BBC-Esq opened 1 year ago

BBC-Esq commented 1 year ago

Your project caught my attention. Feel free to check out my project on my GitHub as well. Would it be possible to adjust your code to work with a local LLM instead of GPT-4?

vishwasg217 commented 1 year ago

Hey,

Thank you for showing interest in my project.

Yes, you should be able to use a local LLM. You just need to add the code for accessing the local LLM in the get_model() function in src/utils.py. That should be enough if you're looking to use FinSight locally.
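For instance, a local branch could look something like this (a minimal sketch; the "local" name and the ChatOllama settings are illustrative assumptions, not code from the repo):

import os
from langchain.chat_models import ChatOpenAI
from langchain_community.chat_models import ChatOllama

def get_model(model_name):
    if model_name == "openai":
        return ChatOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    elif model_name == "local":
        # Assumes an Ollama server running on the default http://localhost:11434.
        return ChatOllama(model="llama2")
    raise ValueError(f"Unknown model name: {model_name}")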

However, if you're planning to deploy, I'd suggest you have a look at the pricing plans of the cloud service you intend to use. Local LLM weights can be quite large, so the bill can shoot up significantly.

Let me know if you have any other questions.

Thanks

BBC-Esq commented 1 year ago

If I have a local LLM running on a server such as localhost, where would I modify the code to add my server information so it connects just as if it were the ChatGPT/OpenAI model?

BBC-Esq commented 1 year ago

Also, any chance you can share some screenshots?

styck commented 10 months ago

> If I have a local LLM running on a server such as localhost, where would I modify the code to add my server information so it connects just as if it were the ChatGPT/OpenAI model?

I'm using Windows and LM Studio, which lets you start a server on a local port. I just modified get_model() in utils.py as follows:

import os
from langchain.chat_models import ChatOpenAI

def get_model(model_name):
    OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
    model = None
    if model_name == "openai":
        # Point the client at the LM Studio server; the local server ignores the key.
        model = ChatOpenAI(base_url="http://192.168.50.201:1234/v1", api_key="xxx")
    return model

I wasn't actually on the same computer: my desktop had the FinSight code and used the LLM running on a laptop, so replace the IP address with localhost if everything runs on one machine. The api_key is not needed and will be ignored by a local LLM. A better solution is to allow selecting between OpenAI and a local LLM, and to use the API key only when needed.
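A sketch of that selection (assuming LM Studio's OpenAI-compatible server on its default port 1234; the "local" name is illustrative):

import os
from langchain.chat_models import ChatOpenAI

def get_model(model_name):
    if model_name == "openai":
        # Real OpenAI endpoint: the key is required.
        return ChatOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    elif model_name == "local":
        # LM Studio's local server ignores the key, but the client wants a non-empty string.
        return ChatOpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
    raise ValueError(f"Unknown model name: {model_name}")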

I also modified 1_📊_Finance_Metrics_Review.py so it only asks me for the API key if it isn't defined. I'm using Visual Studio Code, so I just put the API keys in my launch.json for debugging so I don't have to enter them every time.
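The API-key check could look something like this (a sketch; the widget label and variable name are assumptions about the page code):

import os
import streamlit as st

# Only prompt for the key when the environment doesn't already provide one.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    OPENAI_API_KEY = st.text_input("OpenAI API Key", type="password")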