emirsahin1 / llm-axe

A simple, intuitive toolkit for quickly implementing LLM powered applications.
MIT License

Can llm-axe Help Integrate AI with Outlook.exe to Analyze Email Data? #1

Closed Foadsf closed 1 month ago

Foadsf commented 1 month ago

I was directed to this project from the Ollama Discord community. I am looking for ways to use locally installed or self-hosted AI to interact with the huge body of information accumulated in my email correspondence over the years.

For example, I want to interact with an AI chatbot and ask questions about my correspondence, such as a request for a summary of the week's important emails.

I would like to know if and how llm-axe can help with this. Can llm-axe retrieve information from Outlook's storage/database files to facilitate this integration? If so, could you give me some guidance or examples on achieving this?

emirsahin1 commented 1 month ago

Thank you for your interest in the project!

llm-axe can definitely help you with the LLM and data-interaction side of things. However, it is not intended to be a data-retrieval library.

Here is how I would tackle this project:

  1. Find a Python library that can pull your emails from Outlook. I haven't worked with email libraries myself, but this SO post looks useful: https://stackoverflow.com/questions/5077625/reading-e-mails-from-outlook-with-python-through-mapi

  2. After you have your weekly email content, there are a couple of approaches you could experiment with. One is to feed each individual email to an Agent that summarizes its key points, combine those summaries into one big string, and feed that to a summarizer that produces the weekly summary. If you need to extract specific information from the emails, such as URLs or deadlines, you can mix in DataExtractor agents as well.
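For step 1, here is one possible sketch of pulling recent messages from Outlook via `win32com` (the approach from that SO post). It assumes Windows with Outlook and the pywin32 package installed, and `truncate_body` is just a helper added here to keep long emails within the model's context window:

```python
def truncate_body(body, limit=2000):
    """Trim long email bodies so each prompt stays within the LLM's context window."""
    body = body.strip()
    return body if len(body) <= limit else body[:limit] + "..."

def fetch_recent_outlook_emails(max_count=20):
    """Pull the most recent messages from the default Outlook inbox via MAPI.
    Requires Windows with Outlook installed and the pywin32 package."""
    import win32com.client  # imported here so the rest of the module works without Outlook

    outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
    inbox = outlook.GetDefaultFolder(6)  # 6 = olFolderInbox
    items = inbox.Items
    items.Sort("[ReceivedTime]", True)  # newest first

    emails = []
    for item in items:
        if len(emails) >= max_count:
            break
        # Skip non-mail items (meeting requests, delivery reports, etc.)
        if getattr(item, "Class", None) != 43:  # 43 = olMail
            continue
        emails.append(truncate_body(item.Body))
    return emails
```

The list of strings this returns can be dropped straight into the `emails` list used in the example below.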

Example code:

from llm_axe import Agent, AgentType, OllamaChat

llm = OllamaChat(model="llama3:instruct")

# assume this is our list of weekly email data
emails = [
    "Hi Bob, just a reminder about our meeting tomorrow at 10am. See you then!\n\nBest,\nAlice",
    "Hey Bob, I wanted to give you a quick update on the project. We've made significant progress and are on track to meet our deadlines.\n\nCheers,\nCharlie",
    "Hello Bob, I hope you're doing well. Just wanted to check in and see if you're available for a call next week.\n\nRegards,\nSarah",
    "Hi Bob, great job on the presentation today! Let's continue the good work and finish strong.\n\nThanks,\nTeam",
    "Dear Bob, I have reviewed your proposal and have a few suggestions for improvement. Let's discuss this further in our next meeting.\n\nSincerely,\nJohn",
    "Hey Bob, don't forget about the team lunch this Friday. It's going to be a lot of fun!\n\nBest,\nEmily"
]

email_summaries = ""
email_reader = Agent(llm, AgentType.SUMMARIZER)
prompt = "Make a short summary of this email containing any important information"

for i, email in enumerate(emails, start=1):
    short_summary = email_reader.ask(email + "\n" + prompt)
    email_summaries += "Email " + str(i) + " Summary:\n" + short_summary + "\n"

# make a final weekly summary
weekly_summary = email_reader.ask(email_summaries + "\n" + "Make a short list summary of my weekly email. Include any important information. I am Bob")

print(weekly_summary)

This is the output I get:

Here is a summary of the emails you received:

* You have a meeting with Alice tomorrow at 10:00 AM.
* The project is on track to meet its deadlines.
* Sarah wants to schedule a call with you next week.
* The team congratulated you on your successful presentation and encouraged you to maintain momentum.
* John reviewed your proposal, made some suggestions, and plans to discuss it further at the next meeting.
* You're invited to a team lunch this Friday.

Important information:

* Meeting with Alice tomorrow at 10:00 AM
* Project is on track to meet deadlines
* Call scheduled with Sarah next week
* Invitation to team lunch this Friday

For more complex use cases, where you'd want the LLM to have information from thousands of emails or more, you'd have to set up a RAG (retrieval-augmented generation) system that retrieves the relevant emails based on the user's question. You can then feed that data into an llm-axe agent the same way we did in the example. I'd recommend looking into libraries like LlamaIndex for the RAG side of things.
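As a rough illustration of the retrieval half (the generation half stays the same as the summarizer example above), here is a toy retriever that ranks emails by word overlap with the question. A real setup would use embeddings and a vector index via something like LlamaIndex instead:

```python
import re

def words(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, emails, top_k=3):
    """Rank emails by word overlap with the question and return the top_k.
    This stands in for the embedding search a real RAG system would do."""
    q = words(question)
    ranked = sorted(emails, key=lambda e: len(q & words(e)), reverse=True)
    return ranked[:top_k]

emails = [
    "The project deadline moved to June 12.",
    "Team lunch is on Friday at noon.",
    "Reminder: budget review meeting on Monday.",
]
context = retrieve("When is the project deadline?", emails, top_k=1)
# context would then be fed to an llm-axe Agent along with the question.
```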

I hope that helps. Please check the examples folder for more examples on how to use llm-axe and please leave a star if you find the project useful. Thanks!

Foadsf commented 1 month ago

Hi Emir,

Thank you for your detailed response and guidance! Your example of using llm-axe to summarize email content is very helpful.

Is it possible to streamline this email analysis feature into any existing Ollama front ends, such as Open WebUI? For example, integrating the workflow to fetch emails using libraries like win32com for Outlook and then processing them with llm-axe within the WebUI?

The idea is to have a seamless user experience where the AI can retrieve, analyze, and summarize emails directly from the WebUI. This would greatly enhance productivity by centralizing email management and analysis.

Thank you for your assistance!

emirsahin1 commented 1 month ago

Yes, it's doable; however, the backends of most chat front ends like Open WebUI don't seem easily modifiable.

It seems like the easiest way to turn this into a chat UI would be to make a small Flask chat server that receives prompts and returns answers.

In the server, you could implement functionality similar to the example above that reads your emails and responds with the weekly summary. You can pair that with a function-caller agent so that it only runs the email summary when prompted (like this chat example). To simplify things further, you could just store the chat history on the client and pass it along with each prompt.
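A minimal sketch of that server-side routing, with hypothetical stand-ins: `summarize_week` represents the fetch-and-summarize flow above, `chat_reply` an ordinary llm-axe chat call, and the keyword check is a placeholder for a real llm-axe FunctionCaller agent deciding which to run. The `history` list is whatever the client stored and sent along with the prompt:

```python
def summarize_week():
    """Hypothetical stand-in for the fetch-and-summarize flow shown earlier."""
    return "Weekly summary: 6 emails, meeting tomorrow at 10am, team lunch on Friday."

def chat_reply(history, prompt):
    """Hypothetical stand-in for a plain llm-axe chat call using the history."""
    return "LLM answer to: " + prompt

def handle_prompt(history, prompt):
    """Route the prompt: run the email summary only when asked for it,
    otherwise fall through to normal chat. A FunctionCaller agent could
    make this routing decision instead of a keyword check."""
    if "summary" in prompt.lower() and "email" in prompt.lower():
        return summarize_week()
    return chat_reply(history, prompt)

# The client keeps its own history and resends it with each prompt:
history = []
answer = handle_prompt(history, "Give me my weekly email summary")
history.append(("user", "Give me my weekly email summary"))
history.append(("assistant", answer))
```

Wrapping `handle_prompt` in a Flask route that accepts the prompt and history as JSON is then straightforward.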

All that's left is a web interface that can send your prompts to that server and display the response. There are existing interfaces out there that you can rip apart and reuse; I suppose you could even use the Open WebUI interface if you separate it from its backend. If you have front-end skills (or ChatGPT 😃), the simplest route would probably be to use Langui to build a simple one-page UI.

I'm not planning on building a user interface for llm-axe right now, but I may consider it in the near future if the need arises. I hope this information was helpful, and please let me know if you have any llm-axe-related questions in the future.