szczyglis-dev / py-gpt

Desktop AI Assistant powered by GPT-4, GPT-4 Vision, GPT-3.5, DALL-E 3, Langchain, Llama-index, chat, vision, voice control, image generation and analysis, autonomous agents, code and command execution, file upload and download, speech synthesis and recognition, access to Web, memory, prompt presets, plugins, assistants & more. Linux, Windows, Mac.
https://pygpt.net
MIT License

Feature Requests #23

Open gfsysa opened 4 months ago

gfsysa commented 4 months ago

Bug? -- If the default llama-index is not "base" but "base" still exists, and you begin a chat with Chat with Files enabled, the model will not find your indexed files... You have to switch to Chat with Files mode, select the database, and switch back to Chat.

Thanks for considering this input! Thanks for your hard work even more.

gfsysa commented 4 months ago

Another thought:

I suspect there are a few ways to approach this in the prompt, the instructions, and perhaps with advanced indexing techniques... I'm not sure.

gfsysa commented 4 months ago

A couple more thoughts:

Both relate to an operation I executed to index 65 pages from a website, followed by a second prompt to identify which pages had not been updated since February and to draft a copy update for one of the pages. I assumed these were pre-processing requests that were going to llama-index (index llama-index first) and that the model would provide the draft I requested. I watched the system output enter a series of loops, and after about the fourth loop I realized it was repeating the same request over and over and giving the same output. I stopped the operation, but my next prompt was rejected because I had hit our rate limit. Not a big deal, and we're moving to the next tier this week, but token management is sometimes an issue.
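For what it's worth, the looping behavior described above can also be mitigated on the client side. Below is a minimal sketch (not py-gpt's actual agent logic; `run_agent` and the scripted step function are hypothetical) of stopping an agent run when it starts repeating itself, before it burns through the rate limit:

```python
def run_agent(step_fn, max_steps=10):
    """Run agent steps, stopping early when the same output repeats."""
    last = None
    outputs = []
    for _ in range(max_steps):
        out = step_fn()
        if out == last:
            # Identical output twice in a row: likely stuck in a loop,
            # so bail out instead of spending tokens on repeated requests.
            break
        outputs.append(out)
        last = out
    return outputs

# Demo: a scripted "agent" that gets stuck repeating the same answer.
script = iter(["list pages", "draft update", "draft update", "draft update"])
result = run_agent(script.__next__)
print(result)  # ['list pages', 'draft update']
```

A real guard would also cap total tokens spent, but even this equality check catches the "same request over and over" pattern described above.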

szczyglis-dev commented 4 months ago

Thank you very much for the feedback!

Several of the things you mentioned have been added in the latest version (2.1.10):

Regarding the need to select an index from the list in Chat mode:

Bug? -- If the default llama-index is not "base" but "base" still exists, and you begin a chat with Chat with Files enabled, the model will not find your indexed files... You have to switch to Chat with Files mode, select the database, and switch back to Chat.

Could you please describe it in more detail, step by step, with an example setup? Unfortunately, I can't reproduce this problem.

oleksii-honchar commented 3 months ago

Hey Marcin, I am really impressed with how far you have come with this project over the past year! Thank you for all your hard work. I did notice that there is currently no support for markdown in responses and posts, meaning that code and text are not formatted properly. Is there a way to enable this functionality for better readability?

This is the current app style:

And this is a prettified example:
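To illustrate the core of this request, here is a toy converter that turns fenced code blocks in a chat response into `<pre><code>` HTML. This is only a sketch under the assumption that responses arrive as plain markdown text; a real UI would use a full markdown parser, and `render_fences` is a hypothetical name, not a py-gpt API:

```python
import html
import re

def render_fences(text: str) -> str:
    """Convert ```-fenced code blocks in a response to <pre><code> HTML."""
    def repl(match):
        # Escape the code body so it displays literally in the HTML view.
        code = html.escape(match.group(2))
        return "<pre><code>%s</code></pre>" % code
    # DOTALL lets the code body span multiple lines.
    return re.sub(r"```(\w*)\n(.*?)```", repl, text, flags=re.DOTALL)

reply = "Here is the fix:\n```python\nprint('hi')\n```"
print(render_fences(reply))
```

Libraries like `markdown` or a QWebEngine-based view would handle the full spec (inline code, lists, tables), but even fence handling alone makes code responses far more readable.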

oleksii-honchar commented 3 months ago

Another useful feature could be resetting the history of a particular chat. For example, I'm using the same preset/persona for general topics (e.g. "SW Dev Coach") and I don't need to store the context of every topic or conversation. I also want to keep the list of chats clean, so I usually reset that particular chat's history (and context) and reuse it for a different topic.

Here is an example of how it works:

image

And this is an idea of how it could look in the pygpt interface:

image
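The requested behavior could be modeled roughly as below. The `Chat` class is a hypothetical illustration of the idea (clear the accumulated context, keep the chat entry and its preset), not py-gpt's internal data model:

```python
class Chat:
    """A chat session whose history can be cleared while keeping its preset."""

    def __init__(self, preset: str):
        self.preset = preset   # e.g. the "SW Dev Coach" persona
        self.messages = []     # accumulated (role, text) context

    def add(self, role: str, text: str):
        self.messages.append((role, text))

    def reset(self):
        # Drop the context but keep the chat entry and its preset,
        # so the same list item can be reused for a new topic.
        self.messages.clear()

chat = Chat("SW Dev Coach")
chat.add("user", "How do I structure a monorepo?")
chat.add("assistant", "Start with a single workspace...")
chat.reset()
print(chat.preset, len(chat.messages))  # SW Dev Coach 0
```

The key design point is that reset touches only the message list, so the chat keeps its place in the sidebar and its persona configuration.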
gfsysa commented 3 months ago

Hi -- Just want to say thank you for all of the updates and feature inclusions. You're an animal and I don't know why you're so awesome.

I will be more active with the tool over the next week or so and will try to gather some more feedback.

Also, loving the other input here; these are great suggestions.

Question: do you want a new Issue created for everything so you can close the ticket, or is the thread here okay?

gfsysa commented 1 month ago

Thanks for all the work on the app. Something that would be really useful: I would like to scrape web content more efficiently. I understand that I can use llama-index, but I am not too confident with this aspect of the guide:

Adding custom vector stores and data loaders

You can create a custom vector store provider or data loader for your data and develop a custom launcher for the application. To register your custom vector store provider or data loader, simply register it by passing the vector store provider instance to the `vector_stores` keyword argument and the loader instance in the `loaders` keyword argument:

custom_launcher.py

```python
from pygpt_net.app import run
from plugins import CustomPlugin, OtherCustomPlugin
from llms import CustomLLM
from vector_stores import CustomVectorStore
from loaders import CustomLoader

plugins = [
    CustomPlugin(),
    OtherCustomPlugin(),
]
llms = [
    CustomLLM(),
]
vector_stores = [
    CustomVectorStore(),
]
loaders = [
    CustomLoader(),
]

run(
    plugins=plugins,
    llms=llms,
    vector_stores=vector_stores,  # <--- list with custom vector store providers
    loaders=loaders,  # <--- list with custom data loaders
)
```

The vector store provider must be an instance of `pygpt_net.provider.vector_stores.base.BaseStore`. You can review the code of the built-in providers in `pygpt_net.provider.vector_stores` and use them as examples when creating a custom provider.

The data loader must be an instance of `pygpt_net.provider.loaders.base.BaseLoader`. You can review the code of the built-in loaders in `pygpt_net.provider.loaders` and use them as examples when creating a custom loader.
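As a self-contained stand-in for the scraping step such a loader would perform (the `BaseLoader` interface itself is not reproduced here, so this does not subclass it), a minimal routine that strips HTML markup and splits a scraped page into indexable chunks might look like this; `TextExtractor` and `chunk_page` are hypothetical names:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text from an HTML page, skipping script/style."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def chunk_page(html_text: str, max_chars: int = 200) -> list:
    """Strip markup and split the page text into chunks small enough to index."""
    parser = TextExtractor()
    parser.feed(html_text)
    text = " ".join(parser.parts)
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

page = "<html><body><h1>Docs</h1><p>Updated in February.</p></body></html>"
print(chunk_page(page))  # ['Docs Updated in February.']
```

A real custom loader registered via `loaders=` would wrap logic like this in the `BaseLoader` interface; the built-in loaders in `pygpt_net.provider.loaders` show the exact methods to implement.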