brianpetro / obsidian-smart-connections

Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
https://smartconnections.app
GNU General Public License v3.0
2.49k stars · 170 forks

Smart Chat should handle code in the chat input #524

Open brianpetro opened 5 months ago

brianpetro commented 5 months ago

Discussed in https://github.com/brianpetro/obsidian-smart-connections/discussions/523

Originally posted by **avataraustin** March 27, 2024

SC Chat window stays stuck on loading when asking a question containing code

I have noticed for a long time, ever since I started using SC, that if I ask ChatGPT a question in the SC chat window and the question contains computer code, it will frequently (although not always) stall and stay stuck on the loading "..." indicator. I have to create a new chat with a different question in order to get a response. Dev log from right after the issue occurs: ![Screenshot 2024-03-27 at 12 35 02 PM](https://github.com/brianpetro/obsidian-smart-connections/assets/39737076/018984c4-9242-415d-8241-266faf7b3156)
brianpetro commented 5 months ago

@avataraustin

If you could, please share an example chat that fails as described.

Thanks for bringing this to my attention!

🌴

avataraustin commented 5 months ago

This, for example, seems to stall:

Explain this and what snapshot is in this case:

```js
onValue(shoppingListInDB, function(snapshot) {
  // Challenge: Console log snapshot.val() to show all the items inside of shoppingList in the database
})
```
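For context, here is a self-contained mock of what that snippet does. This is illustrative only, not the real Firebase SDK: in the Realtime Database API, `onValue` registers a listener whose callback receives a `DataSnapshot`, and `snapshot.val()` returns the data at that ref as a plain JS object.

```javascript
// Mock of Firebase's onValue callback pattern (not the real SDK).
// The callback's `snapshot` argument is a stand-in for DataSnapshot,
// whose .val() returns the data stored at the ref.
function mockOnValue(fakeData, callback) {
  const snapshot = { val: () => fakeData }; // stand-in for DataSnapshot
  callback(snapshot);
}

mockOnValue({ item1: "milk", item2: "eggs" }, function (snapshot) {
  // snapshot.val() yields every item under the ref as one object
  console.log(snapshot.val()); // { item1: 'milk', item2: 'eggs' }
});
```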
MichaelMartinez commented 4 months ago

Naturally, at least in my opinion, a person may want to ask the LLM how to write a query or code for the Dataview plugin. I am not sure what is taking place, but I get the same "..." problem as @avataraustin. I have tried local models via LM Studio, OpenRouter, and the OpenAI API.

The problem seems to be in the chat window UI itself, because the 'smart-chats' folder has a record of the chats. Looking at the output from LM Studio, I expected to see the generated text inside the chat pane of Smart Connections, but it wasn't there. However, in the smart-chats folder there is a massive markdown file with everything in it. Perhaps it's the backticks in certain languages? I asked Llama 3 to escape the backticks so I could see the generated Dataview snippet, and it worked once or twice, then it forgot.

brianpetro commented 4 months ago

Hi @MichaelMartinez

Thanks for letting me know.

There are a couple of potential issues with Dataview in responses since, depending on the context, Smart Connections may be trying to render the Dataview query.

Can you send me screenshots? This way I can see exactly what you're seeing and hopefully narrow down the issue from there.

🌴

MichaelMartinez commented 4 months ago

Hi @brianpetro - Confession time: perhaps I am "holding it wrong" as they say. 😄 Thank you for creating this and sharing! It is a really cool tool!

Looking at the chat logs, I can see the assistant looking for references to my question within my notes. I don't have any notes related to the syntax of Dataview, so it seems to grab some of my recent notes as context.

I included that information because my natural inclination was to chat with this like I would inside the LibreChat or ChatGPT interface. This interface is fundamentally different, and I have yet to wrap my mind around using it in a way that extends its capabilities rather than fighting them. I believe I am using this tool incorrectly, as a general chat rather than a very specific RAG-like chat.

In any case, I have taken a few screenshots to show you what I see as it relates to markdown. It seems like this may be just a rendering issue. If the markdown renderer is "greedy" and tries to immediately render a code block, especially one that performs an operation like Dataview, we are going to have problems in various areas. Many of the local models are not great at formatting, which just adds another layer of suck into the mix.
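One way the "greedy renderer" hypothesis could be addressed is to hold off rendering while a streamed response still has an unclosed code fence. This is a hypothetical sketch, not Smart Connections' actual code; the function names are illustrative:

```javascript
// Hypothetical: defer markdown rendering until all ``` fences in the
// streamed buffer are balanced, so a half-finished code block (e.g. an
// open Dataview block) is never handed to the renderer mid-stream.
function fencesBalanced(markdown) {
  // Count lines that open or close a fenced code block.
  const fenceLines = markdown
    .split("\n")
    .filter((line) => line.trimStart().startsWith("```"));
  // An odd count means a fence was opened but not yet closed.
  return fenceLines.length % 2 === 0;
}

function safeToRender(streamBuffer) {
  return fencesBalanced(streamBuffer);
}

console.log(safeToRender("Here is code:\n```js\nconsole.log(1)\n"));       // false
console.log(safeToRender("Here is code:\n```js\nconsole.log(1)\n```\nDone")); // true
```

A real implementation would also need to handle indented fences inside lists and `~~~`-style fences, but the buffering idea is the same.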

This is the setup: (screenshot: setup)

This is the first part of the chat: (screenshot: chat1)

This is the second part of the chat: (screenshot: chat2)

Here are the logs showing what the model produced at the point in the conversation where I said "I don't see anything": (screenshot: logs)

So the context was absolutely massive at this particular point in the chat, and I am not sure how it came to include all of the files that it did. Again, it's probably my mistake to use this as a general chat rather than a specific RAG-like interface. Perhaps there is a way for a Smart Connections user to turn context off and on depending on what they want to do.

I know there is a delicate balance between too much config and not enough. However, people like me who use local models should expect more configuration than those who use GPT-4, in my opinion.