poweihuang0817 closed this issue 1 year ago
Hello!
Thanks for reporting the issue!
Could you provide more details as to what you mean by token issue? Did the prompt exceed the token limit?
Please note that the models will not always generate valid Json blocks.
warn: Microsoft.SemanticKernel.Connectors.Memory.Qdrant.QdrantMemoryStore[0] No vectors were found.
warn: Microsoft.SemanticKernel.Connectors.Memory.Qdrant.QdrantMemoryStore[0] No vectors were found.
warn: Microsoft.SemanticKernel.Connectors.Memory.Qdrant.QdrantMemoryStore[0] No vectors were found.
warn: Microsoft.SemanticKernel.Connectors.Memory.Qdrant.QdrantMemoryStore[0] No vectors were found.
warn: Microsoft.SemanticKernel.Connectors.Memory.Qdrant.QdrantMemoryStore[0] No vectors were found.
warn: Microsoft.SemanticKernel.Connectors.Memory.Qdrant.QdrantMemoryStore[0] Vectors not found
warn: Microsoft.SemanticKernel.Connectors.Memory.Qdrant.QdrantMemoryStore[0] Vectors not found
warn: Microsoft.SemanticKernel.Connectors.Memory.Qdrant.QdrantMemoryStore[0] No vectors were found.
warn: Microsoft.SemanticKernel.Connectors.Memory.Qdrant.QdrantMemoryStore[0] Vectors not found
warn: Microsoft.SemanticKernel.Connectors.Memory.Qdrant.QdrantMemoryStore[0] Vectors not found
I suspect there is some issue in the memory settings.
Hi Powe!
Could you provide more details on how you encountered the errors/warnings? Logs or screenshots will help greatly.
Could you also let me know what you did, i.e. the steps that led up to these warnings?
This may have been caused by a bug impacting the Qdrant connector, which was resolved in #1313.
@poweihuang0817, if you have a spare moment to try with the latest code from the repository (I don't think the fix has made it to NuGet yet), please let us know if the issue persists.
Hello Craig and Tao, I've mixed two issues up. The bigger issue I want to report is that the long-term memory is not in JSON format: GPT will not respond as requested, and then there is a JSON parsing error. I managed to reduce that by reducing the token count.
Hello Powei, it's common for GPT not to respond with correctly formatted JSON. That's why we chose to skip instead of raising an error in memory extraction: https://github.com/microsoft/semantic-kernel/blob/main/samples/apps/copilot-chat-app/webapi/CopilotChat/Skills/ChatSkills/SemanticChatMemoryExtractor.cs#L57.
And of course, the token limit can be another cause. Without enough spare tokens for the model to respond, it may not be able to complete the JSON block, leading to a parsing error.
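The skip-instead-of-raise pattern described above can be sketched as follows (a minimal illustration in Python for brevity; the actual extractor linked above is C#, and the function name here is hypothetical):

```python
import json
from typing import Optional


def try_parse_memory(response: str) -> Optional[dict]:
    """Attempt to parse the model's response as a JSON memory block.

    Returns None (i.e. skips the memory) instead of raising when the
    model emits invalid or truncated JSON -- for example, when the
    completion was cut off by the token limit mid-block.
    """
    try:
        parsed = json.loads(response)
    except json.JSONDecodeError:
        return None  # skip this memory rather than failing the chat turn
    # Expect the {"items": [...]} shape shown in the bug report.
    if not isinstance(parsed, dict) or "items" not in parsed:
        return None
    return parsed


# A completion truncated by the token limit is skipped:
assert try_parse_memory('{"items": [{"label": "workspaceName"') is None
# A complete block parses normally:
assert try_parse_memory('{"items": []}') == {"items": []}
```

The design trade-off is that an occasional memory is silently dropped, but a single malformed completion cannot break the whole chat turn.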
Let us know if that answers your question.
Closing now as it appears this issue has been resolved. Please reopen as needed.
Describe the bug: Due to a token issue, the response from text completion will not be valid JSON, and the following call fails:
SemanticChatMemory memory = SemanticChatMemory.FromJson(result.ToString());
{"items": [{"label": "workspaceName", "details": "SoP_High_Memory"}, {"label": "datasetName", "details": "SoP_High_Memory_0718"}, {"label": "columnName", "details": "AvgMemoryConsumption"}, {"label": "tableName", "details": "WabiAllHighRefresh0718"}, {"label...