onixldlc opened 5 months ago
In chunk view, it just displays plain text without any rendering. In chat view, it can be rendered with markdown formatting. And in chat view, the text has been processed by the LLM, so it is not necessarily the same as the content in the chunks.
I see, so there was no sanitization to begin with. Weirdly enough, most of the XSS payloads didn't run. Interesting.
Also, after testing a bit more, I got the XSS to show up in the chat view via the reference window, so I'm guessing the reference window and the chunk view are connected? Or at least use the same modal.
And this was the cause of the XSS.
Is there an existing issue for the same bug?
Branch name
main
Commit ID
4c14760
Other environment information
Actual behavior
There is an XSS vulnerability in the
Knowledge Base > Dataset > chunk
view.
Expected behavior
Text inside a chunk should be escaped to mitigate XSS.
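A minimal sketch of the expected escaping, in TypeScript. The function name `escapeChunkText` is hypothetical (not from the project's codebase); it only illustrates entity-encoding the chunk text before it is injected into the DOM, so markup like `<iframe>` renders as literal text instead of executing.

```typescript
// Hypothetical helper: entity-encode the characters that let raw
// chunk text break out into markup. Order matters: '&' must be
// replaced first, or the later entities would be double-escaped.
function escapeChunkText(raw: string): string {
  return raw
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

With this in place, a payload such as `<iframe src=x>` would be shown as text (`&lt;iframe src=x&gt;`) in the chunk view, matching the escaping already observed in the chat view.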
Steps to reproduce
Additional information
I can only replicate the XSS in the
Knowledge Base > Dataset > chunk
view page. I tried to get the AI to send that same chunk in the chat view, and it escaped it correctly. As you can see in the text I have highlighted, it has been escaped in the chat page but not in the chunk view page.
But weirdly enough, this doesn't happen in the actual XSS file chunk.
I guess it has something to do with the iframe tag sanitization?
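If iframe handling is indeed the difference between the two views, a tag-stripping pass would look roughly like the sketch below. This is a naive regex illustration only, not how the project does it; production code should use a vetted sanitizer such as DOMPurify rather than regexes, which are easy to bypass on malformed HTML.

```typescript
// Naive illustration: drop iframe/script open and close tags from a
// string of HTML. Real sanitizers parse the markup instead of using
// regexes; this sketch only shows the intent (deny-list the elements
// that can execute or embed active content).
function stripDangerousTags(html: string): string {
  return html.replace(/<\/?(?:iframe|script)\b[^>]*>/gi, "");
}
```

A deny-list like this explains why some payloads survive while an explicit `<iframe>` payload does not: anything outside the listed tags passes through untouched, which is why allow-list sanitization is generally preferred.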