ssssota / svelte-exmarkdown

Svelte component to render markdown.
https://ssssota.github.io/svelte-exmarkdown
MIT License
168 stars 5 forks

Performance issues #209

Open vlrevolution opened 4 months ago

vlrevolution commented 4 months ago

Any way to make it more performant? I am running into quite an overhead with this package when streaming a response in from the AI (screenshot attached).

What could be a good way to fix this? Should I throttle the rendering or something like that? What would be a good way to achieve that?
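One mitigation worth trying is throttling: buffer the streamed text and only reassign the reactive markdown prop a few times per second, so the expensive parse runs per interval rather than per chunk. Below is a minimal sketch; `makeThrottle`, `onChunk`, and the 100 ms interval are my own hypothetical names, not part of svelte-exmarkdown.

```javascript
// Hypothetical time-gate: returns true at most once per `interval` ms.
// `now` is injectable so the gate can be tested with a fake clock.
function makeThrottle(interval, now = Date.now) {
  let last = -Infinity;
  return () => {
    const t = now();
    if (t - last >= interval) {
      last = t;
      return true;
    }
    return false;
  };
}

// Usage sketch in a stream handler: only reassign the reactive `md`
// prop when the gate opens, and once more when the stream finishes,
// so the final chunk is never dropped.
let md = '';
let buffer = '';
const gate = makeThrottle(100);
function onChunk(chunk, done) {
  buffer += chunk;
  if (gate() || done) md = buffer; // one re-parse per interval, not per chunk
}
```

The `done` check matters: a pure throttle would otherwise swallow whatever arrives between the last gate opening and the end of the stream.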

vlrevolution commented 4 months ago

Not sure if it is user error on my part, but it keeps updating while idle (the number of log entries keeps growing forever; screenshot attached).

Any idea what could be causing this?

vlrevolution commented 4 months ago

I am using one <Markdown> per {#each} block to render the markdown-based messages. Hmm.

ssssota commented 4 months ago

Can you create a reproduction sample?

vlrevolution commented 3 months ago

I realize it is definitely user error :D I am forcing reactivity through bad design.

vlrevolution commented 3 months ago

Actually testing it more there might be some issue. I will report back soon

vlrevolution commented 3 months ago

It was more or less my own mistake.

However, do you reckon it would be possible to avoid re-rendering the entire parsed markdown each time a streamed chunk is appended? Perhaps by splitting the output into elements and skipping those that are already rendered? I believe right now everything is re-rendered even if we only add a single space to the end of the markdown.

Actually, I guess it shouldn't parse again if we only append a little to the end of the markdown, since parsing is the expensive operation.

vlrevolution commented 3 months ago

I managed to solve it with a few wrapper components around the markdown itself, naively splitting on headings. A response that gets long will normally contain a header or two, so the content is split at those into separate Markdown components inside a wrapper component. This avoids passing new props to already-rendered Markdown components and therefore avoids running parse() on their markdown again. It works :D

```svelte
<!-- MarkdownWrapper.svelte -->
<script>
    import MarkdownWrapperWrapper from './MarkdownWrapperWrapper.svelte';

    export let messageStore; // The Svelte store for this message's content

    // Split the content at headings
    function splitAtHeadings(content) {
        // Matches ATX headings (# through ######) at the start of lines;
        // the capturing group keeps the heading lines in the split result
        const headingRegex = /^(#{1,6} .+)/gm;
        return content.split(headingRegex).filter(Boolean);
    }

    let markdownBlocks = [];

    // Re-split the content whenever messageStore updates
    $: if ($messageStore) {
        markdownBlocks = splitAtHeadings($messageStore.content);
    }
</script>

{#each markdownBlocks as block, index (index)}
    <MarkdownWrapperWrapper md={block} />
{/each}
```

Structuring it this way avoids a lot of parse() calls. Before, every new chunk re-parsed the entire array of components for all messages; now only the last message is processed, and only its sections (separated by headings).
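To illustrate why unchanged sections keep stable props, the same split can be run in plain JavaScript; the capturing group in the regex is what keeps the heading lines themselves in the output array:

```javascript
// Same splitting idea as the Svelte component above, in plain JS.
function splitAtHeadings(content) {
  // The capturing group makes split() retain the matched headings.
  const headingRegex = /^(#{1,6} .+)/gm;
  return content.split(headingRegex).filter(Boolean);
}

const parts = splitAtHeadings('intro\n# One\nbody\n## Two\ntail');
// parts: ['intro\n', '# One', '\nbody\n', '## Two', '\ntail']
```

When a new chunk arrives, only the last array entry changes, so only that Markdown component receives a new prop and re-parses.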

vlrevolution commented 3 months ago

But the first render is still quite slow and freezes the UI when there are multiple messages to parse.

Could this be achieved without blocking the main thread? Maybe there are some ideas here: https://github.com/markedjs/marked

vlrevolution commented 3 months ago

Maybe we could use a web worker to run the parse function in?

vlrevolution commented 3 months ago

https://github.com/ssssota/svelte-exmarkdown/assets/67480746/ccc45b06-bbcd-41b6-86fb-1d80d0a90b26

Just wanted to let you know of my progress: it is indeed possible to use web workers to delegate the parsing off the main thread. The video also shows virtual scrolling of a long list of markdown messages in action. It is quite cool. It caches the parsing results, so when an element needs to be shown again it comes straight from the cache. Quite neat, but not perfect yet: I still need to add a few things to the overall chat UX for it to work properly with this kind of virtual solution.
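The caching described above can be approximated with a simple content-keyed memo. This is a sketch only; `makeCachedParser` is a hypothetical name, and the `parse` callback stands in for whatever parser actually runs in the worker (it is not svelte-exmarkdown's API):

```javascript
// Hypothetical content-keyed cache: parse each distinct markdown
// string once, then serve repeat requests from the Map.
function makeCachedParser(parse) {
  const cache = new Map();
  return (md) => {
    if (!cache.has(md)) cache.set(md, parse(md));
    return cache.get(md);
  };
}

let calls = 0;
const parseOnce = makeCachedParser((md) => {
  calls += 1;
  return md.toUpperCase(); // placeholder for real parsing work
});
parseOnce('# hello');
parseOnce('# hello'); // second call is served from the cache
```

With virtual scrolling, this means scrolling an old message back into view costs a Map lookup instead of a full re-parse; the trade-off is unbounded memory growth unless old entries are evicted.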