thams opened this issue 1 year ago
Use the split-block plugin: https://github.com/hyrijk/logseq-plugin-split-block
split-block works!
It would still be nice to have something that pre-parses GPT's responses in a way that is Logseq-friendly.
Split block indeed works and is a nice workaround.
But I'm wondering the same as @thams: is it possible to output multiple blocks, or to trigger the split-block plugin (if available) after creating a single block with unordered lists?
I'm working on a pull request that would (optionally) parse the result and replace the unsupported leading dashes.
It would also make sense to add post-processing hooks that could trigger split-block or some other arbitrary tool.
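As a rough sketch of the dash replacement such a post-processing step could do (the function name and approach here are hypothetical, not the actual PR code), assuming the only fix needed is swapping leading `-` bullets for `*` bullets:

```python
import re

def logseq_friendly(markdown: str) -> str:
    """Replace leading "-" bullets, which Logseq can't render in a
    single block, with "*" bullets, which it can.
    Hypothetical sketch; the real PR may parse more than this."""
    # Match each line's leading indentation followed by a "- " bullet,
    # keeping the indentation so nested lists survive.
    return re.sub(r"^(\s*)- ", r"\1* ", markdown, flags=re.MULTILINE)
```

A hook API could then accept any callable of this shape (`str -> str`) and chain them over the raw GPT response before insertion.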
Adding for reference for people looking for a simple solution that renders well.
I added this to the chat prompt:
Bullet points should use "*" and never "-".
Works like a charm if you are OK with the output being within a single block!
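The prompt trick above can be applied programmatically by prepending the style rule as a system message. A minimal sketch, assuming the standard Chat Completions message format (the helper name `build_messages` is illustrative, not part of any plugin):

```python
def build_messages(user_prompt: str) -> list:
    """Prepend a system message telling the model to emit "*" bullets,
    which Logseq renders fine inside a single block."""
    style_rule = 'Bullet points should use "*" and never "-".'
    return [
        {"role": "system", "content": style_rule},
        {"role": "user", "content": user_prompt},
    ]
```

The resulting list can be passed as the `messages` argument of a chat-completion request.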
OpenAI often returns text that looks like Markdown but trips up Logseq. Logseq then doesn't display the response correctly and shows a warning: "Full content not displayed; Logseq doesn't support multiple unordered lists or headings in a block."
To reproduce:
Ask GPT: "Create a study guide for learning large language model AI"
The response will be something like this:
Which shows up in Logseq like this:
I'm unsure what the solution is. Replacing GPT's leading dashes with double dashes might do the trick, but that might break things elsewhere.
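An alternative to rewriting the dashes would be splitting the response into multiple blocks before insertion, as asked about above. A minimal sketch, assuming each heading or top-level list item (plus its indented continuation lines) should become its own block (the function name and splitting rule are assumptions, not Logseq or plugin API):

```python
import re

def split_into_blocks(markdown: str) -> list:
    """Split a Markdown response into block-sized chunks: a new chunk
    starts at every heading or top-level "-"/"*" bullet, and indented
    lines stay attached to the chunk they continue.
    Hypothetical sketch; a real plugin would use Logseq's batch-insert API."""
    blocks = []
    for line in markdown.splitlines():
        # A heading ("# ...") or an unindented bullet opens a new block.
        if re.match(r"^(#+\s|[-*]\s)", line) or not blocks:
            blocks.append(line)
        else:
            blocks[-1] += "\n" + line
    return blocks
```

Each returned chunk could then be inserted as a separate Logseq block, sidestepping the single-block warning entirely.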