Open tehplague opened 1 week ago
Hey @tehplague ,
thanks for opening this issue and sorry for the late reply (TYPO3 camp Vienna was calling ;-)).
Your approach sounds good and I think this could really simplify the process.
I will refactor the generation of the suggestions the next time.
Greetings, Manu
The extension tries to strip hyphens and bullet dots from the LLM responses because the LLMs are asked to provide their answers as such lists. However, instead of only removing those characters from line beginnings, every occurrence of `-` and `•` is removed. The culprit is in `Passionweb\AiSeoHelper\Service\ContentService::buildBulletPointList()`. A simple `str_replace` is not enough to properly sanitize the LLM response in this regard.

Additionally, while looking at this function, I am also unsure whether `explode()`ing the response by `PHP_EOL` is suitable to get individual lines in the first place. I suppose OpenAI's API consistently uses plain `LF` characters as line endings, but on non-Unix systems we may have `PHP_EOL != LF`.
Why don't we try to let the LLM provide its answers, e.g. directly as a JSON array, by modifying the prompt accordingly? This would ease decoding a lot.
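If the prompt asks for a JSON array of strings, decoding reduces to `json_decode` plus a small validity check; a sketch under that assumption (the fallback behaviour is my suggestion, not existing extension code):

```php
<?php

/**
 * Sketch of the JSON approach: decode the model's reply directly
 * and fall back to an empty list if it is not a valid JSON array.
 */
function decodeSuggestionList(string $response): array
{
    $decoded = json_decode(trim($response), true);

    // Guard against non-JSON replies or a non-array top level.
    if (!is_array($decoded)) {
        return [];
    }

    // Keep only non-empty string entries and reindex the result.
    return array_values(array_filter(
        $decoded,
        static fn ($item) => is_string($item) && trim($item) !== ''
    ));
}
```

Note that hyphens and bullet dots inside the suggestions would then survive untouched, since no character stripping is needed at all.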