Prompt engineering: merge the GenAI prompt templates into a "meta template", so that one call to the GenAI services can generate multiple texts from multiple embedded prompts.
For now, product text generation enrichment is handled in the completion phase and works as follows:
for each product, GenAiCompletionService.processProduct() is called.
This method calls the GenAiService with the Product and the VerticalConfig, which contains the localized prompt templates as key/value pairs. A configuration sample can be found in the aiConfig properties of the vertical tv.yml.
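The actual tv.yml content is not reproduced here; as a hypothetical illustration only (property names and placeholder syntax are assumptions, not the real schema), such localized key/value prompt templates could look like:

```yaml
# Hypothetical sketch of localized key/value prompt templates (names are assumptions)
aiConfig:
  prompts:
    en:
      description: "Write a short marketing description for #{product.name}."
      pros-and-cons: "List the pros and cons of #{product.name}."
    fr:
      description: "Rédige une courte description marketing pour #{product.name}."
```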
In the AIService, doGeneration() iterates over each internationalized aiConfig and delegates generation to generateProductTexts(), which effectively performs the call to SpringAI for a given language and a given key/prompt pair. The Product object is updated inside this method; it is then automatically persisted by the completion pipeline that calls the GenAI services.
generateProductTexts() also handles prompt templating through the spelEvaluationService. A note about AIService: it could be removed if all relevant GenAI services are moved to the GenAiCompletionService.
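The current flow described above can be sketched as one LLM call per key/prompt pair. This is a hypothetical, self-contained illustration: the real code delegates templating to the spelEvaluationService and the call to SpringAI, both of which are stubbed here, and all names except those mentioned above are assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the current per-prompt flow: one LLM call per key/prompt pair.
// Real code uses spelEvaluationService for templating and SpringAI for the
// call; both are stubbed for illustration.
public class PerPromptGeneration {

    // Stand-in for the LLM call made via SpringAI.
    static String callLlm(String prompt) {
        return "generated text for: " + prompt;
    }

    // Stand-in for spelEvaluationService: naive placeholder substitution.
    static String template(String tpl, Map<String, String> vars) {
        String out = tpl;
        for (Map.Entry<String, String> e : vars.entrySet()) {
            out = out.replace("#{" + e.getKey() + "}", e.getValue());
        }
        return out;
    }

    // One call per key/prompt pair, as doGeneration() does for each language.
    static Map<String, String> generateProductTexts(Map<String, String> prompts,
                                                    Map<String, String> productVars) {
        Map<String, String> results = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : prompts.entrySet()) {
            results.put(e.getKey(), callLlm(template(e.getValue(), productVars)));
        }
        return results;
    }

    public static void main(String[] args) {
        Map<String, String> prompts = new LinkedHashMap<>();
        prompts.put("description", "Write a description for #{name}");
        Map<String, String> texts =
            generateProductTexts(prompts, Map.of("name", "Some TV"));
        System.out.println(texts.get("description"));
    }
}
```

The point of the sketch is the cost model: N prompt keys times M languages means N×M separate LLM calls, which is what the meta-template idea below aims to collapse.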
No strong ideas yet on how to proceed: after a brief talk with a great AI expert, the suggestion was to ask the LLM to generate the output in JSON format, maybe even with the AI configuration provided to the LLM "as is".
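A minimal sketch of that idea: merge all key/prompt pairs into a single "meta" prompt that asks the LLM for one flat JSON object, so a single call yields every text. This is an assumption-laden illustration; real code would likely use SpringAI's structured-output support (or a JSON library like Jackson) rather than the regex parsing shown here, and all class and method names are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of the proposed meta template: all key/prompt pairs are
// embedded in one prompt asking for a single JSON object, replacing N calls
// with one. Parsing here is a naive regex stand-in for a real JSON mapper.
public class MetaTemplate {

    // Build one prompt embedding every key/prompt pair and requesting JSON output.
    static String buildMetaPrompt(Map<String, String> prompts) {
        StringBuilder sb = new StringBuilder(
            "Answer with a single flat JSON object containing exactly these keys:\n");
        prompts.forEach((key, prompt) ->
            sb.append("- \"").append(key).append("\": ").append(prompt).append('\n'));
        return sb.toString();
    }

    // Naive extraction of "key": "value" pairs from a flat JSON answer.
    static Map<String, String> parseFlatJson(String json) {
        Map<String, String> out = new LinkedHashMap<>();
        Matcher m = Pattern.compile("\"([^\"]+)\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        while (m.find()) {
            out.put(m.group(1), m.group(2));
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> prompts = new LinkedHashMap<>();
        prompts.put("description", "Write a short product description.");
        prompts.put("pros", "List the product's strong points.");
        System.out.println(buildMetaPrompt(prompts));
        // Simulated LLM answer, parsed back into per-key texts:
        Map<String, String> texts = parseFlatJson(
            "{\"description\": \"A great TV.\", \"pros\": \"Bright panel.\"}");
        System.out.println(texts);
    }
}
```

One design caveat worth noting: a single JSON answer is all-or-nothing, so a malformed response loses every text at once, whereas the current per-key calls fail independently.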