Open BlueKiji77 opened 2 months ago
Hi @BlueKiji77 , I would recommend trying out the standard ChainOfThought and seeing if that resolves some of the issues here. Part of the reason you run into "Too many retries" is that the model itself is not adhering to the requirements #957 , not that your pipeline is too complex.
I have been trying to extract data (title, question answered, entities, summary) from documents chunks.
I believed Typed Predictors would be a good fit for this, but I keep running into the "Too many retries" error.
This left me wondering whether the way I defined the program is the problem: either I do not properly understand how Typed Predictors work, or I am trying to do too much in a single program.
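For context on where "Too many retries" comes from: a typed predictor re-asks the LM when its raw output fails to parse into the signature's types, and gives up after a fixed number of attempts. The loop below is a plain-Python illustration of that behavior, not DSPy's actual implementation; `MAX_RETRIES` and `typed_call` are hypothetical names.

```python
import json

MAX_RETRIES = 3  # hypothetical cap, mirroring a typed predictor's max_retries

def typed_call(lm, prompt, parse):
    """Re-ask the LM until its output parses into the expected type, or give up."""
    error = None
    for attempt in range(MAX_RETRIES):
        # Feed the previous parsing failure back into the next attempt.
        raw = lm(prompt if error is None else f"{prompt}\nPrevious error: {error}")
        try:
            return parse(raw)  # e.g. JSON decoding or pydantic validation
        except Exception as e:
            error = e
    raise RuntimeError("Too many retries")

# Toy demo: an "LM" that never returns valid JSON exhausts every attempt.
flaky_lm = lambda prompt: "not json"
try:
    typed_call(flaky_lm, "Extract the fields as JSON.", json.loads)
except RuntimeError as e:
    print(e)  # -> Too many retries
```

The practical upshot is that the error usually means the model keeps emitting output that fails validation, not that the retry machinery itself is broken.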
WHAT THE PROGRAM SHOULD DO
Extract the following fields: Title, Summary, QuestionAnswered, Entities using DataExtractionSignature.
For each field in the Typed Predictor's output, the program assesses 3 to 5 properties with the Suggest module and AssessDataSignature, for output validation.
Assessment model and signature
Metric for Evaluation and Assessment
The program itself
Also, optimize_signature seems really obscure to me. From inspecting my LLM history, it does not appear to optimize the prompts, and it throws "Too many retries" after about 4 iterations. Can someone point me in the right direction here?