Closed fabclj closed 6 months ago
Should we do this for pulling the Lexicons too? I think we could also end up with an agent with a lot of lexicons, and then we would have this issue again.
Right, it makes sense to handle this case here as well. I will prepare something.
Edit: looking at the lexicons file, I see that we currently await the lexicons one by one; we never implemented a Promise.all solution there. So we cannot run into the issue we had in the pullFlow iteration, but we could still have a very slow process for an agent with a large number of lexicons. I suggest implementing a chunked Promise.all solution anyway. https://github.com/Cognigy/Cognigy-CLI/blob/748ed81f6536f4dd2deffcfa9d253be4eb5b5184/src/lib/lexicons.ts#L40
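A chunked Promise.all could look something like the sketch below. This is not the actual code from the PR; the helper name `chunkedPromiseAll` and the parameters are hypothetical, but it shows the idea: run at most `chunkSize` requests in parallel, then wait for that batch to settle before starting the next one, which keeps the request rate bounded.

```typescript
// Hypothetical sketch of a chunked Promise.all helper.
// Runs `worker` over `items` in batches of `chunkSize`, so at most
// `chunkSize` API requests are in flight at any time.
async function chunkedPromiseAll<T, R>(
    items: T[],
    worker: (item: T) => Promise<R>,
    chunkSize: number
): Promise<R[]> {
    const results: R[] = [];
    for (let i = 0; i < items.length; i += chunkSize) {
        const chunk = items.slice(i, i + chunkSize);
        // Await the whole batch before moving to the next one.
        results.push(...(await Promise.all(chunk.map(worker))));
    }
    return results;
}
```

For pulling lexicons this would replace the one-by-one await loop with batches, trading a little parallelism headroom for staying under the API rate limit.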
:tada: This PR is included in version 1.5.0 :tada:
The release is available on:
Your semantic-release bot :package::rocket:
This PR includes a workaround that fixes the rate limit errors when cloning an agent with a high number of resources and, consequently, a high number of API requests. The README file was also updated to document the rate limit risks on large agents.