Closed jshepp27 closed 1 month ago
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
View this failed invocation of the CLA check for more information.
For the most up to date status, view the checks section at the bottom of the pull request.
@jzleibo - looks like it's blocked. I've left an open suggestion for you! Without it, the example three_key_questions.ipynb loads formative memories incorrectly.
Great piece of work, by the way! The Python is beautiful.
I think it was only blocked because you have not accepted the Google Contributor License Agreement. I believe it should have been sent to you by email when you submitted this pull request on GitHub.
As for the change itself, it looks fine to me. @jagapiou, @vezhnick, what do you think?
Which LLM did you use that had issues consistently outputting '\n\n\n'?
Ah, noted. I'll try accepting the CLA again for future contributions.
That's a very good point; I hadn't thought about variation across LLMs. To answer your question: GPT-4 via the OpenAI API, not Azure.
Going to win awards for the smallest contribution on the planet!!! - I will be back with a bigger contribution soon though ;)!
It seems splitting on "\n\n\n" isn't robust enough to break the aggregated formative episodes into individual episodes, which are then zipped into memories.
When I run it with a "\n\n" separator, my experiments, including the example "three_key_questions", run without the warning that fires when the number of episodes doesn't match DEFAULT_FORMATIVE_AGES.
Of course, given how sensitive this is to the model's output formatting, it may make sense to implement a more robust fix. This one works repeatedly for me.
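For anyone following along, here's a minimal sketch of the issue (the names and the LLM output below are illustrative, not the actual Concordia code): the model is asked for one episode per formative age, and a too-strict separator makes the split return fewer pieces than ages.

```python
# Hypothetical sketch of the separator bug; DEFAULT_FORMATIVE_AGES and
# the sample LLM output are illustrative, not Concordia's actual values.
DEFAULT_FORMATIVE_AGES = (6, 9, 13, 16, 21)

# Typical LLM output: episodes separated by a single blank line ("\n\n"),
# not by a double blank line ("\n\n\n").
llm_output = (
    "Age 6: Learned to ride a bike.\n\n"
    "Age 9: Moved to a new city.\n\n"
    "Age 13: Joined the chess club.\n\n"
    "Age 16: First summer job.\n\n"
    "Age 21: Graduated from university."
)

# Splitting on "\n\n\n" leaves everything in one blob, so the episode
# count doesn't match the number of formative ages (hence the warning).
episodes_strict = llm_output.split('\n\n\n')
print(len(episodes_strict))  # 1, not 5

# Splitting on "\n\n" yields one episode per age, which can then be
# zipped with the ages to build the formative memories.
episodes = [e.strip() for e in llm_output.split('\n\n') if e.strip()]
memories = list(zip(DEFAULT_FORMATIVE_AGES, episodes))
print(len(memories))  # 5
```

A more robust variant might split on a regex like `re.split(r'\n{2,}', llm_output)`, which tolerates any run of blank lines the model happens to emit.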