Currently the entire lesson content must be included in the OpenAI request, along with the previous user context, to maintain a conversation. For larger lessons this uses a lot of tokens and quickly hits the max-token limit. It might be wise to summarise lesson content where possible to save on tokens. This could be done in the flow module: once a lesson is ready to be published, it is run through OpenAI to generate a summary, which is then included in the file's frontmatter for use with Molly.
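The publish-time step could be sketched roughly as below. This is a minimal sketch, not the actual flow module: the function names (`add_summary_to_frontmatter`, `summarise_lesson`) are hypothetical, and the summariser is stubbed out with a truncation so the example runs offline — in practice it would call the OpenAI API (e.g. a chat completion asking for a short summary of the lesson body).

```python
import re

def add_summary_to_frontmatter(markdown: str, summary: str) -> str:
    """Insert a `summary:` key into the lesson file's YAML frontmatter."""
    match = re.match(r"^---\n(.*?)\n---\n", markdown, re.DOTALL)
    if match:
        frontmatter = match.group(1)
        body = markdown[match.end():]
        return f'---\n{frontmatter}\nsummary: "{summary}"\n---\n{body}'
    # No frontmatter yet: create one around the existing content.
    return f'---\nsummary: "{summary}"\n---\n{markdown}'

def summarise_lesson(body: str) -> str:
    # Stub: a real implementation would send `body` to OpenAI and
    # return the model's summary. Truncating keeps the sketch runnable.
    return body.strip().splitlines()[0][:120]

lesson = "---\ntitle: Fractions\n---\nA fraction represents part of a whole."
body = lesson.split("---\n")[-1]
published = add_summary_to_frontmatter(lesson, summarise_lesson(body))
print(published)
```

Molly would then read `summary` from the frontmatter and send that instead of the full lesson body, keeping the conversation context well under the token limit for large lessons.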