Open jpallen opened 11 months ago
The context is configurable now as of #11, but each model has a different context length.
Do we want to fill the context every time we send or be more cost effective/selective?
I was thinking of limiting it to something like 1,000 tokens before and after the cursor, but making this user-configurable (including unlimited), so the user has some control over cost vs. effectiveness.
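A minimal sketch of what that limit could look like (all names here are hypothetical, and whitespace splitting stands in for the model's real tokenizer; `max_tokens=None` represents the "unlimited" setting):

```python
def clamp_context(before: str, after: str, max_tokens=1000):
    """Trim the text around the cursor to at most max_tokens tokens per side.

    max_tokens=None means unlimited (send everything). Tokens are
    approximated by whitespace splitting; a real implementation would
    count with the model's own tokenizer.
    """
    if max_tokens is None:
        return before, after
    before_toks = before.split()
    after_toks = after.split()
    # Keep the tokens nearest the cursor: the tail of `before`
    # and the head of `after`.
    return (" ".join(before_toks[-max_tokens:]),
            " ".join(after_toks[:max_tokens]))
```

For example, `clamp_context("a b c d", "e f g h", max_tokens=2)` would send only `"c d"` and `"e f"`, while `max_tokens=None` passes both sides through untouched.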