ExtensityAI / symbolicai

Compositional Differentiable Programming Library

removed deprecated/never used models; updated token counting for chat… #52

Closed — futurisold closed 6 months ago

futurisold commented 6 months ago

… completion; refactoring

TODO: add tests for the counting function. IMO we're doing too much work trying to remedy the overflow (Where do we cut from? Why cut from the beginning rather than the end, or vice versa? Cutting is task-specific.) I would instead fail "gracefully": tell the user the prompt overflowed, send nothing to OpenAI if val > self.max_context_tokens, and ask them to reduce the prompt. Also, the naming in the mixin was previously confusing: max_tokens represents the response tokens, not the context tokens!
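To illustrate the proposed behavior, here is a minimal sketch of the "fail gracefully" check: refuse to send an overflowing prompt instead of silently truncating it. The names here (`ChatMixin`, `count_tokens`, `validate_prompt`, and the limit values) are hypothetical stand-ins, not symbolicai's actual API; a real implementation would use the model's tokenizer (e.g. tiktoken) rather than a whitespace split.

```python
class ContextOverflowError(ValueError):
    """Raised instead of truncating when the prompt exceeds the context window."""


class ChatMixin:
    # Hypothetical names: max_context_tokens is the model's full context
    # window; max_response_tokens is the budget reserved for the completion.
    # (Previously a single `max_tokens` attribute conflated the two.)
    max_context_tokens = 4096
    max_response_tokens = 512

    def count_tokens(self, prompt: str) -> int:
        # Placeholder tokenizer for illustration only; a real
        # implementation would count tokens with the model's tokenizer.
        return len(prompt.split())

    def validate_prompt(self, prompt: str) -> int:
        # Total budget = prompt tokens + tokens reserved for the response.
        val = self.count_tokens(prompt) + self.max_response_tokens
        if val > self.max_context_tokens:
            # Fail gracefully: nothing is sent to OpenAI; the user is
            # asked to reduce the prompt instead of us guessing what to cut.
            raise ContextOverflowError(
                f"Prompt requires {val} tokens but the context window is "
                f"{self.max_context_tokens}; please reduce the prompt."
            )
        return val
```

The key design point is that the cut-point decision is pushed back to the user, since only the caller knows which part of a task-specific prompt is safe to drop.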