Closed johanbrandhorst closed 1 month ago
So good to see you contributing! Can't wait to get this in!
Long time no see, Travis :). Well done on this project. I'm wondering how you'd prefer I test this; the only way I can see is to spin up a custom HTTP server to fake the requests, but that doesn't provide much in the way of confidence. I have tested this personally against the Gemini API and it works great.
Also happy to see you contribute here, @johanbrandhorst :-)
Re tests: the googleai provider is relatively well tested with live tests against the Gemini API (https://github.com/tmc/langchaingo/blob/main/llms/googleai/shared_test/shared_test.go).
For this change specifically, testing batching vs. no batching is a bit tricky, and we don't currently do mock testing with fake HTTP backends. That said, if the shared live tests pass and you observe the performance improvement, I believe that's good enough.
The new Batch API significantly speeds up embedding.
PR Checklist
- Name your Pull Request title clearly, concisely, and prefixed with the name of the primarily affected package (such as `memory: add interfaces for X, Y` or `util: add whizzbang helpers`).
- Link the issue the PR fixes (e.g. `Fixes #123`).
- Passes all `golangci-lint` checks.