unicoder88 opened 4 months ago
@unicoder88, what's the impact of batching on performance/costs in duplicate-heavy use cases?
@evgenydmitriev , I've added unique text processing and updated tests. Only unique texts are sent to workers, but results are assigned back to original inputs. This way duplicate-heavy use cases should still consume inbound traffic, but not increase Cloudflare usage costs.
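For illustration, that dedupe-and-map-back step could look roughly like this (a sketch with hypothetical names, not the PR's actual code):

```typescript
// Collapse duplicate texts before calling the model, remembering where
// each original input maps inside the deduplicated array.
function dedupe(texts: string[]): { unique: string[]; indexMap: number[] } {
  const seen = new Map<string, number>();
  const unique: string[] = [];
  // indexMap[i] = position of texts[i] inside `unique`
  const indexMap = texts.map((t) => {
    if (!seen.has(t)) {
      seen.set(t, unique.length);
      unique.push(t);
    }
    return seen.get(t)!;
  });
  return { unique, indexMap };
}

// Assign results computed for `unique` back to every original input.
function restore<T>(results: T[], indexMap: number[]): T[] {
  return indexMap.map((i) => results[i]);
}
```

Only `unique` would be sent to Workers AI; `restore` fans the embeddings back out to the original positions.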
Users creating batches of identical messages is a good consideration, but my question wasn't about duplicates within a batch. A much bigger duplicate problem might show up across multiple requests/users. Where do you think the biggest costs are? How does your batching method affect those costs (if at all)?
Thanks @evgenydmitriev . Looks like the most expensive part is the Vectorize calls. I intend to add the Cloudflare Cache API over each individual vector.
Price given 1000 requests:
Workers AI
https://developers.cloudflare.com/workers-ai/platform/pricing/
$0.011 / 1,000 Regular Twitch Neurons (also known as Neurons).
https://ai.cloudflare.com/#pricing-calculator
Price $0.011 / 1,000 Neurons. 768 tokens * 1000 requests = 970 Neurons = $0.0011 when over the free limit
Vectorize
https://developers.cloudflare.com/vectorize/platform/pricing/ Price: $0.040 per million queried vector dimensions
Queried Vector Dimensions: the total number of vector dimensions queried. If you have 50,000 vectors with 768 dimensions in an index and make 1,000 queries against that index, your total queried vector dimensions would sum to ((50000 + 1000) * 768) = 39.168 million = $1,567 (UPD: I meant one and a half dollars per 1,000 requests here, so $1.567)
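Written out as code, the comment's own arithmetic looks like this (the following reply notes the math is off, so treat it as the commenter's cost model, not Cloudflare's actual billing formula):

```typescript
// Queried-dimensions cost model as used in the comment above:
// cost = (stored vectors + queries) * dimensions / 1e6 * $0.040.
function vectorizeQueryCost(storedVectors: number, queries: number, dims: number): number {
  const queriedDims = (storedVectors + queries) * dims;
  return (queriedDims / 1_000_000) * 0.04;
}
```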
@unicoder88 your pricing math is off, and I personally think that manually implementing caching is almost always overkill, but you are thinking in the right direction. There might be an easier solution, though.
Actually, you're right @evgenydmitriev ! Obviously, a database is a better and more straightforward solution than a distributed cache here, so now I'm planning to use D1: create a schema, then query and insert results there.
D1 pricing looks tasty:
Rows written | 100,000 / day | First 50 million / month included + $1.00 / million rows
In the worst case of all-new vectors with the free writes exhausted, 1,000 writes will cost $0.001.
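The same worst-case write cost as a one-liner, assuming the $1.00 per million rows figure quoted above:

```typescript
// D1 write cost beyond the free tier, per the quoted pricing table
// ($1.00 per million rows written).
function d1WriteCost(rowsWritten: number): number {
  return (rowsWritten / 1_000_000) * 1.0;
}
```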
I'm not sure D1 is necessarily cheaper than the Cloudflare cache, but it might be, depending on the usage. My argument was about avoiding manual caching (whatever data store you might choose). There's a much simpler solution that is already implemented for the worker, but it will be broken with your batching approach.
Hi @evgenydmitriev ! I have some updates for you.
Firstly, worker limits:
A possible bottleneck is the limit of 6 simultaneous requests: this batching method attempts to execute up to 100 requests in parallel, so the rest should be throttled.
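That throttling could be sketched as a small concurrency limiter, e.g. (illustrative code, not what the worker actually runs):

```typescript
// Run `fn` over `items` with at most `limit` calls in flight at once,
// preserving result order. A fixed pool of workers pulls the next index
// until the input is exhausted.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // single-threaded JS: no race between read and increment
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}
```

With `limit = 6` this would keep the batch within the simultaneous-request ceiling while still processing up to 100 items.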
However, I've built the whole system with the same batching approach, and the results look better than expected. Execution looks like this:
The setup:
Processing times, however, do not show considerable slowdown regardless of concurrency or duplicates:
Parallel execution:
CPU time also looks good:
I would remove duplicate checking for simplicity, as well as any caching, because 768-dimensional vectors are unlikely to match exactly between different texts. Also, I would improve input checks for non-empty text, because one empty text in a batch breaks the whole batch.
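The input check could be as simple as the following sketch (a hypothetical helper, not the PR's code):

```typescript
// Reject empty or whitespace-only texts up front, so one bad item
// cannot fail the whole Workers AI batch call.
function validateTexts(input: unknown): string[] {
  if (!Array.isArray(input) || input.length === 0) {
    throw new Error("expected a non-empty array of texts");
  }
  return input.map((t, i) => {
    if (typeof t !== "string" || t.trim().length === 0) {
      throw new Error(`text at index ${i} must be a non-empty string`);
    }
    return t;
  });
}
```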
I would appreciate any further hints.
Thanks for the detailed test results. You correctly identified the Workers AI embedding models as the cost bottleneck in one of your previous comments, even accounting for the pricing calculation error. You also correctly identified the need for a caching mechanism there. The question is finding an easier and cheaper way of doing it than manually attaching a data store and querying every incoming message against it. There's an easier solution that's already implemented in the single-message approach, but it would break if we implemented your batching mechanism as is.
@evgenydmitriev, it would be nice to have a URL like this - GET https://www.example.com/some-namespace?text=some%20text - and then let the Cloudflare CDN cache those. I've tried this, and it doesn't work because workers run before the cache.
What I tried: adding `Cache-Control` headers to the response:

```typescript
c.res.headers.append("Cache-Control", "public, max-age=14400, s-maxage=84000, stale-if-error=60")
c.res.headers.append("Cloudflare-CDN-Cache-Control", "max-age=24400")
c.res.headers.append("CDN-Cache-Control", "max-age=18000")
```
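As a pure helper, the URL-as-cache-key idea from the GET suggestion above could be sketched like this (names are illustrative):

```typescript
// Build a canonical GET URL for a single text, so the URL itself can act
// as the cache key, e.g. https://www.example.com/some-namespace?text=...
function cacheKeyUrl(base: string, namespace: string, text: string): string {
  const url = new URL(`/${namespace}`, base);
  url.searchParams.set("text", text); // handles percent-encoding
  return url.toString();
}
```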
@unicoder88, could you clarify what it is that you are trying to cache? HTTP requests containing message batches?
@evgenydmitriev, well, the latest suggestion is to start small and create an endpoint that works on a single message but is cacheable by URL. Then either have clients initiate multiple parallel requests on their own if necessary (Cloudflare will take care of scaling), or build some sort of gateway on top of it that takes a batch and dispatches to this worker. The latter is still unclear.
> well, latest suggestion is to start small and create an endpoint that works on a single message, but is cacheable by URL.
Isn't this what the current worker does? You do not need a custom domain to cache HTTP requests to Cloudflare Workers.
Right now it's a POST body, so it shouldn't be cacheable by default. In this prototype, I converted it to GET (with no luck, admittedly 😀)
Hi @evgenydmitriev , please review updates.
Hi! This pull request adds the ability to pass multiple texts in a single request, while remaining compatible with the single-text format.
Solves issues https://github.com/1712n/dn-institute/issues/431 and https://github.com/1712n/dn-institute/issues/430
Considerations taken:

- Max input items for Workers AI - https://developers.cloudflare.com/workers-ai/models/bge-base-en-v1.5/#api-schema. The API schema suggests `"maxItems": 100`. Uncertain which is correct, but this is configurable as an environment variable `MAX_INPUT`, leaving 100 as the default.
- Query vectors doesn't seem to provide batch support, so I opted to run all requests in parallel with `Promise.all` - all requests must succeed. An alternative is `Promise.allSettled`, which can tolerate individual request errors, but then we'd have to come up with a per-item error format, which isn't worth it for now.
- In addition, basic high-level integration tests were added.
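The Promise.all dispatch described above can be sketched as follows, with `queryVector` as a hypothetical stand-in for the individual Vectorize query:

```typescript
// Fire one query per vector in parallel. Promise.all rejects the whole
// batch if any single query fails - the all-or-nothing behavior chosen
// here over Promise.allSettled.
async function queryAll<V, R>(
  vectors: V[],
  queryVector: (v: V) => Promise<R>,
): Promise<R[]> {
  return Promise.all(vectors.map((v) => queryVector(v)));
}
```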