What is it?
Based on some previous testing I've done (seen as part of #2430), we can get the metrics to run in a semi-performant way against a very large duckdb instance. Because of how the sqlmesh rolling windows executed in our initial version, the deletes + writes into trino were exceedingly slow. By using duckdb as a pre-warmed cache, we can distribute the metrics calculations across a cluster of pre-warmed duckdb instances and then write the results back to the trino warehouse.