MarkMcCulloh opened 11 months ago
Cool idea! I can definitely see it fitting in the Wing Cloud Library. A few questions:
What kinds of cloud services would this be implemented with (Redis, Memcached, or other key-value stores)?
Caches are probably one of the most valuable things in software, and implementing a good distributed cache strategy is non-trivial.
It could be helpful to identify some use cases to design the resource around. If you were writing the README for this resource, what's an example app you'd use to demonstrate it? I imagine the requirements for an API that caches IP addresses might be different from an API that caches website assets, or an API that caches session data, or an API that caches configuration settings, or caches API calls, etc. Would the cache support deleting/evicting individual entries?
Hi,
This issue hasn't seen activity in 60 days. Therefore, we are marking this issue as stale for now. It will be closed after 7 days. Feel free to re-open this issue when there's an update or relevant information to be added. Thanks!
I picture some kind of composable cache interface:

```wing
pub interface KeyValueStore {
  inflight get(key: str): str?;
  inflight set(key: str, value: str?): void;
}

pub struct CacheProps {
  /**
   * Provide your own implementation that stores
   * the cache data.
   *
   * @default `cloud.Table`
   */
  keyValueStore: KeyValueStore?;
}

pub class Cache {
  keyValueStore: KeyValueStore;

  new(props: CacheProps) {
    if let keyValueStore = props.keyValueStore {
      this.keyValueStore = keyValueStore;
    } else {
      this.keyValueStore = new cloud.Table();
    }
  }

  // ...
}
```
We could implement the in-memory cache on top of any other cache.
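To illustrate that layering idea, here is a rough async TypeScript sketch (the `InMemoryLayer` name and `backing` parameter are hypothetical, not part of the proposal): an in-memory layer that itself implements `KeyValueStore`, so it can front any other store.

```typescript
// Sketch of the composable KeyValueStore interface in TypeScript.
interface KeyValueStore {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string | undefined): Promise<void>;
}

// An in-memory layer that fronts any other KeyValueStore.
// Because it implements KeyValueStore itself, layers compose freely.
class InMemoryLayer implements KeyValueStore {
  private local = new Map<string, string>();
  constructor(private backing: KeyValueStore) {}

  async get(key: string): Promise<string | undefined> {
    const hit = this.local.get(key);
    if (hit !== undefined) {
      return hit; // served from memory
    }
    const value = await this.backing.get(key); // fall through to the backing store
    if (value !== undefined) {
      this.local.set(key, value);
    }
    return value;
  }

  async set(key: string, value: string | undefined): Promise<void> {
    await this.backing.set(key, value); // write through to the backing store
    if (value === undefined) {
      this.local.delete(key); // treat `set(key, nil)` as a delete
    } else {
      this.local.set(key, value);
    }
  }
}
```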
Feature Spec
cloud.Cache
is a hybrid key-value cache with multiple layers to optimize performance:
- Layer 1 (L1): in-memory cache
- Layer 2 (L2): remote cache
Reads from the cache start at L1 and only update from L2 if L1 TTL is expired (or if a fresh read is requested). Writes go directly to L2 and then invalidate L1.
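That read/write flow could be sketched roughly as follows (TypeScript for illustration; `HybridCache` is a hypothetical name, and a plain `Map` stands in for the remote L2 store):

```typescript
type L1Entry = { value: string; expiresAt: number };

class HybridCache {
  private l1 = new Map<string, L1Entry>();

  // `l2` is a stand-in for the remote store; `l1TtlMs` bounds L1 staleness.
  constructor(private l2: Map<string, string>, private l1TtlMs: number) {}

  async get(key: string, fresh = false): Promise<string | undefined> {
    const entry = this.l1.get(key);
    if (!fresh && entry !== undefined && entry.expiresAt > Date.now()) {
      return entry.value; // L1 hit within TTL
    }
    // L1 expired, missing, or a fresh read was requested: refresh from L2.
    const value = this.l2.get(key);
    if (value !== undefined) {
      this.l1.set(key, { value, expiresAt: Date.now() + this.l1TtlMs });
    } else {
      this.l1.delete(key);
    }
    return value;
  }

  async set(key: string, value: string): Promise<void> {
    this.l2.set(key, value); // writes go directly to L2...
    this.l1.delete(key);     // ...and then invalidate L1
  }
}
```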
Use Cases
Caches are probably one of the most valuable things in software, and implementing a good distributed cache strategy is non-trivial.
Implementation Notes
No response
Component
No response
Community Notes