Closed — jakobbraun closed this issue 1 year ago.
In addition, we could transfer `SYS.EXA_VOLUME_USAGE`.
I'll ask product-management for a quote if and when we will implement this.
Hi @jakobbraun, did you receive an answer from product management?
@jorge-delgado-aera We will consider your request for our future roadmap. However, the decision on timing and priority is still pending.
Reminded product management again that we need a decision.
Again reminded product management.
@redcatbear See the comment above: this will stay in the backlog. We don't need an urgent decision on whether we will implement it, and we can prioritize it higher if we have more resources in the future.
Acknowledged. Will keep this in the backlog.
Hello, I have CloudWatch monitoring implemented on more than 20 clusters. Is there a plan to collect this metric more frequently?
@jorge-delgado-aera, I am in discussions with the PM people. Which frequency do you have in mind? Every 10 minutes? More often?
Hi @redcatbear, I'd like every 5 minutes if possible.
@jorge-delgado-aera, we double-checked and think that your requirement might actually already be covered. We have been running the CloudWatch adapter against one of our clusters for quite a while now with an AWS Lambda scheduled once per minute. The actual DB RAM size metric in this case is a copy of `TEMP_DB_RAM` in `MONITOR_LAST_DAY`, which in the default setting is captured every half minute by the Exasol database.
So if I am not mistaken, all you would have to do to get a higher frequency is to change the lambda schedule.
Please let me know if that solves your problem or if I am overlooking something here.
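For illustration, the lookup described above might boil down to a query like the following. This is only a sketch: the table name `EXA_STATISTICS.EXA_MONITOR_LAST_DAY` and the column `TEMP_DB_RAM_SIZE` are assumed from Exasol's statistical system tables, not taken from the adapter's actual code.

```sql
-- Sketch: read the most recent temporary DB RAM measurement,
-- which the database captures roughly every 30 seconds.
SELECT MEASURE_TIME, TEMP_DB_RAM_SIZE
FROM EXA_STATISTICS.EXA_MONITOR_LAST_DAY
ORDER BY MEASURE_TIME DESC
LIMIT 1;
```

Since the measurements are already there at this granularity, increasing the Lambda schedule frequency (e.g. `rate(5 minutes)` instead of a longer interval) should be the only change needed on the user side.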
Closing due to no more user feedback.
For monitoring the database it would be great to report the temporary DB RAM size more often than every 30 minutes. That could be possible by reading it as an aggregation over all rows of `SYS.EXA_ALL_SESSIONS`. (This is a follow-up of #61)
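The aggregation proposed above could be sketched as a query like this. Note the assumptions: that `SYS.EXA_ALL_SESSIONS` exposes a per-session `TEMP_DB_RAM` column and that summing it approximates the database-wide value; neither is verified against the adapter here.

```sql
-- Sketch: approximate the current temporary DB RAM usage by
-- summing the per-session values over all active sessions.
SELECT SUM(TEMP_DB_RAM) AS TEMP_DB_RAM_TOTAL
FROM SYS.EXA_ALL_SESSIONS;
```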