Yes, this is how it works. Every change to the Kubernetes datastore causes a write to the backing datastore. Kine uses the database as log-structured storage, so every write creates a new row to store the updated object. Periodically, old versions of the objects are deleted; this is known as compaction. etcd does the same thing, except with the Raft consensus algorithm and bbolt instead of SQL.
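As a rough sketch of what that looks like on the SQL side (the column list is taken from the INSERT visible in your processlist; the key name and revision numbers are made up for illustration, and kine's real compaction logic is more involved):

```sql
-- An update to an object appends a new row; the superseded row is left in place for now.
INSERT INTO kine(name, created, deleted, create_revision, prev_revision, lease, value, old_value)
VALUES ('/registry/pods/default/example-pod', 0, 0, 100, 205, 0, '<new serialized object>', '<previous serialized object>');

-- Compaction periodically advances a watermark and deletes superseded rows,
-- which is why the compact_rev_key UPDATE and the DELETE show up constantly.
UPDATE kine SET prev_revision = 206 WHERE name = 'compact_rev_key';
DELETE FROM kine WHERE id = 205;
```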
See https://github.com/rancher/k3s/issues/2278#issuecomment-697249758 for a brief overview of why Kubernetes is always busy, even when you might think that it doesn't have anything to do.
If you think that perhaps something is going wrong, can you share io ops/sec and write throughput for your data volume? k3s is regularly used on Raspberry Pis with SD cards for storage, so the actual overhead should be low, even if the constant queries seem unusual at first glance.
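If it is easier, one rough way to get those numbers from inside MySQL itself (standard InnoDB status counters, nothing k3s-specific) is to sample the global status twice and diff the counters over the interval:

```sql
-- Sample twice, e.g. 60 seconds apart, and divide the deltas by the interval
-- to get write operations/sec and bytes written/sec for the whole instance.
SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Innodb_data_writes',    -- physical write operations to data files
   'Innodb_data_written',   -- bytes written to data files
   'Innodb_os_log_written', -- bytes written to the redo log
   'Com_insert', 'Com_update', 'Com_delete');  -- statement counts
```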
Thank you for the response. This makes sense, but I have to believe that there is opportunity for optimization when using an RDBMS-backed cluster. I haven't dug in just yet. Obviously, how the RDBMS is backed makes a huge difference as well (for example, using clustering and then backing that with ZFS mirror pools). I guess I am going to have to implement an external etcd cluster... which is disappointing, because I really only have 2 monster servers, and the mysql implementation is perfect. I don't really have an odd number of servers to use here.
Again, thanks for the input. I guess this can be closed with no fix for now, but I do plan on thinking about optimizations.
As a note to others who may come across this... although it is not high availability, I upgraded to the latest version of K3s to take advantage of the embedded etcd, and will do testing from there.
Environmental Info: k3s version v1.18.9+k3s1 (630bebf9)
Node(s) CPU architecture, OS, and Version:
Master: Linux sputnik 4.15.0-117-generic #118-Ubuntu SMP Fri Sep 4 20:02:41 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Worker: Linux mir 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration: 2 nodes (1 control plane, 1 worker)
Control Plane: v1.18.9+k3s1, 1.3.3-k3s2, 32 cores, 126 GB RAM
Worker: v1.18.9+k3s1, 1.3.3-k3s2, 32 cores, 110 GB RAM
Describe the bug: My installation is constantly churning my mysql instance with many updates and deletes. It is nonstop. I am going to have to do a full uninstall to save my disks if this continues.
Mysql "show full processlist" continually shows:
UPDATE kine SET prev_revision = ? WHERE name = 'compact_rev_key'
DELETE FROM kine WHERE id = ?
INSERT INTO kine(name, created, deleted, create_revision, prev_revision, lease, value, old_value) values(?, ?, ?, ?, ?, ?, ?, ?)
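One rough way to see whether all of this churn actually accumulates in the table, or gets compacted away, is to sample the kine table periodically (the id/name/value columns are the ones visible in the statements above; the size figure is only an approximation):

```sql
-- Run periodically: MAX(id) climbing while COUNT(*) stays roughly flat means
-- compaction is keeping up and the table is not growing without bound.
SELECT COUNT(*) AS rows_kept,
       MAX(id)  AS newest_revision,
       ROUND(SUM(COALESCE(LENGTH(value), 0) + COALESCE(LENGTH(old_value), 0)) / 1048576, 1) AS approx_mb
FROM kine;
```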
Steps To Reproduce: Install K3s and monitor mysql
Expected behavior: I expect the mysql calls to be judicious and not cause continual churn with inserts, updates, and deletes -- so they don't cause continual disk I/O.
Actual behavior: Disk I/O is super high on my mysql data volume due to excessive mysql inserts/updates/deletes from k3s/kine.
Additional context / logs: N/A