kobolog / gorb

IPVS on steroids – REST API, heartbeats, service discovery and more
GNU Lesser General Public License v3.0
809 stars 83 forks

Add lock/unlock codes for kvstore #43

Closed — albamc closed this 7 years ago

albamc commented 7 years ago

What I did

Add lock/unlock code for kvstore

How I did

How to test it

What to do

References

https://github.com/docker/libkv/blob/master/docs/examples.md

codecov-io commented 7 years ago

Codecov Report

Merging #43 into master will not impact coverage.

@@           Coverage Diff           @@
##           master      #43   +/-   ##
=======================================
  Coverage   96.93%   96.93%           
=======================================
  Files          13       13           
  Lines         196      196           
=======================================
  Hits          190      190           
  Misses          4        4           
  Partials        2        2

Last update b169c92...2d06db9.

kobolog commented 7 years ago

Hey – thanks for this PR! Do I understand correctly that its purpose is to synchronize KV store access when multiple GORBs use it, to avoid reading invalid data? If so, what sort of race conditions or data corruption do you envision when using the KV store without locking?

Regarding the logging levels – yeah, if not configured explicitly, all packages will end up using a global Logger object, so updating a setting in one place should affect it everywhere else. I'm not sure why this is not the behavior you're experiencing – I'll investigate!

Also, some basic tests would be nice to have before landing this PR =)

albamc commented 7 years ago

Umm... I think if multiple Gorb instances access the KV store, each should use a different path, so this lock feature is not for multiple-Gorb access. It's for situations where each host reports its LB info directly to the KV store (like registrator). If Gorb's policy is that L4 data must be modified through the Gorb API, then this PR is of no use and should be closed. I do think some cases require locks – for example, when the host side wants to pause L4 modifications until it finishes registering its L4 info. I actually developed a client, similar to gorb-docker-link, that reports Gorb L4 data to Consul directly, but it contains many of our company's internal features, so it isn't suitable for open source.

kobolog commented 7 years ago

Oh, so you mean this is for cases where a separate service somewhere might modify parts of the KV store that are also accessed by a Gorb instance, and in that case there might be a data race? I haven't really thought about this use case – the mental model is for all LB topology changes to go through the Gorb API. Are you saying that you have an internal service that makes these direct KV store modifications to control weights?

albamc commented 7 years ago

Sorry for the late answer. In my environment, multiple clients report their container status to Consul, and Gorb reads this status to build L4 entries. At first there were many errors, and one suspected cause was race conditions. In the end, though, I found that a race condition was not the cause – the real problem was a misconfiguration in my Consul server-agent setup. As you commented, this PR doesn't fit Gorb's mental model, and locking isn't necessary for my environment. So... please let me know what to do...

kobolog commented 7 years ago

Yeah, I still don't see how this fixes anything, but I do see how it could promote the use case where a third-party app or service fiddles with Gorb's KV namespace, which I don't think is a good thing. So I'm gonna close it.