dev-urandom / graft

Guys have you heard of this consensus protocol called raft?

Graft as a Go library #20

Open · benmills opened this issue 11 years ago

benmills commented 11 years ago

We should start thinking about how other Go projects will use graft. If you distill what the Raft algorithm gives you, it's basically an ordered list of strings. Here is how I think we could expose this in graft:

Instantiation

foo := graft.New("name of server")
// note: This name has to be unique in the cluster or maybe it should just 
// be the address?

foo.AddPeers(...)

foo.Start()
// note: We should probably make other methods on a graft server panic
// if it hasn't been started

So at this point, let's assume the other peers have started and leader election has happened without issue.
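
For concreteness, a hypothetical three-node setup might look like the following. The address-style names and the variadic AddPeers signature are assumptions here, not settled API:

// Hypothetical three-node cluster wiring; names double as addresses,
// per the open question above. AddPeers' signature is an assumption.
a := graft.New("node-a:9001")
b := graft.New("node-b:9002")
c := graft.New("node-c:9003")

a.AddPeers("node-b:9002", "node-c:9003")
b.AddPeers("node-a:9001", "node-c:9003")
c.AddPeers("node-a:9001", "node-b:9002")

a.Start()
b.Start()
c.Start()
// One of the three should now win an election and become leader.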

Adding data

foo.AppendEntries("bar")

I think the public API should look like this regardless of whether the server is a follower or the leader. If the local server is a follower, AppendEntries will just make a request to the leader of the cluster, which the follower will know about. If the local server is the leader, it will coordinate the append itself.
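
Internally that could be as simple as a branch on the server's current role. This is only a sketch; State, Leader, replicate, forwardToLeader, and LeaderAddress are all assumed names:

// Sketch only: every name below except AppendEntries is assumed.
func (s *Server) AppendEntries(entry string) error {
    if s.State == Leader {
        // The leader coordinates the append across the cluster itself.
        return s.replicate(entry)
    }
    // A follower always knows the current leader, so it just forwards.
    return s.forwardToLeader(s.LeaderAddress, entry)
}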

Accessing data

This is where things get interesting. Let's use a K/V store as an example. Our store adds an entry containing a JSON snapshot of the key any time a key is set/updated. Here is an example for a set: {"key":"a", "value":"b"}.
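
As a sketch, producing that entry from application code could look like this. The kvEntry type is hypothetical; graft itself would only ever see the resulting string:

// Hypothetical: the K/V store marshals its snapshot before appending.
type kvEntry struct {
    Key   string `json:"key"`
    Value string `json:"value"`
}

payload, err := json.Marshal(kvEntry{Key: "a", Value: "b"}) // needs encoding/json
if err != nil {
    // handle the error
}
foo.AppendEntries(string(payload)) // appends {"key":"a","value":"b"}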

Idea A

The first idea is that the store holds on to the index it has processed up to; any time it receives a request, it first applies any new committed entries before processing the incoming request.

var data map[string]string // assume this is populated
var lastProcessedIndex int // how far into the committed log we've applied

newRawEntries := foo.CommitedEntries[lastProcessedIndex:]
lastProcessedIndex += len(newRawEntries) // advance past what we just read
newEntries := jsonParseEntries(newRawEntries)

for _, entry := range newEntries {
    data[entry.Key] = entry.Value
}
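
The jsonParseEntries helper is hand-waved above; a minimal version might look like this, assuming the kvEntry snapshot shape from the set example:

// Hypothetical helper: decodes each raw committed entry as a kvEntry,
// silently skipping anything that doesn't parse.
func jsonParseEntries(raw []string) []kvEntry {
    entries := make([]kvEntry, 0, len(raw))
    for _, r := range raw {
        var e kvEntry
        if err := json.Unmarshal([]byte(r), &e); err == nil { // needs encoding/json
            entries = append(entries, e)
        }
    }
    return entries
}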

Now we could run that before each operation on the K/V store, so we can assume data is up to date across the cluster. This is nice because it's lazy, but it could add unnecessary overhead to each request.
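
In other words, every read path would start with a catch-up step. A sketch, where Store, catchUp, and Get are all hypothetical:

// Sketch of the lazy approach: fold in new committed entries
// before serving each read, so reads reflect the commit log.
func (s *Store) Get(key string) string {
    s.catchUp() // runs the apply loop shown above
    return s.data[key]
}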

Idea B

The next option is to expose some kind of callback registry so that we could subscribe to updates.

var data map[string]string // assume this is populated

foo.OnNewCommitedEntry(func(entry LogEntry) {
    // assume entry is already parsed
    data[entry.Key] = entry.Value
})
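
One open question with idea B: the callback would presumably fire from one of graft's goroutines (an assumption about its internals), in which case the map would need synchronization:

// Assumes the callback runs on a graft goroutine; guard the map.
var mu sync.Mutex // needs the sync package
var data = map[string]string{}

foo.OnNewCommitedEntry(func(entry LogEntry) {
    mu.Lock()
    defer mu.Unlock()
    data[entry.Key] = entry.Value
})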

I'm not sure which I like better; maybe both.

I think that once we come to some agreement here, I'll make a commit that fleshes out the README so we have a "design document" of sorts that we can reference and update as we start to connect the dots on graft.

benmills commented 11 years ago

For idea A, I wonder if we can push the logic of tracking "new" committed entries into graft itself. We could expose something like foo.NewCommitedEntries, which would be a slice of only the entries committed since the last read.
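
Inside graft that could be as simple as the following sketch, exposed as a method so graft can advance the cursor. It assumes a single cursor per server, which wouldn't work for multiple consumers:

// Hypothetical: graft tracks how much of the committed log the
// application has consumed and returns only the unseen tail.
func (s *Server) NewCommitedEntries() []string {
    entries := s.CommitedEntries[s.lastConsumed:]
    s.lastConsumed = len(s.CommitedEntries)
    return entries
}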

wjdix commented 11 years ago

Idea B looks best to me. Though there's also Idea C, which is how things are currently implemented: specifying a state machine to graft and allowing graft to commit entries to it. It's 4 AM, so take this with a grain of salt.

benmills commented 11 years ago

I may just be stupid, but I don't understand how you could specify a state machine for a K/V store. Could you sketch out how this would look?
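
For illustration, one guess at the shape Idea C implies: graft takes anything satisfying a small interface and applies each committed entry to it. Everything below is an assumption about what "specifying a state machine" means, not graft's actual implementation:

// Purely illustrative: an interface graft could commit entries to.
type StateMachine interface {
    Commit(entry string) error
}

// The K/V store satisfies it by decoding and applying each entry,
// reusing the kvEntry shape from earlier in the thread.
type kvStateMachine struct {
    data map[string]string
}

func (m *kvStateMachine) Commit(entry string) error {
    var e kvEntry
    if err := json.Unmarshal([]byte(entry), &e); err != nil {
        return err
    }
    m.data[e.Key] = e.Value
    return nil
}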