Closed: ironfish closed this issue 11 years ago.
Journal.IO just appends to a log without using keys. As long as applications use a journal via processors, channels and the EventsourcingExtension, duplicates will never appear in the journal, because these will never use client-defined sequence numbers.
Client-defined sequence numbers are still there for historical reasons, from times when I wanted to implement replication in a generic way, i.e. independent of the storage backend (LevelDB, Journal.IO). I won't follow this path any more, as we would just re-implement what is already there with HBase and other distributed storage backends.
If we remove the client-defined sequence number feature, it will be impossible to create duplicate entries. Maybe we should rename this issue to cover that more explicitly. WDYT?
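The duplicate scenario described above can be sketched in plain Scala (a minimal illustration; `Entry` and `AppendOnlyJournal` are hypothetical names, not the eventsourced or Journal.IO API): an append-only journal performs no key check, so two entries carrying the same client-defined sequence number are both persisted.

```scala
// Hypothetical sketch of an append-only journal: nothing below checks
// whether a sequence number was already written, mirroring how a pure
// append-to-log backend behaves.
case class Entry(sequenceNr: Long, payload: String)

class AppendOnlyJournal {
  private var log = Vector.empty[Entry]

  // Pure append: no key lookup, no uniqueness constraint.
  def append(e: Entry): Unit = log = log :+ e

  def entries: Vector[Entry] = log
}

object Demo extends App {
  val journal = new AppendOnlyJournal
  journal.append(Entry(1L, "a"))
  journal.append(Entry(1L, "b")) // same client-defined sequence number
  // Both entries were persisted, so the count for sequence number 1 is 2.
  println(journal.entries.count(_.sequenceNr == 1L)) // prints 2
}
```

With client-defined sequence numbers removed and numbering left entirely to the journal, this situation cannot arise in the first place.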
That makes sense. What do you want the issue renamed to?
On Wed, Mar 20, 2013 at 1:55 AM, Martin Krasser notifications@github.com wrote:
Duncan K. DeVore <><
Just renamed it. I'm still using client-defined sequence numbers in some tests, not sure how to deal with that when having this feature disabled. I'll leave this ticket open for the moment.
That being the case, I'm not sure this should be labeled as a bug. Perhaps wontfix?
Done. Re-opening is always possible :)
It seems that Journal.IO doesn't support a unique key. If you modify the JournalSpec test as follows:
```scala
"persist messages with client-defined sequence numbers" in { fixture =>
  import fixture._
```
This will duplicate a message, and Journal.IO will persist the duplicate. Following is the output:
```
[info] JournalioJournalSpec:
[info] A journal
[info] - must persist and timestamp input messages
[info] - must persist but not timestamp output messages
[info] - persist messages with client-defined sequence numbers * FAILED *
[info]   Message(test-1,5,1363736773401,0,List(),true,null,null,null,null,null) was not equal to Message(test-2,6,1363736773401,0,List(),true,null,null,null,null,null) (JournalSpec.scala:112)
[info] - must persist input messages and acknowledgements
[info] - must persist input messages and acknowledgements along with output messages
[info] - must replay iput messages for n processors with a single command
[info] - must tolerate phantom acknowledgements
```
You shouldn't get this error, as the duplicate should be dropped.
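The expected dropping behavior can be sketched like this (a hedged illustration, not the eventsourced implementation; `Entry` and `dropDuplicates` are hypothetical names): when reading entries back, keep only the first entry seen for each sequence number.

```scala
// Hypothetical sketch of dedup-on-read: the journal still appends
// everything, but duplicates by sequence number are filtered out when
// entries are consumed.
case class Entry(sequenceNr: Long, payload: String)

def dropDuplicates(entries: Seq[Entry]): Seq[Entry] = {
  val seen = scala.collection.mutable.Set.empty[Long]
  // Set#add returns false if the element was already present,
  // so only the first entry per sequence number passes the filter.
  entries.filter(e => seen.add(e.sequenceNr))
}

object ReplayDemo extends App {
  val log = Seq(Entry(5L, "test-1"), Entry(5L, "test-2"), Entry(6L, "test-3"))
  println(dropDuplicates(log).map(_.payload)) // prints List(test-1, test-3)
}
```

Under this scheme the failing assertion above would see the first `Message` with sequence number 5 rather than the duplicate, which is what the spec expects.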