For the event store we need persistence to disk, and I don't think Chronicle Map is the best fit; however, I am seriously thinking about supporting Chronicle Queue alongside Kafka for the messaging.
What we can do is abstract the producer and consumer into interfaces and then inject different implementations. Other customers are also asking for support for brokers like Solace and Redis.
Do you know if Chronicle Map can persist to disk?
@stevehu - actually, even Chronicle Queue itself can be persisted forever (if you want); all messages are stored indefinitely, so it can be used for replay capabilities... but this needs brainstorming.
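Just to make the replay idea concrete, here is a minimal sketch using the open-source Chronicle Queue API (the queue path and event strings are made up, and the builder/appender/tailer calls are from memory, so treat the exact method names as an assumption to verify):

```java
import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

public class QueueReplaySketch {
    public static void main(String[] args) {
        // Entries are memory-mapped into .cq4 files and kept until you delete them,
        // which is what makes full replay possible.
        try (ChronicleQueue queue = SingleChronicleQueueBuilder.binary("events").build()) {
            ExcerptAppender appender = queue.acquireAppender();
            appender.writeText("OrderCreated:42");
            appender.writeText("OrderShipped:42");

            // Replay everything from the very beginning of the queue.
            ExcerptTailer tailer = queue.createTailer().toStart();
            String event;
            while ((event = tailer.readText()) != null) {
                System.out.println("replayed: " + event);
            }
        }
    }
}
```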
Regarding Chronicle Map, it looks like there is an enterprise version which combines Map and Queue into a single transactional system by providing a so-called Journal (probably something similar to file-system journaling); some discussion is here: https://groups.google.com/forum/#!topic/java-chronicle/tY0q6_HnxbE
Anyway, you can optionally persist a Chronicle Map to disk, but flushing is left to the OS, so at the moment there is no control over how often the data is written to disk. Only when you close the Map is it guaranteed that all data has been written to disk.
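For reference, the persisted variant is just a different factory call in the Chronicle Map 3.x builder; this is a rough sketch with made-up sizing hints, and the method names are from memory, so double-check them against the current API:

```java
import net.openhft.chronicle.map.ChronicleMap;

import java.io.File;
import java.io.IOException;

public class PersistedMapSketch {
    public static void main(String[] args) throws IOException {
        File store = new File("event-index.dat"); // memory-mapped backing file

        try (ChronicleMap<Long, CharSequence> map = ChronicleMap
                .of(Long.class, CharSequence.class)
                .name("event-index")
                .entries(1_000_000)          // expected number of entries
                .averageValueSize(256)       // size hint for variable-length values
                .createPersistedTo(store)) { // createPersistedTo() instead of create()

            map.put(42L, "serialized event payload");
        } // only close() guarantees everything has reached the disk;
          // in between, the OS decides when dirty pages get flushed
    }
}
```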
Here is the feature matrix for Chronicle Map:
| Feature | Availability |
|---|---|
| In-memory off-heap Map | Open source |
| Persistence to disk | Open source |
| Remote calls | Commercially available |
| Eventually-consistent replication (100% redundancy) | Commercially available |
| Synchronous replication | Commercially available |
| Partially-redundant replication | On demand |
| Entry expiration timeouts | On demand |
Some more details: https://github.com/OpenHFT/Chronicle-Map/blob/master/docs/CM_Features.adoc
PS: Peter Lawrey answers any kind of Chronicle question posted on Stack Overflow very quickly, or we can invite him here as well for the more low-level questions; that is not a problem.
I am now working on a high-performance trading bot based on Chronicle Queue. For replication to other machines I am currently considering (playing around with) Aeron (a fast, UDP-based library, among other things) to replicate the queues across machines. Even when you require an ACK from the consumer/receiver, I think it will be better in the future to implement that feature via an independent UDP server/client infrastructure instead of depending on TCP, to lower the overhead, though of course without guaranteed delivery. In case guaranteed delivery is required (which it might be in your case), Chronicle provides the so-called Network project for the TCP side.

The whole idea is to move away from a broker-centric system and to look at the queue (when combined with the map, remote access, etc.) as a high-performance database which you can replicate wherever you need it. Again, I am just working on concepts and benchmarking these technologies at the moment, but since I like your pure JEE implementation (for boosting performance), I think it might be something for you to consider to make your project a real beast :-). The same applies to RxJava usage (but that is a different discussion we can open another day...).
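To give a rough idea of the Aeron part, below is a sketch of the kind of UDP pub/sub pipe I have in mind for shipping queue entries to another machine. The channel, stream id and payload are made up, and the Aeron calls are written from memory, so treat this as a concept sketch rather than a tested replicator:

```java
import io.aeron.Aeron;
import io.aeron.Publication;
import io.aeron.Subscription;
import io.aeron.driver.MediaDriver;
import io.aeron.logbuffer.FragmentHandler;
import org.agrona.concurrent.UnsafeBuffer;

import java.nio.charset.StandardCharsets;

public class AeronReplicationSketch {
    public static void main(String[] args) {
        final String channel = "aeron:udp?endpoint=localhost:40123"; // made-up endpoint
        final int streamId = 10;                                     // made-up stream id

        try (MediaDriver driver = MediaDriver.launchEmbedded();
             Aeron aeron = Aeron.connect(new Aeron.Context()
                     .aeronDirectoryName(driver.aeronDirectoryName()));
             Publication pub = aeron.addPublication(channel, streamId);
             Subscription sub = aeron.addSubscription(channel, streamId)) {

            // "Replicate" one queue entry: copy it into a buffer and offer it over UDP.
            byte[] entry = "OrderCreated:42".getBytes(StandardCharsets.UTF_8);
            UnsafeBuffer buffer = new UnsafeBuffer(new byte[256]);
            buffer.putBytes(0, entry);
            while (pub.offer(buffer, 0, entry.length) < 0) {
                Thread.yield(); // back-pressure / not yet connected: retry
            }

            // On the receiving side the handler would append into a local Chronicle Queue;
            // here it just prints the payload.
            FragmentHandler handler = (buf, offset, length, header) -> {
                byte[] data = new byte[length];
                buf.getBytes(offset, data);
                System.out.println("replicated: " + new String(data, StandardCharsets.UTF_8));
            };
            while (sub.poll(handler, 10) == 0) {
                Thread.yield();
            }
        }
    }
}
```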
Possibly half off-topic: besides these interesting off-heap projects I also use high-performance Java collections. I am not sure whether this is needed anywhere in eventuate4j itself, as it is not a replacement for Postgres-like databases but rather something to be used alongside them for speed :-). The project is called cqengine; it is an in-memory, SQL-like, queryable, high-performance collection system supporting a dozen index types (optionally persisted to disk as well!). It doesn't offer any kind of network stack, so it is meant to be used simply as a fast in-JVM-process collection (concurrent as well). So if not for eventuate4j, maybe you can utilise it in some of your projects within some service implementations. It can also simply serve as a fast cache backed by a standard SQL database, and it is really deadly fast (microseconds on large data sets in the millions or billions of entries). Take a look yourself: https://github.com/npgall/cqengine
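As a quick illustration of what cqengine usage looks like (the `Order` class and its `CUSTOMER` attribute are hypothetical; the collection, index and query calls follow the project's README):

```java
import com.googlecode.cqengine.ConcurrentIndexedCollection;
import com.googlecode.cqengine.IndexedCollection;
import com.googlecode.cqengine.attribute.Attribute;
import com.googlecode.cqengine.attribute.SimpleAttribute;
import com.googlecode.cqengine.index.hash.HashIndex;
import com.googlecode.cqengine.query.option.QueryOptions;
import com.googlecode.cqengine.resultset.ResultSet;

import static com.googlecode.cqengine.query.QueryFactory.equal;

public class CqengineSketch {

    // Hypothetical read-model entity
    static class Order {
        final long id;
        final String customer;
        Order(long id, String customer) { this.id = id; this.customer = customer; }
    }

    // Attribute the engine can index and query on
    static final Attribute<Order, String> CUSTOMER =
            new SimpleAttribute<Order, String>("customer") {
                public String getValue(Order order, QueryOptions queryOptions) {
                    return order.customer;
                }
            };

    public static void main(String[] args) {
        IndexedCollection<Order> orders = new ConcurrentIndexedCollection<>();
        orders.addIndex(HashIndex.onAttribute(CUSTOMER));

        orders.add(new Order(1, "acme"));
        orders.add(new Order(2, "globex"));

        // SQL-like, index-backed query, entirely in-process
        ResultSet<Order> results = orders.retrieve(equal(CUSTOMER, "acme"));
        for (Order o : results) {
            System.out.println("order " + o.id);
        }
    }
}
```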
A small correction to my previous text: Chronicle Queue Enterprise does replication via TCP. I am now brainstorming how to implement such a network replicator on top of the open-source version. Additionally, the enterprise version might also support filtering capabilities.
NOTE: There is still version 3.x of the Queue product, which has TCP-based replication, but from version >= 4.x it is considered an enterprise feature.
Based on our conversation, I am seriously thinking about abstracting the consumer and producer interfaces and using service.yml to plug in different implementations for different message brokers. Chronicle is one of them, and one of our customers asked for Solace.
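Something along these lines is what the abstraction could look like. The interface and class names below are only placeholders, and the in-memory implementation is just there to show that calling code stays broker-agnostic; the real binding of Kafka/Chronicle/Solace/Redis adapters to the interfaces would then be declared in service.yml as described above.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Broker-agnostic contracts; Kafka, Chronicle Queue, Solace or Redis adapters
// would implement these and be bound to the interfaces via service.yml.
interface MessageProducer extends AutoCloseable {
    void send(String topic, byte[] payload);
}

interface MessageConsumer extends AutoCloseable {
    void subscribe(String topic, Consumer<byte[]> handler);
}

// Trivial in-JVM implementation, just to show that the calling code never
// needs to know which broker sits behind the interfaces.
public class InMemoryBroker implements MessageProducer, MessageConsumer {
    private final Map<String, List<Consumer<byte[]>>> handlers = new ConcurrentHashMap<>();

    @Override
    public void send(String topic, byte[] payload) {
        handlers.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }

    @Override
    public void subscribe(String topic, Consumer<byte[]> handler) {
        handlers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    @Override
    public void close() { handlers.clear(); }
}
```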
Nice, I am closing this issue then, as this is the root idea I originally had in mind :-)
Gavin has opened another issue based on this discussion in light-tram-4j; let's use it to track the progress.
That is brilliant to hear. I will investigate the possibilities for an event store replacement further, as any kind of SQL database simply adds too much latency (that is a personal point of view :-) ). On the other hand, you can connect to Postgres via domain sockets to remove the burden of TCP locally. I will focus on cqengine in the future. I think that, mixed with Chronicle Queue, you can build a deadly async beast for both the event store and the local service data views (when we talk about the CQRS pattern).
This whole concept can mix SQL (cold storage) with a fully in-memory (hot storage) event store. However, it will require additional, non-trivial work to achieve this in a fully scalable way, and since the open-source version of Chronicle Queue cannot talk over the network, one must either accept the enterprise version (of which I am not aware of the price) or implement this functionality oneself, i.e. implement the replication of the queues over the network.
One must also understand Chronicle Queue's behaviour: by default it stores the data forever (these guys report 100 TB queues among their biggest users). In the following diagram I suggest syncing a failed in-memory query engine from the SQL storage, but you can simply re-read the full Queue instead, if you don't use any kind of retention policy. It almost sounds like a JOURNAL by design, but Chronicle Enterprise provides a real implementation, the so-called Journal, which looks like a mixture of Chronicle Queue and Map (with network replication support).
Otherwise the queue design is quite "simple": when you create a topic, it stores the data as one file per day, e.g. `mytopic/20160710.cq4`, `20160711.cq4`, `20160712.cq4`, `20160713.cq4`.
So when you no longer need some of the data, you can just remove the specific file. I am not sure if resolutions other than DAY are supported.
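On the resolution question: as far as I can tell, the open-source builder lets you pick the roll cycle, so the files do not have to be daily. A small sketch, with the roll-cycle constant name taken from memory, so please verify it against the current Chronicle Queue release:

```java
import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.RollCycles;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

public class RollCycleSketch {
    public static void main(String[] args) {
        // One .cq4 file per hour instead of per day; older files can then be
        // deleted at a finer granularity as a poor man's retention policy.
        try (ChronicleQueue queue = SingleChronicleQueueBuilder.binary("mytopic")
                .rollCycle(RollCycles.HOURLY)
                .build()) {
            queue.acquireAppender().writeText("tick");
        }
    }
}
```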
Below is a naive diagram.
I am closing this again, but I just wanted to add some additional hints which might in the future lead to more high-performance design options when adopting eventuate-4j.
This is more of a free-form topic.
I have two topics to consider as replacements in the microservice framework to achieve deadly performance:
1. Event Store: Would you consider replacing the SQL-like store with the Chronicle Map system?
2. Broker: Would you consider replacing Kafka, or any other messaging, with the Chronicle Queue subsystem?
Best regards,
Ladislav