Closed archenroot closed 5 years ago
@archenroot I briefly browsed the site and this is so cool. I didn't find the detailed requirements that need to be implemented; however, we have worked on some high-throughput applications with some of our clients, and I hope the experience we gathered can help.
If you have enough memory, you can use Chronicle Map. If you need local persistence, RocksDB on SSD is pretty fast. We use it in most of our CQRS projects as the materialized view on the query side. You can store the final aggregation result as a String in the exact format of the response; reads are almost at RAM speed :).
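A minimal JDK-only sketch of the materialized-view pattern described above: the projection side pre-renders the response String once, so the query side answers with a single map lookup. `ConcurrentHashMap` stands in here for Chronicle Map (off-heap) or RocksDB (persistent); the class and key names are hypothetical, but the access pattern is the same.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: query-side materialized view holding pre-rendered responses.
// In production the Map would be a Chronicle Map or a RocksDB instance.
public class ResponseCache {
    private final Map<String, String> view = new ConcurrentHashMap<>();

    // Write/projection side: render the response once, in its final
    // wire format, whenever the underlying aggregate changes.
    public void project(String key, String renderedJson) {
        view.put(key, renderedJson);
    }

    // Query side: no aggregation or serialization at read time,
    // just a single lookup returning the ready-to-send String.
    public String lookup(String key) {
        return view.getOrDefault(key, "{}");
    }

    public static void main(String[] args) {
        ResponseCache cache = new ResponseCache();
        cache.project("account:42", "{\"id\":42,\"balance\":100}");
        System.out.println(cache.lookup("account:42"));
        System.out.println(cache.lookup("account:404"));
    }
}
```

The point is to pay the rendering cost on the write path, so the hot read path does no work beyond the lookup.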
Happy New Year!!!!
BTW, most of our RocksDB usage is in Kafka Streams for Interactive Queries. You might need to set it up on your own, and remember that the Alpine JDK Docker image does not work; you need the full JDK image.
Here are the details of the task: https://highloadcup.ru/en/round/4/ Currently the best performer is a C++ guy :-)
More details on the task: https://highloadcup.ru/media/condition/accounts_rules_en.html
I started today with the OpenAPI spec and the Account class. I think I will go with CQEngine for this task, as a lot of the query complexity has to be executed at the API layer. Any comments are welcome. Submissions are time-limited, so I hope to finish it over the weekend.
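The attribute-index idea that CQEngine provides out of the box (an `IndexedCollection` with e.g. a `HashIndex` on a field) can be sketched with the JDK alone. The `Account` fields below are hypothetical, just to show how an equality query hits an index instead of scanning every account.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// JDK-only sketch of a hash index on one attribute, i.e. roughly what
// CQEngine's HashIndex does for retrieve(equal(Account.CITY, city)).
public class AccountIndex {
    static class Account {
        final int id;
        final String city;
        Account(int id, String city) { this.id = id; this.city = city; }
    }

    // city -> accounts living in that city, maintained on every insert.
    private final Map<String, List<Account>> byCity = new ConcurrentHashMap<>();

    public void add(Account a) {
        byCity.computeIfAbsent(a.city, c -> new ArrayList<>()).add(a);
    }

    // O(1) index lookup instead of an O(n) scan of all accounts.
    public List<Account> findByCity(String city) {
        return byCity.getOrDefault(city, List.of());
    }

    public static void main(String[] args) {
        AccountIndex idx = new AccountIndex();
        idx.add(new Account(1, "Moscow"));
        idx.add(new Account(2, "Prague"));
        idx.add(new Account(3, "Moscow"));
        System.out.println(idx.findByCity("Moscow").size()); // 2
    }
}
```

CQEngine generalizes this with multiple index types and a composable query DSL, but the core trade-off is the same: extra memory and write-time work per index in exchange for fast reads.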
Ladislav
I am closing this as I got all my answers, and we will continue on the cup externally.
@archenroot Just letting you know that I have updated the TechEmpower benchmarks to JDK 11. The numbers look good now.
It's really nice looking, hhhh
I actually didn't have time to work on the cup fully; it requires a data dictionary and/or some data compression, and the task is complex. Thanks for updating the TechEmpower benchmarks, though!
Hi guys,
I am joining a contest back in Russia on high-load services: https://highloadcup.ru/en/
I would like to use CQEngine, an in-memory engine with disk persistence, as the backbone.
Q: Do you have any production-grade optimizations to get maximum throughput? Should I use JDK 8 or 11? Any hint is welcome.
Ladislav