jemshit closed this issue 2 years ago
Anyone?
I agree there could be a lot more practical examples of how people use Arctic, for getting started.
To your questions: (1) I've found that it's easier to keep all data of the same timeframe together. For me, at the most basic level, one library for all daily data, and another for intraday. (Not sure why you would need to keep hourly if you keep minute data, as you can always resample higher.) And I generally keep all data from the same source together.
So in your nomenclature: exchange1-intraday, exchange2-intraday, exchange1-daily, exchange2-daily.
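For illustration, a minimal sketch of that layout using Arctic's API (the host, library, and symbol names are just examples, and a running local MongoDB is assumed):

```python
import pandas as pd
from arctic import Arctic

store = Arctic('localhost')  # connect to a local MongoDB

# One library per (source, timeframe); defaults to VersionStore.
for name in ['exchange1-intraday', 'exchange1-daily',
             'exchange2-intraday', 'exchange2-daily']:
    store.initialize_library(name)

# Toy minute bars for one instrument.
idx = pd.date_range('2021-01-04 09:30', periods=5, freq='1min')
minute_df = pd.DataFrame({'open': 1.0, 'high': 1.0, 'low': 1.0,
                          'close': 1.0, 'volume': 100}, index=idx)

lib = store['exchange1-intraday']
lib.write('symbolA', minute_df)

# Hourly bars don't need their own library: resample minute data on read.
hourly = lib.read('symbolA').data.resample('1H').agg(
    {'open': 'first', 'high': 'max', 'low': 'min',
     'close': 'last', 'volume': 'sum'})
```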
(2) Frankly, I've never used chunkstore. The default store is versionstore, which I use for all OHLC data. The versioning IMO can come in handy in the event the data gets messed up... which eventually happens.
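Continuing the sketch above, recovering from a bad write could look like this (the DataFrames and symbol name are placeholders):

```python
import pandas as pd

good_df = pd.DataFrame({'close': [1.0, 2.0]},
                       index=pd.date_range('2021-01-04', periods=2))
corrupted_df = good_df * 0               # stand-in for a botched write

v_good = lib.write('symbolB', good_df)   # known-good version
lib.write('symbolB', corrupted_df)       # bad data lands on top

print(lib.list_versions('symbolB'))      # inspect stored versions

# Read the last known-good version back and restore it as the newest one.
old = lib.read('symbolB', as_of=v_good.version)
lib.write('symbolB', old.data)
```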
(3) Again, never used chunkstore.
Thank you.
Could you shed some light on this? @dunckerr @bmoscon
I read it, hence the questions
So far, this is my summary:
A library is used for bucketing, and it consists of multiple collections in MongoDB. So the options are:
a) "exchange1-symbolA" is the library; "spot-minute", "spot-hour", ... are the symbols.
b) "exchange1-minute" is the library; "symbolA-spot", "symbolA-perp", ... are the symbols.
I still don't fully understand the motivation for chunkstore, but given "chunkstore is super dependent on your chunk size, and writing is slower than reading; unless you have a specific reason to use it, you probably want to use versionstore", VersionStore seems to be the go-to solution.
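If VersionStore is the go-to, the day-to-day OHLCV workflow is just write/append/read. A minimal sketch, with library and symbol names following option b above (all names are examples):

```python
import pandas as pd
from arctic import Arctic, VERSION_STORE

store = Arctic('localhost')
store.initialize_library('exchange1-minute', lib_type=VERSION_STORE)
lib = store['exchange1-minute']

def bars(start, periods):
    """Toy OHLCV candles on a minute index."""
    idx = pd.date_range(start, periods=periods, freq='1min')
    return pd.DataFrame({'open': 1.0, 'high': 1.0, 'low': 1.0,
                         'close': 1.0, 'volume': 100}, index=idx)

lib.write('symbolA-spot', bars('2021-01-04', 3))         # initial load
lib.append('symbolA-spot', bars('2021-01-04 00:03', 3))  # incremental update
latest = lib.read('symbolA-spot').data                   # newest version
```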
Not sure, just a few quotes:
> it is the minimum amount of data that you have to read if you're only reading a subset of the data, but needs to be big enough that the compression is effective.

> ideal for use cases when very large datasets need to be accessed by 'chunk'
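To make those quotes concrete, a hedged ChunkStore sketch (names are examples; note that ChunkStore's default date chunker expects the datetime index or a column to be named 'date'):

```python
import pandas as pd
from datetime import datetime
from arctic import Arctic, CHUNK_STORE
from arctic.date import DateRange

store = Arctic('localhost')
store.initialize_library('exchange1-minute-chunked', lib_type=CHUNK_STORE)
lib = store['exchange1-minute-chunked']

# ~1 week of toy minute bars on an index named 'date'.
idx = pd.date_range('2021-01-01', periods=10_000, freq='1min', name='date')
df = pd.DataFrame({'open': 1.0, 'high': 1.0, 'low': 1.0,
                   'close': 1.0, 'volume': 100.0}, index=idx)

# chunk_size is the on-disk unit: 'D' = one chunk per day, 'M' per month,
# 'Y' per year. A range read loads every chunk it overlaps, so small
# chunks suit narrow reads but compress worse; large chunks the reverse.
lib.write('symbolA', df, chunk_size='D')

# Reading one day's bars only loads that day's chunk.
one_day = lib.read('symbolA',
                   chunk_range=DateRange(datetime(2021, 1, 3),
                                         datetime(2021, 1, 3, 23, 59)))
```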
I couldn't find any Arctic-related Stack Overflow tag, so I'm asking here.
I read through the whole documentation, but it is still not clear to me which store type I should use and how to chunk it.

Requirements:
- **OHLCV** data for minutely/hourly/daily/weekly/monthly candlesticks. No bid/ask.

Questions:
- How to choose `chunk_size` according to the above read scenarios? I couldn't find more info in the documentation.