Closed: djoffrey closed this issue 6 years ago
Hello @Kismuz, I changed the code in order to be able to use daily data frames instead of intraday data frames. I can see that episode.data contains a correct sample of my daily dataframe, and this sample is accepted; however, training remains at global step 0 and keeps getting new random samples, so the training never starts. Any idea how I can solve this?
@joaosalvado10 - any details? Which script are you running? Can you post shots of the terminal and tensorboard output?
Upd: I looked at the file. It may be irrelevant to this particular issue, but the file should be properly sorted, while this one is not: records 1-29 are from the 1970s, and from record 30 onward it jumps back to the 1800s and goes up from there. BTgymDataFeed cannot correctly sample unsorted files. Should add this to docs.
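As a quick workaround for an unsorted file, the CSV can be re-sorted by its datetime index with pandas before feeding it to BTgymDataFeed. This is only a sketch; the column names below are assumptions about a typical OHLCV file, not the exact layout of the file discussed above:

```python
import io

import pandas as pd

# Hypothetical unsorted OHLCV data (column names are assumptions):
raw = io.StringIO(
    "datetime,open,high,low,close,volume\n"
    "1970-01-02 00:00:00,1.0,1.1,0.9,1.05,100\n"
    "1800-01-01 00:00:00,0.5,0.6,0.4,0.55,50\n"
    "1970-01-01 00:00:00,0.9,1.0,0.8,0.95,80\n"
)

df = pd.read_csv(raw, parse_dates=["datetime"], index_col="datetime")
df = df.sort_index()  # chronological order, as the feed expects

# Re-save so the sorted file can be passed to the data feed:
df.to_csv("sorted_data.csv")
```

After `sort_index()`, `df.index.is_monotonic_increasing` is `True`, which is the property the sampler relies on.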
I think I solved the problem; I am now trying to change the state given to the agent by including features available in the CSV file.
Hi Kismuz,
I have found https://github.com/bartosh/backtrader/tree/ccxt which has support for a wide range of crypto exchanges. I would be interested to know if btgym supports the same exchanges as backtrader. Thank you for your work!
@kazi308, basically BTGym is an RL wrapper around backtrader and has been designed to inherit support for cc, live trading etc. by default, though I'm still busy developing the RL framework itself and have not reached this area so far. Another promising direction is utilising existing domain knowledge, in the form of already developed traditional trading strategies and built-in backtrader indicators, as additional or primary state inputs to the RL agent.
@Kismuz I did a little dirty hack to the datafeed to make cryptocurrency data work, and I think we need a little abstraction of the data input mechanism; for example, turning function read_csv into a more generic function load_data would be nicer, I think.
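The proposed abstraction could look something like the sketch below: a generic `load_data()` hook that a CSV reader and an in-memory DataFrame source both implement. All class and method names here are hypothetical illustrations, not actual btgym API:

```python
import pandas as pd


class BaseDataFeed:
    """Hypothetical abstraction: a generic load_data() hook instead of a
    hard-coded read_csv() call inside the dataset class."""

    def load_data(self) -> pd.DataFrame:
        raise NotImplementedError


class CSVDataFeed(BaseDataFeed):
    """The current csv-file behaviour, expressed through the hook."""

    def __init__(self, filename):
        self.filename = filename

    def load_data(self) -> pd.DataFrame:
        return pd.read_csv(
            self.filename, parse_dates=["datetime"], index_col="datetime"
        ).sort_index()


class DataFrameFeed(BaseDataFeed):
    """Accepts an in-memory frame, e.g. fetched from a crypto exchange API."""

    def __init__(self, df):
        self.df = df

    def load_data(self) -> pd.DataFrame:
        return self.df.sort_index()
```

With this split, adding a new source (exchange API, database, parquet) only means writing one more `load_data()` implementation rather than patching the csv parsing code.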
@djoffrey ,
I did a little dirty hack to datafeed to make cryptocurrency data work
Can you share details of what data you use and mods have been done?
we need a little abstraction of data input mechanism
You mean adding a source of input other than csv, or more of a change to the parsing approach for the existing data source? A general data pipe rework is my priority at the moment, though I'm focused on enabling data hierarchy from the meta-learning point of view. General changes have been pushed and some refining is in progress; any suggestions and contributions are welcome.
@Kismuz
Can you share details of what data you use and mods have been done?
My dataset is a pandas.DataFrame object with a datetime index and OHLCV columns; I manually replace the "data" attribute of BTGymDataSet and the related member vars, etc.
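A minimal sketch of that kind of hack: build an OHLCV frame with a datetime index and assign it where the dataset expects the parsed csv frame. The attribute name `data` comes from the comment above; the frequency, column names, and the commented-out dataset construction are assumptions for illustration only:

```python
import numpy as np
import pandas as pd

# Synthetic 1-minute OHLCV data with a datetime index (all values made up):
idx = pd.date_range("2018-01-01", periods=100, freq="1min")
prices = 100 + np.cumsum(np.random.randn(100))
df = pd.DataFrame(
    {
        "open": prices,
        "high": prices + 0.5,
        "low": prices - 0.5,
        "close": prices,
        "volume": np.random.randint(1, 1000, size=100),
    },
    index=idx,
)
df.index.name = "datetime"

# Hypothetical usage - replace the parsed csv frame in place:
# dataset = BTgymDataset(filename=None, ...)
# dataset.data = df
```

This bypasses the csv parsing path entirely, which is why a proper `load_data`-style hook would be cleaner than overwriting member vars by hand.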
You mean adding other than csv source of input or more changing parsing approach to existing data source?
Yes, I think a general data pipeline is quite important.
Hi @joaosalvado10, could you please give a simple example showing how to use daily data with btgym? I cannot figure it out yet. Thanks!
@djoffrey,
Short:
Expanded: