notadamking / RLTrader

A cryptocurrency trading environment using deep reinforcement learning and OpenAI's gym
https://discord.gg/ZZ7BGWh
GNU General Public License v3.0
1.73k stars 537 forks

How to use it? #135

Open hardy110 opened 5 years ago

hardy110 commented 5 years ago

First install it, then what? `python ./cli.py optimize` or `python ./optimize.py`? Then `python ./cli.py train`? Then `python ./cli.py test`?

MichaelQuaMan commented 2 years ago

I've been heavily using RLTrader for weeks now.

The essence of what you do is: optimize, train, test, rinse and repeat.

Tips:

- Enrich your dataset with technical indicators before training (the indicator functions below come from the `ta` library):

```python
from ta.momentum import rsi
from ta.trend import ema_indicator, macd, macd_diff, macd_signal

dataset = df.copy(deep=True)
close = dataset["Close"]

# Exponential moving average (EMA)
dataset["ema"] = ema_indicator(close=close, window=10)

# Momentum and trend indicators
dataset["rsi"] = rsi(close=close)
dataset["macd"] = macd(close=close)
dataset["macd_diff"] = macd_diff(close=close)
dataset["macd_signal"] = macd_signal(close=close)

# Keep the first 5000 rows, eyeball the series, and save
clipped_data = dataset[0:5000]
clipped_data["Close"].plot()
clipped_data.to_csv("GBPUSD_M30_0_5000.csv")
```
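If you're curious what the EMA column actually contains: it's essentially pandas' exponentially weighted mean, where recent prices get exponentially more weight. A minimal, self-contained sketch on a toy `close` series (an approximation of `ema_indicator`'s behavior; some `ta` versions also mask the first `window - 1` rows with NaN):

```python
import pandas as pd

close = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])

# EMA with span == window; adjust=False gives the recursive form
ema = close.ewm(span=10, adjust=False).mean()

# The EMA starts at the first price and lags behind a rising series
first, last = ema.iloc[0], ema.iloc[-1]
```

The lag is the point: the EMA smooths the noise out of `Close` so the agent sees trend rather than tick-to-tick jitter.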


- If training consumes all your GPU memory and crashes your machine, turn the GPU off:

```python
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
```
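One gotcha with this: the environment variable only takes effect if it's set before TensorFlow is first imported, so put it at the very top of your entry script:

```python
# Set this before any `import tensorflow` line, or it is silently ignored
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# ...only import tensorflow / RLTrader modules after this point
hidden_devices = os.environ["CUDA_VISIBLE_DEVICES"]
```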

- If the standard `optimize.py` uses all your CPU and memory, create your own version and limit the number of CPUs it uses in its pool:

```python
import multiprocessing

# Default: one worker per core
n_processes = multiprocessing.cpu_count()
# Cap it instead:
n_processes = 6
```

Or bypass the multiprocessing entirely and write your own `optimize` script:

```python
file_name = "EURUSD_D1.csv"
input_data_path = f"{input_path}/{file_name}"
date_format = ProviderDateFormat.DATETIME_HOUR_24
data_provider = StaticDataProvider(
    date_format=date_format,
    csv_data_path=input_data_path,
    data_columns=data_columns,
)

params = {
    "tensorboard_path": tensorboard_path,
    "data_provider": data_provider,
    "reward_strategy": CustomRewards,
    "show_debug": True,
}

if __name__ == "__main__":
    for i in range(1):
        trader = RLTrader(**params)
        trader.optimize()
        trader.train(render_test_env=False, render_report=False, save_report=True)
```

- Learn some basics about using `optuna`

E.g., show all the studies saved to SQLite:

```shell
optuna studies --storage sqlite:///data/params.db
```

Or delete a study:

```shell
optuna delete-study --study-name PPO2_MlpLnLstmPolicy_WeightedUnrealizedProfit --storage sqlite:///data/params.db
```

- Monitor your `optuna` study progress in a `Jupyter` notebook:

```python
import optuna

db_path = "sqlite:///data/params.db"
study_name = "PPO2MlpLnLstmPolicyCustomRewards"
direction = "maximize"

optuna_study = optuna.create_study(
    study_name=study_name, storage=db_path, direction=direction, load_if_exists=True
)
len(optuna_study.get_trials())

# Pull the trials into a DataFrame and tidy the column names
trials_df = optuna_study.trials_dataframe()
trials_df.rename(
    {
        "params_cliprange": "cliprange",
        "params_ent_coef": "ent_coef",
        "params_gamma": "gamma",
        "params_lam": "lam",
        "params_learning_rate": "lrn_rate",
        "params_n_steps": "n_steps",
        "params_noptepochs": "noptepochs",
        "system_attrs_fail_reason": "fail",
    },
    axis=1,
    inplace=True,
)
trials_df.drop("datetime_complete", axis=1, inplace=True)

# Top five trials by objective value
trials_df.sort_values(by="value", ascending=False).head(5)

optuna_study.best_value

trials_df[["value"]].plot()

# Optuna's built-in plots
fig = optuna.visualization.plot_optimization_history(optuna_study)
fig.show()

optuna.visualization.plot_slice(optuna_study)
```
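The `sort_values` pattern above works on any DataFrame, so you can sanity-check it without a real study. A toy stand-in with made-up trial values:

```python
import pandas as pd

# Made-up stand-in for optuna's trials_dataframe() output (hypothetical values)
trials_df = pd.DataFrame(
    {
        "number": [0, 1, 2, 3],
        "value": [0.12, 0.91, 0.47, 0.73],
    }
)

# Best trials first, mirroring the snippet above
top = trials_df.sort_values(by="value", ascending=False).head(2)
best_value = trials_df["value"].max()
```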

- Add logging statements to get insights, and/or add breakpoints and step through the code in an IDE like `PyCharm`.
- If the logs start getting too large, use rolling logs and compress them:

```python
from concurrent_log_handler import ConcurrentRotatingFileHandler

max_bytes = 10485760  # rotate after ~10 MiB
backup_count = 20     # keep up to 20 rotated files
use_gzip = True       # gzip the rotated files

fh = ConcurrentRotatingFileHandler(
    info_log, maxBytes=max_bytes, backupCount=backup_count, use_gzip=use_gzip
)
```
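If you'd rather not add the third-party `concurrent-log-handler` dependency (the package that provides `ConcurrentRotatingFileHandler`), the standard library's `RotatingFileHandler` gives you the same size-based rotation, just without gzip compression or multi-process safety. A small sketch using a temp-dir log path of my own choosing:

```python
import logging
import logging.handlers
import os
import tempfile

# Hypothetical log location for this example
log_path = os.path.join(tempfile.gettempdir(), "rltrader_info.log")

logger = logging.getLogger("rltrader")
logger.setLevel(logging.INFO)

# Rotate after ~10 MiB, keep up to 20 old files
fh = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=10485760, backupCount=20
)
logger.addHandler(fh)

logger.info("optimize step finished")

with open(log_path) as f:
    contents = f.read()
```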