Ceruleanacg / Personae

📈 Personae is a repo of implementations and environments for Deep Reinforcement Learning & Supervised Learning applied to Quantitative Trading.

About the action space #4

Closed · ewanlee closed this issue 6 years ago

ewanlee commented 6 years ago

Shouldn't the action space be 3 ** self.codes_count rather than self.codes_count * 3?
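
For concreteness, here is how the two counts differ (a quick sketch with an assumed codes_count, not taken from the repo; 3 is the number of operations, buy/sell/hold):

```python
ACTIONS = ("buy", "sell", "hold")
codes_count = 4  # assumed number of stock codes, for illustration only

joint_space = len(ACTIONS) ** codes_count  # 3 ** 4 = 81 joint actions over all stocks
flat_space = codes_count * len(ACTIONS)    # 4 * 3  = 12 (stock, operation) pairs
print(joint_space, flat_space)             # 81 12
```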

Ceruleanacg commented 6 years ago

For DDPG, the action space is self.codes_count * 3 because DDPG is implemented here for a continuous action space: for each stock code, the action is a value in [-1, 1] per operation, representing the possibility of taking that operation.
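
As a rough sketch of that interpretation (the shapes here are assumptions for illustration, not the repo's actual actor code):

```python
import numpy as np

# Sketch only: a DDPG-style actor emitting one continuous score in
# [-1, 1] per (stock, operation) pair via tanh. Shapes are assumed.
codes_count, n_ops = 4, 3                       # n_ops: buy / sell / hold
rng = np.random.default_rng(0)

logits = rng.normal(size=(codes_count, n_ops))  # stand-in for the actor network output
action = np.tanh(logits)                        # values squashed into [-1, 1]
chosen_ops = action.argmax(axis=1)              # one interpretation: best operation per stock
```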

For PolicyGradient, the action space is still self.codes_count * 3, because in fact we can only take one action per state. The same should hold for DDPG, so you may find a logical problem there: DDPG tries to take self.codes_count actions in one state, which is not reasonable.

So I implemented the forward_v2 method for testing to avoid this problem.
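
A minimal sketch of what one-action-per-state selection could look like, assuming a softmax over the flattened (stock, operation) pairs; the actual forward_v2 may differ:

```python
import numpy as np

# Sketch, not the repo's forward_v2: exactly one action per state,
# sampled over all (stock, operation) pairs at once.
def single_action(logits, rng=np.random.default_rng(0)):
    """logits: shape (codes_count * 3,). Returns (stock_index, op_index)."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # softmax over codes_count * 3 entries
    flat_index = int(rng.choice(probs.size, p=probs))
    return divmod(flat_index, 3)               # 3 ops: buy, sell, hold
```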

Thank you very much.

ewanlee commented 6 years ago

@Ceruleanacg Your explanation is very detailed, thank you very much. So you define an action as an operation (buy, sell or hold) on a stock.

In the _get_next_info method, you compare the number of operations performed by the trader in the current state, self.trader.action_times, with the number of stocks, self.code_count. I guess you want to jump to the next state only after all stocks have been operated on? If the operations on all stocks are not complete, the current state will not change. But in the training phase of PolicyGradient, you use a greedy strategy (the use_prob parameter is False) to interact with the market. If the current state is unchanged, the action taken is unchanged, so self.code_count actions are performed on the same stock with the same operation. Isn't that unreasonable?

Ceruleanacg commented 6 years ago

In the _get_next_info method, two factors influence state_next. The first is current_date, which is updated by comparing self.trader.action_times with self.code_count in order to advance to the next date. The second is in the _get_scaled_stock_data_as_state method, which inserts self.trader.cash and self.trader.holding_value into state_next.

So for PolicyGradient, which uses forward_v2, every action taken will influence the next state.
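
A rough sketch of that flow (the names come from this thread, but the logic is guessed, not the repo's actual implementation):

```python
from types import SimpleNamespace

def next_info(trader, current_date, dates, code_count, scale_state):
    # Factor 1 (guessed): advance the date only after the trader has
    # acted on every stock, then reset the per-date action counter.
    if trader.action_times == code_count:
        current_date = dates[dates.index(current_date) + 1]
        trader.action_times = 0
    # Factor 2: the state embeds cash and holding value, so every action
    # changes the next state even when the date does not move.
    state_next = scale_state(current_date, trader.cash, trader.holding_value)
    return state_next, current_date

# Tiny usage stub with stand-in objects.
trader = SimpleNamespace(action_times=3, cash=100_000.0, holding_value=0.0)
state, date = next_info(
    trader, "2018-01-02", ["2018-01-02", "2018-01-03"], code_count=3,
    scale_state=lambda d, cash, hold: (d, cash, hold),  # stand-in scaler
)
```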

ewanlee commented 6 years ago

I am sorry, I did not read the code carefully. I have two final questions:

  1. What is your purpose in comparing self.trader.action_times and self.code_count?
  2. Won't PolicyGradient prematurely converge to a local optimum if use_prob is always False?

Ceruleanacg commented 6 years ago

For question 1: if self.trader.action_times == self.code_count, it means that self.current_date needs to be updated in order to fetch the stock data for the next date.

For question 2: yes, it will converge to a local optimum. You can also set it to True if you want, but I found that PolicyGradient performs very badly when it is set to True :)
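
In code, the distinction looks roughly like this (using the use_prob semantics described earlier in this thread; the sketch is not the repo's implementation):

```python
import numpy as np

# use_prob=True  -> sample from the policy distribution (keeps exploring),
# use_prob=False -> always take the argmax (greedy; risks a local optimum).
def choose_action(probs, use_prob, rng=np.random.default_rng(0)):
    if use_prob:
        return int(rng.choice(len(probs), p=probs))  # stochastic
    return int(np.argmax(probs))                     # greedy

probs = np.array([0.2, 0.5, 0.3])
choose_action(probs, use_prob=True)   # varies across calls
choose_action(probs, use_prob=False)  # always 1
```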

If you have further questions, you can add me on WeChat (17392810723); we could learn from each other.

ewanlee commented 6 years ago

Alright, thank you very much 👍