wassname / rl-portfolio-management

Attempting to replicate "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem" https://arxiv.org/abs/1706.10059 (and an OpenAI Gym environment)

'DataFrame' object has no attribute 'mean_market_returns' #5

Closed · bchirico closed 6 years ago

bchirico commented 6 years ago

First of all, thanks for sharing this code. I'm trying to run the keras-ddpg notebook and ran into this error:

During the agent.fit() call, the TrainEpisodeLoggerPortfolio callback evaluates mean_market_return from the environment's infos attribute:

df = pd.DataFrame(self.env.infos)
self.episode_metrics[episode] = dict(
    max_drawdown=MDD(df.portfolio_value),
    sharpe=sharpe(df.rate_of_return),
    accumulated_portfolio_value=df.portfolio_value.iloc[-1],
    mean_market_return=df.mean_market_returns.cumprod().iloc[-1],
    cash_bias=df.weights.apply(lambda x: x[0]).mean()
)

The problem seems to be that the environment's infos don't include mean_market_returns. The same thing happens for cash_bias, since self.env.infos doesn't have weights, hence the error. (If I'm correct, those two attributes should be set in the PortfolioSim class.)
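
In the meantime, a guarded version of the callback seems to avoid the crash. This is only a sketch: the class name is made up, MDD and sharpe here are naive stand-ins for the notebook's helpers, and the guards assume env.infos may simply lack those two keys.

import pandas as pd
from rl.callbacks import Callback  # keras-rl base class; the framework sets self.env

def MDD(x):
    """Naive stand-in for the notebook's max-drawdown helper."""
    return (1 - x / x.cummax()).max()

def sharpe(r):
    """Naive stand-in for the notebook's Sharpe-ratio helper."""
    return r.mean() / (r.std() + 1e-12)

class GuardedEpisodeLoggerPortfolio(Callback):
    """Hypothetical variant of TrainEpisodeLoggerPortfolio that skips
    metrics whose keys are missing from env.infos instead of raising."""

    def __init__(self):
        self.episode_metrics = {}

    def on_episode_end(self, episode, logs={}):
        df = pd.DataFrame(self.env.infos)
        metrics = dict(
            max_drawdown=MDD(df['portfolio_value']),
            sharpe=sharpe(df['rate_of_return']),
            accumulated_portfolio_value=df['portfolio_value'].iloc[-1],
        )
        # Only compute the two metrics whose columns the current env omits.
        if 'mean_market_returns' in df.columns:
            metrics['mean_market_return'] = df['mean_market_returns'].cumprod().iloc[-1]
        if 'weights' in df.columns:
            metrics['cash_bias'] = df['weights'].apply(lambda w: w[0]).mean()
        self.episode_metrics[episode] = metrics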

Am I doing something wrong, or is this really a problem? Here is the complete stack trace:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-20-982fd62f6724> in <module>()
      6                       TrainIntervalLoggerTQDMNotebook(),
      7                       TrainEpisodeLoggerPortfolio(10),
----> 8                       ModelIntervalCheckpoint(save_path, 10*1440, 1)
      9                     ]
     10                  )

/usr/local/lib/python3.4/dist-packages/rl/core.py in fit(self, env, nb_steps, action_repetition, callbacks, verbose, visualize, nb_max_start_steps, start_step_policy, log_interval, nb_max_episode_steps)
    160                         'nb_steps': self.step,
    161                     }
--> 162                     callbacks.on_episode_end(episode, episode_logs)
    163 
    164                     episode += 1

/usr/local/lib/python3.4/dist-packages/rl/callbacks.py in on_episode_end(self, episode, logs)
     55             # If not, fall back to `on_epoch_end` to be compatible with built-in Keras callbacks.
     56             if callable(getattr(callback, 'on_episode_end', None)):
---> 57                 callback.on_episode_end(episode, logs=logs)
     58             else:
     59                 callback.on_epoch_end(episode, logs=logs)

<ipython-input-19-9b518044e729> in on_episode_end(self, episode, logs)
     16             accumulated_portfolio_value=df.portfolio_value.iloc[-1],
     17             #mean_market_return=df.mean_market_returns.cumprod().iloc[-1],
---> 18             cash_bias=df.weights.apply(lambda x:x[0]).mean()
     19         )
     20 

/usr/local/lib/python3.4/dist-packages/pandas/core/generic.py in __getattr__(self, name)
   3079             if name in self._info_axis:
   3080                 return self[name]
-> 3081             return object.__getattribute__(self, name)
   3082 
   3083     def __setattr__(self, name, value):

AttributeError: 'DataFrame' object has no attribute 'weights'

1440/|/reward=-0.0000 info=(return: 0.9938, portfolio_value: 0.7632, cost: 0.0000, weights_std: 0.3301, reward: -0.0000, weights_mean: 0.1667, steps: 1441.0000, market_value: 1.2144, rate_of_return: -0.0026, date: 1445472000.0000, log_return: -0.0026, )  0%|| 1440/2000000.0 [03:30<83:43:20,  6.63it/s]

Thanks again, you've done a great job!!

wassname commented 6 years ago

Sorry about that. I'm afraid the keras-ddpg notebook is outdated, and I should remove it, since keras-rl isn't maintained anymore. You can either go back to an earlier commit or try the tensorforce notebook.
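
If you go the earlier-commit route, something like this is what I mean (the hash is a placeholder for whatever commit the notebook last worked at, and the notebook filename is assumed):

git log --oneline -- keras-ddpg.ipynb   # find a commit where the notebook still ran
git checkout <older-commit-hash>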

bchirico commented 6 years ago

@wassname thanks for the reply. I will try the notebook you mentioned.

Kudos to you.