notadamking / RLTrader

A cryptocurrency trading environment using deep reinforcement learning and OpenAI's gym
https://discord.gg/ZZ7BGWh
GNU General Public License v3.0

ValueError: cannot reshape array of size 40 into shape (1,39) #62

Closed · maxmatical closed this issue 5 years ago

maxmatical commented 5 years ago

When I try to run the model, I get the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
 in ()
----> 1 model.learn(total_timesteps=50000)

~/anaconda3/lib/python3.6/site-packages/stable_baselines/ppo2/ppo2.py in learn(self, total_timesteps, callback, seed, log_interval, tb_log_name, reset_num_timesteps)
    275             self._setup_learn(seed)
    276 
--> 277             runner = Runner(env=self.env, model=self, n_steps=self.n_steps, gamma=self.gamma, lam=self.lam)
    278             self.episode_reward = np.zeros((self.n_envs,))
    279 

~/anaconda3/lib/python3.6/site-packages/stable_baselines/ppo2/ppo2.py in __init__(self, env, model, n_steps, gamma, lam)
    397         :param lam: (float) Factor for trade-off of bias vs variance for Generalized Advantage Estimator
    398         """
--> 399         super().__init__(env=env, model=model, n_steps=n_steps)
    400         self.lam = lam
    401         self.gamma = gamma

~/anaconda3/lib/python3.6/site-packages/stable_baselines/common/runners.py in __init__(self, env, model, n_steps)
     17         self.batch_ob_shape = (n_env*n_steps,) + env.observation_space.shape
     18         self.obs = np.zeros((n_env,) + env.observation_space.shape, dtype=env.observation_space.dtype.name)
---> 19         self.obs[:] = env.reset()
     20         self.n_steps = n_steps
     21         self.states = model.initial_state

~/anaconda3/lib/python3.6/site-packages/stable_baselines/common/vec_env/dummy_vec_env.py in reset(self)
     43     def reset(self):
     44         for env_idx in range(self.num_envs):
---> 45             obs = self.envs[env_idx].reset()
     46             self._save_obs(env_idx, obs)
     47         return self._obs_from_buf()

~/Desktop/Python/RL_Trader/rl_trader_env_v2_lstm.py in reset(self)
    193         self.trades = []
    194 
--> 195         return self._next_observation()
    196 
    197     def step(self, action):

~/Desktop/Python/RL_Trader/rl_trader_env_v2_lstm.py in _next_observation(self)
     98         obs = np.insert(obs, len(obs), scaled_history[:, -1], axis=0)
     99 
--> 100         obs = np.reshape(obs.astype('float16'), self.obs_shape)
    101         obs[np.bitwise_not(np.isfinite(obs))] = 0
    102 

~/anaconda3/lib/python3.6/site-packages/numpy/core/fromnumeric.py in reshape(a, newshape, order)
    290            [5, 6]])
    291     """
--> 292     return _wrapfunc(a, 'reshape', newshape, order=order)
    293 
    294 

~/anaconda3/lib/python3.6/site-packages/numpy/core/fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
     54 def _wrapfunc(obj, method, *args, **kwds):
     55     try:
---> 56         return getattr(obj, method)(*args, **kwds)
     57 
     58     # An AttributeError occurs if the object does not have

ValueError: cannot reshape array of size 40 into shape (1,39)
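
For reference, np.reshape only succeeds when the product of the target shape equals the array's total element count, so a 40-element observation cannot be viewed as a (1, 39) array. A minimal standalone reproduction of the same failure, independent of the trading environment:

```python
import numpy as np

obs = np.zeros(40)      # 40 values, as reported in the traceback
obs_shape = (1, 39)     # shape the environment tries to reshape into

# 1 * 39 != 40, so this raises the same ValueError as above.
try:
    np.reshape(obs.astype('float16'), obs_shape)
except ValueError as err:
    print(err)          # cannot reshape array of size 40 into shape (1,39)
```
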
notadamking commented 5 years ago

Have you modified the codebase? If so, I would need to see your modifications to determine what is causing this error. Otherwise, you might be testing an agent/environment with an invalid hyper-parameter set.
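
One way to narrow this down (a debugging sketch, not code from this repository; the environment class and DataFrame names in the usage comment are placeholders) is to reset the environment directly and compare the observation it returns against its declared observation_space before wrapping it for PPO2:

```python
import numpy as np

def check_observation_shape(env):
    """Reset the env directly and verify that the returned observation
    matches the declared observation_space before it is wrapped in
    DummyVecEnv/PPO2, which is where the traceback above fails."""
    obs = np.asarray(env.reset())
    expected = env.observation_space.shape
    if obs.size != np.prod(expected):
        raise ValueError(
            f"reset() returned {obs.size} values, but observation_space "
            f"expects {np.prod(expected)} (shape {expected})"
        )
    return obs

# Usage -- replace with your own environment and price history:
# check_observation_shape(TradingEnv(df))
```
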

xerviami commented 5 years ago

Dear,

I had the same issue when I tried to load a different price history. The column names were not exactly the same as the ones used in the code, and it lacked a USD volume column.

It worked by using this kind of renaming and column addition:

df = df.rename(columns={'date': 'Date', 'open': 'Open', 'high': 'High', 'low': 'Low', 'close': 'Close', 'volume': 'Volume BTC'})
df['Volume USD'] = df['Volume BTC'] * df['Open']

Kind regards
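
Along the same lines, a small pre-flight check (the required column names mirror the rename above; the CSV path is a placeholder) can surface a missing or misnamed column before the environment builds its first observation:

```python
import pandas as pd

# Columns the observation is built from, per the rename in the comment above.
REQUIRED_COLUMNS = ['Date', 'Open', 'High', 'Low', 'Close', 'Volume BTC', 'Volume USD']

df = pd.read_csv('my_price_history.csv')  # placeholder path

missing = [col for col in REQUIRED_COLUMNS if col not in df.columns]
if missing:
    raise ValueError(f'Price history is missing columns: {missing}')
```
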