Closed — ipsec closed this issue 2 years ago
Seems like you're just trying to load an incompatible checkpoint? That happens, for example, when you change the model size or try to switch to an environment with different obs/act spaces than the agent was trained on.
Hi Danijar,
I have double-checked the obs/act spaces and I found one odd behavior: the action space was changed (by dreamerv2) from Discrete(8) to Box(0., 1., (8,)). Is this right?
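(For context: as far as I can tell, this is expected — dreamerv2 applies a one-hot action wrapper to discrete spaces, so the agent sees Box(0., 1., (n,)) and emits a length-n vector instead of an integer index. A minimal self-contained sketch of the idea; the class and method names here are illustrative, not dreamerv2's actual wrapper:)

```python
import numpy as np

# Illustrative sketch of what a one-hot action wrapper does: the agent
# sees Box(0., 1., (n,)) and outputs a one-hot vector, which the wrapper
# converts back to the discrete index the underlying env expects.
class OneHotActionSketch:
    def __init__(self, n):
        self.n = n  # size of the original Discrete(n) space

    def agent_to_env(self, onehot):
        # Agent output: length-n float vector -> env input: int index.
        return int(np.argmax(onehot))

    def env_to_agent(self, index):
        # Discrete index -> one-hot float vector in [0, 1]^n.
        vec = np.zeros(self.n, dtype=np.float32)
        vec[index] = 1.0
        return vec

wrapper = OneHotActionSketch(8)
onehot = wrapper.env_to_agent(3)
print(onehot.shape)                   # (8,)
print(wrapper.agent_to_env(onehot))   # 3
```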
I'm training with this code:
```python
import gym
import dreamerv2.api as dv2

config = dv2.defaults.update({
    'logdir': '~/logdir/trader',
    'log_every': 300,
    'train_every': 10,
    'prefill': 1e3,
    'actor_ent': 3e-3,
    'loss_scales.kl': 1.0,
    'discount': 0.99,
    'eval_every': 300,
    'replay': {'capacity': 2e3, 'ongoing': False, 'minlen': 10, 'maxlen': 30, 'prioritize_ends': True},
    'dataset': {'batch': 10, 'length': 10},
}).parse_flags()

env = gym.make('gym_orderbook:Trader-v0')
dv2.train(env, config)
```
And I'm trying to load the variables file using the code from my first question.
The config.yaml I'm loading is logdir/config.yaml — is this right?
Maybe I'm missing some wrapper call?
Thanks in advance.
I found the problem.
```python
print('Create agent.')
agnt = agent.Agent(config, env.obs_space, env.act_space, step)
dataset = iter(replay.dataset(**config.dataset))
train_agent = common.CarryOverState(agnt.train)
train_agent(next(dataset))
```
This train_agent call is required before loading the variables file, so that the model variables exist (with the right shapes) before the checkpoint is restored into them.
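(For completeness, this mirrors what dreamerv2's own train.py does: run one training step to build the variables, then restore the checkpoint. A non-runnable sketch continuing the snippet above — it assumes the same surrounding variables, e.g. that `logdir` is a pathlib.Path:)

```python
# Sketch: one train step creates the variables, then the checkpoint
# can be restored into them (assumes logdir is a pathlib.Path).
train_agent = common.CarryOverState(agnt.train)
train_agent(next(dataset))  # builds variables with the right shapes
if (logdir / 'variables.pkl').exists():
    agnt.load(logdir / 'variables.pkl')  # now restoring succeeds
```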
Thanks
Hey, could you also please show how you run an entire evaluation episode, calling the get_action() function at each step? It is not clear to me what the input to that function should be. Also, I believe the world-model state should be updated and passed along at every step... Thank you!
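(A rough, non-runnable sketch of such a loop, assuming dreamerv2's `Agent.policy(obs, state, mode)` interface, which returns the policy outputs together with the recurrent state to pass back in at the next step. The batching and `is_last` handling here are illustrative and depend on the env wrappers in use:)

```python
# Sketch of an evaluation episode with the world-model state threaded through.
state = None  # recurrent world-model state, carried across steps
obs = env.reset()
done = False
while not done:
    # Add a batch dimension to each observation (illustrative).
    obs_batch = {k: np.asarray(v)[None] for k, v in obs.items()}
    outputs, state = agnt.policy(obs_batch, state, mode='eval')
    action = np.asarray(outputs['action'])[0]  # drop the batch dimension
    obs = env.step({'action': action})
    done = bool(obs['is_last'])
```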
Hi,
How do I create an agent, load the weights, and then call a prediction function to receive the action?
I'm trying to recreate one but many details are missing. I'm stuck on this error:
My agent code: