PaddlePaddle / PARL

A high-performance distributed training framework for Reinforcement Learning
https://parl.readthedocs.io/
Apache License 2.0

A2C save_inference_model error #1028

Closed hadoop2xu closed 1 year ago

hadoop2xu commented 1 year ago
[screenshot]

After training the A2C model, I want to save it for inference, but I get the following error:

[error screenshot]
rical730 commented 1 year ago

Hi, the input_shapes you set only uses the first dimension of env.observation_space.shape; you should use all of its dimensions.

In other words, input_shapes should in theory be [[None, 4, 84, 84]], but you set it to [[None, 4]].

Recommended change:

obs_shape = env.observation_space.shape
input_shapes = [[None, *obs_shape]]
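
For reference, the corrected shapes would then be passed to save_inference_model roughly as below. This is only a sketch: the save path, the 'float32' dtype, and the variable names are placeholders, assuming a signature of save_inference_model(save_path, input_shapes_list, input_dtypes_list).

obs_shape = env.observation_space.shape      # e.g. (4, 84, 84) for the Atari example
input_shapes = [[None, *obs_shape]]          # keep every observation dimension
input_dtypes = ['float32']                   # assumed dtype of the observation input
agent.save_inference_model('./inference_model', input_shapes, input_dtypes)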
rical730 commented 1 year ago

However, agent.save_inference_model uses paddle.jit.save under the hood, and a model saved with paddle.jit.save must implement a forward method that specifies the inference procedure. The model in the A2C example is designed for training rather than for inference, so it has no forward method; you need to define the inference procedure yourself.

Therefore, if you want to save the model for inference, in addition to fixing the input_shapes mentioned above, you also need to add a forward method to AtariModel. The simplest approach is to have it reuse the policy method.

For example, add a forward method to atari_model.py:

#   Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import paddle
import parl
import paddle.nn as nn
import paddle.nn.functional as F

class AtariModel(parl.Model):
    def __init__(self, act_dim):
        super(AtariModel, self).__init__()
        self.conv1 = nn.Conv2D(
            in_channels=4, out_channels=32, kernel_size=8, stride=4, padding=1)
        self.conv2 = nn.Conv2D(
            in_channels=32,
            out_channels=64,
            kernel_size=4,
            stride=2,
            padding=2)
        self.conv3 = nn.Conv2D(
            in_channels=64,
            out_channels=64,
            kernel_size=3,
            stride=1,
            padding=0)

        self.flatten = nn.Flatten()

        # Need to calc the size of the in_features according to the input image.
        # The default size of the input image is 84 * 84
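        # For an 84x84 input: conv1 (k=8, s=4, p=1) gives 20x20,
        # conv2 (k=4, s=2, p=2) gives 11x11, conv3 (k=3, s=1, p=0) gives 9x9,
        # so in_features = 64 channels * 9 * 9.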
        self.fc = nn.Linear(in_features=64 * 9 * 9, out_features=512)

        self.policy_fc = nn.Linear(in_features=512, out_features=act_dim)
        self.value_fc = nn.Linear(in_features=512, out_features=1)

    def policy(self, obs):
        """
        Args:
            obs: A float32 tensor array of shape [B, C, H, W]

        Returns:
            policy_logits: B * ACT_DIM
        """
        obs = obs / 255.0
        conv1 = F.relu(self.conv1(obs))
        conv2 = F.relu(self.conv2(conv1))
        conv3 = F.relu(self.conv3(conv2))
        flatten = self.flatten(conv3)
        fc_output = F.relu(self.fc(flatten))
        policy_logits = self.policy_fc(fc_output)

        return policy_logits

    def value(self, obs):
        """
        Args:
            obs: A float32 tensor of shape [B, C, H, W]

        Returns:
            values: B
        """
        obs = obs / 255.0
        conv1 = F.relu(self.conv1(obs))
        conv2 = F.relu(self.conv2(conv1))
        conv3 = F.relu(self.conv3(conv2))
        flatten = self.flatten(conv3)
        fc_output = F.relu(self.fc(flatten))
        values = self.value_fc(fc_output)
        values = paddle.squeeze(values, axis=1)
        return values

    def policy_and_value(self, obs):
        """
        Args:
            obs: A tensor array of shape [B, C, H, W]

        Returns:
            policy_logits: B * ACT_DIM
            values: B
        """
        obs = obs / 255.0
        conv1 = F.relu(self.conv1(obs))
        conv2 = F.relu(self.conv2(conv1))
        conv3 = F.relu(self.conv3(conv2))
        flatten = self.flatten(conv3)
        fc_output = F.relu(self.fc(flatten))

        policy_logits = self.policy_fc(fc_output)

        values = self.value_fc(fc_output)
        values = paddle.squeeze(values, axis=1)
        return policy_logits, values

    # New forward method: specifies the inference procedure to be saved
    def forward(self, obs):
        return self.policy(obs)
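
Once the model has been saved this way, it can be loaded back for inference with paddle.jit.load. A minimal sketch (the path prefix './inference_model' and the dummy observation are placeholders, not taken from your script):

import paddle

# Load the saved inference program; the forward() defined above is what gets executed.
loaded_model = paddle.jit.load('./inference_model')
loaded_model.eval()

# Dummy observation batch with the Atari shape [B, C, H, W].
obs = paddle.rand([1, 4, 84, 84], dtype='float32')
policy_logits = loaded_model(obs)
action = paddle.argmax(policy_logits, axis=-1)   # greedy action selection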
hadoop2xu commented 1 year ago

Thanks!