
RM ICRA robot chassis detection #111


A highlight of this year's ICRA competition is the newly added sentry post: a monocular camera detects the robots on the field and feeds a minimap. Traditional object detection algorithms predict an axis-aligned rectangle that encloses the target (as shown below), which is not well suited to robot localization because the error is too large.

[image: bounding-box detection example]

Screenshot from the UC Berkeley RoboMaster team

What really fits the requirement is 6D pose estimation of the target. The best-known approaches are SSD-6D and YOLO-6D (they are, after all, the fastest in this space), but that speed assumes a capable GPU. I tried running yolo-tiny on my feeble laptop's CPU and it took 2 s per frame, while the reported GPU figure is 145 fps; the gap is just too big.

[image: 6D object detection example]

The image above shows what 6D object detection looks like. The rough idea is that the grid cell containing the object's center is responsible for predicting the eight corner points of the object's 3D bounding box. My use case, however, only needs the four corners where the chassis meets the ground, so the model can be simplified further.

Also, the YOLO network is far too large: it not only detects objects but also classifies them, supporting up to thousands of categories. I don't need any of that, since I only have one class, so I decided to design my own network.

Network structure

The network I designed is shown below:

[image: network architecture diagram]

It is a fully convolutional network (FCN), so at inference time it accepts inputs of arbitrary resolution.

For training I resize the input to 416×416 with 3 color channels, so the input tensor is 3×416×416. Each stage consists of a 3×3 convolution with stride 1 and padding 1 followed by a 2×2 convolution with stride 2, which together perform the convolution and downsampling. Six such stages shrink the feature map from 416 to 208, 104, 52, 26, 13 and finally 6, giving a 48×6×6 output.

The last layer is the prediction layer. I dropped the class outputs and added coordinate outputs instead: with 4 corner points there are 8 coordinate outputs, plus one confidence value giving the probability that the current grid cell is the center of the chassis footprint, for 9 output channels in total.

As the analysis above shows, the image is divided into a 6×6 grid. Each cell decides whether it contains the center of the chassis footprint and predicts the positions of the four corner points; the corners predicted by the cells with high confidence are taken as the final result.

Note that the final layer's outputs are squashed into the range 0-1 by a sigmoid, so there is a conversion formula between the predicted corner coordinates and the real image coordinates. The conversion I designed is shown below:

[image: coordinate conversion formula]
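Since the figure isn't reproduced here, the conversion can be read back from the preprocess and get_box code further down. For the grid cell in column j and row i, with box_scale = 12, it is approximately:

x_img = ((x_pred - 0.5) * box_scale + j) * img_scale
y_img = ((y_pred - 0.5) * box_scale + i) * img_scale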

Here img_scale is the factor by which the convolutions shrink the image, which in my case is 416/6 ≈ 69.33.

Loss function

The loss has two parts. First, the image's labels determine which grid cell contains the chassis center. If a cell is not the chassis center, its loss is the square of its confidence, i.e. its confidence should be pushed as low as possible. If it is the chassis center, its loss is (1 - confidence)^2 plus the sum of squared distances between the four predicted corners and the ground-truth corners, i.e. its confidence should be as close to 1 as possible and its corners as close to the true values as possible.

The formula is as follows:

[image: loss function formula]
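The formula image isn't reproduced here either; written out from the description above, together with the weighting factors that appear in the loss code below, it is roughly:

L = sum over cells without a chassis center of  λ_noobj * c²
  + sum over the cell with a chassis center of  λ_obj * (1 - c)² + λ_coord * Σ_k [ (x_k - x̂_k)² + (y_k - ŷ_k)² ]

where c is the predicted confidence, (x̂_k, ŷ_k) are the labelled offsets of the four corners (k = 1..4), and the implementation below uses λ_noobj = 100, λ_obj = 10000 and λ_coord = 20000.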

To help the loss converge quickly, I also added a BatchNorm layer to every network stage to normalize its outputs.

Implementation

I built the network in PyTorch. I had originally planned to use Baidu's PaddlePaddle (free GPU time, after all), but after fiddling with it I found it awkward to use, especially for custom loss functions. Going back to PyTorch, I realized just how nice it is.

Defining the network structure

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvPollNet(nn.Module):
    # one stage: 3x3 convolution, strided 2x2 convolution (downsampling), BatchNorm, ReLU
    def __init__(self, in_channel, out_channel, num_filters, padding):
        super(ConvPollNet, self).__init__()
        # a 2x2 convolution with stride 2 takes the place of a pooling layer
        self.pool = nn.Conv2d(out_channel, out_channel, 2, stride=2, bias=False)
        # num_filters is the kernel size (3); padding=1 keeps the spatial size unchanged
        self.conv = nn.Conv2d(in_channel, out_channel, num_filters, padding=padding, bias=False)
        self.batch_norm = nn.BatchNorm2d(out_channel, momentum=0.9, eps=1e-5)

    def forward(self, x):
        out = self.conv(x)
        out = self.pool(out)
        out = self.batch_norm(out)
        out = F.relu(out)
        return out

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # six conv+downsample stages: 416 -> 208 -> 104 -> 52 -> 26 -> 13 -> 6
        self.conv1 = ConvPollNet(3, 1, 3, 1)
        self.conv2 = ConvPollNet(1, 3, 3, 1)
        self.conv3 = ConvPollNet(3, 6, 3, 1)
        self.conv4 = ConvPollNet(6, 12, 3, 1)
        self.conv5 = ConvPollNet(12, 24, 3, 1)
        self.conv6 = ConvPollNet(24, 48, 3, 1)
        # prediction layer: 8 corner coordinates + 1 confidence = 9 channels
        self.conv7 = nn.Conv2d(48, 9, 3, padding=1, bias=False)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv5(x)
        x = self.conv6(x)
        # sigmoid squashes every output channel into [0, 1]
        x = torch.sigmoid(self.conv7(x))
        return x
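As a quick sanity check of the shapes (not from the original post, just a sketch), a single 416×416 RGB image should come out as a 9-channel 6×6 grid:

dummy = torch.randn(1, 3, 416, 416)   # one 416x416 RGB image
print(Net()(dummy).shape)             # expected: torch.Size([1, 9, 6, 6])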

Defining the dataset loader

import os
import json
import math
import numpy as np
from PIL import Image
import torch
from torch.utils.data import Dataset

max_boxs = 36  # 6x6 grid cells, one label slot per cell

class ListDataset(Dataset):
    def __init__(self, path, mode='train', img_size=416):
        self.json_files = os.listdir(path+'/label/')
        self.path = path
        self.mode = mode
        self.img_size = img_size

    def preprocess(self, img, bbox_labels, input_height):
        sample_labels = np.zeros((max_boxs, 4, 2), dtype=np.float32)
        for index in range(len(bbox_labels)):
            box_corners = np.array(bbox_labels[index]['points'])
            # 0.014423 ≈ 6/416: map pixel coordinates onto the 6x6 grid
            box_corners *= 0.014423
            box_center = [0, 0]
            box_center[0] = (box_corners[0][0] + box_corners[1][0] + box_corners[2][0] + box_corners[3][0])/4
            box_center[1] = (box_corners[0][1] + box_corners[1][1] + box_corners[2][1] + box_corners[3][1])/4
            # the grid cell containing the footprint center is responsible for this box
            box_center[0] = math.floor(box_center[0])
            box_center[1] = math.floor(box_center[1])
            # corner offsets relative to that cell, scaled by box_scale = 12 and
            # shifted by 0.5 so they land in [0, 1] like the sigmoid outputs
            box_corners[:, 0] -= box_center[0]
            box_corners[:, 1] -= box_center[1]
            box_corners /= 12.0
            box_corners += 0.5
            num = box_center[0] + box_center[1] * 6  # flattened cell index
            sample_labels[num] = box_corners
        img = img.resize((self.img_size, self.img_size), Image.BILINEAR)
        img = np.array(img).astype('float32')
        img -= [127.5, 127.5, 127.5]
        img = img.transpose((2, 0, 1))  # HWC to CHW
        img *= 0.007843  # 1/127.5: normalize to roughly [-1, 1]
        return img, sample_labels

    def __getitem__(self, index):
        file_path = self.json_files[index]
        with open(self.path + 'label/' + file_path, 'r') as load_f:
            load_dict = json.load(load_f)
        img = Image.open(os.path.join(self.path+'img/', load_dict['imagePath']))
        img, sample_labels = self.preprocess(img, load_dict['shapes'], load_dict['imageHeight'])
        if self.mode == 'train':
            return img, sample_labels
        # test mode: return the image file name as well, so the prediction code
        # further down can draw the detected corners on the original image
        return img, load_dict['imagePath']

    def __len__(self):
        return len(self.json_files)

dataset = ListDataset('./', mode='train')
dataloader = torch.utils.data.DataLoader(dataset, batch_size=10, shuffle=True)
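For reference, the loader assumes labelme-style annotation files under ./label/. A minimal sketch of what json.load is expected to return for one file (field names taken from the code above, values invented for illustration):

load_dict = {
    'imagePath': 'frame_0001.jpg',      # image file name under ./img/
    'imageHeight': 416,                 # passed to preprocess() but not otherwise used
    'shapes': [                         # one entry per chassis in the image
        {'points': [[120.0, 200.0],     # the four ground-contact corners, in pixels
                    [180.0, 205.0],
                    [175.0, 260.0],
                    [115.0, 255.0]]},
    ],
}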

Defining a plotting helper for visualization

import matplotlib.pyplot as plt

all_train_iter = 0
all_train_iters = []
all_train_costs = []

def draw_train_process(title, iters, costs, label_cost):
    plt.title(title, fontsize=24)
    plt.xlabel("iter", fontsize=20)
    plt.ylabel("cost", fontsize=20)
    plt.plot(iters, costs, color='red', label=label_cost)
    plt.legend()
    plt.grid()
    plt.show()

Defining the loss function and optimizer

import torch.optim as optim

def loss(input, bbox):
    # input: network output of shape (n, 9, h, w); bbox: labels of shape (n, h*w, 4, 2)
    # written with explicit Python loops for clarity rather than speed
    n, c, h, w = input.shape
    loss_ = torch.zeros(1)
    for batch in range(n):
        for j in range(w):
            for i in range(h):
                index = j + i * w  # flattened cell index, matches preprocess()
                if bbox[batch][index][0][0] == 0:
                    # empty cell (label slot is all zeros): push confidence towards 0
                    tmp_loss = torch.mul(torch.pow(input[batch][8][i][j], 2), 100)
                    loss_ = torch.add(loss_, tmp_loss)
                else:
                    # cell containing a chassis center: push confidence towards 1 ...
                    tmp_loss = torch.mul(torch.pow(1 - input[batch][8][i][j], 2), 10000)
                    loss_ = torch.add(loss_, tmp_loss)
                    # ... and pull the 8 predicted corner coordinates towards the labels
                    for k in range(4):
                        tmp_loss = torch.mul(torch.pow(input[batch][2*k][i][j] - bbox[batch][index][k][0], 2), 20000)
                        loss_ = torch.add(loss_, tmp_loss)
                        tmp_loss = torch.mul(torch.pow(input[batch][2*k+1][i][j] - bbox[batch][index][k][1], 2), 20000)
                        loss_ = torch.add(loss_, tmp_loss)
    return loss_

net = Net()
optimizer = optim.Adam(net.parameters(), lr=0.001)
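As a quick check that the loss wiring works (again just a sketch, not from the original post), feeding random network output against an all-empty label tensor should produce a positive value coming only from the confidence terms:

dummy_out = net(torch.randn(2, 3, 416, 416))   # shape (2, 9, 6, 6)
dummy_lbl = torch.zeros(2, 36, 4, 2)           # every grid cell marked as empty
print(loss(dummy_out, dummy_lbl))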

Training

for epoch in range(40):
    for i, data in enumerate(dataloader, 0):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs)
        cost = loss(outputs, labels)
        cost.backward()
        optimizer.step()
        all_train_iter = all_train_iter + 10  # batch_size is 10
        all_train_iters.append(all_train_iter)
        all_train_costs.append(cost.item())

        if i % 6 == 0:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, cost.item()))

Plotting the loss curve and saving the model

draw_train_process("training", all_train_iters, all_train_costs, "training cost")
print('Finished Training')

torch.save(net, './6d_car.pt')
print('Saved Model')

My loss curve is shown below; it came out quite nice:

[image: training loss curve]

Running inference with the model

from PIL import ImageDraw

# torch.save() above stored the whole model object, so torch.load() returns it directly
model = torch.load('./6d_car.pt')
model.eval()

# use the dataset in test mode so each batch is (image tensor, image file name)
test_dataset = ListDataset('./', mode='test')
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, shuffle=True)
images, img_path = next(iter(test_loader))

def get_box(output, img_scale):
    # output: model prediction of shape (1, 9, h, w); img_scale maps grid cells back to pixels
    output = output[0]
    output = output.numpy()
    c, h, w = output.shape
    boxs = []
    box_scale = 12.0  # must match the scaling used in preprocess()
    for j in range(w):
        for i in range(h):
            # keep only cells that are confident they contain a chassis center
            if output[8][i][j] > 0.9:
                box = []
                for k in range(4):
                    # invert the label encoding: sigmoid output -> grid coordinates -> pixels
                    x = ((output[2*k][i][j] - 0.5) * box_scale + j) * img_scale
                    y = ((output[2*k+1][i][j] - 0.5) * box_scale + i) * img_scale
                    box.append([x, y])
                boxs.append(box)
    return boxs

def imshow(imgs, boxs):
    # imgs: batch of image file names; draw each predicted footprint as a red quadrilateral
    img = Image.open(os.path.join('./img/', imgs[0]))
    draw = ImageDraw.Draw(img)
    for box in boxs:
        for k in range(4):
            p1, p2 = box[k], box[(k + 1) % 4]
            draw.line((p1[0], p1[1], p2[0], p2[1]), 'red')
    plt.imshow(np.array(img))
    plt.show()

with torch.no_grad():
    outputs = model(images)
    boxs = get_box(outputs, 69.333)  # img_scale = 416/6
    imshow(img_path, boxs)

The detection results are shown below:

[image: detection result]

In the end, detection on a 416×416 image reaches about 20 fps even on my lousy laptop's CPU, which is not bad.
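For anyone who wants to reproduce that figure, a rough way to measure it (a sketch, not the original benchmarking code) is to time repeated forward passes:

import time

with torch.no_grad():
    dummy = torch.randn(1, 3, 416, 416)
    start = time.perf_counter()
    for _ in range(50):
        model(dummy)
    elapsed = time.perf_counter() - start
    print('%.1f fps' % (50 / elapsed))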