def _xywh2cs(self, x, y, w, h):
    # Bounding box (x, y, w, h) -> (center, scale), with scale expressed
    # in units of pixel_std (PyTorch official implementation).
    center = np.zeros((2), dtype=np.float32)
    center[0] = x + w * 0.5
    center[1] = y + h * 0.5

    # Grow the shorter side so the box keeps the network input aspect ratio.
    if w > self.aspect_ratio * h:
        h = w * 1.0 / self.aspect_ratio
    elif w < self.aspect_ratio * h:
        w = h * self.aspect_ratio

    scale = np.array(
        [w * 1.0 / self.pixel_std, h * 1.0 / self.pixel_std],
        dtype=np.float32)
    # Pad valid boxes by 25% to include some context around the person.
    if center[0] != -1:
        scale = scale * 1.25

    return center, scale
# TensorFlow official implementation: the same conversion, but the scale
# stays in raw pixels (no pixel_std division).
x, y, w, h = bbox
aspect_ratio = cfg.input_shape[1] / cfg.input_shape[0]
center = np.array([x + w * 0.5, y + h * 0.5])

# Grow the shorter side so the box keeps the network input aspect ratio.
if w > aspect_ratio * h:
    h = w / aspect_ratio
elif w < aspect_ratio * h:
    w = h * aspect_ratio

scale = np.array([w, h]) * 1.25
rotation = 0
Preserve the aspect ratio of the pose resnet input when cropping.
Official PyTorch implementation: https://github.com/microsoft/human-pose-estimation.pytorch (conversion in https://github.com/microsoft/human-pose-estimation.pytorch/blob/master/lib/dataset/coco.py)
Official TensorFlow implementation: https://github.com/mks0601/TF-SimpleHumanPose (conversion in https://github.com/mks0601/TF-SimpleHumanPose/blob/master/main/gen_batch.py)
Export issue: https://github.com/axinc-ai/ailia-models/issues/177
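Both snippets above implement the same idea: move the bbox center to the middle of the box, pad the shorter side until the box matches the network input aspect ratio, then enlarge by 25%. A minimal standalone sketch combining them is below; the function name `xywh_to_center_scale` and the `pixel_std`/`padding` parameters are illustrative (not names from either repository), with `pixel_std=200` matching the PyTorch version and `pixel_std=1` reproducing the TF version's raw-pixel scale.

```python
import numpy as np

def xywh_to_center_scale(x, y, w, h, aspect_ratio, pixel_std=200.0, padding=1.25):
    # Hypothetical helper, not from either official repo.
    # Center of the bounding box.
    center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)

    # Grow the shorter side so w / h equals the network input aspect ratio.
    if w > aspect_ratio * h:
        h = w / aspect_ratio
    elif w < aspect_ratio * h:
        w = h * aspect_ratio

    # Scale in pixel_std units, padded by 25% for context around the person.
    scale = np.array([w / pixel_std, h / pixel_std], dtype=np.float32) * padding
    return center, scale
```

With `aspect_ratio = 0.75` and a square 100x100 box at the origin, the height is grown to 100 / 0.75 and the padded scale keeps the 0.75 width-to-height ratio.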