facebookresearch / sapiens

High-resolution models for human tasks.
https://about.meta.com/realitylabs/codecavatars/sapiens/

keypoints308_openpose.sh #55

Closed HaoZhang990127 closed 1 week ago

HaoZhang990127 commented 1 week ago

Hi,

Thank you for your nice work. Following keypoints17.sh and keypoints17_openpose.sh, I modified keypoints308.sh to add `--skeleton_style openpose`, but I get the following error:

/root/paddlejob/workspace/env_run/sapiens/det/mmdet/models/backbones/csp_darknet.py:123: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.cuda.amp.autocast(enabled=False):
  0%|                                                                                                                                       | 0/100 [00:08<?, ?it/s]
Traceback (most recent call last):
  File "/root/paddlejob/workspace/env_run/sapiens/pose/demo/demo_vis.py", line 246, in <module>
    main()
  File "/root/paddlejob/workspace/env_run/sapiens/pose/demo/demo_vis.py", line 227, in main
    pred_instances = process_one_image(args, image_path, detector, pose_estimator, visualizer)
  File "/root/paddlejob/workspace/env_run/sapiens/pose/demo/demo_vis.py", line 65, in process_one_image
    visualizer.add_datasample(
  File "/root/miniconda3/envs/sapiens/lib/python3.10/site-packages/mmengine/dist/utils.py", line 427, in wrapper
    return func(*args, **kwargs)
  File "/root/paddlejob/workspace/env_run/sapiens/pose/mmpose/visualization/local_visualizer.py", line 554, in add_datasample
    pred_img_data = self._draw_instances_kpts(
  File "/root/paddlejob/workspace/env_run/sapiens/pose/mmpose/visualization/local_visualizer.py", line 320, in _draw_instances_kpts
    raise ValueError(
ValueError: the length of kpt_color (308) does not matches that of keypoints (309)
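The off-by-one count is consistent with the openpose drawing style appending a derived neck keypoint before visualization, so the keypoint array grows to 309 while the Goliath `kpt_color` palette still has 308 entries. A minimal sketch of that failure mode (the shoulder indices used for the neck are hypothetical, not Goliath's actual layout):

```python
import numpy as np

kpt_color = np.zeros((308, 3))   # Goliath palette: one colour per keypoint
keypoints = np.zeros((308, 2))   # 308 predicted keypoints

# Illustrative: the openpose style appends a derived neck point
# (midpoint of the two shoulders; indices 5 and 6 are placeholders).
neck = keypoints[5:7].mean(axis=0, keepdims=True)
keypoints = np.concatenate([keypoints, neck], axis=0)

# The visualizer's sanity check then fails: 308 colours vs 309 keypoints.
mismatch = len(kpt_color) != len(keypoints)
print(len(kpt_color), len(keypoints), mismatch)  # 308 309 True
```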

How can I get the openpose output for keypoints308? Thank you so much. My keypoints308_openpose.sh is as follows:


#!/bin/bash

cd ../../.. || exit
SAPIENS_CHECKPOINT_ROOT=/root/paddlejob/workspace/env_run/sapiens

#----------------------------set your input and output directories----------------------------------------------
## we recommend using high-resolution face images to extract accurate face keypoints.
INPUT='./demo/data/itw_videos/reel2'
OUTPUT='/root/paddlejob/workspace/env_run/sapiens/pose/outputs/reel2'

#--------------------------MODEL CARD---------------
MODEL_NAME='sapiens_1b'; CHECKPOINT=$SAPIENS_CHECKPOINT_ROOT/pose/checkpoints/sapiens_1b/sapiens_1b_goliath_best_goliath_AP_640.pth

DATASET='goliath'
MODEL="${MODEL_NAME}-210e_${DATASET}-1024x768"
CONFIG_FILE="/root/paddlejob/workspace/env_run/sapiens/pose/configs/sapiens_pose/${DATASET}/${MODEL}.py"
OUTPUT=$OUTPUT/$MODEL_NAME

# bounding box detector
DETECTION_CONFIG_FILE='demo/mmdetection_cfg/rtmdet_m_640-8xb32_coco-person_no_nms.py'
DETECTION_CHECKPOINT=$SAPIENS_CHECKPOINT_ROOT/detector/checkpoints/rtmpose/rtmdet_m_8xb32-100e_coco-obj365-person-235e8209.pth

#---------------------------VISUALIZATION PARAMS--------------------------------------------------
LINE_THICKNESS=15 ## line thickness of the skeleton
RADIUS=15 ## keypoint radius

KPT_THRES=0.3 ## default keypoint confidence
# KPT_THRES=0.5 ## higher keypoint confidence

##-------------------------------------inference-------------------------------------
RUN_FILE='demo/demo_vis.py'

## number of inference jobs per gpu, total number of gpus and gpu ids
JOBS_PER_GPU=1; TOTAL_GPUS=1; VALID_GPU_IDS=(0 1 2 3 4 5 6 7)
TOTAL_JOBS=$((JOBS_PER_GPU * TOTAL_GPUS))

# Find all images and sort them, then write to a temporary text file
IMAGE_LIST="${INPUT}/image_list.txt"
find "${INPUT}" -type f \( -iname \*.jpg -o -iname \*.png \) | sort > "${IMAGE_LIST}"

# Check if image list was created successfully
if [ ! -s "${IMAGE_LIST}" ]; then
  echo "No images found. Check your input directory and permissions."
  exit 1
fi

# Count images and calculate the number of images per text file
NUM_IMAGES=$(wc -l < "${IMAGE_LIST}")
IMAGES_PER_FILE=$((NUM_IMAGES / TOTAL_JOBS))
EXTRA_IMAGES=$((NUM_IMAGES % TOTAL_JOBS))

export TF_CPP_MIN_LOG_LEVEL=2
export MMDET_DISABLE_REGISTRY=1
echo "Distributing ${NUM_IMAGES} image paths into ${TOTAL_JOBS} jobs."

# Divide image paths into text files for each job
for ((i=0; i<TOTAL_JOBS; i++)); do
  TEXT_FILE="${INPUT}/image_paths_$((i+1)).txt"
  if [ $i -eq $((TOTAL_JOBS - 1)) ]; then
    # For the last text file, write all remaining image paths
    tail -n +$((IMAGES_PER_FILE * i + 1)) "${IMAGE_LIST}" > "${TEXT_FILE}"
  else
    # Write the exact number of image paths per text file
    head -n $((IMAGES_PER_FILE * (i + 1))) "${IMAGE_LIST}" | tail -n ${IMAGES_PER_FILE} > "${TEXT_FILE}"
  fi
done

# Run the process on the GPUs, allowing multiple jobs per GPU
for ((i=0; i<TOTAL_JOBS; i++)); do
  GPU_ID=$((i % TOTAL_GPUS))
  CUDA_VISIBLE_DEVICES=${VALID_GPU_IDS[GPU_ID]} python ${RUN_FILE} \
    ${DETECTION_CONFIG_FILE} \
    ${DETECTION_CHECKPOINT} \
    ${CONFIG_FILE} \
    ${CHECKPOINT} \
    --input "${INPUT}/image_paths_$((i+1)).txt" \
    --output-root="${OUTPUT}" \
    --save-predictions \
    --radius ${RADIUS} \
    --kpt-thr ${KPT_THRES} \
    --thickness ${LINE_THICKNESS} \
    --skeleton_style openpose ## add & to process in background
  # Allow a short delay between starting each job to reduce system load spikes
  sleep 1
done

# Wait for all background processes to finish
wait

# Remove the image list and temporary text files
rm "${IMAGE_LIST}"
for ((i=0; i<TOTAL_JOBS; i++)); do
  rm "${INPUT}/image_paths_$((i+1)).txt"
done

# Go back to the original script's directory
cd -

echo "Processing complete."
echo "Results saved to $OUTPUT"

Thank you so much. Best regards.

rawalkhirodkar commented 1 week ago

Hello, we do not support the openpose output format for the 308-keypoint set. You will have to implement the keypoint subindexing yourself; refer to this issue for more details.
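A minimal sketch of such subindexing, targeting a hypothetical OpenPose-18-style layout (the Goliath index values below are placeholders for illustration only; the real mapping must be read from the Goliath dataset meta info):

```python
import numpy as np

# Hypothetical mapping from OpenPose slots to Goliath-308 indices.
# These index values are NOT the real Goliath layout.
GOLIATH_TO_OPENPOSE = {
    0: 0,     # nose
    1: None,  # neck: not in the 308 set, derived below
    2: 6,     # right shoulder
    3: 8,     # right elbow
    4: 10,    # right wrist
    5: 5,     # left shoulder
    6: 7,     # left elbow
    7: 9,     # left wrist
}

def subindex(kpts_308: np.ndarray) -> np.ndarray:
    """Gather an OpenPose-style subset from a (308, 2) keypoint array."""
    out = np.zeros((len(GOLIATH_TO_OPENPOSE), 2), dtype=kpts_308.dtype)
    for op_idx, g_idx in GOLIATH_TO_OPENPOSE.items():
        if g_idx is None:
            # derive the neck as the midpoint of the two shoulders
            out[op_idx] = (kpts_308[5] + kpts_308[6]) / 2
        else:
            out[op_idx] = kpts_308[g_idx]
    return out

kpts = np.arange(308 * 2, dtype=float).reshape(308, 2)
print(subindex(kpts).shape)  # (8, 2)
```

The subset's skeleton links and `kpt_color` table would then also need to be reduced to the same length, so the visualizer's length check passes.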