ubicomplab / rPPG-Toolbox

rPPG-Toolbox: Deep Remote PPG Toolbox (NeurIPS 2023)
https://arxiv.org/abs/2210.00716

Error faced while adding new database to the toolbox #248

Closed SwarubiniPJ closed 4 months ago

SwarubiniPJ commented 4 months ago

Hello, we are trying to add a new dataset to this toolbox, but it throws an error. Please have a look at this traceback:

```
Traceback (most recent call last):
  File "main.py", line 282, in <module>
    unsupervised_data = unsupervised_loader(
  File "/home/lab/Desktop/Research/Dev/Python/rPPG-Toolbox/dataset/data_loader/InfraredLoader.py", line 42, in __init__
    super().__init__(name, "data/Infrared", config_data)
  File "/home/lab/Desktop/Research/Dev/Python/rPPG-Toolbox/dataset/data_loader/BaseLoader.py", line 67, in __init__
    self.preprocess_dataset(self.raw_data_dirs, config_data.PREPROCESS, config_data.BEGIN, config_data.END)
  File "/home/lab/Desktop/Research/Dev/Python/rPPG-Toolbox/dataset/data_loader/BaseLoader.py", line 209, in preprocess_dataset
    self.build_file_list(file_list_dict)  # build file list
  File "/home/lab/Desktop/Research/Dev/Python/rPPG-Toolbox/dataset/data_loader/BaseLoader.py", line 486, in build_file_list
    raise ValueError(self.dataset_name, 'No files in file list')
ValueError: ('unsupervised', 'No files in file list')
```

We are trying to add a dataset structured similarly to the UBFC-rPPG dataset.

girishvn commented 4 months ago

Hi @SwarubiniPJ,

The error suggests that no preprocessed data files have been generated (the target data directory does not contain the expected input and label files).
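A quick way to confirm this is to look inside the cached/preprocessed directory for the expected `.npy` files. A minimal sketch — the cache path below is a placeholder, and the `*input*.npy` / `*label*.npy` naming is assumed from the toolbox's usual preprocessing output:

```python
import glob
import os

# Placeholder path: substitute the CACHED_PATH from your dataset's YAML config.
cached_path = "PreprocessedData/Infrared"

# Preprocessed clips are assumed to be saved as paired .npy files.
inputs = sorted(glob.glob(os.path.join(cached_path, "*input*.npy")))
labels = sorted(glob.glob(os.path.join(cached_path, "*label*.npy")))

print(f"{len(inputs)} input clips, {len(labels)} label clips")
if not inputs:
    print("No preprocessed files found -- preprocessing likely failed or wrote to a different path.")
```

If this prints zero clips, the problem is upstream in preprocessing (or the cache path in the config), not in the file-list building itself.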

Can you outline the steps you have taken to add a new dataset? It should involve something along the lines of creating a new dataloader file similar to `rPPG-Toolbox/dataset/data_loader/PURELoader.py`.
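The shape of such a loader can be sketched as follows — a hypothetical skeleton whose class name and dataset details are placeholders; in the toolbox it would subclass `BaseLoader`, which is stubbed out here so the sketch stands alone:

```python
class MyDatasetLoader:
    """Hypothetical skeleton; in the toolbox this would subclass BaseLoader,
    whose __init__ drives preprocessing via the methods below."""

    def __init__(self, name, data_path, config_data):
        self.dataset_name = name
        self.data_path = data_path
        self.config_data = config_data

    def get_raw_data(self, data_path):
        """Locate raw subject directories under data_path."""
        raise NotImplementedError

    def split_raw_data(self, data_dirs, begin, end):
        """Select the [begin, end) fraction of subjects."""
        raise NotImplementedError

    def preprocess_dataset_subprocess(self, data_dirs, config_preprocess, i, file_list_dict):
        """Read frames and labels for one subject, then save preprocessed clips."""
        raise NotImplementedError

loader = MyDatasetLoader("unsupervised", "data/MyDataset", None)
print(loader.dataset_name)
```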

girishvn commented 4 months ago

Closing due to inactivity. Feel free to reopen if need be.

SwarubiniPJ commented 4 months ago

Steps followed to add new database:

  1. First, we created our dataset and arranged all the videos and ground-truth signals in the same fashion as the UBFC dataset.
  2. Then we used the UBFC dataloader for our subjects. The code is the following:

```python
"""The dataloader for the UBFC-rPPG dataset.

Details for the UBFC-rPPG dataset: see https://sites.google.com/view/ybenezeth/ubfcrppg.
If you use this dataset, please cite this paper:
S. Bobbia, R. Macwan, Y. Benezeth, A. Mansouri, J. Dubois,
"Unsupervised skin tissue segmentation for remote photoplethysmography",
Pattern Recognition Letters, 2017.
"""
import glob
import os
import re
from multiprocessing import Pool, Process, Value, Array, Manager

import cv2
import numpy as np
from dataset.data_loader.BaseLoader import BaseLoader
from tqdm import tqdm


class InfraredLoader(BaseLoader):
    """The data loader for the UBFC-rPPG dataset."""

    def __init__(self, name, data_path, config_data):
        super().__init__(name, "data/Infrared", config_data)

    def get_raw_data(self, data_path):
        """Returns data directories under the path (for the UBFC-rPPG dataset)."""
        data_dirs = glob.glob(data_path + os.sep + "subject*")
        if not data_dirs:
            raise ValueError(self.dataset_name + " data paths empty!")
        dirs = [{"index": re.search(
            r'subject(\d+)', data_dir).group(0), "path": data_dir} for data_dir in data_dirs]
        return dirs

    def split_raw_data(self, data_dirs, begin, end):
        """Returns a subset of data dirs, split with begin and end values."""
        if begin == 0 and end == 1:  # return the full directory if begin == 0 and end == 1
            return data_dirs

        file_num = len(data_dirs)
        choose_range = range(int(begin * file_num), int(end * file_num))
        data_dirs_new = []

        for i in choose_range:
            data_dirs_new.append(data_dirs[i])

        return data_dirs_new

    def preprocess_dataset_subprocess(self, data_dirs, config_preprocess, i, file_list_dict):
        """Invoked by preprocess_dataset for multi-processing."""
        filename = os.path.split(data_dirs[i]['path'])[-1]
        saved_filename = data_dirs[i]['index']

        # Read Frames
        if 'None' in config_preprocess.DATA_AUG:
            # Utilize dataset-specific function to read video
            frames = self.read_video(
                os.path.join(data_dirs[i]['path'], "vid.avi"))
        elif 'Motion' in config_preprocess.DATA_AUG:
            # Utilize general function to read video in .npy format
            frames = self.read_npy_video(
                glob.glob(os.path.join(data_dirs[i]['path'], '*.npy')))
        else:
            raise ValueError(f'Unsupported DATA_AUG specified for {self.dataset_name} dataset! Received {config_preprocess.DATA_AUG}.')

        # Read Labels
        if config_preprocess.USE_PSUEDO_PPG_LABEL:
            bvps = self.generate_pos_psuedo_labels(frames, fs=self.config_data.FS)
        else:
            bvps = self.read_wave(
                os.path.join(data_dirs[i]['path'], "ground_truth.txt"))

        frames_clips, bvps_clips = self.preprocess(frames, bvps, config_preprocess)
        input_name_list, label_name_list = self.save_multi_process(frames_clips, bvps_clips, saved_filename)
        file_list_dict[i] = input_name_list

    @staticmethod
    def read_video(video_file):
        """Reads a video file, returns frames (T, H, W, 3)."""
        VidObj = cv2.VideoCapture(video_file)
        VidObj.set(cv2.CAP_PROP_POS_MSEC, 0)
        success, frame = VidObj.read()
        frames = list()
        while success:
            frame = cv2.cvtColor(np.array(frame), cv2.COLOR_BGR2RGB)
            frame = np.asarray(frame)
            frames.append(frame)
            success, frame = VidObj.read()
        return np.asarray(frames)

    @staticmethod
    def read_wave(bvp_file):
        """Reads a BVP signal file."""
        with open(bvp_file, "r") as f:
            str1 = f.read()
            str1 = str1.split("\n")
            bvp = [float(x) for x in str1[0].split()]
        return np.asarray(bvp)
```
  3. Then we created the .yaml file for our loader.
  4. Made all the required changes in the config.py file.
  5. We are facing the problem after the above steps. Kindly provide a detailed solution to rectify this error.
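Given the steps above, a quick sanity check before re-running preprocessing is to confirm that the glob pattern in `get_raw_data` actually matches the subject folders. A minimal sketch — the path below is a placeholder (note that the posted `__init__` hard-codes `"data/Infrared"` rather than using the `data_path` argument, so this is the path that matters):

```python
import glob
import os
import re

# Placeholder: the data path the loader will actually search.
data_path = "data/Infrared"

# Same pattern as get_raw_data in the posted loader.
data_dirs = glob.glob(data_path + os.sep + "subject*")
print(f"Found {len(data_dirs)} subject directories under {data_path}")
for d in data_dirs:
    m = re.search(r"subject(\d+)", d)
    print(m.group(0) if m else f"name does not match the subject* pattern: {d}")
```

If this finds zero directories, the folder names or the configured data path do not match what the loader expects, which would leave the file list empty.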
samphilips commented 2 months ago

Hi I am facing the same error, were you able to fix it, if so what did you do, thank you


yahskapar commented 2 months ago

@samphilips,

Can you create a new issue regarding this + add more details such as what dataset you're trying to add and what your dataloader looks like?