DeepLabCut / DeepLabCut

Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with deep learning for all animals incl. humans
http://deeplabcut.org
GNU Lesser General Public License v3.0

Analyze Skeleton-No Filtered data file found #2477

Closed klgrove closed 7 months ago

klgrove commented 10 months ago

Is there an existing issue for this?

Bug description

I am trying to generate CSV files of the skeletons for analysis. I followed the steps outlined here: https://github.com/DeepLabCut/DeepLabCut/blob/main/deeplabcut/post_processing/analyze_skeleton.py

Everything seems to go fine using the code above, but when I try to analyze the skeleton, I get the following error:

deeplabcut.analyzeskeleton(
    config_path,
    r"C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids",
    videotype='mp4', shuffle=1, trainingsetindex=0, filtered=True,
    save_as_csv=True, destfolder=None,
)
Analyzing all the videos in the directory...
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\01Dec2023(2).mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 01Dec2023(2) and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.

Operating System

Windows 10 Enterprise

DeepLabCut version

dlc version 2.3.8

DeepLabCut mode

multi animal

Device type

GPU 0: Intel(R) UHD Graphics; GPU 1: AMD Radeon RX 6500

Steps To Reproduce

conda activate tfdml_plugin

import argparse
import os
from math import atan2, degrees
from pathlib import Path

import numpy as np
import pandas as pd
from scipy.spatial import distance

import deeplabcut
from deeplabcut.utils import auxiliaryfunctions, auxfun_multianimal

config_path = r"C:\Users\klgrove\Desktop\LatToeSpread-Kristi-2023-11-17\config.yaml"
def calc_distance_between_points_two_vectors_2d(v1, v2):
    """Pairwise Euclidean distance between corresponding rows of two (N, 2) arrays.

    Arguments:
        v1 {np.ndarray} -- (N, 2) array of x, y coordinates
        v2 {np.ndarray} -- (N, 2) array of x, y coordinates

    Raises:
        ValueError -- if either input is not a numpy array
        ValueError -- if either input is not of shape (N, 2)
        ValueError -- if the inputs differ in length

    Returns:
        list -- Euclidean distance for each pair of corresponding rows

    Testing:
    >>> v1 = np.zeros((2, 5))
    >>> v2 = np.zeros((2, 5))
    >>> v2[1, :] = [0, 10, 25, 50, 100]
    >>> d = calc_distance_between_points_two_vectors_2d(v1.T, v2.T)
    """
    if not isinstance(v1, np.ndarray) or not isinstance(v2, np.ndarray):
        raise ValueError("Invalid argument data format")
    if not v1.shape[1] == 2 or not v2.shape[1] == 2:
        raise ValueError("Invalid shape for input arrays")
    if not v1.shape[0] == v2.shape[0]:
        raise ValueError("Error: input arrays should have the same length")

    dist = [distance.euclidean(p1, p2) for p1, p2 in zip(v1, v2)]
    return dist
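As a quick sanity check of the doctest above, a minimal self-contained sketch (numpy and scipy only): with both arrays transposed to shape (5, 2) and only y-offsets differing, the pairwise distances come out equal to those offsets.

```python
import numpy as np
from scipy.spatial import distance


def calc_distance_between_points_two_vectors_2d(v1, v2):
    # pairwise Euclidean distance between corresponding (x, y) rows
    return [distance.euclidean(p1, p2) for p1, p2 in zip(v1, v2)]


v1 = np.zeros((2, 5))
v2 = np.zeros((2, 5))
v2[1, :] = [0, 10, 25, 50, 100]  # y-offsets only, so distances equal the offsets

d = calc_distance_between_points_two_vectors_2d(v1.T, v2.T)
print(d)  # [0.0, 10.0, 25.0, 50.0, 100.0]
```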

def angle_between_points_2d_anticlockwise(p1, p2):
    """Anticlockwise angle between the line p1 -> p2 and the horizontal.

    Arguments:
        p1 {np.ndarray, list} -- np.array or list with the X and Y coordinates of the point
        p2 {np.ndarray, list} -- np.array or list with the X and Y coordinates of the point

    Returns:
        [float] -- anticlockwise angle between p1 and p2, computed from the inner
        product and the determinant of the two vectors

    Testing:  - to check: print(zero, ninety, oneeighty, twoseventy)
        >>> zero = angle_between_points_2d_anticlockwise([0, 1], [0, 1])
        >>> ninety = angle_between_points_2d_anticlockwise([1, 0], [0, 1])
        >>> oneeighty = angle_between_points_2d_anticlockwise([0, -1], [0, 1])
        >>> twoseventy = angle_between_points_2d_anticlockwise([-1, 0], [0, 1])
        >>> ninety2 = angle_between_points_2d_anticlockwise([10, 0], [10, 1])
        >>> print(ninety2)
    """
    # Determines the angle of the straight line drawn between the two points.
    # The returned value, in degrees, tells us how far a horizontal line must be
    # rotated anticlockwise to match the line between the two points.
    xDiff = p2[0] - p1[0]
    yDiff = p2[1] - p1[1]
    ang = degrees(atan2(yDiff, xDiff))
    if ang < 0:
        ang += 360
    return ang
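The same arithmetic can be checked in isolation. Below is a standalone helper (the name `angle_to_horizontal` is hypothetical, chosen for this sketch) with the four cardinal directions as test points:

```python
from math import atan2, degrees


def angle_to_horizontal(p1, p2):
    # anticlockwise angle, in degrees, of the line p1 -> p2 relative to horizontal
    ang = degrees(atan2(p2[1] - p1[1], p2[0] - p1[0]))
    return ang + 360 if ang < 0 else ang


print(round(angle_to_horizontal((0, 0), (1, 1)), 6))   # 45.0
print(round(angle_to_horizontal((0, 0), (0, 1)), 6))   # 90.0
print(round(angle_to_horizontal((0, 0), (-1, 0)), 6))  # 180.0
print(round(angle_to_horizontal((0, 0), (0, -1)), 6))  # 270.0
```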

def calc_angle_between_vectors_of_points_2d(v1, v2):
    """Angle between corresponding columns of two (2, N) arrays of points.

    Testing:
    >>> v1 = np.zeros((2, 4))
    >>> v2 = np.zeros((2, 4))
    >>> v2[0, :] = [0, 1, 0, -1]
    >>> v2[1, :] = [1, 0, -1, 0]
    >>> a = calc_angle_between_vectors_of_points_2d(v2, v1)
    """
    # Check data format
    if (
        v1 is None
        or v2 is None
        or not isinstance(v1, np.ndarray)
        or not isinstance(v2, np.ndarray)
    ):
        raise ValueError("Invalid format for input arguments")
    if len(v1) != len(v2):
        raise ValueError(
            "Input arrays should have the same length, instead: ", len(v1), len(v2)
        )
    if not v1.shape[0] == 2 or not v2.shape[0] == 2:
        raise ValueError("Invalid shape for input arrays: ", v1.shape, v2.shape)

    # Calculate
    n_points = v1.shape[1]
    angs = np.zeros(n_points)
    for i in range(n_points):
        p1, p2 = v1[:, i], v2[:, i]
        angs[i] = angle_between_points_2d_anticlockwise(p1, p2)

    return angs

def analyzebone(bp1, bp2):
    """Computes length and orientation of the bone at each frame.

    Arguments:
        bp1 {pd.DataFrame} -- x, y, likelihood columns for the first bodypart
        bp2 {pd.DataFrame} -- x, y, likelihood columns for the second bodypart
    """
    bp1_pos = np.vstack([bp1.x.values, bp1.y.values]).T
    bp2_pos = np.vstack([bp2.x.values, bp2.y.values]).T

    # get bone length and orientation
    bone_length = calc_distance_between_points_two_vectors_2d(bp1_pos, bp2_pos)
    bone_orientation = calc_angle_between_vectors_of_points_2d(bp1_pos.T, bp2_pos.T)

    # keep the smallest of the two likelihoods
    # (the paste stacked bp2 twice; bp1 and bp2 are what the comment intends)
    likelihoods = np.vstack([bp1.likelihood.values, bp2.likelihood.values]).T
    likelihood = np.min(likelihoods, 1)

    # Create dataframe and return
    df = pd.DataFrame.from_dict(
        dict(length=bone_length, orientation=bone_orientation, likelihood=likelihood)
    )
    return df
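What `analyzebone` produces can be illustrated with toy data, using plain numpy/pandas in place of the helper functions above (this is a sketch of the computation, not the DeepLabCut implementation): one length/orientation/likelihood triple per frame for a single bone.

```python
import numpy as np
import pandas as pd
from math import atan2, degrees

# toy per-frame coordinates for two bodyparts over three frames
bp1 = pd.DataFrame({"x": [0.0, 0.0, 0.0], "y": [0.0, 0.0, 0.0],
                    "likelihood": [0.9, 0.8, 0.95]})
bp2 = pd.DataFrame({"x": [3.0, 0.0, 4.0], "y": [4.0, 5.0, 3.0],
                    "likelihood": [0.7, 0.9, 0.6]})

# per-frame bone length and anticlockwise orientation
length = np.hypot(bp2.x - bp1.x, bp2.y - bp1.y)
orientation = [degrees(atan2(dy, dx)) % 360
               for dx, dy in zip(bp2.x - bp1.x, bp2.y - bp1.y)]
# keep the smaller of the two likelihoods per frame
likelihood = np.minimum(bp1.likelihood, bp2.likelihood)

bone = pd.DataFrame({"length": length, "orientation": orientation,
                     "likelihood": likelihood})
print(bone)
```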

        bones = {}
        if "individuals" in df.columns.names:
            for animal_name, df_ in df.groupby(level="individuals", axis=1):
                temp = df_.droplevel(["scorer", "individuals"], axis=1)
                if animal_name != "single":
                    for bp1, bp2 in cfg["skeleton"]:
                        name = "{}_{}_{}".format(animal_name, bp1, bp2)
                        bones[name] = analyzebone(temp[bp1], temp[bp2])
        else:
            for bp1, bp2 in cfg["skeleton"]:
                name = "{}_{}".format(bp1, bp2)
                bones[name] = analyzebone(df[scorer][bp1], df[scorer][bp2])

        skeleton = pd.concat(bones, axis=1)
        video_to_skeleton_df[video] = skeleton
        skeleton.to_hdf(output_name, "df_with_missing", format="table", mode="w")
        if save_as_csv:
            skeleton.to_csv(output_name.replace(".h5", ".csv"))

    if return_data:
        return video_to_skeleton_df


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("config")
    parser.add_argument("videos")
    cli_args = parser.parse_args()
usage: ipython [-h] config videos
ipython: error: the following arguments are required: config, videos
An exception has occurred, use %tb to see the full traceback.

SystemExit: 2

C:\Users\klgrove\AppData\Local\anaconda3\envs\tfdml_plugin\lib\site-packages\IPython\core\interactiveshell.py:3556: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D.
  warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1)

    """Extracts length and orientation of each "bone" of the skeleton.

    The bone and skeleton information is defined in the config file.

    Parameters
    ----------
    config: str
        Full path of the config.yaml file.

    videos: list[str]
        The full paths to videos for analysis, or a path to the directory where
        all the videos with the same extension are stored.

    videotype: str, optional, default=""
        Checks for the extension of the video in case the input to the video is
        a directory. Only videos with this extension are analyzed.
        If left unspecified, videos with common extensions
        ('avi', 'mp4', 'mov', 'mpeg', 'mkv') are kept.

    shuffle: int, optional, default=1
        The shuffle index of the training dataset. The extracted frames will be
        stored in the labeled-dataset for the corresponding shuffle of the
        training dataset.

    trainingsetindex: int, optional, default=0
        Integer specifying which TrainingsetFraction to use.
        Note that TrainingFraction is a list in config.yaml.

    filtered: bool, optional, default=False
        Boolean variable indicating if filtered output should be plotted rather
        than frame-by-frame predictions. The filtered version can be calculated
        with ``deeplabcut.filterpredictions``.

    save_as_csv: bool, optional, default=False
        Saves the predictions in a .csv file.

    destfolder: string or None, optional, default=None
        Specifies the destination folder for analysis data. If ``None``, the
        path of the video is used. Note that for subsequent analysis this folder
        also needs to be passed.

    modelprefix: str, optional, default=""
        Directory containing the deeplabcut models to use when evaluating the
        network. By default, the models are assumed to exist in the project
        folder.

    track_method: string, optional, default=""
        Specifies the tracker used to generate the data.
        Empty by default (corresponding to a single-animal project).
        For multiple animals, must be either 'box', 'skeleton', or 'ellipse' and
        will be taken from the config.yaml file if none is given.

    return_data: bool, optional, default=False
        If True, returns a dictionary of the filtered data keyed by video names.

    Returns
    -------
    video_to_skeleton_df
        Dictionary mapping video filepaths to skeleton dataframes.

        * If no videos exist, the dictionary will be empty.
        * If a video is not analyzed, the corresponding value in the dictionary
          will be None.
    """

Relevant log output

In [83]: deeplabcut.analyzeskeleton(config_path, r"C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids", videotype='mp4', shuffle=1, trainingsetindex=0, filtered=True, save_as_csv=True,
    ...:  destfolder=None)
Analyzing all the videos in the directory...
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\01Dec2023(2).mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 01Dec2023(2) and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\22Dec2023 (1)-converted.mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 22Dec2023 (1)-converted and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\18Dec2023 girls 3-converted.mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 18Dec2023 girls 3-converted and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\18Dec2023 boys 3-converted.mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 18Dec2023 boys 3-converted and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\18Dec2023 girls 2-converted.mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 18Dec2023 girls 2-converted and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\08Dec2023 run 2.mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 08Dec2023 run 2 and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\18Dec2023 boys 2-converted.mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 18Dec2023 boys 2-converted and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\08Dec2023-mp4.mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 08Dec2023-mp4 and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\22Dec2023 (2)-converted.mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 22Dec2023 (2)-converted and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\18Dec2023 boys 1-converted.mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 18Dec2023 boys 1-converted and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\18Dec2023 girls 1-converted.mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 18Dec2023 girls 1-converted and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.
Processing C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids\01Dec2023.mp4
No filtered data file found in C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 01Dec2023 and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.

Anything else?

No response

Code of Conduct

n-poulsen commented 10 months ago

Hi @klgrove, did you analyze the videos before running the analyze_skeleton code? The skeleton analysis code uses the output from deeplabcut.analyze_videos, so you would need to run it first, such as:

videos = r"C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids"
deeplabcut.analyze_videos(
    config_path, videos, videotype='mp4', shuffle=1, trainingsetindex=0, filtered=True, save_as_csv=True, destfolder=None
)
deeplabcut.analyzeskeleton(
    config_path, videos, videotype='mp4', shuffle=1, trainingsetindex=0, filtered=True, save_as_csv=True, destfolder=None
)
klgrove commented 10 months ago

Yes, I did analyze the videos first.

Does the Analyze Skeleton output a different CSV file than what I would get with just converting the h5 files to CSV? I was able to convert the H5 files to CSV, but I wasn't sure if Analyze Skeleton gives a different output.

n-poulsen commented 9 months ago

@klgrove yes, deeplabcut.analyzeskeleton computes the length and orientation of each bone in the skeleton (the skeleton as it's defined in your project config.yaml), and it computes these values by using the output of deeplabcut.analyze_videos.
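To make the difference concrete: a plain h5-to-csv conversion dumps the per-bodypart x/y/likelihood columns, while the skeleton output contains one length/orientation/likelihood triple per bone. A minimal sketch of the column structure, with a toy scorer name and toy bodyparts (nothing here is taken from the actual project):

```python
import numpy as np
import pandas as pd

# toy DLC-style output table: columns are (scorer, bodypart, coord)
scorer = "DLC_toy_scorer"  # hypothetical scorer name
cols = pd.MultiIndex.from_product(
    [[scorer], ["toe1", "toe2"], ["x", "y", "likelihood"]],
    names=["scorer", "bodyparts", "coords"],
)
df = pd.DataFrame([[0, 0, 0.9, 3, 4, 0.8],
                   [1, 1, 0.95, 1, 6, 0.7]], columns=cols, dtype=float)

# converting the h5 to csv just writes these x/y/likelihood columns as-is;
# the skeleton analysis instead derives per-bone quantities such as length
p1, p2 = df[scorer]["toe1"], df[scorer]["toe2"]
bone_length = np.hypot(p2.x - p1.x, p2.y - p1.y)
print(list(bone_length))  # [5.0, 5.0]
```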

Can you check if you have output files in your video folder (C:\Users\klgrove\OneDrive - Inside MD Anderson\Documents\Shepherd Lab\Baseline Testing\Toe Spread Vids)?

The output files should have names in the format {video_name}DLC_resnet50_LatToeSpreadNov17shuffle1_45000_el.pickle

klgrove commented 8 months ago

The output files are there, but some of them look like they used an old snapshot (20000 versus 45000). Would that make a difference? Do I need to re-analyze the videos using the old snapshot before running analyze skeleton?

klgrove commented 8 months ago

I ran deeplabcut.analyze_videos on the entire folder again, and then tried analyze skeleton, and I am still getting the same error: No filtered data file found.

klgrove commented 8 months ago

I tried analyze skeleton again this morning, and this is what it spit out:

Processing Q:\Behavioral Care and Enrichment\Research Projects & Data\Kristi Grove\Shepherd Lab\Baseline Testing\Toe Spread Vids\22Dec2023 (1)-converted.mp4
No filtered data file found in Q:\Behavioral Care and Enrichment\Research Projects & Data\Kristi Grove\Shepherd Lab\Baseline Testing\Toe Spread Vids for video 22Dec2023 (1)-converted and scorer DLC_resnet50_LatToeSpreadNov17shuffle1_45000 and ellipse tracker.
Processing Q:\Behavioral Care and Enrichment\Research Projects & Data\Kristi Grove\Shepherd Lab\Baseline Testing\Toe Spread Vids\22Dec2023 (2)-converted.mp4
C:\Users\klgrove\AppData\Local\anaconda3\envs\tfdml_plugin\lib\site-packages\deeplabcut\post_processing\analyze_skeleton.py:286: FutureWarning: DataFrame.groupby with axis=1 is deprecated. Do `frame.T.groupby(...)` without axis instead.
  for animal_name, df_ in df.groupby(level="individuals", axis=1):
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[31], line 1
----> 1 deeplabcut.analyzeskeleton(config, r"Q:\Behavioral Care and Enrichment\Research Projects & Data\Kristi Grove\Shepherd Lab\Baseline Testing\Toe Spread Vids", videotype='mp4', shuffle=1, trainingsetindex=0, filtered=True, save_as_csv=True, destfolder=None)

File ~\AppData\Local\anaconda3\envs\tfdml_plugin\lib\site-packages\deeplabcut\post_processing\analyze_skeleton.py:291, in analyzeskeleton(config, videos, videotype, shuffle, trainingsetindex, filtered, save_as_csv, destfolder, modelprefix, track_method, return_data)
    289             for bp1, bp2 in cfg["skeleton"]:
    290                 name = "{}_{}_{}".format(animal_name, bp1, bp2)
--> 291                 bones[name] = analyzebone(temp[bp1], temp[bp2])
    292 else:
    293     for bp1, bp2 in cfg["skeleton"]:

File ~\AppData\Local\anaconda3\envs\tfdml_plugin\lib\site-packages\deeplabcut\post_processing\analyze_skeleton.py:155, in analyzebone(bp1, bp2)
    152 bp2_pos = np.vstack([bp2.x.values, bp2.y.values]).T
    154 # get bone length and orientation
--> 155 bone_length = calc_distance_between_points_two_vectors_2d(bp1_pos, bp2_pos)
    156 bone_orientation = calc_angle_between_vectors_of_points_2d(bp1_pos.T, bp2_pos.T)
    158 # keep the smallest of the two likelihoods

File ~\AppData\Local\anaconda3\envs\tfdml_plugin\lib\site-packages\deeplabcut\post_processing\analyze_skeleton.py:59, in calc_distance_between_points_two_vectors_2d(v1, v2)
     56     raise ValueError("Error: input arrays should have the same length")
     58 # Calculate distance
---> 59 dist = [distance.euclidean(p1, p2) for p1, p2 in zip(v1, v2)]
     60 return dist

File ~\AppData\Local\anaconda3\envs\tfdml_plugin\lib\site-packages\deeplabcut\post_processing\analyze_skeleton.py:59, in <listcomp>(.0)
     56     raise ValueError("Error: input arrays should have the same length")
     58 # Calculate distance
---> 59 dist = [distance.euclidean(p1, p2) for p1, p2 in zip(v1, v2)]
     60 return dist

File ~\AppData\Local\anaconda3\envs\tfdml_plugin\lib\site-packages\scipy\spatial\distance.py:536, in euclidean(u, v, w)
    500 def euclidean(u, v, w=None):
    501     """
    502     Computes the Euclidean distance between two 1-D arrays.
    503
   (...)
    534
    535     """
--> 536     return minkowski(u, v, p=2, w=w)

File ~\AppData\Local\anaconda3\envs\tfdml_plugin\lib\site-packages\scipy\spatial\distance.py:496, in minkowski(u, v, p, w)
    494         root_w = np.power(w, 1/p)
    495     u_v = root_w * u_v
--> 496 dist = norm(u_v, ord=p)
    497 return dist

File ~\AppData\Local\anaconda3\envs\tfdml_plugin\lib\site-packages\scipy\linalg\_misc.py:146, in norm(a, ord, axis, keepdims, check_finite)
    144 # Differs from numpy only in non-finite handling and the use of blas.
    145 if check_finite:
--> 146     a = np.asarray_chkfinite(a)
    147 else:
    148     a = np.asarray(a)

File ~\AppData\Local\anaconda3\envs\tfdml_plugin\lib\site-packages\numpy\lib\function_base.py:630, in asarray_chkfinite(a, dtype, order)
    628 a = asarray(a, dtype=dtype, order=order)
    629 if a.dtype.char in typecodes['AllFloat'] and not np.isfinite(a).all():
--> 630     raise ValueError(
    631         "array must not contain infs or NaNs")
    632 return a

ValueError: array must not contain infs or NaNs
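The failure at the bottom of the traceback is scipy validating its inputs: frames where a bodypart was not detected leave NaN coordinates in the tracking table, and `scipy.spatial.distance.euclidean` refuses NaNs. A minimal reproduction, plus a NaN-tolerant alternative using `np.hypot` (a sketch, not a DeepLabCut fix):

```python
import numpy as np
from scipy.spatial import distance

p1 = np.array([0.0, 0.0])
p2 = np.array([np.nan, 4.0])  # e.g. a frame where the bodypart was not detected

try:
    distance.euclidean(p1, p2)
except ValueError as e:
    print(e)  # array must not contain infs or NaNs

# np.hypot propagates NaN instead of raising, so per-frame results stay aligned
v1 = np.array([[0.0, 0.0], [0.0, 0.0]])
v2 = np.array([[3.0, 4.0], [np.nan, 4.0]])
d = np.hypot(*(v2 - v1).T)
print(d)  # [ 5. nan]
```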