talmolab / sleap

A deep learning framework for multi-animal pose tracking.
https://sleap.ai

Import DLC data crash, "KeyError: "There is no node named nose in Skeleton(name='Skeleton-1')" #429

Closed catubc closed 3 years ago

catubc commented 3 years ago

Hi, I installed the prerelease 1.0.10a9:

(sleap_env) cat@cat-work:~$ python
Python 3.6.12 |Anaconda, Inc.| (default, Sep  8 2020, 23:10:56) 
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sleap
2020-12-08 08:20:02.098525: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-12-08 08:20:02.100005: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
>>> sleap.__version__
'1.0.10a9'

But I still get an error when importing DLC labeled data:

(sleap_env) cat@cat-work:~$ sleap-label
2020-12-08 08:20:10.572434: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-12-08 08:20:10.573910: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/shiboken2/files.dir/shibokensupport/signature/parser.py:97: FutureWarning: split() requires a non-empty pattern match.
  return [x.strip() for x in split(argstr) if x.strip() not in ("", ",")]
Traceback (most recent call last):
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/gui/commands.py", line 264, in importDLC
    self.execute(ImportDeepLabCut)
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/gui/commands.py", line 232, in execute
    command().execute(context=self, params=kwargs)
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/gui/commands.py", line 129, in execute
    self.do_with_signal(context, params)
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/gui/commands.py", line 153, in do_with_signal
    cls.do_action(context, params)
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/gui/commands.py", line 699, in do_action
    labels = Labels.load_deeplabcut(filename=params["filename"])
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/io/dataset.py", line 1661, in load_deeplabcut
    return read(filename, for_object="labels", as_format="deeplabcut")
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/io/format/main.py", line 99, in read
    return disp.read(filename, *args, **kwargs)
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/io/format/dispatch.py", line 56, in read
    return adaptor.read(file, *args, **kwargs)
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/io/format/deeplabcut.py", line 285, in read
    FileHandle(csv_path), full_video=video, skeleton=skeleton
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/io/format/deeplabcut.py", line 180, in read_frames
    Instance(skeleton=skeleton, points=instance_points)
  File "<attrs generated init sleap.instance.Instance>", line 10, in __init__
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/instance.py", line 423, in _validate_all_points
    f"There is no node named {node_name} in {self.skeleton}"
KeyError: "There is no node named nose in Skeleton(name='Skeleton-1')"

For completeness, here's most of the config.yaml file as well:


   # Project definitions (do not edit)
Task: cohort2
scorer: danlab
date: Oct21
multianimalproject: true

    # Project path (change when moving around)
project_path: /mnt/adfe6e7b-b77b-4731-bc9e-e639667faba4/lisa/cohort2-danlab-2020-10-21

    # Annotation data set configuration (and individual video cropping parameters)
video_sets:
  ? /mnt/adfe6e7b-b77b-4731-bc9e-e639667faba4/lisa/cohort2-danlab-2020-10-2/videos/2020_07_21_02_34_26_386606_compressed_cropped.avi
  : crop: 0, 400, 0, 400
 ...
  ? /mnt/adfe6e7b-b77b-4731-bc9e-e639667faba4/lisa/cohort2-danlab-2020-10-2/videos/2020_08_06_01_16_33_760403_compressed_cropped.avi
  : crop: 0, 400, 0, 400
individuals:
- adultfemale
- adultmale
- pup1shavedhead
- pup2backstripe
- pup3shavedback
- pup4shavedheadshavedback
uniquebodyparts: []
multianimalbodyparts:
- nose
- leftear
- rightear
- spine1
- spine2
- spine3
- spine4
- spine5
- tail1
- tail2
- tail3
skeleton:
- - rightear
  - spine3
- - tail1
  - tail2
- - rightear
  - spine4
- - spine1
  - spine2
- - nose
  - spine3
- - spine4
  - spine5
- - spine3
  - tail2
- - leftear
  - spine3
- - nose
  - leftear
- - spine2
  - spine3
- - rightear
  - spine5
- - spine4
  - tail1
- - tail1
  - tail3
- - leftear
  - spine2
- - nose
  - rightear
- - rightear
  - tail1
- - rightear
  - spine1
- - spine4
  - tail2
- - spine5
  - tail1
- - nose
  - spine1
- - tail2
  - tail3
- - leftear
  - spine1
- - spine3
  - spine4
- - rightear
  - spine2
- - spine4
  - tail3
- - leftear
  - spine4
- - leftear
  - rightear

bodyparts: MULTI!
start: 0
stop: 1
numframes2pick: 60

    # Plotting configuration
skeleton_color: black
pcutoff: 0.6
dotsize: 5
alphavalue: 0.7
colormap: plasma

    # Training,Evaluation and Analysis configuration
TrainingFraction:
- 0.95
iteration: 0
default_net_type: resnet_50
default_augmenter: multi-animal-imgaug
snapshotindex: -1
batch_size: 8

    # Cropping Parameters (for analysis and outlier frame detection)
cropping: false
croppedtraining: true
    #if cropping is true for analysis, then set the values here:
x1: 0
x2: 640
y1: 277
y2: 624

    # Refinement configuration (parameters from annotation dataset configuration also relevant in this stage)
corner2move2:
- 50
- 50
move2corner: true
video_sets_original:
  /mnt/adfe6e7b-b77b-4731-bc9e-e639667faba4/lisa/cohort2-danlab-2020-10-2/videos/2020_07_21_02_34_26_386606_compressed.avi:
    crop: 0, 1280, 0, 1024
 ...
  /mnt/adfe6e7b-b77b-4731-bc9e-e639667faba4/lisa/cohort2-danlab-2020-10-2/videos/2020_08_06_01_16_33_760403_compressed.avi:
    crop: 0, 1280, 0, 1024
talmo commented 3 years ago

Hmm that's odd. Do you mind sharing your DLC CSV file with the labels?

catubc commented 3 years ago

Sure, I put a csv example here:

https://drive.google.com/file/d/1_s5G30roJPd5V6KBrX0mJRTzIB6503EN/view?usp=sharing

And the .h5 file here:

https://drive.google.com/file/d/1R6fKdtEmUOcND4NnaP4SpNH0wp8Wwzji/view?usp=sharing

Note: there are many animal instances where a feature is occluded; not sure if that's the issue.

talmo commented 3 years ago

So I'm able to load the CSV file both in the GUI and interactively:

import sleap

labels = sleap.Labels.load_file("CollectedData_danlab.csv", as_format="deeplabcut")

print(labels.skeleton.node_names)
print(labels[29].instances[0].numpy())
print(labels[29].instances[1].numpy())

Output:

['nose', 'leftear', 'rightear', 'spine1', 'spine2', 'spine3', 'spine4', 'spine5', 'tail1', 'tail2', 'tail3']
[[         nan          nan]
 [         nan          nan]
 [         nan          nan]
 [         nan          nan]
 [335.57568828 770.96751239]
 [304.51417728 787.11949811]
 [273.03851279 807.41301863]
 [254.40160619 832.26222744]
 [         nan          nan]
 [         nan          nan]
 [         nan          nan]]
[[         nan          nan]
 [287.53388459 677.78297937]
 [276.35174063 721.26909478]
 [303.68587032 709.67279733]
 [278.83666151 702.63218817]
 [248.60345747 710.5011043 ]
 [224.16840214 730.79462482]
 [206.3598025  755.64383362]
 [         nan          nan]
 [         nan          nan]
 [         nan          nan]]

Let's troubleshoot. Can you try:

  1. Run the snippet above in a Python session and see if you still get the error.
  2. Run which sleap-label to check that the binary got linked correctly when you upgraded versions (see the quick check below).
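
A minimal sanity check you can run from Python to confirm which install is active (this assumes the sleap_env conda environment from your logs; exact paths will differ on your machine):

import shutil

import sleap

print(shutil.which("sleap-label"))  # should resolve inside .../envs/sleap_env/bin/
print(sleap.__file__)               # which site-packages copy actually gets imported
print(sleap.__version__)            # should report the version you upgraded to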
catubc commented 3 years ago

Ok, so my workstation SLEAP install broke; it gives the TensorRT error and I can't seem to fix it. [EDIT: Fixed the workstation now...]

But I ran your code on my laptop, here's the output:

(sleap_env) cat@cat-GF63-Thin-9SCX:~$ python
Python 3.6.12 |Anaconda, Inc.| (default, Sep  8 2020, 23:10:56) 
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sleap
2020-12-08 15:22:25.190340: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-12-08 15:22:25.191377: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
>>> labels = sleap.Labels.load_file("/home/cat/Downloads/CollectedData_danlab.csv", as_format="deeplabcut")
>>> print(labels.skeleton.node_names)
['nose', 'leftear', 'rightear', 'spine1', 'spine2', 'spine3', 'spine4', 'spine5', 'tail1', 'tail2', 'tail3']
>>> print(labels.skeleton.node_names)
['nose', 'leftear', 'rightear', 'spine1', 'spine2', 'spine3', 'spine4', 'spine5', 'tail1', 'tail2', 'tail3']
>>> print(labels[29].instances[0].numpy())
[[         nan          nan]
 [         nan          nan]
 [         nan          nan]
 [         nan          nan]
 [335.57568828 770.96751239]
 [304.51417728 787.11949811]
 [273.03851279 807.41301863]
 [254.40160619 832.26222744]
 [         nan          nan]
 [         nan          nan]
 [         nan          nan]]
>>> print(labels[29].instances[1].numpy())
[[         nan          nan]
 [287.53388459 677.78297937]
 [276.35174063 721.26909478]
 [303.68587032 709.67279733]
 [278.83666151 702.63218817]
 [248.60345747 710.5011043 ]
 [224.16840214 730.79462482]
 [206.3598025  755.64383362]
 [         nan          nan]
 [         nan          nan]
 [         nan          nan]]
>>> sleap.__version__
'1.0.10a9'
>>> 
catubc commented 3 years ago

I can confirm the same output on my workstation (where all the DLC data is hosted).

Let me know any other suggestions.

talmo commented 3 years ago

Ok, and you definitely can't load it in the GUI?

Let's try saving the SLP file interactively in Python:

import sleap
labels = sleap.Labels.load_file("CollectedData_danlab.csv", as_format="deeplabcut")
labels.save("CollectedData_danlab.slp")

And then in the terminal, open the resulting file: sleap-label "CollectedData_danlab.slp"

It's working on my end, but I don't have the images to visualize the labels with.

Let me know what you get and if you see any new errors.

talmo commented 3 years ago

Also here's the converted project file if you want to just try loading the one I created using the snippet above:

CollectedData_danlab.slp.zip

catubc commented 3 years ago

Ok

So the save and the reload definitely worked, so something must be going on with the GUI version.

Let me know if there's anything else I can test.

Also, it seems the GPU is not working if I want to train on this data. I will open a separate issue. [EDIT: I fixed GPU training now]

talmo commented 3 years ago

Hmm, not sure what could be going on. Here's me importing your data from the GUI using just a black image as a placeholder:

[attachment: screenshot of the GUI import]

catubc commented 3 years ago

Works now for me as well (with images!). Thanks.

But:

How do I import several .csv files from several movies to train at the same time?

talmo commented 3 years ago

Right, so that's not currently supported but I'm tracking that feature over in issue #412.

Does it work for you if you go to File -> Merge into project... and merge in the separate SLP files converted from each CSV?

Or do you have so many CSV files that this would be impractical?

catubc commented 3 years ago

I have 20-30 such files in general. I don't mind doing it.

However, I just tried

  1. Importing a DLC .csv <- worked ok

  2. Merging into it from an .h5 file (there is no option to merge from .csv), which gave this crash:

(sleap_env) cat@cat-GF63-Thin-9SCX:~$ sleap-label
2020-12-08 17:10:49.324944: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-12-08 17:10:49.325993: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/shiboken2/files.dir/shibokensupport/signature/parser.py:97: FutureWarning: split() requires a non-empty pattern match.
  return [x.strip() for x in split(argstr) if x.strip() not in ("", ",")]
/home/cat/Downloads/sleap2/CollectedData_danlab.h5 doesn't match ext for json or json.zip
Traceback (most recent call last):
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/gui/commands.py", line 499, in importPredictions
    self.execute(MergeProject)
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/gui/commands.py", line 232, in execute
    command().execute(context=self, params=kwargs)
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/gui/commands.py", line 125, in execute
    self.ask_and_do(context, params)
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/gui/commands.py", line 1854, in ask_and_do
    new_labels = Labels.load_file(filename, video_search=gui_video_callback)
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/io/dataset.py", line 1561, in load_file
    filename, for_object="labels", video_search=video_search, *args, **kwargs
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/io/format/main.py", line 109, in read
    return disp.read(filename, *args, **kwargs)
  File "/home/cat/anaconda3/envs/sleap_env/lib/python3.6/site-packages/sleap/io/format/dispatch.py", line 58, in read
    raise TypeError("No file format adaptor could read this file.")
TypeError: No file format adaptor could read this file.
talmo commented 3 years ago

Sorry, I wasn't super clear about the procedure. For each CSV file:

  1. Open new SLEAP window
  2. File -> Import -> DLC dataset
  3. File -> Save to a new .slp file

Then, open one of the resulting .slp files and now merge them in one-by-one through File -> Merge into project.

I'll tag this as an enhancement so we can streamline the process in a new version :)

talmo commented 3 years ago

As a workaround for now, here's how you can save yourself some clicking on the first step:

import sleap

csv_files = ["path/to/vid1.csv", "path/to/vid2.csv"]

# Convert each DLC CSV into its own SLEAP project file, saved alongside it.
for csv_file in csv_files:
    labels = sleap.Labels.load_file(csv_file, as_format="deeplabcut")
    labels.save(csv_file + ".slp")
catubc commented 3 years ago

Thanks so much, I will go ahead and do this ASAP.

Before I do that, I'd like to evaluate SLEAP on a video. I have now finished training 50 epochs of center and center_instance and would like to label some videos.

I couldn't find a way to do this; did I miss something?

Thanks so much.

[EDIT: I see there is some information here as well, I will try to follow this guide:

https://sleap.ai/guides/remote.html#remote-inference ]

talmo commented 3 years ago

Just for future reference until we implement this functionality natively, here's how you can import and merge a set of DLC folders:

import sleap
from glob import glob

csv_files = glob("labeled-data/*/*.csv")
merged_labels = None
for csv_file in csv_files:
    labels = sleap.Labels.load_file(csv_file, as_format="deeplabcut")
    if merged_labels is None:
        merged_labels = labels
    else:
        merged_labels.extend_from(labels, unify=True)

print("Merged labels:")
merged_labels.describe()

merged_labels.save("merged_labels.slp")
merged_labels.save("merged_labels.pkg.slp", with_images=True)  # comment this out to skip saving with embedded images

This assumes a folder structure like this:

labeled-data/
    video1/
        my_labels.csv
        img00000.png
        img00001.png
        ...
    video2/
    ...
Batlad commented 1 year ago

I was just running into this issue myself in the latest version and found a workaround that isn't mentioned in the thread. I noticed the error message below, particularly "nodes=[M, U, L, T, I, !]":

    Instance(skeleton=skeleton, points=instance_points)
  File "<attrs generated init sleap.instance.Instance>", line 10, in __init__
  File "C:\Users\mq20197886\.conda\envs\sleap\lib\site-packages\sleap\instance.py", line 420, in _validate_all_points
    f"There is no node named {node_name} in {self.skeleton}"
KeyError: 'There is no node named HeadBack in Skeleton(nodes=[M, U, L, T, I, !], edges=[], symmetries=[])'

I also noticed that the multi-animal config.yaml, both in my case and in the case above, contains the line "bodyparts: MULTI!":

multianimalbodyparts:
- HeadBack
- PincerMid
- LeftEye
- RightEye
- PostPetiole
- ThoraxFront
bodyparts: MULTI!

My workaround was simply to copy my multianimalbodyparts entries into bodyparts, replacing MULTI!.

This appears to solve the issue, and my import of the config.yaml now loads everything correctly.
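
In case it helps anyone else, here's a minimal sketch of the same edit done with PyYAML rather than by hand (the config path is a placeholder, and round-tripping through PyYAML drops the comments in the file, so keep a backup of config.yaml first):

import yaml

config_path = "path/to/dlc-project/config.yaml"  # placeholder, point this at your project

with open(config_path) as f:
    cfg = yaml.safe_load(f)

# Mirror the multi-animal body parts into the plain bodyparts field,
# which otherwise holds only the "MULTI!" sentinel in multi-animal projects.
if cfg.get("bodyparts") == "MULTI!":
    cfg["bodyparts"] = list(cfg["multianimalbodyparts"])

with open(config_path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)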

I feel this could be solved on the SLEAP importer's side by checking whether the DLC project is multi-animal, or whether there is a multianimalbodyparts field and bodyparts contains only MULTI!.

I have yet to test what effect overwriting bodyparts has on the original DLC project.

Cheers, Trev

BebB108 commented 1 week ago

Hi, I am getting KeyError: 1204 while trying to import DLC training data as a .csv file. I am using version 1.3.3. It's a multi-animal project in DLC with 18 joints per animal.