Closed Mijar007 closed 8 months ago
Hi Michael,
thanks for reporting your issues. I will work through them piece by piece, starting with the easy ones. I will also split my answers so it is easier to track.
Describe the bug: Errors during "Refine behavior" step using CALMS21 and own dataset
Error using CALMS21 (PAPER) dataset, converted to npy format. After training, error message after switching to "Refine Behaviors (optional)" tab:
AttributeError: 'NoneType' object has no attribute 'split' Traceback:
File "C:\ProgramData\anaconda3\envs\asoid\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "C:\Users\Rabenstein\Python\A-SOID-main\asoid\app.py", line 328, in
This is an unintended error you caught; thank you for reporting it. However, the CALMS21 dataset cannot be used for the refinement steps (which is the message you should have received if this error had not occurred).
The dataset can be used in all steps except the manual refinement ones (Refine Behavior and Create new dataset). This dataset is big enough that you will not require any manual refinement to achieve high performance.
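A minimal sketch of the kind of guard that would produce the intended message instead of the AttributeError. The constant and function names here are assumptions for illustration, not the actual A-SOiD code:

```python
# Hypothetical sketch: check the project type before entering the
# refinement step, so unsupported datasets get a clear message
# instead of an AttributeError further down the pipeline.

UNSUPPORTED_REFINEMENT_TYPES = {"CALMS21 (PAPER)"}  # assumed constant name

def can_refine(project_type):
    """Return (allowed, message) for the 'Refine Behaviors' step."""
    if project_type in UNSUPPORTED_REFINEMENT_TYPES:
        return False, ("The CALMS21 dataset cannot be used for manual "
                       "refinement; all other steps are available.")
    return True, ""

allowed, msg = can_refine("CALMS21 (PAPER)")
print(allowed, msg)
```

In the actual app, the message would be shown in the UI (e.g. via Streamlit) and the step skipped.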
Error during "Refine behaviors": adding one new video with a pose file. Error after pushing "Predict labels and create example videos":
TypeError: feature_extraction() takes 3 positional arguments but 4 were given Traceback:
File "C:\ProgramData\anaconda3\envs\asoid\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "C:\Users\Rabenstein\Python\A-SOID-main\asoid\app.py", line 328, in
---
This one is an issue that came in with our latest addition of 3D feature extraction. I will fix the corresponding section in the code and upload a new version ASAP. Thank you for reporting this.
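For context, this TypeError pattern typically appears when a new positional argument (such as a 3D flag) is added at the call sites but not to the function definition. A hedged sketch of the mismatch and a backward-compatible fix; the function bodies and the `is_3d` parameter are illustrative assumptions, not the actual A-SOiD implementation:

```python
# Old signature: three positional parameters.
def feature_extraction_old(data, framerate, keypoints):
    return len(keypoints) * 2  # placeholder: 2D coordinates per keypoint

# Calling it with a fourth argument raises exactly:
#   TypeError: feature_extraction() takes 3 positional arguments but 4 were given

# Backward-compatible fix: accept the new option with a default value,
# so old 3-argument calls keep working alongside new 4-argument ones.
def feature_extraction(data, framerate, keypoints, is_3d=False):
    n = len(keypoints)
    return n * 3 if is_3d else n * 2  # placeholder: 3D vs 2D coordinates

print(feature_extraction(None, 30, ["nose", "neck"]))        # old-style call
print(feature_extraction(None, 30, ["nose", "neck"], True))  # new-style call
```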
Problems during installation: the cython package had to be pinned to a specific version: cython==0.29.34. The hdbscan package could only be installed using "conda install -c conda-forge hdbscan". MS Visual C++ 14.31.31103 was already installed; MS Visual C++ 14.38.33135 was installed manually during the A-SOiD installation.
Unfortunately, this is a known issue with hdbscan installations on Windows machines. I am glad you found a solution.
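For anyone hitting the same Windows installation issue, the workaround reported in this thread amounts to the following commands (versions and channel taken from the report above; this is a setup fragment, not an officially documented install path):

```shell
# Pin cython to the version that worked in this report
pip install cython==0.29.34

# Install hdbscan from conda-forge instead of pip (avoids the
# Windows build failure described in this thread)
conda install -c conda-forge hdbscan
```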
Hi Jens, thank you very much. I will try again after you have fixed those sections. It is good to know that the CALMS21 dataset cannot simply be used as a demo file for all steps. I tried to use it as a "positive control" to check whether the errors were caused by my own input or by the tool. Best wishes, Michael
I uploaded a hotfix that should fix the issue. Can you install the new version and confirm this with me?
I haven't created a release tag for this yet, so please just use the latest version from GitHub.
Thank you Jens.
Now I could proceed further with my own data.
However, the TypeError: feature_extraction() takes 3 positional arguments but 4 were given
appeared again in the 'Discover' step. I uploaded ten pose files and pushed the 'Preprocess files' button; then the error appeared.
The AttributeError: 'NoneType' object has no attribute 'split'
with the CALMS21 dataset is still present and blocks the 'Predict' and 'Discover' steps (in case these steps are intended to be used with the CALMS21 dataset).
I uploaded a hotfix for the discovery step. It should work now.
Concerning the AttributeError, I am working on a solution and will upload it in the next few days. That should be limited to the CALMS21 dataset, though.
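As background on this class of error: `'NoneType' object has no attribute 'split'` usually means a value expected to be a string (for example, a file list read from a config) is `None`. A minimal defensive pattern, with the function name and semantics assumed for illustration only:

```python
# Hypothetical sketch: tolerate missing config entries instead of
# calling .split() on None.

def parse_file_list(raw):
    """Split a comma-separated config value, tolerating missing entries."""
    if raw is None or raw.strip() == "None":
        return []  # nothing configured for this project type
    return [item.strip() for item in raw.split(",")]

print(parse_file_list("a.csv, b.csv"))  # ['a.csv', 'b.csv']
print(parse_file_list(None))            # []
```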
Again, thank you Jens. I tried it again and the TypeError: feature_extraction() takes 3 positional arguments but 4 were given
was not raised again.
I found another error, but for this, I will open another issue.
Hi, I'm running into different errors.
Describe the bug: Errors during "Refine behavior" step using CALMS21 and own dataset
Error using CALMS21 (PAPER) dataset, converted to npy format. After training, error message after switching to "Refine Behaviors (optional)" tab:
Error with own dataset:
Input: pose DLC csv files
Annotation files: BORIS binary, delta t = 0.1 s, full video length
Video format: .avi
General: 20 annotated videos, ~1-1:30 min long; 60 videos with poses in total, ~1-1:30 min long; most of the time, no annotated action present
Default ASOiD settings used
Using all 20 annotation files with poses leads to only one training iteration during "Active training".
Using the first 10 annotation files with poses leads to six training iterations during "Active training".
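One detail worth checking with this setup: BORIS annotations exported at delta t = 0.1 s against 30 fps video mean each annotation row covers 3 frames, so the labels must be expanded to per-frame resolution before they align with the pose data. A sketch of that arithmetic, with assumed label encoding:

```python
# Hypothetical sketch: align BORIS binary annotations (delta t = 0.1 s)
# with 30 fps pose data. Each annotation row covers 30 * 0.1 = 3 frames.

FRAMERATE = 30  # from the project config above
DELTA_T = 0.1   # BORIS export interval in seconds

frames_per_row = round(FRAMERATE * DELTA_T)  # 3 frames per annotation row

def expand_labels(rows):
    """Repeat each per-interval label so there is one label per frame."""
    per_frame = []
    for label in rows:
        per_frame.extend([label] * frames_per_row)
    return per_frame

# 0 = 'other', 1 = 'Wiggle strong' (the encoding is an assumption)
print(expand_labels([0, 1, 0]))  # -> [0, 0, 0, 1, 1, 1, 0, 0, 0]
```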
Error during "Refine behaviors": adding one new video with a pose file. Error after pushing "Predict labels and create example videos":
Selecting the step "Create New Dataset" displays AttributeError: 'NoneType' object has no attribute 'split', as with the CALMS21 data set
Expected behavior: no errors; more training iterations when using 20 instead of 10 training data sets
Screenshots:
CALMS21 data set
Own data set:
Desktop: Windows 11 Pro, 10.0.22631, x64, German language
Packages in environment at C:\ProgramData\anaconda3:
Name                  Version   Build            Channel
_anaconda_depends     2023.09   py311_mkl_1
anaconda-anon-usage   0.4.2     py311hfc23b7f_0
anaconda-catalogs     0.2.0     py311haa95532_0
anaconda-client       1.12.1    py311haa95532_0
anaconda-cloud-auth   0.1.4     py311haa95532_0
anaconda-navigator    2.5.0     py311haa95532_0
anaconda-project      0.11.1    py311haa95532_0
General: Python 3.11.5; in the A-SOiD environment: Python 3.8.18
MS Edge used as Browser
Problems during installation: the cython package had to be pinned to a specific version: cython==0.29.34. The hdbscan package could only be installed using "conda install -c conda-forge hdbscan". MS Visual C++ 14.31.31103 was already installed; MS Visual C++ 14.38.33135 was installed manually during the A-SOiD installation.
Project Config (config.ini content), CALMS21:

[Project]
PROJECT_TYPE = CALMS21 (PAPER)
PROJECT_NAME = Feb-26-2024_test2
PROJECT_PATH = C:\Users\Rabenstein/Desktop/asoid_output
FRAMERATE = 30
KEYPOINTS_CHOSEN = nose, neck, hip_left, hip_right, tail_base
EXCLUDE_OTHER = False
INDIVIDUALS_CHOSEN = resident, intruder
CLASSES = attack, investigation, mount, other
MULTI_ANIMAL = True
IS_3D = False

[Data]
DATA_INPUT_FILES = C:\Users\Rabenstein\Python\A-SOID-main\calms21_task1_train.npy
LABEL_INPUT_FILES = C:\Users\Rabenstein\Python\A-SOID-main\calms21_task1_train.npy
TEST_DATA_INPUT_FILES = C:\Users\Rabenstein\Python\A-SOID-main\calms21_task1_test.npy
TEST_LABEL_INPUT_FILES = C:\Users\Rabenstein\Python\A-SOID-main\calms21_task1_test.npy

[Processing]
LLH_VALUE = 0.6
ITERATION = 0
MIN_DURATION = 0.4
TRAIN_FRACTION = 0.01
MAX_ITER = 100
MAX_SAMPLES_ITER = 40
CONF_THRESHOLD = 0.5
N_SHUFFLED_SPLIT = None
Own data (10 training sets):

[Project]
PROJECT_TYPE = DeepLabCut
PROJECT_NAME = Feb-26-2024_test2
PROJECT_PATH = C:\Users\Rabenstein/Desktop/asoid_output
FRAMERATE = 30
KEYPOINTS_CHOSEN = Nose, Front_Right_1, Front_Right_2, Front_Left_1, Front_Left_2, Rear_Right_1, Rear_Right_2, Rear_Left_1, Rear_Left_2, Midline_Front, Midline_Center, Midline_Rear
EXCLUDE_OTHER = False
FILE_TYPE = csv
INDIVIDUALS_CHOSEN = single animal
CLASSES = Wiggle strong, Wiggle weak, other
MULTI_ANIMAL = False
IS_3D = False

[Data]
DATA_INPUT_FILES = GOPR0612DLC_resnet50_Wiggle_TestJan26shuffle1_100000.csv, GOPR0613DLC_resnet50_Wiggle_TestJan26shuffle1_100000.csv, GOPR0614DLC_resnet50_Wiggle_TestJan26shuffle1_100000.csv, GOPR0615DLC_resnet50_Wiggle_TestJan26shuffle1_100000.csv, GOPR0616DLC_resnet50_Wiggle_TestJan26shuffle1_100000.csv, GOPR0617DLC_resnet50_Wiggle_TestJan26shuffle1_100000.csv, GOPR0618DLC_resnet50_Wiggle_TestJan26shuffle1_100000.csv, GOPR0619DLC_resnet50_Wiggle_TestJan26shuffle1_100000.csv, GOPR0620DLC_resnet50_Wiggle_TestJan26shuffle1_100000.csv, GOPR0621DLC_resnet50_Wiggle_TestJan26shuffle1_100000.csv
LABEL_INPUT_FILES = GOPR0612.avi_No focal subject.tsv, GOPR0613.avi_No focal subject.tsv, GOPR0614.avi_No focal subject.tsv, GOPR0615.avi_No focal subject.tsv, GOPR0616.avi_No focal subject.tsv, GOPR0617.avi_No focal subject.tsv, GOPR0618.avi_No focal subject.tsv, GOPR0619.avi_No focal subject.tsv, GOPR0620.avi_No focal subject.tsv, GOPR0621.avi_No focal subject.tsv
ROOT_PATH = None

[Processing]
LLH_VALUE = 0.1
ITERATION = 0
MIN_DURATION = 0.1
TRAIN_FRACTION = 0.05
MAX_ITER = 100
MAX_SAMPLES_ITER = 30
CONF_THRESHOLD = 0.5
N_SHUFFLED_SPLIT = None
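Since several of the errors above trace back to config values, one pitfall worth noting: `configparser` reads every value back as a string, so an entry like `N_SHUFFLED_SPLIT = None` comes back as the literal string `"None"`, not Python `None`. A sketch using keys from the [Processing] section posted above; the handling of `"None"` is an illustrative assumption, not A-SOiD's actual parsing:

```python
import configparser

# Minimal excerpt of the [Processing] section from the report above
ini_text = """
[Processing]
LLH_VALUE = 0.1
MAX_ITER = 100
N_SHUFFLED_SPLIT = None
"""

config = configparser.ConfigParser()
config.read_string(ini_text)
proc = config["Processing"]

llh = proc.getfloat("LLH_VALUE")          # parsed to float 0.1
max_iter = proc.getint("MAX_ITER")        # parsed to int 100
raw_split = proc.get("N_SHUFFLED_SPLIT")  # the string "None", not None!
n_split = None if raw_split == "None" else int(raw_split)

print(llh, max_iter, n_split)
```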
Best wishes, Michael Rabenstein