Hi and thank you for working with our project!
You are right, we don't have this included yet, but I am happy to help you with getting started.
I downloaded your example dataset and was wondering why it does not include the likelihood values of the DLC model.
However, I put together a quick script that you can use to get your data into the right format for creating the training dataset. Just replace the csv_to_numpy.py script with the function below and you should be set. If you encounter any further issues, just reach out here or via e-mail.
Cheers, Kevin
"""
Variational Animal Motion Embedding 1.0-alpha Toolbox
© K. Luxem & P. Bauer, Department of Cellular Neuroscience
Leibniz Institute for Neurobiology, Magdeburg, Germany
https://github.com/LINCellularNeuroscience/VAME
Licensed under GNU General Public License v3.0
"""
import os
import numpy as np
import pandas as pd
from pathlib import Path
from vame.util.auxiliary import read_config
def csv_to_numpy(config, datapath):
"""
This is a function to convert your pose-estimation.csv file to a numpy array.
Note that this code is only useful for data which is a priori egocentric, i.e. head-fixed
or otherwise restrained animals.
example use:
vame.csv_to_npy('pathto/your/config/yaml', 'path/toYourFolderwithCSV/')
"""
config_file = Path(config).resolve()
cfg = read_config(config_file)
path_to_file = cfg['project_path']
filename = cfg['video_sets']
for file in filename:
# Read in your .csv file, skip the first two rows and create a numpy array
data = pd.read_csv(datapath+file+'.csv', skiprows = 3, header=None)
data_mat = pd.DataFrame.to_numpy(data)
data_mat = data_mat[:,1:]
# save the final_positions array with np.save()
np.save(os.path.join(path_to_file,'data',file,file+"-PE-seq.npy"), data_mat.T)
print("conversion from DeepLabCut csv to numpy complete...")
print("Your data is now ine right format and you can call vame.create_trainset()")
@kvnlxm Thank you for your response!
First, DLC's 3D pose results do not include likelihood values, only x/y/z coordinates.
I replaced the csv_to_numpy.py script to create the training dataset and succeeded with Step 1.3: vame.create_trainset(config), but in Step 2: vame.train_model() I got errors. Here is the terminal output:
```
In [4]: vame.csv_to_numpy(config, datapath='C:\Users\Citydo\Documents\code\VAME3D\3d-vame-Jun10-2022\videos\pose_estimation\')
conversion from DeepLabCut csv to numpy complete...
Your data is now ine right format and you can call vame.create_trainset()

In [5]: vame.create_trainset(config)
Creating training dataset...
Using robust setting to eliminate outliers! IQR factor: 4
z-scoring of file camera-1-hand
IQR value: 2.10, IQR cutoff: 8.42
Lenght of train data: 450
Lenght of test data: 50
A training and test set has been created. Now everything is ready to train a variational autoencoder via vame.train_model() ...

In [6]: vame.train_model(config)
Train Variational Autoencoder - model name: VAME

Using CUDA
GPU active: True
GPU used: NVIDIA GeForce RTX 3080 Laptop GPU
Latent Dimensions: 30, Time window: 30, Batch Size: 256, Beta: 1, lr: 0.0005
Compute mean and std for temporal dataset.
Initialize train data. Datapoints 450
Initialize test data. Datapoints 50
Scheduler step size: 100, Scheduler gamma: 0.20

RuntimeError                              Traceback (most recent call last)
```
Hi,
Please change the value of the num_features setting in your config.yaml to 28 and try again.
Best, Pavol
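If you want to double-check which value fits your data, you can print the num_features entry next to the feature dimension of the converted arrays. This is just a rough sanity-check sketch, not part of VAME, and it follows the file layout used by the conversion script above:

```python
import numpy as np
from pathlib import Path
from vame.util.auxiliary import read_config

# Rough sanity check (not part of VAME): compare num_features in config.yaml
# with the feature dimension of the converted "-PE-seq.npy" arrays.
cfg = read_config(Path('pathto/your/config/yaml').resolve())
print("config.yaml num_features:", cfg['num_features'])

for file in cfg['video_sets']:
    seq = np.load(Path(cfg['project_path']) / 'data' / file / (file + "-PE-seq.npy"))
    print(file, "->", seq.shape[0], "features over", seq.shape[1], "frames")
```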
@pavolbauer Oh, thank you! I changed the value of num_features to 30 and it worked!
First of all, thank you for your awesome project!
Do you plan to support the 3D csv files of a DeepLabCut project as input?
Or would you please tell me how to make the training dataset using the 3D csv files?
Here is one of the 3D csv files: hand_DLC_3D.csv
Looking forward to hearing from you!