mlfpm / deepof

DeepLabCut-based data analysis package, including pose estimation and representation-learning-mediated behavior recognition
MIT License

Issue while running .create method #5

Closed: Lucas97223 closed this issue 1 year ago

Lucas97223 commented 1 year ago

I am trying to run the instruction `project.create(verbose=True)` as indicated on the GitHub page, but I unfortunately meet this error message:

```
Intel MKL ERROR: Parameter 6 was incorrect on entry to DGELSD.
Traceback (most recent call last):
  File "c:\users\lopezlab\deepof tests.py", line 11, in <module>
    my_deepof_project = my_deepof_project.create(verbose=True)
  File "c:\users\lopezlab\anaconda3\lib\site-packages\deepof\data.py", line 745, in create
    tables, quality = self.load_tables(verbose)
  File "c:\users\lopezlab\anaconda3\lib\site-packages\deepof\data.py", line 516, in load_tables
    deepof.utils.smooth_mult_trajectory(
  File "c:\users\lopezlab\anaconda3\lib\site-packages\deepof\utils.py", line 878, in smooth_mult_trajectory
    smoothed_series = savgol_filter(
  File "c:\users\lopezlab\anaconda3\lib\site-packages\scipy\signal\_savitzky_golay.py", line 352, in savgol_filter
    _fit_edges_polyfit(x, window_length, polyorder, deriv, delta, axis, y)
  File "c:\users\lopezlab\anaconda3\lib\site-packages\scipy\signal\_savitzky_golay.py", line 223, in _fit_edges_polyfit
    _fit_edge(x, 0, window_length, 0, halflen, axis,
  File "c:\users\lopezlab\anaconda3\lib\site-packages\scipy\signal\_savitzky_golay.py", line 193, in _fit_edge
    poly_coeffs = np.polyfit(np.arange(0, window_stop - window_start),
  File "<__array_function__ internals>", line 180, in polyfit
  File "c:\users\lopezlab\anaconda3\lib\site-packages\numpy\lib\polynomial.py", line 668, in polyfit
    c, resids, rank, s = lstsq(lhs, rhs, rcond)
  File "<__array_function__ internals>", line 180, in lstsq
  File "c:\users\lopezlab\anaconda3\lib\site-packages\numpy\linalg\linalg.py", line 2300, in lstsq
    x, resids, rank, s = gufunc(a, b, rcond, signature=signature, extobj=extobj)
  File "c:\users\lopezlab\anaconda3\lib\site-packages\numpy\linalg\linalg.py", line 101, in _raise_linalgerror_lstsq
    raise LinAlgError("SVD did not converge in Linear Least Squares")
numpy.linalg.LinAlgError: SVD did not converge in Linear Least Squares
```

What should I do? Thanks in advance

lucasmiranda42 commented 1 year ago

Hi, @Lucas97223, and thank you for your interest in deepof!

The traceback suggests a problem with MKL while smoothing the time series. May I ask you on which platform you're running the package and how you installed it? I may be able to provide better help from there :)

Best, Lucas

Lucas97223 commented 1 year ago

I use Python `3.10.9 | packaged by Anaconda, Inc. | (main, Mar 1 2023, 18:18:15) [MSC v.1916 64 bit (AMD64)]`. Do you need any other info, or is that enough?

lucasmiranda42 commented 1 year ago

Thanks! Which operating system are you using? Could you try creating a new conda environment with python 3.9 and installing deepof there using pip?

The commands should in principle look like this:

```bash
conda create --name deepof python=3.9  # create a new conda environment using python 3.9
conda activate deepof                  # activate the new environment
pip install deepof                     # install deepof via pip
```

Once installed, the commands you tried should hopefully work. Give it a go and let me know if you still have issues!

Lucas97223 commented 1 year ago

Ok, thank you. I will let you know.

Lucas97223 commented 1 year ago

Just did it; I now have this error message:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\LopezLab\anaconda3\envs\DeepOF\lib\site-packages\deepof\data.py", line 745, in create
    tables, quality = self.load_tables(verbose)
  File "C:\Users\LopezLab\anaconda3\envs\DeepOF\lib\site-packages\deepof\data.py", line 428, in load_tables
    loaded_tab = loaded_tab.T.reset_index(drop=False).T
  File "C:\Users\LopezLab\anaconda3\envs\DeepOF\lib\site-packages\pandas\core\frame.py", line 3698, in T
    return self.transpose()
  File "C:\Users\LopezLab\anaconda3\envs\DeepOF\lib\site-packages\pandas\core\frame.py", line 3689, in transpose
    new_arr = self.values.T
  File "C:\Users\LopezLab\anaconda3\envs\DeepOF\lib\site-packages\pandas\core\frame.py", line 11739, in values
    return self._mgr.as_array()
  File "C:\Users\LopezLab\anaconda3\envs\DeepOF\lib\site-packages\pandas\core\internals\managers.py", line 1770, in as_array
    arr = self._interleave(dtype=dtype, na_value=na_value)
  File "C:\Users\LopezLab\anaconda3\envs\DeepOF\lib\site-packages\pandas\core\internals\managers.py", line 1817, in _interleave
    arr = blk.get_values(dtype)
  File "C:\Users\LopezLab\anaconda3\envs\DeepOF\lib\site-packages\pandas\core\internals\blocks.py", line 1914, in get_values
    return self.values.astype(_dtype_obj)
MemoryError
```

lucasmiranda42 commented 1 year ago

Dear Lucas,

I see that DeepOF seems to be throwing a memory error at the very first step of the pipeline (loading the DLC tracklets from disk). How many videos are you loading? And how long are they?

Maybe trying with a toy dataset (such as the one you can find here) is a good start. You can also try to reduce your dataset to a few videos for testing, and use an HPC cluster for the real thing, where memory resources are not a problem.

Let me know if this helps!

Best, Lucas

Lucas97223 commented 1 year ago

Hey, I figured out the problem: it was coming from the workstation I was using. Basically, it didn't have enough RAM for the size of my project. But problem solved! I just switched to another workstation/computer.

Besides, still running the create method, I run into other problems. When performing the "Iterative imputation of ocluded bodyparts" step, the program returns `[IterativeImputer] Early stopping criterion not reached`, once per video. The error message that follows is:

```
  File "C:\Users\LopezLab\anaconda3\envs\Deepof\lib\site-packages\deepof\data.py", line 748, in create
    tables.keys() == self.exp_conditions.keys()
AssertionError: experimental IDs in exp_conditions do not match.
```

I think it is a problem with the exp_conditions argument (the dictionary) of deepof.data.Project. My question is therefore: what form is this dictionary supposed to take? For example, what would it be if I have 10 experiments, half with control mice and half with genetically modified mice?

Thank you for your answers.

lucasmiranda42 commented 1 year ago

Dear Lucas,

Glad you solved the memory problem! Please open new individual issues in the future when new questions arise; that way answers are easier to find for other users šŸ˜ƒ

Regarding your current points:

1) `[IterativeImputer] Early stopping criterion not reached`

That's a warning from the iterative imputation algorithm that DeepOF runs to fill in missing values due to occlusions. It seems that, by default, only one iteration is executed, which is suboptimal. I just updated the default parameters, so this shouldn't be a problem anymore if you update. If you don't want to update, however, you can easily adjust the number of iterations the algorithm runs for by setting `enable_iterative_imputation` to a higher integer (e.g., 250) in the Project constructor.
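For illustration, a minimal sketch (the paths and project name below are placeholders, not a real setup):

```python
import deepof.data

# Rough sketch with placeholder paths; enable_iterative_imputation is the
# constructor argument mentioned above, raised here to run more iterations.
my_project = deepof.data.Project(
    project_path="/path/to/project",
    video_path="/path/to/videos",
    table_path="/path/to/tables",
    project_name="my_experiment",
    enable_iterative_imputation=250,  # run the imputer for up to 250 iterations
)
my_deepof_project = my_project.create(verbose=True)
```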

2) Could you show me how you're loading the experimental conditions when calling the Project constructor? I would leave that parameter blank and load the conditions afterwards from a CSV file using the `.load_exp_conditions()` method, as described in this tutorial.
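To make that concrete, here is a rough sketch. The CSV file name and layout are assumptions on my side (one row per experiment, with IDs matching your video/table names); the exact format expected by `.load_exp_conditions()` is described in the tutorial linked above:

```python
import deepof.data

# Sketch only: create the project without passing exp_conditions...
my_project = deepof.data.Project(
    project_path="/path/to/project",
    video_path="/path/to/videos",
    table_path="/path/to/tables",
    project_name="my_experiment",
)
my_deepof_project = my_project.create(verbose=True)

# ...then attach the conditions afterwards from a CSV file.
# Assumed layout of conditions.csv (hypothetical IDs and column names):
#   experiment_id,genotype
#   video_01,ctrl
#   video_02,gm
#   ...
my_deepof_project.load_exp_conditions("/path/to/conditions.csv")
```

If you do pass the dictionary directly instead, note that the assertion in your traceback compares its keys against `tables.keys()`, so for your ten-experiment example the keys would need to be the experiment IDs themselves (e.g. `{"video_01": "ctrl", ..., "video_06": "gm", ...}`, with hypothetical IDs here) rather than integers.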

Hope this helps, and let me know if you have any more questions! Best, Lucas

Lucas97223 commented 1 year ago

Ok, I'm sorry. Do you want me to create a separate issue?

  1. I'm gonna try upgrading and see if it works.
  2. I hadn't seen that page on the GitHub; I'm gonna try that. In case it helps, this is how I called the constructor:

```python
Project = deepof.data.Project(
    project_path="C:/Users/LopezLab/Desktop",
    video_path="C:/Users/LopezLab/Desktop/lopez_lab/DATA/right_videos",
    table_path="C:/Users/LopezLab/Desktop/lopez_lab/DATA/right_H5",
    project_name="draft_deepof",
    exp_conditions={1: "ctrl"},
)
```

As I didn't know what to put in the dictionary, I ran it like that.

Thanks

lucasmiranda42 commented 1 year ago

No problem! Here you can access the entire documentation, in case you hadn't seen it before šŸ˜ƒ

I'll close this issue for now since the original problem was solved, but do feel free to open new threads if new problems arise! We're happy to help :)

Best, Lucas