Missing Folders (Issue #1) · Open · LLaxman93 opened this issue 2 years ago
Hi Llaxman,
Thanks for pointing this out. The repo has been updated.
Best, Nahuel
Hi Nahuel,
Thank you for updating, but some files needed to run the Python code are still missing.
For example, the data you uploaded is in .mat format, and it only contains test data, not training data.
Could you please confirm this and maybe upload the correct files?
With Regards, Llaxman
Hi Llaxman,
Thank you for your interest. The data used in this study were generated with the Alawa toolbox, which is programmed in MATLAB, so the files are in .mat format. You can find the training data either in the paper or in the readme of this repo, and the test data in the /mat folder. You can convert the data to a Python format with the save_data or read_mat functions in utils.py.
Best, Nahuel
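A minimal sketch of such a conversion, assuming SciPy is available; the repo's own save_data/read_mat helpers in utils.py are the intended route, and the file path and variable handling below are assumptions:

```python
# Hedged sketch: convert a .mat file to a Python-friendly format.
# The repo's save_data/read_mat in utils.py are the canonical helpers;
# their exact signatures are not shown in this thread, so SciPy is
# used directly here, and the file path is an assumption.
import numpy as np
from scipy.io import loadmat

mat = loadmat("mat/x_test_1.mat")
keys = [k for k in mat if not k.startswith("__")]  # drop MATLAB metadata keys
print("variables in file:", keys)

x_test = np.asarray(mat[keys[0]])
np.save("x_test_1.npy", x_test)  # NumPy binary, loadable with np.load
```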
Hi Nahuel, I would like to ask about the training set. From the paper I understand that data up to 40% capacity loss are used for training, and that the training data are selected from public datasets, such as LFP: http://dx.doi.org/10.17632/bs2j56pn7y. Does it really contain more than 700,000 samples? I also found x_test_0.mat in /mat in the code files; is this meant to be the training data? I look forward to your reply. Thank you very much.
Hi 1651682636,
As you say, the training data are public and initially consist of around 700,000 voltage curves, which undergo a pre-processing that is explained in the paper and implemented in the get_data function of utils.py. For the test data, duty cycles were generated for 3 cells with slightly different parameters than the training cell; they correspond to the files x_test_1.mat, x_test_2.mat and x_test_3.mat.
Hope this makes it clearer for you.
Best, Nahuel
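A minimal sketch of loading those three test files, assuming they sit in the /mat folder and each holds a single data variable (the variable-name handling below is an assumption):

```python
# Hedged sketch: load the three test cells mentioned above.
from scipy.io import loadmat

x_test = {}
for cell in (1, 2, 3):
    mat = loadmat(f"mat/x_test_{cell}.mat")
    # keep the single non-metadata variable stored in the file
    name = next(k for k in mat if not k.startswith("__"))
    x_test[cell] = mat[name]
    print(f"cell {cell}: array of shape {x_test[cell].shape}")
```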
Hi Nahuel, Thank you very much for your answer. I would also like to know about x_test_0.mat. Was this file produced using the same LR and OFS as the training set? Thanks again for your reply. Best, 1651682636
Hi 1651682636,
You are correct, x_test_0.mat was produced using the same LR and OFS as the training set. It was only used for debugging purposes and is no longer available in the repository, as it was not used for experimental results.
Best, Nahuel
Hi Nahuel,
Thank you very much for your answer. I have a clearer idea of the problem now. Thanks again!
Best, 1651682636
Hi Nahuel, When I convert the data to a Python format with save_data, I get: ValueError: object too deep for desired array
Hi tq22277848,
Which array is this happening on? It looks like you are passing a vector of higher dimensionality to a function that expects a specific dimension. This usually happens when you have a vector of shape (x, 1), which is 2-dimensional, and treat it as a 1-dimensional vector. However, without more information I can't tell what the problem is.
Thanks, Nahuel
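A minimal reproduction of the shape mismatch described here; MATLAB column vectors load into Python as 2-D (n, 1) arrays, which routines expecting 1-D input can reject with exactly this error:

```python
# Sketch of the (x, 1) vs (x,) issue: loadmat returns column vectors
# as 2-D arrays, too "deep" for functions that expect 1-D input.
import numpy as np

v = np.ones((5, 1))   # 2-D column vector, as loadmat typically returns
print(v.ndim)         # 2 -> can trigger "object too deep for desired array"
v = v.squeeze()       # or v.ravel(); collapses to shape (5,)
print(v.shape)        # (5,)
```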
Hi Nahuel,
When I use the LFP data to create the training set, I call the save_data function. I changed the three reference indexes info, V, V to pathinfo, volt part1 and volt Part1; does this modification correspond correctly?
Thanks, tq22277848
Hi Nahuel,
I have another question. Could you provide the weight file checkpoints/checkpoint? I look forward to your reply!
Thanks, tq22277848
Hi tq22277848,
For getting the mat files you should execute Read_me_VvsQmatrix.m from here. Once you do that, you should have two variables, "pathinfo" and "volt", in the workspace. You need to save those variables and copy the resulting files to the mat/LFP folder, and then execute the save_data function (I updated the repo because the path to this folder was incorrect, so you may want to clone/pull the repo).
As for the weight file, you have all the models' weights in the /saved folder in h5 format. You can also check the experimentalResults file to see how the weights of each model are loaded.
Thanks for your interest, Nahuel
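A hedged sketch of loading those h5 weights, assuming the models are Keras models; the file name and build_model() below are placeholders, not names confirmed by the repo (see the experimentalResults file for the real loading code):

```python
# Hedged sketch: load h5 weights from /saved, assuming Keras.
# "saved/model.h5" and build_model() are placeholder assumptions.
from tensorflow import keras

# if the h5 file contains a full saved model:
model = keras.models.load_model("saved/model.h5")

# if it contains weights only, rebuild the architecture first:
# model = build_model()                # placeholder constructor
# model.load_weights("saved/model.h5")
```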
Hi Nahuel,
I did as you said, but when I run the save_data function, I run into problems with the data dimension indexing. I am stuck now and look forward to your help.
Thanks, tq22277848
Hi Nahuel,
If I want to use the training set to run the 'train' program, there is no checkpoints/checkpoint file, so there is no way to save the new h5 file.
Thanks, tq22277848
Hi tq22277848,
Could you attach a screenshot of the problem you are having? I can't help you if you don't give me concrete information. You can send it to my personal email if you prefer (costanahuel@uniovi.es).
As for the train program, the checkpoints/checkpoint file is created during training; you don't need it beforehand. However, what you do need is to create an empty folder named "checkpoints". GitHub does not allow uploading empty folders, which is why this folder is not in my repository.
Thanks, Nahuel
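A one-liner that covers this step, creating the folder if it does not already exist:

```python
# Create the empty "checkpoints" folder before training, since Git
# does not track empty directories and the checkpoint file itself
# is only written during training.
import os

os.makedirs("checkpoints", exist_ok=True)
```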
Hi Nahuel
I have sent my modification and current problem to your email, and I look forward to your help.
Thanks, tq22277848
Hi Nahuel,
According to your instructions, I first processed the new file in MATLAB, but it still reported an error when running; I have sent the details to your email. I look forward to your help.
Thanks, tq22277848
The data and saved folders are missing from the repository.