Closed ralphpeterson closed 2 years ago
Hi Ralph,
This is indeed weird - I'm not sure what the issue is.
Just to be safe, update to the latest version of DAS, which is v0.28.3, with `pip install das --upgrade --no-deps --force-reinstall`. Maybe we have fixed the bug that caused your issue.
You can also check the training data - for instance, that the audio and the annotations are correctly aligned. The following snippet will load the dataset and plot the `train` data; you can modify it to also check the `test` and `val` parts:
```python
import matplotlib.pyplot as plt

import das.npy_dir

store = das.npy_dir.load(path_to_dataset_folder)  # path to the exported .npy dataset folder

plt.subplot(211)
plt.plot(store['train']['x'][:], 'k')  # audio
plt.subplot(212)
plt.plot(store['train']['y'][:])       # annotations
plt.show()
```
The above code is adapted from https://janclemenslab.org/das/tutorials/make_ds_notebook.html#inspect-the-dataset.
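Beyond eyeballing the plots, a quick programmatic sanity check is that `x` and `y` cover the same number of samples in every split. A minimal sketch - using synthetic arrays as a stand-in for the loaded store, and assuming the `train`/`val`/`test` keys and per-sample `x`/`y` layout shown above:

```python
import numpy as np

# Synthetic stand-in for das.npy_dir.load(...): x is the audio,
# y the per-sample annotation matrix (sizes here are made up).
store = {
    split: {'x': np.random.randn(n, 1),
            'y': np.zeros((n, 2))}
    for split, n in [('train', 10_000), ('val', 2_000), ('test', 2_000)]
}

for split, data in store.items():
    x, y = data['x'], data['y']
    # Audio and annotations must be aligned sample-for-sample.
    assert x.shape[0] == y.shape[0], f"{split}: x/y length mismatch"
    print(split, x.shape[0], 'samples OK')
```

If the assertion fires for any split, the audio and annotations were not exported together correctly.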
I'm also happy to have a quick look at the data and the training myself if you want to share the dataset with me - maybe I can spot the issue.
After emailing with Jan, the conclusion is that this issue was due to incomplete annotations.
For example, for a 15-minute file with thousands of vocalizations, my understanding was that I could annotate a subset of the file, train a model with that subset, and use the resulting trained model to predict on the rest of the file. Instead, one should either (1) label all vocalizations present in a file, or (2) use "File/Export for DAS" and set the start/stop time for the chunk(s) you would like to export.
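One way to catch this before training is to check what fraction of samples in `y` carry a non-noise label: in a fully annotated chunk this should roughly match the density of vocalizations, while an incompletely annotated file looks almost entirely unlabeled. A hedged sketch with a synthetic `y`, assuming the convention that column 0 of `y` is the noise/no-song class and later columns are the song types:

```python
import numpy as np

# Synthetic annotation matrix: 100_000 samples, 2 classes (noise, vox),
# with only a short stretch labeled as vocalization.
y = np.zeros((100_000, 2))
y[:, 0] = 1.0            # everything starts as noise
y[5_000:6_000, 0] = 0.0
y[5_000:6_000, 1] = 1.0  # 1% of samples labeled 'vox'

labeled_fraction = (y[:, 1:].sum(axis=1) > 0).mean()
print(f"fraction of samples with a non-noise label: {labeled_fraction:.3f}")
```

If this fraction is near zero for a file that is full of vocalizations, the annotations are likely incomplete and the model will be trained to predict silence.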
Thanks again for your help Jan!
Hello and thank you for DAS!
I have successfully trained some models and predicted segment instances on recordings of gerbil vocalizations. In trying to optimize predictions, I used these models to predict segments on new files, corrected predictions, then exported annotations for training. After training a new model with these updated annotations (~2000 vocalizations), I am mysteriously no longer able to predict with the new model. No vocalizations are predicted and no errors are thrown.
I looked through the test set report and see that my vocalization labels (`vox`) have values of 0 for their F1 score, precision, and recall. I'm hoping that there's something obvious that I am overlooking - any assistance you could provide would be much appreciated! I am running version `0.28.1` on a Windows workstation.

Best, Ralph
From the test set report: