@Tam-Pham @zen-juen @JanCBrammer @hungpham2511 @sangfrois @CSchoel
We are getting closer and closer to an initial release (0.1.0), which hopefully will be marked by the submission of a first paper.
For this, I'd need you to review the manuscript, make changes and add whatever you want to add (I've run dry of inspiration in the conclusion section). The manuscript is the .Rmd file, which can be edited with a regular notepad (the text is in markdown). You can find a compiled version in the PDF.
Also, do verify your name & affiliations. Note for @sangfrois and @CSchoel that I've removed the special characters from the affiliations for now; waiting for https://github.com/crsh/papaja/issues/360 to fix it.
Regarding the outlet, I'm still not 100% certain (opinions are welcome) but I'd like to try a general journal first, before trying for software-specific journals like JOSS. I think it would be a good fit for Journal of Neuroscience Methods, but we could potentially try Scientific Reports first, as they do publish software as well.
Hey @DominiqueMakowski !! Last time I read it, I wasn't sure of the direction the paper was headed. Now I have a better idea and I think my comments can have a more effective impact. You can see my latest commit.
I've made some changes to the intro and proposed a way to wrap up on your first statement in the conclusion (as you were looking for some inspiration). Let me know if you like it. Either way, I was planning to commit again in a couple of hours once I have my daily dose of sun rays.
PS: I totally agree with you for the journal!
And... I wanted to say: good job! I think the points that are made in this paper address issues that are super relevant for more than one area of neuropsychological research. Though, to me, a lot of very important details are still missing and have to be grounded in the literature. I mean, this can be even more solid than we might think.
So, after bathing in the sun and thinking about this...
I still have some more ideas, but I'm tired. Will be back on it first thing in the morning. I'm quite excited by this, actually.
cool, go ahead and I'll make a revision pass once you're done.
Ok! Done.
I pushed my last modifications. I'll wait for your revisions. Hope this helped and will be appreciated. Really.
I have to state again: super excited by this and I think it has great potential.
PS: definitely, Journal of Neuroscience Methods is a good fit!
Hi, I would really appreciate some feedback. Especially since the concluding remarks I wrote were wiped out, and that was the thing @DominiqueMakowski was looking for. No stress, I'd just like to know why it didn't fit and which way we should aim for this last section.
mmh I don't think we wiped anything out on purpose, I didn't have the chance to look at your changes yet, maybe something went wrong with the githubbing!
Cool, that's what I had in mind! I think @Tam-Pham didn't pull the dev branch before pushing the python tutorial :)
No worries, let's thank the Git gods for keeping versions of everything :)
Done!
Made some edits on phrasing/grammar in the manuscript and elaborated a bit on how the pipeline facilitates interpretations about cognition/affectivity! However, I think there are some paragraphs where the tone and coherence need further revision to improve readability :)
The first paragraph on Research Gaps doesn't seem to flow too well - may need more looking into it :)
@Tam-Pham and I have looked through the most updated version of the manuscript and created a new Rmd (*_tamzen.Rmd), as we feel this will make it easier to track who makes what changes, given that multiple collaborators are on this.
The major comments can be found on the commit. Other amendments are quite minor (mainly grammatical and expressions), do take a look and let us know what you think!
I (finally) finished revising, mostly streamlining to avoid too much repetition and keep it short n' sweet. @JanCBrammer could you give it a look to see how you find it?
@DominiqueMakowski, see #200.
I would love to avoid Elsevier journals.
Me too... but...
That's a really complex issue you brought up, and as much as I despise the current publishing system, it's not that easy yet to completely avoid all for-profit publishers, especially since some OA publishers are also questionable.
In a perfect world, I'd 100% agree with going for a full OA journal, but for now, their academic recognition is still, sometimes - very, unfortunately - not up to what it should be.
Concretely, one of the best alternatives would be JOSS, which I truly enjoy and support, and I remember that you also were quite keen on going with it. However, having published several papers there, I also know that these were often looked at weirdly by recruiters, as not "real" papers (which, again, is terrible, given that writing software is as much - if not more - of a feat than a regular study, and usually has way more scientific impact...). And since this paper includes mostly young researchers, for whom the publication venue will matter for their careers, I thought it might be good to try first with a "regular" (i.e., not software) journal.
We could try Scientific Reports (although I'm not 100% sure that it fits their scope, one can always try, especially as I've heard they have a quick decision process), which is not Elsevier, Psychophysiology (Wiley), Advances in Methods and Practices in Psychological Science (APS), or Journal of Neuroscience Methods (indeed Elsevier, but TBH it seems like the best outlet based on its scope...).
Anyway, I'll now send the manuscript to my PI for revision, and also discuss with her the options, and we'll see. Feel free to share more thoughts!
(On a side note, my personal answer to this publisher-issue is to make all my papers accessible on my website (although I expect to receive a letter someday).)
@DominiqueMakowski, I completely agree with your assessment that pragmatism beats the moral high ground. Also, you make an interesting point regarding the recognition of JOSS within "established" academic circles; I wasn't aware of that. Just wanted to raise the issue (probably mostly to put my mind at ease :D). It's really nice to know that you seem to have spent quite some thought on this already (which in hindsight does not surprise me)!
> (On a side note, my personal answer to this publisher-issue is to make all my papers accessible on my website (although I expect to receive a letter someday).)
:heart:
There are some journals in this post that haven't been mentioned yet.
Phew, 24 days since @DominiqueMakowski asked for comments and I just got around to reading the manuscript. Sorry for being late to the party. :sweat_smile:
I think when it comes to Neuroscience, I qualify as the "interested layman" that we are supposed to keep in mind when writing scientific texts and I really like the current state of the paper. It gives a good impression of what Neurokit is and why it is needed without becoming too technical. :+1: I just have a few remarks:
Regarding the journal I would also prefer to avoid Elsevier for the reasons @JanCBrammer mentioned, but I can also understand the point @DominiqueMakowski is making. Unfortunately I am not familiar enough with the domain to give a concrete suggestion.
> There are some journals in this post that haven't been mentioned yet.
@JanCBrammer which one caught your eye as a potential candidate?
@CSchoel thanks for your input! I made some adjustments based on your comments.
Regarding cooperative editing in git: I like to have a line break after each sentence, so that git can isolate changes on the level of sentences instead of paragraphs. I think this should also be possible with R markdown and might be worth considering when there are more revisions planned or warranted by reviewers.
Very good point indeed! We'll try that for future revisions!
Would it be worthwhile to give an overview of what data types and algorithms are currently supported in Neurokit in tabular form? I think this would help readers to decide whether they can use NeuroKit2 for their specific task and it could increase visibility of the paper when people search for implementations of specific algorithms.
True, the only issue I'm worried about is that this would sort of set in stone (in appearance) the features that we have, which goes against the whole evolutive aspect. If we say "we have this, this and that algorithm", I'm not sure it will do justice to what might be there in the future. I mean, we could still stress that these are the features available at the current date, but still, you know.
Also, I'm not sure how to concretely make this table, 'cause we have different features with different methods for different signal types, and there are like crosslinks and overlaps :thinking: If anybody has a clearer picture in mind, feel free to illustrate :relaxed:
> Regarding cooperative editing in git: I like to have a line break after each sentence, so that git can isolate changes on the level of sentences instead of paragraphs. I think this should also be possible with R markdown and might be worth considering when there are more revisions planned or warranted by reviewers.

@CSchoel, @DominiqueMakowski, this is an amazing suggestion. It was bugging me that the diffs would show up on a paragraph level.
@DominiqueMakowski, without having looked into them in much detail, I think we could consider eLife or F1000 Research.
> True, the only issue I'm worried about is that this would sort of set in stone (in appearance) the features that we have, which goes against the whole evolutive aspect. If we say "we have this, this and that algorithm", I'm not sure it will do justice to what might be there in the future. I mean, we could still stress that these are the features available at the current date, but still, you know.

You are right, that could be a problem. Maybe we could just add a sentence to the table caption like "Please keep in mind that this table represents version x.y.z of NeuroKit2 from [date]."

> Also, I'm not sure how to concretely make this table, 'cause we have different features with different methods for different signal types and there are like crosslinks and overlaps :thinking: If anybody has a clearer picture in mind, feel free to illustrate :relaxed:

I was thinking about the table in terms of keywords that someone might want to search for. Maybe something like:
Goal | Datatype | Methods |
---|---|---|
peak detection | ECG | pantompkins1985, hamilton2002, ... |
- | RSP | khodadad2018, biosppy |
- | EDA | neurokit, gamboa2008, kim2004 |
delineate QRS complex | ECG | peak, cwt, dwt |
If the methods have individual names that researchers might search for, they could also appear in the last column like "FancyPeak (pantompkins1985)". One interesting question is also how fine- or coarse-grained this table should be. I think a good guideline could be to stay on the level of what is described in the "Functions" section of the package documentation. However, I noticed that this documentation, for example, only briefly mentions sample entropy. So if I am just interested in calculating the sample entropy of my data, NeuroKit probably would not be my first choice, simply because it is not so obvious that this measure can be calculated with this toolkit. This could be an argument for a more fine-grained approach to the table (which might then explode in size?). As you can see, I am also not quite sure how it should look, but at least you now know my thoughts. :laughing:
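For what it's worth, here is a minimal sketch (not from the manuscript) of how the "Methods" column of such a table would map onto the `method` argument of the corresponding NeuroKit2 functions, and how a standalone measure like sample entropy is exposed; the exact function and method names reflect my understanding of the current API and should be double-checked against the documentation.

```python
# Sketch of how the table's keywords map onto NeuroKit2's API (assumed names).
import neurokit2 as nk

# Simulate 10 s of ECG at 1000 Hz
ecg = nk.ecg_simulate(duration=10, sampling_rate=1000)
cleaned = nk.ecg_clean(ecg, sampling_rate=1000)

# Peak detection: the `method` keyword selects the algorithm listed in the table
peaks, info = nk.ecg_peaks(cleaned, sampling_rate=1000, method="pantompkins1985")

# QRS delineation: same pattern, different method keywords ("peak", "cwt", "dwt")
delineation, waves = nk.ecg_delineate(cleaned, info, sampling_rate=1000, method="peak")

# Standalone complexity measures, e.g. sample entropy (computed here on a short segment)
sampen = nk.entropy_sample(cleaned[:2000])
```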
@JanCBrammer I am really intrigued by the publication approach of F1000Research, since I read a very good paper that was published there and I am considering this "journal" for the publication of nolds. However, I have not yet had the courage to consider it for any of my "main" publications for my thesis, since I am unsure how its impact would be judged by more conservative members of the examination board. Do you have any experience with that?
@DominiqueMakowski, one last remark: since I was asked again today how nolds can be cited, I looked for ways to improve its "citability" that would not take much time and came across Zenodo, which I see now you are also using for NeuroKit. I would be glad if you could update the citation of nolds to point to the following Zenodo record: https://zenodo.org/record/3814723 .
Comments from boss:
Very nicely written paper. One point that is not very clear is whether NeuroKit2 can process neurophysiological data. The examples were on physiological data, and the discussion later mentions only physiological data. If it can handle EEG data, it would be good to provide a very simple example. Also, are there any capabilities for tagging the physiological data to EEG data? It would also be good to highlight the portability/shareable scripting for reproducibility, and whether there are any features to aid the reporting of analysis parameters following neuroimaging standards like COBIDAS MEEG, etc.
Will clarify the point about EEG support in the manuscript (that currently it's not supported directly (although some functions can be used), but why not improve support in future directions) :relaxed:
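As a partial answer to the "tagging" question, here is a hedged sketch (not from the manuscript) of how trigger channels can already be used to cut physiological signals into event-locked epochs, which is the kind of mechanism that would let them be aligned with an EEG recording; the function names reflect my understanding of the current API and should be verified against the docs.

```python
# Hedged sketch: aligning physiological signals with experimental events
# (e.g. triggers shared with an EEG recording) using NeuroKit2.
import numpy as np
import neurokit2 as nk

sampling_rate = 1000

# Simulate 30 s of ECG and a binary trigger channel (e.g. TTL pulses / photosensor)
ecg = nk.ecg_simulate(duration=30, sampling_rate=sampling_rate)
triggers = np.zeros(len(ecg))
for onset in (5000, 15000, 25000):  # three hypothetical stimulus onsets (in samples)
    triggers[onset:onset + 100] = 1

# Process the ECG and detect the events from the trigger channel
signals, info = nk.ecg_process(ecg, sampling_rate=sampling_rate)
events = nk.events_find(triggers, threshold=0.5, threshold_keep="above")

# Cut the processed signals into event-locked epochs (1 s before to 2 s after onset)
epochs = nk.epochs_create(signals, events, sampling_rate=sampling_rate,
                          epochs_start=-1, epochs_end=2)
```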
[Figure 1: plot output from an unnamed R chunk]

This issue has been automatically marked as inactive because it has not had recent activity. It will eventually be closed if no further activity occurs.
The NeuroKit2 project grew (and keeps growing) faster than expected, thanks to the involvement and contributions of talented people. In fact, it already contains more lines of code, and better documentation, than its older brother. And it can only get better.
We are reaching a stage where the package is maintained, usable, useful and documented, and we can now think about working toward publishing the software in a few months (end of spring?). The question is where to publish it.
Here's a list of potential journals (feel free to share other journals or any experience):
Existing packages: hrv (JOSS), HeartPy (JORS), pyphysio (SoftwareX), unfold (PeerJ), PyGaze (Behavior Research Methods), FieldTrip (Computational Intelligence and Neuroscience), MNE (NeuroImage, Front. Neurosci.), EEGLAB (Journal of Neuroscience Methods)