Closed: maryupshall closed this issue 4 years ago.
Thanks for your submission. I'll edit it and assign reviewers soon.
@rgutzen @cJarvers Could you review this submission?
Sure. I'm happy to.
I'd be happy to review this as well, but I'll be very busy throughout November. Is early December an acceptable time frame for the review? Otherwise I have to pass.
Thanks @rgutzen and @cJarvers. @cJarvers I think we can wait for your review in December, yes. @mupsh Is that ok for you?
Hi,
Yes, that’s ok for me.
Cheers,
Mary
Hi @mupsh, I think you chose a nice study to replicate and I really hope I can help improve it towards the level of rigor I think it could have. So, here is my review. You show some qualitative and some quantitative replications, but you also show some clear discrepancies that you either ignore or don't discuss sufficiently. I think there are a couple of major and minor changes that can make your replication study much stronger and more interesting and useful for the reader. In the following, I have ordered my comments into general remarks, comments related to the code, comments regarding the replication of each figure, and comments related to the actual article text.
Executing run.py in the provided environment returns an error while producing Figure 3: unable to execute 'swig': No such file or directory; error: command 'swig' failed with exit status 1. After installing swig version 3.0.12 it runs through.

Code lines like end_times = np.ravel(list(zip(np.arange(3, last_set + 100 + 1, 100), np.arange(100, last_set + 100 + 4, 100)))) are very hard to read.

The parameters list is rather fragile since it relies on an arbitrary but specific order of the parameters. A dictionary would be a better choice.

The comment "# need to divide the given value by 1.5 to get the correct graph" describes an operation that is not motivated or even mentioned in the text.

clamp_current is calculated by sodium_current(), which takes v, h, and hs as state/time-dependent arguments. These are calculated by solving the voltage_clamp ODE using the ic initial condition in the time packet time. Looking into voltage_clamp(), __clamp__() is called, which calls the ODE function from the parameters list (here ode_3d()). ode_3d() itself is not easy to understand and debug since it has several unintuitive equations which stretch over multiple lines. But most remarkably, ode_3d() does not use its time argument t, which essentially means that the whole current function does not depend on the time packet it is iterated over. The main point here is that the way you organize and structure the code obfuscates dependencies and makes it really hard to understand what is happening in the code.

The motivation for the chosen hs values does not become clear. Please motivate this in the caption.

The code is hard to follow due to unexplained initial conditions (ic), missing clarifying comments, and convoluted structure.

Thanks @rgutzen for this very detailed review. @mupsh You can wait for the second review or you can start modifying your article, taking @rgutzen's comments and suggestions into account.
@rgutzen Thank you for your extensive comments; we're working on changes. @cJarvers Since we're going to be making fairly extensive changes, it might be easiest for us to add our changes first so you don't have to go through two sets.
Sounds good. I'm looking forward to your revisions.
@cJarvers Gentle reminder
@mupsh Did you have time to update the manuscript?
@rougier It is still in the works. I sincerely apologize for the delay, it's been a very busy month. Considering the holidays, would it be ok if I got it back to you for the first week of January?
Sure. @cJarvers Would that be ok for you?
Yes, first week of January is fine with me.
@mupsh Gentle reminder. Can you do the review by Monday?
Just to make sure we're all on the same page: @rougier, would you like the review by me to be done on Monday or the revisions by @mupsh? I have not done the review yet since the authors suggested I wait for their revisions first. If that is a problem, I can try to do the review by Monday.
@rougier We can get the updated manuscript by Monday, no problem.
@cJarvers Sorry, the message was addressed to @mupsh. @mupsh Monday would be perfect, and then @cJarvers can review the revision.
Sorry for the delay in getting this out, and thanks to @rgutzen for his thorough review. Overall we've implemented most of the requested changes and believe this puts the manuscript in a better state. For clarity we've included the original comments along with our reply. @cJarvers we're ready to move on with the review now.
First of all, there is only a brief introduction, which doesn't answer the interesting questions: What is your motivation for replicating exactly this study? Why is this a relevant paper? What are the key insights? And why would it be beneficial for the scientific community to replicate these results?
There is no information on the reproducibility of the original implementation. Is the original data available? Is the original code available? Only at the very end do you mention the communication with the authors. This is very relevant for following your replication and should be moved to the beginning of the paper and elaborated on.
Ideally, you can also ask the original authors whether they agree to publish the code they provided to you alongside this replication study.
Your paper is rather short, and writing short, precise papers is generally desirable. However, you mostly just state what you replicated, and I'm totally missing a description of the journey of how you replicated the results. What was the approach? What are the design principles of your implementation? Which difficulties did you encounter? What did not work? These aspects I would consider the most valuable part of your research, and the ones others can really benefit and learn from.
Especially when you are not able to replicate a result or figure (exactly, or at all), it is not sufficient to say that you tried, it didn't work out, and then speculate that it might be due to different parameters. There needs to be a more detailed account of what exactly you tried and why you think the replication fails, and generally, you need to present a reasonable effort in investigating and fixing the failed replication.
For each figure, I'm missing an explanation of what exactly is shown and why this is interesting. This doesn't need to be long, but it is required in order to make sense of the replication, as this is the logical basis on which anyone can then start evaluating the similarity to the original results.
For generating the simulation results, additional simulation parameters are set. Instead of hiding them in the code, they should be reported and, if necessary, discussed (e.g. time resolution, input, discarded initialization time, ...).
The main goal of replication studies is to validate results from other publications by reimplementation. However, another goal is also to provide understandable and easily reproducible code so that others get better insights into the computational aspects of the original study and can reuse/edit/expand the analysis or parts of it. This second goal requires the code to be in a certain format. If possible, it should use standards and existing software and packages which are common in the field. Furthermore, it needs to be logically structured and well documented. Writing custom code to replicate other custom code misses the opportunity of adding real value to the implementation.
So I strongly encourage you to use standard resources for the neuron models, like, for example, PyNN or NeuroML.
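As a minimal illustration of this suggestion, a sketch of what a PyNN simulation looks like. It uses PyNN's built-in Hodgkin-Huxley cell as a stand-in; the actual 3D model from the paper would need a custom cell type, and none of these names come from the submission's code:

```python
# Sketch: running a standard point-neuron model through PyNN.
import pyNN.neuron as sim  # NEURON backend; other backends are interchangeable

sim.setup(timestep=0.01)                     # time step in ms
cell = sim.Population(1, sim.HH_cond_exp())  # standard Hodgkin-Huxley neuron
cell.record('v')                             # record the membrane potential
sim.run(100.0)                               # simulate 100 ms
data = cell.get_data()                       # Neo block holding the traces
sim.end()
```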
Besides documenting the code and structuring it in a way that makes it easy to read and understand, please also explain your code in the article. This is a computational paper and addressing and discussing certain implementation decisions in the text can be very helpful. The main outline of the implementation and the design principles could even be presented in their own Section.
For the packages you do use, make sure you also mention them in the article, state what they do and why you chose them, especially for less common packages.
Executing run.py in the provided environment returns an error while producing figure 3: unable to execute 'swig': No such file or directory; error: command 'swig' failed with exit status 1. After installing swig version 3.0.12 it runs through.
For understandability, it helps to give variables telling names instead of using numerical values in equations without explanation.
Code lines like end_times = np.ravel(list(zip(np.arange(3, last_set + 100 + 1, 100), np.arange(100, last_set + 100 + 4, 100)))) are very hard to read.
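For illustration only, the same array could be built in named steps. Here, last_set is a placeholder value, the offsets (3, +1, +4) are copied from the quoted line, and equal-length arange results are assumed (with these offsets both calls yield the same number of elements):

```python
import numpy as np

last_set = 1000  # placeholder; the real value comes from the simulation setup
step = 100       # segment length, in the same time unit as last_set

starts = np.arange(3, last_set + step + 1, step)       # 3, 103, 203, ...
ends = np.arange(step, last_set + step + 4, step)      # 100, 200, 300, ...
end_times = np.ravel(np.column_stack((starts, ends)))  # interleave the pairs
```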
For some reason you named tau_hs differently from the other variables, as 'tah_hs'.
I strongly suggest making use of packages to handle units (like quantities or Pint), at least when variables are passed to functions. This easily ensures that the right magnitudes and units are used. Once the units are checked in a function, they can then be omitted for the actual computations so as not to slow them down. Otherwise, it can get quite bothersome to trace back which magnitude the resulting values have.
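A sketch of what such a boundary check could look like with Pint; the function is deliberately simplified (no gating variables), and the names and numbers are placeholders, not the submission's code:

```python
import pint

ureg = pint.UnitRegistry()

def sodium_current(v, g_na, e_na):
    # Check units once at the function boundary. A quantity with the wrong
    # dimension (e.g. a current where a voltage is expected) raises
    # pint.DimensionalityError; compatible units (e.g. V) convert to mV.
    v = v.to(ureg.mV).magnitude
    g_na = g_na.to(ureg.mS / ureg.cm ** 2).magnitude
    e_na = e_na.to(ureg.mV).magnitude
    return g_na * (v - e_na)  # bare-magnitude arithmetic from here on

i_na = sodium_current(20 * ureg.mV, 5.92 * ureg.mS / ureg.cm ** 2, 55 * ureg.mV)
```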
For a better overview and less error-proneness, it is generally advisable to outsource parameter settings (into a yaml, json, or even txt file) and import them, rather than setting them by hand in each function.
Also the parameters list is rather fragile since it relies on an arbitrary but specific order of the parameters. A dictionary would be a better choice.
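A sketch of the dictionary alternative, combined with the parameter-file suggestion above; the file name, keys, and values here are placeholders, not taken from the submission:

```python
import json

# parameters.json might contain, e.g.:
# {"g_na": 5.92, "g_k": 9.0, "e_na": 55.0, "e_k": -80.0}
with open("parameters.json") as f:
    parameters = json.load(f)

# Look each value up by name: a missing or misspelled key fails loudly,
# whereas a misordered list silently shifts every later parameter.
g_na = parameters["g_na"]
e_na = parameters["e_na"]
```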
Why do you cite the original paper as 'Levitan et al. (2014)' although the first author is Qian?
Commonly in articles, you present the work in the present tense. Here, it can make sense to refer to the original work in the past tense, but at least for your results and figures it would make sense to use the present tense. Please review, and use tenses consistently.
You use 'replicated' and 'reproduced' interchangeably throughout the text. Please adhere to the definitions of ReScience and use the correct terminology. Reproduction: running the same computation on the same data. Replication: repeating a computational study in the spirit of a published protocol, but with a different realization. (https://rescience.github.io/faq/)
You use a lot of emphasizing adjectives like 'successfully' or 'faithfully'. Instead of using these formulations, describe the replication in a more neutral and precise manner. State exactly the argumentation why and to which degree you consider a result as replicated.
Reference the location of the equation you modified in the original paper.
There is a wrong minus sign in the equation (compare to the original)! It is, however, correct in the code.
Explain the variables in the equation.
You need to explain your motivation for modifying this time constant equation, and you need to argue why exactly you changed this numeric value from 40 to 60, instead of many other changes you could have made, and why the modified version should be considered 'better' than the original.
Figure 1 should be moved into the Results section, after the first paragraph.
"All original figures, [..], were successfully replicated". Not true. At least the last figure was not replicated!
"..., successfully showing that they had not compromised the original sodium current description." Elaborate. What constitutes 'not compromised' here?
"seemingly identical results" Be precise how they differ, how you compared the results, and on what basis (according to which features) you consider them equal.
'the same but “backwards” or ”flipped”' Be more precise. Along which axis? According to which attribute you consider it being 'flipped'?
In many sentences there are missing articles, for example:
"in our implementation". Either 'with' our implementation, or in our simulation.
In the discussion section, you mention that the original code contains different parameters than the paper. Please also state these parameters, and include this in the discussion about which equation to choose, and why and how to modify it.
Use consistent Figure names. Either 'Figure 2 B1' or 'Figure 2B1'.
"More specifically, our replication of Figure 1B was “backwards” or ”flipped” where in our replication the sodium current amplitude in time, which is opposite in the original implementation where it decreases towards 0 in time." This sentenced is very unclear, please re-formulate.
"Hence, we can only speculate that this may have potentially been a graphing error." There are clear qualitative differences in the plots (see above) which can not be explained by 'graphing error', please elaborate.
"Overall, this implementation replicated the original paper."/"With the exception of Figure 6, this implementation confidently shows that the original results were correctly implemented" No, you present a partial replication and found several discrepancies. Be more precise.
Impressive! @cJarvers Can we set a deadline for your review? Like two weeks, would that be ok?
@rougier Yes, two weeks should be fine.
Okay, @mupsh , here's my review:
This review is relative to commit https://github.com/mupsh/ReScience_Levitan_2014/commit/d5c641ffd14d5740e032e2864083215ac49dec73
I ran the code using Python 3.6.10, swig 4.0.1, PyDSTool 0.90.3, numpy 1.18.1, scipy 1.3.0 and matplotlib 3.1.1.
The code produced a warning message ("Test function going crazy: class Hopf_Bor(BordermMethod, BiAltMethod)"), but otherwise ran without problems and produced the figures as shown in the paper.
The core results of the original paper are replicated. This requires some major parameter changes, so I'm not sure whether I share your optimistic conclusion (that "the 2D and 3D models were implemented correctly in the original work"), but I find the explanations of the discrepancies and your overall reasoning clear enough to accept this as a replication - thanks to your work I'm sufficiently confident that the conclusions of the original article were correct, even if there were errors in the implementation or the reporting.
In general, I find the article is now a good deal clearer than before the revision. The explanations of what each figure shows and what adjustments were necessary to replicate each are easy to follow.
Unfortunately, the article is not self-contained. In order to understand the explanations of the replication, it is necessary to know the original article, which is behind a paywall. I could not find any guidelines about how self-contained the article should be (do you know any, @rougier?) and I guess most readers interested in a replication would know the original. Still, I think it would be good to have a section that explains the model itself. This would enable the reader to better understand what is going on, without going back to the original article.
I also concur with @rgutzen that adding a section that explains the code would be helpful. This should not include overly general things like OOP vs. imperative programming (and I don't think that was meant by "design principles") and should be aimed more towards giving the reader a starting point for exploring/understanding the code. This would essentially be a more elaborate version of the first paragraph of your methods section (explaining why and how you used which package), including some pointers like the fact that you factorized your code into the differential equations for all three models (diff_eq.py), the steady-state equations and time constants (gating.py), the membrane currents (current.py), and the nullclines (nullclines.py).
Another thing that could improve the clarity of the article would be to list all the changes made to the original model and between different figures in one central place, e.g., a table. Unless I missed something, these changes are:
- the changed formula for tau_n (used everywhere)
- the value of g_Na in Figure 1A
- the value of g_Na and the pulse width in Figure 1B
- the newly fitted polynomial coefficients in Figure 1C; the fitted values should be reported in the text
- the reversal potentials for NMDA and AMPA receptors in Figure 6
- the conductances g_AMPA and g_NMDA in Figures 6A and 6B (different values)
Especially for the sodium conductance, it would be good to know what value you used for different simulations, the one from 1A or 1B.
Otherwise, I noticed a few smaller issues:
The code is well structured and commented. The separation into simulation code, runners for the individual figures and plot setup works well and helped me find my way around the code.
In two places, this wasn't quite true: the diff_eq.py module contains a lot of stuff, some of which is not directly related to the differential equations and could for example go to an experiments.py module (e.g. the clamp experiments or IV curves). The other place is the functions for figures 3c and 4b, due to the use of PyDSTool. I don't really know how to separate that out better, though.
I also agree with @rgutzen that using packages for handling / checking units would be useful. This is especially true since some of the differences from the original paper involve scaling, so it would be good to make sure this is not due to some implicit unit conversion error somewhere.
One minor point I noticed: the explanation of the parameter parameter_name in diff_eq.pulse is only "Parameter to"; there's probably something missing there.
Overall, I'm quite happy with the paper. I think some improvements are possible: (1) explaining the model in the text to make the article more self-contained, (2) listing changes and parameter values in one place / table and (3) adding explicit unit handling to the code.
Thanks @cJarvers. @mupsh Can you address the comments?
@mupsh Don't forget to change the title!
@cJarvers I think you're right and we may need to add a section to the author instructions aiming at self-contained articles for cases when the original article is not available (behind a paywall). Could you propose some changes to the author instructions based on your experience?
Note: I found this link which seems to be free: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4254877/ (or is it just me?)
@rougier We're working on implementing the requested changes.
@rougier True, the ncbi-link is free. Question is whether green OA like this solves the issue, since readers still need to find the free version. Maybe adding a link to the replication article would work. Anyways, I'll open a separate issue to suggest some changes to the author instructions.
Sorry again for the delay in getting this out, and thanks to @cJarvers for his thorough review. As before, we've implemented most of the requested changes and believe this puts the manuscript in an even better state. For clarity we've included the original comments along with our reply.
I ran the code using Python 3.6.10, swig 4.0.1, PyDSTool 0.90.3, numpy 1.18.1, scipy 1.3.0 and matplotlib 3.1.1.
The code produced a warning message ("Test function going crazy: class Hopf_Bor(BordermMethod, BiAltMethod)"), but otherwise ran without problems and produced the figures as shown in the paper.
The core results of the original paper are replicated. This requires some major parameter changes, so I'm not sure whether I share your optimistic conclusion (that "the 2D and 3D models were implemented correctly in the original work"), but I find the explanations of the discrepancies and your overall reasoning clear enough to accept this as a replication - thanks to your work I'm sufficiently confident that the conclusions of the original article were correct, even if there were errors in the implementation or the reporting.
In general, I find the article is now a good deal clearer than before the revision. The explanations of what each figure shows and what adjustments were necessary to replicate each are easy to follow.
Unfortunately, the article is not self-contained. In order to understand the explanations of the replication, it is necessary to know the original article, which is behind a paywall. I could not find any guidelines about how self-contained the article should be (do you know any, @rougier?) and I guess most readers interested in a replication would know the original. Still, I think it would be good to have a section that explains the model itself. This would enable the reader to better understand what is going on, without going back to the original article.
I also concur with @rgutzen that adding a section that explains the code would be helpful. This should not include overly general things like OOP vs. imperative programming (and I don't think that was meant by "design principles") and should be aimed more towards giving the reader a starting point for exploring/understanding the code. This would essentially be a more elaborate version of the first paragraph of your methods section (explaining why and how you used which package), including some pointers like the fact that you factorized your code into the differential equations for all three models (diff_eq.py), the steady-state equations and time constants (gating.py), the membrane currents (current.py), and the nullclines (nullclines.py).
Another thing that could improve the clarity of the article would be to list all the changes made to the original model and between different figures in one central place, e.g., a table. Unless I missed something, these changes are:
- the changed formula for tau_n (used everywhere)
- the value of g_Na in Figure 1A
- the value of g_Na and the pulse width in Figure 1B
- the newly fitted polynomial coefficients in Figure 1C; the fitted values should be reported in the text
- the reversal potentials for NMDA and AMPA receptors in Figure 6
- the conductances g_AMPA and g_NMDA in Figures 6A and 6B (different values)
Especially for the sodium conductance, it would be good to know what value you used for different simulations, the one from 1A or 1B.
Otherwise, I noticed a few smaller issues:
- Use of tenses is sometimes inconsistent. For example, in the second paragraph one sentence starts with "Previous work had", whereas the next starts with "Previous work has".
- The formulation "This new model included, or exclused a slow inactivation term" might confuse readers. It's not clear that this relates to the 2D and 3D versions mentioned before.
- The legend in Figure 1A covers a small part of the curve.
- The "ringing" in Figure 2 is hardly visible. It might help to make the figure bigger and possibly decrease the line width.
- In the text, the term "ringing" is not explained, except by referring to Figure 2. A brief definition might help.
- In the explanation for Figure 4, you mention that you use a different naming convention (SNP vs. LPC) but do not explain why.
- The last sentence of section 2.4 is incomplete.
The code is well structured and commented. The separation into simulation code, runners for the individual figures and plot setup works well and helped me find my way around the code.
In two places, this wasn't quite true: The diff_eq.py module contains a lot of stuff, some of which is not directly related to the differential equations and could for example go to an experiments.py module (e.g. the clamp experiments or IV curves). The other place is the functions for figures 3c and 4b, due to the use of pyDSTool. I don't really know how to separate that out better, though.
I also agree with @rgutzen that using packages for handling / checking units would be useful. This is especially true since some of the differences from the original paper involve scaling, so it would be good to make sure this is not due to some implicit unit conversion error somewhere.
We've refactored our more experimental functions into experiments.py. Additionally, we have implemented units: prior to computation, units are forced into mV, uA/cm^2, or mS/cm^2 (depending on the natural unit used) and the unit is discarded. We experimented with doing unit checking more robustly, i.e., running one iteration of the ODE to ensure unit matching. Unfortunately, based on the amount of jumping between ODE and "analysis" type code we do, that became unwieldy and less readable. We believe our approach finds a balance.
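A minimal sketch of the "force into the natural unit, then discard" pattern described above, using Pint; the helper name and the example values are illustrative, not taken from the repository:

```python
import pint

ureg = pint.UnitRegistry()

def magnitude_in(quantity, natural_unit):
    """Convert to the natural unit, then return the bare magnitude."""
    return quantity.to(natural_unit).magnitude

v0 = magnitude_in(-55 * ureg.mV, ureg.mV)            # -> -55.0
i_app = magnitude_in(0.16 * ureg.uA / ureg.cm ** 2,  # -> 0.16
                     ureg.uA / ureg.cm ** 2)
```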
With regards to the bifurcation ODE (3c/4b), we agree entirely. Unfortunately, due to the level of fine-tuning required for each plot, there was no easy refactor for this. From a non-refactor standpoint, the code presented maps fairly nicely onto the natural steps someone familiar with XPPAUT might use to reproduce them. For this reason, we've left the code in a more "scientific" state.
One minor point I noticed: the explanation of the parameter parameter_name in diff_eq.pulse is only "Parameter to"; there's probably something missing there.
Thanks for your reply, @mupsh. I'm quite happy with how you addressed my comments.
I ran the code again (with slightly different versions of the dependencies: python 3.8.2, scipy 1.4.1, matplotlib 3.2.1, sympy 1.5.1, Pint 0.11, PyDSTool 0.91) and everything behaves as expected.
There are only two minor things I noticed in the article:
@rougier After these minor things are fixed, I'd recommend acceptance.
@cJarvers @rgutzen Thanks for your very nice and extensive reviews. I therefore accept the manuscript, which will hopefully be published soon. Congratulations @mupsh 🎉!
I will make a PR to your repository to complete your metadata. In the meantime, can you register your code repository at https://www.softwareheritage.org/save-and-reference-research-software/? You should obtain an SWH-ID that you can post here.
@rougier Great news, thank you very much! We've made the minor fixes, I've registered the code as requested and obtained the following SWH-ID: https://archive.softwareheritage.org/swh:1:rev:e1040fe3fbaefef4166964772efa08ada444db6e;origin=https://github.com/mupsh/ReScience_Qian_2014.git/
I created a PR (https://github.com/mupsh/ReScience_Qian_2014/pull/10) for small changes to your repo before publication. Can you check it?
Ok, I just saw PR has been merged. I'll try to publish the paper by the end of the week. Don't hesitate to spam me if it's not done :)
@rougier I might be mistaken, but spamming you just in case :)
It has been published and is online! Sorry for the delay. See https://zenodo.org/record/3842360
Original article: https://www.ncbi.nlm.nih.gov/pubmed/25185810
PDF URL: https://github.com/mupsh/ReScience_Qian_2014/blob/master/article/article.pdf
Metadata URL: https://github.com/mupsh/ReScience_Qian_2014/blob/master/article/metadata.yaml
Code URL: https://github.com/mupsh/ReScience_Qian_2014
Scientific domain: Computational Neuroscience
Programming language: Python
Suggested editors: Benoit Girard, Tiziano Zito