ReScience C submissions

[Re] Inter-areal balanced amplification enhances signal propagation in a large-scale circuit model of the primate cortex #72

Closed · ViniciusLima94 closed this 9 months ago

ViniciusLima94 commented 1 year ago

Original article: Joglekar, Madhura R., et al. "Inter-areal balanced amplification enhances signal propagation in a large-scale circuit model of the primate cortex." Neuron 98.1 (2018): 222-234.

PDF URL: https://github.com/ViniciusLima94/ReScience-Joglekar/blob/master/ReScience_Joglekar.pdf
Metadata URL: https://github.com/ViniciusLima94/ReScience-Joglekar/blob/master/metadata.tex
Code URL: https://github.com/ViniciusLima94/ReScience-Joglekar/tree/master/code

Scientific domain: Computational Neuroscience
Programming language: Python (NEST simulator)
Suggested editor: @rougier
Suggested reviewers: Hans Ekkehard Plesser, Robin Gutzen, Vahid Rostami, Marcel Stimberg

rougier commented 1 year ago

Thanks for your submission, we'll assign an editor soon!

rougier commented 1 year ago

@benoit-girard @oliviaguest Can any of you edit this submission?

oliviaguest commented 1 year ago

@rougier this is really outside my expertise, sorry!

benoit-girard commented 1 year ago

I will handle this one.

benoit-girard commented 1 year ago

Dear @ViniciusLima94, the provided URLs end up on a 404 error; can you do something about that?

ViniciusLima94 commented 1 year ago

Dear @benoit-girard, I believe that was because I forgot to make the repository public.

It should work now, can you let me know?

Thank you.

benoit-girard commented 1 year ago

@ViniciusLima94 I can now access it, thanks. I will now be looking for reviewers!

ViniciusLima94 commented 1 year ago

Thank you @benoit-girard !

benoit-girard commented 1 year ago

@damiendr and @rgutzen : would you be interested (and available) to review this submission to ReScience C?

rgutzen commented 1 year ago

I have a potential conflict of interest since I'm currently working in the same lab as the second of the shared first authors, Renan Shimoura.

benoit-girard commented 1 year ago

@rgutzen thanks for your answer, I will then contact someone else.

benoit-girard commented 1 year ago

@mstimberg : would you be interested (and available) to review this submission to ReScience C?

mstimberg commented 1 year ago

@benoit-girard : yes, would be happy to review it :blush:

benoit-girard commented 1 year ago

Great, thanks Marcel!

mstimberg commented 1 year ago

@benoit-girard a quick question (it's been a while since I reviewed for ReScience…): should I go ahead with my review, or wait until a second reviewer is assigned?

benoit-girard commented 1 year ago

Sorry, I should have been explicit about that: yes, you can go ahead. I will try to identify a 2nd reviewer as fast as possible.

benoit-girard commented 1 year ago

@heplesser : would you be interested and available to review this paper?

heplesser commented 1 year ago

I am afraid I would not get around to reviewing before August.

benoit-girard commented 1 year ago

@heplesser : I will try to find someone that can work on it in the coming week, but should I fail, I may come back to you in August...

benoit-girard commented 1 year ago

@otizonaizit : would you be interested and available to review this paper?

mstimberg commented 1 year ago

:wave: @ViniciusLima94, @benoit-girard, please find my review below:

I have reviewed the paper/code and can confirm that the code reproduces the figures of the ReScience manuscript. I can also confirm that these results replicate the results from the original paper (within reasonable bounds). In general, this is a well-executed replication; the paper is clearly written and convincingly discusses the issues encountered during the replication. The availability of the source code for the original paper remains a bit unclear to me (see below). This and the missing instructions in the code repository are the major issues that IMHO should be addressed before publication. I also identified a number of minor issues; see the list below.

Major issues

Minor issues

Paper

benoit-girard commented 1 year ago

@vitay @stephanmg @pietromarchesi : would one of you be available to provide a second review of this submission?

benoit-girard commented 1 year ago

@mstimberg : thanks a lot for the review!

benoit-girard commented 1 year ago

@ViniciusLima94 : you can start addressing the points raised by @mstimberg while I continue to search for a second reviewer.

ViniciusLima94 commented 1 year ago

Thank you very much for your review and comments on our manuscript, @mstimberg. I'll start addressing the points raised as soon as possible, @benoit-girard.

stephanmg commented 1 year ago

> @vitay @stephanmg @pietromarchesi : would one of you be available to provide a second review of this submission?

Hi @benoit-girard, I would like to do a review for this submission.

benoit-girard commented 1 year ago

This is great, @stephanmg, thanks a lot! You are welcome to proceed!

ViniciusLima94 commented 1 year ago

Dear @mstimberg ,

We thank you for reviewing our manuscript and for your suggestions and corrections. As is usual, changes in the manuscript are indicated in blue text; below we list your comments followed by our responses.

Major issues

Availability of original source code: Most of the ReScience manuscript reads as if the Brian 2 source code of the original paper is not available (it does not seem to be linked from the original article), and the authors therefore had to write the NEST implementation from scratch based on the description in the paper (and guessing/adjusting a few parameters that weren't explicitly mentioned). But then I was confused by the comment on p.12: "Furthermore, we noticed the same phenomenon when running the original Brian 2 code for this model (data not shown)." Does that mean that the authors had access to the original Brian 2 code, e.g. by asking the authors for it? If yes, why did this not clear up the questions of the unspecified parameters? In either case, I think the (un)availability of the original source code should be stated near the beginning of the manuscript, possibly mentioning the unreferenced Matlab code that the authors found on GitHub as well.

We thank the reviewer for pointing out our lack of clarity regarding the presence of Brian 2 source code associated with the original publication and its use in replicating the results.

We added a section right after the paper’s Introduction named “Availability of code implementations for the original model”, clarifying this issue:

In the original publication, the authors do not link any code repository associated with their implementation of the models described. However, we found a repository with Matlab and Python (Brian 2) scripts at the following address: \href{github.com/xjwanglab/JoglekarEtAl2018_Neuron}{github.com/xjwanglab/JoglekarEtAl2018\_Neuron}. Although those scripts helped us detect discrepancies and unspecified parameters in the original article, for the spiking model not all parameters could be directly ``plugged in'' into NEST without conversion, and some adjustment was often required in order to obtain the firing-rate regimes specified in the original paper, as we elaborate throughout this replication paper.

In the results section we modified the following two descriptions to make it clear how exactly the authors’ implementation helped us to replicate the model.

Paragraph after equation 27: This condition was not mentioned in the original article. However, it was found in the Matlab implementation of the model that we cite in the section ``Availability of code implementations for the original model''.

Paragraph after equation 28: In the original paper, the authors mention that a background input was given to the network in order to maintain the spiking activity of the excitatory population between $0.75$ and $1.5$ Hz and that of the inhibitory population between $5$ and $6$ Hz. Although the authors did not specify the background noise parameters, the model equations found in their Brian 2 code take the form $dV/dt = f(V) + (\sigma/\tau)\xi(t)$ (``rishimodelpython_brian2_spiking.py'', lines 137 and 141). We converted the values of $\sigma$ set in their code using Eq.~\ref{eq:noiseinput}. However, some adjustment of the parameters was required to reproduce the specified steady-state rates (Table~\ref{tab:rates}).

Text added in section 3.3, subsection “Synchronous regime”: Furthermore, we noticed the same phenomenon when running the original Brian 2 code for this model (data not shown), which suggests this is not an issue of incorrectly fitted parameters but rather an intrinsic property of the model. Additionally, we noticed that in the authors' script the seed was fixed (``rishimodelpython_brian2_spiking.py'', lines 30-32).
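Coming back to the background input described after equation 28 above: as a generic illustration only (not an excerpt from our scripts), background noise of this kind is typically injected in NEST through a noise_generator device, whose current amplitude in pA has to be obtained by converting the membrane-potential sigma from the Brian 2 equations (Eq. \ref{eq:noiseinput} in the paper); all names and values below are placeholders.

    import nest

    # Placeholder population: the neuron model and size are illustrative only.
    exc = nest.Create("iaf_cond_exp", 100)

    # White-noise current source. 'mean' and 'std' are in pA and would have to be
    # derived from the sigma used in the Brian 2 code; 300.0 pA is a placeholder.
    noise = nest.Create("noise_generator", params={"mean": 0.0, "std": 300.0})
    nest.Connect(noise, exc)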

Installation/run instructions: The code repository currently does not contain any README file or other explanation on how to install the necessary requirements and to run the code. Everything is rather straightforward (conda environment, main.py file that can run the various protocols, etc.), but there should be step-by-step instructions on how to install and run everything. The authors provide two conda environments, one with detailed pinned versions (which I used) and one which only states the general dependencies. I think the latter needs to specify that the NEST version needs to be at least version 3.0 which integrates the adapted rate model. The authors state that "it is recommended to use this updated version", but I think it is not only "recommended" but necessary to run the code? If I am not mistaken, NEST simulations are only reproducible by fixing both the seed and the number of threads used for simulation. This should be made explicit in the instructions as well.

We thank the reviewer for pointing out our mistake. We now added a detailed README file to the code directory.
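For illustration, pinning both of these settings in a NEST 3 script looks roughly like the following (a generic sketch with placeholder values, not an excerpt from our code):

    import nest

    nest.ResetKernel()
    # Results are only identical across runs if *both* the random seed and the
    # number of threads are kept the same between runs.
    nest.SetKernelStatus({
        "rng_seed": 42,           # placeholder seed
        "local_num_threads": 20,  # e.g. the thread count stated in the README
    })
    # ... build the network and call nest.Simulate(...) afterwards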

Minor issues

Paper

p.1: "Here we replicate a recent modelling by" → "a recent modelling study by"

We thank the reviewer for pointing out our mistake. The sentence is now corrected.

p.1: "One of the key contributions Joglekar et. al. study" → "of the Joglekar et al. study" (also in several places: "et. al." should be "et al.")

We thank the reviewer for pointing out our mistake. The sentence is now corrected.

p.2: "with the code related to this article in: XXXXXX" needs a URL

We thank the reviewer for pointing out our mistake. We have now inserted the link to the following repository: github.com/ViniciusLima94/ReScience-Joglekar/blob/master/code/interareal/

Figure 1: I'd again state here what "FLN" stands for, since someone might look at the figure before reading the rest of the text. Also, you might consider plotting FLN instead of log(FLN) and use a logarithmic color map, so that the colorbar values would be 10², 10⁴, etc. A bit clearer IMO, but of course there's nothing wrong with the current figure.

We thank the reviewer for the suggestion. We have updated Figure 1(a) to show FLN rather than log(FLN), adopting a logarithmic scale for the colorbar. Additionally, we replaced the title “log(FLN)” in this panel with “Connection strength”.
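For reference, the change amounts to something like the following matplotlib sketch (illustrative only, not our actual plotting code; the array and file names are placeholders):

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colors import LogNorm

    fln = np.load("fln_matrix.npy")             # placeholder file name
    masked = np.ma.masked_where(fln <= 0, fln)  # a log scale cannot display zeros

    fig, ax = plt.subplots()
    im = ax.imshow(masked, norm=LogNorm(vmin=masked.min(), vmax=masked.max()))
    fig.colorbar(im, ax=ax, label="Connection strength")  # decade ticks, e.g. 1e-4, 1e-2
    ax.set_xlabel("Source area")
    ax.set_ylabel("Target area")
    plt.show()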

Figure 3f: The legend in the figure seems to have swapped the labels for the dotted vs. solid lines

We thank the reviewer for directing our attention to this mistake. We have now corrected it.

References: Readers would appreciate DOIs for the references, I think

We added the DOIs to the references in the BibTeX file; however, they still do not appear in the paper.

Code

Even after looking at the source code, I am not quite sure what "protocol 3" does. The comment states "compute average frequencies per population in each stage", but there does not seem to be any output/figure at the end of the simulation

We thank the reviewer for directing our attention to this. Indeed, there is no output figure for this protocol, since it serves to compute the population-averaged firing rates and verify that they lie in the ranges specified by the original paper, as we mention in Section 3.4:

“were the mean firing rate ranges in the absence of input which are: 0.75 to 1.5 Hz for the excitatory population; and 5 to 6 Hz for the inhibitory population.”

The outcomes of this protocol are shown in Table 4.

Additionally, we added the following print commands at the end of protocol 3 (lines 210-218 in main.py):

print(f" Weak GBA ") print(f" | Async | Sync |") print(f" E | {rewa}| {rews}|") print(f" I | {riwa}| {riws}|") print("------------------------------") print(f" Strong GBA ") print(f" | Async | Sync |") print(f" E | {resa}| {ress}|") print(f" I | {risa}| {riss}|")

These print the average firing rates to the command line.

"protocol 4" is an option in main.py but does not do anything

We thank the reviewer for directing our attention to this mistake. We removed protocol 4 from the argparse options.

Previously, we had protocol 4 to simulate the synchronous regime with strong GBA in a case where the activity of the neurons diverges.

Before submission, we moved this analysis to protocol 3; it corresponds to the last simulation in this protocol (lines 163-164 in main.py):

    i_ss, _, t_ss, _, maxf_ss, _, _ = fig5_6.simulate(lnt = lnt, seed = 10, simtime = simtime,
                                                      reg = 'sync', gba = 'strong-gba',
                                                      transient = 0, dt = dt)
    plot_figures.plot_raster(t_ss, i_ss, None, simtime, 'figures/fig7.png', save=True)

The generated file names 5, 6, 7 correspond to figures 4, 5, 6 in the paper, which is a bit confusing

We thank the reviewer for directing our attention to this mistake. We have renamed the corresponding figures to match their order in the paper.

One of the generated files has "joglekar" in the name (fig3_joglekar_e) while all the others are simply called fig_3f, etc.

We thank the reviewer for directing our attention to this mistake. We have corrected it.

stephanmg commented 1 year ago

> This is great, @stephanmg, thanks a lot! You are welcome to proceed!

Will provide my review by end of the next week.

benoit-girard commented 1 year ago

@mstimberg : Are you satisfied with the authors' answers and modifications based on your review?

@stephanmg : Perfect!

mstimberg commented 1 year ago

@benoit-girard, @ViniciusLima94 I thank the authors for the changes; almost all of my concerns have been addressed. I was able to run all the code and to reproduce all the figures (with one minor exception, see below) and Table 4 following the (very detailed!) instructions in the README file. I did all this based on the conda environment, so I did not review the other installation instructions (NEST from source, pip installs, etc.) in detail, but everything seemed very reasonable from a cursory look. My figure 5 looks slightly different from the one in the paper, though – could this be an issue with the seed and/or the number of threads? I ran it with NEST 3.4 and 20 threads (python main.py 2 20):

[Figure attachment: reviewer's reproduction of Fig. 5]

A bit of nitpicking: fig1_b.pdf and fig1_c.pdf appear as Figure 2b and 2c in the paper :)

I am in general convinced by the explanations regarding the parameters that weren't mentioned in the original paper, but could you comment more on the $\tau$ parameter? You state that you could not find values in the code and used 20 ms in the weak LBA and 30 ms in the strong LBA condition, with the same values for excitatory and inhibitory cells. From a quick look at the source code in rishimodelpython_brian2_spiking.py, however, it seems that they used 20 ms for excitatory and 10 ms for inhibitory cells? https://github.com/xjwanglab/JoglekarEtAl2018_Neuron/blob/42c21da0df79611f62b8aa549b2bacd921c79d61/rishimodelpython_brian2_spiking.py#L38-L39C1

> We added the DOIs to the references in the BibTeX file; however, they still do not appear in the paper.

Um, apologies, apparently displaying them is switched off in the LaTeX template (not quite sure why?).

stephanmg commented 1 year ago

Thanks for the submission @ViniciusLima94

@benoit-girard, please find my comments below; most of the issues have meanwhile been fixed by @ViniciusLima94, though...

Major issues

The biggest issue I faced was the lack of running instructions / a README. This seems to have already been fixed in a pretty detailed way. Very good!

Minor issues

Paper:

Code

@mstimberg my figure also looks slightly different from yours... I'm not too familiar with the NEST simulator, but I agree it could then be an issue of seeds / number of threads. NEST 3.5 and 32 threads is what I used.

mstimberg commented 1 year ago

> @mstim my figure looks also slightly different than yours... I'm not too familiar with NEST simulator, but I agree it could then be an issue of seeds / number of threads perhaps. NEST 3.5 and number of threads 32.

In the new README they state that they ran with 20 threads, which is what I used. So that might explain the differences between our results (but not the difference to the result in the manuscript). But I just realized that it also depends on the compiler that is used (see NEST docs). I guess in my case (Linux, NEST installed via conda) it uses GCC, but if you or the authors are on macOS, it might rather use clang?

PS: @mstimberg, not @mstim :)

stephanmg commented 1 year ago

> @mstim my figure looks also slightly different than yours... I'm not too familiar with NEST simulator, but I agree it could then be an issue of seeds / number of threads perhaps. NEST 3.5 and number of threads 32.
>
> In the new README they state that they ran with 20 threads, which is what I used. So that might explain the differences between our results (but not the difference to the result in the manuscript). But I just realized that it also depends on the compiler that is used (see NEST docs). I guess in my case (Linux, NEST installed via conda) it uses GCC, but if you or the authors are on macOS, it might rather use clang?
>
> PS: @mstimberg, not @mstim :)

Ah, that might explain it; I'm using clang on OSX. But I can also try it on my Ubuntu machine if necessary.

stephanmg commented 1 year ago

To wrap it up: I have no further objections, provided the points I raised are incorporated into the manuscript.

ViniciusLima94 commented 1 year ago

Dear @mstimberg, we thank you again for your remarks and comments, which allowed us to improve our manuscript. We answer your new batch of suggestions point by point below:

My figure 5 looks slightly different from the one in the paper, though – could this be an issue with seed and/or number of threads? I ran it with NEST 3.4 and 20 threads (python main.py 2 20):

Thank you for bringing this issue to our attention. We were able to reproduce the same result you show above. While we don't have a concrete answer as to why this happened, there are a few things we can comment on:

In any case, we do confirm that the result depends on the number of threads (and thus on the seeds set):

[Figure attachment: results obtained with different numbers of threads]

At the same time, the result in Figure 6, portraying the instability of the network in this regime, no longer showed an unstable firing rate. We could find an unstable state again by using seed 100 (lines 227-235 in main.py) and running the script with four threads: python main.py 2 4. We tested it on different laptops using the conda environment we provided.

In the new version of the manuscript we added a comment (see section “Important information needed during replication”) that the results (for instance, to what extent the stimulus propagates and whether the firing rate remains stable) depend on the versions of NEST and Python, as well as on the number of threads used. We also comment on this in the results section for the synchronous spiking neuronal model:

“Conversely, this dependence on initial conditions can generate instances in which the network is in a less excitable state; as a consequence, activity may not reach areas such as PROm. In other cases, activity will reach even area 24c, without divergence of firing rates, if the network is in a slightly more excitable state.”

Finally, we recall that the instability effect is also observed when running their Brian 2 code for the synchronous and strong GBA regime (see the figure below, obtained by running their code); hence, this is not an issue specific to NEST.

[Figure attachment: instability reproduced with the original Brian 2 code]

A bit of nitpicking: fig1_b.pdf and fig1_c.pdf appear as Figure 2b and 2c in the paper :)

True, we have changed the filenames. We have also renamed the corresponding functions in “plot_figures.py” (fig1b() -> fig2b(); fig1c() -> fig2c()).

I am in general convinced by the explanations regarding the parameters that weren't mentioned in the original paper, but could you comment more on the $\tau$ parameter? You state that you could not find values in the code and used 20 ms in the weak LBA and 30 ms in the strong LBA condition, with the same values for excitatory and inhibitory cells. From a quick look at the source code in rishimodelpython_brian2_spiking.py, however, it seems that they used 20 ms for excitatory and 10 ms for inhibitory cells?

If I'm not mistaken, the missing $\tau$ we referred to is the one for the local-circuit rate model (Figure 2 in the paper, and Table 2). The link above is for the spiking model, for which we used the values you mentioned (lines 62 and 63 in our setParams.py).

Additional modifications

ViniciusLima94 commented 1 year ago

Dear @stephanmg, we thank you for your remarks and comments, which helped us improve our ReScience manuscript.

Paper

Typos ("wring distance"), need to double check for them throughout manuscript

We double-checked the manuscript for the mentioned typo; we found it in the legend of Figure 2 and corrected it.

Figure 2. Wouldn't mix red and green in a figure (color-blind)

We updated the colors to blue (with dashed lines) and black in the new version of the manuscript.

Would avoid highlighting in Table 2 but state in legend, same for a later Table

We opted to keep the highlighting since it gives an immediate view of the missing parameters, but we now also note this in the table legends in the new version of the manuscript.

Figure 2. Fonts seems rather small in legend and axis description

We increased the font size of labels/legends/titles to 15 points in the new version of the manuscript.

DOIs are now available? In the recent manuscript I still do not see them.

We managed to make the DOIs visible by setting “doi = true” in line 305 of rescience.cls.

Code

Remove superfluous / unused code in main.py

We have removed unnecessary comments and unused code from this and other scripts.

Has the code been linted? It partly looks like, but would be great to have a consistent formatting throughout. Please double check.

We improved the code quality by running black and manually adjusting each script where needed.

stephanmg commented 1 year ago

LGTM now; the reproduction issue above also looks good.

mstimberg commented 1 year ago

All looks good from my side as well, thanks for the additional explanations here and in the paper. Of course, I can't leave without some final nitpicking – but please go ahead and take these comments into account as you see fit, no need for another round of review from my side:

All in all, great work, I was happy to review it :tada:

ViniciusLima94 commented 1 year ago

Dear @mstimberg ,

We thank you for your final remarks.

In a freshly checked out repo, the code/figures directory does not exist and the plotting code will raise an error trying to write to that directory

We pushed the figures directory.

I think fig3_b_d.pdf should be fig3_c_d.pdf

We updated the name of the figure to "fig3_c_d.pdf".

I can reproduce Fig 4 and 5 by running python main.py 2 20 and Fig 6 by running python main.py 2 4 – but maybe it would be good to separate the two so that one can exactly reproduce the figures from the paper without overwriting the figures from the other call? Another option would be to give an option to call main.py without any number of threads, and explicitly use the thread numbers used by the authors for each plot.

The simulation that generates Figure 6 is now in a separate protocol; we added the new protocol to the README file, where we also indicate the number of threads needed to exactly reproduce the figure in our manuscript.
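As a rough sketch of the alternative you suggest (illustrative only, not our actual main.py), the entry point could fall back to per-protocol default thread counts when none is given on the command line:

    import argparse

    # Illustrative mapping from protocol number to the thread count used for the
    # corresponding manuscript figure; the values are placeholders and should
    # mirror the README.
    DEFAULT_THREADS = {2: 20, 3: 20, 4: 4}

    parser = argparse.ArgumentParser(description="Run one of the simulation protocols")
    parser.add_argument("protocol", type=int, help="protocol number")
    parser.add_argument("threads", type=int, nargs="?", default=None,
                        help="number of NEST threads (defaults to the value used for the paper figures)")
    args = parser.parse_args()

    n_threads = args.threads if args.threads is not None else DEFAULT_THREADS.get(args.protocol, 1)
    print(f"Running protocol {args.protocol} with {n_threads} threads")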

We thank you and @stephanmg for the comments that helped us to improve this code+manuscript considerably :)

ViniciusLima94 commented 1 year ago

Dear @benoit-girard ,

Let us know if anything else is needed from our side in order to proceed.

Best regards, Vinicius.

benoit-girard commented 1 year ago

Sorry for the delay; I am back from holidays and I can see that all of you have done a great job in the meantime! I understand from the reviewers' last feedback that they are satisfied, so the paper is accepted; we will now proceed with the publishing steps...

ViniciusLima94 commented 1 year ago

No problem @benoit-girard . Thank you!

benoit-girard commented 1 year ago

@rougier I think I will need your help for the publication process.

ViniciusLima94 commented 11 months ago

Dear editors,

Since some time has passed since the approval of our manuscript, we would like to kindly ask whether there is an estimated date for publication.

Also, let us know if there is anything we can help with.

Cordially, The authors

rougier commented 11 months ago

My fault. Can you give me the URL to your LaTeX sources (it will be easier for me)?

ViniciusLima94 commented 11 months ago

Dear @rougier

Is this link enough? https://www.overleaf.com/2655993686zbxxybtskpct

Cordially

rougier commented 10 months ago

Sorry for the loooonnng delay. The Zenodo API has just changed and I need to adapt the publishing script, but for that I need their input on how to do it. Can you ping me here again on Monday to make sure we don't have further delays in publishing? In the meantime, you can try to fill in as much data as possible in metadata.tex.

ViniciusLima94 commented 10 months ago

Dear @rougier ,

I'm pinging you as you requested.

I also just pushed a version of the "metadata.tex", with more information inserted. Let me know if there is anything else I can help with.

Best, Vinicius.

rougier commented 10 months ago

Thanks. The Zenodo problem is fixed. Could you fill in the metadata.yaml instead (since the .tex is generated from the .yaml)? I'll then download your Overleaf and work from there.