ReScience / submissions

ReScience C submissions

Review Request: C. de la Torre-Ortiz and A. Nioche #10

Closed: c-torre closed this issue 3 years ago

c-torre commented 4 years ago

Dear ReScience editors,

I request a review for the following replication:

Original article: https://doi.org/10.3389/fncom.2015.00149

PDF URL: https://github.com/c-torre/replication-recanatesi-2015/blob/master/re-neural-network-model-of-memory-retrieval.pdf
Metadata URL: https://github.com/c-torre/replication-recanatesi-2015/blob/master/metadata.tex
Code URL: https://github.com/c-torre/replication-recanatesi-2015/

Scientific domain: Computational Neuroscience
Programming language: Python
Suggested editor: @oliviaguest

oliviaguest commented 4 years ago

Happy to take this on. @bobaseb are you interested in reviewing this? :smile:

bobaseb commented 4 years ago

Sure I'm in :)

oliviaguest commented 4 years ago

:wave: @anne-urai would you be able to review this?

anne-urai commented 4 years ago

I don't really have expertise in neural networks, but my colleague @jamesproach does, and he is happy to do this. He'll register as a reviewer first.

oliviaguest commented 4 years ago

@anne-urai thank you!

@jamesproach let me know when you are ready and if you have any questions. Same goes for you, @bobaseb, of course. The sooner the better, but let us know when you plan to get this done. :smile:

bobaseb commented 4 years ago

The authors do a good job of transforming the original model, based on individual neurons, into a population model. New parameters are introduced to this effect, and the original model's dynamics (i.e., retrieving a memory while inhibiting others under periodic inhibition) are broadly reproduced.

The code is clear and human-readable, with enough comments to get the gist of each computation. I was able to run it with ease on a Linux machine.

Three specific questions came to mind when reviewing this work.

  1. Why is the sparsity parameter different? Shouldn't it be 0.1?

  2. Where is the recall activation threshold parameter defined?

  3. Why is there no attempt to plot something similar to figures 3 to 6?

Since the original model has been modified and only the first figure of the original paper seems to have been replicated, perhaps this is best suited as a partial replication.

oliviaguest commented 4 years ago

@jamesproach what's a realistic ETA for you to take a look here? Thanks. :smile:

jamesproach commented 4 years ago

The authors reproduce simulation results from Recanatesi et al. using a model of associative memory retrieval which accounts for the sequential activation of memories through two network features: (1) synaptic connections which enforce sequential associations between particular memories and (2) periodic inhibition. This is an extension of the Hopfield network model of auto-associative memory. In their reproduction, the authors employ a dimensionality reduction proposed in the original article which groups units that present the same activity across all memories, reducing the size of the dynamical system by several orders of magnitude.
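To make the reduction concrete, here is a minimal sketch of the grouping step (illustrative only; the pattern statistics and variable names are mine, not the authors'):

import numpy as np

# Units sharing the same binary activity profile across all P memories are
# grouped into a single population, so the N-dimensional system collapses to
# at most 2**P populations (far fewer in practice for sparse patterns).
N, P, f = 100_000, 16, 0.01
rng = np.random.default_rng(0)
eta = (rng.random((N, P)) < f).astype(int)   # sparse binary memory patterns

profiles, unit_to_pop, pop_sizes = np.unique(eta, axis=0, return_inverse=True, return_counts=True)
print(f"{N} units -> {len(profiles)} populations")
# Each population is then simulated once, weighted by its size (pop_sizes),
# which is what shrinks the dynamical system by orders of magnitude.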

The authors were able to reproduce the simulation results from the original article with slight modifications to the parameters. The associated Python code is clearly written, well commented and fully reproduces the figures in the manuscript.

Major Comment: What accounts for the differences between the results obtained with the original variables and with the reproduction's variables? As a work aimed at reproduction, presenting the differences between the dynamics under the original parameters and under the rescaled reproduction variables would be useful. In some cases, parameters need to be changed by a factor of N (10^5). A presentation of these differences would help the reader understand this.

Minor Comments: After running the network with several different seeds, I found several periods where the same memory was recalled in sequence. The original article presents a simulation with up to three repetitions, while the provided code often produces runs of 6 or longer, even 14 in one case. Is this a meaningful difference resulting from the difference in parameters?
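A rough way to measure these repetition runs from a recall sequence (a sketch; recalls is assumed to be the ordered list of retrieved memory indices produced by the simulation, and the helper name is mine):

import numpy as np

def max_run_length(recalls):
    # Longest run of the same memory recalled consecutively (illustrative helper).
    recalls = np.asarray(recalls)
    longest = current = 1
    for prev, cur in zip(recalls[:-1], recalls[1:]):
        current = current + 1 if cur == prev else 1
        longest = max(longest, current)
    return longest

print(max_run_length([3, 3, 7, 7, 7, 1]))  # -> 3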

Could the authors provide a reproduction of the simulation results in figures 3-6?

rougier commented 4 years ago

@oliviaguest @c-torre Gentle reminder

c-torre commented 4 years ago

Dear all, We thank the reviewers for their comments. We also apologize for our delay. We are still addressing the reviews and the related issues that have arisen (ETA two weeks from now). Thank you for your patience. With kind regards, Carlos

c-torre commented 4 years ago

Dear all, Thank you for your patience so far. We had to scale up our computing and build a pipeline for the computer cluster due to very expensive computations (i.e., simulating many networks for many time steps, around 8 h per network on the cluster). In that regard, we've been experiencing delays due to particularly long job queues in the system. We are working on this issue and expect to deliver our response in one week's time.

oliviaguest commented 4 years ago

Thank you for the update! Sounds like you are making huge progress. :+1:

rougier commented 4 years ago

Any progress?

c-torre commented 4 years ago

Dear all, We would like to give an update on the status of the replication.

Following the reviews, we attempted to reproduce all the figures of the original paper. Although the overall behavior of our network is similar to that of the original (figure 1), we cannot replicate many of the other results (figures 3 to 6). At this point, our conclusion would be that the original article is not reproducible.

However, while working on the replication, we began a discussion by mail with one author of the original article. He has been able to provide scripts from early versions of the code (in a proprietary language). It is very likely that the quality of this replication would benefit from this newfound help, but further investigation is still required. We therefore thank in advance the editorial board and the reviewers for their patience.

Best regards, Carlos

oliviaguest commented 4 years ago

@rougier this seems to be a reason to rewrite and re-review this work? What's our policy on this type of situation?

rougier commented 4 years ago

I think we have a section on non-reproducibility which says to try to contact the original author to see where the problem lies. I guess we can now wait for the new input and check whether @c-torre will be able to replicate the work based on it. We can leave this thread open until then.

rougier commented 4 years ago

Gentle reminder

c-torre commented 4 years ago

Dear everyone, I hope you are safe in these difficult times.

In the meantime, we addressed the reviewer comments, including the change to one parameter value. The model proved to be particularly sensitive to this change, and the dynamics were completely distorted.

We then reimplemented the model, making new adjustments to the equations and parameters, and were able to obtain figure 2 again. We launched a series of simulations to reproduce the last figures of the original article. Unfortunately, the results were not as expected.

We contacted the original author, who provided an implementation apparently similar to the one used for their article. They also provided further explanations of the theoretical concepts driving their implementation. After reimplementing their code in Python, we obtained yet another working version of the network that could replicate figure 2. We launched the extended simulations to produce the last figures, but again the network's behavior was far from the intended one. We were not able to get in touch with the original author again.

On the other hand, we are now able to provide many more details on the theory and the implementation. In addition, we have two working versions of the code, driven by similar but not identical theory: a low-sparsity version that works with the simulation numbers as described in the article, and a high-sparsity version that assumes an infinite number of neurons, which is the closest to the original implementation.

We are wondering how we should proceed in this case.

oliviaguest commented 4 years ago

We contacted the original author, who provided apparently a similar implementation to what they used for their article.

We were not able to get in touch with the original author again.

@c-torre Can you please provide a bit more info regarding what has changed and why? I don't blame you in any way; I would just like to know more and hear your opinion.

c-torre commented 4 years ago

The complete timeline occurred as follows: I contacted the corresponding author of the original article, but I did not get a reply. I then contacted the first author. He was very willing to help, provided some initial discussion, and finally the partial code that allowed us to reproduce figure 2.

I launched the longer simulations to get results for the last figures, but again could not reproduce the results. At this point I contacted the first author of the original article again. I could not get a reply, even after a reminder.

I'm wondering whether the current situation may be having an effect in some way, but unfortunately that is all guesswork for now. There do not seem to be any other changes that would explain why we could not reach them again.

oliviaguest commented 4 years ago

@c-torre Very useful. Thank you. Have you emailed the corresponding author again? Or just once at the start?

oliviaguest commented 4 years ago

PS: @c-torre I also forgot to ask... did you email any of the other authors? The original article has four in total.

c-torre commented 4 years ago

I emailed the corresponding author only at the beginning, as the first author had initially told me that he tends not to reply to emails often. I did not contact any authors other than the first and corresponding (last) ones.

oliviaguest commented 4 years ago

@rougier @khinsen @benoit-girard what do we do/think in this case? Is there a value in asking for input from (any of) the original authors?

rougier commented 4 years ago

Disclaimer: I've published articles with the second author of this submission (@AurelienNioche)

From my understanding, the authors managed to replicate the original results using the low-sparsity version with (heavy?) modification of the original model. I think it is thus a successful replication, even though the model seems very fragile, and this must be stated somewhere in the paper. Since the corresponding author is now unresponsive, I'm not sure it's worth contacting the other authors. Or we set a hard deadline for getting an answer (2 weeks?) and, if we don't get one, we publish the paper (provided reviewers and editor agree).

oliviaguest commented 4 years ago

I agree on all points, but I think it's worth contacting all the authors of the original paper, since if I were one of them I would want to know.

khinsen commented 4 years ago

I cannot comment on the changes to the models, that's beyond my domain competence (which is close to zero). As @oliviaguest says, as an author of the original work, I'd like to be informed, but I'd say "informing" is sufficient - no need to wait for a reply that may or may not come.

There is also the question of whether this is a successful replication or not. I'd say it's neither, but we have known for a while that our success/failure classification does not cover all situations. My understanding of this thread is that results similar to the original publication could only be obtained by modifying the underlying model. This suggests one of (1) the original implementation has a problem, (2) the re-implementation has a problem, (3) both implementations have a problem, or (4) the model is too sensitive to implementation details to be reproducible in practice.

The ideal way to proceed (after accepting this replication of course) would be to encourage the authors of original and replication to work together on figuring out what happened.

oliviaguest commented 4 years ago

I agree there should be no delay from waiting on anybody (sorry that I left that ambiguous). Merely that I think we should inform them and maybe wait a bit (because they might give us useful info), but certainly not an endless amount of time.

The ideal way to proceed (after accepting this replication of course) would be to encourage the authors of original and replication to work together on figuring out what happened.

I am happy with this idea. I wonder what @c-torre would prefer?

c-torre commented 4 years ago

Thank you for all your input. We agree with the points discussed. I'll contact the corresponding author and copy the other authors. We'll inform them about the replication issues and that we'll be submitting the current replication within the next two weeks.

oliviaguest commented 4 years ago

@c-torre I am pleased with this especially since 2 weeks is more than enough time.

mkatkov commented 4 years ago

[res.png: figure with simulation results produced by the code below]

I got this image with the following code in JupyterLab (Python 3, Windows, Anaconda):

import numpy as np
import scipy
import scipy.sparse
import matplotlib.pyplot as plt

import sys
from IPython.display import display, clear_output
class Model:
    def __init__(self):
        # Population firing-rate model parameters
        self.th0= 0        # gain threshold
        self.gamma= 0.5    # gain exponent
        self.N= 100000     # number of units
        self.P= 16         # number of memories
        self.f=0.01        # pattern sparsity
        self.kappa= 11     # recurrent coupling strength
        self.xi0=1         # noise amplitude
        self.dt= 1e-2      # integration time step
        self.T= 10
        self.phi=0         # global (periodic) inhibition
        self.tau= 1e-1     # time constant
    def makePatterns(self):
        # Random sparse binary memory patterns eta (N x P)
        self.eta= scipy.sparse.random(self.N,self.P, density=self.f, data_rvs=lambda n: np.ones((n,)) )
    def gain(self, x):
        # Threshold power-law gain function
        x1= np.where( (self.th0+x)<=0., 0, self.th0+x )
        return np.power( x1, self.gamma)
    def Jr(self, c):
        # Recurrent input: Hebbian pattern term minus global inhibition phi
        r= self.gain(c)
        pc= self.eta.T.dot(r)-self.f*r.sum()
        return (self.eta.dot(pc)-self.f*pc.sum()-r*self.phi)*self.kappa/self.N
    def grad(self, c):
        # Drift of the currents: leak, recurrent input, and Gaussian noise, over tau
        return (-c+self.Jr( c )+np.random.randn( c.shape[0], c.shape[1] )*self.xi0  )/self.tau

np.random.seed(12354)
mod= Model()
mod.N=10000
mod.P= 16
mod.f=0.1
#mod.eta*=0
mod.xi0=0.5
mod.kappa= 1000

phiAmp=400        # amplitude of the periodic inhibition
phi0= phiAmp+600  # inhibition baseline
nT= 50000         # time steps per network
memRec= np.zeros( (nT, ) )   # cumulative count of recalled memories, summed over networks
tRecAll=[]                   # recall times for each simulated network
ITR=np.empty((0,1))          # inter-retrieval times pooled across networks
itrByLen=[np.empty((k-1, 0)) for k in range(1, mod.P+1)]  # inter-retrieval times grouped by total recall count

fig, ((ax1, ax2), (ax3,ax4))=plt.subplots( nrows=2, ncols=2,figsize=(12,5))
display(fig)

for it in range(1000):
    sys.stdout.write(str(it)+"        \r")
    mod.makePatterns()
    tr= np.empty( (mod.N, nT) )
    c= np.random.randn( mod.N,1 )
    c= (mod.eta.tocsr()[:,0]).todense()   # start the network in the first memory pattern
    tr[:,:1]= c
    for k in range(1,tr.shape[1]):
        # Euler step with sinusoidally oscillating global inhibition
        c+= mod.grad(c)*mod.dt
        tr[:,k]=np.array(c)[:,0]
        mod.phi= phi0-phiAmp*np.sin( k/50 )
    # Overlap of the activity with each memory over time (nT x P)
    mu=( mod.eta.T.dot(mod.gain(tr))-mod.f*mod.gain(tr).sum(axis=0)).T/mod.N
    # Cumulative number of distinct memories whose overlap has exceeded 0.5 by time t
    mem1= ((mu>0.5).cumsum(axis=0)>0).sum(axis=1)
    trec= np.argwhere(np.diff(mem1))      # times at which a new memory is first recalled
    tRecAll.append(trec)
    ITR=np.vstack( (ITR, np.diff(trec, axis=0)) )
    memRec+= mem1
    itrByLen[len(trec)-1]=np.hstack( (itrByLen[len(trec)-1], np.diff(trec, axis=0)) )
    ax1.cla()
    ax1.plot( memRec/len(tRecAll) )
    ax2.cla()
    ax2.hist(ITR, bins= 100)
    ax2.set_xlim([0,10000])
    ax2.set_yscale('log')
    ax3.cla()
    for itr in itrByLen:
        if itr.size>0:
            ax3.plot(itr.mean(axis= 1))
    clear_output(wait=True)
    display(fig)
    fig.savefig('res.png', dpi=600)

It is not exactly Figure 3, but close enough. I was too lazy to find the parameters corresponding to the original paper. You can get a similar figure for a large range of parameters, although the relationship between parameters requires some tuning. The sensitivity to parameter changes is discussed in detail (although, again, not numerically and not in terms of absolute numbers, just the conditions under which the system works as intended) in "Memory states and transitions between them in attractor neural networks", S. Recanatesi, M. Katkov, M. Tsodyks, Neural Computation 29 (10), 2684-2711. Roughly, this depends on the specific shape of the phase diagram in Figure 1, and it is usually much faster to compute it first and then choose the parameters.
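As a rough illustration of that last point, one could scan a coarse parameter grid with the Model class above and keep only combinations that yield sequential retrieval. The ranges, run length, and overlap criterion below are arbitrary, chosen just to show the idea, not tuned values:

# Coarse (kappa, phi0) scan using the Model class above; for each combination,
# run a short simulation and count how many distinct memories get retrieved.
for kappa in (500, 1000, 2000):
    for phi0_scan in (800, 1000, 1200):
        mod= Model(); mod.N=10000; mod.f=0.1; mod.xi0=0.5; mod.kappa= kappa
        mod.makePatterns()
        c= (mod.eta.tocsr()[:,0]).todense()
        retrieved= set()
        for k in range(1, 5000):
            c+= mod.grad(c)*mod.dt
            mod.phi= phi0_scan-400*np.sin( k/50 )
            if k % 50 == 0:
                mu= (mod.eta.T.dot(mod.gain(c))-mod.f*mod.gain(c).sum())/mod.N
                retrieved |= set(np.argwhere(np.asarray(mu).ravel()>0.5).ravel())
        print(kappa, phi0_scan, len(retrieved))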

In this simulation, the fan-out in panel C goes down, similar to the experimental data: conditions we did not find at the time of the original publication.

I guess the paper is not written clearly enough if it takes more than half a year to replicate the simulations.

Best,

c-torre commented 4 years ago

Hi all, I hope everything is well. We would like to give an update on the state of the replication, especially since many things have happened since we contacted all the authors.

We got replies and clarifications for a few questions we had. They spotted and confirmed that several typos had been introduced into the equations, which made the replication impossible.

The first author regained access to the cluster where the original files were stored and found that the parameters in the cluster simulations were different from the ones in the previously provided script, and also from those reported in the paper.

As of right now, it seems that the first author was finally able to fully simulate all the figures, which should complete the replication work. We still have to launch the longer simulations ourselves to confirm the replication on our side, then translate and clean the code, and update the manuscript accordingly.

The first and last authors expressed their willingness to publish a correction to the original article, especially once this replication paper is published, so they can provide a direct link to an implementation and credit us for helping with the corrections.

We are working on the simulations now and will be back, hopefully with good news.

oliviaguest commented 4 years ago

@c-torre: Thank you so much for the update and well done on all the work in figuring this out! I am very pleased to have pushed (lightly) towards you contacting them — this is highly productive for the journal/this publication... and I hope also has been useful for you and your work as well as, of course, science generally! Great work and I am excited to see the next steps. 👏

c-torre commented 4 years ago

Dear all, Please find an updated version of the code as linked in the original issue. The PDF link in the original issue will take you to our previous manuscript submission, if needed for comparison. The new PDF can be found at: https://github.com/c-torre/replication-recanatesi-2015/blob/full-replication/re-neural-network-model-memory-retrieval.pdf

We are thankful for the reviewers' comments, as they have drastically improved the quality of our work. We also thank everyone for the discussion, as it steered the replication work in the appropriate direction. We would encourage you to revisit both the manuscript and the code, as very major changes have taken place. Here we address the review comments:

Review 1

A typo was found, and the parameter has been corrected to its proper value of 0.1.

We now follow the approach of the original interpretation, where recalls are consistently observed at every inhibition minimum, with rates always going above this threshold value. In the implementation, this means that recalls are determined not by a threshold but by the time iteration at which the peak maximum occurs.
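For clarity, a minimal sketch of this peak-based detection (illustrative only; mu is assumed to be the memory-overlap time series and phi the inhibition time series, as in our simulations, and the helper name is ours):

import numpy as np
from scipy.signal import argrelextrema

def recalls_at_inhibition_minima(mu, phi):
    # For each local minimum of the inhibition phi, report the memory whose
    # overlap mu (shape: time x memories) is highest at that iteration.
    minima = argrelextrema(phi, np.less)[0]
    return [(t, int(np.argmax(mu[t]))) for t in minima]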

We have corrected this major issue and now provide the missing figures, as well as additional ones.

Review 2

We have been collaborating with the original authors after we discovered errors in the original article. Normalization of parameters was indeed one of the main issues with the original paper. We have addressed this in detail in the new manuscript, as it is at the core of the major changes since our original review request.

A new section has been added with the new figures, which addresses changes to the seed in more detail, as 10,000 networks with different seeds are simulated to produce the recall-analysis figures and conclusions. We agree that figure 2 can be seed- or parameter-dependent, and that many recalls in a row are possible under seed changes, as seen in the later figures.

While you take the time to look at our replies, we intend to continue collaborating on the correction of the original article, as the original authors expressed their wish to keep working together on this issue, and we hope to encourage future citations to also include our work.

Thank you for all your patience and help, Best, Carlos

rougier commented 4 years ago

@oliviaguest Gentle reminder

oliviaguest commented 4 years ago

@jamesproach and @bobaseb would you be able to give feedback on the above (e.g., whether your comments and questions have been addressed), please?

bobaseb commented 4 years ago

Yes, my comments have been addressed. I'm happy the replication worked out in the end after contacting the original authors and correcting for errors in the original article (all stated in the current replication).

jamesproach commented 4 years ago

This revision addressed all of my concerns and is substantially improved. Finding and correcting the errors in the original article is a great addition to the manuscript.

James

oliviaguest commented 4 years ago

@c-torre OK, amazing!

Can you please update the metadata.md file with the details of each reviewer, editor, etc.? The acceptance date is today; also see: https://github.com/ReScience/ReScience/issues/48#issuecomment-689776914! 🥳

c-torre commented 4 years ago

Hi all,

This is very good news. However, we took an extra step to ensure that everything is absolutely correct this time. Thanks to the original authors, we have spotted the following mistakes in our current submission:

  1. Equation 2 should have the same time scaling as equation 10, adding sqrt(dt) for the noise term (see the sketch after this list).
  2. Equation 4 has a mistake in the constant part.
  3. Equation 6 has a mistake in the contiguity terms.
  4. A parenthesis is lacking in the manuscript text.
  5. A parameter value was not updated in the new manuscript text.
  6. Parameter table has some issues in the consistency of scaling.
  7. The code should be re-run to ensure that it is bug-free and that the replication holds after these adjustments.
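To illustrate point 1, here is a minimal sketch of the sqrt(dt) scaling in a standard Euler-Maruyama update (the symbols, and where exactly tau and xi0 enter, are illustrative rather than a verbatim copy of our equations):

import numpy as np

def euler_maruyama_step(c, drift, xi0, dt, tau, rng):
    # The deterministic part scales with dt; the Gaussian noise term scales with sqrt(dt).
    noise = xi0 * np.sqrt(dt) * rng.standard_normal(c.shape)
    return c + (dt * (-c + drift(c)) + noise) / tau
# Called once per time step in place of a plain Euler update.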

While the replication is in a very good place now, as opposed to our first submission, I believe it is in the community's best interest to address these issues before proceeding to final publication.

oliviaguest commented 4 years ago

Sure, @c-torre! I'd assumed you had fixed these. Let me know when it's good to go.

c-torre commented 3 years ago

Hi! Manuscript is ready and the code has a Zenodo DOI now. What's next?

2020_rescience_neural_network_model_of_memory_retrieval.pdf

rougier commented 3 years ago

We'll publish the paper next week most probably. Don't hesitate to remind us.

oliviaguest commented 3 years ago

[ Indeed @rougier and sorry @c-torre for taking a bit of time. I am dealing with a big chronic illness flare up and will get to this ASAP. Apologies. ]

c-torre commented 3 years ago

Hi! No worries, thanks for letting me know. Take care and proceed when you're feeling better. There's no rush on our side. All the best.

rougier commented 3 years ago

@oliviaguest I can publish it for you, just tell me.

oliviaguest commented 3 years ago

If you can that would be amazing — thank you, @rougier!

rougier commented 3 years ago

@oliviaguest no prob. @c-torre can you remind me what the repository for your article is, so that I can make a PR against it? I would especially need the metadata.yaml file.

c-torre commented 3 years ago

Added article source with metadata.yaml at c-torre/replication-recanatesi-2015-article

rougier commented 3 years ago

Perfect, thanks. Can you update the template to the latest version (and check that it does not break anything)? This is needed to include a Software Heritage ID for your code, which is now the default way of ensuring the code will be available in the future. Getting your code on Software Heritage is really simple; just check https://www.softwareheritage.org/save-and-reference-research-software/