Closed: @maekclena closed this issue 7 years ago.
Thanks @maekclena I will assign reviewers soon.
The authors successfully reproduce the results of the original paper on 2 of the 4 proposed tasks. The other tasks (category matching and vibrotactile discrimination) would not add much to proving that the model is correctly reproduced, so this is fine. The model itself is actually quite simple; reproducing the tasks (especially the probabilistic task) seems to have been the main difficulty in the reimplementation.
There were no dependency problems running the simulations: everything is standard and runs well (tested under Linux). The code for the model is very well organized, clear and commented. A few docstrings are missing in the classes, but the method names are self-explanatory anyway.
**Article**
The article is well written and describes the difficulty of the reimplementation. A few words on the implementation itself would have been nice: why classes for the populations and projections instead of plain numpy arrays (which could have been slightly more readable)? Does `reduce` bring much in terms of speed, or is it just more generic (there are never more than two incoming projections to a population in the current model, but it could be beneficial for extensions)? Things like that.
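To make the `reduce` question concrete, here is a minimal sketch of summing a population's incoming projections both ways; the arrays are made up for illustration and are not taken from the submission:

```python
from functools import reduce
import numpy as np

# Two hypothetical incoming projections delivering weighted input
# vectors of the same shape to one population.
inputs = [np.array([0.1, 0.2, 0.3]), np.array([0.4, 0.5, 0.6])]

# Generic form: handles any number of incoming projections.
total = reduce(np.add, inputs)

# Plain-numpy equivalent, arguably more readable:
total_plain = np.sum(inputs, axis=0)

print(np.allclose(total, total_plain))  # True
```

For two projections the two forms are equivalent; the `reduce` version only pays off if the number of incoming projections varies.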
In the results section, you do not provide the setup used for computing the success rates and convergence speed. How many networks were used? What is the variance?
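For concreteness, this is the kind of reporting setup I have in mind; `run_network` is a stochastic stand-in, not the authors' code:

```python
import numpy as np

def run_network(seed, max_trials=10000):
    """Stochastic stand-in for one full training run.
    Returns (converged, trials_to_convergence)."""
    rng = np.random.default_rng(seed)
    converged = rng.random() < 0.9
    return converged, int(rng.integers(1000, max_trials))

n_networks = 20
results = [run_network(seed) for seed in range(n_networks)]
converged = np.array([c for c, _ in results])
speeds = np.array([t for c, t in results if c])

print(f"success rate: {converged.mean():.2f} over {n_networks} networks")
print(f"convergence: median {np.median(speeds):.0f} trials, variance {speeds.var():.0f}")
```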
You could also describe slightly more in the text to what extent Figs. 1 and 2 reproduce the original figures: it is not obvious without reading the original article that fixate dominates until the go signal, after which the correct action is chosen. Don't feel obliged, but it would also have been nice to reproduce other figures, such as 2C or the upper parts of 2D and 4C.
It might also be useful to clarify the nature of the feedback weights w' in equations 14 and 16: once an action is selected, only the feedback synapses leaving the corresponding selected Q-value unit are activated to update the tags, more precisely: w'_ij = w_ij × z_i. The model could also have dedicated feedback connections, but the simpler method is to reuse the feedforward synapses' weights.
It is indeed not clear in the original article whether the feedback weights are also learned (as they seem to imply), or whether they just copy the feedforward ones. What was used in the original implementation? This also matters for biological plausibility: as the weights are randomly initialized, there is no chance that the feedforward and feedback weights would take the same values in this learning setup, so the superiority over backpropagation is not that obvious...
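To make the gating explicit, here is a minimal numpy sketch of reusing the feedforward weights as gated feedback; shapes and names are mine for illustration, not the submission's:

```python
import numpy as np

rng = np.random.default_rng(0)
n_q, n_hidden = 3, 4

# Feedforward weights from hidden units j to Q-value units i.
w = rng.normal(size=(n_q, n_hidden))

# One-hot selection: Q-value unit 1 won the action selection.
z = np.zeros(n_q)
z[1] = 1.0

# Feedback weights are the feedforward ones gated by the selection:
# w'_ij = w_ij * z_i, so only synapses leaving the selected Q-value
# unit propagate the feedback that updates the tags.
w_fb = z[:, None] * w

print(np.allclose(w_fb[0], 0), np.allclose(w_fb[1], w[1]))  # True True
```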
Small typos in the code:

- In `README.md`, the usage should be described as `python3 simulation.py -h`, as the scripts are not directly executable.
- Because of the use of the `statistics` module of Python 3, 3.4 is likely the minimal version. A simple workaround is to add `from __future__ import print_function` at the beginning of `simulation.py` and `plot-activation.py`, and to use the numpy implementation of `median` instead of `statistics` in `simulation.py`:

```python
# from statistics import median
from numpy import median
```
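The swap is safe here: on a plain list of numbers the two medians agree, as this quick check (mine, not from the submission) shows:

```python
import numpy as np
from statistics import median

data = [3, 1, 4, 1, 5, 9, 2, 6]
# numpy returns a numpy scalar and statistics a Python number,
# but the values coincide for a plain list of numbers.
print(float(np.median(data)), float(median(data)))  # 3.5 3.5
```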
Many thanks to @vitay for the review. I will have to locate another 2nd reviewer, so please bear with me @maekclena while I try to track somebody down.
EDIT: Please see below.
Would either of @MehdiKhamassi or @benoit-girard be able to take on the mantle of 2nd reviewer? :smile:
Hi @oliviaguest; I'd be interested to do the review.
Excellent! Go ahead!
You are too fast for me! Next time...
Any updates on this @eroesch?
Also have you had any time to address some of the above @maekclena?
No, I haven't had time yet. I will wait for @eroesch's review to make all necessary corrections at once.
I am almost done.
@eroesch 🔔
Don't pay attention to the conflict, I just updated the submission directory. No impact at all on this submission. @oliviaguest @eroesch Any chance of an update?
@eroesch Do you think you can complete your review by the end of the week?
The submission presents a novel implementation of a model describing a new rule for reinforcement learning [1]. The authors review aspects of the implementation that proved important when reproducing the work, and describe results obtained in similar setups. They reproduce 2 of the 4 tasks used in the original paper. I managed to run the simulation (macOS, Anaconda), and I am satisfied with the code provided, which is actually much shorter than I had anticipated. I thank the authors for their time (and their patience!). Find below a few comments which I believe will help make the submission clearer.
[1] J. Rombouts, S. Bohte, and P. Roelfsema. “How Attention Can Create Synaptic Tags for the Learning of Working Memories in Sequential Tasks”. In: PLoS Computational Biology 11.3 (2015), e1004060. doi: 10.1371/journal.pcbi.1004060.
@maekclena you probably want to read above reviews if you haven't already.
Hey @rougier do you know @maekclena's email address? Perhaps we need it to get in contact?
On the article (PDF), there is the email of the corresponding author.
Our response is being prepared... Very sorry for being late, and thank you for your patience...
Sorry for the delay and the lack of communication. We made the necessary corrections with respect to the reviews.
Most notable changes:
As well as additional details and rewritten sentences in the article.
Reviewers — @vitay & @eroesch — are you satisfied with the changes that have been made? If yes, can you please formally accept the paper?
I am for accepting the paper, the changes answered my questions.
I am accepting the paper, the changes answered (most of) my questions.
Dear @falex33 and @maekclena — Congratulations! How Attention Can Create Synaptic Tags for the Learning of Working Memories in Sequential Tasks has been accepted! Publication and more details will soon follow.
Hi @oliviaguest
The archived PDF has wrong links for the red buttons "Article repository", etc. on the first page.
The first one, for instance, points to https://github.com/ReScience-Archives/Le_Masson-Alexandre-2016/article whereas the "good" link would be https://github.com/ReScience-Archives/Le_Masson-Alexandre-2016/tree/master/article
Sorry about this. No idea how to fix this problem unfortunately as this is the first time I've done this. I won't try anything until somebody who knows more advises. Perhaps @rougier knows more?
The links are in the header of the `.md` file for the paper. What I don't know is how to manage the update to Zenodo, if one is done at all.
Sorry @rougier , we'll need an opinion :-)
I think you need to regenerate the right PDF and upload a new version to Zenodo (while keeping the same DOI).
Indeed. This is why I asked for your more experienced opinion. So unless I am mistaken, which is possible... I'm not sure that I can change any files without emailing Zenodo. I am of course more than happy to email them myself to fix this problem!
> Note: File addition, removal or modification are not allowed after an upload has been published. This is because a Digital Object Identifier (DOI) is registered with DataCite for each upload. If you've made a mistake please contact us.
Can't you release a new version on Zenodo using the same DOI? If not, we'll have to issue a new DOI and update the website and the ReScience repository.
I'm unfamiliar with Zenodo but I don't think so...?
Just checked and you're right. We'll have to contact Zenodo. Can you handle it?
I think before emailing we/I should fix the repo which we can edit ourselves first. So I need to edit the md file to correct the link, and recompile the pdf. Is there anything else that needs updating? Does the release on github allow changes?
No, I think you will have to make a new release (and double check links before making the release). I can help you on checking the PDF.
We really need to automate this part to avoid future errors. I made the same mistake for one submission but was lucky enough to realize something was wrong before submitting to Zenodo.
OK, I'll try on Monday!
Zenodo lets you upload a new file and declare that it renders an earlier one obsolete. That's probably the best and most transparent way to handle corrections. If I remember correctly, you have to…

I see "is new version of this upload". Is this a kind of "official" replacement method? Will there be two DOIs?
@khinsen could you point me to where that option is, please?
It's under "Related/alternate identifiers". There you can enter URLs or DOIs of related things and assign a role to them. The two relevant roles are "is new version of this upload" and "is previous version of this upload".
For an example of how this is presented to the reader, see this software upload. Under "related identifiers" in the right column you can see both "previous versions" and "new versions".
@pdebuyl: Each upload has its own DOI. There are just annotated links between the two uploads. It's more of a versioning than a replacement method, but that's how the scientific record should work in my opinion.
OK, I am attempting step one now of @khinsen's solution. Can somebody check that I do the GitHub side of things correctly please before I do anything on Zenodo? Thanks for the help so far. Will post a link when I have compiled the corrected `.md`.
OK, so I think this has now fixed the link problem: https://github.com/ReScience-Archives/Le_Masson-Alexandre-2016/tree/master/article Is the next step a new release on github?
I think there is a link for the notebook in the PDF while there is no notebook, do you know why? Was it the case with the previous version?
Better? :smile:
As the article is in re-processing (okay, because of me :-/), is it useful to have the link to `data`? It has not been used by the authors and contains only the template's README and LICENSE.
Removed it. Are the repo and article looking done now? Do we need the `data` and `notebook` directories?
AUTHOR
Dear @ReScience/editors,
I request a review for the replication of the following paper:
I believe the original results have been faithfully replicated as explained in the accompanying article.
Repository lives at https://github.com/maekclena/ReScience-submission/tree/Le_Masson-Alexandre-2016
EDITOR
09 August 2016
12 August 2016
8 September 2016
30 November 2016
1 December 2016
1 December 2016