neurolibre / neurolibre-reviews

Where NeuroLibre reviews live.
https://neurolibre.org

[REVIEW]: NiMARE: Neuroimaging Meta-Analysis Research Environment #7

Closed. roboneuro closed this issue 2 years ago.

roboneuro commented 2 years ago

Submitting author: @tsalo (Taylor Salo)
Repository: https://github.com/NBCLab/nimare-paper
Editor: @pbellec
Reviewers: @agahkarakuzu
Jupyter Book: http://neurolibre-data-prod.conp.cloud/book-artifacts/roboneurolibre/github.com/nimare-paper/0195c842c8b6f5d42150df814500a9aadb05cc75/_build/html/
Repository archive: 10.5281/zenodo.6624793
Data archive: 10.5281/zenodo.6624795
Book archive: 10.5281/zenodo.6624790
Docker archive: 10.5281/zenodo.6624797

Status

(status badge)

Status badge code:

HTML: <a href="http://neurolibre.herokuapp.com/papers/28dfe9bf9747b20c7f70221badb19baf"><img src="http://neurolibre.herokuapp.com/papers/28dfe9bf9747b20c7f70221badb19baf/status.svg"></a>
Markdown: [![status](http://neurolibre.herokuapp.com/papers/28dfe9bf9747b20c7f70221badb19baf/status.svg)](http://neurolibre.herokuapp.com/papers/28dfe9bf9747b20c7f70221badb19baf)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@ltetrel, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @pbellec know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Review checklist for @ltetrel

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

ltetrel commented 2 years ago

@roboneuro generate nl-notebook

roboneuro commented 2 years ago

:seedling: We are currently building your NeuroLibre notebook! Good things take time :seedling:

roboneuro commented 2 years ago

We ran into a problem building your book. :(

Click here to see build log

      ["Found built imag", "server running at https://binder.conp.cloud/jupyter/user/nbclab-nimare-paper-mes1u4ee/\\n\", \"image\": \"binder-registry.conp.cloud/binder-registry.conp.cloud/binder-nbclab-2dnimare-2dpaper-3e379f:eaf0b42c8794f59b511d4aec9d26d1da672cd907\", \"repo_url\": \"https://github.com/NBCLab/nimare-paper\", \"token\": \"DYno5S0GRVOqmkXJK5ht5w\", \"binder_ref_url\": \"https://github.com/NBCLab/nimare-paper/tree/eaf0b42c8794f59b511d4aec9d26d1da672cd907\", \"binder_launch_host\": \"https://binder.conp.cloud/\", \"binder_request\": \"v2/gh/NBCLab/nimare-paper.git/eaf0b42c8794f59b511d4aec9d26d1da672cd907\", \"binder_persistent_request\": \"v2/gh/NBCLab/nimare-paper/eaf0b42c8794f59b511d4aec9d26d1da672cd907"]
      
tsalo commented 2 years ago

@tsalo do you know the (peak) RAM consumption for your submission? I just tried, and it also crashes on mybinder.org

Unfortunately, I do not... The object that appears to be causing the problem does accept a memory-limit parameter, though, so I think that could help. The catch (at least I assume it could be a problem) is that setting memory_limit means using temporary files. Would that be an issue?
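
For reference, a minimal sketch of what that could look like (the class name, keyword, and units here are assumptions based on this discussion, not a confirmed API):

    # Hedged sketch: cap peak RAM for a CBMA estimator at the cost of
    # memory-mapped temporary files. The exact class and keyword may
    # differ across NiMARE releases.
    from nimare.meta.cbma import ALE

    ale = ALE(memory_limit="500mb")  # spill intermediates to temp files
    results = ale.fit(dset)          # dset is a nimare.dataset.Dataset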

ltetrel commented 2 years ago

If the submission exceeds our maximum 4 GB RAM limit, there is unfortunately nothing I can do.

I don't know how much swap memory there is on our cluster (usually it is 2~3 times the actual RAM, so maybe 2x4 GB), nor how to change that. What I can change is the storage limit allowed per user on the Binder instance, which is currently 2 GB (so swap memory would not use more than 2 GB).

tsalo commented 2 years ago

I've set the memory limit for that step to 500MB. If it's still a problem then I will pre-generate and load the results like I did for the GCLDA annotation step.

ltetrel commented 2 years ago

I've set the memory limit for that step to 500MB. If it's still a problem then I will pre-generate and load the results like I did for the GCLDA annotation step.

Good, let's give it a try :)

ltetrel commented 2 years ago

@roboneuro generate nl-notebook

roboneuro commented 2 years ago

:seedling: We are currently building your NeuroLibre notebook! Good things take time :seedling:

roboneuro commented 2 years ago

We ran into a problem building your book. :(

Click here to see build log

      ["Cloning into '/tmp/repo2dockerpv4bk0lf'...\\n", "HEAD is now at 3225e1e Add review badge.\\n", "Using PythonBuildPack builder\\n", "Step 1/51 : FROM buildpack-deps:bionic", "\\n", " ---> cb3fc72df6ea\\n", "Step 2/51 : ENV DEBIAN_FRONTEND=noninteractive", "\\n", " ---> Using cache\\n", " ---> 02b5fe258a88\\n", "Step 3/51 : RUN apt-get -qq update &&     apt-get -qq install --yes --no-install-recommends locales > /dev/null &&     apt-get -qq purge &&     apt-get -qq clean &&     rm -rf /var/lib/apt/lists/*", "\\n", " ---> Using cache\\n", " ---> 7723e2ab2b36\\n", "Step 4/51 : RUN echo \\\"en_US.UTF-8 UTF-8\\\" > /etc/locale.gen &&     locale-gen", "\\n", " ---> Using cache\\n", " ---> f359f24d94b5\\n", "Step 5/51 : ENV LC_ALL en_US.UTF-8", "\\n", " ---> Using cache\\n", " ---> 6276e131c31e\\n", "Step 6/51 : ENV LANG en_US.UTF-8", "\\n", " ---> Using cache\\n", " ---> f938f4c705c2\\n", "Step 7/51 : ENV LANGUAGE en_US.UTF-8", "\\n", " ---> Using cache\\n", " ---> 315ab9f77383\\n", "Step 8/51 : ENV SHELL /bin/bash", "\\n", " ---> Using cache\\n", " ---> 9085d274d9a3\\n", "Step 9/51 : ARG NB_USER", "\\n", " ---> Using cache\\n", " ---> b0d7321ef3bb\\n", "Step 10/51 : ARG NB_UID", "\\n", " ---> Using cache\\n", " ---> a7c1356f3093\\n", "Step 11/51 : ENV USER ${NB_USER}", "\\n", " ---> Using cache\\n", " ---> 2aa184680286\\n", "Step 12/51 : ENV HOME /home/${NB_USER}", "\\n", " ---> Using cache\\n", " ---> 1ddd30194e07\\n", "Step 13/51 : RUN groupadd         --gid ${NB_UID}         ${NB_USER} &&     useradd         --comment \\\"Default user\\\"         --create-home         --gid ${NB_UID}         --no-log-init         --shell /bin/bash         --uid ${NB_UID}         ${NB_USER}", "\\n", " ---> Using cache\\n", " ---> b4f39039ff5c\\n", "Step 14/51 : RUN wget --quiet -O - https://deb.nodesource.com/gpgkey/nodesource.gpg.key |  apt-key add - &&     DISTRO=\\\"bionic\\\" &&     echo \\\"deb https://deb.nodesource.com/node_14.x $DISTRO main\\\" >> /etc/apt/sources.list.d/nodesource.list &&     echo \\\"deb-src https://deb.nodesource.com/node_14.x $DISTRO main\\\" >> /etc/apt/sources.list.d/nodesource.list", "\\n", " ---> Using cache\\n", " ---> 63401797a001\\n", "Step 15/51 : RUN apt-get -qq update &&     apt-get -qq install --yes --no-install-recommends        less        nodejs        unzip        > /dev/null &&     apt-get -qq purge &&     apt-get -qq clean &&     rm -rf /var/lib/apt/lists/*", "\\n", " ---> Using cache\\n", " ---> 3de70ee474b7\\n", "Step 16/51 : EXPOSE 8888", "\\n", " ---> Using cache\\n", " ---> 767a3594a01b\\n", "Step 17/51 : ENV APP_BASE /srv", "\\n", " ---> Using cache\\n", " ---> 2927326a645b\\n", "Step 18/51 : ENV NPM_DIR ${APP_BASE}/npm", "\\n", " ---> Using cache\\n", " ---> c73216c94c36\\n", "Step 19/51 : ENV NPM_CONFIG_GLOBALCONFIG ${NPM_DIR}/npmrc", "\\n", " ---> Using cache\\n", " ---> fce96bf8747f\\n", "Step 20/51 : ENV CONDA_DIR ${APP_BASE}/conda", "\\n", " ---> Using cache\\n", " ---> 1df834c2d52e\\n", "Step 21/51 : ENV NB_PYTHON_PREFIX ${CONDA_DIR}/envs/notebook", "\\n", " ---> Using cache\\n", " ---> b1b0f8b85487\\n", "Step 22/51 : ENV NB_ENVIRONMENT_FILE /tmp/env/environment.lock", "\\n", " ---> Using cache\\n", " ---> 11c076718dec\\n", "Step 23/51 : ENV KERNEL_PYTHON_PREFIX ${NB_PYTHON_PREFIX}", "\\n", " ---> Using cache\\n", " ---> ba856f66cd22\\n", "Step 24/51 : ENV PATH ${NB_PYTHON_PREFIX}/bin:${CONDA_DIR}/bin:${NPM_DIR}/bin:${PATH}", "\\n", " ---> Using cache\\n", " ---> c1d9814cc1b9\\n", "Step 25/51 : COPY --chown=1000:1000 
build_script_files/-2fusr-2flib-2fpython3-2e8-2fsite-2dpackages-2frepo2docker-2fbuildpacks-2fconda-2factivate-2dconda-2esh-391af5 /etc/profile.d/activate-conda.sh", "\\n", " ---> Using cache\\n", " ---> 847820fe0b21\\n", "Step 26/51 : COPY --chown=1000:1000 build_script_files/-2fusr-2flib-2fpython3-2e8-2fsite-2dpackages-2frepo2docker-2fbuildpacks-2fconda-2fenvironment-2epy-2d3-2e7-2elock-4f1154 /tmp/env/environment.lock", "\\n", " ---> Using cache\\n", " ---> 1823a219cfa4\\n", "Step 27/51 : COPY --chown=1000:1000 build_script_files/-2fusr-2flib-2fpython3-2e8-2fsite-2dpackages-2frepo2docker-2fbuildpacks-2fconda-2finstall-2dminiforge-2ebash-514214 /tmp/install-miniforge.bash", "\\n", " ---> Using cache\\n", " ---> a219e5b3836a\\n", "Step 28/51 : RUN mkdir -p ${NPM_DIR} && chown -R ${NB_USER}:${NB_USER} ${NPM_DIR}", "\\n", " ---> Using cache\\n", " ---> 4cdc2be62996\\n", "Step 29/51 : USER ${NB_USER}", "\\n", " ---> Using cache\\n", " ---> e68790eb2eb7\\n", "Step 30/51 : RUN npm config --global set prefix ${NPM_DIR}", "\\n", " ---> Using cache\\n", " ---> 167cf6a99158\\n", "Step 31/51 : USER root", "\\n", " ---> Using cache\\n", " ---> c569a97625e5\\n", "Step 32/51 : RUN TIMEFORMAT='time: %3R' bash -c 'time /tmp/install-miniforge.bash' && rm -rf /tmp/install-miniforge.bash /tmp/env", "\\n", " ---> Using cache\\n", " ---> c2985f604c42\\n", "Step 33/51 : ARG REPO_DIR=${HOME}", "\\n", " ---> Using cache\\n", " ---> 7f3d1f5072d2\\n", "Step 34/51 : ENV REPO_DIR ${REPO_DIR}", "\\n", " ---> Using cache\\n", " ---> 515ed35d6be5\\n", "Step 35/51 : WORKDIR ${REPO_DIR}", "\\n", " ---> Using cache\\n", " ---> 4ab86ee14a53\\n", "Step 36/51 : RUN chown ${NB_USER}:${NB_USER} ${REPO_DIR}", "\\n", " ---> Using cache\\n", " ---> 9112082605e9\\n", "Step 37/51 : ENV PATH ${HOME}/.local/bin:${REPO_DIR}/.local/bin:${PATH}", "\\n", " ---> Using cache\\n", " ---> 618fab06d00a\\n", "Step 38/51 : ENV CONDA_DEFAULT_ENV ${KERNEL_PYTHON_PREFIX}", "\\n", " ---> Using cache\\n", " ---> 6720236bb76e\\n", "Step 39/51 : COPY --chown=1000:1000 src/binder/requirements.txt ${REPO_DIR}/binder/requirements.txt", "\\n", " ---> Using cache\\n", " ---> 13be4d14872f\\n", "Step 40/51 : USER ${NB_USER}", "\\n", " ---> Using cache\\n", " ---> ee1ee6c04e43\\n", "Step 41/51 : RUN ${KERNEL_PYTHON_PREFIX}/bin/pip install --no-cache-dir -r \\\"binder/requirements.txt\\\"", "\\n", " ---> Using cache\\n", " ---> a41fcf850dff\\n", "Step 42/51 : COPY --chown=1000:1000 src/ ${REPO_DIR}", "\\n", " ---> ac36e2bbb1d7\\n", "Step 43/51 : LABEL repo2docker.ref=\\\"3225e1e2040c7a1c93b946696b2bdd3d90321b8a\\\"", "\\n", " ---> Running in 62d9a3812482\\n", "Removing intermediate container 62d9a3812482\\n", " ---> 288a31dfd174\\n", "Step 44/51 : LABEL repo2docker.repo=\\\"https://github.com/NBCLab/nimare-paper\\\"", "\\n", " ---> Running in a3232e2bcd80\\n", "Removing intermediate container a3232e2bcd80\\n", " ---> 1d21cb66869d\\n", "Step 45/51 : LABEL repo2docker.version=\\\"2021.08.0\\\"", "\\n", " ---> Running in 8ae4407d16db\\n", "Removing intermediate container 8ae4407d16db\\n", " ---> d0675e2c3ebf\\n", "Step 46/51 : USER ${NB_USER}", "\\n", " ---> Running in df766e2d5178\\n", "Removing intermediate container df766e2d5178\\n", " ---> 073258286699\\n", "Step 47/51 : ENV PYTHONUNBUFFERED=1", "\\n", " ---> Running in 812716a0b94f\\n", "Removing intermediate container 812716a0b94f\\n", " ---> 9995043cffba\\n", "Step 48/51 : COPY /python3-login /usr/local/bin/python3-login", "\\n", " ---> e102b626ab23\\n", "Step 49/51 : COPY /repo2docker-entrypoint 
/usr/local/bin/repo2docker-entrypoint", "\\n", " ---> 3c4abd8b9bad\\n", "Step 50/51 : ENTRYPOINT [\\\"/usr/local/bin/repo2docker-entrypoint\\\"]", "\\n", " ---> Running in bda5b1dc6534\\n", "Removing intermediate container bda5b1dc6534\\n", " ---> 8af86de13767\\n", "Step 51/51 : CMD [\\\"jupyter\\\", \\\"notebook\\\", \\\"--ip\\\", \\\"0.0.0.0\\\"]", "\\n", " ---> Running in e2327996e5d3\\n", "Removing intermediate container e2327996e5d3\\n", " ---> b1e8b02e2286\\n", "{\\\"aux\\\": {\\\"ID\\\": \\\"sha256:b1e8b02e228687de5661a45139f2031168164c2c0c8247054e903adbbdc08319\\\"}}", "Successfully built b1e8b02e2286\\n", "Successfully tagged binder-registry.conp.cloud/binder-registry.conp.cloud/binder-nbclab-2dnimare-2dpaper-3e379f:3225e1e2040c7a1c93b946696b2bdd3d90321b8a\\n", "Pushing image\\n\", \"progress\": {\"b323267758d6\": \"Pushed\", \"2937f5f8c538\": \"Pushed\", \"b53458b0dbc2\": {\"current\": 21694976, \"total\": 210460187}, \"d7ade7041f87\": \"Layer already exists\", \"97ccd16dd874\": \"Layer already exists\", \"d7f207d5f58e\": \"Layer already exists\", \"fc67d1b2a8f5\": \"Layer already exists\", \"60e35b85f76d\": \"Layer already exists\", \"5ee76d3e13ce\": \"Layer already exists\", \"d08d6f5128db\": \"Layer already exists\", \"b2a202cd17d6\": \"Layer already exists\", \"66a52b938110\": \"Layer already exists\", \"81b677cf14fa\": \"Layer already exists\", \"4d7e960c0f42\": \"Layer already exists\", \"3d7d72ccccff\": \"Layer already exists\", \"5ce4715f5733\": \"Layer already exists\", \"7c92751b7d81\": \"Layer already exists\", \"2e6bfb4089f3\": \"Layer already exists\", \"b0eb2032f1da\": \"Layer already exists\", \"1ab6ee41ee9a\": \"Layer already exists\", \"824821e7a1be\": \"Layer already exists\", \"824bf068fd3d\": \"Layer already exists\"}, \"layers\": {\"b323267758d6\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"b323267758d6\"}, \"2937f5f8c538\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"2937f5f8c538\"}, \"b53458b0dbc2\": {\"status\": \"Pushing\", \"progressDetail\": {\"current\": 21694976, \"total\": 210460187}, \"progress\": \"[=====>                                             ]  21.69MB/210.5MB\", \"id\": \"b53458b0dbc2\"}, \"d7ade7041f87\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7ade7041f87\"}, \"97ccd16dd874\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"97ccd16dd874\"}, \"d7f207d5f58e\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7f207d5f58e\"}, \"fc67d1b2a8f5\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"fc67d1b2a8f5\"}, \"60e35b85f76d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"60e35b85f76d\"}, \"5ee76d3e13ce\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ee76d3e13ce\"}, \"d08d6f5128db\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d08d6f5128db\"}, \"b2a202cd17d6\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b2a202cd17d6\"}, \"66a52b938110\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"66a52b938110\"}, \"81b677cf14fa\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"81b677cf14fa\"}, \"4d7e960c0f42\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"4d7e960c0f42\"}, \"3d7d72ccccff\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"3d7d72ccccff\"}, \"5ce4715f5733\": {\"status\": \"Layer already 
exists\", \"progressDetail\": {}, \"id\": \"5ce4715f5733\"}, \"7c92751b7d81\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"7c92751b7d81\"}, \"2e6bfb4089f3\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"2e6bfb4089f3\"}, \"b0eb2032f1da\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b0eb2032f1da\"}, \"1ab6ee41ee9a\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"1ab6ee41ee9a\"}, \"824821e7a1be\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824821e7a1be\"}, \"824bf068fd3d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824bf068fd3d\"}", "Pushing image\\n\", \"progress\": {\"b323267758d6\": \"Pushed\", \"2937f5f8c538\": \"Pushed\", \"b53458b0dbc2\": {\"current\": 56789504, \"total\": 210460187}, \"d7ade7041f87\": \"Layer already exists\", \"97ccd16dd874\": \"Layer already exists\", \"d7f207d5f58e\": \"Layer already exists\", \"fc67d1b2a8f5\": \"Layer already exists\", \"60e35b85f76d\": \"Layer already exists\", \"5ee76d3e13ce\": \"Layer already exists\", \"d08d6f5128db\": \"Layer already exists\", \"b2a202cd17d6\": \"Layer already exists\", \"66a52b938110\": \"Layer already exists\", \"81b677cf14fa\": \"Layer already exists\", \"4d7e960c0f42\": \"Layer already exists\", \"3d7d72ccccff\": \"Layer already exists\", \"5ce4715f5733\": \"Layer already exists\", \"7c92751b7d81\": \"Layer already exists\", \"2e6bfb4089f3\": \"Layer already exists\", \"b0eb2032f1da\": \"Layer already exists\", \"1ab6ee41ee9a\": \"Layer already exists\", \"824821e7a1be\": \"Layer already exists\", \"824bf068fd3d\": \"Layer already exists\"}, \"layers\": {\"b323267758d6\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"b323267758d6\"}, \"2937f5f8c538\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"2937f5f8c538\"}, \"b53458b0dbc2\": {\"status\": \"Pushing\", \"progressDetail\": {\"current\": 56789504, \"total\": 210460187}, \"progress\": \"[=============>                                     ]  56.79MB/210.5MB\", \"id\": \"b53458b0dbc2\"}, \"d7ade7041f87\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7ade7041f87\"}, \"97ccd16dd874\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"97ccd16dd874\"}, \"d7f207d5f58e\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7f207d5f58e\"}, \"fc67d1b2a8f5\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"fc67d1b2a8f5\"}, \"60e35b85f76d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"60e35b85f76d\"}, \"5ee76d3e13ce\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ee76d3e13ce\"}, \"d08d6f5128db\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d08d6f5128db\"}, \"b2a202cd17d6\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b2a202cd17d6\"}, \"66a52b938110\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"66a52b938110\"}, \"81b677cf14fa\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"81b677cf14fa\"}, \"4d7e960c0f42\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"4d7e960c0f42\"}, \"3d7d72ccccff\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"3d7d72ccccff\"}, \"5ce4715f5733\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ce4715f5733\"}, 
\"7c92751b7d81\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"7c92751b7d81\"}, \"2e6bfb4089f3\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"2e6bfb4089f3\"}, \"b0eb2032f1da\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b0eb2032f1da\"}, \"1ab6ee41ee9a\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"1ab6ee41ee9a\"}, \"824821e7a1be\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824821e7a1be\"}, \"824bf068fd3d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824bf068fd3d\"}", "Pushing image\\n\", \"progress\": {\"b323267758d6\": \"Pushed\", \"2937f5f8c538\": \"Pushed\", \"b53458b0dbc2\": {\"current\": 92998144, \"total\": 210460187}, \"d7ade7041f87\": \"Layer already exists\", \"97ccd16dd874\": \"Layer already exists\", \"d7f207d5f58e\": \"Layer already exists\", \"fc67d1b2a8f5\": \"Layer already exists\", \"60e35b85f76d\": \"Layer already exists\", \"5ee76d3e13ce\": \"Layer already exists\", \"d08d6f5128db\": \"Layer already exists\", \"b2a202cd17d6\": \"Layer already exists\", \"66a52b938110\": \"Layer already exists\", \"81b677cf14fa\": \"Layer already exists\", \"4d7e960c0f42\": \"Layer already exists\", \"3d7d72ccccff\": \"Layer already exists\", \"5ce4715f5733\": \"Layer already exists\", \"7c92751b7d81\": \"Layer already exists\", \"2e6bfb4089f3\": \"Layer already exists\", \"b0eb2032f1da\": \"Layer already exists\", \"1ab6ee41ee9a\": \"Layer already exists\", \"824821e7a1be\": \"Layer already exists\", \"824bf068fd3d\": \"Layer already exists\"}, \"layers\": {\"b323267758d6\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"b323267758d6\"}, \"2937f5f8c538\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"2937f5f8c538\"}, \"b53458b0dbc2\": {\"status\": \"Pushing\", \"progressDetail\": {\"current\": 92998144, \"total\": 210460187}, \"progress\": \"[======================>                            ]     93MB/210.5MB\", \"id\": \"b53458b0dbc2\"}, \"d7ade7041f87\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7ade7041f87\"}, \"97ccd16dd874\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"97ccd16dd874\"}, \"d7f207d5f58e\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7f207d5f58e\"}, \"fc67d1b2a8f5\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"fc67d1b2a8f5\"}, \"60e35b85f76d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"60e35b85f76d\"}, \"5ee76d3e13ce\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ee76d3e13ce\"}, \"d08d6f5128db\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d08d6f5128db\"}, \"b2a202cd17d6\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b2a202cd17d6\"}, \"66a52b938110\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"66a52b938110\"}, \"81b677cf14fa\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"81b677cf14fa\"}, \"4d7e960c0f42\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"4d7e960c0f42\"}, \"3d7d72ccccff\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"3d7d72ccccff\"}, \"5ce4715f5733\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ce4715f5733\"}, \"7c92751b7d81\": {\"status\": \"Layer already exists\", 
\"progressDetail\": {}, \"id\": \"7c92751b7d81\"}, \"2e6bfb4089f3\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"2e6bfb4089f3\"}, \"b0eb2032f1da\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b0eb2032f1da\"}, \"1ab6ee41ee9a\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"1ab6ee41ee9a\"}, \"824821e7a1be\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824821e7a1be\"}, \"824bf068fd3d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824bf068fd3d\"}", "Pushing image\\n\", \"progress\": {\"b323267758d6\": \"Pushed\", \"2937f5f8c538\": \"Pushed\", \"b53458b0dbc2\": {\"current\": 127535616, \"total\": 210460187}, \"d7ade7041f87\": \"Layer already exists\", \"97ccd16dd874\": \"Layer already exists\", \"d7f207d5f58e\": \"Layer already exists\", \"fc67d1b2a8f5\": \"Layer already exists\", \"60e35b85f76d\": \"Layer already exists\", \"5ee76d3e13ce\": \"Layer already exists\", \"d08d6f5128db\": \"Layer already exists\", \"b2a202cd17d6\": \"Layer already exists\", \"66a52b938110\": \"Layer already exists\", \"81b677cf14fa\": \"Layer already exists\", \"4d7e960c0f42\": \"Layer already exists\", \"3d7d72ccccff\": \"Layer already exists\", \"5ce4715f5733\": \"Layer already exists\", \"7c92751b7d81\": \"Layer already exists\", \"2e6bfb4089f3\": \"Layer already exists\", \"b0eb2032f1da\": \"Layer already exists\", \"1ab6ee41ee9a\": \"Layer already exists\", \"824821e7a1be\": \"Layer already exists\", \"824bf068fd3d\": \"Layer already exists\"}, \"layers\": {\"b323267758d6\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"b323267758d6\"}, \"2937f5f8c538\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"2937f5f8c538\"}, \"b53458b0dbc2\": {\"status\": \"Pushing\", \"progressDetail\": {\"current\": 127535616, \"total\": 210460187}, \"progress\": \"[==============================>                    ]  127.5MB/210.5MB\", \"id\": \"b53458b0dbc2\"}, \"d7ade7041f87\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7ade7041f87\"}, \"97ccd16dd874\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"97ccd16dd874\"}, \"d7f207d5f58e\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7f207d5f58e\"}, \"fc67d1b2a8f5\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"fc67d1b2a8f5\"}, \"60e35b85f76d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"60e35b85f76d\"}, \"5ee76d3e13ce\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ee76d3e13ce\"}, \"d08d6f5128db\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d08d6f5128db\"}, \"b2a202cd17d6\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b2a202cd17d6\"}, \"66a52b938110\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"66a52b938110\"}, \"81b677cf14fa\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"81b677cf14fa\"}, \"4d7e960c0f42\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"4d7e960c0f42\"}, \"3d7d72ccccff\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"3d7d72ccccff\"}, \"5ce4715f5733\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ce4715f5733\"}, \"7c92751b7d81\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"7c92751b7d81\"}, \"2e6bfb4089f3\": 
{\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"2e6bfb4089f3\"}, \"b0eb2032f1da\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b0eb2032f1da\"}, \"1ab6ee41ee9a\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"1ab6ee41ee9a\"}, \"824821e7a1be\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824821e7a1be\"}, \"824bf068fd3d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824bf068fd3d\"}", "Pushing image\\n\", \"progress\": {\"b323267758d6\": \"Pushed\", \"2937f5f8c538\": \"Pushed\", \"b53458b0dbc2\": {\"current\": 162073088, \"total\": 210460187}, \"d7ade7041f87\": \"Layer already exists\", \"97ccd16dd874\": \"Layer already exists\", \"d7f207d5f58e\": \"Layer already exists\", \"fc67d1b2a8f5\": \"Layer already exists\", \"60e35b85f76d\": \"Layer already exists\", \"5ee76d3e13ce\": \"Layer already exists\", \"d08d6f5128db\": \"Layer already exists\", \"b2a202cd17d6\": \"Layer already exists\", \"66a52b938110\": \"Layer already exists\", \"81b677cf14fa\": \"Layer already exists\", \"4d7e960c0f42\": \"Layer already exists\", \"3d7d72ccccff\": \"Layer already exists\", \"5ce4715f5733\": \"Layer already exists\", \"7c92751b7d81\": \"Layer already exists\", \"2e6bfb4089f3\": \"Layer already exists\", \"b0eb2032f1da\": \"Layer already exists\", \"1ab6ee41ee9a\": \"Layer already exists\", \"824821e7a1be\": \"Layer already exists\", \"824bf068fd3d\": \"Layer already exists\"}, \"layers\": {\"b323267758d6\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"b323267758d6\"}, \"2937f5f8c538\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"2937f5f8c538\"}, \"b53458b0dbc2\": {\"status\": \"Pushing\", \"progressDetail\": {\"current\": 162073088, \"total\": 210460187}, \"progress\": \"[======================================>            ]  162.1MB/210.5MB\", \"id\": \"b53458b0dbc2\"}, \"d7ade7041f87\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7ade7041f87\"}, \"97ccd16dd874\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"97ccd16dd874\"}, \"d7f207d5f58e\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7f207d5f58e\"}, \"fc67d1b2a8f5\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"fc67d1b2a8f5\"}, \"60e35b85f76d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"60e35b85f76d\"}, \"5ee76d3e13ce\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ee76d3e13ce\"}, \"d08d6f5128db\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d08d6f5128db\"}, \"b2a202cd17d6\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b2a202cd17d6\"}, \"66a52b938110\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"66a52b938110\"}, \"81b677cf14fa\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"81b677cf14fa\"}, \"4d7e960c0f42\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"4d7e960c0f42\"}, \"3d7d72ccccff\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"3d7d72ccccff\"}, \"5ce4715f5733\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ce4715f5733\"}, \"7c92751b7d81\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"7c92751b7d81\"}, \"2e6bfb4089f3\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": 
\"2e6bfb4089f3\"}, \"b0eb2032f1da\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b0eb2032f1da\"}, \"1ab6ee41ee9a\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"1ab6ee41ee9a\"}, \"824821e7a1be\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824821e7a1be\"}, \"824bf068fd3d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824bf068fd3d\"}", "Pushing image\\n\", \"progress\": {\"b323267758d6\": \"Pushed\", \"2937f5f8c538\": \"Pushed\", \"b53458b0dbc2\": {\"current\": 197935104, \"total\": 210460187}, \"d7ade7041f87\": \"Layer already exists\", \"97ccd16dd874\": \"Layer already exists\", \"d7f207d5f58e\": \"Layer already exists\", \"fc67d1b2a8f5\": \"Layer already exists\", \"60e35b85f76d\": \"Layer already exists\", \"5ee76d3e13ce\": \"Layer already exists\", \"d08d6f5128db\": \"Layer already exists\", \"b2a202cd17d6\": \"Layer already exists\", \"66a52b938110\": \"Layer already exists\", \"81b677cf14fa\": \"Layer already exists\", \"4d7e960c0f42\": \"Layer already exists\", \"3d7d72ccccff\": \"Layer already exists\", \"5ce4715f5733\": \"Layer already exists\", \"7c92751b7d81\": \"Layer already exists\", \"2e6bfb4089f3\": \"Layer already exists\", \"b0eb2032f1da\": \"Layer already exists\", \"1ab6ee41ee9a\": \"Layer already exists\", \"824821e7a1be\": \"Layer already exists\", \"824bf068fd3d\": \"Layer already exists\"}, \"layers\": {\"b323267758d6\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"b323267758d6\"}, \"2937f5f8c538\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"2937f5f8c538\"}, \"b53458b0dbc2\": {\"status\": \"Pushing\", \"progressDetail\": {\"current\": 197935104, \"total\": 210460187}, \"progress\": \"[===============================================>   ]  197.9MB/210.5MB\", \"id\": \"b53458b0dbc2\"}, \"d7ade7041f87\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7ade7041f87\"}, \"97ccd16dd874\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"97ccd16dd874\"}, \"d7f207d5f58e\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7f207d5f58e\"}, \"fc67d1b2a8f5\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"fc67d1b2a8f5\"}, \"60e35b85f76d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"60e35b85f76d\"}, \"5ee76d3e13ce\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ee76d3e13ce\"}, \"d08d6f5128db\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d08d6f5128db\"}, \"b2a202cd17d6\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b2a202cd17d6\"}, \"66a52b938110\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"66a52b938110\"}, \"81b677cf14fa\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"81b677cf14fa\"}, \"4d7e960c0f42\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"4d7e960c0f42\"}, \"3d7d72ccccff\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"3d7d72ccccff\"}, \"5ce4715f5733\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ce4715f5733\"}, \"7c92751b7d81\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"7c92751b7d81\"}, \"2e6bfb4089f3\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"2e6bfb4089f3\"}, \"b0eb2032f1da\": {\"status\": \"Layer already 
exists\", \"progressDetail\": {}, \"id\": \"b0eb2032f1da\"}, \"1ab6ee41ee9a\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"1ab6ee41ee9a\"}, \"824821e7a1be\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824821e7a1be\"}, \"824bf068fd3d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824bf068fd3d\"}", "Pushing image\\n\", \"progress\": {\"b323267758d6\": \"Pushed\", \"2937f5f8c538\": \"Pushed\", \"b53458b0dbc2\": \"Pushed\", \"d7ade7041f87\": \"Layer already exists\", \"97ccd16dd874\": \"Layer already exists\", \"d7f207d5f58e\": \"Layer already exists\", \"fc67d1b2a8f5\": \"Layer already exists\", \"60e35b85f76d\": \"Layer already exists\", \"5ee76d3e13ce\": \"Layer already exists\", \"d08d6f5128db\": \"Layer already exists\", \"b2a202cd17d6\": \"Layer already exists\", \"66a52b938110\": \"Layer already exists\", \"81b677cf14fa\": \"Layer already exists\", \"4d7e960c0f42\": \"Layer already exists\", \"3d7d72ccccff\": \"Layer already exists\", \"5ce4715f5733\": \"Layer already exists\", \"7c92751b7d81\": \"Layer already exists\", \"2e6bfb4089f3\": \"Layer already exists\", \"b0eb2032f1da\": \"Layer already exists\", \"1ab6ee41ee9a\": \"Layer already exists\", \"824821e7a1be\": \"Layer already exists\", \"824bf068fd3d\": \"Layer already exists\"}, \"layers\": {\"b323267758d6\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"b323267758d6\"}, \"2937f5f8c538\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"2937f5f8c538\"}, \"b53458b0dbc2\": {\"status\": \"Pushed\", \"progressDetail\": {}, \"id\": \"b53458b0dbc2\"}, \"d7ade7041f87\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7ade7041f87\"}, \"97ccd16dd874\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"97ccd16dd874\"}, \"d7f207d5f58e\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d7f207d5f58e\"}, \"fc67d1b2a8f5\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"fc67d1b2a8f5\"}, \"60e35b85f76d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"60e35b85f76d\"}, \"5ee76d3e13ce\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ee76d3e13ce\"}, \"d08d6f5128db\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"d08d6f5128db\"}, \"b2a202cd17d6\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b2a202cd17d6\"}, \"66a52b938110\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"66a52b938110\"}, \"81b677cf14fa\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"81b677cf14fa\"}, \"4d7e960c0f42\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"4d7e960c0f42\"}, \"3d7d72ccccff\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"3d7d72ccccff\"}, \"5ce4715f5733\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"5ce4715f5733\"}, \"7c92751b7d81\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"7c92751b7d81\"}, \"2e6bfb4089f3\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"2e6bfb4089f3\"}, \"b0eb2032f1da\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"b0eb2032f1da\"}, \"1ab6ee41ee9a\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"1ab6ee41ee9a\"}, \"824821e7a1be\": {\"status\": \"Layer already exists\", 
\"progressDetail\": {}, \"id\": \"824821e7a1be\"}, \"824bf068fd3d\": {\"status\": \"Layer already exists\", \"progressDetail\": {}, \"id\": \"824bf068fd3d\"}", "Successfully pushed binder-registry.conp.cloud/binder-registry.conp.cloud/binder-nbclab-2dnimare-2dpaper-3e379f:3225e1e2040c7a1c93b946696b2bdd3d90321b8a", "Built image, launching...\\n", "server running at https://binder.conp.cloud/jupyter/user/nbclab-nimare-paper-c0ffuhgp/\\n\", \"image\": \"binder-registry.conp.cloud/binder-registry.conp.cloud/binder-nbclab-2dnimare-2dpaper-3e379f:3225e1e2040c7a1c93b946696b2bdd3d90321b8a\", \"repo_url\": \"https://github.com/NBCLab/nimare-paper\", \"token\": \"iBpO7RdXQ722zmSYq3K3Qw\", \"binder_ref_url\": \"https://github.com/NBCLab/nimare-paper/tree/3225e1e2040c7a1c93b946696b2bdd3d90321b8a\", \"binder_launch_host\": \"https://binder.conp.cloud/\", \"binder_request\": \"v2/gh/NBCLab/nimare-paper.git/3225e1e2040c7a1c93b946696b2bdd3d90321b8a\", \"binder_persistent_request\": \"v2/gh/NBCLab/nimare-paper/3225e1e2040c7a1c93b946696b2bdd3d90321b8a"]
      
ltetrel commented 2 years ago

Same errors as previously: book_nimare.log

What I would suggest is that you debug and build it yourself with the preview service: https://roboneuro.herokuapp.com/. You can also use the following GUI (the same as traditional Binder): https://binder.conp.cloud/. Push your repo changes and enter the repo link (optionally a commit hash if you want to build a specific commit), and the Jupyter Book will build under the hood.

This will allow much faster iteration, without me having to re-try the build every time it does not execute.

tsalo commented 2 years ago

I will do just that, thank you. Unfortunately, my university's HPC is down for maintenance at the moment, so it might be a few days before I'm able to switch to using pre-generated content for the problematic decoding step.

ltetrel commented 2 years ago

Is the pre-generated content heavy (in terms of space)? Maybe an easier step would be to reduce/downsample your data a bit?

tsalo commented 2 years ago

I don't think so. The current zipped data folder is 535.1 MB, and the content I would need to generate for this step would probably just be a TSV file, so adding it wouldn't noticeably increase the size of the data folder.

The whole book uses the same 2 mm³ template, so I think switching to a lower-resolution one would require a lot of work.

tsalo commented 2 years ago

I have updated the book to use pre-generated files for that problematic step, and I re-built it on the FIU HPC. I then submitted a preview request on RoboNeuro yesterday. It seemed to be building for a while, and I got the submission email, but after a while the page updated to say "no preview found with that ID" and there was no associated email. I recall this being a problem before, so I resubmitted today just to make sure it wasn't a one-off issue, but it happened again.

pbellec commented 2 years ago

@tsalo sorry to hear you ran into yet another hurdle. Loic is on vacation at the moment, and I unfortunately cannot resolve this issue myself. We'll get back to you asap.

ltetrel commented 2 years ago

Hi @tsalo,

I will be able to look at your submission tomorrow :)

ltetrel commented 2 years ago

Still memory errors (I suppose) for subtraction.md, macm.md, and resources.md (correction.md succeeded this time). jb_build.log

I highly advise you to benchmark the notebooks and make sure they consume strictly less than 4 GB. I am also in the process of checking where it fails; still waiting for the compute env to spawn (it takes a while, since it needs to build the book first).
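
If it helps, here is one rough way to measure a single page's peak RAM (a sketch: the file name is illustrative, the MyST .md page is assumed to have been converted to .ipynb, e.g. with jupytext, and nbclient/nbformat are assumed to be available in the book environment):

    # Execute one page headlessly and read the kernel subprocess's
    # high-water mark after it exits (Linux reports ru_maxrss in kB).
    import resource
    import nbformat
    from nbclient import NotebookClient

    nb = nbformat.read("content/cbma.ipynb", as_version=4)
    NotebookClient(nb, timeout=7200).execute()  # starts/stops its own kernel
    peak_gb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss / 1024**2
    print(f"peak kernel RSS: {peak_gb:.2f} GB")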

ltetrel commented 2 years ago

Let me know when the build has finished, @tsalo, so I can check the logs for you. I suspect the log issue is related to roboneuro not being able to stick to a long-running build, but I have no clue on that, since I did not work on the roboneuro frontend :(

tsalo commented 2 years ago

I submitted to the preview service about 10 minutes ago (right after I saw your comment), and it looks like it either failed or just stopped updating the preview status page a moment ago.

ltetrel commented 2 years ago

I submitted to the preview service about 10 minutes ago (right after I saw your comment), and it looks like it either failed or just stopped updating the preview status page a moment ago.

It is still running: (screenshot of the running build)

ltetrel commented 2 years ago

It failed, again with a kernel-died error for the same notebooks: cbma.md, annotation.md, and correction.md. nimare-book.log

So we have confirmation that NeuroLibre needs more memory than a local jupyter-book build. Try to further reduce the memory consumption (3 GB?); I think we are going in the right direction at least... Again, NeuroLibre is not a computing environment, so this is somewhat expected.

tsalo commented 2 years ago

I will re-run with 3GB as the limit on the FIU HPC. Could you share the logs for the failing pages as well, just in case it builds successfully for me?

EDIT: annotation.md has successfully built, so I'm guessing that reducing --mem-per-cpu to 3000 won't catch the problems Neurolibre is experiencing.

ltetrel commented 2 years ago

Could you share the logs for the failing pages as well, just in case it builds successfully for me?

What do you mean? There is only one log file, which is the one I sent.

tsalo commented 2 years ago

Sorry, I was referring to the three notebook-specific logs referenced inside that file:

WARNING: Couldn't find cache key for notebook file content/annotation.md. Outputs will not be inserted.
  Last execution failed with traceback saved in /mnt/books/NBCLab/github.com/nimare-paper/fe4a044859d3079d38957addbf6c487140430747/_build/html/reports/annotation.log
WARNING: Couldn't find cache key for notebook file content/cbma.md. Outputs will not be inserted.
  Last execution failed with traceback saved in /mnt/books/NBCLab/github.com/nimare-paper/fe4a044859d3079d38957addbf6c487140430747/_build/html/reports/cbma.log
WARNING: Couldn't find cache key for notebook file content/correction.md. Outputs will not be inserted.
  Last execution failed with traceback saved in /mnt/books/NBCLab/github.com/nimare-paper/fe4a044859d3079d38957addbf6c487140430747/_build/html/reports/correction.log

ltetrel commented 2 years ago

Oh ok, checking now!

ltetrel commented 2 years ago

annotation.log cbma.log correction.log

tsalo commented 2 years ago

Ah, CBMA and correction failed because they were writing out files to a read-only file system!

I commented in NBCLab/nimare-paper#18 about the annotation issue.

ltetrel commented 2 years ago

I was not really aware of those files; this is something important that we would need to add to our roboneuro logs. Are you trying to write some files onto the system? This seems to be a new issue (so it is not memory-related anymore).

ltetrel commented 2 years ago

It is trying to write to ./../data/nimare-paper/, which is obviously read-only (we don't allow users to write to the data folder, so they don't mess up datasets from other submissions :)

tsalo commented 2 years ago

That makes sense. I pre-generated those files because I knew they wouldn't be permanent, but I didn't realize they wouldn't write out at all. I can skip the step that actually saves the files and add another admonition about why we don't run that part, but I am curious if there's a way to save files throughout the course of the Book build. Could they be saved to a different location?

ltetrel commented 2 years ago

Each user has up to 10 GB by default, so you can write stuff anywhere outside of ./data.

While thinking of it, it may be interesting to follow the data template from cookiecutter, but that would require a lot of changes/refactoring on my side.

ltetrel commented 2 years ago

What do you think would be the most suitable place for writing intermediate data, e.g. data/processed? https://github.com/drivendata/cookiecutter-data-science/tree/master/%7B%7B%20cookiecutter.repo_name%20%7D%7D/data

For now, I suggest you create a new dir at REPOSITORY_ROOT/outputs/.
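
A minimal sketch of that layout (names are illustrative):

    # Write derived artifacts under REPOSITORY_ROOT/outputs/ instead of
    # the read-only ./data mount.
    from pathlib import Path

    out_dir = Path("outputs")
    out_dir.mkdir(exist_ok=True)
    dset.save(out_dir / "neurosynth_dataset.pkl.gz")  # dset: NiMARE Dataset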

tsalo commented 2 years ago

Writing to an outputs subdirectory definitely sounds like an easy solution. I will do that. Thanks!

tsalo commented 2 years ago

I've updated the book to use the most recent NiMARE release, which means that there is no Java requirement. Also, I switched to writing out certain files to outputs, which also let me reduce the size of the zipped data file. I re-built the book with the new changes and it ran successfully on the FIU HPC with a memory limit of 3GB in a little over 2 hours.

Should I resubmit to RoboNeuro?

ltetrel commented 2 years ago

yes let's give it a try :)

tsalo commented 2 years ago

Submitted!

EDIT: After 10 minutes, it did the standard thing where it says "No preview found with that ID."

ltetrel commented 2 years ago

Ok let's wait at least 2 hours and I will check

ltetrel commented 2 years ago

Whoops, more fails :( book-build.log

05_cbma.log 06_ibma.log 07_correction.log 09_subtraction.log 10_macm.log 12_decoding.log

ltetrel commented 2 years ago

It seems like the outputs directory does not exist, and indeed I don't see it in your repo.

ltetrel commented 2 years ago

As for resources.md, I am clueless about this; I would bet on memory issues again? I don't have a report log for this one.

tsalo commented 2 years ago

The outputs directory is created by 03_download_data.md. For some reason it looks like the chapters were being executed in a random order:

Executing: /home/jovyan/content/10_macm.md
Executing: /home/jovyan/content/12_decoding.md
Executing: /home/jovyan/content/05_cbma.md
Executing: /home/jovyan/content/06_ibma.md
Executing: /home/jovyan/content/03_download_data.md
Execution Succeeded: /home/jovyan/content/03_download_data.md
Executing: /home/jovyan/content/09_subtraction.md
Executing: /home/jovyan/content/04_resources.md
Executing: /home/jovyan/content/11_annotation.md
Execution Succeeded: /home/jovyan/content/11_annotation.md

When I run it on the FIU HPC, the files are executed in alphabetical order; that's why I added numbers to the filenames. Do you know if there's a Jupyter Book setting that determines that?

ltetrel commented 2 years ago

Oh, ok, I think I understand why... Jupyter Book caching does not execute notebooks in the same order each time; this is quite weird, but somehow related to race conditions (yes, this is bad...). So I would advise adding your folder layout to the GitHub repo directly; this will also help (a little) to reduce CPU usage and the storage file access needed to create dirs etc... Weird that you don't have this issue on your HPC, though...

In short, for you: just add the outputs directory on GitHub, and make sure your code creates the directory only if it does not already exist.
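
Something along these lines, for example (the helper is hypothetical):

    # Make a page robust to out-of-order execution: create the shared
    # directory idempotently and regenerate a missing input instead of
    # assuming an earlier page already produced it.
    from pathlib import Path

    dset_file = Path("outputs/neurosynth_dataset.pkl.gz")
    dset_file.parent.mkdir(parents=True, exist_ok=True)  # no-op if present
    if not dset_file.exists():
        build_neurosynth_dataset(dset_file)  # hypothetical helper (page 04)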

ltetrel commented 2 years ago

@tsalo Ok, at this stage, and to ease the process, what I would suggest is that you submit using https://binder.conp.cloud. Then, when you have your instance ready, you can experiment there: create folders as you want, update parameters as needed, etc... I can also help with your software to reduce memory if needed.

Don't worry, it will take some time, but I am sure we will be able to fix all the issues one by one; your submission is quite compute-intensive, but it should pass.

ltetrel commented 2 years ago

After some digging, it seems that it is io.convert_neurosynth_to_dataset in 04_resources.md that is problematic here. Hence, the other notebooks do not have access to the files needed for their execution.

tsalo commented 2 years ago

In my recent set of changes, I tried to load files that are created in earlier pages (based on the numerical prefix) from the generated copies in outputs/ rather than from the Repo2Data-downloaded data folder. I figured it would reduce the data being stored on Google Drive and streamline the book a bit. However, if the pages aren't executed in order, then there will be a lot of cases where files are not available in the "later" pages because the earlier pages weren't executed. I think I need to add the generated files back into the data directory, and load any files that might be created in one page and used in another from the data directory instead of outputs.

I'm hoping that it's just a matter of the order of execution and there isn't a memory issue in 04_resources.md, but I could be wrong.

ltetrel commented 2 years ago

I was not clear in my latest comment: there is a memory issue in 04_resources.md with io.convert_neurosynth_to_dataset. I clearly see the memory going up from ~400 MB to more than 4 GB. As a result, this notebook crashes, and the rest of the notebooks cannot execute because (I assume) they don't have access to the generated files from resources.

For the ordering, I am not sure whether there is an issue or not, given the 04 memory issue. What I observed is that when executing jupyter-book without caching, execution is in order (sorry, I lost the output, but I clearly saw it). With caching, I am not sure if it executes in order; what is sure is that the staged logging is not in order.

tsalo commented 2 years ago

I've reverted the recent changes, so it should work just fine if the pages aren't run in order. I really don't know how there's still a memory issue when it runs just fine with a 3 GB limit on the FIU HPC... I can try using reduced versions of Neurosynth and NeuroQuery, though, which should reduce memory.
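
For instance, something along these lines might work (a sketch: the file names, column name, and cutoff are assumptions about the Neurosynth data layout):

    # Trim the Neurosynth input before conversion so the memory-heavy
    # io.convert_neurosynth_to_dataset call sees fewer studies.
    import pandas as pd

    coords = pd.read_table("data/neurosynth/coordinates.tsv.gz")
    keep = coords["id"].unique()[:5000]  # arbitrary subset of study IDs
    coords[coords["id"].isin(keep)].to_csv(
        "outputs/coordinates_reduced.tsv.gz", sep="\t", index=False
    )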

pbellec commented 2 years ago

@tsalo quick comment: memory caps on HPC are typically software-based: there is, in effect, a large pool of memory available, and a process monitors memory usage. This type of mechanism can be forgiving of quick spikes in memory usage. In the case of NeuroLibre execution pods, it is possible that the memory limit is much harsher. This is very much guesswork: I am not aware of the details of memory management either at your HPC or in Kubernetes. But I have definitely witnessed cases of non-reproducible out-of-memory errors on HPCs, so at least it is a possibility. Reducing the memory footprint should hopefully solve this n-th blocker.

tsalo commented 2 years ago

@pbellec thank you for explaining that. I'll reduce the datasets that are converted.