brainhackorg / global2021

Website for Brainhack global 2021
https://brainhack.org/global2021/
MIT License

Niviz: Configurable quality control image generation and rating #181

Open jerdra opened 2 years ago

jerdra commented 2 years ago

Title

Niviz: Configurable quality control image generation and rating

Leaders

Jerrold Jeyachandra @jerdra

Collaborators

None

Brainhack Global 2021 Event

BrainHack Toronto

Project Description

The process of QCing is universally boring, terrible and inefficient.

The Problem with QC

  1. Most pipelines that people write and use don't generate QC images at all, let alone images as user-friendly as those produced by widely established pipelines such as fMRIPrep.

  2. Even when QC images are generated, they do not necessarily match how users actually end up reviewing and rating the data.

  3. Most of the time, users must figure out their own way to record and organize their QC results, and this varies enormously from person to person: your collaborator might use different definitions, organizational principles, and file formats for storing QC results than you do.

  4. Comparing rated images is often slow, manual, and therefore painful. Users often doubt a rating and want to compare the image against other images with the same rating, but doing so is usually a very manual process (i.e., look up similar QC ratings in your spreadsheet, find the files, then open both images and compare).

The Solution - Niviz

Niviz is a simple, configurable Python-based tool that:

  1. Enables researchers to generate QC images from a simple YAML file for any pipeline that outputs NIfTI, GIFTI, or CIFTI images
  2. Provides a small web application, niviz-rater, that collects the generated QC images (or any set of images organized in a BIDS-style dataset!) into an interactive QC interface. The QC workflow itself can also be configured to suit the user's needs using (yet another) simple YAML file; an illustrative spec is sketched below.
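
To make the YAML-driven workflow concrete, here is a purely illustrative sketch of loading such a spec in Python. The keys (`filespecs`, `method`, `args`) and the glob patterns are assumptions made up for this example, not niviz's actual schema; see the repository documentation for the real format.

```python
import yaml  # PyYAML

# Purely illustrative spec -- these keys are assumptions for the example,
# not necessarily niviz's real schema; check the niviz docs for the actual format.
EXAMPLE_SPEC = """
package: my_pipeline
filespecs:
  - name: t1_to_template_registration
    method: registration                                 # hypothetical image-generation method
    args:
      bg_nii: "sub-*/anat/*_space-MNI152_T1w.nii.gz"     # background image (glob)
      fg_nii: "tpl-MNI152NLin2009cAsym_T1w.nii.gz"       # reference overlay
"""

spec = yaml.safe_load(EXAMPLE_SPEC)
print(spec["filespecs"][0]["name"])  # -> t1_to_template_registration
```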

Link to project repository/sources

QC image generation

https://github.com/TIGRLab/niviz

QC web application

https://github.com/jerdra/niviz-rater

Goals for Brainhack Global

Both niviz and niviz-rater are relatively new projects and therefore require a bit of maintenance and organizational effort. The primary goals are as follows:

Niviz

  1. Bug squashing
  2. Implementing a YAML schema validation step (see the sketch after this list)
  3. Documentation: Getting started, tutorials (including OSF dataset)
  4. Writing unit-tests
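
As a rough illustration of goal 2, one possible approach (an assumption on my part, not necessarily how niviz will implement it) is to validate the loaded YAML spec against a JSON Schema using the `jsonschema` package. The schema below is invented for the example.

```python
import yaml
from jsonschema import ValidationError, validate

# Illustrative schema only -- the required keys are assumptions, not niviz's real spec.
SPEC_SCHEMA = {
    "type": "object",
    "required": ["filespecs"],
    "properties": {
        "filespecs": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["name", "method"],
                "properties": {
                    "name": {"type": "string"},
                    "method": {"type": "string"},
                    "args": {"type": "object"},
                },
            },
        },
    },
}


def load_and_validate(path):
    """Load a YAML spec and fail early with a readable error if it is malformed."""
    with open(path) as f:
        spec = yaml.safe_load(f)
    try:
        validate(instance=spec, schema=SPEC_SCHEMA)
    except ValidationError as err:
        raise SystemExit(f"Invalid spec {path}: {err.message}") from err
    return spec
```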

Niviz Rater

  1. Unit tests
  2. Several UX feature additions
  3. Basic feature additions to one day support collaborative QC rating and analysis (e.g., inter-rater reliability; see the sketch after this list)
  4. Maintenance (packaging)
  5. Documentation: Tutorials, getting started, API
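
For the inter-rater reliability idea in goal 3, here is a minimal sketch of the kind of analysis that could sit on top of exported ratings. The CSV layout (`image`, `rater_a`, `rater_b` columns) is hypothetical, not an existing niviz-rater export format; the kappa computation uses scikit-learn.

```python
import csv

from sklearn.metrics import cohen_kappa_score


def kappa_from_csv(path):
    """Compute Cohen's kappa between two raters.

    Assumes a hypothetical export with columns image, rater_a, rater_b --
    not necessarily what niviz-rater will actually produce.
    """
    ratings_a, ratings_b = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ratings_a.append(row["rater_a"])
            ratings_b.append(row["rater_b"])
    return cohen_kappa_score(ratings_a, ratings_b)


# Example usage: print(kappa_from_csv("ratings.csv"))
```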

Good first issues

Issues can be found under:

https://github.com/TIGRLab/niviz/issues

https://github.com/jerdra/niviz-rater/issues

Look for the good first issue label for easy topics!

Communication channels

https://mattermost.brainhack.org/brainhack/channels/brainhack-toronto

We'll probably create our own channel if this picks up interest :)

Skills

The repositories are primarily written in Python and JavaScript; these components are mostly independent from one another, so you don't need to know both!

Python

Intermediate

Git

Intermediate

JavaScript

Familiarity with the Svelte framework is preferred. I'm still learning it myself!

Onboarding documentation

Contributing

What will participants learn?

Depending on which repository you contribute to:

Niviz

  1. Python unit testing
  2. nipype interface development (see the skeleton after this list)
  3. niworkflows image generation
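
For item 2, here is a minimal, generic nipype interface skeleton in the standard style from nipype's documentation; the class and field names are invented for illustration and are not taken from niviz.

```python
from nipype.interfaces.base import (
    BaseInterface,
    BaseInterfaceInputSpec,
    File,
    TraitedSpec,
)


class PlotQCInputSpec(BaseInterfaceInputSpec):
    in_file = File(exists=True, mandatory=True, desc="input NIfTI image")
    out_file = File("qc.svg", usedefault=True, desc="output QC figure")


class PlotQCOutputSpec(TraitedSpec):
    out_file = File(desc="generated QC figure")


class PlotQC(BaseInterface):
    """Skeleton QC-plot interface (illustrative; not niviz's actual code)."""

    input_spec = PlotQCInputSpec
    output_spec = PlotQCOutputSpec

    def _run_interface(self, runtime):
        # A real implementation would render self.inputs.in_file to
        # self.inputs.out_file here, e.g. via a niworkflows plotting utility.
        return runtime

    def _list_outputs(self):
        return {"out_file": self.inputs.out_file}
```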

Niviz-Rater

  1. The Svelte JavaScript framework
  2. Bottle for building Python web applications (see the sketch after this list)
  3. Light database work with peewee
  4. Python packaging
  5. Unit testing
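
To give a flavour of items 2 and 3, here is a tiny self-contained sketch combining Bottle and peewee. The `/rate` endpoint and the `Rating` model are invented for illustration and are not niviz-rater's actual API or schema.

```python
from bottle import Bottle, request
from peewee import CharField, Model, SqliteDatabase

db = SqliteDatabase("ratings.db")


class Rating(Model):
    """One QC rating per image (illustrative model, not niviz-rater's schema)."""

    image = CharField()
    rating = CharField()

    class Meta:
        database = db


db.connect()
db.create_tables([Rating])

app = Bottle()


@app.post("/rate")
def rate():
    # e.g. curl -d "image=sub-01_T1w.svg" -d "rating=pass" localhost:8080/rate
    row = Rating.create(image=request.forms["image"], rating=request.forms["rating"])
    # Bottle serializes returned dicts to JSON automatically
    return {"id": row.id, "image": row.image, "rating": row.rating}


if __name__ == "__main__":
    app.run(host="localhost", port=8080)
```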

Data to use

As part of contributing to the documentation efforts of this project, we'd like to host some OSF sample data.

Niviz

Some image data from a pipeline like fMRIPrep

Niviz-Rater

Some QC image data so that users can play around with writing a YAML specification file and using the QC interface

Number of collaborators

3

Credit to collaborators

Project contributors will be credited using the GitHub all-contributors bot. I'm still setting this up :see_no_evil:

Image

No image yet.

Type

coding_methods, documentation, visualization

Development status

1_basic structure

Topic

data_visualisation, other

Tools

Nipype, other

Programming language

documentation, Python, html_css, javascript

Modalities

DWI, fMRI, MRI

Git skills

1_commit_push, 2_branches_PRs

Anything else?

No response


jerdra commented 2 years ago

Hi @brainhackorg/project-monitors my project is ready!

jerdra commented 2 years ago

For Twitter:

Niviz: Tired of neuroimaging QC being slow, manual, and painful? Want to contribute to, and shape the development of a relatively new project? Niviz and Niviz-Rater are companion apps that were developed to help streamline neuroimaging QC through (1) Automating image generation with a simple config file (2) Providing a simple web interface to facilitate the QC process and handling all the boring stuff behind the scenes!

hanayik commented 2 years ago

Hi @jerdra, I recently stumbled across these projects (niviz and niviz-rater) while prepping for another Brainhack. I thought you might find our web-based image viewer project interesting given the interactive QC work you're doing. We have some Vue, React, and vanilla JS examples.

https://github.com/niivue/niivue

If you have any suggestions, feel free to submit an issue on our repo page.

neurolabusc commented 2 years ago

@jerdra: @cdrake and I are NiiVue contributors. We will both be virtually attending BrainHack DC, which will be held concurrently with BrainHack Toronto. I think there is a lot of opportunity for collaboration.

jerdra commented 2 years ago

Hi @hanayik and @neurolabusc!

Thanks for posting in this issue; all the work you've listed looks incredibly well thought out! I'd be more than happy to sit in for a brainstorming session and share our work with each other. I'm a bit pressed for time today, but happy to drop in over the next two days if you have a session planned :) Should we open up a channel on Mattermost for general QC discussion and perhaps planning?

If you're interested in learning more about niviz, the Toronto hack is planning tool demos tomorrow from 10:30 am to 12:00 pm EST; the schedule is posted here: https://brainhackto.github.io/global-toronto-12-2021/. I'd love to hear your thoughts, feedback, and perhaps ideas on how we can work together.

Finally, I haven't had the chance to look too deeply into niivue and niimath yet, but from a high-level view it looks like we're approaching QC from slightly different standpoints, so I'm very interested to learn more about your approach and vision over the coming days.