MozillaFestival / open-leaders-6

Project tracking repo for Mozilla Open Leader 6
https://foundation.mozilla.org/en/opportunity/mozilla-open-leaders/
Creative Commons Attribution Share Alike 4.0 International

Swipes for Science #51

akeshavan opened this issue 5 years ago

akeshavan commented 5 years ago

Project Lead: @akeshavan

Mentor: @derekhoward

Welcome to OL6, Cohort D! This issue will be used to track your project and progress during the program. Please use this checklist over the next few weeks as you start Open Leadership Training :tada:.


- [ ] Before Week 1 (Sept 12): Your first mentorship call
- [ ] Before Week 2 (Sept 19): First Cohort Call (Open by Design)
- [ ] Before Week 3 (Sept 26): Mentorship call
- [ ] Before Week 4 (Oct 3): Cohort Call (Build for Understanding)

This issue is here to help you keep track of work during the first month of the program. Please refer to the OL6 Syllabus for more detailed weekly notes and assignments past week 4.

akeshavan commented 5 years ago

[DRAFT] Vision or Mission Statement

I’m working with biomedical researchers to annotate their large datasets by engaging citizen scientists so that researchers and citizen scientists working together can accelerate scientific discovery.

I’m working openly because I believe that everyone can be a part of the scientific process, regardless of their educational background.

mekline commented 5 years ago

I'd love to hear more about this! What kinds of datasets and annotations are you working with?

akeshavan commented 5 years ago

Many! Primarily brain imaging research. I'll list some apps below:

  1. braindr.us -- this project studies pediatric mental health by analyzing thousands of brain MRI images. Kids move a lot in the MRI scanner, which results in poor-quality images. So braindr is like a Tinder app for brains -- swipe left to fail a bad-quality image! You can read about the whole experiment at http://results.braindr.us

  2. braindrles.us -- this is a collaboration with researchers at USC who are studying stroke lesions. They ran automated stroke segmentation on hundreds of brain images, and need to know which algorithm did the best job. So here, you swipe left if you see a poor-quality stroke segmentation.

  3. appstract.pub -- this is a text annotation application for scientific abstracts. I wanted to know the distribution of sample sizes in neuroimaging studies of autism, but no database indexes this information! So in appstract, you tap the highlighted numbers in the abstract text that describe the sample size (see the sketch after this list). I plan on expanding this so that you can annotate info other than numbers.

  4. whaledr -- this is an app where you listen to a 5-second sound clip from the ocean (and see its spectrogram), and swipe right if you hear a whale. It's still very much a work in progress, and we are still working on creating a good tutorial! But you can check out the "WhaleChats" section to hear example sound clips :)
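For anyone curious what the "tap the highlighted numbers" interaction needs under the hood, here's a toy sketch of the first step -- finding numeric tokens and their character offsets so the UI can highlight them. This isn't appstract's actual code, and the abstract text is made up:

```python
import re

abstract = ("We recruited 24 children with autism and 31 typically "
            "developing controls (ages 8-12) for this fMRI study.")

# Each numeric token becomes a tappable candidate; the annotator then marks
# which ones actually describe the sample size.
candidates = [(m.group(), m.start(), m.end())
              for m in re.finditer(r"\d+(?:\.\d+)?", abstract)]
print(candidates)  # each number with its (start, end) character offsets
```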

If you have any similar annotation needs for your research, we should chat! I want to work on making these apps more configurable/extensible so that anyone can use them.

Wentale commented 5 years ago

Hi Anisha, nice to meet you in the cohort meeting this week. I like that the vision keeps things broad enough but still includes the key elements of your project. Also, thanks to whaledr... that gave me the idea for the cohort name, Whale Song!

akeshavan commented 5 years ago

Thanks @Wentale ! I voted for Whale Song 👍

nerantzis commented 5 years ago

I do agree. Every citizen and every student can participate in science!

akeshavan commented 5 years ago

Here is my Open Canvas: https://docs.google.com/presentation/d/1ms4z2pXf3zPzrfGwQS876jgwklOYmYR6UxWee5J5l1c/edit?usp=sharing

kmahelona commented 5 years ago

Yay citizen science! There are all sorts of challenges when crowdsourcing data annotation. We're doing this for quality control of te reo Māori readings, and because it's an endangered language we often struggle with questions like what counts as "correct" pronunciation. I'd be keen to learn about your challenges engaging with citizen scientists and how you navigate them.

akeshavan commented 5 years ago

@kmahelona how cool! I've only just started working with citizen science, so I'm still new to the challenges. In my proof of concept app (braindr.us) we found a way to remove bad-quality annotations with a machine learning algorithm -- you can read more about it at results.braindr.us. Basically, we had a few answers we knew were correct, and we weighted people's responses based on this ground truth.
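To make that concrete, here's a minimal sketch of the "weight people by their answers on known-correct items" idea in Python. This is not the actual braindr pipeline (which used a machine learning algorithm, per the thread above); the function names and the simple accuracy-based weighting are just illustrative:

```python
from collections import defaultdict

# ratings: list of (rater, image, label), where label 1 = pass (swipe right)
# and 0 = fail (swipe left); gold: {image: true_label} for the seeded images.

def rater_weights(ratings, gold):
    """Weight each rater by their accuracy on the known (gold) images."""
    correct, total = defaultdict(int), defaultdict(int)
    for rater, image, label in ratings:
        if image in gold:
            total[rater] += 1
            correct[rater] += int(label == gold[image])
    return {r: correct[r] / total[r] for r in total}

def weighted_scores(ratings, weights):
    """Aggregate swipes into a weighted pass score per image, in [0, 1]."""
    score, norm = defaultdict(float), defaultdict(float)
    for rater, image, label in ratings:
        w = weights.get(rater, 0.5)  # chance-level weight if a rater saw no gold images
        score[image] += w * label
        norm[image] += w
    return {image: score[image] / norm[image] for image in score}
```

Images with scores near 1 pass, scores near 0 fail, and anything in between can be routed to more raters or to an expert.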

p.s. Your project (https://github.com/MozillaFestival/open-leaders-6/issues/68) sounds awesome!

kmahelona commented 5 years ago

Re the Open Canvas: the problem/solution section is nice and to the point. I feel like the contributor profile should include some info on the types of people we think would be good for the project (e.g. the obvious one is "likes to work on open-source projects"; for us, we'd love to have other indigenous developers involved in #68).

akeshavan commented 5 years ago

Thanks @kmahelona , I've updated my Open Canvas to be more specific on the contributor profile.

Also, here is my roadmap: https://github.com/SwipesForScience/SwipesForScience/issues/16

mlbonatelli commented 5 years ago

> Here is my Open Canvas: https://docs.google.com/presentation/d/1ms4z2pXf3zPzrfGwQS876jgwklOYmYR6UxWee5J5l1c/edit?usp=sharing

First of all, I loved the name! :heart: And regarding your Unique Value Proposition, I think it's great -- it sounds fun and entertaining! But annotating data is something that needs to be done with care; are you concerned about that?

akeshavan commented 5 years ago

@mlbonatelli thanks! Yes, crowdsourcing annotations is hard. The first thing we need to do is make sure, as scientists, that we are effectively explaining our data, and how to annotate it, to a general audience. In Swipes for Science, I want a template that scientists fill in with text and images, and that displays this information nicely. Another thing we can do is use machine learning to aggregate annotations from different users in a smart way -- we did this in the braindr.us project (you can read more at results.braindr.us).
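To illustrate the template idea, here's a hypothetical sketch of the kind of config a scientist might fill in, which the app could render into its tutorial and swipe screens. Every field name here is made up for illustration; this is not the real Swipes for Science config schema:

```python
import json

# Hypothetical study template; all field names are illustrative.
study = {
    "title": "braindr",
    "question": "Is this brain image usable?",
    "swipe_right_means": "pass",
    "swipe_left_means": "fail",
    "tutorial": [
        {"image": "examples/sharp.png", "caption": "Crisp anatomy: swipe right."},
        {"image": "examples/motion.png", "caption": "Motion blur: swipe left."},
    ],
}

with open("config.json", "w") as f:
    json.dump(study, f, indent=2)  # the app would load and render this file
```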

mlbonatelli commented 5 years ago


Really nice initiative @akeshavan ! If I can give another suggestion: after the scientists fill in the template, maybe run a trial version with other scientists and people from the general public to see if they really understood the information?