Shared-Reality-Lab / IMAGE-server

IMAGE project server components

Create process for people to log weird ML results #194

Open jeffbl opened 2 years ago

jeffbl commented 2 years ago

We need a way for people to log weird ML responses, like finding oranges where there are none, or labeling things "clean room" or "embassy" in oddly specific ways. During tech-arch today, @gp1702, you indicated you'd like this to be structured in some way so you can extract information or add to a training set. Logging individual work items here in GitHub doesn't make sense, since it would overwhelm the project board. Please propose another method for doing this, since we're already finding examples.

Cybernide commented 2 years ago

Suggest making something in-extension that we could potentially open up to the general public in the future. @gp1702 and possibly @Sabrina-Knappe, @jaydeepsingh25, @Clarisa1999, we should talk.

Cybernide commented 2 years ago

@gp1702 Just commenting here as a reminder to follow up: Could you please give us a list of feedback items that you would need?

Cybernide commented 2 years ago

@gp1702 has informed me that in terms of ML, what would be useful is a list of all the items our preprocessor detects in a graphic, along with checkboxes or some other way of selecting the ones that were erroneous. I'm assigning this to myself, but I'm also requesting some help/insight because I don't know how to easily implement this in a separate Google Form.
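
A rough sketch of what such a structured record could look like, if it helps the discussion (TypeScript, since this would live in the extension; all names here are illustrative, not actual IMAGE schema):

```typescript
// Hypothetical sketch of a structured feedback record, assuming the
// extension can access the preprocessor output for the current graphic.

interface DetectedItem {
  label: string;          // e.g. "orange", "clean room"
  confidence: number;     // model confidence in [0, 1]
  flaggedWrong: boolean;  // user checkbox: this detection is erroneous
}

interface MLFeedbackReport {
  requestId: string;      // ties the report back to the original request
  timestamp: string;      // ISO 8601
  items: DetectedItem[];  // everything the preprocessor reported
  comment?: string;       // optional free-text notes from the user
}

// Build a report from preprocessor output plus the user's checkbox state.
function buildReport(
  requestId: string,
  detections: { label: string; confidence: number }[],
  flagged: Set<string>
): MLFeedbackReport {
  return {
    requestId,
    timestamp: new Date().toISOString(),
    items: detections.map(d => ({ ...d, flaggedWrong: flagged.has(d.label) })),
  };
}
```

Submitting something like that as JSON to whatever form/endpoint we settle on would keep the reports machine-readable for extraction or building a training set, per @gp1702's request.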

I think this particular feedback feature would have to be in-extension, and therefore possibly outside the scope of this next release.

Cybernide commented 2 years ago

@jeffbl Please advise as to timeline/approach to take

jeffbl commented 2 years ago

One maybe easy way to do this implementation-wise would be to:

QUESTION: What ethics/warnings do we need to apply when we gather this data, and where/how is it stored? Commercial form tools hosted outside McGill might be verboten, depending on their ToS and where the data is stored.

Cybernide commented 2 years ago

Is there someone we can easily consult about the best platform to use and what we're allowed to do when gathering feedback? I'm having a lot of trouble understanding the Chrome developer terms and reconciling them with McGill's policies.

jeffbl commented 2 years ago

There are two questions:

  1. What are the McGill-approved data-gathering tools? I know LimeSurvey is approved, and I think you mentioned that Microsoft Forms is as well, since it was approved for user testing?

  2. What are we allowed to gather, and how are we allowed to use it, given the extension requirements? I only know what is linked from our ToS, but I can help weigh in on questions where it is not clear.

Cybernide commented 2 years ago

Linking this issue with https://github.com/Shared-Reality-Lab/audio-haptic-graphics-UX/issues/21

Cybernide commented 2 years ago

Putting this here for design considerations when it becomes feasible. Thanks, @gp1702

The label vocabularies:

- Semantic segmentation categories (ADE20K): https://groups.csail.mit.edu/vision/datasets/ADE20K/
- Object detection categories: https://gist.github.com/AruniRC/7b3dadd004da04c80198557db5da4bda
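
Once the feedback form exists, we could sanity-check flagged labels against those vocabularies. A minimal sketch, assuming the category lists are bundled with the extension (the sample entries are illustrative):

```typescript
// Hypothetical sketch: validate flagged labels against the known
// category vocabularies, so free-text typos don't pollute a future
// training set. The arrays would be populated from the lists above.
const ADE20K_CATEGORIES: string[] = ["wall", "building", "sky" /* ... */];
const DETECTION_CATEGORIES: string[] = ["person", "bicycle", "car" /* ... */];

const KNOWN_LABELS = new Set([...ADE20K_CATEGORIES, ...DETECTION_CATEGORIES]);

// Keep only labels the models could actually have emitted.
function filterKnownLabels(labels: string[]): string[] {
  return labels.filter(label => KNOWN_LABELS.has(label));
}
```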

Cybernide commented 2 years ago

This needs to be moved out, @jeffbl. I'm going to get started on designing a generalized feedback feature.

jeffbl commented 2 years ago

The web form we're making for CSUN will be linked to the data if the user allows us to keep it, but I'm moving this out since we'd eventually like more formalized feedback specifically for categorizing weird ML results.