Open jeffbl opened 2 years ago
Suggest making something in-extension that we can potentially extend in the future to the general public. @gp1702 and possibly @Sabrina-Knappe, @jaydeepsingh25, @Clarisa1999 we should talk.
@gp1702 Just commenting here as a reminder to follow up: Could you please give us a list of feedback items that you would need?
@gp1702 has informed me that, in terms of ML, what would be most useful is a list of all the items our preprocessor detects in a graphic, along with checkboxes or some other way of selecting the ones that were erroneous. I'm assigning this to myself, but I'm also requesting some help/insight because I don't know how to easily implement this in a separate Google form.
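To make the idea concrete, here is a minimal sketch of what the per-graphic feedback data could look like, assuming the preprocessor emits a flat list of detected items. All names (`DetectedItem`, `GraphicFeedback`, `erroneousLabels`) are illustrative, not an existing API:

```typescript
// Hypothetical shape of one detected item, with a checkbox-backed flag.
interface DetectedItem {
  label: string;            // e.g. "orange", from object detection
  confidence: number;       // model confidence, 0..1
  markedErroneous: boolean; // set via a checkbox in the feedback UI
}

// Hypothetical per-graphic feedback record submitted by the user.
interface GraphicFeedback {
  graphicUrl: string;
  items: DetectedItem[];
}

// Collect only the labels the user flagged as wrong, for ML follow-up.
function erroneousLabels(fb: GraphicFeedback): string[] {
  return fb.items.filter(i => i.markedErroneous).map(i => i.label);
}
```

A record like this could be posted from the extension, whereas a generic forms tool would lose the per-item structure.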
I think this particular feedback feature would have to be in-extension, and therefore possibly outside the scope of this next release.
@jeffbl Please advise as to timeline/approach to take
One possibly easy way to do this, implementation-wise, would be to:
QUESTION: What ethics/warnings do we need to apply when we gather this data, and where/how is it stored? Commercial forms programs outside McGill might be verboten, depending on their ToS / storage location of data.
Is there someone that we can easily consult with in terms of the best platform to be using and what we're allowed to do when getting feedback? I am having a lot of trouble understanding the Chrome developer terms and trying to reconcile them with the McGill policies.
There are two questions:
- What are McGill-approved data gathering tools? I know LimeSurvey is approved, and I think you mentioned that Microsoft Forms is as well, since it was approved for user testing?
- What are we allowed to gather, and how are we allowed to use it, based on extension requirements? I only know what is linked from our ToS, but I can help weigh in on questions where it is not clear.
Putting this here for design considerations when it becomes feasible. Thanks, @gp1702
The lists of categories our models use:
- Semantic segmentation (ADE20K): https://groups.csail.mit.edu/vision/datasets/ADE20K/
- Object detection: https://gist.github.com/AruniRC/7b3dadd004da04c80198557db5da4bda
This needs to be moved out. @jeffbl, I'm going to get started on designing a generalized feedback feature.
The web form we're making for CSUN will be linked to data if the user allows us to keep it, but I'm moving this out since eventually, we'd like to have more formalized feedback specifically for categorizing weird ML results.
We need a way for people to log weird ML responses, like finding oranges where there are none, or labeling things "clean room" or "embassy" in oddly specific ways. During tech-arch today, @gp1702, you indicated you'd like this structured in some way so you can extract information or add to a training set. Logging individual work items here in GitHub doesn't make sense, since it would overwhelm the project board. Please propose another method for doing this, since we're already finding examples.
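As a starting point for that proposal, here is a hedged sketch of a structured record for one weird result, serialized one-per-line (JSON Lines) so records stay easy to grep, filter, or fold into a training set later. The field names and category values are assumptions for discussion, not an agreed schema:

```typescript
// Hypothetical structured record for one odd ML result.
interface WeirdResultReport {
  graphicUrl: string;       // which graphic produced the result
  stage: "objectDetection" | "semanticSegmentation" | "sceneClassification";
  reportedLabel: string;    // what the model said, e.g. "clean room"
  problem: "hallucinated" | "wrong-label" | "overly-specific";
  note?: string;            // optional free-text comment from the reporter
  timestamp: string;        // ISO 8601, for later filtering
}

// One JSON object per line; append each report to a shared .jsonl log.
function serializeReport(r: WeirdResultReport): string {
  return JSON.stringify(r);
}
```

A flat log like this would let @gp1702 filter by `stage` or `problem` without digging through free-form issue comments.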