Jon: To me, this is a higher-level discussion about what makes a sidewalk accessible and what aspects of this can be automatically determined using AI. I think this is a terrific discussion point, but you need to pop up a level imo. :) I think this even ties into larger discussions about AI and ethics, etc. (which is fascinating).
Our neural network approach is enabled by the large size and richness (i.e., both image and distance data) of the Project Sidewalk dataset relative to earlier datasets; a challenge, however, is the noisiness of the dataset. Jon: Actually, I think the juicier bit is that while we get better with more labels, we don't get substantially better. Why do we think this is? And are there models that might benefit from more data? Can you point forward to some suggestions? Or what's the limiting factor here--is it indeed noise? I'm not so sure.
No universal model yet: we optimize separately for either pre-crop or full-scene labeling (via training data), using two different models for our two application scenarios. Pre-crop models have an advantage in classification performance because crops are already centered and almost certainly contain a significant portion of the feature. However, localization within panoramas will require some method for sampling or focusing on particular subsections of the image, which could potentially improve performance in either speed or accuracy (a rough sketch of this sampling idea is below). \esther{Reference YOLO}. Jon: But why is a universal model necessary, especially given that these two tasks (validation vs. localizing problems) are different--so it makes sense that we would need a separate model for each. Also, it doesn't 'cost' that much to train and run two models, so what's the real value proposition here? This also seems like it may be a 'down-in-the-weeds' point for our Discussion.
(Perhaps) Ideally, we would develop one `universal model' that performs optimally on both tasks. Jon: I'm not so sure. See above.
In our case, when applying the pre-crop model to the labeling task and vice versa, we find xxx and yy performance. (Should this be in the experiments or results section instead?) Jon: depends. Sometimes I have lil results like this in the Discussion too to highlight a preliminary point.
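To make the "sampling subsections" idea above concrete, here is a minimal sketch (not our actual pipeline) of how the existing pre-crop classifier could, in principle, be reused for coarse full-scene localization by sliding it over panorama subsections. `classify_crop`, the crop size, stride, and confidence threshold are all hypothetical placeholders; a detector like YOLO would replace this exhaustive scan with learned region proposals, which is exactly where the speed/accuracy tradeoff mentioned above comes in.

```python
# Minimal sketch (assumed, not our actual pipeline): approximate full-scene
# localization by sliding a pre-crop classifier over panorama subsections.
# `classify_crop` is a hypothetical stand-in for a trained pre-crop model;
# crop size, stride, and threshold are illustrative values, not tuned ones.
from typing import Callable, List, Tuple
import numpy as np

def localize_by_sliding_window(
    pano: np.ndarray,                                      # H x W x 3 panorama
    classify_crop: Callable[[np.ndarray], Tuple[str, float]],
    crop_size: int = 256,
    stride: int = 128,
    min_confidence: float = 0.8,
) -> List[Tuple[int, int, str, float]]:
    """Return (x, y, label, confidence) for crops the classifier is confident about."""
    detections = []
    h, w, _ = pano.shape
    for y in range(0, h - crop_size + 1, stride):
        for x in range(0, w - crop_size + 1, stride):
            crop = pano[y:y + crop_size, x:x + crop_size]
            label, conf = classify_crop(crop)
            if label != "null" and conf >= min_confidence:
                # Record the crop center as a coarse location estimate.
                detections.append((x + crop_size // 2, y + crop_size // 2, label, conf))
    return detections
```

The brute-force scan above is slow on full panoramas, which is one reason a single-pass detector (or some smarter sampling strategy) would be attractive if we did want a more unified model.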
Ambiguities in the Crowdsource Labeling Task: Labeling is subjective and difficult even for humans; there are ambiguities in marker placement. Jon: What do you mean by ambiguities in marker placement? Do you mean ambiguity in what justifies being labeled, or ambiguity in where people place markers (e.g., some mark a curb ramp at one x,y position while others do so at another)?
Ritterbusch, S., & Kucharek, H. (2018, July). Robust and Incremental Pedestrian Path Network Generation on OpenStreetMap for Safe Route Finding. In International Conference on Computers Helping People with Special Needs (pp. 302-309). Springer, Cham.
I have an in-submission paper to IMWUT on sidewalk location inference, which extends methods by Ritterbusch et al., 2018. Note that these techniques do not use computer vision.
Is it OK to cite an in-submission work? How would one do that (e.g., just in the bibliography as usual)?
I think just citing Ritterbusch is fine.
Before closing this out, I'd like to double check our current Discussion and make sure we capture the important points in this thread.
I just added the "predicting friction strips/severity" bit to the discussion. All other points seem fairly well accounted for, so I'm closing this issue.