Computational-Content-Analysis-2020 / Readings-Responses

Repository for organising "exemplary" readings and posting responses.

Images, Art & Videos - Naik, Kominers, Raskar, Glaeser, Hidalgo 2017 #47

Open jamesallenevans opened 4 years ago

jamesallenevans commented 4 years ago

Naik, Nikhil, Scott Duke Kominers, Ramesh Raskar, Edward L. Glaeser, César A. Hidalgo. 2017. “Computer vision uncovers predictors of physical urban change.” PNAS 114(29):7571–7576.

ckoerner648 commented 4 years ago

I was totally surprised that Naik et al. 2017 found a robust relationship between street change and college education but not between street change and median income. Beyond the theories that the authors test, I think this finding raises an additional interesting question: what is the relationship between deindustrialization and street improvement? The authors compare new residential buildings in 2014 to old industrial areas in 2007. However, to some extent, they select on the dependent variable, i.e., the existence of a street: many residential streets in 2014 were forest in 2007 and hence had no Google Street View coverage. Is it mostly because of deindustrialization in most major U.S. cities that we see new residential buildings in Brooklyn (where they improve the street score of formerly industrial areas), as opposed to on Long Island or in the New York State / New Jersey suburbs?

katykoenig commented 4 years ago

The ability to predict changes in perceived safety from Google Street View images was interesting, but I am skeptical regarding their samples: Streetscore was trained using images from Boston and New York (two Eastern cities) and then used to predict perceptions of safety in Baltimore, Boston, Detroit, New York, and Washington, DC (again, mostly Eastern cities). I wonder whether both the tool, Streetscore, and the findings of the paper are externally valid. Most notably, the authors do not report the number of images used for each city, so I question whether one city with many images had an outsized effect on the results and drowned out the other cities in their pooled regression. I also wonder whether the authors could better argue external validity by fitting a multilevel model with city as a second-level grouping, to compare the within-city variance vs. the between-city variance.
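
For concreteness, a minimal sketch of the multilevel specification I have in mind, a random intercept for city, with hypothetical column and file names (the paper's actual variables differ):

```python
# Random-intercept model: city as the second-level grouping factor.
# Column names (streetchange, college_share, median_income, city) and the
# input file are illustrative, not the authors' data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("streetchange_blocks.csv")  # hypothetical block-level data

model = smf.mixedlm("streetchange ~ college_share + median_income",
                    data=df, groups=df["city"])
result = model.fit()
print(result.summary())

# Comparing the variance of the city random intercept with the residual
# variance gives a rough sense of between-city vs. within-city variation.
```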

laurenjli commented 4 years ago

In the paper, the authors discuss one prominent theory by Burgess that emphasizes location and social networks. What about the availability/accessibility/proximity of suburbs? Does this affect how their urban counterparts grow (e.g., Detroit, where people who are able to leave for the suburbs do so)?

di-Tong commented 4 years ago

I wonder how exactly the Streetscore is generated in this project. Do the authors create their own Streetscore prediction model by asking people to rate the safety of a subset of their data? Or do they reuse the Streetscore algorithm trained in the Place Pulse project? If so, how was that algorithm developed in the first place?
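
My rough understanding from the Place Pulse / Streetscore papers is that generic image features are regressed onto crowdsourced perceived-safety ratings. A minimal sketch under that assumption, with hypothetical feature files and a plain SVR standing in for whatever regressor the authors actually used:

```python
# Sketch: regress image features onto crowdsourced safety ratings.
# Feature extraction and file names are illustrative, not the authors' pipeline.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

X = np.load("image_features.npy")   # hypothetical: one feature vector per image
y = np.load("safety_ratings.npy")   # hypothetical: crowdsourced safety scores

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

reg = SVR(kernel="rbf").fit(X_train, y_train)
print("held-out R^2:", reg.score(X_test, y_test))
```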

ccsuehara commented 4 years ago

I liked the appendix of this paper; it was very detailed and covered great topics. I have the same question as @katykoenig: what is the rationale behind training on one set of cities and predicting on another? Could this have been done by neighborhood instead? Also, I was wondering how we could establish causality, going beyond the correlations they propose.
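
One way to probe the train-on-some-cities / predict-on-others worry would be leave-one-city-out evaluation: fit on four cities and score the fifth. A sketch with hypothetical arrays and a plain ridge regression, not the authors' setup:

```python
# Leave-one-city-out cross-validation sketch; inputs are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneGroupOut

X = np.load("features.npy")          # hypothetical block-level features
y = np.load("streetchange.npy")      # hypothetical outcome
cities = np.load("city_labels.npy")  # hypothetical city label per block

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=cities):
    model = Ridge().fit(X[train_idx], y[train_idx])
    held_out_city = np.unique(cities[test_idx])[0]
    print(held_out_city, "R^2:", model.score(X[test_idx], y[test_idx]))
```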

HaoxuanXu commented 4 years ago

I'd love to know more about the underlying logic of Streetscore. Is the score too one-dimensional? Often, visual upgrades in a community say more about gentrification than about the improved vibrancy of that community.

sunying2018 commented 4 years ago

It is interesting that this paper uses computer vision methods to analyze street-level imagery and identify which factors influence the physical change of neighborhoods. I am interested in the method used to quantify neighborhood appearance at different points in time, the computer vision algorithm Streetscore. Though it may be hard to interpret, I am curious which specific features of the images influence the value of the street score.

bjcliang-uchi commented 4 years ago

I find myself not quite convinced by this article, because in the pictures shown on page 2, the level of street change as well as the perceived safety seem highly sensitive to 1) the weather and 2) the number of houses (do more houses necessarily mean improvement?).

alakira commented 4 years ago

I also have a question regarding Streetscore. The authors exclude "the features of trees and sky to minimize seasonal effects," but how do they control for seasonal change on the ground and buildings? Is there any method or instrument that could control for seasonal change?

YanjieZhou commented 4 years ago

I wonder whether it is externally valid for the researchers to restrict the sample to U.S. eastern cities, which, according to the authors, went through a period of great development from 2007 to 2014; this makes the selection of these five cities questionable.

kdaej commented 4 years ago

This paper only included ground and buildings to predict the safety score of streets, which means the sky and trees were excluded. While excluding the sky is useful for minimizing seasonal effects, I was wondering what the implications of removing the trees would be. In urban areas, trees are part of a city’s efforts to make the environment friendlier to people, which can itself indicate the level of safety on the streets.
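
For reference, the masking step being discussed could look something like the sketch below: given a per-pixel semantic segmentation of a street image, keep only the retained classes (buildings, ground) and zero out sky and trees. The class ids here are made up; whatever label map the segmentation model uses would need to be substituted.

```python
import numpy as np

BUILDING, GROUND, SKY, TREE = 0, 1, 2, 3   # hypothetical class ids

def mask_image(image: np.ndarray, seg: np.ndarray) -> np.ndarray:
    """Zero out pixels not labeled as building or ground."""
    keep = np.isin(seg, [BUILDING, GROUND])   # H x W boolean mask
    return image * keep[..., None]            # broadcast mask over RGB channels
```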

cindychu commented 4 years ago

This article is very interesting and well organized: it is driven by classic urban planning and sociology theory and leverages computer vision methods to analyze neighborhood change. I have two main questions: 1) The human annotation concerns ‘safety’ and ‘safety perception’; would other perceptions of the neighborhood, for example ‘modern’ (as we know, the MIT Place Pulse project asks for more ratings than safety alone), influence the results? 2) While the imagery starts as a 360° panorama, how is it transformed into a specific viewing angle for human rating and further analysis?
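
On the second question, one plausible answer is that Street View imagery can be requested at a fixed camera heading rather than as a full panorama. A hedged sketch using the current Street View Static API (the location, heading, and key below are illustrative, not the authors' pipeline):

```python
# Fetch a single perspective view at a chosen compass heading.
import requests

params = {
    "size": "640x400",
    "location": "40.6892,-74.0445",   # hypothetical lat,lng
    "heading": 90,                    # compass direction of the view
    "fov": 75,                        # horizontal field of view in degrees
    "pitch": 0,
    "key": "YOUR_API_KEY",            # placeholder
}
resp = requests.get("https://maps.googleapis.com/maps/api/streetview", params=params)
with open("view.jpg", "wb") as f:
    f.write(resp.content)
```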