jakobzhao / geog595

Humanistic GIS @ UW-Seattle

06-ai #14

Open jakobzhao opened 3 years ago

jennylee719 commented 3 years ago

Think piece 05/10 Jenny Lee

Agnew (2011) explores how a revived understanding of place can benefit geographic research. Departing from previous conceptualizations of place as passive and stagnant, Agnew illuminates how the reconceptualization of place should embody the complexity and dynamism of human life and its interconnectedness with place. Agnew discusses three major dimensions of place: "place as a location" where an activity is located, "place as a series of locales" of people's everyday life, and "place as a sense of place," which explains people's attachment to or unique experience of place. While these dimensions are helpful, I feel that they do not adequately address the role of spatial data science and AI technologies in how geography understands place. The rest of this week's readings (Janowicz et al., 2020; Zhao et al., 2021; Zhou et al., 2018) highlight the rise of new tools in geographic research, signaling the need to revisit how we approach and study place through GeoAI and data science. Based on this week's readings, I would like to suggest two new dimensions of place that may be helpful in conceptualizing place in our data-intensive culture: place as imagined and place as datafied.

To begin with, place as imagined, inspired by the deepfake research of Zhao et al. (2021), is the techno-imaginary of place enabled by artificial intelligence. While deepfakes of geospatial data pose great security and social threats, they also testify to the power of AI to imagine what places can look like. In a more positive light, the power of algorithms to predict what can happen in certain places in the future has been used to prevent crime. This techno-imaginary of place does not reside solely in the imaginary; it can have material consequences by shaping policies on how certain places should be managed. In this respect, imagination is an important part of how places exist in our technologically powered society. Accordingly, how AI can imagine space, and how this imagination can be exploited, could be part of the critical geospatial data literacy advocated by Zhao et al. (2021).

Secondly, place as datafied is inspired by Janowicz et al.'s (2020) article, and in particular their emphasis on 'social sensing.' The datafication of place can take myriad forms depending on the researcher's objective, but there seem to be two major elements: datafication through sensing technologies that collect digital traces of people, including "information pertaining to one's location, but also attributes such as the ambient temperature, luminosity, noise level, and so on" (Janowicz, 2020, p. 5), and datafication through people's selective data exposure on social media through location tagging and other social media interactions. For instance, the networked counterpublic of the #StandingRock movement can be an example of the latter. While place as imagined and place as datafied require further refinement, they open up new avenues for understanding and building knowledge about place alongside the growth of geospatial technologies.

reconjohn commented 3 years ago

What is "place" in the context of technological innovation and big data? To answer the question, we first need to discuss the technological innovations that have influenced the conceptual definition of place. GeoAI, GIScience, the new paradigm of joint empirical, theoretical, and computational research frameworks, and big data are examples of such innovation. GIScience concerns the use of GIS and related technologies such as mapping and spatial analysis, while GeoAI extends GIScience with Artificial Intelligence (AI) to answer why space matters, creating intelligent geographic information techniques such as image classification, object detection, scene segmentation, simulation, interpolation, link prediction, data integration, geo-enrichment, and so on (Janowicz et al., 2019).

Computational enhancement and big data have created a new paradigm by joining empirical and theoretical approaches. Agnew (2011) introduced four theoretical perspectives - the neo-Marxist, the humanist, the feminist, and the performative - to explain the conceptual change in the meaning of "place." Place is more than "space": it is not bounded, but rather complex, dynamic, and permeable. Agnew (2011) argued that place is fundamental to understanding knowledge production and dissemination since it includes the experiences of human beings as agents, while from the feminist perspective the experience of place differs across groups - pluralism.

In this regard, "place" can be discussed through the concepts of location, locale, and sense of place for empirical purposes. The deepfake satellite images of cities discussed by Zhao et al. (2021) treat place as location by introducing fake images of geographic maps; the cities in the study's demonstration are examples of the concept of location. Zhou et al. (2017), on the other hand, discussed "place" as locale - the settings of daily activities such as home, school, park, etc. - by introducing the Places dataset, a scene-centric dataset, with three CNN architectures for classifying scenes or places. Moreover, the sense of place, the other dimension of "place," can be discussed from the feminist, humanist, and performative points of view, which interpret places as time-space configurations established by intersections of agents such as people and things. This week's practical exercise presents a good example of the sense of place by exploring a book author's sense of a city, Seattle, through Natural Language Processing (NLP).

We need to think further about data literacy and ethics amid the wave of technological innovation and big data. The deepfakes and the Places dataset discussed above illuminate the importance of data literacy and ethics in that technological innovation could entail both societal gains and losses depending on how the technologies are approached. Furthermore, social sensing that takes advantage of technologies such as NLP raises questions about the ethics of digital tools even as it leverages user-generated digital content to better understand human dynamics.

nvwynn commented 3 years ago

Reading these articles together was a challenging but interesting experience for me. Agnew provides a detailed historical analysis that frames scholars' efforts at defining and conceptualizing space and place. The editorial by Janowicz et al. is a call for reinvigorated GeoAI research and can be understood as a sort of research agenda-setting paper. And, though both Zhou (2017) and Zhao (2021) lay out some very technical details concerning scene recognition in machine learning and deepfake geography creation and detection (respectively), they also raise some very fundamental questions in the field.

One fundamental argument made by Zhao et al. is that "we ought to admit that fake, for good or ill, is an inevitable component of human civilization" (p. 12), and with this realization we should address the consequences of the "lies" that emerge in a spatial GIScience format just as we have with other occurrences of lying in the field (Monmonier's example of lying with maps was mentioned; I also think of Darrell Huff's 1954 book "How to Lie with Statistics").

Zhou et al. connect to some of the foundational concepts in geography that Agnew discusses, in that a key aspect of the CNN scene-recognition model is to "identify the place in which the objects seat (e.g., beach, forest, corridor, office, street, ...)" (p. 1). This paradigm is simpatico with a Leibnizian-like definition of space - that "Space thus exists because of relations between sites at which events and objects are located" (Antognazza, 2008, cited in Zhou, 2017, p. 9). In the context of machine-learning-based scene recognition, we can push Agnew's assertion that "in the end it is the concrete effects of places that matter more than remaining at the abstract level of conceptualizing place" one step further and understand that the concrete effects do not simply matter more, but are in fact the only way that an AI can "conceptualize" a scene. Given the limitations of scenes in encompassing the entirety of what would be considered a "place" - since they are by and large a purely visual formation of space (despite, and perhaps ironically given that, "annotated datasets further enable artificial systems to learn visual knowledge linking," Zhou, 2017, p. 2) - we can understand Janowicz's call for "social sensing" as a way to add depth to these AI-mediated places.
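
As a concrete illustration of what "identifying the place in which the objects seat" looks like in practice, here is a minimal sketch of scene inference with a Places365-trained CNN. This is not the authors' released code; the checkpoint and category-file names, and the checkpoint layout, are assumptions based on how the Places365 project typically distributes models:

```python
# Minimal scene-recognition sketch (PyTorch). Assumes a ResNet18 checkpoint
# and a category list downloaded from the Places365 project; the file names
# and the "state_dict"/"module." checkpoint layout are assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(num_classes=365)  # Places365 defines 365 scene classes
ckpt = torch.load("resnet18_places365.pth.tar", map_location="cpu")
state = {k.replace("module.", ""): v for k, v in ckpt["state_dict"].items()}
model.load_state_dict(state)
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# categories_places365.txt lines look like "/a/airfield 0" (assumed format)
classes = [line.split()[0] for line in open("categories_places365.txt")]

img = preprocess(Image.open("example_scene.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]
for p, idx in zip(*torch.topk(probs, 5)):
    print(f"{classes[idx.item()]}: {p.item():.3f}")
```

Note that the model outputs only a distribution over visual labels; nothing in it touches the experiential dimensions of place, which is exactly where "social sensing" would have to come in.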

Food for thought... When reading Zhao, I found myself thinking about how most ontological and epistemological frameworks have been developed with regard to what we (humans) know. What are the limits to articulating ontologies and epistemologies through methodologies related to machine learning? In other words, is there a point at which an AI transcends any human-created algorithm so that "we" can no longer know or describe what "it" knows?

shuangw1 commented 3 years ago

This week's readings again touch on some of the critical ideas in geography, such as place and space, and the notion of place-making based on human behaviors, emotions, and experience. While the discussion of space and place is not new, GeoAI is regarded as a new subfield in geography, and it inherits many of these discussions of space and place. In their article, Janowicz et al. (2019) point out that the use of AI in geography is not new, but that the subfield has witnessed exponential growth due to the recent generation and sharing of big data. They sum up GeoAI as a subfield of spatial data science that includes image classification, object detection, scene segmentation, simulation and interpolation, link prediction, (natural language-based) retrieval and question answering, and so on. GeoAI has recently re-emerged as a subset of AI (Janowicz et al. 2019), but one question I heard at an AAG session this year is whether GeoAI can be viewed as an independent field, or whether, for some data scientists and engineers, it is just data science plus some coordinates. What can the geography community teach those outside the discipline, and what can outside disciplines gain when they hear the word GeoAI?

Returning to the concepts of space and place, I think one critical concept for understanding their linkage to GeoAI is that in "much of the social sensing research, semantic signatures are often rooted in the concept of place, using the place as the reference system through which to compare different activities, dynamics, and social interactions" (Janowicz et al. 2019). We can also see this in Zhou et al.'s (2017) article - apart from traditional object-based detection (recognizing a cat or a dog), they tried to differentiate between scenes. A scene is not just one object but can be described by humans as a sequence of experiences or feelings (e.g., one mouth under a nose with two eyes above equals a face). The training method they described is also largely human-based; the ground truth they conveyed involved a lot of human experience. In this sense, although much of the AI technique seems universal, it is also very locally based and locally generated - it is contextually sensitive. This reflects what Agnew (2011) says in his article about "pluralism": different groups have different experiences in places and therefore generate their own "sense of place."

One of the most interesting arguments I found in Zhao et al.'s (2021) article is their statement that lies cannot be avoided and are part of human civilization. It would be exhausting for humans to worry about fakes all the time, yet going without precautions is also harmful to our society. That prompts me to think about the recent rise of autonomous driving. If deepfakes were inserted into some of those systems, they would seriously threaten safety and human lives. What precautions we need to consider beforehand, rather than dealing with the damage after it happens, certainly needs scientists and practitioners to think through.

stevenBXQ commented 3 years ago

Steven 05/10

This week’s readings have sparked my thoughts on the potential challenges and possibilities that artificial intelligence brings to the discipline of geography.

Under the correspondence framework of truth, GeoAI, like generic AI, is intertwined with "false" information. Artificial intelligence can easily be biased, either intentionally or unintentionally. "Deepfake" has been a heated topic for a couple of years now, and Zhao et al. (2021) extended the discussion of this advanced technique for "falsifying" data to geography. While this intentional use of AI is gaining attention, another potential problem lies in the input data. The input data used for AI training, or the "sample" in statistical terminology, is probably not representative of the true population for various reasons. While Zhou et al. (2017) mentioned that a "diverse" database is essential, I find their consideration still insufficient. For example, in ImageNet, a very popular image dataset that Zhou et al. also used in their experiment, about 45% of the images were taken in the U.S., another large portion in the rest of North America and Western Europe, and only around 1% come from China and 2.1% from India (Shankar et al., 2017). Such inherently biased input data will certainly influence a trained AI model's accuracy in recognizing objects or places in different countries or cultures, which should be given extra attention in applications. A quick audit along the lines of Shankar et al. is sketched below.
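
To make that concern concrete, here is a tiny audit sketch in the spirit of Shankar et al. (2017). The metadata file and its column names are hypothetical stand-ins; real image datasets rarely ship clean country labels, which is part of the problem:

```python
# Hypothetical audit: how geographically skewed is an image dataset?
# "image_metadata.csv" and its "country" column are illustrative assumptions.
import pandas as pd

meta = pd.read_csv("image_metadata.csv")            # one row per image
share = meta["country"].value_counts(normalize=True)
print(share.head(10).round(3))                      # top-10 source countries
print("US share:", round(share.get("US", 0.0), 3))  # e.g., ~0.45 for ImageNet
```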

While many of the discussions around GeoAI concern whether the products are "true," i.e., whether they accurately represent the physical objects on earth, I think Natural Language Processing (NLP) will help us better understand the "sense of place" (Agnew, 2011) from a humanistic perspective. Janowicz et al. (2019) pointed out that NLP is "facilitating the extraction of geographic information from unstructured (textual) data." I believe that the "geographic information" here includes not simply locational information about places but, more importantly, human experiences of and emotions about places ("sense of place"), because feelings are abstract and exist more often in words than in maps. More importantly, NLP could help capture human emotions about both physical, absolute places, such as customer reviews of restaurants, and abstract, relative places, such as the experiences of Holocaust victims (Knowles et al., 2015). A toy sketch of this kind of extraction follows.
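
The sketch below pulls out place names with a named-entity recognizer and scores the emotion of the sentences that mention them. This is only one plausible pipeline (spaCy for entities, NLTK's VADER for sentiment), not the specific method of any of this week's authors:

```python
# Toy "sense of place" extraction: place entities + sentence-level sentiment.
# Setup assumed: python -m spacy download en_core_web_sm
#                nltk.download("vader_lexicon")
import spacy
from nltk.sentiment import SentimentIntensityAnalyzer

nlp = spacy.load("en_core_web_sm")
sia = SentimentIntensityAnalyzer()

text = ("Pike Place Market felt alive and welcoming. "
        "The viaduct, though, made the waterfront grim and loud.")

for sent in nlp(text).sents:
    # GPE/LOC/FAC are spaCy's geopolitical, location, and facility labels
    places = [ent.text for ent in sent.ents if ent.label_ in ("GPE", "LOC", "FAC")]
    if places:
        score = sia.polarity_scores(sent.text)["compound"]  # -1 (neg) to +1 (pos)
        print(places, round(score, 2), sent.text)
```

Aggregating such scores over every sentence that mentions a place is one crude but workable proxy for an author's "sense" of that place.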

Similar to many other technological advances, artificial intelligence will have a profound influence on both our daily life and academic studies. As Zhao et al. (2021) concluded in their paper, we should recognize the opportunities brought by AI while also being fully aware of the problems it may cause.

References:

Agnew JA (2011) Space and place. In: The SAGE Handbook of Geographical Knowledge: 316–330. DOI: 10.4135/9781446201091.n24.

Janowicz K, Gao S, McKenzie G, et al. (2019) GeoAI: spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond. International Journal of Geographical Information Science 34(4): 1–12. DOI: 10.1080/13658816.2019.1684500.

Knowles AK, Westerveld L and Strom L (2015) Inductive Visualization: A Humanistic Alternative to GIS. GeoHumanities 1(2): 233–265. DOI: 10.1080/2373566x.2015.1108831.

Shankar S, Halpern Y, Breck E, et al. (2017) No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World. arXiv.

Zhao B, Zhang S, Xu C, et al. (2021) Deep fake geography? When geospatial data encounter Artificial Intelligence. Cartography and Geographic Information Science: 1–15. DOI: 10.1080/15230406.2021.1910075.

Zhou B, Lapedriza A, Khosla A, et al. (2017) Places: A 10 Million Image Database for Scene Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 40(6): 1452–1464. DOI: 10.1109/tpami.2017.2723009.

larissa-soc commented 3 years ago

Zhou et al. (2017) explain the processes and information that enable AI programs to learn, as well as the challenges, like separating similar classes, and burgeoning solutions such as the rise of massive databases with millions of entries and their augmentation by labeling images with a "ground truth category," using classifiers to scale up the dataset. Undoubtedly, there are huge advantages to these AI technologies, but reading this article brought Agnew (2011) and Verbeek (2001) to the front of my mind.

I bring up Agnew because the terms place and space pepper the body of Zhou's piece, and the way in which AI operates is a perfect example of the conflation of place and space. AI uses the spatial dimensions it detects in a picture to help classify a place. Interestingly, the AI systems described by Zhou have to utilize the association and context of identified objects to make the accurate distinction between a classroom table and a kitchen table. The use of association as a determinant of place is in line with the performative perspective, but the experiential essence of the feminist perspective has no place in this process.

Furthermore, the neo-Marxist perspective raises some red flags, especially in the context of Don Ihde's ideas. Neo-Marxists have pointed out the colonization of socially produced space under capitalism and, from Agnew's point of view, did not adequately explain the dialectical relationship between conceptual and concrete space (Agnew, 18). From my perspective, training AI to identify and classify using human perceptions is both an exciting and terrifying link between the two. Digital technology is new to the Anthropocene, and the rapid progress we have made is impressive, but it also has consequences. What does it mean for our conceptualization of place and space if the human cognition being used to define them in these programs is based exclusively on computer users on Amazon Mechanical Turk? As we, hopefully, continue to democratize the digital space, we need to pay attention to how pre-designed, pre-categorized worldviews are being transferred through it. As more users engage with digital technology, they are subject to quasi-constraints and information trajectories (Verbeek), ultimately mediating their engagement with the digital world. If we know this mediation is occurring, I think it is even more essential to engage the feminist and neo-Marxist perspectives and ask: Who is designing these systems? Who is benefiting from their implementation? How are people actually experiencing these technologies?

References:

Agnew J (2011) Space and place. In: The SAGE Handbook of Geographical Knowledge: 316–331.

Verbeek PP (2001) Don Ihde: The Technological Lifeworld. In: American Philosophy of Technology: The Empirical Turn. Indiana University Press, pp. 119–146.

Zhou B, Lapedriza A, Khosla A, et al. (2017) Places: A 10 Million Image Database for Scene Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 40(6): 1452–1464. DOI: 10.1109/tpami.2017.2723009.

S-Arnone commented 2 years ago

Janowicz et al.'s article on the capabilities and potential of GeoAI was particularly interesting to me in the way that it bolstered the role of technology in mediating our relationship to geodata. The discussion of digital personal assistants' role in answering more personalized questions about our environment, for example, raises for me the question of AI echo chambers (Janowicz et al. 2019, 4). Where, as we have previously discussed, the creation and interpretation of maps mirrors the biases of creators and viewers, I wonder whether the role of AI in interpreting data for us will uniquely reinforce our own beliefs or pass on a cohesive message about the status quo. In the former scenario, I imagine user engagement will be a primary design imperative, driving AIs to learn the desires and expectations of users before satisfying them through cartographic replication and/or the omission or overrepresentation of certain data. In the latter, on the other hand, I imagine a more traditional configuration of map-makers wielding disproportionate power over interpretation - yet in this case AI would actively amplify this effect through embedding. Either way, it would seem that a kind of data synthesis is in the works, defined by the authors as "[implying] that one data source can be used as a proxy for another more difficult to acquire dataset" (ibid, 2). In this arrangement, data gathered through social sensing and more or less objective facts would play off one another to find the sweet spot whereby the user is encouraged to engage, with data restricted or emphasized as a means of changing or reinforcing user views.

Zhao et al. likewise raise interesting questions about disinformation and misinformation spreading through map creation and dissemination, which can be extrapolated from a general concern regarding deepfake maps. In particular, I thought it worth noting that no platform for map dissemination is neutral either; in my experience many of the most worrisome platforms are large news aggregators operating on and off major social media platforms, and communities formed on unmoderated or under-moderated platforms like Telegram. Where the idea of deepfake detection seems to rely on access and moderation capability, the creation and dissemination of deepfake maps may go unchecked. Where one of the most important parts of countering disinformation and misinformation is getting ahead of its dissemination, this appears impossible given both the functionality of maps and how they are disseminated. Maps, as we discussed last week, can facilitate storytelling, but by and large they appear subordinate to textual or audio information through embedding or understanding. So, it would seem, there are significant limitations to both the application of detection within platforms and its capacity to counter disinformation and misinformation within the individual psyche.

A practical application of this lies within news aggregators for the Syrian Civil War and the Ethiopian Civil War, where warring parties as well as outsiders often attempt to manipulate the information environment through the spread of disinformation and misinformation and the creation of closed spaces. Mapping sources (like https://syria.liveuamap.com/ and https://twitter.com/mapethiopia?lang=en) which are run by OSINT experts may be able to use detection to deter the spread of disinformation and misinformation using Zhao et al.'s detection strategies. Yet the far more challenging problem of closed or under-moderated platforms may go largely unaddressed. Given that opposing parties often lodge competing claims to control of the same city, region, etc. through the use of old or false pictures, this may have a significant impact on things like IDP flows and military maneuvers.

References:

Janowicz K, Gao S, McKenzie G, et al. (2019) GeoAI: spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond. International Journal of Geographical Information Science 34(4): 1–12. DOI: 10.1080/13658816.2019.1684500.

Zhao B, Zhang S, Xu C, et al. (2021) Deep fake geography? When geospatial data encounter Artificial Intelligence. Cartography and Geographic Information Science: 1–15. DOI: 10.1080/15230406.2021.1910075.

skytruine commented 2 years ago

Janowicz at UCSB focuses on GeoAI, especially geospatial semantics, and his GeoAI paper offers a general understanding of GeoAI - AI for solving geographic problems - and points out three important research directions: spatially explicit models, question answering, and social sensing. For me, it's a reference outline for when I need to discuss GeoAI. Before reading the article, I didn't think the adoption of general ML/DL models in geographic problem solving could be understood as GeoAI - just as I don't think the application of traditional statistical models such as OLS to geographic data is spatial statistics. To me, only when we integrate the unique characteristics of spatial data into the structure and development of AI models can we claim GeoAI (e.g., a GeoSVM is GeoAI, while the adoption of a CNN for street-image classification and further analysis is not GeoAI but a geographic application of AI). According to Janowicz's paper, my understanding of GeoAI corresponds to the category of spatially explicit models, while the employment of general AI models in solving geographic problems is also accepted as part of GeoAI.

When it comes to deepfake satellite images, I think this is an important pilot study, although the deepfake dataset and detection method here are relatively simple. The paper follows an activism paradigm: on the one hand, it points out that satellite images do not always reflect ground truth, and that one reason for unreliable satellite images is the rise of AI as represented by deepfake techniques; on the other hand, it explores the characteristics of AI-generated satellite images and develops a simple SVM model as a preliminary attempt at detecting manipulation. Besides, I to some extent agree with S-Arnone's (https://github.com/S-Arnone) opinion that "there are significant limitations to both the application of detection within platforms and its capacity to counter disinformation and misinformation within the individual psyche." S-Arnone pointed out two limitations: detection itself lags behind, and in a real situation the "fake" part can be the interpretation. Addressing these concerns requires extra effort. For the first, an independent fact-checking platform is needed, along with a priori methods such as watermarking and blockchain; for the second, we need to enlarge the concept of "fake" maps/satellite images to include improper interpretations attached to maps or satellite images - which, from a technical perspective, is a totally different problem from detecting image manipulation. A rough sketch of what such an SVM detector might look like is below.
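
For reference, the general shape of such a detector: hand-crafted color and frequency features fed to an SVM. This is my own toy reconstruction under stated assumptions (directory layout, feature choices), not Zhao et al.'s released pipeline; spectral statistics are simply one place where GAN artifacts are known to show up:

```python
# Toy deepfake-tile detector: color histograms + a crude frequency statistic,
# classified with an SVM. Feature design is illustrative, not Zhao et al.'s.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def features(path):
    img = np.asarray(Image.open(path).convert("RGB").resize((256, 256)),
                     dtype=np.float32)
    # per-channel 16-bin color histograms, normalized
    hist = np.concatenate([np.histogram(img[..., c], bins=16, range=(0, 255))[0]
                           for c in range(3)]) / img.size
    # share of spectral energy in low frequencies (center after fftshift)
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img.mean(axis=2))))
    h, w = spec.shape
    low = spec[h // 4:3 * h // 4, w // 4:3 * w // 4].sum()
    return np.append(hist, low / spec.sum())

# assumed layout: tiles/fake/*.png are simulated, tiles/real/*.png authentic
fake = sorted(Path("tiles/fake").glob("*.png"))
real = sorted(Path("tiles/real").glob("*.png"))
X = np.stack([features(p) for p in fake + real])
y = np.array([1] * len(fake) + [0] * len(real))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```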

Last but not least, Zhou et al.'s paper is a good model for me for writing a dataset-centered paper - I plan to do so by constructing a more advanced manipulated-satellite-image dataset to serve as a benchmark.

gracejia513 commented 2 years ago

Artificial intelligence has no doubt gained significant attention in many disciplines, and the heatwave hasn't ceased. GeoAI weaves AI into geographical analysis, creating the new opportunities and challenges delineated in this week's reading materials. Janowicz et al. list three research directions: spatially explicit models, question answering, and social sensing. I was very excited to read the social sensing section, as sensor-rich devices can be used to identify human mobility patterns. Human interaction adds emotional and contextual value to space and thus creates a unique dataset that can be interpreted very differently. One example I can think of is during COVID-19: how people temporally and spatially clustered at a specific location carried more meaning than a simple social gathering. As close contacts can transmit disease, a densely populated area would cause concern and could be perceived as a risk (or trigger other negative feelings) for some individuals. Suppose someone is interested in studying people's emotional changes or stress levels during the pandemic. In that case, social sensing can convert mobility data into a measurement of emotion, which is quite hard to capture in other forms. This mirrors the authors' argument that "one data source can be used as a proxy for another more difficult to acquire dataset." Though I have not yet worked on related projects, it is interesting to me just to sketch the outline, as below.
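
Here is that outline in code: a toy density-based clustering of location pings to flag unusually dense gatherings. The coordinates, thresholds, and synthetic data are all illustrative assumptions, not anything from Janowicz et al.:

```python
# Toy social-sensing outline: flag dense spatial clusters of device pings
# with DBSCAN. Data are synthetic; eps/min_samples are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
background = rng.uniform([47.50, -122.45], [47.70, -122.25], size=(300, 2))
gathering = rng.normal([47.61, -122.33], 0.0005, size=(60, 2))  # one dense spot
pings = np.vstack([background, gathering])  # columns: lat, lon

# eps=0.001 deg is roughly 100 m at Seattle's latitude (a crude approximation)
labels = DBSCAN(eps=0.001, min_samples=20).fit_predict(pings)
for k in sorted(set(labels) - {-1}):  # -1 marks noise points
    pts = pings[labels == k]
    print(f"cluster {k}: {len(pts)} pings near {pts.mean(axis=0).round(4)}")
```

In a real study the pings would carry timestamps and the clustering would run per time window, but the outline is the same: density in space-time becomes a proxy for a social signal.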

Reading Zhao's paper on deepfake geographic images opened a new horizon for me. I used to think of fake geo-information as ill-portrayed maps, which experienced geographers can easily debunk. With AI, the fake pictures become very convincing and challenging to distinguish. Zhao et al. further proposed a fake-image detection method that works well on the given dataset. This topic should bring the community's collective effort toward a standardized approach to tackling deepfake images. It is important to distinguish and flag fake images when they are misleading, and it is worth taking advantage of fake images when sensitive information can be protected using this technology. In that case, the term "fake" is expected to carry a neutral meaning.

JerryLiu-96 commented 2 years ago

Zhao et al. (2021) studied deepfake geographic imageries and how to detect deepfake products. In the era of post-truth, not only the urban environment can be deepfaked; for instance, climate-change skeptics could use deepfaked satellite images to deny coastal erosion, sea-level rise, deforestation, and more. So far I have talked about malicious uses of deepfake images, but can they be used for good? Deepfake geographic images remind me of a technique used by map producers to expose copyright violators: trap streets. A trap street is not GeoAI because it is not generated by any algorithm; it is planted into the map intentionally by the producer. Trap streets do not intend to do anything unethical - on the contrary, they defend the responsible use of maps. Trap streets are not an example of good deepfake geographic images, but I have no doubt that deepfake geography is not limited to malicious purposes.

Deepfake images are unauthentic from a technical point of view. However, in the area of humanistic geography, inauthenticity does not equate to disinformation or worthlessness. Recall the reading on location spoofing: online protesters spoofed - in other words, faked - their location. However, they were not maliciously spreading fake information; they were voicing their opposition to the project. There might also be cases where deepfake satellite images are easy for humans to recognize as fake yet serve ethical purposes. As a result, I cannot agree more with the conclusion that geospatial data literacy is more important than detection; there are so many cases where fake or not is not the main concern. I am also interested in how GAN networks in GeoAI can contribute to predicting land use change.

Zhou et al. (2017) developed a CNN-based image classifier. I am not an expert in AI; I am just wondering whether the dataset in Fig. 7 is a long-tail dataset, since I noticed the y-axis increases exponentially.

Janowicz et al. (2019) extensively discussed the origins of GeoAI and pointed out three directions for future GeoAI studies: spatially explicit models, question answering, and social sensing. I think extracting the online sentiment of the aforementioned spoofed locations is an example of social sensing. Developing a strong AI that can either fake or fact-check is one thing; making AI learn what counts as fake and what does not is another.

jakobzhao commented 2 years ago

@gracejia513 glad that you are interested in the social sensing paper. For more details, you can refer to

Liu, Y., Liu, X., Gao, S., Gong, L., Kang, C., Zhi, Y., ... & Shi, L. (2015). Social sensing: A new approach to understanding our socioeconomic environments. Annals of the Association of American Geographers, 105(3), 512-530.

jakobzhao commented 2 years ago

@JerryLiu-96 regarding the "good" use of deepfake geography, please refer to https://www.unite.ai/the-new-cgi-creating-neural-neighborhoods-with-block-nerf/

jakobzhao commented 2 years ago

@S-Arnone thank you for sharing with us the case of lies within news aggregators for the Syrian Civil War and the Ethiopian Civil War. I doubt whether they can be fully detected using the approach my co-authors and I have provided, but I believe it is important to initiate a public conversation on possible mis-/dis-information. Please recall the truth regime discussed in the Standing Rock paper.

jakobzhao commented 2 years ago

@JerryLiu-96

> In the area of humanistic geography, inauthenticity does not equate to disinformation or worthlessness. Recall the reading on location spoofing: online protesters spoofed - in other words, faked - their location. However, they were not maliciously spreading fake information; they were voicing their opposition to the project. There might also be cases where deepfake satellite images are easy for humans to recognize as fake yet serve ethical purposes.

You are right! Let us discuss it in class!

Jxdaydayup commented 2 years ago

GeoAI has been a very popular topic in GIScience in recent years. While I previously thought it simply referred to a series of advanced geospatial techniques borrowed from computer science, Janowicz's piece (2019) points out that it arises from a new culture of data creation and sharing, which includes open content, data reuse, and data-intensive exploration. Making content open allows the public to access abundant data. Reusing data, especially large datasets, allows data users to capture as much contextual information as possible and has the advantage of allowing other researchers to test the reproducibility of a prior study; nowadays some journals in the domain of GIScience require researchers to share their datasets and code for reproducibility testing, which also resonates with the current data culture. Lastly, data-intensive exploration encourages the combination of multiple data sources, which may support a more holistic understanding of a research question and/or mitigate problems of data bias or data sparsity. While intensive data may open up opportunities for a variety of potential studies, we should be mindful that answering a research question is more than exploring the data itself by utilizing the emerging large volume of data. The author also envisions the moonshots of GeoAI and asks whether "we can design a software agent that takes a user's GIS-related domain question, understands how to gather the required data, how to analyze them, and how to present the results in a suitable form". This moonshot may require GeoAI to summarize geographic information while answering more open-ended questions. It also requires GeoAI to categorize which questions can be answered by a software agent and which questions have to be dealt with by highly trained GIS analysts.

The ethical future of GeoAI is also a heated topic. Zhao's piece (2021) sheds light on the potential misuse of geospatial data and suggests possible coping strategies. It's interesting to see how satellite images can be falsified with non-existent landscape features without being recognizable to human eyes. While an increasing number of papers are celebrating the advantages of AI techniques for solving geographical questions, more papers like this one are needed to reflect on GeoAI's controversial capacities and unforeseeable impacts on society.

cpuentes12 commented 1 year ago

"GeoAI: spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond" discusses the emerging field of GeoAI, this week's topic, providing an overview of the potential applications of GeoAI including automated mapping, spatial data analysis, and spatial decision-making. I was particularly interested in the concept of utilizing GeoAI in smart assistants, as this strikes me as a natural progression from our current reliance on quick question-answering technologies and is perhaps the most public-facing application mentioned in the paper. The paper also discusses the challenges associated with using AI in a spatial context, such as the need for high-quality and diverse data, the interpretability of AI models, and the potential for algorithmic bias. The paper concludes by calling for increased collaboration between GIS and AI researchers to further develop the field of GeoAI.

Zhou et al.'s "Places: A 10 Million Image Database for Scene Recognition" describes the creation and use of a large-scale (over 10 million images) dataset of images of various scenes and places from around the world, labeled with scene categories. The paper discusses the methodology used to collect and label the dataset, as well as the performance of various deep learning models trained on the dataset for scene recognition tasks. The authors ultimately argue that the Places dataset is a valuable resource for advancing research in computer vision, particularly in areas related to scene understanding and visual recognition. I for one have very limited knowledge of the current state of GeoAI and found it bonkers how much extra classification can happen from the first step of identifying the macro-classes Indoor, Nature, and Urban. While the potent ability of AI to identify and classify things makes me slightly uneasy (as I could see this capacity easily being turned against marginalized and/or anti-government individuals or groups), I can also recognize why these kinds of developments are of interest and how they might be beneficial in some sectors.

Finally, "Deep fake geography? When geospatial data encounter Artificial Intelligence" argues that the use of AI for geospatial data analysis raises concerns about the accuracy and reliability of the resulting information. The paper highlights the potential for errors and biases in the algorithms used for image classification, object detection, and other spatial analysis tasks; discusses the challenges of verifying and validating the accuracy of AI-generated geospatial data; and notes the need for more transparency and accountability in the development and application of these techniques. Bo and his co-authors suggest that incorporating human oversight and expertise can help address these issues, and they emphasize the importance of ethical considerations in the use of AI for geospatial data analysis, particularly with regard to privacy, surveillance, and potential negative impacts on marginalized communities. As we've discussed extensively in class, we're living in a "post-truth" era, and geospatial data is no exception. I appreciate this paper's discussion of the biases that skew algorithms used to perform analytical tasks, and the connection between those biases and their potential for harm to marginalized communities.

yohaoyu commented 1 year ago

This week, three papers explore machine learning (ML) and artificial intelligence (AI) applications in geography, along with potential issues such as deepfakes. The Places paper (Zhou et al., 2017) introduces a large image database that focuses on scenes rather than objects, which differs from previous datasets. Data quality strongly influences ML models' outcomes, serving as the foundation of the learning process. In this paper, I think the shift from objects to scenes also alters the standard for data quality. So I wonder how we can integrate humanistic thought and the values of equity, diversity, and inclusivity into the training data.

The review paper by Janowicz et al. (2019) provides a comprehensive overview of the boundaries of GeoAI and current research topics. I'm impressed by the need for high-quality spatial data infrastructure, although it is somewhat an obscure area in the scientific community due to the huge workload involved. Providing high-quality data as a public good should be considered a government responsibility under appropriate regulations. Additionally, data availability plays a crucial role in shaping research agendas - for example, the (over-)growing research on green space provision driven by the availability of deep learning and Google Street View data. While green space is an important topic, many studies in this area tend to be data-driven rather than question-driven, potentially overshadowing other key topics that lack datasets.

The deepfake paper by Zhao et al. (2021) delves into a new and fascinating realm of AI-generated fake geographic data. With the recent development of state-of-the-art generative models for text and images, distinguishing truth from fake becomes increasingly challenging, affecting both individuals and the public decision-making process. While individuals making mistakes may be acceptable, the implications of fake data for public decision-making raise many questions for us: What data can be trusted? Who bears the cost of fake data usage? Etc.

amabie000 commented 1 year ago

Some very technical writings this week! Zhou et al. (2017) describe the creation of the Places database to train machine learning algorithms for visual scene and object recognition using Convolutional Neural Networks (CNNs). Reading through the methodology for extracting and processing this data was more than a little concerning, given the reliance on open internet image searches to source photos (with the highest concern being only that images have unique URLs and are not duplicated) and on Amazon Mechanical Turk (AMT) labor to complete the categorization ground-truthing. MTurk has been widely likened to an internet crowdsourcing sweatshop, and there are ethical concerns for academic researchers utilizing MTurk labor as both data processors and human subjects. The AMT model also strikes me as a liminal space between the tasks workers receive (labeled HITs - Human Intelligence Tasks) and the AGI futures being pursued. The AMT site (https://www.mturk.com/) describes the benefits of outsourcing this work to humans because "[w]hile technology continues to improve, there are still many things that human beings can do much more effectively than computers, such as moderating content, performing data deduplication, or research." As I look at the image samples in this paper, I wonder who the people in the images are, and whether they are aware of their likenesses being used in training sets or printed in this article.

Zhao et al. (2021) describe the concept of deepfake geospatial data via satellite imagery and develop a process that seeks to detect deepfakes. The authors note that fakes and lies are not a new phenomenon in the geospatial sciences or in human experience at large. However, the allusion to the fairytale "The Emperor's New Clothes" was a bit confusing for me, especially as it was used to suggest deepfakes seem "inevitable or even natural" (13). I am not sure what part the emperor's nonexistent clothes are representing - the deepfakes or the act of creating them? My understanding of this tale may just be very different from the authors'. In any case, the authors point to a need to be proactive in the face of changing technological landscapes and not fall so easily into despair or optimism without careful consideration through a humanistic understanding.

I read Janowicz et al. (2019) last, which in retrospect maybe should have been first on my list! In reading it, I am left wondering whether the ways that data is agglomerated, appended, reused, and synthesized in a GeoAI context are not already creating a version of deepfaked geographies yet to be recognized as such, particularly in reference to social sensing data extracted from human bodies and activities through nearby devices and assumed into contextual relevance alongside other data metrics.