jakobzhao / geog595

Humanistic GIS @ UW-Seattle

Spatial Database #5

Closed jakobzhao closed 3 years ago

tpmccrea commented 4 years ago

The readings this week argued that, as researchers, it is essential that we attempt to fully understand our data in the context of three key areas: accuracy, uncertainty, and truth. A robust understanding of what these terms mean, and of how each impacts data quality, visualization and representation, and research outcomes, is crucial to ensuring that the error and uncertainty present in the data and the research are documented and understood by the reader and viewer, as well as by the researchers themselves.

Reading these pieces reminded me of an article I read recently that studied various Landsat vegetation classification algorithms by analyzing the differences (inconsistency) in their land-cover classification outputs. The algorithms, which ostensibly served the same purpose (identifying vegetation health), disagreed significantly when it came to mapping high-magnitude disturbance events. There was a fair amount of inconsistency among the models themselves, as well as issues with accuracy and precision between what the algorithms computed and the reality on the ground. All of this introduces error and uncertainty into the data, into the visualization of vegetation indices, and into any research built on analyzing those data.
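To make the idea of inter-algorithm inconsistency concrete, here is a minimal sketch (my own illustration, not code from the article) that measures how often two hypothetical classification outputs agree, pixel by pixel. The function name and the toy class labels are invented for the example.

```python
# A minimal sketch of measuring inconsistency between two hypothetical
# land-cover classification outputs for the same scene. Each output is a
# flat list of class labels, one per pixel.

def agreement_rate(classified_a, classified_b):
    """Fraction of pixels on which two classifiers assign the same label."""
    if len(classified_a) != len(classified_b):
        raise ValueError("outputs must cover the same pixels")
    matches = sum(a == b for a, b in zip(classified_a, classified_b))
    return matches / len(classified_a)

# Hypothetical outputs from two vegetation-classification algorithms:
algo_1 = ["forest", "forest", "shrub", "bare", "forest", "shrub"]
algo_2 = ["forest", "shrub",  "shrub", "bare", "forest", "bare"]

print(agreement_rate(algo_1, algo_2))  # 4 of 6 pixels agree
```

A low agreement rate in a region (for example, around high-magnitude disturbances) is exactly the kind of inconsistency that should be surfaced to the reader rather than hidden behind a single map.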

This brings me back to the MacEachren et al. piece, which discussed some potential options for conveying uncertainty to readers and viewers. To some extent, I feel it is easier than ever to understand and convey degrees of error and uncertainty. Certainly when it comes to spatial data, our ability to run complex analyses using cloud computing and other “Big Data” techniques gives researchers a much more robust picture of the data than researchers had in the past. It is important that we work, as MacEachren says, toward “interdisciplinary” and “formalized” solutions for representing and understanding the role and impacts of uncertainty.

weixingnie commented 4 years ago

What an interesting reading week this is! This week's readings consistently focus on the topic of "uncertainty" in a geographic context, that is, geospatial information uncertainty.

Uncertainty in a geographic context is a complicated problem, but an inevitable one. Mei-Po Kwan, a professor at the University of California, encourages readers to think beyond static data and the constraints of geographic information systems. In her study, she focuses on people's daily routes, diving into their daily decision-making and examining pre-existing assumptions. The results are striking: "uncertainty" is revealed in multiple respects, and the pre-existing assumptions and models cannot accurately capture what they were intended to. For example, the neighborhood is not the zone where people most frequently spend their time. In fact, what truly constrains and influences people's daily routes are things that are difficult to measure: social groups, friends, public relationships, even some distant shops can attract people miles away from their neighborhood.

Another article, "Visualizing Geospatial Information Uncertainty: What We Know and What We Need to Know," explores the UI/UX side of reducing uncertainty as much as possible. In terms of design strategy, the mapping framework includes the following elements: "positional accuracy," "attribute accuracy," "logical consistency," "lineage," and "completeness." Personally, I see this as an HCDE approach with a specific focus on geographic information systems. Later in the article we also get a glance at visualization strategy. As researchers, we tend to imagine that the hardest thing is solving the problem itself, and we ignore the rest; in reality, how well you communicate with people is a priority that determines the success of your research.
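One way to read those five elements is as metadata that a dataset should carry alongside its geometry. Below is a rough sketch of that idea, assuming nothing from the paper itself; the class name, field names, and example values are all my own invention for illustration.

```python
# A sketch of a record holding the five data-quality elements listed in the
# article, so a dataset can carry its own uncertainty description with it.
from dataclasses import dataclass

@dataclass
class DataQuality:
    positional_accuracy: str
    attribute_accuracy: str
    logical_consistency: str
    lineage: str
    completeness: str

    def summary(self):
        """Render all five elements as one human-readable line."""
        return "; ".join(f"{k}: {v}" for k, v in vars(self).items())

# Hypothetical quality statement for a parcel layer:
quality = DataQuality(
    positional_accuracy="RMSE ~5 m against surveyed control points",
    attribute_accuracy="85% agreement with field-checked samples",
    logical_consistency="topology validated, no overlapping polygons",
    lineage="digitized from 2018 aerial imagery",
    completeness="urban core only; rural parcels missing",
)
print(quality.summary())
```

Keeping a record like this attached to the data is one small, practical step toward the "formalized" handling of uncertainty the readings call for.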

jouho commented 4 years ago

We've talked a lot about how data as solid facts can sometimes be misleading, but what if the data itself were wrong?

Mei-Po Kwan was the first person to articulate the uncertain geographic context problem (UGCoP). It refers to the problem that findings about the effects of area-based attributes (e.g., land-use mix) on individual behaviors or outcomes (e.g., physical activity) can be affected by how contextual units or neighborhoods are geographically delineated. She notes that the problem "arises because of the spatial uncertainty in the actual areas that exert contextual influences on the individuals being studied and the temporal uncertainty in the timing and duration in which individuals experienced these contextual influences". While I was surprised that such an important methodological problem was articulated only a few years ago, I was also thinking about all the geographical studies and research already conducted that are potentially under threat from the UGCoP.

The concept of the "true causally relevant" geographic context also drew my attention. Mei-Po Kwan argues that because no researcher has complete and perfect knowledge of the "true causally relevant" geographic context, no study that uses area-based contextual variables to explain individual behaviors or outcomes can fully overcome the problem. I wonder what future improvements to our methodology might address this critical issue. For now, I think it is crucial that we all acknowledge the underlying limitations of our methodology and stay critical about how we study certain geographic questions. This is especially important when we geographers are answering deadly health questions, such as mapping the spread of the coronavirus.

angellinn commented 4 years ago

This week’s readings discuss how geodata is generated, represented, and understood. The call to action is to think about what new tools and perspectives we should adopt as we face emerging digital worlds.

First, “The Uncertain Geographic Context Problem” points out how the UGCoP could be a major reason why findings concerning the effects of social and physical surroundings on health behaviors and outcomes are often inconsistent. Second, “Visualizing Geospatial Information Uncertainty” identifies the key research challenges in visualization and analyzes potential visual methods for coping with uncertain information and decision making. Lastly, “Stand with #StandingRock” uses a contextual approach to understand geospatial big data and calls for a reconfiguration of the pre-established regime of truth. While the authors of each reading examine geospatial data from different perspectives, the common topic brought up by all three is “data uncertainty.”

In a time when many decisions are driven by big data analysis, how well do people understand the meaning of these data and how they are generated? Zhang and Zhao make an interesting point about evaluating the “intention” of various data actors when determining whether geospatial data are true. This is where geography and anthropology come in, incorporating human factors into data interpretation. This concept connects to my Raspberry Pi project: when reading the data patterns collected by the Sense HAT, I can also think about the factors that led to the changes and the potential events that could be happening.
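As a small sketch of that last idea, assuming only a recorded list of readings rather than the real Sense HAT API: before interpreting a pattern, it can help to flag samples that deviate sharply from the rest, since each flag is a prompt to ask what event (or sensor artifact) produced it. The temperatures and threshold below are made up for illustration.

```python
# A rough sketch: flag readings that deviate strongly from the series mean,
# as candidates for "something happened here" worth interpreting in context.

def flag_outliers(readings, threshold=2.0):
    """Return indices of readings far from the mean of the series."""
    mean = sum(readings) / len(readings)
    return [i for i, r in enumerate(readings) if abs(r - mean) > threshold]

# Hypothetical temperature samples from a Sense HAT log:
temps = [21.2, 21.4, 21.3, 27.8, 21.5, 21.1]
print(flag_outliers(temps))  # the spike at index 3 stands out
```

The flagged index is only the start of the interpretation; deciding whether it reflects a door opening, a heater cycling, or a faulty sensor is exactly the human, contextual judgment the readings emphasize.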