uchicago-computation-workshop / Fall2019

Repository for the Fall 2019 Workshop
12 stars · 1 fork

10/31: Calling Bullshit #8

Open smiklin opened 4 years ago

smiklin commented 4 years ago

Comment below with questions or thoughts about the reading for this week's workshop.

Please make your comments by Wednesday 11:59 PM, and upvote at least five of your peers' comments on Thursday prior to the workshop. You need to use 'thumbs-up' for your reactions to count towards 'top comments,' but you can use other emojis on top of the thumbs up.

PAHADRIANUS commented 4 years ago

Thank you in advance for staying with us to address such a serious issue in present-day academia. Having taken a few extra looks at the case studies section on callingbullshit.org, I am shocked by two aspects of the problem. On one side, even the most dedicated and vigilant scholars, even experts in discerning such flaws and "calling bullshit" like yourselves (you mentioned your own publication's misuse of figures), may unconsciously produce "bullshit" results. On the other side, even government agencies and eminent researchers seem to intentionally exploit the audience's inability to call bullshit, manipulating research results to justify their conclusions. Frankly, I am less concerned by the former problem, since the consistent exchange of ideas between scholars can help mend the flaws; one is always more acute at finding others' errors than one's own. As the case of the sexual orientation study showed, after you communicated with the authors, they displayed willingness to address and discuss the inferential flaws in their conclusion. But the latter problem, which has a more significant impact at the societal level, is difficult to resolve. Statistical results and figures are constantly shown in artificially altered ways, by enterprises for business and by governmental bodies for politics. It is already a severe situation when a government agency like the NIH, on a policy as vital as distributing national funds to investigators, uses forged results. What's more infuriating is that after you pointed out the defects in their reasoning, the people from the NIH did not really bother to make further explanations; now the R21 grant is fully online. It seems to me that scholars' bullshit-calling voices are not nearly enough to raise public awareness of government and business bullshit.
Of course, you host the Calling Bullshit course to educate future researchers, but the majority of society remains unable to distinguish bullshit. How can we extend the effort to bring the ability of critical analysis to the ordinary masses, and regulate the more powerful bullshit makers such as the government and the industrial magnates?

wanitchayap commented 4 years ago

Thank you so much for your presentation! I really enjoyed all of the readings. In the case study "Machine learning about sexual orientation?", it seems like the problem with the publication is that the authors went too far with their interpretations. Do you think this may result from the competitiveness of publishing papers? That is, if a paper is not exciting or controversial, it may be less likely to be published, and thus researchers are pressured to claim things beyond their results' scope? If so, do you think it might be better for academia as a whole to focus on the results rather than the interpretations of research papers?

policyglot commented 4 years ago

Dr Bergstrom and Dr West,

Here's a suggestion and question about a sub-topic you could add to your course: literature reviews.

They are so much longer in social science journals than in the physical sciences! One may argue that there is inherently more subjectivity in the social sciences. But as your reading by the physicist Sokal shows, dense language can be used to mask bullshit. So instead of adding clarity, these reviews may be doing the exact opposite.

Specific Questions

  1. Historically, how did this expectation of long and verbose literature reviews emerge for social science publications?
  2. Going forward, how can incentives be realistically redesigned at top journals to favor conciseness and clarity?

Thank you for your enlightening (and entertaining) collection of links at callingbullshit.org! I've bookmarked them and will be sure to keep sending you ideas after the workshop on how to enhance your course even more.

sanittawan commented 4 years ago

Having read the case study and a few other posts on the course page, I am equally concerned about the issues of data misrepresentation and fairness in machine learning models. The models per se seem innocuous, but with the data we feed into them, they can introduce biases or even sustain existing inequality. Can you suggest (i) some good ways to detect problems with the data, and (ii) in cases where we have to work with what we have, ways to mitigate these problems?

Another question: so far you've taught many iterations of the class. In a similar fashion to your conclusion about humans' ability to detect cues and make decisions in poker games, do you think that humans are bad at detecting bullshit even when they are trained to do so? Is there a chance that machines are better than us at catching bullshit?

nwrim commented 4 years ago

Thank you for an interesting read and a great presentation! In particular, the writings about data visualization were a great systematic summary of things I knew in some way but not coherently, and the examples of blackjack and the "light of the Holy Spirit" in the case study of Wang and Kosinski (2017) made a lot of sense to me.

I think the war against "bullshit" must be fought on two closely related but distinct battlefields. One battlefield is getting people to identify and call bullshit; the other is clearing out the bullshit that has already contaminated people's minds. I think this project is an excellent approach to the first battlefield, and your efforts will make a difference if many people read through the writings.

However, I am personally more concerned about the second battlefield, and would love to hear more of your opinions on how to deal with it. I think the greatest problem is that people are reluctant to change their minds once they are set, even when given corrected information. A good demonstration of this reluctance is the topic of political misperception. According to a review by Flynn and Nyhan (2017):

research indicates that corrective information often fails to change the false or unsupported belief in question, especially when the targeted misperception is highly salient. In some cases, corrections can make misperceptions worse (Nyhan and Reifler 2010; Nyhan, Reifler, and Ubel 2013). Even the release of President Obama’s long-form birth certificate had only a brief effect on beliefs that he was not born in this country (Nyhan 2012). Moreover, people have difficulty accurately updating their beliefs after finding out that information they previously accepted has been discredited (Bullock 2007; Cobb, Nyhan, and Reifler 2013; Thorson 2015a)

Ideally, if all people were able to call bullshit, bullshit would not contaminate most people's minds. However, I do not think the majority of the population will be able to do so, at least in the near future. This means we will need a way to actually change people's minds after they have been influenced by bullshit. What are your thoughts on this matter?

SoyBison commented 4 years ago

Thanks for coming to our workshop! My question is more of an ontological one. In On Bullshit, there is talk about the fine line between lying and bullshit.

It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is to that extent respectful of it.

I would disagree with the first sentence. It's clear that someone can deceive without having a belief about the truth: the liar may trick the mark into performing some deed solely to benefit the liar, even if the liar is in the dark about the lie's subject. This has the consequence that the bullshit-lies distinction rests solely on intention, which seems problematic in the same way that any intention-based value theory is. To illustrate, imagine two scenarios: one where the agent acts with defensive intention (say, to convince the mark that they aren't ignorant of the subject; this we would identify as bullshitting) and one where the agent acts with offensive intention (to genuinely sow chaos, which would be a lie).

If we really want to apply an epistemic distinction, we could assert that a bullshitter doesn't know that they're not telling the truth while the liar does, but this falls into a conundrum again: if the liar's belief turns out to be wrong, the situations are once more indistinguishable. This reduces to a confidence-in-knowledge problem, seated in the fact that knowing something is indistinguishable from believing it strongly.

My question, then, is this: if there's no real distinction in the effects of otherwise identical cases of lying and bullshit (other than intention, which is immeasurable from the outside and not always apparent through introspection), then why worry about a practical distinction? Perhaps the judgement of the act (to be clear, not the actor) should be based only on the factual status of the statements and the consequences of voicing them.

goldengua commented 4 years ago

Thanks for your papers. It is a very interesting idea to look at hypotheses about the real world, and at statistical and data-analysis tools, from the perspective of bullshit. I especially enjoyed how you derived a divergent interpretation from the results of the Wang and Kosinski paper: rather than hypothesizing that machines can detect subtle cues beyond human perception, it is more likely that machines are simply better than humans at integrating cues and updating posterior probabilities. I think this discussion should make us cautious about interpreting results that come out of a 'black box' such as a neural network. Given that we know little about the detailed mechanisms of many of the tools we use, how can we make the linking hypothesis from the results to the theories we care about? Is there any protocol for this process? Moreover, can we probe what the black box is doing by giving it the behavioral tasks we use to test human beings, and comparing its performance with humans', to infer its mechanisms from its capabilities?
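The cue-integration point above can be sketched with a toy Bayesian calculation (the numbers are purely hypothetical, not from the Wang and Kosinski paper): many individually weak cues, each barely informative on its own, can combine into a confident posterior, with no superhuman perception required.

```python
from math import prod

# Toy sketch (hypothetical numbers): n conditionally independent cues,
# each with a modest likelihood ratio in favor of hypothesis H.
def posterior_odds(prior_odds, likelihood_ratios):
    """Bayes' rule in odds form: posterior odds = prior odds x product of LRs."""
    return prior_odds * prod(likelihood_ratios)

def odds_to_prob(odds):
    return odds / (1 + odds)

weak_cues = [1.2] * 20  # 20 weak cues; none is decisive alone
single = odds_to_prob(posterior_odds(1.0, [1.2]))        # ~0.545 from one cue
combined = odds_to_prob(posterior_odds(1.0, weak_cues))  # ~0.975 from all 20
print(round(single, 3), round(combined, 3))
```

A classifier that merely aggregates weak, humanly visible cues in this way would look "superhuman" without perceiving anything a human cannot.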

hihowme commented 4 years ago

Thank you in advance for your presentation, and thanks for the wonderful reading! I guess many of us have this skeptical voice in our heads, but we never take it seriously. I enjoyed the reading so much, as you say out loud that we should take another look at so-called "Big Data" claims that look convincing but actually are not. Especially the misleading axes: simply including 0 can make a big difference! I have a general question: what do you think is the right way to publish a good paper, or to do good research? Take the machine learning about sexual orientation case study, for example: the authors must have known it was going to be a hit, because they had two hit words in their research, machine learning and sexual orientation. Even if they found out it may not be the truth, they may still want to publish it because the editors love it. What do you think we could, or should, do to change this situation, where people care only about the name and the story of a paper, whether it is catchy or not, rather than whether it is true?
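The effect of omitting 0 from a bar chart's axis can be quantified with a quick sketch (hypothetical values): the eye compares drawn bar heights, and that ratio diverges from the ratio of the underlying values as the baseline rises.

```python
# Hypothetical values: bars of 95 and 100 drawn on a y-axis starting at `baseline`.
def visual_ratio(a, b, baseline=0.0):
    """Ratio of the bars' drawn heights (the ink) when the axis starts at `baseline`."""
    return (b - baseline) / (a - baseline)

a, b = 95, 100
print(round(visual_ratio(a, b, baseline=0), 3))   # honest axis: ~1.053, a ~5% difference
print(round(visual_ratio(a, b, baseline=90), 3))  # truncated axis: 2.0, one bar looks double
```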

lulululugagaga commented 4 years ago

Thanks for the presentation.

I enjoyed reading this article. We've seen cases where an article on social media is built on one small piece of a conclusion from an academic journal, which is misleading and deceptive. But I hadn't thought that the journals themselves could have such bullshit problems with unreliable data, computing methods, tools, or models.

My question is: when doing research, if we rely heavily on big data whose sources are in many cases industry or government, but we have no clue how they collected and rectified the data, how confidently can such data be used once the bullshit concern has been raised?

timqzhang commented 4 years ago

Thank you for the presentation! This is a quite interesting topic that I had never noticed before. It is a bit surprising to learn about the notion of "bullshit" and how closely it accompanies us. Here are my questions:

  1. In the discussion of humbug, it is mentioned that the trick of humbug is to make the right impression of oneself on others, while the content of the humbug is not that important. However, in order to make a convincing impression, good content is necessary. Therefore, I wonder: when making humbug, what makes a manipulated message good enough to deliver a strong impression?
  2. How do normal people identify "bullshit" in their daily lives? In other words, how do they avoid deliberate humbug from other people? It is quite difficult, though, since the purpose of humbug is to make you believe it and form the impression its maker wants you to have. In this game, then, what is the strategy for our side?

liu431 commented 4 years ago

Amazing idea of creating a class like this! And also thank you for making them publicly available online, so that people like me could learn the important skill of 'calling bullshit'.

I'm trying to think about the roots of bullshit in our world. A lack of statistical training and critical thinking is a major one, as you mention on the website. However, do you think people sometimes speak bullshit even though they know it's bullshit, especially when their audience is more likely to be convinced by it?

nswxin commented 4 years ago

Thank you for your presentation!

I agree with you that our world is full of bullshit. In addition to all the categories you've mentioned, I have also encountered many other things: 1. useless calories that seem to do no good to my health; 2. unwanted travel time commuting from home to school; etc. But my concern is: is bullshit meaningless? In my opinion, something seemingly trivial may prove useful in its own way. For example, in a MACS class I encountered an incredibly boring study of how the Chinese government censors posts online, with no apparent extrapolation or application to other academic areas. However, its research method proved to be very advanced and is used as an exemplar to teach us how to design experiments in social science research. Thus, to me, an alternative way to deal with bullshit (if one unfortunately encounters some and wishes to recover the loss as much as possible) is to learn from it from another perspective. For example, one may ask what characteristics of this bullshit make it bullshit, and avoid them as much as possible.

ShuyanHuang commented 4 years ago

Thank you for presenting. I find the case study of machine learning about sexual orientation particularly interesting. The way to evaluate machine learning research without knowing the technical details inside the black box is inspiring. I think the two problematic interpretations of this research come exactly from ignorance about the black box (the researchers', not ours): the researchers arrived at these interpretations because they tried to interpret what was going on in the black box when they couldn't. The poor interpretability of machine learning methods has made it difficult to apply them to social science studies, which leads many researchers to try to improve interpretability. And I believe many more inferences could be drawn with improved interpretability of machine learning algorithms. So do you think the two interpretations in the case study would be justified if we could actually see which features are discovered and used by the black box?

hesongrun commented 4 years ago

Thank you so much for the wonderful presentation. The title is really fascinating, and I can't agree with you more that there should be such a course for the general public, to identify highbrow nonsense masked with advanced technology and models. My question is: how can we solve the problems of fake news and misinformation on social media? Nowadays such information can spread and be shared as fast as a virus. Are there any ways to fundamentally contain the harm it causes to our society? Thanks again!

keertanavc commented 4 years ago

Really fun readings! We know that independent fact-checkers check each news item before approving it for publication. But do you think they also ought to check for misleading bullshit to improve the integrity of the reported news? If so, how can we go about streamlining this process? Because unlike cold, hard facts, bullshit often consists of the right facts presented in a very misleading way.

linghui-wu commented 4 years ago

Thank you in advance for the brilliant and interesting presentation! At first I thought Calling Bullshit was such a bold topic, but after reading the papers and articles, I realized the significance of avoiding "talking bullshit" in our own studies. "Bullshit" may exist even in vivid graphs, as illustrated in The Principle of Proportional Ink, because inaccurate ratios may take advantage of human beings' sensory biases to suggest exaggerated conclusions. As data visualization is so indispensable in presenting our ideas, do you think adding a more detailed explanation would somehow alleviate the problem, and how do we choose a proper proportion between graphics and descriptions?

Leahjl commented 4 years ago

Thanks for the presentation. Some research papers can be really misleading if we don't understand their complex models, which makes it harder to challenge the author's theory. It is illuminating to spot bullshit in the guise of big data and fancy algorithms without knowing the details. In the case study "Machine learning about sexual orientation?", you bring up the idea of a "black box" to evaluate a quantitative claim in terms of its input data and interpretation. How, then, can we evaluate the models in particular papers when we want to analyze and model certain social science problems ourselves?

bakerwho commented 4 years ago

I love the frankness with which your work addresses the real problem of pervasive and widespread BS-mongering. Frankly, I feel your work should be required reading for all scientists and journalists, as well as other communicators and decision-makers.

My question is to do with apathy. The screenshot below is an image that was shared by the official BJP Twitter handle in 2018. Prime Minister Modi features prominently in it, alongside a blatantly BS infographic that literally doesn't even try to lie well.

Screen Shot 2019-10-30 at 3 04 06 PM

The Twitter link here is still live, showing that this has not been taken down, let alone apologized for. Modi was re-elected as the BJP's frontman in 2019 with an even stronger majority than we saw in 2014.

It seems like the digital age has enabled intentional, blatant and (I'd say) shameless BS-mongering for clearly vested interests. There are no consequences for it, and I believe there should be. How can we address the apathy that normalizes, enables, and even supports this kind of BS?

anuraag94 commented 4 years ago

Thanks in advance for your presentation, and pardon me for calling bullshit on you. With respect to your case study analysis of Wang and Kosinski (2018), I'm concerned with the method by which you argue for your alternative explanations. I think it was critically and rhetorically sound to apply the parsimony principle to the authors' interpretations: it is unlikely that their results provide strong evidence for prenatal hormone exposure.

However, you conclude by proposing a most likely explanation for the results: that sexual orientation influences grooming and self-presentation. You can use Occam's razor to rule the authors' interpretation out of consideration for the problem at hand; that's valid. But you propose several candidate models that might rebut the authors' interpretation, yet somehow select one of them and present it as the most likely choice. Without further inquiry, it's an overreach to use Occam's razor to adjudicate between the candidates.

Could you please comment on this?

MegicLF commented 4 years ago

Thank you so much for your presentation.

I am really interested in your discussion of "bullshit" and "calling bullshit". Could you elaborate more on this topic? What are the main sources of so much "bullshit" in higher education? Why do people keep producing it? What is its harm to students like us? Do you think it would be possible to set up mechanisms to reduce the amount of "bullshit" in academia? If so, what should be emphasized in the selection process? And what should we pay attention to when trying to at least reduce the amount of "bullshit" we ourselves produce?

rkcatipon commented 4 years ago

I really enjoyed these articles, thank you for sharing and for your work at large!

In response to @SoyBison's comment, I'm also interested in the space between unintended and intended bullshitting, in particular whether the distinction between disinformation and misinformation matters when operationalized.

I would argue that a practical distinction between lying and bullshitting is necessary, because with intention comes motivation, and it is understanding what is to be gained from the potential deceit that helps form a judgment. But as @SoyBison put it, intent is hard to measure and therefore may not be a reliable criterion for distinguishing misinformation from blatant disinformation. So my question to the presenters and the group at large is: can the intent to deceive be measured? Or is intent inherently cerebral, so that we must accept approximations at best?

bhargavvader commented 4 years ago

I have to say, I was super pleased that the same things were upsetting all of us - I remember reading the Wang and Kosinski paper and thinking about what a load of bullshit it was. Super psyched to hear you all and chat with you all tomorrow.

My question is about how we go about constructing a different approach (or episteme) in the field of computer science, or rather, of information. When I reason with myself about it, I like to think of it as a leftist computer science: one which is aware of our ethical and moral responsibilities as scientists and researchers, while also trying to develop an approach to science concerned with anti-fascism and fairness, broadly.

This would involve all the precautions against bullshit one would want to take in general (constant vigilance!), but also, I believe, a constant awareness of the historical context of the tools we use and the consequences of the tools we create. I'm reading Orit Halpern's Beautiful Data: A History of Vision and Reason since 1945 (https://www.dukeupress.edu/beautiful-data) right now and I'm digging it; it's making me rethink what I thought of the fields of complex systems, AI, and a lot of the fields of science I engage with (I received a degree in computer science engineering before starting this program here).

I realise this sounds kind of vague... but that's because it is kinda vague in my head right now. I have a feeling about what I think of this data-oriented episteme we live in, but I can't put it into words very well. It'd be cool to talk about it and maybe develop the vocabulary for conceiving of it.

Edit: So I just noticed your Sokal reading. I'm personally not a big fan of it, and I think he really misses a lot of the points of critical theory/anthropology. Kinda have beef with @policyglot about this too, for quoting it. I'd like to maybe chat about this and hear what you think of it.

heathercchen commented 4 years ago

Thanks for your presentation in advance! I am quite impressed by the way you have brought this question to the public and are trying to raise people's awareness of "data bullshit" nowadays. Most of the examples you discuss in the reading materials come from less academically strict sources, like technical reports produced by private analysis companies and articles on media platforms. My question is: do you think "data bullshit", and the practice of "calling bullshit", exist in academia today? I mean, there are certainly some scientists who successfully exploit the black-box character of big data to make results more "enticing" for publication. Do you think such an issue is worth stressing, and if so, what is your comment on it? Thank you for your time!

yongfeilu commented 4 years ago

Thank you for the presentation! The discussion of "bullshit" and "calling bullshit" is really intriguing. Living in a world full of information asymmetry, anyone, whether from industry, politics, or academia, can have an incentive to exploit their informational advantage over others for gain. Therefore, I think it more worthwhile to ask ourselves what is not bullshit than what is. Moreover, you can say that so-called bullshit contaminates people's minds, but seen another way, bullshit can also be something that makes the world run more smoothly. Without exaggerating, or trying desperately to play up the importance of their contributions, scholars might not be able to sell such intriguing stories to top journals, merchants might not be able to sell their products to customers, and politicians could not keep voters satisfied. In this sense, can we say that a more critical question is how to distinguish which bullshit is good and which is bad and utterly unacceptable?

anqi-hu commented 4 years ago

Thank you for sharing your ideas with us; I love the cover photo of your homepage. I see that the purpose of your course is to teach individuals "identifying bullshit, seeing through it, and combating it with effective analysis and argument". While this ideally equips one with a keen eye for the analytical details and truthfulness of the scientific language and figures one is exposed to, I'm somewhat concerned about a potential side effect: over-identifying bullshit. When people get more sensitive than necessary, they might become hypercritical of claims that are in fact only slightly bullshit, or even perfectly fine. Do you see this as a valid concern? If so, what means would you take to prevent it, and does any part of your course address this problem?

di-Tong commented 4 years ago

Thanks for developing a space to characterize and discuss this annoying phenomenon. I wonder whether and how the emergence and spread of bullshit is intrinsically related to the development of quantitative methodology, the domination of scientific discourse, and the division of expertise and specialization of knowledge.

YuxinNg commented 4 years ago

Welcome to Chicago! And thanks for these articles; I really enjoyed reading them. I am particularly interested in one article, Visualization: Misleading axes on graphs. I took an Information Visualization course a year ago, so I know how misleading graphs can be. In that class, I learned a lot about how to "correctly" visualize information (e.g., applying certain algorithms to certain cases). It seems there are already some standard rules for visualization. My first question is: do you think the visualization field will become more standardized in the future? One thing I learned from my Information Visualization class is that "information visualization is not art." But in reality, I think many people, including me, would like to see more creative visualization. So my second question is: how can people find a balance between standardized visualization (reduced risk of misleading, but less attractive or fun) and creative visualization (increased risk of misleading, but more fun)? Thanks!

dongchengecon commented 4 years ago

Thanks a lot for sharing such an interesting topic with us! It seems you have developed a thorough course on detecting bullshit amid the mass of research products and news in our daily lives. It is great that you give a clear definition of what you think bullshit is, since the definition can vary wildly from person to person, or even for one person across different periods of his or her life. Have you ever thought about adding a part to your course on "the art of creating bullshit"? Personally, I suspect the person best able to detect bullshit may also be the one best at creating it. If students could do both detection and creation, we might say they have become great "bullshitters".

jsgenan commented 4 years ago

Thanks for bringing our attention to such a meaningful project! I like the subtitle "Data reasoning in a digital world". Living in the world of "big data", everybody needs to be educated to interpret raw data critically themselves. The misleading graphs, in my view, are the result of reporters' belief that graphs are just a means of storytelling, only more direct and eye-catching than words. When there are so many ways to interpret a number from an annual report, why would they give up graphs that strengthen their arguments? (I'm not saying they are right; I just don't know what to do about it.) As for the neural network case study, it seems that many people simply give up on critical thinking because they're told it's "a black box". What criteria could we use to critique an unknown subject like this? I believe such papers have already been called bullshit by colleagues, but how can we convey this to the general public?

harryx113 commented 4 years ago

Thank you for coming to Chicago and sharing.

While it is crucial to be a critical and independent thinker, I also think that bullshitting is almost inevitable. First, some bullshit is unintentional. Second, it's often in people's interest to bullshit, and people are not incentivized to speak the truth. Third, people love hearing bullshit that goes their way, to avoid cognitive dissonance.

What is your view on the following two types of people? Type A: people who have the ability to research, critically examine, and call bullshit ONLY WHEN THEY CHOOSE TO; if it's something they don't care much about, they let it slip even if it's bullshit. Type B: people who believe anything is bullshit until proven otherwise; they always stay skeptical and critical. Which type do you want your students to be?

The reason I ask is that there is just too much bullshit in life, and calling it out can be exhausting and hurtful. I just thought it would be very interesting to hear what you think.

tianyueniu commented 4 years ago

Thank you for sharing these interesting readings! I find the case study on machine learning and sexual orientation, and the article on misleading axes, particularly interesting. People are incentivized to "lie" because exaggerated results often gain more attention from the public, regardless of their validity. Also, given the complexity of machine learning models and the different ways to select, boost, and bag models together, I believe researchers can get almost any result they want if they make enough "adjustments" to their models. So personally, it's getting harder for me to see through borderline lies in research articles. How do you propose we train ourselves to become better at detecting BS? And at what point should we tell ourselves that a study has too much exaggeration to be taken seriously?

ziwnchen commented 4 years ago

Thanks for your presentation! Here is my question:

  1. You mentioned a lot of "bullshit" in data visualization and in the interpretation of data science research. Compared to the past, are we exposed to more of it? Do you think certain characteristics of big data and machine learning research (e.g., black-box algorithms, huge data sizes) facilitate the growth of bullshit by being "intimidating" to non-experts?

  2. Detecting potential "bullshit" in academia is important. Even more important, however, is preventing such "bullshit" from spreading. For example, "machine learning and sexual orientation" was published in a top journal and attracted great attention. Most of its readers might simply accept the paper's claims because of the journal's reputation. Do you have any suggestions for hindering the diffusion of "bullshit", especially in top journals?

adarshmathew commented 4 years ago

(Excellent discussion on here, which I'll be calling upon to raise my question of the politics of calling and countering bullshit.)

Building off of @nwrim's point on the mountains of pre-existing bullshit, we have entire ecosystems built on this past knowledge, with their own believers (in-group) and non-believers (out-group). The in-group is invested in (short-term) self-preservation, viewing criticism from the out-group as unenlightened or politically motivated. In some cases, like @bakerwho's example, the in-group derives glee out of flaunting their provably false assertions. Bullshit, in such cases, seems to be the production or elevation of 'evidence' that legitimizes the status quo.

There's a subtle yet critical difference between calling bullshit and actively countering bullshit. The former is an isolated act of rejection (or even ridicule) but the latter requires legitimacy and consensus, both of which are political. So, what's the role of legitimacy and power in countering/dismantling bullshit?

jtschoi commented 4 years ago

Thank you in advance for your presentation and discussion.

After having read some of the case studies on your website, I was curious about your opinions on the scale of calling out bullshit. For instance, in the Gun Deaths example that you presented, I felt that it was a rather effective display of the increase in deaths, as the graph, combined with the color red, looked like blood dripping (and the Visualizing Data link goes on to discuss confusion versus deception as well). In cases like these, where the line between total bullshittery and (for lack of a more concise word) artistic choice may seem blurry, I feel that a discussion is more suitable than calling bullshit (as on your website and Visualizing Data). However, those who are more vocal might be quick to call bullshit and irresponsibly shame the content creator before such healthy discussions can happen. In light of this, do you feel that calling bullshit should be moderated?

chun-hu commented 4 years ago

Thank you in advance for the presentation. I enjoyed reading the case study, and I found the concept of the black box quite interesting. The analytic machinery that helps us make predictions and decisions can also lead to biased interpretation and analysis. Do you think people who are experts in black-box algorithms are more or less likely to bullshit about their results? On one hand, they benefit from the power of these algorithms in their research; on the other, they are clearly aware of how misleading these models can be.

luxin-tian commented 4 years ago

Thank you for presenting this fascinating material calling attention to misleading and deceptive arguments masked by highbrow, unfathomable models and theories. I wonder whether this problem exists not only in academia but also in commercial fields. Are people using such tricks for dishonest purposes? If so, how can we identify them, and how can we draw public attention to the problem? Can we derive any policy implications from this concern?

Anqi-Zhou commented 4 years ago

Really interesting and critical topic! As you said, bullshit is very common in the political, commercial, and even academic worlds. Is there any possibility that this bullshit has some value or meaning for people's lives? What is the difference between bullshit and academic dishonesty, or are they the same? Do you think bullshit is ever acceptable?

bjcliang-uchi commented 4 years ago

Thank you for this presentation! When teaching your class on Calling Bullshit, do you find that students "cannot figure out what is bullshit" or that "they just don't bother to"? As the proverb says, "you cannot wake up someone who is only pretending to be asleep." Also, how would you differentiate between bullshit (nonsense) and active lying?

HaowenShang commented 4 years ago

Really interesting topic! Thanks for your presentation! You mentioned that we live in a bullshit-rich modern environment and need to learn how to identify bullshit. However, we cannot have a deep understanding of every field in our lives. If we are not familiar with a field, how can we identify the misleading bullshit in it?

tonofshell commented 4 years ago

The included literature and your website are all very good resources for identifying bullshit, which is certainly a significant problem. However, I would argue that a more significant problem is that many people continue to believe in bullshit, and even double down on their belief in said bullshit, because that bullshit reinforces their preconceived notions on how the world works. What do you think are effective ways to combat this entrenchment?

RuoyunTan commented 4 years ago

Thank you for today's work! I find the theme and materials of your course very fascinating and thought-provoking. I constantly feel that the notion of "big data" is becoming too prevalent in today's world, even more than what's needed -- companies use it to advertise their products and services, researchers use it for new projects, etc. "Big data" is almost everywhere. So are the "data scientists".

Cathy O'Neil describes this as a "big data bubble", but she also thinks that there will be a "bursting" of this "bubble" when people develop a set of standards of what a data scientist should be able to do. As a graduate student studying computational social science, I am wondering what standards we should establish for educational institutions when big data is a topic of study. Could you share some of your thoughts on this?

ruixili commented 4 years ago

Thanks for your presentation! The topic is really interesting. With the emergence of social media, the cost of expressing ideas is almost zero. Although free speech is the cornerstone of a democratic society, does it lower the average quality of each post? Additionally, some advertising companies actually profit from bullshit, exaggerating to attract consumers' attention, which has proven quite effective. How should we think about this situation?

weijiexu-charlie commented 4 years ago

Thanks for your presentation. The reading material is really interesting. Do you think that "bullshit" is more likely to occur in interdisciplinary areas where many topics are rarely explored and bullshit is created under the mask of novelty? Do you think it becomes even harder to identify bullshit in such cases?

KenChenCompEcon commented 4 years ago

I literally laughed out loud while reading the paper. The topic is so interesting! As you mentioned, our lives are bombarded with a plethora of bullshit, and I agree with that to some extent. Do you think it is possible to develop a framework explaining how disincentivized people are to deviate from remaining silent, and why they keep advocating this bullshit? I suspect that as bullshit accumulates, the cost of being less bullshit grows as well.

hanjiaxu commented 4 years ago

Thank you very much for presenting! These articles and websites are among the most genuine resources I have read in recent years. I really appreciate the authenticity and rigor of the "On Bullshit" paper. It strongly resonates with my own experience in the U.S.: I have heard the "fake it till you make it" kind of story many times in interview workshops, and I also read the book Bad Blood about the bullshit culture in Silicon Valley. I wonder whether there is a cultural difference in the emphasis on "confidence" and "bullshit"?

SixueLiu96 commented 4 years ago

Thanks for your presentation! I must say this looks very interesting and somewhat exotic to me. I wonder whether this comes down to the way we communicate. You mention that when information is transmitted, it may cause misunderstanding: the receiver may add her own interpretation and misread what the sender meant. Is that unavoidable? This happens so often that we have almost come to take it as normal. I also feel this discussion falls within the scope of philosophy: the progress of the human world is a process of constantly breaking down existing ideas and developing new ones. Is this kind of thing therefore necessary, or even unavoidable, in the development of science and technology?

minminfly68 commented 4 years ago

Thanks for the enlightenment on this topic! It is pretty common for researchers or politicians to manipulate data under the banner of science. As we might all notice, social science is becoming more and more "science", and we are here to learn computational social science, a quite new branch of the social science field.

Coming from a traditional social science school with prior rigorous training in traditional social science, I have always doubted whether social science must become more "science". Martin Wight wrote a famous article in 1960 on why there is no international theory, and I want to use this opportunity to ask: would the rise of "bullshit" and data science discourage developments in the traditional school, until there is finally no more theory, as Wight claimed in 1960? If so, how should we tackle this threat?

YanjieZhou commented 4 years ago

Thanks for your presentation. Misinformation is indeed a serious topic, especially in modern society: a vast social network constructed from linked information, which gives rise to a huge business of providing information to those who need it, while the value of that information remains questionable. How can we obtain information in a more economical way in modern society, especially as graduate students?

yutianlai commented 4 years ago

Thank you for the presentation. There are certainly reasons behind such bullshit, one of which is that creating and adopting such models or theories signals academic innovation and can increase a researcher's academic standing, which is particularly important to one's career. What do you think would be the best way to reduce "bullshit" when it brings benefits to its creators?

nt546 commented 4 years ago

Thanks for your presentation. In science journalism, bullshit often arises from the misinterpretation and misrepresentation of results; for machine learning models, it could arise from the input data. Do you think algorithms themselves could contribute to bullshit? If yes, could you provide an example where an algorithm was not designed with that objective in particular, yet its results are bullshit?