Blackmill / book-club

Book club weekly notes

Thinking fast and slow: chapters 6-9: Jan 14 #47

Closed: elle closed this issue 4 years ago

elle commented 4 years ago

Aiming to read:

Chapter 6: Norms, surprises, and causes
Chapter 7: A machine for jumping to conclusions
Chapter 8: How judgements happen
Chapter 9: Answering an easier question

MC: @antoinemacia
Notes: @mcgain

See you 12pm Tuesday, Jan 14th @ https://whereby.com/blackmill

Ping gday@blackmill.co if you want a calendar invite and access to the Slack beforehand.

elle commented 4 years ago

6: Norms, surprises, and causes

Assessing normality

System 1 maintains and updates our world view: a model of what is around us and what is normal. This model determines how we interpret the present and what we expect from the future.

A surprise is an indication of how we perceive our world and what we expect from it.

2 types of surprises:

Repetition of similar events => makes them a norm rather than a surprise. Very little repetition is needed for a new experience to feel normal.

A surprise signals an abnormality in our model of the world.

System 1, which understands language, has access to norms of categories, which specify the range of plausible values as well as the most typical cases.

Seeing causes and intentions

Example: you spend the day sightseeing and later notice your wallet is missing. Is the cause pickpocketing, or did you forget the wallet somewhere?

Rules of associative coherence - we come up with plausible stories to explain what happened

Side note: somewhat explains why we make assumptions

Your mind is ready and even eager to identify agents, assign them personality traits and specific intentions, and view their actions as expressing individual propensities.

--

The psychologist Paul Bloom, writing in The Atlantic in 2005, presented the provocative claim that our inborn readiness to separate physical and intentional causality explains the near universality of religious beliefs. He observes that “we perceive the world of objects as essentially separate from the world of minds, making it possible for us to envision soulless bodies and bodiless souls.”

--

people are prone to apply causal thinking inappropriately, to situations that require statistical reasoning.

7: A machine for jumping to conclusions

Jumping to conclusions is efficient if the conclusions are likely to be correct and the costs of an occasional mistake acceptable, and if the jump saves much time and effort. Jumping to conclusions is risky when the situation is unfamiliar, the stakes are high, and there is no time to collect more information.

When no recent event comes to mind, distant memories govern.

Conscious doubt is not in the repertoire of System 1; it requires maintaining incompatible interpretations in mind at the same time, which demands mental effort. Uncertainty and doubt are the domain of System 2.

A bias to believe and confirm

Gilbert proposed that understanding a statement must begin with an attempt to believe it: you must first know what the idea would mean if it were true. Only then can you decide whether or not to unbelieve it.

When System 2 is busy, System 1 is gullible and biased to believe almost anything, because the system in charge of doubting is otherwise occupied.

Associative memory contributes to a general confirmation bias.

Positive test strategy is how System 2 tests a hypothesis: a deliberate search for confirming evidence. Contrary to what philosophers of science advise (testing hypotheses by trying to refute them), we usually look for data that are compatible with the beliefs we already hold. System 1's uncritical acceptance of suggestions exaggerates this, and can lead us to the wrong ideas.

Exaggerated emotional coherence (halo effect)

The tendency to like (or dislike) everything about a person, even things that haven't been observed.

the adjective stubborn is ambiguous and will be interpreted in a way that makes it coherent with the context.

Side note:

Decorrelate error:

I graded students’ essay exams in the conventional way. I would pick up one test booklet at a time and read all that student’s essays in immediate succession, grading them as I went. I would then compute the total and go on to the next student... I adopted a new procedure. Instead of reading the booklets in sequence, I read and scored all the students’ answers to the first question, then went on to the next one.

Side note: what Be Applied is trying to counteract
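Side note: the re-ordered procedure amounts to swapping the loop order when marking. A minimal sketch of my own (not from the book), with a hypothetical `grade_answer` standing in for the human grader:

```python
# Grade answers question by question across students, rather than reading each
# student's whole booklet in one sitting, so the impression left by one answer
# can't spill onto the next (decorrelating the errors).
booklets = {
    "student_a": ["essay on Q1 ...", "essay on Q2 ..."],
    "student_b": ["essay on Q1 ...", "essay on Q2 ..."],
}
NUM_QUESTIONS = 2

def grade_answer(answer: str) -> int:
    # Stand-in for a human judgement; any scoring rule works for the sketch.
    return min(len(answer), 10)

scores = {student: [0] * NUM_QUESTIONS for student in booklets}

# Outer loop over questions, inner loop over students (the decorrelated order),
# instead of student-by-student as in the conventional procedure.
for q in range(NUM_QUESTIONS):
    for student, answers in booklets.items():
        scores[student][q] = grade_answer(answers[q])

totals = {student: sum(marks) for student, marks in scores.items()}
print(totals)
```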

Wisdom of crowds:

Exercise: estimate the number of pennies in a jar. An individual will usually over- or under-estimate the number. But the average of all the guesses tends to be quite accurate.

Independent errors (in the absence of systematic bias) tend to average to zero. If the group is biased, then the average will be biased.
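Side note: a tiny simulation (mine, not from the book; the jar size and noise levels are made up) shows why independent, unbiased errors wash out in the average while a shared bias does not:

```python
import random

random.seed(1)
TRUE_COUNT = 800     # hypothetical number of pennies in the jar
N_GUESSERS = 1000

# Independent errors: each guess is the true count plus unbiased individual noise.
independent = [TRUE_COUNT + random.gauss(0, 150) for _ in range(N_GUESSERS)]

# Systematic bias: everyone underestimates by ~25% before their individual noise.
biased = [0.75 * TRUE_COUNT + random.gauss(0, 150) for _ in range(N_GUESSERS)]

mean = lambda xs: sum(xs) / len(xs)
print(f"true count:            {TRUE_COUNT}")
print(f"average (independent): {mean(independent):.0f}")  # lands close to 800
print(f"average (shared bias): {mean(biased):.0f}")       # lands close to 600
```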

The principle of independent judgements:

To derive the most useful information from multiple sources of evidence, you should always try to make these sources independent of each other.

--

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist.

--

It is the consistency of the information that matters for a good story, not its completeness. Indeed, you will often find that knowing little makes it easier to fit everything you know into a coherent pattern... It explains why we can think fast, and how we are able to make sense of partial information in a complex world.

Biases:

8: How judgements happen

Biological roots of the rapid judgements we make: how safe is this situation? Are there any threats? Is the other person friendly or hostile?

Todorov then compared the results of the electoral races to the ratings of competence that Princeton students had made, based on brief exposure to photographs and without any political context. In about 70% of the races for senator, congressman, and governor, the election winner was the candidate whose face had earned a higher rating of competence... Surprisingly (at least to me), ratings of competence were far more predictive of voting outcomes than ratings of likability.

--

Evaluating people as attractive or not is a basic assessment. You do that automatically whether or not you want to, and it influences you.

Judgement heuristic: for example, a negative emotional response...

As expected, the effect of facial competence on voting is about three times larger for information-poor and TV-prone voters than for others who are better informed and watch less television

Sum-like variables: System 1 represents categories by prototypes and averages, so it deals poorly with sums. Hence ignoring the size of the sample, and complete neglect of quantity in an emotional context.

Intensity matching: using a scale of colours to match emotions.

The mental shotgun: "the control over intended computations is far from precise: we often compute much more than we want or need."

9: Answering an easier question

why we have intuitive judgments about many things that we know little about.

You like or dislike people long before you know much about them; you trust or distrust strangers without knowing why...

Substituting questions

A simple account of how we generate intuitive opinions on complex matters: if a satisfactory answer to a hard question is not found quickly, System 1 finds a related, easier question and answers that one instead, and the answer is then fitted back onto the original question.

--

intensity matching, is available to solve that problem. Recall that both feelings and contribution dollars are intensity scales. I can feel more or less strongly about dolphins and there is a contribution that matches the intensity of my feelings.

--

Of course, System 2 has the opportunity to reject this intuitive answer, or to modify it by incorporating other information. However, a lazy System 2 often follows the path of least effort and endorses a heuristic answer without much scrutiny of whether it is truly appropriate.

Mood heuristics for happiness:

Target question: How happy are you with your life these days?
Heuristic question substituted: What is my mood right now?

Asking how many dates you had last month is a form of priming, but the two questions are not synonymous... Any emotionally significant question that alters a person’s mood will have the same effect.

The affect heuristic:

When people let their likes and dislikes determine their beliefs about the world.

Your emotional attitude to such things as irradiated food, red meat, nuclear power, tattoos, or motorcycles drives your beliefs about their benefits and their risks. If you dislike any of these things, you probably believe that its risks are high and its benefits negligible. The primacy of conclusions does not mean that your mind is completely closed and that your opinions are wholly immune to information and sensible reasoning.

mcgain commented 4 years ago

Notes of our conversations

Some people are getting confused by the System 1/System 2 terminology, so we are defining them as: System 1 = Automatic, System 2 = Effortful.

We talked about how we do sometimes think about coincidence statistically, and some surprises shouldn’t actually be surprising. This book has primed us to think more statistically.

Antoine is deliberately being skeptical about all the author's tricks, so he is no longer tricked by them.

Paul Bloom's theory of the separation of self and body raised questions for Lachlan; he feels the book is missing a deeper explanation. To others it rings true and seems self-evident, and a deeper grounding in the philosophy of mind could help.

We talked about free will, the definition of the mind and how they relate to the book. Richard explained a bunch of different theories of mind that destroy our typical notions of self, giving examples of lobe separation, conjoined twins, and the movement of Vietnamese cyclists "in flock".

Everyone noticed that these chapters seem to reinforce the previous ones. Not much new material is introduced, but more everyday experiences are tied into Systems 1 and 2 in the same way as previous chapters.

Antoine posed a question about whether different mediums should be scrutinised differently because they affect us differently. We concluded that everything, especially the book itself, must be questioned.

Richard theorised on evil ways to use priming and the orders of questionnaires to get your own way.