Rationality Lab – Test Yourself!

One of the worst things you can assume is that you are more rational than the average person. Because everybody thinks that, and half of us must be wrong.

In fact, average isn’t very impressive. Contrary to popular belief, being rational doesn’t just mean separating your emotions from your thoughts. It’s about making correct inferences from a set of data and assumptions. A large body of research in cognitive psychology and behavioral economics shows that there are systematic problems in the way we reason, and these problems repeatedly taint the conclusions we draw about the world. Worse, these biases are dangerous because they don’t just lead us to wrong conclusions; they also trick us into feeling very confident that we are right.

The following quiz was given to a bunch of (secular) students at the University of Chicago. It’s a very small sample size, so we’re not going to draw any profound conclusions from it. Rather, we’re going to debrief the lab as a learning experience.

So before you continue reading, take fifteen minutes from your life to take the quiz. Note: you won’t get a grade, but your answers will be submitted to me to look at, although I probably won’t do much looking.

—–

Okay, so we begin.

Question 1:

All religious people are irrational.
All non-Christians are not irrational. 
All religious people are Christians.

Conclusion: Therefore all Christians are irrational.

Is this conclusion correct or incorrect?

It is important to note that the second premise gives us useless information. Knowing properties of non-Christians tells us nothing about the properties of Christians in this case. So we are left with the first and third premises: religious people are a subset of irrational people, and religious people are a subset of Christians. However, this containment does not exclude the possibility that some Christians are non-religious (which, according to Josh Oxley, is also possible in real life). Since the premises leave open that some Christians may not be religious, and hence not irrational, the conclusion does not follow. It is incorrect.
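If the set logic is hard to picture, here’s a minimal sketch in Python of a made-up two-person world (entirely hypothetical, just for illustration) in which all three premises hold but the conclusion fails:

```python
# Hypothetical world: Bob is a religious, irrational Christian;
# Alice is a non-religious, rational Christian.
people = {"Bob", "Alice"}
religious = {"Bob"}
irrational = {"Bob"}
christians = {"Bob", "Alice"}

premise1 = religious <= irrational                                  # all religious people are irrational
premise2 = all(p not in irrational for p in people - christians)    # all non-Christians are not irrational (vacuously true here)
premise3 = religious <= christians                                  # all religious people are Christians
conclusion = christians <= irrational                               # all Christians are irrational?

print(premise1, premise2, premise3, conclusion)  # True True True False
```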

Results: A surprising 40% of respondents did not get this question correct. *Looking at the online results, it doesn’t look very promising either. I’m surprised at how tough this question is.

Question 2:

Our humanist advisor Joshua Oxley has six pairs of bright green socks and six pairs of white socks in his drawer. In complete darkness, and without looking, how many socks must he take from the drawer in order to be sure to get a pair that match?

If there are only two colors of socks, then we only need to take out three socks. If we take out two socks, they either match or they don’t. If they don’t match, then the third sock taken out must match one of the two we initially took out. Again, the additional information (six pairs of each color) often throws people off. We have a hard time discerning what information is relevant and what is not.
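If you want to double-check the pigeonhole reasoning by brute force, here’s a minimal sketch in Python (using the 12 green and 12 white socks from the question):

```python
from itertools import combinations

socks = ["green"] * 12 + ["white"] * 12  # six pairs of each color

def has_matching_pair(draw):
    # Some color appears at least twice in the draw.
    return len(set(draw)) < len(draw)

# Every possible 3-sock draw contains a matching pair...
print(all(has_matching_pair(d) for d in combinations(socks, 3)))  # True

# ...but not every 2-sock draw does.
print(all(has_matching_pair(d) for d in combinations(socks, 2)))  # False
```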

Results: 30% of people got this wrong. There was one who put “13” (which would guarantee a pair that don’t match). There were also odd answers like “2”. *Our online respondents nearly all got this right.

Question 3:

We’re playing a two-player game. The first player says an integer from 1 to 10, and then the players take turns: on each turn, a player says the sum of the previous number and an integer from 1 to 10. This continues until “50” is said. The first person who says “50” wins.

For example, if I say 5, then my opponent can say any number between 6 and 15. Suppose my opponent says 13. Then I can say any number between 14 and 23. And so on…

If you wanted a guaranteed win, would you go first or second? If you go first, what number would you say?

This is a very tough question that requires you to reason backwards. People presented with this question often start with test cases: “What if I say 10 first? And then my opponent can say 11 to 20. So let’s say she says 11… then I can say 12 to 21…” In other words, they reason through the game as if they are playing it in real time and fail to look at the game in a simpler way.

Think about it like this: what would one have to say to get to 50? If someone says a number from 40 to 49, then the opponent can say 50 on the next turn. So I want to force my opponent to say a number from 40 to 49. That means I would have to say 39!

So effectively, the person who says 39 wins the game. Likewise, through the same reasoning, the person who says 28 wins the game. This implies that the person who says 17 wins the game. This implies that the person who says 6 wins the game. So starting first and saying six guarantees a win with correct play.
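Here’s a short backward-induction sketch in Python that recovers the same winning targets (6, 17, 28, 39, and finally 50):

```python
from functools import lru_cache

TARGET = 50

@lru_cache(maxsize=None)
def player_to_move_wins(total):
    """True if the player about to speak (with `total` already said) can force a win."""
    for k in range(1, 11):
        nxt = total + k
        if nxt > TARGET:
            continue
        # Win immediately by saying 50, or by leaving the opponent in a losing position.
        if nxt == TARGET or not player_to_move_wins(nxt):
            return True
    return False

# Going first means moving from a total of 0.
print(player_to_move_wins(0))                                   # True: the first player can force a win
print([k for k in range(1, 11) if not player_to_move_wins(k)])  # [6]: the winning first move
```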

Results: 20% of quiz-takers got this very difficult question right.

Question 4:

An individual has been described by a neighbor as follows: “Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.” Is Steve more likely to be a librarian or a farmer? (Steve lives in our world.)

Credit to Daniel Kahneman’s Thinking, Fast and Slow for this question. It is meant to demonstrate the representativeness heuristic that we use to judge relative probabilities. We tend to think that things that closely resemble what we expect are more probable, and we come to these conclusions with little regard for base rates and general frequency in the population. While librarians may commonly be tidy and meek, there are actually far more farmers in this world than librarians. So it is more probable that Steve is a farmer.

The Secular Alliance people, being the skeptics that they are, questioned whether being named “Steve” and “having a neighbor” could tilt the statistics in favor of the opposite conclusion. I doubt “having a neighbor” would. The name “Steve”, however, does place him in the English-speaking world, which tends to be more developed. Still, a quick search on Google showed that even in America, farmers outnumber librarians about five to one. Does being meek and tidy overcome those odds? I don’t know… but it’s a very interesting issue.
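One way to frame that debate is with odds: posterior odds = prior odds × likelihood ratio. The five-to-one figure is the prior; the likelihood ratio (how much more likely a librarian is than a farmer to fit the “meek and tidy” description) is the number nobody actually knows. Here’s a sketch with a purely hypothetical likelihood ratio, just to show how the pieces combine:

```python
prior_odds_farmer = 5.0   # farmers : librarians, roughly 5 to 1 in the US (the rough figure above)
likelihood_ratio = 3.0    # HYPOTHETICAL: description is 3x more likely for a librarian than a farmer

# Odds in favor of "farmer" after seeing the description:
posterior_odds_farmer = prior_odds_farmer / likelihood_ratio
p_farmer = posterior_odds_farmer / (1 + posterior_odds_farmer)
print(p_farmer)  # ~0.625: with this made-up ratio, "farmer" still wins; a ratio above 5 would flip it
```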

Question 5:

If a random word is taken from an English text, is it more likely that the word starts with a “k”, or that “k” is the third letter?

This question, also from Kahneman’s Thinking, Fast and Slow, demonstrates the availability heuristic. We can more easily think of words that start with “k”, so we often mistakenly think that “k”-initial words are more common in typical English text. However, words with “k” as the third letter are actually the more common ones.
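If you’re curious, here’s a minimal counting sketch you could run on a long English passage of your choosing (the text variable is just a placeholder):

```python
import re

text = "..."  # placeholder: paste any long English passage here

words = re.findall(r"[a-z]+", text.lower())
starts_with_k = sum(1 for w in words if w.startswith("k"))
third_letter_k = sum(1 for w in words if len(w) >= 3 and w[2] == "k")
print(starts_with_k, third_letter_k)  # on a long enough text, expect the second count to be larger
```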

Question 6:

A polygraph exam is correct 96% of the time. We know that 5% of husbands cheat on their wives. Suppose a man just failed a polygraph (in which the man claimed he didn’t cheat). What is the probability that he cheated on his wife?

A typical Bayesian inference problem that most people, unfortunately, get wrong. First of all, the answer is neither 5% nor 96%, because both pieces of information have to be taken into account. The explanation isn’t short, but it isn’t as hard as you might think, so please read about Bayes’s Theorem if you have no idea what I’m talking about.

Let C denote the event of him cheating in the past, and let F be the event of failing the polygraph.

P(C|F) = \frac{P(F\cap C)}{P(F)}

The numerator of this fraction, the probability of the man cheating on his wife and failing the polygraph, is P(F|C)P(C) = 0.96 * 0.05.

Now look at the denominator. Note that there are two ways to fail: either you cheated on your wife and failed, or you didn’t cheat on your wife and still failed (a false positive). The probability of failing, taking into account these two ways, is 0.96 * 0.05 + 0.04 * 0.95.

\frac{P(F\cap C)}{P(F)}= \frac{0.96*0.05}{0.96*0.05 + 0.04*0.95}

If you do the math, you’ll get 55.8%. This polygraph probably isn’t as practically useful as you thought it was.
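As a sanity check on the arithmetic, here’s a minimal sketch that just plugs the numbers from the problem into Bayes’s Theorem:

```python
p_cheat = 0.05               # prior: fraction of husbands who cheat
p_fail_given_cheat = 0.96    # polygraph correctly flags a cheater
p_fail_given_honest = 0.04   # polygraph wrongly flags an honest husband (1 - 0.96)

# Total probability of failing, over both kinds of husbands.
p_fail = p_fail_given_cheat * p_cheat + p_fail_given_honest * (1 - p_cheat)
p_cheat_given_fail = p_fail_given_cheat * p_cheat / p_fail
print(round(p_cheat_given_fail, 3))  # 0.558
```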

Question 7: 

There are 5 coins in a bag. Four of those coins are normal and fair. The fifth coin is double-headed (heads on both sides).

Suppose I randomly choose a coin out of the bag and flip it 5 times. It lands on heads every time. What is the probability that the next flip of that coin will be heads?

This is hands-down my favorite Bayesian inference problem. It’s not very easy, but it illustrates the power of Bayes’s Theorem to give us correct conclusions based on the totality of evidence.

First of all, let’s take a look at the most obvious incorrect answer: 60%. The reasoning goes something like this: you have a 4/5 chance that you got a fair coin, and a 1/5 chance you didn’t. So 4/5 of the time, you’ll get a 50% chance of heads, and 1/5 of the time, you’ll get heads all the time.

So 4/5*0.5 + 1/5 = 2/5 + 1/5 = 3/5 = 60%. This answer completely ignores the fact that we flipped the coin five times and got heads each time.

Let’s look at this rationally. Are we justified in ignoring the “evidence” of those coin flips? What if instead we had flipped it five times and seen at least one tails? Then we would know it was a fair coin, and the probability of the coin coming up heads again would be a simple 50%.

Now what if we flipped it one million times and got heads each time? Wouldn’t we be really really confident that we actually chose the unfair two-headed coin? We would therefore be pretty confident that the next flip would be heads.

So intuitively, we do have to take into account the evidence. We also know that the correct probability is somewhere between 60% and 100% because without any evidence, we start with 60%, and the more evidence we see of continued “heads”, the closer we get to 100% (although we will never get there).

Luckily, Bayes’s Theorem gives us a way of calculating the “correct” probability. (As Eliezer Yudkowsky says, calculating any other probability will send you straight to Bayesian hell, where nothing you expect will happen, even if you are given many prior events.)

Let H denote the event of flipping heads on the next flip. Let H_n denote the event of flipping n heads in a row.

P(H|H_5) = \frac{P(H\cap H_5)}{P(H_5)}

Note that the event (H and H_5) is equivalent to H_6.

P(H|H_5) = \frac{P(H_6)}{P(H_5)}

Now what is the probability of getting n heads in a row given our assumptions? We know that we have a 4/5 chance that we get the fair coin, which lends a (0.5)^n probability, and 1/5 of the time we get the biased coin, which lends a 100% probability.

P(H_n) = \frac{4}{5}*(0.5)^n + \frac{1}{5}*1

Doing some basic calculations, we get:

P(H_6) = 0.2125

P(H_5) = 0.2250

(Decimals are exact.)

P(H|H_5) = \frac{0.2125}{0.2250}

The answer is about 94.4%. We should be fairly confident that we’re going to get heads on the next flip, thanks to the evidence.
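Here’s a short sketch of the same calculation, generalized to n heads in a row, so you can watch the probability creep toward 100% as the streak grows:

```python
def p_n_heads(n):
    """P(H_n): probability of n heads in a row with 4 fair coins and 1 two-headed coin in the bag."""
    return (4 / 5) * 0.5 ** n + (1 / 5) * 1.0

def p_next_head(n):
    """P(next flip is heads | we have already seen n heads in a row)."""
    return p_n_heads(n + 1) / p_n_heads(n)

print(round(p_next_head(5), 4))   # 0.9444, matching the calculation above
print(round(p_next_head(0), 4))   # 0.6: the "no evidence" answer
print(round(p_next_head(20), 4))  # essentially 1 after a long streak
```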

Only two UChicago students got this extremely tough question right. *None of the online people have gotten this right at the time of this post.

Question 8:

There is a large cube made up entirely of individual 1 x 1 x 1 cubes. The large cube measures n x n x n (so it is made up of n^3 small cubes). A fierce thunderstorm passes and knocks out the outer layer of 1 x 1 x 1 cubes (on all six sides). Suppose n is some non-trivial positive integer (n is greater than or equal to 3). What is the shortest algebraic expression (in terms of n) for the number of small cubes that were knocked off by the storm?

This is one of those “don’t think too hard” problems. The inefficient, but not necessarily incorrect, way of approaching this problem is to directly solve it by trying to count all the cubes that fell off. You’ll have to be careful about double or triple counting, but it is doable if you have some solid spatial reasoning skills.

But then you’ll get an expression that’s really weird, and unless you’re a math whiz who can do some awesome factoring and completion of squares (and cubes), you’re unlikely to get the simpler answer below. Instead of thinking how many cubes fell, think of how many are left. We started with n^3 cubes. There are (n-2)^3 cubes left. So the difference should tell us how many cubes fell off.

n^3 - (n-2)^3 = 6n^2 - 12n + 8

The right hand side of this equation is an acceptable answer, although the left side is what I was looking for.
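And here’s a brute-force sketch that checks the formula against a direct count of the outer cubes for a few small values of n:

```python
def outer_cubes_brute_force(n):
    """Count unit cubes with at least one coordinate on the boundary of an n x n x n cube."""
    return sum(
        1
        for x in range(n)
        for y in range(n)
        for z in range(n)
        if 0 in (x, y, z) or (n - 1) in (x, y, z)
    )

for n in range(3, 8):
    assert outer_cubes_brute_force(n) == n**3 - (n - 2)**3 == 6 * n**2 - 12 * n + 8
print("formula checks out for n = 3..7")
```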

Question 9: 

Try to finish all the questions above before you answer this one. Each of the questions above will be scored, with 1 point for each correct answer. Each test taker will then be ranked and placed into a decile. For this question, report the decile you expect to end up in; you will be scored according to the formula below. The closer your reported decile is to your actual decile, the more points you get.

Points = 1 – ((Reported Decile – Actual Decile)^2)/100

I can’t give you a correct answer on this, other than the fact that you’re most likely wrong. How do I know you’re wrong? Because most people get this wrong. Take a look at the results I received so far:

Aside from one outlier, it seems that everybody else reported that they are above average. I don’t even have to start grading these quizzes to know that something is very wrong here. I hate to say it, but at least half of these people are somewhat deluded about their abilities. The results from the UChicago secular students were a bit better: more people placed themselves in a below-average decile, but you can’t help noticing that the lower-scoring participants tended to be more wrong (and too optimistic) about their performance.

I like to end with this question because I think this is the most dangerous part of irrationality. Our minds don’t just reason poorly; they reason poorly while we think we are reasoning well, at least relative to everyone else. If you are truly an expert at something (music, chess, debate, etc.), you’ve probably seen the Dunning-Kruger effect: the strong tendency for incompetent people not to know that they are incompetent (and a corollary tendency for very competent people to doubt their relative competence too much). This is not a condemnation of anyone’s abilities, but in all areas, I think it is important to understand Bertrand Russell’s maxim that “the trouble with the world is that the stupid are cocksure and the intelligent are full of doubt.” If we don’t start rigorously questioning how smart or right or rational we really are, we have no hope of arriving at a true conception of reality, which is what I hope you want.

So what lessons can we draw from this lab? I have a few in mind that you should think about, so you can really change your life and hopefully become more rational (and help me with this journey too).

Learn about rationality. There’s a whole online community dedicated to this topic. There are many skeptics groups and meetups that often talk about this stuff. There’s a ton of cognitive science research on our biases. Read up and learn.

Practice thinking outside the box. Use indirect ways to arrive at an answer. Sometimes problems can’t be solved by using the most obvious way of solving a problem. There are plenty of puzzles and mind exercises on the internet if you want to have fun.

Doubt yourself. Seriously. If you’re confident or certain about something, think about whether it is justified. If you’re very uncertain about other things, think about why.

Learn lots of facts. Yes, rationality isn’t just about reasoning. It’s about making good decisions based on evidence, and the more facts you have, the better your decisions will be. So it might be a good idea to have some sense of the relative frequency of occupations. Other important topics include: What’s most likely to kill you? What are the most common things that lead to happiness? What medicines/treatments actually work?

Learn Bayes’s Theorem and its applications. You can’t afford to ignore something this basic. Otherwise, it’s like learning biology without understanding evolution, or literature without understanding grammar.

Change your mind. We humans practice confirmation bias all the time. You might not think so, but it is really really hard to change your mind. You might want to start practicing changing your mind. Start with the small things: is vanilla really the best? Work your way up. Does capitalism really work? Is atheism really true? The higher up you go, the harder it is to change your mind. Interesting, isn’t it?

Well that about wraps it up. I hope you enjoyed this. So how did you guys do? Comments welcome.
