Category Archives: Rationality
I admit that one of the most annoying things about philosophical discourse is this assumption that there exists this sacred, unbreakable distinction between the “subjective” and the “objective”. Whether we are talking about things like beauty, morality, value, or even probability, there is a tendency to think of the subjective as that realm where it is merely opinion or feelings, fleeting as they may be, and the objective as that realm which is eternally and cosmically really really true no matter what you think or say.
I think these poor definitions (or maybe misconceptions of the definitions) put unnecessary restrictions on our thinking, especially if all we want is a coherent and satisfying framework for believing that the things we care about have value even when we aren’t here. If, on the other hand, you aren’t satisfied with anything but a cosmic, utterly transcendent, nearly magical idea of “value”, then I can’t help you.
As a utilitarian, I like to think of value as determined by, well, utility. But doesn’t utility depend on beings to be in existence, and what if those beings aren’t there anymore? Isn’t value completely subjective?
Let’s do a thought experiment. Suppose we got to explore a new planet in some far away solar system, and we discovered that the inhabitants of this planet had all disappeared or died, leaving their valuables like their cars and dishwashers in their houses. Are we to say that these artifacts have no value (apart from artifact value) even though we don’t know how to drive alien cars and their dishwashers are worse than ours? Does the nonexistence of the creatures affect the value?
I think we can safely say that value definitely depends on how things once related to the subjective lives of conscious creatures. If there were no conscious creatures ever, the Universe would just be a barren place, and the idea of value just wouldn’t have coherent meaning.
I think value is not merely what we think or feel. When I say a car (or something abstract) has value, I don’t mean that I like cars and you should too. I mean that I recognize that conscious creatures do (or once did) like cars, and that this object-person relationship that emerges from the consideration of utility in others is something I recognize. It’s the difference between saying, “I like chocolate ice cream” and “I observe that chocolate ice cream has increased the utility of many people and thus I recognize that it has value, especially given that I think there’s something objectively real in chocolate ice cream (its sugary, creamy awesomeness in the form of certain chemical arrangements).”
After all, the ability to go beyond one’s own mind and recognize others is the basis of science, morality, and the typical ways Bayesians converge towards truth, or at least agreement.
So yes, I think value is subjective, but it isn’t as subjective as you subjectively think.
Often, discussions with religious people turn into accusations of the atheist’s supposed “foolishness”. They attack the arrogant presupposition that humans can know better than God through their “weak” and “broken” faculties like reason. True Wisdom ™, or so it seems, comes only from the Divine, which transcends all possible human knowledge.
Yet, for this argument to be consistent, one would first have to demonstrate the superiority of a specific person or group of persons in the realm of epistemology. Indeed, this claim isn’t even consistent with most religious viewpoints. If the atheist can be wrong in the eyes of God, then so can the religious person. Saying otherwise is to build a false hierarchical paradigm in which certain people, by privilege of belonging to a certain group, win the intellectual argument by default.
Even more importantly, arguing this way is a hasty dismissal of the other side. It undermines effective discourse and begs the question. It defeats the purpose of dialog and dries up further conversation.
Commitment to reason is a prerequisite for fruitful discussion. And contrary to popular belief, the use of rationality doesn’t mean we switch to a purely emotionless and calculating brain mode. It at the very least means we treat each other in conversation seriously and with a measure of charity.
Of course, if you don’t like reading, you can check out this wonderful inspirational video on how commitment to reason can take us far and wide.
P.S. Starting classes while working a full-time job. Updates will be sporadic from now on.
My previous post about having a humble attitude towards inspiration has sparked some criticism, some of which is from Dianoilogos, who writes that “inspiration is for everyone (not just for the gifted few)”:
This is concerning to me. Yes, not everyone has the opportunity to truly appreciate a sunrise, listen to beautiful music, or revel in the sights and sounds of nature. That’s true. But saying inspiration is a “privilege” just doesn’t sit well.
Inspiration is for everyone. But if we want to ensure everyone has equal access to it, we need to make sure the society is structured in a way everyone can adequately avail themselves of that blessing and the opportunity to appreciate the world in all its wonders regardless of whether they’re rich or poor, geniuses or not, etc.
Looking back, I think I chose the wrong ordering of words. No, having hope and being inspired doesn’t and shouldn’t make you an elite. It doesn’t make you inherently better than anyone else. Yes, like the many good things in life, inspiration is truly for everyone.
I still stand by my claim that certain privileges make access to inspiration easier, and that we should recognize that. More importantly, we all agree that the world has many wonders, and we should work towards a society that gives all people a chance to appreciate these wonders.
Lastly, this is a good reminder that constructive and honest criticism of vocabulary, tone, and content help us make our arguments better, and provide us with opportunities to be more rational people.
I realized that there’s a thing called “missing the point” about “missing the point”. After all, I kind of knew that my defense about the pure awesomeness of reductionism would be misconstrued into something much much worse. (But who knew it would be my friend Andrew Tripp, of the Depaul Alliance for Free Thought?)
Such is the reality of internet discussion. So let’s clear up some of the charges.
For instance, Mr. Mei seems to hold the belief that at some point that I said reductionism, namely the understanding that everyone and everything is made up of atoms, is a horrible evil viewpoint that causes children to have nightmares. Or something.
Actually, not quite. As even the original author of my chosen definition of reductionism has said, such use of diction (i.e., “evil evil belief”) is for comic effect (to satirize a culture that seems to consistently misunderstand the concept). Also, I have to point out that only a passing reference was made to Andrew’s post, and that the post itself was not an ad hominem, or even a specific reply, to what Andrew wrote. (If anything, I was more interested in the replies of a bunch of philosophically-minded Christian Facebook friends). But let’s move on.
I would love, LOVE for him or anyone else to show me where I said that.
By posing this question, it seems that Andrew doesn’t seem to understand what I thought the most distasteful part of his original post was, or what I’m criticizing and not criticizing. And the pride by which he innocently quotes Dan Fincke again in his reply (which is what I was objecting to) seems to confirm my point. Here’s the bulk of the relevant quotation, with my added bolding for emphasis:
There is a tendency to talk like the only level of explanation that is at all meaningful is on the physics level. Now, of course everything in our experience is ultimately physical and made up of atoms, which are further composed of subatomic particles. But that does not mean that atoms are the only level on which true things can be said. Those atoms combine in remarkably complex patterns that give rise to the objects of study in chemistry, biology, psychology, and sociology. Those emergent patterns are real. It’s not like in biology we say, “There’s no such thing as evolution because this organism and its descendants are really still just patterns of atoms”. The differences in the patterns of atoms that make up one organism and its offspring are significant. They are worth saying there is something new evolved in nature when an organism is distinct enough in the patterns of its properties from its ancestors. These are real subjects of study. Real differentiations in nature. It would be stupidity to judge those patterns as somehow artificial simply because there is a way to conceptualize the organisms in purely atomic terms that pay no attention to the features that are interesting on the biological level.
The biggest point here is that both philosopher (Dan) and social activist (Andrew), by writing this piece or quoting it as if it were accurate, seem to really miss the point about the mere reality of reductionism. For one, they seem to think that reductionism means “judging patterns as somehow artificial”, and that somehow thinking about things atomically destroys features that are interesting on higher levels. However, that doesn’t seem to be what reductionism actually is, which is incidentally covered in my Number Three point about how “maps are really important”. I don’t just think this is bad philosophy; it’s also quite misleading to a general audience.
But I think Andrew’s beef with the overall secular community is the following:
How does this work with reductionism, then? Well, take that viewpoint and mix it with the sorts of upper-crust white academics who sit at the top of the atheist movement…
…Thus, we have the movement’s near-total lack of engagement with issues of race, gender, and institutional violence.
This is what I refer to when I criticize over-the-top reductionism. Not reductionism itself, but the ability of it to intersect with old notions of color-blindness to allow otherwise rational atheists to ignore issues affecting marginalized communities.
Now this is just confusing. After proudly quoting an entire Dan Fincke paragraph about the mistaken idea that “reductionism” is about ignoring the importance of human-level maps, and after writing an entire post using the word “reductionism” straight-up, only by the end of Andrew’s reply do we get somewhat of a conditional statement: so we’re not talking about true reductionism but (fake?) over-the-top reductionism.
And hence my point on Number Two, lampooning how people think atheism causes mass murderers, or how Darwinism leads to racism. Somehow Andrew also missed the point here: the thing is, if people want to offer a critique of the misuse of a particular concept, they have to be careful.
Just as eugenicists don’t deserve to be described as “Darwinists” (and in my humble opinion “social Darwinist” is really really pushing it), people who think reductionism implies race-blindness (or any other normative position, implicit or explicit) don’t deserve to be described as “reductionists”. At the very least, we should use language that makes this distinction clear, not wobble around with confused definitions and popular misconceptions. After all, we don’t say of 1940’s America that too many (white) scientists “have an annoying tendency to be Darwinists“.
And to reiterate my points so that everything is really really clear.
1) Human-level maps are really really important. It means that our mental conceptions matter. It means that we have to work towards social justice. It means that race, gender, sexual orientation, political institutions, and the issues and correlations that tie everything together really matter.
2) You have to be careful about what you say is “real”. Especially when you’re talking about reductionism, saying that the patterns of “biology” are real and that you can’t describe it on a lower-level is really bad philosophy. Also don’t fall for the Mind Projection Fallacy, and wear lots of sunscreen.
3) Reductionism is still awesome, beautiful, and unbelievably cool. I think I covered this in my original post.
4) All we need is love. I dunno. I just felt like writing that.
Reductionism <ri-duhk-shuh-niz-uhm>, n. The evil evil belief that people are made of cells. And cells are made of atoms. And atoms are made of quarks and leptons. And everything is quantum configurations in something or other.
For many, the “evil evil” part throws them off, way off. To them, it’s not just scary (like death). It seems morally repugnant. After all, do we start treating each other like we’re just globs of goo? Is there not something “real” in the make-up of a human being, something that goes beyond just an assortment of cells? Does it lead to erasure and marginalization of peoples, or the continuation of male, white hierarchies?
Are you telling me that everything I see above, its beauty and truth and essence included, is all reducible to smaller things?
Well here are a few points to clear up our Hollywood, pop culture idea of this scary, scary idea. So here we go:
1) Reductionism is true.
If anything, you should accept reductionism because it’s overwhelmingly likely to be true. The alternative would mean that there’s something in us or about us that isn’t material or physical, which is magic. And if it’s magic, it isn’t an explanation. And you shouldn’t believe it.
2) Reductionism isn’t a normative claim.
Being a reductionist doesn’t mean you should or shouldn’t be a liberal or a conservative. It doesn’t say if religion is good or bad (although it suggests that most religions are untrue). It doesn’t say anything about how you should or shouldn’t treat other people. So claims about reductionism leading to social ills are in the same approximate category as claims about atheism leading to the Holocaust or claims about Darwinism leading to eugenics.
3) The dichotomy between Map and Territory does not mean that maps aren’t important.
Reductionism doesn’t say that our moral and ethical systems are worthless. It doesn’t say that biology is useless because it’s ultimately physics (even though biology IS physics). In fact, we must acknowledge that our human-level maps are incredibly important. And no, it isn’t just science that’s important. Philosophy, sociology, anthropology, literature, art, history. Juggling, piano-playing, fire-breathing, skydiving. All of it. Important, and invaluable to us all.
4) Reductionism is beautiful, in every way.
Think about it. You’re made of atoms. And the atoms (or other smaller things) make up entirely what you are.
And the entire immensity of the Universe, its happenings and events, which go on on an unimaginably massive scale are also made up of smaller things, on a scale unimaginably small. The atoms that make up who you are in every way–mentally, physically, consciously, biologically, psychologically–all of it comes from the explosions of unbelievably large stars millions of miles in diameter, explosions that were made up of unbelievably small and subtle quantum events on scales far smaller than less than one millionth of a centimeter.
Correct me if I’m wrong, but this view of the world is amazing. It means that there’s no magic, no fuzziness, no blurriness or supernatural nonsense in our model of reality. It means that the “magic” that we feel in our experience is great, because we are able to understand it on a more general (but less information rich) level. It means that somehow we’ve evolved a remarkable and intricate consciousness that can abstract from complex information, that can reason and discover, and that can develop amazing ethical systems to make the world a better place. Reductionism gives us a beautiful perspective to understand reality.
As Carl Sagan said, “the beauty of a living thing is not the atoms that go into it, but the way those atoms are put together.”
You can see why on Skepticon 5’s page for the new lineup of speakers. (Okay, fine, I realize comparing myself, who was merely quoted, to the speakers is a bit too much.) But this is probably as famous as I’m going to get. So let me enjoy this moment.
If you watch this inspirational clip and I don’t tell you that it is from Inherit the Wind, you might guess that it is from an angry “New Atheist” or something. Nope. The dialogue was recorded in 1960.
Atheism isn’t cynical. It isn’t a state of despair. Atheism has been and always will be beautiful, inspiring, and life-affirming.
One of the worst things you can assume is that you are more rational than the average person. Because everybody thinks that, and half of us must be wrong.
In fact, average isn’t very impressive. Contrary to popular belief, being rational doesn’t just mean separating your emotions from your thoughts. It’s about making correct inferences from a set of data and assumptions. Lots of research in cognitive psychology and behavioral economics shows that there are systematic problems in the way we reason, and these problems repeatedly taint the conclusions we draw about the world. Worse, these biases are dangerous because they not only give us wrong conclusions, but they also trick us into feeling very confident that we are right.
The following quiz was given to a bunch of (secular) students at the University of Chicago. It’s a very small sample size, so we’re not going to draw any profound conclusions from it. Rather, we’re going to debrief the lab, as a learning experience.
So before you continue reading, take fifteen minutes from your life to take the quiz. Note: You won’t get a grade, but they’ll be submitted to me to look at, although I probably won’t do much looking.
Okay, so we begin.
All religious people are irrational.
All non-Christians are not irrational.
All religious people are Christians.
Conclusion: Therefore all Christians are irrational.
Is this conclusion correct or incorrect?
It is important to note that the second premise gives us useless information. Knowing properties of non-Christians tells us nothing about the properties of Christians in this case. So we are left with the first and third premises: religious people are a subset of irrational people, and religious people are a subset of Christians. However, this containment does not exclude the possibility that some Christians are non-religious (which according to Josh Oxley is also possible in real life). Since not all Christians are religious, the conclusion is incorrect.
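If you’d rather check the logic with code than with Venn diagrams, a few lines of Python can enumerate every possible type of person and hunt for a counterexample (the encoding and variable names here are mine, purely for illustration):

```python
from itertools import product

# Each possible "type" of person: (religious, christian, irrational)
people = list(product([False, True], repeat=3))

def premises_ok(p):
    religious, christian, irrational = p
    if religious and not irrational:   # 1) all religious people are irrational
        return False
    if not christian and irrational:   # 2) all non-Christians are not irrational
        return False
    if religious and not christian:    # 3) all religious people are Christians
        return False
    return True

# A counterexample to the conclusion: a Christian who is NOT irrational,
# yet consistent with every premise.
counterexamples = [p for p in people if premises_ok(p) and p[1] and not p[2]]
print(counterexamples)  # [(False, True, False)]: a rational, non-religious Christian
```

The syllogism fails precisely because a rational, non-religious Christian is consistent with all three premises.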
Results: A surprising 40% of respondents did not get this question correct. *Looking at the online results, it doesn’t look very promising either. I’m surprised at how tough this question is.
Our humanist advisor Joshua Oxley has six pairs of bright green socks and six pairs of white socks in his drawer. In complete darkness, and without looking, how many socks must he take from the drawer in order to be sure to get a pair that match?
If there are only two colors of socks, then we only need to take out three socks. This is because if we take out two socks, they either match or they don’t. If they don’t match, then the third sock taken out must match one of the two we initially took out. Again, the additional information often throws people off. We have a hard time discerning what information is relevant and what is not.
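Here’s a quick sanity check of the pigeonhole argument in Python (a brute-force sketch; the helper names are my own):

```python
def has_matching_pair(drawn):
    # a matching pair exists when some colour appears at least twice
    return len(drawn) > len(set(drawn))

def guaranteed_match(k):
    """True if ANY draw of k socks from 12 green + 12 white must contain a pair."""
    # a draw of k socks is fully described by how many of them are green,
    # capped at 12 per colour
    return all(has_matching_pair(["green"] * g + ["white"] * (k - g))
               for g in range(max(0, k - 12), min(k, 12) + 1))

print(guaranteed_match(2))  # False: one green + one white has no matching pair
print(guaranteed_match(3))  # True: three socks in two colours must repeat a colour
```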
Results: 30% of people got this wrong. There was one who put “13” (which would guarantee a pair that don’t match). There were also odd answers like “2”. *Our online respondents nearly all got this right.
We’re playing a two-player game. The rule is that one player says an integer from 1 to 10, and the other person says a number which is the sum of the previous number and an integer 1 to 10. The players continue this until “50” is said. The first person who says “50” wins.
For example, if I say 5, then my opponent can say any number between 6 and 15. Suppose my opponent says 13. Then I can say any number between 14 and 23. And so on…
If you wanted a guaranteed win, would you go first or second? If you go first, what number would you say?
This is a very tough question that requires you to reason backwards. People presented with this question often start with test cases: “What if I say 10 first? And then my opponent can say 11 to 20. So let’s say she says 11… then I can say 12 to 21…” In other words, they reason through the game as if they are playing it in real time and fail to look at the game in a simpler way.
Think about it like this. What would one have to say to get to 50? If someone says 40-49, then the opponent could say fifty on the next turn. So I want to make my opponent say 40-49. That would mean I would have to say 39!
So effectively, the person who says 39 wins the game. Likewise, through the same reasoning, the person who says 28 wins the game. This implies that the person who says 17 wins the game. This implies that the person who says 6 wins the game. So starting first and saying six guarantees a win with correct play.
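The backward reasoning can also be automated. A short Python sketch (my own encoding of the game) finds the losing totals by backward induction:

```python
from functools import lru_cache

TARGET = 50

@lru_cache(maxsize=None)
def is_winning(total):
    """True if the player about to move from `total` can force saying 50."""
    return any(total + add == TARGET or not is_winning(total + add)
               for add in range(1, 11) if total + add <= TARGET)

# Totals that are losing for the player about to move
losing_totals = [t for t in range(1, TARGET) if not is_winning(t)]
print(losing_totals)     # [6, 17, 28, 39]

# The first player wins by opening with a number that leaves a losing total
winning_openings = [n for n in range(1, 11) if not is_winning(n)]
print(winning_openings)  # [6]
```

The losing totals step down by 11 each time, which is exactly the 39, 28, 17, 6 chain from the argument above.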
Results: 20% of quiz-takers got this very difficult question right.
An individual has been described by a neighbor as follows: “Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.” Is Steve more likely to be a librarian or a farmer? (Steve lives in our world.)
Credit to Daniel Kahneman’s Thinking Fast and Slow for this question. This question is meant to demonstrate the representativeness heuristic that we use to judge relative probabilities. We tend to think that things that have resemblances close to what we expect are more probable, and we come to these conclusions without much regard for base rates and general frequency in the population. While librarians may be commonly tidy and meek, there are actually a lot more farmers in this world than librarians. So it is more probable that Steve is a farmer.
The Secular Alliance people, being the skeptics that they are, questioned whether being named “Steve” and “having a neighbor” could tilt the statistics in favor of the opposite conclusion. I doubt “having a neighbor” would. The name “Steve”, however, places him in the English-speaking world, which tends to be more developed. Still, a quick search on Google showed that even in America, farmers outnumber librarians about five to one. Does being meek and tidy overcome these odds? I don’t know… but it’s a very interesting issue.
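To see how strongly base rates matter, here’s a toy Bayes calculation in Python. The likelihoods are hypothetical numbers I made up for illustration; only the 5-to-1 farmer ratio comes from the discussion above:

```python
# Hypothetical likelihoods (made up for illustration, not real survey data):
# suppose the "meek and tidy" description fits 30% of librarians
# but only 10% of farmers.
p_desc_given_lib, p_desc_given_farmer = 0.30, 0.10

# Base rates from the rough 5-to-1 ratio of farmers to librarians
p_lib, p_farmer = 1 / 6, 5 / 6

# Bayes' theorem: P(librarian | description)
posterior_lib = (p_desc_given_lib * p_lib) / (
    p_desc_given_lib * p_lib + p_desc_given_farmer * p_farmer)
print(round(posterior_lib, 3))  # 0.375: farmer still more likely despite the description
```

Even a description three times more typical of librarians doesn’t overcome a 5-to-1 base rate.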
If a random word is taken from an English text, is it more likely that the word starts with a “k”, or that “k” is the third letter?
This question, also from Kahneman’s Thinking Fast and Slow demonstrates the availability heuristic. We can more easily think of words that start with “k”, so we often mistakenly think that “k” words are more common in the typical English text. However, words that have the third letter as “k” are more commonly used.
A polygraph exam is correct 96% of the time. We know that 5% of husbands cheat on their wives. Suppose a man just failed a polygraph (in which the man claimed he didn’t cheat). What is the probability that he cheated on his wife?
A typical Bayesian inference problem that most people, unfortunately, get wrong. First of all, the answer is neither 5% nor 96% because both sets of information have to be taken into account. The explanation for this is not that short, but it isn’t as hard as you think, so please read about Bayes’s Theorem if you have no idea what I’m talking about.
Let C denote the event of him cheating in the past, and let F be the event of failing the polygraph.
By Bayes’s Theorem, P(C | F) = P(F | C) P(C) / P(F). The numerator of this fraction, the probability of the guy cheating on his wife and failing, is 0.96 × 0.05.
Now look at the denominator. Note that there are two ways to fail: either you did cheat on your wife and failed, or you didn’t cheat on your wife and failed anyway. The probability of failing, taking into account these two ways, is 0.96 × 0.05 + 0.04 × 0.95.
If you do the math, you’ll get 55.8%. This polygraph probably isn’t as practically useful as you thought it was.
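The arithmetic is easy to reproduce; here’s a minimal Python version:

```python
# Bayes' theorem: P(cheat | fail) = P(fail | cheat) * P(cheat) / P(fail)
p_cheat = 0.05               # prior: 5% of husbands cheat
p_fail_given_cheat = 0.96    # the polygraph is correct 96% of the time
p_fail_given_honest = 0.04   # so honest men fail 4% of the time

# Total probability of failing: cheaters who fail plus honest men who fail
p_fail = p_fail_given_cheat * p_cheat + p_fail_given_honest * (1 - p_cheat)
p_cheat_given_fail = p_fail_given_cheat * p_cheat / p_fail
print(round(p_cheat_given_fail, 3))  # 0.558
```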
There are 5 coins in a bag. Four of those coins are normal and fair. The fifth coin is double-sided with heads.
Suppose I randomly choose a coin out of the bag and flip it 5 times. It lands on heads every time. What is the probability that the next flip of that coin will be heads?
This is hands-down my favorite Bayesian inference problem. It’s not very easy, but it illustrates the power of Bayes’s Theorem to give us correct conclusions based on the totality of evidence.
First of all, let’s take a look at the most obvious incorrect answer: 60%. The reasoning goes something like this: you have a 4/5 chance that you got a fair coin, and a 1/5 chance you didn’t. So 4/5 of the time, you’ll get a 50% chance of heads, and 1/5 of the time, you’ll get heads all the time.
So 4/5*0.5 + 1/5 = 2/5 + 1/5 = 3/5 = 60%. This answer completely ignores the fact that we flipped the coin five times and got heads each time.
Let’s look at this rationally. Are we justified in ignoring the “evidence” of those coin flips? What if instead we flipped it five times and saw at least one tails? Then we would know it was a fair coin, and the probability of the coin coming up heads again is a simple 50%.
Now what if we flipped it one million times and got heads each time? Wouldn’t we be really really confident that we actually chose the unfair two-headed coin? We would therefore be pretty confident that the next flip would be heads.
So intuitively, we do have to take into account the evidence. We also know that the correct probability is somewhere between 60% and 100% because without any evidence, we start with 60%, and the more evidence we see of continued “heads”, the closer we get to 100% (although we will never get there).
Luckily, Bayes’s Theorem gives us a way of calculating the “correct” probability. (As Eliezer Yudkowsky says, calculating any other probability will send you straight to Bayesian hell, where nothing you expect will happen, even if you are given many prior events.)
Let H denote the event of flipping heads on the next flip. Let H_n denote the event of flipping n heads in a row.
Note that the event (H and H_5) is equivalent to H_6.
Now what is the probability of getting n heads in a row given our assumptions? We know that we have a 4/5 chance that we get the fair coin, which lends a (1/2)^n probability, and 1/5 of the time we get the biased coin, which lends a 100% probability. So P(H_n) = (4/5)(1/2)^n + 1/5.
Doing some basic calculations, we get:

P(H | H_5) = P(H_6) / P(H_5) = [(4/5)(1/2)^6 + 1/5] / [(4/5)(1/2)^5 + 1/5] = 0.2125 / 0.225 = 17/18

(Decimals are exact.)
The answer is about 94.4%. We should be fairly confident that we’re going to get a heads on the next turn, thanks to the evidence.
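If you want to check the arithmetic yourself, exact fractions make it painless (a small Python sketch; the function name is my own):

```python
from fractions import Fraction

def p_heads_run(n):
    """P(H_n): n heads in a row, mixing the fair coin (4/5) and the two-headed coin (1/5)."""
    return Fraction(4, 5) * Fraction(1, 2) ** n + Fraction(1, 5)

# P(next flip is heads | 5 heads observed) = P(H_6) / P(H_5)
answer = p_heads_run(6) / p_heads_run(5)
print(answer, float(answer))  # 17/18 0.9444444444444444
```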
Only two UChicago students got this extremely tough question right. *None of the online people have gotten this right at the time of this post.
There is a large cube made up entirely of individual 1 x 1 x 1 cubes. The large cube measures n x n x n (so it is made up of n^3 small cubes). A fierce thunderstorm passes and knocks out the outer layer of 1 x 1 x 1 cubes (on all six sides). Suppose n is some non-trivial positive integer (n is greater than or equal to 3). What is the shortest algebraic expression (in terms of n) for the number of small cubes that were knocked off by the storm?
This is one of those “don’t think too hard” problems. The inefficient, but not necessarily incorrect, way of approaching this problem is to directly solve it by trying to count all the cubes that fell off. You’ll have to be careful about double or triple counting, but it is doable if you have some solid spatial reasoning skills.
But then you’ll get an expression that’s really weird, and unless you’re a math whiz who can do some awesome factoring and completion of squares (and cubes), you’re unlikely to get the simpler answer below. Instead of thinking how many cubes fell, think of how many are left. We started with n^3 cubes. There are (n-2)^3 cubes left. So the difference should tell us how many cubes fell off.
n^3 − (n − 2)^3 = 6n^2 − 12n + 8

The right-hand side of this equation is an acceptable answer, although the left side is what I was looking for.
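A brute-force count confirms the formula (a quick Python sketch; the function names are mine):

```python
def shell_count_direct(n):
    """Count unit cubes with at least one face on the surface of an n x n x n cube."""
    return sum(1 for x in range(n) for y in range(n) for z in range(n)
               if 0 in (x, y, z) or n - 1 in (x, y, z))

def shell_count_formula(n):
    return n**3 - (n - 2)**3  # expands to 6n^2 - 12n + 8

for n in range(3, 8):
    assert shell_count_direct(n) == shell_count_formula(n)
print([shell_count_formula(n) for n in range(3, 6)])  # [26, 56, 98]
```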
Try to finish all the questions above before you answer this one. All the questions you answered above will be scored and given 1 point for each correct answer. Then each test taker will be ranked and placed into a decile. You will be scored according to the formula below. The closer you are to your actual decile, the more points you get.
Points = 1 – ((Reported Decile – Actual Decile)^2)/100
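In Python, that scoring rule is a one-liner (a direct transcription of the formula above):

```python
def score(reported_decile, actual_decile):
    # Points = 1 - ((Reported Decile - Actual Decile)^2) / 100
    return 1 - (reported_decile - actual_decile) ** 2 / 100

print(score(3, 3))            # 1.0: a perfectly calibrated guess earns full points
print(round(score(1, 10), 2)) # 0.19: a guess nine deciles off loses most of the point
```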
I can’t give you a correct answer on this, other than the fact that you’re most likely wrong. How do I know you’re wrong? Because most people get this wrong. Take a look at the results I received so far:
Aside from one outlier, it seems that everybody else reported that they are above average. I don’t even have to start grading these quizzes to know that there is something very wrong here. I hate to say it, but at least half of these people are somewhat deluded about their abilities.

The results from the UChicago secular students were a bit better. More people reported a below-average decile, but you can’t help but notice that the lower-scoring participants tended to be more wrong (and too optimistic) about their performance.

I like to end with this question because I think this is the most dangerous part about irrationality. Our minds don’t just reason poorly all the time; our minds reason poorly while we think we are reasoning well, at least relative to everyone else. If you are truly an expert at something (music, chess, debate, etc.), you’ve probably seen the Dunning-Kruger effect, the strong tendency for incompetent people to not know that they are incompetent (and a corollary tendency for very competent people to doubt their relative competence too much).

This is not a condemnation of anyone’s abilities, but in all areas, I think it is important to understand Bertrand Russell’s maxim that “the trouble with the world is that the stupid are cocksure and the intelligent are full of doubt.” If we don’t really start rigorously questioning how smart or right or rational we really are, we have no hope of arriving at a true conception of reality, which is what I hope you want.
So what lessons can we draw from this lab? I have a few in mind that you should think about, so you can really change your life and hopefully become more rational (and help me with this journey too).
–Learn about rationality. There’s a whole online community dedicated to this topic. There are many skeptics groups and meetups that often talk about this stuff. There’s a ton of cognitive science research on our biases. Read up and learn.
–Practice thinking outside the box. Use indirect ways to arrive at an answer. Sometimes problems can’t be solved by using the most obvious way of solving a problem. There are plenty of puzzles and mind exercises on the internet if you want to have fun.
–Doubt yourself. Seriously. If you’re confident or certain about something, think about whether it is justified. If you’re very uncertain about other things, think about why.
–Learn lots of facts. Yes, rationality isn’t just about reasoning. It’s about making good decisions based on evidence, and the more facts you have, the better your decisions automatically are. So it might be a good idea to have some idea about the relative frequency of occupations. Other important topics include: What’s most likely to kill you? What are the most common things that lead to happiness? What medicines/treatments actually work?
–Learn Bayes’s Theorem and its applications. You really can’t ignore something this basic. Otherwise, it’s like learning biology without understanding evolution or literature without understanding grammar.
–Change your mind. We humans practice confirmation bias all the time. You might not think so, but it is really really hard to change your mind. You might want to start practicing changing your mind. Start with the small things: is vanilla really the best? Work your way up. Does capitalism really work? Is atheism really true? The higher up you go, the harder it is to change your mind. Interesting, isn’t it?
Well that about wraps it up. I hope you enjoyed this. So how did you guys do? Comments welcome.