Edit: Some of the below links don't work; they just take you back to this page. For the ones where that is the case, I have tried to add the URLs in parentheses afterwards. If you find one where that is not the case, please let me know.

Pre-addendum (predendum?): While writing this, it has occurred to me that there is another idea which, while perhaps not influencing my worldview (whatever that is), has definitely affected my life an enormous amount. So I'll write about that next time! And now for the article I was intending to write before realizing that:
This will probably sound like several ideas, but it all ties together really well into a single, coherent worldview. Much more coherent and applicable to living in the real world than any other one that I've encountered. This post isn't intended to teach all of these concepts; it's basically just me explaining myself. Because of that, the way I'm going to structure this is by splitting it up into a few sections, explaining the views I used to hold, and contrasting them with the views that I hold nowadays.
If you want to actually learn all this stuff, read the Sequences on Less Wrong. They're really poorly laid out, extremely long, and very, very valuable. I do strongly recommend them to anyone who is at all interested in having accurate beliefs, being effective at whatever it is you choose to do, having an interesting life, and, last and certainly least, philosophy. If this ends up sounding super incredibly valuable, you might also want to check out the Center for Applied Rationality, who run week-long intensive workshops on this, which will most likely get you up to speed much quicker than reading blog posts. I was in the first of their attempts at that, back when it was 9 weeks long.
I'll try to include the appropriate links at the appropriate moments so that if you want to read more on a specific subject you can just click through to the Less Wrong article on it. I'm sure to miss some, though, and there's a lot more content there, so I again recommend reading the Sequences. If you want a more easygoing, fictional read, then even though I cringe to write this sentence, I do recommend reading the fan fiction Harry Potter and the Methods of Rationality. HPMoR and the Sequences are written by the same person, Eliezer Yudkowsky.
If you talk to boring old-school philosophers, the thing they will mean when they say "rationality" or "reason" is rationalism: the belief that the only way to gain information about the world is by thinking, and that interacting with the world is useless. For instance, by considering axioms, you can come up with new, true, and useful mathematical theorems. I have a background in mathematics, so this was highly appealing to me as a youngster.
Contrast that with empiricism, the old-school definition of which is the belief that you can only gain information about the world by directly observing it. For instance, you can look up and see that the sky is blue. This is obviously useful, but really hard to formalize. Science is the most common formalization, and works really darn well. I don't really have a science background apart from generic nerd stuff plus public school plus the first year of a physics/chemistry double major.

Neither of those things is what I mean by rationality. What I mean, and what LW (lesswrong.com) and CFAR (rationality.org) mean, and what most people who self-describe as rationalists mean, is a combination of the two: rationality is using empiricism (looking at the world) and mathematics (especially statistics) to come to accurate and useful beliefs about the world, and then using those beliefs to act on the world in effective ways. There's a whole lot in there, so let's break it down.
Epistemology and the Nature of Belief
Epistemology is the question of how one comes to beliefs, and what exactly a belief is. I never had a satisfactory answer for either of these outside of mathematics, where it's extremely clear: you perform calculations, you follow a proof, and you arrive at a conclusion. If A, then B, because I proved it. I believe that. I know it. I proved it, and so there isn't a shadow of doubt in my mind.
The real world is a lot messier than that. You don't get given axioms. Trying to follow logically from premises to conclusions is useful and good, but there are always hidden premises you didn't know about, and they can mess up your predictions in surprising ways. It's a lot harder to come to true beliefs.

In fact, what is a belief? I used to believe (hah!) that a belief was like it is in mathematics: a binary true-or-false, believe-or-don't, 100%-certain-or-definitely-not claim about the world. If I encountered things that contradicted a belief, then usually I would figure something else caused them and they didn't really contradict it, but if I encountered enough of them, I would change my mind. No real way of telling what "enough" was, though.
That's not what a belief is.
What a belief really is, is a sort of model of the world, which makes predictions, and which you have a confidence level in. A confidence level can be expressed in a few ways: 80%, 4:1, 0.8; those all mean the same thing. They mean that if I make 5 predictions that I say I am 80% confident in, then I should get about 4 of them right, if I am well calibrated. Probability is in the mind, not in reality (the map, not the territory): if I flip a coin and it lands and I haven't looked at it, then I have 50% confidence that it came up heads and 50% confidence that it came up tails. If somebody else has looked at it and seen heads, they have 100% confidence that it's heads.
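To make the notation concrete, here's a tiny code sketch (my own toy illustration, nothing official from LW or CFAR; the numbers are made up) showing that 80%, 4:1, and 0.8 are the same confidence level, and what being well calibrated means:

```python
# Toy illustration: three notations for the same confidence level.
def odds_to_probability(for_, against):
    """Convert odds like 4:1 into a probability."""
    return for_ / (for_ + against)

assert odds_to_probability(4, 1) == 0.8  # 4:1 odds == 80% == 0.8

# "Well calibrated" means: of the predictions you assign 80% confidence,
# about 80% should actually come true.
predictions = [True, True, True, True, False]  # outcomes of five 80%-confidence predictions
hit_rate = sum(predictions) / len(predictions)
print(f"Stated confidence: 0.80, observed hit rate: {hit_rate:.2f}")  # 0.80
```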
That 100% isn't actually 100%, though. You can never be 100% certain about anything. There could have been light glinting off the coin just as they looked at it, and they could have overheard someone else say the word "heads" in a conversation and thought it was heads when it was really tails. This isn't just nitpicking; it actually matters that you don't have 100% confidence in anything, because of how we learn. Or to put it in jargon, because of how we update our beliefs on encountering evidence, which is to say, Bayes' Theorem.
I'll illustrate it with a single example and then move on, because others have written better explanations than I could here. Suppose you're worried you have a specific, rare type of cancer. You don't have any particular reason to be worried; you're just paranoid. 10 out of every 100,000 people have this kind of cancer. There's a test you can take, which is 90% accurate at diagnosing cases of it (90% of the people with this cancer who take the test get the result "You have this cancer!"). It's also 95% accurate at avoiding false positives (only 5% of the people who take it and don't have the cancer get the result "You have this cancer!"). You take the test, and it says "You have this cancer!" What are the odds you really have that cancer? Try to solve it before reading on.
It's only about 0.18%. The prior odds that you have the cancer are 10 out of 100,000. Of the 10 people (per 100,000) who have the cancer and take the test, 9 of them get the result you got. Of the 99,990 healthy people who take the test, 4,999.5 of them get the result you got. So your odds of having the cancer are 9 : 4,999.5, which works out to a probability of 9/5,008.5, or about 0.18%.
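If you want to see Bayes' Theorem doing the work, here's that same computation as a short code sketch (same numbers as the example above):

```python
# Bayes' Theorem on the cancer-test example above.
prevalence = 10 / 100_000        # P(cancer): 10 in 100,000 people
sensitivity = 0.90               # P(positive | cancer)
false_positive_rate = 0.05       # P(positive | no cancer)

# Total probability of getting a positive result, sick or healthy.
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
posterior = sensitivity * prevalence / p_positive
print(f"P(cancer | positive test) = {posterior:.4%}")  # ~0.1797%, i.e. about 0.18%
```

Notice that it's the lopsided prior (99,990 healthy people for every 10 sick ones) that swamps the fairly accurate test.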
People never guess that on their own if they don't already know Bayes' Theorem. We have no idea how we're supposed to go about incorporating new information into our view of the world. Others have done a better job of explaining all this than I can, so if you don't understand it after my brief explanation, go read their explanations.
It may not seem like it, but this really is a revolutionary idea. I probably can't convey that just now, though, so let's move on.
How Words Work
I used to believe that words were grand, Aristotelian categories that fell from the Platonic Realm, and that everything in the world neatly fit into one definition or another. This is not the case.

Words are fuzzy things. They're clouds in attribute-space. The closer an object is to the center of that cloud, the more likely someone is to describe it with that word, but it's never a 100% thing. Apples are a fruit, so are pears, and almost everyone will call a strawberry a fruit (but a biologist won't!), while whether someone calls an olive a fruit depends on when you ask. Seriously, this has been studied, and the same people, asked on multiple occasions, are inconsistent in their responses.
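Here's a toy sketch of that clouds-in-attribute-space picture (entirely my own illustration; the attributes and numbers are made up, and real psychology is messier): the closer something sits to the prototype, the more readily the word applies.

```python
import math

# Toy "cloud in attribute-space" model of the word "fruit".
# Attributes (all invented for illustration): sweetness, grows-on-a-plant, has-seeds.
FRUIT_PROTOTYPE = (0.8, 1.0, 0.9)

def fruit_likeness(attributes):
    """Closer to the prototype => more readily called 'fruit'.
    Squashes distance into a (0, 1] score; not a real psychological model."""
    distance = math.dist(FRUIT_PROTOTYPE, attributes)
    return math.exp(-distance)

print(fruit_likeness((0.8, 1.0, 0.9)))  # apple-ish: 1.0, a central member
print(fruit_likeness((0.6, 1.0, 0.7)))  # strawberry-ish: high, but less central
print(fruit_likeness((0.1, 1.0, 0.8)))  # olive-ish: lower; this is where people disagree
```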
So while it's true that, for instance, genders are not perfect categories that everybody fits into, that's not a uniquely powerful case against the use of gender words as categories. It applies to all the other words we use to try to chop the physical world up into understandable sections. There's a whole lot more about this, and other ways to use words properly, at the link above, so I'll leave off here and move on to everybody's favorite subject:

Morality
For a long time I was a moral realist. I thought that all actions were moral or immoral (or neutral) as an aspect of the action itself. I wavered as to whether context mattered. I was a deontologist. I thought that morality was a thing that existed in the universe. If not the same way that carbon exists, then at least the same way that prime numbers exist. If an action was immoral, it was immoral regardless of the consequences.
I don't believe that anymore. I am now what I would call a utility function consequentialist. There's kind of a lot to unpack there, so let's get started. I think that everybody has preferences, which we can represent with a utility function (by "everybody" here I mean every person, using the definition from my previous post. I do actually consider many humans to be persons; don't let my other post fool you. Though this applies just as well to meme-creatures!). A utility function is a mathematical function that takes in a state of the world and spits out a number representing how much you like that state. Now, we clearly can't actually calculate that in real life, but it's still a very useful thing to think in terms of.
The next thing to try to understand is the idea of expected utility. Tying into what I said earlier about beliefs as probabilities, the idea is that you should ask: given that I take a specific action, what are its probable end results? Take each result's utility, multiply it by how likely that result is, add them all up, and that's the expected utility of the action.
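Here's a toy sketch of that calculation (the actions, probabilities, and utilities are all made up, purely for illustration):

```python
# Toy expected-utility calculation: each action leads to several possible
# outcomes; weight each outcome's utility by its probability and sum.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Made-up numbers: take a risky job vs. keep a safe one.
risky_job = [(0.3, 100), (0.7, -10)]   # 30% it goes great, 70% it flops
safe_job  = [(1.0, 20)]                # a sure, modest payoff

print(expected_utility(risky_job))  # 0.3*100 + 0.7*(-10) = 23.0
print(expected_utility(safe_job))   # 20.0 -> the risky job wins, barely
```

The point isn't the particular numbers; it's that once outcomes carry probabilities, actions can be compared directly.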
As a utility function consequentialist, what I consider the word "moral" to mean is "an action is more moral than another action if it increases the expected utility of the world more than the other action." Two things to immediately note: first, I don't think it's useful to describe individuals as moral or immoral, only actions. Second, whose utility function are we talking about? Well, mine, of course. Who else's would I care about? By definition, my utility function is "the set of things I care about and how much I care about them." So that's what moral means for me. For you it means following your utility function.
So morality is just preference. Well, saying "just" preference implies that preferences aren't particularly important. But they are! They're the most important thing there is. And the kinds of preferences that people stick the morality label on are often extremely strongly held. They don't even feel like preferences, they feel like they're an actual fundamental part of the universe. But something feeling that way doesn't make it so, and I don't know of any reason to believe that they actually are. And believe me, I've searched.
How it All Ties Together
As I said, this may have seemed like a lot of unconnected things, but in my mind they are really a single idea. I used to think of myself as needing to focus on morality, both in terms of figuring out what the proper set of deontological rules to follow is (libertarianism? the golden rule? the Bible? haha, no, of course not that), and then following them. I now think of myself as an agent acting in the world to try to direct my life, my self, and the world the way that I want.

I used to really struggle with the world not fitting into my neat little categories (what do you mean, "species" isn't well defined? either two things are the same species or they aren't!), and I am now much more comfortable thinking in terms of there being a distinction between the map, my mental model of the universe, and the territory, the universe itself.

I'm also able to act under uncertainty, which I did not used to be able to do. This has dramatically increased the options available to me. I used to only choose life options that seemed really certain, or where my success or failure was due to something outside of my control, assuming I put in enough effort to qualify for success. I am now doing things where my success will depend entirely on my own efforts and on my ability to come up with a model of the world good enough to let me figure out which actions will shape the world the way I want the most. It's a very scary but incredibly liberating feeling.
And of course, one of the main ways that the meme Rationality has affected my life is that I have met a great deal of wonderful people through our shared interest in it. Of my three roommates, two I met through a Less Wrong meetup, and the other is my partner, who discovered Rationality at around the same time as I did.
Rationality as Meme-Creature
This entry wouldn't be complete without me analyzing Rationality using my blog's gimmick, would it? It's interesting: while writing this, I was at first unable to separate it out and use that lens to view Rationality. That's how strongly it's been affecting my worldview. But I've been trying for a few days now, and I think I have a good grasp on it.
So what does Rationality want? Well, like all meme-creatures, it wants to expand its territory. It seems to be more focused on having a strong grasp on a few minds than a weak grasp on many minds, and so it spreads largely through weekend or week-long workshops hosted by CFAR. And through Less Wrong, of course, which influences people to various degrees. Though I have definitely noticed that people who read LW largely fall into two categories: those who think it's boring or weird or dumb and stop reading, and those who think it's one of the most amazing things they've ever read, for whom it becomes a significant part of their life.
Rationality also wants something else, which conjures interesting reactions in people. This is something a lot of meme-creatures want, and when a powerful meme-creature asks for it, nobody bats an eye, but when a weaker one asks for it, everybody becomes suspicious and the cult-warning alarms go off. I'll post more on that later, but for the time being I'll just say that I think what a "cult" is, largely, is a low-status meme-creature acting like a high-status meme-creature. So anyway, what is this terribly controversial desire? Money. Rationality wants you to donate money to CFAR, to MIRI (intelligence.org), and to Effective Altruism (givewell.org) causes. There's a good chance that a lot of the people reading this will immediately raise an eyebrow and say "Ahh, I see your game now!" but would not have the same reaction if this were the Catholic Church passing the donation bucket on Sunday or the USA asking for taxes every year. So I do recommend trying not to let that bias you at all.
In Conclusion

Wow, I haven't started a paragraph like that since five-paragraph essays in middle school. Anyways! Rationality is incredibly useful. Everyone has something to take away from it.
If you like debate, read that article I linked earlier about how words can be used incorrectly.
If you like being productive, think about beliefs in terms of probabilities, not absolute certainty.
If you like being a good person and helping people, think in terms of expected utility, not in terms of following moral rules.