Cognitive biases are tendencies to think in ways that can lead to systematic deviations from a standard of rationality or good judgment. They’re usually studied in psychology and economics, but some of them apply to our everyday life, including testing and playing Magic: the Gathering. In today’s article I’ll talk about 8 cognitive biases, how they’re making you worse at Magic, and what you can do to compensate for them. Some of the time there’s nothing you can do other than simply not acting on the bias, but the mere fact that you’re aware it exists should help you identify when it’s happening.
Keep in mind that this is just an area that interests me, and I am not an authority on the subject in any way. Ultimately this is an article about Magic, not about psychology. All the definitions come from Wikipedia.
1. Peak-End Rule
That people seem to perceive not the sum of an experience but the average of how it was at its peak (e.g., pleasant or unpleasant) and how it ended.
As humans, we’re great at focusing on one aspect of what happened and letting it define the whole experience, and that aspect is generally either the peak or the end. It’s for this reason, for example, that going 0-2 7-0 feels great, whereas going 7-0 0-2 feels like you might as well drop, even if they are the exact same record.
When you’re testing a matchup, it’s very common for the two sides to come away with wildly different perspectives. I’ve certainly been on the receiving end of “this matchup is great for me” and “I felt favored” reports from the two sides of the same matchup, which clearly can’t both be true. This happens because we fall prey to the peak-end rule, and it’s something we must correct if we want playtesting to produce useful results.
This happens even more when you’re testing an aggro versus control matchup. The aggro side will have some very easy wins, and it’ll think “wow, I crushed my opponent like there’s no tomorrow—this matchup is so easy.” The control side, however, will have some very complicated wins, and will remember the sweet games in which it was in complete control for half an hour. In the end, both sides will feel like the match is favorable because they are focusing on different things and letting that one moment define their whole experience.
The best way to fix this is to keep track of results. Keeping track will not give you a definitive answer (i.e., it’s not because you went 7-3 that the matchup is 70%), but it will help you identify when you’re being biased—if you think the matchup is favored but results indicate otherwise, then perhaps you should playtest more and get a second opinion. It will certainly stop you from thinking that control is favored every time simply because you spend more time winning than you do losing (guilty).
2. Gambler’s Fallacy
The tendency to think that future probabilities are altered by past events, when in reality they are unchanged.
Most people are familiar with the gambler’s fallacy, but we still fall prey to it from time to time. In MTG, it materializes when we use past experiences of whether something worked out to decide whether it’s going to work the next time, even if there should, in theory, be no relation.
For example, say that I keep a 1-lander that I believe is correct to keep, and I don’t draw my second land. Then, the following round, I keep a similar 1-lander, and I again don’t get there. By the third round, I’m more likely to mulligan a similar hand, because it’s already failed me twice, even though, if the hand is a good keep, then it’s a good keep regardless.
We also do this a lot with cards drawn by our opponent, by applying either “of course they’re going to have it again, they always have it” or “there’s no way they’re going to have it again,” when those probabilities shouldn’t really change—the fact that your opponent had the perfect card to stop you last game doesn’t influence whether they’re going to have the perfect card to stop you now.
The way to fix this is simply to stop doing it. Yeah, I know, that’s a bit tautological, but there’s no real way around it—you have to be aware of this bias and remove it. When you find yourself arriving at a conclusion based on past experiences that you suspect might be wrong, try to really think it through, and base your decisions on probability rather than those feelings.
3. Illusion of Control
The tendency to overestimate one's degree of influence over other external events.
In Magic, we’re taught that we must always learn from our losses to improve as players. While this is fundamentally true, it creates a culture in which individuals think they’re to blame every single time they lose, which is false. Whenever we lose, we assume we must have done something wrong, and we try to identify what it is. When we can’t, we either fabricate something, or we feel miserable. We do it because we want to think we have more control over the situation than we really do—if we do badly because we messed up, then it follows logically that if we don’t mess up, we’ll do well next time, and that’s comforting because it’s in our hands, and what’s in our hands we can fix.
The harsh reality is that we’re much less in control than we believe. Many of the times we lose, it’s not because we did something wrong—and many of the times we win, it’s not necessarily because we did something right. It’s for this reason that I dislike when I lose and someone asks me, “what happened?” (and my mother is particularly guilty of this) because it implies that for me to have lost, something “weird” had to have happened. Sometimes nothing happened, and you just lost because people lose. In our quest for self-improvement (or perhaps to appear like we’re trying to self-improve), we often create scenarios in which the conclusion is “I did something wrong, therefore I lost” when the conclusion should be “I just lost.”
One of the hardest parts of Magic is making the 55% choice ten times, getting it wrong ten times in a row, and still making the same choice the 11th time, but it’s what you must do—it’s not because a choice didn’t work out that it was wrong. “I should have mulliganed my 2-lander.” No, you shouldn’t have. You played the probabilities and lost. “I shouldn’t have played main-deck Doom Blade.” No, you should have. You were just unlucky to play against the 4 black players in the room.
Magic is a game where people lose all the time, and the sooner you accept that, the happier you will be and the easier you will be able to cope with your losses. The best players have a win percentage of about 65% at the PT level—this means that for every 3 games they play, they lose 1. And those are the best in history! Of course those players are far from perfect, but not even the perfect player would win close to all their matches.
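The claims above are pure arithmetic, so it’s worth running the numbers (the 15-round event length is a hypothetical, and the 65% figure is the rough PT-level rate cited above):

```python
# How often does a correct 55% choice whiff ten times running?
p_choice = 0.55
p_ten_misses = (1 - p_choice) ** 10
print(f"{p_ten_misses:.5f}")  # 0.00034 -- rare, but across thousands of games it happens

# What does a 65% match-win rate imply over a long event?
p_win = 0.65                     # roughly the best players' PT-level rate
rounds = 15                      # hypothetical event length
print(rounds * (1 - p_win))      # expected losses: ~5.25
print(f"{p_win ** rounds:.4f}")  # chance of going undefeated: ~0.0016
```

Even at the best win rate in history, an undefeated run happens less than once every 600 attempts, and five losses per event is the expectation, not a disaster.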
If you want to be a competitive Magic player, you have to try to identify your flaws to improve yourself, absolutely, but you also have to be realistic. You have to accept that sometimes you lose and there is nothing you could have done, or nothing you should have done.
4. Outcome Bias
The tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.
This is similar to the illusion of control (and in fact there are examples in there that are just outcome bias), but I’ve decided to give them separate entries because I think they usually manifest in different ways. Illusion of control happens when you assume you must’ve done something wrong because you failed, and outcome bias happens when you assume you must’ve done something right because you succeeded. In Magic, this bias is commonly referred to as being “results oriented” (which actually has a different meaning in the outside world).
A while ago, I used to write articles in which I dissected decks that did well at events, and analyzed whether their card choices were improvements over those of the stock build. I constantly read comments like “you’re saying X is bad, but how can it be bad if the person won?” The most glaring example was a person who took a 5-color control deck, cut 3 lands for 3 Violent Ultimatums, and then proceeded to win an event. Yes, you read that right. They played 3 fewer lands than the normal build, and 3 more 7-drops with very demanding colored-mana requirements. I said this was something you should never do mathematically, but I saw numerous replies suggesting that I was simply jealous of the person’s success with a different idea, and that if he won the event, it was clearly because it worked and I should stop being so close-minded.
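The land-count math can be sketched directly. The numbers here are illustrative, not the actual decklists (a 60-card deck, and 26 versus 23 lands): what is the chance of having drawn at least 7 lands, enough total mana for a 7-drop, among your first 16 cards (opening hand plus 9 draws)?

```python
from math import comb

# Hypothetical 60-card deck; 26 vs 23 lands are illustrative counts.
# Hypergeometric: P(at least `needed` lands among the first `seen` cards).

def p_at_least(lands_in_deck, needed, seen, deck=60):
    spells = deck - lands_in_deck
    return sum(
        comb(lands_in_deck, k) * comb(spells, seen - k)
        for k in range(needed, min(seen, lands_in_deck) + 1)
    ) / comb(deck, seen)

print(f"{p_at_least(26, 7, 16):.3f}")  # stock land count
print(f"{p_at_least(23, 7, 16):.3f}")  # after cutting 3 lands
```

Cutting the lands lowers the chance of ever reaching 7 mana on schedule, exactly when the deck added three more cards that need that 7th land—the cut hurts twice. None of this math changes because one pilot won one event.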
Another example of this happened when I wrote about a game in which I activated Depala multiple times, always sending a land I needed to the bottom, and eventually lost, whereas if I had drawn a land at any point, I’d have won immediately. I was told by multiple people that perhaps I shouldn’t have used Depala, but clearly I should have, as it’s statistically more likely that she helps me than not, even if in practice she did not help me.
In the end, we must understand that the 55% decision is just that—a decision that is going to be right 45% of the time. 45% is a lot. You will, invariably, be “proven right” if you make the wrong decision, but that doesn’t mean it magically becomes the right decision. If you follow a “it worked so it must be right” policy, then it will be much harder for you to improve as a player.
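Here is a toy simulation of why the “it worked so it must be right” policy is costly. All the numbers are hypothetical: Player A always takes the 55% line, while Player B judges each decision by its last outcome, switching between the 55% line and an inferior 48% line whenever the current one just failed.

```python
import random

# Toy model of outcome-based decision making (numbers are hypothetical).
random.seed(7)  # fixed seed so the run is reproducible
TRIALS = 100_000
wins_a = 0
wins_b = 0
b_on_good_line = True
for _ in range(TRIALS):
    # Player A: always takes the 55% line, regardless of recent results.
    wins_a += random.random() < 0.55
    # Player B: takes whichever line didn't just fail.
    p = 0.55 if b_on_good_line else 0.48
    won = random.random() < p
    wins_b += won
    if not won:
        b_on_good_line = not b_on_good_line  # switch after any failure
print(wins_a / TRIALS)  # close to 0.55
print(wins_b / TRIALS)  # measurably lower, despite "reacting to results"
```

Player B spends a large fraction of the time on the worse line, because the better line fails 45% of the time and each failure looks like evidence against it. Judging decisions by outcomes, not by their quality at the time, costs real percentage points.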
5. Insensitivity to Sample Size
The tendency to under-expect variation in small samples.
This is also a problem with playtesting. Simply put, we do not do anything enough times to draw powerful statistical conclusions from it. We can have “general ideas” and “educated guesses”, but we must understand that, the smaller our sample size, the larger the variance we expect, and the less “truthful” our results will be. As before, going 7-3 does not mean the matchup is 70%—it could even be unfavorable.
At some point in our testing house, we started to track results from Drafts so that we could see which colors overperformed and underperformed. At first I was against it because I thought we would be putting more weight into this information than we should, so it was better to not have the information at all. The sample size was so small, particularly for some fringe combinations, that it might as well not exist, and we were drawing big conclusions from very small data. Nowadays I think we’ve become a bit better at dealing with this information, so I’m in favor of tracking them again, but only because we know that insensitivity to sample size is a real thing and we account for it.
There are two ways to compensate for this bias. The first is to make your sample size larger (i.e., instead of playing 10 games, play 100 or 1,000). If you go 700-300, then the matchup is likely to be around 70%. In practice, this is usually not feasible, which leads to the second: don’t playtest to know what happened, but to learn how or why it happened. What happened is an isolated occasion—it might happen differently in the future. If you understand what caused it to happen, though, you can better understand how likely it is to happen again, and you can then extrapolate big conclusions from a relatively small amount of data.
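The 7-3 claim can be made concrete with a quick binomial calculation: if the matchup were truly 50/50, how often would 10 games still produce a 7-3 or better record? And how often would 1,000 games produce 700-300 or better?

```python
from math import comb

# Binomial: P(at least k wins in n games at per-game win probability p).
def p_at_least_k_wins(n, k, p=0.5):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"{p_at_least_k_wins(10, 7):.3f}")     # 0.172 -- 7-3 from a truly even matchup is common
print(f"{p_at_least_k_wins(1000, 700):.1e}") # vanishingly small at n = 1000
```

An even matchup produces a 7-3 (or better) record about one time in six, which is why 10 games tells you almost nothing about the true percentage, while 700-300 over 1,000 games essentially cannot come from a 50/50 matchup.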
6. Selective Perception
The tendency for expectations to affect perception.
In simple terms, this is the bias where we see what we want to see or what we expect to see, and it’s most common in deck selection where decks start looking better because we created them or because the tournament is imminent.
When we make a deck, we’re hoping that it’ll be great. Otherwise, we wouldn’t be making it. As a result, we focus on the evidence that confirms it’s great, while ignoring the evidence that would make it seem bad. It’s common to read a tournament report that says, “went 4-3, but my 2 losses were due to mistakes. The deck is great,” with no mention of how many of the 4 wins were due to opposing mistakes, because that doesn’t validate the idea that the deck is good.
It also materializes when we don’t have a deck and the tournament is approaching. In this spot, we’re desperate—we feel awful because we don’t have a deck, and we really want one, so we’re more inclined to accept evidence that tells us a deck is good because this will make us feel better. The best fix for this is, again, to just be aware of it and try to compensate.
Last week I wrote about our G/W Tokens deck and how I decided to not play it because I thought people were being victims of selective perception—we had no deck and the tournament was the following day, so we ignored all of G/W’s flaws and focused on its good points, which made it seem appealing to everybody. By choosing not to play it, I was trying to compensate for our selection bias because I knew it existed.
Of course, in this example I crashed and burned, as the deck turned out to be amazing, but I do not think my train of thought was wrong. I knew we were looking at everything with rose-colored glasses and I compensated for that, and the fact that once our glasses were off, reality turned out to actually be pink, doesn’t mean we didn’t have them to begin with.
7. Availability Cascade
A self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or "repeat something long enough and it will become true").
In Magic, it’s hard for us to test everything by ourselves. Thankfully, we don’t have to, as the internet provides us with a sort of “collective brain” full of information that other people have acquired. The problem emerges when the original information is incorrect and ends up spreading more and more, with each new person simply repeating the original information and by doing that validating it and making it even harder to challenge.
Something like this happened when we were testing for PT Milwaukee. The deck I liked, R/g with the Battle Rage/Become Immense combo, was supposed to have problems with G/W. It made sense—G/W is usually good versus red decks because your creatures are bigger and you have some life gain. But, most importantly, everyone everywhere said that the matchup was good for G/W—it was just common knowledge.
Except that it wasn’t right. In fact, the match was pretty good for the red deck, pre- and post-board. It only took us a couple games to figure it out, but then we tested more and more because we wanted to make sure. But that didn’t make any sense. Everyone said the matchup was good for G/W. Every player I talked to, and every article. Were they all wrong? Were we wrong?
The answer to this is probably that at some point someone was wrong, and then misinformation spread. A lot of the people who said G/W beat red hadn’t actually played the matchup. They were just repeating what other people said. Some of those had played the matchup, and concluded that red was better, but faced with an overwhelming consensus to the contrary, they conceded that they were probably wrong and the matchup was actually good for G/W. So you end up having a scenario where, say, of 10 people to give you an answer, the first 3 think G/W wins, and the next 7 either think red wins or have no opinion, but all 10 will end up telling you that G/W wins because that’s the accepted consensus.
The way to compensate for this is to know that people can be wrong, that something is not necessarily true just because it’s common knowledge, and that you are able to challenge it. You can’t doubt popular wisdom every single time, of course, but knowing this phenomenon exists is important because it tells you that when your findings go against common knowledge, you aren’t necessarily going against hundreds of players. You might be going against just one or two who happened to be the first ones.
Patrick Chapin actually wrote an article about this a long time ago, and it’s one of my all-time favorites.
8. Pro-Innovation Bias
The tendency to have an excessive optimism toward an invention or innovation's usefulness throughout society, while often failing to identify its limitations and weaknesses.
Another deck building bias, and one we see all the time in spoiler season, where we think everything is broken and believe that because something is new, it must be an improvement over something that already exists. I can cite Oath of Ajani as an example, where people said turn-2 Oath of Ajani into turn-3 Gideon into turn-4 Nissa was a new possible “nut draw in Standard” while forgetting that you could already do that with a number of mana creatures.
This bias is tough to balance in Magic because if something is an improvement, we want to be the first ones to find it. As a result, we want to push the best-case scenario for every new card, and if it’s good enough, we see whether its flaws can be mitigated. The important thing is that you don’t skip the “flaws” part. You should be positive at first, but you cannot ignore a card’s limitations and weaknesses forever.