Chess: A Valuable Teaching Tool for Risk Managers?
How does chess resemble risk analysis? Are there similarities, for example, between the way a chess player studies opponents’ games and the way a risk analyst studies clients’ portfolios? Igor Postelnik takes a comprehensive look at chess strategy and discusses the lessons that risk managers can learn from chess.
One of the most obvious features of financial markets is that prices move up and down unpredictably. This has led to random walk models, which in turn suggest that practitioners should look to games based on randomization for insight: e.g., coin flips, dice rolls and card shuffles. In this article, I’d like to look at risk analysis from a chess master’s perspective. I’ll compare chess analysis to risk analysis and explain what risk management might learn from chess.
Although chess has no randomness or concealed information, it is nonetheless unpredictable. If two players sit down to play a game of chess, neither the game nor the result will be the same as when the same two players met yesterday.
Imagine a risk manager and a hedge fund manager trying to decide an appropriate leverage level for a portfolio and two opposing chess masters trying to decide how complicated they want their positions to be. Are there no similarities? Let’s see.
Just as higher leverage may enhance returns or magnify losses for a risk manager or a hedge fund manager, a more complicated chess position may open unexpected variations that lead to first-prize money or leave a player without a prize at all. Each chess move has advantages and disadvantages: it may create the possibility of a desirable future line of play, but it also risks opening up (perhaps unforeseen) lines of play that favor the other side. Weighing this play and counterplay is the key to good judgment in chess, and it is really a type of risk management.
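To make the leverage side of that trade-off concrete, here is a minimal sketch (my own illustration, with purely hypothetical return and borrowing-cost figures) of how leverage scales both gains and losses on a fund's equity:

```python
# Minimal sketch with hypothetical figures: leverage amplifies both gains and losses.
# Approximate return on equity: leverage * asset return - (leverage - 1) * borrowing cost.

def levered_return(asset_return: float, leverage: float, borrow_rate: float = 0.03) -> float:
    """Approximate return on the fund's equity for a given leverage multiple."""
    return leverage * asset_return - (leverage - 1.0) * borrow_rate

for lev in (1.0, 2.0, 4.0):
    up = levered_return(0.10, lev)     # assets gain 10%
    down = levered_return(-0.10, lev)  # assets lose 10%
    print(f"leverage {lev:.0f}x: +10% assets -> {up:+.1%} equity, "
          f"-10% assets -> {down:+.1%} equity")
```

At 1x the outcomes are symmetric; at 4x the same 10% asset move becomes roughly a 31% gain or a 49% loss of equity, which is the chess player's "complicated position" in financial terms.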
Before moving forward, let me dispel a myth that chess is a deterministic game with full information available to both players. In theory, this is true. However, in practice, it is hardly ever the case that a player sees all possibilities at once. And even if he or she actually sees them, it’s hard to predict how well an opponent will react to them. So, it comes down to probabilities: i.e., how likely is the opponent to know a certain opening or a certain type of position?
For example, I am a 2200-rated chess player. Against someone rated below 2000, I definitely prefer to reach a simple position as soon as possible. Against someone rated above 2400, I want to keep the position very complicated for as long as possible.
As more pieces come off the board, there is less room for calculation. Why does this matter? A simple position doesn’t require deep calculation, but it does require a deep understanding of strategy. Chess players learn to calculate first; understanding comes later, as their strength grows.
In risk management, an analyst takes a first look at a fund’s portfolio (the chess position) and has to make a first move (approve a leverage level). Once a certain level of leverage is approved (the first move is made), we have to consider how the portfolio manager will respond, as well as what factors will cause the trader to complicate the position (increase risk in the portfolio) and, when that happens, how the risk manager should respond.
There are other similarities between chess strategy and risk analysis. Under time pressure in a tough position, a chess player has to choose a move, just as a risk manager in a tanking portfolio has to choose which position to liquidate to meet a margin call. Chess players also study opponents’ games, trying to anticipate how the next game will develop, while risk analysts study clients’ portfolios, trying to anticipate how the next trade will affect the portfolio.
Humans vs. Computers
A complicated chess position requires deep calculation and is more likely to cause a human player to make an error. Players understand this general guideline, but they also study their future opponents’ games and try to pick a style that is least familiar to the opponent. In 1997, for example, while Garry Kasparov was preparing to play a computer, IBM’s programmers and chess advisers adjusted Deep Blue based on analysis of Kasparov’s previous games. The styles that are most effective for Kasparov are well known in the chess world, so the program was fine-tuned to steer the game away from them. By analogy, a computer risk model needs to be fine-tuned to better analyze the styles a fund manager is most likely to use.
Deep Blue didn’t just play a game; it played against a specific opponent’s style, and Kasparov was embarrassingly crushed in the final game as a result. Similarly, a computer program by itself may not recognize a leverage request as too high; that takes human understanding of the investment style behind the request.
Now let’s discuss a “stress test.” It’s important to understand what happens when a chess player decides to sacrifice some pieces. The sacrifice is intended not to gain a specific advantage but to create certain weaknesses in the opponent’s position that the player will try to exploit later. A computer will accept the sacrifice and evaluate the current position in its favor, rather than considering the intent behind the sacrifice. As the game progresses, the computer will keep treating the extra piece as a positive, even as its position deteriorates.
Consequently, the computer will not only fail to anticipate the sacrifice but will also be unable to determine where it would lead. Moreover, it certainly gives no thought to why a human player would want to sacrifice at all. A human player, in contrast, might not accept the sacrifice in the first place, in order not to be exposed to the opponent’s well-developed strategy.
Despite the fact that the world’s best chess players could barely manage to draw their matches against the best computer programs, average players are able to achieve decent results against the same programs by selecting inferior openings that would be ridiculed if played against other humans. The sole purpose of such openings is to create positions that rely more on a deep comprehension of positional nuances than on the raw calculating power of computers.
A human player knows that such opening moves are inferior, and it is generally just a matter of time before he or she takes advantage of them. A computer doesn’t recognize the inferiority and has to prove it by calculating. If its calculations don’t reach far enough, the computer won’t select the correct strategy. Judging by recent events, computer risk models, just like computer chess programs, tend to ignore the piece of analysis that is not readily calculable: the piece that requires human understanding.
Keep in mind that all scenarios, at least in theory and no matter how improbable, are available on the chess board. Nevertheless, despite their superior quantitative ability, computers can’t consider them all. How, then, can they account for all stress scenarios in finance and calculate the probabilities of events that have never happened? On the other hand, world champion chess players are known to make simple errors late in games because of fatigue and/or mental lapses, and computers do an excellent job of avoiding such errors.
The Art of the Sacrifice
One last piece of strategy worth considering is why a player is more likely to sacrifice in the opening when playing with the black pieces than with the white pieces. It is important to remember that with the white pieces, a player already has the advantage of the first move, and the goal is to keep that advantage and try to increase it. With the black pieces, on the other hand, he or she is already behind, so why not sacrifice? It might help eliminate the first-move advantage.
Thinking about this from a risk management perspective, if a fund is outperforming its benchmark, why use more leverage? But if it’s underperforming, why not use more leverage?
A player may also resort to sacrifices in time pressure, hoping that the opponent will make a mistake while calculating. The best way to avoid this is to exchange pieces. In the last game of the 1985 world championship, the reigning world champion had to win to tie the match. From the first move, he launched an all-out attack. His opponent, Garry Kasparov, expected the attack and had prepared for it in advance. Kasparov won the game and the title.
In the last game of the 1987 world championship, the roles were reversed. Kasparov, now the world champion, had to win to keep the title. Not only did he not attack, he took a long time to cross the middle of the board and avoided exchanging pieces. His opponent was consequently forced to spend time calculating. Whenever the opponent tried to simplify the position, Kasparov held back. As time began to run out, Kasparov’s opponent committed a few small errors that Kasparov was able to capitalize on, converting tiny positives into a decisive advantage.
Thus, using very little leverage, the world champion retained the crown. This game turned into a very valuable lesson for many players on how to approach must-win situations. The main lesson is that knowledge of an opponent’s strategy before the game can be crucial to the end result.
This type of knowledge can also prove to be quite useful in risk management. If we can determine, for example, what strategy a portfolio manager will choose next, proactive steps can be taken to keep a firm’s current portfolio exposure reasonable (even when current exposure does not seem excessive) and to avoid unnecessary calculations.
When models are insufficient, this knowledge can prove particularly helpful. Say, for example, a fund sells deep OTM naked puts. A stress-testing model would assign a value to a downside move and compare it to the fund’s equity (a simple sketch of such a test appears below). However, it wouldn’t know that the probability of the move changes daily, and it also wouldn’t take into account that something will always happen in the human world.
Bobby Fischer was perhaps the best chess player ever, but not the greatest tactician. He proposed Fischer Random chess, which reduces the value of computers’ stored opening knowledge and, more generally, of raw calculating power by selecting the starting setup at random. Under such conditions, each human player will have more and less favorable starting setups; a computer won’t make such a distinction.
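As a rough illustration of the naked-put example above, here is a minimal stress-test sketch. All figures are hypothetical and the Black-Scholes formula is used only as a convenient pricer; the point is simply that a model revalues the position under a fixed shock and compares the loss to equity, while knowing nothing about how likely that shock is on any given day:

```python
# Minimal stress-test sketch (hypothetical figures): revalue a short, deep
# out-of-the-money put under a downside shock and compare the loss to equity.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(spot: float, strike: float, vol: float, t: float, r: float = 0.0) -> float:
    """Black-Scholes European put value."""
    d1 = (log(spot / strike) + (r + 0.5 * vol * vol) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return strike * exp(-r * t) * norm_cdf(-d2) - spot * norm_cdf(-d1)

equity = 10_000_000                       # hypothetical fund equity
contracts = 5_000                         # short puts, 100 shares per contract
spot, strike, vol, t = 100.0, 80.0, 0.20, 0.25

value_now = bs_put(spot, strike, vol, t)
# The shock: spot falls 25% and implied volatility doubles -- the kind of joint
# move that looks improbable in recent data but "always happens" eventually.
value_stressed = bs_put(spot * 0.75, strike, vol * 2.0, t)

loss = (value_stressed - value_now) * 100 * contracts  # loss to the put seller
print(f"put value {value_now:.2f} -> {value_stressed:.2f}; "
      f"loss ${loss:,.0f} = {loss / equity:.0%} of equity")
```

Under these made-up numbers, the "worthless" puts go from a few cents to roughly nine dollars each and the fund loses close to half its equity; what the model cannot supply is the judgment about whether such a shock is one-in-a-thousand or looming next week.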
Computer programs are blind to human intentions and may not evaluate them correctly, but they do a great job of avoiding simple human blunders. Humans must identify opponents’ intentions correctly and base computer calculations on those intentions, regardless of whether the opponent is a chess grandmaster or a hedge fund manager.
Different Responses to Different Strategies
In many ways, risks arising from randomness are the easiest to manage. If we flip a fair coin 100 times, for a $1 million bet each time, we know the distribution of outcomes and can plan accordingly. With financial markets, it’s more difficult, because the parameters of the distribution have to be estimated.
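For the coin-flip case, the claim that we know the distribution of outcomes can be made exact. Here is a short sketch (the loss thresholds shown are just illustrative) computing tail probabilities from the binomial distribution of 100 fair $1 million bets:

```python
# Short sketch: 100 fair coin flips at $1 million per flip.
# Net P&L = (heads - tails) * bet = (2 * heads - 100) * bet, heads ~ Binomial(100, 0.5),
# so any tail probability can be computed exactly rather than estimated.
from math import comb

FLIPS, BET = 100, 1_000_000

def prob_loss_at_least(threshold: float) -> float:
    """Probability that the net loss is at least `threshold` dollars."""
    total = 0.0
    for heads in range(FLIPS + 1):
        if (2 * heads - FLIPS) * BET <= -threshold:
            total += comb(FLIPS, heads) * 0.5 ** FLIPS
    return total

print(f"P(lose $20M or more) = {prob_loss_at_least(20_000_000):.2%}")   # about 2.8%
print(f"P(lose $40M or more) = {prob_loss_at_least(40_000_000):.4%}")   # well under 0.01%
```

In markets there is no such closed-form comfort: the "coin" has to be estimated, and its bias can change while you are flipping it.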
Risks arising from complex strategic interactions among competing (and in finance, but not chess, cooperating) entities require more subtle management. It is tempting to treat everything as random and then set a powerful computer to crank through all the calculations. This can work, as computer chess programs and successful program traders demonstrate. But it doesn’t always work.
Sometimes you have to consider the intentions of other entities and their responses to your moves. Sometimes the strategy that can be proven to be optimal with infinite computing resources (or perfect information) is a terrible strategy in practice. In these situations, a chess master may have better insights than a poker, bridge, backgammon or gin rummy champion.
IGOR POSTELNIK (FRM) is a national chess master and a risk analyst at AxiomSL. He can be reached at ipostelnik@axiomsl.com.
Original Link: http://axiomsl.com/chess-a-valuable-teaching-tool-for-risk-managers/