Many areas of financial theory depend on assumptions. One that appears time and again is that market returns follow a normal distribution. Depicted below, the normal (or Gaussian) distribution is characterised by the bell-shaped curve of its probability density function: observations further from the average become exponentially less likely to occur. Many natural phenomena are 'normal', such as height and IQ.
The normal distribution provides a convenient simplifying assumption for the analysis of markets; without it we would not have models such as the famous option pricing formula derived by Fischer Black and Myron Scholes. However, the problem with assuming normality is that it underestimates tail risk (low-probability events), as the distribution of market returns has historically exhibited fatter tails (a group of academics, including Scholes, proved this the hard way at Long-Term Capital Management).
We can see these fat tails empirically in the table below, which compares the number of times the normal distribution would expect a tail event to occur against how many times it actually occurred. Be aware that this tail event can occur at either end of the bell curve, as we acknowledge the potential for upside shocks too.
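The comparison in the table can be sketched in a few lines of code. The snippet below uses a hypothetical fat-tailed return series (a Student-t sample standing in for real index data) and counts moves beyond three standard deviations, in either direction, against the number the normal distribution would predict:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Hypothetical daily return series; in practice this would be real index data.
returns = rng.standard_t(df=3, size=10_000) * 0.01  # fat-tailed sample

mu, sigma = returns.mean(), returns.std()
threshold = 3  # "tail event" = a move beyond 3 standard deviations, either side

# How many 3-sigma days the normal distribution predicts...
expected = len(returns) * 2 * norm.sf(threshold)
# ...versus how many actually occurred in the fat-tailed sample.
observed = int(np.sum(np.abs(returns - mu) > threshold * sigma))

print(f"expected ~{expected:.1f} tail days, observed {observed}")
```

With a genuinely fat-tailed series the observed count comes out well above the normal prediction, which is exactly the pattern the table shows.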
The normal distribution predicts far fewer tail events than actually occur. It even calculates 'Black Monday' as a one in 1.71×10⁸⁵ year event! (NB. I tried my best to find a unit of time which could better explain this, but found nothing. However, during my research I was pleased to discover that a "jiffy" is actually a technical term in physics, denoting the time taken for light to travel the length of a neutron. I'll leave you to decide if the research was worth it.)
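For the curious, the mechanics behind a number like that are simple to reproduce. The figures below (the size of the Black Monday fall, and an assumed daily mean and volatility) are illustrative rather than the article's exact inputs, so the output will be astronomically large but not identical to the figure above:

```python
from scipy.stats import norm

# Assumed inputs for illustration: Black Monday's ~20.5% one-day fall,
# an assumed 0.03% mean daily return and an assumed 1% daily volatility.
drop, mu, sigma = -0.205, 0.0003, 0.01
z = (drop - mu) / sigma  # roughly a 20-sigma move

daily_prob = norm.sf(abs(z))          # normal probability of a day this extreme
years_between = 1 / (daily_prob * 252)  # 252 trading days per year

print(f"z-score: {z:.1f}, one such day every {years_between:.2e} years")
```

The exact answer is very sensitive to the mean and volatility estimates, but under any reasonable normal calibration the implied waiting time dwarfs the age of the universe.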
Solving the problem
The drawbacks of assuming the normal distribution are close to common knowledge in investment management, hence the adoption of other tools for measuring and managing risk.
One method we use in the team involves applying Monte Carlo simulations to a Conditional Value at Risk (CVaR) model. This sounds complicated, but essentially it just involves randomly simulating your portfolio a number of times (often by sampling from a historical distribution), then taking the average loss in the worst X% of cases. For example, in the graph below I run a Monte Carlo simulation 20 times by sampling from S&P 500 returns, and would take the average loss of the worst two performers (10%) as my 10% CVaR.
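The procedure above can be sketched as follows. The return history here is synthetic (a stand-in for the actual S&P 500 data), but the bootstrap-and-average-the-worst-10% logic is the same:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical daily return history standing in for S&P 500 data.
history = rng.normal(0.0003, 0.01, size=2_000)

n_sims, horizon = 20, 252  # 20 simulated one-year paths, as in the chart
# Bootstrap: each simulation resamples daily returns from the history.
sim_returns = rng.choice(history, size=(n_sims, horizon), replace=True)
path_totals = sim_returns.sum(axis=1)  # total return of each simulated year

alpha = 0.10
worst = np.sort(path_totals)[: max(1, int(n_sims * alpha))]  # worst 2 of 20 paths
cvar = worst.mean()  # 10% CVaR: average outcome in the worst 10% of cases
print(f"10% CVaR: {cvar:.2%}")
```

Note that with only 20 paths the estimate is noisy; in practice one would run thousands of simulations and only the chart uses 20 for readability.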
Second, we use scenario analysis, which overcomes the dependency on historical data present in the methods above. It involves assigning probabilities to potential future events, then projecting how asset prices are expected to behave in each scenario. In the Asset Allocation team we use a forum of investment professionals to come up with potential scenarios, which better prepares us for the events that could drive markets in the future. We also impose a specific correlation matrix on the returns (usually fattening the tails), which allows us to simulate our portfolios without assuming the historical market distribution.
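Imposing a chosen correlation matrix on simulated returns is typically done with a Cholesky decomposition. The sketch below is a minimal illustration, assuming three hypothetical asset classes and an illustrative correlation matrix; it draws independent fat-tailed (Student-t) shocks rather than relying on the historical distribution, then mixes them to match the target correlations:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative target correlation matrix for three hypothetical asset classes.
corr = np.array([[ 1.0, -0.3, 0.2],
                 [-0.3,  1.0, 0.1],
                 [ 0.2,  0.1, 1.0]])

L = np.linalg.cholesky(corr)  # corr = L @ L.T

# Independent fat-tailed shocks instead of the historical distribution...
shocks = rng.standard_t(df=5, size=(100_000, 3))
shocks /= shocks.std(axis=0)  # normalise each column to unit variance
# ...then impose the chosen correlation structure.
correlated = shocks @ L.T

print(np.round(np.corrcoef(correlated, rowvar=False), 2))
```

The sample correlations of `correlated` land close to the target matrix, while each marginal series keeps its fat tails.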
Overall, the problem with the normal distribution is a big one that both academics and professionals are aware of. Given that the risk and return trade-off is at the centre of what we do, we make sure we are measuring risk realistically, not just simply.