Minimum Volatility Portfolio

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) elaborates on the concept of the Minimum Volatility Portfolio, which is derived from Modern Portfolio Theory (MPT) and is also used in practice to build investment funds.

This article is structured as follows: we introduce the concept of Minimum Volatility Portfolio. Next, we present some interesting academic findings, and we finish by presenting a theoretical example to support the explanations given in this article.

Introduction

The minimum volatility portfolio represents a portfolio of assets with the lowest possible risk for an investor and is located on the far-left side of the efficient frontier. Note that the minimum volatility portfolio is also called the minimum variance portfolio or more precisely the global minimum volatility portfolio (to distinguish it from other optimal portfolios obtained for higher risk levels).

Modern Portfolio Theory’s fundamental insight had significant implications for portfolio construction and asset allocation techniques. In the late 1970s, the portfolio management business attempted to capture the market portfolio return. However, as financial research progressed and some substantial contributions were made, new factor characteristics emerged to capture extra performance. The financial literature has long suggested that taking on more risk is required to earn a higher return, but this is a common misconception among investors. While extremely volatile stocks can produce spectacular gains, academic research has repeatedly shown that low-volatility stocks provide greater risk-adjusted returns over time. This phenomenon is known as the “low volatility anomaly,” and it is for this reason that many long-term investors include low volatility factor strategies in their portfolios. This approach is consistent with Harry Markowitz’s renowned 1952 article, in which he demonstrates the merits of asset diversification to form a portfolio with the maximum risk-adjusted return.

Academic Literature

Markowitz is widely regarded as a pioneer in financial economics and finance due to the theoretical implications and practical applications of his work in financial markets. Markowitz received the Nobel Prize in 1990 for his contributions to these fields, which he outlined in his 1952 Journal of Finance article titled “Portfolio Selection.” His seminal work paved the way for what is now commonly known as “Modern Portfolio Theory” (MPT).

Overall, the risk component of MPT may be evaluated using multiple mathematical formulations and managed through the notion of diversification, which requires building a portfolio of assets that exhibits the lowest level of risk for a given level of expected return (or, equivalently, a portfolio of assets that exhibits the highest level of expected return for a given level of risk). Such portfolios are called efficient portfolios. In order to construct optimal portfolios, the theory makes a number of fundamental assumptions regarding the asset selection behavior of individuals. These assumptions are (Markowitz, 1952):

  • The only two elements that influence an investor’s decision are the expected rate of return and the variance. (In other words, investors use Markowitz’s two-parameter model to make decisions.)
  • Investors are risk averse. (That is, when faced with two investments with the same expected return but two different risks, investors will favor the one with the lower risk.)
  • All investors strive to maximize expected return at a given level of risk.
  • All investors have the same expectations regarding the expected return, variance, and covariances for all risky assets. This assumption is known as the homogeneous expectations assumption.
  • All investors have a one-period investment horizon.

The minimum volatility portfolio (MVP) exists only in theory. In practice, the MVP can only be estimated retrospectively (ex post) for a particular sample size and return frequency. This means that several minimum volatility portfolios exist, each aiming to minimize future volatility (ex ante). The majority of minimum volatility portfolios have large average exposures to low-volatility and low-beta stocks (Robeco, 2010).

Example

To illustrate the concept of the minimum volatility portfolio, we consider an investment universe composed of three assets with the following characteristics (expected return, volatility and correlation):

  • Asset 1: Expected return of 10% and volatility of 10%
  • Asset 2: Expected return of 15% and volatility of 20%
  • Asset 3: Expected return of 22% and volatility of 35%
  • Correlation between Asset 1 and Asset 2: 0.30
  • Correlation between Asset 1 and Asset 3: 0.80
  • Correlation between Asset 2 and Asset 3: 0.50

The first step to obtain the minimum variance portfolio is to construct the portfolio efficient frontier. This curve represents all the portfolios that are optimal in the mean-variance sense. After solving the optimization program, we obtain the weights of the optimal portfolios. Figure 1 plots the efficient frontier obtained from this example. As captured by the plot, the minimum variance portfolio in this three-asset universe is basically concentrated on one holding (100% on Asset 1). In this instance, an investor who wishes to minimize portfolio risk would allocate 100% to Asset 1 since it has the lowest volatility of the three assets retained in this analysis. The investor would earn an expected return of 10% for an annualized volatility of 10% (Figure 1).

Figure 1. Minimum Volatility Portfolio (MVP) and the Efficient Frontier.
Source: computation by the author.
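
For readers who want to reproduce the optimization programmatically, below is a minimal sketch in Python (assuming numpy and scipy are available). It minimizes the portfolio variance subject to full-investment and no-short-selling constraints using the example data above; the solver's weights may differ slightly from the allocation quoted in the text depending on the constraints retained.

```python
import numpy as np
from scipy.optimize import minimize

# Characteristics of the three assets of the example above
vols = np.array([0.10, 0.20, 0.35])          # volatilities of Assets 1, 2, 3
corr = np.array([[1.0, 0.3, 0.8],
                 [0.3, 1.0, 0.5],
                 [0.8, 0.5, 1.0]])           # correlation matrix
cov = np.outer(vols, vols) * corr            # variance-covariance matrix

n = len(vols)
constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)  # weights sum to one
bounds = [(0.0, 1.0)] * n                    # no short selling
w0 = np.ones(n) / n                          # equally-weighted starting point

res = minimize(lambda w: w @ cov @ w, w0, bounds=bounds, constraints=constraints)
print("Minimum variance weights:", res.x.round(4))
print("Minimum volatility:", np.sqrt(res.fun).round(4))
```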

Excel file to build the Minimum Volatility Portfolio

You can download below an Excel file in order to build the Minimum Volatility portfolio.

Download the Excel file to build the Minimum Volatility Portfolio

Why should I be interested in this post?

The objective of portfolio management is to optimize the returns of the entire portfolio, not just of one or two stocks. By monitoring and maintaining your investment portfolio, you can accumulate sizable capital to fulfil a variety of financial objectives, including retirement planning. This article helps to understand the fundamentals behind portfolio construction and investing.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Markowitz Modern Portfolio Theory

   ▶ Jayati WALIA Capital Asset Pricing Model (CAPM)

   ▶ Youssef LOURAOUI Origin of factor investing

   ▶ Youssef LOURAOUI Minimum Volatility Factor

   ▶ Youssef LOURAOUI Beta

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Lintner, John. 1965a. Security Prices, Risk, and Maximal Gains from Diversification. Journal of Finance, 20, 587-616.

Lintner, John. 1965b. The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets. Review of Economics and Statistics, 47, 13-37.

Markowitz, H., 1952. Portfolio Selection. The Journal of Finance, 7, 77-91.

Sharpe, William F. 1963. A Simplified Model for Portfolio Analysis. Management Science, 9, 277-293.

Sharpe, William F. 1964. Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk. Journal of Finance, 19, 425-442.

Business analysis

Robeco (2010) Ten things you should know about minimum volatility investing.

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Moments of a statistical distribution

Shengyu ZHENG

In this article, Shengyu ZHENG (ESSEC Business School, Grande Ecole Program – Master in Management, 2020-2023) presents the first four moments of a statistical distribution: the mean, the variance, the skewness, and the kurtosis.

Random variable

A random variable is a variable whose value is determined by the outcome of a random event. More precisely, the variable X is a measurable function from a set of outcomes (Ω) to a measurable space (E).

X : Ω → E

X is a real random variable provided that the measurable space (E) is, or is a subset of, the set of real numbers (ℝ).

As an example, I consider the return of an investment in Apple stock. Figure 1 below shows the time series of daily returns of Apple stock over the period from November 2017 to November 2022.

Figure 1. Time series of the returns of Apple stock.
Source: computation by the author (data: Yahoo! Finance).

Figure 2. Histogram of the returns of Apple stock.
Source: computation by the author (data: Yahoo! Finance).

Moments of a statistical distribution

The moment of order r ∈ ℕ is an indicator of the dispersion of the random variable X. The ordinary moment of order r is defined, if it exists, by the following formula:

mr = 𝔼(X^r)

We also have the central moment of order r defined, if it exists, by the following formula:

cr = 𝔼([X − 𝔼(X)]^r)

First moment: the mean

Definition

The mean or expected value of a random variable is the value expected on average if the same random experiment is repeated a large number of times. It corresponds to a probability-weighted average of the values that the variable can take, and it is therefore known as the theoretical mean or the true mean.

If a variable X takes an infinite number of values x1, x2, … with probabilities p1, p2, …, the expected value of X is defined as:

μ = m1 = 𝔼(X) = ∑i≥1 pi xi

The expected value exists provided that this sum is absolutely convergent.

Statistical estimation

The sample mean is an estimator of the expected value. This estimator is unbiased, consistent (by the law of large numbers), and asymptotically normally distributed (by the central limit theorem).

Given a sample of independent and identically distributed real random variables (X1, …, Xn), the sample mean is:

X̄ = (∑i=1..n Xi) / n

For a standard normal distribution (μ = 0 and σ = 1), the mean is equal to zero.

Second moment: the variance

Definition

The variance (second central moment) is a measure of the dispersion of values around the mean.

Var(X) = σ² = 𝔼[(X − μ)²]

It is the expected value of the squared deviation from the theoretical mean. It is therefore always non-negative.

For a standard normal distribution (μ = 0 and σ = 1), the variance is equal to one.

Statistical estimation

Given a sample (X1, …, Xn), we can estimate the theoretical variance using the sample variance:

S² = (∑i=1..n (Xi − X̄)²) / n

However, this estimator is biased, because 𝔼(S²) = ((n−1)/n) σ². We therefore use the unbiased estimator Š² = (∑i=1..n (Xi − X̄)²) / (n−1).
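
A quick numerical check of the two estimators with simulated data, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=1000)  # sample from N(0, 1)

s2_biased = np.var(x)            # divides by n (the biased estimator S²)
s2_unbiased = np.var(x, ddof=1)  # divides by n-1 (the unbiased estimator)
print(s2_biased, s2_unbiased)
```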

Application in finance

The variance is related to the volatility of a financial asset (the volatility being the standard deviation, that is, the square root of the variance). A high variance indicates a greater dispersion of returns, which is unfavorable from the point of view of rational investors, who are risk averse. This concept is a key parameter in Markowitz's Modern Portfolio Theory.

Third moment: the skewness

Definition

The skewness is the standardized third central moment, defined as:

γ1 = 𝔼[((X − μ)/σ)³]

The skewness measures the asymmetry of the distribution of a random variable. Three cases can be distinguished: the distribution can be skewed to the left, symmetric, or skewed to the right. A negative skewness indicates a distribution skewed to the left, whose left tail is heavier than the right tail. A zero skewness indicates a symmetric distribution, the two tails being equally heavy. Finally, a positive skewness indicates a distribution skewed to the right, whose right tail is heavier than the left tail.

For a normal distribution, the skewness is equal to zero since the distribution is symmetric around its mean.

Fourth moment: the kurtosis

Definition

The kurtosis is the standardized fourth central moment, defined by:

β2 = 𝔼[((X − μ)/σ)⁴]

It describes the peakedness of a distribution. A high kurtosis indicates that the distribution is rather peaked at its mean and has fatter tails.

The kurtosis of a normal distribution is equal to 3; such a distribution is called mesokurtic. Above this threshold, a distribution is called leptokurtic. The distributions observed in financial markets are mostly leptokurtic, implying that abnormal and extreme values are more frequent than under a Gaussian distribution. Conversely, a kurtosis below 3 indicates a platykurtic distribution, whose tails are thinner.

Example: distribution of the returns of an investment in Apple stock

We now give an example in finance by studying the distribution of the returns of Apple stock. From the data retrieved from Yahoo! Finance for the period from November 2017 to November 2022, we use the closing price column to compute daily returns. We then use Excel functions to compute the first four moments of the empirical distribution of Apple stock returns, as shown in the table below.

Table. Moments of the distribution of Apple stock returns.

For a standard normal distribution, the mean is zero, the variance is 1, the skewness is zero, and the kurtosis is 3. Compared with a normal distribution, the distribution of Apple stock returns has a slightly positive mean. This means that, in the long run, the return on an investment in this asset is positive. Its skewness is negative, indicating an asymmetry toward the left (negative values). Its kurtosis is greater than 3, which indicates that its tails are fatter than those of the normal distribution.
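
As an alternative to the Excel functions, here is a minimal Python sketch of the same computation, assuming the closing prices have been exported from Yahoo! Finance to a CSV file (the file name and column names are assumptions):

```python
import pandas as pd

# Closing prices assumed to be in "AAPL.csv" (hypothetical file)
# with "Date" and "Close" columns, as exported from Yahoo! Finance.
prices = pd.read_csv("AAPL.csv", index_col="Date", parse_dates=True)["Close"]
returns = prices.pct_change().dropna()       # daily returns

mean = returns.mean()                        # first moment
variance = returns.var()                     # second moment (unbiased, n-1)
skewness = returns.skew()                    # third standardized moment
kurtosis = returns.kurt() + 3                # pandas .kurt() is excess kurtosis, so add 3
print(mean, variance, skewness, kurtosis)
```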

Excel file to compute the moments

You can download the Excel file for the analysis of the moments of Apple stock returns via the link below:

Download the Excel file to analyze the moments of the distribution

Related posts on the SimTrade blog

▶ Shengyu ZHENG Catégories de mesures de risques

▶ Shengyu ZHENG Mesures de risques

Useful resources

Academic research

Robert C. Merton (1980) On estimating the expected return on the market: An exploratory investigation, Journal of Financial Economics, 8:4, 323-361.

Data

Yahoo! Finance Market data for Apple stock

About the author

This article was written in January 2023 by Shengyu ZHENG (ESSEC Business School, Grande Ecole Program – Master in Management, 2020-2023).

Arbitrage Pricing Theory (APT)

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the concept of arbitrage portfolio, a pillar concept in asset pricing theory.

This article is structured as follows: we present an introduction to the notion of arbitrage portfolio in the context of asset pricing, we present the assumptions and the mathematical foundations of the model, and we then illustrate it with a practical example.

Introduction

Arbitrage pricing theory (APT) is a method of explaining asset or portfolio returns that differs from the capital asset pricing model (CAPM). It was created in the 1970s by economist Stephen Ross. Arbitrage pricing theory has gained favor over the years because of its simpler assumptions. However, arbitrage pricing theory is far more difficult to apply in practice since it requires a large amount of data and complicated statistical analysis. The following points should be kept in mind when studying this model:

  • Arbitrage is the technique of buying and selling the same item at two different prices at the same time for a risk-free profit.
  • Arbitrage pricing theory (APT) in financial economics assumes that market inefficiencies emerge from time to time but are prevented from occurring by the efforts of arbitrageurs who discover and instantly remove such opportunities as they appear.
  • APT is formalized through a multi-factor formula that captures the linear relationship between the expected return on an asset and several macroeconomic variables.

The concept that mispriced assets can generate short-term, risk-free profit opportunities is inherent in the arbitrage pricing theory. The APT differs from the more traditional CAPM, which employs only a single factor (the market factor). Like the CAPM, however, the APT assumes that a factor model can accurately characterize the relationship between risk and return.

Assumptions of the APT model

Arbitrage pricing theory, unlike the capital asset pricing model, does not require that investors have efficient portfolios. However, the theory is guided by three underlying assumptions:

  • Systematic factors explain asset returns.
  • Diversification allows investors to create a portfolio of assets that eliminates specific risk.
  • There are no arbitrage opportunities among well-diversified investments. If arbitrage opportunities exist, they will be taken advantage of by investors.

To get a better grasp of the asset pricing theory behind this model, we recall in the following part the foundations of the CAPM as a complementary explanation for this article.

Capital Asset Pricing Model (CAPM)

William Sharpe, John Lintner, and Jan Mossin separately developed a key capital market theory based on Markowitz’s work: the Capital Asset Pricing Model (CAPM). The CAPM was a huge evolutionary step forward in capital market equilibrium theory, since it enabled investors to appropriately value assets in terms of systematic risk, defined as the market risk which cannot be neutralized by diversification. In their derivation of the CAPM, Sharpe, Mossin and Lintner made significant contributions to the concepts of the Efficient Frontier and the Capital Market Line. For this seminal contribution, Sharpe was awarded the Nobel Prize in Economics in 1990 (shared with Harry Markowitz and Merton Miller).

The CAPM is based on a set of market structure and investor hypotheses:

  • There are no intermediaries
  • There are no constraints (short selling is possible)
  • Supply and demand are in balance
  • There are no transaction costs
  • Investors maximize their portfolio value by maximizing the mean of projected returns while minimizing the variance of returns
  • Investors have simultaneous access to information in order to implement their investment plans
  • Investors are seen as “rational” and “risk averse”.

Under this framework, the expected return of a given asset is related to its risk measured by the beta and the market risk premium:

E(ri) = rf + βi × (E(rm) − rf)

Where:

  • E(ri) represents the expected return of asset i
  • rf the risk-free rate
  • βi the measure of the risk of asset i
  • E(rm) the expected return of the market
  • E(rm) − rf the market risk premium.

In this model, the beta (β) parameter is a key parameter and is defined as:

βi = Cov(ri, rm) / σ²(rm)

Where:

  • Cov(ri, rm) represents the covariance of the return of asset i with the return of the market
  • σ2(rm) is the variance of the return of the market.

The beta is a measure of how sensitive an asset is to market swings. This risk indicator aids investors in predicting the fluctuations of their asset in relation to the wider market. It compares the volatility of an asset to the systematic risk that exists in the market. The beta is a statistical term that denotes the slope of a line formed by a regression of data points comparing stock returns to market returns. It aids investors in understanding how the asset moves in relation to the market. According to Fama and French (2004), there are two ways to interpret the beta employed in the CAPM:

  • According to the CAPM formula, beta may be thought of in mathematical terms as the slope of the regression of the asset return on the market return. Thus, beta quantifies the asset’s sensitivity to changes in the market return.
  • According to the beta formula, it may be understood as the risk that each dollar invested in an asset adds to the market portfolio. This is an economic explanation based on the observation that the market portfolio’s risk (measured by σ2(rm)) is a weighted average of the covariance risks associated with the assets in the market portfolio, making beta a measure of the covariance risk associated with an asset in comparison to the variance of the market return.
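
Both interpretations boil down to the same computation. Below is a minimal sketch of estimating a beta from return series, assuming numpy is available (the return series are hypothetical):

```python
import numpy as np

# Hypothetical periodic returns for the asset and the market
r_asset = np.array([0.02, -0.01, 0.03, 0.01, -0.02])
r_market = np.array([0.015, -0.005, 0.02, 0.01, -0.015])

# Beta = Cov(r_i, r_m) / Var(r_m), i.e., the slope of the regression line
beta = np.cov(r_asset, r_market, ddof=1)[0, 1] / np.var(r_market, ddof=1)
print("beta:", round(beta, 4))
```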

Mathematical foundations

The APT can be described formally by the following equation:

E(rp) = rf + ∑k=1..K βk λk

Where:

  • E(rp) represents the expected return of portfolio p
  • rf the risk-free rate
  • βk the sensitivity of the return on portfolio p to the kth factor (fk)
  • λk the risk premium for the kth factor (fk)
  • K the number of risk factors
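
As a numerical illustration of this formula, here is a short sketch (the risk-free rate, betas, and risk premia are hypothetical):

```python
risk_free = 0.02                   # risk-free rate (assumed)
betas = [1.2, 0.5, -0.3]           # sensitivities to K = 3 factors (assumed)
premia = [0.04, 0.01, 0.02]        # factor risk premia lambda_k (assumed)

expected_return = risk_free + sum(b * l for b, l in zip(betas, premia))
print(expected_return)             # 0.02 + 0.048 + 0.005 - 0.006 = 0.067
```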

Richard Roll and Stephen Ross found that the APT can be sensitive to the following factors:

  • Expectations on inflation
  • Industrial production (GDP)
  • Risk premiums
  • Term structure of interest rates

Furthermore, the researchers claim that assets will have different sensitivities to the factors listed above, even if they have the same market beta as described by the CAPM.

Application

For this specific example, we want to understand the asset price behavior of two equity indexes (Nasdaq for the US and Nikkei 225 for Japan) and assess their sensitivity to different macroeconomic factors. We extract time series of Nasdaq equity index prices, Nikkei equity index prices, the USD/CNY FX spot rate and the US term structure of interest rates (10y-2y yield spread) over the last two decades from the FRED Economics website, a reliable source for macroeconomic data.

The first factor, the USD/CNY (US Dollar / Chinese Renminbi Yuan) exchange rate, is retained as the primary factor to explain portfolio returns. Given China’s position as a major economic player and one of the most important markets for US and Japanese corporations, analyzing the sensitivity of US and Japanese equity returns to changes in the USD/CNY FX spot rate can help in understanding the market dynamics underlying US and Japanese equity performance. For instance, Texas Instruments, which operates in the electronics and semiconductor sector, and Nike both have significant ties to the Chinese market, with an overall exposure of approximately 55% and 18%, respectively (Barrons, 2022). In the case of Japan, in 2017 the Japanese government invested 117 billion dollars in direct investment in northern China, one of the largest foreign investments in China. Similarly, large Japanese listed businesses get approximately 18% of their international revenues from the Chinese market (The Economist, 2019).

The second factor, the 10y-2y yield spread, is linked to the shape of the yield curve. An inverted yield curve indicates that long-term interest rates are lower than short-term interest rates: yields decrease as maturity lengthens. The inverted yield curve, also known as a negative yield curve, has historically been a reliable indicator of a recession. Analysts frequently condense yield curve signals into the difference between two maturities. According to the paper by Yu et al. (2013), there is a significant link between the slope of the yield curve and the performance of US equities between 2006 and 2012. Regardless of market capitalization, the impact of a steeper yield slope on stock prices was positive.

The APT applied to this example can be described formally by the following equation:

E(rp) = rf + βp, Chinese FX × λChinese FX + βp, US spread × λUS spread

Where:

  • E(rp) represents the expected return of portfolio p
  • rf the risk-free rate
  • βp, Chinese FX the sensitivity of the return on portfolio p to the USD/CNY FX spot rate
  • βp, US spread the sensitivity of the return on portfolio p to the US term structure
  • λChinese FX the risk premium for the FX risk factor
  • λUS spread the risk premium for the interest rate risk factor

We run a first regression of the Nikkei 225 Japanese equity index returns onto the macroeconomic variables retained in this analysis. We can highlight the following: both factors are statistically insignificant at the 10% significance level, indicating that the factors have poor power in explaining Nikkei 225 returns over the last two decades. The model has a low R² of 0.48%, which indicates that only 0.48% of the variation of Nikkei 225 returns can be attributed to changes in the USD/CNY FX spot rate and the US term structure of the yield curve (Table 1).

Table 1. Nikkei 225 equity index regression output.
Source: computation by the author (Data: FRED Economics)
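
Below is a minimal sketch of this type of two-factor time-series regression, assuming statsmodels is available; simulated series stand in for the FRED data, so the output will not reproduce the figures reported here:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 240                                  # about two decades of monthly observations
fx = rng.normal(0, 0.01, n)              # simulated changes in the USD/CNY spot rate
spread = rng.normal(0, 0.10, n)          # simulated changes in the 10y-2y yield spread
index_ret = 0.005 + 0.10 * fx + 0.01 * spread + rng.normal(0, 0.05, n)  # simulated returns

X = sm.add_constant(pd.DataFrame({"fx": fx, "spread": spread}))
model = sm.OLS(index_ret, X).fit()
print(model.summary())                   # coefficients, t-statistics, R-squared
```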

Figures 1 and 2 capture the linear relationship between, respectively, the USD/CNY FX spot rate and the US term structure, and the Nikkei 225 equity index.

Figure 1. Relationship between the USD/CNY FX spot rate and the Nikkei 225 equity index.
Source: computation by the author (Data: FRED Economics)

Figure 2. Relationship between the US term structure and the Nikkei 225 equity index.
Source: computation by the author (Data: FRED Economics)

We conduct a second regression of the Nasdaq US equity index returns on the retained macroeconomic variables. We may emphasize the following: both factors are statistically insignificant at the 10% significance level, indicating that they have a limited ability to explain Nasdaq returns during the past two decades. The model has a low R² of 4.45%, indicating that only 4.45% of the variation of Nasdaq returns can be attributed to changes in the USD/CNY FX spot rate and the US term structure of the yield curve (Table 2).

Table 2. Nasdaq equity index regression output.
Source: computation by the author (Data: FRED Economics)

Figures 3 and 4 capture the linear relationship between, respectively, the USD/CNY FX spot rate and the US term structure, and the Nasdaq equity index.

Figure 3. Relationship between the USD/CNY FX spot rate and the Nasdaq equity index.
Source: computation by the author (Data: FRED Economics)

Figure 4. Relationship between the US term structure and the Nasdaq equity index.
Source: computation by the author (Data: FRED Economics)

Applying APT

We can create a portfolio with the same factor sensitivities as the Arbitrage Portfolio by combining the first two index portfolios (with a Nasdaq index weight of 40% and a Nikkei index weight of 60%). This combination is referred to as the Constructed Index Portfolio. The Arbitrage Portfolio has a full weighting on the US equity index (100% Nasdaq equity index). The Constructed Index Portfolio has the same systematic factor betas as the Arbitrage Portfolio, but a higher expected return (Table 3).

Table 3. Index, Constructed and Arbitrage portfolio returns and sensitivities.
Source: computation by the author (Data: FRED Economics)

As a result, the Arbitrage Portfolio is overvalued. We would then sell shares of the Arbitrage Portfolio and use the proceeds to buy shares of the Constructed Index Portfolio. Because every investor would sell an overvalued portfolio and purchase an undervalued portfolio, any arbitrage profit would eventually be wiped out.

Excel file for the APT application

You can find below the Excel spreadsheet that complements the example above.

 Download the Excel file to assess an arbitrage portfolio example

Why should I be interested in this post?

In the CAPM, the factor is the market factor representing the global uncertainty of the market. In the late 1970s, the portfolio management industry aimed to capture the market portfolio return, but as financial research advanced and certain significant contributions were made, this gave rise to other factor characteristics to capture some additional performance. Analyzing the historical contributions that underpins factor investing is fundamental in order to have a better understanding of the subject.

Related posts on the SimTrade blog

▶ Jayati WALIA Capital Asset Pricing Model (CAPM)

▶ Youssef LOURAOUI Origin of factor investing

▶ Youssef LOURAOUI Factor Investing

▶ Youssef LOURAOUI Fama-MacBeth regression method: stock and portfolio approach

▶ Youssef LOURAOUI Fama-MacBeth regression method: Analysis of the market factor

▶ Youssef LOURAOUI Fama-MacBeth regression method: N-factors application

▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Lintner, J. (1965) The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets. The Review of Economics and Statistics 47(1): 13-37.

Lintner, J. (1965) Security Prices, Risk and Maximal Gains from Diversification. The Journal of Finance 20(4): 587-615.

Roll, R. and S. Ross (1995) The Arbitrage Pricing Theory Approach to Strategic Portfolio Planning. Financial Analysts Journal 51: 122-131.

Ross, S. (1976) The Arbitrage Theory of Capital Asset Pricing. Journal of Economic Theory 13(3): 341-360.

Sharpe, W.F. (1963) A Simplified Model for Portfolio Analysis. Management Science 9(2): 277-293.

Sharpe, W.F. (1964) Capital Asset Prices: A theory of Market Equilibrium under Conditions of Risk. The Journal of Finance 19(3): 425-442.

Yu, G., P. Fuller, D. Didia (2013) The Impact of Yield Slope on Stock Performance. Southwestern Economic Review 40(1): 1-10.

Business Analysis

Barrons (2022) Apple, Nike, and 6 Other Companies With Big Exposure to China.

The Economist (2019) Japan Inc has thrived in China of late.

Time series

FRED Economics (2022) Chinese Yuan Renminbi to U.S. Dollar Spot Exchange Rate (DEXCHUS).

FRED Economics (2022) 10-Year Treasury Constant Maturity Minus 2-Year Treasury Constant Maturity (T10Y2Y).

FRED Economics (2022) NASDAQ Composite Index (NASDAQCOM).

FRED Economics (2022) Nikkei Stock Average, Nikkei 225 (NIKKEI225).

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Interest Rate Swaps

Akshit GUPTA

This article written by Akshit GUPTA (ESSEC Business School, Grande Ecole Program – Master in Management, 2019-2022) presents the derivative contract of interest rate swaps used to hedge interest rate risk in financial markets.

Introduction

In financial markets, interest rate swaps are derivative contracts in which two counterparties exchange one stream of future interest payments for another over a pre-defined number of years. The interest payments are based on a pre-determined notional principal amount and usually involve the exchange of a fixed interest rate for a floating interest rate (or sometimes the exchange of one floating interest rate for another).

While hedging does not necessarily eliminate the entire risk for any investment, it does limit or offset any potential losses that the investor can incur.

Forward rate agreements (FRA)

To understand interest rate swaps, we first need to understand forward rate agreements in financial markets.

In an FRA, two counterparties agree to an exchange of cashflows in the future based on two different interest rates, one of which is a fixed rate and the other is a floating rate. The interest rate payments are based on a pre-determined notional amount and maturity period. This derivative contract has a single settlement date. LIBOR (London Interbank Offered Rate) is frequently used as the floating rate index to determine the floating interest rate in the swap.

The payoff of the contract (for the party that pays the fixed rate and receives the floating rate) is given by the formula below:

Payoff = Notional amount × (LIBOR − Fixed interest rate) × (Number of days / 360)

where the rates are expressed as decimals and the day-count convention (here Actual/360) depends on the market.

Interest rate swaps (IRS)

An interest rate swap can be seen as a pre-defined series of forward rate agreements in which the floating interest rate is exchanged for the same fixed interest rate at each settlement date.

In an interest rate swap, the position taken by the receiver of the fixed interest rate is called “long receiver swaps” and the position taken by the payer of the fixed interest rate is called “long payer swaps”.

How does an interest rate swap work?

Interest rate swaps can be used in different market situations based on a counterparty’s prediction about future interest rates.

For example, when a firm paying a fixed rate of interest on an existing loan believes that the interest rate will decrease in the future, it may enter an interest rate swap agreement in which it pays a floating rate and receives a fixed rate to benefit from its expectation about the path of future interest rates. Conversely, if the firm paying a floating interest rate on an existing loan believes that the interest rate will increase in the future, it may enter an interest rate swap in which it pays a fixed rate and receives a floating rate to benefit from its expectation about the path of future interest rates.

Example

Let’s consider a 4-year swap between two counterparties A and B starting on January 1, 2021. In this swap, Counterparty A agrees to pay a fixed interest rate of 3.60% per annum to Counterparty B every six months on an agreed notional amount of €10 million. Counterparty B agrees to pay a floating interest rate based on the 6-month LIBOR rate, currently at 2.60%, to Counterparty A on the same notional amount. Here, the position taken by Counterparty A is called a long payer swap and the position taken by Counterparty B is called a long receiver swap. The projected cash flows received by Counterparty A based on the assumed LIBOR rates are shown in the table below:

Table 1. Cash flows for an interest rate swap.
Source: computation by the author

In the above example, a total of eight payments (two per year) are made on the interest rate swap. The fixed payment is fixed at €180,000 per observation date, whereas the floating payment depends on the prevailing LIBOR rate at the observation date. The net receipt for Counterparty A is equal to €77,500 at the end of the 4 years. Note that in an interest rate swap the notional amount of €10 million is not exchanged between the counterparties since it has no financial value to either of the counterparties, which is why it is called the “notional” amount.

Note that when the two counterparties enter the swap, the fixed rate is set such that the swap value is equal to zero.
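
The cash-flow mechanics can be sketched in Python as follows; the LIBOR path below is purely hypothetical and does not correspond to the rates assumed in Table 1:

```python
# Swap cash flows from Counterparty A's perspective:
# A pays 3.60% fixed semi-annually and receives 6-month LIBOR on a €10m notional.
notional = 10_000_000
fixed_rate = 0.036
libor_path = [0.026, 0.030, 0.034, 0.038, 0.040, 0.042, 0.038, 0.036]  # 8 assumed fixings

net_receipts = []
for libor in libor_path:
    fixed_leg = notional * fixed_rate / 2    # fixed payment per period: €180,000
    floating_leg = notional * libor / 2      # floating receipt, set by the LIBOR fixing
    net_receipts.append(floating_leg - fixed_leg)

print([round(x) for x in net_receipts])
print("total net receipt for A:", round(sum(net_receipts)))
```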

Excel file for interest rate swaps

You can download below the Excel file for the computation of the cash flows for an interest rate swap.

Download the Excel file to compute the cash flows of an interest rate swap

Related Posts

   ▶ Jayati WALIA Derivative Markets

   ▶ Akshit GUPTA Forward Contracts

   ▶ Akshit GUPTA Options

Useful resources

Hull J.C. (2015) Options, Futures, and Other Derivatives, Ninth Edition, Chapter 7 – Swaps, 180-211.

www.longin.fr Pricer of interest rate swaps

About the author

Article written in December 2022 by Akshit GUPTA (ESSEC Business School, Grande Ecole Program – Master in Management, 2019-2022).

Asset allocation techniques

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the concept of asset allocation, a pillar concept in portfolio management.

This article is structured as follows: we introduce the notion of asset allocation, and we use a practical example to illustrate this notion.

Introduction

An investment portfolio is a collection of assets that are owned by an investor. Individual assets, such as bonds and stocks, as well as asset baskets, such as mutual funds or exchange-traded funds, can be employed. When constructing a portfolio, investors often consider both the projected return and risk. A well-balanced portfolio includes a wide range of investments to benefit from diversification.

Asset allocation is one of the steps in the portfolio construction process. At this point, the investor (or fund manager) must divide the available capital among a number of assets that meet the criteria in terms of risk and return trade-off, while adhering to the investment policy, which specifies the amount of exposure an investor can have and the amount of risk the fund manager can hold in his or her portfolio.

The next phase in the process is to evaluate the risk and return characteristics of the various assets. The analyst develops economic and market expectations that can be used to derive a recommended asset allocation for the client. The distribution among equities, fixed-income securities, and cash; sub-asset classes, such as corporate and government bonds; and regional weightings within asset classes are all decisions that must be made as part of the portfolio’s asset allocation. Real estate, commodities, hedge funds, and private equity are examples of alternative assets. Economists and market strategists may set the top-down view on economic conditions and broad market movements. The returns on various asset classes are likely to be affected by economic conditions; for example, equities may do well when economic growth is surprisingly robust whereas bonds may do poorly if inflation soars. These situations will be forecasted by economists and strategists.

The top-down approach

A top-down approach begins with an assessment of macroeconomic factors. The investor examines markets and sectors based on the existing and projected economic climate in order to invest in those that are expected to perform well. Only then are specific companies within these categories evaluated for investment.

The bottom up approach

A bottom-up approach focuses on company-specific variables such as management quality and business potential rather than on economic cycles or industry analysis. It is less concerned with broad economic trends than top-down analysis is, and instead focuses on company particulars.

Types of asset allocations

Arnott and Fabozzi (1992) divide asset allocation into three types: 1) policy asset allocation; 2) dynamic asset allocation; and 3) tactical asset allocation.

Policy asset allocation

The policy asset allocation decision is a long-term asset allocation decision in which the investor aims to determine a suitable long-term “normal” asset mix that represents an optimal mixture of controlled risk and enhanced return. The strategies that offer the best prospects of strong long-term returns are inherently risky. The strategies that offer the greatest safety tend to offer only moderate return opportunities. The balancing of these opposing goals is known as policy asset allocation. In dynamic asset allocation, by contrast, the asset mix (i.e., the allocation among asset classes) is mechanistically altered in response to changing market conditions. Once the policy asset allocation has been established, the investor can focus on possible active deviations from the normal asset mix established by the policy. Assume the long-run asset mix is established to be 60% equities and 40% bonds. A variation from this mix under certain situations may be tolerated. A decision to diverge from this mix is generally referred to as tactical asset allocation if it is based on rigorous objective measurements of value. Tactical asset allocation does not consist of a single, well-defined strategy.

Dynamic asset allocation

The term “dynamic asset allocation” can refer to both long-term policy decisions and intermediate-term efforts to strategically position the portfolio to benefit from big market swings, as well as aggressive tactical strategies. As an investor’s risk expectations and tolerance for risk fluctuate, the normal or policy asset allocation may change. It is vital to understand what aspect of the asset allocation decision is being discussed and in what context the words “asset allocation” are being used when delving into asset allocation difficulties.

Tactical asset allocation

Tactical asset allocation broadly refers to active strategies that seek to enhance performance by opportunistically adjusting the asset mix of a portfolio in response to the changing patterns of reward available in the capital markets. Notably, tactical asset allocation tends to refer to disciplined techniques for evaluating anticipated rates of return on various asset classes and constructing an asset allocation response intended to capture larger rewards.

Asset allocation application: an example

For this example, let’s suppose the following fictitious scenario with real data involved:

Mr. Dubois recently sold his local home construction company in the south of France to a multinational homebuilder with a nationwide reach. He accepted a job as regional manager for that national homebuilder after selling his company. He is now thinking about the financial future for himself and his family. He is looking forward to his new job, where he enjoys his new role and where he will earn enough money to meet his family’s short- and medium-term liquidity needs. He feels strongly that he should not invest the proceeds of the sale of his company in real estate because his income already relies on the state of the real estate market. He speaks with a financial adviser at his bank about how to invest his money so that he can retire comfortably in 20 years.

The initial portfolio objective they set seeks a nominal return of 7% with a Sharpe ratio of at least 1 (for this example, we consider the risk-free rate to be equal to zero). The bank’s asset management division provides Mr Dubois and his adviser with the following data (Figure 1) on market expectations.

Figure 1. Risk, return and correlation estimates on market expectations.
Source: computation by the author (Data: Refinitiv Eikon).

In order to replicate a global asset allocation approach, we shortlisted a number of trackers to represent our investment universe. To keep a well-balanced approach, we took trackers representing the main asset classes: global equities (VTI – Vanguard Total Stock Market ETF), bonds (IEF – iShares 7-10 Year Treasury Bond ETF and TLT – iShares 20+ Year Treasury Bond ETF) and commodities (DBC – Invesco DB Commodity Index Tracking Fund and GLD – SPDR Gold Shares). To create the optimal asset allocation, we extracted the equivalent of a ten-year timeframe from Refinitiv Eikon to capture the overall performance of the portfolio in the long run. As captured in Figure 1, global equities were the best performing asset class during the period covered (13.02% annualized return), followed by long-term bonds (4.78% annualized return) and by gold (4.65% annualized return).

Figure 2. Asset class performance (rebased to 100).
Source: computation by the author (Data: Refinitiv Eikon).

After analyzing the historical returns of the assets retained, as well as their volatilities and covariances (and correlations), we can apply mean-variance portfolio optimization to determine the optimal portfolio. The optimal asset allocation is the end result of the optimization procedure. The optimal portfolio, according to Markowitz’ seminal study on portfolio construction, seeks to create the best risk-return trade-off for an investor. After performing the calculations, we find that investing 42.15% in the VTI fund, 30.69% in the IEF fund, 24.88% in the TLT fund, and 2.28% in the GLD fund yields the optimal asset allocation. As reflected in this asset allocation, the investor invests in a mix of equities (about 43%) and bonds (approximately 55%), with a marginal position (around 2%) in gold, which is widely employed in portfolio management as a diversifier due to its low correlation with other asset classes. This allocation has a clearly defensive nature: it relies significantly on the bond part of the allocation to act as a hedge while relying on the equity part as the main driver of returns.

As shown in Figure 3, the optimal asset allocation has a better Sharpe ratio (1.27 vs 0.62) and lies farther along the efficient frontier than a naive equally-weighted allocation. The optimal portfolio is the only one with the required characteristics, as the investor’s goal was to attain a 7% expected return with a minimum Sharpe ratio of 1.

Figure 3. Optimal asset allocation and the Efficient Frontier plot.
Source: computation by the author (Data: Refinitiv Eikon).
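
Below is a minimal sketch of the mean-variance optimization underlying this allocation; the expected returns, volatilities, and correlations are placeholders, not the estimates used in the article, so the resulting weights will differ:

```python
import numpy as np
from scipy.optimize import minimize

tickers = ["VTI", "IEF", "TLT", "DBC", "GLD"]
mu = np.array([0.13, 0.03, 0.048, 0.02, 0.046])   # placeholder expected returns
vols = np.array([0.15, 0.06, 0.13, 0.17, 0.15])   # placeholder volatilities
corr = np.eye(5)                                  # placeholder correlations (identity)
cov = np.outer(vols, vols) * corr

target = 0.07  # the 7% nominal return goal of the example

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},    # fully invested
        {"type": "eq", "fun": lambda w: w @ mu - target})  # hit the return target
bounds = [(0.0, 1.0)] * len(tickers)                       # long-only

res = minimize(lambda w: w @ cov @ w, np.ones(5) / 5, bounds=bounds, constraints=cons)
w = res.x
print(dict(zip(tickers, w.round(4))))
print("Expected return:", round(w @ mu, 4), "Volatility:", round(np.sqrt(w @ cov @ w), 4))
```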

Will this allocation, however, continue to perform well in the future? The answer ultimately depends on how future expected returns, volatilities, and correlations, as well as the market regime, differ from the estimates used in this analysis.

Excel file for asset allocation

You can find below the Excel spreadsheet that complements the example above.

 Download the Excel file for asset allocation

Why should I be interested in this post?

The purpose of portfolio management is to maximize (expected) returns on the entire portfolio, not just on one or two stocks for a given level of risk. By monitoring and maintaining your investment portfolio, you can build a substantial amount of wealth for a variety of financial goals, such as retirement planning. This post facilitates comprehension of the fundamentals underlying portfolio construction and investing.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Markowitz Modern Portfolio Theory

   ▶ Youssef LOURAOUI Optimal portfolio

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Arnott, R. D., and F. J. Fabozzi. 1992. The many dimensions of the asset allocation decision. In Active asset allocation, edited by R. Arnott and F. J. Fabozzi. Chicago: Probus Publishing.

Fabozzi, F.J., 2009. Institutional Investment Management: Equity and Bond Portfolio Strategies and Applications. I (4-6). John Wiley and Sons Edition.

Drake, P.P. and Fabozzi, F.J., 2010. The Basics of Finance: An Introduction to Financial Markets, Business Finance, and Portfolio Management. John Wiley and Sons Edition.

About the author

The article was written in December 2022 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Quantitative equity investing

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) elaborates on the concept of quantitative equity investing, a type of investment approach in the equity trading space.

This article is structured as follows: we introduce quantitative equity investing, we present a review of the major types of quantitative equity strategies, and we finish with a conclusion.

Introduction

Quantitative equity investing refers to funds that use model-driven decision making when trading in the equity space. Quantitative analysts program their trading rules into computer systems and trade algorithmically, under human oversight.

Quantitative investing has several advantages and disadvantages over discretionary trading. The disadvantages are that the trading rule cannot be as personalized to each unique case and cannot depend on “soft” information such as human judgment. These disadvantages may be lessened as processing power and model complexity improve. For example, quantitative models may use textual analysis to examine transcripts of a firm’s conference calls with equity analysts, determining whether certain phrases are commonly used or performing more advanced analysis.

The advantages of quantitative investing include the fact that it may be applied to a diverse group of stocks, resulting in great diversification. When a quantitative analyst builds an advanced investment model, it can be applied to thousands of stocks all around the world at the same time. Second, the quantitative modeling rigor may be able to overcome many of the behavioral biases that commonly impact human judgment, including those that produce trading opportunities in the first place. Third, using past data, the quant’s trading principles can be backtested (Pedersen, 2015).

Types of quantitative equity strategies

There are three types of quantitative equity strategies: fundamental quantitative investing, statistical arbitrage, and high-frequency trading (HFT). These three types of quantitative investing differ in various ways, including their conceptual base, turnover, capacity, how trades are determined, and their ability to be backtested.

Fundamental quantitative investing

Fundamental quantitative investing, like discretionary trading, tries to use fundamental analysis in a systematic manner. Fundamental quantitative investing is thus founded on economic and financial theory, as well as statistical data analysis. Given that prices and fundamentals only fluctuate gradually, fundamental quantitative investing typically has a turnover of days to months and a high capacity (meaning that a large amount of money can be invested in the strategy), owing to extensive diversification.

Statistical arbitrage

Statistical arbitrage aims to capitalize on price differences between closely linked stocks. As a result, it is founded on a grasp of arbitrage relations and statistics, and its turnover is often faster than that of fundamental quants. Statistical arbitrage has a lower capacity due to faster trading (and possibly fewer stocks having arbitrage spreads).

High Frequency Trading (HFT)

HFT is based on statistics, information processing, and engineering, as the success of an HFT is determined in part by the speed with which they can trade. HFTs focus on having superfast computers and computer programs, as well as co-locating their computers at exchanges, actually trying to get their computer as close to the exchange server as possible, using fast cables, and so on. HFTs have the fastest trading turnover and, as a result, the lowest capacity.

The three types of quants also differ in how they make trades: Fundamental quants typically make their deals ex ante, statistical arbitrage traders make their trades gradually, and high-frequency traders let the market make their transactions. A fundamental quantitative model, for example, identifies high-expected-return stocks and then buys them, almost always having their orders filled; a statistical arbitrage model seeks to buy a mispriced stock but may terminate the trading scheme before completion if prices have moved adversely; and, finally, an HFT model may submit limit orders to both buy and sell to several exchanges, allowing the market to determine which ones are hit. Because of this trading structure, fundamental quant investing can be simulated with some reliability via a backtest; statistical arbitrage backtests rely heavily on assumptions on execution times, transaction costs, and fill rates; and HFT strategies are frequently difficult to simulate reliably, so HFTs must rely on experiments.

Table 1. Quantitative equity investing: main categories and characteristics.
Source: Pedersen (2015).

Conclusion

Quants run their models on hundreds, if not thousands, of stocks. Because diversification eliminates most idiosyncratic risk, firm-specific shocks tend to wash out at the portfolio level, and any single position is too tiny to make a major impact in performance.

An equity market neutral portfolio eliminates overall stock market risk by being equally long and short. Some quants attempt to establish market neutrality by ensuring that the dollar exposure of the long side equals the dollar value of all short bets. This technique, however, is only effective if the longs and the shorts are equally risky. As a result, quants attempt to balance the market beta on both the long and short sides. Some quants attempt to be both dollar neutral and beta neutral.
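
A small numerical sketch of the difference between dollar neutrality and beta neutrality (all figures hypothetical):

```python
# Long book: $100 with beta 1.2; the short book has beta 0.8 (all numbers hypothetical).
long_value, long_beta = 100.0, 1.2
short_beta = 0.8

# Dollar-neutral: short the same dollar amount as the long side.
short_value = long_value
net_beta_dollars = long_value * long_beta - short_value * short_beta
print("Residual beta exposure (dollar-neutral):", net_beta_dollars)  # 120 - 80 = 40

# Beta-neutral: size the short side so that beta exposures offset exactly.
short_value_beta_neutral = long_value * long_beta / short_beta
print("Short dollars needed for beta neutrality:", short_value_beta_neutral)  # 150
```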

Why should I be interested in this post?

Quantitative equity strategies may provide an opportunity for investors to diversify their global portfolios. Including hedge funds in a portfolio can help investors obtain absolute returns that are uncorrelated with typical bond/equity returns.

For practitioners, learning how to incorporate hedge funds into a standard portfolio and understanding the risks associated with hedge fund investing can be beneficial.

Understanding if hedge funds are truly providing “excess returns” and deconstructing the sources of return can be beneficial to academics. Another challenge is determining whether there is any “performance persistence” in hedge fund returns.

Getting a job at a hedge fund might be a profitable career path for students. Understanding the market, the players, the strategies, and the industry’s current trends can help you gain a job as a hedge fund analyst or simply enhance your knowledge of another asset class.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Portfolio

   ▶ Youssef LOURAOUI Long-short strategy

Useful resources

Academic research

Pedersen, L. H., 2015. Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined. Chapter 9: 133-164. Princeton University Press.

About the author

The article was written in December 2022 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Optimal portfolio

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the concept of optimal portfolio, which is central in portfolio management.

This article is structured as follows: we first define the notion of an optimal portfolio (in the mean-variance framework) and we then illustrate the concept of optimal portfolio with an example.

Introduction

An investor’s investment portfolio is a collection of assets that he or she possesses. Individual assets such as bonds and equities can be used, as can asset baskets such as mutual funds or exchange-traded funds (ETFs). When constructing a portfolio, investors typically evaluate the expected return as well as the risk. A well-balanced portfolio contains a diverse variety of investments.

An optimal portfolio is a collection of assets that maximizes the trade-off between expected return and risk: the portfolio with the highest expected return for a given level of risk, or the portfolio with the lowest risk for a given level of expected return.

To obtain the optimal portfolio, Markowitz formulated the following dual optimization programs:

The first optimization seeks to maximize the expected return of the portfolio for a given level of risk, subject to the budget constraint (the weights of the portfolio sum to one).

Maximize (over w): wᵗμ subject to wᵗVw = σP² and wᵗe = 1

The second optimization seeks to minimize the variance of the portfolio for a given level of expected return, subject to the budget constraint (the weights of the portfolio sum to one).

Minimize (over w): wᵗVw subject to wᵗμ = μP and wᵗe = 1

Mathematical foundations

The investment universe is composed of N assets characterized by their expected returns μ and variance-covariance matrix V. For a given level of expected return μP, the Markowitz model gives the composition of the optimal portfolio. The vector of weights of the optimal portfolio is given by the following formula:

wP = λV⁻¹μ + γV⁻¹e with λ = (CμP − A) / (BC − A²) and γ = (B − AμP) / (BC − A²)

With the following notations:

  • wP = vector of asset weights of the portfolio
  • μP = desired level of expected return
  • e = vector of ones
  • μ = vector of expected returns
  • V = variance-covariance matrix of returns
  • V-1 = inverse of the variance-covariance matrix
  • t = transpose operation for vectors and matrices

A, B and C are intermediate parameters computed below:

A = eᵗV⁻¹μ, B = μᵗV⁻¹μ, C = eᵗV⁻¹e

The variance of the optimal portfolio is computed as follows:

σP² = (CμP² − 2AμP + B) / (BC − A²)

To calculate the standard deviation of the optimal portfolio, we take the square root of the variance.
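
A minimal numerical sketch of this closed-form solution (the inputs below are illustrative, and the closed form allows short positions since it is unconstrained):

```python
import numpy as np

# Illustrative inputs for a three-asset universe
mu = np.array([0.10, 0.15, 0.22])            # vector of expected returns
vols = np.array([0.10, 0.20, 0.35])          # volatilities
corr = np.array([[1.0, 0.3, 0.8],
                 [0.3, 1.0, 0.5],
                 [0.8, 0.5, 1.0]])
V = np.outer(vols, vols) * corr              # variance-covariance matrix
V_inv = np.linalg.inv(V)
e = np.ones(len(mu))                         # vector of ones

# Intermediate parameters A, B, C and the determinant D = BC - A²
A = e @ V_inv @ mu
B = mu @ V_inv @ mu
C = e @ V_inv @ e
D = B * C - A**2

mu_p = 0.12                                  # desired level of expected return
w_p = ((C * mu_p - A) * (V_inv @ mu) + (B - A * mu_p) * (V_inv @ e)) / D
var_p = (C * mu_p**2 - 2 * A * mu_p + B) / D

print("Optimal weights:", w_p.round(4))
print("Volatility:", np.sqrt(var_p).round(4))
```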

Optimal portfolio application: the case of two assets

To create the optimal portfolio, we first obtain monthly historical data for the last two years from Bloomberg for the two stocks that will comprise our portfolio: Apple and CMS Energy Corporation. Apple operates in the technology sector, while CMS Energy Corporation is an American company that operates entirely in the energy sector. Apple’s historical return over the two years covered was 41.86%, with a volatility of 35.11%. Meanwhile, CMS Energy Corporation’s historical return was 13.95%, with a far lower volatility of 15.16%.

Given their risk and return profiles, Apple is an aggressive stock pick in our example, while CMS Energy is a much more defensive stock that serves as a hedge. The correlation between the two stocks is 0.19, a weak positive correlation indicating that they tend to move in the same direction, but only loosely. In this example, we also consider the market portfolio, defined as a theoretical portfolio that reflects the return of the whole investment universe, which is captured here by a broad US equity index (the S&P 500 index).

As captured in Figure 1, CMS Energy suffered less severe losses than Apple. When compared to the red bars, the blue bars are far more volatile and sharp in terms of the size of the move in both directions.

Figure 1. Apple and CMS Energy Corporation return breakdown.
Source: computation by the author (Data: Bloomberg)

After analyzing the historical return of both stocks, as well as their volatilities and covariance (and correlation), we can use mean-variance portfolio optimization to find the optimal portfolio. According to Markowitz's foundational study on portfolio construction, the optimal portfolio achieves the best risk-return trade-off for an investor. After performing the computations, we find that the optimal portfolio is composed of 45% Apple stock and 55% CMS Energy Corporation stock. This portfolio would return 26.51% with a volatility of 19.23%. As captured in Figure 2, the optimal portfolio is higher on the efficient frontier line and has a higher Sharpe ratio than the theoretical market portfolio (1.27 vs 1.23).

Figure 2. Optimal portfolio.
 Optimal portfolio plot 2 asset
Source: computation by the author (Data: Bloomberg)
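As a sanity check on the figures above, the expected return and the volatility of the 45/55 portfolio can be recomputed from the standard two-asset formulas. This is a minimal Python sketch; the variable names are ours.

```python
import numpy as np

# Figures from the example above (annualized)
mu = np.array([0.4186, 0.1395])    # expected returns: Apple, CMS Energy
sig = np.array([0.3511, 0.1516])   # volatilities
rho = 0.19                         # correlation between the two stocks
w = np.array([0.45, 0.55])         # optimal weights found above

exp_ret = w @ mu
var = (w[0] * sig[0]) ** 2 + (w[1] * sig[1]) ** 2 \
      + 2 * w[0] * w[1] * rho * sig[0] * sig[1]
print(f"Expected return: {exp_ret:.2%}")        # ~26.51%
print(f"Volatility:      {np.sqrt(var):.2%}")   # ~19.2%
```

Running this snippet reproduces the expected return of 26.51% and a volatility of about 19.2%, consistent with the figures reported above.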

You can find below the Excel spreadsheet that complements the example above.

 Optimal portfolio spreadsheet for two assets

Optimal portfolio application: the general case

To obtain meaningful results, we built a large dataset by downloading the equivalent of 23 years of market data from a data provider (in this example, Bloomberg). After obtaining all the necessary statistical data, namely the expected return and the volatility (measured by the standard deviation of returns) of each stock over the period, we generate the variance-covariance matrix. Table 1 depicts the expected return and volatility of each stock retained in this analysis.

Table 1. Asset characteristics of Apple, Amazon, Microsoft, Goldman Sachs, and Pfizer.
img_SimTrade_implementing_Markowitz_spreadsheet_1
Source: computation by the author.

After computing the expected returns, volatilities, and the variance-covariance matrix of returns, we can start the optimization task by setting a desired expected return. With the data fed into the appropriate cells, the model completes the optimization task. For a 20% desired expected return, we get the following results (Table 2).

Table 2. Asset weights for an optimal portfolio.
Optimal portfolio case 1
Source: computation by the author.
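For readers who prefer code to a spreadsheet, the same optimization task can be sketched in Python with a numerical optimizer. The inputs below are hypothetical placeholders, not the Bloomberg data of Table 1, and the function name is ours.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(mu, V, target_return, allow_short=True):
    """Minimize portfolio variance for a target expected return.

    Numerical counterpart of the spreadsheet optimization (e.g. Excel Solver).
    """
    n = len(mu)
    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},           # budget constraint
        {"type": "eq", "fun": lambda w: w @ mu - target_return},  # return constraint
    ]
    bounds = None if allow_short else [(0.0, 1.0)] * n            # optional no short selling
    res = minimize(lambda w: w @ V @ w, x0=np.full(n, 1.0 / n),
                   bounds=bounds, constraints=constraints)
    return res.x

# Hypothetical inputs for five assets (expected returns and covariance matrix)
mu = np.array([0.25, 0.20, 0.22, 0.12, 0.08])
V = np.diag([0.30, 0.28, 0.25, 0.22, 0.18]) ** 2  # simplistic: no cross-correlations
w = min_variance_weights(mu, V, target_return=0.20)
print(w.round(4), f"volatility = {np.sqrt(w @ V @ w):.2%}")
```

The `allow_short` flag illustrates the role of the weight bounds: with bounds set to [0, 1], the optimizer solves the no-short-selling version of the program.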

To demonstrate the effect of diversification in the reduction of volatility, we can trace the Markowitz efficient frontier by plotting, for each level of desired expected return, the volatility of the corresponding optimal portfolio. The Markowitz efficient frontier is depicted in Figure 3 for various levels of expected return. We highlight the portfolio with a 20% expected return and its respective volatility in the plot (Figure 3).

Figure 3. Optimal portfolio plot for 5 asset case.
Optimal portfolio case 1
Source: computation by the author.

You can download the Excel file below to use the Markowitz portfolio allocation model.

 Download the Excel file for the optimal portfolio with n asset case

Why should I be interested in this post?

The purpose of portfolio management is to maximize the (expected) returns on the entire portfolio, not just on one or two stocks. By monitoring and maintaining your investment portfolio, you can build a substantial amount of wealth for a variety of financial goals such as retirement planning. This post facilitates comprehension of the fundamentals underlying portfolio construction and investing.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Portfolio

   ▶ Youssef LOURAOUI Factor Investing

   ▶ Youssef LOURAOUI Origin of factor investing

   ▶ Youssef LOURAOUI Markowitz Modern Portfolio Theory

Useful resources

Academic research

Pamela, D. and Fabozzi, F., 2010. The Basics of Finance: An Introduction to Financial Markets, Business Finance, and Portfolio Management. John Wiley and Sons Edition.

Markowitz, H., 1952. Portfolio Selection. The Journal of Finance, 7(1): 77-91.

About the author

The article was written in December 2022 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Long-short equity strategy

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the long-short equity strategy, one of the pioneering strategies in the hedge fund industry. The goal of the long-short equity investment strategy is to buy undervalued stocks and sell short overvalued ones.

This article is structured as follows: we introduce the long-short strategy principle. Then, we present a practical case study to grasp the overall methodology of this strategy. We conclude with a performance analysis of this strategy in comparison with a global benchmark (MSCI All World Index).

Introduction

According to Credit Suisse, a long-short strategy can be defined as follows: “Long-short equity funds invest on both long and short sides of equity markets, generally focusing on diversifying or hedging across particular sectors, regions, or market capitalizations. Managers have the flexibility to shift from value to growth; small to medium to large capitalization stocks; and net long to net short. Managers can also trade equity futures and options as well as equity related securities and debt or build portfolios that are more concentrated than traditional long-only equity funds.”

This strategy has the particularity of potentially generating returns in both rising and falling markets. However, stock selection is a key concern, and the stock-picking ability of the fund manager is what makes this strategy profitable (or not!). The trade-off of this approach is to reduce market risk in exchange for specific risk. Another key characteristic of this type of strategy is that, overall, funds relying on long-short strategies are net long in their trading exposure (long bias).

Equity strategies

In the equity universe, we can separate long-short equity strategies into discretionary long-short equity, dedicated short bias, and quantitative.

Discretionary long-short

Discretionary long-short equity managers typically decide whether to buy or sell stocks based on a fundamental review of the value of each firm, which includes evaluating its growth prospects and comparing its profitability to its valuation. These fund managers also evaluate the management of the company by visiting managers and firms. Additionally, they investigate the accounting figures to judge their accuracy and predict future cash flows. Equity long-short managers typically make predictions about particular companies, but they can also express opinions on entire industries.

Value investors, a subset of equity managers, concentrate on acquiring undervalued companies and holding these stocks for the long run. A good illustration of a value investor is Warren Buffett. Since companies only become inexpensive when other investors stop investing in them, putting this trading approach into practice frequently entails being a contrarian (buying assets after a price decrease). Because of this, cheap stocks are frequently out of favour or purchased while others are in a panic. Traders note that deviating from the crowd is more difficult than it seems.

Dedicated short bias

Like equity long-short investing, dedicated short bias is a trading technique that focuses on identifying companies to sell short. Short selling means making a prediction that the share price will decline. Just as purchasing stock entails profiting if the price increases, holding a short position entails profiting if the price decreases. Dedicated short-bias managers search for companies that are declining. Since they work against the prevailing uptrend in markets (stocks rise more frequently than they fall, which is known as the equity risk premium), they make up a very small proportion of hedge funds.

Most hedge funds in general, as well as almost all equity long-short hedge funds and dedicated short-bias hedge funds, engage in discretionary trading, which refers to the trader’s ability to decide whether to buy or sell based on his or her judgement and an evaluation of the market based on past performance, various types of information, intuition, and other factors.

Quantitative

Quantitative investing might be seen as an alternative to this traditional style of trading. Quants create systems that methodically carry out the stated definitions of their trading rules. They use complex processing of ideas that are difficult to analyse using non-quantitative methods to gain a slight advantage on each of numerous small, diversified trades. To accomplish this, they combine a wealth of data with tools and insights from a variety of fields, including economics, finance, statistics, mathematics, computer science, and engineering, to identify relationships that market participants may not have immediately fully incorporated in the price. Quantitative traders use computer systems that exploit these relationships to generate trading signals, optimise portfolios considering trading expenses, and execute trades using automated systems that send hundreds of orders every few seconds. In other words, data is fed into computers that execute various programmes under the supervision of humans to conduct trading (Pedersen, 2015).

Example of a long-short equity strategy

The purpose of employing a long-short strategy is to profit in both bullish and bearish markets. To measure the profitability of this strategy, we implemented a long-short strategy from the beginning of January 2022 to June 2022. In this time range, we are long Exxon Mobil stock and short Tesla stock. The data are extracted from the Bloomberg terminal. The strategy of going long Exxon Mobil and short Tesla is purely educational. Its basic idea is to profit from rising oil prices (leading to a price increase for Exxon Mobil) and rising interest rates (leading to a price decrease for Tesla). Over the same period, the S&P 500 index dropped 23%, while the Nasdaq Composite lost more than 30%. The Nasdaq Composite is dominated by rapidly developing technology companies that are especially vulnerable to rising interest rates.

Overall, the strategy's net market exposure is zero because we are 100% long Exxon Mobil and 100% short Tesla stock. The strategy succeeded in earning significant returns on both the long and short legs of the trade over the six-month timeframe. It yielded a 99.5 percent return, with a 36.8 percent gain on the Exxon Mobil shares and a 62.8 percent return on the short Tesla position. Figure 1 shows the performance of each equity across time.

Figure 1. Long-short equity strategy performance over time
Source: computation by the author (Data: Bloomberg)
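The return computation behind this example can be sketched as follows. The prices below are illustrative placeholders, not the actual Bloomberg data used in Figure 1.

```python
# Minimal sketch of the long-short P&L computation (illustrative prices)
p_xom_start, p_xom_end = 61.0, 83.5      # long leg: Exxon Mobil
p_tsla_start, p_tsla_end = 399.9, 148.8  # short leg: Tesla (hypothetical prices)

r_long = p_xom_end / p_xom_start - 1         # gain if the long leg rises
r_short = -(p_tsla_end / p_tsla_start - 1)   # gain if the short leg falls

# Dollar-neutral: 100% long XOM financed by a 100% short TSLA position
strategy_return = r_long + r_short
print(f"Long: {r_long:.1%}, Short: {r_short:.1%}, Total: {strategy_return:.1%}")
```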

You can find below the Excel spreadsheet that complements the example above.

 Download the Excel file to analyse a long-short equity strategy

Performance of the long-short equity strategy

To capture the performance of the long-short equity strategy, we use the Credit Suisse hedge fund strategy index. To compare the performance of the global equity market with the long-short hedge fund strategy, we examine the rebased performance of the Credit Suisse index with respect to the MSCI All World Index. Over the period from 2002 to 2022, the long-short equity strategy index generated an annualised return of 5.96% with an annualised volatility of 7.33%, leading to a Sharpe ratio of 0.18. Over the same period, the MSCI All World Index generated an annualised return of 6.00% with an annualised volatility of 15.71%, leading to a Sharpe ratio of 0.11. The correlation of the long-short equity strategy with the MSCI All World Index is equal to 0.09, which is close to zero. Overall, the Credit Suisse long-short equity index returned slightly less than the MSCI All World Index, but with a much lower volatility, leading to a higher Sharpe ratio (0.18 vs 0.11).

Figure 2. Performance of the long-short equity strategy compared to the MSCI All-World Index across time.
Source: computation by the author (Data: Bloomberg)
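The annualised statistics used in this comparison can be computed as in the following minimal sketch, assuming monthly return series and a risk-free rate supplied as an input (the Sharpe ratios above depend on the risk-free rate retained).

```python
import numpy as np

def annualized_stats(monthly_returns, risk_free_rate=0.0):
    """Annualized return, volatility and Sharpe ratio from monthly returns."""
    r = np.asarray(monthly_returns)
    ann_return = (1 + r).prod() ** (12 / len(r)) - 1   # geometric annualization
    ann_vol = r.std(ddof=1) * np.sqrt(12)              # scale monthly vol by sqrt(12)
    sharpe = (ann_return - risk_free_rate) / ann_vol
    return ann_return, ann_vol, sharpe
```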

You can find below the Excel spreadsheet that complements the explanations about the Credit Suisse hedge fund strategy index.

 Download the Excel file to analyse the long-short equity strategy

Why should I be interested in this post?

Long-short funds seek to reduce negative risk while increasing market upside. They might, for example, invest in inexpensive stocks that the fund managers believe will rise in price while simultaneously shorting overvalued stocks to cut losses. Other strategies used by long-short funds to lessen market volatility include leverage and derivatives. Understanding the profits and risks of such a strategy might assist investors in incorporating this hedge fund strategy into their portfolio allocation.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Pedersen, L. H., 2015. Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined. Princeton University Press.

Business Analysis

BlackRock Long-short strategy

BlackRock Investment Outlook

Credit Suisse Hedge fund strategy

Credit Suisse Hedge fund performance

Credit Suisse Long-short strategy

Credit Suisse Long-short performance benchmark

About the author

The article was written in December 2022 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Fama-MacBeth two-step regression method: the case of K risk factors

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the Fama-MacBeth two-step regression method used to test asset pricing models in the case of K risk factors.

This article is structured as follows: we introduce the Fama-MacBeth two-step regression method. Then, we present the mathematical foundation that underpins their approach for K risk factors. We provide an illustration with the 3-factor model developed by Fama and French (1993).

Introduction

Risk factors are frequently employed to explain asset returns in asset pricing theories. These risk factors may be macroeconomic (such as consumer inflation or unemployment) or microeconomic (such as firm size or various accounting and financial metrics of the firms). The Fama-MacBeth two-step regression approach found a practical way for measuring how correctly these risk factors explain asset or portfolio returns. The aim of the model is to determine the risk premium associated with the exposure to these risk factors.

The first step is to regress the return of every asset against one or more risk factors using a time-series approach. We obtain the return exposure to each factor called the “betas” or the “factor exposures” or the “factor loadings”.

The second step is to regress the returns of all assets against the asset betas obtained in Step 1 using a cross-section approach. We obtain the risk premium for each factor. Then, Fama and MacBeth assess the expected premium over time for a unit exposure to each risk factor by averaging the estimated coefficients over time.

Mathematical foundations

We describe below the mathematical foundations of the Fama-MacBeth regression method for a K-factor application. In the analysis, we investigate the Fama-French three-factor model in order to understand the significance of its factors as fundamental drivers of returns for investors under the Fama-MacBeth framework.

The model considers the following inputs:

  • The return of N assets denoted by Ri for asset i observed every day over the time period [0, T].
  • The risk factors denoted by Fk for k varying from 1 to K.

Step 1: time-series analysis of returns

For each asset i from 1 to N, we estimate the following linear regression model:

Fama-French time-series regression
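For reference, the first-step regression estimated for each asset i can be written as follows (our notation, consistent with the inputs listed above):

\[ R_{i,t} = \alpha_i + \sum_{k=1}^{K} \beta_{i,k} \, F_{k,t} + \epsilon_{i,t}, \qquad t = 1, \dots, T \]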

From this model, we obtain the coefficients βi,k, the beta of asset i associated with the kth risk factor.

Step 2: cross-sectional analysis of returns

For each period t from 1 to T, we estimate the following linear regression model:

Fama-French cross-sectional regression
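For reference, the second-step regression estimated at each date t can be written as follows (our notation, using the betas estimated in Step 1):

\[ R_{i,t} = \gamma_{0,t} + \sum_{k=1}^{K} \gamma_{k,t} \, \hat{\beta}_{i,k} + \eta_{i,t}, \qquad i = 1, \dots, N \]

The risk premium of factor k is then estimated as the time-series average of the coefficients: \( \hat{\lambda}_k = \frac{1}{T} \sum_{t=1}^{T} \hat{\gamma}_{k,t} \).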

Application: the Fama-French 3-Factor model

The Fama-French 3-factor model is an illustration of the Fama-MacBeth two-step regression method in the case of K risk factors (K=3). The three factors are the market (MKT) factor, the small minus big (SMB) factor, and the high minus low (HML) factor. The SMB factor measures the difference in expected returns between small and big firms (in terms of market capitalization). The HML factor measures the difference in expected returns between value stocks and growth stocks.

The model considers the following inputs:

  • The return of N assets denoted by Ri for asset i observed every day over the time period [0, T].
  • The risk factors: FMKT associated with the market (MKT) factor, FSMB associated with the SMB ("Small Minus Big") factor, which measures the difference in expected returns between small and big firms (in terms of market capitalization), and FHML associated with the HML ("High Minus Low") factor, which measures the difference in expected returns between value stocks and growth stocks.

Step 1: time-series regression

img_SimTrade_Fama_French_time_series_regression

Step 2: cross-sectional regression

img_SimTrade_Fama_French_cross_sectional_regression

Figure 1 represents for a given period the cross-sectional regression of the return of all individual assets with respect to their estimated individual beta for the MKT factor.

Figure 1. Cross-sectional regression for the market factor.
 Cross-section regression for the MKT factor
Source: computation by the author.

Figure 2 represents for a given period the cross-sectional regression of the return of all individual assets with respect to their estimated individual beta for the SMB factor.

Figure 2. Cross-sectional regression for the SMB factor.
 Cross-section regression for the SMB factor
Source: computation by the author.

Figure 3 represents for a given period the cross-sectional regression of the return of all individual assets with respect to their estimated individual beta for the HML factor.

Figure 3. Cross-sectional regression for the HML factor.
 Cross-section regression for the HML factor
Source: computation by the author.

Empirical study of the Fama-MacBeth regression

The seminal paper by Fama and MacBeth (1973) was based on an analysis of the market factor by assessing constructed portfolios of similar betas ranked by increasing values. This approach helped overcome the shortcoming regarding the stability of the beta and correct for the conditional heteroscedasticity arising from the computation of betas for individual stocks. They then ran the cross-sectional regression of monthly portfolio returns on equity betas to account for the dynamic nature of stock returns, which helps compute robust standard errors and assess whether there is any heteroscedasticity in the regression. The conclusion of the seminal paper suggests that the beta is “dead”, in the sense that it cannot explain returns on its own (Fama and MacBeth, 1973).

Empirical study: Stock approach for a K-factor model

We collected a sample of end-of-day stock prices for 440 significant firms in the US economy from January 3, 2012 to December 31, 2021. We calculated daily returns for each stock as well as the factors used in this analysis. We chose the S&P 500 index to represent the market since it is an important stock benchmark that captures the US equity market.

Time-series regression

To assess the multi-factor model, we use the three Fama-French factors as the main factors in this analysis. In the first step, we regress the returns of each stock against the factor returns. This time-series regression is run on a subperiod of the whole period, from January 03, 2012, to December 31, 2018, and is statistically tested. We use a t-statistic to assess the significance of the regression. Because the p-value is in the rejection zone (less than the significance level of 5%), we can conclude that, at first sight, the factors explain an investor's returns. However, as we will see later in the article, when we account for the second regression proposed by Fama and MacBeth, the factors retained in this analysis are not capable of explaining asset returns on their own. The stock approach produces statistically significant results in the time-series regression at the 10%, 5%, and even 1% significance levels. As shown in Table 1, the p-value is in the rejection range, indicating that the factors are statistically significant.

Table 1. Time-series regression t-statistic result.
 Cross-section regression
Source: computation by the author.

Cross-sectional regression

Over a second period from January 04, 2019, to December 31, 2021, we compute the dynamic regression of returns at each data point in time with respect to the betas computed in Step 1.

However, when the results are examined using the cross-sectional regression, they are not statistically significant, as indicated by the p-value in Table 2. We cannot reject the null hypothesis: the premium investors require cannot be explained solely by the factors assessed. This suggests that the factors retained in the analysis fail to adequately explain the behavior of asset returns. These results are consistent with the Fama-MacBeth article (1973).

Table 2. Cross-section regression t-statistic result.
Source: computation by the author.
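For readers who want to reproduce the two-step procedure programmatically, here is a minimal Python sketch with plain OLS. Function and variable names are ours, and, for brevity, it estimates the betas and runs the cross-sectional regressions on the same sample, whereas the study above uses two distinct subperiods.

```python
import numpy as np

def fama_macbeth(R, F):
    """Two-step Fama-MacBeth procedure with plain OLS (minimal sketch).

    R : (T, N) array of asset returns; F : (T, K) array of factor returns.
    Returns the time-averaged second-step coefficients (intercept first,
    then one risk premium per factor) and their Fama-MacBeth t-statistics.
    """
    T, N = R.shape

    # Step 1: time-series regression of each asset's returns on the factors
    X = np.column_stack([np.ones(T), F])            # add an intercept
    coefs, *_ = np.linalg.lstsq(X, R, rcond=None)   # shape (K+1, N)
    betas = coefs[1:].T                             # factor loadings, shape (N, K)

    # Step 2: one cross-sectional regression of returns on betas per period
    Z = np.column_stack([np.ones(N), betas])
    gammas = np.vstack([np.linalg.lstsq(Z, R[t], rcond=None)[0] for t in range(T)])

    # Average the coefficients over time; t-stat = mean / (std / sqrt(T))
    premia = gammas.mean(axis=0)
    t_stats = premia / (gammas.std(axis=0, ddof=1) / np.sqrt(T))
    return premia, t_stats
```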

Excel file

You can find below the Excel spreadsheet that complements the explanations covered in this article.

 Download the Excel file to perform a Fama-MacBeth regression method with K factors

Why should I be interested in this post?

Fama-MacBeth made a significant contribution to the field of econometrics. Their findings cleared the way for asset pricing theory to gain traction in academic literature. The Capital Asset Pricing Model (CAPM) is far too simplistic for a real-world scenario since the market factor is not the only source that drives returns; asset return is generated from a range of factors, each of which influences the overall return. This framework helps in capturing other sources of return.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Fama-MacBeth regression method: stock and portfolio approach

   ▶ Youssef LOURAOUI Fama-MacBeth regression method: Analysis of the market factor

   ▶ Jayati WALIA Capital Asset Pricing Model (CAPM)

   ▶ Youssef LOURAOUI Security Market Line (SML)

   ▶ Youssef LOURAOUI Origin of factor investing

   ▶ Youssef LOURAOUI Factor Investing

Useful resources

Academic research

Brooks, C., 2019. Introductory Econometrics for Finance (4th ed.). Cambridge: Cambridge University Press. doi:10.1017/9781108524872

Fama, E. F., MacBeth, J. D., 1973. Risk, Return, and Equilibrium: Empirical Tests. Journal of Political Economy, 81(3), 607–636.

Roll R., 1977. A critique of the Asset Pricing Theory’s test, Part I: On Past and Potential Testability of the Theory. Journal of Financial Economics, 1, 129-176.

American Finance Association & Journal of Finance (2008) Masters of Finance: Eugene Fama (YouTube video)

About the author

The article was written in December 2022 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Fama-MacBeth regression method: the stock approach vs the portfolio approach

Youssef_Louraoui

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the Fama-MacBeth regression method used to test asset pricing models and addresses the difference when applying the regression method on individual stocks or portfolios composed of stocks with similar betas.

This article is structured as follows: we introduce the Fama-MacBeth testing method. Then, we present the mathematical foundation that underpins their approach. We conduct an empirical analysis with both the stock and the portfolio approach. We conclude with a discussion of econometric issues.

Introduction

Risk factors are frequently employed to explain asset returns in asset pricing theories. These risk factors may be macroeconomic (such as consumer inflation or unemployment) or microeconomic (such as firm size or various accounting and financial metrics of the firms). The Fama-MacBeth two-step regression approach found a practical way for measuring how correctly these risk factors explain asset or portfolio returns. The aim of the model is to determine the risk premium associated with the exposure to these risk factors.

As a reminder, the Fama-MacBeth regression method is composed of two steps: step 1 with a time-series regression and step 2 with a cross-section regression.

The first step is to regress the return of every stock against one or more risk factors using a time-series regression. We obtain the return exposure to each factor called the “betas” or the “factor exposures” or the “factor loadings”.

The second step is to regress the returns of all stocks against the asset betas obtained in the first step using a cross-section regression for different periods. We obtain the risk premium for each factor used to test the asset pricing model.

The implementation of this method can be done with individual stocks or with portfolios of stocks, as proposed by Fama and MacBeth (1973). Their argument is that betas are more stable when estimated on portfolios. In this article, we illustrate the difference between the two implementations.

Fama and MacBeth (1973) implemented with individual stocks

We downloaded a sample of daily prices of stocks composing the S&P500 index over the period from January 03, 2012, to December 31, 2021 (we selected the stocks present from the beginning to the end of the period reducing our sample from 500 to 440 stocks). We computed daily returns for each stock and for the market factor retained in this study. To represent the market, we chose the S&P500 index, an important global stock benchmark capturing the US equity market.

The procedure to derive the Fama-MacBeth regression using the stock approach is as follows:

Step 1: time-series regression

We compute the beta of the stocks with respect to the market factor for the period covered (time-series regression). We estimate the beta of each stock related to the S&P500 index. The beta is computed as the slope of the linear regression of the stock return on the market return (S&P500 index return). This time-series regression is run on a subperiod of the whole period from January 03, 2012, to December 31, 2018.

Step 2: cross-sectional regression

Over a second period from January 04, 2019, to December 31, 2021, we compute the dynamic regression of returns at each data point in time with respect to the betas computed in Step 1.

With this procedure, we obtain a risk premium that would represent the relationship between the stock returns at each data point in time with their respective beta for the sample analyzed.

Test the statistical significance of the results obtained from the regression

Results in the time-series regression using the stock approach are statistically significant. As shown in Table 1, the p-value is in the rejection area, which implies that the market factor can be considered as a driver of returns.

Table 1. Time-series regression t-statistic result.
img_SimTrade_Fama_MacBeth_cross_sectional_regression_stat_result
Source: computation by the author.

However, when analyzed in the cross-section regression, the results are no longer statistically significant. As shown in Table 2, the p-value is not in the rejection area. We cannot reject the null hypothesis (H0: non-significance of the market factor). The market factor alone cannot explain the premium investors require. This means that the market factor fails to properly explain the behavior of asset returns, which undermines the validity of the CAPM framework. These results are in line with the Fama-MacBeth paper (1973).

Table 2. Cross-section regression t-statistic result.
img_SimTrade_Fama_MacBeth_cross_sectional_regression_stat_result
Source: computation by the author.

You can find below the Excel spreadsheet that complements the explanations covered in this part of the article (implementation of the Fama and MacBeth (1973) method with individual stocks).

 Download the Excel file to perform a Fama-MacBeth two-step regression method using the stock approach

Fama and MacBeth (1973) implemented with portfolios of stocks

The seminal paper by Fama and MacBeth (1973) was based on an analysis of the market factor by assessing constructed portfolios of similar betas ranked by increasing values. This approach helped overcome the shortcoming regarding the stability of the beta and correct for the conditional heteroscedasticity arising from the computation of betas for individual stocks. They then ran the cross-sectional regression of monthly portfolio returns on equity betas to account for the dynamic nature of stock returns, which helps compute robust standard errors and assess whether there is any heteroscedasticity in the regression. The conclusion of the seminal paper suggests that the beta is “dead”, in the sense that it cannot explain returns on its own (Fama and MacBeth, 1973).

The procedure to derive the Fama-MacBeth regression using the portfolio approach is as follows:

Step 1: time-series regression

We first compute the beta of the stocks with respect to the market factor for the period covered (time-series regression). We estimate the beta of each stock related to the S&P500 index. The beta is computed as the slope of the linear regression of the stock return on the market return (S&P500 index return). This time-series regression is run on a subperiod of the whole period from January 03, 2012, to December 31, 2015. We build twenty portfolios based on stock betas ranked in ascending order. The betas of the portfolios are then estimated again on a subperiod from January 04, 2016, to December 31, 2018.
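The portfolio-formation step described above can be sketched in Python as follows. This is one possible implementation of the beta ranking (function names are ours, and the bucketing into twenty equally populated groups uses pandas' qcut).

```python
import numpy as np
import pandas as pd

def beta_sorted_portfolios(returns, market, n_portfolios=20):
    """Group stocks into beta-ranked portfolios (sketch of the approach above).

    returns : DataFrame (dates x stocks) of stock returns
    market  : Series of market (S&P 500) returns over the same dates
    """
    # Beta of each stock = slope of the regression of its return on the market
    betas = returns.apply(lambda r: np.cov(r, market)[0, 1] / market.var())

    # Rank stocks by beta and assign them to n_portfolios equally populated buckets
    buckets = pd.qcut(betas.rank(method="first"), n_portfolios, labels=False)

    # Equal-weighted return of each beta-sorted portfolio
    portfolio_returns = {b: returns.loc[:, buckets == b].mean(axis=1)
                         for b in range(n_portfolios)}
    return betas, pd.DataFrame(portfolio_returns)
```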

It is challenging to maintain beta stability over time. Fama and MacBeth aimed to remedy this shortcoming with their portfolio technique. However, some issues must be addressed. When betas are calculated using a monthly time series, the statistical noise in the estimates is significantly reduced compared to shorter time frames (i.e., daily observations). When portfolio betas are constructed, the coefficient becomes considerably more stable than when individual betas are evaluated. This is due to the diversification effect that a portfolio produces, which considerably reduces the amount of specific risk.

Step 2: cross-sectional regression

Over a second period from January 03, 2019, to December 31, 2021, we compute the dynamic regression of portfolio returns at each data point in time with respect to the betas computed in Step 1.

With this procedure, we obtain a risk premium that would represent the relationship between the portfolio returns at each data point in time with their respective beta for the sample analyzed.

Test the statistical significance of the results obtained from the regression

Results in the cross-section regression using the portfolio approach are not statistically significant. As captured in Table 3, the p-value is not in the rejection area, which implies that the market factor is statistically insignificant and cannot be considered as a driver of returns.

Table 3. Cross-section regression with portfolio approach t-statistic result.
img_SimTrade_Fama_MacBeth_Portfolio_cross_sectional_regression_stat_result
Source: computation by the author.

You can find below the Excel spreadsheet that complements the explanations covered in this part of the article (implementation of the Fama and MacBeth (1973) method with portfolios of stocks).

 Download the Excel file to perform a Fama-MacBeth regression method using the portfolio approach

Econometric issues

Errors in data measurement

Because the regression uses a sample instead of the entire population, a certain margin of error must be accounted for: the betas used in the second step are estimates obtained in the first step, not the true values, which introduces an errors-in-variables problem.

Asset return heteroscedasticity

In econometrics, heteroscedasticity is an important concern since it results in unequal residual variance. A time series exhibiting heteroscedasticity has a non-constant variance, which weakens statistical inference and makes forecasting less reliable.

Asset return autocorrelation

Standard errors in Fama-MacBeth regressions are solely corrected for cross-sectional correlation. This method does not fix the standard errors for time-series autocorrelation. This is typically not a concern for stock trading, as daily and weekly holding periods have modest time-series autocorrelation, whereas autocorrelation is larger over long horizons. This suggests that Fama-MacBeth regression may not be applicable in many corporate finance contexts where project holding durations are typically lengthy.

Why should I be interested in this post?

Fama-MacBeth made a significant contribution to the field of econometrics. Their findings cleared the way for asset pricing theory to gain traction in academic literature. The Capital Asset Pricing Model (CAPM) is far too simplistic for a real-world scenario since the market factor is not the only source that drives returns; asset return is generated from a range of factors, each of which influences the overall return. This framework helps in capturing other sources of return.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Fama-MacBeth regression method: N-factors application

   ▶ Youssef LOURAOUI Fama-MacBeth regression method: Analysis of the market factor

   ▶ Jayati WALIA Capital Asset Pricing Model (CAPM)

   ▶ Youssef LOURAOUI Security Market Line (SML)

   ▶ Youssef LOURAOUI Origin of factor investing

   ▶ Youssef LOURAOUI Factor Investing

Useful resources

Academic research

Brooks, C., 2019. Introductory Econometrics for Finance (4th ed.). Cambridge: Cambridge University Press. doi:10.1017/9781108524872

Fama, E. F., MacBeth, J. D., 1973. Risk, Return, and Equilibrium: Empirical Tests. Journal of Political Economy, 81(3), 607–636.

Roll R., 1977. A critique of the Asset Pricing Theory’s test, Part I: On Past and Potential Testability of the Theory. Journal of Financial Economics, 1, 129-176.

American Finance Association & Journal of Finance (2008) Masters of Finance: Eugene Fama (YouTube video)

About the author

The article was written in December 2022 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Fama-MacBeth regression method: Analysis of the market factor

Youssef_Louraoui

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the Fama-MacBeth two-step regression method used to test asset pricing models. The seminal paper by Fama and MacBeth (1973) was based on an investigation of the market factor by evaluating portfolios of stocks with similar betas. In this article I will elaborate on the methodology and assess the statistical significance of the market factor as a fundamental driver of return.

This article is structured as follows: we introduce the Fama-MacBeth testing method used in asset pricing. Then, we present the mathematical foundation that underpins their approach. We then apply the Fama-MacBeth method to recent US stock market data. Finally, we present the limitations of their approach and conclude with a discussion of the generalization of the original study to other risk factors.

Introduction

The two-step regression method proposed by Fama-MacBeth was originally used in asset pricing to test the Capital Asset Pricing Model (CAPM). In this model, there is only one risk factor determining the variability of returns: the market factor.

The first step is to regress the return of every asset against the risk factor using a time-series approach. We obtain the return exposure to the factor called the “beta” or the “factor exposure” or the “factor loading”.

The second step is to regress the returns of all assets against the asset betas obtained in Step 1 during a given time period using a cross-section approach. We obtain the risk premium associated with the market factor. Then, Fama and MacBeth (1973) assess the expected premium over time for a unit exposure to the risk factor by averaging the estimated coefficients over time.

Mathematical foundations

We describe below the mathematical foundations for the Fama-MacBeth two-step regression method.

Step 1: time-series analysis of returns

The model considers the following inputs:

  • The return of N assets denoted by Ri for asset i observed over the time period [0, T].
  • The risk factor denoted by F for the market factor impacting the asset returns.

For each asset i (for i varying from 1 to N) we estimate the following time-series linear regression model:

Fama MacBeth time-series regression

From this model, we obtain the following coefficients: αi and βi which are specific to asset i.

Figure 1 represents for a given asset (Apple stocks) the regression of its return with respect to the S&P500 index return (representing the market factor in the CAPM). The slope of the regression line corresponds to the beta of the regression equation.

Figure 1. Step 1: time-series regression for a given asset (Apple stock and the S&P500 index).
 Time-series regression
Source: computation by the author.
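The regression behind Figure 1 boils down to estimating a slope, as in the following minimal sketch (the return series are illustrative placeholders, not the actual Apple / S&P 500 sample).

```python
import numpy as np

# Illustrative daily returns for one asset and the market
r_asset = np.array([0.021, -0.034, 0.051, 0.012, -0.008, 0.027])
r_market = np.array([0.015, -0.025, 0.032, 0.010, -0.004, 0.018])

# Fit the regression line: slope = beta, intercept = alpha
beta, alpha = np.polyfit(r_market, r_asset, deg=1)
print(f"alpha = {alpha:.4f}, beta = {beta:.2f}")
```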

Step 2: cross-sectional analysis of returns

For each period t (from t equal 1 to T), we estimate the following cross-section linear regression model:

Fama MacBeth cross-section regression

Figure 2 represents, for a given point in time, the cross-sectional regression of the returns of all individual assets with respect to their estimated individual market betas.

Figure 2. Step 2: cross-section regression for a given time-period.
Cross-section regression
Source: computation by the author.

We then average the gammas obtained at each data point to estimate the risk premium, and test whether this average is statistically different from zero (see the formulas below). This is how the Fama-MacBeth method is used to test asset pricing models.
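These computations can be written as follows (a standard formulation, where \( s(\hat{\gamma}_t) \) denotes the sample standard deviation of the estimated gammas over the T periods):

\[ \hat{\gamma} = \frac{1}{T} \sum_{t=1}^{T} \hat{\gamma}_t, \qquad t(\hat{\gamma}) = \frac{\hat{\gamma}}{s(\hat{\gamma}_t) / \sqrt{T}} \]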

Empirical study of the Fama-MacBeth regression

The seminal paper by Fama and MacBeth (1973) was based on an analysis of the market factor by assessing constructed portfolios of similar betas ranked by increasing values. This approach helped overcome the shortcoming regarding the stability of the beta and correct for the conditional heteroscedasticity arising from the computation of betas for individual stocks. They then ran the cross-sectional regression of monthly portfolio returns on equity betas to account for the dynamic nature of stock returns, which helps compute robust standard errors and assess whether there is any heteroscedasticity in the regression. The conclusion of the seminal paper suggests that the beta is “dead”, in the sense that it cannot explain returns on its own (Fama and MacBeth, 1973).

Empirical study: Stock approach

We downloaded a sample of end-of-month stock prices of large firms in the US economy over the period from March 31, 2016, to March 31, 2022. We computed monthly returns. To represent the market, we chose the S&P500 index.

We then applied the Fama-MacBeth two-step regression method to test the market factor (CAPM).

Figure 3 depicts the computation of the average returns and betas of the stocks in the analysis.

Figure 3. Computation of average returns and betas of the stocks.
img_SimTrade_Fama_MacBeth_method_4
Source: computation by the author.

Figure 4 represents the first step of the Fama-MacBeth regression. We regress the average returns for each stock against their respective betas.

Figure 4. Step 1 of the regression: Time-series analysis of returns
img_SimTrade_Fama_MacBeth_method_1
Source: computation by the author.

The initial regression is statistically evaluated. To describe the behavior of the regression, we employ a t-statistic. Since the p-value is in the rejection area (less than the 5% significance level), we can deduce that the market factor can, at first sight, explain the returns of an investor. However, as we will see later in the article, when we account for the second regression formulated by Fama and MacBeth (1973), the market factor is not capable of explaining asset returns on its own.

Figure 5 represents Step 2 of the Fama-MacBeth regression, where we perform for a given data point a regression of all individual stock returns with their respective estimated market beta.

Figure 5. Step 2: cross-sectional analysis of return.
img_SimTrade_Fama_MacBeth_method_2
Source: computation by the author.

Figure 6 represents the hypothesis testing for the cross-sectional regression. From the results obtained, we can clearly see that the p-value is not in the rejection area (at a 5% significance level), hence we cannot reject the null hypothesis. This means that the market factor fails to explain properly the behavior of asset returns, which undermines the validity of the CAPM framework. These results are in line with Fama-MacBeth (1973).

Figure 6. Hypothesis testing of the cross-sectional regression.
img_SimTrade_Fama_MacBeth_method_1
Source: computation by the author.

Excel file for the Fama-MacBeth two-step regression method

You can find below the Excel spreadsheet that complements the explanations covered in this article to implement the Fama-MacBeth two-step regression method.

 Download the Excel file to perform a Fama-MacBeth two-step regression method

Limitations of the Fama-MacBeth approach

Selection of the market index

For the CAPM to be valid, we need to determine whether the market portfolio lies on the Markowitz efficient frontier. According to Roll (1977), the market portfolio is not observable because it cannot capture all asset classes (human capital, art, and real estate, among others). He therefore argues that the returns of the true market portfolio cannot be measured effectively, which makes the market portfolio an unreliable benchmark for testing the model.

Furthermore, the coefficients estimated in the time-series regressions are sensitive to the market index chosen for the study. These shortcomings must be taken into account when assessing CAPM studies.

Stability of the coefficients

The betas of individual assets are not stable over time. Fama and MacBeth attempted to address this shortcoming by implementing an innovative approach based on portfolios.

When betas are computed using a monthly time-series, the statistical noise of the time series is considerably reduced as opposed to shorter time frames (i.e., daily observation).

Using portfolio betas makes the coefficient much more stable than using individual betas. This is due to the diversification effect that a portfolio can achieve, reducing considerably the amount of specific risk.

Conclusion

Risk factors are frequently employed to explain asset returns in asset pricing theories. These risk factors may be macroeconomic (such as consumer inflation or unemployment) or microeconomic (such as firm size or various accounting and financial metrics of the firms). The Fama-MacBeth two-step regression approach found a practical way for measuring how correctly these risk factors explain asset or portfolio returns. The aim of the model is to determine the risk premium associated with the exposure to these risk factors.

Why should I be interested in this post?

Fama-MacBeth made a significant contribution to the field of econometrics. Their findings cleared the way for asset pricing theory to gain traction in academic literature. The Capital Asset Pricing Model (CAPM) is far too simplistic for a real-world scenario since the market factor is not the only source that drives returns; asset return is generated from a range of factors, each of which influences the overall return. This framework helps in capturing other sources of return.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Fama-MacBeth regression method: N-factors application

   ▶ Youssef LOURAOUI Fama-MacBeth regression method: stock and portfolio approach

   ▶ Jayati WALIA Capital Asset Pricing Model (CAPM)

   ▶ Youssef LOURAOUI Origin of factor investing

   ▶ Youssef LOURAOUI Factor Investing

Useful resources

Academic research

Brooks, C., 2019. Introductory Econometrics for Finance (4th ed.). Cambridge: Cambridge University Press. doi:10.1017/9781108524872

Fama, E. F., MacBeth, J. D., 1973. Risk, Return, and Equilibrium: Empirical Tests. Journal of Political Economy, 81(3), 607–636.

Roll R., 1977. A critique of the Asset Pricing Theory’s test, Part I: On Past and Potential Testability of the Theory. Journal of Financial Economics, 1, 129-176.

American Finance Association & Journal of Finance (2008) Masters of Finance: Eugene Fama (YouTube video)

Business Analysis

NEDL. 2022. Fama-MacBeth regression explained: calculating risk premia (Excel). [online] Available at: [Accessed 29 May 2022].

About the author

The article was written in December 2022 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Forex exchange markets

Nakul PANJABI

In this article, Nakul PANJABI (ESSEC Business School, Grande Ecole Program – Master in Management, 2021-2024) explains how the foreign exchange markets work.

Forex Market

Forex trading can be simply defined as the exchange of a unit of one currency for a certain number of units of another currency. It is the act of buying one currency while simultaneously selling another.

Foreign exchange markets (or forex markets) are markets where the currencies of different countries are traded. The forex market is a decentralised market in which all trades take place online in an over-the-counter (OTC) format. By trading volume, it is the largest financial market in the world, with a daily turnover of 6.6 trillion dollars in 2019. The major currencies traded are USD, EUR, GBP, JPY, and CHF.

Players

The main players in the market are Central Banks, Commercial banks, Brokers, Traders, Exporters and Importers, Immigrants, Investors and Tourists.

Central banks

Central banks are the most important players in the forex markets. They have a monopoly on the supply of their currency and therefore a tremendous influence on prices. Central banks' policies tend to protect the domestic currency against aggressive fluctuations in the forex markets.

Commercial banks

The second most important players in the forex market are the commercial banks. By quoting, on a daily basis, the foreign exchange rates at which they buy and sell, they "make the market". They also function as clearing houses for the market.

Brokers

Another important group is that of brokers. Brokers do not take positions in the market themselves but act as a link between sellers and buyers in exchange for a commission.

Types of Transactions in Forex Markets

Some of the transactions possible in the Forex Markets are as follows:

Spot transaction

A spot transaction uses the spot rate, and the currencies are exchanged within a two-day settlement period.

Forward transaction

A forward transaction is a transaction in which the currencies are exchanged at a defined future date (for example, 90 days after the deal) at an exchange rate fixed when the deal is struck. The exchange rate used is called the forward rate.

Future transaction

Futures are standardized Forward contracts. They are traded on Exchanges and are settled daily. The parties enter a contract with the exchange rather than with each other.

Swap transaction

A swap transaction involves the simultaneous borrowing and lending of two different currencies between two investors. One investor borrows a currency and lends another currency to the second investor. The obligation to repay the currencies serves as collateral, and the amounts are repaid at the forward rate.

Option transaction

A forex option gives an investor the right, but not the obligation, to exchange currencies at an agreed rate on a pre-defined date.

Peculiarities of Forex Markets

Trading of Forex is not much different from trading of any other asset such as stocks or bonds. However, it might not be as intuitive as trading of stocks or bonds because of its peculiarities. Some peculiarities of the Forex market are as follows:

Going long and short simultaneously

Since the goods traded in the market are currencies themselves, a trade in the forex market can be considered as both a long and a short position. Buying dollars with euros can be profitable in the case of both dollar appreciation and euro depreciation.

High liquidity and 24-hour market

As mentioned above, the forex market has the largest daily trading volume of any financial market. This large trading volume makes forex assets highly liquid. Moreover, the forex market is open 24 hours a day, 5 days a week for retail traders. This is because forex is exchanged electronically around the world, and anyone with an internet connection can trade currencies in any forex market of the world. In fact, central banks and related organisations can trade over the weekend as well. This can cause a change in currency prices when the market opens to retail traders again after a two-day gap, a risk known as gapping risk.

High leverage and high volatility

Extremely high leverage is a common feature of forex trades. Using high leverage can result in multiplied returns in favourable conditions. However, because of the high trading volume, forex prices are very volatile and can go into an upward or downward spiral in a very short time. Since every position in the forex market is simultaneously a long and a short position, the exposure of one currency to another is very high.

Hedging

Hedging is one of the main reasons for many companies and corporates to enter the forex market. Forex hedging is a strategy to reduce or eliminate the risk arising from negative movements in the exchange rate of a particular currency. If a French wine seller is about to receive 1 million USD for his wine sales, he can enter into a forex futures contract to receive 900,000 EUR for that 1 million USD. If, at the date of payment, 1 million USD is worth only 800,000 EUR, the French wine seller will still get 900,000 EUR because he hedged his forex risk. However, in doing so, he also gave up any gain from a positive movement in the EUR-USD exchange rate.
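The payoff of the hedge in this example can be checked with a few lines of Python (the exchange rates are those of the example above; the function name is ours).

```python
def eur_proceeds(usd_amount, eur_per_usd):
    """Convert USD proceeds into EUR at a given exchange rate."""
    return usd_amount * eur_per_usd

usd_sales = 1_000_000
forward_rate = 0.90      # EUR per USD locked in with the futures contract
spot_at_payment = 0.80   # hypothetical EUR per USD on the payment date

hedged = eur_proceeds(usd_sales, forward_rate)       # 900,000 EUR whatever happens
unhedged = eur_proceeds(usd_sales, spot_at_payment)  # 800,000 EUR in this scenario
print(hedged, unhedged)
```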

Related posts on the SimTrade blog

   ▶ Jayati WALIA Currency overlay

   ▶ Louis DETALLE What are the different financial products traded in financial markets?

   ▶ Akshit GUPTA Futures Contract

   ▶ Akshit GUPTA Forward Contracts

   ▶ Akshit GUPTA Currency swaps

   ▶ Luis RAMIREZ Understanding Options and Options Trading Strategies

Useful resources

Academic resources

Solnik B. (1996) International Investments. Addison-Wesley.

Business resources

DailyFX / IG The History of Forex

DailyFX / IG Benefits of forex trading

DailyFX / IG Foreign Exchange Market: Nature, Structure, Types of Transactions

About the author

The article was written in December 2022 by Nakul PANJABI (ESSEC Business School, Grande Ecole Program – Master in Management, 2021-2024).

Exchange-traded funds and Tracking Error

Micha FISHER

In this article, Micha FISHER (University of Mannheim, MSc. Management, 2021-2023) explains the concept of Tracking Error in the context of exchange traded funds (ETF).

This article will offer a short introduction to the concept of exchange-traded funds, will then describe several reasons for the existence of tracking errors and finish with a concise example on how tracking error can be calculated.

Exchange-traded funds

An exchange-traded fund is conceptually very close to a classical mutual fund, with the key difference being that ETFs are traded on a stock exchange during the trading day. Most ETFs are so-called index funds, and thus they try to replicate an existing index like the S&P 500 or the CAC 40. This sort of passive investing is aimed at following or tracking the underlying index as closely as possible. However, actively managed ETFs with the aim of outperforming the market do exist as well and typically come with higher management fees. There are several types of ETFs, covering equity indices, commodities, or currencies, with classical equity index funds being the most prominent.

The total volume of global ETF portfolios has increased substantially over the last two decades. At the beginning of the century, total asset volume was in the low triple-digit billions measured in USD. According to research by the Wall Street Journal, total assets in ETF investments surpassed nine trillion USD in 2021.

The continuing attractiveness of exchange-traded index funds can be explained by the very low management fees, the clarity of the product objective, and the high liquidity of the investment vehicle. However, although market leaders like BlackRock, the Vanguard Group or State Street offer products that come extremely close to mirroring their underlying index, exchange-traded funds never track the evolution of the underlying index perfectly. This phenomenon is known as the tracking error and is discussed in detail below.

Theoretical measure of the Tracking Error

Simply speaking, the tracking error of an ETF measures how the returns of the ETF (E for ETF) deviate from the returns of the underlying index (I for index). For a specific period, it is computed as the standard deviation of the differences between the two return time series.

Formula for tracking error
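In our notation, this is the following standard sample formulation, with \( d_t = R_{E,t} - R_{I,t} \) the return difference in period t and \( \bar{d} \) its average over the T periods:

\[ TE = \sqrt{\frac{1}{T-1} \sum_{t=1}^{T} \left( d_t - \bar{d} \right)^2} \]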

Theoretically, it is possible to fully replicate an index in a portfolio and thus reach a tracking error of zero. However, there are several reasons why this is not achievable in practice.

Origins of the Tracking Error

The most important and obvious reason is that the Net Asset Value (NAV) of index funds is necessarily lower than the NAV of its underlying index. An index itself has no liabilities, as it is strictly speaking an instrument of measurement. On the other hand, even a passively managed index fund comes with expenses to pay for infrastructure, personnel, and marketing. These liabilities decrease the Net Asset Value of the fund. In general, a higher tracking error could indicate that the fund is not working efficiently compared to products of competitors with the same underlying index.

Another origin of the tracking error can be found in specific sector ETFs and niche markets without enough liquidity. When the trading volume of a stock is very low, buying or selling the stock would increase or decrease its price (price impact). In this case, an ETF could buy more liquid stocks with the aim of mirroring the value development of the illiquid stock, which in turn could lead to a higher tracking error.

Another source of tracking error that occurs more severely in dividend-focused ETFs is the so-called cash drag. High dividend payments that are not instantly reinvested drag down the fund performance in contrast to the underlying index.

Of course, transaction fees of the marketplaces can reduce the fund performance as well. This is especially true if large rebalancing efforts are necessary due to a change of the index composition.

Lastly, there are also ways to reduce the effects described above. Funds can engage in security lending to earn additional money. In this case, the fund lends individual assets within the portfolio to other investors (mostly short sellers) for an agreed period in return for lending fees and possibly interest. It should be noted that, while this might reduce the tracking error, it also exposes the fund to additional counterparty risk.

Tracking Error: An Example

The sheet posted below shows a simple example of how the tracking error can be computed. To avoid including hundreds of individual shares, the example transforms the top ten positions of the Nasdaq-100 index into an artificial "Nasdaq-10" index. Although the data for the 23rd of September is accurate, the subsequent data is of course randomly simulated.

By using the individual returns of the index components and their corresponding weights, the index returns for the next three months can be computed.

Figure 1: Three-months simulation of “Nasdaq-10” index.
Three-months simulation of Nasdaq-10 index
Source: computation by the author.

At this point our made-up ETF is introduced with an initial investment of 100 million USD. This ETF fully replicates the Nasdaq-10 index by holding shares in the same proportion as the index. In this example, only the management and marketing fees are incorporated. Security lending, index changes, transaction fees and dividends are omitted. Also, all the portfolio shares are highly liquid and allow for full replication. The fund works with small personnel expenses of only ten thousand USD per month. Additionally, once per quarter, a marketing campaign costs an additional fifty thousand USD.

Figure 2: Computation of ETF return and tracking error.
Computation of ETF-return and Tracking Error
Source: computation by the author.

Calculating the net asset value (NAV) gives us the monthly returns of the fund, which in turn allows us to calculate the three-month standard deviation of the tracking difference. Additionally, the Total Expense Ratio (TER) can be calculated as the yearly expenses divided by the total asset value of the fund.
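Using the figures of the example, the TER computation can be sketched as follows (a back-of-the-envelope check, not part of the original spreadsheet):

```R
# TER = yearly expenses / total asset value (figures from the example above)
personnel <- 10e3 * 12   # personnel expenses: 10,000 USD per month
marketing <- 50e3 * 4    # marketing campaigns: 50,000 USD per quarter
assets    <- 100e6       # initial investment of 100 million USD
ter <- (personnel + marketing) / assets
ter   # 0.0032, i.e., roughly 0.3 percent per annum
```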

This example gives us a Total Expense Ratio of nearly 0.3 percent per annum, which is within the competitive range of real passive funds: Vanguard is able to replicate the FTSE All-World index with 0.2 percent. However, the calculated tracking error of only 0.0002 is obviously smaller than most real tracking errors, as only management and marketing fees were considered. For example, Vanguard's FTSE All-World ETF had a historical tracking error of 0.042 in 2021, due to the reasons mentioned in the section above.

Excel file for computing the tracking error of an ETF

You can also download below the Excel file for the computation of the tracking error of an ETF.

Download the Excel file to compute the tracking error of an ETF

Why should I be interested in this post?

ETFs in all forms are one of the major developments in the area of portfolio management over the last two decades. They are also a very interesting option for private investments.

Although they are conceptually very simple, it is important to understand the finer metrics that vary between different service providers, as even small differences can have a large impact over a longer investment period.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI ETFs in a changing asset management industry

   ▶ Youssef LOURAOUI Passive Investing

   ▶ Youssef LOURAOUI Markowitz Modern Portfolio Theory

Useful resources

Academic articles

Roll R. (1992) A Mean/Variance Analysis of Tracking Error, The Journal of Portfolio Management, 18 (4) 13-22.

Business

ET Money What is Tracking Error in Index Funds and How it Impacts Investors?

About the author

The article was written in November 2022 by Micha FISHER (University of Mannheim, MSc. Management, 2021-2023).

Approaches to investment

Approaches to investment

Henri VANDECASTEELE

In this article, Henri VANDECASTEELE (ESSEC Business School, Master in Strategy & Management of International Business (SMIB), 2021-2022) explains the two main approaches to investment: fundamental analysis and technical analysis.

Fundamental analysis

Fundamental analysis (FA) is a way of determining the fundamental value of a security by looking at related economic and financial elements. Fundamental analysts look at everything that might impact the value of a security, from macroeconomic issues like the state of the economy and industry circumstances to microeconomic elements like management performance. All stock analysis attempts to evaluate whether a security is correctly valued within the broader market. Fundamental research is often conducted from a macro to micro viewpoint in order to find assets that the market has not valued appropriately. To arrive at a fair market valuation for the stock, analysts typically look at the overall state of the economy, then the strength of the specific industry, before focusing on individual business performance.

Fundamental analysis evaluates the value of a stock or any other form of investment using publicly available data. An investor, for example, might undertake fundamental research on a bond’s value by looking at economic variables like interest rates and the overall status of the economy, then reviewing information about the bond issuer, such as probable changes in its credit rating.

The aim is to arrive at a figure that can be compared to the present price of an asset to determine whether it is undervalued or overpriced.

Fundamental analysis is based on both qualitative and quantitative publicly available historical and current data. This includes company statements, historical stock market data, company press releases, financial year statements, investor presentations, information found on internet fora, media articles, and broker/analyst reports.

Technical analysis

Technical analysis (TA) is a trading discipline that analyzes statistical trends acquired from trading activity, such as price movement and volume, to evaluate investments and uncover trading opportunities.

Technical analysis, as opposed to fundamental analysis, focuses on the examination of price and volume. Fundamental analysis aims to estimate a security’s worth based on business performance such as sales and earnings. Technical analysis methods are used to examine how variations in price, volume, and implied volatility are affected by supply and demand for a security. Any security with past trading data can benefit from technical analysis. This includes stocks, futures, commodities, bonds, currencies and other securities. In fact, technical analysis is much more common in commodities and forex markets where traders focus on short-term price fluctuations.

Technical analysis is commonly used to generate short-term trading signals from various charting tools, but it also helps to assess a security's strength or weakness relative to the broader market or to one of its sectors. This information helps analysts improve their overall rating estimates.

Technical analysis is performed on quantitative data only, both recent and historical, but publicly available. It mainly leverages market information, namely daily transaction volumes, stock prices, spreads, volatility, etc., and performs trend analyses.

Link with market efficiency

When linking both approaches to investment to the market efficiency theory, we can state that fundamental analysis assumes that financial markets are not efficient in the semi-strong sense, whereas technical analysis assumes that financial markets are not efficient in the weak sense. But the trading activity of both fundamental analysts and technical analysts makes the markets more efficient.

Related posts on the SimTrade blog

   ▶ Shruti CHAND Technical Analysis

   ▶ Jayati WALIA Trend Analysis and Trading Signals

Useful resources

SimTrade course Market information

About the author

The article was written in November 2022 by Henri VANDECASTEELE (ESSEC Business School, Master in Strategy & Management of International Business (SMIB), 2021-2022).

Understand the mechanism of inflation in a few minutes?

Understand the mechanism of inflation in a few minutes?

Louis DETALLE

In this article, Louis DETALLE (ESSEC Business School, Grande Ecole Program – Master in Management, 2020-2023) explains everything you have to know about inflation.

What is inflation and how can it make us poorer?

In a liberal economy, the prices of goods and services consumed vary over time. In France, for example, when the price of wheat rises, the price of wheat flour rises, and so the price of a loaf of bread may also rise as a consequence of the rise in the price of the raw materials used for its production. This small example is designed to make the evolution of prices concrete for one good only. It helps us understand what happens when the increase in prices concerns not only a loaf of bread, but all the goods in an economy.

Inflation is when prices rise overall, not just the prices of a few goods and services. When this is the case, over time, each unit of money buys fewer and fewer products. In other words, inflation gradually erodes the value of money (purchasing power).

Take the example of a loaf of bread which costs €1 in year X, while the 20g of wheat flour contained in a loaf costs 20 cents. In year X+1, if the 20g of wheat flour now costs 22 cents, i.e., a 10% increase over one year, the price of the loaf of bread will have to reflect this increase, otherwise the baker would be the only one to suffer from the increase in the price of his raw material. The price of a loaf of bread will then be €1.02.

We can see that here, with one euro, i.e., the same amount of the same currency, it is no longer possible from one year to the next to buy a loaf of bread, because it now costs €1.02 and not €1 anymore.

This is a very schematic way of understanding the mechanism of inflation and how it destroys the purchasing power of consumers in an economy.

How is the inflation computed and what does a x% inflation mean?

In France, Insee (Institut national de la statistique et des études économiques in French) is responsible for calculating inflation. It obtains it by comparing the price of a basket of goods and services each month. The content of this basket is updated once a year to reflect household consumption patterns as closely as possible. In detail, the statistics office uses the distribution of consumer expenditure by item as assessed in the national accounts, and then weights each product in proportion to its weight in household consumption expenditure.

What is important to understand is that Insee calculates the price of an overall household expenditure basket and evaluates the variation of its price over time.

When inflation is announced at X%, this means that the price of the overall basket of goods and services consumed by households has increased by X% over a year.
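As a minimal sketch of this computation, the following R snippet builds a weighted price index from a hypothetical basket (the weights and prices are invented for illustration and are not Insee data):

```R
# Inflation as the weighted average of individual price changes (hypothetical basket)
weights        <- c(bread = 0.15, energy = 0.30, transport = 0.25, services = 0.30)
prices_year_x  <- c(1.00, 80, 50, 200)   # prices in year X
prices_year_x1 <- c(1.02, 96, 51, 204)   # prices in year X+1
inflation <- sum(weights * (prices_year_x1 / prices_year_x - 1))
inflation   # 0.074, i.e., a 7.4% increase of the basket price over one year
```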

However, if the price of goods increases but wages remain the same, then purchasing power deteriorates, and this is why low-income households are the most affected by the rise in the price of everyday goods. Indeed, low-income households can’t easily cope with a 10% increase in price of their daily products, whereas the middle & upper classes can better deal with such a situation.

What can we do to reduce inflation?

It is the regulators who control inflation through major macroeconomic levers. It is therefore central banks and governments that can act and they do so in various ways (as an example, we use the context of the War in Ukraine in 2022):

Raising interest rates: when inflation is too high, central banks raise interest rates to slow down the economy and bring inflation down. This is what the European Central Bank (ECB) did in 2022 because of the economic consequences of the War in Ukraine. The economic sanctions have seen the price of energy commodities soar, which has pushed up inflation.

Blocking certain prices: this is what the French government is still doing on energy prices. Thus, in France, the increase in gas and electricity tariffs will be limited to 15% for households, compared to a freeze on gas prices and an increase limited to 4% for electricity in 2022. Without this “tariff shield”, the French would have had to endure an increase of 120%.

Distributing one-off aid: these measures are often considered too costly and can involve an increase in salaries.

Bear in mind that “miracle” methods do not exist, otherwise inflation would never be a subject discussed in the media. However, these three methods are the ones most used by governments and central banks, but only time will tell whether they succeed.

Figure 1. Inflation in France.
Inflation in France
Source: Insee / Les Echos.

Useful resources

Inflation rates across the World

Insee’s forecast of the French inflation rate

Related posts on the SimTrade blog

▶ Bijal GANDHI Inflation Rate

▶ Alexandre VERLET Inflation and the economic crisis of the 1970s and 1980s

▶ Alexandre VERLET The return of inflation

▶ Raphaël ROERO DE CORTANZE Inflation & deflation

About the author

The article was written in October 2022 by Louis DETALLE (ESSEC Business School, Grande Ecole Program – Master in Management, 2020-2023).

What are LBOs and how do they work?

What are LBOs and how do they work?

Louis DETALLE

In this article, Louis DETALLE (ESSEC Business School, Grande Ecole Program – Master in Management, 2020-2023) explains why LBOs are so trendy and what they consist of.

What does an LBO consist of & how is it built?

LBO stands for Leveraged Buy-Out. It refers to a company acquisition which is funded with a large amount of debt. Often, when an LBO is performed, 70% of the funds used for the acquisition come from debt, the remaining 30% being equity.

Figure 1. Schematic plan of the organization of an LBO.

Schematic plan of the organization of an LBO
Source: the author.

To perform an LBO, the company wishing to buy the target company (called "Target" in this example) will have to create a holding company specially for this purpose. The holding will then take on debt from specific lenders (banks, debt funds) in the form of loans or bonds. After that, the holding will have both the initial equity contributed by the acquiring company and the debt needed to buy Target.

What happens after the target has been bought?

Well, after the target has been bought, the target company keeps running the operating activity that motivated the acquiring company to buy it, which implies that the target company should have a strong financial performance. And it had better be the case! Otherwise, the large amount of debt taken on for the operation will never be repaid to the lenders.

The principle is that the target's financial cash flows will be redistributed to the holding in the form of dividends, and the holding will use these dividends to pay back the debt to the lenders until all the debt is reimbursed.
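A minimal R sketch of this repayment mechanism is given below; the purchase price, interest rate and dividend figures are hypothetical and only illustrate the logic described above:

```R
# LBO debt paydown: dividends from the target repay the holding's debt (hypothetical figures)
purchase_price <- 100e6
debt      <- 0.70 * purchase_price   # 70% of the acquisition is funded with debt
rate      <- 0.06                    # interest rate paid on the debt
dividends <- 12e6                    # yearly cash flows upstreamed by the target
years <- 0
while (debt > 0) {
  interest <- debt * rate
  debt  <- max(0, debt + interest - dividends)  # dividends cover interest, then principal
  years <- years + 1
}
years   # number of years needed to fully repay the lenders
```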

What makes a company a good LBO target?

A good LBO target should meet a few conditions: substantial operating cash flows, a mature market, and a company whose development cycle is over.

Important operating cashflows

First and foremost, without substantial cash flows, the holding will never be able to repay the debt, as the dividends received from the target would be insufficient. For that matter, the company targeted for the LBO should have both regular and substantial cash flows.

A mature market

When looking at the bigger picture, the company willing to acquire a target through an LBO must make sure that the market in which the potential target operates is stable. Because an LBO entails major financial risk due to the amount of debt involved, the acquirer cannot afford to add operational risk on top of it.

A company whose development cycle is over

Once again, the target company will have to ensure the repayment of a large debt. This is why all capital expenditures (CAPEX) and major investments, such as machines or fleets of vehicles, should already have been made.

Useful resources

Vernimmen’s book chapters on LBOs

Youtube video on a LBO Case Study

Related posts on the SimTrade blog

   ▶ Frédéric ADAM Senior banker (coverage)

   ▶ Marie POFF Book review: Barbarians at the gate

   ▶ Akshit GUPTA Analysis of Barbarians at the Gate movie

About the author

The article was written in October 2022 by Louis DETALLE (ESSEC Business School, Grande Ecole Program – Master in Management, 2020-2023).

Time Series Forecasting: Applications and Artificial Neural Networks

Time Series Forecasting: Applications and Artificial Neural Networks

Micha FISHER

In this article, Micha FISHER (University of Mannheim, MSc. Management, 2021-2023) discusses the applications of time series forecasting and the use of artificial neural networks for this purpose.

This article offers a short introduction to the different applications of time-series forecasting and forecasting in general, then describes the theoretical aspects of simple artificial neural networks, and finishes with a practical example of how to implement a forecast based on these networks.

Overview

The American economist and diplomat John Kenneth Galbraith once said: “The function of economic forecasting is to make astrology look respectable”. Certainly, the failure of mainstream economics to predict several financial crises is testimony to this quote.

However, on a smaller scale, forecasts can be very useful in different applications, and this article describes several use cases for the forecasting of time series data and a special method to perform such analyses.

Different Applications of Time Series Forecasting

Different methods of forecasting are used in various settings. Central banks and economic research institutes use complex forecasting methods with a vast amount of input factors to forecast GDP growth and other macroeconomic figures. Technical analysts forecast the evolution of asset prices based on historical patterns to make trading gains. Businesses forecast the demand for their products by including seasonal trends (e.g., utility providers) and economic developments.

This article deals with the latter two applications of forecasting, which focus on the analysis of historical patterns and seasonality. Using different input factors to come up with a prediction, as a multivariate regression analysis does, can be a successful way of making predictions. However, it inherently includes the problem of determining those input factors in the first place.

The practical methods described in this article circumvent this problem by exclusively using historical time series data (e.g., past sales per month, historical electricity demand per hour of the day, etc.). This makes these methods easy to use, and they can serve to predict helpful input parameters of DCF models, for example.

Artificial neural networks

Artificial Intelligence (AI) is a frequently used buzzword in the advertising of products and services. However, the concept of artificial intelligence goes back to the 1940s, when the mathematicians McCulloch and Pitts first presented a mathematical model that was based on the neural activity of the human brain.

Before delving into the practical aspects of an exemplary simple artificial neural network, it is important to understand the terminology. These networks are one – although not the only one – of the key aspects of “Machine Learning”. Machine Learning itself is in turn a subtopic of Artificial Intelligence, which itself employs different tools besides Machine Learning.

Figure 1. Neural network.
Neural network
Source: internet.

To give a simple example of an artificial neural network, we will focus on a so-called feedforward neural network. Those networks deliver and transform information from the left side to the right side of the schematic picture above without using any loops. This process is called forward propagation. Historic time series data is simply put into the first layer of neurons. The actual transformation of the data is done by the individual neurons of the network. Some neurons simply put different weights on the input parameters. Neurons of the hidden layers then use several non-linear functions to manipulate the data given to them by the initial layer. Eventually, the manipulated data is consolidated in the output layer.

This all sounds very random, and indeed it is. At the beginning, a neural network is totally unaware of its actual best solution, and the first computations are done via random weights and functions. But after a first result is compiled, the algorithm compares the result with the actual true value. Of course, this is not possible for values that lie in the future. Therefore, the algorithm divides the historic time series into a section used for training (data that is put into the network) and a section used for testing (data that can be compared to the transformed training data). The deviation between the compiled value and the true value is then minimized via the process of so-called backpropagation. Weights and functions are changed iteratively until an optimal solution is reached and the network is sufficiently trained. This optimal solution then serves to compute the “real” future values.

This description is a very theoretical presentation of such an artificial neural network and the question arises, how to handle such complex algorithms. Therefore, the last part of this article focuses on the implementation of such a forecasting tool. One very useful tool for statistical forecasting via artificial neural networks is the programming language R and the well-known development environment RStudio. RStudio enables the user to directly download user-created packages, to import historical data from Excel sheets and to export graphical presentations of forecasts.

A very easy first approach is the nnetar function from the forecast package in R. This function can simply be used to analyze existing time series data, and it will automatically define an artificial neural network (number of layers, neurons, etc.) and train it. Eventually, it also allows the user to apply the trained model to forecast future data points.

The chart below is a result of this function used on simulated sales data between 2015 and 2021 to forecast the sales of 2022. In this case the nnetar function used one layer of hidden neurons and correctly recognized a 12-month seasonality in the data.

Figure 2. Simulated sales data.
Simulated sales data
Source: internet.
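A script along the following lines could reproduce such a chart; it is a sketch with simulated data, where the trend, seasonality and noise parameters are assumptions rather than the exact figures behind Figure 2:

```R
# Neural-network forecast of simulated monthly sales with nnetar (forecast package)
library(forecast)

set.seed(42)
months <- 84                                    # 7 years of monthly data (2015-2021)
trend  <- seq(100, 160, length.out = months)    # upward trend
season <- 15 * sin(2 * pi * (1:months) / 12)    # 12-month seasonality
sales  <- ts(trend + season + rnorm(months, 0, 5),
             start = c(2015, 1), frequency = 12)

fit <- nnetar(sales)          # automatically selects the lags and hidden neurons
fc  <- forecast(fit, h = 12)  # forecast the 12 months of 2022
plot(fc)
```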

Why should I be interested in this post?

Artificial neural networks are a powerful tool to forecast time-series data. By using development environments like RStudio, even users without a sophisticated background in data science can apply those networks to forecast data they might need for other purposes like DCF models, logistical planning, or internal financial modelling.

Useful resources

RStudio Official Website

Rob Hyndman and George Athanasopoulos Forecasting: Principles and Practice

Related posts on the SimTrade blog

   ▶ All posts about financial techniques

   ▶ Jayati WALIA Logistic regression

   ▶ Daksh GARG Use of AI in investment banking

About the author

The article was written in October 2022 by Micha FISHER (University of Mannheim, MSc. Management, 2021-2023).

Simple interest rate and compound interest rate

Simple interest rate and compound interest rate

Sébastien PIAT

In this article, Sébastien PIAT (ESSEC Business School, Grande Ecole Program – Master in Management, 2021-2024) explains the difference between simple interest rate and compound interest rate.

Introduction

When dealing with interest rates, it can be useful to be able to switch from a yearly rate to a period rate, which is used to compute the interest over a given period for an investment or a loan. But you should be aware that the computation is different when working with simple interest and with compound interest.

Below is the method to switch back and forth between a period rate and a yearly rate.

With simple interest

If you think of an investment that generates a yearly income at a rate of 6%, you might want to know what your monthly return is.

As we deal with simple interest, the monthly rate of this investment will be 0.5% (= 6%/12).

With simple interest, the interest for a given period is computed on the initial capital:

$$\text{Interest}_p = C_0 \times R_p$$

where $C_0$ is the initial capital and $R_p$ the period interest rate.

Assuming that the interest is computed over p periods during the year, the capital of the investment at the end of the year is equal to:

$$C_1 = C_0 \times (1 + p \times R_p)$$

The equivalent yearly rate of return Ry gives the same capital value at the end of the year

$$C_1 = C_0 \times (1 + R_y)$$

By equating the two formulas for the capital at the end of the year, we obtain a relation between the period rate Rp and the equivalent yearly rate Ry:

$$R_y = p \times R_p \qquad \text{(from a period rate to the equivalent yearly rate with simple interest)}$$

$$R_p = \frac{R_y}{p} \qquad \text{(from a yearly rate to the corresponding period rate with simple interest)}$$

With compound interest

Things get a little trickier when dealing with compound interest, as the interest gets reinvested period after period.

Compound interest can be described by the following equation:

$$C_n = C_{n-1} \times (1 + R_p)$$

Where Rp is the period rate of the investment and Cn is your capital at the end of the nth period.

Assuming that the interest is compounded over p periods during the year, the capital of the investment at the end of the year is equal to:

$$C_1 = C_0 \times (1 + R_p)^p$$

The equivalent yearly rate of return Ry gives the same capital value at the end of the year

$$C_1 = C_0 \times (1 + R_y)$$

By equating the two formulas for the capital at the end of the year, we obtain a relation between the period rate Rp and the equivalent yearly rate Ry:

$$R_y = (1 + R_p)^p - 1 \qquad \text{(from a period rate to the equivalent yearly rate with compound interest)}$$

$$R_p = (1 + R_y)^{1/p} - 1 \qquad \text{(from a yearly rate to the corresponding period rate with compound interest)}$$
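The four conversion formulas above can be written as short R functions; below is a minimal sketch together with the 6% example from the beginning of this article:

```R
# Switching between a period rate (Rp) and the equivalent yearly rate (Ry), p periods per year
simple_yearly   <- function(Rp, p) p * Rp              # Ry = p x Rp
simple_period   <- function(Ry, p) Ry / p              # Rp = Ry / p
compound_yearly <- function(Rp, p) (1 + Rp)^p - 1      # Ry = (1 + Rp)^p - 1
compound_period <- function(Ry, p) (1 + Ry)^(1/p) - 1  # Rp = (1 + Ry)^(1/p) - 1

simple_period(0.06, 12)     # 0.005     -> 0.5% per month with simple interest
compound_period(0.06, 12)   # 0.0048676 -> about 0.487% per month with compound interest
```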

Excel file to compute interests of an investment

You can download below the Excel file for the computation of interests with simple and compound interests and the equivalent yearly interest rate.

Download the Excel file to compute interests with simple and compound interest rates

You can download below the Excel file to switch from a period interest rate to a yearly interest rate and vice versa.

Download the Excel file to compute interests with simple and compound interest rates

Why should I be interested in this post?

This post should help you switch between a period rate and the equivalent yearly rate of an investment.

This is particularly useful when we deal with cash flows that do not appear with a yearly frequency but with a monthly or quarterly frequency. With non-yearly cash flows, it is necessary to consider a period rate to compute the present value (PV), net present value (NPV) and internal rate of return (IRR).

Useful resources

longin.fr website Cours Gestion financière (in French).

Related posts on the SimTrade blog

   ▶ Raphaël ROERO DE CORTANZE The Internal Rate of Return

   ▶ Léopoldine FOUQUES The IRR, XIRR and MIRR functions in Excel

   ▶ Jérémy PAULEN The IRR function in Excel

   ▶ William LONGIN How to compute the present value of an asset?

   ▶ Maite CARNICERO MARTINEZ How to compute the net present value of an investment in Excel

About the author

The article was written in October 2022 by Sébastien PIAT (ESSEC Business School, Grande Ecole Program – Master in Management, 2021-2024).

Challenges of practicing mindfulness and emotional intelligence in the management control function

Challenges of practicing mindfulness and emotional intelligence in the management control function

Jessica BAOUNON

In this article, Jessica BAOUNON (ESSEC Business School, Executive Master in Direction Financière et Contrôle de Gestion, 2020-2022) explains the challenges of practicing mindfulness and emotional intelligence in the management control function. The corporate world has been profoundly transformed by the COVID-19 crisis, and the call for emotional intelligence has never been more important to face the most complex situations.

The management control function is evolving fast. Its missions no longer consist solely of producing and communicating financial indicators. Its role is now to support executives and managers in improving financial performance, that is, to advise them on strategic decisions.

The Covid-19 crisis has pushed management control further towards a role of “coach”. Indeed, by working closely with those who had to guarantee business continuity, management controllers have had to bring empathy into their relationships with executives and managers. They are expected to listen, to be available, and to put themselves in the position of their interlocutor in order to act effectively and defuse crisis situations.

In other words, acquiring interpersonal skills and building emotional capital are now sought-after qualities. The work of a management controller is increasingly carried out in a collaborative mindset: the controller acts as a business partner.

Yet how could a management controller build a lasting partnership without being fully aware of the environment in which he or she operates? Awareness of oneself and of others facilitates social interactions.

In this respect, a regular practice of mindfulness meditation can prove effective for developing one's emotional intelligence. Practicing mindfulness involves above all feeling and understanding emotions by paying close attention to a lived experience. It is an attitude that opens a space for observing, without filter, without expectation, and without judgment, one's sensations, thoughts and emotions about an action or an event, in a spirit of acceptance.

This process of observation makes it possible to better reach out to others by providing an appropriate and clear-sighted response in management dialogues. In particular, it allows one to regain composure in situations of stress or conflict management.

Origins and impact of the practice of mindfulness in the management control function

Jon Kabat-Zinn, professor of medicine at the University of Massachusetts and doctor in biology, is the founding father of mindfulness meditation. His secular program inspired by Buddhism, called Mindfulness-Based Stress Reduction (MBSR), offers an initiation to meditation over a period of eight weeks.

This practice, whose origins go back millennia, has gradually and successfully spread to scientific, philosophical and psychological schools. In recent years it has emerged in companies such as EDF, Google and L'Oréal through certified training programs.

Google, a precursor, has been offering its employees a meditation program called “Search Inside Yourself” since 2007. Chade-Meng Tan, an engineer at Google, brought together a team of experts in mindfulness techniques and emotional intelligence to build this training program. The objective is to develop emotional intelligence skills in order to create social cohesion conducive to individual and collective fulfillment at Google. These courses have been delivered to more than 10,000 people in more than 50 countries.

The practice is becoming mainstream and is less and less perceived as an oddity. In a context of successive crises, burnout and employee demotivation, rebalancing minds so that people can evolve in a healthy environment has become a crucial performance issue. More than ever, as the recent Covid-19 crisis has shown, the social responsibility of a company is to create the conditions for lasting social cohesion.

Moreover, faced with unpredictable changes of great magnitude, the mission of management control, which consists in ensuring the stability of management processes, must be accompanied by constant reflection on the evolution of tools and information systems. While solutions for automating management processes are gaining ground in response to the need for speed of execution, they must not lead to an autopilot mode for the tasks of a management controller.

Such a mechanical approach to the management control function should be a warning sign. The danger of this posture is to let oneself be governed, to stop actively observing things with a fresh eye, and to lose their meaning. In a world where humans increasingly compete with machines, developing a creative mindset and stimulating one's awareness is an essential challenge. Mindfulness, as a tool, acts as an accelerator of creativity. It forces us to free ourselves from a mechanical way of running processes, by paying attention to what we do and to what surrounds us, in order to move towards new ideas. With the rise of technology, this quality, still absent from everyday language, will feature even more prominently tomorrow among the skills required in management control.

Innovating with a sustainable management style

In this same dynamic of change, we are witnessing an “increased recognition of the role of emotions as action and effect in organizations” (1). This calls into question classical management models, judged too bureaucratic and military “in their attempt to control and suppress any emotion that interferes with the rationality of desired actions” (1). The Taylorist model is running out of steam and gradually giving way to new paradigms. This transformation reflects a logic of revaluing human capital, which had long been subordinated to productive efficiency. In addition, the rise of Corporate Social Responsibility (CSR) has brought about significant reversals.

“The pursuit of profit is not in itself problematic; what is problematic is emphasizing profit alone at the expense of the complexity of human realities” (Laurent Bibard). The Bhopal and Orange affairs, which revealed a deep degradation of working conditions, bear witness to this. This reversal of roles also raises the question of meaning: that of a humanity becoming aware of what no longer works, and of the need for companies to anchor themselves in a sustainable world and serve the general interest.

To achieve this objective of sustainability, a first solution is to rebuild a responsible management model based on the findings of cognitive and social psychology. Emotions were banished from corporate managerial visions for a very long time. Yet recent findings in psychology show that developing emotional intelligence skills leads to real interpersonal qualities, better decisions and far greater creativity.

In an uncertain world punctuated by financial, environmental and social crises, every individual must be able to get rid of cognitive biases, freeing themselves from limiting beliefs in order to contribute to the vision of a just and responsible world. The practice of mindfulness and emotional intelligence helps to mobilize self-knowledge. It allows management controllers, as well as all employees, to question the relevance of their actions and decisions through the lens of their emotions. This practice thus reminds us of what we are: human beings.

Why should I be interested in this post?

In a world where humans increasingly compete with machines, developing a creative mindset and stimulating one's awareness is essential. This article presents the benefits of practicing mindfulness and emotional intelligence in the management control function, in order to shed light on these newly sought-after skills.

Related posts on the SimTrade blog

   ▶ Chloé POUZOL Mon expérience de contrôleuse de gestion chez Edgar Suites

Useful resources

Teneau G. (2014) Empathie et compassion en entreprise, ISTE Editions.

Tan C.-M. (2015) Search Inside Yourself, Harper Collins.

Kotsou I. (2016) Intelligence émotionnelle et management, De Boeck.

Cappelletti L. (2010) Le management de la relation client des professions : un nouveau sujet d'investigation pour le contrôle de gestion, Revue Management et Avenir.

About the author

The article was written in October 2022 by Jessica BAOUNON (ESSEC Business School, Executive Master in Direction Financière et Contrôle de Gestion, 2020-2022).

Extreme Value Theory: the Block-Maxima approach and the Peak-Over-Threshold approach

Extreme Value Theory: the Block-Maxima approach and the Peak-Over-Threshold approach

Shengyu ZHENG

In this article, Shengyu ZHENG (ESSEC Business School, Grande Ecole Program – Master in Management, 2020-2023) presents the extreme value theory (EVT) and two commonly used modelling approaches: block-maxima (BM) and peak-over-threshold (PoT).

Introduction

There are generally two approaches to identify and model the extrema of a random process: the block-maxima approach where the extrema follow a generalized extreme value distribution (BM-GEV), and the peak-over-threshold approach that fits the extrema in a generalized Pareto distribution (POT-GPD):

  • BM-GEV: The BM approach divides the observation period into nonoverlapping, continuous and equal intervals and collects the maximum entry of each interval (Gumbel, 1958). Maxima from these blocks (intervals) can then be fitted into a generalized extreme value (GEV) distribution.
  • POT-GPD: The POT approach selects the observations that exceed a certain high threshold. A generalized Pareto distribution (GPD) is usually used to approximate the observations selected with the POT approach (Pickands III, 1975).

Figure 1. Illustration of the Block-Maxima approach
BM-GEV
Source: computation by the author.

Figure 2. Illustration of the Peak-Over-Threshold approach

POT-GPD
Source: computation by the author.

BM-GEV

Block-Maxima

Let’s take a step back and have a look again at the Central Limit Theorem (CLT):

$$\sqrt{n}\,\frac{\bar{X}_n-\mu}{\sigma} \;\xrightarrow{d}\; \mathcal{N}(0,1) \quad \text{as } n \to \infty$$

where $\bar{X}_n$ is the mean of a sample of $n$ i.i.d. observations with mean $\mu$ and standard deviation $\sigma$.

The CLT states that the distribution of sample means approaches a normal distribution as the sample size gets larger. Similarly, the extreme value theory (EVT) studies the behavior of the extrema of samples.

The block maximum is defined as such:

$$M_n = \max(X_1, X_2, \ldots, X_n)$$

where $X_1, \ldots, X_n$ is a sequence of i.i.d. random variables (e.g., daily returns) and $n$ is the block size.

Generalized extreme value distribution (GEV)

$$G(x) = \exp\left\{-\left[1+\xi\left(\frac{x-\mu}{\sigma}\right)\right]^{-1/\xi}\right\}, \qquad 1+\xi\,\frac{x-\mu}{\sigma} > 0$$

where $\mu$ is the location parameter, $\sigma > 0$ the scale parameter and $\xi$ the shape parameter (the case $\xi = 0$ is read as the limit $\exp\{-e^{-(x-\mu)/\sigma}\}$).

The GEV distribution has three subtypes corresponding to different tail behaviors [von Misès (1936); Hosking et al. (1985)]:

  • ξ = 0 (Gumbel): thin-tailed distributions, with all moments finite
  • ξ > 0 (Fréchet): heavy-tailed distributions, typical of financial returns
  • ξ < 0 (Weibull): distributions with a finite upper endpoint

POT-GPD

The block maxima approach is under reproach for its inefficiency and wastefulness of data usage, and it has been largely superseded in practice by the peak-over-threshold (POT) approach. The POT approach makes use of all data entries above a designated high threshold u. The threshold exceedances could be fitted into a generalized Pareto distribution (GPD):

$$H_{\xi,\beta}(y) = \begin{cases} 1-\left(1+\dfrac{\xi y}{\beta}\right)^{-1/\xi} & \text{if } \xi \neq 0 \\[4pt] 1-e^{-y/\beta} & \text{if } \xi = 0 \end{cases}$$

where $y = x - u$ denotes the exceedance over the threshold $u$, $\beta > 0$ is the scale parameter and $\xi$ the shape parameter.

Illustration of Block Maxima and Peak-Over-Threshold approaches of the Extreme Value Theory with R

We now present an illustration of the two approaches of the extreme value theory (EVT), the block maxima with the generalized extreme value distribution (BM-GEV) approach and the peak-over-threshold with the generalized Pareto distribution (POT-GPD) approach, realized in R with the daily return data of the S&P 500 index from January 01, 1970, to August 31, 2022.

Packages and Libraries

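The original post displays the code as an image. Below is a minimal reconstruction of the packages such an analysis could rely on; apart from extRemes (mentioned in the text for the fevd() function), the package choices are assumptions:

```R
# Packages for the EVT illustration (a plausible set, not the original screenshot)
# install.packages(c("quantmod", "extRemes", "moments"))
library(quantmod)   # downloading S&P 500 price data
library(extRemes)   # fevd() to fit GEV and GPD distributions
library(moments)    # skewness and kurtosis
```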

Data loading, processing and preliminary inspection

Loading S&P 500 daily closing prices from January 01, 1970, to August 31, 2022 and transforming the daily prices into daily logarithmic returns (multiplied by 100). Month and year information is also extracted for later use.

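A sketch consistent with this description could look as follows (the data source and variable names are assumptions):

```R
# Load S&P 500 daily closing prices and compute daily log returns (multiplied by 100)
getSymbols("^GSPC", from = "1970-01-01", to = "2022-08-31")
prices  <- Cl(GSPC)                         # daily closing prices
returns <- 100 * diff(log(prices))[-1]      # daily logarithmic returns, in percent
month   <- format(index(returns), "%Y-%m")  # month information for later use
```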

Checking the preliminary statistics of the daily logarithmic return series.

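For instance (a sketch, assuming the moments package for the higher moments):

```R
# Preliminary statistics of the daily log return series
mean(returns); sd(returns); min(returns); max(returns)
skewness(as.numeric(returns))   # negative: the left tail is longer
kurtosis(as.numeric(returns))   # far above 3, the kurtosis of a normal distribution
```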

We can get the following basic statistics for the (logarithmic) daily returns of the S&P 500 index over the period from January 01, 1970, to August 31, 2022.

Table 1. Basic statistics of the daily return of the S&P 500 index.
Basic statistics of the daily return of the S&P 500 index
Source: computation by the author.

In terms of daily returns, we can observe that the distribution is negatively skewed, which means the negative tail is longer. The kurtosis is far higher than that of a normal distribution, which means that extreme outcomes are more frequent than under a normal distribution. The absolute value of the minimum daily return is more than twice the maximum daily return, which could be interpreted as a more prominent downside risk.

Block maxima – Generalized extreme value distribution (BM-GEV)

We define each month as a block and collect the extremum of each block to study the behavior of the block extrema. Since we focus on downside risk, we take the minimum daily return of each month. We can also have a look at the descriptive statistics of this monthly minima variable.

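A minimal sketch of this step (variable names follow the data-loading sketch above):

```R
# Collect the minimum daily return of each monthly block (downside extrema)
monthly_min <- tapply(as.numeric(returns), month, min)
summary(monthly_min)   # descriptive statistics of the monthly minima
```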

With these commands, we obtain the following basic statistics for the monthly minima variable:

Table 2. Basic statistics of the monthly minimal daily return of the S&P 500 index.
Basic statistics of the monthly minimal daily return of the S&P 500 index
Source: computation by the author.

With the block extrema in hand, we can use the fevd() function from the extRemes package to fit a GEV distribution. We therefore obtain the following parameter estimates, with standard errors presented within brackets.

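A sketch of the fitting step with fevd(); negating the minima so that they can be handled as maxima is an assumption about the original, unshown code:

```R
# Fit a GEV distribution to the monthly minima (negated to be treated as maxima)
fit_gev <- fevd(-monthly_min, type = "GEV")
summary(fit_gev)   # location, scale and shape estimates with standard errors
plot(fit_gev)      # diagnostic plots: QQ plots, density and return level plots
```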

Table 3 gives the parameter estimation results of the generalized extreme value (GEV) distribution for the monthly minimal daily returns of the S&P 500 index. The three parameters of the GEV distribution are the shape parameter, the location parameter and the scale parameter. For the period from January 01, 1970, to August 31, 2022, the estimation is based on 632 observations of monthly minimal daily returns.

Table 3. Parameters estimation results of GEV for the monthly minimal daily return of the S&P 500 index.
Parameters estimation results of GEV for the monthly minimal daily return of the S&P 500 index
Source: computation by the author.

With the “plot” command, we are able to obtain the following diagrams.

  • The top two diagrams respectively compare the empirical quantiles with the model quantiles, and the quantiles from a model simulation with the empirical quantiles. A good fit yields points along a straight one-to-one line; in this case, the empirical quantiles fall within the 95% confidence bands.
  • The bottom left diagram is a density plot of the empirical data and of the fitted GEV distribution.
  • The bottom right diagram is a return level plot with 95% pointwise normal approximation confidence intervals. The return level plot shows the theoretical quantiles as a function of the return period, with a logarithmic scale for the x-axis. For example, the 50-year return level is the level expected to be exceeded once every 50 years.

Diagnostic plots of the GEV fit

Peak over threshold – Generalized Pareto distribution (POT-GPD)

With respect to the POT approach, the threshold selection is central, and it involves a delicate trade-off between variance and bias: too high a threshold would reduce the number of exceedances, while too low a threshold would incur a bias from a poor GPD fit (Rieder, 2014). The selection process could be elaborated in a separate post; here we use the optimal threshold of 0.010 (i.e., 1.0 in our data, since the logarithmic returns are multiplied by 100) for stock index downside extreme movements, as proposed by Beirlant et al. (2004).


With the following commands, we fit the threshold exceedances to a generalized Pareto distribution and obtain the parameter estimation results presented below.
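A sketch of this step (again, the sign convention for losses is an assumption about the original, unshown code):

```R
# Fit a GPD to the negative exceedances beyond the -1% threshold (returns in percent)
fit_gpd <- fevd(-as.numeric(returns), threshold = 1.0, type = "GP")
summary(fit_gpd)   # scale and shape estimates with standard errors
```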

Table 4 gives the parameter estimation results of the GPD for the daily returns of the S&P 500 index with a threshold of -1%. In addition to the threshold, the two parameters of the GPD distribution are the shape parameter and the scale parameter. For the period from January 01, 1970, to August 31, 2022, the estimation is based on 1,669 observations of daily return exceedances (12.66% of the total number of daily returns).

Table 4. Parameters estimation results of the generalized Pareto distribution (GPD) for the daily return negative exceedances of the S&P 500 index.
Parameter estimation results of the generalized Pareto distribution (GPD) for the daily return negative exceedances of the S&P 500 index
Source: computation by the author.

Download R file to understand the BM-GEV and POT-GPD approaches

You can find below an R file (file with txt format) to understand the BM-GEV and POT-GPD approaches.

Illustration_of_EVT_with_R

Why should I be interested in this post?

Financial crises arise alongside disruptive events such as pandemics, wars, or major market failures. The 2007-2008 financial crisis has been a recent and pertinent opportunity for market participants and academia to reflect on the causal factors of the crisis. The hindsight could be conducive to strengthening market resilience in the face of such events in the future and to avoiding the dire consequences previously witnessed. The Gaussian copula, a statistical tool used to manage the risk of the collateralized debt obligations (CDOs) that triggered the flare-up of the crisis, has been under serious reproach for its essential flaw of overlooking the occurrence and the magnitude of extreme events. To effectively understand and cope with extreme events, the extreme value theory (EVT), whose foundations were laid in the first half of the 20th century, has regained popularity and importance, especially amid financial turmoil. Capital requirements for financial institutions, such as the Basel guidelines for banks and the Solvency II Directive for insurers, have their theoretical basis in the EVT. It is therefore indispensable to be equipped with knowledge of the EVT for a better understanding of the multifold forms of risk that we are faced with.

Related posts on the SimTrade blog

▶ Shengyu ZHENG Optimal threshold selection for the peak-over-threshold approach of extreme value theory

▶ Gabriel FILJA Application de la théorie des valeurs extrêmes en finance de marchés

▶ Shengyu ZHENG Extreme returns and tail modelling of the S&P 500 index for the US equity market

▶ Nithisha CHALLA The S&P 500 index

Resources

Academic research (articles)

Aboura S. (2009) The extreme downside risk of the S&P 500 stock index, Journal of Financial Transformation, 26, 104-107.

Gnedenko B. (1943) Sur la distribution limite du terme maximum d'une série aléatoire, Annals of Mathematics, 44, 423-453.

Hosking J. R. M., J. R. Wallis and E. F. Wood (1985) Estimation of the generalized extreme-value distribution by the method of probability-weighted moments, Technometrics, 27(3), 251-261.

Longin F. (1996) The asymptotic distribution of extreme stock market returns, Journal of Business, 63, 383-408.

Longin F. (2000) From VaR to stress testing: the extreme value approach, Journal of Banking and Finance, 24, 1097-1130.

Longin F. and B. Solnik (2001) Extreme correlation of international equity markets, Journal of Finance, 56, 651-678.

Mises R. v. (1936) La distribution de la plus grande de n valeurs, Rev. math. Union interbalcanique, 1, 141-160.

Pickands III J. (1975) Statistical inference using extreme order statistics, The Annals of Statistics, 3(1), 119-131.

Academic research (books)

Beirlant J., Y. Goegebeur, J. Segers and J. Teugels (2004) Statistics of Extremes: Theory and Applications, Wiley.

Embrechts P., C. Klüppelberg and T. Mikosch (1997) Modelling Extremal Events for Insurance and Finance, Springer.

Embrechts P., R. Frey and A. J. McNeil (2022) Quantitative Risk Management, Princeton University Press.

Gumbel E. J. (1958) Statistics of Extremes, Columbia University Press.

Longin F. (2016) Extreme Events in Finance: A Handbook of Extreme Value Theory and its Applications, Wiley.

Other materials

Extreme Events in Finance

Rieder H. E. (2014) Extreme Value Theory: A primer (slides).

About the author

The article was written in October 2022 by Shengyu ZHENG (ESSEC Business School, Grande Ecole Program – Master in Management, 2020-2023).