Historical Volatility

Saral BINDAL

In this article, Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research Assistant at ESSEC Business School) explains the concept of historical volatility used in financial markets to represent and measure the changes in asset prices.

Introduction

Volatility in financial markets refers to the degree of variation in an asset’s price or returns over time. Simply put, an asset is considered highly volatile when its price experiences large upward or downward movements, and less volatile when those movements are relatively small. Volatility plays a central role in finance as an indicator of risk and is widely used in various portfolio and risk management techniques.

In practice, the concept of volatility can be operationalized in different ways: historical volatility and implied volatility. Traders and analysts use historical volatility to understand an asset’s past performance and implied volatility as a forward-looking measure of upcoming uncertainties in the market.

Historical volatility measures the actual variability of an asset’s price over a past period, calculated as the standard deviation of its historical returns. Computed over different periods (say a month), historical volatility allows investors to identify trends in volatility and assess how an asset has reacted to market conditions in the past.

Practical Example: Analysis of the S&P 500 Index (2020 – 2025)

Let us consider the S&P 500 index as an example of the calculation of volatility.

Figure 1 below illustrates the daily closing price of the S&P 500 index over the period from January 2020 to December 2025.

Figure 1. Daily closing prices of the S&P 500 index (2020-2025).
Daily closing prices of the S&P 500 Index (2020-2025)
Source: computation by the author.

Returns

Returns are the percentage gain or loss on an investment in the asset and are generally calculated using one of two methods: arithmetic (simple) or logarithmic (continuously compounded).


Returns Formulas

Where Ri represents the rate of return, and Pi denotes the asset’s price at a given point in time.

The preference for logarithmic returns stems from their property of time-additivity, which simplifies multi-period calculations (the monthly log return is equal to the sum of the daily log returns of the month, which is not the case for arithmetic returns). Furthermore, logarithmic returns align with the geometric mean and thereby mathematically capture the effects of compounding, unlike arithmetic returns, which can overstate performance in volatile markets.
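
To make the time-additivity property concrete, here is a minimal Python sketch (written independently of the Python file provided at the end of the article) that computes both types of returns on a toy price series; the prices and variable names are purely illustrative.

import numpy as np
import pandas as pd

def arithmetic_returns(prices: pd.Series) -> pd.Series:
    # Simple (arithmetic) returns: R_i = P_i / P_{i-1} - 1
    return prices.pct_change().dropna()

def log_returns(prices: pd.Series) -> pd.Series:
    # Continuously compounded (log) returns: R_i = ln(P_i / P_{i-1})
    return np.log(prices / prices.shift(1)).dropna()

# Toy daily closing prices standing in for an index
prices = pd.Series([100.0, 101.5, 100.8, 102.3, 103.0])
r_arith = arithmetic_returns(prices)
r_log = log_returns(prices)

# Time-additivity: the sum of daily log returns equals the log return over the whole period
print(r_log.sum(), np.log(prices.iloc[-1] / prices.iloc[0]))  # equal (up to rounding)
# Arithmetic returns do not add up: the multi-period return is a product, not a sum
print((1 + r_arith).prod() - 1, r_arith.sum())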

Distribution of returns

A statistical distribution describes the likelihood of different outcomes for a random variable. It begins with classifying the data as either discrete or continuous.

Figure 2 below illustrates the distribution of daily returns for the S&P 500 index over the period from January 2020 to December 2025.

Figure 2. Historical distribution of daily returns of the S&P 500 index (2020-2025).
Historical distribution of daily returns of the S&P 500 index (2020-2025)
Source: computation by the author.

Standard deviation of the distribution of returns

In real life, as we do not know the mean and standard deviation of returns, these parameters have to be estimated with data.

The estimator for the mean μ, denoted by μ̂, and the estimator for the variance σ², denoted by σ̂², are given by the following formulas:


Formulas for the mean and variance estimators

With the following notations:

  • Ri = rate of return for the ith data point
  • μ̂ = mean of the data
  • σ̂² = variance of the data
  • n = total number of days for the data

These estimators are unbiased and efficient (note Bessel’s correction in the variance estimator: we divide by (n–1) instead of n).


Unbiased estimators of the mean and variance

For the distribution of returns in Figure 2, the mean and standard deviation calculated using the formulas above are 0.049% and 1.068%, respectively (in daily units).
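
As an illustration, the two estimators can be computed in a few lines of Python; the return series below is a toy array, not the actual S&P 500 data used in the article.

import numpy as np

returns = np.array([0.004, -0.012, 0.007, 0.001, -0.003, 0.010])  # toy daily log returns

mu_hat = returns.mean()            # estimator of the mean
sigma_hat = returns.std(ddof=1)    # daily volatility with Bessel's correction (divide by n-1)
print(f"daily mean: {mu_hat:.4%}, daily volatility: {sigma_hat:.4%}")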

Annualized volatility

As the usual time frame for investors is the year, volatility is often annualized. To obtain annual (or annualized) volatility, we scale the daily volatility by the square root of the number of trading days in a year (τ), as shown below.


Annual Volatility formula

Where τ is the number of trading days during the calendar year.

In the U.S. equity market, the annual number of trading days typically ranges from 250 to 255 (252 trading days in 2025). This variation reflects the holiday calendar: when a holiday falls on a weekday, the exchange closes; when it falls on a weekend, trading is unaffected. In contrast, the cryptocurrency market has as many trading days as there are calendar days in a year, since it operates continuously, 24/7.
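
The annualization itself is a one-line calculation. The sketch below assumes 252 trading days and reuses the daily volatility of 1.068% found above; with these rounded inputs it returns a value close to the annualized figure reported below.

import numpy as np

daily_vol = 0.01068   # daily volatility (1.068%) estimated above
tau = 252             # assumed number of trading days in a calendar year

annual_vol = daily_vol * np.sqrt(tau)
print(f"annualized volatility: {annual_vol:.3%}")   # about 16.95%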

For the S&P 500 index over the period from January 2020 to December 2025, the annualized volatility is given by


Annual volatility formula for the S&P 500 index

Annualized mean

The mean calculated above for the S&P 500 logarithmic returns is also the daily average return for the period. The annualized average return is given by the formula below.


Annualized mean formula

Where τ is the number of trading days during the calendar year.

For the S&P 500 index over the period from January 2020 to December 2025, the annualized average return is given by


Annualized mean formula

If the daily average return is much smaller than 1, the annual average return can be approximated as


Annualized mean value

Application: Estimating the Future Price Range of the S&P 500 index

To develop an intuitive understanding of these figures, we can estimate the one-standard-deviation price range for the S&P 500 index over the next year. From the above calculations, we know that the annualized mean return is 12.534% and the annualized standard deviation is 16.953%.

Under the assumption of normally distributed logarithmic returns, we can say with approximately 68% confidence that the value of the S&P 500 index is likely to be in the range of:


Upper and lower limits

The ranges calculated above are based on logarithmic returns (continuously compounded). To convert them into simple returns (effective annual rates), we use the following formula:


Effective rate formula

If the current value of the S&P 500 index is $6,830, then converting these log-return estimates into price levels gives:


Upper and lower price limits

Based on a 68% confidence interval, the S&P 500 index is likely to trade in the range of $6,534 to $9,172 over the next year.
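
The price range above can be reproduced with the short Python sketch below; the inputs are the rounded annualized figures from the text, so the resulting levels differ from the quoted $6,534–$9,172 only by rounding.

import numpy as np

mu_annual = 0.12534      # annualized mean of log returns
sigma_annual = 0.16953   # annualized volatility of log returns
S0 = 6830                # assumed current level of the S&P 500 index

# One-standard-deviation band for the annual log return (about 68% under normality)
log_low, log_high = mu_annual - sigma_annual, mu_annual + sigma_annual

# Convert continuously compounded returns into price levels
price_low, price_high = S0 * np.exp(log_low), S0 * np.exp(log_high)
print(f"68% range: {price_low:,.0f} to {price_high:,.0f}")   # roughly 6,535 to 9,172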

Historical Volatility

Historical volatility represents the variability of an asset’s returns over a chosen lookback period. The annualized historical volatility is estimated using the formula below.


 Historical volatility formula

With the following notations:

  • σ = Standard deviation
  • Ri = Return
  • n = total number of trading days in the period (21 for 1 month, 63 for 3 months, etc.)
  • τ = Number of trading days in a calendar year

Volatility calculated over different periods must be annualized to a common timeframe to ensure comparability, as the standard convention in finance is to express volatility on an annual basis. Therefore, when working with daily returns, we annualize the volatility by multiplying it by the square root of 252.

For example, for the S&P 500 index, the annualized historical volatilities over the last 1 month, 3 months, and 6 months, computed on December 3, 2025, are 14.80%, 12.41%, and 11.03%, respectively. Since the short-term (1-month) volatility is higher than the medium-term (3-month) and longer-term (6-month) volatilities, recent market movements have been more turbulent than those of the previous months. Because of volatility clustering, periods of high volatility often persist, so this elevated turbulence may continue in the near term.
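
A simple way to compute such figures is sketched below; the function assumes a pandas Series of daily log returns (here named r, a hypothetical variable) and the usual window lengths of 21, 63 and 126 trading days.

import numpy as np
import pandas as pd

def historical_volatility(log_returns: pd.Series, window_days: int, tau: int = 252) -> float:
    # Annualized historical volatility over the last `window_days` daily log returns
    recent = log_returns.iloc[-window_days:]
    return recent.std(ddof=1) * np.sqrt(tau)

# Usage with a Series of daily log returns named r (e.g. for the S&P 500):
# vol_1m = historical_volatility(r, 21)
# vol_3m = historical_volatility(r, 63)
# vol_6m = historical_volatility(r, 126)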

Unconditional Volatility

Unconditional volatility is a single volatility number computed using all historical data, which in our example is the entire five years of data. It does not account for the fact that recent market behavior is more relevant for predicting tomorrow’s risk than events from past years, that is, it ignores that volatility changes over time. It is frequently observed that after a sudden boom or crash in the market, once the storm passes, volatility tends to revert to a constant value, and that value is given by the unconditional volatility of the entire period. This tendency is referred to as mean reversion.

For instance, using S&P 500 index data from 2020 to 2025, the unconditional volatility (annualized standard deviation) is calculated to be 16.952%.

Rolling historical volatility

A single volatility number often fails to capture changing market regimes. Therefore, a rolling historical volatility is usually generated to track the evolution of market risk. By calculating the standard deviation over a moving window, we can observe how volatility has expanded or contracted historically. This is illustrated in Figure 3 below for the annualized 3-month historical volatility of the S&P 500 index over the period 2020-2025.

Figure 3. 3-month rolling historical volatility of the S&P 500 index (2020-2025).
3-month rolling historical volatility of the S&P 500 index
Source: computation by the author.

In Figure 3, the 3-month rolling historical volatility is plotted along with the unconditional volatility computed over the entire period, calculated using overlapping windows to generate a continuous series. This provides a clear historical perspective, showcasing how the asset’s volatility has fluctuated relative to its long-term average.

For example, at the start of the Russia–Ukraine war (February 2022 – August 2022), a noticeable jump in volatility occurred as energy and food prices surged amid fears of supply chain disruptions, given that Russia and Ukraine are major exporters of oil, natural gas, wheat, and other commodities.

The rolling window can be either overlapping or non-overlapping, resulting in continuous or discrete graphs, respectively. Overlapping windows shift by one day, creating a smooth and continuous volatility series, whereas non-overlapping windows shift by one time period, producing a discrete series.
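
With overlapping windows, a rolling series like the one in Figure 3 can be generated in one line with pandas, as in the sketch below; r again denotes a hypothetical Series of daily log returns.

import numpy as np
import pandas as pd

def rolling_volatility(log_returns: pd.Series, window_days: int = 63, tau: int = 252) -> pd.Series:
    # 3-month (63-day) rolling historical volatility, annualized, with overlapping windows
    return log_returns.rolling(window_days).std(ddof=1) * np.sqrt(tau)

# Non-overlapping version: keep only every 63rd observation of the overlapping series
# discrete = rolling_volatility(r, 63).iloc[62::63]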

You can download the Excel file provided below, which contains the computation of returns, their historical distribution, the unconditional historical volatility, and the 3-month rolling historical volatility of the S&P 500 index used in this article.

Download the Excel file for returns and volatility calculation

You can download the Python code provided below, which contains the computation of returns and the first four moments of the distribution, and lets you experiment with the x-month rolling historical volatility function to visualize the evolution of historical volatility over time.

Download the Python code for returns and volatility calculation.

Alternatively, you can download the R code below with the same functionality as in the Python file.

Download the R code for returns and volatility calculation.

Alternative measures of volatility

We now mention a few other ways volatility can be measured: Parkinson volatility, Implied volatility, ARCH model, and stochastic volatility model.

Parkinson volatility

The Parkinson model (1980) uses the highest and lowest prices during a given period (say a month) to measure volatility. It is a high-low volatility measure, based on the difference between the maximum and minimum prices observed during a certain period.

Parkinson volatility is a range-based variance estimator that replaces squared returns with the squared high–low log price range, scaled to remain unbiased. It assumes a driftless geometric Brownian motion (expected growth rate of the stock price equal to zero) and is up to five times more efficient than the close-to-close estimator because it accounts for the fluctuation of the stock price within the day.

For a sample of n observations (say days), the Parkinson volatility is given by


Parkinson Volatility formula

where:

  • Ht is the highest price in period t
  • Lt is the lowest price in period t
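
A minimal Python implementation of the Parkinson estimator is sketched below, assuming two pandas Series of daily highs and lows and 252 trading days per year.

import numpy as np
import pandas as pd

def parkinson_volatility(high: pd.Series, low: pd.Series, tau: int = 252) -> float:
    # Annualized Parkinson (high-low) volatility estimate
    squared_range = np.log(high / low) ** 2
    daily_var = squared_range.mean() / (4.0 * np.log(2.0))   # scaling factor 1 / (4 ln 2)
    return float(np.sqrt(daily_var * tau))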

Implied volatility

Implied Volatility (IV) is the level of volatility for the underlying asset that, when plugged into an option pricing model such as Black–Scholes–Merton, makes the model’s theoretical option price equal to the option’s observed market price.

It is a forward looking measure because it reflects the market’s expectation of how much the underlying asset’s price is likely to fluctuate over the remaining life of the option, rather than how much it has moved in the past.

The Chicago Board Options Exchange (CBOE), a leading global financial exchange operator, provides implied volatility indices such as the VIX, which measures 30-day expected volatility derived from SPX options, and the Implied Correlation Index. These are used by traders to gauge market fear, speculate via futures, options and ETPs, hedge equity portfolios and manage risk during volatility spikes.

ARCH model

Autoregressive Conditional Heteroscedasticity (ARCH) models address time-varying volatility in time series data. Introduced by Engle in 1982, ARCH models look at the size of past shocks to estimate how volatile the next period is likely to be. If recent movements were large, the model expects higher volatility; if they were small, it expects lower volatility, which is consistent with the idea of volatility clustering. Originally applied to inflation data, this model has been widely used to model financial data.

ARCH models capture volatility clustering, which refers to an observation about how volatility behaves in the short term: a large movement is usually followed by another large movement, so volatility is predictable in the short term. This is also why historical volatility gives a short-term hint of near-future changes in the market: recent noise often continues.

Generalized Autoregressive Conditional Heteroscedasticity (GARCH), proposed by Bollerslev in 1986 as a refinement of Engle’s work, extends ARCH by also using past predicted volatility, not just past shocks. Both methods forecast volatility more accurately than the simple historical measures discussed above because they account for the time-varying nature of volatility.
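
To give a feel for the mechanics, here is a minimal sketch of the GARCH(1,1) variance recursion. The parameters omega, alpha and beta below are illustrative assumptions; in practice they are estimated by maximum likelihood, for example with the Python arch package.

import numpy as np

def garch_variance(returns, omega=1e-6, alpha=0.1, beta=0.85):
    # Conditional variance path of a GARCH(1,1):
    # sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
    # Setting beta = 0 reduces the model to ARCH(1).
    returns = np.asarray(returns)
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()              # initialize at the sample (unconditional) variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# np.sqrt(garch_variance(r)) gives a conditional daily volatility path for a return series r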

Stochastic volatility models

In practice, volatility is time-varying: it exhibits clustering, persistence, and mean reversion. To capture these empirical features, stochastic volatility (SV) models treat volatility not as a constant parameter but as a stochastic process jointly evolving with the asset price. Among these models, the Heston (1993) specification is one of the most influential.

The Heston model assumes that the asset price follows a diffusion process analogous to geometric Brownian motion, while the instantaneous variance evolves according to a mean-reverting square-root process. Moreover, the innovations to the price and variance processes are correlated, thereby capturing the leverage effect frequently observed in equity markets.

Applications in finance

This section covers key mathematical concepts and fundamental principles of portfolio management, highlighting the role of volatility in assessing risk.

The normal distribution

The normal distribution is one of the most commonly used probability distributions for a random variable, with a unimodal, symmetric and bell-shaped curve. Its probability density function is given by


Normal distribution function

A random variable X is said to follow the standard normal distribution if its mean is zero and its variance is one.

The figure below represents the confidence intervals, showing the percentage of data falling within one, two, and three standard deviations from the mean.

Figure 4. Standard normal distribution.
Standard normal distribution
Source: computation by the author

Brownian motion

Robert Brown first observed Brownian motion as the erratic and random movement of pollen particles suspended in water, caused by constant collisions with water molecules. It was later formulated mathematically by Norbert Wiener and is also known as the Wiener process.

The random walk theory suggests that it is impossible to predict future stock prices because they move randomly; when the time step of the random walk becomes infinitesimally small, the process becomes Brownian motion.

In the context of financial stochastic processes, when the market is modeled by standard Brownian motion, the probability distribution of the future price is normal, whereas when it is modeled by geometric Brownian motion, future prices are lognormally distributed. This is also called the Brownian motion hypothesis on the movement of stock prices.

The process of a standard Brownian motion is given by:


Standard Brownian motion formula.

The process of a geometric Brownian motion is given by:


Geometric Brownian motion formula.

Where dSt is the change in the asset price over the infinitesimal time interval dt, dXt is the increment of a Wiener process at time t (normally distributed with mean zero and variance dt), σ represents the price volatility, and μ represents the expected growth rate of the asset price, also known as the ‘drift’.
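
The following sketch simulates one path of a geometric Brownian motion with illustrative parameters (S0, mu and sigma are assumptions, not estimates from the article’s data); it uses the exact solution of the GBM equation, so the simulated prices are lognormally distributed as stated above.

import numpy as np

def simulate_gbm(S0=100.0, mu=0.12, sigma=0.17, T=1.0, n_steps=252, seed=0):
    # One path of dS_t = mu * S_t * dt + sigma * S_t * dX_t
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dX = rng.normal(0.0, np.sqrt(dt), n_steps)               # Wiener increments ~ N(0, dt)
    log_path = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dX)
    return S0 * np.exp(np.concatenate(([0.0], log_path)))    # price path starting at S0

path = simulate_gbm()
print(path[0], path[-1])   # initial and terminal simulated prices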

Modern Portfolio Theory (MPT)

Modern Portfolio Theory (MPT), developed by Nobel laureate Harry Markowitz in the 1950s, is a framework for constructing optimal investment portfolios, derived from the foundational mean-variance model.

The Markowitz mean–variance model suggests that risk can be reduced through diversification. It proposes that risk-averse investors should optimize their portfolios by selecting a combination of assets that balances expected return and risk, thereby achieving the best possible return for the level of risk they are willing to take. The optimal trade-off curve between expected return and risk, commonly known as the efficient frontier, represents the set of portfolios that maximizes expected return for each level of standard deviation (risk).

Capital Asset Pricing Model (CAPM)

The Capital Asset Pricing Model (CAPM) builds on the model of portfolio choice developed by Harry Markowitz (1952), stated above. CAPM states that, assuming full agreement on return distributions and either risk-free borrowing/lending or unrestricted short selling, the value-weighted market portfolio of risky assets is mean-variance efficient, and expected returns are linear in the market beta.

The main result of the CAPM is a simple mathematical formula that links the expected return of an asset to its risk measured by the beta of the asset:


CAPM formula

Where:

  • E(Ri) = expected return of asset i
  • Rf = risk-free rate
  • βi = measure of the risk of asset i
  • E(Rm) = expected return of the market
  • E(Rm) − Rf = market risk premium
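
As a simple numerical illustration of the formula, with hypothetical inputs (a 3% risk-free rate, a beta of 1.2 and an 8% expected market return):

def capm_expected_return(risk_free: float, beta: float, market_return: float) -> float:
    # CAPM: E(Ri) = Rf + beta_i * (E(Rm) - Rf)
    return risk_free + beta * (market_return - risk_free)

print(f"{capm_expected_return(0.03, 1.2, 0.08):.1%}")   # 9.0%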

CAPM recognizes that an asset’s total risk has two components: systematic risk and specific risk, but only systematic risk is compensated in expected returns.

Returns decomposition formula.
Returns decomposition formula

Where the realized (actual) returns of the market (Rm) and of the asset (Ri) may differ from their expected values, and ε denotes the asset-specific (non-systematic) component of the asset’s return.

Decomposition of risk.
Decomposition of risk

Systematic risk is a macro-level form of risk that affects a large number of assets to one degree or another, and therefore cannot be eliminated. General economic conditions, such as inflation, interest rates, geopolitical risk or exchange rates are all examples of systematic risk factors.

Specific risk (also called idiosyncratic risk or unsystematic risk), on the other hand, is a micro-level form of risk that specifically affects a single asset or a narrow group of assets. It involves special risk that is unconnected to the market and reflects the unique nature of the asset. For example, a company-specific financial or business decision may result in lower earnings and affect the stock price negatively, without impacting the performance of the other assets in the portfolio. Other examples of specific risk include a firm’s credit rating, negative press reports about a business, or a strike affecting a particular company.

Why should I be interested in this post?

Understanding different measures of volatility is a prerequisite to better assess potential losses, optimize portfolio allocation, and make informed decisions to balance risk and expected return. Volatility is fundamental to risk management and to constructing investment strategies.

Related posts on the SimTrade blog

Risk and Volatility

   ▶ Jayati WALIA Brownian Motion in Finance

   ▶ Youssef LOURAOUI Systematic Risk

   ▶ Youssef LOURAOUI Specific Risk

   ▶ Jayati WALIA Implied Volatility

   ▶ Mathias DUMONT Pricing Weather Risk

   ▶ Jayati WALIA Black-Scholes-Merton Option Pricing Model

Portfolio Theory and Models

   ▶ Jayati WALIA Returns

   ▶ Youssef LOURAOUI Portfolio

   ▶ Jayati WALIA Capital Asset Pricing Model (CAPM)

   ▶ Youssef LOURAOUI Optimal Portfolio

Financial Indexes

   ▶ Nithisha CHALLA Financial Indexes

   ▶ Nithisha CHALLA Calculation of Financial Indexes

   ▶ Nithisha CHALLA The S&P 500 Index

Useful Resources

Academic research

Bollerslev, T. (1986). Generalized Autoregressive Conditional Heteroskedasticity, Journal of Econometrics, 31(3), 307–327.

Engle, R. F. (1982). Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation, Econometrica, 50(4), 987–1007.

Fama, E. F., & French, K. R. (2004). The Capital Asset Pricing Model: Theory and Evidence, Journal of Economic Perspectives, 18(3), 25–46.

Heston, S. L. (1993). A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options, The Journal of Finance, 48(3), 1–24.

Markowitz, H. M. (1952). Portfolio Selection, The Journal of Finance, 7(1), 77–91.

Parkinson, M. (1980). The extreme value method for estimating the variance of the rate of return. Journal of Business, 53(1), 61–65.

Sharpe, W. F. (1964). Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk, The Journal of Finance, 19(3), 425–442.

Tsay, R. S. (2010). Analysis of financial time series, John Wiley & Sons.

Other

NYU Stern Volatility Lab Volatility analysis documentation.

Extreme Events in Finance Risk maps: extreme risk, risk and performance.

About the author

The article was written in December 2025 by Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research Assistant at ESSEC Business School).

   ▶ Read all articles by Saral BINDAL.

The “lemming effect” in finance

Langchin SHIU

In this article, SHIU Lang Chin (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2024-2026) explains the “lemming effect” in financial markets, inspired by the animated movie Zootopia.

About the concept

The “lemming effect” refers to situations where individuals follow the crowd unthinkingly, just as lemmings are believed to follow one another off a cliff. In finance, this idea is linked to herd behaviour: investors imitate the actions of others instead of relying on their own information or analysis.

The image above is a cartoon showing a line of lemmings running off a cliff, with several already falling through the air. The caption “The Lemming Effect: Stop! There is another way” warns that blindly following others can lead to disaster, even if “everyone is doing it.” The message is to think independently, question group behaviour, and choose an alternative path instead of copying the crowd.

In Zootopia, there is a scene where lemmings dressed as bankers leave their office and are supposed to walk straight home after work. However, after one lemming notices Nick selling popsicles and suddenly changes direction to buy one, the rest of the lemmings automatically follow and queue up too, even though this is completely different from their original route and plan. This illustrates how individuals can abandon their own path and intentions simply because they see someone else acting first, much like investors may follow others into a trade or trend without conducting independent analysis.

Watch the video!


Source: Zootopia (Disney, 2016).

The first image shows Nick Wilde (the fox) holding a red paw-shaped popsicle. In the film, Nick uses this eye‑catching pawpsicle as a marketing tool to attract the lemmings and earn a profit.

zootopia lemmings
Source: Zootopia (Disney, 2016).

The second image shows a group of identical lemmings in suits walking in and out of a building labelled “Lemming Brothers Bank.” This is a parody of the real investment bank “Lehman Brothers,” which collapsed during the 2008 financial crisis. When one lemming notices the pawpsicle, it immediately changes direction from going home and heads toward Nick to buy the product, illustrating how one individual’s choice triggers the rest to follow.

zootopia lemmings
Source: Zootopia (Disney, 2016).

The third image shows Nick successfully selling pawpsicles to a whole line of lemmings. Nick is exploiting the lemmings’ herd‑like behaviour: once a few begin buying, the others automatically copy them and all purchase the same pawpsicle. The humour lies in how Nick profits from their conformity, using their predictable group behaviour—the “lemming effect”—to make easy money.

zootopia lemmings
Source: Zootopia (Disney, 2016).

Behavioural finance uses the lemming effect to describe deviations from perfectly rational decision-making. Rather than analysing fundamentals calmly, investors may be influenced by social proof, fear of missing out (FOMO) or the comfort of doing what “everyone else” seems to be doing.

Understanding the lemming effect is important both for professional investors and students of finance. It helps to explain why markets sometimes move far away from fundamental values and reminds decision-makers to be cautious when “the whole market” points in the same direction.

How the lemming effect appears in markets

In practice, the lemming effect can be seen when large numbers of investors buy the same “hot” stocks simply because prices are rising: they assume that so many others doing the same thing cannot be wrong.

It also applies in reverse during market downturns. Bad news, rumours, or sharp price declines can trigger a wave of selling. The fear of being the last one out can push investors to copy others’ behaviour rather than stick to their original plan.

Such herd-driven moves can amplify volatility, push prices far above or below intrinsic value, and create opportunities or risks that would not exist in a purely rational market. Recognising these dynamics helps investors to step back and question whether they are thinking independently.

Related financial concepts

The lemming effect connects naturally with several basic financial ideas: diversification, the risk-return trade-off, market efficiency, Keynes’ beauty contest and the GameStop story. It shows how human behaviour can distort these textbook concepts in real markets.

Diversification

Diversification means not putting all your money in the same basket (asset or sector), so that the poor performance of one investment does not destroy the whole portfolio. When the lemming effect is strong, investors often forget diversification and concentrate on a few “popular” stocks. From a diversification perspective, following the crowd can increase risk without necessarily increasing expected returns.

Risk and return

Finance theory says that higher expected returns usually come with higher risk. However, when many investors behave like lemmings, they may underestimate the true risk of crowded trades. Rising prices can create an illusion of safety, even if fundamentals do not justify the move. Understanding the lemming effect reminds investors to ask whether a sustainable increase in expected return really compensates for the extra risk taken by following the crowd.

Market efficiency

In an efficient market, prices should reflect all available information. Herd behaviour and the lemming effect demonstrate that markets can deviate from this ideal when many investors react based on emotions or social cues rather than information. Short-term mispricing created by herding can eventually be corrected when new information becomes available or when rational investors intervene. For students, this illustrates why theoretical models of perfect efficiency are useful benchmarks but do not fully capture real-world behaviour.

Keynes’ beauty contest

Keynes’ “beauty contest” analogy describes investors who do not choose stocks based on their own view of fundamental value, but instead try to guess what everyone else will think is beautiful. Instead of asking “Which company is truly best?”, they ask “Which company does the average investor think others will like?” and buy that, hoping to sell to the next person at a higher price. This links directly to the lemming effect: investors watch each other and pile into the same trades, just like the lemmings all changing direction to follow the first one who goes for the pawpsicle.

GameStop story

The GameStop short squeeze in 2021 is a modern real‑world illustration of herd behaviour. A large crowd of retail investors on Reddit and other forums started buying GameStop shares together, partly for profit and partly as a social movement against hedge funds, driving the price far above what traditional valuation models would suggest. Once the price started to rise sharply, more and more people jumped in because they saw others making money and feared missing out, reinforcing the crowd dynamic in a very “lemming‑like” way.

Why should I be interested in this post?

For business and finance students, the lemming effect is a bridge between psychology and technical finance. It helps explain why prices sometimes move in surprising ways, and why sticking mindlessly to the crowd can be dangerous for long-term wealth.

Whether you plan to work in banking, asset management, consulting or corporate finance, understanding herd behaviour can improve your judgment. It encourages you to combine quantitative tools with a critical view of market sentiment, so that you do not become the next “lemming” in a crowded trade.

Related posts on the SimTrade blog

   ▶ All posts about Financial techniques

   ▶ Hadrien PUCHE “The market is never wrong, only opinions are“ – Jesse Livermore

   ▶ Hadrien PUCHE “It’s not whether you’re right or wrong that’s important, but how much money you make when you’re right and how much you lose when you’re wrong.”– George Soros

   ▶ Daksh GARG Social Trading

   ▶ Raphaël ROERO DE CORTANZE Gamestop: how a group of nostalgic nerds overturned a short-selling strategy

Useful resources

BBC Five animals to spot in a post-Covid financial jungle

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive psychology, 5(2), 207-232.

Gupta, S., & Shrivastava, M. (2022). Herding and loss aversion in stock markets: mediating role of fear of missing out (FOMO) in retail investors. International Journal of Emerging Markets, 17(7), 1720-1737.

Argan, M., Altundal, V., & Tokay Argan, M. (2023). What is the role of FoMO in individual investment behavior? The relationship among FoMO, involvement, engagement, and satisfaction. Journal of East-West Business, 29(1), 69-96.

About the author

The article was written in December 2025 by SHIU Lang Chin (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2024-2026).

   ▶ Read all articles by SHIU Lang Chin.

Time value of money

Langchin SHIU

In this article, SHIU Lang Chin (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2024-2026) explains the time value of money, a simple but fundamental concept used in all areas of finance.

Overview of the time value of money

The time value of money (TVM) is the idea that one euro today is worth more than one euro in the future because today’s money can be invested to earn interest. In other words, receiving cash earlier gives more opportunities to save, invest, and grow wealth over time. This principle serves as the foundation for valuing loans, bonds, investment projects, and many everyday financial decisions.

To work with TVM, finance uses a few key tools: present value (the value today of future cash flows), future value (the value in the future of money invested today), etc. With these elements, it becomes possible to consistently compare cash-flow patterns that occur at different dates.

Future value

The future value (FV) of money answers the question: if I invest a certain amount today at a given interest rate, how much will I have after some time? Future value uses the principle of compounding, which means that interest earns interest when it is reinvested.

For a simple case with annual compounding, the formula is:

Future Value (FV)

where PV is the amount invested today, r is the annual interest rate, and T is the number of years.

For example, if 1,000 euros are invested at 5% per year for 3 years, the future value is FV = 1,000 × (1.05)^3 = 1,157.63 euros. This shows how even a modest interest rate can increase the value of an investment over time.

Compounding frequency can also change the result. If interest is compounded monthly instead of annually, the formula is adjusted to use a periodic rate and the total number of periods. The more frequently interest is added, the higher the future value for the same nominal annual rate, illustrating why compounding is such a powerful mechanism in long-term investing.

Compounding mechanism with monthly and annual compounding.
Compounding mechanism


You can download the Excel file provided below, which contains the computation of an investment to illustrate the impact of the frequency on the compounding mechanism.

Download the Excel file for computation of an investment to illustrate the impact of the frequency on the compounding mechanism
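
The effect of the compounding frequency can also be checked with a short Python sketch; the inputs reuse the 1,000 euros at 5% over 3 years from the example above.

def future_value(pv: float, annual_rate: float, years: float, periods_per_year: int = 1) -> float:
    # FV = PV * (1 + r/m)^(m*T), with m compounding periods per year
    m = periods_per_year
    return pv * (1 + annual_rate / m) ** (m * years)

print(round(future_value(1000, 0.05, 3), 2))        # annual compounding: about 1,157.63
print(round(future_value(1000, 0.05, 3, 12), 2))    # monthly compounding: about 1,161.47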

Present value

Present value (PV) is the reverse operation of future value and answers the question: how much is a future cash flow worth today? To find PV, the future cash flow is “discounted” back to today using an appropriate discount rate that reflects opportunity cost, risk and inflation.

For a single future cash flow, the present value formula is:

Present Value (PV)

Where FV is the future amount, r is the discount rate per period, and T is the number of periods.

For example, if an investor expects to receive 1,000 euros in 2 years and the discount rate is 5% per year, the present value is PV = 1,000 / (1.05)^2 = 907.03 euros. This means the investor would be indifferent between receiving 907.03 euros today or 1,000 euros in two years at that discount rate.

Choosing the discount rate is a key step: for a safe cash flow, a risk-free rate such as a government bond yield might be used, while for a risky project, a higher rate reflecting the required return of investors would be more appropriate. A higher discount rate reduces present values, making future cash flows less attractive compared to money today.
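
A matching Python sketch for the present value, using the 1,000 euros in 2 years at a 5% discount rate from the example above:

def present_value(fv: float, annual_rate: float, years: float) -> float:
    # PV = FV / (1 + r)^T
    return fv / (1 + annual_rate) ** years

print(round(present_value(1000, 0.05, 2), 2))   # 907.03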

Applications of the time value of money

The time value of money is used in almost every area of finance. In corporate finance, it forms the basis of discounted cash-flow (DCF) analysis, where the expected future cash flows of a project or company are discounted to estimate the net present value. Investment decisions are typically made by comparing the present value to the initial cost.

DCF

In banking and personal finance, TVM is essential to design and understand loans, deposits and retirement plans. Customers who understand how interest rates and compounding work can better compare offers, negotiate terms and plan their savings. In capital markets, bond pricing, yield calculations and valuation of many other instruments depend directly on discounting streams of cash flows.

Even outside professional finance, TVM helps individuals answer simple but important questions: is it better to take a lump sum now or a stream of payments later, how much should be saved each month to reach a future target, or what is the true cost of borrowing at a given interest rate? A good intuition for TVM improves financial decision-making in everyday life.

Why should I be interested in this post?

As a university student, understanding TVM is essential because it underlies more advanced techniques such as discounted cash-flow (DCF) valuation, bond pricing and project evaluation. It is usually one of the first technical topics taught in introductory corporate finance and quantitative methods courses.

Related posts on the SimTrade blog

   ▶ All posts about Financial techniques

   ▶ Hadrien PUCHE The four most dangerous words in investing are, it’s different this time

   ▶ Hadrien PUCHE Remember that time is money

Useful resources

Harvard Business School Online Time value of money

Investing.com Time value of money: formula and examples

About the author

The article was written in December 2025 by SHIU Lang Chin (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2024-2026).

   ▶ Read all articles by SHIU Lang Chin.

Deep Dive into evergreen funds

Emmanuel CYROT

In this article, Emmanuel CYROT (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026) introduces the ELTIF 2.0 Evergreen Fund.

Introduction

The asset management industry is pivoting to democratize private market access for the wealth segment. We are moving from the rigid Capital Commitment Model (the classic “blind pool” private equity structure) to the flexible NAV-Based Model, an open-ended structure where subscriptions and redemptions are executed at periodic asset valuations rather than through irregular capital calls. For technical product specialists, the ELTIF 2.0 regulation isn’t just a compliance update, it’s the architectural blueprint for the democratization of private markets. Here is the deep dive into how these “Semi-Liquid” or “Evergreen” structures actually work, the European landscape, and the engineering behind them.

The Liquidity Continuum: Solving the “J-Curve” Problem

To understand the evergreen structure, you have to understand what it fixes. In a traditional Closed-End Fund (the “Old Guard”):

  • The Cash Drag: You commit €100k, but the manager only calls 20% in Year 1. Your money sits idle.
  • The J-Curve: You pay fees on committed capital immediately, but the portfolio value drops initially due to costs before rising (the “J” shape).
  • The Lock: Your capital is trapped for 10-12 years. Secondary markets are your only (expensive) exit.

The Evergreen / Semi-Liquid Solution represents the structural convergence of private market asset exposure with an open-ended fund’s periodic subscription and redemption framework.

  • Fully Invested Day 1: Unlike the Capital Commitment model, your capital is put to work almost immediately upon subscription.
  • Perpetual Life: There is no “end date.” The fund can run for 99 years, recycling capital from exited deals into new ones.
  • NAV-Based: You buy in at the current Net Asset Value (NAV), similar to a mutual fund, rather than making a commitment.

The difference in investment processes between evergreen funds and closed ended funds
 The difference in investment processes between evergreen funds and closed ended funds
Source: Medium.

The European Landscape: The Rise of ELTIF 2.0

The “ELTIF 2.0” regulation (Regulation (EU) 2023/606) is the game-changer. It removed the extra local rules that held the market back in Europe. These rules included high national minimum investment thresholds for retail investors and overly restrictive limits on portfolio composition and liquidity features imposed by national regulators.

Market Data as of 2025 (Morgan Lewis)

  • Volume: The market is rapidly expanding, with more than 160 registered ELTIFs now active across Europe as of 2025.
  • The Hubs: Luxembourg is the dominant factory (approx. 60% of funds), followed by France (strong on the Fonds Professionnel Spécialisé or FPS wrapper) and Ireland.
  • The Arbitrage: The killer feature is the EU Marketing Passport. A French ELTIF can be sold to a retail investor in Germany or Italy without needing a local license. This allows managers to aggregate retail capital on a massive scale.

Structural Engineering: Liquidity

This section delves into the precise engineering required to reconcile the illiquidity of the underlying assets with the promise of periodic investor liquidity in Evergreen/Semi-Liquid funds. This is achieved through a combination of Asset Allocation Constraints and robust Liquidity Management Tools (LMTs).

The primary allocation constraint is the “Pocket” Strategy, or the 55/45 Rule. The fund is structurally divided into two distinct components. First, the Illiquid Core, which must represent greater than 55% of the portfolio, is the alpha engine holding long-term, illiquid assets such as Private Equity, Private Debt, or Infrastructure. Notably, ELTIF 2.0 has broadened the scope of this core to include newer asset classes like Fintechs and smaller listed companies. Second, the Liquid Pocket, which can be up to 45%, serves as the fund’s buffer, holding easily redeemable, UCITS-eligible assets like money market funds or government bonds. While the regulation permits a high 45% pocket, efficient fund operation typically keeps this buffer closer to 15%–20% to mitigate performance-killing “cash drag”.

Crucial to managing liquidity risk is the Gate Mechanism. Although the fund offers conditional liquidity (often quarterly), the Gate prevents a systemic crisis if many investors attempt to exit simultaneously. This mechanism works by capping redemptions at a specific percentage of the Net Asset Value (NAV) per period, commonly set at 5%. If aggregate redemption requests exceed this threshold (e.g., requests total 10%), all withdrawing investors receive a pro-rata share of the allowable 5% and the remainder of their request is deferred to the next liquidity window.
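
A minimal sketch of the pro-rata logic behind such a gate is given below; the investor names, amounts and the 100m NAV are hypothetical, while the 5% cap follows the example in the text.

def apply_gate(requests: dict, nav: float, gate_pct: float = 0.05) -> dict:
    # Pro-rate redemption requests when they exceed the gate (a cap set as a share of NAV)
    total = sum(requests.values())
    cap = gate_pct * nav
    if total <= cap:
        return dict(requests)                  # all requests are met in full
    scale = cap / total                        # pro-rata factor applied to every request
    return {investor: amount * scale for investor, amount in requests.items()}

# Requests total 10% of a 100m NAV against a 5% gate: each investor is paid half now,
# and the remainder is deferred to the next liquidity window.
print(apply_gate({"A": 6_000_000, "B": 4_000_000}, nav=100_000_000))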

Finally, managers utilize Anti-Dilution Tools like Swing Pricing to protect the financial interests of the long-term investors remaining in the fund. In a scenario involving heavy redemptions, where the fund manager is forced to sell assets quickly and incur high transaction costs, Swing Pricing adjusts the NAV downwards only for the exiting investors. This critical mechanism ensures that those demanding liquidity—the “leavers”—bear the transactional “cost of liquidity,” thereby insulating the NAV of the “stayers” from dilution.

Why should I be interested in this post?

Mastering ELTIF 2.0 architecture offers a definitive edge over the standard curriculum. With the industry pivoting toward the “retailization” of private markets, understanding the engineering behind evergreen funds and liquidity gates demonstrates a level of practical sophistication that moves beyond theory—exactly what recruiters at top-tier firms like BlackRock or Amundi are seeking for their next analyst class.

Related posts on the SimTrade blog

   ▶ David-Alexandre BLUM The selling process of funds

Useful resources

Société Générale Fonds Evergreen et ELTIF 2 : Débloquer les Marchés Privés pour les Investisseurs Particuliers

About the author

The article was written in December 2025 by Emmanuel CYROT (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026).

   ▶ Read all articles by Emmanuel CYROT.

Interest Rates and M&A: How Market Dynamics Shift When Rates Rise or Fall

 Emanuele BAROLI

In this article, Emanuele BAROLI (MiF 2025–2027, ESSEC Business School) examines how shifts in interest rates shape the M&A market, outlining how deal structures differ when central banks raise versus cut rates.

Context and objective

The purpose is to explain what interest rates are, how they interact with inflation and liquidity, and how these variables shape merger and acquisition (M&A) activity. The intended outcome is an operational lens you can use to read the current monetary cycle and translate it into cost of capital, valuation, financing structure, and execution windows for deals, distinguishing—when useful—between corporate acquirers and private-equity sponsors.

What are interest rates

Interest rates are the intertemporal price of funds. In economic terms they remunerate the deferral of consumption, insure against expected inflation, and compensate for risk. For real decisions the relevant object is the real rate because it governs the trade-off between investing or consuming today versus tomorrow.

Central banks anchor the very short end through the policy rate and the management of system liquidity (reserve remuneration, market operations, balance-sheet policies). Markets then map those signals into the entire yield curve via expectations about future policy settings and required term premia. When liquidity is ample and cheap, risk-free yields and credit spreads tend to compress; when liquidity becomes scarcer or dearer, yields and spreads widen even without a headline change in the policy rate. This transmission, with its usual lags, is the bridge from monetary conditions to firms’ investment choices.

M&A industry — a definition

The M&A industry comprises mergers and acquisitions undertaken by strategic (corporate) acquirers and by financial sponsors. Activity is the joint outcome of several blocks: the cost and elasticity of capital (both debt and equity), expectations about sectoral cash flows, absolute and relative valuations for public and private assets, regulatory and antitrust constraints, and the degree of managerial confidence. Interest rates sit at the center because they enter the denominator of valuation models—through the discount rate—and they shape bankability constraints through the debt service burden. In other words, rates influence both the price a buyer can rationally pay and the feasibility of financing that price.

Use of leverage

Leverage translates a given cash-flow profile into equity returns. In leveraged acquisitions—especially LBOs—the all-in cost of debt is set by a market benchmark (in practice, Term SOFR at three or six months in the U.S., and Euribor in the euro area) plus a spread reflecting credit risk, liquidity, seniority, and the supply–demand balance across channels such as term loans, high-yield bonds, and private credit. That all-in cost determines sustainable leverage, shapes covenant design, and fixes the headroom on metrics like interest coverage and net leverage. It ultimately caps the bid a sponsor can submit while still meeting target returns. Corporate acquirers usually employ more modest leverage, yet remain rate-sensitive because medium-to-long risk-free yields and investment-grade spreads feed both fixed-rate borrowing costs and the WACC used in DCF and accretion tests, and they influence the value of stock consideration in mixed or stock-for-stock deals.

How interest rates impact the M&A industry

The connection from rates to M&A operates through three main channels. The first is valuation: holding cash flows constant, a higher risk-free rate or higher term premia lifts discount rates, lowers present values, and compresses multiples, thereby narrowing the economic room to pay a control premium. The second is bankability: higher benchmarks and wider spreads raise coupons and interest expense, reduce sustainable leverage, and shrink the set of financeable deals—most visibly for sponsors whose equity returns depend on the spread between debt cost and EBITDA growth. The third is market access: heightened rate volatility and tighter liquidity reduce underwriting depth and risk appetite in loans and bonds, delaying signings or closings; the mirror image under easing—lower rates, stable curves, and tighter spreads—reopens windows, enabling new-money term funding and refinancing of maturities. The net effect is a function of level, slope, and volatility of the curve: lower and calmer curves with steady spreads tend to support volumes; high or unstable curves, even with unchanged spreads, enforce selectivity.

Evidence from 2021–2024 and what the chart shows

M&A deals and interest rates (2021-2024).
M&A deals and interest rates (2021-2024)
Source: Fed.

The global pattern over 2021–2024 is consistent with this mechanism. In 2021, deal counts reached a cyclical peak in an environment of near-zero short-term rates, abundant liquidity, and elevated equity valuations; frictions on the cost of capital were minimal and access to debt markets was easy, so the economic threshold for completing transactions was lower. Between 2022 and 2024, monetary tightening lifted short-term benchmarks rapidly while spreads and uncertainty rose; global deal counts fell materially and the market became more selective, favoring higher-quality assets, resilient sectors, and transactions with stronger industrial logic. Over this period, global deal counts were 58,308 in 2021, 50,763 in 2022, 39,603 in 2023, and 36,067 in 2024, while U.S. short-term rates moved from roughly 0.14% to above 5%; the chart shows an inverse co-movement between the cost of money and activity. Correlation is not causation—antitrust enforcement, energy shocks, equity multiple swings, and the rise of private credit also mattered—but the macro signal aligns with monetary transmission.

What does academic research say

Academic research broadly confirms the mechanism sketched above: when policy rates rise and financing conditions tighten, both the volume and composition of M&A activity change. Using U.S. data, Adra, Barbopoulos, and Saunders (2020) show that increases in the federal funds rate raise expected financing costs, are followed by more negative acquirer announcement returns, and significantly increase the probability that deals are withdrawn, especially when monetary policy uncertainty is high. Fischer and Horn (2023) and Horn (2021) exploit high-frequency monetary-policy shocks and find that a contractionary shock leads to a persistent fall in aggregate deal numbers and values—on the order of 20–30%—with the effect concentrated among financially constrained bidders; at the same time, the average quality of completed deals improves because weaker acquirers are screened out. Work on leveraged buyouts links this to credit conditions: Axelson et al. (2013) document that cheap and abundant credit is associated with higher leverage and higher buyout prices relative to comparable public firms, while theoretical models such as Nicodano (2023) show how optimal LBO leverage and default risk respond systematically to the level of risk-free rates and credit spreads.

Related posts on the SimTrade blog

   ▶ Bijal GANDHI Interest Rates

   ▶ Nithisha CHALLA Relation between gold price and interest rate

   ▶ Roberto RESTELLI My internship at Valori Asset Management

Useful resources

Academic articles

Adra, S., Barbopoulos, L., & Saunders, A. (2020). The impact of monetary policy on M&A outcomes. Journal of Corporate Finance, 62, 1-61.

Fischer, J. and Horn, C.-W. (2023). Monetary Policy and Mergers and Acquisitions, Working paper, available at SSRN.

Horn, C.-W. (2021) Does Monetary Policy Affect Mergers and Acquisitions? Working paper.

Axelson, U., Jenkinson, T., Strömberg, P., & Weisbach, M. S. (2013) Borrow Cheap, Buy High? The Determinants of Leverage and Pricing in Buyouts, The Journal of Finance, 68(6), 2223-2267.

Financial data

Federal Reserve Bank of New York Effective Federal Funds Rate (EFFR): methodology and data

Federal Reserve Bank of St. Louis Effective Federal Funds Rate (FEDFUNDS)

OECD Data Long-term interest rates

About the author

The article was written in November 2025 by Emanuele BAROLI (ESSEC Business School, Master in Finance (MiF), 2025–2027).

   ▶ Read all articles by Emanuele BAROLI.

Drafting an Effective Sell-Side Information Memorandum: Insights from a Sell-Side Investment Banking Experience

 Emanuele BAROLI

In this article, Emanuele BAROLI (ESSEC Business School, Master in Finance (MiF), 2025–2027) explains how to draft an M&A Information Memorandum, translating sell-side investment-banking practice into a clear, evidence-based guide that buyers can use to progress from interest to a defensible bid.

What is an Info Memo

An information memorandum is a confidential, evidence-based sales document used in M&A processes to enable credible offers while safeguarding the sell-side process. It sets out what is being sold, why it is attractive, and how the deal is framed, and it is structured—consistently and without redundancy—around the following chapters: Executive Summary, Key Investment Highlights, Market Overview, Business Overview, Historical Financial Performance and Current-Year Budget, Business Plan, and Appendix. Each section builds on the previous one so that every claim in the narrative is traceable to data, definitions, and documents referenced in the appendix and the data room.

Executive summary

The executive summary is the gateway to the memorandum and must allow a prospective acquirer to grasp, within a few pages, what is being sold, why the asset is attractive, and how the transaction is framed. It should state the perimeter of the deal, the nature of the stake or assets included, and the essence of the equity story in language that is direct, verifiable, and consistent with the evidence presented later. The narrative should situate the company in its market, outline the recent trajectory of scale, profitability, and cash generation, and articulate—in plain terms—the reasons an informed buyer might assign strategic or financial value. Nothing here should rely on empty superlatives; every claim in the summary must be traceable to supporting material in subsequent sections and to documents made available in the data room. Clarity and internal consistency matter more than flourish: the reader should finish this section knowing what the asset is, why it matters, and what next steps the process anticipates.

Key investment highlights

This section filters the equity story into a small number of decisive arguments, each of which combines a clear assertion, hard evidence, and an explicit investor implication. The prose should explain rather than advertise: sustainable growth drivers, defensible competitive positioning, quality and predictability of revenue, conversion of earnings into cash, discipline in capital allocation, credible management execution, and identifiable avenues for organic expansion or bolt-on M&A. Each highlight should read as a self-contained reasoning chain—statement, proof, consequence—so that a buyer can connect operational facts to valuation logic.

Market overview

The market overview demonstrates that the asset operates within an addressable space that is sizeable, healthy, and legible. Begin by defining the market perimeter with precision so that later revenue segmentations align with it. Describe the current size and structure of demand, the expected growth over a three-to-five-year horizon, and the drivers that sustain or threaten that growth—technological shifts, regulatory trends, customer procurement cycles, and macro sensitivities. Map the competitive landscape in terms of concentration, barriers to entry, switching costs, and price dynamics across channels. Distinguish between the immediate market in which the company competes and the broader industry environment at national or international level, explaining how each influences pricing power, customer acquisition, and margin stability. All figures and characterizations should be sourced to independent references, allowing the reader to verify both methodology and magnitude.

Business overview

The business overview explains plainly how the company creates value. It should describe what is sold, to whom, and through which operating model, covering products and services, relevant intellectual property or certifications, customer segments and geographies served, and the logic of revenue generation and pricing. The text should make the differentiation intelligible—quality, reliability, speed, functionality, service levels, or total cost of ownership—and then connect that differentiation to commercial traction. Operations deserve a concise, concrete treatment: footprint, capacity and utilization, supply-chain architecture, service levels, and, where material, the technology stack and data security posture. The section should close with the people who actually run the company and are expected to remain post-closing, outlining roles, governance, and incentive alignment. The aim is not to impress with jargon but to let an investor see a coherent engine that turns inputs into outcomes.

Historical financial performance and budget

This chapter turns performance into an intelligible narrative. Present the historical income statement, balance sheet, and cash flow over a three-to-five-year window—preferably audited—and reconcile management accounts with statutory figures so that definitions, policies, and adjustments are transparent. Replace tables-for-tables’ sake with analysis: show where growth and margins come from by decomposing revenue into volume, price, and mix; explain EBITDA dynamics through efficiency, pricing, and non-recurring items; separate maintenance from growth capex; and trace how earnings convert into cash by discussing working-capital movements and seasonality. In a live process, the current-year budget should set out the explicit operating assumptions behind it, the key milestones and risks, and a brief intra-year read so a buyer can compare budget to year-to-date performance. If carve-outs, acquisitions, or other discontinuities exist, present clean pro forma views so the time series remains comparable.

Business plan

The business plan translates the equity story into forward-looking numbers and commitments that can withstand diligence. Build the plan from drivers rather than percentages: revenue as a function of volumes, pricing, mix, and retention; costs split between fixed and variable components with operational leverage and efficiency initiatives laid out; capital needs expressed through capex, working-capital discipline, and any anticipated financing structure. Provide a three-to-five-year view of P&L, cash flow, and balance-sheet implications, making explicit the capacity constraints, hiring requirements, and lead times that link initiatives to outcomes. A sound plan includes a base case and either sensitivities or alternative scenarios, together with risk mitigations that are actually within management control. If bolt-on M&A features in the strategy, describe the screening criteria, integration capability, and the nature of the synergies in a way that distinguishes aspiration from execution.

Appendix

The appendix holds detail without overloading the core narrative and preserves auditability. It should contain the full legal disclaimer and confidentiality terms, a glossary of definitions and KPIs to eliminate ambiguity, detailed financial schedules and reconciliation notes, methodological summaries and citations for market data, concise contractual information for key customers and suppliers where material, operational and ESG indicators that genuinely affect value, and a process note with timeline, bid instructions, Q&A protocols, and site-visit guidance. The organizing principle is traceability: any figure or claim in the memorandum should be traceable to a line item or document referenced here and made available in the data room.

Why should you be interested in this post?

For students interested in corporate finance and M&A, this post shows how to translate sell-side practice into a rigorous structure that investors can actually diligence—an essential skill for internships and analyst roles.

Related posts on the SimTrade blog

   ▶ Roberto RESTELLI BCapital Fund at Bocconi: building a student-run investment fund

   ▶ Louis DETALLE A quick presentation of the M&A field…

   ▶ Ian DI MUZIO My Internship Experience at ISTA Italia as an In-House M&A Intern

Useful resources

Corporate Finance Institute (CFI) Confidential Information Memorandum (CIM)

DealRoom How to Write an M&A Information Memorandum

About the author

The article was written in December 2025 by Emanuele BAROLI (ESSEC Business School, Master in Finance (MiF), 2025–2027).

   ▶ Read all articles by Emanuele BAROLI.

At what point does diversification become “Diworsification”?

Yann TANGUY

In this article, Yann TANGUY (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027) explains the concept of “diworsification” and shows how to avoid falling into its trap.

The Concept of Diworsification

The word “diworsification” was coined by famous portfolio manager Peter Lynch to denote the habit of supplementing a portfolio with investments which, instead of improving the risk-adjusted return, merely add complexity. It reflects a common misconception about one of the fundamental pillars of Modern Portfolio Theory (MPT): diversification.

Whereas the adage “don’t put all your eggs in one basket” exemplifies the foundation of prudent portfolio building, diworsification occurs when an investor adds too many baskets and thus loses sight of the quality and purpose of each one.

This mistake comes from a fundamental misunderstanding of what diversification actually is. Diversification is not a function of the number of assets an investor owns but of how those assets relate to one another. If an investor adds to a portfolio assets that are highly correlated with those already held, the risk-reducing effect of diversification is greatly weakened, and the portfolio’s potential return can be diluted.

Practical Example

Let’s assume there are two investors.

An investor who is interested in the tech industry may hold shares in 20 different software and hardware companies. This portfolio appears diversified on the surface. However, since all the companies are in the same industry, they are all subject to the same market forces and risks. In a decline of the tech industry, it is likely many of the stocks will decline at the same time due to their high correlation.

A second investor maintains a portfolio of three low-cost index funds: one dedicated to the total US stock market, another for the total international stock market, and a third focusing on the total bond market. Despite the simplicity of holding just these three positions, this investor enjoys a far more effective level of diversification in their portfolio. The assets, US stocks, international stocks, and bonds, have a low correlation with one another. Consequently, poor performance in one asset class is likely to be counterbalanced by stable or positive returns in another, resulting in a smoother return profile and a reduction in overall portfolio risk.

The portfolio of the first investor is a perfect case of diworsification. Increasing the number of technology stocks did not do any sort of risk diversification, but it introduced complexity and diluted the effect of performing stocks.

The point at which diversification begins to work against itself can be identified using several criteria. Diversification’s initial goal is to improve the risk-adjusted return, a concept often evaluated using the Sharpe ratio. Diworsification begins when adding a new asset no longer improves the portfolio’s Sharpe ratio.
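
As a rough way to operationalize this criterion, here is a minimal Python sketch (with hypothetical expected returns, volatilities, and correlations, not taken from this article or its Excel file) that compares the Sharpe ratio of a two-asset portfolio before and after adding a third, highly correlated asset.

```python
import numpy as np

def sharpe_ratio(weights, mu, cov, risk_free=0.02):
    """Sharpe ratio of a portfolio: (expected return - risk-free rate) / volatility."""
    w = np.asarray(weights)
    excess_return = w @ mu - risk_free
    volatility = np.sqrt(w @ cov @ w)
    return excess_return / volatility

# Hypothetical inputs: assets A and B, plus a candidate asset C that is
# highly correlated with both (e.g., a third stock from the same sector).
mu = np.array([0.08, 0.07, 0.075])          # expected returns
vols = np.array([0.18, 0.16, 0.20])         # volatilities
corr = np.array([[1.00, 0.50, 0.85],
                 [0.50, 1.00, 0.85],
                 [0.85, 0.85, 1.00]])
cov = np.outer(vols, vols) * corr           # covariance matrix

sr_without_c = sharpe_ratio([0.5, 0.5, 0.0], mu, cov)
sr_with_c = sharpe_ratio([0.4, 0.4, 0.2], mu, cov)

print(f"Sharpe ratio without C: {sr_without_c:.3f}")
print(f"Sharpe ratio with C:    {sr_with_c:.3f}")
# If the Sharpe ratio does not improve when C is added, the extra holding
# is diworsification rather than diversification.
```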

You can download the Excel below with a numerical example of the impact of correlation in diversification.

Download the Excel file on the impact of correlation on diversification

Here is a short summary of what is shown in the Excel spreadsheet.

We use two different portfolios, each composed of two assets, with both portfolios having similar expected returns and similar average asset volatilities. The only difference is that the first portfolio holds correlated assets, whereas the second portfolio holds non-correlated assets.

Correlated portfolio: returns over volatility

Non-correlated portfolio: returns over volatility

As you can see in these graphs, the diversification effect is much more potent for the non-correlated portfolio, leading to higher returns for a given volatility.
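
The effect illustrated in the spreadsheet can also be reproduced with a short script. The sketch below uses hypothetical volatilities (not the exact figures of the Excel file) and the standard two-asset variance formula to show how portfolio volatility falls as correlation decreases.

```python
import numpy as np

def portfolio_volatility(w1, sigma1, sigma2, rho):
    """Volatility of a two-asset portfolio with weights (w1, 1 - w1) and correlation rho."""
    w2 = 1.0 - w1
    variance = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 + 2 * w1 * w2 * rho * sigma1 * sigma2
    return np.sqrt(variance)

# Hypothetical assets with similar volatilities and equal weights
sigma1, sigma2 = 0.20, 0.18
for rho in [0.9, 0.5, 0.0, -0.5]:
    vol = portfolio_volatility(0.5, sigma1, sigma2, rho)
    print(f"correlation = {rho:+.1f} -> portfolio volatility = {vol:.1%}")

# With highly correlated assets the portfolio keeps almost all of the average
# volatility; with low or negative correlation, volatility drops markedly.
```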

Target number of assets for a diversified portfolio

One of the most important considerations when assembling a portfolio is determining how many assets can be added before the benefits of diversification are exhausted and diworsification sets in. Studies of equity markets have indicated that a portfolio of 20 to 30 stocks can diversify away most unsystematic risk.

However, this number varies with the asset class and the complexity of the assets. In the world of alternative investments, a landmark study, “Hedge fund diversification: how much is enough?”, was published by François-Serge Lhabitant and Michelle Learned in 2002 in the Journal of Alternative Investments. The authors aimed to dispel the myth that ‘more is better’ in the complex world of hedge funds. They analyzed the effect of portfolio size on risk and return, finding that although adding funds to the portfolio reduces risk, the marginal benefits of diversification diminish rapidly.

Importantly, they found that adding too many funds could lead to a convergence toward average market returns, effectively eroding the “alpha” (excess return) that investors seek from active management. Furthermore, even when volatility is reduced, other dimensions of risk, such as skewness and kurtosis, can worsen. The significance of this research is that it offers empirical evidence for the phenomenon of ‘diworsification’—the idea that, after a certain point, adding assets to a portfolio worsens its efficiency.

Crossover from Diversification to Diworsification

The crossover from diversification to diworsification is normally marked by three main factors.

The first is diluted returns: as the number of assets increases, the performance of the portfolio starts to resemble that of a market index, albeit with higher costs. The favorable influence of a handful of significant winners is offset by the poor performance of many other investments.

The second is an increase in costs, as each asset, and particularly each asset owned through a managed fund, comes with costs of its own: transaction costs, management fees, or research costs. The more assets there are, the more these costs add up, ultimately imposing a drag on final performance.

The third is unnecessary complexity, as a portfolio with too many holdings becomes hard to monitor, analyze, and rebalance, which can confuse an investor about his or her asset allocation and expose the portfolio to unnecessary risk.

Causes of Diworsification

The causes of diworsification differ systematically between individual and institutional investors. For individual investors, this fundamental mistake arises from an incorrect understanding of genuine diversification, too often leading to an emphasis on the number of holdings rather than their quality. Behavioral biases, such as familiarity bias, which manifests as a preference for investing in well-known firms, or fear of missing out, which drives investors toward recently outperforming “hot” stocks, can generate portfolios concentrated in highly correlated securities.

The causes of diworsification for institutional investors are fundamentally different. The asset management business creates pressures that can lead to diworsification. Fund managers, measured against a comparator index, may prefer to build oversized funds whose portfolios are similar to the index, a practice called “closet indexing.” Even if such a strategy reduces the risk of underperforming the comparator and thus losing clients, it also ensures that the fund will not show meaningful outperformance, all while collecting fees for what is wrongly labeled active management. In addition, the sale of complex product types like “funds of funds” adds further layers of fees and can mask the fact that the underlying assets are often far from unique.

How to avoid Diworsification

Avoiding diworsification does not mean abandoning diversification. Rather, it demands a more intelligent strategy. The emphasis should move from the raw number of holdings to the asset allocation of the portfolio. The key is to mix asset classes with low or even negative correlations to one another, for example stocks, government securities, real estate, and commodities. This approach provides a more solid shelter from price fluctuations than holding a long list of homogeneous stocks.

A low-cost and efficient means for many investors to achieve this goal is to utilize broad-market index funds and ETFs. These financial products give exposure to thousands of underlying securities representing full asset classes within a single holding, thus eliminating the difficulties and high costs of creating an equivalent portfolio of single assets.

Conclusion

Modern Portfolio Theory provides a powerful framework for constructing investment portfolios, and diversification remains its essential concept. Implementing this concept, however, requires thoughtful consideration. Diworsification represents a misinterpretation of the objective, which is not to add assets simply for the sake of numbers, but to improve the risk-return profile of the portfolio as a whole.

A successful diversification strategy is built on a foundation of asset allocation across low-correlation assets. By focusing on the quality of diversification rather than the quantity of positions, investors can create portfolios that are closer to what they want, avoiding the unnecessary costs and lower returns of a diworsified outcome.

Why should I be interested in this post?

Diworsification is a trap that should be avoided, and it is easy to avoid once you understand the mechanisms at work behind it.

Related posts on the SimTrade blog

   ▶ All posts about Financial techniques

   ▶ Raphael TRAEN Understanding Correlation

   ▶ Youssef LOURAOUI Minimum Volatility Portfolio

Useful resources

Lhabitant, F.-S., M. Learned (2002) Hedge fund diversification: how much is enough? Journal of Alternative Investments, 5(3):23-49.

Lynch P., J. Rothchild (2000) One up on Wall Street. New York: Simon & Schuster.

Markowitz H. (1952) Portfolio Selection, The Journal of Finance, 7(1):77-91.

About the author

This article was written in November 2025 by Yann TANGUY (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027).

Understanding Snowball Products: Payoff Structure, Risks, and Market Behavior

Tianyi WANG

In this article, Tianyi WANG (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2022-2026) explains the structure, payoff, and risks of Snowball products — one of the most popular and complex structured products in Asian financial markets.

Introduction

Structured products can be positioned along a broad risk–return spectrum.

Snowball Structure Product.
Snowball Structure Product
Source: public market data.

As shown in the figure above, Snowball Notes belong to the category of yield-enhancement products, typically offering annualized returns of around 8% to 15%. These products sit between capital-protected structures—which provide lower but more stable returns—and high-risk leveraged instruments such as warrants. This placement highlights a key feature of Snowballs: while they provide attractive coupons under normal market conditions, they come with conditional downside risk once the knock-in barrier is breached. Understanding this relative positioning helps explain why Snowballs are widely marketed during stable or range-bound markets but may expose investors to significant losses when volatility spikes.

Snowball options have become widely traded structured products in Asian equity markets, especially in China, Korea, and Hong Kong. They appeal to investors seeking stable returns in range-bound markets. However, their path-dependent nature and embedded option risks make them highly sensitive to market volatility. During periods of rapid market decline, many Snowball products experience “knock-in” events or even large losses.

To be more specific, a knock-in event occurs when the underlying asset’s price falls below (or rises above, depending on the product design) a predetermined barrier level during the life of the product. Once this barrier is breached, the Snowball option “activates” the embedded option exposure—typically converting what was originally a principal-protected or coupon-paying structure into one that behaves like a short option position. As a result, the investor becomes directly exposed to downside risks of the underlying asset, often leading to significant mark-to-market losses.

This article explains how Snowball products work, their payoff structure, the embedded risks, and how market behavior affects investor outcomes.

Who buys Snowball products?

Snowball products are purchased mainly by:

  • Retail investors — especially in mainland China and Korea, attracted by high coupons and the perception of stability.
  • High-net-worth individuals (HNWI) — through private banking channels.
  • Institutional investors — such as securities firms and structured product funds, often using Snowballs for yield enhancement.

Because Snowballs involve complex embedded options, they are considered unsuitable for inexperienced retail investors. Nevertheless, retail participation has grown significantly in Asian markets.

What is a Snowball product?

A Snowball is a structured product linked to an equity index (e.g., CSI 500, HSCEI) or a single stock. It provides a fixed coupon if the underlying asset stays within certain price barriers. The product contains three key components:

  • Autocall (Knock-out) — product terminates early at a profit if the underlying rises above a set level.
  • Knock-in — if the underlying falls below a certain barrier, the investor becomes exposed to downside risk.
  • Coupon payment — paid periodically as long as knock-in does not occur and knock-out does not trigger.

Snowballs earn steady income in stable markets, but losses can become severe when markets experience sharp declines.

The name “Snowball” comes from the idea of a snowball rolling downhill: it grows larger over time. In structured products, the coupon accumulates (or “rolls”) as long as the product does not knock-in or knock-out. As the months go by, the investor receives a growing stream of accrued coupons — similar to a snowball becoming bigger. However, like a snowball that can suddenly break apart if it hits an obstacle, the product can suffer significant losses once the knock-in barrier is breached.

Market behavior: what does it mean?

In the context of Snowball pricing and risk, “market behavior” refers to two dimensions:

  • Financial market behavior (price dynamics) — movements of the underlying index or stock, volatility levels, liquidity conditions, and short-term shocks. This includes trends such as rallies, range-bound phases, or sharp sell-offs that affect knock-in and knock-out probabilities.
  • Investor behavior — how different market participants react: hedging flows from issuers, panic selling during downturns, retail speculation, institutional risk reduction, and shifts in investor sentiment. These behaviors can reinforce price moves and alter Snowball risk.

Together, these elements form “market behavior”: the interaction between market movements and investor actions. For Snowballs, this directly affects whether the product pays coupons, knocks out early, or falls into knock-in and creates losses.

Key barriers in Snowball products

Knock-out (Autocall) barrier

If at any observation date the price exceeds the knock-out barrier (e.g., 103%), the product terminates early and investors receive principal plus accumulated coupons.

Knock-in barrier

If the price falls below the knock-in barrier (e.g., 80%), the product enters a risk state. If at maturity the price remains below the strike, the investor bears the underlying’s loss.

How Snowball payoffs work

The payoff of a Snowball is path-dependent, meaning it depends on the entire trajectory of the underlying index, not just the final price at maturity.

There are three typical outcomes:

Knock-out outcome (early exit)

If the underlying exceeds the knock-out level early, the investor receives:
Principal + accumulated coupons

No knock-in, no knock-out (maturity coupon)

If the underlying never crosses either barrier:
Principal + full coupons

Knock-in triggered (risky outcome)

If knock-in occurs and the final price ends below strike:
The investor bears the underlying loss

Thus, Snowballs deliver strong returns in stable or mildly rising markets but carry significant losses in bear markets.
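
To make this path dependence concrete, here is a minimal Python sketch of a simplified Snowball payoff evaluated on a given price path. The barrier levels (103% knock-out, 80% knock-in), the monthly coupon, and the treatment of the knocked-in-but-recovered case are illustrative assumptions consistent with the examples in this article; actual term sheets differ across issuers in observation frequency, coupon accrual, and settlement terms.

```python
def snowball_payoff(path, monthly_obs, principal=100.0,
                    knock_out=1.03, knock_in=0.80, monthly_coupon=0.01):
    """
    Simplified Snowball payoff on a price path (prices expressed relative to the
    initial level, so 1.0 = initial/strike price).
    - path: daily prices of the underlying.
    - monthly_obs: indices of the monthly knock-out observation dates.
    Returns the amount received by the investor.
    """
    knocked_in = False
    for t, price in enumerate(path):
        # Knock-in is monitored continuously (here: daily)
        if price < knock_in:
            knocked_in = True
        # Knock-out is checked only on monthly observation dates
        if t in monthly_obs and price >= knock_out:
            months_elapsed = monthly_obs.index(t) + 1
            return principal * (1 + monthly_coupon * months_elapsed)  # early exit with accrued coupons

    final_price = path[-1]
    if not knocked_in:
        return principal * (1 + monthly_coupon * len(monthly_obs))   # full coupons at maturity
    if final_price < 1.0:
        return principal * final_price                               # investor bears the underlying loss
    return principal                                                 # knocked in but recovered: principal only

# Example paths (normalized prices), with monthly observations every 21 trading days
monthly_obs = list(range(21, 253, 21))
flat_path = [1.0 + 0.01 * (i % 3) for i in range(253)]        # range-bound: coupons at maturity
crash_path = [1.0 - 0.25 * i / 252 for i in range(253)]       # steady decline: knock-in, loss at maturity

print(snowball_payoff(flat_path, monthly_obs))    # 112.0: principal plus 12 monthly coupons
print(snowball_payoff(crash_path, monthly_obs))   # 75.0: investor bears the 25% drop
```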

Why Snowball products are risky

Although marketed as “income products,” Snowballs are essentially short-volatility strategies. The investor effectively sells downside protection to the issuer in exchange for coupons.

Key risks include:

  • High volatility increases knock-in probability
  • Sharp declines lead to principal losses
  • Liquidity risk
  • Complex payoff makes risks hard to evaluate for retail investors

Case study: Why many Snowballs were hit in 2022–2023

During 2022–2023, Chinese equity markets — especially the CSI 500 and CSI 1000 — experienced large drawdowns due to geopolitical tensions, policy uncertainty, and weak economic recovery. Volatility spiked, and mid-cap indices saw rapid declines.

As a result:

  • Many Snowballs hit knock-in levels
  • Investors faced large mark-to-market losses
  • Issuers reduced new Snowball supply due to elevated volatility

This period highlights how market sentiment and volatility regimes directly impact structured product outcomes.

According to Bloomberg (January 2024), more than $13 billion worth of Chinese Snowball products were approaching knock-in triggers. A rapid decline in the CSI 1000 index pushed many products close to their 80% knock-in barrier.

Some investors experienced immediate 15–25% losses as the embedded short-put exposure was activated.

This real-world case demonstrates how quickly Snowball risk materializes when market volatility rises.

Snowball Take Out.
Snowball Take Out
Source: public market data.

How market behavior affects Snowball performance

Volatility

High volatility increases the likelihood of crossing both barriers.

Trend direction

  • Upward trends → more knock-outs
  • Range-bound markets → steady coupon income
  • Downward trends → knock-in risk and principal loss

Liquidity and investor flows

During sell-offs, Snowball hedging can amplify downward pressure, creating feedback loops.

Snowball knock-in chart.
Snowball knock-in chart
Source: public market data.

Explanation: The chart illustrates a steep market decline where the underlying index falls below its knock-in barrier. When such drawdowns occur rapidly, Snowball products transition into risk mode, immediately exposing investors to the underlying’s downside. This visualizes how market volatility and negative sentiment can activate the hidden risks in Snowball structures.

Conclusion

Snowball products are appealing due to their attractive coupons, but they involve significant downside risks during volatile markets. Understanding the path-dependent nature of their payoff, barrier mechanics, and market behavior is crucial for investors and product designers.

By analyzing Snowball structures, investors gain deeper insight into how derivative products are created, priced, and risk-managed in real financial markets.

Related posts on the SimTrade blog

   ▶ Shengyu ZHENG Barrier Options

   ▶ Slah BOUGHATTAS Book by Slah Boughattas: State of the Art in Structured Products

   ▶ Akshit GUPTA Equity Structured Products

About the author

The article was written in November 2025 by Tianyi WANG (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2022-2026).

Managing Corporate Risk: How Consulting and Export Finance Complement Each Other

Julien MAUROY

In this article, Julien MAUROY (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2025) shares technical knowledge on risk management in the business world based on his experiences. The concepts of financial risk in business, risk management, and risk analysis will be presented. All of this information is drawn from his experiences and supported by the literature on the subject.

Risk as a strategic lever

This topic aims to explore how companies manage risk and transform it into a lever for decision-making and value creation. It ties in with my academic background at ESSEC Business School and my professional experience in two complementary environments: finance and risk consulting at BearingPoint and export financing at Bpifrance. Today, risk-related issues are omnipresent in business. Whether it is competitiveness, investment decisions or international expansion, every strategy involves a degree of uncertainty.

Risk is no longer just a threat: it is anticipated, studied, calculated, and has a market price (the cost of seeking advice, the cost of insurance, etc.). It therefore becomes a key management factor for companies that can identify, measure and integrate it into their strategic thinking. This is why understanding risk management means understanding how organisations balance growth, stability and performance. It is this dual approach, consulting (risk reduction) on the one hand and insurance and export financing (risk assessment and pricing) on the other, that I would like to share with you.

Reducing and structuring risk with consulting

During my internship at BearingPoint, I discovered how consulting could help companies reduce and structure their strategic, financial and operational risks. Consultants bring an external perspective to a company’s activities. They use an analytical, neutral approach to identify organisational weaknesses and make more informed decisions.

Within the Finance & Risk department, my assignments consisted of improving the financial performance and financial management of the company’s activities. The main topics were data reliability, reporting automation, and optimisation of budgeting and forecasting processes.

By improving the quality of financial information and its analysis, we helped companies become more agile and better able to manage their business. Companies gained visibility and the ability to anticipate future developments. Consulting is therefore the ideal way to transform uncertainty into a structured and effective methodology for addressing the challenges facing these sectors.

It helps companies adopt rigorous governance, allocate resources and budgets more effectively to each activity, and avoid costly strategic errors.

Finally, consulting helps reduce companies’ exposure to risk by providing support at all levels. It makes decision-making more rational, measurable and aligned with long-term strategy in light of competition and industry challenges.

Measuring and pricing risk with export insurance and financing

My experience in Bpifrance’s Export Insurance department gave me a different perspective on risk, this time more quantitative and institutional.

In this organisation, risk is not borne solely by the customer seeking insurance, but also by Bpifrance, which insures French exporters against risk arising from foreign buyers. The risk is therefore shared between the lending bank, the insurer and the French exporter.

In export insurance, risk is not abstract: it is analysed, measured and valued. The accuracy of the analysis is paramount, involving financial, extra-financial and geopolitical analysis. An in-depth study of exporting companies and their international counterparties makes it possible to assess their solidity and their ability to honour their financial commitments.

Each project is subject to a detailed risk assessment: counterparty risk, country risk, sectoral or political risk. These factors have an immediate impact on the premium rate applied to the export guarantee. In other words, the higher the risk of loss, the higher the cost of coverage. This approach, based on collaboration with the French Treasury and the OECD, has enabled me to understand how institutions can price risk on a global scale.

In comparison, consulting helps to anticipate, explore solutions and reduce risk, while insurance seeks to assess and price it. In that framework, risk is not avoided but becomes an integral part of the economic model.

Understanding risk in order to leverage it

These two experiences taught me that risk management is not just about protecting yourself from risk, but understanding it so you can use it as a lever for growth.

In consulting, risk is controlled through better organisation, reliable information and a clear strategy. In finance, risk becomes a measurable parameter, integrated into decision-making models and valued according to its potential impact.

These two approaches are therefore complementary: one aims to make the company more resilient, the other enables it to grow despite uncertainty.

These two perspectives show that risk, far from being a constraint, can become a strategic management tool, a driver of adaptation and a source of sustainable competitiveness.

Conclusion: the strategic value of risk management

Through these experiences, I have understood that risk management is at the heart of finance and strategy.

At BearingPoint, I acquired analytical rigour and the ability to structure my thinking; at Bpifrance, I gained a macroeconomic vision and a concrete understanding of the link between risk and financial performance.

This dual perspective on qualitative and quantitative risk convinced me that knowing how to assess, integrate and explain risk is a key skill for the future of business.

In an uncertain world, managing risk means managing the relevance of decisions: this is what distinguishes companies that are able to anticipate the future from those that simply react to it.

Opening the topic with the vision of Frank Knight and Nassim Taleb

The study of risk in business has been the subject of earlier studies and research, notably initiated by Frank Knight in 1921 in Risk, Uncertainty and Profit. Knight distinguishes between two essential realities: risk, which can be quantified and insured against, and uncertainty, which cannot be quantified.

This distinction is further developed by Nassim Taleb in The Black Swan (2007), where he shows that certain extreme disruptions, known as ‘black swans’, cannot be predicted or incorporated into traditional models. Examples include pandemics, political shocks and sectoral collapses. For Taleb, the issue is not only one of prediction, but of building resilient organisations capable of absorbing unexpected shocks.

These two perspectives are directly reflected in corporate risk management. I have observed how consulting helps organisations reduce their exposure to ‘measurable’ risk, and conversely, my experience at Bpifrance immersed me in an approach where risk is quantified and priced. But neither consulting nor finance can eliminate uncertainty in Knight’s sense or Taleb’s ‘black swans’. Their role is to help the company better prepare for them by strengthening strategic robustness and adaptability.

That is why risk is no longer just a threat: it becomes a management tool and a lever for structuring action, in order to build organisations that are resilient in the face of the unexpected.

Related posts on the SimTrade blog

   ▶ Rishika YADAV Understanding Risk-Adjusted Return: Sharpe Ratio & Beyond

   ▶ Mathias DUMONT Pricing Weather Risk: How to Value Agricultural Derivatives with Climate-Based Volatility Inputs

   ▶ Vardaan CHAWLA Real-Time Risk Management in the Trading Arena

   ▶ Snehasish CHINARA My Apprenticeship Experience as Customer Finance & Credit Risk Analyst at Airbus

   ▶ Marine SELLI Political Risk: An Example in France in 2024

   ▶ Julien MAUROY My internship experience at BearingPoint – Finance & Risk Analyst

   ▶ Julien MAUROY My internship experience at Bpifrance – Finance Export Analyst

Useful resources

BearingPoint

Didier Louro (25/09/2024) Le risk management au service de la croissance Bearing Point x Sellia (podcast).

Bpifrance

OECD

Treasury department

Academic articles and books

Cohen E. (1991) Gestion financière de l’entreprise et développement financier, AUF / EDICEF.

Hassid O. (2011) Le management des risques et des crises Dunod.

Knight, F. H. (1921) Risk, Uncertainty and Profit Houghton Mifflin Company.

Mefteh S. (2005) Les déterminants de la gestion des risques financiers des entreprises non financières : une synthèse de la littérature, CEREG Université Paris Dauphine, Cahier de recherche n°2005-03.

Taleb N.N. (2008) The Black Swan Penguin Group.

About the author

The article was written in November 2025 by Julien MAUROY (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2025).

The role of DCF in valuation

Roberto RESTELLI

In this article, Roberto RESTELLI (ESSEC Business School, Master in Finance (MiF), 2025–2026) explains the role of discounted cash flow (DCF) within the broader toolkit of company valuation—when to use it, how to build it, and where its limits lie.

Introduction to company valuation

Valuation is the process of determining the value of any asset, whether financial (for example, shares, bonds or options) or real (for example, factories, office buildings or land). It is fundamental in many economic and financial contexts and provides a crucial input for decision-making. In particular, the importance of proper company valuation emerges in the preparation of corporate strategic plans, during restructuring or liquidation phases, and in extraordinary transactions such as mergers and acquisitions (M&As). Company valuations are also useful in regulatory and tax contexts (for example, transfers of ownership stakes or determining value for tax purposes). Entrepreneurs and investors can evaluate the economic attractiveness of strategic options, including selling or acquiring corporate assets.

The need for a company valuation typically arises to answer three questions: Who needs a valuation? When is it necessary? Why is it useful?

Users and uses of company valuation

Different categories rely on valuation. In investment banks, Equity Capital Markets use it for IPO research and coverage (including fairness opinions), while M&A teams analyze transactions and prepare fairness opinions to inform deal decisions. In Private Equity and Venture Capital, valuation supports majority/minority acquisitions, startup assessments, and LBOs. Strategic investors use it for acquisitions or divestitures, stock‑option plans, and financial reporting. Accountants and appraisal experts (CPAs) prepare fairness opinions, tax valuations, technical appraisals in legal disputes, and arbitration advisory.

Beyond these, regulators and supervisory bodies (e.g., the SEC in the U.S., CONSOB in Italy) require precise valuations to ensure market transparency and investor protection. Corporate directors and managers need valuations to define growth strategies, allocate capital, and monitor performance. Courts and arbitrators request valuations in disputes involving contract breaches, expropriations, asset divisions, or shareholder conflicts. Owners of SMEs—backbone of the Italian economy—use valuations to set sale prices, manage generational transfers, or attract investors.

Examples of valuation

Valuations appear in equity research (e.g., a UBS report on Netflix indicating a short‑ to medium‑term target price based on public information), in M&A deal analyses (including subsidiary valuations and group structure changes), and in fairness opinions (e.g., Volkswagen’s acquisition of Scania). They are central in IPOs to set offer prices and expectations. Banks also rely on valuations in lending decisions to assess enterprise value and credit risk, clarifying the allocation of requested capital.

Core competencies in valuation

High‑quality valuation requires business and strategy foundations (industry analysis, competitive context, business‑model strength), theoretical and technical finance (NPV, pricing models, corporate cash‑flow modeling), and economic theory (uncertainty vs. value and limits of standard models). Valuation is not just technique: it balances modeling choices with empirical evidence and fit‑for‑purpose estimates.

A fundamental principle is that a firm’s value is driven by its ability to generate future cash flows, which must be estimated realistically and paired with an appropriate risk assessment. Higher uncertainty in cash‑flow estimates implies a higher discount rate and a lower present value. Discount‑rate choice depends on the model (e.g., CAPM for systematic risk via beta). Sustainability also matters: modern practice increasingly integrates environmental, social, and governance (ESG) factors—climate risk, regulation, and reputation—into valuation.

General approaches and specific methods

Income Approach. Present value of future benefits, risk‑adjusted and long‑term (e.g., discounted cash flows).
Market Approach. Value estimated by comparing to similar, already‑traded assets.
Cost (Asset‑Based) Approach. Value derived by remeasuring assets/liabilities to current condition.

Within these, DCF is among the most studied and used. It can be computed from the asset perspective via free cash flow to the firm (FCFF) or from the equity perspective via free cash flow to equity (FCFE). Under the asset‑based approach, other methods include net asset value and liquidation value. Additional families include economic profit (e.g., EVA, residual income) and market‑based analyses: trading multiples (e.g., P/E, EV/EBITDA), deal multiples, and premium analysis (control premia). Four further techniques often considered are current market value (market capitalisation), real options (valuing flexible investment opportunities), broker/analyst consensus, and LBO analysis (value supported by leveraged acquisition capacity).

Critical aspects and limits of valuation models

Each method has strengths and limits. In DCF, accuracy depends on projection quality; macro cycles can render forecasts unreliable. In market‑multiple analysis, industry/geography differences and poor comparables can distort results. Real options are powerful for uncertainty but require subjective parameters (e.g., volatility), introducing error bands.

Practical applications of company valuation

Firms use valuation to plan growth, allocate capital, and budget projects. In disputes and restructurings, it informs liquidation values and creditor negotiations. It also supports governance and incentives (e.g., option plans) that align managers with shareholders. In short, valuation enables both day‑to‑day management and extraordinary decisions.

Discounted Cash Flow (DCF)

What is a DCF?

The discounted cash flow (DCF) method values a company by forecasting and discounting future cash flows. Originating with John Burr Williams (The Theory of Investment Value), DCF seeks intrinsic value by projecting cash flows and applying the time value of money: one euro today is worth more than one euro tomorrow because it can be invested.

Advantages include accuracy (when inputs are sound) and flexibility (applicable across firms/projects). Risks include reliance on uncertain projections and difficulty estimating both discount rates and cash flows; hence outputs are estimates and should be complemented with other methods.

Uses of DCF

DCF is widely applied to value companies, analyse investments in public firms, and support financial planning. The five fundamental steps are:

  1. Estimate expected future cash flows.
  2. Determine the growth rate of those cash flows.
  3. Calculate the terminal value.
  4. Define the discount rate.
  5. Discount future cash flows and the terminal value to the present.

DCF components.
 DCF components
Source: author.

Discounted cash flow formula (with a perpetuity‑growth terminal value):

DCF = CF_1 / (1 + r)^1 + CF_2 / (1 + r)^2 + … + CF_T / (1 + r)^T + [CF_(T+1) / (r – g)] × 1 / (1 + r)^T

Where CF_t is the cash flow in year t, r is the discount rate, and g is the long-term growth rate.

Building a DCF

Start from operating cash flow (cash‑flow statement) and typically move to free cash flow (FCF) by subtracting capital expenditures. Example: if operating cash flow is €30m and capex is €5m, FCF = €25m. Project future FCF using growth assumptions (e.g., if 2020 FCF was €22.5m and 2021 FCF €25m, growth is ~11.1%). Use near‑term high‑growth and longer‑term fade assumptions to reflect maturation.

Determining the terminal value

The terminal value represents long‑term growth beyond the explicit forecast. A common formula is:

Terminal Value = CF_(T+1) / (r – g)

Ensure g is consistent with long‑run economic growth and the firm’s reinvestment needs.

Defining the discount rate

The discount rate reflects risk. Common choices include the risk‑free government yield, the opportunity cost of capital, and the WACC (weighted average cost of capital). In equity‑side models, CAPM is often used to estimate the cost of equity via beta (systematic risk).
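
As an illustration of these choices, the short sketch below estimates a cost of equity with the CAPM and blends it with a cost of debt into a WACC; the inputs (risk-free rate, beta, market premium, capital structure, tax rate) are hypothetical placeholders rather than figures from this article.

```python
def cost_of_equity_capm(risk_free, beta, market_premium):
    """CAPM: cost of equity = risk-free rate + beta x equity market risk premium."""
    return risk_free + beta * market_premium

def wacc(equity_value, debt_value, cost_equity, cost_debt, tax_rate):
    """Weighted average cost of capital, with the tax shield on debt."""
    total = equity_value + debt_value
    return (equity_value / total) * cost_equity + (debt_value / total) * cost_debt * (1 - tax_rate)

# Hypothetical inputs
ke = cost_of_equity_capm(risk_free=0.03, beta=1.2, market_premium=0.05)   # 9.0%
r = wacc(equity_value=700, debt_value=300, cost_equity=ke, cost_debt=0.04, tax_rate=0.25)

print(f"Cost of equity (CAPM): {ke:.2%}")
print(f"WACC: {r:.2%}")
```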

Discounting the cash flows

Finally, discount projected cash flows and terminal value at the chosen rate to obtain present value. Sensitivity analysis (varying r, g, margins, capex) and scenario analysis (bull/base/bear) are essential to understand valuation drivers.
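
Putting the five steps together, here is a minimal Python sketch that projects free cash flows, adds a perpetuity-growth terminal value, and discounts everything back to the present. The cash-flow level, growth path, discount rate, and terminal growth are hypothetical and only echo the orders of magnitude used in the example above.

```python
def dcf_value(cash_flows, discount_rate, terminal_growth):
    """
    Present value of explicit free cash flows plus a perpetuity-growth terminal value.
    cash_flows: projected FCFs for years 1..T (in EUR millions).
    """
    pv_explicit = sum(cf / (1 + discount_rate) ** t
                      for t, cf in enumerate(cash_flows, start=1))
    # Terminal value at the end of year T, based on the year T+1 cash flow
    cf_next = cash_flows[-1] * (1 + terminal_growth)
    terminal_value = cf_next / (discount_rate - terminal_growth)
    pv_terminal = terminal_value / (1 + discount_rate) ** len(cash_flows)
    return pv_explicit + pv_terminal

# Hypothetical projection: FCF of EUR 25m with fading growth over five years
fcf = []
level, growth_path = 25.0, [0.08, 0.06, 0.05, 0.04, 0.03]
for g in growth_path:
    level *= 1 + g
    fcf.append(round(level, 2))

value = dcf_value(fcf, discount_rate=0.09, terminal_growth=0.02)
print(f"Projected FCF (EURm): {fcf}")
print(f"Enterprise value estimate (EURm): {value:.1f}")
```

Re-running the same function while varying the discount rate and terminal growth is a quick way to perform the sensitivity analysis mentioned above.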

Example

You can download below an Excel file with an example of DCF. It deals with Maire Tecnimont, which is an Italian engineering and consulting company specializing in the fields of chemistry and petrochemicals, oil and gas, energy and civil engineering.

Download the Excel file for an example of DCF applied to Maire Tecnimont

Why should I be interested in this post?

If you are an ESSEC student aiming for roles in investment banking, private equity, or equity research, mastering DCF is table‑stakes. This post distills how DCF fits among valuation approaches, the exact steps to build one, and the pitfalls you must stress‑test before using your number in IPOs, M&A, or buy‑side models.

Related posts on the SimTrade blog

   ▶ Jayati WALIA Capital Asset Pricing Model (CAPM)

   ▶ William LONGIN How to compute the present value of an asset?

   ▶ Maite CARNICERO MARTINEZ How to compute the net present value of an investment in Excel

   ▶ Andrea ALOSCARI Valuation methods

Useful resources

Damodaran online New York University (NYU).

SEC EDGAR company filings

European Central Bank (ECB) statistics

Maire Tecnimont

About the author

The article was written in November 2025 by Roberto RESTELLI (ESSEC Business School, Master in Finance (MiF), 2025–2026).

Book by Slah Boughattas: State of the Art in Structured Products

Slah BOUGHATTAS

In this post, Slah BOUGHATTAS (Ph.D., Associate of the Chartered Institute for Securities & Investment (CISI), London) provides an extract from the book ‘State of the Art in Structured Products: Fundamentals, Designing, Pricing, and Hedging’ (2022).

This post presents the book’s pedagogical philosophy, structure, and target audience, which includes graduate students in finance, university professors, and practitioners in derivatives and structured products.

State of the Art in Structured Products: Fundamentals, Designing, Pricing, and Hedging
 State of the Art in Structured Products: Fundamentals, Designing, Pricing, and Hedging
Source: the company.

Summary of the book

The book aims to provide both the theoretical background and the practical applications of structured products in modern financial markets. It systematically explores the fundamentals of derivatives, equity and interest rate markets, stochastic calculus, Monte Carlo simulations, Constant Proportion Portfolio Insurance (CPPI), risk management, and the financial engineering processes involved in designing, pricing, and hedging structured products.

Financial concepts related to the book

Structured Products, Derivatives, Options, Swaps, Structured Notes, Bonus certificates, Constant Proportion Portfolio Insurance (CPPI), Monte Carlo Simulation, Fixed Income, Floating Rate-Note (FRN), Reverse FRN, CMS-Linked Notes, Callable Bond, Financial Engineering, Risk Management, Pricing, and Hedging.

Context and Motivation

The financial engineering of structured products remains one of the most sophisticated domains of quantitative finance. While the literature on derivatives pricing is vast, comprehensive references specifically dedicated to the end-to-end process of structured product creation — designing, pricing, and hedging — remain scarce.

State of the Art in Structured Products bridges this gap. The work is structured to serve both as a teaching manual and a professional reference, progressively building from fundamental principles to advanced practical implementations.

Structure of the Book

  • Derivatives Fundamentals and Market Instruments – recalls the essential mechanics of equity and interest-rate derivatives
  • Designing Structured Products – shows how term sheets and payoff structures emerge logically from financial objectives
  • Pricing and Risk Analysis – provides analytical and simulation-based approaches, including the Monte Carlo method
  • Hedging and Risk Management – explores dynamic replication, sensitivities, and practical hedging of structured notes
  • Advanced Topics – covers Constant Proportion Portfolio Insurance (CPPI), callable and floating-rate instruments, and swaptions

Why should I be interested in this post?

The book’s main contribution lies in its integrated approach combining conceptual clarity, quantitative rigor, and practical implementation examples. It is intended for professors and instructors of Master’s programs in Finance, graduate students specializing in derivatives or structured products, and professionals such as financial engineers, product controllers, traders, dealing room staff and salespeople, risk managers, quantitative analysts, middle office managers, fund managers, investors, senior managers, research and system developers.

The book is currently referenced in several academic libraries, including ESSEC Business School Paris, Princeton University, London School of Economics, HEC Montreal, Erasmus University Rotterdam, ETH Zurich, IE University, and NTU Singapore.

Related posts on the SimTrade blog

   ▶ Mahé FERRET Selling Structured Products in France

   ▶ Akshit GUPTA Equity Structured Products

   ▶ Youssef LOURAOUI Interest rate term structure and yield curve calibration

   ▶ Jayati WALIA Brownian Motion in Finance

   ▶ Shengyu ZHENG Capital Guaranteed Products

   ▶ Shengyu ZHENG Reverse Convertibles

Useful resources

Slah Boughattas (2022) State of the Art in Structured Products: Fundamentals, Designing, Pricing, and Hedging Advanced Education in Financial Engineering Editions.

About the author

The article was written in November 2025 by Slah BOUGHATTAS (Ph.D., Associate of the Chartered Institute for Securities & Investment (CISI), London).

The Business Model of Proprietary Trading Firms

Anis MAAZ

In this article, Anis MAAZ (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027) explains how prop firms work, from understanding their business model and evaluation processes, to fee structures and risk management rules. The goal is not to promise guaranteed profits, but to provide a transparent, realistic overview of how proprietary trading firms operate and what traders should know before joining one.

Context and objective

  • Goal: demystify how prop firms make money, how their rules work, and what realistic outcomes look like, even if you are new to prop firms.
  • Outcome: a technical but accessible guide with a simple numeric example and a due diligence checklist.

What a prop firm is

Proprietary trading firms (prop firms) use their own capital to trade in financial markets, leveraging advanced risk management techniques and state-of-the-art technologies. But how exactly do prop firms make money, and what makes them attractive to aspiring traders? Traders who meet the firm’s rules get access to buying power and share in the profits. Firms protect their capital with strict risk limits (daily loss, maximum drawdown, product caps). You will encounter two operating styles:

  • In-house/desk model: you trade live firm capital on a desk with a risk manager.
  • Evaluation (“challenge”) model: you pay a fee to prove you can hit a target without breaking rules. If you pass, you receive a “funded” account with payout rules. For example, a classic challenge is to reach a profit of 6% without losing more than 4% of your initial challenge capital in order to become funded.

The Proprietary Trading Industry: Origins and Scale

Proprietary trading as a business model emerged in the 1980s-1990s in the US, initially within investment banks’ trading desks before regulatory changes (notably the Volcker Rule in 2010) pushed prop trading into independent firms. The modern “retail prop firm” model, offering funded accounts to individual traders via evaluation challenges, gained momentum in the 2010s, particularly after 2015 with firms like FTMO (Czech Republic, 2014) and TopstepTrader (US, 2012).

Today, the industry includes an estimated 200+ prop firms globally, concentrated in the US, UK, and UAE (Dubai has become a hub due to favorable regulations). Major players include FTMO, TopstepTrader, Apex Trader Funding, Alphafutures, and MyForexFunds. Most are privately owned by founders or small investor groups and some (like Topstep) have received venture capital. The market size is difficult to quantify precisely, but industry reports estimate the global prop trading sector handles billions in trading capital, with the retail-focused segment growing 40-50% annually from 2020-2024.

Core Characteristics of prop firms

  • Capital Allocation: Prop firms provide traders with access to firm capital, enabling them to trade larger positions than they could on their own.
  • Profit Sharing: A trader’s earnings are typically a percentage of the profits generated. This incentivizes high-caliber performance.
  • Training Programs: Many prop firms invest in the development of new traders via structured training programs, equipping them with proven strategies and technologies.
  • Diverse Markets: Prop traders operate across various asset classes, such as stocks, forex, options, cryptocurrencies, and commodities.

How the business model works

The money comes from evaluation fees and resets: a major revenue line for challenge-style firms because most applicants do not pass the challenges. Once funded, a trader keeps the majority of the profits generated (often 70–90%) and the firm keeps the rest. Some firms charge for platform, data or advanced tools such as a complete order book, and pay exchange/clearing fees on futures.

In some cases, firms may charge onboarding or monthly platform fees to cover operational costs, such as trading infrastructure, data services, and proprietary software. However, top firms often waive such fees for consistently profitable traders.

For example, a firm charging $150 for a $50,000 evaluation challenge that attracts 10,000 applicants per month generates $1.5M in fee revenue. If 8% pass (800 traders) and receive funded accounts, and only 20% of those (160) reach a payout, the firm pays out perhaps $500,000-$800,000 in profit splits while retaining the rest as margin. Add-on services (resets at $100 each, platform fees) further boost revenue.
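
The arithmetic of this illustration can be laid out explicitly. The sketch below reproduces the hypothetical numbers above, with an assumed average payout per paid trader chosen so that total payouts land inside the quoted range; none of these figures describe any specific firm.

```python
def challenge_firm_economics(fee, applicants, pass_rate, payout_rate, avg_payout):
    """Rough monthly economics of a challenge-style prop firm (all inputs hypothetical)."""
    fee_revenue = fee * applicants
    funded_traders = int(applicants * pass_rate)
    paid_traders = int(funded_traders * payout_rate)
    total_payouts = paid_traders * avg_payout
    return {
        "fee_revenue": fee_revenue,
        "funded_traders": funded_traders,
        "paid_traders": paid_traders,
        "total_payouts": total_payouts,
        "gross_margin": fee_revenue - total_payouts,
    }

# Numbers from the example: $150 fee, 10,000 applicants, 8% pass, 20% reach a payout,
# and an assumed average payout of $4,000 per paid trader.
print(challenge_firm_economics(fee=150, applicants=10_000, pass_rate=0.08,
                               payout_rate=0.20, avg_payout=4_000))
```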

Who Are the Traders?

Prop firm traders come from diverse backgrounds: retail traders seeking leverage, former bank traders, students, and career-changers. No formal degree is required. The average trader age ranges from 25 to 40, though firms accept anyone 18+. Most traders operate as independent contractors rather than employees: they receive profit splits and bear their own tax obligations.

Retention is actually very low: industry data suggests 60-70% of funded traders lose their accounts within 3 months due to rule violations or drawdowns. Only 10-15% maintain funded status beyond 6 months. The model is inherently high-churn: firms continuously recruit through affiliates and ads, knowing most will fail but a small percentage will generate consistent trading activity and profit-share revenue.

What successful traders share:

  • The ability to manage risk and follow rules.
  • Analytical skills and a deep understanding of market behavior.
  • Psychological toughness to handle the highs and lows of trading.

It’s not an easy industry at all, and it is better to have a regular job alongside, because only a small fraction of traders pass and an even smaller fraction reach payouts after succeeding in a challenge. Fee income arrives upfront; payouts happen later and only for those who succeed and stay disciplined over time.

For new traders, it is not easy to pass a challenge when the rules are strict, because trading with someone else’s capital often amplifies fear and greed. Success is judged not only by profitability but also by consistency and adherence to firm guidelines, and many new traders struggle to maintain profitability and burn out within months.

EU regulators have long reported that most retail accounts lose money on leveraged products like CFDs: typically 74–89%, which helps explain why challenge pass rates are low without strong process and discipline.

Success rates: what is typical and why most traders fail

The “pass rate” (the share of applicants who complete the challenge) is commonly cited around 5–10%. The payout rate among funded traders is often around 20%. End to end, only about 1–2% of all applicants reach a payout. All of these statistics vary by firm, product, and rules. Most people fail due to rule breaches under pressure (daily loss limits, news locks), overtrading, and inconsistent execution. Psychological factors, like revenge trading and FOMO (fear of missing out), are the usual culprits.

Trading Strategies, Markets, and Tools

Which Markets?

Most prop firms focus on futures (E-mini S&P, Nasdaq, crude oil), forex (EUR/USD, GBP/USD), and increasingly cryptocurrencies (Bitcoin, Ethereum). Some firms also offer equities (US stocks). The choice depends on the firm’s clearing relationships and risk appetite. Futures dominate because of high leverage, deep liquidity, and long trading hours.

Common Strategies

Prop traders typically employ “intraday strategies”:

  • Scalping (holding positions for seconds to minutes)
  • Momentum trading (riding short-term trends) and mean reversion (fading extremes)
  • Swing trading (multi-day holds), which is less common due to overnight risk rules

High-frequency strategies are rare in retail prop firms; most traders use setups based on technical indicators (moving averages, RSI, volume profiles).

Tools and Platforms

Firms provide access to professional platforms such as NinjaTrader, TradingView, and MetaTrader 4/5. Traders receive Level 2 data (order book), news feeds (Bloomberg, Reuters), and sometimes proprietary risk dashboards. Some firms also offer replay tools to practice on historical data.

The key performance idea

Positive expectancy = you make more on your average winning trade than you lose on your average losing trade, often enough to overcome costs. Here is a simple way to check:

Step 1: Out of 10 trades, how many are winners? Example: 5 winners, 5 losers (50% win rate).
Step 2: What is your average win and average loss? Example: average win €120; average loss €80.
Step 3: Expected profit per trade ≈ (wins × avg win − losses × avg loss) ÷ number of trades. Here: (5 × 120 − 5 × 80) ÷ 10 = (€600 − €400) ÷ 10 = €20 per trade.

If costs/slippage are below €20 per trade, you likely have an edge worth scaling, subject to the firm’s risk limits; a short code sketch of this check follows.
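
Here is a minimal sketch of the same expectancy check, using the numbers from the example above; the cost figure is a hypothetical placeholder.

```python
def expectancy(win_rate, avg_win, avg_loss):
    """Expected profit per trade before costs: win_rate * avg_win - (1 - win_rate) * avg_loss."""
    return win_rate * avg_win - (1 - win_rate) * avg_loss

# Numbers from the example: 50% win rate, average win EUR 120, average loss EUR 80
edge = expectancy(win_rate=0.5, avg_win=120, avg_loss=80)
print(f"Expected profit per trade: EUR {edge:.2f}")   # EUR 20.00

# The edge is only real if round-trip costs and slippage stay below this figure.
costs_per_trade = 8.0   # hypothetical
print(f"Net edge after costs: EUR {edge - costs_per_trade:.2f}")
```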

The firm wants you to stay inside the limits, keep your average loss controlled (stops respected), and produce results that are repeatable across days. Firms reduce the “luck factor” with rules such as a minimum of two winning days to pass a challenge, or a cap preventing more than half of the challenge target from being earned in a single day.

There are many ways to pass a challenge, depending on your trading strategy: if you aim for trades where the win is 5 times larger than what you risk, you do not need a win rate of 50% or 80% to pass the challenges and be profitable.

Payout mechanics: example with Topstep (to clarify the “50%” point)

Profit split: you keep 100% of the first $10,000 you withdraw; after that, the split is 90% to you / 10% to Topstep (per trader, across accounts).

Per-request cap: Express Funded Account: request up to the lesser of $5,000 or 50% of your account balance per payout, after 5 winning days. Live Funded Account: up to 50% of the balance per request (no $5,000 cap). After 30 non-consecutive “winning days” in Live, you can unlock daily payouts up to 100% of the balance.

Note: “50%” here is a cap on how much you may withdraw per request—not the profit split. Other firms differ (some advertise 80–90% splits, 7–30 day payout cycles, or higher first withdrawal shares), so always read the current Terms.
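
To make the distinction between the cap and the split concrete, the small sketch below applies the “lesser of $5,000 or 50% of the balance” rule described above to a few hypothetical account balances.

```python
def express_max_request(balance, cap=5_000, share=0.50):
    """Maximum payout request under a 'lesser of cap or share-of-balance' rule."""
    return min(cap, share * balance)

for balance in [6_000, 12_000, 25_000]:   # hypothetical account balances
    print(f"Balance ${balance:,}: max request ${express_max_request(balance):,.0f}")
```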

Why traders choose prop firms (psychology and practical reasons)

Traders are attracted to prop firms for both psychological and practical reasons. The appeal starts with small upfront risk: instead of depositing a large personal account, you pay a fixed evaluation fee. If you perform well within the rules, you gain access to greater buying power, which lets you scale faster than you could with a small personal account.

But this model can also become a psychological trap: most traders fail their first account, buy another one because it is “cheap,” and the process can turn into an addiction, with traders burning accounts day after day because the money “doesn’t feel real” to them. The trade-offs are real: evaluation fees and resets add up, rules may feel restrictive, and pressure tends to spike near limits or payout thresholds. All these factors contribute to why many candidates ultimately fail.

However, for experienced traders who can manage their psychology, the built-in structure, risk limits, reviews, and community add accountability and often improve discipline. Payouts can also serve as a capital-building path, gradually seeding your own account over time.

Regulation: A Gray Zone

Proprietary trading firms operate in a largely unregulated space, especially the evaluation-based model. In the US, prop firms are not broker-dealers; they typically collaborate with registered FCMs (Futures Commission Merchants) or brokers who handle execution and clearing, but the firm itself is often a private LLC with minimal oversight. The CFTC (Commodity Futures Trading Commission) regulates futures markets but not prop firms’ internal challenge mechanisms.

In France, the AMF has issued warnings about unregulated prop firms and emphasized that if a firm collects fees from French residents, it may fall under consumer protection law. Some firms have pulled out of France or adjusted terms. The UK FCA has similarly warned consumers. The UAE (DIFC, DMCC) offers more permissive environments, attracting many firms to Dubai.

Conclusion

Prop trading firms offer a compelling proposition: controlled access to institutional-sized buying power, standardized risk limits, and a structured pathway for transforming skill into capital without large personal deposits. In this model, firms protect their capital through rules and fees, while profitable traders gain a scalable environment for strategy development and execution.

At the same time, the evaluation-and-payout cycle can amplify cognitive and emotional traps. Fee resets, drawdown thresholds, and profit targets concentrate attention on short-term outcomes, which can foster overtrading, sensation seeking, and schedule-driven risk-taking. The same leverage that accelerates account growth also magnifies behavioral errors and variance, making intermittent reinforcement (occasional big wins amid frequent setbacks) psychologically sticky and potentially addictive.

In the end, prop firms are neither shortcut nor scam, but a high-constraint laboratory. They reward stable execution and rule adherence, and penalize improvisation and impulse. As a venue, they are well suited to disciplined traders with repeatable processes, robust risk controls, and patience for incremental scale. Without those traits, the structure that protects the firm can become a treadmill for the trader.

At the end of the day, the prop firm model is designed for the firm to profit from fees, not trader success. With 1-2% end-to-end success rates, it’s closer to a paid training lottery than a career path.

If your goal is to learn trading, SimTrade, paper trading, or small personal accounts teach discipline without predatory fee structures. Joining a bank’s graduate program gives you access to senior traders, research, and real market-making or flow trading experience.

If you’ve already traded profitably for 1-2 years, have a proven strategy, need leverage, and fully understand the fee economics, then a top-tier firm (FTMO, Topstep) could provide capital to scale. But as a first step out of ESSEC, I would prioritize banking or buy-side roles that offer mentorship, stability, and credentials.

Why should I be interested in this post?

Prop firms reveal how trading businesses monetize edge while enforcing strict risk management and incentive design. Grasping evaluation rules, fee structures, and payout mechanics sharpens your ability to assess unit economics and governance. This knowledge is directly applicable to careers in trading, risk, and fintech—helping you make informed choices before joining a program.

Related posts on the SimTrade blog

   ▶ Theo SCHWERTLE Can technical analysis actually help to make better trading decisions?

   ▶ Michel VERHASSELT Trading strategies based on market profiles and volume profiles

   ▶ Vardaan CHAWLA Real-Time Risk Management in the Trading Arena

Useful Resources

Topstep payout policy and FAQs (current rules and examples)

The Funded Trader statistics on pass/payout rates

How prop firms make money (evaluation fees vs profit share): neutral primers and industry explainers

General overviews of prop trading mechanics and risk controls

About the author

The article was written in October 2025 by Anis MAAZ (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027).

Modern Portfolio Theory: What is it and what are its limitations?

Yann TANGUY

In this article, Yann TANGUY (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027) explains the Modern Portfolio Theory and how Post-Modern Portfolio Theory solves some of its limitations.

Creation of Modern Portfolio Theory (MPT)

Developed in 1952 by Nobel laureate Harry Markowitz, MPT revolutionized the way investors think about portfolios. Before Markowitz, investment decisions were mostly based on the individual merits of each investment. MPT changed the way to think about investing by showing that an investment cannot be evaluated in isolation, but through its contribution to portfolio risk and return.

At the center of MPT is the principle of diversification. The adage “don’t put all your eggs in one basket” is the basis of this theory. By diversifying a portfolio with assets having different risk and return profiles and a low correlation, an investor can build a portfolio that has a lower risk than any of its components.

A Practical Example

Let’s assume that we have just two assets: stocks and bonds. Stocks have given higher returns over a long period of time compared to bonds but are riskier. On the other hand, bonds are less risky but return less.

An investor who puts all their money in stocks will have huge returns in a bull market but will suffer huge losses in a bear market. A conservative investor who puts money in bonds alone will have a smooth portfolio but will be denied the chance of better growth.

MPT shows that combining different investments in a portfolio can achieve a better risk-reward ratio than holding single investments. The key is the correlation of the assets. If the correlation is less than 1, the portfolio’s risk will be less than the weighted average of each individual asset’s risk. In this simplified example, stocks perform poorly when bonds perform well and vice versa, so they have a negative correlation, which smooths the overall returns of the portfolio.

Mathematical explanation

To estimate the risk of a portfolio, MPT uses statistical measures like variance and standard deviation. The variance is computed first, and its square root, the standard deviation, is used to assess the risk of an asset, as it indicates how much the asset’s price fluctuates.

On the other hand, correlation and covariance quantify how two assets move relative to each other, i.e., whether the assets move in the same way. Correlation lies between -1 and 1: a correlation of 1 means that the assets move in exactly the same way, and -1 means that they move in opposite ways.

The portfolio variance is calculated as follows for a portfolio of asset A and asset B:

Portfolio Variance Formula

Where:

  • R = return
  • w = weight of the asset
  • Var = variance
  • Cov = covariance

The variance of a portfolio is therefore not equal to the weighted average of the variances of its components, because we also factor in the covariance between those components.
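To illustrate the two-asset formula, here is a minimal Python sketch; the weights, volatilities, and correlation below are hypothetical, and the point is simply to show that portfolio risk falls below the weighted average of the individual risks as soon as the correlation is below 1.

```python
import math

# Minimal sketch of the two-asset portfolio variance formula (hypothetical inputs).
w_a, w_b = 0.6, 0.4            # portfolio weights of assets A (stocks) and B (bonds)
sigma_a, sigma_b = 0.15, 0.05  # annual volatilities of A and B
rho = -0.2                     # correlation between A and B

cov_ab = rho * sigma_a * sigma_b
var_p = w_a**2 * sigma_a**2 + w_b**2 * sigma_b**2 + 2 * w_a * w_b * cov_ab
sigma_p = math.sqrt(var_p)

weighted_avg_risk = w_a * sigma_a + w_b * sigma_b
print(f"Portfolio volatility:      {sigma_p:.2%}")            # about 8.8%
print(f"Weighted average of risks: {weighted_avg_risk:.2%}")  # 11.0%, i.e. higher
```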

The aim of MPT is to find the optimal portfolio mix that minimizes the portfolio standard deviation for a given level of expected return or that maximizes the portfolio expected return for a given level of standard deviation. This can be graphically represented as the efficient frontier, a line representing the set of optimal portfolios.

This Efficient Frontier represents different allocations of assets in a portfolio. All portfolios on this frontier are called efficient portfolios, meaning that they offer the best risk-adjusted returns possible with this combination of assets. When choosing an allocation, an investor should therefore pick a portfolio located on the frontier according to their risk tolerance and return objective.

The figure below represents the efficient frontier when investors can invest in risky assets only.

Efficient Portfolio Frontier.
Portfolio Efficient Frontier
Source: Computation by the Author.

Quantifying performance

To quantify the performance of a portfolio, MPT uses the Sharpe ratio. The Sharpe ratio measures the excess return of the portfolio (the return over the risk-free rate) relative to the risk of the portfolio (defined by the portfolio standard deviation). The formula is as follows:

Sharpe Ratio Formula

Where:

  • E(RP) = expected return of portfolio P
  • Rf = risk-free rate
  • σP = standard deviation of returns of portfolio P

A higher Sharpe ratio indicates a better risk-adjusted return.

Limitations of MPT

Even though MPT has been used in finance for decades, it is not universally accepted. The biggest criticism is that it employs the standard deviation to measure price movement, which makes no difference between positive and negative volatility: both are treated as risk.

However, many investors would be happy with a portfolio that returns 20% one year and 40% the next; such a portfolio could be considered risky by MPT because of the large variation in its returns, even though it always beats the return the investor needs. That variation does not matter if the return objective is always met, which suggests that investors care more about downside risk: the risk of performing worse than their return objective.

Emergence of Post-Modern Portfolio Theory (PMPT)

PMPT, introduced in 1991 by software designers Brian M. Rom and Kathleen Ferguson, is a refinement of MPT to overcome its main shortcoming. The key difference lies in the fact that PMPT focuses on downside deviation as a measure of risk, rather than the normal standard deviation that takes every form of deviation into account.

The origins of PMPT can be linked to the work of A. D. Roy with his “Safety First” principle in his 1952 paper, “Safety First and the Holding of Assets”. In his paper, Roy argued that investors are primarily motivated by the desire to avoid disaster rather than to maximize their gains. As he put it, “Decisions taken in practice are less concerned with whether a little more of this or of that will yield the largest net increase in satisfaction than with avoiding known rocks of uncertain position or with deploying forces so that, if there is an ambush round the next corner, total disaster is avoided.” Roy proposed that investors should seek to minimize the probability that their portfolio’s return will fall below a certain minimum acceptable level, or “disaster” level which is now known as MAR for “Minimum Acceptable Return”.

PMPT introduces the concept of the Minimum Acceptable Return (MAR), i.e., the lowest return that the investor wishes to receive. Instead of looking at the overall volatility of a portfolio, PMPT looks only at the returns below the MAR.

Calculating Downside Deviation

To compute downside deviation, we carry out the following steps (a short sketch in Python follows the list):

  1. Define the Minimum Acceptable Return (MAR).
  2. Calculate the difference between the portfolio return and the MAR for each period.
  3. Square the negative differences.
  4. Sum the squared negative differences.
  5. Divide by the number of periods.
  6. Take the square root of the result to obtain the downside deviation.
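The steps above translate directly into a few lines of Python; the periodic returns and the MAR below are hypothetical and only meant to show the mechanics.

```python
import math

# Minimal sketch of the downside deviation calculation (hypothetical periodic returns and MAR).
returns = [0.03, -0.02, 0.01, -0.05, 0.04, 0.00, -0.01, 0.02]  # portfolio returns per period
mar = 0.005  # Minimum Acceptable Return per period

# Steps 2-3: keep only the returns below the MAR and square their shortfall
squared_shortfalls = [(r - mar) ** 2 for r in returns if r < mar]

# Steps 4-6: sum, divide by the number of periods, take the square root
downside_deviation = math.sqrt(sum(squared_shortfalls) / len(returns))
print(f"Downside deviation: {downside_deviation:.4f}")
```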

You can download the Excel file below which illustrates the difference between MPT and PMPT with two examples of market conditions (correlation).

Download the Excel file for the data for MPT and PMPT

In this file we find two combinations of assets: Example 1 and Example 2. The first combination has a positive correlation (0.72) and the second a negative one (-0.75), while the standard deviations and returns of the individual assets are very similar across the two examples.

First, using MPT, we demonstrate how a high correlation leads to a weaker diversification effect and a smaller increase in portfolio efficiency (Sharpe ratio) compared to a very similar portfolio with a low correlation.

Diversification effect on Sharpe Ratio (High correlation)

Diversification effect on Sharpe Ratio (Low correlation)

Afterwards, we use PMPT to show how correlation also impacts the diversification effect through the lens of downside deviation, i.e., how much the portfolio moves below the MAR, keeping in mind that these portfolios have only around a 0.1% difference in average return and originally have almost the same volatility.

Diversification effect on Downside Deviation (High correlation)

Diversification effect on Downside Deviation (Low correlation)

Focusing on downside risk is made even more important when you consider that financial returns are rarely normally distributed, as is often assumed in MPT. In their 2004 paper, “Portfolio Diversification Effects of Downside Risk,” Namwon Hyung and Casper G. de Vries show that returns often exhibit “fat tails,” meaning that extreme negative events are more common than a normal distribution would predict.

They find that in this environment, diversification is even more powerful in reducing downside risk. They state: “The VaR-diversification-speed is higher for the class of (finite variance) fat tailed distributions in comparison to the normal distribution”. In other words, for investors concerned about downside risk, diversification is a more potent tool than they might realize, as it becomes even more efficient once the real distribution of returns is taken into account.

Conclusion

Modern Portfolio Theory has been the main theory used by investors for more than half a century. Its basic premise of diversification and asset allocation is as valid as it ever was. But using only the standard deviation of returns gives one side of the picture, a picture more fully captured by PMPT.

Post-Modern Portfolio Theory is a more advanced way of managing risk. With its focus on downside deviation, it provides investors with a more accurate sense of what they are risking and allows them to build portfolios better aligned with their goals and risk tolerance. MPT was the first iteration, but PMPT has built a more practical framework to effectively diversify a portfolio.

An effective diversification strategy is built on a solid foundation of asset allocation among low-correlation asset classes. By focusing on the quality of diversification rather than only the quantity of holdings, investors can build portfolios that are better aligned with their goals, avoiding the unnecessary costs and diluted returns that come with a “diworsified” approach.

Why should I be interested in this post?

MPT is a theory widely used in asset management; understanding its principles and limitations is essential in today’s financial landscape.

Related posts on the SimTrade blog

   ▶ Rayan AKKAWI Warren Buffet and his basket of eggs

   ▶ Raphael TRAEN Understanding Correlation in the Financial Landscape: How It Drives Portfolio Diversification

   ▶ Rishika YADAV Understanding Risk-Adjusted Return: Sharpe Ratio & Beyond

   ▶ Youssef LOURAOUI Minimum Volatility Portfolio

   ▶ All posts about Financial techniques

Useful resources

Ferguson, K. (1994) Post-Modern Portfolio Theory Comes of Age, The Journal of Investing, 1:349-364

Geambasu, C., Sova, R., Jianu, I., and Geambasu, L., (2013) Risk measurement in post-modern portfolio theory: Differences from modern portfolio theory, Economic Computation and Economic Cybernetics Studies and Research, 47:113-132.

Markowitz, H. (1952) Portfolio Selection, The Journal of Finance, 7(1):77–91.

Roy, A.D. (1952) Safety First and the Holding of Assets, Econometrica, 20, 431-449.

Hyung, N., & de Vries, C. G. (2004) Portfolio Diversification Effects of Downside Risk, Working paper.

Sharpe, W.F. (1966) Mutual Fund Performance, Journal of Business, 39(1), 119–138.

Sharpe, W.F. (1994) The Sharpe Ratio, Journal of Portfolio Management, 21(1), 49–58.

About the author

This article was written in October 2025 by Yann TANGUY (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027).

Understanding Risk-Adjusted Return: Sharpe Ratio & Beyond

Rishika YADAV

In this article, Rishika YADAV (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023–2027) explains the concept of risk-adjusted return, with a focus on the Sharpe ratio and complementary performance measures used in portfolio management.

Risk-adjusted return

Risk-adjusted return measures how much return an investment generates relative to the level of risk taken. This allows meaningful comparisons across portfolios and funds. For example, two portfolios may both generate a 12% return, but the one with lower volatility is superior because most investors are risk-averse — they prefer stable and predictable returns. A portfolio that achieves the same return with less risk provides higher utility to a risk-averse investor. In other words, it offers better compensation for the risk taken, which is precisely what risk-adjusted measures like the Sharpe Ratio capture.

The Sharpe Ratio

The Sharpe Ratio is the most widely used risk-adjusted performance measure. It standardizes excess return (return minus the risk-free rate) by total volatility and answers the question: how much additional return does an investor earn per unit of risk?

Sharpe Ratio = (E[RP] − Rf) / σP

where E[RP] = expected return of portfolio P, Rf = risk-free rate (e.g., T-bill yield), and σP = standard deviation of portfolio P’s returns (volatility).
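For concreteness, here is a minimal Python sketch computing an annualized Sharpe Ratio from a series of daily returns; the return series and the risk-free rate are hypothetical, and annualizing with 252 trading days per year is a common convention rather than a rule.

```python
import statistics as stats

# Minimal sketch of an annualized Sharpe Ratio from daily returns (hypothetical data).
daily_returns = [0.004, -0.002, 0.003, 0.001, -0.004, 0.005, 0.002, -0.001]
risk_free_annual = 0.03             # e.g., a T-bill yield
risk_free_daily = risk_free_annual / 252

excess = [r - risk_free_daily for r in daily_returns]
mean_excess = stats.mean(excess)
volatility = stats.stdev(excess)    # sample standard deviation of daily excess returns

sharpe_annualized = (mean_excess / volatility) * (252 ** 0.5)
print(f"Annualized Sharpe Ratio: {sharpe_annualized:.2f}")
```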

Interpretation

The Sharpe Ratio was developed by Nobel Laureate William F. Sharpe (1966) as a way to measure the excess return of an investment relative to its risk. A higher Sharpe ratio indicates better risk-adjusted performance.

  • < 1 = sub-optimal
  • 1–2 = acceptable to good
  • 2–3 = very good
  • > 3 = excellent (rarely achieved consistently)

In real financial markets, sustained Sharpe Ratios above 1.0 are uncommon. Over the past four decades, broad equity indices like the S&P 500 have averaged between 0.4 and 0.7, while balanced multi-asset portfolios often fall in the 0.6–0.9 range. Only a handful of hedge funds or quantitative strategies have achieved Sharpe ratios consistently above 1.0, and values exceeding 1.5 are exceptionally rare. Thus, while the Sharpe ratio is a useful comparative tool, the theoretical thresholds (e.g., >3 as “excellent”) are not typically observed in real markets.

Capital Allocation Line (CAL) and Capital Market Line (CML)

The Capital Allocation Line (CAL) represents the set of portfolios obtainable by combining a risk-free asset with a chosen risky portfolio P. It is a straight line in the (risk, expected return) plane: investors choose a point on the CAL according to their risk preference.

The equation of the CAL is:

E[RQ] = Rf + ((E[RP] − Rf) / σP) × σQ

where:

  • E[RQ] = expected return of the combined portfolio Q
  • Rf = risk-free rate
  • E[RP] = expected return of risky portfolio P
  • σP = standard deviation of P
  • σQ = resulting standard deviation of the combined portfolio (proportional to weight in P)

The slope of the CAL equals the Sharpe ratio of portfolio P:

Slope(CAL) = (E[RP] − Rf) / σP = Sharpe(P)

The Capital Market Line (CML) is the CAL obtained when the risky portfolio P is the market portfolio M. Under CAPM/Markowitz assumptions, the market portfolio is the tangency portfolio (the highest-Sharpe point) on the efficient frontier, and the CML is tangent to the efficient frontier at M.

The equation of the CML is:

E[RQ] = Rf + ((E[RM] − Rf) / σM) × σQ

where M denotes the market portfolio.

The slope of the CML, (E[RM] − Rf) / σM, is the Sharpe ratio of the market portfolio.
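A small Python sketch, with hypothetical inputs, makes the geometry explicit: combining the risk-free asset with the risky portfolio P in different proportions traces out a straight line whose slope is exactly the Sharpe ratio of P.

```python
# Minimal sketch of points along the CAL (hypothetical risk-free rate and portfolio P).
rf = 0.03                    # risk-free rate
e_rp, sigma_p = 0.08, 0.15   # expected return and volatility of risky portfolio P
sharpe_p = (e_rp - rf) / sigma_p

for w in [0.0, 0.5, 1.0, 1.5]:           # weight invested in P (1.5 = leveraged position)
    e_rq = (1 - w) * rf + w * e_rp       # expected return of the combined portfolio Q
    sigma_q = w * sigma_p                # volatility of Q (the risk-free asset has zero volatility)
    # The CAL equation E[RQ] = Rf + Sharpe(P) x sigma_Q gives the same point:
    assert abs(e_rq - (rf + sharpe_p * sigma_q)) < 1e-9
    print(f"w={w:.1f}  sigma_Q={sigma_q:.2%}  E[RQ]={e_rq:.2%}")
```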

The link between the CAL, CML and Sharpe ratio is illustrated in the figure below.

Figure 1. Capital Allocation Line (CAL), Capital Market Line (CML) and the Sharpe ratio.
Capital Allocation Line and Sharpe ratio
Source: computation by author.

Strengths of the Sharpe Ratio

  • Simple and intuitive — easy to compute and interpret.
  • Versatile — applicable across asset classes, funds, and portfolios.
  • Balances reward and risk — combines excess return and volatility into a single metric.

Limitations of the Sharpe Ratio

  • Assumes returns are approximately normally distributed — real returns often show skewness and fat tails.
  • Penalizes upside and downside volatility equally — it does not distinguish harmful downside movements from beneficial upside.
  • Sensitive to the chosen risk-free rate and the return measurement horizon (daily/monthly/annual).

Beyond Sharpe: Alternative measures

  • Treynor Ratio — uses systematic risk (β) instead of total volatility: Treynor = (Rp − Rf) / βp. Best for well-diversified portfolios.
  • Sortino Ratio — focuses only on downside deviation, so it penalizes harmful volatility (losses) but not upside variability.
  • Jensen’s Alpha — α = Rp − [Rf + βp(Rm − Rf)]; measures manager skill relative to CAPM expectations.
  • Information Ratio — active return (vs benchmark) divided by tracking error; useful for evaluating active managers.

Applications in portfolio management

Risk-adjusted metrics are used by asset managers to screen and rank funds, by institutional investors for capital allocation, and by analysts to determine whether outperformance is due to skill or increased risk exposure. When two funds have similar absolute returns, the one with the higher Sharpe Ratio is typically preferred.

Why should I be interested in this post?

Understanding the Sharpe Ratio and complementary risk-adjusted measures is essential for students interested in careers in asset management, equity research, or investment analysis. These tools help you evaluate performance meaningfully and make better investment decisions.

Related posts on the SimTrade blog

   ▶ Capital Market Line (CML)

   ▶ Understanding Correlation and Portfolio Diversification

   ▶ Implementing the Markowitz Asset Allocation Model

   ▶ Markowitz and Modern Portfolio Theory

Useful resources

Jensen, M. (1968) The Performance of Mutual Funds in the Period 1945–1964, Journal of Finance, 23(2), 389–416.

Sharpe, W.F. (1966) Mutual Fund Performance, Journal of Business, 39(1), 119–138.

Sharpe, W.F. (1994) The Sharpe Ratio, Journal of Portfolio Management, 21(1), 49–58.

Sortino, F. and Price, L. (1994) Performance Measurement in a Downside Risk Framework, Journal of Investing, 3(3), 59–64.

About the author

This article was written in October 2025 by Rishika YADAV (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023–2027). Her academic interests lie in strategy, finance, and global industries, with a focus on the intersection of policy, innovation, and sustainable development.

US Treasury Bonds

Nithisha CHALLA

In this article, Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management (MiM), 2021-2024) gives a comprehensive overview of U.S. Treasury bonds, covering their features, benefits, risks, and how to invest in them.

Introduction

Treasury bonds, often referred to as T-bonds, are long-term debt securities issued by the U.S. Department of the Treasury. They are regarded as one of the safest investments globally, offering a fixed interest rate and full backing by the U.S. government. This article aims to provide an in-depth understanding of Treasury bonds, from their basics to advanced concepts, making it an essential read for finance students and professionals.

What Are Treasury Bonds?

Treasury bonds are government debt instruments with maturities ranging from 10 to 30 years. Investors receive semi-annual interest payments and are repaid the principal amount upon maturity. Due to their low credit risk, Treasury bonds are a popular choice for conservative investors and serve as a benchmark for other interest-bearing securities.

Types of Treasury Securities

Treasury bonds are part of a broader category of U.S. Treasury securities, which include:

  • Treasury Bills (T-bills): Short-term securities with maturities of one year or less, sold at a discount and redeemed at face value at maturity.
  • Treasury Notes (T-notes): Medium-term securities with maturities between 2 and 10 years, offering fixed interest payments.
  • Treasury Inflation-Protected Securities (TIPS): Securities adjusted for inflation to protect investors’ purchasing power.
  • Treasury Bonds (T-bonds): Long-term securities with maturities of up to 30 years, ideal for investors seeking stable, long-term income.

Historical Performance of Treasury Bonds

Historically, Treasury bonds have been a cornerstone of risk-averse portfolios. During periods of economic uncertainty, they act as a safe haven, preserving capital and providing reliable income. For instance, during the 2008 financial crisis and the COVID-19 pandemic, Treasury bond yields dropped significantly as investors flocked to their safety.

Despite their stability, T-bonds are sensitive to interest rate fluctuations. When interest rates rise, bond prices typically fall, and vice versa. Over the long term, they have delivered modest returns compared to equities but excel in capital preservation.

Investing in Treasury Bonds

Investing in Treasury bonds can be done through various channels like Direct Purchase, Brokerage Accounts, Mutual Funds and ETFs, and Retirement Accounts:

  • Direct Purchase: Investors can buy T-bonds directly from the U.S. Treasury via the TreasuryDirect website.
  • Brokerage Accounts: Treasury bonds are also available on secondary markets through brokers.
  • Mutual Funds and ETFs: Investors can gain exposure to Treasury bonds through funds that focus on government securities.
  • Retirement Accounts: T-bonds are often included in 401(k) plans and IRAs for diversification.

Factors Affecting Treasury Bond Prices

Several factors influence the prices and yields of Treasury bonds such as Interest Rates, Inflation Expectations, Federal Reserve Policy, and Economic Conditions:

  • Interest Rates: An inverse relationship exists between bond prices and interest rates.
  • Inflation Expectations: Higher inflation erodes the real return on bonds, causing prices to drop.
  • Federal Reserve Policy: The Federal Reserve’s actions, such as changing the federal funds rate or engaging in quantitative easing, directly impact Treasury yields.
  • Economic Conditions: In times of economic turmoil, demand for Treasury bonds increases, driving up prices and lowering yields.

Relationship between bond price and current bond yield

Let us consider a US Treasury bond with nominal value M, coupon C, maturity T, and interest paid twice a year (every semester). The coupon (or interest paid every period) is computed with the coupon rate. The nominal value is reimbursed at maturity. The current yield is the market rate, which may be lower or greater than the rate at the time of issuance of the bond (the coupon rate used to compute the dollar value of the coupon). The formula below gives the price of the bond (we consider a date just after the issuance date and different yield rates).

Formula for the price of the bond
 Formula for the price of the bond
Source: The author
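To illustrate the relationship, here is a minimal Python sketch pricing a semi-annual coupon bond at different current yields; the nominal value, coupon rate, and maturity below are hypothetical.

```python
# Minimal sketch of the bond price as a function of the current yield
# (hypothetical bond: $1,000 nominal, 4% coupon rate, 20 years, semi-annual coupons).

def bond_price(nominal: float, coupon_rate: float, maturity_years: int, current_yield: float) -> float:
    """Present value of the semi-annual coupons plus the nominal value repaid at maturity."""
    coupon = nominal * coupon_rate / 2      # coupon paid each semester
    n_periods = maturity_years * 2
    y = current_yield / 2                   # periodic (semi-annual) yield
    pv_coupons = sum(coupon / (1 + y) ** t for t in range(1, n_periods + 1))
    pv_nominal = nominal / (1 + y) ** n_periods
    return pv_coupons + pv_nominal

for yld in [0.02, 0.03, 0.04, 0.05, 0.06]:
    print(f"Current yield {yld:.0%}: price = {bond_price(1_000, 0.04, 20, yld):,.2f}")
# The price equals the nominal value when the current yield equals the coupon rate,
# and falls as the yield rises (the inverse relationship shown in the figure below).
```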

Relationship between bond price and current bond yield
Relationship between bond price and current bond yield
Source: The author

You can download below the Excel file for the data used to build the figure for the relationship between bond price and current bond yield.

Download the Excel file to compute the bond price as a function of the current yield

Risks and Considerations

While Treasury bonds are low-risk investments, they are not entirely risk-free. There are several factors to consider, such as interest rate risk (rising interest rates can lead to capital losses for bondholders), inflation risk (fixed payments lose purchasing power during high-inflation periods), and opportunity cost (low returns on T-bonds may be less attractive compared to higher-yielding investments like stocks).

Treasury Bond Futures

Treasury bond futures are standardized contracts that allow investors to speculate on or hedge against future changes in bond prices. These derivatives are traded on exchanges like the Chicago Mercantile Exchange (CME) and are essential tools for managing interest rate risk in sophisticated portfolios.

Treasury Bonds in the Global Market

The U.S. Treasury market is the largest and most liquid government bond market worldwide. It plays a pivotal role in the global financial system:

  • Reserve Currency: Many central banks hold U.S. Treasury bonds as a key component of their foreign exchange reserves.
  • Benchmark for Other Securities: Treasury yields serve as a reference point for pricing other debt instruments.
  • Foreign Investment: Countries like China and Japan are significant holders of U.S. Treasury bonds, underscoring their global importance.

Conclusion

Treasury bonds are fundamental to the financial landscape, offering safety, stability, and insights into broader economic dynamics. Whether you are a finance student building foundational knowledge or a professional refining investment strategies, understanding Treasury bonds is indispensable. As of 2023, the U.S. Treasury market exceeds $24 trillion in outstanding debt, reflecting its vast scale and importance. By mastering the nuances of Treasury bonds, you gain a competitive edge in navigating the complexities of global finance.

Why should I be interested in this post?

Understanding Treasury bonds is crucial for anyone pursuing a career in finance. These instruments provide insights into Monetary Policy, Fixed-Income Analysis, Portfolio Management, and Macroeconomic Indicators.

Related posts on the SimTrade blog

   ▶ Nithisha CHALLA Datastream

Useful resources

Treasury Direct Treasury Bonds

Fiscal data U.S. Treasury Monthly Statement of the Public Debt (MSPD)

Treasury Direct Understanding Pricing and Interest Rates

About the author

The article was written in October 2025 by Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management (MiM), 2021-2024).

Herfindahl-Hirschman Index

Nithisha CHALLA

In this article, Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management (MiM), 2021-2024) delves into the Herfindahl-Hirschman Index (HHI).

History of the Herfindahl-Hirschman Index (HHI)

The Herfindahl–Hirschman Index (HHI) originated in the mid-20th century as a measure of market concentration. Its roots trace back to Albert O. Hirschman, who in 1945 introduced a squaring-based method to assess trade concentration in his book “National Power and the Structure of Foreign Trade.” A few years later, Orris C. Herfindahl independently applied a similar concept in his 1950 doctoral dissertation on the U.S. steel industry, formalizing the formula that sums the squares of firms’ market shares to capture dominance. Over time, economists combined their contributions, naming it the Herfindahl–Hirschman Index.

During the 1970s and 1980s, the measure gained prominence in industrial organization and competition economics. In 1982, the U.S. Department of Justice and the Federal Trade Commission officially adopted the HHI in their Merger Guidelines to evaluate market concentration and the impact of mergers, establishing it as a global standard. Since then, competition authorities worldwide, including the European Commission and the OECD, have incorporated HHI into their antitrust frameworks, and it remains widely used today to assess competition across various industries, such as banking, telecommunications, and energy.

The Herfindahl-Hirschman Index (HHI) is a widely used measure of market concentration and competition in various industries. The HHI has become a crucial tool for finance professionals, policymakers, and regulatory bodies to assess the level of competition in a market. In this article, we will delve into the basics of the HHI, its calculation, interpretation, and advanced applications, including recent statistics and news.

What is the Herfindahl-Hirschman Index (HHI)?

The HHI is a numerical measure that calculates the market concentration of a particular industry by considering the market share of each firm. The index ranges from 0 to 10,000, where higher values indicate greater market concentration and reduced competition. For example, a market comprising four firms with market shares of 30%, 30%, 20%, and 20% would have an HHI of 2,600 (30² + 30² + 20² + 20² = 2,600).

Calculation of the HHI

The HHI is calculated by summing the squares of the market shares of each firm in the industry. The market share is typically expressed as a percentage of the total market size. The formula for calculating the HHI is:

Formula for the Herfindahl-Hirschman Index (HHI).
 Formula for the Herfindahl-Hirschman Index (HHI).
Source: the author.

where MSi is the market share of firm i, and N the number of firms.

The HHI ranges from 0 (perfect competition) to 10,000 (monopoly).

Interpretation of the HHI

According to the HHI, the concentration of a sector can be categorized as low, moderate, or high (a short calculation sketch follows the list):

  • Low concentration (HHI < 1,500): Indicates a highly competitive market with many firms.
  • Moderate concentration (1,500 ≤ HHI < 2,500): Suggests a moderately competitive market with some dominant firms.
  • High concentration (HHI ≥ 2,500): Indicates a highly concentrated market with limited competition.
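The calculation and the thresholds above can be summarized in a few lines of Python; the market shares below are the illustrative four-firm example (30%, 30%, 20%, 20%) given earlier.

```python
# Minimal sketch of the HHI calculation and its interpretation thresholds.

def hhi(market_shares_pct: list[float]) -> float:
    """Sum of squared market shares, with shares in percent (so the HHI lies between 0 and 10,000)."""
    return sum(share ** 2 for share in market_shares_pct)

def concentration(index: float) -> str:
    if index < 1_500:
        return "low concentration"
    elif index < 2_500:
        return "moderate concentration"
    return "high concentration"

shares = [30, 30, 20, 20]  # the four-firm example from the text
index = hhi(shares)
print(index, "->", concentration(index))  # 2600 -> high concentration
```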

I built an Excel file to illustrate the three cases: low, moderate, and high concentration.

Low concentration: HHI < 1,500
 Low concentration (according to the HHI)
Source: the author.

Moderate concentration: 1,500 < HHI < 2,500
 Moderate concentration (according to the HHI)
Source: the author.

High concentration: HHI > 2,500
High concentration (according to the HHI)
Source: the author.

You can download below the Excel file for the data used to build the figure for the HH index.

Download the Excel file for the data used to build the figures for the HHI

Advanced Applications of the HHI

The HHI has several advanced applications in finance, economics, and regulatory frameworks. Some of these applications include:

  • Merger analysis: Regulatory bodies, such as the US Federal Trade Commission (FTC), use the HHI to assess the potential impact of mergers and acquisitions on market competition.
  • Industry analysis: Finance professionals use the HHI to analyze the competitive landscape of an industry and identify potential investment opportunities.
  • Antitrust policy: The HHI is used to inform antitrust policy and enforcement, helping to prevent anti-competitive practices and promote competition.
  • Market structure analysis: The HHI is used to analyze the market structure of an industry, including the number of firms, market shares, and barriers to entry.

Criticisms and Limitations of the HHI

While the HHI is a widely used and useful measure of market concentration, it has several criticisms and limitations. Some of these include:

  • Simplistic assumption: The HHI assumes that market shares are a good proxy for market power, which may not always be the case.
  • Ignorance of other factors: The HHI ignores other factors that can affect market competition, such as barriers to entry, product differentiation, and firm conduct.
  • Sensitive to market definition: The HHI is sensitive to the definition of the market, which can affect the calculation of market shares and the resulting HHI value.

Real-World Examples

US Airline Industry: The HHI for the US airline industry has increased significantly over the past two decades, indicating growing market concentration. According to a 2020 report by the US Government Accountability Office, the HHI for the US airline industry increased from 1,041 in 2000 to 2,041 in 2020.

US Technology Industry: The HHI for the US technology industry has also increased significantly over the past decade, indicating growing market concentration. According to a 2022 report by the US FTC, the HHI for the US technology industry increased from 1,500 in 2010 to 3,000 in 2020.

Recent Statistics and News

  • A 2021 FTC staff report on acquisitions by major technology firms highlighted a “systemic nature of their acquisition strategies,” indicating a clear trend toward market concentration as they frequently acquired startups and potential competitors.
  • A 2020 article by the American Enterprise Institute noted that while the HHI for the US airline industry had increased by 41% since the early 2000s, inflation-adjusted ticket prices had actually fallen.
  • In its 2019 antitrust lawsuit to block the T-Mobile and Sprint merger, the US Department of Justice argued the deal was “presumptively anticompetitive,” citing HHI calculations that showed the merger would substantially increase concentration in the mobile wireless market.
  • Recent studies have utilized the HHI to analyze hospital market concentrations. For example, research on New Jersey’s hospital markets revealed increasing consolidation, with several regions classified as “highly concentrated” based on HHI scores. This information is crucial for understanding the implications of market concentration on healthcare accessibility and pricing.

Regulatory Framework

The HHI is widely used by regulatory bodies around the world to assess market competition and concentration. In the US, the FTC and the Department of Justice use the HHI to evaluate mergers and acquisitions and to enforce antitrust laws. Similarly, in the European Union, the European Commission uses the HHI to assess market competition and concentration in various industries.

Conclusion

The Herfindahl-Hirschman Index remains a fundamental instrument for assessing market concentration and competition. Its applications have evolved across various sectors, providing valuable insights into market structures. However, practitioners should be mindful of its limitations and consider complementing the HHI with other analytical tools for a comprehensive market assessment.

Why should I be interested in this post?

The Herfindahl-Hirschman Index is a powerful tool for analyzing market structure and assessing competitive dynamics. As markets continue to evolve, the HHI will remain an essential tool for navigating the complexities of competition in the modern economy. So as business and finance students, it is necessary to know such an important index to keep up with the evolving world around us.

Related posts on the SimTrade blog

   ▶ Nithisha CHALLA Datastream

Useful resources

United States Department of Justice Herfindahl–Hirschman index

Eurostat Glossary:Herfindahl Hirschman Index (HHI)

United States Census Bureau Herfindahl–Hirschman index

Academic articles

Bach, G. D. (2020, March 18). Strong Competition Among US Airlines Before COVID-19 Pandemic. American Enterprise Institute.

Federal Trade Commission. (2021, September). FTC Staff Presents Report on Nearly a Decade of Unreported Acquisitions by the Biggest Technology Companies. Federal Trade Commission

United States Department of Justice. (2019, June 11). Complaint, United States of America et al. v. Deutsche Telekom AG et al. (Case 1:19-cv-01713). United States District Court for the District of Columbia

About the author

The article was written in October 2025 by Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management (MiM), 2021-2024).

Overview of US Treasuries

Nithisha CHALLA

In this article, Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management (MiM), 2021-2024) gives an overview of US Treasuries, their types, characteristics, and advanced applications.

Introduction

US Treasuries are a cornerstone of global financial markets, serving as a benchmark for risk-free investments and a safe-haven asset during times of economic uncertainty. As a finance professional, understanding the basics and intricacies of US Treasuries is essential for making informed investment decisions and navigating the complexities of global finance. In this article, we will provide a comprehensive overview of US Treasuries, covering the basics, types, characteristics, market structure, and advanced applications.

What are US Treasuries?

US Treasuries are debt securities issued by the US Department of the Treasury to finance government spending and pay off maturing debt. They are considered one of the safest investments globally, backed by the full faith and credit of the US government.

Types of US Treasuries

There are four main types of US Treasuries:

Treasury Bills (T-bills)

  • Short-term securities with maturities ranging from a few weeks to 52 weeks
  • Sold at a discount to face value, with the difference representing the interest earned.
  • Low risk, low return investment (low duration fixed-income securities)

Treasury Notes (T-Notes)

  • Medium-term securities with maturities ranging from 2 to 10 years
  • Sold at face value, with interest paid semi-annually
  • Moderate risk, moderate return investment (medium duration fixed-income securities)

Treasury Bonds (T-Bonds)

  • Long-term securities with maturities ranging from 10 to 30 years
  • Sold at face value, with interest paid semi-annually
  • Higher risk, higher return investment (high duration fixed-income securities)

Treasury Inflation-Protected Securities (TIPS)

  • Securities with principal and interest rates adjusted to reflect inflation
  • Designed to provide a hedge against inflation
  • Low risk, low return investment

Figure 1 below gives the Evolution of the Structure of U.S. Federal Debt by Security Type from 2005 to 2024.

Evolution of the Structure of U.S. Federal Debt by Security Type from 2005 to 2024
Evolution of the Structure of U.S. Federal Debt by Security Type from 2005 to 2024
Source: U.S. Department of Treasury

Figure 2 below gives the U.S. Federal Debt by Security Type on August 31, 2025.

U.S. Federal Debt by Security Type on August 31, 2025
US Federal Debt by Security Type on August 31, 2025
Source: U.S. Department of Treasury

Characteristics of US Treasuries

US Treasuries have several key characteristics, such as Risk-free status, Liquidity, Taxation, and Return characteristics.

Risk-free status: US Treasuries are considered one of the safest investments globally, backed by the full faith and credit of the US government.

Liquidity: US Treasuries are highly liquid, with a large and active market.

Taxation: Interest earned on US Treasuries is exempt from state and local taxes.

Return characteristics: US Treasuries offer a relatively low return compared to other investments, but provide a high degree of safety and liquidity.

Market Structure

The US Treasury market is one of the largest and most liquid markets globally, with a wide range of participants, including:

  • Primary dealers: Authorized dealers that participate in US Treasury auctions.
  • Investment banks: Firms that provide underwriting, trading, and advisory services.
  • Asset managers: Firms that manage investment portfolios on behalf of clients.
  • Central banks: Institutions that manage a country’s monetary policy and foreign exchange reserves.

Advanced Applications of US Treasuries

US Treasuries have several advanced applications, including the following (a short yield-curve sketch is given after the list):

  • Yield curve analysis: US Treasuries are used to construct the yield curve, which is a graphical representation of interest rates across different maturities.
  • Hedging strategies: US Treasuries are used to hedge against interest rate risk, inflation risk, and credit risk.
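As a minimal illustration of working with yield curve data, the sketch below linearly interpolates a yield for an intermediate maturity; the yield levels are hypothetical placeholders, not actual Treasury quotes.

```python
# Minimal sketch: linear interpolation along a Treasury yield curve (hypothetical yield levels).
maturities = [0.25, 1, 2, 5, 10, 30]          # maturities in years
yields_pct = [5.3, 4.9, 4.6, 4.4, 4.5, 4.7]   # illustrative yields, not actual Treasury data

def interpolated_yield(maturity: float) -> float:
    """Linearly interpolate the yield for a maturity lying between two quoted points."""
    for i in range(len(maturities) - 1):
        m0, m1 = maturities[i], maturities[i + 1]
        if m0 <= maturity <= m1:
            y0, y1 = yields_pct[i], yields_pct[i + 1]
            return y0 + (y1 - y0) * (maturity - m0) / (m1 - m0)
    raise ValueError("maturity outside the quoted range")

print(f"Interpolated 7-year yield: {interpolated_yield(7):.2f}%")  # 4.44%
```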

Figure 3 below gives the yield curve for US Treasuries on December 31, 2024.

Yield curve for US Treasuries (31/12/2024)
Yield curve for US Treasuries (31/12/2024)
Source: U.S. Department of Treasury

You can download below the Excel file for the data used to build the figure for the yield curve for US Treasuries.

Download the Excel file for the data used to build the figure for the yield curve for US Treasuries

Conclusion

US Treasuries are a fundamental component of global financial markets, offering a safe-haven asset and a benchmark for risk-free investments. By understanding the basics and intricacies of US Treasuries, finance professionals can make informed investment decisions and navigate the complexities of global finance.

Why should I be interested in this post?

Understanding US Treasuries is crucial for anyone pursuing a career in finance. These instruments provide insights into Monetary Policy, Fixed-Income Analysis, Portfolio Management, and Macroeconomic Indicators.

Related posts on the SimTrade blog

   ▶ Nithisha CHALLA Datastream

   ▶ Ziqian ZONG The Yield Curve

   ▶ Youssef LOURAOUI Interest rate term structure and yield curve calibration

   ▶ William ARRATA My experiences as Fixed Income portfolio manager then Asset Liability Manager at Banque de France

Useful resources

Treasury Direct Treasury Bonds

US Treasury Yield curve data

About the author

The article was written in October 2025 by Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management (MiM), 2021-2024).

The Art of a Stock Pitch: From Understanding a Company to Building a Coherent Logics

Dawn DENG

In this article, Dawn DENG (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Smith-ESSEC Double Degree Program, 2024-2026) offers a practical introduction to building a beginner-friendly stock pitch—from selecting a company you truly understand, to structuring the investment thesis, and translating logic into valuation. The goal is not to produce “perfect numbers,” but to make your reasoning coherent, transparent, and testable.

Why learn to do a stock pitch?

Learning to pitch a stock is learning to tell a story in financial language. Whether you are aiming at investment banking, asset management, or equity research roles—or competing in a student investment fund—the stock pitch is a core exercise that reveals both how you think and how you communicate. Within ten minutes, you must answer three questions: Who is this company? Why is it worth investing in? And how much is it worth? A strong pitch convinces not by breadth of information, but by reasoning that is consistent, evidence-based, and verifiable.

Choosing a company: balance understanding and interest

For beginners, picking the right company matters more than picking the right industry. Do not start by hunting the next “multibagger.” Start with a business you can truly explain: how it makes money, who its customers are, and what drives its costs. Familiar products and clear business models are your best teachers. I first learned how to build a stock pitch during my Investment Banking Preparatory Program at my home university, Queen’s Smith School of Business. The program was designed to train first- and second-year students in the fundamentals of financial modeling, valuation, and investment reasoning. In my first pitch, delivered to an audience of members of school investment clubs and the professor, I chose L3Harris Technologies (NYSE: LHX), a company working across defense communications and space systems. Its complexity pushed me to locate it precisely in the value chain: not a weapons maker, but a critical node in command-and-control. No valuation model can substitute for that kind of business understanding.

Industry analysis: space, structure, and cycle

The defense sector operates under multi-year budget cycles, long procurement timelines, and high barriers to entry. The market is dominated by five major U.S. contractors—Lockheed Martin, Northrop Grumman, General Dynamics, Raytheon, and L3Harris. While peers tend to focus on platform manufacturing, L3Harris differentiates itself through integrated communication and command systems, giving it recurring revenue and a lighter asset base. This focus positions the company at the intersection of AI-driven defense innovation and space-based data systems—a niche expected to grow rapidly as military operations become more network-centric.

Investment thesis: three key arguments

(1) Strategic Layer – “Why now”

The defense industry is entering a new digitalization cycle. L3Harris’s acquisition of Aerojet Rocketdyne expands its vertical integration into propulsion and guidance, while its strong exposure to secure communication networks aligns with rising defense budgets for AI and satellite modernization.

(2) Competitive Layer – “Why this company”

Compared to peers, L3Harris demonstrates strong operational efficiency and disciplined capital allocation. Its EBITDA margin of ~20% and R&D intensity near 4% of revenue outperform sector averages. Management has proven its ability to sustain synergy realization post-merger, reducing leverage faster than expected.

(3) Financial Layer – “Why it matters”

The company’s robust cash generation supports consistent dividend growth and share repurchases, signaling confidence and financial flexibility. Our base-case target price was USD 287, implying ~12% upside, supported by improving free cash flow yield and moderate multiple expansion.

Valuation: turn logic into numbers

Valuation quantifies your logic. At the beginner level, focus on two complementary methods: Relative Valuation and Absolute Valuation (DCF). The first tells you how markets price similar assets; the second estimates intrinsic value under your assumptions. Use them to cross-check each other.

Relative Valuation

We benchmarked L3Harris Technologies against major U.S. defense peers including Lockheed Martin, Northrop Grumman, and Raytheon Technologies, using EV/EBITDA and P/E multiples as our key comparative metrics. Peers traded at around 14–16× EV/EBITDA, consistent with the industry’s steady cash-flow profile. However, given L3Harris’s stronger growth visibility, improving free cash flow, and synergies expected from the Aerojet Rocketdyne acquisition, we assigned a justified multiple of 17× EV/EBITDA—positioning it slightly above the sector average. This premium reflects not only its operational efficiency but also its role in the ongoing digital transformation of defense communications and space systems.
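As a minimal sketch of how a justified multiple translates into a value per share, the code below works through the EV/EBITDA logic; the EBITDA, net debt, and share count are purely hypothetical placeholders, not L3Harris figures.

```python
# Minimal sketch of an EV/EBITDA relative valuation (all inputs are hypothetical placeholders).

def value_per_share(ebitda: float, ev_ebitda_multiple: float,
                    net_debt: float, shares_outstanding: float) -> float:
    enterprise_value = ebitda * ev_ebitda_multiple  # EV implied by the justified multiple
    equity_value = enterprise_value - net_debt      # bridge from enterprise value to equity value
    return equity_value / shares_outstanding

# Hypothetical inputs, for illustration only (EBITDA and net debt in millions)
target = value_per_share(ebitda=4_000, ev_ebitda_multiple=17,
                         net_debt=11_000, shares_outstanding=190)
print(f"Implied value per share: {target:.2f}")  # 300.00
```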

Absolute Valuation (Discounted Cash Flow)

DCF values the business as the present value of future free cash flows. Build operational drivers in business terms (volume/price, mix, scale effects), then translate into FCF:
FCF = EBIT × (1 – tax rate) + D&A – CapEx – ΔWorking Capital. Choose a WACC consistent with long-term capital structure (equity via CAPM; debt via yield or recent financing, after tax). For terminal value, use a perpetual growth rate aligned with nominal GDP and industry logic, or an exit multiple consistent with your relative valuation. Present a range via sensitivity (WACC, terminal growth, margins, CapEx) rather than a single precise point. Where DCF and multiples converge, your target price gains credibility; where they diverge, explain the source—cycle position, peer distortions, or different long-term assumptions.
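The mechanics described above can be sketched in a few lines of Python; every input below (the cash-flow path, WACC, and terminal growth) is a hypothetical placeholder, and the goal is only to show how the explicit period, terminal value, and sensitivity range fit together.

```python
# Minimal DCF sketch (all inputs are hypothetical placeholders, not a valuation of any company).

def dcf_value(free_cash_flows: list[float], wacc: float, terminal_growth: float) -> float:
    """Present value of the explicit-period FCFs plus a Gordon-growth terminal value."""
    pv_explicit = sum(fcf / (1 + wacc) ** t for t, fcf in enumerate(free_cash_flows, start=1))
    terminal_value = free_cash_flows[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv_terminal = terminal_value / (1 + wacc) ** len(free_cash_flows)
    return pv_explicit + pv_terminal

fcfs = [2_000, 2_150, 2_300, 2_450, 2_600]  # 5-year unlevered FCF path (hypothetical, in millions)

# Present a range via sensitivity to WACC and terminal growth rather than a single point
for wacc in (0.07, 0.08, 0.09):
    for g in (0.02, 0.025):
        print(f"WACC={wacc:.1%}, g={g:.1%}: enterprise value = {dcf_value(fcfs, wacc, g):,.0f}")
```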

Risks and catalysts: define uncertainty

Every pitch must face uncertainty head-on. Map the fragile links in your logic—macro and policy (rates, budgets, regulation), competition and disruption (new entrants, technology shifts), execution and governance (integration, capacity ramp-up, incentives). Then specify catalysts and timing windows: earnings and guidance, major contracts, launches or pricing moves, structural margin inflections, M&A progress, or regulatory milestones. Make it explicit what would validate your thesis and when you would reassess.

Related posts on the SimTrade blog

   ▶ Cornelius HEINTZE Two-Stage Valuation Method: Challenges

   ▶ Andrea ALOSCARI Valuation Methods

   ▶ Jorge KARAM DIB Multiples Valuation Method for Stocks

Useful resources

Mergers & Inquisitions How to Write a Stock Pitch

Training You Stock Pitch en Finance de Marché : définition et méthode

Harvard Business School Understanding the Discounted Cash Flow (DCF) Method

Corporate Finance Institute Types of Valuation Multiples and How to Use Them

About the author

The article was written in October 2025 by Dawn DENG (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Smith-ESSEC Double Degree Program, 2024-2026).

Assessing a Company’s Creditworthiness: Understanding the 5C Framework and Its Practical Applications

Dawn DENG

In this article, Dawn DENG (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Smith-ESSEC Double Degree Program, 2024-2026) presents a practical framework for assessing a company’s creditworthiness. The analysis integrates both financial and non-financial dimensions of trust, using the classic 5C framework widely adopted in banking and corporate finance.

Why assess creditworthiness

In corporate finance, assessing a company’s creditworthiness lies at the heart of lending, underwriting, and risk management. For banks, it is not only a “yes/no” lending decision (it also determines the level of the interest rate proposed to the client); it is a structured way to understand repayment capacity, operating quality, and long-term sustainability. The goal is not to label a company as “good” or “bad,” but to answer three questions: Can it repay? Will it repay? If not, how much can be recovered?

The five pillars of credit analysis: the 5C framework

The 5C framework, an industry standard that crystallized over decades of banking practice and supervisory guidance, assesses five core dimensions: Character, Capacity, Capital, Collateral, and Conditions. Rather than originating from a single author or institution, it emerged progressively across lenders’ credit manuals, central-bank training, and regulator handbooks, and is now embedded in banks’ risk-rating and loan-pricing models. These components are interdependent: strength in one area can mitigate weaknesses in another, while vulnerabilities may compound when several Cs deteriorate at the same time.

The five pillars of credit analysis: the 5C framework.
The five pillars of credit analysis: the 5C framework
Source: the author.

Character: reputation and track record

Character covers the firm’s reputation and willingness to honor obligations. Analysts review borrowing history, repayment behavior, disclosure practices, management integrity, and banking relationships. A consistent record of timely payments and transparent reporting typically earns a stronger credit score.

For example, a mid-sized manufacturer that consistently meets payment deadlines and maintains transparent reporting will typically be viewed as a low-risk borrower, even if its margins are moderate.

Capacity: ability to repay

Capacity assesses whether operating cash flow can service debt on time. Core indicators include interest coverage (EBIT/Interest), the Debt Service Coverage Ratio (DSCR), and liquidity ratios (current, quick, and cash ratios). As a rule of thumb, an interest coverage below 2× or a DSCR below 1.0× often signals liquidity pressure.

For example, in 2023, several property developers in China exhibited DSCR levels below 1.0 amid declining sales, illustrating how even profitable firms can face repayment stress when cash inflows weaken.
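These indicators are straightforward to compute once the financial statements are in hand; the figures below are hypothetical and only illustrate the rule-of-thumb thresholds mentioned above.

```python
# Minimal sketch of the capacity ratios discussed above (hypothetical figures, in millions).
ebit = 180.0
interest_expense = 100.0
cash_available_for_debt_service = 240.0
total_debt_service = 260.0              # interest + scheduled principal repayments
current_assets, current_liabilities = 520.0, 480.0

interest_coverage = ebit / interest_expense                  # rule of thumb: below 2x is a warning sign
dscr = cash_available_for_debt_service / total_debt_service  # rule of thumb: below 1.0x signals pressure
current_ratio = current_assets / current_liabilities

print(f"Interest coverage: {interest_coverage:.2f}x")  # 1.80x -> below the 2x comfort level
print(f"DSCR:              {dscr:.2f}x")               # 0.92x -> repayment pressure
print(f"Current ratio:     {current_ratio:.2f}")       # 1.08
```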

Capital: structure and leverage

Capital reflects how the company balances debt and equity. Key metrics are Debt-to-Equity, Debt-to-Assets, and Net Debt/EBITDA. Higher leverage raises financial risk, but acceptable ranges are industry-specific: capital-intensive sectors may tolerate 2–3× EBITDA, while asset-light tech/retail often sit closer to 0.5–1.5×.

A practical example: L3Harris Technologies, a U.S. defense contractor, maintains moderate leverage with strong cash conversion, reinforcing its credit profile despite large-scale acquisitions.

Collateral: security and guarantees

Collateral is the lender’s safety net. Recoveries depend on the value and liquidity of pledged assets (property, receivables, equipment). Asset-light firms lack hard collateral and thus rely more on cash-flow quality and relationship history to mitigate risk.

Asset-light companies (e.g., software, consulting) rely more on cash flow and relationship capital rather than tangible assets, making consistent performance crucial to maintaining credit access.

Conditions: macro and industry context

Conditions cover both external factors (interest rates, regulations, economic cycles) and loan-specific purposes.

During tightening monetary cycles, higher financing costs can compress margins, while in recessionary or trade-sensitive sectors, declining demand directly raises default risk. For example, during 2022’s rate hikes, small exporters with floating-rate debt experienced significant declines in credit ratings due to rising interest expenses.

Financial perspective: reading credit signals in the statements

Effective credit analysis connects the three statements: the income statement (profitability), balance sheet (capital structure and asset quality), and cash flow statement (true repayment capacity).

Income statement: focus on revenue stability, margin trends, and the weight of non-recurring items. Persistent declines in gross or operating margins may indicate weakening competitiveness.

Balance sheet: examine asset quality and liability mix. High receivables or inventory build-ups can flag liquidity strain; heavy short-term debt raises refinancing risk.

Cash flow statement: the practical health check. Sustainable, positive operating cash flow that covers interest and capex signals solvency; strong accounting profits with chronically negative cash flow suggest poor earnings quality.

Useful cross-checks include Operating Cash Flow/Total Debt (coverage of principal from operations) and the persistence of negative free cash flow funded by external capital (a sign of structural vulnerability).
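The short sketch below illustrates both cross-checks with hypothetical multi-year figures (illustrative only): the coverage of total debt by operating cash flow, and whether free cash flow has been persistently negative.

```python
# Minimal sketch of the two cross-checks, with hypothetical multi-year figures (illustrative only).

operating_cash_flow = [60.0, 75.0, 80.0]   # last three years, in EUR m
capex = [90.0, 95.0, 85.0]
total_debt = 600.0                         # most recent year

# Coverage of total debt by operations (most recent year)
ocf_to_total_debt = operating_cash_flow[-1] / total_debt
print(f"Operating Cash Flow / Total Debt: {ocf_to_total_debt:.2f}")

# Persistence of negative free cash flow (OCF - capex) over the period
free_cash_flow = [ocf - cx for ocf, cx in zip(operating_cash_flow, capex)]
if all(fcf < 0 for fcf in free_cash_flow):
    print("Chronically negative free cash flow: structural vulnerability if funded by external capital")
```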

Beyond numbers: governance, transparency, and relationship capital

Creditworthiness extends beyond ratios. Governance quality, reporting transparency, competitive barriers, and banking relationships shape real-world risk. Policy-sensitive sectors (e.g., energy, real estate) exhibit higher cyclicality; tech and retail hinge on stable cash generation and customer retention. Stable leadership, prudent accounting, and timely disclosures build lender confidence. Long-standing cooperation and on-time performance often translate into better terms, a compounding of “relationship capital.”

At its core, credit is a form of deferred trust: banks lend to future behaviors and cash flows. Whether a firm deserves that trust depends on how it balances transparency, responsibility, and disciplined execution.

Conclusion

Credit analysis is not merely about numbers; it is about understanding how financial structure, behavioral consistency, and institutional trust interact. The 5C framework provides a structured map, yet effective analysts also recognize the fluid connections among its components: good character supports capital access, strong capacity reinforces collateral confidence, and favorable conditions amplify all others. Assessing creditworthiness is thus the art of finding order amid uncertainty, of determining whether a company can remain stable when markets turn turbulent.

Related posts on the SimTrade blog

About credit risk

   ▶ Jayati WALIA Credit risk

   ▶ Jayati WALIA Quantitative risk management

   ▶ Bijal GANDHI Credit Rating

About professional experiences

   ▶ Snehasish CHINARA My Apprenticeship Experience as Customer Finance & Credit Risk Analyst at Airbus

   ▶ Jayati WALIA My experience as a credit analyst at Amundi Asset Management

   ▶ Aamey MEHTA My experience as a credit analyst at Wells Fargo

Useful resources

Allianz Trade Determining Customer Creditworthiness

Emagia blog Assessing a Company’s Creditworthiness

About the author

The article was written in October 2025 by Dawn DENG (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Smith-ESSEC Double Degree Program, 2024-2026).

The Two-Stage Valuation Method and its challenges

Cornelius HEINTZE

In this article, Cornelius HEINTZE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025) explains how the two-stage valuation model and its segmentation into a growth phase and a stable phase affect the valuation of companies, and which problems tend to arise with the use of this model.

Why this is important

The valuation of companies is always present in the world of finance. We see it in Mergers and Acquisitions (M&A), initial public offerings (IPOs) and daily stock market pricing where firms are valued within seconds based on new information. For markets to function properly, valuations need to represent the underlying company as precisely as possible. Otherwise, information asymmetries increase, leading to inefficient or even dysfunctional markets.

The Two-Stage Model

The Two-Stage Model is a traditional valuation model used by finance practitioners across the world. What makes it stand out is the segmentation of the valuation into two phases:

  • Growth phase (explicit forecast period): In this phase, the company’s future cash flows are projected in detail for each year t = 1 … T. These cash flows are then discounted back to the valuation date using the discount rate r:

    PV(Growth phase) = Σ_{t=1…T} CF_t / (1 + r)^t

  • Stable phase (terminal value): After the explicit forecast horizon, the company is assumed to enter a stable stage. Two assumptions underpin this stage and its equations: first, the company is assumed to realize its cash flows over an indefinite time span; second, the perpetual growth rate g is assumed not to exceed the growth rate of the economy as a whole. The two common resulting equations are:
    • No growth (steady state):
      PV(Stable phase) = CF_stable / (r * (1 + r)^T)

    • Constant growth in perpetuity:
      PV(Stable phase) = CF_{T+1} / ((r − g) * (1 + r)^T)

Total firm value is then the sum of both parts:

Value = PV(Growth phase) + PV(Stable phase)
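As an illustration, the minimal Python sketch below implements these formulas: it discounts the explicit cash flows of the growth phase and adds the present value of the stable phase, with the no-growth case obtained by setting g = 0. The function and variable names are chosen for illustration only.

```python
# Minimal sketch of the two-stage valuation formulas above (names are illustrative).

def two_stage_value(cash_flows, r, cf_stable, g=0.0):
    """Return (PV of growth phase, PV of stable phase, total value).

    cash_flows : explicit forecast CF_1 ... CF_T (growth phase)
    r          : discount rate
    cf_stable  : cash flow used in the stable phase (CF_stable or CF_{T+1})
    g          : perpetual growth rate (0 for the no-growth case), with g < r
    """
    T = len(cash_flows)
    pv_growth = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))
    pv_stable = cf_stable / ((r - g) * (1 + r) ** T)
    return pv_growth, pv_stable, pv_growth + pv_stable

# Example usage with the inputs of the worked example in the next section:
pv_growth, pv_stable, total = two_stage_value([80, 90, 95, 98, 100], r=0.10, cf_stable=100, g=0.02)
print(round(pv_growth, 2), round(pv_stable, 2), round(total, 2))  # 347.51 776.15 1123.66
```

Note that setting g = 0 gives back the no-growth formula CF_stable / (r * (1 + r)^T), since (r − g) then reduces to r.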

Problems with the Two-Stage Model

If we look closer at the equations for the stable phase you will realize that they show a perpetuity. Looking at the assumptions given, this is also the only possible outcome. But given this circumstance we encounter the first big problem of the Two-Stage Model: the stable phase often makes up over 50% of the firm value. This is a problem as the assumptions for the stable phase are often very subjective and not very realistic. The problem evolves even more when it is assumed that there is a constant growth rate. Let’s look at this through an example:

Assumptions: discount rate r = 10%, explicit forecast over T = 5 years with free cash flows (in €m): 80, 90, 95, 98, 100. After year 5, we consider two terminal cases.

Phase 1 – Present value of explicit cash flows

  • Year 1: 80 / (1.10)^1 = 72.73
  • Year 2: 90 / (1.10)^2 = 74.38
  • Year 3: 95 / (1.10)^3 = 71.37
  • Year 4: 98 / (1.10)^4 = 66.94
  • Year 5: 100 / (1.10)^5 = 62.09

PV(Phase 1) ≈ 347.51 (€m)

Phase 2 – Stable phase

  • (a) No growth: CF_stable = 100 ⇒ terminal value at t = 5
    PV(Terminal) = 100 / (0.10 * (1.10)^5) = 620.92

  • (b) Constant growth g = 2%: CF_{T+1} = 100 ⇒ terminal value at t = 5
    PV(Terminal) = 100 / ((0.10 − 0.02) * (1.10)^5) = 776.15

Total value and weights

  • No growth: Total = 347.51 + 620.92 = 968.43 ⇒ Stable Phase share ≈ 64.1%, Phase-1 share ≈ 35.9%
  • g = 2%: Total = 347.51 + 776.15 = 1,123.66 ⇒ Stable Phase share ≈ 69.1%, Phase-1 share ≈ 30.9%
  • Impact of growth: Increase in the firm value of 155.23 or ≈ 16%

Takeaway: A modest increase in the perpetual growth rate from 0% to 2% raises the terminal present value by ~155 (€m) and lifts its weight from ~64% to ~69% of total value. This illustrates the strong sensitivity of the two-stage model to terminal assumptions.
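A quick way to explore this sensitivity, complementary to the Excel file mentioned below, is the short Python sketch that follows: it recomputes the stable-phase present value and its weight in total value for several growth rates, using the same assumptions as the example above.

```python
# Minimal sensitivity sketch: weight of the stable phase for different growth rates g
# (same assumptions as above: r = 10%, explicit CFs 80, 90, 95, 98, 100, stable-phase CF = 100).

r = 0.10
cash_flows = [80, 90, 95, 98, 100]
cf_next = 100.0
T = len(cash_flows)

pv_growth = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))

for g in (0.00, 0.01, 0.02, 0.03):
    pv_stable = cf_next / ((r - g) * (1 + r) ** T)
    total = pv_growth + pv_stable
    print(f"g = {g:.0%}: total value = {total:,.2f}, stable-phase share = {pv_stable / total:.1%}")
```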

If you want to try this out yourself and explore the sensitivity of the firm value to the growth rate, you can do so in the Excel file I created for this example, shown below:

Two-Stage Model Example 1

Another interesting pattern becomes visible when experimenting with the model, one that is common for early-stage tech startups or, more generally, startups with very high early investment costs (for example, software development). Such companies show a negative present value for the growth phase, but the stable phase assumes constant growth and positive cash flows in the long run, which more than offsets the negative growth phase. This again shows how much of an impact the stable phase and the growth phase each have on the firm value.

Two-Stage Model Example Startup

You can download the excel file here:

Download the Excel file for Two-Stage-Model Analysis

Implications for practical use and solutions

As seen in the example, the stable phase, and therefore the assumptions about the cash flows and about whether a perpetual growth rate is appropriate for the company, plays a major role in the valuation of the firm. These assumptions are set by the firm performing the valuation, or by the company valuing itself. They are therefore highly subjective and must be transparent at all times to ensure an appropriate valuation. If this is not the case, firms can be valued far above what is appropriate and thereby convey false information.

To mitigate this, it is recommended to use several valuation methods in parallel to verify that the result is neither too high nor too low but falls within a plausible range of values. This cross-check is often part of a fairness opinion issued by an independent party. You can see an example in the fairness opinion that Morgan Stanley drafted for Monsanto in the context of the merger with Bayer:

Full SEC Statement for the merger

To sum up…

The Two-Stage Valuation Model remains a cornerstone in corporate finance because of its simplicity and structured approach. However, as the example shows, the stable phase dominates the overall result and makes valuation highly sensitive to small changes in assumptions. In practice, analysts and other users of the information provided by the valuing company should therefore apply the model with caution, test alternative scenarios, and complement it with other methods. Looking ahead, the combination of traditional models with advanced techniques such as multi-stage models, sensitivity analyses, or even simulation approaches can provide a more balanced and reliable picture of a company’s value.

Why should I be interested in this post?

Whether you are a student of finance, an investor, or simply curious about how firms are valued, understanding the Two-Stage Valuation Model is essential. It is one of the most widely used approaches in practice and often determines the prices we see in the markets, from IPOs to M&A. By being aware of both its strengths and its limitations, you can better interpret valuation results and make more informed financial decisions.

Related posts on the SimTrade blog

   ▶ All posts about financial techniques

   ▶ Jorge KARAM DIB Multiples valuation methods

   ▶ Andrea ALOSCARI Valuation methods

   ▶ Samuel BRAL Valuing the Delisting of Best World International Using DCF Modeling

Useful resources

Paul Pignataro (2022) “Financial modeling and valuation: a practical guide to investment banking and private equity” Wiley, Second edition.

Tim Koller, Marc Goedhart, David Wessels (2010) “Valuation: Measuring and Managing the Value of Companies”, McKinsey and Company.

Fairness Opinion Example

About the author

The article was written in October 2025 by Cornelius HEINTZE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025).