Understanding WACC: a student-friendly guide

Daniel LEE

In this article, Daniel LEE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – 2023-2027) explains the Weighted Average Cost of Capital (WACC).

Introduction

The Weighted Average Cost of Capital (WACC) is one of the most important concepts in corporate finance and valuation. I know that for some students, it feels abstract or overly technical. In reality, WACC is simpler than it seems.

Whether you are building a DCF, making an investment decision or assessing long-term value creation, understanding WACC is essential to interpret the financial world. In a DCF, WACC is used as the discount rate applied to free cash flows (FCF). A higher WACC lowers the present value of future cash flows, whereas a lower WACC increases the firm value. That is why WACC is a benchmark for value creation.

What is the cost of capital?

Every company needs funding to operate, which comes from two main sources: debt and equity. Debt is provided by banks or bondholders and equity is provided by shareholders. Both expect to be compensated for the risk they take. Shareholders typically require a higher return because they bear greater risk, as they are paid only after all other obligations have been met. In contrast, debt investors mainly expect regular interest payments and face lower risk because they are paid before shareholders in case of financial difficulty. The cost of capital represents the return required by each group of investors, and the Weighted Average Cost of Capital (WACC) combines these required returns into a single percentage.

Breaking down the WACC formula

WACC is calculated with this formula:

Formula for the WACC:

WACC = [E / (E + D)] × rE + [D / (E + D)] × rD × (1 − T)

where E is the market value of equity, D the market value of debt, rE the cost of equity, rD the pre-tax cost of debt and T the corporate tax rate.
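As an illustration, the computation can be sketched in a few lines of Python (the inputs below are made-up figures, not from any specific company):

```python
def wacc(equity: float, debt: float, cost_of_equity: float,
         cost_of_debt: float, tax_rate: float) -> float:
    """Weighted Average Cost of Capital.

    Weights are the shares of equity and debt in total capital; the
    cost of debt is taken after tax because interest is deductible.
    """
    total = equity + debt
    w_equity = equity / total   # E / (E + D)
    w_debt = debt / total       # D / (E + D)
    return w_equity * cost_of_equity + w_debt * cost_of_debt * (1 - tax_rate)

# Illustrative inputs: 60% equity at 10%, 40% debt at 5% pre-tax, 25% tax
print(round(wacc(60, 40, 0.10, 0.05, 0.25), 4))  # 0.075
```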

To gather these elements, we use several methods such as:

Cost of Equity: CAPM model

Cost of equity = Risk-free rate + β (Expected market return – Risk-free rate)
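This formula can be sketched directly in Python (the risk-free rate, beta and expected market return below are illustrative assumptions, not market data):

```python
def capm_cost_of_equity(risk_free: float, beta: float,
                        expected_market_return: float) -> float:
    """CAPM: cost of equity = rf + beta * (E[rm] - rf)."""
    return risk_free + beta * (expected_market_return - risk_free)

# Illustrative inputs: rf = 3%, beta = 1.2, expected market return = 8%
print(round(capm_cost_of_equity(0.03, 1.2, 0.08), 4))  # 0.09
```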

Beta measures how sensitive a company’s returns are to movements in the overall market. It captures systematic risk, meaning the risk that cannot be eliminated through diversification. A beta above 1 indicates that the firm is more volatile than the market, while a beta below 1 means it is less sensitive to market changes.

It is important to distinguish between unlevered beta and levered beta. The unlevered beta reflects only the risk of the firm’s underlying business activities, assuming the company has no debt. It represents the pure business risk of the firm and is especially useful when comparing companies within the same industry, as it removes the effect of different financing choices. This is why analysts often unlever betas from comparable firms and then relever them to match a target capital structure.

The levered beta, on the other hand, includes both business risk and financial risk created by the use of debt. When a company takes on more debt, shareholders face greater risk because interest payments must be made regardless of the firm’s performance. This increases the volatility of equity returns, leading to a higher levered beta and a higher cost of equity.
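In practice, the unlever/relever step described above is often done with the Hamada relationship, βU = βL / (1 + (1 − T) × D/E). A sketch under that convention, with illustrative figures:

```python
def unlever_beta(levered_beta: float, debt_to_equity: float,
                 tax_rate: float) -> float:
    """Strip out financial risk: beta_U = beta_L / (1 + (1 - T) * D/E)."""
    return levered_beta / (1 + (1 - tax_rate) * debt_to_equity)

def relever_beta(unlevered_beta: float, debt_to_equity: float,
                 tax_rate: float) -> float:
    """Add back financial risk at a target capital structure:
    beta_L = beta_U * (1 + (1 - T) * D/E)."""
    return unlevered_beta * (1 + (1 - tax_rate) * debt_to_equity)

# A comparable firm: levered beta 1.3, D/E of 0.5, tax rate 25%
pure_business_beta = unlever_beta(1.3, 0.5, 0.25)
# Relever at a target D/E of 0.8 -> higher beta, higher cost of equity
target_beta = relever_beta(pure_business_beta, 0.8, 0.25)
```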

The risk-free rate represents the return investors can earn without taking any risk and is usually approximated by long-term government bond yields. It acts as the baseline return in the CAPM, since investors will only accept risky investments if they offer a return above this rate. Choosing the correct risk-free rate is important: it should match the currency and the time horizon of the cash flows. Changes in the risk-free rate have a direct impact on the cost of equity and, therefore, on firm valuation.

Cost of Debt

Interest payments are tax-deductible, which is why the (1 − T) factor appears in the formula. For example, if a company pays 5% interest annually and the corporate tax rate is 30%, then the after-tax cost of debt is 5% × (1 − 0.30) = 3.5%.
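The same adjustment in a few lines of Python, reproducing the example above:

```python
pre_tax_cost_of_debt = 0.05   # 5% annual interest
tax_rate = 0.30               # 30% corporate tax rate

# Interest is tax-deductible, so the effective cost is reduced by (1 - T)
after_tax_cost_of_debt = pre_tax_cost_of_debt * (1 - tax_rate)
print(round(after_tax_cost_of_debt, 4))  # 0.035
```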

Capital Structure Weights

The weights Equity/(Equity+Debt) and Debt/(Equity+Debt) represent the proportions of equity and debt in the company’s capital structure. One might assume that a firm with more debt will have a lower WACC because debt is cheaper, but too much debt is risky. That is why this balance is very important for valuation, and why you usually use a “target capitalization”. The target capitalization is an assumption about the level of debt and equity that the company is expected to have in the long term, rather than the current one.

Understanding risk through the WACC

WACC is a measure of risk. A higher WACC means the company is riskier and a lower WACC means it’s safer.

WACC is also closely linked to a firm’s ability to create value. If ROIC > WACC, the company creates value; if ROIC < WACC, the company destroys value. This rule is widely used by CFOs and investors to make decisions.
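This rule is often expressed as economic profit, (ROIC − WACC) × invested capital. A small sketch with illustrative numbers:

```python
def economic_profit(roic: float, wacc: float, invested_capital: float) -> float:
    """Positive when ROIC > WACC (value created), negative when ROIC < WACC."""
    return (roic - wacc) * invested_capital

print(round(economic_profit(0.12, 0.08, 1_000), 2))  # 40.0  -> value created
print(round(economic_profit(0.06, 0.08, 1_000), 2))  # -20.0 -> value destroyed
```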

How is WACC used in practice?

  • WACC is the discount rate applied to FCF in the DCF: a lower WACC means a higher valuation, a higher WACC a lower valuation
  • As said before, it helps to assess value creation and compute NPV
  • Assessing capital structure: it helps to find the optimal balance between debt and equity
  • Comparing companies: looking at similar companies in the same industry is a good preliminary step, and their WACCs will tell you a lot about their risk

Example

To illustrate how the WACC formula is used in practice, let us take the DCF valuation for Alstom that I made recently. In this valuation, WACC is used as the discount rate to convert future free cash flows into present value.

Alstom’s capital structure is defined using a target capitalization, chosen based on the industry and the comps. Equity represents 75% of total capital and debt 25%. The cost of equity is estimated using the CAPM. Based on the base-case assumptions, Alstom has a levered beta that reflects both its industrial business risk and its use of debt. Combined with a risk-free rate and an equity risk premium, this leads to a cost of equity of 8.3%.

The cost of debt is estimated using Alstom’s borrowing conditions. Alstom pays an average interest rate of 4.12% on its debt. Since interest expenses are tax-deductible, we adjust for taxes. With a corporate tax rate of 25.8%, the after-tax cost of debt is:

4.12% × (1 − 0.258) ≈ 3.05%

We can now compute the WACC:

WACC = 75% × 8.3% + 25% × 3.05% ≈ 6.98%

This WACC represents the minimum return Alstom must generate on its invested capital to satisfy both shareholders and lenders. In the DCF, this rate is applied to discount future free cash flows. A higher WACC would reduce Alstom’s valuation, while a lower WACC would increase it, highlighting how sensitive valuations are to financing assumptions.
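The computation above can be reproduced in a few lines of Python (tiny rounding differences in the last decimal are expected):

```python
cost_of_equity = 0.083            # CAPM, base case
pre_tax_cost_of_debt = 0.0412
tax_rate = 0.258
w_equity, w_debt = 0.75, 0.25     # target capitalization

after_tax_cost_of_debt = pre_tax_cost_of_debt * (1 - tax_rate)
wacc = w_equity * cost_of_equity + w_debt * after_tax_cost_of_debt
print(f"{wacc:.2%}")              # about 7%
```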

Conclusion

To conclude, WACC may look a bit complicated, but it represents a simple idea: the company must generate enough to reward its investors for the risk they take. Understanding WACC allows you to interpret valuations, understand how capital structure influences risk and compare businesses across industries. Once you master WACC, it becomes one of the best tools to sharpen your intuition about risk and valuation.

Related posts on the SimTrade blog

   ▶ Snehasish CHINARA Academic perspectives on optimal debt structure and bankruptcy costs

   ▶ Snehasish CHINARA Optimal capital structure with corporate and personal taxes: Miller 1977

   ▶ Snehasish CHINARA Optimal capital structure with no taxes: Modigliani and Miller 1958

Useful resources

Damodaran, A. (2001) Corporate Finance: Theory and Practice. 2nd edn. New York: John Wiley & Sons.

Modigliani, F., M.H. Miller (1958) The Cost of Capital, Corporation Finance and the Theory of Investment, American Economic Review, 48(3), 261-297.

Modigliani, F., M.H. Miller (1963) Corporate Income Taxes and the Cost of Capital: A Correction, American Economic Review, 53(3), 433-443.

Vernimmen, P., Quiry, P. and Le Fur, Y. (2022) Corporate Finance: Theory and Practice, 6th Edition. Hoboken, NJ: Wiley.

About the author

The article was written in January 2026 by Daniel LEE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – 2023-2027).

   ▶ Read all articles by Daniel LEE.

Crypto ETP

Alberto BORGIA

In this article, Alberto BORGIA (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Exchange student, Fall 2025) presents Exchange-Traded Products (ETPs) on crypto.

Introduction

An Exchange-Traded Product (ETP) is a type of regulated financial instrument, which is traded on stock exchanges and allows exposure to the price movements of an underlying asset or a benchmark without requiring direct ownership of the asset.

Crypto ETPs are instruments that provide regulated access to all market participants. Since their inception, they have become the main access point for traditional investors seeking exposure to digital assets. The value of assets in this category continues to grow every year, and in their latest report, 21Shares analysts expect these assets to surpass $400 billion globally by 2026.

The figure shows how rapidly crypto ETPs have scaled from early 2024 to late 2025. Assets under management (blue area) rise in successive waves, moving from roughly the tens of billions to just under $300B by late October 2025, while cumulative net inflows (yellow line) trend steadily upward toward ~$100B, signaling that growth has been supported by persistent new capital in addition to market performance.

As regulated access expands through mainstream distribution channels and more jurisdictions formalize frameworks for crypto investment vehicles, ETPs increasingly become the default wrapper for exposure. As the market deepens, secondary-market liquidity typically improves and execution costs compress, reducing short-term dislocations around the product and reinforcing further allocations.

Crypto ETP Asset under Management (AUM)
Crypto ETP AUM
Source: 21Shares.

This trend is driven not only by retail demand, but also by an increasing openness of traditional markets toward these types of products: established exchanges, broker-dealers, custodians and market-makers are increasingly willing to list, distribute and support crypto-linked ETPs within the same governance, disclosure and risk-management frameworks used for other exchange-traded instruments. In the US, structural barriers are being removed thanks to new approval processes for crypto investment vehicles, as regulators and exchanges move toward clearer, more standardized filing and review pathways and more predictable disclosure expectations.

By the end of 2025, more than 120 ETP applications were pending review in the USA, under assessment by the SEC and, where relevant, by the national securities exchanges seeking to list these products, positioning the market for significant inflows beyond Bitcoin and Ethereum in the new year.

We see this trend in other countries as well: the UK has removed the ban for retail investors, Luxembourg’s sovereign fund has invested as much as 1% of its portfolio in Bitcoin ETPs, while countries such as the Czech Republic and Pakistan have even started using such assets for national reserves. In Asia and Latin America, regulatory frameworks are also being formed, making crypto ETPs the global standard for regulated access.

This will lead to a virtuous cycle that will attract more and more capital: AUM growth enables a reduction in spreads, volatility decreases and liquidity increases, improving price efficiency and execution quality and reducing short-term dislocations, thereby supporting the growth of the asset class.

ETP or ETF?

An Exchange-Traded Product is a broad category of regulated instruments that give investors transparent, tradable exposure to an underlying asset, index or a strategy. An Exchange-Traded Fund is a specific type of ETP that is legally structured as an investment fund, typically holding the underlying assets and calculating a net asset value. The key difference is therefore the legal form and the risk profile: ETFs are fund vehicles with segregated assets held for investors, whereas many non-ETF ETPs (such as ETNs) are debt instruments whose performance can also depend on the issuer’s creditworthiness. So, all ETFs are ETPs, but not all ETPs are ETFs.

Structure

There are two methods for replicating the underlying: physical and synthetic. Physical ETPs are created through the purchase and holding of the asset by the issuing entity, allowing replication directly linked to the performance of the underlying. Synthetic ETPs, by contrast, are built on a swap contract with a counterparty, for example a bank, which commits to deliver the return of that asset. To secure this daily return, the counterparty is required to post liquid collateral with the issuer, and the amount of this collateral fluctuates with the value of the underlying asset and its volatility profile. Based on Vanguard’s discussion of physical versus synthetic ETF structures, and on industry evidence showing that physical replication dominates European ETF AUM, investors have in recent years generally preferred physical ETPs for their transparency, absence of swap counterparty risk and relative simplicity. For crypto in particular, given the simplicity of holding the asset and its liquidity, almost all crypto ETPs are physical.

For this reason, when you purchase this type of financial asset, you do not directly own the physical cryptocurrency (the underlying), but rather a debt security of the issuer, backed by the crypto and protected through the relationship with the trustee. The trustee’s task is to represent the interests of investors, receiving all rights over the physical assets that collateralize the ETP. It therefore acts as an independent third party that protects the ETP’s assets and ensures that the product is managed in accordance with the terms and conditions established beforehand.

Structure of Exchange Traded Product
ETP’s structure
Source: Sygnum Bank.

Single or diversified

Depending on the exposure the investor wants to obtain, various types of these assets can be purchased:

  • Some may replicate a specific cryptocurrency by tracking the value of a single digital coin. Their task is therefore only to replicate the market of the underlying asset in a simple and efficient way.
  • Other ETPs can replicate a basket or an index of cryptocurrencies; this is done to gain exposure simultaneously to different markets, diversifying risk.
  • An example can be found in the products offered by 21Shares. Part of its range consists of diversified products, such as the 21Shares Crypto Basket Equal Weight ETP, where several cryptocurrencies make up the product. The majority, however, both in terms of AUM and number of products, is single-asset, with only one underlying. Examples include the 21Shares Bitcoin ETP and the 21Shares Bitcoin Core ETP.
  • These two products illustrate a distinctive feature of 21Shares. The company was the first to bring such products to market and, enjoying a de facto monopoly at the time, was able to charge very high fees. With the arrival of new players, it was forced to reduce them and, thanks to its structure and competitive advantages, was able to offer some of the lowest fees on the market without delisting the earlier products, as they remained profitable. The two products mentioned above have no differences of any kind, except for their costs.

BTC ETP
21Shares BTC ETP
Source: 21Shares.

Advantages compared to direct crypto holdings

The reasons that may lead to the purchase of this type of financial instrument are multiple. First of all, navigating the world of cryptocurrencies can seem difficult, but ETPs remove much of the complexity. Instead of relying on unregulated platforms or paying extremely high fees to traditional funds that invest only marginally in cryptocurrencies, investors can buy this asset directly as they would with other securities. ETPs then sit alongside all other investments in the portfolio, enabling simpler portfolio analysis and comparison with other products. Moreover, even if these intermediaries do not offer true financial advice, they provide investor support that is far better than that of classic crypto platforms.

Another element in their favor is the security and transparency on which they are based. In Europe in particular, these instruments are subject to stringent financial regulations and are required to comply with accounting, disclosure, and transparency rules. Then, since they are predominantly physically collateralized, their structure makes it possible to protect the client and the underlying assets in the event of bankruptcy or insolvency of the issuer, limiting the investor’s exposure to issuer risk.

Why should I be interested in this post?

The crypto market is a complex and constantly changing world. This article can be read not only by those who want to pursue a career in the cryptocurrency sector, but by anyone who wants to deepen their understanding of concepts that are becoming increasingly important in financial markets and in everyday life.

Related posts on the SimTrade blog

   ▶ Snehasish CHINARA Top 10 Cryptocurrencies by Market Capitalization

   ▶ Hugo MEYER The regulation of cryptocurrencies: what are we talking about

Useful resources

CoinShares

21Shares

Swem, N. and F. Carapella (28/03/2025) Crypto ETPs: An Examination of Liquidity and NAV Premium, FEDS Notes.

Sygnum Bank

Vanguard: Replication methodology / ETF knowledge

About the author

The article was written in December 2025 by Alberto BORGIA (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Exchange student, Fall 2025).

   ▶ Read all articles by Alberto BORGIA.

EBITDA: Uses, Benefits and Limitations

Alberto BORGIA

In this article, Alberto BORGIA (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Exchange student, Fall 2025) explains EBITDA, how it can be used, and its advantages and disadvantages.

Introduction

Earnings Before Interest, Taxes, Depreciation and Amortization (EBITDA) is one of the most widely used financial metrics. Its goal is to capture a company’s operating performance before considering the effects of financing choices (interest), taxation and non-cash accounting charges related to long-lived assets and acquired intangibles.

The intuition behind it is that if two or more firms sell similar products, the analyst should be able to compare their “core operating engines”, even if they differ in debt levels, tax jurisdictions or asset bases.

Because EBITDA is a key input capable of influencing valuations and decisions, it is crucial to understand both how it is obtained and what it does (and does not) capture.

How it is obtained

To calculate this metric, we begin with the income statement and add back the expenses that are excluded by the EBITDA definition:

EBITDA = Net Income + Interest + Taxes + Depreciation + Amortization

Another way is to start from the EBIT:

EBITDA = EBIT + Depreciation + Amortization

Using Carrefour as a real-case example, I calculated EBITDA starting from the company’s income-statement figures. First, I reproduced an “operating-style” EBITDA by taking Gross Margin and subtracting selling, general and administrative expenses, which gives a core operating profit measure before financing and taxes. Then, for a second approach, I computed EBITDA as Recurring Operating Income + Depreciation + Amortization. This shows how EBITDA is obtained in practice from published financial statement components.
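Both routes above can be sketched in a few lines of Python (the figures are illustrative, not Carrefour’s):

```python
def ebitda_from_net_income(net_income: float, interest: float, taxes: float,
                           depreciation: float, amortization: float) -> float:
    """Bottom-up route: add back the items excluded by the EBITDA definition."""
    return net_income + interest + taxes + depreciation + amortization

def ebitda_from_ebit(ebit: float, depreciation: float,
                     amortization: float) -> float:
    """Top-down route: start from operating income (EBIT)."""
    return ebit + depreciation + amortization

# Illustrative figures (millions); both routes agree by construction
net_income, interest, taxes, dep, amort = 500, 80, 120, 200, 50
ebit = net_income + interest + taxes   # 700
print(ebitda_from_net_income(net_income, interest, taxes, dep, amort))  # 950
print(ebitda_from_ebit(ebit, dep, amort))                               # 950
```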

These two formulae can look clean and easy, but the real computation is messier, depending on how the company structures its income statement and on what is included in the depreciation and amortization lines. For these reasons, EBITDA is usually accompanied by a set of clear definitions and reconciliations.

Another key factor is that “earnings” are not always interpreted in the same way, which is why the SEC has decided that, in the context of EBITDA and EBIT described in its adopting release, “earnings” means GAAP net income as presented in the statement of operations. So, if a measure is calculated differently, it should not be labeled EBITDA, but Adjusted EBITDA.

Adjusted EBITDA

In many documents we see the term Adjusted EBITDA, which modifies the measure by excluding items that management considers “non-core” or “non-recurring”. These adjustments typically include items such as restructuring costs, acquisition-related expenses, unusual or non-recurring gains and losses, and stock-based compensation. The goal is to estimate a normalized operating result. This can however create risks when comparing different firms, or the same one across years.

What it is used for

The reasons why EBITDA is one of the metrics most used by financial analysts are as numerous as its uses. First of all, by excluding interest expenses, it is suitable as a proxy for comparing companies from an operating perspective, even when they have different tax or capital structures. In the debt market, it is also used to compute risk indicators and in covenants that limit leverage and protect lenders.

It is also used for company valuation and for the calculation of multiples, such as EV/EBITDA, where EV (enterprise value) indicates the total value of the firm. According to the technical literature, this multiple is particularly useful and widely used because it can be calculated even when net income is negative (which makes it extremely common in markets with significant infrastructure investments and in leveraged buyouts) and because it allows the comparison of companies with totally different levels of financial leverage.

EBITDA and its variants are also particularly useful for communicating with investors and analysts, even though it is necessary to be especially careful about any modifications aimed at “inflating” the results.

Last, analysts consider it a starting point, pairing it with cash-flow measures, such as free cash flow, for a fuller view.

Advantages

There are several advantages to using EBITDA. It can be calculated quickly from publicly available financial statements and is often directly disclosed by companies. It is useful for analyzing industries where leverage varies a lot, or for assessing a target in M&A, where the capital structure can change immediately after the acquisition. Finally, adding back depreciation and amortization makes operating results less sensitive to the useful lives assigned to assets.

Disadvantages

However, EBITDA is also associated with several notable drawbacks. Even after adding back depreciation and amortization, the measure does not take into account changes in working capital or the capex needed to increase or maintain productive capacity; it is only a rough proxy for operating cash flows.

As previously noted, EBITDA is also susceptible to manipulation, as it is inherently open to interpretation. Consequently, it should be complemented with other financial metrics to provide a more comprehensive and balanced assessment, thereby reducing the risk of misinterpretation driven by management’s attempts to influence investors’ perceptions.

EBITDA Margin

To express the EBITDA relative to revenue, we can use EBITDA margin:

EBITDA Margin = EBITDA / Revenue

It is calculated to understand how much operating earnings the firm generates per unit of sales, in particular it can be used to compare a firm’s profitability with its peers or to track trends. Even though it is particularly useful in financial analysis, the EBITDA margin presents the same issues as the original metric. If the first value is defined incorrectly, then this one will also be wrong. Just like normal EBITDA, this metric can be used best when it is accompanied by Operating Cash Flow (OCF), which reflects the cash generated by a company’s core operating activities, and Free Cash Flow (FCF), which represents the cash available after capital expenditures necessary to maintain or expand the asset base, and by an industry context.
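As a quick sketch of the margin calculation (the figures are illustrative):

```python
def ebitda_margin(ebitda: float, revenue: float) -> float:
    """EBITDA per unit of sales."""
    return ebitda / revenue

# Illustrative: EBITDA of 950 on revenue of 10,000
print(f"{ebitda_margin(950, 10_000):.1%}")  # 9.5%
```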

Example

I provide below an example for the computation of EBITDA based on Carrefour, a French firm operating in the retail sector, more precisely in mass-market distribution (retail grocery).

Example of EBITDA calculation: Carrefour
Example of EBITDA calculation: Carrefour

You can download the Excel file provided below, which contains the calculations of EBITDA for Carrefour.

Download the Excel file.

Why should I be interested in this post?

EBITDA represents a fundamental concept for anyone who wants to build their career in the financial field, but not only. Understanding how it works, as well as its weaknesses and strengths, is necessary in order to build the knowledge required to become a competent and respected professional. This article, in fact, starts from the basics in order to explain the principles behind this metric even to those who are not in the field, helping them understand it.

Related posts on the SimTrade blog

   ▶ Cornelius HEINTZE DCF vs. Multiples: Why Different Valuation Methods Lead to Different Results

   ▶ Dawn DENG Assessing a Company’s Creditworthiness: Understanding the 5C Framework and Its Practical Applications

Useful resources

Non-GAAP Financial Measures

Deloitte Accounting Research Tool (DART) 3.5 EBIT, EBITDA, and adjusted EBITDA

Damodaran EBITDA concept, margins, interpretation

Moody’s (2024) EBITDA: Used and Abused.

Faria-e-Castro, M., Gopalan, R., Pal, A., Sanchez, J.M., and Yerramilli, V. (2021) EBITDA Add-backs in Debt Contracting: A Step Too Far? Working paper.

Damodaran EBITDA vs cash flow logic; reinvestment/capex relevance

About the author

The article was written in December 2025 by Alberto BORGIA (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Exchange student, Fall 2025).

   ▶ Read all articles by Alberto BORGIA.

How to approach a stock pitch

Daniel LEE

In this article, Daniel LEE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – 2023-2027) explains how to approach a stock pitch.

Introduction

Are you preparing for an interview in investment banking? At a hedge fund? Or just participating in a finance competition? Learning how to deliver a stock pitch is one of the most useful skills you can develop early in your career.

A stock pitch combines fundamental analysis, strategy, valuation skills and even communication. The goal of this article is to break down the process of a stock pitch that anyone can apply.

What is a stock pitch?

A stock pitch is a recommendation (Buy, Hold or Sell) on a stock supported by:

  • A lot of research on the company and the industry to better understand the context
  • Financial analysis and valuation
  • Investment logic

A stock pitch is almost always structured the same way.

1. Business Overview

Here the goal is to understand the company and some of the key questions are: What is the business model? What are the revenue drivers? Is the company competitive?

2. Industry Overview

In order to put the company into context, you will have to study key elements like market size and growth, the competitive landscape, barriers to entry and industry trends.

3. Investment Thesis

The investment theses (generally three) are the reasons why an investor should follow your recommendation. Each thesis must be backed by evidence and specific points: just saying “the company is a leading player in the industry” does not work. A strong investment thesis should be grounded in management guidance and analyst consensus, for example: “The company plans to deleverage by x billion $”.

4. Valuation

Valuation is probably the most difficult part and the core of the pitch. It is here that you must show that the stock is undervalued or overvalued. Usually (exceptions exist depending on the industry or the company, and you have to pay attention to that!) you use both a relative valuation and an intrinsic valuation.

Relative valuation compares your company to its competitors to get a better idea of the multiples implied in the industry. The most used metrics are EV/EBITDA, P/E and EV/EBIT. Again, the metrics can change depending on the company or industry; it is really important to understand that no two pitches will be the same. The choice of the comps is also very important, and every company in the set should be justified based on specific criteria. For example, you would not compare a company that grows apples to a company that produces oil.

The intrinsic valuation is the Discounted Cash Flow (DCF), which forecasts the company’s performance over the next five years. You typically forecast revenue growth, margins and working capital needs. The DCF is highly sensitive to the assumptions you make, so it is very important to do the research work before starting the valuation. To account for errors or unexpected events, analysts usually run sensitivity analyses on the perpetual growth rate and the WACC, as well as a Bull & Bear analysis. These analyses show that your pitch is robust and not based on unrealistic assumptions.

Finally, with all these elements you arrive at a final price. For example, with a 50-50 weighting between the trading comps ($25) and the DCF ($30), (25 + 30) / 2 = $27.5 will be your final share price.
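The blending step can be written as a tiny helper function (the 50-50 weights are just this example’s choice):

```python
def blended_target_price(comps_price: float, dcf_price: float,
                         comps_weight: float = 0.5) -> float:
    """Weight the relative (comps) and intrinsic (DCF) values into one target."""
    return comps_weight * comps_price + (1 - comps_weight) * dcf_price

print(blended_target_price(25, 30))  # 27.5, as in the 50-50 example above
```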

5. Risks & Catalysts

This last part balances optimism with realistic downside scenarios. Considering these elements is very important. A good stock pitch is not a buy recommendation with a 100% upside; a good stock pitch is an objective view on a stock, including its business risks.

What I learned from my previous experiences

Working on a few stock pitches taught me several lessons:

  • Keep the pitch simple and structured: a 15-20 slide deck is enough; do not make things complicated
  • Your thesis must be defensible: it is great to have a huge upside, but you have to be able to explain your numbers, your assumptions and your model
  • Use Capital IQ: at ESSEC, students have a free account with Capital IQ, which is very useful for gathering financial data!
  • Tell a story: incorporating a story is essential to make a good impression and keep the audience’s attention on your presentation.

Conclusion

To conclude, a stock pitch is one of the most accessible exercises for anyone who wants to learn financial modelling skills or how to understand a business from a 360° perspective. Moreover, it is always useful to have a stock pitch ready for an interview as it is a question that comes up often.

Related posts on the SimTrade blog

   ▶ Dawn DENG The Art of a Stock Pitch: From Understanding a Company to Building a Coherent Logics

   ▶ Emanuele GHIDONI Reinventing Wellness: How il Puro Brings Personalization to Nutrition

   ▶ Max ODEN Leveraged Finance: My Experience as an Analyst Intern at Haitong Bank

   ▶ Saral BINDAL Implied Volatility and Option Prices

   ▶ Adam MERALLI BALLOU The Private Equity Secondary Market: from liquidity mechanism to structural pillar

Useful resources

Vernimmen, P., Quiry, P., Dallocchio, M., Le Fur, Y. and Salvi, A. (2023) Corporate Finance: Theory and Practice.

CFA Research Challenge

Damodaran, A. (2012) Investment Valuation: Tools and Techniques for Determining the Value of Any Asset.

About the author

The article was written by Daniel LEE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – 2023-2027).

   ▶ Read all articles by Daniel LEE.

Implied Volatility and Option Prices

Saral BINDAL

In this article, Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research assistant at ESSEC Business School) explains how implied volatility is calculated or extracted from option prices using an option pricing model.

Introduction

In financial markets characterized by uncertainty, volatility is a fundamental factor shaping the pricing and dynamics of financial instruments. Implied volatility stands out as a key, forward-looking metric that captures the market’s expectations of future price fluctuations, as reflected in current market prices of options.

The Black-Scholes-Merton model

In the early 1970s, Fischer Black and Myron Scholes jointly developed an option pricing formula, while Robert Merton, working in parallel and in close contact with them, provided an alternative and more general derivation of the same formula.

Together, their work produced what is now called the Black-Scholes-Merton (BSM) model, which revolutionized investing and led to the award of the 1997 Nobel Prize in Economic Sciences in Memory of Alfred Nobel to Myron Scholes and Robert Merton “for a new method to determine the value of derivatives,” developed in close collaboration with the late Fischer Black.

The Black-Scholes-Merton model provides a theoretical framework for options pricing and catalyzed the growth of derivatives markets. It led to the development of sophisticated trading strategies (hedging of options) that transformed risk management practices and financial markets.

The model is built on several key assumptions: the stock price follows a geometric Brownian motion with constant drift and volatility, there are no arbitrage opportunities, the risk-free interest rate is constant, and options are European-style (options that can only be exercised at maturity).

Key Parameters

In the BSM model, the theoretical value of a European-style option is computed from five essential parameters:

  • Strike price (K): fixed price specified in an option contract at which the option holder can buy (for a call) or sell (for a put) the underlying asset if the option is exercised.
  • Time to expiration (T): time left until the option expires.
  • Current underlying price (S0): the market price of the underlying asset (commodities, precious metals like gold, currencies, bonds, etc.).
  • Risk-free interest rate (r): the theoretical rate of return on an investment that is continuously compounded per annum.
  • Volatility (σ): standard deviation of the returns of the underlying asset.

The strike price (exercise price) and time to expiration (maturity) correspond to characteristics of the option while the current underlying asset price, the risk-free interest rate, and volatility reflect market conditions.

Option payoff

The payoff for a call option gives the value of the option at the moment it expires (T) and is given by the expression below:


Payoff formula for call option

Where CT is the call option value at expiration, ST the price of the underlying asset at expiration, and K is the strike price (exercise price) of the option.

Figure 1 below illustrates the payoff function described above for a European-style call option. The example considers a European call written on the S&P 500 index, with a strike price of $5,000 and a time to maturity of 30 days.

Figure 1. Payoff value as a function of the underlying asset price.
Payoff function
Source: computation by the author.
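As a concrete sketch (the function name is ours, not from the article’s downloadable files), the payoff max(ST − K, 0) described above can be computed in Python, here with the strike of the Figure 1 example:

```python
def call_payoff(underlying_price: float, strike: float) -> float:
    """Value of a European call option at expiration (T): max(S_T - K, 0)."""
    return max(underlying_price - strike, 0.0)

# Example with the S&P 500 call of Figure 1 (strike K = $5,000)
print(call_payoff(6000, 5000))  # in-the-money: payoff of $1,000
print(call_payoff(4500, 5000))  # out-of-the-money: payoff of $0
```

The option holder exercises only when the underlying finishes above the strike, which is why the payoff is floored at zero.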

Call option value

While the value of an option is known at maturity (being determined by its payoff function), its value at any earlier time prior to maturity, and in particular at issuance, is not directly observable. Consequently, a valuation model is required to determine the option’s price at those earlier dates.

The Black–Scholes–Merton model is formulated as a stochastic partial differential equation and the solution to the partial differential equation (PDE) gives the BSM formula for the value of the option.

For a European-style call option, the call option value at issuance is given by the following formula:


Formula for the call option value according to the BSM model

with


Formulas for the d1 and d2 terms in the BSM model

Where the notations are as follows:

  • C0= Call option value at issuance (time 0) based on the Black-Scholes-Merton model
  • K = Strike price (exercise price)
  • T = Time to expiration
  • S0 = Current underlying price (time 0)
  • r = Risk-free interest rate
  • σ = Volatility of the underlying asset returns
  • N(·) = Cumulative distribution function of the standard normal distribution

Figure 2 below illustrates the call option value as a function of the underlying asset price. The example considers a European call written on the S&P 500 index, with a strike price of $5,000 and a time to maturity of 30 days. The current price of the underlying index is $6,000, and the risk-free interest rate is set at 3.79% corresponding to the 1-month U.S. Treasury yield, and the volatility is assumed to be 15%.

Figure 2. Call option value as a function of the underlying asset price.
Call option value as a function of the underlying asset price.
Source: computation by the author (BSM model).
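The BSM formula above can be implemented with only the Python standard library (the cumulative normal distribution N(·) is built from math.erf). The parameter values reproduce the Figure 2 example; the day-count convention T = 30/365 is our assumption:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """N(.): cumulative distribution function of the standard normal distribution."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S0: float, K: float, r: float, T: float, sigma: float) -> float:
    """European call option value according to the Black-Scholes-Merton model."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Figure 2 example: S0 = $6,000, K = $5,000, r = 3.79%, T = 30 days, sigma = 15%
price = bsm_call(S0=6000, K=5000, r=0.0379, T=30/365, sigma=0.15)
print(round(price, 2))  # deep in-the-money: close to the lower bound S0 - K*exp(-rT)
```

Because this call is deep in-the-money and volatility is low, the model value sits just above the option’s discounted intrinsic value.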

Option and volatility

In the Black–Scholes–Merton model, the value of a European call or put option is a monotonically increasing function of volatility. Higher volatility increases the probability of finishing in-the-money while losses remain limited to the option premium, resulting in a strictly positive vega (the first derivative of the option value with respect to volatility) for both calls and puts.

As volatility approaches zero, the option value converges to its intrinsic value, forming a lower bound. With increasing volatility, option values rise toward a finite upper bound equal to the underlying price for calls (and to the discounted strike price for puts). An inflection point occurs where volga (the second derivative of the option value with respect to volatility) changes sign: at this point vega is maximized (for at-the-money options) and declines as the option becomes deep in- or out-of-the-money or as time to maturity decreases.

The upper and lower limits for the call option value are given below (Hull, 2015, Chapter 11).


Formula for upper and lower limits of the option price
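Written out explicitly, these standard no-arbitrage bounds for a European call on a non-dividend-paying asset are:

```latex
\max\!\left(S_0 - K e^{-rT},\; 0\right) \;\le\; C_0 \;\le\; S_0
```

With the parameters of the running example (S0 = $6,000, K = $5,000, r = 3.79%, T = 30 days), the lower bound S0 − Ke^(−rT) is approximately $1,015.5 and the upper bound is S0 = $6,000, matching the limits shown in Figure 3.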

Figure 3 below illustrates the value of a European call option as a function of the underlying asset’s price volatility. The example considers a European call written on the S&P 500 index, with a strike price of $5,000 and a time to maturity of 30 days. The current price of the underlying index is $6,000, and the risk-free interest rate is set at 3.79% corresponding to the 1-month U.S. Treasury yield. A deliberately wide (and economically unrealistic) range of volatility values is employed in order to highlight the theoretical limits of option prices: as volatility tends to infinity, the option value converges to an upper bound ($6,000 in our example), while as volatility approaches zero, the option value converges to a lower bound ($1,015.51).

Figure 3. Call option value as a function of price volatility
 Call option value as a function of price volatility
Source: computation by the author (BSM model).

Volatility: the unobservable parameter of the model

When we think of options, the basic equation to remember is “Option = Volatility”. Unlike stocks or bonds, options are not primarily quoted in monetary units (dollars or euros), but rather in terms of implied volatility, expressed as a percentage.

Volatility is not directly observable in financial markets. It is an unobservable (latent) parameter of the pricing model, inferred endogenously from observed option prices through an inversion of the valuation formula given by the BSM model. As a result, option markets are best interpreted as markets for volatility rather than markets for prices.

Out of the five essential parameters of the Black-Scholes-Merton model listed above, volatility is the only one that is not observable, as it refers to the future fluctuations of the underlying asset’s price over the remaining life of the option. Since future volatility cannot be directly observed, practitioners invert the BSM model to estimate the market’s expectation of this volatility from observed option market prices; the result is referred to as implied volatility.

Implied Volatility

In practice, implied volatility is the volatility parameter that, when input into the Black-Scholes-Merton formula, yields the market price of the option; it represents the market’s expectation of future volatility.

Calculating Implied volatility

For fixed inputs (S, K, r, T), the BSM model maps the volatility σ to a unique call option value, and this mapping is strictly increasing and therefore invertible. When the market price of the call option (CMarket) is known, we invert this relationship, using (S, K, r, T, CMarket) as inputs, to solve for the implied volatility σimplied.


Formula for implied volatility

Newton-Raphson Method

As there is no closed form solution to calculate implied volatility from the market price, we need a numerical method such as the Newton–Raphson method to compute it. This involves finding the volatility for which the Black–Scholes–Merton option value CBSM equals the observed market option price CMarket.

We define the function f as the difference between the call option value given by the BSM model and the observed market price of the call option:


Function for the Newton-Raphson method.

Where x represents the unknown variable (the implied volatility) to be solved for, and CMarket is treated as a constant in the Newton–Raphson method.

Using the Newton-Raphson method, we iteratively estimate the root of the function until the difference between two consecutive estimates is less than the tolerance level (ε).


Formula for the iterations in the Newton-Raphson method

In practice, the inflection point (Tankov, 2006) is taken as the initial guess. Although the function f(x) is monotonic, its derivative (vega) becomes extremely small for very large or very small volatility values (see Figure 3), causing the Newton–Raphson update step to overshoot the root and potentially diverge. Selecting the inflection point also minimizes approximation error, as the second derivative of the function at this point is approximately zero, while the first derivative remains non-zero.


Formula for calculating the volatility at inflexion point.

Where σinflection is the volatility at the inflection point.
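The iteration described above can be sketched in Python using only the standard library. The function and variable names are ours (this is not the article’s downloadable code), the initial guess is the inflection-point volatility given above, and the numerical check reuses the running S&P 500 example with T = 30/365 assumed as day count:

```python
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    """N(.): cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    """Density of the standard normal distribution."""
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bsm_call(S0, K, r, T, sigma):
    """European call value according to the BSM model."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def vega(S0, K, r, T, sigma):
    """First derivative of the call value with respect to volatility."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S0 * sqrt(T) * norm_pdf(d1)

def implied_vol(C_market, S0, K, r, T, tol=1e-8, max_iter=200):
    """Newton-Raphson inversion of the BSM formula.

    Initial guess: the inflection-point volatility (Tankov, 2006)."""
    sigma = sqrt(2.0 * abs(log(S0 / K) + r * T) / T)
    for _ in range(max_iter):
        f = bsm_call(S0, K, r, T, sigma) - C_market  # f(x) = C_BSM - C_Market
        step = f / vega(S0, K, r, T, sigma)          # Newton-Raphson update
        sigma -= step
        if abs(step) < tol:                          # converged within tolerance
            break
    return sigma

# Check: price an option at sigma = 15%, then recover that volatility
C = bsm_call(6000, 5000, 0.0379, 30/365, 0.15)
print(round(implied_vol(C, 6000, 5000, 0.0379, 30/365), 4))
```

Round-tripping a model price through the solver should recover the input volatility, which is a convenient sanity check before feeding in observed market prices.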

Figure 4 below illustrates how implied volatility varies with the call option price for different values of the market price (computed using the Newton–Raphson method). As before, the example considers a European call written on the S&P 500 index, with a strike price of $5,000 and a time to maturity of 30 days. The current level of the underlying index is $6,000, and the risk-free interest rate is set at 3.79% corresponding to the 1-month U.S. Treasury yield.

Figure 4. Implied volatility vs. Call Option value
 Implied volatility as a function of call option price
Source: computation by the author.

You can download the Excel file provided below, which contains the calculations and charts illustrating the payoff function, the option price as a function of the underlying asset’s price, the option price as a function of volatility, and the implied volatility as a function of the option price.

Download the Excel file.

You can download the Python code provided below, to calculate the price of a European-style call or put option and calculate the implied volatility from the option market price (BSM model). The Python code uses several libraries.

Download the Python code to calculate the price of a European option.

Alternatively, you can download the R code below with the same functionality as in the Python file.

Download the R code to calculate the price of a European option.

Why should I be interested in this post?

The seminal Black–Scholes–Merton model was originally developed to price European options. Over time, it has been extended to accommodate a wide range of derivatives, including those based on currencies, commodities, and dividend-paying stocks. As a result, the model is of fundamental importance for anyone seeking to understand the derivatives market and to compute implied volatility as a measure of risk.

Related posts on the SimTrade blog

   ▶ Akshit GUPTA Options

   ▶ Jayati WALIA Black-Scholes-Merton Option Pricing Model

   ▶ Jayati WALIA Implied Volatility

   ▶ Akshit GUPTA Option Greeks – Vega

Useful resources

Academic research

Black F. and M. Scholes (1973) The pricing of options and corporate liabilities. Journal of Political Economy, 81(3), 637–654.

Merton R.C. (1973) Theory of rational option pricing. The Bell Journal of Economics and Management Science, 4(1), 141–183.

Hull J.C. (2022) Options, Futures, and Other Derivatives, 11th Global Edition, Chapter 15 – The Black–Scholes–Merton model, 338–365.

Cox J.C. and M. Rubinstein (1985) Options Markets, First Edition, Chapter 5 – An Exact Option Pricing Formula, 165-252.

Tankov P. (2006) Calibration de Modèles et Couverture de Produits Dérivés (Model calibration and derivatives hedging), Working Paper, Université Paris-Diderot. Available at https://cel.hal.science/cel-00664993/document.

About the BSM model

The Nobel Prize Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 1997

Harvard Business School Option Pricing in Theory & Practice: The Nobel Prize Research of Robert C. Merton

Other

NYU Stern Volatility Lab Volatility analysis documentation.

About the author

The article was written in December 2025 by Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research assistant at ESSEC Business School).

   ▶ Read all posts written by Saral BINDAL.

The Private Equity Secondary Market: from liquidity mechanism to structural pillar

Adam MERALLI BALLOU

In this article, Adam MERALLI BALLOU (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026) introduces the Secondary Market in Private Equity.

Introduction

Over the past decade, the private equity secondary market has undergone a profound transformation. Originally conceived as a marginal liquidity outlet for constrained investors, it has progressively become a central component of private markets architecture, as it increasingly shapes how capital circulates within the private equity ecosystem rather than merely how it exits it. The secondary market now acts as a mechanism through which investors actively manage portfolio duration, smooth cash-flow profiles, and adjust exposure across vintages and strategies, while allowing General Partners to optimize asset holding periods in response to market conditions. This evolution reflects a broader shift away from a rigid, linear fund lifecycle toward a more dynamic and continuous model of capital allocation.

This transformation has accelerated in the recent cycle, as exit activity slowed materially and fund durations extended beyond initial expectations, amplifying the need for alternative liquidity and capital recycling solutions. According to the Preqin Secondaries in 2025 report and the William Blair Private Capital Advisory Secondary Market Report (2025), year 2025 is expected to mark a historical milestone, with global secondary transaction volumes reaching approximately $175bn, the highest level ever recorded. This surge reflects not only cyclical pressures on liquidity, but also a deeper structural shift in how private equity portfolios are managed, financed, and recycled across market cycles.

Global Secondary Market Volume
Global Secondary Market Volume
Source: William Blair.

This figure illustrates the rapid expansion of the global private equity secondary market. According to the William Blair Private Capital Advisory Secondary Market Report (2025), transaction volumes grew from $28bn in 2013 to $156bn in 2024, with $175bn projected for 2025. The increasing share of GP-led transactions highlights the growing role of secondary markets in addressing liquidity needs and exit constraints.

LP-led vs GP-led secondaries: complementary functions within the ecosystem

The secondary market is fundamentally organized around two distinct transaction types: LP-led and GP-led, each fulfilling a different economic function within the private equity ecosystem. LP-led transactions represent the original backbone of the market. In these deals, Limited Partners sell existing fund interests to obtain liquidity, rebalance their portfolios, or reduce exposure following overallocation to private equity. Data from the 2025 Preqin report shows that LP-led transactions tend to dominate in number, particularly during periods of market stress, as institutional investors respond to denominator effects, regulatory constraints, or liability-matching requirements. However, while LP-led transactions account for a high share of deal count, their relative weight in value terms has become more balanced. In 2024, LP-led secondaries represented roughly $80bn, or close to half of total market volume. GP-led transactions, by contrast, are initiated by the General Partner rather than by investors. In a GP-led secondary, the GP transfers one or several assets from an existing fund into a new vehicle. In 2024, GP-led transactions represented approximately $76bn in value, accounting for a share comparable to LP-led transactions despite being fewer in number, which reflects their significantly larger average deal sizes.

The explosion of continuation funds and the normalization of GP-led structures

Within the GP-led universe, the rapid rise of continuation funds stands out as the most consequential development of the past few years. Once viewed as exceptional restructuring tools for underperforming or illiquid assets, continuation funds have become mainstream instruments used to extend the ownership of high-quality portfolio companies. The Preqin report identifies 401 continuation funds launched between 2006 and 2025, with a striking acceleration after 2020. Of these, 340 funds are already closed, representing an aggregate capital base of approximately $182.7bn. In value terms, continuation funds now account for around 45–50% of total secondary market volume and nearly 80% of GP-led transactions. This expansion has been driven by a combination of prolonged exit timelines, improved governance standards, systematic use of third-party valuations, and stronger alignment mechanisms such as GP carry rollovers. The data confirms that continuation funds are no longer marginal or opportunistic structures, but rather standardized tools for managing asset life cycles and sustaining value creation beyond the constraints of traditional closed-end fund structures.

Capital concentration, pricing normalization, and the strategic role of secondaries

Beyond transaction structures, the scale of capital committed to the secondary market underscores its growing strategic importance. The William Blair report highlights that secondary-focused investors held more than $200bn of dry powder in 2024, equivalent to approximately 43% of total secondary AUM (Assets under Management), a proportion materially higher than that observed in private equity primaries. This accumulation of capital has enabled the execution of increasingly large and complex transactions and has supported a notable improvement in pricing conditions. In 2024, 91% of single-asset continuation fund transactions were priced at or above 90% of NAV (Net Asset Value, i.e. the estimated fair value of a fund’s underlying assets), while multi-asset continuation funds also saw a significant normalization in discounts. At the same time, performance data from Preqin indicates that secondaries continue to offer a differentiated risk-return profile, characterized by lower dispersion of outcomes and faster cash-flow generation relative to primary funds. In an environment marked by distribution scarcity and heightened uncertainty, these characteristics help explain why the secondary market has moved from a peripheral liquidity solution to a structural stabilizer of the private equity ecosystem.

Why should I be interested in this post?

As the private equity secondary market reached record transaction volumes of around $156bn in 2024 and could grow to nearly $300bn by 2030, understanding its mechanics has become essential for anyone interested in private markets. This post provides a data-driven explanation of LP-led and GP-led transactions and highlights why continuation funds now account for a large share of secondary activity. These structures are central to liquidity management, portfolio rebalancing, and capital recycling in a constrained exit environment. The different industry reports used in this analysis can be found in the “Useful resources” section below.

Related posts on the SimTrade blog

   ▶ All posts about Private Equity

   ▶ Emmanuel CYROT Deep Dive into evergreen funds

   ▶ Lilian BALLOIS Discovering Private Equity: Behind the Scenes of Fund Strategies

   ▶ Adam MERALLI BALLOU My internship experience in Investor Relation at Eurazeo

Useful resources

William Blair (March 2025) William Blair Private Capital Advisory: 2025 Secondary Market Report

Preqin (June 2025) Secondaries in 2025

About the author

The article was written in December 2025 by Adam MERALLI BALLOU (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026).

   ▶ Read all articles by Adam MERALLI BALLOU.

DCF vs. Multiples: Why Different Valuation Methods Lead to Different Results

Cornelius HEINTZE

In this article, Cornelius HEINTZE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025) explains how the usage of different valuation methods can lead to different outcomes and how to use them.

Why this is important

In finance, valuation is ever-present, and it is not only a mechanical exercise. Analysts work with discounted cash flow models, multiples, or other procedures to value a company. These different methods lead to different outcomes, and it is crucial to understand why the differences occur and whether the results are in line with your expectations. This helps to avoid misleading conclusions from relying on a single method, as well as difficulties in interpreting several methods at once.

The DCF model: measuring intrinsic value

The discounted cash flow (DCF) model aims to measure the intrinsic value of a company. It does this by forecasting the expected future cash flows generated by the company and discounting them back to the present using a discount rate that reflects the risk specific to the company. The goal is to estimate the equity value of the company. The discount rate is either the WACC (weighted average cost of capital) or the cost of equity, depending on the variant of the method. The first variant estimates the enterprise value, which represents the value of the whole company as financed by both debt and equity; here you discount the cash flows to the firm at the WACC and then subtract the value of the liabilities to obtain the equity value. The second variant arrives at the equity value directly: you work with the cash flows that accrue only to shareholders (after payments to debtholders) and discount them at the cost of equity, which can be estimated using the CAPM.

DCF logic (simplified):

  • Explicit forecast period: Forecast cash flows CFt for years t = 1 … T and discount them at rate r.
  • Terminal value: Estimate the value beyond year T using a stable long-term assumption. This is typically modelled as a perpetuity, which can include a growth factor if that aligns with the assumptions about the company.

Formula (illustrative):

Value = Σ_{t=1…T} CF_t / (1 + r)^t + Terminal Value / (1 + r)^T

This formula can differ based on which type of DCF model you are using. If you use the WACC to discount the cash flows to the firm, you obtain the enterprise value of the company, not the equity value. To get to the equity value, you have to subtract the liabilities.

Equity value using WACC = (Σ_{t=1…T} CF_t / (1 + WACC)^t + Terminal Value / (1 + WACC)^T) – Liabilities

If you are using the cash flows that can be assigned to the equity of the company and the cost of equity to discount these cash flows you will automatically end up with the equity value.

Equity value = Σ_{t=1…T} CF_t^equity / (1 + r_equity)^t + Terminal Value / (1 + r_equity)^T
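The variants above share the same mechanics: discount explicit-period cash flows and a terminal value at a consistent rate. A minimal Python sketch (all cash flows, the discount rate, and the growth rate below are purely illustrative numbers, not from the article):

```python
def dcf_value(cash_flows, r, g):
    """Present value of explicit-period cash flows plus a growing-perpetuity
    terminal value, both discounted at rate r.

    cash_flows: forecast cash flows for years 1..T
    r: discount rate (cost of equity for equity cash flows, WACC for firm cash flows)
    g: long-term growth rate of the terminal perpetuity (must satisfy g < r)
    """
    T = len(cash_flows)
    # Explicit forecast period: sum of CF_t / (1 + r)^t for t = 1..T
    pv_explicit = sum(cf / (1 + r)**t for t, cf in enumerate(cash_flows, start=1))
    # Terminal value at year T (growing perpetuity), discounted back to today
    terminal_value = cash_flows[-1] * (1 + g) / (r - g)
    pv_terminal = terminal_value / (1 + r)**T
    return pv_explicit + pv_terminal

# Illustrative example: three years of equity cash flows, r = 10%, g = 2%
print(round(dcf_value([100, 110, 120], r=0.10, g=0.02), 2))  # → 1421.49
```

Note how the terminal value dominates the result here, which is exactly why DCF outputs are so sensitive to the assumed growth and discount rates.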

Strengths of the DCF

You can already see that there are differences within one single model that need to be understood. In practice, the enterprise value variant is widely used because of its fundamental strength: its simplicity and convenience. It is easy to follow, and you can see how different assumptions affect the firm value in different ways. It therefore forces the analyst to evaluate and model the key drivers of financial growth, such as the growth rate, investments in working capital, and the risk the company is currently facing. As a result, DCF valuations are often used for long-term strategic decisions, mergers and acquisitions, and fairness opinions.

Weaknesses of the DCF

That said, the major problem with DCF models is their assumptions. They are based on historical values and the CAPM, neither of which necessarily gives a reliable outlook on the future. But as there is currently no better method to predict future cash flows, the approach remains dominant in practice, even though its empirical track record is debatable. The resulting model is also very sensitive to the assumptions made, especially the growth rate and the discount rate, whose effects accumulate over time.

Multiples valuation: estimating relative value

Turning now to multiples-based valuation: this method focuses on the relative value of a company rather than its intrinsic value. The firm is compared to similar companies using ratios such as:

  • Price-to-Earnings (P/E)
  • Enterprise Value to EBITDA (EV/EBITDA)
  • Enterprise Value to Sales (EV/Sales)

The process of choosing and working with multiples is simple. There are two main approaches: the similar public company method, which builds and compares multiples based on data from companies that are publicly traded on a stock market, and the recent acquisition method, which looks at the transaction prices paid for similar companies. As the name of the first method indicates, the chosen companies must be similar to the company being valued. You can achieve this by looking at the size of the company, the industry, the location or other features and specifying criteria for these aspects (e.g. number of employees, pharmacy, Germany).

Implicit assumptions behind multiples

Although multiples are often perceived as simpler ways of valuing a company, they embed the same fundamental assumptions as a DCF model, albeit in a less transparent way.

A valuation multiple implicitly reflects:

  • Expected growth
  • Risk and discount rates
  • Capital structure
  • Profitability and reinvestment needs

For example, a high EV/EBITDA multiple usually signals that the market expects strong future growth or low risk. In other words, the market has already performed a form of discounted cash flow analysis, but the assumptions are hidden inside the multiple.
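As a sketch of how such a multiple is applied in practice (all peer multiples and company figures below are invented for illustration), valuing a target from a peer-median EV/EBITDA multiple looks like this in Python:

```python
from statistics import median

def ev_ebitda_valuation(peer_multiples, target_ebitda, target_net_debt):
    """Relative valuation: apply the peers' median EV/EBITDA multiple
    to the target's EBITDA, then move from enterprise to equity value."""
    multiple = median(peer_multiples)            # median is robust to outliers
    enterprise_value = multiple * target_ebitda  # EV = multiple x EBITDA
    equity_value = enterprise_value - target_net_debt
    return enterprise_value, equity_value

# Hypothetical peer set and target figures (in $m)
ev, eq = ev_ebitda_valuation(peer_multiples=[8.0, 9.0, 10.5],
                             target_ebitda=50.0,
                             target_net_debt=100.0)
print(ev, eq)  # EV = 450.0, equity value = 350.0
```

The mechanics are trivial; the hard part, as the text notes, is choosing a genuinely comparable peer group, since the hidden growth and risk assumptions travel with the multiple.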

Strengths of multiples

Multiples are an easy way to get an overview of the value of a company and compare the estimated values to other companies on the market. They can also be used to quickly check the plausibility of a firm value estimated with the DCF model. Their main strength is again simplicity, but in a much faster and easier form. They are used to compare the company to competitors and to give insights into how it would perform against them and on the stock market. Multiples are also very helpful when valuing smaller companies, which might not have the organized historical data needed for a DCF valuation.

Weaknesses of multiples

One of the biggest weaknesses is the need to find a similar company that is traded on the stock market or whose financial information is publicly available. Multiples can also be manipulated easily, because they are less transparent than other methods and the peer group can be adjusted at will (what counts as “similar”?). Nor can they be seen as objective values, since they simply reflect the market’s current pricing. They are therefore not always consistent and have to be used with care: always make plausible assumptions that can be justified by the multiple and the current situation of the company.

When to use them and the “football field”

To really understand the use of multiples and the DCF model together, let us look at how to combine them in a meaningful way. Multiples are often used to create a “football field”: a chart that summarizes valuation ranges across methods rather than delivering a single point estimate. This is especially helpful when negotiating an M&A deal, to see whether the offered prices are aligned with your assumptions and whether you want to accept or not.

A great example of the combination of the DCF model and multiples is the acquisition of Actelion by Johnson & Johnson. To assess whether the offer was acceptable and fair, valuation professionals from Alantra were hired. Alantra gathered data and estimated a range of multiples to compare against the offer, and presented them in a “football field” chart to make the comparison visually appealing and intuitive. The green line sits well to the right of the red line, so the offer can be seen as fair relative to the other values estimated by Alantra in its fairness opinion for Actelion; the further a value lies to the right of the graph, the higher it is.

Alantra Fairness Opinion example

You can download the full fairness opinion here

DCF vs. Multiples Example

You can download the Excel file provided below, which contains the “Football field” example.

 Download the Excel file for the football field DCF and Multiples valuation methods

You can see here that the value estimated with the DCF is more toward the lower end than the market consensus. This is not necessarily a problem, as the market might already have priced in synergies or future events that the DCF model did not capture or simply does not expect, due to a lack of information (insiders, etc.).

As you can see, rather than choosing between DCF and multiples, practitioners usually apply both approaches in a complementary way:

  • DCF models are well suited for estimating intrinsic value and analyzing long-term fundamentals.
  • Multiples are useful for understanding how the market currently prices similar firms.
  • In IPOs and M&A transactions, both methods are typically combined to form a valuation range.

A robust valuation rarely relies on a single number. Instead, it emerges from comparing and reconciling different approaches.

Conclusion

DCF and multiples-based valuation often lead to different results because they answer different questions. DCF models aim to estimate intrinsic value based on explicit assumptions, while multiples reflect relative value and prevailing market expectations.

Recognizing the strengths and limitations of each method is essential for sound financial analysis. By combining both approaches and critically assessing their underlying assumptions, analysts can arrive at more balanced and informative valuation outcomes.

To sum up…

Both DCF and multiples are useful tools, but neither should be applied mechanically. A solid valuation comes from understanding what each method captures, where it can mislead, and how results change when assumptions or peer groups change. In practice, triangulating across methods provides the most reliable foundation for decision-making.

Why should I be interested in this post?

For a student interested in business and finance, this post provides a concrete bridge between theory and practice. Valuation models such as the two-stage DCF are not only central to courses in corporate finance, but also widely used in internships, case interviews, and real-world transactions. Understanding how sensitive firm values are to assumptions on growth and discount rates helps students critically assess valuation outputs rather than taking them at face value, and prepares them for practical applications in consulting, investment banking, or asset management.

Related posts on the SimTrade blog

   ▶ All posts about financial techniques

   ▶ Jorge KARAM DIB Multiples valuation method for stocks

   ▶ Andrea ALOSCARI Valuation methods

   ▶ Samuel BRAL Valuing the Delisting of Best World International Using DCF Modeling

   ▶ Cornelius HEINTZE The effect of a growth rate in DCF

Useful resources

Paul Pignataro (2022) Financial Modeling and Valuation: A Practical Guide to Investment Banking and Private Equity, Wiley, second edition.

Aswath Damodaran (2015) Explanations on Multiples

About the author

The article was written in December 2025 by Cornelius HEINTZE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025).

   ▶ Read all articles by Cornelius HEINTZE.

Risk-based Audit: From Risks to Assertions to Audit Procedures

Iris ORHAND

In this article, Iris ORHAND (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026) shares a technical article about risk-based audit.

Introduction

Financial statements are not audited by “checking everything”. In practice, auditors use a risk-based approach: they identify what could materially go wrong, link those risks to specific financial statement assertions, and then design the right audit procedures to obtain sufficient and appropriate evidence. “Materially” means that an error or omission is significant enough to influence the decisions of users of the financial statements, meaning it has a real impact on how the financial information is interpreted.

This article explains a simple but powerful framework widely used in audit: Risks→Assertions→Procedures. It’s the logic I applied during my experience in financial audit at EY, where this methodology helps teams prioritize work, structure fieldwork, and produce clear conclusions.

The audit risk model: why “risk-based” makes sense

At a high level, auditors aim to reduce the risk of issuing an inappropriate opinion. A classic way to express this is:

Audit Risk (AR) = Inherent Risk (IR) × Control Risk (CR) × Detection Risk (DR)

  • Inherent risk (IR): the risk a material misstatement exists before considering controls (complexity, estimates, judgment, volatile business, etc.).
  • Control risk (CR): the risk that internal controls fail to prevent or detect a misstatement.
  • Detection risk (DR): the risk that audit procedures fail to detect a misstatement that exists.

In practice, when IR and/or CR are high, auditors respond by lowering DR through stronger procedures: more evidence, better targeting, larger samples, more reliable sources, and more experienced review.
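The relationship above can be illustrated with a short numerical sketch (all risk levels below are hypothetical examples, not standards-based thresholds): holding the target audit risk fixed, higher inherent and control risk force a lower acceptable detection risk.

```python
# Illustrative sketch of AR = IR x CR x DR, solved for detection risk.
# All numbers are hypothetical examples, not audit standards.

def acceptable_detection_risk(target_ar: float, ir: float, cr: float) -> float:
    """Detection risk that keeps audit risk at the target level."""
    return target_ar / (ir * cr)

# High IR and CR -> low acceptable DR -> stronger substantive procedures.
dr_risky = acceptable_detection_risk(target_ar=0.05, ir=0.9, cr=0.8)   # ~0.07
dr_safe = acceptable_detection_risk(target_ar=0.05, ir=0.5, cr=0.4)    # 0.25
```

In words: when the client looks riskier, the auditor compensates by designing procedures with a lower chance of missing a misstatement.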

Materiality: focusing on what matters

Because financial statement users care about decisions, audit planning relies on materiality (and performance materiality) to size the work. Materiality helps answer:

  • What could influence users’ decisions?
  • Which line items/disclosures require deeper work?
  • What magnitude of error becomes unacceptable?

This is also why “risk-based” is essential: the audit effort is scaled to what is material and risky, not what is merely easy to test.

Assertions: translating accounting lines into “what could be wrong”

Assertions are management’s implicit claims behind each number. Auditors use them to define the nature of possible misstatements. The most common are:

  • Existence / Occurrence: the asset/revenue is real and actually happened
  • Completeness: nothing important is missing
  • Rights & obligations: the entity truly owns/owes it
  • Valuation / Accuracy: amounts are measured correctly (estimates, provisions…)
  • Cut-off: recorded in the correct period
  • Presentation & disclosure: correctly described and disclosed

This is a key step: a “risk” becomes actionable only when you connect it to one (or several) assertions.

From risk to procedures: the core workflow

A practical “risk-based audit” workflow looks like this:

  • First: identify significant risks (business model, incentives, complexity, unusual transactions, estimates, prior-year issues).
  • Second: map each risk to assertions (e.g., revenue fraud risk → occurrence, cut-off).
  • Third: choose the response: 1) tests of controls (TOC) if relying on internal controls; 2) substantive tests (analytical procedures + tests of details).
  • Finally: execute, document, conclude: evidence must be sufficient, appropriate, and consistent.

Concrete examples: what we do in practice

Example 1: Revenue recognition

Typical risks: overstated revenue, early recognition, fictitious sales, side agreements. Key assertions: occurrence, cut-off, accuracy, presentation.

Common procedures:

  • Analytical review (trends, margins, monthly patterns) to spot anomalies
  • Cut-off testing around year-end (invoices, delivery notes, contracts)
  • Tests of details on samples (supporting documents, customer confirmations when relevant)
  • Review of revenue recognition policy and contract terms (IFRS 15 logic, performance obligations)

Example 2: Inventory (valuation and existence)

Typical risks: obsolete stock, wrong costing, missing inventory, poor count controls. Key assertions: existence, valuation, completeness, rights.

Common procedures:

  • Attendance/observation of physical inventory count
  • Reconciliation count-to-ERP, and ERP-to-FS
  • Price testing, cost build-up testing, NRV/obsolescence analysis
  • Movement testing and cut-off around receiving/shipping

Example 3: Provisions & estimates (judgment-heavy)

Provisions and estimates refer to amounts recorded in the accounts for obligations or future events that are uncertain but likely enough to require recognition. Management must therefore use judgment to estimate their value based on the best information available.

Typical risks: management bias, under/over provisioning, inconsistent assumptions. Key assertions: valuation, completeness, presentation.

Common procedures:

  • Understanding process + key assumptions and governance
  • Back-testing prior-year estimates vs actual outcomes
  • Sensitivity analysis on assumptions (rates, volumes, timelines)
  • Lawyer letters / review of claims, contracts, contingencies

Conclusion

Risk-based audit is more than a buzzword: it’s the method that turns financial statement complexity into a structured plan. By linking risks to specific assertions, auditors can design procedures that are both efficient and defensible, especially under time pressure and tight deadlines.

Why should I be interested in this post?

If you are interested in audit, accounting, corporate finance, or risk, understanding the risk-based approach is foundational. It explains how auditors prioritize, how they challenge information, and why audit work is ultimately about building confidence in financial reporting through evidence.

Related posts on the SimTrade blog

Professional experiences

   ▶ Posts about Professional experiences

   ▶ Iris ORHAND My apprenticeship experience as a Junior Financial Auditor at EY

   ▶ Iris ORHAND My apprenticeship experience as an Executive Assistant in Internal Audit (Inspection Générale) at Bpifrance

   ▶ Annie Yeung My Audit Summer Internship experience at KPMG

   ▶ Mahe Ferret My internship at NAOS – Internal Audit and Control

Useful resources

Site economie.gouv Méthodologie de conduite d’une mission d’audit interne

Site L-expert-comptable.com (25/02/2025) La méthodologie d’audit : Les assertions

Corcentric Les étapes clefs d’un processus d’audit comptable et financier

Cabinet Narquin & Associés Les méthodes d’audit utilisées par les commissaires aux comptes

About the author

The article was written in December 2025 by Iris ORHAND (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026).

   ▶ Read all articles by Iris ORHAND

Historical Volatility

Saral BINDAL

In this article, Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research Assistant at ESSEC Business School) explains the concept of historical volatility used in financial markets to represent and measure the changes in asset prices.

Introduction

Volatility in financial markets refers to the degree of variation in an asset’s price or returns over time. Simply put, an asset is considered highly volatile when its price experiences large upward or downward movements, and less volatile when those movements are relatively small. Volatility plays a central role in finance as an indicator of risk and is widely used in various portfolio and risk management techniques.

In practice, the concept of volatility can be operationalized in different ways: historical volatility and implied volatility. Traders and analysts use historical volatility to understand an asset’s past performance and implied volatility as a forward-looking measure of upcoming uncertainties in the market.

Historical volatility measures the actual variability of an asset’s price over a past period, calculated as the standard deviation of its historical returns. Computed over different periods (say a month), historical volatility allows investors to identify trends in volatility and assess how an asset has reacted to market conditions in the past.

Practical Example: Analysis of the S&P 500 Index

Let us consider the S&P 500 index as an example of the calculation of volatility.

Prices

Figure 1 below illustrates the daily closing price of the S&P 500 index over the period from January 2020 to December 2025.

Figure 1. Daily closing prices of the S&P 500 index (2020-2025).
Daily closing prices of the S&P 500 Index (2020-2025)
Source: computation by the author.

Returns

Returns are the percentage gain or loss on the asset’s investment and are generally calculated using one of two methods: arithmetic (simple) or logarithmic (continuously compounded).


Returns Formulas

Where Ri represents the rate of return, and Pi denotes the asset’s price at a given point in time.

The preference for logarithmic returns stems from their property of time-additivity, which simplifies multi-period calculations (the monthly log return is equal to the sum of the daily log returns of the month, which is not the case for arithmetic returns). Furthermore, logarithmic returns align with the geometric mean and thereby mathematically capture the effects of compounding, unlike arithmetic returns, which can overstate performance in volatile markets.
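The time-additivity property can be checked numerically. A minimal sketch with made-up prices (not the article's data):

```python
import math

prices = [100.0, 103.0, 101.0, 106.0]  # hypothetical daily closes

# Arithmetic (simple) and logarithmic (continuously compounded) returns.
arith = [prices[i] / prices[i - 1] - 1 for i in range(1, len(prices))]
logs = [math.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]

# Log returns are time-additive: their sum equals the log return over
# the whole period. Arithmetic returns do not have this property.
total_log = sum(logs)
period_log = math.log(prices[-1] / prices[0])
```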

Distribution of returns

A statistical distribution describes the likelihood of different outcomes for a random variable. It begins with classifying the data as either discrete or continuous.

Figure 2 below illustrates the distribution of daily returns for S&P 500 index over the period from January 2020 to December 2025.

Figure 2. Historical distribution of daily returns of the S&P 500 index (2020-2025).
Historical distribution of daily returns of the S&P 500 index (2020-2025)
Source: computation by the author.

Standard deviation of the distribution of returns

In real life, as we do not know the mean and standard deviation of returns, these parameters have to be estimated with data.

The estimator for the mean μ, denoted by μ̂, and the estimator for the variance σ2, denoted by σ̂2, are given by the following formulas:


Formulas for the mean and variance estimators

With the following notations:

  • Ri = rate of return for the ith day
  • μ̂ = estimated mean of the data
  • σ̂2 = estimated variance of the data
  • n = total number of days for the data

These estimators are unbiased and efficient (note Bessel’s correction for the variance estimator, where we divide by (n–1) instead of n).


Unbiased estimators of the mean and variance

For the distribution of returns in Figure 2, the mean and standard deviation calculated using the formulas above are 0.049% and 1.068%, respectively (in daily units).
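A minimal sketch of these estimators on a toy sample (the returns below are made up, not the S&P 500 series):

```python
import math

# Hypothetical daily returns, for illustration only.
returns = [0.012, -0.008, 0.005, 0.010, -0.003]
n = len(returns)

mean_hat = sum(returns) / n
# Bessel's correction: divide by (n - 1) for an unbiased variance estimator.
var_hat = sum((r - mean_hat) ** 2 for r in returns) / (n - 1)
sigma_hat = math.sqrt(var_hat)
```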

Annualized volatility

As the usual time frame for humans is the year, volatility is often annualized. To obtain the annual (or annualized) volatility, we scale the daily volatility by the square root of the number of trading days in the year (τ), as shown below.


Annual Volatility formula

Where τ is the number of trading days during the calendar year.

In the U.S. equity market, the annual number of trading days typically ranges from 250 to 255 (252 trading days in 2025). This variation reflects the holiday calendar: when a holiday falls on a weekday, the exchange closes; when it falls on a weekend, trading is unaffected. In contrast, the cryptocurrency market has as many trading days as there are calendar days in a year, since it operates continuously, 24/7.
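Using the article's daily standard deviation of 1.068% and 252 trading days, the annualization step looks like this:

```python
import math

# Annualizing daily volatility: sigma_annual = sigma_daily * sqrt(tau).
daily_vol = 0.01068          # 1.068% daily, from the article
tau = 252                    # trading days in 2025
annual_vol = daily_vol * math.sqrt(tau)
# annual_vol is close to the article's 16.953% (small rounding differences).
```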

For the S&P 500 index over the period from January 2020 to December 2025, the annualized volatility is given by


 S&P500index Annual Volatility formula

Annualized mean

The calculated mean for the 5-year S&P 500 logarithmic returns is also the daily average return for the period. The annualized average return is given by the formula below.


Annualized mean formula

Where τ is the number of trading days during the calendar year.

For the S&P 500 index over the period from January 2020 to December 2025, the annualized average return is given by


Annualized mean formula

If the daily average return is much smaller than 1, the annual average return can be approximated as


Annualized mean value
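A sketch of the annualization of the mean. The daily mean below is back-solved from the article's 12.534% annual figure (an assumption for illustration). For log returns, multiplying by τ is exact because of time-additivity; the compound formula applies to arithmetic returns and stays close when the daily mean is much smaller than 1.

```python
# Daily mean back-solved from the article's 12.534% annual figure (assumption).
daily_mean = 0.00049738
tau = 252

annual_mean_log = daily_mean * tau                  # exact for log returns, ~12.534%
annual_mean_compound = (1 + daily_mean) ** tau - 1  # compound formula for arithmetic returns
```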

Application: Estimating the Future Price Range of the S&P 500 index

To develop an intuitive understanding of these figures, we can estimate the one-standard-deviation price range for the S&P 500 index over the next year. From the above calculations, we know that the annualized mean return is 12.534% and the annualized standard deviation is 16.953%.

Under the assumption of normally distributed logarithmic returns, we can say approximately with 68% confidence that the value of S&P 500 index is likely to be in the range of:


Upper and lower limits

If the current value of the S&P 500 index is $6,830, then converting these return estimates into price levels gives:


Upper and lower price limits

Based on a 68% confidence interval, the S&P 500 index is likely to trade in the range of $6,526 to $8,838 over the next year.
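The conversion from return limits to price limits can be sketched as follows, reproducing the article's linear conversion (an exact conversion of log returns to prices would use exp(), which gives slightly different levels):

```python
spot = 6830.0      # current index level, from the article
mu = 0.12534       # annualized mean return
sigma = 0.16953    # annualized volatility

# One-standard-deviation band, linear conversion as in the article.
lower = spot * (1 + mu - sigma)   # about 6,528
upper = spot * (1 + mu + sigma)   # about 8,844
```

The small differences from the article's 6,526 to 8,838 range come from rounding in the mean and volatility inputs.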

Historical Volatility

Historical volatility represents the variability of an asset’s returns over a chosen lookback period. The annualized historical volatility is estimated using the formula below.


 Historical volatility formula

With the following notations:

  • σ = Standard deviation
  • Ri = Return
  • n = total number of trading days in the period (21 for 1 month, 63 for 3 months, etc.)
  • τ = Number of trading days in a calendar year

Volatility calculated over different periods must be annualized to a common timeframe to ensure comparability, as the standard convention in finance is to express volatility on an annual basis. Therefore, when working with daily returns, we annualize the volatility by multiplying it by the square root of 252.

For example, for the S&P 500 index, the annualized historical volatilities over the last 1 month, 3 months, and 6 months, computed on December 3, 2025, are 14.80%, 12.41%, and 11.03%, respectively. Since the short-term (1-month) volatility is higher than the medium-term (3-month) and longer-term (6-month) volatilities, recent market movements have been more turbulent than in the past few months. Due to volatility clustering, periods of high volatility often persist, suggesting that this elevated turbulence may continue in the near term.

Unconditional Volatility

Unconditional volatility is a single volatility number computed from all available historical data, which in our example is the entire five years of data. It does not account for the fact that recent market behavior is more relevant for predicting tomorrow’s risk than events from past years, even though volatility changes over time. It is frequently observed that after a sudden boom or crash in the market, once the storm passes, volatility tends to revert to a constant value, given by the unconditional volatility of the entire period. This tendency is referred to as mean reversion.

For instance, using S&P 500 index data from 2020 to 2025, the unconditional volatility (annualized standard deviation) is calculated to be 16.952%.

Rolling historical volatility

A single volatility number often fails to capture changing market regimes. Therefore, a rolling historical volatility is usually generated to track the evolution of market risk. By calculating the standard deviation over a moving window, we can observe how volatility has expanded or contracted historically. This is illustrated in Figure 3 below for the annualized 3-month historical volatility of the S&P 500 index over the period 2020-2025.

Figure 3. 3-month rolling historical volatility of the S&P500 index (2020-2025).
3-month rolling historical volatility of the S&P500 index
Source: computation by the author.

In Figure 3, the 3-month rolling historical volatility is plotted along with the unconditional volatility computed over the entire period, calculated using overlapping windows to generate a continuous series. This provides a clear historical perspective, showcasing how the asset’s volatility has fluctuated relative to its long-term average.

For example, during the start of the Russia–Ukraine war (February 2022 – August 2022), a noticeable jump in volatility occurred as energy and food prices surged amid fears of supply chain disruptions, given that Russia and Ukraine are major exporters of oil, natural gas, wheat, and other commodities.

The rolling window can be either overlapping or non-overlapping, resulting in continuous or discrete graphs, respectively. Overlapping windows shift by one day, creating a smooth and continuous volatility series, whereas non-overlapping windows shift by one time period, producing a discrete series.
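A minimal pandas sketch of an overlapping rolling volatility on simulated prices (the simulated series, seed, and window length are assumptions for illustration; the article's downloadable files use the actual S&P 500 data):

```python
import numpy as np
import pandas as pd

# Simulate a price path (illustrative, not real market data).
rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 500))))

log_ret = np.log(prices / prices.shift(1))
window = 63                                        # ~3 months of trading days
# Overlapping windows: the window shifts by one day, giving a smooth series.
rolling_vol = log_ret.rolling(window).std() * np.sqrt(252)

# Single full-sample number for comparison (unconditional volatility).
unconditional_vol = log_ret.std() * np.sqrt(252)
```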

You can download the Excel file provided below, which contains the computation of returns, their historical distribution, the unconditional historical volatility, and the 3-month rolling historical volatility of the S&P 500 index used in this article.

Download the Excel file for returns and volatility calculation

You can download the Python code provided below, which contains the computation of returns and the first four moments of the distribution, and lets you experiment with the x-month rolling historical volatility function to visualize the evolution of historical volatility over time.

Download the Python code for returns and volatility calculation.

Alternatively, you can download the R code below with the same functionality as in the Python file.

Download the R code for returns and volatility calculation.

Alternative measures of volatility

We now mention a few other ways volatility can be measured: Parkinson volatility, Implied volatility, ARCH model, and stochastic volatility model.

Parkinson volatility

The Parkinson model (1980) uses the highest and lowest prices during a given period (say a month) to measure volatility. It is a high-low volatility measure, based on the difference between the maximum and minimum prices observed during the period.

Parkinson volatility is a range-based variance estimator that replaces squared returns with the squared high–low log price range, scaled to remain unbiased. It assumes a driftless geometric Brownian motion (expected growth rate of the stock price equal to zero) and is about five times more efficient than the close-to-close estimator because it accounts for the fluctuation of the stock price within the day.

For a sample of n observations (say days), the Parkinson volatility is given by


Parkinson Volatility formula

where:

  • Ht is the highest price on period t
  • Lt is the lowest price on period t
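A sketch of the estimator on made-up daily highs and lows (all values are illustrative):

```python
import math

# Hypothetical daily highs and lows.
highs = [101.2, 102.5, 101.8, 103.0]
lows = [99.5, 100.8, 100.2, 101.1]
n = len(highs)

# Parkinson: average squared high-low log range, scaled by 1 / (4 ln 2).
sum_sq = sum(math.log(h / l) ** 2 for h, l in zip(highs, lows))
daily_park = math.sqrt(sum_sq / (4 * n * math.log(2)))
annual_park = daily_park * math.sqrt(252)   # annualized, as for close-to-close vol
```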

Implied volatility

Implied Volatility (IV) is the level of volatility for the underlying asset that, when plugged into an option pricing model such as Black–Scholes–Merton, makes the model’s theoretical option price equal to the option’s observed market price.

It is a forward looking measure because it reflects the market’s expectation of how much the underlying asset’s price is likely to fluctuate over the remaining life of the option, rather than how much it has moved in the past.

The Chicago Board Options Exchange (CBOE), a leading global financial exchange operator, provides implied volatility indices such as the VIX and the Implied Correlation Index, measuring 30-day expected volatility from SPX options. These are used by traders to gauge market fear, speculate via futures, options, and ETPs, hedge equity portfolios, and manage risk during volatility spikes.
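Backing out implied volatility can be sketched with a bisection search on the Black–Scholes–Merton call price, which is increasing in volatility (a minimal illustration with hypothetical inputs, not a production pricer):

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsm_call(s, k, t, r, sigma):
    """Black-Scholes-Merton price of a European call (no dividends)."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def implied_vol(price, s, k, t, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection: the call price is increasing in sigma, so this converges."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bsm_call(s, k, t, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check with hypothetical inputs: price at 20% vol, recover it.
p = bsm_call(100, 100, 1.0, 0.03, 0.20)
iv = implied_vol(p, 100, 100, 1.0, 0.03)
```

The round-trip at the end prices a call at 20% volatility and recovers the same figure from the price.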

ARCH model

Autoregressive Conditional Heteroscedasticity (ARCH) models address time-varying volatility in time series data. Introduced by Engle in 1982, ARCH models use the size of past shocks to estimate how volatile the next period is likely to be. If recent movements were large, the model expects higher volatility; if they were small, it expects lower volatility, consistent with the idea of volatility clustering. Originally applied to inflation data, the model has since been widely used to model financial data.

ARCH models capture volatility clustering, an observation about how volatility behaves in the short term: a large movement is usually followed by another large movement, so volatility is predictable in the short term. Historical volatility gives a short-term hint of near-future changes in the market because recent noise often continues.

Generalized Autoregressive Conditional Heteroscedasticity (GARCH), developed by Bollerslev in 1986 as a refinement of Engle’s work, extends ARCH by also including past predicted volatility, not just past shocks. Both methods forecast volatility more accurately than the measures discussed above because they account for the time-varying nature of volatility.
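The clustering mechanism can be sketched with a minimal ARCH(1) simulation (parameter values are illustrative): tomorrow's variance is high exactly when today's shock was large.

```python
import math
import random

# ARCH(1): sigma_t^2 = omega + alpha * r_{t-1}^2. Illustrative parameters.
random.seed(42)
omega, alpha = 0.00001, 0.5

returns, variances = [], []
prev_r2 = 0.0
for _ in range(1000):
    var_t = omega + alpha * prev_r2          # conditional variance
    r_t = math.sqrt(var_t) * random.gauss(0.0, 1.0)
    returns.append(r_t)
    variances.append(var_t)
    prev_r2 = r_t ** 2                       # large shock -> high next variance

# Unconditional (long-run) variance of ARCH(1): omega / (1 - alpha).
uncond_var = omega / (1 - alpha)
```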

Stochastic volatility models

In practice, volatility is time-varying: it exhibits clustering, persistence, and mean reversion. To capture these empirical features, stochastic volatility (SV) models treat volatility not as a constant parameter but as a stochastic process jointly evolving with the asset price. Among these models, the Heston (1993) specification is one of the most influential.

The Heston model assumes that the asset price follows a diffusion process analogous to geometric Brownian motion, while the instantaneous variance evolves according to a mean-reverting square-root process. Moreover, the innovations to the price and variance processes are correlated, thereby capturing the leverage effect frequently observed in equity markets.
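A minimal Euler discretization of the Heston dynamics described above (parameters are illustrative, not calibrated; the full-truncation step is one common way to keep the simulated variance non-negative):

```python
import math
import random

random.seed(0)
s, v = 100.0, 0.04                  # initial price and variance
mu = 0.05                           # drift of the price process
kappa, theta, xi = 2.0, 0.04, 0.3   # mean-reversion speed, long-run variance, vol of vol
rho = -0.7                          # price-variance correlation (leverage effect)
dt, steps = 1 / 252, 252

for _ in range(steps):
    z1 = random.gauss(0.0, 1.0)
    # Correlated shock for the variance process.
    z2 = rho * z1 + math.sqrt(1 - rho**2) * random.gauss(0.0, 1.0)
    v_pos = max(v, 0.0)             # full truncation: never take sqrt of a negative
    s *= math.exp((mu - 0.5 * v_pos) * dt + math.sqrt(v_pos * dt) * z1)
    v += kappa * (theta - v_pos) * dt + xi * math.sqrt(v_pos * dt) * z2
```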

Applications in finance

This section covers key mathematical concepts and fundamental principles of portfolio management, highlighting the role of volatility in assessing risk.

The normal distribution

The normal distribution is one of the most commonly used probability distributions; it describes a random variable with a unimodal, symmetric, bell-shaped density. The probability density function for a random variable X following a normal distribution with mean μ and variance σ2 is given by


Normal distribution function

A random variable X is said to follow the standard normal distribution if its mean is zero and its variance is one.

The figure below represents the confidence intervals, showing the percentage of data falling within one, two, and three standard deviations from the mean.

Figure 4. Probability density function and confidence intervals for a standard normal variable.
Standard normal distribution
Source: computation by the author
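The coverage percentages in Figure 4 can be recomputed from the error function:

```python
import math

# For a normal variable: P(|X - mu| <= k * sigma) = erf(k / sqrt(2)).
for k in (1, 2, 3):
    coverage = math.erf(k / math.sqrt(2.0))
    print(f"within {k} sigma: {coverage:.2%}")   # ~68%, ~95%, ~99.7%
```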

Brownian motion

Robert Brown first observed Brownian motion as the erratic and random movement of pollen particles suspended in water, caused by constant collisions with water molecules. It was later formulated mathematically by Norbert Wiener and is also known as the Wiener process.

The random walk theory suggests that it is impossible to predict future stock prices because they move randomly; when the time step of a random walk becomes infinitesimally small, the process becomes Brownian motion.

In the context of financial stochastic processes, when the market is modeled by standard Brownian motion, the probability distribution of the future price is normal, whereas when it is modeled by geometric Brownian motion, future prices are lognormally distributed. This is also called the Brownian motion hypothesis on the movement of stock prices.

The process of a standard Brownian motion is given by:


Standard Brownian motion formula.

The process of a geometric Brownian motion is given by:


Geometric Brownian motion formula.

Where dSt is the change in the asset price over the infinitesimal time interval dt, dXt is the increment of a Wiener process at time t (normally distributed with mean 0 and variance dt), σ represents the price volatility, and μ represents the expected growth rate of the asset price, also known as the ‘drift’.
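A minimal simulation of a geometric Brownian motion path, using the exact lognormal step implied by the process above (all parameter values are illustrative):

```python
import math
import random

# Exact lognormal step: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z).
random.seed(1)
s0, mu, sigma = 100.0, 0.08, 0.2    # illustrative start price, drift, volatility
dt, steps = 1 / 252, 252            # one year of daily steps

path = [s0]
for _ in range(steps):
    z = random.gauss(0.0, 1.0)
    path.append(path[-1] * math.exp((mu - 0.5 * sigma**2) * dt
                                    + sigma * math.sqrt(dt) * z))
```

Because each step multiplies the price by a lognormal factor, the simulated price can never go negative, one reason GBM is preferred to standard Brownian motion for stock prices.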

Modern Portfolio Theory (MPT)

Modern Portfolio Theory (MPT), developed by Nobel laureate Harry Markowitz in the 1950s, is a framework for constructing optimal investment portfolios, derived from the foundational mean-variance model.

The Markowitz mean–variance model suggests that risk can be reduced through diversification. It proposes that risk-averse investors should optimize their portfolios by selecting a combination of assets that balances expected return and risk, thereby achieving the best possible return for the level of risk they are willing to take. The optimal trade-off curve between expected return and risk, commonly known as the efficient frontier, represents the set of portfolios that maximizes expected return for each level of standard deviation (risk).

Capital Asset Pricing Model (CAPM)

The Capital Asset Pricing Model (CAPM) builds on the model of portfolio choice developed by Harry Markowitz (1952), stated above. CAPM states that, assuming full agreement on return distributions and either risk-free borrowing/lending or unrestricted short selling, the value-weighted market portfolio of risky assets is mean-variance efficient, and expected returns are linear in the market beta.

The main result of the CAPM is a simple mathematical formula that links the expected return of an asset to its risk measured by the beta of the asset:


CAPM formula

Where:

  • E(Ri) = expected return of asset i
  • Rf = risk-free rate
  • βi = measure of the risk of asset i
  • E(Rm) = expected return of the market
  • E(Rm) − Rf = market risk premium
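A one-line sketch of the formula with hypothetical inputs:

```python
# CAPM: E(Ri) = Rf + beta_i * (E(Rm) - Rf). All inputs are illustrative.
def capm_expected_return(rf: float, beta: float, expected_market: float) -> float:
    return rf + beta * (expected_market - rf)

# A stock 20% more sensitive than the market, with a 5% market risk premium.
e_ri = capm_expected_return(rf=0.03, beta=1.2, expected_market=0.08)  # 0.09
```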

CAPM recognizes that an asset’s total risk has two components: systematic risk and specific risk, but only systematic risk is compensated in expected returns.

Returns decomposition formula.
Returns decomposition formula

Where the realized (actual) returns of the market (Rm) and the asset (Ri) may deviate from their expected values, with ε denoting the part of the asset’s return unexplained by the market (the idiosyncratic component).

Decomposition of risk.
Decomposition of risk

Systematic risk is a macro-level form of risk that affects a large number of assets to one degree or another, and therefore cannot be eliminated. General economic conditions, such as inflation, interest rates, geopolitical risk or exchange rates are all examples of systematic risk factors.

Specific risk (also called idiosyncratic risk or unsystematic risk), on the other hand, is a micro-level form of risk that affects a single asset or a narrow group of assets. It is unconnected to the market and reflects the unique nature of the asset. For example, a company-specific financial or business decision that results in lower earnings would affect that company’s stock price negatively without impacting the performance of the other assets in the portfolio. Other examples of specific risk include a firm’s credit rating, negative press reports about a business, or a strike affecting a particular company.

Why should I be interested in this post?

Understanding the different measures of volatility is a prerequisite to better assess potential losses, optimize portfolio allocation, and make informed decisions that balance risk and expected return. Volatility is fundamental to risk management and to constructing investment strategies.

Related posts on the SimTrade blog

Risk and Volatility

   ▶ Jayati WALIA Brownian Motion in Finance

   ▶ Youssef LOURAOUI Systematic Risk

   ▶ Youssef LOURAOUI Specific Risk

   ▶ Jayati WALIA Implied Volatility

   ▶ Mathias DUMONT Pricing Weather Risk

   ▶ Jayati WALIA Black-Scholes-Merton Option Pricing Model

Portfolio Theory and Models

   ▶ Jayati WALIA Returns

   ▶ Youssef LOURAOUI Portfolio

   ▶ Jayati WALIA Capital Asset Pricing Model (CAPM)

   ▶ Youssef LOURAOUI Optimal Portfolio

Financial Indexes

   ▶ Nithisha CHALLA Financial Indexes

   ▶ Nithisha CHALLA Calculation of Financial Indexes

   ▶ Nithisha CHALLA The S&P 500 Index

Useful Resources

Academic research

Bollerslev, T. (1986). Generalized Autoregressive Conditional Heteroskedasticity, Journal of Econometrics, 31(3), 307–327.

Engle, R. F. (1982). Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation, Econometrica, 50(4), 987–1007.

Fama, E. F., & French, K. R. (2004). The Capital Asset Pricing Model: Theory and Evidence, Journal of Economic Perspectives, 18(3), 25–46.

Heston, S. L. (1993). A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options, The Journal of Finance, 48(3), 1–24.

Markowitz, H. M. (1952). Portfolio Selection, The Journal of Finance, 7(1), 77–91.

Parkinson, M. (1980). The extreme value method for estimating the variance of the rate of return. Journal of Business, 53(1), 61–65.

Sharpe, W. F. (1964). Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk, The Journal of Finance, 19(3), 425–442.

Tsay, R. S. (2010). Analysis of financial time series, John Wiley & Sons.

Other

NYU Stern Volatility Lab Volatility analysis documentation.

Extreme Events in Finance Risk maps: extreme risk, risk and performance.

About the author

The article was written in December 2025 by Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research Assistant at ESSEC Business School).

   ▶ Read all articles by Saral BINDAL.

The “lemming effect” in finance

Langchin SHIU

In this article, SHIU Lang Chin (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2024-2026) explains the “lemming effect” in financial markets, inspired by the animated movie Zootopia.

About the concept

The “lemming effect” refers to situations where individuals follow the crowd unthinkingly, just as lemmings are believed to follow one another off a cliff. In finance, this idea is linked to herd behaviour: investors imitate the actions of others instead of relying on their own information or analysis.

The image above is a cartoon showing a line of lemmings running off a cliff, with several already falling through the air. The caption “The Lemming Effect: Stop! There is another way” warns that blindly following others can lead to disaster, even if “everyone is doing it.” The message is to think independently, question group behaviour, and choose an alternative path instead of copying the crowd.

In Zootopia, there is a scene where lemmings dressed as bankers leave their office and are supposed to walk straight home after work. However, after one lemming notices Nick selling popsicles and suddenly changes direction to buy one, the rest of the lemmings automatically follow and queue up too, even though this is completely different from their original route and plan. This illustrates how individuals can abandon their own path and intentions simply because they see someone else acting first, much like investors may follow others into a trade or trend without conducting independent analysis.

Watch the video!


Source: Zootopia (Disney, 2016).

The first image shows Nick Wilde (the fox) holding a red paw-shaped popsicle. In the film, Nick uses this eye‑catching pawpsicle as a marketing tool to attract the lemmings and earn a profit.

zootopia lemmings
Source: Zootopia (Disney, 2016).

The second image shows a group of identical lemmings in suits walking in and out of a building labelled “Lemming Brothers Bank.” This is a parody of the real investment bank “Lehman Brothers,” which collapsed during the 2008 financial crisis. When one lemming notices the pawpsicle, it immediately changes direction from going home and heads toward Nick to buy the product, illustrating how one individual’s choice triggers the rest to follow.

zootopia lemmings
Source: Zootopia (Disney, 2016).

The third image shows Nick successfully selling pawpsicles to a whole line of lemmings. Nick is exploiting the lemmings’ herd‑like behaviour: once a few begin buying, the others automatically copy them and all purchase the same pawpsicle. The humour lies in how Nick profits from their conformity, using their predictable group behaviour—the “lemming effect”—to make easy money.

zootopia lemmings
Source: Zootopia (Disney, 2016).

Behavioural finance uses the lemming effect to describe deviations from perfectly rational decision-making. Rather than analysing fundamentals calmly, investors may be influenced by social proof, fear of missing out (FOMO) or the comfort of doing what “everyone else” seems to be doing.

Understanding the lemming effect is important both for professional investors and students of finance. It helps to explain why markets sometimes move far away from fundamental values and reminds decision-makers to be cautious when “the whole market” points in the same direction.

How the lemming effect appears in markets

In practice, the lemming effect can be seen when large numbers of investors buy the same “hot” stocks simply because prices are rising, assuming that so many others doing the same thing cannot be wrong.

The effect also works in reverse during market downturns. Bad news, rumours, or sharp price declines can trigger a wave of selling. The fear of being the last one to exit can push investors to copy others’ behaviour rather than stick to their original plan.

Such herd-driven moves can amplify volatility, push prices far above or below intrinsic value, and create opportunities or risks that would not exist in a purely rational market. Recognising these dynamics helps investors to step back and question whether they are thinking independently.

Related financial concepts

The lemming effect connects naturally with several basic financial ideas: diversification, the risk-return trade-off, market efficiency, Keynes’ beauty contest and the GameStop story. It shows how human behaviour can distort these textbook concepts in real markets.

Diversification

Diversification means not putting all your money in the same basket (asset or sector), so that the poor performance of one investment does not destroy the whole portfolio. When the lemming effect is strong, investors often forget diversification and concentrate on a few “popular” stocks. From a diversification perspective, following the crowd can increase risk without necessarily increasing expected returns.

Risk and return

A basic principle of finance is that higher expected returns usually come with higher risk. However, when many investors behave like lemmings, they may underestimate the true risk of crowded trades. Rising prices can create an illusion of safety, even if fundamentals do not justify the move. Understanding the lemming effect reminds investors to ask whether a sustainable increase in expected return really compensates for the extra risk taken by following the crowd.

Market efficiency

In an efficient market, prices should reflect all available information. Herd behaviour and the lemming effect demonstrate that markets can deviate from this ideal when many investors react based on emotions or social cues rather than information. Short-term mispricing created by herding can eventually be corrected when new information becomes available or when rational investors intervene. For students, this illustrates why theoretical models of perfect efficiency are useful benchmarks but do not fully capture real-world behaviour.

Keynes’ beauty contest

Keynes’ “beauty contest” analogy describes investors who do not choose stocks based on their own view of fundamental value, but instead try to guess what everyone else will think is beautiful. Instead of asking “Which company is truly best?”, they ask “Which company does the average investor think others will like?” and buy that, hoping to sell to the next person at a higher price. This links directly to the lemming effect: investors watch each other and pile into the same trades, just like the lemmings all changing direction to follow the first one who goes for the pawpsicle.

GameStop story

The GameStop short squeeze in 2021 is a modern real‑world illustration of herd behaviour. A large crowd of retail investors on Reddit and other forums started buying GameStop shares together, partly for profit and partly as a social movement against hedge funds, driving the price far above what traditional valuation models would suggest. Once the price started to rise sharply, more and more people jumped in because they saw others making money and feared missing out, reinforcing the crowd dynamic in a very “lemming‑like” way.

Why should I be interested in this post?

For business and finance students, the lemming effect is a bridge between psychology and technical finance. It helps explain why prices sometimes move in surprising ways, and why sticking mindlessly to the crowd can be dangerous for long-term wealth.

Whether you plan to work in banking, asset management, consulting or corporate finance, understanding herd behaviour can improve your judgment. It encourages you to combine quantitative tools with a critical view of market sentiment, so that you do not become the next “lemming” in a crowded trade.

Related posts on the SimTrade blog

   ▶ All posts about Financial techniques

   ▶ Hadrien PUCHE “The market is never wrong, only opinions are“ – Jesse Livermore

   ▶ Hadrien PUCHE “It’s not whether you’re right or wrong that’s important, but how much money you make when you’re right and how much you lose when you’re wrong.”– George Soros

   ▶ Daksh GARG Social Trading

   ▶ Raphaël ROERO DE CORTANZE Gamestop: how a group of nostalgic nerds overturned a short-selling strategy

Useful resources

BBC Five animals to spot in a post-Covid financial jungle

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive psychology, 5(2), 207-232.

Gupta, S., & Shrivastava, M. (2022). Herding and loss aversion in stock markets: mediating role of fear of missing out (FOMO) in retail investors. International Journal of Emerging Markets, 17(7), 1720-1737.

Argan, M., Altundal, V., & Tokay Argan, M. (2023). What is the role of FoMO in individual investment behavior? The relationship among FoMO, involvement, engagement, and satisfaction. Journal of East-West Business, 29(1), 69-96.

About the author

The article was written in December 2025 by SHIU Lang Chin (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2024-2026).

   ▶ Read all articles by SHIU Lang Chin.

Time value of money

Langchin SHIU

In this article, SHIU Lang Chin (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2024-2026) explains the time value of money, a simple but fundamental concept used in all areas of finance.

Overview of the time value of money

The time value of money (TVM) is the idea that one euro today is worth more than one euro in the future because today’s money can be invested to earn interest. In other words, receiving cash earlier gives more opportunities to save, invest, and grow wealth over time. This principle serves as the foundation for valuing loans, bonds, investment projects, and many everyday financial decisions.

To work with TVM, finance uses a few key tools: present value (the value today of future cash flows), future value (the value in the future of money invested today), etc. With these elements, it becomes possible to consistently compare cash-flow patterns occurring at different dates.

Future value

The future value (FV) of money answers the question: if I invest a certain amount today at a given interest rate, how much will I have after some time? Future value uses the principle of compounding, which means that interest earns interest when it is reinvested.

For a simple case with annual compounding, the formula is:

Future Value (FV)

where PV is the amount invested today, r is the annual interest rate, and T is the number of years.

For example, if 1,000 euros are invested at 5% per year for 3 years, the future value is FV = 1,000 × (1.05)^3 = 1,157.63 euros. This shows how even a modest interest rate can increase the value of an investment over time.

Compounding frequency can also change the result. If interest is compounded monthly instead of annually, the formula is adjusted to use a periodic rate and the total number of periods. The more frequently interest is added, the higher the future value for the same nominal annual rate, illustrating why compounding is such a powerful mechanism in long-term investing.

Compounding mechanism with monthly and annual compounding.
Compounding mechanism


You can download the Excel file provided below, which contains the computation of an investment to illustrate the impact of the frequency on the compounding mechanism.

Download the Excel file for computation of an investment to illustrate the impact of the frequency on the compounding mechanism
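As a complement to the Excel file, the compounding mechanism can be sketched in a few lines of Python. This is an illustrative sketch (the function name and parameters are ours, not part of the article), reproducing the 1,000 euros at 5% for 3 years example:

```python
def future_value(pv, annual_rate, years, periods_per_year=1):
    """Future value with periodic compounding: FV = PV * (1 + r/m)^(m*T),
    where m is the number of compounding periods per year."""
    m = periods_per_year
    return pv * (1 + annual_rate / m) ** (m * years)

# 1,000 euros invested at 5% per year for 3 years
fv_annual = future_value(1000, 0.05, 3)        # annual compounding
fv_monthly = future_value(1000, 0.05, 3, 12)   # monthly compounding

print(round(fv_annual, 2))   # 1157.63, matching the example above
print(round(fv_monthly, 2))  # slightly higher: interest is added more often
```

Running the sketch confirms that, for the same nominal annual rate, monthly compounding produces a higher future value than annual compounding.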

Present value

Present value (PV) is the reverse operation of future value and answers the question: how much is a future cash flow worth today? To find PV, the future cash flow is “discounted” back to today using an appropriate discount rate that reflects opportunity cost, risk and inflation.

For a single future cash flow, the present value formula is:

Present Value (PV)

Where FV is the future amount, r is the discount rate per period, and T is the number of periods.

For example, if an investor expects to receive 1,000 euros in 2 years and the discount rate is 5% per year, the present value is PV = 1,000 / (1.05)^2 = 907.03 euros. This means the investor would be indifferent between receiving 907.03 euros today or 1,000 euros in two years at that discount rate.

Choosing the discount rate is a key step: for a safe cash flow, a risk-free rate such as a government bond yield might be used, while for a risky project, a higher rate reflecting the required return of investors would be more appropriate. A higher discount rate reduces present values, making future cash flows less attractive compared to money today.
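The present-value computation can be sketched the same way. This is an illustrative sketch (function name and the 12% “risky” rate are ours), reproducing the 1,000 euros in 2 years at 5% example and showing how a higher discount rate shrinks present value:

```python
def present_value(fv, rate, periods):
    """Discount a single future cash flow back to today: PV = FV / (1 + r)^T."""
    return fv / (1 + rate) ** periods

# 1,000 euros received in 2 years, discounted at 5% per year
pv_safe = present_value(1000, 0.05, 2)
# The same cash flow discounted at a higher, risk-adjusted rate
pv_risky = present_value(1000, 0.12, 2)

print(round(pv_safe, 2))  # 907.03, matching the example above
print(pv_risky < pv_safe)  # True: higher discount rates reduce present values
```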

Applications of the time value of money

The time value of money is used in almost every area of finance. In corporate finance, it forms the basis of discounted cash-flow (DCF) analysis, where the expected future cash flows of a project or company are discounted to estimate the net present value. Investment decisions are typically made by comparing the present value to the initial cost.

DCF

In banking and personal finance, TVM is essential to design and understand loans, deposits and retirement plans. Customers who understand how interest rates and compounding work can better compare offers, negotiate terms and plan their savings. In capital markets, bond pricing, yield calculations and valuation of many other instruments depend directly on discounting streams of cash flows.

Even outside professional finance, TVM helps individuals answer simple but important questions: is it better to take a lump sum now or a stream of payments later, how much should be saved each month to reach a future target, or what is the true cost of borrowing at a given interest rate? A good intuition for TVM improves financial decision-making in everyday life.

Why should I be interested in this post?

As a university student, understanding TVM is essential because it underlies more advanced techniques such as discounted cash-flow (DCF) valuation, bond pricing and project evaluation. It is usually one of the first technical topics taught in introductory corporate finance and quantitative methods courses.

Related posts on the SimTrade blog

   ▶ All posts about Financial techniques

   ▶ Hadrien PUCHE The four most dangerous words in investing are, it’s different this time

   ▶ Hadrien PUCHE Remember that time is money

Useful resources

Harvard Business School Online Time value of money

Investing.com Time value of money: formula and examples

About the author

The article was written in December 2025 by SHIU Lang Chin (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2024-2026).

   ▶ Read all articles by SHIU Lang Chin.

Deep Dive into evergreen funds

Emmanuel CYROT

In this article, Emmanuel CYROT (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026) introduces the ELTIF 2.0 Evergreen Fund.

Introduction

The asset management industry is pivoting to democratize private market access for the wealth segment. We are moving from the rigid Capital Commitment Model (the classic “blind pool” private equity structure) to the flexible NAV-Based Model, an open-ended structure where subscriptions and redemptions are executed at periodic asset valuations rather than through irregular capital calls. For technical product specialists, the ELTIF 2.0 regulation isn’t just a compliance update; it’s the architectural blueprint for the democratization of private markets. Here is a deep dive into how these “Semi-Liquid” or “Evergreen” structures actually work, the European landscape, and the engineering behind them.

The Liquidity Continuum: Solving the “J-Curve” Problem

To understand the evergreen structure, you have to understand what it fixes. In a traditional Closed-End Fund (the “Old Guard”):

  • The Cash Drag: You commit €100k, but the manager only calls 20% in Year 1. Your money sits idle.
  • The J-Curve: You pay fees on committed capital immediately, but the portfolio value drops initially due to costs before rising (the “J” shape).
  • The Lock: Your capital is trapped for 10-12 years. Secondary markets are your only (expensive) exit.

The Evergreen / Semi-Liquid Solution represents the structural convergence of private market asset exposure with an open-ended fund’s periodic subscription and redemption framework.

  • Fully Invested Day 1: Unlike the Capital Commitment model, your capital is put to work almost immediately upon subscription.
  • Perpetual Life: There is no “end date.” The fund can run for 99 years, recycling capital from exited deals into new ones.
  • NAV-Based: You buy in at the current Net Asset Value (NAV), similar to a mutual fund, rather than making a commitment.

The difference in investment processes between evergreen funds and closed ended funds
 The difference in investment processes between evergreen funds and closed ended funds
Source: Medium.

The European Landscape: The Rise of ELTIF 2.0

The “ELTIF 2.0” regulation (Regulation (EU) 2023/606) is the game-changer. It removed the extra local rules that held the market back in Europe. These rules included high national minimum investment thresholds for retail investors and overly restrictive limits on portfolio composition and liquidity features imposed by national regulators.

Market Data as of 2025 (Morgan Lewis)

  • Volume: The market is rapidly expanding, with more than 160 registered ELTIFs now active across Europe as of 2025.
  • The Hubs: Luxembourg is the dominant factory (approx. 60% of funds), followed by France (strong on the Fonds Professionnel Spécialisé or FPS wrapper) and Ireland.
  • The Arbitrage: The killer feature is the EU Marketing Passport. A French ELTIF can be sold to a retail investor in Germany or Italy without needing a local license. This allows managers to aggregate retail capital on a massive scale.

Structural Engineering: Liquidity

This section delves into the precise engineering required to reconcile the illiquidity of the underlying assets with the promise of periodic investor liquidity in Evergreen/Semi-Liquid funds. This is achieved through a combination of Asset Allocation Constraints and robust Liquidity Management Tools (LMTs).

The primary allocation constraint is the “Pocket” Strategy, or the 55/45 Rule. The fund is structurally divided into two distinct components. First, the Illiquid Core, which must represent greater than 55% of the portfolio, is the alpha engine holding long-term, illiquid assets such as Private Equity, Private Debt, or Infrastructure. Notably, ELTIF 2.0 has broadened the scope of this core to include newer asset classes like Fintechs and smaller listed companies. Second, the Liquid Pocket, which can be up to 45%, serves as the fund’s buffer, holding easily redeemable, UCITS-eligible assets like money market funds or government bonds. While the regulation permits a high 45% pocket, efficient fund operation typically keeps this buffer closer to 15%–20% to mitigate performance-killing “cash drag”.

Crucial to managing liquidity risk is the Gate Mechanism. Although the fund offers conditional liquidity (often quarterly), the Gate prevents a systemic crisis if many investors attempt to exit simultaneously. This mechanism works by capping redemptions at a specific percentage of the Net Asset Value (NAV) per period, commonly set at 5%. If aggregate redemption requests exceed this threshold (e.g., requests total 10%), all withdrawing investors receive a pro-rata share of the allowable 5% and the remainder of their request is deferred to the next liquidity window.
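The pro-rata mechanics of the gate can be sketched in a few lines of Python. This is an illustrative sketch (the function name, investor labels, and amounts are hypothetical), reproducing the 5% gate with 10% of requests scenario described above:

```python
def apply_redemption_gate(requests, nav, gate_pct=0.05):
    """Pro-rata redemption gate: total payouts per liquidity window are
    capped at gate_pct of NAV; the unpaid remainder of each request is
    deferred to the next window. `requests` maps investor -> amount.
    Returns investor -> (paid_now, deferred)."""
    cap = gate_pct * nav
    total = sum(requests.values())
    if total <= cap:
        return {k: (v, 0.0) for k, v in requests.items()}  # gate not triggered
    scale = cap / total  # honoured fraction, identical for all investors
    return {k: (v * scale, v * (1 - scale)) for k, v in requests.items()}

# NAV of 100m with a 5% gate allows 5m of redemptions per window; requests
# total 10m, so each investor is paid half now and half is deferred:
result = apply_redemption_gate({"A": 6_000_000, "B": 4_000_000}, nav=100_000_000)
print(result["A"])  # (3000000.0, 3000000.0)
```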

Finally, managers utilize Anti-Dilution Tools like Swing Pricing to protect the financial interests of the long-term investors remaining in the fund. In a scenario involving heavy redemptions, where the fund manager is forced to sell assets quickly and incur high transaction costs, Swing Pricing adjusts the NAV downwards only for the exiting investors. This critical mechanism ensures that those demanding liquidity—the “leavers”—bear the transactional “cost of liquidity,” thereby insulating the NAV of the “stayers” from dilution.
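Swing pricing can be sketched in a similar spirit. The following is a simplified partial-swing model under assumed threshold and swing-factor values (the 1% threshold and 2% factor are illustrative, not regulatory or drawn from any actual fund):

```python
def swing_nav(nav_per_share, net_flow_pct, threshold_pct=0.01, swing_factor=0.02):
    """Partial swing pricing sketch: when net flows exceed a threshold
    (expressed as a fraction of fund NAV), the dealing NAV is swung so that
    transacting investors bear the estimated cost of trading the portfolio."""
    if net_flow_pct < -threshold_pct:   # heavy net redemptions
        return nav_per_share * (1 - swing_factor)
    if net_flow_pct > threshold_pct:    # heavy net subscriptions
        return nav_per_share * (1 + swing_factor)
    return nav_per_share                # flows within threshold: no swing

# Heavy redemptions (net -5% of NAV): the "leavers" deal at a marked-down
# NAV, insulating the remaining investors from liquidation costs.
print(swing_nav(100.0, -0.05))  # 98.0
print(swing_nav(100.0, 0.002))  # 100.0: small flows leave the NAV unswung
```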

Why should I be interested in this post?

Mastering ELTIF 2.0 architecture offers a definitive edge over the standard curriculum. With the industry pivoting toward the “retailization” of private markets, understanding the engineering behind evergreen funds and liquidity gates demonstrates a level of practical sophistication that moves beyond theory—exactly what recruiters at top-tier firms like BlackRock or Amundi are seeking for their next analyst class.

Related posts on the SimTrade blog

   ▶ David-Alexandre BLUM The selling process of funds

Useful resources

Société Générale Fonds Evergreen et ELTIF 2 : Débloquer les Marchés Privés pour les Investisseurs Particuliers

About the author

The article was written in December 2025 by Emmanuel CYROT (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026).

   ▶ Read all articles by Emmanuel CYROT.

Interest Rates and M&A: How Market Dynamics Shift When Rates Rise or Fall

 Emanuele BAROLI

In this article, Emanuele BAROLI (MiF 2025–2027, ESSEC Business School) examines how shifts in interest rates shape the M&A market, outlining how deal structures differ when central banks raise versus cut rates.

Context and objective

The purpose is to explain what interest rates are, how they interact with inflation and liquidity, and how these variables shape merger and acquisition (M&A) activity. The intended outcome is an operational lens you can use to read the current monetary cycle and translate it into cost of capital, valuation, financing structure, and execution windows for deals, distinguishing—when useful—between corporate acquirers and private-equity sponsors.

What are interest rates

Interest rates are the intertemporal price of funds. In economic terms they remunerate the deferral of consumption, insure against expected inflation, and compensate for risk. For real decisions the relevant object is the real rate because it governs the trade-off between investing or consuming today versus tomorrow.

Central banks anchor the very short end through the policy rate and the management of system liquidity (reserve remuneration, market operations, balance-sheet policies). Markets then map those signals into the entire yield curve via expectations about future policy settings and required term premia. When liquidity is ample and cheap, risk-free yields and credit spreads tend to compress; when liquidity becomes scarcer or dearer, yields and spreads widen even without a headline change in the policy rate. This transmission, with its usual lags, is the bridge from monetary conditions to firms’ investment choices.

M&A industry — a definition

The M&A industry comprises mergers and acquisitions undertaken by strategic (corporate) acquirers and by financial sponsors. Activity is the joint outcome of several blocks: the cost and elasticity of capital (both debt and equity), expectations about sectoral cash flows, absolute and relative valuations for public and private assets, regulatory and antitrust constraints, and the degree of managerial confidence. Interest rates sit at the center because they enter the denominator of valuation models—through the discount rate—and they shape bankability constraints through the debt service burden. In other words, rates influence both the price a buyer can rationally pay and the feasibility of financing that price.

Use of leverage

Leverage translates a given cash-flow profile into equity returns. In leveraged acquisitions—especially LBOs—the all-in cost of debt is set by a market benchmark (in practice, Term SOFR at three or six months in the U.S., and Euribor in the euro area) plus a spread reflecting credit risk, liquidity, seniority, and the supply–demand balance across channels such as term loans, high-yield bonds, and private credit. That all-in cost determines sustainable leverage, shapes covenant design, and fixes the headroom on metrics like interest coverage and net leverage. It ultimately caps the bid a sponsor can submit while still meeting target returns. Corporate acquirers usually employ more modest leverage, yet remain rate-sensitive because medium-to-long risk-free yields and investment-grade spreads feed both fixed-rate borrowing costs and the WACC used in DCF and accretion tests, and they influence the value of stock consideration in mixed or stock-for-stock deals.
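The link between the all-in cost of debt and sustainable leverage can be illustrated by backing debt capacity out of an interest-coverage covenant. This is a stylized sketch with hypothetical numbers (EBITDA, benchmark, spread, and coverage floor are all assumptions), not a full LBO model:

```python
def max_sustainable_debt(ebitda, benchmark_rate, credit_spread, min_coverage=2.0):
    """Maximum debt consistent with an interest-coverage covenant.
    Constraint: EBITDA / interest >= min_coverage, with
    interest = (benchmark + spread) * debt, which rearranges to
    debt <= EBITDA / (min_coverage * all_in_cost)."""
    all_in_cost = benchmark_rate + credit_spread
    return ebitda / (min_coverage * all_in_cost)

# Hypothetical target: 100m EBITDA, 350bp spread, 2.0x minimum coverage,
# under a 1% vs 5% benchmark rate (e.g. a SOFR- or Euribor-like rate)
low_rates = max_sustainable_debt(100, 0.01, 0.035)
high_rates = max_sustainable_debt(100, 0.05, 0.035)

print(round(low_rates))   # cheap-money environment: high sustainable leverage
print(round(high_rates))  # after tightening: the same covenant supports far less debt
```

The 400bp rise in the benchmark roughly halves the debt the same covenant can support, which is exactly the channel through which rates cap the bid a sponsor can submit.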

How interest rates impact the M&A industry

The connection from rates to M&A operates through three main channels. The first is valuation: holding cash flows constant, a higher risk-free rate or higher term premia lifts discount rates, lowers present values, and compresses multiples, thereby narrowing the economic room to pay a control premium. The second is bankability: higher benchmarks and wider spreads raise coupons and interest expense, reduce sustainable leverage, and shrink the set of financeable deals—most visibly for sponsors whose equity returns depend on the spread between debt cost and EBITDA growth. The third is market access: heightened rate volatility and tighter liquidity reduce underwriting depth and risk appetite in loans and bonds, delaying signings or closings; the mirror image under easing—lower rates, stable curves, and tighter spreads—reopens windows, enabling new-money term funding and refinancing of maturities. The net effect is a function of level, slope, and volatility of the curve: lower and calmer curves with steady spreads tend to support volumes; high or unstable curves, even with unchanged spreads, enforce selectivity.
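The valuation channel described above can be illustrated with a minimal discounted-cash-flow sensitivity (the cash flows and the 6% vs 10% rates are illustrative, not drawn from any specific deal):

```python
def dcf_value(cash_flows, discount_rate):
    """Present value of a stream of year-end cash flows at a flat discount rate."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Identical cash flows valued in two rate environments
flows = [100.0] * 10          # 100 per year for 10 years
v_low = dcf_value(flows, 0.06)    # easing cycle
v_high = dcf_value(flows, 0.10)   # after tightening

print(round(v_low, 1), round(v_high, 1))
# Holding cash flows constant, the higher-rate value is materially lower,
# narrowing the economic room to pay a control premium.
```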

Evidence from 2021–2024 and what the chart shows

M&A deals and interest rates (2021-2024).
M&A deals and interest rates (2021-2024)
Source: Fed.

The global pattern over 2021–2024 is consistent with this mechanism. In 2021, deal counts reached a cyclical peak in an environment of near-zero short-term rates, abundant liquidity, and elevated equity valuations; frictions on the cost of capital were minimal and access to debt markets was easy, so the economic threshold for completing transactions was lower. Between 2022 and 2024, monetary tightening lifted short-term benchmarks rapidly while spreads and uncertainty rose; global deal counts fell materially and the market became more selective, favoring higher-quality assets, resilient sectors, and transactions with stronger industrial logic. Over this period, global deal counts were 58,308 in 2021, 50,763 in 2022, 39,603 in 2023, and 36,067 in 2024, while U.S. short-term rates moved from roughly 0.14% to above 5%; the chart shows an inverse co-movement between the cost of money and activity. Correlation is not causation—antitrust enforcement, energy shocks, equity multiple swings, and the rise of private credit also mattered—but the macro signal aligns with monetary transmission.

What does academic research say

Academic research broadly confirms the mechanism sketched above: when policy rates rise and financing conditions tighten, both the volume and composition of M&A activity change. Using U.S. data, Adra, Barbopoulos, and Saunders (2020) show that increases in the federal funds rate raise expected financing costs, are followed by more negative acquirer announcement returns, and significantly increase the probability that deals are withdrawn, especially when monetary policy uncertainty is high. Fischer and Horn (2023) and Horn (2021) exploit high-frequency monetary-policy shocks and find that a contractionary shock leads to a persistent fall in aggregate deal numbers and values—on the order of 20–30%—with the effect concentrated among financially constrained bidders; at the same time, the average quality of completed deals improves because weaker acquirers are screened out. Work on leveraged buyouts links this to credit conditions: Axelson et al. (2013) document that cheap and abundant credit is associated with higher leverage and higher buyout prices relative to comparable public firms, while theoretical models such as Nicodano (2023) show how optimal LBO leverage and default risk respond systematically to the level of risk-free rates and credit spreads.

Related posts on the SimTrade blog

   ▶ Bijal GANDHI Interest Rates

   ▶ Nithisha CHALLA Relation between gold price and interest rate

   ▶ Roberto RESTELLI My internship at Valori Asset Management

Useful resources

Academic articles

Adra, S., Barbopoulos, L., & Saunders, A. (2020). The impact of monetary policy on M&A outcomes. Journal of Corporate Finance, 62, 1-61.

Fischer, J., & Horn, C.-W. (2023). Monetary Policy and Mergers and Acquisitions. Working paper, available at SSRN.

Horn, C.-W. (2021). Does Monetary Policy Affect Mergers and Acquisitions? Working paper.

Axelson, U., Jenkinson, T., Strömberg, P., & Weisbach, M. S. (2013). Borrow Cheap, Buy High? The Determinants of Leverage and Pricing in Buyouts. The Journal of Finance, 68(6), 2223-2267.

Financial data

Federal Reserve Bank of New York Effective Federal Funds Rate (EFFR): methodology and data

Federal Reserve Bank of St. Louis Effective Federal Funds Rate (FEDFUNDS)

OECD Data Long-term interest rates

About the author

The article was written in November 2025 by Emanuele BAROLI (ESSEC Business School, Master in Finance (MiF), 2025–2027).

   ▶ Read all articles by Emanuele BAROLI.

Drafting an Effective Sell-Side Information Memorandum: Insights from a Sell-Side Investment Banking Experience

 Emanuele BAROLI

In this article, Emanuele BAROLI (ESSEC Business School, Master in Finance (MiF), 2025–2027) explains how to draft an M&A Information Memorandum, translating sell-side investment-banking practice into a clear, evidence-based guide that buyers can use to progress from interest to a defensible bid.

What is an Info Memo

An information memorandum is a confidential, evidence-based sales document used in M&A processes to enable credible offers while safeguarding the sell-side process. It sets out what is being sold, why it is attractive, and how the deal is framed, and it is structured—consistently and without redundancy—around the following chapters: Executive Summary, Key Investment Highlights, Market Overview, Business Overview, Historical Financial Performance and Current-Year Budget, Business Plan, and Appendix. Each section builds on the previous one so that every claim in the narrative is traceable to data, definitions, and documents referenced in the appendix and the data room.

Executive summary

The executive summary is the gateway to the memorandum and must allow a prospective acquirer to grasp, within a few pages, what is being sold, why the asset is attractive, and how the transaction is framed. It should state the perimeter of the deal, the nature of the stake or assets included, and the essence of the equity story in language that is direct, verifiable, and consistent with the evidence presented later. The narrative should situate the company in its market, outline the recent trajectory of scale, profitability, and cash generation, and articulate—in plain terms—the reasons an informed buyer might assign strategic or financial value. Nothing here should rely on empty superlatives; every claim in the summary must be traceable to supporting material in subsequent sections and to documents made available in the data room. Clarity and internal consistency matter more than flourish: the reader should finish this section knowing what the asset is, why it matters, and what next steps the process anticipates.

Key investment highlights

This section filters the equity story into a small number of decisive arguments, each of which combines a clear assertion, hard evidence, and an explicit investor implication. The prose should explain, not advertise: sustainable growth drivers, defensible competitive positioning, quality and predictability of revenue, conversion of earnings into cash, discipline in capital allocation, credible management execution, and identifiable avenues for organic expansion or bolt-on M&A. Each highlight should read as a self-contained reasoning chain—statement, proof, consequence—so that a buyer can connect operational facts to valuation logic.

Market overview

The market overview demonstrates that the asset operates within an addressable space that is sizeable, healthy, and legible. Begin by defining the market perimeter with precision so that later revenue segmentations align with it. Describe the current size and structure of demand, the expected growth over a three-to-five-year horizon, and the drivers that sustain or threaten that growth—technological shifts, regulatory trends, customer procurement cycles, and macro sensitivities. Map the competitive landscape in terms of concentration, barriers to entry, switching costs, and price dynamics across channels. Distinguish between the immediate market in which the company competes and the broader industry environment at national or international level, explaining how each influences pricing power, customer acquisition, and margin stability. All figures and characterizations should be sourced to independent references, allowing the reader to verify both methodology and magnitude.

Business overview

The business overview explains plainly how the company creates value. It should describe what is sold, to whom, and through which operating model, covering products and services, relevant intellectual property or certifications, customer segments and geographies served, and the logic of revenue generation and pricing. The text should make the differentiation intelligible—quality, reliability, speed, functionality, service levels, or total cost of ownership—and then connect that differentiation to commercial traction. Operations deserve a concise, concrete treatment: footprint, capacity and utilization, supply-chain architecture, service levels, and, where material, the technology stack and data security posture. The section should close with the people who actually run the company and are expected to remain post-closing, outlining roles, governance, and incentive alignment. The aim is not to impress with jargon but to let an investor see a coherent engine that turns inputs into outcomes.

Historical financial performance and budget

This chapter turns performance into an intelligible narrative. Present the historical income statement, balance sheet, and cash flow over a three-to-five-year window—preferably audited—and reconcile management accounts with statutory figures so that definitions, policies, and adjustments are transparent. Replace tables-for-tables’ sake with analysis: show where growth and margins come from by decomposing revenue into volume, price, and mix; explain EBITDA dynamics through efficiency, pricing, and non-recurring items; separate maintenance from growth capex; and trace how earnings convert into cash by discussing working-capital movements and seasonality. In a live process, the current-year budget should set out the explicit operating assumptions behind it, the key milestones and risks, and a brief intra-year read so a buyer can compare budget to year-to-date performance. If carve-outs, acquisitions, or other discontinuities exist, present clean pro forma views so the time series remains comparable.

Business plan

The business plan translates the equity story into forward-looking numbers and commitments that can withstand diligence. Build the plan from drivers rather than percentages: revenue as a function of volumes, pricing, mix, and retention; costs split between fixed and variable components with operational leverage and efficiency initiatives laid out; capital needs expressed through capex, working-capital discipline, and any anticipated financing structure. Provide a three-to-five-year view of P&L, cash flow, and balance-sheet implications, making explicit the capacity constraints, hiring requirements, and lead times that link initiatives to outcomes. A sound plan includes a base case and either sensitivities or alternative scenarios, together with risk mitigations that are actually within management control. If bolt-on M&A features in the strategy, describe the screening criteria, integration capability, and the nature of the synergies in a way that distinguishes aspiration from execution.

Appendix

The appendix holds detail without overloading the core narrative and preserves auditability. It should contain the full legal disclaimer and confidentiality terms, a glossary of definitions and KPIs to eliminate ambiguity, detailed financial schedules and reconciliation notes, methodological summaries and citations for market data, concise contractual information for key customers and suppliers where material, operational and ESG indicators that genuinely affect value, and a process note with timeline, bid instructions, Q&A protocols, and site-visit guidance. The organizing principle is traceability: any figure or claim in the memorandum should be traceable to a line item or document referenced here and made available in the data room.

Why should you be interested in this post?

For students interested in corporate finance and M&A, this post shows how to translate sell-side practice into a rigorous structure that investors can actually diligence—an essential skill for internships and analyst roles.

Related posts on the SimTrade blog

   ▶ Roberto RESTELLI BCapital Fund at Bocconi: building a student-run investment fund

   ▶ Louis DETALLE A quick presentation of the M&A field…

   ▶ Ian DI MUZIO My Internship Experience at ISTA Italia as an In-House M&A Intern

Useful resources

Corporate Finance Institute (CFI) Confidential Information Memorandum (CIM)

DealRoom How to Write an M&A Information Memorandum

About the author

The article was written in December 2025 by Emanuele BAROLI (ESSEC Business School, Master in Finance (MiF), 2025–2027).

   ▶ Read all articles by Emanuele BAROLI.

At what point does diversification become “Diworsification”?

Yann TANGUY

In this article, Yann TANGUY (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027) explains the concept of “diworsification” and shows how to avoid falling into its trap.

The Concept of Diworsification

The word “diworsification” was coined by the famous portfolio manager Peter Lynch to describe the habit of adding investments to a portfolio which, instead of improving its risk-adjusted return, merely add complexity. It reflects a common misconception about one of the fundamental pillars of Modern Portfolio Theory (MPT): diversification.

Whereas the adage “don’t put all your eggs in one basket” exemplifies the foundation of prudent portfolio building, diworsification occurs when an investor adds too many baskets and thus loses sight of the quality and purpose of each one.

This mistake comes from a fundamental misunderstanding of what diversification actually is. Diversification is not a function of the number of assets an investor owns but of the correlations between those assets. If an investor adds assets that are highly correlated with those already held, the risk-reduction effect of diversification is greatly diminished, and the portfolio’s potential return can be diluted.

Practical Example

Let’s assume there are two investors.

An investor who is interested in the tech industry may hold shares in 20 different software and hardware companies. This portfolio appears diversified on the surface. However, since all the companies are in the same industry, they are exposed to the same market forces and risks. In a downturn of the tech industry, many of the stocks are likely to decline at the same time due to their high correlation.

A second investor maintains a portfolio of three low-cost index funds: one dedicated to the total US stock market, another for the total international stock market, and a third focusing on the total bond market. Despite the simplicity of holding just these three positions, this investor enjoys a far more effective level of diversification in their portfolio. The assets, US stocks, international stocks, and bonds, have a low correlation with one another. Consequently, poor performance in one asset class is likely to be counterbalanced by stable or positive returns in another, resulting in a smoother return profile and a reduction in overall portfolio risk.

The portfolio of the first investor is a textbook case of diworsification. Increasing the number of technology stocks did not diversify risk; it merely added complexity and diluted the impact of the best-performing stocks.

The point at which diversification begins to work against itself can be identified using several criteria. Diversification’s primary goal is to improve the risk-adjusted return, a concept often evaluated using the Sharpe ratio. Diworsification begins when adding a new asset no longer improves the portfolio’s Sharpe ratio.
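This Sharpe-ratio criterion can be illustrated with a minimal Python sketch. All return, volatility, and correlation figures below are hypothetical, chosen only to show the mechanism:

```python
import numpy as np

def portfolio_sharpe(weights, mu, cov, rf=0.02):
    """Sharpe ratio of a portfolio given expected returns and a covariance matrix."""
    w = np.asarray(weights)
    ret = w @ mu                      # portfolio expected return
    vol = np.sqrt(w @ cov @ w)        # portfolio volatility
    return (ret - rf) / vol

def cov_matrix(sigma, rho):
    """Two-asset covariance matrix from volatilities and a correlation."""
    corr = np.array([[1.0, rho], [rho, 1.0]])
    return np.outer(sigma, sigma) * corr

# Hypothetical assets: identical 8% expected return and 20% volatility.
mu = np.array([0.08, 0.08])
sigma = np.array([0.20, 0.20])

base = portfolio_sharpe([1.0, 0.0], mu, cov_matrix(sigma, 0.9))          # single asset
correlated = portfolio_sharpe([0.5, 0.5], mu, cov_matrix(sigma, 0.9))    # add a 0.9-correlated asset
uncorrelated = portfolio_sharpe([0.5, 0.5], mu, cov_matrix(sigma, 0.0))  # add an uncorrelated asset

print(f"Single asset:     {base:.3f}")
print(f"Add correlated:   {correlated:.3f}")    # barely improves: diworsification
print(f"Add uncorrelated: {uncorrelated:.3f}")  # clear improvement
```

Adding the highly correlated asset leaves the Sharpe ratio almost unchanged, while the uncorrelated asset improves it markedly: the practical test the paragraph above describes.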

You can download the Excel below with a numerical example of the impact of correlation in diversification.

Download the Excel file with the diversification example

Here is a short summary of what is shown in the Excel spreadsheet.

We used two portfolios, each with two assets, both portfolios having the same expected return and the same average asset volatility. The only difference is that the first portfolio holds correlated assets, whereas the second holds non-correlated assets.

Correlated portfolio returns over volatility

Non-Correlated portfolio returns over volatility

As these graphs show, the diversification effect is much stronger for the non-correlated portfolio, leading to higher returns for a given level of volatility.
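The mechanics behind these graphs can be sketched with the standard two-asset volatility formula. The figures below are illustrative and are not the spreadsheet’s values:

```python
import math

def portfolio_volatility(w1, sigma1, sigma2, rho):
    """Volatility of a two-asset portfolio:
    sigma_p = sqrt(w1^2 s1^2 + w2^2 s2^2 + 2 w1 w2 rho s1 s2)."""
    w2 = 1.0 - w1
    var = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 + 2 * w1 * w2 * rho * sigma1 * sigma2
    return math.sqrt(var)

# Hypothetical 50/50 portfolio of two assets, each with 20% volatility.
for rho in (1.0, 0.8, 0.0, -0.5):
    vol = portfolio_volatility(0.5, 0.20, 0.20, rho)
    print(f"correlation {rho:+.1f} -> portfolio volatility {vol:.1%}")
```

With perfect correlation the portfolio keeps the full 20% volatility of its components; as the correlation falls, portfolio volatility drops even though each asset is individually just as risky.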

Target number of assets for a diversified portfolio

One of the most important considerations when assembling a portfolio is determining the number of assets beyond which additional holdings no longer improve diversification and diworsification sets in. Studies of equity markets have indicated that a portfolio of 20 to 30 stocks is enough to diversify away most unsystematic risk.

However, this number varies with the asset class and the complexity of the assets. In the world of alternative investments, a landmark study, “Hedge fund diversification: how much is enough?”, was published by François-Serge Lhabitant and Michelle Learned in 2002 in the Journal of Alternative Investments. The authors aimed to dispel the myth that ‘more is better’ in the complex world of hedge funds. Analyzing the effect of portfolio size on risk and return, they found that although adding funds reduces risk, the marginal benefits of diversification diminish rapidly.

Importantly, they found that adding too many funds could lead to a convergence toward average market returns, effectively eroding the “alpha” (excess return) that investors seek from active management. Furthermore, even when volatility is reduced, other dimensions of risk, captured by skewness and kurtosis, can worsen. The significance of this research is that it offers empirical evidence for the phenomenon of ‘diworsification’: beyond a certain point, adding assets to a portfolio worsens its efficiency.

Crossover from Diversification to Diworsification

The crossover from diversification to diworsification is normally marked by three main factors.

The first is diluted returns: as the number of assets increases, the performance of the portfolio starts to resemble that of a market index, albeit with higher costs. The favorable influence of a handful of significant winners is offset by the poor performance of many other holdings.

The second is rising costs: each asset, and particularly each asset held through a managed fund, carries costs such as transaction fees, management fees, or research expenses. The more assets there are, the more these costs add up, ultimately imposing a drag on final performance.

The third is unnecessary complexity: a portfolio with too many holdings becomes hard to monitor, analyze, and rebalance, which can confuse investors about their asset allocation and expose the portfolio to unintended risk.

Causes of Diworsification

The causes of diworsification differ systematically between individual and institutional investors. For individual investors, the mistake usually arises from an incorrect understanding of genuine diversification, leading to an emphasis on the number of holdings rather than their quality. Behavioral biases can also generate portfolios concentrated in highly correlated securities: familiarity bias, a preference for investing in well-known firms, or fear of missing out, which drives investors toward recently outperforming “hot” stocks.

The causes of diworsification for institutional investors are fundamentally different. The asset management business creates pressures that can lead to diworsification. Fund managers, measured against a benchmark index, may prefer to build oversized funds whose portfolios closely resemble the index, a practice called “closet indexing.” While such a strategy reduces the risk of underperforming the benchmark and thus losing clients, it also ensures that the fund will not show meaningful outperformance, all while collecting fees for what is wrongly presented as active management. In addition, the sale of complex products such as “funds of funds” adds further layers of fees and can mask the fact that the underlying assets are often far from unique.

How to avoid Diworsification

Avoiding diworsification does not mean abandoning diversification; rather, it demands a more intelligent strategy. The emphasis should move from the raw number of holdings to the portfolio’s asset allocation. The key is to mix asset classes with low or even negative correlations to one another, for example stocks, government bonds, real estate, and commodities. This method provides more solid protection against price fluctuations than holding a long list of homogeneous stocks.

A low-cost and efficient means for many investors to achieve this goal is to utilize broad-market index funds and ETFs. These financial products give exposure to thousands of underlying securities representing full asset classes within a single holding, thus eliminating the difficulties and high costs of creating an equivalent portfolio of single assets.

Conclusion

Modern Portfolio Theory provides a powerful framework for building investment portfolios, and diversification remains its essential concept. However, implementing this concept requires thought. Diworsification stems from a misinterpretation of the objective: the goal is not to add assets simply for their number, but to improve the risk-return profile of the portfolio as a whole.

A successful diversification strategy is built on a foundation of asset allocation across low-correlation assets. By focusing on the quality of diversification rather than the quantity of positions, investors can build portfolios better aligned with their goals, avoiding the unnecessary costs and diluted returns of a diworsified outcome.

Why should I be interested in this post?

Diworsification is a trap that should be avoided, and it is easy to avoid once you understand the mechanisms at work behind it.

Related posts on the SimTrade blog

   ▶ All posts about Financial techniques

   ▶ Raphael TRAEN Understanding Correlation

   ▶ Youssef LOURAOUI Minimum Volatility Portfolio

Useful resources

Lhabitant, F.-S., M. Learned (2002) Hedge fund diversification: how much is enough? Journal of Alternative Investments, 5(3):23-49.

Lynch P., J. Rothchild (2000) One up on Wall Street. New York: Simon & Schuster.

Markowitz H. (1952) Portfolio Selection, The Journal of Finance, 7(1):77–91.

About the author

This article was written in November 2025 by Yann TANGUY (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027).

Understanding Snowball Products: Payoff Structure, Risks, and Market Behavior

Tianyi WANG

In this article, Tianyi WANG (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2022-2026) explains the structure, payoff, and risks of Snowball products — one of the most popular and complex structured products in Asian financial markets.

Introduction

Structured products can be positioned along a broad risk–return spectrum.

Snowball Structure Product
Source: public market data.

As shown in the figure above, Snowball Notes belong to the category of yield-enhancement products, typically offering annualized returns of around 8% to 15%. These products sit between capital-protected structures, which provide lower but more stable returns, and high-risk leveraged instruments such as warrants. This placement highlights a key feature of Snowballs: while they provide attractive coupons under normal market conditions, they come with conditional downside risk once the knock-in barrier is breached. Understanding this relative positioning helps explain why Snowballs are widely marketed during stable or range-bound markets but may expose investors to significant losses when volatility spikes.

Snowball options have become widely traded structured products in Asian equity markets, especially in China, Korea, and Hong Kong. They appeal to investors seeking stable returns in range-bound markets. However, their path-dependent nature and embedded option risks make them highly sensitive to market volatility. During periods of rapid market decline, many Snowball products experience “knock-in” events or even large losses.

To be more specific, a knock-in event occurs when the underlying asset’s price falls below (or rises above, depending on the product design) a predetermined barrier level during the life of the product. Once this barrier is breached, the Snowball option “activates” the embedded option exposure—typically converting what was originally a principal-protected or coupon-paying structure into one that behaves like a short option position. As a result, the investor becomes directly exposed to downside risks of the underlying asset, often leading to significant mark-to-market losses.

This article explains how Snowball products work, their payoff structure, the embedded risks, and how market behavior affects investor outcomes.

Who buys Snowball products?

Snowball products are purchased mainly by:

  • Retail investors — especially in mainland China and Korea, attracted by high coupons and the perception of stability.
  • High-net-worth individuals (HNWI) — through private banking channels.
  • Institutional investors — such as securities firms and structured product funds, often using Snowballs for yield enhancement.

Because Snowballs involve complex embedded options, they are considered unsuitable for inexperienced retail investors. Nevertheless, retail participation has grown significantly in Asian markets.

What is a Snowball product?

A Snowball is a structured product linked to an equity index (e.g., CSI 500, HSCEI) or a single stock. It provides a fixed coupon if the underlying asset stays within certain price barriers. The product contains three key components:

  • Autocall (Knock-out) — product terminates early at a profit if the underlying rises above a set level.
  • Knock-in — if the underlying falls below a certain barrier, the investor becomes exposed to downside risk.
  • Coupon payment — paid periodically as long as knock-in does not occur and knock-out does not trigger.

Snowballs earn steady income in stable markets, but losses can become severe when markets experience sharp declines.

The name “Snowball” comes from the idea of a snowball rolling downhill: it grows larger over time. In structured products, the coupon accumulates (or “rolls”) as long as the product does not knock-in or knock-out. As the months go by, the investor receives a growing stream of accrued coupons — similar to a snowball becoming bigger. However, like a snowball that can suddenly break apart if it hits an obstacle, the product can suffer significant losses once the knock-in barrier is breached.

Market behavior: what does it mean?

In the context of Snowball pricing and risk, “market behavior” refers to two dimensions:

  • Financial market behavior (price dynamics) — movements of the underlying index or stock, volatility levels, liquidity conditions, and short-term shocks. This includes trends such as rallies, range-bound phases, or sharp sell-offs that affect knock-in and knock-out probabilities.
  • Investor behavior — how different market participants react: hedging flows from issuers, panic selling during downturns, retail speculation, institutional risk reduction, and shifts in investor sentiment. These behaviors can reinforce price moves and alter Snowball risk.

Together, these elements form “market behavior”: the interaction between market movements and investor actions. For Snowballs, this directly affects whether the product pays coupons, knocks out early, or falls into knock-in and creates losses.

Key barriers in Snowball products

Knock-out (Autocall) barrier

If at any observation date the price exceeds the knock-out barrier (e.g., 103%), the product terminates early and investors receive principal plus accumulated coupons.

Knock-in barrier

If the price falls below the knock-in barrier (e.g., 80%), the product enters a risk state. If at maturity the price remains below the strike, the investor bears the underlying’s loss.

How Snowball payoffs work

The payoff of a Snowball is path-dependent, meaning it depends on the entire trajectory of the underlying index, not just the final price at maturity.

There are three typical outcomes:

Knock-out outcome (early exit)

If the underlying exceeds the knock-out level early, the investor receives:
Principal + accumulated coupons

No knock-in, no knock-out (maturity coupon)

If the underlying never crosses either barrier:
Principal + full coupons

Knock-in triggered (risky outcome)

If knock-in occurs and the final price ends below strike:
The investor bears the underlying loss

Thus, Snowballs deliver strong returns in stable or mildly rising markets but carry significant losses in bear markets.
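The three outcomes above can be sketched as a simple path classifier. This is a simplified model with hypothetical conventions (monthly observation of both barriers, a 103% knock-out, an 80% knock-in, a 1% monthly coupon, and a strike at the initial level), not the terms of any actual product:

```python
def snowball_outcome(path, knock_out=1.03, knock_in=0.80,
                     coupon=0.01, notional=100.0):
    """Classify a Snowball outcome from a monthly price path (prices
    normalized so the initial level is 1.0)."""
    knocked_in = False
    for month, price in enumerate(path, start=1):
        if price >= knock_out:                     # autocall: early exit with accrued coupons
            return "knock-out", notional * (1 + coupon * month)
        if price <= knock_in:                      # barrier breached: product enters risk state
            knocked_in = True
    final = path[-1]
    if knocked_in and final < 1.0:                 # risky outcome: investor bears the loss
        return "knock-in loss", notional * final
    return "maturity coupon", notional * (1 + coupon * len(path))

print(snowball_outcome([1.01, 1.02, 1.05]))        # rallies above 103%: early autocall
print(snowball_outcome([0.95, 1.00, 0.98, 0.99]))  # range-bound: full coupons at maturity
print(snowball_outcome([0.90, 0.75, 0.85, 0.82]))  # touches 80% and ends below strike: loss
```

The third path shows the path dependence: even though it ends at 82% of the initial level, it is the earlier touch of the 80% barrier that converts the product into a loss-bearing position.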

Why Snowball products are risky

Although marketed as “income products,” Snowballs are essentially short-volatility strategies: the investor effectively sells downside protection to the issuer in exchange for the coupons.

Key risks include:

  • High volatility increases knock-in probability
  • Sharp declines lead to principal losses
  • Liquidity risk
  • Complex payoff makes risks hard to evaluate for retail investors

Case study: Why many Snowballs were hit in 2022–2023

During 2022–2023, Chinese equity markets — especially the CSI 500 and CSI 1000 — experienced large drawdowns due to geopolitical tensions, policy uncertainty, and weak economic recovery. Volatility spiked, and mid-cap indices saw rapid declines.

As a result:

  • Many Snowballs hit knock-in levels
  • Investors faced large mark-to-market losses
  • Issuers reduced new Snowball supply due to elevated volatility

This period highlights how market sentiment and volatility regimes directly impact structured product outcomes.

According to Bloomberg (January 2024), more than $13 billion worth of Chinese Snowball products were approaching knock-in triggers. A rapid decline in the CSI 1000 index pushed many products close to their 80% knock-in barrier.

Some investors experienced immediate 15–25% losses as the embedded short-put exposure was activated.

This real-world case demonstrates how quickly Snowball risk materializes when market volatility rises.

Snowball Take Out
Source: public market data.

How market behavior affects Snowball performance

Volatility

High volatility increases the likelihood of crossing both barriers.
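A quick Monte Carlo sketch illustrates this point under a simple lognormal model. All parameters (12 monthly observations, an 80% knock-in barrier, zero drift) are hypothetical:

```python
import numpy as np

def knock_in_probability(vol, months=12, barrier=0.80, n_paths=20000,
                         mu=0.0, seed=42):
    """Estimate by Monte Carlo, under a lognormal model with annualized
    volatility `vol`, the probability that a monthly-observed path
    (starting at 1.0) touches the knock-in barrier."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 12.0
    # Monthly log-returns with the usual -vol^2/2 drift adjustment.
    steps = rng.normal((mu - 0.5 * vol**2) * dt, vol * np.sqrt(dt),
                       size=(n_paths, months))
    paths = np.exp(np.cumsum(steps, axis=1))
    return np.mean(paths.min(axis=1) <= barrier)

for vol in (0.15, 0.30, 0.45):
    print(f"volatility {vol:.0%} -> knock-in probability ~ {knock_in_probability(vol):.1%}")
```

Doubling or tripling the volatility raises the knock-in probability several-fold, which is why Snowball losses cluster in volatility spikes rather than in calm markets.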

Trend direction

  • Upward trends → more knock-outs
  • Range-bound markets → steady coupon income
  • Downward trends → knock-in risk and principal loss

Liquidity and investor flows

During sell-offs, Snowball hedging can amplify downward pressure, creating feedback loops.

Snowball knock-in chart
Source: public market data.

Explanation: The chart illustrates a steep market decline where the underlying index falls below its knock-in barrier. When such drawdowns occur rapidly, Snowball products transition into risk mode, immediately exposing investors to the underlying’s downside. This visualizes how market volatility and negative sentiment can activate the hidden risks in Snowball structures.

Conclusion

Snowball products are appealing due to their attractive coupons, but they involve significant downside risks during volatile markets. Understanding the path-dependent nature of their payoff, barrier mechanics, and market behavior is crucial for investors and product designers.

By analyzing Snowball structures, investors gain deeper insight into how derivative products are created, priced, and risk-managed in real financial markets.

Related posts on the SimTrade blog

   ▶ Shengyu ZHENG Barrier Options

   ▶ Slah BOUGHATTAS Book by Slah Boughattas: State of the Art in Structured Products

   ▶ Akshit GUPTA Equity Structured Products

About the author

The article was written in November 2025 by Tianyi WANG (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2022-2026).

Managing Corporate Risk: How Consulting and Export Finance Complement Each Other

Julien MAUROY

In this article, Julien MAUROY (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2025) shares technical knowledge on risk management in the business world based on his experiences. The concepts of financial risk in business, risk management, and risk analysis are presented, drawing on his professional experience and supported by the literature on the subject.

Risk as a strategic lever

This topic aims to explore how companies manage risk and transform it into a lever for decision-making and value creation. It ties in with my academic background at ESSEC Business School and my professional experience in two complementary environments: finance and risk consulting at BearingPoint and export financing at Bpifrance. Today, risk-related issues are omnipresent in business. Whether it is competitiveness, investment decisions or international expansion, every strategy involves a degree of uncertainty.

Risk is no longer just a threat: it is anticipated, studied, calculated and has a market price (the cost of seeking advice, the cost of insurance, etc.). It therefore becomes a key management factor for companies that can identify, measure and integrate it into their strategic thinking. This is why understanding risk management means understanding how organisations balance growth, stability and performance. It is this dual approach, consulting (risk reduction) on the one hand and export insurance and financing (risk assessment and pricing) on the other, that I would like to share with you.

Reducing and structuring risk with consulting

During my internship at BearingPoint, I discovered how consulting could help companies reduce and structure their strategic, financial and operational risks. Consultants bring an external perspective to a company’s activities. They use an analytical, neutral approach to identify organisational weaknesses and make more informed decisions.

Within the Finance & Risk department, my assignments consisted of improving the financial performance and financial management of the company’s activities. The main topics were data reliability, reporting automation, and optimisation of budgeting and forecasting processes.

By improving the quality of financial information and its analysis, we helped companies become more agile and better able to manage their business. Companies gained visibility and the ability to anticipate future developments. Consulting is therefore the ideal way to transform uncertainty into a structured and effective methodology for addressing the challenges facing these sectors.

It helps companies adopt rigorous governance, allocate resources and budgets more effectively to each activity, and avoid costly strategic errors.

Finally, consulting helps reduce companies’ exposure to risk by providing support at all levels. It makes decision-making more rational, measurable and aligned with long-term strategy in light of competition and industry challenges.

Measuring and pricing risk with export insurance and financing

My experience in Bpifrance’s Export Insurance department gave me a different perspective on risk, this time more quantitative and institutional.

In this organisation, risk is not borne solely by the customer seeking insurance, but also by Bpifrance, which insures French exporters against risk arising from foreign buyers. The risk is therefore shared between the lending bank, the insurer and the French exporter.

In export insurance, risk is not abstract: it is analysed, measured and valued. The accuracy of the analysis is paramount, involving financial, extra-financial and geopolitical analysis. An in-depth study of exporting companies and their international counterparties makes it possible to assess their solidity and their ability to honour their financial commitments.

Each project is subject to a detailed risk assessment: counterparty risk, country risk, sectoral or political risk. These factors have an immediate impact on the premium rate applied to the export guarantee. In other words, the higher the risk of loss, the higher the cost of coverage. This approach, based on collaboration with the French Treasury and the OECD, has enabled me to understand how institutions can price risk on a global scale.

In comparison, consulting helps to anticipate, explore solutions and reduce risk, while insurance seeks to assess and price risk. In the latter case, risk is not avoided but accepted as an integral part of the economic model.

Understanding risk in order to leverage it

These two experiences taught me that risk management is not just about protecting yourself from risk, but understanding it so you can use it as a lever for growth.

In consulting, risk is controlled through better organisation, reliable information and a clear strategy. In finance, risk becomes a measurable parameter, integrated into decision-making models and valued according to its potential impact.

These two approaches are therefore complementary: one aims to make the company more resilient, the other enables it to grow despite uncertainty.

These two perspectives show that risk, far from being a constraint, can become a strategic management tool, a driver of adaptation and a source of sustainable competitiveness.

Conclusion: the strategic value of risk management

Through these experiences, I have understood that risk management is at the heart of finance and strategy.

At BearingPoint, I acquired analytical rigour and the ability to structure my thinking, at Bpifrance I gained a macroeconomic vision and a concrete understanding of the link between risk and financial performance.

This dual perspective on qualitative and quantitative risk convinced me that knowing how to assess, integrate and explain risk is a key skill for the future of business.

In an uncertain world, managing risk means managing the relevance of decisions: this is what distinguishes companies that are able to anticipate the future from those that simply react to it.

Opening the topic with the vision of Frank Knight and Nassim Taleb

The study of risk in business has been the subject of earlier studies and research, notably initiated by Frank Knight in 1921 in Risk, Uncertainty and Profit. Knight distinguishes between two essential realities: risk, which can be quantified and insured against, and uncertainty, which cannot be quantified.

This distinction is further developed by Nassim Taleb in The Black Swan (2007), where he shows that certain extreme disruptions, known as ‘black swans’, cannot be predicted or incorporated into traditional models. Examples include pandemics, political shocks and sectoral collapses. For Taleb, the issue is not only one of prediction, but of building resilient organisations capable of absorbing unexpected shocks.

These two perspectives are directly reflected in corporate risk management. I have observed how consulting helps organisations reduce their exposure to ‘measurable’ risk, and conversely, my experience at Bpifrance immersed me in an approach where risk is quantified and priced. But neither consulting nor finance can eliminate uncertainty in Knight’s sense or Taleb’s ‘black swans’. Their role is to help the company better prepare for them by strengthening strategic robustness and adaptability.

That is why risk is no longer just a threat: it becomes a management tool and a lever for structuring action, in order to build organisations that are resilient in the face of the unexpected.

Related posts on the SimTrade blog

   ▶ Rishika YADAV Understanding Risk-Adjusted Return: Sharpe Ratio & Beyond

   ▶ Mathias DUMONT Pricing Weather Risk: How to Value Agricultural Derivatives with Climate-Based Volatility Inputs

   ▶ Vardaan CHAWLA Real-Time Risk Management in the Trading Arena

   ▶ Snehasish CHINARA My Apprenticeship Experience as Customer Finance & Credit Risk Analyst at Airbus

   ▶ Marine SELLI Political Risk: An Example in France in 2024

   ▶ Julien MAUROY My internship experience at BearingPoint – Finance & Risk Analyst

   ▶ Julien MAUROY My internship experience at Bpifrance – Finance Export Analyst

Useful resources

BearingPoint

Didier Louro (25/09/2024) Le risk management au service de la croissance Bearing Point x Sellia (podcast).

Bpifrance

OECD

French Treasury

Academic articles and books

Cohen E. (1991) Gestion financière de l’entreprise et développement financier, AUF / EDICEF.

Hassid O. (2011) Le management des risques et des crises Dunod.

Knight, F. H. (1921) Risk, Uncertainty and Profit Houghton Mifflin Company.

Mefteh S. (2005) Les déterminants de la gestion des risques financiers des entreprises non financières : une synthèse de la littérature, CEREG Université Paris Dauphine, Cahier de recherche n°2005-03.

Taleb N.N. (2008) The Black Swan, Penguin Group.

About the author

The article was written in November 2025 by Julien MAUROY (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2025).

   ▶ Read all articles by Julien MAUROY.

The role of DCF in valuation

Roberto RESTELLI

In this article, Roberto RESTELLI (ESSEC Business School, Master in Finance (MiF), 2025–2026) explains the role of discounted cash flow (DCF) within the broader toolkit of company valuation—when to use it, how to build it, and where its limits lie.

Introduction to company valuation

Valuation is the process of determining the value of any asset, whether financial (for example, shares, bonds or options) or real (for example, factories, office buildings or land). It is fundamental in many economic and financial contexts and provides a crucial input for decision-making. In particular, the importance of proper company valuation emerges in the preparation of corporate strategic plans, during restructuring or liquidation phases, and in extraordinary transactions such as mergers and acquisitions (M&As). Company valuations are also useful in regulatory and tax contexts (for example, transfers of ownership stakes or determining value for tax purposes). Entrepreneurs and investors can evaluate the economic attractiveness of strategic options, including selling or acquiring corporate assets.

The need for a company valuation typically arises to answer three questions: Who needs a valuation? When is it necessary? Why is it useful?

Users and uses of company valuation

Different categories of users rely on valuation. In investment banks, Equity Capital Markets use it for IPO research and coverage (including fairness opinions), while M&A teams analyze transactions and prepare fairness opinions to inform deal decisions. In Private Equity and Venture Capital, valuation supports majority/minority acquisitions, startup assessments, and LBOs. Strategic investors use it for acquisitions or divestitures, stock‑option plans, and financial reporting. Accountants and appraisal experts (CPAs) prepare fairness opinions, tax valuations, technical appraisals in legal disputes, and arbitration advisory.

Beyond these, regulators and supervisory bodies (e.g., the SEC in the U.S., CONSOB in Italy) require precise valuations to ensure market transparency and investor protection. Corporate directors and managers need valuations to define growth strategies, allocate capital, and monitor performance. Courts and arbitrators request valuations in disputes involving contract breaches, expropriations, asset divisions, or shareholder conflicts. Owners of SMEs—backbone of the Italian economy—use valuations to set sale prices, manage generational transfers, or attract investors.

Examples of valuation

Valuations appear in equity research (e.g., a UBS report on Netflix indicating a short‑ to medium‑term target price based on public information), in M&A deal analyses (including subsidiary valuations and group structure changes), and in fairness opinions (e.g., Volkswagen’s acquisition of Scania). They are central in IPOs to set offer prices and expectations. Banks also rely on valuations in lending decisions to assess enterprise value and credit risk, clarifying the allocation of requested capital.

Core competencies in valuation

High‑quality valuation requires business and strategy foundations (industry analysis, competitive context, business‑model strength), theoretical and technical finance (NPV, pricing models, corporate cash‑flow modeling), and economic theory (uncertainty vs. value and limits of standard models). Valuation is not just technique: it balances modeling choices with empirical evidence and fit‑for‑purpose estimates.

A fundamental principle is that a firm’s value is driven by its ability to generate future cash flows, which must be estimated realistically and paired with an appropriate risk assessment. Higher uncertainty in cash‑flow estimates implies a higher discount rate and a lower present value. Discount‑rate choice depends on the model (e.g., CAPM for systematic risk via beta). Sustainability also matters: modern practice increasingly integrates environmental, social, and governance (ESG) factors—climate risk, regulation, and reputation—into valuation.

General approaches and specific methods

Income Approach. Present value of future benefits, risk‑adjusted and long‑term (e.g., discounted cash flows).
Market Approach. Value estimated by comparing to similar, already‑traded assets.
Cost (Asset‑Based) Approach. Value derived by remeasuring assets/liabilities to current condition.

Within these, DCF is among the most studied and used. It can be computed from the asset perspective via free cash flow to the firm (FCFF) or from the equity perspective via free cash flow to equity (FCFE). Under the asset‑based approach, other methods include net asset value and liquidation value. Additional families include economic profit (e.g., EVA, residual income) and market‑based analyses: trading multiples (e.g., P/E, EV/EBITDA), deal multiples, and premium analysis (control premia). Four further techniques often considered are current market value (market capitalisation), real options (valuing flexible investment opportunities), broker/analyst consensus, and LBO analysis (value supported by leveraged acquisition capacity).

Critical aspects and limits of valuation models

Each method has strengths and limits. In DCF, accuracy depends on projection quality; macro cycles can render forecasts unreliable. In market‑multiple analysis, industry/geography differences and poor comparables can distort results. Real options are powerful for uncertainty but require subjective parameters (e.g., volatility), introducing error bands.

Practical applications of company valuation

Firms use valuation to plan growth, allocate capital, and budget projects. In disputes and restructurings, it informs liquidation values and creditor negotiations. It also supports governance and incentives (e.g., option plans) that align managers with shareholders. In short, valuation enables both day‑to‑day management and extraordinary decisions.

Discounted Cash Flow (DCF)

What is a DCF?

The discounted cash flow (DCF) method values a company by forecasting and discounting future cash flows. Originating with John Burr Williams (The Theory of Investment Value, 1938), DCF seeks intrinsic value by projecting cash flows and applying the time value of money: one euro today is worth more than one euro tomorrow because it can be invested.

Advantages include accuracy (when inputs are sound) and flexibility (applicable across firms/projects). Risks include reliance on uncertain projections and difficulty estimating both discount rates and cash flows; hence outputs are estimates and should be complemented with other methods.

Uses of DCF

DCF is widely applied to value companies, analyse investments in public firms, and support financial planning. The five fundamental steps are:

  1. Estimate expected future cash flows.
  2. Determine the growth rate of those cash flows.
  3. Calculate the terminal value.
  4. Define the discount rate.
  5. Discount future cash flows and the terminal value to the present.

DCF components.
Source: author.

Discounted cash flow formula (with a perpetuity‑growth terminal value):

DCF = CF1 / (1 + r)^1 + CF2 / (1 + r)^2 + … + CFT / (1 + r)^T + [CFT+1 / (r – g)] × 1 / (1 + r)^T

where CFt is the cash flow in year t, r is the discount rate, and g is the long‑term growth rate (with g < r).
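The formula above can be sketched directly in Python. This is a minimal illustration with hypothetical inputs (three years of cash flows, a 10% discount rate, 2% long-run growth), not a full valuation model:

```python
def dcf_value(cash_flows, r, g):
    """Discount explicit-period cash flows and add a perpetuity-growth
    (Gordon) terminal value, as in the formula above."""
    T = len(cash_flows)
    # Present value of each explicit-period cash flow: CF_t / (1 + r)^t
    pv_explicit = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))
    # First post-horizon cash flow CF_{T+1} = CF_T * (1 + g), capitalised at (r - g)
    terminal_value = cash_flows[-1] * (1 + g) / (r - g)
    pv_terminal = terminal_value / (1 + r) ** T
    return pv_explicit + pv_terminal

# Illustrative inputs: cash flows in EUR millions, r = 10%, g = 2%
value = dcf_value([25.0, 27.0, 29.0], r=0.10, g=0.02)
print(f"{value:.1f}")  # ≈ 344.6
```

Note how the terminal value dominates the result here, which is typical: small changes in r or g move the valuation substantially.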

Building a DCF

Start from operating cash flow (cash‑flow statement) and typically move to free cash flow (FCF) by subtracting capital expenditures. Example: if operating cash flow is €30m and capex is €5m, FCF = €25m. Project future FCF using growth assumptions (e.g., if 2020 FCF was €22.5m and 2021 FCF €25m, growth is ~11.1%). Use near‑term high‑growth and longer‑term fade assumptions to reflect maturation.
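The build-up above can be sketched in a few lines of Python. All figures are hypothetical, echoing the example in the text, and the fade path is an illustrative assumption:

```python
# Free cash flow = operating cash flow minus capital expenditures (EUR m)
operating_cf = 30.0
capex = 5.0
fcf = operating_cf - capex  # 25.0, as in the example

# Project five years: ~11.1% near-term growth fading as the firm matures
growth_rates = [0.111, 0.09, 0.07, 0.05, 0.03]
projection = []
for g in growth_rates:
    fcf *= 1 + g
    projection.append(fcf)

# Grows from roughly 27.8 toward roughly 35.0 (EUR m)
print([f"{x:.2f}" for x in projection])
```

The fading growth rates encode the maturation assumption: few firms sustain double-digit FCF growth indefinitely.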

Determining the terminal value

The terminal value represents long‑term growth beyond the explicit forecast. A common formula is:

Terminal Value = CFT+1 / (r – g)

Ensure g is consistent with long‑run economic growth and the firm’s reinvestment needs.

Defining the discount rate

The discount rate reflects risk. Common choices include the risk‑free government yield, the opportunity cost of capital, and the WACC (weighted average cost of capital). In equity‑side models, CAPM is often used to estimate the cost of equity via beta (systematic risk).
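As a sketch, the CAPM estimate of the cost of equity mentioned above can be computed directly. The inputs below are illustrative assumptions, not market data:

```python
def capm_cost_of_equity(risk_free, beta, market_return):
    """CAPM: cost of equity = risk-free rate + beta * market risk premium."""
    return risk_free + beta * (market_return - risk_free)

# Illustrative assumptions: 3% risk-free rate, beta of 1.2, 8% expected market return
ke = capm_cost_of_equity(risk_free=0.03, beta=1.2, market_return=0.08)
print(f"{ke:.1%}")  # 9.0%
```

A beta above 1 (as here) raises the cost of equity above the expected market return, reflecting higher systematic risk.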

Discounting the cash flows

Finally, discount projected cash flows and terminal value at the chosen rate to obtain present value. Sensitivity analysis (varying r, g, margins, capex) and scenario analysis (bull/base/bear) are essential to understand valuation drivers.
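A minimal sensitivity grid over r and g can be produced as follows, using the same hypothetical cash flows as before. This is a sketch of the sensitivity-analysis step, not a production model:

```python
def dcf_value(cash_flows, r, g):
    """PV of explicit cash flows plus a perpetuity-growth terminal value."""
    T = len(cash_flows)
    pv = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, 1))
    return pv + cash_flows[-1] * (1 + g) / (r - g) / (1 + r) ** T

cfs = [25.0, 27.0, 29.0]        # hypothetical explicit-period cash flows (EUR m)
growth = (0.01, 0.02, 0.03)     # long-run growth scenarios
rates = (0.08, 0.10, 0.12)      # discount-rate scenarios

# Print a small r x g sensitivity grid of firm values
print("  r/g " + "".join(f"{g:>9.0%}" for g in growth))
for r in rates:
    row = "".join(f"{dcf_value(cfs, r, g):>9.1f}" for g in growth)
    print(f"{r:>5.0%} " + row)
```

Reading the grid confirms the intuition from the introduction: value falls as r rises and rises as g rises, and the swings are large because the terminal value reacts strongly to (r − g).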

Example

You can download below an Excel file with an example of DCF. It deals with Maire Tecnimont, which is an Italian engineering and consulting company specializing in the fields of chemistry and petrochemicals, oil and gas, energy and civil engineering.

Download the Excel file for an example of DCF applied to Maire Tecnimont

Why should I be interested in this post?

If you are an ESSEC student aiming for roles in investment banking, private equity, or equity research, mastering DCF is table‑stakes. This post distills how DCF fits among valuation approaches, the exact steps to build one, and the pitfalls you must stress‑test before using your number in IPOs, M&A, or buy‑side models.

Related posts on the SimTrade blog

   ▶ Jayati WALIA Capital Asset Pricing Model (CAPM)

   ▶ William LONGIN How to compute the present value of an asset?

   ▶ Maite CARNICERO MARTINEZ How to compute the net present value of an investment in Excel

   ▶ Andrea ALOSCARI Valuation methods

Useful resources

Damodaran online New York University (NYU).

SEC EDGAR company filings

European Central Bank (ECB) statistics

Maire Tecnimont

About the author

The article was written in November 2025 by Roberto RESTELLI (ESSEC Business School, Master in Finance (MiF), 2025–2026).

Book by Slah Boughattas: State of the Art in Structured Products

Slah Boughattas

In this post, Slah BOUGHATTAS (Ph.D., Associate of the Chartered Institute for Securities & Investment (CISI), London) provides an extract from the book ‘State of the Art in Structured Products: Fundamentals, Designing, Pricing, and Hedging’ (2022).

This post presents the book’s pedagogical philosophy, structure, and target audience, which includes graduate students in finance, university professors, and practitioners in derivatives and structured products.

State of the Art in Structured Products: Fundamentals, Designing, Pricing, and Hedging
Source: the company.

Summary of the book

The book aims to provide both the theoretical background and the practical applications of structured products in modern financial markets. It systematically explores the fundamentals of derivatives, equity and interest rate markets, stochastic calculus, Monte Carlo simulations, Constant Proportion Portfolio Insurance (CPPI), risk management, and the financial engineering processes involved in designing, pricing, and hedging structured products.

Financial concepts related to the book

Structured Products, Derivatives, Options, Swaps, Structured Notes, Bonus certificates, Constant Proportion Portfolio Insurance (CPPI), Monte Carlo Simulation, Fixed Income, Floating Rate-Note (FRN), Reverse FRN, CMS-Linked Notes, Callable Bond, Financial Engineering, Risk Management, Pricing, and Hedging.

Context and Motivation

The financial engineering of structured products remains one of the most sophisticated domains of quantitative finance. While the literature on derivatives pricing is vast, comprehensive references specifically dedicated to the end-to-end process of structured product creation — designing, pricing, and hedging — remain scarce.

State of the Art in Structured Products bridges this gap. The work is structured to serve both as a teaching manual and a professional reference, progressively building from fundamental principles to advanced practical implementations.

Structure of the Book

  • Derivatives Fundamentals and Market Instruments – recalls the essential mechanics of equity and interest-rate derivatives
  • Designing Structured Products – shows how term sheets and payoff structures emerge logically from financial objectives
  • Pricing and Risk Analysis – provides analytical and simulation-based approaches, including Monte Carlo method
  • Hedging and Risk Management – explores dynamic replication, sensitivities, and practical hedging of structured notes.
  • Advanced Topics – covers Constant Proportion Portfolio Insurance (CPPI), callable and floating-rate instruments, and swaptions

Why should I be interested in this post?

The book’s main contribution lies in its integrated approach combining conceptual clarity, quantitative rigor, and practical implementation examples. It is intended for professors and instructors of Master’s programs in Finance, graduate students specializing in derivatives or structured products, and professionals such as financial engineers, product controllers, traders, dealing room staff and salespeople, risk managers, quantitative analysts, middle office managers, fund managers, investors, senior managers, research and system developers.

The book is currently referenced in several academic libraries, including ESSEC Business School Paris, Princeton University, London School of Economics, HEC Montreal, Erasmus University Rotterdam, ETH Zurich, IE University, and NTU Singapore.

Related posts on the SimTrade blog

   ▶ Mahé FERRET Selling Structured Products in France

   ▶ Akshit GUPTA Equity Structured Products

   ▶ Youssef LOURAOUI Interest rate term structure and yield curve calibration

   ▶ Jayati WALIA Brownian Motion in Finance

   ▶ Shengyu ZHENG Capital Guaranteed Products

   ▶ Shengyu ZHENG Reverse Convertibles

Useful resources

Slah Boughattas (2022) State of the Art in Structured Products: Fundamentals, Designing, Pricing, and Hedging Advanced Education in Financial Engineering Editions.

About the author

The article was written in November 2025 by Slah BOUGHATTAS (Ph.D., Associate of the Chartered Institute for Securities & Investment (CISI), London).

The Business Model of Proprietary Trading Firms

Anis MAAZ

In this article, Anis MAAZ (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027) explains how prop firms work, from understanding their business model and evaluation processes, to fee structures and risk management rules. The goal is not to promise guaranteed profits, but to provide a transparent, realistic overview of how proprietary trading firms operate and what traders should know before joining one.

Context and objective

  • Goal: demystify how prop firms make money, how their rules work, and what realistic outcomes look like, even if you are new to prop firms.
  • Outcome: a technical but accessible guide with a simple numeric example and a due diligence checklist.

What a prop firm is

Proprietary trading firms (prop firms) use their own capital to trade in financial markets, leveraging advanced risk management techniques and state-of-the-art technologies. But how exactly do prop firms make money, and what makes them attractive to aspiring traders? Traders who meet the firm’s rules get access to buying power and share in the profits. Firms protect their capital with strict risk limits (daily loss, max drawdown, product caps). Two operating styles you will encounter:

  • In-house/desk model: you trade live firm capital on a desk with a risk manager.
  • Evaluation (“challenge”) model: you pay a fee to prove you can hit a target without breaking rules. If you pass, you receive a “funded” account with payout rules.

For example, a classic challenge is to reach a profit of 6% without losing more than 4% of your initial challenge capital to become funded.

The Proprietary Trading Industry: Origins and Scale

Proprietary trading as a business model emerged in the 1980s-1990s in the US, initially within investment banks’ trading desks before regulatory changes (notably the Volcker Rule in 2010) pushed prop trading into independent firms. The modern “retail prop firm” model, offering funded accounts to individual traders via evaluation challenges, gained momentum in the 2010s, particularly after 2015 with firms like FTMO (Czech Republic, 2014) and TopstepTrader (US, 2012).

Today, the industry includes an estimated 200+ prop firms globally, concentrated in the US, UK, and UAE (Dubai has become a hub due to favorable regulations). Major players include FTMO, TopstepTrader, Apex Trader Funding, Alphafutures, and MyForexFunds. Most are privately owned by founders or small investor groups and some (like Topstep) have received venture capital. The market size is difficult to quantify precisely, but industry reports estimate the global prop trading sector handles billions in trading capital, with the retail-focused segment growing 40-50% annually from 2020-2024.

Core Characteristics of prop firms

  • Capital Allocation: Prop firms provide traders with access to firm capital, enabling them to trade larger positions than they could on their own.
  • Profit Sharing: A trader’s earnings are typically a percentage of the profits generated. This incentivizes high-caliber performance.
  • Training Programs: Many prop firms invest in the development of new traders via structured training programs, equipping them with proven strategies and technologies.
  • Diverse Markets: Prop traders operate across various asset classes, such as stocks, forex, options, cryptocurrencies, and commodities.

How the business model works

The money comes from evaluation fees and resets, a major revenue line for challenge-style firms because most applicants do not pass the challenges. Once funded, a trader keeps the majority of the profits generated (often 70–90%) and the firm keeps the rest. Some firms charge for platform, data, or advanced tools such as a complete order book, and pay exchange/clearing fees on futures.

In some cases, firms may charge onboarding or monthly platform fees to cover operational costs, such as trading infrastructure, data services, and proprietary software. However, top firms often waive such fees for consistently profitable traders.

For example, a firm charging $150 for a $50,000 evaluation challenge that attracts 10,000 applicants per month generates $1.5M in fee revenue. If 8% pass (800 traders) and receive funded accounts, and only 20% of those (160) reach a payout, the firm pays out perhaps $500,000-$800,000 in profit splits while retaining the rest as margin. Add-on services (resets at $100 each, platform fees) further boost revenue.
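The unit economics in this example can be checked with a short script. The average payout per trader below is an assumed illustrative figure chosen to land within the $500,000–$800,000 range quoted above:

```python
# Hypothetical unit economics for a challenge-style prop firm,
# mirroring the figures in the text; avg_payout is an assumption.
fee = 150              # evaluation fee ($)
applicants = 10_000    # applicants per month
pass_rate = 0.08       # share of applicants who pass the challenge
payout_share = 0.20    # share of funded traders who reach a payout
avg_payout = 4_000     # assumed average profit split paid per trader ($)

fee_revenue = fee * applicants            # $1,500,000 in monthly fees
funded = int(applicants * pass_rate)      # 800 funded traders
paid = int(funded * payout_share)         # 160 traders reach a payout
total_payouts = paid * avg_payout         # $640,000 in profit splits
margin = fee_revenue - total_payouts      # retained by the firm

print(fee_revenue, funded, paid, total_payouts, margin)
```

The asymmetry is the point: fees are collected from everyone upfront, while payouts go only to the ~1.6% of applicants who reach one.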

Who Are the Traders?

Prop firm traders come from diverse backgrounds: retail traders seeking leverage, former bank traders, students, and career-changers. No formal degree is required. The average trader age ranges from 25 to 40, though firms accept anyone 18+. Most traders operate as “independent contractors” rather than employees: they receive profit splits and bear their own tax obligations.

Retention is actually very low: industry data suggests 60-70% of funded traders lose their accounts within 3 months due to rule violations or drawdowns. Only 10-15% maintain funded status beyond 6 months. The model is inherently high-churn: firms continuously recruit through affiliates and ads, knowing most will fail but a small percentage will generate consistent trading activity and profit-share revenue.

What successful traders share:

  • The ability to manage risk and follow rules.
  • Analytical skills and a deep understanding of market behavior.
  • Psychological toughness to handle the highs and lows of trading.

It is not an easy industry at all, and it is wise to keep a real job: only a small fraction of traders pass, and an even smaller fraction reach payouts after succeeding in a challenge. Fee income arrives upfront; payouts happen later, and only for those who succeed and stay disciplined over time.

For new traders, it is not easy to pass a challenge when the rules are strict, because trading with someone else’s capital often amplifies fear and greed. Success is judged not only by profitability but also by consistency and adherence to firm guidelines, and many new traders struggle to maintain profitability and burn out within months.

EU regulators have long reported that most retail accounts lose money on leveraged products like CFDs: typically 74–89%, which helps explain why challenge pass rates are low without strong process and discipline.

Success rates: what is typical and why most traders fail

“Pass rate” (the share of applicants who complete the challenge) is commonly cited around 5–10%. “Payout rate among funded traders” is often ~20%. End to end, only ~1–2% of all applicants reach a payout. All of these statistics vary by firm, product, and rules. Most people fail due to rule breaches under pressure (daily loss limits, news locks), overtrading, and inconsistent execution. Psychological factors like revenge trading and FOMO (fear of missing out) are the usual culprits.

Trading Strategies, Markets, and Tools

Which Markets?

Most prop firms focus on futures (E-mini S&P, Nasdaq, crude oil), forex (EUR/USD, GBP/USD), and increasingly cryptocurrencies (Bitcoin, Ethereum). Some firms also offer equities (US stocks). The choice depends on the firm’s clearing relationships and risk appetite. Futures dominate because of high leverage, deep liquidity, and extended trading hours.

Common Strategies

Prop traders typically employ “intraday strategies”:

  • Scalping (holding positions seconds to minutes)
  • Momentum trading (riding short-term trends), and mean reversion (fading extremes)
  • Swing trading (multi-day holds) is less common due to overnight risk rules
  • High-frequency strategies are rare in retail prop firms, and most traders use setups based on technical indicators (moving averages, RSI, volume profiles).

Tools and Platforms

Firms provide access to professional platforms like NinjaTrader, TradingView, and MetaTrader 4/5. Traders receive Level 2 data (order book), news feeds (Bloomberg, Reuters), and sometimes proprietary risk dashboards. Some firms offer replay tools to practice on historical data.

The key performance idea

Positive expectancy = you make more on your average winning trade than you lose on your average losing trade, often enough to overcome costs. Here is a simple way to check:

Step 1: Out of 10 trades, how many are winners? Example: 5 winners, 5 losers (50% win rate).
Step 2: What’s your average win and average loss? Example: average win €120; average loss €80.
Step 3: Expected profit per trade ≈ (wins × avg win − losses × avg loss) ÷ number of trades. Here: (5 × 120 − 5 × 80) ÷ 10 = (€600 − €400) ÷ 10 = €20 per trade.

If costs/slippage are below €20 per trade, you likely have an edge worth scaling, subject to the firm’s risk limits.
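The expectancy check above can be written as a one-function sketch, using the same figures as the example:

```python
def expectancy(wins, losses, avg_win, avg_loss):
    """Expected profit per trade, before costs and slippage."""
    n_trades = wins + losses
    return (wins * avg_win - losses * avg_loss) / n_trades

# Figures from the example: 5 winners, 5 losers, EUR 120 avg win, EUR 80 avg loss
per_trade = expectancy(wins=5, losses=5, avg_win=120, avg_loss=80)
print(per_trade)  # 20.0 -> an edge, if costs per trade stay below EUR 20
```

Plugging in your own trade log lets you see how win rate and win/loss ratio trade off: a 33% win rate with €240 average wins gives the same expectancy as the 50% case above.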

The firm wants you to stay inside limits, keep your average loss controlled (stops respected), and produce results that are repeatable across days. Firms limit the “luck factor” with rules such as requiring a minimum of two winning days to pass a challenge and making it impossible to earn more than half of the challenge target in a single day.

There are many ways to pass a challenge, depending on your trading strategy: if you aim for trades where the potential win is five times what you risk, you do not need a 50% or 80% win rate to pass the challenge and be profitable.

Payout mechanics: example with Topstep (to clarify the “50%” point)

Profit split: you keep 100% of the first $10,000 you withdraw; after that, the split is 90% to you / 10% to Topstep (per trader, across accounts).

Per-request cap: Express Funded Account: request up to the lesser of $5,000 or 50% of your account balance per payout, after 5 winning days. Live Funded Account: up to 50% of the balance per request (no $5,000 cap). After 30 non-consecutive “winning days” in Live, you can unlock daily payouts up to 100% of balance.

Note: “50%” here is a cap on how much you may withdraw per request—not the profit split. Other firms differ (some advertise 80–90% splits, 7–30 day payout cycles, or higher first withdrawal shares), so always read the current Terms.

Why traders choose prop firms (psychology and practical reasons)

Traders are attracted to prop firms for both psychological and practical reasons. The appeal starts with small upfront risk: instead of depositing a large personal account, you pay a fixed evaluation fee. If you perform well within the rules, you gain access to greater buying power, which lets you scale faster than you could with a small personal account.

But this model can be a psychological trap: most traders fail their first account, buy another one because it feels “cheap”, and the cycle can turn into an addiction once they start burning accounts every day because the money “doesn’t feel real” to them. The trade-offs are real: evaluation fees and resets can add up, rules may feel restrictive, and pressure tends to spike near limits or payout thresholds. All these factors contribute to why many candidates ultimately fail.

However, for experienced traders who can manage their psychology, the built-in structure (risk limits, reviews, and a community) adds accountability and often improves discipline. Payouts can also serve as a capital-building path, gradually seeding your own account over time.

Regulation: A Gray Zone

Proprietary trading firms operate in a largely unregulated space, especially the evaluation-based model. In the US, prop firms are not broker-dealers; they typically collaborate with registered FCMs (Futures Commission Merchants) or brokers who handle execution and clearing, but the firm itself is often a private LLC with minimal oversight. The CFTC (Commodity Futures Trading Commission) regulates futures markets but not prop firms’ internal challenge mechanisms.

In France, the AMF has issued warnings about unregulated prop firms and emphasized that if a firm collects fees from French residents, it may fall under consumer protection law. Some firms have pulled out of France or adjusted terms. The UK FCA has similarly warned consumers. The UAE (DIFC, DMCC) offers more permissive environments, attracting many firms to Dubai.

Conclusion

Prop trading firms offer a compelling proposition: controlled access to institutional-sized buying power, standardized risk limits, and a structured pathway for transforming skill into capital without large personal deposits. In this model, firms protect their capital through rules and fees, while profitable traders gain a scalable environment for strategy development and execution.

At the same time, the evaluation-and-payout cycle can amplify cognitive and emotional traps. Fee resets, drawdown thresholds, and profit targets concentrate attention on short-term outcomes, which can foster overtrading, sensation seeking, and schedule-driven risk-taking. The same leverage that accelerates account growth also magnifies behavioral errors and variance, making intermittent reinforcement (occasional big wins amid frequent setbacks) psychologically sticky and potentially addictive.

In the end, prop firms are neither a shortcut nor a scam, but a high-constraint laboratory. They reward stable execution and rule adherence, and penalize improvisation and impulse. As a venue, they are well suited to disciplined traders with repeatable processes, robust risk controls, and patience for incremental scale. Without those traits, the structure that protects the firm can become a treadmill for the trader.

At the end of the day, the prop firm model is designed for the firm to profit from fees, not trader success. With 1-2% end-to-end success rates, it’s closer to a paid training lottery than a career path.

If your goal is to learn trading, SimTrade, paper trading, or small personal accounts teach discipline without predatory fee structures. Joining a bank’s graduate program gives you access to senior traders, research, and real market-making or flow trading experience.

If you’ve already traded profitably for 1-2 years, have a proven strategy, need leverage, and fully understand the fee economics, then a top-tier firm (FTMO, Topstep) could provide capital to scale. But as a first step out of ESSEC, I would prioritize banking or buy-side roles that offer mentorship, stability, and credentials.

Why should I be interested in this post?

Prop firms reveal how trading businesses monetize edge while enforcing strict risk management and incentive design. Grasping evaluation rules, fee structures, and payout mechanics sharpens your ability to assess unit economics and governance. This knowledge is directly applicable to careers in trading, risk, and fintech—helping you make informed choices before joining a program.

Related posts on the SimTrade blog

   ▶ Theo SCHWERTLE Can technical analysis actually help to make better trading decisions?

   ▶ Michel VERHASSELT Trading strategies based on market profiles and volume profiles

   ▶ Vardaan CHAWLA Real-Time Risk Management in the Trading Arena

Useful Resources

Topstep payout policy and FAQs (current rules and examples)

The Funded Trader statistics on pass/payout rates

How prop firms make money (evaluation fees vs profit share): neutral primers and industry explainers

General overviews of prop trading mechanics and risk controls

About the author

The article was written in October 2025 by Anis MAAZ (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027).