My AMF Journey: from preparing for the exam to receiving the certificate

Mathilde JANIK

In this article, Mathilde JANIK (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2025) shares her experience taking the AMF Exam.

First of all, let’s begin by presenting what the AMF (Autorité des Marchés Financiers) is and how the entity differs from the AMF certification. The AMF is the financial markets authority in France: its main missions are to ensure the proper functioning of financial markets, protect the savings invested in them, and ensure that investors are provided with adequate information. The AMF is one of the two regulatory authorities for financial institutions in France, alongside the ACPR (Autorité de Contrôle Prudentiel et de Résolution). The ACPR is responsible for the approval and prudential supervision of insurance companies, pension funds, credit institutions, and investment firms. It is also responsible for protecting the customers of these institutions with regard to banking and insurance transactions, as well as for ensuring compliance with anti-money laundering and counter-terrorist financing rules.

The AMF’s missions, powers, and operating procedures are defined in the French Monetary and Financial Code (Code Monétaire et Financier, CMF). A large part of the regulation of financial markets and the investment services industry is defined at the European level by ESMA (European Securities and Markets Authority). The AMF complements these European rules in its General Regulation with rules of good conduct and organization, particularly for portfolio management companies, and it monitors compliance with regulations by regulated professionals.

Brief introduction to the AMF certifications

The AMF issues professional licenses to compliance officers at investment firms. Here, we will focus on the AMF exam itself and its importance for professionals working with financial markets in France. In order to provide financial market participants with a consistent and common foundation of knowledge in finance, regulation, ethics, and sustainable finance, two certification systems have been implemented. The first, called “l’Examen AMF”, is a professional certification recognized everywhere in France. The second is an internal certification provided by some employers in France. This option, which is only available to investment service providers (including management companies), is only valid within the same group: if a person who has passed the internal assessment changes groups, they will have to take the internal assessment organized by their new employer, or take the AMF exam. The AMF also delivers another certification, called “l’Examen AMF Finance Durable”, which is specifically tailored to professionals providing services or products linked to sustainable finance. It covers the regulatory and economic sides of sustainable finance and prepares candidates to gather the sustainability preferences of customers in order to offer tailored solutions adapted to their needs. This article provides more information on the general exam and shares my experience taking it, as it may be of interest to many of you who want to work in finance in France.

About the AMF Certification

The exam is a multiple-choice questionnaire (MCQ) of 120 questions. To pass it and receive the certification, you need at least 80% of correct answers both for questions relating to financial literacy and for questions relating to knowledge deemed essential to the practice of the profession (mainly legal knowledge or knowledge relating to professional ethics). The exam lasts 2 hours, but you do not have to stay for the whole session if you finish earlier. At the time this article was published, 13 organizations in France were certified to organize the exam; I provide the links in the “Useful resources” section below. I personally prepared for this exam with Lefebvre Dalloz Education: as they have a partnership with ESSEC, students benefit from a discount on the package covering the course as well as the exam session.

The professionals who need to pass the exam are those exercising under the authority of an investment service provider (including portfolio management companies) or acting as financial investment advisors.

Within an investment service provider, not all positions require the AMF exam or its internal equivalent. Eight functions require it: salespeople, managers, financial instrument clearing officers, post-trade officers, financial analysts, financial instrument traders, compliance and internal control officers (RCCI), and compliance officers for investment services (RCSI). On top of that, financial investment advisors, whether independent or employed, are now required to pass the AMF exam to advise customers on financial products. It is also very important to mention that no diploma in France is equivalent to the AMF certification: whatever your educational background, you need the AMF certificate to work in the aforementioned functions.

My personal experience

First of all, I would like to explain why I decided to pursue this certification. I am currently finishing my apprenticeship as a personal banker in a regional bank in France, during which I discovered financial advisory for personal customers and small businesses. This apprenticeship sparked a strong interest in portfolio management and in how to tailor financial products to a client’s needs and wants. That is why I decided to pursue a career in wealth management and to take a step in that direction: I am doing an internship as a wealth manager assistant from January 2026 onwards, and I decided to take the exam beforehand so as to be fully operational in my functions from day one.

I registered for the AMF exam in late August 2025 and passed it in late September. I tried to keep a steady study schedule, practicing at least 3 times a week. I reviewed each theme in detail, with quizzes at the end of each section. Here is the list of themes covered by the exam:

  • French, European, and international institutional and regulatory framework
  • Ethics, compliance, and ethical organization of institutions
  • Financial security
  • Market abuse regulations
  • Marketing of financial instruments: rules governing banking and financial solicitation, distance selling, and customer advice
  • Customer relations and information
  • Financial instruments, crypto-assets, and their risks
  • Collective management and management on behalf of third parties
  • Market functioning and organization
  • Post-market and market infrastructure
  • Issues and transactions in securities
  • Accounting and financial fundamentals

Preparing for the exam

In terms of studying, depending on your background in finance, it may be difficult to remember everything if the subjects mentioned are completely new to you. Personally, I studied for 3 weeks before taking the exam, but most of the notions to be acquired were already familiar to me, as I had encountered them throughout my apprenticeship.

The themes I struggled with the most were the ones related to the different institutions and jurisdictions and their areas of application: I sometimes mixed up the institutions and what they were responsible for. The section on collective management and management on behalf of third parties was also a bit of a struggle for me, as these were not the usual financial solutions we offered to clients.

The most useful study techniques for me were constant practice and quizzes. First, I would read all the information for a theme; then I would quiz myself on as many questions as possible. I also kept an “error log” in which, every time I made a mistake, I wrote down what the mistake was and what the right answer was. It helped me tailor my study sessions to my weaknesses. Once I finished the exam, the most exciting part was the email confirming I had passed. When I saw the green “Successfully Passed” message, it was a moment of true relief and accomplishment. I received my official certificate shortly after, marking the end of my AMF journey and the start of a new chapter in which I can apply this critical knowledge.

Key Takeaways: Skills and Mindset

My preparation for the AMF exam highlighted two essential professional skills: discipline and the ability to embrace regulation as a foundation, not a barrier. The exam is less about innate intelligence and more about consistent, structured effort. The disciplined three-week schedule, combined with the detailed “error log,” was crucial. This study strategy translates directly to the finance world: successful wealth management or banking relies not on a single spectacular trade or transaction, but on the daily, meticulous adherence to process and continuous learning from mistakes. This consistency is the greatest soft skill I gained from the process.

Before the exam, it is easy to view the complex institutional and legal frameworks as merely regulatory hurdles. The real takeaway, however, is that this knowledge is the absolute foundation of professional trust. Understanding the nuances of customer relations and information, market abuse regulations, and professional ethics is not just about passing a test; it is about ensuring the integrity of the financial advice provided. For my future role as a wealth manager assistant, the AMF certificate means I can confidently structure solutions knowing they are compliant, ethical, and designed to protect the client’s interests first.

Why should I be interested in this post?

I would strongly advise any student interested in client-facing or advisory roles in French finance to approach this exam with a structured plan and a focus on understanding the spirit of the law rather than just memorizing facts. The certificate doesn’t just grant you the right to advise; it grants you the responsibility to do so ethically.

Related posts on the SimTrade blog

   ▶ Akshit GUPTA AMF

   ▶ Mahé FERRET Selling Structured Products in France

Useful resources

Autorité des Marchés Financiers (AMF) The AMF at a glance.

Autorité des Marchés Financiers (AMF) Guide sur l’examen AMF généraliste et finance durable.

Autorité des Marchés Financiers (AMF) Certification professionnelle AMF en matière de Finance durable.

Autorité des Marchés Financiers (AMF) Certification professionnelle AMF en matière de Finance durable Transcription textuelle.

Lefebvre Dalloz Compétences. Certification professionnelle AMF

Autorité des Marchés Financiers (AMF) European supervision of capital markets: the AMF calls for an enhanced role for ESMA to promote a true Savings and Investments Union

Autorité de contrôle prudentiel et de résolution (ACPR) (2024) Rapport annuel du pôle commun AMF-ACPR 2024

Autorité de contrôle prudentiel et de résolution (ACPR) (2024) Qui sommes-nous ?

About the author

The article was written in February 2026 by Mathilde JANIK (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2025).

   ▶ Discover all articles written by Mathilde JANIK.


From IAS to IFRS: How International Accounting Standards Shape Financial Reporting

Maxime PIOUX

In this article, Maxime PIOUX (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – 2022-2026) explains the importance of international accounting standards and highlights the key differences that finance and business students should be aware of.

Why International Accounting Standards

Financial statements are the primary source of information used by investors, managers, and other stakeholders to assess a company’s financial position and performance. However, without common accounting rules, it would be difficult to compare the results of two companies operating in different countries and industries. International accounting standards were developed to address this challenge.

In a context of globalization in financial markets, international accounting standards aim to harmonize accounting practices in order to ensure better comparability between companies regardless of their country or sector. These rules also play a key role in financial transparency. By defining how transactions should be recorded, measured, and presented, they enhance transparency and reduce information asymmetries (situations in which some parties, such as investors, have less information than others about a company’s actual financial situation).

Finally, international accounting standards help improve the quality of financial reporting by imposing disclosure requirements in the financial statements and their notes. These guidelines therefore provide more reliable and consistent financial information, facilitating economic decision-making and strengthening market confidence, as highlighted by Richard Grasso, former Chairman of the NYSE (New York Stock Exchange, the main American stock exchange), “It should strengthen investors’ confidence. This is done through transparency, high quality financial reports, and a standardized economic market.”

IAS: First International Accounting Standards and reference framework

The first international accounting standards to emerge were the IAS (International Accounting Standards), developed from 1973 onwards by the International Accounting Standards Committee (IASC). This international organization, composed of representatives from multiple countries, was responsible for developing accounting rules applicable worldwide by proposing a common accounting framework.

The IAS were created to meet the needs of investors and markets for reliable, transparent, and consistent information. They cover numerous areas and provide detailed rules on how to account for and present financial transactions and events. Initially, they primarily concerned multinational companies and listed entities seeking to publish financial statements comparable internationally. At the beginning, their application was often voluntary, but some jurisdictions gradually required their adoption.

IFRS: the emergence of a modern international accounting framework

In 2001, the International Accounting Standards Board (IASB) replaced the IASC, representing a significant shift from the former committee. While the IASC focused mainly on developing voluntary standards to harmonize accounting practices, the IASB introduced a more structured, rigorous, and coherent framework, with a mission to supervise and continuously develop international standards in order to strengthen their adoption and credibility worldwide. The IFRS (International Financial Reporting Standards) were born from this process. Their primary objective is similar to that of the IAS: to improve the reliability and comparability of financial statements. However, IFRS go further by imposing a uniform framework with precise principles. They aim to provide a single accounting reference, ensuring that all relevant companies present their financial transactions and events transparently and in a standardized way.

Today, IFRS apply to a wide range of companies, mainly listed and multinational entities, but some countries have adopted them for all companies. In the European Union, for example, all listed companies must prepare their consolidated financial statements according to IFRS, while in other countries, such as the United States, IFRS may be applied voluntarily or for certain subsidiaries of international groups.

To better understand the purpose of IFRS, it is useful to remember three fundamental principles these standards adhere to:

  • Completeness: Financial statements must reflect the company’s entire activity and limit off-balance-sheet information.
  • Comparability: Financial statements follow standardized formats and rules so that they can be compared across companies.
  • Neutrality: Standards should not allow companies to manipulate their accounts.

The application of these standards today

Today, IFRS constitute the main framework for international accounting standards, used in 147 countries (98% of European countries and 92% of Middle Eastern countries). Some IAS, developed before 2001, continue to apply (such as IAS 1 on the presentation of financial statements) as long as they have not been replaced by an equivalent IFRS.

In France, the application of IFRS is mandatory for all listed companies, particularly for the preparation of their consolidated financial statements. Large unlisted companies and certain mid-sized enterprises can also choose to apply them in order to harmonize their international reporting, although this is not compulsory. In contrast, SMEs remain largely subject to the French General Accounting Plan (PCG “Plan Comptable Général”), which provides simplified rules suited to their size and structure.

Impact of IFRS

The impacts of IFRS on companies have been numerous and have varied by industry. However, overall, these impacts have remained relatively limited. For instance, according to a FinHarmony study on the transition to IFRS, the equity of CAC 40 companies changed by only 1.5%.

Three IFRS standards that have led to significant changes in corporate accounting are presented below.

IFRS 16: Leases in the Balance Sheet

Before the introduction of IFRS 16 in January 2019, the accounting treatment of leases was governed by IAS 17 (leases). This standard distinguished between two types of leases:

  • Finance leases, for example when a company leases a machine with a purchase option, for which the company recognized an asset corresponding to the leased item and a liability corresponding to future lease payments.
  • Operating leases, for example when a company rents office space, which were recorded as expenses in the income statement and remained off-balance-sheet.

With IFRS 16, this distinction disappears for most leases: now, all leases must be recognized in the balance sheet as a “right-of-use” asset and a lease liability. The only exceptions are short-term leases (less than 12 months) or leases of low-value assets (less than 5,000 USD). This reform aims to improve the transparency and comparability of financial statements by reflecting all lease obligations on the balance sheet.

As a result, companies with numerous operating leases, such as retail chains or airlines, have seen their assets and liabilities increase significantly, thereby affecting certain financial ratios and indicators (such as debt-to-assets or EBITDA).

Let’s take the example of an airline that leases 10 aircraft under operating lease contracts, with a total annual rent of €10 million over a 10-year period. The company generates revenue of €500 million, an EBITDA of €100 million, and has debt of €250 million.

  • Before IFRS 16, these contracts were classified as operating leases, with an annual lease expense of €10 million recorded in the income statement and no recognition on the balance sheet, despite this significant long-term financial commitment.
  • With IFRS 16, the company must now recognize a right-of-use asset on the balance sheet (corresponding to the present value of future lease payments) along with a lease liability of the same amount. Assuming a discount rate of 2%, the present value of the lease payments over 10 years is approximately €90 million, recorded as both an asset and a liability.
    In the income statement, the lease expense is replaced by depreciation expenses on the right-of-use asset and interest expenses on the lease liability.

The EBITDA, which excludes depreciation and interest, therefore increases to €110 million, compared to €100 million under the previous treatment. The former annual lease expense of €10 million no longer affects EBITDA because it has been replaced by depreciation and interest. However, the apparent leverage increases significantly, as the lease liability rises by €90 million (from €250 million to €340 million). Consequently, the debt-to-EBITDA ratio, for example, moves from 2.5 (250/100) to 3.1 (340/110), which can affect the perception of investors and banks.

This example illustrates that the increase in EBITDA and debt results from a change in accounting standards rather than a real improvement in the company’s economic performance.
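The arithmetic in this example can be reproduced in a few lines of Python (a sketch using the illustrative figures above; the 2% rate is the assumed discount rate from the example, not a rate prescribed by the standard):

```python
# Sketch of the IFRS 16 example above (figures in EUR millions).
# The 2% discount rate and the cash flows are the example's assumptions.

annual_rent = 10.0  # annual lease payment
years = 10          # lease term
rate = 0.02         # assumed discount rate

# Present value of an ordinary annuity: the right-of-use asset / lease liability
lease_liability = annual_rent * (1 - (1 + rate) ** -years) / rate
print(f"Lease liability: ~{lease_liability:.1f} M EUR")  # ~89.8, rounded to 90 in the text

ebitda_before, debt_before = 100.0, 250.0
# Under IFRS 16 the rent leaves EBITDA (it is replaced by depreciation + interest)
ebitda_after = ebitda_before + annual_rent            # 110
debt_after = debt_before + round(lease_liability, 0)  # ~340

print(f"Debt/EBITDA before: {debt_before / ebitda_before:.1f}")  # 2.5
print(f"Debt/EBITDA after:  {debt_after / ebitda_after:.1f}")    # 3.1
```

The present value (~€89.8 million) confirms the "approximately €90 million" figure, and the ratio shift from 2.5 to 3.1 comes purely from the accounting change, not from any change in the business.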

IFRS 13: Historical Cost vs Fair Value

A significant change introduced by IFRS 13 in January 2013 concerns the measurement of assets and liabilities. Indeed, under certain IAS and in many national practices, assets were often recorded at historical cost, meaning their original purchase price.

By contrast, IFRS 13 promotes the concept of fair value, which represents the price at which an asset could be sold in a market at the closing date.

Fair value accounting can lead to significant fluctuations in the balance sheet and income statement, particularly for companies holding financial assets, securities, or significant real estate, as it reflects market variations. Companies in sectors such as finance, real estate, or the hotel industry may thus see their balance sheets and financial ratios change from one period to another, reflecting market realities. However, this approach provides a more realistic and transparent view of the financial situation.

Let’s take the example of a real estate group that owns a portfolio of buildings recorded at a historical cost of €500 million. In other words, the total purchase price of all the group’s buildings amounts to €500 million, whether they were acquired recently or several years ago. The company also has a bank debt of €200 million.

  • Before IFRS 13, the buildings were recorded under “property, plant, and equipment” in non-current assets at their historical cost of €500 million, regardless of changes in the real estate market. Equity and financial ratios therefore reflected this fixed value, without taking market fluctuations into account.
  • With the application of fair value as defined by IFRS 13, buildings are now valued at their market value at the reporting date. This fair value corresponds to the price at which the asset could be sold under normal market conditions and is generally estimated using real estate appraisals or comparable transactions.

Let’s assume that the current market value of the portfolio is €600 million. The balance sheet increases by €100 million in assets and equity. In practice, this revaluation directly affects certain financial ratios. For example, the debt-to-equity ratio decreases from 0.4 (200/500) to 0.33 (200/600). Investors and banks then perceive the company as less leveraged and with a larger asset base, even though the company’s actual operating activity has not changed.

By contrast, if the market value drops to €400 million, equity decreases by €100 million, and the debt-to-equity ratio rises from 0.4 to 0.5 (200/400), which could negatively affect the perceived risk of the company.

This example illustrates that fair value accounting more accurately reflects the current economic situation of assets, but leads to visible fluctuations in the balance sheet and financial ratios.
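A minimal sketch of the ratio arithmetic above, keeping the example's simplification that the revaluation flows one-for-one into equity (figures in EUR millions, illustrative only):

```python
# Sketch of the IFRS 13 fair-value example above (EUR millions).
# Simplification from the example: equity is taken as the portfolio's
# carrying value (500 at historical cost), and revaluations hit equity directly.

debt = 200.0
equity_at_cost = 500.0  # historical-cost basis

def debt_to_equity(market_value: float) -> float:
    # Each euro of revaluation above (or below) cost adds to (or subtracts from) equity
    equity = equity_at_cost + (market_value - 500.0)
    return debt / equity

print(round(debt_to_equity(500.0), 2))  # 0.4  (historical cost)
print(round(debt_to_equity(600.0), 2))  # 0.33 (market value up)
print(round(debt_to_equity(400.0), 2))  # 0.5  (market value down)
```

The same €200 million of debt looks lighter or heavier depending purely on the market valuation date, which is exactly the volatility the article describes.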

IFRS 15: Revenue from Contracts with Customers

IFRS 15, which came into effect in January 2018, replaced IAS 18 (Revenue) and IAS 11 (Construction Contracts), introducing a single and standardized approach to revenue recognition.

Before IFRS 15, revenue was recognized differently depending on its nature:

  • Under IAS 18, revenue from goods was recognized at delivery, and revenue from services was recognized at the time they were performed.
  • Under IAS 11, revenue from construction contracts was recognized over time based on the percentage of completion of the project.

With IFRS 15, revenue recognition is based on a single principle: the transfer of control of the good or service to the customer, regardless of physical delivery. In other words, revenue is recognized when the customer obtains control of the good or service. In practical terms, this means:

  • For goods sold, revenue is recognized when the customer can use the item and benefit economically from it.
  • For services (subscriptions or IT services for instance), revenue is recognized progressively as the service is provided, in proportion to the progress or consumption by the customer, rather than at the end of the contract or at invoicing.
  • For construction contracts, revenue is allocated to each stage of the contract as the customer gains control of the corresponding performance.

This approach standardizes revenue treatment across all sectors and reduces discrepancies between companies and countries. IFRS 15 has changed the way companies record revenue in the income statement. Some transactions must now be spread over time, while others can be recognized more quickly, depending on when the customer obtains control of the good or service. The most affected sectors are construction, technology, telecommunications, and services. This standard therefore improves comparability and transparency of revenue, enabling investors and financial analysts to better understand a company’s actual economic performance.

Let’s take the example of a construction company that signs a contract to renovate a residential complex for a total amount of €50 million, over a period of 2 years. Let’s suppose the total estimated cost of the project is €20 million.

  • Before IFRS 15, revenue recognition could differ depending on the applicable standard: under IAS 11, revenue was generally recognized progressively based on the percentage of completion of the project, but some companies could wait until invoicing or delivery to record revenue. This could lead to divergent practices, for example recognizing revenue too early to artificially improve performance, or on the contrary, postponing revenue to smooth results.
  • With IFRS 15, revenue recognition is based on the unique principle of transfer of control to the customer. In practice, this means that the company must recognize revenue as the customer obtains control of the work performed, even if payment has not yet been received.

Let’s assume that, at the end of the first year, 60% of the work is completed and the customer can use this part of the complex: the company will then record €30 million of revenue (60% of the total contract) in its income statement, and the corresponding costs of €12 million (60% of the project costs). The net profit for this part of the project is therefore €18 million (30 – 12).
On the balance sheet, assets increase by €30 million: in cash if the customer has already paid, or in accounts receivable if payment has not yet been received. Equity increases by €18 million, corresponding to the net income from this portion of the project. On the liabilities side, a trade payable of €12 million is recorded, corresponding to costs incurred but not yet paid. This debt will disappear when the company pays its suppliers, reducing cash and maintaining the balance sheet equilibrium.
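The percentage-of-completion figures above can be checked with a short sketch (contract figures are the illustrative ones from the example):

```python
# Sketch of the IFRS 15 example above (EUR millions).
# Revenue and costs are recognized in proportion to the work over which
# the customer has gained control.

contract_price = 50.0  # total contract amount
total_cost = 20.0      # total estimated project cost
completion = 0.60      # share of work completed and controlled by the customer

revenue = completion * contract_price  # recognized in year 1
cost = completion * total_cost         # matching costs for year 1
profit = revenue - cost                # net income from this portion

print(f"Year-1 revenue: {revenue}, costs: {cost}, profit: {profit}")
```

This reproduces the €30 million of revenue, €12 million of costs, and €18 million of profit recognized at the end of the first year.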

This approach allows the financial statements to more accurately reflect the economic reality of the contract and makes results more transparent for investors. Without this method, revenue for the first year could have been zero, thus hiding the true performance of the project.

What about US GAAP?

In addition to IFRS, there are also US GAAP (Generally Accepted Accounting Principles), which constitute the accounting framework used in the United States (US). US GAAP are mandatory for all U.S. listed companies, as IFRS are not permitted for the preparation of financial statements of domestic companies. However, foreign companies listed in the United States may publish their financial statements under IFRS without reconciliation to US GAAP.

US GAAP have existed since 1973 and are developed by the Financial Accounting Standards Board (FASB). They are based on a more rules-based approach, with a much larger volume of standards and interpretations than IFRS, often estimated at several thousand pages (compared with only a few hundred pages for IFRS). This approach reduces the degree of judgment and interpretation but makes the framework more complex.

Why should I be interested in this post?

Understanding the differences between IAS and IFRS standards is essential for any student in finance, accounting, auditing, or corporate finance who wishes to pursue a career in finance. International accounting standards directly influence how companies present their financial performance, measure their assets and liabilities, and communicate with investors. Mastering these concepts makes it easier to read and understand financial statements and to develop a more critical view on a company’s actual performance.

Related posts on the SimTrade blog

   ▶ Samia DARMELLAH My experience as an accounting assistant at Dafinity

   ▶ Louis DETALLE A quick review of the accountant job in France

   ▶ Alessandro MARRAS My professional experience as a financial and accounting assistant at Professional Services

   ▶ Louis DETALLE A quick review of the Audit job

Useful resources

IFRS

FASB

US GAAP vs IFRS

YouTube IFRS 16

YouTube IFRS 13

YouTube IFRS 15

Academic resources

Colmant B., Michel P., Tondeur H., 2013, Les normes IAS-IFRS : une nouvelle comptabilité financière, Pearson.

Raffournier B., 2021, Les normes comptables internationales IFRS, 8th edition, Economica.

Richard J., Colette C., Bensadon D., Jaudet N., 2011, Comptabilité financière : normes IFRS versus normes françaises, Dunod.

André P., Filip A., Marmousez S., 2014, L’impact des normes IFRS sur la relation entre le conservatisme et l’efficacité des politiques d’investissement, Comptabilité Contrôle Audit, Vol. 20 (3), p. 101-124.

Poincelot E., Chambost I., 2015, L’impact des normes IFRS sur les politiques de couverture des risques financiers : Une étude des groupes cotés en France, Revue française de gestion, Vol. 41 (249), p. 133-144.

About the author

The article was written in February 2026 by Maxime PIOUX (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2022-2026).

   ▶ Read all articles by Maxime PIOUX.

Measures and statistics of business activity in global derivative markets

Saral BINDAL

In this article, Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research assistant at ESSEC Business School) explains how the business of derivatives markets has evolved over time and the pivotal role of the Black–Scholes–Merton option pricing model in their development.

Introduction

The derivatives market is among the most dynamic segments of global finance, serving as a tool for risk management, speculation, and price discovery across diverse asset classes. Spanning from bespoke over-the-counter contracts to standardized exchange-traded instruments, derivatives have become indispensable for investors, institutions, and corporations alike.

This post explores the derivatives landscape, examining market structures, contract types, underlying assets, and key statistics of business activity. It also highlights the pivotal role of the Black–Scholes–Merton model, which provided a theoretical framework for options pricing and catalysed the growth of derivatives markets.

Types of derivatives markets

The derivatives market can be categorized according to its market structure (over-the-counter derivatives and exchange-traded derivatives), the types of derivatives contracts traded (futures/forwards, options, swaps), and the underlying asset classes involved (equities, interest rates, foreign exchange, commodities, and credit), as outlined below.

Market structure: over-the-counter derivatives and exchange-traded derivatives

Over-the-counter derivatives are privately negotiated, customized contracts between counterparties like banks, corporates, and hedge funds, traded via phone or electronic networks. OTC derivatives offer high flexibility in terms (price, maturity, quantity, delivery) but are less regulated, with decentralized credit risk management, no central clearing, low price transparency, and higher counterparty risk. They suit specialized or low-volume trades and often incubate new products.

Exchange-traded derivatives are standardized contracts traded on organized exchanges with publicly reported prices. Trades are cleared through a central clearing house that guarantees settlement, with daily marking-to-market and margining to reduce counterparty risk. ETDs are more regulated, transparent, and liquid, making them ideal for high-volume, widely traded instruments, though less flexible than OTC contracts.

Types of derivatives contracts

A derivative contract is a financial instrument that derives its value from an underlying asset. The four major types of such instruments are explained below.

A forward contract is a private agreement to buy or sell an asset at a fixed future date and price. It is traded over the counter between two counterparties (e.g., banks or clients). One party takes a long position (agrees to buy), the other a short position (agrees to sell). Settlement happens only at maturity, and contracts are customized, unregulated, and expose parties to direct counterparty risk.

A futures contract serves the same economic purpose as a forward (future delivery at a fixed price) but is traded on an exchange with standardized terms. A clearing house stands between buyers and sellers and guarantees performance. Futures are marked to market daily so gains and losses are realized continuously. They are regulated, more transparent, and carry lower counterparty risk than forwards.

Options are contracts that give the holder the right but not the obligation to buy (call) or sell (put) an asset at a fixed strike price by a given expiration date. The buyer pays an upfront premium to the writer. If the option expires unexercised, the buyer loses only the premium. If exercised, the writer bears the payoff. Options can be American (exercise anytime) or European (exercise only at expiry) and are traded both on exchanges (standardized) and OTC (customized).

Swaps are bilateral contracts to exchange streams of cash flows over time, typically based on fixed versus floating interest rates or other reference indices. Payments are calculated on a notional principal that is not exchanged. Swaps are core OTC instruments for managing interest rate and financial risk.
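To make the mechanics concrete, here is a minimal Python sketch of one settlement period of a plain-vanilla interest rate swap. All figures (notional, rates, period length) are hypothetical and chosen only for illustration:

```python
# One settlement period of a plain-vanilla interest rate swap (hypothetical figures).
# The fixed-rate payer pays fixed and receives floating; the notional is never exchanged.
notional = 1_000_000      # notional principal in USD (reference amount only)
fixed_rate = 0.03         # annual fixed rate agreed in the swap
floating_rate = 0.035     # floating rate observed for the period (e.g., a reference index)
year_fraction = 0.5       # semi-annual settlement period

# Net cash flow to the fixed-rate payer: receive floating, pay fixed
net_payment = (floating_rate - fixed_rate) * notional * year_fraction
print(net_payment)  # 2500.0
```

Because only the net difference is paid on the notional, the cash actually exchanged is a small fraction of the contract's face value, which foreshadows the gap between notional amounts and economic exposure discussed later in this article.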

Types of underlying asset classes

Underlying assets are the products on which a derivative instrument or contract derives its value. The most commonly traded underlying assets are explained below.

Equity derivatives include futures and options on stock indices, such as the S&P 500 Index. These instruments offer capital-efficient ways to manage market risk and enhance returns. Through index futures, institutional investors can achieve cost-effective hedging by locking in prices, while index options provide a non-linear, asymmetric payoff structure that protects against tail risk. Furthermore, equity swaps allow for the seamless exchange of total stock returns for floating interest rates, providing exposure to specific market segments without the capital requirements of direct physical ownership.

Interest rate derivatives include swaps and futures that help manage interest rate risk. Interest rate swaps involve exchanging fixed and floating payments, protecting banks against mismatches between loan income and deposit costs. Interest rate futures allow investors to lock in future borrowing or investment rates and provide insight into market expectations of monetary policy.

Commodity derivatives hedge price risk arising from storage, delivery, and seasonal supply-demand fluctuations. Forwards and futures on crude oil, natural gas, and power are widely used.

Foreign exchange derivatives include forward contracts and cross-currency swaps, allowing firms to hedge currency risk. Cross-currency swaps also support local currency bond markets by enabling hedging of interest and exchange rate risk.

Credit derivatives transfer the risk of default between counterparties. The most widely used is the credit default swap (CDS), which acts like insurance: the buyer pays a premium to receive compensation if a reference entity defaults.

Quantitative measures of derivatives market activity and size

This section presents the principal measures or statistics used to evaluate the size of the derivatives markets, covering both over-the-counter and exchange-traded instruments, the different derivatives products, and asset classes.

Notional outstanding and gross market value are the primary measures used to assess the size and economic exposure of OTC derivatives markets, while ETDs are typically evaluated using indicators such as open interest and trading volume.

Notional amount

Notional amount, or notional outstanding, is the total principal or reference value of all outstanding derivatives contracts. It captures the overall scale of positions in the derivatives market without reflecting actual market risk or cash exchanged.

For example, let us consider an FX forward contract in which two parties agree to exchange $50 for euros in three months at a predetermined exchange rate. The notional amount is $50, because all cash flows (and gains or losses) from the contract are calculated with reference to this amount. No money is exchanged when the contract is initiated, and at maturity only the difference between the agreed exchange rate and the prevailing market rate determines the gain or loss computed on the $50 notional.

Now consider a call option on a stock with a strike price of $50. The notional amount is $50. The option buyer pays only an upfront premium, which is much smaller than $50, but the payoff of the option at maturity depends on how the market price of the stock compares to this $50 reference value.

When measuring notional outstanding in the derivatives market, the notional amounts of all individual contracts are simply added together. For example, one FX forward with a notional of $50 and two option contracts each with a notional of $50 result in a total notional outstanding of $150. This aggregated figure indicates the overall scale of derivatives activity, but it typically overstates actual economic risk because contracts may offset each other and only a fraction of the notional is ever exchanged.
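The aggregation just described can be sketched in a few lines of Python, using the hypothetical $50 contracts from the example above:

```python
# Notional outstanding: simple sum of the notional amounts of all open contracts.
# One FX forward and two options, each with a $50 notional (example figures).
notionals = [50, 50, 50]
notional_outstanding = sum(notionals)
print(notional_outstanding)  # 150
```

Note that this sum is a measure of scale, not of risk: two offsetting contracts would still each add their full notional to the total.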

Gross market value

Gross market value is the sum of the absolute values of all outstanding derivatives contracts with either positive or negative replacement (mark-to-market) values, evaluated at market prices prevailing on the reporting date. It reflects the potential scale of market risk and financial risk transfer, showing the economic exposure of a dealer’s derivatives positions in a way that is comparable across markets and products.

To continue the previous FX forward example, suppose a dealer has two outstanding FX forward contracts, each with a notional amount of $50. Due to movements in exchange rates, the first contract has a positive replacement value of $0.50 (the dealer would gain $0.50 if the contract were replaced at current market prices), while the second contract has a negative replacement value of –$0.40. The gross market value is calculated as the sum of the absolute values of these replacement values: |0.50| + |−0.40| = $0.90. Although the total notional outstanding of the two contracts is $100, the gross market value is only $0.90. This measure therefore reflects the dealer’s actual economic exposure to market movements at current prices, rather than the contractual size of the positions.
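A minimal Python sketch of this calculation, using the two replacement values from the example above:

```python
# Gross market value: sum of the absolute replacement (mark-to-market) values
# of all outstanding contracts, evaluated at current market prices.
replacement_values = [0.50, -0.40]  # the two FX forwards from the example
gross_market_value = sum(abs(v) for v in replacement_values)
print(gross_market_value)  # 0.9
```

Against a total notional of $100 for the two contracts, the $0.90 gross market value illustrates how much smaller the current economic exposure is than the contractual face value.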

When this concept is extended to the entire derivatives market, the same distinction becomes apparent at a global scale. While the global derivatives market is often described as having hundreds of trillions of dollars in notional outstanding (approximately USD 850 trillion for OTC derivatives), the economically meaningful exposure is an order of magnitude smaller when measured using gross market value. Unlike notional amounts, gross market value aggregates current mark-to-market exposures, making it a more meaningful and comparable indicator of market risk and financial risk transfer across products and markets.

Open Interest

Open interest refers to the total number of outstanding derivative contracts that have not been closed, expired, or settled. It is calculated by adding the contracts from newly opened trades and subtracting those from closed trades. Open interest serves as an important indicator of market activity and liquidity, particularly in exchange-traded derivatives, as it reflects the level of active positions in the market. Measured at the end of each trading day, open interest is widely used as an indicator of market sentiment and the strength behind price trends.

For example, on an exchange, a total of 100 futures contracts on crude oil are opened today while 30 existing contracts are closed. The open interest at the end of the day would be: 100 (new contracts) − 30 (closed contracts) = 70 contracts. This indicates that 70 contracts remain active in the market, representing the total number of positions that traders are holding.

Trading Volume

Trading volume measures the total number of contracts traded over a specific period, such as daily, monthly, or annually. It provides insight into market liquidity and activity, reflecting how actively derivatives contracts are bought and sold. For OTC markets, trading volume is often estimated through surveys, while for exchange-traded derivatives, it is directly reported.

Consider the same crude oil futures market. If, during a single trading day, 50 contracts are bought and 50 contracts are sold (including both new and existing positions), the trading volume for the day would be: 50 + 50 = 100 contracts.

Here, trading volume shows how active the market is on that day (flow), while open interest shows how many contracts remain open at the end of the day (stock). High trading volume with low open interest may indicate rapid turnover, whereas high open interest with rising prices can signal strong bullish sentiment.
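The flow/stock distinction can be illustrated with a short Python sketch based on the crude-oil figures used above (assuming, for simplicity, that no positions are open before the day starts):

```python
# Trading volume is a flow (activity during the day);
# open interest is a stock (positions remaining at the end of the day).
open_interest_start = 0    # simplifying assumption: no positions before today
contracts_opened = 100     # new positions created during the day
contracts_closed = 30      # existing positions closed during the day

open_interest_end = open_interest_start + contracts_opened - contracts_closed
trading_volume = 50 + 50   # contracts bought plus contracts sold during the day

print(open_interest_end)   # 70
print(trading_volume)      # 100
```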

Key sources of statistics on global derivatives markets

Bank for International Settlements (BIS)

The Bank for International Settlements (BIS) provides quarterly statistics on exchange-traded derivatives (open interest and turnover in contracts, and notional amounts) and semiannual data on OTC derivatives outstanding (notional amounts and gross market values across risk categories like interest rates, FX, equity, commodities, and credit). All the data used in this post has been sourced from the BIS database.

Data are collected from over 80 exchanges for ETDs and via surveys of major dealers in 12 financial centers for OTC derivatives. BIS ensures comparability by standardizing definitions, consolidating country-level data, halving inter-dealer positions to avoid double counting, and converting figures into USD. Interpolations are used to fill gaps between triennial surveys, ensuring consistent time series for analysis.

International Swaps and Derivatives Association (ISDA)

ISDA develops and maintains standardized reference data and contractual frameworks that underpin global OTC derivatives markets. This includes machine-readable definitions and value lists for core market terms such as benchmark rates, floating rate options, currencies, business centers, and calendars, primarily derived from ISDA documentation (notably the ISDA Interest Rate Derivatives Definitions). The data are distributed via the ISDA Library and increasingly designed for automated, straight-through processing.

ISDA’s standards are created and updated through industry working groups and are widely used to support trade documentation, confirmation, clearing, and regulatory reporting. Initiatives such as the Common Domain Model (CDM) and Digital Regulatory Reporting (DRR) translate market conventions and regulatory requirements across multiple jurisdictions into consistent, machine-executable logic. While ISDA does not publish comprehensive market volume statistics, its frameworks play a central role in harmonizing OTC derivatives markets and enabling reliable post-trade transparency.

Futures Industry Association (FIA)

The Futures Industry Association (FIA), via FIA Tech, provides comprehensive derivatives data including position limits, exchange fees, contract specifications, and trading volumes for futures and options across global products.

Sources aggregate data from exchanges, indices (1,800+ products, 100,000+ constituents), and regulators for reference data such as symbology and corporate actions. The process involves standardizing data into consolidated formats with 500+ attributes, automating regulatory reporting (e.g., CFTC ownership/control), and ensuring compliance via databanks.

How to get the data

The data discussed in this article are drawn from the BIS, the FIA, and Visual Capitalist. Comprehensive statistics on global derivatives markets, covering both over-the-counter (OTC) and exchange-traded derivatives (ETDs), are available at https://data.bis.org/. For exchange-traded derivatives specifically, detailed data are provided by the Futures Industry Association (FIA) through its ETD volume reports, accessible at https://www.fia.org/etd-volume-reports. Data on the equity spot market and real-economy sectors are sourced from Visual Capitalist.

Derivatives market business statistics

Global derivatives market

In this section, we focus on two core measures of derivatives market activity and size: the notional amount outstanding and the gross market value, which together provide complementary perspectives on the scale of contracts and the associated economic exposure.

As of 30th July 2025, the global derivatives market is estimated to have an outstanding notional value of approximately USD 964 trillion, according to the Bank for International Settlements (BIS). As illustrated in the figure below, the market is largely dominated by over-the-counter (OTC) derivatives, which account for nearly 88% of total notional amounts, whereas exchange-traded derivatives (ETDs) represent a comparatively smaller share of about USD 118 trillion.

Figure 1. Derivatives Markets: OTC versus ETD (2025)
Derivatives Markets: OTC and ETD (2025)
Source: computation by the author (BIS data of 2025).

Figure 2 below compares the scale of the global equity derivatives market with that of the underlying equity spot market as of mid-2025. The figure shows that, although equity derivatives represent a sizeable market in notional terms, they are still much smaller than the equity spot market measured by market capitalization. This suggests that the primary locus of economic value in equities remains in the spot market, while the derivatives market mainly represents contingent claims written on that underlying value rather than a comparable pool of market wealth. The relatively small gross market value of equity derivatives further indicates that only a limited portion of derivative notional translates into actual market exposure.

Figure 2. Equity Markets: Spot versus Derivatives (2025)
Equity Markets: Spot versus Derivatives (2025)
Source: computation by the author (BIS and Visual Capitalist data of 2025).

Data sources: global derivatives notional outstanding as of mid-2025 BIS OTC and exchange traded data; global equity spot market capitalization as of 2025 (Visual Capitalist).

Figure 3 below juxtaposes the global derivatives market with selected real-economy sectors to provide an intuitive comparison of scale. Values are reported in USD trillions and plotted on a logarithmic axis, such that equal distances along the horizontal scale correspond to ten-fold (×10) changes in magnitude rather than linear increments. This representation allows quantities that differ by several orders of magnitude to be meaningfully displayed within a single chart.

Interpreted in this manner, the figure illustrates that the notional size of derivatives markets far exceeds the market capitalization of major real-economy sectors, including technology, financials, energy, fast moving consumer goods (FMCG), and luxury. The comparison is illustrative rather than like-for-like, and is intended to contextualize the scale of financial contract exposure rather than to imply equivalent economic value or direct risk.

Figure 3. Scale of Global Derivatives Relative to Major Real-Economy Sectors (2025)
Scale of Global Derivatives Relative to Major Real-Economy Sectors (2025)
Source: computation by the author (BIS and Visual Capitalist data).

Data sources: BIS OTC derivatives statistics (June 2025) for notional outstanding; Visual Capitalist global stock market sector data (2025) for sector market capitalizations; companies market cap / Visual Capitalist for luxury company market caps.

OTC derivatives market

Figures 4 and 5 below illustrate the evolution of the OTC derivatives market from 1998 to 2025 using the two measures discussed above: outstanding notional amounts (Figure 4) and gross market value (Figure 5). As the data show, notional outstanding tends to overstate the effective economic size of the market, as it reflects contractual face values rather than actual risk exposure. By contrast, gross market value provides a more economically meaningful measure by capturing the current cost of replacing outstanding contracts at prevailing market prices.

Figure 4. Size of the OTC Derivatives Market (Notional amount)
Size of the OTC derivative market (Notional amount)
Source: computation by the author (BIS data).

Figure 5. Size of the OTC Derivatives Market (Gross market value)
Size of the OTC derivative market (Gross market value)
Source: computation by the author (BIS data).

The figure below illustrates the OTC derivatives market data as of 30th July 2025 based on the two metrics discussed above: outstanding notional amounts and gross market value. As the data show, gross market value (GMV) represents only about 2.6% of total notional outstanding, highlighting the large gap between contractual face values and economically meaningful exposure.

Figure 6. Size measure of the OTC derivatives market (2025)
Size of the OTC derivative market (2025)
Source: computation by the author (BIS data).

Exchange-traded derivatives market

Figure 7 below illustrates the growth of the exchange-traded derivatives market from 1993 to 2025, based on outstanding notional amounts (open interest) and turnover notional amounts (trading volume). For comparability across contracts and exchanges, open interest is expressed in notional terms by multiplying the number of open contracts by their contract size, yielding US dollar equivalents. Turnover is defined as the notional value of all futures and options traded during the period, with each trade counted once.

Figure 7. Size of the Exchange-Traded Derivatives Market
Size of the exchange traded derivatives market
Source: computation by the author (BIS data).

The figure below illustrates the exchange-traded derivatives market data as of 30th July 2025 based on the two metrics discussed above: open interest and turnover (trading volume). The chart shows that only about 12% of open positions are actively traded, highlighting the difference between market size and trading activity.

Figure 8. Size of the Exchange traded derivatives market (2025)
Size of the exchange traded derivatives market (2025)
Source: computation by the author (BIS data).

Figure 9 below illustrates the evolution of the global exchange-traded derivatives market from 1993 to 2025, measured by outstanding notional amounts across major regions. The figure reveals a pronounced concentration of activity in North America and Europe, which drives most of the market’s expansion over time, while Asia-Pacific and other regions play a more modest role. Despite cyclical fluctuations, the overall trajectory is one of sustained long-run growth, underscoring the increasing importance of exchange-traded derivatives in global risk management and price discovery.

Figure 9. Size of the Exchange-Traded Derivatives Market by geographical locations
Size of the exchange traded derivatives market by geographic location
Source: computation by the author (BIS data).

Underlying asset classes

This section analyzes underlying asset-class statistics for derivatives traded in exchange-traded (ETD) and over-the-counter (OTC) markets.

Figure 10 below presents the distribution of exchange-traded derivatives (ETDs) activity across major underlying asset classes. When measured by the number of contracts traded (volume), the market is highly concentrated, with equity derivatives dominating and accounting for the vast majority of activity, followed at a significant distance by interest rate and commodity derivatives. This distribution reverses, however, when measured by the notional value of outstanding contracts: interest rate derivatives then represent the largest share of the market, due to the high underlying value of each contract.

Figure 10. Size of the exchange-traded derivatives market by asset classes
Size of the exchange traded derivatives market
Source: computation by the author (FIA data).

Figure 11 below presents the distribution of OTC derivatives activity across major underlying asset classes, measured by outstanding notional amounts and displayed on a logarithmic scale. Read in this way, the chart shows that OTC activity is broadly diversified across interest rates, equity indices, commodities, foreign exchange, and credit, with interest rate and foreign exchange derivatives accounting for the largest shares.

Figure 11. Size of the OTC derivatives market by asset classes
Size of the OTC derivatives market by asset classes
Source: computation by the author (BIS data).

Role of the Black–Scholes–Merton (BSM) model

The Black–Scholes–Merton (BSM) model played a role in financial markets that extended well beyond option pricing. As argued by MacKenzie and Millo (2003), once adopted by traders and exchanges, it actively shaped how options markets were organized, priced, and operated rather than merely describing pre-existing price behaviour. Its use at the Chicago Board Options Exchange (CBOE) helped standardize quoting practices, enabled model-based hedging, and supported the rapid growth of liquidity in listed options markets.

At a broader level, MacKenzie (2006) shows that BSM contributed to a transformation in financial culture by embedding theoretical assumptions about risk, volatility, and rational pricing into everyday market practice. In this sense, BSM acted as an “engine” that reshaped markets and economic behaviour, not simply a “camera” recording them.

Beyond markets and firms, the diffusion of the BSM model also had wider societal implications. By formalizing risk as something that could be quantified, priced, and hedged, BSM contributed to a broader cultural shift in how uncertainty was perceived and managed in modern economies (MacKenzie, 2006). This reframing reinforced the view that complex economic risks could be controlled through mathematical models, shaping public perceptions of financial stability in the process.

Why should you be interested in this post?

For anyone aiming for a career in finance, understanding the derivatives market is essential, as it is currently one of the most actively traded markets and is expected to grow further. Studying the statistics and business impact of derivatives provides valuable context on past challenges and the solutions developed to manage risks, offering a solid foundation for analyzing and navigating modern financial markets.

Related posts on the SimTrade blog

   ▶ Jayati WALIA Derivatives Market

   ▶ Alexandre VERLET Understanding financial derivatives: options

   ▶ Alexandre VERLET Understanding financial derivatives: forwards

   ▶ Alexandre VERLET Understanding financial derivatives: futures

   ▶ Akshit GUPTA Understanding financial derivatives: swaps

   ▶ Akshit GUPTA The Black Scholes Merton model

   ▶ Luis RAMIREZ Understanding Options and Options Trading Strategies

Useful resources

Academic research on option pricing

Black F. and M. Scholes (1973) The pricing of options and corporate liabilities. Journal of Political Economy, 81(3), 637–654.

Merton R.C. (1973) Theory of rational option pricing. The Bell Journal of Economics and Management Science, 4(1), 141–183.

Hull J.C. (2022) Options, Futures, and Other Derivatives, 11th Global Edition, Chapter 15 – The Black–Scholes–Merton model, 338–365.

Academic research on the role of models

MacKenzie, D., & Millo, Y. (2003). Constructing a Market, Performing Theory: The Historical Sociology of a Financial Derivatives Exchange. American Journal of Sociology, 109(1), 107–145.

MacKenzie, D. (2006). An Engine, not a Camera: How Financial Models Shape Markets. MIT Press.

Data

Bank for International Settlements (BIS). Retrieved from BIS Statistics Explorer.

Futures Industry Association (FIA). Retrieved from ETD Volume Reports.

Visual Capitalist. Retrieved from The Global Stock Market by Sector.

Visual Capitalist. Retrieved from Piecing Together the $127 Trillion Global Stock Market.

About the author

The article was written in February 2026 by Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research assistant at ESSEC Business School).

   ▶ Discover all articles written by Saral BINDAL

Volatility curves: smiles and smirks

Saral BINDAL

In this article, Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research assistant at ESSEC Business School) analyzes the various shapes of volatility curves observed in financial markets and explains how they reveal market participants’ beliefs about future asset price distributions as implied by option prices.

Introduction

In financial markets characterized by uncertainty, volatility is a fundamental factor shaping the dynamics of the prices of financial instruments. Implied volatility stands out as a key forward-looking metric that captures the market’s expectations of future price fluctuations, as reflected in current market prices of options.

Implied volatility is inherently a two-dimensional object, as it is indexed by strike K and maturity T. The collection of these implied volatilities across all strikes and maturities constitutes the volatility surface. Under the Black–Scholes–Merton (BSM) framework, volatility is assumed to be constant across strikes and maturities, in which case the volatility surface would degenerate into a flat plane. Empirically, however, the volatility surface is highly structured and varies significantly across both strike and maturity.

Accordingly, this post focuses on implied volatility curves across moneyness for a fixed maturity (i.e. cross-sections of the volatility surface), examining their canonical shapes, economic interpretation, and the insights they reveal about market beliefs and risk preferences.

Option pricing

Option pricing aims to determine the fair value of options (calls and puts). One of the most widely used frameworks for this purpose is the Black–Scholes–Merton (BSM) model, which expresses the option value as a function of five key inputs: the underlying asset price S, the strike price K, time to maturity T, the risk-free interest rate r, and volatility σ. Given these parameters, the model yields the theoretical value of the option under specific market assumptions. The details of the BSM option pricing formulas along with variable definitions can be found in the article Black-Scholes-Merton option pricing model.

Implied volatility

In the Black–Scholes–Merton (BSM) model, volatility is an unobservable parameter, representing the expected future variability of the underlying asset over the option’s remaining life. In practice, implied volatility is obtained by inverting the BSM pricing formula (using numerical methods such as the Newton–Raphson algorithm) to find the specific volatility that equates the BSM theoretical price to the observed market price. The details for the mathematical process of calculation of implied volatility can be found in Implied Volatility and Option Prices.
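As an illustration, here is a self-contained Python sketch that prices a European call with the BSM formula and then inverts it numerically to recover the implied volatility. The article mentions Newton–Raphson; this sketch uses bisection instead, a slower but very robust root-finding method (the call price is monotonically increasing in volatility, so bisection is guaranteed to converge). All input values are hypothetical:

```python
import math

def bsm_call_price(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call option."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_volatility(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert the BSM formula by bisection: find sigma matching the market price."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bsm_call_price(S, K, T, r, mid) > price:
            hi = mid  # model price too high: volatility must be lower
        else:
            lo = mid  # model price too low: volatility must be higher
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price an option at sigma = 20%, then recover the volatility
market_price = bsm_call_price(S=100, K=105, T=0.5, r=0.02, sigma=0.20)
print(round(implied_volatility(market_price, S=100, K=105, T=0.5, r=0.02), 4))  # 0.2
```

Applying this inversion to observed option prices across many strikes for a fixed maturity is exactly what produces the volatility curves analyzed in this article.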

Moneyness

Moneyness describes the relative position of an option’s strike price K with respect to the current underlying asset price S. It indicates whether the option would have a positive intrinsic value if exercised at the current moment. Moneyness is typically parameterized using ratios such as K/S or its logarithmic transform.


Moneyness formula: m = K/S or, in log terms, m = ln(K/S)

In practice, moneyness classifies an option based on its intrinsic value. An option is said to be in-the-money (ITM) if it has positive intrinsic value, at-the-money (ATM) if the strike price equals the current underlying price, and out-of-the-money (OTM) if its intrinsic value is zero and immediate exercise would yield no payoff. In terms of the relationship between the underlying asset price (S) and the strike price (K), a call option is ITM when S > K, ATM when S = K, and OTM when S < K. Conversely, a put option is ITM when S < K, ATM when S = K, and OTM when S > K.

The payoff, that is the cash flow realized upon exercising the option at maturity T, is given for call and put options by:


Payoff for a call option: max(ST − K, 0). Payoff for a put option: max(K − ST, 0)

where ST is the underlying asset price at the time the option is exercised.
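These payoff formulas can be written as a short Python sketch, using the $4,600 strike assumed in the figures of this article:

```python
# Payoff at maturity for European call and put options.
def call_payoff(S_T, K):
    """Call payoff: max(S_T - K, 0)."""
    return max(S_T - K, 0.0)

def put_payoff(S_T, K):
    """Put payoff: max(K - S_T, 0)."""
    return max(K - S_T, 0.0)

K = 4600
print(call_payoff(7000, K))  # 2400.0 (ITM call)
print(call_payoff(3000, K))  # 0.0    (OTM call)
print(put_payoff(3000, K))   # 1600.0 (ITM put)
```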

Figure 1 below illustrates the payoff of a call option, that is the call option value at maturity as a function of its underlying asset price. The call option’s strike price is assumed to be equal to $4,600. For an underlying price of $3,000, the call option is out-of-the-money (OTM); for a price of $4,600, the call option is at-the-money (ATM); and for a price of $7,000, the call option is in-the-money (ITM) and worth $2,400.

Figure 1. Payoff for a call option and its moneyness (OTM, ATM and ITM)
Payoff for a call option and its moneyness (OTM, ATM and ITM)
Source: computation by the author.

Similarly, Figure 2 below illustrates the payoff of a put option, that is the put option value at maturity as a function of its underlying asset price. The put option’s strike price is assumed to be equal to $4,600. For an underlying price of $3,000, the put option is in-the-money (ITM) and worth $1,600; for a price of $4,600, the put option is at-the-money (ATM); and for a price of $7,000, the put option is out-of-the-money (OTM).

Figure 2. Payoff for a put option and its moneyness (OTM, ATM and ITM)
Payoff for a put option and its moneyness (OTM, ATM and ITM)
Source: computation by the author.

Figure 3 below illustrates the temporal dynamics of moneyness for a European call option with a strike price of $4,600, showing how the option transitions between out-of-the-money, at-the-money, and in-the-money states as the underlying asset price moves relative to the strike over its lifetime.

Figure 3. Evolution of a call option moneyness
Evolution of a call option moneyness
Source: computation by the author.

Similarly, Figure 4 below illustrates the temporal dynamics of moneyness for a European put option with a strike price of $4,600, showing how the option transitions between out-of-the-money, at-the-money, and in-the-money states as the underlying asset price moves relative to the strike over its lifetime.

Figure 4. Evolution of a put option moneyness
Evolution of a put option moneyness
Source: computation by the author.

You can download the Excel file below for the computation of moneyness of call and put options as discussed in the above figures.

Download the Excel file.

Empirical observation: implied volatility depends on moneyness

Smiles and smirks

Volatility curves refer to plots of implied volatility across different strikes for options with the same maturity. Two distinct shapes are commonly observed: the “volatility smile” and the “volatility smirk”.

A volatility smile is a symmetric pattern commonly observed in options markets. For a given underlying asset and expiration date, it is defined as the mapping of option strike prices to their Black–Scholes–Merton implied volatilities. The term “smile” refers to the distinctive shape of the curve: implied volatility is lowest near the at-the-money (ATM) strike and rises for both lower in-the-money (ITM) strikes and higher out-of-the-money (OTM) strikes.

A volatility smirk (also called skew) is an asymmetric pattern in the implied volatility curve and is mainly observed in the equity markets. It is characterized by high implied volatilities at lower strikes and progressively lower implied volatilities as the strike increases, resulting in a downward-sloping profile. This shape reflects the uneven distribution of implied volatility across strikes and stands in contrast to the more symmetric volatility smile observed in other markets.

Stylized facts about the implied volatility curve across markets

Stylized facts characterizing implied volatility curves are persistent and statistically robust empirical regularities observed across financial markets. Below, I discuss the key stylized facts for major asset classes, including equities, foreign exchange, interest rates, commodities, and cryptocurrencies.

Equity market: For major equity indices, the implied volatility curve at a given maturity is typically a negatively sloped smirk: IV is highest for out-of-the-money puts and declines as the strike moves up, rather than forming a symmetric smile (Zhang & Xiang, 2008). This left skew is persistent across maturities and provides useful signals at the individual stock level, where steeper smirks (higher OTM put IV relative to ATM IV) forecast lower subsequent returns, consistent with markets pricing crash risk into downside options (Xing, Zhang & Zhao, 2010).

FX market: For foreign currency options, implied volatility curves most often display a U-shaped smile: IV is lowest near the at-the-money strike and higher for deep in-the-money or out-of-the-money strikes, especially for major FX pairs (Daglish, Hull & Suo, 2007). The degree of symmetry depends on the correlation between the FX rate and its volatility: near-zero correlation gives a roughly symmetric smile, while non-zero correlations generate skews or smirks that have been empirically documented in options on EUR/USD, GBP/USD and AUD/USD (Choi, 2021).

Commodity market: For commodity options, cross-market evidence shows that implied volatility curves are generally negatively skewed with positive curvature, meaning they exhibit smirks rather than flat surfaces, with higher IV for downside strikes but still some smile-like curvature (Jia, 2021). Studies on crude oil and related commodities also find pronounced smiles and smirks whose strength varies with fundamentals such as inventories and hedging pressure, reinforcing that this pattern is a core stylized fact in commodity derivatives (Soini, 2018; Vasseng & Tangen, 2018).

Fixed income market: Swaption markets display smiles and skews on their volatility curves: for a given expiry and tenor, implied volatility typically curves in moneyness and may tilt up or down depending on the correlation between the underlying rate and its volatility (Daglish, Hull & Suo, 2007). Empirical work on the swaption volatility cube shows that simple one-factor or SABR-based constructions do not capture the full observed smile, indicating that a rich, strike- and maturity-dependent IV surface is itself a stylized feature of interest rate options (Samuelsson, 2021).

Crypto market: Bitcoin options exhibit a non-flat implied volatility smile with a forward skew, and short-dated options can reach very high levels of implied volatility, reflecting heavy tails and strong demand for certain strikes (Zulfiqar & Gulzar, 2021). Because of this forward skew, Zulfiqar and Gulzar (2021) conclude that Bitcoin options “belong to the commodity class of assets,” although later studies show that the Bitcoin smile can change shape across regimes and is often flatter than the equity index skew (Alexander, Kapraun & Korovilas, 2023).

Summary of stylized facts about implied volatility
 Summary of stylized facts about implied volatility

An Empirical Analysis of S&P 500 Implied Volatility

This section describes the data, methodology, and empirical considerations for the analysis of implied volatility of put options written on the S&P 500 index. We begin by highlighting a classical challenge in cross-sectional option data: asynchronous trading.

Asynchronous trading and measurement error

In empirical option pricing, the non-synchronous observation of option prices and the underlying asset price generates measurement errors in implied volatility estimation, because building the volatility curve from an option pricing model requires option prices and the underlying asset price observed at the same point in time.

Formally, let the option price C be observed at time tc, while the underlying asset price S is observed at time ts with ts ≠ tc. The observed option price therefore satisfies


Asynchronous call option price and underlying asset price

Since the option price at time tc depends on the latent spot price S(tc), rather than the asynchronously observed price S(ts), this mismatch introduces measurement error in the underlying price variable and, ultimately, in the implied volatility.

Various standard filters including no-arbitrage, liquidity, moneyness, maturity, and implied-volatility sanity checks are typically applied to mitigate errors-in-variables arising from asynchronous observations of option prices and their underlying assets.

Example: options on the S&P 500 index

Consider the following sample of option data written on the S&P 500 index. Data can be obtained from FirstRate Data.

Download the Excel file.

Figure 5 below illustrates the volatility smirk (or skew) for an option chain (a series of option prices for the same maturity) written on the S&P 500 index, traded on 3rd July 2023 with a time to maturity of 2 days, after filtering the above data.

Figure 5. Volatility smirk for put option prices on the S&P 500 index
Volatility smirk computed for put option on the S&P 500 index
Source: computation by the author.

You can download the Excel file below to compute the volatility curve for put options on the S&P 500 index.

Download the Excel file.
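Independently of the Excel file, the following Python sketch shows how an implied volatility can be backed out from a put price by inverting the Black–Scholes–Merton formula, here with simple bisection. The inputs are hypothetical and do not come from the dataset above.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_put(S, K, T, r, sigma):
    """Black–Scholes–Merton price of a European put (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

def implied_vol_put(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert the BSM put formula by bisection.

    Works because the BSM price is monotonically increasing in sigma;
    the price must be arbitrage-free for a root to exist in [lo, hi].
    """
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bsm_put(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check with hypothetical inputs: price a put at sigma = 20%,
# then recover the volatility from the price.
p = bsm_put(S=4500, K=4600, T=2/252, r=0.03, sigma=0.20)
```

Repeating this inversion across the strikes of an option chain, with the same underlying price and maturity, produces the volatility curve plotted in Figure 5.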

Economic Insights

This section explains how the shape of the implied volatility curve reveals key economic forces in options markets, including demand for crash protection, leverage-driven volatility feedback effects, and the role of market frictions and limits to arbitrage.

Demand for crash protection:

Out-of-the-money put options serve as insurance against market crashes and hedge tail risk. Because this demand is persistent and largely one-sided, put options become expensive relative to their Black–Scholes-Merton values, resulting in elevated implied volatilities at low strikes. This excess pricing reflects the market’s willingness to pay a premium to insure against rare but severe losses.

Leverage and volatility feedback effects:

When equity prices fall, firms become more leveraged because the value of equity declines relative to debt. Higher leverage makes equity riskier, increasing expected future volatility. Anticipating this effect, markets assign higher implied volatility to downside scenarios than to upside moves. This endogenous feedback between price declines, leverage, and volatility naturally produces a negative volatility skew, even in the absence of crash-risk preferences.

Market frictions and limits to arbitrage:

In practice, option writers are subject to capital constraints, margin requirements, and exposure to jump and tail risk. These constraints limit their capacity to supply downside protection at low prices. As a result, downside options embed not only compensation for fundamental crash risk, but also a risk premium reflecting the balance-sheet costs and risk-bearing capacity of intermediaries. The observed volatility skew therefore arises endogenously from limits to arbitrage rather than purely from differences in underlying return distributions.

Conclusion

The dependence of implied volatility on moneyness is neither an anomaly nor a technical artifact. It reflects market expectations, risk preferences, and the perceived probability of extreme outcomes. For both pedagogical and investment applications, the implied volatility curve is a central tool for understanding how markets price tail and downside risk.

Why should I be interested in this post?

Understanding implied volatility and its relationship with moneyness extends beyond option pricing, offering insights into how markets perceive risk and assess the likelihood of extreme events. Patterns such as volatility smiles and skews reflect investor behavior, the demand for protection, and the asymmetric emphasis on potential losses over gains, providing a clearer view of both pricing anomalies and the economic forces that shape financial markets.

Related posts on the SimTrade blog

Option price modelling

   ▶ Jayati WALIA Brownian Motion in Finance

   ▶ Saral BINDAL Modeling Asset Prices in Financial Markets: Arithmetic and Geometric Brownian Motions

   ▶ Jayati WALIA Black-Scholes-Merton option pricing model

   ▶ Jayati WALIA Monte Carlo simulation method

Volatility

   ▶ Saral BINDAL Historical Volatility

   ▶ Saral BINDAL Implied Volatility and Option Prices

   ▶ Jayati WALIA Implied Volatility

Useful resources

Academic research on Option pricing

Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities, Journal of Political Economy, 81(3), 637–654.

Hull, J.C. (2015). Options, Futures, and Other Derivatives, Ninth Edition, Chapter 15 – The Black-Scholes-Merton model, 343–375.

Merton, R.C. (1973). Theory of rational option pricing, The Bell Journal of Economics and Management Science, 4(1), 141–183.

Academic research on Stylized facts

Alexander, C., Kapraun, J., & Korovilas, D. (2023). Delta hedging bitcoin options with a smile, Quantitative Finance, 23(5), 799–817.

Bakshi, G., Cao, C., & Chen, Z. (1997). Empirical performance of alternative option pricing models, The Journal of Finance, 52(5), 2003–2049.

Bates, D.S. (1991). The crash of ’87: Was it expected? The evidence from options markets, The Journal of Finance, 46(5), 1777–1819.

Bates, D.S. (2000). Post-’87 crash fears in the S&P 500 futures option market, Journal of Econometrics, 94(1–2), 181–238.

Choi, K. (2021). Foreign exchange rate volatility smiles and smirks, Applied Stochastic Models in Business and Industry, 37(3), 405–425.

Daglish, T., Hull, J., & Suo, W. (2007). Volatility surfaces: theory, rules of thumb, and empirical evidence, Quantitative Finance, 7(5), 507–524.

Jia, G. (2021). The implied volatility smirk of commodity options, Journal of Futures Markets, 41(1), 72–104.

Samuelsson, A. (2021). Empirical study of methods to complete the swaption volatility cube, Master’s thesis, Uppsala University.

Soini, E. (2018). Determinants of volatility smile: The case of crude oil options, Master’s thesis, University of Vaasa.

Xing, Y., Zhang, X., & Zhao, R. (2010). What does individual option volatility smirk tell us about future equity returns?, Review of Financial Studies, 23(5), 1979–2017.

Zhang, J.E., & Xiang, Y. (2008). The implied volatility smirk, Quantitative Finance, 8(3), 263–284.

Zulfiqar, N., & Gulzar, S. (2021). Implied volatility estimation of bitcoin options and the stylized facts of option pricing, Financial Innovation, 7(1), 67.

About the author

The article was written in January 2026 by Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research assistant at ESSEC Business School).

   ▶ Discover all articles written by Saral BINDAL

Leverage in LBOs: How Debt Creates and Destroys Value in Private Equity Transactions

Ian DI MUZIO

In this article, Ian DI MUZIO (ESSEC Business School, Master in Finance (MiF), 2025–2027) explores the economics of leverage in leveraged buyouts (LBOs) from an investment banking perspective.

Rather than treating debt as a purely mechanical input in an Excel model, the article explains—both conceptually and technically—how leverage amplifies equity returns, reshapes risk, affects pricing, and constrains deal execution.

The ambition is to provide junior analysts with a realistic framework they can use when building or reviewing LBO models during internships, assessment centres, or live mandates.

Context and objective

Most students encounter leverage for the first time through a simplified capital structure slide: a bar divided into senior debt, subordinated debt, and equity, followed by a formula showing that higher debt and lower equity mechanically increase the internal rate of return (IRR, the discount rate that sets net present value to zero).

In the abstract, the story appears straightforward. If a company generates stable cash flows, a sponsor can finance a large share of the acquisition with relatively cheap debt, repay that debt over time, and magnify capital gains on a smaller equity cheque.

In reality, this mechanism operates only within a narrow corridor. Too little leverage and the financial sponsor struggles to compete with strategic buyers. Too much leverage and the business becomes fragile: covenants tighten, financial flexibility disappears, and relatively small shocks in performance can wipe out the equity.

The objective of this article is therefore not to restate textbook identities, but to describe how investment bankers think about leverage when advising financial sponsors and corporate sellers, drawing on market practice and transaction experience (see, for example, Kaplan & Strömberg, 2009).

The focus is on the interaction between free cash flow generation, debt capacity, pricing, and exit scenarios, and on how analysts should interpret LBO outputs rather than merely producing them.

What an LBO really is

At its core, a leveraged buyout is a change of control transaction in which a financial sponsor acquires a company using a combination of equity and a significant amount of borrowed capital, secured primarily on the target’s own assets and cash flows.

The sponsor is rarely a long-term owner. Instead, it underwrites a finite investment horizon—typically four to seven years—during which value is created through a combination of operational improvement, deleveraging, multiple expansion, and sometimes add-on acquisitions, before exiting via a sale or an initial public offering.

From a financial perspective, an LBO is effectively a structured bet on the spread between the company’s return on invested capital and the cost of debt, adjusted for the speed at which that debt can be repaid using free cash flow.

In other words, leverage only creates value if operating performance is sufficiently strong and stable to service and amortise debt. When performance falls short, the rigidity of the capital structure becomes a source of value destruction rather than enhancement.

How leverage amplifies equity returns

The starting point for understanding leverage is the identity that equity value equals enterprise value minus net debt. If enterprise value remains constant while net debt declines over time, equity value must mechanically increase.

This is the familiar deleveraging effect: as free cash flow is used to repay borrowings, the equity slice of the capital structure expands even if EBITDA growth is modest and exit multiples remain unchanged.

Figure 1 illustrates this mechanism in a stylised LBO. The company is acquired with high initial leverage. Over the holding period, EBITDA grows moderately, but the primary driver of equity value creation is the progressive reduction of net debt.

Figure 1. Evolution of capital structure in a simple LBO.
 Evolution of capital structure in a simple LBO
Source: the author.

Figure 1 illustrates the evolution of capital structure in a simple LBO. Debt is repaid using free cash flow, causing the equity portion of enterprise value to increase even if valuation multiples remain unchanged.

The Excel model used to generate Figure 1, which allows readers to adjust leverage, cash flow, and amortisation assumptions, can be made available alongside this article.

This dynamic explains why LBO IRRs can appear attractive even with limited operational growth. It also highlights the fragility of highly levered structures: when EBITDA underperforms or exit multiples contract, equity value erodes rapidly because the initial leverage leaves little margin for error.
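The deleveraging mechanics described above can be sketched in a few lines of Python. All inputs and the single-cash-flow IRR shortcut are hypothetical simplifications for illustration, not a full LBO model.

```python
def lbo_equity_irr(ebitda0, entry_mult, exit_mult, leverage_mult,
                   ebitda_growth, fcf_conversion, years):
    """Stylised LBO: debt is repaid from free cash flow each year.

    fcf_conversion is the (hypothetical) share of EBITDA available
    annually for debt repayment; entry and exit are single cash flows.
    """
    entry_ev = entry_mult * ebitda0
    debt = leverage_mult * ebitda0
    equity_in = entry_ev - debt
    ebitda = ebitda0
    for _ in range(years):
        ebitda *= 1 + ebitda_growth
        debt = max(debt - fcf_conversion * ebitda, 0.0)
    equity_out = exit_mult * ebitda - debt
    moic = equity_out / equity_in           # money-on-invested-capital multiple
    irr = moic ** (1 / years) - 1           # valid for single entry/exit flows
    return moic, irr

# Hypothetical deal: 10x entry and exit multiples, 5x initial leverage,
# 3% EBITDA growth, 40% of EBITDA converted into debt paydown, 5-year hold
moic, irr = lbo_equity_irr(100, 10, 10, 5, 0.03, 0.40, 5)
```

With these hypothetical inputs, most of the equity return comes from debt paydown rather than EBITDA growth, which is exactly the deleveraging effect described above.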

Debt capacity and the role of free cash flow

For investment bankers, the key practical question is not “how much leverage maximises IRR in Excel?” but “how much leverage can the business sustainably support without breaching covenants or undermining strategic flexibility?”.

This shifts the focus from headline EBITDA to the quality, predictability, and cyclicality of free cash flow. In an LBO context, free cash flow is typically defined as EBITDA minus cash taxes, capital expenditure, and changes in working capital, adjusted for recurring non-operating items.

A business with recurring revenues, limited capex requirements, and stable working capital can support materially higher leverage than a cyclical, capital-intensive company, even if both report similar EBITDA today.

Debt capacity is assessed using leverage and coverage metrics such as net debt to EBITDA, interest coverage, and fixed-charge coverage, tested under downside scenarios rather than a single base case. Lenders focus not only on entry ratios, but on how those ratios behave when EBITDA compresses or capital needs spike.
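The ratios mentioned above can be computed and stress-tested with a trivial sketch. The ratio definitions below are simplified and the figures are purely illustrative.

```python
def leverage_metrics(net_debt, ebitda, interest, fixed_charges):
    """Simplified credit ratios used to assess debt capacity."""
    return {
        "net_debt_to_ebitda": net_debt / ebitda,
        "interest_coverage": ebitda / interest,
        "fixed_charge_coverage": ebitda / fixed_charges,
    }

# Hypothetical base case versus a 20% EBITDA compression scenario
base = leverage_metrics(net_debt=500, ebitda=100, interest=35, fixed_charges=55)
down = leverage_metrics(net_debt=500, ebitda=80, interest=35, fixed_charges=55)
```

The downside scenario shows the point made above: a modest EBITDA compression pushes net debt to EBITDA from 5.0x to 6.25x and erodes coverage headroom, which is what lenders test before committing to a structure.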

Pricing, entry multiples, and the leverage trade-off

Leverage interacts with pricing in a non-linear way. At a given entry multiple, higher leverage reduces the equity cheque and tends to increase IRR, provided exit conditions are favourable.

However, aggressive leverage also constrains bidding capacity. Lenders rarely support structures far outside market norms, which means sponsors cannot indefinitely substitute leverage for price. In competitive auctions, sponsors must choose whether to compete through valuation or capital structure, knowing that both dimensions feed directly into risk.

Figure 2 presents a stylised sensitivity of equity IRR to entry multiple and starting leverage, holding exit assumptions constant.

Figure 2. Sensitivity of equity IRR to entry valuation and starting leverage.
 Sensitivity of equity IRR to entry valuation and starting leverage
Source: the author.

Figure 2 illustrates the sensitivity of equity IRR to entry valuation and starting leverage. Outside a moderate corridor, IRR becomes highly sensitive to small changes in operating or exit assumptions.

Providing the Excel file behind Figure 2 would allow readers to stress-test entry pricing and leverage assumptions interactively.

Risk, scenarios, and the distribution of outcomes

A mature view of leverage focuses on the full distribution of outcomes rather than a single base case. Downside scenarios quickly reveal how leverage concentrates risk: when performance weakens, equity absorbs losses first.

Figure 3 illustrates how higher leverage increases expected IRR but also widens dispersion, creating both a fatter upside tail and a higher probability of capital loss.

Figure 3. Distribution of equity returns under low, moderate, and high leverage.
Distribution of equity returns under low, moderate, and high leverage
Source: the author.

Higher leverage raises expected returns but materially increases downside risk.

For junior bankers, the key lesson is that leverage is a design choice with consequences. A robust analysis interrogates downside resilience, covenant headroom, and the coherence between capital structure and strategy.
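The widening of the return distribution under higher leverage can be illustrated with a small Monte Carlo sketch. Everything here is hypothetical: entry enterprise value is normalised to 1, exit enterprise value is drawn from a log-normal distribution, and debt is assumed constant over the holding period.

```python
import math
import random
import statistics

def equity_moic_distribution(debt_share, n_paths=10_000, seed=42):
    """Simulate exit equity multiples (MOIC) for a stylised deal.

    Entry EV is normalised to 1.0; debt_share is debt as a fraction of EV,
    assumed constant over the hold (a hypothetical simplification).
    Equity cannot fall below zero (limited liability).
    """
    rng = random.Random(seed)
    equity_in = 1.0 - debt_share
    moics = []
    for _ in range(n_paths):
        exit_ev = math.exp(rng.gauss(0.0, 0.25))  # log-normal exit EV
        equity_out = max(exit_ev - debt_share, 0.0)
        moics.append(equity_out / equity_in)
    return moics

low = equity_moic_distribution(debt_share=0.3)
high = equity_moic_distribution(debt_share=0.7)
print("stdev, low leverage :", statistics.pstdev(low))
print("stdev, high leverage:", statistics.pstdev(high))
print("wipeouts, high leverage:", sum(m == 0 for m in high) / len(high))
```

Under these assumptions the highly levered deal shows both a wider dispersion of outcomes and a material fraction of paths where the equity is wiped out, consistent with the asymmetry described above.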

The role of investment banks

Investment banks play a central role in structuring and advising on leverage. On buy-side mandates, they assist sponsors in negotiating financing packages and ensuring proposed leverage aligns with market appetite. On sell-side mandates, they help sellers compare bids not only on price, but on financing certainty and execution risk.

Conclusion

Leverage sits at the heart of LBO economics, but its effects are often oversimplified. For analysts, the real skill lies in linking model outputs to a coherent economic narrative about cash flows, debt service, and downside resilience.

Related posts on the SimTrade blog

   ▶ Alexandre VERLET Classic brain teasers from real-life interviews

   ▶ Emanuele BAROLI Interest Rates and M&A: How Market Dynamics Shift When Rates Rise or Fall

   ▶ Bijal GANDHI Interest Rates

Useful resources

Academic references

Fama, E. F., & MacBeth, J. D. (1973). Risk, Return, and Equilibrium: Empirical Tests. Journal of Political Economy, 81(3), 607–636.

Koller, T., Goedhart, M., & Wessels, D. (2020). Valuation: Measuring and Managing the Value of Companies (7th ed.). Hoboken, NJ: John Wiley & Sons.

Axelson, U., Jenkinson, T., Strömberg, P., & Weisbach, M. S. (2013). Borrow Cheap, Buy High? The Determinants of Leverage and Pricing in Buyouts. The Journal of Finance, 68(6), 2223–2267.

Kaplan, S. N., & Strömberg, P. (2009). Leveraged Buyouts and Private Equity. Journal of Economic Perspectives, 23(1), 121–146.

Gompers, P. A., & Lerner, J. (1996). The Use of Covenants: An Empirical Analysis of Venture Partnership Agreements. Journal of Law and Economics, 39(2), 463–498.

Business data

PitchBook

About the author

The article was written in January 2026 by Ian DI MUZIO (ESSEC Business School, Master in Finance (MiF), 2025–2027).

   ▶ Read all posts written by Ian DI MUZIO

Modeling Asset Prices in Financial Markets: Arithmetic and Geometric Brownian Motions

Saral BINDAL

In this article, Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research assistant at ESSEC Business School) presents two statistical models used in finance to describe the time behavior of asset prices: the arithmetic Brownian motion (ABM) and the geometric Brownian motion (GBM).

Introduction

In financial markets, performance over time is governed by three fundamental variables: the drift (μ), the volatility (σ), and, perhaps most importantly, time (T). The drift represents the expected growth rate of the price and corresponds to the expected return of assets or portfolios. Volatility measures the uncertainty or risk associated with price fluctuations around this expected growth and corresponds to the standard deviation of returns. The relationship between these variables reflects the trade-off between risk and return. Time, which is related to the investment horizon set by the investor, determines how both performance and risk accumulate. Together, these variables form the foundation of asset pricing models used to describe the behavior of the market price over time and, ultimately, the performance of the investor at their investment horizon.

Modeling asset prices

Asset price modeling is used to understand the expected return and risk in asset management, risk management, and the pricing of complex financial products such as options and structured products. Although asset prices are influenced by countless unpredictable risk factors, quants in finance always try to find a parsimonious way to model asset prices (using a few parameters only).

The first study of asset price modelling dates back to Louis Bachelier in 1900, in his doctoral thesis Théorie de la Spéculation (The Theory of Speculation), where he modelled stock prices as a random walk and applied this framework to option valuation. Later, in 1923, the mathematician Norbert Wiener formalized these ideas as the Wiener process, providing the rigorous stochastic foundation that underpins modern finance.

In the 1960s, Paul Samuelson refined Bachelier’s model by introducing the geometric Brownian motion, which ensures positive stock prices following a lognormal statistical distribution. His 1965 paper “Rational Theory of Warrant Pricing” laid the groundwork for modern asset price modelling, showing that discounted stock prices follow a martingale.

We detail below the two models usually used in finance to model the evolution of asset prices over time: the arithmetic Brownian motion (ABM) and the geometric Brownian motion (GBM). We will then use these models to simulate the evolution of asset prices over time with the Monte Carlo simulation method.

Arithmetic Brownian motion (ABM)

Theory

One of the most widely used stochastic processes in financial modeling is the arithmetic Brownian motion, which is built on the Wiener process. It is a continuous stochastic process with normally distributed increments. Using the Wiener process notation, an asset price model in continuous time based on an ABM can be expressed as the following stochastic differential equation (SDE):


SDE for the arithmetic Brownian motion

where:

  • dSt = infinitesimal change in the asset price at time t
  • μ = drift (growth rate of the asset price)
  • σ = volatility (standard deviation)
  • dWt = infinitesimal increment of the Wiener process (N(0, dt))

Note that the standard Brownian motion is a special case of the arithmetic Brownian motion with a drift equal to zero and a volatility equal to one.

In this model, both μ and σ are assumed to be constant over time. It can be shown that the probability distribution function of the future price is a normal distribution implying a strictly positive (although negligible in most cases) probability for the price to be negative.

Integrating the SDE for dSt over a finite interval (from time 0 to time t), we get:


Integrated SDE for the arithmetic Brownian motion

Here, Wt is defined as Wt = √t · Zt, where Zt is a random variable drawn from the standard normal distribution N(0, 1) with mean equal to 0 and variance equal to 1.

At any date t, we can also compute the expected value and a confidence interval such that the asset price St lies between the lower and upper bound of the interval with probability equal to 1-α.


Theoretical formulas for the mean, upper and lower limits of the ABM model

where S0 is the initial asset price and zα is the z-score associated with the confidence level 1 – α.

The z-score for a confidence level of (1 – α) can be calculated as:


z-score formula

where Φ-1 denotes the inverse cumulative distribution function (CDF) of the standard normal distribution.

For example, the z-score (zα) values for the 66%, 95%, and 99% confidence intervals are the following:


z-score examples
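These z-scores can be checked with the Python standard library's NormalDist, which provides the inverse cumulative distribution function Φ-1 used in the formula above (a short sketch; note that the 66% level corresponds roughly to one standard deviation):

```python
from statistics import NormalDist

def z_score(confidence):
    """Two-sided z-score: Phi^{-1}(1 - alpha/2), with alpha = 1 - confidence."""
    alpha = 1.0 - confidence
    return NormalDist().inv_cdf(1.0 - alpha / 2.0)

for c in (0.66, 0.95, 0.99):
    print(f"{c:.0%} confidence: z = {z_score(c):.3f}")
```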

Monte Carlo simulations with ABM

Since Monte Carlo simulations are performed in discrete time, the underlying continuous-time asset price process (ABM) is approximated using the Euler–Maruyama discretization of SDEs (see Maruyama, 1955), as shown below.


Discretization formula for the arithmetic Brownian motion (ABM)

where Δt denotes the time step, expressed in the same time units as the drift parameter μ and the volatility parameter σ (usually the annual unit). For example, Δt may be equal to one day (=1/252) or one month (=1/12).
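The Euler–Maruyama step above can be implemented directly. The sketch below is illustrative: it simulates one ABM path and also computes the theoretical mean and confidence bounds at a given date, following the formulas given earlier.

```python
import math
import random

def simulate_abm(s0, mu, sigma, T, dt, seed=1):
    """Euler–Maruyama path of an arithmetic Brownian motion:
    S_{t+dt} = S_t + mu*dt + sigma*sqrt(dt)*Z, with Z ~ N(0, 1)."""
    rng = random.Random(seed)
    n = int(round(T / dt))
    path = [s0]
    for _ in range(n):
        path.append(path[-1] + mu * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1))
    return path

def abm_confidence_bounds(s0, mu, sigma, t, z):
    """Theoretical mean and (1 - alpha) confidence interval at date t:
    S0 + mu*t +/- z * sigma * sqrt(t)."""
    mean = s0 + mu * t
    half = z * sigma * math.sqrt(t)
    return mean - half, mean, mean + half

# Parameters of the example in the text: mu = $8, sigma = $15, S0 = $100,
# monthly steps (dt = 1/12) over a 10-year horizon (T = 10)
path = simulate_abm(s0=100, mu=8, sigma=15, T=10, dt=1/12)
```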

Figure 1 below illustrates a single simulated asset price path under an arithmetic Brownian motion (ABM), sampled at monthly intervals (Δt = 1/12) over a 10-year horizon (T = 10). Alongside the simulated path, the figure shows the expected (mean) price trajectory and the corresponding upper and lower bounds of a 66% confidence interval. In this example, the model assumes an annual drift (μ) of $8, representing the expected annual price change, and an annual volatility (σ) of $15, capturing random price fluctuations. The initial asset price (S0) is equal to $100.

Figure 1. Single Monte Carlo–simulated asset price path under an Arithmetic Brownian Motion model.
A Monte Carlo–simulated price path under an arithmetic Brownian motion model
Source: computation by the author (with Excel).

Figure 2 below illustrates 1,000 simulated asset price paths generated under an arithmetic Brownian motion (ABM). In addition to the simulated paths, the figure displays the expected (mean) price trajectory along with the corresponding upper and lower bounds of a 66% confidence interval, using the same parameter settings as in Figure 1.

Figure 2. Monte Carlo–simulated asset price paths under an Arithmetic Brownian Motion model.
Monte Carlo–simulated price paths under an arithmetic Brownian motion model.
Source: computation by the author (with R).

Geometric Brownian motion (GBM)

Theory

Since an arithmetic Brownian motion (ABM) can take negative values, it is unsuitable for directly modeling stock prices if we assume limited liability for investors. Under limited liability, an investor’s maximum possible loss is indeed confined to their initial investment, implying that asset prices cannot fall below zero. To address this limitation, financial models instead use geometric Brownian motion (GBM), a non-negative stochastic process that is widely employed to describe the evolution of asset prices. Using the Wiener process notation, an asset price model in continuous time based on a GBM can be expressed as the following stochastic differential equation (SDE):


SDE for the geometric Brownian motion (GBM)

where:

  • St = asset price at time t
  • μ = drift (growth rate of the asset price)
  • σ = volatility (standard deviation)
  • dWt = infinitesimal increment of the Wiener process (N(0, dt))

Integrating the SDE for dSt/St over a finite interval, we get:


Integrated SDE for the geometric Brownian motion (GBM)

The theoretical expected value and confidence intervals are given analytically by the following expressions:


Theoretical formulas for the mean, upper and lower limits of the GBM model

Monte Carlo simulations with GBM

To implement Monte Carlo simulations, we approximate the underlying continuous-time process in discrete time, yielding:


Asset price under discrete GBM

where Zt is a standard normal random variable drawn from the distribution N(0, 1) and Δt denotes the time step, chosen so that it is expressed in the same time units as the drift parameter μ and the volatility parameter σ.
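This discrete-time GBM step can be implemented as follows (a sketch with illustrative parameters matching the examples in this article; the exponential form guarantees strictly positive prices):

```python
import math
import random

def simulate_gbm(s0, mu, sigma, T, dt, seed=1):
    """Discrete-time GBM path using the log-normal step:
    S_{t+dt} = S_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z), Z ~ N(0, 1)."""
    rng = random.Random(seed)
    n = int(round(T / dt))
    path = [s0]
    for _ in range(n):
        z = rng.gauss(0, 1)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma**2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

# Example parameters: 8% annual drift, 15% annual volatility,
# monthly steps over a 10-year horizon, initial price 100
path = simulate_gbm(s0=100, mu=0.08, sigma=0.15, T=10, dt=1/12)
```

Unlike the ABM simulation, every simulated price is strictly positive, consistent with limited liability.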

Figure 3 below illustrates a single simulated asset price path under a geometric Brownian motion (GBM), sampled at monthly intervals (Δt = 1/12) over a 10-year horizon (T = 10). Alongside the simulated path, the figure shows the expected (mean) price trajectory and the corresponding upper and lower bounds of a 66% confidence interval. In this example, the model assumes an annual drift (μ) of 8%, representing the expected growth rate, and an annual volatility (σ) of 15%, capturing random price fluctuations. The initial asset price S0 is equal to €100.

Figure 3. Monte Carlo–simulated asset price path under a Geometric Brownian Motion model.
Monte Carlo–simulated asset price path under a GBM model.
Source: computation by the author (with Excel).

Figure 4 below illustrates 1,000 simulated asset price paths generated under a geometric Brownian motion (GBM). In addition to the simulated paths, the figure displays the expected (mean) price trajectory along with the corresponding upper and lower bounds of a 66% confidence interval, using the same parameter settings as in Figure 3.

Figure 4. Monte Carlo–simulated asset price paths under a Geometric Brownian Motion model.
 Monte Carlo–simulated asset price paths under a Geometric Brownian Motion model.
Source: computation by the author (with R).

Discussion

The drift μ represents the expected rate of growth of asset prices, so its cumulative contribution increases linearly with time as μT. In contrast, volatility σ captures investment risk, and its cumulative impact scales with the square root of time as σ√T. As a result, over short horizons stochastic shocks tend to dominate the deterministic drift, whereas over longer horizons the expected growth component becomes increasingly prominent.

When many paths for the asset price are simulated and plotted over time, the resulting trajectories form a cone-shaped region, commonly referred to as a fan chart. The center of this fan traces the smooth expected path governed by the drift μ, while the widening envelope reflects the growing dispersion of outcomes induced by volatility σ.
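The fan chart can be reproduced numerically. A sketch, assuming the same parameters as the figures above; the 17th and 83rd percentiles bracket roughly a 66% interval:

```python
import numpy as np

def gbm_fan(s0=100.0, mu=0.08, sigma=0.15, T=10.0, dt=1/12,
            n_paths=10_000, seed=42):
    """Simulate many GBM paths and return the percentile band
    (17th/83rd, roughly a 66% interval) at each time step."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    z = rng.standard_normal((n_paths, n_steps))
    log_inc = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    paths = s0 * np.exp(np.cumsum(log_inc, axis=1))   # one row per path
    lower = np.percentile(paths, 17, axis=0)
    upper = np.percentile(paths, 83, axis=0)
    return paths, lower, upper

paths, lower, upper = gbm_fan()
width = upper - lower   # the envelope widens as the horizon lengthens
```

The center of the simulated fan tracks the theoretical mean S0·exp(μt), while the band between `lower` and `upper` widens with the horizon, which is exactly the cone shape described above.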

This representation underscores a key implication for long-term investing and risk management: uncertainty expands with the investment horizon even when model parameters remain constant. While the expected value evolves predictably and linearly through time, the range of plausible outcomes broadens at a slower, square-root rate, shaping the risk–return trade-off across different time scales.

You can download the Excel file provided below for generating Monte Carlo Simulations for asset prices modeled on arithmetic and geometric Brownian motion.

Download the Excel file.

You can download the Python code provided below, for generating Monte Carlo Simulations for asset prices modeled on arithmetic and geometric Brownian motion.

Download the Python code.

Alternatively, you can download the R code below with the same functionality as in the Python file.

Download the R code.

Link between the ABM and the GBM

The ABM and GBM models are fundamentally different: the drift for the ABM is additive while the drift for the GBM is multiplicative. Moreover, the statistical distribution for the price for the ABM is a normal distribution while the statistical distribution for the GBM is a log-normal distribution. However, we can study the relationship between the two models as they are both used to model the same phenomenon, the evolution of asset prices over time in our case.

We can especially study the relationship between the parameters μ and σ of the two models. In the presentation above, we used the same notations μ and σ for both models, but when the models are applied to the same phenomenon, the fitted values of these parameters will differ. There is no mapping between the ABM and the GBM in the price space that produces identical results, as the two models are fundamentally different.

Let us rewrite the two models (in terms of SDE) by differentiating the parameters for each model:


SDE for the ABM and GBM
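In explicit notation, these SDEs can be written as follows (standard notation, with subscripts A and G distinguishing the ABM and GBM parameters):

```latex
% ABM: additive drift and diffusion
dS_t = \mu_A \, dt + \sigma_A \, dW_t
% GBM: multiplicative drift and diffusion
dS_t = \mu_G S_t \, dt + \sigma_G S_t \, dW_t
```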

To model the same phenomenon, we can use the following relationship between the parameters of the ABM and GBM models:


Link between the ABM and GBM parameters.

To make the two models comparable in terms of price behavior, an ABM can locally approximate GBM by matching instantaneous drift and volatility such that:


Local link between the ABM and GBM parameters.

This local correspondence is state-dependent and time-varying, and therefore not a true parameter equivalence.
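Assuming the standard local matching of instantaneous drift and volatility, the correspondence referenced above reads:

```latex
\mu_A(t) = \mu_G \, S_t, \qquad \sigma_A(t) = \sigma_G \, S_t
```

Because the right-hand sides depend on the current price S_t, the matched ABM parameters drift with the state, which is why this is a local approximation rather than a fixed parameter equivalence.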

Figure 5 below compares the asset price paths for an ABM, a monthly-adjusted ABM and a GBM.


Figure 5. Simulated asset price paths for ABM, adjusted ABM and GBM.
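A comparison in the spirit of Figure 5 can be sketched as follows. Here the "adjusted ABM" rescales drift and volatility by the current price at each step (an assumption about what the figure's adjustment does), which amounts to an Euler–Maruyama scheme for the GBM, so with small steps it stays close to the exact GBM path driven by the same shocks:

```python
import numpy as np

def compare_models(s0=100.0, mu=0.08, sigma=0.15, T=1.0, dt=1/252, seed=7):
    """Simulate an ABM, a locally adjusted ABM and an exact GBM path,
    all driven by the same Gaussian shocks."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    z = rng.standard_normal(n)
    abm = np.empty(n + 1)
    adj = np.empty(n + 1)
    gbm = np.empty(n + 1)
    abm[0] = adj[0] = gbm[0] = s0
    for i in range(n):
        shock = np.sqrt(dt) * z[i]
        # Plain ABM: absolute drift/volatility, calibrated at the initial price
        abm[i + 1] = abm[i] + mu * s0 * dt + sigma * s0 * shock
        # Adjusted ABM: parameters rescaled by the current price (Euler step)
        adj[i + 1] = adj[i] + mu * adj[i] * dt + sigma * adj[i] * shock
        # Exact GBM: log-normal update
        gbm[i + 1] = gbm[i] * np.exp((mu - 0.5 * sigma**2) * dt + sigma * shock)
    return abm, adj, gbm

abm, adj, gbm = compare_models()
```

With daily steps the adjusted ABM and the exact GBM are nearly indistinguishable, while the plain ABM drifts apart because its shocks do not scale with the price level.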

Why should I be interested in this post?

Understanding how asset prices are modeled, and in particular the difference between additive and multiplicative price dynamics, is essential for building strong intuition about how prices evolve over time under uncertainty. This understanding forms the foundation of modern risk management, as it directly informs concepts such as capital protection, downside risk, and the long-term behavior of investment portfolios.

Related posts on the SimTrade blog

   ▶ Saral BINDAL Historical Volatility

   ▶ Saral BINDAL Implied Volatility and Option Prices

   ▶ Jayati WALIA Brownian Motion in Finance

   ▶ Jayati WALIA Monte Carlo simulation method

Useful resources

Academic research

Bachelier L. (1900) Théorie de la spéculation. Annales scientifiques de l’École Normale Supérieure, 3e série, 17, 21–86.

Kataoka S. (1963) A stochastic programming model. Econometrica, 31, 181–196.

Lawler G.F. (2006) Introduction to Stochastic Processes, 2nd Edition, Chapman & Hall/CRC, Chapter “Brownian Motion”, 201–224.

Maruyama G. (1955) Continuous Markov processes and stochastic equations. Rendiconti del Circolo Matematico di Palermo, 4, 48–90.

Samuelson P.A. (1965) Rational theory of warrant pricing. Industrial Management Review, 6(2), 13–39.

Telser L. G. (1955) Safety-first and hedging. Review of Economic Studies, 23, 1–16.

Wiener N. (1923) Differential-space. Journal of Mathematics and Physics, 2, 131–174.

Other

H. Hamedani, Brownian Motion as the Limit of a Symmetric Random Walk, ProbabilityCourse.com Online chapter section.

About the author

The article was written in January 2026 by Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research assistant at ESSEC Business School).

   ▶ Discover all articles written by Saral BINDAL

Valuation in Niche Sectors: Using Trading Comparables and Precedent Transactions When No Perfect Peers Exist

Ian DI MUZIO

In this article, Ian DI MUZIO (ESSEC Business School, Master in Finance (MiF), 2025–2027) discusses how valuation practitioners use trading comparables and precedent transactions when no truly “perfect” peers exist, and how to build a defensible valuation framework in Mergers & Acquisitions (M&A) for hybrid or niche sectors.

Context and objective

In valuation textbooks, comparable companies and precedent transactions appear straightforward: an analyst selects a sector in a database, obtains a clean peer group, computes an EV/EBITDA range, and applies it to the target. In practice, this situation is rare.

In real M&A mandates, the target often operates at the intersection of several activities (e.g. media intelligence, marketing technology, and consulting), across multiple geographies, with competitors that are mostly private or poorly disclosed.

Practitioners typically rely on databases such as Capital IQ, Refinitiv, PitchBook or Orbis. While these tools are powerful, they often return peer groups that are either too broad (mixing unrelated business models) or too narrow (excluding relevant private competitors). Private peers, even when strategically closest, usually cannot be used directly because they do not publish sufficiently detailed or standardized financial statements.

The objective of this article is therefore to provide an operational framework for valuing companies in such conditions. It explains:

  • What trading comparables and precedent transactions really measure;
  • Why “perfect” peers almost never exist in practice;
  • How to construct and clean a comps set in hybrid sectors;
  • How to use precedent transactions when listed peers are scarce;
  • How to combine these tools with discounted cash-flow (DCF) analysis and professional judgment.

The target reader is a student or junior analyst who already understands the intuition behind EV/EBITDA (enterprise value divided by earnings before interest, taxes, depreciation and amortisation), but wants to understand how experienced deal teams reason when databases do not provide obvious answers.

Trading comparables: what they measure in practice

Trading comparables rely on the idea that listed companies with similar risk, growth and operating characteristics should trade at comparable valuation multiples.

The construction of trading multiples follows three technical steps.

First, equity value is converted into enterprise value (EV):

Enterprise Value = Equity Value + Net Debt + Preferred Equity + Minority Interests – Non-operating Cash and Investments.

This adjustment ensures consistency between the numerator (EV) and the denominator (operating metrics such as EBITDA), which reflect the performance of the entire firm.

Second, the denominator is selected and cleaned. Common denominators include LTM or forward revenue, EBITDA or EBIT. EBITDA is typically adjusted to exclude non-recurring items such as restructuring costs, impairments or exceptional litigation expenses.

Third, analysts interpret the distribution of multiples rather than relying on a simple average. Dispersion reflects differences in growth, margins, business quality and risk. When peers are imperfect, this dispersion becomes a key analytical input.
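The three steps can be illustrated with purely hypothetical figures (all names and numbers below are illustrative, not real company data):

```python
import statistics

# Step 1: equity value to enterprise value (the bridge stated above)
def enterprise_value(equity, net_debt, preferred=0.0, minorities=0.0,
                     non_operating=0.0):
    return equity + net_debt + preferred + minorities - non_operating

# Step 2: clean the denominator (add back a one-off restructuring cost)
reported_ebitda = 120.0
adjusted_ebitda = reported_ebitda + 10.0

ev = enterprise_value(equity=1_000.0, net_debt=250.0, minorities=30.0)
multiple = ev / adjusted_ebitda

# Step 3: study the distribution of peer multiples, not just the mean
peer_multiples = [8.5, 9.2, 10.1, 11.0, 12.4, 14.8]   # illustrative peer set
median = statistics.median(peer_multiples)
q1, q2, q3 = statistics.quantiles(peer_multiples, n=4)
```

Positioning the target's `multiple` within the interquartile range `[q1, q3]`, rather than against a single average, is what allows dispersion to become an analytical input when peers are imperfect.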

EV/EBITDA distribution
Figure 1 – Distribution of EV/EBITDA multiples for a selected peer group in the media and marketing technology space. The figure is based on a simulated dataset constructed to mirror typical outputs from Capital IQ and Refinitiv for educational purposes. The target company is positioned within the range based on its growth, margin and risk profile.

Precedent transactions: what trading comps do not capture

Precedent transactions analyse valuation multiples paid in actual M&A deals. While computed in a similar way to trading multiples, they capture additional economic dimensions, as explained below.

Transaction multiples typically include a control premium, as buyers obtain control over strategy and cash flows. They also embed expected synergies and strategic considerations, as well as prevailing credit-market conditions at the time of the deal.

From a technical standpoint, transaction enterprise value is reconstructed at announcement using the offer price, fully diluted shares, and the target’s net debt and minority interests. Careful alignment between balance-sheet data and LTM operating metrics is essential.
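Reconstructing a transaction enterprise value at announcement can be sketched as follows (all numbers hypothetical, in millions):

```python
def transaction_ev(offer_price_per_share, fully_diluted_shares,
                   net_debt, minorities=0.0):
    """Enterprise value implied by an offer at announcement."""
    equity_value = offer_price_per_share * fully_diluted_shares
    return equity_value + net_debt + minorities

ev = transaction_ev(offer_price_per_share=42.0, fully_diluted_shares=50.0,
                    net_debt=300.0, minorities=20.0)
ltm_ebitda = 180.0                  # aligned with the same balance-sheet date
deal_multiple = ev / ltm_ebitda
```

The alignment point matters: `net_debt` and `ltm_ebitda` must come from statements as of the same date, otherwise the multiple mixes two different snapshots of the target.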

Trading vs precedent multiples
Figure 2 – Comparison between trading comparables and precedent transaction multiples (EV/EBITDA). The illustration is based on a simulated historical sample consistent with PitchBook and Capital IQ deal data. Precedent transactions typically show higher multiples due to control premia, synergies and financing conditions.

Why perfect peers almost never exist

Teaching in business schools often presents comparables as firms with identical sector, geography, size and growth. In real M&A practice, this situation is exceptional.

Business models are frequently hybrid. A single firm may combine SaaS subscriptions, recurring managed services and project-based consulting, each with different margin structures and risk profiles.

Accounting reporting rules, such as International Financial Reporting Standards (IFRS) or US GAAP, further reduce comparability. Differences in revenue recognition (IFRS 15), lease accounting (IFRS 16) or capitalization of development costs can materially affect reported EBITDA.

Finally, many relevant competitors are private or embedded within larger groups, making transparent comparison impossible.

Building a defensible comps set in hybrid sectors

When similarity is weak, the analysis should begin with a decomposition of the target’s business model. Revenue streams are separated into functional blocks (platform, services, consulting), each benchmarked against the most relevant public proxies.

Peer groups are therefore modular rather than homogeneous. Geographic constraints are relaxed progressively, prioritising business-model similarity over local proximity.

Comps workflow
Figure 3 – Bottom-up workflow for constructing a defensible comps set in niche sectors. The figure illustrates the analytical sequence used by practitioners: business-model decomposition, peer clustering, financial cleaning and positioning within a valuation range.

When comparables fail: the role of DCF

When no meaningful peers exist, discounted cash-flow (DCF) analysis becomes the primary valuation tool.

A DCF estimates firm value by projecting free cash flows and discounting them at the weighted average cost of capital (WACC), which reflects the opportunity cost for both debt and equity investors.

Key valuation drivers include unit economics, operating leverage and realistic assumptions on growth and margins. Sensitivity analysis is essential to reflect uncertainty.
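A minimal DCF sketch illustrates the mechanics (hypothetical cash flows; terminal value via the Gordon growth formula, which is one common choice among several):

```python
def dcf_value(fcfs, wacc, terminal_growth):
    """PV of explicit free cash flows plus a discounted terminal value."""
    pv_fcfs = sum(fcf / (1 + wacc) ** t for t, fcf in enumerate(fcfs, start=1))
    # Gordon growth terminal value, taken at the end of the explicit period
    terminal = fcfs[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv_terminal = terminal / (1 + wacc) ** len(fcfs)
    return pv_fcfs + pv_terminal

value = dcf_value(fcfs=[100, 110, 120, 130, 140],
                  wacc=0.09, terminal_growth=0.02)
```

Rerunning `dcf_value` across a grid of WACC and growth assumptions is the sensitivity analysis mentioned above: small changes in either input move the valuation materially because most of the value sits in the terminal term.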

Corporate buyers versus private equity sponsors

Corporate acquirers focus on strategic fit and synergies, while private equity sponsors are constrained by required internal rates of return (IRR) and multiples on invested capital (MOIC).

Despite different objectives, both rely on the same principle: when comparables are imperfect, the narrative behind the multiples matters more than the multiples themselves.

How to communicate limitations effectively

From the analyst’s perspective, the key is transparency. Clearly stating the limitations of the comps set and explaining the analytical choices strengthens credibility rather than weakening conclusions.

Useful resources

Damodaran, A. (NYU), Damodaran Online.

Rosenbaum, J. & Pearl, J. (2013), Investment Banking: Valuation, Leveraged Buyouts, and Mergers & Acquisitions, Wiley.

Koller, T., Goedhart, M. & Wessels, D. (2020), Valuation: Measuring and Managing the Value of Companies, McKinsey & Company, 7th edition.

About the author

This article was written in January 2025 by Ian DI MUZIO (ESSEC Business School, Master in Finance (MiF), 2025–2027).

Understanding WACC: a student-friendly guide

Daniel LEE

In this article, Daniel LEE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – 2023-2027) explains the Weighted Average Cost of Capital (WACC).

Introduction

The Weighted Average Cost of Capital (WACC) is one of the most important concepts in corporate finance and valuation. I know that for some students, it feels abstract or overly technical. In reality, WACC is simpler than we think.

Whether you are building a DCF, making an investment decision, or assessing long-term value creation, understanding WACC is essential to interpret the financial world. In a DCF, WACC is used as the discount rate applied to free cash flows (FCF). A higher WACC lowers the present value of future cash flows and thus the firm value, whereas a lower WACC increases it. That is why WACC is a benchmark for value creation.

What is the cost of capital?

Every company needs funding to operate, which comes from two main sources: debt and equity. Debt is provided by banks or bondholders and equity is provided by shareholders. Both expect to be compensated for the risk they take. Shareholders typically require a higher return because they bear greater risk, as they are paid only after all other obligations have been met. In contrast, debt investors mainly expect regular interest payments and face lower risk because they are paid before shareholders in case of financial difficulty. The cost of capital represents the return required by each group of investors, and the Weighted Average Cost of Capital (WACC) combines these required returns into a single percentage.


Breaking down the WACC formula

WACC is calculated with this formula:

Formula for the WACC
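In standard notation, the formula reads:

```latex
\text{WACC} \;=\; \frac{E}{E+D}\, r_E \;+\; \frac{D}{E+D}\, r_D \,(1 - T)
```

where E is the value of equity, D the value of debt, r_E the cost of equity, r_D the pre-tax cost of debt, and T the corporate tax rate.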

To gather these elements, we use several methods such as:

Cost of Equity: CAPM model

Cost of equity = Risk-free rate + β (Expected market return – Risk-free rate)

Beta measures how sensitive a company’s returns are to movements in the overall market. It captures systematic risk, meaning the risk that cannot be eliminated through diversification. A beta above 1 indicates that the firm is more volatile than the market, while a beta below 1 means it is less sensitive to market changes.

It is important to distinguish between unlevered beta and levered beta. The unlevered beta reflects only the risk of the firm’s underlying business activities, assuming the company has no debt. It represents the pure business risk of the firm and is especially useful when comparing companies within the same industry, as it removes the effect of different financing choices. This is why analysts often unlever betas from comparable firms and then relever them to match a target capital structure.

The levered beta, on the other hand, includes both business risk and financial risk created by the use of debt. When a company takes on more debt, shareholders face greater risk because interest payments must be made regardless of the firm’s performance. This increases the volatility of equity returns, leading to a higher levered beta and a higher cost of equity.

The risk-free rate represents the return investors can earn without taking any risk and is usually approximated by long-term government bond yields. It acts as the baseline return in the CAPM, since investors will only accept risky investments if they offer a return above this rate. Choosing the correct risk-free rate is important: it should match the currency and the time horizon of the cash flows. Changes in the risk-free rate have a direct impact on the cost of equity and, therefore, on firm valuation.
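The unlever/relever step and the CAPM can be sketched in a few lines, using the common Hamada relation βL = βU × (1 + (1 − T) × D/E); all input values below are illustrative:

```python
def unlever_beta(levered_beta, debt_to_equity, tax_rate):
    """Hamada relation: strip financial risk out of a comparable's beta."""
    return levered_beta / (1 + (1 - tax_rate) * debt_to_equity)

def relever_beta(unlevered_beta, debt_to_equity, tax_rate):
    """Re-apply the target capital structure."""
    return unlevered_beta * (1 + (1 - tax_rate) * debt_to_equity)

def capm_cost_of_equity(risk_free, beta, market_premium):
    return risk_free + beta * market_premium

# Illustrative: comparable firm with beta 1.2 at D/E = 0.5,
# relevered to a target D/E = 0.25
bu = unlever_beta(1.2, debt_to_equity=0.5, tax_rate=0.25)
bl = relever_beta(bu, debt_to_equity=0.25, tax_rate=0.25)
cost_of_equity = capm_cost_of_equity(risk_free=0.03, beta=bl,
                                     market_premium=0.05)
```

Note how the relevered beta ends up below the comparable's 1.2: the target carries less debt, so its shareholders bear less financial risk on top of the same business risk.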

Cost of Debt

Interest payments are tax-deductible, which is why the formula includes the factor (1 − T). For example, if a company pays 5% interest annually and the corporate tax rate is 30%, then the after-tax cost of debt is 5% × (1 − 0.30) = 3.5%.

Capital Structure Weights

The weights Equity/(Equity+Debt) and Debt/(Equity+Debt) represent the proportions of equity and debt in the company’s capital structure. One might assume that a firm with more debt will have a lower WACC because debt is cheaper, but too much debt is risky. That is why this balance is very important for valuation, and why practitioners usually use a “target capitalization”: an assumption about the level of debt and equity the company is expected to maintain in the long term, rather than its current mix.

Understanding risk through the WACC

WACC is a measure of risk. A higher WACC means the company is riskier and a lower WACC means it’s safer.

WACC is also closely linked to a firm’s ability to create value. If ROIC > WACC, the company creates value; if ROIC < WACC, it destroys value. This rule is widely used by CFOs and investors to make decisions.

How is WACC used in practice?

  • WACC is the discount rate applied to FCF in a DCF > lower WACC = higher valuation; higher WACC = lower valuation
  • As noted above, it helps to assess value creation and compute NPV
  • Assessing capital structure > it helps to find the optimal balance between debt and equity
  • Comparing companies > a good preliminary step when looking at similar companies in the same industry; the WACC will tell you a lot about their risk

Example

To illustrate how the WACC formula is used in practice, let us take the DCF valuation for Alstom that I made recently. In this valuation, WACC is used as the discount rate to convert future free cash flows into present value.

Alstom’s capital structure is defined using a target capitalization, chosen based on the industry and the comparable companies. Equity represents 75% of total capital and debt 25%. The cost of equity is estimated using the CAPM. Based on the base-case assumptions, Alstom has a levered beta that reflects both its industrial business risk and its use of debt. Combined with a risk-free rate and an equity risk premium, this leads to a cost of equity of 8.3%.

The cost of debt is estimated using Alstom’s borrowing conditions. Alstom pays an average interest rate of 4.12% on its debt. Since interest expenses are tax-deductible, we adjust for taxes. With a corporate tax rate of 25.8%, the after-tax cost of debt is:

4.12% × (1 − 0.258) ≈ 3.06%

We can now compute the WACC:

WACC = 75% × 8.3% + 25% × 3.06% ≈ 6.99%

This WACC represents the minimum return Alstom must generate on its invested capital to satisfy both shareholders and lenders. In the DCF, this rate is applied to discount future free cash flows. A higher WACC would reduce Alstom’s valuation, while a lower WACC would increase it, highlighting how sensitive valuations are to financing assumptions.
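The computation can be checked in a few lines (inputs taken from the example above; small rounding differences relative to the quoted figures are possible since the article rounds intermediate results):

```python
cost_of_equity = 0.083            # from the CAPM, as in the example
pre_tax_cost_of_debt = 0.0412
tax_rate = 0.258
w_equity, w_debt = 0.75, 0.25     # target capitalization weights

after_tax_cost_of_debt = pre_tax_cost_of_debt * (1 - tax_rate)
wacc = w_equity * cost_of_equity + w_debt * after_tax_cost_of_debt
```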

Conclusion

To conclude, WACC may look a bit complicated, but it represents a simple idea: the company must generate enough return to reward its investors for the risk they take. Understanding WACC allows you to interpret valuations, understand how capital structure influences risk, and compare businesses across industries. Once you master the WACC, it becomes one of the best tools for building intuition about risk and valuation.

Related posts on the SimTrade blog

   ▶ Snehasish CHINARA Academic perspectives on optimal debt structure and bankruptcy costs

   ▶ Snehasish CHINARA Optimal capital structure with corporate and personal taxes: Miller 1977

   ▶ Snehasish CHINARA Optimal capital structure with no taxes: Modigliani and Miller 1958

Useful resources

Damodaran, A. (2001) Corporate Finance: Theory and Practice. 2nd edn. New York: John Wiley & Sons.

Modigliani, F., M.H. Miller (1958) The Cost of Capital, Corporation Finance and the Theory of Investment, American Economic Review, 48(3), 261-297.

Modigliani, F., M.H. Miller (1963) Corporate Income Taxes and the Cost of Capital: A Correction, American Economic Review, 53(3), 433-443.

Vernimmen, P., Quiry, P. and Le Fur, Y. (2022) Corporate Finance: Theory and Practice, 6th Edition. Hoboken, NJ: Wiley.

About the author

The article was written in January 2026 by Daniel LEE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – 2023-2027).

   ▶ Read all articles by Daniel LEE.

Crypto ETP

Alberto BORGIA

In this article, Alberto BORGIA (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Exchange student, Fall 2025) explains Exchange-Traded Products (ETPs) on cryptocurrencies.

Introduction

An Exchange-Traded Product (ETP) is a type of regulated financial instrument, which is traded on stock exchanges and allows exposure to the price movements of an underlying asset or a benchmark without requiring direct ownership of the asset.

Crypto ETPs are instruments that provide regulated access to all market participants. Since their inception, they have become the main access point for traditional investors seeking exposure to digital assets. The value of assets in this category continues to grow every year and, in their latest report, 21Shares analysts expect these assets to surpass $400 billion globally by 2026.

The figure below shows how rapidly crypto ETPs have scaled from early 2024 to late 2025. Assets under management (blue area) rise in successive waves, moving from roughly the tens of billions to just under the $300 billion range by late October 2025, while cumulative net inflows (yellow line) trend steadily upward toward roughly $100 billion, signaling that growth has been supported by persistent new capital in addition to market performance.

As regulated access expands through mainstream distribution channels and more jurisdictions formalize frameworks for crypto investment vehicles, ETPs increasingly become the default wrapper for exposure. As the market deepens, secondary-market liquidity typically improves and execution costs compress, reducing short-term dislocations around the product and reinforcing further allocations.

Crypto ETP Asset under Management (AUM)
Crypto ETP AUM
Source: 21Shares.

This trend is driven not only by retail clients’ demand, but also by an increasing openness of traditional markets toward these types of products, meaning that established exchanges, broker-dealers, custodians and market-makers are increasingly willing to list, distribute and support crypto-linked ETPs within the same governance, disclosure and risk-management frameworks used for other exchange-traded instruments. In the US, more and more structural barriers are being removed thanks to new approval processes for crypto investment vehicles, as regulators and exchanges have been moving toward clearer, more standardized filing and review pathways and more predictable disclosure expectations.

By the end of 2025, more than 120 ETP applications were pending review in the USA, under assessment by the SEC and, where relevant, by the national securities exchanges seeking to list these products, positioning the market for significant inflows beyond Bitcoin and Ethereum in the new year.

We see this trend in other countries as well: the UK has removed the ban for retail investors, Luxembourg’s sovereign fund has invested as much as 1% of its portfolio in Bitcoin ETPs, while countries such as the Czech Republic and Pakistan have even started using such assets for national reserves. In Asia and Latin America, regulatory frameworks are also being formed, making crypto ETPs the global standard for regulated access.

This will lead to a virtuous cycle that will attract more and more capital: AUM growth enables a reduction in spreads, volatility decreases and liquidity increases, improving price efficiency and execution quality and reducing short-term dislocations, thereby supporting the growth of the asset class.

ETP or ETF

An Exchange-Traded Product is a broad category of regulated instruments that give investors transparent, tradable exposure to an underlying asset, index or a strategy. An Exchange-Traded Fund is a specific type of ETP that is legally structured as an investment fund, typically holding the underlying assets and calculating a net asset value. The key difference is therefore the legal form and the risk profile: ETFs are fund vehicles with segregated assets held for investors, whereas many non-ETF ETPs (such as ETNs) are debt instruments whose performance can also depend on the issuer’s creditworthiness. So, all ETFs are ETPs, but not all ETPs are ETFs.

Structure

There are two methods for replicating the underlying: physical and synthetic. Physical ETPs are created through the purchase and holding of the asset by the issuing entity, so the replication is directly linked to the performance of the underlying. Synthetic ETPs, instead, are built on a swap contract with a counterparty, for example a bank, which agrees to provide the return of that asset. To protect the daily return, the counterparty is required to post liquid collateral with the issuer; the amount of this collateral then fluctuates based on the value of the underlying asset and its volatility profile. Based on Vanguard’s discussion of physical vs. synthetic ETF structures, and on industry evidence showing that physical replication dominates European ETF AUM, we can say that in recent years investors have generally preferred physical ETPs over synthetic structures, thanks to their transparency, reduced counterparty risk and relative simplicity. With regard to crypto in particular, given the simplicity of holding the asset and its liquidity, almost all of these products on cryptocurrencies are physical.

For this reason, when you purchase this type of financial asset, you do not directly own the physical cryptocurrency (the underlying), but rather a debt security of the issuer, backed by the crypto and protected through the relationship with the trustee. The trustee’s task is to represent the interests of investors, receiving all rights over the physical assets that collateralize the ETP. It therefore acts as an independent third party that protects the ETP’s assets and ensures that the product is managed in accordance with the terms and conditions established beforehand.

Structure of Exchange Traded Product
ETP’s structure
Source: Sygnum Bank.

Single or diversified

Depending on the exposure the investor wants to obtain, various types of these assets can be purchased:

  • Some may replicate a specific cryptocurrency by tracking the value of a single digital coin. Their task is therefore only to replicate the market of the underlying asset in a simple and efficient way.
  • Other ETPs can replicate a basket or an index of cryptocurrencies; this is done to gain exposure simultaneously to different markets, diversifying risk.
  • We can find an example of this in the products offered by 21Shares. Part of it is represented by diversified products, such as the 21Shares Crypto Basket Equal Weight ETP, where several cryptocurrencies make up the product. The majority, however, both in terms of AUM and number of products, is single-asset, with only one underlying. Examples include the 21Shares Bitcoin ETP or the 21Shares Bitcoin Core ETP.
  • When speaking specifically about these two products, there is a distinctive feature that makes 21Shares unique. The company was the first to bring these products to market and, for this reason, having a “monopoly” at the time, it was able to charge extremely high fees. With the arrival of new players, however, it was forced to reduce them and, thanks to its structure and competitive advantages, was able to offer extremely low fees, the lowest on the market, without delisting the previous products, as they remained profitable. In fact, the two products mentioned above have no differences of any kind, except for their costs.

BTC ETP
21Shares BTC ETP
Source: 21Shares.

Advantages compared to traditional crypto

The reasons for purchasing this type of financial instrument can be multiple. First of all, navigating the world of cryptocurrencies can seem difficult, but ETPs remove much of the complexity. Instead of relying on unregulated platforms or paying extremely high fees to traditional funds that invest only marginally in cryptocurrencies, investors can buy this asset directly, as they would any other security. ETPs then sit alongside all other investments in the portfolio, enabling simpler portfolio analysis and comparison with other products. Moreover, even if these intermediaries do not offer true financial advice, they provide investor support that is far better than that of classic crypto platforms.

Another element in their favor is the security and transparency on which they are based. In particular in Europe, these instruments are subject to stringent financial regulations and are required to comply with accounting, disclosure, and transparency rules. Then, since they are predominantly physically collateralized, their structure makes it possible to protect the client and the asset itself in the event of bankruptcy or insolvency of the issuer, limiting exposure to the underlying.

Why should I be interested in this post?

The crypto market is a complex and constantly changing world. This article can be read not only by those who want to pursue a career in the cryptocurrency sector, but by anyone who wants to deepen their understanding of concepts that are becoming increasingly important and fundamental in financial markets and in everyday life.

Related posts on the SimTrade blog

   ▶ Snehasish CHINARA Top 10 Cryptocurrencies by Market Capitalization

   ▶ Hugo MEYER The regulation of cryptocurrencies: what are we talking about

Useful resources

CoinShares

21Shares

Swem, N. and F. Carapella (28/03/2025) Crypto ETPs: An Examination of Liquidity and NAV Premium, FEDS Notes.

Sygnum

Vanguard: Replication methodology / ETF knowledge

About the author

The article was written in December 2025 by Alberto BORGIA (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Exchange student, Fall 2025).

   ▶ Read all articles by Alberto BORGIA.

EBITDA: Uses, Benefits and Limitations

Alberto BORGIA

In this article, Alberto BORGIA (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Exchange student, Fall 2025) explains EBITDA: how it can be used, and its advantages and disadvantages.

Introduction

Earnings Before Interest, Taxes, Depreciation and Amortization (EBITDA) is one of the most widely used financial metrics. Its goal is to capture a company’s operating performance before considering the effects of financing choices (interest), taxation, and non-cash accounting charges related to long-lived assets and acquired intangibles.

The intuition behind it is that if two or more firms sell similar products, the analyst should be able to compare their “core operating engine”, even if they differ in debt levels, tax jurisdictions or asset bases.

Because EBITDA is a key figure capable of influencing valuations and decisions, it is crucial to understand both how it is obtained and what it actually measures.

How it is obtained

To calculate this metric, we begin with the income statement and add back the expenses that are excluded by the EBITDA definition:

EBITDA = Net Income + Interest + Taxes + Depreciation + Amortization

Another way is to start from the EBIT:

EBITDA = EBIT + Depreciation + Amortization

Using Carrefour as a real-case example, I calculated EBITDA starting from the company’s income-statement figures. First, I reproduced an “operating-style” EBITDA by taking Gross Margin and subtracting selling, general and administrative expenses, which gives a core operating profit measure before financing and taxes. Then, for a second approach, I computed EBITDA as Recurring Operating Income + Depreciation + Amortization. This shows how EBITDA is obtained in practice from published financial statement components.

These two formulae look clean and easy, but the real computation is messier, depending on how the company structures its income statement and on what is included in the depreciation and amortization lines. For these reasons, EBITDA is usually accompanied by a set of clear definitions and reconciliations.
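As a minimal sketch of the two computation routes above, with purely illustrative figures (not Carrefour’s actual numbers):

```python
# Hypothetical income-statement figures, in millions (illustrative only)
net_income = 1_200
interest = 300
taxes = 400
depreciation = 650
amortization = 150

# Bottom-up route: add back the items excluded by the EBITDA definition
ebitda = net_income + interest + taxes + depreciation + amortization

# Alternative route: start from EBIT and add back D&A
ebit = net_income + interest + taxes
ebitda_from_ebit = ebit + depreciation + amortization

print(ebitda)  # 2700 with these figures, identical under both routes
```

Both routes give the same number by construction; in practice the hard part is deciding which reported line items map to each label.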

Another key point is that “earnings” is not always interpreted in the same way. That is why the SEC has decided that, in the context of EBITDA and EBIT as described in its adopting release, “earnings” means GAAP net income as presented in the statement of operations. So, if a measure is calculated differently, it should not be labeled EBITDA, but Adjusted EBITDA.

Adjusted EBITDA

In many documents we see the term Adjusted EBITDA, which modifies the measure by excluding items that management considers “non-core” or “non-recurring”. These adjustments typically include items such as restructuring costs, acquisition-related expenses, unusual or non-recurring gains and losses, and stock-based compensation. The goal is to estimate a normalized operating result. This can, however, create risks when comparing different firms or the same firm across years.

What it is used for

EBITDA is one of the metrics most closely watched by financial analysts, and its uses are just as varied. First of all, by excluding interest expense, it is suitable as a proxy for comparing companies from an operating perspective, even when they have different tax or capital structures. It is also used in the debt market to compute risk indicators and to limit leverage and protect lenders.

It is also used for company valuation and for the calculation of multiples, such as EV/EBITDA, where EV (enterprise value) indicates the total value of the firm. According to the technical literature, this multiple is particularly useful and widely used because it can be calculated even when net income is negative (which makes it extremely common in capital-intensive industries and in leveraged buyouts) and because it allows the comparison of companies with totally different levels of financial leverage.
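As a sketch of how an EV/EBITDA multiple is typically applied in a comps-based valuation, with hypothetical peer multiples, EBITDA, and net debt figures:

```python
from statistics import median

# Hypothetical peer set and target figures (all values illustrative)
peer_multiples = [7.5, 8.0, 8.4, 9.1]  # EV/EBITDA of comparable firms
target_ebitda = 500                    # target's EBITDA, in millions
net_debt = 900                         # target's net debt, in millions

# Apply the median peer multiple to the target's EBITDA
implied_ev = median(peer_multiples) * target_ebitda  # enterprise value
implied_equity_value = implied_ev - net_debt         # value for shareholders
```

Using the median rather than the mean makes the result less sensitive to one outlier multiple in the peer set.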

EBITDA and its variants are also particularly useful for communicating with investors and analysts, even though it is necessary to be especially careful about any modifications aimed at “inflating” the results.

Lastly, analysts consider it a starting point, pairing it with cash-flow measures, such as free cash flow, for a fuller view.

Advantages

There are several advantages to using EBITDA. For instance, it can be calculated quickly from publicly available financial statements, and it is often directly disclosed by companies. It is useful for analyzing companies in industries where leverage varies a lot, or when assessing a target in M&A, where the capital structure can change immediately after the acquisition. Finally, by adding back depreciation and amortization, operating results become less sensitive to the useful lives assigned to assets.

Disadvantages

However, EBITDA is also associated with several notable drawbacks. Even after adding back depreciation and amortization, the measure does not take into account changes in working capital or the capex needed to increase or maintain productive capacity; it is more of a “rough” proxy for operating cash flow.

As previously noted, EBITDA is also susceptible to manipulation, as it is inherently open to interpretation. Consequently, it should be complemented with other financial metrics to provide a more comprehensive and balanced assessment, thereby reducing the risk of misinterpretation driven by management’s attempts to influence investors’ perceptions.

EBITDA Margin

To express the EBITDA relative to revenue, we can use EBITDA margin:

EBITDA Margin = EBITDA / Revenue

It is calculated to understand how much operating earnings the firm generates per unit of sales; in particular, it can be used to compare a firm’s profitability with that of its peers or to track trends over time. Even though it is particularly useful in financial analysis, the EBITDA margin presents the same issues as the original metric: if EBITDA itself is defined incorrectly, the margin will also be wrong. Just like plain EBITDA, this metric is best used when accompanied by Operating Cash Flow (OCF), which reflects the cash generated by a company’s core operating activities, by Free Cash Flow (FCF), which represents the cash available after the capital expenditures necessary to maintain or expand the asset base, and by an industry context.
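As a one-line sketch of the ratio, with hypothetical figures:

```python
# Hypothetical figures, in millions (illustrative only)
ebitda = 2_700
revenue = 90_000

# EBITDA margin: operating earnings generated per unit of sales
ebitda_margin = ebitda / revenue  # 0.03, i.e. a 3% margin
```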

Example

I provide below an example for the computation of EBITDA based on Carrefour, a French firm operating in the retail sector, more precisely in mass-market distribution (retail grocery).

Example of EBITDA calculation: Carrefour

You can download the Excel file provided below, which contains the calculations of EBITDA for Carrefour.

Download the Excel file.

Why should I be interested in this post?

EBITDA is a fundamental concept for anyone who wants to build a career in finance, but not only for them. Understanding how it works, as well as its strengths and weaknesses, is necessary to build the knowledge required to become a competent and respected professional. This article starts from the basics in order to explain the principles behind this metric even to those who are not in the field.

Related posts on the SimTrade blog

   ▶ Cornelius HEINTZE DCF vs. Multiples: Why Different Valuation Methods Lead to Different Results

   ▶ Dawn DENG Assessing a Company’s Creditworthiness: Understanding the 5C Framework and Its Practical Applications

Useful resources

Non-GAAP Financial Measures

Deloitte Accounting Research Tool (DART) 3.5 EBIT, EBITDA, and adjusted EBITDA

Damodaran EBITDA concept, margins, interpretation

Moody’s (November 2024) EBITDA: Used and Abused

Faria-e-Castro, M., Gopalan R., Pal, A, Sanchez J.M., and Yerramilli V. (2021) EBITDA Add-backs in Debt Contracting: A Step Too Far? Working paper.

Damodaran EBITDA vs cash flow logic; reinvestment/capex relevance

About the author

The article was written in December 2025 by Alberto BORGIA (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Exchange student, Fall 2025).

   ▶ Read all articles by Alberto BORGIA.

How to approach a stock pitch

Daniel LEE

In this article, Daniel LEE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – 2023-2027) explains how to approach a stock pitch.

Introduction

Are you preparing for an interview in investment banking? At a hedge fund? Or just participating in a finance competition? Learning how to deliver a stock pitch is one of the most useful skills you can develop early in your career.

A stock pitch combines fundamental analysis, strategy, valuation skills and even communication. The goal of this article is to break down the stock pitch process into steps that anyone can apply.

What is a stock pitch?

A stock pitch is a recommendation (Buy, Hold or Sell) on a stock supported by:

  • A lot of research on the company and the industry to better understand the context
  • Financial analysis and valuation
  • Investment logic

A stock pitch is almost always structured the same way.

1. Business Overview

Here the goal is to understand the company and some of the key questions are: What is the business model? What are the revenue drivers? Is the company competitive?

2. Industry Overview

In order to put the company into context, you will have to study key elements like market size and growth, the competitive landscape, barriers to entry, and industry trends.

3. Investment Thesis

The investment theses (generally three) are the reasons why an investor should follow your recommendation. Each thesis must be backed up by evidence and specific points. Just saying “the company is a leading player in the industry” doesn’t work. A strong investment thesis should be based on management guidance and analysts’ consensus, for example: “The company plans to deleverage by $x billion.”

4. Valuation

Valuation is probably the most difficult part, and it is the core of the pitch. It is here that you must justify why the stock is undervalued or overvalued. Usually (because exceptions exist depending on the industry or the company, and you have to pay attention to that!) you use both a relative valuation and an intrinsic valuation.

Relative valuation means comparing your company to its competitors to get a better idea of the multiples implied in the industry. The most used metrics are EV/EBITDA, P/E, and EV/EBIT. Again, some metrics can change depending on the company or industry: it is really important to understand that no two pitches will be the same. The choice of comps is also very important, and every comparable company should be justified based on specific criteria. For example, you won’t compare a company that grows apples to a company that produces oil.

The intrinsic valuation is the Discounted Cash Flow (DCF), which forecasts the company’s performance over the next 5 years. You typically forecast revenue growth, margins, and working capital needs. The DCF is highly sensitive to the assumptions you make, so it is very important to do the research work before starting the valuation. To account for estimation errors or unexpected events, analysts usually run sensitivity analyses on the perpetual growth rate and the WACC, as well as a bull & bear analysis. These analyses show that your pitch is robust and not based on unrealistic assumptions.

Finally, with all these elements you arrive at a target price. For example, with a 50-50 weighting between the trading comps ($25) and the DCF ($30), your target share price will be (25 + 30) / 2 = $27.50.
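The weighting step can be sketched in a few lines; the per-share values and the 50-50 weights below are the illustrative figures from the example:

```python
# Per-share values from the two approaches (illustrative figures)
comps_price = 25.0          # value from trading comparables
dcf_price = 30.0            # value from the DCF
w_comps, w_dcf = 0.5, 0.5   # 50-50 weighting

# Weighted average gives the blended target price
target_price = w_comps * comps_price + w_dcf * dcf_price  # 27.5
```

In practice the weights themselves are a judgment call and should be justified, like every other assumption in the pitch.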

5. Risks & Catalysts

This last part is here to strike a balance between optimism and realistic downside scenarios. Considering these elements is very important. A good stock pitch is not a buy recommendation with a 100% upside; a good stock pitch is an objective view of a stock, including its business risks.

What I learned from my previous experiences

Working on a few stock pitches taught me several lessons:

  • Keep the pitch simple and structured: a 15-20 slide deck is enough; do not make things complicated
  • Your thesis must be defensible: It is great to have a huge upside, but you have to explain your numbers, your assumptions and your model
  • Use Capital IQ: at ESSEC, students have a free account with Capital IQ, very useful to gather financial data!
  • Tell a story: incorporating a story is essential to make a good impression and keep the audience’s attention on your presentation.

Conclusion

To conclude, a stock pitch is one of the most accessible exercises for anyone who wants to learn financial modelling skills or how to understand a business from a 360° perspective. Moreover, it is always useful to have a stock pitch ready for an interview as it is a question that comes up often.

Related posts on the SimTrade blog

   ▶ Dawn DENG The Art of a Stock Pitch: From Understanding a Company to Building a Coherent Logics

   ▶ Emanuele GHIDONI Reinventing Wellness: How il Puro Brings Personalization to Nutrition

   ▶ Max ODEN Leveraged Finance: My Experience as an Analyst Intern at Haitong Bank

   ▶ Saral BINDAL Implied Volatility and Option Prices

   ▶ Adam MERALLI BALLOU The Private Equity Secondary Market: from liquidity mechanism to structural pillar

Useful resources

Vernimmen, P., Quiry, P., Dallocchio, M., Le Fur, Y. and Salvi, A. (2023) Corporate Finance: Theory and Practice.

CFA Research Challenge

Damodaran, A. (2012) Investment Valuation: Tools and Techniques for Determining the Value of Any Asset.

About the author

The article was written by Daniel LEE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – 2023-2027).

   ▶ Read all articles by Daniel LEE.

Implied Volatility and Option Prices

Saral BINDAL

In this article, Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research assistant at ESSEC Business School) explains how implied volatility is calculated or extracted from option prices using an option pricing model.

Introduction

In financial markets characterized by uncertainty, volatility is a fundamental factor shaping the pricing and dynamics of financial instruments. Implied volatility stands out as a key metric as a forward-looking measure that captures the market’s expectations of future price fluctuations, as reflected in current market prices of options.

The Black-Scholes-Merton model

In the early 1970s, Fischer Black and Myron Scholes jointly developed an option pricing formula, while Robert Merton, working in parallel and in close contact with them, provided an alternative and more general derivation of the same formula.

Together, their work produced what is now called the Black–Scholes–Merton (BSM) model, which revolutionized investing and led to the award of the 1997 Nobel Prize in Economic Sciences in Memory of Alfred Nobel to Myron Scholes and Robert Merton “for a new method to determine the value of derivatives,” developed in close collaboration with the late Fischer Black.

The Black-Scholes-Merton model provides a theoretical framework for option pricing and catalyzed the growth of derivatives markets. It led to the development of sophisticated trading strategies (hedging of options) that transformed risk management practices and financial markets.

The model is built on several key assumptions: the stock price follows a geometric Brownian motion with constant drift and volatility, there are no arbitrage opportunities, the risk-free interest rate is constant, and the options are European-style (options that can only be exercised at maturity).

Key Parameters

In the BSM model, the five essential parameters needed to compute the theoretical value of a European-style option are:

  • Strike price (K): fixed price specified in an option contract at which the option holder can buy (for a call) or sell (for a put) the underlying asset if the option is exercised.
  • Time to expiration (T): time left until the option expires.
  • Current underlying price (S0): the market price of underlying asset (commodities, precious metals like gold, currencies, bonds, etc.).
  • Risk-free interest rate (r): the theoretical rate of return on an investment that is continuously compounded per annum.
  • Volatility (σ): standard deviation of the returns of the underlying asset.

The strike price (exercise price) and time to expiration (maturity) correspond to characteristics of the option while the current underlying asset price, the risk-free interest rate, and volatility reflect market conditions.

Option payoff

The payoff for a call option gives the value of the option at the moment it expires (T) and is given by the expression below:


Payoff formula for call option

Where CT is the call option value at expiration, ST the price of the underlying asset at expiration, and K is the strike price (exercise price) of the option.

Figure 1 below illustrates the payoff function described above for a European-style call option. The example considers a European call written on the S&P 500 index, with a strike price of $5,000 and a time to maturity of 30 days.

Figure 1. Payoff value as a function of the underlying asset price.
Payoff function
Source: computation by the author.
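The payoff above can be sketched as a one-line function; the example values below are the ones used in Figure 1 (a strike of $5,000 on the S&P 500 index):

```python
def call_payoff(S_T, K):
    """European call payoff at expiration: max(S_T - K, 0)."""
    return max(S_T - K, 0.0)

K = 5_000.0  # strike price from the Figure 1 example

in_the_money = call_payoff(6_000.0, K)  # 1000.0: index above the strike
worthless = call_payoff(4_500.0, K)     # 0.0: option expires worthless
```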

Call option value

While the value of an option is known at maturity (being determined by its payoff function), its value at any earlier time prior to maturity, and in particular at issuance, is not directly observable. Consequently, a valuation model is required to determine the option’s price at those earlier dates.

The Black–Scholes–Merton model is formulated as a stochastic partial differential equation and the solution to the partial differential equation (PDE) gives the BSM formula for the value of the option.

For a European-style call option, the call option value at issuance is given by the following formula:


Formula for the call option value according to the BSM model

with


Formula for the call option value according to the BSM model

Where the notations are as follows:

  • C0= Call option value at issuance (time 0) based on the Black-Scholes-Merton model
  • K = Strike price (exercise price)
  • T = Time to expiration
  • S0 = Current underlying price (time 0)
  • r = Risk-free interest rate
  • σ = Volatility of the underlying asset returns
  • N(·) = Cumulative distribution function of the standard normal distribution

Figure 2 below illustrates the call option value as a function of the underlying asset price. The example considers a European call written on the S&P 500 index, with a strike price of $5,000 and a time to maturity of 30 days. The current price of the underlying index is $6,000, and the risk-free interest rate is set at 3.79% corresponding to the 1-month U.S. Treasury yield, and the volatility is assumed to be 15%.

Figure 2. Call option value as a function of the underlying asset price.
Call option value as a function of the underlying asset price.
Source: computation by the author (BSM model).
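A minimal, standard-library-only sketch of the BSM call value, using the Figure 2 parameters and assuming an actual/365 day count for the 30 days to maturity (the downloadable Python code at the end of the article is more complete):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF N(.) via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S0, K, T, r, sigma):
    # Black-Scholes-Merton value of a European call at time 0
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Figure 2 parameters: S&P 500 call, K = $5,000, 30 days to maturity,
# index at $6,000, r = 3.79%, volatility 15% (T assumed to be 30/365)
price = bsm_call(S0=6_000.0, K=5_000.0, T=30 / 365, r=0.0379, sigma=0.15)
```

With these inputs the call is deep in the money, so its value sits only slightly above the lower bound S0 − K·e^(−rT).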

Option and volatility

In the Black–Scholes–Merton model, the value of a European call or put option is a monotonically increasing function of volatility. Higher volatility increases the probability of finishing in-the-money while losses remain limited to the option premium, resulting in a strictly positive vega (the first derivative of the option value with respect to volatility) for both calls and puts.

As volatility approaches zero, the option value converges to its intrinsic value, forming a lower bound. With increasing volatility, option values rise toward a finite upper bound equal to the underlying price for calls (and bounded by the strike for puts). An inflection point occurs where volga (the second derivative of the option value with respect to volatility) changes sign: at this point vega is maximized (at-the-money) and declines as the option becomes deep in- or out-of-the-money or as time to maturity decreases.

The upper limit and the lower limit for the call option value function are given below (Hull, 2015, Chapter 11).


Formula for upper and lower limits of the option price

Figure 3 below illustrates the value of a European call option as a function of the underlying asset’s price volatility. The example considers a European call written on the S&P 500 index, with a strike price of $5,000 and a time to maturity of 30 days. The current price of the underlying index is $6,000, and the risk-free interest rate is set at 3.79% corresponding to the 1-month U.S. Treasury yield. A deliberately wide (and economically unrealistic) range of volatility values is employed in order to highlight the theoretical limits of option prices: as volatility tends to infinity, the option value converges to an upper bound ($6,000 in our example), while as volatility approaches zero, the option value converges to a lower bound ($1,015.51).

Figure 3. Call option value as a function of price volatility
 Call option value as a function of price volatility
Source: computation by the author (BSM model).

Volatility: the unobservable parameter of the model

When we think of options, the basic equation to remember is “Option = Volatility”. Unlike stocks or bonds, options are not primarily quoted in monetary units (dollars or euros), but rather in terms of implied volatility, expressed as a percentage.

Volatility is not directly observable in financial markets. It is an unobservable (latent) parameter of the pricing model, inferred endogenously from observed option prices through an inversion of the valuation formula given by the BSM model. As a result, option markets are best interpreted as markets for volatility rather than markets for prices.

Out of the five essential parameters of the Black-Scholes-Merton model listed above, volatility is the only unobservable one, as it refers to the future fluctuations in the price of the underlying asset over the remaining life of the option. Since future volatility cannot be directly observed, practitioners invert the BSM model to estimate the market’s expectation of this volatility from option market prices; the result is referred to as implied volatility.

Implied Volatility

In practice, implied volatility is the volatility parameter that, when input into the Black-Scholes-Merton formula, yields the market price of the option; it represents the market’s expectation of future volatility.

Calculating Implied volatility

The BSM model maps five input variables (S, K, r, T, σ) to a single output: the call option value. For given values of S, K, r, and T, the option value is a strictly increasing function of σ, so the relationship can be inverted. When the market call option price CMarket is known, we use (S, K, r, T, CMarket) as inputs to solve for the implied volatility, σimplied.


Formula for implied volatility

Newton-Raphson Method

As there is no closed form solution to calculate implied volatility from the market price, we need a numerical method such as the Newton–Raphson method to compute it. This involves finding the volatility for which the Black–Scholes–Merton option value CBSM equals the observed market option price CMarket.

We define the function f as the difference between the call option value given by the BSM model and the observed market price of the call option:


Function for the Newton-Raphson method.

Where x represents the unknown variable (the implied volatility) to be found, and CMarket is treated as a constant in the Newton–Raphson method.

Using the Newton-Raphson method, we can iteratively estimate the root of the function, until the difference between two consecutive estimations is less than the tolerance level (ε).


Formula for the iterations in the Newton-Raphson method

In practice, the inflection point (Tankov, 2006) is taken as the initial guess. Although the function f(x) is monotonic, its derivative becomes extremely small for very large or very small volatility values (see Figure 3), causing the Newton–Raphson update step to overshoot the root and potentially diverge. Selecting the inflection point also minimizes approximation error, as the second derivative of the function at this point is approximately zero, while the first derivative remains non-zero.


Formula for calculating the volatility at the inflection point.

Where σinflection is the volatility at the inflection point.
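The iteration described above can be sketched as follows with only the standard library. The stopping rule (consecutive estimates closer than the tolerance) and the inflection-point initial guess follow the text; the analytical vega, dC/dσ = S0·φ(d1)·√T, is used for the derivative, and the sketch assumes ln(S0/K) + rT is non-zero so the initial guess is well defined:

```python
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    # Standard normal density
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bsm_call(S0, K, T, r, sigma):
    # Black-Scholes-Merton value of a European call
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(c_market, S0, K, T, r, tol=1e-10, max_iter=100):
    # Initial guess: volatility at the inflection point of the price-vol curve
    sigma = sqrt(2.0 * abs(log(S0 / K) + r * T) / T)
    for _ in range(max_iter):
        d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        vega = S0 * norm_pdf(d1) * sqrt(T)  # dC/d(sigma)
        step = (bsm_call(S0, K, T, r, sigma) - c_market) / vega
        sigma -= step                       # Newton-Raphson update
        if abs(step) < tol:                 # consecutive estimates are close
            break
    return sigma
```

Pricing an at-the-money call at σ = 20% and then inverting the resulting price recovers the 20% to high precision.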

Figure 4 below illustrates how implied volatility varies with the call option price for different values of the market price (computed using the Newton–Raphson method). As before, the example considers a European call written on the S&P 500 index, with a strike price of $5,000 and a time to maturity of 30 days. The current level of the underlying index is $6,000, and the risk-free interest rate is set at 3.79% corresponding to the 1-month U.S. Treasury yield.

Figure 4. Implied volatility vs. Call Option value
 Implied volatility as a function of call option price
Source: computation by the author.

You can download the Excel file provided below, which contains the calculations and charts illustrating the payoff function, the option price as a function of the underlying asset’s price, the option price as a function of volatility, and the implied volatility as a function of the option price.

Download the Excel file.

You can download the Python code provided below, to calculate the price of a European-style call or put option and calculate the implied volatility from the option market price (BSM model). The Python code uses several libraries.

Download the Python code to calculate the price of a European option.

Alternatively, you can download the R code below with the same functionality as in the Python file.

 Download the R code to calculate the price of a European option.

Why should I be interested in this post?

The seminal Black–Scholes–Merton model was originally developed to price European options. Over time, it has been extended to accommodate a wide range of derivatives, including those based on currencies, commodities, and dividend-paying stocks. As a result, the model is of fundamental importance for anyone seeking to understand the derivatives market and to compute implied volatility as a measure of risk.

Related posts on the SimTrade blog

   ▶ Akshit GUPTA Options

   ▶ Jayati WALIA Black-Scholes-Merton Option Pricing Model

   ▶ Jayati WALIA Implied Volatility

   ▶ Akshit GUPTA Option Greeks – Vega

Useful resources

Academic research

Black F. and M. Scholes (1973) The pricing of options and corporate liabilities. Journal of Political Economy, 81(3), 637–654.

Merton R.C. (1973) Theory of rational option pricing. The Bell Journal of Economics and Management Science, 4(1), 141–183.

Hull J.C. (2022) Options, Futures, and Other Derivatives, 11th Global Edition, Chapter 15 – The Black–Scholes–Merton model, 338–365.

Cox J.C. and M. Rubinstein (1985) Options Markets, First Edition, Chapter 5 – An Exact Option Pricing Formula, 165-252.

Tankov P. (2006) Calibration de Modèles et Couverture de Produits Dérivés (Model calibration and derivatives hedging), Working Paper, Université Paris-Diderot. Available at https://cel.hal.science/cel-00664993/document.

About the BSM model

The Nobel Prize Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 1997

Harvard Business School Option Pricing in Theory & Practice: The Nobel Prize Research of Robert C. Merton

Other

NYU Stern Volatility Lab Volatility analysis documentation.

About the author

The article was written in December 2025 by Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research assistant at ESSEC Business School).

   ▶ Discover all articles written by Saral BINDAL

The Private Equity Secondary Market: from liquidity mechanism to structural pillar

Adam MERALLI BALLOU

In this article, Adam MERALLI BALLOU (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026) introduces the Secondary Market in Private Equity.

Introduction

Over the past decade, the private equity secondary market has undergone a profound transformation. Originally conceived as a marginal liquidity outlet for constrained investors, it has progressively become a central component of private markets architecture, as it increasingly shapes how capital circulates within the private equity ecosystem rather than merely how it exits it. The secondary market now acts as a mechanism through which investors actively manage portfolio duration, smooth cash-flow profiles, and adjust exposure across vintages and strategies, while allowing General Partners to optimize asset holding periods in response to market conditions. This evolution reflects a broader shift away from a rigid, linear fund lifecycle toward a more dynamic and continuous model of capital allocation.

This transformation has accelerated in the recent cycle, as exit activity slowed materially and fund durations extended beyond initial expectations, amplifying the need for alternative liquidity and capital recycling solutions. According to the Preqin Secondaries in 2025 report and the William Blair Private Capital Advisory Secondary Market Report (2025), year 2025 is expected to mark a historical milestone, with global secondary transaction volumes reaching approximately $175bn, the highest level ever recorded. This surge reflects not only cyclical pressures on liquidity, but also a deeper structural shift in how private equity portfolios are managed, financed, and recycled across market cycles.

Global Secondary Market Volume
Global Secondary Market Volume
Source: William Blair.

This figure illustrates the rapid expansion of the global private equity secondary market. According to the William Blair Private Capital Advisory Secondary Market Report (2025), transaction volumes grew from $28bn in 2013 to $156bn in 2024, with $175bn projected for 2025. The increasing share of GP-led transactions highlights the growing role of secondary markets in addressing liquidity needs and exit constraints.

LP-led vs GP-led secondaries: complementary functions within the ecosystem

The secondary market is fundamentally organized around two distinct transaction types: LP-led and GP-led, each fulfilling a different economic function within the private equity ecosystem. LP-led transactions represent the original backbone of the market. In these deals, Limited Partners sell existing fund interests to obtain liquidity, rebalance their portfolios, or reduce exposure following overallocation to private equity. Data from the 2025 Preqin report shows that LP-led transactions tend to dominate in number, particularly during periods of market stress, as institutional investors respond to denominator effects, regulatory constraints, or liability-matching requirements. However, while LP-led transactions account for a high share of deal count, their relative weight in value terms has become more balanced. In 2024, LP-led secondaries represented roughly $80bn, or close to half of total market volume. GP-led transactions, by contrast, are initiated by the General Partner rather than by investors. In a GP-led secondary, the GP transfers one or several assets from an existing fund into a new vehicle. In 2024, GP-led transactions represented approximately $76bn in value, accounting for a share comparable to LP-led transactions despite being fewer in number, which reflects their significantly larger average deal sizes.

The explosion of continuation funds and the normalization of GP-led structures

Within the GP-led universe, the rapid rise of continuation funds stands out as the most consequential development of the past few years. Once viewed as exceptional restructuring tools for underperforming or illiquid assets, continuation funds have become mainstream instruments used to extend the ownership of high-quality portfolio companies. The Preqin report identifies 401 continuation funds launched between 2006 and 2025, with a striking acceleration after 2020. Of these, 340 funds are already closed, representing an aggregate capital base of approximately $182.7bn. In value terms, continuation funds now account for around 45–50% of total secondary market volume and nearly 80% of GP-led transactions. This expansion has been driven by a combination of prolonged exit timelines, improved governance standards, systematic use of third-party valuations, and stronger alignment mechanisms such as GP carry rollovers. The data confirms that continuation funds are no longer marginal or opportunistic structures, but rather standardized tools for managing asset life cycles and sustaining value creation beyond the constraints of traditional closed-end fund structures.

Capital concentration, pricing normalization, and the strategic role of secondaries

Beyond transaction structures, the scale of capital committed to the secondary market underscores its growing strategic importance. The William Blair report highlights that secondary-focused investors held more than $200bn of dry powder in 2024, equivalent to approximately 43% of total secondary AUM (Assets under Management), a proportion materially higher than that observed in private equity primaries. This accumulation of capital has enabled the execution of increasingly large and complex transactions and has supported a notable improvement in pricing conditions. In 2024, 91% of single-asset continuation fund transactions were priced at or above 90% of NAV (Net Asset Value, i.e. the estimated fair value of a fund’s underlying assets), while multi-asset continuation funds also saw a significant normalization in discounts. At the same time, performance data from Preqin indicates that secondaries continue to offer a differentiated risk-return profile, characterized by lower dispersion of outcomes and faster cash-flow generation relative to primary funds. In an environment marked by distribution scarcity and heightened uncertainty, these characteristics help explain why the secondary market has moved from a peripheral liquidity solution to a structural stabilizer of the private equity ecosystem.

Why should I be interested in this post?

As the private equity secondary market reached record transaction volumes of around $156bn in 2024 and could grow to nearly $300bn by 2030, understanding its mechanics has become essential for anyone interested in private markets. This post provides a data-driven explanation of LP-led and GP-led transactions and highlights why continuation funds now account for a large share of secondary activity. These structures are central to liquidity management, portfolio rebalancing, and capital recycling in a constrained exit environment. The different industry reports used in this analysis can be found in the “Useful information” section below.

Related posts on the SimTrade blog

   ▶ All posts about Private Equity

   ▶ Emmanuel CYROT Deep Dive into evergreen funds

   ▶ Lilian BALLOIS Discovering Private Equity: Behind the Scenes of Fund Strategies

   ▶ Adam MERALLI BALLOU My internship experience in Investor Relation at Eurazeo

Useful resources

William Blair (March 2025) William Blair Private Capital Advisory: 2025 Secondary Market Report

Preqin (June 2025) Secondaries in 2025

About the author

The article was written in December 2025 by Adam Meralli Ballou (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026).

   ▶ Read all articles by Adam MERALLI BALLOU.

DCF vs. Multiples: Why Different Valuation Methods Lead to Different Results

Cornelius HEINTZE

In this article, Cornelius HEINTZE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025) explains how the usage of different valuation methods can lead to different outcomes and how to use them.

Why this is important

In finance, valuation is ever-present, and it is not merely a mechanical exercise. Analysts work with discounted cash flow models, multiples, or other procedures to value a company. These different methods lead to different outcomes, and it is crucial to understand why the differences occur and whether they are in line with your expectations. This helps avoid misleading conclusions drawn from relying on a single method or from struggling to interpret several methods at once.

The DCF model: measuring intrinsic value

The discounted cash flow (DCF) model aims to measure the intrinsic value of a company. It does this by forecasting the expected future cash flows generated by the company and discounting them back to the present using a discount rate that reflects the company’s specific risk. The goal is to estimate the equity value of the company. The discount rate is either the WACC (weighted-average cost of capital) or the cost of equity, depending on the variant you use. The first variant estimates the enterprise value, which represents the value of the whole company, financed by both debt and equity: you discount the free cash flows to the firm at the WACC and then subtract net debt to arrive at the equity value. The second variant reaches the equity value directly: you discount the cash flows attributable to equity holders (i.e., after debt-related flows) at the cost of equity, which can be estimated with the CAPM.

DCF logic (simplified):

  • Explicit forecast period: Forecast cash flows CFt for years t = 1 … T and discount them at rate r.
  • Terminal value: Estimate the value beyond year T using a stable long-term assumption. This is usually modeled as a perpetuity, which can include a growth factor if that aligns with the assumptions about the company.

Formula (illustrative):

Value = Σt=1…T CFt / (1 + r)^t + Terminal Value / (1 + r)^T

This formula differs depending on which variant of the DCF model you are using. If you discount the cash flows at the WACC, you obtain the enterprise value of the company, not the equity value. To get to the equity value, you then subtract net debt.

Equity value using WACC = (Σt=1…T CFt / (1 + WACC)^t + Terminal Value / (1 + WACC)^T) – Net debt

If you instead use the cash flows attributable to the company’s equity and discount them at the cost of equity, you automatically end up with the equity value.

Equity value = Σt=1…T CFt,equity / (1 + requity)^t + Terminal Value / (1 + requity)^T
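As a quick illustration, the two DCF routes can be sketched in a few lines of Python. All inputs below (cash flows, WACC, growth rate, net debt) are hypothetical numbers invented for the example, not figures from any real company.

```python
# Minimal DCF sketch with hypothetical numbers: the same firm valued via
# the enterprise-value route (free cash flows to the firm discounted at
# the WACC, then net debt subtracted to reach the equity value).

def pv(cash_flows, rate):
    """Present value of a list of cash flows, the first received in year 1."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def dcf_value(cash_flows, rate, terminal_growth):
    """Explicit forecast period plus a growing-perpetuity terminal value."""
    T = len(cash_flows)
    terminal_value = cash_flows[-1] * (1 + terminal_growth) / (rate - terminal_growth)
    return pv(cash_flows, rate) + terminal_value / (1 + rate) ** T

# Hypothetical inputs
fcff = [100, 110, 120]   # free cash flows to the firm over the forecast period
wacc = 0.08
g = 0.02                 # long-term growth in the terminal value
net_debt = 400

enterprise_value = dcf_value(fcff, wacc, g)
equity_value_indirect = enterprise_value - net_debt
```

The same `dcf_value` function would serve the equity route by passing free cash flows to equity and the cost of equity instead, without subtracting net debt.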

Strengths of the DCF

You can already see that there are differences within a single model that need to be understood. In practice, the enterprise-value variant is widely used because of its fundamental strength: its simplicity and convenience. It is easy to follow, and you can see how different assumptions affect the firm value in different ways. It therefore forces the analyst to evaluate and model the key drivers of value, such as the growth rate, investments in working capital, and the risk the company currently faces. As a result, DCF valuations are often used for long-term strategic decisions, mergers and acquisitions, and fairness opinions.

Weaknesses of the DCF

The major weakness of DCF models lies in their assumptions. Forecasts are typically anchored in historical values, and the discount rate often rests on the CAPM; neither is a reliable guide to the future. Yet, in the absence of a better way to project future cash flows, the method remains dominant in practice despite its empirical shortcomings. The resulting valuation is also very sensitive to the assumptions made, especially the growth rate and the discount rate, whose effects compound over time.

Multiples valuation: estimating relative value

Turning now to multiples-based valuation, this method focuses on the relative value of a company rather than its intrinsic value. The firm is compared to similar companies using key ratios such as:

  • Price-to-Earnings (P/E)
  • Enterprise Value to EBITDA (EV/EBITDA)
  • Enterprise Value to Sales (EV/Sales)

The process of choosing and working with multiples is straightforward. There are two main approaches: the comparable public companies method, which builds multiples from data on similar companies that are publicly traded on a stock market, and the precedent transactions method, which looks at the prices paid in recent acquisitions of similar companies. As the first method’s name indicates, the chosen companies must be similar to the company being valued. You can achieve this by screening on the size of the company, its industry, its location, or other features, and specifying values for these criteria (e.g., number of employees, pharmacy, Germany).
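A minimal sketch of the comparable-companies logic, using invented peer figures: derive a median EV/EBITDA multiple from the peer group and apply it to the target’s EBITDA. Peer names and all numbers are hypothetical.

```python
# Comparable-companies sketch: median EV/EBITDA from hypothetical peers,
# applied to a hypothetical target's EBITDA to imply an enterprise value.
import statistics

peers = [
    {"name": "Peer A", "ev": 1200, "ebitda": 150},
    {"name": "Peer B", "ev": 900,  "ebitda": 100},
    {"name": "Peer C", "ev": 2000, "ebitda": 250},
]

# Compute each peer's EV/EBITDA multiple and take the median,
# which is less sensitive to outliers than the mean
multiples = [p["ev"] / p["ebitda"] for p in peers]
median_multiple = statistics.median(multiples)

# Apply the peer multiple to the target's EBITDA
target_ebitda = 180
implied_ev = median_multiple * target_ebitda
```

The median (rather than the mean) is a common choice in practice because one mispriced or poorly chosen peer would otherwise distort the whole result.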

Implicit assumptions behind multiples

Although multiples are often perceived as simpler ways of valuing a company, they embed the same fundamental assumptions as a DCF model, albeit in a less transparent way.

A valuation multiple implicitly reflects:

  • Expected growth
  • Risk and discount rates
  • Capital structure
  • Profitability and reinvestment needs

For example, a high EV/EBITDA multiple usually signals that the market expects strong future growth or low risk. In other words, the market has already performed a form of discounted cash flow analysis — but the assumptions are hidden inside the multiple.

Strengths of multiples

Multiples are an easy way to get an overview of a company’s value and to compare the estimate to other companies on the market. They can also be used to quickly check the plausibility of a firm value estimated with a DCF model. The main strength is again simplicity, but with much greater speed and ease. Multiples are used to compare the company to competitors and to give insights into how it would perform against them and on the stock market. They are also very helpful when valuing smaller companies, which might not have the organized historical data needed for a DCF valuation.

Weaknesses of multiples

One of the biggest weaknesses is the requirement of finding a similar company that is traded on the stock market, or whose information is publicly available. Multiples can also be manipulated easily, because they are less transparent than other methods and the peer group can be adjusted at will (what counts as “similar”?). Nor can they be seen as fully objective values, since they rest on market prices and on the analyst’s choice of peers. They are therefore not consistent across situations and have to be used with care. You should always make plausible assumptions that can be justified by the multiple and the current situation of the company.

When to use them and the “football field”

To really understand the combined use of multiples and the DCF model, let us look at how to bring them together in a meaningful way. Multiples are often used to create a “football field”: a graph that summarizes valuation ranges across methods rather than delivering a single point estimate. This is especially helpful when negotiating an M&A deal, to see whether the offered prices are aligned with your assumptions and whether you want to accept.
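The football-field idea can be sketched in a few lines of Python. The valuation ranges and offer price below are purely hypothetical, chosen only to show the mechanics of comparing an offer against several method ranges.

```python
# "Football field" sketch: one low/high valuation range per method
# (hypothetical numbers), checked against a hypothetical offer price.

ranges = {
    "DCF (base vs. upside)":  (1600, 2000),
    "EV/EBITDA comparables":  (1700, 2100),
    "Precedent transactions": (1800, 2300),
}

offer = 2050

# For each method, record whether the offer falls inside its range
verdicts = {method: low <= offer <= high for method, (low, high) in ranges.items()}

for method, (low, high) in ranges.items():
    flag = "inside" if verdicts[method] else "outside"
    print(f"{method:26s} {low:>5} - {high:>5}  (offer {flag} range)")
```

In a real fairness opinion the ranges would of course come from full DCF and multiples analyses; here the point is only that an offer sitting inside (or beyond) most ranges supports a “fair” conclusion.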

A good example of combining the DCF model and multiples is the acquisition of Actelion by Johnson & Johnson. To assess whether the offer was acceptable and fair, valuation professionals from Alantra were hired. Alantra gathered data and estimated multiples to benchmark the offer, presenting the results in a “football field” graph to make them visually intuitive. The green line (the offer) lies to the right of the red line, and since the further right a value sits on the graph, the higher it is, the offer can be seen as fair relative to the other values estimated by Alantra in their fairness opinion for Actelion.

Alantra Fairness Opinion example

You can download the full fairness opinion here

DCF vs. Multiples Example

You can download the Excel file provided below, which contains the “Football field” example.

 Download the Excel file for the football field DCF and Multiples valuation methods

You can see here that the estimated value from the DCF is more on the lower end than the market consensus. This is not necessarily a problem, as the market might already have priced in synergies or future events that the DCF model did not capture, or simply does not expect due to a lack of information (insiders, etc.).

As you can see, rather than choosing between DCF and multiples, practitioners usually apply both approaches in a complementary way:

  • DCF models are well suited for estimating intrinsic value and analyzing long-term fundamentals.
  • Multiples are useful for understanding how the market currently prices similar firms.
  • In IPOs and M&A transactions, both methods are typically combined to form a valuation range.

A robust valuation rarely relies on a single number. Instead, it emerges from comparing and reconciling different approaches.

Conclusion

DCF and multiples-based valuation often lead to different results because they answer different questions. DCF models aim to estimate intrinsic value based on explicit assumptions, while multiples reflect relative value and prevailing market expectations.

Recognizing the strengths and limitations of each method is essential for sound financial analysis. By combining both approaches and critically assessing their underlying assumptions, analysts can arrive at more balanced and informative valuation outcomes.

To sum up…

Both DCF and multiples are useful tools, but neither should be applied mechanically. A solid valuation comes from understanding what each method captures, where it can mislead, and how results change when assumptions or peer groups change. In practice, triangulating across methods provides the most reliable foundation for decision-making.

Why should I be interested in this post?

For a student interested in business and finance, this post provides a concrete bridge between theory and practice. Valuation models such as the two-stage DCF are not only central to courses in corporate finance, but also widely used in internships, case interviews, and real-world transactions. Understanding how sensitive firm values are to assumptions on growth and discount rates helps students critically assess valuation outputs rather than taking them at face value, and prepares them for practical applications in consulting, investment banking, or asset management.

Related posts on the SimTrade blog

   ▶ All posts about financial techniques

   ▶ Jorge KARAM DIB Multiples valuation method for stocks

   ▶ Andrea ALOSCARI Valuation methods

   ▶ Samuel BRAL Valuing the Delisting of Best World International Using DCF Modeling

   ▶ Cornelius HEINTZE The effect of a growth rate in DCF

Useful resources

Paul Pignataro (2022) Financial Modeling and Valuation: A Practical Guide to Investment Banking and Private Equity, Wiley, second edition.

Aswath Damodaran (2015) Explanations on Multiples

About the author

The article was written in December 2025 by Cornelius HEINTZE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025).

   ▶ Read all articles by Cornelius HEINTZE.

Risk-based Audit: From Risks to Assertions to Audit Procedures

Iris ORHAND

In this article, Iris ORHAND (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026) shares a technical article about risk-based audit.

Introduction

Financial statements are not audited by “checking everything”. In practice, auditors use a risk-based approach: they identify what could materially go wrong, link those risks to specific financial statement assertions, and then design the right audit procedures to obtain sufficient and appropriate evidence. “Materially” means that an error or omission is significant enough to influence the decisions of users of the financial statements, meaning it has a real impact on how the financial information is interpreted.

This article explains a simple but powerful framework widely used in audit: Risks→Assertions→Procedures. It’s the logic I applied during my experience in financial audit at EY, where this methodology helps teams prioritize work, structure fieldwork, and produce clear conclusions.

The audit risk model: why “risk-based” makes sense

At a high level, auditors aim to reduce the risk of issuing an inappropriate opinion. A classic way to express this is:

Audit Risk (AR) = Inherent Risk (IR) × Control Risk (CR) × Detection Risk (DR)

  • Inherent risk (IR): the risk a material misstatement exists before considering controls (complexity, estimates, judgment, volatile business, etc.).
  • Control risk (CR): the risk that internal controls fail to prevent or detect a misstatement.
  • Detection risk (DR): the risk that audit procedures fail to detect a misstatement that exists.

In practice, when IR and/or CR are high, auditors respond by lowering DR through stronger procedures: more evidence, better targeting, larger samples, more reliable sources, and more experienced review.
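The audit risk model can be turned into a small numeric sketch (the risk levels below are hypothetical): given a target audit risk and the assessed inherent and control risks, the auditor backs out the detection risk the procedures must achieve.

```python
# Audit risk model sketch: AR = IR x CR x DR, rearranged to solve for the
# detection risk implied by a target audit risk (hypothetical values).

def required_detection_risk(target_ar, ir, cr):
    """DR = AR / (IR x CR): the lower the result, the more audit evidence needed."""
    return target_ar / (ir * cr)

# High inherent and control risk -> detection risk must be driven very low
dr_high = required_detection_risk(target_ar=0.05, ir=1.0, cr=0.8)

# Lower assessed risks -> more detection risk can be tolerated
dr_low = required_detection_risk(target_ar=0.05, ir=0.5, cr=0.4)
```

The contrast between the two results mirrors the point above: when IR and/or CR are high, the allowable DR shrinks, which in practice means larger samples and stronger procedures.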

Materiality: focusing on what matters

Because financial statement users care about decisions, audit planning relies on materiality (and performance materiality) to size the work. Materiality helps answer:

  • What could influence users’ decisions?
  • Which line items/disclosures require deeper work?
  • What magnitude of error becomes unacceptable?

This is also why “risk-based” is essential: the audit effort is scaled to what is material and risky, not what is merely easy to test.

Assertions: translating accounting lines into “what could be wrong”

Assertions are management’s implicit claims behind each number. Auditors use them to define the nature of possible misstatements. The most common are:

  • Existence / Occurrence: the asset/revenue is real and actually happened
  • Completeness: nothing important is missing
  • Rights & obligations: the entity truly owns/owes it
  • Valuation / Accuracy: amounts are measured correctly (estimates, provisions…)
  • Cut-off: recorded in the correct period
  • Presentation & disclosure: correctly described and disclosed

This is a key step: a “risk” becomes actionable only when you connect it to one (or several) assertions.

From risk to procedures: the core workflow

A practical “risk-based audit” workflow looks like this:

  • First: identify significant risks (business model, incentives, complexity, unusual transactions, estimates, prior-year issues).
  • Second: map each risk to assertions (e.g., revenue fraud risk → occurrence, cut-off).
  • Third: choose the response: 1) tests of controls (TOC) if relying on internal controls; 2) substantive tests (analytical procedures + tests of details).
  • Finally: execute, document, conclude: evidence must be sufficient, appropriate, and consistent.
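The Risks → Assertions → Procedures workflow can also be sketched as a plain data structure. The entries below simply restate examples from this article; a real audit file would carry far more detail per risk.

```python
# Sketch of the Risks -> Assertions -> Procedures mapping as a dictionary.
# A risk becomes actionable only once it is linked to assertions, which in
# turn drive the choice of procedures.

audit_plan = {
    "revenue fraud risk": {
        "assertions": ["occurrence", "cut-off"],
        "procedures": ["analytical review", "cut-off testing", "tests of details"],
    },
    "inventory obsolescence": {
        "assertions": ["existence", "valuation"],
        "procedures": ["count attendance", "NRV/obsolescence analysis"],
    },
}

def assertions_for(risk):
    """Return the assertions a given risk maps to."""
    return audit_plan[risk]["assertions"]
```

Structuring the plan this way makes the logic reviewable: every procedure can be traced back through an assertion to an identified risk.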

Concrete examples: what we do in practice

Example 1: Revenue recognition

Typical risks: overstated revenue, early recognition, fictitious sales, side agreements. Key assertions: occurrence, cut-off, accuracy, presentation.

Common procedures:

  • Analytical review (trends, margins, monthly patterns) to spot anomalies
  • Cut-off testing around year-end (invoices, delivery notes, contracts)
  • Tests of details on samples (supporting documents, customer confirmations when relevant)
  • Review of revenue recognition policy and contract terms (IFRS 15 logic, performance obligations)

Example 2: Inventory (valuation and existence)

Typical risks: obsolete stock, wrong costing, missing inventory, poor count controls. Key assertions: existence, valuation, completeness, rights.

Common procedures:

  • Attendance/observation of physical inventory count
  • Reconciliation count-to-ERP, and ERP-to-FS
  • Price testing, cost build-up testing, NRV/obsolescence analysis
  • Movement testing and cut-off around receiving/shipping

Example 3: Provisions & estimates (judgment-heavy)

Provisions and estimates refer to amounts recorded in the accounts for obligations or future events that are uncertain but likely enough to require recognition, which means management must use judgment to estimate their value based on the best information available.

Typical risks: management bias, under/over provisioning, inconsistent assumptions. Key assertions: valuation, completeness, presentation.

Common procedures:

  • Understanding process + key assumptions and governance
  • Back-testing prior-year estimates vs actual outcomes
  • Sensitivity analysis on assumptions (rates, volumes, timelines)
  • Lawyer letters / review of claims, contracts, contingencies

Conclusion

Risk-based audit is more than a buzzword: it’s the method that turns financial statement complexity into a structured plan. By linking risks to specific assertions, auditors can design procedures that are both efficient and defensible, especially under time pressure and tight deadlines.

Why should I be interested in this post?

If you are interested in audit, accounting, corporate finance, or risk, understanding the risk-based approach is foundational. It explains how auditors prioritize, how they challenge information, and why audit work is ultimately about building confidence in financial reporting through evidence.

Related posts on the SimTrade blog

Professional experiences

   ▶ Posts about Professional experiences

   ▶ Iris ORHAND My apprenticeship experience as a Junior Financial Auditor at EY

   ▶ Iris ORHAND My apprenticeship experience as an Executive Assistant in Internal Audit (Inspection Générale) at Bpifrance

   ▶ Annie Yeung My Audit Summer Internship experience at KPMG

   ▶ Mahe Ferret My internship at NAOS – Internal Audit and Control

Useful resources

Site economie.gouv Méthodologie de conduite d’une mission d’audit interne

Site L-expert-comptable.com (25/02/2025) La méthodologie d’audit : Les assertions

Corcentric Les étapes clefs d’un processus d’audit comptable et financier

Cabinet Narquin & Associés Les méthodes d’audit utilisées par les commissaires aux comptes

About the author

The article was written in December 2025 by Iris ORHAND (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026).

   ▶ Read all articles by Iris ORHAND

Historical Volatility

Saral BINDAL

In this article, Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research Assistant at ESSEC Business School) explains the concept of historical volatility used in financial markets to represent and measure the changes in asset prices.

Introduction

Volatility in financial markets refers to the degree of variation in an asset’s price or returns over time. Simply put, an asset is considered highly volatile when its price experiences large upward or downward movements, and less volatile when those movements are relatively small. Volatility plays a central role in finance as an indicator of risk and is widely used in various portfolio and risk management techniques.

In practice, the concept of volatility can be operationalized in different ways: historical volatility and implied volatility. Traders and analysts use historical volatility to understand an asset’s past performance and implied volatility as a forward-looking measure of upcoming uncertainties in the market.

Historical volatility measures the actual variability of an asset’s price over a past period, calculated as the standard deviation of its historical returns. Computed over different periods (say a month), historical volatility allows investors to identify trends in volatility and assess how an asset has reacted to market conditions in the past.

Practical Example: Analysis of the S&P 500 Index

Let us consider the S&P 500 index as an example of the calculation of volatility.

Prices

Figure 1 below illustrates the daily closing price of the S&P 500 index over the period from January 2020 to December 2025.

Figure 1. Daily closing prices of the S&P 500 index (2020-2025).
Daily closing prices of the S&P 500 Index (2020-2025)
Source: computation by the author.

Returns

Returns are the percentage gain or loss on the asset’s investment and are generally calculated using one of two methods: arithmetic (simple) or logarithmic (continuously compounded).


Arithmetic return: Ri = (Pi – Pi-1) / Pi-1
Logarithmic return: Ri = ln(Pi / Pi-1)

Where Ri represents the rate of return, and Pi denotes the asset’s price at a given point in time.

The preference for logarithmic returns stems from their property of time-additivity, which simplifies multi-period calculations (the monthly log return is equal to the sum of the daily log returns of the month, which is not the case for arithmetic returns). Furthermore, logarithmic returns align with the geometric mean and thereby mathematically capture the effects of compounding, unlike arithmetic returns, which can overstate performance in volatile markets.
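A short Python sketch, using an invented price series, illustrates the time-additivity of logarithmic returns versus arithmetic returns:

```python
# Arithmetic vs logarithmic returns on a hypothetical price series:
# daily log returns sum exactly to the whole-period log return,
# while daily arithmetic returns do not.
import math

prices = [100.0, 105.0, 99.75, 104.7375]

arith = [(prices[i] / prices[i - 1]) - 1 for i in range(1, len(prices))]
logs  = [math.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]

# Time-additivity: the sum of daily log returns equals the period log return
total_log = math.log(prices[-1] / prices[0])
assert abs(sum(logs) - total_log) < 1e-12

# Arithmetic returns are not additive: the sum overstates the true period return
total_arith = prices[-1] / prices[0] - 1
gap = sum(arith) - total_arith   # positive: compounding is ignored by the sum
```

Here the daily arithmetic returns are +5%, −5%, +5%, which sum to +5%, while the true period return is only +4.74%; the log returns show no such discrepancy.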

Distribution of returns

A statistical distribution describes the likelihood of different outcomes for a random variable. It begins with classifying the data as either discrete or continuous.

Figure 2 below illustrates the distribution of daily returns for S&P 500 index over the period from January 2020 to December 2025.

Figure 2. Historical distribution of daily returns of the S&P 500 index (2020-2025).
Historical distribution of daily returns of the S&P 500 index (2020-2025)
Source: computation by the author.

Standard deviation of the distribution of returns

In real life, as we do not know the mean and standard deviation of returns, these parameters have to be estimated with data.

The estimator for the mean μ, denoted by μ̂, and the estimator for the variance σ2, denoted by σ̂2, are given by the following formulas:


μ̂ = (1/n) Σi=1…n Ri
σ̂2 = (1/(n–1)) Σi=1…n (Ri – μ̂)^2

With the following notations:

  • Ri = rate of return for the ith day
  • μ̂ = estimated mean of the data
  • σ̂2 = estimated variance of the data
  • n = total number of days for the data

These estimators are unbiased and efficient (note Bessel’s correction in the variance estimator, where we divide by (n–1) instead of n).


Unbiased estimators of the mean and variance

For the distribution of returns in Figure 2, the mean and standard deviation calculated using the formulas above are 0.049% and 1.068%, respectively (in daily units).

Annualized volatility

As the usual time frame for humans is the year, volatility is often annualized. To obtain annual (or annualized) volatility, we scale the daily volatility by the square root of the number of trading days in a year (τ), as shown below.


σannual = σdaily × √τ

Where τ is the number of trading days during the calendar year.

In the U.S. equity market, the annual number of trading days typically ranges from 250 to 255 (252 trading days in 2025). This variation reflects the holiday calendar: when a holiday falls on a weekday, the exchange closes; when it falls on a weekend, trading is unaffected. In contrast, the cryptocurrency market has as many trading days as there are calendar days in a year, since it operates continuously, 24/7.
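The annualization rule can be checked with the article’s own daily figure (a daily standard deviation of 1.068%):

```python
# Annualization sketch: scale the daily volatility from the article
# by the square root of 252 trading days.
import math

daily_vol = 0.01068        # 1.068% daily standard deviation (from the article)
trading_days = 252

annual_vol = daily_vol * math.sqrt(trading_days)
print(f"annualized volatility: {annual_vol:.2%}")
```

This reproduces the annualized volatility of roughly 16.95% reported for the S&P 500 index below.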

For the S&P 500 index over the period from January 2020 to December 2025, the annualized volatility is given by


σannual = 1.068% × √252 ≈ 16.95%

Annualized mean

The calculated mean for the 5-year S&P 500 logarithmic returns is also the daily average return for the period. The annualized average return is given by the formula below.


μannual = (1 + μ̂)^τ – 1

Where τ is the number of trading days during the calendar year.

For the S&P 500 index over the period from January 2020 to December 2025, the annualized average return is given by


Annualized mean formula

If the value of daily average return is much less than 1, annual average return can be approximated as


μannual ≈ τ × μ̂

Application: Estimating the Future Price Range of the S&P 500 index

To develop an intuitive understanding of these figures, we can estimate the one-standard-deviation price range for the S&P 500 index over the next year. From the above calculations, we know that the annualized mean return is 12.534% and the annualized standard deviation is 16.953%.

Under the assumption of normally distributed logarithmic returns, we can say approximately with 68% confidence that the value of S&P 500 index is likely to be in the range of:


Upper and lower limits

If the current value of the S&P 500 index is $6,830, then converting these return estimates into price levels gives:


Upper and lower price limits

Based on a 68% confidence interval, the S&P 500 index is likely to trade in the range of $6,526 to $8,838 over the next year.

Historical Volatility

Historical volatility represents the variability of an asset’s returns over a chosen lookback period. The annualized historical volatility is estimated using the formula below.


σannual = √τ × √[ (1/(n–1)) Σi=1…n (Ri – μ̂)^2 ]

With the following notations:

  • σ = Standard deviation
  • Ri = Return
  • n = total number of trading days in the period (21 for 1 month, 63 for 3 months, etc.)
  • τ = Number of trading days in a calendar year

Volatility calculated over different periods must be annualized to a common timeframe to ensure comparability, as the standard convention in finance is to express volatility on an annual basis. Therefore, when working with daily returns, we annualize the volatility by multiplying it by the square root of 252.

For example, for the S&P 500 index, the annualized historical volatilities over the last 1 month, 3 months, and 6 months, computed on December 3, 2025, are 14.80%, 12.41%, and 11.03%, respectively. Since the short-term (1-month) volatility is higher than the medium-term (3-month) and longer-term (6-month) volatilities, recent market movements have been turbulent compared with the past few months; and because of volatility clustering, periods of high volatility often persist, suggesting that this elevated turbulence may continue in the near term.

Unconditional Volatility

Unconditional volatility is a single volatility number computed from all available historical data, which in our example is the entire five years of data. It does not account for the fact that recent market behavior is more relevant for predicting tomorrow’s risk than events from past years, even though volatility clearly changes over time. It is frequently observed that after a sudden boom or crash, once the storm passes, volatility tends to revert towards a constant level, and that level is given by the unconditional volatility of the entire period. This tendency is referred to as mean reversion.

For instance, using S&P 500 index data from 2020 to 2025, the unconditional volatility (annualized standard deviation) is calculated to be 16.952%.

Rolling historical volatility

A single volatility number often fails to capture changing market regimes. Therefore, a rolling historical volatility is usually generated to track the evolution of market risk. By calculating the standard deviation over a moving window, we can observe how volatility has expanded or contracted historically. This is illustrated in Figure 3 below for the annualized 3-month historical volatility of the S&P 500 index over the period 2020-2025.

Figure 3. 3-month rolling historical volatility of the S&P500 index (2020-2025).
3-month rolling historical volatility of the S&P500 index
Source: computation by the author.

In Figure 3, the 3-month rolling historical volatility is plotted along with the unconditional volatility computed over the entire period, calculated using overlapping windows to generate a continuous series. This provides a clear historical perspective, showcasing how the asset’s volatility has fluctuated relative to its long-term average.

For example, during the start of Russia–Ukraine war (February 2022 – August 2022), a noticeable jump in volatility occurred as energy and food prices surged amid fears of supply chain disruptions, given that Russia and Ukraine are major exporters of oil, natural gas, wheat, and other commodities.

The rolling window can be either overlapping or non-overlapping, resulting in continuous or discrete graphs, respectively. Overlapping windows shift by one day, creating a smooth and continuous volatility series, whereas non-overlapping windows shift by one time period, producing a discrete series.
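A minimal pure-Python sketch of an overlapping rolling volatility, applied here to a synthetic return series rather than actual S&P 500 data (the mean and standard deviation used to generate the series are invented):

```python
# Overlapping rolling volatility: a 63-day (~3-month) window sliding one
# day at a time over a daily return series, annualized with sqrt(252).
import math
import random

def rolling_vol(returns, window, trading_days=252):
    """Annualized sample standard deviation over each overlapping window."""
    out = []
    for i in range(window, len(returns) + 1):
        chunk = returns[i - window:i]
        mean = sum(chunk) / window
        var = sum((r - mean) ** 2 for r in chunk) / (window - 1)  # Bessel's correction
        out.append(math.sqrt(var) * math.sqrt(trading_days))
    return out

# Synthetic daily returns standing in for real index data
random.seed(0)
rets = [random.gauss(0.0005, 0.011) for _ in range(300)]

vols = rolling_vol(rets, window=63)
# 300 observations with a 63-day overlapping window give 300 - 63 + 1 = 238 points
```

A non-overlapping version would instead step `i` forward by `window` each time, producing the discrete series described above.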

You can download the Excel file provided below, which contains the computation of returns, their historical distribution, the unconditional historical volatility, and the 3-month rolling historical volatility of the S&P 500 index used in this article.

Download the Excel file for returns and volatility calculation

You can download the Python code provided below, which contains the computation of returns, first four moments of the distribution, and experiment with the x-month rolling historical volatility function to visualize the evolution of historical volatility over time.

Download the Python code for returns and volatility calculation.

Alternatively, you can download the R code below with the same functionality as in the Python file.

Download the R code for returns and volatility calculation.

Alternative measures of volatility

We now mention a few other ways volatility can be measured: Parkinson volatility, implied volatility, ARCH models, and stochastic volatility models.

Parkinson volatility

The Parkinson model (1980) uses the highest and lowest prices observed during a given period (say a day) to measure volatility. It is a high-low volatility measure, based on the range between the maximum and minimum prices observed during that period.

Parkinson volatility is a range-based variance estimator that replaces squared returns with the squared high–low log price range, scaled to remain unbiased. It assumes a driftless geometric Brownian motion (expected growth rate of the stock price equal to zero) and is about five times more efficient than the close-to-close estimator because it accounts for the fluctuation of the stock price within the day.

For a sample of n observations (say days), the Parkinson volatility is given by


Parkinson Volatility formula

where:

  • Ht is the highest price on period t
  • Lt is the lowest price on period t
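A minimal Python sketch of the Parkinson estimator, following the formula above; the high and low prices below are made-up numbers, purely for illustration:

```python
import numpy as np

def parkinson_volatility(high, low, trading_days=252):
    """Parkinson (1980) range-based volatility estimator, annualized.

    high, low: arrays of per-period highest and lowest prices."""
    high = np.asarray(high, dtype=float)
    low = np.asarray(low, dtype=float)
    hl_squared = np.log(high / low) ** 2
    # Unbiased per-period variance: mean of squared log ranges over 4*ln(2)
    period_var = hl_squared.mean() / (4.0 * np.log(2.0))
    return np.sqrt(period_var * trading_days)

# Toy example with daily highs and lows (illustrative numbers, not market data)
highs = [101.5, 102.0, 100.8, 103.2, 102.5]
lows  = [ 99.8, 100.6,  99.5, 101.0, 100.9]
print(f"Annualized Parkinson volatility: {parkinson_volatility(highs, lows):.2%}")
```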

Implied volatility

Implied Volatility (IV) is the level of volatility for the underlying asset that, when plugged into an option pricing model such as Black–Scholes–Merton, makes the model’s theoretical option price equal to the option’s observed market price.

It is a forward-looking measure because it reflects the market’s expectation of how much the underlying asset’s price is likely to fluctuate over the remaining life of the option, rather than how much it has moved in the past.
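Since the Black–Scholes–Merton price of a call is increasing in volatility, implied volatility can be recovered numerically, for example by bisection. A minimal self-contained sketch in Python (function names and parameter values are our assumptions, for illustration only):

```python
import math

def bs_call_price(S, K, T, r, sigma):
    """Black–Scholes–Merton price of a European call on a non-dividend-paying stock."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_volatility(market_price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Volatility that equates the model price to the observed price (bisection)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call_price(S, K, T, r, mid) > market_price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Sanity check: recover a 20% volatility from its own model price
price = bs_call_price(S=100, K=100, T=1.0, r=0.02, sigma=0.20)
print(round(implied_volatility(price, S=100, K=100, T=1.0, r=0.02), 4))
```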

The Chicago Board Options Exchange (CBOE), a leading global financial exchange operator, provides implied volatility indices such as the VIX, which measures the 30-day expected volatility of the S&P 500 implied by SPX options, and the Implied Correlation Index. These indices are used by traders to gauge market fear, speculate via futures, options and ETPs, hedge equity portfolios, and manage risk during volatility spikes.

ARCH model

Autoregressive Conditional Heteroscedasticity (ARCH) models address time-varying volatility in time series data. Introduced by Engle in 1982, ARCH models use the size of past shocks to estimate how volatile the next period is likely to be: if recent movements were big, the model expects higher volatility; if they were small, it expects lower volatility, consistent with the idea of volatility clustering. Originally applied to inflation data, this model has been widely used to model financial data.

ARCH models capture volatility clustering, an observation about how volatility behaves in the short term: a large movement is usually followed by another large movement, so volatility is partly predictable over short horizons. This is also why historical volatility gives a short-term hint of near-future changes in the market: recent turbulence often persists.

Generalized Autoregressive Conditional Heteroscedasticity (GARCH), proposed by Bollerslev in 1986 as a refinement of Engle’s work, extends ARCH by also using past predicted volatility, not just past shocks. Both methods forecast volatility more accurately than the measures discussed earlier because they account for the time-varying nature of volatility.
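As a sketch of the idea (a simulation, not a fitted model), the following Python snippet generates returns from a GARCH(1,1) process and checks that squared returns are positively autocorrelated, the signature of volatility clustering. All parameter values are illustrative assumptions:

```python
import numpy as np

def simulate_garch(n=2000, omega=1e-6, alpha=0.10, beta=0.85, seed=42):
    """Simulate a GARCH(1,1): sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}.

    alpha + beta < 1 guarantees a finite unconditional variance."""
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    sigma2 = np.empty(n)
    sigma2[0] = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return r, np.sqrt(sigma2)

returns, cond_vol = simulate_garch()
# Volatility clustering shows up as positive autocorrelation of squared returns
sq = returns ** 2
autocorr = np.corrcoef(sq[:-1], sq[1:])[0, 1]
print(f"Autocorrelation of squared returns: {autocorr:.3f}")
```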

Stochastic volatility models

In practice, volatility is time-varying: it exhibits clustering, persistence, and mean reversion. To capture these empirical features, stochastic volatility (SV) models treat volatility not as a constant parameter but as a stochastic process jointly evolving with the asset price. Among these models, the Heston (1993) specification is one of the most influential.

The Heston model assumes that the asset price follows a diffusion process analogous to geometric Brownian motion, while the instantaneous variance evolves according to a mean-reverting square-root process. Moreover, the innovations to the price and variance processes are correlated, thereby capturing the leverage effect frequently observed in equity markets.
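A minimal sketch of the Heston dynamics, using a simple Euler discretization with the variance floored at zero (the "full truncation" fix for the square-root process); all parameter values are illustrative assumptions, not calibrated values:

```python
import numpy as np

def heston_paths(S0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                 xi=0.3, rho=-0.7, T=1.0, steps=252, seed=1):
    """Euler scheme for the Heston model.

    kappa: speed of mean reversion, theta: long-run variance,
    xi: volatility of variance, rho: price/variance correlation (leverage effect)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    S, v = np.empty(steps + 1), np.empty(steps + 1)
    S[0], v[0] = S0, v0
    for t in range(steps):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()  # correlated shock
        v_pos = max(v[t], 0.0)  # floor the variance before taking its square root
        v[t + 1] = v[t] + kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
        S[t + 1] = S[t] * np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    return S, v

S, v = heston_paths()
print(f"Final price: {S[-1]:.2f}, final instantaneous volatility: {np.sqrt(max(v[-1], 0)):.2%}")
```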

Applications in finance

This section covers key mathematical concepts and fundamental principles of portfolio management, highlighting the role of volatility in assessing risk.

The normal distribution

The normal distribution is one of the most commonly used probability distributions of a random variable; its density is unimodal, symmetric and bell-shaped. The probability density function of a random variable X following a normal distribution with mean μ and variance σ² is given by


Normal distribution function

A random variable X is said to follow a standard normal distribution if its mean is zero and its variance is one.

The figure below represents the confidence intervals, showing the percentage of data falling within one, two, and three standard deviations from the mean.

Figure 4. Probability density function and confidence intervals for a standard normal variable.
Standard normal distribution
Source: computation by the author
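The percentages within one, two, and three standard deviations shown in the figure can be verified with the error function from the Python standard library, since for a normal variable P(|X − μ| ≤ kσ) = erf(k/√2):

```python
import math

def prob_within(k):
    """Probability that a normal variable falls within k standard deviations of its mean."""
    return math.erf(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(f"Within {k} standard deviation(s): {prob_within(k):.2%}")
# prints 68.27%, 95.45%, 99.73%
```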

Brownian motion

Robert Brown first observed Brownian motion as the erratic and random movement of pollen particles suspended in water, caused by constant collisions with water molecules. It was later formulated mathematically by Norbert Wiener and is also known as the Wiener process.

The random walk theory suggests that it is impossible to predict future stock prices as they move randomly; when the timestep of this random walk becomes infinitesimally small, the process becomes Brownian motion.

In the context of financial stochastic processes, when the market is modeled by standard Brownian motion, the probability distribution of the future price is normal, whereas under geometric Brownian motion future prices are lognormally distributed. This is also called the Brownian motion hypothesis on the movement of stock prices.

The process of a standard Brownian motion is given by:


Standard Brownian motion formula.

The process of a geometric Brownian motion is given by:


Geometric Brownian motion formula.

where dSt is the change in asset price over the infinitesimal time interval dt, dXt is the increment of a Wiener process (normally distributed with mean 0 and variance dt), σ represents the price volatility, and μ represents the expected growth rate of the asset price, also known as the ‘drift’.
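A short Python sketch of a geometric Brownian motion path, simulated on a discrete grid using the exact lognormal solution of the SDE above (parameter values are illustrative):

```python
import numpy as np

def gbm_path(S0=100.0, mu=0.08, sigma=0.20, T=1.0, steps=252, seed=7):
    """Simulate one geometric Brownian motion path using the exact solution
    S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma dX_t)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    dX = rng.standard_normal(steps) * np.sqrt(dt)  # Wiener increments ~ N(0, dt)
    log_increments = (mu - 0.5 * sigma ** 2) * dt + sigma * dX
    return S0 * np.exp(np.concatenate([[0.0], np.cumsum(log_increments)]))

path = gbm_path()
print(f"Start: {path[0]:.2f}, end: {path[-1]:.2f}")
```

Because the path is built from the exponential of a normal variable, prices stay positive, which is why GBM implies lognormally distributed future prices.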

Modern Portfolio Theory (MPT)

Modern Portfolio Theory (MPT), developed by Nobel laureate Harry Markowitz in the 1950s, is a framework for constructing optimal investment portfolios based on the foundational mean-variance model.

The Markowitz mean–variance model suggests that risk can be reduced through diversification. It proposes that risk-averse investors should optimize their portfolios by selecting a combination of assets that balances expected return and risk, thereby achieving the best possible return for the level of risk they are willing to take. The optimal trade-off curve between expected return and risk, commonly known as the efficient frontier, represents the set of portfolios that maximizes expected return for each level of standard deviation (risk).
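To make the diversification argument concrete, here is a minimal two-asset example in Python; the expected returns, volatilities and correlation are invented for illustration. Note that a 25% allocation to the riskier asset yields a portfolio volatility below that of the safer asset alone:

```python
import numpy as np

def portfolio_stats(w_b, mu, sigma, rho):
    """Expected return and volatility of a two-asset portfolio with weight w_b in asset B."""
    w = np.array([1.0 - w_b, w_b])
    cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1] ** 2]])
    return w @ mu, np.sqrt(w @ cov @ w)

# Illustrative inputs: asset A (6% return, 10% vol), asset B (10% return, 20% vol)
mu = np.array([0.06, 0.10])
sigma = np.array([0.10, 0.20])
rho = 0.2  # low correlation is what makes diversification work

for w_b in (0.0, 0.25, 0.50, 0.75, 1.0):
    er, vol = portfolio_stats(w_b, mu, sigma, rho)
    print(f"weight in B = {w_b:.2f}: E[R] = {er:.2%}, volatility = {vol:.2%}")
```

Sweeping the weight over a fine grid and keeping, for each volatility level, the highest expected return traces out the efficient frontier described above.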

Capital Asset Pricing Model (CAPM)

The Capital Asset Pricing Model (CAPM) builds on the model of portfolio choice developed by Harry Markowitz (1952), stated above. CAPM states that, assuming full agreement on return distributions and either risk-free borrowing/lending or unrestricted short selling, the value-weighted market portfolio of risky assets is mean-variance efficient, and expected returns are linear in the market beta.

The main result of the CAPM is a simple mathematical formula that links the expected return of an asset to its risk measured by the beta of the asset:


CAPM formula

Where:

  • E(Ri) = expected return of asset i
  • Rf = risk-free rate
  • βi = measure of the risk of asset i
  • E(Rm) = expected return of the market
  • E(Rm) − Rf = market risk premium
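The CAPM formula translates directly into code; the numerical inputs below are illustrative assumptions, not market data:

```python
def capm_expected_return(risk_free_rate, beta, expected_market_return):
    """E(Ri) = Rf + beta_i * (E(Rm) - Rf)."""
    return risk_free_rate + beta * (expected_market_return - risk_free_rate)

# Illustrative inputs: 2% risk-free rate, beta of 1.2, 8% expected market return
print(f"Expected return: {capm_expected_return(0.02, 1.2, 0.08):.2%}")  # prints 9.20%
```

A beta above 1 amplifies the market risk premium (here 6%), so the asset must offer more than the market's 8% expected return to compensate.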

CAPM recognizes that an asset’s total risk has two components: systematic risk and specific risk, but only systematic risk is compensated in expected returns.

Returns decomposition formula.
Returns decomposition formula

where the deviation of the asset’s realized (actual) return (Ri) from its expected value is split into a systematic part, driven by the deviation of the market return (Rm) from its expected value, and a specific part (ε) unrelated to the market.

Decomposition of risk.
Decomposition of risk

Systematic risk is a macro-level form of risk that affects a large number of assets to one degree or another, and therefore cannot be eliminated. General economic conditions, such as inflation, interest rates, geopolitical risk or exchange rates are all examples of systematic risk factors.

Specific risk (also called idiosyncratic risk or unsystematic risk), on the other hand, is a micro-level form of risk that specifically affects a single asset or narrow group of assets. It involves special risk that is unconnected to the market and reflects the unique nature of the asset. For example, a company-specific financial or business decision may lower earnings and hurt the stock price without affecting the performance of the other assets in the portfolio. Other examples of specific risk might include a firm’s credit rating, negative press reports about a business, or a strike affecting a particular company.

Why should I be interested in this post?

Understanding different measures of volatility is a prerequisite to better assessing potential losses, optimizing portfolio allocation, and making informed decisions that balance risk and expected return. Volatility is fundamental to risk management and constructing investment strategies.

Related posts on the SimTrade blog

Risk and Volatility

   ▶ Jayati WALIA Brownian Motion in Finance

   ▶ Youssef LOURAOUI Systematic Risk

   ▶ Youssef LOURAOUI Specific Risk

   ▶ Jayati WALIA Implied Volatility

   ▶ Mathias DUMONT Pricing Weather Risk

   ▶ Jayati WALIA Black-Scholes-Merton Option Pricing Model

Portfolio Theory and Models

   ▶ Jayati WALIA Returns

   ▶ Youssef LOURAOUI Portfolio

   ▶ Jayati WALIA Capital Asset Pricing Model (CAPM)

   ▶ Youssef LOURAOUI Optimal Portfolio

Financial Indexes

   ▶ Nithisha CHALLA Financial Indexes

   ▶ Nithisha CHALLA Calculation of Financial Indexes

   ▶ Nithisha CHALLA The S&P 500 Index

Useful Resources

Academic research

Bollerslev, T. (1986). Generalized Autoregressive Conditional Heteroskedasticity, Journal of Econometrics, 31(3), 307–327.

Engle, R. F. (1982). Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation, Econometrica, 50(4), 987–1007.

Fama, E. F., & French, K. R. (2004). The Capital Asset Pricing Model: Theory and Evidence, Journal of Economic Perspectives, 18(3), 25–46.

Heston, S. L. (1993). A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options, The Journal of Finance, 48(3), 1–24.

Markowitz, H. M. (1952). Portfolio Selection, The Journal of Finance, 7(1), 77–91.

Parkinson, M. (1980). The extreme value method for estimating the variance of the rate of return. Journal of Business, 53(1), 61–65.

Sharpe, W. F. (1964). Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk, The Journal of Finance, 19(3), 425–442.

Tsay, R. S. (2010). Analysis of financial time series, John Wiley & Sons.

Other

NYU Stern Volatility Lab Volatility analysis documentation.

Extreme Events in Finance Risk maps: extreme risk, risk and performance.

About the author

The article was written in December 2025 by Saral BINDAL (Indian Institute of Technology Kharagpur, Metallurgical and Materials Engineering, 2024-2028 & Research Assistant at ESSEC Business School).

   ▶ Discover all articles written by Saral BINDAL

The “lemming effect” in finance

Langchin SHIU

In this article, SHIU Lang Chin (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2024-2026) explains the “lemming effect” in financial markets, inspired by the animated movie Zootopia.

About the concept

The “lemming effect” refers to situations where individuals follow the crowd unthinkingly, just as lemmings are believed to follow one another off a cliff. In finance, this idea is linked to herd behaviour: investors imitate the actions of others instead of relying on their own information or analysis.

The image above is a cartoon showing a line of lemmings running off a cliff, with several already falling through the air. The caption “The Lemming Effect: Stop! There is another way” warns that blindly following others can lead to disaster, even if “everyone is doing it.” The message is to think independently, question group behaviour, and choose an alternative path instead of copying the crowd.

In Zootopia, there is a scene where lemmings dressed as bankers leave their office and are supposed to walk straight home after work. However, after one lemming notices Nick selling popsicles and suddenly changes direction to buy one, the rest of the lemmings automatically follow and queue up too, even though this is completely different from their original route and plan. This illustrates how individuals can abandon their own path and intentions simply because they see someone else acting first, much like investors may follow others into a trade or trend without conducting independent analysis.

Watch the video!


Source: Zootopia (Disney, 2016).

The first image shows Nick Wilde (the fox) holding a red paw-shaped popsicle. In the film, Nick uses this eye‑catching pawpsicle as a marketing tool to attract the lemmings and earn a profit.

zootopia lemmings
Source: Zootopia (Disney, 2016).

The second image shows a group of identical lemmings in suits walking in and out of a building labelled “Lemming Brothers Bank.” This is a parody of the real investment bank “Lehman Brothers,” which collapsed during the 2008 financial crisis. When one lemming notices the pawpsicle, it immediately changes direction from going home and heads toward Nick to buy the product, illustrating how one individual’s choice triggers the rest to follow.

zootopia lemmings
Source: Zootopia (Disney, 2016).

The third image shows Nick successfully selling pawpsicles to a whole line of lemmings. Nick is exploiting the lemmings’ herd‑like behaviour: once a few begin buying, the others automatically copy them and all purchase the same pawpsicle. The humour lies in how Nick profits from their conformity, using their predictable group behaviour—the “lemming effect”—to make easy money.

zootopia lemmings
Source: Zootopia (Disney, 2016).

Behavioural finance uses the lemming effect to describe deviations from perfectly rational decision-making. Rather than analysing fundamentals calmly, investors may be influenced by social proof, fear of missing out (FOMO) or the comfort of doing what “everyone else” seems to be doing.

Understanding the lemming effect is important both for professional investors and students of finance. It helps to explain why markets sometimes move far away from fundamental values and reminds decision-makers to be cautious when “the whole market” points in the same direction.

How the lemming effect appears in markets

In practice, the lemming effect can be seen when large numbers of investors buy the same “hot” stocks simply because prices are rising: they assume that so many others doing the same thing cannot be wrong.

The effect also applies in reverse during market downturns. Bad news, rumours, or sharp price declines can trigger a wave of selling, and the fear of being the last one out can push investors to copy others’ behaviour rather than stick to their original plan.

Such herd-driven moves can amplify volatility, push prices far above or below intrinsic value, and create opportunities or risks that would not exist in a purely rational market. Recognising these dynamics helps investors to step back and question whether they are thinking independently.

Related financial concepts

The lemming effect connects naturally with several basic financial ideas: diversification, risk-return trade-off, market efficiency, Keynes’ beauty contest and gamestop story. It shows how human behaviour can distort these textbook concepts in real markets.

Diversification

Diversification means not putting all your eggs in one basket (asset or sector), so that the poor performance of one investment does not destroy the whole portfolio. When the lemming effect is strong, investors often forget diversification and concentrate on a few “popular” stocks. From a diversification perspective, following the crowd can increase risk without necessarily increasing expected returns.

Risk and return

Finance theory says that higher expected returns usually come with higher risk. However, when many investors behave like lemmings, they may underestimate the true risk of crowded trades. Rising prices can create an illusion of safety, even if fundamentals do not justify the move. Understanding the lemming effect reminds investors to ask whether a sustainable increase in expected return really compensates the extra risk taken by following the crowd.

Market efficiency

In an efficient market, prices should reflect all available information. Herd behaviour and the lemming effect demonstrate that markets can deviate from this ideal when many investors react based on emotions or social cues rather than information. Short-term mispricing created by herding can eventually be corrected when new information becomes available or when rational investors intervene. For students, this illustrates why theoretical models of perfect efficiency are useful benchmarks but do not fully capture real-world behaviour.

Keynes’ beauty contest

Keynes’ “beauty contest” analogy describes investors who do not choose stocks based on their own view of fundamental value, but instead try to guess what everyone else will think is beautiful. Instead of asking “Which company is truly best?”, they ask “Which company does the average investor think others will like?” and buy that, hoping to sell to the next person at a higher price. This links directly to the lemming effect: investors watch each other and pile into the same trades, just like the lemmings all changing direction to follow the first one who goes for the pawpsicle.

GameStop story

The GameStop short squeeze in 2021 is a modern real‑world illustration of herd behaviour. A large crowd of retail investors on Reddit and other forums started buying GameStop shares together, partly for profit and partly as a social movement against hedge funds, driving the price far above what traditional valuation models would suggest. Once the price started to rise sharply, more and more people jumped in because they saw others making money and feared missing out, reinforcing the crowd dynamic in a very “lemming‑like” way.

Why should I be interested in this post?

For business and finance students, the lemming effect is a bridge between psychology and technical finance. It helps explain why prices sometimes move in surprising ways, and why sticking mindlessly to the crowd can be dangerous for long-term wealth.

Whether you plan to work in banking, asset management, consulting or corporate finance, understanding herd behaviour can improve your judgment. It encourages you to combine quantitative tools with a critical view of market sentiment, so that you do not become the next “lemming” in a crowded trade.

Related posts on the SimTrade blog

   ▶ All posts about Financial techniques

   ▶ Hadrien PUCHE “The market is never wrong, only opinions are“ – Jesse Livermore

   ▶ Hadrien PUCHE “It’s not whether you’re right or wrong that’s important, but how much money you make when you’re right and how much you lose when you’re wrong.”– George Soros

   ▶ Daksh GARG Social Trading

   ▶ Raphaël ROERO DE CORTANZE Gamestop: how a group of nostalgic nerds overturned a short-selling strategy

Useful resources

BBC Five animals to spot in a post-Covid financial jungle

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive psychology, 5(2), 207-232.

Gupta, S., & Shrivastava, M. (2022). Herding and loss aversion in stock markets: mediating role of fear of missing out (FOMO) in retail investors. International Journal of Emerging Markets, 17(7), 1720-1737.

Argan, M., Altundal, V., & Tokay Argan, M. (2023). What is the role of FoMO in individual investment behavior? The relationship among FoMO, involvement, engagement, and satisfaction. Journal of East-West Business, 29(1), 69-96.

About the author

The article was written in December 2025 by SHIU Lang Chin (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2024-2026).

   ▶ Read all articles by SHIU Lang Chin.

Time value of money

Langchin SHIU

In this article, SHIU Lang Chin (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2024-2026) explains the time value of money, a simple but fundamental concept used in all areas of finance.

Overview of the time value of money

The time value of money (TVM) is the idea that one euro today is worth more than one euro in the future because today’s money can be invested to earn interest. In other words, receiving cash earlier gives more opportunities to save, invest, and grow wealth over time. This principle serves as the foundation for valuing loans, bonds, investment projects, and many everyday financial decisions.

To work with TVM, finance uses a few key tools: present value (the value today of future cash flows) and future value (the value in the future of money invested today). With these tools, it becomes possible to consistently compare cash flows that occur at different dates.

Future value

The future value (FV) of money answers the question: if I invest a certain amount today at a given interest rate, how much will I have after some time? Future value uses the principle of compounding, which means that interest earns interest when it is reinvested.

For a simple case with annual compounding, the formula is:

Future Value (FV)

where PV is the amount invested today, r is the annual interest rate, and T is the number of years.

For example, if 1,000 euros are invested at 5% per year for 3 years, the future value is FV = 1,000 × (1.05)^3 = 1,157.63 euros. This shows how even a modest interest rate can increase the value of an investment over time.

Compounding frequency can also change the result. If interest is compounded monthly instead of annually, the formula is adjusted to use a periodic rate and the total number of periods. The more frequently interest is added, the higher the future value for the same nominal annual rate, illustrating why compounding is such a powerful mechanism in long-term investing.
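The compounding-frequency adjustment described above, FV = PV × (1 + r/m)^(m×T) where m is the number of compounding periods per year, can be sketched in Python; the amounts reuse the example in the text:

```python
def future_value(pv, annual_rate, years, periods_per_year=1):
    """FV = PV * (1 + r/m)^(m*T), where m is the compounding frequency."""
    m = periods_per_year
    return pv * (1 + annual_rate / m) ** (m * years)

# 1,000 euros at a 5% nominal annual rate for 3 years
print(round(future_value(1000, 0.05, 3), 2))       # annual compounding
print(round(future_value(1000, 0.05, 3, 12), 2))   # monthly compounding
```

Monthly compounding yields a few euros more than the 1,157.63 euros obtained with annual compounding, illustrating that more frequent compounding raises the future value for the same nominal rate.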

Compounding mechanism with monthy and annual compounding.
Compounding mechanism

Compounding mechanism with monthy and annual compounding.
Compounding mechanism

You can download the Excel file provided below, which contains the computation of an investment to illustrate the impact of the frequency on the compounding mechanism.

Download the Excel file for computation of an investment to illustrate the impact of the frequency on the compounding mechanism

Present value

Present value (PV) is the reverse operation of future value and answers the question: how much is a future cash flow worth today? To find PV, the future cash flow is “discounted” back to today using an appropriate discount rate that reflects opportunity cost, risk and inflation.

For a single future cash flow, the present value formula is:

Present Value (PV)

Where FV is the future amount, r is the discount rate per period, and T is the number of periods.

For example, if an investor expects to receive 1,000 euros in 2 years and the discount rate is 5% per year, the present value is PV = 1,000 / (1.05)^2 = 907.03 euros. This means the investor would be indifferent between receiving 907.03 euros today or 1,000 euros in two years at that discount rate.

Choosing the discount rate is a key step: for a safe cash flow, a risk-free rate such as a government bond yield might be used, while for a risky project, a higher rate reflecting the required return of investors would be more appropriate. A higher discount rate reduces present values, making future cash flows less attractive compared to money today.
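The discounting operation can be sketched the same way; the first call reproduces the worked example above, and the second illustrates how a higher discount rate (a 10% rate, chosen purely for illustration) lowers the present value:

```python
def present_value(fv, discount_rate, years):
    """PV = FV / (1 + r)^T."""
    return fv / (1 + discount_rate) ** years

print(round(present_value(1000, 0.05, 2), 2))  # prints 907.03
print(round(present_value(1000, 0.10, 2), 2))  # prints 826.45
```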

Applications of the time value of money

The time value of money is used in almost every area of finance. In corporate finance, it forms the basis of discounted cash-flow (DCF) analysis, where the expected future cash flows of a project or company are discounted to estimate the net present value. Investment decisions are typically made by comparing the present value to the initial cost.

DCF

In banking and personal finance, TVM is essential to design and understand loans, deposits and retirement plans. Customers who understand how interest rates and compounding work can better compare offers, negotiate terms and plan their savings. In capital markets, bond pricing, yield calculations and valuation of many other instruments depend directly on discounting streams of cash flows.

Even outside professional finance, TVM helps individuals answer simple but important questions: is it better to take a lump sum now or a stream of payments later, how much should be saved each month to reach a future target, or what is the true cost of borrowing at a given interest rate? A good intuition for TVM improves financial decision-making in everyday life.

Why should I be interested in this post?

As a university student, understanding TVM is essential because it underlies more advanced techniques such as discounted cash-flow (DCF) valuation, bond pricing and project evaluation. It is usually one of the first technical topics taught in introductory corporate finance and quantitative methods courses.

Related posts on the SimTrade blog

   ▶ All posts about Financial techniques

   ▶ Hadrien PUCHE The four most dangerous words in investing are, it’s different this time

   ▶ Hadrien PUCHE Remember that time is money

Useful resources

Harvard Business School Online Time value of money

Investing.com Time value of money: formula and examples

About the author

The article was written in December 2025 by SHIU Lang Chin (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2024-2026).

   ▶ Read all articles by SHIU Lang Chin.

Deep Dive into evergreen funds

Emmanuel CYROT

In this article, Emmanuel CYROT (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026) introduces the ELTIF 2.0 Evergreen Fund.

Introduction

The asset management industry is pivoting to democratize private market access for the wealth segment. We are moving from the rigid Capital Commitment Model (the classic “blind pool” private equity structure) to the flexible NAV-Based Model, an open-ended structure where subscriptions and redemptions are executed at periodic asset valuations rather than through irregular capital calls. For technical product specialists, the ELTIF 2.0 regulation isn’t just a compliance update; it’s the architectural blueprint for the democratization of private markets. Here is the deep dive into how these “Semi-Liquid” or “Evergreen” structures actually work, the European landscape, and the engineering behind them.

The Liquidity Continuum: Solving the “J-Curve” Problem

To understand the evergreen structure, you have to understand what it fixes. In a traditional Closed-End Fund (the “Old Guard”):

  • The Cash Drag: You commit €100k, but the manager only calls 20% in Year 1. Your money sits idle.
  • The J-Curve: You pay fees on committed capital immediately, but the portfolio value drops initially due to costs before rising (the “J” shape).
  • The Lock: Your capital is trapped for 10-12 years. Secondary markets are your only (expensive) exit.

The Evergreen / Semi-Liquid Solution represents the structural convergence of private market asset exposure with an open-ended fund’s periodic subscription and redemption framework.

  • Fully Invested Day 1: Unlike the Capital Commitment model, your capital is put to work almost immediately upon subscription.
  • Perpetual Life: There is no “end date.” The fund can run for 99 years, recycling capital from exited deals into new ones.
  • NAV-Based: You buy in at the current Net Asset Value (NAV), similar to a mutual fund, rather than making a commitment.

The difference in investment processes between evergreen funds and closed ended funds
 The difference in investment processes between evergreen funds and closed ended funds
Source: Medium.

The European Landscape: The Rise of ELTIF 2.0

The “ELTIF 2.0” regulation (Regulation (EU) 2023/606) is the game-changer. It removed the extra local rules that held the market back in Europe. These rules included high national minimum investment thresholds for retail investors and overly restrictive limits on portfolio composition and liquidity features imposed by national regulators.

Market Data as of 2025 (Morgan Lewis)

  • Volume: The market is rapidly expanding, with more than 160 registered ELTIFs active across Europe as of 2025.
  • The Hubs: Luxembourg is the dominant factory (approx. 60% of funds), followed by France (strong on the Fonds Professionnel Spécialisé or FPS wrapper) and Ireland.
  • The Arbitrage: The killer feature is the EU Marketing Passport. A French ELTIF can be sold to a retail investor in Germany or Italy without needing a local license. This allows managers to aggregate retail capital on a massive scale.

Structural Engineering: Liquidity

This section delves into the precise engineering required to reconcile the illiquidity of the underlying assets with the promise of periodic investor liquidity in Evergreen/Semi-Liquid funds. This is achieved through a combination of Asset Allocation Constraints and robust Liquidity Management Tools (LMTs).

The primary allocation constraint is the “Pocket” Strategy, or the 55/45 Rule. The fund is structurally divided into two distinct components. First, the Illiquid Core, which must represent at least 55% of the portfolio, is the alpha engine holding long-term, illiquid assets such as Private Equity, Private Debt, or Infrastructure. Notably, ELTIF 2.0 has broadened the scope of this core to include newer asset classes like Fintechs and smaller listed companies. Second, the Liquid Pocket, which can be up to 45%, serves as the fund’s buffer, holding easily redeemable, UCITS-eligible assets like money market funds or government bonds. While the regulation permits a high 45% pocket, efficient fund operation typically keeps this buffer closer to 15%–20% to mitigate performance-killing “cash drag”.

Crucial to managing liquidity risk is the Gate Mechanism. Although the fund offers conditional liquidity (often quarterly), the Gate prevents a systemic crisis if many investors attempt to exit simultaneously. This mechanism works by capping redemptions at a specific percentage of the Net Asset Value (NAV) per period, commonly set at 5%. If aggregate redemption requests exceed this threshold (e.g., requests total 10%), all withdrawing investors receive a pro-rata share of the allowable 5% and the remainder of their request is deferred to the next liquidity window.
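A minimal sketch of the pro-rata gate calculation described above; the function name, the 5% gate level and the amounts are illustrative assumptions:

```python
def apply_gate(requests, nav, gate_pct=0.05):
    """Pro-rata redemption under a gate.

    If total requests exceed gate_pct of NAV, each investor receives a
    proportional share of the cap; the remainder is deferred to the next window."""
    total = sum(requests)
    cap = gate_pct * nav
    if total <= cap:
        return list(requests), [0.0] * len(requests)
    scale = cap / total
    paid = [r * scale for r in requests]
    deferred = [r - p for r, p in zip(requests, paid)]
    return paid, deferred

# Illustrative: NAV of 100m, 5% gate, redemption requests totaling 10m (10% of NAV)
paid, deferred = apply_gate([6_000_000, 4_000_000], nav=100_000_000)
print(paid, deferred)
```

With requests at twice the cap, each investor receives half of the requested amount now and has the other half carried to the next liquidity window, exactly the pro-rata outcome described above.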

Finally, managers utilize Anti-Dilution Tools like Swing Pricing to protect the financial interests of the long-term investors remaining in the fund. In a scenario involving heavy redemptions, where the fund manager is forced to sell assets quickly and incur high transaction costs, Swing Pricing adjusts the NAV downwards only for the exiting investors. This critical mechanism ensures that those demanding liquidity—the “leavers”—bear the transactional “cost of liquidity,” thereby insulating the NAV of the “stayers” from dilution.

Why should I be interested in this post?

Mastering ELTIF 2.0 architecture offers a definitive edge over the standard curriculum. With the industry pivoting toward the “retailization” of private markets, understanding the engineering behind evergreen funds and liquidity gates demonstrates a level of practical sophistication that moves beyond theory—exactly what recruiters at top-tier firms like BlackRock or Amundi are seeking for their next analyst class.

Related posts on the SimTrade blog

   ▶ David-Alexandre BLUM The selling process of funds

Useful resources

Société Générale Fonds Evergreen et ELTIF 2 : Débloquer les Marchés Privés pour les Investisseurs Particuliers

About the author

The article was written in December 2025 by Emmanuel CYROT (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2021-2026).

   ▶ Read all articles by Emmanuel CYROT.