The Business Model of Proprietary Trading Firms

Anis MAAZ

In this article, Anis MAAZ (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027) explains how prop firms work, from understanding their business model and evaluation processes, to fee structures and risk management rules. The goal is not to promise guaranteed profits, but to provide a transparent, realistic overview of how proprietary trading firms operate and what traders should know before joining one.

Context and objective

  • Goal: demystify how prop firms make money, how their rules work, and what realistic outcomes look like, even if you are new to prop firms.
  • Outcome: a technical but accessible guide with a simple numeric example and a due diligence checklist.

What a prop firm is

Proprietary trading firms (prop firms) trade the financial markets with their own capital, relying on strict risk management and modern trading technology. But how exactly do prop firms make money, and what makes them attractive to aspiring traders? Traders who meet the firm’s rules get access to buying power and share in the profits, while the firm protects its capital with strict risk limits (daily loss, maximum drawdown, product caps). You will encounter two operating styles:

  • In-house / desk model: you trade live firm capital on a desk, supervised by a risk manager.
  • Evaluation (“challenge”) model: you pay a fee to prove you can hit a profit target without breaking the rules. If you pass, you receive a “funded” account with payout rules.

For example, a classic challenge requires reaching a profit of 6% without losing more than 4% of your initial challenge capital in order to become funded.

The Proprietary Trading Industry: Origins and Scale

Proprietary trading as a business model emerged in the 1980s-1990s in the US, initially within investment banks’ trading desks before regulatory changes (notably the Volcker Rule in 2010) pushed prop trading into independent firms. The modern “retail prop firm” model, offering funded accounts to individual traders via evaluation challenges, gained momentum in the 2010s, particularly after 2015 with firms like FTMO (Czech Republic, 2014) and TopstepTrader (US, 2012).

Today, the industry includes an estimated 200+ prop firms globally, concentrated in the US, UK, and UAE (Dubai has become a hub due to favorable regulations). Major players include FTMO, TopstepTrader, Apex Trader Funding, Alphafutures, and MyForexFunds. Most are privately owned by founders or small investor groups and some (like Topstep) have received venture capital. The market size is difficult to quantify precisely, but industry reports estimate the global prop trading sector handles billions in trading capital, with the retail-focused segment growing 40-50% annually from 2020-2024.

Core Characteristics of prop firms

  • Capital Allocation: Prop firms provide traders with access to firm capital, enabling them to trade larger positions than they could on their own.
  • Profit Sharing: A trader’s earnings are typically a percentage of the profits generated. This incentivizes high-caliber performance.
  • Training Programs: Many prop firms invest in the development of new traders via structured training programs, equipping them with proven strategies and technologies.
  • Diverse Markets: Prop traders operate across various asset classes, such as stocks, forex, options, cryptocurrencies, and commodities.

How the business model works

Revenue comes first from evaluation fees and account resets: this is a major revenue line for challenge-style firms because most applicants do not pass the challenges. Once funded, a trader keeps the majority of the profits generated (often 70–90%) while the firm keeps the rest. Some firms also charge for platform access, market data, or advanced tools such as a full order book, and pay exchange and clearing fees on futures.

In some cases, firms may charge onboarding or monthly platform fees to cover operational costs, such as trading infrastructure, data services, and proprietary software. However, top firms often waive such fees for consistently profitable traders.

For example, a firm charging $150 for a $50,000 evaluation challenge that attracts 10,000 applicants per month generates $1.5M in fee revenue. If 8% pass (800 traders) and receive funded accounts, and only 20% of those (160) reach a payout, the firm pays out perhaps $500,000-$800,000 in profit splits while retaining the rest as margin. Add-on services (resets at $100 each, platform fees) further boost revenue.
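The arithmetic in this example can be reproduced with a few lines of code. The average payout per successful trader below is an illustrative assumption (chosen so that total payouts land inside the $500,000–$800,000 range cited above), not a figure from any specific firm; a minimal Python sketch:

# Illustrative unit economics of a challenge-style prop firm (all inputs assumed).
applicants_per_month = 10_000
evaluation_fee = 150          # USD per challenge attempt
pass_rate = 0.08              # share of applicants who pass the challenge
payout_rate = 0.20            # share of funded traders who reach a payout
avg_payout = 4_000            # assumed average profit split paid per successful trader (USD)

fee_revenue = applicants_per_month * evaluation_fee      # $1,500,000
funded_traders = applicants_per_month * pass_rate         # 800
paid_traders = funded_traders * payout_rate                # 160
total_payouts = paid_traders * avg_payout                  # $640,000

print(f"Fee revenue:     ${fee_revenue:,.0f}")
print(f"Funded traders:  {funded_traders:,.0f}")
print(f"Paid traders:    {paid_traders:,.0f}")
print(f"Total payouts:   ${total_payouts:,.0f}")
print(f"Gross margin:    ${fee_revenue - total_payouts:,.0f}")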

Who Are the Traders?

Prop firm traders come from diverse backgrounds: retail traders seeking leverage, former bank traders, students, and career-changers. No formal degree is required. Most traders are between 25 and 40 years old, though firms accept anyone 18+. Most operate as independent contractors rather than employees: they receive profit splits and bear their own tax obligations.

Retention is very low: industry data suggest that 60–70% of funded traders lose their accounts within 3 months due to rule violations or drawdowns. Only 10–15% maintain funded status beyond 6 months. The model is inherently high-churn: firms continuously recruit through affiliates and ads, knowing that most traders will fail but that a small percentage will generate consistent trading activity and profit-share revenue.

What successful traders share:

  • The ability to manage risk and follow rules.
  • Analytical skills and a deep understanding of market behavior.
  • Psychological toughness to handle the highs and lows of trading.

It is not an easy industry, and it is safer to keep another source of income: only a small fraction of traders pass, and an even smaller fraction reach payouts after succeeding in a challenge. For the firm, fee income arrives upfront, while payouts happen later and only for those who succeed and stay disciplined over time.

For new traders, it is not easy to pass a challenge when the rules are strict, because trading with someone else’s capital often amplifies fear and greed. Success is judged not only by profitability but also by consistency and adherence to firm guidelines, and many new traders struggle to maintain profitability and burn out within months.

EU regulators have long reported that most retail accounts lose money on leveraged products like CFDs: typically 74–89%, which helps explain why challenge pass rates are low without strong process and discipline.

Success rates: what is typical and why most traders fail

“Pass rate” (the share of applicants who complete the challenge) is commonly cited around 5–10%. The “payout rate among funded traders” is often around 20%. End-to-end, only about 1–2% of all applicants reach a payout. All of these statistics vary by firm, product, and rules. Most people fail because of rule breaches under pressure (daily loss limits, news locks), overtrading, and inconsistent execution. Psychological factors such as revenge trading and FOMO (fear of missing out) are the usual culprits.

Trading Strategies, Markets, and Tools

Which Markets?

Most prop firms focus on futures (E-mini S&P, Nasdaq, crude oil), forex (EUR/USD, GBP/USD), and increasingly cryptocurrencies (Bitcoin, Ethereum). Some firms also offer equities (US stocks). The choice depends on the firm’s clearing relationships and risk appetite. Futures dominate because of high leverage, deep liquidity, and extended trading hours.

Common Strategies

Prop traders typically employ “intraday strategies”:

  • Scalping (holding positions seconds to minutes)
  • Momentum trading (riding short-term trends), and mean reversion (fading extremes)
  • Swing trading (multi-day holds) is less common due to overnight risk rules
  • High-frequency strategies are rare in retail prop firms, and most traders use setups based on technical indicators (moving averages, RSI, volume profiles).

Tools and Platforms

Firms provide access to professional platforms such as NinjaTrader, TradingView, and MetaTrader 4/5. Traders receive Level 2 data (order book), news feeds (Bloomberg, Reuters), and sometimes proprietary risk dashboards. Some firms offer replay tools to practice on historical data.

The key performance idea

Positive expectancy = you make more on your average winning trade than you lose on your average losing trade, often enough to overcome costs. Here is a simple way to check:

  1. Out of 10 trades, how many are winners? Example: 5 winners, 5 losers (a 50% win rate).
  2. What are your average win and average loss? Example: average win €120; average loss €80.
  3. Expected profit per trade ≈ (wins × avg win − losses × avg loss) ÷ number of trades. Here: (5 × 120 − 5 × 80) ÷ 10 = (€600 − €400) ÷ 10 = €20 per trade.

If costs and slippage are below €20 per trade, you likely have an edge worth scaling, subject to the firm’s risk limits.
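A minimal Python sketch of this expectancy check, reusing the numbers from the example and adding an assumed per-trade cost:

# Expectancy check: average profit per trade before and after costs (illustrative numbers).
win_rate = 0.5          # 5 winners out of 10 trades
avg_win = 120.0         # EUR, average winning trade
avg_loss = 80.0         # EUR, average losing trade
cost_per_trade = 5.0    # assumed commissions and slippage per trade (EUR)

gross_expectancy = win_rate * avg_win - (1 - win_rate) * avg_loss   # EUR 20 per trade
net_expectancy = gross_expectancy - cost_per_trade

print(f"Gross expectancy per trade: EUR {gross_expectancy:.2f}")
print(f"Net expectancy per trade:   EUR {net_expectancy:.2f}")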

The firm wants you to stay inside the limits, keep your average loss controlled (stops respected), and produce results that are repeatable across days. Firms reduce the “luck factor” with rules such as requiring a minimum of two winning days to pass a challenge, or not allowing a single day to account for more than half of the challenge target.

There are many ways to pass a challenge, depending on your trading strategy. If you aim for trades where the potential win is five times what you risk, you do not need a 50% or 80% win rate to pass and be profitable: with a 5-to-1 reward-to-risk ratio, any win rate above roughly 17% (1 trade in 6) already gives positive expectancy before costs.

Payout mechanics: example with Topstep (to clarify the “50%” point)

Profit split: you keep 100% of the first $10,000 you withdraw; after that, the split is 90% to you / 10% to Topstep (per trader, across accounts).

Per-request cap: on an Express Funded Account, you can request up to the lesser of $5,000 or 50% of your account balance per payout, after 5 winning days. On a Live Funded Account, you can request up to 50% of the balance per payout (no $5,000 cap). After 30 non-consecutive “winning days” in Live, you can unlock daily payouts of up to 100% of the balance.

Note: “50%” here is a cap on how much you may withdraw per request—not the profit split. Other firms differ (some advertise 80–90% splits, 7–30 day payout cycles, or higher first withdrawal shares), so always read the current Terms.

Why traders choose prop firms (psychology and practical reasons)

Traders are attracted to prop firms for both psychological and practical reasons. The appeal starts with small upfront risk: instead of depositing a large personal account, you pay a fixed evaluation fee. If you perform well within the rules, you gain access to greater buying power, which lets you scale faster than you could with a small personal account.

But this structure can become a psychological trap: most traders fail their first account, buy another one because it feels “cheap,” and the cycle can turn into an addiction, with traders burning through accounts because the capital “doesn’t feel real” to them. The trade-offs are real: evaluation fees and resets add up, rules can feel restrictive, and pressure tends to spike near loss limits or payout thresholds. All these factors contribute to why many candidates ultimately fail.

However, for experienced traders who can manage their psychology, the built-in structure (risk limits, reviews, and a community) adds accountability and often improves discipline. Payouts can also serve as a capital-building path, gradually seeding a personal account over time.

Regulation: A Gray Zone

Proprietary trading firms operate in a largely unregulated space, especially the evaluation-based model. In the US, prop firms are not broker-dealers; they typically collaborate with registered FCMs (Futures Commission Merchants) or brokers who handle execution and clearing, but the firm itself is often a private LLC with minimal oversight. The CFTC (Commodity Futures Trading Commission) regulates futures markets but not prop firms’ internal challenge mechanisms.

In France, the AMF has issued warnings about unregulated prop firms and emphasized that if a firm collects fees from French residents, it may fall under consumer protection law. Some firms have pulled out of France or adjusted terms. The UK FCA has similarly warned consumers. The UAE (DIFC, DMCC) offers more permissive environments, attracting many firms to Dubai.

Conclusion

Prop trading firms offer a compelling proposition: controlled access to institutional-sized buying power, standardized risk limits, and a structured pathway for turning skill into capital without large personal deposits. In this model, firms protect their capital through rules and fees, while giving profitable traders a scalable environment for strategy development and execution.

At the same time, the evaluation-and-payout cycle can amplify cognitive and emotional traps. Fee resets, drawdown thresholds, and profit targets concentrate attention on short-term outcomes, which can foster overtrading, sensation seeking, and schedule-driven risk-taking. The same leverage that accelerates account growth also magnifies behavioral errors and variance, making intermittent reinforcement (occasional big wins amid frequent setbacks) psychologically sticky and potentially addictive.

In the end, prop firms are neither a shortcut nor a scam, but a high-constraint laboratory. They reward stable execution and rule adherence, and penalize improvisation and impulse. As a venue, they are well suited to disciplined traders with repeatable processes, robust risk controls, and patience for incremental scale. Without those traits, the structure that protects the firm can become a treadmill for the trader.

At the end of the day, the prop firm model is designed for the firm to profit from fees, not trader success. With 1-2% end-to-end success rates, it’s closer to a paid training lottery than a career path.

If your goal is to learn trading, SimTrade, paper trading, or small personal accounts teach discipline without predatory fee structures. Joining a bank’s graduate program gives you access to senior traders, research, and real market-making or flow trading experience.

If you’ve already traded profitably for 1-2 years, have a proven strategy, need leverage, and fully understand the fee economics, then a top-tier firm (FTMO, Topstep) could provide capital to scale. But as a first step out of ESSEC, I would prioritize banking or buy-side roles that offer mentorship, stability, and credentials.

Why should I be interested in this post?

Prop firms reveal how trading businesses monetize edge while enforcing strict risk management and incentive design. Grasping evaluation rules, fee structures, and payout mechanics sharpens your ability to assess unit economics and governance. This knowledge is directly applicable to careers in trading, risk, and fintech—helping you make informed choices before joining a program.

Related posts on the SimTrade blog

   ▶ Theo SCHWERTLE Can technical analysis actually help to make better trading decisions?

   ▶ Michel VERHASSELT Trading strategies based on market profiles and volume profiles

   ▶ Vardaan CHAWLA Real-Time Risk Management in the Trading Arena

Useful Resources

Topstep payout policy and FAQs (current rules and examples)

The Funded Trader statistics on pass/payout rates

How prop firms make money (evaluation fees vs profit share): neutral primers and industry explainers

General overviews of prop trading mechanics and risk controls

About the author

The article was written in October 2025 by Anis MAAZ (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027).

Modern Portfolio Theory: What is it and what are its limitations?

Yann TANGUY

In this article, Yann TANGUY (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027) explains the Modern Portfolio Theory and how Post-Modern Portfolio Theory solves some of its limitations.

Creation of Modern Portfolio Theory (MPT)

Developed in 1952 by Nobel laureate Harry Markowitz, MPT revolutionized the way investors think about portfolios. Before Markowitz, investment decisions were mostly based on the individual merits of each investment, considered in isolation. MPT changed this way of thinking by showing that an investment should not be evaluated on its own but by its contribution to the risk and return of the overall portfolio.

At the center of MPT is the principle of diversification, captured by the adage “don’t put all your eggs in one basket”. By combining assets with different risk and return profiles and a low correlation, an investor can build a portfolio whose risk is lower than the weighted average of its components’ risks, and sometimes lower than that of any single component.

A Practical Example

Let’s assume that we have just two assets: stocks and bonds. Stocks have given higher returns over a long period of time compared to bonds but are riskier. On the other hand, bonds are less risky but return less.

An investor who puts all their money in stocks will have huge returns in a bull market but will suffer huge losses in a bear market. A conservative investor who puts money in bonds alone will have a smooth portfolio but will be denied the chance of better growth.

MPT shows that a combination of different investments in a portfolio can achieve a better risk-reward trade-off than any single investment. The key is the correlation between the assets: if the correlation is less than 1, the portfolio’s risk will be lower than the weighted average of the individual assets’ risks. In this simplified example, stocks perform poorly when bonds perform well and vice versa, so they have a negative correlation, which smooths the overall returns of the portfolio.

Mathematical explanation

To estimate the risk of a portfolio, MPT uses statistical measures such as variance and standard deviation. The variance is computed first, and its square root, the standard deviation, is used to assess the risk of an asset because it indicates how much the asset’s returns fluctuate.

Correlation and covariance quantify how two assets move relative to each other, i.e., whether the assets tend to move in the same direction. Correlation lies between -1 and +1: a correlation of +1 means the assets move exactly together, while -1 means they move in opposite directions.

The portfolio variance is calculated as follows for a portfolio of asset A and asset B:

Portfolio Variance Formula
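Written out for a two-asset portfolio (the case considered here), with the notation defined below, the standard expression is:

Var(RP) = wA² × Var(RA) + wB² × Var(RB) + 2 × wA × wB × Cov(RA, RB)

The last term is where correlation enters: a low or negative covariance pulls the portfolio variance below the weighted average of the individual variances.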

Where:

  • R = return
  • w = weight of the asset
  • Var = variance
  • Cov = covariance

The variance of a portfolio is therefore not equal to the weighted average of its components’ variances, because the covariance between the components is factored in.

The aim of MPT is to find the optimal portfolio mix that minimizes the portfolio standard deviation for a given level of expected return or that maximizes the portfolio expected return for a given level of standard deviation. This can be graphically represented as the efficient frontier, a line representing the set of optimal portfolios.

This efficient frontier represents different allocations of assets in a portfolio. All portfolios on this frontier are called efficient portfolios, meaning that they offer the best risk-adjusted returns possible with this combination of assets. When choosing the allocation for a portfolio, one should therefore pick a portfolio located on the frontier, based on one’s risk tolerance and return objective.

The figure below represents the efficient frontier when investors can invest in risky assets only.

Efficient Portfolio Frontier.
Portfolio Efficient Frontier
Source: Computation by the Author.

Quantifying performance

To quantify the performance of a portfolio, MPT utilizes Sharpe ratio. The Sharpe ratio measures the excess return of the portfolio (the return over the risk-free rate) for the risk of the portfolio (defined by portfolio standard deviation). The formula is as follows:

Sharpe Ratio Formula
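In text form, using the notation below, the formula reads:

Sharpe(P) = (E(RP) − Rf) / σP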

Where:

  • E(RP) = expected return of portfolio P
  • Rf = risk-free rate
  • σP = standard deviation of returns of portfolio P

A higher Sharpe ratio indicates a better risk-adjusted return.

Limitations of MPT

Even though MPT has been used in finance for decades, it is not universally accepted. The main criticism is that it uses standard deviation to measure risk, and standard deviation makes no difference between positive and negative volatility: both are treated as equally risky.

However, many investors would be happy with a portfolio that returns 20% one year and 40% the next. MPT could classify such a portfolio as risky because its returns vary a lot, even if it always beats the required return; yet this variation does not matter if the return objective is always met. In practice, investors care more about downside risk, the risk of performing worse than their return objective.

Emergence of Post-Modern Portfolio Theory (PMPT)

PMPT, introduced in 1991 by software designers Brian M. Rom and Kathleen Ferguson, is a refinement of MPT to overcome its main shortcoming. The key difference lies in the fact that PMPT focuses on downside deviation as a measure of risk, rather than the normal standard deviation that takes every form of deviation into account.

The origins of PMPT can be linked to the work of A. D. Roy with his “Safety First” principle in his 1952 paper, “Safety First and the Holding of Assets”. In his paper, Roy argued that investors are primarily motivated by the desire to avoid disaster rather than to maximize their gains. As he put it, “Decisions taken in practice are less concerned with whether a little more of this or of that will yield the largest net increase in satisfaction than with avoiding known rocks of uncertain position or with deploying forces so that, if there is an ambush round the next corner, total disaster is avoided.” Roy proposed that investors should seek to minimize the probability that their portfolio’s return will fall below a certain minimum acceptable level, or “disaster” level which is now known as MAR for “Minimum Acceptable Return”.

PMPT introduces the concept of the Minimum Acceptable Return (MAR), i.e., the lowest return that the investor wishes to receive. Instead of looking at the overall volatility of a portfolio, PMPT looks only at the returns below the MAR.

Calculating Downside Deviation

To compute downside deviation, we carry out the following:

  1. Define the Minimum Acceptable Return (MAR).
  2. Calculate the difference between the portfolio return and the MAR for each period.
  3. Square the negative differences.
  4. Sum the squared negative differences.
  5. Divide by the number of periods.
  6. Take the square root of the result to obtain the downside deviation.
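A minimal Python sketch of these six steps, using an assumed per-period MAR and an illustrative return series (neither comes from the Excel file):

# Downside deviation: root mean square of the shortfalls below the MAR (illustrative data).
returns = [0.02, -0.01, 0.03, -0.04, 0.01, 0.00, -0.02, 0.05]   # assumed periodic returns
mar = 0.005                                                      # assumed Minimum Acceptable Return per period

shortfalls = [min(r - mar, 0.0) for r in returns]                     # steps 1-2: keep only returns below the MAR
downside_variance = sum(s ** 2 for s in shortfalls) / len(returns)    # steps 3-5: square, sum, divide by number of periods
downside_deviation = downside_variance ** 0.5                         # step 6: square root

print(f"Downside deviation per period: {downside_deviation:.4f}")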

You can download the Excel file below which illustrates the difference between MPT and PMPT with two examples of market conditions (correlation).

Download the Excel file for the data for MPT and PMPT

In this file we find two combinations of assets: Example 1 and Example 2. The first combination has a positive correlation (0.72) and the second a negative one (-0.75), while the individual assets have very similar standard deviations and returns in both examples.

First, using MPT, we demonstrate how a high correlation leads to a weaker diversification effect and a smaller improvement in portfolio efficiency (Sharpe ratio) compared to a very similar portfolio with a low correlation.

Diversification effect on Sharpe Ratio (High correlation)

Diversification effect on Sharpe Ratio (Low correlation)

Afterwards, we use PMPT to show how correlation also impacts the diversification effect through the lens of downside deviation, i.e., how much the portfolio moves below the MAR, keeping in mind that these portfolios differ by only around 0.1% in average return and initially have almost the same volatility.

Diversification effect on Downside Deviation (High correlation)

Diversification effect on Downside Deviation (Low correlation)

Focusing on downside risk is made even more important when you consider that financial returns are rarely normally distributed, as is often assumed in MPT. In their 2004 paper, “Portfolio Diversification Effects of Downside Risk,” Namwon Hyung and Casper G. de Vries show that returns often show signs of what they call “fat tails,” meaning that extreme negative events are more common than a normal distribution would predict.

They find that in this environment, diversification is even more powerful in reducing downside risk. They state: “The VaR-diversification-speed is higher for the class of (finite variance) fat tailed distributions in comparison to the normal distribution”. In other words, for investors concerned about downside risk, diversification is a more potent tool than they might realize, as it becomes even more effective once the real distribution of returns is taken into account.

Conclusion

Modern Portfolio Theory has been the main theory used by investors for more than half a century. Its basic premise of diversification and asset allocation is as valid as it ever was. But the use of the standard deviation of returns gives only one side of the picture, a picture more fully captured by PMPT.

Post-Modern Portfolio Theory is a more refined way of managing risk. With its focus on downside deviation, it provides investors with a more accurate sense of what they are risking and allows them to build portfolios better aligned with their goals and risk tolerance. MPT was the first iteration, but PMPT offers a more practical framework to diversify a portfolio effectively.

An effective diversification strategy is built on a solid foundation of asset allocation among low-correlation asset classes. By focusing on the quality of diversification rather than only the quantity of holdings, investors can build portfolios that are better aligned with their goals, avoiding the unnecessary costs and diluted returns that come with a diworsified approach.

Why should I be interested in this post?

MPT is widely used in asset management, and understanding its principles and limitations is essential in today’s financial landscape.

Related posts on the SimTrade blog

   ▶ Rayan AKKAWI Warren Buffet and his basket of eggs

   ▶ Raphael TRAEN Understanding Correlation in the Financial Landscape: How It Drives Portfolio Diversification

   ▶ Rishika YADAV Understanding Risk-Adjusted Return: Sharpe Ratio & Beyond

   ▶ Youssef LOURAOUI Minimum Volatility Portfolio

   ▶ All posts about Financial techniques

Useful resources

Ferguson, K. (1994) Post-Modern Portfolio Theory Comes of Age, The Journal of Investing, 1:349-364

Geambasu, C., Sova, R., Jianu, I., and Geambasu, L., (2013) Risk measurement in post-modern portfolio theory: Differences from modern portfolio theory, Economic Computation and Economic Cybernetics Studies and Research, 47:113-132.

Markowitz, H. (1952) Portfolio Selection, The Journal of Finance, 7(1):77–91.

Roy, A.D. (1952) Safety First and the Holding of Assets, Econometrica, 20, 431-449.

Hyung, N., & de Vries, C. G. (2004) Portfolio Diversification Effects of Downside Risk, Working paper.

Sharpe, W.F. (1966) Mutual Fund Performance, Journal of Business, 39(1), 119–138.

Sharpe, W.F. (1994) The Sharpe Ratio, Journal of Portfolio Management, 21(1), 49–58.

About the author

This article was written in October 2025 by Yann TANGUY (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023-2027).

Understanding Risk-Adjusted Return: Sharpe Ratio & Beyond

Rishika YADAV

In this article, Rishika YADAV (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023–2027) explains the concept of risk-adjusted return, with a focus on the Sharpe ratio and complementary performance measures used in portfolio management.

Risk-adjusted return

Risk-adjusted return measures how much return an investment generates relative to the level of risk taken. This allows meaningful comparisons across portfolios and funds. For example, two portfolios may both generate a 12% return, but the one with lower volatility is superior because most investors are risk-averse — they prefer stable and predictable returns. A portfolio that achieves the same return with less risk provides higher utility to a risk-averse investor. In other words, it offers better compensation for the risk taken, which is precisely what risk-adjusted measures like the Sharpe Ratio capture.

The Sharpe Ratio

The Sharpe Ratio is the most widely used risk-adjusted performance measure. It standardizes excess return (return minus the risk-free rate) by total volatility and answers the question: how much additional return does an investor earn per unit of risk?

Sharpe Ratio = (E[RP] − Rf) / σP

where E[RP] = expected portfolio return, Rf = risk-free rate (e.g., T-bill yield), and σP = standard deviation of portfolio returns (volatility).
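As a quick illustration, the ratio can be computed from a series of periodic returns. The monthly returns and risk-free rate below are assumptions for the example, not data from any particular fund; a minimal Python sketch:

# Sharpe ratio from a series of periodic returns (illustrative, assumed numbers).
import statistics

portfolio_returns = [0.012, -0.004, 0.021, 0.008, -0.010, 0.015]   # assumed monthly returns
risk_free_rate = 0.002                                              # assumed monthly risk-free rate

mean_return = statistics.mean(portfolio_returns)       # sample mean used as a proxy for E[RP]
volatility = statistics.stdev(portfolio_returns)        # sample standard deviation
sharpe = (mean_return - risk_free_rate) / volatility

print(f"Monthly Sharpe ratio: {sharpe:.2f}")
print(f"Annualized (approx., x sqrt(12)): {sharpe * 12 ** 0.5:.2f}")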

Interpretation

The Sharpe Ratio was developed by Nobel Laureate William F. Sharpe (1966) as a way to measure the excess return of an investment relative to its risk. A higher Sharpe ratio indicates better risk-adjusted performance.

  • < 1 = sub-optimal
  • 1–2 = acceptable to good
  • 2–3 = very good
  • > 3 = excellent (rarely achieved consistently)

In real financial markets, sustained Sharpe Ratios above 1.0 are uncommon. Over the past four decades, broad equity indices like the S&P 500 have averaged between 0.4 and 0.7, while balanced multi-asset portfolios often fall in the 0.6–0.9 range. Only a handful of hedge funds or quantitative strategies have achieved Sharpe ratios consistently above 1.0, and values exceeding 1.5 are exceptionally rare. Thus, while the Sharpe ratio is a useful comparative tool, the theoretical thresholds (e.g., >3 as “excellent”) are not typically observed in real markets.

Capital Allocation Line (CAL) and Capital Market Line (CML)

The Capital Allocation Line (CAL) represents the set of portfolios obtainable by combining a risk-free asset with a chosen risky portfolio P. It is a straight line in the (risk, expected return) plane: investors choose a point on the CAL according to their risk preference.

The equation of the CAL is:

E[RQ] = Rf + ((E[RP] − Rf) / σP) × σQ

where:

  • E[RQ] = expected return of the combined portfolio Q
  • Rf = risk-free rate
  • E[RP] = expected return of risky portfolio P
  • σP = standard deviation of P
  • σQ = resulting standard deviation of the combined portfolio (proportional to weight in P)

The slope of the CAL equals the Sharpe ratio of portfolio P:

Slope(CAL) = (E[RP] − Rf) / σP = Sharpe(P)

The Capital Market Line (CML) is the CAL when the risky portfolio P is the market portfolio (M). Under CAPM/Markowitz assumptions the market portfolio is the tangent (highest Sharpe) point on the efficient frontier and the CML is tangent to the efficient frontier at M.

The equation of the CML is:

E[RQ] = Rf + ((E[RM] − Rf) / σM) × σQ

where M denotes the market portfolio.

The slope of the CML, (E[RM] − Rf) / σM, is the Sharpe ratio of the market portfolio.

The link between the CAL, CML and Sharpe ratio is illustrated in the figure below.

Figure 1. Capital Allocation Line (CAL), Capital Market Line (CML) and the Sharpe ratio.
Capital Allocation Line and Sharpe ratio
Source: computation by author.

Strengths of the Sharpe Ratio

  • Simple and intuitive — easy to compute and interpret.
  • Versatile — applicable across asset classes, funds, and portfolios.
  • Balances reward and risk — combines excess return and volatility into a single metric.

Limitations of the Sharpe Ratio

  • Assumes returns are approximately normally distributed — real returns often show skewness and fat tails.
  • Penalizes upside and downside volatility equally — it does not distinguish harmful downside movements from beneficial upside.
  • Sensitive to the chosen risk-free rate and the return measurement horizon (daily/monthly/annual).

Beyond Sharpe: Alternative measures

  • Treynor Ratio — uses systematic risk (β) instead of total volatility: Treynor = (Rp − Rf) / βp. Best for well-diversified portfolios.
  • Sortino Ratio — focuses only on downside deviation, so it penalizes harmful volatility (losses) but not upside variability.
  • Jensen’s Alpha — α = Rp − [Rf + βp(Rm − Rf)]; measures manager skill relative to CAPM expectations.
  • Information Ratio — active return (vs benchmark) divided by tracking error; useful for evaluating active managers.
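The sketch below shows, under assumed inputs (the returns, beta, downside deviation, and active returns are all placeholders), how these measures differ only in the risk term or benchmark they use:

# Alternative risk-adjusted measures (illustrative, assumed inputs).
import statistics

rp = 0.09                        # assumed annual portfolio return
rf = 0.03                        # assumed risk-free rate
rm = 0.07                        # assumed market return
beta = 1.1                       # assumed portfolio beta
downside_dev = 0.08              # assumed downside deviation relative to the MAR
active_returns = [0.01, -0.005, 0.02, 0.004]   # assumed returns vs. benchmark

treynor = (rp - rf) / beta
sortino = (rp - rf) / downside_dev               # here the MAR is taken to be the risk-free rate
jensen_alpha = rp - (rf + beta * (rm - rf))
information_ratio = statistics.mean(active_returns) / statistics.stdev(active_returns)

print(f"Treynor: {treynor:.3f}  Sortino: {sortino:.3f}  "
      f"Jensen's alpha: {jensen_alpha:.3%}  Information ratio: {information_ratio:.2f}")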

Applications in portfolio management

Risk-adjusted metrics are used by asset managers to screen and rank funds, by institutional investors for capital allocation, and by analysts to determine whether outperformance is due to skill or increased risk exposure. When two funds have similar absolute returns, the one with the higher Sharpe Ratio is typically preferred.

Why should I be interested in this post?

Understanding the Sharpe Ratio and complementary risk-adjusted measures is essential for students interested in careers in asset management, equity research, or investment analysis. These tools help you evaluate performance meaningfully and make better investment decisions.

Related posts on the SimTrade blog

   ▶ Capital Market Line (CML)

   ▶ Understanding Correlation and Portfolio Diversification

   ▶ Implementing the Markowitz Asset Allocation Model

   ▶ Markowitz and Modern Portfolio Theory

Useful resources

Jensen, M. (1968) The Performance of Mutual Funds in the Period 1945–1964, Journal of Finance, 23(2), 389–416.

Sharpe, W.F. (1966) Mutual Fund Performance, Journal of Business, 39(1), 119–138.

Sharpe, W.F. (1994) The Sharpe Ratio, Journal of Portfolio Management, 21(1), 49–58.

Sortino, F. and Price, L. (1994) Performance Measurement in a Downside Risk Framework, Journal of Investing, 3(3), 59–64.

About the author

This article was written in October 2025 by Rishika YADAV (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2023–2027). Her academic interests lie in strategy, finance, and global industries, with a focus on the intersection of policy, innovation, and sustainable development.

US Treasury Bonds

Nithisha CHALLA

In this article, Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management (MiM), 2021-2024) gives a comprehensive overview of U.S. Treasury bonds, covering their features, benefits, risks, and how to invest in them.

Introduction

Treasury bonds, often referred to as T-bonds, are long-term debt securities issued by the U.S. Department of the Treasury. They are regarded as one of the safest investments globally, offering a fixed interest rate and full backing by the U.S. government. This article aims to provide an in-depth understanding of Treasury bonds, from their basics to advanced concepts, making it an essential read for finance students and professionals.

What Are Treasury Bonds?

Treasury bonds are government debt instruments with maturities ranging from 10 to 30 years. Investors receive semi-annual interest payments and are repaid the principal amount upon maturity. Due to their low credit risk, Treasury bonds are a popular choice for conservative investors and serve as a benchmark for other interest-bearing securities.

Types of Treasury Securities

Treasury bonds are part of a broader category of U.S. Treasury securities, which include:

  • Treasury Bills (T-bills): Short-term securities with maturities of one year or less, sold at a discount and redeemed at face value.
  • Treasury Notes (T-notes): Medium-term securities with maturities between 2 and 10 years, offering fixed interest payments.
  • Treasury Inflation-Protected Securities (TIPS): Securities adjusted for inflation to protect investors’ purchasing power.
  • Treasury Bonds (T-bonds): Long-term securities with maturities of up to 30 years, ideal for investors seeking stable, long-term income.

Historical Performance of Treasury Bonds

Historically, Treasury bonds have been a cornerstone of risk-averse portfolios. During periods of economic uncertainty, they act as a haven, preserving capital and providing reliable income. For instance, during the 2008 financial crisis and the COVID-19 pandemic, Treasury bond yields dropped significantly as investors flocked to their safety.

Despite their stability, T-bonds are sensitive to interest rate fluctuations. When interest rates rise, bond prices typically fall, and vice versa. Over the long term, they have delivered modest returns compared to equities but excel in capital preservation.

Investing in Treasury Bonds

Investing in Treasury bonds can be done through various channels like Direct Purchase, Brokerage Accounts, Mutual Funds and ETFs, and Retirement Accounts:

  • Direct Purchase: Investors can buy T-bonds directly from the U.S. Treasury via the TreasuryDirect website.
  • Brokerage Accounts: Treasury bonds are also available on secondary markets through brokers.
  • Mutual Funds and ETFs: Investors can gain exposure to Treasury bonds through funds that focus on government securities.
  • Retirement Accounts: T-bonds are often included in 401(k) plans and IRAs for diversification.

Factors Affecting Treasury Bond Prices

Several factors influence the prices and yields of Treasury bonds such as Interest Rates, Inflation Expectations, Federal Reserve Policy, and Economic Conditions:

  • Interest Rates: An inverse relationship exists between bond prices and interest rates.
  • Inflation Expectations: Higher inflation erodes the real return on bonds, causing prices to drop.
  • Federal Reserve Policy: The Federal Reserve’s actions, such as changing the federal funds rate or engaging in quantitative easing, directly impact Treasury yields.
  • Economic Conditions: In times of economic turmoil, demand for Treasury bonds increases, driving up prices and lowering yields.

Relationship between bond price and current bond yield

Let us consider a US Treasury bond with nominal value M, coupon C, maturity T, and interest paid twice a year (every semester). The coupon (the interest paid every period) is computed with the coupon rate. The nominal value is reimbursed at maturity. The current yield is the market rate, which may be lower or higher than the rate at the time of issuance of the bond (the coupon rate used to compute the dollar value of the coupon). The formula below gives the price of the bond (we consider a date just after the issuance date and different current yields).

Formula for the price of the bond
 Formula for the price of the bond
Source: The author
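Written out, a standard way to express this relationship (assuming the current yield y is an annual rate with semi-annual compounding; the exact discounting convention in the Excel file may differ slightly) is:

P = C / (1 + y/2) + C / (1 + y/2)² + … + C / (1 + y/2)^(2T) + M / (1 + y/2)^(2T)

that is, the present value of the 2T semi-annual coupons plus the present value of the nominal value repaid at maturity.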

Relationship between bond price and current bond yield
Relationship between bond price and current bond yield
Source: The author

You can download below the Excel file for the data used to build the figure for the relationship between bond price and current bond yield.

Download the Excel file to compute the bond price as a function of the current yield
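For readers who prefer code, a minimal Python sketch reproduces the same relationship; the bond characteristics ($1,000 nominal, 4% coupon rate, 20-year maturity, semi-annual coupons) are illustrative assumptions, not necessarily those used in the file:

# Price of a fixed-coupon bond as a function of the current yield (semi-annual compounding assumed).
def bond_price(nominal, coupon_rate, maturity_years, current_yield, freq=2):
    coupon = nominal * coupon_rate / freq                    # coupon paid each period
    periods = int(maturity_years * freq)
    rate = current_yield / freq                               # periodic discount rate
    pv_coupons = sum(coupon / (1 + rate) ** t for t in range(1, periods + 1))
    pv_nominal = nominal / (1 + rate) ** periods
    return pv_coupons + pv_nominal

# Assumed bond: $1,000 nominal, 4% coupon rate, 20-year maturity.
for y in (0.02, 0.03, 0.04, 0.05, 0.06):
    print(f"yield {y:.0%} -> price {bond_price(1000, 0.04, 20, y):,.2f}")
# The price falls as the current yield rises; it equals the nominal value when the yield equals the coupon rate.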

Risks and Considerations

While Treasury bonds are low-risk investments, they are not entirely risk-free. There are several factors to consider:

  • Interest Rate Risk: rising interest rates can lead to capital losses for bondholders.
  • Inflation Risk: fixed payments lose purchasing power during high-inflation periods.
  • Opportunity Cost: low returns on T-bonds may be less attractive compared to higher-yielding investments like stocks.

Treasury Bond Futures

Treasury bond futures are standardized contracts that allow investors to speculate on or hedge against future changes in bond prices. These derivatives are traded on exchanges like the Chicago Mercantile Exchange (CME) and are essential tools for managing interest rate risk in sophisticated portfolios.

Treasury Bonds in the Global Market

The U.S. Treasury market is the largest and most liquid government bond market worldwide. It plays a pivotal role in the global financial system:

  • Reserve Currency: Many central banks hold U.S. Treasury bonds as a key component of their foreign exchange reserves.
  • Benchmark for Other Securities: Treasury yields serve as a reference point for pricing other debt instruments.
  • Foreign Investment: Countries like China and Japan are significant holders of U.S. Treasury bonds, underscoring their global importance.

Conclusion

Treasury bonds are fundamental to the financial landscape, offering safety, stability, and insights into broader economic dynamics. Whether you are a finance student building foundational knowledge or a professional refining investment strategies, understanding Treasury bonds is indispensable. As of 2023, the U.S. Treasury market exceeds $24 trillion in outstanding debt, reflecting its vast scale and importance. By mastering the nuances of Treasury bonds, you gain a competitive edge in navigating the complexities of global finance.

Why should I be interested in this post?

Understanding Treasury bonds is crucial for anyone pursuing a career in finance. These instruments provide insights into Monetary Policy, Fixed-Income Analysis, Portfolio Management, and Macroeconomic Indicators.

Related posts on the SimTrade blog

   ▶ Nithisha CHALLA Datastream

Useful resources

Treasury Direct Treasury Bonds

Fiscal data U.S. Treasury Monthly Statement of the Public Debt (MSPD)

Treasury Direct Understanding Pricing and Interest Rates

About the author

The article was written in October 2025 by Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management (MiM), 2021-2024).

Herfindahl-Hirschman Index

Nithisha CHALLA

In this article, Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management (MiM), 2021-2024) delves into the Herfindahl-Hirschman Index (HHI).

History of the Herfindahl-Hirschman Index (HHI)

The Herfindahl–Hirschman Index (HHI) originated in the mid-20th century as a measure of market concentration. Its roots trace back to Albert O. Hirschman, who in 1945 introduced a squaring-based method to assess trade concentration in his book “National Power and the Structure of Foreign Trade.” A few years later, Orris C. Herfindahl independently applied a similar concept in his 1950 doctoral dissertation on the U.S. steel industry, formalizing the formula that sums the squares of firms’ market shares to capture dominance. Over time, economists combined their contributions, naming it the Herfindahl–Hirschman Index.

During the 1970s and 1980s, the measure gained prominence in industrial organization and competition economics. In 1982, the U.S. Department of Justice and the Federal Trade Commission officially adopted the HHI in their Merger Guidelines to evaluate market concentration and the impact of mergers, establishing it as a global standard. Since then, competition authorities worldwide, including the European Commission and the OECD, have incorporated HHI into their antitrust frameworks, and it remains widely used today to assess competition across various industries, such as banking, telecommunications, and energy.

The Herfindahl-Hirschman Index (HHI) is a widely used measure of market concentration and competition in various industries. The HHI has become a crucial tool for finance professionals, policymakers, and regulatory bodies to assess the level of competition in a market. In this article, we will delve into the basics of the HHI, its calculation, interpretation, and advanced applications, including recent statistics and news.

What is the Herfindahl-Hirschman Index (HHI)?

The HHI is a numerical measure that calculates the market concentration of a particular industry by considering the market share of each firm. The index ranges from 0 to 10,000, where higher values indicate greater market concentration and reduced competition. For example, a market comprising four firms with market shares of 30%, 30%, 20%, and 20% would have an HHI of 2,600 (30² + 30² + 20² + 20² = 2,600).

Calculation of the HHI

The HHI is calculated by summing the squares of the market shares of each firm in the industry. The market share is typically expressed as a percentage of the total market size. The formula for calculating the HHI is:

Formula for the Herfindahl-Hirschman Index (HHI).
 Formula for the Herfindahl-Hirschman Index (HHI).
Source: the author.

where MSi is the market share of firm i, and N the number of firms.
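Written out, the formula in the figure reads:

HHI = MS1² + MS2² + … + MSN²

with market shares expressed in percentage points, so that a pure monopoly (one firm with 100% of the market) gives 100² = 10,000.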

The HHI ranges from 0 (perfect competition) to 10,000 (monopoly).

Interpretation of the HHI

According to the HHI, the concentration of sectors can be categorized as low, moderate and high:

  • Low concentration (HHI < 1,500): Indicates a highly competitive market with many firms.
  • Moderate concentration (1,500 ≤ HHI < 2,500): Suggests a moderately competitive market with some dominant firms.
  • High concentration (HHI ≥ 2,500): Indicates a highly concentrated market with limited competition.
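A minimal Python sketch that computes the HHI from a list of market shares and applies the thresholds above, using the four-firm shares from the earlier example:

# HHI from market shares (in percent) and classification using the thresholds above.
def hhi(market_shares):
    return sum(s ** 2 for s in market_shares)

def concentration(index):
    if index < 1500:
        return "low concentration"
    if index < 2500:
        return "moderate concentration"
    return "high concentration"

shares = [30, 30, 20, 20]                    # four-firm example from the text
index = hhi(shares)
print(index, "->", concentration(index))     # 2600 -> high concentration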

I built an Excel file to illustrate the three cases: low, moderate, and high concentration.

Low concentration: HHI < 1,500
 Low concentration (according to the HHI)
Source: the author.

Moderate concentration: 1,500 < HHI < 2,500
 Moderate concentration (according to the HHI)
Source: the author.

High concentration: HHI > 2,500
High concentration (according to the HHI)
Source: the author.

You can download below the Excel file for the data used to build the figure for the HH index.

Download the Excel file for the data used to build the figure for the  HH index

Advanced Applications of the HHI

The HHI has several advanced applications in finance, economics, and regulatory frameworks. Some of these applications include:

  • Merger analysis: Regulatory bodies, such as the US Federal Trade Commission (FTC), use the HHI to assess the potential impact of mergers and acquisitions on market competition.
  • Industry analysis: Finance professionals use the HHI to analyze the competitive landscape of an industry and identify potential investment opportunities.
  • Antitrust policy: The HHI is used to inform antitrust policy and enforcement, helping to prevent anti-competitive practices and promote competition.
  • Market structure analysis: The HHI is used to analyze the market structure of an industry, including the number of firms, market shares, and barriers to entry.

Criticisms and Limitations of the HHI

While the HHI is a widely used and useful measure of market concentration, it has several criticisms and limitations. Some of these include:

  • Simplistic assumption: The HHI assumes that market shares are a good proxy for market power, which may not always be the case.
  • Ignorance of other factors: The HHI ignores other factors that can affect market competition, such as barriers to entry, product differentiation, and firm conduct.
  • Sensitive to market definition: The HHI is sensitive to the definition of the market, which can affect the calculation of market shares and the resulting HHI value.

Real-World Examples

US Airline Industry: The HHI for the US airline industry has increased significantly over the past two decades, indicating growing market concentration. According to a 2020 report by the US Government Accountability Office, the HHI for the US airline industry increased from 1,041 in 2000 to 2,041 in 2020.

US Technology Industry: The HHI for the US technology industry has also increased significantly over the past decade, indicating growing market concentration. According to a 2022 report by the US FTC, the HHI for the US technology industry increased from 1,500 in 2010 to 3,000 in 2020.

Recent Statistics and News

  • A 2021 FTC staff report on acquisitions by major technology firms highlighted a “systemic nature of their acquisition strategies,” indicating a clear trend toward market concentration as they frequently acquired startups and potential competitors.
  • A 2020 article by the American Enterprise Institute noted that while the HHI for the US airline industry had increased by 41% since the early 2000s, inflation-adjusted ticket prices had actually fallen.
  • In its 2019 antitrust lawsuit to block the T-Mobile and Sprint merger, the US Department of Justice argued the deal was “presumptively anticompetitive,” citing HHI calculations that showed the merger would substantially increase concentration in the mobile wireless market.
  • Recent studies have utilized the HHI to analyze hospital market concentrations. For example, research on New Jersey’s hospital markets revealed increasing consolidation, with several regions classified as “highly concentrated” based on HHI scores. This information is crucial for understanding the implications of market concentration on healthcare accessibility and pricing.

Regulatory Framework

The HHI is widely used by regulatory bodies around the world to assess market competition and concentration. In the US, the FTC and the Department of Justice use the HHI to evaluate mergers and acquisitions and to enforce antitrust laws. Similarly, in the European Union, the European Commission uses the HHI to assess market competition and concentration in various industries.

Conclusion

The Herfindahl-Hirschman Index remains a fundamental instrument for assessing market concentration and competition. Its applications have evolved across various sectors, providing valuable insights into market structures. However, practitioners should be mindful of its limitations and consider complementing the HHI with other analytical tools for a comprehensive market assessment.

Why should I be interested in this post?

The Herfindahl-Hirschman Index is a powerful tool for analyzing market structure and assessing competitive dynamics. As markets continue to evolve, the HHI will remain an essential tool for navigating the complexities of competition in the modern economy. So as business and finance students, it is necessary to know such an important index to keep up with the evolving world around us.

Related posts on the SimTrade blog

   ▶ Nithisha CHALLA Datastream

Useful resources

United States Department of Justice Herfindahl–Hirschman index

Eurostat Glossary:Herfindahl Hirschman Index (HHI)

United States Census Bureau Herfindahl–Hirschman index

Academic articles

Bach, G. D. (2020, March 18). Strong Competition Among US Airlines Before COVID-19 Pandemic. American Enterprise Institute.

Federal Trade Commission. (2021, September). FTC Staff Presents Report on Nearly a Decade of Unreported Acquisitions by the Biggest Technology Companies. Federal Trade Commission

United States Department of Justice. (2019, June 11). Complaint, United States of America et al. v. Deutsche Telekom AG et al. (Case 1:19-cv-01713). United States District Court for the District of Columbia

About the author

The article was written in October 2025 by Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management (MiM), 2021-2024).

Overview of US Treasuries

Nithisha CHALLA

In this article, Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management (MiM), 2021-2024) gives an overview of US Treasuries, their types, characteristics, and advanced applications.

Introduction

US Treasuries are a cornerstone of global financial markets, serving as a benchmark for risk-free investments and a safe-haven asset during times of economic uncertainty. As a finance professional, understanding the basics and intricacies of US Treasuries is essential for making informed investment decisions and navigating the complexities of global finance. In this article, we will provide a comprehensive overview of US Treasuries, covering the basics, types, characteristics, market structure, and advanced applications.

What are US Treasuries?

US Treasuries are debt securities issued by the US Department of the Treasury to finance government spending and pay off maturing debt. They are considered one of the safest investments globally, backed by the full faith and credit of the US government.

Types of US Treasuries

There are four main types of US Treasuries:

Treasury Bills (T-bills)

  • Short-term securities with maturities ranging from a few weeks to 52 weeks
  • Sold at a discount to face value, with the difference representing the interest earned.
  • Low risk, low return investment (low duration fixed-income securities)

Treasury Notes (T-Notes)

  • Medium-term securities with maturities ranging from 2 to 10 years
  • Sold at face value, with interest paid semi-annually
  • Moderate risk, moderate return investment (medium duration fixed-income securities)

Treasury Bonds (T-Bonds)

  • Long-term securities with maturities ranging from 10 to 30 years
  • Sold at face value, with interest paid semi-annually
  • Higher risk, higher return investment (high duration fixed-income securities)

Treasury Inflation-Protected Securities (TIPS)

  • Securities with principal and interest rates adjusted to reflect inflation
  • Designed to provide a hedge against inflation
  • Low risk, low return investment

Figure 1 below gives the Evolution of the Structure of U.S. Federal Debt by Security Type from 2005 to 2024.

Evolution of the Structure of U.S. Federal Debt by Security Type from 2005 to 2024
Evolution of the Structure of U.S. Federal Debt by Security Type from 2005 to 2024
Source: U.S. Department of Treasury

Figure 2 below gives the U.S. Federal Debt by Security Type on August 31, 2025.

U.S. Federal Debt by Security Type on August 31, 2025
US Federal Debt by Security Type on August 31, 2025
Source: U.S. Department of Treasury

Characteristics of US Treasuries

US Treasuries have several key characteristics, such as risk-free status, liquidity, taxation, and return characteristics:

Risk-free status: US Treasuries are considered one of the safest investments globally, backed by the full faith and credit of the US government.

Liquidity: US Treasuries are highly liquid, with a large and active market.

Taxation: Interest earned on US Treasuries is exempt from state and local taxes.

Return characteristics: US Treasuries offer a relatively low return compared to other investments, but provide a high degree of safety and liquidity.

Market Structure

The US Treasury market is one of the largest and most liquid markets globally, with a wide range of participants, including:

  • Primary dealers: Authorized dealers that participate in US Treasury auctions.
  • Investment banks: Firms that provide underwriting, trading, and advisory services.
  • Asset managers: Firms that manage investment portfolios on behalf of clients.
  • Central banks: Institutions that manage a country’s monetary policy and foreign exchange reserves.

Advanced Applications of US Treasuries

US Treasuries have several advanced applications, including:

  • Yield curve analysis: US Treasuries are used to construct the yield curve, which is a graphical representation of interest rates across different maturities.
  • Hedging strategies: US Treasuries are used to hedge against interest rate risk, inflation risk, and credit risk.

Figure 3 below gives the yield curve for the Treasuries in the United States on December 31, 2024.

Yield curve for US Treasuries (31/12/2024)
Yield curve for US Treasuries (31/12/2024)
Source: U.S. Department of Treasury

You can download below the Excel file for the data used to build the figure for the yield curve for US Treasuries.

Download the Excel file for the data used to build the figure for the yield curve for US Treasuries

Conclusion

US Treasuries are a fundamental component of global financial markets, offering a safe-haven asset and a benchmark for risk-free investments. By understanding the basics and intricacies of US Treasuries, finance professionals can make informed investment decisions and navigate the complexities of global finance.

Why should I be interested in this post?

Understanding US Treasuries is crucial for anyone pursuing a career in finance. These instruments provide insights into Monetary Policy, Fixed-Income Analysis, Portfolio Management, and Macroeconomic Indicators.

Related posts on the SimTrade blog

   ▶ Nithisha CHALLA Datastream

   ▶ Ziqian ZONG The Yield Curve

   ▶ Youssef LOURAOUI Interest rate term structure and yield curve calibration

   ▶ William ARRATA My experiences as Fixed Income portfolio manager then Asset Liability Manager at Banque de France

Useful resources

Treasury Direct Treasury Bonds

US Treasury Yield curve data

About the author

The article was written in October 2025 by Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management (MiM), 2021-2024).

The Art of a Stock Pitch: From Understanding a Company to Building a Coherent Logic

Dawn DENG

In this article, Dawn DENG (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Smith-ESSEC Double Degree Program, 2024-2026) offers a practical introduction to building a beginner-friendly stock pitch—from selecting a company you truly understand, to structuring the investment thesis, and translating logic into valuation. The goal is not to produce “perfect numbers,” but to make your reasoning coherent, transparent, and testable.

Why learn to do a stock pitch?

Learning to pitch a stock is learning to tell a story in financial language. Whether you are aiming at investment banking, asset management, or equity research roles—or competing in a student investment fund—the stock pitch is a core exercise that reveals both how you think and how you communicate. Within ten minutes, you must answer three questions: Who is this company? Why is it worth investing in? And how much is it worth? A strong pitch convinces not by breadth of information, but by reasoning that is consistent, evidence-based, and verifiable.

Choosing a company: balance understanding and interest

For beginners, picking the right company matters more than picking the right industry. Do not start by hunting the next “multibagger.” Start with a business you can truly explain: how it makes money, who its customers are, and what drives its costs. Familiar products and clear business models are your best teachers. I first learned how to build a stock pitch during my Investment Banking Preparatory Program at my home university, Queen’s Smith School of Business. The program was designed to train first- and second-year students in the fundamentals of financial modeling, valuation, and investment reasoning. In my first pitch, delivered to an audience of student investment club members and the professor, I chose L3Harris Technologies (NYSE: LHX), a company working across defense communications and space systems. Its complexity pushed me to locate it precisely in the value chain: not a weapons maker, but a critical node in command-and-control. No valuation model can substitute for that kind of business understanding.

Industry analysis: space, structure, and cycle

The defense sector operates under multi-year budget cycles, long procurement timelines, and high barriers to entry. The market is dominated by five major U.S. contractors—Lockheed Martin, Northrop Grumman, General Dynamics, Raytheon, and L3Harris. While peers tend to focus on platform manufacturing, L3Harris differentiates itself through integrated communication and command systems, giving it recurring revenue and a lighter asset base. This focus positions the company at the intersection of AI-driven defense innovation and space-based data systems—a niche expected to grow rapidly as military operations become more network-centric.

Investment thesis: three key arguments

(1) Strategic Layer – “Why now”

The defense industry is entering a new digitalization cycle. L3Harris’s acquisition of Aerojet Rocketdyne expands its vertical integration into propulsion and guidance, while its strong exposure to secure communication networks aligns with rising defense budgets for AI and satellite modernization.

(2) Competitive Layer – “Why this company”

Compared to peers, L3Harris demonstrates strong operational efficiency and disciplined capital allocation. Its EBITDA margin of ~20% and R&D intensity near 4% of revenue outperform sector averages. Management has proven its ability to sustain synergy realization post-merger, reducing leverage faster than expected.

(3) Financial Layer – “Why it matters”

The company’s robust cash generation supports consistent dividend growth and share repurchases, signaling confidence and financial flexibility. Our base-case target price was USD 287, implying ~12% upside, supported by improving free cash flow yield and moderate multiple expansion.

Valuation: turn logic into numbers

Valuation quantifies your logic. At the beginner level, focus on two complementary methods: Relative Valuation and Absolute Valuation (DCF). The first tells you how markets price similar assets; the second estimates intrinsic value under your assumptions. Use them to cross-check each other.

Relative Valuation

We benchmarked L3Harris Technologies against major U.S. defense peers including Lockheed Martin, Northrop Grumman, and Raytheon Technologies, using EV/EBITDA and P/E multiples as our key comparative metrics. Peers traded at around 14–16× EV/EBITDA, consistent with the industry’s steady cash-flow profile. However, given L3Harris’s stronger growth visibility, improving free cash flow, and synergies expected from the Aerojet Rocketdyne acquisition, we assigned a justified multiple of 17× EV/EBITDA—positioning it slightly above the sector average. This premium reflects not only its operational efficiency but also its role in the ongoing digital transformation of defense communications and space systems.
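
As a quick illustration of how a justified multiple translates into a value, the short Python sketch below applies a 17× EV/EBITDA multiple and bridges to an implied share price. All inputs (EBITDA, net debt, share count) are placeholder assumptions, not L3Harris's actual figures, so the output will not match the USD 287 target discussed above.

# Illustrative EV/EBITDA valuation with placeholder inputs
ebitda = 4.0e9               # hypothetical forward EBITDA (USD)
ev_ebitda_multiple = 17      # justified multiple from the comps analysis
net_debt = 11.0e9            # hypothetical net debt (USD)
shares_outstanding = 190e6   # hypothetical share count

enterprise_value = ebitda * ev_ebitda_multiple
equity_value = enterprise_value - net_debt
implied_price = equity_value / shares_outstanding

print(f"Enterprise value: {enterprise_value/1e9:.1f} bn")
print(f"Equity value:     {equity_value/1e9:.1f} bn")
print(f"Implied price:    {implied_price:.2f} per share")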

Absolute Valuation (Discounted Cash Flow)

DCF values the business as the present value of future free cash flows. Build operational drivers in business terms (volume/price, mix, scale effects), then translate into FCF:
FCF = EBIT × (1 – tax rate) + D&A – CapEx – ΔWorking Capital. Choose a WACC consistent with long-term capital structure (equity via CAPM; debt via yield or recent financing, after tax). For terminal value, use a perpetual growth rate aligned with nominal GDP and industry logic, or an exit multiple consistent with your relative valuation. Present a range via sensitivity (WACC, terminal growth, margins, CapEx) rather than a single precise point. Where DCF and multiples converge, your target price gains credibility; where they diverge, explain the source—cycle position, peer distortions, or different long-term assumptions.
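
To make the mechanics concrete, here is a minimal Python sketch of a DCF with a WACC / terminal-growth sensitivity grid. The cash flows, WACC values and growth rates are illustrative assumptions only, not the inputs behind the USD 287 target price.

# Minimal DCF with terminal value and a WACC / growth sensitivity grid (illustrative inputs)
fcf = [2.1e9, 2.3e9, 2.5e9, 2.7e9, 2.9e9]   # hypothetical free cash flows, years 1-5 (USD)

def dcf_value(fcf, wacc, g):
    # Present value of the explicit forecast period
    pv_explicit = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcf, start=1))
    # Gordon-growth terminal value at the end of the forecast, discounted back to today
    tv = fcf[-1] * (1 + g) / (wacc - g)
    pv_terminal = tv / (1 + wacc) ** len(fcf)
    return pv_explicit + pv_terminal

for wacc in (0.07, 0.08, 0.09):
    for g in (0.015, 0.02, 0.025):
        print(f"WACC {wacc:.1%}, g {g:.1%}: EV = {dcf_value(fcf, wacc, g)/1e9:.1f} bn")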

Risks and catalysts: define uncertainty

Every pitch must face uncertainty head-on. Map the fragile links in your logic—macro and policy (rates, budgets, regulation), competition and disruption (new entrants, technology shifts), execution and governance (integration, capacity ramp-up, incentives). Then specify catalysts and timing windows: earnings and guidance, major contracts, launches or pricing moves, structural margin inflections, M&A progress, or regulatory milestones. Make it explicit what would validate your thesis and when you would reassess.

Related posts on the SimTrade blog

   ▶ Cornelius HEINTZE Two-Stage Valuation Method: Challenges

   ▶ Andrea ALOSCARI Valuation Methods

   ▶ Jorge KARAM DIB Multiples Valuation Method for Stocks

Useful resources

Mergers & Inquisitions How to Write a Stock Pitch

Training You Stock Pitch en Finance de Marché : définition et méthode

Harvard Business School Understanding the Discounted Cash Flow (DCF) Method

Corporate Finance Institute Types of Valuation Multiples and How to Use Them

About the author

The article was written in October 2025 by Dawn DENG (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Smith-ESSEC Double Degree Program, 2024-2026).

Assessing a Company’s Creditworthiness: Understanding the 5C Framework and Its Practical Applications


Dawn DENG

In this article, Dawn DENG (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Smith-ESSEC Double Degree Program, 2024-2026) presents a practical framework for assessing a company’s creditworthiness. The analysis integrates both financial and non-financial dimensions of trust, using the classic 5C framework widely adopted in banking and corporate finance.

Why assess creditworthiness

In corporate finance, assessing a company’s creditworthiness lies at the heart of lending, underwriting, and risk management. For banks, it is not only a “yes/no” lending decision (but also the level of the interest rate proposed to the client); it is a structured way to understand repayment capacity, operating quality, and long-term sustainability. The goal is not to label a company as “good” or “bad,” but to answer three questions: Can it repay? Will it repay? If not, how much can be recovered?

The five pillars of credit analysis: the 5C framework

The 5C framework, an industry standard that crystallized over decades of banking practice and supervisory guidance, assesses five core dimensions: Character, Capacity, Capital, Collateral, and Conditions. Rather than originating from a single author or institution, it emerged progressively across lenders’ credit manuals, central-bank training, and regulator handbooks, and is now embedded in banks’ risk-rating and loan-pricing models. These components are interdependent: strength in one area can mitigate weaknesses in another, while vulnerabilities may compound when several Cs deteriorate at the same time.

The five pillars of credit analysis: the 5C framework
Source: the author.

Character: reputation and track record

Character covers the firm’s reputation and willingness to honor obligations. Analysts review borrowing history, repayment behavior, disclosure practices, management integrity, and banking relationships. A consistent record of timely payments and transparent reporting typically earns a stronger credit score.

For example, a mid-sized manufacturer that consistently meets payment deadlines and maintains transparent reporting will typically be viewed as a low-risk borrower, even if its margins are moderate.

Capacity: ability to repay

Capacity assesses whether operating cash flow can service debt on time. Core indicators include interest coverage (EBIT/Interest), the debt service coverage ratio (DSCR), and liquidity ratios (current, quick, cash). As a rule of thumb, an interest coverage below 2× or a DSCR below 1.0× often signals liquidity pressure.

For example, in 2023, several property developers in China exhibited DSCR levels below 1.0 amid declining sales, illustrating how even profitable firms can face repayment stress when cash inflows weaken.
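
The Python sketch below computes the two rule-of-thumb capacity ratios mentioned above from hypothetical statement figures; the thresholds (2× coverage, 1.0× DSCR) follow the text, while all input numbers are placeholders.

# Capacity ratios from hypothetical financials (amounts in millions)
ebit = 180.0
interest_expense = 60.0
scheduled_principal = 90.0
depreciation_amortization = 40.0

interest_coverage = ebit / interest_expense
# One common DSCR definition: cash available for debt service over total debt service
dscr = (ebit + depreciation_amortization) / (interest_expense + scheduled_principal)

print(f"Interest coverage: {interest_coverage:.1f}x (below 2x often signals pressure)")
print(f"DSCR:              {dscr:.2f}x (below 1.0x often signals pressure)")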

Capital: structure and leverage

Capital reflects how the company balances debt and equity. Key metrics are Debt-to-Equity, Debt-to-Assets, and Net Debt/EBITDA. Higher leverage raises financial risk, but acceptable ranges are industry-specific: capital-intensive sectors may tolerate 2–3× EBITDA, while asset-light tech/retail often sit closer to 0.5–1.5×.

A practical example: L3Harris Technologies, a U.S. defense contractor, maintains moderate leverage with strong cash conversion, reinforcing its credit profile despite large-scale acquisitions.
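
Similarly, the leverage metrics can be computed in a few lines; the balance-sheet figures below are hypothetical, and the comment simply restates the industry ranges given above.

# Leverage ratios from hypothetical balance-sheet figures (amounts in millions)
total_debt = 900.0
cash = 150.0
equity = 1200.0
total_assets = 2600.0
ebitda = 400.0

debt_to_equity = total_debt / equity
debt_to_assets = total_debt / total_assets
net_debt_to_ebitda = (total_debt - cash) / ebitda   # 2-3x may be acceptable in capital-intensive sectors

print(f"Debt/Equity:     {debt_to_equity:.2f}")
print(f"Debt/Assets:     {debt_to_assets:.2f}")
print(f"Net debt/EBITDA: {net_debt_to_ebitda:.2f}x")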

Collateral: security and guarantees

Collateral is the lender’s safety net. Recoveries depend on the value and liquidity of pledged assets (property, receivables, equipment). Asset-light firms lack hard collateral and thus rely more on cash-flow quality and relationship history to mitigate risk.

Asset-light companies (e.g., software, consulting) rely more on cash flow and relationship capital rather than tangible assets, making consistent performance crucial to maintaining credit access.

Conditions: macro and industry context

Conditions cover both external factors (interest rates, regulations, economic cycles) and loan-specific purposes.

During tightening monetary cycles, higher financing costs can compress margins, while in recessionary or trade-sensitive sectors, declining demand directly raises default risk. For example, during 2022’s rate hikes, small exporters with floating-rate debt experienced significant declines in credit ratings due to rising interest expenses.

Financial perspective: reading credit signals in the statements

Effective credit analysis connects the three statements: the income statement (profitability), balance sheet (capital structure and asset quality), and cash flow statement (true repayment capacity).

Income statement: focus on revenue stability, margin trends, and the weight of non-recurring items. Persistent declines in gross or operating margins may indicate weakening competitiveness.

Balance sheet: examine asset quality and liability mix. High receivables or inventory build-ups can flag liquidity strain; heavy short-term debt raises refinancing risk.

Cash flow statement: the practical health check. Sustainable, positive operating cash flow that covers interest and capex signals solvency; strong accounting profits with chronically negative cash flow suggest poor earnings quality.

Useful cross-checks include Operating Cash Flow/Total Debt (coverage of principal from operations) and the persistence of negative free cash flow funded by external capital (a sign of structural vulnerability).

Beyond numbers: governance, transparency, and relationship capital

Creditworthiness extends beyond ratios. Governance quality, reporting transparency, competitive barriers, and banking relationships shape real-world risk. Policy-sensitive sectors (e.g., energy, real estate) exhibit higher cyclicality; tech and retail hinge on stable cash generation and customer retention. Stable leadership, prudent accounting, and timely disclosures build lender confidence. Long-standing cooperation and on-time performance often translate into better terms, a compounding of “relationship capital.”

At its core, credit is a form of deferred trust: banks lend to future behaviors and cash flows. Whether a firm deserves that trust depends on how it balances transparency, responsibility, and disciplined execution.

Conclusion

Credit analysis is not merely about numbers; it is about understanding how financial structure, behavioral consistency, and institutional trust interact. The 5C framework provides a structured map, yet effective analysts also recognize the fluid connections among its components: good character supports capital access, strong capacity reinforces collateral confidence, and favorable conditions amplify all others. Assessing creditworthiness is thus the art of finding order amid uncertainty, of determining whether a company can remain stable when markets turn turbulent.

Related posts on the SimTrade blog

About credit risk

   ▶ Jayati WALIA Credit risk

   ▶ Jayati WALIA Quantitative risk management

   ▶ Bijal GANDHI Credit Rating

About professional experiences

   ▶ Snehasish CHINARA My Apprenticeship Experience as Customer Finance & Credit Risk Analyst at Airbus

   ▶ Jayati WALIA My experience as a credit analyst at Amundi Asset Management

   ▶ Aamey MEHTA My experience as a credit analyst at Wells Fargo

Useful resources

Allianz Trade Determining Customer Creditworthiness

Emagia blog Assessing a Company’s Creditworthiness

About the author

The article was written in October 2025 by Dawn DENG (ESSEC Business School, Global Bachelor in Business Administration (GBBA), Smith-ESSEC Double Degree Program, 2024-2026).

The Two-Stage Valuation Method and its challenges

Cornelius HEINTZE

In this article, Cornelius HEINTZE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025) explains how the two-stage valuation model and the segmentation in growth stage and stable phase impact the valuation of companies and which problems tend to arise with the use of this model.

Why this is important

The valuation of companies is always present in the world of finance. We see it in Mergers and Acquisitions (M&A), initial public offerings (IPOs) and daily stock market pricing where firms are valued within seconds based on new information. For markets to function properly, valuations need to represent the underlying company as precisely as possible. Otherwise, information asymmetries increase, leading to inefficient or even dysfunctional markets.

The Two-Stage Model

The Two-Stage Model is the traditional model used by finance experts across the world. What makes it stand out is the segmentation of the valuation into two steps:

  • Growth phase (explicit forecast period): In this phase, the company’s future cash flows are projected in detail for each year t = 1 … T. These cash flows are then discounted back to the valuation date using the discount rate r:

    PV(Growth phase) = Σ(t=1…T) CF_t / (1 + r)^t

  • Stable phase (terminal value): After the explicit forecast horizon, the company is assumed to enter a stable stage. Two assumptions are needed for this stage and its equations. First, it is assumed that the company can realize its cash flows over an indefinite timespan. Second, it is assumed that the perpetual growth rate g does not exceed the growth rate of the whole economy. The two common resulting equations are:
    • No growth (steady state):
      PV(Stable phase) = CF_stable / (r × (1 + r)^T)

    • Constant growth in perpetuity:
      PV(Stable phase) = CF_(T+1) / ((r − g) × (1 + r)^T)

Total firm value is then the sum of both parts:

Value = PV(Growth phase) + PV(Stable phase)

Problems with the Two-Stage Model

If we look closer at the equations for the stable phase, we realize that they represent a perpetuity. Given the assumptions above, this is also the only possible outcome. But this circumstance leads to the first big problem of the Two-Stage Model: the stable phase often makes up over 50% of the firm value. This is a problem because the assumptions for the stable phase are often very subjective and not very realistic. The problem becomes even larger when a constant growth rate is assumed. Let’s look at this through an example:

Assumptions: discount rate r = 10%, explicit forecast over T = 5 years with free cash flows (in €m): 80, 90, 95, 98, 100. After year 5, we consider two terminal cases.

Phase 1 – Present value of explicit cash flows

  • Year 1: 80 / (1.10)^1 = 72.73
  • Year 2: 90 / (1.10)^2 = 74.38
  • Year 3: 95 / (1.10)^3 = 71.37
  • Year 4: 98 / (1.10)^4 = 66.94
  • Year 5: 100 / (1.10)^5 = 62.09

PV(Phase 1) ≈ 347.51 (€m)

Phase 2 – Stable phase

  • (a) No growth: CF_stable = 100 ⇒ TV at t=5
    PV(Terminal) = 100 / (0.10 × (1.10)^5) = 620.92

  • (b) Constant growth g = 2%: CF_(T+1) = 100 ⇒ TV at t=5
    PV(Terminal) = 100 / ((0.10 − 0.02) × (1.10)^5) = 776.15

Total value and weights

  • No growth: Total = 347.51 + 620.92 = 968.43 ⇒ Stable Phase share ≈ 64.1%, Phase-1 share ≈ 35.9%
  • g = 2%: Total = 347.51 + 776.15 = 1,123.66 ⇒ Stable Phase share ≈ 69.1%, Phase-1 share ≈ 30.9%
  • Impact of growth: Increase in the firm value of 155.23 or ≈ 16%

Takeaway: A modest increase in the perpetual growth rate from 0% to 2% raises the terminal present value by ~155 (€m) and lifts its weight from ~64% to ~69% of total value. This illustrates the strong sensitivity of the two-stage model to terminal assumptions.
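
A minimal Python sketch reproducing the calculation above (same cash flows, discount rate, and the two terminal cases) makes it easy to vary the assumptions yourself:

# Two-stage valuation reproducing the example above (amounts in EUR millions)
cash_flows = [80, 90, 95, 98, 100]
r = 0.10

pv_phase1 = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))

def pv_terminal(cf_stable, r, g, T):
    # Perpetuity (growing if g > 0) valued at T, then discounted back to t = 0
    return cf_stable / ((r - g) * (1 + r) ** T)

pv_tv_no_growth = pv_terminal(100, r, 0.00, len(cash_flows))   # ≈ 620.9
pv_tv_growth = pv_terminal(100, r, 0.02, len(cash_flows))      # ≈ 776.1

for label, tv in (("No growth", pv_tv_no_growth), ("g = 2%", pv_tv_growth)):
    total = pv_phase1 + tv
    print(f"{label}: total = {total:.1f}, stable-phase share = {tv / total:.1%}")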

If you want to try this out for yourself and explore the sensitivity of the firm value to the growth rate, you can do so in the Excel file I created for this example, as shown below:

Two-Stage Model Example 1

Another very interesting fact becomes visible while trying out the model, one commonly seen in early-stage tech startups or startups in general with very high early investment costs (for example, software development): they may show a negative value over the growth phase, but in the long run it is assumed that these companies reach a constant growth rate and positive cash flows, offsetting the negative growth phase. This again shows how much of an impact the stable and growth phases have on the firm value.

Two-Stage Model Example Startup

You can download the excel file here:

Download the Excel file for Two-Stage-Model Analysis

Implications for practical use and solutions

As seen in the example, the assumptions behind the stable phase, both the projected cash flows and whether the company’s circumstances justify the use of a growth rate, play a big role in the valuation of the firm. Deciding on these assumptions lies with the firms that value the company, or with the company valuing itself. They are therefore highly subjective and must be transparent at all times to ensure an appropriate valuation. If this is not the case, firms can be valued much higher than is appropriate and therefore convey false information.

To counter this, it is recommended to incorporate various valuation methods to verify that the value is neither too high nor too low but lies within a band of plausible values. This is often part of a fairness opinion issued by an independent company. You can see an example below, where Morgan Stanley drafted a fairness opinion for Monsanto for the merger with Bayer:

Full SEC Statement for the merger

To sum up…

The Two-Stage Valuation Model remains a cornerstone in corporate finance because of its simplicity and structured approach. However, as the example shows, the stable phase dominates the overall result and makes valuation highly sensitive to small changes in assumptions. In practice, analysts and other users of the information provided by the valuing company should therefore apply the model with caution, test alternative scenarios, and complement it with other methods. Looking ahead, the combination of traditional models with advanced techniques such as multi-stage models, sensitivity analyses, or even simulation approaches can provide a more balanced and reliable picture of a company’s value.

Why should I be interested in this post?

Whether you are a student of finance, an investor, or simply curious about how firms are valued, understanding the Two-Stage Valuation Model is essential. It is one of the most widely used approaches in practice and often determines the prices we see in the markets, from IPOs to M&A. By being aware of both its strengths and its limitations, you can better interpret valuation results and make more informed financial decisions.

Related posts on the SimTrade blog

   ▶ All posts about financial techniques

   ▶ Jorge KARAM DIB Multiples valuation methods

   ▶ Andrea ALOSCARI Valuation methods

   ▶ Samuel BRAL Valuing the Delisting of Best World International Using DCF Modeling

Useful resources

Paul Pignataro (2022) “Financial modeling and valuation: a practical guide to investment banking and private equity” Wiley, Second edition.

Tim Koller, Marc Goedhart, David Wessels (2010) “Valuation: Measuring and Managing the Value of Companies”, McKinsey and Company.

Fairness Opinion Example

About the author

The article was written in October 2025 by Cornelius HEINTZE (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025).

Valuing the Delisting of Best World International Using DCF Modeling

Samuel BRAL

In this article, Samuel BRAL (ESSEC Business School, Global BBA – Exchange at NUS, 2025) shares how he conducted a valuation of Best World International using a Discounted Cash Flow model in Excel. This modeling exercise was part of a corporate finance case during his exchange at the National University of Singapore.

Context of the project

During my exchange at NUS, I was asked to evaluate the fair price at which Best World International, a Singaporean skincare and wellness company, could be taken private. The company had announced its intention to delist from the Singapore Exchange (SGX). My role was to determine the intrinsic value per share using a discounted cash flow approach that distinguishes between a high-growth projection period and a long-term steady-state phase. The goal was to assess whether the proposed buyout price was fair to minority shareholders.

Understanding the DCF method

The Discounted Cash Flow method estimates the value of a company by forecasting its future free cash flows and discounting them back to their present value using the firm’s Weighted Average Cost of Capital. This method is widely used by investment banks, private equity firms, and corporate finance teams for valuing companies, especially in the context of M&A and privatizations.

Well-known examples of its application include the valuation of Twitter during its acquisition by Elon Musk in 2022 and the fairness opinions issued by investment banks in LBO transactions such as the Bain Capital acquisition of Kioxia.

Step-by-step technical implementation

The Excel model followed a two-stage DCF approach: an explicit forecast period from 2024 to 2028 and a terminal value from 2029 onward. Below is a breakdown of the modeling process:

1. Revenue Forecasting

I projected revenue growth using a blended approach. I considered:

  • The average historical CAGR of BWI’s revenues between 2021 and 2023.
  • The expected CAGR for the ASEAN cosmetics and wellness industry (7–9%) based on Statista and Euromonitor data.

Revenue = Previous Year Revenue × (1 + Growth Rate)

2. EBIT Estimation

I calculated EBIT by projecting the cost structure of the business:

  • I took historical averages of cost items such as COGS and SG&A as a percentage of revenue.
  • Assumed that operating leverage would allow fixed costs to grow slower than revenue, improving margins over time.

EBIT = Revenue – Operating Costs

3. Tax Adjustment and NOPAT

I applied a normalized effective tax rate based on BWI’s historical tax filings and Singapore’s corporate tax regime (17%).

NOPAT = EBIT × (1 – Tax Rate)

4. Depreciation and CAPEX

I assumed CAPEX as a stable % of revenue, using 2023 data as the benchmark. Depreciation was projected using the historical ratio of D&A to CAPEX.

Free Cash Flow = NOPAT + Depreciation – CAPEX – ΔWorking Capital

5. Net Working Capital (NWC)

NWC = Current Assets – Current Liabilities. I used the average NWC-to-revenue ratio from past years to forecast changes in NWC.

6. Terminal Value and Discounting

The Terminal Value, which captures the value of a business beyond the explicit forecast period in a DCF analysis (often 5 or 10 years into the future), was calculated using the Gordon Growth formula:

TV = FCF_2028 × (1 + g) / (WACC – g)

Where g was estimated at 2.5%, reflecting long-term GDP and sector growth rates in the ASEAN region.

Both FCFs and Terminal Value were discounted using WACC (5.55%). The present values were then summed to calculate Enterprise Value.

7. Equity Value per Share

Enterprise Value – Total Debt + Cash = Equity Value

Equity Value / Number of Shares = Value per Share
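
To summarise steps 1 to 7 in one place, the following Python sketch strings the drivers together. Every numeric input is a simplified placeholder (the actual model used BWI's historical figures); only the WACC of 5.55% and the terminal growth of 2.5% restate the assumptions given above.

# Simplified two-stage DCF following steps 1-7 (placeholder inputs, SGD millions)
revenue0 = 600.0
growth = 0.08            # blended revenue growth assumption
ebit_margin = 0.25
tax_rate = 0.17          # Singapore corporate tax rate
capex_pct = 0.04         # CapEx as % of revenue
da_pct = 0.03            # D&A as % of revenue
nwc_pct = 0.10           # net working capital as % of revenue
wacc = 0.0555
g_terminal = 0.025

years = range(1, 6)      # explicit forecast 2024-2028
revenue, nwc_prev, pv_fcf = revenue0, revenue0 * nwc_pct, 0.0
for t in years:
    revenue *= 1 + growth
    ebit = revenue * ebit_margin
    nopat = ebit * (1 - tax_rate)
    nwc = revenue * nwc_pct
    fcf = nopat + revenue * da_pct - revenue * capex_pct - (nwc - nwc_prev)
    pv_fcf += fcf / (1 + wacc) ** t
    nwc_prev = nwc

terminal_value = fcf * (1 + g_terminal) / (wacc - g_terminal)   # Gordon Growth on final-year FCF
enterprise_value = pv_fcf + terminal_value / (1 + wacc) ** len(years)
print(f"Enterprise value: SGD {enterprise_value:,.0f} m")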

WACC and Beta calculation

WACC reflects the average cost of capital from both equity and debt, weighted by their proportions in the firm’s capital structure; it serves as the discount rate for the projected future cash flows. For companies like BWI, which operate in niche, consumer-focused markets, WACC provides a benchmark for evaluating whether future growth justifies current valuations.

  • Cost of equity was derived using the Capital Asset Pricing Model (CAPM):
  • Cost of Equity = Risk-Free Rate + Beta × Market Risk Premium
  • Beta was computed by unlevering and relevering betas of comparable firms in China, Taiwan, and Malaysia. This accounts for business and financial risk.
  • Cost of debt was based on comparable bond yields and company-specific risks.
  • Capital structure weights were based on BWI’s most recent financial statements.

The figures below show how I proceeded.

WACC Computation

Beta Computation
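
A minimal Python sketch of the CAPM and beta re-levering logic is shown below. The risk-free rate, market risk premium, comparable betas, and capital-structure weights are placeholder assumptions rather than the figures from my spreadsheet, so the output will not reproduce the 5.55% WACC used in the model.

# CAPM cost of equity and Hamada-style beta re-levering (placeholder inputs)
risk_free = 0.03
market_risk_premium = 0.055
tax_rate = 0.17

# Comparable firms: (levered beta, debt/equity ratio), hypothetical values
comps = [(0.95, 0.30), (1.10, 0.50), (0.85, 0.20)]
unlevered = [b / (1 + (1 - tax_rate) * de) for b, de in comps]
beta_u = sum(unlevered) / len(unlevered)

target_de = 0.15                              # assumed target debt/equity ratio
beta_relevered = beta_u * (1 + (1 - tax_rate) * target_de)

cost_of_equity = risk_free + beta_relevered * market_risk_premium
cost_of_debt_after_tax = 0.04 * (1 - tax_rate)   # assumed pre-tax cost of debt of 4%
weight_equity, weight_debt = 0.90, 0.10          # assumed capital-structure weights

wacc = weight_equity * cost_of_equity + weight_debt * cost_of_debt_after_tax
print(f"Re-levered beta: {beta_relevered:.2f}, cost of equity: {cost_of_equity:.2%}, WACC: {wacc:.2%}")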

Key results and analysis

The model output was:

  • Enterprise Value = SGD 4.8 billion
  • Equity Value = SGD 4.18 billion
  • Intrinsic Value per Share = SGD 9.72 (vs. proposed delisting price of SGD 7.00)

This suggests that the buyout offer undervalued the company by more than 30%. This raised questions of fairness for minority shareholders, echoing similar cases in Asia such as the privatization of Wing Tai Holdings or the delisting of Global Logistic Properties.

Download the Excel file

If you want to access a part of my work on the projections and DCF, click the link below:

Download the Excel file for WACC and Beta analysis

Why should I be interested in this post?

This modeling project not only strengthened my technical finance skills but also helped me think critically about shareholder rights, valuation fairness, and the role of financial modeling in defending minority interests. Mastering the DCF approach is essential for anyone pursuing investment banking, private equity, or corporate strategy roles.

Related posts on the SimTrade blog

   ▶All posts about Technical techniques

   ▶ Andrea ALOSCARI Valuation Methods

   ▶ Yann-Ray KAMANOU TAWAMBA Understanding the Discount Rate: A Key Concept in Finance

   ▶ William LONGIN How to compute the present value of an asset?

   ▶ Andrea ALOSCARI Internship: Corporate & Investment Banking (Intesa Sanpaolo)

Useful resources

SimTrade Platform

Monetary Authority of Singapore

About the author

This article was written in September 2025 by Samuel BRAL (ESSEC Business School, Global Bachelor in Business Administration – Exchange at NUS).

Forecasting Airline Route Profitability with Monte Carlo Simulation

Samuel BRAL

In this article, Samuel BRAL (ESSEC Business School, Global BBA – Exchange at NUS, 2025) explains how he applied Monte Carlo simulations to support Emirates Airlines in evaluating the profitability of launching a new long-haul route under conditions of uncertainty.

Context of the project

This project was part of the course “Decision Analytics using Spreadsheets” at the National University of Singapore (NUS). I was asked to provide a quantitative recommendation to Emirates Airlines on selecting a new international route from Dubai. The available destination options included Buenos Aires, Tokyo, Cape Town, and Cairo.

Due to the complexity of airline operations and the uncertainty surrounding factors such as demand, ticket prices, no-show rates, and operating costs, a traditional static financial model would not be sufficient. Instead, I built a Monte Carlo simulation model to capture the dynamic range of possible outcomes and assess the risk-return profile of each destination.

What is a Monte Carlo simulation?

A Monte Carlo simulation is a mathematical technique used to estimate the probability distribution of outcomes when there is uncertainty in the input variables. By running thousands of simulations using random values generated from defined probability distributions, the method provides insights into the range, likelihood, and volatility of potential results.

This approach is commonly used in financial modeling, risk analysis, and engineering. For example, investment banks use Monte Carlo models to simulate portfolio returns and Value at Risk (VaR), while oil and gas companies apply them to forecast drilling success and production volumes.

Simulation approach and methodology

I built a simulation model in Excel that executed 2,000 trials per route. Each trial simulated a potential outcome based on randomly generated values for key variables. The profit was calculated using the following formula:

Profit = (Tickets Sold × Ticket Price) – Operating Costs – Compensation Costs

Here is how each component was modeled:

  • Passenger demand: Modeled as a normal distribution using historical demand averages and standard deviations for each route. For example, Tokyo exhibited more stable demand, while Buenos Aires showed higher variance due to geopolitical and economic volatility in Argentina.
  • Ticket price: Ticket prices were generated using NORM.INV(RAND(), mean, stdev) to account for fluctuations caused by competitive pricing, seasonal variation, and macroeconomic factors like fuel costs and currency movements.
  • No-show rate: Modeled with a uniform distribution between 5% and 10%, based on IATA statistics and academic studies on airline overbooking behavior (source: IATA Global Passenger Survey, 2023).
  • Aircraft assignment: Simulated using a discrete probability distribution based on the actual Emirates fleet composition (e.g., A380, Boeing 777). Larger aircraft allowed more passengers but incurred higher operating costs.
  • Compensation cost: Incurred when demand exceeded seat capacity, reflecting the cost of rebooking, refunds, and customer service. These costs were calibrated using Emirates’ historical compensation data for overbooking cases (source: Emirates Annual Report 2023).

To execute the simulations, I used Excel’s Data Table function to loop through trials and capture the output profit distribution for each destination. From this distribution, I calculated:

  • Expected profit (mean)
  • Standard deviation of profit (volatility)
  • Probability of a loss (profit < 0)
  • Probability of a significant loss (loss > SGD 100,000)
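
The same logic implemented with Excel's Data Table can be sketched in Python. The distribution parameters below (demand, prices, costs, compensation) are illustrative placeholders, not the route data used in the project.

# Monte Carlo simulation of route profit (illustrative parameters, 2,000 trials)
import random

TRIALS = 2000
SEATS = 450                                       # assumed aircraft capacity
profits = []
for _ in range(TRIALS):
    demand = random.gauss(430, 60)                # passenger demand ~ Normal
    price = random.gauss(900, 120)                # ticket price ~ Normal (SGD)
    no_show_rate = random.uniform(0.05, 0.10)     # no-shows ~ Uniform(5%, 10%)
    showing_up = demand * (1 - no_show_rate)
    flown = min(showing_up, SEATS)
    bumped = max(showing_up - SEATS, 0)           # passengers needing compensation
    operating_cost = 250_000                      # assumed operating cost per flight
    compensation = bumped * 800                   # assumed compensation per bumped passenger
    profits.append(flown * price - operating_cost - compensation)

mean_profit = sum(profits) / TRIALS
prob_loss = sum(p < 0 for p in profits) / TRIALS
prob_big_loss = sum(p < -100_000 for p in profits) / TRIALS
print(f"Expected profit: {mean_profit:,.0f}  P(loss): {prob_loss:.2%}  P(loss > 100k): {prob_big_loss:.2%}")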

Key results and insights

The simulation identified Buenos Aires as the most profitable option with an expected profit of SGD 292,247 and a 99.65% chance of profitability. However, the route also exhibited a small 0.1% risk of incurring losses above SGD 100,000 due to volatile demand and long travel distance.

Cape Town, while less profitable, offered near-zero downside risk. Tokyo had moderate returns and relatively low variance. This reflects a classic risk-return tradeoff that airlines often face: should the company pursue high-reward but volatile destinations, or opt for stable but lower-margin routes?

Additionally, I tested various overbooking strategies. An overbooking rate of 9.3% was found to optimize expected profits while keeping the cost of passenger compensation within an acceptable range. This mirrors real-world practices, where carriers like Delta and Lufthansa use algorithmic overbooking based on historical no-show patterns to maximize seat utilization (source: MIT Airline Data Project). If you want access to the work, the Excel file below provides an overview of all routes as well as the detailed analysis for Buenos Aires.

Download the Excel file for Monte Carlo simulation

Why should I be interested in this post?

This project demonstrates how Monte Carlo simulations transform business decision-making under uncertainty. Instead of relying on single-point forecasts, the model enabled me to quantify risk, test strategic decisions (like overbooking), and provide data-driven recommendations.

For students and professionals in finance, consulting, or operations, Monte Carlo simulation is a core technique for scenario planning and risk assessment. It enhances decision quality in fields as diverse as project finance, asset management, supply chain optimization, and policy modeling.

Related posts on the SimTrade blog

   ▶All posts about Technical Subjects

   ▶Professional experience: Head of Data Modelling

   ▶Professional experience: Business Data Analyst at Tikehau Capital

Useful resources

SimTrade Platform

IATA Global Passenger Survey 2023

Emirates Annual Report and Press Releases

MIT Airline Data Project

About the author

This post was written in September 2025 by Samuel BRAL (ESSEC Business School, Global Bachelor in Business Administration – Exchange at NUS).

Understanding organizations’ role in bargaining tariffs

Annie YEUNG

In this article, Annie YEUNG (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025) explains how tariff bargaining involves international organizations, governments, and industries.

The World Trade Organization (WTO)

The WTO was established in 1995 and is the most influential international organization in managing and negotiating global trade. The WTO includes 164 member countries, and its main goals include facilitating trade negotiations and monitoring trade policies, especially tariffs. Statistics show that average global tariffs have decreased since the establishment of the WTO. For example, average bound tariffs decreased by nearly 3 percent from 1995 to 2023.

The WTO also plays an important role in bargaining over tariffs. For example, the Doha Development Round, launched in 2001 under the WTO, has focused on improving trading conditions for developing nations. This negotiation round, initiated by the WTO, aims to make global trade more equitable. Its missions include reducing agricultural subsidies and lowering tariffs, all of which are intended to help developing nations improve their access to the global market. Negotiations have taken place between developed and developing countries, focusing on farm subsidies, market access, and lower industrial tariffs to allow broader participation in global trading activities. However, it is important to note that, despite ongoing talks, this round of negotiation has not succeeded in delivering an agreement. This failure can be attributed to the divergent trade interests of different countries. For example, there have been different perspectives on issues such as agricultural subsidies and tariffs on services, resulting in unresolved conflicts despite negotiation.

WTO negotiations reflect deeper global power imbalances, which are manifested in trading activities and in the tariffs imposed. The more influential and wealthier nations often hold more bargaining power in the global trading landscape. For example, these influential countries, particularly those in the G20, often set the agenda in trade negotiations and possess greater negotiating capacity. As a result, these countries often dominate negotiations, creating a dynamic that advances their trade interests during the bargaining of tariffs.

The African Group and the Association of Southeast Asian Nations (ASEAN) are examples of groups that have become more active: their member countries have formed these alliances in order to bargain for equity in the global trading landscape. These alliances push for trading systems that recognize asymmetries between economies and provide stability for less developed countries, while developed nations push for lower tariffs across all sectors. However, some countries contend that lower tariffs could destabilize their economies. As a result, the WTO plays a crucial role in addressing this imbalance. For example, the “Special and Differential Treatment” provisions of the WTO framework give developing countries more support in implementing trade agreements and reducing tariffs. This can reduce unfair trade advantages and reconcile global trade liberalization between developed and developing countries.

Trade Negotiation Teams – Representatives

Trade negotiations are often led by high-ranking officials representing their nations. For example, the Office of the U.S. Trade Representative (USTR) relies on a team of experts specializing in multiple fields, including agriculture, technology, labor, and the environment. In the 2024 Trade Policy Agenda, the USTR emphasized commitments to its country’s interests, negotiating high-standard commitments in sustainable trade practices to bolster supply chain resilience. The USTR also participates in the World Trade Organization to coordinate positions with groups such as the African Group and ASEAN and to implement trade agreements with developing countries.

For example, the United States imposes an average agricultural tariff of 5.1%, India an average agricultural tariff of 38% to protect domestic producers, and the European Union around 11%. These disparities often lead to complex negotiations, especially in agriculture, which is crucial for food security. Hence, negotiation objectives often focus on reciprocity. To maximize benefits, trading partners seek equivalent concessions and negotiate agreements that match each other’s offers. For example, if one country agrees to lower tariffs, the trading partner is expected to provide benefits on a similar traded product. This ensures mutual benefit in policy making and helps reach political goals. However, reciprocity can be challenging when two countries face asymmetrical power dynamics, such as in negotiations between developed and developing countries. Furthermore, while trade liberalization is a long-term mission of tariff negotiations, each country approaches them seeking to protect strategic sectors, preserve jobs, and defend its own interests. In negotiating tariff cuts, some countries may insist on keeping tariffs to protect their domestic producers or to safeguard strategic sectors.

Examples of governments’ Tariffs – a case study

Trump and the U.S.–China trade war (2018-2020)

The U.S.–China trade war of 2018-2020 led to higher prices for American consumers. China responded with retaliatory tariffs on U.S. goods, and global trade dynamics were disrupted. The trade war between the U.S. and China also affected economies globally, as the two countries have such large economies.

As a result of the trade war, both the U.S. and China experienced profound effects across multiple parts of their economies. The U.S. economy was affected in terms of GDP and employment. According to a report from Bloomberg Economics, the trade war had cost the US economy an estimated $316 billion. The trade war also resulted in U.S. stock market losses, with research from the Federal Reserve Bank of New York finding that U.S. firms lost at least $1.7 trillion in market value as a result of the tariffs imposed on imports from China. China also experienced economic challenges as a result of the trade war, with declining exports and mounting economic pressure. China saw its export growth slow down, indicating deepened economic challenges.

Furthermore, the trade war had a great effect on global markets. With the two countries being two of the world’s largest economies, their trade tensions led to significant shifts in market dynamics and economic growth on a global scale. According to data from the Banque de France, a 10 percent increase in tariffs could reduce global GDP by 3%. Such a decline results from higher prices, which lead to decreased productivity, higher financing costs and reduced investment demand. According to the World Economic Forum, global GDP growth slowed to 2.8% in 2019, partly as a result of the trade war.

Related posts on the SimTrade blog

   ▶ Shruti CHAND Balance of Trades

   ▶ Marine SELLI Trump Trade

   ▶ Louis DETALLE Understand the mechanism of inflation in a few minutes?

Useful resources

A Quick Review of 250 Years of Economic Theory About Tariffs

Tariff Negotiations and Renegotiations under the GATT and the WTO — Procedures and Practices

A quantitative analysis of multi-party tariff negotiations

About the author

This article was written in June 2025 by Annie YEUNG (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025).

Understanding the Evolution of Tariffs

Annie YEUNG

In this article, Annie YEUNG discusses the historical development of tariffs and their evolution across economic landscapes over time.

Brief explanation of history of global tariffs

In the 19th century, tariffs were the main source of government revenue and a key instrument of protectionism. Tariffs were widely used to protect domestic industries in their early stages. Countries such as the United States and European nations imposed high tariffs in order to support their industrializing economies. For example, the U.S. implemented tariffs such as the Tariff of Abominations in 1828 to protect its manufacturing industry.

In the post-World War II era, the General Agreement on Tariffs and Trade (GATT) was established in 1947. In response to post-war devastation, countries lowered their tariffs in order to promote economic growth, and reciprocal tariff reductions were implemented among countries and trading partners. At the end of the 20th century, the launch of the World Trade Organization in 1995 marked a significant evolution in global trade, establishing the governance of trade and tariffs. The WTO emphasized the trading of goods and introduced a governance structure with development considerations, granting special support to developing and least developed countries. The WTO also introduced an institutional structure for dispute settlement procedures in global trade.

Today, tariff reductions have continued through negotiations and regional trade agreements, which have deepened the harmonization of global markets and facilitated increased global trade volumes. However, during the last decade, there has been a resurgence of tariffs amid increased geopolitical instability. Tariffs have served as a political tool. For example, the U.S.–China trade war used tariffs as an economic weapon under rising geopolitical tensions, with billions of dollars of goods subject to tariffs. As the two countries are among the biggest economies globally, this trade war has disrupted global supply chains. This has posed challenges as other countries have also employed tariffs for protectionist goals.

The Protectionism Approach

The United States maintained high tariffs to nurture its developing domestic industries during the 19th century, and its average tariff rate increased over the century. At the beginning of the 19th century, the U.S. average tariff rate was 35%, whereas by 1913 it had increased to 40%. This historical evolution can be attributed to the need for domestic protection: the early industrialization period of the early 19th century required protection, whereas at the beginning of the 20th century tariffs were needed to support growing industries. European countries such as Germany and France also used tariffs to protect industrial growth during this period. Developing countries, however, struggled to use tariffs effectively as their internal markets were still at an early stage. From the early 1900s onwards, nationalism rose further. Some tariff rates exceeded 60%, and global trade decreased by 66% between 1929 and 1934. This was during the Great Depression, when tariff hikes contributed to the collapse of international trade and deepened the economic setback.

Trade Liberalization

After the establishment of the General Agreement on Tariffs and Trade in 1947 to reduce tariffs, successful negotiations among nations cut average tariffs worldwide, reducing protectionism and opening markets to global trade. Data from the World Bank World Development Indicators show a gradual reduction in the average global tariff rate from 15% to 6% between 1950 and 2000. Since the beginning of tariff reductions and the start of post-war rebuilding of the economy in 1950, continued multilateral negotiations resulted in historically low tariffs by the year 2000. As a result, trade volumes increased and global economic growth followed.

Complex Socio-economical landscapes

Despite the decrease in the average global tariff rate, trade policies in the 21st century, especially in recent years, have grown more complex. There have been targeted tariffs and trade conflicts, leading to increased uncertainty in global markets. For example, the U.S. increased its tariffs between 2017 and 2020, with the average U.S. tariff rate rising from 1.6% in 2017 to 3.1% in 2020. This can be attributed to tariffs on steel and aluminum and to the trade war with China, resulting in much higher tariff rates compared to the beginning of the 2000s. Targeted tariffs have become a strategic tool with political goals. For example, the steel and aluminum tariffs were intended to protect domestic industries: the Trump administration imposed a 25% tariff on steel and 10% on aluminum in March 2018. These tariffs were justified on national security grounds to maintain the U.S. domestic metals industry. However, they led to price increases and prompted retaliatory tariffs from trading partners.

Increased tariffs from one country often result in retaliatory tariffs from its trading partners. Beyond China, other countries also responded to U.S. tariff hikes, including Canada, the European Union, and India. As a result, the increase in tariffs created more uncertainty in the global trade environment, affecting stock markets, companies, and local businesses. Tariffs can thus have direct effects on every domestic producer and consumer, as investors raise concerns over costs and supply chains become volatile, slowing business activity.

A Case Study: The European Union’s tariff on Chinese Electric Vehicles

In 2024, the European Union imposed tariffs of up to 38.1% on electric vehicles imported from China. This is an example of the shifting ground of the global trading environment. Tariffs have increased globally, and this measure marks a shift in European Union trade policy with large implications for the global automotive industry and the international trading landscape. The EU imposed different tariff rates on specific companies: SAIC Motor faces a 38.1% tariff, Geely a 20% tariff, and BYD a 17.4% tariff on vehicles imported into the European Union from China. These tariffs were imposed by the EU because of the low prices at which Chinese manufacturers sell in the European market, which may undermine local producers. In response, China filed a complaint with the World Trade Organization, arguing that the EU’s measures amount to protectionism rather than fair competition. The tariffs have a large impact on the European EV market: European consumers face higher prices, and market shares shift as the tariffs change the competitive landscape, since Chinese-manufactured EVs face higher costs, which may benefit domestic European manufacturers. Hence, the EU’s recent tariffs on Chinese-manufactured electric vehicles illustrate where international trade policy stands today within the historical evolution of global trade and tariffs, and they have sparked debate and challenges.

Evolution of tariffs
Source: ACEA.

Why should I be interested in this post?

Understanding the evolution of tariffs is crucial, as it reveals how economic policies shape international trade dynamics, affecting domestic industries, producers, and consumers, with wider effects on the global market. Studying how tariffs have changed over time allows us to gain insights into historical trends and to stay informed about future policy decisions.

Related posts on the SimTrade blog

   ▶ Snehasish CHINARA XRP: Pioneering Financial Revolution

   ▶ Marine SELLI Trump Trade

   ▶ Nithisha CHALLA Statista

Useful resources

A history of free trade — and the deep irony of ‘liberation day’

The Evolution of Tariffs

History of U.S. tariffs and why it matters today

The Problem of the Tariff in American Economic History, 1787–1934

Financial Times Transcript: Tariffs past, present and future. With Doug Irwin

About the author

The article was written in June 2025 by Annie YEUNG (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025).

Understanding the Economics of Tariffs

Annie YEUNG

In this article, Annie YEUNG (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025) explains why understanding tariffs is crucial for consumers, suppliers, and policymakers.

Introduction: What are tariffs?

A tariff is a tax placed on imported or exported goods: essentially, a duty on goods when they cross international borders. Tariffs are imposed by governments and are often used to protect domestic industries, raise government revenue, and influence foreign policy. Tariffs impact global economies on a large scale, as their effects reach large groups of consumers and suppliers internationally, especially when imposed by a country with large export or import volumes. Trade tariffs have a direct effect by making imported goods more expensive, which can shift consumption towards domestically produced goods: as imported goods become more expensive, consumers increase their quantity demanded of domestic goods. Tariffs thus protect domestically produced goods and may achieve political goals. However, as prices increase, consumers pay more, which can lead to inefficiencies and deadweight loss, and this may lead to trade disputes.

Evolution of tariffs
Source: Average of world tariffs, adapted from Mitchell (1992) and Coatsworth and Williamson (2002).

Different types of tariffs

Ad Valorem Tariffs

An ad valorem tariff is added to the price of an imported good as a percentage. For example, an ad valorem tariff may be a 10 percent tax added to the price of each imported good. An ad valorem tariff therefore means that the more expensive a good is, the more tariff is added. Higher-valued imported goods thus become much more expensive and are affected more strongly by the tariff.

Specific tariffs

Specific tariffs charge a fixed fee per quantity or physical unit of the imported good. Specific tariffs are thus imposed regardless of the good’s price: a fixed fee applies to each physical unit imported. For example, a specific tariff could be $1 per kilogram of wheat imported into the country. Specific tariffs are easier to administer and do not adjust with the market price of the good.

Compound tariffs

A compound tariff is a combination of an ad valorem and a specific tariff: both components are combined and imposed together on an imported good, as in the worked example below.
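
The small Python sketch below compares the duty under the three tariff types described above; the figures (import value, weight, rates) are hypothetical and purely illustrative.

# Duty under ad valorem, specific, and compound tariffs (hypothetical inputs)
import_value = 20_000.0    # declared value of the shipment (USD)
weight_kg = 5_000.0        # physical quantity of the shipment

ad_valorem_rate = 0.10     # 10% of value
specific_rate = 1.0        # USD 1 per kilogram

ad_valorem_duty = ad_valorem_rate * import_value   # 2,000
specific_duty = specific_rate * weight_kg          # 5,000
compound_duty = ad_valorem_duty + specific_duty    # 7,000

print(f"Ad valorem duty: {ad_valorem_duty:,.0f}")
print(f"Specific duty:   {specific_duty:,.0f}")
print(f"Compound duty:   {compound_duty:,.0f}")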

Sliding Scale tariffs

Sliding scale tariffs are variable tariff rates that adjust based on global commodity prices, domestic supply levels, or inflation volatility. The tariff depends on movements in the world price of the good: when world prices decrease, the sliding scale tariff increases, and when world prices increase, the tariff decreases. This helps maintain a minimum domestic price for a good and supports price stabilization. Sliding scale tariffs can thus help stabilize domestic prices and smooth supply and demand for domestic goods, protecting domestic producers and reducing market volatility in the face of global economic changes.

Protective tariffs

Protective tariffs aim to shield domestic industries from foreign competition. The goal of imposing a protective tariff is to encourage consumers to purchase more from domestic producers by raising the price of imported goods. As demand for domestic goods increases, more domestic jobs can be created and the growth of local industries is supported, which illustrates the protection these tariffs provide to domestic sectors.

Revenue tariffs

Revenue tariffs are tariffs designed to raise government revenue rather than protect domestic producers. The purpose of a revenue tariff is to generate income for the government, especially when the country’s economy depends heavily on imports and has a high volume of imported goods. However, revenue tariffs can be a heavy burden for domestic consumers, who bear the higher prices of goods, and they may affect trade flows and consumption choices, since consumers ultimately pay most of the cost of a revenue tariff.

Economic Effects of Tariffs

Tariffs can have multidimensional impacts on the global economy in both the short and long term, and consumers, producers, governments, and international relations may all be affected. Tariffs are therefore an important factor shaping the international landscape and can have a great effect on global markets.

The effect of tariffs on consumers

Tariffs first directly impact consumers. When a government imposes tariffs, suppliers importing a good must pay an extra cost to the government. As a result, prices of goods rise, reducing the purchasing power of consumers. Furthermore, as prices increase for imported goods, consumers may find more limited choices in the market, which may lead to consumer dissatisfaction.

The effect of tariffs on domestic producers

Tariffs generally have a more beneficial effect on domestic producers, as they often lead to increased output and employment in domestic industries. With more demand directed towards domestic producers, they may achieve higher sales and revenue, boosting their economic returns. While tariffs may protect domestic industries by giving them a price advantage over foreign producers, they can also reduce competition. Furthermore, domestic producers may be harmed by tariffs if they rely on foreign inputs: when domestic producers depend on imported raw materials, their input costs increase, which may reduce their profits.

The effect of tariffs on governments and the international landscape

Tariffs can generate positive effects for governments, as they act as a channel for revenue generation. Governments also use tariffs for political goals, as a strategic lever in trade negotiations and economic diplomacy. At the same time, tariffs distort trade flows and can cause dissatisfaction, as rising consumer prices may lead to domestic unrest and trade wars. When one country imposes a tariff, it often provokes retaliation from other countries, leading to a spiral of protective tariffs that raises global prices and slows global growth. Tariffs can thus lead to trade wars and geopolitical instability.

Why should I be interested in this post?

This post discusses how trade policies affect all actors in the economy. Understanding tariffs helps you make sense of global events, which can influence your everyday life as well.

Related posts on the SimTrade blog

   ▶ Anant JAIN Hyperinflation In Argentina Since 2018: A Deep Dive Into The Economic Crisis

   ▶ Camille KELLER From bean to brew: understanding coffee as a global commodity

   ▶ Mathis DIALLO The price of cocoa

   ▶ Jorge KARAM DIB Explanations for the recent changes in the Mexican economic landscape

Useful resources

CEPR Trump’s China tariffs: Lessons from first principles of classic trade policy welfare analysis

Knowledge.deck Trade and Tariff Impact Analysis

Wall Street Journal Tariffs Are More Than Just Taxes. They Are a Tool of Geopolitics.

About the author

The article was written in July 2025 by Annie YEUNG (ESSEC Business School, Global Bachelor in Business Administration (GBBA) – Exchange Student, 2025).

Bitcoin: Challenges and Opportunities

Jean-Marie Choffray

In this article, Jean-Marie CHOFFRAY (Honorary Full Professor of Decision Informatics at the University of Liège, PhD-77, Management Science, Massachusetts Institute of Technology) introduces his recent article “Bitcoin : Défis et Opportunités” (Bitcoin: Challenges and Opportunities).

Denying the reality of bitcoin does not change its nature… This short note aims to give the reader a first synthesis of the main Challenges and Opportunities created by the adoption and diffusion of Bitcoin (with a capital “B”, the computer network). It is a Technological Revolution whose consequences will unfold over the coming decades. Indeed, the last bitcoin (with a lowercase “b”, the medium of exchange) will be produced around 2140! Seven proposals for reflection and action follow.

The three APPENDICES – Le Triomphe de la Vie dans la Victoire de Bitcoin; Bitcoin est un rêve, un idéal, un espoir; Mille quatre cent milliards de dollars – give the reader additional information to deepen their understanding of the phenomenon and their analysis of the current situation. Many excellent sources of information are available online, notably: https://bitcoin.org/fr/ ; Bitcoin Statistics ; Strategy’s Bitcoin for Corporations.

What is Bitcoin?

The Bitcoin Technology comprises two elements: (1) a Sequential Database that today contains roughly 1.5 billion irreversible, incorruptible and tamper-proof transactions between real and/or virtual agents – robots?; and (2) a Decentralized Operating System (Bitcoin Core) that validates, secures and records such transactions. A bitcoin is a means of access to this database, allowing its holder to carry out an irreversible, incorruptible and tamper-proof transaction, recognized as such by the network. Depending on the purpose of the transaction, it is therefore a digital property right, a medium of exchange and/or a store of value; money and/or digital capital?

Thus, bitcoin is a digital object that can be stored, accumulated, transferred and/or sold. The number of bitcoins issued decreases exponentially over time, and the last one will be produced around 2140. Their number is also limited in space: the network will never produce more than twenty-one million of them (see Satoshi Nakamoto’s original paper: Bitcoin: A Peer-to-Peer Electronic Cash System). The network’s current market capitalization (~ $2T: two trillion dollars) makes it the fifth-largest financial asset in the world. That is more than the combined market capitalization of the world’s six largest banks; about three times the balance-sheet total of the European Central Bank; or, again, twice the GDP of Switzerland…
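
The 21-million cap and the approximate 2140 end date both follow directly from the protocol’s issuance schedule. The sketch below reproduces that schedule in Python, assuming the standard parameters (an initial block subsidy of 50 BTC, a halving every 210,000 blocks, and roughly one block every ten minutes).

```python
# Minimal sketch of Bitcoin's issuance schedule, assuming the standard protocol
# parameters: 50 BTC initial block subsidy, a halving every 210,000 blocks,
# and roughly one block every 10 minutes.
BLOCKS_PER_HALVING = 210_000
INITIAL_SUBSIDY = 50.0          # BTC per block in 2009
MINUTES_PER_BLOCK = 10

total_supply = 0.0
subsidy = INITIAL_SUBSIDY
halvings = 0

# In the real protocol the subsidy is an integer number of satoshis and eventually
# rounds down to zero; here we simply stop once it would fall below one satoshi.
while subsidy * 1e8 >= 1:
    total_supply += subsidy * BLOCKS_PER_HALVING
    subsidy /= 2
    halvings += 1

years_of_issuance = halvings * BLOCKS_PER_HALVING * MINUTES_PER_BLOCK / (60 * 24 * 365.25)
print(f"Approximate maximum supply: {total_supply:,.0f} BTC")            # ~21,000,000
print(f"Issuance ends roughly {years_of_issuance:.0f} years after 2009")  # ~2140
```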

Challenges and opportunities

One can consider today that the Bitcoin Technology is virtually indestructible. Its probability of total collapse is estimated at less than 1%, for two reasons: (1) a possible malfunction of the network would only marginally affect the current sequential transaction database (i.e. the exhaustive history of encrypted and encoded transactions since 2009), and (2) the geographical, technological and financial decentralization of the network guarantees the robustness – reliability and validity – of its governance mechanism (e.g. Proof of Work). We will therefore have to learn to live with bitcoin, whether we like it or not! This is all the more true now that several countries, including the United States of America, have formalized their support for this digital evolution of the banking and financial ecosystem (cf. Strategic Bitcoin Reserve Bill).

Proposals for reflection and action

For any public or private entity wishing to establish its presence in this new economic space, characterized by strong growth (~60% per year) and comparable volatility (~60% per 4-year cycle):

  1. Contribute to the creation of an Interuniversity Center of intelligence, expertise and competence focused on Bitcoin and related or derived technologies.
  2. Organize an Annual Symposium to bring together the sector’s players, spread best practices and foster innovation.
  3. Build a Network of Operators (i.e. bitcoin Miners) ensuring an effective presence on a global scale and securing access to transactions (cf. the creation of Mining Pools).
  4. Invite companies – and any other institution with Equity Capital – to adopt the Bitcoin Standard by allocating ~3-5% of their Net Assets to it.
  5. Direct Energy Surpluses – intermittent sources, nuclear surplus, (Artificial Intelligence) inference cycles, etc. – toward the production and transfer of bitcoins, toward the underlying technological development (hardware and software), and toward the creation of new products and services.
  6. Build a Strategic Reserve – regional and/or national – of bitcoins tending toward 3-5% of economic wealth (cf. Senator C. Lummis).
  7. Issue BitBonds: bond issues backed (~10%) by bitcoin (cf. Andrew Hohns: BitBonds, An Idea Whose Time Has Come).

Read the full article

Related posts on the SimTrade blog

   ▶ Snehasish CHINARA Bitcoin: the mother of all cryptocurrencies

Useful resources

Choffray, Jean-Marie (2025) Bitcoin : Défis et Opportunités, University of Liège

Choffray, Jean-Marie, List of publications, University of Liège

About the author

The article was written in June 2025 by Jean-Marie CHOFFRAY (Honorary Full Professor of Decision Informatics at the University of Liège, PhD-77, Management Science, Massachusetts Institute of Technology).

Behavioral finance

Mahe FERRET

In this article, Mahe FERRET (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2022-2026) explains the appeal and challenges of behavioral finance when investing.

Introduction

Behavioral finance is a field at the crossroads of psychology and economics that helps us understand how investors – individuals and institutions – make financial decisions. Unlike traditional finance, which assumes that investors are rational actors who always make the optimal decisions to maximize profits based on all available information, behavioral finance recognizes that decisions are often influenced by cognitive biases and emotional responses.

As the financial industry becomes more complex, understanding the psychological biases of investor behavior becomes essential. Behavioral finance includes a more realistic human-centered perspective for analyzing market reactions, making it a crucial area of study for academics, investors, and policymakers alike.

History and Theoretical Foundations

Behavioral finance challenges the classical economic model of “Homo Economicus”, which depicts the investor as a fully rational decision-maker. Instead, it builds on theories about cognitive biases: unconscious, systematic errors that occur when people make decisions.

It also challenges classical theories such as the Efficient Market Hypothesis and Expected Utility Theory. These models presume that markets are efficient (stock prices reflect all available information) and that investors act logically. However, evidence from historical events (financial asset bubbles and market crashes) suggests otherwise: irrational investor behavior leads to mispricing (an over- or undervaluation of the market price) and high volatility, which can result in negative returns for investors.

Overconfidence is one of the most studied biases. This bias leads investors to overestimate their knowledge and ability to make decisions, often resulting in excessive trading and poor returns. On the other hand, confirmation bias influences investors to seek information that supports their preexisting beliefs, sometimes ignoring evidence. Continuing along this path, herding bias reflects the tendency to mimic the actions of the majority, ignoring personal beliefs or individual analysis. This can generate bubble behavior, such as buying simply because of a trend, even when it seems irrational. Finally, among the long list of other biases, the disposition effect can harm long-term returns. Most of the time, investors sell assets that have increased in value to secure gains but keep assets that have dropped in value to avoid facing a loss.

These biases are not just theoretical; they help explain behavior observed in market crises, where collective overconfidence and optimism fueled risky lending and investment practices.

Case Study: The 2008 Financial Crisis and Cognitive Biases

The 2008 financial crisis is a significant example of how cognitive biases can influence market behavior. While traditional economists tried to explain the irrational behaviors behind the collapse of global markets, behavioral finance offered an explanation: cognitive biases.

The crisis followed years of rising home prices in the U.S. housing market, which created a false sense of security. Financial institutions, driven by overconfidence in their risk management and in the belief that housing prices would keep rising, issued large volumes of subprime mortgages to borrowers with weak credit profiles. These loans were then repackaged into complex financial instruments such as mortgage-backed securities (MBS) and collateralized debt obligations (CDOs) and sold to investors worldwide.

According to Montgomery (2011), collective psychological biases drove this irrational behavior. Overconfidence pushed investors and institutions to underestimate the risk of defaults and to overtrade, while confirmation bias caused them to ignore warning signs and select only the information that supported their view of the future. Investors were also overly optimistic about the market, expecting it to move in their favor, which led to an underestimation of systemic risk (risk that affects the entire financial system).

Evolution of the S&P 500 index in 2008.
Source: invezz.

This chart shows the decline of the S&P 500 index during the market crash and illustrates how cognitive biases affect investor decisions. The index reached a high of 1,576, marking the peak of the pre-crisis bull market. From that peak, the market fell by 57.7%, a decline that lasted a year and a half in total. As the crisis progressed, panic selling spread rapidly, a clear display of herd behavior, accelerating the decline and deepening the losses. Many investors also sold assets at a loss to avoid further losses, despite fundamental research suggesting long-term recovery potential, a reaction that reflects loss aversion bias.

These biases all contributed to the formation of a speculative bubble, which exploded when the housing prices began to fall and defaults rose, triggering a global credit freeze and economic recession.

Nudges, a strategy to mitigate biases?

Behavioral finance offers an explanation for anomalies in market behavior but can also be used as a tool to improve decision-making. Strategies such as “nudges” (Thaler & Sunstein, 2008) structure the environment in which decisions are made without restricting individual freedom. By changing the choice architecture, that is, by “organizing the context in which people make decisions”, for example through default options or checklists, biases can be mitigated.

An example of a nudge from “Nudge” (Thaler and Sunstein, 2008) is automatic enrollment in retirement savings plans, such as 401(k)s in the U.S. Traditionally, employees had to opt in to participate in their company’s retirement savings plan, and many did not enroll because they procrastinated or found the process confusing. The nudge is to change the default option so that employees are automatically enrolled in the plan but can opt out if they choose. Only the default option of the choice architecture changed, yet this small change led to large increases in participation rates among employees. Changing the choice architecture in the decision-making process can thus help minimize cognitive biases and their negative impact on investments.

Why should I be interested in this post?

As a business student, understanding market anomalies—such as overreactions to news or momentum effects—is essential because they reveal limitations in classical finance theories that assume investors are always rational and markets efficient. Real markets often behave differently, with phenomena like speculative bubbles and panic selling challenging these traditional views. Studying behavioral finance offers valuable insights into the psychological factors and cognitive biases that influence investor decisions. This knowledge is crucial for future business professionals, as it helps improve decision-making, risk management, and strategy development in finance and beyond. Recognizing how human behavior impacts markets prepares business students to navigate real-world complexities more effectively.

Related posts on the SimTrade blog

   ▶ Nithisha CHALLA CRSP

   ▶ Nithisha CHALLA Market consensus based financial analysts forecasts

   ▶ Raphaël ROERO DE CORTANZE How do animal spirits shape the evolution of financial markets?

Useful resources

CFA Institute (2025). Market Efficiency.

Montgomery, H. (2011). The Financial Crisis – Lessons for Europe from Psychology.

Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk.

Thaler, R.H. and Sunstein, C.R. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. London: Penguin Books.

About the author

The article was written in June 2025 by Mahe FERRET (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2022-2026).

Selling Structured Products in France

Mahe FERRET

In this article, Mahé FERRET (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2022-2026) explains the appeal and challenges of selling structured products in France.

Introduction

Structured products are investment products combining traditional assets (stocks, bonds, indexes…) with derivatives (options, futures…) to offer customized returns tailored to an investor’s risk profile.

In recent years, structured products have gained popularity due to persistently low interest rates and increased market volatility. For instance, buffered ETFs reached $43.4 billion in assets in 2024 according to N.S. Huang (Kiplinger, 2024). In France, the market has grown significantly, reaching €42 billion in 2023, an 82% increase over two years, showing investors’ appetite for higher returns combined with safety. Sales teams in investment banks actively respond to this demand by offering structured solutions to wealth managers, private banks and institutional investors, using payoff profiles and risk scenarios to help clients choose the right product.

Why Structured Products Appeal to French Investors

These products fit particularly well with France’s investment culture, marked by a preference for capital protection and income, low interest rates, and relatively risk-averse investors. Structured products appeal to French investors because they aim to protect the initial investment while offering higher returns than traditional bonds.

Capital protection means that an investor will not lose their initial investment, even if the market drops, and will earn a profit if the market performs well. As an example, BNP Paribas offers Capital Protection Notes (CPNs) tied to the S&P 500 that guarantee the initial investment amount at maturity plus 130% of the average performance of the index if it rises. If the index’s performance is zero or negative, the investor only receives their capital back, with no additional return. In client meetings, sales professionals use scenario simulations and historical data to demonstrate the potential returns under different market conditions.

Another type of structured product that could interest sustainability-minded French investors is an ESG (Environmental, Social, Governance) note tied to a renewable energy index. As an example, an ESG-linked structured product may be tied to an index such as the Euronext Eurozone ESG Large 80 Index, with a fixed or conditional coupon of 3 to 5% annually and a maturity of usually 5 to 8 years. With the increasing demand for these products, ESG investments are increasingly promoted by sales teams through their sustainability angle, especially to family offices and pension funds committed to responsible investing. ESG products incorporate ESG factors while still using traditional assets like stocks, allowing investors to seek both financial returns and positive societal impact. They often include stocks of companies with strong ESG processes, green bonds supporting environmental projects, or derivatives linked to sustainability indicators.
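
As an illustration of the capital-protected note described above, the sketch below computes the payoff at maturity under the stated terms (100% of principal guaranteed plus 130% participation in the average index performance when it is positive); the amounts, observation dates and index levels are hypothetical.

```python
# Minimal sketch of the capital-protected note (CPN) payoff described above:
# 100% of principal back at maturity, plus 130% participation in the average
# performance of the index when that average is positive. Figures are illustrative.
def cpn_payoff(principal: float, index_start: float, index_observations,
               participation: float = 1.30) -> float:
    """Payoff at maturity of a capital-protected note with averaging."""
    average_performance = sum(obs / index_start - 1 for obs in index_observations) / len(index_observations)
    return principal * (1 + participation * max(average_performance, 0.0))

# Example: €10,000 invested, index observed on four hypothetical dates.
print(cpn_payoff(10_000, index_start=5_000, index_observations=[5_100, 5_250, 5_400, 5_500]))
# If the average performance were zero or negative, the investor would simply
# receive the €10,000 principal back.
```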

Regulatory Environment in France

In France, the Autorité des marchés financiers (AMF) regulates the sale of structured products to ensure fairness and transparency. These products are complex, and regulations such as PRIIPs (Packaged Retail and Insurance-based Investment Products) require a Key Information Document (KID) explaining them in simple terms. MiFID II (Markets in Financial Instruments Directive II) also mandates clear disclosure of risks and costs. ESG products, in particular, are under scrutiny to prevent greenwashing. This is an important aspect for sales teams, who must respect regulatory requirements at every step of the client relationship, from pre-trade conversations to post-sale documentation, and integrate them into their sales pitch.

Client Segments and Tailored Offerings

As complex as these products can be, one of their benefits is that they can be tailored to each investor’s risk profile (more or less tolerance to risk). Structured products can be ideal for retail investors who need safe products. A retail investor could be a retiree seeking a complementary source of income who would choose a principal-protected note (PPN) guaranteeing a €10,000 principal with a 3% coupon if the CAC 40 stays flat or rises; the PPN sits at the safer end of the spectrum. Less risk-averse investors may instead seek customized high-return options such as a rainbow note, a derivative-based product designed to offer returns based on the performance of a basket of assets, often with a focus on the best or worst performers within that basket: it is linked to at least two assets and offers a diversification benefit, balancing growth and stability. Institutions, for their part, may need complex products for portfolio strategies, such as buffered notes. For a pension fund, a buffered note, designed to deliver a return based on the performance of a stock or index but with a “buffer” protecting against some losses, offers useful risk-management characteristics, for example protection against the first 10% of losses on a global equity index (see the sketch below). Sales teams must match the product structure to the investor’s objectives by collaborating with structuring desks (the department of the trading room that designs the structure that best fits the client’s demands) and traders to design personalized solutions.
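
The sketch below illustrates the buffered note mentioned above, under the simplifying assumptions that the investor keeps full upside participation and that the buffer absorbs the first 10% of losses; actual terms depend on the product’s term sheet.

```python
# Minimal sketch of the buffered note described above: the investor is protected
# against the first 10% of losses on the underlying index; losses beyond the buffer
# are borne by the investor. Upside rule and figures are illustrative assumptions.
def buffered_note_payoff(principal: float, index_return: float, buffer: float = 0.10) -> float:
    """Payoff at maturity of a simple buffered note (no cap on the upside assumed)."""
    if index_return >= 0:
        return principal * (1 + index_return)          # full upside participation (assumption)
    loss_beyond_buffer = max(-index_return - buffer, 0.0)
    return principal * (1 - loss_beyond_buffer)        # first 10% of losses absorbed

for r in (0.12, -0.05, -0.25):
    print(f"Index return {r:+.0%} -> payoff {buffered_note_payoff(100_000, r):,.0f}")
# -5%  -> 100,000 (loss fully absorbed by the buffer)
# -25% -> 85,000  (investor bears the 15% loss beyond the buffer)
```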

Benefits

Structured financial products offer several advantages that make them attractive to a wide range of investors, and from a sales perspective they are versatile tools for meeting a client’s needs. First, they often include capital protection, meaning that even if the underlying asset’s performance declines, the investor’s capital is preserved at a predetermined protection level. Additionally, these products can provide regular income, but only to the extent that specific market conditions are met during the investment period. Structured products also allow investors to take a view on market volatility: their prices tend to fall when volatility rises, which creates an opportunity to buy low during periods of high volatility and sell when volatility declines. Furthermore, these instruments address both the client’s investment preferences and the need for diversification by offering many investment options across different asset classes. Sales professionals often highlight how these products provide a combination of stability and performance that standard products cannot offer.

Challenges

Despite their benefits, structured products also present common obstacles for investors and for the sales team, and salespeople must be able to explain these risks in simple language that even non-expert clients can understand. First, there is issuer risk. Since these instruments are issued by banks or other intermediaries, there is a risk that the issuer becomes insolvent or unable to meet its obligations, in which case the investor may not receive the expected payments at maturity. There is also underlying risk: the value of a structured product depends directly on the performance of the underlying asset, which can be highly volatile, and in extreme cases the product’s value can fall to zero if the asset performs very poorly. A further aspect is the lack of liquidity that is common for such bespoke products. Although some products are listed and supported by market makers, there is no guarantee of continuous availability in the market, so investors may have difficulty buying or selling the product before maturity, which can lead to unexpected losses when no counterparty is available at the time of the transaction. Finally, the products can be seen as complex because they are multi-layered, combining different asset types (indices, funds) with different payoff conditions and risk levels.

Complexity of a basket of equity indices.
Source: AMF.

The graph shows that each added asset increases the product’s complexity, making it harder to assess risk, performance and transparency. An investor then needs to evaluate each asset individually, but also its contribution within the basket.

Why should I be interested in this post?

As an ESSEC student interested in business and finance, I found that learning about structured products really helped me understand how financial institutions create investment solutions based on different risk profiles. They’re a great example of how finance can combine both protection and performance. For anyone considering a career in sales, asset management, or investment banking, getting familiar with these products is a great way to build practical knowledge and better understand how finance works in the real world.

Related posts on the SimTrade blog

   ▶ Akshit GUPTA Equity structured products

   ▶ Dante MARRAMIERO Structured debt, private equity, rated feeder funds, collateral fund obligations

   ▶ Shengyu ZHENG Capital guaranteed products

   ▶ Jayati WALIA Fixed income products

Useful resources

AMF & ACPR Analysis of the French structured product market

Kiplinger Buffered ETFs: What are they and should you invest in one?

Itransact BNP PARIBAS S&P 500 100% CAPITAL PROTECTED NOTE 5

Yassien Yousfi ESG structured products: challenges and opportunities

Klara Gjorga Equity Derivatives and Structured Products Sales

Line Grinden Quinn – Structured Products: Sound strategy or sales pitch?

About the author

The article was written in June 2025 by Mahe FERRET (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2022-2026).

How blockchain challenges traditional financial systems: Lessons from my ESSEC thesis

Alexandre GANNE

In this article, Alexandre GANNE (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2025) shares key insights from his bachelor thesis on blockchain technology and its implications for traditional banking systems.

Introduction

This post is the result of a year-long academic research project conducted as part of my final thesis at ESSEC Business School. It explores how the growing adoption of blockchain technology is redefining core principles of traditional financial systems and the strategic implications this transformation holds for banking institutions.

The disruptive nature of blockchain

Blockchain is often described as the cornerstone of the next technological revolution in finance. It allows for the decentralization of data storage and value exchange, eliminating the need for central authorities to validate transactions. With distributed consensus mechanisms and cryptographic security, blockchain systems can operate autonomously and transparently. These features make it not just a new tool, but a foundational shift that could reshape core banking functions such as recordkeeping, interbank transfers, and credit issuance. Its key characteristics (immutability, programmability, disintermediation, and transparency) pose significant challenges to the centralized model of traditional finance.

From intermediation to decentralization

One of blockchain’s most radical promises is disintermediation. Traditional financial systems are heavily reliant on intermediaries such as banks, brokers, and clearinghouses to establish trust and validate transactions. Blockchain introduces the ability to execute trustless peer-to-peer exchanges using cryptographic proofs and decentralized ledgers. For example, platforms like Ethereum enable the deployment of smart contracts, self-executing programs that automatically enforce the terms of a contract without human intervention, drastically reducing friction and cost.

Security and auditability

Unlike traditional databases that are vulnerable to manipulation or single points of failure, blockchain offers a tamper-proof and chronologically auditable data structure. This makes it a valuable tool for regulatory compliance and fraud prevention.
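
The sketch below is a toy illustration, not the actual data structure of Bitcoin or Ethereum, of why a hash-chained ledger is tamper-evident: each block commits to the hash of the previous one, so altering an old record breaks every subsequent link.

```python
# Toy illustration (not any real blockchain's actual data structure) of a
# hash-chained ledger: each block stores the hash of the previous block, so
# changing an old record invalidates every later link in the chain.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain: list = []
append_block(chain, ["Alice pays Bob 5"])
append_block(chain, ["Bob pays Carol 2"])
print(verify(chain))                                  # True: untouched history

chain[0]["transactions"] = ["Alice pays Bob 500"]     # tamper with an old record
print(verify(chain))                                  # False: the break is immediately detectable
```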

Implications for the banking sector

Custody and settlement

Traditional banks act as intermediaries for the settlement of securities and custody of assets. Blockchain-based tokenization could eliminate the need for such intermediaries by allowing real-time settlement and direct ownership recording on-chain.

Compliance

Know Your Customer (KYC) and Anti-Money Laundering (AML) procedures are critical, yet often duplicative and costly for financial institutions. Blockchain can streamline these processes by allowing users to maintain a single, verified digital identity that can be securely shared across multiple entities. Through permissioned blockchain networks, institutions can access and update identity records in real time, increasing efficiency while maintaining regulatory compliance. Additionally, immutable audit trails enhance traceability and accountability.

New business models

The rise of decentralized finance (DeFi) introduces new paradigms in financial services: automated lending, yield farming, insurance, and derivatives, all operating without traditional intermediaries. In response, incumbent banks are exploring strategic partnerships, investments in blockchain startups, and internal initiatives to tokenize assets or build proprietary custodial solutions. Hybrid models, blending regulated infrastructure with decentralized services, are likely to emerge as a dominant trend over the next decade.

Why should I be interested in this post?

For any ESSEC student or finance professional interested in the frontier of financial innovation, this article distills the key findings of a year-long academic thesis dedicated to understanding how blockchain is transforming our industry. It bridges theory and practice, highlighting both opportunities and risks. As regulators, institutions, and entrepreneurs continue to shape the future of financial systems, understanding blockchain is no longer optional; it is essential to navigate and lead in tomorrow’s economy.

Related posts on the SimTrade blog

   ▶ Nithisha CHALLA Top financial innovations in the 21st century

   ▶ Youssef EL QAMCAOUI Decentralized finance (DeFi)

   ▶ Snehasish CHINARA Cardano: Exploring the Future of Blockchain Technology

   ▶ Snehasish CHINARA Solana: Ascendancy of the High-Speed Blockchain

   ▶ Snehasish CHINARA Ethereum – Unleashing Blockchain Innovation

Useful resources

BIS – The implications of decentralised finance

ECB Blockchain

FSB The Financial Stability Risks of Decentralised Finance

About the author

The article was written in May 2025 by Alexandre GANNE (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2025).

Pricing Weather Risk: How to Value Agricultural Derivatives with Climate-Based Volatility Inputs

Mathias DUMONT

In this article, Mathias DUMONT (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2022-2026) explains how weather risk impacts the pricing of agricultural derivatives like futures and options, and how climate-based data can be integrated into stochastic pricing models. Combining academic insights and practical examples, including a mini-case from the SimTrade Blé de France simulation, the article illustrates adjustments to models such as the Black-Scholes-Merton model for temperature and rainfall variables in valuing agricultural contracts.

Introduction

Extreme weather has always been a critical factor in agriculture, but climate change is amplifying the frequency and severity of these events. From prolonged droughts to unseasonal floods, weather shocks can send crop yields and commodity prices on wild rides. This rising uncertainty has given birth to weather derivatives – financial instruments designed to hedge weather-related risks – and has made volatility forecasting a key challenge in pricing agricultural contracts. In fact, as businesses grapple with climate volatility, trading volume in weather derivatives has surged, with CME Group reporting a 260% increase year over year (CME Group, 2023). The question for traders and risk managers is: how do we quantitatively factor weather risk into the pricing of futures and options on crops like wheat and corn?

Weather Risk and Agricultural Markets

Weather directly affects crop supply. A bumper harvest following ideal weather can flood the market and depress prices, whereas a drought or frost can decimate yields and trigger price spikes. These supply swings translate into volatility for agricultural commodity markets. For example, during the U.S. drought of 2012, corn prices skyrocketed, and the implied volatility of corn futures jumped by over 14 percentage points within a month, reaching ~49% in mid-July. Such surges reflect the market rapidly repricing risk as participants absorb new climate information (in this case, worsening crop prospects). Seasonal patterns are also evident: harvest seasons tend to coincide with higher price volatility because that’s when weather uncertainty is at its peak. Studies show that harvesting cycles create predictable seasonal volatility patterns in crop markets – when a critical growth period is underway, any shift in rainfall or temperature forecasts can send prices swinging.

Beyond affecting supply quantity, weather can influence crop quality (e.g., excessive rain can spoil grain quality) and even logistic costs (flooded transport routes, etc.), further feeding into prices. The interconnected global nature of agriculture means a drought in one region can reverberate worldwide. As noted in the SimTrade Blé de France case, weather conditions in France influence the quantity and quality of wheat the company harvests, while weather conditions around the world influence the international wheat price. In the Blé de France simulation (which models a French wheat producer’s stock), participants see how news of floods or droughts translate into stock price moves. For instance, the company might project a 7-million-ton wheat harvest, but analysts’ forecasts range from 6.5 to 7.2 Mt – with the realized level highly weather-dependent in the final weeks of the season. A poor weather turn not only shrinks the crop but boosts global wheat prices, creating a complex revenue impact on the firm. This mini-case underlines that weather risk entails both volume uncertainty and price uncertainty, a double-whammy for agricultural firms and their investors.

Case Study: Weather Shocks in Wheat Markets

To illustrate the impact of weather risk on commodity pricing, consider three simulated scenarios for an upcoming wheat growing season: (1) favorable weather, (2) moderate conditions, and (3) severe weather such as drought. Each scenario generates a distinct price trajectory in the wheat market. Under favorable weather, prices tend to remain stable or decline slightly, particularly at harvest, due to strong yields and potential oversupply. In moderate conditions, prices may rise modestly as the market adjusts to balanced supply and demand. In contrast, severe weather triggers early price rallies as concerns about yield shortfalls emerge, followed by sharp spikes once crop damage becomes evident. For producers and traders, anticipating these divergent price paths is essential for pricing contracts, managing risk exposure, and structuring hedging strategies effectively.

Figure 1. Simulated commodity price paths under three weather scenarios.
Source: Author’s simulation.

Figure 1 shows simulated commodity price paths under three weather scenarios: severe weather (red), moderate weather (orange), and favorable weather (green). A mid-season weather forecast alert (Day 15) triggers a shift in market expectations, causing the price paths to diverge. The simulation illustrates how weather shocks and forecasts impact commodity pricing through volatility and revised yield expectations.

From a risk management perspective, tools exist to handle these contingencies. Farmers or firms concerned about catastrophic weather can turn to weather derivatives for protection. Weather derivatives are financial contracts (often based on indexes like temperature or rainfall levels) that pay out based on specific weather outcomes, allowing businesses to offset losses caused by adverse conditions. They have been used by a wide range of players – from utilities hedging warm winters, to breweries hedging late frosts. These instruments can be customized over-the-counter or traded on exchanges. Notably, CME Group lists standardized weather futures and options tied to indices such as heating degree days (HDD) and cooling degree days (CDD) for various cities. The existence of such contracts means that even when commodity producers cannot fully insure their crop yield, they might hedge certain aspects of weather risk (like an unusually hot summer) via financial markets. In our context, a wheat farmer worried about drought could, say, buy a weather option that pays off if rainfall falls below a threshold, providing funds when their crop output (and thus futures position) suffers.

Climate-Based Volatility in Derivatives Pricing

How can weather uncertainty be incorporated into derivative pricing models? Classic option pricing, such as the Black-Scholes-Merton model, assumes a fixed volatility for the underlying asset’s returns. For agricultural commodities, that volatility is anything but constant – it ebbs and flows with the weather and seasonal progress. Practitioners thus often use stochastic volatility models or at least adjust the volatility input over time. For example, one might use higher volatility estimates during the crop’s growing season and lower volatility post-harvest when output is known. This practice parallels how equity traders anticipate higher volatility in stock prices ahead of major earnings or profit announcements, and lower volatility after the announcement of profits by the firm.
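
A minimal sketch of this practice is shown below: the same Black-Scholes-Merton formula is simply fed a higher volatility input during the growing season and a lower one after harvest. The underlying level, strike, rate and volatilities are illustrative, not calibrated to any market.

```python
# Minimal sketch (author's illustration, not a production model) of feeding the
# Black-Scholes-Merton formula a season-dependent volatility input: high sigma
# during the growing season, lower sigma once the harvest is known.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def bsm_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """European call price under Black-Scholes-Merton."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

S, K, T, r = 250.0, 260.0, 0.25, 0.03                 # hypothetical wheat-like numbers
print(bsm_call(S, K, T, r, sigma=0.45))               # growing season: high weather uncertainty
print(bsm_call(S, K, T, r, sigma=0.20))               # post-harvest: uncertainty largely resolved
```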

Like companies facing performance surprises, weather shocks inject information asymmetry into the market, which must be priced into the option premiums. This aligns with the observed Samuelson effect, where futures contracts on commodities tend to have higher volatility when they are near maturity (coinciding with harvest uncertainty).

Market prices of options themselves reflect these expectations. When a looming weather event is expected to cause turmoil, options premiums will rise. The metric capturing this is implied volatility – the volatility level implied by current option prices. Implied vol is essentially forward-looking and will jump if traders foresee choppy waters ahead. Empirical evidence shows that extreme weather forecasts translate into higher implied vols for crop options. In 2012, as drought fears intensified, corn option implied volatility spiked (alongside futures prices). Conversely, once a forecasted drought started being relieved by rains, implied volatility eased off, signaling that some uncertainty had been resolved. A recent study also found that integrating meteorological data (like rainfall and temperature anomalies) into volatility modeling significantly improves the ability to hedge risk in agricultural markets. In other words, the more information we feed into our models about the climate, the more accurately we can price and hedge these derivatives.

Figure 2. Implied volatility of crop options over time with weather events.
Source: Author’s simulation.

This simulation illustrates the evolution of implied volatility over a 12-month crop cycle. Forecasted climate events—drought (Month 3), frost (Month 6), heatwave (Month 8), and rainfall shortage (Month 11)—lead to moderate but distinct volatility spikes. As uncertainty resolves, volatility returns to baseline.

One practical approach to pricing under climate uncertainty is to use scenario-based or simulation-based models. Instead of assuming a single volatility number, an analyst can simulate thousands of possible weather outcomes (perhaps using historical climate data or meteorological forecast models) and the corresponding price paths for the commodity. Each simulated price path yields a payoff for the derivative (e.g. an option’s payoff at expiration), and by averaging those payoffs (and discounting appropriately), one can derive a weather-adjusted theoretical price. This Monte Carlo style approach effectively treats weather as an external random factor influencing the commodity’s drift and volatility. It’s particularly useful for complex derivatives or when the payoff depends explicitly on weather indices (such as a derivative that pays out if rainfall is below X mm).
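
The sketch below illustrates this simulation-based approach with purely hypothetical weather regimes: each simulated path first draws a weather scenario, which sets the drift adjustment and volatility of the price path, and the discounted average payoff gives a weather-adjusted price estimate for a European call.

```python
# Minimal Monte Carlo sketch (illustrative assumptions only) of a "weather-adjusted"
# option price: drift and volatility depend on a randomly drawn weather scenario,
# and the discounted average payoff across paths estimates the option value.
import numpy as np

rng = np.random.default_rng(42)

S0, K, T, r, n_paths, n_steps = 250.0, 260.0, 0.25, 0.03, 20_000, 60
dt = T / n_steps

# Hypothetical weather regimes: (probability, extra drift, volatility).
scenarios = [(0.3, 0.10, 0.45),   # severe weather: supply fears, high volatility
             (0.5, 0.02, 0.25),   # moderate conditions
             (0.2, -0.05, 0.15)]  # favorable weather: ample supply, low volatility
probs = [p for p, _, _ in scenarios]

payoffs = np.empty(n_paths)
for i in range(n_paths):
    _, drift, sigma = scenarios[rng.choice(len(scenarios), p=probs)]
    z = rng.standard_normal(n_steps)
    log_path = np.cumsum((r + drift - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    payoffs[i] = max(S0 * np.exp(log_path[-1]) - K, 0.0)   # European call payoff

price = np.exp(-r * T) * payoffs.mean()
print(f"Weather-adjusted call price estimate: {price:.2f}")
```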

When the derivative’s underlying is the commodity itself (e.g. a corn futures option), traditional risk-neutral pricing arguments still apply, but the challenge is forecasting volatility. Traders often adjust the volatility smile/skew on agricultural options to account for asymmetric weather risks – for instance, if a drought can cause a much bigger upside move than a rainy season can cause a downside move, call options might embed a higher implied volatility (reflecting that upside risk of price spikes). This is observed in practice as well; extreme weather events can distort the implied volatility “skew” of crop options, as out-of-the-money calls become more sought after as disaster insurance.

In contrast, if the derivative’s underlying is a pure weather index (say, an option on cumulative rainfall), then pricing becomes more complex because the underlying (rainfall) is not a tradable asset. In such cases, the Black-Scholes-Merton formula is not directly applicable. Instead, pricing relies on actuarial or risk-neutral methodologies that incorporate a market price of risk for weather. For example, one method is to estimate the probability distribution of the weather index from historical data, then add a risk premium to account for investors’ aversion to weather variability, and discount expected payoffs accordingly. Another method uses “burn analysis”: taking historical weather outcomes and the associated financial losses or gains had the derivative been in place, to gauge a fair premium. Academic research has proposed models ranging from modified Black-Scholes-Merton-type formulas for rainfall (with adjustments for non-tradability) to advanced statistical models (e.g. Ornstein-Uhlenbeck processes with seasonality for temperature indices). The key takeaway is that whether it’s directly in commodity options or in dedicated weather derivatives, climate factors force us to go beyond textbook models and embrace more dynamic, data-driven pricing techniques.
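
As an example of the burn analysis mentioned above, the sketch below replays a made-up history of seasonal rainfall through a rainfall-shortfall option and uses the average historical payout, plus a loading for risk aversion, as a premium indication. All figures are invented for illustration.

```python
# Minimal "burn analysis" sketch with hypothetical data: replay historical rainfall
# seasons, compute what the contract would have paid each year, and use the average
# payout plus a risk loading as a premium indication.
historical_rainfall_mm = [310, 265, 180, 405, 220, 150, 290, 330, 210, 175]  # 10 past seasons (made up)

strike_mm = 250.0        # option pays when cumulative rainfall falls below this level
tick_value = 1_000.0     # payout per mm of shortfall (contract design assumption)
risk_loading = 0.20      # loading added on top of the historical average payout

payouts = [max(strike_mm - rain, 0.0) * tick_value for rain in historical_rainfall_mm]
expected_payout = sum(payouts) / len(payouts)
premium_indication = expected_payout * (1 + risk_loading)

print(f"Average historical payout:           {expected_payout:,.0f}")
print(f"Premium indication with risk loading: {premium_indication:,.0f}")
```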

Why should I be interested in this post?

For an ESSEC student or a young finance professional, this topic sits at the intersection of finance and real-world impact. Understanding weather risk in markets is not just about farming – it’s about how big data and climate science are increasingly intertwined with financial strategy. Agricultural commodities remain a cornerstone of the global economy, and volatility in these markets can affect food prices, inflation, and even economic stability in various countries. By grasping how to value derivatives with climate-based volatility inputs, you are gaining insight into a growing niche of finance that deals with sustainability and risk management. Moreover, the skills involved – scenario analysis, simulation modeling, blending of economic and scientific data – are highly transferable to other domains (think energy markets or any sector where uncertainty reigns). In a world facing climate change, expertise in weather-related financial products could open career opportunities in commodity trading desks, insurance/reinsurance firms, or specialized hedge funds. Ultimately, this post encourages you to think creatively and interdisciplinarily: the best hedging or valuation solutions may come from combining financial theory with environmental intelligence.

Related posts on the SimTrade blog

   ▶ Camille KELLER Coffee Futures: The Economic and Environmental Drivers Behind Rising Prices

   ▶ Jayati WALIA Implied Volatility

   ▶ Akshit GUPTA Futures Contract

   ▶ Anant JAIN Understanding Price Elasticity of Demand

Useful resources

Chicago Mercantile Exchange (CME) Weather futures and options product information. (Exchange-traded weather derivative contracts on temperature and other indices)

U.S. Energy Information Administration Drought increases price of corn, reduces profits to ethanol producers (2012). (Article discussing the 2012 drought’s impact on corn prices and volatility)

Nature Communications (2024) Financial markets value skillful forecasts of seasonal climate. (Research showing that seasonal climate outlooks have measurable effects on implied volatility and market uncertainty)

Das, S. et al. (2025) Predicting and Mitigating Agricultural Price Volatility Using Climate Scenarios and Risk Models. (Academic study demonstrating the integration of climate data into volatility models and using Black-Scholes to value a government price support as a put option)

Pai, J. & Zheng, Z. (2013) Pricing Temperature Derivatives with a Filtered Historical Simulation Approach. (Discussion of why Black-Scholes is not directly applicable to weather derivatives and alternative pricing approaches)

About the author

The article was written in May 2025 by Mathias DUMONT (ESSEC Business School, Global Bachelor in Business Administration (GBBA), 2022-2026).

Understanding Break-even Analysis: A Key Financial Technique

Olivia BRÜN

In this article, Olivia BRÜN (ESSEC Business School, Global Bachelor in Business Administration (GBBA), and ESIC Business School, Bachelor of Business Administration and Management (BBAM), 2022–2026) analyses the concept of break-even analysis, a widely used financial technique employed to determine business profitability. The article illustrates the method in a case study of Watches of Switzerland Group, a publicly listed upscale watch retailer headquartered in the United Kingdom.

Introduction and Context

Break-even analysis is a critical component of managerial decision-making and financial planning. It allows companies to determine the level (volume) of sales required to cover all costs, both variable and fixed, and beyond which the company becomes profitable. The break-even point is a crucial milestone in the operations of a firm: sales below the break-even point create losses, while every extra unit sold above it contributes to overall profitability.

This method is widely used in various industries to evaluate new projects, determine pricing strategies, and examine the financial feasibility of corporate decisions. Especially in capital-intensive industries or businesses focused on product offerings, understanding the break-even point is key to sound financial management and setting realistic sales targets.

History of the Concept

Break-even analysis stems from cost-volume-profit (CVP) analysis. Originating in managerial accounting in the early 20th century, CVP distinguishes between fixed costs (independent of production volume) and variable costs (dependent on production volume). By comparing these costs to projected revenues, decision-makers can identify the break-even point.

Case Study: Watches of Switzerland Group

This case study applies the break-even method to Watches of Switzerland Group PLC, a retailer of high-end watches. The computations below use the following figures, based on the company’s 2022 Annual Report: total variable costs of £966.5 million, fixed costs of £411.2 million, 308,560 units sold, and an average selling price of £5,000 per unit.

For full financial details, see the official Watches of Switzerland Group Annual Report (2022).

Using these values, we compute the variable cost per unit and contribution margin per unit as follows:

  • Variable cost per unit: £3,132 (= £966.5 million / 308,560)
  • Contribution margin per unit: £1,868 (= £5,000 – £3,132)

Break-even point (units): 220,128 units (= Fixed Costs / Contribution Margin per Unit = £411.2 million / £1,868).

At the break-even point, total revenues and total costs are approximately £1.1 billion. Sales above this point generate operating profit.
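
The sketch below reproduces these computations so that the inputs can easily be changed for sensitivity analysis, in the same spirit as the Excel model discussed below.

```python
# Sketch of the break-even computation above, using the figures from the case study.
fixed_costs = 411.2e6            # £
total_variable_costs = 966.5e6   # £
units_sold = 308_560
price_per_unit = 5_000.0         # £ (average selling price assumed in the case)

variable_cost_per_unit = total_variable_costs / units_sold       # ≈ £3,132
contribution_margin = price_per_unit - variable_cost_per_unit    # ≈ £1,868
break_even_units = fixed_costs / contribution_margin             # ≈ 220,000 units
break_even_revenue = break_even_units * price_per_unit           # ≈ £1.1 billion
# (The text uses the rounded £1,868 margin and obtains 220,128 units.)

print(f"Variable cost per unit: £{variable_cost_per_unit:,.0f}")
print(f"Contribution margin:    £{contribution_margin:,.0f}")
print(f"Break-even point:       {break_even_units:,.0f} units")
print(f"Break-even revenue:     £{break_even_revenue/1e9:.2f} billion")
```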

Break-even Chart from Excel

The chart below illustrates the relationship between total revenue and total cost across different sales volumes. The break-even point is located where the two lines intersect, at approximately 220,128 units, equivalent to around £1.1 billion in revenue. This marks the threshold at which the company covers all fixed and variable costs, resulting in neither profit nor loss.

The underlying Excel model (see the “READ ME” tab for detailed explanations) allows for interactive analysis. Users can adjust inputs such as fixed costs, average selling price, and variable cost per unit, and the break-even point updates automatically, making the tool practical for scenario analysis and financial planning. This kind of sensitivity analysis is essential in real-world decision-making, especially in industries with high fixed costs like luxury retail.

Break-even Analysis for Watches of Switzerland Group.
Source: Excel computation based on data from Watches of Switzerland Group.

You may download the Excel file used to do the computations and produce the chart above.

Download the Excel file to compute the breakeven point

Why should I be interested in this post?

Break-even analysis is fundamental in both theoretical and applied finance. It is widely used in consultancy, financial planning, and entrepreneurship. Understanding this concept allows business professionals to assess cost structures, pricing strategies, and financial viability of new projects.

For an ESSEC student pursuing business or finance, mastering break-even analysis equips you to analyze operational leverage and forecast how profits change with varying sales levels. This insight helps in making informed strategic decisions, managing risk, and ensuring sustainable business growth.

Useful resources

Academic resources

Horngren, C. T., Datar, S. M., & Rajan, M. V. (2015) Cost Accounting: A Managerial Emphasis (15th ed.). Pearson Education. – This foundational textbook offers detailed explanations of break-even analysis, cost behavior, and their relevance in managerial decision-making.

Atrill, P., McLaney, E. (2022) Management Accounting for Decision Makers (10th ed.). Pearson. – This book focuses on applying break-even and contribution analysis in real business contexts, helping students and professionals make informed financial decisions.

Gallo, A. (2014) A Quick Guide to Breakeven Analysis Harvard Business Review.

Business resources

Watches of Switzerland Group

Watches of Switzerland Group (2022) Annual Report and Accounts 2022

About the author

The article was written in May 2025 by Olivia BRÜN (ESSEC Business School, Global Bachelor in Business Administration (GBBA), and ESIC Business School, Bachelor of Business Administration and Management (BBAM), 2022–2026).