Market Making

Martin VAN DER BORGHT

In this article, Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024) explains the activity of market making, which is key to bringing liquidity to financial markets.

Market Making: What is It and How Does It Work?

Market making is a trading strategy that involves creating liquidity in a security by simultaneously standing ready to buy and sell amounts of that security. Market makers provide an essential service to the market by providing liquidity to buyers and sellers, which helps to keep stock prices stable (by limiting the price impact of incoming orders). This type of trading is often done by large institutional investors such as banks. In this article, I will explore what market making is, how it works, and provide some real-life examples of market makers in action.

What is Market Making?

Market making is a trading strategy that involves simultaneously standing ready to buy and sell amounts of a security in order to create or improve market liquidity for other participants. Market makers are also known as “specialists” or “primary dealers” on the stock market. They act as intermediaries between buyers and sellers, providing liquidity to the market by always being willing to buy and sell a security at quoted prices (more precisely at two prices: a price to buy and a price to sell). A market maker is remunerated by the spread between the bid and ask prices of the security: it buys at the lower bid price and sells at the higher ask price.
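As a minimal numeric illustration of this remuneration (all prices and quantities below are hypothetical, not taken from any real security):

```python
# Illustrative only: profit from capturing the bid-ask spread.
# The market maker buys at the bid and resells at the ask.

bid = 99.95       # price at which the market maker is willing to buy
ask = 100.05      # price at which the market maker is willing to sell
quantity = 1_000  # shares bought at the bid and resold at the ask

spread = ask - bid
profit = spread * quantity
print(f"Spread per share: {spread:.2f}, profit on the round trip: {profit:.2f}")
```

In practice the realized profit also depends on inventory risk and adverse price moves between the buy and the sell, which this sketch ignores.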

How Does Market Making Work?

Market makers create liquidity by maintaining an inventory of securities that they can buy and sell. They stand ready to buy and sell a security at any given time, at prices they quote. The prices at which they buy and sell may differ from the last traded price, as market makers adjust their quotes to manage their inventory and to capture the bid-ask spread.

Market makers buy and sell large amounts of a security in order to maintain an inventory, and they use a variety of techniques to do so. For example, they may buy large amounts of a security when the price is low and sell it when the price is high. They may also use algorithms to quickly buy and sell a security in order to take advantage of small price movements.

By providing liquidity to the market, market makers help to keep stock prices stable. They are able to do this by quickly buying and selling large amounts of a security in order to absorb excess demand or supply. This helps to prevent large price fluctuations and helps to keep the price of a security within a certain range.
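The quoting and inventory-management logic described above can be sketched in a few lines. This is a deliberately naive model for illustration: the `skew` parameter and all numbers are assumptions, not a real market-making algorithm.

```python
# A naive sketch of inventory-aware quoting: the market maker shifts both
# quotes down when it is long (to attract buyers and shed inventory) and
# up when it is short (to attract sellers and buy inventory back).

def quotes(mid: float, half_spread: float, inventory: int, skew: float = 0.001):
    """Return (bid, ask) centred on mid, shifted against current inventory."""
    shift = -skew * inventory      # long inventory -> lower quotes
    centre = mid + shift
    return centre - half_spread, centre + half_spread

# The maker is long 200 shares, so both quotes are shifted down by 0.20.
bid, ask = quotes(mid=100.0, half_spread=0.05, inventory=200)
print(bid, ask)
```

By leaning its quotes against its position, the market maker absorbs temporary imbalances in demand and supply, which is the stabilizing mechanism described in the paragraph above.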

Market making nowadays

One of the most well-known examples of market making is the role played by Wall Street banks. These banks act as market makers for many large stocks on the NYSE and NASDAQ. They buy and sell large amounts of a security in order to maintain an inventory, and they use algorithms to quickly buy and sell a security in order to take advantage of small price movements.

Another example of market making is the practice of high-frequency trading (HFT). In his book Flash Boys, author Michael Lewis examines the impact of HFT on market making. HFT uses powerful computers and sophisticated algorithms to rapidly analyze large amounts of data, allowing traders to execute orders in milliseconds. Market makers have adopted HFT strategies to gain an edge over traditional market makers, allowing them to make markets faster and at narrower spreads. This has resulted in tighter spreads and higher trading volumes, but it has also been argued that it has led to increased volatility and decreased liquidity in stressed conditions. As a result, some investors contend that HFT strategies have created an uneven playing field, in which HFT firms have an advantage over traditional market makers.

The use of HFT has also raised concerns about the fairness of markets. HFT firms have access to large amounts of data, which they can use to gain an informational advantage over other market participants. This has raised questions about how well these firms are able to price securities accurately, and whether they are engaging in manipulative practices such as front running. Additionally, some argue that HFT firms are able to take advantage of slower traders by trading ahead of them and profiting from their trades.

These concerns have led regulators to take a closer look at HFT and market making activities. The SEC and other regulators have implemented a number of rules designed to protect investors from unfair or manipulative practices. These include Regulation NMS, which requires market makers to post their best bid and ask prices for securities, as well as Regulation SHO, which prohibits naked short selling and other manipulative practices. Additionally, the SEC has proposed rules that would require exchanges to establish circuit breakers and limit the number of order cancellations that can be made within a certain period of time. These rules are intended to ensure that markets remain fair and efficient for all investors.

Conclusion

In conclusion, market making is a trading strategy that involves creating liquidity in a security by simultaneously being ready to buy and sell large amounts of that security. Market makers provide an essential service to the market by providing liquidity to buyers and sellers, which helps to keep stock prices stable. Wall Street banks and high-frequency traders are two of the most common examples of market makers.

Related posts on the SimTrade blog

   ▶ Akshit GUPTA Market maker – Job Description

Useful resources

SimTrade course Market making

Michael Lewis (2014) Flash Boys: A Wall Street Revolt, W.W. Norton & Company.

U.S. Securities and Exchange Commission (SEC) Specialists

About the author

The article was written in January 2023 by Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024).

Evidence of underpricing during IPOs

Martin VAN DER BORGHT

In this article, Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024) exposes the results of various studies concerning IPO underpricing.

What is IPO Underpricing?

Underpricing is estimated as the percentage difference between the price at which the IPO shares were sold to investors (the offer price) and the price at which the shares subsequently trade in the market. As an example, imagine an IPO in which the shares were sold at $20 and trade at $23.5 at the end of the first day of trading; the associated underpricing is then (23.5 / 20) - 1 = 17.5%.
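The computation in this example can be written as a small function (the `underpricing` helper is just a restatement of the formula above, using the same figures):

```python
# Underpricing as the percentage difference between the first-day closing
# price and the offer price.

def underpricing(offer_price: float, first_day_close: float) -> float:
    """Initial return of an IPO: first-day close over offer price, minus one."""
    return first_day_close / offer_price - 1

print(f"{underpricing(20.0, 23.5):.1%}")  # 17.5%
```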

In well-developed capital markets, and in the absence of restrictions on how much prices are allowed to fluctuate from day to day, the full extent of underpricing is evident fairly quickly, certainly by the end of the first day of trading, as investors seize the opportunity and push the price towards the fair value of the newly listed asset; most studies therefore use the first-day closing price when computing initial underpricing returns. Using later prices, say at the end of the first week of trading, is useful in less developed capital markets, or in the presence of ‘daily volatility limits’ restricting price fluctuations, because aftermarket prices may take some time to equilibrate supply and demand.

In the U.S. and increasingly in Europe, the offer price is set just days (or even more typically, hours) before trading on the stock market begins. This means that market movements between pricing and trading are negligible and so usually ignored. But in some countries (for instance, Taiwan and Finland), there are substantial delays between pricing and trading, and so it makes sense to adjust the estimate of underpricing for interim market movements.
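A common way to make this adjustment is to deflate the IPO's gross initial return by the market's gross return over the same pricing-to-trading window. The numbers in the sketch below are hypothetical; the 2% market move is invented for illustration.

```python
# Market-adjusted underpricing: strip out the market's move between the
# pricing date and the first trading day.

def adjusted_underpricing(raw_return: float, market_return: float) -> float:
    """Initial return net of the market return over the same window."""
    return (1 + raw_return) / (1 + market_return) - 1

# A 17.5% raw initial return while the market rose 2% over the same window
# corresponds to roughly 15.2% of market-adjusted underpricing.
print(f"{adjusted_underpricing(0.175, 0.02):.1%}")
```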

As an alternative to computing percentage initial returns, underpricing can also be measured as the (dollar) amount of ‘money left on the table’. This is defined as the difference between the aftermarket trading price and the offer price, multiplied by the number of shares sold at the IPO. The implicit assumption in this calculation is that shares sold at the offer price could have been sold at the aftermarket trading price instead, that is, that aftermarket demand is price-inelastic. As an example, imagine an IPO in which 20 million shares were sold at $20 and trade at $23.5 at the end of the first day. The IPO proceeds were $400,000,000, while the market value of those shares at the end of the first trading day was $470,000,000, implying $70,000,000 of money left on the table.
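Using the same hypothetical figures, the money left on the table can be computed as follows:

```python
# 'Money left on the table': (first-day close - offer price) x shares sold,
# using the figures from the example above.

offer_price = 20.0
first_day_close = 23.5
shares_sold = 20_000_000

proceeds = offer_price * shares_sold          # $400,000,000 raised at the IPO
market_value = first_day_close * shares_sold  # $470,000,000 at the first close
money_left = (first_day_close - offer_price) * shares_sold
print(f"${money_left:,.0f}")  # $70,000,000
```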

The U.S. probably has the most active IPO market in the world, by number of companies going public and by the aggregate amount of capital raised. Over long periods of time, underpricing in the U.S. averages between 10 and 20 percent, but there is a substantial degree of variation over time. There are occasional periods when the average IPO is overpriced, and there are periods when companies go public at quite substantial discounts to their aftermarket trading value. In 1999 and 2000, for instance, the average IPO was underpriced by 71% and 57%, respectively. In dollar terms, U.S. issuers left an aggregate of $62 billion on the table in those two years alone. Such periods are often called “hot issue markets”. Given these vast amounts of money left on the table, it is surprising that issuers appear to put so little pressure on underwriters to change the way IPOs are priced. A notable counterexample, however, is Google’s 2004 IPO, which, unusually for a U.S. IPO, was priced using an auction.

Why Has IPO Underpricing Changed over Time?

Underpricing is the difference between the price at which a stock is first offered to the public (the offer price) and the price at which it subsequently trades in the market, typically measured by the first-day return. Various authors note that underpricing has traditionally been seen as a way for firms to signal quality to potential investors, which helps them to attract more investors and raise more capital.

In their study “Why Has IPO Underpricing Changed over Time?”, Tim Loughran and Jay Ritter document how the magnitude of underpricing has varied over time. Average first-day returns were about 7% in the 1980s, rose to about 15% in the 1990s and to 65% during the 1999-2000 internet bubble, before falling back to around 12% in the years following the bubble.

They then analyze the reasons for these changes. They consider three explanations: the changing risk composition of the firms going public, a realignment of incentives between issuers and underwriters, and a changing issuer objective function, whereby issuers became more willing to accept underpricing in exchange for side benefits such as influential analyst coverage, or because of allocations of hot IPOs to executives of issuing firms (“spinning”).

These changes in underpricing affect both existing and potential investors. Existing shareholders benefit from lower underpricing, because less money is left on the table when the company goes public; conversely, investors who are allocated IPO shares see lower expected first-day returns.

In conclusion, we can note that underpricing remains present in today’s markets, and further research is needed to understand why this is the case and how it affects investors. Many argue that research should focus on how different types of IPOs and different industries are affected by changes in underpricing, and on how different investor groups, such as institutional versus retail investors, are affected by these changes.

Overall, these studies provide valuable insight into why IPO underpricing has changed so dramatically over the past four decades and how these changes have affected both existing shareholders and potential investors. While the average level of underpricing has varied considerably over time, further research is needed to understand why IPOs still exhibit underpricing today and what effect this may have on different investor groups.

Related posts on the SimTrade blog

▶ Louis DETALLE A quick review of the ECM (Equity Capital Market) analyst’s job…

▶ Marie POFF Film analysis: The Wolf of Wall Street

Useful resources

Ljungqvist A. (2004) IPO Underpricing: A Survey, in Handbook of Corporate Finance: Empirical Corporate Finance, edited by B. Espen Eckbo.

Loughran T. and J. Ritter (2004) Why Has IPO Underpricing Changed over Time? Financial Management, 33(3), 5-37.

Ellul A. and M. Pagano (2006) IPO Underpricing and After-Market Liquidity, The Review of Financial Studies, 19(2), 381-421.

About the author

The article was written in January 2023 by Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024).

Market efficiency

Martin VAN DER BORGHT

In this article, Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024) explains the key financial concept of market efficiency.

What is Market Efficiency?

Market efficiency is an economic concept that states that financial markets are efficient when all relevant information is accurately reflected in the prices of assets. This means that asset prices reflect all available information and that no one can consistently outperform the market by trading on the basis of this information. Market efficiency is often measured by the degree to which prices accurately reflect it.

The efficient market hypothesis (EMH) states that markets are efficient and that it is impossible to consistently outperform the market by utilizing available information. This means that any attempt to do so will be futile and that, over time, investors can only expect to earn a return commensurate with the risk they bear. The EMH is based on the idea that prices are quickly and accurately adjusted to reflect new information, which means that no one can consistently make money by trading on the basis of this information.

Types of Market Efficiency

Following Fama’s academic work, three forms of market efficiency are distinguished: weak, semi-strong, and strong.

Weak form of market efficiency

The weak form of market efficiency states that asset prices reflect all information from past prices and trading volumes. This implies that technical analysis, which is the analysis of past price and volume data to predict future prices, is not an effective way to outperform the market.
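A naive illustration of a weak-form test: if past prices carry no information, daily returns should be (roughly) serially uncorrelated. The sketch below runs on simulated i.i.d. returns rather than real market data, and real studies use actual return series and formal statistics such as the Ljung-Box test; this only conveys the idea.

```python
# Weak-form sketch: lag-1 autocorrelation of simulated i.i.d. daily returns
# should be close to zero, meaning past returns do not predict future ones.

import random
import statistics

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a series."""
    mean = statistics.fmean(xs)
    num = sum((a - mean) * (b - mean) for a, b in zip(xs[:-1], xs[1:]))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

random.seed(42)
returns = [random.gauss(0.0, 0.01) for _ in range(1_000)]  # i.i.d. daily returns

print(f"lag-1 autocorrelation: {lag1_autocorr(returns):.4f}")  # close to zero
```

A statistically significant nonzero autocorrelation on real data would be evidence against the weak form, which is what technical analysts implicitly bet on.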

Semi-strong form of market efficiency

The semi-strong form of market efficiency states that asset prices reflect all publicly available information, including financial statements, research reports, and news. This implies that fundamental analysis, which is the analysis of a company’s financial statements and other publicly available information to predict future prices, is also not an effective way to outperform the market.

Strong form of market efficiency

Finally, the strong form of market efficiency states that prices reflect all available information, including private information. This means that even insider trading, which is the use of private information to make profitable trades, is not an effective way to outperform the market.

The Grossman-Stiglitz paradox

If financial markets are informationally efficient, in the sense that prices incorporate all relevant available information, then gathering this information is useless for making investment decisions, since it cannot be used to beat the market over the long term. But if no market participant bothers to look at information, how does that information get incorporated into market prices in the first place? This is the Grossman-Stiglitz paradox: prices cannot be perfectly efficient, because if they were, no one would have an incentive to acquire the costly information that makes them efficient.

Real-Life Examples of Market Efficiency

The efficient market hypothesis has been extensively studied and there are numerous examples of market efficiency in action.

NASDAQ IXIC 1994 – 2005

One of the most famous examples is the dot-com bubble of the late 1990s. During this time, the prices of tech stocks skyrocketed to levels far higher than their fundamental values. This irrational exuberance was eventually corrected, as the prices of these stocks adjusted to reflect the true value of the companies.

NASDAQ IXIC Index, 1994-2005

Source: Wikimedia.

The figure “NASDAQ IXIC Index, 1994-2005” shows the Nasdaq Composite Index (IXIC) from 1994 to 2005. During this time period, the IXIC experienced an incredible surge in value, peaking in 2000 before its subsequent decline. This was part of the so-called “dot-com bubble” of the late 1990s and early 2000s, when investors were optimistic about the potential for internet-based companies to revolutionize the global economy.

The IXIC rose from under 800 in 1994 to a record high of just over 5,000 in March 2000. This was largely due to the rapid growth of tech companies such as Amazon and eBay, which attracted huge amounts of investment from venture capitalists. These investments drove up stock prices and created a huge market for initial public offerings (IPOs).

However, this rapid growth was not sustainable, and by the end of 2002 the IXIC had fallen back to around 1300. This was partly due to the bursting of the dot-com bubble, as investors began to realize that many of the companies they had invested in were unprofitable and overvalued. Many of these companies went bankrupt, leading to large losses for their investors.

Overall, the figure “NASDAQ IXIC Index, 1994-2005” illustrates the boom and bust cycle of the dot-com bubble, with investors experiencing both incredible gains and huge losses during this period. It serves as a stark reminder of the risks associated with investing in tech stocks: many of the internet-based companies that investors poured money into in the hope of huge returns were unprofitable, and their stock prices eventually plummeted once investors realized their mistake.

In addition, this period serves as a reminder of the importance of proper risk management when it comes to investing. While it can be tempting to chase high returns, it is important to remember that investments can go up as well as down. By diversifying your portfolio and taking a long-term approach, you can reduce your risk profile and maximize your chances of achieving successful returns.

U.S. Subprime lending expanded dramatically 2004–2006.

Another example of market efficiency is the global financial crisis of 2008. During this time, the prices of many securities dropped dramatically as the market quickly priced in the risks associated with rising defaults and falling asset values. The market was able to quickly adjust to the new information and the prices of securities were quickly adjusted to reflect the new reality.

U.S. Subprime Lending Expanded Significantly 2004-2006

Source: US Census Bureau.

The figure “U.S. Subprime lending expanded dramatically 2004–2006” illustrates the extent to which subprime mortgage lending in the United States increased during this period. It shows a dramatic rise in subprime mortgage origination from 2004 to 2006: in 2004, less than $500 billion worth of subprime or Alt-A mortgages were issued; by 2006, that figure had risen to over $1 trillion, an increase of more than 100%.

This increase in the number of subprime mortgages being issued was largely driven by lenders taking advantage of relaxed standards and government policies that encouraged home ownership. Lenders began offering mortgages with lower down payments, looser credit checks, and higher loan-to-value ratios. This allowed more people to qualify for mortgages, even if they had poor credit or limited income.

At the same time, low interest rates and a strong economy made it easier for people to take on these loans and still be able to make their payments. As a result, many people took out larger mortgages than they could actually afford, leading to an unsustainable increase in housing prices and eventually a housing bubble.

When the bubble burst, millions of people found themselves unable to make their mortgage payments, and the global financial crisis followed. The dramatic increase in subprime lending seen in this figure is one of the primary factors that led to the 2008 financial crisis and is an illustration of how easily irresponsible lending can lead to devastating consequences.

Impact of FTX crash on FTT

Finally, the recent rise (and fall) of the cryptocurrency market is another example of market efficiency. The prices of cryptocurrencies have been highly volatile and have been quickly adjusted to reflect new information. This is due to the fact that the market is highly efficient and is able to quickly adjust to new information.

Price and Volume of FTT

Source: CoinDesk.

The figure “Price and Volume of FTT” shows the impact of the FTX exchange crash on the FTT token price and trading volume. The chart reflects the dramatic drop in FTT’s price and the extreme increase in trading volume that occurred in the days leading up to and following the crash. The FTT price began to decline rapidly several days before the crash, dropping from around $3.60 to around $2.20 in the hours before it. Following the crash, the price of FTT fell even further, reaching a low of just under $1.50. This sharp drop can be seen clearly in the chart, which shows the steep downward trajectory of FTT’s price.

The chart also reveals an increase in trading volume prior to and following the crash. This is likely due to traders attempting to buy low and sell high in response to the crash. The trading volume increased dramatically, reaching a peak of almost 20 million FTT tokens traded within 24 hours of the crash. This is significantly higher than the usual daily trading volume of around 1 million FTT tokens.

Overall, this chart provides a clear visual representation of the dramatic impact that the FTX exchange crash had on the FTT token price and trading volume. It serves as a reminder of how quickly markets can move and how volatile they can be, even in seemingly stable assets like cryptocurrencies.

Today, the FTT token price has recovered somewhat since the crash, and currently stands at around $2.50. However, this is still significantly lower than it was prior to the crash. The trading volume of FTT is also much higher than it was before the crash, averaging around 10 million tokens traded per day. This suggests that investors are still wary of the FTT token, and that the market remains volatile.

Conclusion

Market efficiency is an important concept in economics and finance, based on the idea that prices accurately reflect all available information. There are three forms of market efficiency (weak, semi-strong, and strong) and numerous examples of market efficiency in action, such as the dot-com bubble, the global financial crisis, and the rise and fall of the cryptocurrency market. As such, markets appear to be generally efficient, and it is difficult, if not impossible, to consistently outperform them.

Related posts on the SimTrade blog

   ▶ All posts related to market efficiency

   ▶ William ANTHONY Peloton’s uphill battle with the world’s return to order

   ▶ Aamey MEHTA Market efficiency: the case study of Yes bank in India

   ▶ Aastha DAS Why is Apple’s new iPhone 14 release line failing in the first few months?

Useful resources

SimTrade course Market information

Academic research

Fama E. (1970) Efficient Capital Markets: A Review of Theory and Empirical Work, Journal of Finance, 25, 383-417.

Fama E. (1991) Efficient Capital Markets: II, Journal of Finance, 46, 1575-1617.

Grossman S.J. and J.E. Stiglitz (1980) On the Impossibility of Informationally Efficient Markets, The American Economic Review, 70, 393-408.

Chicago Booth Review (30/06/2016) Are markets efficient? Debate between Eugene Fama and Richard Thaler (YouTube video)

Business resources

CoinDesk These Four Key Charts Shed Light on the FTX Exchange’s Spectacular Collapse

Bloomberg Crypto Prices Fall Most in Two Weeks Amid FTT and Macro Risks

About the author

The article was written in January 2023 by Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024).

Special Purpose Acquisition Companies (SPAC)

Martin VAN DER BORGHT

In this article, Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024) presents special purpose acquisition companies (SPACs).

What are SPACs?

Special purpose acquisition companies (SPACs) are an increasingly popular form of corporate finance for businesses seeking to go public. SPACs are publicly listed entities created with the objective of raising capital through their initial public offering (IPO) and then using that capital to acquire a private operating business. As the popularity of this financing method has grown, so have questions about how SPACs work, their potential risks and rewards, and their implications for investors. This article will provide an overview of SPAC structures and describe key considerations for investors in evaluating these vehicles.

How are SPACs created

A special purpose acquisition company (SPAC) is created by sponsors who typically have a specific sector or industry focus; they use the proceeds from their IPO to acquire a target company within that focus area. Unlike in a traditional IPO, the target company is usually not identified before the IPO takes place, which is why SPACs are often called “blank-check companies”. Once a target has been found, shareholders vote on whether to approve the proposed business combination, along with other aspects such as management compensation packages.

The SPAC process

The process begins when the sponsors form a shell corporation, which issues shares through investment banks acting as underwriters. These shares are offered in an IPO that typically raises between $250 million and $500 million, depending on market conditions at the time of launch. Sponsors can also raise additional funds through private placements before going public, and may contribute cash from the sale of existing assets owned by the company’s founders prior to the IPO. This gives them more flexibility in choosing targets during the search process, and allows ownership of the acquired business to be transferred faster than in a traditional M&A process, since there is no need to wait for regulatory approval beforehand. Once enough capital has been raised through the IPO and private placements, the sponsor team searches for suitable candidates using criteria determined ahead of time based on the desired sector or industry focus: the size of the target, the revenues generated per quarter or year, and the competitive edge of its current products relative to competitors all come into play when narrowing down the list of candidates whose acquisition could increase the long-term value of the original shareholders’ investment.
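As a rough numeric sketch of a typical SPAC capital structure (the $10 unit price and the sponsor "promote" equal to 25% of the public shares are common market conventions, used here as assumptions rather than figures from the article):

```python
# Illustrative SPAC arithmetic: units are usually sold at $10, and the
# sponsor's founder shares ("promote") are typically set at 25% of the
# public shares, i.e. 20% of the post-IPO equity.

ipo_proceeds = 300_000_000
unit_price = 10
public_shares = ipo_proceeds // unit_price   # 30,000,000 shares sold to the public
founder_shares = public_shares // 4          # 7,500,000 promote shares

total_shares = public_shares + founder_shares
sponsor_stake = founder_shares / total_shares
print(f"Sponsor stake post-IPO: {sponsor_stake:.0%}")  # 20%
```

This promote is one source of the dilution that public SPAC shareholders bear, which is part of the cost discussed in the disadvantages below.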

Advantages of SPACs

Unlike traditional IPOs, in which companies must fully disclose financial information about past performance and future prospects to comply with the regulations set forth by the Securities and Exchange Commission (SEC), there is far less disclosure involved when investing in a SPAC at the IPO stage, because the acquisition decision comes later: details about the target are only disclosed once an agreement has been reached between both parties, though some SPACs provide general information during the pre-IPO phase to give prospective buyers a better idea of what to expect once a deal goes through. This structure helps lower the cost of taking a business public, since much of the due diligence is done before shares are offered to investors, giving them access to higher-quality opportunities at a fraction of the price of those available on traditional stock exchanges. Additionally, because shareholder votes are taken at each step of the way, the risk of fraud is reduced, since any major irregularities discovered about the selected target become common knowledge to everyone voting on the proposed combination (keeping board members accountable).

Disadvantages of SPACs

As attractive as this option might seem, there are certain drawbacks to be aware of, such as the high cost of structuring and launching a successful campaign, and the fact that most liquidation events occur within two years of the listing date, meaning a lot of money is spent upfront with no guarantee of returns at the back end. Another concern regards transparency: while disclosure requirements are much stricter than for regular stocks, there is still no full disclosure of the proposed acquisition until the deal is finalized, making it difficult to determine whether a particular venture is worth the risk taken on behalf of investors. Lastly, many believe that merging different types of businesses could disrupt existing industries rather than create new ones, something worth considering before investing large sums of money in a particular enterprise.

Examples of SPACs

VPC Impact Acquisition (VPC)

This SPAC was formed in 2020 and is backed by Pershing Square Capital Management, a leading hedge fund. It had an initial funding of $250 million and made three acquisitions. The first acquisition was a majority stake in the outdoor apparel company, Moosejaw, for $280 million. This acquisition was considered a success as Moosejaw saw significant growth in its business after the acquisition, with its e-commerce sales growing over 50% year-over-year (Source: Business Insider). The second acquisition was a majority stake in the lifestyle brand, Hill City, for $170 million, which has also been successful as it has grown its e-commerce and omnichannel businesses (Source: Retail Dive). The third acquisition was a minority stake in Brandless, an e-commerce marketplace for everyday essentials, for $25 million, which was not successful and eventually shut down in 2020 after failing to gain traction in the market (Source: TechCrunch). In conclusion, VPC Impact Acquisition has been successful in two out of three of its acquisitions so far, demonstrating its ability to identify successful investments in the consumer and retail sector.

Social Capital Hedosophia Holdings Corp (IPOE)

This SPAC was formed in 2019 and is backed by Social Capital Hedosophia, a venture capital firm co-founded by famed investor Chamath Palihapitiya. It had an initial funding of $600 million and has made two acquisitions so far. The first acquisition was a majority stake in Virgin Galactic Holdings, Inc. for $800 million, which has been extremely successful as it has become a publicly traded space tourism company and continues to make progress towards its mission of accessible space travel (Source: Virgin Galactic). The second acquisition was a majority stake in Opendoor Technologies, Inc., an online real estate marketplace, for $4.8 billion, which has been successful as the company has seen strong growth in its business since the acquisition (Source: Bloomberg). In conclusion, Social Capital Hedosophia Holdings Corp has been incredibly successful in both of its acquisitions so far, demonstrating its ability to identify promising investments in the technology sector.

Landcadia Holdings II (LCA)

This SPAC was formed in 2020 and is backed by Landcadia Holdings II Inc., a blank check company formed by Jeffery Hildebrand and Tilman Fertitta. It had an initial funding of $300 million and made one acquisition, a majority stake in Waitr Holdings Inc., for $308 million. Unfortunately, this acquisition was not successful: Waitr filed for bankruptcy in 2020 due to an overleveraged balance sheet and a lack of operational improvements (Source: Reuters). Waitr had previously been a thriving food delivery company but failed to keep up with the rapid growth of competitors such as GrubHub and DoorDash (Source: CNBC). In conclusion, Landcadia Holdings II’s acquisition of Waitr Holdings Inc. was unsuccessful due to market conditions outside of its control, demonstrating that even when a SPAC is backed by experienced investors and has adequate funding, there is still no guarantee of success.

Conclusion

Despite these drawbacks, Special Purpose Acquisition Companies remain a viable option for entrepreneurs seeking to take advantage of the rising trend toward the digitalization of global markets who would otherwise not have access to the resources necessary to fund projects themselves. By providing access to higher-caliber opportunities, this type of vehicle helps fill the gap left by the many start-up ventures unable to compete against larger organizations given their limited financial capacity to operate self-sufficiently. For the reasons stated above, it is clear why SPACs continue to gain traction among investors and entrepreneurs alike looking to capitalize quickly on the changing economic environment we live in today.

Related posts on the SimTrade blog

   ▶ Daksh GARG Rise of SPAC investments as a medium of raising capital

Useful resources

U.S. Securities and Exchange Commission (SEC) Special Purpose Acquisition Companies

U.S. Securities and Exchange Commission (SEC) What are the differences in an IPO, a SPAC, and a direct listing?

U.S. Securities and Exchange Commission (SEC) What You Need to Know About SPACs – Updated Investor Bulletin

PwC Special purpose acquisition companies (SPACs)

Harvard Business Review SPACs: What You Need to Know

Bloomberg

Reuters

About the author

The article was written in January 2023 by Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024).

My experience as an intern in the Corporate Finance department at Maison Chanel

Martin VAN DER BORGHT

In this article, Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024) shares his professional experience as a Corporate Finance intern at Maison Chanel.

About the company

Chanel is a French company producing haute couture, as well as ready-to-wear, accessories, perfumes, and various luxury products. It originates from the fashion house created by Coco Chanel in 1910 but is the result of the takeover of Chanel by the company Les Parfums Chanel in 1954.

Chanel logo.
Source: Chanel.

In February 2021, the company opened a new building called le19M. This building was designed to bring together 11 Maisons d’art, the Maison ERES, and a multidisciplinary gallery, la Galerie du 19M, under the same roof. Six hundred artisans and experts are gathered in a building offering working conditions favorable to the wellbeing of everyone and to the development of new perspectives at the service of the biggest names in fashion and luxury.

My internship

From September 2021 to February 2022, I was an intern in the Corporate Finance and Internal Control department at Maison Chanel in Paris, France. I worked within Manufactures de Mode, a subsidiary of Chanel that serves as a support function for all the Maisons d’art and Manufactures de Mode located in the le19M building. My internship was organized around three main missions.

My missions

My first mission was to develop and implement an internal control process worldwide in every entity belonging to the fashion division of Chanel. The idea was to create a single process that could be used in every entity, whatever its size, so that they would all share the same process, improving efficiency during internal and external audits.

During the six months of my internship, we focused our development on a particular aspect of internal control called “segregation of duties” (SoD). Segregation of duties is the assignment of the various steps in a process to different people. The intent is to eliminate instances in which someone could engage in theft or other fraudulent activities by having an excessive amount of control over a process. In essence, the physical custody of an asset, the record keeping for it, and the authorization to acquire or dispose of the asset should be split among different people. We developed multiple procedures and matrices allowing the company to check whether its actual processes were at risk, with different levels of risk and adjustments specific to each entity.

My second mission was to value each company in order to test it for goodwill impairment in the Chanel SAS consolidation. We used a discounted cash flow (DCF) model to value every company and, based on the value determined, we tested the goodwill. Goodwill impairment is an earnings charge that companies record on their income statements after identifying persuasive evidence that the asset associated with the goodwill can no longer deliver the financial results that were expected from it at the time of its purchase.

Let me take an example. Imagine company X acquires company Y for $100,000 while company Y has a fair value of $60,000. In this situation, the goodwill is $40,000 (= 100,000 – 60,000). Now let’s say that, a year later, the carrying amount of company Y’s net assets is $45,000 while the recoverable amount of the unit is $80,000. The carrying amount including the goodwill ($85,000) is now higher than the recoverable amount ($80,000), so we have to impair the goodwill by $5,000 (= 85,000 – 80,000) to account for this decrease in value. As the company was acquired at a price higher than its fair value, it is the goodwill that absorbs the loss.
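The arithmetic of this example can be sketched in a few lines of Python (the figures are the illustrative numbers from the example above, not real Chanel data):

```python
# Goodwill impairment arithmetic, using the toy figures from the example.
acquisition_price = 100_000   # price X paid for company Y
fair_value_at_acq = 60_000    # fair value of Y's net assets at acquisition
goodwill = acquisition_price - fair_value_at_acq  # 40,000

# One year later:
carrying_net_assets = 45_000  # carrying amount of Y's net assets
recoverable_amount = 80_000   # recoverable amount of the unit

carrying_incl_goodwill = carrying_net_assets + goodwill          # 85,000
impairment = max(carrying_incl_goodwill - recoverable_amount, 0) # 5,000
goodwill_after = goodwill - impairment                           # 35,000

print(goodwill, impairment, goodwill_after)  # 40000 5000 35000
```

Note the `max(..., 0)`: if the recoverable amount had exceeded the carrying amount, no impairment would be recorded.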

My last mission was a day-to-day exercise in which I had to assist and support each entity in its duties towards Chanel SAS. This could be anything related to finance or accounting (reporting, valuation, post-acquisition integration, etc.), and sometimes not related to finance at all but to the development of these companies (IT, audits, etc.). This last mission allowed me to travel to multiple Maisons d’art and Manufactures de Mode to help prepare internal and external audits.

Required skills and knowledge

The main requirements for this internship were to be at ease with accounting and financial principles (reporting, consolidation, fiscal integration, valuation, etc.), to be able to communicate with a multitude of employees both in writing and in person, and to be perfectly fluent in English, as the entities are located all over the world.

What I learned

This internship was a great learning opportunity because it required a complete skillset to work simultaneously on internal control, financial, accounting, and audit aspects. It also gave me the chance to meet a huge number of interesting and knowledgeable people, to travel, to learn more about the luxury fashion industry at every stage of the creation process, and to discover what it is like to work in a large company operating on a worldwide scale.

Three concepts I applied during my journey

Discounted cash flow (DCF)

Discounted cash flow (DCF) analysis is a valuation method used to estimate the value of an asset or business. It does this by discounting all future cash flows associated with the asset or business back to the present time, so that they have a consistent value in today’s terms. DCF analysis is one of the most commonly used methods for valuing a business and its assets, as it takes into account both current and expected future earnings potential.

The purpose of using DCF analysis is to determine an accurate value for an asset or company in order to make informed investment decisions. The method takes into account all expected future cash flows from operating activities, such as sales, expenses, taxes, and dividends paid out over time, when calculating intrinsic worth. This allows investors to evaluate how much they should pay for an investment today compared with what it could be worth in the future.

The process involves estimating free cash flow (FCF) — net income plus non-cash items like depreciation and amortization, minus the capital expenditures required for day-to-day operations — and discounting these figures back at a rate determined by market conditions such as the risk level and the interest rates available on similar investments. The sum of the discounted cash flows over the explicit forecast period gives their present value (PV); to this is added a terminal value (TV), which captures the returns expected beyond the forecast horizon based on an assumed long-term growth rate.

Since DCF relies only on anticipated figures based on prior research and financial data, the method has limitations when used to determine fair market values: unexpected events occurring between the valuation date and the end of the forecast period can push prices above the estimated figures, creating higher returns than originally forecast, while unforeseen economic downturns can push prices below the projections, resulting in lower returns than initially assumed. Therefore, while discounted cash flow estimates are helpful tools for making more informed decisions about buying or selling specific assets or companies, investors should conduct additional due diligence rather than rely solely on these calculations before making a final decision on the opportunities being evaluated.
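The mechanics described above can be sketched in a few lines of Python. The cash flows, the 10% discount rate, and the 2% perpetual growth rate are purely hypothetical inputs chosen for illustration; the terminal value uses the standard Gordon-growth formula:

```python
# Minimal DCF sketch with hypothetical inputs (not a real valuation).
fcfs = [100.0, 110.0, 120.0, 130.0, 140.0]  # forecast free cash flows, years 1..5
r = 0.10   # discount rate (assumed)
g = 0.02   # perpetual growth rate for the terminal value (must be < r)

# Present value of the explicit forecast period
pv_fcfs = sum(fcf / (1 + r) ** t for t, fcf in enumerate(fcfs, start=1))

# Terminal value at the end of year 5 (Gordon growth), discounted to today
tv = fcfs[-1] * (1 + g) / (r - g)
pv_tv = tv / (1 + r) ** len(fcfs)

enterprise_value = pv_fcfs + pv_tv
print(round(enterprise_value, 2))
```

Note how sensitive the result is to `r` and `g`: here the discounted terminal value accounts for well over half of the total, which is exactly the input-sensitivity limitation discussed above.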

Goodwill

Goodwill impairment is an analysis used to determine the current market value of a company’s intangible assets. It is usually performed when a company has acquired or merged with another entity, but it can also be done in other situations, such as when the fair value of a reporting unit decreases significantly due to market conditions or internal factors. The purpose of goodwill impairment analysis is to ensure that a company’s financial statements accurately reflect its financial position by recognizing any potential losses in intangible asset value associated with poor performance.

When conducting goodwill impairment analysis, companies must first calculate their total identifiable assets and liabilities at fair value less costs associated with disposal (FVLCD). This includes both tangible and intangible assets like trademarks, patents, and customer relationships. Next, they must subtract FVLCD from the acquisition price of the target entity to calculate goodwill. Goodwill represents any excess amount paid for an acquiree above its fair market value which cannot be attributed directly to specific tangible or intangible assets on its balance sheet. If this calculated goodwill amount is greater than zero, then it needs to be tested for potential impairment losses over time.
The most common method for testing goodwill impairment involves comparing the implied fair value of each reporting unit’s net identifiable asset base (including both tangible and intangible components) against its carrying amount on the balance sheet at that moment in time. Companies may use either a discounted cash flow model or their own proprietary valuation techniques for this comparison, which should consider the future expected cash flow streams from the operations of each reporting unit affected by prior-year acquisitions, among other inputs such as industry trends and macroeconomic factors where applicable. If the evidence suggests lower overall returns than originally anticipated, it could indicate an impaired asset requiring additional accounting adjustments.

In summary, goodwill impairment analysis plays an important role in ensuring accurate accounting practices, so that companies’ financial statements reflect current values rather than simply relying on historic acquisition prices, which may not represent present-day realities. By taking all relevant information into consideration during these tests, businesses can identify potential issues early on and make the necessary adjustments without excessive negative impact on downstream operations.

Segregation of duties (SoD)

Segregation of duties (SoD) is an important part of any company’s internal control system. It involves the separation and assignment of different tasks to different people within a business, in order to reduce the risk that one person has too much power over critical functions. This segregation helps to ensure accuracy, integrity, and security in all areas.

Segregation of duties can be broken down into two main components: functional segregation and administrative segregation. Functional segregation involves assigning specific responsibilities or tasks to individuals with expertise or knowledge in that area while administrative segregation focuses on preventing an individual from having too much authority over a process or task by dividing those responsibilities among multiple people.

The purpose of segregating duties is to limit the potential risks associated with fraud, errors due to lack of proper supervision, mismanagement, waste, and misuse of resources, as well as other criminal activities that could lead to losses for the business. Segregation also ensures accountability for everyone’s actions by making sure no single employee has access to or control over more than one critical function at any given time, thereby reducing opportunities for mismanagement and manipulation without proper oversight from management. Additionally, it allows businesses to better manage their internal processes by providing checks and balances between departments, promoting the coordination that is beneficial when dealing with complex procedures such as budgeting cycles or payroll processing.

In conclusion, segregating duties helps businesses reduce risks related not only to fraud but also to mismanagement, waste, misuse of resources, and other criminal activities that may lead to losses, and it creates transparency and accountability within departments so that they can coordinate properly and execute operations efficiently. It is therefore an essential component that businesses should consider implementing in their internal control systems if they wish to ensure their long-run financial stability.

Related posts on the SimTrade blog

   ▶ All posts about professional experience

   ▶ Emma LAFARGUE Mon expérience en contrôle de gestion chez Chanel

   ▶ Marie POFF Film analysis: Rogue Trader

   ▶ Louis DETALLE The incredible story of Nick Leeson & the Barings Bank

   ▶ Maite CARNICERO MARTINEZ How to compute the net present value of an investment in Excel

   ▶ William LONGIN How to compute the present value of an asset?

Useful resources

Maison Chanel

le19m

About the author

The article was written in January 2023 by Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024).

My experience as a Risk Advisory Analyst in Deloitte

Nithisha CHALLA

In this article, Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management, 2021-2023) shares her experience as a Risk Advisory Analyst in Deloitte.

About the company

Deloitte is one of the Big Four accounting firms along with EY (Ernst & Young), KPMG, and PricewaterhouseCoopers (PwC). It is the largest professional services network (with teams in different countries working together) in the world by number of professionals and revenue, headquartered in London, England. The firm was founded by William Welch Deloitte in London in 1845 and expanded into the United States in 1890. Deloitte provides audit, consulting, financial advisory, risk advisory, tax, and legal services with approximately 415,000 professionals globally. In fiscal year 2021, the network earned an aggregate revenue of US$50.2 billion. A few of Deloitte’s largest customers as of 2021 include Morgan Stanley, The Blackstone Group, and Berkshire Hathaway.

Logo of Deloitte.
Source: Deloitte.

As a risk advisory analyst, I had the opportunity to read many of the surveys that Deloitte conducts on an annual basis to assess work ethics, strategy, and their influence on a particular business line. To help readers put the results in context, these surveys also provide an overview of the global standing of the relevant business sector. The 11th edition of the Global Risk Management Survey states that, despite the relatively stable global economy, risk management currently faces numerous significant impending risks that will force financial services institutions to reconsider their traditional methods. The firm also maintains that risk management must be integrated into strategy, so that the institution’s risk appetite and risk utilization are important factors to consider.

My experience as a Financial Risk Advisory Analyst at Deloitte

My hands-on experience with risk management and its applications began with my first role after graduation, as a Financial Risk Advisory Analyst in the Anti-Money Laundering division at Deloitte USI (Deloitte USI is a division of Deloitte US that serves customers of the US member firm and is physically located in India). In this project, I worked for an international bank to audit and assess the company’s customer risk.

My responsibility at work

As an employee in the Risk Advisory department at Deloitte, I provided a host of advanced services to an international bank. I conducted enhanced due diligence on the client’s high-risk and high-net-worth customers by examining the origin of their funds and transactions exhibiting irregular behavior. A large part of my work was to minimize risk; to maintain the highest standard of financial understanding, I undertook regular risk assessments. The nature of my tasks brought me close familiarity with numerous domains, particularly clients involved in economically sensitive industries and geographies all over the world.

The work involved holistic net-worth assessments of high-profile customers based on their diversified financial portfolios. The team starts by researching the client and using public records to check for any criminal history. The team then determines the customer’s net worth by conducting a thorough analysis of the client’s varied sources of income, such as a family trust, an inheritance, self-employment, and stock investments. Additionally, the team examines the transactions to look for any potential signs of money laundering.

The whole money laundering process is typically carried out in three stages:

  • Placement
  • Layering
  • Integration

The first step in money laundering is depositing illegal funds in financial institutions to make them appear legitimate. This entails splitting up larger sums of money into smaller, less noticeable amounts, transporting cash across borders to deposit the money in foreign banks, or purchasing pricey items like fine art, antiques, gold, etc. Once the money has entered the financial system, it is moved around, or layered, from one place to another in an effort to conceal criminal activity.

For instance, buying an antique item with the money and selling it later to fund the establishment of a holding company or non-financial trust. These financial entities, which are typically corporations or limited liability companies (LLCs), hold the controlling stake in their subsidiary companies and, as a result, oversee the management of child companies without getting directly involved in their day-to-day management.

Another example would be locating the holding company in a region with a low tax rate. These controlling companies are simple to establish and can significantly reduce the tax burden of the entire corporation. If a child company declares bankruptcy, the holding company, which may hold other child companies or portions of child companies, is shielded from the loan creditors. Once the money appears legitimate, it is integrated into the system to generate profit. At this stage, identifying black money is very difficult for the banking system.
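One placement-stage pattern mentioned above — splitting large sums into smaller, less noticeable deposits ("structuring") — lends itself to a simple rule-based screen. The $10,000 threshold, the customer IDs, and the deposit data below are toy assumptions for illustration, not an actual monitoring rule used at Deloitte or the bank:

```python
# Toy structuring screen: flag customers whose individual deposits stay
# below a reporting threshold but whose sub-threshold total exceeds it.
from collections import defaultdict

REPORTING_THRESHOLD = 10_000  # hypothetical cash-reporting threshold

deposits = [  # (customer, amount) over one week -- fabricated data
    ("cust_1", 9_500), ("cust_1", 9_800), ("cust_1", 9_700),
    ("cust_2", 12_000),
    ("cust_3", 500), ("cust_3", 800),
]

def flag_structuring(deposits, threshold=REPORTING_THRESHOLD):
    """Flag customers with several sub-threshold deposits summing above the threshold."""
    totals, counts = defaultdict(int), defaultdict(int)
    for customer, amount in deposits:
        if amount < threshold:  # deposits at/above the threshold are reported anyway
            totals[customer] += amount
            counts[customer] += 1
    return {c for c in totals if counts[c] >= 2 and totals[c] > threshold}

print(flag_structuring(deposits))  # {'cust_1'}
```

Real transaction-monitoring systems combine many such rules with customer risk scores; a single screen like this only generates alerts for human review.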

My missions

My job has broadened the scope of my leadership abilities: I led a group of five analysts performing quality checks to ensure that projects with strict deadlines were completed on time and to the standard of quality that clients have come to expect from the company. I received several spot awards during my time at Deloitte for my willingness and capacity to go above and beyond.

Coordinating with on-site teams and executives across geographies gave me significant international exposure in the comparatively brief time I have spent at Deloitte. Additionally, I had a thorough introduction to the procedures that enable experts to identify elements that pose risks to the regular functioning of enterprises, and to eliminate or mitigate them.

What I have gained from the job

The following points are a brief summary of what I learned through my full-time role in the project:

Tax obligations in various jurisdictions

A company’s tax is calculated based on its base location, irrespective of how money flows into the company.

Different financial entities

The functioning, policies, and structure are different for different entities like LLCs, LLPs, holding companies, non-financial trusts, etc.

Beneficial Ownership

A company can have multiple forms of ownership, such as joint ownership, proprietorship, or partnership; in such complex models, determining who the beneficial owners are is a key question.

Required skills and knowledge

When I first started working, the hard skills I needed — for example, to make presentations or scatterplots — included knowledge of money laundering and of the Microsoft suite, particularly Excel. Since the projects associated with these business lines are typically enormous, solid soft skills make them easier to manage: compliance, teamwork, and cooperation are necessary on an individual level.

Key concepts

I developed below key concepts that I use during my job.

Know your customer (KYC)

Know Your Customer (KYC) can also refer to Know Your Client. Financial institutions are protected by Know Your Customer (KYC) regulations from fraud, corruption, money laundering, and financing of terrorism. When opening an account and on an ongoing basis, KYC checks are required to identify and confirm the client’s identity. In other words, banks need to confirm that their customers are actually who they say they are.

Due Diligence

It refers to the procedures employed by financial organizations to gather and assess pertinent data regarding a customer. It seeks to identify any potential risk to the financial institution of doing business with that customer. The procedure entails assessing public data sources, such as firm listings, private data sources from third parties, and government sanction lists. Meeting Know Your Customer (KYC) standards, which differ from country to country, involves conducting extensive customer due diligence.

Anti-Money Laundering (AML)

The network of rules and norms known as anti-money laundering (AML) aims to expose attempts to pass off illegal money as legitimate income. Money laundering aims to cover up offenses like minor drug sales and tax evasion as well as public corruption and funding of terrorist organizations. AML initiatives seek to make it more difficult to conceal the proceeds of crime. Financial institutions need rules to create regulated customer due diligence plans to evaluate money laundering risks and identify questionable transactions.

Why should I be interested in this post?

I believe that this post’s description of anti-money laundering, a significant business line of Risk and Financial Advisory, may be very helpful to those interested in pursuing a career in finance. It will help them bridge the gap between real-life work experience and theoretical knowledge. This article also provides a quick overview of the auditing and risk and financial advisory (RFA) work environments at Deloitte, one of the Big Four organizations.

Related posts on the SimTrade blog

   ▶ All posts about Professional experiences

   ▶ Basma ISSADIK My experience as an M&A/TS intern at Deloitte

   ▶ Anant JAIN My internship experience at Deloitte

   ▶ Pierre-Alain THIAM My experience as a junior audit consultant at KPMG

Useful resources

Deloitte

About the author

The article was written in January 2023 by Nithisha CHALLA (ESSEC Business School, Grande Ecole Program – Master in Management, 2021-2023).

Categories of risk measures

Shengyu ZHENG

In this article, Shengyu ZHENG (ESSEC Business School, Grande Ecole Program – Master in Management, 2020-2023) presents the categories of risk measures commonly used in finance.

Depending on the type of asset and the objective of risk management, risk measures of different categories are used. Technically, three categories of risk measures can be distinguished according to the statistical object used: the statistical distribution, sensitivities, and scenarios. In practice, methods from the different categories are employed and combined, forming a risk management system that serves different levels of managerial needs.

Approach based on the statistical distribution

Modern risk measures focus on the statistical distribution of the change in value of a market position (or the return of this position) over a given horizon.

These measures fall mainly into two types: global and local. Global measures (variance, beta) account for the entire distribution. Local measures (Value-at-Risk, Expected Shortfall, Stress Value) focus on the tails of the distribution, particularly the tail where the losses lie.
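As a minimal illustration of the local measures, here is a sketch computing historical Value-at-Risk and Expected Shortfall on a toy sample of daily returns; the return data and the 20% tail level are assumptions chosen only to make a ten-point example readable:

```python
# Historical VaR and Expected Shortfall on a toy return sample.
returns = [0.012, -0.021, 0.005, -0.034, 0.010, -0.008,
           0.017, -0.015, 0.002, -0.027]  # daily returns (fabricated)
alpha = 0.20  # tail probability (unrealistically large, for a 10-point sample)

losses = sorted(-r for r in returns)   # convert returns to losses, ascending
k = int((1 - alpha) * len(losses))     # index of the (1 - alpha) empirical quantile
var = losses[k]                        # historical VaR: loss threshold of the tail
tail = losses[k:]                      # the worst alpha fraction of outcomes
es = sum(tail) / len(tail)             # Expected Shortfall: mean loss in that tail

print(var, es)
```

On this sample the VaR is the 0.027 daily loss and the Expected Shortfall, the average of the two worst losses, is 0.0305 — illustrating that ES always sits at or beyond VaR in the tail.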

This approach is not perfect, however. A single statistical indicator is generally not sufficient to describe all the risks present in a position or portfolio. The computation of statistical properties and the estimation of parameters are based on past data, whereas financial markets constantly evolve. Even if the distribution remains unchanged over time, estimating it precisely is not straightforward, and parametric assumptions are not always reliable.

Approach based on sensitivities

This approach assesses the impact of a change in a risk factor on the value or return of the portfolio. Measures such as duration and convexity for bonds and the Greeks for derivatives belong to this category.

These measures also have limitations, notably in terms of risk aggregation.

Approach based on scenarios

This approach considers the maximum loss across all scenarios generated under conditions of major market changes. The shocks may be, for example, a 10% rise in an interest rate or a currency, accompanied by a 20% fall in major equity indices.

A stress test is a device often put in place by central banks to ensure the solvency of important market players and the stability of the financial market. A stress test is an exercise consisting of simulating extreme but genuinely plausible economic and financial conditions, in order to study the main consequences for financial institutions (for example, banks or insurers) and to quantify their capacity to withstand them.

Related posts on the SimTrade blog

▶ Shengyu ZHENG Mesures de risques

▶ Shengyu ZHENG Moments de la distribution

▶ Shengyu ZHENG Extreme Value Theory: the Block-Maxima approach and the Peak-Over-Threshold approach

▶ Youssef LOURAOUI Markowitz Modern Portfolio Theory

Resources

Academic research (articles)

Aboura S. (2009) The extreme downside risk of the S&P 500 stock index. Journal of Financial Transformation, 2009, 26 (26), pp.104-107.

Gnedenko, B. (1943). Sur la distribution limite du terme maximum d’une série aléatoire. Annals of mathematics, 423–453.

Hosking, J. R. M., Wallis, J. R., & Wood, E. F. (1985) “Estimation of the generalized extreme-value distribution by the method of probability-weighted moments” Technometrics, 27(3), 251–261.

Longin F. (1996) The asymptotic distribution of extreme stock market returns Journal of Business, 63, 383-408.

Longin F. (2000) From VaR to stress testing: the extreme value approach, Journal of Banking and Finance, 24, 1097-1130.

Longin F. et B. Solnik (2001) Extreme correlation of international equity markets Journal of Finance, 56, 651-678.

Mises, R. v. (1936). La distribution de la plus grande de n valeurs. Rev. math. Union interbalcanique, 1, 141–160.

Pickands III, J. (1975). Statistical Inference Using Extreme Order Statistics. The Annals of Statistics, 3(1), 119– 131.

Academic research (books)

Embrechts P., C. Klüppelberg and T Mikosch (1997) Modelling Extremal Events for Insurance and Finance.

Embrechts P., R. Frey, McNeil A. J. (2022) Quantitative Risk Management, Princeton University Press.

Gumbel, E. J. (1958) Statistics of extremes. New York: Columbia University Press.

Longin F. (2016) Extreme events in finance: a handbook of extreme value theory and its applications Wiley Editions.

Other materials

Extreme Events in Finance

Rieder H. E. (2014) Extreme Value Theory: A primer (slides).

About the author

This article was written in January 2023 by Shengyu ZHENG (ESSEC Business School, Grande Ecole Program – Master in Management, 2020-2023).

Hedge fund diversification

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) discusses the notion of hedge fund diversification by analyzing the paper “Hedge fund diversification: how much is enough?” by Lhabitant and Learned (2002).

This article is organized as follows: we describe the primary characteristics of the research paper, then highlight its most important points, and conclude with a discussion of the principal findings.

Introduction

The paper discusses the advantages of investing in a set of hedge funds or a multi-strategy hedge fund. It is a relevant subject in the field of alternative investments, as it has attracted the interest of institutional investors seeking to explore the alternative investment universe and increase their portfolio return. The paper’s primary objective is to determine the appropriate number of hedge funds that a portfolio manager should combine in a portfolio to maximize its (expected) returns. The paper examines the impact of adding hedge funds to a traditional portfolio and the effect on various statistics of the return distribution (average return, volatility, skewness, and kurtosis). The authors consider basic portfolios (randomly chosen and equally-weighted portfolios). The purpose is to evaluate the diversification advantage and the dynamics of the diversification effect of hedge funds.

Key elements of the paper

The pioneering work of Harry Markowitz (1952) depicted the effect of diversification by analyzing portfolio asset allocation in terms of risk and (expected) return. Since unsystematic risk (specific risk) can be neutralized, investors will not receive an additional return for bearing it. Systematic risk (market risk) is the component that the market rewards. Diversification is thus at the heart of asset allocation, as emphasized by Modern Portfolio Theory (MPT). The academic literature has since delved deeper into the analysis of the optimal number of assets to hold in a well-diversified portfolio. We list below some notable contributions worth mentioning:

  • Elton and Gruber (1977), Evans and Archer (1968), Tole (1982) and Statman (1987) among others delved deeper into the optimal number of assets to hold to generate the best risk and return portfolio. There is no consensus on the optimal number of assets to select.
  • Evans and Archer (1968) showed that the best results are achieved with 8-10 assets, while raising doubts about portfolios holding many more assets than this threshold. Statman (1987) concluded that at least thirty to forty stocks should be included in a portfolio to achieve adequate diversification.

Lhabitant and Learned (2002) also mention the concept of naive diversification (also known as the “1/N heuristic”), an allocation strategy in which the investor splits the overall funds equally across the available assets. Naive diversification seeks to spread asset risk evenly in the portfolio to reduce overall risk. However, the authors mention important considerations for both naive diversification and Markowitz optimization:

  • Drawback of naive diversification: since it does not account for the correlations between assets, the allocation will yield a sub-optimal result and diversification will not be fully achieved. In practice, naive diversification can result in portfolio allocations that do not lie on the efficient frontier. On the other hand, mean-variance optimisation, the framework behind Modern Portfolio Theory, is subject to the input sensitivity of the parameters used in the optimization process. On a side note, it is worth mentioning that naive diversification is a good starting point, better than gut feeling: it simplifies the allocation process while still providing some degree of risk diversification.
  • Non-normality of distribution of returns: hedge funds exhibit non-normal returns (fat tails and skewness). Those higher statistical moments are important for investors allocation but are disregarded in a mean-variance framework.
  • Econometric difficulties arising from hedge fund data in an optimizer framework: mean-variance optimisers typically take historical returns, risks, and covariances as acceptable estimates of future portfolio behaviour. Applied to the construction of a hedge fund portfolio, it becomes even more difficult to derive the expected return, correlation, and standard deviation for each fund, since data is scarcer and more difficult to obtain. Add to that the instability of hedge fund returns and the non-linearity of some strategies, which complicates the evaluation of a hedge fund portfolio.
  • Operational risk arising from fund selection and from the implementation of constraints in an optimiser. Since some parameters are qualitative (e.g., lock-up period, minimum investment period), optimiser tools find it hard to incorporate these types of constraints in the model.
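As an illustration of the 1/N heuristic, the sketch below computes the volatility of an equally-weighted portfolio of identical, equally correlated assets. The volatility and correlation figures are hypothetical choices for illustration, not estimates from the paper.

```python
import numpy as np

# Sketch of the diversification effect behind the 1/N heuristic, under the
# simplifying assumption of identical assets: each has volatility sigma and
# the same pairwise correlation rho (both values are hypothetical).
# Var(1/N portfolio) = sigma^2 / N + (1 - 1/N) * rho * sigma^2

def naive_portfolio_vol(n, sigma=0.20, rho=0.30):
    """Volatility of an equally-weighted portfolio of n identical assets."""
    variance = sigma**2 / n + (1 - 1 / n) * rho * sigma**2
    return np.sqrt(variance)

for n in [1, 2, 5, 10, 30, 100]:
    print(f"N = {n:3d}  ->  portfolio volatility = {naive_portfolio_vol(n):.4f}")

# Specific risk vanishes as N grows; the floor sqrt(rho) * sigma is the
# systematic component that diversification cannot remove.
```

The volatility falls quickly for the first few assets and then flattens, which is consistent with the finding that most diversification benefits are captured with a small number of funds.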

Conclusion

Due to entry restrictions, data scarcity, and a lack of meaningful benchmarks, hedge fund investing is difficult. The paper analyses in greater depth the optimal number of hedge funds to include in a diversified portfolio. According to the authors, adding funds naively to a portfolio tends to lower overall standard deviation and downside risk. In this context, diversification should be improved if the marginal benefit of adding a new asset to a portfolio exceeds its marginal cost.

The authors reiterate that investors should not invest “naively” in hedge funds due to their inherent risk. The impact of naive diversification on the portfolio’s skewness, kurtosis, and overall correlation structure can be significant. Hedge fund portfolios should account for this complexity and examine the effect of adding a hedge fund to a well-balanced portfolio, taking into account higher statistical moments to capture the allocation’s impact on portfolio construction. Naive diversification is also subject to selection bias. In the 1990s, the most appealing hedge fund strategy was global macro, although the long/short equity strategy acquired popularity in the late 1990s. This implies that naive allocations would have been tilted towards these two strategies overall.

The answer to the question in the title of the research paper? Hedge fund portfolios should hold between 15 and 40 underlying funds, although most diversification benefits are already reached with 5 to 10 hedge funds in the portfolio.

Why should I be interested in this post?

The purpose of portfolio management is to maximise returns on the entire portfolio, not just on one or two stocks. By monitoring and maintaining your investment portfolio, you can accumulate a substantial amount of wealth for a range of financial goals, such as retirement planning. This article facilitates comprehension of the fundamentals underlying portfolio construction and investing. Understanding the risk/return profiles, trading strategy, and how to incorporate hedge fund strategies into a diversified portfolio can be of great interest to investors.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Equity market neutral strategy

   ▶ Youssef LOURAOUI Fixed income arbitrage strategy

   ▶ Youssef LOURAOUI Global macro strategy

   ▶ Youssef LOURAOUI Long/short equity strategy

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Elton, E., and M. Gruber (1977). “Risk Reduction and Portfolio Size: An Analytical Solution.” Journal of Business, 50. pp. 415-437.

Evans, J.L., and S.H. Archer (1968). “Diversification and the Reduction of Dispersion: An Empirical Analysis”. Journal of Finance, 23. pp. 761-767.

Lhabitant, F.S., and M. Learned (2002). “Hedge fund diversification: how much is enough?” Journal of Alternative Investments, pp. 23-49.

Markowitz, H.M (1952). “Portfolio Selection.” The Journal of Finance, 7, pp. 77-91.

Statman, M. (1987). “How many stocks make a diversified portfolio?”, Journal of Financial and Quantitative Analysis, pp. 353-363.

Tole T. (1982). “You can’t diversify without diversifying”, Journal of Portfolio Management, 8, pp. 5-11.

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Managed futures strategy

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the managed futures strategy (also called CTAs or Commodity Trading Advisors). The objective of the managed futures strategy is to look for market trends across different markets.

This article is structured as follows: we introduce the managed futures strategy principle. Then, we present the different types of managed futures strategies. We also present a performance analysis of this strategy and compare it to a benchmark representing all hedge fund strategies (Credit Suisse Hedge Fund Index) and to a benchmark for the global equity market (MSCI All World Index).

Introduction

According to Credit Suisse (a financial institution publishing hedge fund indexes), a managed futures strategy can be defined as follows: “Managed Futures funds (often referred to as CTAs or Commodity Trading Advisors) focus on investing in listed bond, equity, commodity futures and currency markets, globally. Managers tend to employ systematic trading programs that largely rely upon historical price data and market trends. A significant amount of leverage is employed since the strategy involves the use of futures contracts. CTAs do not have a particular bias towards being net long or net short any particular market.”

Managed futures funds make money based on the points below:

  • Exploit market trends: trending markets tend to keep the same direction over time (either upwards or downwards)
  • Combine short-term and long-term indicators: use of short-term and long-term moving averages
  • Diversify across different markets: at least one market should move in trend
  • Leverage: the majority of managed futures funds are leveraged in order to get increased exposures to a certain market

Types of managed futures strategies

Managed futures may contain varying percentages of equity and derivative investments. In general, a diversified managed futures account will have exposure to multiple markets, including commodities, energy, agriculture, and currencies. The majority of managed futures accounts will have a trading programme that explains their market strategy. Market-neutral and trend-following strategies are the two main methods.

Market-neutral strategy

Market-neutral methods look to profit from mispricing-induced spreads and arbitrage opportunities. Investors that utilise this strategy usually attempt to limit market risk by taking long and short positions in the same industry to profit from both price increases (for long positions) and decreases (for short positions).

Trend-following strategy

Trend-following strategies seek to generate profits by trading long or short based on fundamental and/or technical market indicators. When the price of an asset is falling, trend traders may decide to enter a short position on that asset. On the opposite, when the price of an asset is rising, trend traders may decide to enter a long position. The objective is to collect gains by examining multiple indicators, deciding an asset’s direction, and then executing the appropriate trade.

Methodological issues

The methodology to define a managed futures strategy is described below:

  • Identify appropriate markets: concentrate on the markets that are of interest for this style of trading strategy
  • Identify technical indicators: use key technical indicators to assess if the market is trading on a trend
  • Backtesting: the hedge fund manager tests the indicators retained for the strategy on the chosen market using historical data and assesses the profitability of the strategy over a sample period. The important point to mention is that the results can be prone to errors: the parameters can be over-optimized to historical data and may not deliver the returns computed historically when traded live.
  • Execute the strategy out of sample: see if the in-sample backtesting result is similar out of sample.
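The steps above can be sketched in a minimal backtest. The code below is an illustration on randomly generated prices (the 20/100-day windows and the drift are hypothetical choices, not any actual fund’s programme): it builds a long/short position from a moving-average crossover and applies yesterday’s signal to today’s return.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily prices with a mild upward drift (hypothetical data, for
# illustration only -- real programmes trade many markets, with leverage)
returns = rng.normal(loc=0.0004, scale=0.01, size=1000)
prices = 100 * np.exp(np.cumsum(returns))

def moving_average(x, window):
    """Trailing simple moving average (one value per full window)."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

short_w, long_w = 20, 100                      # assumed indicator windows
ma_long = moving_average(prices, long_w)       # windows ending at t = 99..999
ma_short = moving_average(prices, short_w)[-len(ma_long):]  # align endings

# Position: long (+1) when the short MA is above the long MA, short (-1) otherwise
position = np.where(ma_short > ma_long, 1, -1)

# Apply yesterday's position to today's price return (transaction costs ignored)
px = prices[long_w - 1:]
asset_ret = np.diff(px) / px[:-1]
strat_ret = position[:-1] * asset_ret
print(f"Cumulative strategy return: {np.prod(1 + strat_ret) - 1:.2%}")
```

Running the same code on a second, out-of-sample series (a different random seed here, fresh data in practice) is the analogue of the last step above.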

This strategy makes money by investing in trending markets and can potentially generate returns in both rising and falling markets. However, understanding the market in which this strategy is employed, coupled with a deep understanding of the key drivers behind the trending patterns and a rigorous quantitative approach to trading, is of key concern, since this is what makes the strategy profitable (or not!).

Performance of the managed futures strategy

Overall, the performance of the managed futures strategy was uncorrelated with equity returns, but volatile (Credit Suisse, 2022). To capture the performance of the managed futures strategy, we use the Credit Suisse Managed Futures Index. To establish a comparison between the performance of the global equity market and the managed futures strategy, we examine the rebased performance of the Credit Suisse Managed Futures Index with respect to the MSCI All-World Index.

Over the period from 2002 to 2022, the managed futures strategy index generated an annualized return of 3.98% with an annualized volatility of 10.40%, leading to a Sharpe ratio of 0.077. Over the same period, the Credit Suisse Hedge Fund Index generated an annualized return of 5.18% with an annualized volatility of 5.53%, leading to a Sharpe ratio of 0.208. The managed futures strategy had a slightly negative correlation with the global equity index, about -0.02 over the data analyzed. The results are in line with the idea of global diversification and the decorrelation of managed futures returns from global equity returns.
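The performance statistics quoted above can be computed from any return series. The sketch below uses randomly generated monthly returns and an assumed 3% risk-free rate (both hypothetical, not the index data behind the figures in this article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly return series (illustration only)
strategy = rng.normal(0.004, 0.030, size=240)   # 20 years of monthly returns
equity = rng.normal(0.006, 0.045, size=240)
risk_free_annual = 0.03                         # assumed risk-free rate

def annualized_stats(monthly_returns, rf_annual):
    """Annualized return, volatility and Sharpe ratio from monthly returns."""
    ann_return = (1 + monthly_returns).prod() ** (12 / len(monthly_returns)) - 1
    ann_vol = monthly_returns.std(ddof=1) * np.sqrt(12)
    sharpe = (ann_return - rf_annual) / ann_vol
    return ann_return, ann_vol, sharpe

ret, vol, sharpe = annualized_stats(strategy, risk_free_annual)
corr = np.corrcoef(strategy, equity)[0, 1]
print(f"return = {ret:.2%}, vol = {vol:.2%}, Sharpe = {sharpe:.3f}, corr = {corr:.2f}")
```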

Figure 1 gives the performance of the managed futures funds (Credit Suisse Managed Futures Index) compared to the hedge funds (Credit Suisse Hedge Fund index) and the world equity funds (MSCI All-World Index) for the period from July 2002 to April 2021.

Figure 1. Performance of the managed futures strategy.
Performance of the managed futures strategy
Source: computation by the author (Data: Bloomberg)

You can find below the Excel spreadsheet that complements the explanations about the Credit Suisse managed futures strategy.

Managed futures

Why should I be interested in this post?

Understanding the profits and risks of such a strategy might assist investors in incorporating this hedge fund strategy into their portfolio allocation.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Equity market neutral strategy

   ▶ Youssef LOURAOUI Fixed income arbitrage strategy

   ▶ Youssef LOURAOUI Global macro strategy

   ▶ Youssef LOURAOUI Long/short equity strategy

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Pedersen, L. H., 2015. Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined. Princeton University Press.

Business Analysis

Credit Suisse Hedge fund strategy

Credit Suisse Hedge fund performance

Credit Suisse Managed futures strategy

Credit Suisse Managed futures performance benchmark

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Dedicated short bias strategy

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the dedicated short bias strategy. The strategy holds a net short position, which implies more short (sell) positions than long (buy) positions. The objective of the dedicated short bias strategy is to profit from shorting overvalued equities.

This article is structured as follows: we introduce the dedicated short bias strategy. Then, we present a practical case study to grasp the overall methodology of this strategy. We also present a performance analysis of this strategy and compare it to a benchmark representing all hedge fund strategies (Credit Suisse Hedge Fund Index) and to a benchmark for the global equity market (MSCI All World Index).

Introduction

According to Credit Suisse (a financial institution publishing hedge fund indexes), a dedicated short bias strategy can be defined as follows: “Dedicated Short Bias funds take more short positions than long positions and earn returns by maintaining net short exposure in long and short equities. Detailed individual company research typically forms the core alpha generation driver of dedicated short bias managers, and a focus on companies with weak cash flow generation is common. To affect the short sale, the manager borrows the stock from a counter-party and sells it in the market. Short positions are sometimes implemented by selling forward. Risk management consists of offsetting long positions and stop-loss strategies”.

This strategy makes money by short selling overvalued equities. It can potentially generate returns in falling markets but would underperform in rising equity markets. The interesting characteristic of this strategy is that it can offer investors added diversification by being uncorrelated with equity market returns.

Example of the dedicated short bias strategy

Jim Chanos (Kynikos Associates) short selling trade: Enron

In 2000, Enron dominated the raw material and energy industries. Kenneth Lay and Jeffrey Skilling were the two leaders of the group, and they disguised the company’s financial accounts for years. Enron’s directors, for instance, hid massive debts in subsidiaries in order to create the appearance of a healthy parent company whose obligations were extremely limited, because the debts were buried in the subsidiary accounts. Enron filed for bankruptcy on December 2, 2001, sparking a major scandal and pulling down the pension funds intended for the retirement of its employees, who were all laid off simultaneously. Arthur Andersen, Enron’s auditor, failed to detect the fraud, and the scandal ultimately led to the dissolution of one of the five largest accounting and audit firms in the world (restructuring the sector from the Big 5 to the Big 4). Figure 1 represents the share price of Enron across time.

Figure 1. Performance Enron across time.
img_SimTrade_Enron_performance
Source: Computation by the author

Fortune magazine awarded Enron Corporation “America’s Most Innovative Company” annually from 1996 to 2000. Enron Corporation was a supposedly extremely profitable energy and commodities company. At the beginning of 2001, Enron had around 20,000 employees and a market valuation of $60 billion, approximately 70 times its earnings.

Short seller James Chanos gained notoriety for identifying Enron’s problems early on. This trade was dubbed “the market call of the decade, if not the past fifty years” (Pedersen, 2015).

Risk of the dedicated short bias strategy

The most significant risk that can make this strategy lose money is a short squeeze. A short seller can borrow shares through a margin account if he/she believes a stock is overvalued and its price is expected to decline. The short seller will then sell the stock and deposit the proceeds into his/her margin account as collateral. The seller will eventually have to repurchase the shares. If the price of the stock has decreased, the short seller makes money owing to the difference between the price of the stock sold on margin and the price of the stock repurchased later at the reduced price. Nonetheless, if the price rises, the buyback price may rise above the initial sale price, and the short seller will be forced to buy back the security quickly to avoid incurring even higher losses.
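The asymmetry described above (a bounded gain if the price falls, an unbounded loss if it rises) can be seen in a two-line sketch with hypothetical numbers:

```python
def short_pnl(sale_price, cover_price, shares):
    """P&L of a short sale: sell borrowed shares now, buy them back later."""
    return (sale_price - cover_price) * shares

# Hypothetical trade: short 1,000 shares at $50
print(short_pnl(50.0, 40.0, 1_000))   # price falls to $40: gain of 10000.0
print(short_pnl(50.0, 120.0, 1_000))  # squeeze to $120: loss of -70000.0
```

The gain is capped at the sale proceeds (the price cannot fall below zero), while the loss grows without bound as the cover price rises.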

We illustrate below the risk of a dedicated short bias strategy with Gamestop.

Gamestop short squeeze

GameStop is best known as a video game retailer, with over 3,000 stores still in operation in the United States. However, as technology in the video game business advances, physical shops faced substantial problems. Microsoft and Sony have both adopted digital game downloads directly from their own web shops for their Xbox and Playstation systems. While GameStop continues to offer video games, the company has made steps to diversify into new markets. Toys and collectibles, gadgets, apparel, and even new and refurbished mobile phones are included.

However, several hedge funds, believing that the era of physical game copies was over, built short positions in GameStop stock in order to profit from an expected decrease in value. In this scenario, roughly 140% of GameStop’s shares were sold short in January 2021. In this case, investors have two choices: keep the short position or cover it (buy back the borrowed securities in order to close out the open short position at a profit or loss). When the stock price rises, covering a short position means purchasing the shares at a loss, since the stock price is now higher than the price at which the shares were sold. And when 140% of a stock’s float is sold short, a large number of positions have to be closed. As a result, short sellers were constantly buying shares to cover their bets. With that much purchasing pressure, the stock mechanically continued to rise. From the levels reached in early 2020 to the levels reached in mid-2021, the stock price climbed by a factor of nearly one hundred (Figure 2).

Figure 2. Performance of Gamestop stock price.
 Gamestop performance
Source: TradingView.

In the GameStop story, the short sellers lost huge amounts of money. Notably, the hedge fund Melvin Capital lost billions of dollars after being on the wrong side of the GameStop short squeeze.

Why should I be interested in this post?

Understanding the profits and risks of such a strategy might assist investors in incorporating this hedge fund strategy into their portfolio allocation.

Related posts on the SimTrade blog

Hedge funds

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Global macro strategy

   ▶ Youssef LOURAOUI Long/short equity strategy

Financial techniques

   ▶ Akshit GUPTA Short selling

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Pedersen, L. H., 2015. Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined. Princeton University Press.

Business Analysis

Credit Suisse Hedge fund strategy

Credit Suisse Hedge fund performance

Wikipedia Gamestop short squeeze

TradingView, 2023 Gamestop stock price historical chart

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Hedging of the crude oil price

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) discusses the concept of hedging and its application in the crude oil market.

This article is structured as follows: we first introduce the concept of hedging. Then, we present the mathematical foundation of the Minimum Variance Hedging Ratio (MVHR). We wrap up with an empirical analysis applied to the crude oil market and a conclusion.

Introduction

Hedging is a strategy that involves taking positions in both the physical and the futures market to offset market movements and lock in the price. When an individual or a corporation decides to hedge risk using futures markets, the objective is to take the opposite position to neutralize the risk as far as possible. If the company is long on the physical side (say a producer), it will hedge by taking a short position in the futures market. The opposite is true for a market player who is short physical: he will seek a long exposure in the futures market to offset the risk (Hull, 2006).

Short hedge

Selling futures contracts as insurance against an expected decrease in spot prices is known as a short hedge. For instance, an oil producer might sell crude futures or forwards if they anticipate a decline in the price of the commodity.

Long hedge

A long hedge involves purchasing futures as insurance against an increase in price. For instance, an aluminum smelter will purchase electricity futures and forward contracts, allowing the business to secure its electricity needs in the event that the physical market rises in value.
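Under the simplifying assumption that the futures price converges to the spot price at delivery, a short hedge locks in the sale price regardless of where the market ends up. The sketch below uses hypothetical numbers (1,000 barrels hedged at $80 per barrel):

```python
def short_hedge_revenue(spot_at_delivery, futures_price_locked, barrels):
    """Total revenue of a producer who sold futures at futures_price_locked:
    physical sale at the spot price plus the P&L of the short futures position
    (assuming the futures price converges to the spot price at delivery)."""
    physical_sale = spot_at_delivery * barrels
    futures_pnl = (futures_price_locked - spot_at_delivery) * barrels
    return physical_sale + futures_pnl

# Whatever the spot price at delivery, revenue is locked at 80 * 1,000 = 80,000
for spot in (60.0, 80.0, 100.0):
    print(f"spot = {spot:6.2f} -> revenue = {short_hedge_revenue(spot, 80.0, 1_000):,.0f}")
```

Note that the hedge removes the upside as well as the downside: at a spot price of $100, the unhedged producer would have earned $100,000.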

Mathematical foundations

Linear regression model

We can consider the hedge ratio as the slope of the following linear regression representing the relationship between the spot and futures price changes:

∆St = α + β ∆Ft + εt

where

  • ∆St the change in the spot price at time t
  • α the intercept of the regression
  • β the hedge ratio (the slope of the regression)
  • ∆Ft the change in the futures price at time t
  • εt the error term of the regression

The linear regression model above can also be expressed with returns instead of price changes:

RSpot,t = α + β RFutures,t + εt

where

  • RSpot,t the return in the spot market at time t
  • RFutures,t the return in the futures market at time t

Hedge ratio

We can derive the following formula for the Minimum Variance Hedging Ratio (MVHR) denoted by the Greek letter beta β:

β = Cov(∆St, ∆Ft) / Var(∆Ft)

where

  • Cov(∆St,∆Ft) the covariance between the change in the spot price and the change in the futures price at time t
  • Var(∆Ft) the variance of the change in the futures price at time t

The variance and covariance of spot and futures prices are time-varying due to the changing distributional features of these prices across time. Accordingly, taking such dynamics into account in the variance and covariance terms is a more appropriate way of establishing the minimum variance hedge ratio. There are a number of methods that account for the dynamic nature of the minimum variance hedge ratio estimation (Alizadeh, 2022):

  • Simple Rolling OLS
  • Rolling VAR or VECM
  • GARCH models
  • Markov Regime Switching
  • Minimising VaR and CVaR methods
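The static MVHR, the equivalent of Excel’s =SLOPE used in the empirical section, can be sketched in a few lines. The series here are randomly generated (the 0.98 sensitivity and the noise level are hypothetical assumptions), not the EIA data used in this article:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weekly price changes for spot and futures (illustration only):
# the spot is assumed to move with the futures (sensitivity 0.98) plus noise.
d_futures = rng.normal(0.0, 1.5, size=260)                    # 5 years of weeks
d_spot = 0.98 * d_futures + rng.normal(0.0, 0.2, size=260)

# MVHR = Cov(dS, dF) / Var(dF): the OLS slope of spot changes on futures
# changes, i.e. the equivalent of Excel's =SLOPE(known_ys, known_xs)
cov = np.cov(d_spot, d_futures, ddof=1)
mvhr = cov[0, 1] / cov[1, 1]
correlation = np.corrcoef(d_spot, d_futures)[0, 1]
print(f"MVHR = {mvhr:.3f}, correlation = {correlation:.3f}")
```

The estimated slope recovers a value close to the assumed sensitivity, and the correlation is close to one because the basis noise is small relative to the futures moves.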

Empirical approach to hedging analysis

Periods

We downloaded ten years’ worth of weekly data for the WTI crude oil spot and futures contracts from the US Energy Information Administration (EIA) website. We decompose the data into two periods to assess the effectiveness of the different hedging strategies: a first period from 23rd March 2012 to 24th March 2017 and a second period from 31st March 2017 to 22nd March 2022.

First period: March 2012 – March 2017

The first five years are used to estimate the Minimum Variance Hedging Ratio (Ederington, 1979). We can approach this question by using the “=SLOPE(known_ys, known_xs)” function in Excel to obtain the slope coefficient that represents the MVHR. When computing the slope for the first period of the sample, from 23rd March 2012 to 24th March 2017, we get a MVHR equal to 0.985. We obtain a correlation (ρ) between the logarithmic returns of the WTI spot and futures contract prices using the Excel formula “=CORREL(array_1, array_2)”, which yields 0.986. We can see from Figure 1 how the spot and futures prices converge closely and track each other in a very tight corridor, with very minor divergence. The regression plot between spot and futures contract returns for the first period is shown in Figure 2. These results suggest that, to minimise risk when using futures contracts as a hedging tool, the hedger should take an opposite position in the futures market equal to 0.985 contract for each spot contract.

Figure 1. WTI spot and futures (1 month) prices
March 2012 – March 2017
WTI spot and futures prices
Source: computation by the author (data: EIA & Refinitiv Eikon).

Figure 2. Linear regression of WTI spot return on futures (1 month) return
March 2012 – March 2017
Linear regression of WTI spot return on futures (1 month) return
Source: computation by the author (data: EIA & Refinitiv Eikon).

A one-to-one hedge ratio (also known as a naïve hedge) means that for every dollar of exposure in the physical market, we take one dollar of exposure in the futures market. The effectiveness of this strategy is tied closely to how the correlation between spot and futures prices behaves: its effectiveness in the second period would be driven by the correlation between spot and futures prices over that period.

Second period: March 2017 – March 2022

We compute the MVHR for the second period with the same approach as in the first part, using the “=SLOPE(known_ys, known_xs)” function in Excel to obtain the slope coefficient that represents the MVHR. When computing the slope for the second period of the sample, from 31st March 2017 to 22nd March 2022, we get a MVHR equal to 1.095. This means that, to reduce risk as much as possible when futures contracts are used as the hedging instrument, the hedger should take an opposite position of 1.095 futures contracts for each spot contract held. As previously stated, the same trend can be seen in Figure 3, where spot and futures prices converge closely and track each other with only little deviation. Figure 4 represents the regression plot between spot and futures contract returns for the second period.

Figure 3. WTI spot and futures (1 month) prices
March 2017 – March 2022.
WTI spot and futures prices
Source: computation by the author (data: EIA & Refinitiv Eikon).

Figure 4. Linear regression of WTI spot return on futures (1 month) return
March 2017 – March 2022
Linear regression of WTI spot return on futures (1 month) return
Source: computation by the author (data: EIA & Refinitiv Eikon).

We can approach this hedging exercise in a time-varying framework. Some academics consider that covariance and correlation are not static parameters, so they came up with models that accommodate the time-varying nature of these two parameters. We can compute a rolling slope by changing the estimation window to allow for dynamic coefficients. For this example, we computed rolling regressions over one month, three months, one year and two years. We plot the rolling hedge ratios in Figure 5. We can average the rolling betas to obtain an average hedge ratio for each rolling period (Table 1):
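A minimal version of this rolling estimation can be sketched as follows, on randomly generated weekly returns with hypothetical parameters (not the EIA data of this article), using window lengths of roughly one month, three months, one year and two years of weekly observations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical weekly returns for spot and futures (illustration only)
n = 520                                    # 10 years of weekly observations
r_fut = rng.normal(0.0, 0.03, size=n)
r_spot = r_fut + rng.normal(0.0, 0.005, size=n)

def rolling_hedge_ratio(spot, fut, window):
    """OLS slope of spot on futures over each rolling window of observations."""
    betas = []
    for t in range(window, len(spot) + 1):
        s, f = spot[t - window:t], fut[t - window:t]
        c = np.cov(s, f, ddof=1)
        betas.append(c[0, 1] / c[1, 1])
    return np.array(betas)

for window in (4, 13, 52, 104):  # ~1 month, 3 months, 1 year, 2 years of weeks
    betas = rolling_hedge_ratio(r_spot, r_fut, window)
    print(f"window = {window:3d} weeks -> average hedge ratio = {betas.mean():.3f}")
```

The shorter the window, the noisier the rolling hedge ratio, which is the trade-off between responsiveness and estimation error visible when comparing the different horizons.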

Table 1. Table capturing the rolling hedge ratio for WTI across different horizons.
 Hedging strategy
Source: computation by the author (data: EIA & Refinitiv Eikon).

Figure 5. WTI hedge ratio for different rolling window sizes.
Hedge ratio for WTI for rolling window sizes
Source: computation by the author (data: EIA & Refinitiv Eikon).

Conclusion

In a realistic setting, these results may be oversimplified. In some instances, cross hedging is required to implement this strategy. This technique is used to hedge an asset’s value by relying on another asset that replicates its behaviour. Let’s use an airline as an example of a corporation seeking to hedge its jet fuel expenditures. As there is currently no liquid jet fuel futures contract, the airline can cross hedge with heating oil (a comparable product with a liquid futures market), at the cost of bearing basis risk. As stated previously, the degree of correlation between the spot price and the futures price impacts the precision of cross-hedging (and hedging in general). The hedge must ultimately be calibrated appropriately to get the desired results and avoid instances in which we over-hedge or under-hedge our exposure.

You can find below the Excel spreadsheet that complements the explanations of this article.

 Hedging strategy on crude oil

Why should I be interested in this post?

Understanding hedging techniques can be a valuable tool to implement to reduce the downside risk of an investment. Implementing a good hedging strategy can help professionals to better monitor and modify their trading strategies based different market environments.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI My experience as an Oil Analyst at an oil and energy trading company

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Global macro strategy

   ▶ Youssef LOURAOUI Minimum volatility factor

   ▶ Youssef LOURAOUI VIX index

   ▶ Jayati WALIA Black Scholes Merton option pricing model

   ▶ Jayati WALIA Implied volatility

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Adler M. and B. Dumas (1984) “Exposure to Currency Risk: Definition and Measurement” Financial Management 13(2) 41-50.

Alizadeh A. (2022) Volatility of energy prices: Estimation and modelling. Oil and Energy Trading module at Bayes Business School. 46-51.

Ederington L.H. (1979). The Hedging Performance of the New Futures Markets. Journal of Finance, 34(1) 157-170.

Hull C.J. (2006). Options, futures and Other Derivatives, sixth edition. Pearson Prentice Hall. 99-373.

Business

US Energy Information Administration (EIA)

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Modeling of the crude oil price

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) models the market price of crude oil.

This article is structured as follows: we introduce the crude oil market. Then, we present the mathematical foundations of the Geometric Brownian Motion (GBM) model. We use this model to simulate the price of crude oil.

The crude oil market

The crude oil market represents the physical (cash or spot) and paper (futures) market where buyers and sellers acquire oil.

Nowadays, the global economy is heavily reliant on fossil fuels such as crude oil, and the desire for these resources frequently causes political upheaval since a few nations possess the greatest reservoirs. The price and profitability of crude oil are significantly impacted by supply and demand, as in any sector. The top oil producers in the world are the United States, Saudi Arabia, and Russia. With a production rate of 18.87 million barrels per day, the United States leads the list. Saudi Arabia, which produced 10.84 million barrels per day in 2022 and owns 17% of the world’s proved petroleum reserves, comes in second. Over 85% of its export revenue and 50% of its GDP are derived from the oil and gas industry. In 2022, Russia produced 10.77 million barrels per day. West Siberia and the Urals-Volga area contain the majority of the nation’s reserves. 10% of the oil produced worldwide comes from Russia.

Throughout the late nineteenth and early twentieth centuries, the United States was one of the world’s largest oil producers, and U.S. corporations developed the technology to convert oil into usable goods such as gasoline. U.S. oil output declined significantly throughout the middle and latter decades of the 20th century, and the country began to import energy. Nonetheless, crude oil net imports in 2021 were at their second-lowest yearly level since 1985. Its principal supplier was the Organization of the Petroleum Exporting Countries (OPEC), created in 1960, which consisted of the world’s largest (by volume) holders of crude oil and natural gas reserves.

As a result, the OPEC nations wielded considerable economic power in regulating supply, and hence price, of oil in the late twentieth century. In the early twenty-first century, the advent of new technology—particularly hydro-fracturing, or fracking—created a second U.S. energy boom, significantly reducing OPEC’s prominence and influence.

Oil spot contracts and futures contracts are the two forms of oil contracts that investors can exchange. To the individual investor, oil can be a speculative asset, a portfolio diversifier, or a hedge for existing positions.

Spot contract

The spot contract price indicates the current market price for oil, while the futures contract price shows the price that buyers are ready to pay for oil on a delivery date established in the future.

Most commodity contracts bought and sold on the spot market take effect immediately: money is exchanged, and the purchaser accepts delivery of the commodities. In the case of oil, the desire for immediate delivery vs future delivery is limited, owing to the practicalities of delivering oil.

Futures contract

An oil futures contract is an agreement to buy or sell a specified number of barrels of oil at a predetermined price on a predetermined date. When futures are acquired, a deal is struck between buyer and seller and secured by a margin payment equal to a percentage of the contract’s entire value. The futures price is no guarantee that oil will trade at that price on that date; it is just the price that buyers and sellers anticipate at the time. The actual price of oil on that date is determined by the many factors affecting supply and demand. Futures contracts are more frequently employed by traders and investors because they do not intend to take delivery of the commodity at all.

End-users of oil buy on the futures market to lock in a price; investors buy futures to speculate on what the price will be in the future, and they profit if they forecast correctly. They typically liquidate or roll over their futures positions before having to take delivery. There are two major oil contracts closely observed by oil market participants: 1) West Texas Intermediate (WTI) crude, which trades on the New York Mercantile Exchange (NYMEX), serves as the North American oil futures benchmark; 2) North Sea Brent Crude, which trades on the Intercontinental Exchange (ICE), is the benchmark throughout Europe, Africa, and the Middle East. While the two contracts move in tandem, WTI is more sensitive to American economic developments, while Brent is more sensitive to those in other countries.

Mathematical foundations of the Geometric Brownian Motion (GBM) model

The concept of Brownian motion is associated with the contribution of Robert Brown (1828). The first works of Brown were later used by the French mathematician Louis Bachelier (1900), who applied them to asset price forecasting and thereby prepared the ground for modern quantitative finance. According to Bachelier’s theory, price fluctuations observed over a short period are independent of the current price as well as of the historical behaviour of price movements. By combining his assumptions with the Central Limit Theorem, he deduced that the random behaviour of prices can be represented by a normal distribution. This resulted in the development of the Random Walk Hypothesis, also known as the Random Walk Theory in modern finance. A random walk is a statistical phenomenon in which stock prices fluctuate at random. We implement a quantitative framework in a spreadsheet based on the Geometric Brownian Motion (GBM) model. Mathematically, we can derive the price of crude oil via the following model:

img_SimTrade_GBM_equation_2

where dS represents the price change over the continuous time interval dt, dX the increment of the Wiener process representing the random part, and μdt the deterministic part (the drift).

The probability distribution function of the future price is a log-normal distribution when the price dynamics is described with a geometric Brownian motion.
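This log-normal property can be checked with a short simulation of the exact GBM solution, under which the log price change over a horizon t is normally distributed with mean (μ − σ²/2)t and standard deviation σ√t. The sketch below uses illustrative parameters, not the article's calibrated values:

```python
import numpy as np

# Sketch: simulate the exact GBM solution S_t = S_0 * exp((mu - 0.5*sigma^2)*t + sigma*W_t)
# and check that log(S_t / S_0) is normally distributed (hence S_t is log-normal).
# Parameters (S0, mu, sigma) are illustrative, not calibrated values from the article.
rng = np.random.default_rng(42)
S0, mu, sigma, t = 100.0, 0.12, 0.59, 1.0   # annualized drift and volatility
n = 100_000                                  # number of simulated terminal prices

W_t = rng.standard_normal(n) * np.sqrt(t)                     # Wiener process at time t
S_t = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W_t)    # exact GBM solution

log_ret = np.log(S_t / S0)
print(log_ret.mean())   # close to (mu - 0.5*sigma^2)*t
print(log_ret.std())    # close to sigma*sqrt(t)
```

Prices simulated this way are always strictly positive, which is one reason the GBM model is preferred to an arithmetic random walk for asset prices.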

Modelling crude oil market prices

Market prices

We downloaded a time series for the WTI price from June 2017 to June 2022. We picked this timeframe to assess the behavior of crude oil during two main market events that impacted its price: the Covid-19 pandemic and the war in Ukraine.

The two main parameters to compute in order to implement the model are the (historical) average return and the (historical) volatility. We eliminated outliers (the negative price of oil) to clean the dataset and obtain better results. The historical average return is 11.99% (annual return) and the historical volatility is 59.29%. Figure 1 helps to capture the behavior of the WTI price over the period from June 2017 to June 2022.

Figure 1. Crude oil (WTI) price.
img_SimTrade_WTI_price
Source: computation by the author (data: Refinitiv Eikon).

Market returns

Figure 2 represents the returns of crude oil (WTI) over the period. We can clearly see that the Covid-19 pandemic had important implications for the negative returns observed during early 2020.

Figure 2. Crude oil (WTI) return.
img_SimTrade_WTI_return
Source: computation by the author (data: Refinitiv Eikon).

We compute the returns using the log returns approach.

img_SimTrade_log_return_WTI

where Pt represents the closing price at time t.
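The log-return computation can be sketched in a few lines; the price series below is a toy example, not the WTI data:

```python
import numpy as np

# Sketch: compute daily log returns r_t = ln(P_t / P_{t-1}) from a price series.
# The prices below are a toy series, not the WTI data used in the article.
prices = np.array([100.0, 102.0, 99.0, 101.5, 103.0])
log_returns = np.log(prices[1:] / prices[:-1])   # one return per consecutive pair
print(log_returns)
```

A convenient property of log returns is that they are additive over time: the sum of the daily log returns equals the log return over the whole period.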

Figure 3 captures the distribution of the crude oil (WTI) daily returns in a histogram. As seen in the plot, the returns are skewed towards the negative tail of the distribution and show some peaks in the center. Taken together, these features suggest that crude oil daily returns do not follow a normal distribution.

Figure 3. Histogram of crude oil (WTI) daily returns.
img_SimTrade_WTI_histogram
Source: computation by the author (data: Refinitiv Eikon).

To better understand the behavior of crude oil across the 1257 trading days retained for the period of analysis, it is interesting to run a statistical analysis of the four moments of the crude oil return time series: the mean (average return), standard deviation (volatility), skewness (symmetry of the distribution) and kurtosis (tails of the distribution). As captured by Table 1, crude oil performed positively over the period covered, delivering a daily return of 0.05% (13.38% annualized) for a daily volatility of 3.74% (59.33% annualized). In terms of skewness, the distribution of crude oil returns is highly negatively skewed, which implies that the negative (left) tail of the distribution is longer than the positive (right) tail. Given the high positive kurtosis, we can conclude that the crude oil return distribution is more peaked than the normal distribution, with a narrow center and fatter tails.

Table 1. Statistical moments of the crude oil (WTI) daily returns.
 WTI statistical moment
Source: computation by the author (data: Refinitiv Eikon).
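The four moments behind a table like this can be computed as follows; this is a minimal sketch on simulated returns, not the actual WTI series:

```python
import numpy as np
from scipy import stats

# Sketch: the four statistical moments of a daily return series, with a simple
# annualization over 252 trading days. The returns below are simulated toy data,
# not the WTI series behind Table 1.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.037, 1257)     # 1257 toy daily returns

mean_d = returns.mean()
vol_d = returns.std(ddof=1)
print("annualized return:    ", mean_d * 252)           # simple annualization
print("annualized volatility:", vol_d * np.sqrt(252))   # square-root-of-time rule
print("skewness:", stats.skew(returns))                 # symmetry of the distribution
print("kurtosis:", stats.kurtosis(returns))             # excess kurtosis (0 for a normal)
```

Note that `scipy.stats.kurtosis` reports excess kurtosis by default, so a normal distribution scores 0 rather than 3; a large positive value signals the fat tails discussed above.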

Application: simulation of future prices for the crude oil market

Understanding the evolution of the price of crude oil is important for pricing purposes. Some models (such as the Black-Scholes option pricing model) rely heavily on a price input and can be sensitive to this parameter. Therefore, accurate price estimation is at the core of important pricing models, and a good estimate of spot and futures prices can have a significant impact on the accuracy of the pricing and the profitability of the trade.

We implement this framework and use a Monte Carlo simulation of 25 iterations to capture the different paths that the WTI price can take over a period of 24 months. Figure 4 captures the result of the model. We plot the simulations in a 3D graph to grasp the shape of the variations at each maturity. As seen from Figure 4, the price peaked at the longer end of the maturity spectrum at a level near $250/bbl. Overall, the shape is bumpy, with local spikes throughout the whole sample and across all maturities (Figure 4).

Figure 4. Geometric Brownian Motion (GBM) simulations for WTI.
WTI GBM simulation
Source: computation by the author (data: Refinitiv Eikon).
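The Monte Carlo exercise (25 paths over 24 monthly steps) can be sketched as follows, using the article's drift and volatility estimates and an illustrative starting price:

```python
import numpy as np

# Sketch: Monte Carlo simulation of GBM price paths, mirroring the 25 iterations
# over 24 monthly steps described in the article. Drift and volatility are the
# article's historical estimates; the starting price S0 is illustrative.
rng = np.random.default_rng(7)
S0, mu, sigma = 110.0, 0.1199, 0.5929
n_paths, n_steps, dt = 25, 24, 1 / 12

# Exact discretization of GBM over each step of length dt:
# S_{t+dt} = S_t * exp((mu - 0.5*sigma^2)*dt + sigma*sqrt(dt)*z)
z = rng.standard_normal((n_paths, n_steps))
log_steps = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
paths = S0 * np.exp(np.cumsum(log_steps, axis=1))
paths = np.hstack([np.full((n_paths, 1), S0), paths])   # prepend the starting price

print(paths.shape)  # (25, 25): 25 paths, 24 monthly steps plus the starting point
```

Because the discretization uses the exact GBM solution rather than an Euler approximation, the simulated prices remain strictly positive at every step regardless of the step size.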

You can find below the Excel spreadsheet that complements the explanations about this article.

 GBM_simulation_framework

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI My experience as an Oil Analyst at an oil and energy trading company

   ▶ Jayati WALIA Brownian Motion in Finance

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Global macro strategy

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Bachelier, Louis (1900). Théorie de la Spéculation, Annales Scientifique de l’École Normale Supérieure, 3e série, tome 17, 21-86.

Bashiri Behmiri, Niaz and Pires Manso, José Ramos, Crude Oil Price Forecasting Techniques: A Comprehensive Review of Literature (June 6, 2013). SSRN Research Journal.

Brown, Robert (1828), “A brief account of microscopical observations made on the particles contained in the pollen of plants” in Philosophical Magazine 4:161-173.

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Equity market neutral strategy

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the equity market neutral strategy. The objective of the equity market neutral strategy is to benefit from both long and short positions while minimizing the exposure to the equity market fluctuations.

This article is structured as follows: we introduce the equity market neutral strategy. Then, we present a practical case study to grasp the overall methodology of this strategy. We conclude with a performance analysis of this strategy in comparison with a global benchmark (MSCI All World Index and the Credit Suisse Hedge Fund index).

Introduction

According to Credit Suisse (a financial institution publishing hedge fund indexes), an equity market neutral strategy can be defined as follows: “Equity Market Neutral funds take both long and short positions in stocks while minimizing exposure to the systematic risk of the market (i.e., a beta of zero is desired). Funds seek to exploit investment opportunities unique to a specific group of stocks, while maintaining a neutral exposure to broad groups of stocks defined for example by sector, industry, market capitalization, country, or region. There are a number of sub-sectors including statistical arbitrage, quantitative long/short, fundamental long/short and index arbitrage”. This strategy makes money by holding assets that are decorrelated from a specific benchmark. The strategy can potentially generate returns in falling markets.

Mathematical foundation for the beta

This strategy relies heavily on the beta, derived from the capital asset pricing model (CAPM). Under this framework, we can relate the expected return of a given asset to its risk:

CAPM

Where:

  • E(r) represents the expected return of the asset
  • rf the risk-free rate
  • β a measure of the risk of the asset
  • E(rm) the expected return of the market
  • E(rm) – rf represents the market risk premium.

In this model, the beta (β) parameter is a key parameter and is defined as:

Beta

Where:

  • Cov(r, rm) represents the covariance of the asset return with the market return
  • σ2(rm) is the variance of market return.

The beta is a measure of how sensitive an asset is to market swings. This risk indicator helps investors predict the fluctuations of their asset in relation to the wider market. It compares the volatility of an asset to the systematic risk that exists in the market. Statistically, the beta is the slope of the line obtained by regressing stock returns on market returns, and it tells investors how the asset moves in relation to the market. According to Fama and French (2004), there are two ways to interpret the beta employed in the CAPM:

  • According to the CAPM formula, beta may be thought of in mathematical terms as the slope of the regression of the asset return on the market return observed over different periods. Thus, beta quantifies the asset’s sensitivity to changes in the market return;
  • According to the beta formula, it may be understood as the risk that each dollar invested in an asset adds to the market portfolio. This is an economic explanation based on the observation that the market portfolio’s risk (measured by σ²(rm)) is a weighted average of the covariance risks associated with the assets in the market portfolio, making beta a measure of the covariance risk associated with an asset in comparison to the variance of the market return.
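The beta formula above can be estimated directly from return series. The sketch below uses toy data generated with a known true beta, not the market data used later in the article:

```python
import numpy as np

# Sketch: estimating beta as Cov(r, r_m) / Var(r_m), per the formula above.
# The return series are simulated toy data with a true beta of 1.3.
rng = np.random.default_rng(1)
r_m = rng.normal(0.005, 0.04, 240)                   # 20 years of monthly market returns
r = 0.002 + 1.3 * r_m + rng.normal(0, 0.02, 240)     # asset returns with idiosyncratic noise

# Sample covariance of (r, r_m) divided by the sample variance of r_m
beta = np.cov(r, r_m, ddof=1)[0, 1] / np.var(r_m, ddof=1)
print(round(beta, 2))  # close to the true value of 1.3
```

This ratio is exactly the slope coefficient of the ordinary least squares regression of the asset return on the market return, matching the first interpretation above.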

Additionally, the CAPM makes a distinction between two forms of risk: systematic and specific risk. Systematic risk refers to the risk posed by all non-diversifiable elements such as monetary policy, political events, and natural disasters. By contrast, specific risk refers to the risk inherent in a particular asset and so is diversifiable. As a result, the CAPM solely captures systematic risk via the beta measure, with the market’s beta equal to one, lower-risk assets having a beta less than one, and higher-risk assets having a beta larger than one.

Application of an equity market neutral strategy

For the purposes of this example, let us assume that a portfolio manager wants to invest $100 million across a diverse equity portfolio while maintaining market-neutral exposure to market index changes. To create an equity market-neutral portfolio, we use five stocks from the US equity market: Apple, Amazon, Microsoft, Goldman Sachs, and Pfizer. Using monthly data from Bloomberg for the period from 1999 to 2022, we compute the returns of these stocks and their beta with the US equity index (S&P500). Using the solver function on Excel, we find the weights of the portfolio with the maximum expected return with a beta equal to zero.
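The Solver step can be sketched as a linear program: maximize the expected portfolio return subject to the weights summing to one and the portfolio beta equalling zero. The expected returns, betas and weight bounds below are illustrative assumptions, not the article's Bloomberg estimates:

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the Excel Solver step: maximize expected portfolio return subject to
# full investment (weights sum to 1) and a portfolio beta of zero.
# Expected returns, betas and weight bounds are illustrative assumptions.
exp_ret = np.array([0.15, 0.18, 0.14, 0.12, 0.08])   # Apple, Amazon, Microsoft, GS, Pfizer
betas = np.array([1.20, 1.45, 1.10, 1.35, 0.66])

res = linprog(
    c=-exp_ret,                                # linprog minimizes, so negate returns
    A_eq=np.vstack([np.ones(5), betas]),       # two equality constraints
    b_eq=np.array([1.0, 0.0]),                 # fully invested, zero beta
    bounds=[(-1.0, 2.0)] * 5,                  # allow short positions, cap leverage
)
weights = res.x
print(weights.round(3))
print(float(weights @ betas))  # ~0: the portfolio is market neutral
```

Bounding the weights is essential here: without them the linear program is unbounded, since leverage could be increased indefinitely while keeping the beta at zero.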

Table 1 displays the target weights needed to build a portfolio with a neutral view on the equity market. As shown by the target allocation in Table 1, we can immediately see a substantial long position of 186.7 million dollars on Pfizer, while the remaining equity positions of the portfolio are short positions totaling 86.7 million dollars. Given that the stocks on the short list have high beta values (more than one), this allocation makes sense. Pfizer is the only defensive stock, with a beta of 0.66 in relation to the S&P 500 index.

If the investment manager allocated capital in the following way, he would create an equity market neutral portfolio with a beta of zero:

Apple: -$4.6 million (-4.6% of the portfolio; a weighted-beta of -0.066)
Amazon: -$39.9 million (-39.9% of the portfolio; a weighted-beta of -0.592)
Microsoft: -$16.2 million (-16.2% of the portfolio; a weighted-beta of -0.192)
Goldman Sachs: -$26 million (-26% of the portfolio; a weighted-beta of -0.398)
Pfizer: $186.7 million (186.7% of the portfolio; a weighted-beta of 1.247)
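As a quick check, the weighted betas listed above should sum to (approximately) zero:

```python
# Sketch: verifying that the allocation above is market neutral. Each weighted
# beta is the position weight times the stock's beta; a zero sum means a
# portfolio beta of zero.
weighted_betas = {
    "Apple": -0.066,
    "Amazon": -0.592,
    "Microsoft": -0.192,
    "Goldman Sachs": -0.398,
    "Pfizer": 1.247,
}
portfolio_beta = sum(weighted_betas.values())
print(round(portfolio_beta, 3))  # -0.001, i.e. zero up to rounding
```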

Table 1. Target weights to achieve an equity market neutral portfolio.
Target weights to achieve an equity market neutral portfolio. Source: computation by the author (Data: Bloomberg)

You can find below the Excel spreadsheet that complements the explanations about the equity market neutral portfolio.

 Equity market neutral strategy

An extension of the equity market neutral strategy to other asset classes

A portfolio with a beta of zero, or zero systematic risk, is referred to as a zero-beta portfolio. Its expected return is equal to the risk-free rate, which is relatively low compared to the expected returns of portfolios with a higher beta, and it has no correlation with market movements.

Since a zero-beta portfolio has no market exposure and would consequently underperform a diversified market portfolio, it is highly unlikely that investors will be interested in it during bull markets. During a bear market, it may garner some interest, but investors are likely to ask if investing in risk-free, short-term Treasuries is a better and less expensive alternative to a zero-beta portfolio.

For this example, we imagine the case of a portfolio manager wishing to invest $100 million across a diversified portfolio, while holding a zero-beta portfolio with respect to a broad equity index benchmark. To recreate a diversified portfolio, we compiled a shortlist of trackers that would represent our investment universe. To maintain a balanced approach, we selected trackers representing the main asset classes: global stocks (VTI – Vanguard Total Stock Market ETF), bonds (IEF – iShares 7-10 Year Treasury Bond ETF and TLT – iShares 20+ Year Treasury Bond ETF), and commodities (DBC – Invesco DB Commodity Index Tracking Fund and GLD – SPDR Gold Shares).

To construct the zero-beta portfolio, we pulled a ten-year time series from Refinitiv Eikon and calculated the beta of each asset relative to the broad stock index benchmark (VTI tracker). The target weights to create a zero-beta portfolio are shown in Table 2. As captured by the target allocation in Table 2, we can clearly see an important weight for bonds of different maturities (56.7%), along with a 33.7% allocation to commodities and a small 9.6% allocation to global equity (because of its high beta value).

If the investment manager allocated capital in the following way, he would create a portfolio with a beta of zero:

VTI: $9.69 million (9.69% of the portfolio; a weighted-beta of 0.097)
IEF: $18.99 million (18.99% of the portfolio; a weighted-beta of -0.029)
GLD: $18.12 million (18.12% of the portfolio; a weighted-beta of 0.005)
DBC: $15.5 million (15.50% of the portfolio; a weighted-beta of 0.070)
TLT: $37.7 million (37.7% of the portfolio; a weighted-beta of -0.143)

Table 2. Target weights to achieve a zero-beta portfolio.
Target weights to achieve a zero-beta portfolio Source: computation by the author. (Data: Reuters Eikon)

You can find below the Excel spreadsheet that complements the explanations about the zero beta portfolio.

Zero beta portfolio

Performance of the equity market neutral strategy

To capture the performance of the equity market neutral strategy, we use the Credit Suisse hedge fund strategy index. To establish a comparison between the performance of the global equity market and the equity market neutral strategy, we examine the rebased performance of the Credit Suisse Equity Market Neutral index with respect to the MSCI All-World Index.

The equity market neutral strategy generated an annualized return of -0.18% with an annualized volatility of 7.5%, resulting in a Sharpe ratio of -0.053. During the same period, the Credit Suisse Hedge Fund index had an annualized return of 4.34% with an annualized volatility of 5.64%, resulting in a Sharpe ratio of 0.174. With a market beta exposure of 0.04, the results are consistent with the theory that this approach does not carry the equity risk premium, which explains the underperformance.
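Sharpe ratios of this kind can be computed by annualizing a return series and dividing the excess return by the annualized volatility. The return series and risk-free rate below are illustrative, not the Credit Suisse data:

```python
import numpy as np

# Sketch: annualizing monthly returns and computing a Sharpe ratio as
# (annualized return - risk-free rate) / annualized volatility.
# The return series and the risk-free rate are illustrative toy inputs.
rng = np.random.default_rng(3)
monthly = rng.normal(0.004, 0.016, 120)    # ten years of toy monthly returns
rf = 0.02                                  # assumed annual risk-free rate

ann_return = (1 + monthly).prod() ** (12 / len(monthly)) - 1   # geometric annualization
ann_vol = monthly.std(ddof=1) * np.sqrt(12)                    # square-root-of-time rule
sharpe = (ann_return - rf) / ann_vol
print(round(sharpe, 3))
```

A negative Sharpe ratio, as reported for the equity market neutral index above, simply means the strategy earned less than the risk-free rate over the period.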

Figure 1 gives the performance of the equity market neutral funds (Credit Suisse Equity Market Neutral Index) compared to the hedge funds (Credit Suisse Hedge Fund index) and the world equity funds (MSCI All-World Index) for the period from July 2002 to April 2021.

Figure 1. Performance of the equity market neutral strategy.
Performance of the equity market neutral strategy
Source: computation by the author (Data: Bloomberg)

You can find below the Excel spreadsheet that complements the explanations about the Credit Suisse equity market neutral strategy.

 Equity market neutral performance

Why should I be interested in this post?

Understanding the performance and risk of the equity market neutral strategy might assist investors in incorporating this hedge fund strategy into their portfolio allocation.

Related posts on the SimTrade blog

Hedge funds

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Global macro strategy

   ▶ Youssef LOURAOUI Long/short equity strategy

Financial techniques

   ▶ Youssef LOURAOUI Yield curve structure and interest rate calibration

   ▶ Akshit GUPTA Interest rate swaps

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Pedersen, L. H., 2015. Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined. Princeton University Press.

Business Analysis

Credit Suisse Hedge fund strategy

Credit Suisse Hedge fund performance

Credit Suisse Equity market neutral strategy

Credit Suisse Equity market neutral performance benchmark

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Fixed-income arbitrage strategy

Fixed-income arbitrage strategy

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the fixed-income arbitrage strategy which is a well-known strategy used by hedge funds. The objective of the fixed-income arbitrage strategy is to benefit from trends or disequilibrium in the prices of fixed-income securities using systematic and discretionary trading strategies.

This article is structured as follows: we introduce the fixed-income arbitrage strategy principle. Then, we present a practical case study to grasp the overall methodology of this strategy. We also present a performance analysis of this strategy and compare it to a benchmark representing all hedge fund strategies (Credit Suisse Hedge Fund index) and a benchmark for the global equity market (MSCI All World Index).

Introduction

According to Credit Suisse (a financial institution publishing hedge fund indexes), a fixed-income arbitrage strategy can be defined as follows: “Fixed-income arbitrage funds attempt to generate profits by exploiting inefficiencies and price anomalies between related fixed-income securities. Funds limit volatility by hedging out exposure to the market and interest rate risk. Strategies include leveraging long and short positions in similar fixed-income securities that are related either mathematically or economically. The sector includes credit yield curve relative value trading involving interest rate swaps, government securities and futures, volatility trading involving options, and mortgage-backed securities arbitrage (the mortgage-backed market is primarily US-based and over-the-counter)”.

Types of arbitrage

Fixed-income arbitrage makes money based on two main underlying concepts:

Pure arbitrage

Identical instruments should have identical prices (this is the law of one price). This could be the case, for instance, of two futures contracts traded on two different exchanges. Such a mispricing could be exploited by going long the undervalued contract and short the overvalued contract. This strategy used to work in the days before the rise of electronic trading. Now, pure arbitrage is much less common as information is accessible instantly and algorithmic trading wipes out this kind of market anomaly.

Relative value arbitrage

Similar instruments should have similar prices. The fundamental rationale of this type of arbitrage is the notion of reversion to the long-term mean (or normal relative valuations).

Factors that influence fixed-income arbitrage strategies

We list below the sources of market inefficiencies that fixed-income arbitrage funds can exploit.

Market segmentation

Segmentation is of concern for fixed-income arbitrageurs. In financial institutions, the fixed-income desk is split into different traders looking at specific parts of the yield curve: some will focus on very short-dated bonds, others will concentrate on the middle part of the yield curve (2-5y), while others will look at the long end of the yield curve (10-30y).

Regulation

Regulation has implications for the kind of fixed-income securities a fund can hold on its books. Some regulations actively limit exposure to high-yield securities (junk bonds) since their probability of default is much higher. The diminished popularity linked to this tight regulation can make the valuation of those bonds more attractive than owning investment-grade bonds.

Liquidity

Liquidity is also an important concern for this type of strategy. The more liquid the market, the easier it is to trade and execute the strategy (and vice versa).

Volatility

Large movements in the market can affect the profitability of this kind of strategy.

Instrument complexity

Instrument complexity can also be a source of mispricing in fixed-income securities. The events of 2008 are a clear example of how banks and regulators failed to price correctly the complex, highly risky instruments sold in the market.

Application of a fixed-income arbitrage

Fixed-income arbitrage strategy makes money by focusing on the liquidity and volatility factors generating risk premia. The strategy can potentially generate returns in both rising and falling markets. However, understanding the yield curve structure of interest rates and detecting the relative valuation differential between fixed-income securities is the key concern since this is what makes this strategy profitable (or not!).

We present below a case study related to the behavior of the yield curves in the European fixed-income markets in the mid-1990s.

The European yield curve differential in the mid-1990s

The case shown in this example is the relative-value trade between German and Italian yields during the period before the adoption of the Euro as a common currency (at the end of the 1990s). The yield curve should reflect the future path of interest rates. The Maastricht treaty (signed on 7th February 1992) obliged most EU member states to adopt the Euro if certain monetary and budgetary conditions were met. This would imply that the future paths of interest rates for Germany and Italy should converge towards the same values. However, the differential in terms of interest rates at that point was nearly 350 bps from the 5-year maturity onwards (3.5% spread), as shown in Figure 1.

Figure 1. German and Italian yield curve in January 1995.
German and Italian yield curve in January 1995
Source: Motson (2022) (Data: Bloomberg).

A fixed-income arbitrageur could have profited by entering into an interest rate swap where the investor receives the 5y-5y forward Italian rate and pays the 5y-5y forward German rate. If the Euro were introduced, the spread between the two yield curves for the 5-10y part should converge to zero. As captured in Figure 2, the rates converged towards the same value in 1998, when the spread between the two rates reached zero.

Figure 2. Payoff of the fixed-income arbitrage strategy.
Payoff of the fixed-income arbitrage strategy.
Source: Motson (2022) (Data: Bloomberg).
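The economics of this convergence trade can be sketched with a back-of-the-envelope P&L, assuming an illustrative DV01 (the value of one basis point) for the swap position:

```python
# Sketch of the convergence trade's P&L: receive the Italian 5y-5y forward rate,
# pay the German one, sized by an assumed DV01 (P&L per basis point of spread
# tightening). All numbers are illustrative, not the 1995-1998 market data.
notional_dv01 = 45_000        # assumed P&L per basis point, set by position size
entry_spread_bps = 350        # spread at inception (per Figure 1)
exit_spread_bps = 0           # spread after convergence in 1998 (per Figure 2)

pnl = (entry_spread_bps - exit_spread_bps) * notional_dv01
print(f"{pnl:,}")  # 15,750,000: the profit if the spread converges as anticipated
```

The same arithmetic also shows the risk: had the spread widened instead of converging (for example, had the Euro project failed), each basis point of widening would have cost the same DV01.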

Performance of the fixed-income arbitrage strategy

Overall, the performance of the fixed-income arbitrage strategy between 1994 and 2020 was smaller in scale, with occasional large drawdowns (Asian crisis of 1998, Great Financial Crisis of 2008, Covid-19 pandemic of 2020). This strategy is skewed towards small positive returns but with important tail risk (heavy losses) according to Credit Suisse (2022). To capture the performance of the fixed-income arbitrage strategy, we use the Credit Suisse hedge fund strategy index. To establish a comparison between the performance of the global equity market and the fixed-income arbitrage strategy, we examine the rebased performance of the Credit Suisse index with respect to the MSCI All-World Index.

Over the period from 2002 to 2022, the fixed-income arbitrage strategy index managed to generate an annualized return of 3.81% with an annualized volatility of 5.84%, leading to a Sharpe ratio of 0.129. Over the same period, the Credit Suisse Hedge Fund index managed to generate an annualized return of 5.04% with an annualized volatility of 5.64%, leading to a Sharpe ratio of 0.197. The results are in line with the idea of global diversification and the decorrelation of the returns of the fixed-income arbitrage strategy from global equity returns. Overall, the Credit Suisse fixed-income arbitrage strategy index performed better than the MSCI All World Index, leading to a higher Sharpe ratio (0.129 vs 0.08).

Figure 3 gives the performance of the fixed-income arbitrage funds (Credit Suisse Fixed-income Arbitrage Index) compared to the hedge funds (Credit Suisse Hedge Fund index) and the world equity funds (MSCI All-World Index) for the period from July 2002 to April 2021.

Figure 3. Performance of the fixed-income arbitrage strategy.
 Global macro performance
Source: computation by the author (Data: Bloomberg).

You can find below the Excel spreadsheet that complements the explanations about the fixed-income arbitrage strategy.

Fixed-income arbitrage

Why should I be interested in this post?

The fixed-income arbitrage strategy aims to profit from market dislocations in the fixed-income market. This can be implemented, for instance, by investing in inexpensive fixed-income securities that the fund manager predicts will increase in value, while simultaneously shorting overvalued fixed-income securities to mitigate losses. Understanding the profits and risks associated with such a strategy may aid investors in adopting this hedge fund strategy in their portfolio allocation.

Related posts on the SimTrade blog

Hedge funds

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Global macro strategy

   ▶ Youssef LOURAOUI Long/short equity strategy

Financial techniques

   ▶ Youssef LOURAOUI Yield curve structure and interest rate calibration

   ▶ Akshit GUPTA Interest rate swaps

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Pedersen, L. H., 2015. Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined. Princeton University Press.

Motson, N. 2022. Hedge fund elective. Bayes (formerly Cass) Business School.

Business Analysis

Credit Suisse Hedge fund strategy

Credit Suisse Hedge fund performance

Credit Suisse Fixed-income arbitrage strategy

Credit Suisse Fixed-income arbitrage performance benchmark

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Global macro strategy

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the global macro strategy, one of the most widely known strategies in the hedge fund industry. The goal of the global macro strategy is to look for trends or disequilibria in equity, bond, currency or alternative assets based on broad economic data using a top-down approach.

This article is structured as follows: we introduce the global macro strategy principle. Then, we present a famous case study to grasp the overall methodology of this strategy. We conclude with a performance analysis of this strategy in comparison with a global benchmark (the MSCI All World Index and the Credit Suisse Hedge Fund Index).

Introduction

According to Credit Suisse, a global macro strategy can be defined as follows: “Global Macro funds focus on identifying extreme price valuations and leverage is often applied on the anticipated price movements in equity, currency, interest rate and commodity markets. Managers typically employ a top-down global approach to concentrate on forecasting how political trends and global macroeconomic events affect the valuation of financial instruments. Profits are made by correctly anticipating price movements in global markets and having the flexibility to use a broad investment mandate, with the ability to hold positions in practically any market with any instrument. These approaches may be systematic trend following models, or discretionary.”

This strategy can generate returns in both rising and falling markets. However, asset screening is of key concern, and the ability of the fund manager to capture the global macro picture that is driving all asset classes is what makes this strategy profitable (or not!).

The greatest trade in history

The greatest trade in history (before Michael Burry became famous for anticipating the Global Financial Crisis of 2008 linked to the US housing market) took place in the early 1990s, when the UK had joined the Exchange Rate Mechanism (ERM) founded in 1979. This foreign exchange (FX) system involved eight countries with the intention of moving towards a single currency (the Euro). The currencies of the countries involved would be adjustably pegged within a determined band in which they could fluctuate with respect to the Deutsche Mark (DEM), the currency of Germany considered as the reference of the ERM.

By 1992, the countries adhering to the ERM were growing at different rates. The German government had engaged in intensive spending following the reunification of Germany, supported by significant money creation from the German central bank. At the same time, the German government was very keen on keeping inflation at a satisfactory level, which it achieved by increasing interest rates in order to curb the inflationary pressure in the German economy.

In the United Kingdom (UK), a different macroeconomic picture was taking shape: unemployment was high, and interest rates were already relatively high compared to other European economies. The Bank of England was put in a very tight spot because it faced two main scenarios:

  • Increase interest rates further, which would worsen the economy and drive the UK into a recession
  • Defend the British Pound (GBP) actively in the FX market, since a devaluation would force the UK to leave the ERM.

The Bank of England chose the second option, defending the British Pound by actively buying pounds in the FX market. However, this strategy was not sustainable over time. Soros (and other investors) had seen this disequilibrium, shorted the British Pound and bought the Deutsche Mark. The situation got so far out of control for the Bank of England that in September 1992 it raised interest rates, which were already at 10%, to more than 15% to calm the selling pressure. The following day, the Bank of England announced the exit of the UK from the ERM and brought the interest rate back to 12% until economic conditions improved. Figure 1 gives the evolution of the exchange rate between the British Pound (GBP) and the Deutsche Mark (DEM) over the period 1991-1992.

Figure 1. Evolution of the GBP-DEM (British Pound / Deutsche Mark FX rate).
 Evolution of the GBP-DEM exchange rate
Source: Bloomberg.

It was reported that Soros amassed a position of $10 billion and made a whopping $1 billion on this trade. This event put Soros on the scene as the “man who broke the Bank of England”. A positive note about this market event is that the UK economy emerged healthier than many other European economies, with UK exports becoming much more competitive as a result of the pound's devaluation. This allowed the Bank of England to cut rates down to the 5-6% level in the years following the event, which ultimately helped the UK economy recover.

Performance of the global macro strategy

Overall, the performance of global macro funds between 1994 and 2020 was steady, with occasional large drawdowns (the Asian crisis in 1998, the dot-com bubble in the early 2000s, the Great Financial Crisis of 2008, the Covid-19 pandemic in 2020). On a side note, returns appear smaller and less volatile from 2000 onwards (Credit Suisse, 2022).

To capture the performance of the global macro strategy, we use the Credit Suisse global macro strategy index. To establish a comparison between the performance of the global equity market and the global macro hedge fund strategy, we examine the rebased performance of the Credit Suisse index with respect to the MSCI All-World Index. Over the period from 2002 to 2022, the global macro strategy index generated an annualized return of 7.85% with an annualized volatility of 5.77%, leading to a Sharpe ratio of 0.33. Over the same period, the MSCI All World Index generated an annualized return of 6.00% with an annualized volatility of 15.71%, leading to a Sharpe ratio of 0.08. The correlation of the global macro strategy with the MSCI All World Index is equal to -0.02, which is close to zero. The results are in line with the idea of global diversification and the decorrelation of global macro returns from global equity returns. Overall, the Credit Suisse global macro strategy index performed better than the MSCI All World Index, leading to a higher Sharpe ratio (0.33 vs 0.08).
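The near-zero correlation is what drives the diversification benefit. A short sketch using the volatilities and correlation quoted above shows that a 50/50 mix of the two is less volatile than the average of the two volatilities:

```python
import math

# Figures from the comparison above: global macro (5.77% vol) vs MSCI All World
# (15.71% vol), with a correlation of -0.02 between the two return series
vol_macro, vol_equity, rho = 0.0577, 0.1571, -0.02
w = 0.5  # 50/50 allocation

# Two-asset portfolio variance: w²σ1² + (1-w)²σ2² + 2w(1-w)ρσ1σ2
variance = (w**2 * vol_macro**2 + (1 - w)**2 * vol_equity**2
            + 2 * w * (1 - w) * rho * vol_macro * vol_equity)
port_vol = math.sqrt(variance)

print(round(port_vol, 4))  # well below the 10.74% average of the two volatilities
```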

Figure 2 gives the performance of the global macro funds (Credit Suisse Global Macro Index) compared to the hedge funds (Credit Suisse Hedge Fund index) and the world equity funds (MSCI All-World Index) for the period from July 2002 to April 2021.

Figure 2. Performance of the global macro strategy.
Performance of the global macro strategy
Source: computation by the author (data: Bloomberg).

You can find below the Excel spreadsheet that complements the explanations about the global macro hedge fund strategy.

Global Macro

Why should I be interested in this post?

Global macro funds seek to profit from market dislocations across different asset classes, reducing downside risk while capturing market upside. They might, for example, invest in inexpensive assets that the fund managers believe will rise in price while simultaneously shorting overvalued assets to cut losses. Global macro funds may also use leverage and derivatives to manage their market exposure. Understanding the profits and risks of such a strategy might assist investors in incorporating this hedge fund strategy into their portfolio allocation.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Akshit GUPTA Portrait of George Soros: a famous investor

   ▶ Youssef LOURAOUI Yield curve structure and interest rate calibration

   ▶ Youssef LOURAOUI Long/short equity strategy

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Pedersen, L. H., 2015. Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined. Princeton University Press.

Business Analysis

Credit Suisse Hedge fund strategy

Credit Suisse Hedge fund performance

Credit Suisse Global macro strategy

Credit Suisse Global macro performance benchmark

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Interest rate term structure and yield curve calibration

Interest rate term structure and yield curve calibration

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents a widely used model for building the yield curve, namely the Nelson-Siegel-Svensson model for interest rate calibration.

This article is structured as follows: we introduce the concept of the yield curve. Next, we present the mathematical foundations of the Nelson-Siegel-Svensson model. Finally, we illustrate the model with practical examples.

Introduction

Fine-tuning the term structure of interest rates is the cornerstone of a well-functioning financial market. For this reason, the testing of various term-structure estimation and forecasting models is an important topic in finance that has received considerable attention for several decades (Lorenčič, 2016).

The yield curve is a graphical representation of the term structure of interest rates (i.e. the relationship between the yield and the corresponding maturity of zero-coupon bonds issued by governments). The term structure of interest rates contains information on the yields of zero-coupon bonds of different maturities at a certain date (Lorenčič, 2016). The construction of the term structure is not a simple task due to the scarcity of zero-coupon bonds in the market, which are the basic elements needed to estimate the term structure. The majority of bonds traded in the market carry coupons (regular payments of interest). The yields to maturity of coupon bonds with different maturities or coupons are not directly comparable. Therefore, a method of measuring the term structure of interest rates is needed: zero-coupon interest rates (i.e. yields on bonds that do not pay coupons) should be estimated from the prices of coupon bonds of different maturities using interpolation methods, such as polynomial splines (e.g. cubic splines) and parsimonious functions (e.g. Nelson-Siegel).
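As an illustration of the interpolation step, the sketch below fits a cubic spline through a handful of zero-coupon rates using scipy (the maturities and rates are hypothetical; in practice they would first be bootstrapped from coupon-bond prices):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical zero-coupon rates (in %) at observed maturities (in years)
maturities = np.array([1.0, 2.0, 5.0, 10.0, 30.0])
zero_rates = np.array([3.10, 3.25, 3.50, 3.70, 3.85])

# Cubic spline passes exactly through each observed point
curve = CubicSpline(maturities, zero_rates)
print(round(float(curve(7.0)), 2))  # interpolated 7-year zero rate
```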

As explained in an interesting paper that I read (Lorenčič, 2016), the prediction of the term structure of interest rates is a basic requirement for managing investment portfolios, valuing financial assets and their derivatives, calculating risk measures, valuing capital goods, managing pension funds, formulating economic policy, making decisions about household finances, and managing fixed income assets. The pricing of fixed income securities such as swaps, bonds and mortgage-backed securities depends on the yield curve. When considered together, the yields of non-defaulting government bonds with different characteristics reveal information about forward rates, which are potentially predictive of real economic activity and are therefore of interest to policy makers, market participants and economists. For instance, forward rates are often used in pricing models and can indicate market expectations of future inflation rates and currency appreciation/depreciation rates. Understanding the relationship between interest rates and the maturity of securities is a prerequisite for developing and testing the financial theory of monetary and financial economics. The accurate adjustment of the term structure of interest rates is the backbone of a well-functioning financial market, which is why the refinement of yield curve modelling and forecasting methods is an important topic in finance that has received considerable attention for several decades (Lorenčič, 2016).

The most commonly used models for estimating the zero-coupon curve are the Nelson-Siegel and cubic spline models. For example, the central banks of Belgium, Finland, France, Germany, Italy, Norway, Spain and Switzerland use the Nelson-Siegel model or one of its improved extensions to fit and forecast yield curves (BIS, 2005). The European Central Bank uses the Söderlind-Svensson model, an extension of the Nelson-Siegel model, to estimate yield curves in the euro area (Coroneo, Nyholm & Vidova-Koleva, 2011).

Mathematical foundation of the Nelson-Siegel-Svensson model

In this article, we will deal with the Nelson-Siegel extended model, also known as the Nelson-Siegel-Svensson model. These models are relatively efficient in capturing the general shapes of the yield curve, which explains why they are widely used by central banks and market practitioners.

Mathematically, the formula of Nelson-Siegel-Svensson is given by:

r(τ) = β0 + β1 [(1 − exp(−τ/λ1)) / (τ/λ1)] + β2 [(1 − exp(−τ/λ1)) / (τ/λ1) − exp(−τ/λ1)] + β3 [(1 − exp(−τ/λ2)) / (τ/λ2) − exp(−τ/λ2)]

where

  • τ = time to maturity of a bond (in years)
  • β0 = parameter to capture the level factor
  • β1 = parameter to capture the slope factor
  • β2 = parameter to capture the curvature factor
  • β3 = parameter to capture the magnitude of the second hump
  • λ1 and λ2 = parameters to capture the rate of exponential decay
  • exp = the mathematical exponential function

The parameters β0, β1, β2, β3, λ1 and λ2 can be estimated with the Excel add-in “Solver” by minimizing the sum of squared residuals between the dirty price (market value, present value) of the bonds and the model price of the bonds. The dirty price is the sum of the clean price, retrieved from Bloomberg, and accrued interest. Financial research proposes that the Svensson model should be favored over the Nelson-Siegel model because the yield curve can slope down at the very long end, necessitating the second curvature component of the Svensson model to represent a second hump at longer maturities (Wahlstrøm, Paraschiv, and Schürle, 2022).
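A minimal sketch of the calibration idea in Python, fitting the six NSS parameters to observed zero rates by least squares (here with scipy rather than Excel's Solver, and fitting yields directly rather than bond prices; the observed rates below are hypothetical):

```python
import numpy as np
from scipy.optimize import least_squares

def nss_yield(tau, beta0, beta1, beta2, beta3, lam1, lam2):
    """Nelson-Siegel-Svensson zero rate for maturity tau (in years)."""
    t1, t2 = tau / lam1, tau / lam2
    term1 = (1 - np.exp(-t1)) / t1                 # slope loading
    term2 = term1 - np.exp(-t1)                    # first curvature loading
    term3 = (1 - np.exp(-t2)) / t2 - np.exp(-t2)   # second hump loading
    return beta0 + beta1 * term1 + beta2 * term2 + beta3 * term3

# Hypothetical observed zero rates (in %) for illustration
maturities = np.array([0.5, 1, 2, 3, 5, 7, 10, 20, 30])
observed = np.array([3.0, 3.1, 3.3, 3.4, 3.6, 3.7, 3.8, 3.9, 3.95])

def residuals(p):
    return nss_yield(maturities, *p) - observed

start = [3.5, -0.5, 0.5, 0.5, 1.0, 5.0]  # rough initial guess
# Keep the decay parameters strictly positive during the search
fit = least_squares(residuals, start,
                    bounds=([-10, -10, -10, -10, 0.05, 0.05],
                            [10, 10, 10, 10, 30, 30]))
print(np.round(fit.x, 3))  # calibrated beta0..beta3, lambda1, lambda2
```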

Application of the yield curve structure

In financial markets, the yield curve structure is of the utmost importance, and it is an essential market indicator for central banks. During my last internship at the Central Bank of Morocco, I worked in the middle office, which is responsible for evaluating risk exposures and profits and losses on the positions taken by the bank on a 27.4 billion euro foreign reserve investment portfolio. Volatility, evaluated by the standard deviation (mathematically defined as the deviation of a random variable, asset prices or returns in my example, from its expected value), is one of the primary risk exposure measurements. The standard deviation reveals the degree to which the current return deviates from the expected return. It is one of the indicators most used by investors when analyzing the risk of an investment. Among other important exposure metrics, there is the VaR (Value at Risk) with a 99% confidence level and a 95% confidence level for 1-day and 30-day horizons. In other words, the VaR is a metric used to calculate the maximum loss that a portfolio may sustain with a certain degree of confidence over a given time horizon.
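A sketch of a parametric (normal) VaR of the kind described above, assuming normally distributed returns (the portfolio value, mean and volatility below are hypothetical, not the bank's figures):

```python
from scipy.stats import norm

def parametric_var(value, mu_daily, sigma_daily, confidence, horizon_days):
    """Parametric (normal) VaR: worst loss not exceeded with the given confidence."""
    z = norm.ppf(1 - confidence)  # e.g. about -2.33 at 99% confidence
    h = horizon_days
    # Scale the mean linearly and the volatility by the square root of time
    return -(mu_daily * h + z * sigma_daily * h ** 0.5) * value

# Hypothetical portfolio: 100 M, zero mean daily return, 0.5% daily volatility
print(round(parametric_var(100e6, 0.0, 0.005, 0.99, 1)))   # 1-day 99% VaR
print(round(parametric_var(100e6, 0.0, 0.005, 0.95, 30)))  # 30-day 95% VaR
```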

Every day, the Head of the Middle Office arranges a general meeting in which he gives a global debriefing of the most significant overnight financial news and a debriefing of the middle office desk for “watch out” assets that may present an investment opportunity. The team is tasked with adhering to the investment decisions that define the institution, as it operates neither as an investment bank nor as a hedge fund in terms of risk and leverage. As the central bank has the unique responsibility of safeguarding the national reserve and determining the optimal mix of low-risk assets to invest in, it seeks a sound asset strategy (AAA bonds from European countries coupled with American Treasury bonds). The investment framework divides the entire portfolio into three principal tranches, each with its own features. The first tranche (also known as the security tranche) is determined by calculating the national need for a currency that must be kept safe in order to establish exchange market stability (mostly short-term positions in low-risk, liquid and highly rated bonds). The remainder is split between a buy-and-hold strategy and a market strategy. The former entails taking long positions, held until maturity with no sales during the asset's lifetime, on assets riskier than those of the first tranche (riskier bonds and gold). The latter is based on the purchase and sale of liquid assets with the expectation of better returns.

Market participants are accustomed to categorizing the debt of eurozone nations. Germany and the Netherlands, for instance, are regarded as “core” nations, and their debt as safe-haven assets (Figure 1). Due to the stability of their yield spreads, France, Belgium, Austria, Ireland, and Finland are considered “semi-core” nations (Figure 1). Due to their higher bond yields and more volatile spreads, Spain, Portugal, Italy, and Greece are called “peripheral” nations (BNP Paribas, 2019) (Figure 2). The 10-year spread represents the difference between a country's 10-year bond yield and the yield on the German benchmark bond. It is an indicator of risk: the greater the spread, the greater the risk. Figure 3 represents the yield curve for the Moroccan bond market.
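The 10-year spread computation is straightforward; the sketch below uses hypothetical yields, not current market data:

```python
# Hypothetical 10-year yields (in %); Germany is the euro-zone benchmark
yields_10y = {"Germany": 2.30, "France": 2.85, "Spain": 3.35,
              "Italy": 4.20, "Greece": 4.35}

benchmark = yields_10y["Germany"]
# Spread over the Bund, expressed in basis points: the wider, the riskier
spreads_bp = {country: round((y - benchmark) * 100)
              for country, y in yields_10y.items() if country != "Germany"}
print(spreads_bp)
```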

Figure 1. Yield curves for core countries (Germany, Netherlands) and semi-core (France, Austria) of the euro zone.
Yield curves for core countries of the euro zone
Source: computation by the author.

Figure 2. Yield curves for peripheral countries of the euro zone
(Spain, Italy, Greece and Portugal).
Yield curves for semi-core countries of the euro zone
Source: computation by the author.

Figure 3. Yield curve for Morocco.
Yield curve for Morocco
Source: computation by the author.

This example provides a tool comparable to the one utilized by central banks to measure the change in the yield curve. It is an intuitive and simplified model created in an Excel spreadsheet that facilitates comprehension of the investment process. Indeed, it is capable of continuously refreshing the data by importing the most recent quotations (in this case, retrieved from investing.com, a reputable data source).

One observation can be made about the calibration limits of the Nelson-Siegel-Svensson model. When the interest rate curve is in negative territory (as in the case of the Japanese curve), the NSS model does not manage to fit negative values, producing results with substantial deviations from spot rates. This can be interpreted as a failure of the NSS calibration approach to model a negative interest rate curve.

In conclusion, the NSS model is considered one of the models most used and preferred by central banks to obtain the short- and long-term interest rate structure. Nevertheless, this model does not allow modelling the curve structure when interest rates are negative.

Excel file for the calibration model of the yield curve

You can download an Excel file with data to calibrate the yield curve for different countries. This spreadsheet has a special macro to extract the latest data pulled from investing.com website, a reliable source for time-series data.

Download the Excel file to compute yield curve structure

Why should I be interested in this post?

Predicting the term structure of interest rates is essential for managing investment portfolios, valuing financial assets and their derivatives, calculating risk measures, valuing capital goods, managing pension funds, formulating economic policy, deciding on household finances, and managing fixed income assets. The yield curve affects the pricing of fixed income assets such as swaps, bonds, and mortgage-backed securities. Understanding the yield curve and its utility for the markets can aid in comprehending this parameter’s broader implications for the economy as a whole.

Related posts on the SimTrade blog

Hedge funds

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Equity market neutral strategy

   ▶ Youssef LOURAOUI Fixed income arbitrage strategy

   ▶ Youssef LOURAOUI Global macro strategy

Financial techniques

   ▶ Bijal GANDHI Interest Rates

   ▶ Akshit GUPTA Interest Rate Swaps

Other

   ▶ Youssef LOURAOUI My experience as a portfolio manager in a central bank

Useful resources

Academic research

Lorenčič, E., 2016. Testing the Performance of Cubic Splines and Nelson-Siegel Model for Estimating the Zero-coupon Yield Curve. NGOE, 62(2), 42-50.

Wahlstrøm, Paraschiv, and Schürle, 2022. A Comparative Analysis of Parsimonious Yield Curve Models with Focus on the Nelson-Siegel, Svensson and Bliss Versions. Computational Economics, 59, 967–1004.

Business Analysis

BNP Paribas (2019) Peripheral Debt Offers Selective Opportunities

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Minimum Volatility Portfolio

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) elaborates on the concept of the Minimum Volatility Portfolio, which is derived from Modern Portfolio Theory (MPT) and is also used in practice to build investment funds.

This article is structured as follows: we introduce the concept of Minimum Volatility Portfolio. Next, we present some interesting academic findings, and we finish by presenting a theoretical example to support the explanations given in this article.

Introduction

The minimum volatility portfolio represents a portfolio of assets with the lowest possible risk for an investor and is located on the far-left side of the efficient frontier. Note that the minimum volatility portfolio is also called the minimum variance portfolio or more precisely the global minimum volatility portfolio (to distinguish it from other optimal portfolios obtained for higher risk levels).

Modern Portfolio Theory’s fundamental notion had significant implications for portfolio construction and asset allocation techniques. In the late 1970s, the portfolio management business attempted to capture the market portfolio return. However, as financial research progressed and some substantial contributions were made, new factor characteristics emerged to capture extra performance. The financial literature has long encouraged taking on more risk to earn a higher return. However, this is a common misconception among investors. While extremely volatile stocks can produce spectacular gains, academic research has repeatedly shown that low-volatility stocks provide greater risk-adjusted returns over time. This phenomenon is known as the “low volatility anomaly,” and it is for this reason that many long-term investors include low-volatility factor strategies in their portfolios. This strategy is consistent with Harry Markowitz’s renowned 1952 article, in which he embraces the merits of asset diversification to form a portfolio with the maximum risk-adjusted return.

Academic Literature

Markowitz is widely regarded as a pioneer in financial economics and finance due to the theoretical implications and practical applications of his work in financial markets. Markowitz received the Nobel Prize in 1990 for his contributions to these fields, which he outlined in his 1952 Journal of Finance article titled “Portfolio Selection.” His seminal work paved the way for what is now commonly known as “Modern Portfolio Theory” (MPT).

Markowitz’s 1952 work laid the foundations of modern portfolio theory. Overall, the risk component of MPT may be evaluated using multiple mathematical formulations and managed through the notion of diversification, which requires building a portfolio of assets that exhibits the lowest level of risk for a given level of expected return (or, equivalently, a portfolio of assets that exhibits the highest level of expected return for a given level of risk). Such portfolios are called efficient portfolios. In order to construct optimal portfolios, the theory makes a number of fundamental assumptions regarding the asset selection behavior of individuals. These assumptions are the following (Markowitz, 1952):

  • The only two elements that influence an investor’s decision are the expected rate of return and the variance. (In other words, investors use Markowitz’s two-parameter model to make decisions.)
  • Investors are risk averse. (That is, when faced with two investments with the same expected return but different risks, investors will favor the one with the lower risk.)
  • All investors strive to maximize expected return at a given level of risk.
  • All investors have the same expectations regarding the expected return, variance, and covariances for all risky assets. This assumption is known as the homogeneous expectations assumption.
  • All investors have a one-period investment horizon.

The minimum volatility portfolio (MVP) exists only in theory. In practice, it can only be estimated retrospectively (ex post) for a particular sample size and return frequency. This means that several minimum volatility portfolios exist, each aiming to minimize future (ex ante) volatility. The majority of minimum volatility portfolios have large average exposures to low-volatility and low-beta stocks (Robeco, 2010).

Example

To illustrate the concept of the minimum volatility portfolio, we consider an investment universe composed of three assets with the following characteristics (expected return, volatility and correlation):

  • Asset 1: Expected return of 10% and volatility of 10%
  • Asset 2: Expected return of 15% and volatility of 20%
  • Asset 3: Expected return of 22% and volatility of 35%
  • Correlation between Asset 1 and Asset 2: 0.30
  • Correlation between Asset 1 and Asset 3: 0.80
  • Correlation between Asset 2 and Asset 3: 0.50

The first step to obtain the minimum variance portfolio is to construct the efficient frontier. This curve represents all the portfolios that are optimal in the mean-variance sense. After solving the optimization program, we obtain the weights of the optimal portfolios. Figure 1 plots the efficient frontier obtained from this example. As captured by the plot, the minimum variance portfolio in this three-asset universe is basically concentrated on one holding (100% on Asset 1). In this instance, an investor who wishes to minimize portfolio risk would allocate 100% to Asset 1 since it has the lowest volatility of the three assets retained in this analysis. The investor would earn an expected return of 10% for an annualized volatility of 10% (Figure 1).
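The optimization step can be sketched as follows, replacing Excel's Solver with scipy's constrained minimizer and using the covariance matrix implied by the volatilities and correlations of the example:

```python
import numpy as np
from scipy.optimize import minimize

# Asset characteristics from the example above
vols = np.array([0.10, 0.20, 0.35])
corr = np.array([[1.0, 0.3, 0.8],
                 [0.3, 1.0, 0.5],
                 [0.8, 0.5, 1.0]])
cov = np.outer(vols, vols) * corr  # covariance matrix: cov_ij = rho_ij * s_i * s_j

def port_variance(w):
    return w @ cov @ w

# Long-only, fully invested minimum-variance program
res = minimize(port_variance, x0=np.ones(3) / 3,
               bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(np.round(res.x, 3), round(float(port_variance(res.x)) ** 0.5, 4))
```

The resulting portfolio's volatility can be no higher than that of the least volatile asset, since holding that asset alone is itself a feasible allocation.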

Figure 1. Minimum Volatility Portfolio (MVP) and the Efficient Frontier.
 Minimum Volatility Portfolio
Source: computation by the author.

Excel file to build the Minimum Volatility Portfolio

You can download below an Excel file in order to build the Minimum Volatility portfolio.

Download the Excel file to build the Minimum Volatility Portfolio

Why should I be interested in this post?

Portfolio management aims to optimize the returns on the entire portfolio, not just on one or two stocks. By monitoring and maintaining your investment portfolio, you can accumulate a sizable capital to fulfil a variety of financial objectives, including retirement planning. This article helps to understand the fundamentals behind portfolio construction and investing.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Markowitz Modern Portfolio Theory

   ▶ Jayati WALIA Capital Asset Pricing Model (CAPM)

   ▶ Youssef LOURAOUI Origin of factor investing

   ▶ Youssef LOURAOUI Minimum Volatility Factor

   ▶ Youssef LOURAOUI Beta

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Lintner, John. 1965a. Security Prices, Risk, and Maximal Gains from Diversification. Journal of Finance, 20, 587-616.

Lintner, John. 1965b. The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets. Review of Economics and Statistics, 47, 13-37.

Markowitz, H., 1952. Portfolio Selection. The Journal of Finance, 7, 77-91.

Sharpe, William F. 1963. A Simplified Model for Portfolio Analysis. Management Science, 9, 277-293.

Sharpe, William F. 1964. Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk. Journal of Finance, 19, 425-442.

Business analysis

Robeco, 2010 Ten things you should know about minimum volatility investing.

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).

Moments of a statistical distribution

Moments of a statistical distribution

Shengyu ZHENG

In this article, Shengyu ZHENG (ESSEC Business School, Grande Ecole Program – Master in Management, 2020-2023) presents the first four moments of a statistical distribution: the mean, the variance, the skewness and the kurtosis.

Random variable

A random variable is a variable whose value is determined by the realization of a random event. More precisely, the variable (X) is a measurable function from a set of outcomes (Ω) to a measurable space (E).

X : Ω → E

X is a real random variable provided that the measurable space (E) is, or is a subset of, the set of real numbers (ℝ).

I present an example with the returns of an investment in Apple stock. Figure 1 below represents the time series of the daily returns of Apple stock over the period from November 2017 to November 2022.

Figure 1. Time series of Apple stock returns.
Time series of returns
Source: computation by the author (data: Yahoo Finance).

Figure 2. Histogram of Apple stock returns.
Histogram of returns
Source: computation by the author (data: Yahoo Finance).

Moments of a statistical distribution

The moment of order r ∈ ℕ is an indicator of the dispersion of the random variable X. The ordinary moment of order r is defined, if it exists, by the following formula:

mr = 𝔼(X^r)

We also have the centered moment of order r, defined, if it exists, by the following formula:

cr = 𝔼([X − 𝔼(X)]^r)

Moment of order one: the mean

Definition

The mean or mathematical expectation of a random variable is the value expected on average if the same random experiment is repeated a large number of times. It corresponds to a probability-weighted average of the values that this variable can take, and it is therefore known as the theoretical mean or the true mean.

If a variable X takes an infinite number of values x1, x2, … with probabilities p1, p2, …, the expectation of X is defined as:

μ = m1 = 𝔼(X) = ∑i pixi

The expectation exists provided that this sum is absolutely convergent.

Statistical estimation

The sample mean is an estimator of the expectation. This estimator is unbiased, convergent (by the law of large numbers), and normally distributed (by the central limit theorem).

Given a sample of independent and identically distributed real random variables (X1, …, Xn), the sample mean is:

X̄ = (∑ni=1 Xi)/n

For a standard normal distribution (μ = 0 and σ = 1), the mean is equal to zero.

Moment of order two: the variance

Definition

The variance (moment of order two) is a measure of the dispersion of the values of a random variable around its mean.

Var(X) = σ² = 𝔼[(X − μ)²]

It is the expectation of the squared deviation from the theoretical mean, and it is therefore always non-negative.

For a standard normal distribution (μ = 0 and σ = 1), the variance is equal to one.

Statistical estimation

From a sample (X_1, …, X_n), we can estimate the theoretical variance with the sample variance:

S² = (1/n) ∑_{i=1}^n (X_i − X̄)²

However, this estimator is biased, because 𝔼(S²) = ((n−1)/n) σ². An unbiased estimator is therefore Š² = (1/(n−1)) ∑_{i=1}^n (X_i − X̄)².
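The biased and unbiased variance estimators correspond to NumPy's `ddof` argument; a small sketch with illustrative numbers:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Biased estimator S^2 (divides by n): NumPy's default, ddof=0
s2_biased = np.var(x)

# Unbiased estimator (divides by n - 1): set ddof=1
s2_unbiased = np.var(x, ddof=1)

print(s2_biased)    # 4.0
print(s2_unbiased)  # 4.571... (= 32/7)
```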

Application in finance

The variance corresponds to the volatility of a financial asset. A high variance indicates a larger dispersion of returns, which is unfavorable from the point of view of rational investors, who are risk averse. This concept is a key parameter in Markowitz's modern portfolio theory.

Moment of order three: skewness

Definition

Skewness (the coefficient of asymmetry) is the standardized moment of order three, defined as:

γ_1 = 𝔼[((X − μ)/σ)³]

Skewness measures the asymmetry of the distribution of a random variable. Three cases are distinguished, depending on whether the distribution is skewed to the left, symmetric, or skewed to the right. A negative skewness indicates a distribution skewed to the left, whose left tail is heavier than its right tail. A zero skewness indicates a symmetric distribution, the two tails being equally heavy. Finally, a positive skewness indicates a distribution skewed to the right, whose right tail is heavier than its left tail.

For a normal distribution, the skewness is equal to zero since this distribution is symmetric about its mean.

Moment of order four: kurtosis

Definition

Kurtosis (the coefficient of peakedness) is the standardized moment of order four, defined by:

β_2 = 𝔼[((X − μ)/σ)⁴]

It describes the peakedness of a distribution. A high kurtosis indicates that the distribution is rather peaked at its mean and has fatter tails.

The kurtosis of a normal distribution is 3; such a distribution is called mesokurtic. Above this threshold, a distribution is called leptokurtic. The distributions observed in financial markets are mostly leptokurtic, implying that abnormal and extreme values are more frequent than under a Gaussian distribution. Conversely, a kurtosis below 3 indicates a platykurtic distribution, whose tails are thinner.

For a normal distribution, the kurtosis is equal to three.
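A small Python sketch (NumPy only; the helper functions are ours) that computes the standardized third and fourth moments on a simulated normal sample, for which skewness should be close to 0 and kurtosis close to 3:

```python
import numpy as np

def skewness(x):
    # Standardized third moment: E[((X - mu)/sigma)^3]
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

def kurtosis(x):
    # Standardized fourth moment: E[((X - mu)/sigma)^4]
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4))

rng = np.random.default_rng(42)
normal_sample = rng.standard_normal(100_000)
print(round(skewness(normal_sample), 2))  # close to 0 for a normal sample
print(round(kurtosis(normal_sample), 2))  # close to 3 for a normal sample
```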

Example: distribution of the returns of an investment in Apple stock

We now give an example in finance by studying the distribution of the returns of Apple stock. From the data retrieved from Yahoo! Finance for the period from November 2017 to November 2022, we use the closing-price column to compute daily returns. We use Excel functions to compute the first four moments of the empirical distribution of Apple stock returns, as shown in the table below.

Moments of Apple stock

For a standard normal distribution, the mean is zero, the variance is one, the skewness is zero, and the kurtosis is 3. Compared with a normal distribution, the distribution of Apple stock returns has a slightly positive mean. This means that, in the long run, the return on an investment in this asset is positive. Its skewness is negative, indicating an asymmetry towards the left (negative values). Its kurtosis is greater than 3, which indicates that the tails are fatter than those of the normal distribution.
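The same four moments can also be computed outside Excel; below is a minimal Python sketch on a short, hypothetical price series (the actual study uses five years of Yahoo! Finance closing prices):

```python
import numpy as np

# Hypothetical closing prices (in practice, the Close column retrieved from Yahoo! Finance)
close = np.array([150.0, 152.1, 151.3, 153.8, 152.9, 155.2])

returns = close[1:] / close[:-1] - 1       # daily simple returns

mean = returns.mean()                      # moment of order one
variance = returns.var(ddof=1)             # unbiased sample variance
z = (returns - returns.mean()) / returns.std()
skew = np.mean(z ** 3)                     # standardized third moment
kurt = np.mean(z ** 4)                     # standardized fourth moment (3 for a normal law)
print(mean, variance, skew, kurt)
```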

Excel file for computing the moments

You can download the Excel file for the analysis of the moments of Apple stock via the link below:

Download the Excel file to analyze the moments of the distribution

Other articles on the SimTrade blog

▶ Shengyu ZHENG Categories of risk measures

▶ Shengyu ZHENG Risk measures

Useful resources

Academic articles

Robert C. Merton (1980) On estimating the expected return on the market: An exploratory investigation, Journal of Financial Economics, 8:4, 323-361.

Data

Yahoo! Finance Market data for Apple stock

About the author

This article was written in January 2023 by Shengyu ZHENG (ESSEC Business School, Grande Ecole Program – Master in Management, 2020-2023).

The effect of Elon Musk's Tweets on the Cryptocurrency Market

The effect of Elon Musk’s Tweets on the Cryptocurrency Market

Ines ILLES MEJIAS

In this article, Ines ILLES MEJIAS (ESSEC Business School, Global BBA, 2020-2024) analyzes the effect of Elon Musk’s tweets on the cryptocurrency market and its link with the concept of market efficiency.

Who is Elon Musk?

Founder of SpaceX and Tesla, Elon Musk is known to be one of the richest and most famous people in the world. He is seen as a “technological visionary”, working in companies focused on innovation and technology. Elon Musk currently has over 120 million followers on Twitter, a social media platform on which he is regularly active to talk about his life and his businesses or to give his opinion on a wide variety of topics, one of them being cryptocurrency. It is no surprise that he likes Twitter so much: he purchased the platform for US$44 billion in 2022.

Why does Elon Musk have an impact on the crypto market?

The effect of Elon Musk on the crypto market seems to be explained by his persona: he is known as a successful investor and was one of the wealthiest people in the world in 2022.

His activity on Twitter seems to affect the prices and volumes of cryptocurrencies in the short term, as shown by the price changes and volatility following his tweets. This is called the “Elon Musk effect”. The two best-known cryptocurrencies influenced by Elon Musk are Bitcoin and Dogecoin. Likewise, we know from his tweets and statements at conferences that he currently owns three cryptocurrencies: Bitcoin, Ethereum, and Dogecoin.

Examples of the positive impact of Elon Musk’s tweets on the crypto market

Elon Musk’s tweets seem to have an influence in the variation of cryptocurrency prices.

December 2020: “Bitcoin is my safe word”

In December 2020, Elon Musk tweeted positively about Bitcoin, saying that it is his “safe word”. The value of Bitcoin increased sharply afterwards, as the graph below shows.

Figure 1. Elon Musk’s tweet effect on Bitcoin
 Tweet of Elon Musk 2021
Source: Reuters.

January 2021: Elon Musk shows he is a Bitcoin supporter

In January 2021, Elon Musk added “#bitcoin” to his Twitter bio, after which the value of Bitcoin increased by 20%.

Figure 2. Elon Musk’s tweet effect on Bitcoin
 Tweet of Elon Musk 2021
Source: Blockchain Research Lab.

Figure 3. Elon Musk’s tweet effect on Bitcoin
Elon Musk’s tweet effect on Bitcoin
Source: Blockchain Research Lab.

Moreover, the price of Dogecoin rose by more than 500% after he tweeted that it was his favorite cryptocurrency. For this he is also known as the “Dogefather” or the “King of Dogecoin”. He also tweeted that SpaceX would accept Dogecoin payments, which again made the value of the cryptocurrency rise sharply.

Figure 4. Elon Musk’s tweet effect on Dogecoin
Elon Musk’s tweet effect on Dogecoin
Source: Blockchain Research Lab.

Figure 5. Elon Musk’s tweet effect on Dogecoin
Elon Musk’s tweet effect on Dogecoin
Source: Blockchain Research Lab.

Examples of the negative impact of Elon Musk’s tweets on the crypto market

Elon Musk can have a positive but also a negative impact on the crypto market, creating its ups and downs. For instance, after his appearance on Saturday Night Live in May 2021, Dogecoin's value fell by 34%. This was shocking, considering that many crypto enthusiasts had predicted that the show would push Dogecoin's value up to US$1.

Also, after Musk called Dogecoin a “hustle”, its price went down by more than 30%.

A last example is when Elon Musk tweeted a meme about breaking up with Bitcoin on June 3, 2021. This caused the price of Bitcoin to decrease by 5%.

Figure 6. Tweet of Elon Musk on June 4, 2021
 Tweet of Elon Musk on June 4, 2021
Source: Twitter.

Impact of Elon Musk’s tweets on the cryptocurrency market

Figure 7. Impact of Elon Musk’s tweets on the cryptocurrency market
Impact of Elon Musk’s tweets on the cryptocurrency market
Source: Coinjournal.

Why did it interest me?

This topic really caught my attention as I have always been very interested in investing, although I never had the courage to do so because of the potential loss of real money. So, when I heard about virtual currencies, I became interested in knowing more about them, and after some research I found out about the news regarding Elon Musk and his effect on them. It was surprising to see how much power an individual can have over an entire market, especially through social media.

Link with market efficiency

There are three forms of market efficiency: weak efficiency, related to market data (prices and transaction volumes); semi-strong efficiency, related to all public information (company accounts, analyst reports, etc.); and strong efficiency, related to all public as well as private information.

Given the market reaction after Elon Musk's tweets, the market appears to be efficient in the semi-strong sense. By observing the market reaction before Elon Musk's tweets, we may wonder whether the market is also efficient in the strong sense…

Useful resources

Academic articles

Gupta, R.R., Arya, R.K., Kumar, J., Gururani, A., Dugh, R., Dugh, A. (2022). The Impact of Elon Musk Tweets on Bitcoin Price. In: Mandal, J.K., Hsiung, PA., Sankar Dhar, R. (eds) Topical Drifts in Intelligent Computing. ICCTA 2021. Lecture Notes in Networks and Systems, vol 426. Springer, Singapore. https://doi.org/10.1007/978-981-19-0745-6_44

Business resources

Twitter Elon Musk

Bitcoin

Dogecoin

Blockchain Research Lab

Joe Khalique-Brown (15/06/2021) The Elon Musk Bitcoin saga continues: BTC rallies 10% Coin Journal

Noel Randewich (08/02/2021) Musk’s Bitcoin investment follows months of Twitter talk Reuters.

Related posts on the SimTrade blog

   ▶ Hugo MEYER The regulation of cryptocurrencies: what are we talking about?

   ▶ Alexandre VERLET Cryptocurrencies

   ▶ Alexandre VERLET The NFTs, a new gold rush?

About the author

The article was written in December 2022 by Ines ILLES MEJIAS (ESSEC Business School, Global BBA, 2020-2024).

Arbitrage Pricing Theory (APT)

Arbitrage Pricing Theory (APT)

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the concept of arbitrage portfolio, a pillar concept in asset pricing theory.

This article is structured as follows: we present an introduction for the notion of arbitrage portfolio in the context of asset pricing, we present the assumptions and the mathematical foundation of the model and we then illustrate a practical example to complement this post.

Introduction

Arbitrage pricing theory (APT) is a method of explaining asset or portfolio returns that differs from the capital asset pricing model (CAPM). It was created in the 1970s by economist Stephen Ross. Because of its simpler assumptions, arbitrage pricing theory has grown in favor over the years. However, arbitrage pricing theory is far more difficult to apply in practice since it requires a large amount of data and complicated statistical analysis. The following points should be kept in mind when studying this model:

  • Arbitrage is the technique of buying and selling the same item at two different prices at the same time for a risk-free profit.
  • Arbitrage pricing theory (APT) in financial economics assumes that market inefficiencies emerge from time to time but are prevented from occurring by the efforts of arbitrageurs who discover and instantly remove such opportunities as they appear.
  • APT is formalized through the use of a multi-factor formula that relates the linear relationship between the expected return on an asset and numerous macroeconomic variables.

The concept that mispriced assets can generate short-term, risk-free profit opportunities is inherent in the arbitrage pricing theory. The APT differs from the more traditional CAPM, which employs only one factor (the market factor). Like the CAPM, however, the APT assumes that a factor model can accurately characterize the relationship between risk and return.

Assumptions of the APT model

Arbitrage pricing theory, unlike the capital asset pricing model, does not require that investors have efficient portfolios. However, the theory is guided by three underlying assumptions:

  • Systematic factors explain asset returns.
  • Diversification allows investors to create a portfolio of assets that eliminates specific risk.
  • There are no arbitrage opportunities among well-diversified investments. If arbitrage opportunities exist, they will be taken advantage of by investors.

To have a better grasp on the asset pricing theory behind this model, we can recall in the following part the foundation of the CAPM as a complementary explanation for this article.

Capital Asset Pricing Model (CAPM)

William Sharpe, John Lintner, and Jan Mossin separately developed a key capital market theory based on Markowitz's work: the Capital Asset Pricing Model (CAPM). The CAPM was a huge evolutionary step forward in capital market equilibrium theory, since it enabled investors to appropriately value assets in terms of systematic risk, defined as the market risk which cannot be neutralized by the effect of diversification. In their derivation of the CAPM, Sharpe, Lintner and Mossin made significant contributions to the concepts of the Efficient Frontier and the Capital Market Line. Sharpe's seminal contributions would later earn him the Nobel Prize in Economics in 1990.

The CAPM is based on a set of market structure and investor hypotheses:

  • There are no intermediaries
  • There are no limits (short selling is possible)
  • Supply and demand are in balance
  • There are no transaction costs
  • Investors maximize the value of their portfolio by maximizing the mean of expected returns while minimizing the variance of returns
  • Investors have simultaneous access to information in order to implement their investment plans
  • Investors are seen as “rational” and “risk averse”.

Under this framework, the expected return of a given asset is related to its risk measured by the beta and the market risk premium:

CAPM risk beta relation

Where:

  • E(ri) represents the expected return of asset i
  • rf the risk-free rate
  • βi the measure of the risk of asset i
  • E(rm) the expected return of the market
  • E(rm)- rf the market risk premium.

In this model, the beta (β) parameter is a key parameter and is defined as:

CAPM beta formula

Where:

  • Cov(ri, rm) represents the covariance of the return of asset i with the return of the market
  • σ2(rm) is the variance of the return of the market.

The beta is a measure of how sensitive an asset is to market swings. This risk indicator aids investors in predicting the fluctuations of their asset in relation to the wider market. It compares the volatility of an asset to the systematic risk that exists in the market. The beta is a statistical term that denotes the slope of a line formed by a regression of data points comparing stock returns to market returns. It aids investors in understanding how the asset moves in relation to the market. According to Fama and French (2004), there are two ways to interpret the beta employed in the CAPM:

  • According to the CAPM formula, beta may be thought in mathematical terms as the slope of the regression between the asset return and the market return. Thus, beta quantifies the asset sensitivity to changes in the market return.
  • According to the beta formula, it may be understood as the risk that each dollar invested in an asset adds to the market portfolio. This is an economic explanation based on the observation that the market portfolio’s risk (measured by σ2(rm)) is a weighted average of the covariance risks associated with the assets in the market portfolio, making beta a measure of the covariance risk associated with an asset in comparison to the variance of the market return.
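The beta and CAPM computations above can be sketched in a few lines of Python; the return series below are hypothetical, purely illustrative numbers, not market data:

```python
import numpy as np

# Hypothetical monthly returns for an asset and for the market
r_asset  = np.array([0.020, -0.010, 0.030, 0.015, -0.020, 0.025])
r_market = np.array([0.015, -0.005, 0.020, 0.010, -0.015, 0.020])

# beta_i = Cov(r_i, r_m) / Var(r_m)
beta = np.cov(r_asset, r_market, ddof=1)[0, 1] / np.var(r_market, ddof=1)

# CAPM: E(r_i) = r_f + beta_i * (E(r_m) - r_f)
rf = 0.001
expected_return = rf + beta * (r_market.mean() - rf)
print(beta, expected_return)
```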

Mathematical foundations

The APT can be described formally by the following equation

APT expected return formula

Where:

  • E(rp) represents the expected return of portfolio p
  • rf the risk-free rate
  • βk the sensitivity of the return on portfolio p to the kth factor (fk)
  • λk the risk premium for the kth factor (fk)
  • K the number of risk factors
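The APT equation above can be illustrated numerically; the betas and factor risk premia below are hypothetical values chosen for the example:

```python
# Minimal numerical sketch of the APT equation with K = 2 factors:
# hypothetical sensitivities (betas) and factor risk premia (lambdas)
rf = 0.02
betas = [1.2, -0.4]
lambdas = [0.05, 0.01]

# E(r_p) = r_f + sum_k beta_k * lambda_k = 0.02 + 1.2*0.05 + (-0.4)*0.01 = 0.076
e_rp = rf + sum(b * l for b, l in zip(betas, lambdas))
print(e_rp)
```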

Richard Roll and Stephen Ross found that the APT can be sensitive to the following factors:

  • Expectations on inflation
  • Industrial production (GDP)
  • Risk premiums
  • Term structure of interest rates

Furthermore, the researchers claim that an asset will have different sensitivities to the factors listed above, even if it has the same market beta as described by the CAPM.

Application

For this specific example, we want to understand the asset price behavior of two equity indexes (Nasdaq for the US and Nikkei for Japan) and assess their sensitivity to different macroeconomic factors. We extract time series of Nasdaq equity index prices, Nikkei equity index prices, the USD/CNY FX spot rate, and the US term structure of interest rates (10y-2y yield spread) over the last two decades from the FRED Economics website, a reliable source of macroeconomic data.

The first factor, the USD/CNY (US Dollar/Chinese renminbi yuan) exchange rate, is retained as the primary factor to explain portfolio returns. Given China's position as a major economic player and one of the most important markets for US and Japanese corporations, analyzing the sensitivity of US and Japanese equity returns to changes in the USD/CNY FX spot rate can help in understanding the market dynamics underlying US and Japanese equity performance. For instance, Texas Instruments, which operates in the electronics and semiconductor sector, and Nike both have significant ties to the Chinese market, with an overall exposure of approximately 55% and 18%, respectively (Barrons, 2022). In the case of Japan, in 2017 the Japanese government invested 117 billion dollars in direct investment in northern China, one of the largest foreign investments in China. Similarly, large listed Japanese businesses get approximately 18% of their international revenues from the Chinese market (The Economist, 2019).

The second factor, the 10y-2y yield spread, is linked to the shape of the yield curve. An inverted yield curve indicates that long-term interest rates are lower than short-term interest rates: yields decrease as maturity increases. The inverted yield curve, also known as a negative yield curve, has historically been a reliable indicator of a recession. Analysts frequently condense yield curve signals into the difference between two maturities. According to the paper of Yu et al. (2013), there is a significant link between the slope of the yield curve and the performance of US equities between 2006 and 2012. Regardless of market capitalization, the impact of a steeper yield slope on stock prices was positive.

The APT applied to this example can be described formally by the following equation:

APT expected return formula example

Where:

  • E(rp) represents the expected return of portfolio p
  • rf the risk-free rate
  • βp, Chinese FX the sensitivity of the return on portfolio p to the USD/CNY FX spot rate
  • βp, US spread the sensitivity of the return on portfolio p to the US term structure
  • λChinese FX the risk premium for the FX risk factor
  • λUS spread the risk premium for the interest rate risk factor

We run a first regression of the Nikkei 225 Japanese equity index returns on the macroeconomic variables retained in this analysis. We can highlight the following: both factors are not statistically significant at the 10% significance level, indicating that the factors have poor predictive power in explaining Nikkei 225 returns over the last two decades. The model has a low R², equal to 0.48%, which indicates that only 0.48% of the variation of the Nikkei performance can be attributed to changes in the USD/CNY FX spot rate and the US term structure of the yield curve (Table 1).

Table 1. Nikkei 225 equity index regression output.
 Time-series regression
Source: computation by the author (Data: FRED Economics)

Figures 1 and 2 capture the linear relationship between, respectively, the USD/CNY FX spot rate and the US term structure, and the Nikkei 225 equity index.

Figure 1. Relationship between the USD/CNY FX spot rate and the Nikkei 225 equity index.
 Time-series regression
Source: computation by the author (Data: FRED Economics)

Figure 2. Relationship between the US term structure with respect to the Nikkei 225 equity index.
 Time-series regression
Source: computation by the author (Data: FRED Economics)

We conduct a second regression of the Nasdaq US equity index returns on the retained macroeconomic variables. We may emphasize the following: both factors are not statistically significant at the 10% significance level, indicating that they have a limited ability to predict Nasdaq returns during the past two decades. The model has a low R² of 4.45%, indicating that only 4.45% of the variation of the Nasdaq performance can be attributed to changes in the USD/CNY FX spot rate and the US term structure of the yield curve (Table 2).

Table 2. Nasdaq equity index regression output.
 Time-series regression
Source: computation by the author (Data: FRED Economics)

Figures 3 and 4 capture the linear relationship between, respectively, the USD/CNY FX spot rate and the US term structure, and the Nasdaq equity index.

Figure 3. Relationship between the USD/CNY FX spot rate and the Nasdaq equity index.
 Time-series regression
Source: computation by the author (Data: FRED Economics)

Figure 4. Relationship between the US term structure with respect to the Nasdaq equity index.
 Time-series regression
Source: computation by the author (Data: FRED Economics)
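The two regressions above can be sketched in Python with ordinary least squares; the series below are simulated stand-ins for the article's data, not the actual FRED series:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-ins for the article's series (hypothetical data):
y  = rng.normal(0.0, 0.010, 500)   # equity index returns
x1 = rng.normal(0.0, 0.005, 500)   # changes in the USD/CNY FX spot rate
x2 = rng.normal(0.0, 0.020, 500)   # changes in the 10y-2y yield spread

X = np.column_stack([np.ones(500), x1, x2])   # regressors with an intercept
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimates (alpha, beta_fx, beta_spread)

resid = y - X @ coef
r_squared = 1.0 - resid.var() / y.var()       # share of variance explained
print(coef, r_squared)
```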

Applying APT

We can create a portfolio with the same factor sensitivities as the Arbitrage Portfolio by combining the two index portfolios (with a 40% weight on the Nasdaq index and a 60% weight on the Nikkei index). This is referred to as the Constructed Index Portfolio. The Arbitrage Portfolio has a full weighting on the US equity index (100% Nasdaq equity index). The Constructed Index Portfolio has the same systematic factor betas as the Arbitrage Portfolio, but a higher expected return (Table 3).

Table 3. Index, Constructed and Arbitrage Portfolio return and sensitivity table.
Source: computation by the author (Data: FRED Economics)

As a result, the Arbitrage Portfolio is overvalued. We would then sell shares of the Arbitrage Portfolio and use the proceeds to buy shares of the Constructed Index Portfolio. Because every investor would sell an overvalued portfolio and purchase an undervalued portfolio, any arbitrage profit would eventually be wiped out.
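The factor betas of the combined portfolio are the weighted averages of the index betas; a short sketch (the beta values below are hypothetical, not those of Table 3):

```python
# Hypothetical factor betas for the two index portfolios
w_nasdaq, w_nikkei = 0.40, 0.60
betas_nasdaq = [1.10, 0.30]   # (beta_fx, beta_spread), illustrative numbers
betas_nikkei = [0.80, 0.50]

# Betas of the constructed portfolio are the weighted averages of the index betas
betas_constructed = [w_nasdaq * bn + w_nikkei * bk
                     for bn, bk in zip(betas_nasdaq, betas_nikkei)]
print([round(b, 2) for b in betas_constructed])  # [0.92, 0.42]
```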

Excel file for the APT application

You can find below the Excel spreadsheet that complements the example above.

 Download the Excel file to assess an arbitrage portfolio example

Why should I be interested in this post?

In the CAPM, the factor is the market factor representing the global uncertainty of the market. In the late 1970s, the portfolio management industry aimed to capture the market portfolio return, but as financial research advanced and certain significant contributions were made, other factor characteristics emerged to capture additional performance. Analyzing the historical contributions that underpin factor investing is fundamental in order to have a better understanding of the subject.

Related posts on the SimTrade blog

   ▶ Jayati WALIA Capital Asset Pricing Model (CAPM)

   ▶ Youssef LOURAOUI Origin of factor investing

   ▶ Youssef LOURAOUI Factor Investing

   ▶ Youssef LOURAOUI Fama-MacBeth regression method: stock and portfolio approach

   ▶ Youssef LOURAOUI Fama-MacBeth regression method: Analysis of the market factor

   ▶ Youssef LOURAOUI Fama-MacBeth regression method: N-factors application

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Lintner, J. (1965) The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets. The Review of Economics and Statistics 47(1): 13-37.

Lintner, J. (1965) Security Prices, Risk and Maximal Gains from Diversification. The Journal of Finance 20(4): 587-615.

Roll, R., S. Ross (1995) The Arbitrage Pricing Theory Approach to Strategic Portfolio Planning, Financial Analysts Journal 51: 122-131.

Ross, S. (1976) The arbitrage theory of capital asset pricing, Journal of Economic Theory 13(3): 341-360.

Sharpe, W.F. (1963) A Simplified Model for Portfolio Analysis. Management Science 9(2): 277-293.

Sharpe, W.F. (1964) Capital Asset Prices: A theory of Market Equilibrium under Conditions of Risk. The Journal of Finance 19(3): 425-442.

Yu, G., P. Fuller, D. Didia (2013) The Impact of Yield Slope on Stock Performance Southwestern Economic Review 40(1): 1-10.

Business Analysis

Barrons (2022) Apple, Nike, and 6 Other Companies With Big Exposure to China.

The Economist (2019) Japan Inc has thrived in China of late.

Investopedia (2022) Arbitrage Pricing Theory: It’s Not Just Fancy Math.

Time series

FRED Economics (2022) Chinese Yuan Renminbi to U.S. Dollar Spot Exchange Rate (DEXCHUS).

FRED Economics (2022) 10-Year Treasury Constant Maturity Minus 2-Year Treasury Constant Maturity (T10Y2Y).

FRED Economics (2022) NASDAQ Composite Index (NASDAQCOM).

FRED Economics (2022) Nikkei Stock Average, Nikkei 225 (NIKKEI225).

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).