Market Making

Martin VAN DER BORGHT

In this article, Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024) explains the activity of market making which is key to bring liquidity in financial markets.

Market Making: What is It and How Does It Work?

Market making is a trading strategy that involves creating liquidity in a security by simultaneously being ready to buy and sell amounts of that security. Market makers provide an essential service to the market by providing liquidity to buyers and sellers, which helps to keep stock prices stable (by limiting the price impact of incoming orders). This type of trading is often done by large institutional investors such as banks. In this article, I will explore what market making is, how it works, and provide some real-life examples of market makers in action.

What is Market Making?

Market making is a trading strategy that involves simultaneously being ready to buy and sell amounts of a security in order to create or improve market liquidity for other participants. Market makers are also known as “specialists” or “primary dealers” on the stock market. They act as intermediaries between buyers and sellers, providing liquidity to the market by always being willing to buy and sell a security at a certain price (or more precisely at two prices: a price to buy and a price to sell). A market maker's remuneration comes from the spread between the bid price and the ask price of a security.
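As a numerical sketch of that remuneration (all prices and sizes here are hypothetical), the income from one round trip, buying at the bid and reselling at the ask, is simply the spread times the quantity:

```python
# Minimal sketch of how a market maker earns the bid-ask spread.
# All prices and sizes are hypothetical illustrative values.

def spread_pnl(bid: float, ask: float, quantity: int) -> float:
    """Profit from buying `quantity` shares at the bid and reselling them at the ask."""
    return (ask - bid) * quantity

# The market maker quotes 99.98 bid / 100.02 ask (a 4-cent spread).
# If it buys 1,000 shares from a seller and resells them to a buyer:
profit = spread_pnl(bid=99.98, ask=100.02, quantity=1_000)
print(round(profit, 2))  # 40.0
```

In practice this gross income is offset by inventory risk: between the buy and the sell, the market maker holds the shares and is exposed to price moves.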

How Does Market Making Work?

Market makers create liquidity by always having an inventory of securities that they can buy and sell. They are willing to buy and sell a security at any given time, and they do so at a certain price. The price they buy and sell at may be different from the current market price, as market makers may be trying to influence the price of a security in order to make a profit.

Market makers buy and sell large amounts of a security in order to maintain an inventory, and they use a variety of techniques to do so. For example, they may buy large amounts of a security when the price is low and sell it when the price is high. They may also use algorithms to quickly buy and sell a security in order to take advantage of small price movements.

By providing liquidity to the market, market makers help to keep stock prices stable. They are able to do this by quickly buying and selling large amounts of a security in order to absorb excess demand or supply. This helps to prevent large price fluctuations and helps to keep the price of a security within a certain range.
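The inventory-absorption logic described above can be sketched as a toy quoting rule: the market maker centers its quotes on the mid-price and skews them against its inventory, so a long position lowers both quotes (attracting buyers) while a short position raises them. All parameter values are hypothetical, and real market makers use far richer models:

```python
# Toy quoting rule: center quotes on the mid-price and skew them
# against inventory so the book mean-reverts toward a flat position.
# Hypothetical parameters; not a production strategy.

def make_quotes(mid: float, half_spread: float, inventory: int,
                skew_per_share: float):
    """Return (bid, ask) quotes skewed against current inventory."""
    skew = inventory * skew_per_share  # positive inventory pushes both quotes down
    bid = mid - half_spread - skew
    ask = mid + half_spread - skew
    return round(bid, 2), round(ask, 2)

# Flat book: symmetric quotes around the mid-price.
print(make_quotes(mid=100.0, half_spread=0.05, inventory=0, skew_per_share=0.001))
# Long 50 shares: both quotes shift down to attract buyers and shed inventory.
print(make_quotes(mid=100.0, half_spread=0.05, inventory=50, skew_per_share=0.001))
```

The skew is what keeps the inventory bounded: the more stock the market maker absorbs, the more aggressively it prices to unload it.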

Market making nowadays

One of the most well-known examples of market making is the role played by Wall Street banks. These banks act as market makers for many large stocks on the NYSE and NASDAQ. They buy and sell large amounts of a security in order to maintain an inventory, and they use algorithms to quickly buy and sell a security in order to take advantage of small price movements.

Another example of market making is the practice of high-frequency trading (HFT). In his book Flash Boys, author Michael Lewis examines the impact of HFT on market making. HFT uses powerful computers and sophisticated algorithms to rapidly analyze large amounts of data, allowing traders to execute in milliseconds. Market makers have adopted HFT strategies to gain an edge over traditional market makers, allowing them to make markets faster and at narrower spreads. This has resulted in tighter spreads and higher trading volumes, but it has also been argued that it has led to increased volatility and decreased liquidity in stressed conditions. As a result, some investors contend that HFT strategies have created an uneven playing field, where HFT firms have an advantage over traditional market makers.

The use of HFT has also raised concerns about the fairness of markets. HFT firms have access to large amounts of data, which they can use to gain an informational advantage over other market participants. This has raised questions about how well these firms are able to price securities accurately, and whether they are engaging in manipulative practices such as front running. Additionally, some argue that HFT firms are able to take advantage of slower traders by trading ahead of them and profiting from their trades.

These concerns have led regulators to take a closer look at HFT and market making activities. The SEC and other regulators have implemented a number of rules designed to protect investors from unfair or manipulative practices. These include Regulation NMS, which requires market makers to post their best bid and ask prices for securities, as well as Regulation SHO, which prohibits naked short selling and other manipulative practices. Additionally, the SEC has proposed rules that would require exchanges to establish circuit breakers and limit the amount of order cancellations that can be done in a certain period of time. These rules are intended to ensure that markets remain fair and efficient for all investors.

Conclusion

In conclusion, market making is a trading strategy that involves creating liquidity in a security by simultaneously being ready to buy and sell large amounts of that security. Market makers provide an essential service to the market by providing liquidity to buyers and sellers, which helps to keep stock prices stable. Wall Street banks and high-frequency traders are two of the most common examples of market makers.

Related posts on the SimTrade blog

   ▶ Akshit GUPTA Market maker – Job Description

Useful resources

SimTrade course Market making

Michael Lewis (2014) Flash Boys: A Wall Street Revolt, W. W. Norton & Company.

U.S. Securities and Exchange Commission (SEC) Specialists

About the author

The article was written in January 2023 by Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024).


Evidence of underpricing during IPOs

Martin VAN DER BORGHT

In this article, Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024) exposes the results of various studies concerning IPO underpricing.

What is IPO Underpricing?

Underpricing is estimated as the percentage difference between the price at which the IPO shares were sold to investors (the offer price) and the price at which the shares subsequently trade in the market. As an example, imagine an IPO for which the shares were sold at $20 and whose first day of trading shows shares trading at $23.5; the associated underpricing is (23.5 / 20) - 1 = 17.5%.
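This computation can be written as a one-line helper, using the figures from the example above:

```python
# Initial (underpricing) return: percentage gain from the offer price
# to the first-day trading price. Figures from the example in the text.

def underpricing(offer_price: float, first_day_price: float) -> float:
    """Initial return of an IPO."""
    return first_day_price / offer_price - 1

print(f"{underpricing(20, 23.5):.1%}")  # 17.5%
```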

In well-developed capital markets, and in the absence of restrictions on how much prices are allowed to fluctuate from day to day, the full extent of underpricing is evident fairly quickly, certainly by the end of the first day of trading, as investors jump on the opportunity and push prices toward the fair value of the asset entering the market; most studies therefore use the first-day closing price when computing initial underpricing returns. Using later prices, say at the end of the first week of trading, is useful in less developed capital markets, or in the presence of ‘daily volatility limits’ restricting price fluctuations, because aftermarket prices may take some time to equilibrate supply and demand.

In the U.S. and increasingly in Europe, the offer price is set just days (or even more typically, hours) before trading on the stock market begins. This means that market movements between pricing and trading are negligible and so usually ignored. But in some countries (for instance, Taiwan and Finland), there are substantial delays between pricing and trading, and so it makes sense to adjust the estimate of underpricing for interim market movements.
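A simple way to express this adjustment (the 3% market return below is a hypothetical figure) is to subtract the market's return over the pricing-to-trading window from the raw initial return:

```python
# Market-adjusted underpricing: subtract the market index return over
# the window between pricing and the first trading day from the raw
# initial return. All numbers here are hypothetical.

def market_adjusted_underpricing(offer_price: float, first_day_price: float,
                                 market_return: float) -> float:
    raw = first_day_price / offer_price - 1
    return raw - market_return

# Raw initial return of 17.5%, but the market rose 3% between pricing
# and the first trading day: adjusted underpricing is 14.5%.
print(f"{market_adjusted_underpricing(20, 23.5, 0.03):.1%}")  # 14.5%
```

When the delay between pricing and trading is only hours, as in the U.S., this adjustment is negligible and usually skipped.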

As an alternative to computing percentage initial returns, underpricing can also be measured as the (dollar) amount of ‘money left on the table’. This is defined as the difference between the aftermarket trading price and the offer price, multiplied by the number of shares sold at the IPO. The implicit assumption in this calculation is that shares sold at the offer price could have been sold at the aftermarket trading price instead—that is, that aftermarket demand is price-inelastic. As an example, imagine an IPO for which the shares were sold at $20, the first day of trading shows shares trading at $23.5, and 20 million shares were sold. The IPO proceeds were $400,000,000, and by the end of the first trading day the market value of those shares had risen to $470,000,000, implying $70,000,000 of money left on the table.
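The example above can be reproduced with a short helper:

```python
# "Money left on the table": the dollar gain that accrued to IPO buyers
# rather than to the issuer. Figures from the example in the text.

def money_left_on_table(offer_price: float, first_day_price: float,
                        shares_sold: int) -> float:
    return (first_day_price - offer_price) * shares_sold

mlot = money_left_on_table(offer_price=20, first_day_price=23.5,
                           shares_sold=20_000_000)
print(f"${mlot:,.0f}")  # $70,000,000
```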

The U.S. probably has the most active IPO market in the world, by number of companies going public and by the aggregate amount of capital raised. Over long periods of time, underpricing in the U.S. averages between 10 and 20 percent, but there is a substantial degree of variation over time. There are occasional periods when the average IPO is overpriced, and there are periods when companies go public at quite substantial discounts to their aftermarket trading value. In 1999 and 2000, for instance, the average IPO was underpriced by 71% and 57%, respectively. In dollar terms, U.S. issuers left an aggregate of $62 billion on the table in those two years alone. Such periods are often called “hot issue markets”. Given these vast amounts of money left on the table, it is surprising that issuers appear to put so little pressure on underwriters to change the way IPOs are priced. A recent counterexample, however, is Google’s IPO, which, unusually for a U.S. IPO, was priced using an auction.

Why Has IPO Underpricing Changed over Time?

Underpricing is the difference between the price of a stock when it is first offered on the public market (the offer price) and the price at which it trades after it has been publicly traded (the first-day return). Various authors note that underpricing has traditionally been seen as a way for firms to signal quality to potential investors, which helps them to attract more investors and raise more capital.

In their study “Why Has IPO Underpricing Changed over Time?”, authors Tim Loughran and Jay Ritter discuss how the magnitude of underpricing has varied over time. They note that the average underpricing was particularly high in the 1970s and 1980s, with average first-day returns of around 45%. However, they also find that underpricing has declined significantly since then, with average first-day returns now hovering around 10%.

They then analyze the reasons for this decline in underpricing. They argue that the increased availability of information has made it easier for potential investors to assess a company’s quality prior to investing, thus reducing the need for firms to signal quality through underpricing. Additionally, they suggest that increased transparency and reduced costs of capital have also contributed to the decline in underpricing. Finally, they suggest that improved liquidity has made it easier for firms to raise capital without relying on underpricing.

These changes in underpricing have affected both existing and potential investors. Main arguments are that existing shareholders may benefit from reduced underpricing because it reduces the amount of money that is taken out of their pockets when a company goes public. On the other hand, potential investors may be disadvantaged by reduced underpricing because it reduces the return they can expect from investing in an IPO.

In conclusion we can note that while underpricing has declined significantly over time, there is still some evidence of underpricing in today’s markets. It suggests that further research is needed to understand why this is the case and how it affects investors. Many argue that research should focus on how different types of IPOs are affected by changes in underpricing, as well as on how different industries are affected by these changes. Additionally, they suggest that researchers should investigate how different investor groups are affected by these changes, such as institutional investors versus retail investors.

Overall, studies provide valuable insight into why IPO underpricing has changed so dramatically over the past four decades and how these changes have affected both existing shareholders and potential investors. It provides convincing evidence that increased access to information, greater transparency, reduced costs of capital, and improved liquidity have all contributed to the decline in underpricing. While it is clear that underpricing has declined significantly over time, further research is needed to understand why some IPOs still exhibit underpricing today and what effect this may have on different investor groups.

Related posts on the SimTrade blog

   ▶ Louis DETALLE A quick review of the ECM (Equity Capital Market) analyst’s job…

Useful resources

Ljungqvist A. (2004) IPO Underpricing: A Survey, Handbook in corporate finance: empirical corporate finance, Edited by B. Espen Eckbo.

Loughran T. and J. Ritter (2004) Why Has IPO Underpricing Changed over Time? Financial Management, 33(3), 5-37.

Ellul A. and M. Pagano (2006) IPO Underpricing and After-Market Liquidity, The Review of Financial Studies, 19(2), 381-421.

About the author

The article was written in January 2023 by Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024).


Market efficiency

Martin VAN DER BORGHT

In this article, Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024) explains the key financial concept of market efficiency.

What is Market Efficiency?

Market efficiency is an economic concept that states that financial markets are efficient when all relevant information is accurately reflected in the prices of assets. This means that the prices of assets reflect all available information and that no one can consistently outperform the market by trading on the basis of this information. Market efficiency is often measured by the degree to which prices accurately reflect all available information.

The efficient market hypothesis (EMH) states that markets are efficient and that it is impossible to consistently outperform the market by utilizing available information. This means that any attempt to do so will be futile and that all investors can expect to earn the same expected return over time. The EMH is based on the idea that prices are quickly and accurately adjusted to reflect new information, which means that no one can consistently make money by trading on the basis of this information.

Types of Market Efficiency

Following Fama’s academic works, there are three different types of market efficiency: weak, semi-strong, and strong.

Weak form of market efficiency

The weak form of market efficiency states that asset prices reflect all information from past prices and trading volumes. This implies that technical analysis, which is the analysis of past price and volume data to predict future prices, is not an effective way to outperform the market.
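As an illustration of what the weak form implies, the sketch below simulates a price series with independent daily returns (a random walk, not real market data) and checks that the lag-1 autocorrelation of returns, the kind of pattern technical analysis would need to exploit, is close to zero:

```python
# Weak-form efficiency sketch: under the weak form, past returns should
# not predict future returns, so the first-order autocorrelation of
# returns should be near zero. The series below is simulated i.i.d.
# noise (a random walk in prices), not real market data.
import random

random.seed(42)
returns = [random.gauss(0, 0.01) for _ in range(5000)]  # i.i.d. daily returns

def autocorr_lag1(x):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i - 1] - mean) for i in range(1, n))
    den = sum((xi - mean) ** 2 for xi in x)
    return num / den

rho = autocorr_lag1(returns)
print(f"lag-1 autocorrelation: {rho:.4f}")  # close to 0 for a random walk
assert abs(rho) < 0.05  # consistent with weak-form efficiency
```

Applied to real return series, a significantly nonzero autocorrelation would be evidence against the weak form; for liquid stocks it is typically very small.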

Semi-strong form of market efficiency

The semi-strong form of market efficiency states that asset prices reflect all publicly available information, including financial statements, research reports, and news. This implies that fundamental analysis, which is the analysis of a company’s financial statements and other publicly available information to predict future prices, is also not an effective way to outperform the market.

Strong form of market efficiency

Finally, the strong form of market efficiency states that prices reflect all available information, including private information. This means that even insider trading, which is the use of private information to make profitable trades, is not an effective way to outperform the market.

The Grossman-Stiglitz paradox

If financial markets are informationally efficient in the sense that they incorporate all relevant available information, then gathering this information is useless for making investment decisions, since it cannot be used to beat the market in the long term. But if no market participant looks at information, how does this information get incorporated into market prices in the first place? This is the Grossman-Stiglitz paradox.

Real-Life Examples of Market Efficiency

The efficient market hypothesis has been extensively studied and there are numerous examples of market efficiency in action.

NASDAQ IXIC 1994 – 2005

One of the most famous examples is the dot-com bubble of the late 1990s. During this time, the prices of tech stocks skyrocketed to levels that were far higher than their fundamental values. This irrational exuberance was quickly corrected as the prices of these stocks were quickly adjusted to reflect the true value of the companies.

NASDAQ IXIC Index, 1994-2005

Source: Wikimedia.

The figure “NASDAQ IXIC Index, 1994-2005” shows the Nasdaq Composite Index (IXIC) from 1994 to 2005. During this time period, the IXIC experienced an incredible surge in value, peaking in 2000 before its subsequent decline. This was part of the so-called “dot-com bubble” of the late 1990s and early 2000s, when investors were optimistic about the potential for internet-based companies to revolutionize the global economy.

The IXIC rose from around 400 in 1994 to a record high of almost 5000 in March 2000. This was largely due to the rapid growth of tech companies such as Amazon and eBay, which attracted huge amounts of investment from venture capitalists. These investments drove up stock prices and created a huge market for initial public offerings (IPOs).

However, this rapid growth was not sustainable, and by the end of 2002 the IXIC had fallen back to around 1300. This was partly due to the bursting of the dot-com bubble, as investors began to realize that many of the companies they had invested in were unprofitable and overvalued. Many of these companies went bankrupt, leading to large losses for their investors.

Overall, the figure “NASDAQ IXIC Index, 1994-2005” illustrates the boom and bust cycle of the dot-com bubble, with investors experiencing both incredible gains and huge losses during this period. It serves as a stark reminder of the risks associated with investing in tech stocks: many of the internet-based companies that investors poured money into in the hope of huge returns were unprofitable, and their stock prices eventually plummeted, leading to large losses for investors and the bursting of the bubble.

In addition, this period serves as a reminder of the importance of proper risk management when it comes to investing. While it can be tempting to chase high returns, it is important to remember that investments can go up as well as down. By diversifying your portfolio and taking a long-term approach, you can reduce your risk profile and maximize your chances of achieving successful returns.

U.S. Subprime lending expanded dramatically 2004–2006.

Another example of market efficiency is the global financial crisis of 2008. During this time, the prices of many securities dropped dramatically as the market quickly priced in the risks associated with rising defaults and falling asset values. The market was able to quickly adjust to the new information and the prices of securities were quickly adjusted to reflect the new reality.

U.S. Subprime Lending Expanded Dramatically, 2004-2006

Source: US Census Bureau.

The figure “U.S. Subprime lending expanded dramatically 2004–2006” illustrates the extent to which subprime mortgage lending in the United States increased during this period. It shows a dramatic rise in the number of subprime mortgages issued from 2004 to 2006. In 2004, less than $500 billion worth of mortgages were issued that were either subprime or Alt-A loans. By 2006, that figure had risen to over $1 trillion, an increase of more than 100%.

This increase in the number of subprime mortgages being issued was largely driven by lenders taking advantage of relaxed standards and government policies that encouraged home ownership. Lenders began offering mortgages with lower down payments, looser credit checks, and higher loan-to-value ratios. This allowed more people to qualify for mortgages, even if they had poor credit or limited income.

At the same time, low interest rates and a strong economy made it easier for people to take on these loans and still be able to make their payments. As a result, many people took out larger mortgages than they could actually afford, leading to an unsustainable increase in housing prices and eventually a housing bubble.

When the bubble burst, millions of people found themselves unable to make their mortgage payments, and the global financial crisis followed. The dramatic increase in subprime lending seen in this figure is one of the primary factors that led to the 2008 financial crisis and is an illustration of how easily irresponsible lending can lead to devastating consequences.

Impact of FTX crash on FTT

Finally, the recent rise (and fall) of the cryptocurrency market is another example of market efficiency. The prices of cryptocurrencies have been highly volatile and have been quickly adjusted to reflect new information. This is due to the fact that the market is highly efficient and is able to quickly adjust to new information.

Price and Volume of FTT

Source: CoinDesk.

The figure “Price and Volume of FTT” shows the impact of the FTX exchange crash on the FTT token price and trading volume. The chart reflects the dramatic drop in FTT’s price and the extreme increase in trading volume in the days leading up to and following the crash. The FTT price began to decline rapidly several days before the crash, dropping from around $3.60 to around $2.20 in the hours leading up to it. Following the crash, the price of FTT fell even further, reaching a low of just under $1.50. This sharp drop can be seen clearly in the chart, which shows the steep downward trajectory of FTT’s price.
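From the price levels quoted above ($3.60 before the decline began, a low of about $1.50), the peak-to-trough drop can be computed directly:

```python
# Peak-to-trough drawdown, using the FTT price levels quoted in the text.

def drawdown(peak: float, trough: float) -> float:
    """Percentage loss from a peak price to a trough price (negative)."""
    return trough / peak - 1

print(f"{drawdown(3.60, 1.50):.1%}")  # -58.3%
```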

The chart also reveals an increase in trading volume prior to and following the crash. This is likely due to traders attempting to buy low and sell high in response to the crash. The trading volume increased dramatically, reaching a peak of almost 20 million FTT tokens traded within 24 hours of the crash. This is significantly higher than the usual daily trading volume of around 1 million FTT tokens.

Overall, this chart provides a clear visual representation of the dramatic impact that the FTX exchange crash had on the FTT token price and trading volume. It serves as a reminder of how quickly markets can move and how volatile they can be, even in seemingly stable assets like cryptocurrencies.

Today, the FTT token price has recovered somewhat since the crash, and currently stands at around $2.50. However, this is still significantly lower than it was prior to the crash. The trading volume of FTT is also much higher than it was before the crash, averaging around 10 million tokens traded per day. This suggests that investors are still wary of the FTT token, and that the market remains volatile.

Conclusion

Market efficiency is an important concept in economics and finance, based on the idea that prices accurately reflect all available information. There are three forms of market efficiency: weak, semi-strong, and strong. Numerous episodes, such as the dot-com bubble, the global financial crisis, and the recent rise and fall of the cryptocurrency market, suggest that markets are generally efficient and that it is difficult, if not impossible, to consistently outperform the market.

Related posts on the SimTrade blog

   ▶ All posts related to market efficiency

   ▶ William ANTHONY Peloton’s uphill battle with the world’s return to order

   ▶ Aamey MEHTA Market efficiency: the case study of Yes bank in India

   ▶ Aastha DAS Why is Apple’s new iPhone 14 release line failing in the first few months?

Useful resources

SimTrade course Market information

Academic research

Fama E. (1970) Efficient Capital Markets: A Review of Theory and Empirical Work, Journal of Finance, 25, 383-417.

Fama E. (1991) Efficient Capital Markets: II, Journal of Finance, 46, 1575-1617.

Grossman S.J. and J.E. Stiglitz (1980) On the Impossibility of Informationally Efficient Markets, The American Economic Review, 70, 393-408.

Business resources

CoinDesk These Four Key Charts Shed Light on the FTX Exchange’s Spectacular Collapse

Bloomberg Crypto Prices Fall Most in Two Weeks Amid FTT and Macro Risks

About the author

The article was written in January 2023 by Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024).


Special Purpose Acquisition Companies (SPAC)

Martin VAN DER BORGHT

In this article, Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024) presents Special Purpose Acquisition Companies (SPACs).

What are SPACs

Special purpose acquisition companies (SPACs) are an increasingly popular form of corporate finance for businesses seeking to go public. SPACs are publicly listed entities created with the objective of raising capital through their initial public offering (IPO) and then using that capital to acquire a private operating business. As the popularity of this financing method has grown, so have questions about how SPACs work, their potential risks and rewards, and their implications for investors. This essay will provide an overview of SPAC structures and describe key considerations for investors in evaluating these vehicles.

How are SPACs created

A special purpose acquisition company (SPAC) is created by sponsors who typically have a specific sector or industry focus; they use the proceeds from their IPO to acquire target companies within that focus area, without the usual due diligence associated with traditional IPOs. The target company is usually identified only after the IPO has taken place; shareholders then vote on whether or not they would like to invest in the acquisition target, along with other aspects such as management compensation packages.

The SPAC process

The process begins when the sponsors form a shell corporation that issues shares through investment banks’ underwriting services. These shares are offered in an IPO, which typically raises between $250 million and $500 million depending on market conditions at the time of launch. Sponsors can also raise additional funds through private placements before going public, and may even contribute cash from selling existing assets owned by the company’s founders prior to the IPO. This gives them more flexibility in choosing targets during the search process, and lets them transfer ownership of the acquired business faster than a traditional M&A process, since much of the work is completed before regulatory approval is sought. Once enough capital has been raised through the IPO and private placements, the sponsor team begins searching for suitable candidates using criteria determined ahead of time based on the desired sector or industry focus: size, revenue generated per quarter or year, the competitive edge of current products compared to competitors, and so on, all come into play when narrowing down the list of candidates whose acquisition could increase the long-term value of the original shareholders’ investments.

Advantages of SPACs

Unlike traditional IPOs, where companies must fully disclose financial information about past performance and future prospects in order to comply with regulations set forth by the Securities and Exchange Commission (SEC), there is far less regulation involved in investing in SPACs, because the purchase decision is made before the going-public stage: the SPAC only has to disclose details about the target once an agreement has been reached between both parties, though some do provide general information during the pre-IPO phase to give prospective buyers a better idea of what to expect once the deal goes through. This structure helps lower the cost of taking a business public, since much of the due diligence is done before the share offer is opened to investors, giving them access to higher-quality opportunities at a fraction of the price of those available on traditional stock exchanges. Additionally, because shareholder votes are taken into consideration at each step of the way, the risk of fraud is reduced, since any major irregularities discovered regarding the selected target become transparent, common knowledge to everyone voting on the proposed change (i.e., keeping board members accountable).

Disadvantages of SPACs

As attractive as this investment option might seem, there are still certain drawbacks to be aware of, such as the high cost of structuring and launching a successful campaign and the fact that most liquidation events occur within two years of the listing date, meaning a lot of money is spent upfront without any guarantee of returns on the back end. Another concern regards transparency: while disclosure requirements are much stricter than those for regular stocks, there is still a lack of full disclosure about the proposed acquisition until the deal is finalized, making it difficult to determine whether a particular venture is worth the risk taken on behalf of the investor. Lastly, many believe that merging different types of businesses could disrupt existing industries instead of just creating new ones, something worth considering when investing large sums of money in a particular enterprise.

Examples of SPACs

VPC Impact Acquisition (VPC)

This SPAC was formed in 2020 and is backed by Pershing Square Capital Management, a leading hedge fund. It had an initial funding of $250 million and made three acquisitions. The first acquisition was a majority stake in the outdoor apparel company, Moosejaw, for $280 million. This acquisition was considered a success as Moosejaw saw significant growth in its business after the acquisition, with its e-commerce sales growing over 50% year-over-year (Source: Business Insider). The second acquisition was a majority stake in the lifestyle brand, Hill City, for $170 million, which has also been successful as it has grown its e-commerce and omnichannel businesses (Source: Retail Dive). The third acquisition was a minority stake in Brandless, an e-commerce marketplace for everyday essentials, for $25 million, which was not successful and eventually shut down in 2020 after failing to gain traction in the market (Source: TechCrunch). In conclusion, VPC Impact Acquisition has been successful in two out of three of its acquisitions so far, demonstrating its ability to identify successful investments in the consumer and retail sector.

Social Capital Hedosophia Holdings Corp (IPOE)

This SPAC was formed in 2019 and is backed by Social Capital Hedosophia, a venture capital firm co-founded by famed investor Chamath Palihapitiya. It had an initial funding of $600 million and has made two acquisitions so far. The first acquisition was a majority stake in Virgin Galactic Holdings, Inc. for $800 million, which has been extremely successful as it has become a publicly traded space tourism company and continues to make progress towards its mission of accessible space travel (Source: Virgin Galactic). The second acquisition was a majority stake in Opendoor Technologies, Inc., an online real estate marketplace, for $4.8 billion, which has been successful as the company has seen strong growth in its business since the acquisition (Source: Bloomberg). In conclusion, Social Capital Hedosophia Holdings Corp has been incredibly successful in both of its acquisitions so far, demonstrating its ability to identify promising investments in the technology sector.

Landcadia Holdings II (LCA)

This SPAC was formed in 2020 and is backed by Landcadia Holdings II Inc., a blank check company formed by Jeffery Hildebrand and Tilman Fertitta. It had an initial funding of $300 million and made one acquisition, a majority stake in Waitr Holdings Inc., for $308 million. Unfortunately, this acquisition was not successful: Waitr filed for bankruptcy in 2020 due to an overleveraged balance sheet and a lack of operational improvements (Source: Reuters). Waitr had previously been a thriving food delivery company but failed to keep up with the rapid growth of competitors such as GrubHub and DoorDash (Source: CNBC). In conclusion, Landcadia Holdings II’s attempt at acquiring Waitr Holdings Inc. was unsuccessful due to market conditions outside of its control, demonstrating that even when a SPAC is backed by experienced investors and has adequate funding, there is still no guarantee of success.

Conclusion

Despite all these drawbacks, special purpose acquisition companies remain a viable option for entrepreneurs seeking to take advantage of the trend toward the digitalization of global markets who would otherwise lack access to the resources necessary to fund their projects themselves. By providing unique access to higher-caliber opportunities, this type of vehicle fills a gap left behind by many start-up ventures unable to compete against larger organizations given their limited financial capacity to operate self-sufficiently. For the reasons stated above, it is clear why SPACs continue to gain traction among investors and entrepreneurs alike looking to capitalize quickly on the changing economic environment we live in today.

Related posts on the SimTrade blog

   ▶ Daksh GARG Rise of SPAC investments as a medium of raising capital

Useful resources

U.S. Securities and Exchange Commission (SEC) Special Purpose Acquisition Companies

U.S. Securities and Exchange Commission (SEC) What are the differences in an IPO, a SPAC, and a direct listing?

U.S. Securities and Exchange Commission (SEC) What You Need to Know About SPACs – Updated Investor Bulletin

PwC Special purpose acquisition companies (SPACs)

Harvard Business Review SPACs: What You Need to Know


Bloomberg

Reuters

About the author

The article was written in January 2023 by Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024).


My experience as an intern in the Corporate Finance department at Maison Chanel

My experience as an intern in the Corporate Finance department at Maison Chanel

Martin VAN DER BORGHT

In this article, Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024) shares his professional experience as a Corporate Finance intern at Maison Chanel.

About the company

Chanel is a French company producing haute couture, as well as ready-to-wear, accessories, perfumes, and various luxury products. It originates from the fashion house created by Coco Chanel in 1910 but is the result of the takeover of Chanel by the company Les Parfums Chanel in 1954.

Chanel logo.
Source: Chanel.

In February 2021, the company opened a new building called le19M. This building was designed to bring together 11 Maisons d’art, the Maison ERES and a multidisciplinary gallery, la Galerie du 19M, under one roof. Six hundred artisans and experts are gathered in a building offering working conditions favorable to everyone’s wellbeing and to the development of new perspectives at the service of the biggest names in fashion and luxury.

My internship

From September 2021 to February 2022, I was an intern in the Corporate Finance and Internal Control department at Maison Chanel in Paris, France. I worked within Manufactures de Mode, a subsidiary of Chanel that supports the Maisons d’art and Manufactures de Mode located in the le19M building. My internship was articulated around three main missions.

My missions

My first mission was to develop and implement an internal control process worldwide in every entity belonging to the fashion division of Chanel. The idea was to design a single process that could be used in every entity, whatever its size, so that all of them follow the same one, improving efficiency during internal and external audits.

During the six months of my internship, we focused our work on a particular aspect of internal control called “segregation of duties” (SoD). The segregation of duties is the assignment of various steps in a process to different people. The intent behind doing so is to eliminate instances in which someone could engage in theft or other fraudulent activities by having an excessive amount of control over a process. In essence, the physical custody of an asset, the record keeping for it, and the authorization to acquire or dispose of the asset should be split among different people. We developed multiple procedures and matrices to allow the company to check whether its actual processes were at risk, with different levels of risk, and adjustments proper to each entity.

My second mission was to value each company in order to test goodwill for impairment in the Chanel SAS consolidation. We used a discounted cash flow (DCF) model to value every company and, based on the value determined, we tested the goodwill. Goodwill impairment is an earnings charge that companies record on their income statements after identifying persuasive evidence that the asset associated with the goodwill can no longer deliver the financial results that were expected from it at the time of its purchase.

Let me take an example. Imagine company X acquires company Y for $100,000 while company Y has a fair value of $60,000. In this situation, the goodwill is $40,000 (= 100,000 – 60,000). Now suppose that a year later the carrying amount of company Y’s net assets is $45,000 while their recoverable amount is $80,000. The carrying amount of the assets plus the goodwill ($85,000) is now higher than the recoverable amount ($80,000), so we have to impair the goodwill by $5,000 (= 85,000 – 80,000) to account for this decrease in value. As the company was acquired at a price higher than its fair value, it is the goodwill that absorbs such a loss.
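The impairment test in this example can be sketched in a few lines of Python. This is a simplified one-step test using the figures above, not Chanel's actual methodology:

```python
def goodwill_impairment(carrying_amount, goodwill, recoverable_amount):
    """Impairment loss under a simple one-step test: the excess of the
    carrying amount (net assets plus goodwill) over the recoverable amount,
    floored at zero when no impairment is needed."""
    total_carrying = carrying_amount + goodwill
    return max(total_carrying - recoverable_amount, 0.0)

# Figures from the example above (acquisition of Y by X)
goodwill = 100_000 - 60_000                    # price paid minus fair value = 40,000
loss = goodwill_impairment(45_000, goodwill, 80_000)
print(loss)  # 5000.0
```

If the recoverable amount had instead been above the total carrying amount, the function would return 0.0 and no charge would be recorded.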

My last mission was a day-to-day exercise by which I had to assist and support each entity in its duties towards Chanel SAS. It could be anything related to finance or accounting (reporting, valuation, post-acquisition integration, etc.), and sometimes not even related to finance but to the development of these companies (IT, audits, etc.). This last mission allowed me to travel and visit multiple Maisons d’art and Manufactures de Mode to help prepare internal and external audits.

Required skills and knowledge

The main requirements for this internship were to be at ease with accounting and financial principles (reporting, consolidation, fiscal integration, valuation, etc.), to be able to communicate with a wide range of employees in writing and orally, and to be perfectly fluent in English, as entities are located everywhere.

What I learned

This internship was a great opportunity to learn because it required a complete skillset to work simultaneously on internal control, financial, accounting, and audit aspects. It gave me the opportunity to meet a huge number of interesting and knowledgeable people, to travel, to learn more about the fashion and luxury industry at every stage of the creation process, and to discover what it is like to work in a large company operating on a worldwide scale.

Three concepts I applied during my journey

Discounted cash flow (DCF)

Discounted cash flow (DCF) analysis is a valuation method used to estimate the value of an asset or business. It does this by discounting all future cash flows associated with the asset or business back to the present time, so that they have a consistent value in today’s terms. DCF analysis is one of the most commonly used methods for valuing a business and its assets, as it takes into account both current and expected future earnings potential.

The purpose of using DCF analysis is to determine an accurate value for an asset or company in order to make informed investment decisions. The method takes into account all expected future cash flows from operating activities, such as sales, expenses, taxes and dividends paid out over time, when calculating intrinsic worth. This allows investors to evaluate how much they should pay for an investment today compared to what it could be worth in the future due to appreciation or other factors affecting its price.

The process involves estimating free cash flow (FCF), that is, net income plus non-cash items such as depreciation and amortization, minus the capital expenditures required for operations, and then discounting it back at a rate determined by market conditions such as the risk level and the interest rates available on similar investments. The result combines the present value (PV) of the explicit forecast period with a terminal value (TV), which captures the return expected beyond the forecast horizon based on growth assumptions for the remaining years.

Since DCF relies only on anticipated figures based on prior research and financial data, it has limitations when used to determine fair market value: unexpected events may occur between now and the end of the forecast period, pushing prices above the estimated figures and creating higher returns than forecast, while unforeseen economic downturns can push prices below projections and produce lower returns than assumed. Therefore, while discounted cash flow estimates are helpful tools for making more informed decisions about buying or selling specific assets or companies, investors should also conduct additional due diligence rather than rely solely on these calculations before making a final decision on the opportunities being evaluated.
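To make the mechanics concrete, here is a minimal DCF sketch: explicit free cash flows are discounted year by year, and a Gordon-growth terminal value captures the years beyond the forecast. All inputs are illustrative, not a real valuation:

```python
def dcf_value(fcfs, discount_rate, terminal_growth):
    """Present value of explicit free cash flows plus a Gordon-growth
    terminal value discounted back from the end of the forecast period."""
    pv_explicit = sum(fcf / (1 + discount_rate) ** t
                      for t, fcf in enumerate(fcfs, start=1))
    terminal = fcfs[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv_terminal = terminal / (1 + discount_rate) ** len(fcfs)
    return pv_explicit + pv_terminal

# Illustrative figures: 5 years of FCF, 10% discount rate, 2% perpetual growth
value = dcf_value([100, 110, 120, 130, 140], 0.10, 0.02)
print(round(value, 1))
```

Note how sensitive the result is to the last two parameters: small changes in the discount rate or terminal growth move the valuation substantially, which is the input-sensitivity problem mentioned above.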

Goodwill impairment

Goodwill impairment analysis is used to determine the current market value of a company’s intangible assets. It is usually performed when a company has acquired or merged with another entity, but it can also be done in other situations, such as when the fair value of the reporting unit decreases significantly due to market conditions or internal factors. The purpose of goodwill impairment analysis is to ensure that a company’s financial statements accurately reflect its financial position by recognizing any potential losses in intangible asset values associated with poor performance.

When conducting goodwill impairment analysis, companies must first calculate their total identifiable assets and liabilities at fair value less costs of disposal (FVLCD). This includes both tangible and intangible assets such as trademarks, patents, and customer relationships. Next, they subtract FVLCD from the acquisition price of the target entity to calculate goodwill. Goodwill represents any excess paid for an acquiree above its fair market value that cannot be attributed directly to specific tangible or intangible assets on its balance sheet. If this calculated goodwill amount is greater than zero, it must be tested for potential impairment losses over time.

The most common method for testing goodwill impairment compares the implied fair value of each reporting unit’s net identifiable asset base (both tangible and intangible components) against its carrying amount on the balance sheet at that moment. Companies may use a discounted cash flow model or their own proprietary valuation techniques for this comparison, considering expected future cash flows from operations in each reporting unit affected by prior-year acquisitions, along with other inputs such as industry trends and macroeconomic factors where applicable. If the evidence suggests lower overall returns than originally anticipated, it may indicate an impaired asset requiring additional accounting adjustments.


In summary, goodwill impairment analysis plays an important role in ensuring that accurate accounting practices are followed, so that financial statements reflect current values rather than historic acquisition prices that may no longer represent present-day realities. By taking all relevant information into consideration during these tests, businesses can identify potential issues early and make the necessary adjustments without too much negative impact on downstream operations.

Segregation of duties (SoD)

Segregation of duties (SoD) is an important part of any company’s internal control system. It involves the separation and assignment of different tasks to different people within a business, in order to reduce the risk that one person has too much power over critical functions. This segregation helps to ensure accuracy, integrity, and security in all areas.

Segregation of duties can be broken down into two main components: functional segregation and administrative segregation. Functional segregation involves assigning specific responsibilities or tasks to individuals with expertise or knowledge in that area while administrative segregation focuses on preventing an individual from having too much authority over a process or task by dividing those responsibilities among multiple people.

The purpose of segregating duties is to limit the risks associated with fraud, errors due to lack of proper supervision, mismanagement, waste, misuse of resources, and other criminal activities that could cause losses for the business. Segregation also ensures accountability for everyone’s actions by making sure no single employee has access to or control over more than one critical function at any given time, thereby reducing opportunities for mismanagement and manipulation without proper oversight from management. Additionally, it allows businesses to better manage their internal processes by providing checks and balances between departments, promoting coordination that is beneficial for complex procedures such as budgeting cycles or payroll processing.

In conclusion, segregating duties helps businesses reduce risks related not only to fraud but also to mismanagement, waste, misuse, and other criminal activities that may lead to losses, and it creates transparency and accountability within departments so that they can coordinate properly and execute operations efficiently. It is therefore an essential component that businesses should consider implementing in their internal control systems if they wish to ensure their financial stability in the long run.
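As a toy illustration of how an SoD check of this kind can be automated, the sketch below flags employees assigned both halves of a conflicting duty pair. The conflict matrix and duty names are hypothetical, not Chanel's actual procedures:

```python
# Hypothetical conflict matrix: pairs of duties one person should not combine
CONFLICTS = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"record_asset", "authorize_disposal"}),
    frozenset({"custody_cash", "reconcile_bank"}),
}

def sod_violations(assignments):
    """assignments: dict mapping employee -> set of assigned duties.
    Returns a list of (employee, conflicting_pair) tuples."""
    out = []
    for person, duties in assignments.items():
        for pair in CONFLICTS:
            if pair <= duties:  # both duties of a conflicting pair assigned
                out.append((person, tuple(sorted(pair))))
    return out

team = {"alice": {"create_vendor", "approve_payment"},
        "bob": {"record_asset"}}
print(sod_violations(team))  # alice combines a conflicting pair
```

In practice such matrices are maintained per entity and per process, with risk levels attached to each conflict rather than a simple pass/fail flag.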

Useful resources

Maison Chanel

le19M

Related posts on the SimTrade blog

   ▶ All posts about professional experience in the SimTrade blog

   ▶ Emma LAFARGUE Mon expérience en contrôle de gestion chez Chanel

   ▶ Marie POFF Film analysis: Rogue Trader

   ▶ Louis DETALLE The incredible story of Nick Leeson & the Barings Bank

   ▶ Maite CARNICERO MARTINEZ How to compute the net present value of an investment in Excel

   ▶ William LONGIN How to compute the present value of an asset?

About the author

The article was written in January 2023 by Martin VAN DER BORGHT (ESSEC Business School, Master in Finance, 2022-2024).


My experience as a Risk Advisory Analyst in Deloitte

My experience as a Risk Advisory Analyst in Deloitte

Nithisha CHALLA

In this article, Nithisha CHALLA (ESSEC Business School, Grande Ecole – Master in Management, 2021-2023) shares her experience as a Risk Advisory Analyst in Deloitte.

About the company

Deloitte is one of the Big Four accounting firms along with EY (Ernst & Young), KPMG, and PricewaterhouseCoopers (PwC). It is the largest professional services network (with teams in different countries working together) by number of professionals and revenue in the world, headquartered in London, England. The firm was founded by William Welch Deloitte in London in 1845 and expanded into the United States in 1890. Deloitte provides audit, consulting, financial advisory, risk advisory, tax, and legal services with approximately 415,000 professionals globally. In fiscal year 2021, the network earned an aggregate revenue of US$50.2 billion. Additionally, a few of Deloitte’s largest customers as of 2021 include Morgan Stanley, The Blackstone Group, and Berkshire Hathaway.

Logo of Deloitte.
Source: Deloitte.

As a risk advisory analyst, I had the opportunity to read many of the surveys that Deloitte conducts on an annual basis to assess work ethics, strategy and their influence on a particular business line. These surveys also provide an overview of the global standing of the relevant business sector. The 11th edition of the Global Risk Management Survey states that, despite the relatively stable global economy, risk management currently faces numerous significant impending risks that will force financial service institutions to reconsider their traditional methods. The firm also maintains that risk management must be integrated into strategy, so that the institution’s risk appetite and risk utilization are important factors to consider.

My experience as a Financial Risk Advisory Analyst at Deloitte

My hands-on experience with risk management and its applications kick-started with my first profile in the Anti-Money Laundering division after graduation as a Financial Risk Advisory Analyst at Deloitte USI (Deloitte USI is a division of Deloitte US that serves customers of the US member firm and is physically located in India.). In this project, I worked for an international bank to audit and assess the company’s customer risk.

My responsibility at work

As an employee in the Risk Advisory department at Deloitte, I provided a host of advanced services to an international bank. I conducted Enhanced Due Diligence for the client’s high-risk and high-net-worth customers through sources of origin and transactions that exhibited irregular behavior. A large part of my work was to minimize or optimize risk; to maintain the highest standard of financial understanding, I undertook regular risk assessments. The nature of my tasks gave me close familiarity with numerous domains, highlighting clients’ involvement in economically sensitive industries and geographies all over the world.

The work involved holistic net-worth assessment for high-profile customers in accordance with their diversified financial portfolios. The team starts by researching the client and using public records to confirm any criminal history. The team then determines the customer’s net worth by conducting a thorough analysis of the client’s varied sources of income, including a family trust, an inheritance, self-employment, and stock investments. Additionally, the team examines the transactions to look for any potential signs of money laundering.

The whole process is carried out in accordance with the three stages:

  • Placement
  • Layering
  • Integration

The first step in money laundering is depositing illegal funds in financial institutions to make them appear legitimate. This entails splitting up larger sums of money into smaller, less noticeable amounts, transporting cash across borders to deposit the money in foreign banks, or purchasing pricey items like fine art, antiques, gold, etc. Once the money has entered the financial system, it is moved around, or layered, from one place to another in an effort to conceal criminal activity.

For instance, buying an antique item with the money and selling it later to fund the establishment of a holding company or non-financial trust. These financial entities, which are typically corporations or limited liability companies (LLCs), hold the controlling stake in their subsidiary companies and, as a result, oversee the management of child companies without getting directly involved in their day-to-day management.

Another example would be locating the holding company in a region with a low tax rate. These controlling companies are simple to establish and can significantly reduce the tax burden of the entire corporation. If a child company declares bankruptcy, the holding company, which may hold additional child companies or portions of child companies, is shielded from the loan creditors. Once the money appears legitimate, it is integrated into the system to generate profit. At this stage, identifying black money is very difficult for the banking system.
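Transaction-monitoring systems try to catch the placement pattern described above, where large sums are split into smaller, less noticeable deposits. A toy sketch of such a rule follows; the threshold, window and data are hypothetical, not any bank's actual rules:

```python
from collections import defaultdict

THRESHOLD = 10_000   # illustrative reporting threshold

def flag_structuring(deposits, window_days=7):
    """deposits: list of (customer, day, amount) tuples.
    Flag customers whose individual deposits all stay under the threshold
    but whose total within a rolling window exceeds it."""
    by_customer = defaultdict(list)
    for cust, day, amount in deposits:
        by_customer[cust].append((day, amount))
    flagged = set()
    for cust, txs in by_customer.items():
        txs.sort()
        for start_day, _ in txs:
            window = [a for d, a in txs if start_day <= d < start_day + window_days]
            if max(window) < THRESHOLD and sum(window) > THRESHOLD:
                flagged.add(cust)
    return flagged

deposits = [("c1", 1, 4_000), ("c1", 2, 4_000), ("c1", 3, 4_000),
            ("c2", 1, 3_000)]
print(flag_structuring(deposits))  # {'c1'}
```

Real monitoring combines many such rules with customer risk ratings; a flag here only triggers a review, not an accusation.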

My missions

My job has broadened the scope of my leadership abilities, and I have led a group of five analysts for a quality check to ensure that projects with strict deadlines are completed on time and to the standard of quality that clients have come to expect from the company. I’ve received several spot awards during my time at Deloitte for my willingness and capacity to go above and beyond.

By establishing a scope to coordinate with on-site teams and executives across geographies, I have gained significant international exposure in the comparatively brief time I have spent at Deloitte. Additionally, I’ve had a profound introduction to the procedures that enable experts to identify elements that pose risks to the regular functioning of enterprises, and thus eliminate and streamline the same.

What I have gained from the job

The following points are a brief sum-up of what I learned through my full-time role in the project:

Tax obligations in various jurisdictions

The tax owed by a company is calculated based on its base location, irrespective of how money flows into the company.

Different financial entities

The functioning, policies, and structure differ across entities such as LLCs, LLPs, holding companies, and non-financial trusts.

Beneficial Ownership

One company can have multiple forms of ownership, such as joint ownership, proprietorship, or partnership, and I learned how beneficial ownership is determined in such a complex model.

Required skills and knowledge

The hard skills I needed when I first started working, for example to make presentations or scatterplots, included knowledge of money laundering, the Microsoft Office suite and Excel. Since the projects associated with these business lines are typically enormous, solid soft skills make them easier to manage. Good soft skills, compliance, teamwork, and cooperation are necessary on an individual level.

Key concepts

I developed below key concepts that I use during my job.

Know your customer (KYC)

Know Your Customer (KYC) can also refer to Know Your Client. Financial institutions are protected by Know Your Customer (KYC) regulations from fraud, corruption, money laundering, and financing of terrorism. When opening an account and on an ongoing basis, KYC checks are required to identify and confirm the client’s identity. In other words, banks need to confirm that their customers are actually who they say they are.

Due Diligence

It refers to the procedures employed by financial organizations to gather and assess pertinent data about a customer. It seeks to identify any potential risk to the financial institution of doing business with them. The procedure entails assessing public data sources, such as firm listings, private data sources from third parties, or government sanction lists. Meeting Know Your Customer (KYC) standards, which differ from country to country, involves conducting extensive customer due diligence.

Anti-Money Laundering (AML)

The network of rules and norms known as anti-money laundering (AML) aims to expose attempts to pass off illegal money as legitimate income. Money laundering aims to cover up offenses like minor drug sales and tax evasion as well as public corruption and funding of terrorist organizations. AML initiatives seek to make it more difficult to conceal the proceeds of crime. Financial institutions need rules to create regulated customer due diligence plans to evaluate money laundering risks and identify questionable transactions.

Why should I be interested in this post?

I believe that this post’s description of anti-money laundering, a significant business sector of Risk and Financial Advisory, might be very helpful to those interested in pursuing professions in finance. It will help them bridge the gap between real life work experience and theoretical knowledge. My understanding is that this article also provides a quick overview of the auditing and RFA (risk and financial advisory) work environments at Deloitte, one of the Big Four organizations.

Useful resources

Deloitte

Related posts on the SimTrade blog

   ▶ All posts about professional experience

   ▶ Basma ISSADIK My experience as an M&A/TS intern at Deloitte

   ▶ Anant JAIN My internship experience at Deloitte

   ▶ Pierre-Alain THIAM My experience as a junior audit consultant at KPMG

About the author

The article was written in January 2023 by Nithisha CHALLA (ESSEC Business School, Grande Ecole – Master in Management, 2021-2023).


Categories of risk measures

Categories of risk measures

Shengyu ZHENG

In this article, Shengyu ZHENG (ESSEC Business School, Grande Ecole – Master in Management, 2020-2023) presents the categories of risk measures commonly used.

Depending on the type of asset and the objective of risk management, risk measures from different categories are used. There are three approaches: statistical distribution, sensitivities, and scenarios. In general, methods from the different categories are employed and combined, forming a risk management system that serves different levels of managerial needs.

Approach based on the statistical distribution

Modern risk measures are statistical properties describing the loss distribution, either conditional or unconditional, over a predetermined time horizon.

The concept of the loss distribution focuses on the loss, which is the key object of risk management. Through this concept, aggregation and the effects of diversification and offsetting can be captured, and the comparison of risks across portfolios also becomes possible.

These measures fall mainly into two types, global and local. Global measures (variance, beta) account for the entire distribution. Local measures (Value-at-Risk, Expected Shortfall, Stress Value) focus on the tails of the distribution, notably the tail where the losses lie.

This approach is not perfect, however. A single statistical indicator is generally not sufficient to describe all the risks present in a position or portfolio. The computation of statistical properties and the estimation of parameters are based on past data, while the financial market never stops evolving. Even if the distribution remains unchanged over time, estimating it precisely is not easy and parametric assumptions are not always reliable.
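The local measures named above, Value-at-Risk and Expected Shortfall, can be estimated directly from a sample of losses. A minimal empirical sketch in Python; the simulated data and confidence level are illustrative only:

```python
import numpy as np

def var_es(losses, alpha=0.99):
    """Empirical Value-at-Risk and Expected Shortfall of a loss sample
    (losses counted as positive numbers)."""
    losses = np.sort(np.asarray(losses))
    var = np.quantile(losses, alpha)          # loss exceeded with prob. 1 - alpha
    es = losses[losses >= var].mean()         # average loss beyond the VaR
    return var, es

rng = np.random.default_rng(0)
losses = rng.normal(0, 1, 100_000)            # simulated standardized daily losses
var, es = var_es(losses, 0.99)
print(round(var, 2), round(es, 2))            # close to 2.33 and 2.67 for N(0,1)
```

Note that ES is always at least as large as VaR at the same level, since it averages the tail beyond the VaR; this is one reason it is preferred as a coherent tail measure.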

Approach based on sensitivities

This approach evaluates the impact of a variation of a risk factor on the value or return of the portfolio. Factor measures, such as duration and convexity for bonds and the Greeks for derivatives, belong to this category.

These measures offer a direct view of the fluctuations in an asset’s value caused by current uncertainty, which is informative about the asset’s robustness to day-to-day market risks.

They also have limits, notably in terms of risk aggregation.
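As an illustration of the sensitivity approach, the duration and convexity of a bond can be approximated numerically by bumping the yield. A sketch under simplifying assumptions (annual coupons, flat yield curve):

```python
def bond_price(face, coupon_rate, ytm, n):
    """Price of a bond paying annual coupons for n years plus face at maturity."""
    cfs = [face * coupon_rate] * n
    cfs[-1] += face
    return sum(cf / (1 + ytm) ** t for t, cf in enumerate(cfs, 1))

def duration_convexity(face, coupon_rate, ytm, n, dy=1e-4):
    """Modified duration and convexity estimated by central finite differences."""
    p0 = bond_price(face, coupon_rate, ytm, n)
    p_up = bond_price(face, coupon_rate, ytm + dy, n)
    p_dn = bond_price(face, coupon_rate, ytm - dy, n)
    duration = (p_dn - p_up) / (2 * p0 * dy)
    convexity = (p_up + p_dn - 2 * p0) / (p0 * dy ** 2)
    return duration, convexity

# 10-year 5% par bond: price 100, modified duration around 7.7
d, c = duration_convexity(100, 0.05, 0.05, 10)
print(round(d, 2), round(c, 2))
```

The first-order price change for a small yield move dy is then approximately -duration * dy, with the convexity term correcting for larger moves.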

Approach based on scenarios

This approach considers the maximum loss across all scenarios generated under conditions of major market moves. The shocks may be, for example, a 10% rise in an interest rate or a currency, combined with a 20% fall in major equity indices.

A stress test is a device often put in place by central banks to ensure the solvency of major market participants and the stability of the financial market. A stress test is an exercise that consists of simulating extreme but plausible economic and financial conditions, in order to study the main consequences for financial institutions in particular (for example, banks or insurers) and to quantify their capacity of resistance.
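A scenario loss of the kind described above can be sketched as a sum of factor exposures times shocked moves. All figures below are hypothetical:

```python
def stress_pnl(positions, scenario):
    """positions: dict factor -> exposure (P&L per 100% move of the factor).
    scenario: dict factor -> shocked relative move (e.g. -0.20 for -20%).
    Factors absent from the scenario are assumed unshocked."""
    return sum(exposure * scenario.get(factor, 0.0)
               for factor, exposure in positions.items())

# Illustrative portfolio sensitivities and a shock like the one described above
positions = {"equity_index": 1_000_000, "fx_eurusd": 500_000}
scenario = {"equity_index": -0.20, "fx_eurusd": 0.10}
print(stress_pnl(positions, scenario))  # -150000.0
```

A full stress-testing framework would evaluate many such scenarios (historical and hypothetical) and report the worst loss, but the linear aggregation shown here is the basic building block.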

Resources

Alexander J. McNeil, Rüdiger Frey, Paul Embrechts (2015) Quantitative Risk Management: Concepts, Techniques and Tools – Revised Edition, Princeton University Press

About the author

This article was written in January 2023 by Shengyu ZHENG (ESSEC Business School, Grande Ecole – Master in Management, 2020-2023).


Hedge fund diversification

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the notion of hedge fund diversification by analysing the paper “Hedge fund diversification: how much is enough?” from Lhabitant and Learned (2002).

This article is organised as follows: we describe the primary characteristics of the research paper. Then, we highlight the research paper’s most important points. This essay concludes with a discussion of the principal findings.

Introduction

The paper discusses the advantages of investing in a multi-strategy hedge fund. It is a relevant subject in the field of alternative investments since it has attracted the interest of institutional investors seeking to uncover the alternative investment universe and increase their portfolio’s return. The paper’s primary objective is to determine the appropriate number of funds (hedge funds) that a manager should combine in their portfolio to maximise returns. The purpose of this paper is to examine the impact of adding hedge funds to a portfolio and its effect on the various statistical indicators (return, volatility, skewness, kurtosis). The authors produce basic portfolios (randomly chosen and equally weighted portfolios). The purpose is to evaluate the diversification advantage and the dynamics of the diversification effect on hedge funds.

Key elements of the paper

The pioneering work of Harry Markowitz (1952) depicted the effect of diversification by analysing the number of assets in a portfolio and their impact in terms of risk and return. Since unsystematic risk (specific risk) can be neutralised, investors will not receive an additional return for bearing it. Systematic risk (market risk) is the component that the market rewards. Diversification is thus at the heart of the modern portfolio theory asset allocation framework. The academic literature has since delved deeper into the analysis of the optimal number of assets to hold in a well-balanced portfolio. Some notable contributions are worth mentioning:

  • Elton and Gruber (1977), Evans and Archer (1968), Tole (1982) and Statman (1987), among others, delved deeper into the optimal number of assets to hold to build the best risk-return portfolio. No definitive answer on the optimal number has emerged in the academic literature.
  • Evans and Archer (1968) showed that the best results are achieved with 8-10 assets, while raising doubts about portfolios holding more assets than this threshold. Statman (1987) concluded that at least thirty to forty stocks should be included in a portfolio to achieve portfolio diversification.

The authors also mention the concept of naïve diversification (also known as the "1/N heuristic"), an allocation strategy where the investor splits the overall funds available equally across the assets in the portfolio. Naïve diversification seeks to spread asset risk evenly in the portfolio to reduce overall risk. However, the authors mention important considerations regarding both naïve diversification and Markowitz optimisation:

  • Drawback of naïve diversification: since it does not account for the correlation between assets, the allocation yields a sub-optimal result and diversification is not fully achieved. In practice, naïve diversification can result in portfolio allocations that do not lie on the efficient frontier. On the other hand, mean-variance optimisation, the framework underlying Modern Portfolio Theory, is subject to the sensitivity of the input parameters used in the optimisation process. On a side note, it is worth mentioning that naïve diversification is a good starting point, better than gut feeling: it simplifies the allocation process while still providing some degree of risk diversification.
  • Non-normality of the distribution of returns: hedge funds exhibit non-normal returns (fat tails and skewness). These higher statistical moments are important for investors' allocation decisions but are disregarded in a mean-variance framework.
  • Econometric difficulties arising from hedge fund data in an optimiser framework: mean-variance optimisers tend to treat historical returns, risks and covariances as an acceptable basis for assessing future portfolio performance. Applied to the construction of a hedge fund portfolio, it becomes even more difficult to derive the expected return, correlation and standard deviation for each fund, since data is scarcer and more difficult to obtain. Add to that the instability of hedge fund returns and the non-linearity of some strategies, which complicate the evaluation of a hedge fund portfolio.
  • Operational risk arising from fund selection and the implementation of constraints in optimiser software: since some parameters are qualitative (e.g., lock-up period, minimum investment period), optimiser tools find it hard to incorporate these types of constraints in the model.
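The diversification mechanics discussed above can be illustrated with a short sketch. Assuming, for illustration only, that each fund has the same volatility and that every pair of funds shares a constant correlation (the parameters below are arbitrary, not taken from the paper), the volatility of a 1/N equally weighted portfolio falls quickly at first and then plateaus:

```python
import math

def equal_weight_vol(n_funds, sigma=0.10, rho=0.3):
    """Volatility of a 1/N (naively diversified) portfolio of n_funds,
    assuming each fund has volatility sigma and a constant pairwise
    correlation rho between funds (illustrative assumptions)."""
    # Portfolio variance: diversifiable part (vanishes as N grows)
    # plus the undiversifiable correlated part.
    variance = sigma**2 / n_funds + (1 - 1 / n_funds) * rho * sigma**2
    return math.sqrt(variance)

# The marginal benefit of adding one more fund shrinks quickly:
for n in (1, 5, 10, 20, 40):
    print(n, round(equal_weight_vol(n), 4))
```

With these illustrative numbers, going from 1 to 10 funds removes most of the diversifiable risk, while going from 10 to 40 funds barely moves portfolio volatility, in line with the paper's finding that most diversification benefits arrive early.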

Conclusion

Due to entry restrictions, data scarcity, and a lack of meaningful benchmarks, hedge fund investing is difficult. The paper analyses in greater depth the optimal number of hedge funds to include in a diversified portfolio. According to the authors, adding funds naively to a portfolio tends to lower overall standard deviation and downside risk. In this context, diversification should be pursued as long as the marginal benefit of adding a new fund to the portfolio exceeds its marginal cost.

The authors reiterate that investors should not invest "naively" in hedge funds due to their inherent risk. The impact of naive diversification on the portfolio's skewness, kurtosis, and overall correlation structure can be significant. Hedge fund portfolios should account for this complexity and examine the effect of adding a hedge fund to a well-balanced portfolio, taking into account higher statistical moments to capture the allocation's impact on portfolio construction. Naive diversification is also subject to selection bias. In the 1990s, the most appealing hedge fund strategy was global macro, while the long/short equity strategy gained popularity in the late 1990s. This implies that allocations would be tilted towards these two strategies overall.

The answer to the question in the title of the research paper? Hedge fund portfolios should hold between 15 and 40 underlying funds, while most diversification benefits are already achieved with 5 to 10 hedge funds in the portfolio.

Why should I be interested in this post?

The purpose of portfolio management is to maximise returns on the entire portfolio, not just on one or two stocks. By monitoring and maintaining your investment portfolio, you can accumulate a substantial amount of wealth for a range of financial goals, such as retirement planning. This article facilitates comprehension of the fundamentals underlying portfolio construction and investing. Understanding the risk/return profiles, trading strategy, and how to incorporate hedge fund strategies into a diversified portfolio can be of great interest to investors.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Equity market neutral strategy

   ▶ Youssef LOURAOUI Fixed income arbitrage strategy

   ▶ Youssef LOURAOUI Global macro strategy

   ▶ Youssef LOURAOUI Long/short equity strategy

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Elton, E., and M. Gruber (1977). “Risk Reduction and Portfolio Size: An Analytical Solution.” Journal of Business, 50. pp. 415-437.

Evans, J.L., and S.H. Archer (1968). “Diversification and the Reduction of Dispersion: An Empirical Analysis”. Journal of Finance, 23. pp. 761-767.

Lhabitant, F.S., and M. Learned (2002). "Hedge fund diversification: how much is enough?" Journal of Alternative Investments, pp. 23-49.

Markowitz, H.M (1952). “Portfolio Selection.” The Journal of Finance, 7, pp. 77-91.

Statman, M. (1987). "How many stocks make a diversified portfolio?", Journal of Financial and Quantitative Analysis, pp. 353-363.

Tole T. (1982). “You can’t diversify without diversifying”, Journal of Portfolio Management, 8, pp. 5-11.

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).


Managed futures strategy

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the managed futures/CTAs strategy. The objective of the managed futures strategy is to look for market moving trends across different asset markets.

This article is structured as follows: we introduce the managed futures strategy principle. Then, we present the different types of managed futures strategies. We conclude with a performance analysis of this strategy in comparison with global benchmarks (the MSCI All World Index and the Credit Suisse Hedge Fund Index).

Introduction

According to Credit Suisse, a managed futures strategy can be defined as follows: "Managed Futures funds (often referred to as CTAs or Commodity Trading Advisors) focus on investing in listed bond, equity, commodity futures and currency markets, globally. Managers tend to employ systematic trading programs that largely rely upon historical price data and market trends. A significant amount of leverage is employed since the strategy involves the use of futures contracts. CTAs do not have a particular bias towards being net long or net short any particular market."

Managed futures strategies make money based on the following elements:

  • Trend following strategy: the trading rationale is that trending markets tend to keep moving in the same direction
  • Combination of long-term and short-term indicators
  • Diversification across different markets: at least one market should be trending at any given time
  • Leverage: the majority of managed futures funds are leveraged in order to get increased exposure to a certain market

The methodology for the managed futures strategy can be described as follows:

  • Identify appropriate markets: concentrate on the markets that are of interest for this style of trading strategy
  • Identify technical indicators: understand the key technical indicators used to assess whether the market is trading on a trend
  • Backtesting: test the indicators retained for the strategy on the chosen market using historical data and assess the profitability of the strategy over a sample period. The important point to mention is that the results can be prone to errors: they may be overfitted to historical data and may not deliver, going forward, the returns computed historically
  • Execute the strategy out of sample: this step is important to check whether the in-sample backtesting results are similar to the out-of-sample results
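The indicator and backtesting steps above can be sketched with a minimal trend-following rule: a moving-average crossover tested on a synthetic price series. The window lengths (5 and 20 observations) and the price series are arbitrary choices for the example, not a recommended trading program.

```python
def moving_average(prices, window):
    """Simple moving average using up to the last `window` observations."""
    return [sum(prices[max(0, i - window + 1): i + 1]) / min(i + 1, window)
            for i in range(len(prices))]

def ma_crossover_positions(prices, short_window=5, long_window=20):
    """Trend-following signal: long (+1) when the short-term average is
    above the long-term average, short (-1) otherwise."""
    short_ma = moving_average(prices, short_window)
    long_ma = moving_average(prices, long_window)
    return [1 if s > l else -1 for s, l in zip(short_ma, long_ma)]

def backtest(prices, positions):
    """In-sample P&L: hold yesterday's position over today's price change.
    Out-of-sample execution would repeat this on unseen data."""
    pnl = 0.0
    for t in range(1, len(prices)):
        pnl += positions[t - 1] * (prices[t] - prices[t - 1])
    return pnl

# Synthetic up-trending market: the rule turns long and rides the trend.
uptrend = [100 + t for t in range(60)]
print(round(backtest(uptrend, ma_crossover_positions(uptrend)), 2))
```

On this synthetic series the rule captures most of the move; the out-of-sample step then checks whether such an in-sample profit survives on data the rule has not been fitted to.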

This strategy makes money by investing in trending markets. It can potentially generate returns in both rising and falling markets. However, understanding the market in which this strategy is employed, coupled with a deep understanding of the key drivers behind the trending patterns and a rigorous quantitative approach to trading, is of key concern, since this is what makes the strategy profitable (or not!).

Types of managed futures strategies

Managed futures may contain varying percentages of equity and derivative investments. In general, a diversified managed futures account will have exposure to multiple markets, including commodities, energy, agriculture, and currencies. The majority of managed futures accounts will have a trading programme that explains their market strategy. The market-neutral and trend-following strategies are the two main methods.

Market neutral strategy

Market-neutral methods look to profit from mispricing-induced spreads and arbitrage opportunities. Investors that utilise this strategy usually attempt to limit market risk by taking long and short positions in the same industry to profit from both price increases and decreases.

Trend following strategy

Trend following strategies seek to generate profits by trading long or short based on fundamental and/or technical market indicators. When the price of an asset is falling, trend traders may decide to enter a short position on that asset. When an asset is going in the other direction, trend traders may enter a long position. The objective is to collect gains by examining multiple indicators, deciding an asset’s direction, and then executing the appropriate trade.

Investors interested in managed futures can request disclosure documents outlining the trading strategy, annualised rate of return, and other performance metrics.

Performance of the managed futures strategy

Overall, the performance of the managed futures strategy has been uncorrelated with equity returns, but volatile in nature (Credit Suisse, 2022). To capture the performance of the managed futures strategy, we use the Credit Suisse hedge fund strategy index. To establish a comparison between the performance of the global equity market and the managed futures strategy, we examine the rebased performance of the Credit Suisse Managed Futures Index with respect to the MSCI All-World Index. Over the period from 2002 to 2022, the managed futures strategy index generated an annualised return of 3.98% with an annualised volatility of 10.40%, leading to a Sharpe ratio of 0.077. Over the same period, the Credit Suisse Hedge Fund Index generated an annualised return of 5.18% with an annualised volatility of 5.53%, leading to a Sharpe ratio of 0.208. The managed futures strategy had a slightly negative correlation with the global equity index, about -0.02 across the data analysed. These results are in line with the idea of global diversification and the decorrelation of managed futures returns from global equity returns.
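The statistics quoted above can be reproduced from a series of periodic returns with a short helper. This is a generic sketch: the risk-free rate, the compounding convention, and the sample monthly returns below are assumptions of the example, and Credit Suisse's exact methodology may differ.

```python
import math

def annualised_stats(returns, periods_per_year=12, risk_free=0.0):
    """Annualised return, volatility and Sharpe ratio from periodic returns.
    Geometric compounding for the return; sample standard deviation scaled
    by sqrt(periods_per_year) for the volatility."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)
    ann_return = (1 + mean) ** periods_per_year - 1
    ann_vol = math.sqrt(var) * math.sqrt(periods_per_year)
    sharpe = (ann_return - risk_free) / ann_vol
    return ann_return, ann_vol, sharpe

# Hypothetical monthly returns (not the Credit Suisse data):
monthly = [0.012, -0.008, 0.004, 0.015, -0.011, 0.006] * 4
ret, vol, sharpe = annualised_stats(monthly)
print(round(ret, 4), round(vol, 4), round(sharpe, 3))
```

Applied to the actual monthly index returns, this helper would yield the annualised return, volatility and Sharpe ratio figures of the kind reported in the paragraph above.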

Figure 1. Performance of the managed futures strategy compared to the MSCI All-World Index and Credit Suisse Hedge fund Index across time.
Source: computation by the author (Data: Bloomberg)

You can find below the Excel spreadsheet that complements the explanations about the Credit Suisse managed futures strategy.

Managed futures

Why should I be interested in this post?

Understanding the profits and risks of such a strategy might assist investors in incorporating this hedge fund strategy into their portfolio allocation.

Related posts on the SimTrade blog

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Equity market neutral strategy

   ▶ Youssef LOURAOUI Fixed income arbitrage strategy

   ▶ Youssef LOURAOUI Global macro strategy

   ▶ Youssef LOURAOUI Long/short equity strategy

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Pedersen, L. H., 2015. Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined. Princeton University Press.

Business Analysis

Investopedia Managed Futures CTA

Credit Suisse Hedge fund strategy

Credit Suisse Hedge fund performance

Credit Suisse Managed futures strategy

Credit Suisse Managed futures performance benchmark

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).


Dedicated short bias strategy

Youssef LOURAOUI

In this article, Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022) presents the dedicated short bias strategy, which is a special case of the long/short equity strategy without the long component. The strategy holds a net short position, which implies more short (selling) than long (buying) positions. The objective of the dedicated short bias strategy is to profit from shorting overvalued equities.

This article is structured as follows: we introduce the dedicated short bias strategy. Then, we present a practical case study to grasp the overall methodology of this strategy. We conclude with a presentation of the risks associated with this hedge fund strategy.

Introduction

According to Credit Suisse, a dedicated short bias strategy can be defined as follows: "Dedicated Short Bias funds take more short positions than long positions and earn returns by maintaining net short exposure in long and short equities. Detailed individual company research typically forms the core alpha generation driver of dedicated short bias managers, and a focus on companies with weak cash flow generation is common. To effect the short sale, the manager borrows the stock from a counter-party and sells it in the market. Short positions are sometimes implemented by selling forward. Risk management consists of offsetting long positions and stop-loss strategies".

This strategy makes money by short selling overvalued equities. It can potentially generate returns in falling markets but would underperform in rising equity markets. An interesting characteristic of this strategy is that it can potentially offer investors added diversification by being uncorrelated with equity market returns.

Example of the dedicated short bias strategy

Jim Chanos (Kynikos Associates) short selling trade: Enron

In 2000, Enron dominated the raw material and energy industries. Kenneth Lay and Jeffrey Skilling were the two leaders of the group who disguised the company's financial accounts for years. Enron's directors, for instance, hid massive debts in subsidiaries in order to create the appearance of a healthy parent company whose obligations were extremely limited because they were buried in the subsidiary accounts. Enron filed for bankruptcy on December 2, 2001, sparking a major scandal and pulling down the pension funds intended for the retirement of its employees, who were all laid off simultaneously. Arthur Andersen, Enron's auditor, failed to detect the fraud, and the scandal ultimately led to the dissolution of one of the five largest accounting firms in the world. Figure 1 represents the share price of Enron across time.

Figure 1. Performance of Enron across time.
Source: (Data: BBC)

Fortune magazine named Enron Corporation "America's Most Innovative Company" annually from 1996 to 2000. Enron was a supposedly extremely profitable energy and commodities company. At the beginning of 2001, Enron had around 20,000 employees and a market valuation of $60 billion, approximately 70 times its earnings. Short seller James Chanos gained notoriety for identifying Enron's problems early on. This trade was dubbed "the market call of the decade, if not the past fifty years" (Pedersen, 2015).

Risk of the dedicated short bias strategy

The most significant risk that can make this strategy lose money is a short squeeze. A short seller can borrow shares through a margin account if they believe a stock is overvalued and its price is expected to decline. The short seller then sells the stock and deposits the proceeds into the margin account as collateral. The seller will eventually have to repurchase the shares. If the price of the stock has decreased, the short seller makes money from the difference between the price at which the stock was sold on margin and the lower price paid later. However, if the price rises, the buyback price may rise above the initial sale price, and the short seller will be forced to buy back the security quickly to avoid incurring even higher losses.
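The asymmetry described above can be made concrete with a small sketch. The prices below are hypothetical (loosely GameStop-like in magnitude), not actual trade data:

```python
def short_pnl(entry_price, exit_price, shares=100):
    """P&L of a short position: sell at entry_price, buy back at exit_price.
    The gain is capped at entry_price * shares (the price cannot fall below
    zero), while the loss is unbounded as the price rises -- the short
    squeeze risk."""
    return (entry_price - exit_price) * shares

# The short seller profits if the price falls...
print(short_pnl(40, 25))   # 1500
# ...but a squeeze that drives the price up produces an outsized loss.
print(short_pnl(40, 300))  # -26000
```

A long position can lose at most its purchase price, whereas a short position's loss grows without bound as the price rises, which is precisely what forces short sellers to cover during a squeeze.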

Gamestop short squeeze

GameStop is best known as a video game retailer, with over 3,000 stores still in operation in the United States. However, as technology in the video game business has advanced, physical stores have faced substantial challenges. Microsoft and Sony have both adopted digital game downloads directly from their own web shops for their Xbox and PlayStation systems.

While GameStop continues to offer video games, the company has made steps to diversify into new markets. Toys and collectibles, gadgets, apparel, and even new and refurbished mobile phones are included.

Shorting a company entails borrowing shares at the current market price, selling them, and then repurchasing them at a lower price. If the stock price declines, the short seller earns the difference in price between when they sold the shares and when they repurchased them. In this scenario, roughly 140% of GameStop's float was sold short in January 2021. Investors then had two choices: keep the short position or cover it. Covering a short position involves purchasing the shares at a loss, since the stock price is higher than the price at which they were sold. And when 140% of a stock's float is sold short while demand is high, a large number of positions have to be closed.

As a result, short sellers were constantly buying shares to cover their positions. With this much purchasing pressure, the stock price mechanically continued to rise. From the levels reached in early 2020 to the levels reached in mid-2021, the stock price climbed by a factor of nearly a hundred (Figure 2).

Figure 2. Performance of Gamestop stock price.
Source: (Data: Tradingview)

Unfortunately, the people on the other side of the trade lost huge amounts of money. The hedge fund Melvin Capital lost billions of dollars after being on the wrong side of the GameStop short squeeze. This, along with a few other poor trades, prompted an emergency capital injection from Steve Cohen's Point72 and Ken Griffin's Citadel, both well-respected hedge funds in the industry.

As Melvin Capital’s underperformance has persisted, the two billionaire investors have since reduced their holdings in Melvin Capital. The fund was down 21% at the end of the first quarter of 2022. This comes after a 39% decline in 2021. In the hedge fund industry, investors typically do not pay a performance fee for bad performance. If a hedge fund loses 10% for its investors in a given year, the investor would not be obligated to pay performance fees until the fund returns to breakeven. In the instance of Melvin Capital, the hedge fund would have required to achieve a performance gain of more than 100% to reach breakeven before it could begin to receive performance fees.

Why should I be interested in this post?

Understanding the profits and risks of such a strategy might assist investors in incorporating this hedge fund strategy into their portfolio allocation.

Related posts on the SimTrade blog

   ▶ Akshit GUPTA Short selling

   ▶ Youssef LOURAOUI Introduction to Hedge Funds

   ▶ Youssef LOURAOUI Global macro strategy

   ▶ Youssef LOURAOUI Long/short equity strategy

   ▶ Youssef LOURAOUI Portfolio

Useful resources

Academic research

Pedersen, L. H., 2015. Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined. Princeton University Press.

Business Analysis

Credit Suisse Hedge fund strategy

Credit Suisse Hedge fund performance

Wikipedia Gamestop short squeeze

TradingView, 2023 Gamestop stock price historical chart

About the author

The article was written in January 2023 by Youssef LOURAOUI (Bayes Business School, MSc. Energy, Trade & Finance, 2021-2022).
