

Thursday, July 3, 2025

Some thoughts on VC and PE funds

Private equity is an investment strategy encompassing, among other things, venture capital and leveraged buyouts, which are considered among the most impactful innovations in financial intermediation. Much of this reputation owes to the spectacular success of firms in the information technology sector.

Venture capital (VC) works on the principle that there are promising ideas and dynamic entrepreneurs out there who are short of capital. If they can be identified, funded, and provided light-touch portfolio support (mainly in the form of forging connections), a few among them will hit the bull's-eye and generate windfall returns that more than make up for the failure of the majority.

The central assumption is that of identification. This, in turn, has two parts. One is the belief that venture capitalists have acquired some form of prescience to spot great ideas in their nascent stages, well before their commercial potential becomes evident. The other is the belief that they can also identify the great entrepreneurs behind those promising ideas.

I'm not convinced of the former, at least not in any manner credible enough to justify investing tens, even hundreds, of millions of dollars on that basis. The strategy of making ten such bets in the hope that one or two will hit big appears closer to gambling than to a strategy grounded in sophisticated skill. If the bets are confined to a promising technology sector in its emerging phase, the context itself dictates that some firms must succeed, and if you have large pots of money, you are more likely than not to hit a few bull's-eyes. This raises questions about the value proposition of the venture capitalist, certainly enough to question their outsized rewards.
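To make the portfolio arithmetic concrete, here is a minimal sketch in Python with purely hypothetical numbers (the ten-bet fund, the $10 million cheque size, and the single 25x winner are all assumptions for illustration, not data from any actual fund):

# Hypothetical ten-bet VC portfolio: most bets fail, one outlier carries the fund.
cheque_size = 10_000_000  # assumed $10m per bet
bets = {
    "failures": {"count": 7, "multiple": 0.0},   # written off
    "modest":   {"count": 2, "multiple": 1.5},   # roughly return the capital
    "home_run": {"count": 1, "multiple": 25.0},  # the outlier
}

invested = sum(b["count"] * cheque_size for b in bets.values())
returned = sum(b["count"] * cheque_size * b["multiple"] for b in bets.values())
print(f"Invested ${invested/1e6:.0f}m, returned ${returned/1e6:.0f}m, "
      f"gross multiple {returned/invested:.1f}x")
# A single 25x outcome yields a ~2.8x gross portfolio multiple despite a 70% failure
# rate, which is why the model can look skilful even if picking were close to random.

The point is not that such portfolios cannot pay off, but that the payoff structure itself makes it hard to distinguish skill from luck.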

The second part, the identification of entrepreneurs, raises even more troubling questions. It's hard to believe that even the shrewdest minds can spot great entrepreneurs from a few interactions, with enough confidence to make the kind of large financial bets that venture capitalists make, unless the entrepreneurs happen to be friends, friends of friends, or part of a connected, closed network.

This raises concerns about cronyism and exclusion bias. Wouldn't there be perverse incentives, especially given that venture capitalists are investing others' money, and moral hazard arising from the fact that most or many of these bets will fail? And what about the inefficiency arising from the exclusion of the many others outside the network?

Given all this, there are some fundamental questions worth asking. How can we say that burning several hundred million dollars to generate one or two unicorns or decacorns is an efficient use of capital? Should investors instead be more discriminating and do rigorous due diligence before investing? Could there be a model whereby the high-risk-assuming angel and seed-stage investors, including governments, are compensated by the later-stage investors, who have a less risky pool to choose from?

It’s not that such questions are not asked elsewhere. In fact, industrial policy is subjected to this scrutiny continuously and has been declared an inefficient and wasteful pursuit by the same set of experts. For example, in the context of the Chinese government’s massive Made in China 2025 campaign to boost strategic industries and achieve technological self-sufficiency, which involved Big Funds picking sectors and firms and pouring hundreds of billions of dollars, experts have been quick to castigate it for its colossal waste. 

But they conveniently overlook the same portfolio aspect of these investments. Who can deny that these investments have produced a portfolio, the Chinese economy, that utterly dominates the world across several sectors and technologies? Just in terms of incremental output, jobs created, and export surpluses, not to mention the geopolitical power conferred, these investments appear to have generated returns in multiples. In the narrower terms of individual sectors – electric vehicles, EV batteries, critical minerals refining and processing, solar panels, wind turbines, electronics components and products, etc. – the success of those sectoral funds is spectacular.

Admittedly, in all these cases, the successes have come at a very high cost in terms of the amounts spent. But the logical conclusion from this line of reasoning is that wastage and losses are fine as long as the portfolio generates a high net return. 

If experts can question the collateral wastage associated with the emergence of this portfolio, why are they shying away from scrutinising the VC industry’s capital deployment efficiency on similar lines?

Leveraged buyouts (LBOs) are different in an important way. LBO funds identify industries and firms that have promise but are operating far below their potential, whether due to poor management, deficient enterprise, or some other factor that can be worked upon. If these firms can be bought out and their operational efficiencies improved or business models modified through very active portfolio management, most likely by replacing the entire senior management, then there might be large efficiencies to be realised. LBO funds use significant leverage to supplement investor equity in purchasing firms, and place the debt on the balance sheets of the firms being bought.

The critical assumption here is that of very active portfolio management. This would include the PE LBO fund changing the management, and in general, necessarily getting into the nuts and bolts of the firm’s business, from the high-level business model to the granular unit economics and small operational details.

There’s no quibbling about the value of this model. In simple terms, it’s about identifying firms that are not managed well (and there are several out there), buying them, and addressing their inefficiencies to unlock the hidden value. Who could dispute this proposition?

Two things follow from this model. One, the PE LBO firm must have the internal domain expertise to do this effectively; there are hard limits to the use of outsourced expertise. But acquiring in-house domain expertise of the quality required to do such portfolio management effectively across several sectors is very difficult. Two, since the model demands proficiency and intense engagement, there are binding bandwidth constraints on how many firms even a large PE fund with several teams can manage.

Taken together, the PE LBO business model ought to be inherently self-limiting in size. This also means there's only so much that the fund can generate as returns and pay out to the General Partners (GPs).

Against this backdrop, it's natural that problems start when PE LBO funds try to scale beyond a certain level in the quest to amplify and expedite returns and payouts. The incentive distortions and inefficiencies surface at multiple levels. Each team is now stretched over far more projects than it can effectively manage, forcing it to fall back on light-touch portfolio management. Further, as the fund size increases, it becomes increasingly difficult to identify good investment opportunities.

Leverage is attractive, especially when rates are low, because it makes investors' equity go further and amplifies the GPs' returns. The period of the PE industry's growth coincided with the era of ultra-low interest rates in advanced economies. Now that rates are normalising, the PE/VC industry faces serious vulnerabilities.
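A stylised calculation (all figures assumed for illustration) shows how cheap debt flatters the sponsor's equity return, and how the same deal looks far less attractive once borrowing costs normalise:

# Stylised LBO equity return under different borrowing costs (assumed numbers).
def equity_return(purchase_price, debt_share, exit_price, rate, years=5):
    # Simplifications: bullet debt with interest accrued to exit, no fees,
    # no interim cash distributions.
    debt = purchase_price * debt_share
    equity = purchase_price - debt
    debt_at_exit = debt * (1 + rate) ** years
    return ((exit_price - debt_at_exit) / equity) ** (1 / years) - 1

# A firm bought for 100 and sold for 140 after five years (40% asset appreciation).
for rate in (0.03, 0.08):
    print(f"debt cost {rate:.0%}: "
          f"unlevered {equity_return(100, 0.0, 140, rate):.1%} p.a., "
          f"with 60% debt {equity_return(100, 0.6, 140, rate):.1%} p.a.")
# At a 3% debt cost, 60% leverage lifts the annual equity return from about 7% to
# about 12%; at an 8% debt cost, the same structure drags it down to about 5%.

This is the sense in which the industry's growth was a child of the ultra-low-rate era.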

The use of leverage also expands the envelope of sectors that become attractive for LBO firms. In fact, LBO firms come to believe that they have a model that can achieve high returns even with low-risk assets. So, they buy out low and stable return assets like those in infrastructure or affordable/public housing, load them up with debt, strip assets, and pay out large dividends and pass the parcel along. 

This creates problems and externalities that are borne by society and taxpayers. The British water and sewage sector, specifically Thames Water, is a classic example. The same logic makes similarly boring, low-return and mass-market assets like kindergartens, salons, gyms, laundromats, vape shops, and so on attractive to LBOs, but with large negative externality risks. Is this practice of amplifying risks by using leverage to increase returns on low and stable return mass market assets desirable?

Finally, the incentive to indulge in financial engineering – excess leverage, skimping on investments, sale and leaseback, raiding pension chests, etc. – and strip assets has become pervasive. The squeeze in exit options has led PE LBO funds to indulge in practices like selling a company to another fund managed by the same GP at a higher valuation to reset the clock, continuation funds, strip sales of part of a fund's assets, net-asset-value borrowing, deferring interest payments and adding them to the debt, transferring the best companies across funds, and so on, all to raise money to pay LPs and kick the can down the road.

This article is about how PE funds have come to see insurance premiums as an attractive source of credit to finance their activities and have therefore created a financial model where they encourage the securitisation of insurance premiums and then buy those securities. The model gets strained once the insurance companies face a liquidity crunch or when the LBO fund is unable to exit its investments.

All this raises concerns about the negative externalities inflicted by LBO funds when the cost of capital becomes normalised. See this, this, this, this, and this. It is especially important since LBO funds now attract investments from pension funds, insurers, sovereign wealth funds, and public endowment funds, thereby raising questions about how private (and therefore lightly regulated) these funds actually are.

Monday, June 23, 2025

China made iPhone, iPhone made China?

Patrick McGee has written the definitive book on Apple’s relationship with China.

The short story goes something like this. 

Apple has always sought to build a deep moat around its products. It pursued a manufacturing strategy in China that followed this principle. Its design focus meant that many of its components were bespoke. 

Therefore, unlike its smartphone rivals, who used generic components off the shelf and could therefore hand over the design and outsource the entire manufacturing to contract manufacturers, Apple had to work very closely with its suppliers and contract manufacturers. Its obsessive focus on quality also made this an imperative. Accordingly, it sent its product designers and manufacturing design engineers from Cupertino to embed themselves with its contract manufacturers and suppliers, and transfer knowledge, skills, work practices, and work ethics. 

It set a very high standard for the deliverables from suppliers, who in turn acquired a reputation for being the best in class. This also meant that the employees in Apple’s supply chain had to undergo training and acquire a higher level of skills than was the case for others. 

Given the high employee turnover in the industry, Apple’s supply chain became a training ground for millions of manufacturing workers at all levels. McGee points to Apple’s own estimate that since 2008, a staggering 28 million workers have gone through Apple’s rigorous training, a number greater than the entire labour force of California! It may well be the single biggest skilling program that the world has ever seen, one that involved an American company imparting knowledge, skills, and practices to the entire Chinese electronics industry. 

Apple was not outsourcing as the word was commonly understood. Instead, it was sending its top product designers and manufacturing design engineers from California and embedding them into suppliers’ facilities for weeks or months at a time. There they’d whip local suppliers into shape, co-invent new production processes, and stay until the operations were up and running. “The thing that really stood out was not just that it’s all in China, but that it’s the most vertically integrated manufacturing system in the world and yet they don’t theoretically own anything,” he says… Instead of selecting components off the shelf, Apple was designing custom parts, crafting the manufacturing behind them, and orchestrating their assembly into enormously complex systems at such scale and flexibility that it could respond to fluctuating customer demand with precision. Just half a decade earlier, these sorts of feats were not possible in China. The main thing that had changed, remarkably, was Apple’s presence itself. So many of its engineers were going into the factories to train workers that the suppliers were developing new forms of practical know-how.

Apple was also investing heavily in the production process to build moats around its manufacturing innovations, while rivals were just giving suppliers spec sheets and saying, “Build this.”… Apple did something totally novel. It purchased hundreds of millions of dollars of machinery, placed it in the factories of its supply partners, and ‘tagged’ it for Apple use only… The investments allowed its suppliers to operate at a level they’d otherwise be incapable of… As a former Apple manufacturing design engineer puts it: “The model we had developed was: We’re going to use your factory. We’re going to use your people. But we’re going to go in there and use them as our arms and legs. You know, ‘You do this, and you go do this,’ and ‘You set the dials here.’”… Apple’s engineers were deep in the weeds building, and even inventing those capabilities. Apple was doing this on such a scale that it created an entire organisation within Ops dedicated to the procurement, planning, and deployment of this capital-intensive machinery. 

All this also meant that Apple had complete control over its supply chain. In fact, it may not be incorrect to say that it did manufacturing, but without owning any factory!

In return for the capabilities development and tight guidance (plus, of course, the large volumes and the privilege of supplying Apple), Apple’s procurement division, headed by Tony Blevins, negotiated cut-throat deals that paid the lowest margins. Counterpoint Research has estimated that in 2016, even as Apple had a profit margin of 33%, its Chinese rivals Oppo, Vivo, and Xiaomi had 7%, 6%, and 2% margins. Foxconn’s margins fell to just 2.4% in 2011, and while its revenues more than doubled from $53 bn in 2007 to $107 bn in 2011, its profits barely rose from $2.41 bn to $2.53 bn. Apple’s suppliers realised that the skill acquisition, the massive volumes, and the reputation that comes with working for Apple compensated for the low margins. They could charge a premium for supplying to other smartphone makers.

Apple actively encouraged diversification among its suppliers by requiring that none of them derive more than half of their revenues from Apple. This was also a form of derisking, since Apple's new models often involved radical shifts that made certain components obsolete, which could have shut down wholly dependent suppliers, with all the attendant negative press around job losses. This effectively meant that Apple's suppliers were cross-subsidising their manufacturing for Apple by charging higher prices for their supplies to Apple's rivals.

McGee describes this relationship between Apple and its suppliers and contract manufacturers in terms of the Apple Squeeze

Apple’s engineering and operations teams would rigorously train local partners, in the process giving away manufacturing knowledge, in particular how to efficiently scale while maintaining the highest quality standards. In exchange, the local supplier would work for soul-crushingly low margins with the understanding that it could profit from the incredible volumes Apple demanded. It could also use these skills to win orders from other clients, charging them more for similar work.

Importantly, this condition also may have contributed to the birth of the Chinese smartphone industry. 

In 2009, the majority of smartphones sold in China were produced by Nokia, Samsung, HTC, and Blackberry. But as Apple taught the supply chain how to perfect multi-touch glass and make the thousand components within the iPhone, Apple’s suppliers took what they knew and offered it to homegrown companies led by Huawei, Xiaomi, Vivo, and Oppo. Result: the local market share of such Chinese brands grew by leaps and bounds, from 10% in 2009 to 35% by 2011, and then to 74% by 2014. It’s no exaggeration to say that the iPhone didn’t kill Nokia; Chinese imitators of the iPhone did. And the imitations were so good because Apple trained all their suppliers… Apple became the developer for China… Apple, in other words, set in motion a series of events that helped Chinese suppliers win more orders and advance their understanding of cutting-edge manufacturing. At the same time, Western manufacturing of electronics atrophied.

It also birthed a high-quality Chinese contract manufacturing industry. By shifting orders away from its Taiwanese contract manufacturers, Apple has allowed Luxshare, BYD Electronic, Goertek, and Wingtech to take significant shares of its supplier network. More than half of its component suppliers are Chinese firms, and many of the rest manufacture in China. The spectacular success of China's electronics manufacturing ecosystem owes in no small measure to Apple. I have blogged here about the iPhone's domestic value addition in China.

Given its bespoke components, obsession with quality, and the massive volumes involved, Apple often invested in the equipment and machinery for its suppliers. As McGee writes,

The value of its “machinery, equipment and internal-use software” – namely the instruments placed in third-party factories for production – totalled less than $2 billion in 2009, but then soared beyond $44.5 billion by 2016 – more than four times the value of “land and buildings” owned by Apple – as the company took unprecedented control of its supplier network.

All told, Apple invested a staggering $55 billion a year for five years from 2015, for a total of $275 billion, more than double the entire post-war Marshall Plan. In addition to the investments in equipment to suppliers and construction of its retail stores, this estimate includes a sizeable part of wages that Apple paid to workers across its supply chain as training costs for teaching new skills and processes to refresh its multidimensional product portfolio. 

I’ll let McGee summarise the Apple story

By investing in and teaching local suppliers, Apple was inculcating a corpus of hands-on knowledge, both in tangible skills and abstract concepts, which applied well beyond serving its own needs. True, this was fairly unintentional; Apple hadn’t designed its supply chain to spur innovation at its suppliers. Yet that’s exactly what it had accomplished. And Apple’s investments weren’t just large, they were ruthlessly efficient and narrowly targeted in the advanced electronics sector… Thinking of Apple’s investment like a government program is instructive. Year in, year out, China didn’t have the talent or expertise to build the products that Jony Ive’s studio conceived, but the engineers Apple hired out of MIT, Caltech, and Stanford, or poached from Tesla, Dell, and Motorola, routinely got them up to speed. Apple could send a calibre of talent to China – what one Apple veteran calls “an influx of the smartest of the smart people” – that no government program ever could. And the culture was such that the Apple engineers would work up to 18 hours a day. Moreover, whereas a government program could at best train a workforce to engineer products, it wouldn’t have the ability to actually purchase the goods. But Apple could and did.

In economic terms, Apple was creating the whole market – supplying inputs in the form of worker training and machinery, then purchasing the outputs. The suppliers who won Apple contracts were given a massive order book and were taught to ramp up at a pace none had ever experienced. Better still, Apple had put so much design, brand image, and superb marketing into its products that even without commanding a dominant market share, it nevertheless attained a dominant market style. A new Apple product would set into motion the look, feel, and substance of what a laptop or smartphone should be. So the processes it often co-invented with China-based suppliers were in great demand…

What Apple had realised was that, unwittingly, its presence in China was enabling technology transfer on an extraordinary scale… Apple wasn’t just creating millions of jobs in the country; it supported entire industries by facilitating an epic transfer of “tacit knowledge” – hard-to-define but practical know-how “in the art of making things, in organising practical matters, and in the way people produce, distribute, travel, communicate, and consume,” as the China-born Federal Reserve economist Yi Wen defines it… The technology transfer that Apple facilitated made it the biggest corporate supporter of Made in China 2025, Beijing’s ambitious, anti-Western plan to sever its reliance on foreign technology.

Some thoughts: 

1. While China's manufacturing prowess undoubtedly arose from multiple factors, it may not be incorrect to highlight the central role played by Apple's iPhone manufacturing and claim that China made the iPhone, and the iPhone made China!

2. When history is written, Apple will be considered an icon of efficiency and profit-maximising capitalism. The cost-minimising contracts with suppliers, low-margin outsourcing, the transfer of inventory to the contract manufacturer, the tight oversight of its suppliers and contract manufacturers, and the concentration of everything in China meant that Apple could harvest economies of scope and scale in an unprecedented manner, and thereby maximise its profits.

3. The other side of efficiency and profit maximisation is that Apple will also be considered a totemic example of risk concentration. It has yoked itself so deeply and intimately to China that any exit is near impossible, and it’s now virtually at the mercy of the Communist Party and President Xi Jinping. McGee compares Tim Cook to Jack Welch, who laid the foundations for GE’s demise. 

4. As McGee writes, unlike Japan, Taiwan, Korea, and China, which first made components before getting into SMT, assembly, testing, marking, packaging, and higher value-added activities, India has jumped straight into SMT and ATMP. Manufacturing of components is hard and requires the development of several critical capabilities, besides a workforce with high productivity that can also produce at high quality. Assembly tasks are not amenable to the kind of learning-by-doing skill and knowledge spillovers that actual manufacturing, even if only of components, generates.

5. Finally, the book is a story about how conventional theories about institutions and the rule of law being necessary to attract foreign investors break down completely in the context of China. If anything, as McGee highlights with several examples, China followed the opposite model of Rule by Law, where everything was subordinate to the national interest as defined by the Party. Western multinational corporations invested and remained in China despite these problems.

Update 1 (30.06.2025)

India has the opportunity to emulate China with Apple.
Analysts at Counterpoint Research calculated that India had succeeded in satisfying 18 percent of the global demand for iPhones by early this year, two years after Foxconn started making iPhones in India. By the end of 2025, with the Devanahalli plant fully online, Foxconn is expected to be assembling between 25 and 30 percent of iPhones in India.
While sceptics may scorn this as “screwdriver work”, whether Apple does for India what it did for China will depend on the next stage, actual manufacturing in the form of components, locating to India.
The government, dangling subsidies, is persuading companies like Apple to source more of those parts locally. It is already getting casings, specialized glass and paints from Indian firms. Apple, which opened its first Indian stores two years ago, is required by the Indian government to source 30 percent of its products’ value from India by 2028… Prachir Singh, an analyst for Counterpoint, said it had taken 15 years to figure out what would work in China and five years to import this much of it to India.

Saturday, May 17, 2025

Weekend reading links

1. Tim Harford points to "zero-sum thinking", or the frame where we think in terms of winners and losers, us and them. This contrasts with the frame where the pie is expanded and everyone wins, or the rising tide lifts all the boats. 
If one person is to get richer, someone else must get poorer. If China is doing well, then the US must logically be doing badly. Jobs go either to the native born, or to foreigners... a zero-sum thinker tends to be in favour of more redistribution and in favour of affirmative action — traditionally leftwing policies — but also in favour of strict immigration rules. Rightwing populists also think affirmative action is important, they just think it’s important and wrong... Stantcheva’s work strongly suggests that zero-sum thinking isn’t some sort of senseless blind spot. When people see the world in dog-eat-dog terms, they usually have a reason. Young people in the US tend to see the world as zero sum, reflecting the fact that they have grown up in a slower-growth economy than those born in the 1940s and 1950s. A similar pattern emerges across countries: the higher the level of economic growth a person grew up with, the less likely they are to see the world in zero-sum terms. People whose ancestors were enslaved, forced on to reservations or sent to concentration camps are more likely to see the world in zero-sum terms.

2. Early takeaways from the UK-US trade deal. The main theme is the UK's commitment to ensure that Chinese manufactured goods don't enter the US through the UK. The text of the agreement is here.

The tariff reductions on UK exports will depend on the findings of the US Section 232 investigations (to determine whether and how specific imports affect US national security). The agreement states that the United Kingdom will work to promptly meet U.S. requirements on the security of the supply chains of steel and aluminum products intended for export to the United States and on the nature of ownership of relevant production facilities.

3. China's weaponisation of its manufacturing dominance should be seen as part of a long-drawn-out, conscious strategy. Sample this from Xi Jinping (the speech is here).

Chinese leaders must “tighten international production chains’ dependence on our country, forming a powerful capacity to counter and deter foreign parties from artificially disrupting supplies” to China, Mr. Xi said in his speech to the Central Financial and Economic Affairs Commission in 2020.

The original Chinese-language version of the speech appears to have a more threatening tone.

"We should increase the dependence of international supply chains on China and establish powerful retaliatory and menacing capabilities against foreign powers that would try to cut supplies."

4. Interesting long read on the late French philosopher, Rene Girard, who has emerged as an ideologue for those currently ruling the US. His central contribution is the idea of "mimetic desire".

Girard is best known for his theory of “mimetic desire”, the idea that humans don’t desire things in and of themselves, but out of a wish to imitate and compete with others. On the back of this insight, the writer built a distinctive anthropology, borrowing from and contesting the theories of Nietzsche and Freud... Girard’s first book, Deceit, Desire and the Novel (published in French in 1961), which describes how Don Quixote, Madame Bovary and characters from Stendhal, Proust and Dostoyevsky come to desire things because others already want them. “Man is the creature who does not know what to desire, and he turns to others in order to make up his mind,” he wrote. The fact that desires are borrowed means they are necessarily competitive. If you desire your neighbour’s husband, you have to contend with your neighbour in order to get what you want — or what you think you want. Mimetic desire leads to fruitless competition, unhappiness and even violence... Over the past half-century, mimetic desire has been Girard’s chief legacy, not only in humanities departments but also, increasingly, among Silicon Valley entrepreneurs and east London brand managers. Inducting Girard into the Académie Française in 2005, the philosopher Michel Serres called him “the Darwin of the human sciences”.

He also came up with a set of ideas on scapegoating and how it impacts politics. 

His second book, Violence and the Sacred, published in 1972 and perhaps the most influential of all his work, describes how human societies enter into periods of crisis in which competition becomes unbearable. The solution, Girard claimed, is a violent act of scapegoating. The scapegoat has certain recurrent features: they are a foreigner, someone with a disability or a person in a position of authority. Such acts are then commemorated in the founding myths of cultures, myths in which the scapegoat becomes deified... Girard rarely used contemporary case studies, preferring to find his evidence in ancient literature, scripture and anthropology, but his view on lynchings ancient and modern was unambiguous: they were unconscionable. The insistence that the scapegoat was innocent would become a justification of Girard’s faith as well as the basis for a darkly pessimistic vision of politics later taken up by both Vance and Thiel. Girard’s next book, Things Hidden Since the Foundation of the World, published in 1978, argues that Christianity had revealed the hidden truth of the scapegoat mechanism. By insisting on their saviour’s innocence, Christians had deconstructed the “primitive” belief in the scapegoat’s guilt. It is for this defence of Christianity that Girard has been called a modern Church Father.

5. After scorning and abhorring arms manufacturing for decades, the German Mittelstand, pushed by a punishing industrial slowdown and by the commitment to much higher defence spending amid the growing unreliability of the US defence umbrella, are taking to the defence industry with a vengeance. Owing to the legacy of industrial co-operation with the Nazis, arms making had become taboo in Germany.

6. Indians are the largest content consumers in the digital world, but India does not rank among the top 7 content creators.

To India's software industry stuck at the lower end of the value chain, the lack of world-class brands and mass-market companies, the dearth of startups that have gone on to become global companies, and a startup ecosystem with little to show at the frontier, we can now add a massive entertainment industry that does not figure among the top content creators globally despite being the largest consumption market by volume and digital traffic.

The FT article points to how Japanese content makers have since the pandemic conquered the world with their anime genre of cartoons, and their manga comic books from which these anime characters and stories are derived. 
The Japanese content industry — including gaming, publishing, movies, TV and animation — saw overseas sales triple during the past decade, to an estimated ¥5.8tn in 2023. “The export value of the content industry is bigger than the steel, petrochemicals and semiconductor sectors,” says Minoru Kiuchi, the country’s economic security minister and the man now in charge of its anime and manga strategy. The government now wants to push even harder, Kiuchi says, increasing overseas content sales to ¥20tn by 2033. Yet previous efforts to reap the proceeds domestically have struggled. In 2013, the government launched an initiative called Cool Japan, which funded an ill-fated anime streaming platform called Daisuki that aimed to rival the likes of Netflix. Cool Japan has been relaunched multiple times — most recently last year, with greater emphasis on subsidising better working conditions, combating piracy and promoting overseas expansion.
This is a tantalising possibility
If Japan succeeds in boosting the economic clout of its entertainment industry, then anime, manga and other sources of valuable IP could help offset the effects of the country’s declining population and vulnerable industrial base.

The article is a good short history of the emergence of manga comics and anime cartoons, and of how they have gone global since the pandemic.

It begs the question of why India's Jataka Tales, Hitopadesha, or Panchatantra, in the form of the Tinkle comics, did not spread beyond the country's borders.

7. The Supreme Court's ruling overturning the NCLAT order on the sale of Bhushan Power and Steel (BPSL) to JSW, more than four years after its consummation, opens up several questions for discussion: the competence and integrity of the Resolution Professionals, the rigour and fidelity of the IBC processes, the competence of the NCLT and NCLAT, and finally, the Supreme Court's own decision-making delays and decision principles.

This is a good summary of the issues. This is another good article. Finally, this raises some important issues about its impact on investor confidence.
It is a case study in institutional compromise. The Resolution Professional acted more as a passive bystander than a statutory officer. The CoC, far from being a sentinel of creditor interests, capitulated to a flawed plan and later defended it in Court with shifting arguments. The NCLT and NCLAT, expected to be guardians of due process, failed to check even the most basic procedural violations, including eligibility criteria, payment timelines, and the resolution applicant’s bona fides... The Supreme Court invoked Article 142 to direct BPSL’s liquidation. While this may be legally tenable, one is compelled to ask: could this power have been better used to restore legality without derailing an otherwise successful business revival?

Substantively, JSW has already paid substantial sums to creditors, restarted operations, and brought BPSL back into the industrial fold. Was it not possible to preserve this progress by correcting procedural anomalies, imposing penalties, or directing compliance retrospectively? Couldn’t the Court have modified the Plan to align with the IBC instead of nullifying it entirely? This verdict may inadvertently send a chilling message to global investors that in India, even resolution plans implemented over 7-8 years may be overturned due to procedural infirmities, regardless of real-world success. With the world watching India’s insolvency ecosystem as a key plank in its “ease of doing business” pitch, the implications are serious.

In this context, MS Sahoo makes an important point. 

If irregularities are discovered post-facto, those responsible must face swift and stringent civil, regulatory, or criminal consequences. However, the underlying transaction must remain undisturbed. This principle of punishing the wrongdoer without unsettling the transaction is firmly embedded in securities jurisprudence. Trades executed on stock exchanges are never reversed, nor are public issues unwound, even if grave irregularities are discovered post-facto... It is time the law, policy, and institutions recognised the finality of commercial transactions, which should form the bedrock of all economic regulatory frameworks. The legal architecture should enable rigorous oversight to prevent and deter misconduct and hold wrongdoers accountable. However, such oversight must be disentangled from the validity of commercial transactions once they have been lawfully approved or deemed approved.
 This is the balance sheet of the IBC itself since its formation.


8. Shifting market expectations on tariffs

There is an emerging view that Trump’s tariff climbdown will ultimately bring US duty rates closer into line with his campaign plans; 10 to 20 per cent for most countries, and 60 per cent for China. Given all the tariff twists and turns over the past few weeks, markets might be forgiven for thinking that’s a good outcome. But prior to the president’s inauguration, that was most analysts’ worst-case scenario.

9. Trump's tariffs and their impact on the US dollar are helping indebted developing countries.

In practice, and purely by accident, Trump’s tariff wars have created a surprisingly benign environment for emerging markets. Although no one could claim with a straight face that he is judiciously managing the exchange rate lower as part of some fantastical “Mar-a-Lago Accord”, the dollar has weakened, benefiting EMs that borrow in the US currency. The traditional perverse effect whereby risk aversion arising from eccentric US policymaking actually causes a flight to safety and strengthens the dollar has so far been absent. The net effect of a shambolic trade strategy and weakening growth has also been to reduce US Treasury yields, similarly supporting capital flows to higher-yield markets elsewhere. The spread of EM bond prices over US bonds, which typically rises at times of financial market stress and uncertainty, has remained well contained.

10. UK minimum wages now match the pay of some white-collar entry-level workers.

11. Ajay Srivastava of GTRI feels that India's FTA with the UK has crossed several red lines.
For the first time in any free trade agreement (FTA), India has agreed to slash car import duties, open up its vast government procurement market to a foreign country, and weaken its patent regime under external pressure... India’s decision to slash car import duties from 100 per cent to 10 per cent — even with quotas — is a first in any trade deal. The cuts also cover electric and hybrid cars where Indian industry is just beginning to grow. India will soon receive requests from the European Union, the United States, Japan, and South Korea, demanding equal or deeper tariff cuts...
Around 40,000 high-value Indian government contracts will now be open to UK companies, covering transport, green energy, and infrastructure sectors. One of the most problematic provisions is that UK firms will be treated as “Class 2” local suppliers if just 20 per cent of their product value originates in the UK. This grants UK firms the same procurement preference previously reserved for Indian suppliers with 20–50 per cent domestic content. It allows them to use up to 80 per cent Chinese or European inputs while still benefiting from local supplier status in India. UK companies will also have access to India’s central e-procurement portal, making tracking and winning public contracts easier... India has, for the first time in any FTA, agreed to rules that go beyond its obligations under the WTO’s Agreement on Trade-Related Aspects of Intellectual Property Rights. This threatens not only access to affordable medicines within India but also its global leadership as a supplier of generic drugs to developing countries. This move hands over a big win to global pharma giants.

12. It's the uncertainty that kills. John Coates writes,

What happens if you go up and over that cortisol curve? Then you start to change. In our studies we found that prolonged volatility elevated cortisol chronically and caused traders to become dramatically more risk-averse. The masters of the universe turned timid (potential pushovers in bonus — or tariff — negotiations). Here again we can see a biological mechanism driving macro events: during a bear market, the higher volatility increases risk aversion, which causes more selling and even more volatility and risk aversion, in a runaway chain reaction that ends in a crash. Uncertainty has this power. Uncertainty over whether something nasty might happen can be more stressful than the nasty thing itself. Experiments have shown this. Imagine you are exposed to something mildly unpleasant, like brief blasts of white noise; but the blasts come at predictable time intervals, say once every two minutes. Between blasts you have downtime and need not brace yourself against the noise. In this timing regime, your stress hormones would probably be only slightly elevated. But now imagine the intervals fluctuate, making it more difficult to predict when to brace yourself, so you brace for longer periods of time. Now your stress hormones begin to rise. As the intervals become random and cannot be predicted, cortisol levels reach a maximum. Under each timing regime, you have been subjected to an identical amount of noise. But your cortisol levels increased with the uncertainty of the timing. 

My colleagues and I observed this effect in traders: their cortisol did not track their profits and losses but rather the variance of their returns. This effect was also observed during the second world war. German soldiers on the front lines during the Battle of Stalingrad faced constant attack, while soldiers manning supply lines faced danger less frequently but more unpredictably, and it was here, behind the front lines, that they suffered a higher incidence of gastric ulcers. So uncertainty over when something nasty is going to happen to you, such as losing money, or a bandage being ripped off, can be more stressful than the actual event. It is the not-knowing when it will happen that keeps us on edge, keeps us revving our engine. In fact, we hate being kept in a state of uncertainty. Some macabre experiments conducted in the 1970s found that animals — and humans too, presumably — will accept four times more aversive stimuli if they are delivered predictably rather than unpredictably.

This has relevance to policy making

This stress biology could be harnessed by policymakers. Central banks, for example, could use uncertainty to control the financial markets, and to deflate bubbles by increasing risk aversion. Paul Volcker, chair of the Federal Reserve from 1979-87, understood this power, and possessed the knack to scare the pants off the financial markets. Part of that fear stemmed from his tendency to move interest rates enormously, in the early 1980s raising the Fed funds rate to 20 per cent. But he also kept the market guessing as to when he would act, and by how much. Street wisdom says do not fight the Fed, and no one did with Volcker lurking in the hood. Today, the Fed could veil its activities with a similar uncertainty as a means of calming market exuberance, even cooling an inflationary economy, and all without raising rates. In fact, a deft application of uncertainty could well drive investors into Treasury bonds, thereby lowering long-term interest rates and reducing the debt burden. Since Volcker’s time, however, central banks everywhere have relinquished uncertainty, one of their most potent weapons, in favour of a policy called forward guidance, which involves communicating clearly their intentions, in other words reducing uncertainty. Not surprisingly, this namby-pamby policy has failed utterly in taming the wild beast that is irrational exuberance. To control the market you need to corral its animal spirits, and uncertainty has more than enough power to do so.

This is important since, for all his unpredictability, it's emerging that there's one big predictability with Trump policies: they'll not cross a threshold laid down by the markets. In other words, there's an emerging Trump put on the downside risk in US equity markets. It's no surprise that the markets have responded to the temporary truce with China by rebounding in a manner that makes one feel the whole issue is now settled. The market reaction was stunning: Wall Street had its biggest one-day gain in five years.

In fact, markets are now behaving as though Liberation Day not only did not happen but the underlying issues have been fully resolved.

Tuesday, January 2, 2024

Basel III and cost of capital for infrastructure

A widely held misconception about infrastructure finance is the belief that bonds are a major source of infrastructure debt. In reality, even in developed countries (excluding the US), banks are by far the dominant source of debt mobilisation. Historically, bond markets have been small contributors even in Europe. I have blogged extensively on this.

In India, this belief has led to several years of intense debate and policy-making effort aimed at broadening and deepening bond markets. Influential opinion makers weigh in with theoretical arguments about why bond markets are critical to infrastructure financing and how our reliance on banks has been the bane of the banking sector. In sharp contrast, efforts to enable banks to overcome their asset-liability mismatches, lower their cost of financing infrastructure, and improve the quality of their due diligence of infrastructure projects have received disproportionately less attention.

This post will point to an important regulatory constraint arising from the Basel III framework that raises the cost of capital for bank financing of infrastructure.

Before we get to the main point, let me point to a few more illustrations of the marginal role of bonds in infrastructure finance. 

Even in areas of structured finance like project finance, loans have dominated.
A longer dataset confirms the low share of bonds in total project finance.

Bonds have been a very small proportion of all lending specifically to infrastructure projects.


In 2022, fresh private investment by way of equity and debt in greenfield and brownfield projects globally was around $350 bn, of which bank loans made up 72% of the debt mobilised and bonds made up just 19%. The share of bonds would have been even lower but for green bonds.


All the major infrastructure companies in India rely predominantly on bank loans to mobilise their debt; bond offerings form just a small share. As I have blogged on several occasions, outside of China and the US, bond markets contribute only a very small share of infrastructure debt, and syndicated loans form the predominant source.
For all the advantages of long-term tenor, bonds deliver the same capital at a much higher cost, with far higher transaction costs, and take much longer to arrange. Even with all the possible deepening and broadening of India's bond markets, this situation is unlikely to change. Banks will remain the predominant source of debt for infrastructure projects.

As mentioned earlier, despite this reality, a disproportionate amount of policy effort in India and elsewhere goes into bond market reform.  It's in this context that there should be more policy efforts at making it easier for banks to lend for infrastructure projects. Such reforms should try to reduce the cost of funds for banks and address the critical issue of asset-liability mismatch that banks face from lending long-term.

But unfortunately, instead of making it easier for banks to lend to infrastructure, the banking regulatory requirements under the Basel III norms, under implementation since 2013, have had the opposite effect. They increased the capital buffer requirements for long-term illiquid loans of the kind taken by infrastructure projects, thereby raising the cost of capital for banks undertaking such lending.

Thorsten Beck in a CGD Blog has a good summary of the challenges to infrastructure financing from Basel III norms
First, there will be a tightening of the large exposure rule, i.e., how much a bank can be exposed to a given borrower or project. Given that infrastructure projects are typically large, this might prevent lending especially by smaller banks. Second, under Basel III, there is a tightening of capital requirements for infrastructure projects which makes lending for such projects costlier. A third constraint comes through liquidity requirements, newly introduced under Basel III, under the so-called net stable funding ratio (NSFR) and liquidity coverage ratio (LCR). These requirements will force banks to (i) match longer-term lending (such as for infrastructure) with longer-term funding, which, of course, implies that banks have access to this type of funding, and (ii) hold more cash-like assets for project funds. Both requirements are more difficult to fulfill for banks in many EMDEs. Finally, there might be reluctance to commit to longer-term funding structure given the increased uncertainty over further regulatory tightening. Since the 2008 crisis, there have been frequent changes to regulatory standards, ranging from Basel II.5 to Basel III to recent additional reforms to Basel III (sometimes referred to as Basel IV) and discussion on another round in a few years (sometimes referred to as Basel IV or V).
A Bloomberg op-ed writes about how Basel III norms have had the effect of reducing cross-border bank loans to infrastructure projects in developing countries.
Cross-border syndicated loans to developing countries — often used as a way to get foreign bankers into projects in emerging economies — fell as a proportion of the total from almost 90% in the 2000s to just over half by 2014. New ways of weighting the risk attached to various assets — such as those the Basel III endgame proposes to implement for US banks — threaten to penalize the poorest countries the most. One study by the G-20’s Global Infrastructure Hub found that if banks used actual historical data instead of the new mechanisms, it might make a 37% difference in how they evaluated their possible losses from loans to infrastructure in developing countries, but only 11% in loans to high-income ones... In the 2000s, banks could lend to infrastructure abroad at margins of 50 basis points; that rose by 2016 to 250 to 300 basis points. When you add this much friction, bankers stop looking for the best projects, only the safest ones — and lazy banking means that savers earn less... Policymakers should consider how endless restrictions aimed at preventing a future financial crisis are worsening a climate crisis that threatens devastating impacts now.

The Basel III endgame, or the last round of the Basel III reforms, finalised in December 2017 and now being implemented, involves the following:

A key objective of the revisions … is to reduce excessive variability of risk-weighted assets (RWAs) … [and] help restore credibility in the calculation of RWAs by: (i) enhancing the robustness and risk sensitivity of the standardised approaches for credit risk and operational risk, which will facilitate the comparability of banks’ capital ratios; (ii) constraining the use of internally-modelled approaches; and (iii) complementing the risk-weighted capital ratio with a finalised leverage ratio and a revised and robust capital floor.
Banks can calculate their RWAs either using Basel III's standardised approach or using their own internal risk models (the internal ratings-based, or IRB, approach). The Basel III endgame requires large and internationally active banks to calculate the two ratios (capital / standardised-approach RWA, and capital / IRB-approach RWA) and apply the lower of the two to their capital adequacy requirement. This forces banks to hold a larger capital buffer.
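A small sketch of how this floor mechanism bites for an infrastructure loan book (all figures are assumptions for illustration; the 75% floor is the figure cited in the GI Hub passage quoted further below, and holding capital against the higher, floored RWA is the equivalent of applying the lower of the two capital ratios):

# Illustrative output-floor calculation for an infrastructure loan portfolio.
exposure = 1_000          # loan book, in $m (assumed)
rw_standardised = 1.00    # assumed 100% risk weight under the standardised approach
rw_irb = 0.55             # assumed lower risk weight from an internal (IRB) model
                          # calibrated on historical infrastructure loss data
output_floor = 0.75       # floor on IRB outputs relative to the standardised approach

rwa_sa = exposure * rw_standardised
rwa_irb = exposure * rw_irb
rwa_final = max(rwa_irb, output_floor * rwa_sa)  # IRB RWA cannot fall below the floor
min_capital = 0.08 * rwa_final                   # 8% minimum requirement, before buffers

print(f"SA RWA {rwa_sa:.0f}, IRB RWA {rwa_irb:.0f}, floored RWA {rwa_final:.0f}, "
      f"minimum capital ${min_capital:.0f}m")
# The floor binds (750 > 550), so capital is held against 750 rather than 550 of RWA,
# raising the regulatory cost of the infrastructure loan book.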

The study by the Global Infrastructure Hub referenced in the Bloomberg article writes about how Basel III norms raise capital costs for infrastructure borrowers.

As banking regulations are not explicitly defined for the infrastructure asset class, they impose higher capital charges and equity investment than appropriate. These higher financing costs either translate into higher prices for infrastructure services (straining consumer affordability), or higher government support (impacting the already record-high global government debt levels)... The lack of recognition in the Basel Framework of infrastructure as an asset class or its subset means that the regulatory rules applied on infrastructure loans are not attuned to its risk sensitivities. The risk weights used for infrastructure loans are typically based on the credit profile of the issuing entity (government, multilateral development banks (MDBs), or corporates). This is problematic mainly for infrastructure loans given to project finance entities which are newly created for a given infrastructure project and have no credit history. The project finance route is taken so public and private sectors can work together for infrastructure development. Current risk weights applied on project finance loans are much higher than those seen in historical risk profiles of infrastructure projects. A GI Hub data assessment finds there is scope to reduce regulatory capital charges by 60% if historical data are used to define risk weights for the infrastructure asset class.

The GI Hub study also writes about the incentive problem with the IRB approach.

Ensuring internal ratings based (IRB) outputs are not less than 75% of those from the standardised model, the output floor was introduced for consistency between the different approaches used by banks. Prior to the introduction of the output floor, the lack of treatment of infrastructure as an asset class was not a major problem. Through the IRB route, banks could use actual risk values observed in the historical performance of infrastructure loans. While banks can still use IRB models, the output floor reduces the incentive to do so. As infrastructure projects do not constitute a large share in their total asset portfolios, banks may not adopt the IRB approach just for the infrastructure asset class. Under the standardised approach, it will be more costly for banks to finance infrastructure projects due to high capital charges.

It finds that use of actual historical data instead of Basel III standards would lower loss estimations from infrastructure by 37% for middle and low-income countries.

For the IRB approach, the Basel Framework has defined Loss Given Default (LGD) input floor by asset class. As regulatory rules are not specifically defined for the infrastructure asset class, the default LGD input floor of 25% for unsecured lending applies. Historical data shows that average LGD values for the infrastructure asset class are highly attractive at less than half that for non-financial corporates. The estimated capital charges based on historical LGD data from actual infrastructure projects are lower than the charges implied by the 25% LGD input floor.

A comparison of actual infrastructure project defaults, covering 7,047 project loans originated from 1983 to 2018 across several sectors, reveals interesting insights. There were 19 defaults in 1,006 social projects, 97 defaults in 1,114 transportation projects, 18 in 305 water and waste projects, 46 in 395 media and telecom projects, 17 in 278 oil and gas distribution projects, and 240 in 3,881 power generation and transmission projects. The dataset had 335 defaults in 5,909 projects in high-income countries and 107 in 1,138 projects in middle- and low-income countries.
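The cumulative default rates implied by these counts can be read off directly (a simple tabulation of the figures quoted above, nothing more):

# Default rates implied by the project-loan counts quoted above.
counts = {
    "Social":                        (19, 1006),
    "Transportation":                (97, 1114),
    "Water and waste":               (18, 305),
    "Media and telecom":             (46, 395),
    "Oil and gas distribution":      (17, 278),
    "Power generation/transmission": (240, 3881),
    "High-income countries":         (335, 5909),
    "Middle/low-income countries":   (107, 1138),
}
for label, (defaults, projects) in counts.items():
    print(f"{label:<32}{defaults/projects:6.1%}")
# Ranges from roughly 2% for social infrastructure to about 12% for media and telecom.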


The right-hand column of Table 5 shows Loss Given Default (LGD) estimates for defaulted infrastructure loans. The figures for the broad categories of HIC and MIC/LIC are 22.1% and 15.8%. These LGDs (of approximately a fifth and a sixth) are extremely low compared to corporate bonds, for which LGDs of 50% are more standard. Corporate loans are often presumed to have LGDs of around 40-45%. LGDs of half or a third of those for, say, senior unsecured bonds mean that the expected losses on infrastructure loans are comparable to relatively highly rated corporate debt securities. To illustrate, a 10-year unrated, HIC infrastructure loan has an Expected Loss equal to 4.8% x 22.1%, i.e., approximately 1%. Assuming a 50% LGD, as is common for a senior unsecured bond, this represents the same EL as a 10-year A-rated bond (which, from Table 4, has a default rate of 2.1%).
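Laying out that arithmetic as a short check (the 4.8% ten-year default rate, 22.1% LGD, 2.1% A-rated default rate, and 50% bond LGD are the figures quoted in the passage above):

# Expected loss (EL) = probability of default (PD) x loss given default (LGD).
el_infra_loan   = 0.048 * 0.221  # 10-year unrated HIC infrastructure loan
el_a_rated_bond = 0.021 * 0.50   # 10-year A-rated bond, assuming a 50% LGD

print(f"Infrastructure loan EL: {el_infra_loan:.2%}")   # ~1.06%
print(f"A-rated bond EL:        {el_a_rated_bond:.2%}")  # ~1.05%
# Comparable expected losses, which is the basis of the argument that infrastructure
# loans merit capital treatment closer to that of highly rated corporate debt.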

The GI Hub's Infrastructure Monitor Report 2023 has a graphic which shows that, with lower default and higher recovery rates, the average expected loss on infrastructure loans is only a fourth of that for non-infrastructure loans in both high-income and middle- to low-income countries.


Default rates on infrastructure loans are lower than those on non-infrastructure loans and have been decreasing over time across all infrastructure sectors.


The GI Hub report draws from research which is cited in this note by Allianz Research. As pointed out above, infrastructure projects have a much lower default probability than corporate loans over the long term.


The research note’s conclusion is encouraging 
Our findings based on new data from Moody’s Investor Services and Standard and Poor’s (Jobst, 2018a) suggest sufficient scope for lower capital charges to be applied to infrastructure investment—through project loans—without altering the current (or planned) calibration methods. While the initial default rate exceeds the level for investment-grade corporates, it steadily declines as the loans mature. After about five years, the marginal default rate is consistent with solid investment-grade credit quality, creating a distinctive “hump-shaped” risk profile (Figures 6 and 7). The recovery rate is high, comparable to that of senior secured corporate loans. This favorable credit performance is even more pronounced for projects in sectors that would fall within the scope of the eligibility requirements for green bonds (Jobst, 2018b). In fact, on a global basis, green infrastructure projects seem to default only half as often over a 10-year period as “brown” projects, with a greater difference in emerging markets relative to advanced economies. Capital charges that recognize the declining downgrade risk of infrastructure debt over time could potentially free up capital; this would help mobilize resources to finance infrastructure—thus promoting the green transition.
Andreas Jobst, then at the World Bank, has a very good graphic that captures the possibilities of a differentiated regulatory capital treatment for infrastructure assets.

All this makes a strong case for treating infrastructure as a separate asset category with its own, lower risk weights.

The study points to other ways in which Basel III discourages infrastructure lending by banks.

In the Basel Framework, the benefits of credit-risk mitigation instruments can be availed if the legal language of unconditional, continuous, and irrevocability is met. Project finance contracts are not straightforward and have legal obligations defined for all parties for different categories of performance outcomes and risk categories. Such contractual complexity curtails the benefits of credit-risk mitigation instruments (i.e. lower capital charges and better terms of finance) for infrastructure project finance loans... the complexity of project finance contracts makes ratings difficult and expensive to obtain - despite the Basel Framework’s high reliance on them. Additionally, many rating agencies do not follow a recovery-based approach, which should be used to rate infrastructure projects given their superior recovery rates over most other asset classes.

Given the predominance of bank loans in infrastructure financing, it's important to examine the problems associated with bank lending to infrastructure projects and remove the bottlenecks that make such lending difficult. In this context, here are some thoughts on infrastructure financing through banks.

1. For a start, the GI Hub article itself refers to changes by way of lower regulatory capital requirements for infrastructure in Europe, South Africa, and China.

The International Association of Insurance Supervisors (IAIS) - the international standards-setting body for the insurance sector - conducted an extensive definition and data review for infrastructure investments and is reforming the Insurance Capital Standard to introduce risk-sensitive regulations for the infrastructure asset class. In Europe, the Solvency II regulations were amended in 2016 to lower capital charges for ‘Qualifying Infrastructure Investments’, and the same followed in South Africa. China’s Risk-Oriented Solvency System (C-ROSS) also has distinct capital charges for infrastructure exposures. The European Banking Authority (EBA) introduced ‘Infrastructure Supporting Factor (ISF)’ to provide a 20% regulatory capital discount to eligible infrastructure investments. EBA found that the market adoption of ISF was lower than expected.

I confess to not having researched India's own regulatory treatment of infrastructure. But assuming that it has not made significant changes to the Basel III framework, India would do well to examine and develop its own appropriate regulatory capital requirements instead of adopting the Basel III endgame in full. This is essential given the direct role of regulatory capital in determining the cost of capital for infrastructure project loans.

The Government could collect historical data on infrastructure loans from banks and analyse it for actual default-related parameters across sectors and lenders, along the lines sketched below. These estimates should then be compared with the regulatory capital requirements prescribed for banks to decide on the changes needed to the existing regulations.
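As a rough sketch of what such an exercise might look like (the file name, column names and the workout-based LGD definition are all hypothetical assumptions, not an actual RBI or bank dataset):

```python
# Hypothetical sketch: estimate default-related parameters from a pooled
# loan-level dataset and flag where the regulatory 25% LGD floor binds.
import pandas as pd

# Assumed columns: sector, exposure, defaulted (0/1), amount_recovered
loans = pd.read_csv("infrastructure_loans.csv")   # hypothetical file

rows = []
for sector, g in loans.groupby("sector"):
    defaulted = g[g["defaulted"] == 1]
    if len(defaulted):
        # workout-based LGD: share of defaulted exposure not recovered
        lgd = 1 - defaulted["amount_recovered"].sum() / defaulted["exposure"].sum()
    else:
        lgd = float("nan")
    rows.append({"sector": sector,
                 "default_rate": g["defaulted"].mean(),
                 "observed_lgd": lgd})

summary = pd.DataFrame(rows)
summary["lgd_floor_binds"] = summary["observed_lgd"] < 0.25   # Basel unsecured LGD floor
print(summary)
```

Where the floor binds consistently across sectors and cycles, that would be prima facie evidence for a differentiated regulatory treatment.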

2. India should also engage at the Basel Committee on Banking Supervision to revisit the regulatory requirements under Basel III for infrastructure, especially in light of the prioritisation of efforts to attract private capital from developed countries to finance climate change adaptation and mitigation projects. This is important to increase lending by foreign banks to developing countries, which has fallen alarmingly since the early 2000s.

3. In an earlier post in the context of sovereign credit ratings, I had suggested that the Government of India should support the development of a data repository on ratings and a few other parameters. On the same lines (and similar to the Moody's Analytics Data Alliance Project Finance Consortium), the GoI should build a data repository that tracks all infrastructure projects, their financing structures, and their life-cycle outcomes; a possible record structure is sketched below. This would require engagement with all financial institutions and a mechanism to facilitate continuous sharing of information. It has to be a project in itself and cannot be done by a Department within the government. One option is to mandate NIIF to do this as part of its infrastructure market development role. Such a database can serve as invaluable decision support for policymaking as well as for the financial structuring of individual projects.
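Purely as an illustration of the kind of record such a repository might hold, here is a minimal sketch; every field name is a hypothetical assumption, not a proposed standard.

```python
# Hypothetical record structure for an infrastructure project data repository.
# Field names and units are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class FinancingTranche:
    lender: str
    instrument: str                       # e.g. term loan, bond, mezzanine
    amount_inr_crore: float
    tenor_years: float
    pricing_bps_over_benchmark: float

@dataclass
class ProjectRecord:
    project_id: str
    sector: str                           # e.g. transport, power, water
    sponsor: str
    financial_close: date
    commercial_operation_date: Optional[date]
    tranches: list[FinancingTranche] = field(default_factory=list)
    defaulted: bool = False
    amount_recovered_inr_crore: float = 0.0   # populated only after a workout
```

Life-cycle events (restructurings, refinancings, defaults, recoveries) would then accrete to each record, which is what makes default-rate and LGD analysis of the kind discussed above possible.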

4. Infrastructure financing combines two distinct and qualitatively different kinds of risk: construction and operations. As shown above, the life-cycle risk profile of an infrastructure project is hump-shaped, with initial construction risks high and often rising further, before declining sharply and stabilising at very low levels during the operations phase. This deters private capital from investing in greenfield projects, which invariably carry construction risk. It's essential to acknowledge this while structuring infrastructure financing. Accordingly, bank loans should be structured to finance construction and O&M as distinct activities with different pricing and terms, as illustrated below. Banking regulations should be examined in this regard and the bottlenecks that deter such structuring removed.
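One way to picture what distinct pricing and terms could mean is a construction-phase facility refinanced at the commercial operation date by a cheaper operations-phase facility. The spreads, tenors and the flat, non-amortising interest calculation below are purely illustrative assumptions, not a recommended structure.

```python
# Illustrative two-phase structure vs a single blended facility
# (flat, non-amortising interest for simplicity; all numbers assumed).
principal = 1000.0
construction_years, operations_years = 4, 16

construction_rate = 0.11    # priced for construction risk
operations_rate = 0.085     # priced for stable operating cash flows
blended_rate = 0.105        # single facility priced for blended life-cycle risk

two_phase = principal * (construction_rate * construction_years
                         + operations_rate * operations_years)
single = principal * blended_rate * (construction_years + operations_years)

print(f"Two-phase structure interest: {two_phase:,.0f}")
print(f"Single blended facility:      {single:,.0f}")
```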

5. Finally, banks grapple with asset-liability mismatches when lending to infrastructure projects. Active policy facilitation is required to address this problem. One way is the aforementioned change to the Basel III norms to recognise infrastructure as a distinct asset category, given its long-term and illiquid nature. Another is to make it easier for banks to raise resources by issuing long-term bonds and using them to make loans to projects. A third option is to make it easier for banks to securitise infrastructure loans and free up their balance sheets; this would require policies that encourage and facilitate demand for securitised infrastructure debt assets. A fourth is to make it easier for banks to come together and provide syndicated loans. Syndication arrangements like takeout financing should be incentivised.

Monday, June 19, 2023

The challenges of scaling and firm capability

The issue of state capability has been a recurrent theme in this blog. Governments in developing countries struggle to implement programs with fidelity, thereby weakening their impact. This is especially so with activities that are human-engagement intensive and non-quantifiable (quality-based), or thick, compared to those that are logistics-heavy and quantifiable, or thin.

I'm inclined to argue that even private companies are not immune from this problem. Firms in the business of producing goods and services that primarily involve thick activities will struggle with execution fidelity as they expand in size. Whether done in the public or private sector, scaling thick activities is very hard, one of the hardest of human endeavours. 

In general, there are at least three differences between the activities of the public and private sectors. One, many of the basic public sector activities are inherently thick activities where the quality of human engagement is critical and is also the difference between success and failure. Two, the transactional nature of private sector activities, where one side pays to buy a good or service, is a powerful incentivising force to keep the system disciplined. Three, public sector activities are embedded in a social and political context and interact constantly with these contextual factors. In contrast, private sector activities are performed in a near sanitised or controlled environment. 

But I'll argue that even with these differences, businesses run into size-related vulnerabilities as they scale. The thin nature of private sector activities, the disciplining force of the market, and technology cannot mask the scaling challenges.

The Times has an article that tries to capture Amazon's emerging vulnerabilities, especially in terms of labour unrest, as it grows in size and expands the scope of its business activities. As Amazon grows, so do the perils of bigness. 

Amazon’s recent growth helped create the choke points that workers have sought to exploit. During its first two decades, the company stayed out of the delivery business and simply handed off your cat toys and razor blades to the likes of UPS, FedEx and the Postal Service. Amazon began transporting many of its own packages after the 2013 holiday season, when a surge of orders backed up UPS and other carriers. Later, during the pandemic, Amazon significantly increased its transportation footprint to handle a boom in orders while seeking to drive down delivery times... 

The problem is that shipping networks are fragile. If workers walk off the job at one of Amazon’s traditional warehouses, the fulfillment center, the business impact is likely to be minimal because the sheer number of warehouses means orders can be easily redirected to another one. But a shipping network has far less redundancy. If one site goes down, typically either the packages don’t arrive on time or the site must be bypassed, often at considerable expense. All the more so if the site handles a huge volume of packages... And as Amazon’s chief executive, Andy Jassy, seeks to drive down shipping times further, the disruptive potential of this kind of organizing may be growing...
According to data from MWPVL International, the consulting firm, a small portion of Amazon fulfillment centers ship an extremely high volume of goods — more than one million items a day during last year’s peak period... If a union strikes and shuts down one of those buildings, “there will be penalties to pay” for Amazon even with its redundant capacity... More precarious is the company’s delivery infrastructure, where such extensive redundancy is impractical. For example, Amazon also operates dozens of so-called sort centers, where often more than 100,000 packages a day are grouped by geographic area. Many metro areas the size of Albuquerque or St. Louis have only one or two such centers, and a metro area as large as Chicago has only four. If one went down... Amazon could be forced to reroute packages to sort centers in other cities, raising costs... To get a sense of what this could cost, consider that FedEx spent hundreds of millions of dollars on such rerouting in 2021.

The Times did an earlier investigation into human resource problems that bedevil Amazon. 

For at least a year and a half — including during periods of record profit — Amazon had been shortchanging new parents, patients dealing with medical crises and other vulnerable workers on leave, according to a confidential report on the findings. Some of the pay calculations at her facility had been wrong since it opened its doors over a year before. As many as 179 of the company’s other warehouses had potentially been affected, too... That error is only one strand in a longstanding knot of problems with Amazon’s system for handling paid and unpaid leaves, according to dozens of interviews and hundreds of pages of internal documents obtained by The New York Times. Together, the records and interviews reveal that the issues have been more widespread — affecting the company’s blue-collar and white-collar workers — and more harmful than previously known, amounting to what several company insiders described as one of its gravest human resources problems.

Workers across the country facing medical problems and other life crises have been fired when the attendance software mistakenly marked them as no-shows, according to former and current human resources staff members, some of whom would speak only anonymously for fear of retribution. Doctors’ notes vanished into black holes in Amazon’s databases. Employees struggled to even reach their case managers, wading through automated phone trees that routed their calls to overwhelmed back-office staff in Costa Rica, India and Las Vegas. And the whole leave system was run on a patchwork of programs that often didn’t speak to one another. Some workers who were ready to return found that the system was too backed up to process them, resulting in weeks or months of lost income. Higher-paid corporate employees, who had to navigate the same systems, found that arranging a routine leave could turn into a morass. In internal correspondence, company administrators warned of “inadequate service levels,” “deficient processes” and systems that are “prone to delay and error.”

This is the longer, detailed investigation. It shows how Amazon's spectacularly successful rapid scale-up of operations during the Covid-19 pandemic (it hired 350,000 new workers between July and October 2020 through computer screening, with little conversation or vetting) relied on efficiency-maximising business process automation that carried within it the seeds of its failures.

Amazon and its founder, Jeff Bezos, had pioneered new ways of mass-managing people through technology, relying on a maze of systems that minimized human contact to grow unconstrained. But the company was faltering in ways outsiders could not see... In contrast to its precise, sophisticated processing of packages, Amazon’s model for managing people — heavily reliant on metrics, apps and chatbots — was uneven and strained even before the coronavirus arrived, with employees often having to act as their own caseworkers, interviews and records show. Amid the pandemic, Amazon’s system burned through workers, resulted in inadvertent firings and stalled benefits, and impeded communication, casting a shadow over a business success story for the ages.

The mass layoffs at Amazon in recent months are a social cost inflicted by the company's efficiency- and profit-maximising, resilience-neglecting business model. It's the classic private appropriation of profits and socialisation of costs. It's therefore important to have regulations that force the likes of Amazon to internalise these social costs. Do R&D investments that promote such efficiency maximisation deserve their current generous tax concessions?

In fact, super-scaling of any activity, even the thin kinds, creates its own vulnerabilities. The numbers of people, functional units, and processes become too large to be supervised and managed effectively through centralised systems, much less ones that are mostly automated. Such organisations require some form of delegation of powers, and discretion and exercise of judgment at appropriate levels. This, in turn, generates risks and vulnerabilities. It's for this reason that all large corporations experience recurrent episodes of management failure. The scale threshold at which an activity outgrows centralised, largely automated management varies widely across activities.

It's tempting to try to overcome this challenge by going further down the workflow automation pathway. Amazon's leave management system is a good, and instructive, example.

As the country’s second largest private employer, Amazon offers a wide array of leaves — paid or unpaid, medical or personal, legally mandated or not. While Amazon used to outsource the management of its leave programs, it brought the effort in-house when providers couldn’t keep up with its growth. It is now one of the largest leave administrators in the country. Employees apply for leaves online, on an internal app, or wade through automated phone trees. The technology that Amazon uses to manage leaves is a patchwork of software from a variety of companies — including Salesforce, Oracle and Kronos — that do not connect seamlessly. That complexity forces human resource employees to input many approved leaves, an effort that last fall alone required 67 full-time employees, an internal document shows... 

Current and former employees involved in administering leaves say that the company’s answer has often been to push them so hard that some required leaves themselves... Amazon’s own teams have not always been well-versed in the system, internal documents show. An external assessment last fall found that the back-office staff members who talk with employees “do not understand” the process for taking leaves and regularly gave incorrect information to workers. In one audited call, which dragged on for 29 minutes, the phone agent told a worker that he was too new to be eligible for short-term disability leave, when in fact workers are eligible from their first day...
In some cases, Amazon has been accused of violating the law. In 2017, Leslie Tullis, who managed a subscription product for children, faced a mounting domestic violence crisis and requested an unpaid leave that employers must offer under Washington State law to protect victims. Once approved, Ms. Tullis would be allowed to work intermittently; she could be absent from work as much as necessary, and with little notice; and she would be protected against retaliation. Amazon granted the leave, but the company didn’t seem to understand what it had said yes to. It had no policy that corresponded to the law of the company’s home state, court documents show. Ms. Tullis said she spent as many as eight hours a week dealing with the company to manage her leave.

In its search for efficiency maximisation, cost minimisation, and limited discretion, Amazon transformed the simplest and most common administrative task, the approval of leaves, into a logistics-only process. Through workflow automation, it has not left any activity to the slightest discretion of managers, even fairly senior ones. Applying the accountability framework, one could say that Amazon has taken accounting-based accountability to its logical extreme in an effort to avoid the hard task of building account-based accountability even among its middle and senior managers. Sample this.

David Niekerk, a former Amazon vice president who built the warehouse human resources operations and who retired in 2016 after nearly 17 years at the company, said that some problems stemmed from ideas the company had developed when it was much smaller. Mr. Bezos did not want an entrenched work force, calling it “a march to mediocrity,” Mr. Niekerk recalled, and saw low-skilled jobs as relatively short-term. As Amazon rapidly grew, Mr. Niekerk said, its policies were harder to implement with fairness and care. “It is just a numbers game in many ways,” he said. “The culture gets lost.”...
Amazon intentionally limited upward mobility for hourly workers, said Mr. Niekerk... Instead... wanted to double down on hiring “wicked smart” frontline managers straight out of college... Amazon’s founder didn’t want hourly workers to stick around for long, viewing “a large, disgruntled” work force as a threat, Mr. Niekerk recalled. Company data showed that most employees became less eager over time, he said, and Mr. Bezos believed that people were inherently lazy. “What he would say is that our nature as humans is to expend as little energy as possible to get what we want or need.” That conviction was embedded throughout the business, from the ease of instant ordering to the pervasive use of data to get the most out of employees. So guaranteed wage increases stopped after three years, and Amazon provided incentives for low-skilled employees to leave...
He and the other newcomers had been hired after only a quick online screening. Internally, some describe the company’s automated employment process as “lights-out hiring,” with algorithms making decisions, and limited sense on Amazon’s part of whom it is bringing in. Mr. Niekerk said Mr. Bezos drove the push to remove humans from the hiring process, saying Amazon’s need for workers would be so great, the applications had to be “a check-the-box screen.” Mr. Bezos also saw automated assessments as a consistent, unbiased way to find motivated workers, Mr. Niekerk said.

While many routine activities are automated in governments too, the system still allows managers significant discretion to step in and relies on them to address emergent problems as per the relevant rules. By and large, despite the complex nature of their tasks and their context, they do a reasonably good job. It's then surprising that massively endowed behemoths like Amazon prefer complete automation for even routine tasks.

Over-engineered solutions like Amazon's will always struggle to anticipate all possible contingencies and to keep pace with a dynamic environment. Amazon's leave automation system illustrates the limits of workflow automation of business processes.

For this reason, I would rate the functioning of governments in many developed countries, the conduct of the census and elections in India, and the like as among the most impressive organisational performances of our times, far superior to anything in the private sector.

Amazon is a classic example of a logistics-heavy business. It manufactures/procures goods, aggregates suppliers, connects buyers, lightly curates the sale, manages the storage and transportation logistics, manages the payments and accounting, and delivers the item to the customer's doorstep. The important processes and outputs associated with each of these activities are inherently clearly definable, quantifiable, and amenable to accounting-based accountability. Besides, there's the disciplining force of the seller's accountability to the buyer. 

But even here, as one goes beyond a certain size, vulnerabilities emerge at the margin across several dimensions. These vulnerabilities are a direct cost of the company's efficiency- and profit-maximising business models across people, logistics, cost of inputs, and pricing. These models tend to skimp on the number of people, their wages, their capacity building, storage space, transportation logistics, and time, thereby leaving insufficient slack when vulnerabilities materialise.

Employees face the force of efficiency maximisation in four dimensions: just enough people are employed to run the business at the least cost; employees are stretched to the margins of their breakdown limits (through intense monitoring of even how long a worker pauses between tasks); they are trained just enough to do their basic tasks; and they are paid just enough to retain them for just long enough (hourly associate turnover was 150% a year, twice that of the retail and logistics industries).


Sample this.

Two measurements dominated most hourly employees’ shifts. Rate gauged how fast they worked, a constantly fluctuating number displayed at their station. Time off task, or T.O.T., tracked every moment they strayed from their assignment — whether trekking to the bathroom, troubleshooting broken machinery or talking to a co-worker... In newer, robotics-driven warehouses like JFK8, those metrics were at the center of Amazon’s operation. A single frontline manager could keep track of 50, 75, even 100 workers by checking a laptop. Auto-generated reports signaled when someone was struggling. A worker whose rate was too slow, or whose time off task climbed too high, risked being disciplined or fired. If a worker was off task, the system assumed the worker was to blame. Managers were told to ask workers what happened, and manually code in what they deemed legitimate excuses, like broken machinery, to override the default

Storage spaces and transportation fleets are just enough to maximise their utilisation, the margin of safety on delivery times is squeezed enough to leave limited room to manoeuvre for even average failures, and commissions are maximised to just about retain suppliers. The power of data analytics and automation is harnessed to reach an exalted plane of efficiency maximisation. It's easy enough to imagine how the likelihood of things going wrong increases as size increases.

The conclusion from the Times article on Amazon's HR problems is apt:

The extent of the problem puts in stark relief how Amazon’s workers routinely took a back seat to customers during the company’s meteoric rise to retail dominance. Amazon built cutting-edge package processing facilities to cater to shoppers’ appetite for fast delivery, far outpacing competitors. But the business did not devote enough resources and attention to how it served employees, according to many longtime workers.

The administrative tools available to a business like Amazon are heavily skewed in the direction of efficiency maximisation, at the cost of resilience. For a start, efficiency maximisation is at the core of management theory and of what's taught in business schools. It permeates everything the company does. Given these business model choices, the resources (time and man-hours) spent modelling resilience will be negligible compared to what's expended on efficiency and profit maximisation. In fact, all models will heavily under-weight resilience and over-weight efficiency (and profit) maximisation. Sample this.

“Amazon can solve pretty much any problem it puts its mind behind,” Paul Stroup, who until recently led corporate teams understanding warehouse workers said in an interview. The human resources division, though, had nowhere near the focus, rigor and investment of Amazon’s logistical operations, where he had previously worked. “It felt like I was in a different company,” he said.

In some sense, it's a choice companies make in terms of their willingness to pay the cost of doing business. On each of these dimensions, the standard efficiency and profit maximisation approach invariably trades off against resilience. And as size increases, so do the risks.