Ed Leamer (1944-2025)

3 Dec, 2025 at 12:55 | Posted in Statistics & Econometrics | 1 Comment

Ed Leamer transformed economists’ understanding of empirical evidence with his landmark 1983 paper, Let’s Take the Con Out of Econometrics. In it, he challenged the profession’s fixation on ‘statistical significance’, describing much empirical research as “measuring with a rubber ruler.”

Leamer’s central claim was that complex econometric models depend heavily on hidden, subjective decisions made by the researcher — particularly regarding which variables to include or omit. Again and again, he reminded the profession that honest measurement is more demanding — and vastly more valuable — than merely discovering a convenient ‘significant’ coefficient.


Central bank independence — a neoliberal con trick

2 Dec, 2025 at 20:57 | Posted in Economics | Leave a comment


[h/t Jan Milch]

Central bank independence

1 Dec, 2025 at 21:40 | Posted in Economics | 6 Comments

Central banks wield near-unlimited power over monetary policy, a policy that largely governs inflation, employment, and economic stability. This power ought to be subject to greater democratic oversight to ensure it aligns with the interests of us, the citizens. Over time, central banks have developed a sort of ‘policy bias’ in which inflation control is prioritised over unemployment and welfare, a bias that broad sections of society find deeply offensive.

If, through a constitutionally sound and democratically taken decision, we conclude that instead of a given inflation target we wish to prioritise fairness, welfare, and jobs, it is difficult to see why we should be bound by institutions that impose a more narrowly ‘economicist’ choice upon us via a framework designed precisely for that purpose. We must not forget that economics is not always a trump card in policy matters. This really ought to be obvious, but economists, not least, have a tendency to forget it. From an economic perspective, one could certainly argue, for example, that it is more economically efficient to let the elderly jump off a cliff than to expand costly elderly care. But who would seriously advocate or accept such a thing? Other considerations sometimes carry more weight than purely economic ones.

Beyond the problems with the ‘democratic aspects’ of central banks’ independence, there are also — especially given the remarkable failures to meet inflation targets over the last two decades — good reasons to question central banks’ competence. International research has also convincingly shown that the empirical evidence for the argument that independent central banks benefit the economy is almost non-existent.

For anyone naive enough to believe that central bank governors’ work is based on solid evidence-based science — forget it! As the Swedish documentary Skuldfeber convincingly demonstrates, the work of central bank governors is little more than subtle, fairy-tale charlatanism — which they themselves admit!

She’s Not There

1 Dec, 2025 at 20:27 | Posted in Varia | Leave a comment


Ergodicity — an introduction

30 Nov, 2025 at 17:17 | Posted in Economics | 2 Comments

Ergodicity often hides behind a veil of mathematical complexity, yet at its core, it offers a profoundly simple and insightful lens through which to view and understand probability and time. It challenges us to distinguish between two distinct types of averages: the ensemble average and the time average.

To grasp this distinction, let us look at a simple, everyday example: determining which is the most popular shop in a city. How would we proceed? We are faced with two distinct methods.

The first method is the ensemble average. We freeze time for a single moment, and we calculate a statistical average across a large population. We count the number of people in the bakery, the coffee shop, and the hardware store, and divide by the total population. We find that the ensemble average for being in the bakery is, say, 10%. This conclusion is drawn from a cross-section of the population at a single point in time. It is a photograph of the collective.

The second method is the time average. Here, we shift our focus from the breadth of the crowd to the depth of a single life. We choose one person, yours truly, and we follow him for a year. We record the total time he spends in each shop and divide it by the total time of the study. We discover that his time average for being in the bakery is only 0.5%, while his time average for his local grocery store is much higher. This conclusion is a longitudinal study of a single trajectory.

When we compare these two averages, we arrive at the critical juncture. The ensemble average tells us the bakery is a crowd-puller. The time average tells us the grocery store is the cornerstone of yours truly’s routine. The two calculations yield different results, which is typical of a non-ergodic system.

A system is ergodic if, and only if, the time average equals the ensemble average for a given observable. In such a world, the cross-sectional snapshot would perfectly mirror the long-term experience of any one individual within it. Yours truly’s annual time averages would align precisely with the city’s instantaneous ensemble averages. But the real world is not so uniform or predictable. It is full of variation, shaped by diverse preferences and random events. The two measures diverge.

The implications of this insight are profound. It forces us to scrutinise the statistics we use to understand our lives. When we hear about an ‘average’ outcome, we must ask: is this an ensemble average or a time average? In finance, for example, a risky investment might have a positive ensemble average return (the expected value across all possible investors looks good), but could lead to total ruin for a single investor who follows that path over time (yours truly has elaborated on this here), resulting in a negative time average of their wealth growth. The ensemble looks promising. The individual’s trajectory is catastrophic.
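To make the divergence concrete, here is a minimal simulation sketch in Python. It assumes the classic +50%/−40% coin-toss gamble that is standard in the ergodicity-economics literature (an illustrative choice of mine, not taken from the post itself): the ensemble average across many players grows every round, while the time average experienced along any single trajectory shrinks.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative multiplicative gamble (an assumption for this sketch, not from
# the post): each round a fair coin multiplies wealth by 1.5 or by 0.6.
factors = np.array([1.5, 0.6])

# Ensemble average: many players, one round each.
ensemble_sample = rng.choice(factors, size=100_000)
print("ensemble average factor per round:", round(float(ensemble_sample.mean()), 3))   # ~1.05

# Time average: one player, many consecutive rounds (geometric mean per round).
single_path = rng.choice(factors, size=100_000)
print("time average factor per round   :", round(float(np.exp(np.log(single_path).mean())), 3))  # ~0.95
```

The expected value per round is 1.05, yet almost every individual trajectory decays at roughly 5% per round: exactly the ensemble/time divergence described above.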

Ultimately, understanding ergodicity is about understanding the critical difference between these two averages. It emphasises the ontological reality that an individual life constitutes a singular, continuous time average unfolding within a shifting landscape of ensemble averages. Recognising this distinction enables us to formulate more precise questions, critically interrogate aggregate data, and approach the probabilistic complexity of life with greater analytical acuity.

Over the years, some of us have tried to teach our economics students something of the importance of questioning the common ergodicity assumption. This assumption, often left unstated, underpins most mainstream economic models concerning preferences and expected utility.

One of the problems has been the lack of an accessible textbook on ergodicity economics. However, this has now been remedied!

Ole Peters and Alexander Adamou’s newly published An Introduction to Ergodicity Economics provides an excellent introduction to a rapidly expanding field of research within economics.

Paul Samuelson once famously claimed that the ‘ergodic hypothesis’ is essential for advancing economics from the realm of history to the realm of science. But is it really tenable to assume — as Samuelson and most other mainstream economists do — that ergodicity is essential to economics?

Sometimes ergodicity is mistaken for stationarity. But although all ergodic processes are stationary, they are not equivalent.

Let’s say we have a stationary process. That does not guarantee that it is also ergodic. The long-run time average along a single realisation of the stationary process may not converge to the expectation of the corresponding random variables — and so the long-run time average may not equal the probabilistic (expectational) average.

Say we have two coins, where coin A has a probability of 1/2 of coming up heads, and coin B has a probability of 1/4 of coming up heads. We pick either of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be either one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time averages — [H1 + … + Hn]/n — converge to 1/2 if coin A is chosen, and to 1/4 if coin B is chosen. Each of these time averages occurs with probability 1/2, so their expected average is 1/2 x 1/2 + 1/2 x 1/4 = 3/8, which obviously is not equal to either 1/2 or 1/4. The time averages depend on which coin you happen to choose, while the probabilistic (expectational) average is calculated for the whole “system” consisting of both coin A and coin B.
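A minimal sketch of this two-coin process (a Python illustration of my own, using exactly the numbers in the example above) shows the time averages settling at 1/2 or 1/4 depending on the coin drawn, while the expectational average over the whole system is 3/8:

```python
import numpy as np

rng = np.random.default_rng(1)

def time_average(n_tosses: int) -> float:
    """Pick coin A (P(heads)=1/2) or coin B (P(heads)=1/4) once, with equal
    probability, then toss that same coin n_tosses times and return the
    time average of the heads indicators H_1, ..., H_n."""
    p_heads = rng.choice([0.5, 0.25])
    return float((rng.random(n_tosses) < p_heads).mean())

# Long-run time averages: each run converges to 1/2 or 1/4, never to 3/8.
print([round(time_average(100_000), 3) for _ in range(8)])

# Expectational (ensemble) average over the whole two-coin system:
# 1/2 * 1/2 + 1/2 * 1/4 = 3/8.
print(round(float(np.mean([time_average(1) for _ in range(200_000)])), 3))   # ~0.375
```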

Instead of arbitrarily assuming that people have a certain type of utility function — as in mainstream theory — time average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When our assets are gone, they are gone. The fact that, in a parallel universe, they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.

Time average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage — and risks — creates extensive and recurrent systemic crises.

Suppose I want to play a game. Let’s say we are tossing a coin. If heads come up, I win a dollar, and if tails come up, I lose a dollar. Suppose further that I believe I know that the coin is asymmetrical and that the probability of getting heads (p) is greater than 50% – say 60% (0.6) – while the bookmaker assumes that the coin is totally symmetric. How much of my bankroll (T) should I optimally invest in this game?

A strict mainstream utility-maximising economist would suggest that my goal should be to maximise the expected value of my bankroll (wealth), and according to this view, I ought to bet my entire bankroll.

Does that sound rational? Most people would answer ‘no’. The risk of losing is so high that after just a few games — the expected time until the first loss is 1/(1-p), which in this case is 2.5 — one would, in all likelihood, have lost and gone bankrupt. The expected-value maximising economist does not seem to have a particularly compelling approach.
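A small simulation sketch makes the contrast vivid. The post does not specify an alternative strategy, so as an assumption I compare the all-in expected-value maximiser with a bettor who stakes the growth-optimal (Kelly) fraction, which for an even-money bet with p = 0.6 is 2p − 1 = 0.2 of the bankroll:

```python
import numpy as np

rng = np.random.default_rng(7)
p = 0.6          # believed probability of winning an even-money coin bet

def final_bankroll(fraction: float, n_rounds: int = 100) -> float:
    """Stake a fixed fraction of the current bankroll each round; a win adds
    the stake, a loss subtracts it. Start from a bankroll of 1."""
    bankroll = 1.0
    for _ in range(n_rounds):
        stake = fraction * bankroll
        bankroll += stake if rng.random() < p else -stake
        if bankroll <= 0.0:
            return 0.0
    return bankroll

all_in = np.array([final_bankroll(1.0) for _ in range(10_000)])   # expected-value maximiser
kelly = np.array([final_bankroll(0.2) for _ in range(10_000)])    # time-average (Kelly) bettor

print(f"all-in: ruined in {np.mean(all_in == 0):.1%} of runs, median final wealth {np.median(all_in):.2f}")
print(f"kelly : ruined in {np.mean(kelly == 0):.1%} of runs, median final wealth {np.median(kelly):.2f}")
```

Betting everything each round maximises the expected value of the bankroll, yet essentially every simulated trajectory ends in ruin within a handful of rounds; the fractional bettor survives and typically multiplies the bankroll severalfold.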

While I share many of the critiques that Peters and Adamou level against mainstream expected utility theory, I diverge from their conclusion regarding human decision-making. In their book, Peters and Adamou write:

We will do decision theory by using mathematical models … The wealth process and the decision criterion may or may not remind you of the real world. We will not worry too much about the accuracy of these reminiscences. Instead we will ‘shut up and calculate’ — we will let the mathematical model create its world … Importantly, we need not resort to psychology to generate a host of behaviours, such as preference reversal as time passes or impatience of poorer individuals … Unlike utility functions, treated in mainstream decision theory as encoding psychological risk preferences, wealth dynamics are something about which we can reason mechanistically

Contrary to their position, I contend that psychological factors are not merely incidental but are fundamental. Any framework that seeks to describe or predict human action must place them at its core.

When evaluating decisions, the way we measure ‘growth’ changes the story dramatically. Consider two very different processes. In the first (Gamble 1), an investor begins with $10,000 and passes through three rounds of wealth reduction, ending with just half a cent. In the second (Gamble 2), the investor faces a single gamble: a 99.9% chance of walking away with $10,000,000 and a 0.1% chance of ending with nothing.

In Gamble 1, the deterministic shrinking process is straightforward: each round reduces the investor’s wealth by a constant proportion. Mathematically, the wealth after three rounds is $0.005. The investor loses about 99% of wealth per round. The average per-round growth rate is about −99%. The result is a guaranteed catastrophe.

In Gamble 2, the investor risks everything on a single binary outcome — a 99.9% chance of $10,000,000, and a 0.1% chance of nothing. The expected value is huge — on average, the gamble turns $10,000 into $9,990,000. However, if the gamble were repeated many times with the entire bankroll at stake, ruin would be inevitable. Since there is always some probability of hitting zero, the long-run geometric growth rate is negative infinity. Once the investor reaches zero wealth, no recovery is possible.

Which gamble is regarded as superior depends on the objective. If the goal is maximising expected wealth from a one-off decision, Gamble 2 dominates, offering huge expected gains. But if the goal is preserving wealth over repeated plays, Gamble 2 is disastrous. Gamble 1 is equally unappealing — it guarantees destruction without the possibility of recovery.

The metric we use — arithmetic or geometric averages and growth rates — can give entirely different conclusions. From a time-average (geometric) growth perspective, one would favour Gamble 1, since its growth rate of roughly −99% per round is still higher than Gamble 2’s minus infinity. Yet I suspect very few investors would actually share that preference.
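For readers who want to check the arithmetic, here is a brief sketch (the per-round factor of Gamble 1 is simply backed out from the stated endpoints):

```python
import numpy as np

w0 = 10_000.0

# Gamble 1: three deterministic rounds taking $10,000 down to half a cent.
g1_factor = (0.005 / w0) ** (1 / 3)        # ~0.008, i.e. roughly -99% per round
print(f"Gamble 1 growth rate per round: {g1_factor - 1:.1%}")

# Gamble 2: one shot, 99.9% chance of $10,000,000 and 0.1% chance of nothing.
outcomes = np.array([10_000_000.0, 0.0])
probs = np.array([0.999, 0.001])

print(f"Gamble 2 expected value: {probs @ outcomes:,.0f}")                   # 9,990,000
with np.errstate(divide="ignore"):
    print("Gamble 2 expected log growth:", probs @ np.log(outcomes / w0))    # -inf
```

The arithmetic expectation crowns Gamble 2; the geometric (time-average) growth rate crowns Gamble 1, since anything beats minus infinity. Neither criterion, taken alone, captures what a sensible person would actually choose.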

When it comes to human decision-making, psychological factors are paramount. This is especially true when confronting uncertainty. The models and examples presented often operate, either explicitly or implicitly, within the realm of quantifiable risk. On this point, it is wise to recall the crucial distinction made by Keynes a century ago: measurable risk is fundamentally different from unmeasurable uncertainty. In the latter domain, where probabilistic calculations break down, psychology inevitably plays a decisive role.

Consequently, while ‘optimal growth rates’ may serve as a useful decision criterion in specific, well-defined contexts, they cannot be considered the sole or universally best guide for human action. A comprehensive theory of decision-making must account for the full spectrum of human cognition and attitude, particularly when navigating the unquantifiable unknowns of the real world.

Robert Reich on Trump’s fascism

30 Nov, 2025 at 15:56 | Posted in Politics & Society | 1 Comment


America needs Robert Reich’s unflinching clarity. He diagnoses Trumpian fascism not as politics, but as a contagion: a cult of personality, the eroding trust in elections and a free press, and the strategic use of scapegoats and violent rhetoric. His voice is a vaccine against complacency, and the nation needs this civic defence now more than ever.

Statler & Waldorf

30 Nov, 2025 at 14:22 | Posted in Varia | Leave a comment


Statler and Waldorf, the definitive hecklers, made watching The Muppet Show an enduring delight through their brutal honesty and brilliantly bickering chemistry.

Are you stupid?

29 Nov, 2025 at 09:09 | Posted in Politics & Society | Leave a comment


Confirms — again — what we already knew: Trump is a reckless, untruthful, outrageous, incompetent and undignified buffoon!

Logic and truth

27 Nov, 2025 at 19:39 | Posted in Theory of Science & Methodology | 3 Comments

To be ‘analytical’ and ‘logical’ is a quality most people find commendable. These words carry a positive connotation. Scientists are thought to think more deeply than most because they employ ‘logical’ and ‘analytical’ methods. Dictionaries often define logic as “reasoning conducted or assessed according to strict principles of validity” and ‘analysis’ as concerning the “breaking down of something.”

But this is not the whole picture. As used in science, analysis usually implies something more specific. It means to separate a problem into its constituent elements, thereby reducing complex — and often complicated — wholes into smaller, simpler, and more manageable parts. One takes the whole and breaks it down (decomposes it) into its separate parts. By examining the parts individually, one is supposed to gain a better understanding of how they operate. Built upon this more or less ‘atomistic’ knowledge, one then expects to be able to predict and explain the behaviour of the complex and complicated whole.

In economics, this means taking the economic system, dividing it into its separate parts, analysing these parts one at a time, and then, after separate analysis, putting the pieces back together.

The ‘analytical’ approach is typically used in economic modelling, where one begins with a simple model containing few isolated and idealised variables. Through ‘successive approximations,’ one then adds more variables, aiming finally to arrive at a ‘true’ model of the whole.

This may sound like a convincing and sound scientific approach.

The approach, however, rests on a precarious assumption.

The procedure only works effectively when one has a machine-like whole — a system or economy — where the parts exist in fixed and stable configurations. And if there is anything we know about reality, it is that it is not a machine! The world we inhabit is not a ‘closed’ system. On the contrary, it is an essentially ‘open’ system. Things are uncertain, relational, interdependent, complex, and ever-changing.

Without assuming that the underlying structure of the economy you are trying to analyse remains stable and invariant, there is no chance the equations in your model will hold constant. This is the very rationale for economists’ use — often only implicitly — of the ceteris paribus assumption. But — nota bene — this can only be a hypothesis. You must argue the case for it. If you cannot supply any sustainable justifications or warrants for the adequacy of that assumption, then the entire analytical economic project becomes pointless, uninformative nonsense.

Not only must we assume that we can shield variables from each other analytically (external closure), we must also assume that each variable itself is amenable to being understood as a stable, regularity-producing machine (internal closure). We know, of course, that this is generally not possible.

Some things, relations, and structures are not analytically graspable. Trying to analyse parenthood, marriage, or employment piece by piece does not make sense. To be a chieftain, a capital-owner, or a slave is not an individual property of a person. It can only exist when individuals are integral parts of certain social structures and positions. Social relations and contexts cannot be reduced to individual phenomena. A cheque presupposes a banking system, and being a tribe-member presupposes a tribe. By failing to account for this in their ‘analytical’ approach, economic ‘analysis’ becomes uninformative nonsense.

Using ‘logical’ and ‘analytical’ methods in the social sciences often means economists succumb to the fallacy of composition — the belief that the whole is nothing but the sum of its parts. In society and the economy, this is arguably not the case. An adequate analysis of society and the economy, a fortiori, cannot proceed by merely adding up the acts and decisions of individuals. The whole is more than the sum of its parts.

Mainstream economics is built on using the ‘analytical’ method. The models built with this method presuppose that social reality is ‘closed.’ Since social reality is known to be fundamentally ‘open,’ it is difficult to see how such models can explain anything about what happens in such a universe. Postulating closed conditions to make models operational and then imputing these closed conditions to society’s actual structure is an unwarranted procedure that fails to take necessary ontological considerations seriously.

In the face of the methodological individualism and rational choice theory that dominate mainstream economics, we must admit that whilst knowing the aspirations and intentions of individuals is necessary for explaining social events, it is far from sufficient. Even the most elementary ‘rational’ actions in society presuppose the existence of social forms that cannot be reduced to individual intentions. Here, the ‘analytical’ method fails once more.

The overarching flaw with the ‘analytical’ economic approach, using methodological individualism and rational choice theory, is that it reduces social explanations to purportedly individual characteristics. But many of an individual’s characteristics and actions originate in, and are made possible only through, society and its relations. Society is not a Wittgensteinian ‘Tractatus’-world characterised by atomistic states of affairs. Society is not reducible to individuals, since the social characteristics, forces, and actions of the individual are determined by pre-existing social structures and positions. Even though society is not a volitional individual, and the individual is not an entity existing outside society, the individual (actor) and society (structure) must be kept analytically distinct. They are tied together through the individual’s reproduction and transformation of already given social structures.

Since at least the marginal revolution in economics in the 1870s, it has been an essential feature of the discipline to ‘analytically’ treat individuals as essentially independent and separate entities of action and decision. But, in truth, in a complex, organic, and evolutionary system like an economy, that kind of independence is a profoundly unrealistic assumption. To simply assume strict independence between the variables we try to analyse does not help us in the least if that hypothesis proves unwarranted.

To apply the ‘analytical’ approach, economists must essentially assume that the universe consists of ‘atoms’ that exercise their own separate and invariable effects, such that the whole consists of nothing but an addition of these separate atoms and their changes. These simplistic assumptions of isolation, atomicity, and additivity are, however, at odds with reality. In real-world settings, we know that ever-changing contexts make it futile to seek knowledge through such reductionist assumptions. Real-world individuals are not reducible to contentless atoms and are thus not susceptible to atomistic analysis. The world is not reducible to a set of atomistic ‘individuals’ and ‘states.’ How variable X works and influences real-world economies in situation A cannot simply be assumed to be understood by looking at how X works in situation B. Knowledge of X probably tells us little if we do not consider its dependence on Y and Z. It can never be legitimate simply to assume the world is ‘atomistic.’ Assuming real-world additivity cannot be correct if the entities around us are not ‘atoms’ but ‘organic’ entities.

If we want to develop new and better economics, we must give up the single-minded insistence on using a deductivist straitjacket methodology and the ‘analytical’ method. To focus scientific endeavours solely on proving things within models is a gross misapprehension of the purpose of economic theory. Deductivist models and ‘analytical’ methods disconnected from reality are not relevant for predicting, explaining, or understanding real-world economies.

To have ‘consistent’ models and ‘valid’ evidence is not enough. What economics needs are real-world relevant models and sound evidence. Aiming only for ‘consistency’ and ‘validity’ sets the aspirations of economics too low for developing a realistic and relevant science.

Economics is not mathematics or logic. It is about society. The real world.

Models may help us think through problems. But we should never forget that the formalism we use in our models is not self-evidently transportable to a largely unknown and uncertain reality. The tragedy of mainstream economic theory is that it believes the logic and mathematics it uses are sufficient for dealing with our real-world problems. They are not! Model deductions based on questionable assumptions can never be anything but pure exercises in hypothetical reasoning.

The world in which we live is inherently uncertain, and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a ‘weight of argument’ that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If “God does not play dice,” as Einstein maintained, I would add, “nor do people.” The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent us from treating it as if it were constituted by ‘legal atoms’ with discretely distinct, separable, and stable causal relations. Our knowledge, accordingly, has to be of a rather fallible kind.

If the real world is fuzzy, vague, and indeterminate, then why should our models be built upon a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we must ask ourselves if our models are truly relevant.

‘Human logic’ must supplant the classical — formal — logic of deductivism if we wish to say anything interesting about the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, where concepts and entities lack clear boundaries and continually interact and overlap. In this world, we are better served by a methodology that acknowledges that the more we know, the more we know we do not know.

Mathematics and logic cannot establish the truth value of facts. Never have. Never will.

Go Now

26 Nov, 2025 at 21:47 | Posted in Varia | Leave a comment


Economics of uncertainty

26 Nov, 2025 at 11:26 | Posted in Economics | 1 Comment

The disregard of radical uncertainty by a generation of economists condemned modern macroeconomics to near irrelevance … Keynes’ critique of ‘getting on the job’ without asking ‘whether the job is worth getting on with’ would prove to be as true of the new macroeconomic theorising as of the older econometric modeling …

Over forty years, the authors have watched the bright optimism of a new, rigorous approach to economics dissolve into the failures of prediction and analysis which were seen in the global financial crisis of 2007-08. And it is the pervasive nature of radical uncertainty which is the source of the problem.

Kay and King’s challenge to contemporary economics resonates with a much older intellectual tradition — one that recognises the limits of human foresight and the complexity of economic life. In this respect, Radical Uncertainty is a sophisticated rearticulation of concerns voiced by earlier thinkers such as Frank Knight, John Maynard Keynes, and Friedrich Hayek, each of whom, in different ways, emphasised the irreducible uncertainty inherent in socio-economic systems.

Frank Knight famously distinguished between measurable risk and unmeasurable uncertainty. Risk can be insured against or incorporated into actuarial tables; uncertainty cannot. For Knight, entrepreneurial judgement — not optimisation — was at the heart of economic activity because it required navigating precisely those situations that could not be captured by known numerical probabilities.

John Maynard Keynes developed a similar, though more expansive, view. His concept of ‘weight of argument’ and his insistence that economic decisions are shaped by expectations, conventions, and ‘animal spirits’ reflect a recognition that the future is genuinely unknowable. In Keynes’s world, probabilities are not merely difficult to calculate; they may not exist in any meaningful sense. Kay and King explicitly situate their claims within this Keynesian insight, arguing that modern macroeconomics has lost sight of the epistemic humility that Keynes regarded as essential.

[Although Kay and King more or less equate Knight’s and Keynes’ views, I think there exists a fundamental distinction between the uncertainty concepts of Knight and Keynes: Knight’s is rooted primarily in epistemology, while Keynes’s is decidedly ontological.

The most profound implication of this difference is that subscribing to the epistemological (Knightian) view can foster the mistaken belief that with superior information and greater computational power, we might eventually be able to calculate probabilities and describe the world as an ergodic system. However, as Keynes convincingly argued, this is ontologically impossible. For Keynes, the wellspring of uncertainty lies in the intrinsic nature of a non-ergodic reality.

Keynes’ concern was not merely, or primarily, the epistemological fact that we currently lack knowledge of certain things. Instead, it addressed a deeper and more far-reaching ontological fact: in most ‘large world’ contexts, there simply is no firm or stable basis upon which to form any quantifiable probabilities or expectations at all.

Often we do not know because we cannot know.]

Friedrich Hayek, from a quite different intellectual tradition, likewise emphasised the decentralised, dispersed nature of knowledge and the impossibility of capturing it within a single model. Whilst Hayek’s critique was aimed at central planning, the underlying point — that no system of economic calculation can fully grasp the complexities of an evolving society — echoes strongly in Kay and King’s work.

In his analysis of ‘Black Swan’ events and ‘fat-tailed’ distributions, Nassim Nicholas Taleb has highlighted the fragility of systems built on the assumption of stable probabilistic structures. Taleb focuses on the statistical dangers of small-world modelling; Kay and King broaden the argument to show that these dangers are not merely technical but conceptual. Where Taleb stresses tail risk, Kay and King stress the impossibility of enumerating the relevant distribution in the first place.

Thus, Radical Uncertainty sits at a crossroads of multiple intellectual traditions: Knightian and Keynesian genuine uncertainty, Hayekian epistemic limits, and Talebian systemic fragility. What unifies these perspectives is the argument that modern economic modelling often betrays a profound misunderstanding of the world it seeks to describe.

The implications of Kay and King’s critique extend far beyond academic debates over modelling. If the economy is truly a ‘large world’ system, then policies designed for a ‘small world’ — those predicated on precise forecasts and supposedly robust risk management — are doomed to fail.

Central banks, for instance, increasingly rely on sophisticated models to forecast inflation, output, and financial stability risks. These models typically embed assumptions of rational expectations, representative agents, and stable distributions. Yet, as the crisis of 2008 revealed, such models can be dangerously blind to systemic interactions, feedback loops, and shifts in expectations. The belief that interest rates can be fine-tuned in response to forecasts gives policymakers a false sense of control — one that evaporates when the economy behaves in ways that lie outside their model’s conceptual universe.

Kay and King argue that central banking must adopt a radically different orientation — one rooted in scenario thinking, robustness, and institutional resilience rather than the pursuit of optimality. The Bank of England, the ECB, and the Federal Reserve have all, to varying degrees, started to move in this direction, but the legacy of small-world modelling remains deeply embedded.

Financial regulation likewise suffers when based on risk-weighted models that underestimate the interdependence of financial institutions. When regulators rely on models that assume a normal distribution of shocks or stable correlations between asset classes, they fail to anticipate cascading failures — precisely the form that systemic crises typically take. The result is a regulatory architecture that appears rigorous but is, in fact, brittle.

Radical uncertainty implies that what matters is not numerical precision but the capacity to absorb surprise. This suggests a shift away from micro-prudential rules towards a macro-prudential philosophy focused on diversity, redundancy, and buffers — concepts far closer to engineering or ecology than to traditional mainstream ideas of optimisation.

At a broader societal level, Kay and King advocate for institutions that promote resilience, adaptability, and distributed decision-making. A world of radical uncertainty rewards institutions that can learn, revise, and evolve.

Optimisation, as they point out, is a fair-weather strategy. It performs brilliantly in stable, predictable environments but fails catastrophically when confronted with shocks. Resilient systems, by contrast, appear ‘inefficient’ in conventional economic terms — they hold spare capacity, maintain buffers, and decentralise authority. Yet these very features make them capable of weathering the unpredictable.

A further implication concerns decision theory itself. Kay and King reject the view that decision-making should revolve around probabilistic calculus. Instead, they argue for the centrality of narrative reasoning: constructing coherent stories about how the world works, how events may unfold, and what actions are prudent. This is not irrationality; it is a practical and context-sensitive mode of reasoning that mirrors how successful individuals, firms, and policymakers actually navigate complex environments.

Judgement is elevated above optimisation. Experience and contextual awareness become more important than the ability to manipulate formal models.

In our careers we have seen repeatedly how people immersed in technicalities, engaged in day-to-day preoccupations, have failed to stand back and ask, ‘What is going on here?’

Taken together, these arguments amount to a call for a more realistic and humble discipline — echoing Keynes’ wish that economists should “manage to get themselves thought of as humble, competent people, on a level with dentists” — one that accepts the limits of knowledge rather than obscuring them. Economics, in Kay and King’s vision, would become more interdisciplinary, drawing on history, psychology, political economy, and sociology. It would resist the temptation to convert uncertainty into pseudo-certainty. And it would recognise that, in an open world filled with novelty and genuine unpredictability, the primary task of economic reasoning is not to forecast the future but to equip us to cope with whatever the future brings.

The world is not a casino or a laboratory. It is a place of surprise, creativity, and human fallibility. By acknowledging this, economists can abandon the false comfort of the small world and embrace the challenges — and opportunities — of the large one.

Kenneth Boulding on economists and madmen

25 Nov, 2025 at 17:18 | Posted in Economics | Comments Off on Kenneth Boulding on economists and madmen


Keynes and Tinbergen on econometric modelling

24 Nov, 2025 at 00:16 | Posted in Statistics & Econometrics | 3 Comments

Mainstream economists often hold the view that Keynes’s criticism of econometrics was the result of a profoundly mistaken thinker who disliked and largely failed to understand it.

This, however, is nothing but a gross misapprehension.

To be careful and cautious is not the same as to dislike. Keynes did not misunderstand the crucial issues at stake in the development of econometrics. Quite the contrary—he knew them all too well, and was deeply dissatisfied with the validity and philosophical underpinnings of the assumptions required to apply its methods.

Methodology must be suited to the nature of the object of study—that is, the ontology of the social world. Keynes’s critique of econometrics is fundamentally ontological: one cannot apply statistical tools designed for stable, repetitive systems (such as games of chance or classical physics) to a social reality that is inherently unstable, non-atomic, and permeated by human agency and genuine uncertainty (in the Knightian/Keynesian sense).

Keynes’s critique of the ‘logical issues’—the conditions that must be satisfied to validly apply econometric methods—remains valid and unanswered. The problems he identified are still with us today and are largely unsolved. To ignore them—the most common practice amongst applied econometricians—is not to solve them.

To apply statistical and mathematical methods to the real-world economy, the econometrician must make some rather strong assumptions. In a review of Tinbergen’s early econometric work, published in The Economic Journal in 1939, Keynes delivered a comprehensive critique, focusing on the limiting and unrealistic character of the assumptions upon which econometric analyses are built:

Completeness: Where Tinbergen attempts to specify and quantify the factors influencing the business cycle, Keynes maintains that one must have a complete list of all relevant factors to avoid misspecification and spurious causal claims. Typically, this problem is ‘solved’ by econometricians assuming they have somehow arrived at a ‘correct’ model specification. Keynes was, to put it mildly, profoundly sceptical:

It will be remembered that the seventy translators of the Septuagint were shut up in seventy separate rooms with the Hebrew text and brought out with them, when they emerged, seventy identical translations. Would the same miracle be vouchsafed if seventy multiple correlators were shut up with the same statistical material? And anyhow, I suppose, if each had a different economist perched on his a priori, that would make a difference to the outcome.

Homogeneity: To make inductive inferences possible—and to be able to apply econometrics—the system we try to analyse must have a large degree of ‘homogeneity’. According to Keynes, most social and economic systems—particularly from the perspective of real historical time—lack this ‘homogeneity’. As he had argued as early as his Treatise on Probability (Ch. 22), it was not always possible to take repeated samples from a fixed population when analysing real-world economies. In many cases, there is simply no reason at all to assume the samples are homogeneous. A lack of ‘homogeneity’ renders the principle of ‘limited independent variety’ inapplicable, and hence makes inductive inferences, strictly speaking, impossible, since one of their fundamental logical premises remains unsatisfied. Without “much repetition and uniformity in our experience,” there is no justification for placing “great confidence” in our inductions (TP Ch. 8).

Furthermore, there is also the ‘reverse’ variability problem of non-excitation: factors that do not change significantly during the period analysed can still very well be extremely important causal factors.

Stability: Tinbergen assumes a stable spatio-temporal relationship exists between the variables his econometric models analyse. But as Keynes had argued in his Treatise on Probability, it was not truly possible to make inductive generalisations based on correlations in a single sample. As later studies of ‘regime shifts’ and ‘structural breaks’ have demonstrated, it is exceedingly difficult to find and establish stable econometric parameters for anything other than rather short time series.

Measurability: Tinbergen’s model assumes that all relevant factors are measurable. Keynes questions whether it is possible to adequately quantify and measure elements such as expectations, and political and psychological factors. More than anything, he questioned—on both epistemological and ontological grounds—that it was always and everywhere possible to measure real-world uncertainty with the help of probabilistic risk measures. Thinking otherwise can, as Keynes wrote, “only lead to error and delusion.”

Independence: Tinbergen assumes that the variables he treats are independent (an assumption that remains standard in econometrics). Keynes argues that in a complex, organic, and evolutionary system such as an economy, independence is a profoundly unrealistic assumption. Building econometric models upon such simplistic and unrealistic assumptions risks producing nothing but spurious correlations and causalities. Real-world economies are organic systems for which the statistical methods used in econometrics are ill-suited, or even, strictly speaking, inapplicable. Mechanical probabilistic models have little purchase when applied to non-atomic, evolving, organic systems—such as economies.

It is a great fault of symbolic pseudo-mathematical methods of formalising a system of economic analysis … that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep “at the back of our heads” the necessary reserves and qualifications and the adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials “at the back” of several pages of algebra which assume that they all vanish.

Building econometric models cannot be an end in itself. Good econometric models are a means to enable us to make inferences about the real-world systems they purport to represent. If we cannot demonstrate that the mechanisms or causes we isolate and manipulate in our models are applicable to the real world, then they are of little value to our understanding of, explanations for, or predictions about real-world economic systems.

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be much less simple than the bare principle of uniformity. They appear to assume something much more like what mathematicians call the principle of the superposition of small effects, or, as I prefer to call it, in this connection, the atomic character of natural law. The system of the material universe must consist, if this kind of assumption is warranted, of bodies which we may term (without any implication as to their size being conveyed thereby) legal atoms, such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state …

The scientist wishes, in fact, to assume that the occurrence of a phenomenon which has appeared as part of a more complex phenomenon, may be some reason for expecting it to be associated on another occasion with part of the same complex. Yet if different wholes were subject to laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts.

Linearity: To make his models tractable, Tinbergen assumes the relationships between the variables he studies to be linear. This is still standard procedure today, but as Keynes writes:

It is a very drastic and usually improbable postulate to suppose that all economic forces are of this character, producing independent changes in the phenomenon under investigation which are directly proportional to the changes in themselves; indeed, it is ridiculous.

To Keynes, it was a ‘fallacy of reification’ to assume that all quantities are additive (an assumption closely linked to independence and linearity).

The unpopularity of the principle of organic unities shows very clearly how great is the danger of the assumption of unproved additive formulas. The fallacy, of which ignorance of organic unity is a particular instance, may perhaps be mathematically represented thus: suppose f(x) is the goodness of x and f(y) is the goodness of y. It is then assumed that the goodness of x and y together is f(x) + f(y) when it is clearly f(x + y) and only in special cases will it be true that f(x + y) = f(x) + f(y). It is plain that it is never legitimate to assume this property in the case of any given function without proof.

J. M. Keynes “Ethics in Relation to Conduct” (1903)

And as even one of the founding fathers of modern econometrics — Trygve Haavelmo — wrote:

What is the use of testing, say, the significance of regression coefficients, when maybe, the whole assumption of the linear regression equation is wrong?

Real-world social systems are usually not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are, in fact, laws and relations about entities within models. These models presuppose that causal mechanisms and variables—and the relationships between them—are linear, additive, homogeneous, stable, invariant, and atomistic. However, when causal mechanisms operate in the real world, they do so only within ever-changing and unstable combinations, where the whole is more than a mechanical sum of its parts.
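To see why parameter stability matters so much in practice, consider a toy sketch (entirely my own illustration, assuming a simple regime shift, not anything from Keynes or Tinbergen): a regression fitted across a structural break reports a tidy coefficient that describes neither regime.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data-generating process with a regime shift: the 'true' effect of x on y
# is +2.0 in the first half of the sample and -1.0 in the second half.
n = 200
x = rng.normal(size=n)
beta = np.where(np.arange(n) < n // 2, 2.0, -1.0)
y = beta * x + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    """Slope from a simple least-squares regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(f"full sample : {ols_slope(x, y):+.2f}   # looks like one stable parameter")
print(f"first half  : {ols_slope(x[:n//2], y[:n//2]):+.2f}")
print(f"second half : {ols_slope(x[n//2:], y[n//2:]):+.2f}")
```

Nothing in the full-sample estimate signals that the underlying structure changed halfway through; the apparent stability is an artefact of the estimation, which is precisely the kind of ‘homogeneity’ and ‘stability’ Keynes refused to take on faith.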

Given that statisticians and econometricians—as far as I can see—have been unable to convincingly warrant their assumptions of homogeneity, stability, invariance, independence, and additivity as being ontologically isomorphic with real-world economic systems, Keynes’s critique remains valid. As long as—to quote Keynes in a 1935 letter to Frisch—”nothing emerges at the end which has not been introduced expressly or tacitly at the beginning,” I remain deeply sceptical of the scientific aspirations of econometrics. This is especially true when it comes to its use for making causal inferences, which often still rely upon counterfactual assumptions of exceptionally weak foundation.

In his critique of Tinbergen, Keynes highlights the fundamental logical, epistemological, and ontological problems inherent in applying statistical methods to a social reality that is fundamentally unpredictable, uncertain, complex, unstable, interdependent, and ever-changing. Methods designed for analysing repeated sampling in controlled experiments under fixed conditions are not easily extended to an organic, non-atomistic world where time and history play decisive roles.

Econometric modelling should never be a substitute for critical thought. From this perspective, it is profoundly depressing to observe how much of Keynes’s critique of the pioneering econometrics of the 1930s and 1940s remains relevant today.

The general line you take is interesting and useful. It is, of course, not exactly comparable with mine. I was raising the logical difficulties. You say in effect that, if one was to take these seriously, one would give up the ghost in the first lap, but that the method, used judiciously as an aid to more theoretical enquiries and as a means of suggesting possibilities and probabilities rather than anything else, taken with enough grains of salt and applied with superlative common sense, won’t do much harm. I should quite agree with that. That is how the method ought to be used.

J. M. Keynes, letter to E.J. Broster, December 19, 1939

Behavioural economics — a theory-induced blindness

23 Nov, 2025 at 13:01 | Posted in Economics | 1 Comment

Although discounting empirical evidence cannot be the right way to solve economic issues, there are still, in my view, a couple of weighty reasons why we perhaps shouldn’t be too enthusiastic about the so-called ‘empirical revolution’ that behavioural economics has brought about in mainstream economics.

Behavioural experiments and laboratory research face the same fundamental problem as theoretical models — they are built on often rather artificial conditions and have difficulties with the ‘trade-off’ between internal and external validity. The more artificial the conditions, the greater the internal validity, but the less the external validity. The more we rig up experiments to avoid ‘confounding factors’, the less the conditions resemble the real ‘target system.’ The nodal issue is how economists using different isolation strategies in different ‘nomological machines’ attempt to learn about causal relationships. One may have justified doubts about the generalisability of this research strategy, since the probability is high that causal mechanisms are different in different contexts and that a lack of homogeneity and invariance doesn’t give us warranted export licenses to ‘real’ societies or economies.

If we see experiments or laboratory research as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central (and for a long time was also a key reason why behavioural economists had trouble getting their research results published).

A standard procedure in behavioural economics — think of, for example, dictator or ultimatum games — is to set up a situation where one induces people to act according to the standard microeconomic — homo oeconomicus — benchmark model. In most cases, the results show that people do not behave as one would have predicted from the benchmark model, in spite of the setup almost invariably being ‘loaded’ for that purpose. [And in those cases where the result is consistent with the benchmark model, one, of course, has to remember that this in no way proves the benchmark model to be right or ‘true,’ since there, as a rule, may be many outcomes that are consistent with that model.]

For most heterodox economists, this is just one more reason for giving up on the standard model. But not so for mainstreamers and many behaviouralists. To them, the empirical results are not reasons for abandoning their preferred hardcore axioms. So they set out to ‘save’ or ‘repair’ their model and try to ‘integrate’ the empirical results into mainstream economics. Instead of accepting that the homo oeconomicus model has zero explanatory real-world value, one puts lipstick on the pig and hopes to carry on with business as usual. Why we should keep on using that model as a benchmark when everyone knows it is false is something we are never told. Instead of using behavioural economics and its results as building blocks for a progressive alternative research programme, the ‘save and repair’ strategy immunises a hopelessly false and irrelevant model.

By this, I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically — though not without reservations — in favour of the increased use of behavioural experiments and laboratory research within economics. Not least as an alternative to completely barren ‘bridge-less’ axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

The increasing use of natural and quasi-natural experiments in economics during the last couple of decades has led several prominent economists to triumphantly declare it as a major step on a recent path toward empirics, where instead of being a deductive philosophy, economics is now increasingly becoming an inductive science.

Limiting model assumptions in economic science always have to be closely examined. If the mechanisms or causes that we isolate and handle in our models are to be stable, in the sense that they do not change when we ‘export’ them to our ‘target systems,’ we must be able to show that they do not hold only under ceteris paribus conditions, for then they would be of only limited value to our understanding, explanations or predictions of real economic systems.

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export warrant to the target system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods is despairingly small.

So — although it is good that people like Kahneman and Thaler have been rewarded with ‘Nobel prizes’ and that much of their research has vastly undermined the lure of axiomatic-deductive mainstream economics, there is still a long way to go before economics has become a truly empirical science. The great challenge for the future of economics is not to develop methodologies and theories for well-controlled laboratories, but to develop relevant methodologies and theories for the messy world in which we happen to live.

Most of the observed ‘biases’ in behavioural economics are not the result of errors in beliefs or logic, although some are. Most are the product of a reality in which decisions must be made in the absence of a precise and complete description of the world in which people live, in contrast to the small worlds in which the students whose choices are studied in experimental economics are asked to participate …

Economists who label certain types of behaviour as cognitive illusions may be missing the point that the people they observe are not living in the small world which they themselves inhabit (or the small world they model) …

Kahneman offers an explanation of why earlier and inadequate theories of choice persisted for so long — a ‘theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws’. We might say the same about behavioural economics. We believe that it is time to move beyond judgemental taxonomies of ‘biases’ derived from a benchmark which is a normative model of human behaviour deduced from implausible a priori principles. And ask instead how humans do behave in large worlds of which they can only ever have imperfect knowledge …

There is an alternative story to that told by behavioural economics. It is that many of the characteristics of human reasoning which behavioural economics describes as biases are in fact adaptive — beneficial to success — in the large real worlds in which people live, even if they are sometimes misleading in the small worlds created for the purposes of economic modelling and experimental psychology. It is an account which substitutes evolutionary rationality for axiomatic rationality.

John Kay & Mervyn King

Theory of science books for economists

22 Nov, 2025 at 13:56 | Posted in Economics | Comments Off on Theory of science books for economists


• Harré, Rom (1960). An introduction to the logic of the sciences. London: Macmillan

• Bhaskar, Roy (1978). A realist theory of science. Hassocks: Harvester

• Garfinkel, Alan (1981). Forms of explanation: rethinking the questions in social theory. New Haven: Yale U.P.

• Lieberson, Stanley (1987). Making it count: the improvement of social research and theory. Berkeley: Univ. of California Press

• Miller, Richard (1987). Fact and method: explanation, confirmation and reality in the natural and the social sciences. Princeton, N.J.: Princeton Univ. Press

• Archer, Margaret (1995). Realist social theory: the morphogenetic approach. Cambridge: Cambridge University Press

• Lawson, Tony (1997). Economics and reality. London: Routledge

• Lipton, Peter (2004). Inference to the best explanation. 2. ed. London: Routledge

• Cartwright, Nancy (2007). Hunting causes and using them. Cambridge: Cambridge University Press

• Kay, John & King, Mervyn (2020). Radical uncertainty. The Bridge Street Press, London
