What is causality?

5 Feb, 2026 at 19:27 | Posted in Theory of Science & Methodology | 2 Comments

Yours truly has been offering a crash course on causality to fellow researchers at Malmö University over the past couple of years.

The course PowerPoint is available here: What is causality? 

Many contemporary research questions in the social sciences are fundamentally concerned with issues of causality. What lies behind rising unemployment? What effects do independent schools have on pupils’ attainment?

Questions of cause and effect are equally central to politics and public policy. What measures are required to curb rising inflation? What impact has the COVID-19 pandemic had on public health? Answering such questions presupposes sound causal reasoning.

Since the mid-twentieth century, causality has increasingly been discussed in probabilistic terms. Using traditional statistical methods, we can identify correlations and describe their properties (statistical significance, robustness, sensitivity, and so on). However, researchers — and policymakers — often wish to go further by identifying the causal mechanisms and relationships that explain these observed correlations. This, in turn, makes it possible, through the active use of interventions or manipulations, to estimate the causal effects of different policy packages and measures.
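
To make the distinction concrete, here is a minimal Python sketch (my own illustration, not part of the course material). In a hypothetical toy world, a confounder Z drives both X and Y, so X and Y are strongly correlated even though X has no effect on Y; conditioning on X and intervening on X therefore give very different answers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy world (an assumption for illustration): Z causes both X and Y; X does NOT cause Y.
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = z + rng.normal(size=n)

# X and Y are clearly correlated (about 0.5 here) ...
print("corr(X, Y):", np.corrcoef(x, y)[0, 1])

# ... and conditioning on X = 2 shifts the expected value of Y ...
print("E[Y | X near 2]:", y[np.abs(x - 2) < 0.1].mean())   # ~1

# ... but the intervention do(X = 2), which sets X by fiat and cuts its
# link to Z, leaves the distribution of Y completely untouched:
y_do = z + rng.normal(size=n)   # Y's causal mechanism does not involve X
print("E[Y | do(X = 2)]:", y_do.mean())                    # ~0
```

The gap between the two final numbers is the gap between describing a correlation and estimating a causal effect: only the interventionist question tells us what a policy that manipulates X would achieve.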

In contemporary research, it has become critically important — particularly for securing research funding — to employ research designs with high ‘evidential value’. Accordingly, evidence hierarchies are typically structured around the degree of causal knowledge generated by different research strategies, with debates surrounding randomised controlled trials (RCTs) and meta-analyses playing a prominent role.

The course introduces core theories and models of causal inference and explains, illustrates, and demonstrates how to apply the most widely used methods for causal analysis, primarily within the social sciences.

What is the point of doing experiments?

3 Feb, 2026 at 21:14 | Posted in Theory of Science & Methodology | 1 Comment

Why bother with the theories? Because science seeks to understand the world and to explain the experimental facts, and you need theories to do that. Theories are not just heuristic aids, “useful instruments that aid the growth of experimental knowledge,” as Chalmers accuses Mayo of thinking … What is the point of doing experiments? It is not just to accumulate reliable experimental knowledge. That is important, to be sure, but we seek reliable experimental knowledge to adjudicate between rival explanatory theories about the way the world is.

Alan Musgrave

Alan Musgrave’s response to Deborah Mayo’s claim that empirical testing does not tell us whether a theory is true goes to the very heart of his critical rationalism. Mayo maintains that empirical tests can only show whether a hypothesis has passed or failed ‘severe tests’. For her, science is fundamentally about error detection, not truth confirmation.

But although empirical testing never yields certainty, this does not mean testing is epistemically neutral with respect to truth. The very point of ‘severe tests’ is that passing them gives us good reason to believe a theory is true. If empirical success does not count as evidence for truth, then it is unclear why science should be taken seriously at all.

As scientists, we do not merely aim to avoid error — we also seek to discover how the world actually is. That a theory has passed “severe tests” which other theories have not offers no absolute guarantee of its truth. But arguably, it does furnish a warrant for believing it to be so.

Conversations in Real-World Economics

3 Feb, 2026 at 15:15 | Posted in Economics | Leave a comment

Jamie: Lars, perhaps a useful place to start would be with some introductory comment on what informs your reasoning. Whilst your postings range across many subjects you regularly return to a common theme. Specifically, the use economists make of mathematics to express theory and of analytical statistical techniques to conduct research. What are the key problems you see here and in what sense can or should matters of methodology provide a common thread to this common theme?

Lars: Well, I think the main problem here when it comes to applying mathematics and inferential statistics to economics is that mainstream economists usually do not start by asking themselves if the ontology — real-world economies and societies — is constituted in a way that makes it possible to explain, understand or forecast our economies and societies with the kind of models and theories that mathematics and inferential statistics supply.

The basic fault with modern mainstream economics, in my view, is that the concepts and models it uses — often borrowed from mathematics, physics, and statistics — are incompatible with the very objects of economic study. The analytical instruments borrowed from the natural sciences and mathematics were constructed and used for totally different issues and problems. This has fundamentally contributed to the non-correspondence between the structure of economic science and the structure of real-world economies. And I think it may also be one of the main reasons why economists so often have come up with doubtful — and sometimes harmful — oversimplifications and generalizations.

Using simplifying tractability assumptions — rational expectations, common knowledge, linearity, ergodicity, etc. — because otherwise one cannot “manipulate” the models or come up with rigorous and precise predictions and explanations, does not really exempt economists from having to justify their modelling choices. Being able to manipulate things in models cannot be enough to warrant a methodological choice.

Take, for example, the discussion on rational expectations as a modelling assumption. Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies are those based on rational expectations and representative actor models. As I tried to show in my book On the use and misuse of theories and models in mainstream economics (2016), there is really no support for this conviction at all. If microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for economic models should not be whether we — once the tractability assumptions are made — can manipulate them. As long as no convincing justification is put forward for how the inferential bridging is made, mainstream model building is little more than hand-waving.

The Liberals’ great liability — Romina Pourmokhtari

2 Feb, 2026 at 21:14 | Posted in Politics & Society | 3 Comments

A few days ago, Indikator presented a survey showing that the Liberals are the parliamentary party that the smallest share of voters associate with an ‘ambitious climate policy’. And this despite the fact that the Liberals themselves single out the climate as one of the party’s most important issues and hold the climate minister post in the government!

At first this may seem surprising, but once one recalls that the climate minister in question is Romina Pourmokhtari, everything falls into place. We have probably never had a greater climate-crisis-denying waffler of a minister in this country — one who year after year has refused to answer all the legitimate questions raised about the absence of a serious climate policy.

With such a climate minister, it is hardly surprising that confidence in the Liberals has hit rock bottom and that the party is on its way out of the Riksdag.

Well deserved!

Oxfam report on growing inequality in Sweden

1 Feb, 2026 at 10:33 | Posted in Economics, Politics & Society | 1 Comment

The 2026 Oxfam report, Our Unequal Sweden, issues a damning indictment of the nation’s economic trajectory, systematically dismantling the enduring myth of Swedish egalitarianism. It reveals a society undergoing a profound and deliberate schism, where escalating mass vulnerability exists in parallel with unprecedented wealth consolidation among a tiny elite.

The empirical evidence is overwhelming and alarming. A surge of 120,000 individuals into poverty within a single year expands the impoverished population to approximately 700,000. Concurrently, over a quarter of households report being unable to afford essential goods, while energy poverty — measured by the inability to heat one’s home — tripled between 2021 and 2023. This tangible deprivation contrasts starkly with the fortunes of those at the apex: the combined wealth of Sweden’s 46 billionaires, which surpasses the assets of the poorest 80% (8 million people), grew by an estimated 24% in one year alone.

Sweden ranks (according to last year’s UBS Global Wealth Report) as the sixth most unequal country in the world in terms of wealth distribution. This places Sweden ahead of many countries commonly associated with extreme inequality and, strikingly, makes it more unequal than the United States, which ranks seventh in the same index. The comparison underscores the severity of wealth concentration in Sweden and challenges the widespread perception of the country as an economic outlier defined by equality. In fact, Sweden is among the most wealth-unequal societies globally.

Critically, the Oxfam report frames this widening income and wealth gap not as a natural economic phenomenon but as the direct consequence of deliberate fiscal policy. Decades of regressive tax reforms — systematically reducing burdens on capital, inheritance, and wealth while increasing reliance on consumption taxes — have actively engineered one of the world’s highest levels of wealth inequality.

Oxfam rightly contends that such extreme economic concentration poses an existential threat that extends beyond questions of social justice; it corrodes the foundational principles of democracy. Disproportionate wealth translates into disproportionate political influence, eroding institutional trust and undermining the principle of equal citizenship. The report thus calls not for minor adjustments but for a major policy shift: the establishment of a comprehensive wealth register, radical transparency in lobbying, and aggressively progressive taxation. It concludes that the celebrated Swedish model is not self-sustaining and requires urgent, deliberate intervention to prevent the irreversible unravelling of its social contract.

Wall Street gambles with people and the planet

31 Jan, 2026 at 20:41 | Posted in Economics | Leave a comment


What yours truly most admires about Ann Pettifor is her ability to cut through economic abstraction with clarity and moral purpose. She identifies what most mainstream economists miss: that finance is not a mere technical side note, but the very arena where power and real-world consequences collide. Her Minsky-Keynes-inspired analysis of the financial system is not just academically sharp — it also enabled her to be one of the few to foresee the 2008 crash long before it happened.

Ann Pettifor is a realist and deeply relevant economist who criticises an ivory-tower discipline, forcefully demonstrating that economics, when done properly, is far too important to be left to abstract mainstream modellers.

Dr Pangloss pays a visit to Starta pressarna

30 Jan, 2026 at 19:35 | Posted in Politics & Society | 2 Comments

As always these days, when someone wants us to listen to an economist telling us that everything is fine and that everyone really benefits from income and wealth gaps having become sky-high, they invite the country’s very own Dr Pangloss — Daniel Waldenström.

Waldenström’s view is fundamentally that economic inequality is not really a problem, since the gap between top and bottom always widens in a globalised world with successful economies. This — that inequality is not a problem — is a claim frequently advanced in the economic-policy debate by market-fundamentalist advocates. According to the so-called horseshit theorem, the ever-larger incomes and fortunes of the rich are said eventually to trickle down to the poor. Feed the horse, and the birds can eat their fill from the droppings.

The problem with the theorem is that the empirical evidence for its existence is exactly zero!

Waldenström claims that wealth has never been more equally distributed than it is today, and that the average Swede is richer today than a hundred years ago thanks to increased ‘saving’ in pension funds and housing. That rising house prices and a growing share of households owning their own homes have also made us one of the most indebted peoples in the world, he simply waves away. Those — not least the young — who today are forced to borrow several million kronor just to have somewhere to live probably find it hard to see themselves as winners in this game.

In 2024, the average income of the CEOs of the 50 largest Swedish companies corresponded to more than 77 industrial workers’ wages. But for our own Dr Pangloss, the increased inequality in our country is not a problem. What Waldenström, backed by research funding from the business sector, is attempting is nothing other than a whitewash of the large wealth and income disparities that characterise Sweden today. But it does not work! Sweden has the second-highest wealth concentration in the EU. That this would not constitute a problem for democracy is a view shared by very few serious social scientists.

Waldenström’s ‘analyses’ demonstrate beyond doubt that here we have a researcher who has chosen to become one of the business sector’s loudest market-apologetic megaphones, and whose ‘analyses’ nowadays mostly resemble ideologically tinted whitewashing of serious social problems.

In Sweden, it is obvious that we must take institutional, political, and social forces into account to explain the extraordinary shift in the functional distribution of income. Not least, changes in wage-setting, weakened trade unions, the Riksbank’s new ‘independent’ role and its one-sided fixation on price stability, a new tax system, globalisation, the financialisation of the economy, the neoliberal Thatcher–Reagan deregulation of markets, etc., etc., have profoundly affected the distribution of wealth and income. What was once an egalitarian Swedish model has over the past four decades been reduced to something more closely resembling Continental Europe, with sharply increased income disparities (especially incomes from capital ownership and trading in financial assets). It is hard to imagine a tenable explanation of the falling wage share since the 1980s — not only in Sweden but in virtually all developed countries — that does not to a large extent take into account the distributional struggle between classes in an ongoing restructuring of our society and its underlying socio-economic relations.

Economics textbooks usually point to the interplay between technological development and education as the principal causal force behind rising inequality. If the education system (supply) develops at the same pace as technology (demand), there should, ceteris paribus, be no increase in the ratio between high-income groups (the highly educated) and low-income groups (the less educated). In the race between technology and education, however, the spread of ‘skill-biased’ technological change has reportedly increased the premium for the highly educated group.

But there are some rather obvious problems with this kind of explanation of inequality. The income gains have been concentrated especially in the top 1%. If education were the main cause of the growing income gap, one would expect a much broader group of people in the upper reaches of the distribution to share in this increase.

It is, as later research has shown, dubious to say the least to try to explain the high levels of compensation in the financial sector with a marginal-productivity argument. High top salaries seem to be more the result of pure luck, or of membership in the same ‘club’ as those who decide on pay and bonuses, than of ‘marginal productivity’.

Waldenström’s defence of income disparities recalls a common mainstream argument that tries to explain and justify income differences by appeal to Adam Smith’s invisible hand: by delivering extraordinary performances in blockbuster films, top stars can do more than entertain millions of moviegoers and get rich in the process. They can also contribute several million in federal taxes, and further millions in state taxes. And those millions help fund schools, police forces, and national defence for the rest of us … Those in the richest 1 per cent are not motivated by an altruistic desire to promote the common good. But in most cases, that is precisely the effect they have.

This inveterate apologetics for inequality recalls the argument of John Bates Clark — one of the theorists behind marginal productivity theory — that marginal productivity results in an ethically just distribution. But even if that were true, it is not something we could confirm empirically, since it is in practice impossible to separate out the marginal contribution of any factor of production. When business-sector megaphones such as Waldenström and neoliberal commentators such as Johan Norberg talk about CEO incomes as ‘just rewards’, one gets the strong sense that what they are ultimately arguing is that a market economy is a kind of moral-free zone in which people, if left undisturbed, get what they ‘deserve’. Most social scientists probably see it instead as an attempt to explain away a deeply disturbing structural ‘regime shift’ that has taken place in our societies. A shift that has very little to do with ‘stochastic returns to education’. Differences existed 30 or 40 years ago too. Back then, they meant that a top executive might earn 10–20 times more than ‘ordinary’ people. Today, they mean earning perhaps 100–200 times more. A matter of education? Hardly. It is more likely a matter of greed, and of a lost sense of a common project of building a sustainable society.

Since the race between technology and education does not seem to explain the new, widening income gap, mainstream economists such as Waldenström increasingly appeal to ‘winner-take-all markets’ and ‘superstar theories’ for explanations. But these, too, are highly dubious. Fans may be willing to pay extra to watch top-ranked athletes or film stars perform on television and in films, but corporate executives are hardly the stuff people’s dreams are made of — and they rarely appear on TV or at the cinema. Everyone might prefer to hire the best executive available, but an executive, unlike a film star, can only supply his or her services to a limited number of customers. From the perspective of ‘superstar theories’, a good executive should earn only marginally more than an average one. Executive compensation packages that give CEOs incomes 77 times an industrial worker’s wage are usually conveniently determined by corporate boards made up of people very much like the executives they pay. It really is hard to see the income gains of top executives as anything other than a reward for membership in the same illustrious club.

In recent years’ inequality debate, Waldenström has repeated like a mantra that income disparities have not increased dramatically during the twenty-first century. But although he knows better, he does not mention with a single word that over the past four decades Sweden has been one of the countries in the world with the highest rate of increase in wealth and income inequality. Deplorable and intellectually unworthy!

Does Bregman champion basic income for the wrong reasons?

30 Jan, 2026 at 10:09 | Posted in Politics & Society | 4 Comments

While I very much sympathise with Bregman’s championing of basic income, I would argue that he champions it for the wrong reasons. If its time has come, it is not because the affluent masses of the North no longer need to work to earn their keep, but because it might be a way of replacing an obscenely unequal system of world trade that converts resource extraction and low-wage labour in the South into consumer goods and labour-saving technologies for the North. But to generate such transformative consequences, a system of basic income would have to be designed in a radically different way than Bregman suggests. To merely “give free money to everyone” (as in the title of his chapter 2) would not alleviate global inequalities but exacerbate them. In expanding the purchasing power of those in most need, it would clearly increase the demand for inexpensive food, clothes, and other commodities produced with low-wage labour and scant environmental considerations in the Global South. It would thus aggravate the contemporary logic of globalised capitalism. It is true that “free cash greases the wheels of the whole economy” (31), but “the economy” here primarily means multinational corporations and their distant suppliers. To create resilient local economies that provide people with basic provisioning security, we must design a more carefully considered system than simply giving “free money to everyone.” …

In Utopia for Realists, Bregman devotes the first four chapters to arguing for basic income as a means of ending poverty, made possible by advancing technology. From my perspective, this is highly paradoxical, as the advancing technologies of the Global North are inextricably part of the global economic system that is generating poverty. I have suggested that modern technologies save time (and space) for those who can afford them, at the expense of time and space lost for those who cannot. While such a global zero-sum perspective can be applied to various technological systems since the Industrial Revolution, it is also implicit in recent critiques of new technologies such as those for renewable energy, electric vehicles, and digital information processing. This view of advanced technologies implies that they cannot be universalised, any more than the economic affluence with which they are associated.

Alf Hornborg

The Centrality of Inference to the Best Explanation in Scientific Reasoning

29 Jan, 2026 at 11:37 | Posted in Theory of Science & Methodology | 1 Comment

Inference to the Best Explanation (IBE) offers a compelling and robust framework for understanding how scientific knowledge advances. Moving beyond the traditional dichotomy between deduction and induction, IBE is not merely one instrument among many in the scientist’s toolkit, but a fundamental engine of scientific reasoning itself. Its significance lies in its capacity to model realistically the complex, creative, and comparative judgements that underpin theory choice, guide research, and link observable phenomena to deeper ontological understanding. In doing so, IBE captures a central feature of everyday life and scientific practice that purely formal accounts of inference struggle to accommodate: the role of explanatory judgement in navigating uncertainty.

At its core, IBE is a form of ampliative reasoning (cf. Stathis Psillos), in which one infers, from a body of available evidence, the truth — or approximate truth — of the hypothesis that would, if correct, provide the best explanation of that evidence. Unlike deductive inference, IBE is not truth-preserving, yet it is epistemically indispensable precisely because science aims not merely at certainty, but at understanding. Scientific reasoning must often proceed under conditions of incomplete information, and IBE offers a principled way of extending knowledge beyond what is strictly entailed by the data.

The importance of IBE can be articulated across several interconnected dimensions.

(1) IBE provides a superior model for theory choice and evaluation. Science is continually confronted with underdetermination: the logical fact that a given body of evidence can, in principle, be explained by multiple competing hypotheses (cf. the Duhem–Quine thesis). Deductive logic alone cannot resolve this problem, while simple induction is both fragile and superficial. IBE, by contrast, employs a set of explanatory virtues—such as scope, precision, coherence, simplicity, and fruitfulness—as pragmatic criteria for comparison. These virtues function not as strict rules, but as guides to rational preference under epistemic constraint. Scientists infer the superiority of models, theories, and hypotheses not through decisive logical refutations, but through judgements of comparative explanatory power. Darwin’s theory of evolution by natural selection prevailed over special creation not because speciation had been directly observed, but because it uniquely and elegantly explained the nested hierarchy of life, the fossil record, and patterns of biogeographical distribution as the outcome of a single, lawful process.

(2) IBE also structures the very process of scientific investigation. IBE functions as a ‘thinking machine’ that drives discovery. The search for the best explanation is not a passive assessment of existing ideas or hypotheses, but an active generator of new ones. When confronted with anomalous or surprising data, scientists seek causal mechanisms that would render the data intelligible. This ‘abductive’ (cf. Peirce) search frequently precedes hypothesis testing and experimentation, shaping the direction of inquiry itself. In this way, IBE plays a crucial heuristic role, guiding the imagination towards hypotheses that are not only testable, but explanatorily promising.

Peter Lipton famously distinguished between what he termed the ‘likeliest’ and the ‘loveliest’ explanations. The likeliest explanation is the one judged most probable given the current evidence, whereas the loveliest is the one that offers the greatest understanding — the most comprehensive, coherent, and fruitful account. According to Lipton, scientific practice typically aims at loveliness rather than mere likelihood. Even when scientists assess the ‘likeliness’ of competing hypotheses, that assessment is typically grounded in judgements of the hypotheses’ explanatory importance. There is no perfect match between likelihood and explanatory value. If I take the contraceptive pill, the probability that I will not become pregnant is high, but this fact hardly explains why I do not become pregnant. Even for a committed Bayesian, it is difficult not to concede that the assignment of prior probabilities is standardly guided by explanationist considerations, and that conditionalisation on evidence therefore occurs only after explanatory criteria have been applied to determine the value of the prior.
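
A toy calculation (my own illustration, with made-up numbers) makes the point about priors concrete. When two rival hypotheses fit the evidence equally well in purely likelihood terms, Bayes’ rule hands the posterior ordering entirely to the priors, which on the explanationist reading encode judgements of explanatory virtue:

```python
def posterior(priors, likelihoods):
    """Bayes' rule: P(H_i | E) = P(E | H_i) P(H_i) / sum_k P(E | H_k) P(H_k)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

likelihoods = [0.9, 0.9]   # both hypotheses account for the evidence equally well
priors      = [0.7, 0.3]   # the 'lovelier' hypothesis is assigned the higher prior

print(posterior(priors, likelihoods))   # [0.7, 0.3] -- the priors decide
```

Conditionalisation here changes nothing: whatever epistemic work is done has already been done in fixing the priors.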

Scientists often favour bold, unifying explanations that are initially less probable over cautious, ad hoc alternatives that appear more likely on narrow evidential grounds. The history of science is replete with such ‘lovely’ but initially improbable ideas—continental drift and quantum theory being notable examples—pursued because of their explanatory promise. In seeking the loveliest explanation, IBE becomes a logic of scientific progress rather than mere post hoc justification.

(3) IBE plays a crucial role in connecting empirical data with theoretical understanding. Observations — such as statistical correlations within a dataset — are, in isolation, devoid of significance. They acquire evidential status only within an explanatory framework. IBE is the cognitive process through which this framework is constructed, by asking what the world must be like for the observations to make sense. This inferential move allows scientists to posit unobservable entities, causal mechanisms, and structural relations that underpin observable phenomena. In this respect, IBE underwrites critical realist ambitions of science, supporting the view that successful explanations reveal something genuine about the structure of the world.

(4) Some critics argue that IBE is unreliable, claiming that it conflates explanatory appeal with truth. In response, it can be argued that explanatory virtues are not arbitrary aesthetic preferences, but epistemic heuristics refined through the cumulative success of science. The explanatory virtues of IBE have repeatedly proved to be reliable guides to truth because they reflect deep features of how the world is organised. The sustained predictive success of theories selected via IBE provides a pragmatic vindication of its epistemic legitimacy, even in the absence of deductive certainty.

Inference to the Best Explanation is indispensable to science because it captures the dynamic, comparative, and creative character of scientific reasoning. It functions both as a ‘logic’ of discovery and as a rationale for justification. IBE transforms raw data into evidence, guides the scientific imagination towards fruitful hypotheses, and employs explanatory virtues to select theories that offer the deepest understanding of the world. By seeking the ‘loveliest’ explanation — the explanation with the greatest explanatory merit — IBE propels science beyond incremental refinement towards genuinely transformative insight, making it the central cognitive process through which we construct an increasingly accurate picture of reality.

People object that the best available explanation might be false. Quite so — and so what? It goes without saying that any explanation might be false, in the sense that it is not necessarily true. It is absurd to suppose that the only things we can reasonably believe are necessary truths …

People object that being the best available explanation of a fact does not prove something to be true or even probable. Quite so — and again, so what? The explanationist principle — “It is reasonable to believe that the best available explanation of any fact is true” — means that it is reasonable to believe or think true things that have not been shown to be true or probable (more likely true than not).

Alan Musgrave

The danger of treating causality as deduction

28 Jan, 2026 at 22:30 | Posted in Theory of Science & Methodology | 5 Comments

Critical Thinking: 6. Inference to the Best Explanation

As I have argued, explanations are not deductive proofs in any particularly interesting sense. Although they can always be presented in the form of deductive proofs, doing so seems not to capture anything essential, or especially useful, and usually requires completing an incomplete explanation. Thinking of explanations as proofs tends to confuse causation with logical implication. To put it simply: causation is in the world, implication is in the mind. Of course, mental causation exists (e.g., where decisions cause other decisions), which complicates the simple distinction by including mental processes in the causal world, but that complication should not be allowed to obscure the basic point, which is not to confuse an entailment relationship with what may be the objective, causal grounds for that relationship. Deductive models of causation are at their best when modeling deterministic closed-world causation, but this is too narrow for most real-world purposes. Even for modeling situations where determinism and closed world are appropriate assumptions, treating causality as deduction is dangerous, since one must be careful to exclude non-causal and anti-causal (effect-to-cause) conditionals from any knowledge base if one is to distinguish cause-effect from other kinds of inferences.

Per se, there is no reason to seek an implier of some given fact. The set of possible impliers includes all sorts of riff raff, and there is no obvious contrast set at that level to set up reasoning by exclusion. But there is a reason to seek a possible cause: broadly speaking, because knowledge of causes gives us powers of influence and prediction.

John R. Josephson

Paul Davidson and yours truly on uncertainty and ergodicity

28 Jan, 2026 at 09:47 | Posted in Economics | 1 Comment

A few years back, I had an interesting discussion over at the Real-World Economics Review Blog with Paul Davidson on ergodicity and the differences between Knight and Keynes concerning uncertainty. It all began when I commented on Davidson’s article ‘Is economics a science? Should economics be rigorous?’:

LPS:

Davidson’s article is a nice piece — but ergodicity is a difficult concept that many students of economics have problems understanding. To understand real-world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not those that will rule the future.

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages — and a fortiori in any relevant sense timeless — is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.

When you assume economic processes to be ergodic, ensemble and time averages are identical. Let me give an example: assume we have a market with an asset priced at 100 €. Then imagine the price first goes up by 50% and later falls by 50%. The ensemble average for this asset would be 100 € — because here we envision two parallel universes (markets), where in one the asset price falls by 50% to 50 €, and in the other it rises by 50% to 150 €, giving an average of 100 € ((150 + 50)/2). The time average for this asset would be 75 € — because here we envision one universe (market) where the asset price first rises by 50% to 150 € and then falls by 50% to 75 € (0.5 × 150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen.

Assuming ergodicity there would have been no difference at all.
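
A minimal Python simulation (my own sketch, not part of the original exchange) shows the divergence numerically: averaging a single ±50% move across many parallel markets reproduces the ensemble result, while compounding many such moves along one price path shows what a single market actually experiences over time:

```python
import numpy as np

rng = np.random.default_rng(1)
start = 100.0

# Ensemble average: many parallel markets, each making ONE move of +50% or -50%.
one_move = rng.choice([1.5, 0.5], size=100_000)
print("ensemble average:", (start * one_move).mean())   # ~100: on average, nothing happens

# Time average: ONE market making 1,000 successive +/-50% moves.
path = start * np.cumprod(rng.choice([1.5, 0.5], size=1_000))
print("final price:", path[-1])                                      # essentially zero
print("growth factor per move:", (path[-1] / start) ** (1 / 1_000))  # ~0.866 = sqrt(0.75)
```

Each up-and-down pair multiplies the price by 1.5 × 0.5 = 0.75, which is why the single-history (time) perspective collapses even though the ensemble perspective sees no change at all.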

Just in case you think this is just an academic quibble without repercussion to our real lives, let me quote from an article by physicist and mathematician Ole Peters in the Santa Fe Institute Bulletin from 2009 — “On Time and Risk” — that makes it perfectly clear that the flaw in thinking about uncertainty in terms of “rational expectations” and ensemble averages has had real repercussions on the functioning of the financial system:

“In an investment context, the difference between ensemble averages and time averages is often small. It becomes important, however, when risks increase, when correlation hinders diversification, when leverage pumps up fluctuations, when money is made cheap, when capital requirements are relaxed. If reward structures—such as bonuses that reward gains but don’t punish losses, and also certain commission schemes—provide incentives for excessive risk, problems arise. This is especially true if the only limits to risk-taking derive from utility functions that express risk preference, instead of the objective argument of time irreversibility. In other words, using the ensemble average without sufficiently restrictive utility functions will lead to excessive risk-taking and eventual collapse. Sound familiar?”

PD:

Lars, if the stochastic process is ergodic, then for an infinite realization the time and space (ensemble) averages will coincide. An ensemble is samples drawn at a fixed point of time from a universe of realizations. For finite realizations, the time and space statistical averages tend to converge (with a probability of one) the more data one has.

Even in physics, there are some processes that physicists recognize are governed by nonergodic stochastic processes (see A. M. Yaglom, An Introduction to Stationary Random Functions, Prentice Hall, 1962).

I do object to Ole Peters’ exposition where he talks about “when risks increase”. Nonergodic systems are not about increasing or decreasing risk in the sense of the probability distribution variances differing. It is about indicating that any probability distribution based on past data cannot be reliably used to indicate the probability distribution governing any future outcome. In other words, even if we could know that the future probability distribution will have a smaller variance (“lower risks”) than the past calculated probability distribution, the past distribution is still not a reliable guide to future statistical means and other moments around the means.

LPS:

Paul, re nonergodic processes in physics I would even say that most processes definitely are nonergodic. Re Ole Peters, I totally agree that what matters about real social and economic processes being nonergodic is that uncertainty — not risk — rules the roost. That was something both Keynes and Knight basically said in their 1921 books. But I still think that Peters’ discussion is a good example of how thinking about uncertainty in terms of “rational expectations” and “ensemble averages” has had seriously bad repercussions on the financial system.

PD:

Lars, there is a difference between the uncertainty concept developed by Keynes and the one developed by Knight.

As I have pointed out, Keynes’s concept of uncertainty involves a nonergodic stochastic process. On the other hand, Knight’s uncertainty — like Taleb’s black swan — assumes an ergodic process. The difference is that for Knight (and Taleb) the uncertain outcome lies so far out in the tail of the unchanging (over time) probability distribution that it appears empirically to be [in Knight’s terminology] “unique”. In other words, like Taleb’s black swan, the uncertain outcome already exists in the probability distribution but is so rarely observed that it may take several lifetimes for one observation — making that observation “unique”.

In the latest edition of Taleb’s book, he was forced to concede that philosophically there is a difference between a nonergodic system and a black swan ergodic system — but then waves away the problem with the claim that the difference is irrelevant.


LPS:

Paul, on the whole, I think you’re absolutely right on this. Knight’s uncertainty concept has an epistemological foundation and Keynes’s definitely an ontological one. Of course, this also has repercussions on the issue of ergodicity in a strict methodological and mathematical-statistical sense. I think Keynes’s view is the more warranted of the two.

BUT — from a “practical” point of view I have to agree with Taleb. Because if there is no reliable information on the future, whether you talk of epistemological or ontological uncertainty, you can’t calculate probabilities.

The most interesting and far-reaching difference between the epistemological and the ontological view is that if you subscribe to the former, Knightian view — as Taleb and “black swan” theorists basically do — you open the door to the mistaken belief that with better information and greater computing power we should somehow always be able to calculate probabilities and describe the world as an ergodic universe. As both you and Keynes have convincingly argued, that is ontologically just not possible.

PD:

Lars, your last sentence says it all. If you believe it is an ergodic system and epistemology is the only problem, then you should urge more transparency, better data collection, hiring more “quants” on Wall Street to generate “better” risk management computer programs, etc. — and above all keep the government out of regulating financial markets — since all the government can do is foul up the outcome that the ergodic process is ready to deliver.

Long live Stiglitz and the call for transparency to end asymmetric information — and permit all to know the epistemological solution for the ergodic process controlling the economy.

Or as Milton Friedman would say, those who make decisions “as if” they knew the ergodic stochastic process create an optimum market solution — while those who make mistakes in trying to figure out the ergodic process are like the dinosaurs, doomed to fail and die off — leaving only the survival of the fittest for a free market economy to prosper on. The proof, supposedly, is why all those 1% fat-cat CEO managers in the banking business receive such large salaries for their “correct” decisions involving financial assets.

Alternatively, if the financial and economic system is nonergodic, then there is a positive role for government to regulate what decision makers can do so as to prevent them from mass destruction of themselves and other innocent bystanders — and also for government to take positive action when the herd behavior of decision makers is causing the economy to run off the cliff.

So this distinction between ergodic and nonergodic is essential if we are to build institutional structures that make running off the cliff almost impossible — and for the government to be ready to take action when some innovative fool(s) discovers a way to get around institutional barriers and starts to run the economy off the cliff.

For Keynes, the source of uncertainty lay in the nature of the real — nonergodic — world. It concerned, not merely — or primarily — the epistemological fact of our not knowing what is currently unknown, but rather the far more profound ontological fact that there often exists no firm basis upon which we can form quantifiable probabilities and expectations.

Science as crossword puzzle solving

28 Jan, 2026 at 09:20 | Posted in Theory of Science & Methodology | Leave a comment

The model is not . . . how one determines the soundness or otherwise of a mathematical proof; it is, rather, how one determines the reasonableness or otherwise of entries in a crossword puzzle. . . . The crossword model permits pervasive mutual support, rather than, like the model of a mathematical proof, encouraging an essentially one-directional conception. . . . How reasonable one’s confidence is that a certain entry in a crossword is correct depends on: how much support is given to this entry by the clue and any intersecting entries that have already been filled in; how reasonable, independently of the entry in question, one’s confidence is that those other already filled-in entries are correct; and how many of the intersecting entries have been filled in.

Susan Haack

Yours truly — himself an avid crossword puzzle solver — cannot but agree.

In inference to the best explanation, we begin with a body of (purported) data, facts, or evidence and search for explanations capable of accounting for them. To have the best explanation is to have, given context-dependent background assumptions, an explanation that accounts for the evidence better than any competing alternative — such that it is reasonable to regard the hypothesis as true. Even though we inevitably lack deductive certainty, this form of reasoning licenses us to regard belief in the hypothesis as reasonable.

Accepting a hypothesis means believing that it explains the available evidence better than any competing hypothesis. Knowing that we have earnestly considered and analysed the available alternatives, and have been able to rule them out, warrants and strengthens our confidence that our preferred explanation is indeed the best explanation — that is, the explanation which, if true, provides us with the greatest understanding.

This does not, of course, imply that we cannot be mistaken. We certainly can. Inferences to the best explanation are fallible, since the premises do not logically entail the conclusion. From a strictly logical point of view, inference to the best explanation is therefore a weak mode of inference. Nevertheless, when sufficiently strong, such arguments can be warranted and may yield justified true belief — and hence knowledge — despite their fallibility. As scientists, we sometimes — much like crossword solvers or detectives — experience disillusionment. We believe we have reached a robust conclusion by eliminating alternative explanations, only to discover that what we took to be true is, in fact, false.

This does not necessarily mean that we lacked good reasons for our belief at the time. If one cannot live with such contingency and uncertainty, then one has chosen the wrong line of work. If it is deductive certainty one seeks — rather than the ampliative and defeasible reasoning characteristic of inference to the best explanation — then one should pursue mathematics or logic, not science.

What I do not believe — and this has been suggested — is that we can usefully lay down some hard-and-fast rules of evidence that must be obeyed before we accept cause and effect. None of my viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question — is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?

Austin Bradford Hill

Johan Norberg — the inequality zealots’ security blanket

27 Jan, 2026 at 09:19 | Posted in Politics & Society | 2 Comments

What Norberg says in this video is typical of libertarians and neoliberals. For the adherents of the Market’s millennial kingdom, widening income and wealth gaps in society are purely a good thing. For them, human rights are only about their own property rights and the freedom to look after themselves. That people might have a right to welfare never occurs to these defenders of ice-cold egoism. For these Ayn Rand disciples, rights and freedom are chiefly about ‘not having to deal with the authorities’ and being able to ‘live free from state interference’.

This may be freedom for the market’s Übermenschen, but what about the rest of us, we who do not accept the neoliberals’ revaluation of all values? What about the old, the sick, and the poor? Does the homeless man really experience it as a restriction of his freedom when the municipality tries to provide him with decent housing? Was it really a restriction of freedom that, through a generous education policy, made it possible for the poor working-class boy from Backarna to earn a doctorate several times over and then become a professor? Rather, their real freedom increased, and they feel pride and gratitude towards those who once built our welfare society — nowadays sadly battered, thanks to politicians who listen to the intellectual fluff peddled by Norberg et consortes.

State intervention does not necessarily mean that freedom is curtailed. Merely talking about abstract freedom, as Norberg and his neoliberal ilk do, means nothing. What really matters — to speak with the economics Nobel laureate Amartya Sen — is our capabilities. For what good is freedom of movement to a person with a mobility impairment if no one enables him to make use of that freedom? What good is freedom of the press if there is no newspaper in which to print one’s opinions?

In Sweden today, the 10 per cent of the population with the highest incomes command as large a share of total disposable income as the 50 per cent with the lowest incomes. Sweden also has the second-highest wealth concentration in the EU. These disparities in income and wealth constitute one of the great democratic problems of our time.

Johan Norberg ought to read and reflect on what a real liberal, Isaiah Berlin, writes in Four Essays on Liberty:

The freedom of the wolves has often meant the death of the sheep. The bloodstained history of economic individualism and unrestrained capitalist competition hardly needs stressing … Unrestricted laissez-faire, and the social and legal systems that permitted and encouraged it, led to brutal violations of freedom … Such systems do not fulfil the minimum conditions required for any meaningful measure of freedom to be exercised by individuals or groups, and without which such freedom is of limited or no value to those who may theoretically possess it.

And the income gaps just keep on growing …

26 Jan, 2026 at 15:55 | Posted in Politics & Society | 1 Comment

This report studies the income development of the power elite between 1950 and 2024. We can conclude that, in this year’s survey, the power elite’s average income relative to an industrial worker’s wage is at the highest level recorded over these seventy-odd years. The average income of the group we call the economic elite — the CEOs of 50 of the largest Swedish companies — corresponds in this year’s survey to 77 industrial workers’ wages. Since last year’s survey there has been a very large increase in the group’s relative income: in a single year the income gap has widened by the equivalent of six industrial workers’ wages.

LO

The average income of the CEOs of the 50 largest Swedish companies in 2024 thus corresponded to 77 industrial workers’ wages.

In Sweden today, the 10 per cent of the population with the highest incomes command as large a share of total disposable income as the 50 per cent with the lowest incomes. This is of course worrying, and it shows that something has happened in our country since the neoliberal turn of the 1980s.

Economics is about politics and power.

Besides having an income distribution that is running amok, Sweden also has the second-highest wealth concentration in the EU.

This is nothing short of a scandal for a country that not so many decades ago was celebrated the world over for its equality.

These disparities in income and wealth constitute one of the great democratic problems of our time.

My teaching commandments

26 Jan, 2026 at 15:15 | Posted in Education & School | Leave a comment

I. Be interested in your subject.

II. Know your subject.

III. Know about the ways of learning: the best way to learn anything is to discover it by yourself.

IV. Try to read the faces of your students, try to see their expectations and difficulties, put yourself in their place.

V. Give them not only information, but “know-how,” attitudes of mind, the habit of methodical work.

VI. Let them learn guessing.

VII. Let them learn proving.

VIII. Look out for such features of the problem at hand as may be useful in solving the problems to come — try to disclose the general pattern that lies behind the present concrete situation.

IX. Do not give away your whole secret at once — let the students guess before you tell it — let them find out for themselves as much as feasible.

X. Suggest it; do not force it down their throats.

George Pólya

Having taught at the university level for more than 45 years, yours truly has regularly put Pólya’s wisdom into practice. These principles have proven themselves repeatedly — reliable not just for solving particular problems, but for fundamentally shaping how students learn and think. They advocate for the reflection and structure that lie at the heart of genuine understanding, providing a steadfast guide for my teaching and supervision. To this day, they have been a touchstone for instilling clarity, independent thought, and intellectual confidence in my students.

