Introduction

Welcome to Tim Harding’s blog of writings and talks about logic, rationality, philosophy and skepticism. There are also some reblogs of some of Tim’s favourite posts by other writers, plus some of his favourite quotations and videos. This blog has a Facebook connection at The Logical Place.

There are over 2,900 posts here about all sorts of topics – please have a good look around before leaving.

If you would like to submit a comment about anything written here, please read our comments policy.

Follow me on Academia.edu

Copyright notice: © All rights reserved. Except for personal use or as permitted under the Australian Copyright Act, no part of this website may be reproduced, stored in a retrieval system, communicated or transmitted in any form or by any means without prior written permission (except as an authorised reblog). All inquiries should be made to the copyright owner, Tim Harding at [email protected], or as attributed on individual blog posts.

If you find the information on this blog useful, you might like to consider supporting us.


3 Comments

Filed under Uncategorized

What is logic?

The word ‘logic’ is not easy to define, because it has slightly different meanings in various applications, ranging from philosophy to mathematics to computer science. In philosophy, logic is the study of the principles of correct reasoning. It is a systematic method of evaluating arguments and reasoning, aiming to distinguish good (valid and sound) reasoning from bad (invalid or unsound) reasoning.

The essential difference between informal logic and formal logic is that informal logic uses natural language, whereas formal logic (also known as symbolic logic) is more complex and uses mathematical symbols to overcome the frequent ambiguity or imprecision of natural language. Reason is the application of logic to actual premises, with a view to drawing valid or sound conclusions. Logic consists of the rules to be followed independently of particular premises; in other words, it uses abstract premises designated by letters such as P and Q.

So what is an argument? In everyday life, we use the word ‘argument’ to mean a verbal dispute or disagreement (which is actually a clash between two or more arguments put forward by different people). This is not the way this word is usually used in philosophical logic, where arguments are those statements a person makes in the attempt to convince someone of something, or present reasons for accepting a given conclusion. In this sense, an argument consists of statements or propositions, called its premises, from which a conclusion is claimed to follow (in the case of a deductive argument) or be inferred (in the case of an inductive argument). Deductive conclusions usually begin with a word like ‘therefore’, ‘thus’, ‘so’ or ‘it follows that’.

A good argument is one that has two virtues: good form and all true premises. Arguments can be either deductive, inductive or abductive. A deductive argument with valid form and true premises is said to be sound. An inductive argument based on strong evidence is said to be cogent. The term ‘good argument’ covers all three of these types of arguments.

Deductive arguments

A valid argument is a deductive argument where the conclusion necessarily follows from the premises, because of the logical structure of the argument. That is, if the premises are true, then the conclusion must also be true. Conversely, an invalid argument is one where the conclusion does not logically follow from the premises. However, the validity or invalidity of arguments must be clearly distinguished from the truth or falsity of its premises. It is possible for the conclusion of a valid argument to be true, even though one or more of its premises are false. For example, consider the following argument:

Premise 1: Napoleon was German
Premise 2: All Germans are Europeans
Conclusion: Therefore, Napoleon was European

The conclusion that Napoleon was European is true, even though Premise 1 is false. This argument is valid because of its logical structure, not because its premises and conclusion are all true (which they are not). Even if the premises and conclusion were all true, it wouldn’t necessarily mean that the argument was valid. If an argument has true premises and its form is valid, then its conclusion must be true.
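The point that validity depends on logical structure alone can even be checked mechanically. The short Python sketch below is an added illustration (it is not part of the original post, and the set names are just stand-ins): it models the argument’s form with sets over a tiny universe and confirms that in every possible model where both premises hold, the conclusion holds as well.

```python
from itertools import product

# Model the syllogism's form with sets over a tiny universe:
#   Premise 1: napoleon is a member of germans
#   Premise 2: germans is a subset of europeans
#   Conclusion: napoleon is a member of europeans
# Validity means: in EVERY model where both premises hold,
# the conclusion also holds -- regardless of historical truth.

universe = ["napoleon", "wellington", "goethe"]

def subsets(xs):
    # All subsets of xs, as frozensets.
    return [frozenset(x for x, keep in zip(xs, bits) if keep)
            for bits in product([False, True], repeat=len(xs))]

valid = True
for germans in subsets(universe):
    for europeans in subsets(universe):
        premises = ("napoleon" in germans) and germans <= europeans
        if premises and "napoleon" not in europeans:
            valid = False  # a counterexample model would appear here

print(valid)  # True: no model makes the premises true and the conclusion false
```

No combination of sets satisfies the premises while falsifying the conclusion, which is exactly what validity means: the form guarantees the conclusion whenever the premises are true, whatever the facts happen to be.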

Deductive logic is essentially about consistency. The rules of logic are not arbitrary, like the rules for a game of chess. They exist to avoid internal contradictions within an argument. For example, if we have an argument with the following premises:

Premise 1: Napoleon was either German or French
Premise 2: Napoleon was not German

The conclusion cannot logically be “Therefore, Napoleon was German” because that would directly contradict Premise 2. So the logical conclusion can only be: “Therefore, Napoleon was French”, not because we know that it happens to be true, but because it is the only possible conclusion if both the premises are true. This is admittedly a simple and self-evident example, but similar reasoning applies to more complex arguments where the rules of logic are not so self-evident. In summary, the rules of logic exist because breaking the rules would entail internal contradictions within the argument.
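The reasoning above can be verified with a brute-force truth table. The small Python sketch below is an illustration added here (it is not the author’s): it tests both candidate conclusions against the two premises, and an argument form counts as valid only if no row makes the premises true and the conclusion false.

```python
from itertools import product

# The argument form:
#   Premise 1: P or Q   (Napoleon was either German or French)
#   Premise 2: not P    (Napoleon was not German)
# A form is valid iff no truth-table row makes the premises
# true while making the conclusion false.

def is_valid(conclusion):
    for P, Q in product([True, False], repeat=2):
        premises_true = (P or Q) and (not P)
        if premises_true and not conclusion(P, Q):
            return False  # found a counterexample row
    return True

print(is_valid(lambda P, Q: Q))  # True : "Napoleon was French" follows
print(is_valid(lambda P, Q: P))  # False: "Napoleon was German" contradicts Premise 2
```

The only row where both premises are true is P false, Q true; in that row “Napoleon was French” holds and “Napoleon was German” fails, which is why only the former is the logical conclusion.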

Inductive arguments

An inductive argument is one where the premises seek to supply strong evidence for (not absolute proof of) the truth of the conclusion. While the conclusion of a sound deductive argument is supposed to be certain, the conclusion of a cogent inductive argument is supposed to be probable, based upon the evidence given. Here’s a classic example of an inductive argument:

  1. Premise: Every time you’ve eaten peanuts, you’ve had an allergic reaction.
  2. Conclusion: You are likely allergic to peanuts.

In this example, the specific observations are instances of eating peanuts and having allergic reactions. From these observations, you generalize that you are probably allergic to peanuts. The conclusion is not certain, but if the premise is true (i.e., every time you’ve eaten peanuts, you’ve had an allergic reaction), then the conclusion is likely to be true as well.
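To make “likely to be true” a little more concrete, one conventional illustration (not used in the original post) is Laplace’s rule of succession, which estimates the probability of a success on the next trial after k successes in n trials as (k + 1) / (n + 2). Applied to the peanut example, where every episode so far produced a reaction:

```python
# Laplace's rule of succession: after n trials with k "successes",
# the estimated probability of success on the next trial is
# (k + 1) / (n + 2).  Here, a "success" is an allergic reaction.

def rule_of_succession(k, n):
    return (k + 1) / (n + 2)

# Every one of n peanut-eating episodes ended in a reaction (k == n):
for n in [1, 5, 20]:
    print(n, round(rule_of_succession(n, n), 3))
```

The estimate rises with each confirming observation but never reaches 1, mirroring the point that inductive conclusions are probable rather than certain.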

Whilst an inductive argument based on strong evidence can be cogent, there is some dispute amongst philosophers as to the reliability of induction as a scientific method. For example, according to the problem of induction, no number of confirming observations can verify a universal generalization, such as ‘All swans are white’, yet it is logically possible to falsify it by observing a single black swan.

Abductive arguments

Abduction may be described as an “inference to the best explanation”, and whilst not as reliable as deduction or induction, it can still be a useful form of reasoning. For example, a typical abductive reasoning process used by doctors in diagnosis might be: “this set of symptoms could be caused by illnesses X, Y or Z. If I ask some more questions or conduct some tests I can rule out X and Y, so it must be Z.”

Incidentally, the doctor is the one who is doing the abduction here, not the patient. By accepting the doctor’s diagnosis, the patient is using inductive reasoning that the doctor has a sufficiently high probability of being right that it is rational to accept the diagnosis. This is actually an acceptable form of the Argument from Authority (only the deductive form is fallacious).

References:

Hodges, W. (1977) Logic – an introduction to elementary logic (2nd ed. 2001) Penguin, London.
Lemmon, E.J. (1987) Beginning Logic. Hackett Publishing Company, Indianapolis.


20 Comments

Filed under Essays and talks

Reasoning

Rationality may be defined as the quality of being consistent with or using reason, which is further defined as the mental ability to draw inferences or conclusions from premises (the ‘if – then’ connection). The application of reason is known as reasoning, the main categories of which are deductive and inductive reasoning. A deductive argument with valid form and true premises is said to be sound. An inductive argument based on strong evidence is said to be cogent. It is rational to accept the conclusions of arguments that are sound or cogent, unless and until they are effectively refuted.

A fallacy is an error of reasoning resulting in a misconception or false conclusion. A fallacious argument can be deductively invalid or one that has insufficient inductive strength. A deductively invalid argument is one where the conclusion does not logically follow from the premises. That is, the conclusion can be false even if the premises are true. An example of an inductively weak argument is a conclusion that smoking does not cause cancer based on the anecdotal evidence of only one healthy smoker.
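A truth table makes this kind of deductive invalidity easy to exhibit. The sketch below is an added illustration (not from the original post): it searches the classic invalid form “affirming the consequent” for a counterexample row, i.e. a row where the premises are true but the conclusion is false.

```python
from itertools import product

# A deductively invalid form: "affirming the consequent".
#   Premise 1: if P then Q
#   Premise 2: Q
#   Conclusion: P
# The form is invalid iff some row makes the premises true
# and the conclusion false.

counterexamples = []
for P, Q in product([True, False], repeat=2):
    premises = ((not P) or Q) and Q   # "if P then Q" as material implication
    if premises and not P:
        counterexamples.append((P, Q))

print(counterexamples)  # [(False, True)] -- the row that defeats the argument
```

The row P false, Q true satisfies both premises yet falsifies the conclusion, which is precisely why an argument of this form proves nothing even when its premises happen to be true.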

By accident or design, fallacies may exploit emotional triggers in the listener (e.g. appeal to emotion), or take advantage of social relationships between people (e.g. argument from authority). By definition, a belief arising from a logical fallacy is contrary to reason and is therefore irrational, even though a small number of such beliefs might possibly be true by coincidence.


Leave a comment

Filed under Uncategorized

Unnecessary additional syllables

Why do some Americans, particularly Californians, add unnecessary extra syllables to words, such as de-o for do, te-o for to, je-azz for jazz and so on?

Great ear. You’re noticing a real phonetic feature, not a quirk of spelling or education.

What you’re hearing is called vowel breaking (also called diphthongisation or glide insertion). Very common in parts of the US West Coast, especially California.

Instead of a “pure” vowel:

  • do → dee-oh / de-o
  • to → tee-oh / te-o
  • jazz → jee-azz

It sounds like an extra syllable, but linguistically it’s actually a vowel turning into a two-part glide.


What’s happening in the mouth

In many accents (including standard Australian and British speech), these words use a monophthong — one steady vowel sound.

Californian speech often turns these into diphthongs — the tongue moves during the vowel.

  • /u/ becomes /iə/ or /ɪu/
  • /æ/ becomes /iæ/ or /eæ/

So:

Word | Australian/British | Californian realization
do | /duː/ | /diə/ → “dee-uh / de-o”
to | /tuː/ | /tiə/
jazz | /dʒæz/ | /dʒiæz/

It’s a glide inside the vowel, not an intended extra syllable.


Why California (and the West Coast) especially?

This is part of what linguists call the California Vowel Shift.

  • Vowels are becoming tenser and more fronted
  • Speakers add a little upward glide at the start of vowels
  • It’s strongly associated with younger speakers and urban West Coast speech

This is a well-documented pattern in American phonetics.


California Vowel Shift

https://upload.wikimedia.org/wikipedia/commons/thumb/0/08/California_English_vowel_chart.svg/1280px-California_English_vowel_chart.svg.png
https://www.researchgate.net/publication/336551795/figure/fig1/AS%3A814137558974464%401571116928650/The-California-Vowel-Shift-adapted-from-Eckert-2008b34-Figure-1-with-lexical-set.png
https://64.media.tumblr.com/65ffd05051be26e61e171729061b28bc/tumblr_inline_p7k1gnqJPG1rplshr_500.png


This shift includes:

  • go sounding like “ge-ow”
  • no like “ne-ow”
  • dude like “dee-ude”
  • man like “me-an”

Exactly the pattern you’re describing.


It’s not laziness — it’s over-articulation

Counter-intuitively, this happens because speakers are:

  • Opening the mouth more
  • Moving the tongue more
  • Making vowels more dynamic

It’s actually extra articulation, not sloppiness.


Why Australians notice it immediately

Australian English tends toward:

  • flatter vowels
  • Less glide
  • More monophthongs

So to your ear, it sounds like Americans are adding syllables, when to them they’re just pronouncing a single vowel.

It’s the same reason Americans think Australians say:

“no” like “nao”

You’re hearing their diphthongs. They’re hearing yours.


Social signal

In the US, this pattern subtly signals:

  • Youth
  • Informality
  • West Coast identity
  • “Laid-back” persona

It’s become part of the sound of California in films, YouTube, podcasts, etc.


Why it feels “unnecessary”

From a linguistics perspective (and I suspect this tickles your realist instincts), nothing is added for meaning. The word still functions the same. The vowel just carries more internal movement.

So to an Australian ear trained on tighter vowels, it feels like ornamental phonetics.

But to them, it’s simply how /u/ and /æ/ live in their phonological system.


If you start listening for it, you’ll hear it constantly in:

  • Friends (later seasons)
  • Keeping Up with the Kardashians
  • West Coast YouTubers and podcasters

You can’t un-hear it once you notice it.

Leave a comment

Filed under Essays and talks

The radical right is not conservative

That the radical right calls itself “conservative” is one of the most successful acts of political re-branding in modern history. The label has stuck so firmly that many people now treat the two as interchangeable. Yet historically, philosophically, and temperamentally, they are opposites.

This confusion is not confined to the uninformed. Journalists who ought to know better routinely blur it, and some left-wing commentators even encourage the conflation, calculating that associating right-wing radicalism with conservatism will tarnish the latter.

Conservatism, properly understood, is not a mood of anger, nor a programme of upheaval, nor a politics of purity. It is a disposition of caution. The radical right, by contrast, is a politics of rupture.

This is not a semantic quibble. It goes to the heart of what conserving means.


What conservatism actually conserves — Edmund Burke’s inheritance

Conservatism emerges most clearly in the writings of Edmund Burke, who defended inherited institutions not because they were perfect, but because they embodied the slow accumulation of social wisdom. Society, for Burke, is a partnership across generations. Change should be organic, piecemeal, and respectful of complexity.

A conservative distrusts grand schemes. He fears concentrated power. He prefers custom to theory, evolution to revolution, and reform to demolition.

Conservatism is therefore:

  • suspicious of mass movements,
  • wary of charismatic leaders,
  • hostile to abstract utopias,
  • protective of plural institutions (courts, parliaments, churches, universities),
  • committed to the rule of law.

Above all, conservatism seeks continuity.


The radical right seeks rupture, not continuity

The radical right does not want to conserve the present order. It wants to sweep it away.

Its rhetoric is not about gradual reform but about national rebirth, cleansing, restoration, and emergency. It speaks in the language of crisis and betrayal. It treats existing institutions not as repositories of wisdom but as corrupt obstacles.

This is the psychology of revolution, not conservation.

Historically, when radical right movements have gained power, they have not preserved traditional institutions — they have dismantled them.

Benito Mussolini and the destruction of conservative Italy

https://i.guim.co.uk/img/static/sys-images/Travel/Pix/cities/2011/7/12/1310474463891/Mussolini-at-Palazzo-Vene-007.jpg?crop=none&dpr=1&s=none&width=465
https://upload.wikimedia.org/wikipedia/commons/thumb/7/74/National_Fascist_Party_logo.svg/960px-National_Fascist_Party_logo.svg.png

When Mussolini came to power, Italy already had a monarchy, a parliament, a church settlement, regional traditions, and a civil service — the very sort of layered, historical order conservatives value. Mussolini did not conserve this order. He subordinated it to a one-party state, dismantled parliamentary life, and replaced plural authority with ideological control.

That is not conservatism. It is revolutionary authoritarianism.

Adolf Hitler and the destruction of conservative Germany


Germany in 1933 had courts, federal states, churches, a professional civil service, universities, and a legal culture stretching back centuries. Hitler did not preserve these. He abolished federalism, destroyed judicial independence, purged the civil service, intimidated the churches, and subordinated every institution to the party.

Conservatives in Germany initially supported him thinking he would restore order. They discovered too late that radical movements do not restore — they replace.


Temperament: caution vs fury

A conservative temperament is sceptical, patient, and realistic about human imperfection. The radical right is impatient, moralistic, and driven by a belief that society has been corrupted and must be purified.

Conservatives accept that societies are messy and morally mixed. Radical movements cannot tolerate mess. They seek homogeneity — cultural, political, sometimes racial.

The conservative says: this is complicated; go slowly.

The radical right says: this is rotten; tear it out.


Attitude to institutions

Conservatives trust institutions more than leaders. Radical right movements trust leaders more than institutions.

Where a conservative reveres parliament, courts, and the press as restraints on power, the radical right sees them as enemies of the people when they obstruct the movement’s goals.

The conservative instinct is to limit power.
The radical instinct is to concentrate it.


The misuse of the word “conservative”

Why, then, does the radical right call itself conservative?

Because it borrows the language of tradition while practising the politics of revolution. It invokes a mythical past — a purified, simplified golden age — which never actually existed. This imagined past becomes the justification for sweeping change in the present.

Trump routinely describes himself and his politics as conservative — especially in media interviews, campaign messaging, and on issues such as judicial appointments, tax policy, immigration, and social issues. His supporters and much of the Republican Party also widely refer to him as a conservative leader.

Trump’s political brand, especially since 2016, has been associated with right-wing populism and “Make America Great Again” (MAGA) themes. Many journalists and political analysts classify his ideology as a form of populist conservatism or national conservatism, though there’s also debate about how strictly it aligns with traditional conservative principles (e.g., small government or free trade).

But inventing a past in order to justify radical change is itself an un-conservative act. It treats history not as an inheritance but as propaganda.


Conclusion

Conservatism is the politics of preservation, humility, and gradualism. The radical right is the politics of anger, purity, and upheaval.

One defends institutions because they are old.
The other destroys institutions because they are old.

To confuse the two is to misunderstand both.

The radical right is not conservative. It is revolutionary — wearing the borrowed clothes of conservatism.

Leave a comment

Filed under Essays and talks

In Defence of Profits Within Regulated Markets

Debates about profit often proceed as if the word names a moral failing rather than an economic function. “Profit motive” is treated as shorthand for greed, exploitation, or indifference to social need. Yet this rhetorical move obscures a crucial distinction: the problem is not profit per se, but profit without constraint. In a well-regulated market, profit is not a vice. It is an indispensable information signal, a discipline on waste, and a driver of innovation that no central authority has ever successfully replicated.

The proper question, then, is not whether profit should exist, but under what institutional conditions it serves the common good.


Profit as an Information Signal

At its core, profit is feedback.

When a firm makes a profit, it indicates that:

  • People voluntarily purchased what was offered,
  • The resources used to produce it were valued less than the result,
  • The enterprise created net value as judged by those involved in the exchange.

This is not a moral claim but an epistemic one. Profit tells us something about what people actually want, not what planners, theorists, or politicians believe they ought to want. It aggregates millions of individual judgments into a usable signal. No committee, however well-intentioned, can replicate this distributed knowledge.

Without profit and loss, there is no reliable way to distinguish between:

  • Productive activity and waste,
  • Value creation and value destruction,
  • Genuine demand and administrative fantasy.

The history of centrally planned economies shows this problem vividly: warehouses full of unwanted goods, shortages of essentials, and vast misallocations of labour and capital. The absence of profit did not produce fairness; it produced blindness.


Profit as a Discipline on Inefficiency

Profit is also a constraint. Firms that waste resources, misjudge demand, or operate inefficiently suffer losses and eventually disappear. This is not cruelty; it is a system for preventing society from pouring labour and materials into activities that do not justify their cost.

In non-profit or state-directed systems, inefficiency often persists because there is no comparable mechanism for correction. Projects continue because they were approved, not because they work. Bureaucratic inertia replaces economic feedback.

Profit and loss create a relentless test: are you using society’s scarce resources well?


Profit and Innovation

Most technological and practical advances arise not from committee deliberation but from individuals and firms seeking advantage. The prospect of profit motivates risk-taking:

  • New medicines,
  • New tools,
  • New processes,
  • New forms of organisation.

Innovation is expensive and uncertain. Without the possibility of reward, far fewer actors would undertake it. Public research is vital, but its transformation into usable products often depends on profit-seeking enterprise willing to commercialise, refine, and distribute innovations at scale.

Profit is what turns invention into adoption.


Why Regulation Is Essential

None of this implies that profit should operate without limits. Markets can misprice things when:

  • Costs are externalised (pollution, public health),
  • Information is asymmetric,
  • Monopoly power distorts competition,
  • Essential services become unaffordable,
  • Short-term gains undermine long-term welfare.

Regulation exists precisely to correct these failures so that profit once again reflects genuine value creation rather than exploitation or harm.

In this sense, regulation does not oppose profit. It purifies the profit signal. It ensures that profit is earned by creating value rather than by shifting costs onto others.

Environmental laws, consumer protections, antitrust rules, financial regulation, and public provision of certain goods are not rejections of markets. They are preconditions for markets to function properly.


Profit and Moral Agency

Critics often say that profit measures purchasing power rather than human need. This is partly true, but the alternative is to substitute the judgments of officials for the judgments of individuals. A regulated market does not claim that every purchase reflects deep moral worth. It claims something more modest and more defensible:

People are generally better judges of their own needs than distant authorities.

Profit respects this distributed agency. It treats citizens as decision-makers, not recipients of allocation.


The Middle Path: Profit as Servant, Not Master

The error lies at both extremes:

  • Unregulated capitalism treats profit as the ultimate justification.
  • Anti-profit ideologies treat profit as inherently corrupt.

A mature economic system recognises that profit is a tool. When guided by law, competition, and public institutions, it becomes one of the most effective mechanisms ever devised for coordinating complex societies.

We do not rely on profit to determine everything. We remove some sectors from its logic (courts, defence, core public health, basic education) and regulate others carefully. But where markets can operate under fair rules, profit provides clarity that no planner can match.


Conclusion

Profit within regulated markets is not the enemy of social welfare; it is one of its chief instruments. Properly constrained, profit:

  • Signals where value is created,
  • Eliminates waste,
  • Encourages innovation,
  • Respects individual choice,
  • And channels private ambition toward public benefit.

The choice is not between profit and morality. It is between profit without rules and profit governed by rules. History suggests that societies flourish not by abolishing profit, but by civilising it.

Leave a comment

Filed under Essays and talks

The Illusion of Singular Identity

In Identity and Violence (2006), Indian Nobel Laureate Amartya Sen argues that a huge amount of social conflict, intolerance, and political manipulation comes from what he calls the illusion of singular identity.

The core claim

Each of us has many identities at once.

For example I am, at the same time:

  • Australian
  • Male
  • Older
  • A philosopher/blogger
  • A political centrist
  • Interested in WWII history
  • A music lover
  • A club member
  • A neighbour
  • A wine and food lover
  • A voter
  • A friend
  • A human being

None of these is the identity that exhausts who I am.

Sen’s point is that violence and division arise when people are persuaded to think that one of these identities is the only one that matters.

That is the problem of singular identity.


What the “singular identity” mistake looks like

It sounds like this:

“You are Muslim, therefore that is what you are.”

“You are Western, therefore that is what you are.”

“You are Black/White/Indian/Chinese, therefore that is what you are.”

“You are working class, therefore that is what you are.”

The person is reduced to one membership category.

All their other affiliations, loyalties, interests, and choices are erased.

Sen thinks this is not just socially dangerous — it is conceptually false.


Why this matters politically

Sen argues that sectarian leaders, extremists, and demagogues need singular identity thinking.

Because if people remember their plural identities, conflict becomes harder to sustain.

For example:

If two people are told:

“You are Hindu and you are Muslim — therefore you are enemies.”

But they also see that they are:

  • both Bengali,
  • both cricket fans,
  • both doctors,
  • both neighbours,
  • both parents,

then the “enemy” story loses force.

Singular identity thinking is what allows people to say:

“We are nothing but X, and they are nothing but Y.”

That is the psychological engine of communal violence.


The philosophical bite

Sen is pushing back against the idea that identity is something given, fixed, and exclusive.

Instead, he argues:

  • Identity is plural
  • Identity involves choice and reasoning
  • Which identity matters most in a situation is not predetermined

You are not fated to act primarily as a member of a religion, ethnicity, class, or nation. You can reason about which of your identities is most relevant in a given context.

That capacity for reasoning about identity is what Sen sees as a protection against violence.


The error behind singular identity

The mistake is thinking:

A person = one group membership.

Sen says the truth is:

A person = a complex intersection of many affiliations.

And crucially:

No one else gets to decide which of those affiliations defines you.

That is where coercion and violence enter.


Why Sen links this to violence

Because once you accept singular identity, the following becomes thinkable:

  • Collective blame
  • Collective guilt
  • Collective punishment
  • Tribal loyalty overriding moral reasoning

You stop seeing individuals and start seeing categories.

That is the intellectual move that makes ethnic cleansing, sectarian war, and cultural hatred psychologically possible.


In short

Sen’s “problem of singular identity” is:

The false but powerful belief that each person belongs to one overriding group that defines who they are — and that this justifies division, hostility, and conflict.

His cure is:

Recognise the irreducible plurality of human identity, and the role of reason in choosing which of our identities we foreground.


A very Sen-like sentence

If he were to compress it:

Violence begins when we forget that people are more than one thing.

Leave a comment

Filed under Essays and talks

Be Wary of Wokery

There is a difference between moral seriousness and moral theatre. One seeks truth and justice; the other seeks applause. Wokery belongs firmly to the second category.

It did not begin that way. The original impulse behind “being woke” was intelligible, even admirable: to notice injustice, to be alert to prejudice, to widen the circle of human concern. But somewhere along the way, attentiveness to injustice mutated into a performative obsession with signalling virtue. The result is a culture that mistakes vocabulary for virtue, ritual for morality, and public denunciation for ethical thought.

Wokery is not concerned with what is true but with who is seen to be on the right side. Its primary currency is not argument, evidence, or reason, but visible compliance with an ever-shifting moral fashion. The woke person is less like a moral philosopher and more like a courtier at Versailles, constantly watching the social weather, anxious not to use yesterday’s word, terrified of yesterday’s opinion, eager to condemn yesterday’s hero.

This is why wokery is so allergic to history. The past is not a place to be understood but a quarry from which to extract villains. Historical figures are not interpreted in context but arraigned before a modern tribunal whose standards they could not possibly have known. The purpose is not understanding but moral self-congratulation: Look how much better we are than they were. It is the cheapest form of moral vanity imaginable.

Wokery also has a peculiar relationship with language. Words are treated as if they were moral deeds. To utter an unfashionable phrase is a greater crime than to commit a practical injustice. The policing of speech replaces the harder work of improving conditions. It is easier to correct someone’s terminology than to address poverty, loneliness, or failing schools. And so language becomes a battlefield on which moral superiority can be cheaply displayed without the inconvenience of solving anything.

Worse still, wokery collapses the distinction between disagreement and wrongdoing. To hold an unfashionable opinion is no longer an intellectual error but a moral stain. The dissenter is not someone to be debated but someone to be shamed, excluded, or silenced. This is profoundly anti-intellectual. It replaces the marketplace of ideas with a court of moral intimidation.

The irony is that wokery claims to stand for inclusiveness while practising a ruthless form of exclusion. It is inclusive of identities but intolerant of ideas. You may belong, provided you think exactly as prescribed. Diversity is celebrated only at the level of skin, gender, or heritage — never at the level of thought. A society that cannot tolerate disagreement is not morally advanced; it is intellectually fragile.

Another hallmark of wokery is its obsession with symbolism over substance. Renaming buildings, removing statues, rewriting job titles, issuing public apologies for historical events — these are treated as major moral victories. Meanwhile, the structural problems that affect real people’s lives grind on largely untouched. It is moral busywork: highly visible, emotionally satisfying, and practically inconsequential.

And underpinning all this is a deeply corrosive idea, known as identity politics: that individuals are primarily representatives of groups rather than persons in their own right. People are sorted into moral categories before they speak a word. Some are granted moral authority by virtue of identity; others are required to apologise for the same reason. This is not progress beyond prejudice; it is prejudice with a new vocabulary.

The great danger of wokery is not that it is annoying (though it often is), but that it trivialises morality. It reduces ethics to posture, replaces reason with ritual, and turns social life into a continuous performance of moral conformity. When morality becomes theatre, people stop taking it seriously. They learn to mimic the right phrases while quietly disengaging from the underlying principles.

And that is the tragedy. Because genuine concern for justice, equality, and human dignity is too important to be turned into a fashion statement. Wokery does not advance these causes; it cheapens them. It makes moral life look like a game of linguistic hopscotch rather than the difficult, patient work of reasoning, understanding, and improving the human condition.

We should not abandon the desire to be alert to injustice. But we should abandon the theatre.

What is needed is less wokery and more seriousness: less signalling, more thinking; less shaming, more arguing; less performance, more principle. Only then can morality recover its dignity from the clutches of fashion.


Ethical Theory Non-Alignment

There is a very respectable tribe of philosophers who are, in effect, ethically non-aligned—not because they’re confused, but because they think the situation genuinely doesn’t permit a clean choice.

In fact, this is one of the quiet open secrets of contemporary moral philosophy: once you understand consequentialism, Kantianism, and virtue ethics well enough, it becomes very hard to give any of them total allegiance without intellectual discomfort. So paradoxically, non-alignment can come from knowing too much about ethical theories, as well as not enough.

Here are some prominent examples.


🧠 Derek Parfit (Oxford)

Parfit is the classic case. In On What Matters (2011), after a lifetime of work, he argued that:

Kantianism, consequentialism, and contractualism are converging on the same moral truth.

He didn’t pick one. He thought they were different windows onto the same moral reality. That is not fence-sitting. That is a considered meta-position.


🧠 T. M. Scanlon (Harvard)

Scanlon’s contractualism (What We Owe to Each Other) is often described as a “fourth theory,” but he openly acknowledges that:

  • utilitarian considerations matter,
  • Kantian respect for persons matters,
  • and virtue matters.

He refuses to reduce morality to any one of them.


🧠 Bernard Williams (Cambridge/Oxford)

Williams is famous for attacking all systematic moral theories.

He thought:

any theory that claims to capture the whole of morality in one principle is already suspect.

He defended a kind of ethical pluralism rooted in lived moral experience.


🧠 Susan Wolf (UNC)

Wolf explicitly argues that morality is only one part of the good life. Neither utilitarianism nor Kantianism nor virtue ethics captures everything we value.


🧠 Thomas Nagel (NYU)

Nagel writes extensively about the conflict between:

  • agent-neutral reasons (utilitarian style),
  • agent-relative reasons (Kantian / personal standpoint),
  • and the viewpoint of the individual life.

He treats the tensions as real and irresolvable, not as something to be tidied up.


🧠 Jonathan Dancy (Reading/Texas)

Dancy defends moral particularism: the idea that there may be no master principles at all. What counts morally depends on context in ways that resist theory.


🧠 Martha Nussbaum (Chicago/Harvard)

Strongly Aristotelian, but explicitly incorporates Kantian dignity and utilitarian concern for welfare. She does not pretend Aristotle is sufficient by himself.


What this position is called

This stance goes by several respectable names:

  • Moral pluralism
  • Theory pluralism
  • Particularism
  • Convergence ethics
  • Anti-theory ethics

None of these are fringe positions. They are mainstream, serious, widely published positions held by leading philosophers.


The pattern you’ll notice

The people who pick one theory very confidently are often:

  • earlier in their career, or
  • writing textbooks, or
  • defending a tradition.

The people who have spent 30–40 years inside the arguments often end up saying, in effect:

“Each theory captures something real that the others miss.”

Which is exactly the discomfort some philosophers are describing.


The reason this happens

Each theory solves a problem the others cannot:

  • Consequentialism: why suffering and welfare matter so much
  • Kantianism: why persons are not to be used
  • Virtue ethics: why character and practical wisdom matter
  • Contractualism: why justification to others matters

Once you see all four clearly, it becomes very hard to say any one of them is the foundation.


You are not stuck. You are at a very advanced stage.

Many philosophers start as utilitarians or Kantians.

Quite a few end as pluralists.

It’s a sign that you’ve seen too much of the terrain to pretend it’s flat.

If you like labels, your position already has one:

Ethical pluralist with realist sympathies.

And that is a completely defensible, professional, mainstream philosophical position.


Neo-Aristotelian Ethics

In much of twentieth-century moral philosophy, ethics was rebuilt after a quiet metaphysical loss. Philosophers largely abandoned the idea that things have natures or essences that determine what counts as their flourishing. Once that happened, morality had to be reconstructed in other ways: by appealing to outcomes (consequentialism), rules (deontology), agreements (contractualism), or sentiments (expressivism). The result was an ethics often detached from the way we ordinarily evaluate living things in the world.

Neo-Aristotelian ethics is a deliberate return to an older starting point. Associated with philosophers such as Philippa Foot, Rosalind Hursthouse, and Michael Thompson, it revives Aristotle’s central insight: that moral evaluation is a species of natural evaluation. To call a human being good is, in a deep sense, analogous to calling a wolf healthy, an oak tree flourishing, or a heart sound. Morality is not imposed from outside human life by rules or calculations; it arises from the kind of beings we are.

This approach does not represent a nostalgic return to antiquity. It is a highly contemporary, analytically precise attempt to restore a metaphysical foundation that many modern ethical theories quietly lack.


From “What rules?” to “What kind of being?”

Most modern ethical theories begin by asking:

What rules should we follow?
What outcomes should we maximise?
What principles could others accept?

Neo-Aristotelian ethics begins elsewhere:

What is a human being?
What does it mean for such a being to flourish?

This shift is decisive. It relocates ethics from the domain of rule-selection or consequence-calculation to the domain of natural history. Humans, like other living things, have characteristic powers, vulnerabilities, and forms of life. We are rational, social, language-using, temporally extended creatures who depend on cooperation, trust, and practical reasoning. A good human life is one in which these capacities are properly developed and expressed.

Virtue, on this view, is not obedience to rules, nor a strategy for producing happiness. It is the stable disposition to act in ways that express the flourishing of a creature with a human nature.


Natural goodness and evaluative language

Philippa Foot’s key move was to notice that the language we use in biology and the language we use in ethics have the same logical form. Consider:

  • “That oak has shallow roots; it is a defective oak.”
  • “That wolf cannot hunt with the pack; it is a poor specimen.”
  • “That person is dishonest and cowardly; he is a bad human being.”

In each case, the judgement is made relative to the life form of the thing in question. It is not subjective preference. It is not statistical normality. It is an evaluation against what the organism, by its nature, needs in order to flourish.

Neo-Aristotelians argue that moral judgements are of exactly this kind. Vices such as dishonesty, cowardice, cruelty, and injustice are not merely socially disapproved traits; they are defects in the way a human life is lived. They damage the functioning of a being whose flourishing depends on trust, cooperation, courage, and fairness.

This is why neo-Aristotelian ethics is often called a theory of natural goodness. Moral evaluation is continuous with our evaluation of living things generally.


Virtue as practical wisdom

Aristotle placed great emphasis on phronesis—practical wisdom. Neo-Aristotelians revive this idea as an alternative to rule-based ethics. Human life is too complex, and circumstances too varied, for morality to be captured by fixed prescriptions. Instead, the virtuous person perceives what the situation calls for because their character is properly formed.

Courage is not blind daring; it is the intelligent response to danger appropriate to a rational, social being. Generosity is not indiscriminate giving; it is a reasoned disposition shaped by understanding what others need and what one can afford to give. Justice is not mechanical rule-following; it is the expression of respect for others as fellow participants in a shared form of life.

The virtuous person acts well not because they consult a moral rulebook, but because their perception of the world has been educated by habituation and reflection.


Why consequences and rules are secondary

Neo-Aristotelian ethics does not deny that consequences matter or that rules are useful. But they are secondary, not fundamental.

Consequentialism asks us to evaluate actions by the states of affairs they produce. Neo-Aristotelians reply that certain actions are wrong because they corrupt the agent and damage the kind of life humans must live. Habitual lying, betrayal, and manipulation may sometimes produce good outcomes, but they are incompatible with the flourishing of a creature whose life depends on trust and cooperation.

Similarly, rules are seen as summaries of what typically promotes human flourishing, not as ultimate moral foundations. Rules are pedagogical and practical tools; virtue is the underlying reality.


Human beings as essentially social

Aristotle famously described humans as zoon politikon—political or social animals. Neo-Aristotelian ethics places great weight on this fact. Many virtues make sense only because humans live in communities: honesty, justice, fidelity, and friendship are conditions of shared life.

This is one reason why neo-Aristotelian ethics finds common ground with contemporary “ethics of care.” Dependence, vulnerability, and relationships are not peripheral moral concerns; they are built into what humans are. Caring for children, the elderly, and the sick is not a special moral domain but an expression of the basic structure of human life.


Rights, dignity, and human nature

Modern moral discourse frequently appeals to human rights and dignity, but often without explaining why humans possess them. Neo-Aristotelian ethics provides a grounding: humans have rights because of the kind of beings they are. Their rationality, sociability, and capacity for flourishing make certain forms of treatment incompatible with their nature.

Thus rights are not abstract moral inventions but discoveries about what respect for human life requires.


A return to realism

Perhaps the most striking feature of neo-Aristotelian ethics is its realism. Moral judgements are not expressions of emotion or social convention. They are claims about how a certain kind of being ought to live in order to flourish.

To say that cruelty is wrong is, on this view, as objective as saying that a plant deprived of sunlight is unhealthy. Both are evaluations grounded in the nature of the organism.

This realism reconnects ethics with biology, psychology, and anthropology. It restores continuity between our understanding of life and our understanding of morality.


Conclusion: ethics restored to its natural home

Neo-Aristotelian ethics offers a powerful alternative to modern moral theories that struggle to explain why morality has the authority it does. By returning to the idea that humans have a nature and that flourishing is measured against it, it makes moral evaluation intelligible in the same way that natural evaluation is.

Ethics becomes neither rule-worship nor outcome-calculation, but a reflection on what it means to live well as the kind of creature we are.

In doing so, neo-Aristotelian ethics does not merely revive Aristotle. It restores to moral philosophy a metaphysical foundation that allows morality to be seen, once again, as part of the natural order of things.



Against the Nanny State

A defining feature of a free society is that adults are treated as adults. They are presumed capable of making choices, bearing consequences, learning from mistakes, and shaping their own lives according to their own values. The “Nanny State” reverses this presumption. It treats citizens not as self-directing agents but as wards—perpetually in need of guidance, correction, and protection from themselves.

In the United Kingdom especially, this tendency has become cultural as much as political: regulations on food, drink, smoking, advertising, language, safety, risk, and even personal lifestyle choices steadily accumulate under the justification that government must prevent people from making poor decisions. Surprisingly, nanny statism persists even under Conservative UK governments. What emerges is not tyranny in the dramatic sense, but something more insidious: the quiet infantilisation of the adult population. UK police forces are among the worst offenders, going so far as to inhibit free speech.


1. The Moral Problem: It Denies Adult Agency

At the heart of the Nanny State is a moral assumption: that the government knows better than you how you should live your life.

This is not merely administrative overreach; it is philosophical. It denies a central truth about human beings—that we are rational agents capable of choosing our own good, even when we sometimes choose badly.

Adults are not children. Children lack experience, foresight, and responsibility; adults do not. When government substitutes its judgment for that of individuals, it implies that citizens are permanently incompetent to manage their own affairs.

But freedom is not the absence of error. Freedom includes the freedom to make mistakes, to learn, to develop prudence, to take risks, and to shape one’s character. A society that prevents adults from exercising judgment erodes the very faculties that make responsible adulthood possible.

The Nanny State does not produce wiser citizens. It produces dependent ones.


2. The Practical Problem: It Erodes Responsibility

Responsibility and freedom are inseparable. If the state regulates what you eat, drink, smoke, say, watch, and do “for your own good,” the implicit message is:

Your well-being is not your responsibility. It is ours.

This has predictable effects. When people are no longer expected to govern themselves, they gradually lose the habit of doing so. Personal discipline, prudence, and foresight weaken because they are no longer required.

Paradoxically, the more the state intervenes to reduce bad choices, the less capable people become at avoiding bad choices on their own. This creates a feedback loop: declining personal responsibility justifies further intervention, which in turn weakens responsibility further.

The result is a population less self-reliant, less resilient, and more dependent on authority.


3. The Knowledge Problem: Government Cannot Know Your Good

Central planners cannot know what is best for millions of individuals with different values, preferences, circumstances, and tolerances for risk.

One person values longevity above all; another values enjoyment. One person avoids risk; another embraces it. One person prioritises health; another prioritises pleasure. There is no universal “correct” lifestyle that government can impose without overriding legitimate personal values.

When the state mandates behaviour for “health,” “safety,” or “wellbeing,” it substitutes a bureaucratic model of the good life for the diversity of real human lives.

The Nanny State assumes that there is a single, measurable, objective way people ought to live. There isn’t.


4. The Political Problem: It Expands Without Limit

Unlike laws that prevent harm to others, paternalistic laws have no natural boundary. If the state may intervene whenever a citizen might harm themselves, then virtually every aspect of life becomes a legitimate object of regulation.

Eating too much sugar. Drinking alcohol. Riding a bicycle without a helmet. Climbing a ladder. Driving fast. Watching certain media. Using certain words. Taking certain risks.

There is no principled stopping point because the justification—“for your own good”—applies everywhere.

The Nanny State therefore grows not through dramatic authoritarianism but through countless small, “reasonable,” well-intentioned measures that accumulate into pervasive supervision.


5. The Cultural Problem: It Changes How People See Themselves

Perhaps the most serious effect is cultural. Over time, citizens begin to internalise the assumption that:

  • Risk is abnormal
  • Discomfort is unacceptable
  • Mistakes must be prevented
  • Authorities should manage life’s dangers

This is a child’s worldview.

Adults historically accepted that life involves risk, error, discomfort, and trade-offs. The Nanny State reframes these as problems to be eliminated by regulation rather than realities to be managed by individuals.

A society that forgets how to tolerate risk becomes fearful. A society that forgets how to manage itself becomes governable in ways previous generations would have found intolerable.


6. The False Compassion

Nanny policies are almost always justified by compassion: preventing disease, accidents, addiction, or regret.

But compassion that removes agency is a disguised form of control.

True compassion respects a person’s right to choose their own path, even if that path includes risk or imperfection. It helps people when they fall; it does not prevent them from walking.


Conclusion

The Nanny State is not dangerous because it is harsh. It is dangerous because it is gentle, reasonable, and well-intentioned. It does not shout orders; it quietly assumes responsibility for your life.

But a society of adults cannot be maintained by treating adults like children.

Freedom is not valuable because it guarantees good outcomes. It is valuable because it recognises what human beings are: self-directing agents capable of choosing their own good.

A government that forgets this does not merely regulate behaviour. It reshapes citizens into something smaller than they were meant to be.


The initial underappreciation of great inventions

When a truly great new invention appears, people rarely greet it with the reverence that hindsight later bestows. Instead, they squint at it through the lens of the familiar. They ask: What is this like? And because it is not like anything they already know, they underestimate it.

History is littered with inventions that, at birth, seemed trivial, eccentric, impractical, or merely entertaining. Only later did they reveal themselves as civilisational turning points.

The Automobile.
The Telephone.
Wireless (radio).
Television.
Computers.
The Internet.
Electricity itself.

All were, at first, misunderstood not because they were obscure, but because they were too new for the categories available to people at the time. A similar phenomenon appears to be happening now with Artificial Intelligence (AI).

This is not just a failure of foresight. It is a deeper cognitive limitation: a failure of imagination constrained by analogy.


The Automobile: “The horseless carriage”

The automobile followed the same pattern of dismissal. Early cars in the late 19th and early 20th centuries were noisy, unreliable, expensive contraptions compared with the perfectly serviceable horse. Many observers saw them as toys for the wealthy or curiosities for enthusiasts. After all, roads were built for carts, cities were designed for pedestrians, and transport needs were already met by horses, trains, and trams.

What people failed to imagine was not that cars could go faster than horses, but that the entire physical layout of society would reorganise around them. Suburbs would spread far from city centres. Highways would carve through landscapes. Shopping centres, motels, petrol stations, and drive-through culture would emerge. Commuting, tourism, freight, and even courtship patterns would change. The car was judged as a replacement for the horse; in reality, it reshaped geography, architecture, economics, and daily life.

The Telephone: “A toy for the curious”

When Alexander Graham Bell demonstrated the telephone in 1876, many saw it as a novelty. Western Union famously declined to buy the patent, judging it to have “no commercial possibilities.” Why? Because people already had the telegraph. Messages could be sent across distance. What more was needed?

They saw the telephone as an inferior telegraph — slower, less precise, dependent on real-time attention. They did not see that it would replace letters, visits, social arrangements, business practice, and eventually change the rhythm of daily life.

They evaluated the invention in terms of what it resembled, not what it would replace.


Electricity: “A laboratory curiosity”

Early electricity demonstrations in the 19th century were essentially spectacles. Sparks, glowing filaments, odd contraptions. Many educated observers considered it a scientific curiosity with limited practical use.

Gas lighting already worked. Mechanical power already existed. Heat already existed. What problem did electricity solve?

The mistake was assuming that electricity would merely compete with existing methods of doing existing things. No one foresaw electric motors in every factory, electric lighting in every home, refrigeration, telecommunications, computing, or medical devices. Electricity did not just do something better — it enabled entirely new categories of activity.


Television: “A passing amusement”

Early television in the 1930s and 40s was regarded as a technical marvel but culturally trivial. Many thought it would be a niche entertainment device, a novelty like the magic lantern or the phonograph.

What people failed to imagine was not the picture on the screen, but the social centrality of the screen. They could not foresee:

  • The living room reorganised around it
  • Advertising reshaping consumer culture
  • Politics transformed by image rather than text
  • Shared national narratives created through broadcast media

They saw television as an extension of radio with pictures. They did not see that it would become the dominant cultural force of the 20th century.


Computers: “Glorified calculators”

In the 1940s and 50s, computers were regarded as specialised machines for mathematical calculation. Early forecasts famously put the worldwide market at no more than a handful of machines.

Why? Because people thought computers were for doing arithmetic faster. They could not conceive that a computer was a general symbol-manipulation machine.

They did not imagine:

  • Word processing replacing typewriters
  • Databases replacing filing cabinets
  • Graphics, music, video, design, communication
  • Personal computing as an extension of thought itself

They saw the computer as a better calculator, not as a new intellectual prosthesis.


The Internet: “A faster library”

In the 1990s, many people understood the internet as a convenient way to access information — an electronic encyclopedia. Few predicted:

  • Social media
  • E-commerce dominating retail
  • Remote work
  • Streaming replacing broadcast
  • The collapse of newspapers
  • Political movements organised online
  • The digitisation of nearly all human communication

They saw it as a faster way to do what libraries and mail already did.

They did not see that it would rewire social, economic, and political life.


Why are such inventions underappreciated?

It is tempting to say this is a failure of imagination. But that is only partly true. It is more accurate to say:

People understand new inventions by analogy to old ones.

And analogy is conservative.

We ask:

  • “What is this like?”
  • “What job does this do?”
  • “What existing thing does this replace?”

But transformative inventions do not merely replace — they redefine the landscape in which replacement even makes sense.

Electricity did not merely replace candles.
The computer did not merely replace calculators.
The internet did not merely replace libraries.

They changed the structure of human activity.

And humans are poor at imagining structural change because we live inside current structures. We cannot easily conceive of how our habits, institutions, and expectations might be reorganised.


The hidden pattern: Platform inventions

The inventions most underestimated share a common feature: they are platform technologies. They are not single tools. They are foundations upon which thousands of unforeseen tools will later be built.

  • Electricity is a platform for countless devices.
  • The computer is a platform for countless applications.
  • The internet is a platform for countless services.

At the moment of invention, only the platform exists. The ecosystem does not yet exist. So observers judge the invention by what it can do right now, rather than what it will enable later.

This is like judging the value of the printing press before books, newspapers, pamphlets, and literacy movements exist.


The time lag of imagination

There is also a temporal asymmetry. The inventors often glimpse possibilities others cannot. But society at large needs time — sometimes decades — to develop:

  • Use cases
  • Cultural practices
  • Business models
  • Institutions

The invention arrives before the world is ready to understand it.

By the time the world understands it, the invention seems obvious in retrospect.


The present lesson

We are likely repeating this pattern now with artificial intelligence, biotechnology, and quantum computing.

Many people see AI today as:

  • A better search engine
  • A writing assistant
  • A coding helper

This is exactly how people once saw computers: a better calculator.

History strongly suggests that when an invention is interpreted as a marginal improvement to existing tools, we are probably underestimating it.


Conclusion

The underappreciation of great inventions is not mere short-sightedness. It arises from a deep feature of human cognition: we interpret the new in terms of the old.

But transformative inventions are not incremental. They do not fit into existing categories. They create new ones.

We cannot easily imagine how our way of life will reorganise around them, because we are embedded in our current way of life.

So we ask, innocently and reasonably:

“What use is this?”

And history answers, decades later:

“It changed everything.”
