God does not belong in a science class

This piece was written as a reply for Heterodox STEM, where it first appeared.

Randy Wayne of Cornell University has recently presented his case for bringing God into the science class, arguing that this is necessary for “the most complete scientific understanding”. He sees the exclusion from science of the idea of “immaterial intelligence” as an unwarranted restriction that impoverishes science and short-changes students. Here I’ll attempt to rebut Wayne, and will argue that omitting gods from science’s worldview follows quite properly from science itself.

One central part of Wayne’s argument is that:

“A foundational assumption such as reality is composed of matter and energy and nothing else, is an assumption — what Euclid calls a postulate. Foundational assumptions are untested, otherwise they would be called facts. Evidence gathering, logic, reason, and analysis are built on the assumptions, and science cannot proceed without faith in the assumptions …”

This view, that science rests on metaphysical assumptions that must be taken on faith, is commonly supposed, but is (I submit) profoundly wrong. At root, science comes from observing the world around us and developing a set of ideas that help us understand, predict and manipulate the world. Observing regularities in the natural world would have helped humans hunt or herd animals or grow crops more successfully. Over time, observing the night sky and the cycles of days, months and seasons led to an understanding of planetary orbits, and from there to Newton’s account of gravity and thence to Einstein’s account. We know that these accounts are true (in the sense of being good models of the world) because they make good predictions.

When Edmond Halley (later the second Astronomer Royal) predicted that a solar eclipse would cross England in 1715, the prediction came true to within about four minutes. And when he predicted that the comet of 1682 would return roughly 75 years later, after completing another orbit (a prediction also borne out, though only after his death), he demonstrated that astronomers did have a good understanding of celestial mechanics.

It is important to realise that the successful outcome of his prediction verified not only his understanding of gravity and orbits, but also the mathematics that he used, the logic and reasoning that he used, and any other necessary assumptions underlying his science. Either: (1) making different assumptions would have affected the prediction, in which case the outcome verified them; or (2) they made no difference, in which case he needn’t have assumed them.

Within the inter-woven package of ideas that constitutes science there are none that are so fundamental that they cannot be challenged. All one need do is point to an idea and ask: what would be the case if it weren’t true, if we replaced it with its negation? Would that improve or worsen the models? The “improvement” is judged in terms of: (1) explanatory power (making sense of all the facts we currently know about); (2) predictive power (it’s easy to scheme up ad-hoc explanations for known facts, but much harder to successfully predict things one didn’t already know); and (3) parsimony (excising superfluous stuff that doesn’t improve the explanations).

Einstein’s gravity replaced Newton’s because of its explanatory power (it gave a correct calculation of the precession of the orbit of Mercury, something that Einstein already knew about) but also because of its predictive power (it correctly predicted the warping of spacetime by the sun’s gravity, and hence the change of position of stars during a solar eclipse, something for which there was no prior measurement), and indeed its parsimony (in essence it consists of only one equation stating how mass, energy and momentum warp spacetime).

An illuminating metaphor is Neurath’s raft, which compares the ideas of science to the planks of a wooden raft afloat on the sea. One can swap out and replace any of the planks while standing on the others (though one can’t replace all of them at once, having nowhere else to stand). Similarly, we can evaluate any of the ideas underpinning science by using the rest of the ensemble to do so, and can replace any idea if that would improve the ensemble. No idea is too fundamental to be questioned. Over time, any and all of the ideas could be replaced or improved, as science iterates to an ensemble with more and more explanatory and predictive power.

You may now be tempted to ask: on what, then, is this account of science based, and how is it verified? I would reply that this account is also arrived at by figuring out what works best in modelling the world. Thus the “scientific method” is itself a product of science; it is itself the result of an iterative bootstrap that is ultimately verified by the fact that science works. Science does not rest on untestable metaphysical assumptions; it rests on the fact that iPhones work, airplanes fly, and NASA’s predictions of eclipse times do come true.

Wayne argues that leaving God and the supernatural out of science is an arbitrary and unwarranted choice. But the history of science shows this not to be so. Early scientists were fully content to invoke God if they needed him to patch up their models. James Clerk Maxwell wrote: “I have looked into most philosophical systems, and I have seen that none will work without a God”.

Newton applied his theory of gravity to the solar system and concluded that the whole edifice would be unstable over the long term, and so needed God’s intervention to make it work. “This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent and powerful Being” he wrote in Principia, and later: “A continual miracle is needed to prevent the sun and the fixed stars from rushing together through gravity”. Similarly, leading astronomer John Herschel wrote that the laws of nature had been established by the “Divine Author of the universe” and were being maintained by “the constant exercise of His direct power in maintaining the system of nature” while all material causes emanated “from his immediate will, acting in conformity with his own laws”.

But, as decades passed and understanding improved, scientists developed better models that worked fine without divine intervention. Hence Pierre-Simon Laplace’s (possibly apocryphal) remark to Napoleon that he “had no need of that hypothesis”. And in 1859, defending Darwinism, Thomas Henry Huxley wrote: “But what is the history of astronomy … but a narration of the steps by which the human mind has been compelled, often sorely against its will, to recognise the operation of secondary causes in events where ignorance beheld an immediate intervention of a higher power?”.

The change from a science entwined with religion to a science devoid of references to God can be traced to the decades between 1820 and 1880. That was not so much a metaphysical commitment as a practical matter: models worked fine without gods, and adding gods into them just made the models un-parsimonious while doing nothing to improve the explanations. A similar process had, of course, been going on throughout history. Many early religions attributed rain, thunder and successful harvests to the whims of nature gods. Even daytime was attributed to a sun-god driving his chariot across the sky. Over eons these explanations were gradually replaced by an understanding of natural processes.

Before turning to Wayne’s arguments that invoking a God does improve the explanations available to science, let’s have a brief interlude:

“As I will show you, limiting all discussion in a science class to the material and denying the immaterial is unnecessarily restrictive. […] the First Amendment exists to protect the freedom to think. […] That is, a professor can use his right to freedom of speech to talk about God in a science class …”

I fully support Wayne’s right to think, advocate, write Substacks and seek to persuade others about such matters. But not in a science class! The students are there for an education in science, and that means mainstream science, the stuff in textbooks. Suppose I thought that Einstein was wrong, and instead had my own pet theory of gravity (one that had persuaded no-one else). It would be remiss of me to spend time teaching this in science class. I’m there for the students’ benefit, not to advance my own hobby horses. I should not depart from accepted mainstream science to talk about God, any more than I should give my opinions on the war in Gaza or Vladimir Putin. There’s a time and place. If Wayne considers that God should be a part of science then he should first persuade his fellow scientists, not try it out on students.

After that interlude, let’s return to Wayne’s argument that God should be in science classes because it “helps the scientific enterprise”. He says: “Like any anchor, the anchor of scientific investigation only works when there is something to which the anchor can attach”. Wayne wants that thing to be an immaterial intelligence, God. There’s a long Christian tradition that the world is only intelligible because God made it so, and that science must rest on that commitment.

In contrast, I consider that science attaches to an empirically observed external world, and that science is ultimately bootstrapped from the fact that it works, verified by the fact that we can indeed predict eclipses. That the world is ordered enough to display such regularities is simply an observed fact. We could not have evolved in (and so would not be here to ponder) a chaotic universe with no regularities.

Wayne gives examples of where he thinks that God is needed:

“I conclude that bringing God into science class HELPS explain the origin of the universe, the origin of life, the origin of humans, and the origin and nature of mind, free will, and conscience — materialism’s greatest failures.”

I won’t attempt to do justice to Wayne’s full argument (for which read his piece), nor delve into how good a job materialism does with each of those (else this piece would get way too long; though I don’t agree that materialism fails and would readily defend the materialist account of all of those). I will just outline how I (a scientist with an atheistic bent) would evaluate how well the proposed inclusion of God does as a scientific explanation.

(1) Invoking God — an infinitely powerful, infinitely capable, infinitely knowledgeable being with purposes that we cannot understand — is an explanatory sledgehammer to crack a few small nuts. Obviously, if you start with such a being, you can then explain anything at all via “God did it”. It’s about the least parsimonious explanation possible, and so does the opposite of what a good explanation does, which is to explain more out of less. For example, Einstein’s general relativity posits one equation about how matter warps spacetime, but from there can explain an astonishingly wide array of phenomena, including the detection of gravitational waves from colliding black holes that are half-way across the visible universe. Darwinian evolution posits the neat idea of natural selection (statable in a few lines) and from there explains the amazing proliferation of life on Earth.

(2) Starting with the thing one is trying to explain is not an explanation. If I were trying to explain the existence of mice, you would not be impressed if I said “let’s start by having some mice”. Similarly, if one is trying to explain the existence of humans, starting the explanation with a God that is conceived in the image of humans does not impress. And if one is trying to explain the existence of human minds that are intelligent and have a will, then starting with a super-intelligence that has a mind and a will is underwhelming. In contrast, a materialist explains these things as the end products of an evolutionary process, and thus explains them out of simpler and more mundane origins. Even if you disagree that this succeeds, at least it attempts to be an explanation.

(3) Explaining the origin of the universe by invoking a god just leaves you needing to explain the god. And if you’re going to argue that God: always existed/made itself/is necessary/just is, then one could just as well say the same about the universe and excise the god. That would be a simpler explanation, especially as all the attributes of God have been souped up to infinity. Indeed, if we want something that might just pop into existence, uncaused and for no reason, then elementary particles would be our best bet; they seem to do that as far as we can tell, intelligences don’t. The only intelligences we know of are fragile, dependent and contingent products of a long evolutionary process. If anything needs an explanation, they certainly do. Just starting with an intelligence (nay, a super-intelligence) is about as far from an “explanation” as one can get.

(4) Invoking God doesn’t explain anything that the idea was not designed to explain. And that is the hallmark of an ad-hoc hypothesis, constructed to arrive at a desired conclusion. It also exhibits parochial thinking (God being envisaged in the image of an idealised tribal leader, and then abstracted and made apophatic from there) along with a large dollop of wishful thinking (What does a human most want? To be loved and live forever. What does a god provide? Being loved and living forever).

(5) The idea of God makes no predictions and so is unfalsifiable. Consider a child dying of brain cancer. If we gave the mother the ability to cure her child then she would do so without hesitation. God loves the child even more than the mother, and has the power to cure him as easily as lifting a little finger, so he cures the child, right? Well, … maybe not.

I’ve no doubt that theologians have schemed up lots of good reasons why that might not happen and why the God hypothesis is compatible with any and all outcomes, but the cost is to strip the idea of any possibility of doing what any good explanation should do: predicting things we didn’t already know, but can then verify. By adding in lots of ineffability and “God has his reasons” the theologians ensure that the hypothesis is vague and enigmatic, and complex and unwieldy, and also devoid of any actual explanatory or predictive power. This is the exact opposite of what a good scientific explanation is like.

Theologians know that if they made some concrete predictions that could potentially be falsified then they’d quickly get their fingers burned, so instead they carefully construct a God hypothesis that makes no testable difference in the observable world. But if it makes no difference then it is dispensable, and thus science picks up Occam’s razor and excises it.

It was for such reasons that invocations of God within science gradually died out as science progressed, summed up by Huxley’s remark that “Extinguished theologians lie about the cradle of every science”. The notion of The Divine is not omitted from science out of prejudice or as an arbitrary fiat, instead it gradually lost its place in science for the quite proper and scientific reason that it fails to improve any of science’s explanations.

Of course our knowledge of the world is incomplete, so one can always point to gaps in our understanding and fill them with a “God of the gaps”, but as our understanding progresses, and the gaps get filled with knowledge, this leads to a Cheshire Cat god who gradually does less and less and then disappears, leaving only a hankering from those who want to believe. Science has moved on from a sun-god driving a chariot across the sky, and from other superseded explanations such as phlogiston or élan vital. I submit that the God that Randy Wayne points to similarly fails to improve any of science’s explanations, and so should not be brought into today’s science classes.

Is the dimethyl sulphide in the atmosphere of exoplanet K2-18b real?

This was first published on Jerry Coyne’s website: Why Evolution is True

Everyone is interested in whether life exists on other planets. Thus the recent claim of a detection of a biomarker molecule in the atmosphere of an exoplanet has attracted both widespread attention and some skepticism from other scientists.

The claim is that planet K2-18b shows evidence of dimethyl sulphide (DMS), a molecule that on Earth arises from biological activity. Below is an account of the claim, where I attempt to include more science than the mainstream media does, but do so largely with pictures in the hope that the non-expert can follow the gist.

Transiting exoplanets such as K2-18b are discovered owing to the periodic dips they cause in the light of the host star:

[Image: schematic of the transit method, showing the dip in a star’s lightcurve as a planet passes in front of it]

So here is the lightcurve of K2-18b, as observed by the James Webb Space Telescope, showing the transit that led to the claim of DMS by Madhusudhan et al.:

[Image: JWST lightcurve of K2-18b showing the transit (Madhusudhan et al.)]

If we know the size of the star (deduced from the type of star, which we get from its spectrum), then the fraction of light that is blocked tells us the size of the planet.
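For readers who like numbers, here is a minimal sketch of that arithmetic. The stellar radius and transit depth below are illustrative round numbers of roughly the right size for K2-18, not values taken from the papers themselves.

```python
import math

# The fraction of starlight blocked during transit is (R_planet / R_star)^2,
# so the planet's radius follows from the measured depth and the star's radius.
R_SUN_KM = 696_000
R_EARTH_KM = 6_371

r_star_km = 0.44 * R_SUN_KM      # assumed: K2-18 is a red dwarf of roughly 0.44 solar radii
transit_depth = 0.0029           # assumed: a dip of about 0.29 per cent

r_planet_km = r_star_km * math.sqrt(transit_depth)
print(f"Planet radius ~ {r_planet_km / R_EARTH_KM:.1f} Earth radii")
# prints roughly 2.6 Earth radii, the ballpark usually quoted for K2-18b
```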

But we also need to know its mass. One gets that from measuring how much the host star is tugged around by the planet’s gravity, and that is obtained from the Doppler shift of the star’s light.

The black wiggly line in the plot below is the periodic motion of the star caused by the orbiting planet. Quantifying this is made harder by lots of additional variation in the measurements (blue points with error bars), which is the result of magnetic activity on the star (“star spots”). But nevertheless, if one phases all the data on the planet’s orbital period (lower panel), then one can measure the planet’s mass (plot by Ryan Cloutier et al):

[Image: radial-velocity measurements of K2-18, raw and phased on the planet’s orbital period (Cloutier et al.)]

So now we have the mass and the size of the planet (and we also know its surface temperature since we know how far it is from its star, and thus how much heating it gets). Combining that with some understanding of proto-planetary disks and planet formation we can thus scheme up models of the internal composition and structure of the planet.

The problem is that multiple different internal structures can add up to the same overall mass and radius. One has flexibility to invoke a heavy core (iron, nickel), a rocky mantle (silicates), perhaps a layer of ice (methane?), perhaps a liquid ocean (water?), and also an atmosphere.

[Image: possible internal structures (core, mantle, ice, ocean, atmosphere) consistent with the planet’s mass and radius]

This “degeneracy” is why Nikku Madhusudhan can argue that K2-18b is a “hycean” planet (hydrogen atmosphere over a liquid-water ocean) while others argue that it is instead a mini-Neptune, or that it has an ocean of molten magma.

But one can hope to get more information from the detection of molecules in the planet’s atmosphere, a task that is one of the main design goals of JWST. The basic idea is straightforward: During transit, some of the starlight will shine through the thin smear of atmosphere surrounding the planet, and the different molecules absorb different wavelengths of light in a pattern characteristic of that molecule (figure by ESA):

[Image: starlight filtering through a transiting planet’s atmosphere, with molecules absorbing at characteristic wavelengths (ESA)]

So one observes the star both during the transit and out of transit, and then subtracts the two, and the result is a spectrum of the planet’s atmosphere.
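In schematic form (and glossing over all the careful treatment of systematics that the real JWST pipelines require), the subtraction amounts to something like the following, where the flux in each wavelength bin is assumed to have been measured both in and out of transit:

```python
import numpy as np

def transmission_spectrum(flux_in_transit, flux_out_of_transit):
    """Per-wavelength transit depth: the fraction of starlight blocked in each bin.

    Wavelength bins where the planet's atmosphere absorbs strongly block slightly
    more light, so the depth is larger at the wavelengths of molecular features.
    """
    flux_in = np.asarray(flux_in_transit, dtype=float)
    flux_out = np.asarray(flux_out_of_transit, dtype=float)
    return 1.0 - flux_in / flux_out

# Toy example: a flat 0.29% depth, with one bin slightly deeper where a molecule absorbs
flux_out = np.ones(5)
flux_in = np.array([0.9971, 0.9971, 0.9969, 0.9971, 0.9971])
print(transmission_spectrum(flux_in, flux_out))
```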

If the planet is a large gas giant with a fluffy, extended atmosphere and is orbiting a bright star (so that a lot of photons pass through the atmosphere), the results can be readily convincing. For example, here is a spectrum of exoplanet WASP-39b with features from different molecules labelled (figure by Tonmoy Deka et al):

[Image: JWST transmission spectrum of WASP-39b with molecular features labelled (Deka et al.)]

[I include a plot of WASP-39b partly because I was part of the discovery team for the Wide Angle Search for Planets survey, but also because it is pretty amazing that we can now obtain a spectrum like that of the atmosphere of an exoplanet that is 700 light-years away, even while the planet itself is so small and dim and distant that we cannot even see it.]

The problem with K2-18b is that it is much smaller than WASP-39b and its atmosphere less extended (so fewer photons pass through it). This is at the limit of what even the $10-billion JWST can do.

When you’re subtracting two very-similar spectra (the in- and out-of-transit spectra) in order to obtain a rather small signal, any “instrumental systematics” matter a lot. Here is the same spectrum of K2-18b, as processed by several different “data reduction pipelines”, and as you can see the differences between them (effectively, the limits of how well we understand the data processing) are similar in size to the signal (plot by Rafael Luque et al):

[Image: the K2-18b spectrum as processed by different data-reduction pipelines (Luque et al.)]

The next problem is that there are a lot of different molecules that one could potentially invoke (with the constraint of making the atmospheric chemistry self-consistent). For example, here are the expected spectral features from eight different possible molecules (figure by Madhusudhan):

[Image: expected spectral features of eight candidate molecules (Madhusudhan)]

Then one needs to think about what molecules one might expect to see, depending on what one thinks the observable atmosphere is made of, and how that relates to the overall structure of the planet. Here (for example) is an interpretation “roadmap” from a recent paper by Renyu Hu et al.:

[Image: interpretation “roadmap” for planets like K2-18b (Hu et al.)]

To finally get to the point, here is the crucial figure: Nikku Madhusudhan and colleagues argue — based on an understanding of planet formation, and on arguments that planets like K2-18b are hycean worlds, and from considerations of atmospheric chemistry, in addition to careful processing and modelling of the spectrum itself — that the JWST spectrum of K2-18b is best interpreted as follows (the blue line is the model, the red error bars are the data):

[Image: JWST spectrum of K2-18b with the Madhusudhan et al. model (blue line) fitted to the data (red error bars)]

This interpretation involves large contributions from DMS (dimethyl sulphide) and also DMDS (dimethyl disulphide) — the plot below shows the different contributions separated — and if so that would be notable, since on Earth those compounds are products of biological activity.

[Image: the separate contributions of each molecule, including DMS and DMDS, to the fitted model]

In contrast, Jake Taylor has analysed the same spectrum and argues that he can fit it adequately with a straight line, and that the evidence for features is at best two sigma. Others point out that the fitted model contains roughly as many free parameters as data points. Meanwhile, a team led by Rafael Luque reports that they can fit the spectrum without invoking DMS or DMDS, and suggest that observations of another 25 transits of K2-18b would be needed to properly settle the matter.

There are several distinct questions here. Are the instrumental systematics sufficiently known and accounted for? (Perhaps, but not certainly.) Are the relevant spectral features statistically significant? (That’s borderline.) And, if the features are indeed real, are they properly interpreted as DMS? (Theorists can usually scheme up alternative possibilities.) Perhaps a fourth question is whether there are abiotic mechanisms for producing DMS.

This is science at the cutting edge (and Madhusudhan has been among those emphasizing the lack of certainty, though that has not always been propagated to news stories), and so the only real answer to these questions is that things are currently unclear. This is a fast-moving area of astrophysics and we’ll know a lot better in a few years.

Barnard’s Star is orbited by four small, rocky planets

This was written for The Conversation (this being my original edit of the piece).


Barnard’s Star is a small, dim star, of the type that astronomers call red dwarfs. Consequently, even though it is one of the closest stars, such that its light takes only six years to reach us, it is too dim to see with the naked eye. And much, much too dim to be seen, even with the best telescopes that we have, are the four small planets that we now know to be in close orbits around the star.

Few stars are named after astronomers. The bright, naked eye stars were named in the golden era of Arabic science, while fainter stars typically just have catalogue numbers. But in 1916 Edward Emerson Barnard noticed that this star was moving in the night sky. It is so close to us that its motion through space can be seen against the backdrop of stars that, being much more distant, appear fixed.

How were the orbiting planets found if they’re much too dim to be seen? The answer lies in detecting the effect of their gravity on the star. The mutual gravitational attraction keeps the planets in their orbits, but also tugs on the star, moving it in a rhythmic dance that can be detected by sensitive spectrographs designed to measure the star’s motion.
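To give a sense of how small that dance is, here is a rough calculation using the standard formula for the radial-velocity wobble induced by a planet on a circular, edge-on orbit. The stellar mass, planet mass and period are illustrative values of roughly the right size for Barnard’s Star and its inner planets, not the published figures.

```python
import math

G = 6.674e-11              # gravitational constant (SI units)
M_SUN = 1.989e30           # kg
M_EARTH = 5.972e24         # kg

m_star = 0.16 * M_SUN      # assumed: Barnard's Star is roughly 0.16 solar masses
m_planet = 0.3 * M_EARTH   # assumed: a planet of about a third of an Earth mass
period = 3 * 86400         # assumed: a 3-day orbit, in seconds

# Radial-velocity semi-amplitude for a circular, edge-on orbit:
#   K = (2*pi*G / P)^(1/3) * m_planet / (m_star + m_planet)^(2/3)
k = (2 * math.pi * G / period) ** (1 / 3) * m_planet / (m_star + m_planet) ** (2 / 3)
print(f"Stellar wobble ~ {k * 100:.0f} cm/s")
# prints a few tens of cm/s
```

A wobble of a few tens of centimetres per second is slower than walking pace, which is why spectrographs as stable as ESPRESSO and MAROON-X, and years of monitoring, are needed.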

A significant challenge, however, is the star’s own behaviour. Stars are fluid, with the nuclear furnace at their core driving churning motions that generate magnetic fields (just as the churning of Earth’s molten core produces Earth’s magnetic field). The surfaces of red-dwarf stars are rife with magnetic storms that cause giant flares and dark “star spots”, and these can mimic the effect of planets.

The task of finding planets by this method boils down to building the most-sensitive spectrographs possible, mounting them on large telescopes that feed sufficient light, and then observing a star over months or years. After carefully calibrating the resulting data, and modelling out the effects of stellar magnetic activity, one can then scrutinise the data for the tiny signals that reveal orbiting planets.

In 2024 a team led by Jonay González Hernández reported on four years of monitoring of Barnard’s Star with the ESPRESSO spectrograph on ESO’s Very Large Telescope. They found one secure planet and reported tentative signals that could indicate three more. Now a team led by Ritvik Basant has added three years of monitoring with the MAROON-X instrument on the Gemini-North telescope. Their data alone confirm three of the four planets, while combining the two datasets confirms that all four are real.

Often in science, when detections push the limits of current capabilities, one needs to ponder the reliability of the findings. Are there spurious instrumental effects that the teams haven’t accounted for? Hence it is reassuring when independent teams, using different telescopes, instruments and computer codes, arrive at the same conclusions.

The planets form a tight, close-in system, having orbital periods between 2 and 7 Earth days (for comparison, our Sun’s closest planet, Mercury, orbits in 88 days). Most likely they all have masses less than Earth’s. They’re likely to be rocky planets, with bare-rock surfaces blasted by their star’s radiation. They’ll be too hot to hold liquid water, and any atmosphere is likely to have been stripped away.

The teams looked for longer-period planets, further out in the star’s habitable zone, but didn’t find any. We don’t know much else about these planets, such as their sizes. The best way of figuring that out would be to watch for transits, when planets pass in front of their star, and then measure how much light they block. But the Barnard’s Star planetary system is not edge on to us, so the planets don’t transit, and that makes them harder to study.

Nevertheless, the Barnard’s Star planets tell us about planetary formation. They’ll have formed in a protoplanetary disk swirling around the nascent star. Particles of dust will have stuck together, and gradually built up into rocks that aggregated into planets. Red dwarfs are the most common type of star, and most of them seem to have planets. Whenever we have sufficient observations of such a star we find planets, so likely there are far more planets in the galaxy than there are stars.

Most of the planets that have been discovered so far are close to their star, well interior to the habitable zone, but that’s largely because their proximity makes them much easier to find. Being closer in means that their gravitational tug on their star is bigger, and it means that they have shorter orbital periods (so we don’t have to monitor the star for as long). It also increases their likelihood of transiting, and thus of being found in transit surveys. ESA’s upcoming PLATO mission is designed to find planets further from their stars. This should produce many more planets in their stars’ habitable zones, and should begin to tell us whether our own Solar System, which has no close-in planets, is unusual.

Estimates of the heritability of intelligence do include gene–environment interactions

I recently wrote a piece arguing that estimates of the heritability of intelligence that derive from twin studies are likely to be more accurate than the lower estimates from GWAS studies (Genome-Wide Association Studies). This is because GWAS studies lack the statistical power to find many genes with small effects. But in defending the high estimates of “heritability” (the fraction of human differences that can be attributed to genes) it is important to realise what this actually means.

The classic “twin studies” method takes twin babies who were given up for adoption at birth and then separated and raised in different families. Comparing the similarities in later-life outcomes for identical twins (who share all their genes) with those for fraternal twins (who share half their genes, as siblings do) and with those of unrelated adoptees tells us how much of the differences in life outcomes derives from differences in our genes (the “heritability”), versus differences in our “shared” environment (all environmental factors that siblings living together would share), and differences in our “unshared” environment, a category that includes everything else from chance factors in embryonic development to random life events.

Twin studies tell us that differences in intelligence are roughly 70 per cent genetic in origin, with 20 per cent attributable to “unshared” environment, and only 10 per cent to “shared” environment. Such results have been replicated many times and are robust. But some people regard this as counterintuitive; they intuitively think that parenting and upbringing must be more important (perhaps they see the effects of the genetic similarity of parents and children and erroneously attribute them to the family environment). Similarly, whether a child goes to a “good” school versus a “bad” school matters less for their prospects than is often supposed, since the evidence is that the difference between a good and a bad school is mostly about the intake of kids.
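For anyone who wants to see where such numbers come from, the simplest textbook version is Falconer’s formula, which compares how strongly identical and fraternal twin pairs correlate on the trait. (Modern twin studies use more sophisticated structural-equation models, and the correlations below are illustrative values chosen to reproduce the headline split of roughly 70 per cent genes, 10 per cent shared environment and 20 per cent unshared environment, not figures from any particular study.)

```python
# Falconer's decomposition: a simplified sketch of how twin correlations are
# turned into estimates of heritability (A), shared environment (C) and
# unshared environment (E).
def falconer(r_identical, r_fraternal):
    a2 = 2 * (r_identical - r_fraternal)   # heritability
    c2 = r_identical - a2                  # shared environment (= 2*r_fraternal - r_identical)
    e2 = 1 - r_identical                   # unshared environment (plus measurement error)
    return a2, c2, e2

# Illustrative correlations, not from any specific study:
a2, c2, e2 = falconer(r_identical=0.80, r_fraternal=0.45)
print(f"genes ~ {a2:.0%}, shared environment ~ {c2:.0%}, unshared environment ~ {e2:.0%}")
# prints: genes ~ 70%, shared environment ~ 10%, unshared environment ~ 20%
```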

As an aside, and to illustrate the vast gulf between science and popular culture on this topic, a serious newspaper, The Guardian, told us that: “Growing up in a home packed with books has a large effect on literacy in later life”. No, it really does not. But parents who are intelligent and love reading, both: (1) have lots of books in the house, and (2) pass on genes for liking reading to their children. The article doesn’t even mention the latter possibility.

As this illustrates, attributing human differences to genetics is often considered distasteful, even though it is amply supported by the science. This leads to people pointing to the lower estimates from GWAS studies, or arguing that environment must play a larger role than indicated by the “headline” twin-studies numbers. For example, Harvard geneticist Sasha Gusev argues that much of what twin studies attributes to genes is actually the effect of complex gene–environment interactions. He’s right, but let’s consider whether that makes the estimates invalid.

First, it is important to understand that heritability estimates pertain only to the range of environments in the study that yielded that estimate. A study of separated twins in the UK would still have the adopted children going to fairly similar schools and having a similar education. If, as a thought experiment, half were adopted into peasant families in Medieval England, where children helped their parents in the fields and did not go to school, then the greater disparity in environments would increase the disparity in outcomes, and hence the heritability would be lower. Conversely, if one could make all the adopted environments identical, then the heritability estimates would be higher, since genetic differences would be the only differences.

Now consider sending two unrelated children, Sue and Jim, to the very same school. Sue has genes that make her naturally interested in and good at mathematics, but Jim does not. Do they (being sent to the same school) have the same environment? Likely the school would recognise Sue’s ability and would encourage her interest, placing her in the top set for maths where she would be surrounded by similar kids and be stretched by advanced material. Jim would likely reside in a lower set, being given basic material, and the teachers could well react to his lack of interest by not pushing him. Obviously, Sue’s genetic advantage in maths would be intensified by her school’s encouragement and coaching, whereas Jim’s disadvantage would also be compounded.

Hence the differences in their final exam scores would not be just down to genes, but also to how they interact with their school environment, and hence would be about complex gene–environment interactions. If all maths-able kids attend schools that encourage maths ability (which is indeed likely to be the case, because that’s what schools do), then twin studies would attribute all of Sue’s out-performance to genetics. That’s because the environmental condition: “school that does not encourage able kids” would not be present in the study and so the study would be blind to it.

If able kids are usually able to “make their own environment” by gaining access to books, libraries, museums and adult encouragement, then the resulting boost to their abilities would be recorded as a genetic effect in the twin-studies ledger (and again, that’s because the relevant “control”, the able child in an environment where they cannot access those things, is unlikely to be sampled in the study).

Indeed, such an effect has long been known. The estimates for the heritability of IQ from twin studies are lower when measured in teenagers than when measured in later-life adults. That’s because teenagers have less control over their environment, they’re made to read books and do academic work whether they like it or not. But a less-able adult can indulge their natural inclination by never reading a book again, whereas the naturally able adult will seek out intellectual stimulation. And, again, these gene–environment interactions will be scored as a genetic effect in twin studies.

So what, then, do we make of the heritability estimates from twin studies? Sasha Gusev writes: “The gap between low [GWAS] heritability estimates and high twin heritability estimates could thus be explained by … [gene–environment] interactions incorrectly assigned to genetics by the latter (“missing environments”)”. He then asks: “Could it be that twin studies have been estimating gene–environment interactions this whole time?”.

The answer is Yes! Twin studies do arrive at high heritability estimates by attributing such gene–environment interactions to the “genetics” ledger. But is that the wrong thing to be doing? The answer (as often in science) depends on exactly what question one is asking.

Partly this comes down to what one means by “environment”. We could regard an environment as something fixed, something that makes no response to how the child behaves. But when it comes to behavioural traits that’s not how the world works. Society does not treat a badly-behaved child identically to a well-behaved child.

Instead we could construe “environment” in a more responsive way: families, schools and neighbourhoods that contain parents who play in more stimulating ways with a more responsive child; sports teams that are available to those kids who express an interest and are talented; and adults who will encourage a child who shows an interest in music and will teach them to play the piano. Sue and Jim would both be in that same overall “environment” but (owing to their genetic dispositions) would exploit it differently.

Thus, if the question is about sending children to British schools and asking what factors explain the differences in outcomes, then the high heritability from twin studies is the appropriate answer. For two children sent to the same school, their natural dispositions and abilities are what makes the biggest difference, even though that is mediated by them interacting with their environment differently.

If the question were different, if one was asking about a hypothetical society which did not treat children any differently according to their ability, then the current heritability estimates would not be valid, because studies for those environments have not been done. In practice, the former question is likely to be the one more relevant for understanding today’s world and for public policy. After all, it would be a weird school that treated a pupil who scored 8/100 on a maths test identically to one who scored 92/100.

But heritability estimates are not “fixed”. They only say what happens in the range of environments studied, so they don’t tell you about the effects of interventions that have not been tried. It’s also worth pointing out that the same difficulties in disentangling purely environmental effects from gene–environment interactions also affect GWAS studies. They have no easy way of distinguishing the two any more than twin studies do, and information about environment that helps to do this can be fed into twin studies as easily as into GWAS studies.

Hence, echoing the conclusion of my first piece, the high heritability estimates given by twin studies do seem to be sound and valid. But when it comes to human social and behavioural traits the concept of a fixed and unresponsive environment is not appropriate. Our genetic recipe plays out in a social environment dominated by interactions with other humans.

Indeed, when it comes to behavioural traits like intelligence, the concept of a genetic component that is not about gene–environment interactions, but is purely about genes that are acting independently of environment, is not even coherent. Just for starters, a child, however innately intelligent, would not even have language except through interaction with their environment.

Contra Michael Shermer, facts and reason cannot determine values and morals

Slavery shows us “an example of how facts and reason can determine values and morals”, declares Michael Shermer. That’s starkly put, a direct challenge to Hume’s is/ought distinction between objective facts about the world and “oughts” that derive from values that we humans hold. I reject moral realism (the idea that moral injunctions have objective standing, independently of what humans think about them) and so reject Shermer’s claim.

I am not rejecting the idea that humans have an evolved nature, and that our feelings and values are very real and of the utmost importance to us. Nor am I attempting to dispense with morality, quite the converse. But moral injunctions must ultimately derive from us, from our values — our I like, I dislike, I laud or I abhor feelings — and cannot be derived from objective facts about the world.

Shermer argues for the latter, but I don’t think he succeeds. He argues by repeatedly translating one moral injunction into another moral injunction, giving the impression that he has eventually arrived at bedrock in a brute fact, when he has not. Slavery is an emotive example, so for clarity the discussion below is not about whether to reject slavery; it is about whether that rejection derives from our values, our repugnance and sympathy for other humans, or from facts that hold independently of our values.

“Slavery is morally wrong because it’s a clear-cut case of decreasing the survival and flourishing of sentient beings”, declares Shermer. But, as he then correctly asks: “Why is that wrong?”.

He answers: “It is wrong because it violates the natural law of personal autonomy and our evolved nature to survive and flourish; it prevents sentient beings from living to their full potential as they choose, and it does so in a manner that requires brute force or the threat thereof, which itself causes incalculable amounts of unnecessary suffering.”

That all seems very true. But (as he again asks): “How do we know that’s wrong?” He answers: “Because of what Steven Pinker calls the “interchangeability of perspectives,” which we might elevate to a principle of interchangeable perspectives: I would not want to be a slave, therefore I should not be a slave master.”

But that does not follow, at least not without additional premises. “Should” injunctions are instrumental, that is they pertain to desired goals. You only “should” do something if that gets you some goal you wish for. And it may well follow that: “I don’t want to be a slave; and there is least likelihood of me being a slave if society bans slavery entirely; and therefore it is in my interests to uphold that rejection and relinquish being a slave owner”. But that calculation is different from the above principle.

Let’s consider a thought experiment in which Daniel possesses a superpower such that he can enslave others, but with zero possibility of him being enslaved. If morality were objective and “slavery is wrong” were an objective fact then it would have to bind Daniel, and yet the logic leading up to “… therefore I should not be a slave master” doesn’t hold for him.

The injunction: “you would not want to be a slave, therefore you should not own slaves” might well work fine as an appeal to human sympathy, being a grounding of anti-slavery in human values, but it does not work as an attempt at objective logic.

Shermer then correctly brings in game theory, referring to: “… the evolutionary stable strategy of reciprocal altruism: “I will scratch your back instead of being your master, if you will scratch my back and not make me a slave.” It is the behavioral game theory strategy of tit-for-tat: “I won’t make you a slave if you don’t become my master.””

This may indeed be tactically astute, and adopting this attitude might be your best strategy, especially if you are in danger of being enslaved. But, again, this is a moral scheme adopted instrumentally, based on one’s values and self-interest (“I don’t want to be a slave”). That does not give you an objective morality. For that you need an argument for why Daniel, with his superpowers, should not own slaves. Indeed, game theory would say that, if you’re trying to maximise your winnings, and you know that you have the power to do so, then you should (instrumentally) exploit others by enslaving them.

Shermer continues: “The principle of interchangeable perspectives is also a restatement of John Rawls’ “original position” and “state of ignorance” arguments, which posit that in the original position of a society in which we are all ignorant of the state in which we will be born … we should favor laws that do not privilege any one class because we don’t know which category we will ultimately find ourselves in.”

Again, this is about tactics that would be in your interests if you indeed were in that “state of ignorance”. But, by the time we’re old enough to reason morally we’re not in that state. And, anyhow, are we really saying that the reason plantation owners in the American South should (morally) have freed their slaves was out of fear that they might one day be enslaved? Even if there were zero likelihood of that (as was pretty much the case), wouldn’t you still want them to abjure slavery? Is fear of being enslaved really your argument for why you consider slavery to be objectively wrong? This does not sound like moral realism (objective moral truths that hold always for everyone, even Daniel), it sounds like politics — the negotiations we make with each other to get along and to attempt to steer society to our liking. And that is indeed what it is!

Perhaps Shermer realises the problem, since he then says: “Lincoln’s ultimate moral avowal was simple: ‘If slavery is not wrong, nothing is wrong’.” This captures that the rejection of slavery is, ultimately, rooted in human sympathy and human values. That’s all there is. It is indeed the case that “nothing is wrong” in the sense of objective moral injunctions that can be derived from facts and that are independent of human values, because that entire conception is misguided and untenable.

Shermer has produced an insightful descriptive account of human psychology. Game theory does indeed underpin how attitudes evolve in a species where social interactions are all important. Rawls’s “veil of ignorance” does indeed capture aspects of how humans think. It is entirely true that this sort of moral reasoning is in line with our evolved nature. But this still grounds morality in human values. The prescription still comes from us, from our evaluation of the sort of society we want to live in.

Nothing in Shermer’s reasoning makes the leap to moral prescriptions that hold independently of what humans think about them, and are thus objective. It is not true that “facts and reason can determine values and morals”; instead, values and morals derive from our evolved human nature. And yes, you can then rationally explain why we evolved to be like that; there is nothing here outside the realm of science, in that there is nothing here that science cannot explain.

Our human nature is not arbitrary; it is fully explained by our evolution as a social species. But Hume’s distinction holds. Moral values are not determined by “facts and reason”; they are instead part of our nature, part of us. That makes them subjective. To many, the label “subjective” seems akin to saying they are second-rate or unimportant. But that’s utterly erroneous: in the end, our subjective qualia are the only things that are important to us.

GWAS studies underestimate the heritability of intelligence

Whether differences in intelligence are due to people’s different genes or to their different environments has long been contentious. One answer to this question comes from twin studies and adoption studies. By comparing outcomes for identical twins (who share all their genes) with those of fraternal twins and with unrelated children, one can deduce the relative influences of genes in comparison with “shared environment” (all environmental factors shared by siblings growing up together) and un-shared environment (everything else, which can include things like randomness in embryonic development). Such studies give high estimates for the genetic contribution to differences in intelligence, such that the heritability of IQ is typically estimated as around 70%.

A different method is to look directly at genes, through Genome Wide Association Studies (GWAS), which sample large numbers of genes in large numbers of people, attempting to measure and add up the effect of each gene on IQ. This typically gives much lower estimates for the effect of genes, and the marked difference between estimates from twin studies and those from GWAS studies is referred to as the “missing heritability” problem.

Recently the Harvard geneticist Sasha Gusev argued that twin studies are unreliable and that the true heritability is nearer the much-lower estimates from current GWAS studies. Saying that “intelligence is not like height”, he argues that, while a trait like height might be strongly influenced by genes, intelligence is not. “Adding up all of the genetic variants only predicts a small fraction of IQ score”, he says, adding that: “the largest genetic analysis of IQ scores built a predictor that had an accuracy of 2–5% in Europeans […]”.

In response to Gusev’s critique, Noah Carl wrote a defense of twin studies. Here I add to that by arguing that current GWAS studies must be overlooking much of the genetic influence on intelligence. In short, intelligence must be affected by vast numbers of genes, which means that most of them must have very small effects, and current GWAS studies do not have the statistical power to detect small-enough effects. This is not a new suggestion (see, e.g., Yang et al. 2017), but it could well resolve the issue.

Being taught at school about Mendel and smooth versus wrinkly peas might leave the impression that traits can be determined by only one or a few genes. While this might be true in some few cases, most traits are affected by very many genes, and, in particular, complex traits related to human personality and behaviour must involve huge numbers of genes. (Whereas a simpler trait like height could, in principle, involve fewer genes.)

Intelligence is among the most complex traits, which means that any genetic recipe for intelligence must contain a lot of information. If I asked you how many lines of code you’d need to program an intelligent robot you’d reply: “Eek, millions at least!”. Genes provide, of course, a developmental recipe rather than a direct program, but the underlying point, that this recipe must contain a vast amount of information and so be encoded in tens of thousands of genes, still stands. It then follows that most of these genes must individually have a very small effect. If N thousand genes each contributed equally to intelligence then each would contribute only one part in N thousand.

A basic rule of statistics is that to find smaller effects you need a larger sample. Typically the fractional uncertainty scales as one over the square root of the sample size. So if an opinion poll samples 1000 people then you get roughly a 3% error range [1/square-root(1000) ≈ 3%]. To do ten times better (a 0.3% error) you’d need a sample 100 times bigger. (Though you’d also run into systematic error, such as whether your sample is representative of the population.) And, of course, to find a tiny effect you need an error range smaller than that effect, preferably quite a bit smaller.

In writing about GWAS studies I should own up that I am not a geneticist, but in my “day job” in astrophysics we have exactly the same problem; that’s why we build telescopes with large mirrors, to collect many photons. We are studying tiny signals from faint galaxies at large distances in the universe, and every time we apply for time on a large telescope we calculate how much time we need in order to collect enough photons to have enough statistical power to find the small signal that we are looking for.

GWAS studies examine one type of genetic variability, Single Nucleotide Polymorphisms (or SNPs, usually pronounced “snips”), and they might typically record SNPs at 20,000 locations out of 3 billion nucleotides, examining those SNPs over a sample of 10,000 to 100,000 genomes. For a discussion of the statistical power of GWAS studies I refer to the paper Wu et al 2022 (“… Statistical power … of genome-wide association studies of complex traits”). The authors confirm that: “The statistical power for an individual SNP is determined by its effect size, the sample size, and the desired significance level. In a random sample of size n, the test statistic for the association between a quantitative phenotype and a SNP is β sqrt(n), …” (where β is the effect size). Making a range of assumptions (for which see the paper) they develop a model leading to the following plot:

[Image: sample size needed to detect SNPs of small, moderate and large effect size (Wu et al. 2022)]

The plot shows the sample size (number of genomes) needed to find SNPs of small, moderate and large effect size, by which they mean 0.01%, 0.1%, and 1% of total SNP heritability. This shows that to detect a gene accounting for 0.1% of the variance requires a sample size of ~ 30,000. A similar paper that again develops a statistical model, making a different set of assumptions (Wang & Xu 2019), concludes that finding a SNP that explains 0.20% of the phenotypic variance requires a sample size of 10,000, which is consistent with the first paper. It’s also worth remarking that real-life studies will almost certainly do worse than these estimates, since there are always sources of noise not accounted for in theoretical models.
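As a minimal sketch of this kind of power calculation, the snippet below assumes a one-degree-of-freedom chi-square association test at the conventional genome-wide significance level of 5×10⁻⁸, with the non-centrality of the test statistic approximated as the sample size times the fraction of variance explained. The papers cited make different (and more careful) assumptions, so the numbers differ somewhat, but the scaling with effect size is the same.

```python
from scipy.stats import chi2, ncx2

def power(n, variance_explained, alpha=5e-8):
    """Approximate power to detect a SNP explaining a given fraction of trait
    variance in a sample of n genomes (1-df chi-square association test)."""
    threshold = chi2.ppf(1 - alpha, df=1)    # genome-wide significance threshold
    ncp = n * variance_explained             # approximate non-centrality parameter
    return ncx2.sf(threshold, df=1, nc=ncp)  # probability of exceeding the threshold

for q in (0.01, 0.001, 0.0001):              # SNPs explaining 1%, 0.1%, 0.01% of variance
    n = 1_000
    while power(n, q) < 0.8:                 # smallest sample (to the nearest 1,000) with 80% power
        n += 1_000
    print(f"SNP explaining {q:.2%} of variance: need ~{n:,} genomes")
```

Under these assumptions the required sample grows in inverse proportion to the effect size: of order a few thousand genomes for a SNP explaining 1 per cent of the variance, a few tens of thousands for 0.1 per cent, and a few hundred thousand for 0.01 per cent.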

Hence GWAS studies sampling tens of thousands of genomes could find the genes associated with intelligence if there were only a few hundred of them, but if they number a few thousand then that requires hundreds of thousands of genomes at a minimum, and if intelligence involves over ten thousand SNPs then that’s well beyond current GWAS studies.

The GWAS study in the quote from Gusev above (Savage et al. 2018) sampled the genomes of 270,000 people. They do indeed report that the genetic differences that they found account for only “up to 5.2% of the variance in intelligence”, but they found only 205 “associated genomic loci” (blocks of SNPs associated with a trait). It seems wildly implausible that these 205 genetic differences are all that there is to a recipe for intelligence. (If you disagree, feel free to write down a developmental recipe for human-like intelligence in only 205 lines of instructions!) It’s worth pointing out that AI models such as GPT-4 are based on hundreds of billions of neural-network “weights” (though of course this is the end product, not the recipe).

Indeed a more recent and bigger study (Okbay et al 2022) analysed 3 million genomes and found 3,952 SNPs associated with educational attainment, that together account for 12 to 16% of variance in educational attainment. So (setting aside that intelligence, IQ and educational attainment are not quite the same thing), they have a larger sample, they find that many more SNPs are involved, and in total this accounts for a larger fraction of the variation.

Even then, this is likely to be merely the tip of the iceberg of genetic variability relevant to intelligence and educational attainment. There are 3 billion nucleotides in the human genome, and we have no good way of estimating what fraction might have some effect on intelligence. Further, GWAS techniques study only one type of genetic variation, the SNP, whereas there are, in principle, lots of other ways in which genomes can vary. And, further, GWAS estimates assume a simple “additive” model, where the overall effect on the phenotype is simply the sum of the effect of each SNP individually. This could well be a good first-order approximation, but the reality of a recipe for intelligence is likely to involve myriads of subtle and complex interactions.

In short, since our understanding of the genetic developmental recipe for intelligence is close to non-existent, and since we are only guessing wildly at how many SNPs it might involve and how those SNPs interact, there is no way that we can conclude that current GWAS studies are sensitive to most of the relevant genetic variability. Hence we cannot conclude that adding up the known effects gives anything like a true estimate of the heritability of any complex trait, such as intelligence. All we can say is that it gives a lower limit.

Note that this argument is not attributing the missing heritability (as has sometimes been suggested) to relatively rare genes of large effect, which, being rare, are simply not sampled in GWAS studies (this idea used to be plausible, but is getting less so as GWAS studies get bigger). Instead, it is attributing the missing heritability to large numbers of common genes that are sampled in GWAS studies, but whose individual effects are too small to be detected with the statistical power of current GWAS studies.

In contrast, twin studies do not depend on knowing anything about how the genes produce intelligence. It is purely a suck-it-and-see method that takes whole genomes (identical twins, fraternal twins, and unrelated children) and evaluates the later-life outcomes. Twin studies do have their own assumptions, including the “equal environment assumption”. For example, do parents tend to treat identical twins differently from fraternal twins? This is one reason why the gold standard is studies of twins that were separated at birth and reared apart. This is a rare occurrence today, but before the ready availability of abortion and the pill in Western countries there was a steady stream of young, un-married mothers giving up babies to be adopted at birth, and it was common to separate twins. More recently, China’s one-child policy has led to twins being separated and adopted at birth (e.g., Segal & Pratt-Thompson 2024). Such studies give high values of ~ 0.7 for the heritability of IQ.

As a result of checks like this heritability estimates from twin and adoption studies have been extensively examined and seem robust. We have no good reason to think that twin studies are severely underestimating the heritability of IQ. In contrast, there is good reason to suspect that the GWAS estimates are only a lower limit, and are currently much too low.

Misinformation and the cost of smoking

That Elon Musk has the ear of the President-elect should mean that X/Twitter is, for now, immune to Democratic threats of regulation. But X is still being threatened by the EU, while from the UK to Ireland, to Canada and Australia there is a growing clamour that social media must be regulated to clamp down on “misinformation”.

Uncensored misinformation, particularly on matters of public health, is, so the argument goes, too dangerous to be left to the “marketplace of ideas”. But, while misinformation is bad, prohibiting it is worse. Who gets to decide whether a claim is “misinformation”, and can they be trusted?

One could give a hundred examples to illustrate the point but here I’ll pick just one. Recently, Britain’s Prime Minister, Keir Starmer, announced a campaign against smoking: “My starting point on this is to remind everybody that over 80,000 people lose their lives every year because of smoking,” he said. “It’s a huge burden on the NHS and, of course, it is a burden on the taxpayer”.

Superficially, that “huge burden” claim has intuitive appeal. “Smoking puts huge pressure on our NHS, and costs taxpayers billions”, said a spokesperson for the Department of Health and Social Care. “We’ve got to take action to reduce the burden on the NHS and the taxpayer”, continued Sir Keir.

But let’s think further. Illnesses caused by smoking tend to kill people towards the end of their working life or in early retirement. What would happen if they weren’t smokers? They’d get older and continue into middle and late retirement. And health-care costs ramp up massively the older people get. And eventually they’ll die of something or other anyhow, and that will also cost the NHS.

A study in the New England Journal of Medicine analysed this: “Smokers have more disease than nonsmokers, but nonsmokers live longer and can incur more health costs at advanced ages”. Their conclusion was that: “in a population in which no one smoked the costs would be 7 percent higher among men and 4 percent higher among women”, and that “if all smokers quit, health care costs would be lower at first, but after 15 years they would become higher than at present”.

That’s just health-care costs. Each additional decade of retirement also costs the UK taxpayer roughly £100,000 in pension payments. And if someone ends up needing a care home, that can cost the taxpayer as much as £50,000 a year (whereas smokers tend to die before the advanced old age at which people typically need a care home).
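To see how the arithmetic can come out the way these studies report, here is a deliberately crude per-person tally. Every figure in it is a round, invented number for illustration (only the pension-per-decade and care-home figures echo the estimates just quoted); it shows the structure of the argument, not an actual costing.

```python
# Toy lifetime tally per person; all figures are round invented numbers
# for illustration, not actual UK costings.

def lifetime_cost(years_retired, annual_health_cost, care_home_years=0):
    """Taxpayer cost over retirement: pension + health care + care home."""
    pension = years_retired * 10_000         # ~£100,000 per decade, as above
    health = years_retired * annual_health_cost
    care = care_home_years * 50_000          # ~£50,000 per care-home year
    return pension + health + care

# Hypothetical smoker: dies after 5 years of retirement, higher annual
# health costs, never reaches care-home age.
smoker = lifetime_cost(years_retired=5, annual_health_cost=4_000)

# Hypothetical non-smoker: 20 years of retirement, lower annual health
# costs, but two years in a care home at the end.
non_smoker = lifetime_cost(years_retired=20, annual_health_cost=2_500,
                           care_home_years=2)

print(f"smoker     ≈ £{smoker:,}")      # ≈ £70,000
print(f"non-smoker ≈ £{non_smoker:,}")  # ≈ £350,000
```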

Then there are tobacco taxes, which, according to Full Fact, “… bring in about £12 billion in direct tax revenues”. That is much greater than the smoking-related NHS costs, which the same source puts at “anywhere between £3 billion and £6 billion for NHS treatments [related to smoking] in a given year” (2015 figures).

A study in the British Medical Journal reached similar conclusions: “Smoking was associated with a greater mean annual healthcare cost of €1600 per living individual during follow-up. However, due to a shorter lifespan of 8.6 years, smokers’ mean total healthcare costs during the entire study period were actually €4700 lower than for non-smokers. For the same reason, each smoker missed 7.3 years (€126,850) of pension. Overall, smokers’ average net contribution to the public finance balance was €133,800 greater per individual compared with non-smokers”.

So, overall, smokers save the rest of us money. They pay tax over their working lives, and then tend to die in early retirement, relieving the taxpayer of ongoing expense. Thank you, smokers!

So Keir Starmer’s statements are wrong. He should have said: “ending smoking will cost a great deal over the years, but is worth it in terms of people’s improved life-span and improved quality of life”. Instead he said “we’ve got to take action to reduce the burden on the taxpayer”, because people find the burden-on-the-taxpayer framing convincing, whereas they are rightly dubious about governments adopting policies that amount to telling adults what is good for them.

So should social media, and indeed the traditional media, have censored Sir Keir’s statements by labelling them as “misinformation” or by removing them entirely? The people keenest on the idea that social media must clamp down on misinformation tend also to favour left-wing or centre-left governments, and Keir Starmer is not who they want censored. (Out of interest, I looked for any “fact check” by the BBC or other mainstream media in response to Starmer’s claims, but found none.)

Indeed, I suspect that, were I to write like this after the requested censorship of “misinformation” had been implemented, it would likely be me who was censored. After all, I’m disagreeing with official UK government pronouncements, and doing so on an important matter of public health!

The question of “who gets to say what is misinformation?” highlights the critical flaw in requests for censorship. The price of a society where we can dispute claims and seek the truth is that we have to learn to cope with information that is wrong and misleading.

Did Aboriginal Australians predict solar eclipses?


“Mathematics has been gatekept by the West and defined to exclude entire cultures” declares Professor Rowena Ball of the Australian National University, who wants mathematics to be “decolonised”. In one sense she is right: mathematics is indeed “a universal human phenomenon” that transcends individual cultures. But she is wrong to suppose that anyone disagrees; she is wrong to claim that there are people who think of mathematics as having “an exclusively European and British provenance” and want it to remain that way. Rebutting a strawman serves only to signal one’s superior attitudes.

Professor Ball claims that “Almost all mathematics that students have ever come across is European-based”, and yet “algebra” is an Arabic word and so is “algorithm”. Foundational concepts such as the number zero and negative numbers originated in the Middle East, India and China before being adopted by Europeans. The mathematics now taught to schoolchildren in Mumbai and Tokyo is the same as that taught in London.

Being Australian, Ball is primarily concerned to laud the mathematics of indigenous Australians as being of equivalent merit to globally mainstream mathematics, so she wants a “decolonised” curriculum in which “indigenous mathematics” has equal standing.

But how much substance is there to her case? What would actually be taught? Professor Ball’s article gives only one anecdote, about signalling with smoke rings, and, based on that alone, concludes that: “Theory and mathematics in Mithaka society were systematised and taught intergenerationally”.

In a longer, co-authored article, she reviews evidence of mathematics among indigenous Australians prior to Western contact, but finds little beyond an awareness of counting numbers and an ability to divide 18 turtle eggs equally between 3 people. She recounts that they had concepts of North, South, East and West, could travel and trade over long distances, knew about the relationship between lunar cycles and tides, and understood the seasons and the weather. And yes, I’m sure that they were indeed expert in the forms of practical knowledge needed to survive in their environment. But there is no indication of a parallel development of mathematics of equal standing to that elsewhere.

The two authors assert: “We also illustrate that mathematics produced by Indigenous People can contribute to the economic and technological development of our current ‘modern’ world”. But nowhere is that actually illustrated. There is no worked example. The suggestion is purely hypothetical. And there is no exposition of what a “decolonised” mathematics curriculum would actually look like.

There is one claim in the article that did seem intriguing:

But Deakin (2010, p. 236) has devised an infallible test for the existence of Indigenous mathematics! This is that there must be ‘an Aboriginal method of predicting eclipse’. To predict an eclipse, one needs clear and accurate understanding of the relationships between the motions of the Sun and Moon. In spite of the challenge, the answer is yes. Hamacher & Norris (2011) report a prediction by Aboriginal people of a solar eclipse that occurred on 22 November 1900, which was described in a letter dated in December 1899.

Predicting solar eclipses is indeed impressive. It requires either long-term record-keeping over many centuries, in order to discern patterns (such as the Saros cycle) in eclipse occurrences, or sophisticated mathematics coupled with measuring and recording the positions of the sun and moon to good accuracy. Either is hard to do in a society lacking a written language. (The English astronomer Edmond Halley is best known for having predicted the return of a comet and for predicting a solar eclipse over London in 1715, the first secure example of that feat, though it is likely that Babylonian, Chinese, Arabic or Greek astronomers, Thales among them, possibly using something like the Antikythera Mechanism, had done so centuries earlier.)
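As one concrete example of the kind of pattern that long-term record-keeping can reveal, the best-known is the Saros cycle: eclipses recur at intervals of roughly 6,585 days because whole numbers of synodic and draconic months nearly coincide. The arithmetic check below is standard astronomy, not anything discussed in Ball’s paper.

```python
# The Saros cycle: eclipses recur when whole numbers of synodic months
# (new moon to new moon) and draconic months (node crossing to node
# crossing) nearly coincide, so the Sun, Moon and node line up again.
SYNODIC_MONTH = 29.530589   # days, new moon to new moon
DRACONIC_MONTH = 27.212221  # days, node crossing to node crossing

saros_synodic = 223 * SYNODIC_MONTH    # ≈ 6585.32 days
saros_draconic = 242 * DRACONIC_MONTH  # ≈ 6585.36 days

print(f"223 synodic months  = {saros_synodic:.2f} days")
print(f"242 draconic months = {saros_draconic:.2f} days")
print(f"mismatch = {abs(saros_synodic - saros_draconic):.2f} days "
      f"≈ {abs(saros_synodic - saros_draconic) * 24:.1f} hours")
```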


This geoglyph in the Ku-ring-gai Chase National Park has been interpreted as a record of a solar eclipse, depicting the eclipsed crescent above two figures.

Hence, Professor Ball’s claim of a successful prediction of a solar eclipse is vastly more significant than anything else in her paper. So I looked up the source, a paper by Hamacher & Norris (2011). The evidence is a letter written in December 1899 by a Western woman who says: “We are to witness an eclipse of the sun next month. Strange! all the natives know about it; how, we can’t imagine!”.

Afterwards the same correspondent wrote: “The eclipse came off, to the fear of many of the natives. It was a glorious afternoon; I used smoked glasses, but could see with the naked eye quite distinctly”. But the eclipse did not occur until nearly a year after the first letter (November 1900), not “next month”; the “fear of many of the natives” sits oddly with the suggestion that they had predicted it; and the text of the letters comes from a later compilation in 1903. This is the only piece of evidence given that Aboriginal Australians had developed the ability to predict eclipses; Professor Ball presents nothing else, and nothing from Aboriginal people themselves. She gives no account from any Aboriginal source of how the prediction was done, and if that is because she cannot find anyone who could give such an account, then doesn’t that count for more than one anecdote that could have been misunderstood or miscommunicated?

That Professor Ball accepts such weak evidence uncritically shows that she is driven by an agenda, not by a fair assessment of the development of mathematics or of the history of indigenous peoples. There is no substance here, no account of what an “indigenous mathematics” curriculum would look like. It does students from indigenous backgrounds no favours to divert them away from global mathematics into “ethnomathematics”. Ironically, it is people like Professor Ball who are telling them that mathematics is “colonised” and European and not for them. This is the wrong message. Mathematics and science are universal and should be open to everyone, and we should not be dividing universal enterprises into silos with ethnic labels attached.

This piece was written for the Heterodox Academy STEM Substack and is reproduced here.

Humanists UK demands a blasphemy law

“Blasphemy laws are NEVER justifiable” declares Andrew Copson, Chief Executive of Humanists UK. “Freedom of speech, freedom of expression, and freedom of religion or belief are the very bedrock upon which our humanist ideals rest”, and in response to the disappointing news that Denmark wants to make it a criminal offence to burn the Koran in public, he adds: “appeasing bullies doesn’t work – it merely emboldens them”. As a member and supporter of Humanists UK I entirely agree.

The very next segment of the Humanists’ newsletter, however, declares: “Humanists protest ongoing delays to conversion therapy ban”. Reports are that the UK government is dropping its commitment to such a ban, “with ministers concluding it has proven problematic or ineffective in other countries”. Meanwhile, Humanists UK complain that: “LGBT people continue to be subjected to harmful attempts of so-called ‘conversion therapy’”.

I can’t help thinking that a ban on “conversion therapy” amounts to a blasphemy law, a ban on saying something that offends secular norms. As regards sexuality, pretty much the only form of conversion therapy that exists these days is speech, speech that “can include exorcisms and forced prayer” and that occurs “in closed-off religious communities”.

“Humanists UK is strongly committed to freedom of religion or belief, but that freedom should be limited where it causes harm, and conversion therapy is harmful”.


But I have a hard time accepting that such speech meets a legal definition of “harm” sufficient to justify making it a criminal offence. The whole basis of enlightenment values is that we outlaw physical violence and threats thereof, but accept people speaking freely even if others find it offensive and distressing. If we allow “psychological harm” as a reason to shut down speech, then anyone can censor speech simply by claiming to be upset. And that is exactly what activists do these days, claiming that any opinion they disagree with is “hate” speech that makes them feel “unsafe”.

Yes, it may well be psychologically distressing for a young gay adult to be told by their religious mentor that Jesus wants them to be straight, but I have little doubt that it is psychologically distressing for a devout Muslim to hear of their prophet being denigrated or their holy book being treated with disrespect. If we want to say “tough” to the latter, telling them that in a free society they must accept it, then, as I see it, we must also allow a Christian to proclaim that Jesus deplores homosexuality.


I am puzzled that Humanists UK can adopt a hard-nosed attitude to the “harm” caused to Muslims by Mohammed cartoons or burned Korans, but then go along with the nebulous and unevidenced concept of “harm” to a gay person on being told that Jesus could make them straight. It may be that their position is a knee-jerk opposition to religion in each case, but humanism should not be anti-religion. If we really think that “Freedom of speech, freedom of expression, and freedom of religion or belief are the very bedrock upon which our humanist ideals rest”, then we need to mean it, and reject the idea that such speech causes “harm”. “I support free speech but not where it causes harm …” is how all requests for blasphemy prohibitions start, religious or secular.

As for “forced” prayer and “closed-off religious communities”, any adult can walk away, and if they choose not to then that’s up to them. Humanists UK talk of “harrowing case studies” of “starvation”, though that exposé related only that: “On his first visit to the church, our reporter was invited to take part in a separate three-day residential programme, where those participating would be expected to pray for up to three hours at a time without eating or drinking until the third day”. Unless they were locking the doors, preventing him walking down to the nearest McDonald’s should he choose to (which, of course, would already be illegal), I’m not harrowed. Do we want to outlaw religiously-motivated fasting?

Humanists UK also link to a study of psychiatrists reporting that: “17% [of therapists] reported having assisted at least one client/patient to reduce or change his or her homosexual or lesbian feelings. […] counselling was the commonest treatment offered …” and that some therapists “… considered that a service should be available for people who want to change their sexual orientation”.

If someone goes voluntarily to a therapist and asks for it, I don’t see that it should be a criminal offence for the therapist to offer religious-based counselling, even though that counselling will likely not work. Acupuncture, Reiki and homeopathy also don’t work, but are not illegal.

OK, one might respond, but what about children, what about gay teenagers who may not be able to walk away? And yes, that’s a harder case. In general I want to protect children from religious coercion. But how far would we take it? Should it be illegal for a Catholic parent to teach their child about hell? Should we outlaw cults, outlaw Scientology, indeed outlaw Haredi Judaism or Catholicism or Islam? The teachings of all of these could be fairly regarded as abusive ideas to put in the mind of a child, and yet we tolerate them; indeed we even promote religions by handing over state-funded schools to them and by granting them charitable status.

If we are making it illegal for a religious leader to tell a teenager that gay acts are immoral, should we make it illegal for them to tell that teenager that sex before marriage is immoral, or that they should wear a hijab? I don’t think we should, since if we want freedom from religion and the freedom to blaspheme then we need to accept freedom of religion and their right to broadcast their views, however much we deplore those views.

We already have a problem with British police telling street preachers that they cannot proclaim Christianity. It’s precisely because I want the right to denigrate religion, including being disrespectful to the Prophet Mohammed, that I support their right to broadcast views that might upset others. A better approach is to remove schools from religious institutions, giving kids a right to a secular education and thus ensuring that they can also encounter views and perspectives other than those of their parents’ religion.

In the current climate, the really contentious issue is not so much sexuality, but “gender identity”. A ban on “conversion therapy” would include a ban on attempts to change a teenager’s “gender identity”. But our understanding of such issues is poor and is evolving rapidly, such that it could be calamitous for the clod-hopping criminal law to step in. We don’t even have an agreed definition of “gender identity”. Heck, we’d have a hard-enough time agreeing a definition of “woman” these days!

Unlike sexuality, “gender identity” seems to be fluid. The evidence is that, of young teenagers who declare a “gender identity” different from their biological sex, many, perhaps most, will desist and come to identify with their sex by early adulthood. Do we want to deny such kids access to professional counselling because the counsellors are worried that, if a teenager does come to desist, they’ll have committed the criminal offence of “conversion therapy”? Or do we want to ensure that only gung-ho counsellors, who will push an ideological approach regardless, will take on such kids? If you think such a law is a good idea, please could you at least read this account by the parent of a trans-identifying child.

OK, you might reply, we’ll draft the law with an exemption for well-meaning professional counselling. But what about a parent, or an aunt who has doted on the kid since birth and has their best interests at heart, or a trusted teacher? Are they allowed to talk through the kid’s feelings with them, without risk of being hauled into court? Activists will use accusations to try to silence anyone who disputes their ideology; they already do.

Then there’s the issue that “affirming” a teenager’s self-declared gender identity, that is, going along with a social transition, “is not a neutral act”, in the words of the authoritative Cass Review, but is instead “an active intervention because it may have significant effects on the child or young person in terms of their psychological functioning”. Refusing to acquiesce in a social transition might thus be in the child’s best long-term interests (we don’t know; Dr Cass says “better information is needed about outcomes”), while being exactly the sort of thing that activists want to prohibit.

Then there’s the issue that social transition often pushes the child towards medical and surgical transition, which can leave the young person permanently without sexual function and unable to have children, and turns them into a medical patient for life, requiring ongoing maintenance — and yet evidence that this improves their psychological well-being or reduces their suicide rate is simply not there.

Given the uncertainties, this is no place for the criminal law and I’m glad that the UK government is realising the problems with any such law and is shelving the idea. Those advocating such a law must believe that they know what’s best for trans-identifying youths, but the evidence base is meagre. Discouraging open enquiry is the opposite of what we need. Any entry of the criminal law into the contested arena of “gender identity” is likely to do much more harm than good.

The proper scope of academic freedom

This piece was written for and first appeared on the Heterodox STEM Substack, and was inspired by this Tweet:


“You can’t complain about attempts to shut down gender critical views in universities, and support the closing down of gender studies depts or courses on queer theory. You either support academic freedom or you don’t. Unfortunately, many people, including many academics, don’t understand academic freedom, or why it’s so important.” — Colin Wight, Professor Emeritus (International Relations)

Perhaps I’m one of the many academics who don’t understand academic freedom, but oh yes I can do both of those things! I don’t see academic freedom in such stark terms, and want to make a distinction between teaching and scholarship.

By “teaching” I’m referring to official courses run by a university, usually for credit towards a degree. I argue that academics don’t have “academic freedom” to do as they wish when teaching. A university should be dedicated to truth-seeking enquiry based on evidence and reason, and the courses it runs and the degrees that it awards carry its imprimatur. I don’t think that a university should run courses that promote creationism or Mormon theology or Queer Theory, or anything else where evidence and reason take a back seat to ideology, even if some academics want to teach it and even if some students want to take it.