Category Archives: Science

God does not belong in a science class

This piece was written as a reply for Heterodox STEM, where it first appeared.

Randy Wayne of Cornell University has recently presented his arguments for wanting to bring God into a science class, arguing that this is necessary for “the most complete scientific understanding”. He sees the exclusion from science of the idea of “immaterial intelligence” as an unwarranted restriction that impoverishes science and short-changes students. Here I’ll attempt to rebut Wayne, and will argue that omitting gods from science’s worldview follows quite properly from science itself.

One central part of Wayne’s argument is that:

“A foundational assumption such as reality is composed of matter and energy and nothing else, is an assumption — what Euclid calls a postulate. Foundational assumptions are untested, otherwise they would be called facts. Evidence gathering, logic, reason, and analysis are built on the assumptions, and science cannot proceed without faith in the assumptions …”

This view, that science rests on metaphysical assumptions that must be taken on faith, is commonly supposed, but is (I submit) profoundly wrong. At root, science comes from observing the world around us and developing a set of ideas that help us understand, predict and manipulate the world. Observing regularities in the natural world would have helped humans hunt or herd animals or grow crops more successfully. Over time, observing the night sky and the cycles of days, months and seasons led to an understanding of planetary orbits, and from there to Newton’s account of gravity and thence to Einstein’s account. We know that these accounts are true (in the sense of being good models of the world) because they make good predictions.

When Edmond Halley, the second Astronomer Royal, predicted that a solar eclipse would occur over England in 1715, that prediction came true to within four minutes. And when he predicted that a comet would return after another 75-year orbit, a prediction that also came true, he demonstrated that astronomers did have a good understanding of celestial mechanics.

It is important to realise that the successful outcome of his prediction verified not only his understanding of gravity and orbits, but also the mathematics that he used, the logic and reasoning that he used, and any other necessary assumptions underlying his science. Either: (1) making different assumptions would have affected the prediction, in which case the outcome verified them; or (2) they made no difference, in which case he needn’t have assumed them.

Within the inter-woven package of ideas that constitutes science there are none that are so fundamental that they cannot be challenged. All one need do is point to an idea and ask: what would be the case if it weren’t true, if we replaced it with its converse? Would that improve or worsen the models? The “improvement” is judged in terms of: (1) explanatory power (making sense of all the facts we currently know about); (2) predictive power (it’s easy to scheme up ad-hoc explanations for known facts, but much harder to successfully predict things one didn’t already know); and (3) parsimony (excising superfluous stuff that doesn’t improve the explanations).

Einstein’s gravity replaced Newton’s because of its explanatory power (it gave a correct calculation of the precession of the orbit of Mercury, something that Einstein already knew about) but also because of its predictive power (it correctly predicted the warping of space by the sun’s gravity, and hence the change of position of stars during a solar eclipse, something for which there was no prior measurement), and indeed its parsimony (in essence it consists of only one equation, which states how mass, energy and momentum warp space).

An illuminating metaphor is Neurath’s raft, which compares the ideas of science to the planks of a wooden raft afloat on the sea. One can swap out and replace any of the planks while standing on the others (though one can’t replace all of them at once, having nowhere else to stand). Similarly, we can evaluate any of the ideas underpinning science, using the rest of the ensemble to do so, and can replace any idea if doing so would improve the ensemble. No idea is too fundamental to be questioned. Over time, any and all of the ideas could be replaced or improved, as science iterates to an ensemble with more and more explanatory and predictive power.

You may now be tempted to ask: on what, then, is this account of science based, and how is it verified? I would reply that this account is also arrived at by figuring out what works best in modelling the world. Thus the “scientific method” is itself a product of science; it is the result of an iterative bootstrap that is ultimately verified by the fact that science works. Science does not rest on untestable metaphysical assumptions; it rests on the fact that iPhones work, airplanes fly, and NASA’s predictions of eclipse times do come true.

Wayne argues that leaving God and the supernatural out of science is an arbitrary and unwarranted choice. But the history of science shows this not to be so. Early scientists were fully content to invoke God if they needed him to patch up their models. James Clerk Maxwell wrote: “I have looked into most philosophical systems, and I have seen that none will work without a God”.

Newton applied his theory of gravity to the solar system and concluded that the whole edifice would be unstable over the long term, and so needed God’s intervention to make it work. “This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent and powerful Being” he wrote in Principia, and later: “A continual miracle is needed to prevent the sun and the fixed stars from rushing together through gravity”. Similarly, leading astronomer John Herschel wrote that the laws of nature had been established by the “Divine Author of the universe” and were being maintained by “the constant exercise of His direct power in maintaining the system of nature” while all material causes emanated “from his immediate will, acting in conformity with his own laws”.

But, as decades passed and understanding improved, scientists developed better models that worked fine without divine intervention. Hence Pierre-Simon Laplace’s (possibly apocryphal) remark to Napoleon that he “had no need of that hypothesis”. And in 1859, defending Darwinism, Thomas Henry Huxley wrote: “But what is the history of astronomy … but a narration of the steps by which the human mind has been compelled, often sorely against its will, to recognise the operation of secondary causes in events where ignorance beheld an immediate intervention of a higher power?”.

The change from a science entwined with religion to a science devoid of references to God can be traced to the decades between 1820 and 1880. That was not so much a metaphysical commitment as a practical matter: models worked fine without gods, and adding gods to them just made the models un-parsimonious while doing nothing to improve the explanations. A similar process had, of course, been going on throughout history. Many early religions attributed rain, thunder and successful harvests to the whims of nature gods. Even daytime was caused by a sun-god driving his chariot across the sky. Over eons these explanations were gradually replaced by an understanding of natural processes.

Before turning to Wayne’s arguments that invoking a God does improve the explanations available to science, let’s have a brief interlude:

“As I will show you, limiting all discussion in a science class to the material and denying the immaterial is unnecessarily restrictive. […] the First Amendment exists to protect the freedom to think. […] That is, a professor can use his right to freedom of speech to talk about God in a science class …”

I fully support Wayne’s right to think, advocate, write Substacks and seek to persuade others about such matters. But not in a science class! The students are there for an education in science, and that means mainstream science, the stuff in textbooks. Suppose I thought that Einstein was wrong, and instead had my own pet theory of gravity (one that had persuaded no-one else). It would be remiss of me to spend time teaching this in a science class. I’m there for the students’ benefit, not to advance my own hobby horses. I should not depart from accepted mainstream science to talk about God, any more than I should give my opinions on the War in Gaza or Vladimir Putin. There’s a time and place. If Wayne considers that God should be a part of science then he should first persuade his fellow scientists, not try the idea out on students.

After that interlude, let’s return to Wayne’s argument that God should be in science classes because it “helps the scientific enterprise”. He says: “Like any anchor, the anchor of scientific investigation only works when there is something to which the anchor can attach”. Wayne wants that thing to be an immaterial intelligence, God. There’s a long Christian tradition that the world is only intelligible because God made it so, and that science must rest on that commitment.

In contrast, I consider that science attaches to an empirically observed external world, and that science is ultimately bootstrapped from the fact that it works, verified by the fact that we can indeed predict eclipses. That the world is ordered enough to display such regularities is simply an observed fact. We could not have evolved in (and so would not be here to ponder) a chaotic universe with no regularities.

Wayne gives examples of where he thinks that God is needed:

“I conclude that bringing God into science class HELPS explain the origin of the universe, the origin of life, the origin of humans, and the origin and nature of mind, free will, and conscience — materialism’s greatest failures.”

I won’t attempt to do justice to Wayne’s full argument (for which read his piece), nor delve into how good a job materialism does with each of those (else this piece would get way too long; though I don’t agree that materialism fails, and would readily defend the materialist account of all of those). I will just outline how I (a scientist with an atheistic bent) would evaluate how well the proposed inclusion of God does as a scientific explanation.

(1) Invoking God — an infinitely powerful, infinitely capable, infinitely knowledgeable being with purposes that we cannot understand — is an explanatory sledgehammer to crack a few small nuts. Obviously if you start with such a being one can then explain anything at all via “God did it”. It’s about the least parsimonious explanation possible, and so does the opposite of what a good explanation does, which is to explain more out of less. For example, Einstein’s general relativity posits one equation about how matter warps space, but from there can explain an astonishingly wide array of phenomena, including the detection of gravitational waves from colliding black holes that are half-way across the visible universe. Darwinian evolution posits the neat idea of natural selection (statable in a few lines) and from there explains the amazing proliferation of life on Earth.

(2) Starting with the thing one is trying to explain is not an explanation. If I were trying to explain the existence of mice, you would not be impressed if I said “let’s start by having some mice”. Similarly, if one is trying to explain the existence of humans, starting the explanation with a God that is conceived in the image of humans does not impress. And if one is trying to explain the existence of human minds that are intelligent and have a will, then starting with a super-intelligence that has a mind and a will is underwhelming. In contrast, a materialist explains these things as the end products of an evolutionary process, and thus explains them out of simpler and more mundane origins. Even if you disagree that this succeeds, at least it attempts to be an explanation.

(3) Explaining the origin of the universe by invoking a god just leaves you needing to explain the god. And if you’re going to argue that God: always existed/made itself/is necessary/just is, then one could just as well say the same about the universe and excise the god. That would be a simpler explanation, especially as all the attributes of God have been souped up to infinity. Indeed, if we want something that might just pop into existence, uncaused and for no reason, then elementary particles would be our best bet; they seem to do that as far as we can tell, intelligences don’t. The only intelligences we know of are fragile, dependent and contingent products of a long evolutionary process. If anything needs an explanation, they certainly do. Just starting with an intelligence (nay, a super-intelligence) is about as far from an “explanation” as one can get.

(4) Invoking God doesn’t explain anything that the idea was not designed to explain. And that is the hallmark of an ad-hoc hypothesis, constructed to arrive at a desired conclusion. It also exhibits parochial thinking (God being envisaged in the image of an idealised tribal leader, and then abstracted and made apophatic from there) along with a large dollop of wishful thinking (What does a human most want? To be loved and live forever. What does a god provide? Being loved and living forever).

(5) The idea of God makes no predictions and so is unfalsifiable. Consider a child dying of brain cancer. If we gave the mother the ability to cure her child then she would do so without hesitation. God loves the child even more than the mother, and has the power to cure him as easily as lifting a little finger, so he cures the child, right? Well, … maybe not.

I’ve no doubt that theologians have schemed up lots of good reasons why that might not happen and why the God hypothesis is compatible with any and all outcomes, but the cost is to strip the idea of any possibility of doing what any good explanation should do: predicting things we didn’t already know, but can then verify. By adding in lots of ineffability and “God has his reasons” the theologians ensure that the hypothesis is vague and enigmatic, and complex and unwieldy, and also devoid of any actual explanatory or predictive power. This is the exact opposite of what a good scientific explanation is like.

Theologians know that if they made some concrete predictions that could potentially be falsified then they’d quickly get their fingers burned, so instead they carefully construct a God hypothesis that makes no testable difference in the observable world. But if it makes no difference then it is dispensable, and thus science picks up Occam’s razor and excises it.

It was for such reasons that invocations of God within science gradually died out as science progressed, summed up by Huxley’s remark that “Extinguished theologians lie about the cradle of every science”. The notion of The Divine is not omitted from science out of prejudice or by arbitrary fiat; instead, it gradually lost its place in science for the quite proper and scientific reason that it fails to improve any of science’s explanations.

Of course our knowledge of the world is incomplete, so one can always point to gaps in our understanding and fill them with a “God of the gaps”, but as our understanding progresses, and the gaps get filled with knowledge, this leads to a Cheshire Cat god who gradually does less and less and then disappears, leaving only a hankering from those who want to believe. Science has moved on from a sun-god driving a chariot across the sky, and from other superseded explanations such as phlogiston or élan vital. I submit that the God that Randy Wayne points to similarly fails to improve any of science’s explanations, and so should not be brought into today’s science classes.

Is the dimethyl sulphide in the atmosphere of exoplanet K2-18b real?

This was first published on Jerry Coyne’s website: Why Evolution is True

Everyone is interested in whether life exists on other planets. Thus the recent claim of a detection of a biomarker molecule in the atmosphere of an exoplanet has attracted both widespread attention and some skepticism from other scientists.

The claim is that planet K2-18b shows evidence of dimethyl sulphide (DMS), a molecule that on Earth arises from biological activity. Below is an account of the claim, where I attempt to include more science than the mainstream media does, but do so largely with pictures in the hope that the non-expert can follow the gist.

Transiting exoplanets such as K2-18b are discovered owing to the periodic dips they cause in the light of the host star:

[Image: schematic of a planet transiting its star, causing a periodic dip in the lightcurve]

So here is the lightcurve of K2-18b, as observed by the James Webb Space Telescope, showing the transit that led to the claim of DMS by Madhusudhan et al.:

[Image: JWST transit lightcurve of K2-18b]

If we know the size of the star (deduced from the type of star, which we know from its spectrum), the fraction of light that is blocked then tells us the size of the planet.
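As a concrete illustration, here is a minimal Python sketch of that calculation; the helper function and the numbers fed into it are my own illustrative assumptions, roughly of the right size for K2-18, not the measured values:

```python
import math

R_SUN_KM = 696_340    # solar radius in km
R_EARTH_KM = 6_371    # Earth radius in km

def planet_radius_earths(depth, stellar_radius_suns):
    """Transit depth is the blocked fraction of light: depth = (R_p / R_star)**2,
    so R_p = R_star * sqrt(depth)."""
    r_planet_km = stellar_radius_suns * R_SUN_KM * math.sqrt(depth)
    return r_planet_km / R_EARTH_KM

# Assumed illustrative values: a ~0.4 solar-radius red dwarf and a 0.3% dip.
print(planet_radius_earths(depth=0.003, stellar_radius_suns=0.41))  # ~2.5 Earth radii
```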

But we also need to know its mass. One gets that from measuring how much the host star is tugged around by the planet’s gravity, and that is obtained from the Doppler shift of the star’s light.

The black wiggly line in the plot below is the periodic motion of the star caused by the orbiting planet. Quantifying this is made harder by lots of additional variation in the measurements (blue points with error bars), which is the result of magnetic activity on the star (“star spots”). But nevertheless, if one phases all the data on the planet’s orbital period (lower panel), then one can measure the planet’s mass (plot by Ryan Cloutier et al):

[Image: radial-velocity measurements of K2-18, with the data phased on the planet’s orbital period in the lower panel (Cloutier et al.)]
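For the gist of how the mass drops out of the Doppler wobble, here is a minimal sketch, assuming a circular orbit and a planet much lighter than its star; the input numbers are again my own rough assumptions, not the published measurements:

```python
import math

G = 6.674e-11          # gravitational constant (SI units)
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg
DAY = 86_400           # seconds

def planet_min_mass_earths(K_ms, period_days, stellar_mass_suns):
    """Minimum planet mass (M sin i) from the star's velocity semi-amplitude K.
    For a circular orbit with planet mass << stellar mass:
    M_p sin i = K * M_star**(2/3) * (P / (2*pi*G))**(1/3)."""
    P = period_days * DAY
    m_star = stellar_mass_suns * M_SUN
    m_p = K_ms * m_star ** (2 / 3) * (P / (2 * math.pi * G)) ** (1 / 3)
    return m_p / M_EARTH

# Assumed illustrative values: K ~ 3.2 m/s, P ~ 33 days, a 0.36 solar-mass star.
print(planet_min_mass_earths(K_ms=3.2, period_days=33, stellar_mass_suns=0.36))  # ~8 Earth masses
```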

So now we have the mass and the size of the planet (and we also know its surface temperature since we know how far it is from its star, and thus how much heating it gets). Combining that with some understanding of proto-planetary disks and planet formation we can thus scheme up models of the internal composition and structure of the planet.

The problem is that multiple different internal structures can add up to the same overall mass and radius. One has flexibility to invoke a heavy core (iron, nickel), a rocky mantle (silicates), perhaps a layer of ice (methane?), perhaps a liquid ocean (water?), and also an atmosphere.

[Image: candidate internal compositions and structures for K2-18b]

This “degeneracy” is why Nikku Madhusudhan can argue that K2-18b is a “hycean” planet (hydrogen atmosphere over a liquid-water ocean) while others argue that it is instead a mini-Neptune, or that it has an ocean of molten magma.

But one can hope to get more information from the detection of molecules in the planet’s atmosphere, a task that is one of the main design goals of JWST. The basic idea is straightforward: During transit, some of the starlight will shine through the thin smear of atmosphere surrounding the planet, and the different molecules absorb different wavelengths of light in a pattern characteristic of that molecule (figure by ESA):

[Image: schematic of transmission spectroscopy, with starlight filtering through the planet’s atmosphere during transit (ESA)]

So one observes the star both during the transit and out of transit, and then subtracts the two, and the result is a spectrum of the planet’s atmosphere.
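In code, this is just a per-wavelength version of the transit-depth calculation. The following toy sketch uses an entirely made-up absorption band, purely to show the arithmetic, not real data:

```python
import numpy as np

wavelengths_um = np.linspace(1.0, 5.0, 200)   # wavelength grid in microns (assumed)
f_out = np.ones_like(wavelengths_um)          # normalised out-of-transit flux
base_depth = 0.003                            # depth from the opaque body of the planet
# A fake absorption band: where the atmosphere absorbs, the planet looks slightly bigger.
feature = 2e-4 * np.exp(-0.5 * ((wavelengths_um - 3.3) / 0.1) ** 2)
f_in = f_out * (1.0 - (base_depth + feature))  # normalised in-transit flux

# The transmission spectrum is the per-wavelength transit depth:
spectrum = 1.0 - f_in / f_out
print(spectrum.min(), spectrum.max())  # 0.003 at the continuum, ~0.0032 in the band
```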

If the planet is a large gas giant with a fluffy, extended atmosphere and is orbiting a bright star (so that a lot of photons pass through the atmosphere), the results can be readily convincing. For example, here is a spectrum of exoplanet WASP-39b with features from different molecules labelled (figure by Tonmoy Deka et al):

[Image: JWST transmission spectrum of WASP-39b with molecular features labelled (Deka et al.)]

[I include a plot of WASP-39b partly because I was part of the discovery team for the Wide Angle Search for Planets survey, but also because it is pretty amazing that we can now obtain a spectrum like that of the atmosphere of an exoplanet that is 700 light-years away, even while the planet itself is so small and dim and distant that we cannot even see it.]

The problem with K2-18b is that it is much smaller than WASP-39b and its atmosphere less extended (so fewer photons pass through it). This is at the limit of what even the $10-billion JWST can do.

When you’re subtracting two very-similar spectra (the in- and out-of-transit spectra) in order to obtain a rather small signal, any “instrumental systematics” matter a lot. Here is the same spectrum of K2-18b, as processed by several different “data reduction pipelines”, and as you can see the differences between them (effectively, the limits of how well we understand the data processing) are similar in size to the signal (plot by Rafael Luque et al.):

[Image: the K2-18b spectrum as processed by different data-reduction pipelines (Luque et al.)]

The next problem is that there are a lot of different molecules that one could potentially invoke (with the constraint of making the atmospheric chemistry self-consistent). For example, here are the expected spectral features from eight different possible molecules (figure by Madhusudhan):

[Image: expected spectral features of eight candidate molecules (Madhusudhan)]

Then one needs to think about what molecules one might expect to see, depending on what one thinks the observable atmosphere is made of, and how that relates to the overall structure of the planet. Here (for example) is an interpretation “roadmap” from a recent paper by Renyu Hu et al.:

[Image: interpretation “roadmap” relating atmospheric observations to planet structure (Hu et al.)]

To finally get to the point, here is the crucial figure: Nikku Madhusudhan and colleagues argue — based on an understanding of planet formation, and on arguments that planets like K2-18b are hycean worlds, and from considerations of atmospheric chemistry, in addition to careful processing and modelling of the spectrum itself — that the JWST spectrum of K2-18b is best interpreted as follows (the blue line is the model, the red error bars are the data):

[Image: the JWST spectrum of K2-18b (red error bars) with the best-fit model (blue line) (Madhusudhan et al.)]

This interpretation involves large contributions from DMS (dimethyl sulphide) and also DMDS (dimethyl disulphide) — the plot below shows the different contributions separated — and if so that would be notable, since on Earth those compounds are products of biological activity.

[Image: the model spectrum with the separate contributions of DMS and DMDS shown]

In contrast, Jake Taylor has analysed the same spectrum and argues that he can fit it adequately with a straight line, and that the evidence for features is at best two sigma. Others point out that the fitted model contains roughly as many free parameters as data points. Meanwhile, a team led by Rafael Luque reports that they can fit the spectrum without invoking DMS or DMDS, and suggest that observations of another 25 transits of K2-18b would be needed to properly settle the matter.
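One can make the flavour of that dispute concrete with a toy model comparison. The sketch below is not the analysis used by any of these teams; it simply illustrates, on fabricated flat data, how an information criterion penalises a model that buys a better fit with many free parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = np.linspace(0.0, 1.0, n)
sigma = 1.0
y = rng.normal(0.0, sigma, n)   # fabricated data consistent with a flat line

def bic(y_model, k):
    """Bayesian Information Criterion: chi-squared plus a penalty per free parameter."""
    chi2 = np.sum(((y - y_model) / sigma) ** 2)
    return chi2 + k * np.log(n)

flat = np.full(n, y.mean())                      # 1 free parameter
wiggly = np.polyval(np.polyfit(x, y, 15), x)     # 16 free parameters

print("flat   BIC:", bic(flat, 1))     # modest chi2, tiny penalty -> preferred
print("wiggly BIC:", bic(wiggly, 16))  # lower chi2, but the penalty overwhelms it
```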

There are several distinct questions here. Are the instrumental systematics sufficiently known and accounted for? (Perhaps, but not certainly.) Are the relevant spectral features statistically significant? (That’s borderline.) And, if the features are indeed real, are they properly interpreted as DMS? (Theorists can usually scheme up alternative possibilities.) Perhaps a fourth question is whether there are abiotic mechanisms for producing DMS.

This is science at the cutting edge (and Madhusudhan has been among those emphasizing the lack of certainty, though that has not always been propagated to news stories), and so the only real answer to these questions is that things are currently unclear. This is a fast-moving area of astrophysics and we’ll know a lot better in a few years.

Barnard’s Star is orbited by four small, rocky planets

This was written for The Conversation (this being my original edit of the piece).


Barnard’s Star is a small, dim star, of the type that astronomers call red dwarfs. Consequently, even though it is one of the closest stars, such that its light takes only six years to reach us, it is too dim to see with the naked eye. And much, much too dim to be seen, even with the best telescopes that we have, are the four small planets that we now know to be in close orbits around the star.

Few stars are named after astronomers. The bright, naked eye stars were named in the golden era of Arabic science, while fainter stars typically just have catalogue numbers. But in 1916 Edward Emerson Barnard noticed that this star was moving in the night sky. It is so close to us that its motion through space can be seen against the backdrop of stars that, being much more distant, appear fixed.

How were the orbiting planets found if they’re much too dim to be seen? The answer lies in detecting the effect of their gravity on the star. The mutual gravitational attraction keeps the planets in their orbits, but also tugs on the star, moving it in a rhythmic dance that can be detected by sensitive spectrographs designed to measure the star’s motion.

A significant challenge, however, is the star’s own behaviour. Stars are fluid, with the nuclear furnace at their core driving churning motions that generate magnetic fields (just as the churning of Earth’s molten core produces Earth’s magnetic field). The surfaces of red-dwarf stars are rife with magnetic storms that cause giant flares and dark “star spots”, and these can mimic the effect of planets.

The task of finding planets by this method boils down to building the most-sensitive spectrographs possible, mounting them on large telescopes that feed sufficient light, and then observing a star over months or years. After carefully calibrating the resulting data, and modelling out the effects of stellar magnetic activity, one can then scrutinise the data for the tiny signals that reveal orbiting planets.

In 2024 a team led by Jonay González Hernández reported on four years of monitoring of Barnard’s Star with the ESPRESSO spectrograph on ESO’s Very Large Telescope. They found one secure planet and reported tentative signals that could indicate three more planets. Now, a team led by Ritvik Basant have added in three years of monitoring with the MAROON-X instrument on the Gemini-North telescope. Analysing their data alone confirmed three of the four planets, while combining both datasets confirms that all four are real.

Often in science, when detections push the limits of current capabilities, one needs to ponder the reliability of the findings. Are there spurious instrumental effects that the teams haven’t accounted for? Hence it is reassuring when independent teams, using different telescopes, instruments and computer codes, arrive at the same conclusions.

The planets form a tight, close-in system, having orbital periods between 2 and 7 Earth days (for comparison, our Sun’s closest planet, Mercury, orbits in 88 days). Most likely they all have masses less than Earth. They’re likely to be rocky planets, with bare-rock surfaces blasted by their star’s radiation. They’ll be too hot to hold liquid water, and any atmosphere is likely to have been stripped away.

The teams looked for longer-period planets, further out in the star’s habitable zone, but didn’t find any. We don’t know much else about these planets, such as their sizes. The best way of figuring that out would be to watch for transits, when planets pass in front of their star, and then measure how much light they block. But the Barnard’s Star planetary system is not edge on to us, so the planets don’t transit, and that makes them harder to study.

Nevertheless, the Barnard’s Star planets tell us about planetary formation. They’ll have formed in a protoplanetary disk swirling around the nascent star. Particles of dust will have stuck together, and gradually built up into rocks that aggregated into planets. Red dwarfs are the most common type of star, and most of them seem to have planets. Whenever we have sufficient observations of such a star we find planets, so likely there are far more planets in the galaxy than there are stars.

Most of the planets that have been discovered are close to their star, well interior to the habitable zone, but that’s largely because their proximity makes them much easier to find. Being closer in means that their gravitational tug on their star is bigger, and it means that they have shorter orbital periods (so we don’t have to monitor the star for as long). It also increases their likelihood of transiting, and thus of being found in transit surveys. ESA’s upcoming PLATO mission is designed to find planets further from their stars. This should produce many more planets in their habitable zones, and should begin to tell us whether our own Solar System, which has no close-in planets, is unusual.

GWAS studies underestimate the heritability of intelligence

Whether differences in intelligence are due to people’s different genes or to their different environments has long been contentious. One answer to this question comes from twin studies and adoption studies. By comparing outcomes for identical twins (who share all their genes) with those of fraternal twins and with unrelated children, one can deduce the relative influences of genes in comparison with “shared environment” (all environmental factors shared by siblings growing up together) and un-shared environment (everything else, which can include things like randomness in embryonic development). Such studies give high estimates for the genetic contribution to differences in intelligence, such that the heritability of IQ is typically estimated as around 70%.
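To see the logic of how such numbers are extracted, here is the classic Falconer approximation, using assumed twin correlations of the size typically reported for adult IQ (real studies fit more sophisticated models than this):

```python
def falconer(r_mz, r_dz):
    """Falconer's approximation from twin correlations:
    identical (MZ) twins share all their genes, fraternal (DZ) twins about half."""
    h2 = 2 * (r_mz - r_dz)   # heritability
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # non-shared environment (plus measurement noise)
    return h2, c2, e2

# Assumed illustrative correlations: MZ ~ 0.85, DZ ~ 0.50.
print(falconer(r_mz=0.85, r_dz=0.50))   # -> (0.70, 0.15, 0.15)
```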

A different method is to look directly at genes, through Genome Wide Association Studies (GWAS), which sample large numbers of genes in large numbers of people, attempting to measure and add up the effect of each gene on IQ. This typically gives much lower estimates for the effect of genes, and the marked difference between estimates from twin studies and those from GWAS studies is referred to as the “missing heritability” problem.

Recently the Harvard geneticist Sasha Gusev argued that twin studies are unreliable and that the true heritability is nearer the much-lower estimates from current GWAS studies. Saying that “intelligence is not like height”, he argues that, while a trait like height might be strongly influenced by genes, intelligence is not. “Adding up all of the genetic variants only predicts a small fraction of IQ score”, he says, adding that: “the largest genetic analysis of IQ scores built a predictor that had an accuracy of 2–5% in Europeans […]”.

In response to Gusev’s critique, Noah Carl wrote a defense of twin studies. Here I add to that by arguing that current GWAS studies must be overlooking much of the genetic influence on intelligence. In short, intelligence must be affected by vast numbers of genes, which means that most of them must have very small effects, and current GWAS studies do not have the statistical power to detect small-enough effects. This is not a new suggestion (see, e.g., Yang et al. 2017), but it could well resolve the issue.

Being taught at school about Mendel and smooth versus wrinkly peas might leave the impression that traits can be determined by only one or a few genes. While this might be true in some few cases, most traits are affected by very many genes, and, in particular, complex traits related to human personality and behaviour must involve huge numbers of genes. (Whereas a simpler trait like height could, in principle, involve fewer genes.)

Intelligence is among the most complex traits, which means that any genetic recipe for intelligence must contain a lot of information. If I asked you how many lines of code you’d need to program an intelligent robot you’d reply: “Eek, millions at least!”. Genes provide, of course, a developmental recipe rather than a direct program, but the underlying point, that this recipe must contain a vast amount of information and so be encoded in tens of thousands of genes, still stands. It then follows that most of these genes must individually have a very small effect. If N thousand genes each contributed equally to intelligence then each would contribute only one part in N thousand of the effect.

A basic rule of statistics is that to find smaller effects you need a larger sample. Typically the fractional uncertainty scales as one over the square root of the sample size. So if an opinion poll samples 1000 people then you get a roughly 3% error range (1/sqrt(1000) ≈ 0.032). To do ten times better (a 0.3% error) you’d need a sample 100 times bigger. (Though you’d also run into systematic error, such as whether your sample is representative of the population.) And, of course, to find a tiny effect you need an error range smaller than that effect, preferably quite a bit smaller.
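A two-line check of that scaling (the 3% figure for a 1000-person poll drops straight out):

```python
import math

for n in (1_000, 10_000, 100_000):
    print(f"n = {n:>7,}: error range ~ {1 / math.sqrt(n):.2%}")
# n = 1,000 gives ~3.2%; to reach ~0.3% you need a sample 100 times bigger.
```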

In writing about GWAS studies I should own up that I am not a geneticist, but in my “day job” in astrophysics we have exactly the same problem; that’s why we build telescopes with large mirrors to collect many photons. We are studying tiny signals from faint galaxies at large distances in the universe, and every time we apply for time on a large telescope we calculate how much time we need in order to collect enough photons to have the statistical power to find the small signal that we are looking for.

GWAS studies examine one type of genetic variability, Single Nucleotide Polymorphisms (or SNPs, usually pronounced “snips”), and they might typically record SNPs at 20,000 locations out of 3 billion nucleotides, examining those SNPs over a sample of 10,000 to 100,000 genomes. For a discussion of the statistical power of GWAS studies I refer to the paper by Wu et al. (2022) (“… Statistical power … of genome-wide association studies of complex traits”). The authors confirm that: “The statistical power for an individual SNP is determined by its effect size, the sample size, and the desired significance level. In a random sample of size n, the test statistic for the association between a quantitative phenotype and a SNP is β sqrt(n), …” (where β is the effect size). Making a range of assumptions (for which see the paper) they develop a model leading to the following plot:

[Image: required sample size versus SNP effect size (Wu et al. 2022)]

The plot shows the sample size (number of genomes) needed to find SNPs of small, moderate and large effect size, by which they mean 0.01%, 0.1%, and 1% of total SNP heritability. This shows that to detect a gene accounting for 0.1% of the variance requires a sample size of ~ 30,000. A similar paper that again develops a statistical model, making a different set of assumptions (Wang & Xu 2019), concludes that finding a SNP that explains 0.20% of the phenotypic variance requires a sample size of 10,000, which is consistent with the first paper. It’s also worth remarking that real-life studies will almost certainly do worse than these estimates, since there are always sources of noise not accounted for in theoretical models.
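To make that concrete, here is a back-of-envelope version of such a power calculation, simplified by me from the β sqrt(n) statistic quoted above; the genome-wide significance threshold of 5e-8 and the 80% power target are conventional assumptions, and the models in those papers are more elaborate:

```python
from statistics import NormalDist

# If a SNP explains a proportion q of trait variance, the association test
# statistic grows as sqrt(n * q). Detection requires it to exceed the sum of
# the significance and power thresholds, giving n ~ (z_alpha + z_power)**2 / q.
z_alpha = NormalDist().inv_cdf(1 - 5e-8 / 2)   # genome-wide significance, ~5.45
z_power = NormalDist().inv_cdf(0.80)           # 80% power, ~0.84

for q in (0.01, 0.001, 0.0001):                # 1%, 0.1%, 0.01% of variance
    n = (z_alpha + z_power) ** 2 / q
    print(f"SNP explaining {q:.2%} of variance: n ~ {n:,.0f}")
# Roughly 4,000 genomes for a 1% SNP, 40,000 for 0.1%, 400,000 for 0.01%,
# consistent in order of magnitude with the published plots.
```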

Hence GWAS studies sampling tens of thousands of genomes could find the genes associated with intelligence if there were only a few hundred of them, but if they number a few thousand then that requires hundreds of thousands of genomes at a minimum, and if intelligence involves over ten thousand SNPs then that’s well beyond current GWAS studies.

The GWAS study in the quote from Gusev above (Savage et al. 2018) sampled the genomes of 270,000 people. They do indeed report that the genetic differences that they found account for only “up to 5.2% of the variance in intelligence”, but they have only found 205 “associated genomic loci” (blocks of SNPs associated with a trait). It seems wildly implausible that these 205 genetic differences are all that there is to a recipe for intelligence. (If you disagree, feel free to write down a developmental recipe for human-like intelligence in only 205 lines of instructions!) It’s worth pointing out that AI models such as ChatGPT-4 are based on hundreds of billions of neural-network “weights” (though of course this is the end product, not the recipe).

Indeed a more recent and bigger study (Okbay et al 2022) analysed 3 million genomes and found 3,952 SNPs associated with educational attainment, that together account for 12 to 16% of variance in educational attainment. So (setting aside that intelligence, IQ and educational attainment are not quite the same thing), they have a larger sample, they find that many more SNPs are involved, and in total this accounts for a larger fraction of the variation.

Even then, this is likely to be merely the tip of the iceberg of genetic variability relevant to intelligence and educational attainment. There are 3 billion nucleotides in the human genome, and we have no good way of estimating what fraction might have some effect on intelligence. Further, GWAS techniques study only one type of genetic variation, the SNP, whereas there are, in principle, lots of other ways in which genomes can vary. And, further, GWAS estimates assume a simple “additive” model, where the overall effect on the phenotype is simply the sum of the effect of each SNP individually. This could well be a good first-order approximation, but the reality of a recipe for intelligence is likely to involve myriads of subtle and complex interactions.

In short, since our understanding of the genetic developmental recipe for intelligence is close to non-existent, and since we are only guessing wildly at how many SNPs it might involve and how those SNPs interact, there is no way that we can conclude that current GWAS studies are sensitive to most of the relevant genetic variability. Hence we cannot conclude that adding up the known effects gives anything like a true estimate of the heritability of any complex trait, such as intelligence. All we can say is that it gives a lower limit.

Note that this argument is not attributing the missing heritability (as has sometimes been suggested) to relatively rare genes of large effect, which, being rare, are simply not sampled in GWAS studies (this idea used to be plausible, but is getting less so as GWAS studies get bigger). Instead, it is attributing the missing heritability to large numbers of common genes that are sampled in GWAS studies, but whose individual effects are too small to be detected with the statistical power of current GWAS studies.

In contrast, twin studies do not depend on knowing anything about how the genes produce intelligence. It is purely a suck-it-and-see method that takes whole genomes (identical twins, fraternal twins, and unrelated children) and evaluates the later-life outcomes. Twin studies do have their own assumptions, including the “equal environment assumption”. For example, do parents tend to treat identical twins differently from fraternal twins? This is one reason why the gold standard is studies of twins that were separated at birth and reared apart. This is a rare occurrence today, but before the ready availability of abortion and the pill in Western countries there was a steady stream of young, un-married mothers giving up babies to be adopted at birth, and it was common to separate twins. More recently, China’s one-child policy has led to twins being separated and adopted at birth (e.g., Segal & Pratt-Thompson 2024). Such studies give high values of ~ 0.7 for the heritability of IQ.

As a result of checks like this, heritability estimates from twin and adoption studies have been extensively examined and seem robust. We have no good reason to think that twin studies are severely underestimating the heritability of IQ. In contrast, there is good reason to suspect that the GWAS estimates are only a lower limit, and are currently much too low.

Did Aboriginal Australians predict solar eclipses?


“Mathematics has been gatekept by the West and defined to exclude entire cultures” declares Professor Rowena Ball of the Australian National University, who wants mathematics to be “decolonised”. In one sense she is right, mathematics is indeed “a universal human phenomenon” that transcends individual cultures. But she is wrong to suppose that anyone disagrees; she is wrong to claim that there are people who think of mathematics as having “an exclusively European and British provenance” and want it to remain that way. Rebutting a strawman serves only to signal one’s superior attitudes.

Professor Ball claims that “Almost all mathematics that students have ever come across is European-based”, and yet “algebra” is an Arabic word and so is “algorithm”. Foundational concepts such as the number zero and negative numbers originated in the Middle East, India and China before being adopted by Europeans. The mathematics now taught to schoolchildren in Mumbai and Tokyo is the same as that taught in London.

Being Australian, Ball’s primary concern is to laud the mathematics of indigenous Australians as being of equivalent merit to globally mainstream mathematics, so she wants a “decolonised” curriculum in which “indigenous mathematics” has equal standing.

But how much substance is there to her case? What would actually be taught? Professor Ball’s article gives only one anecdote about signalling with smoke rings, and based on that alone concludes that: “Theory and mathematics in Mithaka society were systematised and taught intergenerationally”.

In a longer, co-authored article, she reviews evidence of mathematics among indigenous Australians prior to Western contact, but finds little beyond an awareness of counting numbers and an ability to divide 18 turtle eggs equally between 3 people. She recounts that they had concepts of North, South, East and West, could travel and trade over long distances, knew about the relationship between lunar cycles and tides, and had an understanding of the seasons and the weather. And yes, I’m sure that they were indeed expert in the forms of practical knowledge needed to survive in their environment. But there is no indication of a parallel development of mathematics of equal standing to that elsewhere.

The two authors assert: “We also illustrate that mathematics produced by Indigenous People can contribute to the economic and technological development of our current ‘modern’ world”. But nowhere is that actually illustrated. There is no worked example. The suggestion is purely hypothetical. And there is no exposition of what a “decolonised” mathematics curriculum would actually look like.

There is one claim in the article that did seem intriguing:

But Deakin (2010, p. 236) has devised an infallible test for the existence of Indigenous mathematics! This is that there must be ‘an Aboriginal method of predicting eclipse’. To predict an eclipse, one needs clear and accurate understanding of the relationships between the motions of the Sun and Moon. In spite of the challenge, the answer is yes. Hamacher & Norris (2011) report a prediction by Aboriginal people of a solar eclipse that occurred on 22 November 1900, which was described in a letter dated in December 1899.

Predicting solar eclipses is indeed impressive. It requires considerable understanding and long-term record-keeping over many centuries, in order to discern patterns in eclipse occurrences, or else it requires some sophisticated mathematics coupled with measuring and recording the locations of the sun and moon to good accuracy. Either is hard to do in a society lacking a written language. (The English astronomer Edmond Halley is best known for having predicted the return of a comet and for predicting a solar eclipse over London in 1715, the first secure example of that feat, though it is likely that earlier Babylonian, Chinese, Arabic or Greek astronomers, such as Thales, perhaps aided by devices like the Antikythera Mechanism, had managed it centuries before.)

[Image: a geoglyph in the Ku-ring-gai Chase National Park that has been interpreted as a record of a solar eclipse, depicting the eclipsed crescent above two figures.]

Hence, Professor Ball’s claim of a successful prediction of a solar eclipse is vastly more significant than anything else in her paper. So I looked up the source, a paper by Hamacher & Norris (2011). The evidence is a letter written in December 1899 by a Western woman who says: “We are to witness an eclipse of the sun next month. Strange! all the natives know about it; how, we can’t imagine!”.

Afterwards the same correspondent wrote: “The eclipse came off, to the fear of many of the natives. It was a glorious afternoon; I used smoked glasses, but could see with the naked eye quite distinctly”. But there was no eclipse until a year after the first letter (Nov 1900), not “next month”; the “fear of many of the natives” is incongruous with the suggestion that they had predicted it; and the text of the letters comes from a later compilation in 1903. This is the only piece of evidence given that Aboriginal Australians had developed the ability to predict eclipses; Professor Ball presents nothing else, and nothing from Aboriginals themselves. She gives no account from any Aboriginal about how this is done, and if that’s because she cannot find anyone who could give that account, then doesn’t that count for more than one anecdote that could have been misunderstood or miscommunicated?

That Professor Ball accepts such weak evidence uncritically shows that she is driven by an agenda, not by a fair assessment of the development of mathematics or of the history of indigenous peoples. There is no substance here, no account of what an “indigenous mathematics” curriculum would look like. It does students from indigenous backgrounds no favours to divert them away from global mathematics into “ethnomathematics”. Ironically, it is people like Professor Ball who are telling them that mathematics is “colonised” and European and not for them. This is the wrong message. Mathematics and science are universal, and should be open to everyone, and we should not be dividing universal enterprises into silos with ethnic labels attached.

This piece was written for the Heterodox Academy STEM Substack and is reproduced here.

Confusion over causation, both top-down and bottom-up

I’m becoming convinced that many disputes in the philosophy of science are merely manufactured, arising from people interpreting words to mean different things. A good example is the concept of “reductionism”, where the meaning intended by those defending the concept usually differs markedly from that critiqued by those who oppose it.

A similar situation arises with the terms “top down” versus “bottom up” causation, where neither concept is well defined and thus, I will argue, both terms are unhelpful. (For examples of papers using these terms, see the 2012 article “Top-down causation and emergence: some comments on mechanisms”, by George Ellis, and the 2021 article “Making sense of top-down causation: Universality and functional equivalence in physics and biology”, by Sara Green and Robert Batterman.)

The term “bottom-up” causation tends to be used when the low-level properties of particles are salient in explaining why something occurred, while the term “top-down” causation is used when the more-salient aspect of a system is the complex, large-scale pattern. But there is no clear distinction between the two, and attempts to propose one usually produce straw-man accounts that no-one holds to. […]

Human brains have to be deterministic (though indeterminism would not give us free will anyhow)

Are human brains deterministic? That is, are the decisions that our brains make the product of the prior state of the system (where that includes the brain itself and the sensory input into the brain), or does quantum indeterminacy lead to some level of uncaused randomness in our behaviour? I’ll argue here that our brains must be largely deterministic; I was prompted to write this by being told that that view is clearly wrong.

First, I’ll presume that quantum mechanics is indeed indeterministic (thus ignoring hidden-variable and Everettian versions). But the fact that the underlying physics is indeterministic does not mean that devices built out of quantum-mechanical stuff must also be indeterministic. One can obtain a deterministic device simply by averaging over a sufficient number of low-level particle events. Indeed, that’s exactly what we do when we design computer chips. We build them to be deterministic because we want them to do what we program them to do. In principle, quantum fluctuations in a computer chip could affect its output behaviour, but in practice a minimum of ~50 electrons are involved in each chip-junction “event”, which is sufficient to average over probabilistic behaviour such that the likelihood of a quantum fluctuation changing the output is too small to be an issue, and thus the chip is effectively deterministic. Again, we build them like that because we want to control their behaviour. The same holds for all human-built technology. […]
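A toy calculation shows how quickly the averaging wins. Suppose (the numbers here are invented for illustration, not real device physics) that each of N carriers independently does the “right” thing with probability 0.9, and that the output is wrong only if half or fewer of them do:

```python
from math import comb

def p_wrong(N, p=0.9):
    """Probability that half or fewer of N independent carriers behave,
    i.e. the binomial tail P(X <= N//2) for X ~ Binomial(N, p)."""
    return sum(comb(N, k) * p**k * (1 - p) ** (N - k) for k in range(N // 2 + 1))

for N in (10, 50, 200):
    print(N, p_wrong(N))
# The error probability falls roughly exponentially with N, so a device
# averaging over enough events is, for all practical purposes, deterministic.
```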

Confusion about free will, reductionism and emergence

Psychology Today has just published “Finding the Freedom in Free Will”, with the subtitle: “New theoretical work suggests that human agency and physics are compatible”. The author is Bobby Azarian, a science writer with a PhD in neuroscience. The piece is not so much wrong — I actually agree with the main conclusions — but is, perhaps, rather confused. Too often discussion in this area is bedevilled by people meaning different things by the same terms. Here is my attempt to clarify the concepts. Azarian starts:

Some famous (and brilliant) physicists, particularly those clearly in the reductionist camp, have gone out of their way to ensure that the public believes there is no place for free will in a scientific worldview.

He names Sabine Hossenfelder and Brian Greene. The “free will” that such physicists deny is “dualistic soul” free will, the idea that a decision is made by something other than the computational playing out of the material processes in the brain. And they are right, there is no place for that sort of “free will” in a scientific worldview. […]

Here’s GJ 367b, an iron planet smaller and denser than Earth

This is an article I wrote for The Conversation about a new exoplanet, for which I was a co-author on the discovery paper. One reason for reproducing it here is that I can reverse any edit that I didn’t like!

As our Solar System formed, 4.6 billion years ago, small grains of dust and ice swirled around, left over from the formation of our Sun. Through time they collided and stuck to each other. As they grew in size, gravity helped them clump together. One such rock grew into the Earth on which we live. We now think that most of the stars in the night sky are also orbited by their own rocky planets. And teams of astronomers worldwide are trying to find them.

The latest discovery, given the catalogue designation GJ 367b, has just been announced in the journal Science by a team led by Dr Kristine Lam of the Institute of Planetary Research at the German Aerospace Center.

The first signs of it were seen in data from NASA’s Transiting Exoplanet Survey Satellite (TESS). Among the millions of stars being monitored by TESS, one showed a tiny but recurrent dip in its brightness. This is the tell-tale signature of a planet passing in front of its star every orbit (called a “transit”), blocking some of the light. The dip is only 0.03 percent deep, so shallow that it is near the limit of detection. That means that the planet must be small, comparable to Earth. […]

Science does not rest on metaphysical assumptions

It’s a commonly made claim: science depends on making metaphysical assumptions. Here the claim is being made by Kevin Mitchell, a neuroscientist and author of the book Innate: How the Wiring of Our Brains Shapes Who We Are (which I recommend).

His Twitter thread was in response to an article by Richard Dawkins in The Spectator.


Dawkins’s writing style does seem to divide opinion, though personally I liked the piece and consider Dawkins to be more astute on the nature of science than he is given credit for. Mitchell’s central criticism is that Dawkins fails to recognise that science must rest on metaphysics: […]