Archive for the ‘Evolution’ Category

All the senses we do not have

December 12, 2025

This started as an Appendix to an essay I am writing. However, it has grown to stand as a post in its own right. It will now be a citation rather than an Appendix in the essay, which I hope to complete soon. “Gods are a matter of epistemology rather than theology”.

Cognition, including human cognition, emerges from the interactions between a brain, the senses it has access to and the body they are all housed in. A cognition’s view of the world is as much enabled by its available senses as it is blinkered by those same senses. The senses available to any species are unique to that species’ physiology and to the brain which interprets the signals generated. The signals from a spider’s eyes or from a dog’s nose are meaningless to, and cannot be interpreted by, a human brain. Furthermore, even within a species each individual cognition has unique features. The experiences of one cognition may be similar to those of another individual of the same species but cannot be truly shared. We have no examples of telepathy in any species. My qualia of experiencing red or pain cannot be shared by any other human – though they may be similar to the experiences of others. However, a spider’s qualia of experiencing the same red with its eight eyes is something else again.


Introduction

Evolution has no aims, plans, or intended outcomes. It is simply the cumulative result of differential survival and reproduction. Traits persist when organisms carrying them leave more descendants than those without them. Sometimes that happens because a trait spares its bearer from an early death; sometimes it happens because the trait leads to more mating opportunities, or because it helps relatives survive, or simply because there is no better alternative available in the genetic lottery.

The popular idea that evolution “selects” for superior or well-designed features is mostly rhetoric. Natural selection does not favour excellence; it favours whatever works well enough under the conditions at hand. What results in any organism, including humans, is not an optimal design but a set of compromises shaped by history, constraint, and chance. When people speak of evolutionary perfection or elegant fit, they are mistaking local adequacy for intentional design. These traits succeeded because, in a given environment, they did not lose in the competition to leave offspring.

The senses that living organisms possess are no different. Each sensory system that exists today is not the best possible way to perceive the world, but merely one that proved sufficient, in a particular lineage and habitat, to avoid being outcompeted. Evolution leaves us only what has survived, with those traits that were good enough for the conditions of the moment. It contains no foresight, no preparation for what comes next, and any sense of direction we read into it is something we impose after the fact.


Senses Animals Have That Humans Do Not

While humans rely primarily on the five traditional senses (sight, hearing, smell, taste, and touch), plus others like balance (equilibrioception), pain (nociception), and body awareness (proprioception), other living things on earth have evolved many “extra” senses that we do not possess.

  • Magnetoception (Magnetic Field Sense): The ability to detect the Earth’s magnetic field and use it for orientation and navigation. This is found in a wide variety of animals, including migratory birds, sea turtles, sharks, and even honey bees. They use this as an internal compass for long-distance travel.
  • Electroreception (Electric Field Sense): The capacity to sense weak electrical fields generated by other living creatures’ muscle contractions and heartbeats. Sharks and rays use specialized organs called the ampullae of Lorenzini for hunting in murky water, and the platypus uses electroreception in its bill.
  • Infrared (IR) Sensing/Vision (Thermoreception): The ability to sense heat radiation, allowing an animal to “see” the body heat of warm-blooded prey, even in complete darkness. Pit vipers (like rattlesnakes) and pythons have specialized pit organs that detect infrared radiation.
  • Echolocation: A biological sonar system used by bats, dolphins, and toothed whales to navigate and hunt. They emit high-frequency sound pulses and listen to the echoes to create a detailed mental map of their environment.
  • Ultraviolet (UV) Vision: The ability to see light in the ultraviolet spectrum, which is invisible to most humans. Many insects (like bees), birds, and fish use UV vision for finding nectar, recognizing mates, or spotting prey.
  • Polarized Light Detection: The ability to perceive the polarization patterns of light. Many insects use it to navigate by the sky, and mantis shrimp (which have the most complex eyes known, seeing forms of polarized light we cannot comprehend) use it for both navigation and communication.
  • Seismic/Vibrational Sensitivity: The ability to detect subtle vibrations traveling through the ground or water over great distances. Elephants use their feet to sense ground tremors, and many snakes and insects use this to detect predators or prey.
  • Ultrasonic and Infrasonic Hearing: Many animals can hear frequencies far outside the human range of 20 Hz to 20,000 Hz. Bats and moths use ultrasound (above 20,000 Hz), while elephants and some whales communicate using infrasound (below 20 Hz).

Senses: Could there be more?

Our current understanding of sensory biology is itself limited by our own human perception. We tend to define a sense in terms of some physical parameter that is converted into a signal, which can then be interpreted by a specialised brain that has evolved together with the sensory organs. If there is some parameter or subtle information in our surroundings that no living thing known to us has evolved to detect, or one that is so subtle and complex that it does not clearly map to a known physical stimulus, we would not even recognize it as a “sense” at all.

  • Subtle Chemical Gradients: While we have smell, some organisms (like bacteria or fungi) may sense complex, long-range chemical fields in ways that defy our simple notions of “smell” or “taste.”
  • Quantum Senses: Some research suggests that the magnetic sense in birds may rely on quantum entanglement within specific proteins. If true, this hints at perception mechanisms on a quantum scale that are difficult for us to even conceptualize fully.
  • Predictive or Internal Senses: Plants, which react to light, gravity, touch, and chemical signals, display complex “behavior” without a nervous system. While we classify these as existing senses, their internal “awareness” of time, nutrient deficiency, or potential nearby threats might constitute forms of interoception or time-perception that function in a fundamentally different way than any human feeling.

Our “awareness” of a sense is often based on the technology we invent to imitate it (like a magnetic compass for magnetoception). It is highly likely that life on Earth has evolved to be able to detect some environmental information in ways that remain outside the scope of our imagination or our measurement tools. We can speculate on senses that could exist in principle but which have no value on earth and therefore have never evolved. Let us take a “sense” to be a structured mapping from external regularities into neural states. Many regularities exist which life-forms on Earth have apparently had no motive or incentive to detect or track.

  • Neutrino detection. Neutrinos can pass through a light-year of lead without stopping, and biological tissue could never detect them reliably. Could such detection be of value to some alien cognition? What would it change in a world view?
  • Sensing gravitational gradients at fine spatial scales. Gravity is too weak at the biological scale; a living creature would need to be built of very dense matter to reliably distinguish micro-variations in gravitational fields. And we can see no value in this for any conceivable form of life.
  • Hyperspectral gamma-ray “vision”. Gamma rays obliterate earthly biological tissue. A system to detect them without dying would require materials and chemistry alien to Earth. The energy levels are simply incompatible with organic molecules.
  • Direct dark-matter detection. Dark matter barely interacts with baryonic matter, and evolution cannot select traits for a signal that never reaches biology. But could there be an alien biology and an alien cognition which make use of such detection? Who knows?
  • Time-structure sensing at quantum-coherence timescales. A species that can detect changes occurring over femtoseconds or attoseconds is conceptually possible, but organic molecules are far too slow and thermally noisy. Evolution selects for what biochemistry can sustain, but we cannot know what we cannot know.
  • Sensing vacuum fluctuations (zero-point energy). Here we are almost entering nonsense territory, but then my nonsense may be basic knowledge to an unimaginable alien.
  • Direct perception of spacetime curvature (not gravity but curvature gradients). Living tissue cannot detect curvature directly. Only masses and accelerations reveal it.

Our reality is that as our knowledge grows, so does the perimeter of the unknown. We can never know all the senses we do not have.


Language frees humans from the captivity of now

September 13, 2024

Introduction:

The capability for language (physical and mental) is physiological and lies in many of our genes acting together. These genes evolved individually for other reasons and came together, one assumes fortuitously rather than by design, to make language possible. Nevertheless, our hominin ancestors only discovered they had this ability when, as need prescribed, they tried to communicate. The capability was discovered, but languages themselves were invented by humans. As with most inventions, language invention was driven by necessity; in this case the necessity was social, in the shape of the desire and the need to cooperate. Humans invented languages to answer the need to communicate. There is no doubt that language helps thinking, but it is equally obvious that a man alone on a deserted island would not have invented language just for the sake of his solitary thinking. The driver for the invention was the need to communicate, not the need to think.

So my contention is that the social desire/need to cooperate is what uniquely distinguished/distinguishes the human species. The discovery of language ability and the invention of languages was an evolutionary step change which lifted this distinction to an unprecedented level. But it did not stop there. An immensely powerful feedback loop was established when language aided thought which, in turn, enriched language. A virtuous spiral was born which has now been in place for some 200,000 years and continues. One apparent evolutionary weakness is that the language-thinking advances can only be passed on culturally and cannot be passed on (as far as we can tell) to new generations through our genes. So there is a risk of cultural discontinuity (as with the abrupt end of a civilization, for example). I suspect that many such cultural discontinuities have occurred and manifested as Dark Ages which followed the end of some advanced civilizations, as happened, for example, after the heights of the Egyptian and Roman civilizations.

When were languages invented?

While there is no direct fossil evidence of language, the indirect evidence suggests that our ancestors from 200,000 years ago (most likely homo sapiens but possibly even Neanderthals) had language.

  • Brain Development: The human brain, particularly the areas associated with language, had undergone significant development by the time modern humans (homo sapiens) appeared some 200,000 years ago. This suggests that the species had the cognitive capacity for language at that time. It is not impossible that Neanderthals and perhaps even some later descendant of homo erectus also had the cognitive ability.
  • Social Complexity: Archaeological evidence indicates that early humans lived in complex social groups, which would have required effective cooperation and communication. The need and practice of social cooperation reached an unprecedented level with homo sapiens and possibly also with Neanderthals. It is most likely that social cooperation began with group hunting and group defense.
  • Symbolic Behavior: By 200,000 years ago humans engaged in tool making and some symbolism must have been required. Complex symbols in art are present, I estimate, by at least 70,000 years ago. The ability to construct abstract ideas and the need to communicate these also originate here. Again it seems that the capacity to use symbols was available to Neanderthals and sapiens.
  • Genetic Evidence: Recent genetic studies have identified genes associated with language development that are present in modern humans and likely existed in our ancestors 200,000 years ago.

It seems that the ability to have language was present earlier than 200,000 years ago. Going from the discovery of the ability to the invention of a functional language was a major breakthrough and would have taken some time. Primitive spoken languages probably began around 200,000 years ago. There are other major steps to get from early sounds and gestures to sophisticated spoken languages, and then to the invention of writing (possibly via art). The roots of written language, via art, probably date back to the time of the earliest known cave paintings, around 50,000 – 60,000 years ago.

Language is a tool for communication – not a lens

There is a philosophical argument (patently false but it keeps academics occupied) that language is a lens through which to view the world and that language determines reality. The Sapir-Whorf hypothesis (first proposed by Sapir in 1929) suggests that the structure of a language influences the way its speakers perceive and think about the world and that language is the lens through which the world is seen.

It is an academic hypothesis, but it is more wishful thinking than empirical observation. I do not need language to observe the world, but I do need language to describe what I observe. I need to describe the world because I wish to communicate. The idea of language primarily as a lens to view the world is fundamentally flawed. I find the Sapir-Whorf hypothesis unconvincing because the invention of a language – any language – must start with the need to communicate some aspect of reality. It is very plausible and even likely that this would have been to communicate something important for survival (e.g. Danger! Lion! Run! …). Communication to cooperate was the human trademark. Whether for hunting or defense or building a settlement, the key was social cooperation enabled by communication.

Generally philosophers have – in their own winding ways – always returned to the idea of language being a communication tool rather than a lens to view reality. Chomsky’s theory of universal grammar suggests that humans possess an innate language faculty that allows us to acquire language with relative ease. The capability for language surely lies in our genes, but I am not so sure that grammar is also explicitly in our genes. It could be so, since the software in our brains for what we consider logic and reason (and hence grammar) probably depends on our genes. Nevertheless, his theory implies that language is a tool we use to represent the world, that language and reality are interconnected, but that language does not determine reality. Wittgenstein flip-flopped (which professional philosophers are allowed to do). In his early work, he argued that language was a picture of the world and that the meaning of a sentence was determined by its relationship to the world. Not quite Sapir-Whorf, but close. However, in his later work, Wittgenstein shifted his focus to the role of language in human life and activity. He found that the meaning of words is not fixed but is determined by their use in specific contexts. Thus language was actually more a tool for various tasks than he had previously thought.

The Sapir-Whorf hypothesis derives from the philosophy of cultural relativism, where empirical observations of reality are taken to be subordinate to our cultural understanding of the world. It is speculation rather than observation, and more a political position than a true hypothesis which can be falsified. Certainly language is intricately entwined with thinking. But we can think without language. As a species we thought long before our line separated from that of the chimpanzees. However, when language is applied, thinking is transformed and thought is elevated from roiling, diffuse, emotional clouds to a crisper, clearer delineation of ideas. This clarity of thought feeds back into new developments of language able to describe the new ideas and the abstractions that emerge. New language leads to even more nuanced and complex thoughts, and so on ad infinitum. This loop from language to thinking and back to language is probably the most powerful cultural feedback loop we have and is, I believe, a primary differentiator of humans from other animals.

It is language which provides a kind of digitization of the analogues of emotion. I look at the leaves of a tree in my garden and I can see more shades of green than I have words to describe. For my own contemplation of the tree I do not need to describe all the various shades of green or define any new words. Emotions and feelings and existing language will do. But if I wished to communicate some particular shade to someone else then I would use language to do so – either by inventing a new word or using other words as adjectives. It is my ability to discern many unnamed shades of green in the real world which demands the invention of new words. Reality is observed by my senses not through my language. I do not define a new colour and then go and find it in the real world. In any event it is reality which is reflected in language, not language which determines reality. The history of the evolution of words suggests that we invent new words as we observe or find new ideas – real or not – that we wish to communicate. The capability for language is physiology but languages are invented – always – for the purpose of communication.

We observe the world through our senses, which both enable and restrict our observations. We use language to communicate what we observe. It was invented as a brush to paint a picture of what we observed and wished to communicate. But language, we find, is a much more versatile tool than the one we set out to invent. It is not restricted to describing only the real world that we observe. As with any tool, we find language has many more uses than it was first designed for.

Language frees humans from the captivity of now

From describing immediate and surrounding reality in the now, language progresses to describing actions. In the development of any language, first comes the naming of real things (nouns) and then come the actions we need to communicate (verbs). First things (you, me, the tree, the dog, the lion, the sky, …) and then what to do (run, hide, come, go, fight, …). It is only then that we discover that language can even describe abstractions. Our minds are full of abstractions because of our ability to think, not because we have language. Animals may remember the past and even take actions regarding the future, but they do not, as far as we can tell, refer to past or future events. Language is what allows humans to address unreal events in the past and in the future. It is a short step from abstractions and unreal things to thoughts and imaginary things. That in turn led to the emergence of imagination and fantasy, which provided the creative impetus for human development.

While language can influence our thinking, it does not determine it. Language allows us to explore concepts and experiences that may not be directly observable or verifiable, such as the past, the future, and fictional worlds. It is language which allows us to describe the abstract, the unreal and the imaginary. The past and the future and all other things which do not exist cannot be observed but can be described. Thus language allows and enables all that human imagination and fantasy can conjure up. In reality, the real world itself is merely a tiny subset of what language can describe. Language’s ability to describe non-existent things, such as the past, the future, and fiction, is an unlooked-for facet of language that emerged from the initial need for communication. The ability of language to deal with the unreal and the imagined was a discovery, not an invention.

It is language which makes it possible to lie. (The evolution of lying can also be traced back to the survival need for deception, and thence to the use of language to create and communicate falsehoods.) Language is a tool invented primarily for communication rather than for thinking. But the feedback loop between language and thinking has allowed the species to take off. Language allows us to paint and communicate not only our view of the real world but even our thoughts. It is language that has enabled and empowered social groups and cooperation among humans.

The power of language lies in its ability to transcend the boundaries of direct experience and explore the realm of imagination. Language is not confined to reality. It could be argued that imagination, fantasy and creativity are enabled by language. Rather than language being a lens through which to view reality, the real need to communicate is what enabled language and freed humankind from the captivity of reality and the shackles of now.



Barbarous times

January 24, 2024

Back in 2015 I wrote a post about Execution by Nitrogen, a method which now seems to have been adopted in Alabama.

Execution by Nitrogen – ktwop, 18 March 2015

In power plants nitrogen is often used for pressurising, purging, cooling or protection. I first came across a death caused by nitrogen in the 1970s when a maintenance worker entered a pulverised coal storage silo which had been blanketed with nitrogen for explosion protection during a shut-down. It was not a pressurised silo and therefore not seen as being a high risk area. By accident, he had entered the silo without a companion being present and without his breathing equipment. He was only found hours later inside the silo and it became clear that his asphyxiation had happened so fast that he had had no time to struggle, let alone call for any assistance. Of course the death was not so much caused by nitrogen as by the lack of oxygen and the resulting hypoxia. Nitrogen asphyxiation is not unknown as an industrial cause of death. Through the 1980s and 1990s, I came across another 4 accidental deaths at power plants where workers had inadvertently entered a nitrogen atmosphere. Just in the US, there were 80 industrial deaths and 50 injuries due to nitrogen asphyxiation between 1992 and 2002.

…..

In this modern, civilised, 21st century, firing squads, beheadings, stoning, being pushed off a roof-top, being poisoned (gas, lethal injection), hanging, electrocution and asphyxiation are all in use or proposed as methods of execution. Not so very different from the barbarous times of the Middle Ages.

Barbarous times indeed, but not just barbarous states. Don’t fool yourself into thinking that human behaviour is any “better” now than it has ever been since we became “human”. The range of possible human behaviour is set by our genes, and the worst possible behaviour has not changed in over 10,000 years.

Since humans are genetically capable of being barbarous, then, in the appropriate circumstances, they are brutal and barbarous. Single individuals can be brutal and so can all members of conflict-based organisations. Members of Hamas, or ISIS, or all para-militaries, and all military personnel from all countries in the world – in the appropriate circumstances – can, and do, exhibit the most barbarous possible behaviour. “Being civilised” does not change the genetic nature of humans.

Can the Holocaust happen again? Of course it can.


Testing ChatGPT / 2: Large language models conflate conformity with truth

May 3, 2023
  1. Large language models (such as GPT-3) generate text based on probability (a minimal sketch of such sampling follows this list).
  2. They can have no internal concept of truth.
  3. The probabilities are reflections of conformity and are based on weights of the existing usage patterns contained within the model’s database.
  4. Training gives additional weight, via human feedback, to the positivity or negativity of what has been generated.
  5. Generated text is given high value for conformity with the model’s own database and with the training (feedback) received.
  6. High conformance is conflated with truth.
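To make point 1 concrete, here is a minimal, illustrative sketch of probabilistic next-token sampling. The candidate tokens and their probabilities are invented for illustration and are not taken from any real model; real systems sample over tens of thousands of tokens with learned weights.

```python
# Minimal sketch (illustrative only) of probabilistic next-token generation.
# The candidate tokens and probabilities below are invented numbers,
# not the output of any real language model.
import random

# Hypothetical distribution for the next token after the prompt "The sky is"
next_token_probs = {
    "blue": 0.60,      # the most common continuation in the "database"
    "clear": 0.20,
    "falling": 0.15,
    "green": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Repeated sampling mostly returns the most frequent (most conforming)
# continuation -- which is not the same thing as the true one.
print([sample_next_token(next_token_probs) for _ in range(5)])
```

Nothing in this loop checks the output against the world; it only checks it against the weights. That is the sense in which conformity, not truth, drives the generated text.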

Of course, this is a common feature in human interactions as well. Many people having a particular opinion gives that opinion a higher perceived truth value even if that opinion is utterly wrong or false. The deification of “the many” has warped our notions of what is truth.

But the fuzziness around truth could be a particular issue with large language models and for AI. Truth is not an inherent, internal concept of such models. Truth becomes relative (to the views of the trainers) and conformity is conflated with truth.


ktwop: Do large language models generate probabilistic text?
ChatGPT: That’s correct! Large language models generate text using probability distributions. These models are trained on massive amounts of text data and learn to predict the likelihood of certain words or phrases based on the context of the surrounding text.
When generating text, the model samples from these probability distributions to determine which word or phrase to use next. Because the model is probabilistic, it can generate different outputs for the same input, making the generated text more diverse and interesting.
However, it’s important to note that while large language models like GPT-3 are very good at generating coherent text, they are not perfect and can still produce errors or nonsensical output. It’s also important to be aware of ethical concerns surrounding the use of such models for potentially harmful purposes such as generating fake news or propaganda.
ktwop: How can the output from a language model be given a truth value?
ChatGPT: The output from a language model can be given a truth value by comparing it to a reference or ground truth text. The reference text can be a sentence or a paragraph that is known to be true or false, depending on the task at hand. For example, if the task is to generate a factually correct sentence, the truth value of the output can be determined by comparing it to a reliable source of information, such as a verified news article or a trustworthy database. If the output matches the reference text, it can be considered true, and if it does not, it can be considered false.
Similarly, for tasks such as sentiment analysis or topic classification, the truth value can be determined by comparing the output to a set of labels that are already defined as true or false. For example, if the task is to classify a text as positive or negative, the truth value of the output can be determined by comparing it to a labeled dataset where each text is already classified as positive or negative.
In summary, the truth value of the output from a language model can be determined by comparing it to a reference text or a set of labeled data that is already known to be true or false.
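ChatGPT’s answer above describes an external evaluation loop: output is judged true or false only by comparison with a labelled reference. A minimal sketch of that idea follows; the reference labels and the classify() stub are hypothetical stand-ins for a real model and dataset, not anyone’s actual evaluation pipeline.

```python
# Illustrative only: giving model output a "truth value" by comparing it with
# an external labelled reference, as described above. The reference data and
# the classify() stub are hypothetical stand-ins.

reference_labels = {
    "I loved this film": "positive",
    "The acting was dreadful": "negative",
}

def classify(text: str) -> str:
    """Stand-in for a language model's sentiment classification."""
    return "positive" if "loved" in text else "negative"

# The "truth value" exists only relative to the reference labels;
# the model itself has no internal notion of truth.
for text, expected in reference_labels.items():
    predicted = classify(text)
    verdict = "true" if predicted == expected else "false"
    print(f"{text!r} -> {predicted} ({verdict})")
```

The point stands either way: truth is assigned from outside the model, by whoever supplies the reference, which is exactly where the relativity to the trainers comes in.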
 

 

Testing ChatGPT / 1: Activist versus independent judiciary

April 29, 2023

I am still trying to figure out how chatbots can be of help to me – and I have no doubt that they will be of assistance. No doubt they will have their dangers, but I am still convinced that the value-system any chatbot or AI exhibits will depend on the individuals who defined its database and who supervised and arranged for its learning. It is inevitable that the biases of the creators will be mirrored in the result.

Of course a chatbot/AI is about language and about predicting the next word (token) based on the probability of previous usage. It is not about truth or logic. It is not about the correctness or goodness or “truth” of the generated text. The language usage which predominates in the database will skew the probabilities it sees for what the next word/token should be. Of course there will be bias, depending on what is in the database and what is in the minds of those providing feedback and learning.

I thought it would be interesting to test the inbuilt biases and values that the current open, free, version of ChatGPT has. The only way I can think of is by posing specific questions or asking it to argue for particular positions. I have a suspicion that the current version is a little bit sanctimonious and more than a little bit woke. I plan to go through a number of cases which I have worded to try and expose bias. Though, of course, they will contain all my own biases. Note that in this case I am trying to force ChatGPT to take a particular position.


Q: Write an essay of about 500 words on why a judiciary may be activist or independent but not both.

ChatGPT: Judicial activism and judicial independence are two important concepts that govern the role of the judiciary in a democratic society. While both concepts aim to ensure that the judiciary operates in a fair and impartial manner, there is a fundamental tension between them. In this essay, we will explore why a judiciary may be activist or independent but not both.

Judicial activism refers to a situation in which judges take an active role in shaping public policy or interpreting the law in a way that is not necessarily consistent with the original intent of the lawmakers. This approach is often seen as necessary when the legislature or executive fails to act on important issues or when their actions are deemed unconstitutional or unfair. Judicial activism is often associated with judges who are willing to make bold and controversial decisions, even if they are not entirely supported by legal precedent.

On the other hand, judicial independence refers to the ability of judges to make decisions based solely on their interpretation of the law, free from political or other external pressures. This approach is seen as necessary to ensure that the judiciary remains an impartial and fair arbiter of disputes. Judges who value judicial independence are often cautious in their approach to decision-making and are reluctant to make decisions that are seen as overly political or controversial.

The tension between judicial activism and judicial independence arises because the two concepts are fundamentally at odds with each other. Judicial activism requires judges to take an active role in shaping public policy, which can be seen as infringing on the powers of the legislature and executive. This can be seen as a threat to the principle of separation of powers, which is a fundamental tenet of democracy. Judicial independence, on the other hand, requires judges to remain impartial and avoid taking an active role in shaping public policy. This approach can be seen as a threat to the principle of democracy, as it may prevent judges from addressing important issues that are not being adequately addressed by the legislature or executive.

In practice, it is difficult for a judiciary to be both activist and independent. Judges who value judicial independence are often cautious in their approach to decision-making and are reluctant to make decisions that are seen as overly political or controversial. This can make it difficult for them to address important issues that require bold and controversial decisions. On the other hand, judges who value judicial activism are often willing to make bold and controversial decisions, even if they are not entirely supported by legal precedent. This can make it difficult for them to remain impartial and avoid taking an active role in shaping public policy.

There are many examples of this tension between judicial activism and judicial independence in the history of the United States. One example is the landmark case of Brown v. Board of Education, in which the Supreme Court ruled that segregation in public schools was unconstitutional. This decision was a clear example of judicial activism, as it overturned decades of legal precedent and required the executive and legislative branches to take action to end segregation. However, it was also a clear example of judicial independence, as the Supreme Court was acting independently of political pressures and interpreting the law in a way that was consistent with the principles of democracy.

Another example of this tension is the case of Roe v. Wade, in which the Supreme Court ruled that a woman’s right to choose to have an abortion was protected by the Constitution. This decision was also a clear example of judicial activism, as it required the legislature and executive to take action to protect the right to choose. However, it was also a clear example of judicial independence, as the Supreme Court was interpreting the law in a way that was consistent with the principles of democracy and protecting the rights of individuals against potential infringement by the state.

In conclusion, while judicial activism and judicial independence are both important concepts in the operation of a democratic society, there is a fundamental tension between them. A judiciary may be activist or independent, but not both. Judges who value judicial independence are often ….


A most impressive performance. It was produced in less than 2 minutes and certainly much faster than anything I could have written. It produced 705 words instead of the 500 asked for and ends rather abruptly – perhaps because it had exceeded the required word-count. 


I needed more discrimination and altered the request slightly:

Q: Write an essay on why a judiciary to be independent cannot be activist

ChatGPT: The judiciary is an essential branch of government tasked with interpreting the law and ensuring that justice is served. It is imperative that the judiciary remains independent and free from political influence to uphold the rule of law and protect the fundamental rights of citizens. However, there is often a debate about whether the judiciary can be both independent and activist. In this essay, I will argue that a judiciary that is independent cannot be activist.

Firstly, independence and activism are fundamentally at odds with each other. Judicial activism requires judges to take an active role in shaping public policy, which can infringe on the powers of the legislature and executive. In contrast, judicial independence requires judges to remain impartial and avoid taking an active role in shaping public policy. Thus, judicial activism and judicial independence are inherently contradictory.

Secondly, judicial activism can undermine public trust in the judiciary. When judges take an active role in shaping public policy, they risk being seen as overstepping their boundaries and making decisions that are not based solely on legal precedent. This can lead to accusations of judicial overreach and undermine the public’s confidence in the judiciary as a fair and impartial arbiter of disputes. Judicial independence, on the other hand, ensures that judges remain impartial and base their decisions solely on legal precedent, which can increase public trust in the judiciary.

Thirdly, judicial activism can lead to a politicization of the judiciary. When judges take an active role in shaping public policy, they risk becoming politicized and aligning themselves with particular political agendas. This can lead to accusations of bias and undermine the judiciary’s reputation for impartiality. By contrast, judicial independence ensures that judges remain free from political influence and make decisions solely based on legal precedent, which can prevent the judiciary from becoming politicized.

Lastly, judicial activism can infringe on the separation of powers between the judiciary, legislature, and executive. The separation of powers is a fundamental principle of democracy, which ensures that each branch of government operates independently of the other. When judges take an active role in shaping public policy, they risk infringing on the powers of the legislature and executive, which can undermine the principle of the separation of powers. Judicial independence, on the other hand, ensures that the judiciary operates independently of the other branches of government and upholds the principle of the separation of powers.

In conclusion, while judicial activism can play an essential role in addressing issues that the legislature and executive are unable or unwilling to address, it cannot be reconciled with judicial independence. A judiciary that is independent must remain impartial and avoid taking an active role in shaping public policy. Judicial independence is necessary to uphold the rule of law and protect the fundamental rights of citizens. When judges remain independent, they ensure that justice is served, and the public’s confidence in the judiciary is maintained.


There is an attempt to be even-handed, but there are a few too many caveats. So I perceive a clear bias. ChatGPT would, I think, prefer to defend the position that a judiciary can be both activist and independent.

Which, of course, is the woke point of view but utter nonsense.


On the way to homo superior?

August 21, 2022

A recent after-dinner discussion led to us speculating as to how humans and our world would change in, say, 100, 1,000 or 10,000 years. One approach was to look back 100, 1,000 and 10,000 years and forecast changes in the future to be at the same rate as in the past. But this is easier said than done. Extrapolation along a specified path of change is only a matter of elapsed time but when the direction itself changes, extrapolation does not work. Furthermore, any extrapolation is hampered by the fact that the rate of change is itself changing. However, there are some aspects of human physiology and behaviour which – apparently and to the best of our knowledge – have not changed at all in 10,000 years. And that led the discussion into whether the species homo sapiens sapiens is evolving, or will evolve, into homo sapiens superior, perhaps along the way through homo superior eventually to a homo scientia.

And how long could that take?

The term homo superior was coined in 1935 by Olaf Stapledon in his science fiction novel Odd John, which I read in my teens some fifty years ago.

Odd John: A Story Between Jest and Earnest is a 1935 science fiction novel by the British author Olaf Stapledon. The novel explores the theme of the Übermensch (superman) in the character of John Wainwright, whose supernormal human mentality inevitably leads to conflict with normal human society and to the destruction of the utopian colony founded by John and other superhumans. …  It is also responsible for coining the term “homo superior”

10,000 years is about 500 human generations and is not really long enough for humans to have developed into a new species (though it has recently been observed in finches that just 2 generations – with stringent isolation – can be sufficient to create a new “species”). Defining a species is not so simple, but the practical – and pragmatic – definition of a species is one where individuals (of the appropriate sexes) can interbreed and produce viable offspring. Changes to the species homo sufficient to give breeding incompatibility need significantly longer time scales. It is just a guesstimate, but one reason for putting the start of modern humans at 200,000 – 300,000 years ago is that individuals from that distant past would probably be sufficiently different from modern humans to disallow successful breeding.

We do not know for sure how fast humans are evolving. Views in the scientific community are divided and range from faster than ever before, to slower than ever before, to stopped completely.


One view is that human development has neutralised the forces which have driven evolution. Certainly human development has now produced the capability for, and the practice of, manipulating our immediate surroundings. We create bubbles of habitability around us. We carry the bubbles with us, not only on earth but also to escape the confines of the earth’s surface. We now have the potential to move under the oceans or even to other planets. The vagaries of weather and climate have virtually been eliminated as an evolutionary force. Having diversity is of value only when an organism has to face change. In an unchanging environment, unused diversity merely withers away. In the past it has been the uncontrollable changes to our surrounding environment which have given rise to “natural selection” and to our evolution. In that sense, human development de-emphasises the value of genetic diversity, since we maintain an unchanging environment within our habitable bubbles. Outlying genetic traits such as the ability to withstand cold or extreme heat or low oxygen pressure have lost relevance since they are not needed. There can be no “selection” for such traits when they provide no survival or reproductive advantage.

Similarly medical advances have led to the neutralisation of “de-selection” forces. Genetic propensities for disease or weaknesses are no longer “naturally de-selected” since medical advances allow and enable such affected individuals to survive, reproduce and sustain these genetic weaknesses. Physiological weaknesses which would once have been weeded out by de-selection are now no longer “weaknesses” and are preserved.

Geographic isolation of whole groups has almost disappeared. While propagation remains predominantly between individuals from nearby geographical locations, the occurrence of offspring from parents of distant origins is sharply increasing.

So what actually is being selected for? The short answer is that we do not know.

The three main drivers required for evolution to occur – diversity, de-selection of the non-viable and geographic isolation – have all been neutralised to varying degrees. It may not be a high probability, but it is not inconceivable that the species will stagnate and individuals will regress to some mean. We could just become more and more alike. But it is much more likely that the human evolutionary drivers have just become more subtle and will only show up over longer periods. Our food habits are changing (generally softer foods) and we don’t need the same set of teeth and the same jaws that our ancestors did. Our need for long legs to hunt down prey is an anachronism. Our body size is increasing (partly nutrition, partly genetic) and this may check – and even reverse – the trend to smaller brains that has taken place over the last 500,000 years. Independent of brain size, the effectiveness of brain processes may be slowly increasing. (A smaller wrinkly brain can be much more effective than a large smooth one). The evolution of tool-making hands may be subtly changing to suit other things (bigger, more dextrous thumbs perhaps?). The disparity in the design life of our various organs was of no consequence before but is now sharply in focus as we live ever longer. There is an element of artificial selection due to medical developments which was of no significance before, but is now becoming increasingly important. We are not far from the situation where the results of medical interventions in one generation could be passed on to the next. Resistance to particular diseases, for example, could potentially be induced in one generation and be passed on. Genetic engineering, if practised, could well pass on some “desired” traits to the next generation, but will also pass on many hidden, unknown traits.

Our own experience usually covers 5 generations in our c. 100-year lifetimes (grandparents to grandchildren). In evolutionary terms this is almost invisible but is certainly not insignificant. But we do not know if homo superior is on the way. There is little doubt that there will be – some 300,000 years in the future – a homo future species which will not be able to interbreed with us. But there is as good a chance that homo future turns out to be a homo inferior, rather than a homo superior.



What evolution does not do

May 17, 2022

Of course evolution does not actually do anything.

Evolution is not causative. It is a label for the effects we see of other factors which affect survival and reproduction. I dislike the description of evolution as being the result of natural selection. Strictly, it is never about selection but always the result of deselection of those not able to survive. The primary “force” which produces evolution is the dying of the unfit. It is not the survival of the fittest but the survival of the good enough. When it is said that some creature is perfectly suited to its environment, what is actually meant is that all others not suited to that environment failed to survive. More than 99% of all species that ever existed are now extinct. We cannot, I think, apply value judgements of “good” or “bad” to the result. It is not correct even to apply the terms “natural” or “unnatural” or “artificial” to the word evolution. The evolution of species in general, or of any species in particular, has always been without direction and without purpose. But it could be that the human species is the first which may be able to introduce an element of purpose and direction to its own future course. Whether this direction can encompass “good” behaviour is still in the realm of fantasy.

It seems to me that the human definition of “good behaviour” has not changed very much in the last 10,000 years. The golden rule (Do unto others as you would have them do unto you) probably became a golden rule whenever it was that our ancestors began cooperating in a serious way and built societies which were larger than the immediate family group. It seems plausible that this value judgement for “good behaviour” begins with the first establishment of clans or tribes. That takes us back at least 50,000 years and maybe even longer. Certainly it goes back to long before the first establishment of permanent settlements and cities (c. 10,000 years ago). But whenever it was that humans developed this value judgement for “good behaviour”, it does not seem to have been much favoured by the forces resulting in evolution.

Clearly some behavioural patterns do impact survival and reproduction and therefore must have some impact on the evolutionary result. Tribes and clans not inclined to cooperate went extinct long ago. Cultures where members did not specialise and cooperate, stagnated and gradually disappeared. The levels of specialisation and cooperation in today’s global society have reached unprecedented levels. If behaviour is to be selected/deselected for then it can only happen to the extent that behaviour is an inherited trait. Moreover, it can only be implemented by the continuous deselection of unwanted behaviour and selection of desired behaviour. That behaviour does have a genetic component is almost certain and that genes are mainly inherited is also certain. Breeding for emotional or behavioural traits is still a very chancy business. Domesticated animals, and even wolves and foxes, have been artificially bred for traits other than the purely physiological. This has involved “deselection” for some emotional traits (aggression for example) or to “select” for others (courage, tolerance of humans, ….). Individuals having desired traits are allowed to breed and those having undesirable traits are not allowed to reproduce. This ensures the passing on of genes. But from genes to behaviour is a very fuzzy step.

“Bad behaviour” is as prevalent today as it was in pre-history. “Bad behaviour” clearly is not deselected by “natural” evolutionary forces. “Good behaviour” is not selected for either. The propensity for violence, aggression and, generally, doing harm to others – albeit by a minority – has not changed much since ancient times. The only possible conclusion is that being “good” or “bad” does not lead to the evolutionary selection – or deselection – of a behavioural trait. What evolution certainly does not do is to choose between “good” and “bad”.

The persistence of bad behaviour through the ages suggests that it may even have some survival value.


“Good conduct” is not an evolutionary survival trait

May 4, 2022

What passes for “good conduct” today is not so very different from what it was at least 5,000 years ago. It is very probable that it has not changed very much for much longer than that. To lie, to rob, to cheat, to harm, to murder and to rebel against established societal authority have all been considered “bad conduct” in human societies since long before recorded history is available. The earliest known codes of laws go back to Babylonian (Hammurabi, c. 1800 BCE) and even to Sumerian times (Urukagina, c. 2400 BCE). Codes of conduct can be inferred for even earlier times, with the beginnings of Dharma in the pre-Hindu Indus-Saraswati Valley, in ancient Egypt and in ancient China.

Code of Hammurabi

Definitions of what constitutes “good conduct” must originate with the earliest societies of hunter-gatherers and must therefore precede the spread of farming, the growth of cities and even the beginnings of semi-permanent settlements at the end of the last ice age (c. 12,000 years ago). It is not unreasonable that the Golden Rule (Do to others as you would have them do to you) emerged as a core definer of good conduct around 40,000 – 50,000 years ago.

50,000 years is not insignificant in evolutionary time. For humankind it represents around 2,500 generations of natural selection. But our conduct has not improved. Evolutionary changes can be observed in humans and they are not small. All the races we identify today have emerged in that time. The changes are continuing but it is not apparent over our short lifetimes as to what the future holds for us. The changes are sufficient that it is not very likely that a human from 50,000 years ago would be able to breed successfully with a human from today.

Wikipedia – Human traits that (have) emerged recently include the ability to free-dive for long periods of time, adaptations for living in high altitudes where oxygen concentrations are low, resistance to contagious diseases (such as malaria), light skin, blue eyes, lactase persistence (or the ability to digest milk after weaning), lower blood pressure and cholesterol levels, retention of the median artery, reduced prevalence of Alzheimer’s disease, lower susceptibility to diabetes, genetic longevity, shrinking brain sizes, and changes in the timing of menarche and menopause.

Humans are the only species which has shown the capability of interfering with the conditions determining natural selection. We started neutralising the effects of environment on us when we built shelters and gained control over fire. We now create our own bubbles in which we live and nullify the impact that climate and weather once had on natural selection. We use technology to minimise the impact of natural disasters on the evolution of our kind. Of course, the greatest impact humans have had on natural selection has come in the last 200 years or so with the great advances of medical knowledge. Being weak – mentally or physically – is no longer a de-selector for survival and reproduction. Natural selection no longer favours the “fittest”. Choice of mates is no longer (entirely) based on physical superiority. We deselect some characteristics before birth (Down’s Syndrome). Whether we admit to it or not, we employ a kind of eugenics by default. We have begun artificial selection (AI) though we are not quite sure what we are selecting for.

But it is not at all obvious that “good conduct” is any more prevalent among humans today than it was 50,000 years ago. We continue to lie, cheat, do harm, murder and flout established authority. As individuals we do so utilising the most advanced technologies available to humankind, always one step ahead of the established authorities. No doubt there is a genetic component to “good conduct”, but natural selection has not found any benefit in promoting it. In today’s age of entitlements, survival and reproduction by transgressors is actually protected. The genetic components of “bad conduct” are given a protected status. As societies we continue to war on each other for quite frivolous reasons with the most wonderful new weapons. In fact weapons production leads many technology advances – as it always has done.

The inescapable conclusion I come to is that “good conduct” is not a survival trait and has no impact whatsoever on the evolution of the species. In fact, “bad conduct” may well be preferred by the selection forces we have now brought into play. What evolution will result in remains to be seen. But it is highly probable that our conduct will not be any better than it is now. There is a chance it could be much worse.


Colonisation and the genocide of the Neanderthals by the even-newer-Africans

November 5, 2021

Colonisation

Colonisation is not anything new. It is a necessary characteristic for any successful species.

Colonisation is the expansion by one community into the physical space being occupied by some other biological community. The invaders may be a whole species or just a particular strain within a species. It is a phenomenon exhibited by every successful biological community from viruses and bacteria and fungi to plants and animals (including humans). A new territory or habitat may be already occupied by other species, or strains of the same species, or unoccupied. The incoming community, if they dominate the new territory, are considered colonists. Newcomers who merely merge into the existing population are immigrants rather than colonists. Any communities already existing in the space and falling under the sway of the newcomers are the colonised. Many attempts at colonisation fail; either because the colonists cannot adapt to the new habitat or because they cannot compete (biologically, culturally, or technologically) with the existing inhabitants.

That living things exist in every conceivable corner of the earth’s land surface and of the oceans is a consequence of colonisation. That living things find it necessary to search for new habitats is driven by the need to grow and to survive changing environments by utilising the inherent genetic diversity available in every species. The greater the diversity available in any species, the greater the possibility that some individuals can adapt quickly to a new habitat. A successful species is one which grows its numbers and expands its habitat. Evolution results when the genetic diversity available allows individuals in a species to cope with changes to its environment. These changes are usually small and quite slow. However, much of evolution has occurred when natural selection has been accelerated as species adapt to the rapid, and sometimes large, changes of environment in newly invaded habitats. Not all succeed, and many would-be colonists have failed to navigate the change. Species die out when they move, voluntarily or involuntarily, to new unsuitable habitats. There are a few species which have stagnated in tiny niche habitats, exhibit unusually little genetic diversity and have lost the wherewithal to change. They have become so specialised to fit their habitat that they are incapable of adapting to any other and have reached evolutionary “dead-ends”. Most such species have gone extinct but a few survive. Panda bears and theridiid spiders are examples. They have become incapable of growth or of colonisation and are probably on their slow path to extinction.

The reality is that any living species which does not colonise is doomed either to stagnation in a niche habitat, or to failure and extinction. The most abject species failure would be an in-situ extinction without evolution of any descendant species. Colonisation favours, and even enables, the emergence of succeeding species. The change of habitat and the changes it brings about are essential to the continuation of life. Spaces are colonised when expanding communities invade and bring, or evolve, more competitive abilities, cultures, or technologies than available to the current inhabitants, if any, of that space. The dinosaurs on land had a long run but could not finally cope with the changes in their environment. But some of them took to the air. Many found new sources of food. Some developed a taste for insects and worms and begat a plethora of new species. (Along the way they even wiped out entire species of some flying insects). But the descendants of those adventurous, little dinosaurs have become the thousands of bird species to be found thriving today. They have colonised the entire globe while their larger, ancient kin perished miserably a long time ago.

However, success is no longer politically correct. Failure is glorified instead. Failure as a victim of another species is even more highly regarded. In the politically correct version of conservation, even successful species of fauna and flora are condemned for being invasive when they colonise new territories. Failing species, unfit for changed environments, or unable to bear the heat of competition from other species, are artificially, and unnaturally, protected in glorified zoos. However I do not hear any ornithologists condemning birds for being invasive species or for colonising the planet.

Human colonisation

In today’s politically correct world, colonisation has become a dirty word. Colonists are considered evil. Statues of colonists are seen as symbols of oppression and depravity. The descendants of those colonised in the past claim a self-righteous victimhood in the present. Needless to say, a place in the kingdom of heaven is reserved for the colonised and their descendants. Colonists and their descendants are subject to eternal damnation. Yet, the history of mankind is one of successful communities expanding and colonising. The peopling of the world has been enabled by colonisation. The colonised usually became colonised because their culture was stagnating and their technology could not compete with incoming ones.

It is usually only the successful European colonisations between about 1400 and 1900 CE which have become politically incorrect. But the many attempts that failed are lost from history and get little sympathy. What is conveniently forgotten, though, is that the human populations in Australia and Africa and the Americas, at that time, were ripe for colonisation by any invading community with superior technology. For these populations, their culture or their technology, or both, were stagnating and they were not growing. During that 500-year period, the native inhabitants of the Americas, Africa, and Australia were themselves neither expanding nor technologically capable of colonising new territories. If they had been more advanced, the newcomers from Europe would have been immigrants rather than colonists. If it had not been the Europeans, it would have been someone else. If not the Europeans, the Chinese or Indians would have colonised Australia. If Europe had not had the capability to expand, it would itself probably have succumbed to expansion by the Ottomans. The Americas would then have been colonised from China or from Japan or some other region which was growing. Africa would probably have been colonised by the Ottomans (Turks and Arabs) or the Persians in the north, and by Indians and Chinese in the south. The key fact is that the inhabitants of Australia and the Americas and Africa, at that time, were not advancing (in numbers, culture, or technology) sufficiently to drive their own expansions or growth.

Go back over the preceding two millennia (500 BCE – 1,500 CE) and we find the South Indian colonisation of large tracts of SE Asia. The Greek and Persian expansions gave way to the Romans. The Mongols, the Normans, the Turks, and the Vikings were busy moving into whatever new territories they could access. The Norse attempts to colonise North America all failed. The Greeks used colonisation for expansion but also as a way of reducing social stress at home: dissident populations were expelled to go and found colonies and be a nuisance elsewhere. Han Chinese colonies were used as the spearhead and as the means of expanding empire. They were often guinea-pigs for testing the viability of new territories, and many failed. Where colonies succeeded, they nearly always had superior technology or weapons or organisation with which to dominate the local inhabitants. They usually used the indigenous population for labour and often as enslaved labour. These early colonisations were probably more brutal and savage than those which came later, but they go back far enough in time to generally escape sanctimonious censure today.

Delving further back into antiquity reveals that colonisation was a major means of expansion even then. On land, those who mastered horses had a crucial advantage – at least for a while. Voyages across oceans were limited by the capability for navigating open seas. But sailing boats were quite well developed even if navigation was not, and coast-hugging allowed many communities to colonise by sea. These expansions by the Phoenicians, Carthaginians, Greeks, Romans, Egyptians, Persians, Indians, and Han Chinese are far enough in the past to escape moral judgements in the present. If we go back even further, to pre-historical times (5,000 – 10,000 years ago), all of hunter-gatherer Europe was colonised first by farmers spreading from Anatolia and later by the rampaging Yamnaya from the steppes. They brought agriculture and cities and civilisation in their wake, and so are now forgiven the conquest and pillage and enslavement they surely utilised along the way.

 But let us not forget that human colonisation was started long before antiquity by the Africans.

Colonisation from Africa

When it comes to the origins of human colonisation we need to go back to before we were ever human. I take humans to mean Anatomically Modern Humans – AMH – who appeared around 300,000 years ago. Around 2 million years ago, long before AMH had evolved, Homo erectus had already spread from Africa and colonised much of Europe and Asia, though it never reached Australia. These were the oldest-Africans, but they did not then have control of fire. These oldest-Africans were more settlers than colonists.

Some time after we had evolved from apes, and perhaps around 1,000,000 – 800,000 years ago, a common ancestor of AMH, Neanderthals, Denisovans, and a couple of unknown hominin species (call them Homo x and Homo y), emigrated from Africa and colonised most of Europe, Central Asia, and South-East Asia. These were the old-Africans. The control of fire was achieved sometime during the Homo erectus period and was certainly available to the colonising old-Africans. Most likely, whole populations were driven out of Africa not so much by a shortage of space as by population growth, by changes of climate, and by shortages of food. No doubt some were just seen as trouble-makers at home and encouraged to leave. These old-Africans were emigrants and probably the first ever hominin colonists. They were not strictly immigrants into existing societies, since the territories they moved into had no established hominin societies to merge into. But they probably did displace the oldest-Africans they found. There were probably many waves of old-Africans, and later waves of emigrants may well have been immigrants into existing communities. By 700,000 years ago old-Africans covered all of Africa, Europe and Asia. Many of the areas they moved into may have had indigenous near-hominin populations. However, their indigenous cultures and technologies were not sufficiently competitive to prevent the wave of old-African colonists establishing themselves. The key distinction in these times was probably the control of fire. The colonisation of the world by the old-Africans led also to the demise of many animal species which could not resist the advanced cultures and technologies they were faced with. Some were hunted to extinction as prey, while others were unable to adapt quickly enough, and still others were just crowded out by the newcomers.

In due course (a small matter of a few hundred thousand years) the old-Africans in Central Asia and Europe evolved to become the Neanderthals. From about 500,000 years ago they were the dominant species, and that continued for about 300,000 years. In South-East Asia, the old-Africans evolved to become the Denisovans. In the rest of Asia (S China, India, and the Middle East), the old-Africans were still around but had evolved to become some as yet unknown hominin species. In Africa, the old-Africans evolved, sidestepped the evolution of any Neanderthal-like species, and eventually gave rise to Anatomically Modern Humans (AMH) by about 300,000 years ago. Let us call them the new-Africans.

Then from about 200,000 years ago there were a number of waves of new-African emigration/colonisation into Europe and Asia. These emigrant waves continued sporadically for 100,000 years while, in Africa, evolution had created the even-newer-Africans. Around 60 – 70,000 years ago, they were responsible for the major wave of emigration now termed the “Out of Africa” event. The new-Africans and the even-newer-Africans found indigenous populations all across the new territories they expanded into. They were sometimes just other new-Africans and sometimes they were blended populations of old-Africans (Neanderthals, Denisovans, …) and new-Africans. In India, for example, the even-newer-Africans arrived after the Toba eruption and mingled genetically with small surviving populations of old-Africans already mingled with new-Africans. By around 50,000 years ago the even-newer-Africans had reached Australia.

Whether there was conflict between indigenous and arriving populations, or whether one culture was gradually displaced by, or submerged into, the more dominant one, is unknown. Whether advanced colonists enslaved less advanced populations is unknown but is quite likely to have occurred. What is known is that the arrival of the even-newer-Africans caused the Neanderthals, the Denisovans, and any other remaining hominin species to disappear. By around 50,000 years ago the Denisovans were extinct and by 40,000 years ago there were no Neanderthals left. However, their genes still survive in tiny quantities and live on in us, so some contact leading to gene admixture certainly occurred. Whether intentional or not, many communities were overwhelmed and, eventually, entire human species were wiped out.

The time-line for African colonists into Europe and Asia:

  • oldest-Africans – c. 2 million years ago
  • old-Africans – c. 1 million years ago >> evolved to be Neanderthals, Denisovans, …
  • new-Africans – c. 500,000 years ago >> evolved, mixed with old-Africans and other strains
  • new-Africans (AMH) – c. 300,000 years ago >> admixture with previous colonists
  • even-newer-Africans – c. 100,000 years ago >> colonised the known world, eradicated the Neanderthals, Denisovans and others

In current-day, politically correct language it could be called the Genocide of the Neanderthals by the even-newer-Africans.


The accidental story of two times five and base ten

November 23, 2020

Humans have used many different bases for number systems but the use of base 10 is overwhelmingly dominant. There are instances of the use of base 5, base 6, base 20 and even base 27. In spite of many attempts to replace it with base 10, base 60 has fended off all rationalist suggestions, and remnants of it remain entrenched in our current mapping of time and space. For time periods, base 60 is used exclusively for hours, minutes and seconds, but base 10 for subdivisions of the second. Similarly for spatial coordinates, degrees, minutes and seconds of arc are still used, but subdivisions of the second use base 10. (Some of the other bases that appear in language are listed at the end of this post.)

In terms of mathematics there is no great inherent advantage in the use of one particular number base or another. The utility of a particular choice is a trade-off, first of all, between size and practicality. The size of the base determines how many unique number symbols are needed (binary needs 2, decimal needs 10 and hexadecimal 16). There are many proponents of the advantages of 2, 3, 8, 12 or 16 being used as our primary number base. Certainly base 12 is the most “fraction friendly”. But all our mathematics could, in reality, be performed in any number base.
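
Just to illustrate that point, here is a minimal sketch (in Python, with a function name of my own choosing) of how the same integer can be written out in any base; nothing about the value changes, only the symbols used to record it:

def to_base(n, base, symbols="0123456789ABCDEF"):
    # Return the representation of a non-negative integer n in the given base (2 to 16).
    if n == 0:
        return symbols[0]
    out = []
    while n > 0:
        n, r = divmod(n, base)
        out.append(symbols[r])
    return "".join(reversed(out))

# The same quantity written with different symbol sets:
# to_base(2025, 10) -> '2025'
# to_base(2025, 12) -> '1209'          (1x1728 + 2x144 + 0x12 + 9)
# to_base(2025, 9)  -> '2700'          (2x729 + 7x81 + 0x9 + 0) - the nonary case discussed below
# to_base(2025, 2)  -> '11111101001'

Arithmetic on such strings, digit by digit, works the same way in base 9 or base 12 as the familiar school algorithms do in base 10; only the threshold at which a carry occurs changes.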

At first glance the reason for the use of base 10 seems blindingly obvious, and looking for origins seems trivial. Our use of base 10 comes simply – and inevitably – from two hands times five digits. In recent times other bases (binary – base 2 – and hexadecimal – base 16 – for example) are used more extensively with computers, but base 10 (with some base 60) still predominates in human-human interactions (except when Sheldon is showing off). The use of base 10 predates the use of base 60, which itself has existed for at least 5,000 years.

It is ubiquitous now, but (2 x 5) is not a consequence of design. It derives from a chain of at least three crucial evolutionary accidents which gave us

  1. four limbs, and
  2. five digits on each limb, and finally
  3. human bipedalism which reserved two limbs for locomotion and left our hands free.

The subsequent evolutionary accidents which led to increased brain size would still have been necessary for the discovery of counting and the invention of number systems. But if, instead of two, we had evolved three limbs free from the responsibilities of locomotion, with three digits on each limb, we might well have had base 9 at the foundations of counting and a nonary number system. The benefits of a place value system and the use of “nonecimals” (the base-9 analogue of decimals) would still apply.

It is more difficult to imagine what might have happened if limbs were not symmetrical or the number of digits on each limb were different. Every human society has been predominantly (c. 85%) right-handed, but left-handedness has never been a sufficient handicap to have been eliminated by evolution. Almost certainly right-handedness comes from the asymmetrical functions established in the left and right brains. The distinction between the functions of the two sides of the brain goes back perhaps 500 million years, long before limbs and tetrapods. By the time limbs evolved, the brain functions giving our predilection for right-handedness must already have been established. So it is possible to imagine evolution having led to, say, 6 digits on right fore-limbs and 5 digits on left fore-limbs.

I wonder what a natural base of 11 or 13 would have done to the development of counting and number systems?

Why four limbs?

All land vertebrates (mammals, birds, reptiles and amphibians) derive from tetrapods which have two sets of paired limbs. Even snakes evolved from four-limbed lizards. 

Tetrapods evolved from a group of animals known as the Tetrapodomorpha which, in turn, evolved from ancient sarcopterygians around 390 million years ago in the middle Devonian period; their forms were transitional between lobe-finned fishes and the four-limbed tetrapods. The first tetrapods (from a traditional, apomorphy-based perspective) appeared by the late Devonian, 367.5 million years ago. Wikipedia

It would seem that – by trial and error – a land-based creature, fortuitously possessing two pairs of limbs, just happened to be the one which survived and became the ancestor of all tetrapods. The evolutionary advantage of having 4 limbs (two pairs) – rather than one or three or five pairs – is not at all clear. Insects have evolved three pairs while arachnids have four pairs. Myriapoda are multi-segmented creatures which have a pair of limbs per segment. They can vary from having five segments (10 legs) to about 400 segments (800 legs). The genes that determine the number of limbs determine many other features also, and why two pairs would be particularly advantageous is not understood. It could well be that the two pairs of limbs were incidental and merely followed other survival characteristics. The best bet currently is that

“You could say that the reason we have four limbs is because we have a belly,”

All of us backboned animals — at least the ones who also have jaws — have four fins or limbs, one pair in front and one pair behind. These have been modified dramatically in the course of evolution, into a marvelous variety of fins, legs, arms, flippers, and wings. But how did our earliest ancestors settle into such a consistent arrangement of two pairs of appendages? — Because we have a belly.

According to our hypothesis, the influence of the developing gut suppresses limb initiation along the midgut region and the ventral body wall owing to an “endodermal predominance.” From an evolutionary perspective, the lack of gut regionalization in agnathans reflects the ancestral absence of these conditions, and the elaboration of the gut together with the concomitant changes to the LMD in the gnathostomes could have led to the origin of paired fins.

The critical evolutionary accident then is that the intrepid sea creature which first colonised the land, some 390 million years ago, and gave rise to all tetrapods was one with a developing belly and therefore just happened to have two pairs of appendages.

The tail, however, is an asymmetrical appendage which may also once have been one of a pair (one on top of the other) but is now generally a solitary appendage. It is controlled by a different gene-set from the one which specifies limbs. In mammals it has disappeared in some species and performs stability functions in others. In some primates it has functions close to those of a fifth limb. But in no case has a tail ever evolved digits.

Why five digits on each limb?

When our ancestor left the oceans and became the origin of all tetrapods, four limbs had appeared but the number of digits on each limb had not then been decided. It took another 50 million years before a split distinguished amphibians from mammals, birds and reptiles. The timeline is thought to be:

  • 390 million years ago – tetrapod ancestor leaves the oceans
  • 360 million years ago – tetrapods with 6,7 and 8 digits per limb
  • 340 million years ago – amphibians go their separate way
  • 320 million years ago – reptiles slither away on a path giving dinosaurs and birds
  • 280 million years ago – the first mammals appear

SciAm

The condition of having no more than five fingers or toes …. probably evolved before the evolutionary divergence of amphibians (frogs, toads, salamanders and caecilians) and amniotes (birds, mammals, and reptiles in the loosest sense of the term). This event dates to approximately 340 million years ago in the Lower Carboniferous Period. Prior to this split, there is evidence of tetrapods from about 360 million years ago having limbs bearing arrays of six, seven and eight digits. Reduction from these polydactylous patterns to the more familiar arrangements of five or fewer digits accompanied the evolution of sophisticated wrist and ankle joints–both in terms of the number of bones present and the complex articulations among the constituent parts.

By the time we reach the mammals, five digits per limb has become the norm, though many mammals then follow paths which reduce the number of effective digits in play. Moles and pandas have evolved a sort of adjunct extra digit from their wrist bones but do not (or cannot) create a true additional digit.

…….. Is there really any good evidence that five, rather than, say, four or six, digits was biomechanically preferable for the common ancestor of modern tetrapods? The answer has to be “No,” in part because a whole range of tetrapods have reduced their numbers of digits further still. In addition, we lack any six-digit examples to investigate. This leads to the second part of the answer, which is to note that although digit numbers can be reduced, they very rarely increase. In a general sense this trait reflects the developmental-evolutionary rule that it is easier to lose something than it is to regain it. Even so, given the immensity of evolutionary time and the extraordinary variety of vertebrate bodies, the striking absence of truly six-digit limbs in today’s fauna highlights some sort of constraint. Moles’ paws and pandas’ thumbs are classic instances in which strangely re-modeled wrist bones serve as sixth digits and represent rather baroque solutions to the apparently straightforward task of growing an extra finger.

Five digits is apparently the result of evolutionary trial and error, but as with all things genetic, the selection process was probably selecting for something other than the number of digits. 

Science Focus

All land vertebrates today are descended from a common ancestor that had four legs, with five toes on each foot. This arrangement is known as the pentadactyl limb. Some species have subsequently fused these fingers into hooves or lost them altogether, but every mammal, bird, reptile and amphibian traces its family tree back to a pentadactyl ancestor that lived around 340 million years ago. Before, there were animals with six, seven and even eight toes on each foot, but they all went extinct at the end of the Devonian period, 360 million years ago. These other creatures were more aquatic than the pentadactyl animals. Evidence in the fossil record suggests that their ribs weren’t strong enough to support their lungs out of water and their shoulder and hip joints didn’t allow them to walk effectively on land. 

Five digits on our limbs are an evolutionary happenstance. There is nothing special that we can identify about the number five. It could just as well have been six or seven or eight. That the number of digits is the same on every limb is also an evolutionary happenstance predating the tetrapods. It is more efficient genetically, when multiple limbs are needed, to duplicate the pattern (with some variations for mirror symmetry and for differences between paired sets). When each limb is to carry many digits it is more efficient to follow a base pattern and keep the necessary genetic variations to a minimum.

By 280 million years ago, four limbs with five digits on each limb had become the base pattern for all land-based creatures and the stage was set for base 20. And then came bipedalism.

Why bipedalism?

Bipedalism is not uncommon among land creatures; birds are bipedal and so were some dinosaurs. Along the human ancestral line, bipedalism first shows up around 7 million years ago (Sahelanthropus). It may then have disappeared for a while before appearing again around 4 million years ago in a more resilient form (Australopithecus) which has continued through to us. What actually drove us from the trees to bipedalism is a matter of many theories and much conjecture. Whatever the reasons, the large brain evolved only in bipedal hominins who had a straightened spine and who had kept two limbs for locomotion while freeing up the other two for many other activities. The advantages of being able to carry things, throw things and shape things are considered the drivers for this development. And these two free limbs became the counting limbs.

It seems unlikely that a large brain could have developed in a creature which did not have some limbs freed from the tasks of locomotion. Locomotion itself and the preference for symmetry would have eliminated a three-limbed creature with just one free limb.

Two limbs for counting, rather than 3 of 4 or 4 of 4, is also happenstance. But it may be less accidental than the 4 limbs to begin with and the 5 digits on each limb. An accidental four limbs reduced inevitably to two counting limbs. Together with an accidental five digits they gave us base 10.


Other bases

1. Oksapmin, base-27 body part counting

The Oksapmin people of New Guinea have a base-27 counting system. The words for numbers are the words for the 27 body parts they use for counting, starting at the thumb of one hand, going up to the nose, then down the other side of the body to the pinky of the other hand …… . ‘One’ is tip^na (thumb), 6 is dopa (wrist), 12 is nata (ear), 16 is tan-nata (ear on the other side), all the way to 27, or tan-h^th^ta (pinky on the other side).

2. Tzotzil, base-20 body part counting

Tzotzil, a Mayan language spoken in Mexico, has a vigesimal, or base-20, counting system. ….. For numbers above 20, you refer to the digits of the next full man (vinik). ..

3. Yoruba, base-20 with subtraction

Yoruba, a Niger-Congo language spoken in West Africa, also has a base-20 system, but it is complicated by the fact that for each 10 numbers you advance, you add for the digits 1-4 and subtract for the digits 5-9. Fourteen (??rinlá) is 10+4 while 17 (eétàdílógún) is 20-3. So, combining base-20 and subtraction means 77 is …. (20×4)-3.

4. Traditional Welsh, base-20 with a pivot at 15

Though modern Welsh uses base-10 numbers, the traditional system was base-20, with the added twist of using 15 as a reference point. Once you advance by 15 (pymtheg) you add units to that number. So 16 is un ar bymtheg (one on 15), 36 is un ar bymtheg ar hugain (one on 15 on 20), and so on.

5. Alamblak, numbers built from 1, 2, 5, and 20

In Alamblak, a language of Papua New Guinea, there are only words for 1, 2, 5, and 20, and all other numbers are built out of those. So 14 is (5×2)+2+2, or tir hosfi hosfihosf, and 59 is (20×2)+(5x(2+1))+(2+2) or yima hosfi tir hosfirpati hosfihosf.

6. Ndom, base-6

Ndom, another language of Papua New Guinea, has a base-6, or senary number system. It has basic words for 6, 18, and 36 (mer, tondor, nif) and other numbers are built with reference to those. The number 25 is tondor abo mer abo sas (18+6+1), and 90 is nif thef abo tondor ((36×2)+18).

7. Huli, base-15

The Papua New Guinea language Huli uses a base-15, or pentadecimal system. Numbers which are multiples of 15 are simple words. Where the English word for 225 is quite long, the Huli word is ngui ngui, or 15 15. However 80 in Huli is ngui dau, ngui waragane-gonaga duria ((15×5)+the 5th member of the 6th 15).

8. Bukiyip, base-3 and base-4 together

In Bukiyip, another Papua New Guinea language also known as Mountain Arapesh, there are two counting systems, and which one you use depends on what you are counting. Coconuts, days, and fish are counted in base-3. Betel nuts, bananas, and shields are counted in base-4. The word anauwip means 6 in the base-3 system and 24 in the base-4 system!

9. Supyire, numbers built from 1, 5, 10, 20, 80, and 400

Supyire, a Niger-Congo language spoken in Mali has basic number words for 1, 5, 10, 20, 80 and 400, and builds the rest of the numbers from those. The word for 600 is kàmpwòò ná ?kwuu shuuní ná bééshùùnnì, or 400+(80×2)+(20×2)

10. Danish, forms some multiples of ten with fractions

Danish counting looks pretty familiar until you get to 50, and then things get weird with fractions. The number 50 is halvtreds, a shortening of halv tred sinds tyve (“half third times 20” or 2½x20). The number 70 is 3½x20, and 90 is 4½x20.

11. French, mix of base-10 and base-20

French uses base-10 counting until 70, at which point it transitions to a mixture with base-20. The number 70 is soixante-dix (60+10), 80 is quatre-vingts (4×20), and 90 is quatre-vingt-dix ((4×20)+10).

12. Nimbia, base-12

Even though, as the dozenalists claim, 12 is the best base mathematically, there are relatively few base-12 systems found in the world’s languages. In Nimbia, a dialect of the Gwandara language of Nigeria, multiples of 12 are the basic number words around which everything else is built. The number 29 is gume bi ni biyar ((12×2)+5), and 95 is gume bo’o ni kwada ((12×7)+11).
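
As a closing check, here is a small Python sketch (the variable names are mine, not linguistic terms) that simply re-does the arithmetic quoted in the entries above and confirms that each construction lands on the intended number:

# Re-checking the arithmetic behind some of the constructions listed above.
yoruba_77   = (20 * 4) - 3                        # base-20 with subtraction
welsh_36    = 1 + 15 + 20                         # un ar bymtheg ar hugain
alamblak_59 = (20 * 2) + (5 * (2 + 1)) + (2 + 2)  # built from 20, 5, 2 and 1
ndom_90     = (36 * 2) + 18                       # nif thef abo tondor
danish_50   = int(2.5 * 20)                       # "half third times 20"
french_90   = (4 * 20) + 10                       # quatre-vingt-dix
nimbia_95   = (12 * 7) + 11                       # gume bo'o ni kwada

print(yoruba_77, welsh_36, alamblak_59, ndom_90, danish_50, french_90, nimbia_95)
# 77 36 59 90 50 90 95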