
A Case For Human Progress

Steven Pinker’s claim that human beings are now better off than at any time since the beginning of recorded history has provoked fierce resistance. Critics often regard it as complacent, technocratic, or morally obtuse in the face of ongoing suffering. Yet when the claim is properly understood—not as a denial of present evils, but as a comparative, historical judgment—it is not only defensible but compelling. On virtually every objective measure of human well-being, the contemporary world represents an unprecedented improvement over the conditions under which most human beings have lived for millennia.

1. The Baseline of History: A Brutal Past

To assess whether we are “better off,” we must begin with an honest appraisal of historical norms. For the vast majority of recorded history, human life was marked by extreme vulnerability. Infant mortality commonly exceeded 30–40 percent. Average life expectancy hovered around 30 years, not because people died of “old age” early, but because death from infection, childbirth, famine, violence, or accident was routine. Chronic pain went largely untreated. A toothache, a broken limb, or an infected wound could easily be fatal.

Material deprivation was the default condition. Even in relatively advanced civilizations, the majority lived near subsistence. Crop failure meant starvation; drought meant mass death. Economic growth, as we now understand it, scarcely existed. Generational improvement was not expected; stagnation was normal.

By any sober standard, this is a grim baseline—and it persisted not for centuries, but for thousands of years.

2. Health, Longevity, and Survival

Against this background, the modern transformation of human health is nothing short of astonishing. Global life expectancy has more than doubled since 1800. Infant and maternal mortality—once accepted as tragic inevitabilities—have collapsed across most of the world. Diseases that killed indiscriminately for centuries (smallpox, polio, cholera, tuberculosis) have been eradicated or brought under control through vaccination, sanitation, and antibiotics.

Crucially, these gains are not confined to wealthy elites. The most dramatic improvements in survival have occurred among the global poor. A child born today in sub-Saharan Africa is far more likely to survive infancy than a child born into European nobility in the 18th century. This is not a moral judgment; it is a statistical fact.

If “being better off” includes the basic ability to live, avoid suffering, and reach adulthood, then modern humanity surpasses all previous eras by an overwhelming margin.

3. Violence and the Value of Life

One of Pinker’s most controversial claims concerns the decline of violence. While modern conflicts are rightly horrifying, they are exceptional rather than typical when viewed across deep time. Archaeological and historical evidence suggests that rates of violent death in prehistoric and early historic societies were vastly higher than today. Tribal warfare, blood feuds, slavery, torture, and public executions were normal features of social life.

Even into the early modern period, homicide rates in European cities were many times higher than those of today. States routinely employed terror as an instrument of governance. War between great powers was frequent and devastating.

The modern world has not eliminated violence—but it has rendered it morally intolerable and statistically rarer. The very fact that contemporary wars are seen as global crises, rather than routine affairs of state, is itself evidence of moral progress.

4. Knowledge, Literacy, and Cognitive Freedom

Another dimension of being “better off” concerns the life of the mind. For most of history, literacy was restricted to tiny elites. Knowledge was local, fragile, and easily lost. Superstition dominated explanation; dissent invited persecution.

Today, basic literacy is the global norm. Scientific knowledge accumulates rather than vanishes. Information is accessible on a scale unimaginable even a century ago. Individuals are vastly freer to form beliefs, challenge authority, and revise their views in light of evidence.

This matters not merely instrumentally, but intrinsically. A world in which people can understand their circumstances, reason about causes, and imagine alternatives is a better world than one in which fate and fear dominate human consciousness.

5. Moral Progress Without Moral Perfection

Critics often object that pointing to progress ignores ongoing injustice: poverty, inequality, climate risk, authoritarianism, and war. But this objection mistakes the nature of the claim. To say that humanity is better off than ever before is not to say that it is good enough, or that improvement is inevitable, or that vigilance is unnecessary.

Indeed, the very capacity to identify injustice presupposes moral progress. Slavery, once defended as natural, is now universally condemned. Women’s legal and political rights, once unthinkable, are widely recognized. Cruelty toward children, animals, and minorities is increasingly regarded as morally unacceptable. These changes did not arise spontaneously; they reflect centuries of argument, institutional reform, and cultural learning.

Progress is fragile—but fragility does not negate its reality.

6. The Enlightenment Legacy

At the core of Pinker’s argument is a defense of Enlightenment values: reason, science, humanism, and institutional reform. These are not abstract ideals; they are practical tools that have demonstrably improved human lives. When societies apply evidence-based medicine, impartial law, and accountable governance, suffering decreases. When they abandon these principles, suffering returns.

The lesson of history is not that progress is guaranteed, but that it is possible—and that it depends on the continued use of reason and empirical feedback rather than nostalgia, fatalism, or ideological purity.

Conclusion: A Clear-Eyed Optimism

Measured against the full sweep of recorded history, the case that human beings are better off today than ever before is overwhelmingly strong. We live longer, healthier, safer, more informed, and more autonomous lives than our ancestors could plausibly imagine. This is not a matter of opinion, but of cumulative evidence.

Acknowledging this fact does not trivialize suffering; it honors the hard-won achievements that reduced it. Nor does it license complacency. On the contrary, understanding how progress occurred is essential to preserving and extending it.

Steven Pinker’s thesis, properly understood, is not a hymn of self-congratulation. It is a reminder that human affairs are not condemned to perpetual misery—and that reasoned optimism is not naïveté, but historical realism.


Filed under Essays and talks

Against instinctive knowledge

The phrase “instinctive knowledge” can be understood in a few different ways, depending on how we define both “instinct” and “knowledge.” Let me break it down:


1. Instinct vs. Knowledge

  • Instinct usually refers to innate, biologically programmed behaviors or responses—things we don’t learn consciously but are inherited through evolution (e.g., babies grasping with their hands, ducklings following their mother).
  • Knowledge typically implies awareness or understanding—something propositional (“knowing that”), practical (“knowing how”), or experiential.

The tension is that instincts are automatic and unconscious, while knowledge usually implies some form of cognition or awareness.


2. Possible Senses of “Instinctive Knowledge”

  1. Innate predispositions:
    Humans seem to be “hard-wired” with certain kinds of proto-knowledge:
    • Infants show preference for human faces, voices, and even rough number discrimination.
    • Chomsky’s universal grammar hypothesis treats language capacity as a kind of inborn knowledge system.
      These aren’t learned through experience in the normal sense.
  2. Embodied or tacit knowledge:
    Sometimes what people call “instinctive” is closer to tacit knowledge—things we know how to do without being able to articulate them. For example:
    • Catching yourself when you trip.
    • An experienced athlete or musician reacting “without thinking.”
      Michael Polanyi described this as “we know more than we can tell.”
  3. Philosophical traditions:
    • Plato spoke of anamnesis (knowledge as recollection), suggesting that some truths are “already in us.”
    • Descartes argued for innate ideas (such as God or mathematical truths) present before experience.
    • Contemporary nativists in cognitive science (e.g., Steven Pinker) argue humans are born with mental structures that amount to forms of knowledge.

3. Limits of the Term

Strictly speaking, calling instinct “knowledge” can be misleading:

  • A bird migrating south by instinct doesn’t “know” geography.
  • A baby’s sucking reflex is not propositional knowledge.

But if we take knowledge in a broader sense—including tacit, embodied, or innate cognitive structures—then yes, something like “instinctive knowledge” does exist.


In summary:
There is no universally agreed category called “instinctive knowledge,” but humans (and other animals) do possess innate capacities and tacit forms of knowing that feel instinctive. Whether we call these “knowledge” depends on whether we restrict knowledge to conscious, propositional content—or allow it to include embodied, unconscious, and inborn structures.


Filed under Essays and talks

Concepts, notions, and ideas

Here is a comparison of concepts, notions, and ideas, showing how philosophers typically distinguish them:


1. Concepts

  • Definition: A concept is a precisely defined mental representation of a universal or category.
  • Role: Concepts are the building blocks of knowledge and reasoning, used for definition, classification, and abstraction.
  • Philosophical context:
    • For Aristotle, a concept corresponds to the universal form grasped by the intellect.
    • For Kant, a concept (Begriff) is a rule for organising intuitions into judgments.
  • Example: The concept of triangle = “a three‑sided polygon.”
  • Key feature: Concepts are systematic, general, and analysable—they can be defined and used in logical propositions.

2. Notions

  • Definition: A notion is a vague, intuitive, or less fully developed mental content about something.
  • Role: It often refers to a pre-theoretical or common-sense grasp of an idea that lacks full precision.
  • Philosophical context:
    • Locke sometimes used “notion” to mean a general idea but often with less clarity.
    • In modern philosophy, a notion can mean an impression or rough understanding that may lead to a concept.
  • Example: A child may have a notion of fairness before understanding the concept of justice.
  • Key feature: Notions are informal, intuitive, and less clearly defined.

3. Ideas

  • Definition: “Idea” is the broadest term, referring to any mental content—thoughts, images, concepts, or notions.
  • Role: In philosophy (especially Locke and Hume), “ideas” are the basic contents of the mind—anything we can think about.
  • Scope: An idea can be:
    • a concept (clear, general, universal);
    • a notion (vague or intuitive);
    • a mental image or belief.
  • Example: The idea of a unicorn could be a mental picture, while the concept of a mammal is more systematic.

Comparison Table

| Feature      | Concept                            | Notion                         | Idea                                            |
|--------------|------------------------------------|--------------------------------|-------------------------------------------------|
| Precision    | Clear and defined                  | Vague, intuitive               | Varies – can be precise or vague                |
| Formality    | Formal, philosophical, scientific  | Informal, everyday             | Broadest and most general term                  |
| Universality | Represents a universal or category | Pre-conceptual or common-sense | Any mental content, including images or beliefs |
| Example      | Concept of triangle                | Notion of fairness             | Idea of a unicorn                               |

Relationship Between Them

  • Ideas are the broadest category: any mental content.
  • Notions are rough or intuitive ideas, often pre‑conceptual.
  • Concepts are well-defined ideas that allow reasoning and definition.


Filed under Essays and talks

In favour of professionalism

Professionalism, particularly in providing expert advice, is fundamentally rooted in the principle of serving the best interests of the client. At its core, professionalism entails a commitment to competence, integrity, impartiality, and ethical responsibility. This commitment not only fosters trust between the professional and the client but also ensures that the client receives advice that genuinely aligns with their needs and objectives.

First and foremost, professionalism demands a high standard of competence. Professionals are obligated to maintain up-to-date knowledge, skill, and expertise in their fields to offer sound, informed, and accurate advice. Clients seek professional assistance precisely because they require specialized insights and solutions they cannot obtain independently. Consequently, the professional’s role is critical in navigating complexities and providing clarity, allowing clients to make informed decisions. Professionalism, therefore, guarantees that expert advice is not only accurate but also practically applicable and beneficial.

Integrity further underpins professionalism. Professionals are entrusted with sensitive and often confidential information, placing them in positions where honesty and transparency are paramount. Acting with integrity means prioritizing the client’s interests over personal gain. Professional advice that is grounded in integrity builds long-term relationships based on trust and respect. Clients who know they can depend on a professional’s honesty are more likely to engage openly, thus enhancing the effectiveness of the advice provided.

Impartiality is another cornerstone of professionalism. Providing advice in the client’s best interests requires an objective stance free from undue influence, biases, or conflicts of interest. Professionals must consciously guard against personal or financial incentives that could compromise their recommendations. Ensuring impartiality means the advice given genuinely reflects what is most beneficial for the client rather than what is easiest or most profitable for the adviser. This impartiality assures clients that their unique circumstances are at the forefront of any advice provided.

Finally, ethical responsibility reinforces professionalism by obliging professionals to adhere to established codes of conduct. These ethical guidelines exist to protect clients and maintain the integrity of professional practice. Ethical standards ensure accountability, providing clients with a means to seek redress if the advice given does not align with professional obligations. This accountability not only protects clients but also elevates the trust and credibility of professional services as a whole.

Critics of professionalism, notably Ivan Illich, argue that professionalization can create unnecessary dependence on experts, potentially disempowering individuals by implying that specialized knowledge is beyond the reach of ordinary people. Illich contended that professionals often monopolize knowledge and skills, restricting public access to essential services and fostering social inequity. However, this criticism overlooks the significant role professionals play in addressing complex challenges that require specialized expertise. Rather than creating dependence, professionalism at its best empowers clients by providing them with clarity, guidance, and informed choices, thus promoting autonomy rather than limiting it. Relying on amateurs or other non-experts risks making the wrong decisions, with potentially serious adverse consequences.

In conclusion, professionalism in providing expert advice is essential for protecting and promoting the best interests of the client. Competence ensures accuracy and effectiveness, integrity builds trust, impartiality guarantees objectivity, and ethical responsibility provides accountability. Together, these principles form a robust framework that enables professionals to consistently deliver valuable, trustworthy, and client-centered advice, reinforcing the essential role of professionalism in all areas of expert consultation.


Filed under Essays and talks

The curse of knowledge

The curse of knowledge is a cognitive bias that occurs when someone, possessing knowledge or expertise on a particular topic, struggles to imagine or communicate with others who lack the same understanding or information. Essentially, once we know something, it’s very hard to imagine what it’s like not to know it.

How it manifests:

  • Communication issues: Experts may explain ideas poorly, using jargon or assuming background knowledge.
  • Misunderstanding audiences: Teachers or speakers overestimate how much their audience understands.
  • Design flaws: Creators or developers may design interfaces or products assuming users already know how they work.

Why it happens:

  • The human brain struggles to “unlearn” or mentally rewind to a state before the knowledge was acquired.
  • Our current knowledge shapes our perception, making it difficult to imagine ignorance or misunderstandings.

Examples:

  • A professor struggling to simplify concepts for students who are beginners.
  • Engineers creating complex devices that confuse new users due to assumptions about basic user knowledge.

Overcoming the curse of knowledge:

  • Put yourself in the learner’s shoes through empathy or testing.
  • Break down complex ideas into simpler, clearer explanations.
  • Seek feedback from people who have less familiarity with your topic.

The curse of knowledge underscores how expertise can inadvertently become a barrier rather than a bridge, highlighting the importance of empathy and clear communication.


Filed under Logical fallacies

Invincible ignorance

“Invincible ignorance” refers to a state of ignorance that cannot be overcome because the individual has no way of accessing or understanding the necessary information. This concept is often discussed in moral and ethical contexts, particularly in philosophy and theology.

In these contexts, invincible ignorance is the lack of knowledge that is literally impossible for a person to obtain. This could be due to various factors such as cultural, geographical, or temporal barriers. For example, someone living in a remote part of the world without access to certain information cannot be blamed for not knowing it.

In moral theology, especially within the Catholic Church, the concept of invincible ignorance plays a significant role. It is believed that if a person is invincibly ignorant of the moral wrongness of an act, then their culpability for that act is diminished or even nullified. This is because moral responsibility is often linked to the knowledge and intent behind an action.

However, it’s important to distinguish invincible ignorance from “vincible ignorance”, which is ignorance that could be overcome but is not, owing to the individual’s lack of effort or wilful avoidance of the truth. In moral discussions, vincible ignorance does not typically absolve an individual from responsibility in the way invincible ignorance might.


Filed under Logical fallacies

On Gettier Problems

by Tim Harding

Gettier problems or cases are named in honor of the American philosopher Edmund Gettier, who discovered them in 1963. They function as challenges to the philosophical tradition of defining knowledge as justified true belief. The problems are actual or possible situations in which someone has a belief that is both true and well supported by evidence, yet which fails to be knowledge (Hetherington 2017: 1).

The traditional ‘justified true belief’ (JTB) account of knowledge comprises three conditions: S knows P if and only if (i) P is true, (ii) S believes that P is true, and (iii) S is justified in believing that P is true. In his discussion of this account, Gettier (1963: 192) begins by noting two points. His first point is that it is possible for a person to be justified in believing a proposition which is in fact false (for which he later gives examples). His second point is that if a person is justified in believing a proposition P, P entails another proposition Q, and the person deduces Q from P and accepts Q on that basis, then the person is justified in believing Q.
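These two elements can be set out schematically. The following is a standard modern formalization, not Gettier’s own notation, using K, B and J for ‘knows’, ‘believes’ and ‘is justified in believing’:

```latex
% The traditional JTB analysis of knowledge
K_S(P) \iff P \,\wedge\, B_S(P) \,\wedge\, J_S(P)

% Gettier's second point: justification is closed under known deduction
\bigl( J_S(P) \,\wedge\, (P \vdash Q) \,\wedge\, \text{$S$ deduces $Q$ from $P$} \bigr)
  \;\Rightarrow\; J_S(Q)
```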

Gettier (1963: 192-193) provides two counterexamples to show that it is possible to meet these three JTB conditions and yet not know P. I think that his second counterexample demonstrates both of his opening points better than his first. The proposition (f) ‘Jones owns a Ford’ entails the disjunctive proposition (h) ‘Either Jones owns a Ford or Brown is in Barcelona’. In accordance with Gettier’s first opening point, Smith is justified in believing (f) even if it is false, because Smith did not know that Jones was lying about his ownership of the Ford. In accordance with Gettier’s second opening point, if Smith is justified in believing (f), he is justified in believing (h). So if (f) is false, (h) could still be true by chance, if unbeknown to Smith Brown just happens to be in Barcelona. Smith was thus justified in believing (h), yet he did not know (h), even though proposition (h) meets each of the three JTB conditions. I think this counterexample shows that Gettier’s two opening points are both plausible.
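The logical skeleton of this second counterexample can be summarized as follows. This is a schematic reconstruction rather than Gettier’s own notation, writing f for ‘Jones owns a Ford’ and b for ‘Brown is in Barcelona’:

```latex
\begin{align*}
& J_S(f)                 && \text{Smith has strong (but misleading) evidence for } f \\
& f \vdash f \vee b      && \text{disjunction introduction} \\
& \therefore\; J_S(f \vee b) && \text{by the closure of justification under known deduction} \\
& \neg f \,\wedge\, b    && \text{Jones lied; by sheer luck, Brown is in Barcelona} \\
& \therefore\; (f \vee b) \wedge B_S(f \vee b) \wedge J_S(f \vee b),
  && \text{yet } S \text{ does not know } f \vee b
\end{align*}
```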

Zagzebski (1994: 207) notes that Gettier problems arise ‘when it is only by chance that a justified true belief is true’, as in the case of Brown happening to be in Barcelona in the Gettier counterexample discussed above. She argues that ‘since justification does not guarantee truth, it is possible for there to be a break in the connection between justification and truth, but for that connection to be regained by chance’ (Zagzebski 1994: 207).  Gettier’s counterexample created a problem for ‘justified true belief’ because an accident of bad luck (Jones lying about owning a Ford) was cancelled out by an accident of good luck (Brown happening to be in Barcelona), thus preserving both the truth of the disjunction (h) ‘Either Jones owns a Ford or Brown is in Barcelona’ and Smith’s justification for believing the truth of (h).

I think this break in the connection between justification and truth is what Zagzebski (1994: 209) means when she later refers to the concept of knowledge closely connecting the justification and truth components of a given belief, while permitting some degree of independence between them. In a later essay, Zagzebski (1999: 101) explains that ‘Gettier problems arise for any definition in which knowledge is true belief plus something else that is closely connected with the truth but does not entail it’. She argues that all that is necessary is that there be a small gap, or degree of independence, between the truth and justification components of knowledge (Zagzebski 1999: 101), as shown in Gettier’s abovementioned counterexample. It follows that Gettier problems can be avoided only if there is no degree of independence at all between the truth and the justification of a belief (Zagzebski 1994: 211).

Zagzebski (1994: 209-210) describes a general rule for generating Gettier cases. As long as there is the small degree of independence between truth and justification referred to above, we can construct Gettier cases by the following procedure. We start with a case of justified false belief, where the falsity of the belief is due to some element of luck (such as Jones lying about owning a Ford). We then amend the case by adding another element of luck (such as Brown happening to be in Barcelona) which makes the belief (in this case a disjunction) true after all. So the ‘belief’ that Zagzebski is referring to here is any justified false belief whose falsity is a matter of chance.

References

Gettier, E. (1963) ‘Is Justified True Belief Knowledge?’ in Sosa, E., Kim, J., Fantl, J. and McGrath, M. (eds) Epistemology: An Anthology, 2nd edition. Carlton: Blackwell, 192-193.

Hetherington, S. (2017) ‘Gettier Problems’, The Internet Encyclopedia of Philosophy, ISSN 2161-0002, http://www.iep.utm.edu/gettier/, 29 October 2017.

Zagzebski, L. (1994) ‘The Inescapability of Gettier Problems’ in Sosa, E., Kim, J., Fantl, J. and McGrath, M. (eds) Epistemology: An Anthology, 2nd edition. Carlton: Blackwell, 207-212.

Zagzebski, L. (1999) ‘What is Knowledge?’ in Greco, J. and Sosa, E. (eds) The Blackwell Guide to Epistemology. Carlton: Blackwell, 92-116.


Filed under Reblogs

Book review: The Death of Expertise

The Conversation

A new book expresses concern that the ‘average American’ has base knowledge so low that it is now plummeting to ‘aggressively wrong’. (Image: Shutterstock)

Rod Lamberts, Australian National University

I have to start this review with a confession: I wanted to like this book from the moment I read the title. And I did. Tom Nichols’ The Death of Expertise: The Campaign Against Established Knowledge and Why it Matters is a motivating – if at times slightly depressing – read.

In the author’s words, his goal is to examine:

… the relationship between experts and citizens in a democracy, why that relationship is collapsing, and what all of us, citizens and experts, might do about it.

This resonates strongly with what I see playing out around the world almost every day – from the appalling state of energy politics in Australia, to the frankly bizarre condition of public debate on just about anything in the US and the UK.

Nichols’ focus is on the US, but the parallels with similar nations are myriad. He expresses a deep concern that “the average American” has base knowledge so low it has crashed through the floor of “uninformed”, passed “misinformed” on the way down, and is now plummeting to “aggressively wrong”. And this is playing out against a backdrop in which people don’t just believe “dumb things”, but actively resist any new information that might threaten these beliefs.

He doesn’t claim this situation is new, per se – just that it seems to be accelerating, and proliferating, at eye-watering speed.

Intimately entwined with this, Nichols mourns the decay of our ability to have constructive, positive public debate. He reminds us that we are increasingly in a world where disagreement is seen as a personal insult. A world where argument means conflict rather than debate, and ad hominem is the rule rather than the exception.

Again, this is not necessarily a new issue – but it is certainly a growing one.

Oxford University Press

The book covers a broad and interconnected range of topics related to its key subject matter. It considers the contrast between experts and citizens, and highlights how the antagonism between these roles has been both caused and exacerbated by the exhausting and often insult-laden nature of what passes for public conversations.

Nichols also reflects on changes in the mediating influence of journalism on the relationship between experts and “citizens”. He reminds us of the ubiquity of Google and its role in reinforcing the conflation of information, knowledge and experience.

His chapter on the contribution of higher education to the ailing relationship between experts and citizens particularly appeals to me as an academic. Two of his points here exemplify academia’s complicity in diminishing this relationship.

Nichols outlines his concern about the movement to treat students as clients, and the consequent over-reliance on the efficacy and relevance of student assessment of their professors. While not against “limited assessment”, he believes:

Evaluating teachers creates a habit of mind in which the layperson becomes accustomed to judging the expert, despite being in an obvious position of having inferior knowledge of the subject material.

Nichols also asserts this student-as-customer approach to universities is accompanied by an implicit, and also explicit, nurturing of the idea that:

Emotion is an unassailable defence against expertise, a moat of anger and resentment in which reason and knowledge quickly drown. And when students learn that emotion trumps everything else, it is a lesson they will take with them for the rest of their lives.

The pervasive attacks on experts as “elitists” in US public discourse receive little sympathy in this book (nor should they). Nichols sees these assaults as rooted not so much in ignorance as in:

… unfounded arrogance, the outrage of an increasingly narcissistic culture that cannot endure even the slightest hint of inequality of any kind.

Linked to this, he sees a confusion in the minds of many between basic notions of democracy in general, and the relationship between expertise and democracy in particular.

Democracy is, Nichols reminds us, “a condition of political equality”: one person, one vote, all of us equal in the eyes of the law. But in the US at least, he feels people:

… now think of democracy as a state of actual equality, in which every opinion is as good as any other on almost any subject under the sun. Feelings are more important than facts: if people think vaccines are harmful … then it is “undemocratic” and “elitist” to contradict them.

The danger, as he puts it, is that a temptation exists in democratic societies to become caught up in “resentful insistence on equality”, which can turn into “oppressive ignorance” if left unchecked. I find it hard to argue with him.

Nichols acknowledges that his arguments expose him to the very real danger of looking like yet another pontificating academic, bemoaning the dumbing down of society. It’s a practice common among many in academia, and one that is often code for our real complaint: that people won’t just respect our authority.

There are certainly places where a superficial reader would be tempted to accuse him of this. But to them I suggest taking the time to consider more closely the contexts in which he presents his arguments.

This book does not simply point the finger at “society” or “citizens”: there is plenty of critique of, and advice for, experts. Among many suggestions, Nichols offers four explicit recommendations.

  • The first is that experts should strive to be more humble.
  • Second, be ecumenical – and by this Nichols means experts should vary their information sources, especially where politics is concerned, and not fall into the same echo chamber that many others inhabit.
  • Third, be less cynical. Here he counsels against assuming people are intentionally lying, misleading or wilfully trying to cause harm with assertions and claims that clearly go against solid evidence.
  • Finally, he cautions us all to be more discriminating – to check sources scrupulously for veracity and for political motivations.

In essence, this last point admonishes experts to mindfully counteract the potent lure of confirmation bias that plagues us all.

It would be very easy for critics to cherry-pick elements of this book and present them out of context, to see Nichols as motivated by a desire to feather his own nest and reinforce his professional standing: in short, to accuse him of being an elitist. Sadly, this would be a prime example of exactly what he is decrying.

To these people, I say: read the whole book first. If it makes you uncomfortable, or even angry, consider why.

Have a conversation about it and formulate a coherent argument to refute the positions with which you disagree. Try to resist the urge to dismiss it out of hand or attack the author himself.

I fear, though, that as is common with a treatise like this, the people who might most benefit are the least likely to read it. And if they do, they will take umbrage at the minutiae, and then dismiss or attack it.

Unfortunately we haven’t worked out how to change that. But to those so inclined, reading this book should have you nodding along, comforted at least that you are not alone in your concern that the role of expertise is in peril.

Rod Lamberts, Deputy Director, Australian National Centre for Public Awareness of Science, Australian National University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Facts are not always more important than opinions: here’s why


The message over the doorway to London’s Kirkaldy Testing Museum. But don’t be too quick to believe the facts and dismiss the opinions. Flickr/Kevo Thomson, CC BY-NC-ND

Peter Ellerton, The University of Queensland

Which is more important, a fact or an opinion on any given subject? It might be tempting to say the fact. But not so fast…

Lately, we find ourselves lamenting the post-truth world, in which facts seem no more important than opinions, and sometimes less so.

We also tend to see this as a recent devaluation of knowledge. But this is a phenomenon with a long history.

As the science fiction writer Isaac Asimov wrote in 1980:

Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that “my ignorance is just as good as your knowledge”.

The view that opinions can be more important than facts need not mean the same thing as the devaluing of knowledge. It’s always been the case that in certain situations opinions have been more important than facts, and this is a good thing. Let me explain.

Not all facts are true

To call something a fact is, presumably, to make a claim that it is true. This isn’t a problem for many things, although defending such a claim can be harder than you think.

What we think are facts – that is, those things we think are true – can end up being wrong despite our most honest commitment to genuine inquiry.

For example, is red wine good or bad for you? And was there a dinosaur called the brontosaurus or not? The Harvard researcher Samuel Arbesman points out these examples and others of how facts change in his book The Half-Life of Facts.

It’s not only that facts can change that is a problem. While we might be happy to consider it a fact that Earth is spherical, we would be wrong to do so because it’s actually a bit pear-shaped. Thinking it a sphere, however, is very different from thinking it to be flat.

Asimov expressed this beautifully in his essay The Relativity of Wrong. For Asimov, the person who thinks Earth is a sphere is wrong, and so is the person who thinks the Earth is flat. But the person who thinks that they are equally wrong is more wrong than both.

Geometrical hair-splitting aside, calling something a fact is therefore not a proclamation of infallibility. It is usually used to represent the best knowledge we have at any given time.

It’s also not the knockout blow we might hope for in an argument. Saying something is a fact by itself does nothing to convince someone who doesn’t agree with you. Unaccompanied by any warrant for belief, it is not a technique of persuasion. Proof by volume and repetition – repeatedly yelling “but it’s a fact!” – simply doesn’t work. Or at least it shouldn’t.

Matters of fact and opinion

Then again, calling something an opinion need not mean an escape to the fairyland of wishful thinking. This too is not a knockout attack in an argument. If we think of an opinion as one person’s view on a subject, then many opinions can be solid.

For example, it’s my opinion that science gives us a powerful narrative to help understand our place in the Universe, at least as much as any religious perspective does. It’s not an empirical fact that science does so, but it works for me.

But we can be much clearer in our meaning if we separate things into matters of fact and matters of opinion.

Matters of fact are confined to empirical claims, such as what the boiling point of a substance is, whether lead is denser than water, or whether the planet is warming.

Matters of opinion are non-empirical claims, and include questions of value and of personal preference such as whether it’s ok to eat animals, and whether vanilla ice cream is better than chocolate. Ethics is an exemplar of a system in which matters of fact cannot by themselves decide courses of action.

Matters of opinion can be informed by matters of fact (for example, finding out that animals can suffer may influence whether I choose to eat them), but ultimately they are not answered by matters of fact (why is it relevant if they can suffer?).

Backing up the facts and opinions

Opinions are not just pale shadows of facts; they are judgements and conclusions. They can be the result of careful and sophisticated deliberation in areas for which empirical investigation is inadequate or ill-suited.

While it’s nice to think of the world so neatly divided into matters of fact and matters of opinion, it’s not always so clinical in its precision. For example, it is a fact that I prefer vanilla ice cream over chocolate. In other words, it is apparently a matter of fact that I am having a subjective experience.

But we can heal that potential rift by further restricting matters of fact to those things that can be verified by others.

While it’s true that my ice cream preference could be experimentally indicated by observing my behaviour and interviewing me, it cannot be independently verified by others beyond doubt. I could be faking it.

But we can all agree in principle on whether the atmosphere contains more nitrogen or carbon dioxide because we can share the methodology of inquiry that gives us the answer. We can also agree on matters of value if the case for a particular view is rationally persuasive.

Facts and opinions need not be positioned in opposition to each other, as they have complementary functions in our decision-making. In a rational framework, they are equally useful. But that’s just my opinion – it’s not a fact.

Peter Ellerton, Lecturer in Critical Thinking, The University of Queensland

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Vice Chancellor Barney Glover says universities must stand up for facts and the truth – ‘if we don’t, who will?’


Intellectual inquiry and expertise are under sustained attack, says Barney Glover.
Mick Tsikas/AAP

Barney Glover, Western Sydney University

This is an edited extract from a speech made by Vice-Chancellor Barney Glover at the National Press Club on 1 March 2017.


We live in challenging times. Ours is an era in which evidence, intellectual inquiry and expertise are under sustained attack.

The phrases “post truth” and “alternative facts” have slipped into common use. Agendas have displaced analysis in much of our public debate. And we are all the poorer for it.

I want to deliver a passionate defence of the value of expertise and evidence. I will mount a case for facts as they are grounded in evidence, not as fluid points of convenience employed to cover or distort a proposition.

My plea to you all is this: let’s not deride experts, nor the value of expertise. Because in an era where extremists and polemicists seek to claim more and more of the public square, our need for unbiased, well-researched information has seldom been greater.

We must remind ourselves of how human progress has ever been forged. In this, academics and journalists have common cause. For how are we to fulfil our respective roles in a democracy if we don’t defend the indispensable role of evidence in decision-making?

Hostility towards evidence and expertise

In Australia and around the world, we’ve seen the emergence of a creeping cynicism – even outright hostility – towards evidence and expertise.

We saw this sentiment in the post-Brexit declaration by British Conservative MP Michael Gove that “the people of this country have had enough of experts.”

And yet – as we strive to cure cancer; save lives from preventable disease; navigate disruption; lift living standards; overcome prejudice, and prevent catastrophic climate change – expertise has never been more important.

The turn that public debate has taken is a challenge to universities. As institutions for the public good, we exist to push the frontiers of knowledge. We enhance human understanding through methodical, collaborative, sustained and robust inquiry.

That doesn’t discount the wisdom of the layperson. And it doesn’t mean universities have all the answers. Far from it. But we are unequivocally the best places to posit the questions.

We are places structurally, intellectually, ethically and intrinsically premised on confronting society’s most complex and confounding problems. We are at the vanguard of specialist knowledge. And we are relentless in its pursuit. We have to be. Because – like the challenges we as institutions immerse ourselves in – the pace of change is unrelenting.

In universities, questioning is continuous, and answers are always provisional. The intensive specialisation, in-depth inquiry and measured analysis universities undertake is not carried out in service of some ulterior motive or finite agenda.

In the conduct of research the finish line is very rarely, if ever, reached. There’s always more to learn, more to discover. The core objectives universities pursue can never be about any other agenda than the truth. There is no other, nor greater reward. So let’s not disparage expertise, or the critically important role of evidence and intellectual inquiry.

Instead, let’s try to understand its value to our country and its people. And, indeed, to the world.

Universities perform an essential role in society. We must stand up for evidence. Stand up for facts. Stand up for the truth. Because if we don’t, who will?

Universities’ role in the economy

Disruption is drastically refashioning the economy. It is reshaping the way we work, and reimagining the way we engage with each other in our local communities and globally.

In this constantly transforming environment – where major structural shifts in the economy can profoundly dislocate large segments of society – our universities perform a pivotal role.

Universities help us make the very best of disruption, ensuring we are able to “ride the wave”. And they are the institutions best equipped to buffer us against the fallout. This is particularly important in regions that have relied for decades on large-scale blue-collar industries.

Think Geelong in regional Victoria and Mackay in central Queensland. Look to Elizabeth in the northern suburbs of Adelaide. Wollongong and Newcastle in New South Wales. And Launceston in Tasmania. Onetime manufacturing strongholds in carmaking, steel, timber and sugar.

These communities have been wrenched economically, socially and at the personal level by automation, offshoring and rationalisation. For places like these, universities can be a lifeline.

Internationally, the evidence is in. Former financier Antoine van Agtmael and journalist Fred Bakker look at this very scenario in their recent book, “The Smartest Places on Earth”.

They uncover a transformative pattern in more than 45 formerly struggling regional US and European economies; places they describe as “rustbelts” turned “brainbelts”.

Akron, Ohio is one of the most remarkable examples they cite. This midwestern city had four tyre companies disappear practically overnight. The then president of the University of Akron, Luis Proenza, reached out to those affected, rallying them to collaborate and encouraging them to transform.

Van Agtmael tells the story of what happened next. “What stayed in Akron”, he observes, “was the world class polymer research that has given us things like contact lenses that change colour if you have diabetes, tyres that can drive under all kinds of road conditions and hundreds more inventions.”

Akron, he continues, “now [has] 1,000 little polymer companies that have more people working for them than the four old tyre companies.”

This kind of transformation, at Akron and beyond, Van Agtmael remarks, is “university centric.”

“Each of these rustbelts becoming brain belts”, he concludes, “always have universities.” In places like those he describes, and many others around the world, universities and their graduates are leading vital processes of renewal within economies experiencing upheaval.

You may be surprised by the extent to which this is happening in Australia, too.

Four-in-five startup founders are uni graduates

University graduates key to boosting startup economy. From http://www.shutterstock.com

Over the past decade, the startup economy has become part of Australia’s strategy for economic diversification and growth. Yet what has not been widely understood is the extent to which universities and their graduates are responsible for that growth.

Now, for the first time, Universities Australia and the survey group Startup Muster have taken a closer look at the data.

“Startup Smarts: universities and the startup economy” confirms that universities and their graduates are the driving force in Australia’s startup economy.

It tells us that four-in-five startup founders in this country are university graduates. Many startups, too, have been nurtured into existence by a university incubator, accelerator, mentoring scheme or entrepreneurship course.

There are more than one hundred of these programs dispersed widely across the country, with many on regional campuses.

They provide support, physical space and direct access to the latest research. They help to grow great Australian ideas into great Australian businesses.

This report confirms just how important the constant evolution, renewal and refining of course offerings at universities is.

We need to ensure that our programs equip our students and graduates for an uncertain future.

By the time today’s kindergarten students finish high school and are considering university study, startups will have created over half a million new jobs across the country. And this new sector of the economy – a sector indivisible from our universities – raised $568 million in 2016, 73% more than the previous year.

By the very nature of the reach of our universities, the benefits are not confined to our cities. We play a vital role in helping regional Australians and farmers stake their claim in the startup economy too. The idea of the “silicon paddock” – using technology to take farm-based businesses to the markets of the world – is no longer a concept. It’s a reality.

Technology enables our regional entrepreneurs to stay in our regions; building and running businesses, investing locally without the need for long commutes or city relocations. And this, too, is very important; making sure nobody is left behind.

Extending knowledge beyond uni gates

Comprehending and overcoming the complex problems the world confronts, in my view, requires we defend the role of expertise and intellectual inquiry. That doesn’t mean universities are the last word on knowledge. To a large extent, it means rethinking the way knowledge is conveyed beyond university gates.

If universities don’t turn their minds to this issue, others will. And their motivations may not always be altruistic.

Take research, for instance. When the facts of a particular field of inquiry are under attack, the natural reaction among researchers might be to tighten up their retort and hone the theoretical armoury.

It is right to be rigorous and methodical in research. But in the broader communication of our research – in the public dialogue beyond “the lab” – I think universities have to guard against retreating to overly technical language that, perhaps inadvertently, sidelines all but a limited group of specialists.

I don’t suggest that research can’t benefit or even be improved via a researcher’s consciousness of a particular, often very specific audience. Yet researchers who allow this consciousness to dominate the development of their work risk undermining their ability to tread new ground and challenge existing frontiers of knowledge.

Only by crossing borders can we come to something new. How many researchers’ discoveries have arisen from a subversion of discipline, practice or establishment? Virtually all, I would suggest.

Breaking down structural boundaries

Crossing borders also means we push other structural boundaries. Within universities, distinct discipline paradigms exist for good reason. They bring focus and in-depth intellectual lineage to a particular field.

But, increasingly, the complex problems we set out to solve don’t abide by the same boundaries. These questions demand expertise from many disciplines, working together and approaching the subject matter from different angles.

That is why universities are constantly refining their research and teaching programs and, increasingly, diffusing the borders that kept many of them separate. This is good for universities. It is good for the country. And it is good for our students, many of whom find their way into public service or politics.

These graduates bring a greater understanding of all facets of the complex questions they confront throughout their working lives.

Interdisciplinarity is, I think, a powerful antidote against ideological intransigence and prejudice. Australian universities – particularly in their research – have a growing track-record in this regard.

Many of our very best research institutes are characterised by a fusion of disciplines where, for example, sociologists, political scientists, spatial geographers, and economists collaborate on a common research objective.

The work that emerges from this research is almost always compelling because it is multi-faceted. It extends itself beyond its constituent research community.

Cross-disciplinarity has also expanded at the teaching level of our universities over the past few decades. But a constrained funding environment can provoke a reduction in options.

We must, however, keep our viewfinder broad, because reductionism doesn’t match the expansionist, multi-strand trends emerging in the broader economy. It’s a disconnect.

As universities, as a society, we must be mindful of how important it is to ask questions, to follow our curiosity, to challenge boundaries and to never rest with the answers.

• Read the full speech here.

Barney Glover, Vice-Chancellor, Western Sydney University

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.
