The goal of this blog is to create a list of super facts: important facts that are true with very high certainty and yet surprising, misunderstood, or disputed by many. This blog aims to be challenging, educational, and fun, without being clickbait. I determine veracity using evidence, data from reputable sources, and longstanding scientific consensus. Prepare to be challenged (I am). Intentionally seek the truth, not confirmation of your beliefs.
Grant from “Grant at Tame Your Book” has written an excellent and well-researched post about the dark side of Artificial Intelligence. It has nearly a hundred references and is very professionally written. It is called Don’t Confuse AI with a Benign Tool. With this post I just want to highlight it. Please check it out.
Superfact 90: Large Language Models (LLMs) such as ChatGPT, Claude, Llama, and Gemini are just one popular application of Artificial Intelligence among hundreds, and LLMs represent just one branch of Artificial Intelligence.
Artificial intelligence and research concept. Shutterstock Asset id: 2314449325 by Stock-Asso
LLMs are currently the most popular “viral” AI. We can all access LLMs in our browsers. This has created the common misconception that Artificial Intelligence is the same as Large Language Models. However, LLMs represent only one branch of narrow AI systems designed to perform specific tasks.
Applications of Artificial Intelligence other than what Large Language Models are used for include robotics, robot motion planning, advanced control systems using AI, self-driving cars, image processing, optical character recognition, classification, facial recognition systems, medical imaging diagnostics, game playing such as chess playing computers, financial fraud detection, cybersecurity, investment robots, route optimization, mathematical proof generation, recommendation algorithms, virtual assistants, programming code generation, smart home devices, drug discovery, and that is just for starters.
There are probably many applications and types of Artificial Intelligence that we have not yet invented.
Two Robots powered by Artificial Intelligence. Shutterstock Asset id: 558350728 by Willrow Hood.
LLMs use large neural networks with many hidden layers, so-called deep learning algorithms, and they employ the Rumelhart backpropagation learning algorithm invented by David Rumelhart, Geoffrey Hinton, and Ronald Williams. Neural networks with multiple hidden layers trained with the Rumelhart backpropagation algorithm are clearly incredibly successful, but they are just one of many kinds of Artificial Intelligence algorithms, and who knows what we will see in the future. Related to this post is my previous post Artificial Intelligence is Not New. We have only just begun.
I consider this a super fact because it is true, kind of important, and I believe that the multitude of Artificial Intelligence algorithms and applications is a surprise to many.
The many Artificial Intelligence Algorithms
Shutterstock Asset id: 2645975149
Due to the great improvement and success of Neural Networks, they have become very popular, and Large Language Models use very large Neural Networks with multiple hidden layers (employing the Rumelhart backpropagation algorithm). You can read more about that here.
However, there are many other AI algorithms, hundreds, maybe thousands. One example is genetic algorithms, which mimic evolution. They iteratively select a set of the best candidate solutions, combine them (crossover), and add random changes (mutation) to generate new solutions. Then you select the best solutions and repeat. Selecting the best solutions corresponds to natural selection. I tried such algorithms at work, and over many iterations / generations you can get some impressive results. It makes it easy to understand how a complex organ such as an eye can evolve in a similar way in nature.
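To make the selection / crossover / mutation loop concrete, here is a minimal genetic algorithm sketch in Python. The fitness function (counting 1-bits, the classic "one-max" toy problem) and all parameters are invented for illustration; a real application would use a domain-specific fitness function.

```python
import random

def fitness(bits):
    # Toy fitness: number of 1-bits (the "one-max" problem).
    return sum(bits)

def evolve(pop_size=20, length=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better half (natural selection).
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)   # crossover point
            child = a[:cut] + b[cut:]        # combine two parent solutions
            i = rng.randrange(length)        # mutation: flip one random bit
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Selection here plays the role of natural selection, crossover the role of sexual reproduction, and the bit flip the role of mutation.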
One type of decision-tree-based machine learning algorithm that I used for classification tasks at work was C4.5 and its successor C5.0. Specifically, I used this type of machine learning algorithm for evaluating the results from automatic mail sorting systems: basically, how well can a result from a certain machine be trusted? I don’t remember exactly, but my classes were something along the lines of super reliable, pretty reliable, average, and this result probably sucks. Other examples of this type of machine learning are ID3, Random Forest, Gradient Boosting, and CART. These types of algorithms are still very popular.
One advantage of using decision-tree-based machine learning over neural networks for the same task is that when a decision has been made you can follow the decision tree backwards and see why that decision / classification was made. In fact, if you have fewer than 100 parameters you could likely do it over lunch. When a neural network makes a decision, all you have is a large bunch of numbers spit out by an algorithm that looped possibly thousands of times, changing all the numbers every time. You can’t backtrack and figure out exactly how a decision was made; you just have to trust the neural network. The advantage of a neural network in this situation is that, if it is trained properly, it is likely to give better results.
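To illustrate the interpretability point, here is a toy Python sketch in the spirit of the mail-sorting example. The tree, feature names, and thresholds are all invented (a real C4.5/C5.0 tree would be learned from data); the point is that the decision path can be read back step by step.

```python
# Toy decision tree for grading how much to trust a mail-sorter result.
# Features and thresholds are invented for illustration, not learned by C4.5.
TREE = {
    "feature": "ocr_confidence", "threshold": 0.9,
    "low": {
        "feature": "barcode_match", "threshold": 0.5,
        "low": {"label": "probably wrong"},
        "high": {"label": "average"},
    },
    "high": {
        "feature": "address_in_database", "threshold": 0.5,
        "low": {"label": "pretty reliable"},
        "high": {"label": "super reliable"},
    },
}

def classify(sample, node=TREE, path=None):
    """Return (label, path); the path explains exactly why we decided."""
    if path is None:
        path = []
    if "label" in node:
        return node["label"], path
    branch = "high" if sample[node["feature"]] > node["threshold"] else "low"
    path.append(f'{node["feature"]} = {sample[node["feature"]]} -> {branch}')
    return classify(sample, node[branch], path)

label, path = classify({"ocr_confidence": 0.95, "barcode_match": 1.0,
                        "address_in_database": 1.0})
print(label)          # super reliable
for step in path:     # the human-readable explanation of the decision
    print(step)
```

This backtracking is exactly what a trained neural network cannot give you.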
Another type of algorithm used in Artificial Intelligence is the search algorithm. For robot motion planning I used an algorithm called A* or A-star, a very efficient pathfinding algorithm. It comes in dozens of variants, and there are hundreds of other types of search algorithms.
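As a rough sketch of how A* works, here is a minimal Python implementation on a small 4-connected grid, using the Manhattan distance as the heuristic. This is the textbook variant, not the exact planner I used for robot motion planning.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2-D grid of strings; '#' cells are obstacles."""
    def h(p):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue                              # already reached more cheaply
        best_g[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                heapq.heappush(open_heap,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None  # no path exists

grid = ["....",
        ".##.",
        "....",
        "...."]
path = astar(grid, (0, 0), (3, 3))
print(len(path) - 1)   # number of moves in the shortest path, here 6
```

Because the heuristic never overestimates the remaining distance, A* is guaranteed to return a shortest path here.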
These are just a few examples, but there are also knowledge-based agents, AI agents with reinforcement learning algorithms, algorithms based on Bayes’ Theorem, Support Vector Machines, Markov Decision Processes, clustering algorithms, the K-nearest neighbor (KNN) algorithm, simulated annealing, hill climbing, the ant colony optimization algorithm, and of course neural networks, which themselves come in many types. I used a relatively unknown form of artificial intelligence called reflex control for my robotics research. The point is, there is a zoo of artificial intelligence algorithms out there. Deep learning neural networks are very popular AI algorithms but far from the only ones.
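As one more example from the zoo, here is a minimal simulated annealing sketch in Python. It occasionally accepts worse solutions, with a probability that shrinks as the "temperature" cools, which lets it escape local minima that plain hill climbing would get stuck in. The test function and the cooling schedule are arbitrary choices for illustration.

```python
import math, random

def simulated_annealing(f, x0, steps=5000, temp0=5.0, seed=0):
    """Minimize f, sometimes accepting worse moves to escape local minima."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        temp = temp0 * (1 - k / steps) + 1e-9   # linear cooling schedule
        cand = x + rng.uniform(-0.5, 0.5)       # random neighboring solution
        fc = f(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx             # remember the best ever seen
    return best, fbest

# A bumpy test function: a parabola with a sine ripple creating local minima.
f = lambda x: (x - 2) ** 2 + 2 * math.sin(5 * x)
best, fbest = simulated_annealing(f, x0=-8.0)
print(round(best, 2), round(fbest, 2))
```

With the temperature forced to zero this degenerates into plain hill climbing, which would freeze in the first local minimum it falls into.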
My Personal Experience with Artificial Intelligence
In 1986, when I was in college in Sweden, I took a class in the LISP programming language. LISP was the first Artificial Intelligence programming language, and it was invented in 1958. In 1987, as a university level exchange student, I took a class called Artificial Intelligence at Case Western Reserve University. That same year I also took a class called Pattern Recognition which introduced neural networks to me.
In 1986, a landmark paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams introduced the Rumelhart backpropagation algorithm. Geoffrey Hinton received the Nobel Prize in physics in 2024. David Rumelhart and Ronald Williams had both passed away and could therefore not receive the Nobel Prize, which is not awarded posthumously. The Nobel Prize was also given to John J. Hopfield, another pioneer in neural networks, who invented the Hopfield network. You can read more about neural networks and the Nobel Prize in physics in 2024 here.
The Rumelhart backpropagation algorithm was a giant leap forward for neural networks and for Artificial Intelligence, and it is the algorithm used by ChatGPT and the other large language models. Geoffrey Hinton is often interviewed in the media and presented as the father of Artificial Intelligence. He is not, but he is arguably partially responsible for the greatest leap forward in neural networks, as well as Artificial Intelligence.
In the pattern recognition class, we used the Rumelhart backpropagation algorithm on a simple neural network to read images containing text. Later I did research in the field of Robotics, where I implemented various Artificial Intelligence algorithms as mentioned above. I have a PhD in Applied Physics and Electrical Engineering with a specialty in Robotics. Later I would use artificial intelligence algorithms in my professional career.
I used mostly the seven joint Robotics Research Corporation Robot for my robotics research. The robot was able to detect and avoid colliding with the objects surrounding it. I used echolocation for object detection.
The potential harm of AI is a related and important topic that I did not address. I don’t know much about this topic. However, Grant from “Grant at Tame Your Book” has written an excellent, well-researched, and professional post about this issue called Don’t Confuse AI with a Benign Tool. Please check it out.
Superfact 89: There is overwhelming scientific evidence supporting so called macroevolution. Evidence for macroevolution includes the fossil record, molecular biology and DNA, biogeography, comparative anatomy, embryology, suboptimality, vestigial structures, etc.
It is difficult to deny that so called microevolution is happening since it can be directly observed. However, it is quite common to come across claims that there is no evidence for macroevolution or that macroevolution is impossible and unscientific. These claims do not come from mainstream scientists but from creationists. There is no magical barrier between microevolution and macroevolution. Rather, macroevolution is just an accumulation of microevolutionary steps, and it is a fact that those changes have been slowly accumulating over millions and billions of years.
Microevolution consists of small changes that, accumulated, result in large changes over time. There is no magical wall stopping multiple microevolutionary changes from adding up to macroevolution.
It is often said that macroevolution is when a species evolves into another and that this represents a special barrier, impossible to breach. The existence of fuzzy boundaries between species and the existence of ring species demonstrate that this idea is faulty. See the next section for more information on this. After that, I list ten selected types of evidence for macroevolution. If you wish to see an overview of 29+ Evidences for Macroevolution, click here. I can add that scientists do not like to use the terms microevolution and macroevolution since they are nebulous; these terms are more of a creationist thing. That’s why I have been prefixing microevolution and macroevolution with “so called”.
Roughly a third of Americans believe the creationist claim that macroevolution is not possible, or that there is no evidence for it, even though there is strong evidence for macroevolution. Therefore, I consider this a super fact. Note that 97% of scientists support the theory of evolution. Here is a brief Wikipedia article on evolution.
Note, this post is long, but if you are interested in it, you could just read a few of the evidences instead of all ten.
Speciation is considered relative
It is often said that two animals belong to the same species if they can interbreed in nature and produce viable, fertile offspring. However, it is not that simple.
An animal A may be able to successfully interbreed with an animal B, and that animal B may be able to successfully interbreed with an animal C, while animals A and C cannot interbreed. Animal A could be said to be a different species relative to animal C, but animal B could be said to be the same species as both animals A and C using the definition above. A great geography-related example of this is ring species. In a ring species, gene flow occurs between neighboring populations of a species, but at the ends of the ring the populations don’t interbreed.
Illustration of ring species, an example of how speciation can be relative. All the circles next to each other can interbreed but at the end it no longer works. Andrew Z. Colvin, CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons.
Next up are ten selected types of evidence for Macroevolution in no particular order.
The Fossil Record Shows an Evolution from Simple to Complex Species
The fossil record is quite extensive, representing 250,000 different species, but it is very far from complete. That is expected: fossilization is an extremely rare event, and fossils are hard to find. Among those 250,000 fossil species there are no Precambrian rabbits or Mesozoic humans. If there were, that would have falsified evolution and been evidence for a creator. This shows, first of all, that the theory of evolution is falsifiable (all scientific theories have to be falsifiable), contrary to some creationist claims, and it constitutes a form of evidence for evolution.
If evolution is true then a scan through the entire sequence of rock strata should show early life to be quite simple, with more complex species appearing only after some time. In addition, the youngest fossils should be those that are most similar to living species. The fact that this is the case is strong evidence for evolution, specifically macroevolution. You can read more about this in this relatively short book, The Evidence for Evolution, by Alan R. Rogers.
The fossil record is a lot more solid and much less problematic than the creationist books I have read claimed. Shutter Stock Photo ID: 1323000239 by Alizada Studios
We can Follow Lineages in the Fossil Record
In the fossil record we can also follow lineages: species of animals and plants changing into something different over time. The fossil record shows fish changing into amphibians, reptiles changing into mammals, dinosaurs into birds, artiodactyl-like mammals into whales, apes into humans, etc. Creationists used to mock the fact that there were no transitional fossils between land mammals and whales, and then Pakicetus was found in 1983, followed by many more. The more time passes, the more transitional fossils we find.
Closeup of the fossilized remains of Archaeopteryx, a transitional fossil between dinosaurs and modern birds. Shutterstock Asset id: 1913076019 by Natalia van D.
The fact that we can follow lineages and that they are consistent with the various dating methods is powerful evidence for evolution. Dating methods include radiometric dating (uranium-lead, potassium-argon, carbon-14), sequencing and superposition, and conditions encoded in fossils, such as the length of the day (which varied throughout natural history), and more. To read more about dating methods and how we know Earth is billions of years old click here. The picture below illustrates the skull changes of hominids over time.
Molecular Biology and DNA

Molecular biology and DNA may be our best evidence for macroevolution. Our understanding of DNA has greatly increased over the last couple of decades. The human genome has been sequenced along with those of many other species, and we are able to compare the DNA and the genes of various species and trace their origins.
Geneticist sequencing human genome Asset id: 2479929725 by FOTOGRIN
Of special interest are pseudogenes, the millions of transposable elements (transposons and retroelements), as well as largely non-coding sequences such as introns. These segments are especially interesting because they are largely unaffected by natural selection, and therefore mutations pile up in them at a fairly constant rate. By comparing two such segments in two species we can tell how far apart the species are and even how far back in time their common ancestor lived.
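The back-of-the-envelope logic can be sketched like this. The numbers below are invented for illustration (real substitution rates and segment lengths vary by genome region); the key idea is that both lineages accumulate mutations independently after the split, hence the factor of two.

```python
# Hypothetical molecular-clock calculation; all numbers are for illustration.
substitution_rate = 1e-9      # substitutions per site per year (assumed rate)
sites = 100_000               # length of the compared neutral segment
differences = 1_200           # observed differing sites between the two species

# Each lineage mutated independently since the split, hence the factor 2.
per_site_divergence = differences / sites
divergence_time_years = per_site_divergence / (2 * substitution_rate)
print(divergence_time_years)  # about 6 million years in this toy example
```

With these made-up numbers the estimated split is about six million years ago, which happens to be the ballpark figure for the human-chimpanzee split discussed below.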
Based on the similarity in transposons, we know that the closest living relatives of whales and dolphins (outside their order) are hippopotamuses, which confirms what we know from the fossil record of whales and the mammals whales evolved from. Whales and hippopotamuses have a common ancestor, and since we’ve found dozens of intermediate fossils between land mammals and whales, the evolution of whales is no longer a mystery.
All living cetaceans, including whales, dolphins, porpoises, and sperm whales, together with hippopotamids (hippopotamuses), belong to a suborder of artiodactyls called Whippomorpha. Hippopotamuses and whales have a common ancestor. Note: I created this image by inserting a few pictures from Wikimedia Commons, including a mother sperm whale and her calf off the coast of Mauritius, a gray whale in captivity, a hippopotamus, and two prehistoric whales (from the section Evolution of Whales – Intermediate Fossils).
Based on the similarity in transposons, pseudogenes, and genes in general (the whole genome), we know that the closest living relatives of humans are chimpanzees and bonobos. In fact, chimpanzees and humans are more closely related to each other than chimpanzees are to the other great apes. Based on the genetic record, chimpanzees are now classified together with humans in the tribe Hominini. Also based on the genetic record, we know that chimpanzees and humans had a common ancestor that lived about six million years ago. A fossil of this common ancestor has not been found, but the information in DNA can often tell us more than the fossil record.
Comparison between human and chimpanzee karyotypes. Shutterstock Asset id: 2432966649 by kanyanat wongsa.
Evolution of humans via phylogenetics and differentiation between humans, chimpanzees, and other primates. Shutterstock Asset id: 2448150743 by kanyanat wongsa.
Simple cladogram showing evolution of modern man from a hominid ancestor. Shutterstock Asset id: 2093535535 by CLOUD-WALKER.
The book Relics of Eden: The Powerful Evidence of Evolution in Human DNA by Daniel Fairbanks is a good, fairly in-depth book on this topic.
Biogeography
Biogeographic evidence for evolution / macroevolution is among the oldest types of evidence (Charles Darwin used it) and yet it is very powerful. Biogeographic evidence for evolution shows that species’ geographic distributions result from descent with modification and environmental adaptation, rather than just similar habitats. Key types of biogeographic evidence for macroevolution include species existing only on a certain island, adaptive radiation (e.g., Galápagos finches), tectonic-driven species distribution (e.g., marsupials), and convergent evolution of unrelated species in similar environments.
Adaptive radiation is a rapid evolutionary process where an ancestral species diversifies into a multitude of new species (or subspecies) to fill vacant ecological niches. Shutterstock Asset id: 2707584123 by VectorMine.
One example of biogeographical evidence for macroevolution involves so-called oceanic islands. Oceanic islands are not part of a continent but are formed from the sea bottom, typically through volcanic activity. Oceanic islands lack native freshwater fish and amphibians, and they rarely harbor native mammals and reptiles. However, freshwater fish, amphibians, mammals, and reptiles thrive when introduced to oceanic islands. It’s just that they have to get there in the first place.
Instead, oceanic islands typically feature birds, insects, and plants that can more easily spread long distances. In addition, the species on oceanic islands are typically closely related and appear in relatively few groups. Add to that the fact that the species on oceanic islands resemble species on nearby continents but are not the same. This strongly supports the narrative that some species from nearby continents migrated to newly formed oceanic islands and evolved.
The evidence gets even better if you look in more detail. For example, the Hawaiian Islands (oceanic islands) were formed in chronological order from northwest to southeast, as the Pacific tectonic plate moved over a volcanic hotspot. The species on the different islands show a gradual transition in their physical properties and in their DNA as you go from island to island. This supports the narrative that species hopped from one island to the next as the islands emerged, and then evolved.
Comparative Anatomy
Similar anatomical structures in different species, such as the similar bone structure in a human arm, a bat wing, and a whale flipper, indicate shared ancestry. Another example is the heart structure in fish, amphibians, reptiles, birds, and mammals, which shows a homologous progression of development.
Embryology
Different species share similar developmental stages. For example, early embryos of reptiles, birds, and mammals, including humans, develop pharyngeal pouches similar to the gill structures of fish. Baleen whale embryos have teeth that are lost by birth, human embryos develop a tail that is later lost, and human fetuses develop a coat of hair around weeks 16-20 that is usually shed before birth but may remain on premature babies. Early vertebrate embryos resemble each other strongly, with fish, amphibian, reptile, and mammal embryos diverging gradually as development proceeds.
Suboptimality
There is a lot of evidence based on so-called suboptimality. Our bodies, and those of other animals, are full of imperfections that make perfect sense from an evolutionary perspective but not much sense if we were designed by a creator. One example is the vas deferens, which follows a circuitous route from the testis up and around the ureter and back down to the penis, instead of going straight there. As the testes gradually moved from inside the body (as in fish) to the outside, the vas deferens got caught around the ureter, like a water hose can get caught around a tree. This makes perfect sense from an evolutionary perspective.
Vestigial Structures
Vestigial structures are non-functional anatomical features, organs, or behaviors that were functional in a species’ ancestors but have lost most or all of their original purpose through evolution. Examples include whale hind leg bones, flightless bird wings, the human appendix, the tailbone, wisdom teeth, and goosebumps in humans.
Atavisms
Atavisms are rare reappearances of a lost ancestral trait in an individual. This can happen because ancestral genes are preserved but suppressed, and then, for example, a mutation allows the gene to be expressed again. Examples include a human baby born with a tail, a snake with limbs, a chicken with teeth, or a dolphin with hind flippers. Atavisms are rare, but they are evidence for evolution.
Traces of Common Descent
Traces of common descent in species, for example homologous anatomical structures, similar embryological development, shared genetic codes, and phylogenetic mapping, allow the construction of the tree of life. Phylogenetic mapping suggests that organisms inherited fundamental traits from a common ancestor. All life except viruses can be traced back to a common ancestor that lived around 4.2 billion years ago. This also constitutes evidence for evolution / macroevolution.
I can add that when I was young, I read a lot of creationist books. I was totally sold on creationism, but as I started learning about science that changed. One thing all the creationist books I read had in common was that they avoided discussing the evidence for evolution, and they did not provide evidence for creationism. Instead, they focused on trying to discredit evolution. As I learned more about science I came to realize that not even one of those objections was valid. An example is super fact #73 below.
Superfact 88: The history of artificial intelligence (AI) began in antiquity, with stories of artificial beings. The first artificial neural network model was created in 1943. The Turing test was created in 1950. The field of “Artificial Intelligence Research” was founded as an academic discipline in 1956. The first trainable (able to learn) neural network was demonstrated in 1957.
Since then, artificial intelligence has come a long way. Did you hear about the computer that defeated the reigning world champion in chess? A computer finally defeated the supreme human intellect in the world in an intellectual field. Is this the end of humanity? Oh, wait, that was in 1997.
Artificial intelligence and research concept. Shutterstock Asset id: 2314449325 by Stock-Asso
The various recent launches of large language models such as ChatGPT, Gemini, Claude, Llama, Deep Seek, etc., have impressed many people but also fooled many people into thinking that Artificial Intelligence is a new invention. It is not. Artificial Intelligence has been around for a long time, and its past is filled with many success stories as well as disappointments. Click here to see a timeline for Artificial Intelligence stretching from antiquity to 2025. For additional sources click here, here, here, or here.
I consider this a super fact because it is true, kind of important, and based on my personal experience I believe that the long history of Artificial Intelligence is a surprise to many.
My Personal Experience with Artificial Intelligence
In 1986, when I was in college in Sweden, I took a class in the LISP programming language. LISP was the first Artificial Intelligence programming language, and it was invented in 1958. In 1987, as a university level exchange student, I took a class called Artificial Intelligence at Case Western Reserve University. The book we used was Artificial Intelligence by Elaine Rich published in 1983. This book and the course were focused on decision trees and rule based algorithms and did not even mention neural networks.
That same year I also took a class called Pattern Recognition, which introduced me to neural networks. In 1986, a landmark paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams introduced the Rumelhart backpropagation algorithm. Geoffrey Hinton received the Nobel Prize in physics in 2024. David Rumelhart and Ronald Williams had both passed away and could therefore not receive the Nobel Prize, which is not awarded posthumously. The Nobel Prize was also given to John J. Hopfield, another pioneer in neural networks, who invented the Hopfield network. You can read more about neural networks and the Nobel Prize in physics in 2024 here.
The Rumelhart backpropagation algorithm was a giant leap forward for neural networks and for Artificial Intelligence and it is the algorithm used by ChatGPT and the other large language models. Geoffrey Hinton is often interviewed in media and often presented as the father of Artificial Intelligence. He is not, but he is responsible for arguably the greatest leap forward in neural networks, as well as Artificial Intelligence.
In class we used the Rumelhart backpropagation algorithm to read images containing text. It is one thing to type a character on a keyboard and quite another to have a computer identify a character in an image. We trained our primitive neural networks to recognize images of letters using the Rumelhart backpropagation algorithm. We coded the backpropagation algorithm in the C programming language, over perhaps 100 neurons and a few hundred synapses/weights. It worked pretty well. In comparison, ChatGPT-4 is estimated to have on the order of 1 trillion parameters. Our class was among the first in the world to try out this, at the time, new algorithm, and at the time I did not realize its importance.
Later I did research and worked in the field of Robotics, where I implemented various Artificial Intelligence algorithms but not neural networks. I have a PhD in Applied Physics and Electrical Engineering with a specialty in Robotics. At my next workplace, Siemens, I used decision tree algorithms, which are also Artificial Intelligence but not neural networks.
What is a Neural Network
A simple old-style 1950’s Neural Network (my drawing)
The first neural networks, created by Frank Rosenblatt in 1957, looked like the one above. You had input neurons and output neurons connected via weights that you adjusted using a learning algorithm. In the case above there are three inputs (3, 0, 2), and these inputs are multiplied by the weights on their way to the outputs: 3 × 0.2 + 0 + 2 × (−0.25) = 0.1, and 3 × 0.4 + 0 + 2 × 0.1 = 1.4. Each output node then applies a threshold function, yielding outputs 0 and 1.
To train the network you create a set of inputs along with the output you want for each input. You pick some random weights, calculate the total error, and use the error to compute a new set of weights. You do this over and over until you get the desired output for the different inputs. The amazing thing is that the neural network will then often also give you the desired output for inputs that were not used in the training. Unfortunately, these early neural networks weren’t very good, and some patterns they could not learn at all.
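Here is a minimal Python sketch of this Rosenblatt-style procedure, assuming a simple fixed threshold of 0.5 as in my drawing. The training data is an invented toy task in which each output should simply track one of the inputs; a real network would of course learn something less trivial.

```python
import random

def step(x):
    # Threshold function: the output neuron fires if the weighted sum > 0.5.
    return 1 if x > 0.5 else 0

def train(samples, n_inputs, n_outputs, epochs=200, lr=0.1, seed=0):
    """Rosenblatt-style training: nudge each weight against the output error."""
    rng = random.Random(seed)
    w = [[rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]
         for _ in range(n_outputs)]
    for _ in range(epochs):
        for inputs, targets in samples:
            for o in range(n_outputs):
                out = step(sum(w[o][i] * inputs[i] for i in range(n_inputs)))
                err = targets[o] - out          # desired minus actual output
                for i in range(n_inputs):
                    w[o][i] += lr * err * inputs[i]
    return w

# Toy task: output 0 should fire when input 0 is on, output 1 when input 2 is on.
samples = [((1, 0, 0), (1, 0)), ((0, 0, 1), (0, 1)),
           ((1, 0, 1), (1, 1)), ((0, 0, 0), (0, 0))]
w = train(samples, n_inputs=3, n_outputs=2)

def predict(x):
    return tuple(step(sum(wo[i] * x[i] for i in range(3))) for wo in w)

print(predict((1, 0, 0)))   # (1, 0)
```

This kind of single-layer network can only learn linearly separable patterns, which is exactly the limitation mentioned above.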
As mentioned, in 1986, Geoffrey Hinton, David Rumelhart, and Ronald J. Williams presented the Rumelhart backpropagation algorithm, which was applied to neural networks featuring at least one hidden layer. It was remarkably effective, able to learn patterns (such as XOR) that single-layer networks could not, and it set off a revolution in Neural Networks. In the network below you also use the errors in a similar fashion as in the Rosenblatt network. However, the combination of a hidden layer and the backpropagation algorithm makes a huge difference.
A multiple layer neural network with one hidden layer. This set-up and the associated backpropagation algorithm set off the neural network revolution. My drawing.
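A network like the one in the figure above can be sketched from scratch in Python and trained with backpropagation on XOR, the classic pattern a single-layer network cannot learn. The network size, learning rate, and epoch count are arbitrary choices for illustration; modern frameworks implement all of this far more efficiently.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: output 1 exactly when the two inputs differ.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def train_xor(hidden=4, lr=0.5, epochs=8000, seed=0):
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
    b1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = rng.uniform(-1, 1)

    def forward(x):
        h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j])
             for j in range(hidden)]
        return h, sigmoid(sum(w2[j] * h[j] for j in range(hidden)) + b2)

    for _ in range(epochs):
        for x, t in DATA:
            h, y = forward(x)
            d_out = (y - t) * y * (1 - y)   # error signal at the output neuron
            for j in range(hidden):
                # Propagate the error backwards through the hidden layer.
                d_hid = d_out * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * d_out * h[j]
                b1[j] -= lr * d_hid
                for i in range(2):
                    w1[j][i] -= lr * d_hid * x[i]
            b2 -= lr * d_out
    return forward

# Training can land in a local minimum, so try a few random restarts.
for seed in range(5):
    forward = train_xor(seed=seed)
    preds = [round(forward(x)[1]) for x, _ in DATA]
    if preds == [0, 1, 1, 0]:
        break
print(preds)
```

Note the restarts: a single training run can occasionally get stuck, a reminder that backpropagation is highly effective in practice rather than guaranteed to converge.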
Below I am showing two 10 X 10 pixel images containing the letter F. The neural network I created in class (see above) had 100 inputs, one for each pixel, a hidden layer and then output neurons corresponding to each letter I wanted to read. I think I used about 10 or 20 versions of each letter during training, by which I mean running the algorithm to adjust the weights to minimize the error until it is almost gone. Now if I used an image with a letter that I had never used before, the neural network typically got it right even though the image was new.
Two examples of the letter F in a 10 X 10 image. You can use these images (100 input neurons) to train a neural network to recognize the letters F.
At first, it was believed that adding more than one hidden layer did not add much. That changed when researchers discovered techniques, such as layer-wise pre-training and better ways of applying backpropagation to the different layers, that made networks with many hidden layers trainable and smarter. So, at the beginning of this century, deep learning neural networks (or just deep learning AI) were born. I can add that our Nobel Prize winner Geoffrey Hinton was also a pioneer in deep learning neural networks.
My drawing of a deep learning neural network (deep learning AI). There are three hidden layers.
I should mention that there are many styles of neural networks, not just the ones I’ve shown here. Below is a network called a Hopfield network, invented by John Hopfield (it was certainly not the only thing he discovered).
In a Hopfield network, all neurons are both input and output neurons, and they are all connected to each other.
For your information, ChatGPT-3.5 and ChatGPT-4 are based on deep learning neural networks, like the one in my colorful picture above, but instead of 3 hidden layers GPT-3.5 has 96 layers in its neural network, and instead of 19 neurons it has about 175 billion parameters (weights).
The Dark Side of AI
The potential harm of AI is a related and important topic that I did not address. However, this is already a very long and complex post, and I don’t know enough about this topic (yet). To read more about it, check the comments made by “Grant at Tame Your Book” in the comment section. Better yet, Grant wrote an excellent, well-researched, and professional post about this issue called Don’t Confuse AI with a Benign Tool. Please check it out.
The goal of this blog is to create a list of what I call super facts. Super facts are important and true facts that are nevertheless highly surprising to many, misunderstood, or disputed among the public. They are special facts that we all can learn something important from. However, I also make posts that are not super facts but feature other interesting information, such as this book review and book recommendation.
A Note About Liars on Amazon
I’ve noticed that most of the reviews for this book were positive, but there were a few negative reviews from what I refer to as climate deniers. These reviews were not just misguided fossil fuel talking points; they were obviously written by people who had not read the book, or who skimmed it without making an honest effort to understand its content. You can tell because the objections they raise were addressed and clearly debunked in the book in a way that is easy to understand.
I’ve read many books on climate science, and there are always a bunch of negative reviews written by people who have no clue about the content of the book. Writing a review of a book you have not read is the same as lying, especially if you are slamming the book. Some reviewers literally seem to be at war with the truth, spending their time trying to bury it and shamelessly lying in the process. Why would someone dump lots of fake reviews on books they haven’t read?
BEYOND DEBATE: Answers to 50 Misconceptions on Climate Change by Dr. Shahir Masri
Below I am listing the two versions of this book (Kindle and paperback). I bought the paperback version.
Paperback – Publisher: Dockside Sailing Press (July 14, 2018); ISBN-10: 0692157417; ISBN-13: 978-0692157411; 329 pages; item weight: 1.09 pounds; dimensions: 5.5 x 0.75 x 8.5 inches. It costs $6.44 on US Amazon. Click here to order it from Amazon.com.
Kindle – Publisher: Dockside Sailing Press (April 12, 2021); ASIN: B092DPY7LL; 245 pages. It costs $9.99 on US Amazon. Click here to order it from Amazon.com.
BEYOND DEBATE: Answers to 50 Misconceptions on Climate Change. Click on the image to go to the Amazon page for the paperback version of the book.
Amazon’s Description of the Book
What if volcanoes are heating the planet? Maybe solar cycles are to blame? Isn’t carbon dioxide good for plants? These are but a few of the questions on global warming that are addressed in this book. If you are concerned that global warming may be a serious problem, but find it hard to know what to believe or how to help in the face of conflicting arguments, you will want to read this book. You don’t have to be a scientist to understand Dr. Shahir Masri’s explanations and solutions. They proceed along common-sense lines that are easy to follow. Climate change poses a major threat to public health and the environment. Yet, political squabbles and misinformation have stalled policy and enabled little progress to be made in solving the crisis.
Similarly, the notion of a “climate debate” has created the illusion of a divided scientific community, when in fact most scientists agree that human activity is causing the planet to warm. At a time when open discussion is essential, talk of global warming has become entrenched in politics and all but taboo in unfamiliar company. In Beyond Debate, Shahir Masri clears up 50 of the most common misconceptions surrounding climate change. He simplifies the science and resolves the confusion so that everyone may better understand the issue. Now is not the time for silence, but rather a time for conversation and collective action to address greenhouse gas emissions and begin to solve the climate crisis. Action begins with understanding, which Beyond Debate so eloquently offers. Masri conveys a sense of urgency while describing opportunities for hope.
Fix your misconceptions. Don’t fall for disinformation. Be curious and learn.
There are a lot of misconceptions and misunderstandings, as well as deliberate disinformation, surrounding climate change, whether you call it global warming, global weirding, or climate disruption. This book provides answers and explanations for 50 misconceptions. Some are common but basic misunderstandings; others require more in-depth explanations.
In addition, the book gives you an introduction to how the greenhouse effect works, covering 200+ years of scientific discoveries by some famous scientists. Did you know that without the various greenhouse gases in the atmosphere (water vapor, carbon dioxide, methane, nitrous oxide, and ozone), our planet would be about 60 degrees Fahrenheit (33 °C) colder than it is? It would be a snowball Earth. This book is for those of us who are curious and want to learn more about this topic.
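The “60 degrees colder” figure can actually be checked with a quick back-of-envelope calculation using the Stefan–Boltzmann law (this is my own illustration, not taken from the book): an Earth with no greenhouse effect would settle at about 255 K, roughly 33 °C (about 60 °F) colder than the observed average of about 288 K (15 °C).

```python
# Back-of-envelope check of the "~60 degrees colder" claim
# using the Stefan-Boltzmann law (my illustration, not the book's).
S = 1361.0        # solar constant at Earth, W/m^2
albedo = 0.30     # fraction of sunlight Earth reflects back to space
sigma = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

# Equilibrium temperature of an Earth with no greenhouse effect:
# absorbed sunlight S*(1-albedo)/4 balances radiated heat sigma*T^4.
T_no_ghg = ((S * (1 - albedo)) / (4 * sigma)) ** 0.25  # about 255 K
T_actual = 288.0                                       # observed, ~15 C

diff_C = T_actual - T_no_ghg
diff_F = diff_C * 9 / 5
print(round(T_no_ghg), round(diff_C), round(diff_F))   # 255 33 60
```

So the simplest possible energy-balance calculation already reproduces the book’s number: the greenhouse effect is worth roughly 33 °C, or about 60 °F, of warming.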
An example of a basic misconception is Chapter 4, “Earth’s natural cycles explain recent warming”. Well, they don’t. For example, the Milankovitch cycles (Earth’s precession, axial tilt, and the eccentricity of Earth’s orbit) are too slow and would favor cooling right now, not warming. It is not the sun (Chapter 5) and not volcanoes (Chapter 3). Volcanoes release less than 1% of the CO2 currently released by human activities, they are part of the carbon cycle, and CO2 from volcanoes has the wrong isotope mix to account for the increase in atmospheric CO2. He explains that the carbon atom comes in different isotopes (different numbers of neutrons), that the mix differs between carbon sources, and that the isotope mix shows the carbon added to the atmosphere comes from burning fossil fuels.
I can add that different potential causes of global warming also warm the planet in different ways (like a fingerprint), and the fingerprint of the current warming is that of greenhouse gases (he does not explain this enough). Another thing to ask yourself: if you think the current global warming is natural, why do paleoclimatologists and others who have dedicated their lives to studying naturally occurring climate change not think this warming is natural?
Another basic misconception is addressed in Chapter 23, “Climate models don’t account for the most abundant greenhouse gas, water vapor”. This is false; they do account for water vapor. Some people believe that because water vapor is a more powerful and abundant greenhouse gas than CO2, it must be what is causing global warming. That’s not how it works. We are not increasing water vapor in the atmosphere by emitting it, and even if we did, it would rain back down. If a greenhouse gas isn’t increasing, it can’t cause rising temperatures, no matter how abundant it is. Therefore, water vapor is not driving global warming.
However, an increase in carbon dioxide warms the atmosphere, which in turn increases the amount of water vapor the atmosphere can hold (a positive feedback loop); thus water vapor gives the greenhouse effect a boost. It gives CO2 and methane a helping hand as the increase of these gases heats things up, but water vapor is not driving the warming. This is not hard to understand, and yet this misconception refuses to go away.
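The “warmer air holds more water vapor” step can be made quantitative with the Magnus approximation for saturation vapor pressure (my own sketch, not from the book): each degree Celsius of warming lets the air hold roughly 7% more water vapor, which is the engine of this feedback loop.

```python
# How much more water vapor can air hold per degree of warming?
# Magnus approximation for saturation vapor pressure (my sketch,
# not from the book).
import math

def e_sat(T_celsius):
    """Saturation vapor pressure in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * T_celsius / (243.12 + T_celsius))

# Fractional increase in water-holding capacity from 15 C to 16 C:
increase = e_sat(16.0) / e_sat(15.0) - 1.0
print(f"{increase:.1%}")   # roughly 7% more water vapor per degree C
```

That few-percent-per-degree boost is why a modest CO2-driven warming gets amplified by water vapor without water vapor itself being the cause.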
Some other examples: Chapter 8, “Climate change is a Chinese hoax” – this is a funny one; climate science is a 200+ year-old European science. Chapter 12, “Climate change is just a theory” – see “evolution is just a theory”. Chapter 15, “There is still uncertainty about climate change” – that it is happening and that we are the cause is well established, but there is uncertainty about other related things. Chapter 36, “Glaciers aren’t melting, Antarctica is even gaining ice” – glaciers and sea ice are melting rapidly; Antarctica was gaining ice for four decades despite warming, but there are good explanations for this (for example, increased precipitation), and now Antarctica is losing ice. Chapter 43, “Electric cars aren’t that green” – they are much cleaner than gas cars, though how much cleaner depends on where you live. Chapter 49, “It’s too late for climate” – no, it isn’t.
So, as you can see, this is a fact-packed book addressing and correcting a lot of misconceptions. It is very educational and great for anyone ready to learn and understand. It is also well organized and well written. Reading this book will make you smarter, and I highly recommend it to anyone who is curious about this topic. I think we all have some misconceptions on this topic. Let’s correct them.
BEYOND DEBATE: Answers to 50 Misconceptions on Climate Change. Click on the image to go to the Amazon page for the Kindle version of the book.