
Tuesday, 20 June 2017

On the recent warming surge

"Incidentally, when in the journal Science in 2007 we pointed to the exceptionally large warming trend of the preceding 16 years, which was at the upper end of the [climate] model range, nobody cared, because there is no powerful lobby trying to exaggerate global warming."

"And of course in our paper we named natural intrinsic variability as the most likely reason. But when a trend at the lower end of the model range occurs it suddenly became a big issue of public debate, because that was pushed by the fossil fuel climate sceptics’ lobby. There is an interesting double standard there."

Maybe the comment was deleted in shame; at least I cannot find it any more. Someone on Reddit was making exactly the same errors as the fans of the infamous "hiatus" to argue that global warming since 2011 is exploding and we're soon gonna die. Deleted comments can be freely paraphrased.

The data

So let's have a look at what the data actually says. Below are warming curves of two surface temperature datasets and two upper air temperature satellite retrievals. I shortened the warming curve of Berkeley Earth to match its period to that of GISTEMP. All the other datasets are shown over their full lengths. Taking a step back and looking at the overview, there clearly is no "hiatus" and no "warming surge".

The red dots are the annual temperature anomalies, which are connected by a thin grey line. The long-term trend is plotted as a thick blue line, which is a LOESS estimate.
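For readers who want to reproduce such a curve: LOESS is just a series of local, weighted straight-line fits. Below is a minimal pure-NumPy sketch; the smoothing fraction is an illustrative choice, not the exact span used for the figures.

```python
import numpy as np

def loess(x, y, frac=0.3):
    """Minimal LOESS: for each point, fit a weighted straight line to
    the nearest `frac` share of the data (tricube weights) and take
    the fitted value at that point."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    k = max(int(np.ceil(frac * n)), 3)        # neighbourhood size
    smooth = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]               # k nearest neighbours
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        dx = x[idx] - x[i]                    # centre locally
        A = np.vstack([np.ones(k), dx]).T
        Aw = A * w[:, None]                   # weighted design matrix
        beta = np.linalg.solve(Aw.T @ A, Aw.T @ y[idx])
        smooth[i] = beta[0]                   # local fit at dx = 0
    return smooth
```

Applied to the annual anomalies this gives a smooth long-term trend line without assuming any particular functional form.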

If you want to see the "hiatus" you have to think the last two data points away and start in the temperature peak of 1998. Don't worry, I'll wait while you search for it.

Image
Image
Image
Image

A hiatus state of mind

After seeing reality, let's now get into the mindset of a climate "sceptic" claiming to have evidence for a "hiatus", but do this differently by looking at the data since 2011.

Naturally we only plot the recent part of the data, so that context is lost. I am sure the climate "sceptics" do not mind; they also prefer to make their "hiatus" plots start around the huge 1998 El Nino warming peak and not show the fast warming before 1998 for context.

Image
Image

The thick blue lines are quadratic functions fitted to the data. They fit snugly. As you can see both the linear and quadratic coefficients are statistically significant. So clearly the recent warming is very fast, faster than linear and we can soon expect to cross the 2 °C limit, right?
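For the curious, this is roughly what such a naive quadratic fit with coefficient t statistics looks like in code. The data below is made up for illustration, and a real analysis would at least account for autocorrelation before declaring anything significant.

```python
import numpy as np

def quadratic_fit(year, temp):
    """OLS fit of temp = b0 + b1*t + b2*t**2 and naive t statistics
    for the coefficients (assumes independent residuals)."""
    t = year - year.mean()                     # centre for conditioning
    X = np.vstack([np.ones_like(t), t, t ** 2]).T
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    resid = temp - X @ beta
    sigma2 = resid @ resid / (len(temp) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, beta / se          # |t| > ~2 looks "significant"
```

With a cherry-picked short period this test will happily bless a quadratic acceleration, which is exactly the mistake discussed above.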

I am sure the climate "sceptics" do not mind what I did. They also cherry picked a period for their "hiatus" and applied a naive statistical test as if they had not cherry picked the period at all. The climate "sceptics" agreeing with Christopher Monckton doing this on WUWT will surely not object to our Redditor doing the same and will conclude with him that the end is nigh.

By the way, I also cherry picked the dataset. The curves of the upper air temperature retrievals are not as smooth and the quadratic terms were not statistically significant. But in the case of the "hiatus" debate our "sceptical" friends also only showed the datasets with the least warming. So I am sure they do not object to this now.

If you look at the full period you can see that the variability of the temperature signal is much larger than the variability around the quadratic fit. It is thus clearly a complete coincidence that the curve is so smooth. But, well, the "hiatus" proponents also just look statistically at their cherry picked period and ignore the actual uncertainties (including slowly varying ones).

Some more knowledgeable catastrophists may worry that it looks as if 2017 may not be another record scorching year. No worries. Also no worries if 2018 is colder again. The Global Warming Policy Foundation thinks it is perfectly acceptable to ignore a few politically inconvenient years and claims that we just have to think 2015 and 2016 away and that thus the death of the "hiatus" has been greatly exaggerated. I kid you not. I have not seen any climate "sceptic" or any "luckwarmer" complaining about this novel statistical analysis method, so surely our Redditor can do the same. Catastrophic warming surge claims are safe till at least 2019.

Image

Don't pick cherries

Back to reality. What to do against cherry picking periods? My advice would be: don't cherry pick periods. Not for the global mean temperature, not for the temperature of the Antarctic Peninsula, not for any other climate variable. Just don't do it.

If you have a physical reason to expect a trend change, by all means use that date as start of the period to compute a trend. But for 1998 or 2011 there is no reason to expect a trend change.

Our cheerful Redditor even had a bit more physics in his claim. He said that the Arctic was warming and releasing more carbon dioxide and methane. However, these emissions were not large enough to speed up the increase in global greenhouse gas concentrations, and the concentrations are what count. From our good-natured climate "sceptics" I have never heard a physical explanation of why global warming would have stopped in 1998. But maybe I have missed their interest in what happens in the climate system.

If you have no reason to expect a trend change, the appropriate test is one for a trend change at an unknown date. Such a test "knows" that cherry picked periods can have hugely different trends and thus correctly only sees larger trend changes over longer periods as statistically significant. Applied to temperature data, such a test does not see any "hiatus" nor any "warming surge".
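A sketch of what such a test can look like: try every possible break date, take the largest improvement in fit from allowing the trend to change there, and compare that maximum against its null distribution from simulated noise. This is an illustration, not the exact method of the published change point analyses, and it uses white noise for simplicity; real temperature series are autocorrelated, which makes large break statistics even more likely by chance.

```python
import numpy as np

def max_break_stat(y):
    """Largest two-phase-regression statistic over all break dates:
    the relative reduction in residual sum of squares obtained by
    allowing the linear trend to change at that date."""
    n = len(y)
    t = np.arange(n, dtype=float)

    def rss(tt, yy):
        X = np.vstack([np.ones_like(tt), tt]).T
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        r = yy - X @ beta
        return r @ r

    rss_one_trend = rss(t, y)
    best = 0.0
    for k in range(5, n - 4):       # at least 5 points in each segment
        rss_two_trends = rss(t[:k], y[:k]) + rss(t[k:], y[k:])
        best = max(best, (rss_one_trend - rss_two_trends) / rss_two_trends)
    return best

def break_p_value(y, n_sim=1000, seed=0):
    """Monte Carlo p value for a trend change at an unknown date,
    using a white-noise null distribution (a strong simplification)."""
    rng = np.random.default_rng(seed)
    observed = max_break_stat(np.asarray(y, dtype=float))
    null = [max_break_stat(rng.standard_normal(len(y))) for _ in range(n_sim)]
    return float(np.mean(np.array(null) >= observed))
```

Because the maximum over all break dates is what gets tested, a cherry-picked short period that merely looks flat no longer counts as evidence.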

This test gave the right result during the entire "hiatus" madness period. Don't fool yourself. Use good stats.

Image

Related reading

Stefan Rahmstorf, Grant Foster and Niamh Cahill: Global temperature evolution: recent trends and some pitfalls. Environmental Research Letters, 2017. See also "Change points of global temperature".

Cranberry picking short-term temperature trends

Statistically significant trends - Short-term temperature trends are more uncertain than you probably think

How can the pause be both ‘false’ and caused by something?

Atmospheric warming hiatus: The peculiar debate about the 2% of the 2%

Monday, 24 April 2017

"Hiatus": Signal and Variability

Stefan Rahmstorf, Grant Foster and Niamh Cahill just summarized the statistical evidence for the mirage people call the "pause" of global warming in their new article: "Global temperature evolution: recent trends and some pitfalls."

The Open Access paper is clearly written; any natural scientist should be able to follow the arguments. The most important part may be a clear explanation of the statistical fallacies that lead some people to falsely claim there was such a thing as a "hiatus" or "slowdown".




Suppose that Einstein had stood up and said: I have worked very hard and I have discovered that Newton got everything right and I have nothing to add. Would anyone ever know who Einstein was? ... The idea that we would not want to be Einstein, if we could overturn global warming ... how exciting would that be? Of the tens of thousands of scientists there is not one who has the ego to do that? It's absurd, it is absolutely unequivocally absurd! We are people.


I have studied the "hiatus" problem hard (1, 2, 3, 4), read this new paper and I have nothing to add. Unfortunately.


Image


Well, okay, maybe one thing. Just because a trend change is not statistically significant does not mean you cannot study why it changed. It only means that you are likely looking at noise and thus will likely not find a reason. But if you think there may be a great reward in the result, that can make high-risk research worthwhile. Looking at how small the trend differences are and knowing how uncertain short-term trends are, I am not going to do it, but anyone else is welcome.


Image


That there was no decline in the long-term trend also does not mean that it is not interesting to study the noise around this trend. The biggest group in the World Climate Research Programme studies climate variability. That by itself shows how important it is.

This blog is called Variable Variability. I love variability. It is an intrinsic property of complex systems and its behaviour over temporal and spatial averaging scales can tell us a lot about the climate system. It also has large impacts. Droughts and floods fuelled by El Nino are just one example. It is a pity most people just want to average this away.


One man's noise may be another man's music


Now that we are taking the climate system into unknown territory, predictions of seasonal, annual and decadal variability have become even more important for planning ahead and protecting communities. Historian Sam White suggests that the problem of the Little Ice Age in Europe was not the cold winters, but the unpredictability of the weather. Better predictions will help a lot in coping with climate change and already produce useful results for the tropics.

Variability lovers of the world, let's stand up for the importance of our work and not try to faithlessly justify it with middle of the road research on overstudied averages.




Related reading

Science Media Centre asked three scientists for a reaction to the study: expert reaction to climate hiatus statistics

Cranberry picking short-term temperature trends

Statistically significant trends - Short-term temperature trends are more uncertain than you probably think

How can the pause be both ‘false’ and caused by something?

Atmospheric warming hiatus: The peculiar debate about the 2% of the 2%

Reference

Rahmstorf, Stefan, Grant Foster and Niamh Cahill, 2017: Global temperature evolution: recent trends and some pitfalls. Environmental Research Letters, 12, No. 5, https://doi.org/10.1088/1748-9326/aa6825.

Sunday, 5 February 2017

David Rose's alternative reality in the Daily Mail

Peek-a-boo! Joanna Krupa shows off her stunning figure in see-through mesh dress over black underwear
Bottoms up! Perrie Edwards sizzles in plunging leotard as Little Mix flaunt their enviable figures in skimpy one-pieces
Bum's the word! Lottie Moss flaunts her pert derriere in a skimpy thong as she strips off for steamy selfie

Sorry about those titles. They provide the fitting context right next to a similarly racy Daily Mail on Sunday piece of David Rose: "Exposed: How world leaders were duped into investing billions over manipulated global warming data". Another article on that "pause" thingy that mitigation skeptics do their best to pretend not to understand. For people in the fortunate circumstances not to know what the Daily Mail is, this video provides some context about this Murdoch "newspaper".

[UPDATE: David Rose's source says in an interview with E&E News on Tuesday: “The issue here is not an issue of tampering with data”. So I guess you can skip this post, except if you get pleasure out of seeing the English language being maltreated. But do watch the Daily Mail video below.

See also this article on the void left by the Daily Mail after fact checking. I am sure all integrity™-waving climate "skeptics" will condemn David Rose and never listen to him again.]



You can see this "pause" in the graph below of the global mean temperature. Can you find it? Well, you have to think those last two years away and then start the period exactly in that large temperature peak you see in 1998. It is not actually a thing; it is a consequence of cherry picking a period to get a politically convenient answer (for David Rose's paymasters).

Image

In 2013 Boyin Huang of NOAA and his colleagues created an improved sea surface dataset called ERSST.v4. No one cared about this new analysis. Normal good science. One of the "scandals" Rose uncovered was that NOAA is drafting an article on ERSST.v5.

But this post is unfortunately about nearly nothing, about the minimal changes in the top panel of the graph below. I feel the important panel is the lower one. It shows that in the raw data the globe seems to warm more. This is because before WWII many measurements were performed with buckets and the water in the bucket would cool a little due to evaporation before reading the thermometer. Scientists naturally make corrections for such problems (homogenization) and that helps make a more accurate assessment of how much the world actually warmed.

But Rose is obsessed with the top panel. I made the graph extra large, so that you can see the differences. The thick black line shows the new assessment (ERSST.v4) and the thin red line the previously estimated global temperature signal (ERSST.v3). Differences are mostly less than 0.05°C, both warmer and cooler. The "problem" is the minute change at the right end of the curves.

The mitigation skeptical movement was not happy when a paper in Science in 2015, Karl and colleagues (2015), pointed out that due to this update the "pause" is gone, even if you use the bad statistics the mitigation skeptics like. As I have said for many years now about political activists claiming this "pause" is highly important: if your political case depends on such minute changes, your political case is fragile.

Image

In the meantime a recent article in Science Advances by Zeke Hausfather and colleagues (2016) shows evidence that the updated dataset (ERSSTv4) is indeed better than the previous version (ERSSTv3b). They do so by comparing the ERSST dataset, which combines a large number of data sources, with datasets that each come from only one source (buoys, satellites (CCl) or ARGO). These single-source datasets are shorter, but free of trend uncertainties due to the combination of sources. The plot below shows that the ERSSTv4 update improves the fit with the other datasets.

Image

The trend change over the cherry-picked "pause" period was mostly due to the changes in the sea surface temperature of ERSST. Rose makes a lot of noise about the land data, where the update was inconsequential. As indicated in Karl and colleagues (2015), this was a beta-version dataset. The raw data was published: that is the data of the International Surface Temperature Initiative (ISTI). The homogenization method was also published, and it works well; I checked myself.

The dataset itself is not published yet. Just applying a known method to a known dataset is not a scientific paper. Too boring.

So for the paper NOAA put a lot of work into estimating the uncertainty due to the homogenization method. When developing a homogenization method you have to make many choices. For example, inhomogeneities are found by comparing a candidate station with multiple nearby reference stations. There are settings for how many reference stations to use and for how nearby they need to be. NOAA studied which of these settings are most important with a nifty new statistical method and varied them to see how much influence they have. I look forward to reading the final paper. I guess Rose will not read it and will stick to his role as suggestive interpreter of interpreters.

The update of NOAA's land data will probably remove a precious conspiracy of the mitigation skeptical movement. While, as shown above, the adjustments reduce our estimate for the warming of the entire world, the adjustments make the estimate for the warming over land larger. Mitigation skeptics like to show the adjustments for land data only to suggest that evil scientists are making global warming bigger.

This is no longer the case. A recommendable overview paper by Philip Jones, The Reliability of Global and Hemispheric Surface Temperature Records, analyzed the new NOAA dataset. The results for land are shown below. The new ISTI raw-data dataset shows more warming than the previous NOAA raw-data dataset. As a consequence the homogenization no longer changes the global mean appreciably; it arrives at about the same answer as before. Compare NOAA uncorrected (yellow line) with NOAA (red; homogenized).

Image

The main reason for the smaller warming in the old NOAA raw data was that this smaller dataset contained a higher percentage of airport stations. That is because airports report their data very reliably in near real time. Many of these airport stations were in cities before and cities are warmer than airports due to the urban heat island effect. Such relocations thus typically cause cooling jumps that are not related to global warming and are removed by homogenization.

So we have quite some irony here.
Still Rose sees a scandal in these minute updates and dubs it Climategate 2; I thought we were already at 3 or 4. In his typical racy style he calls data "wrong", "rogue", "biased". Scientists know that data is never perfect, which is why they do their best to assess the quality of the data, remove problems and make sure that the quality is good enough to make a certain statement. People like David Rose, in return, simultaneously pontificate about uncertainty monsters, assume data is perfect and then get the vapors when updates are needed.

Rose gets some suggestive quotes from an apparently disgruntled retired NOAA employee. The quotes themselves seem to be inconsequential procedural complaints; the corresponding insinuations seem to come from Rose.

I thought journalism had a rule that claims by a source need to be confirmed by at least a second source. I am missing any confirmation.

While Rose presents the employee as an expert on the topic, I have never heard of him. Peter Thorne, who worked at NOAA, confirms that the employee did not work with surface station data himself. He has a decent publication record, mainly on satellite climate datasets of clouds, humidity and radiation. Ironically, I keep using that word, he also has papers about the homogenization of his datasets, while homogenization is treated by the mitigation skeptical movement as the work of the devil. I am sure they are willing to forgive him his past transgressions this time.

It sounds as if he developed a set of procedures for his climate satellite data, which he really liked and wanted other groups at NOAA to adopt as well, and was frustrated when others did not give updating their existing procedures the priority he felt it deserved.

For David Rose this is naturally mostly about politics, and in his fantasies the Paris climate treaty would not have existed without the Karl and colleagues (2015) paper. I know that "pause" thingy is important for the Anglo-American mitigation skeptical movement, but let me assure Rose that the rest of the world considers all the evidence and does not make politics based on single papers.

[UPDATE: Some days you gotta love journalism: a journalist asked several of the diplomats who worked for years on the Paris climate treaty, they gave the answer you would expect: Contested NOAA paper had no influence on Paris climate deal. The answers still give an interesting insight into the sausage making. What is actually politically important.]

David Rose thus ends:
Has there been an unexpected pause in global warming? If so, is the world less sensitive to carbon dioxide than climate computer models suggest?
No, there never was an "unexpected pause." Even if there were, such a minute change is not important for the climate sensitivity. Most methods do not use the historical warming for that and those that do consider the full warming of about 1°C since the 19th century and not only short periods with unreliable, noisy short-term trends.

David Rose:
And does this mean that truly dangerous global warming is less imminent, and that politicians’ repeated calls for immediate ‘urgent action’ to curb emissions are exaggerated?
No, but thanks for asking.

Post Scriptum. Sorry that I cannot talk about all errors in the article of David Rose, if only because in most cases he does not present clear evidence and because this post would be unbearably long. The articles of Peter Thorne and Zeke Hausfather are mostly complementary on the history and regulations at NOAA and on the validation of NOAA's results, respectively.

Related information

Buzzfeed (October 2017): This Is How A Bogus Climate Story Becomes Unstoppable On Social Media

New York Times (September 2017): British Press Watchdog Says Climate Change Article Was Faulty

Two weeks later. The New York Times nails it, interviewing several former colleagues of NOAA retiree Bates: How an Interoffice Spat Erupted Into a Climate-Change Furor. "He’s retaliating. It’s like grade school ... At that meeting, Dr. Bates shouted that Ms. McGuirk was not trustworthy and belonged in jail, according to an internal log ..." Lock her up, lock her up, ...

Wednesday. The NOAA retiree now says: "The Science paper would have been fine had it simply had a disclaimer at the bottom saying that it was citing research, not operational, data for its land-surface temperatures." To me it was always clear it was research data, otherwise they would have cited a data paper and named the dataset. How a culture clash at NOAA led to a flap over a high-profile warming pause study

Tuesday. A balanced article from the New York Times: Was Data Manipulated in a Widely Cited 2015 Climate Study? Steve Bloom: "How "Climategate" should have been covered." Even better would be if mass media did not have to cover office politics on archival standards fabricated into a fake scandal.

Also on Tuesday, an interview by E&E News: 'Whistleblower' says protocol was breached but no data fraud. The disgruntled NOAA retiree: "The issue here is not an issue of tampering with data".

Associated Press: Major global warming study again questioned, again defended. "The study has been reproduced independently of Karl et al — that's the ultimate platinum test of whether a study is to be believed or not," McNutt said. "And this study has passed." Marcia McNutt, who was editor of Science at the time the paper was published and is now president of the National Academy of Sciences.

Daily Mail’s Misleading Claims on Climate Change. If I were David Rose I would give back my journalism diploma after this, but I guess he will not.

Monday. I hope I am not starting to bore people by saying that Ars Technica has the best science reporting on the world wide web. This time again. Plus inside scoop suggesting all of this is mainly petty office politics. Sad.

Sunday. Factcheck: Mail on Sunday’s ‘astonishing evidence’ about global temperature rise. Zeke Hausfather wrote a very complementary response, pointing out many problems of the Daily Mail piece that I had to skip. Zeke works at the Berkeley Earth Surface Temperature project, which produces one of the main global temperature datasets.

Sunday. Peter Thorne, climatology professor in Ireland, former NOAA employee and leader of the International Surface Temperature Initiative: On the Mail on Sunday article on Karl et al., 2015.

Phil Plait (Bad Astronomy) — "Together these show that Rose is, as usual, grossly exaggerating the death of global warming" — on the science and the politics of the Daily Mail piece: Sorry, climate change deniers, but the global warming 'pause' still never happened

You can download the future NOAA land dataset (GHCNv4-beta) and the land dataset used by Karl and colleagues (2015), h/t Zeke Hausfather.

The most accessible article on the topic rightly emphasizes the industrial production of doubt for political reasons: Mail on Sunday launches the first salvo in the latest war against climate scientists.

A well-readable older article on the study that showed that ERSST.v4 was an improvement: NOAA challenged the global warming ‘pause.’ Now new research says the agency was right.

One should not even have to answer the question, but: No, U.S. climate scientists didn't trick the world into adopting the Paris deal. A good complete overview at medium level.

Even fact checker Snopes sadly wasted its precious time: Did NOAA Scientists Manipulate Climate Change Data?
A tabloid used testimony from a single scientist to paint an excruciatingly technical matter as a worldwide conspiracy.

Carbon Brief Guest post by Peter Thorne on the upcoming ERSSTv5 dataset, currently under peer review: Why NOAA updates its sea surface temperature record.

Monday, 16 January 2017

Cranberry picking short-term temperature trends

Photo of cranberry fields


Monckton is a heavy user of this disingenuous "technique" and should thus know better: you cannot get just any trend, but people like Monckton unfortunately do have much leeway to deceive the population. This post will show that political activists can nearly always pick a politically convenient period to get a short-term trend that is smaller than the long-term trend. After this careful selection they can pretend to be shocked that scientists did not tell them about this slowdown in warming.

Traditionally this strategy of picking only the data you like is called "cherry picking". It is such a deplorably deceptive strategy that "cherry picking" sounds too nice to me. I would suggest calling it "cranberry picking", on the assumption that people only eat cranberries when the alternative, a burning pain while peeing, is even worse. Another good new name could be "wishful picking."

In a previous post, I showed that the uncertainty of short-term trends is huge, probably much larger than you think; the uncertainty monster can only stomach a few short-term trends for breakfast. Because of this large uncertainty the influence of cranberry picking is probably also larger than you think. Even I was surprised by the calculations. I hope the uncertainty monster does not upset his stomach; otherwise he will not get the uncertainties he needs to thrive.

Uncertainty monster made of papers

Size of short-term temperature fluctuations

To get some realistic numbers we first need to know how large the fluctuations around the long-term trend are. Thus let's first have a look at the size of these fluctuations in two surface temperature and two tropospheric temperature datasets:
  • the surface temperature of Berkeley Earth (formerly known as BEST),
  • the surface temperature of NASA-GISS: GISTEMP,
  • the satellite Temperature of the Total Troposphere (TTT) of Remote Sensing Systems (RSS),
  • the satellite Temperature of the Lower Troposphere (TLT version 6 beta) of the University of Alabama in Huntsville (UAH).
The four graphs below have two panels. The top panel shows the yearly average temperature anomalies over time as red dots. The Berkeley Earth data series starts earlier, but I only use data starting in 1880 because earlier data is too sparse and may thus not show actual climatic changes in the global mean temperature. For both surface temperature datasets the Second World War years are removed because their values are not reliable. The long-term trend is estimated using a LOESS smoother and shown as a blue line.

The lower panel shows the deviations from the long-term trend as red dots. The standard deviation of these fluctuations over the full period is written in red. The graphs for the surface temperatures also give the standard deviation of the deviations over the shorter satellite period, written in blue, for comparison with the satellite data. The choice of period does not make much difference.

Image

Image

Image

Image

Both tropospheric datasets have fluctuations with a typical size (standard deviation) of 0.14 °C. The standard deviation of the surface datasets varies a little depending on the dataset or period. For the rest of this post I will use 0.086 °C as a typical value for the surface temperature.

The tropospheric temperature clearly shows more short-term variability. This mainly comes from El Nino, which has a stronger influence on the temperature high up in the air than on the surface temperature. This larger noise level gives the impression that the trend in the tropospheric temperature is smaller, but the trend in the RSS dataset is actually about the same as the surface trend; see below.

Image

The trend in the preliminary UAHv6 temperature is currently lower than all the others. Please note that the changes from the previous version of UAH to the recent one are large and that the previous version of UAH showed more (recent) warming* and about the same trend as the other datasets.

Image

Uncertainty of short-term trends

Even without cranberry picking, short-term trends are problematic because of the strong influence of short-term fluctuations. While an average computed over 10 years of data is only about 3 times as uncertain as a 100-year average, a 10-year trend is about 32 times as uncertain as a 100-year trend.**
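The factor of about 32 follows directly from the least-squares formulas: the variance of a fitted trend scales roughly with 1/n³, so its standard error scales with n^(-3/2), and (100/10)^(3/2) ≈ 32. A quick check, idealising the data as annual values with independent unit noise:

```python
import numpy as np

def trend_se(n, sigma=1.0):
    """Standard error of an OLS trend through n equally spaced annual
    values with independent noise of standard deviation sigma."""
    t = np.arange(n) - (n - 1) / 2             # centred time axis
    return sigma / np.sqrt((t ** 2).sum())     # sqrt of slope variance

ratio_averages = np.sqrt(100 / 10)             # ~3.2: ratio for means
ratio_trends = trend_se(10) / trend_se(100)    # ~32: ratio for trends
```

So shortening the period by a factor of 10 makes the mean only about 3 times noisier, but the trend about 32 times noisier.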

To study how accurate a trend is you can generate random numbers and compute their trend. On average this trend will be zero, but due to the short-term fluctuations any individual realization will have some trend. By repeating this procedure often you can study how much the trend varies due to the short-term fluctuations, how uncertain the trend is, or more positively formulated: what the confidence interval of the trend is. See my previous post for details. I have done this for the graph below; for the satellite temperatures the random numbers have a standard deviation of 0.14 °C, for the surface temperatures 0.086 °C.

The graph below shows the confidence interval of the trends, which is two times the standard deviation of 10,000 trends computed from 10,000 series of random numbers. A 10-year trend of the satellite temperatures, which may sound like a decent period, has a whopping uncertainty of 3 °C per century.*** This means that even with no long-term trend the short-term trend will vary between -3 °C and +3 °C per century in 95% of the cases, and even more in the other 5%. That is the uncertainty from the fluctuations alone; there are additional uncertainties due to changes in the orbit, the local observation time of the satellite, calibration and so on.
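The Monte Carlo procedure itself fits in a few lines. With the 0.14 °C noise level of the tropospheric data it reproduces the roughly ±3 °C per century range for 10-year trends; the exact numbers will differ slightly from run to run because of the random draws.

```python
import numpy as np

def trend_range(n_years, sigma, n_sim=10_000, seed=0):
    """Two-sigma range, in degC per century, of trends fitted to pure
    noise: the trend that short-term fluctuations alone can produce."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_years, dtype=float)
    slopes = np.array([np.polyfit(t, rng.normal(0, sigma, n_years), 1)[0]
                       for _ in range(n_sim)])
    return 2 * slopes.std() * 100       # degC/year -> degC/century

satellite_10yr = trend_range(10, 0.14)  # close to 3 degC per century
```

The same function with sigma = 0.086 gives the correspondingly smaller range for the surface temperature trends.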

Image

Cherry picking the begin year

To look at the influence of cranberry picking, I generated series of 30 values, computed all possible trends between 10 and 30 years long and selected the smallest trend. The confidence intervals of these cranberry-picked satellite temperature trends are shown below in red. For comparison, the intervals for trends without cranberry picking, like above, are shown in blue. To show both cases clearly in the same graph, I have shifted both bars a little away from each other.
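In code the begin-year picking looks like this. The 0.14 °C noise level mimics the satellite data, and the series contain no trend at all; nevertheless the picked trend comes out negative in the large majority of cases.

```python
import numpy as np

def pick_smallest_trend(series, min_len=10):
    """Cranberry picking the begin year: among all periods of at least
    `min_len` points that run to the end of the series, return the
    smallest trend."""
    n = len(series)
    trends = []
    for start in range(n - min_len + 1):
        seg = series[start:]
        trends.append(np.polyfit(np.arange(len(seg), dtype=float), seg, 1)[0])
    return min(trends)

rng = np.random.default_rng(0)
picked = [pick_smallest_trend(rng.normal(0, 0.14, 30)) for _ in range(2000)]
frac_negative = float(np.mean(np.array(picked) < 0))   # close to 0.9
```

Without picking, half of the trends would be negative; the selection alone pushes that share up to the roughly 88% quoted below.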

Image

The situation is similar for the surface temperature trends. However, because the data is less noisy, the confidence intervals of the trends are smaller; see below.

Image

While the short-term trends without cranberry picking have a huge uncertainty, on average they are zero. With cranberry picking the average trends are clearly negative, especially for shorter trends, showing the strong influence of selecting a specific period. Without cranberry picking half of the trends are below zero, with cranberry picking 88% of the trends are negative.

Cherry picking the period

For some, the record temperatures of the last two years are not a sign that they were wrong to see a "hiatus". Some claim that there was something like a "pause" or a "slowdown" since 1998, but that it recently stopped. This claim gives even more freedom for cranberry picking: now the end year is cranberry picked as well. To see how bad this is, I again generated noise and selected the period lasting at least 10 years with the lowest trend, ending in the current year, one year earlier or two years earlier.

The graphs below compare the range of trends you can get with cranberry picking the begin and end year in green with "only" cranberry picking the begin year like before in red. With double cranberry picking 96% of the trends are negative and the trends are going down even more. (Mitigation skeptics often use this "technique" by showing an older plot, when the newer plot would not be as "effective".)
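Adding the end year to the picking only needs one more loop. Under the same pure-noise assumption as before, allowing the period to end in any of the last three years pushes the share of negative picked trends higher still, close to the 96% quoted above.

```python
import numpy as np

def double_cranberry_trend(series, min_len=10, end_slack=2):
    """Cranberry pick both ends: the smallest trend over any period of
    at least `min_len` points ending in one of the last 1+end_slack
    years of the series."""
    n = len(series)
    best = np.inf
    for end in range(n - end_slack, n + 1):
        for start in range(end - min_len + 1):
            seg = series[start:end]
            slope = np.polyfit(np.arange(len(seg), dtype=float), seg, 1)[0]
            best = min(best, slope)
    return best

rng = np.random.default_rng(0)
picked = [double_cranberry_trend(rng.normal(0, 0.14, 30)) for _ in range(1000)]
frac_negative = float(np.mean(np.array(picked) < 0))
```

Every extra degree of freedom handed to the picker makes the "trend" less meaningful.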

Image

Image

A negative trend in the above examples of random numbers without any trend would be comparable to a real dataset where a short-term trend is below the long-term trend. Thus by selecting the "right" period, political activists can nearly always claim that scientists talking about the long-term trend are exaggerating because they do not look at this highly interesting short period.

In US political practice, the cranberry picking will be worse. Activists will not only pick a period of their political liking, but also the dataset, variable, region, depth, season, or resolution that produces a graph that can be misinterpreted. The more degrees of freedom, the stronger the influence of cranberry picking.

Solutions

There are a few things you can do to protect yourself against making spurious judgements.

1. Use large datasets. You can see in the plots above that the influence of cranberry picking is much smaller for the longer trends. The difference between the blue confidence intervals for a typical 30-year period and the red confidence intervals for a cranberry-picked 30-year period is small. Had I generated series of 50 random numbers rather than 30, this would likely have shown a larger effect of cranberry picking on 30-year trends, but still a lot smaller than on 10-year trends.

2. Only make statistical tests for relationships you expect to exist. This limits your freedom and the chance that one of the many possible statistical tests is spuriously significant. If you make 100 statistical tests of pure noise, 5 of them will on average be spuriously significant.

There was no physical reason for global warming to stop or slow down after 1998. No one computed the trend since 1998 because they had a reason to expect a change. They computed it because their eyes had seen something; that makes the trend test cranberry picking by definition. The absence of a reason should have made people very careful. The more so because there was a good reason to expect spurious results starting in a large El Nino year.

3. Study the reasons for the relationship you found. Even if I had wrongly seen the statistical evidence for a trend decrease as credible, I would not have made a big point of it before I had understood the reason for this trend change. In the "hiatus" case the situation was even reversed: it was clear from the beginning that most of the fluctuations that gave the appearance of a "hiatus" in the eyes of some were due to El Nino. Thus there was a perfectly fine physical reason not to claim that there was a change in the trend.

There is currently a strong decline in global sea ice extent. Before I cry wolf and accuse scientists of fraud and of understating the seriousness of climate change, I would like to understand why this decline happened.

4. Use the right statistical test. People have compared the trends before and after 1998 and their uncertainties. These trend uncertainties are not valid for cherry-picked periods. In this case, the right test would have been one for a trend change at an unknown position/year. There was no physical reason to expect a real trend change in 1998; thus the statistical test should take into account that the actual reason you are making the test is that your eye sampled all possible years.

We cannot do much against activists doing this kind of thing, except try to inform their readers how deceptive this strategy is. For example, by linking to this post. Hint, hint.

Let me leave you with a classic Potholer54 video delicately mocking Monckton's cranberry picking to get politically convenient global cooling and melting ice trends.






Related reading

Richard Telford on the Monckton/McKitrick definition of a "hiatus", which nearly always gives you one: Recipe for a hiatus

Tamino: Cherry p

Statistically significant trends - Short-term temperature trends are more uncertain than you probably think

How can the pause be both ‘false’ and caused by something?

Atmospheric warming hiatus: The peculiar debate about the 2% of the 2%

Temperature trend over last 15 years is twice as large as previously thought because much warming was over Arctic where we have few measurements

Why raw temperatures show too little global warming


* The common baseline period of UAH5.6 and UAH6.0 is 1981-2010.

** These uncertainties are for Gaussian white noise.

*** I like the unit °C per century for trends even if the period of the trend is shorter. You get rounder numbers and it is easier to compare the trends to the warming we have seen in the last century and expect to see in the next one.

**** The code to compute the graphs of this post can be downloaded here.

***** Photo of cranberry field by mrbanjo1138 used under a Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0) license.

Sunday, 8 January 2017

Much ado about NOAAthing


I know NOAAthing.

This post is about nothing. Nearly nothing. But when I found this title I had to write it.

Once upon a time in America there were some political activists who claimed that global warming had stopped. These were the moderate voices, with many people in this movement saying that an ice age is just around the corner. Others said global warming paused, hiatused or slowed down. I feel that good statistics has always shown this idea to be complete rubbish (Foster and Abraham, 2015; Lewandowsky et al., 2016), but at least in 2017 it should be clear that it is nothing, nothing whatsoever. It is interpreting noise. More kindly: interpreting variability, mostly El Nino variability.

Even if you disingenuously cherry-pick the hot El Nino year 1998 as the first year of your trend to get a smaller trend, the short-term trend is about the same size as the long-term trend now that 2016 is another hot El Nino year to balance out the first crime. Zeke Hausfather tweeted about the graph below: "You keep using that word, "pause". I do not think it means what you think it means." #CulturalReference

Image

In 2015 Boyin Huang of NOAA and his colleagues created an improved sea surface temperature dataset called ERSST.v4. No one cared about this new analysis. Normal good science.




Thomas Karl of NOAA and his colleagues showed what the update means for the global temperature (ocean and land). The interesting part is the lower panel. It shows that the adjustments make global warming smaller by about 0.2°C. Climate data scientists naturally knew this and I blogged about it before, but I think the Karl paper was the first time this was shown in the scientific literature. (The adjustments are normally shown for the individual land or ocean datasets.)

But this post is unfortunately about nearly nothing, about the minimal changes in the top panel of the graph below. I made the graph extra large, so that you can see the differences. The thick black line shows the new assessment (ERSST.v4) and the thin red line the previously estimated global temperature signal (ERSST.v3). Differences are mostly less than 0.05°C, both warmer and cooler. The "problem" is the minute change at the right end of the curves.

Image

The new paper by Zeke Hausfather and colleagues now shows evidence that the updated dataset (ERSSTv4) is indeed better than the previous version (ERSSTv3b). It is a beautifully done study of high technical quality. They do so by comparing the ERSST dataset, which comes from a large number of data sources, with data that comes from only one source (buoys, satellites (CCl) or ARGO). These single-source datasets are shorter, but do not have the trend uncertainties that come from combining multiple sources.

Image

The recent trend of HadSST also seems to be too small, and to a lesser extent so does that of COBE-SST. This problem with HadSST was known, but not yet published. The warm bias of ships that measure SST at their engine room intake has been getting smaller over the last decade. The reason for this is not yet clear. The main contender seems to be that the fleet has become more actively managed and (typically warm) bad measurements have been discontinued.

ERSST also uses ship data, but gives it a much smaller weight than the buoy data. That makes this problem less visible in ERSST. Prepare for a small warming update of recent temperatures once this problem is better understood and corrected for. And prepare for the predictable cries of the mitigation skeptical movement and their political puppets.

Image

Karl and colleagues showed that, as a consequence of the minimal changes in ERSST, if you compute a trend starting in 1998, this trend is statistically significant. In the graph below you can see in the left global panel that the old version of ERSST (circles) had a 90% confidence interval (vertical line) that includes zero (not statistically significantly different from zero), while the confidence interval of the updated dataset does not (statistically significant).

Image

Did I mention that such a cherry-picked begin year is a very bad idea? The right statistical test is one for a trend change at an unknown year. This test provides no evidence whatsoever for a recent trend change.

That the trend in Karl and colleagues was statistically significant should thus not have mattered: nothing could be worse than defining a "hiatus" period as one where the confidence interval of a trend includes zero. However, this is the definition public speaker Christopher Monckton uses for his blog posts at Watts Up With That, a large blog of the mitigation skeptical movement. Short-term trends are very uncertain; their uncertainty increases very fast as the period gets shorter. Thus if your period is short enough, you will find a trend whose confidence interval includes zero.

You should not do this kind of statistical test in the first place because of the inevitable cherry picking of the period, but if you want to statistically test whether the long-term trend suddenly dropped, the test should have the long-term trend as null hypothesis. This is the 21st century: we understand the physics of man-made global warming, we know it should be warming, and it would be enormously surprising and without any explanation if "global warming had stopped". Thus continued warming is the thing that should be disproven, not a flat trend line. Good luck doing so for such short periods given how enormously uncertain short-term trends are.



The large uncertainty also means that cherry picking a specific period to get a low trend has a large impact. I will show this numerically in an upcoming post. The methods to compute a confidence interval are for a randomly selected period, not for a period that was selected to have a low trend.

Concluding, we have something that does not exist, but which was made into a major talking point of the mitigation skeptical movement. This movement put its credibility on fluctuations that produced a minor short-term trend change that was not statistically significant. The deviation was also so small that the claim placed unfounded confidence in the perfection of the data.

The inevitable happened and small corrections needed to be made to the data. After this, even disingenuous cherry-picking and bad statistics were no longer enough to support the talking point. As a consequence Lamar Smith of TX21 abused his Washington power to punish politically inconvenient science. Science that was confirmed this week. This should all have been politically irrelevant because the statistics were wrong all along. It is politically irrelevant by now because the new El Nino produced record temperatures in 2016 and even cherry-picking 1998 as the begin year is no longer enough.


"Much Ado About Nothing is generally considered one of Shakespeare's best comedies because it combines elements of mistaken identities, love, robust hilarity with more serious meditations on honour, shame, and court politics."
(Yes, I get my culture from Wikipedia.)


To end on a positive note, if you are interested in sea surface temperature and its uncertainties, we just published a review paper in the Bulletin of the American Meteorological Society: "A call for new approaches to quantifying biases in observations of sea-surface temperature." It focuses on ideas for future research and on how the SST community can make it easier for others to join the field and work on improving the data.

Another good review paper on the quality of SST observations is: "Effects of instrumentation changes on sea surface temperature measured in situ" and also the homepage of HadSST is quite informative. For more information on the three main sea surface temperature datasets follow these links: ERSSTv4, HadSST3 and COBE-SST. Thanks to John Kennedy for suggesting the links in this paragraph.

Do watch the clear video below where Zeke Hausfather explains the study and why he thinks recent ocean warming used to be underestimated.





Related reading

The op-ed by the authors Kevin Cowtan and Zeke Hausfather is probably the best article on the study: Political Investigation Is Not the Way to Scientific Truth. Independent replication is the key to verification; trolling through scientists' emails looking for out-of-context "gotcha" statements isn't.

Scott K. Johnson in Ars Technica (a reading recommendation for science geeks by itself): New analysis shows Lamar Smith’s accusations on climate data are wrong. It wasn't a political plot—temperatures really did get warmer.

Phil Plait (Bad Astronomy) naturally has a clear explanation of the study and the ensuing political harassment: New Study Confirms Sea Surface Temperatures Are Warming Faster Than Previously Thought

The take of the UK MetOffice, producers of HadSST, on the new study and the differences found for HadSST: The challenge of taking the temperature of the world’s oceans

Hotwhopper is your explainer if you like your stories with a little snark: The winner is NOAA - for global sea surface temperature

Hotwhopper follow-up: Dumb as: Anthony Watts complains Hausfather17 authors didn't use FUTURE data. With such a response to the study it is unreasonable to complain about snark in the response.

The Christian Science Monitor gives a good non-technical summary: Debunking the myth of climate change 'hiatus': Where did it come from?

I guess it is hard for a journalist to write that the topic they cover is not important. Chris Mooney at the Washington Post claims Karl and colleagues is important: NOAA challenged the global warming ‘pause.’ Now new research says the agency was right.

Climate Denial Crock of the Week with Peter Sinclair: New Study Shows (Again): Deniers Wrong, NOAA Scientists Right. Quotes from several articles and has good explainer videos.

Global Warming ‘Hiatus’ Wasn’t, Second Study Confirms

The guardian blog by John Abraham: New study confirms NOAA finding of faster global warming

Atmospheric warming hiatus: The peculiar debate about the 2% of the 2%

No! Ah! Part II. The return of the uncertainty monster

How can the pause be both ‘false’ and caused by something?

References

Grant Foster and John Abraham, 2015: Lack of evidence for a slowdown in global temperature. US CLIVAR Variations, Summer 2015, 13, No. 3.

Zeke Hausfather, Kevin Cowtan, David C. Clarke, Peter Jacobs, Mark Richardson, Robert Rohde, 2017: Assessing recent warming using instrumentally homogeneous sea surface temperature records. Science Advances, 04 Jan 2017.

Boyin Huang, Viva F. Banzon, Eric Freeman, Jay Lawrimore, Wei Liu, Thomas C. Peterson, Thomas M. Smith, Peter W. Thorne, Scott D. Woodruff, and Huai-Min Zhang, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons. Journal Climate, 28, pp. 911–930, doi: 10.1175/JCLI-D-14-00006.1.

Thomas R. Karl, Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, Huai-Min Zhang, 2015: Possible artifacts of data biases in the recent global surface warming hiatus. Science. doi: 10.1126/science.aaa5632.

Lewandowsky, S., J. Risbey, and N. Oreskes, 2016: The “Pause” in Global Warming: Turning a Routine Fluctuation into a Problem for Science. Bull. Amer. Meteor. Soc., 97, 723–733, doi: 10.1175/BAMS-D-14-00106.1.

Wednesday, 30 November 2016

Statistically significant trends - Short-term temperature trends are more uncertain than you probably think

Image
Yellowknife, Canada, where the annual mean temperature is zero degrees Celsius.

In times of publish or perish, it can be tempting to put "hiatus" in your title and publish an average article on climate variability in one of the prestigious Nature journals. But my impression is that this does not explain all of the enthusiasm for short-term trends. Humans are greedy pattern detectors: it is better to see a tiger, a conspiracy or a trend change one time too often than one time too little. Thus maybe humans have a tendency to see significant trends where statistics keeps a cooler head.

Whatever the case, I expect that many scientists, too, will be surprised to see how large the difference in uncertainty is between long-term and short-term trends. However, I will start with the basics, hoping that everyone can understand the argument.

Statistically significant

That something is statistically significant means that it is unlikely to happen due to chance alone. When we call a trend statistically significant, it means that it is unlikely that there is no real trend and that the trend you see arose by chance. Thus to study whether a trend is statistically significant, we need to study how large a trend can be when we draw random numbers.

For each of the four plots below, I drew ten random numbers and then computed the trend. This could be 10 years of the yearly average temperature in [[Yellowknife]]*. Random numbers do not have a trend, but as you can see, a realisation of 10 random numbers appears to have one. These trends may be non-zero, but they are not significant.

Image

If you draw 10 numbers and compute their trends many times, you can see the range of trends that are possible in the left panel below. On average these trends are zero, but a single realisation can easily have a trend of 0.2. Even higher values are possible with a very small probability. The statistical uncertainty is typically expressed as a confidence interval that contains 95% of all points. Thus even when there is no trend, there is a 5% chance that the data has a trend that is wrongly seen as significant.**
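That roughly 5% of pure-noise series pass a 5% significance test can be checked directly. The sketch below, assuming Gaussian white noise and an illustrative number of trials, uses the standard t-test for a regression slope (the critical value 2.306 is for 8 degrees of freedom, i.e. 10 points minus 2 fitted parameters):

```python
import numpy as np

def slope_t_statistic(y):
    """t statistic of the ordinary least squares slope of y against 0..n-1."""
    n = len(y)
    x = np.arange(n)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    sxx = np.sum((x - x.mean()) ** 2)
    stderr = np.sqrt(residuals @ residuals / (n - 2) / sxx)
    return slope / stderr

rng = np.random.default_rng(3)
t_crit = 2.306  # two-sided 5% critical value of Student's t, 8 degrees of freedom
n_trials = 5000
n_sig = sum(abs(slope_t_statistic(rng.standard_normal(10))) > t_crit
            for _ in range(n_trials))
print(n_sig / n_trials)  # close to 0.05
```

Even though none of the series has a real trend, about one in twenty of them shows a "significant" one, exactly as the test promises.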

If you draw 20 numbers, 20 years of data, the right panel shows that those trends are already quite a lot more accurate; there is much less scatter.

Image

To have a look at the trend errors for a range of different lengths of the series, the above procedure was repeated for lengths between 5 and 140 random numbers (or years) in steps of 5 years. The confidence interval of the trend for each of these lengths is plotted below. For short periods the uncertainty in the trend is enormous. It shoots up.

Image

In fact, the confidence range for short periods shoots up so fast that it is hard to read the plot. Thus let's show the same data with different (double-logarithmic) axes in the graph below. Then the relationship looks like a line. That shows that the size of the confidence interval is a power-law function of the number of years.

The exponent is -1.5. As an example, that means that the confidence interval of a ten-year trend is 32 (10^1.5) times as large as that of a hundred-year trend.
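The exponent can be recovered numerically. This is a minimal sketch with Gaussian white noise; the series lengths and repeat counts are illustrative, not the settings used for the figures:

```python
import numpy as np

rng = np.random.default_rng(7)
lengths = np.array([10, 20, 40, 80, 160])
widths = []  # width of the 95% range of the trend for each series length
for n in lengths:
    x = np.arange(n)
    slopes = [np.polyfit(x, rng.standard_normal(n), 1)[0] for _ in range(3000)]
    widths.append(np.percentile(slopes, 97.5) - np.percentile(slopes, 2.5))

# The slope of a straight-line fit in log-log space is the power-law exponent.
exponent = np.polyfit(np.log(lengths), np.log(widths), 1)[0]
print(exponent)  # close to -1.5
```

The fitted exponent comes out very close to -1.5, confirming the steep power law for trend uncertainty.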

Image

Some people looking at the global mean temperature increase plotted below claim to see a hiatus between the years 1998 and 2013. A few years ago I could imagine people thinking: that looks funny, let's make a statistical test whether there is a change in the trend. But when the answer then clearly is "No, no way", and the evidence shows it is "mostly just short-term fluctuations from El Nino", I find it hard to understand why people believe in this idea so strongly that they defend it against this evidence.

Especially now it is so clear, without any need for statistics, that there never was anything like a "hiatus". Still, some people claim there was one, but that it stopped. I have no words. Really, I am not faking this, dear colleagues. I am at a loss.

Maybe people look at the graph below and think, well that "hiatus" is ten percent of the data and intuit that the uncertainty of the trend is only 10 times as large, not realising that it is 32 times.

Image

Maybe people use their intuition from computing averages; the uncertainty of a ten-year average is only about 3 times as large as that of a 100-year average. That is a completely different game.

The plots below for the uncertainty in the average are made in the same way as the above plots for the trend uncertainty. Also here more data is better, but the function is much less steep. Plots of power laws always look very similar; you need to compare the axes or the computed exponent, which in this case is only -0.5.

Image

Image

It is typical to use 30-year periods to study the climate. These so-called climate normals were introduced around 1900, at a time when the climate was more or less stable and needed to be described for agriculture, geography and the like. It is sometimes argued that you need at least 30 years of data to compute climate trends; that is not a bad rule of thumb and would avoid a lot of nonsense, but the 30-year periods were not intended as periods over which to compute trends. Given how bad people's intuition apparently is, there seems to be no alternative to formally computing the confidence interval.

That short-term trends have such a large uncertainty also provides some insight into the importance of homogenisation. The typical time between two inhomogeneities is 15 to 20 years for temperature. The trend over the homogeneous subperiods between two inhomogeneities is thus very uncertain and not that important for the long-term trend. What counts is the trend of the averages of the homogeneous subperiods.

That insight makes you want to be sure you do a good job when homogenising your data, rather than mindlessly assuming everything will be alright and the raw data is good enough. Neville Nicholls wrote about how he started working on homogenisation:
When this work began 25 years or more ago, not even our scientist colleagues were very interested. At the first seminar I presented about our attempts to identify the biases in Australian weather data, one colleague told me I was wasting my time. He reckoned that the raw weather data were sufficiently accurate for any possible use people might make of them.
Sad.

[UPDATE: In part 2 of this series, I show how these large trend uncertainties in combination with the deceptive strategy of "cherry-picking" a specific period very easily produces a so-called "hiatus".]


Related reading

How can the pause be both ‘false’ and caused by something?

Atmospheric warming hiatus: The peculiar debate about the 2% of the 2%

Sad that for Lamar Smith the "hiatus" has far-reaching policy implications

Temperature trend over last 15 years is twice as large as previously thought

Why raw temperatures show too little global warming

Notes

* In Yellowknife the annual mean temperature is about zero degrees Celsius. Locally the standard deviation of annual temperatures is about 1°C. Thus I could conveniently use the normal distribution with zero mean and standard deviation one. The global mean temperature has a much smaller standard deviation of its fluctuations around the long-term trend.
** Rather than calling something statistically significant and thus only communicating whether the probability was below 5% or not, it is fortunately becoming more common to simply give the probability (p-value). In the past this was hard to compute and people compared their computation to the 5% levels given in statistical tables in books. With modern numerical software it is easy to compute the p-value itself.
*** Here is the cleaned R code used to generate the plots of this post.


The photo of Yellowknife at the top is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

Wednesday, 17 June 2015

Did you notice the recent anti-IPCC article?

You may have missed the latest attack on the IPCC, because the mitigation sceptics did not celebrate it. Normally they like to claim that the job of scientists is to write IPCC-friendly articles. Maybe because that is the world they know: that is how their think tanks function, that is what they would be willing to do for their political movement. The claim is naturally wrong and it illustrates that they are either willing to lie for their movement or do not have a clue how science works.

It is the job of a scientist to understand the world better and thus to change the way we currently see the world. It is the fun of being a scientist to challenge old ideas.

The case in point last week was naturally the new NOAA assessment of the global mean temperature trend (Karl et al., 2015). The new assessment only produced minimal changes, but NOAA made that interesting by claiming the IPCC was wrong about the "hiatus". The abstract boldly states:
Here we present an updated global surface temperature analysis that reveals that global trends are higher than reported by the IPCC ...
The introduction starts:
The Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report concluded that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years [1998-2012] than over the past 30 to 60 years.” ... We address all three of these [changes in the observation methods], none of which were included in our previous analysis used in the IPCC report.
Later Karl et al. write, that they are better than the IPCC:
These analyses have surmised that incomplete Arctic coverage also affects the trends from our analysis as reported by IPCC. We address this issue as well.
To stress the controversy they explicitly use the IPCC periods:
Our analysis also suggests that short- and long-term warming rates are far more similar than previously estimated in IPCC. The difference between the trends in two periods used in IPCC (1998-2012 and 1951-2012) is an illustrative metric: the trends for these two periods in the new analysis differ by 0.043°C/dec compared to 0.078°C/dec in the old analysis reported by IPCC.
The final punchline goes:
Indeed, based on our new analysis, the IPCC’s statement of two years ago – that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years” – is no longer valid.
And they make the IPCC periods visually stand out in their main figure.

Image
Figure from Karl et al. (2015) showing the trend difference for the old and new assessment over a number of periods, the IPCC periods and their own. The circles are the old dataset, the squares the new one and the triangles depict the new data with interpolation of the Arctic data gap.

This is a clear example of scientists attacking the orthodoxy, because it is done so blatantly. Normally scientific articles do this more subtly, which has the disadvantage that the public does not notice it happening. Normally scientists would mention the old work casually; often they expect their colleagues to know which specific studies are (partially) criticised. Maybe NOAA found it easier to use this language this time because they did not write about a specific colleague, but about a group, and a strong group at that.

Image
Figure SPM.1. (a) Observed global mean combined land and ocean surface temperature anomalies, from 1850 to 2012 from three data sets. Top panel: annual mean values. Bottom panel: decadal mean values including the estimate of uncertainty for one dataset (black). Anomalies are relative to the mean of 1961−1990. (b) Map of the observed surface temperature change from 1901 to 2012 derived from temperature trends determined by linear regression from one dataset (orange line in panel a).
The attack is also somewhat unfair. The IPCC clearly stated that it is not a good idea to focus on such short periods:
In addition to robust multi-decadal warming, global mean surface temperature exhibits substantial decadal and interannual variability (see Figure SPM.1). Due to natural variability, trends based on short records are very sensitive to the beginning and end dates and do not in general reflect long-term climate trends. As one example, the rate of warming over the past 15 years (1998–2012; 0.05 [–0.05 to 0.15] °C per decade), which begins with a strong El Niño, is smaller than the rate calculated since 1951 (1951–2012; 0.12 [0.08 to 0.14] °C per decade)
What the IPCC missed in this case is that the problem goes beyond natural variability: another problem is whether the data quality is high enough to talk about such subtle variations.

The mitigation sceptics may have missed that NOAA attacked the IPCC consensus because the article also attacked the one thing they somehow hold dear: the "hiatus".

I must admit that I originally thought that the emphasis the mitigation sceptics put on the "hiatus" was because they mainly value annoying "greenies" and what better way to do so than to give your most ridiculous argument. Ignore the temperature rise over the last century, start your "hiatus" in a hot super El Nino year and stupidly claim that global warming has stopped.

But they really cling to it, they already wrote well over a dozen NOAA protest posts at WUWT, an important blog of the mitigation sceptical movement. The Daily Kos even wrote: "climate denier heads exploded all over the internet."

This "hiatus" fad provided Karl et al. (2015) the public interest — or interdisciplinary relevance, as these journals call it — that made it a Science paper. Without the weird climate "debate", it would have been an article for a good climate journal. Without challenging the orthodoxy, it would have been an article for a simple data journal.

Let me close this post with a video of Richard Alley explaining, even more enthusiastically than usual, what drives (climate) scientists. Hint: it ain't parroting the IPCC. (Even if their reports are very helpful.)
Suppose Einstein had stood up and said, I have worked very hard and I have discovered that Newton is right and I have nothing to add. Would anyone ever know who Einstein was?







Further reading

My draft was already written before I noticed that at Real Climate Stefan Rahmstorf had written: Debate in the noise.

My previous post on the NOAA assessment asked the question whether the data is good enough to see something like a "hiatus" and stressed the need for climate data sharing and for building up a global reference network. It was frivolously called: No! Ah! Part II. The return of the uncertainty monster.

Zeke Hausfather: Whither the pause? NOAA reports no recent slowdown in warming. This post provides a comprehensive, well-readable (I think) overview of the NOAA article.

How climatology treats sceptics. My experience fits to what you would expect.

References

IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp, doi: 10.1017/CBO9781107415324.

Thomas R. Karl, Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, Huai-Min Zhang, 2015: Possible artifacts of data biases in the recent global surface warming hiatus. Science. doi: 10.1126/science.aaa5632.

Boyin Huang, Viva F. Banzon, Eric Freeman, Jay Lawrimore, Wei Liu, Thomas C. Peterson, Thomas M. Smith, Peter W. Thorne, Scott D. Woodruff, and Huai-Min Zhang, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons. Journal Climate, 28, pp. 911–930, doi: 10.1175/JCLI-D-14-00006.1.

Rennie, Jared, Jay Lawrimore, Byron Gleason, Peter Thorne, Colin Morice, Matthew Menne, Claude Williams, Waldenio Gambi de Almeida, John Christy, Meaghan Flannery, Masahito Ishihara, Kenji Kamiguchi, Abert Klein Tank, Albert Mhanda, David Lister, Vyacheslav Razuvaev, Madeleine Renom, Matilde Rusticucci, Jeremy Tandy, Steven Worley, Victor Venema, William Angel, Manola Brunet, Bob Dattore, Howard Diamond, Matthew Lazzara, Frank Le Blancq, Juerg Luterbacher, Hermann Maechel, Jayashree Revadekar, Russell Vose, Xungang Yin, 2014: The International Surface Temperature Initiative global land surface databank: monthly temperature data version 1 release description and methods. Geoscience Data Journal, 1, pp. 75–102, doi: 10.1002/gdj3.8.