Statistics and measurement: The climate brouhaha

In science, measurement matters. Issues of precision and accuracy must be carefully noted as qualifications when evaluating what measurements mean. Statistics help pull meaning from measurements by finding a signal in the noise, and here again great care must be taken to make sure that the methods and mathematics are sound. Mann’s Hockey Stick tale is an example of how these issues of precision, accuracy, and proper aggregation of measurements can be distorted and twisted to suit a particular purpose. Climatology seems rife with such pollution. Here are two examples.

John Hinderaker starts off with a tale about dueling temperature charts. The charts in question show average temperatures over the period from 1880 to the present. One plots the average temperature for each year on a scale from -10 to 110 degrees Fahrenheit. This is the nominal range of variation found in the measurements, and on that scale the average temperature over the years is remarkably flat. The other graph, for comparison and contrast, plots the temperature anomaly for the same period on a scale spanning about 2.5 degrees. That graph emphasizes changes by using statistics to narrow the range of the measurements and then displaying differences on a scale that is only about 2% of the variation found in the source data.
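The scaling effect described above can be sketched with a few lines of Python. The yearly averages here are made-up illustrative numbers, not actual data; the point is only the arithmetic of the two axis spans:

```python
# Minimal sketch (hypothetical numbers) of how the same yearly averages look
# on an absolute Fahrenheit scale versus as anomalies from a baseline.
yearly_avg_f = [57.1, 57.3, 56.9, 57.5, 57.8, 58.0]  # made-up yearly averages, deg F

# Anomalies: each year's departure from the period mean.
baseline = sum(yearly_avg_f) / len(yearly_avg_f)
anomalies = [round(t - baseline, 2) for t in yearly_avg_f]

absolute_axis_span = 110 - (-10)  # the -10 to 110 deg F chart
anomaly_axis_span = 2.5           # the 2.5-degree anomaly chart

# The anomaly chart's axis covers only about 2% of the absolute chart's axis,
# so the same wiggles are visually stretched by roughly this factor:
magnification = absolute_axis_span / anomaly_axis_span
print(magnification)  # 48.0
print(anomalies)      # small departures that look flat on the -10..110 scale
```

The underlying numbers are identical in both presentations; only the axis changes, which is precisely the contrast Hinderaker's two charts are drawing.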

Steve wrote here about the global temperature chart that presented conventional data in a normal way, and therefore aroused the ire of climate alarmists, who deemed the graph “misleading” because it didn’t look scary enough.

Dickson puts two charts side by side, one showing temperatures, the other showing temperature anomalies from a presumed base, on a very small scale so that purported changes are greatly magnified:

A fundamental problem is that the alleged changes that are depicted in magnified form are in fact minute in relation to the uncertainty that goes into their measurement and calculation.

The first problem with temperature measurements is how to qualify them for differences in siting and instrumentation. Anthony Watts got into that with his census of U.S. surface weather observation stations. A second problem is how to calculate an average temperature. This is often done by taking the midpoint of the daily maximum and minimum for each day and then averaging those values over the period. Heating and cooling degree days might be a better choice, but that only highlights how distant the temperature averages are from anything actually useful. A third problem is the sampling of locations. Temperature stations are widely dispersed and concentrated in urban environments, which does not provide even a good measure of surface temperatures, much less atmospheric temperatures. Data selection gets in here as well: there have been stories recently about climate alarmists dismissing satellite data as flawed because it does not show the desired increase in atmospheric temperatures.
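The two summary methods mentioned above can be compared directly. This is a sketch with hypothetical readings, using the conventional 65 °F base for U.S. degree-day accounting; the function names are mine, not from any source:

```python
# Sketch (hypothetical readings) of two ways to summarize a day's temperature:
# the common (max + min) / 2 daily average, and heating/cooling degree days,
# which measure departure from a base temperature tied to energy demand.
DEGREE_DAY_BASE_F = 65.0  # conventional base for U.S. degree-day accounting

def daily_mean(t_max, t_min):
    """The usual 'average temperature': midpoint of the daily extremes."""
    return (t_max + t_min) / 2

def degree_days(t_max, t_min, base=DEGREE_DAY_BASE_F):
    """Heating and cooling degree days computed from the same two readings."""
    mean = daily_mean(t_max, t_min)
    heating = max(0.0, base - mean)  # degrees below base: heating demand
    cooling = max(0.0, mean - base)  # degrees above base: cooling demand
    return heating, cooling

# A 30/50 deg F day and a 60/80 deg F day produce very different demand
# figures, a distinction a plain average of averages tends to wash out.
print(daily_mean(50, 30))   # 40.0
print(degree_days(50, 30))  # (25.0, 0.0)  -> heating day
print(degree_days(80, 60))  # (0.0, 5.0)   -> mild cooling day
```

The point of the comparison is that degree days at least connect the readings to something consequential (energy demand), whereas a long-run average of daily midpoints is several abstractions removed from anything a person experiences.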

Next up is from Luboš Motl: “When religious beliefs trump one’s life,” about “a heartbreaking opinion piece by a climate alarmist.”

Would you speculate about whether some change in the largely ill-defined global mean temperature, from an ill-defined base to an ill-defined moment, will be 2.0 °C or 2.3 °C? This man does. The minimum error margin isn’t much lower than 1 °C, however, and even 40 °C of warming would be way safer than the disease he’s been diagnosed with. I think that most people in his situation would instead be thinking about how many months of life await them.

Even if temperatures in 2100 were 3 °C higher than today (and they won’t be), that would not represent any serious challenge for the people who will live in 2100. Worries about the climate are rationally indefensible, and most people do this pseudoscientific stuff professionally because they want decent salaries for very little work, and no valuable work, while enjoying the advantages.

The question is: what drives this distortion of science?
