Thursday, February 19, 2009

Inducing Errors in Temperature Measurement

In order to determine whether man’s CO2 is causing global warming, we must first determine whether the planet is indeed warming. This seems like a trivial question because there appears to be broad agreement on the answer. In reality, because our measurements are so recent (only about 150 years), were not recorded in a standardized way throughout those 150 years, and were recorded in relatively few places globally for most of that period, there are significant questions about their reliability.

In the earliest part of this record, daily temperatures were almost exclusively recorded where people lived, and that has not changed. Getting to remote areas such as the Arctic or Antarctica was an undertaking for only a handful of adventurers seeking to “get there first.” To those we can add the Sahara, the Amazon and other places that even today are difficult to reach.

In addition to the scarcity of recording stations early in the record, there was also a scarcity of data points. Because automated stations did not exist, the temperature record depended on someone reading the thermometer at least once a day. Early thermometers were not digital, so there was always some probability of reading and recording error. Most early daily temperature records used in calculating long-term temperatures consist of a single entry, so that day’s recorded temperature was highly dependent upon the time of day the thermometer was read. In addition, the many ways a reading could be distorted, for example by placing the weather station on a roof or in the shade, were not yet well understood.

Over time it became apparent that a single point reading was not representative, and attempts were made to read the thermometer at varying times, but this was still not standardized. Not until the advent of automated stations that could record a high and a low temperature over a 24-hour period did we get something like a true daily average.

Even today, with systems able to record temperatures continuously throughout the day, the main anomaly-reporting agencies still use the average of the daily high and low. This presents a problem, as shown in the table below, and it stems from the fact that there simply are not enough continuous-measurement stations in the world: most stations are still only capable of recording a high and a low.


This table shows temperature readings in °C from a hypothetical weather station. The first column is a single point reading of the type common in much of the early temperature record. The second column shows how most daily averages are derived: the high and low averaged together. The third column shows what the average temperature would be if readings were collected hourly and the low readings predominated. The fourth column shows what happens when the reverse is true. The point I'm demonstrating is that data collected as either a single point or a high/low pair may overstate or understate the actual average. For most of the early records we have no way of knowing the real average with any certainty.
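To make the comparison concrete, here is a small sketch using made-up hourly readings for one hypothetical day; the numbers are purely illustrative.

```python
# Illustrative only: made-up hourly readings (°C) for one hypothetical day.
hourly = [2.0, 1.5, 1.0, 0.8, 0.5, 0.7, 1.2, 2.5, 4.0, 5.5,
          7.0, 8.2, 9.0, 9.4, 9.1, 8.3, 7.0, 5.5, 4.2, 3.5,
          3.0, 2.7, 2.4, 2.2]

single_point = hourly[14]                       # one mid-afternoon reading
min_max_avg = (min(hourly) + max(hourly)) / 2   # how most daily "averages" are derived
true_mean = sum(hourly) / len(hourly)           # mean of all 24 hourly readings

print(f"Single afternoon reading: {single_point:.2f}")   # 9.10
print(f"(High + Low) / 2:         {min_max_avg:.2f}")    # 4.95
print(f"Mean of 24 hourly values: {true_mean:.2f}")      # about 4.22
```

In this made-up day the cool hours predominate, so the high/low average comes out noticeably warmer than the mean of all 24 readings; a day with a long warm spell and a brief cold snap would err in the other direction.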

Now take one potentially flawed reading each day from roughly 2,000 weather stations over a 50-year period: 2,000 stations × 365 days × 50 years is roughly 36,500,000 data points that may be in error. How do we account for this potential error? If you think that anyone has validated this data, you'd be wrong. The volume precludes it, although statistical methods can eliminate outliers, as sketched below.
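The kind of screening I have in mind can only be sketched generically, since the agencies do not publish their procedures. The check below, which flags readings more than three standard deviations from a station's own mean, is a hypothetical illustration, not anyone's actual quality-control code.

```python
# Back-of-the-envelope count of daily readings that could carry an error.
stations, days_per_year, years = 2000, 365, 50
print(stations * days_per_year * years)  # 36,500,000

# Hypothetical quality-control screen: flag any reading more than three
# standard deviations from the station's own mean. Real archives use more
# elaborate checks; this is only a sketch of the idea.
def flag_outliers(readings, threshold=3.0):
    mean = sum(readings) / len(readings)
    std = (sum((r - mean) ** 2 for r in readings) / len(readings)) ** 0.5
    if std == 0:
        return []
    return [r for r in readings if abs(r - mean) / std > threshold]

# Three weeks of plausible readings with one obvious keying error (45.0).
sample = [1.1, 0.9, 1.2, 1.0, 0.8, 1.3, 1.1, 0.9, 1.0, 1.2,
          1.1, 0.8, 1.3, 0.9, 1.0, 1.2, 1.1, 0.9, 1.0, 45.0]
print(flag_outliers(sample))  # [45.0]
```

A screen like this catches gross keying errors, but it cannot tell you whether a plausible-looking reading was taken at the wrong time of day or on a sun-baked roof.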


Our coverage of remote areas is still spotty. Of the three main temperature-anomaly reporting agencies (the Goddard Institute for Space Studies (GISS), the National Oceanic and Atmospheric Administration (NOAA), and the Met Office Hadley Centre / Climatic Research Unit at the University of East Anglia (HadCRU)), only HadCRU comes clean and shows the huge gaps in surface readings. None of the agencies provides its raw data or the code used to calculate the anomaly, a distinctly unscientific stance regardless of the reasoning. We are simply to accept that their temperature reconstructions are accurate. Except that all three use different methodologies and each comes to slightly different conclusions about its anomaly.
The white areas in this Hadley Centre graphic show the parts of the globe where we have NOT collected actual data. Note that the Arctic, the Antarctic, much of Africa and South America, and large expanses of ocean have no data. In order to create the global temperature anomaly, these areas are statistically filled in.

As I stated in the previous post, the anomaly is not simply an average of all available data; it is a statistical calculation, a model, a temperature reconstruction. The blank spots in the previous graphic are filled in statistically, and you are not allowed to know how. You are simply to trust the modelers.
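Because that code is not published, any example of infilling is necessarily generic. The sketch below uses inverse-distance weighting to estimate a value for a blank grid cell from nearby observed cells; the locations and anomaly values are made up, and this illustrates the general idea of statistical infilling, not any agency's actual method.

```python
import math

# Hypothetical observed grid cells: (latitude, longitude, anomaly in °C).
# Locations and values are made up for illustration.
observed = [
    (60.0, 10.0, 0.8),
    (55.0, -3.0, 0.5),
    (48.0,  2.0, 0.3),
    (52.0, 13.0, 0.6),
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def infill(lat, lon, obs, power=2):
    """Inverse-distance-weighted estimate for a grid cell with no data."""
    weight_sum, weighted_total = 0.0, 0.0
    for olat, olon, value in obs:
        w = 1.0 / distance_km(lat, lon, olat, olon) ** power
        weight_sum += w
        weighted_total += w * value
    return weighted_total / weight_sum

# Estimate the anomaly for an empty cell in the North Sea.
print(round(infill(57.0, 4.0, observed), 2))
```

The number printed for the empty cell is entirely a product of the weighting choices (which neighbours to use, what power to raise the distance to), none of which the reader of the published anomaly gets to see.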

If the mere collecting of the data is fraught with potential error, should we not check how these agencies get around all the potential errors? Should we not question how they fill in missing data? And all this is just to answer the most basic question of whether warming is occurring or not. None of this would matter if it were all a simple academic exercise, but real money and our future are being staked on unproven hypotheses.
