The Goddard Institute for Space Studies (GISS), the National Oceanic and Atmospheric Administration (NOAA), and the UK's Hadley Centre together with the Climatic Research Unit (the HadCRUT record) are the three main reporting agencies that calculate a global surface temperature anomaly. All three rely heavily on land-based surface station temperature records to calculate the anomaly, and this is where the dispute about UHI comes into play. There are other arguments against using surface station temperatures as a measure of atmospheric change, but in this post I'll concentrate on UHI.
This is not a trivial discussion, since the Anthropogenic Climate Change Theory Advocates (ACCTAs), also known as Anthropogenic Global Warming Alarmists (AGWAs), tend to use these anomalies as their main "evidence" of AGW. It is these alarming projections and hysterics that are driving politicians into "doing something," even if it further cripples the economy.
All three of the reporting agencies named above make an attempt at accounting for UHI. In this post I'll point out the general methodology of how GISS does this with their GISTEMP anomaly.
The overall "Readme File" that comes with GISTEMP is somewhat lacking, but here is the documentation about the UHI methodology:
"The goal of the homogeneization effort is to avoid any impact (warming or cooling) of the changing environment that some stations experienced by changing the long term trend of any non-rural station to match the long term trend of their rural neighbors, while retaining the short term monthly and annual variations. If no such neighbors exist, the station is completely dropped, if the rural records are shorter, part of the non-rural record is dropped."

Set aside the misspelling of "homogenization." Remember, they're computer geeks, so cut them a little slack. I haven't yet gone into that section of the code to see exactly how they determine a "rural neighbor," but at this point it isn't necessary. All you need to know is that if the rural neighbors are "contaminated" by their own UHI issues, then the anomaly for the urban station will be overstated, in addition to the anomaly being overstated for the rural neighbor itself. Add that up a few thousand times and think about what it does to the data.
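To make the quoted methodology concrete, here is a minimal sketch of the kind of adjustment it describes: replace the urban station's long-term trend with the mean trend of its rural neighbors, while keeping the urban station's own short-term variations. This is my own simplified illustration, not the actual GISTEMP code (which uses a more elaborate fitting procedure and its own rules for selecting rural neighbors); the function name and the use of a simple linear trend are my assumptions.

```python
import numpy as np

def adjust_urban_trend(years, urban_temps, rural_neighbor_temps):
    """Illustrative sketch (not the real GISTEMP routine): swap the urban
    station's long-term linear trend for the mean linear trend of its
    rural neighbors, retaining the urban short-term variations."""
    years = np.asarray(years, dtype=float)
    urban = np.asarray(urban_temps, dtype=float)

    # Long-term linear trend of the urban record itself
    urban_slope, urban_intercept = np.polyfit(years, urban, 1)

    # Mean long-term trend across the rural neighbors
    rural_slopes = [np.polyfit(years, np.asarray(r, dtype=float), 1)[0]
                    for r in rural_neighbor_temps]
    rural_slope = float(np.mean(rural_slopes))

    # Short-term variations = urban record minus its own trend
    residuals = urban - (urban_slope * years + urban_intercept)

    # Rebuilt series: rural long-term trend plus urban short-term wiggles
    return rural_slope * years + urban_intercept + residuals
```

The point of the sketch is the dependency it exposes: the adjusted urban trend is only as good as the rural neighbors' trend. If those "rural" records carry their own UHI warming, that warming is transplanted straight into the adjusted urban series.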
Given the poor quality control on US surface stations, as documented by Anthony Watts in his spring Surface Stations Report, we should immediately begin to question the resulting anomalies. If the surface stations in one of the most advanced countries on Earth demonstrate such bias, what confidence can we have in the accuracy of stations located in less-developed countries? How can we trust an anomaly derived from bad, unvalidated data, despite the best efforts of analysts trying to "clean it up"? You can't.
This isn’t due to deliberate malfeasance or conspiracy, but to the fact that there simply aren’t the proper controls in place to ensure accurate data. No matter how hard they try to statistically adjust the numbers, no matter their diligence in eliminating obvious outliers, there is no getting around the fact that thousands of surface stations are providing upwardly biased measurements.
I don’t think the agencies are deliberately “fudging” the numbers upward. I think they truly believe that they are adequately correcting for the bias. On the other hand, I think they are ignoring the highly suspect nature of the measurements collected by the surface stations and don’t have a proper understanding of just how bad the data is.