Proving the Problem with the Station Data


The theme of the week turned out to be a comparison of how the satellite and station data behave.  I started out with the observation that the station data is responding less to ENSO events than it has in the past, a steady progression of decreasing response.  Then I showed how the station data is poor in comparison to the satellite data at detecting the effects of volcanic eruptions.  I asked how the station data can have better resolution at detecting global warming if it is significantly inferior at detecting the climate effects of a volcanic eruption or ENSO.

Now I am going to show what happens to the two types of data if the above events are removed from the global temperature anomaly since 1979.  That is 31 years of global temperature, but with the main events removed from the record.  For the ENSO and volcanic events of the past 31 years, the station data has averaged only 60% of the response that the satellite data has shown.  So what does the warming of the past 31 years look like if those events are removed?

I will start off by showing the temperature record without any modification.  It is simply the averaged station (CRU and GHCN) and the satellite (UAH and RSS) temperature anomalies for the period from 1979-2010.

(Red) Station, (Blue) Satellite. Averaged temperature anomalies, 1979-2010. No adjustments.

This should be familiar to most readers.  The ENSO and volcanic events that have been discussed are clearly present in both the station and satellite records.

The next step is to remove the events at the magnitudes discussed in the previous articles.  The 1983-1984 ENSO event was a little tricky because the recovery from one event overlapped the next.  There was also the second-year effect of the Mt. Pinatubo eruption.  For those I used a 50% reduction in the initial drop for each set of data.  Otherwise the correction was simply the magnitude of each event response.

(Red) Station, (Blue) Satellite with the major ENSO and volcanic climate variability removed.

This is a much less noisy chart.  There is a reduction in the overall warming trend, but the change is small for both sets of data.

Station trend:    was 0.192 °C/decade, event-free 0.185 °C/decade

Satellite trend:  was 0.145 °C/decade, event-free 0.128 °C/decade

So the trend is still warming for both sets of data and I would not consider the change to be significant.
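
For readers who want to reproduce this kind of comparison, the sketch below shows the general idea rather than the exact procedure behind the charts: it assumes two placeholder monthly anomaly arrays (in a real run these would be the averaged CRU/GHCN and UAH/RSS series), subtracts a stated event response over a chosen month window, and fits an ordinary least-squares trend in °C per decade.  The function names, the event window, and the 0.3 °C magnitude are all illustrative.

```python
import numpy as np

# Placeholder monthly anomaly series for 1979-2010 (384 months).
# In practice these would be the averaged CRU/GHCN and UAH/RSS anomalies.
months = np.arange(384)                        # 0 = Jan 1979
station = np.random.normal(0.0, 0.1, 384)      # stand-in data only
satellite = np.random.normal(0.0, 0.1, 384)    # stand-in data only

def remove_event(anomalies, start, end, magnitude):
    """Subtract a stated event response (signed: positive for warm ENSO,
    negative for volcanic cooling) over a month window. Illustrative only."""
    cleaned = anomalies.copy()
    cleaned[start:end] -= magnitude
    return cleaned

def trend_per_decade(anomalies, month_index):
    """Ordinary least-squares linear trend, in degrees C per decade."""
    slope_per_month, _intercept = np.polyfit(month_index, anomalies, 1)
    return slope_per_month * 120.0              # 120 months in a decade

# Example: remove a hypothetical 0.3 C ENSO response spanning 1997-1998.
station_clean = remove_event(station, start=216, end=240, magnitude=0.3)

print(f"Station trend (raw):        {trend_per_decade(station, months):.3f} C/decade")
print(f"Station trend (event-free): {trend_per_decade(station_clean, months):.3f} C/decade")
```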

There are two interesting items of note in the event-free chart.  The first is that the temperature in both data sets stepped up in 2001 and has been stable since.  The trend for both sets is almost zero for the past 10 years.  There was some minor ENSO activity that wasn’t removed, but both sets agree that it was not significant to global temperatures.

The other question is why the station data is showing double the overall warming over the past 30 years.  The two sets start off with an offset of ~0.2 °C.  Since the satellite is not calibrated to the station data, I see no problem with that, but over the past 10 years the station data has shown a ~0.4 °C offset.  Where did that extra 0.2 °C come from?

I have shown categorically that the satellite data is more sensitive to changes in atmospheric temperature.  If the average sensitivity difference for the past 30 years is used, the station data should only have shown 60% of the warming that the satellite data has shown.  The offset between them should be smaller now than it was 30 years ago.  Instead the opposite has happened, and the stations are showing 187% of the warming that the satellite data shows.  Almost double the warming, from a data set that is statistically less sensitive to global temperature changes.

If increased levels of CO2 were causing warming, then the satellite data should be detecting more warming than the station data.  There is no event where the station data was significantly more sensitive than the satellite data.  For the station data to show more warming than the satellite data only in the case of general warming is the opposite of its behavior in every other instance.

An analogy would be microscopes.  The station data is a 60x microscope and the satellite data is a 100x microscope.  In each case the 100x microscope provides better magnification of an object, except when the object happens to be the global warming bacterium.  Then suddenly the 60x behaves as if it were a 300x microscope.  That is the level of enhancement that would be needed to give the stations a response 187% of what the satellite data provides.

If the assumption is made that the station data has 60% of the response of the satellite measurement, then the chart below shows the expected behavior of the station data over the past 31 years.

(Blue) Event-free satellite, (Red) 60% of the satellite changes applied to the initial station offset.

Based on the typical station response, the station data should now be slightly above 0.3 °C.  That it is instead ~0.6 °C is the error that the station data is showing, based on comparing the typical station response to the satellite response.
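
The reconstruction above can be approximated with a few lines of code.  The sketch below is my reading of the caption rather than the exact calculation behind the figure: the modeled series starts at the station’s initial ~0.2 °C offset and then accumulates 60% of the event-free satellite month-to-month changes.  The example satellite series (flat with a 0.2 °C step up in 2001) is made up to mirror the step behavior described above.

```python
import numpy as np

def modeled_station(satellite_clean, station_start, response=0.60):
    """Start at the station's initial offset, then apply a fixed fraction
    of the event-free satellite month-to-month changes (illustrative)."""
    changes = np.diff(satellite_clean)          # month-to-month satellite changes
    return np.concatenate(([station_start],
                           station_start + np.cumsum(response * changes)))

# Made-up event-free satellite series: flat, with a 0.2 C step up in 2001.
satellite_clean = np.concatenate((np.zeros(264), np.full(120, 0.2)))

modeled = modeled_station(satellite_clean, station_start=0.2)
print(f"Expected station anomaly today: ~{modeled[-1]:.2f} C")   # ~0.32 C
```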

The problem isn’t that there is an offset; the problem is that the change in the offset is counter to the change observed for every other event that has a measurable impact on global temperature.  I can find no reasonable explanation for why the station data is behaving in this manner.  This problem is significant, as it accounts for about half of the warming that the stations have observed.  When the error is 50% of the observed change, there is a real problem.

Posted in Anomaly and Measurement Methods and Science Overviews by inconvenientskeptic on March 24th, 2011 at 4:30 am.

This post has 21 comments

  1. Joris Vanderborght Mar 24th 2011

    “The problem isn’t that there is an offset; the problem is that the change in the offset is counter to the change observed for every other event that has a measurable impact on global temperature. I can find no reasonable explanation for why the station data is behaving in this manner. This problem is significant, as it accounts for about half of the warming that the stations have observed. When the error is 50% of the observed change, there is a real problem.”
    My answer is: the urban heat island effect and/or microlocation issues with the stations. People who have tried to calculate the UHIE often arrive at figures in the vicinity of 50%. Even warmists do (cf. the discussion about the stations in China).

  2. SoundOff Mar 24th 2011

    TIS,

    Satellites fly above the stratosphere and measure temperatures through the air column below all the way down to the surface. Scientists do their best to separate out the signals for each layer of the atmosphere by focusing on certain frequencies but the result is still a blend of all the layers.

    One of the fingerprints specific to global warming is a cooling stratosphere. As such, we should be seeing a slight divergence in recent times between surface temperatures that still show ongoing global warming at the usual rate, and satellite temperatures that show a slower rate of warming due to the slight countering effect of the cooling stratosphere.

    You may become famous for providing yet more empirical evidence of global warming. All that remains is for you to submit this hypothesis to a peer-reviewed journal.

  3. intrepid_wanders Mar 24th 2011

    It is morbidly funny that these questions still remain, 11 years later:

    “There are five potential areas of error.

    1) Errors caused by environmental change in the general location of the measuring instrument.
    2) Errors arising at the point of measurement, such as equipment or procedural faults.
    3) Errors arising from statistical processing by GISS and CRU, such as poor station information
    4) Errors arising from station closures altering the homogeneity and balance of the network
    5) Errors caused by uneven geographical spread”

    - John Daly, 2000
    http://www.john-daly.com/ges/surftmp/surftemp.htm

    Truly, this “problem” can be proven time and time again. I just fear that no “raw data” exists anymore.

  4. Soundoff,

    There has been no cooling of the stratosphere – yet another “global warming fingerprint” that fails to meet the test. How many more do you need?

  5. SoundOff Mar 25th 2011

    John Daly’s FUD questions were all for naught. In 2004 it was found that the satellite record was being incorrectly calculated, not the surface record. Now that the satellite record is being calculated correctly, surface and satellite temperature records exactly match if they are adjusted to the same base period with satellite temperatures just being a bit spikier, as expected. Here are a few recent graphs showing some comparisons.

    The Big Five Compared – All Years: http://chartsgraphs.files.wordpress.com/2011/01/cb_41.png

    30-Year UAH/GISS Moving Average Comparison: http://chartsgraphs.files.wordpress.com/2011/03/gu_move_avg2.png

    30-Year UAH/GISS Monthly Comparison: http://chartsgraphs.files.wordpress.com/2011/03/gu_trend2.png

  6. SoundOff Mar 25th 2011

    These links might interest TIS.

    Comparison of UAH Anomalies During 1998 & 2010 El Nino – La Nina Oscillations

    http://chartsgraphs.wordpress.com/2011/03/14/comparison-of-uah-anomalies-during-1998-2010-el-nino-la-nina-oscillations/
    _____________________

    Comparison of GISS LOTAs [Land/Ocean Total Anomalies] During 5 El Nino – La Nina Cycles

    http://chartsgraphs.wordpress.com/2011/03/10/comparison-of-giss-lotas-during-5-el-nino-la-nina-cycles/

  7. Smearing John Daly on top of linking to irrelevant data makes you look even more pathetic.

  8. inconvenientskeptic Mar 25th 2011

    Intrepid,

    One aspect of my approach is that the individual problems of the satellite and the station methods can be ignored. I only test the overall response to events. That is enough to show that the station method is inferior to the satellite method regardless of reason.

    Sound,

    This method does show that the satellites are detecting some warming, but the scale of warming is 0.2°C. It also shows that the station method is inferior and that the stations are showing 0.3 °C of spurious warming.

    I wonder… Could you take the step of agreeing that the satellite data is superior in its sensitivity to detecting change? The 0.2 °C of warming in the satellite record would correspond to a change of 0.8 W/m2 of forcing. That would indicate a short-time-lag climate sensitivity to forcing of 0.25 °C per W/m2, or in warmist terms about 0.9 °C per CO2 doubling (taking ~3.7 W/m2 as the forcing from a doubling of CO2).

    Could you accept that as legitimate?

  9. Jan 2011 anomaly

    0.194 °C HADCRUT
    -0.009 °C UAH
    0.46 °C GISTEMP

    Feb 2011 anomaly

    0.44 °C GISTEMP
    0.018 °C UAH
    ? HADCRUT

    GISTEMP is just completely out of whack.

  10. inconvenientskeptic Mar 25th 2011

    GISS has significant method problems. Despite the bad name that the CRU set has, from what I see the HadCRUT data is the best of the station methods.

    GHCN v3 might be pretty bad as well; the version they are working on has some odd bias issues.

  11. SoundOff Mar 25th 2011

    Bruce says:

    Jan 2011 anomaly
    -.009C UAH
    .46 GISTEMP

    Feb 2011 anomaly
    .44C GISTEMP
    .018 UAH

    GISTEMP is just completely out of whack.
    ___________________________________

    Bruce is not wrong, he’s not even right. I don’t know the HADCRUT temperature record as well, but I certainly know GISTemp. The conversion of GISTemp to UAH is to reduce the GISTemp figure by 0.35ºC to restate it in terms of the warmer UAH base period.

    The adjusted numbers are shown below using the current temperatures from each record. What they show is GISTemp is just slightly warmer than UAH in real terms – a tenth of a degree warmer – which is basically nothing for a short period like a month.

    Even this difference would probably disappear if we compared to the RSS satellite record instead of to UAH, which is the cool outlier of the five big datasets. The latest RSS temperature I have is 0.25ºC for December 2010, while the comparable UAH temperature was 0.18ºC (almost a tenth of a degree difference). UAH has had a long history of miscalculating its record, so I expect they still don’t have it right.

    Jan 2011:
    GISTemp was 0.46ºC – 0.35 = 0.11ºC
    UAH was 0.00ºC

    Feb 2011:
    GISTemp was 0.44ºC – 0.35 = 0.09ºC
    UAH was –0.01ºC

  12. SoundOff Mar 25th 2011

    I disagree with you, TIS. I don’t see any meaningful difference between the temperature data sets over any significant period of time. If they are the same, how can you conclude one is superior or inferior? The only difference is that some are smoother and some are bumpier (which is neither a benefit nor a detraction in general). Though, I suppose an argument could be made that UAH is inferior given its history of past errors and its outlier status.

    Shorter time periods are just noise and anyone can imagine whatever they want to see in noise. What you are doing when working with short periods is saying that February 26th was warmer than March 26th so spring is not coming this year, when it is coming. Just scale this concept up to a decade in place of a month and GW in place of a season.

    Data Set 30-year Warming Rates
    (in ºC/Decade to December 2010)

    GISTemp = 0.0176
    NCDC = 0.0171
    HadCRUT = 0.0169
    RSS = 0.0163
    UAH = 0.0141

    They are just hundredths of a degree different, except UAH, which is two tenths of a degree lower.

  13. SoundOff Mar 25th 2011

    Edit:

    (in ºC/Decade to December 2010)

    should be

    (in ºC/Year to December 2010)

  14. inconvenientskeptic Mar 25th 2011

    Sound,

    The idea that the temperature behavior is linear is meaningless. The satellite methods show step function behavior once events are removed.

    The different sets can be compared based on their response to climate events. A set that shows no sensitivity to real events is worthless at detecting CO2-caused warming.

    Rankings based on that allow the usefulness of each set to be compared. A method that can’t detect the eruption of Krakatoa is worthless.

  15. SoundOff Mar 26th 2011

    You seem to be unwilling to consider the obvious answer. If something doesn’t react, or doesn’t react quickly to a volcano or some other forcing, maybe that forcing just isn’t very important in the bigger scheme of things. If it’s not that important, then the instrument that does react is probably overreacting. Would we want a seismograph that reacts to a truck driving down a nearby highway? That’s not the signal we are trying to measure. Surface temperature records are more tuned to what we want to measure, which is the surface warming trend. Satellites are an indirect proxy.

    I didn’t say temperatures move in a linear way over short periods. Even steps have a mathematically measurable slope when they are part of a stairway. That’s what I provided. It’s all that really matters when the topic is global warming, because slope tells us where we are going; it tells us climate sensitivity, which is the crux of the issue.

  16. SoundOff Mar 26th 2011

    TIS,

    I want to be sure I understand your argument correctly. You are saying that satellites provide better measures of the climate because they react with greater magnitude and speed to various climate forcings and perhaps to heat redistribution events too.

    If this is true, then you are really saying that the climate is far more sensitive to any forcing than the surface record informs us. So you are essentially arguing that climate sensitivity is much higher than the consensus opinion. Which means the urgency to cut CO2 emissions is even greater.

    For a while there, I thought you were making the opposite argument. Turns out you are actually Al Gore on steroids.

  17. inconvenientskeptic Mar 26th 2011

    Sound,

    Your comment really did make me laugh. :-)

    The volcanoes that put material into the stratosphere really do cause significant climate effects. Ironically, the Sunday article is about climate sensitivity and the volcanic eruptions, so there is that to look forward to.

    There are many measurement methods that vary in accuracy.

    Which would give a statistically better measure of the Earth’s temperature: 2,500 thermometers scattered around three continents, or 5,250,000 thermometers uniformly spread around the Earth?

    That is the effective difference between the two types of measurement. Even if the 2,500 can measure to 0.000 °C and the 5,250,000 only to 0.0 °C, the 5 million will be more sensitive to changes in the Earth’s temperature.

    That is the advantage the satellite data provides, and it is evident in its higher sensitivity to climate events.

  18. SoundOff Mar 26th 2011

    TIS,

    Not one of the 5,250,000 thermometers you refer to is on the surface. They are all in Earth orbit measuring radiation coming from some mixture of oxygen molecules at all altitudes below them and each measurement is across a fairly wide area. These kinds of measurements need to come from known positions with respect to the Sun to make sense of them, but these thermometers are forever shifting around measuring different places at different times than where and when we expect.

    These 5,250,000 thermometers are further affected by different cloud conditions, the cooling stratosphere and they can’t capture temperatures within the extreme polar zones. The history of these thermometers is just 3 decades long; barely long enough to deduce climate trends. Lastly, these thermometers wear out after a few years and need to be replaced by new instruments that don’t usually reproduce the same readings, requiring further human interpretation.

    It is only by taking a huge number of measurements that your 5,250,000 thermometers are able to replicate what just “2,500 thermometers scattered around three continents” can do.

    Actually there are over 11,000 official temperature monitoring land stations around the world located on every continent providing over 80,000 land surface temperatures per day. Marine readings come from various oceanic sources (4,000 ships and 1,200 buoys) that provide over 27,000 sea surface temperatures per day. You can get some sense of the totality of the coverage at the following link.

    http://www.wmo.int/pages/prog/www/OSY/Gos-components.html Global Observing System – Reporting stations

    Statistically it can be shown that increasing the number of samples above a certain threshold does not change the result, only the error bars shrink. This threshold is some small subset of the actual number of stations available (that’s why opinion surveys/polls work). Using about 1000 well-placed stations worldwide produces an accurate result. The uncertainty for the global average surface temperature as measured on the surface is ±0.1°C, so it is very precise.
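
    As a minimal illustration of this point (a sketch with made-up numbers, not actual station data): past a certain sample size the estimated mean barely moves, while the standard error shrinks roughly as 1/sqrt(n).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical "population" of simultaneous surface readings (made-up values).
    population = rng.normal(loc=14.0, scale=5.0, size=1_000_000)

    for n in (60, 1_000, 10_000, 100_000):
        sample = rng.choice(population, size=n, replace=False)
        std_err = sample.std(ddof=1) / np.sqrt(n)   # shrinks as 1/sqrt(n)
        print(f"n={n:>7}: mean={sample.mean():6.3f}  +/- {std_err:.3f}")
    ```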

    “It has been estimated that the mean anomaly in the Northern hemisphere at the monthly scale only has around 60 degrees of freedom – that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large scale month to month changes.” said by Gavin Schmidt in the article linked to below.

    http://www.realclimate.org/index.php/archives/2007/07/no-man-is-an-urban-heat-island/

  19. SoundOff Mar 26th 2011

    Why are satellite temperatures not used to estimate the global average surface temperature? (answered by Hadley Met Office):

    Although satellites can provide a quasi-global view of Earth’s surface, there are a number of difficulties involved in estimating near-surface temperatures from these observations. Over land, the satellites measure the temperature of the surface, which can be very different from the air temperature just above the surface. The difference depends, amongst other things, on the wind speed and the nature of the surface. Because of the way that the satellites orbit the earth, many take measurements at a given point only a few times a day, making it harder to estimate the mean temperature.

    Over the oceans the problem is somewhat simpler. The satellites measure the temperature of the sea-surface, which is what we are interested in. The daily range of sea-surface temperature is much smaller than over land, so the time at which the observations are made is less important, although it is still significant. However, the observations from satellites are influenced by atmospheric conditions, particularly aerosols (small particles in the air), which can mean that the measurements are often in error by several tenths of a degree. Some sea-surface temperature products (for example HadISST) use satellite data, but because of the difficulties of forming a homogeneous climate record from satellite data, they are not yet used in our estimates of global average temperature.

  20. SoundOff Mar 26th 2011

    FYI Only

    Each temperature data set uses some subset of the available 11,000 land stations that record temperature data. Many stations don’t report data into any collection system, though their records are sometimes accumulated and added to the central archive years later. Their data is not available to include in the different data sets.

    A subset of the remaining stations is still used because the builders of the data sets don’t need multiple measurements for the same area and they prefer to avoid urban sites, short-record sites and other disadvantaged thermometers. For instance, records from urban stations without nearby rural stations are usually dropped from the analysis.

    The end result is about 70-80% of the world surface is covered by readings of one sort or another (the regions around the Poles comprise most of the missing data points).

    GHCN Archive – the central source of data
    Has 7364 land stations of 8653 possible ones that report data
    Not all stations send data every month (many batch up)
    Data is received from about 1500 stations monthly
    (raw data)

    NOAA/NCDC (also runs GHCN)
    Uses all 7364 land stations (?)
    (moderate extrapolation)

    NASA/GISS (GISTemp)
    Uses 6257 land stations
    (significant extrapolation)

    Hadley/CRU (CRUTEM3)
    Uses 4138 land stations,
    (little extrapolation)

    Of course, if a station doesn’t report its data before the deadline for a given month (very common), then that station can’t be included in that month’s global temperature anomaly calculation. If/when it reports the missing data later, its anomalies will be included in the next monthly recalculation.

  21. inconvenientskeptic Mar 27th 2011

    Sound,

    You have convinced me to do a detailed article comparing satellite and station data.

    I will point out one significant flaw in your statistical argument about the number of measurements for the surface of the Earth. That sampling argument only applies to a normally distributed population, and I am sure that you are not arguing that the temperature distribution of the Earth is normally distributed around a central mean.

    When attempting to measure the average temperature of a large object with gradients that exceed 100 °C, the finer the resolution of the measurement, the higher the quality of the measurement. The population argument is meaningless in this situation.

    Why do the GCMs always strive for finer resolution? Because coarse resolution is meaningless. The same applies to measuring temperature.
