Why Satellite Temperature Data Is More Accurate


Over the last day I have received some questions about the satellite data and why I consider it more accurate.

First I should define what station data is.  Station data consists of the temperature records made at approved locations that report temperatures; the readings you see on the local news likely come from stations that feed into it.  These readings are accurate, and inaccurate stations are generally not a problem.  Some problems can occur, but they are not the reason I think satellite data is better.

The limitation of station data is location.  Stations exist primarily where people exist, and vast areas of the planet have no people.  Here is an example of typical station coverage.


Typical station coverage of the Earth.

In this map each station represents a very large area, and the gaps between stations are filled in with an average of the surrounding stations.  That interpolation is the biggest reason I consider the station data inferior.  Note the large areas of Africa, Asia, the Arctic, and the Antarctic that have little or no coverage.  The typical solution is to fill in the blank areas by increasing the distance over which each station's reading is projected from 250 km to 1,200 km.  The result looks like this.


Large Station Projection

These maps were created from NASA's GISS project.  Even with the larger projection, coverage of the Earth is not complete.  I fully agree that most stations provide very accurate measurements, but they simply do not cover most of the Earth.  You might notice that the oceans show good coverage: ships report temperature data as they sail, and in many areas of the world buoys record both water and air temperature.  Their coverage is not complete, because the ocean is very large, but they do provide data for the Atlantic and Pacific.  Overall, here is the worldwide distribution of stations used to collect the temperature data.

Station Locations Worldwide

The United States, Europe, Australia, and Japan all show good coverage, but outside those places there are regions where a dozen stations cover a million square miles.  That is not enough.  It is especially disconcerting that the Arctic and Antarctic have the least coverage, since the polar regions would be the best indicators of global warming.  A dozen stations for an entire polar region is not sufficient.
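To make the projection idea concrete, here is a minimal sketch of the kind of radius-of-influence infill described above.  It is my own illustration, not GISS's actual code: each grid cell gets a distance-weighted average of the stations within a chosen radius, and cells with no station inside that radius stay blank, which is why widening the radius from 250 km to 1,200 km fills in so much more of the map.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon2 - lon1)
    a = (np.sin((p2 - p1) / 2) ** 2
         + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def grid_cell_value(cell_lat, cell_lon, stations, radius_km=1200.0):
    """Fill one grid cell from the stations within radius_km.

    stations is a list of (lat, lon, value) tuples.  Weights fall off
    linearly with distance and reach zero at radius_km.  Returns None
    when no station is close enough -- a blank area on the map.
    """
    weighted_sum, total_weight = 0.0, 0.0
    for lat, lon, value in stations:
        d = great_circle_km(cell_lat, cell_lon, lat, lon)
        if d < radius_km:
            w = 1.0 - d / radius_km
            weighted_sum += w * value
            total_weight += w
    return weighted_sum / total_weight if total_weight > 0 else None
```

Run with radius_km=250.0, most remote cells come back as None; at 1,200 km far more cells get a value, but that value is being projected a long way from the stations that produced it.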

All of these different sources of data are merged together as anomaly data, and the result is what I call the station data.  There are more datasets than the two I chose for my work, but those two are among the main three.  All station datasets suffer from this highly limited distribution of locations.
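For readers who want to see what "anomaly data" means in practice, here is a minimal sketch.  The 1951–1980 base period and the function name are illustrative choices of mine, not the exact conventions of any particular dataset: each station's readings are expressed relative to that station's own average for the same calendar month, which is what lets very different stations and ocean measurements be merged.

```python
import numpy as np

def monthly_anomalies(temps, years, months, base_start=1951, base_end=1980):
    """Convert one station's absolute monthly temperatures to anomalies.

    temps, years and months are parallel arrays.  Each reading becomes
    the difference from that station's own base-period average for the
    same calendar month, so stations with very different climates can
    be averaged together without the absolute levels dominating.
    """
    temps = np.asarray(temps, dtype=float)
    years = np.asarray(years)
    months = np.asarray(months)
    anomalies = np.full_like(temps, np.nan)
    for m in range(1, 13):
        in_month = months == m
        in_base = in_month & (years >= base_start) & (years <= base_end)
        baseline = np.nanmean(temps[in_base])
        anomalies[in_month] = temps[in_month] - baseline
    return anomalies
```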

The satellite data is a very different beast.  A single satellite does not cover the whole Earth each day, but together the satellites get very accurate coverage of most of the Earth every day.  Here is a comparison map of daily satellite coverage.


Typical Daily Satellite Coverage

This is very good daily coverage.  This is not anomaly data, though, which is why the range of temperatures is so large.  Satellite measurement of temperature is complicated, but according to NASA the measurements are accurate to within 0.03 °C.  Combined with the superior coverage the satellites provide, there is no question that the satellite data is better.  The only limitation is that the data goes back only to 1979.  If there were 1,000 years of satellite coverage, there would be no debate about what is going on with the Earth's climate today, because the understanding would be so much better.

Here is a sample of final satellite coverage.

I would be perfectly happy to use only the satellite data, but then my record would go back only to 1979; the alternative is to give up the more accurate data the satellites provide.  That conundrum is what has driven me to work out how to combine the two sets, so I can always use a continuous record that takes the best of both.  By making the method open to everyone, I hope that others will support the idea.
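For what it is worth, here is one minimal way such a blend could be put together; it is a sketch of the general idea, not my exact procedure.  The two anomaly series are matched over the years where both exist (1979 onward), the satellite series is shifted by the difference in their overlap means, and the station data fills in the earlier years.

```python
import numpy as np

def blend_station_and_satellite(station, satellite):
    """Splice two anomaly series (dicts of year -> anomaly) into one.

    The satellite series is shifted so its average over the overlap
    years matches the station series, then used wherever it exists;
    the station data covers the years before satellites.
    """
    overlap = sorted(set(station) & set(satellite))
    offset = (np.mean([station[y] for y in overlap])
              - np.mean([satellite[y] for y in overlap]))
    blended = dict(station)                      # pre-satellite years
    for year, anomaly in satellite.items():
        blended[year] = anomaly + offset         # satellite era, re-based
    return blended
```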

Posted in Science Articles - Global Warming by inconvenientskeptic on October 12th, 2010 at 2:25 am.

This post has 9 comments

  1. Phil Scadden Oct 12th 2010

    Still got the problem that the satellite gives you an estimate of lower-troposphere temperature, not surface temperature. If you want to compare a model result for the lower troposphere with reality, then that is the one to use. If you want to investigate surface processes, then I will stick with thermometers, thanks.

    I continue to strongly object to your GISP2 graphic. You have acknowledged it's a problem, but in fact it is an outright lie. “Here” is 1905 – would visitors to your site know that? Would they think the “warming” shown is all there is if you don't show modern-day central Greenland temperature on the same chart? You claim to be an honest skeptic. If that is so, then you would have the integrity to correct misinformation on your site when you discover it. Please replace that graphic with something that is true. I'm guessing that won't be the Law Dome data through to 1978.

  2. inconvenientskeptic Oct 13th 2010

    I will put something together to show that the way I display the front graphic is not misleading. It does include much more current data, but it is a long-term moving average.

    But your complaint is fair so I will put together an article on it. 🙂

    Thanks,
    John

  3. Phil Scadden Oct 13th 2010

    You mean the little problem with simplistic moving averages that means the end points are missing? If you take a long enough averaging period, you can completely get rid of inconvenient end-point data. Do you actually believe that is an honest way to represent the data? A better methodology when the end-point data is significant is a LOESS smooth. The point here, though, is one of integrity – are you presenting a graph that actually represents the real situation? Manipulating data to hide the real situation is absolutely misleading and inappropriate for a real skeptic. While your data may have been processed honestly, the methodology chosen is not suited to representing recent change. A completely unsmoothed dataset gives a better picture.
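    To spell out the end-point issue being discussed here: a centered moving average has no value for the first or last half-window of points, so a long window silently drops the most recent data. A minimal sketch, illustrative only and not anyone's actual processing:

    ```python
    import numpy as np

    def centered_moving_average(values, half_window):
        """Centered average over (2 * half_window + 1) points.

        Returns NaN wherever the full window cannot be filled -- which is
        exactly the first and last half_window points of the series.
        """
        values = np.asarray(values, dtype=float)
        out = np.full_like(values, np.nan)
        for i in range(half_window, len(values) - half_window):
            out[i] = values[i - half_window:i + half_window + 1].mean()
        return out

    # With half_window = 15 (a 31-point window), the 15 most recent points
    # of the series have no smoothed value at all.
    ```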

  4. RedMango Oct 18th 2010

    Very nice post!

  5. Quite interesting Blog .. thanks a lot

  6. roclafamilia Oct 21st 2010

    Helpful blog, bookmarked the website with hopes to read more!

  7. Really nice information, thanks!

  8. Glenn Tamblyn Oct 23rd 2010

    John

    Here are my earlier comments about the issues with all temperature sources. As usual it is rather long – you can’t have the detail without, well, the detail.

    On the basic principle that we need to use all the available temperature sources to give us the best picture possible, I completely agree, for two reasons. Firstly, none of the individual temperature series is one hundred percent reliable; all have problems. And secondly, the whole subject has become so politicised that it needs a rethink. GISS and HadCRUT are regarded by sceptics as tainted, and a lot of AGW supporters would take a similar view of UAH, which is produced by prominent sceptics – Spencer & Christy. That still leaves NCDC and JMA for the surface data. All have different methods for handling area weighting, merging land and ocean data, etc.

    I actually believe that this whole process needs to be taken over by the World Meteorological Organisation, with a small research directorate established to oversee the production of a synthesis of all data sources, looking impartially at the problems of each source and at how differing sources might resolve the limitations of the others. And this process needs to be very open and transparent.

    Now to the problems with the data:

    Surface Stations. Remember that surface stations cover at best 30% of the Earth's surface. The rest is ocean.
    – Poor geographical coverage in the more distant past
    – Poor geographical coverage in remote areas
    – Stations at much the same general location may be moved to different altitudes; this needs to be compensated for
    – Changes in the measurement technology at a given station over time, and how to correlate the two or more sets of data
    – How to allow for the fact that measurement stations come and go from the grid over time. How to provide a continuous data set with discontinuous data. GISS attempt to address this with their Reference Station method
    – Compensating for the general Urban Heat Island effect. The surface records attempt to do this using different methods, although there is debate about whether the compensation is too little, too much etc.
    – The question of local site influences and whether they are significant. Many of the US stations reported by surfacestations.org show bad site conditions. However analysis by NOAA suggests the impact of this is small. Comparing good quality stations with poor ones there was little difference in the temp’ trends. The key point is whether the site specific influence will show some trend over time, thus influencing the anomaly calculation, and whether any such effect is already covered by the UHI compensation. surfacestations.org is big on photos, not so good on quantitative analysis.

    Sea Surface Temperatures. A point to note is that these are measurements of surface WATER temperatures, not AIR temperatures. These are combined with land surface AIR temperatures to produce the overall surface temperature data. So things such as UHI and site-specific factors only influence the roughly 30% of the surface record that comes from land.
    – These are measured by satellite at present and I am not aware of any specific problems with the method. However, they do not provide samples where there is sea ice.
    – Prior measurements were from buoys and before that ships. Sporadic, not always time synched and not necessarily climatological quality data but all we have available. There will always be issues with splicing together data from disparate datasets.
    – An interesting paper came out last year, “Identifying Signatures of Natural Climate Variability in Time Series of Global Mean Surface Temperature: Methodology and Insights”, Thompson et al. 2009. They took surface temperature data, tried to remove the influence of ENSO, volcanoes, etc., and found an interesting result: a sudden change in temperatures in August 1945, mainly in the SST data. They correlated this with the nationality of the ships taking the samples and found a marked step change to far fewer American ships and many more British ships. British ships used a ‘throw a bucket over the side’ method of sampling the water while American ships sampled the engine cooling-water inlet. The postulate is that some of the difference between the apparent warming in the early 20th century, particularly the war years, and the cooling during the 1950s–70s may be an artefact of differing ‘instrumentation’.

    Satellite Temperature Measurements. These are based on measuring microwaves at specific frequencies associated with oxygen. Different frequencies are tuned to ‘try’ and sample differing altitudes. When a sample is taken, the instruments scan a path below the satellite. They also point the instrument towards deep space to provide a very cold reference signal, and then at a warm source inside the satellite that is instrumented with temperature sensors. This provides a rough calibration that guards against, for example, the effect of temperature changes within the satellite (a simplified sketch of this two-point calibration follows the list below). Some problems:
    – Each satellite has its own calibration issues. The MSU calibration on the NOAA 16 satellite is regarded as suspect and use of the data from that satellite was discontinued when another satellite was available
    – They then need to correlate data from different satellites launched at different times. Some observers have identified a divergence between the UAH and RSS data sets since the NOAA 9 satellite for example
    – They have to compensate for Diurnal Drift – the time that the satellite crosses the equator. Also orbital decay factors. The two groups doing regular satellite data – UAH & RSS – do not handle these in exactly the same way.
    – Surface-based emissions can significantly cloud the readings, so satellites are poor at reading temperatures close to the ground, particularly below the boundary layer.
    – Then there is the basic fact that the signals being measured do not all originate at one level in the atmosphere. For each frequency that an MSU reads, the signal it sees originates from differing altitudes to varying degrees. This is compensated for by the use of weighting functions. One consequence is that the signal used principally to measure the lower troposphere – MSU2 on the earlier satellites – originates at least 15% from the lower stratosphere. Since the troposphere has warmed but the lower stratosphere has cooled more substantially, this introduces a cooling bias into the MSU2 data, tending to UNDER-report tropospheric warming. This was being discussed during the early 2000s.
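    To make the calibration step above concrete, here is a simplified sketch of a two-point radiometer calibration. The names and reference values are illustrative, and the real MSU processing also includes a small nonlinearity correction that is omitted here:

    ```python
    def two_point_calibration(counts_scene, counts_cold, counts_warm,
                              temp_cold=2.73, temp_warm=300.0):
        """Convert raw radiometer counts to a brightness temperature (K).

        The instrument views deep space (temp_cold, roughly the 2.7 K
        cosmic background) and an on-board warm target whose temperature
        is measured directly (temp_warm).  The Earth-view counts are then
        placed on the straight line between those two reference points.
        """
        slope = (temp_warm - temp_cold) / (counts_warm - counts_cold)
        return temp_cold + slope * (counts_scene - counts_cold)
    ```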

    Two papers by Fu & Johanson et al. in 2004 and 2005 looked at ways to compensate for this. Their 2004 paper used data from radiosondes to get a temperature profile through the atmosphere, then produced an equation that adjusts the MSU2 data by a fraction of the stratospheric data from MSU4, which has a much tighter weighting function centred on the lower stratosphere. This approach might be useful for a broad-brush estimate for removing the stratospheric cooling bias, but it isn't good enough for on-going anomaly measurement, let alone local readings. Another paper in 2005 proposed using the much less used MSU3 unit, which has a weighting function with a different spread of troposphere and stratosphere, to achieve the same compensation – I haven't seen any further discussion of this approach. Mears & Wentz at RSS have acknowledged that the stratospheric cooling ‘bleed’ into the troposphere signal is an issue but don't see any acceptable means of correcting for it, the Fu & Johanson method of 2004 being too crude. Vinnikov & Grody have also published work in the same period criticising how the thermal calibration process works and suggesting significantly higher warming rates.
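    The shape of that 2004 correction is simply a linear combination of the channel-2 and channel-4 anomalies. A minimal sketch, with placeholder coefficients – the published values were fitted against radiosonde profiles, so treat the numbers below as illustrative only:

    ```python
    def fu_style_tropospheric_anomaly(t2_anomaly, t4_anomaly, a2=1.15, a4=-0.15):
        """Reduce stratospheric 'bleed' in the MSU channel-2 anomaly by
        subtracting a scaled portion of the stratospheric channel-4 anomaly.

        a2 and a4 are placeholders for illustration; in the Fu & Johanson
        approach they come from a regression against radiosonde-derived
        tropospheric temperatures.
        """
        return a2 * t2_anomaly + a4 * t4_anomaly
    ```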

    Finally, there are the poor cousins of the temperature records: radiosondes and rocketsondes. These actually pass through the atmosphere and, in principle, sample air temperatures at altitude. However, their geographical coverage is limited – not bad in the Northern Hemisphere, but poor in the tropics and the Southern Hemisphere. They are also only designed for meteorological-standard data collection, not long-term climatological records. They have suffered from instrumentation changes over the years, have not always been reliable as to launch time, and suffer from heating effects at altitude that bias the temperature readings higher up. And their humidity data is not regarded as particularly good.

    And somewhere in this hotch-potch of data is the actual temperature signal for the planet. Add the allegations of data manipulation and deception, which I find preposterous – you try to work out what it would take to successfully manipulate this mess with intent – and you have a witches' brew that is infinitely distortable. But the truth is in there somewhere.

    Thus my view that it needs an integrated approach to combining all the data sources to tease out the truth. I suspect that if an answer could be found to a reliable method for removing the stratospheric bias in the satellite troposphere data then the satellite data would be more significant, with the surface data being more of a confirmation of what is happening below the boundary layer.

    One thought I have had: with better-quality radiosondes, engineered to overcome the bias problems at altitude, launches could be synchronised to the times when the satellite(s) pass overhead, producing a vertical temperature profile for a location that coincides with the satellite readings over the same spot. Combining these two sets of data might provide a better dataset for the general approach taken by Fu & Johanson, perhaps enough to make it useful.

  9. inconvenientskeptic Oct 23rd 2010

    Glenn,

    I really did mean to get your email posted here, but time is the one thing that seems to be in shortest supply at the moment.

    It is long, but very good. I agree that direct manipulation of data sets this large is foolish. I think it is possible that someone could choose a method that would tend to favor warming or cooling. An open and transparent group using the available sources is preferable.

    I find it hard to believe that, with all the money being spent on climate research, a group like the World Meteorological Organisation has not been given the resources to standardize a global temperature record using the methods available.

    For now I will use my limited blended set for the last 150 years.

    Thanks,
    John
