Richardson et al. 2016 is a new and improved paper on the problem of estimating a "global" surface air temperature when you cannot actually measure one. "Surface" air temperature is somewhat measurable for about 30% of the globe, provided you don't mind considerable interpolation in the modern era and a crap load of interpolation prior to 1950. So what Richardson et al. recognize is that land plus ocean temperatures are really apples and oranges.
The funny part is that Richardson et al. propose "scaling" the interpolated data to fit modeled data in order to imply an extra 28% of warming that cannot actually be measured. However, NASA and other organizations spent a fairly substantial amount of money designing and deploying "platforms" to measure exactly what isn't "measurable". Nope, this isn't a Monty Python rerun, this is climate change science.
The Reynolds Optimally Interpolated Sea Surface Temperature (ROIv2 SST) product is a space-based data set that once did part of this job but was dropped from the Extended Reconstructed Sea Surface Temperature (ERSST) product because using ROIv2 SST appeared to indicate less warming "than expected". The Microwave Sounding Unit (MSU) and advanced MSU data products for the lower troposphere are also deemed unreliable because they indicate "less warming than expected".
This new tool for finding "expected warming" tends to cherry-pick data related to the definition of climate sensitivity: the increase in Global Mean "Surface" Air Temperature (GMSAT) due to a doubling of atmospheric CO2, where the "surface" isn't something tangible but a phantom of climate science's imagination. Personally, I don't have a problem with an idealized metric or reference as long as the metric doesn't get adjusted constantly because of expectations. Since global policy is based on this idealized metric, a minor adjustment of a tenth of a degree can be worth about a trillion dollars of hastily implemented policy. Before long, we might be talking about real money.
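For what it's worth, the arithmetic behind that definition is simple: warming scales with the logarithm of the CO2 ratio, and the sensitivity is the coefficient on a doubling. A minimal sketch, where the ECS value and CO2 concentrations are illustrative assumptions, not numbers from the paper:

```python
import math

def warming_from_co2(ecs, c_final, c_initial):
    """Warming implied by a CO2 change, given an equilibrium
    climate sensitivity (ecs) in degrees C per doubling of CO2."""
    return ecs * math.log2(c_final / c_initial)

# Illustrative numbers only: an assumed ECS of 3.0 C per doubling,
# with CO2 rising from a nominal 280 ppm to 400 ppm.
print(round(warming_from_co2(3.0, 400.0, 280.0), 2))  # → 1.54
```

This is exactly why a tenth of a degree matters: nudge the ECS assumption slightly and the implied warming for the same CO2 change moves with it.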
Another metric involved in climate sensitivity is the change in heat capacity, but it tends to take a back seat to the more popular and volatile GMSAT metric. Since the majority of heat capacity change is related to Ocean Heat Content (OHC), and that target doesn't move as much, it is a much better candidate for "scaling" than GMSAT. Less volatility, i.e. variability, means less uncertainty.
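The volatility point is easy to illustrate: a smooth OHC-like record has much smaller step-to-step swings than a noisy GMSAT-like one, so anything fit against it carries less scatter. A toy comparison, with both anomaly series entirely made up for illustration:

```python
import statistics

# Made-up anomaly series: a noisy GMSAT-like record and a
# smoother OHC-like record with a similar overall trend.
gmsat = [0.0, 0.3, -0.1, 0.4, 0.1, 0.5, 0.2, 0.6]
ohc   = [0.0, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4]

def volatility(series):
    """Standard deviation of step-to-step changes, a crude
    measure of how jumpy a record is independent of its trend."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    return statistics.stdev(diffs)

print(volatility(gmsat) > volatility(ohc))  # → True
```

Differencing first is a crude detrend; the comparison is about the jumpiness around the trend, not the trend itself.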
A few years ago I scaled land temperatures, SST, OHC, and global mean sea level to a thermodynamically important region of the oceans, the tropics, to create a scaled reconstruction of 2,000 years of climate.
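The "scaling" itself is nothing exotic: fit each series to the reference region over their overlap, then apply the fit. A rough least-squares sketch of the idea, with all of the numbers made up for illustration:

```python
def ols_scale(x, y):
    """Return slope a and intercept b of the simple
    ordinary-least-squares fit y ~ a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

# Hypothetical tropical-ocean reference and a land anomaly series
# over the same overlap period (illustrative values only).
reference = [0.0, 0.1, 0.2, 0.25, 0.3]
land = [0.0, 0.2, 0.4, 0.50, 0.6]

a, b = ols_scale(land, reference)
scaled_land = [a * v + b for v in land]  # land rescaled to the reference
```

Once the fit is in hand, it can be applied to the full length of each series, which is how disparate records like OHC and GMSL end up on one common scale.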
This has every bit as many warts and blemishes as any longer-term reconstruction, with one tiny advantage: it includes OHC and Global Mean Sea Level (GMSL), which is tightly linked to OHC, meaning it relates to the complete definition of "sensitivity": change in surface temperature and change in heat capacity. Around the same time, Rosenthal et al. 2013 produced a similar reconstruction of ocean heat capacity using that same Oppo et al. 2009 Indo-Pacific Warm Pool reconstruction, including a comparison and contrast with previous "global" temperature reconstructions that neglected that pesky OHC issue.
Real science based on discovery instead of fulfilling expectations is stealthily creeping into the debate, it seems. Now that the in-crowd seems to be approving "scaling", things might get interesting.
Update: In addition to correcting the Richardson et al. spelling, this link goes to the paper's "background". Marvel et al. 2015 is another of the "ifs" in the paper, since there were a number of issues with the efficacy estimates in that paper. I am not going to pay to read this paper, and I suspect there will be considerable post-publication review, which will be interesting.