New Computer Fund

Friday, February 27, 2015

da Mann of Natural Variability

Michael Mann has a post on RealClimate about his recent paper, "Atlantic and Pacific multidecadal oscillations and Northern Hemisphere temperatures," by Byron A. Steinman, Michael E. Mann, and Sonya K. Miller.

"In numerous previous studies, these oscillations have been linked to everything from global warming, to drought in the Sahel region of Africa, to increased Atlantic hurricane activity. In our article, we show that the methods used in most if not all of these previous studies have been flawed. They fail to give the correct answer when applied to a situation (a climate model simulation) where the true answer is known."

This paragraph caught my eye: previous studies are flawed because they fail to give the correct answers when applied to a climate model simulation.  To me that is replacing reality with a climate model.  The models should emulate reality, not the other way around.

I have been struggling to find ways to compare model "output" with reality, which is harder than I thought because the main model output, "surface" temperature, doesn't really exist.  The models have a tas output, Temperature Air Surface, which would be the air temperature at some distance above the physical surface, including the oceans, and there isn't a reliable marine surface air temperature (MAT) product that matches the land-based (Tmax+Tmin)/2.  There are MAT products, but they include night MAT only, which would be similar to land Tmin, but land Tmin is considered to be unreliable.  I can create a model-like output, 70% SST and 30% land air temperature, but that opens the door to questions about adjusting for potential temperature and sea ice area.  The simplest solution I have found is to just stick to Sea Surface Temperature (SST), which holds the majority of the energy anyway.
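For what it's worth, the 70/30 blend itself is trivial; here is a minimal Python sketch (all numbers are illustrative, not real station data):

```python
import numpy as np

def blend_surface(sst, land_tas, ocean_frac=0.7):
    """Area-weighted blend of SST and land air temperature (K).

    A crude stand-in for a 'global surface temperature'; it ignores
    the sea-ice area and potential-temperature issues noted above.
    """
    sst = np.asarray(sst, dtype=float)
    land_tas = np.asarray(land_tas, dtype=float)
    return ocean_frac * sst + (1.0 - ocean_frac) * land_tas

# toy annual means: ~291 K ocean, ~282 K land
blended = blend_surface([291.2, 291.4], [281.9, 282.3])
```

The hard part isn't the arithmetic; it's deciding whether that blend corresponds to anything the models actually output.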

I am also not a big fan of temperature anomaly.  It has its uses, but this is an energy problem.  CO2 increase should cause a small energy imbalance, on the order of 1 Wm-2, at the Top of the Atmosphere (TOA), which should take some time to reduce to zero, if in fact it does reduce to zero.  Using various models, the TOA imbalance has been estimated to be about 0.6 +/- 0.4 Wm-2, compared to a surface imbalance of possibly 0.6 +/- 17 Wm-2 (Stephens et al. 2012).

To get a better feel for "surface" energy, I combined the Berkeley Earth land temperature in absolute degrees and the NOAA ERSSTv4 data using the 70% ocean, 30% land ratio to create a "surface" temperature.  As I mentioned, though, that isn't a "real" model output as best I can tell.  That diverted me back to SST alone as the best model-to-observation comparison I can find.

The following are ERSSTv4 temperatures converted to estimated energy, and CMIP5 tos, also converted to energy, for various ocean regions.  The data was masked on Climate Explorer, which may have errors but looks pretty reasonable.  I don't have titles on these charts, but the regions are indicated in the legends.

The 45S to 45N region makes the models look good.  This includes the majority of the ocean surface area and energy so the models should be doing a great job.

60S to 60N, though, isn't quite as impressive.  This indicates the models overestimated ocean energy by about 3 Wm-2 circa 1915.

Since Mann is trying to show that the AMO and PMO (his term) combine to create an NMO, or Northern Hemisphere Multidecadal Oscillation, let's look at the northern hemisphere oceans.

This region starts off with a 3 Wm-2 underestimation and ends with about a 2 Wm-2 underestimation in the region he is calling the "known" in his paper.

"We propose and test an alternative method for identifying these oscillations, which makes use of the climate simulations used in the most recent IPCC report (the so-called “CMIP5” simulations). These simulations are used to estimate the component of temperature changes due to increasing greenhouse gas concentrations and other human impacts plus the effects of volcanic eruptions and observed changes in solar output."  From the realclimate post with my bold.

"Technical details concerning development of a 1200 yr proxy index for global volcanism," Crowley and Unterman 2013, appears to be the newest reconstruction of volcanic forcing.  Their estimates are considerably different from those used in the "known" CMIP5 model runs, with the most notable addition being a large volcanic forcing circa 1913 which is not in the CMIP5 forcing estimates and would help explain why the models consistently miss the 1900 to 1940 observations.

In the southern hemisphere the models overestimate SST by about the same magnitude they underestimate the northern hemisphere.  In case you were wondering,

the tropics (15S-15N) might provide the best reference for how the models miss volcanic response and recovery.  That 1910s forcing is global based on the SST data, and it is completely ignored in Mann's limited Northern Hemisphere "anomaly" comparison against a questionable "known" simulation.

Now let's see how many of the "real" scientists pick up on this.

Wednesday, February 25, 2015

Zeke's TOBS Interpretation

Zeke's interpretation of NOAA's TOBs adjustment, posted on Climate Etc., is about as good as it gets, but there are still a bunch of questions because some people aren't "getting" parts of his lingo.

This is the distribution of Tmax and Tmin per 24-hour day, with 0 and 2400 hrs being local midnight.  Tmax is pretty tightly clustered around 1500 hrs, or 3 PM, and Tmin a little less tightly clustered around 0400 hrs, or 4 AM.  Since the goal is to get the right Tmax and Tmin for the right 24-hour period, midnight provides perfect timing.  1800 to 2000 hrs are the next best times to not mix days and temperatures up.  For Tmax in the summer at this location there is virtually no error for temperatures over 25 C.  The cooler Tmax is, the greater the potential error, but there is little chance of getting the day wrong.  For Tmin it would be easy to get the day wrong but with less error in the temperature.

For this one year I compared the 24hr Tmax and Tmin for midnight and 1800 hours.

21.89 C is the average Tmax, recorded at an average time of 14.33 hrs, for the midnight reset, versus 22.09 C using the 1800 hrs reset.  10.23 C is the average Tmin for the midnight reset, recorded at an average time of 07.93 hrs, versus 10.58 C for the 1800 hrs reset.  These times are in HH.HH, or decimal hours, instead of HHMM.  So for this particular station the 1800 reset produces a slight warm bias, but this station is the Watkinsville, GA CRN station with mechanically aspirated temperature sensors.  If it were a standard MMTS, the Tmax bias would change with surface wind speed, because MMTS shelters are not mechanically aspirated, and with shelter condition, since a dirty shelter absorbs more solar energy.  A Cotton Region Shelter (CRS) has a different temperature bias, with Tmax typically warmer due to solar radiant heating, so you have a similar surface wind speed issue.
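For anyone who wants to repeat the exercise, here is a rough Python sketch of the reset-time comparison.  The station data here is made up, an idealized diurnal cycle rather than real hourly readings; the method is just splitting an hourly series into 24-hour observation days that end at the reset hour:

```python
import numpy as np

def minmax_by_reset(hourly, reset_hour):
    """Split an hourly temperature series into 24 h observation
    'days' bounded by reset_hour, and return per-day max and min.

    hourly: 1-D array, one reading per hour, starting at local 00:00.
    Incomplete leading/trailing segments are simply dropped.
    """
    hourly = np.asarray(hourly, dtype=float)
    start = reset_hour % 24               # first full obs-day begins here
    n_days = (hourly.size - start) // 24
    days = hourly[start:start + n_days * 24].reshape(n_days, 24)
    return days.max(axis=1), days.min(axis=1)

# synthetic diurnal cycle over 10 days: min near 04:00, max near 15:30
t = np.arange(240)
temps = 16 + 6 * np.sin(2 * np.pi * (t % 24 - 9.5) / 24)

tmax_mid, tmin_mid = minmax_by_reset(temps, 0)    # midnight reset
tmax_18,  tmin_18  = minmax_by_reset(temps, 18)   # 1800 hrs reset
bias = tmax_18.mean() - tmax_mid.mean()           # zero for a periodic day
```

With a perfectly repeating day like this, the two resets agree exactly; the ~0.2 C difference above comes from real day-to-day weather, which is exactly why the bias depends on station and climate.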

So is there some error introduced with time of observation?  Yes.  Is it easy to figure out what is TOBs error and what is instrument error?  No.  Each type of station would need a different correction factor if you want to correct for the small errors.  The biggest concern would be errors that bias the station trend.

Errors that tend to bias the trend, if they are step changes, are easy to find and correct.  You can consider each step the start of a new record, like BEST does, and no TOBs adjustment is required.  NOAA decided to use TOBs adjustments for the USHCN, about 1200 surface stations, which led to the surfacestations.org project by Anthony Watts in an effort to isolate micro-site issues that would bias a station without always producing an obvious step change.

Zeke included this graph showing TOBs adjustments, which includes a long slow increase from roughly 1960 to present.  Prior to NOAA defining the USHCN in the early 1980s, there had never been much need to adjust temperatures for TOBs.  The reason the USHCN was defined was so that the NWS and Department of Energy could keep track of Global Change.  Global Change/Global Warming/Climate Change tends to sensitize people to "warming," which appears to have caused a "non-random" shift in time of observation, either to focus on potential "record" events or to maintain a more accurate record.  Human observers sensitized to an issue can make changes, good or bad, that skew simple statistical analysis.

This creates an interesting problem for statisticians trying to estimate TOBs bias.  If observers constantly change reset times to chase records or improve data quality, but actually record the proper Tmax and Tmin for the proper day, then no TOBs adjustment is required.  The observers themselves would be making corrections daily while dutifully noting what time they reset their instruments.  With an 1800 hrs reset there would be only a small number of days per year that add a bias, and if an observer starts a new observation schedule to minimize error, a statistical program would think he was adding to it.  This means that comparison to neighbor stations, which have their own biases related to instrumentation type and site issues, might appear to adjust for TOBs while error is actually being added to the station being corrected.  Since human observers are intelligent, they could change time of observation to improve accuracy, and simple statistical analysis doesn't include that important confounding factor.

This becomes a major PITA if one is trying to quantify micro-site issues that may be masked by TOBs adjustments, when the folks saying the TOBs adjustments are required, based on possibly flawed statistics, keep shutting down other methods of quantifying error without using the TOBs adjustments.

This is exactly what happened to Anthony Watts' most recent paper, which was shot down because he didn't use TOBs adjustments.  The TOBs fans make a compelling case, but without knowing why the observers made the frequent changes in observation times, you really cannot determine a valid TOBs adjustment for each station.

This graph provided by Zeke shows changes in the meta-records.  Zeke mentions that in 1960 the cooperative observers were requested to change to an AM reset schedule in order to improve the accuracy of precipitation measurements (reduce evaporation).  For some perspective, there are about 1200 USHCN stations and about 11,000 Coop stations that were "keyed" for the climate database modernization program.

The Coop Stns (/10) curve above indicates a peak of around 6000 coop stations, and the darker line at the bottom represents USHCN TOB changes, which has its largest peak at about 1918 and a smaller peak around 1950.  Both of these appear as step changes on the Tmin/Tmax adjustments chart.

Around 1985 there is another peak for both TOB and MMTS for the USHCN, which may have produced the tiny bump on the Tmin/Tmax adjustment chart at that time on the smooth slope in adjustments starting in roughly 1960.  Where there should be a step change, around 1960, because of the requested change in time of observation, there isn't one.

If the magnitude of the TOBs adjustments is estimated by using the larger cooperative network, which has micro-site issues as well as instrumentation changes, then it is less likely that the TOBs adjustments correct for just TOB.

Zeke does a much more complete analysis of the CRN stations than my one year at one station in Georgia, which indicates an 8 AM TOB produces a small bias, about -0.1 C.  Zeke also includes a chart of the TOBs changes for the USHCN stations.

Figure 1: Recorded time of observation for USHCN stations, from Menne et al 2009.

So with the majority of changes being to AM, according to this Menne et al. 2009 chart, the TOBs adjustments should have decreased toward about +0.1 C, not increased to nearly +0.3 C.  That indicates there is much more going on than Zeke's interpretation of TOBs adjustments indicates.  Part of that could be human observers adjusting TOB to reduce error, which they might think is part of their job.

Key points:  No 1960 step change is apparent on the Menne et al. 2009 chart.  The change in TOBs from PM to AM, according to Zeke's CRN analysis, should have led to less adjustment, not more.  Finally, different station zones and instruments would have different TOBs considerations not shown in this simple example of the need for TOBs adjustments.

Conclusion: Zeke's rehash of the NOAA explanation of TOBs adjustments leaves the same unanswered questions, though the CRN analysis is a nice addition.

Sunday, February 22, 2015

More on TOBS, Unfortunately

Zeke Hausfather now has a post on Climate Etc. that says the same thing I did but draws a totally different conclusion.

Watch this, "Until the late 1950s the majority of stations in the U.S. record recorded temperatures in the late afternoon, generally between 5 and 7 PM. However, volunteer temperature observers were also asked to take precipitation measurements from rain gauges, and starting around 1960 the U.S. Weather Service requested that observers start taking their measurements in the morning (between 7 and 9 AM), as that would minimize the amount of evaporation from rain gauges and result in more accurate precipitation measurements."

A shift from late afternoon to morning, so let's say 6 PM to 8 AM, okay?

A liquid-in-glass max/min thermometer will record the past 24 hours if reset every 24 hours.  At 8 AM you would have recorded yesterday's high and this morning's low.  At 1800 hrs (6 PM) you would have recorded today's high and today's low.  You have a one-time shift from yesterday's high to today's high by changing from 6 PM to 8 AM.  There is a very small probability that a max or min would occur at either 6 PM or 8 AM.  With daylight savings time, though, a 7 AM reading could become a 6 AM astronomical reading, gumming up the works.  That would increase the possibility of "split" or double recording of a lower-than-normal low.
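The double-recording mechanism is easy to demonstrate with a toy example.  This sketch (synthetic temperatures, one hypothetical hot day) shows a 5 PM reset counting a single hot afternoon in two consecutive observation days, while an 8 AM reset counts it once:

```python
import numpy as np

def daily_max_with_reset(hourly, reset_hour):
    """Per-observation-day Tmax for a max register reset at reset_hour.
    The register holds the highest reading since the last reset."""
    hourly = np.asarray(hourly, dtype=float)
    start = reset_hour % 24
    n_days = (hourly.size - start) // 24
    return hourly[start:start + n_days * 24].reshape(n_days, 24).max(axis=1)

# hypothetical 4-day series with a single hot outlier on day 2
hours = np.arange(96)
temps = 10 + 8 * np.sin(2 * np.pi * (hours % 24 - 9.5) / 24)  # max ~15:00
temps[24:48] += 12                                            # the hot day

pm = daily_max_with_reset(temps, 17)   # 5 PM reset
am = daily_max_with_reset(temps, 8)    # 8 AM reset

hot_pm = (pm > 25).sum()   # the hot afternoon lands in TWO PM obs-days:
hot_am = (am > 25).sum()   # once before the 5 PM reset, once just after
```

The still-warm early evening right after a 5 PM reset seeds the next observation day with the old day's heat, which is the warm bias of afternoon resets in a nutshell.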

" Between 1960 and today, the majority of stations switched from a late afternoon to an early morning observation time, resulting a systemic change (and resulting bias) in temperature observations."

If you tell a network of volunteers to please shift to 8 AM readings, the vast majority would shift to 8 AM readings over a few months, not a few decades.  If there is any TOBs impact due to the shift, it would most likely be a step change, something noticeable.

There is a step change around 1950 but no step change around 1960.  The odds of any TOBs adjustment being required due to an extremely gradual change of observation time over 20-plus years are vanishingly small.

From the start of the chart to ~1990, TOBs adjustments never needed to be more than +/-0.1 C.

"Interestingly enough, the time of observation adjustment developed by Karl et al 1986 is not strictly necessary anymore. Changes in time of observation in station records show up as step changes in difference series compared to neighboring stations, and can be effectively removed by the pairwise homogenization algorithm and similar automated techniques"

Zeke even points out that TOBs issues would show up as step changes.  The LIG max/min thermometers were nearly completely phased out by 1995, after which there is a bigger need for TOBs adjustment?  I don't think so.  TOBs is most likely being confused with some other needed adjustment due to instrumentation, MMTS, local area changes, parking lots, trees, runways, clutter and shelter aging.
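For reference, the step-change idea Zeke mentions can be sketched in a few lines.  This is a toy single-changepoint search on a station-minus-neighbor difference series, not the NOAA pairwise homogenization algorithm itself, and all the data is synthetic:

```python
import numpy as np

def find_step(series, neighbor, min_seg=12):
    """Locate the most likely single step in a difference series
    (station minus neighbor): pick the split that minimizes the
    pooled within-segment variance on either side."""
    diff = np.asarray(series, float) - np.asarray(neighbor, float)
    best_i, best_cost = None, np.inf
    for i in range(min_seg, diff.size - min_seg):
        cost = diff[:i].var() * i + diff[i:].var() * (diff.size - i)
        if cost < best_cost:
            best_i, best_cost = i, cost
    offset = diff[best_i:].mean() - diff[:best_i].mean()
    return best_i, offset

# synthetic monthly anomalies with a +0.4 C shift at month 120
rng = np.random.default_rng(1)
common = rng.normal(0, 0.3, 240)            # shared regional signal
neighbor = common + rng.normal(0, 0.1, 240)
station = common + rng.normal(0, 0.1, 240)
station[120:] += 0.4                        # e.g. an instrument change

i, offset = find_step(station, neighbor)    # near (120, 0.4)
```

The shared regional signal cancels in the difference series, which is why a step this small is findable at all; the catch, as argued above, is that the neighbor has its own biases.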

Zeke just cannot get the concept that TOBs, the unneeded adjustment, is a kludge covering other issues.  When you consider adjustments on an instrument-by-instrument basis, the way it should be done, you are more likely to find out more about those "other" issues, like land use change and such.

Time of Observation Adjustment (TOBS)

It seems most of the guys that stay involved with the Climate Change debate have Alzheimer's-like memory issues.  Every now and again the same topic pops up just as fresh as it was a year ago, and the year before that, and the year before that.  Today, for some reason, it's TOBs again.

I generally stay out of these debates because they are a waste of time.  All the players are entrenched in their positions and there is no way to reason with them.  Well, most of my blog posts are cut-and-paste ammo.  Someone brings up a topic from the old days and I hit them with a link.

Steven Mosher pissed me off with a reference to the temperature record adjustments being required due to the incompetent "citizen scientist" volunteers that manned the surface stations.  Typical elitist BS, leadership or management is supposed to make the most out of their resources and accept any blame for defects.  That's why they get the big bucks.  When leadership passes the buck, time for new leadership.

So let me make this simple enough for even Mosher to follow.  These are the documented NOAA adjustments to the USHCN.  There is a very small TOBs adjustment that increases with time into the modern era.  Those ignorant citizen scientists were replaced by educated and highly compensated scientists when the adjustments got out of hand.  Note the very small MMTS adjustment.

Next, the temperature data are adjusted for the time-of-observation bias (Karl, et al. 1986) which occurs when observing times are changed from midnight to some time earlier in the day. The TOB is the first of several adjustments. The ending time of the 24 hour climatological day varies from station to station and/or over a period of years at a given station. The TOB introduces a non climatic bias into the monthly means. The TOB software is an empirical model used to estimate the time of observation biases associated with different observation schedules and the routine computes the TOB with respect to daily readings taken at midnight. Details on the procedure are given in, "A Model to Estimate the Time of Observation Bias Associated with Monthly Mean Maximum, Minimum, and Mean Temperatures." by Karl, Williams, et al.1986, Journal of Climate and Applied Meteorology 15: 145-160.

TOBs adjustments came to be in roughly 1986, when Karl noticed a quality issue.  He "ASSUMED" it was TOBs and created an adjustment, only needed in special cases, to correct that quality issue.  In actuality, the issue correlated with the change to airport-located weather stations, especially automated airport weather stations.

Temperature data at stations that have the Maximum/Minimum Temperature System (MMTS) are adjusted for the bias introduced when the liquid-in-glass thermometers were replaced with the MMTS (Quayle, et al. 1991). The TOB debiased data are input into the MMTS program and is the second adjustment. The MMTS program debiases the data obtained from stations with MMTS sensors. The NWS has replaced a majority of the liquid-in-glass thermometers in wooden Cotton-Region shelters with thermistor based maximum-minimum temperature systems (MMTS) housed in smaller plastic shelters. This adjustment removes the MMTS bias for stations so equipped with this type of sensor. The adjustment factors are most appropriate for use when time series of states or larger areas are required. Specific details on the procedures used are given in, "Effects of Recent Thermometer Changes in the Cooperative Network" by Quayle, Easterling, et al. 1991, Bulletin of the American Meteorological Society 72:1718-1724.

Quayle noted that a correction was required for the new stations.  The TOBs adjustment could then have been dropped and replaced with one MMTS/equipment adjustment.  I don't know who is higher up the food chain at NOAA, but it looks like Karl.

So why are TOBs adjustments BS?  Because you don't need them.  Most of the record is based on LIG Tmax/Tmin thermometers that never cared what time a max/min occurred.  Changes in TOBs have minimal impact on most of the record.  There was a shift to synoptic timing once overland communication became affordable.  Then stations, mainly manned by people that had the land line or telegraph, provided Coordinated Universal Time observations.  That way you could get your "national" forecast.  With the growth of private air travel in the US, stations were shifted to airports so pilots could call destinations for weather updates.  Those readings were taken more often.  Taking readings more often meant more trips to the weather shack, so weather shacks got located closer to the communications gear.  That means the adjustment required was a "type" of observation adjustment or a "type" of instrumentation adjustment.

To avoid stepping on toes, adjustments are added to existing fudges in order to get the precision required.  It has absolutely nothing to do with the volunteers that provided weather information as part of their day-to-day livelihoods.

Does the Berkeley Earth Surface Temperature project use TOBs adjustments?  Nope, they are not required; they just have instrumentation/break-point adjustments, since they didn't have any toes to avoid stepping on.

The Quayle 1991 paper is a good place to start if you want to know the issues with each type of weather station; none are perfect.  The surfacestations.org project has even more information on specific surface stations.  Thanks to TOBs and MMTS adjustments, other adjustments for things like station location are not included because they are already assumed away.

Friday, February 20, 2015

CET and Volcanic Forcing

I was playing with volcanic forcing when Tony Brown put up a post on Climate Etc. about his Central England Temperature (CET) reconstruction, which uses weather notes in diaries and similar sources to fill in the blanks, to some extent, pre-instrumental.  The CET record is more popular with skeptics, especially those of British origin, and like any "local" temperature record it has its good points and bad points.  So I made this rather busy chart.

One of the major pluses for CET is its marine climate influence due to the Gulf Stream.  One of its minuses is its high latitude, so variations in the northern jet stream create huge swings, relatively speaking, in average temperature.  I have used a normal 5-year moving average of the CET annual data, and the BEST data I have is smoothed with a cascading 27-month average.  The three cascade stages would be roughly equal to a 6-year average.  I only used the Crowley and Unterman 2013 Northern Hemisphere volcanic data, with a simple 20% decay.  Then I roughly scaled the volcanic forcing with the 1815-1840 events.  Since I was using anomaly for the BEST data, I added a +0.4 C offset to the volcanic curve to roughly match that same period.  Then I included the Law Dome CO2 reconstruction with a "sensitivity" of 1.1 C to roughly match the BEST 30S-30N temperature data.
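For anyone wanting to reproduce the smoothing and decay, here is a rough Python sketch.  The "simple 20% decay" is my own convention, each step keeps 80% of the remaining forcing, and the cascade is three passes of a centered 27-month mean; the pulse series below is a toy example, not the Crowley and Unterman data:

```python
import numpy as np

def decay_forcing(aod_pulse, retain=0.8):
    """Spread impulsive volcanic forcing forward in time, each step
    keeping `retain` of what is left (a 20% per-step decay)."""
    out = np.zeros(len(aod_pulse))
    carry = 0.0
    for i, pulse in enumerate(aod_pulse):
        carry = carry * retain + pulse
        out[i] = carry
    return out

def cascade_mean(x, window=27, passes=3):
    """Cascaded moving average: `passes` applications of a simple
    centered `window`-point mean (27-month, three stages here)."""
    x = np.asarray(x, dtype=float)
    kernel = np.ones(window) / window
    for _ in range(passes):
        x = np.convolve(x, kernel, mode="same")
    return x

# toy impulse series: one eruption at step 5
pulses = np.zeros(60)
pulses[5] = 1.0
forcing = decay_forcing(pulses)   # 1.0, then 0.8, 0.64, ...
```

Scaling the result to temperature and adding the +0.4 C offset is then just multiplication and addition against whatever anomaly baseline is in use.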

The Berkeley Earth Surface Temperature (BEST) project noted the greater-than-expected amplification of the 30N-60N land temperature data.  This chart shows that at times CET is more influenced by the amplified region, in blue, but appears to revert to the more "global" representative 30S-30N region.  The fit with the volcanic forcing is the typical tease, where in some areas the fit is great and in some areas not so great.  There is a lot of noise, some of it amplified in the northern higher latitudes, which contributes to that less-than-ideal fit.  The Crowley and Unterman volcanic data includes quite a few smaller events ignored in previous reconstructions, as well as some major events that were just neglected for whatever reason.  The extra information doesn't "fix" all the issues, but it does reduce some of the more glaring inconsistencies.  For example, 1912 now has an event to match its temperature dip: Novarupta, which was a VEI 6.

The CMIP5 models appear to include a 1912 volcanic event, but smaller than the 1904 event, which appears to be Mount Lolobau, a VEI 4.  So there is a significant difference between the C&U volcanic data and the volcanic forcing used for CMIP5.  Note that the +0.4 offset I used is about the same as used for CMIP5.

The reason for all this: was there really a Little Ice Age, and if so, when did it end?

Since I "calibrated" my volcanic forcing to the 1810 to 1840 period, which produced the +0.4 offset, I would be of the opinion that a period of cooler-than-"normal" temperatures due to volcanic forcing wouldn't be over until temperatures returned to the pre-volcanic forcing level.  That would be roughly 1930 to 1940, a period with much lower than "normal" volcanic activity.  Since the climate science community "picks" circa 1900 as the end of the LIA, they think "normal" is about 0.4 C lower than I do, even though their modeled forcing tends to agree with me.  0.4 C is about half of the estimated AGW claimed, which makes some of their projected impacts a bit hard for me to swallow.

Looking at the entire Crowley and Unterman 2013 NH data, it seems perfectly reasonable to assume that more volcanic activity would lead to lower-than-normal temperatures and less would lead to warmer-than-normal temperatures.  What really is normal, though, would depend on what is "normal" volcanic activity.

This overlay of the Oppo et al. 2009 Indo-Pacific Warm Pool reconstruction seems to agree with my circa 1930 to 1940 exit from the LIA.  With instrumental data, such as it is, CET and BEST, extending back to the mid-1600s to 1700s, the Mann "Global" temperature change reconstruction looks less and less likely.

Thursday, February 19, 2015

More on real temperatures and models

I have the CMIP5 tas, or surface air temperature, and tos, or ocean surface temperature, plotted with tas on the left axis and tos on the right axis.  The reason is to show how little difference there is between the way the models consider volcanic impact on the air and on the ocean.

This comparison of CMIP5 tas and Berkeley Earth's surface air temperature indicates that the models don't do a bad job with surface air temperature.  The models do start about a degree lower than Berkeley, but there is no official surface air temperature so Berkeley is just for reference.

The oceans cover the majority of the surface, though, and hold the most energy.  I still would say the models do a fair job, but notice how the volcanic forcing doesn't match nearly as well as it does with the lower-specific-heat-capacity air.  This only has an error range of about +/-0.25 C, but because of the warmer temperatures and ocean latent heat, the energy error could be much higher.  Just radiant-wise the error is about +/-1.4 Wm-2, and it is difficult to quantify the latent part.
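The +/-1.4 Wm-2 figure is just the Stefan-Boltzmann equation differentiated, dF = 4*sigma*T^3*dT, evaluated near a typical SST.  A quick check in Python (294 K is my assumed round number for mean SST):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_flux(T):
    """Blackbody flux F = sigma * T^4 (W/m^2)."""
    return SIGMA * T ** 4

def flux_error(T, dT):
    """First-order flux error dF = 4 * sigma * T^3 * dT for a
    temperature uncertainty dT at temperature T."""
    return 4 * SIGMA * T ** 3 * dT

# ~294 K mean SST with a +/-0.25 C uncertainty
dF = flux_error(294.0, 0.25)   # ~1.44 W/m^2, the +/-1.4 quoted above
```

Note this is the radiant term only; the latent heat part has no comparably tidy formula, which is the point being made above.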

When you combine the tas and tos using the 0.3:0.7 land-to-ocean ratio for both the CMIP5 and instrumental data, you get this.  I included the red and blue lines to indicate the approximate "sensitivity" estimated for the RCP4.5 scenario.  Since the biggest response error is in the ocean, and the ocean is the largest factor, the models miss mainly because they don't "get" the oceans.

The graphics I created don't look as ominous as this IPCC AR5 graphic.  The RCP4.5 scenario is in light blue, and 2100 is about 2.5 C above the approximately-1880 start year.  My charts start in 1854 to 1861, depending on the data set, and pre-1880 is when one of the largest misses occurred.

If I start in 1880, the "historical" or instrumental data begins at about 288.2 K and the modeled minimum is about 287.4 K.  "Warming" in the historical is close to 1 degree at the end of the series, about 2013.5 due to smoothing, and the "projected" warming is about 0.75 C more than that.  The "projected" warming of the CMIP5 mean RCP4.5 run looks more like ~1.7 C +/-0.2 than 2.5 C.  The IPCC chart says it uses an 1861 to 1880 baseline.

Using the same baseline, you can see that it makes the models look better, but from the 1861-1880 baseline there is still only about 1.75 C of total warming expected.  Close to half of that warming is already out of the way.

Berkeley Earth's kriging method allows for a longer instrumental temperature series, with of course increasing error.  Including the new Crowley and Unterman 2013 volcanic forcing reconstruction provides a different view on what impacts may be neglected in the 1861-1880 baseline.  This uses Berkeley Earth 20S-20N in hopes of getting a better indication of tropical SST.  1861 is a bit of an odd choice; Hadley Centre products start in 1850, so 1861 may have been a compromise between the various temperature product start dates.  In any case, baseline choice can have a few tenths of a degree impact on "warming," while actual temperature comparison reduces that potential bias.  Scaling Berkeley to "simulate" SST, though, with its increasing error and the baseline issue, doesn't provide anything more than a tease about what might have been.

But perhaps a tease can be helpful.  The scaling required adjusting the baseline and the trend slightly to fit the majority of the more accurate SST data.  The divergence at roughly 1900 might be real or might be an artifact, but it is close to the uncertainty range of both products.

So why would I waste my time with all this?

The Marotzke and Forster 2015 Nature article compares models with HADCRUT4 "surface" temperature.  The HADCRUT4 data is about 70% SST and about 30% land surface temperature.  To get the best match of models to observation, I have to create a combined 70% SST and 30% land model product, since the model tas appears to be a real attempt at a 2-meter or some real surface air temperature.  There isn't an equivalent observational product.  Since most of the model errors appear to be in the SST portion of the programming, it makes sense to me to compare worst to worst before getting into fantasy to fantasy.  If the Berkeley "scaling" is acceptable, that would provide some out-of-sample data for Marotzke and Forster to expand this research "globally," or just land only if they don't like the scaling.  Either way it would be nice to see apples compared with apples.

Tuesday, February 17, 2015

Back to the Tropics

"Surface" air temperature is the go-to metric to illustrate "global" warming.  Global "surface" air temperature is a combination of roughly 2-meter measured land temperatures, averaged over stations at varying elevations, combined with sea "surface" temperature often measured a few meters below the surface.  The data is what it is, so we have to make the most of it, but it does lead to some real uncertainties that need to be addressed.

In the previous post I showed a few modeled "surface" air temperatures for the tropics (30S-30N) with SST.  There is a pretty large spread.  If you take a "historic" anomaly baseline, most of the model variance is in the future, and if you take a future anomaly baseline, most of the variance is in the past.  Real temperatures don't change relationships with baseline choice.
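That baseline effect is easy to demonstrate with a toy ensemble: runs that share a start but have different trends look tightly clustered near whatever window you baseline to, with the spread pushed to the other end of the series.  A sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2101)
# toy 'ensemble': 20 runs sharing a 287 K start but differing in trend
slopes = rng.normal(0.01, 0.003, size=20)        # K per year
runs = np.outer(slopes, years - 1900) + 287.0    # absolute temps, K

def anomalize(runs, years, lo, hi):
    """Re-express each run as an anomaly from its own mean over
    the baseline window [lo, hi]."""
    mask = (years >= lo) & (years <= hi)
    return runs - runs[:, mask].mean(axis=1, keepdims=True)

early = anomalize(runs, years, 1900, 1930)   # 'historic' baseline
late = anomalize(runs, years, 2070, 2100)    # 'future' baseline

# spread migrates to whichever end is far from the baseline window,
# while the absolute runs have one fixed spread at every year
spread_2100_early = early[:, -1].std()
spread_1900_late = late[:, 0].std()
```

Which is why I keep arguing for absolute temperature comparisons: the ensemble spread at any given year is what it is, regardless of baseline.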

The model experiments do have a sea "surface" temperature output that should compare more directly to SST observations.  With that you can estimate an energy error using the Stefan-Boltzmann equation, neglecting latent energy.

That indicates the rough "bias" of the models.  The models tend to run hot and miss cooling events, like the 1910s dip, by quite a bit.  In my opinion this is just error, but some repackage it as "natural" variability.  "Historic" for most models is 1861 to 2003 or so, but for a few it is 1850 to 2003.  One of the largest volcanic events recorded while there were thermometers was in 1815, Mount Tambora.  Most climate scientists agree that it only takes 8 to 10 years for most of the "surface" air temperature impact to be realized.

Using ocean thermal basins as a reference, there is roughly a ten-year lag between the southern hemisphere ocean and the northern Pacific, but without a longer instrumental time series you have to make a number of assumptions, and I am not a big fan of lots of assumptions.  Since there is evidence of lags in the various ocean basins, which would start a fairly complex recovery oscillation, I would not jump to blaming the 1900 to 1920 ocean cooling on the ~1880 volcanic event.  Tambora or some other event could have started an oscillation where following events hit in or out of phase, making "forcing" appear less or more effective than it really should be.  If you could find a definite signature of one event, you could use that initial condition to tease out the ocean inertia influence on apparent "forcing."

That led me to tropical ocean paleo and "scaling" instrumental "surface" air temperatures, in an effort to find the possible start of these longer-term oscillations related to cooling events across 2000 years of climate.  According to that, circa 1700 AD is a reasonable starting point for the tropical ocean recovery.  The event(s) that caused that dip in temperature may have started as early as 1200 AD.  There are potentially huge lags related to ocean heat content.

So with this you have a much more interesting puzzle than the boring debate over temperature adjustments or whether model "experiments" are evidence of anything.  The models run hot, and anomaly just lets you cheat, intentionally or not, to make the models look better.  Some real reference and some realistic starting point are needed to start solving the puzzle.

So even though this post is a bit of a rehash, I am posting it to try to get more folks involved in the pre-model and pre-Mann world of climate so perhaps something can actually move forward.

Sunday, February 15, 2015

On Marotzke and Forster - WAG Index?

There is considerable ongoing discussion of the Marotzke and Forster paper published this month in Nature.  Nic Lewis noted in a post on Climate Audit that the paper was based on circular reasoning: a previous Forster paper estimated change in forcing using estimated surface temperature, and Marotzke then compared that forcing against surface temperature to estimate model "natural" variability.

Prior to all this I had been looking at what should be the "actual" surface temperature versus what the models assume the actual surface temperature to be.  Climate Explorer has a good deal of the model runs archived so that you can mask regions.  The tropics are my main region of concern, primarily because of the huge latent heat generation which is a source for the majority of clouds.  Water vapor response, cloud radiative forcing and deep convection are all very uncertain climate factors.  So I have prepared, and may continue to prepare, comparisons of model "actual" temperatures to observational temperatures.  This isn't really as simple as it should be.  Models output TAS, Temperature Air Surface, which isn't really something measured.  Land surface temperature is measured roughly 2 meters above the ground at surface stations whose elevations vary considerably.  A "potential" surface air temperature is one way to approach this issue, but over the ocean no "surface" air temperature is reliably measured, only a sea "surface" temperature, which is often really a sea "sub-surface" temperature.  That makes "what surface?" an important question that really doesn't have a good answer.

So when I started putting this together I used ERSSTv4 as a reference and, whenever possible, masked the model tas for just ocean data points.  In the tropics, the tas should be close to the measured SST less roughly 1 C.  I have the observed SST in red with a thicker line so it stands out, and the models I selected show a rough range of the model temperatures for the same area.  Not all of the models are set up for the sea-point masking, so GISS-E is just 30S-30N.  Unless I made an error, the rest are masked just for sea points.  There is roughly a 2.5 C range of temperatures in the models for what should be a value "known" to at least a half degree, by my estimation.  The two Chinese model runs, bcc-csm1-1 and -1-m, start the closest to what should be the "real" temperature.  GISS, which also produces a temperature product, comes in third, with one of the GFDL estimates a close fourth.
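The bias itself is trivial to tally once you have matched series.  A minimal sketch with hypothetical numbers (not actual model or ERSSTv4 output), using the rough tas-is-SST-less-1-C rule of thumb above:

```python
import numpy as np

# Hypothetical annual-mean tropical series (deg C); in practice these
# would be Climate Explorer output with a sea-point mask applied.
obs_sst = np.array([26.4, 26.5, 26.3, 26.6, 26.5])
model_tas = np.array([24.1, 24.3, 24.0, 24.4, 24.2])

# In the tropics, tas should sit roughly 1 C below SST, so compare the
# model to (obs - 1) rather than to SST directly.
expected_tas = obs_sst - 1.0
bias = float(np.mean(model_tas - expected_tas))
print(round(bias, 2))  # -1.26: this toy model runs cold
```

With a real spread of model runs you would get a bias per model, which is the "rough range" the chart is meant to show.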

Initially, it looked like the models with lower-than-observed temperatures had the greatest "sensitivity", but this collection implies that there is not much correlation between getting surface temperature right and sensitivity.  It looks more like a bunch of wild assed guesses.

One of Forster's conclusions was that the spread in the wild assed guesses indicated that the modelers were not "adjusting" their models to match observation.  I heartily agree with that conclusion.  Another was that present and future forcings and feedbacks were the source of the variability in the model "forecasts".  Well, if you "adjust" the model runs to an 1860 to 1999 baseline anomaly, or any other past-period anomaly, then the majority of the variability is in the future.  If you "adjust" the model runs to a future baseline, then most of the variation is in the past and would be due to wild assed guessing.

Now if you used the Marotzke and Forster method on this you could come up with an estimate of "Wild Assed Guess Variability".  Perhaps we could create a WAG Index for future model generations?

Saturday, February 14, 2015

The Thermal Tidal Basins

Continuing with the Thermal Tide analogy for the Scottish Skeptic: thanks to land mass distribution and the Coriolis effect, you can say there are three tidal basins, the North Atlantic, the North Pacific and the Southern Hemisphere oceans.  The areas and volumes of these three basins have a rough ratio of 1:2:4, i.e. the North Pacific is roughly two North Atlantics, and the Southern Hemisphere oceans are roughly four North Atlantics in area/volume.  These three can produce some difficult-to-follow oscillations.

In the past, with the Drake Passage closed, there would have been two southern hemisphere ocean basins, and with the Panama area open there would have been a more common tropical ocean, reducing the basins to possibly two, Atlantic and Pacific.  Longer-term climate oscillations would be affected by the number of effective basins.

Each basin has equatorial surface currents driven westward by winds, then northward when they encounter continental land mass.  Since the poles are the ultimate heat sink for the basins, the North Atlantic has the least effective heat sink, followed by the North Pacific, and thanks to the Drake Passage the Southern Hemisphere oceans have the most effective heat sink.  Evaporation is the second major heat sink of the oceans, so the North Atlantic, with higher average surface temperatures, produces a larger fraction of latent heat loss that ends up as precipitation on land, followed by the North Pacific and the SH oceans.

"Corrientes-oceanicas" by Dr. Michael Pidwirny

The northern portion of the Atlantic basin peaks around 20 C currently.

The northern portion of the Pacific peaks around 18 C, currently.

The Southern oceans, 30S to 65S, peak at around 12.5 C, currently.

If you look at the range of temperatures, the southern 30-65 has close to a 4 C range, the Pacific 30-65 about a 9 C range, and the northern Atlantic also about a 9 C range.

The tropics, 30S-30N in this case, have a small range, less than 2 C, and the current peak temperature is close to 26.5 C, which may be a physical limit or close to it.

In some ideal world you could simply use the tropics as a base for the tidal signal, track the lag in each basin then yell EUREKA!  Not that easy in this world.

In this chart I have smoothed the three basins using a 27-month cascade and the full-series baseline for the anomaly.  The SH takes a huge dip circa 1900, the Atlantic a few years later, and the Pacific has a slightly different dip about 5 years after that.  The volcanic forcing estimated by BEST provides a timing backdrop.
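For anyone wanting to reproduce the smoothing, here is a minimal sketch assuming the "cascade" means a repeated centered running mean (the post only specifies the 27-month window; the three passes here are my assumption):

```python
import numpy as np

def running_mean(x, window):
    """Centered moving average; the ends get trimmed."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def cascade_smooth(x, window=27, passes=3):
    """Repeated running means: a simple low-pass 'cascade' filter."""
    for _ in range(passes):
        x = running_mean(x, window)
    return x

# Toy monthly series: a slow 20-year cycle plus noise (invented data).
rng = np.random.default_rng(1)
t = np.arange(12 * 100)  # 100 years of months
raw = np.sin(2 * np.pi * t / 240) + rng.normal(0, 0.5, t.size)
smooth = cascade_smooth(raw)

# The cascade suppresses the noise while keeping the slow cycle.
print(raw.std() > smooth.std())  # True
```

Each pass trims window-1 points from the ends, so the smoothed series is shorter than the raw one; with real basin data you would want to keep track of that when lining up dates.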

Here I have lagged the SH by 10.5 years and the Atlantic by 5 years to nearly synchronize the dips.  It looks pretty unlikely that the ~1884 volcanic forcing could have caused the dips, or the lags from the subsequent volcanoes would coincide with other similar dips that don't appear to exist.  There just doesn't appear to be enough data to make much of a case for anything other than this: there appears to be a five and a half year lag of the northern Atlantic behind the SH, and a five year lag of the northern Pacific behind the northern Atlantic, and in a cooling response mode the SH can cool more easily than the northern Atlantic, which can cool more easily than the northern Pacific, at least under the conditions that existed at the beginning of the 20th century.
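Rather than eyeballing the shift, the lag can be estimated by sliding one series against the other and keeping the best correlation.  A sketch on synthetic data (the series below are invented; a real test would use the smoothed basin anomalies):

```python
import numpy as np

def best_lag(reference, series, max_lag=180):
    """Lag in samples that maximizes the correlation of `series`
    against `reference`; positive means `series` lags behind."""
    def corr_at(k):
        return np.corrcoef(reference[:-k], series[k:])[0, 1]
    return max(range(1, max_lag + 1), key=corr_at)

# Synthetic monthly "basin" anomalies: the second is the first shifted
# by 66 months (5.5 years) plus a little noise.
rng = np.random.default_rng(2)
n, true_lag = 1800, 66
base = np.cumsum(rng.normal(0, 0.1, n + true_lag))  # red-noise-like walk
sh = base[true_lag:]                                # leading basin
atlantic = base[:n] + rng.normal(0, 0.02, n)        # lagging basin

print(best_lag(sh, atlantic))  # recovers the built-in 66-month lag
```

With only one big dip per basin in the real record, the correlation peak would be broad, which is exactly the "not enough data" problem noted above.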

The tropics versus the southern oceans 30-65 show part of the complexity.  While the tropics provide the majority of the energy for the thermal tide, the southern ocean sink can dominate heat loss.  Since this event happened prior to consistent data collection near the Antarctic, it will be a major challenge figuring out a convincing cause.  A small consolation may be that the 1941-1945 warmth appears to be an overshoot of the recovery from that event.

This is where I became more interested in tropical ocean paleo reconstructions, in particular the Oppo et al. 2009 Indo-Pacific Warm Pool reconstruction.

In particular the 1700 to end portion.

So in my opinion, the thermal tide analogy has potential, but it is not a slam dunk without some serious work, mainly on paleo, because there appears to be a 300-plus-year recovery of tropical ocean temperatures.

Chaos and Tides

I started to discuss this recently and stopped because I might screw up.  I still may screw up, but it is cold outside, so what the heck.  Tides are somewhat chaotic.  The NOAA basics on tides above shows the plain vanilla coastal tide types: diurnal, one cycle a day; semi-diurnal, two cycles a day; and mixed semidiurnal, where you have a bit of both.  If there weren't any land masses, Earth would have just a semidiurnal tide: high when the moon is overhead and high when the moon is exactly opposite, if there weren't any inertia to be considered.  It takes time for water to move, so there is a delay related to the ocean mass inertia.  In a place like the Gulf of Mexico, which has only one "inlet", the restricted flow increases the time required to reach a high/low tide, so the Gulf States tend to have a diurnal tide.  Since the moon orbits the earth on roughly a 28 day cycle, the major tide cycle varies with the moon.  The full and new moons are in line with the sun, so the peak tides are on the full and new moons.  Since the distance from the Earth to the Sun varies annually with orbit, in January there is a "spring" tide and in July a "neap" tide.  That relationship changes with longer-term orbital cycles, so in about 20,000 years the "spring" tides will be in July.  There are other longer-term orbital changes that have a little less impact on tide timing.
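Those tide tables boil down to summing periodic constituents.  A minimal sketch with just the two principal semidiurnal constituents (the periods are the standard M2 and S2 values; the amplitudes are made up for illustration):

```python
import numpy as np

# Principal lunar (M2) and solar (S2) semidiurnal periods, in hours.
M2_PERIOD, S2_PERIOD = 12.4206, 12.0000
M2_AMP, S2_AMP = 1.0, 0.45  # invented amplitudes, meters

hours = np.arange(0, 24 * 30, 0.25)  # one month at 15-minute steps
tide = (M2_AMP * np.cos(2 * np.pi * hours / M2_PERIOD)
        + S2_AMP * np.cos(2 * np.pi * hours / S2_PERIOD))

# The two constituents beat against each other: in phase (spring tide)
# the height reaches M2+S2; out of phase (neap) only M2-S2.
print(round(tide.max(), 2))  # 1.45, the spring high
```

The slight period mismatch produces the roughly fortnightly spring-neap beat, which is the same in-and-out-of-phase behavior the thermal tide analogy leans on.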

Since we have plenty of data on actual tides and know the orbits of the Moon and Earth around the Sun, we have fairly accurate tide tables.  Wind shifts though can change the inertia of the water delaying or advancing tides at any given location.

Tides are what many would call a "chaotic" problem.   You can "solve" tides but there are "other" factors that cannot always be considered in advance.  If for whatever reason someone cut a canal through Florida wide enough, it would have an impact on the Gulf State tides by allowing more flow from a different tidal reference.

Very few people think about tides as being chaotic.  They have tide tables and are used to them being close but not exact.  Weather is chaotic, people are used to weather forecasts being close but not exact.  When it comes to climate though, chaos seems to be an ugly word.

Energy transfer in the atmosphere and oceans also has inertia and would have "tidal" cycles similar to the plain vanilla ocean tides.

A term I may have coined is tidal stutter.  It is nothing but the shift in any mixed semidiurnal, semidi-annual or semidi-precessional pattern from di-whatever to semidi-whatever "tidal" cycles.  This is based on the accepted tidal cycles like those discussed by NOAA.  There are other possible cycles, but these are the "commonly" accepted variations in tidal patterns.  If you can wrap your head around your local tides, you should be able to wrap your head around thermal energy tides.  Specific heat capacity is often called thermal inertia, which for this analogy you just accept as being a real inertia, as if it involves mass.  It does, but it is thermal mass.

If you look at the last chart, LS is lower stratosphere energy approximated from UAH LST, and SS"E" is sea surface energy instead of sea surface temperature.  The atmosphere and oceans have hugely different thermal inertias and are coupled, as in one influences the other.  The Sun provides the energy, and the responses in the LS and SS"E" are lagged at differing rates due to their relative inertias.

Buried in what may appear to be noise is a basic mixed semidi-annual tidal cycle, not much different than the NOAA basic example.  It is more complex because of the larger differences in thermal inertia, but very similar.

If you pick just one region, like the Nino3.4, you would have a different tide table than for the Persian Gulf.  If you are fishing, you would want your local tide table, not one for Portland, Oregon.  However, if you are concerned with western Pacific coast weather, you would want to know the Pacific Decadal "Tides", and for England the Atlantic Multi-decadal "Tides".  If you are concerned with "global" climate, you should want to use the equatorial "tides".

This is the reason I have spent so much time on the Tropical Ocean Peak Temperature reconstruction.  Peak would be the high thermal tide.  Once you know the approximate high tide you can more easily predict the low and mid-range tides.  

This graph of the Berger et al. solar cycle estimate for the equator should be what paces the equatorial thermal tides.  The Jun version would be a di-orbital "tide" and the Equ. Peak a semidi-orbital "tide".  So, using what should pace equatorial climate, one should be able to find the exceptions that have to exist.

The Arabian Sea most likely would not have the same exceptions as the combined Tropical Oceans,

but there should be more similarities than differences.  Remember, there is a lot of uncertainty not only in the SST data but in the orbital approximation as well.  The precessional cycle is not fixed, but it is the data that we have and provides a reference to work from.  Here I have included a "global" solar peak with the Equatorial Tropical Ocean peak.  The northern/southern hemisphere thermal tides can impact the equatorial thermal tides because the system is coupled.  If I finish, you will have a thermal tide table that, just like local tide tables, is close but rarely right on the money.

That is what happens with a chaotic problem: you can get reasonably close but never exact, just "probably" in the right ballpark.  Figuring out the area of the ballpark is another problem.