New Computer Fund

Saturday, March 28, 2015

Ted Cruz, NASA and NOAA

Ted Cruz is a Texas senator and the chairman of the Senate Subcommittee on Space, Science, and Competitiveness.  There is a bit of a to-do over a confrontation between Senator Cruz and NASA administrator Charles Bolden over NASA's "core mission".  What is interesting is how this has been overplayed by the warm and fuzzy minions.

Between NASA and NOAA we have two huge agencies tasked with space and inner-space research.  There is some overlap of responsibilities and needed joint cooperation, but there is considerable redundancy that is not only not cost effective but counterproductive.  NASA GISS, for example, has a surface temperature product and a climate science division, as do NOAA (NCDC) and the GFDL.  The NASA GISS head at the time, James Hansen, predicted that CO2-related anthropogenic climate change would be extremely hazardous, potentially 4 C of warming, while Syukuro Manabe with the GFDL predicted less impact, about 2 C.  Current observations tend to indicate that Manabe, with the GFDL, tasked with the inner-space duties, knew more about the inner-space climate than Hansen, tasked with the outer-space duties (NASA).  So we have a huge group of scientists on the public payroll doing their job apparently pretty well, and a few scientists on the public payroll venturing into areas outside their agency's purview, not doing all that great of a job.

Senator Cruz appears to have mentioned that perhaps NASA should try to focus more on its real mission instead of competing, somewhat poorly, with other agencies doing their job.  That should be a fairly common-sense type of concern for a senator who is supposedly watching out for your tax dollars.

From thinkprogress, “Our core mission from the very beginning has been to investigate, explore space and the Earth environment, and to help us make this place a better place,” Bolden said.

Well, that's fine.  NASA was founded in 1958 at the beginning of the Cold War space race and had a noble mission statement at that time.  NOAA was founded in 1970 to focus on the inner environment, oceanographic and atmospheric, sort of a spin-off of NASA.  Bolden appears not to have gotten that memo.

According to media and blogs like ThinkProgress, Senator Cruz is an anti-science imbecile because he questions NASA's competition with NOAA and says things like, “Almost any American would agree that the core function of NASA is to explore space,” he said. “That’s what inspires little boys and little girls across this country … and you know that I am concerned that NASA in the current environment has lost its full focus on that core mission.”

How about some NASA history?  
To understand and protect our home planet,
To explore the universe and search for life,
To inspire the next generation of explorers,
… as only NASA can.


I find it hard to find fault with Senator Cruz's paraphrase there.

If NOAA or its National Weather Service needs new weather or climate science platforms, NASA's mission would be to assist in the design and placement of those space vehicles.  If communications satellites are needed, NASA's mission is to assist in the design and placement of those space vehicles.  That doesn't mean NASA is in the television industry or the climate science industry; it is in the space industry.  If NASA wants to do the job of NOAA, then eliminate NOAA, or how about everyone just sticking to their specialty instead of freelancing?


Monday, March 23, 2015

New papers getting a look

Bjorn Stevens and crew have been busy.  I stumbled on a Sherwood et al. paper that included Bjorn Stevens as one of the "als".  The paper concerns some issues with the "classic" radiant forcing versus surface temperature relationship and "adjustments" that should be considered.  It is reviewed and ready for publication but hasn't hit the presses yet.  A biggy in the paper concerns the "fungibility" of dTs, which I have harped on quite a bit invoking the zeroth law of thermodynamics.  "Surface" temperature, where the surface is a bit vague, isn't all that helpful.  Unfortunately, surface temperature is about all we have, so there needs to be some way to work around the issues.  Tropical SST is about the best workaround, but that really hasn't caught on.

In my post How Solvable is a Problem I went over some of the approximations and their limitations.  I am pretty sure the problem isn't as "solvable" as many would like, but it looks to be more solvable than I initially thought.

Since Dr. Stevens also has a recent paper on aerosol indirect effects, I thought I would review the solar TSI and lower stratosphere correlation.

I got pretty lazy with this chart, but it shows the rough correlation, which is about 37% if you lag solar by about a year.  It is better when volcanic sulfates are present in the tropics, but I can safely say most of the stratospheric cooling is due to a combination of volcanic aerosols and solar variability.  The combinations, or non-linearly coupled relationships, are a large part of the limit to "solution".  When you have three or more inter-relationships you get into the n-body type problems that are beyond my pay grade.  You can call it chaos or cheat and use larger error margins.  I am in the cheat camp on this one.

The cheat camp would post up charts like this.

We are on a simple linear regression path and about to intersect another "normal range", so surface temperatures in the tropical oceans should crab sideways in the "normal" range with an offset due to CO2 and other influences.  Not a very sexy prediction, but likely pretty accurate.  "Global" SST with light smoothing should vary by about +/- 0.3 C and with heavy smoothing possibly +/- 0.2 C.
Plus or minus 0.3 C is a lot better than +/- 1.25 C, but with one sigma as the error margin there is still a large 33% error window.  So technically, I should change my handle to +/- 0.3 C to indicate an overall uncertainty instead of the +/- 0.2 C, CO2-only claim.  That would indicate that I "project" about 0.8 C per 3.7 Wm-2 from the satellite baseline with +/- 0.3 C of uncertainty.  The limit of course is water vapor and aerosols, which tend to regulate the upper end.  Global mean surface temperature still sucks, but with the oceans as a reference, it sucks less.  This hinges on some better understanding of solar and volcanic interacting "adjustments", which is looking more likely.

If there are more ocean, especially tropical ocean, papers that attempt to estimate "sensitivity" sans the flaky land surface temperature 30%, my estimate should start making more sense.  It is heartening to see clouds being viewed as a regulating feedback rather than a doomsday positive feedback, which took a lot longer than I expected.  Still a bit surprised it took so long for the dTs "fungibility" issue to be acknowledged, since that was about my first incoherent blog post topic.  "Energy is fungible, the work it does is not" was the bottom line of that post.  Since we don't have a "global" mean surface energy anomaly or a good way to create one, adjusting dTs is the next best route.  Then we may discover a better metric along the way.  Getting that accepted will be a challenge.

With more of the sharper tacks being paid attention to, Stephens, Stevens, Schwartz, Webster etc., this could turn into the fun puzzle solving effort I envisioned when I first started following this otherwise colossal waste of time.


Wednesday, March 18, 2015

How solvable is a problem?

If you happen upon this blog you will find a lot of posts that don't resemble theoretical physics.  There is a theoretical basis for most of my posts, but it isn't your "standard" physical approach.  There are hundreds of approaches that can be used, and you really never know which approach is best until you determine how solvable a problem is.

One of the first approaches I used with climate change was based on Kimoto's 2009 paper "On the confusion of Planck feedback parameters".  Kimoto used a derivation of the change in temperature with respect to "forcing", dT = dF/4, which has some limits.  Since F is actual energy flux, not forcing, you have to consider types of energy flux that have less or more impact on temperature.  Less would be latent and convective "cooling", which is actual energy transfer to another layer of the problem at a temperature well above or below "normal".  The factor of 4 implies a temperature which has exactly 4 Wm-2 of flux change per degree.  Depending on your required accuracy, T needs to be in a range such that the factor of 4 doesn't create more inaccuracy than you require.  So the problem is "solvable" only within a certain temperature/energy range that depends on your required accuracy.  If you have a larger range you need to adjust your uncertainty requirements or pick another method.
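
As a quick sanity check on where that "4 Wm-2 per degree" linearization holds, here is a minimal sketch using the standard Stefan-Boltzmann slope, dF/dT = 4*sigma*T^3.  This is textbook physics rather than anything taken directly from Kimoto's paper, so treat it as an illustration of the range issue, not a reproduction of his method.

```python
# Minimal sketch: how the "4 Wm-2 per degree" linearization of the
# Stefan-Boltzmann law drifts with temperature.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_slope(t_kelvin):
    """dF/dT = 4*sigma*T^3, the local slope of F = sigma*T^4."""
    return 4.0 * SIGMA * t_kelvin ** 3

for t_c in (-18, 0, 15, 30):
    print(f"{t_c:>4} C: dF/dT = {planck_slope(t_c + 273.15):.2f} W m^-2 per K")

# Roughly 3.8 at -18 C, 5.4 at 15 C and 6.3 at 30 C, so a fixed "4" is only
# close near the effective radiating temperature, not at the real surface.
```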

You can modify the simple derivation to dT = (a*dF1 + b*dF2 + ... + n*dFn)/4, which is what Kimoto did to compare state-of-the-science estimates at the time of radiant, latent and sensible energy flux.  You can do that because energy is fungible, but you would always have an unknown uncertainty factor because, while energy is fungible, the work that it does is not.  In a non-linear dissipative system, some of that work could be used to store energy that can reappear on a different time scale.  You need to determine the relevant time scales required to meet the accuracy you require so you can have some measure of confidence in your results.

Ironically, Kimoto's paper was criticized for making some of the same simplifying assumptions that are used by climate science to begin with.  The assumptions are only valid for a small range, and you cannot be sure how small that range needs to be without determining relevant time scales.

In reality there are no constants in the Kimoto equation.  Each assumed constant is a function of the other assumed constants.  You have a pretty wicked partial differential equation.  With three or more variables it becomes a version of the n-body problem, which should have a Nobel Prize attached to the solution.  I have absolutely no fantasies about solving such a problem, so I took the how-solvable-is-it approach.


The zeroth law of thermodynamics and the definition of climate sensitivity come into conflict when you try to do that.  The range of temperatures in the lower atmosphere is so large in comparison to the dT = dF/4 approximation that you automatically have +/- 0.35 C of irreducible uncertainty.  That means you can have a super accurate "surface" temperature but the energy associated with that temperature can vary by more than one Wm-2.  If you use Sea Surface Temperature, which has a small range, you can reduce that uncertainty, but you have 30% of the Earth not being considered, resulting in about the same uncertainty margin.  If you would like to check this, pick some random temperatures in a range from -80 C to 50 C and convert them to energy using the Stefan-Boltzmann law.  Then average both the temperatures and the energies and reconvert to compare.  Since -80 C has an S-B energy of 79 Wm-2 versus 618 Wm-2 for 50 C, neglecting any latent energy, you can have a large error.  In fact the very basic greenhouse gas effect is based on a 15 C (~390 Wm-2) surface temperature versus 240 Wm-2 (~-18 C) effective outgoing radiant energy, along with the assumption there is no significant error in this apples-to-pears comparison.  That by itself has roughly a +/- 2 C and 10 Wm-2 uncertainty on its own.  That in no way implies there is no greenhouse effect, just that most of the simple explanations do little to highlight the actual complexity.
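
For anyone who wants to run that -80 C to 50 C check, here is a minimal sketch of the averaging problem; the two end-point temperatures are the ones from the text, everything else is just the Stefan-Boltzmann law.

```python
# Minimal sketch of the apples-to-pears averaging problem: averaging
# temperatures and then converting to flux is not the same as converting
# first and then averaging, because F = sigma*T^4 is strongly non-linear.
SIGMA = 5.67e-8  # W m^-2 K^-4

def to_flux(t_c):
    """Black-body flux (W m^-2) for a temperature in Celsius."""
    return SIGMA * (t_c + 273.15) ** 4

def to_temp(f):
    """Black-body temperature (Celsius) for a flux in W m^-2."""
    return (f / SIGMA) ** 0.25 - 273.15

t_cold, t_hot = -80.0, 50.0
print(to_flux(t_cold), to_flux(t_hot))                # ~79 and ~618 W m^-2

mean_temp = (t_cold + t_hot) / 2.0                    # -15 C
mean_flux = (to_flux(t_cold) + to_flux(t_hot)) / 2.0  # ~349 W m^-2
print(mean_temp, to_temp(mean_flux))                  # -15 C versus roughly +7 C
```

The two averages disagree by more than 20 C for this deliberately extreme pair, which is the whole point of the exercise.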

Determining that the problem likely cannot be solved to better than +/-0.35C of accuracy using these methods is very valid theoretical physics and should have been a priority from the beginning.


If you look at the TOA imbalance you will see +/- 0.4 Wm-2, and due to the zeroth law issue that could just as easily be in C as well.  The surface imbalance uncertainty is larger, +/- 17 Wm-2, but that is due more to poor approaches than to physical limits.  The actual physical uncertainty should be closer to +/- 8 Wm-2, which is due to the range of water vapor phase change temperatures.  Lower cloud bases with more cloud condensation nuclei can have a lower freezing point.  Changing salinity changes freezing points.  When you consider both you have about +/- 8 Wm-2 of "normal" range.

Since that +/- 8 Wm-2 is "global", you can consider the combined surface flux: 396 radiant, 98 latent and 30 sensible, which total 524 Wm-2, about half of the incident solar energy available.  I used my own estimate of latent and sensible based on Chou et al. 2004, by the way.  If there had not been gross underestimations in the past, the Stephens et al. budget would reflect that.  This is part of the "scientific" inertia problem.  Old estimates don't go gracefully into the scientific good night.

On the relevant time scale you have solar to consider.  A very small reduction in solar TSI of about 1 Wm-2 for a long period of time can result in an imbalance of 0.25 to 0.5 Wm-2, depending on how you approach the problem.  With an ocean approach, which has a long lag time, the imbalance would be closer to 0.5 Wm-2, and with an atmospheric approach with little lag it would be closer to 0.25 Wm-2.  In either case that is a significant portion of the 0.6 +/- 0.4 Wm-2, isn't it?

Ein=Eout is perfectly valid as an approximation, even in a non-equilibrium system, provided you have a reasonable time scale and some inkling of realistic uncertainty in mind.  That time scale could be 20,000 years, which makes a couple hundred years of observation a bit lacking.  If you use paleo to extend your observations you run into the same +/- 0.35 C minimum uncertainty, and if you use mainly land based proxies you can reach that +/- 8 Wm-2 uncertainty because trees benefit from the latent heat loss in the form of precipitation.  Let's face it, periods of prolonged drought do tend to be warmer.  Paleo though has its own cadre of oversimplifiers.  When you combine paleo reconstructions from areas that have a large range of temperatures the zeroth law still has to be considered.  For this reason paleo reconstructions of ocean temperatures, where there is less variation in temperature, would tend to have an advantage, but most of the "unprecedented" reconstructions involve high latitude, higher altitude regions with the greatest thermal noise that represent the smallest areas of the surface.  Tropical reconstructions that represent the majority of the energy and at least half of the surface area of the Earth paint an entirely different story.  Obviously, on a planet with glacial and interglacial periods the inter-glacial would be warmer, and if the general trend in glacial extent is downward, there would be warming.  The question though is how much warming and how much energy is required for that warming.

If this weren't a global climate problem, you could control conditions to reduce uncertainty and do some amazing stuff, like ultra-high-scale integrated circuits.  With a planet though you will most likely have a larger-than-you-would-like uncertainty range, and you have to be smart enough to accept that.  Then you can nibble away at some of the edges with combinations of different methods which have different causes of uncertainty.  Lots of simple models can be more productive than one complex model if they use different frames of reference.

One model so simple it hurts is "average" ocean energy versus "estimated" Downwelling Long Wave Radiation (DWLR).  The approximate average effective energy of the oceans is 334.5 Wm-2 at 4 C and the average estimated DWLR is about 334.5 Wm-2.  If the oceans are sea ice free, the "global" impact of the average ocean energy is 0.71*334.5=237.5 Wm-2, or roughly the value of the effective radiant layer of the atmosphere.  There is a reason for the 4 C to be stable, thanks to the maximum density temperature of fresh water being 4 C.  Adding salt varies the depth of that 4 C temperature layer, but not its value, and that layer tends to regulate average energy on much longer time scales since the majority of the oceans are below the 4 C layer.  Sea ice extent varies and the depth of the 4 C layer changes, so there is a range of values you can expect, but 4 C provides a simple, reliable frame of reference.  Based on this reference a 3.7 Wm-2 increase in DWLR should result in a 3.7 Wm-2 increase in the "average" energy of the oceans, which is about 0.7 C of temperature increase, "all things remaining equal".
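
A minimal sketch of that 4 C reference, using nothing but the Stefan-Boltzmann law; the 0.71 ice-free ocean fraction and the 3.7 Wm-2 are the numbers used above.

```python
# Minimal sketch of the 4 C ocean reference frame described above.
SIGMA = 5.67e-8  # W m^-2 K^-4

t_ref = 4.0 + 273.15                 # 4 C, maximum density of fresh water
f_ref = SIGMA * t_ref ** 4           # ~334.5 W m^-2
print(f_ref, 0.71 * f_ref)           # ~334.5 and ~237.5 W m^-2 (ice-free ocean share)

# "All things remaining equal": temperature change for a 3.7 W m^-2 increase
# at that reference, using the local slope dF/dT = 4*sigma*T^3.
dT = 3.7 / (4.0 * SIGMA * t_ref ** 3)
print(dT)                            # ~0.77 C, the "about 0.7 C" in the text
```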

Perhaps that is too simple or elegant to be considered theoretical physics?  Don't know, but most of the problem is setting up the problem so it can be solved to some useful uncertainty interval.  Using just the "all things remaining equal" estimates you have a range of 0.7 to 1.2 C per 3.7 Wm-2 increase in atmospheric resistance to heat loss.  The unequal part is the water vapor response, which based on more recent and hopefully more accurate estimates is close to the limit of positive feedback and in the upper end of its regulating feedback range.  This should make higher than 2.5 C "equilibrium" very unlikely and reduce the likely range to 1.2 to 2.0 C per 3.7 Wm-2 of "forcing".  Energy model estimates are converging on this lower range, and they still don't consider the longer time frames required for recovery from prolonged solar or volcanic "forcing".

If this were a "normal" problem it would be fun trying various methods to nibble at the uncertainty margins, but this is a "post-normal", as in abnormal, problem.  There is a great deal of fearful overconfidence involved that has turned to advocacy.  I have never been one to follow the panic stricken, as it is generally the cooler heads that win the day, but I must be an exception.  We live in a glass-half-empty society that tends to focus on the negatives instead of appreciating the positives.  When the glass-half-empties "solve" a problem that has never been properly posed, you end up where we are today.  If Climate Change is worse than they thought, there is nothing we can do about it.  If Climate Change is not as bad as they thought, then there are rational steps that should be taken.  The panic stricken are typically not rational.


Tuesday, March 17, 2015

Latent Heat Flux

While the gang ponders the impact of Merchants of Doubt, the movie, I have drifted back to my questions about "global" latent heat flux.  There are considerable differences among the different latent heat flux products.  In the beginning, back in the day of the younger James Hansen, data was crude at best.  Kiehl and Trenberth produced a series of Earth Energy Budgets that attempted to get all the pertinent information down in an easy-to-read format.

If you have followed my ramblings you know that I discovered an error in these budgets along with a number of others, but it was Stephens et al. that finally published a revised budget a couple of years ago.

This budget was discussed on Climate Etc. back in 2012.  Stephens et al. have latent at 88 Wm-2 +/-10 Wm-2 with K&T likely being the minus 10 and Chou et al likely the plus 10.  The other major difference is the atmospheric window where Stephens et al have about 20 +/- 4 and K&T in their latest are still using the 40 Wm-2 with no indication of uncertainty.

This post was prompted by Monckton of Brenchley mentioning the Planck Feedback parameter he derived using the K&T budget.  That was based on the paper, On the confusion of Planck feedback parameters, by Kyoji Kimoto.  At the time I mentioned that Monckton and Kimoto were off because they used the inaccurate K&T budget, but that was prior to a real scientist publishing a revised budget.

Climate science appears to be finally catching up, but various irreducibly simple models never finish connecting the dots.  Tropical water vapor, clouds and convection are the most likely candidates for the regulating feedback.  Around 98 Wm-2 of latent heat flux you peg the negative feedback portion of the regulating feedback, triggering deep convection.  If you initially assume that latent is 78 Wm-2, one of the initial estimates used by K&T, you have a large range of positive feedback from 78 to 98, or 20 Wm-2, which is about half of the atmospheric window assumed by K&T.  Since absolute temperature has been a wild assed guess at best, along with latent and convective energy flux, the room for warming has always been overestimated.

Andy Lacis on Climate Etc. mentioned they project a 28% increase in atmospheric water vapor, which would roughly be equivalent to a 28% increase in latent heat flux.  A 28% increase from 78 Wm-2 would be 99.8 Wm-2, which is roughly where we are now if Chou et al. had an accurate estimate back in 2004.

The followers of the Merchants of Doubt drivel seem to believe anytime someone mentions uncertainty they are clouding the issue, but when all uncertainties tend to lead to the highest possible estimate, that is a simple sign of bias.  You do not have to be a rocket or atmospheric scientist to be aware of sensitivity to error.  In climate science the old guard are doggedly defending estimates based on extremely poor data while ignoring prompting by peers that they are off.  With accurate latent estimates Earth's Energy Budget is at the elusive Planck Feedback limit.

That does not mean there is not going to be some residual warming at the poles, but most of that will likely be in winter, related to energy release mechanisms like Sudden Stratospheric Warming (SSW) and Arctic Winter Warming (AWW) driven by increased poleward "wall" energy flux.

The key to "proving" as in providing convincing evidence, are more accurate estimates of latent heat flux and convective response. The newest Stephens et al paper, also discussed on Climate Etc. appears to have part of that evidence.

It indicates that the northern hemisphere is releasing energy while more is being provided by the southern hemisphere via ocean currents.  Unfortunately, they may not have the estimate of northern hemisphere heat loss to space associated with SSW and AWW events correct.  That may require a more creative non-linear approach, likely Fluctuation Dissipation Analysis, but it really could be estimated with surface energy anomaly and high northern latitude ocean heat uptake.  Basically, northern high latitude SST has increased while ocean heat uptake has flatlined above 50 north.  That difference, after considering uncertainty, should be heat loss not easily measured by satellite at the pole.  That loss can be on the order of 10^22 Joules in a single season.

The northern hemisphere heat loss may or may not trigger a negative AMO phase, which would make the shift obvious or it may just continue destabilizing the north polar jet/vortex until some glacial increase begins to store the energy.  Greenland is beginning to show signs of accumulation, but that may take a decade to be "noticed" given the current state of climate science.  Then again that loss could trigger an ice mass loss event, such is long term climate.

In any case, perhaps the current buzz will inspire some to revisit the much maligned, On the Confusion of Planck Feedback Parameters with more up to date observations.  


Monday, March 16, 2015

Zero tolerance in a complex world

Particulate matter 2.5 micron, or PM25, is known to have adverse health effects, but PM25 is a generic term for a wide variety of small airborne particles.  The most common source of PM25 is the oceans and salt spray.  Then you have dust, which can include some viruses, molds, fungi and bacteria.  There are Volatile Organic Compounds (VOCs) from fossil fuel use and from burning wood, grasses, trash, etc.  There is cigarette and other tobacco smoke, marijuana and any other type of recreational combustion, including incense.  Pollen, tree saps, flowers, anything that has a scent, pleasurable or otherwise, is releasing VOCs.  Smog is almost always caused by man-made sources: vehicle exhaust, fireplaces or wood/coal stoves, industrial combustion.  Mixed in with the PM25 is often PM10, or 10 micron particulates, and occasionally larger sizes if you are close to the source.

Some PM25 is obviously bad whether the source is natural or man-made, some is not so bad, and some is downright desirable.  Desirable and not so bad depend on the individual or group of individuals being exposed to the aroma.  I personally think some colognes and perfumes should be banned.  The EPA also noted that there is a point of no gain.  People with hypersensitivities will continue to be hypersensitive.

With PM25, too much is bad, but what counts as too much is subjective.  If you attempt a zero tolerance policy, you will never get there.  In fact, the more you reduce one type of PM25, the more another type becomes "noticeable".

In the US and other "developed" countries, clean air and water acts have worked to reduce particulate matter in the air where the sources are vehicles and industrial combustion.  Industrial combustion sources would be power plants for electrical generation and manufacturing processes like concrete.  Central electric power replacing coal and wood as home heating and cooking energy reduced the majority of urban air pollution, followed by passenger vehicle emission standards.

As urban living expanded into suburban sprawl, more limits were placed on central power plants for industry and electrical power.  Even with coal as a power source, emissions were reduced with scrubbers and precipitators, greatly improving air quality.  Changing farm practices, starting after the US dust bowl, have also helped reduce PM in the air.

However, the US and industrial nations are not in this world alone.  Emerging economies are going through the same growing pains and will need to improve their own emissions standards.

This produces the conundrum: how clean is clean, or what tolerances are most beneficial and cost effective?  Technology exists to make any fuel source cleaner and more efficient, but never perfectly clean or efficient.  The closer to zero tolerance regulations shoot for, the more expensive basic energy needs become.  With PM25 the problem is exacerbated by the diverse natural sources of "pollution".

Living on the edge of the tropics, one of the primary sources of harmful PM is the Saharan Desert and the entire Middle Eastern desert region.  Caribbean asthma cases are directly linked to African dust, along with coral reef damage due to fungi, molds, bacteria and viruses non-native to the region killing off key species that help keep the reef system clean.  Sea urchins feed on algae that compete with coral polyps.  There was a massive die-off of sea urchins due to Saharan dust.  Along with the coral, sea fans were also impacted, weakening the entire reef ecosystem.  In a weakened state, any number of other factors can appear to be a "cause" of reef die-off, when a mutation of a fungus variety found in African soils was the initial most likely "cause".

It isn't just African dust.  The island of Barbados' topsoil is almost completely composed of African dust.  African dust has been brought to the Caribbean as long as there have been African deserts and trade winds.  The die-off could be due to a combination of microbe evolution, land misuse in Africa and changing weather patterns.  The rough year of the demise of the sea urchins was 1983, for those looking for a climate change connection.



Wikipedia has an interesting animation of particulate matter migration around the globe.  The sulfates in white are obvious over China; red/orange is dust located mainly in desert regions; blue is the relatively benign salt spray, though energetic salt spray can carry microbes; and green is black and organic carbon produced by any type of fuel combustion, which appears to be mainly biomass burning related to agriculture and underdeveloped nations using biomass for basic fuel.

With developed nations still using fossil fuels including coal, current technology appears to be ahead of the pollution curve.  Similar cleaner central energy and more eco-friendly agriculture would appear to have a larger potential to reduce man-made particulates and to reduce "natural" particulates than extreme regulation of fossil fuels, especially when the poorest nations in the world appear to be producing the vast majority of air pollution via land use/misuse.

The minions of the Great and Powerful Carbon tend to sidestep this reality with their carbon-causes-everything-bad simplistic reasoning.  Taking fossil fuel use to zero is unlikely to reduce PM25 by more than 30%, and less than 10% of the outdated or unregulated uses likely produce the majority of the emissions.  It would seem having the rest of the world emulate the developed world's modest measures to reduce air pollution would be much more effective and much less costly.  That has a snowball's chance in Hell of happening with the minions running loose though.  They have a "ban coal first, then sort out the rest" playbook.  To most engineers coal is just a resource with advantages and disadvantages, just like nuclear, oil, gas, solar, wind and biomass.  It isn't good to be overly reliant on any one or totally opposed to any one.  Finding the proper mix for a situation is the goal.

Eliminating any energy option isn't particularly intelligent, especially when it is tied to a global issue that could be mainly natural or at least, the majority isn't related to that energy option.

So what are the risks associated with outdoor pollution?  According to this Cancer Research UK article, the risks are pretty small, but there is a definite link between outdoor pollution and lung cancer.  If you smoke you are about 20 times more likely to develop lung cancer, and some studies put a 22% increase in risk per 10 micrograms per cubic meter of exposure.  15 micrograms per cubic meter was considered the upper limit in the US for "clean" air, but there was a move to regulate at 12 micrograms per cubic meter.  So if you live into your fifth decade you have a risk of lung cancer.  As you live longer your risk from other causes increases, so your lung cancer risk decreases.  The longer you live, the greater your chances of dying from something, which I am sure must come as a surprise to many.

With lung cancer typically being a fifth-decade onset disease, the developed world's clean air acts being roughly 40 years old, and people transitioning from "dirtier" manufacturing and construction trades to "cleaner" service-related industries, there are plenty of confounding factors to be dealt with in determining how significant a problem PM25 really is.  Cleaner air would be better, but since the current air quality is just marginally significant as a carcinogen, maintaining the status quo of incremental, cost-effective progress towards better air quality seems more rational than drastic changes that could produce unintended consequences.

Dramatic (note the root, drama) change is very progressive though.  There is no problem so small it doesn't deserve our immediate attention.  With anarchists who do not wish to be governed setting out to become the governors, it should be obvious there will be plenty of piggyback political baggage involved in all the progressive initiatives to save the world.  It is impossible to make everyone happy, but the progressives haven't come to grips with that basic reality.  It is good for them though, since there will always be some cause to occupy their idle musings.

In the real world, you do the best you can in a way that doesn't break the bank and you leave your options open.

For another opinion on air pollution, consider the real issue, indoor air pollution.  Bjorn Lomborg has an article in Forbes that is right on target.  With a third of the world population still cooking over open flames, be the energy source dung, coal or wood, and developed nations enjoying tightly constructed, centrally air conditioned dwellings, indoor pollutants are the real killer, other than in urban areas in rapidly developing countries that haven't addressed clean air regulation adequately.  Since this problem doesn't mesh with the carbon issue, it is not getting the attention it deserves.  Centralized power generation and rudimentary electrification, even using demon coal as a fuel source, would reduce the third world problem, while more timely maintenance and a bit of bleach would do wonders in the developed world.

In the developed world, water plus dirt produces molds.  Drywall is dirt sandwiched between paper, which makes a great mold-growing habitat when you add water.  Dirt in air conditioning systems, with their dark, moist interiors, can produce wonderful mold science projects.  So all the while scientists study the impacts of outdoor particulate matter, their subjects can be getting much more exposure at home.

Real science needs to deal with the confounding factors before condemning progress in the name of progressive-ism.  As it is, they are coming up with too many solutions for the wrong problems.



Monday, March 9, 2015

Circumstantial Evidence of Secular Trend

I was playing with some simple linear regressions, more to aggravate some of the minions than to really accomplish anything.  I am pretty positive there is a fairly large range of undecipherable climate noise.  Some might call it chaos, but any simple control system has a range around a set point.  If you want to reduce that range you are looking at big bucks and potential instability.  It is simpler and more cost effective to "live" with a reasonable amount of fluctuation unless you have a very temperamental process.  Having worked with "novel" new control schemes like PID feed forward, I have witnessed some spectacular initial failures.  Most of these types of control systems are viable now with more and faster computing power, but for simple things like HVAC control at the time they were a huge waste of money.  The average person cannot sense a temperature change of less than 2 F, and there is no "ideal" temperature range for all people, so a half degree to a degree of "slop" for indoor air temperature control is perfectly acceptable.  Even a larger range of humidity control is acceptable and, with proper system sizing, pretty much takes care of itself.

There are cases where more precision is required and the cost is justified, but proper load sizing and staging can reduce the complexity required for even those cases.  Simple isn't a bad thing.

What I did just for grins is a simple linear regression of GISS global with an uncertainty channel.  I just eyeballed the fit and was pretty happy with 0.5 C, which happens to be 1.6 sigma for the GISS monthly data.

A little light smoothing, a 13-month moving average, provides a one-sigma, 0.29 C eyeball fit.  Since only five years of the 134-year total make it outside the channel, that is about a 95% range in spite of the one-sigma notation, which implies about 68%.  So in my opinion, with about a 0.3 C range, global climate is pretty stable.  You can get the same fit with climate model runs at about +/- 0.35 C, mainly because of the volcanic forcing misses.  Smooth those out and you can get a more respectable error range for the high-dollar estimates.
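
For what it is worth, the eyeball exercise is easy to reproduce.  Here is a minimal sketch, assuming a monthly anomaly series sitting in a CSV; the file and column names are placeholders, not an actual GISS download.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("monthly_anomaly.csv")            # placeholder file name
t, y = df["year_frac"].values, df["anom"].values   # placeholder column names

slope, intercept = np.polyfit(t, y, 1)             # simple linear regression
trend = slope * t + intercept

smooth = pd.Series(y).rolling(13, center=True).mean()  # light 13-month smoothing
resid = (smooth - trend).dropna()

sigma = resid.std()
outside = (resid.abs() > sigma).mean()
print(f"one sigma ~ {sigma:.2f} C, fraction outside the channel ~ {outside:.0%}")
```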

I then spliced the global with the Oppo et al. 2009 reconstruction, but had to kick the range back up to 1.6 sigma to get a fit back to about 1780.  After throwing in a pre-Little Ice Age error range of +/- 0.29 C, I have a basic model of expected climate.  The difference in the slope of the dark green LIA recovery and the blue instrumental is roughly "potential" Anthropogenic Climate Change.  If you used the green regression and added CO2 forcing with a 0.8 C "sensitivity" you could fit the two curves pretty well.  Of course, the Oppo 2009 and instrumental have different smoothing, and picking the slope of the LIA recovery is a bit of a guess, but it is in the ballpark.

Since I am fairly certain that tropical convection and cloud cover trigger around 28 C to produce significant negative feedback, the intersection of the linear regression and the pre-LIA range is a pretty good illustration of what I expect climate to do.  That would be crab sideways with a fluctuation of around +/- 0.5 C.  So this is my prediction of the month.

If you are an ENSO fan, the projection would be 26.9 C +/- 0.5 C.  Since some believe that ENSO, the PDO and the AMO are the sources of all variability, that might make them happy.  Personally, for global climate I would watch the tropics in general, not any region in particular.  I have done a number of correlations between various tropical bands and "global" temperatures, and the 30S-30N band is in the 70% to 80% range since it has over 50% of the total ocean area.

 I haven't figured out a real "sciency" way to fit the sea level rise into the picture, but roughly matching instrumental, SLR and Oppo 2009 looks like this.  Since there is more than just thermal expansion involved with SLR, a convincing fit with rational error bars will be quite a challenge.  I do have some thoughts on the subject, but that will take some real work which is against my current religion.

So there is the title of the post, "circumstantial" evidence of a secular trend related to LIA recovery.

The Units Game


This cartoon representation of past climate is used by many of the warmist minions to inspire others to climb aboard their bandwagon.  Other than some technical issues, it is a fairly accurate representation of past climate smoothed to about 1000 years.  If you smooth instrumental data, which has at most 250 years of data, to the same period, you would get one point at about zero at the end instead of a hockey stick blade.

The team of Rosenthal, Oppo and Linsley published a new reconstruction of intermediate water temperatures which they used to estimate changes in total ocean heat content.  They unfortunately yielded to peer publishing pressure and overly hyped their results, which had a few non-fatal errors, with a claim that current warming of the ocean is 15 times greater than at any other time during the Holocene.

The rate of sea level rise (SLR) makes a fair proxy for changes in ocean heat capacity.  Using SLR it is pretty obvious there isn't an "unprecedented" increase in the rate of ocean heat uptake, but there is continued ocean heat uptake.

For the past 15 years which includes the newer ARGO data, the increase in 0-2000 meter ocean temperature is about 0.035 C per decade.  That corresponds to an energy accumulation which can be quantified in a number of ways.

Joules, as in 10^22 Joules, have become popular with the warmists.  Since we live on a fair-sized planet, relatively small changes in temperature require huge changes in energy if you pick the smallest units you can get away with.  Here we have an increase of about 10x10^22 Joules over one decade, which is about a 0.035 C increase in temperature over the same period.  0.035 C/decade is boring compared to 100,000,000,000,000,000,000,000 JOULES!!!
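
Here is a minimal sketch of that unit conversion.  The ocean area, layer depth, density and specific heat are round-number assumptions of mine, not values from the post, but they land close to the 0.035 C per decade figure.

```python
# Convert the headline "10 x 10^22 Joules per decade" into a 0-2000 m
# layer temperature change using round-number ocean properties.
ocean_area = 3.6e14      # m^2, assumed ocean surface area
layer_depth = 2000.0     # m
density = 1025.0         # kg m^-3, assumed seawater density
cp = 4000.0              # J kg^-1 K^-1, assumed specific heat of seawater

layer_mass = ocean_area * layer_depth * density   # ~7.4e20 kg
energy_per_decade = 10e22                         # Joules

dT_per_decade = energy_per_decade / (layer_mass * cp)
print(dT_per_decade)     # ~0.034 C per decade, the boring version of 10^23 Joules
```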

If you are trying to sell your product you need to grab attention.  Everyone should know this, but the minions of the great and powerful carbon aren't exactly the sharpest tacks in the box.  Unit changes and big numbers should be part of everyone's bullshit detector.  Hiding smoothing time frames should also be a BS detector input.

This is an example of smoothing selection on the Oppo 2009 IPWP temperature reconstruction.  I smoothed at 200 years with 5 stages or cascades.  The first two stages shift the peaks and valleys and every additional stage reduces the slopes.  If I added enough stages you would end up with just the average for the whole period as a straight line.  Note the legend, SST-50yr.  The original data has at best a 50-year resolution, so if you are going to compare to observations you should smooth the obs by 50 years to have a reasonable apples-to-apples comparison.  Well, SLR does the smoothing for you.
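
The cascade smoothing itself is nothing fancy.  A minimal sketch, assuming the reconstruction is already loaded into a pandas Series; the 200-year window and 5 stages match the description above.

```python
import pandas as pd

def cascade_smooth(series, window, stages):
    """Apply a centered moving average repeatedly ("cascades");
    each extra stage flattens the slopes a little more."""
    out = series.copy()
    for _ in range(stages):
        out = out.rolling(window, center=True, min_periods=1).mean()
    return out

# Hypothetical usage with an annual-resolution reconstruction:
# smoothed = cascade_smooth(ipwp_sst, window=200, stages=5)
```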

If you look at this chart again you can see that SLR likely fits the 3rd cascade or so.

No filtering method handles end points very well, so if I splice instrumental observations onto the end there would be some abrupt shift in the slope; in this case it could look like an unprecedented 15 times greater than at any time in history.  Since nature does its own smoothing, a scientist needs to figure out how that impacts the end points before splicing data from a different source onto the end.

This is all super basic stuff.  However, if you aren't familiar with this basic stuff, people can play you or themselves with "unprecedented" claims.

The trick is figuring out if they are trying to play you or have just played themselves.  If they play the units game without providing a reference to more familiar units, they could be playing you.  So if you think they are smart enough to know better, you might think they are being con men.  But never underestimate the human capacity for stupidity.  They probably just aren't quite as smart as they think.


Saturday, March 7, 2015

Everything including the Kitchen Sink

There is a new paper being discussed with a bunch of ideas about how to do climate modeling: Mathematical and physical ideas for climate science by Valerio Lucarini, Richard Blender, Corentin Herbert, Francesco Ragone, Salvatore Pascale, and Jeroen Wouters.  They hit on just about every method I have seen mentioned for improving climate modeling.  I am not particularly sold on some of it and most of it is over my head, but there are a few nice points, mainly fluctuation dissipation analysis.

Variability is inefficiency.  I have used the analogy of rattling the pot lid, as in relieving pressure or releasing energy, with respect to sudden stratospheric warming (SSW) and Arctic winter warming (AWW).  The atmosphere can only hold so much energy before something has to give.  A good deal of this pressure relief is due to deep convection, and there is also increased poleward advection.  Unless I know how to convincingly "prove", as in mathematically, how much impact there is, I would rather stick to simple models and treat the chaos as uncertainty.  If you can live with the uncertainty range, no sense getting too carried away with the nits.  That doesn't mean you can't "see" what is going on, just that you can only go so far quantifying the effects.

This chart of BEST temperatures with the interesting latitude bands is a pretty good illustration of fluctuating dissipation.  When the NH reaches an energy retention limit, the Arctic tends to relieve the pressure.  With the "pause", you have an increase in SSW and AWW magnitude and, thanks to the scaling factor aka amplification, there is a lot bigger increase in temperature in the colder, lower specific heat capacity regions.  A stable polar vortex and/or northern jet stream helps retain energy, and increased advection, either through the stratosphere (SSW) or troposphere (AWW), destabilizes these vortexes, releasing energy and causing pretty severe winters down to the subtropics.

This variation is typically "blamed" on the AMO and/or PDO, but the more likely cause is the tropical wanderings and the Quasi-Biennial Oscillation (QBO).  Tropical wanderings would be ENSO-like variations that result in the western Pacific and Indo-Pacific Warm Pools reaching a deep convection triggering temperature.  That is the reason I have been playing around with "peak" tropical reconstructions instead of average reconstructions.  Triggers or tipping points are generally extremes, not averages.  So instead of jumping in feet first with the heavy math, I feel it is better to simplify the situation and estimate a realistic uncertainty that is likely "irreducible", then decide if it is worth the effort to get serious with the math.

Since my real goal is a simple back of the envelope style explanation I am satisfied with accepting the limits I find reasonable.  It is hard to avoid dreams of the ultimate mathematically rigorous treatment, but realistically you can only go so far with a planetary scale chaotic system.

Statistically, you can do a lot of neat-looking stuff, but tipping points aren't all that well behaved, so something is likely to bite you in the butt, especially when the tipping points are subtle, like a reduction in variability that leads to a quiet buildup in energy that produces a super El Nino that inspires deep convection that changes stratospheric water vapor and ozone concentrations, leaving all the experts scratching their heads.  The trigger for these events can be as simple as a shorter solar Schwabe cycle that reduces ENSO variability or a relatively weak volcano that destabilizes the polar vortex.

As I have mentioned before, poleward advection of tropical ozone and water vapor adds about 50 degrees to the poles with only about 8 Wm-2 of "global" forcing change.  The average annual variation in forcing at the top of the atmosphere is around 10 Wm-2, so a couple of Wm-2 during one winter season would be easy to miss.  That could produce a 12 C change at one of the poles that could be lost in the general "noise" at the poles anyway.  The Cowtan and Way kriging method "discovered" about 6 degrees of that change just by using satellite data to fill in a few blank spots.

Adding CO2 should reduce variability, which does what?  That's right, sports fans, it leads to more variability.  In a chaotic system there are lots of counterintuitive possibilities.

So I am tossing out that paper for those that want to beef up their mathematical options, but I think it will be some time before any of it will convince the true believers they have bitten off more than they can chew.






Scaling Factors and Uncertainty

I was going to include this in the last post, Stratospheric Cooling and the Missing Tropical Tropospheric Hot Spot, but my spreadsheet was acting up.  The troposphere, according to the models, should warm at a faster rate than the surface: the lower troposphere about 10 to 20% faster and the upper to middle troposphere about 30 to 60% faster.  So with the models predicting the tropical middle troposphere to warm 60% faster than the surface, it should be pretty noticeable.  The "polar" amplification, which is actually "Arctic" amplification, should also be pretty obvious.  The models predict those happenings.  Well, land surface air temperatures, which are in the lower troposphere, should also warm faster than the oceans.  All of the amplification of warming should be easily predictable and measurable if atmospheric forcing is as simple as models suggest.



Some won't like my choice of data sets, but this is the scale or amplification ratio of CRUTs3.22 Tmax and Tmin to ERSSTv4 global.  I used a satellite-record-length linear trend for each in a sliding window to show changes in the amplification factor.  The data starts in 1950 to avoid the 1940-1976 zero slope, which messes things up.  You could take a difference from some reference slope to force the trends to be greater than zero, but this is just a quick and dirty comparison.  If you combine the two CRUTs3.22 sets or use Tave, the average scaling factor is about two.
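
A minimal sketch of that sliding-window scaling factor, assuming monthly land and SST anomaly Series that share the same index; the window length and series names are placeholders, not the exact setup used for the chart.

```python
import numpy as np
import pandas as pd

def sliding_trend(series, window):
    """Least-squares slope in a centered sliding window (per time step)."""
    x = np.arange(window)
    return series.rolling(window, center=True).apply(
        lambda y: np.polyfit(x, y, 1)[0], raw=True)

def scaling_factor(land, sst, window=444):   # ~37 years of monthly data
    """Ratio of land trend to SST trend; only meaningful when both
    slopes are comfortably greater than zero."""
    return sliding_trend(land, window) / sliding_trend(sst, window)

# Hypothetical usage:
# factor = scaling_factor(tmax_anomaly, sst_anomaly)
```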

If you are pretty certain that the relationship holds for longer time frames, you could scale the longer land temperature data set to extend, say, the SST data back in time.  I have scaled BEST, Sea Level Rise, Ocean Heat Content and Central England Temperature as a bit of a sanity check on paleo reconstructions, but there is enough variation that the uncertainty is fairly large, so it isn't exactly a recommended method in my opinion.


This is what that scaling exercise looked like with Oppo et al. 2009.  As you can see the uncertainty would be pretty large in climate science world, but it does indicate that Oppo et al. 2009 is probably a pretty good reconstruction.  The Oppo reconstruction has an average resolution of about 50 years versus a 27-month moving average for the instrumental, so I could smooth more to get the statistical uncertainty down, but that doesn't do much but hide the real uncertainty.

The explanation for land amplification with respect to oceans is pretty simple: land has about half the specific heat capacity of the ocean skin layer.  You get about twice the warming per unit energy on the land mass.  It gets a bit more complicated because water availability changes the specific heat capacity of land, and not all land is at the same altitude or initial temperature.

If that was the only factor you would have a slam dunk, of course it isn't.


The factor changes with time so you have to be careful.  Extremely long term you get the 1.1 for Tmax and 1.4 for Tmin.  The shorter your time frame the more variable it becomes.



The CMIP5 model means indicate a satellite era scaling factor like above.  Depending on what data sets you use you can get a fair match with models or a complete fail.  BEST Tmax and Tmin have a fair match but BEST Tmax and Tmin correlate better with model Tmin and Tmax.

This compares models with BEST and HadISST using a Callendar baseline of 1935-1944.  You can change your baseline around, but that wouldn't impact scaling factors.  You only have a "real" scaling factor when all the slopes are of the same sign and a bit greater than zero, which is a good reason to use differencing.  Just eyeballing though, you can see there is a lot more noise in the land surface data because it is more easily amplified by whatever influence comes along.

Since the largest amplification should be in the tropical troposphere, there is a lot of focus on that with the RATPAC, UAH and RSS gang.  The theory behind tropical tropospheric warming is similar, but you have to consider the lapse rate.  Perfectly dry air would have a lapse rate close to 9.8 C per km and saturated moist air a lapse rate of about half that, or 5 C per km.  All things remaining equal, more warming over the oceans should increase atmospheric water vapor, which would tend to push the lapse rate toward saturation.  The upper troposphere has a much lower specific heat capacity, so the increase in latent energy would tend to warm the upper troposphere more than the lower troposphere.  The all-things-remaining-equal part is the catch though.  If convection and/or advection increases more than anticipated, there could be not only no tropospheric hot spot, it could even cool.  Cooling there related to advection, though, would move the warming someplace else.
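
The lapse-rate numbers quoted above come straight from g/cp; the "about half" for saturated air is the rule of thumb used in the text, not a full moist-adiabat calculation.

```python
# Dry adiabatic lapse rate from basic constants; the saturated value is
# the rough "about half" used in the text, not a proper moist adiabat.
g = 9.81       # m s^-2
cp = 1004.0    # J kg^-1 K^-1, dry air

dry_lapse = g / cp * 1000.0      # ~9.8 K per km
moist_lapse = dry_lapse / 2.0    # ~5 K per km, rough tropical value
print(dry_lapse, moist_lapse)
```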

With greater than expected amplification in the Arctic and over portions of the land mass, an increase in poleward advection is indicated.  Science of Doom has a nice discussion on the subject near the end and in comments, but the basic issue is that the hotspot is an amplification and if surface temperatures are not warming there is nothing to amplify.  With changes in advection you get other "warm" spots.

Berkeley Earth Surface Temperature did some comparisons and found that 30N-60N warming was amplified more than predicted.  There are plenty of "other" factors that can be involved, but more moisture in a saturated lapse rate situation would just mean higher convection, which would create a wider path for advection.  With moist air that could mean a wider area of cloud cover, which would feed back negatively on the surface that is producing the warming to begin with.

Surface winds, which would be driven to a point by changes in convection/advection which would be related to tropical surface temperature, tend to both expand the tropical sphere of climate influence into higher latitudes and increase ocean mixing.

This is a bit of a double whammy for the "all things remaining equal" assumption, particularly the ocean mixing part which would increase the rate of ocean heat uptake reducing the amount of atmospheric warming.

Obviously, that is not the only thing going on or wind speed would follow surface temperature more closely.  The big things to me are uncertainty and natural variability.



I have just about given up on surface air temperature because of the noise, so this is an attempt at estimating uncertainty using just the SST data and models.  The uncertainty for the observation I am using is one sigma for a fairly aggressively smoothed data set, aggressive being a 5-cascade, 13-month moving average.  That is compared with modeled SST +/- 0.25 C, which is an eyeball fit.  So if you can expect a +/- 0.25 C uncertainty range for the models and a 1.4 scaling factor, you have +/- 0.35 C of expected variation in the surface temperature and/or troposphere at whatever level blows wind up your skirt.  Since the estimated scaling factor varies from 1.1 to nearly 3, it is possible to have a +/- 0.75 C range of uncertainty in the atmospheric data.  Measurement-wise, the accuracy on a global scale can approach +/- 0.05 C, but that doesn't help much when natural/internal variability can be a couple of orders of magnitude greater.  Whether you decide to smooth observations to some politically correct sigma or expand model uncertainty to include observational transgressions, SST variation combined with lower troposphere amplification is not going to be easily dealt with when it comes to attribution of "causes".

I believe Lorenz pointed this out some time ago.  In any case, zero times any scaling factor is still zero.  The missing tropical troposphere hot spot is just an indication of no significant warming, due to whatever cause.


Update:  I forgot the best graph!




30 to 60 north has the highest regional amplification and the most confounding factors (land use, suburban heat island, black carbon, etc.), and the amplification doesn't really look like a CO2 forcing curve.  No, this is not dependent on data selection.


All the data sets agree there is a plateau in this region, which had been the most rapidly warming.  The "suburban" heat island effect is related to general land use impacts: more impervious surfaces, compaction, and water cycle changes due to drainage and crop selection in the region, which would be a real impact on temperature stations.  The RSS satellite data, controversial as it may be, tends to agree that it is not just an isolated instrumental effect.  However, GISS and Cowtan and Way may be interpolating in a small amount from outside the region, but probably less than a tenth of a degree.



Thursday, March 5, 2015

Stratospheric Cooling and the Missing Tropical Troposphere Hot Spot

The discussion is going on at Climate Etc. thanks to a post by Roger Pielke Sr., Phil Klotzbach, John Christy and Dick McNider.  Of course there are the typical uncertainty strawmen being tossed around, but the tropical troposphere should be amplified with respect to surface warming by a factor of between 1.1 and 1.6, depending on the region and surface being considered.

A simple way to explain the amplification is that the specific heat capacity of the upper portion of the troposphere is lower than the "surface's", so for the same change in energy there is a greater change in temperature in the lower specific heat capacity region.  The tropospheric hot spot is related to a change in the environmental lapse rate as it approaches the moist saturated lapse rate, so the tropospheric warming is not really dependent on GHG forcing.

The tropical lower stratosphere is projected to cool with increased GHG forcing so it is supposed to be a "signature" of GHG warming.  Unfortunately, there are more GHGs than CO2 and the changes projected depend on what is assumed to be a "normal" condition.  This leads to the never ending discussion over initial versus boundary value problems.

That opens the huge can of observational worms.  If a data set disagrees with the model "projections" it is suspect and subjected to "re-analysis".  The RATPAC, or radiosonde, data above have been severely criticized, and since the satellite data provided by the Microwave Sounding Units (MSU) tend to agree with them, so have the satellites.  It can get very confusing because it isn't exactly a "simple" problem.

The uncertainty in this case isn't as much of a problem because the difference between observation and "projection" is so huge it is pretty hard to mistake.  I use the Lower Stratosphere (100-50 millibar) data, inverted for this example so there is a direct comparison, to show that stratospheric cooling is proportional to ocean surface warming and that there is an amplification factor.  Just looking at temperature anomaly though doesn't really explain why.

By converting the tropical SST anomaly to an energy anomaly using Stefan-Boltzmann, you can see that there is close to a 1:1 ratio of SST energy change to Lower Stratosphere temperature change.  This is one reason I am so focused on tropical SST reconstructions.  Following the energy, which is mainly derived from the tropical oceans, is just a simple way of looking at the problem.
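
The conversion is just the Stefan-Boltzmann law linearized around a tropical mean SST; the 26 C reference below is my assumption for illustration, not a number from the post.

```python
# Convert a small tropical SST anomaly (C) into an approximate flux
# anomaly (W m^-2) using the local Stefan-Boltzmann slope.
SIGMA = 5.67e-8
T_REF = 26.0 + 273.15            # assumed tropical mean SST, K

def sst_anomaly_to_flux(dt_c):
    return 4.0 * SIGMA * T_REF ** 3 * dt_c

print(sst_anomaly_to_flux(1.0))  # ~6 W m^-2 per degree at tropical temperatures
```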

Water vapor is one of those GHGs, and increasing SST would increase the water vapor, but water vapor also has regulating properties (feedback) such as clouds and deep convection that are dependent on a "sweet spot" temperature range.  Most of the latent energy transferred to the atmosphere is heat of vaporization, but in the atmosphere the dewpoint temperature and freezing point temperatures are not separated by many degrees, so there is a stronger temperature regulating response.  If climate models underestimate that "sweet spot" convective triggering temperature and the response curve (it isn't linear), then they wouldn't find a tropical tropospheric hot spot as large or located where they might expect.  The models can make small linear approximations of the response, but then you need to have the initial values correct.  Replacing the convective trigger parameterization with a "real" physics-based calculation would take too much computer time/power, so there is considerable research into what parameterizations produce the best results.

That makes this entire argument moot, because the next generation of models should fine-tune parameterizations and initial values.  Focusing on initial values, surface temperatures, specific humidity, cloud fraction, etc., should mean using more actual temperature data instead of anomaly data.  That would be the reason for my absolute surface temperature estimates and my focus on maximum or extreme temperatures instead of averages.

The aggravating part of all this to me is the "signatures".  There are so many interdependent or coupled variables that a "signature" is useless without knowing what "normal" is supposed to be.

With that in mind, comparing "surface" temperature to the RSS MSU troposphere interpretation and the UAH MSU troposphere interpretation is of limited use because they seem to be measuring different parts of the troposphere.  RSS provides a temperature for their lower troposphere product that indicates it is measuring in the atmospheric boundary layer.  Thanks to the heats of vaporization and fusion, their product would show less warming, so it is oranges to UAH's apples.  RSS is useful in my opinion for lots of things, but this just isn't one of them.  It is simpler to skip the turbulent troposphere altogether and use the lower stratosphere data, especially since it agrees well with the ocean surface temperatures/energy.

Wednesday, March 4, 2015

More on that Elusive Cause and Effect

Using the Tropics to Isolate Cause and Effect just leads to more questions.

This chart of 30-year sliding correlations with the hot tropics (10S-10N) points out a perturbation circa 1913 which could be due to a volcano not included in the forcing estimates used for CMIP5 model runs.  The timing, though, is off; the perturbation leads the volcano.  I limited the SST regions to 60S-60N because of issues with early temperature measurements near the polar circles.
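For anyone wanting to reproduce that kind of chart, a 30-year sliding correlation is simple enough with pandas; the random series below are just placeholders for whatever annual anomaly data you actually use.

import numpy as np
import pandas as pd

# 30-year sliding correlation between a reference series (say, 10S-10N SST
# anomaly) and another regional series, using placeholder data.
years = np.arange(1880, 2014)
rng = np.random.default_rng(0)
hot_tropics = pd.Series(rng.normal(0, 0.2, years.size), index=years)
region_60s_60n = pd.Series(0.7 * hot_tropics + rng.normal(0, 0.1, years.size), index=years)

window = 30   # years
sliding_corr = hot_tropics.rolling(window, center=True).corr(region_60s_60n)
print(sliding_corr.dropna().head())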

The drop in correlation around 2005 was a bit of a surprise.  So I used the actual temperature data available for land areas, specifically the maximum annual temperature and minimum annual temperature, to get a rough estimate of the change in atmospheric forcing.  There are lots of potential questions about how useful that might be, but converting the variation in air temperature to approximate Wm-2 should produce a rough estimate of the total change in atmospheric forcing.

So I expand on that by doing the same thing with changes in sea surface temperature.

Using the same 1901-2013 baseline, the energy anomaly for 40S-40N SST peaks at about 3 Wm-2, which is close to the land-based forcing estimate.

I used 40S-40N for the SST version but I doubt using 50S-50N would make much difference.

Forty to 60 North is a bit more interesting; there is actually a significant difference, almost 2 Wm-2, at a couple of points.  Early in the record the difference could be dismissed as limited observation, but since the most recent divergence is during the period of the best coverage, that part should be "real".

Forty to 60 South has the worst coverage, but should be somewhat reasonable after 1930.  This shows less "forcing", about 2 Wm-2, which is interesting, but the minimum is around 1930.

Comparing the actual temperatures recorded for the high latitude oceans shows a very interesting lag in responses.  The northern 40-60 band has a bottom from roughly 1905 to 1915 with a continuing downtrend until it pops around 1985.  The SH portion starts its upward trend around 1930 with a plateau starting around 1980.  Both of these regions represent a fairly small portion of total ocean energy, but since temperature anomalies are not weighted by energy, they could have a large impact on global mean temperature anomaly.  The northern high latitude ocean can impact more land area, which tends to amplify change due to its lower specific heat capacity, so it could produce most of the combined PDO/AMO temperature variation.

Here I need to remind anyone still following that there is a huge change in the ratio of land to ocean area across the Northern Hemisphere latitude bands.  The 60S-40S region is nearly all ocean.  Latent heat lost from this region is much more likely to return to the ocean within the region.  Water vapor from the surface would form clouds, releasing latent heat, but as the water falls as precipitation it would warm with the air, regaining a large portion of that energy.

In the Northern Hemisphere it is much more likely that precipitation would fall on land, transferring energy to that land, and then return to the oceans via river outflow with some delay.  When more water is stored in watersheds, reservoirs or groundwater, the delay would be much longer.

The 40N-60N band is 0.45 ocean, the 20N-40N band 0.60 ocean, and the 0-20N band 0.75 ocean, so the further north, the more likely precipitation falls on land and is stored for some time; the further south, the less likely precipitation is stored on a land mass.  Above 60 degrees, in the north the stability of sea ice determines whether the precipitation is stored, and in the south the Antarctic land mass would provide more consistent storage.

In a nutshell, Northern Hemisphere land and water use changes, especially at higher latitudes, would have a much greater impact on climate via the hydrology cycle than changes in any other latitude band.  It isn't particularly easy to determine how much impact those changes actually have, though, without a "normal" period to use as a reference.  They would generate lots of noise in the climate data.



If you look at the temperature anomalies for the higher latitude bands you can see how the response varies.  The 40N-60N band has less ocean area, meaning a lower heat capacity, so it would have a faster response time.  These anomalies are based on the same 1901-2013 baseline.

If you compare the actual temperature of the 40N-60N band with the bulk of the oceans, 40S-40N, you see about the same fast response.  Area-wise, 40N-60N is about 10% of the total area of these two, and energy-wise it provides about 8.8% of the combined total.
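The area bookkeeping is just spherical geometry; the sketch below uses the 0.45 ocean fraction mentioned above for 40N-60N and an assumed ~0.70 average for 40S-40N, which lands near the 10% figure.

import numpy as np

def band_area_fraction(lat_s, lat_n):
    """Fraction of the sphere's surface between two latitudes (degrees);
    band area scales with the difference of the sines of the latitudes."""
    return (np.sin(np.radians(lat_n)) - np.sin(np.radians(lat_s))) / 2.0

# Ocean-only areas: the 0.45 ocean fraction for 40N-60N is from the post above;
# the ~0.70 average for 40S-40N is an assumption for illustration.
ocean_40n_60n = band_area_fraction(40, 60) * 0.45
ocean_40s_40n = band_area_fraction(-40, 40) * 0.70
print(ocean_40n_60n / (ocean_40n_60n + ocean_40s_40n))   # roughly 0.10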

What all this seems to indicate is that there is noise that makes any attempt to determine cause and effect pretty difficult.  The only consistent part of all these comparisons is that there has been some increase in atmospheric forcing, on the order of 3 Wm-2, over the instrumental period.

Specific heat capacity is a great noise filter and since this is really an energy problem, the more specific heat capacity the better.

Sea level rise may be the go-to metric to estimate "global warming" in the long run.  Scaling the Jevrejeva et al. sea level rise reconstruction to tropical 25S-25N SST indicates a long-term increase with a slight acceleration over the past 150 years.  As you can see, it pretty well filters out the noise.  I used an 1854 to 2002 baseline for this effort, which indicates "noise", aka natural variability, is likely on the order of +/-0.35 C or so, at least in the tropics.  One could try to make a case for volcanic or solar forcing, but considering the Crowley and Unterman 2013 volcanic aerosol optical depth reconstruction, there are more exceptions to the normal concept of atmospheric forcing than agreements.
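The scaling itself is nothing fancy, just an ordinary least squares fit of one annual series onto the other; the series below are synthetic placeholders standing in for the overlapping years of the two reconstructions and carry no real information.

import numpy as np

years = np.arange(1854, 2003)
rng = np.random.default_rng(0)
slr = 0.002 * (years - 1854) + rng.normal(0.0, 0.02, years.size)   # fake sea level, m
sst = 0.004 * (years - 1854) + rng.normal(0.0, 0.15, years.size)   # fake SST anomaly, C

slope, intercept = np.polyfit(slr, sst, 1)     # SST change per unit sea level
scaled_slr = slope * slr + intercept           # sea level expressed in SST-anomaly units
print(f"scale factor: {slope:.2f} C per m of sea level rise")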


So using SLR as a reference, today's temperatures may seem exceptional, but Berkeley's land temperature reconstruction seems to indicate otherwise.


Sea level rise also filters out the oscillations in the Oppo et al 2009 Indo-Pacific Warm Pool reconstruction.

Just like the 30N-60N SST comparison above, Northern Hemisphere reconstructions appear to have the same fast response.  This Christiansen and Ljungqvist reconstruction is for 30N to 90N and uses a lot of tree ring reconstructions from very high northern latitudes.  Each of these individual reconstructions has a great deal of variability, and how a multi-proxy combination ends up looking depends a lot on the choice of proxies and methodology.  Spliced on the end I have GISS land only, in both annual form and as a 50-year moving average to match the C&L recon.  Their 2009 version is archived at NCDC with 71 of the reconstructions in an easy-to-download file, with actual temperatures in most cases, so you can compare the relative energy contribution as a double check of their weighting.  From that you can see most of the real signal is in the lower latitudes where there is more energy per unit anomaly.  The sigma variations they use don't give you any sense of how those recons should be weighted other than by rough area.
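The splice is just the annual series plus its centered 50-year moving average; a minimal sketch with pandas, where the random series is a placeholder for whatever version of the GISS land-only annual data you download.

import numpy as np
import pandas as pd

years = np.arange(1880, 2015)
rng = np.random.default_rng(2)
giss = pd.Series(0.008 * (years - 1880) + rng.normal(0, 0.15, years.size),
                 index=years, name="annual")   # placeholder for GISS land-only anomalies

giss_50yr = giss.rolling(window=50, center=True, min_periods=25).mean().rename("50yr_mean")
spliced = pd.concat([giss, giss_50yr], axis=1)
print(spliced.tail())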

I picked this recon btw to show how messing with smoothing has become an art for some of the less ethical.

Skeptical Science played the loess game of hockey stick enhancement, which produces those too-good-to-be-true uncertainty intervals so popular with advocates.  You can easily download the data from NCDC and the instrumental data from Climate Explorer to make your own comparison, should you think I am some kind of lying SOB.
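If you want to see how much the end of a loess curve depends on the span, a quick sketch with synthetic data makes the point; none of the numbers below represent any actual reconstruction.

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
x = np.arange(1000, 2015, dtype=float)
y = 0.0005 * (x - 1000) + 0.3 * np.sin(2 * np.pi * (x - 1000) / 60) + rng.normal(0, 0.2, x.size)

for frac in (0.05, 0.3):    # short vs. long smoothing span
    smoothed = lowess(y, x, frac=frac, return_sorted=True)
    print(f"frac={frac}: smoothed value at the end = {smoothed[-1, 1]:.2f}")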

Using the less adulterated data, I believe, indicates a good deal of the "cause" of the warming is buried in multi-century-scale wandering and a less-than-representative presentation of selected facts.  Focusing on energy anomaly instead of temperature anomaly might limit some of the creativity.