New Computer Fund

Monday, April 29, 2013

More thermal Inertia

The Climate Science debate is coming full circle again to my favorite part of the puzzle, thermal inertia.  Thermal inertia is confusing.  Typically, specific heat capacity is used instead of thermal inertia until surface temperatures start moving the wrong way; then the catastrophic global warming aficionados shift back to Ocean Heat Capacity, Missing Heat and Thermal Inertia to attempt to explain why things are not exactly like they estimated, but of course still within the model uncertainty range.  Yeah, right.

I have shown before how climate response to volcanic forcing varies with thermal inertia.  A volcano catching the oceans on a downward trend has more impact than catching them on an upward trend.  The same goes for solar and ENSO.  It takes time for equatorial heat absorbed in the oceans to travel toward the poles.  When the ENSO "cycle" shifts, the temperature differential between the poles and equator will be different, producing a different response.  This is due to a combination of real inertia, the mass of the ocean currents, and the heat contained in that mass.

The chart above shows the same situation with solar.  Since solar and ENSO have different factors that determine their "cycles", they can move in and out of phase, producing a different response to the same amount of "forcing".  In 1955, the two were close to 100% out of phase, and by 1988-89 they were close to 100% in phase.  Since the ENSO period is shorter, it tends to "break down" faster than the solar "cycle".

The use of "cycle" in quotes is because they are cycles, but due to different interactions they appear to be pseudo-cycles.  That is just part of dealing with the real, messy world of non-linear systems.  

Because of the number of influences involved, determining a reliable impact per perturbation is nearly impossible.  The ocean alone has probably 15 layers or segments that would need to be considered, which are of course asymmetrical relative to the equator, with all that atmospheric "weather" feeding back and forward on the ocean "cycles" at various depths.  It is chaotic.

How you "prove" the impact of phasing of solar/ENSO or ENSO/AMO etc. is still a big question mark.  Getting people to realize that there is a definitive impact is hard enough.  

Just food for thought. 

Thursday, April 25, 2013

Water Balance

The circulation of the global oceans is pretty complex.  The rate of ocean heat transport impacts the rate of atmospheric heat transport, which feeds back on the rate of ocean heat transport.  While that is going on, the rate of polar sea ice growth changes in response to both the rate of atmospheric heat transport and the rate of ocean heat transport, which provides separate feedbacks to both transports.  There is no obvious simple way to solve all the responses, but you can isolate portions to illustrate parts of the interactions.

Factor 0.6

           System   Circuit 1   Circuit 2
Flow          100          60          40
supply T       25          25          25
return T        3           2.2         4.2
delta T        22          22.8        20.8
Capacity     2200        1368         832

           Circuit 1 total   Circuit 1a   Circuit 1b   Circuit 1c
Flow                    60           30           20           10
supply T                25           25           25           25
return T                 2.2          1            4.6          1
delta T                 22.8         24           20.4         24
Capacity              1368          720          408          240

           Circuit 2 total   Circuit 2a   Circuit 2b
Flow                    40           20           20
supply T                25           25           25
return T                 4.2          2            6.4
delta T                 20.8         23           18.6
Capacity               832          460          372

This is a simple spreadsheet of a hypothetical ocean circulation.  The total flow is 100 units and the "factor" is the ratio of the flow to the two primary circuits, the Southern Hemisphere and the Northern Hemisphere.  The pump providing the flow will be the Coriolis effect.  Circuit 1 has three branch circuits.  Imagine 1a is the Pacific circuit, 1c is the Atlantic circuit and 1b is the Indian Ocean circuit.  1a and 1c have the same return temperature, while 1b has a higher return because of restricted flow.  Circuit 2 has two branches.  Circuit 2a would be the Pacific and 2b the Atlantic.  Circuit 2b also has a higher return temperature because of restricted flow.  All of the circuits/branches have the same supply water temperature, and the Capacity listed is just flow times delta T.
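Those numbers can be checked with a few lines of code.  This is just a sketch of the spreadsheet logic for the factor 0.6 case, using the branch flows and return temperatures from the tables above:

```python
# Reproduce the factor 0.6 "hydronic circuit" arithmetic from the tables above.
# Capacity is simply flow * delta T; the system return T is the flow-weighted
# average of the branch return temperatures.

SUPPLY_T = 25.0

def capacity(flow, return_t, supply_t=SUPPLY_T):
    """Capacity = flow * (supply T - return T)."""
    return flow * (supply_t - return_t)

def weighted_return(branches):
    """Flow-weighted average return temperature of (flow, return T) pairs."""
    total_flow = sum(flow for flow, _ in branches)
    return sum(flow * rt for flow, rt in branches) / total_flow

# (flow, return T) for each branch
circuit_1 = [(30, 1.0), (20, 4.6), (10, 1.0)]   # 1a Pacific, 1b Indian, 1c Atlantic
circuit_2 = [(20, 2.0), (20, 6.4)]              # 2a Pacific, 2b Atlantic

c1_flow = sum(f for f, _ in circuit_1)          # 60
c2_flow = sum(f for f, _ in circuit_2)          # 40

c1_cap = capacity(c1_flow, weighted_return(circuit_1))   # 1368
c2_cap = capacity(c2_flow, weighted_return(circuit_2))   # 832

system_return = weighted_return([(c1_flow, weighted_return(circuit_1)),
                                 (c2_flow, weighted_return(circuit_2))])
system_cap = capacity(100, system_return)                # 2200

print(c1_cap, c2_cap, system_return, system_cap)
```

Swapping in the factor 0.5 branch flows reproduces the second set of tables the same way.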

Factor 0.5

           System   Circuit 1   Circuit 2
Flow          100          50          50
supply T       25          25          25
return T        3.32        2.44        4.2
delta T        21.68       22.56       20.8
Capacity     2168        1128        1040

           Circuit 1 total   Circuit 1a   Circuit 1b   Circuit 1c
Flow                    50           20           20           10
supply T                25           25           25           25
return T                 2.44         1            4.6          1
delta T                 22.56        24           20.4         24
Capacity              1128          480          408          240

           Circuit 2 total   Circuit 2a   Circuit 2b
Flow                    50           25           25
supply T                25           25           25
return T                 4.2          2            6.4
delta T                 20.8         23           18.6
Capacity              1040          575          465

Update: Now imagine I balanced the flow between the two main circuits.  With the same total flow, the capacity is slightly reduced.  Circuit 1 obviously has a lower capacity because of the reduction in flow, while circuit 2 has a higher capacity.  I haven't adjusted the return temperatures to reflect the changes, but circuit 1 should have a lower return temperature, which would offset some of the capacity lost with the reduced flow, and circuit 2 should have a higher return temperature; either way, the total capacity will change.  That changes the average return temperature, and with circuit 2b having some restriction, the total "head" of the system would increase, causing a reduction in total system flow.  Each circuit should vary proportionally with the circuit flow variation, but there will be additional losses of overall system efficiency relative to the first condition.  For this example, 2168/2200 = 98.5%, a 1.5% reduction in efficiency.

That is not much loss, right?  Well, it is on the same order of magnitude as the current estimate for a CO2 doubling.  If the total system capacity is "fixed", then this change would require an increase in the supply T to compensate.  Since the return water temperature is relatively "fixed" at 0 C or a bit less due to the heat of fusion, a change in the percentage of hemispheric flow distribution will impact climate.  No external forcing required.

Now let's consider some real numbers.  Using the Reynolds OIv2 data, the SST between 30S and 30N has an average temperature of 26.2 C, which is close to the supply temperature in the examples.  The southern circuit, measured from 30S to 60S, has an average temperature of 11.2 C, which we can call our circuit 1 return temperature.  The northern circuit 2 has an average temperature of 14 C, measured from 30N to 60N.

Factor 0.5

           System   Circuit 1   Circuit 2
Flow          100          50          50
supply T     26.2        26.2        26.2
return T     12.6        11.2        14
delta T      13.6        15          12.2
Capacity     1360         750         610

Factor 0.45

           System   Circuit 1   Circuit 2
Flow          100          45          55
supply T     26.2        26.2        26.2
return T     12.74       11.2        14
delta T      13.46       15          12.2
Capacity     1346         675         671

Using those temperatures, there would be a load imbalance of about 10%, or a flow imbalance of about the same percentage.  Since the "average" atmospheric effect is ~160 Wm-2, which would need to be balanced by internal meridional flux in an equilibrium condition, the imbalance in terms of energy would be about 16 Wm-2.
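A rough version of that estimate, using the factor 0.5 capacities from the table above and the ~160 Wm-2 "average" atmospheric effect as a ballpark assumption:

```python
# Rough load-imbalance estimate from the factor 0.5 table above.
# Capacities: circuit 1 = 750, circuit 2 = 610 (flow * delta T units).

c1_cap, c2_cap = 750.0, 610.0
total_cap = c1_cap + c2_cap                        # 1360

# Imbalance as a fraction of total capacity
imbalance = (c1_cap - c2_cap) / total_cap          # ~0.103, i.e. about 10%

# Scale by the ~160 Wm-2 "average" atmospheric effect
atmospheric_effect = 160.0                         # Wm-2, ballpark from the text
energy_imbalance = imbalance * atmospheric_effect  # ~16.5 Wm-2

print(f"load imbalance: {imbalance:.1%}, energy: {energy_imbalance:.1f} Wm-2")
```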

I will leave this here, but this is the rough ballpark of the potential impact of changes in ocean heat transport (OHT), which can vary with surface winds (atmospheric oscillations) and sea ice extent (termination of OHT), and which is explored in the paper by Brian Rose et al., The Role of Oceans and Sea Ice in Abrupt Transitions between Climate States.

Friday, April 19, 2013

More Proxy Stuff

Back to the Tierney et al Lake Tanganyika reconstructions.  The low frequency version is in blue, with a high frequency, 1500 year tail used to show how "unprecedented" temperatures today are relative to the past.

The Lake Tanganyika reconstructions are important in my opinion because they should not suffer some of the more complicated issues of "critter" drift.  For the deep ocean proxies, relatively minor variations in ocean currents can cause huge swings in the estimated SST.  If the uncertainty in the Lake Tanganyika reconstructions is properly considered, it should provide one of the more stable Paleo "reference" points. 

My plot is left to right while the Marcott et al 2013 version used by Nick Stokes is right to left, but you should be able to notice this subtle difference: in the version I downloaded from NOAA Paleo, the twin peaks top out at about 2 C above the mean.  In the Stokes version, the later peak is about 1.8 C above the downward shifted mean and the ~11,000 BP peak is nearly 4 C above the shifted mean.  I would tend to question whatever "novel" method Marcott uses if the proxies get selectively scaled.

This proxy stuff really shouldn't be as hard as these "experts" make it out to be.  There is data, uncertainty and weighting, but it should all start with just looking at the bleeding data first.

Monday, April 15, 2013

Smoothing Uncertainty Example

With the Marcott et al. 2013 and other temperature reconstructions rehashing the same old same old issues of uncertainty, I thought I would create an example for the less statistically inclined among us, including myself.

This could be a climate scientist's worst nightmare, 100% natural variability.  What I have done is take GISS land and ocean temperature for the southern hemisphere and just reverse the trend into the future.  Then I applied trailing averages with doubling window lengths to the nightmare, so we can see how the natural smoothing of paleo temperature proxies could impact our perception of climate change.  I used trailing averages because ocean core samples are generally dated at the top, and the average over the thickness of the core sample would extend back in time from that point.  So a sample dated 1950 with a 100 year time range would actually be the average for the period centered on 1900 AD.  Since there is no standard proxy sampling period, you can easily have a mish-mash of smoothing periods when assembling a temperature reconstruction.
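The trailing average mechanics can be sketched with synthetic data.  The series below is made up for illustration; the actual chart used GISS SH LOTI with its trend mirrored into the future:

```python
# Sketch of "doubling period" trailing averages on a synthetic temperature
# series (trend + 60 year pseudo-cycle + noise), standing in for the data
# described in the text.
import math
import random

random.seed(42)

# Synthetic annual anomaly series, 1880-2079
years = list(range(1880, 2080))
series = [0.005 * (y - 1980) + 0.1 * math.sin(2 * math.pi * (y - 1880) / 60)
          + random.gauss(0, 0.1) for y in years]

def trailing_average(values, window):
    """Trailing mean: each point is the average of the preceding `window`
    values, mimicking a proxy sample dated at the top of a core slice."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

# Doubling smoothing windows, like the post describes
for window in (15, 30, 60, 120):
    smoothed = trailing_average(series, window)
    mean = sum(smoothed) / len(smoothed)
    print(f"{window:4d} yr window: mean {mean:+.3f}, "
          f"range {max(smoothed) - min(smoothed):.3f}")
```

The means of the different windows drift apart, which is the source of the between-window uncertainty discussed next.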

The mean value lines give you an idea of the uncertainty between smoothing windows, which in this example is about 0.12 C.  The rough maximum uncertainty of any single point is about twice the uncertainty in the average, or 0.24 C.  Not being a statistical whiz, I would place the uncertainty at the larger of the two and call it 0.24 C.

The example I am using is an instrumental series that can have daily resolution if you like.  You could add a few billion more readings and realistically reduce the uncertainty of the instrumental record to about 0.12 C, but you still have the potential of 0.24 C uncertainty if you select any one point.

I used the GISS SH LOTI because it is likely the worst of the instrumental temperature reconstructions: data for the far south was not available until roughly 1956, about halfway through the full instrumental series, and it leans heavily on SST data with very sparse coverage and its own set of uncertainties, not the least being that there is no Tmax and Tmin for the SST to be averaged the way land surface air temperature is averaged.  The southern oceans prior to 1960 are basically a wild ass guess polished up a bit.

The uncertainty that Nick Stokes has for his novel proxy combining method is close to the 0.12, with the +/- producing the same 0.24 C uncertainty for any one point, like at the end of his reconstruction.  I am pretty impressed with Nick's method and hope he gets it published.

There is one humorous part of this reconstruction.  Should I combine it with the 120 year trailing average smooth of the GISS LOTI SH data, today's instrumental record is unprecedentedly lost in the noise :)

Sunday, April 14, 2013

Nick Stokes and Marcott Reconstruction

Nick Stokes has been discussing the Marcott et al 2013 Holocene temperature reconstruction over at Lucia's.  The climate outcasts tend to think that Marcott's method sucks while Nick thought it sucked less.  So after a lot of back and forth, Nick used his surface temperature program, TempLS, with minor modifications, to average the paleo reconstructions.

I stole a screen shot from his blog to show how things are working out.  This looks a lot more reasonable to me as far as the procedure goes.  The uncertainty, though, doesn't include the uncertainty of the individual proxies, which is pretty significant and not necessarily linear, and this also includes 100 year smoothing on proxies that have natural smoothing of often more than 100 years.  It is a step in the right direction IMO.

One of the problems with irregular smoothing being resmoothed can be seen in this splice of the Tierney et al Lake Tanganyika lake surface temperature.

Since there is very little overlap, I used the last point of the longer reconstruction with two points of the shorter reconstruction at about 637 AD to create the anomalies.  As you can see, there is a lot more variability in the higher resolution 1500 year reconstruction than in the 60 ka reconstruction.  I would think that the shorter reconstruction's variance should be used for both series, which happens to be about the error range Tierney mentioned in the original 2008 60,000 year reconstruction.  That error range should be used no matter what type of smoothing is done.

This chart uses the Yamal tree ring temperature reconstruction by Hantemirov and Shiyatov to show what different smoothing periods do to the information.  With 125 year smoothing the spikes are not only subdued but inverted.  So a 60 year excursion would appear unprecedented in any reconstruction smoothed naturally or in processing.  Since there is evidence that climate has 60 year pseudo-cycles, it is understandable that 100 year smoothing would eliminate that evidence.  So the uncertainty range should include that possibility.
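The suppression part is easy to demonstrate with made-up data (the series below is synthetic, not the Yamal reconstruction): a sharp 60 year excursion is cut by more than half under a 125 year centered moving average.

```python
# Demonstrate how ~125 year smoothing suppresses a 60 year excursion.
# Synthetic annual series: flat baseline with one 60 year, 1 C warm spike.

n_years = 1000
series = [0.0] * n_years
for i in range(470, 530):          # a 60 year, 1 C excursion
    series[i] = 1.0

def centered_average(values, window):
    """Centered moving average; window should be odd."""
    half = window // 2
    return [sum(values[i - half:i + half + 1]) / window
            for i in range(half, len(values) - half)]

raw_peak = max(series)                         # 1.0
smoothed = centered_average(series, 125)
smoothed_peak = max(smoothed)                  # 60/125 = 0.48

print(f"raw peak {raw_peak:.2f} C, after 125 yr smoothing {smoothed_peak:.2f} C")
```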

My two cents.

Tuesday, April 9, 2013


UPDATE:  Blogger appears to have eaten the original post, likely with some help from me, but in any case, the post and the draft of the post appear to be missing in action.

It was a simple post: existence is proof of continuity.  Another way to look at it: in nature, zero does not exist.  Neither does infinity.  Those are constructs developed by mankind to help understand nature.

Both zero and infinity can be approached in nature, but unlike a bank account that can actually be zero, nature never can be.

Carnot Efficiency is a tool used to estimate the amount of work you can get out of a mechanical system.  It is a percentage-based estimate, so it has a range of zero to 100%, but it can never reach either.  In a single stage system, a more realistic maximum efficiency is 50%.  You can use the wasted energy from a single stage system to increase overall efficiency, but each stage will have its own limit, typically less than 50 percent.

50% is a "sweet spot" for existence.  Consider a system that is open to the universe, if it gains more than 50% of the energy it can release, something has to break at some point.  If it loses more than 50% of the energy it gains, it ceases to exist as a system at some point in time.   

One of the Climate Etc. denizens posed a simple problem on dissipation.  One example, or question, was dissipation between two sets of plates: 300K to 150K, and 150K to zero K plates.  If you use the Carnot Efficiency, the first pair of plates is (1-150/300) = 50% efficient and the second pair is (1-0/150) = 100% efficient.  If the 150K plate can lose all its energy in an instant of time, it never existed.

If you consider the two sets of plates in series, then the efficiency of the first pair has to equal the efficiency of the second pair.  That requires a non-zero temperature for the final plate.  So the "system" cannot exist in isolation.  There has to be a non-zero sink, or an infinite number of system stages, since zero and perfection, in the case of Carnot efficiency, do not exist in nature.

There is more to physics than Carnot Efficiency, but it has proven itself to be useful, though imperfect like all models.  So if you consider the plate example with constant efficiency, there would be more stages:

300 to 150, 150 to 75, 75 to 37.5, etc., until the next stage is no longer significant for our problem.  Since temperature is just another construct to relate to energy, also based on the range between perfection and zero, you can compare the first stage, 300K to 150K, which would be 450 Wm-2 to 27 Wm-2, converting temperature to energy using the Stefan-Boltzmann law.  Since the Carnot efficiency is 50% and the S-B law is related to temperature by the 4th power, the emissivity or transmittance of energy from the warmer to cooler plates would be 1-(.5)^4 = 93.75%, which would soon cease to exist.  To have a system that you could reasonably expect to exist for some time, the Carnot Efficiency would need to be (1-(1/2^0.5)) = 29.3 percent.  Nature is full of square laws.  For the emissivity to be 50%, the energy transfer to the second plate would need to be 250 Wm-2, or 257.7K, which would produce a maximum Carnot efficiency of 42.8%.  Ideal is actually 42.8 percent, not 50% and not 100 percent.

Then with two equal efficiency stages, the energy emitted would be 250/2 = 125 Wm-2 and the temperature of the outer plate would be 216K.  Add another stage and you have 125/2 = 62.5 Wm-2 emitted with a plate temperature of 182.2K.
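The flux-to-temperature conversions in that chain are just the Stefan-Boltzmann law inverted, T = (F/sigma)^(1/4).  A quick sketch of the halving sequence starting from 250 Wm-2:

```python
# Invert the Stefan-Boltzmann law, T = (F / sigma)**0.25, for the halving
# sequence of fluxes used above: 250 -> 125 -> 62.5 Wm-2.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_temperature(flux):
    """Blackbody temperature (K) for a given flux (Wm-2)."""
    return (flux / SIGMA) ** 0.25

flux = 250.0
for stage in range(3):
    print(f"{flux:6.1f} Wm-2 -> {sb_temperature(flux):.1f} K")
    flux /= 2
```

This reproduces the 257.7K, 216K and 182.2K layer temperatures quoted in the text.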

So if Earth had a surface temperature of 300K, it would have an effective radiant layer at 257.7K, another at 216K and another at 182.2K, if the S-B law were perfect.  It's not.  How imperfect is it?

Earth has a near perfect radiant layer called the Turbopause.  The Turbopause, where the energy is low enough that convective mixing nearly ceases, is ~185K, which has an S-B energy of ~67 Wm-2.  Then 62.5/67 = 93.2% is the approximate "effective" emissivity of the surface if the actual surface temperature is 300K.  That is pretty close to the ~0.924 correction factor used with the S-B law.

Note, there are rounding errors; the numbers are not exact, which would require some absolutely accurate baseline that may not exist, but for a guesstimate, 16% Carnot Efficiency per stage seems to be pretty close.

For the other parts of the puzzle posed by the Climate Etc. denizen, I will let you comment there.

Sunday, April 7, 2013

Marcott Flaws Simplified

Climate is cyclic.  The Earth rotates and orbits around the Sun.  Just those two create 24 hour and 365.26 day cycles.  Since the Earth's rotation is not perfectly smooth, it slowly wobbles around with a period of roughly 21,000 years, called precession.  There are more cycles, but just consider the annual and 21,000 year cycles.

The annual cycle causes Earth to have seasons.  The 21,000 year cycle causes the seasons to swap.  Right now, since the Southern Hemisphere is in the 21,000 year "summer" period, it would be warming, just like the Southern Hemisphere is warmer in December through February, its summer, during the annual cycle.

Marcott et al. averaged the Holocene, a period of roughly 11,000 years or half a precession cycle.

If the Northern and Southern Hemisphere are 180 degrees out of phase, that is the average Marcott should find.  Average is equal to the sum of the two precessional season sine waves.

If the precessional season has no lag at all, this is what Marcott et al. would have found.  The Average is equal to the cycle with the sum of the two being twice the average value.

If there is a 120 degree lag impact, the average would be less than the individual hemisphere precessional seasons and the sum would be equal to the individual hemisphere cycles.

If there is a 90 degree lag between the hemisphere precessional cycles, then the average would be 70% of the individual hemisphere cycles and the sum would be 41% greater than the individual hemisphere precessional season cycles.
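Those amplitude figures follow from adding two equal sine waves with a phase lag: the combined amplitude is 2*cos(lag/2) times the individual amplitude.  A quick check of the cases above:

```python
# Amplitude of the sum (and average) of two equal-amplitude sine waves
# separated by a phase lag, as in the hemispheric precession argument.
import math

def combined_amplitude(lag_deg):
    """Peak amplitude of sin(x) + sin(x + lag), relative to one wave."""
    lag = math.radians(lag_deg)
    return 2 * abs(math.cos(lag / 2))

for lag in (0, 90, 120, 180):
    total = combined_amplitude(lag)
    print(f"lag {lag:3d} deg: sum = {total:.3f}x, average = {total / 2:.3f}x")
```

At 0 degrees the sum is twice one cycle, at 90 degrees it is 41% greater and the average is 70%, at 120 degrees the sum equals one cycle, and at 180 degrees the two cancel, matching the cases described above.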

Marcott et al. were only looking for the "average"; their method would suppress any synchronization of hemispheric impacts.

Since the precessional cycle is roughly 21,000 years long, each precessional "season" would be 5250 years.  Since we are used to thinking in months, each precessional month would be 1750 years long.  Again comparing to the calendar, each precessional day would be 58.33 years long.
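The calendar analogy is straight division, assuming a 21,000 year cycle with 12 "months" of 30 "days" each:

```python
# Map the ~21,000 year precession cycle onto a calendar.
precession_years = 21000
season = precession_years / 4     # 5250 years per precessional season
month = precession_years / 12     # 1750 years per precessional month
day = month / 30                  # 58.33 years per precessional day
print(season, month, round(day, 2))
```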

So Marcott et al. is essentially saying that it is unprecedented for the last precessional day, 58.33 years, to be warmer than the average of a precessional winter and spring.

We actually pay these guys to do this stuff.

Thursday, April 4, 2013

Playing with Proxy Reconstructions

While paleoclimate reconstructions are useful, I have never found the accuracy reliable enough to draw many conclusions.  I do tend to consider the relative variations between regions, and for the paleo-oceans between depths, informative.  There is no problem with paleo, other than overestimating the accuracy, as far as I am concerned.

With the Marcott dust up, rational limits to paleo are again a hot topic.  I am a firm believer that the best way to "splice" paleo reconstructions, if you must, is from the most recent to the older.  We live in the present, the cores were collected in the present, and the methods used to calibrate the cores were developed in the present.  Since paleo is our window to the past, I prefer to work back in time.

I have an example of five paleo reconstructions I downloaded from the NOAA Paleo archives.  The reconstructions are pretty clearly documented, though it can be a little difficult, from time to time, to find the "present" in the before present.  When in doubt, 1950 appears to be the standard for "present".  In the example, two of the reconstructions have a lot of data between 0 AD and "present", one has a reasonable amount for the period and two are limited to just a few points in the past 2000 years.

Since the Paleo climatologists calibrated the dates and temperatures, I assume they are in the ballpark and based my average on the more modern era, 0 to 2000 AD.  The alignment is what it is.  Since each reconstruction has a range of error associated with each data point, there is no sense trying to make more of this alignment than the error of the individual reconstructions will allow.

For the more recent 5000 years, this is how the alignment looks.  The Alley 2000 and Jouzel 2007 reconstructions are for Greenland and Antarctica.  They represent a fairly small but highly variable portion of the globe.  The Kim 2000 and Ruehlemann 1999 reconstructions are in the tropical Atlantic, and the Tierney 2010, which I have misspelled, is from Lake Tanganyika.  These three reconstructions would represent more of the energy of the Earth and should likely be weighted more heavily if I were building some "global" temperature reconstruction.

This chart shows my selected full period for combining the reconstructions.  From this, the Antarctic region started coming out of the Ice Age first, followed by the Arctic region, using Greenland as a reference.  The other reconstructions stay within about a 4 C range.  Near the present, the data series all converge because of my choice of baseline.  At the start of the data series, the three tropical reconstructions come close to converging again at -2.5 C, +/- a few tenths.  If the tropics represent most of the energy of the planet, their variation between glacial and interglacial is only about +/- 2 C, with +/- 1.25 C closer to the average variation in tropical conditions, not including the uncertainty of the individual reconstructions.  The Antarctic has roughly a 10 C range and the Arctic a 15 to 20 C range.

The Antarctic reconstructions also have the popular "global" CO2 change which appears to be reasonably close to "global" with a range of error of possibly 100 ppm or so.

The Monnin 2004 Antarctic composite CO2 reconstruction is included in this chart.  I used the same baseline to calculate a CO2 anomaly, then scaled the CO2 anomaly by dividing by 9.5 to match the start of the combined reconstructions.  Based on this "fit", CO2 appears to lag the Antarctic reconstructed temperature, as has been noted by a number of climate junkies and real scientists.  There is no "robust" correlation of CO2 with the other reconstructions apparent.

There are thousands of regional temperature reconstructions. It might be interesting to see where more of the regions converge using the 0 to present baseline. 

Update: The Tierney reconstruction in my example was also part of the Marcott et al 2013 Holocene reconstruction. 

Nick Stokes has kindly put together all of the reconstructions that Marcott used on his Moyhu blog.  The black mini-hockey stick ending at about 1350 BP, about 1 C below the mean, is the author Tierney's.

Marcott's date adjustment moves the Tierney reconstruction into the 900 BP range, about a 400 year shift.  That moves an African temperature reconstruction that did start an upswing at the beginning of the Medieval Warm Period to the beginning of the Little Ice Age period.  Wouldn't that tend to smooth out what is arguably the most recent warm period?

Tamino, at his blog Open Mind, uses some synthetic data with the Marcott Monte Carlo process to see if spikes would still be evident after the Monte Carlo smoothing.  RomanM also appears to be interested in the Marcott Monte Carlo over at Climate Audit.  Kinda looks to me like Marcott is not what it is "stacked" up to be.