New Computer Fund

Wednesday, September 28, 2016

If the oceans are warming they cannot be causing the warming

This is the kind of interesting "logic" you run into when debating Climate Change.  I got this response while discussing how the oceans are more accurately described as a thermal reservoir than a thermal sink.  If you consider the oceans to be a thermal sink, you are using a classic Carnot engine model. 

The Carnot engine is a simple two box model: Tsink or Tc (for cold) is one box and Tsource or Th (for hot) is the other.  Heat flows from hot to cold, and if you put both boxes in a perfectly insulated container, eventually both would end up at the same temperature, Taverage.

Carnot never used a Taverage model because it is useless and meaningless: a flow of energy is needed to perform any work, which is the whole point of an engine.  Carnot's concept was very useful, though, because it estimates efficiency and provides reasonable limits of expectation.  Getting more than 50% efficiency requires a bit of creativity, like adding a second engine to use the wasted energy of the first.  You can keep adding engines until you run out of money, but you will never get 100% efficiency.

The problem with the "logic" is that "warming" is related to Taverage, and the oceans viewed through Taverage are reservoirs, not sinks.  A sink at the same temperature as the source, or even a greater temperature than the source, is meaningless in thermodynamics.  Carnot's simple two box model shows the problem once you add the "effective" radiant energy to the picture.

If Th starts at 400 K and Tc starts at 200 K, and both boxes contain the same volume, the final temperature should be 300 K assuming no energy is lost.  Th would have an effective energy of about 1452 Wm-2 and Tc about 91 Wm-2, an average energy of roughly 771 Wm-2, while the average temperature of 300 K has an effective energy of only about 459 Wm-2.  So in this admittedly unrealistic situation, if you use average temperature as the metric for a heat engine, the "blending" appears, as if by magic, to make the system "warm": the blended box emits roughly 300 Wm-2 less than the two boxes did, and an energy budget reads that reduced emission as retained energy.  But can this actually happen in a real world situation?  You bet your ass.
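For anyone who wants to check the arithmetic, here is a minimal Python sketch of the two-box blending.  It assumes blackbody emission (emissivity 1) and the standard Stefan-Boltzmann constant, so treat the Wm-2 values as ballpark figures rather than gospel.

```python
# Minimal sketch of the two-box "blending" arithmetic, assuming blackbody
# emission (emissivity = 1).  The point is only that the average of the fluxes
# is not the flux of the average temperature.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4

def flux(T):
    """Effective radiant energy (W m-2) of a blackbody at temperature T in kelvin."""
    return SIGMA * T**4

Th, Tc = 400.0, 200.0
Tavg = (Th + Tc) / 2.0                        # 300 K for equal volumes

avg_of_fluxes = (flux(Th) + flux(Tc)) / 2.0   # ~771 W m-2
flux_of_avg = flux(Tavg)                      # ~459 W m-2

print(f"Th emits {flux(Th):.0f} W/m^2, Tc emits {flux(Tc):.0f} W/m^2")
print(f"average of the fluxes      : {avg_of_fluxes:.0f} W/m^2")
print(f"flux of the average (300 K): {flux_of_avg:.0f} W/m^2")
# Blending at constant internal energy cuts the emitted flux by ~300 W/m^2,
# which purely radiant bookkeeping would read as the system retaining energy,
# i.e. "warming", even though nothing was added.
```

The same arithmetic applied to the hemispheric sea surface temperatures a couple of paragraphs down gives a much smaller number, but the sign of the effect is the same.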

It is easier to add energy to something that is cold than to something that is hot.  Heat "wants" to flow from hot to cold, so there is always heat flow unless you dream up some perfect barrier to heat flow or an "ideal" model.  You can use "averages" up to a point by assuming internal flow is insignificant, but the greater the temperature differences involved, the less valid that assumption becomes.  When you use a "sink" model, you are assuming a temperature difference large enough that small variations are insignificant, so as the variations become larger, that assumption is less valid.  In climate science "discussions", hopefully not in real climate science, people tend to flip flop between assumptions and have Eureka moments.

So let's just look at the difference in ocean surface temperature by hemisphere.  The NH is on average 3 C degrees warmer than the SH, 24.5 C versus 21.5 C excluding the 45 to 90 higher latitudes.  Compared to the Carnot example, this difference is next to nothing, but if the two areas were to magically equalize to the same temperature, there would be about 0.23 Wm-2 of unaccounted for "warming", which happens to be about 1/3 of the entire energy imbalance of the planet.  Since the southern oceans are cooler, they are also easier to warm, and nearly all of the current ocean heat uptake just so happens to be in the southern hemisphere.  Since solar energy varies seasonally and the southern hemisphere currently gets the highest energy, the greater temperature difference would imply a higher heating efficiency.  The entire imbalance "could" be due to the planet's current position in the precessional cycle.

Now we know better than that, because adding CO2 and other GHGs will produce some warming by providing additional atmospheric insulation, to use one analogy, but ignoring the potential issues of internal imbalances and unbalanced "external" forcing would just "enhance" the efficiency of CO2 related warming to an unrealistic level.  Anthropogenic warming cannot be as large as indicated by simple models.  With the exception of online climate "experts", the entire thermodynamically literate world knows this.

So how well do the climate modelers do?  The "average" climate model overestimates the temperature of the southern oceans and underestimates the temperature of the northern oceans.  That is a pretty good indication the models miss part of the thermodynamics, which is a large part of the physics they are supposedly based on. 

Now the "typical" online expert wants, equations, a better model or some other definitive "proof" that this is a valid issue, which is simple insane, because the Zeroth Law of Thermodynamics, is a "Law" not a suggestion.  It is their job to show they aren't playing fast and loose with the law, not mine. 

A great example of how clueless they are is how they consider internal variability.  Internal oscillations or pseudo-oscillations are a result of internal temperature gradients which constantly exist.  Since winds and currents are part of these oscillations, and winds and currents drive the majority of internal mixing of fluids containing heat energy, they directly impact the "average" energy in the system.  Increased variability would mean greater mixing, which would indicate more heat uptake aka "warming", so assuming the oscillations "average" to zero impact is nuts.  You can assume some "average" rate of turbulent mixing, but then you have added another layer of assumptions.  Assuming internal oscillations have zero impact on "warming" is just another source of overestimation of CO2 forcing impact.

Now when you consider the oceans as a thermal reservoir, you can start to understand why a volcano can sometimes appear to "cause" warming.  If the volcano causes more localized cooling, like in the Arctic, the ocean overturning circulation can increase, delivering more energy than normal to a region with extremely cold temperatures, so the poor choice of ignoring the zeroth law has its maximum impact there.  Whether that "warming" is real or not depends on the rate of ocean heat uptake, so "surface" warming can be a result of total system heat loss or just a glitch in the validity of a poor choice of metric, "Average Global Surface Temperature."   When you use just land temperatures, the situation gets worse because the temperature range for determining the "average" gets larger.

Hopefully, this is the last time I have to write a post about something that should be common knowledge, the Zeroth LOT.


Thursday, September 22, 2016

Erin Brockovich - From environmental hero to ambulance chaser

Chromium 6 and Erin Brockovich are getting more press again.  The California Department of Health has placed a health goal of 0.02 ppb on Chromium 6 in drinking water, but the reality is that 10 ppb is about the best that can be expected since Chromium, both 3 and 6, is fairly common in US ground water, especially in the more arid regions of the southwestern US.  It was recently "discovered" that Chromium 6 at about 2 ppb is common in North Carolina ground water, and based on a larger EPA study about 218,000,000 people in the US have greater than 0.02 ppb in their water supplies. 

So Ms. Brockovich has picked up the battle again and quoted something interesting, "12,000 excess cases of cancer by the end of the century."  So in 84 years the US can expect 12,000 "excess" cases of cancer based on testing of mice and rats.  Mice and rats are not "human", so there is some debate on how valid animal testing actually is, which is why Chromium 6 is listed as a "probable" carcinogen, not a known carcinogen like, say, vodka.  A UK study estimated that 4% of all UK cancers were "caused" by drinking alcohol, while the Chromium 6 "crisis" should produce an excess cancer rate on the order of 0.0003%, which is close to negligible in my opinion.

California is known for attempting extremely conservative health standards; while 1 in a million is generally considered negligible, the 0.02 ppb goal looks to be about 1 in 2.25 million.  1 or 2 ppb is generally considered "safe", with the EPA picking 10 ppb as an action level.  For some perspective, you have a 1 in 6000 chance of dying in a car crash driving 10,000 miles per year and about a 1 in 16,000 chance of not making it back from a hike in the mountains.  I am pretty sure mountain biking increases your risk.  The current US risk for all cancers is about 50% over a lifetime, and 84 years is one ample lifetime. 

So while by her own sources the risk is about as close to negligible as it gets, that doesn't stop Ms. Brockovich from trying to make some health related point to save the good people of the US from some evil corporate demon.  I am all for clean water, but this is getting bizarre.


Tuesday, September 20, 2016

Horseshoes and Hand Grenades - "Nearly Ice Free"

The main thing that attracted me to climate science was watching grown, educated people making complete fools of themselves.  This really started when Steig et al. published a paper on Antarctic warming.  That group went to great lengths to "prove" what they expected to happen in the Antarctic when the "physics" really indicated that the opposite was more likely.  CO2 and other greenhouse gases increase heat retention, so without heat energy they do squat; they actually increase heat loss at temperatures below around -20 C degrees.  If you don't want to take my word for that, play around with the MODTRAN model available from the University of Chicago.

Now Greg Goodman has a post at Climate Etc. concerning the "ice free" Arctic mentioned often in the press, which is really a shortened version of "nearly ice free", defined as less than 1 million square kilometers of Arctic sea ice area for five consecutive years in September.  "Nearly ice free" is a real Climate Change metric defined in the IPCC literature.  "Ice free" has become a sales slogan for the climate activists.  Since Climate Change is both "scientific" and a political hot button, abuse of the not all that well thought out "nearly ice free" climate metric should be fair game.

"Nearly ice free" isn't particularly useful because it is only a small change in sea ice cover for a short period of time in an area that represents a tiny fraction of the Earth's surface.  As soon as the area becomes "ice free" winter sets in and the cycle starts over again.  Whether ice is retained or not depends more on winds and wind direction than it does on actual temperature and retained atmospheric energy.

In fact, an "ice free" Arctic in September should lead to more snow fall/snow accumulation and brighter cleaner snow would tend to reflect more solar energy the next season, more than offsetting any September albedo reduction. 

The bizarre desire to blame everything on CO2 or mankind tends to overlook that when Arctic sea ice thins in summer, icebreakers move into the region to open shipping lanes, which leads to greater flow of ice out of the region.  It also overlooks all the soot and dust deposited on the snow thanks to agriculture and shipping, which often uses the most polluting liquid fuel ever, bunker oil.  Left alone, the sea ice would last longer and the ice itself would help retain more ocean heat energy. 

However, thanks to climate activists like Al Gore, what little scientific meaning "nearly ice free" might have is completely lost to the political value of "alarming" "ice free" conditions that are happening many decades prior to model "projections."

All the while, Antarctic sea ice expansion is downplayed, and the northward shift of the InterTropical Convergence Zone (ITCZ), which was completely missed by the models, is ignored as much as possible along with the general reduction in Atlantic hurricanes related to that shift.  These shifts are most likely related to century and multi-century scale ocean circulation variations tied to the hemispheric seesaw often discussed in paleo-climate research. 

Century and multi-century scale "oscillations" are bad for climate science business, though, since all of that has been assumed to be insignificant.  Instead, climate scientists tend to look for anything close enough to be plausible to reinforce their "projections".  "It doesn't look as bad as we expected" should be the order of the day, but climate change politics has far too much invested to take the rational path.

Horseshoes, hand grenades and "close enough for government work" are hard to argue against.

Tuesday, September 13, 2016

Technology versus Mysticism

The Dakota Access Pipeline (DAP or DAPL) is in the news because of a protest by the Lakota (Dakota and Nakota) tribes, joined by quite a few other first nation groups concerned over water rights and "sacred" sites.  The actual path of the pipeline is not on tribal land according to the most recent determination of what those lands are, but because the borders have moved over the years, there is some dispute over what those lands should be as well.

Since the tribal lands are downstream of the pipeline path, the Standing Rock Sioux (sic) have a right to be concerned, but they take a mystical approach to potential risk versus the technical approach to risk taken by the Army Corps of Engineers, EPA and industry professionals.  A classic battle of science and pseudo-science. 

While pipelines are not foolproof, they are considered much safer than overland transportation via rail or road.  Older pipelines already in the general area have had a break or two requiring expensive cleanup in pristine places like the Yellowstone River, but with a 400% or more increase in rail transportation of oil, the spills, damage and deaths associated with rail have increased greatly.  Both rail and pipeline companies have done quite a bit of work to improve safety, but pipelines still have an edge in safety and cost.  However, statistics, science and economics are hard sells compared to the mystical traditions of the "noble" savage. 

The (sic) by Sioux is because it is an Anglo nickname for the group that roughly means snake or enemy.  The French traders back in the day were allied with other tribes and the Dakota in the Minnesota and Iowa territories tended to have a blood feud on occasion with their neighbors.  Young male members of tribes needed to prove themselves and capture potential brides to gain status.  Since fur and hide trading could allow them to get better weapons and conveniences for their households, the ideological notion that native Americans only took what they needed and were admirable stewards of the land was a bit of a myth created by big city Anglos to sell papers and treaties. 

Blood law and survival of the fittest are the default laws of any land, and the belief that the dead might return to avenge wrongs is pretty common.  The Dakota, as well as other groups, were known to dismember and mutilate their victims so their ghosts would be less of a threat.  The Dakota War of 1862 started in Minnesota when a group of 4 or 5 young warriors attacked a homestead and killed 5 or 6 settlers, mutilated the bodies and burned everything they didn't find of value.  The actual numbers vary a little depending on the source.  The settlers were on land that the Dakota Tribe had sold to the US government years earlier for $5 million, and due to typical Federal government efficiency during the start of the Civil War, the tribe's $80,000 in gold annuity had not arrived on time.  The young warriors were just doing what their traditions allowed, and the situation escalated into a war in which the tribe attacked and killed about 600 settlers in a village they thought was an easier target than a small fort that had actual Anglo warriors to tackle. 

While the Dakota were attacking the New Ulm township, their barrels of gold had arrived at Fort Ridgely, where it was buried for safety, and local settlers joined the handful of military in the fort, which had 5 or 6 serviceable field pieces including howitzers.  Once the Dakota warriors finally decided to agree with Chief Little Crow and attack the fort, the timing was lost and the howitzers took their toll.  Had the tribe listened to their "chief", they most likely would have collected their $80,000 in gold, had a short court case involving 4 or 5 young warriors that would have been hanged, and lived happily ever after until the next dust-up. 

Instead, the citizens of the Minnesota territory, with the help of the US Army, managed to raise a few thousand troops and, despite a lack of military skill, managed to drive the Dakota Tribe out of Minnesota.  Since tales of the mutilated bodies of the settler men, women and children were widespread, the militia retaliated in kind, creating what one would call a less than Christian end to the story.  Nearly half of the estimated 6,500 Dakota were killed or captured.

Over 300 Dakota combatants captured were sentenced to death, and while President Lincoln pardoned the majority, 38 warriors were hanged in the largest mass execution in US history.  This was the start of a 30 year war with the Sioux Nation that ended at Wounded Knee, where native men, women and children were killed in the same manner that New Ulm men, women and children were killed at the start of the war.  The bones of Chief Little Crow, who was killed a year after the 1862 war, were on display at the Minnesota State Capitol along with his scalp until 1971, when they were finally buried.

Prior to the 9/11 attacks, the Dakota Tribe of Minnesota held the record for the largest mass slaughter of Americans.  The Sioux have a rich heritage as warriors but not a rich heritage of diplomacy.  

Here is a Native American perspective on the Dakota War.  Another perspective. 

It will be interesting to see how our current scientifically enlightened leaders deal with the Dakota Access Pipeline situation and revisionist Native American history.

Friday, August 19, 2016

Distrust of Science vs love of pseudoscience

Locally, water quality issues are a big deal right now, with the Florida DEP revising water quality regulations and the heavy rainfall causing more water to be released from Lake Okeechobee, which is causing problems.  The Lake Okeechobee situation is a "real" problem that is going to take a few more decades to "solve", and after that there will still be problems when a heavy rain period follows an extended drought period.  The water quality regulation issue is a non-issue being promoted, it appears, to recapture some former environmental activism glory.

For Lake Okeechobee, in spite of huge residential and commercial development since 1930, the primary villain is "Big Sugar".  Big Sugar uses fertilizer, and runoff tends to favor invasive plants more than native plants.  To reduce the problem, Big Sugar has adjusted its practices a bit and built artificial wetlands to filter mainly phosphorus runoff.  The complete Everglades restoration plan is extremely ambitious and involves, among other things, installing nearly 100 miles of new bridges on the Tamiami Trail and other roadways to make the 'glades water flow more "normal".  To work properly, all of this has to be designed for extreme conditions like the hurricanes that flooded areas in the 1920s, killing a few thousand people and leading to the installation of the dikes, roads, and canals that are the problem now. 

There are lots of things that can improve the situation, but nothing that will prevent all problems, and some of the "solutions" will create new problems.  Such is life.  Most of the current issues will resolve themselves as rainfall returns to "normal", and in the meantime the flushing is probably a good thing.  In any case, most of the projects have to be on hold until the rains slow down enough to do the work. 

The other situation, water quality, is a function of incorporating 30 years of research and determining new levels of "acceptable" risk.  This is extremely interesting because 30 years of linear no threshold modeling and 1 in a million risk being used as an approximation of zero has totally screwed up lay logic. 

1 in a million in the US is different than 1 in a million in the UK, or can be.  Depending on the issue, 1 in a million might be figured over a 70 year lifetime or over one year, which is a huge difference.  If you have been using one "standard" and try to change to a new standard, political affiliation becomes a factor.  1 in a million annually is about 1 in 14,000 over a lifetime, which is still far less likely than dying while driving to work over a lifetime. 

Washington State has an issue with water quality limits and salmon.  This gets interestingly complicated because you have to estimate how much salmon someone might eat per lifetime, the bio-accumulation rate, plus the level that might cause an unacceptable risk of something happening.  Since people seem to love fad diets, a new salmon diet could be 250 grams of salmon per day, 365.25 days per year for 70 years, ignoring other fad diets that might interfere and the likelihood that an all salmon diet might not be all that healthy to begin with.  What the local Governor wanted to do is change the lifetime risk from 1 in a million to 1 in 100,000.  One in a million is like flipping 20 coins at once and all landing on heads, while 1 in 100,000 is like making it just 17 coins instead, more like 16.6 coins but who's counting. 
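A quick sanity check on those conversions, as a rough Python sketch.  The 70 year lifetime and the one-in-N figures are just the numbers quoted above, not regulatory gospel.

```python
# Convert a "1 in a million" annual risk to a lifetime risk, and express
# 1-in-N odds as the number of fair coins that would all have to land heads.
import math

annual_risk = 1e-6
lifetime_years = 70
lifetime_risk = 1 - (1 - annual_risk) ** lifetime_years   # ~7e-5, i.e. ~1 in 14,300
print(f"1e-6 per year over {lifetime_years} years ~ 1 in {1/lifetime_risk:,.0f}")

for one_in in (1_000_000, 100_000):
    coins = math.log2(one_in)   # simultaneous heads needed for a 1-in-N chance
    print(f"1 in {one_in:,} is like {coins:.1f} coins all landing heads")
# Roughly 20 coins for one in a million, about 16.6 for one in 100,000.
```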

When Mercury in tuna became an issue, it was because a couple of fad dieters ate high dollar albacore 2 to 3 times a day for a year or two and started having their hair fall out.  Albacore has about 3 times as much Mercury as "average" tuna, and tuna 3 times a day, every day, forever, is in the ballpark of 200 times more than "normal", unless you happen to be a tuna eating predator. 

Neither of the tuna fad dieters died and after 6 months their Mercury levels returned to something close to normal.  One became a devoted advocate for "proper" labeling of Mercury and destroying millions of tons of tuna that exceed the limits they think are reasonable.  In case you miss this subtle point, having a whacked out nut job establish reasonable limits is beyond bizarre. 

Now add climate change activism.  About 17% of all the anthropogenic Mercury in the environment is due to coal emissions from "average" coal fired plants.  About 60% is related to mining in general, with the majority related to "artisanal" mining, which is low tech third world practice.  So climate change activists ignore reality and use Mercury-coal for leverage to steer policy.  North America, which includes the US, is responsible for 7% of global mercury emissions, with natural causes accounting for nearly half of that 7 percent.  Don't forget that other types of mining also take place in the US, though with stricter regulation, so the actual US coal contribution might be 1% to 3% of global emissions. 

Lead mining/smelting contributes to the Mercury problem as well.  Doe Run was the last lead smelting operation in the US and had developed an electro-separation method to produce lead in a "green" manner.  Because of costs and regulatory uncertainty, Doe Run dropped its plans for a new green plant and ships lead ore to China so "artisans" can do the smelting job then ship the lead back to the US.  This "clean" solution increases Mercury pollution along with adding quite a few tons of other pollutants to the environment, but tends to satisfy "activist" environmental science fans.

Fracking is unpopular, and since fracking fluids that return from a well contain Benzene, and Benzene is a carcinogen, Linda Young, a Florida Clean Water "expert", noticed that Florida, currently with a Republican Governor, wanted to raise Benzene levels from 1.18 PPB to <2.0 PPB when revising the Florida clean water act standards.  It appears that the original data published to bash the revised standard mistakenly used <2.0 PPM, which was likely a typo, but the claim that a Republican administration had increased allowed "toxins" by 1000 times got pretty good media coverage.   Linda has an MA in Political Science and Communications, has been the "director" of the Florida Clean Water Network for over 21 years, and it was her website that broke the story.  Confusing Parts Per Million with Parts Per Billion is unfortunately fairly common.  Happens to the best of folks.  Running with an alarming story without fact checking is also fairly common.  Thinkprogress jumped all over the story, which is something "real" science advocates might want to consider.  The original 1000 times has disappeared and been replaced by a more believable but still exaggerated 3 times.

Personally, I have no problem letting wannabe science activists make a case on false data so I can humiliate them to no end.  At my age it is pretty enjoyable making geniuses look like idiots.  During this political campaign season, my little hobby seems to be becoming popular.  I cannot take credit for the fad though.  Unfortunately, it is a bit of an inside joke since many of the self-proclaimed "experts" aren't bright enough to get it, like the Food Babe for example.  See, science you can defend, but pseudoscience is a belief system, and there is a saying that "you can't fix stupid." 







Monday, August 15, 2016

Hypersensitivity - TMI all over again

The Three Mile Island nuclear accident freaked people out.  The vast majority of radiation leaked was tritium, which at the time wasn't studied all that well.  Thanks to lawsuits, lawyers for residents tested everything they could find, and government agencies had to test all the same areas again.  The biggest find was naturally occurring radium and radon gas.  No one had ever thought to check for radon gas, seriously at least, so the great Radon scare was born.  After lots of years of research, there is no evidence of any health issue related to the TMI accident; there was, however, evidence that tritium or something else may have led to a slightly lower cancer rate in the area impacted.

Now we have a new TMI-like example of searching for one thing and finding something else.  Duke Power in North Carolina had a coal fly ash dam burst, leading to some cleanup and testing of water wells in the area.  Tests showed higher than allowed levels of Chromium and higher than expected levels of Vanadium.  However, the tests did not show other metals that should have been present if the source were the coal fly ash.

Duke Energy made a convincing case for the situation and the North Carolina DEQ lifted a drinking ban on the wells and established a level of 20 PPB for Vanadium which is slightly lower than the 21 PPB recommended a bit halfheartedly by the EPA.  Halfheartedly because while Vanadium might be a problem, it also might be an essential mineral/element.  The jury appears to be still out on that point.

Chromium VI has been well known since Erin Brockovich, and there was Chromium present in the tests, but Chromium, both 3 and 6, can occur naturally in North Carolina's mineral-rich aquifers.  Vanadium is also naturally occurring, and until the early 2000s there wasn't a test available to measure Vanadium levels below 30 PPB. 

Of course the kill coal crowd, who like to use the clean water act to kill coal, are up in arms about NC "allowing" Dirty Duke to get away with murdering innocent North Carolinians with highly toxic waters.

Oregon, which has no coal resources to speak of and only one coal fired power plant, also tests ground water under the CWA, and during a 2011 survey noted levels of Vanadium in ground water just below the 30 PPB limit of previous test accuracy.  Oregon also didn't have a Vanadium limit, but noted its levels exceeded Arizona's, which does have a limit. 

Add to this the discovery of Arsenic at very low levels in water and foods, which required the development of newer, more sensitive field tests, and you have a crisis of super sensitive testing proportions.  I doubt anyone really has a clue whether any of these levels are good, bad or just ugly reality.

Precautionary types have grown fond of zero tolerance, which is fine if you can't measure all that small a concentration, but when parts per trillion are easy, just about everything is contaminated.  There is no such thing as zero to begin with, so zero tolerance is a bit of a joke anyway.

Now if I were younger and in my prime, I could test about anything you like and pick up good scare cash.  Scare and remediation is a lucrative business.  Right now the Greens are more into scare and decimate.  Not as lucrative, but you get a warm feeling.  Green activists even have a bible of sorts online explaining how to use the CWA to kill pipelines, coal, big sugar and of course big oil.  They are really doing well, with the exception of Steven Donziger, who foolishly allowed a documentary film crew to film his lying ass.

Politics aside, the new insight on just how contaminated our little planet is, is interesting.  Historically, humans with the best immune systems win, and these new higher sensitivity tests could allow some verification of linear no threshold models used with wild abandon by the green and anti set.

In any case, the local epidemiologist for NC has resigned in protest over the governor's complete disregard of public health and safety.  Fun times.

Monday, July 11, 2016

Unintended consequences

No good deed goes unpunished.  That is one of those ironic truisms I have heard from time to time.  There are plenty of examples.  The biofuel mandates and initiatives have impacted food prices, increased deforestation and reduced conservation lands, all of which are punishment for the grand plans to save the world as we know it.  In addition, carbon credits/taxes have a larger impact on the poorer population even though there is a wonderful "revenue neutral" promise for someone somewhere.  Overstating the urgency of the problem, if it has been overstated, has fueled the spiritual fires of the end of times crowd, and belittling their fanciful beliefs isn't exactly going to chill things out.  Focusing on the "greatest problem ever to confront mankind" also tends to hurt the sensitive feelings of all the other people trying to deal with their more pressing problems. 

Right now Black Lives Matter, and the bandwagon-jumping of some seriously warped fringe groups, seems to be gaining some traction.  They are demanding the problem be "fixed", and quickly.  This might be a bit more of a problem than some might suspect.

The first problem is that about 248 people who identify as black were killed by police in 2015.  More people who identify as white were killed, but those don't appear to be treated as a problem, and the percentage relative to population is higher for the black population.  Some percentage of those deaths would be "justifiable", but there isn't any way to resolve justified versus negligent to anyone's satisfaction.  "Fixing" the problem might require zero per year, which is most likely impossible.  Kind of like zero emissions by 2025.

Prior to 9/11 and the 2009 economic downturn, there was a trend toward less violence, and even with the trend change there is less violence now than in 1993.  While some point to the economy as a main cause, the seat belt ticketing campaign adds in the neighborhood of 1,000,000 to 2,000,000 police-citizen contacts per year, and since black Americans make up a disproportionate share of those contacts relative to their share of the population, it is likely that Click It or Ticket is used as a legal profiling tool.  In 2011 there were over 26,000,000 traffic stops, and in 2012 over 12 million arrests in the US.  For comparison, there were 1.3 million arrests in the UK.  The population of the UK is about 1/5th that of the US, but arrests were 1/10th of the US figure.  Even though most of the UK police force is unarmed, an average of close to 40 people per year still die after police contact.  So if the US had an unarmed police force just as gentle as the UK's, we could expect about 400 people per year to die, with roughly 25% of those being black citizens, or 100 black deaths per year under "ideal" conditions.  Comparable data on all police stops would be nice, but scaling by traffic stops instead, the result for an unarmed police force would be closer to 200 black civilian deaths per year.  There are deaths by other means, but death after contact is the best UK data for a rough estimate.
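Here is the back-of-envelope scaling as a small Python sketch.  The inputs are just the figures quoted above (UK deaths after contact, UK versus US arrest counts, the rough 25% share); none of them are re-verified here, and the stop-based ~200 figure would need contact counts that aren't quoted.

```python
# Scale the post's quoted UK "death after police contact" rate to the US by
# arrest counts.  All inputs are the paragraph's own figures, used as-is.

uk_deaths_per_year = 40      # quoted UK average
uk_arrests = 1.3e6           # quoted UK arrests
us_arrests = 12e6            # quoted US arrests (2012)
black_share = 0.25           # rough share quoted in the paragraph

us_deaths_scaled = uk_deaths_per_year * (us_arrests / uk_arrests)   # ~370 per year
print(f"US deaths if contacts were as 'gentle' as the UK's: ~{us_deaths_scaled:.0f}/yr")
print(f"of which roughly {black_share * us_deaths_scaled:.0f}/yr would be black citizens")
```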

It is pretty much impossible to get much of any reduction given the likelihood of inebriation, mental issues and just plain pissed off individuals encountering law enforcement as often as they do in the US.  The simplest "solution" is being a bit more selective about which laws really need to be enforced.  Unfortunately, the US has an official/unofficial quota system.  Arrests, catching the bad guy, are the law enforcement grading scale, a lot like publishing and citations are the rule for science.  Quality of arrests and publications isn't enough of a factor.  In any case, reducing police/civilian contacts by about 10,000,000 per year by losing Click It or Ticket, random drug and alcohol stops, and tickets for things like tail lights would save more lives in minority populations and improve general citizen-law enforcement relations.

Plenty of laws in the US tend to be noble cause related and used for other than the intended purpose, advancement instead of public service for example.  Enforcement of any law impacts the lower economic class more than the upper classes that have the time and money to navigate the judicial system.  A poor guy trying to make ends meet who gets a $116 ticket for careless disregard of his personal safety in a 25 mph zone might start feeling somewhat put upon, since that could be a day and a half of wages.  Likewise, that same poor guy might not feel much better about paying an extra $60 per month in energy costs to save the planet from global warming.

Of course it could be just the economy.  Personally, I think we have an elitist issue not a racial issue. 


Friday, July 1, 2016

Climate change "science" is so funny

Richardson et al. 2016 is a new and improved paper on the issues of modeling a "global" surface air temperature when you cannot actually measure a "global" surface air temperature.  "Surface" air temperature is somewhat measurable for about 30% of the globe, provided you don't mind considerable interpolation in the modern era and a crap load of interpolation prior to 1950.  So what Richardson et al. recognize is that land plus ocean temperatures are really apples and oranges.

The funny part is that Richardson et al. propose "scaling" the interpolated data to fit modeled data in order to imply an extra 28% of warming that cannot actually be measured.  However, NASA and other organizations spent a fairly substantial amount of money designing and deploying "platforms" to measure exactly what isn't "measurable".  Nope, this isn't a Monty Python rerun, this is climate change science.

The Reynolds Optimally Interpolated Sea Surface Temperature (Roiv2SST) product is a space-based data set that once did part of this job but was dropped from the Extended Reconstructed Sea Surface Temperature (ERSST) product because using Roiv2 SST appeared to indicate less warming "than expected".  The Microwave Sounding Unit (MSU) and advanced MSU data products for the lower troposphere are also deemed unreliable because they indicate "less warming than expected".

This new tool to find "expected warming" tends to cherry-pick data related to the definition of Climate Sensitivity, which is an increase in Global Mean "Surface" Air Temperature (GMSAT) due to a doubling of atmospheric CO2, where the "surface" isn't something tangible but a phantom of Climate Science's imagination.  Personally, I don't have a problem with an idealized metric or reference as long as the metric doesn't get adjusted constantly because of expectations.  Since global policy is based on this idealized metric, a minor adjustment of a tenth of a degree can be worth about a trillion dollars of hastily implemented policy.  Before long, we might be talking about real money.

Another metric involved in climate sensitivity is change in heat capacity, but this metric tends to take a back seat to the more popular and volatile GMSAT metric.  Since the majority of the heat capacity change is related to Ocean Heat Content (OHC) and that target doesn't move as much, it is a much better candidate for "scaling" than GMSAT.  Less volatility, i.e. variability, means less uncertainty.

A few years ago I did scale land temperatures, SST, OHC and global mean sea level to a thermodynamically important region of the oceans, the tropics, to create a scaled reconstruction of 2000 years of climate.


This has every bit as many warts and blemishes as any longer term reconstruction, with one tiny advantage: it includes OHC and Global Mean Sea Level (GMSL), which is tightly linked to OHC, meaning it relates to the complete definition of "sensitivity", change in surface temperature and change in heat capacity.  Around the same time, Rosenthal et al. 2013 produced a similar reconstruction of ocean heat capacity using that same Oppo et al. 2009 Indo-Pacific Warm Pool reconstruction, including a comparison and contrast with previous "global" temperature reconstructions that neglected that pesky OHC issue.
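For readers wondering what "scaling" one record to another actually means in practice, here is a minimal sketch: an ordinary least squares fit over a common overlap window.  The series below are synthetic stand-ins, not the actual OHC, SST or GMSL data.

```python
# Scale a record onto a reference via least squares over the overlap period.
# Both series here are synthetic placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(200)
reference = 0.3 * np.sin(years / 30.0) + rng.normal(0, 0.05, years.size)  # e.g. a tropical SST index
proxy     = 2.5 * reference + 0.7 + rng.normal(0, 0.10, years.size)       # a record in different units

# Fit proxy = a*reference + b, then invert to express the proxy in reference units.
a, b = np.polyfit(reference, proxy, 1)
proxy_scaled = (proxy - b) / a

print(f"fitted scale a = {a:.2f}, offset b = {b:.2f}")
print(f"residual std after scaling: {np.std(proxy_scaled - reference):.3f}")
```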

Real science based on discovery instead of fulfilling expectations is stealthily creeping into the debate it seems.  Now that the in-crowd seems to be approving "scaling", things might get interesting.

Update: In addition to correcting the Richardson et al. spelling, this link goes to the paper's "background".   Marvel et al. 2015 is another of the "ifs" in the paper, since there were a number of issues with the efficacy estimates in that paper.    I am not going to pay to read this paper, and I suspect there will be considerable post-publication review, which will be interesting.

Monday, June 13, 2016

Planetary Boundary Layer, Moist Air Envelope, Ocean Asymmetry

Planetary Boundary Layer, Moist Air Envelope, Ocean Asymmetry - all things I have mentioned a few times here.

As far as heat capacity goes, you have the oceans, then the moist air envelope, then ice and land.  The greenhouse effect and global warming start with dry air, which isn't on my list, and then assume that an increase/decrease in dry air temperature will have wondrous amplifying feedback on carbon forcing.  Originally this was carbon dioxide forcing, but it shifted to more generic carbon forcing, most likely because things were not going quite as planned.  In thermodynamics it is all about the heat, which depends on the energy and the energy storage capacity, which really drive the bus.


The last post I had on the planetary boundary layer emphasizes the difference between being in and out of the moist air envelope.  With moist air you have a thicker, deeper and higher heat capacity planetary boundary layer which decreases the temperature response to any forcing.  You can heat up a potato chip or crisp a lot faster than you can a whole fresh potato.  Since this particular planet has more whole potatoes in the southern hemisphere and more crisps in the northern hemisphere, they aren't going to cook uniformly.

To add to that, the thermal equator and the physical equator are different, and that damn thermal equator can move.  Right now the thermal equator, or Inter-Tropical Convergence Zone (ITCZ), is about 6 degrees north of the real equator.  Climate models often indicate there should be twin ITCZs, which is obviously wrong, and that warming should be more uniform.

To someone with a basic knowledge of thermodynamics, the models are friggin' wrong because they don't realistically consider heat capacity.  When you have an entire field of science starting with screwed up assumptions which should be obvious, you would be surprised how hard it is to get the giants in that field to listen.  These asshats, er giants, want a completely new theory most likely because their collective butts are on the line.

This completely new theory should "project" all of the things they think are relevant to a superior degree of precision, because they have over simplified a problem with poor assumptions and actually believe their model.  Since their model is obviously screwed up, they should have no reason to be so confident, but since they are humans you should expect flawed logic.

The sad thing about this situation is that in a multi-disciplinary approach you have an exponential increase in the number of Prima Donnas.  Prima Donnas are great at pointing out the flaws in others but not so great at introspection.  The normal thing to do is let these Prima Donnas fade, but a false sense of urgency screws up the whole scientific process of advancing one funeral at a time.

The moist air envelope and moist air model were my attempts to get people to focus on the more significant part of the atmospheric portion of the problem.  There is no ideal way to set the problem up, so you have to consider the more significant parts.  If it turns out that what you consider to be more significant has issues, then you have to consider other parts that appear to be significant.  Basically, quit recycling your same failing set of assumptions.

The minions of the great and powerful carbon always revert to the basic playbook the way the choir reverts to their favorite hymns.  A dynamic open system with a planetary scale will likely require thinking outside of the box.  A large number of simplified models designed to avoid assumption inbreeding could be considered outside of the box.  Regional sensitivity and how that sensitivity changes with time, outside the box.  Subsurface reference instead of lower troposphere reference, outside the box.  If you keep your head inside the box, you may never realize that the box is up some idiot's butt, possibly your own.

Take clouds.  Clouds are a regulating mechanism.  If you focus on heat capacity instead of one single, likely flawed metric, Global Mean Surface Temperature, it is pretty obvious.  You can convert all those individual temperatures that make up GMST into equivalent energy and you have a simple weighting method.  Is that energy equivalent absolutely, perfectly accurate?  No, but it is useful, especially since it is assumed that effective surface energy is directly related to effective radiant energy.  You cannot convince the choir of that though.
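A minimal sketch of that energy weighting, with made-up band temperatures and areas rather than a real GMST grid, just to show that the energy-equivalent mean is not the plain temperature mean:

```python
# Convert temperatures to equivalent radiant energy before averaging.
# The (temperature, area fraction) pairs are illustrative values only.

SIGMA = 5.67e-8  # W m-2 K-4

bands = [(300.0, 0.4), (288.0, 0.3), (270.0, 0.2), (240.0, 0.1)]  # hypothetical latitude bands

t_mean = sum(T * a for T, a in bands)              # plain area-weighted temperature mean
e_mean = sum(SIGMA * T**4 * a for T, a in bands)   # area-weighted emission
t_from_energy = (e_mean / SIGMA) ** 0.25           # temperature that would emit e_mean

print(f"area-weighted mean temperature : {t_mean:.1f} K")
print(f"area-weighted mean emission    : {e_mean:.1f} W/m^2")
print(f"temperature equivalent of that : {t_from_energy:.1f} K")
# The energy-equivalent temperature comes out higher than the plain mean, and
# the gap grows with the spread of the temperatures being averaged.
```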

Kimoto's equation, is that absolutely, perfectly accurate?  No, but energy is fungible and if you can figure out the solution to a wicked set of partial differential equations it could be.

Energy balance estimates of sensitivity, are they perfect?  No, but if you can figure out the right combination of regional energy balance estimates they could be.

As all Rednecks know there is more than one way to skin a catfish.  The best way is letting someone else do the work, it is the bones you need to worry about anyway.

End of rant

Saturday, June 11, 2016

Finally a paper on the Planetary Boundary Layer and relative heat capacity.

Differences in the efficacy of climate forcings explained by atmospheric boundary layer depths by Richard Davy and Igor Esau (Nature Communications no. 7) explains a few things I have been harping on for a while.  When you have low heat capacity you get a bigger temperature response.  Pretty simple really.  There is a bit of a disconnect between the simplified GMST and the change of actual energy in the system at the extremes of heat capacity.  Well worth a few minutes of your time to peruse, and worth more time if you are confused about cloud forcing/feedback.

I have tried explaining it with effective radiant energy and heat capacity to show how the zeroth law of thermodynamics rears its ugly head when you attempt to use an average temperature based on a range from -80 C to +50 C, combining "surface" air temperature with bulk ocean temperature, which is about as huge a range of heat capacity as you can find, but Davy and Esau have an approach that is much more likely to be acceptable in the climate science community.

Because most of the positive cloud longwave feedback occurs in low heat capacity situations and most of the negative cloud shortwave feedback occurs in high heat capacity situations, it should be pretty obvious that overall cloud feedback is most likely negative if you are concerned with increased warming in a thermodynamically relevant way.

So if you have wondered about the planetary boundary layer, aka atmospheric boundary layer, impact on estimates of sensitivity, this paper is a good start.  Unfortunately, it doesn't delve into the issue of longer term impact on "global" heat capacity.  When you have high latitude warming in winter that doesn't result in heat storage below the "surface", that warming is actually cooling if you consider Ts = lambda*RF + dQ, because dQ can be negative.  Ignoring the dQ just implies a higher sensitivity than actually exists.  Since solar in the lower latitudes has the largest impact on heat capacity, it would have a higher forcing efficacy.
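To see why the sign of dQ matters, here is a sketch using the standard energy-budget form, ECS ~ F_2x * dT / (dF - dQ), which is roughly what the shorthand above is getting at.  The dT, dF and dQ values below are illustrative placeholders, not observations.

```python
# Energy-budget sensitivity estimate with and without a heat storage term dQ
# (positive = heat going into storage, negative = the system shedding heat).
# All numbers are placeholders for illustration.

F_2x = 3.7     # W m-2 per CO2 doubling (commonly quoted value)
dT   = 0.8     # observed warming, K (illustrative)
dF   = 2.0     # forcing change, W m-2 (illustrative)

for dQ in (+0.6, 0.0, -0.6):
    ecs = F_2x * dT / (dF - dQ)
    print(f"dQ = {dQ:+.1f} W/m^2  ->  implied ECS ~ {ecs:.2f} K")
# With dQ negative (surface warming while the system loses heat), the implied
# sensitivity drops; treating dQ as zero or positive inflates it.
```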




Friday, June 10, 2016

The War for the Tropical Oceans

There is an interesting battle going on between ocean proxy groups.  Personally I have become a G. Ruber fan and am a bit skeptical of the coral fans, but both have their strengths and weaknesses.  Since both are based on biological organisms that have survival instincts, it is anyone's guess how well they would deal with extreme conditions so both likely have divergence issues.  In addition to the proxies having divergence issues, the instrumental data also has divergence issues early in the records.  

Robust global ocean cooling trend for the pre-industrial Common Era, McGregor et al. 2015 includes Delia Oppo as a coauthor and I use Oppo et al. 2009 quite a bit of the time as a reference.  While this paper is "global" and not just tropical, the tropical reconstructions carry considerable weight as they should.  They tend to consider the volcanic impacts starting around 800 CE, though the major impact should be around 1200 CE.  They also consider some ocean dynamic mechanisms,   but on time scales greater than 200 years, the volcanic forcing would be enough to reasonably explain the cooling, i.e. the Little Ice Age.

Tropical sea-surface temperatures for the past four centuries reconstructed from coral archives,  Tierney et al. 2015,  also finds pre-industrial cooling, but only up to circa 1830, and doesn't get into mechanisms.

The war really is over how much cooling there was and when it ended.  McGregor et al. are in the 1700 CE minimum camp, while the corals show a minimum about 130 years later.  Since the lowest minimum was likely in 1700, the corals could be missing a bit of the range, and Tierney et al. mention that secular trends aren't a coral strong point.

Rob Wilson has a previous tropical coral reconstruction (30S-30N) starting in 1750 that didn't indicate much pre-industrial cooling, and Emile-Geay has a Nino 3.4 coral reconstruction that indicated some of the pre-industrial cooling with a minimum around 1750.  Corals can have a 30 year recovery time frame from either a warm or cold event, which is one reason I am not a big fan.  G. Ruber proxies can be a bit on the cold side because G. Ruber is mobile, unlike corals, but their cold nature wouldn't explain the peak around 1150 CE.  Both Oppo 2009 and Emile-Geay 2012 are single location reconstructions, but the Indo-Pacific Warm Pool (Oppo 2009) has a strong correlation with global temperature and NINO 3.4 has a strong correlation with El Nino related variability.  If we are looking for a proxy for GMST, I would go with Oppo 2009.  This makes me question the practice of throwing multi-proxy reconstructions together when they have different strengths and weaknesses.


Saturday, June 4, 2016

Clouds are still cloudy

Around 80% of the reflectivity of the Earth is due to clouds, or about 77 Wm-2 depending on which energy budget you like.  If you assume albedo (reflectivity) is fixed and completely separated from the "Greenhouse Effect" by some strange magic, you are missing a huge portion of the picture.  The atmosphere also absorbs in the ballpark of 77 Wm-2 of sunlight, mainly due to clouds and water vapor.  Clouds, water vapor, convection, precipitation and albedo are all interconnected because of water in its three states.

Warmer air can hold more water, and warmer, more moist air condenses at a higher temperature.  Since temperature decreases with altitude, that would mean clouds would start forming at a lower altitude unless the lapse rate changed to offset the change in dew point.  That is unlikely, because the mechanical forcing of convection, buoyancy, becomes stronger with increased water vapor.  As buoyancy increases, so does the rate at which colder dry air falls to replace the warmer moist air.  That colder dry air, which should be colder and drier thanks to the increased GHE, would stimulate more condensation, stimulating more convection and more precipitation.

The keys to all this activity are the convective triggering mechanisms: temperature, moisture and pressure differential.  One of the odd things about this combination is the role of the saturation vapor pressure of water.  Colder dry air would have a higher pressure, inducing flow toward warmer moist air, but the warmer moist air has a higher saturation vapor pressure, meaning water vapor would tend to flow from lighter, more buoyant air to less buoyant air.  This is wonderfully counterintuitive to most folks :)  It is also just one of the mechanisms that makes modeling clouds and water vapor a serious bitch.
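To put a rough number on the "warmer air holds more water" part, here is a sketch using the Magnus/Tetens approximation to Clausius-Clapeyron.  It is only an approximation, and the real moist thermodynamics inside a cloud model is far messier, but the roughly factor-of-two increase per ~10 C is the point.

```python
# Saturation vapor pressure over water (Magnus/Tetens approximation).
# e_sat roughly doubles for every ~10 C of warming, which is why the warm,
# buoyant side of a convective boundary holds far more water vapor than the
# cold, dense side even though the cold side has the higher total pressure.
import math

def e_sat_hPa(T_c):
    """Approximate saturation vapor pressure (hPa) over water at T_c degrees C."""
    return 6.112 * math.exp(17.67 * T_c / (T_c + 243.5))

for T in (0, 10, 20, 30):
    print(f"{T:3d} C : e_sat ~ {e_sat_hPa(T):5.1f} hPa")
```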

Mechanisms for convection triggering by cold pools

 Abstract Cold pools are fundamental ingredients of deep convection. They contribute to organizing the subcloud layer and are considered key elements in triggering convective cells. It was long known that this could happen mechanically, through lifting by the cold pools’ fronts. More recently, it has been suggested that convection could also be triggered thermodynamically, by accumulation of moisture around the edges of cold pools. A method based on Lagrangian tracking is here proposed to disentangle the signatures of both forcings and quantify their importance in a given environment. Results from a simulation of radiative-convective equilibrium over the ocean show that parcels reach their level of free convection through a combination of both forcings, each being dominant at different stages of the ascent. Mechanical forcing is an important player in lifting parcels from the surface, whereas thermodynamic forcing reduces the inhibition encountered by parcels before they reach their level of free convection.

I am still trying to digest this paper but it looks like it is on the right path.

Friday, June 3, 2016

Marcott v Rosenthal

"To the extent that our reconstruction reflects high-latitude climate conditions in both hemispheres,
it differs considerably from the recent surface compilations, which suggest ~2°C MWP to LIA cooling in the 30°N to 90°N zone, whereas the 30°S to 90°S zone warmed by ~0.6°C during
the same interval (24). In contrast, our composite IWT records of water masses linked to NH and
SH water masses imply similar patterns of MWP to LIA cooling at the source regions The inferred
similarity in temperature anomalies at both hemispheres is consistent with recent evidence from
Antarctica (30), thereby supporting the idea that the HTM, MWP, and LIA were global events."

Rosenthal et al. 2015

Fig. 2. Comparison between Holocene reconstructions of surface and intermediate-water temperatures. (A) Global (red) and 30°N to 90°N (green) surface temperature anomalies, (B) 30°S to 90°S surface temperature anomalies (24), (C) changes in IWT at 500 m, and (D) changes in IWT at 600 to 900 m. All anomalies are calculated relative to the temperature at 1850 to 1880 CE. Shaded bands represent +/- 1 SD. Note the different temperature scales.


Panels A and B are from the Marcott et al. 2013 paper and include the spurious uptick caused by lack of data and their method.  btw, that +/- 1 SD should not be confused with actual uncertainty, because there are huge unknowns, but this is about as good as it gets for now.

A denizen posted the Marcott cartoon as a sort of gotcha, then asked what I thought was a better reconstruction.  I offered Rosenthal et al. 2015, even though it has issues, because there isn't a paleo reconstruction that doesn't have issues.  The Rosenthal, Oppo and Linsley crew just happened to take a polite counter opinion of Marcott et al., which carries a bit more weight than any of my ramblings.  Kind of entertaining that the denizen liked the Rosenthal paper because he thought the Rosenthal et al. reconstructions matched the Marcott et al. reconstructions :)  Hopefully that denizen will get around to reading the text.

Fig. 3. Temperature anomaly reconstructions for the Common Era relative to the modern data (note that the age scale is in Common Era years with the present on the right). (A) Change in SST from the Makassar Straits [orange, based on (26)] compared with NH temperature anomalies (27, 28). (B) Compiled IWT anomalies based on Indonesian records spanning the ~500- to 900-m water depth (for individual records, see fig. S7). The shaded band represents +/- 1 SD.

They also include a comparison of Oppo et al. 2009 with Mann and Moberg similar to comparisons I have made but with a less dramatic scale, in the left hand figure.


Depending on which product you prefer and the current state of adjustment, the IPWP correlates well with global SST, which is 70% of the GMST product.  Since HADSSTi uses less creative interpolation, it is a better match for the reconstruction in the region of the reconstruction, 3S-6S and 107E to 110E.

Since there are tons of issues in paleo to deal with, every reconstruction will need some time to resolve them, but Rosenthal and crew appear to be ahead of the learning curve.

Tuesday, May 31, 2016

More on Ekman Transport and the Shifting Westerlies

By Kaidor - Own work based on File:Earth Global Circulation.jpg from Wikipedia Prevailing Winds article.

A recent paper, Deep Old Water explains why the Antarctic hasn't warmed, is really a more specific and updated version of Toggweiler's Shifting Westerlies.  When the westerly winds shift, the temperature of the water transported by Ekman currents changes, and you have a larger climate impact than many would believe.  Sea ice extent can also change the amount of transport, by covering or exposing the colder polar waters that can be transported toward the equator by westerlies or toward the pole by easterlies at the polar circles.  The Antarctic gets most of the press because it has a huge temperature convergence that can change by more than ten degrees in less than ten degrees of latitude.  The Antarctic winds are also much more stable relative to the Arctic winds, which can be changed dramatically by the less stable northern polar vortex.

As I have mentioned before, changes in ocean circulation due to the opening of the Drake Passage and closure of the Panama gap likely resulted in about 3 C degrees of "global" cooling with the NH warming while the SH cooled.  That is a pretty large change in "global" temperature caused by the creation of the Antarctic Circumpolar Current and other changes in the ThermoHaline Circulation (THC).  Pretty neat stuff that tends to be forgotten in the CO2 done it debate.

The "believers" demand an alternate theory where "your theory is incomplete" should suffice and for some reason the simplified slab model gang cannot seem to grasp that a planet 70% covered with liquid water can have some pretty interesting circulation features with seriously long time scales.  That is really odd considering some of their slab models have known currents running opposite of reality.  One seriously kick butt ocean model is needed if anyone wants to actually model climate and that appears to be a century or so in our future if ever.

In addition to the Drake Passage and Panama gap, relatively young volcanic island chains in the Pacific can impact ocean currents along with 150 meters or so of sea level variability on glacial time scales.

In any case, it is nice to see some interesting science on the ocean circulation over longer time scales for a change.

Monday, May 30, 2016

Increase in surface temperature versus increase in CO2

All models have limits but since the believers are familiar with MODTRAN it is kind of fun to use that to mess with their heads.

Assuming that "average" mean surface temperature is meaningful and 288.2 K degrees, if you hold everything else constant but increase surface temperature to 289.2 K degrees, according to MODTRAN there will be a 3.33 Wm-2 change in OLR at 70 km looking down.

This shouldn't be a shock: 3.33 Wm-2 is the "benchmark" for a 1 C change in surface temperature, and even though the actual surface energy would have increased/decreased by about 5.45 Wm-2, the TOA change would be about 3.33 Wm-2, or about 61% of the change indicated at the surface.  While MODTRAN doesn't provide information on changes in latent and convective heat, the actual increase in surface temperature for 5.45 Wm-2 at some uniform radiant surface would be about 0.80 C degrees.  Because of latent heat, about 88 Wm-2, and convective/sensible heat, about 24 Wm-2, a 390 Wm-2 surface would actually be "effectively" a 502 Wm-2 surface below the planetary boundary layer (PBL), and the more purely radiant surface would be above the PBL at roughly 390 Wm-2.  This should not be confused with the Effective Radiant Layer (ERL), which should be emitting roughly 240 Wm-2.
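
For anyone who wants to check the arithmetic, here is a minimal sketch using nothing more than the Stefan-Boltzmann law (not MODTRAN).  It reproduces the ~5.45 Wm-2 per degree at the 288.2 K surface and the roughly 0.8 C response of the "effective" 502 Wm-2 surface; the 502 Wm-2 figure is just the 390 + 88 + 24 assumption from the paragraph above.

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def flux(T):
        """Blackbody emission (W m^-2) at temperature T (K)."""
        return SIGMA * T**4

    def dT_for_flux(dF, T):
        """Approximate temperature change for a small flux change dF at temperature T."""
        return dF / (4 * SIGMA * T**3)

    print(round(flux(289.2) - flux(288.2), 2))   # ~5.45 Wm-2 for a 1 K change at 288.2 K
    T_eff = (502.0 / SIGMA) ** 0.25              # ~306.8 K for the 502 Wm-2 "effective" surface
    print(round(dT_for_flux(5.45, T_eff), 2))    # ~0.83 C, consistent with the ~0.80 C above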

Since the energy into the system is constant, the surface temperature should increase somewhat as the "benchmark" change in OLR at the TOA increases/decreases to restore the energy balance.  That assumes there is no increase in clouds or convection associated with the increase in surface temperature, both of which should provide a negative feedback.

Since someone asked what increase I would expect for 700 ppmv CO2, this run shows a decrease of 2.36 Wm-2, which is obviously less than the "benchmark" 3.33 Wm-2 in the first run.  If you use the iconic 5.35*ln(700/400) you should get a 2.99 Wm-2 decrease, so MODTRAN indicates only about 78% of expectations.  Again, there should be some change at the surface required to restore the imbalance, but you have the latent and sensible issue at the real surface, and you have a bit of a struggle defining what "surface" is actually emitting the radiation that is interacting with the increased CO2.  You can use a number of assumptions, but twice the change at the surface is likely the maximum you could expect.
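
A quick sketch of that comparison; the only inputs are the iconic formula quoted here and the 2.36 Wm-2 MODTRAN result reported above.

    import math

    def co2_forcing(c, c0, alpha=5.35):
        """Simplified CO2 radiative forcing, RF = alpha * ln(c/c0), in W m^-2."""
        return alpha * math.log(c / c0)

    rf = co2_forcing(700, 400)
    print(round(rf, 2))          # ~2.99 Wm-2 from the simplified formula
    print(round(2.36 / rf, 2))   # ~0.79, i.e. the MODTRAN run shows roughly 78% of the formula value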

Since we are most likely concerned with the "real" surface, which should agree with the average global mean surface temperature, there will be less increase in temperature, because there will be an increase in latent and sensible heat transfer to above the PBL.

Other than the "believers" that like to quote ancient texts (papers older than 5 years), there is a shift in the "consensus" to a lower sensitivity range but no one can put their finger on why.  The simplest explanation is that someone screwed up in their hunt for the absolute worse possible case.

Whatever the case, an increase to 700 ppmv will likely produce less than a degree of warming at the real surface, though it could produce up to about 1.5 C degrees, for a range of roughly 0.75 C to 1.50 C.

Sunday, May 29, 2016

More "Real Climate" science

"When you release a slug of new CO2 into the atmosphere, dissolution in the ocean gets rid of about three quarters of it, more or less, depending on how much is released. The rest has to await neutralization by reaction with CaCO3 or igneous rocks on land and in the ocean [2-6]. These rock reactions also restore the pH of the ocean from the CO2 acid spike. My model indicates that about 7% of carbon released today will still be in the atmosphere in 100,000 years [7]. I calculate a mean lifetime, from the sum of all the processes, of about 30,000 years. That’s a deceptive number, because it is so strongly influenced by the immense longevity of that long tail. If one is forced to simplify reality into a single number for popular discussion, several hundred years is a sensible number to choose, because it tells three-quarters of the story, and the part of the story which applies to our own lifetimes."

"My model told me," this is going to take 100,000 years.  2005 was part of what I consider the peak of climate disaster sales.  While there is some debate over how long it will take top get back to "normal" and still some debate on how normal "normal" might be, the catchy "forever and ever" and numbers like 100,000 years stick with people like the Jello jingle.

To get to the 100,000 year number you pick the slowest process you can find and set everything else to "remaining equal".  This way you can "scientifically" come up with some outrageous claim for your product that is "plausible", at least as long as you make "all else remain equal".

This method was also useful for coal fired power plants.  You pick emissions for some era, then assume "all else remains equal", and you can get a motivational value to inspire political action.  Back in 2005, 10% of the US coal fired power plants produced 50% of the "harmful" emissions.  Business as usual in the coal biz was building more efficient coal plants to meet Clean Air Act standards, and since the dirtiest plants were the oldest plants, they would be replaced by the cleanest new plants.  If you replace an old 30% efficient 1950s-60s era plant with a 45% efficient "state of the art" plant, CO2 emissions per unit of electricity drop by roughly a third (fuel use scales with the inverse of efficiency), along with all of the other real pollutants that need to be scrubbed in accordance with the CAA.
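
That figure is just heat-rate arithmetic, sketched below: CO2 per unit of electricity scales with fuel burned, which scales with the inverse of efficiency.

    # CO2 per kWh scales with fuel burned per kWh, which scales with 1/efficiency.
    old_eff, new_eff = 0.30, 0.45
    reduction = 1 - old_eff / new_eff
    print(round(reduction, 2))   # ~0.33, i.e. roughly a one-third cut per unit of electricity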

By 2015, many had caught on to the sales game, so the tactic changed to "fast mitigation".  Fast mitigation is just business as usual with a new name and logo.  Because of regulations and threats of regulation, the CAA was replaced with clean house threats, so upgrades were placed on hold while the lawyers made some cash.  Between roughly 2005 and 2015, just about every technical innovation that could improve air quality while maintaining reasonable energy supply and cost was placed on hold by the study, litigate, study, regulate, litigate, study process.  Peak fossil fuel energy costs hit in the 2011 time frame, just like the "necessarily more expensive" game plan predicted.  The progressive hard line, bolstered by fantasy "science" assuming "all else remains equal", had to give way to returning to normal with new packaging, "fast mitigation."

Business as usual has never been about maintaining the status quo; it is about keeping up with the competition and surpassing it when possible.  Cleaner, more efficient, better value, more bells and whistles is business as usual.  "Fast mitigation" is just a return to business as usual.  Now if the "scientific pitchmen" will get out of the way, the future might look bright again.

Friday, May 27, 2016

Once in a while it is fun to go back through some of the realclimate.org posts just for a laugh

Once in a while it is fun to go back through some of the realclimate.org posts just for a laugh.  My favorites are the Antarctic warming, not warming, warming posts, when it is pretty obvious from the MODTRAN model of forcings that the Antarctic should be doing either nothing or cooling due to radiant physics.  It just took them a few years to figure out that any changes in the Antarctic are due to "waves", i.e. changes in ocean and atmospheric circulation.  Changes in atmospheric and ocean circulations on various time scales can be just a touch difficult to cipher; you could say they are somewhat chaotic.

Other than the extremely basic physics, there isn't much you can say one way or another and even the basic physics involve fairly large simplifications/assumptions.  "These calculations can be condensed into simplified fits to the data, such as the oft-used formula for CO2: RF = 5.35 ln(CO2/CO2_orig) (see Table 6.2 in IPCC TAR for the others). The logarithmic form comes from the fact that some particular lines are already saturated and that the increase in forcing depends on the ‘wings’ (see this post for more details). Forcings for lower concentration gases (such as CFCs) are linear in concentration. The calculations in Myhre et al use representative profiles for different latitudes, but different assumptions about clouds, their properties and the spatial heterogeneity mean that the global mean forcing is uncertain by about 10%. Thus the RF for a doubling of CO2 is likely 3.7±0.4 W/m2 – the same order of magnitude as an increase of solar forcing by 2%."

That whole paragraph is a clickable link to a 2007 post at RC.  The constant 5.35 is a curve fit to available data which tries to "fit" surface temperature forcing to that elusive Effective Radiant Layer (ERL) forcing, and it assumes a perfect black body response by both so that the temperature of one is directly related to the other.  For absolutely childlike simplification there is nothing wrong with this, but I believe we have finally, hopefully, moved beyond that stage.

Note in the quoted paragraph the "uncertain by about 10%" fairy tale.  Skewed 30% high is a better guess, and that assumes that the "surface" is actually some relevant surface on this particular planet.  Thanks to the hiatus that sometimes does and sometimes doesn't exist, someone finally noticed that the "surface" used in climate models doesn't match the "surface" being modeled for the global mean surface temperature anomaly, which doesn't have any agreed-upon temperature from which an ideal black body emission could interact with the elusive ERL.  Since another crew actually spent nearly ten years measuring CO2 forcing only to find it to be considerably less than the iconic 5.35 etc., by about 27 percent, I would expect just a bit more "back to basics" work so that the typical made-for-kindergarten simplifications provided to the masses reflect the huge expansion of collective climate knowledge.  Instead you have Bill Nye, the not so up to date science guy, peddling the same old same old, though I have to admit even he is getting some heat from the "causers".

Try to remember that the highest range of impact assumed a 3 times amplification of the basic CO2 enhancement; if the CO2 enhancement turns out to be about 30% less than expected, the amplified result would also be considerably less, "all else remaining equal."
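
The scaling is trivial, but worth spelling out as a back-of-envelope sketch, "all else remaining equal":

    amplification = 3.0     # the assumed 3x amplification of the basic CO2 enhancement
    shortfall = 0.30        # CO2 enhancement measured ~30% lower than expected
    print(amplification * (1 - shortfall))   # the effective multiplier drops from 3.0 to ~2.1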

That RC post also uses the disc-in-space input power simplification, which ignores the physical properties of the water portion of the "surface" (about 65% of the globe if you allow for critical angles) and which requires a more specific estimate of "average" power.  Once again, a fine "simplification" for the K-9 set, but not up to snuff for serious discussion.  That issue has one paper so far but will have a few more.






Wednesday, May 18, 2016

A Little More on Teleconnection

The Bates 2016/2014 issue with simplifying estimates of climate sensitivity by reducing the problem to the tropical regions has some profound impacts.  If you have followed my rants, the tropical oceans, and oceans in general, are my main focus.  The tropical oceans are like the firebox of the heat engine and the poles are a large part of the heat sink.  Space is of course the ultimate heat sink, but since there is a large amount of energy advection toward the poles, average global surface temperature depends on how well or efficiently energy moves horizontally as well as vertically.  Since the atmosphere will adjust over time to about the same outward energy flow required to meet the energy-in-equals-energy-out requirement, with some small "imbalance" that might persist for centuries, the oceans both provide that energy and are capable of storing the energy related to the "imbalance", so if you "get" the oceans you will have "gotten" most of the problem.  The correlation of the oceans with sensitivity and "imbalance" just provides an estimate of how much can be "got".

Since Bates 2016 uses a smaller "tropics" than most, 20S-20N instead of the standard ~24S-24N, I have re-plotted the correlation of the Bates tropics with the global oceans for both the new ERSSTv4 and the old standard HADSST.  Both have a correlation of ~85%, so if you use the Bates tropics as a proxy for the global oceans you should "get" about 85% of the information, with a 3% to 6% variation that depends on which time frame you choose.  If you happen to be a fan of paleo-ocean studies, you could expect up to that same correlation provided you do an excellent job building your tropical ocean reconstruction.  If you are a paleo-land fan, a perfect land temperature reconstruction would give you a correlation of only about 23% with the remaining 70% of the globe.  That is because land is "noisy" thanks to somewhat random circulation patterns.
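
For anyone who wants to reproduce that kind of number, here is a minimal sketch of the check, assuming a gridded SST anomaly file; the file name and the "sst"/"lat"/"lon" variable names are placeholders, not the actual ERSSTv4 or HADSST products used here.

    import numpy as np
    import xarray as xr

    ds = xr.open_dataset("sst_anomaly_monthly.nc")   # hypothetical gridded SST anomalies
    weights = np.cos(np.deg2rad(ds["lat"]))          # area weighting by latitude

    global_ocean = ds["sst"].weighted(weights).mean(dim=("lat", "lon"))
    tropics = ds["sst"].sel(lat=slice(-20, 20))      # assumes latitude is stored ascending
    tropics_mean = tropics.weighted(weights.sel(lat=slice(-20, 20))).mean(dim=("lat", "lon"))

    print(float(xr.corr(global_ocean, tropics_mean)))   # ~0.85 for the products discussed above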

Outgoing Long Wave Radiation (OLR) is also noisy, but thanks to interpolation methods and lots of statistical modeling, OLR is the best indication of the "imbalance".  The Bates tropical OLR using NOAA data has a 68% correlation with "global" OLR which means it is about twice as useful as the noisy land surface temperature if you are looking for a "global" proxy.

Interpolation methods used for SST and OLR will tend to enhance the "global" correlation so there is some additional uncertainty, but for government work, SST and OLR are pretty much the best of your choices.

"Believers" are adamant about using "all" the land surface temperature data, including infilling with synthetic data, if you are going to get a "reasonable" estimate of climate sensitivity with "reasonable" meaning high and highly uncertain.  Basic thermodynamics though allows the use of several references, each with some flaws, but none "useless".  If you can only get the answer you are looking for with one particular reference, you are not doing a very good job of checking your work.  Perpetual motion discovery is generally a product of not checking your work very well.  Believers demanding that certain data be included and only certain frames of reference be used is a bit like what you would expect from magicians and con artists.

In the first chart I used 1880 to 1899 as the "preindustrial" baseline, thanks to Gavin Schmidt.  There are about 4.5 billion years worth of "preindustrial", and since data accuracy suffers with time, the uncertainty in Gavin's choice of "preindustrial" is on the order of half a degree, which is about 50% of the warming.

The true master of teleconnection abuse would be Michael E. Mann.  The 1000 years of global warming plot he has produced is based primarily on tree ring and land based temperature proxies.  So even if he gets a perfect replication of past land temperatures, based on the correlation of land versus ocean instrumental data he would at most "get" about 23% correlation with the 70% of the global "surface" that is ocean.  The Oppo et al. 2009 overlay, on the other hand, could get an 85% correlation with that 70% of the global surface if they did a perfect job, so their work should be given more "weight" than Mann's.  If you can eyeball the 1880 to 1899 baseline on the chart you can see there is a full 1 C of uncertainty in what "preindustrial" should be.  In case you are wondering, the Indo-Pacific Warm Pool region has about a 75% correlation with the "global" oceans, so the IPWP isn't "perfect" but it is much better than alpine trees.

The whole object of using "teleconnections" is to find the best correlation with "global" and to use relative correlations to estimate uncertainty.  This is what Bates 2016 has done.  He limited his analysis to the region with the "best" data that represents the most energy, and he based his uncertainty range on the estimated correlation of his region of choice with "global".  There are lots of caveats, but Bates has fewer than Mann and the greater-than-3C sensitivity proponents.

So the debate will continue, but when "believers" resort to antagonistic tactics to discredit quite reasonable analysis, they should expect "hoax" claims, since they are really using con artist tactics whether they know it or not.

Update:


Even the weak 23% correlation between tropical SST and the global surface temperature is "significant" when the number of points used is large.  With a bit of smoothing to reduce noise, though, you can get an eyeball correlation, and thanks to Gavin's baseline you can see that the northern hemisphere extratropical region is the odd region out.  If you recall, Mann's "global" reconstruction was really a northern hemisphere ~20N to 90N reconstruction with a few southern hemisphere locations kind of tossed in after the fact.  That 20N-90N area is about 33% of the globe and happens to be about the noisiest 33%, thanks to lower specific heat capacity.  Understandably, people are concerned with climate change in the northern extratropical region to the point they are biased toward that region, but an energy balance model just happens to focus on energy, not real estate bias.

If you are a fan of pseudo-cycles, you probably noticed that the 20N-90N regional temperature looks a lot like the Atlantic Multi-decadal Oscillation (AMO).  There is likely some CO2 related amplification of the pseudo-cycle, along with land mass amplification due to lower specific heat capacity, but that pesky 0.5 C degree wandering would make it hard to determine what is caused by what.  For the other 67% of the globe, land and ocean included, the Bates 20S-20N tropics serves as a reasonable "proxy", "index" or "teleconnection", depending on your choice of terms.

Sunday, May 15, 2016

Does it Teleconnect or not?

While the progress of climate science blazes along at its usual snail's pace, it is pretty hard to find any real science worth discussing.  The political and "Nobel (noble) Cause" side of things has also been done to death.  Once in a while, though, something interesting pops up :)

Professor J. Ray Bates has published a new paper with lower estimates of climate sensitivity, Bates 2016, which appears to be an update of his 2014 paper that was supposedly "debunked" by Andrew "Balloons" Dessler.  My Balloons/Balloonacy/Balloonatic nicknames for Prof. Dessler are based on his valiant attempt to prove that there was tropical tropospheric warming by using the rise and drift rates of radiosonde balloons instead of the on-board temperature sensing equipment.  Andy did one remarkable job of finding what he wanted in a sea of noisy nonsense.  Andy seems to like noisy and is a bit noisy himself.

The issue raised by Balloonacy was that Bates used data produced by Lindzen and Choi 2011 (LC11), which was used to support the Lindzen "Iris" theory, basically that increased water vapor should tend to cause more surface radiant heat loss because water vapor tends to behave a bit contrary to the expectation of climate modelers.  Since LC11 was concerned only with the tropics, they used data restricted to the tropics.  Bates proposed that the tropics are a pretty good proxy for the globe, that using the tropics produced a lower estimate of climate sensitivity, and that thanks to better satellite data the uncertainty in the estimates was much smaller than for "global" estimates.

This chart of Outgoing Longwave Radiation (OLR) tends to support Bates.  The "global" OLR interpolation by NOAA has a 68% correlation with 20S-20N and an 80% correlation with 30S-30N.  If you consider that the 30S-30N band represents about 50% of the surface and about 75% of the surface energy, this "high" correlation, for climate science at least, makes perfectly good sense if you happen to be using energy balance models.  For Balloonacy models, not so much, but for energy related modeling, using the majority of the available energy is pretty standard.
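
The area claim is simple spherical geometry: the fraction of the surface between latitudes -phi and +phi is sin(phi).

    import math

    print(math.sin(math.radians(30)))   # 0.5  -> the 30S-30N band covers half the surface
    print(math.sin(math.radians(20)))   # ~0.34 -> the Bates 20S-20N tropics cover about a third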

What Bates did with his latest was use two models, Model A with the 20S-20N "tropics" and Model B with the "extra-tropical" region, everything other than 20S-20N, to try to illustrate to Balloonacy that the majority-energy regions in energy balance models tend to rule the roost.

The Balloonatics, btw, have no problem "discovering" teleconnections should the teleconnections "prove" their point, but they seem to be a bit baffled by the reasons that certain teleconnections might be more "robust" than others.

Unfortunately, data massaging (spell check recommended massacring, which might be a better choice) methods can tend to impact the validity of teleconnection correlations.  The "interpolation" required to create "global" data sets tends to smear regions, which artificially increases calculated correlations.  So no matter how much you try to determine error ranges, there is likely some amount of unknown or unknowable uncertainty that is a product of natural and man made "smoothing".  Nit picking someone else's difficulties with uncertainty after you have pushed every limit to "find" your result is a bit comical.
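
A synthetic example makes the smearing point concrete: two regions that share only a weak common signal look much more correlated once both series are smoothed, which is effectively what interpolation does.  The numbers are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    signal = 0.05 * np.cumsum(rng.normal(size=n))   # weak shared low-frequency "climate" signal
    a = signal + rng.normal(size=n)                 # region A = signal + its own noise
    b = signal + rng.normal(size=n)                 # region B = signal + independent noise

    def smooth(x, w=25):
        return np.convolve(x, np.ones(w) / w, mode="valid")

    print(np.corrcoef(a, b)[0, 1])                  # modest raw correlation
    print(np.corrcoef(smooth(a), smooth(b))[0, 1])  # noticeably higher after smoothing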

The general "follow the tropics", in particular the tropical warm pools is growing in popularity with the younger set of climate scientists just like follow the money and follow the energy are popular if you want to simply rather complex problems.  Looking for answers in the noisiest and most uncertain data is a bit like P-Hacking which is popular with the published more than 200 paper set.  Real science should take a bit of time I imagine.

In any case, Professor Bates has reaffirmed his low estimate of climate sensitivity which will either prove somewhat right or not over the next decade or so.  Stay tuned :)

Here is the link to J. Ray Bates' paper.  http://onlinelibrary.wiley.com/doi/10.1002/2015EA000154/epdf

Saturday, March 12, 2016

Uncertainty in uncertainty

I may seem to pick on Marcott et al. 2013, but that paper came out while I was playing with ocean paleo-reconstructions in the tropics.  Marcott used a number of tropical paleo reconstructions that I didn't use because they were too coarse (ultra low frequency) and had an unknown uncertainty in the samples.

The unknown uncertainty is "natural" averaging.  Most of the proxies are based on plankton of some sort, and the biological temperature proxies go through boom and bust cycles.  You can have more deposition of a particular type of plankton during either.  So if you have a core sample, say one inch thick, that spans say 300 years, one particularly good growth period could dominate the sample, or one massive die off could.  The published accuracy of the sample is based on how well a lab can count something, not on how well biological life fits a normal distribution assumption.
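
A toy example, with made up numbers, shows how that "natural" averaging biases a thick sample: if deposition is heavier in warm years, the flux-weighted mean a lab measures is warmer than the true 300-year mean.

    import numpy as np

    rng = np.random.default_rng(1)
    temps = 25 + rng.normal(0, 0.5, size=300)           # 300 years of "true" SST (C)
    deposition = np.exp(2.0 * (temps - temps.mean()))   # boom-bust: heavier deposition when warm

    true_mean = temps.mean()
    recorded = np.average(temps, weights=deposition)    # what the one-inch sample actually records
    print(round(true_mean, 2), round(recorded, 2))      # the recorded mean is biased warm by ~0.5 C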


At the time I was using Oppo et al. 2009, and Mohtadi et al. 2010 (aka Anand on the legend) was the low frequency reconstruction I didn't use.  There is a large difference between the two, especially near the end.  The lower frequency reconstructions have more uncertainty in dating, which is pretty well known, plus the natural averaging of the samples isn't known and its impact isn't discussed as far as I know.

Most of the criticisms of Marcott, Mann and other paleo reconstructors deal with just about everything other than the "effective" averaging produced "naturally" by the organisms and unintentionally by "novel" methods.

My curiosity might be misdirected, unfounded or irrelevant, especially since I am a Redneck and not a part of the "scientific" establishment, but since just about every higher resolution "cap" reconstruction diverges from lower resolution reconstructions, I am a tad stubborn about wanting a few answers.

It is a bit frustrating to me, since in Redneckville such an obvious issue would be dealt with by something profound like "What the F__ is going on!", but in climate scienceville some dweeb starts babbling about standard error when this is pretty much a non-standard situation :)

Technically, the error is only about a degree, or about equal to the estimated amount of warming during the instrumental period, and you can massage or polish all you like, but the result should always consider the absolute magnitude of the potential error if it is never specifically addressed.  In order to reduce the error you have to do more digging and less assuming.  It is very unlikely the error is "normally" distributed uniformly over the entire time frame of the reconstruction, which is part of the assumptions made to get those unbelievably tight error margins in the published literature.

Inquiring Rednecks want to know what's the deal Lucille?


Friday, March 11, 2016

How to make history disappear as if by magic

Greg Goodman has a nice post on basic issues with Ordinary Least Squares regression analysis at Dr. Curry's place.  Someone asked what impact it would have on paleo reconstructions of sea level.  While I don't have a specific sea level example, I do have a sea surface temperature example.


The yellow curve is from the Mohtadi et al. 2010 paper reconstructing temperatures of the Indo-Pacific Warm Pool, which has a sample rate of about 400 years, and the blue curve is from Oppo et al. 2009, from the same area, but with a sample rate of about 50 years.  With more samples and higher resolution you get a clearer picture.  The Mohtadi reconstruction was one of many reconstructions used in the seriously flawed Marcott et al. paper of 2013.

If you regress Oppo with respect to Mohtadi you would have oranges on the x axis and apples on the y axis.  If you just average the two, the coarser Mohtadi would smooth out the information in the finer Oppo.  Either way you end up with a flatter-than-it-should-be past history and a sudden pop, either up or down, when the influence of the coarse Mohtadi data ends.  If you pick coarse data, or make higher resolution data coarse by inappropriate or "novel" averaging, you can make the details of the past disappear, as if by magic.  Even though Oppo et al. 2009 was available to Marcott and company, it was not included in their "ground breaking" paper.
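
A toy example with synthetic data, not the actual Oppo or Mohtadi series, shows the effect: block-averaging a 50-year resolution record onto a 400-year grid smears a two-century warm event down to a fraction of its size.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(0, 2000, 50)                       # 2000 years at 50-year resolution
    fine = rng.normal(0, 0.1, size=t.size)           # background variability
    fine[(t >= 700) & (t < 900)] += 0.5              # a 200-year, 0.5 C warm event

    coarse = fine.reshape(-1, 8).mean(axis=1)        # 8 x 50 yr = 400-year "samples"
    print(round(fine.max(), 2), round(coarse.max(), 2))   # the 0.5 C event survives in the fine
                                                          # series but shrinks to ~0.1-0.2 C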

This is like the most basic of basic screw ups, so someone with a bit of knowledge would assume that ignoring the obvious has to be deceptive instead of frigging stupid, if the mistake is made by a "professional" and published in a peer reviewed journal.  Unfortunately, since nearly everyone has access to canned statistics packages, stupidly using extremely powerful statistical tools is more likely than intentional deception.

With only the options of dishonest or stupid, tact becomes a bit of an issue.  Most engineers are not known for excessive amounts of tact; they generally expect professionals to know what they are doing and to have close to anal attention to detail, so they lean toward the dishonest accusation.  Hey!  It is an honest mistake, and no one really likes being called stupid.

Back in the day, scientists had plenty of time to ponder prior to responding through snail mail or journals, so they were a lot more creative in parsing their insults.  Nowadays, time is money and profanity is more socially acceptable.  Deal with it.