If somehow the average temperature of the surface of the planet increased by 1°C – say, due to increased solar radiation – then as a result we would expect a higher flux into space. A hotter planet should radiate more. If the increase in flux were 3.3 W/m², it would indicate that there was no negative or positive feedback from this solar forcing (note 1).
Suppose the flux increased by 0. That is, the planet heated up but there was no increase in energy radiated to space. That would be positive feedback within the climate system – because there would be nothing to “rein in” the increase in temperature.
Suppose the flux increased by 5 W/m². In this case it would indicate negative feedback within the climate system.
The key value is the “benchmark” no feedback value of 3.3 W/m². If the value is above this, it’s negative feedback. If the value is below this, it’s positive feedback.
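The three cases above can be sketched as a simple classification against the benchmark. The 3.3 W/m² per degree no-feedback value is taken from the text; the function name is just an illustration:

```python
# Sketch of the feedback sign convention described above.
# The 3.3 W/m^2 per 1 C "benchmark" is the no-feedback value from the text.
BENCHMARK = 3.3  # W/m^2 of extra outgoing flux per degree C of warming

def feedback_sign(flux_increase_per_degree):
    """Classify the feedback implied by the outgoing-flux response."""
    if flux_increase_per_degree > BENCHMARK:
        return "negative feedback"   # extra radiation reins in the warming
    if flux_increase_per_degree < BENCHMARK:
        return "positive feedback"   # too little extra radiation escapes
    return "no net feedback"

# The three cases from the text:
print(feedback_sign(3.3))  # no net feedback
print(feedback_sign(0.0))  # positive feedback
print(feedback_sign(5.0))  # negative feedback
```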
Without adding mass to the atmosphere, there is no reason to expect that "benchmark" to change. But how exactly was that "benchmark" value arrived at?
For a planet to exist, energy in has to equal energy out. In simple terms, work has to equal entropy if you think of a Carnot engine that is perpetually stable. If there is more than one stage, the entropy of the first process could be used by a second process, leading to a variety of efficiency combinations. The "surface" then could not be perfectly 50% efficient (Ein = Eout) if any of the surface input energy is used by a second stage.
This leads, interestingly, to three states: less than 50% efficiency (most likely), greater than 50% efficiency (not very likely), and exactly 50% ideal efficiency (impossible). So what does this have to do with the "benchmark"?
Earth's surface doesn't have much "dry" air, and adding moisture to air actually decreases its density, since H2O is light in comparison to O2 and N2. The actual surface air density at 300 K would be a little less than the dry-air value of roughly 1.18 kg per cubic meter.
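The dry-air figure, and why moist air comes in below it, follows from the ideal gas law; this is a sketch using standard molar masses and sea-level pressure, not values from the text:

```python
# Ideal-gas sketch of why moist air is less dense than dry air at the
# same temperature and pressure (H2O is lighter than N2 and O2).
R = 8.314       # J/(mol K), universal gas constant
P = 101325.0    # Pa, sea-level pressure
T = 300.0       # K

M_DRY = 0.028964   # kg/mol, mean molar mass of dry air
M_H2O = 0.018015   # kg/mol, molar mass of water vapor

def density(molar_mass, p=P, t=T):
    """rho = p * M / (R * T) for an ideal gas."""
    return p * molar_mass / (R * t)

rho_dry = density(M_DRY)   # ~1.18 kg/m^3 for dry air at 300 K
rho_wet = density(M_H2O)   # pure water vapor, the lower bound
print(rho_dry, rho_wet)    # real humid air falls between the two
```

Real surface air is a mixture, so its density sits between the two printed values, slightly below the dry-air number as the text says.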
Using the "benchmark" value, increasing "surface" forcing by 3.7 Wm-2 would produce a 1.12 C increase in temperature, but the "benchmark" assumes zero feedback. With the "benchmark" limited by the specific heat capacity of air, any increase in surface temperature has to have a negative feedback if the surface efficiency is less than 50 percent, meaning more energy is lost to the upper atmosphere and space than is gained by the surface.
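The 1.12 C figure is just the forcing divided by the benchmark response, dT = dF / lambda:

```python
# No-feedback warming implied by the "benchmark": dT = dF / lambda.
BENCHMARK = 3.3   # W/m^2 per degree C, no-feedback value from the text
FORCING = 3.7     # W/m^2, the surface forcing used above

dT = FORCING / BENCHMARK
print(round(dT, 2))  # -> 1.12 C
```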
The other case that can exist is where more work than entropy is performed at the surface: Venus. The surface density of Venus' atmosphere is about 67 kg/m³, roughly 67 times Earth's ~1 kg/m³, and the surface pressure is about 92 bar, 92 times Earth's ~1000 millibar. With a surface temperature of 740 K, the Venusian "benchmark" would be 1.8 Wm-2/K, twice as efficient at retaining heat as Earth. So if Earth is 39% efficient, 22% worse than the 50% ideal, then Venus should be 61% efficient, 22% greater than 50%. Comparing geometric albedos, Earth's is ~37% and Venus' ~67%, with both planets having other energy inputs: geothermal, rotational, tidal, etc. Earth's is likely lower because at some time in its past it had an average surface temperature of 300 K or greater, causing loss of atmospheric mass, while Venus' denser atmosphere retained, and likely gained, atmospheric mass.
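The claimed symmetry between the Earth and Venus efficiencies can be checked with simple arithmetic; the 39% and 61% figures are the text's own estimates, measured relative to the 50% ideal:

```python
# Arithmetic behind the Earth/Venus efficiency symmetry claimed above.
# 39% and 61% are the text's figures, not standard published values.
IDEAL = 50.0   # percent, the "impossible" ideal efficiency

earth_eff = 39.0
venus_eff = 61.0

# Relative departure from the 50% ideal, in percent:
earth_dep = (IDEAL - earth_eff) / IDEAL * 100   # Earth, below the ideal
venus_dep = (venus_eff - IDEAL) / IDEAL * 100   # Venus, above the ideal
print(earth_dep, venus_dep)  # both ~22 percent: the claimed symmetry
```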
It is interesting in any case that the Earth "benchmark" sensitivity appears to be estimated for a surface temperature of ~300 K, roughly 10 K greater than the current average. Since that would require greater atmospheric mass/density, we can't go back there anymore.