Thermoeconomics: Beyond the Second Law

© Journal of Bioeconomics, 2002, 4: 57-88.

Abstract

Physicist Erwin Schrödinger’s What is Life? (1945) has inspired many subsequent efforts to explain biological evolution, especially the evolution of complex systems, in terms of the Second Law of Thermodynamics and the concepts of “entropy” and “negative entropy.” However, the problems associated with this paradigm are manifold. Some of these problems will be highlighted in the first part of this paper, and some of the theories that have been derived from it will be briefly critiqued. “Thermoeconomics”, by contrast, is based on the proposition that the role of energy in biological evolution should be defined and understood not in terms of the Second Law but in terms of such economic criteria as “productivity,” “efficiency,” and especially the costs and benefits (or “profitability”) of the various mechanisms for capturing and utilizing available energy to build biomass and do work. Thus thermoeconomics is fully consistent with the Darwinian paradigm. Furthermore, it is argued that economic criteria provide a better account of the advances (and recessions) in bioenergetic technologies than does any formulation derived from the Second Law.

Keywords: cybernetics, entropy, information, natural selection, synergy, thermodynamics

“Horse manure does not explain a horse.”  — Stephen Jay Kline

Introduction

The Second Law of Thermodynamics is one of the pillars of the physical sciences, and rightly so. It has withstood the test of time, including numerous, often ingenious efforts to find exceptions or dispute its hegemony.

In the life sciences, however, the so-called “entropy law” has had a more checkered history. The fact that energy plays a central role in living systems, and in evolution, has long been appreciated. The centerpiece of Jean Baptiste de Lamarck’s early 19th-century evolutionary theory was what he called the “power of life.” In the late 19th century, Herbert Spencer elaborated on this theme with his grandiose “universal law of evolution.” According to Spencer, energy was the driver of an inherent evolutionary trend toward increased complexity, both in nature and in human societies. Physicist Ludwig Boltzmann (1909) and physical chemist Alfred Lotka (1922, 1945) also defined evolutionary progress in energetic terms, and many latter-day theorists have followed suit. But it was the physicist Erwin Schrödinger, in his legendary book What is Life? (1945), who catalyzed the modern approach to thermodynamics and evolution. Schrödinger characterized a living system as being, quintessentially, an embodiment of thermodynamic order — what he termed “negative entropy.” Whereas the Second Law posits a general tendency toward energy dissipation and maximum disorder in nature (entropy), Schrödinger asserted that living systems are able to elude this fate by “extracting order” from their environments. He also spoke of “sucking orderliness” from the natural world. Although it is often assumed that organisms feed upon energy, Schrödinger declared, this is “absurd…What an organism feeds upon is negative entropy” (1945:72).

Though there are serious problems with Schrödinger’s “paradigm” (see below), it has nevertheless enjoyed an immense influence over the years. Among other things, Schrödinger inspired many subsequent efforts to explain the evolutionary process, and especially the evolution of complexity, in terms of various interpretations of the laws of physics, including the Second Law in particular. There are, needless to say, major differences among these theories, but the common theme is the claim that biological evolution has been “driven” by forces, or propensities, or tendencies that are inherent in nature, as opposed to the workings of natural selection (which some members of this school characterize as an “uninvited guest”). Sometimes energy, or some form of “information”, or both, are said to be the keys to how living systems are able to transcend the entropy law, but at other times the entropy law itself is identified as the primary causal agency.1

A Plethora of “Laws”

Thus, physicist Ilya Prigogine et al. (1972a,b), emulating his unacknowledged predecessor (Herbert Spencer), claimed to have discovered a “universal law of evolution.” His theory is based on treating living organisms as, in essence, “dissipative structures” that evolve via increased energy flows and successive perturbations, or “bifurcations”. Biologists Daniel Brooks and E.O. Wiley (1988: xi-xiv) have also laid claim to a “natural law of history.” However, in contrast with Prigogine, their “core hypothesis” is that “biological evolution is an entropic process….Increasing complexity and self-organization [arise] as a result of, not at the expense of, increasing entropy.” (Their much-criticized theory, it turns out, relies on a new conception of informational entropy.)2 Rod Swenson (1989:187) touts what he calls the “law of maximum entropy production,” which he says forms “the cornerstone to a theory of general evolution within which biological and cultural evolution are special cases.” Biologist Jeffrey Wicken (1987, 1988:152-3) characterizes entropy as a “teleomatic drive” toward disorder that underlies biological variation and gives direction to evolutionary change. “Speciation is driven by the randomizing directives of the second law,” he tells us (p.144). On the other hand, Wicken also claims that free energy fueled the prebiotic phase of evolution with “an inexorable determinism.” (Wicken also made a commendable but nonetheless problematical attempt to incorporate information into his paradigm.)3

In a similar vein, biophysicist Harold Morowitz, in one of his early works (1968), proposed that the evolutionary process was the necessary result of “the constant pumping” of energy, mainly from the sun (p. 146). “The flow of energy through the system acts to organize that system…Biological phenomena are ultimately consequences of the laws of physics” (p. 2). More recently, Eric Schneider and James Kay (1994, 1995), citing Morowitz as a progenitor, advance what they describe as a “Unified Principle of Thermodynamics.” They tell us that “life emerges because thermodynamics mandates order from disorder whenever sufficient thermodynamic gradients and environmental conditions exist” (p. 171).4

There have also been several different claims to have discovered a new “fourth law of thermodynamics.” Morowitz’s suggestion that energy flows are autocatalytic and serve to organize a system is often referred to as a new law of physics — although Morowitz himself demurs from this view and advances a more complex paradigm (see below). Economist Nicholas Georgescu-Roegen (1977a,b,c, 1979) also formulated a “fourth law of thermodynamics,” which he asserted governs economic life. Calling the entropy law the “taproot” of economic scarcity (1979:1041), Georgescu-Roegen posited that, in a closed system like a human society, “material entropy must ultimately reach a maximum” (1977a:269). There is no way to escape it, he argued, and economies must work within this cosmic constraint.

Finally, biologist Stuart Kauffman, whose popular books have influenced a wide audience, has unabashedly promoted his own “fourth law of thermodynamics” — an inherent tendency for the biosphere to become increasingly diverse and complex, or so he suspects (Kauffman 2000: xi). Kauffman and others also regularly invoke “self-organization” and “autocatalysis” as inherent ordering influences in evolution. Indeed, in his earlier 1995 book Kauffman speculated that “laws of complexity spontaneously generate much of the order of the natural world….Order, vast and generative, arises naturally” (pp. 8, 25). He called it a “deep theory” of self-generated evolution. (Of course, this theoretical claim was only a promissory note; it has yet to be fulfilled.)

Problems with the Thermodynamics Paradigm

There can be no doubt that many autocatalytic and self-ordering processes do exist in nature, but there are serious, indeed fatal, problems associated with elevating these local influences into a general law (or laws) governing the overall trajectory of the evolutionary process. The flaws associated with what could loosely be called the “thermodynamics paradigm” were discussed in some detail in Corning and Kline (1998a,b). (The late Stephen Jay Kline, Woodard Professor of Science, Technology and Society and of Mechanical Engineering Emeritus at Stanford University, was an expert in thermodynamics who taught the subject for many years.)

In brief, many of these Second Law theorists seriously misinterpret and thus misuse the concept of entropy; others utilize deficient concepts of “information” that cannot be operationalized; many blur the crucial distinction between statistical or structural forms of “order”, on the one hand, and evolved, goal-directed functional “organization”, on the other; not least, they have been misled by some of the very “gods” of physics into conflating energetic order/disorder and physical order, which in many cases is not correct (see below). But most serious of all, these theorists for the most part discount the “ground-zero” premise of the biological sciences (at least since Darwin) that life is a contingent phenomenon and that survival and reproduction constitute the “paradigmatic problem” for all living organisms. Life is quintessentially a “survival enterprise,” the parameters of which are locally defined by the nature of the organism and its specific environment, and the precise organism-environment relationship is a key determining factor in the ongoing evolutionary process.

Let me provide some necessarily abbreviated specifics to support these rather serious criticisms (see also Corning and Kline 1998a,b). We should start with the “founding father” of this paradigm, Erwin Schrödinger (1945). Recall his claim that organisms do not feed upon energy; they feed upon “negative entropy.” In other words, what matters most in living systems is their ability to resist the cosmic determinism of the Second Law and to create local conditions of increased thermodynamic order. Schrödinger then proceeded to define negentropy not in any independent, phenomenological way but in mathematical terms as the reciprocal of Ludwig Boltzmann’s expression for entropy. A crucial corollary of this formulation, which has echoed down through the years as received wisdom, is the proposition that living systems do not thereby violate the Second Law because they must “pay” for their increased order (negentropy) by producing an “equivalent” amount of entropy in the environment as compensation.
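For reference, the definition Schrödinger actually gave can be written compactly. He took Boltzmann’s statistical expression for entropy,

    S = k \log D,

where D is a quantitative measure of molecular disorder and k is Boltzmann’s constant, and simply reversed its sign, so that his “negative entropy” becomes

    -S = k \log (1/D),

a measure of order defined only as the mirror image of the statistical disorder measure, not as an independently observable quantity.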

Schrödinger’s poetic metaphor is seductive. It has been quoted on innumerable occasions over the years. But, in fact, it too is “absurd” (to borrow Schrödinger’s term). In the first place, it reduces the complexities of living systems to a monolithic thermodynamic process and conflates thermodynamic order with functional “organization” — purposive designs for adaptation in a great variety of specific environments, including a number of different energy regimes and levels of organization. Metabolism is only one aspect of the many-sided problem of earning a living in the natural world. In effect, Schrödinger truncated these challenges into a single parameter, thus distorting the very nature of the evolutionary process.

Schrödinger’s vision also caricatures the energetics of living systems, which have developed ingenious and highly efficient (i.e., profitable) mechanisms for capturing or harvesting “available energy” in various forms and then using it for various purposes, from doing useful work to building biomass (more on this below).5 Contrary to Schrödinger’s assertion, it is more accurate to say that organisms feed upon available energy and create thermodynamic, structural and functional order than to say that they feed upon order (cf., Morowitz 1968:19; Perutz 1987).

But most serious of all, Schrödinger’s basic hypothesis is untestable, since his definitions of entropy and negative entropy are circular and have no empirical referents. (Negative entropy means, literally, an absence of an absence of order — in other words, order, or available energy.) Yet we have no idea how to go about measuring either entropy or negentropy in a living system. It is not at all like measuring the temperature gradient of the gas molecules in a defined system. As we will explain later on, there is also reason to question Schrödinger’s assertion that the process of biological evolution has been accompanied by an equivalent increase of entropy in the environment.

Prigogine’s Paradigm

Physicist Ilya Prigogine’s vision is similar to Schrödinger’s in that it too characterizes living systems in thermodynamic terms as far-from-equilibrium “dissipative structures” that feed on energy (Prigogine 1978; Prigogine et al., 1972a,b, 1977; Nicolis and Prigogine 1977, 1989). According to Prigogine, “order” — which he does not explicitly define but which he utilizes both in thermodynamic/process and in structural terms (a disturbing ambiguity) — evolves spontaneously in “open” systems via continuous energetic inputs. These inputs may lead to structural instabilities, which may in turn produce perturbations, or “fluctuations” in the direction of greater “complexity” (also not precisely defined). Prigogine refers to this causal dynamic as the principle of “order through fluctuations” (1972a), and he characterizes a living system as “a giant fluctuation stabilized by exchanges of matter and energy” (Prigogine et al., 1977:38). As Prigogine says, this is an autocatalytic theory of evolution; the vicissitudes of the natural environment are nowhere in evidence.

Another problem is that Prigogine makes no distinction between “order” and functional “organization”. In fact, he uses the terms interchangeably. Thus, he sees no difficulty in applying the same explanatory principle both to the formation of convection cells (Bénard cells) in a pan of heated water and to the complex control mechanisms associated with glycolysis (a precisely ordered sequence of enzymatically catalyzed steps, including multiple exchanges of energy) or the highly coordinated, information-driven functional transformations that occur over time in a colony of the cellular slime mold Dictyostelium discoideum (Prigogine et al., 1972a: 27-28; also 1977: 32, 34). This is a theory that seriously overreaches.

A Theoretical Segue

Both Schrödinger and Prigogine also helped to promote an expansive definition of the entropy law that, we maintain, is unwarranted and significantly overstates the role of entropy in the natural world. Some of the confusion associated with the use of thermodynamics in evolutionary theory is the result of a major theoretical segue that occurred with the development of statistical mechanics in the latter 19th century. When the physicist Rudolf Clausius first named and formalized the concept of entropy, he defined it in strictly phenomenological terms. In an ordered state, energy is aggregated in such a way that it has the potential for doing useful work. Accordingly, the concept of entropy (or thermodynamic disorder) was proposed by Clausius as a measure of the degree of energetic dispersal or dissipation and, consequently, of its unavailability to do work. In this formulation, a state of maximum entropy corresponds to a complete state of energetic disorder which, paradoxically, also represents an equilibrium condition.

Although this version of the entropy concept has had many practical applications over the years, it also suffers from a serious limitation. The material world is in fact ordered in a multi-leveled and hierarchical manner, but Clausius’s concept was focused only on energetic order/disorder at the “macroscopic” level. This shortcoming was rectified when physicists Ludwig Boltzmann (1909) and J. Willard Gibbs (1906), inspired by the pioneering work of James Clerk Maxwell in kinetic theory, independently addressed the relationship between the energetics of the macroscopic and microscopic levels — say a body of gas in a container versus the dynamics of its constituent atoms. The dilemma is that the behavior of a thermodynamic “microstate” cannot be precisely predicted, for two reasons: first, because there is an inherent degree of stochastic fluctuation (indeterminacy) at that level and, second, because a human observer cannot know precisely the initial microstate of the system or make the necessary observations.

Accordingly, Boltzmann and Gibbs deployed statistical techniques resembling those that were developed for games of chance to describe in probabilistic terms the degree of energetic order/disorder at the microstate level, with the presumption that, within certain important constraints and limitations, these microstate statistics would correlate with the properties of the “macrostate” as well. Gibbs pointedly called his statistical formalizations “entropy analogues” to emphasize the fact that they were mathematical approximations only, not direct measures of the real thing. Today, physicists and engineers often distinguish between “classical entropy” and “statistical entropy” for much the same reason. (Later formulations, reflecting the development of quantum theory, added yet another level of microstate indeterminacy to the measurement of thermodynamic order and disorder.)
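For readers who want the distinction in compact form, the two conceptions can be set side by side (these are the standard textbook expressions, supplied here for reference). Clausius’s phenomenological definition is stated in terms of measurable heat flows,

    dS = \delta Q_{rev} / T,

whereas the statistical “entropy analogues” of Boltzmann and Gibbs are

    S = k_B \ln W  \qquad and \qquad  S = -k_B \sum_i p_i \ln p_i,

where W is the number of microstates consistent with a given macrostate and the p_i are the probabilities of the individual microstates.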

In any case, thermodynamic entropy as defined by these pioneers is a “state” function, comparable to temperature or pressure. Entropy in this sense is not a “thing” or a “force.” It is a property of the material world with the peculiar attribute that it is designed to measure the relative absence of something, namely, energetic order. When the entropy of a medium increases, its work potential decreases — which is why, somewhat confusingly, entropy equations relating to work potential typically carry a negative sign.

The problem arose when some leading theorists assumed that there is an isomorphism between statistical order, energetic order, and physical order. As a consequence, subsequent generations of physicists and laymen alike have often uncritically accepted the claim that the entropy law applies to everything in the universe. Thus, biologist Ludwig von Bertalanffy (1952[1949]) wrote: “according to the Second Law of Thermodynamics, the general direction of physical events is toward decrease of order and organization.” Likewise, biologists Brooks and Wiley (1988:36) speak of a general physical law which “predicts that entropy will increase during any real series of processes.” Georgescu-Roegen (1971, 1979:1039) assured us that “matter matters too” — the material world is also subject to the Second Law. Physicist David Layzer (1988:23) asserts that “all natural processes generate entropy.” Economist Malte Faber (1985: 317) tells us that “thermodynamics is that branch of physics which deals with systems of great numbers of particles.”

More surprisingly, physicist Stephen Hawking (1988:102) speaks of “a physical quantity called entropy, which measures the degree of disorder of a system. It is a matter of common experience that disorder will tend to increase if things are left to themselves. (One has only to stop making repairs around the house to see that!)” Similarly, physicist Roger Penrose (1989:308) informs us that “the entropy of a system is a measure of its manifest disorder [his italics]…Thus, [a] smashed glass and spilled water on the floor is in a higher entropy state than an assembled and filled glass on the table; the scrambled egg has a higher entropy than the fresh unbroken egg; the sweetened coffee has a higher entropy than the undissolved sugar lump in unsweetened coffee.” It follows, then, that “the second law of thermodynamics asserts that the entropy of an isolated system increases with time” (p. 309). Penrose goes on to associate the Second Law specifically with the “relentless and universal principle” that organization is continually breaking down.

Is the Earth Dissipating?

One problem with this formulation is that we know of no evidence for the assertion that the material world has an inherent tendency to dissipate. If this were the case, presumably somebody by now would have calculated the depreciation rate for the Earth as it progressively deteriorated. Though stars burn out and aggregates of individual gas molecules may readily dissipate, the stable molecular bonds that hold solid chunks of matter together do not for the most part spontaneously break down.

Another problem is that energetic and physical order are not always isomorphic. A case in point is a volume of water molecules that becomes increasingly “disordered” as energy inputs convert ice crystals to running water and then steam. This is a case where energy inputs result in a progressively increasing physical disorder! (This crucial point can also be illustrated with a thought experiment. Two equally heated crystals, one in lattice form and the other in a disordered pile of shards, nevertheless could in theory produce exactly the same work output in appropriate conditions.)

In fact, much of the physical disorder we experience is actually energy-driven! Take Hawking’s decaying house metaphor. This is not an example of an inherent entropic trend but of the effects of gravity, wind, weather, solar radiation, oxidation, human use and termites, among other things. Likewise, in Penrose’s examples, it is the joint action of gravity and a solid surface, not entropy, that is responsible for breaking the water glass. Energy inputs are also needed to scramble the egg, and well-understood physical processes (including the stirring actions of the coffee-drinker) are responsible for dispersing sugar cubes. (I can testify to the lack of entropy when I fail to stir my coffee!)

Equally dubious is the claim that the general trend in the universe is toward increased entropy. Indeed, entropy has often been portrayed as a dark force which somehow governs the fate of our species and dooms our progeny to oblivion — in the eventual “heat death” of the universe. The practice of making such cosmic claims for entropy dates back to Clausius. In his classic text, Abhandlungen über die mechanische Wärmetheorie (1864), Clausius wrote: “The energy of the universe is constant; the entropy of the universe tends towards a maximum” (quoted in Harold 1986). Clausius also coined the term “heat death” (Wärmetod).

This dour vision has long since become the conventional wisdom of the western scientific establishment. Over the course of the past 130-odd years, it has been echoed by countless other theorists (see, for instance, Lotka 1922; Bridgman 1941; Schrödinger 1945; Shannon and Weaver 1949; von Bertalanffy 1952; Koestler 1967; Morowitz 1968; Lehninger 1971; Georgescu-Roegen 1971; Miller 1995[1978]; Riedl 1978; Wicken 1987; Weber et al., 1988.)   However, there is reason to doubt the conventional wisdom. In a nutshell, the heat death scenario overlooks the role of gravity. Alongside the well-documented trend toward increased entropy in the universe, new “free” energy is being aggregated as we speak in the ongoing process of star formation and stellar nucleosynthesis. These energy-ordering processes are “driven” by the non-entropic influence of gravity, in utter contradiction to the Second Law!

As physicist Freeman Dyson (1971) explained it: “…in the universe the predominant form of energy is gravitational…gravitational energy is not only predominant in quantity but also in quality; gravitation carries no entropy…[Moreover] in the universe as a whole the main theme of energy flow is the gravitational contraction of massive objects, the gravitational energy released in contraction being converted to energy in the form of motion, light and heat.” In other words, even as the existing “stock” of available energy in the universe is being dissipated, more is being created by the great engine of “negentropy” in the universe, gravity. Physicist F.A. Hopf (1988:265) observed that the conventional wisdom about entropy in cosmic evolution might be “an artifact of our ignorance about how to handle thermodynamics when gravity is important.”

It should also be pointed out that a portion of the available energy that is mobilized by gravity and emitted from our sun does “work” of various kinds on Earth and ends up being “trapped” and embodied in matter and living systems.   Some of it also gets recycled and re-used in various ways. So it is not entirely lost to entropy. To be sure, the vast majority of the energy that bombards the Earth and the many billions of other celestial objects is ultimately dissipated. But this would have happened in any case; living systems do not in any way “increase” the overall energetic entropy of the universe. Indeed, some of that entropic energy is positively beneficial; it warms our planet and in other ways makes our environment hospitable to life.6

Forget Entropy!

A corollary assumption of the heat death scenario, and one of the pillars of modern physics, is that dissipated available energy ultimately goes to “equilibrium” (i.e., maximum entropy) in the vacuum of space and forms part of the residue of “background radiation” that is suffused throughout the universe. The problem with this scenario, it seems increasingly evident, is that the vacuum is not a vacuum. Rather, we simply cannot detect and measure what is going on out there. It has been a major embarrassment to cosmology for some years that approximately 95% of the predicted mass of the universe is missing and unaccounted for. Various theorists have struggled with this and other important paradoxes (such as quantum entanglement and quantum non-locality). For instance, Haisch, Rueda and Puthoff (1994) and others, have developed what they call the “zero-point field” theory, which posits an undetected omnidirectional field that, among other things, can account in a new way for how inertia and gravity work; they are effects produced by the field.

More recently, in light of the growing evidence that the universe is expanding at an accelerating rate, some cosmologists have revived Albert Einstein’s postulate of a “cosmological constant” in the form of undetected “dark energy” that may be driving the dynamics of the universe in ways that are not yet understood.   In either case, the available energy that is being created and dissipated in the part of the universe we can detect may be vastly outweighed by the energy we can’t detect. Though it is pure speculation at this point, it could be that the energy we define as entropic is not being dissipated at all. Instead, it is being absorbed back into the vast energy pool in which we are embedded. In any case, we are far less certain than we were only a few years ago about either the dynamics of the universe or its ultimate fate. But of one thing we can be reasonably certain. Entropy will have little to do with it.

More to the point, it is evident that entropy has had relatively little to do with biological evolution. To repeat, entropy is a state function like temperature or pressure; it cannot be equated to a “drive” or a “force” any more than temperature can be equated with energy. Entropy represents a constraint on thermodynamic processes, not a cause of them; it measures the energetic “wastes” associated with any real-world dynamic process. It’s a cost of doing business in the biosphere. To cite one of Steve Kline’s favorite sayings, a focus on entropy as a way of trying to understand a living system is analogous to trying to understand a horse by studying horse manure.

Thermoeconomics is the Alternative

Contrary to Schrödinger’s formulation, we believe that it is more accurate to say that living organisms feed upon available energy to create thermodynamic (energetic) order, as well as structural and functional organization, rather than saying that they feed upon a statistical measure called “order”. Furthermore, we believe that energetic order, physical order and biological organization are not equivalent to one another. But most important, we believe that the role of energy in evolution can best be defined and understood in economic terms. By this we mean that living systems do not simply absorb and utilize available energy without cost. They must “capture” the energy required to build biomass and do work; they must invest energy in development, maintenance, reproduction and further evolution. To put it baldly, life is a contingent and labor-intensive activity, and the energetic benefits must outweigh the costs (inclusive of entropy) if the system is to survive. Indeed, energetic “profitability” is essential to growth and reproduction. It could be called the “First Law of Thermoeconomics.”
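This core proposition can be stated as a rough energy-budget inequality (offered here only as an illustrative shorthand, not as a formula drawn from the cited literature). Over any sustained interval, a living system persists only if

    E_{acquired} > E_{acquisition} + E_{maintenance} + E_{reproduction},

where each term on the right includes its share of unavoidable entropic losses (energy degraded to waste heat); the surplus, if any, is the “profit” available for growth, reproduction beyond replacement, and further evolution.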

Accordingly, there are three “ground-zero” assumptions that provide the conceptual framework for thermoeconomics: (1) life is a contingent phenomenon, and “adaptation” to specific, varying environmental conditions and constraints is an ongoing challenge for all living systems; (2) functional variation is endemic in nature and any form of biological order (or organization) is always subject to stringent testing and “editing” by natural selection; and (3) living systems are by their very nature “purposive” (cybernetic) in character, and their adaptation and evolution over time have been shaped in part by functional “control information.”7 This contrasts with the thermodynamics paradigm, which allows (and even invites) externally driven, deterministic models of living organisms, with attendant “laws” of evolution. Many of these theorists (not all by any means) assume that available energy is a free good that can simply be poured into a living system and that the environment presents at most only limited “constraints”. In contrast, the thermoeconomic perspective is fundamentally Darwinian in that it assumes that the “struggle for existence” (in Darwin’s pellucid phrase) is a process in which living systems must unfailingly earn a living in the “economy of nature” (a term Darwin frequently employed, following Linnaeus). In this paradigm, there is no “order for free,” as Stuart Kauffman would have it; all forms of order must also have a Darwinian “seal of approval.”

An Illustration: Maxwell’s Demon

One way of illustrating this paradigm shift is by revisiting perhaps the most famous of all “thought experiments” — namely, “Maxwell’s demon.” In his classic text, Theory of Heat (1871), physicist James Clerk Maxwell proposed a means by which, supposedly, the Second Law might be violated. Maxwell conjured up a fanciful “being” that would be stationed at a wall between two enclosed volumes of gases at equal temperatures. (The term “demon” was actually coined by a contemporary colleague, William Thomson.) The demon would then selectively open and close a microscopic trap door in the wall in such a way as to be able to sort out the mixture of fast and slow gas molecules between the two chambers. In this manner, Maxwell suggested, a temperature differential would be created that could be used to do work, thereby reversing an otherwise irreversible increase in thermodynamic entropy.

We suspect that Maxwell never thought his successors would take his demon very seriously, but many have. This is why, in the late 1920s, physicist Leo Szilard was compelled to argue, in a professional journal, that the energetic costs associated with the demon’s efforts (he focused on the gathering of “information”) would cancel out any gains from the sorting process; the demon had to be part of the thermodynamic accounting.8 Then, in 1949, Leon Brillouin added the argument that, in order to be able to “see” the molecules, the demon would also need illumination. Following Szilard’s lead, Brillouin (1949, 1968[1950]) stressed that the “information” required to do the sorting involved an offsetting (entropic) cost.

Many other theorists since the 1940s have made similar arguments (see especially the papers collected by Leff and Rex, 1990), but the demon refuses to die. For instance, physicist David Layzer (1988) revived the issue with the proposal that the demon could be replaced by “a tiny robot” that would be “programmed” with information about the positions and velocities of all the gas molecules after an “initial moment.” This would allow the trap door to be opened and closed automatically. Of course, Layzer conceded, “such a calculation would need to be based on an immense quantity of data…but that is all right in a thought experiment.” No, it is not all right. One cannot arbitrarily set aside the constraints of the real world and then claim to have found a way to violate the Second Law. Layzer’s argument fails if the vast energetic cost of designing, building and operating the robot, and of acquiring the necessary information, is included. Furthermore, as shown in Kline (1997), the very notion that it could ever become possible to track and sort individual molecules in a volume of gas is scientifically and technically “wildly unfeasible.”9
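Szilard’s bookkeeping point can be made concrete with his one-molecule version of the demon. The sketch below is an illustration constructed here, not drawn from any of the cited papers; it tallies the maximum work obtainable from one bit of positional information against the minimum cost, identified in later analyses (often called the Landauer bound), of resetting the demon’s one-bit memory, plus a token allowance for building and operating the apparatus. The balance never comes out positive.

    # Szilard-engine bookkeeping: a minimal illustrative sketch.
    # One molecule in a box, a movable partition, one binary measurement per cycle.
    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # assumed operating temperature, K

    # Maximum work extractable per cycle from one bit of positional information
    # (isothermal expansion of the one-molecule "gas" from half the box to the full box).
    w_max = k_B * T * math.log(2)

    # Minimum dissipation required to erase (reset) the demon's one-bit memory per cycle.
    erase_cost = k_B * T * math.log(2)

    # Any real apparatus also carries construction and operating costs; represented
    # here by a token positive overhead (the exact value is immaterial to the sign).
    overhead = 1e-22     # joules per cycle, purely illustrative

    net = w_max - erase_cost - overhead
    print(f"maximum work per cycle: {w_max:.3e} J")
    print(f"erasure cost (minimum): {erase_cost:.3e} J")
    print(f"net 'profit':           {net:.3e} J")  # zero at best before overhead, negative with it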

Another problem with Maxwell’s paradigm, mostly overlooked, is that the demon would be attempting to derive work from a thermal gradient in a control mass with a fixed energy content (an isolated system). If, for example, the two volumes were hooked up to a heat engine coupled to a means for “recapturing” the energy from the work output, the scheme would be thwarted by the Kelvin-Planck dictum, which states, in effect, that you cannot create a perpetual motion machine; the output would not be completely reversible. So Maxwell’s classic model, even with the assistance of modern technology, is not a paradigm for progress.

A Cyanobacterium in Sunlight

The fundamental problem with Maxwell’s demon is that it was not really an experiment in thermodynamics but a surreptitious — unacknowledged — experiment in biology, and cybernetics, and thermoeconomics. Maxwell himself can be blamed in part for creating this muddle. In the famous and much-quoted passage from his book about his imaginary creature, Maxwell wrote that the Second Law is true “as long as we can deal with bodies only in mass, and have no power of perceiving or handling the separate molecules of which they are made up. But if we conceive a being whose faculties are so sharpened that he can follow every molecule in its course, such a being…would be able to do what is at present impossible to us” (quoted in Leff and Rex 1990:4).

Setting aside the egregious implication that such a perceptual feat — tracking every molecule in a volume of gas — might ever become possible, Maxwell then proceeded to make a serious conceptual error. He claimed that his hypothetical creature could “without the expenditure of work” create an energetic differential in a divided vessel. What, no work?   This assertion effectively removed the demon at a stroke from the realm of realism. Of course, Maxwell was only using his metaphor as an illustration of the fact that “statistical methods” are important to micro-level thermodynamic analyses. He did not pose it as a serious theoretical problem. Unfortunately, many of his successors have taken it seriously. Leff and Rex (1990) provide an annotated bibliography with some 250 references, many of which are concerned either with exorcising or resurrecting the demon.

Beginning with Leo Szilard’s famous 1929 paper, Maxwell’s thought experiment was redefined in such a way that it forced physicists to include the costs of the demon, especially the informational costs, in the thermodynamic bookkeeping, rather than treating them as “externalities”. This in itself was a major contribution, whatever may have been the ultimate flaws in Szilard’s argument (see the critique in Corning and Kline 1998b, Appendix A). In addition, there were the (usually) overlooked “economic” costs associated with designing, building and operating the demon (a recent example is Bennett 2000[1988]).10   As an increasing degree of realism was introduced into the debate, along with various doomed attempts to add technological improvements to the demon, the physics community ultimately converted the experiment into a problem in information theory and, lately, into a pedagogical tool in introductory physics courses.

The ultimate failure of physicists to design a “feasible” Maxwell’s Demon highlights the fundamental problem associated with defining the evolutionary process in purely thermodynamic terms. Maxwell’s Demon shows us, inadvertently, why it cannot be done. In a nutshell, there is no way to operate the demon at a profit. Contrary to the claims of many physicists and biophysicists over the years, the evolution of living systems can best be “explained” not in terms of the laws of physics (or the concepts of entropy and negentropy) but in terms of “thermoeconomics”. The laws of thermodynamics describe underlying physical conditions and constraints with which bioenergetic and human-made technological systems must cope, but they do not encompass or explain the “informed,” purposive actions of a cybernetic control system like Maxwell’s demon. In living systems (and, by extension, in human technology), the locus of causation is not confined to the energetics; it is crucially dependent also on the information-based actions of “purposeful” biophysical structures and processes; in order for living systems to function, work must be done to acquire and make use of available energy, which necessarily entails “extraction” or “production” costs and cybernetic control activity.

In effect, the structures and activities associated with the capture and utilization of energy for purposeful work introduce a new set of “bioeconomic” and cybernetic criteria into thermodynamic processes. This suggests the need for such familiar economic concepts as capital investments, operating costs, efficiency, even amortization (consider, for example, the annual “retooling” by deciduous trees). A good model for the role of energy in living systems is a cyanobacterium in sunlight. Nature has vastly improved on Maxwell’s demon by developing a highly efficient energy capturing system that regularly operates at a profit. It is time to give bacteria the credit they deserve, and to give Maxwell’s demon a decent burial — or perhaps a cremation.

The Thermoeconomics Paradigm

Harold Morowitz, one of the leading figures in biophysics and a major contributor to our collective effort to understand more fully the origins of life, inadvertently provided an illustration of the need for a broad, thermoeconomics paradigm in his path-breaking (and still valuable) volume on Energy Flow and Biology (1968). Recall how he proposed that the evolutionary process has been “driven” by the self-organizing influence of energy flows, mainly from the sun: “The flow of energy through a system acts to organize that system…Biological phenomena are ultimately consequences of the laws of physics” (p. 2).

This, unfortunately, was an overstatement. If energy flows were all that mattered in the evolution story, then we should expect to find complex living systems everywhere on Earth and, indeed, everywhere else in our solar system (we assume that the laws of physics are also applicable there). So, there must be something more involved — some other “ingredient” — and in fact there is, as Morowitz himself acknowledged later on in his book. In the penultimate chapter, where he explored ecological aspects of energy flows, Morowitz admitted that “at this point, our analysis of ecology as well as evolution appears to be missing a principle” (p. 120). His conclusion: Although the flow of energy may be a necessary condition to induce molecular organization, “contrary to the usual situation in thermodynamics…the presence or absence of phosphorus would totally and completely alter the entire character of the biosphere” (p. 121). And, we might add, so would the absence of water, or carbon dioxide, or oxygen (for aerobes).

Furthermore, as Morowitz noted earlier in his text, the lowest trophic level in the food chain is dependent on exogenous sources of free nitrogen, which would otherwise be a limiting condition (Liebig’s Limit) for the entire biosphere (as opposed to the abundant supply of energy). Finally, and most significant, Morowitz acknowledged that the functionally organized cyclical flow of matter and energy in nature requires a cybernetic explanation. “The existence of cycles implies that feedback must be operative in the system. Therefore, the general notions of control theory [cybernetics] and the general properties of servo networks must be characteristic of biological systems at the most fundamental level of operation” (p. 120). Exactly so. Biological evolution takes place within a situation-specific array of constraints and needed “resources”, and its course is also greatly affected by various kinds of “control information” — from enzymes to genes to nerve impulses to cultural information (memes). Thermodynamics per se, and especially entropy, has little to say about such matters.

Equally important, natural selection has played a major role in shaping the process, perhaps from the very outset. Some theorists (e.g., Wicken 1987; Depew and Weber 1995; Kauffman 2000) hold the view that “biogenesis” (the origin of life) was shaped by the laws of physics and that a “historical” process (i.e., natural selection) came later. Setting aside the growing suspicion that the laws of physics may themselves be artifacts of cosmic evolution, the assumption that the process of biogenesis was somehow indifferent to the specific “historical” environment and followed a deterministic, autocatalytic, self-organizing course that “got things right” on the very first try is a dubious proposition, I would argue. Given the ubiquity of variation in nature, plus the high frequency of failures and the evidence that functionally-important evolutionary “inventions” and “improvements” do not as a rule follow a smooth predestined course but instead emerge from a messy process of “trial-and-success” (in Julian Huxley’s felicitous term), it is more likely that “history” — and particularly “economic” influences — were co-determinants from the outset. Indeed, by its very nature, the process of biogenesis created “dependencies” — the “need” for a benign environment and access to a variety of material resources (namely, carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur) in addition to an abundant supply of available energy.

Accordingly (to repeat), a number of familiar economic criteria were likely to have been important from a very early point — capital costs, amortization, operating costs and, most especially, economic “profitability” (the returns had to outweigh the costs, especially with regard to energy “capture”). This “historical” aspect in turn provided an opportunity for “synergistic” functional innovations and improvements that were differentially favored by natural selection.   In accordance with the so-called “Synergism Hypothesis” (Corning 1983, 1995, 1998a, 2001a), the combined, synergistic effects produced by various combinations of elements, parts or individuals are themselves an important causal agency in evolution; functional effects are also causes — they are important determinants of natural selection. And many of these synergistic biotechnologies involved new methods of energy capture.

The Thermoeconomics of Biogenesis

This important evolutionary trend can perhaps be illuminated by reviewing a few of the highlights. (Among the many useful sources, see especially Morowitz 1968, 1978a, 1992; Lehninger 1971; Broda 1978; Harold 1986; and Nicholls and Ferguson 1992.) Until recently, it was widely believed that photosynthesis — the ability to “feed” upon direct energy inputs from the solar flux — was preceded by fermentation, the consumption of energy-rich organic compounds such as the simple sugars that formed spontaneously in the prebiotic environment in the presence of solar radiation (Broda 1978; Curtis and Barnes 1989). However, fermentation was not a free lunch, for there would also have been significant acquisition costs.

Another problem with fermentation as a biotechnology was that it was based on exploiting a strictly limited resource in a relatively inefficient manner. For instance, when yeast cells are placed in a barrel of sugar solution, they can recover (in the form of ATP) only about 35% of the free energy released during alcoholic fermentation; the rest is “lost” as entropy (mostly waste heat). But more important, as Broda noted, this was ultimately a dead-end strategy. A growing population of living organisms would have been dependent upon a limited and ultimately shrinking resource base. Absent the invention of a means for tapping directly the abundant renewable energy resources of the sun, the evolutionary process might have come to an early end.
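A back-of-envelope calculation shows where a figure of this order comes from. The exact percentage depends on the free-energy values one assumes (which differ between standard and cellular conditions); the inputs below are common textbook estimates, not figures taken from Broda, and they bracket a recovery of roughly 25-45% of the energy released, consistent with the figure cited above.

    # Rough thermoeconomic accounting for alcoholic fermentation (illustrative values only).
    # Overall reaction: glucose -> 2 ethanol + 2 CO2, with a net yield of 2 ATP per glucose.
    dG_fermentation = 235.0   # kJ per mol glucose released by the reaction (approximate)
    atp_per_glucose = 2
    atp_energy = {"standard conditions": 30.5, "cellular conditions": 50.0}  # kJ per mol ATP

    for label, dG_atp in atp_energy.items():
        captured = atp_per_glucose * dG_atp
        efficiency = captured / dG_fermentation
        print(f"{label}: {captured:.0f} kJ captured of {dG_fermentation:.0f} kJ released "
              f"-> about {efficiency:.0%} recovered; the remainder is dissipated, mostly as heat")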

However, in recent years a radically different scenario for the origins of life has emerged from the work of a number of theorists, including David Deamer, Harold Morowitz and several others (see especially Deamer 1978; Deamer and Oro 1980; Deamer and Pashley 1989; Morowitz 1978b, 1981, 1992; Morowitz, Heinz and Deamer 1988). This new scenario focuses on the role of “amphiphiles” — elongated fatty molecules that are like the lipids in modern cells. Amphiphiles, which evidently were present in the prebiotic environment, have the unique property that they are hydrophobic at one end and hydrophilic at the other end; they will align themselves with respect to a water medium. Thus, these molecules can self-assemble into “vesicles” — envelopes that might have provided a protected enclosure within which various forms of protochemistry could arise with the aid of raw material resources and an energy source like free-floating protons. These resources could have been selectively transported across the amphiphile “membrane” from the surrounding aqueous medium and then utilized for various purposes. This development, in turn, may have set the stage for a primitive precursor of photosynthesis, utilizing “chromophores” that contained photosensitive chlorophyll and retinal molecules.

It is an elegant concept, and the case for a spontaneous, autocatalytic process of this kind is quite plausible; much evidence has been marshaled to support it. However, the developers of this scenario also recognize that each step would have involved new energy and resource dependencies and many opportunities for functional improvements. As Morowitz (1992:93, 175-176) put it: “The necessity of persistence in [this] non-equilibrium domain leads to a Darwinian-like struggle for survival [and competitive selection] long before there are organisms in the conventional sense.” This is consistent with the earlier argument of Eigen and Schuster (1977) regarding the likely role of Darwinian selection in the emergence of autocatalytic “hypercycles”. Even Depew and Weber (1995:469-470), who prefer to label it “thermodynamic or chemical selection,” nonetheless embrace the underlying principle: “In a world in which autocatalytic cycles compete for efficiency in finding, utilizing, and dissipating energy sources, however, there would have been keen selection pressure for any entity that could increase these efficiencies by storing the information needed for autocatalysis and for expanding autocatalytic prowess…” In short, natural selection was primordial.11

Energy “Progress” in Evolution

In a nutshell, the story of energy in evolution has little to do with entropy; it has more to do with progressive improvements in bioenergetic technologies. This can be seen clearly in the development of photosynthesis, a highly sophisticated nanotechnology for exploiting a virtually unlimited energy resource with fantastic profit potential. Even photosynthetic bacteria are able to capture much more available energy than is required for their own immediate maintenance needs. However, the ability of the so-called prokaryotes to exploit atmospheric sources of carbon (CO2) to build biomass was only marginally more efficient than anaerobic fermentation (Lehninger 1971; Harold 1986). Its principal virtue was that it provided access to an abundant new source of raw materials.

However, the next significant technological improvement was highly synergistic and represented a major breakthrough. According to the serial endosymbiosis theory (SET) of Lynn Margulis (1993, 1998), when primitive “eukaryotic” protists, one-celled organisms with an enclosed nucleus and various specialized functional units called organelles, developed — or more likely, enveloped — ancestors of modern plant chloroplasts, they acquired potent new energy-capturing capabilities. Each chloroplast is a “specialist” (at least in modern land plants) that contains several thousand “photosynthetic systems” consisting of a “reaction center” and 250-400 chlorophyll and carotenoid molecules — perhaps as many as one million “antenna pigments” altogether. Moreover, each eukaryotic cell may contain 40-50 chloroplasts (Curtis and Barnes 1989). In other words, eukaryotes can capture many orders of magnitude more energy than their prokaryote ancestors.12

A crucial corollary, however, is that the specialization and increased productivity achieved by chloroplasts in turn depends upon a “combination of labor” in which these specialists are supported by a larger collaborative enterprise, including particularly the metabolic functions provided by the mitochondria, along with an array of other life-sustaining activities. The result is an interdependent “system” that is vastly more productive — one that, among other things, is capable of producing some 15-20 times as much available energy (net of entropy) as do prokaryotes (Margulis and Sagan 1995; Ridley 2001).

The next major development in the energy story is associated with the evolution of metazoa, complex multicellular organisms that developed new ways of exploiting the synergy principle. Now each eukaryotic cell, with its 40-50 million antenna pigments, became a contributor to a vastly larger enterprise in which many photosynthetic cells combined forces and developed entire energy-capturing surfaces, each square millimeter of which might contain half a million chloroplasts. And this already huge number (perhaps 2.5 x 10^12 pigment molecules) could in turn be multiplied by the total light-capturing surface-area of a given plant. For a single deciduous tree, the total number of pigment molecules might be astronomically large — perhaps 5 x 10^22.

“Free-loading” — better known as predation — may also be a (relatively) low-cost way to obtain available energy, and this alternative strategy is also likely to have developed early on in the evolution of the prokaryotes. However, a major evolutionary breakthrough occurred when a new class of predators (heterotrophs) developed the ability to utilize an accumulating biological waste product (oxygen) to bypass the rigors of photosynthesis and extract energy directly from the biomass of the so-called “autotrophs” (e.g., plants and grasses) using oxidative combustion. This represented a significantly more economical biotechnology. Equally important, it freed the heterotrophs from the need to sit in the sun all day and remain connected to an array of solar panels. However, as Fenchel and Finlay (1994) point out, these increasingly complex forms of energy capture and metabolism were the result of synergistic functional developments that produced adaptive “economic” advantages, and not thermodynamic “instabilities”, “fluctuations”, or “bifurcations”.

Finally, various organisms have developed the ability to capture and exploit exogenous energy “subsidies” to enhance their survival-related activities and reduce internal energy costs — ranging from solar radiation to tidal currents, alluvial flooding, prevailing winds, even gravity. In humans, needless to say, these subsidies have had a major effect in shaping not only the destiny of our species but the course of evolution itself. For example, modern agricultural practices require about 10 calories of subsidy for every calorie of output (E. P. Odum 1983). However, the total output per agricultural worker has gone up proportionately. An American farmer can raise enough food to support him/herself and 45-50 other people; a New Guinea horticulturalist can support only 4-5 people.

In sum, the development of novel bioenergetic technologies in the evolutionary process has had little to do with entropy or “dissipative structures” and much more to do with “engineering” improvements in the ability of living systems to capture and utilize available energy; it is the organized use of available energy in evolved, “informed” (cybernetic) structures that has been the key. And the explanation for these changes lies in their “economic” advantages, as Lotka (1922) long ago suggested. No detailed cost-benefit analysis of this progressive trend has yet been undertaken, to our knowledge, but it is unlikely that we will be surprised by the findings. In fact, this trend supports one of the axioms of evolutionary theory, tracing back to Malthus and echoed by Darwin, which holds that living organisms have evolved the capacity for unchecked multiplication in the absence of various environmental constraints. However, it has not been a free lunch.

Does Entropy Pay the Bill?

One of the most striking trends in the evolution of bioenergetic technologies has been the improvement in “productivity” and “efficiency” over time — entropy reduction. Before we consider some aspects of this trend, however, it is necessary to confront two items of conventional wisdom about thermodynamics that are directly related. One is the claim, going back to Schrödinger, that living systems must “pay” for their thermodynamic order with an equivalent amount of entropy. In fact, there is no one-to-one correspondence between the creation of order and an increase in entropy. In a “perfect” (i.e., reversible) process, there would be no increase in entropy at all. But more relevant for our purpose are the many cases in which efficiencies have been achieved that result in per-unit entropy reductions. Here we will provide just two examples, one in technology and one in biology. Power plants in the year 1900 required some eight times as much coal per kilowatt-hour of electricity output as do the best power plants operating today. Similarly, in living systems Schmidt-Nielsen (1972) showed that the energy consumption associated with locomotion is far lower per pound of body weight for large animals than for smaller animals (the slope of the log-log regression line is about -0.4). The point is that it is the inefficiencies — i.e., the wastes or irreversibilities — not the ordering processes per se that create entropy.
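Written out as the allometric relation that this regression describes (a standard way of expressing such scaling results, supplied here for clarity), the mass-specific cost of locomotion falls with body mass approximately as a power law:

    E/m \propto M^{-0.4}, \qquad \log(E/m) = a - 0.4 \log M,

where E/m is the energy expended per unit body mass per unit distance traveled, M is body mass, and a is a taxon-specific constant; the exponent is the regression slope cited above.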

Furthermore, many bioenergetic processes are remarkably efficient and entail very little entropy. Internal conversion of chemical energy (ATP) to mechanical work within animal muscles, for instance, ranges from about 66% to 98% efficient (Kushmerick and Davies, 1969; Blake 1991). Likewise, there is almost no entropy associated with the light-dependent reactions in photosynthesis. McClare (1971, 1972) has suggested that there may be a time function associated with the thermalization of energy (and the creation of entropy) in living systems; very rapid photochemical and biochemical energy conversion processes may, in effect, be more efficient and may reduce energy wastage.

Finally, from a broader, cosmic perspective, what difference does a little more entropy in the universe make? For the sake of argument, let us say that 2% of the free energy that living systems are able to “capture” from the solar flux to do work and build biomass ends up being permanently stored or reused elsewhere in nature. This is much better than if all of it were “dissipated” into deep space, which might otherwise have been the case. So, contrary to Schrödinger’s assertion, living systems actually reduce (very slightly) the total entropy of the universe, at least for the lifetime of this planet.

The other major misapprehension about thermodynamics and evolution has to do with the notion that there is some inherent economizing influence embedded in the laws of physics themselves (e.g., Proops 1985; Wicken 1987). Often called Prigogine’s principle, the claim is made that, as thermodynamic processes approach an equilibrium condition, they obey a law of “minimum entropy production.” However, this principle is true only in some special cases. Obvious counter-examples are turbulent flows, say in various liquids and gases. Equally important, Gage et al. (1966) provided a definitive disproof that any variational principle of this kind could exist in a dynamic system. Although Gibbs showed in the 1870s that such a principle is applicable to static systems, it is not applicable to living systems operating in an approximate steady state. As discussed in Corning and Kline (1998a), there is no monolithic, general variational principle of any kind in physics based on a single variable that applies to living systems. In other words, we must look to thermoeconomics (and natural selection) to explain the progressive improvements in energetic “efficiency” that can be observed in the evolutionary process, not thermodynamics.
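For reference, the restriction can be stated more precisely (this is the standard textbook formulation of Prigogine’s theorem, added here for clarity rather than taken from the sources cited above). The theorem concerns the entropy production rate

    \sigma = \sum_k J_k X_k \ge 0,

where the J_k are thermodynamic fluxes and the X_k their conjugate forces. It guarantees that \sigma is minimized at a steady state only in the near-equilibrium, linear regime, where the fluxes are linear functions of the forces (J_k = \sum_j L_{kj} X_j) with constant, symmetric coefficients L_{kj}. Far from equilibrium, in the regime occupied by turbulent flows and by living systems, no such general variational principle holds.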

Thermoeconomic Trends in Evolution

There are two distinct thermoeconomic trends in the overall evolutionary process that can be viewed as a reflection of progressive improvements in the capacity of living organisms to acquire and utilize available energy. One such trend, mentioned in the discussion above, relates to the total quantity of energy “throughputs”. For instance, Karasov and Diamond (1985) have shown that small mammals can process food up to ten times faster than lizards of similar size with the same or greater extraction efficiencies, due to a greater intestinal surface area. A second trend, identified by Lotka (1922, 1945), has involved an increase in the total energy flux of the “biosphere”. Ecology textbooks refer to this quantity as the global “gross primary production.” Indirect evidence of this trend can be found in correlated environmental changes, most notably the reduction in atmospheric carbon dioxide and the increase in atmospheric oxygen over time (see E.P. Odum 1983).

Although evolutionists remain uncertain about many of the details, a related trend has to do with a long-term increase in the Earth’s total biomass. Wesley (1989), following Ehrenvärd, estimates that there has been a 20-fold increase in biomass from the Cambrian era to the present day. The energetic significance of this increase can be likened to capital/asset accumulation, and we have adopted the term “structural energy” to label this phenomenon. The term refers to energy that is stored in various forms, much of it temporary (like ATP) but some of it as permanent as the inorganic matter that is aggregated or even manufactured by living organisms. (We differ here from Leigh Van Valen, 1976, who coined the term but excluded “embodied” energy that might later be used for maintenance, growth and reproduction.)

Structural energy in our usage includes not only the biomass tied up in currently living organisms but also the vast quantities of organic detritus contained in fossil fuels — coal, oil, tars, oil shale — as well as limestone, reef corals, petrified wood, and other inorganic products of organic activity. M. King Hubbert (1971) estimated that the total (“initial”) quantity of coal (before human consumption began in earnest) amounted to some 15.28 trillion metric tons, half of which can be commercially mined. The remaining oil reserves have recently been estimated to be equivalent to some 10 trillion barrels (Davis 1990). This represents an enormous accumulation of structural energy. (And this says nothing about atomic or chemical energy.)

Efficiency is another important concept in thermoeconomics. But, as Blake (1991) has pointed out, it is a multi-faceted concept; it can refer variously to energy capture, chemical conversions, biomechanical work, locomotion/propulsion costs, thermoregulation, and so on. Natural selection sometimes maximizes for one or more forms of energetic efficiency, but more often it produces compromises among various survival-related criteria. One example concerns the energetic costs of reproduction. These costs vary enormously from one species to another, and the reasons are invariably multi-factored and complex; they do not correlate closely with an obvious variable like body weight (Harvey 1986). Another example concerns human energetics. The cost of transport for a running human (in oxygen consumption per unit body mass per unit distance traveled) is higher than for many other mammals and birds. Yet humans also excel in endurance, a paradox that reflects an evolutionary compromise (Carrier 1984).

Improvements in efficiency can be achieved in at least three different ways. One has to do with reducing entropy production — that is, with increasing the degree to which the available energy input is actually utilized (often called First Law thermodynamic efficiency). As we noted earlier, energetic evolution has not always resulted in increases in this type of efficiency. Photosynthetic plants “waste” a great deal of energy in evapotranspiration, and animals at the top of the food chain are often very wasteful of energy when there is no externally imposed need to “economize”. Likewise, human technologies are notoriously inefficient. For example, it takes roughly two joules of energy from coal to produce one joule of electrical power, and automobiles have maximum energetic efficiencies in the neighborhood of 35-40%. Overall, only about half of the exogenous energy inputs for human technology are used productively.

Second Law thermodynamic efficiency, on the other hand, refers to the fraction of the (net) available energy — the work potential — that is actually utilized in an energetic process. Thus, to use an example provided by Ayres and Nair (1984), a space heater may operate at 70% First Law efficiency, meaning that only 30% of the energy input goes up the chimney, whereas its Second Law efficiency may be only 4%. Only a small fraction of the fuel’s capacity to do work is put to use; the heat delivered may briefly serve to warm our house, but it is of low thermodynamic quality and is ultimately dissipated.
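The gap between the two measures can be made concrete with a back-of-the-envelope calculation. The sketch below uses assumed indoor and outdoor temperatures (not figures taken from Ayres and Nair): the work potential of heat delivered at room temperature carries the Carnot factor (1 - T0/Troom), and multiplying that factor by the First Law efficiency yields an estimate of the Second Law efficiency.

    # Rough illustration with assumed temperatures (20 °C indoors, 0 °C outdoors);
    # the 70% First Law efficiency is taken from the example above.
    T_room, T_0 = 293.0, 273.0
    eta_first = 0.70

    carnot_factor = 1.0 - T_0 / T_room        # work potential per unit of heat
    eta_second = eta_first * carnot_factor    # fraction of fuel work potential used
    print(f"Second Law efficiency ~ {eta_second:.1%}")   # about 4.8%

The result is of the same order as the 4% figure cited, and it makes the underlying point plain: delivering low-temperature heat consumes high-quality fuel whose work potential is almost entirely squandered.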

However, the natural world also provides many examples of a third type of energetic efficiency, namely, adaptations that minimize the absolute quantity of energy used in meeting various biological needs. These adaptations range from shelter building to hibernation, heat sharing, nest-sharing, and physiological features (fur, feathers, subcutaneous fat layers, and so on). For instance, Le Maho (1977) documented that the huddling behavior of emperor penguins during the long Antarctic winter reduces individual energy expenditures by 20-50 percent. It is also important to note that one organism’s waste may become another one’s food supply. Consider the many decomposers and scavengers that utilize otherwise wasted energy, or the continuous recycling of oxygen and CO2 between aerobic heterotrophs and photosynthetic organisms. Such interactions require us to do our energy bookkeeping at the ecosystem level as well as at lower levels in the biosphere (see especially Ulanowicz 1980, 1983, 1986).

Issues in the Bioenergetics of Evolution

Two other issues concerning the bioenergetic aspect of evolution should also be mentioned briefly. One is related to a broader question in evolutionary biology, namely, does natural selection tend to “maximize” for any particular value, or objective? Is there a discernible overall trend or general direction to the process? Some theorists have suggested that, in light of the necessary role of energy in biological processes, energy-capturing capabilities would likely be a major target of selection. This was first suggested by Lotka (1922, 1945), who formulated a “law” of maximum energy flux. Van Valen (1976) refined this idea further with his so-called “third law of natural selection.” Van Valen posited that natural selection would be likely to maximize not for energy flows per se but for what he called “expansive energy” — i.e., energetic surpluses or “profits” that, over time, would enhance the capacity of the biosphere to expand the total quantity of biomass. The progressive improvements in bioenergetic technology cited above would seem to lend some support to this hypothesis, and culturally evolved energy-capturing technologies have manifestly played a major role in the emergence of complex human societies.

The problem with this line of reasoning is that natural selection cannot, over the long run, maximize for any one parameter, because complex organisms have a “package” of important functional requisites; energetic improvements are not likely to occur at the expense of other survival imperatives. In other words, an evolutionary trend is not equivalent to an evolutionary law.

Finally, there is the vexing issue of “complexity” in evolution. It is generally agreed that there have been significant increases in biological complexity over the course of evolutionary history, but there is also widespread disagreement about how best to measure complexity and about its evolutionary significance.13 Few, if any, Darwinian theorists think that natural selection would maximize for any form of complexity per se; complexity is likely to be an artifact of various functional advantages (see Corning 1983, 1995, 1996; Bonner 1988; Maynard Smith and Szathmáry 1995; Ridley 2001; Szathmáry et al., 2001). On the other hand, many anti-Darwinian theorists seem to think that evolution might do just that.

We believe that an unbiased reading of the fossil record and the diversity of currently living systems will not support the non-Darwinian hypothesis. Complexity — thermodynamic or otherwise — is a contingent survival strategy that is continuously subject to testing and revision in light of fundamentally “economic” criteria. (Consider the fate of such energy-intensive creatures as large dinosaurs.) From this perspective, it is the functional consequences of various kinds of complexity that have been responsible for its differential survival and reproduction over the course of evolutionary history. The explanation lies in the economic costs and benefits in a given set of “environmental conditions,” not in some inherent trend.

Thermoeconomics and Economics

Finally, a word is in order about the long-standing but uneasy relationship between energetics and the discipline of economics. As noted earlier, the roots of this relationship can be traced back to Jean Baptiste de Lamarck, Herbert Spencer, Ludwig Boltzmann and others in the 19th century, who drew attention to the central role of energy capture and utilization in living systems. In the twentieth century, the demographer cum physicist Alfred Lotka (1922, 1945) was the first to view the role of energy in evolution within a natural selection context, and he spoke of using an energetic perspective to illuminate the “biophysical foundations of economics.” However, it was physical chemist and Nobel Laureate Frederick Soddy who, in the 1920s and 1930s, became the most vigorous proponent of an energy theory of economic value. Soddy wrote: “If we have available energy, we may maintain life and produce every material requisite necessary. That is why the flow of energy should be the primary concern of economics” (1933:56). Meanwhile, a contemporary of Soddy, Frederick Taylor (the father of “scientific management”), developed a similar but more narrowly conceived labor-energy theory of value that has subsequently been espoused by many theorists.

In the post-World War Two era, a number of anthropologists and ecologists embraced energy-centered theories of cultural evolution, most notably Leslie White (1943, 1949, 1959), Richard Adams (1975), Fred Cottrell (1953, 1972), Eugene Odum (1971) and Howard Odum (1971; Odum and Odum 1982), among others. Yet, as Mirowski (1988) observed, energetic paradigms never really took root in economics until well into the 1970s. What Mirowski calls the “neo-energetics” movement in economics can perhaps be dated to the work of Nicholas Georgescu-Roegen (1971, 1976, 1977a,b,c, 1979; see also Dragan and Demetrescu 1986) and the growing number of theorists who have attempted to build bridges between economics and thermodynamics over the last two decades. (See especially Hannon 1973; Slesser 1975; Gilliland 1975; Huettner 1976; Berndt 1978; Berry et al. 1978; Costanza 1980; Boulding 1981; Parsons and Harrison 1981; Bryant 1982; Roberts 1982; Ayres and Nair 1984; Proops 1983, 1985, 1987; Van Gool and Bruggink 1985; H.T. Odum 1988; Giampietro et al. 1993. A detailed history and critique of energy-economics can be found in Mirowski 1988, 1989.) Unfortunately, these theorists have sometimes been ill served by their sources in the physical sciences. As Maxwell’s demon illustrates, thermodynamics is blind to the economic and cybernetic (control) aspects of living systems. Furthermore, as noted earlier, the conflation of energetic entropy and physical disorder has seriously misled some economists (e.g., Georgescu-Roegen).

Conclusion

Thermoeconomics adds to both evolutionary biology and economics a perspective in which the energetic costs and benefits of meeting survival and reproductive needs are the keys to understanding the energetics of living systems. We believe that an economic (and cybernetic) paradigm provides a better predictor of the advances and recessions in biological complexity than does any formulation derived from the Second Law. Indeed, living systems may complexify, or simplify, for reasons that are unrelated to gross energy throughputs. For instance, Ilya Prigogine’s “universal law of evolution” postulates that increasing complexity in nature is driven by energy-induced instabilities and “bifurcations” in “dissipative structures” (Prigogine et al., 1972a,b). Prigogine’s oft-cited example of Bénard convection cells was mentioned earlier. Presumably, then, a decline in energy consumption would result in a decrease in complexity. But, as we have noted, there are many cases in nature where reductions in energy use may reflect greater efficiency and even increased complexity (by many criteria). An obvious example is the collective (per capita) energy economies achieved by socially organized species like honeybees, army ants, emperor penguins, and humans.

We believe that the entire strategy of attempting to reduce biological evolution and the dynamics of living systems either to the principles of classical, irreversible thermodynamics or to statistical mechanics — that is to say, to manifestations of simple, one-level physical systems — is a theoretical cul de sac. Physics is highly relevant to biology, but its explanatory arsenal can deal with only a part of the multi-leveled, multi-faceted causal hierarchy that is found in living systems. We believe that we have outlined a potentially fruitful alternative approach, one that is capable of shedding new light on the relationship between energy and the evolutionary process. In so doing, we have also brought this aspect of evolution more firmly into the Darwinian paradigm: thermoeconomics is fully consistent with Darwinian evolutionary principles.

Acknowledgements

I am deeply indebted to the late Steve Kline, Woodard Professor of Science, Technology and Society and of Mechanical Engineering, Emeritus, at Stanford University. I benefited greatly from Professor Kline’s widely acknowledged expertise in thermodynamics and his patient mentoring. He was a good friend and is sorely missed.   Needless to say, Prof. Kline bears no responsibility for any errors that may be found in this paper. I am also grateful to Ernst Mayr, Anatol Rapoport and Howard Odum for their support of this work, and to Terrence Deacon for a lively and helpful discussion of these issues. I also sincerely thank Stanley Salthe, a leader of the “infodynamics” school, for his willingness to read and comment upon a paper that was critical of his approach. I would also like to acknowledge the insightful comments and helpful suggestions from two anonymous reviewers for the original draft of this paper. Finally, I thank Pamela Albert for her research and bibliographic support. Her efforts contributed significantly to the final product.

NOTES
  1. Not all of these theorists deny the relevance of natural selection, needless to say, but in various ways they downgrade its importance. For instance, Stuart Kauffman (1995) acknowledges that natural selection is not irrelevant to the trajectory of evolution, but he pushes it into the background as an agency that provides “fine tuning” and “modest improvements” to the order that arises spontaneously in nature (see also Salthe 1998, who claims that adaptation is “not essential to life”). John Collier (1986) asserts that natural selection does not determine the “intrinsic dynamics” of evolution; it is merely “a rate-determining extrinsic factor.” Vilmos Csányi (1998) likewise acknowledges a subsidiary role for natural selection but gives primacy to an “autogenetic model” of evolution in which the main source of creativity involves “hidden properties” that emerge from an inherent “drive to be.” Biologist Jeffrey Wicken (1987, 1988, 1989), who acknowledges that there has been an “over extension of the entropy concept” among the members of the thermodynamics school, nevertheless argues that thermodynamic “forces” underlie the principles of variation and selection in nature (1988:141). Even Depew and Weber (1988, 1995), in the course of presenting perhaps the most balanced view of the relationship between thermodynamics and selection (they speak of a dualistic process involving both autocatalysis and natural selection), circumscribe its role by excluding what they call “physical selection,” “chemical selection,” and even “thermodynamic selection.” In their view, only gene-based organic selection processes count as natural selection. We disagree. Natural selection applies to differential survival and “replication” at any biological level, whenever varying functional properties are responsible for the outcome (more on this in Footnote 11 below).
  2. Brooks and Wiley are also representative of recent efforts to incorporate information theory into the thermodynamics paradigm. Stanley Salthe (1993) calls it “infodynamics”. While this is certainly a salutary development, it suffers from the long-standing problem that physics cannot provide a functional definition of information, which is essential to understanding its role in living systems (but see Footnote 7 below regarding the concept of “control information;” also Corning and Kline 1998b, and Corning 2001b). The root of this problem traces to the pioneering work of mathematician Claude Shannon (1948; also Shannon and Weaver 1949) on what he initially called “communications theory” but is now (perhaps inappropriately) called “information theory.” Shannon, who worked at the Bell Laboratories, was concerned with the problem of measuring uncertainty in the communication of messages between a sender and a receiver. At the suggestion of mathematician John von Neumann, Shannon adopted the term “entropy” to describe his measure. However, his form of entropy referred only to the degree of statistical uncertainty (disorder) in a given communications context before the fact, while “information” in his terms referred only to the capacity to reduce statistical uncertainty. If one uses the binary bit as a basic unit of measure, the degree of informational uncertainty (entropy) can therefore be defined empirically as a function of the number of bits required for its elimination (a simple numerical illustration follows this note). Attracted by the mathematical isomorphism between Shannon’s entropy and the Boltzmann/Gibbs formalizations for statistical entropy in thermodynamics, many other theorists since the 1940s have tried to apply information theory directly to thermodynamics, an enterprise Shannon himself is said to have discouraged. In general, these efforts share a tendency to lose sight of the original (energy-related) purpose of statistical entropy measures. For instance, physicist David Layzer (1988:29) defines information as the difference between the observed entropy state of any system and the maximum possible entropy. Wicken (1987) makes a convincing case against the notion that Shannon’s information/entropy concepts can be treated as generalized measures of order/disorder in nature. As Wicken notes, Shannon’s entropy bears no relationship to the state of the phenomenal world; it relates to the efficiency or effectiveness with which a “message” is communicated from a sender to a receiver and the degree of “uncertainty” reduction that occurs. Equally important, in the biological realm information is a functional phenomenon; it controls the work that is done via cybernetic control processes. (For more on this, see Corning and Kline 1998b). Thus, it is an ontological (or at least semantic) error to use the same concept both as a measuring rod for uncertainty/predictability and as a causal agency in the production of order/organization in the real world. Brooks and Wiley (1988), like other theorists of this school, have sought to circumvent this problem by differentiating between “structural information,” which is derived from what they claim are the “inherent” self-organizing capabilities of living systems, and “instructional information,” which they assert (after Collier 1986) is a “physical array.” The latter form of information provides a description of the state of the system, they say, and its “flow” is subject to informational entropy. A comment by biophysicist Harold Morowitz (1992:73,77) may be relevant here.
“It is possibly the success of thermodynamics that has led to excesses by biological theorists looking for global extremum principles of biology in terms of parameters and variables that have little meaning in the domains in which they operate…. [T]o think in terms of predictive grand, unified theories based on thermodynamics is simply dreaming.”
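For readers who want a concrete sense of Shannon’s measure as described in the note above, the following sketch (a minimal illustration, not taken from any of the authors cited) computes the entropy of a few simple message distributions in bits — the average number of yes/no questions needed to remove the uncertainty.

    # Shannon entropy H = -sum(p * log2(p)), in bits; illustrative values only.
    from math import log2

    def shannon_entropy(probabilities):
        """Average number of bits needed to resolve the uncertainty."""
        return -sum(p * log2(p) for p in probabilities if p > 0)

    print(shannon_entropy([0.5, 0.5]))      # 1.0 bit: a fair coin toss
    print(shannon_entropy([0.25] * 4))      # 2.0 bits: four equally likely messages
    print(shannon_entropy([0.9, 0.1]))      # ~0.47 bits: the outcome is nearly certain

Note that the measure says nothing about what the messages mean or what work they control; it quantifies only the statistical uncertainty to be eliminated, which is precisely the limitation discussed above.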
  3. In his important book Evolution, Thermodynamics and Information (1987), Wicken initially adopts Shannon’s concept of information, a formulation that refers to certain statistical and quantitative properties associated with the “messages” that are transmitted in formal communications systems. Then, in an acknowledged theoretical segue, Wicken proceeds to deploy the concept of information as a causal agency in biological evolution. In order to do so, however, Wicken must shift to using a functional definition of information as an evolved, purposive artifact, a definition that more nearly accords with our common sense understanding of the term. Wicken advances the notion that organisms are “informed thermodynamic systems,” although he demurs from addressing the unresolved challenge of how to measure functional information empirically. He characterizes it as “a very perilous enterprise…We aren’t even close to knowing how to quantify it” (pp. 27-28). Wicken is well aware of the distinction between physical order and biological organization (see below), and he was among the first members of this school to recognize that biological organization depends upon functional (cybernetic) information. But he also acknowledged that he could not operationalize it: “All these considerations,” he noted, “make quantification of ‘information content’ extremely problematic, and pursuing that theme would only serve to reduce focus on the primary issue” (p.50). Wicken did suggest the use of informational “compressibility” — certain statistical properties (algorithmic or probabilistic) associated with various informational “units” — as a measure of ordered complexity, but this still did not solve the problem of defining information in functional terms. (Again, see the discussion of “control information” in Footnote 7 below; also Corning and Kline, 1998b, and Corning, 2001b.) It should also be noted that a number of these theorists have recently established linkages with the field of semiotics, which has developed a much more compatible approach to biological information than the Shannon-Weaver paradigm (see especially Sebeok 1986; Nöth 1990; Brier 1992; Qvortrup 1993; Hoffmeyer 1997; Van de Vijver, Salthe and Delpos 1998).
  4. Schneider and Kay (1994, 1995) view living systems as being, quintessentially, a means for “dissipating” solar energy. The purpose of life, they assert, is “only” to provide a means for resisting the tendency of the solar energy gradient to perturb the equilibrium state of the “system” that encompasses the Earth. They view the evolutionary process as self-organizing because they posit an inherent tendency of any “system” to resist being “removed” from an equilibrium state. They describe evolution as “a march away from disorder.” Thus, energy flows “determine the direction” of evolution and the development of living systems over time. Below we will detail why we believe that any such monolithic thermodynamic determinism is inadequate as an explanation of the evolutionary process; we view biological evolution as a vastly more complex, multifaceted “survival enterprise.” The devil is in the details that Schneider and Kay allude to as “environmental conditions.”
  5. “Available energy” is a precisely defined technical term in thermodynamics that we much prefer to the more commonly used Helmholtz or Gibbs “free energy” functions. The distinctions between them, and reasons behind our preference, are detailed in Corning and Kline (1998a, Appendix B). Briefly, the availability function allows one to calculate the work potential in any given environment, net of entropy, for both control mass and control volume situations. Though use of the control mass paradigm is more common in biology, we maintain that this category of systems is in fact inappropriate for the analysis of whole organisms, ecosystems and macro-evolutionary processes, because living systems at these levels are not systems of fixed mass; the flow of matter and energy through these systems more nearly resembles a jet engine than a bottle containing a fixed quantity of gas molecules. In any case, the availability function enjoys the advantage that it properly accounts for entropy without making entropy the analytical focus.
  6. To put this issue into perspective, the available energy associated with the part of the total solar flux that actually impinges on the Earth has been estimated to be about 13 × 10²³ calories of radiant energy per year (Curtis and Barnes 1989). Of this total flux, less than 1% is “captured” (a number of variables affect the quantity of incident sunlight) and put to use to support life (Hubbert 1971; Harold 1986). The vast majority of the energy in the solar flux (about 80%) is reflected or entropically returned to space. The remaining 20% drives hydrological cycles, geological processes, the dynamics of the atmosphere, etc., in addition to sustaining life (Davis 1990). But, in any case, the Earth itself is a far greater source of “wasted” entropic energy (more than 99%) than is all of the Earth’s biological activity put together. Living systems contribute a trivial amount of entropy to the universe.
  7. The crucial role of cybernetics and “control information” in the evolutionary process is discussed in some detail in Corning (1983, 1995, 2001b; also Corning and Kline 1998a) (see also Wiener 1948; Buckley 1968; von Bertalanffy 1968; Powers 1973; Miller 1995[1978]). The term cybernetics derives from the Greek word Kybernetes, or “steersman”, and it is the root for such English words as governor and government. A cybernetic system is by definition a dynamic purposive system; it is “designed” to pursue or maintain one or more goals or end-states. The key to understanding a cybernetic system — say, a “smart bomb” as distinct from a ballistic missile — is the concept of “feedback.” Technically, feedback denotes information that a cybernetic system uses to monitor and adjust its behavior in order to attain or maintain a desired goal-state. Thus, cybernetic systems are “controlled” by the relationship between endogenous “goals” and the internal or external environment as experienced via informational processes. The systems theorist William T. Powers (1973) has shown that the behavior of such a system can be described mathematically in terms of its tendency to oppose an environmental disturbance of an internally controlled quantity. That is to say, the system will operate in such a way that some function of its output quantities will be nearly equal and opposite to some function of a disturbance in some or all of those environmental variables that affect the controlled quantity, with the result that the controlled quantity will remain nearly at its set point. Thermoregulation in the human body is an obvious example. Needless to say, complex cybernetic systems are not limited to maintaining any sort of fixed steady state. For instance, overarching goals may be maintained (or attained) by means of an array of hierarchically organized sub-goals that may be pursued contemporaneously, cyclically, or seriatim. Furthermore, homeostasis shares the cybernetic stage with “homeorhesis” (developmental control processes) and even “teleogenesis” (goal-creating processes). But more to the point, the cybernetic model is not merely a loose “analogy.” Its empirical validity as a description of communications and control processes in living systems is supported by a vast research and theoretical literature across many disciplinary lines. Indeed, cybernetic mechanisms exist at many levels of living systems. They can be observed in, among other things, enzyme (protein) activity (Monod 1971), morphogenesis (Shapiro 1991,1992; Thaler 1994), cellular activity (Hess and Mikhailov 1994) and neuronal network operation, as well as in the control of animal behavior. Another way to put it is that several levels of feedback processes exist in nature, and complex organisms such as mammals — and especially socially-organized species — are distinctive in their reliance on the higher level controls (see Corning 1983; Kline 1995). (For a history of feedback control mechanisms in human technology, which date back to antiquity, see O. Mayr 1970.) Finally, it should also be noted that cybernetic control processes may produce results that resemble the so-called “dynamical attractors” of chaos theory, but they are achieved in a very different way.
Without some internal “reference signal” (teleonomy), there can be no feedback control, although there can certainly be self-ordered processes of reciprocal causation or autocatalysis at work, or perhaps Darwinian processes of “coevolution” and “stabilizing selection.” The mere fact of functional interdependence is insufficient to justify the use of a cybernetic “model.” Although cybernetic systems must operate within the “constraints” of the laws of physics, chemistry, etc., cybernetic causation, by definition, introduces unique historical and configural (i.e., situation-specific) influences into the “degrees of freedom” that exist in the natural world. Another way of putting it is that organisms are distinguishable from, say, crystals or geysers in that their cybernetic properties introduce an emergent, partially independent source of causation that cannot be accounted for within the laws of physics. Accordingly, control information is defined as the capacity (know-how) to control the acquisition, disposition and utilization of matter/energy in purposive (cybernetic) processes. If energy is defined as the capacity to do work, control information is defined as the capacity to control the capacity to do work. (A toy numerical illustration of a simple feedback loop follows this note.) The concept of control information was formalized and operationalized in Corning and Kline (1998b). (For an interesting attempt to build a bridge directly from information theory to “functional systems,” see Collier, 2000.)
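As a concrete (and deliberately simplified) illustration of the feedback logic described in this note, the following sketch simulates a proportional negative-feedback loop that holds a controlled quantity near an internal set point against a steady environmental disturbance. The thermoregulation framing, the gain, and the numbers are assumptions chosen for illustration only; they are not taken from Powers or from the text.

    # Toy negative-feedback loop: the corrective output opposes the disturbance,
    # so the controlled quantity stays near (not exactly at) its reference value.
    set_point = 37.0        # internal "reference signal" (e.g., core temperature)
    temperature = 37.0
    gain = 0.8              # proportional feedback gain (assumed)

    for step in range(20):
        disturbance = -0.5                # steady cooling from the environment
        error = set_point - temperature   # deviation sensed by the system
        output = gain * error             # corrective effort
        temperature += disturbance + output

    print(f"{temperature:.2f}")           # settles near 36.4, close to the set point

The residual offset is characteristic of simple proportional control; real physiological regulators employ richer (and often hierarchical) control architectures, as the note indicates, but the essential point — output opposing disturbance so as to protect a controlled quantity — is the same.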
  8. Actually, Szilard’s influential paper was preceded by a similar line of argument in a thermodynamics textbook by Lewis and Randall in 1923 and by Szilard himself in his 1925 doctoral dissertation at the University of Berlin (see Leff and Rex, 1990).
  9. Kline (1997) has shown that Maxwell’s demon is “wildly unfeasible” for any one of several reasons. (He defines “wildly” as meaning that it is currently beyond our technical capabilities by a factor of more than one million.) The demon would require capabilities for perception/detection, data collection, mechanical operation and feedback control that appear to be totally impracticable, not to mention being totally uneconomic. Kline points out that it is bad science to base theories and thought experiments on events that have no reasonable likelihood of occurring.
  10. Charles H. Bennett is well known as a theorist on the thermodynamics of information. His work on the reversibility of information processing (computation) was inspired by earlier work in this area by a colleague at the IBM Thomas J. Watson Research Center, Rolf Landauer. Bennett showed that computation might (theoretically) be made reversible, both logically and in thermodynamic (entropy) terms. However, Bennett also supported Landauer’s conclusion that there is an inescapable thermodynamic cost for “erasing” information to start a new measurement, and he applied this to Maxwell’s demon. Thus, Bennett concluded, it was not the cost of acquiring information (as Szilard supposed) but the cost of destroying it that makes the demon infeasible. The problem with this line of reasoning is that the calculations are all “internal”; they include only the thermodynamic costs of the information process itself. Landauer and Bennett both overlooked the real-world “economic” costs — the work associated with building and operating the demon, and, in particular, the work associated with “acquiring” and using (control) information. Indeed, Bennett (2000[1988]:70-71) approvingly quotes at length from Maxwell’s original passage in the Theory of Heat (1871), including the author’s claim that the demon could operate “without expenditure of work.”
  11. We prefer to define natural selection as the differential survival and “replication” among functional variants at all levels of living systems and at all stages of evolution. This point was underscored by Morowitz (1992:49,53) in his book on biogenesis. He pointed out that the conversion of photon energy to chemical energy in a biologically useful way was no simple matter; severe restrictions had to be overcome. Likewise, the biological information that is stored in DNA molecules is costly to maintain; these molecules are constantly undergoing thermal degradation and require energy inputs for their maintenance. This is not an entropic process, however, because the instabilities are energy-related; they are induced by the temperature of the surroundings.
  12. An alternative scenario for eukaryote evolution was recently proposed by William Martin and Miklós Müller (1998). It is called the “Hydrogen Hypothesis,” and it is supported by a variety of genetic and biochemical data. Martin and Müller believe that the process of “symbiogenesis” was cooperative from the start. In their view, a mutually beneficial association developed between ancient hydrogen-producing bacteria and a “methanogen” — a microbe that can utilize hydrogen to extract energy and make sugars, leaving methane as a waste product. The idea came to Martin one day when he was viewing a modern analogue, a one-celled eukaryote called Plagiopyla.
  13. In a recent commentary entitled “Complexity is Just a Word!” (Corning 1998b), it was argued that there is no agreed-upon definition of complexity, and for very good reason. There are, in fact, many different kinds of complexity. It is a qualitative property that we apply to both apples and oranges — to borrow a cliché — which are both fruits and grow on trees but also differ from each other in important ways. Despite the many fruitless attempts (pardon the pun) to develop a general definition for the term, there are a number of commonly associated properties. Often (not always) these include the following attributes: (1) a complex phenomenon consists of many parts (or items, or units, or individuals); (2) there are many relationships/interactions among the parts; and (3) the parts produce combined effects (synergies) that are not always predictable and may often be novel, or unexpected. Kline (1995) has also provided a useful index for measuring the complexity of a cybernetic control system. His “complexity index” (denoted C) contains three quantities: “V” for the number of independent variables needed to describe the state of the system, “P” for the number of independent parameters needed to distinguish the system from like systems, and “L” for the number of feedback loops. An imaginative (and practicable) new approach to measuring complexity specifically in biological systems has recently been proposed by Szathmáry et al. (2001). They propose an array of indices that are focused on the number of interactions that occur in various “networks”.
References cited
Adams, Richard. 1975. Energy and structure. University of Texas Press, Austin.
Ayres, Robert U., & Indira Nair. 1984. Thermodynamics and economics. Physics Today 37:62-71.
Bennett, Charles H. 2000[1988]. Notes on the history of reversible computation. IBM Journal of Research and Development 44(1-2): 70-77.
Berndt, Ernest. 1978. Aggregate energy, efficiency, and productivity measurement. Annual Review of Energy 9:409-26.
Berry, R. Stephen, Geoffrey Heal & Peter Salamon. 1978. On a relation between economic and thermodynamic optima. Resources and Energy 1:125-37.
von Bertalanffy, Ludwig. 1952[1949] Problems of Life: An evaluation of modern biological thought. John Wiley, New York.
von Bertalanffy, Ludwig. 1968. General system theory: foundations, development, applications. George Braziller, New York.
Blake, Robert W., ed. 1991. Efficiency and economy in animal physiology. Cambridge University Press, New York.
Boltzmann, Ludwig. 1909. Wissenschafliche abhandlungen (3 vols.) (F. Hasenöhrl ed.) J. A. Barth, Leipzig.
Bonner, John Tyler. 1988. The evolution of complexity. Princeton University Press, Princeton, NJ.
Boulding, Kenneth E. 1981. Evolutionary economics. Sage Publications, Beverly Hills, CA.
Bridgman, Percy. 1941. The nature of thermodynamics. Harvard University Press, Cambridge.
Brier, Søren. 1992. Information and consciousness: a critique of the mechanistic concept of information. Cybernetics and Human Knowing 1(2/3):71-94.
Brillouin, Leon 1949. Life, thermodynamics and cybernetics. American Scientist 37:554-568.
Brillouin, Leon 1968[1950]. Thermodynamics and information theory. Pp. 161-165. In W. Buckley (ed.) Modern systems research for the behavioral scientist. Aldine Publishing Company, Chicago.
Broda, Engelbert. 1978. The Evolution of Bioenergetic Processes. Pergamon Press, New York.
Brooks, Daniel R., & E.O. Wiley. 1988. Evolution as entropy: toward a unified theory of biology. (2nd. ed.) University of Chicago Press, Chicago.
Bryant, J. 1982. A thermodynamic approach to economics. Energy Economics (January):36-49.
Buckley, Walter., ed. 1968. Modern systems research for the behavioral scientist. Aldine Publishing Co., Chicago.
Carrier, David R. 1984. The energetic paradox of human running and hominid evolution. Current Anthropology 25:483-489.
Collier, John. 1986. Entropy in evolution. Biology and Philosophy 1:5-24.
Collier, John. 2000. Information theory as a general language for functional systems. Computing Anticipatory Systems: CASYS’99, (American Institute of Physics) Conference Proceedings 517: 124-130.
Corning, Peter A. 1983. The synergism hypothesis: a theory of progressive evolution. McGraw-Hill, New York.
Corning, Peter A. 1995. Synergy and self-organization in the evolution of complex systems. Systems Research 12:89-121.
Corning, Peter A. 1996. The co-operative gene: on the role of synergy in evolution. Evolutionary Theory 11:183-207.
Corning, Peter A. 1998a. ‘The Synergism Hypothesis’: on the concept of synergy and its role in the evolution of complex systems. Journal of Social and Evolutionary Systems 21: 133-172.
Corning, Peter A. 1998b. Complexity is just a word! Technological Forecasting and Social Change 58:197-200.
Corning, Peter A. 2001a. Nature’s magic: synergy in evolution and the fate of humankind. In press.
Corning, Peter A. 2001b. ‘Control information’: the missing element in Norbert Wiener’s cybernetic paradigm. Kybernetes 30(9/10) in press.
Corning, Peter A. & Stephen Jay Kline. 1998a. Thermodynamics, information and life revisited, part I: ‘to be or entropy.’ Systems Research and Behavioral Science 15:273-295.
Corning, Peter A. & Stephen Jay Kline. 1998b. Thermodynamics, information and life revisited, part II: ‘thermoeconomics’ and ‘control information.’ Systems Research and Behavioral Science 15: 453-482.
Costanza, Robert. 1980. Embodied energy and economic valuation. Science 210:1219-1224.
Cottrell, Fred. 1953. Energy and society. McGraw Hill, New York.
Cottrell, Fred. 1972. Technology, man and progress. Merrill, Columbus, OH.
Csányi, Vilmos. 1998. Evolution: model or metaphor? Pp. 1-12 in G. Van de Vijver, S. N. Salthe, & M. Delpos (ed.) Evolutionary systems: biological and epistemological perspectives in selection and self-organization, Kluwer Academic Publishers, Dordrecht, Netherlands.
Curtis, Helena. & N. Sue Barnes. 1989. Biology (5th ed.). Worth Publishers, New York.
Davis, Ged R. 1990. Energy for planet earth. Scientific American 263(3):55-62.
Deamer, David W., ed. 1978. Light transcending membranes: structure, function and evolution. Academic Press, New York.
Deamer, David W. and Juan Oro 1980. Role of lipids in prebiotic structures. Biosystems 12: 167-175.
Deamer, David W. and R.M. Pashley. 1989. Amphiphilic components of the murchison carbonaceous chrondite: surface properties and membrane formation. Origins of Life 19: 21-38.
Depew, David J., & Weber, Bruce H. 1988. Consequences of nonequilibrium thermodynamics for the Darwinian tradition. Pp. 317-354 in B.H. Weber, D.J. Depew, & J.D. Smith (ed.) Entropy, information, and evolution: new perspectives on physical and biological evolution, MIT Press, Cambridge.
Depew, David J., & Bruce H. Weber. 1995. Darwinism evolving: systems dynamics and the genealogy of natural selection. MIT Press, Cambridge.
Dragan, Joseph C., & Mihai C. Demetrescu. 1986. Entropy and bioeconomics. Nagard Publishers, Pelham, NY.
Dyson, Freeman J. 1971. Energy in the universe. In Energy and Power (A Scientific American Book). W.H. Freeman and Co., San Francisco.
Eigen, Manfried and Peter Schuster. 1977. The hypercycle: A principle of natural self-organization. Part A: Emergence of the hypercycle. Naturwissenschaften 64: 541-565.
Faber, Malte. 1985. A biophysical approach to the economy: entropy, environment and resources. In W. van Gool & C. Bruggink (eds) Energy and time in the economic and physical sciences, Elsevier Science Publishers B.V., New York.
Fenchel, Tom, & Bland J. Finlay. 1994. The evolution of life without oxygen. American Scientist 82:22-29.
Gage, D, Menahem Schiffer, Stephen Jay Kline & William C. Reynolds. 1966. The non-existence of a general thermokinetic variational principle. Pp. 283-286 in R.J. Donnelly, R. Herman, & I. Prigogine (ed.) Non-equilibrium thermodynamics: variational techniques and stability, University of Chicago Press, Chicago.
Georgescu-Roegen, Nicholas. 1971. The entropy law and economic process. Harvard University Press, Cambridge, MA.
Georgescu-Roegen, Nicholas. 1976. Energy and economic myths: institutional and analytical economic essays. Pergamon Press, New York.
Georgescu-Roegen, Nicholas. 1977a. Bioeconomics: A new look at the nature of economic activity. Pp. 105-134 in L. Junker (ed.) The political economy of food and energy. The University of Michigan Press, Ann Arbor, MI.
Georgescu-Roegen, Nicholas. 1977b. The steady state and ecological salvation: a thermodynamic analysis. BioScience   27:266-270.
Georgescu-Roegen, Nicholas. 1977c. Inequality, limits and growth from a bioeconomic viewpoint. Review of Social Economy 35:361-375.
Georgescu-Roegen, Nicholas. 1979. Energy analysis and economic valuation. Southern Economic Journal 45:1023-1058.
Giampietro, Mario, Sandra G. F. Bukkens & David Pimentel. 1993. Labor productivity: a biophysical definition and assessment. Human Ecology, 21:229-259.
Gibbs, J. Willard. (1906). The scientific papers of J. Willard Gibbs (2 vols.). H.A. Bumstead & R. G. Van Name (ed.) Longmans, Green, New York.
Gilliland, Martha W. 1975. Energy analysis and public policy. Science 189:1051-1056.
Haisch, Bernhard, Alfonso Rueda, & Harold E. Puthoff. 1994. Beyond E=mc2. The Sciences 34(6):26-31.
Hannon, Bruce M. 1973. An energy standard of value. Annals of the American Academy of Political and Social Science 410:139-153.
Harold, Franklin M. 1986. The vital force: a study of bioenergetics. W.H. Freeman and Co., New York.
Harvey, Paul H. 1986. Energetic costs of reproduction. Nature 321:648-649.
Hawking, Stephen W. 1988. A brief history of time: from the big bang to black holes. Bantam Books, New York.
Hess, Benno, & Alexander Mikhailov. 1994. Self-organization in living cells. Science 264:223-224.
Hoffmeyer, Jesper. 1997. Biosemiotics: towards a new synthesis in biology. European Journal for Semiotic Studies 9:355-376.
Hopf, F. A. 1988. Entropy and evolution: sorting through the confusion. Pp. 263-274 in B.H. Weber, D.J. Depew, & J.D. Smith (ed.) Entropy, information, and evolution: new perspectives on physical and biological evolution, The MIT Press, Cambridge.
Hubbert, M. King. 1971. The energy resources of the earth. Pp. 31-40 in Energy and power (A Scientific American book). W. H. Freeman, San Francisco.
Huettner, David A. 1976. Net energy analysis: an economic assessment. Science 192:101-4.
Karasov, William H., & Jared. M. Diamond. 1985. Digestive adaptations for fueling the cost of endothermy. Science 228:202-204.
Kauffman, Stuart A. 1995. At home in the universe: the search for the laws of self-organization and complexity. Oxford University Press, New York.
Kauffman, Stuart A. 2000. Investigations. Oxford University Press, New York.
Kline, Stephen Jay. 1995. Conceptual foundations for multidisciplinary thinking. Stanford University Press, Stanford, CA.
Kline, Stephen Jay. 1997. The semantics and meaning of the entropies. Report CB-1, Department of Mechanical Engineering, Stanford University, Stanford, CA.
Koestler, Arthur. 1967. The ghost in the machine. Macmillan, New York.
Kushmerick, M J., & R.E. Davies (F.R.S) 1969. The chemical energetics of muscle contraction (II) Proceedings of the Royal Society (London) 174: 315-353.
Layzer, David 1988. Growth of order in the universe. Pp. 23-40 in B. H. Weber, D. J. Depew & J.D. Smith (ed.) Entropy, information and evolution: new perspectives on physical and biological evolution. MIT Press, Cambridge, MA.
Le Maho, Yvonne. (1977). The Emperor Penguin: a strategy to live and breed in the cold. American Scientist 65:680-693.
Leff, Harvey S., & Andrew F. Rex. 1990. Maxwell’s demon, entropy, information, computing. Princeton University Press, Princeton, NJ.
Lehninger, Albert L. 1971. Bioenergetics: the molecular basis of biological energy transformations. Benjamin/Cummings, Menlo Park, CA.
Lotka, Alfred J. 1922. Contribution to the energetics of evolution. Proceedings of the National Academy of Science 8:147-155.
Lotka, Alfred J. 1945. The law of evolution as a maximal principle. Human Biology 17:167-194.
Margulis, Lynn. 1993. Symbiosis in cell evolution 2nd ed. W.H. Freeman, New York.
Margulis, Lynn. 1998. Symbiotic planet: a new look at evolution. Basic Books, New York.
Margulis, Lynn, & Dorion Sagan. 1995. What is life? Simon & Schuster (Peter N. Nevraumont), New York.
Martin, William, & Miklós Müller. (1998). The hydrogen hypothesis for the first eukaryote. Nature 391:37-41.
Maxwell, James Clerk. 1871. Theory of heat. Longman’s, Green and Co., London.
Maynard Smith, John, & Eörs Szathmáry. 1995. The major transitions in evolution. Freeman Press, Oxford.
Mayr, Otto. 1970. The Origins of feedback control. MIT Press, Cambridge.
McClare, C.W.F. 1971. Chemical machines, Maxwell’s demon and living organisms. Journal of Theoretical Biology 30:1-34.
McClare, C.W.F. 1972.   A ‘molecular energy’ muscle model. Journal of Theoretical Biology 35:569-595.
Miller, James G. 1995[1978]. Living systems. University Press of Colorado, Niwot, CO.
Mirowski, Philip. 1988. Energy and energetics in economic theory: a review essay. Journal of Economic Issues 22:811-830.
Mirowski, Philip. 1989. More heat than light: economics as social physics: physics as nature’s economics. Cambridge University Press, Cambridge.
Monod, Jacques. 1971. Chance and necessity (A. Wainhouse, trans.). Knopf, New York.
Morowitz, Harold J. 1968. Energy flow in biology. Academic Press, New York.
Morowitz, Harold J. 1978a. Foundations of bioenergetics. Academic Press, New York.
Morowitz, Harold J. 1978b. Proton semiconductors and energy transduction in biological systems. American Journal of Physiology 235: R99-114.
Morowitz, Harold J. 1981. Phase separation, charge separation and biogenesis. Biosystems 14: 41-47.
Morowitz, Harold J., Bettina Heinz and David W. Deamer. 1987. The chemical logic of a minimum protocell. Origins of Life 18: 281-287.
Morowitz, Harold J. 1992. Beginnings of cellular life: metabolism recapitulates biogenesis. Yale University Press, New Haven.
Nicholls, David G., & Stuart J. Ferguson. 1992. Bioenergetics 2. Academic Press, San Diego.
Nicolis, Gregoire, & Ilya Prigogine. 1977. Self-organization in nonequilibrium systems. Wiley, New York.
Nicolis, Gregoire, & Ilya Prigogine. 1989. Exploring complexity. W.H. Freeman, New York.
Nöth, Winifred. 1990. Handbook of semiotics. Indiana Press, Bloomington.
Odum, Eugene P. 1971. Fundamentals of ecology. W.B. Saunders, Philadelphia.
Odum, Eugene P. 1983. Basic ecology. Saunders College Publications, Philadelphia.
Odum, Howard T. 1971. Environment, power and society. John Wiley & Sons, London.
Odum, Howard T. 1988. Self-organization, transformity and information. Science 242:1132-1139.
Odum, Howard T., & Elizabeth C. Odum. 1982. Energy basis for man and nature (2nd ed.). McGraw-Hill, New York.
Parsons, T. R., & B. Harrison. 1981. Energy utilization and evaluation. Journal of Social and Biological Structures 4:1-15.
Penrose, Roger. 1989. The emperor’s new mind: concerning computers, minds, and the laws of physics. Oxford University Press, New York.
Perutz, Max F. 1987. Physics and the riddle of life. Nature 326:555-558.
Powers, William T. 1973. Behavior: the control of perception. Aldine, Chicago.
Prigogine, Ilya, Gregoire Nicolis, & Agnes Babloyantz. 1972a. Thermodynamics of evolution (I). Physics Today 25:23-28.
Prigogine, Ilya, Gregoire Nicolis, & Agnes Babloyantz. 1972b. Thermodynamics of evolution (II). Physics Today 25:38-44.
Prigogine, Ilya, P. M. Allen, & R. Herman. 1977. The evolution of complexity and the laws of nature. In E. Laszlo & J. Bierman (ed.) Goals in a global society, Pergamon, New York.
Prigogine, Ilya. 1978. Time, structure and fluctuation. Science 201:777-84.
Proops, John L. R. 1983. Organization and dissipation in economic systems. Journal of Social and Biological Structures 6:353-366.
Proops, John L. R. 1985. Thermodynamics and economics: from analogy to physical functioning. Pp. 155-175 in W. van Gool & J. J. C. Bruggink (ed.) Energy and time in economic and physical sciences. Elsevier Science Publishers, B.V. New York.
Proops, John L. R. 1987. Entropy, information and confusion in the social sciences. The Journal of Interdisciplinary Economics 1:225-242.
Qvortrup, Lars. 1993. The controversy over the concept of information. Cybernetics and Human Knowing 1(4): 3-24.
Ridley, Mark. 2001. The cooperative gene: how Mendel’s demon explains the evolution of complex beings. The Free Press, New York.
Riedl, Rupert. 1978. Order in living organisms: a systems analysis of evolution (R.P.S. Jefferies, trans.) John Wiley & Sons, New York.
Roberts, Paul C. 1982. Energy and value. Energy Policy 10:171-80.
Salthe, Stanley N. 1993. Development and evolution: complexity and change in biology. MIT Press, Cambridge.
Salthe, Stanley N. 1998. The role of natural selection in understanding evolutionary systems. Pp.13-20 In G. Van de Vijver, S. N. Salthe, & M. Delpos (ed.) Evolutionary systems: biological and epistemological perspectives in selection and self-organization, Kluwer Academic Publishers, Dordrecht, Netherlands.
Schmidt-Nielsen, Knut S. 1972. How animals work. Cambridge University Press, Cambridge.
Schneider, Eric D. & James J. Kay. 1994. Life as a manifestation of the second law of thermodynamics. Mathematical Computer Modeling 19: 25-48.
Schneider, Eric D. & James J. Kay. 1995. Order from disorder: the thermodynamics of complexity in biology. Pp. 161-173 in M.P. Murphy & L. A. J. O’Neill (ed.) What is life? The next fifty years, Cambridge University Press, New York.
Schrödinger, Erwin. 1945. What is life? the physical aspect of the living cell. Macmillan, New York.
Sebeok, Thomas A. 1986. The doctrine of signs. Journal of Social and Biological Structures 9: 345-352.
Shannon, Claude E. 1948. A mathematical theory of communication. Bell System Technical Journal 27:379-423, 623-56.
Shannon, Claude E., & Warren Weaver, 1949. The mathematical theory of communication. University of Illinois Press, Urbana.
Shapiro, James A. 1991. Genomes as smart systems. Genetica 84:3-4.
Shapiro, James A. 1992. Natural genetic engineering in evolution. Genetica 86:99-111.
Slesser, Malcolm. 1975. Accounting for energy. Nature 254:170-72.
Soddy, Frederick. 1933. Wealth, virtual wealth and debt: the solution of the economic paradox. Dutton, New York.
Swenson, Rod. 1989. Emergent attractors and the law of maximum entropy production: foundations to a theory of general evolution. Systems Research 6(3):187-197.
Szathmáry, Eörs, Ferenc Jordán and Csaba Pál. 2001. Can genes explain biological complexity? Science 292(5520): 1315-1316.
Szilard, Leo. 1964[1929]. On the increase of entropy in a thermodynamic system by the intervention of intelligent beings (A. Rapoport & M. Knoller, trans.). Behavioral Science 9:302-310.
Thaler, David S. 1994. The evolution of genetic intelligence. Science 264:224-225.
Ulanowicz, Robert E. 1980. An hypothesis on the development of natural communities. Journal of Theoretical Biology 85: 223-24.
Ulanowicz, Robert E. 1983. Identifying the structure of cycling in ecosystems. Mathematical Bioscience 65: 219-237.
Ulanowicz, Robert E. 1986. Growth and development: ecosystems phenomenology. Springer-Verlag, New York.
Van de Vijver, Gertrudis, Stanley N. Salthe, & Manuela Delpos (ed.) 1998. Evolutionary systems: biological and epistemological perspectives on selection and self-organization. Kluwer Academic Publishers, Dordrecht, Netherlands.
Van Gool, Willem & Jos J. C. Bruggink, ed. 1985. Energy and time in the economic and physical sciences. North Holland, Amsterdam.
Van Valen, Leigh. 1976. Energy and evolution. Evolutionary Theory 1:179-229.
Weber, Bruce H., David J. Depew, & James D. Smith. 1988. Entropy, information, and evolution: new perspectives on physical and biological evolution. The MIT Press, Cambridge.
Wesley, James P. 1989. Life and thermodynamic ordering of the earth’s surface. Evolutionary Theory 9:45-56.
White, Leslie A. 1943. Energy and the evolution of culture. American Anthropologist 45:335-356.
White, Leslie A. 1949. The science of culture: a study of man and civilization. Grove Press, New York.
White, Leslie A. 1959. The evolution of culture. McGraw-Hill, New York.
Wicken, Jeffrey S. 1987. Evolution, thermodynamics, and information: extending the Darwinian program. Oxford University Press, New York.
Wicken, Jeffrey S. 1988. Thermodynamics, evolution, and emergence: ingredients for a new synthesis. Pp. 139-169 in B.H. Weber, D.J. Depew, & J.D. Smith (ed.) Entropy, information, and evolution: new perspectives on physical and biological evolution, The MIT Press, Cambridge.
Wicken, Jeffrey S. 1989. Evolution and Thermodynamics: The New Paradigm. Systems Research 6(3):181-186.
Wiener, Norbert. 1948. Cybernetics: or control and communications in the animal and the machine. MIT Press, Cambridge, MA.
