ABSTRACT: This paper provides a reductive explanation of phenomenal experience that is coherent with exhaustive stipulated philosophical criteria and theories. Explaining phenomenal experience, the contextual identity of human consciousness, has been described as the 'hard problem of consciousness' and is, to some, an insurmountable enigma. Consequently, a reductive explanation solves a mystery of the individual's experience of ‘consciousness’. This is done here by identifying an evolving dynamic systems hierarchy. Although not a requirement of reduction, the explanation I provide is consistent with our understanding of evolution and, consequently, explains the physical origins and purpose of organisms that possess higher-order thought.

This website is archived to preserve existing link references.
Please visit the updated versions of this body of work.

Related papers:

General Theory of Relational Ethics.
Special Theory of Relational Aesthetics.
Enhancing dispositional higher-order thought theory.
Exploring the relationship between phenomenal experience and the phenomenon of consciousness.



Consider the following: The effect of light striking the eye’s retina differs from when it strikes any other surface. Instead of the light energy just interacting with the surface through absorption and reflection, the specialised and ordered structure that is the brain translates the light impulse into a neural format. The neural representation of the light may, for example, become part of a dynamic web of neural associations. Alternatively, it may resolve into a motor response. Whichever form the expression takes, the neural system’s structure is responsible for the ordered maintenance of the neural representation of the light. Following conversion into its neural format, the neural representation of the light energy travels throughout the brain and is fed, filtered, and expressed. Additionally, the human mind uses its brain to an even higher degree by translating energy into neural representations that include the realisation of its own place within the environment. In this way, the energy maintained within the system leads to a recognition and development of an individual's 'concept of reality' based on the neural associations it has collated from its senses (more on this later). Considered in this manner, the brain is a high-level example of an evolved structure that dissipates various forms of energy in a controlled and ordered manner. But the brain is not the only physical structure that has the capacity to control the dissipation of energy. There are lower levels of structural organisation that dissipate energy in an ordered manner too. Importantly, these lower levels are related to or have a relational status with the higher levels. It is the nature of this relationship that I will identify and explain further.

In 1983, I embarked on a project to explain consciousness. It was during this process, on September 13, 1986, that I made a discovery that, to my mind, explained the evolution and structure of consciousness.
The problem I faced was how I might identify and adopt the relevant scientific language that would give the discovery credibility. This was the task to which I then turned my attention, and now, here is the conclusion to this work. The main objective is to introduce and outline a dynamic systems model called the 'Hierarchical Systems Theory'. This theory provides a reductive explanation of phenomenal experience and an explanation of the evolutionary foundations behind consciousness.

Background in 3 parts

Part 1. The Philosophy of Consciousness and the Problem of Phenomenal Experience

Phenomenal experience is the term used to describe the rather subjective ‘something it is like’ aspect of experience. Examples are the experience of depths and shades of colours, of the subtlety of aromas, the experience of sound clusters, or of tactile sensations. Whilst being a fundamental aspect of the way we relate to the environment, the phenomenon of our subjective experience has ineffable qualities. Phenomenal experience is the experience that individuals identify as the experience of consciousness. Why is it that we are unable to understand or analyse phenomenal experience objectively?

There has been a plethora of attempts to explain consciousness and explore the enigmatic features of phenomenal consciousness (Armstrong, 1968, 1984; Carruthers, 1996; Dennett, 1978; Flanagan, 1992; Gennaro, 1996; Kirk, 1994; Lycan, 1987; Nelkin, 1996; Rosenthal, 1986, 1993; Tye, 1995). The inability of these authors to demonstrate objective evaluation means that existing explanations remain conjectural. It is evident that part of the difficulty is that phenomenal experience and consciousness evade analysis and definition. Chalmers (1995) argues that there is a uniquely ‘hard problem’ in deciphering consciousness in that any theory must adequately explain the specific characteristics and the textural qualities of personal phenomenal experiences. Some speculate in favour of a non-reductive explanation to this hard problem that would require the discovery of an, as yet, undiscovered psycho-physical entity or fundamental force with its own laws. Others argue that a reductive explanation is impossible, despite notable flawed claims by some to having already provided one (Carruthers, 2000a; Dennett, 1991; Dretske, 1995; Lycan, 1996; Tye, 2000a).

What then, are the requirements of an adequate reductive explanation of phenomenal experience?

1. Carruthers (2000a; also cf. 2004) states that a successful explanation of phenomenal consciousness should,

(a) explain how phenomenally conscious states have a subjective dimension; how they have feel; why there is something which it is like to undergo them;
(b) why the properties involved in phenomenal consciousness should seem to their subjects to be intrinsic and non-relationally individuated;
(c) why the properties distinctive of phenomenal consciousness can seem to their subjects to be ineffable or indescribable;
(d) why those properties can seem in some way private to their possessors; and
(e) how it can seem to subjects that we have infallible (as opposed to merely privileged) knowledge of phenomenally conscious properties.

2. Chalmers (1995) proposes that a coherent theory of consciousness must satisfy his three evaluative criteria. To paraphrase, these are as follows:

Criterion A, the double aspect theory of information principle, requires that information is fundamental to consciousness, and corresponds to physical and to phenomenal features that are isomorphic. (Section 7.3, para 4).
Criterion B, the principle of organisational invariance, states that any two systems with the same functional organisation will have qualitatively identical experiences. Examples of such systems might include computer systems. (Section 7.2, para 1).
Criterion C, the principle of structural coherence, requires that the processes that explain awareness link structurally to the basis of consciousness by determining the relationship between that of which we are aware (and can report upon) and that of which we experience. (Section 7.1, para 11)

Chalmers states that a reductive explanation should explain the experience of which humans are individually aware and upon which they report, and should provide an appraisal of prevailing physical facts and show how these facts must lead to organisms that possess phenomenal experience.

3. Dowell (2007a) considers arguments that an analysis of phenomenal experience (using type-A and type-B physicalism methods) is unattainable and, consequently, that reductive explanation is impossible. She does this by reviewing Jackson, Chalmers, and Gertler on one side of the debate, and Block, Stalnaker, McLaughlin, and Hill on the other. Each side offers a rival account of what, in the absence of analysis, would be sufficient to justify reductive explanation. Dowell (2007b) allays the concerns of the differing views by providing an alternative illustration of a hypothetical strategy, called a type-C physicalism method, which demonstrates, importantly, how phenomenal analysis is not necessary for an a priori entailment (e.g. the extrapolation of existing physical principles) to satisfy reductive explanation.

4. A type-C reductive explanation is an ontic method which, Carruthers (2004) argues, seeks to specify some significant part of a causal process and to describe the causal mechanism. Woodruff Smith (2001), too, seeks the principal aspects of a fundamental ontology, which, he argues, should explore and explain the relationship between consciousness and what he regards as three key facets of consciousness - evolution, physiology, and behaviour.

Hierarchical Systems Theory is an example of a type-C method of reductive explanation, because it links uncontroversial physical facts to uncontroversial phenomenal conclusions. It provides uniform consistency by showing an evolving systems-hierarchy that extrapolates from physics principles. Furthermore, it uniquely illustrates how the explanation can be theoretically and empirically evaluated, thereby dismantling all of Velmans’ (2001 cf. conclusion) criticisms that reductive physicalism ignores both the first-person phenomenological evidence regarding the nature of consciousness and the third-person evidence about how it relates to a world described by physics.

The theory also has bearing on first-order and higher-order representational theories in that it formalises a dynamic hierarchical structure that facilitates and explains physiological and evolutionary connections. This answers questions posed by Carruthers (2000b) as to how and why transitions in the evolution of consciousness take place.

Part 2. The Science of Thermodynamics and the Problem of Order from Chaos

Since the 1940s there has been an optimism that the scientific fields of complex systems, thermodynamics, and cybernetics might provide fundamental principles to explain not just the order that arises from the complexities and chaos of the environment, but the sophisticated characteristics of organic lifeforms too (Barab et al., 1999; Corning & Kline, 1998; Jorgensen & Svirezhev, 2004; Schneider & Kay 1994; Swenson, 1997).

Despite this optimism, Schrödinger (1944) clarifies the depth and breadth of the challenge when he makes the observation that whilst the laws of physics “have a lot to do with the natural tendency of things to go over into disorder… it is by avoiding the rapid decay into the inert state of ‘equilibrium’ that an organism appears so enigmatic.” (Chap. 6, para 2 & 6). Initially, living organisms appear to contradict the second law because life creates structure and order out of chaos. Despite the apparent paradox, Boltzmann (1886/1974) is clear that the evolution of ordered systems does not conflict with thermodynamic principles; as Pieper (2000, para 2) states, “the synonymous use of the terms entropy and disorder represent a serious misunderstanding of thermodynamics.”

Unperturbed by these challenges, Prigogine (1978) demonstrates in his theory of dissipative structures that self-organisation can evolve spontaneously within chaotic environments. Furthermore, Swenson (1988, 1989) maintains that under certain conditions, ordered flows of energy can maximise the rate at which a system satisfies the second law, thereby making it more effective at dissipating energy than chaotic flows.

Despite the advances of work relating to Prigogine's theory of dissipative structures, Corning & Kline (1998) give an in-depth critique of the application of the second law of thermodynamics to multileveled structures like biological systems, making a distinction between order and functional organisation. What Corning & Kline allude to is that understanding systems dynamics requires understanding the function of systems structures, which is not possible through the application of thermodynamic laws alone.

Part 3. General Systems Theory and the Problem of Information

Whilst considering complex systems in general, Bertalanffy (1950, 1951) makes a couple of points that are worth emphasising:

1. The laws of thermodynamics apply to closed systems, but not to open systems - Importantly, the environment with which lifeforms interact is open.

2. With complex systems such as living organisms, there is some aspect of 'self-regulation' or self-organisation that entails feedback or the transfer of information.
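Bertalanffy's second point lends itself to a minimal illustration. The sketch below is an assumption of mine, not drawn from the source: it models self-regulation as a negative-feedback loop in which information about the deviation from a stable state feeds back to correct that state.

```python
def regulate(state, target, gain=0.5, steps=20):
    """Minimal negative-feedback loop: at each step, information about
    the deviation from the stable target feeds back as a correction."""
    for _ in range(steps):
        error = target - state   # feedback: information about the deviation
        state += gain * error    # self-regulating correction
    return state

# Repeated feedback draws the state toward a steady, stable value.
settled = regulate(0.0, 10.0)
```

Each iteration halves the remaining deviation, so the state converges on the target: a crude stand-in for the steady and stable state that self-regulation maintains.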

Adopting the language of general systems theory, Kuhn (1974) proposes that all systems tend toward equilibrium through communication (where communication translates as the exchange of information) and transaction (involving the exchange of "matter-energy"), and that a prerequisite for the continuance of a system, by controlled or uncontrolled means, is its ability to maintain a steady and stable state.

If one assumes that system stability arises through the transaction of information and that this transaction is in some manner self-regulatory, then one is left with some key questions: What is meant by information, how does the information of systems differ, what is it that leads to information being traded for stability, why is it self-regulatory, why is this a universal principle for all systems, and what are those principles?

What follows is a detailed appraisal of the Hierarchical Systems Theory in order to provide answers to these questions:

Introducing the Hierarchical Systems Theory

What is a system?

A system is an open state, and is composed of interacting and interdependent parts whose combined dynamic relationships determine coherent functional behaviours:

Any given system exists by virtue of its component dynamic stability because without stability the interacting parts cease to maintain their coherent systems behaviours and thus become separate entities. One might say that a system requires dynamic stability to define its existence.

Systems stability and interaction

Coherent functional behaviours are conditional on system stability. Inevitably, any given environmental interaction has the capability of disrupting either system stability directly, or some of the interdependent parts upon which a system relies for its own stability. One might reasonably conclude that when a system interacts with elements within the environment, a process takes place that leads to a reassessment or realignment of its stability. This realignment may lead to two distinct types of reaction:

1. When the realignment of stability is ordered, a system demonstrates its structural function by actively dictating the nature of a stabilising reactive outcome; or

2. When the realignment of stability is disordered, a system’s response does not demonstrate its structural function, and the system passively acquiesces to a stabilising reactive outcome.

This reassessment may also lead to two distinct consequences:

1. Systems behaviours, be they ordered or disordered, arise from dynamic reactive structural re-evaluations. These re-evaluations always result in some form of stable state, even if that state is transitory; and

2. Systems behaviours are indicative of the displacement or conversion of energy from one state to another. When the conversion of energy is ordered, one can define that conversion as the movement of information.

How do hierarchical systems differ from standard systems?

The Hierarchical Systems Theory states that under certain conditions, a system may comprise interdependent parts that have an identifiable relational and hierarchical status. Under such conditions, the nature of the relationship of the interdependent parts is specific to that systems hierarchy. A hierarchical system describes sequentially related component systems structures rather than merely interacting component systems structures.

Theoretically, any systems dynamic has the capacity to evolve a unique and formal relational systems hierarchy or a hierarchical systems dynamic. But what is the difference between a standard system and a hierarchical system? Below are a couple of illustrative examples to address this question:

Standard systems evolution

Take a scenario where, for example, the structure of a system (S) reacts to type-A interactions in an ordered manner and type-B interactions in a disordered manner. In other words, type-A interactions lead to the system demonstrating its structural function whilst type-B interactions lead to chaotic systems dysfunction. If, however, system S’s reaction to a type-A interaction were to lead, coincidentally, to the modification of its structure such that it could then react to type-B interactions in an ordered manner too, that systems structure would have acquired a new capability. This capability would be self-perpetuating because it would enable system S’s structure to stabilise interactive behaviours with both type-A and type-B interactions. The structural reaction would inevitably be positively selective. In this manner, standard systems are capable of evolving over time and by chance, developing increasingly complex structures and functional behaviours.
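The scenario above can be sketched as a toy simulation. The names and interaction types here (`StandardSystem`, 'A', 'B') are illustrative assumptions, not part of the theory's formal apparatus:

```python
class StandardSystem:
    """A system whose structure reacts in an ordered manner only to
    interaction types within its current repertoire."""
    def __init__(self, ordered_types):
        self.ordered_types = set(ordered_types)

    def interact(self, kind, chance_modification=None):
        """Return 'ordered' or 'disordered'. An ordered reaction may,
        coincidentally, modify the structure so that a new interaction
        type (chance_modification) is handled in future."""
        if kind in self.ordered_types:
            if chance_modification is not None:
                self.ordered_types.add(chance_modification)
            return "ordered"
        return "disordered"

s = StandardSystem({"A"})
s.interact("B")                           # disordered: outside the repertoire
s.interact("A", chance_modification="B")  # ordered, with a chance structural change
s.interact("B")                           # now ordered: the capability persists
```

The chance modification is self-perpetuating: once acquired, it stabilises interactions with both type-A and type-B, which is what makes it positively selective.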

Hierarchical systems evolution

Given the same scenario above, consider a situation where system S evolves in complexity whereby its systems function responds favourably to increasingly diverse types of interaction from type-A and B through to type-Z. In doing so, system S becomes a complex system structure with complex arrays of behavioural function. A hierarchical systems dynamic then arises if and when this new system S structure, which now might be very different to the original standard system S, creates its own unique class of interactive environment that is specific to its systems state. This new system S then responds to this different class of interaction and can evolve even further independently of the type-A to Z interactions. The functional behaviours that derive from these new interactions still remain dependent on the system interacting coherently with the type-A to Z interactions, but there is now a hierarchy of systems interactions and behaviours.

To paraphrase the above, a hierarchical systems structure arises when a system evolves such that its functional complexity increases to the point where an aspect of its functional dynamic becomes a separate system state with its own unique interactive environment.
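Extending the same style of toy model (again an illustrative sketch, not the theory's formal apparatus), a hierarchical system can be represented as a base layer that, once sufficiently complex, spawns a higher layer with its own class of interactions, whose ordered behaviour still depends on the base layer's coherence:

```python
class Layer:
    """A repertoire of interaction types handled in an ordered manner."""
    def __init__(self, ordered_types):
        self.ordered_types = set(ordered_types)

    def interact(self, kind):
        return "ordered" if kind in self.ordered_types else "disordered"

class HierarchicalSystem:
    """A base layer handles environmental types A..Z; a higher layer
    emerges with its own system-specific class of interactions."""
    def __init__(self):
        self.base = Layer("ABCXYZ")    # stand-in for types A..Z
        self.higher = Layer({"meta"})  # the new, system-specific class

    def interact(self, kind):
        if kind in self.higher.ordered_types:
            # Higher-level function remains dependent on the base layer
            # continuing to interact coherently with its environment.
            if self.base.interact("A") == "ordered":
                return "ordered-higher"
            return "disordered"
        return self.base.interact(kind)
```

The point of the sketch is the dependency: the 'meta' class of interaction is only ordered while the base layer is, mirroring the claim that higher-level behaviours remain dependent on coherent type-A to Z interactions.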

It is difficult to evaluate a theoretical illustration like this without concrete examples. Consequently, the main focus now is to analyse an actual example of a hierarchical systems dynamic. This analysis has three main objectives which are to,

1. Give a clearer indication of what a hierarchical systems dynamic entails beyond the illustration given above;

2. Demonstrate the underlying unity of the Hierarchical Systems Theory and indicate how this unity explains great complexity; and

3. Provide the first coherent reductive explanation of phenomenal experience and a blueprint of consciousness.

Extrapolation of Hierarchical Systems Categories 1 to 4

Systems Category 1.

Perception States

A compound atomic structure is an example of a dynamic system. Its stability is dependent on its component atomic elements and they in turn are dependent on the stability of more fundamental atomic forces.

As with all systems, an atomic compound interacts with its environment. I say 'interacts' rather than 'reacts' because the consequences of the process are very much dependent on the system's structure and the way it 'responds' to the environment:

If one can state that the interaction between a system and its environment is a process through which (per) a system's structure embraces (capere, to seize) experience... and then reacts, one might conclude that when a system experiences and then reacts, this 'interactive' behaviour is demonstrating environmentally per-ceptive characteristics.

This is an unconventional definition of perception where the sensation of perception is a process that applies equally to inanimate systems structures as to those experiences gathered by the specialised sensory organs of living organisms.

However, one can qualify this definition of perception further: When a system interacts with elements from the environment it will either maintain system stability and demonstrate its structural function by behaving in an ordered manner, that is, in a manner consistent with that system's structure, or the system's dynamic will reacquire stability in a disordered manner whereby the system's structure and system's integrity will be compromised.

The Distinction between Passive and Active Perception States

1. Passive perception - When an atomic compound interacts with its environment in an ordered manner, it results in behaviours that characterise the system's typical behaviour.

Disordered reevaluation occurs when, in our example, a compound atomic structure does not dictate the reactive processes that lead to alterations of its structural composition. This structural alteration, over which the system's functions have no active control, is the process that determines the evolution of the system's chemical complexity. In this respect we can understand that increasingly complex compounds evolve only as an accidental consequence of this type of disordered reevaluation.

But imagine a system capable of retaining active involvement in the evolution of its systems structure even when disordered reevaluation takes place. How could such a systems structure be possible?

Well, one can appreciate that increasingly complex compounds can evolve as an accidental consequence of environmental interactions and that this could lead to the evolution of some special systems characteristics. What process or systems structure though, could potentially evolve whose functional behaviour ensures that it maintains an active involvement in its chaotic or disordered environmental interactions?

2. Active perception - Unlike other systems structures that merely react, there is a system type that uniquely, actively dictates its structure’s reactive capability, even under the duress of chaotic and disordered environmental interactions.

These system types belong to the polynucleotide family and include ribonucleic acid and deoxyribonucleic acid. The ability to replicate affords a structure this unique status because structural replicates determine the nature of their individual structure’s development through successive generations, even after the parent structure dissipates and ceases to exist.

A replicating system encapsulates its perceptions actively by enabling the progressive evolution of its particular structure. Environmental interactions do not just happen and then end as is the case with passively perceptive systems, but they have an impact on a replicating system’s structure that transcends the structure’s lifespan through its successive generations. A replicating system is adaptive, whilst other types of systems are merely reactive.

Actively perceptive systems seek a stable structural adaptation

Whilst the requirements of a passively perceptive system are merely to seek structural stability during environmental interaction, the structure of an individual replicating system represents a snapshot in time of an evolving systems state whose requirements are to acquire and maintain a stable reactive adaptation. Consequently, for each new generation, the interactions of replicating systems lead to structural mutations, which represent new stable adaptations of that particular system over time.
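The generational acquisition of stable adaptations can be sketched as a toy mutation-and-selection loop. Every detail here (bit-string 'structures', the stability measure, the population size) is an illustrative assumption of mine:

```python
import random

def mutate(genome):
    """Replication with a chance structural mutation (one flipped bit)."""
    i = random.randrange(len(genome))
    return genome[:i] + ("1" if genome[i] == "0" else "0") + genome[i + 1:]

def stability(genome, environment):
    """How stably the structure's adaptation matches its environment."""
    return sum(g == e for g, e in zip(genome, environment))

def evolve(environment, generations=200, population=20):
    """Each generation, every structure replicates with mutation and the
    environment selects the most stable adaptations."""
    pool = ["0" * len(environment)] * population
    for _ in range(generations):
        candidates = pool + [mutate(g) for g in pool]
        candidates.sort(key=lambda g: stability(g, environment), reverse=True)
        pool = candidates[:population]
    return pool[0]

random.seed(0)  # reproducible illustration
best = evolve("1011011101")
```

No individual structure dictates the outcome; stability accumulates across generations through selection, which is the sense in which each adaptation is a snapshot in time of an evolving systems state.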

The oldest replicating system found on earth is a 3.5-billion-year-old fossilised bacterium, identified in 1993 by Schopf (1999). Following the first evolutionary explosion of the Big Bang, but ignoring the evolutionary effects of early phase transitions that defined the nature of matter and space, the capacity to duplicate marks the start of a second momentous evolutionary event. Coren (2001) is an independent proponent of this descriptive analogy and suggests, coincidentally, in his analysis of the empirical evidence for a law of information growth, that there could be a relationship between cosmological evolution, life on earth, human culture, and thermodynamics. There is no reason to suppose that the capability to replicate has only happened on earth, and Bennett et al. (2003) conclude that it has probably occurred on thousands of other planetary systems. This momentous evolutionary event began on earth with the unintentional evolution of systems whose structures could replicate.

Systems Category 2.

Consciousness States

In the previous 'Systems Category 1.' section, the intention was to introduce a few key concepts:

1. The first was to introduce the notion that perception is not a characteristic limited to 'living' organisms that 'sense' the environment, but that all systems structures that engage with and react to the environment are also per-ceptive.

2. In order to clarify point 1., it was necessary to determine in what way and by what means one could distinguish the perception of systems that might be regarded as 'not alive' from that of living organisms. This distinction is explained with the concept of passive and active perception states.

3. Finally, it was necessary to describe the mechanisms behind the passive and active perception states and explain how and why they relate hierarchically and naturally progress during the course of evolution.

At the conclusion of category 1, what we have with actively perceptive systems are structures striving for a stable structural adaptation.

Due to the uncontrolled mutation of their systems structures and to the naturally selective effect of the environment over generations, increasingly dynamic and complex life-forms evolve. This evolution of systems structures is a symptomatic feature of the actively perceptive.

Now, we turn our attention to the next category, category 2. The mechanism for category 2 is the same as for category 1 and, in being so, complies with the principles of the Hierarchical Systems Theory and with the principle of structural coherence demanded of a theory of consciousness by Chalmers.

The concept of information as it relates to complex organic structures

One way of looking at organic complexity is to view it as a structural indication of the nature of 'understanding' that a system has of its environment.

What is meant by the use of the term 'understanding' in this context? Clearly, it is not the understanding one might associate with reasoning or informed decision making. Nevertheless, by virtue of their complex structures and behaviours, organic systems do demonstrate that they possess a certain knowledge of the environment:

For example, the complex nature of creating sugars from light, water, and carbon dioxide indicates that the evolved biochemical structures of plants possess the 'knowledge' that enables photosynthesis to take place. Another way of putting it is to say that systems structures themselves are a form or type of information construct. It follows, therefore, that it is with (con) its biochemical structure that a biological system possesses knowledge (scire, to know). Dennett (1995a) also argues that adaptation is a form of knowledge and suggests that any functioning structure carries implicit information about the environment in which the function operates. Surprisingly, Dennett does not then conclude, as I do, that these types of structures must, by definition, be conscious.

Understandably, this far-reaching definition of consciousness requires qualification. Knowledge is more readily associated only with thinking processes derived from neural mechanisms. The inference with this new definition is that any structured series of biochemical processes, for example, chemical pumps, feedback mechanisms, inhibitors, and receptors, can be regarded as conscious because they encode information that relates to their system's interaction with the environment. This is not the case, however. One must be very careful with any definition, particularly when it will be used to create and assess future artificial consciousness applications. Thus, we must qualify as follows:

A systems structure is conscious if its component parts are the intrinsic interdependent elements that define the system's structure and when the system's behaviours arise from the structure's adaptation to environmental stimuli. Only conscious structures display knowledge that is intrinsic to their system's structure. This definition distinguishes conscious organisms from systems that merely react and from manmade allopoietic structures, of which thermostats and computer software are examples. These allopoietic structures are artificially organised constructs designed to have the appearance of, and to mimic the behaviour of, coherent uniform systems. But a thermostat is no more a systems structure than, for example, a person and the house in which they live. Both house and person are organised structures, one evolved and the other artificial; when the person is in the house, that individual’s environmental parameters are controlled and restricted by the house, but putting the two together does not create a uniform systems structure.

This relationship between systems structure and knowledge provides a preliminary account of how information growth relates to consciousness, demonstrating compliance with the first part of Chalmers’ double aspect theory of information principle (i.e. criterion A of a coherent theory of consciousness). The first part of the principle states that information is fundamental to consciousness. But the principle then goes further and states that this information corresponds to physical and to phenomenal features that are isomorphic. It is to this second part of the principle that attention turns in the following section:

Information acquisition and the distinction between passive and active consciousness states

As stated previously, when a system interacts with the environment it will either maintain system stability and demonstrate its structural function by behaving in an ordered manner, that is, in a manner consistent with that system's structure, or the system's dynamic will reacquire stability in a disordered manner whereby the system's structure and system's integrity is compromised.

Replication and mutation are processes that ensure that a system's physiology adapts to the environment over time. However, it is not its replicating structure, but environmental selection that determines the nature of the knowledge that a structure’s physiology develops from one generation to the next. In other words, the stability afforded by the information about the environment contained within structural adaptation is acquired not by systems-design but by uncontrolled and disordered acts of nature. Consequently, a replicating organic structure does not dictate the means by which it acquires complex environmental knowledge.

There is an active state however, whereby a complex organism can actively influence the acquisition of its knowledge. This state is one that has evolved unintentionally in response to the survival advantages afforded by the structural adaptations of category 1 systems structures. A system acquires this active state when it is capable of the immediate and direct evaluation of environmental conditions. Such capability enables its individuals to adapt their behaviour in light of their environmental understandings, rather than rely on innate responses and evolving physiological adaptations. This active state is effected by the biochemical mechanisms of the neural networks and sensory organs of animals.

A neural network is a biochemical construct that is capable of encoding knowledge of environmental experience. The significant difference between the knowledge that neural networks acquire over other forms of physiologically structured knowledge, is that neural understandings can evolve spontaneously in response to localised experiences. Spontaneous responses to the environment potentially have the benefit of enabling behavioural rather than evolutionary adaptation.
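The contrast with generational adaptation can be made concrete with a minimal sketch, in which a single illustrative 'weight' (my assumption, not the paper's model) stands in for a neurally encoded understanding that realigns within one lifetime:

```python
def realign(understanding, experiences, rate=0.3):
    """A neurally encoded 'understanding' spontaneously realigns toward
    each new localised experience - behavioural, not generational,
    adaptation."""
    for experience in experiences:
        understanding += rate * (experience - understanding)
    return understanding

# Repeated exposure to a stable environmental signal draws the encoded
# understanding toward it within a single lifetime.
u = realign(0.0, [1.0] * 50)
```

Unlike the mutation-and-selection process, the structure here updates itself in direct response to local experience, which is the advantage claimed for spontaneous behavioural adaptation.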

Actively conscious systems seek a stable understanding

Interactive neural processes function as a dynamic interdependent system. So although they evolve from the replicating mechanisms of category 1 systems structures, their systems functions are distinct. (This is true of each hierarchical systems category.)

An important characteristic of any systems structure is that its dynamic interdependent parts must be able to maintain stability for the systems structure to exist. With category 2, neurally encoded understandings of the environment represent the stable systems state. Conflicting environmental data therefore has the effect of challenging stability: environmental conditions have a continually destabilising effect on neurally encoded understandings. Thus, in response to environmental interaction, there is a constant realignment of the stability of the understandings that neural structures encode.
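The restabilising dynamic described here can be sketched as a toy computational model. This is purely illustrative, not the paper's mechanism: it assumes, hypothetically, that an encoded understanding can be treated as a numeric estimate that conflicting environmental data destabilises and a simple delta rule realigns. The function name and update rule are inventions for the sketch.

```python
# Toy sketch (illustrative only): a neurally encoded "understanding" as a
# numeric estimate that environmental data continually destabilises, and
# that a simple delta rule continually realigns toward stability.

def realign(understanding, observation, rate=0.25):
    """Return the understanding nudged toward a conflicting observation."""
    error = observation - understanding          # destabilising effect
    return understanding + rate * error          # restabilising realignment

understanding = 0.0
for observation in [1.0, 1.0, 1.0, 1.0]:        # repeated environmental data
    understanding = realign(understanding, observation)

# After repeated exposure, the encoded understanding has moved most of the
# way toward the environmental value: stability is being reacquired.
assert abs(understanding - 1.0) < 0.75
```

The point of the sketch is only that stability is not a fixed state but the limit of a continual realignment process driven by the mismatch between encoding and environment.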

The effect of spontaneous realignments of understanding

The realignment of stability describes a changing dynamic relationship between differing stimuli and neural processing. That neural description is a translation of the effect of the constantly changing internal and external causal environments. What is the nature of the changing environmental effects on that neural description?

In answering this question, one can attribute that effect to the evocation of the phenomenon of feeling: the phenomenon of feeling is a neural translation of the effect of the realignments of understanding about the environment.

Feeling, in this context, is not that which one might associate with human concepts of ‘what it is to have feelings’. Feeling here refers to an effect arising from a process of restabilising neural responses. In itself, the effect has no experiential status as such. Rather, its experiential status is realised only within the context of any potential associations that are made with the causal environment.

The processes that lead to changes in neurally encoded understanding are intimately entwined with the phenomenon of feeling. Furthermore, the ability to learn is very much determined by the ability to make associations between the restabilising processes that generate these feeling phenomena, and environmental experience. Consequently, these associations and the learning that results from them are non-conceptual and distinct from purely innate neural responses to stimuli.

Spontaneous behavioural adaptation is a characteristic that is indicative of an evolving complex interdependent system consisting of experience phenomena and prioritised evaluative processes. These processes demonstrate how information or knowledge can correspond to physical and phenomenal features that are isomorphic, as required by the second part of Chalmers' criterion A.

Feeling and its correct interpretation

Thinking, learning, and being knowledgeable of experience do not give an animal a mind’s-eye view, inner wisdom, or self-knowing concept. Learning and feeling are simply by-products of category 2 consciousness processes.

Consider the nature of communication in an animal that is only actively conscious of experience. In this state, an animal can express itself only by communicating its innate responses to stimuli and by communicating its learnt associations between the phenomenon of its feelings and its experience.

The relationship between feelings and learnt associations can evolve a complex communications structure or set of distinct individual and social attitudes. It is these attitudes that humans identify as emotions, which, in themselves, are categorised according to human conceptual interpretations. But for a category 2 animal, there remain no defined realisations as to the significance of any given feeling regarding its expression or interpretation, nor any insight regarding the relationship between an expression and learnt associations. In the absence of conceptual representation, such an animal cannot begin to communicate any form of conceptual understanding or form a view as to what such an expression means emotionally. Consequently, this phenomenal state of being actively conscious of perception does not embody the notion of what it is to be a human that is aware of the phenomenon of experience.

The complications of the human perspective regarding feeling are due to the reasoning that arises from its conceptual rationale. In this vein, Gunther (2004, p. 43) argues that, “by introspecting [italics added] on what we feel, we learn to recognise what emotional attitude we’re experiencing” (p. 44). This view is re-emphasised by de Sousa (2003), who suggests, “the specific nature of my emotion’s formal object is a function of my appraisal [italics added] of the situation” (1). Introspection and appraisal (as italicised) are distinctive human attributes that alter the interpretations of the status and boundaries of feeling and emotion. In support of this, research by Nielsen (1998), and the reassessment of Damasio (1994, 1999), indicates that human creative, reasoning, and problem-solving processes utilise the evaluation and assessment of emotions rather than purely the emotions themselves. Kaszniak (2001) also highlights research showing that functional aspects of emotion operate outside ‘conscious awareness’.

Artificial consciousness and the principle of organisational invariance

The principle of organisational invariance (Chalmers' criterion B) states that any two systems with the same functional organisation will have qualitatively identical experiences. Theoretically, a hierarchically based systems model founded on the principles established by this paper would create a self-perpetuating artificial state whose functional organisation would generate structures with experiences qualitatively identical to those of conscious animals. Empirically, computer programming that reflects the hierarchical relationship expounded by my Hierarchical Systems Theory would be necessary to prove compliance with Chalmers’ principle of organisational invariance (criterion B).
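What "programming that reflects the hierarchical relationship" might look like can only be gestured at here. The following is a hypothetical sketch, not the paper's implementation: each category is modelled as a system that seeks stability, with each higher category built on the dynamics of the one below. All class and method names, and the numeric "feeling" measure, are inventions for illustration.

```python
# Hypothetical sketch of a hierarchical systems model: each category is a
# system that seeks stability, and each higher category evolves out of the
# dynamics of the one below. Names and mechanisms are illustrative only.

class System:
    """A system state that realigns toward stability under environmental input."""
    def __init__(self):
        self.state = 0.0

    def interact(self, stimulus, rate=0.5):
        self.state += rate * (stimulus - self.state)   # ordered realignment
        return self.state

class Category2(System):
    """Active consciousness: associates 'feeling' (here, the magnitude of
    the realignment) with the causal stimulus - a minimal form of learning."""
    def __init__(self):
        super().__init__()
        self.associations = {}

    def interact(self, stimulus, rate=0.5):
        feeling = abs(stimulus - self.state)           # by-product of realignment
        self.associations[stimulus] = feeling
        return super().interact(stimulus, rate)

class Category3(Category2):
    """Awareness: additionally relates its own learning back to experience."""
    def concept_of(self, stimulus):
        return ("stimulus", stimulus, "felt_as", self.associations.get(stimulus))

mind = Category3()
mind.interact(1.0)
print(mind.concept_of(1.0))   # a 'concept' relating experience and learning
```

The design point the sketch tries to capture is that each category reuses, rather than replaces, the dynamic beneath it, so the functional organisation is hierarchical in the sense the theory requires; whether such an organisation would satisfy criterion B is, of course, the empirical question left open above.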

Active consciousness and its impact on evolution

The physiological impact of active consciousness is considerable because it alters adaptive parameters. These parameters influence rates of cerebral expansion and physiological development:

In their analysis of complex systems, Hinton & Nowlan (1987) demonstrate that “learning organisms evolve much faster than their nonlearning equivalents” (p. 495). Maynard Smith (1987) also argues that learning alters the parameters of evolution. Complin (1997), who examines the mechanics of adaptation using computational experiments and a wide array of literature from biology and evolutionary computation, concludes that learning is a mechanism that leads to the extension of the boundaries of behavioural adaptation.

Unsurprisingly, the ordered and disordered systems dynamic, which led to the emergence of animals that were actively conscious of perception, marks the beginning of a third evolutionary explosion. This explosion began when multicellular organisms first developed the capability of experiential comparison and evaluation in wormlike animals of the phylum Annelida, about 540 million years ago. Initially, a basic form of chemical memory and evaluation fuelled a physiological explosion that followed from the expansion and specialisation of sensory organs and neural network mechanisms. This alternative explanation presents coherent and unified answers to the questions that Hameroff (1998) raises in relation to the Cambrian evolutionary explosion.

Systems Category 3.

Awareness States

A category 2 animal that is actively conscious has a neural mechanism that enables it to modify its behaviour in an ordered manner. This occurs when there is a systems structure in place whose existence enables comparative evaluation of behavioural responses to environmental experiences.

The key points from the previous category 2 section are that,

1. The underlying function of the category 2 consciousness mechanism is to maintain stable understandings.

2. One of the by-products of the mechanism is the creation of the phenomena of feeling.

3. Evaluations and associations between the phenomena of an individual’s feelings and its experiences lead to learning.

4. This complex neural mechanism requires the rapid comparison and prioritisation of experiential neural events upon whose coherence an organism’s survival may benefit.

On the evolution of passive and active conceptual realisations

Additionally, there is a survival-driven potential for cognitive function to evolve degrees of sophistication that enable neural mechanisms to compare not just the relationship between the phenomenon of experience and feeling, but also the relationship between the phenomenon of experience and learning, where the ability to learn is determined by the ability to make associations between the restabilising processes that generate feeling phenomena and the causal environment. One can view the dynamics of this heightened neural mechanism, which explores the relationship between the phenomenon of experience and learning, as a systems structure in its own right.

As with previous examples, when a system interacts, it will either maintain system stability and demonstrate its structural function by behaving in an ordered manner, or the system's dynamic will reacquire stability in a disordered manner, in which case the system's structure passively acquiesces to a restabilising reactive outcome:

In category 3, when understandings between the phenomenon of experience and learning are disordered, understandings are not recognised in terms of their relationship to the process of learning. There is no systematic interpretation of understandings and consequently, no conception of what understandings mean in the context of reality. In this situation, any potential conceptual realisations acquiesce and are but fleeting. This concept is remarkably consistent with Kant's thoughts as described in a letter to Herz:

[If I had the mentality of a sub-human animal, I might have intuitions but] I should not be able to know that I have them, and they would therefore be for me, as a cognitive being, absolutely nothing. They might still... exist in me (a being unconscious of my own existence) as representations..., connected according to an empirical law of association, exercising influence upon feeling and desire, and so always disporting themselves with regularity, without my thereby acquiring the least cognition of anything, not even of these my own states. (Bennett, 1966, p. 104)

My Hierarchical Systems Theory explains fully why Kant's intuitive ideas are implicitly correct. But the Hierarchical Systems Theory enables us to delve much deeper and with confidence:

If the neural system responsible for understanding evolves to a point whereby it begins to develop an evaluative process that actively establishes a neural systems state that relates phenomenal experience with learning itself, then that systems state becomes an ordered structured mechanism in its own right.

Recognitional concepts and the emergent appreciation of needs, emotions, and feelings

An ordered category 3 mechanism recognises and identifies a relationship between learning and experience. What is the significance of this relationship?

To recognise a relationship between learning and experience is to develop a conception of reality's functional relevance:

For example, one may recognise that apples fall from trees. But only by recognising this within a conceptual framework can one conclude that throwing a stick at a juicy apple in a tree that is beyond reach might bring the apple down to within one's grasp. This conceptual understanding is conferred only through evaluating the associations between learnt understandings (cf. Loar (1990), Carruthers (2000a), and Tye (2000b) on phenomenal concepts). The Hierarchical Systems Theory explains that the process relies on several systems layers:

1. Firstly, it relies on the ordered evolution of structures that stabilise experiential reaction.

2. Secondly, it relies on the ordered evaluation of the phenomenon of feeling and experience.

3. Thirdly, it relies on the ordered recognition of the phenomenon of learning and experience.

In animals, learning and emotions are a derivative of complex associations. However, animals need not develop a realisation as to the significance of any given association. To do so is to recognise the functional nature of that association. For example, an animal may learn that putting a stick in a tree crack and twiddling it about reveals a grub that satisfies its hunger. However, it has not made a conceptual association regarding sticks and satisfaction. To do this, it must make an association between objects that, in general, can function as tools for a variety of purposes to achieve a myriad of satisfying outcomes. Such a realisation is what leads to the development of generalised and, ultimately, creative concepts about tools and satisfaction.
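The distinction between a specific learnt association and a generalised functional concept can be made concrete with a toy contrast. This is an illustration only, not the paper's mechanism; the particular data and the abstraction step are invented for the sketch.

```python
# Illustrative toy contrast: specific learnt associations (category 2 style)
# versus a generalised functional concept abstracted from them (category 3
# style). The examples and the abstraction are hypothetical.

# Category 2 style learning: specific object-action-outcome pairs only.
learnt = {
    ("stick", "twiddle in crack"): "grub",
    ("stick", "throw at tree"): "apple",
    ("stone", "crack shell"): "nut",
}

# Category 3 style realisation: abstract the functional role shared by the
# objects across many associations, yielding a generalised 'tool' concept.
tools = {obj for (obj, _action) in learnt}
concept = {"tool": sorted(tools), "function": "achieves satisfying outcomes"}

print(concept["tool"])   # ['stick', 'stone']
```

The sketch makes the structural point plain: the first mapping can only ever reproduce what was directly experienced, whereas the abstracted concept can be applied to novel objects and purposes, which is what the text means by generalised and ultimately creative concepts.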

The proposal is that a complex interdependent conceptual map evolves from a realisation of objective functional properties in view of the emergent appreciation of needs, emotions, and feelings. These come to form an individual human's recognitional concepts.

The emergence of the phenomenon of reality and its by-products

There is one recognitional concept that is more profound than any other: the recognition of the phenomenon of reality. I say that it is profound because it is by recognising the phenomenon of reality that an individual comes to recognise itself as a conceptual being that exists as part of that reality. There is an emerging identification of the concept of self and an active development of an awareness of the conscious state. In the grand scheme of a personal identity, an emerging conceptual map generates concepts about phenomena and the recognition of phenomenal experience.

In turn, being actively aware of the conscious state has a powerful effect on communication. Whilst the communication of only emotional attitudes in category 2 conscious animals can involve complex sounds and gestures, the communication of conceptual reality in category 3 humans is an entirely different proposition: The construction of a conceptual realisation is what compels a human to formulate any suitable grammatical framework that can effectively communicate conceptualised reality. Consequently, an individual’s language develops in response to its maturing concepts and their descriptive relevance.

Herein lies a coherent and far more plausible alternative interpretation of the findings that led Chomsky (1988) to suggest that language arises through a realisation in the brain of an innate language faculty, or “language acquisition device”, that switches on during language development. My Hierarchical Systems Theory explains that language is merely an intrinsic by-product of the dynamics arising from being actively aware of consciousness, notwithstanding that natural selection has responded, leading to various highly developed physiological adaptations that enhance language acquisition and development.

Inevitably, one must reevaluate the conclusions of Savage-Rumbaugh et al. (1993) and Greenfield & Savage-Rumbaugh (1990, p. 540) that the “evolutionary root of human language” can be found in the “linguistic abilities of the great apes”, and be critical of Leakey & Lewin (1992), who speculate that the cognitive foundation for human language is present in ape brains. Hierarchical Systems Theory falsifies the view that physiological characteristics are responsible for the emergence and development of language, and offers the alternative that an evolving systems hierarchy drives the development of physiological evolution for each category. Indeed, the application of the systems hierarchy model to the research shows a clear unifying distinction and a coherent explanation: category 2 consciousness processes compel apes and immature human infants to communicate only innate responses and emotionally motivated attitudes, whilst category 3 processes additionally compel mature humans to communicate conceptualised reality.

Why are phenomenal properties ineffable?

For an individual to be aware of the conscious state is to be aware of the phenomena of learning and feeling, and not of the mechanisms behind learning and feeling. This provides the explanation for point 3 of Carruthers' requirements of an adequate reductive explanation of phenomenal experience; namely, why the properties distinctive of phenomenal consciousness can seem to their subjects to be ineffable or indescribable:

Higher-order thought (HOT) processes (Carruthers, 2000b, notably section 7 on dispositionalist HOT; Pharoah, 2008; Rosenthal, 2002) generate a perspective that has no means of accessing its category 1 processes, which organise the structure of its complex biochemical mechanisms, or its category 2 processes, whose first-order representations (cf. Dretske, 1995; Tye, 1995 for FOR theories) generate its sensations and feelings (Carruthers, 2000b).

This does little to deter individuals from trying to conceptualise the phenomenon of their experiences, which include bodily functions, sensations, emotions, and consciousness itself. Concluding such cogitations, an individual might come to define sensations as, for example, ‘introspectively accessible phenomenal experiences that are irreducible’, and yet this conceptual definition provides no clue as to what sensations actually feel like or what exactly they are. Inevitably, despite the familiarity of phenomenal experience and consciousness, conceptual identification remains elusive. This is clearly demonstrated by the commentary on thought experiments like Nagel’s "bat" (1974) and Jackson's "Mary" (1986) (cf. Dennett's 'The Unimagined Preposterousness of Zombies', Dennett, 1995b).

The relationship between stable concepts of reality and social cohesion

One of the key characteristics of a system state is its tendency toward stability. This is not surprising: if the interdependent parts of a system cause instability, the continuing survival of the system state is jeopardised. A system state is defined by the component stability of its dynamic parts.

A category 3 system state seeks a stable concept of reality. When a concept does not conform to the reality of its learning and experiential evaluations, the stability of that conceptual system is in jeopardy. And yet, an individual is compelled to reevaluate its concepts of reality whenever they are challenged.

Contemplation and discussion always challenge the stability of concepts about reality. Importantly, every individual’s concept of reality includes the individual’s stable interpretation of self. Consequently, there is a tendency for contemplation and discussion to feel like a challenge to the self-concept. Individuals are prone to be extremely protective of their perspective of reality and eager to maintain stable concepts, however absurd these may be shown to be. Introducing new concepts is challenging because it requires the 'gentle' dismantling of existing and well-guarded concepts of reality.

The concepts of individuals encapsulate family, tribal, and social beliefs and ideals. In these situations, concepts are derived not so much from the interpretation of experience as from the unquestioning incorporation of culture, beliefs, and ideals. Individuals will feel compelled to protect the ideals and beliefs of their affiliated groups. Concepts derived from group affiliation are particularly potent because they are not experience based. The reevaluation of an individual’s concept of reality can generate both positive and negative conclusions that fuel individual and societal creativity and bias.

Prejudice and creativity are symptomatic of the reinterpretation of realisations, and evoke the experiences and behaviours that are unique to human societies. Interestingly, under certain specific conditions, different classes of conceptual distortions and divergence strategies may develop to maintain conceptual stability. One could classify these distortions and their ensuing behaviours in terms of their relationship with, and evolution from, category 1, 2, and 3 anomalies. This classification is a new science that requires significant study, and one that will lead to advances in psychological profiling and treatments, and to principles of group conflict and resolution.

Hierarchical Systems Theory is simple, unifying, and coherent

Hierarchical Systems Theory explains how ordered and disordered interaction between systems and their environments leads to an evolved hierarchy of systems structures. It also explains why these systems are self-regulatory. The nature and specifics of this self-regulation are examined in some depth in the example given, which shows the simple mechanisms from which great diversity ensues.

The example explains in detail the unified and simple systems dynamic behind the phenomenon of experience that humans call consciousness. It explains the systems hierarchy and mechanisms in terms that structurally relate replication, environmental information, phenomenal experience, and the phenomenon of conceptualised reality. In doing so, it satisfies Chalmers' principle of structural coherence (criterion C), which requires that the processes that explain awareness link structurally to the basis of consciousness by determining the relationship between that of which we are aware (and can report upon) and that which we experience.

Why is this explanation so revealing? Because the explanations for each of the categories that constitute the systems hierarchy give insightful answers to long-standing questions in many fields of study. One cannot disregard as coincidental the fact that the simple extrapolation of this unified systems dynamic describes the behaviour, interdependent relationships, and defining characteristics of important and definitive behaviours and stages in evolution.

Furthermore, Hierarchical Systems Theory demonstrates an example of a Dowell (2006) type-C physicalist’s reductive explanation of phenomenal experience by providing a structural link between that of which we are aware, which comprises our conceptual representations, and that which we experience.

During the late Pliocene, about two and a half to three million years ago, the fourth evolutionary explosion began. The hominid category 2 brain may initially have developed in sophistication gradually, as an adaptive consequence of evolution and survival. However, this incidental adaptation resulted in the emergence of category 3. The benefits of conceptualising reality had a dramatic effect on cerebral expansion, physiological development, social dynamics, and the survival ethic of the primate family. The development of humankind and its unique identifying characteristics is the conclusion of the fourth evolutionary explosion, in which conceptual evolution rather than biological evolution has taken precedence.

But the theory has even more to offer:

By extrapolation, one can ascertain the nature and mechanisms behind the next evolutionary stage: the evolutionary stage to which humankind is heading and at which it is yet to arrive.

What is that future state?

Systems Category 4.

Future States

Theoretically, there is scope for the evolution of a future category 4 state. What are the defining characteristics of this future state to which the human mind may evolve?

You may have noted that each category displays, in a disordered manner, a specific and unique characteristic. That the characteristic grants the system in each category some kind of advantage means that there is an evolutionary precedent. When a system then evolves such that its structure is capable of controlling the nature of the characteristic, it has acquired a new and powerful functional construct: a construct that can maximise the system's structural advantages and become the formative dynamic in the evolution of entirely new physiological features and functional behaviours.

In order to determine what future category 4 will be, therefore, one has to look at the characteristics unique to category 3, and ask the following questions:

1. What characteristic, unique to category 3, is being acquired in a disordered manner?

2. Has this characteristic evolved in complexity over generations, and does it have the potential to form a systems construct that supersedes category 3 and could lead to a fifth evolutionary explosion?

3. What will be the defining qualities of this future construct?

4. In what way will it influence humanity?

1. The Hierarchical Systems Theory tells us that the next systems category will evolve out of the category 3 system. Therefore, in order to identify the category 4 construct, one needs to ask: what specific 'class' of concepts about reality has been evolving accidentally and progressively within the category 3 system?

One possible candidate is creativity. After all, creativity is a unique characteristic of category 3 humans. But creativity is not a concept so much as an effect or by-product of category 3. Likewise, phenomenal feeling is an effect or by-product of category 2 animals. Consequently, just as phenomenal feeling does not form the systems construct of category 3, creativity is not the correct candidate for the future category 4 construct.

One might suggest science, but is the development of science a conceptual development or merely a characteristic feature of category 3 capability? I believe the latter to be the case.

Should we look at an ethereal concept like 'love'? The problem with love as a candidate is that the concept does not appear to be progressive. It is not an evolving concept but one that seems to need to defy objectivity, to resist definition, and to hanker after the inexplicably primitive.

Morality is a conceptual construct that is in flux as it evolves under the auspices of religion, philosophy, and our individual sense of human value. It is moral judgment that instructs our responses to daily considerations. The morality of our times is very different from that found, say, in the Old Testament, in the teachings of the ancient Greeks, or even across present-day cultural boundaries and countries. Morality is a strong potential candidate as a systems construct for category 4:

What could possibly evolve accidentally from morality concepts that would lead to a unique active and ordered systems state? One can assume that certain individual humans have glimpsed this possibility but have been unable to grasp the essence of the construct and to imbue human conscience with a stable construct in which all of humanity could actively partake, just as some category 2 apes might have emerging revelations of concepts of reality but be unable to grasp their significance fully.

How can we be sure that morality is the construct that we are looking for? There is a way of assessing this by looking to aesthetics. This will be covered in another article.

2. Think of morality as having evolved over millennia. It certainly has the capability of developing into a profoundly powerful systems construct. But as a construct for the future category 4, one would not recognise the concept of morality in the way humans currently understand it; humans would relate to it in a different light. I think of this new understanding and redefinition as a 'moral wisdom' or a 'practical wisdom'. Humanity will seek a stable wisdom. What are artistic, scientific, religious, and philosophical endeavours about, if not finding a path to wisdom? This is the driving force behind our creative urges; humanity just has not realised this to be the case. Wisdom has been acquired by accident. Humanity has been acquiring wisdom passively, as a disordered consequence of our creative activities.

3. What will be the defining qualities of this future construct? I think it will lead to the end of intolerance, to the justification of all proper action, to a founding classification of social cohesion, and to an understanding of boundless endemic creativity.

4. A category 3 human that is actively aware of its conscious state has a cognitive mechanism that enables it to evaluate its behaviour through conceptual appraisal. Whilst assessing the intentions and effects of its behaviour, an individual develops concepts by which it can make definitive judgments. The communication of these considerations in a societal context leads to the development of values. Any particular family of values is based on separate but interacting sets of principles, categorised, for example, in the morality of religion, law, ethics, and personal considerations of free will. These are frequently in conflict with one another, creating conceptual and behavioural paradoxes. Consequently, the evolution of moralities has been a disordered by-product of passive category 4 processes. This autopoietic process, and the consequential social attitudes and behaviours, indicates that human moral conscience exists and evolves within the bounds of a unified construct that obeys fundamental dynamic systems principles. A category 4 state will, theoretically, involve the acquisition of an intentionally structured ethical discourse bound by a fundamental wisdom. This should lead to an explosion in human behavioural and physiological development.


Armstrong, D. (1968). A Materialist Theory of the Mind. London: Routledge. Textlink
Armstrong, D. (1984). Consciousness and causality. In D. Armstrong & N. Malcolm (Eds.), Consciousness and Causality (pp. 103-191). Oxford: Blackwell. Textlink
Barab, S., Cherkes-Julkowski, M., Swenson, R., Garrett, S., Shaw, R.E., & Young, M. (1999). Principles of self-organization: Learning as participation in autocatakinetic systems. The Journal of Learning Sciences, 8, 349-390.
Bateson, M. (2002). Context-dependent foraging choices in risk-sensitive starlings. Animal Behaviour, 64, 251-260.
Bennett, J. (1966). Kant's Analytic. Cambridge: Cambridge University Press.
Bennett, J.O., Shostak, S., & Jakosky, B. (2003). Life in the Universe. Boston, MA: Addison-Wesley.
Bertalanffy, L. von (1950). An Outline of General System Theory. British Journal for the Philosophy of Science, 1, 139-164.
Bertalanffy, L. von (1951). General system theory - A new approach to unity of science (Symposium). Human Biology, 23, 303-361.
Boltzmann, L. (1974). The second law of thermodynamics. In B. McGuinness (Ed.), Theoretical Physics and Philosophical Problems (pp. 13-32). NY: D. Reidel. (Original work published 1886).
Carruthers, P. (1996). Language, Thought and Consciousness: an essay in philosophical psychology. Cambridge: Cambridge University Press.
Carruthers, P. (2000a). Phenomenal Consciousness. Cambridge: Cambridge University Press.
Carruthers, P. (2000b). The evolution of consciousness. In P. Carruthers & A. Chamberlain (Eds.), Evolution and the Human Mind. Cambridge: Cambridge University Press.
Carruthers, P. (2001). Consciousness: explaining the phenomenon. In D. Walsh (Ed.), Naturalism, Evolution and Mind. Cambridge: Cambridge University Press.
Carruthers, P. (2004). Reductive explanation and the ‘explanatory gap’. Canadian Journal of Philosophy, 34.
Chalmers, D. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.
Chen, Z., & Siegler, R.S. (2000). Across the great divide: bridging the gap between understanding of toddlers’ and older children’s thinking. Monographs of the Society for Research in Child Development, 65(2), 1-96.
Chomsky, N. (1988). Language and Problems of Knowledge: The Managua Lectures. Cambridge, MA: MIT Press.
Complin, C. (1997). The evolutionary engine and the mind machine: A design-based study of adaptive change. Unpublished doctoral dissertation, University of Birmingham, UK.
Coren, R. (2001). Empirical evidence for a law of information growth. Entropy, 3, 259-272.
Corning, P.A., & Kline, S.J. (1998). Thermodynamics, Information and Life Revisited, Part I: ‘To Be or Entropy’. Systems Research and Behavioural Science, 15, 273-295.
Damasio, A. (1994). Descartes' Error. NY: G.P. Putnam's Sons.
Damasio, A. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. NY: Harcourt Brace.
Dennett, D. (1978). Towards a cognitive theory of consciousness. In D. Dennett (Ed.), Brainstorms (pp. 149-173). Hassocks: Harvester Press.
Dennett, D. (1991). Consciousness Explained. London: Penguin Press.
Dennett, D. (1995a). Darwin’s Dangerous Idea: Evolution and the Meanings of Life. NY: Simon & Schuster.
Dennett, D. (1995b). The Unimagined Preposterousness of Zombies. Journal of Consciousness Studies, 2(4), 322-326.
Dennett, D., & Kinsbourne, M. (1992). Time and the observer: the where and when of consciousness in the brain. Behavioral and Brain Sciences, 15(2), 183-247.
de Sousa, R. (2003). Emotion. In E. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2003 Edition). Stanford: Stanford University.
Dowell, J.L. (2007a). Serious metaphysics and the vindication of reductions. Forthcoming in Philosophical Studies.
Dowell, J.L. (2007b). A priori entailment and conceptual analysis: Making room for type-C physicalism. Forthcoming in Australasian Journal of Philosophy.
Dretske, F. (1995). Naturalizing the Mind. Cambridge, MA: MIT Press.
Flanagan, O. (1992). Consciousness Reconsidered. Cambridge, MA: MIT Press.
Forel, A.H. (1908). The Senses of Insects (M. Yearsley, Trans.). UK: Methuen & Co. (Original work published 1901).
Fowler, T.B. (1979). Computation as a thermodynamic process applied to biological systems. International Journal of Biomedical Computing, 10(6), 477-489.
Gennaro, R. (1996). Consciousness and Self-Consciousness. Amsterdam: John Benjamins Publishing.
Greenfield, P.M., & Savage-Rumbaugh, E.S. (1990). Grammatical combination in Pan paniscus: Processes of learning and invention in the evolution and development of language. In S.T. Parker & K. Gibson (Eds.), "Language" and Intelligence in Monkeys and Apes: Comparative Developmental Perspectives (pp. 540-579). NY: Cambridge University Press.
Gunther, Y.H. (2004). The phenomenology and intentionality of emotion. Philosophical Studies, 117(1-2), 43-55.
Hameroff, S. (1998). Did consciousness cause the Cambrian evolutionary explosion? In S. Hameroff, A. Kaszniak, & A. Scott (Eds.), Toward a Science of Consciousness II: The 1996 Tucson Discussions and Debates (pp. 421-437). Cambridge, MA: MIT Press.
Hinton, G.E., & Nowlan, S.J. (1987). How learning can guide evolution. Complex Systems, 1(3), 495-502. Reprinted in R.K. Belew & M. Mitchell (Eds.) (1996), Adaptive Individuals in Evolving Populations: Models and Algorithms (pp. 447-454). Santa Fe: Addison-Wesley.
Jackson, F. (1986). What Mary didn't know. The Journal of Philosophy, 83(5), 291-295.
Jorgensen, S.E., & Svirezhev, Y.M. (2004). Towards a Thermodynamic Theory for Ecological Systems. San Diego: Elsevier.
Kant, I. (1929). Critique of Pure Reason (N. Kemp Smith, Trans.). Hong Kong: Macmillan. (Original work published 1781/1787).
Kaszniak, A. (2001). Emotions, Qualia, and Consciousness. Singapore: World Scientific Publishing.
Kirk, R. (1994). Raw Feeling. Oxford: Clarendon Press.
Kuhn, A. (1974). The Logic of Social Systems. San Francisco: Jossey-Bass.
Leakey, R., & Lewin, R. (1992). Origins Reconsidered: In Search of What Makes Us Human. NY: Doubleday.
Loar, B. (1990). Phenomenal states. Philosophical Perspectives, 4, 81-108.
Lycan, W. (1987). Consciousness. Cambridge, MA: MIT Press.
Lycan, W. (1996). Consciousness and Experience. Cambridge, MA: MIT Press.
Maynard Smith, J. (1987). When learning guides evolution. Nature, 329, 761-762. Reprinted in R.K. Belew & M. Mitchell (Eds.) (1996), Adaptive Individuals in Evolving Populations: Models and Algorithms (pp. 455-457). Santa Fe: Addison-Wesley.
Mingers, J. (1995). Self-Reproducing Systems: Implications and Applications of Autopoiesis. New York: Plenum Press.
Murphy, M.P., & O'Neill, L.A. (Eds.). (1995). What is Life? The Next Fifty Years: Speculations on the Future of Biology. Cambridge: Cambridge University Press.
Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83(4), 435-450.
Nelkin, N. (1996). Consciousness and the Origins of Thought. Cambridge: Cambridge University Press.
Nguyen, S.P., & Murphy, G.L. (2003). An apple is more than just a fruit: Cross-classification in children’s concepts. Child Development, 74(6), 1783-1806.
Nielsen, L. (1998). Modeling creativity: Taking the evidence seriously. In S.R. Hameroff, A.W. Kaszniak, & A.C. Scott (Eds.), Toward a Science of Consciousness II: The Second Tucson Discussions and Debates (pp. 717-824). Cambridge, MA: MIT Press.
Pharoah, M.C. (2008). Enhancing dispositional higher-order thought theory.
Pieper, J. (2000). Entropy, disorder and life.
Prigogine, I. (1978). Time, structure, and fluctuations. Science, 201, 777-785.
Rosenthal, D. (1986). Two concepts of consciousness. Philosophical Studies, 49, 329-359.
Rosenthal, D. (1993). Thinking that one thinks. In M. Davies & G. Humphreys (Eds.), Consciousness (pp. 197-223). Oxford: Blackwell.
Rosenthal, D. (2002). Consciousness and higher-order thought. In L. Nadel (Ed.), Encyclopedia of Cognitive Science (pp. 717-726). London: Macmillan, Nature Publishing Group.
Sato, Y., Akiyama, E., & Farmer, J.D. (2002). Chaos in learning a simple two-person game. Proceedings of the National Academy of Sciences, USA, 99(7), 4748-4751.
Savage-Rumbaugh, E.S., Murphy, J., Sevcik, R.A., Brakke, K.E., Williams, S.L., & Rumbaugh, D.M. (1993). Language comprehension in ape and child. Monographs of the Society for Research in Child Development, 58(3-4, Serial No. 233).
Schneider, E.D., & Kay, J.J. (1994). Life as a manifestation of the second law of thermodynamics. Mathematical and Computer Modelling, 19, 25-48.
Schneider, T.D. (2000). Evolution of biological information. Nucleic Acids Research, 28, 2794-2799.
Schopf, J.W. (1999). Cradle of Life: The Discovery of Earth's Earliest Fossils. Princeton: Princeton University Press.
Schrödinger, E. (1944). What is Life? Cambridge: Cambridge University Press.
Shafir, E., Simonson, I., & Tversky, A. (1993). Reason-based choice. Cognition, 49, 11-36.
Shafir, S. (1994). Intransitivity of preferences in honey-bees - support for comparative-evaluation of foraging options. Animal Behaviour, 48(1), 55-67.
Swenson, R. (1988). Emergence and the principle of maximum entropy production: Multi-level system theory, evolution, and non-equilibrium thermodynamics. Proceedings of the 32nd Annual Meeting of the International Society for General Systems Research, 32.
Swenson, R. (1989). Emergent attractors and the law of maximum entropy production: Foundations to a theory of general evolution. Systems Research, 6, 187-197.
Swenson, R. (1997). Autocatakinetics, evolution, and the law of maximum entropy production: A principled foundation toward the study of human ecology. Advances in Human Ecology, 6, 1-46.
Tye, M. (1995). Ten Problems of Consciousness: a representational theory of the phenomenal mind. Cambridge, MA: MIT Press.
Tye, M. (2000a). Language, Thought and Consciousness: an essay in philosophical psychology. Cambridge: Cambridge University Press.
Tye, M. (2000b). Color, Content, and Consciousness. Cambridge, MA: MIT Press.
Varela, F.J., Maturana, H.R., & Uribe, R. (1974). Autopoiesis: the organization of living systems, its characterization and a model. Biosystems, 5, 187-196.
Velmans, M. (2001). A natural account of phenomenal consciousness. Communication and Cognition, 34(1/2), 39-60.
Wedell, D.H. (1991). Distinguishing among models of contextually induced preference reversals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 767-778.
Wolfram, S. (1984). Twenty Problems in the Theory of Cellular Automata. Physica Scripta, T9, 170-183.
Wolfram, S. (1994). Cellular Automata and Complexity: Collected Papers. Reading, MA: Addison-Wesley.
Woodruff Smith, D. (2001). Three facets of consciousness. Axiomathes, 12, 55-85.

Footnote 1 - The Multiple Drafts model
The Multiple Drafts model "holds that whereas the brain events that discriminate various perceptual contents are distributed in both space and time in the brain, and whereas the temporal properties of these various events are determinate, none of these temporal properties determine subjective order, since there is no single, constitutive 'stream of consciousness' but rather a parallel stream of conflicting and continuously revised contents." (Dennett & Kinsbourne, 1992, abstract)

Footnote 2 - Phenomenal experience
The relationship between experiences, feelings, and their associations is very complex.
Some experiences lead to innate reactive behaviours and feelings. The evocation of innate feelings can be both subtle (pale shades of colour, soft vowel sounds, gentle pressure, the smell of vanilla) and extreme (vibrant shades and contrasts, loud consonant sounds, painful pressure, the smell of ammonia). Our innate responses span a comparable range and have evolved for good reason.
Other experiences lead to learnt behaviours. Through the complex procedures of associative learning, complex behavioural patterns can emerge. (Note, there is a significant distinction between the animal and the human relationship with phenomenal experience.) For example, if an individual's innate reaction to pain is withdrawal, but every time the individual experiences it, something nice happens (e.g. it is given comforting stimuli, such as food or companionship), the observed emotional response will evolve over time (e.g. from withdrawal to suspicion, fear, depression, or tolerance). The proposition is that emotions are the evolving consequence of learnt associations between innate feelings and complex stimuli. A derivative of this proposition provides a distinction between processes that lead to feelings and those that lead to emotions. The nature of the complexity is due to the survival implications of the numerous diversifications arising from the degree and regularity of stimuli, and to the depth of innate knowledge and cognitive capability.

Footnote 3 - Phenomenal Concept Strategy
What is the nature of a realization? In animals, learning and emotions are a derivative of complex associations. However, an animal need not develop a realization of the significance of any given association. To do so is to recognize the functional nature of that association.
For example, an animal may learn that putting a stick in a hole and twiddling it about reveals a grub that satisfies its hunger. However, it has not made a conceptual association regarding sticks and satisfaction. To do this, it must make an association between objects that, in general, can function as tools for a variety of purposes to achieve a myriad of satisfying outcomes. Such a realization is what leads to the development of generalized and ultimately creative concepts about tools and satisfaction. The proposal is that a complex interdependent conceptual map evolves from a realization of objective functional properties in view of the emergent appreciation of needs, emotions, and feelings. In the grand scheme of a personal identity, this map generates the concept of phenomena and the recognition of phenomenal experience.

Footnote 4 - Kant's perspective
"It must be possible for the 'I think' to accompany all my representations; for otherwise something would be represented in me which could not be thought at all, and that is equivalent to saying that the representation would be impossible, or at least would be nothing to me." (Kant, 1929, B 131-2)
Kant makes the same point in a letter to Herz (quoted in Bennett, 1966, p. 104):
"[If I had the mentality of a sub-human animal, I might have intuitions but] I should not be able to know that I have them, and they would therefore be for me, as a cognitive being, absolutely nothing. They might still... exist in me (a being unconscious of my own existence) as representations..., connected according to an empirical law of association, exercising influence upon feeling and desire, and so always disporting themselves with regularity, without my thereby acquiring the least cognition of anything, not even of these my own states."
The paper clearly defines the relationship between the sensory (experiential) and the intellectual (conceptual), where once they were regarded as species of the same genus (Bennett, 1966, p. 55).

Footnote 5 - Phenomenal concepts that lack phenomenal experience
People can demonstrate knowledge without empathy. Such individuals may have descriptive phenomenal concepts but lack the associative phenomenal experience from which such concepts would normally germinate. This is aptly illustrated by the performing musician.
A performing musician may display exceptional technical ability yet leave an audience with a sense of indifference or detachment, because the performance does not communicate feelings from which a shared emotional association can be made. In this scenario, the performer applies technical knowledge of his or her instrument, and of musical cadence and melody, to evince a competent rendition of a score of music, without utilising any emotionally objective knowledge of the significance of any given melodic or harmonic invention.
Conversely, an intuitive performance lacking structural and technical proficiency leaves an audience unable to discern or interpret its content. "Thoughts without content are empty, intuitions without concepts are blind. It is, therefore, just as necessary to make our concepts sensible, that is, to add the object to them in intuition, as to make our intuitions intelligible, that is, to bring them under concepts. These two powers or capabilities cannot exchange their functions. The understanding can intuit nothing, the senses can think nothing. Only through their union can knowledge arise... [this is] a strong reason for carefully separating and distinguishing the one from the other." (Kant, 1929, A 51-2 = B 75-76)
From this union one can determine a combinatory emotional intelligence in humans alone that enriches phenomenal concepts and, consequently, the ability to communicate them.