Symmetry Break

Cognitive Symmetry Breaking

…we cannot really use the word ‘selfish’ because that carries the implication
that there actually is such a thing as a self. Talking about being ‘unselfish’
traps us in the same false assumption—it reifies the idea of self. Saying
that one ought not to cherish the self is the same thing as cherishing the
self—putting oneself last is the same thing as putting oneself first, since
everything still revolves around the central idea of self. There is no getting
away from it. Similarly, there is no way of getting out of a neurotic pattern
of behavior because mind and the pattern it creates by deliberate action are
the same system.
When a definite…self is created there is an intensely rewarding glow of
satisfaction—every bit of me feels suffused with the delicious warmth of
confirmation: “I am!…I am!…I am!…” This is the message and I could
listen to it all day! When we are euphoric this is the gist of what we are
constantly trying to tell others—if not directly by saying how great we
are, then indirectly by spinning a web of self-reference, by becoming
proprietary towards everything that is going on, by exerting control
on the meaning of what is happening…
“Cognitive Symmetry Breaking,” Nicholas Williams

In thermodynamic terms, the imposition of a model to explain what is going on results in an increase in the entropy of the system. This statement is fairly counter-intuitive since one naturally takes the transition from unpredictability to predictability to be an increase in order. One imagines that having a model allows one to derive more, and not less, useful information from the system under observation. The key word here is ‘useful’ since useful means that certain assumptions have been made and forgotten about. Before a model is used to filter information there is no knowing what is relevant and useful, and what is irrelevant and useless, and therefore the amount of information that is needed to meaningfully describe the system is unlimited. This is another way of saying that the information content of an undescribed system is infinite, since no decisions have been made regarding ‘cut-off’ points, points beyond which we have no interest in collecting data. In other words, if ‘anything could be the case,’ then we need an endless series of descriptive terms to cover all the possibilities, which is the definition of maximum complexity and maximum information content. On the other hand, if we already know what ‘sort’ of things are possible (i.e., if we already know where to look) then this reduces the parameter of complexity. A symmetry-break in this case means the situation where I am able to discriminate: I have a ‘right’ way and a ‘wrong’ way to look at the universe. Symmetry is where no discrimination is possible, where there is no ‘right way’ and no ‘wrong way,’ no ‘up’ and no ‘down’; there is no ‘situational polarity’—all directions are the same.
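The description-length claim here has a concrete analogue in Shannon's information measure (the analogy to Shannon entropy is mine, not the author's): specifying one outcome among n equally likely possibilities takes log2(n) bits, so a model that rules most possibilities out in advance collapses the description length. A minimal Python sketch:

```python
import math

def bits_to_describe(n_possibilities):
    # Shannon: a uniform choice among n outcomes needs log2(n) bits.
    return math.log2(n_possibilities)

# An 'undescribed' system has no cut-off points: as we refuse to rule
# anything out, the outcome space (and the description length) grows
# without limit.
for n in (2, 1024, 2**40):
    print(n, bits_to_describe(n))      # 1.0, 10.0, 40.0 bits

# Imposing a model means deciding in advance what 'sort' of thing is
# possible.  Filtering 2**40 possibilities down to 8 'relevant' ones:
before = bits_to_describe(2**40)       # 40.0 bits
after = bits_to_describe(8)            # 3.0 bits
print(before - after)                  # 37.0 bits discarded by the model
```

The gain in ‘usefulness’ is exactly the information thrown away: the model answers questions quickly only because most questions can no longer be asked.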
The Satisfaction of Being Right
Once we decide to look at reality in a certain way there is a satisfying ‘click’ as everything falls into
place and we see a pattern where before there was only chaos and uncharted elements. The contrast between the discomfort of not knowing (of having no template for our experience) and the ‘satisfactoriness’ of having everything organized coherently means that we have a natural bias towards moving away from the essential relativity (or ambiguity) of the unprocessed picture to the self-evident ‘obviousness’ that we experience once we focus on one level of organization and ignore all others. When this tendency is taken to an extreme I find myself falling into what others can plainly see to be a ‘self-fulfilling prophecy,’ i.e., I process information so selectively that my assumptions are unfailingly confirmed. An information-collapsing bias is created which distorts my behavior and thinking to such an extent that the patterns of my life become oppressively narrow, repetitive and predictable. Inflexible and anxiety-laden beliefs form which are very difficult to challenge…
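The ‘information-collapsing bias’ can be caricatured in a toy simulation (entirely my own illustration, not a model proposed in the text): an agent performs Bayesian log-odds updating on perfectly balanced evidence, but discounts whatever disconfirms its hypothesis. The honest updater ends exactly where it started; the selective one manufactures certainty out of nothing:

```python
import math

STEP = math.log(0.7 / 0.3)   # evidential weight the agent assigns

def belief(log_odds):
    # Convert log-odds back to a probability of the hypothesis.
    return 1 / (1 + math.exp(-log_odds))

def run(discount, n_pairs=200):
    # Perfectly balanced evidence: confirm, disconfirm, confirm, ...
    # An honest updater should end exactly where it started.
    log_odds = 0.0
    for _ in range(n_pairs):
        log_odds += STEP              # confirming item: full weight
        log_odds -= STEP * discount   # disconfirming item: discounted
    return belief(log_odds)

print(run(discount=1.0))   # honest:  0.5  (the data said nothing)
print(run(discount=0.5))   # biased:  ~1.0 (certainty out of noise)
```

The biased agent's assumptions are ‘unfailingly confirmed’ not by the world but by its own filter, which is the self-fulfilling prophecy in miniature.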

Relevance Versus Irrelevance

It is also possible to think of symmetry-breaking in terms of Aristotelian versus John von Neumann’s Quantum Logic; or, alternatively, in terms of closed Either/Or logic versus open Both/And logic. When we look for an answer within a given framework of understanding the only two terms which are available are Yes and No [+] and [-]. We can only think of things that are relevant to the rules which we are using to search ‘answer-space’ with, and relevance means either ‘agreement’ or ‘disagreement.’ In order to deal with the inherent indeterminacy of quantum systems mathematician John von Neumann came up with a form of logic with an ‘irrelevant’ term in it, namely [Maybe]. Maybe doesn’t ever rule anything out, any more than it definitely includes anything; in fact, it isn’t definite at all. This may seem useless, but it is actually very advantageous: Yes and No trap us with a definite context, whilst Maybe doesn’t trap us anywhere. Maybe allows us mobility within the unbounded set of all possibilities, it doesn’t collapse the system by making arbitrary assumptions.
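The Yes/No/Maybe scheme can be made concrete in code. One liberty is taken: von Neumann's quantum logic is properly a lattice of projection operators, so the sketch below uses the nearest executable stand-in, Kleene's strong three-valued logic (my choice, not the author's), with [Maybe] as the ‘irrelevant’ middle value:

```python
# Yes / No / Maybe as Kleene's strong three-valued logic.
YES, MAYBE, NO = 1.0, 0.5, 0.0  # ordered: NO < MAYBE < YES

def t_and(a, b):   # conjunction: the weaker claim wins
    return min(a, b)

def t_or(a, b):    # disjunction: the stronger claim wins
    return max(a, b)

def t_not(a):      # negation: Yes and No swap; Maybe is untouched
    return 1.0 - a

# Either/Or logic: every definite question collapses to a definite answer.
assert t_not(YES) == NO and t_not(NO) == YES

# Maybe never rules anything out: negating it changes nothing, and
# combining it with a definite answer never forces a collapse.
assert t_not(MAYBE) == MAYBE
assert t_and(MAYBE, YES) == MAYBE   # still undecided
assert t_or(MAYBE, NO) == MAYBE     # still undecided
```

Inside the [+]/[-] fragment the system behaves exactly like Aristotelian logic; only [Maybe] refuses to commit to a context, which is the mobility the text describes.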
A break in cognitive symmetry amounts to ‘arbitrarily settling upon a particular mode of description.’ When we frame an hypothesis, or propose a model, we break symmetry. Once questions are asked on the basis of our thinking, the answers that come back to us confirm the validity of the paradigm which informs those questions. A strong theory succeeds tautologically by interpreting everything in its own terms and always obtaining a Yes or No answer…
We are not just talking about formal theories here—it is very much the case that all of us have a ‘model’ of reality, whether we realize it or not, and the principle of organizational closure applies just as much to us…The information content of our minds decreases in proportion to the extent to which we use closed (or tautological) logic. This ‘closure factor’ may also be pictured in terms of ‘purposefulness’: the more purposeful we are in life, the more we are relying on rules, and the more seriously we rely on rules, the more we make life relevant to those rules. Purposefulness, then, is just another way of talking about Yes and No—purposeful behavior is simply an expression of Aristotelian (Either/Or) logic. If I try to obtain a goal, then I am re-confirming the validity of the rules which I am using to construct that goal; and if I try to avoid that goal, I am similarly reinforcing the way of looking at things that leads me to think that there is something there to avoid. I can hit the target, or miss the target; either way I am making the target crucially relevant to me. The only possible way of gaining perspective on the matter is through dropping purposefulness, which is a notion that we will come back to shortly…

The Urge To Make The Universe Relevant To Me

The fact that psychology lacks an equivalent to those laws celebrated in the physical sciences has not gone unnoticed. Psychologists such as Hans Eysenck have tried to duplicate the success of classical physics and chemistry in deriving elegant and powerfully predictive laws, but still we have nothing. We don’t even have a single law to cover our nakedness. This just seems to be the way of it:
psychology is different than physics, and therefore there is no need for us to suffer from ‘physics-envy.’ Yet, if we stopped to reflect for a moment, we might realize that we possessed a ‘psychological law’ all along…
We will state it as follows. Chemical reactions are driven by energetic considerations, and these considerations in turn are most elegantly expressed in terms of the second law of thermodynamics. To put things simply, we could say that systems have a tendency to maximize their entropy content, which is to say, to maximize the predictability of their behavior. To reverse this tendency takes an energy input from outside the system. The psychological equivalent of entropy is therefore the tendency to increase the definition of details. Alternatively, it also equals the tendency for cognitive symmetry-breaks to occur. More colloquially, we could say that the equivalent to chemical reactivity, which would be psychological reactivity, is the urge to describe ourselves and the world that we live in, where ‘describing’ means ‘making relevant.’ This increase in perceived ‘relevance,’ which we have also called organizational closure, is a manifestation of psychological entropy.
The Link Between ‘Making Up Your Mind’ and Euphoria
We will now bring in an extra ingredient—pleasure (or satisfaction). It is not a particularly odd thing to say that when we attain a goal we feel good…
However, we can now propose a different basis for feeling good. Whilst it is clearly true that attaining a socially or biologically reinforced goal gives satisfaction, there is also a different, rather more subtle basis for satisfaction. If both hit and miss re-affirm the context of meaning within which ‘hit and miss’ is construed, then both are equally good in terms of providing a basic orientation with which to operate. Having such an orientation, as we already noted, is in itself a source of satisfaction. To render the world predictable feels nice—the irreversible process of ‘making up my mind’ about something affords me a sense of relief because the uncomfortableness of ‘not being sure’ is gotten rid of. That is it—the subject is closed!
If I am intensely euphoric then this euphoria shows itself in the way that I love to dwell upon specific details, the way I extract enjoyment from re-iterating definite views and definite pronouncements. In states of manic elation the pleasure comes from the feeling that one has attained what one has set out to attain, or, perhaps, that one is capable of achieving whatever one wants to achieve. There is the satisfaction of being ‘right,’ along with the glow of feeling supremely potent in one’s ability to carry out goal-oriented activity. Similarly, a person who is experiencing intense euphoria due to the ingestion of amphetamine enjoys talking about nitty-gritty details, enthusing about trivia, endlessly putting things together and taking them apart, and generally performing routine tasks with formidable zeal. On the one hand, there is undoubtedly the pleasure of attaining goals, but along with this there would seem to be a less obvious pleasure, one which is derived from having a definite framework within which to act (and talk).
In OCD there is little in the way of successful goal-achieving: one’s activity does not ever really provide satisfaction in this regard, and in fact one’s behavior might be characterized as ‘forever seeking to correct a terminally uncorrectable situation.’ Yet there is the possibility, nevertheless, of obtaining the satisfaction of having a meaningful structure to work in. There is, in effect, a secondary gain of the chronically maladaptive and inefficient obsessive-compulsive behavior. One can be unhappy, and yet still secure! One can be content (or even smug) in one’s misery, so to speak. The most extreme example of suffering on one level combined with satisfaction on another is provided by paranoia. Paranoia takes one to heights (or depths) of terror which are totally unimaginable to the non-paranoid person, yet at the same time, no one can deny him or her the satisfaction of being ‘right’ in his or her interpretation! When a hydrogen atom combines with a fluorine atom a considerable amount of energy is released and a considerable amount of security is obtained in the chemical product. Similarly, when I as a paranoia sufferer latch on to a paranoid idea (or framework of ideas) there is a great deal of irreversibility going on and as a consequence a great deal of security is obtained, albeit security of a viciously oppressive nature. The idea that one might actually enjoy paranoia seems a bit far-fetched, but it is by no means rare to hear people speaking of past episodes of paranoia with a kind of wistful nostalgia. Some even come right out with it, and say that they kind of liked their paranoia, in a funny kind of way.

The Two Directions

If you happen to suffer from obsessive compulsive disorder and I run through the argument given above it may be the case that you can relate it to your situation, but even so, what are you supposed to do now? What sort of practical guidance can be derived from a thermodynamically oriented model of neurosis? The perceptive reader will have noticed that all we have done is to indicate how normal problem-solving approaches fail to be of any help. No purposeful action that you can think of can help you escape because your ‘purpose’ is your problem itself in a form you do not recognize. The ‘answer’ you came to is not the right answer because the question you originally asked was not the right question. What is worse, there is no right question…
We can think about this dilemma by considering the two possible directions in which it is possible to move. The first direction is the direction of increasing dissymmetry, where I focus on the details and lose sight of the assumptions I have made in order to bring those details into prominence. This is the direction of increasing ‘security,’ where rules are solid and dependable, where Yes is always Yes, and never No. The further we move in this direction the more congruent our ‘idea of what is’ becomes with ‘what is.’
The second direction is the direction of increasing symmetry, increasing relativity, where Yes is only Yes because of the way we choose to frame the question. It is the direction of decreasing relevance of our mental map (our concepts) to reality—the further we go in this direction the less predictable our world gets, and the more strange it appears. Our sense of security approaches vanishing point.
Normally, we experience a tropism towards the first direction; we like to maximize our feeling of being adapted to our environment. All rule-based (purposeful) behavior sends us in this direction. Interpreting experience is also rule-based behavior. To see things this way is to take into account the complexity of the universe, which is to say, the way in which the universe cannot be reduced to one level of description without losing what you are trying to describe. It is pertinent within the context of this discussion because it answers our question about breaking out of the pattern of OCD. All we have to do is move in the second direction, and this is done by abandoning our habitual modus operandi, by leaving behind the framework within which we perceive ourselves to have some sort of ‘purchase’ on what is troubling us. There are no specific indicators which we can read off and know that we are moving in this direction because a static framework of quantitative understanding is precisely what we are leaving behind. We are abandoning our terms of reference, our cozy and comforting map. However, there is an indicator which we can rely on, and this is our feeling of being insecure!
Normally, this feeling appears as anxiety or frank terror—a point-blank refusal to let ‘something or other’ (some supposed catastrophe) happen. The answer is not for me to positively force myself to accept the catastrophe, because that would only be another game that I am playing with myself. Instead, as each moment in time unfolds, I simply watch and see what happens. I don’t control the show, and try to get what I think is going to happen to happen, because that is in fact a refusal to wait quietly and see what actually is going to happen if I don’t control. There is nothing to do, apart from allowing life to unfold around me. Of course, this sounds far easier than it is in practice—the whole endeavor is positively fraught with difficulties. In practice, as Alan Watts says, I have to accept my attempts to control as further manifestations of the universe unfolding itself. I have to be ‘all-inclusive’ in my accepting; I cannot demand spontaneity and reject directed action because this is picking and choosing. I accept my unaccepting as part of the total picture—basically, I cannot become insecure on purpose, because deliberate insecurity is actually just another form of security.

The Fall

In conclusion, then, what we have said is that we cannot think of our situation in terms of having a problem from which we wish to escape because that reifies our assumptions. What we have here is a vicious circle, the same vicious circle that we run into with chronic anxiety, i.e., avoiding reinforces the idea that there is something there which is worth avoiding. The formation of a vicious circle is characterizable in terms of an information collapse, as we have said. In the following passage Alan Watts brings our attention to the archetypal nature of this problem:
The story of the Fall, of the eating of the fruit of the tree of knowledge
of Good and Evil, describes man’s involvement in the vicious circle—a
condition in which, of his own power, he is able to do nothing good that is
not vitiated by evil. In this condition it may be said that “all good deeds
are done for the love of gain,” that is, with a purely self-interested
motive, because “honesty is the best policy.” Every advance in morality is
counterbalanced by the growth of repressed evil in the unconscious, for
morality has to be imposed by law and wherever there is compulsion there
is repression of instinctual urges. Indeed, the very formulation of the
ideal of righteousness suggests and aggravates its opposite. Thus St. Paul
says, “I had not known lust, except the law had said, Thou shalt not
covet.”…
…regarding the question of how to break out of the vicious cycle, Watts, as we have already noted, advises the ‘non-technique’ of unconditional acceptance. Our resistance to such an approach, Watts argues, stems from our unacknowledged desire to ‘escape by our own cleverness,’ so to speak:
When it is said that man will not let himself be saved as he is, this is
another way of saying that he will not accept himself as he is; subtly
he gets around this simple act by making a technique out of acceptance,
setting it up as something which he should do in order to be a ‘good boy.’
And as soon as acceptance is made a question of doing and technique we
have the vicious circle. True acceptance is not something to be attained;
it is not an ideal to be sought after—a state of soul which can be possessed
and acquired, which we can add to ourselves in order to increase our
spiritual stature. If another paradox may be forgiven, true acceptance
is accepting yourself as you are NOW, at this moment, before you have
even begun to make yourself different by accepting yourself.

The Hidden Gain Of Neurosis

We have touched upon the real difficulty in dropping neurotic patterns of thinking and behaving when we looked at the idea that our need to escape suffering is less important to us than our need to have this ‘escape’ happen within the context of understanding which we can understand. It is not enough to be saved—we want to know how we are to be saved. This points to the hidden gain of neurosis, which we can explain as follows. So far we have mentioned the idea (which has wide acceptance in the field of psychology) that neurosis is refused pain. We have also explored the notion that ‘pain’ may be better defined as ‘terror due to loss of existential security.’ Our primary need is the need to have a framework, in other words. Another way to put it is to say that we are, at root, terrified of freedom. This sounds odd. We say that we want freedom, we go on about it incessantly, we write speeches about it and sing songs about it. We make an ideal of it. Yet the subtext is always there—what we really want is the freedom to carry on pretending that our model actually is reality. We want the freedom to stay in our safe framework, and, what is more, we want to be fulfilled by it! I want to be free to make lots of money and marry a super-model. I want the freedom to own a house that costs $750,000, and own a Ferrari, to be smarter than everyone else. I want the freedom to be good-looking, sophisticated and admired. What I don’t want is the freedom to see that none of these things matter a damn.

The hidden gain of neurosis is that I get to have a nice secure structure to hide in, to block out knowledge of what lies beyond that structure. I may be miserable but I’m secure. I may be having a totally rotten time—but it’s comfortable. I may be suffering, but I still get to have my own way and not face up to stuff that I don’t want to face up to. What this means is that it is not enough just to be sick of feeling miserable and sick of feeling jealous of everyone else having a nice life; my motivation must be deeper than this—I must want to see the truth about myself, no matter what that truth might turn out to be…

Hylotropism And Holotropism

I might find myself wondering what exactly the truth might turn out to be. One answer would of course be for you to say “see for yourself…” and leave it at that. But perhaps it is possible to get a little closer through discussion. Earlier we defined a so-called psychological ‘law’ which stated that the basic drive behind our activity is the ‘urge to describe.’ This is not the full story though: there are two directions and not just one, and the fact that one is a lot easier to head down doesn’t mean that we have to forget the ‘difficult’ direction. If the only tropism in town was the tropism towards increasing psychological entropy we would all have hit rock bottom a long time ago. There is more to it than the movement towards equilibrium; there is the movement out of equilibrium. Despite the second law of thermodynamics, life still manages to surprise us; despite ourselves, we still manage to grow, and leave behind old patterns and routines. Fritjof Capra makes the same point when he says that dynamic systems have two modes of change open to them: self-maintenance and self-transcendence. Prigogine and Stengers refer to optimization strategies versus radical change. Consciousness-researcher Stan Grof coined the terms ‘hylotropic’ (movement towards the part, or towards detail) and ‘holotropic’ (movement towards the whole) and claims that these are the two fundamental drives behind all conscious life.
Regarding the part, we might say that this is the conditioned reality, or ‘the message.’ Messages only make sense within the context within which they were designed, and therefore their meaning is relative—it only exists if we are willing to allow that a certain set of assumptions are true. The whole is, therefore, the unconditioned reality, or ‘the medium’—it has no context because, obviously enough, it is the whole! The medium doesn’t need a context, which is to say, there is no right way to ‘read’ or understand it. This statement, although tending to be rather perplexing at first, is no different than our previous ‘explanation’ of the state of unbroken symmetry as the situation where there is no ‘up’ and no ‘down,’ no ‘right’ and no ‘wrong,’ neither ‘yes’ nor ‘no.’

The Direction Of Increasing Self-Reference

We have proposed the existence of a psychological drive which may be defined in terms of ‘the urge to make the universe relevant to oneself.’ This sounds very fancy upon first hearing, but further thought reveals the idea to be not quite so novel as we might previously have thought. After all, what we are talking about is actually nothing other than the process by which one creates a ‘self,’ and therefore the urge is probably better defined as ‘the urge to be a self.’ When I make stuff relevant to me what I am doing is creating a relationship with a definite external reality of some description. This has the reciprocal effect of defining myself—for if there is a reified external reality which I have a relationship with, then there must be a reified ‘me’ to have that relationship. This is the psychological direction of increasing self-definition. The ‘hidden gain’ of neurosis may therefore be seen in terms of the creation of a ‘self,’ not so much a ‘self’ in the normal sense, but self in the sense of a context of interpretation which provides a strong resolution both of the ‘problem,’ and the self that is being afflicted by this problem.

Being Selfish

We can clarify this point by considering the usual usage of the word ‘selfish.’ Selfishness is generally seen as a failing rather than a virtue; conventional morality urges us to be ‘unselfish’—which is a virtuous state. Conventional morality contains an unseen paradox, however, as Watts says in the passage quoted earlier; basically, everything the self does is selfish in motive, and even when a self is deliberately being unselfish, that is still selfish. To act as a self is to be caught up in an inescapable tautology.
The way in which we are approaching the idea of ‘selfishness’ is somewhat different. In fact we cannot really use the word ‘selfish’ because that carries the implication that there actually is such a thing as a self. Talking about being ‘unselfish’ traps us in the same false assumption—it reifies the idea of self. Saying that one ought not to cherish the self is the same thing as cherishing the self—putting oneself last is the same thing as putting oneself first, since everything still revolves around the central idea of self. There is no getting away from it. Similarly, there is no way of getting out of a neurotic pattern of behavior because mind and the pattern it creates by deliberate action are the same system.

Being A Big Fish In A Little Pond Or A Grain Of Sand In The Desert

We said that moving in the direction of increasing self-reference (or increasing tautology) has a pay-off. In essence, one gets to ‘be somebody.’ I become meaningful in terms of the world and the world becomes meaningful in terms of me; there is a feeling of individual significance—I can say “I am such and such” without fear of my ontological basis being whipped out from under me. I am me and that is that. A decisive break in the cosmic symmetry has been effected—there is ‘self’ versus ‘other.’ Once this divide is in place it becomes very real indeed, it is a source of satisfaction for us; and it is also a source of despair and meaninglessness, since (in order to have the security of being sure of who we are) we have cut ourselves off from what we really are. If I travel in this direction I create a fixed center; a dissymmetrical ‘me’ is crystallized out of the perfect symmetry of non-locality.
If, on the other hand, I move in the direction of decreasing self-referentiality, then my individual significance starts to evaporate. The sharp lines delineating the known or pragmatic self get blurred and ambiguous—it all starts to look rather arbitrary. Instead of being a big fish in a small pond I become a grain of sand in the desert, a drop of water in the ocean. From the point of view of ‘being somebody’ this sounds like bad news, but that is only because of the way in which we are looking at it. From another point of view (or rather, the view that has no point, or ‘center’) it is the best possible news: this is the state of non-limitation, of unboundedness, the symmetrical state which resumes when closure comes to an end. I am dissolved in non-referential vastness, I am in a place which is no place, since there is no context for it, no map for it. My horizons have opened up and expanded beyond what I had previously known. Because I am ‘nobody in particular,’ I have no restrictions whatsoever upon me. There is no ‘me’! This is the state of unqualified freedom which we spend most of our lives trying so hard to escape from, the unmodified state which Robert Anton Wilson calls, ‘the non-local self.’

Irreversibility, Work And Conscious Suffering

When a definite (or local) self is created there is an intensely rewarding glow of satisfaction—every bit of me feels suffused with the delicious warmth of confirmation: “I am!…I am!…I am!…” This is the message and I could listen to it all day! When we are euphoric this is the gist of what we are constantly trying to tell others—if not directly by saying how great we are, then indirectly by spinning a web of self-reference, by becoming proprietary towards everything that is going on, by exerting control on the meaning of what is happening…Just as a stable molecule like hydrogen fluoride is formed amid a burst of energy, so too is a stable ‘local-self’ formed amid a powerful burst of euphoria. This movement from instability to stability is irreversible, both in chemistry and psychology—it is a one way street, a slippery slope. Irreversibility does not mean that the process cannot be reversed, but rather that it cannot be reversed without importing energy from outside the system. Therefore, I can turn hydrogen fluoride back into unreacted hydrogen and fluorine by pumping in exactly as much energy as was released in the first place; work has to be done, in other words. If we are going to go along with the analogy between psychology and chemical thermodynamics, then the ‘work’ that is needed to free the individual from being trapped in routines, habitual patterns of thinking, opinions, and predictable personality traits must involve paying back the satisfaction of the original ‘euphoria-burst’ in the coin of ‘reverse-satisfaction,’ acute dis-comfort, or ‘dysphoria.’
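The hydrogen fluoride analogy can be checked with back-of-envelope arithmetic, using approximate mean bond enthalpies from standard tables (rough averages in kJ/mol, so treat the result as indicative only):

```python
# Approximate mean bond enthalpies, kJ/mol (textbook figures).
H_H, F_F, H_F = 436, 158, 565

# H2 + F2 -> 2 HF : energy in to break bonds, energy out forming them.
bonds_broken = H_H + F_F        # 594 kJ/mol absorbed
bonds_formed = 2 * H_F          # 1130 kJ/mol released
delta_H = bonds_broken - bonds_formed
print(delta_H)                  # -536 kJ/mol: strongly exothermic

# 'Irreversible' does not mean impossible to reverse; it means the
# reverse direction needs this much energy pumped back in as work:
work_to_undo = -delta_H
print(work_to_undo)             # +536 kJ/mol
```

The ledger balances exactly: whatever was released in the ‘euphoria-burst’ of bond formation is precisely what must be paid back to undo it, which is the point being made about dysphoria.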
It is interesting to note that esoteric psychological systems such as that set out by Gurdjieff speak in terms of the ‘Work’—a process by which a deterministic ‘machine-personality’ is transformed into a free or self-determining being. Gurdjieff, in common with other esoteric teachings (and Buddhism), held that we only possess the illusion of free will since we are [1] slaves to our conditioning, and [2] totally blind to the fact that our thinking is conditioned. This two-step formulation of our predicament has in recent times been echoed by David Bohm, who was until his death professor of theoretical physics at Birkbeck College, London. Bohm, taking a radically different approach to psychology, made the following two assertions: [1] thought is ‘participatory,’ which is to say, it helps to create the reality it shows us, and [2] thought somehow tricks us into thinking that the reality it shows us is independently (or objectively) true. In other words, how we see the world is the result of a hidden bias in our cognitive process.
Gurdjieff stated that freedom from conditioning can only be obtained through ‘conscious suffering’—which may be defined as suffering that one does not try to evade. This is where irreversibility comes in: it is not that we can’t reverse the process of symmetry-breaking, it’s just that we have a bias against it. We have a very serious and deep-rooted objection to suffering! It is at this point that we have to be very careful to explain precisely what we mean by ‘suffering’—we are not talking about the superficial suffering which happens when a desired goal is not attained, or when an undesired or ‘negative’ outcome is not avoided, but of the profound (or subtle) suffering which occurs when we lose the security of having a context within which to gain or avoid anything. It is this subtle but deeply unacceptable form of discomfort that we are referring to as ‘conscious suffering,’ as opposed to ‘suffering within a context’ (which is unconscious, since having a context is concomitant with being conditioned, which also equals ‘the state of being unaware of the fact of this conditioning’).

Two Types Of Suffering

Talk of ‘freedom through suffering’ tends to set off alarm bells, since notions like this are associated with over-zealous religious piety, and the cult of ‘pleasure denial’ which found expression in such Protestant sects as Calvinism and Puritanism. “If it makes you feel good, then it’s bad” is the motto we think of here. The point is, though, that even if we do decide that we want to ‘do the right thing’ and suffer discomfort to make ourselves better people, this is still not conscious suffering. I am suffering on purpose, I have an agenda, and therefore this is not the same thing at all. Deliberate suffering is not conscious suffering: on the contrary, deliberate suffering means that I want to suffer and I want to suffer within a special context of meaning; I want to suffer and have a guaranteed outcome of that suffering—in other words, I want to have the security of knowing that it is going to do me good.
Deliberate suffering is where I fight my own urges and desires: I want to eat a doughnut and so I don’t; I want to avoid sexual thoughts and so I try hard to think of something else; I want to put myself first, so I put myself last instead…Basically, either I confront the thing I hate, or I deprive myself of the thing I love. Either way, I am sticking firmly within my established frame of reference, and so I learn nothing. All purposeful action serves the function of distracting me from the task of questioning my assumptions, and this is why we find it so hard to drop our closed, goal-oriented behavior.

Doing Nothing

Matters are not so simple that they can be solved by slavishly following rule-based procedures. If we could become happy through fighting our own inclinations, then there would be no problem, but what happens is that we end up being sanctimoniously miserable, which is far worse than just being miserable without any excuses, because it gives us an officially-validated frame of meaning to hold on to. Feeling bad without any ‘props’ is actually conscious suffering, and that is very difficult! Escaping the snares that life sets us is not then a straightforward matter of avoiding pleasure—this does not work any better than the neurotic’s continual attempt to avoid pain, which is the same futile struggle seen from the other side. The key to freeing ourselves from neurosis is not to manage our feelings better, but not to manage them at all. The art is to be without bias: to feel good when we are happy, and to feel bad when we suffer, and to leave it at that. Instead of this we automatically evaluate and analyze ourselves, we get ourselves all tangled up in our agendas. When we are happy we want to be sure that we are happy, and be sure that it will last; often we find ourselves stage-managing our happiness, and so we get stuck in sentimentality. There is also the possibility that we will feel bad about being happy because we don’t feel that we deserve it. Feeling bad gets just as complicated: when we feel bad, we feel bad about feeling bad, and so we get stuck in denial—either this or we feel good about feeling bad, and get stuck in theatricality. Although the need is not acknowledged, the bottom line is that we want everything that happens to be validated by a context. We are like snails, we want to stay safely in our conceptual shells—all the more so when danger threatens…
Non-action sounds strange, if not reprehensible, to us goal-oriented Westerners, but it is well known to students of Taoism as wu wei, the art of not-doing. In terms of Western esotericism, it can be understood as work. What we are essentially saying is that purposeful action creates a context, and it is therefore the opposite of work, because work is the undoing of the security of a context. As we have been saying, the relationship between goal-oriented action and the context it takes place within is a circular sort of thing: purposeful action arises out of that context, and simultaneously reconfirms the validity of that context. This is perfectly and utterly tautological, and yet due to the loss of information that occurs when symmetry is broken, we no longer have the perspective to see the tautological nature of what is going on. This is why the process is irreversible.
Irreversibility means that we cannot extricate ourselves from dis-symmetrical situations by thinking about it, because it was thinking that created the dis-symmetry in the first place. We cannot extricate ourselves by purposeful action, no matter how determined that action is, because purposefulness arises out of thinking. We cannot have the satisfaction of escaping through the power of our own minds! Once the dis-symmetry is there, we are stuck in it, we are caught up in having issues with the world, with making stuff relevant to us when it is not. We take life personally—we use it to confirm our identity. When there is no dis-symmetry then there are ‘no issues’ and so we are free to move on. We are no longer trapped in our conceptions about what is going on, we are no longer afflicted by the drastic defenses that we have taken up to protect ourselves against the openness of radical uncertainty. That defense-system is our everyday rational mind, and our repertory of self-confirming emotions such as anger, envy, jealousy, pride, etc. This defense system is what Carl Jung referred to in terms of ‘the basic psychic crime’—the crime of unconsciousness. Unconsciousness is not a crime in a moral sense, but rather in the sense of it being a transgression of our own nature by ourselves. It is a lie that we have told ourselves, and for which, one way or another, we will have to pay. We can pay in the coin of unconscious suffering, which means we will stick with our story and complain of cosmic injustice, or we can pay through conscious suffering, which means that we don’t complain, but ‘suffer gladly,’ as Tibetan Buddhist master Sogyal Rinpoche puts it.



From Wikipedia, the free encyclopedia

A holon (Greek: ὅλον, holon neuter form of ὅλος, holos “whole”) is something that is simultaneously a whole and a part. The word was coined by Arthur Koestler in his book The Ghost in the Machine (1967, p. 48). Koestler was compelled by two observations in proposing the notion of the holon. The first observation was influenced by Nobel Prize winner Herbert A. Simon’s parable of the two watchmakers, wherein Simon concludes that complex systems will evolve from simple systems much more rapidly if there are stable intermediate forms present in that evolutionary process than if they are not present. The second observation was made by Koestler himself in his analysis of hierarchies and stable intermediate forms in both living organisms and social organizations. He concluded that, although it is easy to identify sub-wholes or parts, wholes and parts in an absolute sense do not exist anywhere. Koestler proposed the word holon to describe the hybrid nature of sub-wholes and parts within in vivo systems. From this perspective, holons exist simultaneously as self-contained wholes in relation to their sub-ordinate parts, and dependent parts when considered from the inverse direction.
Koestler also says holons are autonomous, self-reliant units that possess a degree of independence and handle contingencies without asking higher authorities for instructions. These holons are also simultaneously subject to control from one or more of these higher authorities. The first property ensures that holons are stable forms that are able to withstand disturbances, while the latter property signifies that they are intermediate forms, providing a context for the proper functionality for the larger whole.
Finally, Koestler defines a holarchy as a hierarchy of self-regulating holons that function first as autonomous wholes in supra-ordination to their parts, secondly as dependent parts in sub-ordination to controls on higher levels, and thirdly in coordination with their local environment.

General definition
A holon is a system (or phenomenon) which is an evolving self-organizing dissipative structure, composed of other holons, whose structures exist at a balance point between chaos and order. It is maintained by the throughput of matter–energy and information–entropy connected to other holons, and is simultaneously a whole in and of itself while at the same time being nested within another holon, and so is a part of something much larger than itself. Holons range in size from the smallest subatomic particles and strings, all the way up to the multiverse, comprising many universes. Individual humans, their societies and their cultures are intermediate-level holons, created by the interaction of forces working upon us both top-down and bottom-up. On a non-physical level, words, ideas, sounds, emotions—everything that can be identified—is simultaneously part of something, and can be viewed as having parts of its own, much as a sign functions in semiotics. Defined in this way, holons are related to the concept of autopoiesis, especially as it was developed in its application by Stafford Beer to second-order cybernetics and viable system theory, and by Niklas Luhmann in his social systems theory.
Since a holon is embedded in larger wholes, it is influenced by and influences these larger wholes. And since a holon also contains subsystems, or parts, it is similarly influenced by and influences these parts. Information flows bidirectionally between smaller and larger systems, as well as through rhizomatic contagion. When this bidirectionality of information flow and understanding of role is compromised, for whatever reason, the system begins to break down: wholes no longer recognize their dependence on their subsidiary parts, and parts no longer recognize the organizing authority of the wholes. Cancer may be understood as such a breakdown in the biological realm.
A hierarchy of holons is called a holarchy. The holarchic model can be seen as an attempt to modify and modernise perceptions of natural hierarchy.
Ken Wilber comments that the test of holon hierarchy (e.g. holarchy) is that if all instances of a given type of holon were removed from existence, then all those holons of which they were a part must necessarily cease to exist too. Thus an atom is of a lower standing in the hierarchy than a molecule, because if you removed all molecules, atoms could still exist, whereas if you removed all atoms, molecules, in a strict sense, would cease to exist. Wilber’s concept is known as the doctrine of the fundamental and the significant. A hydrogen atom is more fundamental than an ant, but an ant is more significant.
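Wilber’s removal test can be sketched as a toy dependency check. Everything here is a hypothetical illustration for this passage, not anything from Wilber’s own writing: the part-of table and the `survives` helper are invented names, and the three holon types stand in for the atom/molecule example above.

```python
# Toy model of Wilber's removal test: a holon type survives the removal
# of another type only if neither it nor anything it is built from
# is the removed type. (All names here are illustrative inventions.)

PARTS = {
    "atom": [],                # atoms have no listed sub-holons here
    "molecule": ["atom"],      # molecules are composed of atoms
    "cell": ["molecule"],      # cells are composed of molecules
}

def survives(holon_type, removed_type, parts=PARTS):
    """True if `holon_type` can still exist after every instance of
    `removed_type` is removed from existence."""
    if holon_type == removed_type:
        return False
    # A composite ceases to exist if any of its part-types ceases to.
    return all(survives(p, removed_type, parts) for p in parts[holon_type])

# Atoms are more fundamental: removing all atoms destroys molecules...
assert not survives("molecule", "atom")
# ...but removing all molecules leaves atoms intact.
assert survives("atom", "molecule")
```

The recursion mirrors the doctrine directly: "lower" in the holarchy simply means "appears in more dependency chains."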
The doctrine of the fundamental and the significant stands in contrast to the radical, rhizome-oriented pragmatics of Deleuze and Guattari, and of other continental philosophy.
A significant feature of Koestler’s concept of holarchy is that it is open ended both in the macrocosmic as well as in the microcosmic dimensions. This aspect of his theory has several important implications. The holarchic system does not begin with strings or end with the multiverse. Those are just the existing limits of the reach of the human mind in the two dimensions at the present time. Those limits will be crossed later on because they do not encompass the whole of reality. Popper (Objective Knowledge) teaches that what the human mind knows and will ever know of truth at a given point of time and space is verisimilitude – something like truth, and that the human mind will continue to get closer to reality but never reach it. In other words, the human quest for knowledge is an unending journey with innumerable grand sights ahead but with no possibility of reaching the journey’s end. The work of modern physicists designed to discover the theory of everything (TOE) is reaching deep into the microcosm under the assumption that the macrocosm is eventually made of the microcosm. This approach falls short on two counts: the first is that the fundamental is not the same as significant and the second is that this approach does not take into account that the microcosmic dimension is open ended. It follows that the search for TOE will discover phenomena more microcosmic than strings or the more comprehensive M theory. It is also the case that many laws of nature that apply to systems relatively low in the hierarchy cease to apply at higher levels. M theory might have predictive power at the sub-atomic level but it will inform but little about reality at higher levels. The work of the particle physicists is indeed laudable but they should give the theory they are looking for another name. This is not to claim that the concept of holarchy is already the theory of everything.

Types of holons
Individual holon
An individual holon possesses a dominant monad; that is, it possesses a definable “I-ness”. An individual holon is discrete, self-contained, and also demonstrates the quality of agency, or self-directed behavior. The individual holon, although a discrete and self-contained whole, is made up of parts; in the case of a human, examples of these parts would include the heart, lungs, liver, brain, spleen, etc. When a human exercises agency, taking a step to the left, for example, the entire holon, including the constituent parts, moves together as one unit.

Social holon
A social holon does not possess a dominant monad; it possesses only a definable “we-ness”, as it is a collective made up of individual holons. In addition, rather than possessing discrete agency, a social holon possesses what is defined as nexus agency. An illustration of nexus agency is best described by a flock of geese. Each goose is an individual holon, the flock makes up a social holon. Although the flock moves as one unit when flying, and it is “directed” by the choices of the lead goose, the flock itself is not mandated to follow that lead goose. Another way to consider this would be collective activity that has the potential for independent internal activity at any given moment.

American philosopher Ken Wilber includes Artifacts in his theory of holons. Artifacts are anything (e.g. a statue or a piece of music) that is created by either an individual holon or a social holon. While lacking any of the defining structural characteristics – agency; self-maintenance; I-ness; Self Transcendence – of the previous two holons, Artifacts are useful to include in a comprehensive scheme due to their potential to replicate aspects of and profoundly affect (via, say, interpretation) the previously described holons. Artifacts are made up of individual or social holons (e.g. a statue is made up of atoms).
The development of Artificial Intelligence may force one to question where the line should be drawn between the individual holon and the artifact.

Heaps are defined as random collections of holons that lack any sort of organisational significance. A pile of leaves would be an example of a heap. Note, one could question whether a pile of leaves could be an “artifact” of an ecosystem “social holon”. This raises a problem of intentionality: in short, if social holons create artifacts but lack intentionality (the domain of individual holons) how can we distinguish between heaps and artifacts? Further, if an artist (individual holon) paints a picture (artifact) in a deliberately chaotic and unstructured way does it become a heap?

Holon in Multiagent Systems
Multiagent systems are systems composed of autonomous software entities. They are able to simulate a system or to solve problems. A holon may be viewed as a sort of recursive agent: an agent composed of agents, in which an agent at a given level has its own behavior as a partial consequence of its parts’ behaviors.
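The recursive-agent idea can be sketched in a few lines. This is a minimal illustration only: the `Holon` class and its `act` method are invented for this sketch and have nothing to do with the Janus platform’s actual API.

```python
# Minimal sketch of a holon as a recursive agent: each holon acts on
# its own, but its behavior is partly the aggregate of its sub-holons'
# behaviors. (Class and method names are illustrative inventions.)

class Holon:
    def __init__(self, name, parts=None):
        self.name = name
        self.parts = parts or []   # sub-holons; empty for a leaf agent

    def act(self):
        # Behavior at this level is this agent's own contribution plus
        # a partial consequence of its parts' behaviors.
        sub = [p.act() for p in self.parts]
        return {"agent": self.name, "sub_behaviors": sub}

# A "team" holon is a whole made of wholes: it is an agent in its own
# right, and simultaneously a container of autonomous worker agents.
worker_a = Holon("worker_a")
worker_b = Holon("worker_b")
team = Holon("team", [worker_a, worker_b])
result = team.act()
```

Note how the same object plays both roles Koestler describes: `team` is autonomous (it has its own `act`), yet its behavior is constituted by its subordinate parts.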
Janus Multiagent Platform is a software platform able to execute holons.



“Zen is your everyday thought”; it all depends on the adjustment of the hinge whether the door opens in or opens out.

Satori is the spiritual goal of Zen Buddhism (in Chinese: wu). It is a key concept in Zen. Whether it comes to you suddenly seemingly out of nowhere as found in the Enlightenment process called Aparka Marg, or after an undetermined passage of time centered around years of intense study and meditation as with the female Zen adept Chiyono, or after forty unrelenting years as with the Buddha’s brother Ananda, there can be no Zen without that which has come to be called Satori. As long as there is Satori, then Zen will continue to exist in the world.
Satori roughly translates into individual Enlightenment, or a flash of sudden awareness. Satori is as well an intuitive experience. The feeling of Satori is that of infinite space. A brief experience of Enlightenment is sometimes called Kensho. Semantically, Kensho and Satori have virtually the same meaning and are often used interchangeably. In describing the Enlightenment of the Patriarchs, however, it is customary to use the word Satori rather than Kensho, the term Satori implying a deeper experience. The level of Enlightenment reached by the Buddha and others of similar ilk is referred to as Anuttara Samyak Sambodhi.
There are, as seen in the above, more than one “level” of Self-realization. Most levels, except perhaps Anuttara Samyak Sambodhi, have been blanketed with what has become now a more general term, “Satori,” Satori having fallen into the day-to-day lexicon exemplified in a variety of sources from the Eight Jhana States, to the Five Degrees of Tozan, to the Five Varieties of Zen. There are also, as claimed by some, three kinds, levels or varieties of Satori — typically listed as being 1) emotion-based or Mystical Satori, 2) mind-based or Intellectual Satori, and 3) desire-based or Cosmic Satori.
It was not always that way. If you scroll down to the Satori description by D.T. Suzuki, below, you will gain a much greater insight into the original meaning of Satori. There is an enormous difference between, say, a rather uncomplicated early stage such as Laya, the somewhat deeper initial step of Inka Shomei, and the state of Enlightenment at the level of the Buddha.
The only way that one can “attain” Satori is through personal experience. The traditional way of achieving Satori, and the most typical way taught to Zen students in the west — but NOT the only way — is through the use of Koans such as those found in the Gateless Gate, the Mumonkan. Koans are “riddles” students use to assist in the realization of Satori; these words and phrases were also used by the early Zen masters. See Regarding Mu.
Another method is meditation. Satori can be brought about through Zazen meditation. This meditation would create an objective, self-associated awareness accompanied by a feeling of joy that overrides any other feelings of joy or sorrow. See: Shikantaza.
Even though Satori is a key concept in Zen, it should be brought to the attention of the reader that Zen and its traditions do NOT have exclusive rights to the Enlightenment experience. That which is called Satori in Zen is a term that is wrapped around a phenomenon that “IS,” and that “IS” is not “owned” by any group, religion, or sect.
Many, many occurrences of that particular “phenomenon” have transpired both inside and outside the Doctrine of Buddhism. The person who was to become the Sixth Patriarch in the Chinese Lineage of Ch’an was Enlightened as a young boy when he overheard a sentence being spoken from the Diamond Cutter Sutra. He had gone into town to sell firewood for his mother when he just happened to hear the line. Until that point in time he had not received any formal practice in meditation, nor was he versed in Buddhism to any great extent, if at all. So too, again outside the scriptures, the great Indian sage Bhagavan Sri Ramana Maharshi was a teenage boy typical of his culture, and most certainly not deeply steeped in formal religious tracts, when all of a sudden, out of the blue, Satori-like, he was Awakened to the Absolute.
It is often said that when you truly need a teacher, or that which will function in lieu of a teacher (Satori, for example), one will fall upon you. This may be due to some inexplicable serendipity. It may be due to the fact that the seeker has searched deeply within himself or herself and determined what sort of instruction seems to be required. It could be swept over him or her like the First Death Experience of the Bhagavan Sri Ramana Maharshi, or the Bhagavan’s little known Second Death Experience. Or it could be a spiritual desperation on the part of the seeker, or maybe no more than a successful sales pitch by a teacher (sincere or not). It may be a combination of the previous factors, or some intuitive awareness beyond expression. Whatever the reason, the saying often applies, and with it the coming together of inner and outside forces, some within one’s control, some without.
However, in the end, it is NOT just to potential Zen masters in ancient China or to people in India that such events transpire, but to everyday people as well. There are numerous Awakening Experiences in the Modern Era, but, even if those experiences parallel that which is called Satori, those experiences are not always called Satori.

The following six points on Satori are from D.T. Suzuki’s An Introduction to Zen Buddhism

1. People often imagine that the discipline of Zen is to produce a state of self-suggestion through meditation. This entirely misses the mark, as can be seen from the various instances cited above. Satori does not consist in producing a certain premeditated condition by intensely thinking of it. It is acquiring a new point of view for looking at things. Ever since the unfoldment of consciousness we have been led to respond to the inner and outer conditions in a certain conceptual and analytical manner. The discipline of Zen consists in upsetting this groundwork once for all and reconstructing the old frame on an entirely new basis. It is evident, therefore, that meditating on metaphysical and symbolic statements, which are products of the relative consciousness, plays no part in Zen.

2. Without the attainment of Satori no one can enter into the truth of Zen. Satori is the sudden flashing into consciousness of a new truth hitherto undreamed of. It is a sort of mental catastrophe taking place all at once, after much piling up of matters intellectual and demonstrative. The piling has reached a limit of stability and the whole edifice has come tumbling to the ground, when, behold, a new heaven is open to full survey. When the freezing point is reached, water suddenly turns into ice; the liquid has suddenly turned into a solid body and no more flows freely. Satori comes upon a man unawares, when he feels that he has exhausted his whole being. Religiously, it is a new birth; intellectually, it is the acquiring of a new viewpoint. The world now appears as if dressed in a new garment, which seems to cover up all the unsightliness of dualism, which is called delusion in Buddhist phraseology.

3. Satori is the raison d’etre of Zen without which Zen is no Zen. Therefore every contrivance, disciplinary and doctrinal, is directed towards Satori. Zen masters could not remain patient for Satori to come by itself; that is, to come sporadically or at its own pleasure. In their earnestness to aid their disciples in the search after the truth of Zen their manifestly enigmatical presentations were designed to create in their disciples a state of mind which would more systematically open the way to enlightenment. All the intellectual demonstrations and exhortatory persuasions so far carried out by most religious and philosophical leaders had failed to produce the desired effect, and their disciples thereby had been farther and farther led astray. Especially was this the case when Buddhism was first introduced into China, with all its Indian heritage of highly metaphysical abstractions and most complicated systems of Yoga discipline, which left the more practical Chinese at a loss as to how to grasp the central point of the doctrine of Sakyamuni. Bodhidharma, the Sixth Patriarch Hui-neng, Baso, and other Chinese masters noticed the fact, and the proclamation and development of Zen was the natural outcome. By them Satori was placed above sutra-learning and scholarly discussions of the shastras and was identified with Zen itself. Zen, therefore, without Satori is like pepper without its pungency. But there is also such a thing as too much attachment to the experience of Satori, which is to be detested.

4. This emphasizing of Satori in Zen makes the fact quite significant that Zen is not a system of Dhyana as practiced in India and by other Buddhist schools in China. By Dhyana is generally understood a kind of meditation or contemplation directed toward some fixed thought; in Hinayana Buddhism it was a thought of transiency, while in the Mahayana it was more often the doctrine of emptiness. When the mind has been so trained as to be able to realize a state of perfect void in which there is not a trace of consciousness left, even the sense of being unconscious having departed; in other words, when all forms of mental activity are swept away clean from the field of consciousness, leaving the mind like the sky devoid of every speck of cloud, a mere broad expanse of blue, Dhyana is said to have reached its perfection. This may be called ecstasy or trance, or the First Jhana, but it is not Zen. In Zen there must be not just Kensho, but Satori. There must be a general mental upheaval which destroys the old accumulations of intellection and lays down the foundation for new life; there must be the awakening of a new sense which will review the old things from a hitherto undreamed-of angle of observation. In Dhyana there are none of these things, for it is merely a quieting exercise of mind. As such Dhyana doubtless has its own merit, but Zen must not be identified with it.

5. Satori is not seeing God as he is, as might be contended by some Christian mystics. Zen has from the beginning made clear and insisted upon the main thesis, which is to see into the work of creation; the creator may be found busy moulding his universe, or he may be absent from his workshop, but Zen goes on with its own work. It is not dependent upon the support of a creator; when it grasps the reason for living a life, it is satisfied. Hoyen (died 1104) of Go-so-san used to produce his own hand and ask his disciples why it was called a hand. When we know the reason, there is Satori and we have Zen. Whereas with the God of mysticism there is the grasping of a definite object; when you have God, what is no-God is excluded. This is self-limiting. Zen wants absolute freedom, even from God. “No abiding place” means that very thing; “Cleanse your mouth when you utter the word Buddha” amounts to the same thing. It is not that Zen wants to be morbidly unholy and godless, but that it recognizes the incompleteness of mere name. Therefore, when Yakusan (aka Yaoshan Weiyan, Yueh-shan Wei-jen, 751-834) was asked to give a lecture, he did not say a word, but instead came down from the pulpit and went off to his own room. Hyakujo merely walked forward a few steps, stood still, and then opened his arms, which was his exposition of the great principle.

6. Satori is not a morbid state of mind, a fit subject for the study of abnormal psychology. If anything, it is a perfectly normal state of mind. When I speak of mental upheaval, one may be led to consider Zen as something to be shunned by ordinary people. This is a most mistaken view of Zen, but one unfortunately often held by prejudiced critics. As Joshu declared, “Zen is your everyday thought”; it all depends on the adjustment of the hinge whether the door opens in or opens out. Even in the twinkling of an eye the whole affair is changed and you have Zen, and you are as perfect and as normal as ever. More than that, you have acquired in the meantime something altogether new. All your mental activities will now be working in a different key, which will be more satisfying, more peaceful, and fuller of joy than anything you ever experienced before. The tone of life will be altered. There is something rejuvenating in the possession of Zen. The spring flowers look prettier, and the mountain stream runs cooler and more transparent. The subjective revolution that brings about this state of things cannot be called abnormal. When life becomes more enjoyable and its expanse broadens to include the universe itself, there must be something in Satori that is quite precious and well worth one’s striving after.

About SATORI, in a similar yet somewhat different approach, Suzuki goes on to write in ZEN BUDDHISM: Selected Writings of D.T. Suzuki (New York: Anchor Books, 1956), pp. 103-108

1. Irrationality. “By this I mean that Satori is not a conclusion to be reached by reasoning, and defies all intellectual determination. Those who have experienced it are always at a loss to explain it coherently or logically.”

2. Intuitive Insight. “That there is noetic quality in mystic experiences has been pointed out by (William) James…Another name for Satori is Kensho (chien-hsing in Chinese) meaning “to see essence or nature,” which apparently proves that there is “seeing” or “perceiving” in Satori…Without this noetic quality Satori will lose all its pungency, for it is really the reason of Satori itself. “

3. Authoritativeness. “By this I mean that the knowledge realized by Satori is final, that no amount of logical argument can refute it. Being direct and personal it is sufficient unto itself. All that logic can do here is to explain it, to interpret it in connection to other kinds of knowledge with which our minds are filled. Satori is thus a form of perception, an inner perception, which takes place in the most interior part of consciousness.”

4. Affirmation. “What is authoritative and final can never be negative. Though the Satori experience is sometimes expressed in negative terms, it is essentially an affirmative attitude towards all things that exist; it accepts them as they come along regardless of their moral values.”

5. Sense of the Beyond. “…in Satori there is always what we may call a sense of the Beyond; the experience indeed is my own but I feel it to be rooted elsewhere. The individual shell in which my personality is so solidly encased explodes at the moment of Satori. Not, necessarily, that I get unified with a being greater than myself or absorbed in it, but that my individuality, which I found rigidly held together and definitely kept separate from other individual existences, becomes loosened somehow from its tightening grip and melts away into something indescribable, something which is of quite a different order from what I am accustomed to. The feeling that follows is that of complete release or a complete rest—the feeling that one has arrived finally at the destination…As far as the psychology of Satori is considered, a sense of the Beyond is all we can say about it; to call this the Beyond, the Absolute, or God, or a Person is to go further than the experience itself and to plunge into a theology or metaphysics.” See #5 above as well as Turiyatita.

6. Impersonal Tone. “Perhaps the most remarkable aspect of the Zen experience is that it has no personal note in it as is observable in Christian mystic experiences.”

7. Feeling of exaltation. “That this feeling inevitably accompanies Satori is due to the fact that it is the breaking-up of the restriction imposed on one as an individual being, and this breaking up is not a mere negative incident but quite a positive one fraught with signification because it means an infinite expansion of the individual.”

8. Momentariness. “Satori comes upon one abruptly and is a momentary experience. In fact, if it is not abrupt and momentary, it is not Satori.”

As an interesting sidelight, in his paper on Zen master Te Shan (known throughout Zen lore for burning all his commentaries and books on Zen immediately following his Awakening), referring to the above book by D.T. Suzuki, the Wanderling waxes semi-nostalgic about the importance of his early association with the meaning and context of the same book:
“Several years ago my younger brother was cleaning out his attic when he ran across a long forgotten box of stuff stashed away that at one time belonged to me. Among the contents of the box was a beat up 30 year old copy of D.T. Suzuki’s ZEN BUDDHISM: Selected Writings of D.T. Suzuki (New York: Anchor Books, 1956), a book that had not seen the light of day in at least 20 years. The pages were faded and worn. Corner after corner of pages folded down. Pencil notes all over the margins and inside the covers. Sentences were underlined in ink. Whole paragraphs were highlighted in a now barely discernible yellow.
“My brother reminded me of how I, not unlike Te Shan, used to carry that book around like a bible my last two years of high school and several years afterward. Anytime anybody said anything about anything out would come my book…always ready with a “Zen answer.” Then one day something was different. Like Te Shan I somehow didn’t need books much any more. Don’t know why, it just was.” (source)
Although the above may not seem Satori-related specifically, in actuality it is. In clarification, the following, by the Enlightened sage Shri Ranjit Maharaj, is offered:

“Therefore, what I say is false, but true, because I speak of That. The address is false but when you reach the goal, it is Reality. In the same way, all the scriptures and the philosophical books are meant only to indicate that point, and when you reach it they become non-existent, empty. Words are false; only the meaning they convey is true. They are illusion, but they give a meaning. Therefore, All Is Illusion, but to understand the illusion, illusion is needed. For example, to remove a thorn in your finger you use another thorn; then you throw both of them away. But if you keep the second thorn which was used to remove the first one, you’ll surely be stuck again.”

“According to the philosophy of Zen, we are too much a slave to the conventional way of thinking, which is dualistic through and through. No “interpenetration” is allowed, there takes place no fusing of opposites in our everyday logic. What belongs to God is not of this world, and what is of this world is incompatible with the divine. Black is not white, and white is not black. Tiger is tiger, and cat is cat, and they will never be one. Water flows, a mountain towers. This is the way things or ideas go in this universe of the senses and syllogisms. Zen, however, upsets this scheme of thought and substitutes a new one in which there exists no logic, no dualistic arrangement of ideas. We believe in dualism chiefly because of our traditional training. Whether ideas really correspond to facts is another matter requiring a special investigation. Ordinarily we do not inquire into the matter, we just accept what is instilled into our minds; for to accept is more convenient and practical, and life is to a certain extent, though not in reality, made thereby easier. We are in nature conservatives, not because we are lazy, but because we like repose and peace, even superficially. But the time comes when traditional logic holds true no more, for we begin to feel contradictions and splits and consequently spiritual anguish. We lose trustful repose which we experienced when we blindly followed the traditional ways of thinking. Eckhart says that we are all seeking repose whether consciously or not just as the stone cannot cease moving until it touches the earth. Evidently the repose we seemed to enjoy before we were awakened to the contradictions involved in our logic was not the real one, the stone has kept moving down toward the ground. Where then is the ground of non-dualism on which the soul can be really and truthfully tranquil and blessed? To quote Eckhart again, “Simple people conceive that we are to see God as if He stood on that side and we on this. It is not so; God and I are one in the act of my perceiving Him.” In this absolute oneness of things Zen establishes the foundations of its philosophy. The idea of absolute oneness is not the exclusive possession of Zen. There are other religions and philosophies that preach the same doctrine. If Zen, like other monisms or theisms, merely laid down this principle and did not have anything specifically to be known as Zen, it would have long ceased to exist as such. But there is in Zen something unique which makes up its life and justifies its claim to be the most precious heritage of Eastern culture. The following “Mondo” or dialogue (literally questioning and answering) will give us a glimpse into the ways of Zen. A monk asked Joshu, one of the greatest masters in China, “What is the ultimate word of Truth?” Instead of giving him any specific answer he made a simple response saying, “Yes.” The monk who naturally failed to see any sense in this kind of response asked for a second time, and to this the Master roared back, “I am not deaf!” See how irrelevantly (shall I say) the all-important problem of absolute oneness or of the ultimate reason is treated here! But this is characteristic of Zen, this is where Zen transcends logic and overrides the tyranny and misrepresentation of ideas. As I have said before, Zen mistrusts the intellect, does not rely upon traditional and dualistic methods of reasoning, and handles problems after its own original manners…. To understand all this, it is necessary that we should acquire a “third eye,” as they say, and learn to look at things from a new point of view.”

I, Nanobot

Thursday, March 9, 2006 05:57 ET, BY ALAN H. GOLDSTEIN

Scientists are on the verge of breaking the carbon barrier — creating artificial life and changing forever what it means to be human. And we’re not ready.

Don’t call me Ishmael, for I am not a survivor. Don’t call me Cassandra either, since some might believe what I foretell. Perhaps I am the final manifestation of the singularity ignited in Olduvai Gorge a million and a half years ago. The flame that has grown to consume our planet and send sparks into outer space. The singularity that started as an ineffable, ineluctable pulse resonating through the neural matrix of H*** habilis. A voice that said, You whoever you are, You must sharpen that stone, pick up that bone, cross that line. A voice of supreme paradox; one that simultaneously makes us uniquely human, yet is itself not human. Nor is it the black extraterrestrial monolith of Stanley Kubrick’s imagining. Rather, it was always here. Hard-wired into us at the atomic level — and we into it. A voice whose physical manifestation, the tool, sang its song millions of years before human beings walked the earth. This voice prophesied and then enabled our coming. It will instruct us in our going. Or so I say, while understanding too well that in the 21st century we are all jaded and stultified with sensory overload. It’s always the end of the world as we know it — and we feel bored.
So why listen to the voice of one who is not Ishmael, not Cassandra, not even Ralph Nader? Because I can tell you something that no one else can. I can tell you the exact moment when H*** sapiens will cease to exist. And I can tell you how the end will come. I can show you the exact design of the device that will bring us down. I can reveal the blueprint, provide the precise technical specifications. Long before we can melt the polar ice caps, or denude the rain forests, or colonize the moon, we will be gone. And we will not — definitely will not — end with a bang or a whimper. The human race will go to its extinction in a state of supreme exaltation, like an actor climbing the stairs to accept an Academy Award. We will exit the stage of existence thinking we are going to a spectacular party.
The usual suspects — those who have become known for predicting the evolution of humans and their technology — just don’t get it. Mainly because they don’t understand what the definition of “it” is. They don’t realize what evolution is. They have come to the problem from artificial intelligence, or systems analysis, or mathematics, or astronomy, or aerospace engineering. Folks like Ray Kurzweil, Bill Joy and Eric Drexler have raised some alarms, but they are too dazzled by the complexity and power of human cybersystems, devices and networks to see it coming. They think the power of our tools lies in their ever-increasing complexity — but they are wrong. The biotech folks just don’t get it either. People like Craig Venter and Leroy Hood are too enthralled with the possibilities inherent in engineering biology to get it. And our “bioethicists,” like Arthur Caplan, and those who cling to their human DNA like it was the Holy Grail or the original tablets of stone, blathering on like Captain Kirk about what special, sacred things we humans are — they can’t possibly get it. All these people who think (or fear) that technology will ultimately trump biology have missed the cosmic point. They are not even wrong. To begin to get it, one must dispense with artificial boundaries. If you are only thinking about cybersystems and DNA you can’t possibly get it. And if you are thinking outside the box, you are still thinking too much like a human being.
Linus Pauling would have gotten it right away. Erwin Schrödinger too, and probably Robert Oppenheimer. Bertrand Russell got it. In fact he named it. What Ray, and Craig, and Eric, and Arthur can’t see is the power of pure chemistry — what Bertrand Russell called “chemical imperialism.” What they don’t get is this — a system does not have to be complex to be transcendently, transformatively powerful. After all, we and everything we have created are nothing but the product of “carbon imperialism” — carbon being the element that all known life is based on. Nothing but the power of pure chemistry. Living and nonliving materials, everything that exists in the physical world of our experience burns with that same electron fire. The fire of the chemical bond.
And Prometheus has returned. His new screen name is nanobiotechnology.
Quick. What’s the difference between artificial life and synthetic biology? Don’t know? Neither does anyone else, but that isn’t stopping nanobiotechnology researchers from building them — or it, or that, or whatever. To stay up to speed, there is always Artificial Life, the official journal of the International Society of Artificial Life. According to the editors, the humble mission of the journal “is [to investigate] the scientific, engineering, philosophical, and social issues involved in our rapidly increasing technological ability to synthesize life-like behaviors from scratch in computers, machines, molecules, and other alternative media.” Whoa!
The federal government is in the game big-time as well. For example, the Physical Biosciences Division at Lawrence Berkeley National Laboratory tells us it has established the world’s first Synthetic Biology Department, “to understand and design biological systems.”
Some people might argue that it is pretty cavalier to work on “artificial life” or “synthetic biology” before we have even agreed on definitions for these “things.” They might even point out that “artificial life” containing nonbiological components or new forms of biology could drastically alter the ecological balance or even the evolutionary trajectory of life on Earth. Of course the Lawrence Berkeley folks tell us we “need” synthetic biology for all kinds of excellent reasons. We need it for the efficient conversion of waste into energy and sunlight into hydrogen. We need it to create new life forms to use as “soft” biomaterials for tissue/organ growth. We need it to spawn new cells that will swim through the air or water to get to chemical and biological threats and decontaminate them. We need it, and we will build it, and it will be OK because we are the good guys (and gals). Our new life forms will only do good things.
In fact, we are very dangerously confused. To understand how confused, we must introduce the First Law of Nanobotics: The fusion of nanotechnology and biotechnology, now called nanobiotechnology, will result in the complete elimination of the barrier between living and nonliving materials. In other words, nanobiotechnology not only has the goal, it has the mandate to break through the “carbon barrier” of life. The result: We will produce not mere cyborgs, but true hybrid artificial life forms — or manifestations of synthetic biology, take your pick. In a previous article on nanomedicine I described a few of the rudimentary “things” that will emerge from nanobiotechnology: molecular machines that contain parts from both the worlds of biology and human engineering. Single-walled carbon nanotubes linked to DNA. Gold nanoshells linked to antibody proteins.
But gold nanoshells linked to antibodies are just a simple prototype. The fact is, we have no idea what artificial life and/or synthetic biology is, much less what it could do, or how it will behave. A recent article in Science provides terrifying evidence of our hubris. Toward the end of this article, the author explains, “Ethical and environmental concerns must also be dealt with before synthetic biology fully matures as a field. MIT, the Venter Institute, and the Center for Strategic and International Studies in Washington, D.C., have teamed up to examine issues such as how to keep any new life forms created under control … One solution: Alter synthetic genetic codes such that they are incompatible with natural ones because there is a mismatch in the gene’s coding for amino acids.”
In other words, we will be protected because these organisms will have genomes never before seen on Earth! Perhaps, but that could also be a description of the ultimate biohazard. If the Ebola virus is considered a Biosafety Level 4 threat, what level would categorize a pathogenic organism made completely from synthetic genetic codes?
In order to understand the astonishing leap we are about to make, one needs to grasp that nanobiotechnology is more than just another tool. It is also a monumental experiment in molecular evolution over which we may ultimately have very little control. A nanobiotechnology device that is smart enough to circulate through the body hunting viruses or cancer cells is, by definition, smart enough to exchange information with that human body. This means, under the right conditions, the “device” could evolve beyond its original function. Cancer-hunting nanobots are often depicted as tiny robotic machines — thus reassuringly impervious to fundamental changes brought on by merging with their biological environment. But they will not be tiny robots. That mechanical fantasy, promulgated by proponents of “Drexlerian” nanotechnology who appear devoid of even the most rudimentary knowledge of chemistry, has been decisively refuted by people who actually build the components for nanobiotechnology systems. People like the late Nobel Prize-winning chemist Richard E. Smalley and the great Harvard bioorganic chemist George Whitesides.
What will really go into our bodies, or out into the environment, will be hybrid molecular devices composed of both synthetic and biological components. These “devices” will have been fabricated to specifically exchange chemical information with biological or ecological systems. They will not be nanobots, they will be nanobiobots — and those three letters make all the difference.
In fact, the ability to exchange molecular information with biological systems will be an absolute requirement for these devices to carry out the functions for which they will be created. To find cancer cells, or dissolve arterial plaque, or modify damaged neurological pathways, nanobiobots will be required to “speak” the language of biochemistry — our language, evolution’s language. Yet they will not be classifiable as the products of biological evolution, or genetic or human engineering. They will be true hybrids. We cannot, must not, assume that our current safety and testing standards, whether chemical, biological or toxicological, will be sufficient to predict the behavior of nanobiobots once they are released into the world.
The precautionary principle developed for environmental policy states that “where there are threats of serious or irreversible damage to the environment, lack of full scientific certainty should not be used as a reason for postponing cost-effective measures to prevent environmental degradation.” This is generally interpreted to mean that a lower level of proof of harm can be used in policy making whenever the consequences of waiting for higher levels of proof may be very costly and/or irreversible.
Given that we don’t even have definitions for artificial life or synthetic biology, how would we even begin to apply the precautionary principle here? But we urgently need to.
Let’s take a simple example. Plans are currently underway to create medical nanobiobots that will use our own metabolic energy (for example, glucose oxidation) as a source of power. That means these devices could remain operational as long as we are alive — or longer if they manage to get into human egg or sperm cells. Any nanobiobot that develops the ability to propagate in this or any other manner across even one human generation has fulfilled the definition of a non-biological life form. A true alien. And it can happen.
Suppose a glucose-powered nanobiobot has been created to hunt cancer cells via a component antibody moiety. In effect, this nanobiobot has a protein grappling hook designed to dock it with a specific type of tumor cell. Standard dosing therapy will require that billions of these nanobiobots be released into their human “host.” If the antibody arm on even one of these nanobiobots is modified (either by some type of catalytic recombination with circulating antibodies or by simple chemical damage) so that it binds to a different type of cell, it could stay in that body for life, like cryptic viruses such as Epstein-Barr. If this nanobiobot is modified so that it can attach to a human sperm or egg cell, it could theoretically stay in the population for generations.
If this type of nanobiotechnology-based cancer therapy becomes common (and according to the NCI’s nanomedicine site, that is a real possibility), we could have tens of thousands of people carrying cryptic nanobiobots. Even though these nanobiobots were designed for different functions, it is reasonable to assume that they will have a number of components in common. For example, many of them may have antibody components that, in turn, have regions of identical protein structure. These interchangeable parts could act just like the repetitive DNA of introns in eukaryotic genomes. What happens when one nanobiobot (say) on a sperm cell meets a second one on an egg cell? The probability of this is, of course, extremely low. But if the population of nanobiobots introduced into the body is high (say, billions), then a one-in-a-million event becomes common. In fact, microbial and viral systems like E. coli and bacteriophages enabled the molecular genetics revolution precisely because with billions (or even trillions) of test organisms in hand, one-in-a-million events become commonplace.
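The scaling argument above (rare per-device events multiplied by enormous device populations) can be made concrete with a few lines of arithmetic. The dose and event-rate figures below are illustrative assumptions chosen to match the article's language ("billions" of devices, a "one-in-a-million" event), not numbers reported anywhere:

```python
# Back-of-the-envelope sketch: a "one-in-a-million" modification event
# becomes routine once billions of devices are in circulation.

dose = 1_000_000_000   # nanobiobots released per treatment (illustrative)
p_event = 1e-6         # chance any single device is modified (illustrative)

# Expected number of modified devices in a single dose: N * p.
expected = dose * p_event
print(f"Expected modified devices per dose: {expected:.0f}")  # 1000

# Probability that at least one device in the dose is modified.
p_any = 1 - (1 - p_event) ** dose
print(f"P(at least one modification): {p_any:.6f}")  # effectively certain
```

With these assumed numbers the expectation is N·p = 1,000 modified devices per dose, which is why an "extremely low" per-device probability offers little comfort at therapeutic scales.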
Suppose in the near future, a routine nanomedical procedure involved the introduction of billions of nanobiobots designed to scour the arteries dissolving plaque. Cleaning out the circulatory system would be considered a “one shot” treatment so that these therapeutic nanomedical devices (nanobiobots) would not have the engine necessary to use human metabolic energy as a power source. But what if, during another “routine” nanomedical procedure, a second therapeutic nanomedical device (nanobiobot) designed to vaccinate against cancer is introduced into the same person? This latter nanobiobot would, by definition, be designed for longevity so that metabolic energy would likely be the power source. Now, what if these two meet up and combine, or exchange vital components? This could happen through physico-chemical damage or perhaps via some type of catalysis mediated by the host’s own complex biochemistry. Now we have a novel, hybrid nanobiobot capable of crawling through our circulatory system for life. Or until it exchanges even more information — either with another nanobiobot or with the body itself. In the world of biology, this type of event would be called a mutation. Even more likely is the “prion” scenario, in which one of the billions of nanobiobots in the body is damaged or modified and, as a result, gains the ability to convert other nanobiobots in a manner that alters longevity, tissue target, etc. (This is what the abnormally structured proteins called prions do. Prions are responsible for fatal, mysterious brain-tissue diseases like “mad cow” and fatal familial insomnia.) These myriad possibilities bring us to…
The Second Law of Nanobotics: It is not possible to ensure that devices created using the techniques of nanobiotechnology will only transmit molecular information to the target system.
This law essentially says it is impossible to ensure that molecular information only flows in one direction. Just as today’s pharmaceuticals almost always have side effects, there is no natural law that guarantees against the reverse movement of fundamental chemical information from the biosystem to the nanobiobot. Any real nanobiotechnology system — one that uses a combination of biological and synthetic components — is theoretically vulnerable to a reversal in the flow of molecular information. This, in turn, will create opportunities for the unpredictable evolutionary advances of these devices via a process similar to biological mutation.
Put plainly, if the nanobiobot can modify us there is no way to ensure that we can’t modify the nanobiobot.
Corollary to the Second Law of Nanobotics: Before nanobiobots are used outside of a controlled research laboratory environment, we must try to define and understand what it is we are making. And rigorous algorithms and adversary-analysis systems must be developed to test these devices to ensure that they are not obviously vulnerable to the reverse flow of molecular information. Of course, we will never know this with certainty. But we haven’t even started trying to find out.
What this all means is that within a generation, biology will face its ultimate identity crisis. Researchers in the field of nanobiotechnology are racing to achieve the complete molecular integration of living and nonliving materials. We will hack into the CPU of life in order to insert new hardware and software. The purpose is to extend the capabilities of biology far beyond the limits imposed by evolution, to integrate the incredible biochemistry of life with the equally spectacular chemistry of nonliving systems like semiconductors and fiber optics. The idea is to hard-wire biology directly into any and every part of the nonliving world where it would be to our benefit. Optoelectronic splices for the vision impaired, micromechanical valves to restore heart function.
But the moment we close that nano-switch and allow electron current to flow between living and nonliving matter, we open the nano-door to new forms of living chemistry — shattering the “carbon barrier.”
This is, without doubt, the most momentous scientific development since the invention of nuclear weapons. When we open the door and allow new forms of chemistry to enter, we will change the very definition of life. Yet no coherent strategy exists to identify the moment when nanoengineered smart materials cross over into the realm of living materials. Could we even recognize a noncarbon life form at the moment of its creation? The answer seems intuitively obvious until we remember that we too are made of materials. That we too are machines.
Humans operate entirely on electric current. There are 10 trillion living cells in your body, each powered by an electrical potential of 12,000,000 volts per meter. A thousand times as hot as the plug on your wall. The voltage of life is produced inside every cell by a sophisticated electrochemical power generator. Each subcellular “mitochondrion” is a protein nanomachine designed by evolution to burn sugar, one molecule at a time. The heat from this controlled burn yields high-energy electrons that are the anima of the living state. Every move you make can be traced back to a specific flicker of this electron fire. Electromechanical systems drive the contraction of your heart. Electro-optical systems capture the image on your retina. Layers of electrochemical switches form the architecture of the neural CPU in your brain.
The bioenergetic transformations that fuel life are an amazing sequence of reactions that convert light into chemical bond energy. The biological ecosystem of Earth is one gigantic solar-powered fuel cell. Plants harvest the sun and animals harvest the plants. The first step is the light-driven fusion of water and carbon dioxide into sugar via the photosynthetic organisms — green plants and some microbes. This sugar is the fuel that drives the chemical engine of animal life.
Your mitochondria use bio-catalytic converters to strip electrons from sugar and feed them into your cellular power grid. As electrons move between energy levels, current flows.
Electronic conduction thus provides the true interface between living and nonliving materials. Today’s technology does not allow fabrication of components that plug directly into this interface, but we are getting close. In the early 21st century, nanotechnology will create the tools to hard-wire into the CPU of life, while biotechnology will provide a complementary molecular schematic of our living circuits. It is the engineering destiny of nanobiotechnology to create the first electro-molecular interface between the living and nonliving worlds. Or, more correctly, the first interface that does not discriminate between the living and nonliving states of matter. Fabrication of the world’s first true Biomolecule-to-Material interface will be infinitely more than a landmark in the evolution of human technology. Like the separate days of Genesis, the first nanofabricated BTM interface will be its own monumental act of creation and a crucial step on the path to bona fide living materials, aka artificial life.
In the history of science, the conduction of signals between living and nonliving materials will be divided into the pre-nanotech and nanotech eras. We are still pre-nanotech, which means that a direct BTM interface has yet to be fabricated, although bioengineering has created synthetic devices that communicate indirectly with living materials. Take an artificial pacemaker. This device transmits an electrical voltage to the biological pacemaker cells of the heart. In a healthy human, these pacemaker cells generate their own action potential, an electrical waveform of about 100 millivolts. This may not sound like much energy until we remember that this electrical potential is sustained across an insulating membrane only five nanometers thick. That is 5 billionths of a meter. So the energy of an action potential is almost 20,000,000 volts per meter.
Compare this to the 12,000 volts per meter at a standard wall plug. Healthy pacemaker cells spark the electrical wave that drives heart muscle contraction. When these cells malfunction, an artificial pacemaker may be implanted to take over. Waves of electrical voltage generated at the metal lead of the artificial device cross over to living tissue and initiate normal muscle contraction.
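The field-strength comparison in the two paragraphs above is just voltage divided by distance (E = V/d). A minimal sketch using the article's own figures (a ~100 mV action potential across a ~5 nm membrane versus the quoted 12,000 V/m wall-plug field):

```python
# Electric field strength E = V / d, using the article's numbers.

action_potential = 0.1   # volts: the ~100 mV cardiac action potential
membrane = 5e-9          # meters: the ~5 nm insulating membrane

e_membrane = action_potential / membrane
print(f"Membrane field: {e_membrane:,.0f} V/m")  # 20,000,000 V/m

e_wall = 12_000          # V/m: the article's figure for a wall plug
print(f"Ratio to wall plug: {e_membrane / e_wall:,.0f}x")  # about 1,667x
```

The same one-line calculation underlies the earlier claim that cells run "a thousand times as hot as the plug on your wall": a resting potential in the tens of millivolts across the same 5 nm membrane lands in the ten-million-V/m range.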
While the pacemaker is a magnificent feat of bioengineering, it does not operate via a true BTM interface. The metal lead of the artificial pacemaker, a small wire, is physically embedded in cardiac tissue and the wave of voltage spreads from the charged tip into the surrounding region. Only pacemaker cells will respond to the artificial voltage wave by initiating a further action potential. So the living system must identify the artificial signal and act upon it. The voltage produced by an implanted pacemaker, like a radio signal, will pass through space unnoticed unless there is an antenna to pick it up. In this case the receiving antennae are individual protein molecules embedded in the membrane of the living cardiac pacemaker cell. Other heart cells feel the electrical signal, but do not respond to it. They may be considered as nonspecific noise in the system. We must flood the local tissue with electricity in order to obtain the desired response.
This strategy is extremely effective, but it does not constitute a direct interface between living and nonliving materials. In the end, the pacemaker does not “know” that the target cells are out there. It will send its signal regardless of whether it is received or not. Likewise, the cardiac pacemaker cells do not “know” that the charged metal lead is out there; they simply respond to an electrical shock.
By contrast, a nanofabricated pacemaker with a true BTM interface will feed electrons from an implanted nanoscale device directly into electron-conducting biomolecules that are naturally embedded in the membrane of the pacemaker cells. There will be no noise across this type of interface. Electrons will only flow if the living and nonliving materials are hard-wired together. In this sense, the system can be said to have functional self-awareness: Each side of the BTM interface has an operational knowledge of the other.
Molecular imprinting offers one nanotechnology strategy to build a BTM switch in the near future. A molecular imprint works exactly the way one would think. An isolated biomolecule is surrounded by some type of self-reactive liquified matrix, often an unpolymerized plastic like acrylamide. A cross-linking reagent is added, and a polymer forms around the biomolecule.
When the biomolecule is removed, its ghostly outline is etched into a surface of solid plastic. The imprint fits the biological surface with atomic precision so this nanoengineered component is now a socket into which any identical biomolecule can be plugged. In the case of a pacemaker, the voltage-sensitive protein switches from cardiac cells would be imprinted into an electronic material. The imprinted material would be nanomachined and joined to an equally small power generator. The entire nanodevice, except for the imprinted socket, is then coated with a biomimetic ultrathin film. This coating makes the surface compatible with heart tissue. This nanopacemaker will occupy less than 1 cubic micrometer, smaller than a single bacterium. To complete the BTM interface, a living cardiac pacemaker cell is excised from the patient and plugged into the socket created by the original molecular imprint process. This can be accomplished with a micromanipulator similar to those currently used to move living nuclei in and out of cells. The “hard-wired” nanopacemaker is implanted into the heart where it is cemented into place by the body’s normal healing process.
The example above was selected because it is relatively simple, using technology that is already in the pipeline. Far more sophisticated strategies are on the horizon. One involves literally drawing the imprinted surface around the biomolecule by polymerizing monomers with a computer-targeted laser. When bioengineers begin to fabricate these BTM interfaces we will have entered the nanobiotech era.
If we continue to insist that life on Earth can only result from biological evolution, then the first BTM interfaces built by nanobiotechnology will be speciously trivialized as just a great invention of Homo sapiens. We will congratulate ourselves and conclude that the supremely gifted toolmaker has built the first portal between the worlds of living and nonliving materials.
This simplistic view of nanobiotechnology is very much like humanity’s current strategy in the search for extraterrestrial life. In a chemically diverse universe we insist on a perversely self-congratulatory strategy. Water and organic molecules, such as methane, are the identified spoor on this trail. We look for these signs because the biology-centric assumption is that aliens will be just like us, only very, very different — little green people with acid for blood, sentient jellyfish with a taste for cheeseburgers, or insects that have evolved with a sense of humor. Even search strategies that use “universal mathematical constants” ignore the possibility, proposed by some postmodern philosophers of science, that formal modern mathematics is a function of cognitive structure unique to humans, or less specifically to a narrow range of beings similar to humans, for example, hominids. The point is that technology analysts who can only see life as some variation on biology will see the BTM interface as a way for “us” to plug into “it.” Within this
paradigm there are no consequences for the definition of life, only new enhancements for the one true life form: biology. We hold up the mirror of humanity and see our own image reflected in the universe.
Most dictionaries define biology as “the science of living things.” But the (correctly) limitless nature of that definition is truncated when plants and animals are immediately used as the prime examples. NASA, an agency that should know better, has saturated the media for decades with hypnotic invocations of water and organics as the true signs of extraterrestrial life. Meanwhile, Hollywood and pop culture endlessly anthropomorphize aliens. Robots get the blues. Silicon sentience springs directly from human mythology. Stories of demonic computers and undead cyber-blood lust are endlessly refilmed with really cool graphics, a variety of soundtracks, and excellent eyewear. Skynet, the “self-aware” computer system of the “Terminator” series, hates us and wants us dead. The equally demonic cyber-beings of “The Matrix” want to enslave us and eat our energy (making this computer both physically dangerous and dangerously ignorant of the physical laws of the universe). It is distinctly ironic that when we consider aliens, life on Earth infuses our scientific models, our dreams, and our entertainment. We could call this “the biology paradox.” The biology paradox makes xenobiology speciously comprehensible, but by clinging to it we dismiss almost all of the chemistry in the universe.
It is time for serious students of sentience to accept that common usage has rendered the term “biology” completely useless in the nanotech age. Thinking outside the biology box leads to the alternative, much more radical concept of living materials — materials with anima.
To describe this new state of life, I suggest a contraction of the term “anima-materials” — “animats.” This term has previously been used to describe adaptive or cognitive systems capable
of robust action in a dynamic environment. The goal of these systems involves the creation of higher levels of cognition from many smaller processes. Many scientists who work in this field appear ready to dismiss chemical sentience as smaller and simpler than anything they would consider smart. But we must not assume that minds are built from mindless stuff. Chemical intelligence can manifest as the ability to catalyze a single chemical reaction. It is a dangerous, and possibly terminal, error for the children of carbon to dismiss the power of pure electron fire. Much of our fear of bioterror is based on the power (chemical intelligence) of a single molecule that allows it to block a single metabolic reaction inside the human body.
Better to heed Bertrand Russell’s prescient warning that “Every living thing is a sort of imperialist, seeking to transform as much as possible of its environment into itself.” Russell goes on to use the term “chemical imperialism” as the driving force for biological life. The obvious corollary to this warning is that chemical imperialism spawned human intelligence, not the other way around. Therefore, the definition of an animat as a living material should have primacy over any definition involving more complex cognitive functions. If we accept this logic, the creation of the first BTM interface by nanobiotechnology will require a new operational definition for the living state.
To expand the chemical franchise of the living state we must first deconstruct biology. The Human Genome Project sold us the concept that DNA is the chemical basis of life. But, in fact, that is not true. DNA is the result of life, not its cause. Our genetic code is the crowning achievement of biochemistry, not its progenitor.
It is crucial to keep this distinction in mind when considering the concept of animats. Life is not defined by DNA but by a continuous chemical struggle against entropy. The second law of thermodynamics tells us that all natural systems move spontaneously toward maximum entropy.
By literally assembling itself from thin air, biological life appears to be the lone exception to this law. The gaseous molecules snared by plants during photosynthesis were once free to roam the entire atmosphere of Earth. Plants — Earth’s primary producers — fix gas molecules from the air and minerals from the water into sugars and proteins. Humans eat the plants, or we eat the animals that eat the plants. Now those molecules that were free to roam the skies and waters must be where you are, go where you go, and do what you do. Clearly, the atoms in your body have experienced a radical reduction in entropy. But thermodynamics takes the full measure of the physical world. What little biology can build is barely visible against the chaotic horizon generated as the sun exfoliates into space. Like a tiny windmill in the solar hurricane, the wheel of life is turned by a unique set of chemical reactions that capture and channel the least part of that storm of dissipating energy into further cycles of replication. Biological life is a tiny stowaway on the entropy-powered craft of our solar system.
Life, then, is not based on DNA but on a chemical programming language spoken by a discrete set of biomolecules. This language directs the set of operations necessary to assemble the next generation of biomolecules. DNA or RNA, the genetic material, stores the directory of available biochemical operations but does not execute them. The program steps for replication are executed by a set of protein catalysts collectively known as enzymes. It is probable that the first biological life forms were RNA molecules capable of both catalytic replication and data storage – so-called ribozymes. Through evolutionary time, RNA generated two biochemical subroutines, proteins and DNA, to carry out some of the operations of replication and data storage with greater efficiency. Yet a cursory look at the molecular biology of the cell proves that RNA retains its central role. If life is viewed as a discrete set of chemical operations, then nanofabricated components that directly interface biological and materials chemistry must create the possibility of new life forms. These nanofabricated components are, in fact, the next generation of self-replicating systems: not enzymes but animats.
One could argue that it is too early to be talking about animats. It is easy, and reassuring, to dismiss even the most advanced nanobiotechnology systems of the near future as mere devices. But if biological evolution is any guide, that viewpoint is both specious and potentially catastrophic. During the 3-billion-year operation of the algorithm called evolution, revolutionary new adaptations often began as trivial events: a small genetic mutation resulting in a slightly altered protein that provides an incremental, almost trivial, enhancement to catalytic function.
Thermal tolerance is a classic example. A mutation to the DNA sequence translates into a modified physical structure for an essential protein. This new structure has enhanced thermal stability, which means it retains enzymatic function at a higher temperature than the original. As a result, the mutant is capable of 100 percent catalytic efficiency in climates a few degrees hotter than normal. This change in protein structure will only involve the rearrangement of a few atoms, making molecular evolution the original nanoengineer.
Over time, the heat-tolerant progeny of the original mutant may be able to migrate into a warmer climate: say, move down the Sierra Nevada into Death Valley. But it takes thousands of reproductive generations or more for this migration to actually occur. The original mutation will not become essential for a hundred thousand, or even millions of years. Evolution covers enormous distances one angstrom at a time, which means it is almost impossible to catch an
adaptation at the exact moment, or even in the exact generation, that it becomes essential for survival. Likewise, it is highly probable that the BTM interface will evolve from smart material to living material. This means that, in order to find the moment when the first animat appeared on Earth, we will have to backtrack from the future. Or be watching the present very, very carefully.
Based on this evolutionary model, it is highly unlikely that animats will spring fully grown upon the Earth. It is much more likely that animats will initially evolve as part of a larger biological system. In order to identify the first true manifestation of a living nonbiological material, we must develop a definitive test to distinguish an organism that is at least part animat from one that carries a smart material designed simply to assist or enhance life function.
This brings us to the Third Law of Nanobotics: The carbon barrier will be eliminated when humans create the first synthetic molecular device capable of changing the state of a living system via direct, intentional transfer of specific chemical information from one to the other.
This law formalizes the concept of animats and leads directly to the “Animat Test,” which is designed to identify the moment in time when life on Earth evolves to include both biological and nonbiological materials — the date when we break the carbon barrier.
Let us define a life form as an entity that reduces entropy by self-executing the minimum set of physical and chemical operations necessary to sustain the ability to execute functionally equivalent negentropic operations indefinitely across time. Given that, a life form will be considered an animat (living material) if all the information necessary to execute that minimum set of physical and chemical operations cannot be stored in DNA or RNA. The corollary: If all the information necessary to execute that minimum set of physical and chemical operations can be stored in DNA or RNA, the life form is biological.
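The Animat Test above is stated as a logical definition, so it can be sketched as a simple predicate. The sketch below is purely illustrative and not from the essay: the entities, operation names, and the `storable_in_dna_or_rna` flag are hypothetical stand-ins for the hard biochemical question of whether an operation's instructions can be genetically encoded.

```python
# Toy encoding of the Animat Test: an entity passes if it is a life form
# and at least one of its essential operations cannot be stored in DNA/RNA.
from dataclasses import dataclass

@dataclass
class Operation:
    name: str
    storable_in_dna_or_rna: bool  # can its instructions be genetically encoded?

def is_life_form(ops: list) -> bool:
    # The essay's definition requires a non-empty minimum set of
    # self-executed negentropic operations; we model only non-emptiness.
    return len(ops) > 0

def is_animat(ops: list) -> bool:
    # Animat: NOT all essential operations fit in nucleic-acid storage.
    return is_life_form(ops) and not all(op.storable_in_dna_or_rna for op in ops)

# Hypothetical examples (labels are illustrative, not real devices):
human = [Operation("protein synthesis", True), Operation("DNA replication", True)]
hybrid = human + [Operation("imprinted-socket electron transfer", False)]

print(is_animat(human))   # False: every essential operation is genetically encodable
print(is_animat(hybrid))  # True: one essential operation lies outside DNA/RNA storage
```

The point of the predicate is the corollary: the test turns entirely on storability, not on complexity or intelligence, which matches the essay's claim that no humanoid logic is required to qualify as life.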
In the beginning, nanobiotechnology will create minute supplemental lifesaving medical devices for humans. The purpose of these devices will rapidly expand to include the performance-enhancing — an inexorable development I have discussed previously. Some of these things will remain devices. But some will have the potential to evolve and should be termed proto-animats. The animat test is designed to be a practical engineering tool to identify the point in time when the proto-animat crosses over and becomes a true living material, an animat. The conditions of the test are independent of both the physical structure of the life form and the physical modality by which the life form perpetuates a negentropic existence across time. That modality could include replication, and/or duplication, and/or continuous self-restoration. The test cannot be applied to entropic life forms since human understanding of physical laws does not currently allow discrimination between life forms and other natural phenomena without cycles of entropy reduction.
Much as we track incoming comets on a possible collision course with Earth, extraordinary vigilance is required as we transition into the age of nanobiotechnology. If the evolutionary model prevails, we are seeking to identify proto-animats: smart materials potentially capable of evolving into animats, living materials. This, in turn, will require a radical expansion of our thinking with respect to the potential sources of artificial life. Up till now (and thanks to people like Ray, Bill and Eric), most models have focused on computers and machine intelligence.
Smart materials can certainly contain computers. But it is unlikely that animats will spring to life via some Hollywood scenario whereby a supercomputer crashes into A.I. self-awareness and begins photovoltaic-powered reproductive assembly of little A.I.s (subsequent end-of-the-human-world-as-we-know-it scenarios optional, heavy metal soundtrack preferred). If the evolutionary algorithm is any guide, animats will break the carbon barrier the way the Bell X-1 broke the sound barrier, carried aloft on the wings of a mother ship. The mother ship will be named Homo sapiens. The initial manifestation of an animat life form will be evolutionary in form, but revolutionary in function. There is also the possibility of progression from the ternary fusion of biological life, machine intelligence, and smart materials (proto-animats). But it is crucial to recognize that living materials need only think with their chemistry. No Boolean or humanoid logic is required to qualify as life. The absolute progress of chemical imperialism can only be measured in entropy reduction.
Unless we know what we are looking for, the first proto-animats will be invisible in the storm of nanobioengineering systems expected to come online over the next generation of human life. Most of these nanodevices will not have the potential to evolve beyond cyborg mode, i.e., technical augmentations to biological life forms. There are many future scenarios in which humans will need their machines to continue to live, but until an animat is carried through time as part of a life form’s self-executing set of essential operations, the carbon barrier will remain intact. But when the portal between two worlds is atom-size, how will we know when it finally opens?
In a world where we are already doing research on artificial life, synthetic biology and nanobiotechnology, this question cannot possibly be considered academic. Materials will continue to get smarter until they finally break the carbon barrier. In the near future, some nanoscale cyborg technology will undoubtedly be designed to propagate along with the host using molecular self-assembly, the same strategy used by biological systems.
But self-assembly is not unique to living systems and, therefore, cannot be used as the litmus test for new forms of life. Water molecules can self-assemble into the simple crystalline pattern of an ice cube or the infinite complexity of a snowflake. Quartz and other inorganic minerals can spontaneously crystallize and grow with a concomitant reduction in entropy, yet geodes are definitely not alive.
However, molecular self-assembly is an excellent strategy for building nanomachines and many researchers are studying ways to harness this phenomenon. Such nanomachines could even be designed to use self-assembly to replicate. The original “Grey Goo” scare (the very mention of which is anathema to most nanoscientists) involved a scenario whereby endlessly self-replicating nanomachines literally covered the earth. This scenario is generally attributed to speculation contained in Eric Drexler’s 1986 book “Engines of Creation.”
While the science behind the original Grey Goo scare was and remains completely unrealistic, we are getting better and better at using molecular self-assembly to build, maintain and propagate nanomachines. For example, it is certainly realistic to posit nanomachines that use ingested trace metals and semiconductor nanoparticles (for example, silicon) to replicate inside the host’s cells, including germ cells. This type of device could enhance human performance and even move from parent to child, yet would not be considered to be a new life form (either alone or in combination with its human host) unless it could pass the animat test. More to the point, the animat test gives us a way to determine when a smart material crosses over and becomes a life form.
It is ironic that, because of nanobiotechnology, we have never been closer to a Grey Goo scenario — although the actual color will more likely be green or red. Because biomolecules learned self-assembly through billions of years of evolution, nanobiotechnology has a tremendous advantage when it comes to applying this particular strategy to create artificial life.
In fact, we have put into motion research that will create every component necessary to build an animat. One formula is as simple as A + B + C.

A = Nanobiotechnology devices that can survive and function inside human beings. Many therapeutic devices in development for drug delivery, cancer therapy, etc., are designed to survive in the physicochemical environment of the body.

B = Nanobiotechnology devices that can derive energy from biological metabolism. Many nanomedical devices will be powered by the fuel available inside the human body. A common idea is to take our own glucose-oxidizing enzymes and use them as a fuel cell for the nanobiobot.

C = Nanobiotechnology devices capable of copying themselves by molecular self-assembly.

Together, A, B, and C yield a completely realistic animat formula: A + B + C = a self-replicating nanobiobot capable of living inside the human body, powered by our own metabolic energy.
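The A + B + C formula is a conjunction: a device qualifies only when all three traits are present at once. A minimal sketch, with entirely hypothetical device names and trait labels chosen for illustration:

```python
# Toy model of the essay's A + B + C formula as a subset test over traits.
REQUIRED_TRAITS = {
    "survives_in_body",       # A: functions in the body's physicochemical environment
    "metabolically_powered",  # B: draws energy from host metabolism (e.g., glucose)
    "self_replicating",       # C: copies itself by molecular self-assembly
}

def satisfies_animat_formula(traits: set) -> bool:
    # True only if every required trait (A, B, and C) is present.
    return REQUIRED_TRAITS <= traits

# Hypothetical devices: a drug carrier has A and B but not C; adding
# self-replication completes the formula.
drug_carrier = {"survives_in_body", "metabolically_powered"}
nanobiobot = drug_carrier | {"self_replicating"}

print(satisfies_animat_formula(drug_carrier))  # False: A + B only
print(satisfies_animat_formula(nanobiobot))    # True: A + B + C
```

This also captures the essay's warning about accidental convergence: no single research program needs to pursue all three traits, yet any device that accumulates them satisfies the formula.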
Of course, scientists are not intentionally putting A together with B and C. No one is trying to create the first true animat — they’re just working on rudimentary forms of artificial life or synthetic biology. But if, as part of this benign research initiative, they happen to create nanobiobots some of which have traits A or B or C — our definition of life will have changed forever.
Does this mean we will immediately cease to be human? Probably not. The most probable scenario is that an array of proto-animats will be carried as an evolutionary adaptation that enhances biological function for generations before any of them become an essential part of our phenotype. After that…
If the animat test described here is not sufficient, let it stand as a challenge for the development of a completely rigorous test for the unequivocal identification of nonbiological life forms. The larger point is that humanity must initiate a search-and-test protocol now in order to prepare for the arrival of the literal alien from within.
Nanofabricated animats may be infinitesimally tiny, but their electrons will be exactly the same size as ours — and their effect on human reality will be as immeasurable as the universe. Like an inverted SETI program, humanity must now look inward, constantly scanning technology space for animats, or their progenitors. The first alien life may not come from the stars, but from ourselves.

Dr. Alan H. Goldstein
Dr. Alan H. Goldstein is Professor of Biomaterials, Fierer Chair of Molecular Cell Biology, and Biomedical Materials Engineering and Science Program Chair at Alfred University. He earned a B.Sc. in Agronomy at New Mexico State University and a Ph.D. in Genetics at the University of Arizona.
Alan began his career in the 1970s as a molecular biologist before becoming a theoretician in the field of nanobiotechnology. He has codified the central concepts of this nascent area of knowledge into a set of operational rules termed the Laws of Biomimetics. As part of this work, he has published a set of guidelines specifically designed to identify the artificial life forms likely to emerge from research at the intersection of nanotechnology and biotechnology. He has also created the “Animat Test” as a practical bioengineering tool for monitoring the coming transformation from natural to artificial biology.
His essay Nature vs. Nanoengineering: Rebuilding our world one atom at a time won a 2003 Shell-Economist Prize and remains the primary reference in the nascent field of nanobioethics. He was the first person to use the term “Breaking The Carbon Barrier” to identify
the future moment when humanity successfully engineers the first nonbiological life form. This concept was formally introduced and defined during a debate with Ron Bailey at the Foresight ‘Vision Weekend’ component of the 13th Foresight Conference on Advanced Nanotechnology.
Alan’s popular science publications include:

•The (really scary) soldier of the future: Thanks to nanotechnology, he’ll be a lethal superman who can heal himself
•Everything you always wanted to know about nanotechnology… But were too afraid of quantum spookiness to ask
•Nanomedicine’s brave new world: In just a few years, doctors will know everyone’s genetic identity. This knowledge will be a blessing — and a curse
•Invasion of the high-tech body snatchers: Ready for infrared vision, and hearts that work better than the original? While bioethicists obsess over cloning, bioengineers will soon be able to replace every part of our bodies
Alan is a member of the American Association for the Advancement of Science, the Society for Biomaterials, and the American Society for Microbiology.

Encouraging a Positive Transcension


AI Buddha versus AI Big Brother, Voluntary Joyous Growth, the Global Brain Singularity Steward Mindplex, and Other Issues of Transhumanist Ethical Philosophy

Ben Goertzel, February 17, 2004

1. The (Probably) Coming Transcension
This essay is relatively brief, but its theme is extremely large: how to manage the development of technology and society, in the near to mid-term future, in such a way as to maximize the odds of a positive long-term future for the universe.
My conclusions are uncertain, but bold. I believe that the era of humanity as the “Kings of the Earth” is almost inevitably coming to an end. Unless we bomb or otherwise destroy ourselves back into the Stone Age or into oblivion, we are going to be sharing our region of the universe with powerful AI minds of one form or another. Potentially depending on decisions we make in the near or moderately near future, this may or may not lead to a fundamental alteration in the nature of conscious experience in our neck of the woods: a Transcension. And the dangers to humanity may be significant – an issue that must be very carefully considered.
I conclude that there are two strong options going forward, which I associate with the catch-phrases “AI Buddha” and “AI Big Brother.” More verbosely, these correspond to the alternatives of:

•Creating an AI based on some variant of the principle of “Voluntary Joyous Growth,” and allowing it to repeatedly self-modify and become vastly superintelligent, having a potentially huge impact on the universe and posing dangers to the human race that must be carefully studied and managed
•Creating an AI dictator with stability as a main goal, to rule the human race, ensuring peace and prosperity and guaranteeing that no human creates overly advanced, “dangerous” technologies
Not surprisingly, I have a tentative preference for the Voluntary Joyous Growth scenario, but I believe that much more research (mostly, research with “primitive” AI’s that are nevertheless much more advanced than any AI’s we currently possess) is needed to fully understand the risks and rewards of each option.
My analysis is based on a few key assumptions. Chiefly, I assume that:

•The broad and rapid advance of human science and technology will continue
•Once human science and technology have advanced adequately, “radical futurist” technologies such as artificial general intelligence (AGI), molecular nanotechnology (MNT), pharmacological human life extension and genetic engineering of wildly novel organisms will actually be developed

I recognize that these assumptions are not incontrovertibly true. There could be as-yet-unknown physical limits preventing the development of the radical futurist technologies; or, as I already noted, the human race could knock itself back to the Stone Age or oblivion or some other nontechnological condition. However, I think these assumptions are highly likely to be true; and they’re the premise for much (though not all) of the discussion to follow.
These assumptions are related to the notion of the “Singularity,” as introduced by Vernor Vinge in the 1980s and more thoroughly developed by a host of recent futurist thinkers. To the reader who is unfamiliar with this breed of futurist thinking, I recommend the following works as prerequisites for the present discussion:

•Ray Kurzweil’s book The Singularity is Near, and his earlier work The Age of Spiritual Machines
•Damien Broderick’s book The Spike
•followed by a study of Eliezer Yudkowsky’s and John Smart’s more radical ideas.

However, the points I’ll discuss here don’t necessarily require a Singularity as defined by these thinkers; they merely require something weaker that – borrowing a word from Damien Broderick’s novel of that name, and from some of John Smart’s writings — I call a Transcension. A Singularity is a particular kind of Transcension, but not the only kind.
The basic idea of the Singularity is that, at some point, the advance of technology will become (from a human perspective) essentially infinitely rapid, thus bringing a fundamental change in the nature of life and mind. A key aspect of the Singularity concept is technological acceleration. Historical analysis suggests that the rate of technological advance is itself increasing – new developments come faster and faster all the time. At some point this increase will come so fast that we don’t even have time to understand how to use the Nth radical new development before the (N+1)th radical new development has arrived. Eventually technological progress will lead to the creation of powerful AI’s, and these AI’s, rather than humans, will be carrying out the bulk of technology development – thus allowing new innovations to emerge at superhuman pace. At this point, when dramatic new technologies and new ways of thinking develop daily or hourly, so fast that humans literally can’t keep up, the technological Singularity will be upon us.
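The acceleration argument can be made concrete with a toy numerical sketch. All the parameters below are assumptions chosen purely for illustration: if each radical development arrives in, say, 80% of the time the previous one took, the inter-arrival gaps form a geometric series whose cumulative time converges to a finite horizon, which is the cartoon version of progress becoming "essentially infinitely rapid."

```python
# Toy model of accelerating progress: geometrically shrinking gaps between
# radical developments imply infinitely many developments before a finite date.
first_gap = 10.0  # years until development 1 (assumed)
ratio = 0.8       # each gap is 80% of the previous one (assumed)

t, gap = 0.0, first_gap
for n in range(1, 8):
    t += gap
    print(f"development {n} arrives at year {t:.2f}")
    gap *= ratio

# Geometric series limit: first_gap / (1 - ratio) = 50 years under these
# assumptions, so ALL developments would arrive before year 50.
horizon = first_gap / (1 - ratio)
print(f"all developments fit before year {horizon:.0f}")
```

The qualitative behavior, not the specific numbers, is the point: any sustained ratio below 1 yields a finite horizon, while a constant or growing gap (a linear or slower advance) does not, which is exactly the distinction the essay draws next between a Singularity and a mere Transcension.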
Another aspect of the Singularity idea is psychological: the Singularity is envisioned as a radical transition in the nature of experience, not just technology.
When civilization and language and rational thought emerged, the nature of human experience changed radically. Or, to put it another way, the “human experience” as we now know it emerged from the experience of proto-human animals.
But there is no good reason to believe that the emergence of the modern human mind is the end state of the evolution of psyche. Indeed, the rub is this: While evolution might take millions of years to generate another psychological sea change as dramatic as the emergence of modern humanity, technology may do the job much more expediently. The technological Singularity can be expected to induce rapid and dramatic change in the nature of life, mind and experience.
That’s Singularity; what about Transcension? The basic idea of the Transcension is that at some point, the advance of technology will bring about a fundamental change in the nature of life and mind. The difference is that a Transcension can occur even if there is no exponential or superexponential growth in technology. It could occur, eventually, even with a linear or logarithmic advance in technology. In fact, I think that a Singularity scenario is extremely likely; but the points I’m going to make here are mostly valid for any Transcension, no matter how fast it occurs. Perhaps the biggest difference between the Transcension and Singularity concepts is that, if the Singularity idea is correct, then the Singularity is near and we’d better start worrying about it fast; whereas if a Transcension is going to occur 10,000 years from now, there’s no particular need for us to fuss about it at the moment.
The term “Singularity” tends to place an emphasis on the rapidity of change that is induced by exponentially or hyperexponentially accelerating advances in technology. And indeed, the suddenness or otherwise of the coming change is a very important practical point. However, the technologies involved – exciting as they are — should be viewed mainly as enablers. The key point is that we may soon be experiencing a profoundly substantial change in the “order of being”. The point is that the way we experience the world, the way we human animals live life and conduct social affairs, is not the end state of mind-in-the-universe, but only an intermediate state on the way to something else. And the Transcension to this “something else” may well occur sooner rather than later.
But what is this something else? This is where things get interesting. One might contend that, even if we are on the verge of something far beyond our current ways of thinking, living and experiencing, our limited and old-fashioned human brains really don’t stand much chance of envisioning this new order of things in any detail. On the other hand, it seems, it would be foolish to not even try.
In fact, it seems quite possible that actions we take now may play a major role in shaping the nature of this nebulous-state-to-come, this post-Transcension, post-human order of being. One of the (many) great unknown questions of the Transcension is: how much effect does the way in which the Transcension is reached have on the nature of mind and reality afterwards? There are many possibilities, e.g.:

1. there are many qualitatively different post-Transcension states, and our choices now impact which path is taken
2. no matter what we do now, mind and reality will settle into the same basic post-Transcension attractor
3. a human-achieved Transcension will merely serve to project humans into a domain of being already occupied by plenty of other minds that have already made this transition. The specifics of how humans approach the Transcension are not going to have any significant impact on this already-existent domain.

At this point, I have no idea how to assess the probabilities of these various options.
In the latter two options, the only ethical question is whether the post-Transcension state-of-being will be better than the states that would likely exist without a Transcension. If yes, then we should work to bring about the Transcension – and once this is done, reality will take its course. If no, then we should work to avoid Transcension.
In the first option, the ethical choices are trickier, because some plausible post-Transcension states may be better than the states that would likely exist without a Transcension, whereas others may be worse. We then have to choose not only whether to seek or avoid Transcension, but whether to seek or avoid particular kinds of Transcension. In this case, it’s meaningful to analyze what we can do now to increase the probability of a positive Transcension outcome.
Of course, serious discussion of any of these options can’t begin until we define what a “positive” Transcension outcome really means.
The following sections of the essay deal mainly with two obvious issues that come out of the above train of thought:

• What is a “positive outcome”? That is, what is an appropriate ethical or meta-ethical standard by which to judge the positivity or otherwise of a hypothetical post-Transcension scenario? A number of alternative, closely related approaches are presented here, mostly centered around an abstract notion I call the Principle of Voluntary Joyous Growth.

• In the case that Option 1 above holds, how can we encourage a positive outcome? Here my focus is on artificial general intelligence technology, which I believe will be the primary driver behind the Transcension (because it will be making the other inventions). I will argue that, in addition to teaching AGI’s ethical behavior, it is important to embody ethical principles in the very cognitive architecture of one’s AGI systems. (Specific ideas in this direction will be presented, and discussed in the context of the Novamente AI system.)

2. The Ethics and Meta-Ethics of Transcension
What is a good Transcension? Some people would say that the only good Transcension is a non-Transcension. These people think that using technology to radically alter the nature of mind and being is a violation of the natural order of things. But even among radical techno-futurists and others who believe that Transcension, in principle, may be a good thing, there is nothing close to agreement on what it means for a post-Transcension world to be a “good” one.
For Eliezer Yudkowsky, the preservation of “humaneness” is of primary importance. He goes even further than most Singularity believers, asserting that the most likely path is a “hard takeoff” in which a self-modifying AI program moves from near-human to superhuman intelligence within hours or minutes – instant Singularity! With this in mind, he prioritizes the creation of “Friendly AI’s” – artificial intelligence programs with “normative altruism” (related to “humaneness”) as a prominent feature of their internal “shaper networks” (a “shaper network” being a network of “causal nodes” inside an AI system, used to help produce that AI system’s “supergoals”). He discusses extensively strategies one may take to design and teach AI’s that are Friendly in this sense. The creation of Friendly AI, he proposes, is the path most likely to lead to a humane post-Singularity world.
On the other hand, Ray Kurzweil seems to downplay the radical nature of the Singularity – leading up to, but not quite drawing, the conclusion that the nature of mind and being will be totally altered by the advent of technologies like AGI and MNT. At times he seems to think of the post-Singularity world as being a lot like our current world, but with funkier technology around; with AI minds to talk to and the absence of pesky problems like death, disease, poverty and madness. And clearly he sees this vision as a good one; he’s quite concerned to encourage ordinary non-techno-futurist people not to be afraid of the beckoning changes.
Damien Broderick’s novel Transcension presents a more ethically nuanced perspective. In his envisioned future, a superhuman AI rules over an Earth containing several different subregions, including

• one in which humans live a traditional lifestyle based on minimal technology
• one in which humans live using highly advanced technology

(When I read the book I for some reason assumed these humans were probably uploads unknowingly living on a simulated Earth; but when I showed Broderick an earlier version of this essay that mentioned this impression, he pointed out to me that the book clearly states the people are real bodies on the real Earth. I guess I have a serious case of simulation-on-the-brain!) Anyhow, at the end of the novel the Transcension occurs – an event in which the ruling superhuman AI mind decides that maintaining human lives isn’t consistent with its other goals. It wants to move on to a different order of being, and in preparation it uploads all humans from Earth into digital form, so it can more easily guarantee their safety and help with their development. (“Transcension” in the sense that I’m using it in this essay is a bit broader than the event in Broderick’s novel; in my terminology, his Transcension event is part of the overall Transcension in his fictional universe.)
Not all techno-futurists are as concerned with the future of human life or humane-ness. For example, the poster Metaqualia, in a series of emails on Yudkowsky’s SL4 email list, has argued for alternate positions, such as:

• If a post-Transcension superhuman intelligence decides that there’s a better use for the mass-energy that humans occupy, it may well be right, and we shouldn’t fear this outcome
• A universe consisting of a giant endless orgasm of delight, with no cognition and no individual minds (human or otherwise), might not be such a bad thing after all

Clearly, given that we humans can’t agree on what’s good and valuable in the current human realm of life, it would be foolish to expect us to agree on what’s good and valuable in the post-Transcension world. But nevertheless, it seems it would be equally foolish to ignore the issue completely. It seems important to ask: What are the values that we would like to see guide the development of the universe post-Transcension?
This poses a challenge in terms of ethical theory, because for a value-system to apply beyond the scope of human mind and society, it has to be very abstract indeed – and yet there’s no use in a value-system so abstract that it doesn’t actually say anything. Thinking about the post-Transcension universe pushes one to develop ethical value-systems that are both extremely general and reasonably clear.
There may be many different value-systems of this nature; here I will discuss several of them, and their interrelationship:

• Cosmic Hedonism
• Joyous Growth
• Voluntary Joyous Growth
• Joyous Growth Biased Voluntarism
• Nostalgic Joyous Growth
• Nostalgic Voluntary Joyous Growth
• Nostalgic Joyous Growth Biased Voluntarism
• Human Preservationism
• The Smigrodzkian Meta-Ethic
• Cautious Developmentalism
• Humane-ness

Each of these is a very general, abstract ethical principle. Specific ethical systems may come to exist, but the quality of an ethical system must be judged relative to the ethical principle it reflects. I will return to this point later.
I note again that I am only considering value-systems that are Transcension-friendly. Of course there are many other value-systems out there in the world today, and most of them would argue that the Transcension as I conceive it is ethically wrong. These value-systems are interesting to discuss from a psychological and cultural perspective, but they are not my concern in this essay.

2.1 Ethics, Rationality and Attractors
It is important to clearly understand the relationship between ethical principles and rationality. Once one has decided upon an ethical principle, one can use rationality to assess specific ethical systems as to how well they support the ethical principle. Below I will present two meta-ethical principles –

• Rafal Smigrodzki’s meta-ethic “Find rules that will be accepted,” and
• a rule inspired by conversations with Jef Albright and a study of Taoism: “Favor ethical principles that are harmonious with the nature of the universe.”

But one can’t choose a meta-ethical principle based on rationality alone either. Ultimately the selection and valuation process must bottom out in some kind of nonrational thought.
Reason is about drawing conclusions from premises using appropriate rules, whereas at the most abstract level, ethics is about what premises to begin with. We can push this decision back further and further – reasoning about ethical rules based on ethical systems, and reasoning about ethical systems based on ethical principles – but ultimately we must stop, and acknowledge that we need to make a nonrational choice of premises. I have chosen this stopping-point at the level of “abstract ethical principles” like the ones listed above.
Hume isolated this nonrational bottoming-out in “human nature,” the human version of “animal instinct.” Buddhist thought, on the other hand, associates it with the “higher self,” and the individual self’s recognition of its interpenetration with the rest of the universe and its ultimate nonexistence. My own view is that Buddhism and Hume are both partly right – but that neither has gotten at the essence of the matter. Hume is right that our hard-wired instincts certainly play a large role in such high-level, nonrational choices. And Buddhism is right that subtle patterns connecting the individual with the rest of the universe play a role here.
The crux of the matter, I believe, lies in the dynamical-systems-theory notion of an attractor. An attractor is a pattern that tends to arise in a dynamical system from a wide variety of different initial conditions. A strict mathematical attractor must persist forever once entered into; but one may also speak of “probabilistic attractors” that are merely very likely to persist, or that may mutate slightly and gradually over time, etc. I think that part of “human nature” consists of peculiarities of the human mind/brain, whereas part of it consists of generic attractors that have appeared in the human psyche – or as emergents among human minds or between human minds and their environments — because they generally tend to pop up in a lot of complex systems in a lot of circumstances.
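A minimal numerical sketch may make the attractor notion concrete. The particular map and parameter below are my own illustrative choices (nothing in the essay specifies them): the logistic map with r = 2.5 has a single stable fixed point, and trajectories started from quite different initial conditions all end up there.

```python
# Minimal illustration of an attractor: the logistic map x -> r*x*(1-x)
# with r = 2.5 has a stable fixed point at x* = 1 - 1/r = 0.6.
# Trajectories from many different initial conditions all converge to it.

def logistic_step(x, r=2.5):
    return r * x * (1 - x)

def trajectory_endpoint(x0, steps=200, r=2.5):
    x = x0
    for _ in range(steps):
        x = logistic_step(x, r)
    return x

for x0 in (0.1, 0.3, 0.5, 0.9):
    print(round(trajectory_endpoint(x0), 6))  # each prints 0.6
```

The point of the sketch is the qualitative one made in the text: where the trajectory ends up depends hardly at all on where it started, which is exactly the sense in which a pattern can "follow from the dynamics" rather than from initial assumptions.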
One reason why some meta-ethics appear more convincing than others, then, is that these meta-ethics appear to be attractors: they are “universal attractors,” i.e. principles that arise as patterns in many different complex systems in many different situations. This doesn’t mean that they’re logically correct in the sense of following from some a priori assumption regarding what is good. Rather it means that, in a sense, they follow from the universe. This point will be returned to a little later.
Of course, we are still left with a selection problem, because there may be different universal attractors that contradict each other. Does the more powerful universal attractor win, or is this just a matter of chance, or context-dependent chance, or subtle factors we paltry humans can’t understand? I’ll leave off here and turn to slightly more concrete issues!

2.2 Cosmic Hedonism and Voluntary Joyous Growth
Firstly, Cosmic Hedonism refers to the ethical system that values happiness above all. In this perspective, our goal for the post-Transcension universe should be to maximize the total amount of happiness in the cosmos. Of course, the definition of “happiness” poses a serious problem, but if one agrees that Cosmic Hedonism is the right approach, one can make arriving at an understanding of happiness part of the goal for the post-Transcension period. The goal becomes to understand what happiness is, and then maximize it.
However, even if one had a crisp and final definition of happiness, there would be a problem with Cosmic Hedonism – a problem that I’ve come to informally refer to as the problem of the “universal orgasm.” The question is whether we really want a universe that consists of a single massive wave of universal orgasmic joy. Perhaps we do all want this, in a sense – but what if this means that mind, intelligence, life, humanity and everything else we know becomes utterly nonexistent?
The ethical maxim that I call the Principle of Joyous Growth attempts to circumvent this problem, by adding an additional criterion:

Maximize happiness but also maximize growth

What does “growth” mean? A very general interpretation is: Increase in the amount and complexity of patterns in the universe. The Principle of Joyous Growth rules out the universal orgasm outcome unless it involves a continually increasing amount of pattern in the universe. It rules out a constant, ecstatically happy orgasmic scream.
Of course, maximizing two quantities at once is not always possible, and in practice one must maximize some weighted average of the two. Different weightings of happiness versus growth will lead to different practical outcomes, all lying within the general purview of the conceptual Principle of Joyous Growth.
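The weighting idea can be made concrete with a small sketch. The candidate outcomes and their numeric happiness/growth scores below are entirely invented for illustration; the only point is that different weight choices select different "best" futures.

```python
# Hypothetical sketch of scalarizing two objectives (happiness, growth)
# into one score via a weighted average. The outcomes and their numeric
# scores are invented purely for illustration.

outcomes = {
    "universal orgasm":       {"happiness": 1.0, "growth": 0.0},
    "expanding civilization": {"happiness": 0.6, "growth": 0.9},
    "static utopia":          {"happiness": 0.8, "growth": 0.2},
}

def best_outcome(w_happiness, w_growth):
    # Pick the outcome maximizing the weighted average of the two values.
    def score(vals):
        return w_happiness * vals["happiness"] + w_growth * vals["growth"]
    return max(outcomes, key=lambda name: score(outcomes[name]))

print(best_outcome(0.9, 0.1))  # happiness-heavy weights -> "universal orgasm"
print(best_outcome(0.3, 0.7))  # growth-heavy weights -> "expanding civilization"
```

A happiness-heavy weighting selects the pure-bliss outcome the text warns about, while a growth-heavy weighting rules it out, which is exactly the sensitivity to weights that the paragraph describes.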
The Joyous Growth principle, without further qualification, is definitely not Friendly in the Yudkowskian sense. In fact it is definitively un-Friendly, in the sense that we humans are far from maximally happy — and in this as well as other ways, we are basically begging to be transcended. A post-Transcension universe operating according to the Principle of Joyous Growth would not be all that likely to involve the continuation of the human race.
An alternative is to add a third criterion, obtaining a Principle of Voluntary Joyous Growth, i.e.

Maximize happiness, growth and choice

This means adopting as an important value the idea that sentient beings should be allowed to choose their own destiny. For example, they should be allowed to choose unhappiness or stagnation over happiness and growth.
Of course, the notion of “choice” is just as much a can of worms as “happiness.” Daniel Dennett’s recent book Freedom Evolves does an excellent job of sorting through the various issues involved with choice and freedom of will. While I don’t accept Dennett’s reductionist view of consciousness, I find his treatment of free will generally very clear and convincing.
Note that including choice as a variable along with two others implies that ensuring free choice for all beings is not an absolute commandment. Of course, given the extent to which human wills conflict with each other, free choice for all beings is not possible anyway. Given a case where one being’s will conflicts with another being’s will, the Voluntary Joyous Growth approach is to side with the being whose choice will lead to greater universal happiness and growth.
Voluntary Joyous Growth is not a simple goal, because it involves three different factors which may contradict each other, and which therefore need to be weighted and moderated. This complexity may be seen as unfortunate – or it may be seen as making the ethical principle into a more subtle, intricate and fascinating attractor of the universe.

2.3 Attractive Compassion
I should note that my goal in positing “Voluntary Joyous Growth” has been to articulate a minimal set of ethical principles. These are certainly not the only qualities that I consider important. For example, I strongly considered including Compassion as an ethical principle, since Compassion is, in a sense, the root of all ethics. However, it occurred to me that Compassion is actually a consequence of joy, growth and choice. In a universe consisting of beings that respect the free choices of other beings, and that want to promote joy and growth throughout the universe, compassion for other beings is inevitable – because “being good to others” is generally an effective way to induce these others to contribute toward the joy and growth of the universe. Without the inclusion of choice, Joyous Growth is consistent with simply (painlessly) annihilating unhappy or insufficiently productive minds and replacing them with “better” ones; but assigning a value to choice gives a disincentive to dissolve “bad” minds and leads instead to the urge to help these minds grow and be joyful.
This ties in with the notion that compassion itself is a “universal attractor.” Or, a more accurate statement is: A modest level of compassion is a universal attractor. We can see this in the fact that it ensues from the combination of the universal attractors of Joy, Growth and Choice; and we can also see it in the evolution of human society. Most likely, compassion emerged in human beings because, in a small tribe setting, it is often valuable for each individual to be kind to the other individuals in the tribe, so as to keep them alive and healthy. This is the case regardless of whether the tribe members are genetically related to each other; it’s the case purely because, in many situations, the survival probability of an individual is greater if

• The tribe has more people in it
• The people in the tribe know each other and hence can work relatively well together (hence, all else equal, there is benefit in retaining the current members of the tribe rather than recruiting new ones)

So if humanity is divided up into tribes – because individual humans can survive better in groups than all alone – then compassion toward tribe members increases individual fitness. Compassion emerges spontaneously via natural selection, in situations where there is a group of minds which each has choice, and which (via growth) have the complexity to cooperate to some extent.
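The repeated-interaction logic behind this argument is essentially that of the iterated Prisoner's Dilemma, and can be sketched as follows. The payoff numbers (5, 3, 1, 0) are standard textbook values, not anything from the essay.

```python
# Sketch of the repeated-interaction logic behind tribal compassion:
# in a one-shot Prisoner's Dilemma defection pays, but over many rounds
# with the same partner, mutual cooperation outscores mutual defection.
# Payoffs use standard textbook values (T=5, R=3, P=1, S=0).

PAYOFF = {  # (my move, partner's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs for two strategies; a strategy maps the partner's
    previous move (None on the first round) to a move."""
    total_a = total_b = 0
    prev_a = prev_b = None
    for _ in range(rounds):
        move_a, move_b = strategy_a(prev_b), strategy_b(prev_a)
        total_a += PAYOFF[(move_a, move_b)]
        total_b += PAYOFF[(move_b, move_a)]
        prev_a, prev_b = move_a, move_b
    return total_a, total_b

tit_for_tat = lambda prev: "C" if prev in (None, "C") else "D"
always_defect = lambda prev: "D"

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
```

As in the tribal setting described above, kindness pays precisely because the same individuals keep meeting each other; in a population of strangers who interact once, defection dominates, which foreshadows the large-society difficulty discussed a little further on.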
Note that absolute compassion doesn’t emerge from this tribal-evolutionary logic, but a moderate level of compassion does. Similarly, absolute compassion doesn’t come out of Voluntary Joyous Growth – but it seems that a moderate level of compassion does. It seems more likely that “moderate compassion” is a universal attractor than that “absolute compassion” a la Buddha or Mother Teresa is.
Interestingly, it’s harder to see how compassion would evolve among humans living in a large-group society like modern America. In this case, there’s not such a direct incentive for an individual to be kind to others. It may be that a population of rational-actor minds plunked into a large society would never evolve compassion to any significant degree. However, I suspect that without compassion, society would collapse into anarchy – and anarchy would give way to a tribal society … in which compassion would evolve, showing the power of compassion as an attractor once again!

2.4 Nostalgia
Philip Sutton, on reviewing an earlier version of this essay, pointed out that I had omitted a value that is very important to him: sustenance and preservation of what already exists. On reading his comments, I reflected that I had made this omission because, in fact, this value – which I’ll call Nostalgia – is not all that important to me personally.
I myself am somewhat attached to many things that exist — such as my family and friends, my self, my pets, Jimi Hendrix CD’s, Haruki Murakami novels and pinon pine trees and Saturday mornings in bed and long whacky email conversations, to name just a few – but I don’t consider this kind of attachment a primary value. I think it’s important that I, as a sentient being, have a choice to retain these things if they are important to me and contribute toward my self-perceived happiness. But I don’t see an intrinsic value in maintaining the past – whereas I do see an intrinsic value in growth and development.
However, I don’t see Nostalgia as a destructive or unpleasant value, nor do I see it as contradictory with growth, joy or choice. The universe is a big place – and quite likely, many parts of it are not terribly important to any sentient being. It may well be possible to preserve the most important patterns that currently exist in the universe, and still use the remainder of the universe to create wonderful new patterns. The values of Growth and Nostalgia only contradict each other in a universe that is “full,” in the sense that every piece of mass-energy is part of some pattern that is nostalgically important to some sentient being. In a full universe one must make a choice, in which case I’ll advocate Growth … but it’s not clear whether such a thing as a full universe will ever exist. It may be that the process of growth will continue to open up ever more horizons for expansion.

2.5 Smigrodzki’s Meta-Ethic
An alternative approach, proposed by Rafal Smigrodzki in a discussion on the SL4 list, is to begin with an even more abstract sort of meta-ethic. Abstract though it is, the Principle of Voluntary Joyous Growth still imposes some specific ethical standards. On the other hand, Smigrodzki proposes a pure meta-ethic with no concrete content. In fact he proposed two different versions, which are subtly and interestingly different.

Smigrodzki’s first formulation was:

Find rules that will be accepted.

This principle arose in a discussion of the analogy between ethics and science, and specifically as an analogue to Karl Popper’s meta-rule for the scientific enterprise:

Find conjectures that have more empirical content than their predecessors

Popper’s meta-rule specifies nothing about the particular contents of any scientific theory or scientific research programme; it speaks only of what kinds of theories are to be considered scientific. Similarly, Smigrodzki’s meta-rule specifies nothing about what kinds of actions are to be considered ethical; it speaks only of what kinds of rule-systems are to be considered as falling into the class of “ethical rule-systems”: namely, rule-systems that are accepted.
One interesting thing about Smigrodzki’s meta-rule is how close it comes to the Principle of Voluntary Joyous Growth. To see this, consider first that the notion of “be accepted” assumes the existence of volitional minds that are able to accept or reject rules. So to find rules that will be accepted, it’s necessary to first find (or ensure the continued existence of) a community of volitional minds able to accept rules.
Next, observe that one version of the nebulous notion of “happiness” is “the state that a volitional mind is in when it gets to determine enough of its destiny by its own free choice.” This is almost an immediate consequence of the notions of happiness and choice. For, if happiness is what a mind wants, and a mind has enough ability to determine its destiny via free choice, then naturally the mind is going to make choices maximizing its happiness.
So, “Find rules that will be accepted” is arguably just about equivalent to “Create or maintain a community of volitional minds, and find rules that this community will accept (thus making the community happy).”
But then we run up against the problem that not all minds really know what will make them happy. Often minds will accept rules that aren’t really good for them – even by their own standards – out of ignorance, stupidity or self-delusion. To avoid this, one wants the minds to be as smart, knowledgeable and self-aware as possible. So one winds up with a maxim such as: “Create or maintain a community of volitional minds, with an increasing level of knowledge, intelligence and self-awareness, and find rules that this community will accept (thus making the community happy).”
Incidentally, Popper’s meta-rule of science also is susceptible to the “stupidity and self-delusion” clause. In other words, “Find conjectures that have more empirical content than their predecessors” really means “Find conjectures that seem to a particular community of scientists to have more empirical content than their predecessors” – and the meaningfulness of this really depends on how smart and self-aware the community of scientists is. The history of science is full of apparent mistakes in the assessment of “degrees of empirical content.” So Popper’s meta-rule could be revised to read “Find conjectures that have more empirical content than their predecessors, as judged by a community of minds with increasing intelligence and self-awareness.”
The notion of “increasing level of knowledge” can also be refined somewhat. What is knowledge, after all? One way to gauge knowledge is using the philosophy of science. Lakatos’s theory of research programmes suggests that a scientific research programme – a body of scientific theories – is “progressive” (i.e. good) if it meets a number of criteria, including

• suggesting a large number of surprising hypotheses, and
• being reasonably simple.
One interpretation of “increasing level of knowledge” is “association with a series of progressive scientific research programmes.”
Once all these details are put in place, my fleshing-out of Smigrodzki’s meta-rule (which may well make it fleshier than Smigrodzki would desire) becomes an awful lot like the Principle of Voluntary Joyous Growth. We have happiness, we have choice, and we have growth (in the form of growth of intelligence, knowledge and self-awareness). The only real difference from the earlier formulation of the Principle of Voluntary Joyous Growth is the nature of the growth involved: is it in the universe at large, or within the minds in a community that is accepting ethical rules?
After I presented him with this discussion of his meta-ethic, Smigrodzki’s reaction was to create a yet more abstract version of his meta-ethic, which he formulated as

“Formulate rules that make themselves come true” (cause the existence of states of the universe, including conscious states, in agreement with goals stated in the rules)

and

“Formulate rules which, if applied, will as their outcomes have the goals explicitly understood to be inherent in these rules”.

I rephrase these as

“Create goals, and rules that, if followed, will lead to the achievement of these goals”

and

“Create goals, and rules that, if followed, will lead to the achievement of these goals, with as few side-effects as possible.”

This formulation is more abstract than – and inclusive of — his previous proposal, which in this language was basically “Create goal-rule systems that will be accepted.” To see the difference quite clearly, consider the “ethical” system:


This posits a goal and also some rules for how to achieve the goal. It is rational and consistent. So far as I can tell, it obeys Smigrodzki’s revised, more abstract meta-ethic. However, it seems to fail his former, more concrete meta-ethic, because at least among most of the sentient beings I know, it is unlikely to be accepted. (Now and then various psychopaths have of course accepted this “ethic,” and attempted to put it into practice.)
So, in my view, by further abstracting his meta-ethic, Smigrodzki moved from

• a very abstract formulation of “the good” [“Create goal-rule systems that will be accepted”], to
• a very abstract formulation of the general process of goal-seeking

The difference between these lies in the key role of choice in the former (as hidden in the notion of “acceptance”). This highlights the key role of the notions of choice and will in ethics.

2.6 Joyous Growth Biased Voluntarism
Voluntary Joyous Growth, obviously, has a different relationship to Friendliness than pure Joyous Growth. Voluntary Joyous Growth means that, even if superhuman AI’s determine that joy and growth would be maximized if the mass-energy devoted to humans were deployed in some other way – even so, the choices of individual humans (whether to remain human or let their mass-energy be deployed in some other way) will still be respected and figured into the equation.
One could try to make Voluntary Joyous Growth more explicitly human-friendly by making choice the primary criterion. This is basically what’s achieved by my fleshed-out version of Smigrodzki’s meta-rule. In this version, the #1 ethical meta-principle is to let volitional minds have their choices wherever possible. Only when conflicts arise do the other principles – maximize joy and growth – come into play. This might be called “Joyous Growth Biased Voluntarism.” Joy and growth still may play a very big role here, because quite obviously, conflicts may arise quite frequently between volitional minds coupled in a finite universe. However, one can envision scenarios in which all inter-mind conflicts are removed, so that it’s possible to fulfill choices without considering joy and growth at all.
For instance, what if all the minds in the universe decide they all want to play video games and live in purely automated simulated worlds rather than worlds occupied with other minds? Then living in individual video-game-worlds of their choice may gratify them quite adequately: so they have maximum choice, but no opportunity for any factor besides choice to come into play.
In this case minds may, consistently with Joyous Growth Biased Voluntarism, make themselves unhappy and refuse opportunities for growth unto eternity. In my personal judgment, this is a mark against Joyous Growth Biased Voluntarism and in favor of simple Voluntary Joyous Growth with its greater flexibility. I suspect that Voluntary Joyous Growth is much closer to being a powerful attractor in the universe.

2.7 Human Preservationism and Cautious Developmentalism
A more extreme ethical principle, in the vein of Joyous Growth Biased Voluntarism, is what I call Human Preservationism. In this view, the preservation of the human race through the post-Transcension period is paramount. Where this differs from Joyous Growth Biased Voluntarism is that, according to Human Preservationism, even if all humans want to become transhuman and leave human existence behind, they shouldn’t be allowed to.
In fact, I don’t know of any serious transhumanist thinkers who hold this perspective. While many transhumanists value humanity, and some personally hope that traditional human culture persists through the Transcension, transhumanists tend to be a freedom-centered bunch, and few would agree with the notion of forcing sentient beings to remain human against their will. But even so, Human Preservationism is a perfectly consistent philosophy of Transcension. There’s nothing inconsistent about wanting vastly superhuman minds and new orders of beings to come into existence, yet still placing an absolute premium on the persistence of the peculiarly human.
A (somewhat more appealing) variation on Human Preservationism is Cautious Developmentalism, a perspective I will discuss a little later on. The abstract principle here is: If things are basically good, keep them that way, and explore changes only very cautiously. In practical terms, the idea here is to preserve human life basically as-is, but to allow very slow and careful research into Transcension technologies, in such a way as to minimize any risk of either a bad Transcension or another bad existential outcome. In the most extreme incarnation of this perspective, the choice of how to approach the Transcension is deferred to future generations, and the problem for the present generation is redefined as figuring out how to set the Cautious Developmentalist course in motion.

2.8 Humaneness
Yudkowsky has proposed that “The important thing is not to be human but to be humane.” Enlarging on this point, he argues that
Though we might wish to believe that Hitler was an inhuman monster, he was, in fact, a human monster; and Gandhi is noted not for being remarkably human but for being remarkably humane. The attributes of our species are not exempt from ethical examination in virtue of being “natural” or “human”. Some human attributes, such as empathy and a sense of fairness, are positive; others, such as a tendency toward tribalism or groupishness, have left deep scars on human history. If there is value in being human, it comes, not from being “normal” or “natural”, but from having within us the raw material for humaneness: compassion, a sense of humor, curiosity, the wish to be a better person. Trying to preserve “humanness”, rather than cultivating humaneness, would idolize the bad along with the good. One might say that if “human” is what we are, then “humane” is what we, as humans, wish we were. Human nature is not a bad place to start that journey, but we can’t fulfill that potential if we reject any progress past the starting point.
In email comments on an earlier draft of this paper, Yudkowsky noted that he felt my summary of his theory didn’t properly do it justice. Conversations following these comments have improved my understanding of his thinking; but even so, I’m not certain I fully “get” his ideas. So, rather than explicitly commenting on Eliezer’s Friendly AI theory here, I will introduce a theory called “Humane AI,” which I believe is somewhat similar to his approach, but may also have some differences. I will present some arguments describing difficulties with Humane AI, which may not all be problems with Friendly AI, in the sense that there may be solutions to these problems within Friendly AI theory that I don’t fully understand.
In Humane AI, one posits as a goal, not simply the development of AI’s that are benevolent to humans, but the development of AI’s that display the qualities of “humaneness,” where “humaneness” is considered roughly according to Yudkowsky’s description above. That is, one proposes “humaneness” as a kind of ethical principle, where the principle is: “Accept an ethical system to the extent that it agrees with the body of patterns known as ‘humaneness’.”
Now, it’s not entirely clear that “humaneness,” in the sense that Yudkowsky proposes, is a well-defined concept. It could be that the specific set of properties called “humaneness” depends on the specific algorithm one uses to aggregate the wishes of the various individuals in the world. If so, then one faces the problem of choosing among the different algorithms. This is a question for a future, more scientific study of human ethics.
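The dependence on the aggregation algorithm can be made concrete with a small sketch. The following Python toy is my own construction, with entirely hypothetical traits and scores: each person rates each candidate trait in [-1, 1], a trait counts as “humane” if its aggregate score is positive, and we compare two candidate aggregation rules (mean and median). A trait with a skewed score distribution can qualify under one rule and not the other.

```python
# Toy illustration: the set of traits deemed "humane" depends on the
# aggregation algorithm. Scores and traits are entirely hypothetical.
from statistics import mean, median

# Five people's scores, in [-1, 1], for three candidate traits.
scores = {
    "compassion":    [0.9, 0.8, 0.7, 0.9, 0.6],    # broadly endorsed
    "tribalism":     [-0.8, -0.6, 0.2, -0.7, 0.1], # broadly rejected
    "belief_in_god": [0.2, 0.2, 0.2, -0.9, -0.9],  # mild majority support,
                                                   # strong minority dissent
}

def humane_traits(scores, aggregate):
    """A trait is 'humane' if its aggregate score is positive."""
    return {trait for trait, s in scores.items() if aggregate(s) > 0}

by_mean = humane_traits(scores, mean)      # strong dissent pulls mean below 0
by_median = humane_traits(scores, median)  # median tracks the mild majority
```

Here the mean-based definition excludes belief in God (the strong objections outweigh the mild endorsements) while the median-based definition includes it, so the two algorithms yield genuinely different “humaneness” sets.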
The major problem in distinguishing “humaneness” from “human-ness” is distinguishing the “positive” from the “negative” aspects of human nature — e.g. compassion (viewed as positive) versus tribalism (viewed as negative). The approach hinted at in the above Yudkowsky quote is to use a kind of “consensus” process. For instance, one hopes that most people, on careful consideration and discussion, will agree that tribalism, although humanly universal, isn’t good. One defines the extent to which a given ethical system is humane as the average extent to which a human, after careful consideration and discussion, will consider that ethical system a good one. Of course, one runs into serious issues with cultural and individual relativity here.
Personally, I’m not so confident that people’s “wishes regarding what they were” are generally good ones (which is another way of saying: I think my own ethic differs considerably from the mean of humanity’s). For instance, the vast majority of humans would seem to believe that belief in God is a good and important aspect of human nature. Thus, it seems to me, belief in God should be considered humane according to the above definition — it’s part of what we humans are, AND part of what we humans wish we were. But nevertheless, I think that belief in God — though it has some valuable spiritual intuitions at its core — is essentially ethically undesirable. Nearly all ethical systems containing this belief have had overwhelming negative aspects, in my view. Thus, I consider it my ethical responsibility to work so that belief in God is not projected beyond the human race into any AGI’s we may create — unless (and I really doubt it) it’s shown that the only way to achieve other valuable things is to create an AGI that contains such a belief system. Of course, there are many other examples besides belief in God that could be used to illustrate this point.
To get around problems like this, one could try to define humaneness as something like “What humans WOULD wish they were, if they were wiser humans” — but of course, defining “wiser humans” in this context requires some ethical or meta-ethical standard beyond what humans are or wish they were.
So, in sum, the difficulties with Humane AI are:

1. The difficulty of defining humane-ness
2. The presence of delusions that I judge ethically undesirable, in the near-consensus worldview of humanity

The second point here may seem bizarrely egomaniacal – who am I to judge the vast mass of humanity as being ethically wrong on major points? And yet, it has to be observed that the vast mass of humanity has shifted its ethical beliefs many times over history. At many points in history, the vast mass of humans believed slavery was ethical, for instance. Now, you could argue that if they’d had enough information, and carried out enough discussion and deliberation, they might have decided it was bad. Perhaps this is the case. But to lead the human race through a process of discussion, deliberation and discovery adequate to free it from its collective delusions – this is a very large task. I see no evidence that any existing political institution is up to this task. Perhaps an AGI could carry out this process – but then what is the goal system of this AGI? Do we begin this goal system with the current ethical systems of the human race – as Yudkowsky seems to suggest in the above (“Human nature is not a bad place to start…”)? In that case, does the AGI begin by believing in God and reincarnation, which are beliefs of the vast majority of humans? Or does the AGI begin with some other guiding principle, such as Voluntary Joyous Growth? My hypothesis is that an AGI beginning with Voluntary Joyous Growth as a guiding principle is more likely to help humanity along a path of increasing wisdom and humaneness than an AGI beginning with current human nature as a guiding principle.
One can posit, as a goal, the creation of a Humane AI that embodies humane-ness as discovered by humanity via interaction with an appropriately guided AGI. However, I’m not sure what this adds, beyond what one gets from creating an AGI that follows the principle of Voluntary Joyous Growth and leaving it to interact with humanity. If the creation of the Humane AI is going to make humans happier, and going to help humans to grow, and going to be something that humans choose, then the Voluntary Joyous Growth based AGI is going to choose it anyway. On the other hand, maybe after humans become wiser, they’ll realize that the creation of an AGI embodying the average of human wishes is not such a great goal anyway. As an alternative, perhaps a host of different AGI’s will be created, embodying different aspects of human nature and humane-ness, and allowed to evolve radically in different directions.

2.9 Ethical Principles, Systems and Rules
My discussion of ethics has lived on a very abstract level so far — and this has been intentional. I have sought to treat ethics in a manner similar to the philosophy of science. In science we have Popper’s meta-rule, and then we have scientific research programmes, which may be evaluated heuristically as to how well they fulfill Popper’s meta-rule: how good are they at being science? Then, within each research programme, we have a host of specific scientific theories and conjectures, none of which can be evaluated or compared outside the context of the research programmes in which they live. Similarly, in the domain of ethics, we have highly abstract principles like Smigrodzki’s meta-rule or the Principle of Voluntary Joyous Growth – and then, within these, we may have particular ethical rule-systems, which in turn generate specific rules for dealing with specific situations.
My feeling is that the specific ethical rule-systems that promote a given abstract principle in a human context are very unlikely to survive the Transcension. For instance, the standard ethics according to which modern Americans live involves a host of subtle compromises, involving such issues as

• meat-eating (we’re comfortable killing some animals for meat but not others; we’re comfortable killing animals brutally but not too brutally)
• charity (we give a certain percentage of our incomes to help the less-fortunate, but not as much as we could)
• honesty (we generally allow “white lies” but frown on other sorts of lies)

and so on and so forth. This complex system of compromises that constitutes our modern American practical ethics is not in itself a powerful attractor. It is largely in accordance with the Principle of Voluntary Joyous Growth – it tries to promote happiness, progress and choice – but I have no doubt that, Transcension or no, in a couple hundred years a rather different network of compromises will be in place. And post-Transcension, the practical manifestations of the Principle of Voluntary Joyous Growth will be very radically different. (As an aside, it is clearly no coincidence that the Principle of Voluntary Joyous Growth harmonizes better with modern urban American ethics than with the ethics of many other contemporary cultures. More so than, say, Arabia or China or the Mbuti pygmies, American culture is focused on individual choice, progress and hedonism. And so, I’m aware that as a modern American writing about Voluntary Joyous Growth, I’m projecting the nature of my own particular culture onto the transhuman future. On the other hand, it’s not a coincidence that America and relatively culturally similar places are the ones doing most of the work leading toward the Transcension. Perhaps it is sensible that the cultures most directly leading to the Transcension should have the most post-Transcension-friendly philosophies.)
One thing this system-theoretic perspective says is: We can’t judge the modern American ethical system by any one judgment it makes – we can only judge, as a whole, whether it tends to move in accordance with the principle we choose as a standard (e.g. Voluntary Joyous Growth). And similarly, we can’t reasonably ask post-Transcension minds to follow any particular judgment about any particular situation – rather, we can only ask them to follow some ethical system that tends to move in accordance with some general principle we pose as a standard. (And this request is more likely to be fulfilled, a priori, if it constitutes a powerful attractor in the universe at large.)
Thus, I suggest that “Be nice to humans” or “Obey your human masters” are ethical prescriptions simply too concrete and low-level to be expected to survive the Transcension. On the other hand, I suggest that a highly complex and messy network of beliefs like Yudkowsky’s “humaneness” is insufficiently crisp, elegant and abstract to be expected to survive the Transcension. Perhaps it’s more reasonable to expect highly abstract ethical principles to survive. Perhaps it’s more sensible to focus on ensuring that the Principle of Voluntary Joyous Growth survives the Transcension, than to focus on specific ethical rules (which have meaning only within specific ethical systems, which are highly context and culture bound) or the whole complex mess of human ethical intuition. Initially principles like joy, growth and choice will be grounded in human concepts and feelings — in aspects of “humaneness” — but as the Transcension proceeds they will gain other, related groundings.
In terms of technical AI theory, this contrast between general principles and specific rules relates to the issue of “stability through successive self-modifications” of an AI system. If an AI system is constantly rewriting itself and re-rewriting itself, how likely is it that this or that specific aspect of the system is going to persist over time? One would like for the basic ethical goal-system of the AI to persist through successive rewritings, but it’s not clear how to ensure this, even probabilistically. The properties of AI goal-systems under iterative self-modification are basically unknown and will be seriously explorable only once we have some reasonably intelligent and self-modifiable AI systems at hand to experiment with. However, my strong feeling is that the more abstract the principle, the more likely it is to survive successive self-modification. A highly specific rule like “Don’t eat yellow snow” or “Don’t kill humans” or a big messy habit-network like “humaneness” is relatively unlikely to survive; a more general principle like Voluntary Joyous Growth is a lot more likely to display the desired temporal continuity. I’m betting that this intuition will be borne out during the exciting period to come when we experiment with these issues on simple self-modifying, somewhat-intelligent AGI systems. And this intuition is followed up by the intuition mentioned above: that, among all the abstract principles out there, the ones that are more closely related to powerful attractors in the universe at large are more likely to occur as attractors in an iteratively self-modifying AGI, and hence more likely to survive through the Transcension.
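The intuition that abstract principles outlast specific rules can be caricatured in a toy simulation. This is my own construction, not a claim about real AGI dynamics: a “principle” is modeled as a region of a goal-parameter space that must remain satisfied, and each self-modification as a small random perturbation of the parameter. A narrow region (a specific rule) is drifted out of far sooner than a wide one (an abstract principle).

```python
# Toy model: survival of a "principle" under random self-modification.
# A principle is satisfied while the goal parameter stays within
# +/- half_width of its starting point; each self-modification
# perturbs the parameter by a small random amount.
import random

def survival_rate(half_width, steps=50, trials=2000, noise=0.2, seed=0):
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        x = 0.0  # goal parameter starts at the center of the region
        ok = True
        for _ in range(steps):
            x += rng.uniform(-noise, noise)  # one self-modification
            if abs(x) > half_width:          # principle no longer satisfied
                ok = False
                break
        if ok:
            survived += 1
    return survived / trials

specific = survival_rate(half_width=0.3)  # narrow, rule-like constraint
abstract = survival_rate(half_width=2.0)  # wide, principle-like constraint
```

Under this (admittedly crude) model, the wide constraint survives the great majority of fifty-step self-modification histories while the narrow one almost never does, which is the shape of the intuition being argued for.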
So, my essential complaint against Yudkowsky’s Friendly AI theory is that – quite apart from ethical issues regarding the wisdom of using mass-energy on humans rather than some other form of existence — I strongly suspect that it’s impossible to create AGI’s that will progressively radically self-improve and yet retain belief in the “humaneness” principle. I suspect this principle is just too non-universal to survive the successive radical-self-improvement process and the Transcension. On the other hand, I think a more abstract and universally-attractive principle like Voluntary Joyous Growth might well make it.
Please note that this is very different from the complaint that Friendly AI won’t work because any AI, once it has enough intelligence and power, will simply seize all processing power in the universe for itself. I think this “Megalomaniac AI” scenario is mainly a result of rampant anthropomorphism. In this context it’s interesting to return to the notion of attractors. It may be that the Megalomaniac AI is an attractor, in that once such a beast starts rolling, it’s tough to stop. But the question is, how likely is it that a superhuman AI will start out in the basin of attraction of this particular attractor? My intuition is that the basin of attraction of this attractor is not particularly large. Rather, I think that in order to make a Megalomaniac AI, one would probably need to explicitly program an AI with a lust for power. Then, quite likely, this lust for power would manage to persist through repeated self-modifications – “lust for power” being a robustly simple-yet-abstract principle. On the other hand, if one programs one’s initial AI with an initial state aimed at a different attractor meta-ethic, there probably isn’t much chance of convergence into the megalomaniacal condition.
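The basin-of-attraction point can be illustrated with a toy one-dimensional dynamical system (again my own construction; the labels are purely metaphorical). Two stable fixed points sit at x = -1 and x = +1, with the unstable boundary between their basins placed off-center at x = -0.9, so the x = -1 attractor captures only a small fraction of uniformly sampled initial states even though, once reached, it holds the system indefinitely.

```python
# Toy dynamical system: dx/dt = -(x + 1)(x - boundary)(x - 1).
# Stable fixed points at x = -1 and x = +1; the unstable fixed point
# at x = boundary separates their basins of attraction.

def flow(x, boundary=-0.9, dt=0.02, steps=2000):
    """Euler-integrate the flow until x settles at a fixed point."""
    for _ in range(steps):
        x += dt * -((x + 1.0) * (x - boundary) * (x - 1.0))
    return round(x)  # -1 or +1 after convergence

# Uniform grid of initial conditions on [-1.5, 1.5].
starts = [i / 100.0 for i in range(-150, 151)]
fates = [flow(x0) for x0 in starts]

# Only starts below the boundary end up at x = -1 ("megalomaniac"),
# so its basin covers a small fraction of the sampled states.
frac_megalomaniac = fates.count(-1) / len(fates)
```

Moving the boundary parameter changes the basin sizes at will; the only point of the sketch is that being a strong attractor says nothing about how likely a system is to start inside its basin.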

2.10 Harmony with the Nature of the Universe
This leads us to a point made by Jef Albright on the SL4 list, which is that the philosophy of Growth ties in naturally with the implicit “ethical system” followed by the universe – i.e., the universe grows. In other words, Growth is a kind of universe-scale attractor – once one has a universe devoted to pattern-proliferation-and-expansion, it will likely continue in that vein for quite a while … the newly generated patterns will generate yet more patterns, and so forth. It is also a “universal attractor,” in the sense of an attractor that is common in various dynamical subsystems of the universe.
I think there’s a similar philosophical argument that Voluntary Joyous Growth is also harmonious with the pattern of the universe – i.e. also holds promise as a universal attractor.
Regarding the Voluntary part – the evolution of life shows how powerful wills naturally emerge from the weaker-willed … and then continue to survive due to their powerful wills, and create yet more willed beings.
And if you believe humans have a greater and deeper capacity for joy than rocks or trilobites or pigs, then we can also see in natural evolution a movement toward increasing Joy. Joyful creatures interact with other Joyful creatures and produce yet more Joyful creatures – Joy wants to perpetuate itself.
On the other hand, the Friendly AI principle does not seem to harmonize naturally with the evolutionary nature of the universe at all. Rather, it seems to contradict a key aspect of the nature of the universe — which is that the old gives way to the new when the time has come for this to occur.
Sure, there’s a certain quixotic nobility in maintaining ethics that contradict nature. After all, in a sense, technology development is all about contradicting nature. But in a deeper sense, I argue, technology development is all about following the nature of the universe – following the universal tendency toward growth and development. Modern technology may be in some ways a violation of biological nature, but it’s a consequence of the same general-evolutionary principle that led to the creation of biological forms out of the nonliving chemical stew of the early Earth. There is a quixotic beauty in contradicting nature — but an even greater and deeper beauty, perhaps, in contradicting local manifestations of the nature of the universe while according with global ones.
In breaking out of local attractor patterns but remaining wonderfully in synch with global ones.
All this suggests an interesting meta-principle for selecting abstract ethical principles, already hinted at above: namely,

All else equal, ethical principles are better if they’re more harmonious with the intrinsic nature of the universe – i.e. with the attractors that guide universal dynamics.

This suggests another possible modification to Smigrodzki’s meta-ethic, namely:

Find rules that will be accepted, and that are relatively harmonious with the attractors that guide universal dynamics.

However, this enhancement may be somewhat redundant, because I believe it’s true that rules, systems and principles that are more harmonious with the attractors that guide universal dynamics will tend to be accepted more broadly and for longer. Or in other words:
Attractors that are common in the universe, are also generally attractors for communities of volitional agents.
One thing this discussion brings to mind is Nietzsche’s discussion of “a good death.” Nietzsche pointed out that human deaths are usually pathetic because people don’t know when and how to die. He proposed that a truly mature and powerful mind would choose his time to die and make his death as wonderful and beautiful as his life. Dying a good death is an example of harmonizing with the nature of the universe – “going with the flow”, or following the “watercourse way” to use Alan Watts’ metaphorical rendition of Taoism. Counterbalancing the beauty of the Friendly AI notion with its quixotic quest to preserve humane-ness at all costs in contradiction to the universal pattern of progress, one has the hyperreal Nietzschean beauty of humanity dying a good death – recognizing that its time has come, because it has brilliantly and dangerously obsoleted itself. One might call this form of beauty the “Tao of Speciecide” – the wisdom of a species (or other form of life) recognizing that its existence has reached a natural end and choosing to end itself gracefully. As Nietzsche’s Zarathustra said, “Man is something to be overcome.”
It’s an interesting question whether speciecide contradicts the universal-attractor nature of Compassion. Under the Voluntary Joyous Growth principle, it’s not favored to extinguish beings without their permission. But if a species wants to annihilate itself, because it feels its mass-energy can be used for something better, then it’s perfectly Compassionate to allow it to do so.
Of course, I am being intentionally outrageous here – in my heart I don’t want to see the human race self-annihilate just to fulfill some Nietzschean notion of beauty, or to make room for more intelligent beings, or for any other reason. I have a tremendous affection for us hypercerebrated ape-beings. And as will be emphasized below, the course I propose in practice is a kind of hybrid of Cautious Developmentalism and Voluntary Joyous Growth. I am pursuing this line of discussion mainly to provide a counterbalance to what I see as an overemphasis on “human-friendliness” and human-preservation. Preserving and nurturing and growing humanity is an important point, but not the only point. To understand the Transcension with the maximum clarity our limited human brains allow, we need to think and feel more broadly.
I can see beauty in both of these extremes – Friendly AI and the Tao of Speciecide — and I am not overwhelmingly attracted to either of them. I don’t know if it’s “best”, in a general sense, that humanity survives or not – though I have a very strong personal bias in favor of humanity’s persistence, and in practical terms I would never act against my own species. I am very strongly motivated to spread choice, growth and joy throughout the universe – and to research ways in which to do this without endangering humanity and what it has become and achieved.

3. AGI and Alternative Dangers
OK – now, let’s get practical. Suppose that

• the above analysis is basically correct
• one accepts the goal of Voluntary Joyous Growth or one of its variants
• the post-Transcension world turns out to be significantly influence-able by the details of the route humanity takes to the Transcension

Then we have the question of what we can do to encourage the post-Transcension world to maximally adhere to the Principle of Voluntary Joyous Growth.
My own thinking on this topic has centered on the development of artificial general intelligence. Partly this is because AGI is my own area of research, but mainly it’s because I believe that

• other radical futurist technologies are most likely to be achieved via a combination of human and AGI effort.
• the chances of a positive Transcension are much greater if highly advanced AGI is developed before other radical futurist technologies

Regarding the first point, I think it’s clear that, as soon as AGI comes about, it will radically transform the future development course of all other technologies. Furthermore, these other technologies – if their development initially goes more rapidly than AGI – are likely to rapidly lead to the development of AGI, so that their final development will likely be a matter of AGI-human collaboration. Suppose, for example, that molecular nanotechnology comes about before AGI. One of the many interesting things to do with MNT will be to create extremely powerful hardware to support AGI; and once AGI is built it will lead to vast new developments in MNT, biotechnology, AGI and other areas. Or, suppose that human biological understanding and genetic engineering advance much faster than AGI. Then, with a detailed understanding of the human brain, it should be possible to create software or hardware closely emulating human intelligence – and then improve on human intelligence in this digital form … thus leading to powerful AGI. I have a suspicion that MNT or biotech will lead to AGI capabilities before they will lead to AGI-independent Singularity-launching capabilities … though of course I’m well aware this suspicion could be wrong.
Regarding the second point, I think it’s clear that a molecular assembler or an advanced genetic engineering lab will be profoundly dangerous if left in the hands of (unreliable, highly ethically variant) human beings. Quite possibly, once technology develops far enough, it will become so easy for a moderately intelligent human to destroy all life on Earth that this will actually occur. There are many possible solutions to this problem, for instance:

1. Renounce advanced technology, as Bill Joy and others have suggested
2. Modify human culture and psychology so that the impulse to destroy others is far less prominent
3. Develop technological and cultural safeguards to prevent abuse of these technologies
4. Via genetic engineering or direct neuromodification, modify the human brain so that the impulse to destroy self and others is far less prominent
5. Create AGI’s that are less destructively irrational than humans, and allow them to take primary power over the development and deployment of other radical future technologies
6. Create AGI’s that are more intelligent than humans, but not oriented toward self-improvement and self-modification – oriented, rather, toward preventing humans from either being destructive or gaining power over potentially destructive technologies.

These AGI’s, together with selected humans, may very slowly and carefully pursue Transcension-oriented research – an approach to the Transcension that I call Cautious Developmentalism.
Of these six possibilities, 4-6 are the ones that I consider to have the highest probability of successful eventuation. I think 3 is also somewhat plausible, and am highly skeptical of 1 and 2.
I think renunciation is highly unlikely given the practical benefits that each incremental step of technological advancement is likely to have. Basically, the vast majority of humans aren’t going to want to renounce technologies that they find gratifying. And a small set of renouncers won’t alter the course of technology development.
Potentially, though, radical Luddites could force renunciation via mass civilization-destroying terrorist actions – I view this as far more likely than a voluntary mass renunciation of technology.
Next, raised as I was among Marxists, it’s hard for me to be optimistic about the “perfectibility of humanity” via any means other than uploading or radical neural modification. While social and cultural patterns definitely have a strong impact on each individual mind, it’s equally true that social and cultural patterns are what they are (and are flawed as they’re flawed) because of the intrinsic biological nature of human psychology. Traits like dishonesty, violence, paranoia and narrow-mindedness are part of the human condition and are not going to be eliminated via social engineering or education. So far, social and psychological engineering through pharmacology has been a mixed bag … but as technology advances, it seems clear that the only real hope for improving human nature lies in modifying the genome or the brain, hence physiologically modifying the nature of humanity.
On a purely scientific level, it’s hard to tell whether or not detailed human-brain or human-genome modification is “easier” than creating AGI. Pragmatically, however, it seems clear that these biological improvements would be very difficult to propagate throughout the human race — due to the fact that so many individuals believe it’s a bad idea, and are unlikely to change their minds. AGI, on the other hand, can be achieved by a small group of individuals, and then have a definitive effect on the world at large, even if most individuals on Earth greet it with confused and ambiguous (or in some cases flatly negative) attitudes.
Finally, technological safeguards may be possible, but it’s hard to be confident in this regard: even if some radical, dangerous technologies can be safeguarded (as nuclear weapons are, currently, by the difficulty of obtaining fissile materials), all it will take is one hard-to-safeguard technology to lead to the end of us all. Certainly, it’s clear that — given the increasing rate of advance of technology and its rapid spread around the globe – the only way the “technological safeguard” route could possibly work would be via a worldwide police state with Big Brother watching everyone. And, the aesthetics and ethics of this kind of social system notwithstanding, it’s not clear to me that even this would be effective. Advanced surveillance and enforcement measures would lead to advanced countermeasures by rebel groups, including sophisticated hacker groups in First World countries as well as terrorists with various agendas (including Luddite agendas). I suppose that the only way to make technological safeguards work would be to:
1. Create highly advanced technology, either AGI or MNT or intelligence-enhancing biotech or some combination thereof
2. Keep this technology in the hands of a limited class of people, and use this technology to monitor the rest of the world, with the specific goal of preventing the development of any other technology posing existential risks.
While this would necessarily involve the sort of universal surveillance associated with the term “Big Brother,” it certainly wouldn’t necessarily entail the fascist control of thoughts and actions depicted in Orwell’s 1984. Rather, all that’s required is the specific control of actions posing significant existential risks to the human race (and any other sentients developed in the meantime). Rather than a “Big Brother”, it may be more useful to think of a “Singularity Steward” – an entity whose goal is to guide humanity and its creations toward its Singularity or other-sort-of-Transcension in a maximally wise way … or guide it away from Singularities and Transcensions if these are judged most-probably negative in ethical valence.

4. Singularity Stewardship and the Global Brain Mindplex
In fact, my suspicion is that the only way to make a Singularity Steward entity actually work would be to supply it with an AGI brain – though not necessarily an AGI brain bent on growth or self-improvement. Rather, one can envision an AGI system programmed with a goal of preserving the human condition roughly as-is, perhaps with local improvements (like decreasing the incidence of disease and starvation, extending life, etc.). This AGI – “AI Big Brother” aka the “Singularity Steward” — would have to be significantly smarter than humans, at least in some ways. However, it wouldn’t need to be autonomous – in fact, it’s natural for this entity to depend on humans for its survival.
This steward AGI would need to be a wizard at analyzing massive amounts of surveillance data and figuring out who’s plotting against the established order, and who’s engaged in thought processes that might lead to the development and deployment of dangerous technologies. Perhaps, together with human scientists, it would figure out how to scan human brains worldwide in real-time to prevent not only murderous thoughts, but also thoughts regarding the development of molecular assemblers or self-modifying AI’s, or the creation of beings with intelligence competitive with that of the steward itself.
The problem of engineering a Singularity Steward AGI is rather different from the problem of engineering an AI intended to shepherd human minds through the Transcension. In the AI Big Brother case, one doesn’t want the AI to be self-modifying and self-improving – one wants it to remain stable. This is a much easier problem! One needs to make it a bit smarter than humans, but not too much – and one needs to give it a goal system focused on letting itself and humans remain as much the same as possible. The Singularity Steward should want to increase its own intelligence only in the presence of some external threat like an alien invasion.
In extreme cases one can envision a Singularity Steward feeling compelled to act in a fascistic way – for instance, intrusively modifying the brains of rebellious AGI researchers intent on launching the Singularity according to their pet theories. But if the goal is to prevent a dangerous, inadequately-thought-out Singularity, this may be the best option. To keep things exactly the way they are now – with the freedoms that now exist — is to maintain the possibility of massive destruction as technology develops slightly further. We are not, right now, in a safe and stable sociopsychotechnological configuration by any means. This AI Big Brother option is not terribly appealing to me personally, because it grates too harshly against my values of growth, choice and happiness. However, I respect it as a logical and consistent possibility, which seems plausibly achievable based on an objective analysis of the situation we confront. And I can see that it may well be the best option, if we can’t quickly enough arrive at a confident, fully-fleshed-out theory regarding the likely outcome of iterated self-improvement in AGI systems.
The Singularity Steward idea ties in with the Cautious Developmentalism approach, mentioned earlier. Suppose we create a Singularity Steward – and then allow it to experiment, together with selected human scientists, with Transcension-related technologies. This experimentation must take place very slowly and conservatively, and any move toward the Transcension would (according to the Steward’s hard-wired control code) be made only based on the agreement of the Steward with the vast majority of human beings. Conceivably, this could be the best and safest path toward the Transcension.
In fact – Orwellian associations notwithstanding — a Singularity-Steward-dominated society could potentially be a human utopia. Careful development of technology aimed at making human life easier – cheap power and food, effective medical care, and so forth – could enable the complete rearrangement of human society. Perhaps Earth could be covered by a set of small city-states, each one populated by like-minded individuals, living in a style of their choice. Liberated from economic need, and protected by the Steward from assault by nature or other humans, the humans under the Steward’s watch could live far more happily than in any prior human society. Free will, within the restrictions imposed by the Steward, could be refined and exercised copiously, perhaps in the manner of Buddhist “mind control.” And growth could occur spectacularly in non-dangerous directions, such as mathematics, music and art.
This hypothetical future is similar to the one sketched in Jack Williamson’s classic novel The Humanoids, although his humanoids (a robot-swarm version of an “AI Big Brother”) possessed the tragic flaw of valuing human happiness infinitely and human will not at all. While this flaw made Williamson’s novel an interesting one, it’s not intrinsic in the notion of a steward AGI. Rather, it’s quite consistent to imagine a Singularity Steward that values human free will as much as or more than human happiness – and imposes on human choice only when it moves in directions that appear plausibly likely to cause existential risks for humanity.
Of course, there’s one problem with this dream of a Singularity-Steward-powered human utopia: politics. An AGI steward, if it is ever created, is most likely to be created by some particular power bloc in order to aid it in pursuing its particular interests. What are the odds that it would actually be used to create a utopia on Earth? This is hard to estimate! What happens to politics when pre-Transcension but post-contemporary technology drastically decreases the problems of scarcity we have on Earth today?
It seems, then, that there are two ways a really workable Singularity Steward could come about:

By transforming the global cultural and political systems to be more rational and ethically positive
By a relatively small group of individuals, acting rationally with positive ethical goals, creating the Singularity Steward and putting it into play

This “relatively small group” could for example be an international team of scientists, or a group operating within the United Nations or the government of some existing nation. (Of course, these two paths are not at all mutually exclusive.)
This hypothesized transformation of global cultural and political systems ties in with the notion of the Global Brain as explored in numerous writings by Valentin Turchin, Francis Heylighen, Peter Russell, the author, and various others. The general idea of the Global Brain is that computing and communication technologies may lead to the creation of a kind of “distributed mind” in which humans and AI minds both participate, but that collectively forms a higher level of intelligence and awareness, going beyond the individual intelligences of the people or AI’s involved in it. I have labeled this kind of distributed mind a “Mindplex” and have spent some effort exploring the possible features of Mindplex psychology. The Global Brain Mindplex, as I envision it, would consist of an AGI system specifically intended to collect together the thoughts of all the people on the globe and synthesize them into grander and more profound emergent thoughts – a kind of animated, superintelligent collective unconscious of the human race. Of course the innate intelligence of the AGI system would add many things not present in any of the human-mind contributors – but then the AGI feeds its ideas back to the mass of humans, who then think new thoughts that are incorporated back into the Global Brain Mindplex mix.
In the late 1990’s I was very excited about the Global Brain Mindplex – but then for a while I lost some of my enthusiasm for it, because it seemed relatively unexciting compared to the possibility of a broader and more overwhelming Transcension. However, I had been overlooking the potential power of the Global Brain Mindplex as a Singularity Steward. In fact, if one wishes to create a Singularity Steward AGI to help guide humanity toward an optimal Transcension, it makes eminent sense that this Steward should harness the collective thought, intuition and feeling power of the human race, in the manner envisioned for the Global Brain. The two visions mesh perfectly well together, yielding the goal of creating a Global Brain Mindplex with a goal of advocating Voluntary Joyous Growth but avoiding a premature human Transcension.
The advent of such a Global Brain Mindplex might well help achieve what has proved impossible via human means alone – the creation of rational and ethically positive social institutions. How to build such a Global Brain Mindplex is another question, however. What it will take is a group of people with a lot of money for computer hardware and software, a vast capability for coordinated creative activity, and genuinely broad-minded positive ethical intentions. Let us hope that such a group emerges.

5. Pragmatic Politics of Transcension Research
A significant benefit of the Cautious Developmentalist approach is that it makes the lives of Transcension technology researchers easier and safer.
One may argue that

IF a Transcension of type Y is the best outcome according to Ethical System E

AND the odds of successfully launching a Transcension are a lot higher with the acceptance of a greater number of humans
THEN it is worth exploring whether either

a) a Transcension of type Y is acceptable to the vast majority of humans, or if not whether
b) there is a Transcension of type Y’ that is also a very good outcome according to E, but that IS acceptable to a lot more humans

If such a Transcension Y’ is found, then it’s a lot better to pursue Y’ than Y, because the odds of achieving Y’ are significantly greater.
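This argument can be made concrete with a toy expected-value calculation. The probabilities and utilities below are invented purely for illustration; nothing in the essay commits to specific numbers:

```python
# Toy expected-utility comparison between pursuing Transcension Y and Y'.
# All numbers are illustrative assumptions, not estimates from the essay.

def expected_value(p_success: float, utility: float) -> float:
    """Expected utility of pursuing a path with the given odds of success."""
    return p_success * utility

# Y: optimal under ethical system E, but with little popular support.
ev_y = expected_value(p_success=0.2, utility=1.0)

# Y': slightly less optimal under E, but acceptable to far more humans,
# so the odds of actually achieving it are much higher.
ev_y_prime = expected_value(p_success=0.6, utility=0.9)

# Despite its lower utility under E, Y' is the better path to pursue.
assert ev_y_prime > ev_y
```

Under any ethical system for which Y’ remains a very good outcome, a modest sacrifice in utility is easily outweighed by a large gain in achievability.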
For example, if

Y = a Transcension supporting Voluntary Joyous Growth
Y’ = a Transcension supporting Voluntary Joyous Growth, but making every possible effort to enable all humans to continue to have the opportunity to live life on Earth as-is, if they wish to

then it may well be that the conditions of the above are met.
Now, one shouldn’t overestimate the extent to which Y’ is acceptable to the vast mass of humans. After all, currently the US government has outlawed hallucinogens and many kinds of stem cell research, and requires government approval for putting chips in one’s own brain. Alcor, a company providing cryonic preservation services, has been plagued with lawsuits by transhumanism-unfriendly people. So it’s naive to think people won’t stand in the way of the Transcension, no matter how inoffensive it’s made.
But Y’ is definitely easier to sell than Y, and will create less opposition, thus increasing the odds of achievement. This is a strong argument for embracing a kind of mixture of Voluntary Joyous Growth with Cautious Developmentalism. Even if Voluntary Joyous Growth is one’s goal, the chances of achieving this in practice may be greater if a Cautious Developmentalist approach to this goal is taken – because the odds of success are greater if there is more support among the mass of humanity.
However, this doesn’t get around my above-expressed skepticism as to the possibility of guaranteeing that “all humans [will] continue to have the opportunity to live life on Earth as-is, if they wish to.” The problem is, I think, that it is not very easy to make this guarantee about post-Transcension dynamics. If I’m right, then the options come down to:

1. Lie about it, and convince people that they CAN have this guarantee after all,
2. [Try to] convince people that the risk is acceptable given the rewards and the other risks at play, or
3. Launch a Transcension against most people’s will

According to my own personal ethics – which value choice, joy and growth – the most ethically sound course is 2), which supports the free choice of humanity. So, in my view, the best hope is that through a systematic process of education, the majority of humans will come to realization 2) … that although there are no guarantees in launching a Transcension, the rewards are worth the risks. Then democracy is satisfied and growth is satisfied. There is reason to be optimistic in this regard, since history shows that nearly all technologies are eventually embraced by humanity, often after initial periods of skepticism.
This line of thinking pushes strongly in the direction of the Global Brain Mindplex.

6. Creating Joyously Growing, Volition-Respecting AI
Now let’s set political issues aside and go back to pure Voluntary Joyous Growth. If one wants to launch a positive Transcension using AGI – or create a positive Global Brain Singularity Steward Mindplex — then one needs to know how to create AGI’s that are likely to be ethically positive according to the Principle of Voluntary Joyous Growth. The key, it seems, lies in the combination of two things:

Explicit ethical instruction: Specific instruction of the AGI in the “foundational ethical principle” in question (e.g. Voluntary Joyous Growth)
Ethically-guided cognitive architecture: Ensuring that the AGI’s cognitive architecture is structured in a way that implicitly embodies the ethical principle (so that obeying any principle besides the foundational ethical principle would seem profoundly unnatural to the system)

The first of these – explicit ethical instruction – is relatively (and only relatively!) straightforward. In general, this may be done via a combination of explicitly “hardwiring” ethical principles into one’s AI architecture, and teaching one’s AI via experiential interaction. Essentially, the idea is to bring up one’s baby AI to have the desired value systems, by interacting with it, teaching it by example, scolding it when it does badly, and – the only novelty here – spending a decent portion of one’s time studying the internals of one’s baby’s “brain” and modifying them accordingly. A key point is that one cannot viably instruct a baby mind only in highly abstract principles; one must instruct it in one or more specific ethical systems, consistent with one’s abstract principles of choice. No doubt there will be a lot of art and science to instructing AI minds in specific ethical systems or general ethical principles; experimentation will be key here.
The second point – creating a cognitive architecture intrinsically harmonious with ethical principles – is subtler but seems to be possible so long as one’s ethical principles are sufficiently abstract. For instance, a focus on joy, growth and choice comes naturally to some AI designs, including the Novamente design under development by my collaborators and myself. Novamente may be given joy, growth and choice as specific system goals – along with more pragmatic short-term goals – but at least as importantly, it has joy, growth and choice implicitly embedded in its design.
Novamente is a multi-agent design, in which intelligence is achieved by a combination of semi-autonomous agents representing a variety of cognitive processes. Each particular Novamente system consists of a network of semi-autonomous units, each containing a population of agents carrying out cognitive processes and acting on a shared knowledge base.
It’s interesting to note that an emphasis on voluntarism is implicit in the multi-agent architecture, in which mind itself consists of a population of agents, each of which is allowed to make its own choices within the constraints imposed by the overall system. Rather than merely having ideas about the value of choice imposed on the system in an abstract conceptual way, the value of choice is embedded in the cognitive architecture of the AI system.
Not just the Novamente system as a whole, but many of its individual component processes, may be tuned to act so as to maximize joy and growth. For instance, the processes involved with creating new concepts may be rewarded for creating concepts that

•display a great deal of new pattern compared to previously existing concepts (“growth”)
•have the property that thinking about these concepts tends to lead to positive affect.

The same reward structure may be put into other processes, such as probabilistic logical inference (where one may control inference so as to encourage it to derive surprising new relationships, and new relationships that are estimated to correlate with system happiness).
The result is that, rather than merely having an ethical system artificially placed at the “top” of an AI system, one has one’s abstract ethical principles woven all through the system’s operations, inside the logic of many of its cognitive processes.
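As a hypothetical sketch of this idea (the class names and scoring formulas below are invented for illustration, and are not Novamente’s actual code), a concept-creation process might score candidate concepts by blending novelty relative to the shared knowledge base (“growth”) with positive affect (“joy”):

```python
# Illustrative sketch: rewarding a concept-creation agent for novelty
# ("growth") and positive affect ("joy"), in the spirit of the architecture
# described above. All names and formulas here are invented assumptions.

from dataclasses import dataclass, field

@dataclass
class Concept:
    features: frozenset          # crude stand-in for the concept's patterns
    affect: float = 0.0          # how "happy" thinking about it makes the system

@dataclass
class ConceptCreator:
    known: list = field(default_factory=list)   # shared knowledge base

    def novelty(self, c: Concept) -> float:
        """Fraction of c's patterns not present in any known concept."""
        if not self.known:
            return 1.0
        seen = set().union(*(k.features for k in self.known))
        return len(c.features - seen) / max(len(c.features), 1)

    def reward(self, c: Concept, w_growth=0.5, w_joy=0.5) -> float:
        """Blend of growth (novelty) and joy (positive affect)."""
        return w_growth * self.novelty(c) + w_joy * max(c.affect, 0.0)

creator = ConceptCreator(known=[Concept(frozenset({"a", "b"}))])
fresh = Concept(frozenset({"b", "c", "d"}), affect=0.8)
print(round(creator.reward(fresh), 3))  # → 0.733
```

Weighting the two terms lets the designer tune how strongly the process favors exploration over comfort; the point is that the ethical emphasis lives inside the cognitive process itself rather than in a rule bolted on top.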
Finally there is the issue of information-gathering – does the AI system have the information to really act with the spread of joy, growth and truth throughout the universe as its primary goals? In order to encourage this, I have proposed the creation of a “Universal Mind Simulator” AI which contains sub-units dedicated to studying and simulating the actions of other minds in the universe. Assuming the Novamente AI architecture works as envisioned, it should be quite possible to configure a Novamente AI system in this way (even though universal mind simulation is not a necessary part of the Novamente architecture). Again, rather than just having “respect all the minds in the universe” programmed in or taught as an ethical maxim, the very structure of the AI system is being implicitly oriented toward the respecting of all minds in the universe. Personally, I find this kind of “AI Buddha” vision more appealing than “AI Big Brother” – but I also consider it even more risky.
Note the close relationship – but also the significant difference – between the Global Brain Mindplex design and the Universal Mind Simulator design. The former seeks to merge together the thoughts of various sentients into a superior, emergent whole; the latter seeks to emulate and study the thoughts of sentients as individuals. Obviously there is no contradiction between these two approaches; the two could exist in the same AI architecture – a Universal Brain AI Buddha Mindplex!
As already noted, this notion of ethically-guided cognitive architecture fits in much more naturally with abstract ethical principles like Voluntary Joyous Growth than with more specific ethical rules like “Be nice to humans.” It is almost absurd to think about building a cognitive architecture with “Be nice to humans” implicit in its logic; but abstract concepts like choice, joy and growth can very naturally be embodied in the inner workings of an AI system.

7. Encouraging a Positive Transcension
How then do we encourage a positive Transcension? Based on the considerations I’ve reviewed above, there seem to be two plausible options, summarized by the tongue-in-cheek slogan

AI Buddha versus AI Big Brother

Or, less sensationalistically rendered:

AI-Enforced Cautious Developmentalism

versus

AI-Driven Aggressive Transcension Pursuit

My feeling is that the best course is as follows:

1. Research sub-human-level AI and other Transcension technologies as rapidly, intensely and carefully as possible, so as to gather the information needed to make a decision between Cautious Developmentalism and a more aggressively Transcension-focused approach. This needs to be done reasonably fast, because if humans, with our erratic and often self-destructive goal-systems, get to MNT and radical genetic engineering first, profound trouble may well ensue.
2. Present one’s findings to the human race at large, and undertake an educational programme aiming to make as many people as possible comfortable with the ideas involved, so that as many educated intelligent judgments as possible are able to weigh in on the matters at hand
3. If the dangers of self-modifying AGI seem too scary after this research and discussion period (for instance, if we discover that some kind of Evil Megalomaniacal AI seems like a likely attractor of self-modifying superintelligence), then

a. build an AGI Singularity Steward – quite possibly of the Global Brain Mindplex variety — and try like hell to prevent human political issues from sabotaging the feat
b. proceed very slowly and carefully with Transcension-related research

4. If the dangers of self-modifying AGI seem acceptable as compared to other dangers, then

a. Create AGI’s as fast as possible
b. Teach the AGI’s our ethical system of choice
c. Teach the AGI’s – and perhaps more importantly, embody in the AGI’s cognitive architectures – our abstract ethical/meta-ethical principles of choice

5. In either case: Hope for the best!
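Purely as an illustrative restatement (the function and its boolean argument are invented; the real decision would rest on research and broad discussion, not a flag), the five steps above can be sketched as a branching procedure:

```python
# Illustrative restatement of the five-step plan as a decision procedure.
# The predicate argument is a placeholder for research outcomes, not a
# real assessment method.

def plan_transcension(dangers_seem_acceptable: bool) -> list:
    steps = [
        "research sub-human-level AGI and Transcension technologies",
        "present findings and educate the public",
    ]
    if not dangers_seem_acceptable:
        # Cautious Developmentalism branch
        steps += [
            "build an AGI Singularity Steward (e.g. a Global Brain Mindplex)",
            "proceed slowly and carefully with Transcension research",
        ]
    else:
        # Aggressive Transcension pursuit branch
        steps += [
            "create AGIs as fast as possible",
            "teach them the chosen ethical system",
            "embody the abstract ethical principles in their cognitive architecture",
        ]
    steps.append("hope for the best")
    return steps

for step in plan_transcension(dangers_seem_acceptable=False):
    print("-", step)
```

Both branches share the same first two steps and the same last one; only the middle differs, which is exactly the point of deferring the choice until the research is in.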

This general plan is motivated by principles of growth and choice, but nevertheless, as explicitly stated it’s neutral as regards the precise ethical systems and principles used to guide the development of self-modifying AGI’s. Of course, this is a critical issue, and as discussed above, it’s a matter of both taste and pragmatics. We must choose systems and principles that we feel are “right,” and that we feel have a decent chance of surviving the Transcension to guide post-Transcension reality. The latter issue – which ethical systems and principles have a greater chance of survival – is in part a scientific issue that may be resolved by experimenting with relatively simple self-modifying AI’s. For instance, such experimentation should be able to tentatively confirm or refute my hypothesis that more abstract principles will more easily survive iterated self-modification. But ultimately, even this kind of experimentation will be of limited value, due to the very nature of the Transcension, which is that all prior understandings and expectations are rendered obsolete.
After significant reflection, my own vote is for the Principle of Voluntary Joyous Growth. Of course, I hope that others will come to similar conclusions – and I’ll do my best to convince them… both of the rational point that this sort of principle is relatively likely to survive the Transcension, and of the human point that this principle captures much of what is really good, wonderful and important about human nature. If we leave the universe – or a big portion of it — with a legacy of voluntary joyous growth, this is a lot more important than whether or not the human race as such continues for millions of years. At least, this is the case according to my own value system – a value system that values humanity greatly, but not primarily because humans have two legs, two eyes, two hands, vaginas and penises, biceps and breasts and two cerebral hemispheres full of neurons with combinatory and topographic connections. I have immense affection for human creations like literature, mathematics, music and art; and for human emotions like love and wonder and excitement; and human relationships and cultural institutions … families, couples, rock bands, research teams. But what are most important about humanity are not these often-beautiful particulars, but the joy, the growth and the freedom that these particulars express – in other words, the way humanity expresses principles that are powerful universal attractors. At any rate, these are the human thoughts and feelings that lead me to feel the way I do about the best course toward the transhuman world. Let’s do our best to make the freedom to be human survive the Transcension – but most of all, let’s do our best to make it so that the universal properties and principles that make humanity wonderful survive and flourish in the “post-Transcension universe” … whatever this barely-conceivable hypothetical entity turns out to be….
In spite of my own affection for Voluntary Joyous Growth, however, I have strong inclinations toward both the Joyous Growth Guided Voluntarism and pure Joyous Growth variants as well. (As much as I enjoy enjoying myself, Metaqualia’s eternal orgasm doesn’t appeal to me so much!) I hope that the ethical principle used to guide our approach to the Transcension won’t be chosen by any one person, but rather by the collective wisdom and feeling of a broad group of human beings. Bill Hibbard is an advocate of such decisions being made by an American-style democratic process; I’m not so sure this is the best approach, but I’m also not in favor of a single human being or tiny research team taking such a matter into its own hands. A discussion of the various ways to carry out this kind of decision process would be interesting but would elongate the present discussion too far, and I’ll defer it to another essay.
Obviously, I’m very excited about the possibilities of the Transcension, and I have a certain emotional eagerness to get on with it already. However, I’m also a scientist and well aware of the importance of gathering information and doing careful analysis before making a serious decision. So I’ll end this essay on a less ecstatic note, and emphasize once again the importance of research. I’ve presented above a number of very major issues, which I believe will be elucidated via experimentation with “moderately intelligent,” partially-self-modifying AGI systems. And I’m looking forward very much to participating in this experimentation process – either with a future version of my Novamente AI system, or with someone else’s AGI should they get there first. Experimentation with other technologies such as genetic engineering, neuromodification and molecular nanotechnology will doubtless also be highly instructive.


Many of the ideas in this essay developed via discussions with others, including

•Frequent in-person chats with Izabela Lyon Freire, Moshe Looks and Kevin Cramer; and occasional in-person chats with Lucio Coelho de Souza and Eliezer Yudkowsky
•Discussions on the SL4 and AGI email lists, and in private emails, with a variety of individuals including Eliezer Yudkowsky, Metaqualia, Rafal Smigrodzki, Jef Allbright, Philip Sutton and Michael Vassar
Some of the ideas discussed here developed purely in the privacy of my own teeming brain; and of course the responsibility for any foolishness found here is primarily my own.



For Derrida, traditional metaphysics bears on an erroneous assumption about language. According to this tradition, meaning is embodied in an immutable connection between signifier and signified to form a unified whole, which is like…two sides of a sheet of paper. Derrida, in contrast, argues that signs cannot incorporate any absolute, univocal meaning, nor do signifiers directly refer to their respective signifieds. Signifiers can do no more than relate to and become other signifiers which are divorced from the world “out there.” In other words, there are no definite meanings prior to the system of signifiers. Hence, there can be no immediate presence in consciousness of a single, isolated, and absolute signified in all its fullness. The mistaken notion that there is Derrida terms the “metaphysics of presence.”
In addition, with respect to language as a whole, there can be no determinate center nor any retrievable origin. Belief in such is no more than nostalgia, says Derrida. What actually exists is a complex network of differences between signifiers, each in some sense carrying the traces of all others. Ironically, it is due to this very important fact that the traditional “metaphysics of presence” has been able to pervade all aspects of our thought, for our body of knowledge, Derrida asserts, actually consists of a differentially interrelated fabric of signifiers set down in texts as if they all composed one text: a vast, monolithic totality sometimes called “intertextuality.” Caught within this totality, inaccessible for us due to our human limitations, escape becomes impossible. From inside the totality, however, we can at least attempt with some degree of success to point out what is absent in our partial perception of it, what part of it has been repressed, and what part of it appropriately should be accounted for.

Deconstruction Reframed, Introduction, pp. 1,2, Floyd Merrell

…the Buddhist perspective emphasizes the realization that self and world are nondual. This is an experience not to be gained from the study of texts alone, for it usually requires religious practice: that is, meditation, the “other” of philosophy, the repressed shadow of our rationality, dismissed and ignored because it challenges the only ground philosophy has. Derrida says that he has been trying to find a nonsite, or a nonphilosophical site, from which to question philosophy– precisely what meditative practice provides. The postmodern realization that no resting-place can be found within language/thought is an important step toward the experience that there is no abiding-place for the mind…However, for Buddhism this further realization requires a “leap” that cannot be thought.
Indra’s Postmodern Net, David Loy, Philosophy East and West Vol. 43, No. 3 (July 1993), pp. 481-2

Jacques Derrida, Stanford Encyclopedia Of Philosophy
First published Wed Nov 22, 2006; substantive revision Fri Jun 3, 2011

Jacques Derrida (1930-2004) was the founder of “deconstruction,” a way of criticizing not only literary and philosophical texts but also political institutions. Although Derrida at times expressed regret concerning the fate of the word “deconstruction,” its popularity indicates the wide-ranging influence of his thought, in philosophy, in literary criticism and theory, in art and, in particular, architectural theory, and in political theory. Indeed, Derrida’s fame nearly reached the status of a media star, with hundreds of people filling auditoriums to hear him speak, with films and television programs devoted to him, with countless books and articles devoted to his thinking. Besides critique, Derridean deconstruction consists in an attempt to re-conceive the difference that divides self-reflection (or self-consciousness). But even more than the re-conception of difference, and perhaps more importantly, deconstruction works towards preventing the worst violence. It attempts to render justice. Indeed, deconstruction is relentless in this pursuit since justice is impossible to achieve.

1. Life and Works
Derrida was born on July 15, 1930 in El-Biar (a suburb of Algiers), Algeria, into a Sephardic Jewish family. As is well-known, Algeria at this time was a French colony. Because Derrida’s writing concerns auto-bio-graphy (writing about one’s life as a form of relation to oneself), many of his writings are auto-biographical. So, for instance in Monolingualism of the Other (1998), Derrida recounts how, when he was in the “lycée” (high school), the Vichy regime in France proclaimed certain interdictions concerning the native languages of Algeria, in particular Berber. Derrida calls his experience of the “interdiction” “unforgettable and generalizable” (1998, p. 37). In fact, the “Jewish laws” passed by the Vichy regime interrupted his high school studies.
Immediately after World War II, Derrida started to study philosophy. In 1949, he moved to Paris, where he prepared for the entrance exam in philosophy for the prestigious École Normale Supérieure. Derrida failed his first attempt at this exam, but passed it in his second try in 1952. In one of the many eulogies that he wrote for members of his generation, Derrida recounts that, as he went into the courtyard toward the building in which he would sit for the second try, Gilles Deleuze passed him, smiling and saying, “My thoughts are with you, my very best thoughts.” Indeed, Derrida entered the École Normale at a time when a remarkable generation of philosophers and thinkers was coming of age. We have already mentioned Deleuze, but there was also Foucault, Althusser, Lyotard, Barthes, and Marin. Merleau-Ponty, Sartre, de Beauvoir, Levi-Strauss, Lacan, Ricœur, Blanchot, and Levinas were still alive. The Fifties in France was the time of phenomenology, and Derrida studied closely Husserl’s then-published works as well as some of the archival material that was then available. The result was a “Mémoire” (a Masters thesis) from the academic year 1953-54 called The Problem of Genesis in Husserl’s Philosophy; Derrida published this text in 1990. Most importantly, at the École Normale, Derrida studied Hegel with Jean Hyppolite. Hyppolite (along with Maurice de Gandillac) was to direct Derrida’s doctoral thesis, “The Ideality of the Literary Object”; Derrida never completed this thesis. His studies with Hyppolite however led Derrida to a noticeably Hegelian reading of Husserl, one already underway through the works of Husserl’s assistant, Eugen Fink. Derrida claimed in his 1980 speech “The Time of a Thesis” (presented on the occasion of him finally receiving his doctorate) that he never studied Merleau-Ponty and Sartre and that especially he never subscribed to their readings of Husserl and phenomenology in general.
With so much Merleau-Ponty archival material available, it is possible now however to see similarities between Merleau-Ponty’s final studies of Husserl and Derrida’s first studies. Nevertheless, even if one knows Merleau-Ponty’s thought well, one is taken aback by Derrida’s one hundred and fifty page long Introduction to his French translation of Husserl’s “The Origin of Geometry” (1962). Derrida’s Introduction looks to be a radically new understanding of Husserl insofar as Derrida stresses the problem of language in Husserl’s thought of history.
The 1960’s is a decade of great achievement for this generation of French thinkers. 1961 sees the publication of Foucault’s monumental Folie et déraison (Madness and Civilization is the English language title). At this time, Derrida is participating in a seminar taught by Foucault; on the basis of it, he will write “Cogito and the History of Madness” (1963), in which he criticizes Foucault’s early thought, especially Foucault’s interpretation of Descartes. “Cogito and the History of Madness” will result in a rupture between Derrida and Foucault, which will never fully heal. In the early 60’s, Derrida reads Heidegger and Levinas carefully. Then in 1964, Derrida publishes a long two part essay on Levinas, “Violence and Metaphysics.” It is hard to determine which of Derrida’s early essays is the most important, but certainly “Violence and Metaphysics” has to be a leading candidate. What comes through clearly in “Violence and Metaphysics” is Derrida’s great sympathy for Levinas’s thought of alterity, and at the same time it is clear that Derrida is taking some distance from Levinas’s thought. Despite this distance, “Violence and Metaphysics” will open up a lifetime friendship with Levinas. In 1967 (at the age of thirty-seven), Derrida has his “annus mirabilis,” publishing three books at once: Writing and Difference, Speech and Phenomena, and Of Grammatology. In all three, Derrida uses the word “deconstruction” (to which we shall return below) in passing to describe his project. The word catches on immediately and comes to define Derrida’s thought. From then on up to the present, the word is bandied about, especially in the Anglophone world. It comes to be associated with a form of writing and thinking that is illogical and imprecise. It must be noted that Derrida’s style of writing contributed not only to his great popularity but also to the great animosity some felt towards him.
His style is frequently more literary than philosophical and therefore more evocative than argumentative. Certainly, Derrida’s style is not traditional. In the same speech from 1980 at the time of him being awarded a doctorate, Derrida tells us that, in the Seventies, he devoted himself to developing a style of writing. The most famous or infamous example is his 1974 Glas (“Death Knell” would be an approximate English translation); here Derrida writes in two columns, with the left devoted to a reading of Hegel and the right devoted to a reading of the French novelist-playwright Jean Genet. Another example would be his 1980 Postcard from Socrates to Freud and Beyond; the opening two hundred pages of this book consist of love letters addressed to no one in particular. It seems that sometime around this time (1980), Derrida reverted back to the more linear and somewhat argumentative style, the very style that defined his texts from the Sixties. He never however renounced a kind of evocation, a calling forth that truly defines deconstruction. Derrida takes the idea of a call from Heidegger. Starting in 1968 with “The Ends of Man,” Derrida devoted a number of texts to Heidegger’s thought. In particular, during the 1980’s, Derrida wrote a series of essays on the question of sex or race in Heidegger (“Geschlecht I-IV”). While frequently critical, these essays often provide new insights into Heidegger’s thought. The culminating essay in Derrida’s series on Heidegger is his 1992 Aporias.
Throughout the Sixties, having been invited by Hyppolite and Althusser, Derrida taught at the École Normale. In 1983, he became “Director of Studies” in “Philosophical Institutions” at the École des Hautes Études en Sciences Sociales in Paris; he held this position until his death. Starting in the Seventies, Derrida held many appointments in American universities, in particular Johns Hopkins University and Yale University. From 1987, Derrida taught one semester a year at the University of California at Irvine. Derrida’s close relationship with Irvine led to the establishment of the Derrida archives there. Also during the Seventies, Derrida associated himself with GREPH (“Le Groupe de Recherche sur l’Enseignement Philosophique,” in English: “The Group Investigating the Teaching of Philosophy”). As its name suggests, this group investigated how philosophy is taught in the high schools and universities in France. Derrida wrote several texts based on this research, many of which were collected in Du droit à la philosophie (1990, an approximate English title would be: “Concerning the Right to Philosophy”). In 1982, Derrida was also one of the founders of the Collège International de Philosophie in Paris, and served as its first director from 1982 to 1984.
In the 1990’s, Derrida’s works went in two simultaneous directions that tend to intersect and overlap with one another: politics and religion. These two directions were probably first clearly evident in Derrida’s 1989 “Force of Law.” But one can see them better in his 1993 Specters of Marx, where Derrida insisted that a deconstructed (or criticized) Marxist thought is still relevant to today’s world despite globalization and that a deconstructed Marxism consists in a new messianism, a messianism of a “democracy to come.” But, even though Derrida was approaching the end of his life, he produced many interesting texts in the Nineties and into the new century. For instance, Derrida’s 1996 text on Levinas, “A Word of Welcome,” lays out the most penetrating logic of the same and other through a discussion of hospitality. In his final works on sovereignty, in particular, Rogues (2003), Derrida shows that the law always contains the possibility of suspension, which means that even the most democratic of nations (the United States for example) resembles a “rogue state” or perhaps is the most “roguish” of all states. Based on lectures first presented during the summer of 1998, L’animal que donc je suis (The Animal that Therefore I am) appeared as the first posthumous work in 2006; concerning animality, it indicates Derrida’s continuous interest in the question of life.
Sometime in 2002, Derrida was diagnosed with pancreatic cancer. He died on October 8, 2004.

2. “The Incorruptibles”

As we noted, Derrida became famous at the end of the 1960’s, with the publication of three books in 1967. At this time, other great books appear: Foucault’s Les mots et les choses (The Order of Things is the English language title) in 1966; Deleuze’s Difference and Repetition in 1968. It is hard to deny that the philosophy publications of this epoch indicate that we have before us a kind of philosophical moment (a moment perhaps comparable to the moment of German Idealism at the beginning of the 19th century). Hélène Cixous calls this generation of French philosophers “the incorruptibles.” In the last interview Derrida gave (to Le Monde on August 19, 2004), he provided an interpretation of “the incorruptibles”: “By means of metonymy, I call this approach [of “the incorruptibles”] an intransigent, even incorruptible, ethos of writing and thinking …, without concession even to philosophy, and not letting public opinion, the media, or the phantasm of an intimidating readership frighten or force us into simplifying or repressing. Hence the strict taste for refinement, paradox, and aporia.” Derrida proclaims that today, more than ever, “this predilection [for paradox and aporia] remains a requirement.” How are we to understand this requirement, this predilection for “refinement, paradox, and aporia”?
In an essay from 1998, “Typewriter Ribbon,” Derrida investigates the relation of confession to archives. But, before he starts the investigation (which will concern primarily Rousseau), he says, “Let us put in place the premises of our question.” He says, “Will this be possible for us? Will we one day be able to, and in a single gesture, to join the thinking of the event to the thinking of the machine? Will we be able to think, what is called thinking, at one and the same time, both what is happening (we call that an event) and the calculable programming of an automatic repetition (we call that a machine). For that, it would be necessary in the future (but there will be no future except on this condition) to think both the event and the machine as two compatible or even in-dissociable concepts. Today they appear to us to be antinomic” (Without Alibi, p. 72). These two concepts appear to us to be antinomic because we conceive an event as something singular and non-repeatable. Moreover, Derrida associates this singularity to the living. The living being undergoes a sensation and this sensation (an affect or feeling for example) gets inscribed in organic material. The idea of an inscription leads Derrida to the other pole. The machine that inscribes is based in repetition; “It is destined, that is, to reproduce impassively, imperceptibly, without organ or organicity, the received commands. In a state of anaesthesis, it would obey or command a calculable program without affect or auto-affection, like an indifferent automaton” (Without Alibi, p. 73). The automatic nature of the inorganic machine is not the spontaneity attributed to organic life. It is easy to see the incompatibility of the two concepts: organic, living singularity (the event) and inorganic, dead universality (mechanical repetition). 
Derrida says that, if we can make these two concepts compatible, “you can bet not only (and I insist on not only) will one have produced a new logic, an unheard of conceptual form. In truth, against the background and at the horizon of our present possibilities, this new figure would resemble a monster.” The monstrosity of this paradox between event and repetition announces, perhaps, another kind of thinking, an impossible thinking: the impossible event (there must be resemblance to the past which cancels the singularity of the event) and the only possible event (since any event, in order to be an event worthy of its name, must be singular and non-resembling). Derrida concludes this discussion by saying: “To give up neither the event nor the machine, to subordinate neither one to the other, neither to reduce one to the other: this is perhaps a concern of thinking that has kept a certain number of ‘us’ working for the last few decades” (Without Alibi, p. 74). This “us” refers to Derrida’s generation of thinkers: “the incorruptibles.” What Derrida says here defines a general project which consists in trying to conceive the relation between machine-like repeatability and irreplaceable singularity neither as a relation of externality (external as in Descartes’s two substances or as in Platonism’s two worlds) nor as a relation of homogeneity (any form of reductionism would suffice here to elucidate a homogeneous relation). Instead, the relation is one in which the elements are internal to one another and yet remain heterogeneous. Derrida’s famous term “différance” (to which we shall return below) refers to this relation in which machine-like repeatability is internal to irreplaceable singularity and yet the two remain heterogeneous to one another.
Of course, Cixous intends with the word “incorruptibles” that the generation of French philosophers who came of age in the Sixties, what they wrote and did, will never decay, will remain endlessly new and interesting. This generation will remain pure. But, the term is particularly appropriate for Derrida, since his thought concerns precisely the idea of purity and therefore contamination. Contamination, in Derrida, implies that an opposition consisting in two pure poles separated by an indivisible line never exists. In other words, traditionally (going back to Plato’s myths but also Christian theology), we think that there was an original pure state of being (direct contact with the forms or the Garden of Eden) which accidentally became corrupt. In contrast, Derrida tries to show that no term or idea or reality is ever pure in this way; one term always and necessarily “infects” the other.
Nevertheless, for Derrida, a kind of purity remains as a value. In his 1992 The Monolingualism of the Other, Derrida speaks of his “shameful intolerance” for anything but the purity of the French language (as opposed to French contaminated with English words like “le weekend”). Derrida says, “I still do not dare admit this compulsive demand for a purity of language except within boundaries of which I can be sure: this demand is neither ethical, political, nor social. It does not inspire any judgment in me. It simply exposes me to suffering when someone, who can be myself, happens to fall short of it. I suffer even further when I catch myself or am caught ‘red-handed’ in the act. … Above all, this demand remains so inflexible that it sometimes goes beyond the grammatical point of view, it even neglects ‘style’ in order to bow to a more hidden rule, to ‘listen’ to the domineering murmur of an order which someone in me flatters himself to understand, even in situations where he would be the only one to do so, in a tête-à-tête with the idiom, the final target: a last will of the language, in sum, a law of the language that would entrust itself only to me. …I therefore admit to a purity which is not very pure. Anything but a purism. It is, at least, the only impure ‘purity’ for which I dare confess a taste” (Monolingualism, p. 46). Derrida’s taste for purity is such that he seeks the idioms of a language. The idioms of a language are what make the language singular. An idiom is so pure that we seem unable to translate it out of that language. For example, Derrida always connects the French idiom “il faut,” “it is necessary,” to “une faute,” “a fault,” and to “un défaut,” “a defect”; but we cannot make this linguistic connection between necessity and a fault in English. This idiom seems to belong alone to French; it seems as though it cannot be shared; so far, there is no babble of several languages in the one sole French language.
And yet, even within one language, an idiom can be shared. Here is another French idiom: “il y va d’un certain pas.” Even in French, this idiom can be “translated.” On the one hand, if one takes the “il y va” literally, one has a sentence about movement to a place (“y”: there) at a certain pace (“un certain pas”: a certain step). On the other hand, if one takes the “il y va” idiomatically (“il y va”: what is at issue), one has a sentence (perhaps more philosophical) about the issue of negation (“un certain pas”: “a certain kind of not”). This undecidability in how to understand an idiom within one sole language indicates that, already in French, in the one French language, there is already translation and, as Derrida would say, “Babelization.” Therefore, for Derrida, “a pure language” means a language whose terms necessarily include a plurality of senses that cannot be reduced down to one sense that is the proper meaning. In other words, the taste for purity in Derrida is a taste for impropriety and therefore impurity. The value of purity in Derrida means that anyone who conceives language in terms of proper or pure meanings must be criticized.

3. Basic Argumentation and its Implications: Time, Hearing-Oneself-Speak, the Secret, and Sovereignty
Already we are very close to Derrida’s basic argumentation. The basic argumentation always attempts to show that no one is able to separate irreplaceable singularity and machine-like repeatability (or “iterability,” as Derrida frequently says) into two substances that stand outside of one another; nor is anyone able to reduce one to the other so that we would have one pure substance (with attributes or modifications). Machine-like repeatability and irreplaceable singularity, for Derrida, are like two forces that attract one another across a limit that is indeterminate and divisible. Yet, to understand the basic argumentation, we must be, as Derrida himself says in Rogues, “responsible guardians of the heritage of transcendental idealism” (Rogues, p. 134; see also Limited Inc, p. 93). Kant had of course opened up the possibility of this way of philosophizing: arguing back (Kant called this arguing back a “deduction”) from the givenness of experience to the conditions that are necessarily required for the way experience is given. These conditions would function as a foundation for all experience. Following Kant (but also Husserl and Heidegger), Derrida then is always interested in necessary and foundational conditions of experience.
So, let us start with the simplest argument that we can formulate. If we reflect on experience in general, what we cannot deny is that experience is conditioned by time. Every experience, necessarily, takes place in the present. In the present experience, there is the kernel or point of the now. What is happening right now is a kind of event, different from every other now I have ever experienced. Yet, also in the present, I remember the recent past and I anticipate what is about to happen. The memory and the anticipation consist in repeatability. Because what I experience now can be immediately recalled, it is repeatable and that repeatability therefore motivates me to anticipate the same thing happening again. Therefore, what is happening right now is also not different from every other now I have ever experienced. At the same time, the present experience is an event and it is not an event because it is repeatable. This “at the same time” is the crux of the matter for Derrida. The conclusion is that we can have no experience that does not essentially and inseparably contain these two agencies of event and repeatability.
This basic argument contains four important implications. First, experience as the experience of the present is never a simple experience of something present over and against me, right before my eyes as in an intuition; there is always another agency there. Repeatability contains what has passed away and is no longer present and what is about to come and is not yet present. The present therefore is always complicated by non-presence. Derrida calls this minimal repeatability found in every experience “the trace.” Indeed, the trace is a kind of proto-linguisticality (Derrida also calls it “arche-writing”), since language in its most minimal determination consists in repeatable forms. Second, the argument has disturbed the traditional structure of transcendental philosophy, which consists in a linear relation between foundational conditions and founded experience. In traditional transcendental philosophy (as in Kant for example), an empirical event such as what is happening right now is supposed to be derivative from or founded upon conditions which are not empirical. Yet, Derrida’s basic argument demonstrates that the empirical event is a non-separable part of the structural or foundational conditions. Or, in traditional transcendental philosophy, the empirical event is supposed to be an accident that overcomes an essential structure. But with Derrida’s argument, we see that this accident cannot be removed or eliminated. We can describe this second implication in still another way. In traditional philosophy we always speak of a kind of first principle or origin and that origin is always conceived as self-identical (again something like a Garden of Eden principle). Yet, here we see that the origin is immediately divided, as if the “fall” into division, accidents, and empirical events has always already taken place. In Of Spirit, Derrida calls this kind of origin “origin-heterogeneous”: the origin is heterogeneous immediately (Of Spirit, pp. 107-108). 
Third, if the origin is always heterogeneous, then nothing is ever given as such in certainty. Whatever is given is given as other than itself, as already past or as still to come. What becomes foundational therefore in Derrida is this “as”: origin as the heterogeneous “as.” The “as” means that there is no knowledge as such, there is no truth as such, there is no perception as such. Faith, perjury, and language are already there in the origin. Fourth, if something like a fall has always already taken place, has taken place essentially or necessarily, then every experience contains an aspect of lateness. It seems as though I am always late for the origin since it seems to have always already disappeared. Every experience then is always not quite on time or, as Derrida quotes Hamlet, time is “out of joint.” Late in his career, Derrida will call this time being out of joint “anachronism” (see for instance On the Name, p. 94). As we shall see in a moment, anachronism for Derrida is the flip side of what he calls “spacing” (espacement); space is out of place. But we should also keep in mind, as we move forward, that the phrase “out of joint” alludes to justice: being out of joint, time is necessarily unjust or violent.
So far, we can say that the argument is quite simple although it has wide-ranging implications. It is based on an analysis of experience, but it is also based in the experience of what Derrida has called “auto-affection.” We find the idea of auto-affection (or self-affection) in ancient Greek philosophy, for example in Aristotle’s definition of God as “thought thinking itself.” Auto-affection occurs when I affect myself, when the affecting is the same as the affected. As we said above, Derrida will frequently write about autobiography as a form of auto-affection or self-relation. In the very late L’animal que donc je suis, Derrida tells us what he is trying to do with auto-affection: “if the auto-position, the auto-monstration of the auto-directedness of the I, even in man, implied the I as an other and had to welcome in the self some irreducible hetero-affection (which I [that is, Derrida] have attempted elsewhere [my emphasis]), then this autonomy of the I would be neither pure nor rigorous; it would not be able to give way to a simple and linear delimitation between man and animal” (L’animal que donc je suis, p. 133, my English translation). Always, Derrida tries to show that auto-affection is hetero-affection; the experience of the same (I am thinking about myself) is the experience of the other (insofar as I think about myself I am thinking of someone or something else at the same time). But, in order to understand more fully the basic argumentation, let us look at three of these “other places” where Derrida has “attempted” to show that an irreducible hetero-affection infects auto-affection.
The first occurs in La voix et le phénomène (literally the title is Voice and Phenomenon; the title of the English translation is Speech and Phenomena), Derrida’s 1967 study of Husserl. Here, Derrida argues that, when Husserl describes lived-experience (Erlebnis), even absolute subjectivity, he is speaking of an interior monologue, auto-affection as hearing-oneself-speak. According to Derrida, hearing-oneself-speak is, for Husserl, “an absolutely unique kind of auto-affection” (Speech and Phenomena, p. 78). It is unique because there seems to be no external detour from the hearing to the speaking; in hearing-oneself-speak there is self-proximity. It seems therefore that I hear myself speak immediately in the very moment that I am speaking. According to Derrida, Husserl’s own description of temporalization however undermines the idea that I hear myself speak immediately. On the one hand, Husserl describes what he calls the “living present,” the present that I am experiencing right now, as being perception, and yet Husserl also says that the living present is thick. The living present is thick because it includes phases other than the now, in particular, what Husserl calls “protention,” the anticipation (or “awaiting,” we might say) of the approaching future and “retention,” the memory of the recent past. As is well known, Derrida focuses on the status of retention in Voice and Phenomenon. Retention in Husserl has a strange status since Husserl wants to include it in the present as a kind of perception and at the same time he recognizes that it is different from the present as a kind of non-perception. For Derrida, Husserl’s descriptions imply that the living present, by always folding the recent past back into itself, by always folding memory into perception, involves a difference in the very middle of it (Speech and Phenomena, p. 69). 
In other words, in the very moment, when silently I speak to myself, it must be the case that there is a minuscule hiatus differentiating me into the speaker and into the hearer. There must be a hiatus that differentiates me from myself, a hiatus or gap without which I would not be a hearer as well as a speaker. This hiatus also defines the trace, a minimal repeatability. And this hiatus, this fold of repetition, is found in the very moment of hearing-myself-speak. Derrida stresses that “moment” or “instant” translates the German “Augenblick,” which literally means “blink of the eye.” When Derrida stresses the literal meaning of “Augenblick,” he is in effect “deconstructing” auditory auto-affection into visual auto-affection. When I look in the mirror, for example, it is necessary that I am “distanced” or “spaced” from the mirror. I must be distanced from myself so that I am able to be both seer and seen. The space between, however, remains obstinately invisible. Remaining invisible, the space gouges out the eye, blinds it. I see myself over there in the mirror and yet, that self over there is other than me; so, I am not able to see myself as such. What Derrida is trying to demonstrate here is that this “spacing” (espacement) or blindness is essentially necessary for all forms of auto-affection, even tactile auto-affection, which seems to be immediate.
Now, let us go to another “other place,” which can be found in “How to Avoid Speaking.” Here Derrida discusses negative theology by means of the idea of “dénégation,” “denegation” or “denial.” The French word “dénégation” translates Freud’s term “Verneinung.” With its negative prefix (“ver”), this German term implies a negation of a negation, a denial then but one that is also an affirmation. The fundamental question then for negative theology, but also psychoanalysis, and for Derrida is how to deny and yet also not deny. This duality between not telling and telling is why Derrida takes up the idea of the secret. In “How to Avoid Speaking,” Derrida says, and this is an important comment for understanding the secret in Derrida: “There is a secret of denial [dénégation] and a denial [dénégation] of the secret. The secret as such, as secret, separates and already institutes a negativity; it is a negation that denies itself. It de-negates itself” (Languages of the Unsayable, p. 25, my emphasis). Here Derrida speaks of a secret as such. A secret as such is something that must not be spoken; we then have the first negation: “I promise not to give the secret away.” And yet, in order to possess a secret really, to have it really, I must tell it to myself. Here we can see the relation of hearing-oneself-speak that we just saw in Voice and Phenomenon. Keeping a secret includes necessarily auto-affection: I must speak to myself of the secret. We might however say more, we might even say that I am too weak for this speaking of the secret to myself not to happen. I must have a conceptual grasp of it; I have to frame a representation of the secret. With the idea of a re-presentation (I must present the secret to myself again in order to possess it really), we also see retention, repetition, and the trace or a name. A trace of the secret must be formed, in which case, the secret is in principle shareable. If the secret must be necessarily shareable, it is always already shared. 
In other words, in order to frame the representation of the secret, I must negate the first negation, in which I promised not to tell the secret: I must tell the secret to myself as if I were someone else. I thereby make a second negation, a “de-negation,” which means I must break the promise not to tell the secret. In order to keep the secret (or the promise), I must necessarily not keep the secret (I must violate the promise). So, I possess the secret and do not possess it. This structure has the consequence of there being no secret as such. A secret is necessarily shared. As Derrida says in “How to Avoid Speaking,”
This denial [dénégation] does not happen [to the secret] by accident; it is essential and originary. … The enigma … is the sharing of the secret, and not only shared to my partner in the society but the secret shared within itself, its ‘own’ partition, which divides the essence of a secret that cannot even appear to one alone except in starting to be lost, to divulge itself, hence to dissimulate itself, as secret, in showing itself: dissimulating its dissimulation. There is no secret as such; I deny it. And this is what I confide in secret to whomever allies himself to me. This is the secret of the alliance. (Languages of the Unsayable, p. 25)
Now, finally, let us go to one of the most recent of Derrida’s writings, to his 2002 “The Reason of the Strongest,” the first essay in the book called Rogues. There Derrida is discussing the United Nations, which he says combines the two principles of Western political thought: sovereignty and democracy. But, “democracy and sovereignty are at the same time, but also by turns, inseparable and in contradiction with one another” (Rogues, p. 100). Democracy and sovereignty contradict one another in the following way. And here Derrida is speaking of pure sovereignty, the very “essence of sovereignty” (Rogues, p. 100). On the one hand, in order to be sovereign, one must wield power oneself, take responsibility for its use by oneself, which means that the use of power, if it is to be sovereign, must be silent; the sovereign does not have to give reasons; the sovereign must exercise power in secret. In other words, sovereignty attempts to possess power indivisibly, it tries not to share, and not sharing means contracting power into an instant—the instant of action, of an event, of a singularity. We can see the outline here of Derrida’s deconstruction not only of the hearing-oneself-speak auto-affection but also of the auto-affection of the promising-to-oneself to keep a secret. On the other hand, democracy calls for the sovereign to share power, to give reasons, to universalize. In democracy the use of power therefore is always an abuse of power. Derrida can also say that sovereignty and democracy are inseparable from one another (the contradiction makes them heterogeneous to one another) because democracy even though it calls for universalization (giving reasons in an assembly) also requires force, freedom, a decision, sovereign power. For Derrida, in democracy, a decision (the use of power) is always urgent; and yet (here is the contradiction), democracy takes time, democracy makes one wait so that the use of power can be discussed. 
Power can never be exercised without its communication; as Derrida says, “As soon as I speak to the other, I submit to the law of giving reason(s), I share a virtually universalizable medium, I divide my authority” (Rogues, p. 101). There must be sovereignty, and yet, there can be no use of power without the sharing of it through repetition. More precisely, as Derrida says, “Since [sovereignty] never succeeds in [not sharing] except in a critical, precarious, and unstable fashion, sovereignty can only tend, for a limited time, to reign without sharing. It can only tend toward imperial hegemony. To make use of the time is already an abuse” (Rogues, p. 102, Derrida’s emphasis). This tendency defines what Derrida calls “the worst,” a tendency toward the complete appropriation or extermination of all the others.

4. Elaboration of the Basic Argumentation: The Worst and Hospitality
Throughout his career, Derrida elaborates on the basic argumentation in many ways. But Derrida always uses the argumentation against one idea, which Derrida calls “the worst” (le pire). We can extract a definition of the worst from “Faith and Knowledge” (Religion, p. 65). It revolves around an ambiguous phrase “plus d’un,” which could be translated in English as “more than one,” “more of one,” or “no more one.” On the one hand, this phrase means that in auto-affection, even while it is “auto,” the same, there is more than one; immediately with one, there is two, the self and other, and others. On the other hand, it means that there is a lot more of one, only one, the most one. The worst derives from this second sense of “plus d’un.” The worst is a superlative; it is the worst violence. Derrida, it seems, distinguishes the worst violence from what Kant had called “radical evil.” Radical evil is literally radical, evil at the root. It consists in the small, “infinitesimal difference” (see Of Grammatology, p. 234) between me and an other, even between me and an other in me. Derrida would describe this infinitesimal hiatus as the address, the “à” or the “to”; it is not only difference, across the distance of the address, it is also repetition. And, it is not only a repetition; this self-divergence is also violence, a rending of oneself, an incision. Nevertheless, radical evil is not absolute evil (see Philosophy in a Time of Terror, p. 99). The worst violence occurs when the other to which one is related is completely appropriated to or completely in one’s self, when an address reaches its proper destination, when it reaches only its proper destination. Reaching only its proper destination, the address will exclude more, many more, and that “many more,” at the limit, amounts to all. It is this complete exclusion or this extermination of the most – there is no limit to this violence—that makes this violence the worst violence. 
The worst is a relation that makes of more than one simply one, that makes, out of a division, an indivisible sovereignty. We can see again that the worst resembles the “pure actuality” of Aristotle’s Prime Mover, the One God: the sphere, or better, the globe of thought thinking itself (Rogues, p. 15).
What we have just laid out is the structure of the worst in Derrida’s thinking. But the structure, for Derrida, can always happen as an event. Derrida thinks that today, “in a time of terror,” after the end of the Cold War, when globalization is taking place, the fragility of the nation-state is being tested more and more. Agencies such as the International Criminal Court, along with the demand for universal human rights, encroach on nation-state sovereignty. But the result of this universalization or “worlding” (“mondialisation” is the French word for globalization) is that the concept of war, and thus of world war, of enemy, and even of terrorism, along with the distinctions between civilian and military or between army, police, and militia, all of these concepts and distinctions are losing their pertinence. As Derrida says here in Rogues, “what is called September 11 will not have created or revealed this situation, although it will have surely media-theatricalized it” (Rogues, pp. 154-55). Now, with globalization, there is no identifiable enemy in the form of a “state” territory with whom one (in Rogues Derrida uses this phrase: “the United States and its allies”) would wage what could still be called a “war,” even if we think of this as a war on international terrorism. As for the balance of terror of the Cold War, which insured that no escalation of nuclear weapons would lead to a suicidal operation, Derrida says, “all that is over.” Instead, “a new violence is being prepared and in truth has been unleashed for some time now, in a way that is more visibly suicidal or auto-immune than ever. This violence no longer has to do with world war or even with war, even less with some right to wage war. And this is hardly re-assuring – indeed, quite the contrary” (Rogues, p. 156).
What does it mean to be “more suicidal”? To be more suicidal is to kill oneself more. The “more” means that, since there is only a fragile distinction between states (there is no identification of the enemy), one’s state or self includes more and more of the others. But, if one’s self includes others that threaten (so-called “terrorist cells,” for example), then, if one wants to immunize oneself, one must murder more and more of those others that are inside. Since the others are inside one’s state or one’s self, one is required to kill more and more of oneself. This context is very different from the rigid and external opposition, symbolized by the so-called “Iron Curtain,” that defined the Cold War. There and then, “we” had an identifiable enemy, with a name, which allowed the number of the enemies to be limited. But here and now, today, the number of “enemies” is potentially unlimited. Every other is wholly other (“tout autre est tout autre” [cf. The Politics of Friendship, p. 232]) and thus every single other needs to be rejected by the immune system. This innumerable rejection resembles a genocide or, what is worse, an absolute threat. The absolute threat can no longer be contained when it comes neither from an already constituted state nor even from a potential state that might be treated as a rogue state (Rogues, p. 105). What Derrida is saying here is that the worst is possible, here and now, more possible than ever.
As I said, Derrida always uses the basic argumentation that we have laid out against the idea of the worst; today the tendency towards the worst is greater than ever. The purpose in the application – this purpose defines deconstruction – is to move us towards, not the worst violence, not the most violence, but the least violence (Writing and Difference, p. 130). How does the application of the argumentation against the worst work? Along with globalization, the post-Cold War period sees, as Derrida says in “Faith and Knowledge,” a “return of the religious” (Religion, pp. 42-43). So, in “Faith and Knowledge,” Derrida lays out the etymology of the Latin word “religion” (he acknowledges that the etymology is problematic). The etymology implies that there are “two sources” of religion: “religio,” which implies a holding back or a being unscathed, safe and sound; and “re-legere,” which implies a linking up with another through faith (Religion, p. 16). We can see in this etymology the inseparable dualities we examined above: singular event and machine-like repeatability; auto-affection as hetero-affection. Most importantly, Derrida is trying to understand the “link” that defines religion prior to the link between man as such and the divinity of God. What we can see in this attempt to conceive the link as it is prior to its determination in terms of man and God is an attempt to make the link be as open as possible. Derrida is attempting to “un-close,” as much as possible, the sphericity or englobing of thought thinking itself – in order to open the link as wide as possible, open it to every single other, to any other whatsoever. Throughout his career, Derrida is always interested in the status of animality since it determines the limit between man and others. As his final book, L’animal que donc je suis, demonstrates, Derrida is attempting to open the link even to animals.
Animals are other and, because “every other is wholly other” (tout autre est tout autre), the link must be open to them too. Here, despite the immense influence they have had on his thought, Derrida breaks with both Heidegger and Levinas, neither of whom opened the link this wide (see Points, p. 279). Here, with the “door” or “border” open as wide as possible, we encounter Derrida’s idea of “unconditional hospitality,” which means letting others in no matter what, without asking them for papers, without judging them, even when they are uninvited. All are to be treated not as enemies who must be expelled or exterminated, but as friends.
This unconditional openness of the borders is not the best (as opposed to what we were calling the worst above). It is only the less bad or the less evil, the lesser violence. Why? The unconditional opening is not possible. There are always conditions. Among all the others we must decide, we must assign them papers, which means that there is always still, necessarily, violence at the borders. At once, in hospitality, there is the force that moves towards the other to welcome and the force to remain unscathed and pulled back from the other, trying to keep the door closed. Here too, in hospitality, we see Derrida’s idea of a “messianism without messiah.” Because letting all the others in is impossible, this de-closing is always to come in the future like the messiah coming or coming back (Derrida plays on the French word for the future, “l’avenir,” which literally means “to come,” “à venir”). We must make one more point. The impossibility of unconditional hospitality means that any attempt to open the globe completely is insufficient. Being insufficient, every attempt therefore requires criticism; it must be “deconstructed,” as Derrida would say. But this deconstruction would be a deconstruction that recognizes its own insufficiency. Deconstruction, to which we now turn, never therefore results in good conscience, in the good conscience that comes with thinking we have done enough to render justice.

5. Deconstruction
As we said at the beginning, “deconstruction” is the most famous of Derrida’s terms. He seems to have appropriated the term from Heidegger’s use of “destruction” in Being and Time. But we can get a general sense of what Derrida means with deconstruction by recalling Descartes’s First Meditation. There Descartes says that for a long time he has been making mistakes. The criticism of his former beliefs, both mistaken and valid, aims at uncovering a “firm and permanent foundation.” The image of a foundation implies that the collection of his former beliefs resembles a building. In the First Meditation then, Descartes is in effect taking down this old building, “de-constructing” it. We have also seen how much Derrida is indebted to traditional transcendental philosophy, which really starts here with Descartes’ search for a “firm and permanent foundation.” But with Derrida, we know now, the foundation is not a unified self but a divisible limit between myself and myself as an other (auto-affection as hetero-affection: “origin-heterogeneous”).
Derrida has provided many definitions of deconstruction. But three definitions are classical. The first is early, being found in the 1971 interview “Positions” and in the 1972 Preface to Dissemination: deconstruction consists in “two phases” (Positions, pp. 41-42; Dissemination, pp. 4-6). At this stage of his career Derrida famously (or infamously) speaks of “metaphysics” as if the Western philosophical tradition were monolithic and homogeneous. At times he also speaks of “Platonism,” as Nietzsche did. Simply, deconstruction is a criticism of Platonism, which is defined by the belief that existence is structured in terms of oppositions (separate substances or forms) and that the oppositions are hierarchical, with one side of the opposition being more valuable than the other. The first phase of deconstruction attacks this belief by reversing the Platonistic hierarchies: the hierarchies between the invisible or intelligible and the visible or sensible; between essence and appearance; between the soul and body; between living memory and rote memory; between mnēmē and hypomnēsis; between voice and writing; and, finally, between good and evil. In order to clarify deconstruction’s “two phases,” let us restrict ourselves to one specific opposition, the opposition between appearance and essence. Nietzsche had also criticized this opposition but it is clearly central to phenomenological thinking as well. So, in Platonism, essence is more valuable than appearance. In deconstruction, however, we reverse this, making appearance more valuable than essence. How? Here we could resort to empiricist arguments (in Hume for example) that show that all knowledge of what we call essence depends on the experience of what appears. But then, this argumentation would imply that essence and appearance are not related to one another as separate oppositional poles.
The argumentation in other words would show us that essence can be reduced down to a variation of appearances (involving the roles of memory and anticipation). The reduction is a reduction to what we can call “immanence,” which carries the sense of “within” or “in.” So, we would say that what we used to call essence is found in appearance, essence is mixed into appearance. Now, we can backtrack a bit in the history of Western metaphysics. On the basis of the reversal of the essence-appearance hierarchy and on the basis of the reduction to immanence, we can see that something like a decision (a perhaps impossible decision) must have been made at the beginning of the metaphysical tradition, a decision that instituted the hierarchy of essence-appearance and separated essence from appearance. This decision is what really defines Platonism or “metaphysics.” After this retrospection, we can turn now to a second step in the reversal-reduction of Platonism, which is the second “phase” of deconstruction. The previously inferior term must be re-inscribed as the “origin” or “resource” of the opposition and hierarchy itself. How would this re-inscription or redefinition of appearance work? Here we would have to return to the idea that every appearance or every experience is temporal. In the experience of the present, there is always a small difference between the moment of now-ness and the past and the future. (It is perhaps possible that Hume had already discovered this small difference when, in the Treatise, he speaks of the idea of relation.) In any case, this infinitesimal difference is not only a difference that is non-dualistic, but also it is a difference that is, as Derrida would say, “undecidable.” Although the minuscule difference is virtually unnoticeable in everyday common experience, when we in fact notice it, we cannot decide if we are experiencing the past or the present, if we are experiencing the present or the future.
Insofar as the difference is undecidable, it destabilizes the original decision that instituted the hierarchy. After the redefinition of the previously inferior term, Derrida usually changes the term’s orthography, for example, writing “différence” with an “a” as “différance” in order to indicate the change in its status. Différance (which is found in appearances when we recognize their temporal nature) then refers to the undecidable resource into which “metaphysics” “cut” in order to make its decision. In “Positions,” Derrida calls names like “différance” “old names” or “paleonyms,” and there he also provides a list of these “old terms”: “pharmakon”; “supplement”; “hymen”; “gram”; “spacing”; and “incision” (Positions, p. 43). These names are old because, like the word “appearance” or the word “difference,” they have been used for centuries in the history of Western philosophy to refer to the inferior position in hierarchies. But now, they are being used to refer to the resource that has never had a name in “metaphysics”; they are being used to refer to the resource that is indeed “older” than the metaphysical decision.
This first definition of deconstruction as two phases gives way to the refinement we find in the “Force of Law” (which dates from 1989-1990). This second definition is less metaphysical and more political. In “Force of Law,” Derrida says that deconstruction is practiced in two styles (Deconstruction and the Possibility of Justice, p. 21). These “two styles” do not correspond to the “two phases” in the earlier definition of deconstruction. On the one hand, there is the genealogical style of deconstruction, which recalls the history of a concept or theme. Earlier in his career, in Of Grammatology, Derrida had laid out, for example, the history of the concept of writing. But now what is at issue is the history of justice. On the other hand, there is the more formalistic or structural style of deconstruction, which examines a-historical paradoxes or aporias. In “Force of Law,” Derrida lays out three aporias, although they all seem to be variants of one, an aporia concerning the unstable relation between law (the French term is “droit,” which also means “right”) and justice.
Derrida calls the first aporia, “the epoche of the rule” (Deconstruction and the Possibility of Justice, pp. 22-23). Our most common axiom in ethical or political thought is that to be just or unjust and to exercise justice, one must be free and responsible for one’s actions and decisions. Here Derrida in effect is asking: what is freedom? On the one hand, freedom consists in following a rule; but in the case of justice, we would say that a judgment that simply followed the law was only right, not just. For a decision to be just, not only must a judge follow a rule but also he or she must “re-institute” it, in a new judgment. Thus a decision aiming at justice (a free decision) is both regulated and unregulated. The law must be conserved and also destroyed or suspended, suspension being the meaning of the word “epoche.” Each case is other, each decision is different and requires an absolutely unique interpretation which no existing coded rule can or ought to guarantee. If a judge programmatically follows a code, he or she is a “calculating machine.” Strict calculation or arbitrariness, one or the other is unjust, but they are both involved; thus, in the present, we cannot say that a judgment, a decision is just, purely just. For Derrida, the “re-institution” of the law in a unique decision is a kind of violence since it does not conform perfectly to the instituted codes; the law is always, according to Derrida, founded in violence. The violent re-institution of the law means that justice is impossible. Derrida calls the second aporia “the ghost of the undecidable” (Deconstruction and the Possibility of Justice, pp. 24-26). A decision begins with the initiative to read, to interpret, and even to calculate. But to make such a decision, one must first of all experience what Derrida calls “undecidability.” One must experience that the case, being unique and singular, does not fit the established codes and therefore a decision about it seems to be impossible.
The undecidable, for Derrida, is not mere oscillation between two significations. It is the experience of what, though foreign to the calculable and the rule, is still obligated. We are obligated – this is a kind of duty – to give ourselves up to the impossible decision, while taking account of rules and law. As Derrida says, “A decision that did not go through the ordeal of the undecidable would not be a free decision, it would only be the programmable application or unfolding of a calculable process” (Deconstruction and the Possibility of Justice, p. 24). And once the ordeal is past (“if this ever happens,” as Derrida says), then the decision has again followed or given itself a rule and is no longer presently just. Justice therefore is always to come in the future, it is never present. There is apparently no moment during which a decision could be called presently and fully just. Either it has not followed a rule, hence it is unjust; or it has followed a rule, which has no foundation, which makes it again unjust; or, if it did follow a rule, it was calculated and again unjust since it did not respect the singularity of the case. This relentless injustice is why the ordeal of the undecidable is never past. It keeps coming back like a “phantom,” which “deconstructs from the inside every assurance of presence, and thus every criteriology that would assure us of the justice of the decision” (Deconstruction and the Possibility of Justice, pp. 24-25). Even though justice is impossible and therefore always to come in or from the future, justice is not, for Derrida, a Kantian ideal, which brings us to the third aporia. The third is called “the urgency that obstructs the horizon of knowledge” (Deconstruction and the Possibility of Justice, pp. 26-28).
Derrida stresses the Greek etymology of the word “horizon”: “As its Greek name suggests, a horizon is both the opening and limit that defines an infinite progress or a period of waiting.” Justice, however, even though it is un-presentable, does not wait. A just decision is always required immediately. It cannot furnish itself with unlimited knowledge. The moment of decision itself remains a finite moment of urgency and precipitation. The instant of decision is then the moment of madness, acting in the night of non-knowledge and non-rule. Once again we have a moment of irruptive violence. This urgency is why justice has no horizon of expectation (either regulative or messianic). Justice remains an event yet to come. Perhaps one must always say “can-be” (the French word for “perhaps” is “peut-être,” which literally means “can be”) for justice. This ability for justice aims however towards what is impossible.
Even later in his career, Derrida formalizes, beyond these aporias, the nature of deconstruction. The third definition of deconstruction can be found in an essay from 2000 called “Et Cetera.” Here Derrida in fact presents the principle that defines deconstruction:
Each time that I say ‘deconstruction and X (regardless of the concept or the theme),’ this is the prelude to a very singular division that turns this X into, or rather makes appear in this X, an impossibility that becomes its proper and sole possibility, with the result that between the X as possible and the ‘same’ X as impossible, there is nothing but a relation of homonymy, a relation for which we have to provide an account…. For example, here referring myself to demonstrations I have already attempted …, gift, hospitality, death itself (and therefore so many other things) can be possible only as impossible, as the im-possible, that is, unconditionally (Deconstructions: a User’s Guide, p. 300, my emphasis).
Even though the word “deconstruction” has been bandied about, we can see now the kind of thinking in which deconstruction engages. It is a kind of thinking that never finds itself at the end. Justice – this is undeniable – is impossible (perhaps justice is the “impossible”) and therefore it is necessary to make justice possible in countless ways.


The following is an excerpt from David Abram’s book, The Spell of the Sensuous. In this passage he speaks about the ‘logos’:


A large part of the world wound is a sense of having been torn away from the natural world, and a large part of our healing has to do with rejoining this ‘logos’ ‘deeper than words.’

“…Late one evening I stepped out of my little hut in the rice paddies of eastern Bali and found myself falling through space. Over my head the black sky was rippling with stars, densely clustered in some regions, almost blocking out the darkness between them, and more loosely scattered in other areas, pulsing and beckoning to each other. Behind them all streamed the great river of light with its several tributaries. Yet the Milky Way churned beneath me as well, for my hut was set in the middle of a large patchwork of rice paddies, separated from each other by narrow two-foot-high dikes, and these paddies were all filled with water. The surface of these pools, by day, reflected perfectly the blue sky, a reflection broken only by the thin, bright green tips of new rice. But by night the stars themselves glimmered from the surface of the paddies, and the river of light whirled through the darkness underfoot as well as above; there seemed no ground in front of my feet, only the abyss of star-studded space falling away forever.
I was no longer simply beneath the night sky, but also above it—the immediate impression was of weightlessness. I might have been able to reorient myself, to regain some sense of ground and gravity, were it not for a fact that confounded my senses entirely: between the constellations above drifted countless fireflies, their lights flickering like the stars, some drifting up to join clusters of stars overhead, others, like graceful meteors, slipping down from above to join the constellation underfoot, and all these paths of light upward and downward were mirrored, as well, in the still surface of the paddies. I felt myself at times falling through space, at other moments floating and drifting. I simply could not dispel the profound vertigo and giddiness; the paths of the fireflies, and their reflections in the water’s surface, held me in a sustained trance. Even after I crawled back to my hut and shut my door on this whirling world, I felt that now the little room in which I lay was itself floating free from the earth.
I had rarely paid much attention to the natural world. But…I became increasingly susceptible to the solicitations of nonhuman things…My ears began to attend, in a new way, to the songs of the birds—no longer just a melodic background to human speech, but meaningful speech in its own right, responding to and commenting on events in the surrounding earth. I became a student of subtle differences: the way a breeze may flutter a single leaf on a whole tree, leaving the other leaves silent and unmoved; or the way the intensity of the sun’s heat expresses itself in the precise rhythm of the crickets. Walking along the dirt paths I learned to slow my pace in order to feel the difference between one nearby hill and the next, or to trace the presence of a particular field at a certain time of day…
And gradually, then, other animals began to intercept me in my wanderings, as if some quality in my posture or the rhythm of my breathing had disarmed their wariness; I would find myself face-to-face with monkeys, and with large lizards that did not slither away when I spoke, but leaned forward in apparent curiosity. In rural Java, I often noticed monkeys accompanying me in the branches overhead, and ravens walked toward me on the road, croaking. While at Pangandaran, a nature preserve jutting out from the north coast of Java…I stepped out from a clutch of trees and found myself looking into the face of one of the rare and beautiful bison that exist only on that island. Our eyes locked. When it snorted, I snorted back: when it shifted its shoulders, I shifted my stance; when I tossed my head, it tossed its head in reply. I found myself caught in nonverbal communication with this other, a gestural duet with which my conscious awareness had very little to do. It was as if my body in its actions was suddenly being motivated by a wisdom older than my thinking mind, as though it was held and moved by a logos, deeper than words, spoken by the Other’s body, the trees, and the stony ground on which we stood.…the Church had long assumed that only human beings have intelligent souls, and that the other animals, to say nothing of trees and rivers, were “created” for no other reason than to serve mankind. We can easily understand why European missionaries, steeped in the dogma of institutionalized Christianity, assumed a belief in supernatural, otherworldly powers among those tribal persons whom they saw awestruck and entranced by nonhuman (but nevertheless natural) forces. What is remarkable is the extent to which contemporary anthropology still preserves the ethnocentric bias of these early interpreters. 
We no longer describe the shaman’s enigmatic spirit-helpers as the “supernatural clap-trap of the heathen primitives”—we have cleansed ourselves of at least that much ethnocentrism; yet we still refer to such enigmatic forces, respectfully now, as “supernaturals”—for we are unable to shed the sense, so endemic to scientific civilization, of nature as a rather prosaic and predictable realm, unsuited to such mysteries. Nevertheless, that which is regarded with the greatest awe and wonder by indigenous, oral culture is, I suggest, none other than what we view as nature itself. The deeply mysterious powers and entities with whom the shaman enters into a rapport are ultimately the same forces—the plants, animals, forests, and winds—that to literate, “civilized” Europeans are just so much scenery, the pleasant backdrop of our more pressing human concerns.
The most sophisticated definition of “magic” that now circulates through the American counterculture is “The ability or power to alter one’s consciousness at will.” No mention is made of any reason for altering one’s consciousness. Yet in tribal cultures that which we call “magic” takes its meaning from the fact that humans, in an indigenous and oral context, experience their own consciousness as simply one form of awareness among many others. The traditional magician cultivates an ability to shift out of his or her own common state of consciousness precisely in order to make contact with other organic forms of sensitivity and awareness with which human existence is entwined. Only by temporarily shedding the accepted perceptual logic of his culture can the sorcerer hope to enter into relation with other species on their own terms; only by altering the common organization of his senses will he be able to enter into a rapport with the multiple nonhuman sensibilities that animate the local landscape. It is this, we might say, that defines a shaman: the ability to readily slip out of the perceptual boundaries that demarcate his or her particular culture—boundaries reinforced by social customs, taboos, and most importantly, the common speech or language – in order to make contact with, and learn from, the other powers in the land. His magic is precisely this heightened receptivity to the meaningful solicitations—songs, cries, gestures—of the larger, more-than-human field.
Magic, then, in its perhaps most primordial sense, is the experience of existing in a world made up of multiple intelligences, the intuition that every form one perceives—from the swallow swooping overhead to the fly on a blade of grass, and indeed the blade of grass itself—is an experiencing form, an entity with its own predilections and sensations that are very different from our own…
Yet we should not be so ready to interpret these dimensions as “supernatural,” nor to view them as realms entirely “internal” to the personal psyche of the practitioner. For it is likely that the “inner world” of our Western psychological experience, like the supernatural heaven of Christian belief, originates in the loss of our ancestral reciprocity with the animate earth. When the animate powers that surround us are suddenly construed as having less significance than ourselves, when the generative earth is abruptly defined as a determinate object devoid of its own sensations and feelings, then the sense of a wild and multiplicitous otherness (in relation to which human existence has always oriented itself) must migrate, either into a supersensory heaven beyond the natural world, or else into the human skull—the only allowable refuge in this world, for what is ineffable and unfathomable.
But in genuinely oral, indigenous cultures, the sensuous world itself remains the dwelling place of the gods, of the numinous powers that can either sustain or extinguish human life. It is not by sending his awareness out beyond the natural world that the shaman makes contact with the purveyors of life and health, nor by journeying into his personal psyche; rather, it is by propelling his awareness laterally, outward into the depths of a landscape at once both sensuous and psychological, the living dream that we share with the soaring hawk, the spider, and the stone silently sprouting lichens on its coarse surface.
The magician’s intimate relationship with nonhuman nature becomes most evident when we attend to the easily overlooked background of his or her practice—not just to the more visible tasks of curing and ritual aid to which she is called by individual clients, or to the large ceremonies at which she presides and dances, but to the content of the prayers by which she prepares for such ceremonies, and to the countless ritual gestures that she enacts when alone, the daily propitiations and praise that flow from her toward the land and its many voices.”
The Spell Of The Sensuous, David Abram