

Robin Robertson, "The evolution of number: The archetype of order"

Late in his life, Jung came to believe that number was the primary archetype which underlay the multiplicity of other archetypes.  In his famous essay "Synchronicity: an Acausal Connecting Principle," Jung describes the archetypal significance of numbers:

There is something peculiar, one might even say mysterious, about numbers. . . .If . . . a group of objects is deprived of every single one of its properties or characteristics, there still remains, at the end, its number, which seems to indicate that number is something irreducible. . . . [something which] helps more than anything else to bring order into the chaos of appearances. . . . It may well be the most primitive element of order in the human mind. . . . we [can] define number psychologically as an archetype of order which has become conscious.

(CW8:870).

What in the world does Jung mean when he says that "we can define number psychologically as an archetype of order which has become conscious?"  I don't think many non-mathematicians think of numbers that way.  But consider that there is no such thing as a number existing in physical reality.  Our sense of numbers is purely relational.  The thing common between my "two" eyes and my "two" ears and my "two" arms and my "two" legs is that there are "two" of each.  A relationship doesn't exist as a "thing"; it is a statement about the connections between "things".  Mathematics is the study of the relationships between numbers; hence the relationships between relationships.

The primary relationship in all of our lives is the relationship between the ego and the Self, what Edward Edinger has called the ego-Self axis.  As the most primitive expression of order and relationship in the universe, number directly mirrors the relationship between ego and Self at every stage of man's development.  In this article, I'll trace four phases of the evolution of number and relate each to the relationship between the ego and the Self: (1) the integers or natural numbers; (2) zero; (3) infinity; and (4) self-referential numbers.

Natural numbers are very primitive indeed.  No human community has been identified which does not use at least the smaller integers.  Zero began to appear as early as the 3rd century B.C., but didn't reach full development until the 5th century A.D., nor full acceptance until the 15th century.  Infinity began to emerge in concrete mathematical form in the seventeenth century and achieved explicit definition in the fin de siècle period late in the nineteenth century.  Self-referential numbers first appeared early in the twentieth century in the work of mathematician Kurt Gödel and are not yet fully symbolized.  Most recently self-reference is cutting across virtually every field of science in the form of "chaos theory".  It's no coincidence that mathematics is struggling with self-reference, science with chaos, each of us with questions of definition in a world shorn of rules and definitions.  Why worry about number as an archetype of order?  Because we are speaking of the most primitive expression of the Self.

THE INTEGERS

. . . I always come upon the enigma of Natural Number.  I have a distinct feeling that number is a key to the mystery, since it is just as much discovered as it is invented.  It is quantity as well as meaning.  For the latter, I refer to the arithmetical qualities of the fundamental archetype of the so-called Self ("monad, microcosm, etc.") . . .

(Jung, 1975:399f).

For early man, divinity took on many faces in the world outside man.  Jung uses Levy-Bruhl's term "participation mystique" to characterize a state of consciousness where the archetypes within man are experienced exclusively through the animals, objects and natural forces outside man.  Men did not yet have a separate ego under conscious control.  The Self could only be projected outwards.

At this stage, man's sole conception of number and relationship was of the smaller integers, each of which possessed characteristic form and attributes, much as the gods themselves.  Men knew of "one" or "two" or "three".  They didn't yet know anything of "number" itself.

The "natural" numbers, integers, counting numbers.  These are all phrases for the simple whole numbers we know so well: one, two, three, four, five, and so forth.  Jung discovered that these numbers - especially the smaller ones - were true symbols; that is, each was an endlessly inexhaustible metaphor.  For example (and by no means is this intended to be exhaustive):

  • "one" is undifferentiated, unity, the point, by extension the circle;
  • "two" splits "one" apart, it demonstrates polarity, opposition, thesis and antithesis, a cross, an "X";
  • "three" is movement away from the stasis of opposition, the possibility of reconciliation between two polarities, the new synthesis contained within thesis and antithesis, the Christian trinity, a triangle;
  • "four" is stability, a constructed unity, the Christian trinity plus Mary, a square.


Scattered throughout the Collected Works are numerous mini-essays on the natural numbers.  Marie-Louise von Franz extended Jung's work on the first four integers in Part II of her Number and Time.

In his Number and the Language of Science, Tobias Dantzig tells the story of a crow who had built its nest in the watch-tower on a squire's estate.  The squire was determined to shoot the crow, but the crow was too canny; whenever the squire or his men would enter the tower, the crow would fly away until the coast was clear.

The squire tried sending two men into the tower.  One stayed hidden in the tower and one came out again.  However, the crow was too smart and wouldn't return until the second man also came out.  The experiment was tried on successive days - unsuccessfully - until finally five men went in and only four came out.  The crow seemed to think that all the men had come out, and returned to the watch-tower.  The squire was finally rid of the crow.

The story seems to demonstrate that crows (or at least the crow in the story) have a sense of "one", "two", "three", and "many".  When five men went in and four came out, the crow saw "many" go in and "many" go out and thought it was safe.  Interestingly, early twentieth century anthropologists found that the numeric systems of some African, South American, Oceanic and Australian cultures were also limited like the crow's.  In the case of the Australian Aborigines, they had numbers for "one" through "six", and "many".

No one could believe that crows and men are of comparable intelligence.  It's more likely that the archetypal quality of the smaller numbers is so ancient that it predates humanity itself.  Because men are capable of "counting" ("one, two, three . . . "), we imagine that is how numbers were arrived at.  But when crows can recognize "one", "two", "three" and "many", few of us would argue they arrived at these numeric relationships by counting per se.  Instead there must be a pattern recognition, a "primordial image" (to use Jung's earlier formulation of "symbol") that corresponds to the smaller integers.  In other words, we have an innate sense of what "one" and "two" and "three" mean.

In fact, the Australian Aborigines actually limit themselves to "one" and "two", then use composites of "one" and "two" to make up numbers up to "six".  For example, "three" is "two" and "one", "six" is "two" and "two" and  "two".  They count in pairs, so that they wouldn't be likely to notice if two pins were removed from a heap of seven pins, but would instantly recognize if only one pin had been removed.  (This story is also from Dantzig).
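The pair-counting scheme Dantzig describes is simple enough to sketch in code.  (No code appears in the original essay; this is purely an editorial illustration, and the function name is mine.)

```python
def pair_count(n):
    """Express a small number as the Aborigines reportedly did: in "two"s and "one"s."""
    # n // 2 pairs, plus a leftover "one" if n is odd
    return " and ".join(["two"] * (n // 2) + ["one"] * (n % 2))

print(pair_count(3))  # two and one
print(pair_count(6))  # two and two and two
```

Counting in pairs this way makes an odd total instantly visible (the trailing "one"), which is exactly why removing a single pin from the heap is noticed while removing two is not.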

Why are the cultures we've mentioned so limited in their numeric system in comparison with the unlimited counting numbers we are all aware of?  It's because their system lacked a way to structure numbers hierarchically, so that a small number of symbols could be endlessly combined to produce ever bigger numbers.  The Australians had the right beginning idea in considering, for example, "five" as "two" and "two" and "one".  They were constructing bigger numbers out of smaller numbers.  However, in order to formally construct a true hierarchy of numbers, a new symbol is needed.  That symbol is "zero".

THE NATURE OF ZERO

Zero appeared contemporaneously with the dawn of the Christian era.  The birth of Christ marked a new phenomenon: the god/man.  Christ was wholly man, yet he contained divinity.  Man was beginning to remove his projections from the outer world.  The ego had begun its lengthy dialog with the Self.

However, few were yet willing to engage in that dialog; most were content to project the god/man exclusively onto the figure of Christ.  However, there were embryonic exceptions, such as the Gnostics in the 1st and 2nd century A.D. and the Neoplatonists in the 3rd through the 5th  century.  Neoplatonism in particular explicitly defined the graduated stages between man and God.  It culminated with Proclus, who conceived of theurgy - a ritual method of recreating the characteristics of a god - as a tool of spiritual development.

Zero evolved as a symbol over roughly the same period.  It first appeared in inchoate form in Babylon in the 3rd century B.C.  Just before the birth of Christ, the Romans evolved a concept of accumulating integers into bigger and bigger quantities, though the numbers became increasingly awkward to express as they increased in size.  However, zero didn't reach full expression until it was independently discovered by both the Mayans in Mexico and the Jains in India in approximately the fifth century A.D.  The Mayan development was to die without being passed on to other civilizations.  From India, the concept passed to China and the Arabic world by the eighth century.  It then migrated from the Arabs to the Western world in the twelfth century, though it was not to be fully accepted until the fifteenth century.  (I'm indebted to Georges Ifrah's From One to Zero for this chronology).

Why was zero so important?  As we've seen, crows seemed to have a symbolic understanding of the numbers "one" through "three" and "many", the Australian Aborigines "one" through "six" and "many".  Let's extend that just a bit, and presume that we have numbers for "one" through "ten".  Let's also make use of the ability of the Australians to combine  smaller numbers to make larger numbers.  With "one" through "ten" and the idea of combining, we can make any number under a hundred.  For example, "eighty-seven" is "eight tens" and "seven".

This is a powerful technique: ten symbols can become a hundred.  But there has to be some method for symbolizing the combination.  The Romans picked one way: the numbers "one" through "four" were symbolized by that number of "ones" (i.e., "IIII").  When the Romans arrived at "five", they picked a new symbol: "V".  Then "six" through "nine" could be combinations of "five" and "ones" (i.e., "eight" is "VIII").  When they arrived at "ten" they had to invent a new symbol ("X") , and so forth.

Every time the Romans arrived at a new hierarchical level of numbers they needed a new symbol for that level.  Unfortunately, this technique produces numbers which are very long and very hard to interpret quickly.  (As can be attested to by anyone who has ever tried to read the date of a film - which is normally given in Roman numerals - before it passes from the screen.  In Roman numerals, 1988 becomes MCMLXXXVIII).
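The Roman scheme, in its later subtractive form (which is what turns 1988 into MCMLXXXVIII), can be sketched in a few lines of Python.  This is an editorial illustration, not anything from the essay; the table of values is the standard one.

```python
# Each new hierarchical level (I, V, X, L, C, D, M) needs its own symbol.
VALUES = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
          (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
          (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    """Convert a positive integer to a (subtractive) Roman numeral."""
    out = []
    for value, symbol in VALUES:
        count, n = divmod(n, value)   # how many of this symbol fit, and what remains
        out.append(symbol * count)
    return "".join(out)

print(to_roman(1988))  # MCMLXXXVIII
```

Note how the numeral grows longer and harder to scan as the quantity grows, which is precisely the weakness the essay describes.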

There's a better way.  Begin with separate symbols for "one" through "nine" (i.e., 1, 2, 3, 4, 5, 6, 7, 8, 9).  When you arrive at "ten", don't invent a new symbol.  Instead start with "one" again; however, this time the "one" means "one ten".  But how do you know it means "one ten" and  not just "one"?  The key insight is that the position of the symbol tells whether it means "ones" or "tens".

Here's where "zero" comes in.  Put a "zero" down meaning "no more ones", then put a "one" to its left indicating a single "ten" (i.e., "10").  "Ten" becomes "10", meaning "one ten" and "zero ones".  The power in this idea of a position for "ones" and a position for "tens" is that it can be extended indefinitely.

For example, when you get to "99" (meaning "nine tens" and "nine ones") and want to count higher, put "zero" in both the "ones" position and the "tens" position and add a "1" to their left: "100".  A new number has been created.  We know it as one hundred, but it doesn't have to have a unique name; it already has a unique symbolization.  Thus a mere 10 symbols can express any number, no matter how large!  And all because of the symbol of "zero", of "nothingness expressed." We think the world is filled with matter and energy, with "stuff", but it is equally filled with "structure", and structure is dependent on emptiness, nothingness.  Until we have a symbol for nothingness, every number has to have a unique symbol.  Once we have "zero", then we can develop numbers endlessly from a small number of symbols.  Thus "zero" leads ineluctably to "infinity", which is our next topic.
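The place-value idea just described can be written out explicitly.  (Again an editorial sketch in Python; real positional numerals work the same way in any base.)

```python
DIGITS = "0123456789"

def positional(n, base=10):
    """Write n using only `base` symbols; '0' holds open an empty position."""
    if n == 0:
        return "0"
    out = ""
    while n > 0:
        n, d = divmod(n, base)   # peel off the lowest position, carry the rest
        out = DIGITS[d] + out
    return out

print(positional(100))  # 100: one hundred, zero tens, zero ones
```

Ten symbols, endlessly recombined by position, suffice for any number however large; the zeros are doing the structural work.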

THE SEARCH FOR INFINITY

As the Renaissance dawned, man turned his vision out onto the world.  He began to realize he also could create, that he also contained divinity.  God had created a world which obeyed immutable laws.  Man's task was to uncover those laws and use that knowledge to further creation.  The ego was beginning to develop, spurred on by the Self.

Mathematics flourished.  Descartes discovered "analytic geometry", which was able to reduce geometry to numbers.  Soon afterward a new method of proof, "mathematical induction" (usually credited to Pascal), was formalized, and it depended on an infinite process.  For the first time, infinity had appeared as an explicit part of mathematics.  A little later, Newton and Leibniz independently discovered "calculus".  Calculus introduced the concept of a "limit" to an infinite process.  Infinity was becoming something quite natural.  Correspondingly, man's ego was growing to quite inflated proportions.  He began to feel that he could dispense with divinity.  The ego was trying the impossible task of swallowing the Self.

This state of mind reached its peak just as the nineteenth century gave way to the twentieth.  Bertrand Russell and Alfred North Whitehead confidently expected to reduce all knowledge to logic, all logic to mathematics, all mathematics to a small number of axioms and operations.  Physicists hoped to add Clerk Maxwell's laws of electrodynamics to Newton's laws of gravitation and motion and explain all of reality.  Social idealists believed that the already accomplished transition of the major nations from feudal monarchies to democratic republics heralded the coming of a social utopia.  And psychologists, led by personalities as disparate as William James and Wilhelm Wundt, were equally confident they could reduce psychic reality to associations of sense perceptions under the control of the will.  Hubris reigned as the ego tried to identify itself with the Self.  Jung perfectly captured this state:

If the ego is dissolved in identification with the Self, it gives rise to a sort of nebulous superman with a puffed-up ego and a deflated Self.

(CW8:430).

In mathematics, this attempt to identify with the Self took the form of attempts to prove mathematics to be both complete and consistent; that is, mathematics would need nothing outside of itself.  By the nineteenth century, it was clear that an explicit symbolic expression for the infinite was needed as a necessary step toward this goal.  Earlier mathematicians had considered infinity little more than a trick of language.  As late as 1831, mathematician Carl Friedrich Gauss (arguably the greatest mathematician of all time) railed against infinity:

I protest against the use of infinite magnitude as something completed, which is never permissible in mathematics.  Infinity is merely a way of speaking.

(reported in Bell, 1937:556).

THE PIGEON HOLE TECHNIQUE

But infinity could not be dismissed as "merely a way of speaking", even if as great a mathematician as Gauss said so.  A half-century later, mathematician Georg Cantor argued that:

the uncritical rejection of the legitimate actual infinite is no lesser a violation of the nature of things.

(ibid:557).

It was with the "legitimate actual infinite" that Cantor was concerned.  For example, there is clearly no end to the integers; if a largest integer could be conceived, merely add one to it and it is no longer the largest.  There is also clearly no end to the points on a line provided that a point is understood to be without dimension.  On the surface, it seems that all one can safely assert is that there is an infinite number of positive integers, an infinite number of points on a line.  But Cantor asked if these infinities were the same size.  Lest this sound like the medieval scholastic arguments about the number of angels who could stand on the head of a pin, the reader should be aware that Cantor found a way of quantifying infinity, and thus answering his own question.

Remember the story of the crow "counting" the men with guns who went into the tower, then "counting" the number who came out.  The crow wanted to be sure that every man who went in also came out.  He had the right idea: counting is essentially matching one group of things to another (in this case, incoming to outgoing men).  He only failed because he had no concept for numbers bigger than three.

Let's say that I have a box filled with oranges and a bag filled with apples.  How do I know that I have the same number of each?  The surest way is to take an orange out of the box and an apple out of the bag and place them aside as a pair.  Continue doing this.  If eventually every apple is paired with an orange, and there are no apples or oranges left over, I can safely say that I have the same number of each.  Note it makes absolutely no difference which apple pairs with which orange as long as each apple pairs with one and only one orange.

This method of counting by pairing one group of things with another is usually called the "pigeon-hole" technique.  Cantor found that the pigeon-hole technique produced surprising results with infinite groups.  For example, it would seem that there must be only half as many even integers (i.e., 2, 4, 6, 8, . . . ) as there are all integers (1, 2, 3, 4, . . . ).  But Cantor insisted that there were exactly the same number of each!

Try using the pigeon-hole technique to prove this.  We'll match every integer with a unique even integer.  Let's take an imaginary box and fill it with integers, just as we filled it with oranges before.  Instead of apples in our bag, we'll put even numbers.  Now pull an integer out of the box, let's say it's "5".  Hunt around in the bag of even numbers until you find "10".  Set them aside as a pair.  Take another integer from the integer box: "12".  Find "24" in the even number bag and set them aside.  Continue doing this.

Each time you take an integer out of the integer box, take an even number that is twice as big out of the even number bag.  Since every integer is paired up with an even integer, and there are no integers left over, there are exactly as many integers as there are even integers.

Right about now, you're probably objecting that there will be integers left over in the box because you're only reducing the number of integers in the box half as fast as you're reducing the number of even numbers in the bag.  But that doesn't make any difference since there's an infinite number of each.  In order for there to be more integers than even numbers, we'd have to name an integer that is left over after this counting process is through.  But we can't.  Say we insist that the number "one billion trillion quadrillion" would be left over.  Cantor could just double it and say that it paired up with the number "two billion trillion quadrillion".
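The pairing rule is simple enough to write down explicitly.  (A Python sketch added editorially; `10**36` is one reading of "a billion trillion quadrillion".)

```python
def to_even(n):
    """Pair the integer n with the even integer 2n."""
    return 2 * n

def from_even(m):
    """Recover the unique integer paired with the even integer m."""
    return m // 2

big = 10**36  # one reading of "a billion trillion quadrillion"
assert to_even(big) == 2 * big           # even a huge integer has an even partner...
assert from_even(to_even(big)) == big    # ...and the pairing is reversible
```

Because every integer has exactly one even partner and every even number is claimed by exactly one integer, neither collection can be "bigger" than the other - which is Cantor's point.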

Cantor could use the same logic to show that the set of numbers evenly divisible by three was also just as big as the set of all integers.  In fact, the particular multiple made no difference; the set of numbers divisible by a "billion trillion quadrillion" is just as big as the set of all integers.

Going in the opposite direction produced the same result.  The integers are a sub-set of the "rational" numbers (what non-mathematicians normally refer to as fractions).  Once again Cantor used the pigeon-hole technique to prove that there were exactly as many rational numbers (i.e. fractions) as integers.  This was a very surprising result indeed!  In fact, Cantor proved that any infinite sub-set of the set of rational numbers contains exactly the same number of members.  He even gave that number a name: "aleph-null".  This was the first "transfinite" number (i.e., transcending the finite). Finally infinity had a symbol, just as "zero" had before it.
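One standard way to exhibit Cantor's pairing of the integers with the fractions is to walk the grid of numerators and denominators diagonal by diagonal.  (An editorial Python sketch; the generator name is mine.)

```python
from fractions import Fraction

def rationals():
    """Yield every positive fraction exactly once, diagonal by diagonal."""
    seen = set()
    total = 2                      # numerator + denominator on the current diagonal
    while True:
        for p in range(1, total):
            f = Fraction(p, total - p)
            if f not in seen:      # skip duplicates such as 2/4 = 1/2
                seen.add(f)
                yield f
        total += 1

gen = rationals()
print([next(gen) for _ in range(6)])  # pairs 1, 2, 3, ... with distinct fractions
```

Every fraction sits on some finite diagonal, so each one is eventually reached and receives an integer label: exactly the pigeon-hole pairing the text describes.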

It would seem that for all of his strange results, Cantor had just proved what common sense would have insisted all along: "infinity is infinity is infinity and that's all you can say about it."  Maybe Gauss was right when he said that "Infinity is merely a way of speaking."  Well, Cantor still had some surprises in store: he found that there are more flavors than vanilla in the world of the infinite.  Transfinite numbers were not limited to aleph-null.

HOW BIG IS INFINITY?

We've talked about integers and rational numbers.  Now let's discuss "real" numbers.  A real number is any number that can be expressed as a decimal.  All rational numbers can be expressed that way; for example, 1/2 = .5, 1/3 = .3333 . . . (" . . ." just means that the 3's go on indefinitely).

However, there are other real numbers which can be expressed as a decimal, but cannot be expressed as a fraction.  For example, "pi" cannot be expressed as a fraction.  Similarly, the square root of 2, which is the length of the diagonal of a square one unit on a side, cannot be expressed as a fraction.  But surely there must only be a few such strange numbers.  Appearances can be deceiving.  Cantor used a clever twist on the pigeon-hole technique which had already served him so well - his famous "diagonal argument" - to prove that the infinity of real numbers is bigger than the infinity of rational numbers.  He called this new transfinite number aleph-one.  (Strictly speaking, whether the size of the real numbers equals aleph-one, the next transfinite number after aleph-null, is the famous "continuum hypothesis"; Cantor believed it does.)
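The heart of Cantor's proof is his "diagonal argument": given any claimed complete list of decimals, build a new decimal that differs from the nth entry in its nth digit, so it cannot appear anywhere on the list.  A finite Python sketch (added editorially; the digit strings are arbitrary examples):

```python
def diagonal_escape(rows):
    """Return a digit string differing from rows[n] at position n, for every n."""
    # change each diagonal digit to something else (wrapping 9 back to 0)
    return "".join(str((int(row[n]) + 1) % 10) for n, row in enumerate(rows))

rows = ["141592", "718281", "414213", "302585", "577215", "693147"]
escaped = diagonal_escape(rows)
# the new string disagrees with every row at that row's own index
assert all(escaped[n] != rows[n][n] for n in range(len(rows)))
```

With infinitely many rows the same rule yields a real number missing from any proposed list, so no pigeon-hole pairing of the reals with the integers can ever succeed.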

How many transfinite numbers are there?  That is, how many sizes of infinity exist?  Cantor proved that this number is also infinite; i.e., there are an infinite number of infinities.  Thus there is no end to infinity, even though Cantor provided a method of quantifying it.

GÖDEL NUMBERS

The key to Cantor's discoveries is his strange self-referential technique.  Self-referential means simply that something refers to itself.  In this case, the integers refer to a subset of the integers.  As you will recall, this produced the surprising result that there were exactly as many even integers as there were integers.  Though the logic was impeccable, there is something both mysterious and disturbing about this result.

As the twentieth century dawned, self-reference began to infiltrate every field of man's thought and work.  Probably the best-known instance is physicist Werner Heisenberg's discovery that it is impossible to measure both the position and the momentum of a sub-atomic particle with arbitrary precision at the same time.  The act of observing the particle's position changes its momentum and vice-versa.

But this was hardly an isolated instance.  Artists like Kandinsky consciously considered the implications of the fact that a painting was only pigment on canvas and could never mirror outer reality.  They began to see a painting as a thing in itself.  Musicians discarded tonality and harmony in favor of abstract systems like the twelve-tone music of Schoenberg.  In every field of art and science, men were struggling with the self-referential quality of man's relationship to nature.  The ego, unable to integrate the Self, was now confronted with the paradox of their self-referential relationship.

In the early 1930's, mathematician Kurt Gödel used a brilliant self-referential technique to cut the floor out of logic itself.  I have previously summarized Gödel's method this way (see "Gödel and Jung: the Twilight of Rational Consciousness" in the Fall 1987 Psychological Perspectives):

Gödel took every "sign" that mathematicians and logicians use in discussing arithmetic, i.e., their "language", and converted each sign into a unique number: a "Gödel number".  Since a mathematical equation is nothing but a sequence of numbers and signs, Gödel could convert an equation into a sequence of Gödel numbers.

He then demonstrated how to convert the sequence of Gödel numbers, which represented the sequence of signs which made up an equation, into a unique Gödel number.  Finally, a mathematical proof is nothing but a sequence of mathematical statements (i.e., equations).  Gödel proceeded to reduce the sequence of Gödel numbers, of the statements that made up the proof, into a single, unique Gödel number. One number thus stood for a whole mathematical proof!

This remarkable technique enabled Gödel to make numbers talk about numbers.  Using this technique, Gödel produced a result far stranger than Cantor's.  Gödel constructed a mathematical statement that talked about itself.  That is, the statement discussed a Gödel number.  And the Gödel number of the statement was that same Gödel number.  Further, the statement said in effect that it could not be proved.

Therefore, if the statement was true, logic itself was limited since a true statement existed which could not be proved within logic.  On the other hand, if the statement was false, then that implied that it could be proved.  If a false statement could be proved within logic, that meant that logic was illogical.  The mind spins at the possibilities.  And all because Gödel had revealed that not even number itself was immune to the paradoxes of self-reference.
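A toy version of the numbering step can be built on the unique factorization of integers into primes: encode a string of signs as a product of successive primes raised to the signs' code numbers.  (An editorial Python sketch; the sign codes here are arbitrary, not Gödel's actual assignments.)

```python
# each sign of a tiny formal language gets a code number
SIGNS = {"0": 1, "s": 2, "+": 3, "*": 4, "=": 5, "(": 6, ")": 7}

def primes():
    """Yield 2, 3, 5, 7, ... indefinitely (trial division; fine for a toy)."""
    found = []
    n = 2
    while True:
        if all(n % p != 0 for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula):
    """Encode a string of signs as p1^c1 * p2^c2 * ...; factorization makes it unique."""
    g = 1
    gen = primes()
    for sign in formula:
        g *= next(gen) ** SIGNS[sign]
    return g

print(godel_number("0=0"))  # 2**1 * 3**5 * 5**1 = 2430
```

Because prime factorization is unique, the original sequence of signs can always be recovered from the single number, so one integer really can stand for a whole formula - or, iterating the trick, for a whole proof.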

CHAOS THEORY

Most recently, self-reference has intruded into science in the form of chaos theory.  As opposed to most of science, which continues to bifurcate into ever more narrow specialities, chaos theory cuts across virtually all fields  of science.  Key contributions to chaos theory have come from fields as varied as mathematics, fluid dynamics, meteorology, economics, etc.  Chaos theory is discussed a great deal in other parts of this issue, but let me just consider a few points here.

At core, chaos theory considers self-referential or feedback situations.  In such situations, as in life, results affect future results.  And when that happens, ultimate results can be very strange indeed.  In situation after situation, scientists found that some system - say economic trends - might start off behaving predictably, then suddenly diverge in seemingly erratic behavior.

When they looked at that chaotic behavior more closely, an order seemed to underlie chaos itself.  And that order was also self-referential.  That is, if a little piece of a total system was vastly expanded, it began to look like the whole system.  If a still tinier piece of that little piece was expanded, it in turn began to look like the whole system.  No matter how deep you went, the system kept repeating itself.
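The classic toy example of such feedback is the logistic map, where each result is fed back in as the next input.  (An editorial Python sketch; the parameter values are the standard textbook ones.)

```python
def orbit(r, x0, steps):
    """Iterate the feedback rule x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# orderly regime: at r = 2.0 the system settles predictably onto a fixed point
print(orbit(2.0, 0.3, 100)[-1])  # converges to 0.5

# chaotic regime: at r = 4.0, two starts differing by one part in a billion diverge
a = orbit(4.0, 0.300000000, 50)
b = orbit(4.0, 0.300000001, 50)
print(max(abs(x - y) for x, y in zip(a, b)))  # the tiny gap grows enormously
```

The same rule produces both predictable and erratic behavior depending on one parameter - the signature of the systems chaos theory studies.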

Consider a simple self-referential system we encounter in life.  If you're nice to me, I'm more likely to like you and be nice to you in turn.  That makes you like me still more, and so forth.  That's straightforward and predictable.  Let's make the situation just a bit more complex.  Let's suppose that I'm mistrustful of others and presume that if they're being nice to me, they're really  setting me up in order to do something awful.  Then if you're nice to me, I act suspiciously.  That makes you feel that I don't like you and you stop being so nice.  I might then take that as proof you weren't being honest and act still colder toward you.  Or I might now feel more comfortable since I didn't have to deal with your proffered friendship.  Then I might be willing to behave nicer in response to your increased indifference.  And so it goes in the real world.

Life is made up of such complex situations, the sort of situations that science and mathematics traditionally ignore.  Science prefers to deal with situations where variables are independent and the state of a system can be calculated exactly from those variables.  But that really doesn't cover much of the world.  Most of the world involves feedback and self-reference.

When Cantor developed his theory of transfinite numbers late in the nineteenth century, the strange results he obtained by feeding numbers back upon themselves seemed anomalous, cut off from the rest of mathematics.  Then, in the early 1930's, Gödel proved that such anomalies lay hidden in the nature of number itself.  But Gödel's results seemed to have little to do with the normal run of mathematics and science, much less with the normal lives we all live.  However, by the time chaos theory started to emerge in the late 1960's, self-reference had begun to  intrude into normal science.  It shouldn't take much longer before it's a part of normal life.

It is important to realize that no new numeric symbol has yet emerged for this new self-referential quality.  Gödel's work brought an end to the way mathematicians had previously approached number, but it didn't yet offer a new alternative.  Chaos theory is still in an inchoate stage, not yet ready to give birth to a new symbol, if in fact it is from chaos theory that the new symbol will emerge.

EVOLUTION OF THE NUMBER/SELF ARCHETYPE

We've traced Jung's archetype of order--number--from the natural numbers, to zero, to infinity, to self-reference.  As number evolved, so did the relationship between the ego and the Self.  Natural numbers coincided with the period of participation mystique, when man could only experience himself through his projections on nature.  Zero appeared at the birth of the Christian era, as the ego began to emerge as a thing in itself.  As zero gradually led to the explicit formulation of infinity, the ego tried unsuccessfully to swallow the Self.  Infinity gave way to self-reference in the twentieth century, as order began to crumble into chaos everywhere around us.  The ego, unable to swallow the Self, gave way to despair.  God was dead; chaos reigned.

But even chaos was revealed as possessing a self-referential order.  The relationship between ego and Self is not one in which ego rules Self or Self rules ego.  Rather, the Self is at one and the same time both the goal of the ego and the process by which the ego attains that goal.  The ego is both the Self's expression in the world of limits and the process of evolution of the Self.

Sensing the Self as something irrational, as an indefinable existent, to which the ego is neither opposed nor subjected, but merely attached, and about which it revolves much as the earth revolves around the sun--thus we come to the goal of individuation.

(CW7:405).

Man does not change at death into his immortal part, but is mortal and immortal even in life, being both ego and Self.

(CW5:596n).

Number, as the archetype of order, is in the process of finding a new symbol with which to clothe itself.   A new order is trying to emerge from chaos.  When that happens, it will correspond to a new vision of the Self, as it did when natural numbers ruled, when zero emerged, when infinity found concrete expression.

BOOKS CITED

Bell, E. T. (1937). Men of Mathematics. New York: Simon and Schuster.

Dantzig, Tobias (1954). Number and the Language of Science. New York: Macmillan Company.

Ifrah, Georges (1985). From One to Zero: A Universal History of Numbers. New York: Viking Penguin.

Jung, C. G. (1953) Collected Works, Vol. 7. Princeton: Bollingen Series, Princeton University Press.

Jung, C. G. (1956) Collected Works, Vol. 5. Princeton: Bollingen Series, Princeton University Press.

Jung, C. G. (1964) Collected Works, Vol. 10. Princeton: Bollingen Series, Princeton University Press.

Jung, C. G. (1969) Collected Works, Vol. 8. Princeton: Bollingen Series, Princeton University Press.

Jung, C. G. (1975) Letters, Vol. 2. Princeton: Princeton University Press.

Robertson, Robin (1987) "Gödel and Jung", Psychological Perspectives, Fall 1987.

 

