
From the quest for certainty to the quest for wholeness

John Heron, unpublished paper, 1962

Introduction

Around 1927 intellectual events of the greatest interest occurred. In physics Heisenberg framed his uncertainty principle; in mathematics a paper by von Neumann foreshadowed results later made fully explicit by Gödel in his celebrated theorem of 1931. Physics is one of the positive sciences of nature that has achieved the highest degree of systematization: it has a great range of coherent, established and reliable empirical knowledge. Mathematics is the formal science par excellence with the most complete range of coherent rational knowledge so far established by any activity of the human intellect; while it is closely employed in determining and expressing much of the data of physics. In mathematics and physics the intellect had, up to 1927, exercised itself in two fields that seemed to contain within them the promise of an absolute, self-consistent certainty of intellectual knowledge, to be attained by the high development of exclusively rational powers of analysis and synthesis, exercised in relation to number theory and precise and controlled observation. Certain special problems had of course been emerging since the beginning of the century in physics with Planck’s inauguration of quantum theory, and in mathematics with the problem of consistency raised by Burali-Forti’s and Russell’s paradoxes. But only in the uncertainty principle of 1927 and Gödel’s theorem of 1931 was the demise of the promise of a purely rational certainty made fully explicit in physics and mathematics respectively.

Physics: the goal of total determinism

The goal of an absolute knowledge for physics was first expressed in its crudest and most uncompromising form by Laplace, in the late 18th century, as the doctrine of total determinism. This was an idea of causal determinism such that “an intelligence which at a given moment knew all the forces that animate nature, and the respective positions of the beings that compose it, and further possessing the scope to analyze these data could condense into a single formula the movement of the greatest bodies of the universe and of the least atom; for such an intelligence nothing could be uncertain, and past and future alike would be before its eyes” (Laplace, Essay on the Calculus of Probabilities). Here the notion of causality – that there is some law behind everything – becomes the extreme doctrine of an immanent mathematical law such that if one could state for a given instant of time the mathematical relation of functional dependence between the variables (such as mass, position, velocity) that affect all particles in the universe at that time, then from such a formula all past events could be derived and all future events be predicted.
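
Laplace’s doctrine can be set out compactly in modern notation (an illustrative formalization, not Laplace’s own symbolism). For N particles with masses m_i, Newton’s second law gives

```latex
% Laplacean determinism, stated schematically
\ddot{\mathbf{r}}_i \;=\; \frac{1}{m_i}\,\mathbf{F}_i\!\left(\mathbf{r}_1,\dots,\mathbf{r}_N\right),
\qquad i = 1,\dots,N ,
```

and, given the positions and velocities of every particle at one instant, the solution of these equations is unique for all earlier and later times. That uniqueness is the entire content of “determine” in the sense used above.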

This great deterministic myth, or ideal of measurement, of Laplace, which lasted 150 years, was based on his great success in applying the principles of Newtonian mechanics to the motion of the solar system. The idea of an immanent causal determination of this kind was not present in Newton’s mind: there were some irregularities in the motions of parts of the solar system that he could not explain, in particular certain anomalies in the motions of Jupiter and Saturn. Mathematicians and astronomers of his time could still ask whether the solar system was stable; and Newton himself considered that divine intervention might be necessary from time to time to correct minor deviations and irregularities, thus putting the solar system back in order and securing it from destruction by the collision or scattering of its members.

Laplace, however, on the sole basis of Newton’s three laws of motion and the inverse square law, was able to account for all the irregularities that had troubled Newton, and to show that despite the fluctuations of the planets’ motions the solar system will retain its stability and structure (the total planetary invariability of secular inequalities was established as a general law by Lagrange and Laplace, 1773-84). Thus divine intervention appeared to be unnecessary, and the universe appeared in the guise of a self-sufficient mechanism. The old notion of transcendent law, the imposed command of deity, still in some measure present in Newton’s mind, was banished in favour of immanent mathematical law. From the fact that in the solar system if we know at a given moment the positions and velocities of all the planets in relation to the sun, we can calculate their motions and positions at an earlier or a later date, came the idea of causality in its modern, pre-1927 form, which is that the laws of physics determine the future of a physical system completely, given all the relevant information at a point of time (thus “determine” means renders calculable on the basis of mathematically stated laws).

Laplace grandiosely suggested that such an idea might be applied to everything in the universe, to the universe as a whole. La Mettrie carried the idea through and applied it to the concept of man as a machine: all atoms whether in an organic or inorganic setting obey mechanical laws. This broad mathematical mythology became the common background to much scientific thought: Ernst Mach stated that the great majority of scientists at the end of the nineteenth century tended to think in this deterministic manner. Generally, we may say that the mythical goal for physics was that of absolute certainty of prediction in terms of mathematically expressed laws. The intellect alone could probe, measure, analyze and compute, and attain absolute powers of quantitative description.

Mathematics: the goal of an absolute and finitistic proof of consistency

In mathematics the goal was formally analogous to that in physics. In physics, ideal rational certainty was to be found in total prediction arising from a thorough understanding of the quantitative interrelations between ultimate particles. In mathematics, it was to be found in a demonstration of consistency arising out of a thorough understanding of the formal interrelations between the ultimate elements of a deductive system from which arithmetic in its entirety could be derived. This goal for rational certainty in mathematics was first explicitly formulated by the German mathematician David Hilbert, from the turn of the century onwards, as the programme of finding an “absolute” and “finitistic” proof of the consistency of arithmetic. To show why this programme had become necessary we must again make a brief historical review.

For centuries geometry had represented the ideal of certain and consistent knowledge. The Greeks had developed it systematically by means of the axiomatic method, which consists of accepting without proof certain axioms or postulates (e.g. through two points just one straight line can be drawn) and then deriving from them all other propositions of the system as theorems. The axioms of Euclidean geometry were considered to be self-evidently true of physical space or objects in space; and since true statements could not be logically incompatible, the whole Euclidean system was considered to have a built-in and guaranteed consistency. But the discovery of non-Euclidean geometries after 1825 marked a certain turning point in the history of mathematics. These had axioms different from those of Euclid; in particular, the parallel postulate was replaced.

This led to two developments: firstly, the gradually increasing tendency to make explicit the axioms of other branches of mathematics – up to the later nineteenth century only geometry had a fully explicit axiomatic basis; secondly, and relatedly, a growing tendency to inquire about the logical consistency of the axiom systems so developed. This second issue arose partly out of the nature of the non-Euclidean axiom systems: the axioms were not in certain cases obviously true of physical space. How, then, did one determine their consistency, that is, ensure that they might not at some point produce mutually contradictory theorems? The problem of consistency for mathematics in general was heightened by the discovery of a paradox by Burali-Forti (1897) which indicated that Cantor’s arithmetic of cardinal and ordinal numbers was not consistent, that is, two contradictory theorems could be derived from it. And the problem of consistency for cardinal arithmetic was again raised by a paradox concerning the class of all classes that are not members of themselves (Russell, 1901). Meanwhile, even the supposed self-evidence of the axioms of Euclid was considered to rest on an inductive argument whose credentials were dubious.
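
Russell’s paradox itself can be stated in two lines of modern notation:

```latex
% Russell's paradox
R \;=\; \{\, x \,:\, x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \;\Longleftrightarrow\; R \notin R .
```

Whichever answer is given contradicts itself, so that the unrestricted formation of classes yields two mutually contradictory theorems.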

Thus in the nineteenth century, though mathematics took great strides forward, expanded in many new directions, became increasingly emancipated in its methods, explicit and rigorous in its techniques, yet at the same time serious doubts were raised as to the guarantees of its absolute consistency. Did the various mathematical systems harbour hidden internal contradictions? Was mathematics, in fact, an absolutely certain science? And if so how could this be demonstrated?

The first approach to consistency was the relativistic one. Euclidean and non-Euclidean geometries were shown by Hilbert (1899, 1903) to be consistent by recourse to arithmetic, that is, by recasting them in arithmetical terms; in other words, their consistency was shown to depend on or be relative to the consistency of arithmetic. Hilbert’s final programme, in the 1920s, was therefore to obtain a consistency demonstration of simple arithmetic which would be “absolute” – that is, would not be relative to the consistency of some other system, and “finitistic” – that is, would involve analysis of only a finite number of structural features of a fully explicit arithmetical calculus.

In the early years of this century, then, the rational ideal of causal determinism lay behind the investigation of ultimate material particles and the rational ideal of an absolute proof of consistency lay behind the investigation of the axiomatic elements of arithmetic, in physics and mathematics respectively. But the intellect, through the great sharpening of its power and precision, unexpectedly achieved, not a demonstration of its ideals, but a demonstration of the impossibility of attaining them. To these developments and their broad philosophical implications we must now turn.

Physics: the uncertainty principle and the breakdown of determinism

Heisenberg’s uncertainty principle, arising out of the concepts of quantum theory and wave mechanics, states that it is impossible to determine exactly both the position of an object and its momentum, or any quantity related to its momentum such as velocity or energy. Since the expected error is of the order of 10⁻²⁷ in the C.G.S. system, the effect is negligible for bodies of normal mass, but it is pronounced with electrons, negatively charged atomic particles. For such a particle there is and can be no physical law in which reference is made to its exact position and momentum. The product of uncertainty as to position and uncertainty as to momentum is of the order of Planck’s constant (6.625 × 10⁻²⁷ erg sec): the two uncertainties cannot dwindle to nothing and the ideal of absolute precision of measurement is unattainable. As one approaches zero, the other approaches infinity; so that precise information about the one implies total ignorance about the other. There is between them a relation of indeterminacy. The uncertainty relation as defined in terms of Planck’s constant sets a limit to the accuracy of measurement. The limitation is inherent in the mathematics of the situation and is reflected in experimental observations: to locate an electron means using short wave-length radiation whose quanta of energy will change the electron’s momentum in the act of locating it, while any determination of its momentum means a disturbance of its location.
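
The orders of magnitude involved can be sketched numerically. The little calculation below uses the rough relation Δx·Δp ≈ h quoted above (the exact bound is h/4π) and the C.G.S. values given in the text; it is an illustration in modern programming notation, not a precise treatment.

```python
# Rough orders of magnitude for the uncertainty relation dx * dp ~ h,
# in C.G.S. units as quoted in the text (the exact bound is h / 4*pi).
h = 6.625e-27           # Planck's constant, erg s
m_electron = 9.11e-28   # electron mass, g

def min_velocity_uncertainty(mass_g, dx_cm):
    """Momentum uncertainty h/dx, expressed as a velocity in cm/s."""
    return h / (dx_cm * mass_g)

# An electron located to atomic dimensions (~1e-8 cm): enormous spread.
print(min_velocity_uncertainty(m_electron, 1e-8))   # ~7e8 cm/s
# A 1 g grain located to 1e-4 cm: utterly negligible spread.
print(min_velocity_uncertainty(1.0, 1e-4))          # ~7e-23 cm/s
```

The disparity between the two figures is the sense in which the effect is negligible for bodies of normal mass but pronounced for electrons.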

The uncertainty principle, then, states that the causal laws of physics cannot be applied to atomic events taking place below the limit set by the quantum constant, since atomic events below this limit are not observable, in principle and in fact. Does this mean that below this limit there are no causal laws? This is the Copenhagen interpretation (Bohr and Heisenberg) of the uncertainty principle. And it is supported by a theorem of von Neumann to the effect that any modification of quantum mechanics in its present shape so as to render it deterministic would make the theory self-contradictory; and the theory as it stands has so far proved potent in rendering coherent a wide range of phenomena. This interpretation of the principle takes the view that beyond an explicitly stated limit of measurement, randomness and chance prevail at the sub-atomic level.

Clearly this has a bearing on the Laplacian programme for total determinism. The intellectual quest for certainty in physics was guided for 150 years by Laplace’s theory that, in principle, the notion of chance could be totally ruled out of physical happenings; that, in principle, all events no matter how minute the scale could be reduced to mathematical laws of predictable regularity. But the uncertainty principle involves a renunciation of this ideal of a total causal description of ultimate mechanical processes. It is now seen to be impossible, in the nature of the case, to know the state of an atom with such accuracy that its subsequent fate would be completely predictable. Quantum mechanics is essentially different from Newtonian mechanics: it deals with probabilities, not with precise prediction.

It is important to examine in more detail what is implicit in the old ideal of thoroughgoing causal determinism. For there seems to be latent within it an unacknowledged element of primitive animism, linked with the acknowledged mathematical notion of causal laws, introduced by Galileo, as functional relations between measurable variables. This is the concept of immanent law explicitly inaugurated by Laplace to exclude any need for the invocation of transcendent law imposed by the fiat and command of deity (to Napoleon’s comment that his book about the universe made no mention of its Creator, Laplace replied: “Je n’avais pas besoin de cette hypothèse-là”, “I had no need of that hypothesis”). But it is secretly welded to a vague animism, and its success in routing the need for imposed laws rests on the assumption, to put it crudely, that there is a kind of animistic mathematical formula lodged in the submicroscopic heart of matter whence all its subsequent mechanical motions are determined. The alternative to law imposed from above becomes the idea of precise causal law built in to the ultimate particles of matter and driving the whole universal mechanism. It is this reaction against the authoritarian mediaeval concept of divinely imposed fiats that has by its implicit assumptions guided physicists into the realm of ultimate particles. But having arrived, the uncertainty principle shows, on one of its interpretations, that no causal laws are operative where the assumptions which guided the search required that they should be found. There is no totally deterministic causal principle of mechanical action that can, as it were, be dug out of the heart of matter. Events below the limit of the quantum constant fall outside the scope of causal laws; and the laws of quantum physics themselves are laws of probability, are statistical in nature. There is a discontinuity between the principles of celestial mechanics and the principles of quantum mechanics. The causal concept of Laplace collapses at the spot where it should be enshrined.

Statistical average vs. form

But if there is a play of randomness and chance among sub-atomic particles, how do we account for the regularity and relatively reliable causal order evident at the macroscopic level? Is it simply that out of sub-atomic randomness, macroscopic order arises purely on the basis of statistical theory? It may be sufficient to say, when considering only the behaviour of, say, a million electrons, that the laws of chance combined with the quantum laws of probability as expressed in Schrödinger’s wave equations, allow predictions with all the accuracy we normally require. But it seems highly improbable that such sparse concepts can account for the whole highly articulate order of nature. What we must protest against, therefore, is the notion now being put forward (cf. F. Waismann, Turning Points in Physics, p.140) that what appears at the level of normal human perception and measurement is simply the result of a statistical knockabout of elementary particles whose random irregularities become somehow smoothed out into a regular picture because of the enormous numbers involved. “Certainly, what God meant, he did. When he said, Let there be light, there was light and not a mere imitation or a statistical average. Thus the statistical notion, though it may explain some facts of our confused perception, is not applicable to the ultimate, imposed laws.” (Whitehead, Adventures of Ideas, p.118.)

What the uncertainty principle rather suggests is that the old notion of a mathematical determining principle built into the heart of matter and running it from the microscopic to the macroscopic scale, can be replaced by the broad concept of a causal order organizing matter from other levels of being. This causal order becomes manifest at the macroscopic level but allows of a free play at the sub-atomic and atomic level. The randomness of ultimate particles is constrained, as it were, within the limits set by causal laws that organize macroscopic events according to certain formal principles of regularity. Causal laws, on this hypothesis, represent forces working from outside matter into it, and their penetration falls short of subatomic events by an amount set by the quantum constant. Order does not just emerge out of chaos by pure chance; order is imposed on chaos in accordance with principles that allow of a chance play within certain limits.

Further, we must distinguish between laws of process and laws of structure. The electron may move about randomly, so to speak, below the limit set by the quantum constant, yet above that limit its probable position is expressed by the Schrödinger wave equations. Now in a sense these equations reflect probabilistic laws of process, they give a statistical view of processes going on in the atom. Yet in another sense they are related to quite clear laws of the structure of the atom, for they reflect the organization of the atom into specific electron shells where the number of orbitals per shell is the square of the shell number, and so on. They suggest, then, that there are archetypal laws of form and structure that rule over or work into the atom and so determine the limits within which the probabilistic sub-atomic processes occur; while wider laws of macroscopic structure constrain atomic events within processes that reveal a determinate causal order. Certainly the current mathematical account of the atom is a very transcendent conception: it is a partial differential equation in an abstract multi-dimensional space.
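
The shell arithmetic just mentioned can be made explicit. A minimal sketch, using the standard figures (n² orbitals in shell n, and, with two electrons per orbital, at most 2n² electrons):

```python
# Structural law of the atom referred to above: shell n contains n**2
# orbitals and accommodates at most 2 * n**2 electrons.
for n in range(1, 5):
    orbitals = n ** 2
    capacity = 2 * orbitals
    print(f"shell {n}: {orbitals} orbital(s), up to {capacity} electrons")
# shell 1: 1 orbital(s), up to 2 electrons
# shell 2: 4 orbital(s), up to 8 electrons, and so on.
```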

A certain minimal randomness also occurs at the atomic and molecular as well as at the sub-atomic level. The atoms and molecules of any substance, according to the well-substantiated kinetic theory of matter, are in a state of haphazard agitation. The Brownian movement of a large particle (e.g. a pollen grain) suspended in a fluid was first observed in 1827. Its final explanation, in terms of haphazard bombardment by the molecules of the fluid, was secured by Perrin’s experiments about 1910. We assume, then, that the play of chance at the sub-atomic level is cumulative so that there is a minute random motion of atoms and molecules.
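
A minimal sketch of this haphazard agitation, modelling the suspended particle as a one-dimensional random walk: the mean squared displacement grows in proportion to the number of steps, which is the statistical signature (Einstein, 1905) that Perrin’s experiments confirmed. The numbers are illustrative only.

```python
import random

# Brownian motion as a random walk: mean squared displacement grows
# linearly with the number of steps (Einstein's relation, confirmed
# experimentally by Perrin).
def mean_sq_displacement(steps, trials=2000):
    total = 0.0
    for _ in range(trials):
        x = sum(random.choice((-1, 1)) for _ in range(steps))
        total += x * x
    return total / trials

for steps in (100, 200, 400):
    print(steps, round(mean_sq_displacement(steps), 1))
# The three printed values cluster near 100, 200 and 400 respectively.
```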

Entropy essentially measures the degree of disorder in a physical system. Now entropy or disorder must increase in random processes. Why then does not nature as a whole increasingly resolve itself into total disorder and chaos, move to an entropy death? This is the essential problem which arises once it has been established that a deterministic causal order does not prevail among ultimate material particles. The problem comes into clearest focus in the case of living organisms. These exhibit a high degree of orderliness; yet to live and grow they must metabolize, and the processes of metabolism “raise the total entropy to a larger degree than corresponds to the growth of orderliness they achieve” (K. Mendelssohn, Turning Points in Physics, p.52). Yet the random molecular agitation in the inorganic realm combined with metabolic increase of entropy in the organic realm do not lead to increasing disorder, but are constrained within the limits of highly articulate principles of structure.
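
Boltzmann’s relation makes the link between entropy and disorder precise:

```latex
% Boltzmann's relation between entropy and disorder
S \;=\; k \ln W ,
```

where W is the number of microscopic arrangements compatible with the macroscopic state and k is Boltzmann’s constant. Random processes multiply the number of accessible arrangements and so raise S; the problem stated above is why this multiplication does not engulf the observed order.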

To deal specifically with this problem, we may elaborate our hypothesis as follows. Laws of structure, form, orderliness are imposed on the random play of atoms and molecules through the intermediary of a paraphysical, non-material substance, the ether. In the ether forces interact, under the influence of the organizing power of mind, and in a mode expressible in precise mathematical relations, to produce formative lines and planes that constitute the archetypal form field or matrix pattern for the organization of material substance. Randomness at the material level, no matter how much it may apparently increase, is contained within the bounds imposed by a paraphysical, invisible matrix pattern. The notion of non-material, etheric patterns or form fields, certainly seems a relevant hypothesis for the development of organic forms, crystal formation, snowflake formation, and so on. Of course, the hypothesis raises elaborate problems of its own: (1) the nature of the mind or minds working through the ether; (2) the constitution of the ether itself, and experimental detection of its existence; (3) the interaction of material substance and the ether, how the former is obedient to the controlling influence of the latter; (4) the genesis of material substance in relation to the ether; (5) the mathematics of form fields and etheric processes generally. (Some pioneer speculations in this realm were exercised by Sir Oliver Lodge in Beyond Physics, 1930).

The statistical concept of truth, current in modern physics, that laws of nature are statistical averages, is a view of causal law taken, as it were, from inside matter, from within the notion of random molecular motion. But it clearly betrays paucity of imagination, for it by no means accounts for the factor of the organizing law that shapes the statistical averages into the determinate forms of perceptual experience: it cannot do justice to the clarity and differentiation of form that exists at the macroscopic level and that is especially evident in the biological realm.

The notion of imposed law was, of course, the doctrine both of Newton and of Descartes. Without such a doctrine science would scarcely have been born. It was connected on the one hand with deism: for Newton, laws of the solar system expounded in his Principia were themselves to be explained by, and to him made obvious the need for, the concept of a transcendent purposive deity. The doctrine was linked on the other hand to the expression of law as mathematical relations of functional dependence. Laplace, as we have seen, took over the notion of law as a functional relation, but dropped the notion of law as transcendental intention. For him law became entirely immanent, a species of animated built-in mathematics. This view, always rather short-sighted, required, as its minimum justification, evidence that Newtonian mechanics prevailed among ultimate particles as among the celestial bodies. The discovery of random motion among ultimate particles and the resultant problem of entropy justifies the reintroduction of the hypothesis of purposive law.

We are not suggesting, however, that the doctrine of purposive law should be revived simply in a crude and vaguely deistic form (although ultimate deistic notions can always be retained), but in a manner which describes clearly and explicitly the mode of its execution; and this via the hypothesis of intermediary archetypal force fields and matrix patterns of an ether that is conceived as the vehicle of purposive mind. Thus the quest for certainty among ultimate particles is transposed into a quest for the hidden origins of total form at the macroscopic level. Moving outside physics, we may note the great scope for far-reaching hypotheses related to morphology in the biological sciences; and even here mathematical treatment is highly relevant (cf. the application of polar-Euclidean geometry to the form of plant growth by G. Adams and O. Whicher, The Plant Between Sun and Earth).

The knowing of form

What is, perhaps, significant about this suggestion is that it implies a general reorientation in our mode of knowing whose effects may be considered under three heads – psychological, epistemological and moral. Psychologically, the study of macroscopic form involves something other than the purely intellectual analysis, the sharpened rational penetration, required for the close study of sub-atomic events and structure. For the observer needs to participate intuitively in the unseen matrix pattern and the processes that work through it, by an imaginative structuring of the total physical form and of the sequence of forms and physical processes that develop through time. Psychological functions of empathy and identification – with active imagination and intuition – are employed, as well as purely intellectual grasp and ability and, of course, careful and controlled observation. Explicitly and consciously, then, a more comprehensive range of cognitive powers is involved. This suggests, firstly, that a richer faith is at work than the scientist’s normal faith that there is an intelligible order in nature which the intellect can abstract by reasoned observation. This wider faith is a faith in a sustaining realm of ordering forms and processes transcending nature that can be entered into in an experiential mode beyond pure abstraction and analysis. Secondly, in order to bring into exact and effective balance the wider range of cognitive functions, there is implicit in this approach a harmonious balance and integration of the personality. Integrated cognition is a function of integrated being.

Epistemologically there is entailed the view that the range of phenomena capable of precise scientific description is not restricted to sense data alone, but that an imaginative-cum-intellectual grasp of the wholeness of sense phenomena can lead beyond them to a participation in and understanding of their immediately transcendent matrix forms and processes. This species of exact, careful observation allied actively to imagination and intuition as well as to intellectual analysis, leads to a consideration of moral issues.

The moral factor may be stated in terms of values. The exclusive use of intellectual analysis allied to observation as a mode of cognition tends to leave underdeveloped those functions of feeling and intuition that are particularly responsive to values. There is a certain neurosis of the intellect, a certain ruthlessness in the quest for certainty that seeks to strip matter to its bare subatomic bones of immanent mathematical law. This puts aside as irrelevant the role of these other psychological functions that contribute to comprehensive human cognition. The resultant insensitivity is reflected in the old deterministic myth which excludes from the economy of the universe any teleological role for values, and in the new statistical myth which reduces the purposiveness implicit in form to the mere play of chance. Our technological civilization based on science has among its characteristic motives those of dominance and acquisition. This is a consequence of the moral implications of the scientific mode of cognition, which acquisitively seeks an intellectual dominance over facts, to the exclusion of an imaginative and intuitive attunement to and through the facts. There is a price to be paid for probing nature with the sharpened tool of the intellect alone, for the knowledge that is won backfires in a way that draws dramatic attention to the practical relevance of values and modes of cognition that have been put aside (cf. the A and H bombs whose development was unforeseeable in the early days of Rutherford’s work on the atom). It is an approach that disregards the role of transcendent functions in nature, and is piratical in the sense that it appropriates and applies its discoveries disregarding influences from, and effects of, the unacknowledged realms intimately involved in the zone of operations concerned.

We may suggest, then, that there is a morality of our modes of knowing, whose principles are concerned with the range of, and the interrelation between, our different cognitive functions. The more comprehensive the range of functions, the more integrated and mutually fructifying their working, the greater the interrelation of fact and value in the knowledge gained, the direction of research, the applications achieved, and in the characteristic motives of the culture in which they are applied. The study of form, its origin, processes and metamorphoses requires an intuitive-aesthetic as well as a purely rational grasp; and it can never be far removed, via a general metaphysic of form, from a consideration of the purposive role of values.

We may summarize this section by saying that the knowing of form involves (1) a joint consideration of fact and aesthetic value; (2) a wider range and integration of cognitive functions within the psyche; (3) an understanding of the integration between the phenomenal realm and the transcendent matrix fields that interpenetrate it with formative influences; (4) in cognizing this integration, an active attunement to the fields and processes concerned. The intellectual quest for certainty, then, becomes transformed in this alternative approach into the quest for wholeness and depth within the observer, wholeness and depth within the observed, and an experiential unity as between the two. And this leads quite naturally on to the unfolding of a new descriptive metaphysic of the most general characteristics of form and structure as such, and of the most basic principles involved in their transformations.

(NB: The above discussion is not intended to suggest in any way that the investigation of ultimate particles should be abandoned, but simply that it should be complemented by and subsumed within the deeper and wider macroscopic approach outlined).

Mathematics: Gödel’s theorem

Whatever uncertainty may have arisen out of quantum mechanics and the formulation of Heisenberg’s principle, physicists themselves took for granted the complete reliability and certainty of the mathematical tools by means of which the principle itself and the facts underlying it could be made so explicit. A certain small group of research mathematicians, however, were less content to rely solely on the apparent pragmatic evidence of the reliability of mathematics. And we have seen how, in the light of the disturbing paradoxes mentioned earlier, they were concerned to establish an absolute, “finitistic” proof of the consistency of arithmetic. At this point we come to Gödel’s theorem.

Gödel’s theorem is certainly one of the outstanding intellectual achievements of the present century and marks a high point in the development of rational skill, ingenuity and inventiveness. Nor is it likely that its broad philosophical implications have as yet been fully fathomed. The theorem is conducted in the abstruse realm of metamathematics. Mathematics proper embraces the formal deductive systems – algebra, geometry, arithmetic, etc. – that mathematicians construct; metamathematics deals with the description, discussion and theorizing about such systems. Thus two main metamathematical issues are: (1) is a mathematical system consistent? (2) are its axioms independent, such that no one can be derived as a theorem from the others? Hilbert’s programme for an absolute and finitistic proof of consistency was, first, completely to formalize a deductive system into a calculus consisting of a set of signs, and rules showing how the signs are to be combined and manipulated. The combination rules show how the signs may be arranged so as to give intelligible axioms and theorems. Theorems are derived from axioms in accordance with transformation rules, or rules of inference. The calculus, then, consists of certain signs combined according to certain rules into a set of axioms from which, in turn, theorems are derived by rules of inference. Secondly, Hilbert hoped that such a calculus could be shown to be a “geometrical” pattern of formulae standing to each other in a finite number of structural relations, examination of which would establish that contradictory formulae cannot be obtained within the calculus. What is clearly essential to such a metamathematical demonstration of consistency is that it should not involve an infinite number of structural properties of formulae, nor an infinite number of operational procedures on formulae. One must be able to make fully explicit all the axioms and all the transformation rules to be applied to them.
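
The notion of a fully formalized calculus can be made concrete with a toy example. The system below (Hofstadter’s well-known MIU system, used here purely as an illustration, not Hilbert’s arithmetic) has a single axiom and four purely mechanical transformation rules; deriving theorems is sign manipulation without any appeal to meaning.

```python
# A toy formal calculus: one axiom ("MI") and four transformation rules.
# Theorems are whatever strings the rules can generate from the axiom.
def successors(s):
    out = set()
    if s.endswith("I"):                 # rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):               # rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):         # rule 3: replace III with U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):         # rule 4: delete UU
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

theorems = {"MI"}                       # the single axiom
for _ in range(4):                      # four rounds of derivation
    theorems |= {t for s in theorems for t in successors(s)}
print(sorted(theorems, key=len)[:8])    # a few of the shortest theorems
```

A metamathematical question about this calculus (for instance, whether a given string can ever be derived) is a question about the pattern of rule applications, not itself a theorem within the calculus.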

Until Gödel’s theorem it was taken for granted that a complete set of axioms for arithmetic or any given branch of mathematics could be assembled: there was the apparent evidence of the axiom sets of the various geometries, and Peano’s apparently complete axiomatization of the arithmetic of cardinal numbers (five axioms formulated with the help of only three undefined terms – “number”, “zero”, and “immediate successor of”). Now Gödel proved, by a highly ingenious technique of mapping metamathematical statements about a formalized calculus onto arithmetical formulae within that calculus, that arithmetic can never be fully axiomatized: one can never set out the complete set of axioms of a deductive system from which all true arithmetical theorems could be derived. He showed that any system within which arithmetic can be developed is essentially incomplete; for given any set of arithmetical axioms, there will be true theorems that cannot be derived from the set, and no matter how the set may be augmented by additional axioms, there will always be further true theorems not derivable from the augmented set. There is thus an inherent limitation in the axiomatic method as a technique for encompassing and sustaining the whole of arithmetical truth. This is the “incompleteness” aspect of the theorem.
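
The mapping technique can be shown in miniature. Assign each sign a code number and encode a formula as a product of prime powers; statements about formulae then become statements about numbers, and decoding by factorization recovers the formula. The coding below is a schematic sketch of the device, not Gödel’s own assignment.

```python
# Miniature Goedel numbering: sign codes become exponents of successive
# primes, so each formula corresponds to a unique natural number.
CODES = {"0": 1, "s": 2, "=": 3, "+": 4, "(": 5, ")": 6}

def primes(n):
    """First n primes by trial division (adequate for short formulae)."""
    out, k = [], 2
    while len(out) < n:
        if all(k % p for p in out):
            out.append(k)
        k += 1
    return out

def godel_number(formula):
    g = 1
    for p, sign in zip(primes(len(formula)), formula):
        g *= p ** CODES[sign]
    return g

print(godel_number("0=0"))   # 2**1 * 3**3 * 5**1 = 270
```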

But the theorem also showed that the consistency of arithmetic cannot be demonstrated by any argument that can be represented in the formalized arithmetical calculus. That is to say, it is impossible to prove the internal logical consistency of a deductive system from which all arithmetic can be derived, unless the proof employs rules of inference wider than and essentially different from the transformation rules used to derive theorems within the system. But then the consistency of the assumptions implicit in the wider transformation rules is as open to doubt as the consistency of arithmetic itself. The long and short of these two results of the theorem is that Hilbert’s programme for an absolute and finitistic proof of the consistency of arithmetic has to be abandoned. The intellect has demonstrated that it cannot, as it were, encompass a finite, impeccable guarantee that many significant branches of mathematics are entirely free from internal inconsistency. The search for absolute intellectual certainty in mathematics thus finally arrives only at the certitude of its unattainability.
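
In modern notation, for any consistent, effectively axiomatized system T strong enough to contain arithmetic, the two results read schematically:

```latex
% Goedel's two incompleteness results, stated schematically
\text{(i)}\quad \exists\, G_T :\;\; T \nvdash G_T
\ \text{ and }\ T \nvdash \neg G_T ;
\qquad
\text{(ii)}\quad T \nvdash \operatorname{Con}(T),
```

where Con(T) is the arithmetized statement of T’s own consistency.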

Some qualifying statements must here be introduced. (1) Gödel only showed the impossibility of a consistency proof that can be represented within arithmetic. There may be a finitistic proof that cannot be so represented; it is difficult, however, to conceive what such a proof could be like. (2) The theorem does not exclude the possibility of a metamathematical proof of the consistency of arithmetic, one that is not finitistic and that cannot be mapped onto formalized arithmetic. Thus its rules of inference lie outside the impeccable guarantees of a finite consistent system. An example is Gentzen’s proof (1936) which organizes all arithmetical demonstrations in linear order according to degree of simplicity. This order has the pattern of a “transfinite ordinal type”. A proof of consistency is got by applying to this order a rule of inference called the “principle of transfinite induction”. (3) Similarly, the several mathematical theorems or general statements which have so far evaded proof, that is, which it has not been possible to derive from a given set of axioms (e.g. Goldbach’s conjecture that every even number greater than 2 is the sum of two primes), may be established by metamathematical proofs; but again the rules of inference used (and the consistency of their implicit assumptions) will lie outside those contained within a formalized calculus.
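
Goldbach’s conjecture illustrates the gap between mechanical checking and proof: each even case can be verified by finite search, yet no amount of such verification amounts to a demonstration. A minimal sketch:

```python
# Mechanical check of Goldbach's conjecture for small even numbers.
# Finite checking of cases is precisely what falls short of proof.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None   # a None here would refute the conjecture

for n in range(4, 30, 2):
    print(n, "=", "%d + %d" % goldbach_pair(n))
```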

The implications of Gödel’s theorem

We have seen that a full account of mathematical inference cannot be given in terms of the axiomatic method. We cannot, that is, make fully explicit all the rules and principles involved in valid mathematical demonstrations. The whole logical form of mathematical proof cannot be set forth in any self-contained deductive system. There will always be new principles of mathematical inference awaiting discovery. This gives rise to two problems: (1) How do we justify the use of, that is, what are our criteria for the soundness of, metamathematical principles of inference that fall outside the scope of a formalized axiomatic system? (2) What sort of an account can we give of the nature of mathematical and logical truth? In approaching an answer to these two questions, two suggestions may be put forward.

(1) If we cannot forever go on codifying rules of inference within self-contained consistent deductive systems, then we must appeal in the case of new and wider rules of inference to a general aesthetic criterion of elegance, simplicity, cogency. Such a rule of inference will be justified by its elegance, by its range of unifying, cohesive power upon the system to which it is applied. We are again, in the realm of metamathematics, brought face to face with the notion of form, here pure abstract form, to which in the last analysis aesthetic criteria alone apply. (Cf. L. L. Whyte, Accent on Form: “All intellectual processes depend on the operation of the aesthetic sense that recognizes an elegant ordering when one is presented to it. This sense is prior to reason and cannot be justified by analysis or interpreted by definition”.) This does not mean, of course, that in this realm we discard the rational processes of cogent proof, but that the rules of inference employed which achieve the cogency find their ultimate sanction in pure intuitions of formal elegance: the rational or deductive content is subsumed within the aesthetic criterion.

(2) In the light of Gödel’s theorem, mathematical and logical truth may be conceived as a world of abstract formal relationships spreading out beyond the horizons of present intellectual advance, and awaiting discovery. This, of course, revives a kind of Platonic realism, a doctrine of subsistent universals: mathematicians discover the objects which in some sense subsist prior to their discovery. And it appears to be Gödel’s view: “Classes and concepts may be conceived as real objects existing independently of our definitions and constructions. It seems to me that the assumption of such objects is quite as legitimate as the assumption of physical bodies and there is quite as much reason to believe in their existence” (Gödel, “Russell’s Mathematical Logic”, in The Philosophy of Bertrand Russell, ed. Schilpp, 1944). It might be more apposite to speak of a universal and subsistent mind whose content of intelligible relationships extends beyond both the confines of concrete actuality and the areas of formal possibilities so far mapped out by human inquiry.

The mathematician’s intellectual faith in the possibility of a consistency demonstration for arithmetic has to be replaced by a faith that in subsistent, universal mind there are wide principles of inference which can be discovered and whose ultimate justification resides in an aesthetic criterion of unifying elegance. And it is at this point that we may move forward toward a more general philosophy of symbolic forms.

Towards a philosophy of symbolic form

Intellectus archetypus

What is remarkable about the two events we have been discussing is that they both point to the capacity of the intellect to determine its own limitations. In physics the intellect has established what it cannot measure, in mathematics it has demonstrated what it cannot prove. If we examine this peculiar ability of the intellect we may see in it evidence of the implicit application of a criterion that transcends purely rational demonstration and analysis as such. The intellect is able in a rational mode to reflect principles that transcend its own nature by virtue of their aesthetic content. To establish what you cannot measure is to reflect a subliminal awareness that spatial physical form is transcendentally organized by the action of forces flowing through creative symbols or patterns or ultimate matrix structures. To prove what you cannot prove is to appeal to standards, ultimately, of cogency, fittingness, elegance which subsume yet still transcend rational principles of inference. It is a process, perhaps, in which the intellect reflects its access to realms of purely symbolic thought that lie behind all rational demonstrations.

The justification, then, for accepting the hypothesis of a transcendent function of mind, an “intellectus archetypus” as Kant called it, is that this function demonstrates its existence at the rational level through the activity of reason proving the impossibility of certain kinds of proof. A mind that can describe its own rational limits is implicitly revealing at a rational level its transrational insights, and is implicitly calling for a wider and deeper faith to prompt the search for knowledge.

We may state the broad philosophic hypothesis as follows. The intellectus archetypus has access to subsistent, universal mind whose content of symbolic forms is independent of human minds and awaits their discovery. The active intellectus archetypus is a heightened function of mind in which intuition and intellect, feeling and will are in harmonious integrative balance. As a result the symbolic forms that it apprehends are not simply mentally observed, but are inwardly experienced: they are passed through like cognitive gateways to a living reality. These symbolic forms are on the one hand the ultimate justification for the intelligible content of all human speculation, reflection, contemplation and demonstration; and on the other hand they are the ultimate origin for the formal properties of form fields and matrix patterns in the ether, on which material entities are built up.

Paralogic

If the discursive intelligence, Kant’s “intellectus ectypus”, finds a purely rational justification for its products in the formal principles of logic, then the intellectus archetypus finds a transcendent justification (wider than the rational) for logic itself in the symbolic forms the study of which we shall designate as “paralogic”.

It is clearly the case that paralogic will have a language of its own in which the interrelations within and between symbolic forms will be expressed in a series of symbols or glyphs arranged together in accordance with principles that are wider and deeper than is evident to purely rational inspection and analysis. This is not to assert, however, that there is an unbridgeable gulf between logic and paralogic. For once the intelligible content of a set of paralogical glyphs is grasped, or rather, entered into and experienced at the level of the intellectus archetypus, then its rational content will be susceptible of consistent and coherent explication. In the same way, while paralogic itself may be expounded in its own paralogical terms, yet it is still susceptible of reflection in the explanatory terms of the discursive intelligence. There is, however, a leap involved in the passage from a discursive account of paralogic to the experience of its intelligibility. For there is more than a rational dimension to a symbolic form, whose general properties we must now consider.

It is also clear that the ultimate study for paralogic will be the symbolic form of a symbolic form. A symbolic form has both a rational and an aesthetic component. It is, as we have said, susceptible of rational explication and interpretation for there is a correspondence between its elements and their interrelation and certain rational concepts and their interrelation; logical relations reflect paralogical relations. Yet in its own nature it has a cohesion, unity, elegance, fittingness, cogency, compactness, simplicity that is susceptible of purely aesthetic appreciation. It is this aesthetic component, involving the grasp of a whole as a whole, that raises cognition of symbolic form to the level of the intellectus archetypus.

The two components are, of course, interrelated. A symbolic form deficient in rational potential is not likely to persuade aesthetically; while one that is aesthetically disingenuous is not likely to sustain adequate rational development. Yet the aesthetic component subsumes the rational, and in transcending it, acts on it, unifies it, and transforms it, creating out of the whole a well-formed cogency. But we cannot reduce the rational to the aesthetic (or vice versa); each has its own distinct mode of mental being. We may, then, symbolize a symbolic form by a horizontal line for the rational dimension of its intelligible content, a vertical line (intersecting the horizontal at right angles) for the aesthetic dimension that interacts with the rational, and a circle for the well-formed cogency of the result. The lines extend beyond the circle since we cannot suppose that the experienced cogency of a symbolic form totally exhausts the intelligible possibilities of the interaction of its aesthetic and rational components. This symbol might be better conceived with the two lines of indefinite length and a series of circular ripples proceeding out continuously from the central point of interaction. In graphically symbolizing this mental symbol of a symbolic form, we present one circular ripple or wave on a portion of the lines.

It can scarcely be argued that this symbolic form of a symbolic form has a solely subjective reference to the mode of human understanding, that it simply depicts intelligent understanding and nothing more. For it seems more reasonable to suppose a consonance between our understanding and that which is understood, and to argue in the light of our general hypothesis that it is a symbol of the intelligibility inherent in universal mind as such: it depicts the prior substrate of all knowledge. If so, we shall expect to find contained within it a correspondence between ultimate principles in diverse realms of knowledge. We shall therefore call it the basic symbol of the intelligible.

Aesthetic validation through the paralogical principle of correspondence

Systems of formal concepts, belonging to diverse domains of enquiry, that can be seen to be subsumed under a symbolic form of wide application will thereby tend to be validated by the aesthetic cogency of the correspondences established between them. Conversely, the symbolic form itself will receive an aesthetic validation by the extent to which its rational component is explicated in terms of the varying sets of formal concepts that can be developed under it.

For example, we can take the basic symbol of the intelligible given above and seek to subsume under it an explication of key concepts descriptive of metamathematical reasoning on the one hand, and of the (meta)physics of spatial form on the other. The symbol will then elucidate a correspondence, having a purely aesthetic cogency, between formal elements in these two realms of inquiry. If we consider metamathematical demonstration, the vertical line represents what we have seen to be the transcendent rules of inference, interacting with the horizontal line representing the ordered elements of the deductive system to which the transformation rules are applied; the circle represents the well-formed cogency of the result. Similarly, in a general metaphysic of material forms, the vertical line represents the transcendent laws of change or process working through the matrix patterns, interacting with the horizontal line that represents the elements of the material system affected; the circle represents the well-formed organism that results.

We are able, then, to suggest a significant correspondence between the aesthetic component of a symbolic form, the rules of inference employed in a metamathematical demonstration, and the transcendent formative processes acting on any physical system. Laws of inference are like laws of process: they tend to weld horizontally disposed units into a whole, just as the aesthetic component of a symbol welds its rational potential into a cogent and fitting whole. The transcendent nature of metamathematical rules of inference and of the aesthetic component of a symbolic form enables us, via the paralogical principle of correspondence, to suggest the aesthetic reliability of the notion of the transcendence of formative processes in nature.

But where does this paralogical method stand in relation to an empirical methodology? Are we seriously suggesting that such a principle of correspondence can be a substitute for empirical investigation? Certainly not. What we are suggesting is that it is a way of cohering and organizing our intuitions prior to such investigation. The paralogical method introduces determinate principles into the hitherto mysterious process of insight that precedes scientific method. The sets of correspondences which it establishes serve on the one hand to unify, deepen and enrich knowledge by establishing harmonic relations between concepts, and on the other to provide by analogy and the structuring of operational concepts fruitful hypotheses to guide empirical research. Certainly, if it fails to do the latter it can scarcely claim comprehensive validity as a method.

Symbolic forms

By way of conclusion, this final section seeks to set out some general features of the hypothesis of symbolic forms.

(1) Symbolic forms in their pristine, original, subsistent state will of course be quite distinct from, although they will participate in, any two-dimensional graphical representation. We may perhaps conceive of the origin of a symbolic form as a particular focus or area of intelligibility in the universal mind, a zone of realization that is in its ultimate nature perhaps never entirely attainable, but which springs into determinate form in the mind of an interpreter attuned to it. So according to the nature of the interpreting mind and the level of being at which its understanding is being exercised, different facets, aspects and dimensions of the symbolic form will disclose themselves. Thus we have the original focus of intelligibility in universal mind, its many-faceted representation in diverse interpreting minds, and its basic graphical or embodied representation outside a mind. The graphical expression will be to a considerable degree adequate as an anchorage for a set of correspondences. But reflection on symbolic forms can go beyond the purely graphical expression or glyph to experience the unrestricted thought-form in which the symbolic form discloses itself in the multi-dimensional space of the interpreter’s mind or creative imagination. Out of such a thought-form a richer and more complete set of correspondences can perhaps be developed.

(2) Need symbolic forms be represented to the interpreter as obvious geometrical forms? No, but we may suggest that any explicit representation to the mind must have spatial characteristics.

(3) Symbolic forms, as we have already indicated, may be considered not simply as transcendent foci for the unification of basic concepts in diverse realms of knowledge, but also as focal points that radically influence the nature and activity of formative processes. Individual sciences will seek the key symbolic forms relevant to their special domains. Metaphysics will seek those that are relevant to many or all domains. If symbolic forms are generically related to the formal properties of etheric matrix patterns, etc., then paralogical intuition may fruitfully precede the formation of working hypotheses in the new physics.

(4) Symbolic forms, as distinct from their point of origin in universal mind, may also be considered as the unifying focus for distinct groups of transcendent, discarnate minds, which relay the formative or conditioning influences characteristic of the forms to which they are particularly attuned. Here again is an obvious connection with the new physics.

(5) Comparably with what we said about participation in unseen matrix patterns of macroscopic forms, the psychological significance of the paralogical method is that a symbolic form invites total participation: we need to experience its aesthetic component by empathetic identification. Symbolic forms subsist in a zone of intelligibility and invite us to enter into an inner rapport through them with this zone and to experience its intelligible life as a sustaining reality. The intellectus archetypus achieves this by the integration of diverse psychological functions in one experiential mode of knowing.

(6) We may, finally, suggest the view that a knowledge of symbolic forms is received by a rapport with the minds referred to in (4) above. This raises the question of the conditions under which the symbolic forms are received. We here come again to a deep interior connection between morality and epistemology. We do not of course mean morality in any narrow puritanical sense; and it is preferable to use the term metamorality for a fusion of psychological wisdom, creativity, experimentation and ethical values, that is more subtle, adventurous, far-reaching and imaginative than any rigid, authoritarian prescriptions governing correct behaviour. What we are suggesting, of course, is that a knowledge of symbolic forms is not simply the fruit of intellectual speculation or rational analysis. It is rather that which is bestowed upon the mind when it enters into and acts on comprehensive attitudes of faith which we have broadly outlined; and when as a result of this it cultivates in a deliberate manner transrational modes of knowing. There is no recommendation in all this that rational modes of knowing as such are to be discarded; far from it. But the cultivation of transrational insights is closely related to the integration of behaviour through its interaction with deep metamoral values: the insight involved is the fruit of integration occurring within deep levels of the personality. Metamoral values interact with total behaviour to produce well-formed insights, a thesis which can also be explicated under the basic symbol of the intelligible given above. Receptivity is allied to rational judgment, intuition to discrimination, and the whole is an inquiry guided by a deep and aspiring faith in the reality of a transcendent knowledge. Its results are justified aesthetically by the cogency with which symbolic forms unify metaphysically different areas of experience, rationally by the extent to which the symbolic forms are capable of rational explication in diverse realms, empirically by the fruitfulness of the paralogical method in guiding research and objective inquiry.