(Theo Theocharis)



"There are some matters which no mind, however gifted, can present in such a way as to be understood in a cursory reading. There is need of meditation and a close thinking through of what is said." (Johannes Kepler, New Astronomy, Chapter 59, translation by W. H. Donahue)


In our (post-)modern times, the neglect, subversion, or even the outright rejection of both terms 'science' and 'truth' has become an almost universal affliction. The attempt is here made to strip away the various layers of misconceptions, and thus develop sound (as well as useful) definitions.


The 2 April 1989 issue of the London Sunday newspaper The Observer published an article by Michael Ignatieff under the heading "Defenders of [Salman] Rushdie [are] Tied Up in Knots". The explanation ran briefly as follows: The Islamic (and other religious) fundamentalists have a dogma, and they are absolutely certain about their dogma. On the other hand, the western intellectuals have a so-called "philosophy", but by their own uncoerced admission they can never be certain about their own "philosophy". Therefore, the western intellectuals cannot really believe in whatever they say or, worse, preach. For the same reason, Clifford Longley, the then religious affairs correspondent of the London daily The Times, could boldly claim that the post-Enlightenment "philosophy" of the modern secular western world is now dead, and he could thus carry out, with evident delight, the "Inquest on the Enlightenment" (The Times, 25 March 1989).

Significantly, both The Observer and The Times are leading organs of the mainstream western culture in the UK. I expect that in 1989 similar articles appeared in the analogous press throughout Europe and America and indeed the whole world. I also guess that the same arguments feature all the time with much greater force in the religious fundamentalist media everywhere.

It may not have seemed very obvious to everybody, but the plain truth is that this incident in 1989 exposed to the whole wide world a grave intellectual flaw at the heart of so-called 'Western' culture. The Islamic fatwa elicited by Salman Rushdie's The Satanic Verses and the subsequent intense international controversy dealt a deadly blow to western "philosophy", and the grave flaw at its core was forcibly brought to the fore of international affairs.

This sorry international episode in 1989 supplied me with one more piece of powerful evidence with which to show that the voluntarily but thoughtlessly professed (universal and permanent) "anti-certaintism" of the intellectual leaders of the West was the exact point "Where the [entire] West[ern Civilisation] Has Gone Wrong" (T. Theocharis, Journal of Applied Philosophy Vol. 6, 1989, pp. 249-250).

As strongly suggested by Clifford Longley in the above cited Times article ("Inquest on the Enlightenment", The Times, 25 March 1989), the ultimate origin of the western universal and permanent anti-certaintism is western science, which of course is the most fundamental and the most distinctive constituent of western culture. Again it was with much sorrow that M. Psimopoulos and I had to explain that this is the exact point "Where Science Has Gone Wrong" (Nature 1987; 329: 595-598; Nature, 1988; 333: 389).

Worse, many science spokesmen, while all the time paying lip-service to the hallowed rhetoric of anti-certaintism and (consequently also) of humilitism, speak with a superior air of cock-sureness and supercilious arrogance that comprehensively defeats any avowed dogmatist. Naturally, this is seen as insincere and hypocritical, and as smacking of bad faith, and it thus, regrettably, fuels the already hotly and ferociously burning flames of anti-science.

The dogmatic preaching of anti-certaintism is yet another (post-)modernist oxymoron. (The term 'oxymoron' literally means 'acute folly'.) Having as the only non-negotiable principle the rejection of every certitude is a self-refuting and self-destructive proposition.


When it is stated (rather ambiguously) that science is an open-ended quest for knowledge, it must be clarified that it is the horizons that are constantly and ceaselessly expanded, not that one never attains any definite and final article of knowledge. The open-endedness is in the quest, not in the specific findings resulting from the quest. The quest for knowledge is open-ended because the number of individual items of possible knowledge is infinite, but each item of course is in principle fully attainable and conclusively verifiable.

Regrettably, the current (post-)modernist (mis-)conception of science implies that science goes on for ever not because the number of truths is infinite (for allegedly there aren't any truths at all) but because science never gets anywhere.

Throwing everything into doubt was justifiable at the very beginning and also during the early days of science. But surely all the resourceful and productive work painstakingly done by so many industrious individuals and groups during the past 2500 years must have yielded at least a few results that are by now beyond dispute! Of course critical questioning is still a must at the frontiers of research.

It is especially in the delicate field of new types of consumable and ingestible products (chiefly foods and drugs), where there naturally is insufficient evidence at the outset to declare a new product absolutely safe, that one must be cautious and open-minded. But this does not entail that one must reject all certainties, for if one has no certainties at all, one ultimately has nothing.

I had hoped that the extraordinary events of 1989 in international affairs over the Islamic fatwa elicited by Salman Rushdie's The Satanic Verses would galvanise all western thinkers and their influential organisations to reconsider seriously the inconsistent, unsound, shaky, and ultimately untenable foundations of their "philosophy", and thus devise a new, coherent, unshakeable, and indestructible set of fundamental principles.

Regrettably, the then burning issue of inconsistent and unsound fundamentals was glossed over and quickly forgotten, and no concrete steps were ever taken to remove this intellectually ruinous and crippling flaw, to the continuing further detriment of science, culture, and society - but especially scientific prestige. To this day, the authority of both western "science" and western "philosophy" remains both subverted and unconvincing. Intellectually, this has effectively amounted to a crushing, if unacknowledged, defeat.

There is only one satisfactory way to answer this devastating criticism. This would entail tackling the very roots of the problem by starting afresh from the very beginning, re-examining the foundations, and then determining and adopting ab initio a new set of core principles that are soundly and conclusively proven and thus no longer questionable. These non-negotiable axioms will therefore serve as the solid epistemological foundation upon which to be able to erect a stable, secure, defensible, and indestructible scientific edifice.

The definition of 'science' and 'truth' must necessarily be one (arguably the first) of the non-negotiable postulates.


It is more or less universally agreed that the whole subject of discussing the definition and meaning of 'science' is not part of science, but instead belongs to some other discipline, perhaps the so-called 'metaphysics' or 'philosophy'. The philosopher C.E.M. Joad argued correctly that people who think they haven't got a philosophy simply have a badly-thought-out philosophy. This is another way of stating Aristotle's observation (more than two millennia earlier) that the claim to not have a metaphysics is itself a form of metaphysics.

In fact the whole subject of investigating the definition and meaning of 'science' - ie the 'science of science' or 'epistemology' - is the most basic, fundamental, and indispensable part of science. The ordinary persons who think that they haven't got a 'metaphysics' or a 'philosophy' can be excused for being poor metaphysicians or philosophers. But can the scientists who do not know, or worse do not care, what science is be similarly excused?


There are millions of people who would call themselves 'scientists', but how many have ever asked the question "What Is Science?", let alone tried to answer it? The relatively few individuals who have taken the trouble to reflect on the issue invariably say that science is the systematic study of observational data in order to gain an understanding of the world around us. The understanding is (almost universally) supposed to somehow come about by devising models or theories that work; all that is required from a scientific theory in this scheme is empirical 'adequacy' and practical 'reliability'. Models in this scheme are mere tools that serve as instruments for routine computations and standard predictions. The old saying "If it ain't broken, don't fix it" comes readily to mind here.

When fresh data come along which do not accord with the existing model, the model is no longer "empirically adequate", or to use the parlance of folk philosophy, the model is now "broken" and it needs "fixing". If, however, it cannot be fixed, it is then discarded. Then a new model is constructed that 'works' with the fresh data, and the process may be repeated ad infinitum.


However, this is a seriously inadequate conception of science, for it involves a rather superficial understanding of 'understanding'. This is best demonstrated thus: For the human infant, the Santa Claus theory of Christmas gifts works unfailingly year after year. It follows that (according to the widespread viewpoint as to what science is) the Santa Claus theory of Christmas gifts is a proper scientific theory. Moreover, the laboratory rat all the time makes and re-makes models that work and are empirically 'adequate' or 'reliable' in order to find its way to the intermittently altered location of rat-food in the experimental maze. It follows that (according to the common viewpoint as to what science is) the laboratory rat is a proper scientist.

All organisms living now have already been rigorously selected and programmed by the austere and harsh evolutionary process to be able to cope with everyday tasks and thus survive in the cut-throat world of nature. Science surely has to comprise more than the ordinary chores required for mere survival, and the 'empirical adequacy' conception of science is thus shown to be epistemologically inadequate and unreliable. In the final analysis, in the long term the "but the theory works" theory of science does not work adequately; in other words, it is indeed "broken" and it needs "fixing".

All the frequent colourful public displays of batteries of technical wizardry paint too rosy a picture of the current state of science, and omit the dark shadows concerning the non-obvious deficit in scientific understanding. No amount of technical wizardry can really compensate for this deficiency, which entails that science is being neglectfully and wastefully under-utilised: there is an unnecessary delay in growth, and the resulting progress falls far short of the optimum. Thus the unpalatable truth is that all the technical wizardry very effectively conceals a serious flaw, and unless this flaw is recognised and corrected, the public will continue to be both misled and, worse, deprived of even greater benefits.

Imagine a public event in 1600 AD exhibiting terrestrial and celestial globes, orreries, clocks, sextants, and other such ornate instruments and impressive devices. Consider also the claim then that all these ingenious devices and the then (fairly successfully) practised calendar and ocean navigation PROVE the geocentric model of planetary astronomy. How could someone then conceivably challenge this (prima facie powerful) argument? It is the chief task of this essay to answer this question, not only for 1600 AD but for all times.


What is the cause of this inadequate understanding of science, and why does it stop short of capturing the full meaning of science? The short answer is 'short-termism'. Of course the short-sighted, blinkered, and one-track vision of short-termism is not specifically confined to the scientific community. On the contrary, it is endemic in the wider world (of politics, industry, business, commerce, etc) and the scientific community evidently feels, and yields to, the socio-political pressure. However, the causal influence does not operate in only one direction, from the outside world to the scientific community. Owing to the (not totally justified) prestige that the scientific community enjoys, the pathology of 'short-termism' flows also in the opposite direction.

The "model-that-works-now" (mis-)conception of science is geared to the prevalent short-term outlook of the current socio-political set-up, which focuses on short-term goals and thus neglects the long term, at the expense of optimum progress. The 'profit-now' drive of most entrepreneurs and the 'must-win-the-next-election' motive of all political organisations combine to concentrate attention on short-term goals, and thus generally undermine progress in the long term. Such a conception is adequate only in a NON-scientific society (whether human or non-human).

The "If it ain't broken, don't fix it" folk "wisdom" was devised in an era when society was essentially static and no change and no progress were ever expected to take place. It was designed to preserve things exactly as they had always been. But for guiding a society with a fully-fledged science, the "If it ain't broken, don't fix it" folk "philosophy" is a woefully inadequate principle; in fact it is a badly "broken" rule. Nor is the "quick fix" attitude quite satisfactory; only the 'correct fix' will suffice.


Nowadays, those few science spokesmen who still use the term 'truth' invariably concede that science is only one of many ways of producing 'truth'; only very few insist that it is the most reliable way. It will be argued here that in fact only the scientific method (and only when correctly applied) can generate truth. Conversely, apart from the obvious, the only truth possible is by definition scientific.


The historically fully-documented answer to the following question has an immense cultural significance, and it will further illustrate the ongoing arguments: When did the Chinese and the Japanese first discover that the Earth is spherical? In the sixteenth century, when Jesuit missionaries from Europe first arrived in China and Japan, and simply informed them. In fact it is also documented that during the Yuan dynasty (1279-1368), visiting Muslim astronomers presented to the Chinese court the image of the spherical Earth in the (by then standard in western Eurasia) form of a globe. This incident, however, apparently had little, if any, impact. It is questionable whether the tremendous import of this information was appreciated; and if it was, it was kept at court and was not circulated outside.

This valuable historical information regarding the shape of the Earth is almost completely unknown, and its great importance unrecognised; the only publications known to the present author that mention it are:

(i) Pingyi Chu, "Trust, Instruments, and Cross-Cultural Scientific Exchanges: Chinese Debate over the Shape of the Earth", Science in Context 12 (3), pp. 385-411, 1999;

(ii) Akihito (Emperor of Japan), "Science in Japan: A Historical Viewpoint - Early Cultivators of Science in Japan", Science 258, 23 October 1992, pp. 78-79.

Unquestionably, China and Japan had a recognisable civilisation for many centuries (arguably millennia) before the sixteenth century (with writing, legal code, state bureaucracy, literature, art, music, religious rituals, etc.), but it was a NON-scientific civilisation. During all those centuries, China and Japan (like other, less advanced, civilisations) also had arithmetic, logic, astronomy, medicine, technology, history, etc, but strictly speaking all these disciplines were not developed to the point where they could qualify as fully scientific.

It is frequently implied that the early foundation and operation in China of an astronomical observatory and also the early invention in China of paper, printing, gunpowder, rocket, magnetic compass, porcelain, acupuncture, etc., somehow proves the presence of some well-developed state of science. However, like many early mostly serendipitous or fairly easy (but nevertheless important) inventions elsewhere (cooking, pottery, metallurgy, weaving, sewing, plough, wheel, mill, lever, oar, boat, thread, string, rope, cloth, sail, pulley, gear; trade, writing, coinage; bread, wine, ice-cream, mirror, glass, lens, spectacles; bridge, arch, dome, cement; soap, sieve, button, needle, syringe, saddle, harness, stirrup, inoculation, grafting, etc.) the above early discoveries in China do not require any in-depth understanding of the workings of either nature or society.


Perhaps the best way to demonstrate the stark difference between PRE-scientific and strictly scientific medicine and technology (and hence to really understand the meaning of 'science' itself) is to cite these two very illuminating and helpful examples. Consider and contrast the comparative ease and very early date of the PRE-scientific achievements on the one hand with, on the other hand, the great difficulty and quite recent date of their strictly science-based counterparts:

  1. Grafting in, or cloning of, plants; and grafting in, or cloning of, animals. The former were both achieved in ancient times, whereas the latter were both achieved in the 20th century AD.
  2. The original (and by comparison crude) 'compression' wheel; and the later (sophisticated) 'suspension' wheel. The former was invented in the 4th millennium BC and its spokes are necessarily thick (and therefore heavy) so as to be able to withstand the large compressive forces, whereas the latter was invented in the 19th century AD and its spokes are very thin (and therefore very light) because they were ingeniously designed (scientifically) to be in tension.

It is also true that, on some very early date, an emperor of China instituted an astronomical observatory which for many centuries meticulously recorded and preserved accurate observations. Routine observation and tabulation is of course an essential part of science. But by itself the mere compiling of observed data (however accurate), and even any (re-)arrangement of any type of sensory input, do not qualify as a fully-fledged science. To paraphrase Ernest Rutherford, science is more than mere 'stamp-collecting'. A substantial degree of understanding of (or at least a conscious effort to understand) the underlying reality that gives rise to the observations is required, but evidently this never happened independently in China or Japan before the arrival of the Jesuits.

Moreover, the ordinary 'trial and error' type of experiment is also a legitimate part of the scientific process. But again routine 'trial and error' by itself (and even together with the correct result) is not quite fully-fledged science. Otherwise, the laboratory rat which always finds (by straightforward 'trial and error') its way to the intermittently altered location of rat-food in the experimental maze will have to be promoted to the status of a proper scientist.


From all the above considerations it follows that a consummate and competent scientist (by definition) must cultivate what might be termed 'EPISTEMIC ACUMEN'. This must comprise:

I. The ability to attain a certain degree of cerebral exertion and cognitive abstraction;

II. The studious mastery of at least one scientific discipline (and preferably more);

III. The achievement of a higher proficiency in logic, mathematics, planning, design, craftsmanship, programming, etc.

The special expertise must embrace both breadth and depth of knowledge - both are necessary. A scientific investigation can be either highly-focused and penetrative or broadly spanned and integrative. Both approaches (either separately or, more effectively, jointly) can yield novel results. The strict adherence to experimental accuracy, logical discipline, mathematical rigour, etc, is an absolute must. Familiarity with the links between one's main discipline and others is also essential. Multi-disciplinarity (which gives rise to positive 'inter-play', 'inter-action', 'feed-back', 'cross-information', and 'cross-fertilisation') is even better. The fostering of initiative, discernment, sagacity, originality, creativeness, inventiveness, etc. are recommended extras.


Plainly then, the Chinese and Japanese never discovered science by themselves. In fact no other civilisation, except uniquely the Hellenic, discovered science. Science was discovered once, in the lands in and around the Aegean sea, in about the sixth century BC, and gradually spread from there. This is not to say that all the people inhabiting these lands of ancient Greece from the sixth century BC possessed the scientific frame of mind. On the contrary, as always, only a certain small (but for a few centuries influential) section of a privileged minority of the intellectual community possessed the scientific frame of mind. The vast majority of people (even most of the highly literate elites) have always lacked the scientific frame of mind.


Certain very influential individuals today believe and state that the "central element of a civilisation" (ie a distinctive culture or society), that distinguishes it from other civilisations, is religion (eg Samuel P. Huntington, The Clash of Civilizations and the Remaking of World Order, Touchstone Books, 1998). This statement is inaccurate. The correct proposition is this: The central element of PRIMITIVE civilisations (of the past, present, and future) is RELIGION; whereas the central element of HIGHLY evolved civilisations (of the past, present, and future) is SCIENCE as defined here correctly. In fact science is the one and only DEFINING characteristic of MODERN society. If one wants to really understand modern society, one will first have to understand science.


Science began with these prodigious realisations:

I. No gods, divinities, or spirits act in the physical world; all phenomena have a natural cause;

II. The direct sense-impressions may sometimes be illusory; the phenomenon (=appearance) is not necessarily the reality; and therefore a non-obvious (but still physical) truth may be hidden behind the appearances;

III. Humans (uniquely) as a species, although part of nature themselves, (have evolved and) possess the cognitive tools with which to probe, explore, scrutinise, and understand the hidden realities of nature - and also possibly exploit them.


These original discoveries are the keys that unlocked the still continuing process of further discovery. In the long term, this is the most powerful wealth creator in history, the one that opened the super-highway to riches for the many.

This is not to suggest that these momentous realisations occurred in a single person in one instant. Moreover, it is very unlikely that we will ever know exactly how it happened. But we can be absolutely certain that it happened somehow, and we must all be grateful that it did.

The above listed ground-breaking recognitions are undoubtedly the earliest and most seminal scientific discoveries. This truly momentous point in history must be recognised and celebrated as arguably humanity's highest achievement that set the Earth (humanity's home in the universe) in motion, so to speak. This colossal achievement out-classed and still over-shadows everything that has happened before or since. During the last two centuries, we have collectively harvested the ripe technological fruit grown on the now mature and highly productive scientific tree that germinated from the epistemic seed that was planted 2500 years ago by the first cultivators of natural philosophy.


All those who value the artificial comforts of our technological civilisation (and who also want to know whom to be grateful to for this material opulence) have got to really understand all these challenging points. Otherwise, they will neglect the greatest heroes - the discoverers and inventors - as the vast majority of people ignorantly and unjustly do. Moreover, appreciating the force of these arguments is a must for those who want the best possible future for our descendants. The latter can be brought about only by the optimum advancement of science, technology, and medicine. Naturally, such an optimum advancement requires the CORRECT understanding of science, its method, and the resulting knowledge.

Of all conceivable candidates competing for the title 'humanity's finest hour', the case considered here is surely the best. Churchill's memorable (para-)phrase also applies: "Never in history have so few achieved so much for the benefit of so many." Such is the huge debt that everyone owes to them.


The following are a few examples of excellent candidates for early fully-fledged science:

I. The (inter-related) concepts: Inductive generalisation, deductive logic, mathematical theorem, rigorous proof.

II. (What is commonly considered to be) The first mathematical law of nature: Uniform strings produce harmonious sounds when their lengths are in simple numerical ratios: 1 : 3/4 : 2/3 : 1/2.

III. The fundamental 'constitutional' theory of the universe: All matter in the universe consists of 'atoms' (ie 'building blocks') moving and interacting in space; nothing is ever created or perishes, but only combines and separates.

IV. The next most significant universal scientific theory: (The (related) propositions that) the Earth is spherical and that the Sun and Moon and all the stars are (not mere lights in the sky but instead) large massive material spherical bodies like the Earth.
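The string-length ratios in item II can be checked with a few lines of arithmetic. The sketch below (plain Python, using exact fractions) assumes only the standard physical relation that the pitch (frequency) of a uniform string varies inversely with its length; the interval names are the conventional musical labels for the resulting frequency ratios.

```python
from fractions import Fraction

# The string-length ratios cited in item II: equal strings sound
# harmonious when their lengths stand in these simple ratios.
lengths = [Fraction(1), Fraction(3, 4), Fraction(2, 3), Fraction(1, 2)]

# Assumed physical relation: frequency is inversely proportional to
# string length, so each length ratio yields a simple frequency ratio.
interval_names = {
    Fraction(1): "unison",
    Fraction(4, 3): "perfect fourth",
    Fraction(3, 2): "perfect fifth",
    Fraction(2): "octave",
}

for length in lengths:
    freq_ratio = 1 / length  # relative to the full-length string
    print(f"length {length}: frequency ratio {freq_ratio}"
          f" ({interval_names[freq_ratio]})")
```

Shortening the string to 3/4, 2/3, and 1/2 of its length thus raises the pitch by a fourth, a fifth, and an octave respectively - the classical consonances, and (as the essay notes) commonly considered the first mathematical law of nature.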

It is remarkable that the one important point that was understandably missed in the fundamental 'constitutional' theory of the universe was the hierarchical structure of all matter in the universe, and that the unfortunately named "atom" (= indivisible) is not the ultimate in the division of matter. Rather it is just another spatial step in this universal (and possibly endless in both the infinite and the infinitesimal directions) hierarchy. Perhaps now it is too late to replace the unfortunately named "atom" (whose literal meaning is 'indivisible') with an etymologically correct term. (Or maybe it is not!) We must always explain its post-19th century use as a chemical 'building block'. The lesson we must draw from this unfortunate incident in the history of science is that we must always be careful in how we move forward.

Apart from this single blemish, every subsequent painstaking elaboration of, or improvement on, the above early scientific propositions has been a minor adjustment or refinement of some marginal detail.

It is true that much early (and, remarkably, also recent) work in science turned out to be wrong and consequently forgotten. (But those were the early days, with no precedent and little experience.) The fashionable view in recent decades has been that ALL scientific knowledge is impermanent and transitory. One must wonder how long one still has to wait for the just cited 2500-year or so old (and still obviously robust) articles of scientific knowledge to be refuted and replaced by still other ephemeral work. Could it possibly be the case that these most ancient items of scientific knowledge have never been refuted because they are indestructible eternal truths?

Another early theory (circa 400 BC) that may yet be spectacularly verified is that attributed to Metrodorus of Chios: "It goes against nature in a large field to grow only one shaft of wheat, and in an infinite universe to have only one living world."

Yet another remarkable early theory that was repeatedly confirmed in the twentieth century was enunciated in the fourteenth century by Ibn Khaldun (1332-1406) in Muqaddimah: "Famines are not the result of the land's incapacity to cope with increasing demand, but of the political chaos and physical oppression that invade the state in its decline."

Thucydides of Athens (5th century BC) was a pioneer of scientific history. In his History of the Peloponnesian War, Thucydides claimed: "My work is not a piece of writing designed to meet the taste of an immediate public, but was done to last forever." As this was written in the 5th century BC, the claim has already gone a very long way to being fulfilled. Is "doing work to last forever" too much of a challenge for the 21st century AD research scientist?


The utilisation of scientific knowledge for harnessing the resources of nature (for either good or, regrettably, bad) is an optional extra that has motivated many but by no means all scientists. In fact it is unlikely that the exploitation of scientific knowledge was high in the list of possible motives of the founding fathers of science. It is also doubtful that they ever recognised the great potency either of their specific discoveries or of the scientific method in general. Democritus of Abdera (5th century BC), of "atom" fame, was one of the early scientists. Not untypically, Democritus is reputed to have preferred to have discovered one more true cause than be King of Persia. In a similar spirit, in his Academica Cicero talks of the inner drive of the scholar: "curiosity drives him on much more strongly than the carrot of promised rewards". Following this virtuous tradition (inaugurated in the 6th century BC), for very many scientists the one and only objective is knowing for the sake of knowing, and the resulting knowledge is its own reward.

It must be emphasised that there existed little scientific medicine and technology anywhere (not even in Europe) before the nineteenth century. Before the era of Leonardo da Vinci (circa 1500 AD), only a handful of individuals (notably Archimedes in the third century BC) seem to have appreciated the utility and potency of scientific technology. As for the first unambiguous articulation of the tremendous emancipatory capacity of scientific medicine and technology, that seems to have been made by Francis Bacon in the early seventeenth century. However, as indicated above, Bacon's ambitious great project really took off as recently as the nineteenth century.

The philosopher Martin Heidegger correctly observed that there is no etymological link between the terms 'technology' and 'science'. The 'techn' in 'technology' and 'technique' is the Greek for 'art' (as in 'artful'), and 'art' is the Latin for 'skill'. The early Greek pioneers of science insisted on the careful distinction between 'techne' (= traditional practical know-how) and 'episteme' (= scientific knowledge), which the speakers of Latin later called 'scientia', and which the speakers of English today call 'science'.


This is yet another point that is almost universally mis-understood, so it needs to be stated explicitly and correctly: The highest form of art is the technological invention, and the highest form of literature is the scientific text.

The following interesting pronouncement is attributed to the eminent professor of semiotics and novelist Umberto Eco: "I write novels because I don't understand what happens in the world. If I had a clear idea I would write a scholarly work." ("They said it", Daily Telegraph, 14 October 1995) (Evidently, writing novels - even good ones - is comparatively easy.) The reader is invited to consider this statement with the utmost seriousness that it deserves.


The standard English rendition of the first line of the creed of Nicaea (325 AD) is: "We believe in one God the Father Almighty, MAKER of all things visible and invisible;"

The word 'MAKER' here is a gross mistranslation of the corresponding word in the original Greek text, which is 'POIHTHS' (POIETES). Of course the correct English translation is 'POET'. Naturally, no human person can be a 'poet' "of all things visible and invisible". However, if the term 'poet' is to be used meaningfully to designate with any justice any specific group of humans, it will have to be the inventors and the discoverers. Small-time 'word-smiths' like Homer and Shakespeare may be skilful at telling a good fictional(-ised) story, but as literal 'poets' they are of little consequence. Obviously, a certain class of arrogant impostors appropriated the term 'poet' in order to inflate their importance.

But if the definition of genuine 'poetry' is as given above, what then is the correct definition of conventional "poetry"? Conventional "poetry" is nursery rhymes for grown-up infants. Similarly, a "novel" is a bed-time story, again for grown-up infants.


It may be useful to also point out here that the 'gn' in 'dia-gnosis' and in 'gnome' is, in evolutionary linguistics, the same as the 'gn' in 'co-gnise', and also the same as the 'kn' in 'know'; also the 'math' in 'mathematics' is the same as the 'math' in 'polymath' (= very learned); finally, the original meaning of 'school' (whence 'scholarship') is 'leisure'.

The significance of the original meaning of 'school' (= 'leisure') to the present (scientific) investigation may not be obvious, so it is necessary to state it explicitly: The social institution 'school' was founded by 'scholars', ie those members of the leisured class who entertained themselves by conducting 'scholarship' in various fields, including of course scholarship in science, and by all accounts they had a really good time. They enjoyed doing it, and they were further pleased by their discoveries. There seems to be no evidence that they did their scientific research specifically in order to exploit the resulting knowledge. Nevertheless, and this is a delightful irony, the (unintended) useful application of the accumulated scientific knowledge over the centuries is the most basic (if unknown) cause for the abolition of slavery throughout the world in the 19th century, and also for the general increase of leisure time for all social classes in much of the world in the 20th century.

For most school pupils in later times, gradual changes in social conditions slowly but surely brought about a surprising change in the meaning of 'school', from the original 'leisure' to its almost exact opposite, 'hard toil'. Those school officials who now decry the recent movement to inject some enjoyment into schooling and make it less of a boring chore would obviously benefit from some schooling in the history of 'schooling'.


The economic advancement of societies with no technology is infinitesimal, if any.

The optimum economic advancement of societies with PRE-scientific technology is linear.

The optimum economic advancement of societies with science-based technology is exponential.

These are tentative conjectures derived from the comparative study of civilisations, and submitted for further scrutiny by the appropriate specialists.

Moore's Law seems to be an elementary corollary of the stated general laws. Enunciated in 1965 by Gordon Moore, Moore's Law states that computing power rises exponentially over time.
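The three conjectured growth regimes, and Moore's Law as an instance of the exponential one, can be sketched numerically. The following is an illustrative sketch only; the starting values, the rates, and the two-year doubling period are hypothetical assumptions (the doubling period being merely the commonly cited figure), not data from this essay:

```python
# Illustrative sketch of the three conjectured growth regimes.
# All numerical parameters below are hypothetical.

def linear_growth(start, rate, years):
    """PRE-scientific technology: a fixed increment per year."""
    return start + rate * years

def exponential_growth(start, rate, years):
    """Science-based technology: a fixed percentage per year."""
    return start * (1 + rate) ** years

def moores_law(start_transistors, years, doubling_period=2):
    """Moore's Law as a special case of the exponential regime:
    capacity doubles every `doubling_period` years."""
    return start_transistors * 2 ** (years / doubling_period)

print(linear_growth(100, 3, 50))                 # 250: fifty years of steady toil
print(round(exponential_growth(100, 0.03, 50)))  # 438: the same start, compounded
print(moores_law(2300, 20))                      # 2300 transistors, 20 years on
```

Even at a modest 3% rate, the exponential regime overtakes the linear one decisively within a lifetime, which is the whole point of the conjectures above.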



There exist widespread and justifiable concerns about the possible influence of certain paymasters on the direction and conduct of research, on the researchers themselves, on their findings, and also on how these findings are applied. These concerns must be acknowledged and subjected to the one and only objective scrutiny in existence: ie that of science. There is also the problem of the unequal distribution of the fruits of applied science. The problem of the unequal distribution of wealth of course predates science. What the critics either neglect or forget or never notice is that without science there would have been no fruits at all of applied science to distribute to anyone. Naturally, this problem and that of who supplies the funds for scientific research and how scientific knowledge is applied are essentially political issues, and in a democratic polity they must engage the due attention of all citizens. This point demonstrates the need, in a democracy, for a scientifically literate public to make informed decisions.

The best solutions to all such essentially political problems (with scientific input), as in all the others, will be arrived at by the judicious application of the scientific method, not by rejecting or decrying it.

Moreover, many critics have lost sight of the enormous benefits and complain incessantly about the (comparatively minor) side effects. Sadly, this has been happening for decades. Again, the solution of this problem is the better application of science, not its rejection.


It follows from all the above arguments that the best possible one-sentence definition of science is this:

Science is the conscious, disciplined, systematic, and sustained endeavour to methodically discover the non-obvious truths - of both nature and society.

The non-obviousness of scientific truths may range from the frequently uncommonsensical, to the often counter-intuitive, to the sometimes surprising, and occasionally to the truly astonishing.


In about 300 BC the first Greek King of Egypt following Alexander's conquest, Ptolemy I Soter, asked Euclid if there were not an easier way to learn geometry. Euclid is said to have replied: "In the realm in which you are King there are roads for the common people and there are roads reserved just for royalty. In geometry there is no such royal road."

George Bernard Shaw is reported to have said: "The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man." Adapting the world to oneself requires a scientific understanding of the world. Shaw's perceptive remarks capture the spirit of scientific endeavour perfectly.

CAUTION: Another instructive lesson that one must learn from the long history of science is that theorising must be securely grounded on the solid foundation of careful observation and sound logic. Otherwise, wild speculation compounded by indifferent logic will invariably become completely divorced from reality, and then the careless speculator will go over the top and fall over the edge of the precipice into the abyss of pure phantasy.


Needless to say, the truth is the most important thing that anyone can know. It is the simplest of truisms to say this, but in our benighted (post-)modern era, this truism needs to be said and explained until it is no longer questioned. In the thick smoke of the suffocating (mis-)information overload that has plagued our (post-)modern world, free expression can help to ensure that unpopular truths may be communicated, debated, and better evaluated. However, without the correct understanding of the fundamental concept 'truth', everything loses meaning.

In this context, it must be noted that some very senior spokesmen of the science community sometimes say words to the effect that the hallmark of science is independence of thought and freedom of expression. The most notable example is Sir Michael Atiyah who expressed this precise viewpoint in his 1995 Presidential Address to the Royal Society (published in both the Financial Times of 19 Dec 1995 and the THES of 5 January 1996). Atiyah asserted that "independence of thought really is the hallmark of a scientist".

Regrettably, this viewpoint is grossly inaccurate, and so it has to be corrected. The simplest way to expose the grave flaw in the above common misconception is by considering the proverbial chimpanzee who is given both a typewriter and the freedom to type indefinitely. How long will it take to generate a single line from (never mind the entirety of) Shakespeare's writings?
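The force of the chimpanzee argument is arithmetical, and a back-of-the-envelope calculation makes it concrete. The particular line of verse, the 27-key alphabet, and the one-keystroke-per-second rate below are illustrative assumptions, not figures from the text:

```python
# Expected order of magnitude of random attempts needed for a chimpanzee
# at a typewriter to produce one given line of Shakespeare, assuming 27
# equiprobable keys (26 letters plus space), ignoring punctuation and case.

line = "shall i compare thee to a summers day"  # 37 characters
keys = 27

attempts = keys ** len(line)            # ~9 x 10^52 equally likely strings
seconds = attempts * len(line)          # one keystroke per second
years = seconds / (60 * 60 * 24 * 365)

print(f"about {attempts:.1e} attempts, i.e. {years:.1e} years of typing")
```

Even on this crude estimate, the waiting time exceeds the age of the universe by dozens of orders of magnitude; freedom to type, without correctness, produces nothing.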

The most basic hallmark of science must be CORRECTNESS of thought, expression, and execution. Apart from accidental discoveries that can be made by anybody, it is 'EPISTEMOLOGICAL CORRECTNESS' that CAUSES discovery, invention, and advancement. Freedom merely FACILITATES the communication and dissemination of discovery and invention (and, naturally, of everything else).


The Swedish poet Thomas Thorild (1759-1808) is credited with what everyone must understand to be a very great aphorism which, since 1887, graces the building of the University of Uppsala, Sweden, as a splendid epigram:

"Tänka fritt är stort men tänka rätt är större" - "To think freely is great, but to think rightly is greater."



This superb epigram ought to be displayed prominently at all places that entertain any serious pretensions to being 'intellectual'.

The early Greek culture was not unique in tolerating a certain degree of individual initiative, free argument, and vigorous competition. (For example, the contemporary civilisations of the Phoenicians, Hebrews, etc. also permitted a certain degree of initiative, argument, and competition.) However, although the argumentative and competitive elements in early Greek culture naturally facilitated the advancement of science, they did NOT cause it - neither the beginning of science as such nor its subsequent advancement. The cause, of course, was and remains 'epistemo-logical correctness'. This is essentially the traditional scientific world-view and method (plus the resulting truths) as articulated and gradually refined by the likes of Thales, Pythagoras, Thucydides, Aristotle, Archimedes, al-Khwarizmi, al-Kindi, Ibn Khaldun, Viète, Bacon, Galileo, Descartes, Newton, etc.


In the final analysis, if there is one thing that completely and unambiguously separates the human species - HOMO-SCIENTIFICUS - from all other biological species (and culturally raises humans above animals), it must surely be 'science' as correctly defined here. All other proposed criteria (tools, language, thought, aesthetics, ethics, etc.) do not quite succeed in doing this conclusively.


There is a very popular creed commonly known as "holism" whose fundamental postulates, too, are very well known:

"Less Is More"

"The whole is greater than the sum of its parts."

"Holism" is supposed to be about the 'whole', but it is so full of (epistemo-)logical holes that a more appropriate appellation would be "Hole-ism".


'Epistemo-logical correctness' entails 'EPISTEMIC (W)HOLISM'. 'Epistemic (w)holism' is concerned with 'overall correctness' and the 'integrity of the whole', which of course are its defining basic tenets. The correct scientific mathematics consistent with 'epistemic (w)holism' are these:

"Less Is Less"

"More Is More"

"Equal Is Equal"

"The whole is exactly equal to the total sum of all its parts."

Although 'Hole-ism' is a formidable adversary of 'Epistemic (w)holism', the rational person has only one rational choice: 'Epistemic (w)holism'.


'Truth' must be unique, objective (= inter-subjective), permanent, constant, unchangeable, invariable, immutable, imperishable, indestructible, theory-free, ideology-transcendent, universal, eternal, ultimate, absolute; and also in principle (but perhaps difficult in practice) discoverable, accessible, inspectable, attainable, amenable, tractable, knowable, comprehensible, verifiable, effable, and communicable.

Any use of the term 'truth' which does not satisfy one or more of the above defining features is fatally flawed, and is bound to lead, sooner or later, to "anything goes" - as indeed it has.

Contrary to the popular misconception, there is only one correct 'world-view'; all the others are '(un-)world(-ly) phantasies'.


Proven scientific truths entail scientific certainty. In its turn, scientific certainty entails scientific rational dogma. Thus the valid proposition "healthy degree of scepticism" must be complemented by the equally valid - and equally indispensable - proposition "healthy degree of dogmatism".

To be sure, the rejection and demonisation of rational scientific certitude generated a pandemic of pathological public dogma-phobia, in addition to the enormous intellectual vacuum. Predictably, the various dogmatic so-called "fundamentalist" religions gratefully stepped in to fill this gigantic intellectual gap with their transcendental metaphysical "certainties", to the huge detriment of society at large.

In the (post-)modern era, the term 'dogma' has come to mean irrational, usually religious, belief. This is evidently the result of the Christian Church hijacking the word 'dogma' for its own beliefs some two millennia ago. The term 'dogmatism' acquired further unpleasant connotations in the 20th century as a result of the actions of people like Stalin and Hitler. (Post-)modernist logic therefore deduced that all kinds of certaintism and dogmatism must be as wrong and as evil as Stalinism and Hitlerism - the ultimate sin. But the non-religious, philosophical use of the term 'dogma', as a proven tenet or justified opinion, and of the term 'dogmatism', as a positive assertion of reasoned belief, preceded the Christian Church by several centuries, and that use is still recorded in some of today's dictionaries.


It turns out that if one is not a fundamentalist, ie if one has no fundamental principles or dogmas, one cannot really have anything else, ie one is a nihilist. Between rational fundamentalism and irrational nihilism there are no intermediates. And between logical positivism and illogical negativism there is only irrational nihilism.

To the standard postmodernist objection that "truth is elusive and never attainable", I devised the standard reply:

"How can one be certain that one knows the truth? This is often difficult to answer. But we suggest that this is the very question that every professional (not only scientists and philosophers but also historians, physicians, journalists, police officers and so on) ought to be trying to answer, instead of denying the very existence of truth. If one does not do so, this conduct must be seen for what it really is: a breach of professional duty. The question is whether on the available evidence a hypothesis has been refuted, or verified, or is still open to investigation. In the latter case one has to be sceptical, in the former dogmatic." ("Where Science Has Gone Wrong", Nature, 1988; 333: 389)


As in all complex human affairs, there are always many causes and contributory factors. But if (post-)modernism has a unique origin, it is the rejection of scientific 'truth'. In other words, (post-)modernism originated IN science. The (post-)modernist revolution first took place in science decades ago and, as a result, the old-fashioned ideas about 'science' and 'truth' advocated here were characterised as "scientism", denounced as wrong, and demonised as evil and dangerous. Naturally, the cognate terms "scientist", "scientific", and "science" have by association suffered dearly as a result. In fact this is the most dangerous (because unrecognised) cause of anti-science. M. Psimopoulos and I explained that this is the exact point "Where Science Has Gone Wrong" (Nature 1987; 329: 595-598; Nature 1988; 333: 389).


The (post-)modernist revolution succeeded in replacing the concept of permanent truth with that of the (transient) paradigm, and "healthy dogmatism" with "unhealthy (and invariably bogus) humilitism". This ushered twentieth-century culture into a veritable new Dark Age of intellectual decline.

The medieval Dark Age was essentially decreed by brute force by the then all-powerful Roman Emperors. Mysteriously, the (post-)modern Dark Age came about in the ostensibly free and tolerant culture of our Europe and America - the so-called 'West'. Arguably, this is the greatest intellectual tragedy in history.

The ultimate cause of the oxymoron 'skill-less art' is the earlier and greater oxymoron 'truth-less science'. (Oxymoron = acute folly)


Charles Darwin's book The Descent of Man (1871) was about the genetic extraction of Homo-sapiens. Jacob Bronowski's book The Ascent of Man (1973) was about the intellectual rise of Homo-sapiens. Tragically, we may now be witnessing "The Descent of Man" - in the sense of not Darwin but Bronowski.


Watts T L P ("After postmodernism", Lancet 2000; 355: 149) stated: "[Logical] Positivism foundered on the criterion of verifiability because it was not verifiable." This is clearly the philosophically most basic cause of the supposed discrediting of both 'logical positivism' and 'scientism' and the alleged triumph of '(post-)modernism'. Obviously it is also for this (ultimately unsound) reason that the original champions of 'Logical Positivism' later turned against it. For example, A. J. Ayer's view on 'Positivism' in his later years was: "Nearly all false" (Bryan Magee, Men of Ideas, BBC, 1978, p. 131). Regrettably, not only the Lancet but practically all supposedly "scientific" forums are only too happy to demonstrate their openness to "dissent" by publishing such negativist "critiques" of science. What these bastions of "openness" curiously do not tolerate is the rebuttals and refutations of 'IL-logical negativism'.

In my 1988 "On the Method and Scope of Research" (In: Belardinelli, E. (editor), Imola Conference on University and Research, Edizioni Martello, Bologna, 1988, pp. 157-192), I went a long way towards verifying the 'criterion of verifiability'. I now want to put it on public record that I have positively completed the verification of the 'criterion of verifiability'. Only this breakthrough and probably no other can obviously rescue for good both 'logical positivism' and 'scientism' from the ruinous threat posed by both 'IL-logical negativism' and 'anti-science'.


"If I have seen further, it is by standing on the shoulders of giants."Isaac Newton, 1675 AD"If I, too, have seen further, it is also by standing on the shoulders of giants like Newton. But before I could climb on the shoulders of giants, I had to struggle to free my neck from under the boot of the dwarves."Theo Theocharis, 1978 AD


The second most important unsolved problem of science is the so-called "problem of induction", which was stated by David Hume in the 18th century and gave rise to the so-called "Humean scepticism". I discovered the key that has unlocked the process of solving this problem too.


'Heuristics' is already a fully recognised scientific discipline that covers the early stages in the process of scientific discovery. It is argued here that a new scientific discipline, to be christened 'apodeictics', also needs to be founded in order to complement heuristics. 'Apodeictics' will cover the regrettably still largely empty ground concerning the last steps that in principle bring the long and tortuous scientific process to its final conclusion - imperishable truth. These final steps of course should constitute the rigorous 'verification' - and not just in pure mathematics.


During the last three or four years (since about 1996), a very curious paradox concerning our highly cultured society has been noted and vigorously debated. This is the quandary of declining standards in education, in the media, and in popular culture generally. This has been termed 'dumbing down'. In fact there seems to exist a general downward trend in every domain of culture, both vernacular and elite, both low-brow and high-brow.

Of course it must be stressed at this point that one must always be careful never to infringe another's inalienable right to be dumb.

Many years before, I pointed out this more basic paradox, in fact the ultimate puzzle of our (post-)modern era: the destructive influence of the endemic anti-science movement in the most successful scientific civilisation in history. I identified the most basic and most dangerous (because unrecognised) cause of anti-science to have been the either negativist or nihilist attitude to science (and its method and knowledge) by the scientific community itself, who thus have mysteriously been the authors of their own misfortune. The whole of culture went wrong because the most basic foundation of culture, namely scientific understanding, went wrong first. A 'domino effect' of a causal chain of spread has been at work since then. This is how the debasement of the entire culture has come about. The ultimate cause of the oxymoron 'skill-less art' is the earlier and greater oxymoron 'truth-less science'. (Oxymoron = acute folly)


  1. Why has the (post-)modernist "Mickey Mouse" grotesque (and worse) art been elevated in this century above classical art?
  2. Why are (post-)modernist "noisicians" widely regarded in this century more highly than classical musicians?
  3. Why has the (post-)modernist "theatre of the absurd" gained so much ground in this century over the traditional theatre of the serious?
  4. Why has factual knowledge and theoretical rigour in all disciplines in traditional education been replaced by "airy-fairyness" in (post-)modernist education?


If one next makes the natural consequent enquiry as to where exactly scientific understanding first went wrong, the following brief (but not perfectly accurate) answer can be given:

In his regular column "Hard drive" in the London Daily Telegraph weekly supplement Connected, Peter Cochrane is described thus: "Peter Cochrane holds the Collier Chair for the Public Understanding of Science and Technology at the University of Bristol." In the opening paragraph of his "Bricks in an unreal city" (Hard drive, 10 February 2000), Peter Cochrane wrote with evident admiration: "Richard Feynman was ... a founding father of our most fundamental atomic understanding. One of my favourite [Feynman key pronouncements] is the shrewd: 'I think we can safely assume that no one understands quantum mechanics'." (emphasis added)

The curious "no one understands quantum mechanics" viewpoint articulated in the very last year of the 20th century by a Professor for the Public Understanding of Science and Technology at a leading University has been the standard viewpoint of establishment science throughout the 20th century. I recognised the stark and gross inconsistency pointed out here from the very beginning of my scientific studies in the 1970s when I also devised my standard rebuttal (published in a Letter in The Listener): "A theory that no one understands is not scientific but hopelessly mystic." ("Men of Ideas", The Listener, 4 May 1978)

As indicated, the above is a brief but not perfectly accurate answer to the enquiry as to where exactly scientific understanding first went wrong. A full and accurate answer is too long to be included in this essay; it will be published separately soon.


Regrettably, my warnings since the 1970s were not heeded, and the situation is not improving. The following is a recent and by no means untypical example, but noteworthy in that it spans the Atlantic: the one Lecture (out of hundreds) from the AAAS millennium conference in Washington DC in February 2000 that the UK daily newspaper The Independent selected to publish was given the very suggestive but disappointing (and utterly typical of our (post-)modern era) title "Cherish mistakes, since to err is science" (Douglas Allchin, Podium, Friday Review, 25 February 2000, p. 4). Sadly, this has been the (sub-)standard official line for many decades.

Future generations will surely regard with puzzled and amazed incomprehension the anti-science sentiment that dominated the benighted last decades of the 20th century, and its most basic cause - the prevailing negativism in the scientific community itself that unthinkingly rejected scientific truth and certainty, and dogmatically (but frivolously and inconsistently) preached anti-certaintism.


The (post-)modernist and "politically correct" idea of "truth" - subjective, relative, parochial, impermanent, ephemeral, transient, perishable, destructible, theory-laden, ideology-dependent, falsifiable, unprovable, changeable, variable, surreal - as well as the traditional religious idea of "truth" - untestable, unknowable, incomprehensible, ineffable, transcendental, other-worldly - (or any confused mixture of the two) are of course meaningless, incoherent, untenable, and, needless to say, untrue.

Probably beginning with Bertrand Russell, all Professors of philosophy everywhere throughout the 20th century have taught universal anti-certaintism. However, I have yet to find a refutation of Descartes's brilliant argument (from the 1630s) that the proposition 'I think' is indubitable: Any and every attempt to disprove it, only succeeds in proving it!


Romantic "truth" is alleged to be found either through attaining harmony with nature or through spiritual exploration of the inner self. All this is too airy-fairy and mystical to be debated meaningfully.


On the meaning and content of religious "truth", countless millions of individuals have speculated ceaselessly (and often fancifully) all the way to the grave and, apart from the grave, arrived at no other final destination. However, there exist some important questions about religion that are amenable to objective and definite answers, notably these:

- Which particular discoveries were made by means of either divine or spiritual contemplation?

- Which specific inventions were caused by godly inspiration?

Both the theological and romantic versions of "truth" fail the crucial test of objectivity (= inter-subjectivity).


Any "paradigm" that appears merely to work and does not claim to be (at least an approximation to) a truth of nature is a SantaClaus-type infantile theory and cannot legitimately claim to be scientific. For sound scientific practice in general, the terms 'truth' and 'reality' are indispensable. Moreover, any non-accidental advancement (both theoretical and practical) in every scientific field is heavily predicated on knowing and using correctly a theory that is close to the truth. In fact the closer the practised theory is to the truth, the greater is the theory's effectiveness (both theoretical and practical), and also the greater the probability of advancement. In genuine science, the "but-the-theory-works" theory of knowledge does not really work. In the final analysis, at best it is an infantile theory; and as explained above there is also a less charitable evaluation.


The less charitable evaluation in question can be best demonstrated by recounting the following fascinating story (involving the 'epistemology' of a popular species of bird - the turkey) that was beautifully narrated by Richard Dawkins in his River Out of Eden (1995):

It has been established by zoologists that a mother turkey protects her hatchlings quite competently by means of a model that (in most cases) works quite effectively: anything that moves and does not make the characteristic hatchling sound 'glu, glu, glu' (eg cats, snakes, etc), the mother turkey attacks and repulses. Someone, however, carried out the following ingenious experiment: this bright, if cruel, experimenter somehow caused the mother turkey to lose completely her sense of hearing. The first thing that the mother turkey then did was astonishing but easily explainable by the "but-the-theory-works" theory of knowledge - she attacked and killed, one after another, all her own hatchlings.

There must surely be a lesson buried somewhere deep in this extraordinary but true story for the fans of the "but-the-theory-works" theory of knowledge.


The example that best illustrates the indispensability of 'truth' and 'reality' is the millennia-long history of astronomy's endeavour to model accurately the movement of the planets. Ptolemy's geocentric model of planetary motion worked adequately for the purposes for which it was designed (keeping a calendar and predicting the positions of the planets and the eclipses of the Sun and Moon).

Imagine a public event in 1600 AD exhibiting terrestrial and celestial globes, orreries, clocks, sextants, and other such ornate instruments and impressive devices. Consider also the claim, then, that all these ingenious devices and the then (fairly successfully) practised calendar and ocean navigation PROVE the geocentric model. How could someone then conceivably challenge this powerful argument?

From the (for many centuries) apparently empirically "adequate" and practically "reliable" (but essentially untrue) Ptolemaic model, it is impossible to derive either Kepler's or Newton's laws. As Kepler himself fully realised, one has to discard (the essentially untrue) Ptolemaic model and instead use the (approximately true) Copernican model in order to get to the (closer to the truth) Keplerian model, and subsequently to the (still closer to the truth) Newtonian model. It is clear that Newton also understood this point. This is exactly the reason why Newton quoted the famous phrase: "If I have seen further, it is because I stood on the shoulders of giants."

Owing to the periodic character of planetary movement, the geocentric model could (and indeed still can) successfully make ROUTINE predictions concerning the observed positions of the planets in the sky from a terrestrial observatory, but it never made any NOVEL theoretical predictions or practical applications such as stellar parallax, the Foucault pendulum, the black box of inertial navigation, the Global Positioning System, etc. The latter, of course, can only be made (as indeed they have been made) by using the one TRUE model and no other.


On the very important and pertinent matter of practical accomplishments, without the Newtonian model it is impossible to achieve guided rockets, space travel, satellite communication, the Global Positioning System, etc. Accurate measurement alone, or sophisticated experimentation alone, or meticulous data-compilation alone, or complicated mathematical computation alone, or any type of trial-and-error alone, or even any combination of all these will never lead to the above tremendous accomplishments. (For, putting it rather crudely, it is like letting the proverbial chimpanzee type away on a typewriter and expecting it to reproduce by pure chance a poem by Shakespeare.) The CORRECT theoretical modelling (ie the appropriate 'TRUTH') is absolutely indispensable.

Mysteriously, the paramount significance of Kepler's and Newton's true knowledge as explained here seems to have been buried under the gigantic mountain of (post-)modernist confusion - and thus forgotten. All the well-meaning people who nowadays disregard (and, worse, slight) the possibly true model by rehashing the old chestnut that the present model "ain't broken" and thus needs no "fixing" could be doing as much harm (by obstructing progress) as those who in earlier millennia opposed the (approximately) true heliocentric model on the (here shown to be) untenable grounds that the (now known to be) untrue geocentric model worked. This 'naïve instrumentalist' view of science may not actually be actively killing people, but it is quite possible that it is failing to save lives by neglecting to develop new life-saving drugs and cures.

Thus the concept of the mere "empirical adequacy" of scientific knowledge (that allegedly dispenses with 'truth' because allegedly it is not needed) is empirically proven to be woefully inadequate. The similar concept of the mere "reliability" of scientific knowledge is similarly shown to be notoriously unreliable.

Without the solid foundation of verified truths on which to erect sound and enduring scientific edifices, any flimsy fabrications on the shifting sands of any transient paradigm will (as everyone seems to agree) naturally collapse sooner or later.

For optimum scientific advancement, there is no substitute for good old-fashioned TRUTH. In the final analysis, in the long term it is (closeness to) imperishable 'truth' that invariably delivers the goods, and brings home the bacon.


Thus the much-vaunted "epistemological pluralism" (another basic tenet of (post-)modernism) has been shown to be epistemologically untenable, although "political pluralism" of course remains politically tenable - and desirable.

The rejection of 'truth' is tantamount to killing the proverbial scientific 'goose' that lays the 'golden eggs' of medicine and technology. We still have the already-laid 'golden eggs', but how many more have been missed as a result of this thoughtless, reckless, and ruinous cultural "crime"?


As I have already explained, the rejection or subversion of 'truth' is the exact point "Where Science Has Gone Wrong" (Nature 1987; 329: 595-598; Nature, 1988; 333: 389). The one and only response in the English language anywhere to the serious warnings contained in "Where Science Has Gone Wrong" was a Comment article published on 7 December 1987 in the UK daily The Independent (p. 15) by Dr Peter Gibbins, the then President of the British Society for the Philosophy of Science. The heading of this article was again very telling and disappointing (and again typical of our (post-)modern era): "Never mind the truth: research must pay off"; and so also was its profoundly erroneous conclusion: "Profit does not depend on truth".

In fact, any serious scientific study of the true history of scientific research proves that in the LONG run, profit (from the fruits of the scientific endeavour) does indeed depend PROFOUNDLY (though neither obviously nor straightforwardly) on truth. I hope that I have shown that if scientific research is really to pay off, it must be research for imperishable truths and eternal verities, not for a current consensus or an ephemeral paradigm that appears to work today, only to fail tomorrow.


Another noteworthy concept that has been rendered meaningless by (post-)modernity is 'justice'. Injustice (whether to an individual or to a group) is generally perpetrated by the powerful against the powerless. The correction of injustices is heavily predicated on discovering and making known the (usually suppressed) underlying truths.


The total confusion resulting from the (post-)modernist subversion of the meaning of 'truth' perfectly suits the unscrupulous - both individuals and groups.

Curiously, the most enthusiastic embracers and promoters of (post-)modernist nonsense have been the self-proclaimed political "progressives" who declare the correction of social injustices to be their prime concern. These same people also forcibly expelled the term 'correctness' from its natural home in the sciences and relocated it in politics. Thus emerged the ugly term "POLITICAL CORRECTNESS" (PC). Worse, some of these phoney "progressives" sank even deeper into the (post-)modern mire: they asserted that the concept of "objective truth" is a pernicious device invented by the powerful (white European heterosexual male) specifically to despoil the environment and to exploit and oppress the weak (women, minorities, the third world).

Elected political leaders customarily let down their supporters some years after making their promises. In stark contrast, the intellectual leaders of the (post-)modernist movement harm the welfare of the very persons they purport to care about at the moment they utter their basic principle. Their lofty rhetoric concerning the exploited and oppressed is all about enabling the differently-abled and empowering the differently-powered (or some other similarly colourful PC phrases to the same effect). However, the simple statement of their basic principle instantly nullifies the emancipatory prospects that the correct definition of the basic concepts 'truth' and 'justice' entails.

The rejection or subversion of 'truth' (as defined here) is the "Philosophical Mother" of all PCs, from which all lesser PCs are (epistemo-)logically derived.

Most people believe that PC is merely a language code designed so as not to offend the feelings of vulnerable groups. No: as shown here, it is a fundamental issue of substance. It is about the meaning and application of such basic concepts as 'truth' and 'justice'; and also about the many harmful effects, as explained here, of the prevailing (post-)modernist MIS-conceptions of 'truth' and 'justice' on science, culture, society, and progress.


The "Disconnects between science and the law" Perspective article by David T. Case and Jeffrey B. Ritter (Chemical & Engineering News Vol. 78, No. 7, pp. 49-60, 14 February 2000) stated:

"The clash between scientific and legal truth is thought to be fundamental. Scientific conclusions are subject to perpetual revision, both in detail and sometimes in the fundamentals. This paradigm of a dichotomy between law and science has been recognised and embraced by the U.S. Supreme Court (in 1993 as part of Daubert v Merrell Dow Pharmaceuticals)."

Lawyers (and the entire public) cannot complain that they were not amply forewarned about the grave dangers stemming from the (post-)modernist alleged "fundamental clash between scientific and legal truth".

On 17 and 22 February 1986 BBC2 broadcast Hilary Lawson's Horizon film "Science ... Fiction?", and on 20 February 1986 The Listener published Hilary Lawson's article "The fallacy of scientific objectivity" (pp. 12-13). In a Letter published in the 27 February 1986 issue of The Listener (entitled "Science versus fiction"), M. Psimopoulos, T. Theocharis, and N. Bedding rebutted Hilary Lawson's (post-)modernist preaching, and demonstrated that by literally accepting and faithfully implementing "the fallacy of scientific objectivity", "one can literally get away with murder".


There has been much talk about the need for a 'Hippocratic Oath' for scientists. Invariably, all such talk boils down to the injunction: "Do Good." But as the old saying goes: "The road to ruin is paved with good intentions." More often than not, busybody meddlesome do-gooders end up effectively becoming busybody meddlesome do-badders. This indeed is always the case with those who have demonstrated their complete lack of understanding of such basic concepts as 'truth' and 'justice'.

Aside from the inherent and insurmountable subjectivity of the concept 'good', the command "Do Good" applies to every profession and to every single citizen. Neither moral piety nor ideological preaching should be the primary concern of the scientist. As already explained, the first duty of the scientist must be 'EPISTEMOLOGICAL CORRECTNESS'. Hence, if there is to be an oath specifically for scientists, it will have to be along the lines of: "Uphold the Truth." The oath must also make clear that, if it is to be of lasting value, 'truth' must be of the imperishable type as defined here, not of the perishable (post-)modernist variety of the "current consensus" or "present paradigm" type.


In the cultural chaos that is (post-)modernity, an infinite variety of daft ideas get publicised. In the wide and deep ocean of this (mis-)information overload, what I argue are the correct ideas about the most fundamental principles of our culture have all but disappeared from the public domain. (In the old times it was said that humanity cannot bear much reality. Sadly, in our benighted (post-)modern world humanity obviously cannot bear ANY reality.)

It is high time that all the earnest supporters of science began to re-assert the positive virtues of science (as correctly expounded here at some length), as well as the tremendous positive achievements of science-based medicine and engineering. On the germane point of making the 'truth' public, the playwright Bertolt Brecht can teach us all a rather instructive lesson. In Brecht's The Life of Galileo, there is this dramatic scene:

LITTLE MONK: Will not the truth, if it is the truth, prevail either with or without us?

GALILEO: No, no, no! Only so much of the truth will prevail as WE make prevail!


Without a clear and sound understanding of the current state of affairs of our world, humanity's future prospects are uncertain and nebulous. There can be no such understanding if the crucial events of the past (those that have shaped the present) are MIS-understood, as indeed they are - especially those that have the closest bearing on: THE MEANING OF 'SCIENCE' AND 'TRUTH'. For the best possible future of humanity, a comprehensive plan is needed, informed by the important lessons (a few of which are outlined here) drawn from the correct understanding of both past and present.

- - - - -


Theo Theocharis was born in Cyprus in 1952, and took a degree in Physics at London's Imperial College in 1976. In 1977 he became a dissident in both science and philosophy (as well as conventional politics). In this respect, Theo Theocharis is probably unique in that he became a dissident from the outset, while he was still a postgraduate student and thus before he could get any academic job. After his university degree, his entire CV consists only of his dissident publications. The most notable of these (co-authored with M. Psimopoulos) is a 1987 Nature Commentary article entitled "Where Science Has Gone WRONG" (Vol. 329, pp. 595-598, 15 October 1987). Following this highly critical, dissenting, and whistle-blowing publication, Theocharis has been unable to obtain any employment, although he has continued ever since to generate research material of both sound (if controversial) scholarship and uncompromised integrity. Theocharis's most recent (easily accessible) publication is: "What's WRONG with science", Issues in Science and Technology, Vol. XVII, No. 1, Fall 2000, pp. 24-25; online: http://bob.nap.edu/issues/17.1/forum.htm (last forum item).

200A Merton Road

London SW18 5SW



E-mail: TheoTheocharis@hotmail.com, theotheocharis@ic4life.net