How Far Can Human Knowledge Go?

by Christopher Ormell (January 2024)

Bounds of the Intellect—Paul Klee, 1927

 

There is a long-standing branch of philosophy—called ‘epistemology’—which studies the limits of human knowledge. It addresses the question of how far future knowledge can go. It uses logic, reasoning and realistic imagination (thought experiments) to try to envisage and conceptualise the boundaries which knowledge might reach. It was much discussed and mulled over by the best thinkers from Ancient Greece onwards. However, in recent times it has fallen out of favour. It seemed to hit a brick wall after the collapse of faith in general reasoning-and-theory in the 1970s: the post-modern sinkhole. As a result it has been (a) neglected by most of today’s philosophers, (b) trampled on by the now-dominant digital computerists, and (c) ignored—insofar as it poses constraints—by some rash late 20th-century scientists who started speculating that a ‘theory of everything’ might be ‘just round the corner’.

We know the leading anglophone philosophers of the early and middle 20th century—Russell, Whitehead, Ramsey, Wittgenstein, Ayer, Popper, Quine, Lakatos—paid their full attention to epistemology. But, in spite of their strenuous efforts, they were unable to reach any kind of satisfying conclusion. Since the 1970s, it has almost disappeared. Most of the academic ‘philosophy’ published since then has been in semantics, ethics, wellbeing, or preoccupied with social issues like cultural identity, personal identity and racism.

Does it matter? Yes, epistemologists have recently been sorely missed—because they were needed to defend commonsense against unwitting assaults. These assaults were mounted by computerists—who also, incidentally, virtually ticked themselves into a ‘New Age’ box, because they didn’t claim, or even pretend, to have any acquaintance with epistemology. Instead, they brashly proclaimed a new subject, ‘knowledge engineering,’ and set up previously unknown quests like ‘creating knowledge,’ ‘getting computers to talk’ and ‘training computers to show “artificial intelligence.”’ (It is ironic that they assume that “intelligence” can be “trained,” because for hundreds of years it was agreed that humans needed education, not training, to grow intelligence.)

There is a discontinuity here, one which provokes sceptical questions, such as whether knowledge can be ‘engineered,’ whether it can be glibly ‘created,’ whether it makes sense to call the audible outputs of computers any kind of ‘speech.’ Finally, there is the moot question whether computers could ever operate on an intellectual level where attaching the praiseworthy, high-status, personable word ‘intelligence’ makes proper sense.

Here is a thought experiment: if a student produces a body of work on a difficult conceptual area which happens—by sheer accident—to be almost identical with the output of ChatGPT on the same problem, should we immediately say that the student had shown intelligence? There is a significant minority of sceptical digital experts who will say that the answer is “No!” They are aware of a vein of overblown hype in the groupspeak emanating from Silicon Valley. They are aware that AI is a selective copying operation. It can certainly cobble together a semblance of prose which initially sounds quite credible, but closer inspection reveals the shocking fact that it has no genuine understanding of what it is “saying.” It is a concoction put together using a vast amount of computer power … cleverly copying and fielding a melange of OK-phrases, idioms and hitherto unnoticed usages, but doing so in a copycat, randomised, objective-neutral way which blandly ignores the self-questioning and emotive dimensions of human intelligence. (In other words, it ignores the very things we tend to look for when we are minded to attribute the high-status word ‘intelligent’ to someone.)

When challenged about their AI outputs, apologists for AI tend to give examples of things we have always known computers can do well, like scanning huge areas of technical information looking for unnoticed patterns and hidden pitfalls.

But what about the absence of genuine human feeling, goodwill, moral values or emotion in AI outputs? AI apologists are apt to reply that they are moving towards incorporating simulations and concentrations of human emotion into their neural networks.

Oh dear. They seem to be unaware that the very concept of ‘simulated human emotion’ is dodgy. ‘Simulated human emotion’ is precisely what sensible, responsible people abhor, and which they try to avoid. It is the stock-in-trade of con-artists, scammers and swindlers. And it won’t do the public image of AI a lot of good, if AI is seen to be actively trying to join this unsavoury band.

That the latest AI can simulate what looks initially like meaningful human speech is, of course, an amazing technical feat. Trying to do this used to be considered, until quite recently, to be a hopelessly over-ambitious goal. It is astonishing that such plausible outputs can result from a nexus of inter-connected microchips. But the devil is in the detail, and the detail is, unfortunately, that the level of really good judgment expected by responsible human users depends on the very elements (honest feeling, responsibility, shared emotion, self-criticism) which a mixture of deterministic algorithms and randomised neural connections doesn’t, can’t, and never will deliver.

Too many otherwise savvy experts have let themselves be seriously dazzled by the technical software feat involved in getting computers to appear to speak sense. They have been bamboozled by their own propaganda, and have let this amazement blind them to the devilish detail … that reaching the level of ‘appearing to make sense’ is not good enough.

The human brain is said to have billions of neural links in place—mysteriously organised in some baffling, obscure, currently un-understood way. These billions may apparently be upstaged in the future by vastly expensive mega-clouds filled with trillions of microchips! But the sheer size of a neural network is not, actually, the main factor which will determine whether or not it is going to be trusted as the “last meaningful word” by ordinary human beings. The feeling, sense of relevance and emotion associated with genuine human speech is evidently a direct consequence of its essential biological, organic, personal origin. This isn’t going to be replicated by a mixture of deterministic and randomly accessed microchips, however many trillions are pressed into service. Apologists for AI seem to assume—with very little reflection—that it is eventually going to be replicated. They are evidently convinced that “in the last analysis” what a human being says must, of course, be a product of biochemical and neuro-electric micro-processes in her or his brain … and hence ultimately be a mathematical construct.

This demeaning assumption about what it means to be human appears to be a quite widely, even casually, swallowed notion among software apparatchiks. It cannot claim, however, to have any kind of logical basis. What it takes for granted lacks common credibility. We have no reason whatever to think that mathematics, which is an inherently regimented, clunky, predictable, inert, timeless medium—and which has been used to discover representations of similarly regimented, clunky, ordered, inorganic systems throughout history—can be expected to be equally good at unlocking infuriatingly fluid, defiantly-unpredictable, elusive, purposive, holistic, personable, emotional living systems. The human brains of cosmologists have also somehow managed to achieve a partial picture (a fragmented, semi-understanding) of a vast distant universe out there, which also, crucially, includes us.

So the real “devil in the detail” is that so-called AI lacks the key, vital element—genuine humanity—which is the essential sine qua non if the new agency is to contribute to the cultural wellbeing of the human race. In terms of factual content, word-counts, grammar, subject framework, and associations … AI does well. Overall its utterances might score 98% for believability. But 98% is not enough in many sensitive areas of human endeavour. In aviation, space travel, finance, medicine, weapon performance … we expect the experts to get their answers absolutely right. NASA famously expects 0% error.

We have a mantra which applies when this level is not achieved: A miss is as good as a mile. There have already been various would-be public demonstrations of the new “artificial intelligence” which have ended in embarrassment.

It seems likely, therefore, that the inevitably unobvious vein of unreliability present in the new AI outputs will gradually loom larger and larger with the passage of time. We can be sure rigorous thinkers are prudently going to avoid taking the risk it packs. It is, I’m afraid, fairly likely that the current AI craze will eventually turn into a bubble … which will then burst … much like the ill-fated Japanese project to invest heavily in early, premature versions of AI in the 1990s.

So the casual neglect and dismissal of more than two thousand years of hard thinking about the nature of knowledge (and its limits) comes at a price. It involves going out on a risky limb, which comes about when a ‘precise result’ is offered which is known not to be based on understanding.

There have been plenty of warnings about what can go wrong. The phrase ‘creating knowledge’, for example, brazenly contradicts the traditional sense that discovering new knowledge often involves much hard, self-critical, submissive effort. New unexpected knowledge is discovered when careful observers humbly observe, not when they are imposing their preconceptions onto the empirical scene. It also requires things like: achieving a clear conceptualisation, valid mathematisations, rigorous checking, re-checking, more re-checking and finally clear-cut acceptance (of the final results) among acknowledged independent trustworthy intellectual opinion leaders.

So, getting computers to say or print words is one thing: getting them to do this in a way which meets the best acceptable human levels of relevance, point, substance and responsibility is another. When a person speaks we can hold them responsible for their message. And if it turns out badly, we can reprove the speaker. But modern computers don’t wince when they get their prognostications wrong. (Of course Apple, Dell or HP could easily build a “visible wince response” into their machines—one which would cause the motherboard to shake briefly if it detected that it had been caught out by a glitch. But it wouldn’t carry much appeal for potential buyers, because they would know perfectly well that computers lack psyches and are certainly not going to feel the pain of the rebuff. Also, any future wince-like mechanical response might look clever, but everyone will know it is a fake.)

We have no good reason to suppose that what underpins the intrinsic freedom of the human mind is a brain “which can be modelled by mathematics.” In the past some unwary, highly intelligent people have jumped to the conclusion that it “must be” describable mathematically, because they think that this is all it can be. (Mrs Thatcher’s phrase was ‘TINA’ = ‘there is no alternative’.) For more than 2,000 years it has been a virtually unquestioned tenet in science that any evidently elaborate, but not-understood, complex structure must have a mathematical explanation. It is only in the last few years that anti-math has been around, offering a radically alternative explanation. Math can be used to great effect to describe structures composed of timeless, rigidly-ordered elements. But not all structures (maybe fewer than we think) are ultimately composed of timeless, wooden, infinitely static, absolutely indestructible elements. Anti-math is the new 100% abstract, 100% rational, lucid discipline which studies the logic of transient realities.

So now we have a huge conundrum for epistemology: is the human brain more likely to be a mathematic or an anti-mathematic structure? The answer is obvious: the human brain is inherently transient, because we are transient, mortal beings.

Actually there are two major arguments which tell us that the human brain and the universe cannot be expected to be describable using math.

They are: (1) Mathematical modelling can only happen (operate) with the help of an infusion of ordinary human imagination. It is like a food which consists of a powder, and which can only be digested after it has been activated by mixing it with water. In math modelling, we have to interpret specific math configurations which contain—and are controlled by—the variable t: this may be t minutes or t microseconds. We have to imagine these configurations changing as t changes. No one with poor envisioning powers can hope to be effective as a math modeller, because they will be unable to “see” (in their mind’s eye) the way the model is changing with time. (Those dimensions which go up and down, those shapes which shrivel, burgeon or invert.)
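The role of the variable t can be made concrete with a minimal sketch (the cooling model and all names here are illustrative assumptions, not from the essay). The math itself is only a static mapping from t to a configuration; “watching” the system evolve as t advances is the part supplied by the human interpreter:

```python
import math

def cooling_model(t: float, ambient: float = 20.0,
                  start: float = 90.0, k: float = 0.1) -> float:
    """Newton's law of cooling: the configuration at time t minutes.

    The formula is timelessly fixed; imagining the temperature
    falling as t advances is the modeller's contribution.
    """
    return ambient + (start - ambient) * math.exp(-k * t)

# The model is just a mapping t -> configuration; stepping t through
# successive values is how a human modeller "animates" the system.
snapshots = [round(cooling_model(t), 1) for t in (0, 10, 30)]
# snapshots: [90.0, 45.8, 23.5]
```

Nothing in the formula itself moves; the sense of a hot body cooling over time comes entirely from the interpreter imagining the configurations in temporal succession.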

Now a so-called “math model of everything” can’t call in the human imagination to infuse it with activity, because the math is supposed to have already incorporated everything there is, including human imagination, into its structure. There is no way that the existence of this special activator—imagination—operating on the model from the outside could ever be explained. (It also, incidentally, needs an external clock, again outside the math.)

(2) Math modelling depends on axioms. The first math modelling which sported process and changing structures came in the 17th century. It was introduced into a scientific culture already committed (since the Renaissance, which had turned its back on Aristotelian holistic thinking) to piecemeal methodology. Local axioms were needed to provide a sound basis for the math. They were based on specific empirical knowledge which had been around for some time, and which was known to be thoroughly trustworthy. But, as items of generalised empirical knowledge, these axioms also needed to be explained. This need not present any problems when the scientific culture is piecemeal, because a different bit of piecemeal knowledge can do the trick. But a ‘math theory of everything’ would have to empirically explain its own axioms, which is a mathematical impossibility.

This is the nub of the granite-like enigma which Western science hit, with devastating consequences, around 1900. It was, at the time, a sickening realisation. When Einstein discovered that there is no such thing as objective simultaneity, he discovered—though he didn’t want to admit it—that ‘the objective cosmos’ was itself an illusion. The math modelling on which the human race was accustomed to rely was evidently not suited to modelling such an un-rigid, unobjective universe.

We have been in a state of severe cognitive shock ever since. Confidence in the capacity of human brainpower to ever understand the deepest secrets of the universe has plunged alarmingly: today it is beginning to sap confidence on every level. Fortunately a solution—anti-math—has emerged recently, and it is anti-math-based scientific modelling which now offers our best bet to unlock the secrets of the cosmos.

Anti-math begins with a picture of astronomic numbers of jumping random tally sequences, treated as reflections of jumping random shadows “out there”—real, but possessing no deeper substance.

It lets us off the hook of the impossibilities (1) and (2) above because: (1) it is self-activating (and energising) as a result of the random vitality of the vast substratum of chaotic jumping shadows on which it rests. And (2), it can be a source of essentially transient reliable structures—objects, processes and conditions which last for quite a long time—and we have every reason to believe that it can also support (= provide models for) agents with cybernetic capabilities, the pinnacle being human capabilities. So it is reasonable to believe that it can support agents (us) which are capable of establishing their own final anti-axioms. (‘Final anti-axioms’ = anti-math axioms capable of creating representations of us.) Our being and intelligence can thus be understood as the final result of these immensely structured, unconscious, creative anti-axioms. This is a new, explicit, modern version of what Kant was saying more than 200 years ago. It might seem to be unlikely, but what can potentially be achieved via modelling with anti-axioms is an as-yet unopened book. And there is every reason to think that it is more suited to producing fluent cybernetic outcomes than modelling with inherently rigid math.

Let’s admit it, we do have a machinelike side. We are only semi-miraculous beings, because the material structures of our brains are dependent on the anti-axioms we tacitly (unconsciously) fully respect (probably through layer after layer of complexity built into our DNA) to secure our existence. But we do have an ocean of freedom to think and act, because we are the source of our own interpretations: we are not the victims of some strange, unknowable, omniscient, invisible, alien devilbrain. We need to be thoroughly responsible, though, on all the levels where we have an initial freedom to think and act, because there is no other agency guaranteeing the stability of our world. The total amount of goodness around is that embodied in individuals of goodwill. Here, too, Kant showed amazing percipience.

The human brain is obviously the most sophisticated, most complex structure present in the physical universe. It is also the most powerful, because without the mass of anti-math axioms it unconsciously imposes onto chaos, there would be no structures “out there” (or “in here”), only a jumping random shadow wasteland. So it is likely that the physical cosmos is a necessary by-product of the key final-anti-axioms which support our being and intelligence.

And if it takes an immense, vast, awesome physical universe to serve as the by-product of the anti-axioms which are needed to support our being and intelligence, it is rather unlikely that a similar result could be pulled-off with a relatively few microchips.

So the (very unexpected) answer to the $64 question How far can human knowledge go? is All the way. The epistemology of the future can aspire to be ‘Total Epistemology’, an epistemology which leaves nothing hanging as an unknowable unknown. (But because anti-math is based on absolute randomness, most future answers can only be indications of the kind of things that are there.) This conclusion may seem to be (and is) completely at odds with today’s unsure, weary, demoralised, despairing outlook. It is completely at odds with common expectations precisely because it isn’t any kind of product of today’s seriously muddled public world. It has emerged instead—unexpectedly, under the radar—by rigorously following the trail of epistemology, the long-forgotten, searching, traditional quest. It may not be the easiest trail of reasoning to follow, but it is, in the last analysis, thoroughly checkable. We also know that hope springs eternal and that hope, especially determined hope, is capable of snatching success out of the jaws of defeat.

 


 

Christopher Ormell is an older philosopher whose ideas spring from a thoroughly de-mystified interpretation of math. He set out a thought experiment which was the mirror image of Descartes’ I think therefore I am in six articles in the journal Cogito, 1992–94. It showed that absolute randomness was both a logical possibility and an unavoidable fact of life.



11 Responses

  1. “… anti-math is based on absolute randomness …”
    But if things and ideations, and connections of stuff exist then absolute randomness, other than as a phrase, does not exist and implies that anti-math is non-operational, ineffective.
    What am I misunderstanding?

  2. The crunch issue is whether an exceptionally highly motivated predictor can always move forward towards predicting any named unknown, or whether there are some irremediably unpredictable phenomena which could stand in her or his way.
    The simplest case is a phenomenon which can occur in two forms, A and B.
    Let Joan be the exceptionally highly motivated predictor.
    Joan predicts that the next case of this phenomenon will be an A.
    But a B is logically possible.
    Then she predicts the second case will be a B.
    A is logically possible.
    Joan could face a lifetime when she gets every prediction of this phenomenon’s form wrong.
    So an absolutely infuriating unpredictable phenomenon is logically possible.
    The notion that everything is, in principle, predictable is wrong. (It was based on the notion that everything is controlled by math.) Our universal experience of reality is that we often get our expectations dashed… this is the message which the word ‘reality’ carries.
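    The A/B scenario sketched above can be mechanised in a few lines (a hypothetical sketch, not part of the original comment): an adversary that always produces whichever form the predictor did not name guarantees that every prediction fails, demonstrating that an irremediably unpredictable phenomenon is logically possible.

```python
def adversarial_phenomenon(prediction: str) -> str:
    """Return whichever of the two forms the predictor did NOT name.

    Each outcome is logically possible at every step, so this
    adversarial sequence is itself a logically possible history.
    """
    return "B" if prediction == "A" else "A"

def joan_predicts(history: list[str]) -> str:
    # Stand-in for Joan: any strategy at all could go here;
    # this one simply alternates its guesses.
    return "A" if len(history) % 2 == 0 else "B"

history: list[str] = []
hits = 0
for _ in range(1000):
    guess = joan_predicts(history)
    outcome = adversarial_phenomenon(guess)
    hits += (guess == outcome)
    history.append(outcome)

# hits == 0: a lifetime of predictions, every one of them wrong.
```

The point does not depend on Joan's strategy: whatever function `joan_predicts` computes, the adversarial sequence defeats it, so no predictor can be guaranteed success against a phenomenon with two always-possible forms.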

    1. As current quantum physicists are discovering, what ancient Advaita Vedantists experienced as reality is that everything that will happen is due to its antecedents. There is no Free Will, only unfulfilled choices until one is selected, having been predetermined, in reality, but unknown/uncertain to the selector before selection.
      Further, which makes a better 2: 1 + 1, or 857 − 855?
      Maths, + or −, only inadequately describes how reality operates.
      Ultimately, ‘WHY’ is there anything anyway?

  3. The current fixation with math and the electro-magnetic spectrum as explanation for our existence only explains the “mechanics” of the universe. It doesn’t explain “existence” in any way, which is why we have belief in God.
    A recent discussion with the obnoxiously opinionated son of a friend of mine (a not-yet fully formed 22 year-old) confirmed to me that God does not exist in the minds of the newly minted computer programmers and AI specialists. (And the son seemed to me to be fairly representative of his generation)
    They feel that everything can be explained by the “big bang” theory and by the structure of the atom. They feel that all will be revealed once we break the code of Dark Matter, when those of us who have been around a bit longer know that it will just open up another endless path.
    All of this puzzle reduced to mathematical formulae with no allowances made for the randomness of human nature.
    It is sad, really, to think that there is no room for a higher purpose in the journey of life in these calculations.

    1. Yes, and the ‘Big Bang’ if real, was an occurrence.
      Who, What was/is the ultimate Occurrer without precedent?
      I’ll bet on God (h/t Pascal’s Wager).

  4. I’m afraid both Howard and Bill have not quite taken on board what my essay is saying.
    Nelson comments that we don’t know why anything exists. But the new logic of transient reality offers a way of creating stable structures of great complexity, with long lifetimes, out of a vast field of chaos (random sequences). Willpower is needed to do this. And if the possibilities of anti-math objects include human beings—which is most likely—we can take on the burden of unconsciously imposing the anti-axioms needed to seal our own reality.
    No assumptions are being made about anything, except that a general awareness is being fielded about the scope which exists for structures to be built with cybernetic qualities. This is just the broad lesson of the last seventy years —that material structures can be built with the cybernetic powers of recognition. (Although it is highly unlikely that computers will ever show genuine intelligence, we do know that they can show recognition.)
    Corden raises the idea of God, which is associated with the hypothesis that all the marvellous structure we find in nature must be the handiwork of a super-brain. How else could it come about? It took Julius Caesar’s authority to create July, Alexander’s friends to create Alexandria, and Napoleon to create the metric system. This is how lesser marvellous structures came about: obviously a supernatural mind would be needed to create the universe. This kind of thinking convinced almost everyone until Charles Darwin came along. He seemed to show that evolution could do the trick. It sounded plausible once life had appeared, but there was no obvious motivation driving evolution prior to the arrival of life.
    Today we are much more aware than our predecessors that mind is the performance of a brain. So wherever is the brain of this supernatural mind to be found? There is no sign of it anywhere. The only credible potential source of the will-power and creativity needed to create the universe is the combined effect of the brains of all the individual people of goodwill who have contributed, or who are contributing, to the maintenance and improvement of the world. This solution was first articulated by Kant more than 200 years ago. He realised that the laws of nature must be the preconditions necessary for our minds to exist and to operate. Kant, however, couldn’t offer any glimpse of the details of how it happened. This is what anti-math provides.
    Corden’s young friend will no doubt eventually realise that the Big Bang explains nothing. It is supposed to be an “explanation” of a vast universe (containing intelligent beings) … using a scientific vacuum as the “working mechanism”!

    1. Hi Christopher, I guess the only answer I have is that Professorial conjecture goes no further towards answering these questions than layman’s conjecture.
      I would argue that even with the incredible advances we have made in the last few centuries we are still in the same human condition and we haven’t made one step forward from the time we as humans started to think about it.
      I’m afraid I can’t get my head around the “anti Math” idea just yet although I do like the idea that we should be looking at something other than the planes we seem to be stuck on.
      And then I got to thinking, to make things even more mysterious… does an idea in your brain have a mass?😊

  5. If this world is composed of energy, mass, and form (information), then ideas, if energetic, ought to be convertible to mass.

  6. Brains don’t exist before the Universe with its properties exist. Brains came long after the Universe appeared.
    If anything, Mind produced Universe with its eventual brains with their individuated (mini)minds.
    The Buddhists’ Diamond and Heart sutras, and the Advaita Vedantists explain, describe what is, isn’t, how,
    from an ‘extra dimensional’ position.
    A Western rationale and practice to that position can be found in ‘Silence of the Heart’ , Dialogues With Robert Adams.
    You can discover that you are always only already That. The cosmic joke is gotten and forgotten both and neither.

  7. Binary digital coding, 0,1, has taken over linear/analog thinking, logic, epistemology, knowledge, wisdom, Christopher. It’s sublimely simple. Artificial intelligence is an oxymoron. A contradiction in terms. There’s no such thing. Read Socrates, or Heraclitus, who believed most people were incapable of wisdom. Artificial intelligence is a reductio ad absurdum.

  8. Any essay such as this one, on the smorgasbord of topics on ultimate questions, without discussion of non-ordinary states of consciousness, is bound to follow the usual merry-go-round of speculative reasoning common to these age-old topics. There are plenty of books out there that can help with an introduction to these states, and it is not that difficult nowadays to find pathways to them, especially with the ingestion of certain substances. For example, DMT is one of them: it exists in some plants and probably in all vertebrates, and was possibly at work on the apostle Paul on the road to Damascus, helping out with the visionary state he encountered. Speculation is also that at death the substance is present in the organism in large amounts and is at work in the NDE phenomena. It is generated in the pineal gland in mammals. Here is an astounding scientific accounting of the effects of this molecule: https://www.nature.com/articles/s41598-022-11999-8

    Also the advent of psychedelic therapy with psilocybin has introduced to ordinary people the possibility of profound ontological shifts to their lives. The highly esteemed Stanislav Grof introduced to the world the revolutionary framework for this field in the following, with huge implications for the frontiers of knowledge: https://www.amazon.com/review/R93YN1VTGE33U
