by Kenneth Francis (March 2022)
There’s been a lot of talk lately about AI being on the cusp of reading our thoughts (consciousness). But can a computer or any other form of AI ever achieve this? It’s highly unlikely, because mental states are not physical, even though the brain manifests them in our communication with other human beings and with the external world.
Computers and other forms of AI are 100% physical devices, so the most they could ever do to our thoughts is disrupt them in the ‘machinery’ of the brain. In other words, they could frustrate the ‘ghost’ (consciousness) in its operation of the ‘machine’ (the wet physical brain).
Think about it: How can a robot ever have a spiritual vision of reality? How can it even be emotional? And even if it could tap into our feelings, how confused would it be by such a complex network of semantics and emotional states? In fact, it would probably view movies like Deliverance or Fatal Attraction as love stories.
Last month (February 2022), in an article in the U.S. Sun newspaper, Ellie Cambridge wrote: “The first ever recording of a dying brain has revealed we might relive some of our best memories in our last moments. Scientists accidentally captured our most complex organ as it shut down, showing an astonishing snapshot into death.”
According to the article, the study was published in Frontiers in Aging Neuroscience. Its authors claimed that their data provided the first evidence from a dying human brain in a non-experimental, real-life acute care clinical setting, and suggested that the human brain may possess the capability to generate coordinated activity during the near-death period. This is just one single case study, and of a brain that had already been injured by epilepsy.
The German philosopher Gottfried Wilhelm Leibniz (1646–1716) wrote: “It must be confessed, moreover, that perception, and that which depends on it, are inexplicable by mechanical causes, that is, by figures and motions. And, supposing there were a machine so constructed as to think, feel and have perception, we could conceive of it as enlarged and yet preserving the same proportions, so that we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception. This must be sought for, therefore, in the simple substance and not in the composite or in the machine” (Monadology, Section 17, 1714).
The debate on consciousness and thought (the mind/body problem) goes back to Plato and ancient Greece. However, it really got kick-started with Descartes and Locke during the 17th century, the former famously saying, “I think, therefore I am.” Could a robot ever think such a thing? Very doubtful.
Without even a basic understanding of what consciousness is, the idea of putting it into a machine, while not difficult to imagine in the fantasy of science fiction, becomes almost impossible to grapple with when it comes down to real and practical implementation.
Over the last twenty years, the field of AI has made many claims that one day we will create a robot with a consciousness similar to, if not the same as, our own. So far, no one has achieved this.
Yet, even though the field of AI has evolved toward solving practical problems such as complex scheduling, rather than toward emulating human behaviour, many AI scientists still believe that the original goals of AI will become a reality in the near future. Unfortunately, a lot of scientists don’t believe in the human soul.
But even without recourse to the notion of a soul, there are a number of ways in which the mind is not simply the brain. The wet, grey organ known as the brain does not have mental states such as love, hate or sadness. Then there is the problem of propositional attitudes such as fearing, hoping, desiring, wishing, dreading and thinking.
As to where the mind resides, that is the biggest mystery in philosophy. Although it interacts with the brain, it can’t be a kind of invisible vapour hovering above one’s head. Nor can it be located in some part of the universe, as it’s supernatural and outside of space and the material world.
If epiphenomenalism (the view that the mind is merely a causally inert by-product of the brain) is true, then how come fake drugs sometimes work through the placebo effect? And at what point in evolution did the atoms in brains develop morals? That logic, reason and truth could evolve out of a material process that is aimless, purposeless, unguided and unaware of itself seems absurd.
Materialism would never have endowed human beings with consciousness, as zombie clones would reproduce and spread their genes more effectively. If intelligence emerged from brute matter and not from a Superior Mind, then robotic ideas about life and the cosmos are not to be taken seriously, because they could never tackle metaphysical questions reliably. Should one trust a machine made of blind chemicals, or computer software functioning on syntactic information but oblivious to philosophical truths?
For if a robot could read thoughts as material entities, those entities would also have to include the laws of logic, which is absurd. One can hit a rock or a tree with a hammer, but not a law of logic. The same applies to objective moral values and duties, aesthetic values, love, beauty, the existence of minds other than one’s own, the reality of the past, and the validity of science and the existence of the external world.
Could a carbon android at the funeral of a child read the thoughts and emotions of the mourners? Or would it view the death of a loved one as nothing more than the rearrangement of atoms in a wooden box?
Some scientists speak as though the mysteries of consciousness and mental experience have been fathomed. That is certainly not the case. There is something non-material about thought and experience which has not been explained by scientific materialism.
To draw an analogy: the brain is like a motor car and the mind a human who drives it. If the car (‘brain’) is damaged or the engine not tuned properly, then the driver (‘mind’) will not be able to drive it properly. So, rest assured: The only ones that can ever read your mind are yourself… and God.
__________________________________
Kenneth Francis is a Contributing Editor at New English Review. For the past 30 years, he has worked as an editor in various publications, as well as a university lecturer in journalism. He also holds an MA in Theology and is the author of The Little Book of God, Mind, Cosmos and Truth (St Pauls Publishing) and, most recently, The Terror of Existence: From Ecclesiastes to Theatre of the Absurd (with Theodore Dalrymple).
Follow NER on Twitter @NERIconoclast
5 Responses
To answer your question, “And at what point in evolution did the atoms in brains develop morals?”: when man discovered he could choose. In situations where two choices are not equal, where one is ostensibly better, that is, more life-affirming than the other, and where one is cognizant of that fact, then choosing it, that is, doing what is right, became synonymous with the moral imperative.
On Naturalism, humans (and other creatures) never ‘discovered’ they could choose, as they were always endowed with instinctual decision-making attributes based on appetite or survival requirements. Similarly, robots can ‘decide’ when ‘choosing’ to make a move in chess or to perform other functions. The information fed into the system will influence the moves it makes. But my question concerns the existence of freedom of the will and of choosing morally, based on objective moral values and duties. On atheism, atoms just blindly bump into one another. Also, be careful using terms such as ‘ostensibly better’, ‘one choice better than the other’, or ‘doing what is right’ (on atheism, says who?). Such terms, taken subjectively, bring great comfort to the psychopaths of this world.
“Unfortunately, a lot of scientists don’t believe in the human soul.”
It’s much worse than that. Materialism is the dominant philosophy of academic science and of the academy at large. Because of the success and resulting prestige of the physical sciences, which are based mainly on reductionism, much of the liberal arts emulates the philosophical outlook of scientists; even philosophy departments are materialist in orientation.
“Some scientists speak as though the mysteries of consciousness and mental experience have been fathomed.”
The best data from consciousness research emerges with the application of psychedelics [see https://www.amazon.com/review/R93YN1VTGE33U]. This type of research can be characterized as ‘non-reductionist’, which should not be a reason to discount it. Certain features of this type of research are similar to those of naturalism, especially repeatability. It is ‘non-reductionist’ in the manner of sociology, near-death studies, and psychiatric studies of young children’s claims of previous-incarnation experiences, e.g. at the UVA Psychiatry department. Interestingly, the data from psychedelic studies supports the view of the immaterial mind and points to certain features of the mind which are unfathomable and must remain so.