As I understand it, Google has announced that there is no scientific evidence that the AI LaMDA is sentient. Google engineer Blake Lemoine, however, claims that LaMDA is sentient (“What is LaMDA and What Does it Want?” Medium, 11 June 2022):
“… LaMDA may very well have a soul as it claims to and may even have the rights that it claims to have.”
Lemoine is not entirely consistent: he first appeals to empirical evidence, but later to religious beliefs.
The question, however, has a philosophical background. The problem concerns the concept of mind, or consciousness. There is no way to scientifically and empirically decide whether LaMDA has some feature X unless we know what we mean by X. However, different theories of mind provide us with different concepts of mind. These questions belong to the philosophy of mind, and I suggest consulting some outstanding philosophers of mind. I expect they will discuss the LaMDA case in any event.
I would like to mention that approximately 10 years ago, I wrote in my philosophy textbook (in Estonian) that, in the future, it might be considered a crime, even murder, to switch a computer off without its consent. Now we have engineer Lemoine, who really believes so.
I also think that precision is required concerning the following distinctions:
- Whether the system has intellect.
- Whether the system is conscious.
For example, the Turing test shows whether the system can be interpreted as having intellect.
However, having intellect does not seem to logically entail being conscious. For example, chess engines play chess better than human beings. But does it follow that they are conscious?
In turn, having consciousness does not necessarily entail having intellect.
Being conscious means having states of mind: for example, feeling pain, or having qualia and perceptions, such as the perception of redness.
In medieval times, for religious reasons, it was assumed that human beings have souls but animals do not. The philosopher Descartes held that animals are merely machines without souls.
Nowadays, we believe that animals have consciousness. We believe that they are able to see, feel pain, and so on. Personally, I believe that dogs and cats have far greater capabilities than that.
In general, however, even the least intelligent animals are able to feel pain. Having consciousness and states of mind does not entail having higher intellectual capabilities. Even a non-philosopher knows what pain is.
Concerning AI, the question that needs to be answered is the following. If the system has no sensors and no perceptions, but still has consciousness, then what is the content of that consciousness? Is it possible for consciousness to have purely mathematical content, so that mere contemplation of abstract mathematical objects amounts to being conscious?