That is a very constructive and helpful reply, but, I fear, it raises more questions than it answers. It seems to me that you have to overcome at least four objections if you wish to say that humans and thinking machines are qualitatively different in principle. The contrary argument is that human kind and machine kind are converging rapidly and will be indistinguishable from each other some time soon. You suggest that (a) “Every apparent human quality in the machine has its origin in us”. I would reply, “Where else would we find human qualities to implement?” A good copy may be considered the same as the original, may it not? You also argue that “it is inner experience which characterises human consciousness”. I would reply, “But how would we know what an intelligent robot feels? We only have what they say to go on.”
Anyway, back to my four objections. They are:
First, the distinction between animal kind and human kind is now much less sharply defined. Perhaps the only significant differences are better language skills, better problem-solving skills and better social group-building skills. Such differences are merely quantitative, never qualitative. So where is the uniqueness of human consciousness?
Second, the distinction between machine kind and human kind is also now much less sharply defined. There are some cognitive tasks at which machines now excel, e.g. chess at Grandmaster level. So again, where is the uniqueness of human consciousness?
Third, scientific methods really do not equip us to deal with “inner experience” in a testable and replicable way. All we can do is observe the observable and deduce the minimum theoretical constructs necessary to explain it. Do we really need the concept of inner experience at all? In the words of Laplace, we have no need of that hypothesis!
Fourth, what about the fusion of human kind and machine kind? We can already produce all sorts of prosthetic body parts, but what about the advent of cognitive prosthetics? These are already in our laboratories; I am thinking of technologies for augmented cognition, augmented reality, brain implants and brain-computer interfaces. Consider a simple thought experiment: an individual who lives long enough to receive an increasing number of prosthetic transplants, 1 . . . 2 . . . 3 . . . 4 and so on. When does that person become non-human? Is it at the first transplant? (Probably not.) Is it when the last human aspect is replaced? (Almost certainly, yes!) Whether you agree or disagree with me, how are we to make such judgements? So again, where is the human uniqueness?
I should add that in my own lab we are not only building human brain-computer interfaces but also attempting to build user models that capture the essence of our intended system users in machine-readable form. I hope that you can overcome these critical objections but, when it comes to regarding the human brain as an intelligent thinking machine, the future is nearer than you might think.
I am happy to answer your four objections, all of which had been considered in one way or another by Lewis. It seems to me that the more elaborately you lay out your case for a purely materialist or mechanistic account of consciousness and selfhood, the more apparent the inherent contradictions in the materialist position become.

Let us begin with the (I may say) naïve assumption that “A good copy may be considered the same as the original”. This has been a major philosophical issue from Plato’s myth of the Cave onwards, and Lewis deals with it at length. My position, and indeed Plato’s and Lewis’s, is that a copy is precisely not the same as the original; rather, by virtue of being a copy, it points beyond itself and bears witness to an (otherwise hidden) original. Lewis sets this out in a beautiful allegory in The Silver Chair, where the Queen of Underland tries to persuade the children that, because the lamp hanging from her ceiling can be described as a copy of the sun, it is the same as the sun, indeed that there is no sun, only the copy. A perspicacious marsh-wiggle rescues the children from her false logic and they return to the land of the true sun.

Now I turn to your objections. Your first and second, based on parallels between humans and animals on the one hand and between humans and machines on the other, really go together. You ask where the uniqueness of human consciousness lies, since it has so many elements in common with animal consciousness. The answer is two-fold. First, we do not know about animal consciousness, since we do not experience it from the inside; we can only conjecture. Second, and more importantly, the unique thing about human consciousness is not awareness of the environment, such as animals have, nor number-crunching power, such as computers have; the defining feature of human consciousness is reflective self-awareness, being conscious of one’s self as a distinct reflecting entity within the environment.
Indeed, this is what the word “conscious” literally means: knowing with, or knowing alongside. We do not think animals have such self-awareness, and it is certain that machines do not have this kind of self-awareness, even if they have self-referential programs.