Friday, May 28, 2010

If only AI could

kw: computers, artificial intelligence

In a comment on my post "AI apostles never give up," Mark Archer responds to the economic point I made: "Forgive me but that seems like a horrible argument for why Self-Aware machines will never proliferate." He goes on to say that things would be much simpler if an actual proof could be offered. I agree on both points.

My economic argument was a riff on an old story by Asimov, in which the U.S. Robotics people are tasked with developing ever-more-humanlike robots. Just when they produce one that seems perfect, the aliens come. An alien ambassador is shown the prototype, and he responds, "What is the point?" So let us pose my economic point as a question: Will manufacturing a machine that can reproduce human-level cognitive functions ever become less costly than raising and educating a child? An additional, value-based argument runs thus: if the machine intelligence can be copied exactly, will that copying confer enough added value that we can afford to make millions of them for the tasks we want to off-load?
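To make the shape of that cost comparison concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical placeholder of my own, not data, and the model (a one-time development cost amortized over the number of copies, plus a marginal cost per copy, compared against the cost of raising one child) is just one way of framing the question.

# Back-of-the-envelope sketch of the economic question above.
# All figures are hypothetical placeholders, not real estimates.

def per_unit_cost(development_cost, unit_manufacture_cost, units_produced):
    # Amortized cost of one machine: shared development plus per-copy manufacture.
    return development_cost / units_produced + unit_manufacture_cost

human_cost = 300_000            # hypothetical: raising and educating one child
dev_cost = 50_000_000_000       # hypothetical: developing a human-level machine
copy_cost = 100_000             # hypothetical: marginal cost of one more copy

for n in (1_000, 1_000_000, 100_000_000):
    machine_cost = per_unit_cost(dev_cost, copy_cost, n)
    cheaper = "machine" if machine_cost < human_cost else "human"
    print(f"{n:>11,} units: machine ~ ${machine_cost:,.0f} each -> {cheaper} is cheaper")

The only point of the exercise is that the copyability argument reduces to how quickly the development cost amortizes away; whether it ever does, at any realistic number of copies, is exactly the question I am raising.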

But I am really asking whether this is possible at all, and Mr. Archer suggests a proof. I do not know how a proof that machine self-awareness is or is not possible might be constructed. I suspect it would be similar to proofs that demonstrate that certain computational problems are NP-complete. Such a proof would have to await knowing exactly what self-awareness is, in computational terms. My own conviction is that self-awareness is not a computational function at all. If I understand him correctly (see the comment), Mr. Archer believes it is, or that it can be.

Animal brains, fully integrated as they are into their sensing bodies, are so fundamentally different from computational machinery that, if the latter ever becomes self-aware, its experience will be very different from our own. For example, I think it likely that orcas are self-aware, but I cannot imagine most of what they experience as everyday life, and they are wetware just as I am!

Self-awareness might arise in two ways: as an emergent property of a sufficiently complex system (at least as complex as a Gray Parrot's brain and body, for the Gray Parrot is probably self-aware), or by deliberate programming, which requires us to know what to program. I contend we will never know that, which is why I think the latter option will never be realized.

But if true AI does arise by some more serendipitous means, is hitting the Off switch tantamount to murder?
