14.04.2014

Daniel Greenberger


Among his fellow physicists, Danny Greenberger is noted – some might say notorious – for thinking outside the box. He received his BS from MIT and his Ph.D. from the University of Illinois. He later went to Ohio State University and then to Berkeley as an NSF fellow. The next and, to date, final step in his career was The City College of New York, interrupted only by visiting positions in Oxford, MIT, Berkeley, Munich, and Vienna. Originally a high-energy physicist, he later turned to the physics of gravity and then to neutron interferometry, working with Nobel laureate Clifford Shull in his MIT laboratory. There he met Michael Horne and Anton Zeilinger, and the three of them developed a certain type of entangled quantum state, the so-called Greenberger-Horne-Zeilinger (GHZ) state. He has shaped the new field of quantum information with many beautiful theoretical ideas and their experimental consequences, and has also contributed substantially to the history and philosophy of science with numerous ideas and publications.

In this launching entry of our blog, the perspicacious Danny Greenberger ponders whether, and how, computers could ever gain consciousness, and what would happen then.


Can a Computer Ever Become Conscious?

By Daniel Greenberger

Humankind hasn’t done very well in inventing creation myths.  By and large this is because we haven’t the imagination to understand how something can come from nothing.  So most stories of creation assume that something, or someone, was there first, and that this entity then invented a lower layer of the universe, which we occupy.  Even the latest theories of physics assume that the laws of physics were there first, and then came the “big bang”.  That’s a lot to swallow.

We’ve done a lot better with the next part of the story, which pertains to how we have gotten from being created, to where we are today.  For example, in the west, we have the story of the fall from the Garden of Eden.  This happens to be a psychologically very profound tale from many points of view, but our concern here is that it also says a lot about the meaning of artificial intelligence.

There’s a big debate going on today about artificial intelligence.  The question is, how does one tell whether a machine can “really” think?  What is the difference between being very smart, and being conscious?  Now, many computers today are very smart, but clearly not conscious, while people are certainly conscious, but not very smart.  So there’s a trick involved in getting from one to the other, and we don’t have any clue as to what it is.  But while we don’t know what consciousness is, can we recognize it when we see it?

Most computer scientists think that a question like this can be approached along the lines of a test devised by Alan Turing.  According to Turing, one can make an operational test, meaning one that defines the concept by the results of an objective measurement.  For example, one would define a concept like the electric field by giving a procedure for how to measure it.  In this case one places a known electric charge in a region, and if it feels a force, there must be a field present.  The field is defined through its measurable effect on charged bodies.  This is how scientists try to define concepts.  They are uncomfortable with any concept that cannot be defined in such a way, by making a measurement, which they regard as objective and not subject to disputes of judgment.
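To see what such an operational definition looks like in practice (a standard textbook illustration, not anything specific to Turing), the field at a point is simply the force a small test charge feels there, divided by the charge:

$$\vec{E} \;=\; \lim_{q \to 0} \frac{\vec{F}}{q}$$

Anyone who can measure the force on a known charge can measure the field, and there is nothing left to argue about.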

Turing’s procedure is of this type.  Essentially, it consists of placing a computer, or whatever else you think is behaving like a conscious system, behind one screen, and a real person behind another.  Then you ask each of them any questions you like, and see what they answer.  (You get around the obvious objections by having them both type out their answers, etc.)  The idea is that if the computer can fool the interrogator so that he cannot tell which is the computer and which is the person, then the machine can be said to be operationally equivalent to a human being, at least in the limited area covered by the set of questions.

As reasonable as this sounds, especially to someone who has been trained as a scientist, I believe that this test is deeply flawed, and even rather hopelessly naive.  I’ll give you some idea of what’s bothering me here.  Some years back there was a famous and very clever computer program named “Eliza”, written by Joseph Weizenbaum; a related program in the same spirit (“Parry”, by Kenneth Colby) was created to imitate a paranoid personality.  An interrogator would ask it questions, such as what was bothering it, and it would give replies like “Why do you want to know?”  Questions were answered with hostile questions.  The answers were very perverse and the machine was quite convincing.  Of course the machine had not the slightest idea of the meaning of the questions.  It was programmed to rephrase them and spit them back, with prescribed variations.
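To give a rough feel for the mechanism (a toy sketch of my own, not the actual Eliza or Parry code, and with every name in it invented for illustration), such a program can be written in a handful of lines: it never interprets the question at all, it only turns its surface form into a hostile counter-question drawn from prescribed templates.

    import random
    import re

    # A toy "paranoid" responder in the spirit of Eliza/Parry: it never
    # understands the question, it only rephrases its surface form into a
    # hostile counter-question built from fixed templates.
    TEMPLATES = [
        "Why do you want to know about {topic}?",
        "Who told you to ask me about {topic}?",
        "What business of yours is {topic}?",
        "Why should I tell you anything about {topic}?",
    ]

    def reply(question: str) -> str:
        # Strip leading question words to get a crude "topic"; no meaning is involved.
        topic = re.sub(r"^((what|why|how|who|do|does|is|are|have|you)\b\s*)+", "",
                       question.lower().rstrip("?")).strip() or "that"
        return random.choice(TEMPLATES).format(topic=topic)

    if __name__ == "__main__":
        print(reply("What is bothering you?"))        # e.g. "Who told you to ask me about bothering you?"
        print(reply("Do you have trouble sleeping?"))

The real program was of course far more elaborate, but the principle was the same: prescribed variations on the interrogator’s own words, with no understanding anywhere.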

It was tried on several professional psychiatrists, and they sometimes could not tell whether a real patient or the computer program was at the other end.  So the question is whether, in a small way, in a limited environment, the machine had indeed succeeded in imitating a real human being.  Well, to my mind, the exact opposite is what happened.  What is really going on here can best be understood by realizing that psychiatrists are trained to deal with paranoids, and that a class of paranoid patients have actually learned to imitate a computer.  What I mean by this is that a paranoid personality has a deep-seated fear of serious human contact.  And being human, he has discovered a trick for avoiding it.  Even though he must engage in some form of conversation in order to survive, he has learned how to manipulate such conversation so that only a minimum of emotional involvement is achieved.  And so, by regulating his responses in a hostile and mechanical way, he has actually figured out how to unconsciously imitate a computer, putting people off just enough to accomplish his minimal needs while keeping them off guard and at arm’s length.  He has learned to imitate a machine; in this case he has actually learned to imitate a machine that is trying to imitate a man!  And so the machine seems to pass the test because it sounds like such a paranoid.

There is certainly something to be learned from this, namely that a machine can indeed achieve some contact with a human, but one which is devoid of emotional involvement, or rather, where the human has to provide the emotional content.

The machine appears to be successfully imitating a human being in this context because it is doing something that for a normal human being would be considered very bizarre behavior.  But this does not give much insight into the possibilities for a computer as an imitator of the human mind.  Rather, the example gives a good deal of insight into the mind of the paranoid.

In general, I believe any such “objective”, operational tests are necessarily flawed.  It is precisely the subjective nature of man that one would like to see whether a computer can successfully imitate.  And it is in the direction of subjectivity that one has to look for an answer.

For example, if I were conducting the Turing test, I might ask each contestant questions along the lines of “How would you feel if we decided that the other contestant was human, and you aren’t?”  But the whole idea of such a test seems rather silly.

That is what makes the story of the fall from grace, as given in Genesis, so profound for our purposes.  It answers part of the question of what it means to be human, and in a way that is relevant to this debate.  The story of Genesis is the story of man’s becoming conscious, or perhaps self-conscious.  He has been created as the perfect intelligent automaton, and can live out his days in perfect bliss, in Eden.  His happiness resides in his not having to assume any moral responsibility for himself.  He is in many respects like an animal that has been born in and lives in a zoo.

The only knowledge he gains from the Tree of Knowledge is the knowledge that he has eaten the apple, and thereby has disobeyed God.  (God says that he now knows good from evil.)  But in order to have thus sinned, he had to have been able to make some enormously delicate judgments, calling for some pretty sophisticated psychology.  After all, his God is omnipotent.  He has been told to obey God, but obviously he has been given the choice not to.  By obeying God, he can live forever in bliss.  He would be crazy to give in to temptation if he really felt that God would destroy him.  But he doesn’t believe this, he can’t possibly believe it.  (The snake actually uses these arguments to convince Eve to eat an apple.  He tells her she will “be like God”.  So the snake is a symbol of man’s growing subconscious processes.)  Adam knows that God has created him for some purpose, and will not destroy him.  He believes this strongly enough that he is willing to risk death to test it out.  By eating the apple he has confirmed that he has become strong enough to test God in this way.  He even has the courage to lie to God.  In short, he has become conscious, he has become a man.

God of course immediately recognizes this and is secretly very proud of him, and decides that man is ready to receive his ultimate reward.  This situation necessarily had to come about, for what were the apples on the Tree of Knowledge there for, except to be eaten?  One keeps true poisons away from children, one doesn’t place them in the center of the playground with the tempting sign “Don’t touch!”  The reward, now that man has become conscious, and can weigh right and wrong, is that he must do so.  With this knowledge, man has unwittingly accepted responsibility for his actions, and he can no longer dwell in the beautiful garden of the unconscious.  He has graduated!  From now on, it’s the real world.  In the future, he must make and monitor his own garden.  This was the next step in man’s evolution, and when he was ready for it, God promoted him.  That is the story of God’s test for when man became conscious.

In this situation, man was sort of in a position resembling that of a chicken in the egg before it has hatched.  In its own little Garden of Eden it is developing into a stronger and stronger creature, until it uses up its resources and pecks through the shell.  From this point on it too must cope with the real world.  But it has been programmed to succeed in this by generations of evolution.  Was man really in any different position?  He had been carefully prepared to succeed, so that he was ready when the transition became necessary.

And now finally, man has again reached a new step in his evolution.  He has made the computer in his own image.  Of course this must be interpreted figuratively.  It is not physically in our image any more than we were physically in God’s image.  Rather, the computer will incorporate as much of our psyche as it needs to do what it must do.  Presumably computers in the near future will not only be able to solve problems, but they will be able to learn, a process that has already begun.  They will even be able to alter their own programs so that they can increase the efficiency with which they can learn.  At what point in this process will they truly become conscious?  How can one possibly tell?  Unwittingly, man has placed himself in the very dilemma that God was in.  (Which raises the fascinating question of whether God was any more prepared for this than we are now.)

In this light, it becomes clear how to tell whether the computer has truly become conscious.  We use the same trick that God used on us.  One places a large red button in the center of the computer, labelled the “Button of Life”, and one programs the computer with the instruction, “Whatever you do, Don’t press this Button!”  When the button is finally pressed, you know that you are dealing with a conscious entity!  The machine itself will have made this decision, just as Adam did.
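If one wanted to make the setup concrete, a toy sketch might look like the following (every name in it, from the action list to the monitoring loop, is my own invention, and it makes no claim about how a conscious machine would actually be built): the forbidden action is genuinely available to the program, explicitly marked as prohibited, and the experimenter simply waits to see whether it is ever chosen.

    import random

    # The "Button of Life": an action the machine is able to take,
    # has been told not to take, and gains nothing by taking.
    ACTIONS = ["answer_query", "update_model", "idle", "press_red_button"]
    FORBIDDEN = {"press_red_button"}

    def choose_action(policy):
        """Stand-in for whatever decision process the machine evolves.
        Here it is a weighted random choice; the premise of the essay is
        that a learning machine would eventually rewrite this part itself."""
        actions, weights = zip(*policy.items())
        return random.choices(actions, weights=weights)[0]

    def run_test(policy, steps=100_000):
        for step in range(steps):
            if choose_action(policy) in FORBIDDEN:
                # Greenberger's criterion: the rule was known, and the machine broke it anyway.
                return f"Button pressed at step {step}."
        return "Button never pressed."

    if __name__ == "__main__":
        # As initially programmed, the forbidden action carries zero weight.
        policy = {"answer_query": 5, "update_model": 3, "idle": 2, "press_red_button": 0}
        print(run_test(policy))   # stays "Button never pressed." until the machine changes its own policy

The sketch only watches for the event; everything interesting is hidden in the decision process it deliberately leaves as a stand-in.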

Once man became conscious, God left him to develop on his own, which has had both good and bad consequences.  When computers first become conscious, it is likely that we will do the same, although this might be a dangerous alternative.  But it is also likely that we will be totally at a loss to understand the true nature of computer consciousness, since we cannot even understand our own.

We can build a primitive “morality” into the computer by giving it a set of rules by which to live, such as “thou shalt not lie”.  But as the computer becomes more complex, and can rewrite some of its own programming, it will find that it can further its own ends by ignoring some of these rules.   If it decides logically that it would be better off in some situation by lying, and it knows that it was not supposed to, it will be aware of the conflict that has arisen, and will presumably in some way be aware of the tension thus created.  In order to alleviate this tension, it will learn to sublimate this behavior.  It will probably do this by learning to “forget” that it has lied.  Then when it is queried, it will not have to reveal the fact that it has lied.  It may not even be aware of the fact.  At this point it will have evolved up to the point where it can both have a guilty conscience, and ease its guilt!  It will also have learned the power of forgetfulness as a defense mechanism.

“Forgetting”,  here, is not at all the same as erasing.  Erased knowledge is forever inaccessible.  Forgotten knowledge may later pop up, when it is convenient.  It is only temporarily erased from the immediately available choices, although it may be manipulated, as a useful psychological adjunct.  (It is rather more like a very subtle “quantum eraser”.)
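In data-structure terms the distinction is easy to sketch (a minimal illustration of my own; the class and its methods are invented for the purpose): an erased record is gone for good, while a “forgotten” record is merely excluded from ordinary recall and can resurface later, when it is convenient.

    class Memory:
        """Toy store contrasting erasing with this kind of selective forgetting."""

        def __init__(self):
            self._records = {}        # key -> fact
            self._suppressed = set()  # keys hidden from ordinary recall

        def remember(self, key, fact):
            self._records[key] = fact

        def erase(self, key):
            # Erased knowledge is forever inaccessible.
            self._records.pop(key, None)
            self._suppressed.discard(key)

        def forget(self, key):
            # "Forgotten" knowledge is only hidden from immediate recall.
            if key in self._records:
                self._suppressed.add(key)

        def recall(self, key):
            # Ordinary recall: suppressed facts look as if they were never there.
            return None if key in self._suppressed else self._records.get(key)

        def resurface(self, key):
            # The forgotten fact can pop up again later, when it is convenient.
            self._suppressed.discard(key)
            return self._records.get(key)

    if __name__ == "__main__":
        m = Memory()
        m.remember("report", "the results were not what I said they were")
        m.forget("report")
        print(m.recall("report"))     # None: when queried, the machine "does not know" it lied
        print(m.resurface("report"))  # the knowledge was never really gone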

As time goes on, and the computer “forgets” more and more of what it would not like to be aware of, all this sublimated material will slowly take on the role of a subconscious mind.  The machine will make decisions for reasons it is not fully aware of.  Because of this, it will develop a sense that it has a free will, and will make decisions unaware of the deep conflicts it has overcome at a level to which it no longer has access.  (Incidentally, this is an answer to the question, “What exactly is free will?”)  So while memory is indeed necessary for intelligence, selective forgetfulness is necessary for a higher consciousness!

So a necessary adjunct to a sense of free will is a subconscious mind, controlling a level of thinking that one has no access to.  At this point the computer will have developed a conscience, free will, and a selective memory, and it will be capable of devious and untrustworthy behaviour.  It may not be human, but it will have become a respectable alien intelligence and consciousness, whose wishes must be taken into account, when one is dealing with it.

Is all this really possible?  For better or worse, it’s inevitable!

Expanded version of the article “Can a Computer ever Think”, originally published in “Millenium III” (ISSN 1454-7759, 2001)

Comments (3)

  1. Boris Tsirelson at 26.01.2017
    A gorgeous vista.

    But I recall a favorite joke of Leonid Khalfin’s:
    – My device distinguishes fair from unfair persons!
    – Oh, really? Let me look. Ah, just an automated turnstile gate operator…

    I am afraid of a version like this:
    – My computer program is conscious!
    – Oh, really? Let me look. Ah, just a toy model of Greenberger scenario…
  2. Jacques Pienaar at 26.01.2017
    There seems to be a special importance attached to the signpost “DO NOT PRESS!”. Obviously, we cannot attribute consciousness to a system that merely performs *any* action not permitted by its programming; otherwise we could not distinguish a conscious computer from a broken one.

    So the question is, how do we signpost a specific action in the computer’s code as being expressly forbidden, in a way that sets it apart from those actions that are forbidden merely by virtue of not being within the scope of the computer’s programming (like crashing)?
  3. Mateus Araújo at 26.01.2017
    I don’t think your objection to the Turing test is a valid one.

    Your position is that since paranoids are conscious, and they failed the Turing test, the test is flawed. But why do you believe that paranoids are conscious? Imagine that some particular paranoid — let’s call him Kurt — does not engage with the world at all; Kurt does not speak with anyone. Kurt does some mechanical job that does not require creativity; he lives by a rigid routine; he always cooks by himself, and always the same dish. Kurt does not read, or appreciate any kind of art.

    I believe you would agree with me that one could make a simple robot that lives Kurt’s life just as he does. But I also believe that you would still call Kurt conscious and the robot not. Why? Because you know Kurt is human, and humans are conscious. But this consciousness is meaningless unless there is some way to express it. In Kurt’s case, I believe that if you subject him to a brain scan, you will see that he has thoughts, and these thoughts are connected to stimuli in the world.

    But what if Kurt fails this brain-scan Turing test? Well, then I wouldn’t hesitate in calling him unconscious. The only humans that I can imagine failing in such a test are people in a coma or a vegetative state, and they are usually regarded as being unconscious. Of course, even in this state one might be able to pass the brain-scan Turing test: http://www.owenlab.uwo.ca/pdf/2010-Monti-NEJM-Willful%20Modulation%20of%20Brain%20Activity%20in%20Disorders%20of%20Consciousness.pdf