Path: utzoo!utgpu!water!watmath!isishq!doug
From: doug@isishq.UUCP (Doug Thompson)
Newsgroups: comp.society.futures
Subject: Re: The future of AI
Message-ID: <36.22625115@isishq.UUCP>
Date: 12 Apr 88 16:05:55 GMT
Organization: FidoNet node 221/162 - ISIS International, Waterloo ON
Lines: 150


 
 rwojcik@bcsaic.UUCP (Rick Wojcik) writes: 
 
 RJ> Weizenbaum's defection is even better known, and his Eliza program 
 RJ> is cited (but not quoted :-) in every AI textbook too.  Winograd 
 RJ> took us a quantum leap beyond Weizenbaum.  Let's hope that there 
 RJ> will be people to take us a quantum leap beyond Winograd.  But if 
 RJ> our generation lacks the will to tackle the problems, you can be 
 RJ> sure that the problems will wait around for some other generation. 
 RJ> They won't get solved by pessimists. 
 
Weizenbaum's problem with AI does not fairly translate into "pessimism" 
or a lack of "will". Sure, he points out that many forecasts of 
breakthroughs by AI types just haven't been realized. But his complaint, 
in the book "Computer Power and Human Reason", is much more about the 
mentality of some AI workers, and about the way problems are defined. 
 
He disputes, for instance, the popular scientific view of man -- 
behaviourism -- which holds that homo sapiens is just a complex machine. 
Machines depend on "effective procedures": logical if-then and 
cause-effect relationships. Artificial intelligence, so long as it is 
based on the kind of machines we know about today, must also be based 
on "effective procedures". Human decision-making, according to 
Weizenbaum, makes use of effective procedures at times, but is not 
confined to them. It usually involves two or more people talking about 
a problem and coming to understand its meaning to them, as distinct 
persons, in an historical situation. It is that shared understanding 
which suggests avenues toward a solution. 
 
The way we talk to each other, the way we understand meaning -- this has 
much to do with the experience of being a human being, and being treated 
like a human being by other human beings. 
 
How are you going to mechanize that? Either the machine will understand 
pre-defined meanings "programmed" into it, or it will develop its own 
meanings based on its own experience. The latter is far beyond current 
capabilities, and the former is -- well -- relatively trivial. All you 
end up with is a decision-making loop which is successful only if it 
can take account of *all* possible inputs and permutations. It is 
basically no different from the rules of a welfare bureaucracy, for 
instance. The raw data is the applicant, the applicant is examined 
according to certain pre-defined criteria, and the bureaucracy decides 
to pay or not to pay. We all know this is unfair to some, because 
people who don't really need help get it, and some who really do need 
help don't. The real humans applying for welfare don't always fit the 
pre-defined categories. 
 
You could mechanize that process, though, because it is based on clear 
rules that are expressed as effective procedures. You might even call 
it intelligence, but it is still not going to replace the human appeal 
committee that can look at what the "machine" or the "bureaucracy" 
decided and over-rule it when an exceptional case arises. 
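 
To make the point concrete, here is a minimal sketch of such an 
"effective procedure" (the language is Python, and the rules, cutoffs, 
and field names are invented for illustration -- they are not drawn 
from any real welfare policy): 
 
    # A hypothetical eligibility rule-set: every decision reduces to 
    # pre-defined if-then criteria applied to the raw data of an 
    # application.  This is an "effective procedure" in the sense above. 
    def decide(applicant): 
        if applicant["income"] > 12000:      # invented cutoff 
            return "deny" 
        if applicant["dependents"] == 0 and applicant["employed"]: 
            return "deny" 
        return "pay" 
 
    # The loop "works" only on cases the rules anticipated.  A person 
    # in genuine need whose situation was never anticipated -- say, a 
    # high nominal income but crushing medical debt -- is denied all 
    # the same.  Only a human appeal committee, standing outside the 
    # procedure, can see what the case *means* and over-rule the 
    # machine. 
    print(decide({"income": 15000, "dependents": 3, "employed": False})) 
    # -> deny 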
 
This is just one of the problems for which AI has, in Weizenbaum's 
argument, no theoretical solution. People deal with new situations with 
creativity, often through such things as empathy, based on their 
experience of being human and of what it means to be human. Can you 
make a machine think it is a human, and think like a human? 
 
Well, there are those who say you can, and there are those who say you 
can't. Weizenbaum says you probably can't, but that you most certainly 
*should not*. 
 
Such a machine, if it worked, could end up imposing its creator's sense 
of meaning, mostly frozen in time, on everyone subject to it. Or it 
could generate its own sense of meaning and run the world according to 
what it -- not its creators or subjects -- thought was important. 
Already there is plenty of evidence that even the limited instruments 
we have today are doing this. 
 
Ultimately the perfect AI machine would behave exactly as a human 
might, and have all the capabilities that a human has. One seriously 
has to ask why anyone would want to build a mechanical man for 
trillions of dollars when we can get billions of flesh-and-blood men 
for pennies per hour. 
 
I think what we have here is "optimism" based on a combination of the 
ancient dreams of "perfect slaves" and "supermen". Of course, we 
presume that we will be able to control these machines once we have 
built them. That is the romantic misconception. We build bureaucracies 
we cannot control and institutions we cannot control, and we have all 
written computer programs, or parts of programs, that no one can 
control -- or properly understand. 
 
I very much identify with Weizenbaum's basic questions: "Why on earth 
would we want such machines?" and "What possible *good* could they do 
for us?" The debate is partly technical, but mostly philosophic: 
granted for the moment that you may eventually build such a thing 
(which is quite doubtful), what would you use it for? 
 
The answer, if you look about, is to make more effective military 
weapons; or at least that is where the majority of the answers are 
being found today. Very little AI work is being directed toward 
reconciling human differences, resolving world problems, feeding the 
poor, or bringing justice to the oppressed. Very much AI work is being 
directed toward increasing the capacity of some men, in possession of 
these instruments, to control other men. 
 
Since most AI advocacy is rooted in a behaviouristic understanding of 
mankind, it is not surprising that instruments to modify behaviour 
comprise most of what is being produced -- or researched. At best, 
though, we could "artificialize" only a very small and specific portion 
of human "intelligence" by pursuing this path. Theologians define 
idolatry as the worship of a sub-set of human attributes at the expense 
of others, leaving an unbalanced, distorted result. 
 
This is precisely Weizenbaum's complaint against AI. He doesn't say 
there is no good to be achieved down this road; he does say that the 
approach being taken in this culture is very unbalanced and unhealthy, 
and that the net effect is therefore negative. 
 
His plea is not that we stop work on AI, but that we approach it in a 
more balanced and holistic way, with a more civilized list of human 
priorities, such that the machines we create serve to benefit mankind 
and do not make life more tenuous and intolerable. 
 
At the moment, this is largely impossible, for a wide variety of 
political reasons. Weizenbaum was not prepared to work on AI devices to 
enhance the kill power of military equipment, nor was he prepared to 
work on machines to mechanize psychotherapy and remove the human doctor 
from the treatment of humans. AI apologists generally are quite 
prepared to work on such projects, and even hail them as great progress 
for the human race. To Weizenbaum -- and to me -- such things are 
Frankensteinian obscenities which can only degrade human life. 
 
 RJ> Henry Ford had a good way of putting it:  "If you believe you can, 
 RJ> or if you believe you can't, you're right." 
 
Well, I'm not gonna knock the power of "positive thinking" -- but Hitler 
believed he could . . .  
 
In addition to the question of "do you believe?" we must ask "in what do 
you believe?" before deciding to help you or put a stop to you. 
 
------------------------------------------------------------------------ 
Fido      1:221/162 -- 1:221/0                         280 Phillip St.,   
UUCP:     !watmath!isishq!doug                         Unit B-3-11 
                                                       Waterloo, Ontario 
Bitnet:   fido@water                                   Canada  N2L 3X1 
Internet: doug@isishq.math.waterloo.edu                (519) 746-5022 
------------------------------------------------------------------------ 
---
 * Origin: ISIS International H.Q. (II) (Opus 1:221/162)
SEEN-BY: 221/162