Xref: utzoo comp.society.futures:404 comp.ai:1513
Path: utzoo!mnetor!uunet!lll-winken!lll-tis!ames!mailrus!tut.cis.ohio-state.edu!bloom-beacon!mit-eddie!uw-beaver!ssc-vax!bcsaic!rwojcik
From: rwojcik@bcsaic.UUCP (Rick Wojcik)
Newsgroups: comp.society.futures,comp.ai
Subject: Re: The future of AI
Message-ID: <4741@bcsaic.UUCP>
Date: 6 Apr 88 18:27:25 GMT
References: <8803270154.AA08607@bu-cs.bu.edu> <962@daisy.UUCP> <4640@bcsaic.UUCP> <1134@its63b.ed.ac.uk>
Reply-To: rwojcik@bcsaic.UUCP (Rick Wojcik)
Organization: Boeing Computer Services AI Center, Seattle
Lines: 53
Keywords: AI philosophy
Summary: Problems seldom get solved by pessimists.

In article <1134@its63b.ed.ac.uk> gvw@its63b.ed.ac.uk (G Wilson) writes:
>[re: my reference to natural language programs]
>Errmmm...show me *any* program which can do these things?  To date,
>AI has been successful in these areas only when used in toy domains.
>
NLI's Datatalker, translation programs marketed by Logos, ALPs, WCC, and
other companies, LUNAR, the LIFER programs, CLOUT, Q&A, ASK, INTELLECT,
etc.  There are plenty.  All have flaws.  Some are more "toys" than
others.  Some are more commercially successful than others.  (The goal
of machine translation, at present, is to increase the efficiency of
translators--not to produce polished translations.)

>... Does anyone think AI would be as prominent
>as it is today without (a) the unrealistic expectations of Star Wars,
>and (b) America's initial nervousness about the Japanese Fifth Generation
>project?
>
I do.  The Japanese are overly optimistic, but they have shown greater
persistence of vision than Americans in many commercial areas.  Maybe
they are attracted by the enormous potential of AI.  While it is true
that Star Wars needs AI, AI doesn't need Star Wars.  It is difficult to
think of a scientific project that wouldn't benefit from computers that
behave more intelligently.

>Manifest destiny??  A century ago, one could have justified
>continued research in phrenology by its popularity.  Judge science
>by its results, not its fashionability.
>
Right.  And in the early 1960's a lot of people believed that we
couldn't land people on the moon.  When Sputnik I was launched, my 5th
grade teacher told the class that they would never orbit a man around
the earth.  I don't know whether phrenology ever had a respectable
following in the scientific community.  AI does, and we ought to pursue
it whether it is popular or not.

>I think AI can be summed up by Terry Winograd's defection.  His
>SHRDLU program is still quoted in *every* AI textbook (at least all
>the ones I've seen), but he is no longer a believer in the AI
>research programme (see "Understanding Computers and Cognition",
>by Winograd and Flores).

Weizenbaum's defection is even better known, and his Eliza program is
cited (but not quoted :-) in every AI textbook too.  Winograd took us a
quantum leap beyond Weizenbaum.  Let's hope that there will be people to
take us a quantum leap beyond Winograd.  But if our generation lacks the
will to tackle the problems, you can be sure that the problems will wait
around for some other generation.  They won't get solved by pessimists.
Henry Ford had a good way of putting it: "If you believe you can, or if
you believe you can't, you're right."
--
Rick Wojcik   csnet: rwojcik@boeing.com
              uucp:  {uw-june uw-beaver!ssc-vax}!bcsaic!rwojcik
address: P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone: 206-865-3844