Philosophy of Mind: Part 15, Artificial Intelligence


By Frank Simons

The Meditation Center presents the 24-part Great Courses series Philosophy of Mind: Part 15, Artificial Intelligence, at 5:30pm, Thursday, September 14, 2017, at the Center, Callejon Blanco 4.

Can a machine be genuinely intelligent? Alan Turing proposed a test: could you tell, on the basis of answers to your questions received on a computer monitor, whether you were communicating with a person or a machine? Such a machine, Turing proposed, would be a genuinely thinking machine, intelligent in the full sense of the term. In 1950, Turing made a prediction: "I believe that in about fifty years it will be possible to programme computers to play the imitation game." His prediction has not been fulfilled.

At the Dartmouth Artificial Intelligence Conference in 1956, the participants, who would be instrumental in the development of the field, brought two different goals and approaches. Some wanted to understand human intelligence; others wanted to build machines with new capabilities. Some thought the core of intelligence was symbol processing, an approach that came to be called GOFAI, "Good Old-Fashioned Artificial Intelligence." Others thought we should build machines that operated on the basic principles of the human brain, an approach called "Connectionism." The history of AI since has often been one of competition between the two.

The first successes were from the symbol manipulation approach. Allen Newell and Herbert Simon produced the Logic Theorist, which proved logical theorems. Marvin Minsky, founder of the AI lab at MIT, produced a machine to prove theorems in geometry. The future of AI seemed unlimited, and wildly optimistic predictions were made, none of which were realized.

In 1962, Frank Rosenblatt developed the perceptron, an early example of the neural net at the heart of the "Connectionism" approach. Its limitations convinced the field that neural nets could not succeed. But in the mid-1980s, neural nets were reborn through the work of the Parallel Distributed Processing Research Group. A new learning rule, the back-propagation of errors, allowed the training of multilayer nets. These neural nets proved good at pattern recognition. But neural nets can be frustrating: opening one up may not tell us how it does what it does. Ironically, we may succeed in building machines that mimic human abilities yet not understand those abilities. Much contemporary work in AI uses hybrids of the GOFAI and Connectionist approaches.
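For readers curious what a perceptron actually does, here is a minimal sketch (not part of the course material) in Python of a single perceptron trained with Rosenblatt-style error correction on the logical AND function. The data set, learning rate, and epoch count are arbitrary choices for illustration.

# Minimal perceptron sketch: a single unit with a step activation,
# trained with an error-correction rule on the AND function.
# (Illustrative only; data, learning rate, and epochs are arbitrary.)

def perceptron_train(samples, lr=0.1, epochs=20):
    """samples: list of (inputs, target) pairs with targets 0 or 1."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Step activation: fire if the weighted sum exceeds 0.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # Error-correction update: nudge weights toward the target.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Logical AND is linearly separable, so a single perceptron can learn it.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(and_data)
for x, t in and_data:
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    print(x, "->", pred, "(target", t, ")")

Swapping in the XOR truth table shows the limitation that discouraged the field: a single perceptron never settles on a correct answer for a function that is not linearly separable, which is what later multilayer nets trained by back-propagation overcame.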

Professor Patrick Grim, Distinguished Professor of Philosophy at the State University of New York at Stony Brook, has provided his students with invaluable insights into issues in philosophy, artificial intelligence, theoretical biology, and other fields. Professor Grim was awarded the university’s Presidential and Chancellor’s awards for teaching excellence and was elected to the Academy of Teachers and Scholars.

There will be an opportunity for discussion following the video.

Presentations at the Center are offered without charge. Donations are gratefully accepted.

 

Video Presentation

Philosophy of Mind: Part 15, Artificial Intelligence

By Frank Simons

Thu, Sep 14, 5:30pm

Meditation Center

Callejon Blanco 4

Free, donations accepted