Marvin Minsky
The father of the field of A.I. / Toshiba Professor of Media Arts and Sciences, and Professor of Electrical Engineering and Computer Science, at the Massachusetts Institute of Technology
At the Massachusetts Institute of Technology, Marvin Minsky holds the titles of Toshiba Professor of Media Arts and Sciences and Professor of Electrical Engineering and Computer Science. His work has advanced both theory and practice in neural networks, Turing machines, artificial intelligence, cognitive psychology, and recursive function theory. (In 1961 he solved Emil Post's "Tag" problem and showed that any computer can be simulated by a machine with just two registers and two simple instructions.) He has also made contributions to knowledge representation, commonsense semantics, machine perception, symbolic and connectionist learning, graphics, symbolic mathematical computing, and advanced technology for space exploration.
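To give a flavor of the two-register result, here is a minimal sketch (not Minsky's original construction) of a counter machine with two registers and only two instruction types, increment and decrement-or-jump. Minsky's actual universality proof encodes arbitrary computation into the two counters via arithmetic coding; this sketch shows only the machine model itself, with an illustrative program that adds one register into the other.

```python
def run(program, r0=0, r1=0):
    """Execute a two-register counter-machine program.

    Each instruction is one of:
      ("INC", reg, nxt)       -- add 1 to register reg, go to instruction nxt
      ("DECJZ", reg, nxt, z)  -- if reg > 0: subtract 1 and go to nxt;
                                 otherwise go to z
    The machine halts when the program counter leaves the program.
    """
    regs = [r0, r1]
    pc = 0
    while 0 <= pc < len(program):
        op = program[pc]
        if op[0] == "INC":
            regs[op[1]] += 1
            pc = op[2]
        else:  # "DECJZ"
            if regs[op[1]] > 0:
                regs[op[1]] -= 1
                pc = op[2]
            else:
                pc = op[3]
    return regs

# Example program: drain register 1 into register 0 (i.e., r0 += r1).
ADD = [
    ("DECJZ", 1, 1, 2),  # 0: if r1 > 0, decrement it and go to 1; else halt
    ("INC", 0, 0),       # 1: increment r0, loop back to 0
]

run(ADD, 2, 3)  # registers end as [5, 0]
```

Even this austere instruction set suffices, in principle, to simulate any computer once inputs are suitably encoded.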
Professor Minsky was a pioneer in robotics and telepresence. He built the original LOGO "turtles," as well as some of the earliest visual scanners and mechanical hands with tactile sensors, along with their software and hardware interfaces. These influenced many subsequent robotics projects.
He built SNARC, the Stochastic Neural-Analog Reinforcement Calculator, the first randomly wired neural-network learning machine; it learned by rewarding the synaptic connections that had contributed to recent responses. In 1956, as a Junior Fellow at Harvard, he invented the first confocal scanning microscope, an optical instrument with unmatched resolution and image quality.
Marvin Minsky has been working since the early 1950s on imbuing machines with intelligence and on using computational ideas to characterize human psychological processes. His influential 1961 paper, "Steps Toward Artificial Intelligence," surveyed and assessed earlier work and identified many of the major problems the nascent field would eventually have to confront. "Matter, Mind and Models" (1965) addressed the problem of making self-aware machines. In "Perceptrons" (1969), Minsky and Seymour Papert characterized the capabilities and limitations of loop-free learning and pattern-recognition machines. In "A Framework for Representing Knowledge" (1974), Minsky proposed a model of knowledge representation to account for a range of phenomena in cognition, language understanding, and visual perception. These representations, called "frames," acquire default values for their slots from previously defined frames, and are often regarded as an early form of object-oriented programming.
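The inheritance of defaults between frames can be sketched in a few lines. This is a toy illustration, not Minsky's own notation: a frame carries named slots, and when a slot is missing, the lookup falls back to an ancestor frame's default, much as instances inherit from classes in object-oriented programming.

```python
class Frame:
    """A minimal frame: named slots plus an optional parent to inherit from."""

    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = dict(slots)

    def get(self, slot):
        """Look up a slot, falling back to ancestor frames' default values."""
        frame = self
        while frame is not None:
            if slot in frame.slots:
                return frame.slots[slot]
            frame = frame.parent
        raise KeyError(slot)

# A generic frame supplies defaults; a more specific frame refines it.
room = Frame("room", walls=4, has_ceiling=True)
kitchen = Frame("kitchen", parent=room, has_stove=True)

kitchen.get("has_stove")  # the kitchen frame's own slot
kitchen.get("walls")      # default inherited from the generic "room" frame
```

The point of the design is that a specific situation need only record what differs from the stereotype; everything else is filled in from the frame it was derived from.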
The Society of Mind, a theory Minsky and Papert began developing in the early 1970s, combines insights from artificial intelligence research with observations from developmental child psychology. According to the theory, intelligence emerges from the managed interaction of a wide variety of resourceful agents rather than from any single mechanism. They argued that such diversity is essential because different tasks require fundamentally different mechanisms; this reframes psychology from a fruitless search for a handful of "basic" principles into a search for the methods a mind can use to manage the interplay of many diverse elements.
Parts of this theory began to appear in studies through the 1970s and early 1980s. While Minsky continued to concentrate largely on the theory itself, Papert turned to applying the new ideas to reforming education. In 1985 Minsky published "The Society of Mind," a book of 270 interconnected one-page ideas that mirrors the structure of the theory itself. Each page either proposes a mechanism to explain a particular psychological phenomenon or addresses a problem raised by another page's proposed solution. In his sequel, "The Emotion Machine" (2006), Minsky proposes ways to explain higher-level human emotions, goals, and conscious thoughts in terms of multiple levels of processes, some of which can influence the others. By giving us several alternative "ways to think," these mechanisms could account for much of our uniquely human resourcefulness.