# artificial intelligence


## artificial intelligence

(AI), the use of computers to model the behavioral aspects of human reasoning and learning. Research in AI is concentrated in some half-dozen areas. In problem solving, one must proceed from a beginning (the initial state) to the end (the goal state) via a limited number of steps; AI here involves an attempt to model the reasoning process in solving a problem, such as the proof of a theorem in Euclidean geometry.

In game theory (see games, theory of), the computer must choose among a number of possible "next" moves to select the one that optimizes its probability of winning; this type of choice is analogous to that of a chess player selecting the next move in response to an opponent's move. In pattern recognition, shapes, forms, or configurations of data must be identified and isolated from a larger group; the process here is similar to that used by a doctor in classifying medical problems on the basis of symptoms. Natural language processing is an analysis of current or colloquial language usage without the sometimes misleading effect of formal grammars; it is an attempt to model the learning process of a translator faced with the phrase "throw mama from the train a kiss." Cybernetics is the analysis of the communication and control processes of biological organisms and their relationship to mechanical and electrical systems; this study could ultimately lead to the development of "thinking" robots (see robotics). Machine learning occurs when a computer improves its performance of a task on the basis of its programmed application of AI principles to its past performance of that task.
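The game-theoretic move selection described above is commonly formalized as minimax search: the program assumes each side plays its best move and looks a fixed number of plies ahead. A minimal sketch, assuming the game is supplied as a successor function and a scoring function for positions (both names here are illustrative, not from the source):

```python
def minimax(state, depth, maximizing, successors, score):
    """Best achievable score from `state`, looking `depth` plies ahead.

    `successors(state)` yields the states reachable in one move;
    `score(state)` evaluates a position from the maximizing player's view.
    """
    moves = list(successors(state))
    if depth == 0 or not moves:
        return score(state)  # horizon or terminal position
    results = (minimax(s, depth - 1, not maximizing, successors, score)
               for s in moves)
    return max(results) if maximizing else min(results)
```

Limiting `depth` and ranking positions with `score` is exactly the trade-off between Shannon's brute-force and selective approaches discussed below: a deeper search is more accurate but exponentially more costly.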

In the public eye, advances in chess-playing computer programs were symbolic of early progress in AI. In 1948 British mathematician Alan Turing developed a chess algorithm for use with calculating machines; it lost to an amateur player in the one game that it played. Ten years later American mathematician Claude Shannon articulated two chess-playing algorithms: brute force, in which all possible moves and their consequences are calculated as far into the future as possible; and selective mode, in which only the most promising moves and their more immediate consequences are evaluated.

In 1988 Hitech, a program developed at Carnegie-Mellon Univ., defeated former U.S. champion Arnold Denker in a four-game match, becoming the first computer to defeat a grandmaster. A year later, Garry Kasparov, the reigning world champion, bested Deep Thought, a program developed by the IBM Corp., in a two-game exhibition. In 1990 the German computer Mephisto-Portrose became the first program to defeat a former world champion; while playing an exhibition of 24 simultaneous games, Anatoly Karpov bested 23 human opponents but lost to the computer.

Kasparov in 1996 became the first reigning world champion to lose to a computer in a game played with regulation time controls; the Deep Blue computer, developed by the IBM Corp., won the first game of the match, lost the second, drew the third and fourth, and lost the fifth and sixth. Deep Blue used the brute force approach, evaluating more than 100 billion chess positions each turn while looking six moves ahead; it coupled this with the most efficient chess evaluation software yet developed and an extensive library of chess games it could analyze as part of the decision process.

Subsequent matches between Vladimir Kramnik and Deep Fritz (2002, 2006) and Kasparov and Deep Junior (2003) resulted in two ties and a win for the programs. Unlike Deep Blue, which was a specially designed computer, these more recent computer challengers were chess programs running on powerful personal computers. Such programs have become an important tool in chess and are used by chess masters to analyze games and experiment with new moves. In 2016 Google DeepMind's AlphaGo program defeated one of the world's best go players, Lee Sedol, in a five-game match in South Korea; go is considered to be more difficult than chess for a computer program. In 2017 an updated version of the program defeated the world's best go player, Ke Jie, in a three-game match.

Another notable IBM AI computer, Watson, competed in 2011 on the "Jeopardy!" television quiz show, defeating two human champions. Watson, about 100 times faster than Deep Blue, was designed to process questions in natural human language (as opposed to simple commands), making sense of the quirky questions' complexity and ambiguity, and to search an extensive database to quickly provide the correct answers. Watson is a prototype for programs or services that can act as knowledgeable assistants, or even human substitutes, in such different fields as medicine, catalog sales, and computer technical support.

The introduction of the smartphone has brought aspects of artificial intelligence to cellular telephones, most prominently in the voice-controlled "personal assistants" that can provide a range of information and recommendations or perform tasks in response to the user's voice commands, to the schedule maintained by the user's calendar application, and the like. First introduced on the Apple smartphone in 2011, such personal assistants have become widely available on smartphones, computer tablets, personal computers, and other electronic devices. Global-positioning system (GPS) devices, or similar smartphone applications, which provide turn-by-turn directions as a person drives, can redirect the driver in real time to avoid traffic jams, and increasingly accept voice commands, are another, more limited but common utilization of advances in artificial intelligence, as are translation and voice-recognition programs. Self-driving automobiles, whether fully autonomous or acting as an assistant to a human driver, are another example of the use of artificial intelligence. Such vehicles use sensor, mapping, and GPS information to locate where they are, rely on sensors and interpretative software to determine what vehicles, people, objects, and the like are nearby, and use computer-controlled systems to drive and maneuver.

An expert system is a computer system or program that uses artificial intelligence techniques to solve problems that ordinarily require a knowledgeable human. The method used to construct such systems, knowledge engineering, extracts a set of rules and data from an expert or experts.

### Bibliography

See D. Freedman, Brainmakers: How Scientists Are Moving Beyond Computers to Create a Rival to the Human Brain (1994); D. Gelernter, The Muse in the Machine: Computerizing the Poetry of Human Thought (1994); D. Rasskin-Gutman, Chess Metaphors: Artificial Intelligence and the Human Mind (2009).

## artificial intelligence

[¦ärd·ə¦fish·əl in′tel·ə·jəns]
(computer science)
The property of a machine capable of reason by which it can learn functions normally associated with human intelligence.

## Artificial intelligence

The subfield of computer science concerned with understanding the nature of intelligence and constructing computer systems capable of intelligent action. It embodies the dual motives of furthering basic scientific understanding and making computers more sophisticated in the service of humanity.

Many activities involve intelligent action—problem solving, perception, learning, planning and other symbolic reasoning, creativity, language, and so forth—and therein lie an immense diversity of phenomena. Scientific concern for these phenomena is shared by many fields, for example, psychology, linguistics, and philosophy of mind, in addition to artificial intelligence. The starting point for artificial intelligence is the capability of the computer to manipulate symbolic expressions that can represent all manner of things, including knowledge about the structure and function of objects and people in the world, beliefs and purposes, scientific theories, and the programs of action of the computer itself.

Artificial intelligence is primarily concerned with symbolic representations of knowledge and heuristic methods of reasoning, that is, using common assumptions and rules of thumb. Two examples of problems studied in artificial intelligence are planning how a robot, or person, might assemble a complicated device, or move from one place to another; and diagnosing the nature of a person's disease, or of a machine's malfunction, from the observable manifestations of the problem. In both cases, reasoning with symbolic descriptions predominates over calculating.
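The diagnosis example above illustrates reasoning with symbolic descriptions: candidate faults are matched against observed symptoms rather than computed numerically. A minimal sketch of that rule-matching style, with symptom and fault names invented for illustration:

```python
# Each rule pairs a set of required symptoms with a candidate diagnosis.
RULES = [
    ({"no_power", "fan_silent"}, "failed power supply"),
    ({"no_power"}, "unplugged cable"),
    ({"overheating", "fan_silent"}, "failed fan"),
]

def diagnose(symptoms):
    """Return every diagnosis whose required symptoms are all observed."""
    return [fault for required, fault in RULES if required <= symptoms]
```

For the observation set `{"no_power", "fan_silent"}` this returns both matching faults; a fuller system would rank or discriminate among them, but the core operation remains symbolic matching, not calculation.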

The approach of artificial intelligence researchers is largely experimental, with small patches of mathematical theory. As in other experimental sciences, investigators build devices (in this case, computer programs) to carry out their experimental investigations. New programs are created to explore ideas about how intelligent action might be attained, and are also developed to test hypotheses about concepts or mechanisms involved in intelligent behavior.

The foundations of artificial intelligence are divided into representation, problem-solving methods, architecture, and knowledge. To work on a task, a computer must have an internal representation in its memory, for example, the symbolic description of a room for a moving robot, or a set of features describing a person with a disease. The representation also includes all the knowledge, including basic programs, for testing and measuring the structure, plus all the programs for transforming the structure into another one in ways appropriate to the task. Changing the representation used for a task can make an immense difference, turning a problem from impossible to trivial.

Given the representation of a task, a method must be adopted that has some chance of accomplishing the task. Artificial intelligence has gradually built up a stock of relevant problem-solving methods (the so-called weak methods) that apply extremely generally.

An important feature of all the weak methods is that they involve search. One of the most important generalizations to arise in artificial intelligence is the ubiquity of search. It appears to underlie all intelligent action. In the worst case, the search is blind. In heuristic search extra information is used to guide the search.

Some of the weak methods are generate-and-test (a sequence of candidates is generated, each being tested for solutionhood); hill climbing (a measure of progress is used to guide each step); means-ends analysis (the difference between the desired situation and the present one is used to select the next step); impasse resolution (the inability to take the desired next step leads to a subgoal of making the step feasible); planning by abstraction (the task is simplified, solved, and the solution used as a guide); and matching (the present situation is represented as a schema to be mapped into the desired situation by putting the two in correspondence).
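Hill climbing, one of the weak methods just listed, can be sketched in a few lines: from the current candidate, repeatedly move to the best-scoring neighbor, stopping at a local maximum. The neighborhood and scoring functions are whatever the task supplies:

```python
def hill_climb(start, neighbors, score):
    """Greedy ascent: follow the best neighbor until none improves."""
    current = start
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current
        best = max(candidates, key=score)
        if score(best) <= score(current):
            return current  # local maximum reached
        current = best
```

For example, maximizing f(x) = -(x - 3)² over the integers with neighbors x ± 1 climbs from any starting point to x = 3. The method's weakness is equally visible in the code: it halts at the first local maximum, which for harder landscapes need not be the goal.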

An intelligent agent—person or program—has multiple means for representing tasks and dealing with them. Also required is an architecture or operating framework within which to select and carry out these activities. Often called the executive or control structure, it is best viewed as a total architecture (as in computer architecture), that is, a machine that provides data structures, operations on those data structures, memory for holding data structures, accessing operations for retrieving data structures from memory, a programming language for expressing integrated patterns of conditional operations, and an interpreter for carrying out programs. Any digital computer provides an architecture, as does any programming language. Architectures are not all equivalent, and one important scientific question is what architecture is appropriate for a general intelligent agent.

In artificial intelligence, the basic paradigm of intelligent action is that of search through a space of partial solutions (called the problem space) for a goal situation. Each step offers several possibilities, leading to a cascading of possibilities that can be represented as a branching tree. The search is thus said to be combinatorial or exponential. For example, if there are 10 possible actions in any situation, and it takes a sequence of 12 steps to find a solution (a goal state), then there are 10¹² possible sequences in the exhaustive search tree. What keeps the search under control is knowledge, which suggests how to choose or narrow the options at each step. Thus the fourth fundamental concern is how to represent knowledge in the memory of the system so it can be brought to bear on the search when relevant.
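The arithmetic behind that example is worth spelling out: with branching factor b and solution depth d, an exhaustive search tree has on the order of b^d leaf sequences, so knowledge that narrows the choices at each step shrinks the tree by an exponential factor:

```python
def tree_leaves(branching, depth):
    """Number of leaf sequences in an exhaustive search tree."""
    return branching ** depth

# 10 possible actions per step, 12 steps to a goal: a trillion sequences.
full = tree_leaves(10, 12)    # 10 ** 12
# Knowledge narrowing each choice to 3 options leaves only 3 ** 12.
pruned = tree_leaves(3, 12)
```

Reducing the branching factor from 10 to 3 cuts the tree from 10¹² leaves to about half a million, which is why encoding knowledge that prunes options dominates over raw speed.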

An intelligent agent will have immense amounts of knowledge. This implies another major problem, that of discovering the relevant knowledge as the solution attempt progresses. Although this search does not include the combinatorial explosion characteristic of searching the problem space, it can be time consuming and hard. However, the structure of the database holding the knowledge (called the knowledge base) can be carefully tailored to suit the architecture in order to make the search efficient. This knowledge base, with its accompanying problems of encoding and access, constitutes the final ingredient of an intelligent system.

An example of artificial intelligence is computer perception. Perception is the formation, from a sensory signal, of an internal representation suitable for intelligent processing. Though there are many types of sensory signals, computer perception has focused on vision and speech. Perception might seem to be distinct from intelligence, since it involves incident time-varying continuous energy distributions prior to interpretation in symbolic terms. However, all the same ingredients occur: representation, search, architecture, and knowledge. Speech perception starts with the acoustic wave of a human utterance and proceeds to an internal representation of what the speech is about. A sequence of representations is used: the digitization of the acoustic wave into an array of intensities; the formation of a small set of parametric quantities that vary continuously with time (such as the intensities and frequencies of the formants, bands of resonant energy characteristic of speech); a sequence of phons (members of a finite alphabet of labels for characteristic sounds, analogous to letters); a sequence of words; a parsed sequence of words reflecting grammatical structure; and finally a semantic data structure representing a sentence (or other utterance) that reflects the meaning behind the sounds.
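The sequence of representations in speech perception can be pictured as a pipeline, each stage transforming one representation into the input of the next. A structural sketch only; the toy stages below are invented stand-ins, since the real transformations (signal processing, phonetic labeling, parsing) are each substantial systems:

```python
def perceive(signal, stages):
    """Run a raw signal through a chain of representation-building stages.

    Real speech stages: wave -> parameters -> phons -> words -> parse -> meaning.
    """
    for name, transform in stages:
        signal = transform(signal)
    return signal

# Toy stand-ins for illustration of the data flow only:
stages = [
    ("digitize", lambda wave: [round(x, 1) for x in wave]),
    ("label", lambda xs: ["hi" if x > 0 else "lo" for x in xs]),
]
```

The point of the pipeline structure is that each intermediate representation is itself symbolic and inspectable, so the usual ingredients of search, knowledge, and architecture apply at every stage.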

A class of artificial intelligence programs called expert systems attempt to accomplish tasks by acquiring and incorporating the same knowledge that human experts have. Many attempts to apply artificial intelligence to medicine, government, and other socially significant tasks take the form of expert systems. Even though the emphasis is on knowledge, all the standard ingredients are present.
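One common way such knowledge is brought to bear is forward chaining over if-then rules: the system repeatedly fires any rule whose conditions are all in working memory, adding the conclusion as a new fact, until nothing further can be derived. A minimal sketch, with the rules invented for illustration:

```python
def forward_chain(facts, rules):
    """Apply if-then rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # fire the rule
                changed = True
    return facts

RULES = [
    ({"fever", "cough"}, "respiratory infection"),
    ({"respiratory infection", "chest pain"}, "see a doctor"),
]
```

Note how the second rule fires only after the first has added its conclusion: chains of inference emerge from independently stated rules, which is what lets knowledge engineers extend a system by adding rules rather than rewriting a program.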

In careful tests, a number of expert systems have shown performance at levels of quality equivalent to or better than average practicing professionals (for example, average practicing physicians) on the restricted domains over which they operate. Nearly all large corporations and many smaller ones use expert systems. A common application is to provide technical assistance to persons who answer customers' trouble calls. Computer companies use expert systems to assist in configuring components from a parts catalog into a complete system that matches a customer's specifications, a kind of application that has been replicated in other industries tailoring assembled products to customers' needs. Troubleshooting and diagnostic programs are commonplace. Another widespread use of this technology is in software for home computers that assists taxpayers. One important lesson learned from incorporating artificial intelligence software into ongoing practice is that its success depends on many other aspects besides the intrinsic intellectual quality, for example, ease of interaction, integration into existing workflow, and costs.

Expert systems have sparked important insights in reasoning under uncertainty, causal reasoning, reasoning about knowledge, and acceptance of computer systems in the workplace. They illustrate that there is no hard separation between pure and applied artificial intelligence; finding what is required for intelligent action in a complex applied area makes a significant contribution to basic knowledge. See Expert systems

In addition to the subject areas mentioned above, significant work in artificial intelligence has been done on puzzles and reasoning tasks, induction and concept identification, symbolic mathematics, theorem proving in formal logic, natural language understanding and generation, vision, robotics, chemistry, biology, engineering analysis, computer-assisted instruction, and computer-program synthesis and verification, to name only the most prominent. As computers become smaller and less expensive, more and more intelligence is built into automobiles, appliances, and other machines, as well as computer software, in everyday use. See Automata theory, Computer, Control systems, Cybernetics, Digital computer, Intelligent machine, Robotics

## artificial intelligence

the study of the modelling of human mental functions by computer programs

## artificial intelligence

(artificial intelligence)
(AI) The subfield of computer science concerned with the concepts and methods of symbolic inference by computer and symbolic knowledge representation for use in making inferences. AI can be seen as an attempt to model aspects of human thought on computers. It is also sometimes defined as trying to solve by computer any problem that a human can solve faster. The term was coined by Stanford Professor John McCarthy, a leading AI researcher.

Examples of AI problems are computer vision (building a system that can understand images as well as a human) and natural language processing (building a system that can understand and speak a human language as well as a human). These may appear to be modular, but all attempts so far (1993) to solve them have foundered on the amount of context information and "intelligence" they seem to require.

The term is often used as a selling point, e.g. to describe programming that drives the behaviour of computer characters in a game. This is often no more intelligent than "Kill any humans you see; keep walking; avoid solid objects; duck if a human with a gun can see you".

See also AI-complete, neats vs. scruffies, neural network, genetic programming, fuzzy computing, artificial life.
