Chapter 16 review
Are the following statements true or false? Explain why or why not.
a) A number of philosophers and mathematicians, notably Descartes, Leibniz, Hobbes,
Boole and Turing, have argued that intelligent thought is simply logical calculation.
True, more or less. Descartes, however, did not have a mechanistic view of thought,
unlike Leibniz. Turing viewed the question "Can machines think?" as
inherently meaningless, since it's not clear what one means by "think"--so he
proposed a behavioral test for intelligence.
b) You believe that intelligent thought is simply logical calculation.
False. This author (Blank) doesn't, because he believes some thought involves emotion or intuition,
which may have a biochemical, psychological (unconscious) or even spiritual basis, all
beyond the scope of logical calculation. What do you think?
c) Other philosophers, notably Searle, have argued that a calculating machine could
never truly think.
True. See Searle's Chinese Room argument.
d) A machine has passed the Turing test for intelligence.
False. Though performance in the Loebner contest is improving, passing the Turing test
remains the Holy Grail of AI.
e) Programs like ELIZA suggest that machines that can pass the Turing
test, at least behaviorally, are around the corner.
False. ELIZA uses a fairly simple pattern-matching and substitution scheme that involves
no reasoning, let alone intelligence.
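For a sense of how little machinery is involved, here is a minimal ELIZA-style sketch in Python; the patterns and responses are invented for illustration, not taken from Weizenbaum's actual script:

```python
import re

# A few illustrative (pattern, response-template) rules in the spirit of
# ELIZA's script; \1 is replaced by whatever text the pattern captured.
RULES = [
    (re.compile(r"i need (.*)", re.I), r"Why do you need \1?"),
    (re.compile(r"i am (.*)", re.I), r"How long have you been \1?"),
    (re.compile(r"my (.*)", re.I), r"Tell me more about your \1."),
]

def respond(sentence):
    """Return a canned transformation of the input, with no reasoning at all."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return match.expand(template)
    return "Please go on."

print(respond("I am worried about my exam"))  # matches the "i am" rule
```

A handful of such rules can sustain a surprisingly convincing exchange, which says more about our conversational expectations than about machine intelligence.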
f) Weak AI advocates believe that a machine will never achieve intelligent thought.
False. Weak AI is agnostic about the inevitability of machine consciousness.
g) The best chess programs use cognitive techniques that imitate the skills of grand
masters.
False. They rely heavily on computational speed to search for the best possible move.
However, the very best ones also recognize common opening and endgame patterns, which more
closely resembles the strategy of excellent human players.
h) A machine will never be intelligent until it understands sentences like The horse
raced past the barn fell.
False. Most human beings have trouble understanding garden-path sentences like this. Some AI
researchers have suggested that the goal of AI should be to imitate the limited
rationality or real time constraints of human intelligence.
i) Because machines aren't limited by short-term memory, they will eventually
understand and translate languages better than people do.
False. First of all, human short-term memory limitations may have a lot to do with the
ability to process language in real time, a key part of human performance. Secondly,
understanding language requires a great deal of background knowledge about the world; it's
not clear how to acquire or search this knowledge effectively. In the judgment of this
author, there's nothing inevitable about machines achieving human performance, since we
are far from understanding how it works, though progress is certainly happening.
j) AI machines are so far better at playing checkers than recognizing your mother.
True. AI has had far more success in artificial or limited domains, such as games or
expert knowledge, than in natural and routine areas, such as vision or language. Granted,
speech recognition is now available on PCs, but even 95% accuracy is significantly short
of human performance. But what do you think?
k) GOFAI is an attempt to implement the dream of philosophers and mathematicians like
Descartes, Leibniz, Hobbes, Boole and Turing.
True. At its heart, Good Old Fashioned Artificial Intelligence seeks to represent
knowledge in terms of symbols and make inferences by means of logical rules.
l) The most successful chess programs can defeat grand masters because they can look at
all possible moves in a game.
False. The number of possible moves in a chess game is so huge that it would take the
fastest computers billions of years to consider them all--if indeed there ever were an end
(after all, some endgames wind up in stalemates or infinite loops). Successful chess
programs do consider far more moves than grand masters, using a heuristic state-space
search algorithm, but still have to cut off the search at some arbitrary depth.
m) The most successful chess programs use heuristics.
True. Rather than search all possible moves, a chess program might evaluate the goodness
of moves in terms of such factors as the total value of pieces achieved by a move, control
of the center of the board, advancement of pawns, etc., then pursue the N best moves,
ignoring the rest.
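A toy version of such an evaluation in Python; the piece values below are the conventional ones, but the features, weights, and candidate moves are invented for illustration:

```python
# Conventional material values; real programs add many positional terms.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(board, center_control, pawn_advance):
    """Score a position from the mover's point of view: material plus
    small bonuses for center control and pawn advancement."""
    material = sum(PIECE_VALUES.get(piece, 0) for piece in board)
    return material + 0.1 * center_control + 0.05 * pawn_advance

def best_n(candidates, n):
    """Keep only the N most promising moves, pruning the rest."""
    scored = sorted(candidates, key=lambda c: evaluate(*c[1]), reverse=True)
    return [move for move, _ in scored[:n]]

# Three hypothetical moves with (pieces-after-move, center, pawn) features.
moves = [
    ("Nf3", (["P", "P", "N"], 2, 0)),
    ("e4",  (["P", "P", "N"], 4, 1)),
    ("a3",  (["P", "P", "N"], 0, 1)),
]
print(best_n(moves, 2))  # the two highest-scoring moves survive
```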
n) Problems like chess and the traveling salesperson problem (exercise 16.16) suggest
that much of what AI is about is solving computationally intractable problems.
True. Complete solutions for chess and the TSP involve so many possibilities that no
machine could compute them in any feasible amount of time. Of course, that doesn't
prevent machines (and people) from finding
approximate solutions, using heuristics to cut down the search space.
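A classic example of such a heuristic for the TSP is the nearest-neighbor rule: always travel to the closest unvisited city. This Python sketch (the cities are made up) runs in polynomial time but carries no guarantee of finding the shortest tour:

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Greedy approximate TSP tour: repeatedly visit the closest unvisited city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Four cities in the plane; exhaustive search would examine (n-1)! tours.
cities = [(0, 0), (0, 1), (1, 1), (2, 0)]
print(nearest_neighbor_tour(cities))  # → [0, 1, 2, 3]
```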
o) Expert systems, reasoning in terms of formal rules, have achieved some success in
limited subject domains.
True. Rule-based expert systems have had considerable success in domains ranging from
mineral prospecting to Campbell's soup production.
p) Backward chaining inference is especially suitable for engineering design problems.
False. Because design problems do not have a specific goal, but are driven by a set of
problem specifications, they are better tackled by forward chaining inference, which can
respond to data and solve the problem in stages.
q) Forward chaining inference is especially suitable for diagnosis or troubleshooting.
False. Forward chaining, which is data-driven, is better suited for monitoring or
design/configuration problems. Diagnostic problems, with a specific goal, are better solved
by backward chaining from the goal: a diagnosis.
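The direction of inference can be seen in a toy forward chainer in Python; the rules below are invented for illustration. It fires any rule whose conditions are satisfied by the known facts, deriving new facts in stages, whereas a backward chainer would start from a goal such as "flu" and work back to the evidence that supports it:

```python
# Toy rules: (set of conditions) -> conclusion.  Illustrative only.
RULES = [
    ({"fever", "cough"}, "flu"),
    ({"flu", "fatigue"}, "rest_recommended"),
]

def forward_chain(facts):
    """Data-driven inference: keep firing rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "fatigue"}))
# derives "flu" in the first pass, then "rest_recommended" in the second
```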
r) The independence assumption is why probabilistic reasoning is especially reliable.
False. The independence assumption can lead probabilistic reasoning astray when combining
rules that have causal or other relationships not accounted for by probabilities (or
certainty factors). The book gives an example of this flaw using certainty factors,
ultimately due to the simplifying independence assumption.
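The overcounting can be seen in the standard MYCIN-style rule for combining two positive certainty factors, sketched here with illustrative numbers:

```python
def combine_cf(cf1, cf2):
    """MYCIN-style combination of two positive certainty factors, which
    implicitly assumes the two pieces of evidence are independent."""
    return cf1 + cf2 * (1 - cf1)

# Two rules each support the same conclusion with CF 0.6.
print(combine_cf(0.6, 0.6))  # combined confidence rises to about 0.84
```

If the two rules actually rest on the same underlying observation, 0.6 was already the right degree of confidence, and the combined 0.84 overstates it.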
s) Semantic networks, frames and classes all share a similar technique for inferring
common properties.
True. They all rely on inheritance to infer properties from supertypes.
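The same mechanism appears directly in object-oriented languages. In this Python sketch (the classes are made up), Penguin inherits a property it never mentions from a supertype two levels up:

```python
class Animal:
    breathes = True          # property defined once, at the supertype

class Bird(Animal):
    has_wings = True

class Penguin(Bird):
    can_fly = False          # a subtype can also override or refine

# Penguin never mentions "breathes"; it is inferred up the chain of supertypes.
print(Penguin.breathes, Penguin.has_wings, Penguin.can_fly)  # True True False
```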
t) Case-based reasoning systems are better at birthday parties than solving practical
problems.
False. Case-based reasoning systems, which solve problems by recalling and adapting similar
problems, much the way lawyers reason in terms of precedents, have had their best success,
as with rule-based systems, in limited rather than open-ended problem domains.
u) A perceptron, like a neuron, packs the power of a CPU into a single small unit.
False. A perceptron is a relatively simple processor, whose output is essentially a bit of
information. The power of perceptrons comes from combining them to form networks.
v) A perceptron learns by asking its trainer to give it more examples.
False. Well, it doesn't ask, and it requires not just training data but complete feedback
about the right answers for every example, so it can learn how to improve its performance.
Note that human learning does not rely so much on negative feedback, but can learn from a
much smaller set of positive examples.
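That feedback-driven adjustment is the perceptron learning rule: nudge each weight in proportion to the error on every training example. A minimal Python sketch, trained on the linearly separable AND function (the learning rate and epoch count are arbitrary choices):

```python
def train_perceptron(examples, epochs=100, lr=0.1):
    """Perceptron learning rule: for every labeled example, nudge each
    weight in proportion to the error (target minus actual output)."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = target - output      # the complete feedback per example
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

def predict(w, bias, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0

# AND is linearly separable, so the perceptron convergence theorem applies.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([predict(w, b, x1, x2) for (x1, x2), _ in AND])  # → [0, 0, 0, 1]
```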
w) Back-propagation, like a child learning how to talk, learns by propagating error
values back to hidden units and input units, causing them to adjust their weights to give
better results.
True. Except for the bit about how a child learns to talk; children do not require all
the error values, so they aren't learning by anything like back-propagation. Nevertheless,
NETtalk, using back-propagation, was able to learn to generate speech from text, sounding
remarkably like a small child.
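A minimal back-propagation sketch in Python: a small network learns XOR by computing an error signal at the output unit, propagating it back to the hidden units, and adjusting all the weights. The network size, learning rate, and epoch count are arbitrary illustrative choices:

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny 2-3-1 network: two inputs, three hidden units, one output.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]  # input -> hidden
B1 = [0.0, 0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(3)]                      # hidden -> output
B2 = 0.0
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]         # XOR

def forward(x):
    hidden = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + B1[j]) for j in range(3)]
    output = sigmoid(sum(W2[j] * hidden[j] for j in range(3)) + B2)
    return hidden, output

def total_error():
    return sum((target - forward(x)[1]) ** 2 for x, target in DATA)

before = total_error()
lr = 0.5
for _ in range(5000):
    for x, target in DATA:
        hidden, out = forward(x)
        # Error signal at the output unit...
        delta_out = (target - out) * out * (1 - out)
        # ...propagated back to each hidden unit...
        delta_hidden = [delta_out * W2[j] * hidden[j] * (1 - hidden[j]) for j in range(3)]
        # ...and used to adjust every weight in the network.
        for j in range(3):
            W2[j] += lr * delta_out * hidden[j]
            B1[j] += lr * delta_hidden[j]
            for i in range(2):
                W1[j][i] += lr * delta_hidden[j] * x[i]
        B2 += lr * delta_out

print(round(before, 3), "->", round(total_error(), 3))  # squared error shrinks
```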
x) Shakey showed that GOFAI-based robots would inevitably have the intelligence and
endearing shape of R2D2 of Star Wars fame.
False. Shakey, while a significant milestone in AI research, had none of the robustness of
robot movie stars. The Mars Pathfinder and Rover, which made headlines in 1997, apparently
owed more to the behavior-based robotics approach, which eliminates any centralized
representation of world states.
y) Automatic behaviors, unlike controlled ones, must have limited ability to search for
solutions.
True. Behaviors like walking, driving a car, recognizing a face or understanding a
sentence, all are highly automatic and real-time, and hence do not involve unbounded
search.
z) A softbot is a virtual robot.
True. A softbot is an agent that senses and acts in a virtual world. Here we come to the
end of the book by recalling a theme from the beginning: the power of the universal machine
is its ability to simulate any other machine, as a virtual machine. From virtual machines
it's a few short steps to virtual reality and virtual worlds.