Chapter 7 review. Are the following statements true or false? Explain why or why not.
a. If linear search fails
to find a key in an array, a run-time error occurs.
False. The algorithm returns NOTFOUND, a
constant value set to -1, letting the caller know that no value was found.
(-1 is not a valid array subscript, so if the caller tried using it to access
an element of the array, there might be a run-time error).
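For concreteness, a minimal linear search sketch in Python (the NOTFOUND convention follows the answer above; the function name and sample data are illustrative):

```python
NOTFOUND = -1  # sentinel: -1 is not a valid array subscript

def linear_search(values, key):
    """Return the index of key in values, or NOTFOUND if it is absent."""
    for i in range(len(values)):
        if values[i] == key:
            return i
    return NOTFOUND  # searched the whole array without a match
```

Note that no run-time error occurs on failure; the caller must check for NOTFOUND before using the result as a subscript.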
b. Linear search is fine
for little arrays, but a dog for telephone books!
True. Searching one element at
a time is fine for small collections, but inefficient for large ones.
c. Selection sort swaps
the smallest value in an array with the first element of the array.
True. The inner loop of
the sort finds the index of the smallest value, then the outer loop swaps the smallest
value with the first element, so that the smallest value is at the beginning
of the array. Then the outer loop increments a counter so that the next iteration
will swap the next smallest value with the second element of the array, and
so forth.
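The loop structure described above can be sketched in Python (an illustrative in-place version, not code from the text):

```python
def selection_sort(a):
    """Sort list a in place by repeatedly swapping the smallest
    remaining value into the front of the unsorted region."""
    for i in range(len(a) - 1):            # outer loop: front of unsorted region
        smallest = i
        for j in range(i + 1, len(a)):     # inner loop: find index of the smallest
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]  # swap it into position i
    return a
```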
d. Binary search works by
examining the binary values of array elements.
False. The values are usually
not binary (i.e., they may be integers or strings). Rather, binary search works
by computing the middle of a sorted array, determining whether the
key value is the middle item, or if not, in the upper or lower half, then repeating
this procedure until it finds the item. "Binary" refers to the dividing
by 2 to compute the middle item.
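The repeated halving can be sketched in Python (an illustrative iterative version; the array must already be sorted):

```python
def binary_search(sorted_values, key):
    """Return the index of key in sorted_values, or -1 if absent."""
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2        # "binary": divide the range by 2
        if sorted_values[mid] == key:
            return mid
        elif key < sorted_values[mid]:
            high = mid - 1             # repeat in the lower half
        else:
            low = mid + 1              # repeat in the upper half
    return -1
```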
e. A recursive function
that cannot match a base case is a logical error.
True. It would keep calling
the recursive case until the computer runs out of memory.
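A small illustration of the bug (in Python the runaway calls exhaust the call stack and raise RecursionError rather than consuming all memory):

```python
def countdown(n):
    # Bug for illustration: there is no base case such as "if n == 0: return",
    # so the recursive case matches forever.
    return countdown(n - 1)

try:
    countdown(10)
except RecursionError:
    print("the call stack overflowed")
```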
f.
Activation records or frames make recursion possible in modern programming languages.
True. Each time a program
makes a function call, it stores state information in an activation record or
frame, so that it can remember where it left off in the caller. A recursive
function keeps calling itself until it reaches a base case, then returns its
results back up through each frame to the original caller.
g.
A recursive descent parser solves the Towers of Hanoi problem.
False. A recursive descent
parser is a way to analyze a program, where the syntax rules of a language are
encoded in functions which may call other functions. The Towers of Hanoi problem
can be solved elegantly by a recursive function, but not a parser.
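For reference, the elegant recursive solution mentioned above, sketched in Python (peg names and the move-list representation are illustrative):

```python
def hanoi(n, source, target, spare):
    """Return the list of moves (disk, from_peg, to_peg) that transfers
    n disks from source to target, using spare as the extra peg."""
    if n == 0:                                       # base case: nothing to move
        return []
    return (hanoi(n - 1, source, spare, target)      # clear the pile above
            + [(n, source, target)]                  # move the largest disk
            + hanoi(n - 1, spare, target, source))   # put the pile back on top
```

Solving n disks takes 2**n - 1 moves, so hanoi(3, "A", "C", "B") yields 7 moves.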
h. Complexity in a program is highly
desirable because complex programs are more powerful.
False. Complex programs are often powerful,
but their complexity makes them hard to understand, correct, and maintain. Complex
programs also tend to be much slower.
i. A linear search is a highly
efficient way to search an array.
False. For example, the entire array must
be searched to discover that a desired value is not in the array.
j. Hashing is always more
efficient than linear search.
False. In the worst case, it can amount to
a linear search.
k. A binary search is substantially
more efficient than a linear search.
True. Whereas finding a value in an array
averages roughly N/2 steps in a linear search, it takes only roughly log2(N)
steps in a binary search.
l. An array should always be sorted
before searching in it.
False. It depends on how often the array
is to be searched. Since sorts are more costly than searches, sorting a very
large array before searching it only once would be very inefficient.
m.
A bubble sort is so-called because it was invented by "Bubbles" Ort, an early
programmer.
False. Rather, in this algorithm, values
"bubble up" to their correct places in the array.
n. A bubble sort is conceptually
a very complicated type of sort.
False. It’s one of the easiest--both to
explain and to write. But it’s often one of the slowest to run.
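A sketch of how simple it is to write, in Python (an illustrative version; here the larger values "bubble up" toward the end of the list on each pass):

```python
def bubble_sort(a):
    """Sort list a in place by repeatedly swapping adjacent
    out-of-order pairs until no unsorted values remain."""
    for end in range(len(a) - 1, 0, -1):   # shrink the unsorted region
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]  # swap the adjacent pair
    return a
```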
o. A selection sort is roughly
twice as efficient as a bubble sort.
True. In the worst case, the cost of a bubble
sort is just under 2.5 times the cost of a selection sort.
p. In the worst case, an insertion
sort is about as efficient as a selection sort.
False. In the worst case, the selection
sort is about twice as efficient.
q. In the best or average cases,
an insertion sort is a good deal better than a selection sort.
True. In the best case, the insertion sort
is much better. In the average case, the cost of an insertion sort is about 85%
of the cost of a selection sort.
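The insertion sort's best-case advantage comes from doing almost no work on nearly-sorted input, as this Python sketch suggests (illustrative, not code from the text):

```python
def insertion_sort(a):
    """Sort list a in place by inserting each value into its place
    among the already-sorted values to its left. On sorted input the
    inner while loop never runs, giving the very cheap best case."""
    for i in range(1, len(a)):
        value = a[i]
        j = i - 1
        while j >= 0 and a[j] > value:   # shift larger values one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = value                 # drop the value into the gap
    return a
```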
r. All of the sorts we considered
involve nested loops.
True. Certain blocks of statements must be
repeated inside larger blocks, which are themselves repeated.
s. Quicksort is fundamentally different
from the bubble, selection, and insertion sorts.
True. It is based on recursion, while the
others are not.
t. Quicksort is more efficient
than the bubble, selection, or insertion sorts.
True. The efficiency of the quicksort is
on the order of N*log(N)--substantially better than any of the others.
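The recursive structure can be sketched in Python (an illustrative out-of-place version using list comprehensions; choosing the middle element as pivot is one of several common strategies):

```python
def quicksort(a):
    """Recursive quicksort: partition around a pivot, then sort each side."""
    if len(a) <= 1:                     # base case: nothing left to partition
        return a
    pivot = a[len(a) // 2]
    lower  = [x for x in a if x < pivot]
    equal  = [x for x in a if x == pivot]
    higher = [x for x in a if x > pivot]
    return quicksort(lower) + equal + quicksort(higher)
```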
u.
An algorithm which is O(N*log(N)) is more efficient than one which is O(N^2).
True. Since log(N) grows more slowly
than N, N*log(N) grows more slowly than N*N = N^2.
v.
O(log2(N)) is smaller than O(log10(N)).
False. Although log2(N) is
larger than log10(N), the two differ only by a constant factor, since
log2(N) = log2(10)*log10(N). Constant factors are ignored in order-of-magnitude
analysis, so O(log2(N)) = O(log10(N)).
w.
Order of magnitude is where the biggest payoffs in performance are.
True. Order of magnitude differences between
algorithms can make major differences in performance. E.g., if algorithm A performs
in 6N steps and algorithm B in 2N^2, then for N=10, A performs in
60 steps and B in 200 steps (already B is greater), and for N=100, A performs
in 600 steps and B in 20,000 steps (now B is much greater than A). So when analyzing
algorithms, we focus primarily on orders of magnitude.
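The arithmetic in this example can be checked directly (a Python sketch reproducing the 6N vs. 2N^2 comparison from the answer above):

```python
def steps_a(n):
    return 6 * n        # algorithm A: 6N steps

def steps_b(n):
    return 2 * n * n    # algorithm B: 2N^2 steps

for n in (10, 100):
    print(n, steps_a(n), steps_b(n))
# N=10:  A takes 60 steps,  B takes 200
# N=100: A takes 600 steps, B takes 20,000
```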
x. Linear time is excellent for
a search algorithm.
False. Linear time means examining every item in the worst case. Binary search
performs in O(log2(N)), which is much better than linear time.
y. Chess-playing is an example of a problem that can require exponential time.
True. If we consider all the moves player A can make (N moves), then consider
all the moves that player B can make in response (N*N, or N^2), then
all the moves player A can make in response to player B, and so on, we get
N*N*N*...*N, or O(N^N).