Recursion: Conceptual Foundations

Recurrence relations and other problem domains to which recursion is applicable

Mathematical recurrence

Some mathematical sequences are defined (at least in part) by a recurrence relation between the terms of the sequence. In such a definition, the value of the $n^\text{th}$ term is given as a function of the preceding terms. Definitions of this kind are often used for infinite sequences.

Given the sequence $A$, where

\[\begin{aligned} A &= \left( a_0, a_1, \ldots \right), \end{aligned}\]

a recurrence relation may be defined (in very general terms) as

\[\begin{aligned} a_n &= f\left( a_0, a_1, \ldots, a_{n-1} \right). \end{aligned}\]

In practice, the function $f$ usually isn’t expressed as a function of all the preceding terms, but of a small number of terms immediately preceding $a_n$. Also, note that the recurrence relation usually doesn’t define $A$ completely: the definition generally includes one or more initial values, as well.

The most widely known examples of recurrence relations are probably the factorial function and the Fibonacci sequence.

Factorial function

\[n! = \begin{cases} 1, & \text{when } n = 0; \\ n (n - 1)!, & \text{when } n \in \{1, 2, 3, \ldots \}. \end{cases}\]
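As a concrete illustration, here is a minimal recursive implementation of this definition. (This is a sketch in Java; the class and method names are ours, chosen for illustration, and a `long` result overflows for $n > 20$.)

```java
public class Factorial {

    /** Computes n! for n >= 0, mirroring the recurrence above. */
    public static long factorial(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("n must be non-negative");
        }
        if (n == 0) {
            return 1L;                   // base case: 0! = 1
        }
        return n * factorial(n - 1);     // recursive case: n! = n * (n - 1)!
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
    }
}
```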

Fibonacci sequence

\[F_n = \begin{cases} 0, & \text{when } n = 0; \\ 1, & \text{when } n = 1; \\ F_{n - 1} + F_{n - 2}, & \text{when } n \in \{2, 3, 4, \ldots \}. \end{cases}\]
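Similarly, the Fibonacci definition translates almost verbatim into a recursive method. (Again a sketch with names of our choosing; note that this direct translation performs an exponential number of method calls, which foreshadows the time cost discussed under Disadvantages.)

```java
public class Fibonacci {

    /** Returns the n-th Fibonacci number, directly mirroring the recurrence. */
    public static long fibonacci(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("n must be non-negative");
        }
        if (n == 0) {
            return 0L;                                  // base case: F_0 = 0
        }
        if (n == 1) {
            return 1L;                                  // base case: F_1 = 1
        }
        return fibonacci(n - 1) + fibonacci(n - 2);     // recursive case
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(10)); // prints 55
    }
}
```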

Problems that can be defined using recurrence relations are prime candidates for recursive implementations.

Other applications

Recursion isn’t limited to evaluating mathematical functions. Many problems that are not strictly mathematical in nature can be specified in recursive terms, with potential for recursive implementation. Several of the most widely used sorting algorithms are naturally expressed, and usually implemented, in recursive form. A depth-first search of a tree data structure is easily implemented as a recursive operation. A number of puzzles, such as the Tower of Hanoi, are best understood in recursive terms. Parsing the source code of many programming languages (for compilation) is often specified and implemented as a recursive process. Even processing natural language is at least partially a recursive task.
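To illustrate one of these, here is a hedged sketch of a recursive preorder depth-first traversal of a binary tree. (The `Node` class below is a minimal stand-in defined only for this example, not a type from any particular library.)

```java
import java.util.ArrayList;
import java.util.List;

public class TreeTraversal {

    /** Minimal binary-tree node, defined here only for illustration. */
    static class Node {
        int value;
        Node left;
        Node right;

        Node(int value, Node left, Node right) {
            this.value = value;
            this.left = left;
            this.right = right;
        }
    }

    /** Preorder depth-first search: visit the node, then recurse into each subtree. */
    static void depthFirst(Node node, List<Integer> visited) {
        if (node == null) {
            return;                       // base case: empty subtree
        }
        visited.add(node.value);          // "visit" the current node
        depthFirst(node.left, visited);   // recursive case: left subtree
        depthFirst(node.right, visited);  // recursive case: right subtree
    }

    public static void main(String[] args) {
        Node root = new Node(1,
                new Node(2, null, null),
                new Node(3, new Node(4, null, null), null));
        List<Integer> visited = new ArrayList<>();
        depthFirst(root, visited);
        System.out.println(visited); // prints [1, 2, 3, 4]
    }
}
```

The recursive structure is the same as in the mathematical examples: an empty subtree is the base case, and each non-empty subtree is handled in terms of its smaller left and right subtrees.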

Even though these problems aren’t obviously mathematical, they have a conceptual structure similar to that of recurrence relations in mathematics—namely, that the solution to a problem can be expressed in terms of solutions to smaller versions of the same problem.

For example, we can articulate the process of sorting a collection of values as follows:

If the collection is trivial, containing 0 or 1 values, then the task is done.

Otherwise,

  1. Divide the collection into 2 or more sub-collections.
  2. Sort the sub-collections.
  3. Merge the sorted sub-collections into a sorted collection.

(Notice that step 2 is recursive.) Of course, this isn’t really an algorithm as written here, but more a meta-algorithm. Steps 1 and 3 can be performed in many different ways: the quicksort algorithm (for example) does a lot of processing in step 1 and virtually none in step 3; mergesort, on the other hand, does most of its work in step 3 and almost none in step 1.
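To make the meta-algorithm concrete, here is a hedged sketch of a mergesort along the lines just described, written in Java with method names of our choosing: step 1 splits the array in half, step 2 is the pair of recursive calls, and step 3 is the merge.

```java
import java.util.Arrays;

public class MergeSortSketch {

    /** Sorts the given array, following the divide/sort/merge outline above. */
    public static int[] mergeSort(int[] values) {
        if (values.length <= 1) {
            return values;                                    // trivial collection: already sorted
        }
        int mid = values.length / 2;
        int[] left = Arrays.copyOfRange(values, 0, mid);      // step 1: divide
        int[] right = Arrays.copyOfRange(values, mid, values.length);
        return merge(mergeSort(left), mergeSort(right));      // steps 2 and 3
    }

    /** Step 3: merges two already-sorted arrays into one sorted array. */
    private static int[] merge(int[] left, int[] right) {
        int[] merged = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            merged[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) {
            merged[k++] = left[i++];
        }
        while (j < right.length) {
            merged[k++] = right[j++];
        }
        return merged;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[] {5, 2, 9, 1, 5, 6})));
        // prints [1, 2, 5, 5, 6, 9]
    }
}
```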

Advantages

The last paragraph of each of the examples points out some of the key benefits of recursion: chief among them, a recursive implementation often mirrors the recursive definition of the problem itself, which tends to make the code concise, elegant, and straightforward to check against its specification.

Disadvantages

It is often the case that a recursive implementation—especially a naïve one—may not be the best option, however elegant its expression. The main disadvantages of recursion (apart from the difficulty we might have in wrapping our heads around the concept at first) have to do with the time cost of method invocation and with the consumption of, and limits on, stack space.
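For example, each recursive invocation consumes a stack frame, so even a correct recursive method can fail on deep inputs. The sketch below (ours, for illustration) sums the integers from 1 to n recursively; on a typical JVM with a default stack size it works for modest n but will usually throw a StackOverflowError well before n reaches ten million.

```java
public class DeepRecursion {

    /** Recursively sums 1 + 2 + ... + n; each call occupies one stack frame. */
    static long sumTo(long n) {
        if (n == 0) {
            return 0L;               // base case
        }
        return n + sumTo(n - 1);     // recursive case: one frame per value of n
    }

    public static void main(String[] args) {
        System.out.println(sumTo(1_000));             // fine: prints 500500
        try {
            System.out.println(sumTo(10_000_000));    // likely exceeds the default stack
        } catch (StackOverflowError e) {
            // catching an Error here only to keep the demonstration self-contained
            System.out.println("ran out of stack space");
        }
    }
}
```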