Recursion
Recursion is a programming and mathematical concept where a function or a process calls itself as a subroutine or part of its own execution. In other words, it's a technique where a problem is solved by breaking it down into smaller instances of the same problem.
TL;DR: A function that calls itself again and again until we hit a base case
Recursion → “defining something in terms of itself”
All recursive algorithms must have the following:
Base Case:
Defining when the recursion should stop
Work toward Base Case:
How does our problem approach the base case?
Recursive Call:
Call the function again with a smaller instance that is actively working toward the base case.
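All three parts show up directly in the classic factorial sketch (the function name is just illustrative):

```python
def factorial(n):
    # Base case: the simplest instance, where the recursion stops
    if n == 0:
        return 1
    # Work toward the base case: n - 1 is a smaller instance.
    # Recursive call: solve the smaller instance, then combine.
    return n * factorial(n - 1)
```

`factorial(5)` expands to `5 * 4 * 3 * 2 * 1 * 1`, with the final `1` coming from the base case.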
In a recursive algorithm, the computer "remembers" every previous state of the problem. This information is "held" by the computer on the "activation stack" (i.e., inside each function's workspace).
Every function has its own workspace PER CALL of the function.
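One way to see the per-call workspace: each frame keeps its own copy of `n` alive while the deeper calls run (a minimal sketch; the function name is made up):

```python
def unwind(n):
    # Each call gets its own stack frame with its own local n;
    # the frame "remembers" that n while deeper calls run.
    if n == 0:
        return []               # deepest frame: base case
    inner = unwind(n - 1)       # pushes a new frame holding n - 1
    return inner + [n]          # this frame resumes with ITS own n

print(unwind(3))  # each frame contributes the n it was called with: [1, 2, 3]
```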
Let P be a problem:
Divide P into two or more subproblems (smaller instances)
Divide until the subproblems are simple enough to be solved
All the subproblem solutions are then combined to give a solution to the original problem
This is a basic problem-solving approach called "divide and conquer".
It also forms the basis of "dynamic programming".
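Summing a list by repeatedly halving it shows the divide-and-conquer shape (a sketch; in real code the built-in `sum` is preferable):

```python
def dc_sum(xs):
    # Base cases: instances simple enough to solve directly
    if len(xs) == 0:
        return 0
    if len(xs) == 1:
        return xs[0]
    # Divide into two subproblems (smaller instances)...
    mid = len(xs) // 2
    # ...solve each recursively, then combine the solutions
    return dc_sum(xs[:mid]) + dc_sum(xs[mid:])
```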
The base case should hold the simplest solution for the simplest, smallest instance of the problem
Base Case: In a recursive algorithm, the problem is broken down into subproblems until we reach the base case
Recursive algorithms are allowed to have multiple base cases.
Base cases are considered “end conditions”
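A small illustration of multiple base cases (a hypothetical example, not from the notes): checking parity by repeatedly subtracting 2 needs two end conditions, one per outcome.

```python
def is_even(n):
    if n == 0:              # base case 1: reached zero -> even
        return True
    if n == 1:              # base case 2: reached one -> odd
        return False
    return is_even(n - 2)   # work toward one of the base cases
```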
Single → The function calls itself only once … each invocation makes exactly one recursive call
(Example: Adding up all numbers in a list)
Multiple → The function makes more than one recursive call per invocation
(Example: Fibonacci → Fib(n) = Fib(n-2) + Fib(n-1))
The primary difference between single recursion and multiple recursion is the number of times a function invokes itself.
Single recursion involves one recursive call, while multiple recursion involves multiple recursive calls within a single function. The choice between single and multiple recursion depends on the specific problem being solved and the problem's inherent structure.
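The two examples above can be sketched side by side (function names are illustrative):

```python
def list_sum(xs):
    # Single recursion: exactly one recursive call per invocation
    if not xs:
        return 0
    return xs[0] + list_sum(xs[1:])

def fib(n):
    # Multiple recursion: two recursive calls per invocation
    if n < 2:                       # Fib(0) = 0, Fib(1) = 1
        return n
    return fib(n - 2) + fib(n - 1)
```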
Tail recursion is a special form of recursion where the recursive call is the last operation performed in the function. Languages that support tail call optimization can then reuse the current function's stack frame for the call; note that CPython does not perform this optimization.
Define a base case: As with any recursive function, start by defining the base case(s) that determine when the recursion should stop.
Pass accumulated values as arguments: Instead of relying on the call stack to maintain the state between recursive calls, pass the accumulated values as arguments to the recursive function. This ensures that the recursive call is the last operation in the function.
Update arguments in the tail-recursive call: In each recursive call, update the arguments to move towards the base case or the desired result. Continue to make the recursive call with these updated arguments until you reach the base case.
The tail-recursive version r_factorial2
leaves no pending work after the recursive call, so in languages that perform tail call optimization (e.g., Scheme) it is effectively converted into an iterative loop, avoiding the buildup of stack frames.
This improves memory efficiency and reduces the risk of stack overflow errors for large input values.
In contrast, the non-tail-recursive version r_factorial1
must keep a pending multiplication on the call stack for every call, making it less memory-efficient, particularly for large inputs. Keep in mind that CPython does not optimize tail calls, so in Python both versions consume one stack frame per call; the practical benefit of the tail-recursive form there is that it translates mechanically into a loop.
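The definitions of r_factorial1 and r_factorial2 are not shown in these notes; plausible sketches (the accumulator signature is an assumption):

```python
def r_factorial1(n):
    # Non-tail-recursive: the multiplication happens AFTER the
    # recursive call returns, so every frame holds pending work
    if n == 0:
        return 1
    return n * r_factorial1(n - 1)

def r_factorial2(n, acc=1):
    # Tail-recursive: the accumulator carries the running product,
    # so the recursive call is the function's last operation
    if n == 0:
        return acc
    return r_factorial2(n - 1, acc * n)
```

Because the accumulated value is passed as an argument, `r_factorial2` can be rewritten as a `while` loop with no stack growth at all.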
The relationship between recursion and dynamic programming often lies in the fact that dynamic programming solutions can be implemented using recursion with memoization. In the top-down approach, a recursive function is used to solve subproblems, and the results are stored (memoized) to avoid redundant computations. This combines the benefits of recursion's natural problem decomposition with dynamic programming's efficiency in avoiding duplicated work.
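A top-down (memoized) Fibonacci is the standard illustration of this recursion/dynamic-programming link (a sketch; the function name is made up):

```python
def fib_memo(n, memo=None):
    # Top-down dynamic programming: plain recursion plus a cache
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]          # reuse a stored subproblem result
    if n < 2:
        return n                # base cases: Fib(0) = 0, Fib(1) = 1
    memo[n] = fib_memo(n - 2, memo) + fib_memo(n - 1, memo)
    return memo[n]
```

Without the memo, this recursion recomputes the same subproblems exponentially many times; with it, each subproblem is solved once, so the call count grows linearly in n.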