Dynamic Programming (DP) is used for solving complex problems by breaking them down into simpler overlapping subproblems and solving each subproblem only once, storing the result for future reuse. This avoids the overhead of recalculating solutions repeatedly (as in recursion).
DP is applicable when the problem has two main properties: overlapping subproblems and optimal substructure.
Problem Statement:
Find the nth Fibonacci number, where the Fibonacci series is defined as follows:
F(0) = 0
F(1) = 1
F(n) = F(n - 1) + F(n - 2)
for n ≥ 2
This means each number in the sequence is the sum of the two previous numbers. For example, the first few Fibonacci numbers are:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
If we write a simple recursive solution like:

function fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
This looks clean, but it's extremely inefficient because it recalculates the same values again and again. For example, fib(5) will compute fib(4) and fib(3), but then fib(4) again computes fib(3) and fib(2), so fib(3) is computed twice. The number of recursive calls grows exponentially.
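To see this blow-up concretely, here is a small Python sketch of the naive recursion above. The call counter is our own addition (it is not part of the original pseudocode); it simply records how many times fib is invoked:

```python
# Naive recursive Fibonacci, instrumented with a call counter
# to show how quickly redundant work grows.
call_count = 0

def fib(n):
    global call_count
    call_count += 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))       # 55
print(call_count)    # 177 recursive calls just for fib(10)
```

Even for n = 10, the function is called 177 times; for n = 50 the count would be astronomically large, which is exactly the redundancy DP removes.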
Dynamic Programming solves this problem by storing solutions to subproblems so we don’t compute them again. Let’s explore both top-down and bottom-up DP approaches:
We use recursion, but we store the result of each Fibonacci number we calculate in a dictionary (called memo). Before computing any value, we check if it’s already in memo and use it directly if found.

Steps:
1. Call fib(n).
2. Check if n is already in memo. If yes, return it.
3. If n is 0 or 1, return n (base case).
4. Otherwise, recursively compute fib(n-1) and fib(n-2).
5. Store the sum in memo[n].
6. Return memo[n].

// Memoization approach
function fib(n, memo):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]
Each Fibonacci number is calculated only once and reused wherever needed. This avoids redundant calculations.
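The memoization pseudocode above translates almost directly to Python. This is a minimal sketch; the mutable default handled via None is a common Python idiom for the memo dictionary:

```python
# Memoized (top-down) Fibonacci: each value is computed once, then reused.
def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:              # already solved this subproblem
        return memo[n]
    if n <= 1:                 # base cases F(0)=0, F(1)=1
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(50))  # 12586269025, computed instantly despite the huge naive call tree
```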
Time complexity is O(n), and space complexity is O(n) for the recursion stack and the memo.

Instead of starting from the top and going down recursively, we start from the bottom (base cases) and build up the solution iteratively.
Steps:
1. Create an array dp[] of size n+1 to store Fibonacci values.
2. Initialize the base cases: dp[0] = 0 and dp[1] = 1.
3. Loop from i = 2 to n.
4. At each step, compute dp[i] = dp[i-1] + dp[i-2].
5. Return dp[n] as the final result.

// Tabulation approach
function fib(n):
    if n <= 1:
        return n
    dp = array of size (n+1)
    dp[0] = 0
    dp[1] = 1
    for i from 2 to n:
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
This method uses an iterative approach and solves the subproblems first before combining them to solve the main problem.
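Since each step only reads the previous two values, the dp array is not strictly necessary. Here is a Python sketch of the space-optimized variant, keeping just the last two values:

```python
# Bottom-up Fibonacci with O(1) space: keep only the last two values
# instead of the whole dp array.
def fib(n):
    if n <= 1:
        return n
    prev, curr = 0, 1          # play the roles of dp[i-2] and dp[i-1]
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print(fib(10))  # 55
```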
Time complexity is O(n), and space complexity is O(n) for the dp array (this can be optimized to O(1) by storing only the last two values).

The Fibonacci number problem is a great way to understand dynamic programming. It highlights the idea of breaking a problem into smaller subproblems and using stored results to avoid recomputation. For beginners, this teaches how to recognize overlapping subproblems and how memoization and tabulation can drastically improve performance.
Problem: You are given a set of items, where each item has a weight and a value, and a knapsack with a maximum capacity W. Your goal is to choose a subset of these items such that the total weight does not exceed W and the total value is as large as possible. Each item can be taken at most once (hence "0/1").
This problem has two important properties: optimal substructure and overlapping subproblems. Hence, Dynamic Programming (DP) is the ideal approach. We’ll use Bottom-Up DP (Tabulation), where we fill a 2D table dp[i][w] representing the maximum value achievable using the first i items with capacity w.

Steps:
1. Create a table of size (n+1) x (W+1).
2. Base cases: dp[0][w] = 0 (0 items, so 0 value) and dp[i][0] = 0 (0 capacity, so 0 value).
3. For each cell dp[i][w] we ask: can we include item i-1? Check if weight[i-1] ≤ w.
4. If we include it, the value is value[i-1] + dp[i-1][w - weight[i-1]].
5. If we exclude it, the value is dp[i-1][w].
6. Take the maximum of the two options.
7. At the end, dp[n][W] contains the max value for the full problem.

Imagine the following small example:
weights = [2, 3, 4]
values = [40, 50, 100]
capacity = 5
The table dp[i][w] gets filled row by row. Each cell compares two options: take or skip the item.
// Bottom-up 2D DP
function knapsack(weights, values, n, W):
    dp = 2D array of size (n+1) x (W+1)
    for i from 0 to n:
        for w from 0 to W:
            if i == 0 or w == 0:
                dp[i][w] = 0
            else if weights[i-1] <= w:
                dp[i][w] = max(values[i-1] + dp[i-1][w - weights[i-1]], dp[i-1][w])
            else:
                dp[i][w] = dp[i-1][w]
    return dp[n][W]
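The pseudocode above can be written in Python as follows; running it on the small example from the text (weights [2, 3, 4], values [40, 50, 100], capacity 5) shows the table in action:

```python
# Bottom-up 0/1 knapsack, filling the dp[i][w] table described above.
def knapsack(weights, values, W):
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]  # (n+1) x (W+1), zeros cover base cases
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] <= w:
                # Best of: include item i-1, or skip it
                dp[i][w] = max(values[i - 1] + dp[i - 1][w - weights[i - 1]],
                               dp[i - 1][w])
            else:
                dp[i][w] = dp[i - 1][w]  # item i-1 doesn't fit
    return dp[n][W]

print(knapsack([2, 3, 4], [40, 50, 100], 5))  # 100
```

For this example the best choice is the single item of weight 4 and value 100, which beats taking the weight-2 and weight-3 items together (40 + 50 = 90).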
Dynamic Programming solves the 0/1 Knapsack problem by storing, for every prefix of items and every capacity, the best value achievable, and reusing those stored results instead of recomputing them.
This avoids exponential time from trying all subsets (which is what brute force would do), and gives an efficient solution.
Problem Statement: Given two strings, find the length of the Longest Common Subsequence (LCS) present in both. A subsequence is a sequence that appears in the same relative order but not necessarily contiguous.
Let’s say s1 = "abcde" and s2 = "ace". The LCS is "ace" and its length is 3.
When solving LCS recursively, we may end up solving the same subproblems repeatedly. For instance, the same substring comparisons happen over and over again.
Dynamic Programming helps avoid this by storing the result of each subproblem in a table and reusing it whenever the same subproblem appears again.
Let dp[i][j] represent the length of the LCS of the first i characters of string 1 and the first j characters of string 2.

1. If either string is empty (i == 0 or j == 0), then dp[i][j] = 0.
2. If s1[i-1] == s2[j-1], then the characters match, so dp[i][j] = 1 + dp[i-1][j-1].
3. If s1[i-1] != s2[j-1], we take the maximum LCS length by excluding the current character from either s1 or s2: dp[i][j] = max(dp[i-1][j], dp[i][j-1]).

Use nested loops to fill the table from dp[0][0] to dp[m][n], where m and n are the lengths of s1 and s2. The value at dp[m][n] gives the length of the longest common subsequence.
// Bottom-up 2D DP
function lcs(s1, s2):
    m = length of s1
    n = length of s2
    dp = 2D array of size (m+1) x (n+1)
    for i from 0 to m:
        for j from 0 to n:
            if i == 0 or j == 0:
                dp[i][j] = 0
            else if s1[i-1] == s2[j-1]:
                dp[i][j] = 1 + dp[i-1][j-1]
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
    return dp[m][n]
Let’s take s1 = "abcde" and s2 = "ace":

| i\j | "" | a | c | e |
|-----|----|---|---|---|
| ""  | 0  | 0 | 0 | 0 |
| a   | 0  | 1 | 1 | 1 |
| b   | 0  | 1 | 1 | 1 |
| c   | 0  | 1 | 2 | 2 |
| d   | 0  | 1 | 2 | 2 |
| e   | 0  | 1 | 2 | 3 |

Final result: dp[5][3] = 3 → LCS length = 3 (which is "ace").
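The table above can be reproduced with a direct Python translation of the pseudocode; this sketch returns only the LCS length, matching dp[m][n]:

```python
# Bottom-up LCS: dp[i][j] = LCS length of s1[:i] and s2[:j].
def lcs(s1, s2):
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]  # row/column 0 stay 0 (empty string)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = 1 + dp[i - 1][j - 1]   # characters match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one character
    return dp[m][n]

print(lcs("abcde", "ace"))  # 3
```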
Dynamic Programming ensures that every subproblem (like comparing substrings of s1 and s2) is solved only once and reused. This drastically improves performance from exponential to polynomial time.
LCS is a classic DP problem and a stepping stone to many related problems like Shortest Common Supersequence, Longest Palindromic Subsequence, and Edit Distance.
Dynamic Programming is a versatile and powerful technique for solving optimization problems. By breaking problems into overlapping subproblems and using memoization or tabulation, DP ensures optimal and efficient solutions. It’s essential for problems involving sequences, paths, and decision making under constraints.