
1. Find Maximum and Minimum in Array using Loop
2. Find Second Largest in Array
3. Find Second Smallest in Array
4. Reverse Array using Two Pointers
5. Check if Array is Sorted
6. Remove Duplicates from Sorted Array
7. Left Rotate an Array by One Place
8. Left Rotate an Array by K Places
9. Move Zeroes in Array to End
10. Linear Search in Array
11. Union of Two Arrays
12. Find Missing Number in Array
13. Max Consecutive Ones in Array
14. Find Kth Smallest Element
15. Longest Subarray with Given Sum (Positives)
16. Longest Subarray with Given Sum (Positives and Negatives)
17. Find Majority Element in Array (more than n/2 times)
18. Find Majority Element in Array (more than n/3 times)
19. Maximum Subarray Sum using Kadane's Algorithm
20. Print Subarray with Maximum Sum
21. Stock Buy and Sell
22. Rearrange Array Alternating Positive and Negative Elements
23. Next Permutation of Array
24. Leaders in an Array
25. Longest Consecutive Sequence in Array
26. Count Subarrays with Given Sum
27. Sort an Array of 0s, 1s, and 2s
28. Two Sum Problem
29. Three Sum Problem
30. 4 Sum Problem
31. Find Length of Largest Subarray with 0 Sum
32. Find Maximum Product Subarray

1. Binary Search in Array using Iteration
2. Find Lower Bound in Sorted Array
3. Find Upper Bound in Sorted Array
4. Search Insert Position in Sorted Array (Lower Bound Approach)
5. Floor and Ceil in Sorted Array
6. First Occurrence in a Sorted Array
7. Last Occurrence in a Sorted Array
8. Count Occurrences in Sorted Array
9. Search Element in a Rotated Sorted Array
10. Search in Rotated Sorted Array with Duplicates
11. Minimum in Rotated Sorted Array
12. Find Rotation Count in Sorted Array
13. Search Single Element in Sorted Array
14. Find Peak Element in Array
15. Square Root using Binary Search
16. Nth Root of a Number using Binary Search
17. Koko Eating Bananas
18. Minimum Days to Make M Bouquets
19. Find the Smallest Divisor Given a Threshold
20. Capacity to Ship Packages within D Days
21. Kth Missing Positive Number
22. Aggressive Cows Problem
23. Allocate Minimum Number of Pages
24. Split Array - Minimize Largest Sum
25. Painter's Partition Problem
26. Minimize Maximum Distance Between Gas Stations
27. Median of Two Sorted Arrays of Different Sizes
28. K-th Element of Two Sorted Arrays

1. Reverse Words in a String
2. Find the Largest Odd Number in a Numeric String
3. Find Longest Common Prefix in Array of Strings
4. Find Longest Common Substring in Two Strings
5. Check If Two Strings Are Isomorphic - Optimal HashMap Solution
6. Check String Rotation using Concatenation - Optimal Approach
7. Check if Two Strings Are Anagrams - Optimal Approach
8. Sort Characters by Frequency - Optimal HashMap and Heap Approach
9. Find Longest Palindromic Substring - Dynamic Programming Approach
10. Find Longest Palindromic Substring Without Dynamic Programming
11. Remove Outermost Parentheses in String
12. Find Maximum Nesting Depth of Parentheses - Optimal Stack-Free Solution
13. Convert Roman Numerals to Integer - Efficient Approach
14. Convert Integer to Roman Numeral - Step-by-Step for Beginners
15. Implement Atoi - Convert String to Integer in Java
16. Count Number of Substrings in a String - Explanation with Formula
17. Edit Distance Problem
18. Calculate Sum of Beauty of All Substrings - Optimal Approach
19. Reverse Each Word in a String - Optimal Approach

1. Check if i-th Bit is Set
2. Check if a Number is Even/Odd
3. Check if a Number is Power of 2
4. Count Number of Set Bits
5. Swap Two Numbers using XOR
6. Divide Two Integers without using Multiplication, Division and Modulus Operator
7. Count Number of Bits to Flip to Convert A to B
8. Find the Number that Appears Odd Number of Times
9. Power Set
10. Find XOR of Numbers from L to R
11. Prime Factors of a Number
12. All Divisors of Number
13. Sieve of Eratosthenes
14. Find Prime Factorisation of a Number using Sieve
15. Power(n, x)


1. Preorder Traversal of a Binary Tree using Recursion
2. Preorder Traversal of a Binary Tree using Iteration
3. Inorder Traversal of a Binary Tree using Recursion
4. Inorder Traversal of a Binary Tree using Iteration
5. Postorder Traversal of a Binary Tree Using Recursion
6. Postorder Traversal of a Binary Tree using Iteration
7. Level Order Traversal of a Binary Tree using Recursion
8. Level Order Traversal of a Binary Tree using Iteration
9. Reverse Level Order Traversal of a Binary Tree using Iteration
10. Reverse Level Order Traversal of a Binary Tree using Recursion
11. Find Height of a Binary Tree
12. Find Diameter of a Binary Tree
13. Find Mirror of a Binary Tree
14. Left View of a Binary Tree
15. Right View of a Binary Tree
16. Top View of a Binary Tree
17. Bottom View of a Binary Tree
18. Zigzag Traversal of a Binary Tree
19. Check if a Binary Tree is Balanced
20. Diagonal Traversal of a Binary Tree
21. Boundary Traversal of a Binary Tree
22. Construct a Binary Tree from a String with Bracket Representation
23. Convert a Binary Tree into a Doubly Linked List
24. Convert a Binary Tree into a Sum Tree
25. Find Minimum Swaps Required to Convert a Binary Tree into a BST
26. Check if a Binary Tree is a Sum Tree
27. Check if All Leaf Nodes are at the Same Level in a Binary Tree
28. Lowest Common Ancestor (LCA) in a Binary Tree
29. Solve the Tree Isomorphism Problem
30. Check if a Binary Tree Contains Duplicate Subtrees of Size 2 or More
31. Check if Two Binary Trees are Mirror Images
32. Calculate the Sum of Nodes on the Longest Path from Root to Leaf in a Binary Tree
33. Print All Paths in a Binary Tree with a Given Sum
34. Find the Distance Between Two Nodes in a Binary Tree
35. Find the kth Ancestor of a Node in a Binary Tree
36. Find All Duplicate Subtrees in a Binary Tree

1. Find a Value in a Binary Search Tree
2. Delete a Node in a Binary Search Tree
3. Find the Minimum Value in a Binary Search Tree
4. Find the Maximum Value in a Binary Search Tree
5. Find the Inorder Successor in a Binary Search Tree
6. Find the Inorder Predecessor in a Binary Search Tree
7. Check if a Binary Tree is a Binary Search Tree
8. Find the Lowest Common Ancestor of Two Nodes in a Binary Search Tree
9. Convert a Binary Tree into a Binary Search Tree
10. Balance a Binary Search Tree
11. Merge Two Binary Search Trees
12. Find the kth Largest Element in a Binary Search Tree
13. Find the kth Smallest Element in a Binary Search Tree
14. Flatten a Binary Search Tree into a Sorted List

1. Breadth-First Search in Graphs
2. Depth-First Search in Graphs
3. Number of Provinces in an Undirected Graph
4. Connected Components in a Matrix
5. Rotten Oranges Problem - BFS in Matrix
6. Flood Fill Algorithm - Graph Based
7. Detect Cycle in an Undirected Graph using DFS
8. Detect Cycle in an Undirected Graph using BFS
9. Distance of Nearest Cell Having 1 - Grid BFS
10. Surrounded Regions in Matrix using Graph Traversal
11. Number of Enclaves in Grid
12. Word Ladder - Shortest Transformation using Graph
13. Word Ladder II - All Shortest Transformation Sequences
14. Number of Distinct Islands using DFS
15. Check if a Graph is Bipartite using DFS
16. Topological Sort Using DFS
17. Topological Sort using Kahn's Algorithm
18. Cycle Detection in Directed Graph using BFS
19. Course Schedule - Task Ordering with Prerequisites
20. Course Schedule 2 - Task Ordering Using Topological Sort
21. Find Eventual Safe States in a Directed Graph
22. Alien Dictionary Character Order
23. Shortest Path in Undirected Graph with Unit Distance
24. Shortest Path in DAG using Topological Sort
25. Dijkstra's Algorithm Using Set - Shortest Path in Graph
26. Dijkstra's Algorithm Using Priority Queue
27. Shortest Distance in a Binary Maze using BFS
28. Path With Minimum Effort in Grid using Graphs
29. Cheapest Flights Within K Stops - Graph Problem
30. Number of Ways to Reach Destination in Shortest Time - Graph Problem
31. Minimum Multiplications to Reach End - Graph BFS
32. Bellman-Ford Algorithm for Shortest Paths
33. Floyd Warshall Algorithm for All-Pairs Shortest Path
34. Find the City With the Fewest Reachable Neighbours
35. Minimum Spanning Tree in Graphs
36. Prim's Algorithm for Minimum Spanning Tree
37. Disjoint Set (Union-Find) with Union by Rank and Path Compression
38. Kruskal's Algorithm - Minimum Spanning Tree
39. Minimum Operations to Make Network Connected
40. Most Stones Removed with Same Row or Column
41. Accounts Merge Problem using Disjoint Set Union
42. Number of Islands II - Online Queries using DSU
43. Making a Large Island Using DSU
44. Bridges in Graph using Tarjan's Algorithm
45. Articulation Points in Graphs
46. Strongly Connected Components using Kosaraju's Algorithm
Dynamic Programming Technique in DSA | Strategy & Examples
Dynamic Programming in a Nutshell
- Break problems into overlapping subproblems.
- Store and reuse solutions to subproblems (memoization or tabulation).
- Ideal for optimization problems with optimal substructure.
What is the Dynamic Programming Technique?
Dynamic Programming (DP) is used for solving complex problems by breaking them down into simpler overlapping subproblems and solving each subproblem only once, storing the result for future reuse. This avoids the overhead of recalculating solutions repeatedly (as in recursion).
DP is applicable when the problem has two main properties:
- Overlapping Subproblems: The problem can be broken into smaller subproblems that repeat over time.
- Optimal Substructure: The optimal solution of a problem can be constructed from the optimal solutions of its subproblems.
Top-Down vs Bottom-Up Approach
- Top-Down (Memoization): Solve the problem recursively and store results to avoid duplicate work.
- Bottom-Up (Tabulation): Build the solution iteratively using a table from base case up to the final result.
Example 1: Fibonacci Number — Explained for Beginners
Problem Statement:
Find the nth Fibonacci number, where the Fibonacci series is defined as follows:
F(0) = 0
F(1) = 1
F(n) = F(n - 1) + F(n - 2), for n ≥ 2
This means each number in the sequence is the sum of the two previous numbers. For example, the first few Fibonacci numbers are:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
Why Not Use Simple Recursion?
If we write a simple recursive solution like:
function fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
This looks clean, but it's extremely inefficient because it recalculates the same values again and again. For example, fib(5) will compute fib(4) and fib(3), but then fib(4) again computes fib(3) and fib(2), so fib(3) is computed twice. The number of recursive calls grows exponentially.
This is where Dynamic Programming helps.
Dynamic Programming solves this problem by storing solutions to subproblems so we don’t compute them again. Let’s explore both top-down and bottom-up DP approaches:
Top-Down DP Approach (Memoization)
We use recursion, but we store the result of each Fibonacci number we calculate in a dictionary (called memo). Before computing any value, we check if it is already in memo and use it directly if found.
Step-by-step Explanation:
- Start from fib(n).
- Check if n is already in memo. If yes, return it.
- If n is 0 or 1, return n (base case).
- Otherwise, compute fib(n-1) and fib(n-2).
- Store their sum in memo[n].
- Return memo[n].
Pseudocode
// Memoization approach
function fib(n, memo):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]
Why It Works:
Each Fibonacci number is calculated only once and reused wherever needed. This avoids redundant calculations.
Time Complexity:
- O(n) — each number is computed once and stored.
Space Complexity:
- O(n) — to store memo.
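The memoization pseudocode above can be sketched in runnable Python (the language here is an assumption; the article itself uses language-neutral pseudocode). A `None` default is used for `memo` so each top-level call gets a fresh dictionary rather than a shared mutable default:

```python
def fib(n, memo=None):
    # Fresh dictionary per top-level call (avoids Python's mutable-default pitfall).
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]      # reuse a previously computed result
    if n <= 1:
        return n            # base cases: F(0) = 0, F(1) = 1
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

For example, `fib(10)` returns 55, and even `fib(50)` finishes instantly because each subproblem is solved once.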
Bottom-Up DP Approach (Tabulation)
Instead of starting from the top and going down recursively, we start from the bottom (base cases) and build up the solution iteratively.
Step-by-step Explanation:
- Create a table dp[] of size n+1 to store Fibonacci values.
- Initialize: dp[0] = 0 and dp[1] = 1.
- Iterate from i = 2 to n.
- At each step, calculate dp[i] = dp[i-1] + dp[i-2].
- Return dp[n] as the final result.
Pseudocode
// Tabulation approach
function fib(n):
    if n <= 1:
        return n
    dp = array of size (n+1)
    dp[0] = 0
    dp[1] = 1
    for i from 2 to n:
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
Why It Works:
This method uses an iterative approach and solves the subproblems first before combining them to solve the main problem.
Time Complexity:
- O(n) — we compute each Fibonacci number from 2 to n once.
Space Complexity:
- O(n) — for the dp array (can be optimized to O(1) by storing only the last two values).
The Fibonacci number problem is a great way to understand dynamic programming. It highlights the idea of breaking a problem into smaller subproblems and using stored results to avoid recomputation. For beginners, this teaches how to recognize overlapping subproblems and how memoization and tabulation can drastically improve performance.
Example 2: 0/1 Knapsack Problem
Problem: You are given:
- n items, where each item has a weight and a value.
- A knapsack (bag) with a fixed capacity W.

Your goal is to choose a subset of these items such that:
- The total weight of the selected items does not exceed W.
- The total value of the selected items is maximized.
- You can either take an item or leave it — no fractions allowed. Hence the name "0/1 Knapsack".
Why Dynamic Programming?
This problem has two important properties:
- Optimal Substructure: The solution to the main problem can be built using solutions of its subproblems (like smaller capacities or fewer items).
- Overlapping Subproblems: Many subproblems repeat — solving and storing them avoids recomputation.
Hence, Dynamic Programming (DP) is the ideal approach. We'll use Bottom-Up DP (Tabulation), where we fill a 2D table dp[i][w] representing the maximum value achievable using the first i items and capacity w.
Step-by-Step Explanation
- Create a DP Table: Build a 2D table with dimensions (n+1) x (W+1).
- Base Case Initialization: dp[0][w] = 0 (0 items, so 0 value) and dp[i][0] = 0 (0 capacity, so 0 value).
- Fill the Table:
  - Loop through each item (1 to n).
  - Loop through each capacity (1 to W).
  - For each dp[i][w], ask: can item i-1 be included? Check if weight[i-1] ≤ w.
  - If yes, consider both options: include it (value becomes value[i-1] + dp[i-1][w - weight[i-1]]) or exclude it (value is dp[i-1][w]), and choose the maximum of the two.
  - If the item can't be included, just copy the value from the row above.
- Final Answer: The cell dp[n][W] contains the max value for the full problem.
Visual Representation of Table Update
Imagine the following small example:
weights = [2, 3, 4]
values = [40, 50, 100]
capacity = 5
The table dp[i][w] gets filled row by row. Each cell compares two options: take or skip the item.
Pseudocode (Bottom-Up Approach)
// Bottom-up 2D DP
function knapsack(weights, values, n, W):
    dp = 2D array of size (n+1) x (W+1)
    for i from 0 to n:
        for w from 0 to W:
            if i == 0 or w == 0:
                dp[i][w] = 0
            else if weights[i-1] <= w:
                dp[i][w] = max(values[i-1] + dp[i-1][w - weights[i-1]], dp[i-1][w])
            else:
                dp[i][w] = dp[i-1][w]
    return dp[n][W]
Time and Space Complexity
- Time: O(n × W)
- Space: O(n × W) — due to the 2D DP table
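The pseudocode above can be sketched in Python as follows (the language choice is an assumption; also, `n` is derived from `len(weights)` here instead of being passed in):

```python
def knapsack(weights, values, W):
    # dp[i][w] = best value using the first i items with capacity w.
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if weights[i - 1] <= w:
                # Item i-1 fits: take the better of including or excluding it.
                dp[i][w] = max(values[i - 1] + dp[i - 1][w - weights[i - 1]],
                               dp[i - 1][w])
            else:
                dp[i][w] = dp[i - 1][w]   # item doesn't fit: copy the row above
    return dp[n][W]
```

On the small example from the visual section (weights [2, 3, 4], values [40, 50, 100], capacity 5), the best choice is the single item of weight 4, giving `knapsack([2, 3, 4], [40, 50, 100], 5) == 100`.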
Takeaway
Dynamic Programming helps solve the 0/1 Knapsack problem by:
- Breaking it into subproblems (fewer items and smaller capacities).
- Solving each subproblem only once.
- Building up the final result using those stored solutions.
This avoids exponential time from trying all subsets (which is what brute force would do), and gives an efficient solution.
Example 3: Longest Common Subsequence (LCS)
Problem Statement: Given two strings, find the length of the Longest Common Subsequence (LCS) present in both. A subsequence is a sequence that appears in the same relative order but not necessarily contiguous.
Example
Let’s say:
- String 1: "abcde"
- String 2: "ace"

The LCS is "ace" and its length is 3.
Why Use Dynamic Programming?
When solving LCS recursively, we may end up solving the same subproblems repeatedly. For instance, the same substring comparisons happen over and over again.
Dynamic Programming helps avoid this by:
- Breaking the problem into subproblems.
- Storing results of subproblems in a table (2D array).
- Building up the final solution using these stored results.
Step-by-Step Dynamic Programming Solution
Step 1: Create a 2D Table
Let dp[i][j] represent the length of the LCS of the first i characters of string 1 and the first j characters of string 2.
- If either string is empty (i == 0 or j == 0), then dp[i][j] = 0.
- If s1[i-1] == s2[j-1], the characters match, so dp[i][j] = 1 + dp[i-1][j-1].
- If s1[i-1] != s2[j-1], we take the maximum LCS length by excluding the current character from either s1 or s2: dp[i][j] = max(dp[i-1][j], dp[i][j-1]).
Step 2: Fill the Table
Use nested loops to fill the table from dp[0][0] to dp[m][n], where m and n are the lengths of s1 and s2.
Step 3: Final Answer
The value at dp[m][n] gives the length of the longest common subsequence.
Pseudocode
// Bottom-up 2D DP
function lcs(s1, s2):
    m = length of s1
    n = length of s2
    dp = 2D array of size (m+1) x (n+1)
    for i from 0 to m:
        for j from 0 to n:
            if i == 0 or j == 0:
                dp[i][j] = 0
            else if s1[i-1] == s2[j-1]:
                dp[i][j] = 1 + dp[i-1][j-1]
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
    return dp[m][n]
Trace Example
Let’s take s1 = "abcde", s2 = "ace":

| i\j | "" | a | c | e |
|-----|----|---|---|---|
| ""  | 0  | 0 | 0 | 0 |
| a   | 0  | 1 | 1 | 1 |
| b   | 0  | 1 | 1 | 1 |
| c   | 0  | 1 | 2 | 2 |
| d   | 0  | 1 | 2 | 2 |
| e   | 0  | 1 | 2 | 3 |

Final result: dp[5][3] = 3 → LCS length = 3 (which is "ace").
Time and Space Complexity
- Time Complexity: O(m * n)
- Space Complexity: O(m * n)
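The LCS pseudocode above translates directly to Python (the language choice is an assumption, matching the other sketches):

```python
def lcs(s1, s2):
    m, n = len(s1), len(s2)
    # dp[i][j] = LCS length of s1[:i] and s2[:j]; row/column 0 stay 0 (empty string).
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = 1 + dp[i - 1][j - 1]   # characters match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

Running it on the trace example gives `lcs("abcde", "ace") == 3`, matching the table above.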
Dynamic Programming ensures that every subproblem (like comparing substrings of s1 and s2) is solved only once and reused. This drastically improves performance from exponential to polynomial time.
LCS is a classic DP problem and a stepping stone to many related problems like Shortest Common Supersequence, Longest Palindromic Subsequence, and Edit Distance.
Advantages and Disadvantages of DP
Advantages
- Efficient: Reduces time complexity significantly compared to naive recursion.
- Scalable: Works well even for large inputs due to subproblem reuse.
Disadvantages
- Higher space usage: Often requires arrays or matrices to store results.
- Complex logic: Needs deep understanding of subproblem relationships.
Conclusion
Dynamic Programming is a versatile and powerful technique for solving optimization problems. By breaking problems into overlapping subproblems and using memoization or tabulation, DP ensures optimal and efficient solutions. It’s essential for problems involving sequences, paths, and decision making under constraints.