Dynamic Programming Technique in DSA | Strategy & Examples

Dynamic Programming in a Nutshell

  • Break problems into overlapping subproblems.
  • Store and reuse solutions to subproblems (memoization or tabulation).
  • Ideal for optimization problems with optimal substructure.

What is the Dynamic Programming Technique?

Dynamic Programming (DP) solves complex problems by breaking them down into simpler overlapping subproblems, solving each subproblem only once, and storing its result for future reuse. This avoids the overhead of recalculating the same solutions repeatedly, as naive recursion does.

DP is applicable when the problem has two main properties:

  • Overlapping Subproblems: The problem can be broken into smaller subproblems, and the same subproblems recur many times during the computation.
  • Optimal Substructure: The optimal solution of a problem can be constructed from the optimal solutions of its subproblems.

Top-Down vs Bottom-Up Approach

  • Top-Down (Memoization): Solve the problem recursively and store results to avoid duplicate work.
  • Bottom-Up (Tabulation): Build the solution iteratively using a table from base case up to the final result.

Example 1: Fibonacci Number — Explained for Beginners

Problem Statement:

Find the nth Fibonacci number, where the Fibonacci series is defined as follows:

  • F(0) = 0
  • F(1) = 1
  • F(n) = F(n - 1) + F(n - 2) for n ≥ 2

This means each number in the sequence is the sum of the two previous numbers. For example, the first few Fibonacci numbers are:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

Why Not Use Simple Recursion?

If we write a simple recursive solution like:

function fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

This looks clean, but it's extremely inefficient because it recalculates the same values again and again. For example, fib(5) computes fib(4) and fib(3), but fib(4) in turn computes fib(3) and fib(2), so fib(3) is computed twice. The number of recursive calls grows exponentially with n.
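
To make that blow-up concrete, here is a runnable Python sketch of the same recursion, instrumented with a call counter (the counter is our addition, purely for illustration):

# Naive recursive Fibonacci with a call counter to show the blow-up.
call_count = 0

def fib(n):
    global call_count
    call_count += 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10), call_count)   # 55, computed using 177 calls

Even fib(10) already takes 177 calls, and the count grows exponentially as n increases.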

This is where Dynamic Programming helps.

Dynamic Programming solves this problem by storing solutions to subproblems so we don’t compute them again. Let’s explore both top-down and bottom-up DP approaches:

Top-Down DP Approach (Memoization)

We use recursion, but we store the result of each Fibonacci number we calculate in a dictionary (called memo). Before computing any value, we check if it’s already in memo and use it directly if found.

Step-by-step Explanation:

  1. Start from fib(n).
  2. Check if n is already in memo. If yes, return it.
  3. If n is 0 or 1, return n (base case).
  4. Otherwise, compute fib(n-1) and fib(n-2).
  5. Store their sum in memo[n].
  6. Return memo[n].

Pseudocode

// Memoization approach
function fib(n, memo):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]
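
The same logic as runnable Python (a minimal sketch; creating the memo dictionary on the first call is a convenience of this version):

# Top-down Fibonacci: memoize results in a dictionary.
def fib(n, memo=None):
    if memo is None:
        memo = {}          # fresh memo for each top-level call
    if n in memo:
        return memo[n]     # reuse a previously computed result
    if n <= 1:
        return n           # base cases: F(0) = 0, F(1) = 1
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(50))   # 12586269025, computed in linear time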

Why It Works:

Each Fibonacci number is calculated only once and reused wherever needed. This avoids redundant calculations.

Time Complexity:

  • O(n) — each number is computed once and stored.

Space Complexity:

  • O(n) — to store memo.

Bottom-Up DP Approach (Tabulation)

Instead of starting from the top and going down recursively, we start from the bottom (base cases) and build up the solution iteratively.

Step-by-step Explanation:

  1. Create a table dp[] of size n+1 to store Fibonacci values.
  2. Initialize: dp[0] = 0 and dp[1] = 1.
  3. Iterate from i = 2 to n.
  4. At each step, calculate dp[i] = dp[i-1] + dp[i-2].
  5. Return dp[n] as the final result.

Pseudocode

// Tabulation approach
function fib(n):
    if n <= 1:
        return n
    dp = array of size (n+1)
    dp[0] = 0
    dp[1] = 1
    for i from 2 to n:
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
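
And the same approach as runnable Python (a minimal sketch):

# Bottom-up Fibonacci: fill a dp table from the base cases upward.
def fib(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)     # dp[i] will hold F(i)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib(10))   # 55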

Why It Works:

This method solves the smallest subproblems first and combines their results iteratively, so every value is already available by the time it is needed and no recursion is required.

Time Complexity:

  • O(n) — we compute each Fibonacci number from 2 to n once.

Space Complexity:

  • O(n) — for the dp array (this can be optimized to O(1) by keeping only the last two values, as sketched below).
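
That optimization needs nothing more than two variables (a minimal sketch):

# Space-optimized Fibonacci: keep only the last two values.
def fib(n):
    if n <= 1:
        return n
    prev, curr = 0, 1      # F(0) and F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print(fib(10))   # 55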

The Fibonacci number problem is a great way to understand dynamic programming. It highlights the idea of breaking a problem into smaller subproblems and using stored results to avoid recomputation. For beginners, this teaches how to recognize overlapping subproblems and how memoization and tabulation can drastically improve performance.


Example 2: 0/1 Knapsack Problem

Problem: You are given:

  • n items, where each item has a weight and a value.
  • A knapsack (bag) with a fixed capacity W.

Your goal is to choose a subset of these items such that:

  • The total weight of the selected items does not exceed W.
  • The total value of the selected items is maximized.
  • You can either take an item or leave it — no fractions allowed. Hence the name "0/1 Knapsack".

Why Dynamic Programming?

This problem has two important properties:

  • Optimal Substructure: The solution to the main problem can be built using solutions of its subproblems (like smaller capacities or fewer items).
  • Overlapping Subproblems: Many subproblems repeat — solving and storing them avoids recomputation.

Hence, Dynamic Programming (DP) is the ideal approach. We’ll use Bottom-Up DP (Tabulation) where we fill a 2D table dp[i][w] representing:

  • The maximum value achievable using the first i items and capacity w.

Step-by-Step Explanation

  1. Create a DP Table: Build a 2D table with dimensions (n+1) x (W+1).
  2. Base Case Initialization:
    • dp[0][w] = 0 → 0 items, so 0 value.
    • dp[i][0] = 0 → 0 capacity, so 0 value.
  3. Fill the Table:
    • Loop through each item (1 to n).
    • Loop through each capacity (1 to W).
    • For each dp[i][w] we ask:
      • Can I include item i-1? Check if weight[i-1] ≤ w.
      • If yes, then consider both:
        • Include it → value becomes value[i-1] + dp[i-1][w - weight[i-1]].
        • Exclude it → value is just dp[i-1][w].
      • Choose the maximum of these two options.
      • If the item can't be included, just copy the value from the row above: dp[i][w] = dp[i-1][w].
  4. Final Answer: The cell dp[n][W] contains the max value for the full problem.

Visual Representation of Table Update

Imagine the following small example:

  • weights = [2, 3, 4]
  • values = [40, 50, 100]
  • capacity = 5

The table dp[i][w] gets filled row by row. Each cell compares two options: take or skip the item.
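
For reference, here is the fully filled table for that example, computed by hand from the recurrence above (rows are the number of items considered, columns are capacities 0 through 5):

  i\w    0    1    2    3    4    5
  0      0    0    0    0    0    0
  1      0    0   40   40   40   40
  2      0    0   40   50   50   90
  3      0    0   40   50  100  100

The answer is dp[3][5] = 100: taking only the third item (weight 4, value 100) beats taking the first two (weight 5, value 90).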

Pseudocode (Bottom-Up Approach)

// Bottom-up 2D DP
function knapsack(weights, values, n, W):
    dp = 2D array of size (n+1) x (W+1)
    for i from 0 to n:
        for w from 0 to W:
            if i == 0 or w == 0:
                dp[i][w] = 0
            else if weights[i-1] <= w:
                dp[i][w] = max(values[i-1] + dp[i-1][w - weights[i-1]], dp[i-1][w])
            else:
                dp[i][w] = dp[i-1][w]
    return dp[n][W]
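
Translated into runnable Python and tried on the small example above (a sketch; the signature mirrors the pseudocode):

# Bottom-up 0/1 knapsack over a (n+1) x (W+1) table.
def knapsack(weights, values, n, W):
    dp = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if weights[i - 1] <= w:
                # Best of including or excluding item i-1.
                dp[i][w] = max(values[i - 1] + dp[i - 1][w - weights[i - 1]],
                               dp[i - 1][w])
            else:
                dp[i][w] = dp[i - 1][w]          # item doesn't fit
    return dp[n][W]

print(knapsack([2, 3, 4], [40, 50, 100], 3, 5))  # 100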

Time and Space Complexity

  • Time: O(n × W)
  • Space: O(n × W) — due to the 2D DP table

Takeaway

Dynamic Programming helps solve the 0/1 Knapsack problem by:

  • Breaking it into subproblems (fewer items and smaller capacities).
  • Solving each subproblem only once.
  • Building up the final result using those stored solutions.

This avoids the exponential time of trying all 2^n subsets (which is what brute force would do) and gives an efficient solution.


Example 3: Longest Common Subsequence (LCS)

Problem Statement: Given two strings, find the length of the Longest Common Subsequence (LCS) present in both. A subsequence is a sequence that appears in both strings in the same relative order but is not necessarily contiguous.

Example

Let’s say:

  • String 1: "abcde"
  • String 2: "ace"

The LCS is "ace" and its length is 3.

Why Use Dynamic Programming?

When solving LCS recursively, we may end up solving the same subproblems repeatedly. For instance, the same substring comparisons happen over and over again.

Dynamic Programming helps avoid this by:

  • Breaking the problem into subproblems.
  • Storing results of subproblems in a table (2D array).
  • Building up the final solution using these stored results.

Step-by-Step Dynamic Programming Solution

Step 1: Create a 2D Table

Let dp[i][j] represent the length of the LCS of the first i characters of string 1 and first j characters of string 2.

  • If either string is empty (i == 0 or j == 0), then dp[i][j] = 0.
  • If s1[i-1] == s2[j-1], then the characters match. So, dp[i][j] = 1 + dp[i-1][j-1].
  • If s1[i-1] != s2[j-1], we take the maximum LCS length by either excluding the current character from s1 or s2: dp[i][j] = max(dp[i-1][j], dp[i][j-1]).

Step 2: Fill the Table

Use nested loops to fill the table starting from dp[0][0] to dp[m][n] where m and n are the lengths of s1 and s2.

Step 3: Final Answer

The value at dp[m][n] gives the length of the longest common subsequence.

Pseudocode

// Bottom-up 2D DP
function lcs(s1, s2):
    m = length of s1
    n = length of s2
    dp = 2D array of size (m+1) x (n+1)

    for i from 0 to m:
        for j from 0 to n:
            if i == 0 or j == 0:
                dp[i][j] = 0
            else if s1[i-1] == s2[j-1]:
                dp[i][j] = 1 + dp[i-1][j-1]
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
                
    return dp[m][n]
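
As runnable Python, tried on the example strings (a minimal sketch):

# Bottom-up LCS over a (m+1) x (n+1) table.
def lcs(s1, s2):
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]   # empty-prefix rows/cols stay 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = 1 + dp[i - 1][j - 1]  # characters match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs("abcde", "ace"))   # 3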

Trace Example

Let’s take s1 = "abcde", s2 = "ace"

i\j""ace
""0000
a0111
b0111
c0122
d0122
e0123

Final result: dp[5][3] = 3 → LCS length = 3 (which is "ace")

Time and Space Complexity

  • Time Complexity: O(m × n)
  • Space Complexity: O(m × n) — for the 2D DP table

Dynamic Programming ensures that every subproblem (like comparing substrings of s1 and s2) is solved only once and reused. This drastically improves performance from exponential to polynomial time.

LCS is a classic DP problem and a stepping stone to many related problems like Shortest Common Supersequence, Longest Palindromic Subsequence, and Edit Distance.


Advantages and Disadvantages of DP

Advantages

  • Efficient: Reduces time complexity significantly compared to naive recursion.
  • Scalable: Works well even for large inputs due to subproblem reuse.

Disadvantages

  • Higher space usage: Often requires arrays or matrices to store results.
  • Complex logic: Needs deep understanding of subproblem relationships.

Conclusion

Dynamic Programming is a versatile and powerful technique for solving optimization problems. By breaking problems into overlapping subproblems and using memoization or tabulation, DP ensures optimal and efficient solutions. It’s essential for problems involving sequences, paths, and decision making under constraints.

