Divide and Conquer Technique in DSA | Strategy & Examples
Divide and Conquer in a Nutshell
- Break the problem into smaller subproblems of the same type.
- Recursively solve the subproblems.
- Combine the solutions to form the final answer.
What is the Divide and Conquer Technique?
The Divide and Conquer technique is a fundamental strategy in DSA used to solve complex problems by dividing them into smaller subproblems, solving each subproblem independently (often recursively), and then combining their results to solve the original problem.
This technique is especially powerful for problems that can be recursively broken down and where subproblems do not depend on each other.
Core Steps of Divide and Conquer
- Divide: Break the problem into smaller subproblems.
- Conquer: Solve the subproblems recursively.
- Combine: Merge the solutions of subproblems to solve the original one.
Why Divide and Conquer Works
This approach leverages recursion and usually leads to efficient algorithms. The total time complexity is often determined by a recurrence relation, which can be solved using the Master Theorem or recursion trees.
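As a concrete example, merge sort (covered below) splits the input into two half-size subproblems and spends linear time combining them, which gives a recurrence that the Master Theorem resolves directly:

```latex
T(n) = 2\,T(n/2) + O(n)
% a = 2 subproblems, b = 2 (each of size n/2), f(n) = O(n)
% n^{\log_b a} = n^{\log_2 2} = n, which matches f(n): case 2 of the Master Theorem
\Rightarrow \; T(n) = O(n \log n)
```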
Examples of Divide and Conquer Applications
1. Merge Sort — Explained with Divide and Conquer
Problem Statement: You are given an unsorted array of numbers. Your goal is to sort this array in ascending order using an efficient algorithm.
Why Use Divide and Conquer for Sorting?
Sorting large arrays efficiently is a common problem in programming. Traditional approaches like bubble sort or insertion sort have a time complexity of O(n²), which makes them inefficient for large datasets.
Merge Sort is a perfect example of the Divide and Conquer technique because it breaks the problem into smaller, manageable parts, solves them independently, and then combines their solutions.
Understanding Merge Sort Through Divide and Conquer
Let’s understand the three core steps of the Divide and Conquer strategy in the context of Merge Sort:
- Divide: The array is divided into two halves (left and right).
- Conquer: Each half is sorted recursively using the same merge sort technique.
- Combine: The two sorted halves are merged together into one sorted array.
This process continues until the original array is broken down into single-element arrays (which are inherently sorted). Then these are merged step-by-step, resulting in a completely sorted array.
Why This Approach Works
The idea is that it's easier and faster to sort small parts and then merge them, instead of sorting the whole array in one go. This recursive breaking and combining keeps the time complexity low and predictable.
Step-by-Step Example
Suppose we have an array: [6, 3, 8, 5, 2]
- Divide: Break into two halves → [6, 3] and [8, 5, 2]
- Conquer:
- Sort [6, 3] → divide to [6] and [3], merge → [3, 6]
- Sort [8, 5, 2] → divide into [8] and [5, 2]; divide [5, 2] into [5] and [2], merge → [2, 5]; then merge with [8] → [2, 5, 8]
- Combine: Merge [3, 6] and [2, 5, 8] → Final sorted array: [2, 3, 5, 6, 8]
Merge Sort Python Pseudocode
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)
How Merge Works
The merge function takes two sorted arrays and merges them into a single sorted array by comparing elements one by one:
def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
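Putting the two functions together (the pseudocode above is valid Python once indented), here is a quick self-contained check on the earlier example array:

```python
def merge(left, right):
    # Merge two sorted lists by repeatedly taking the smaller front element
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])   # append whatever remains of either list
    result.extend(right[j:])
    return result

def merge_sort(arr):
    if len(arr) <= 1:          # base case: single element is already sorted
        return arr
    mid = len(arr) // 2        # divide
    return merge(merge_sort(arr[:mid]),  # conquer both halves,
                 merge_sort(arr[mid:]))  # then combine

print(merge_sort([6, 3, 8, 5, 2]))  # [2, 3, 5, 6, 8]
```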
Time and Space Complexity Analysis
- Time Complexity: O(n log n) — the array is split in half about log n times (like a binary tree), and each level of merging takes O(n) time, so the total is O(n log n).
- Space Complexity: O(n) — we use extra space to store temporary arrays while merging.
Why Merge Sort is a Great Use Case for Divide and Conquer
- Merge Sort works efficiently for large datasets because it reduces the problem size drastically at each step.
- Each part is handled independently, which makes it easy to parallelize in practice.
- It is a stable sort and guarantees O(n log n) time regardless of input type (sorted, reversed, random).
Merge Sort is a classic example of how the Divide and Conquer approach can be used to build a scalable and efficient sorting algorithm. By mastering its logic, you also gain deep insights into how complex problems can be simplified by breaking them into smaller ones.
2. Binary Search — Applying Divide and Conquer
Problem Statement: You are given a sorted array, and you need to determine whether a specific target value exists in the array. If it does, return its index; otherwise, return -1.
Why Use Divide and Conquer Here?
Binary Search is a Divide and Conquer technique because it follows the three core steps:
- Divide: Split the sorted array into two halves using the middle index.
- Conquer: Decide which half might contain the target value by comparing the middle element with the target.
- Combine: Since we only need to find the index (not merge results), we just return the result of the subproblem directly.
This approach significantly reduces the problem size at each step — instead of scanning every element (as in linear search), we eliminate half the array in each iteration. That’s why Binary Search is efficient, and it's a perfect candidate for applying Divide and Conquer.
Step-by-Step Explanation for Beginners
Let’s say we have a sorted array [1, 3, 5, 7, 9, 11, 13] and we want to find 9.
- Find the middle index → mid = 3; the element at index 3 is 7.
- Compare 9 with 7: since 9 > 7, we ignore the left half and search only the right half → [9, 11, 13].
- Find the middle of the new range → the mid element is now 11.
- Since 9 < 11, we look at the left side → [9].
- The middle is 9 → match found! Return its index (4 in the full array).
At each step, we reduced the size of the problem by half. That’s the essence of Divide and Conquer — solve smaller and smaller subproblems until we reach the answer.
Strategy (Divide and Conquer Applied)
- Check the middle element of the current subarray.
- If it matches the target, return the index — you’re done!
- If the target is less than the middle element, recursively search the left half (conquer step).
- If the target is more, recursively search the right half (conquer step).
Pseudocode
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
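Running the pseudocode above (which is valid Python once indented) on the walk-through example confirms the result:

```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2     # divide: pick the midpoint
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1           # conquer: discard the left half
        else:
            high = mid - 1          # conquer: discard the right half
    return -1                       # target not present

print(binary_search([1, 3, 5, 7, 9, 11, 13], 9))  # 4
print(binary_search([1, 3, 5, 7, 9, 11, 13], 4))  # -1
```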
How Binary Search Follows Divide and Conquer
- Divide: Each time, the array is divided into two halves.
- Conquer: You eliminate one half based on comparison — no need to process it anymore.
- Combine: Since the result is either an index or -1, combining simply means passing the result back up.
Time and Space Complexity
- Time Complexity: O(log n) — because we cut the array size in half every time.
- Space Complexity:
- O(1) — for the iterative version (no extra memory used).
- O(log n) — for the recursive version (due to the recursion call stack).
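The recursive version referred to in the space analysis can be sketched as a minimal variant of the iterative code above (the function name and default arguments here are illustrative choices, not from the original):

```python
def binary_search_recursive(arr, target, low=0, high=None):
    # Each call halves the range; the O(log n) recursion depth
    # is exactly where the extra stack space goes.
    if high is None:
        high = len(arr) - 1
    if low > high:
        return -1
    mid = (low + high) // 2
    if arr[mid] == target:
        return mid
    if arr[mid] < target:
        return binary_search_recursive(arr, target, mid + 1, high)
    return binary_search_recursive(arr, target, low, mid - 1)
```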
Binary Search is one of the most efficient searching techniques — but it only works on sorted arrays. It’s a textbook example of the Divide and Conquer strategy: break the problem down into two parts and solve one recursively while ignoring the other. Because of its logarithmic time complexity, it’s extremely fast even on large datasets.
3. Maximum Subarray Problem (Using Divide and Conquer)
Problem Statement: Given an array of integers (which may include both positive and negative numbers), find the contiguous subarray that has the largest possible sum.
Why Use Divide and Conquer Here?
The Maximum Subarray Problem is a classic problem where a brute-force approach would check all possible subarrays (which are O(n²) in number) and calculate their sums — this is inefficient for large arrays.
To solve it more efficiently, we can use the Divide and Conquer technique. This is useful because:
- We can break the array into two smaller subarrays (left and right halves).
- We can recursively solve the same problem on each half.
- We also handle the case where the maximum subarray lies across the midpoint (spans both halves).
- By combining the results from left, right, and cross-mid sections, we can determine the global maximum efficiently.
This recursive strategy allows us to reduce the time complexity to O(n log n), making it much faster than brute force.
Divide and Conquer Strategy
The array is divided into two halves repeatedly, solving for smaller and smaller problems:
- Divide: Find the middle index of the current subarray.
- Conquer: Recursively find the maximum subarray sum in the left half and in the right half.
- Combine: Find the maximum sum of a subarray that crosses the midpoint (i.e., takes elements from both left and right halves).
We finally return the maximum of the three results — left, right, and cross sums — which gives the correct answer for the current range.
Visual Intuition
Imagine the array is:
[2, -4, 3, -1, 2, -4, 3]
You break it into halves:
- Left = [2, -4, 3]
- Right = [-1, 2, -4, 3]
But the maximum subarray here is [3, -1, 2] (sum 4), which crosses the midpoint. So we must calculate the best "crossing" subarray too, and compare all three possibilities.
Python Pseudocode
def max_crossing_sum(arr, l, m, r):
    # Best sum of a subarray ending at m, extending leftward
    left_sum = float('-inf')
    total = 0
    for i in range(m, l - 1, -1):
        total += arr[i]
        left_sum = max(left_sum, total)
    # Best sum of a subarray starting at m + 1, extending rightward
    right_sum = float('-inf')
    total = 0
    for i in range(m + 1, r + 1):
        total += arr[i]
        right_sum = max(right_sum, total)
    return left_sum + right_sum

def max_subarray_sum(arr, l, r):
    if l == r:
        return arr[l]
    m = (l + r) // 2
    left_max = max_subarray_sum(arr, l, m)
    right_max = max_subarray_sum(arr, m + 1, r)
    cross_max = max_crossing_sum(arr, l, m, r)
    return max(left_max, right_max, cross_max)
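Running the two functions above (repeated here so the snippet is self-contained) on the example array from the visual intuition section:

```python
def max_crossing_sum(arr, l, m, r):
    # Best subarray ending at m (scanning left) plus
    # best subarray starting at m + 1 (scanning right)
    left_sum = float('-inf')
    total = 0
    for i in range(m, l - 1, -1):
        total += arr[i]
        left_sum = max(left_sum, total)
    right_sum = float('-inf')
    total = 0
    for i in range(m + 1, r + 1):
        total += arr[i]
        right_sum = max(right_sum, total)
    return left_sum + right_sum

def max_subarray_sum(arr, l, r):
    if l == r:                      # base case: one element
        return arr[l]
    m = (l + r) // 2                # divide
    return max(max_subarray_sum(arr, l, m),        # conquer left
               max_subarray_sum(arr, m + 1, r),    # conquer right
               max_crossing_sum(arr, l, m, r))     # combine

arr = [2, -4, 3, -1, 2, -4, 3]
print(max_subarray_sum(arr, 0, len(arr) - 1))  # 4, from the subarray [3, -1, 2]
```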
Step-by-Step Breakdown:
- Base Case: If the array has only one element, return that element.
- Recursive Step: Find max subarray in left half, right half, and crossing the middle.
- Return: The maximum of the three.
Time and Space Complexity
- Time Complexity: O(n log n)
- Why? Each level of recursion does O(n) work (for computing crossing sum), and the height of the recursion tree is O(log n).
- Space Complexity: O(log n) due to recursion stack (not counting input array space).
This version of the maximum subarray problem shows the power of divide and conquer. Even though Kadane’s algorithm solves this problem in linear time with dynamic programming, the divide and conquer approach teaches a fundamental design technique and works effectively on problems that don't allow linear-time solutions.
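For comparison, Kadane's algorithm mentioned above solves the same problem in a single linear pass; a minimal sketch:

```python
def kadane(arr):
    # best_ending_here: max sum of a subarray ending at the current index;
    # at each element, either extend the running subarray or start fresh.
    best_ending_here = best_so_far = arr[0]
    for x in arr[1:]:
        best_ending_here = max(x, best_ending_here + x)
        best_so_far = max(best_so_far, best_ending_here)
    return best_so_far

print(kadane([2, -4, 3, -1, 2, -4, 3]))  # 4
```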
Learning this method enhances your understanding of recursion, problem breakdown, and merging solutions — all of which are vital in mastering algorithm design.
When to Use Divide and Conquer
- The problem can be divided into independent subproblems.
- The problem has recursive structure and optimal substructure.
- You can combine the solutions of subproblems efficiently.
Advantages and Disadvantages of Divide and Conquer
Advantages
- Efficient: Often reduces time complexity from O(n²) to O(n log n).
- Clean and Elegant: Recursive logic is easy to express and maintain.
- Parallelizable: Subproblems can often be solved in parallel.
Disadvantages
- Overhead: Recursive calls may lead to high space usage.
- Not Always Optimal: Sometimes simpler iterative methods may perform better.
- Difficult to Debug: Recursion stack can be complex to trace in large inputs.
Conclusion
The Divide and Conquer technique is a cornerstone of algorithmic problem-solving. It shines in problems where subparts can be solved independently and combined efficiently. By leveraging recursion and optimal substructure, it provides scalable and powerful solutions to many classic DSA problems like sorting, searching, and dynamic programming optimizations.