
Greedy Algorithm Technique in DSA
Greedy Algorithms in a Nutshell
- Makes the best choice at each step based on local information.
- Never reconsiders previous choices.
- Efficient for problems that exhibit the greedy-choice property and optimal substructure.
What is the Greedy Algorithm Technique?
The Greedy Algorithm Technique is a method for solving optimization problems by making a series of choices, each of which looks best at the moment. Unlike dynamic programming, greedy algorithms do not revisit or revise earlier decisions.
This method is effective when the problem has the greedy-choice property (local optimum leads to global optimum) and optimal substructure (optimal solution of a problem contains optimal solutions to subproblems).
What is the Greedy-Choice Property?
The greedy-choice property refers to a situation where making the best possible choice at each small step leads to the best overall solution. That is, you don’t need to look ahead or try all possible combinations — you can build the solution step-by-step by always picking what looks best right now.
This works only if every local decision (greedy choice) contributes to the global optimum. If even one local choice can lead you away from the best final answer, then the problem does not satisfy the greedy-choice property. So, a problem that has this property lets you be short-sighted and still guarantees that you’ll get the best answer in the end.
Intuitively, imagine you're trying to reach a goal by choosing one option at a time. If always taking the seemingly best step gets you to the best final result — without ever needing to go back and change your decisions — then the greedy-choice property holds.
If a problem does not have this property, greedy strategies may lead to incorrect or sub-optimal results.
What is Optimal Substructure?
Optimal substructure means that a big problem can be broken down into smaller pieces, and the best solution to the big problem depends on the best solutions to those smaller pieces.
This property is crucial because it allows you to solve the overall problem by solving its parts one at a time. If the subproblems are solved optimally, then combining them gives the correct solution for the full problem.
To build intuition: imagine you're solving a puzzle where solving smaller sections perfectly leads to the whole puzzle being solved. If fixing the small parts automatically helps fix the entire picture, then the puzzle has optimal substructure.
In short, optimal substructure allows you to build the global solution incrementally by solving smaller parts optimally.
Examples of Greedy Algorithm Applications
1. Coin Change Problem (Minimum Coins)
Problem Statement: You are given a set of coin denominations (like 1, 2, 5, 10, etc.) and a target amount. The goal is to make the target amount using the fewest number of coins. You are allowed to use an unlimited number of each coin type.
Example
Suppose the available coins are [1, 5, 10, 20, 50, 100] and the target amount is 135.
Greedy Strategy
The greedy approach says: Always choose the largest coin that does not exceed the remaining amount. Subtract that coin from the amount, and repeat the process until the amount becomes zero.
Step-by-Step Breakdown:
- Start with amount = 135
- The largest coin ≤ 135 is 100 → use it → remaining = 35
- The largest coin ≤ 35 is 20 → use it → remaining = 15
- The largest coin ≤ 15 is 10 → use it → remaining = 5
- The largest coin ≤ 5 is 5 → use it → remaining = 0
Coins used: [100, 20, 10, 5] → total coins = 4
Pseudocode
// Pseudocode (assuming denominations are sorted in descending order)
count = 0
for coin in coins:
    while amount >= coin:
        amount -= coin
        count += 1
return count
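For concreteness, here is the same logic as a small runnable Python function (min_coins_greedy is an illustrative name, not a library routine):

```python
def min_coins_greedy(coins, amount):
    """Greedy coin count; correct only for canonical coin systems."""
    count, used = 0, []
    for coin in sorted(coins, reverse=True):  # largest denomination first
        while amount >= coin:
            amount -= coin
            count += 1
            used.append(coin)
    return count, used

# Example from above: 135 = 100 + 20 + 10 + 5
print(min_coins_greedy([1, 5, 10, 20, 50, 100], 135))  # (4, [100, 20, 10, 5])
```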
Why Greedy Works Here
Greedy-Choice Property:
This problem has the greedy-choice property if the coin system is designed such that choosing the largest denomination at each step always leads to the optimal solution. In standard currency systems (like Indian Rupees or US Dollars), this property is satisfied. Taking the largest coin first always minimizes the total number of coins.
Why? Because skipping a larger coin in favor of multiple smaller coins would always increase the total number of coins used, making it worse.
Optimal Substructure:
The optimal solution to the overall problem depends on the optimal solution to smaller subproblems.
For example, to solve the problem for amount = 135:
- We first take 100, and now we need to solve for 35.
- Then we take 20, and solve for 15.
- Then take 10, and solve for 5.
- Then take 5, and we’re done.
Time and Space Complexity
- Time Complexity: O(n + k), where n is the number of denominations and k is the number of coins used (at most amount / smallest coin). Each loop iteration either advances to the next denomination or emits one coin.
- Space Complexity: O(1) — no extra memory used other than counters.
Important Note:
This greedy solution does not work with all coin systems. For example, if coins are [1, 3, 4] and the target is 6, the greedy approach gives [4, 1, 1] (3 coins), but the optimal is [3, 3] (2 coins). That’s why the coin system must be canonical — designed so that the greedy algorithm always gives the correct answer.
How to Solve Cases Where the Greedy Algorithm Does Not Work
- Dynamic Programming: Try all combinations in a bottom-up or top-down approach and store solutions to subproblems (see the sketch after this list).
- Backtracking or BFS: Explore all valid combinations (not scalable for large inputs).
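As an illustration of the dynamic programming fallback, here is a minimal bottom-up sketch in Python (the function name min_coins_dp is our own; it assumes positive integer denominations):

```python
def min_coins_dp(coins, amount):
    """Bottom-up DP: works for any coin system, unlike the greedy version."""
    INF = float("inf")
    best = [0] + [INF] * amount  # best[a] = fewest coins that sum to a
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a and best[a - coin] + 1 < best[a]:
                best[a] = best[a - coin] + 1
    return best[amount] if best[amount] != INF else -1

# Greedy fails on this system: it gives 3 coins (4+1+1), DP finds 2 (3+3)
print(min_coins_dp([1, 3, 4], 6))  # 2
```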
The coin change problem is an excellent example of how greedy algorithms can solve optimization problems efficiently — but only when the problem satisfies both the greedy-choice property and optimal substructure. It's crucial to test whether the greedy method works for your specific input set before applying it in production.
2. Activity Selection Problem
Problem Statement:
You are given a list of activities. Each activity has a start time and an end time. Your goal is to select the maximum number of non-overlapping activities — meaning no two selected activities can happen at the same time.
This is a classic optimization problem where you must choose activities in such a way that you can attend the maximum number without any time conflicts.
Understanding the Intuition
Imagine you are organizing a schedule. You have multiple tasks to complete, but you can only do one at a time. Each task takes up a block of time. Your aim is to pick the highest number of tasks without overlaps, so that your time is used efficiently.
At first glance, you might think of trying all combinations of activities and choosing the one with the most tasks. But that would be slow (exponential time). Instead, we use a greedy algorithm.
Why Greedy Works Here
Greedy-Choice Property
We claim that if we always choose the activity that finishes the earliest (among the remaining ones), we will arrive at an optimal solution. This is the greedy choice. It works because:
- By finishing early, we leave the most room for future activities.
- Choosing an activity that finishes late may block multiple future opportunities.
Optimal Substructure
Once we pick an activity (say, the one that ends earliest), the problem reduces to a smaller subproblem: selecting the maximum number of activities that start after the one we just chose. If we solve this subproblem optimally and combine it with our first greedy choice, we get an optimal solution for the entire problem.
Example
Consider the following list of activities:
| Activity | Start | End |
| --- | --- | --- |
| A1 | 1 | 4 |
| A2 | 3 | 5 |
| A3 | 0 | 6 |
| A4 | 5 | 7 |
| A5 | 8 | 9 |
| A6 | 5 | 9 |
| A7 | 6 | 10 |
Step 1: Sort Activities by End Time
We sort the activities based on their end times:
| Activity | Start | End |
| --- | --- | --- |
| A1 | 1 | 4 |
| A2 | 3 | 5 |
| A3 | 0 | 6 |
| A4 | 5 | 7 |
| A5 | 8 | 9 |
| A6 | 5 | 9 |
| A7 | 6 | 10 |
Step 2: Greedy Selection
We start by picking A1 (ends at 4).
Next, we look for the next activity whose start time is ≥ 4. A2 starts at 3, so skip. A3 starts at 0, skip. A4 starts at 5 → select A4.
Now A4 ends at 7. Next activity starting ≥ 7 is A5 (starts at 8) → select A5.
Final selected activities: A1, A4, A5
Maximum number of non-overlapping activities = 3
Pseudocode
# Let activities be a list of (start, end) tuples
activities.sort(key=lambda x: x[1])  # Sort by end time
last_end = -1
selected = []
for activity in activities:
    if activity[0] >= last_end:
        selected.append(activity)
        last_end = activity[1]
return selected
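Wrapped in a function and run on the table above, the same logic looks like this (a minimal sketch; select_activities is an illustrative name):

```python
def select_activities(activities):
    """Greedy: sort by end time, keep each activity that starts after the last selected one ends."""
    activities = sorted(activities, key=lambda x: x[1])  # sort by end time
    selected, last_end = [], float("-inf")
    for start, end in activities:
        if start >= last_end:
            selected.append((start, end))
            last_end = end
    return selected

# A1..A7 from the table: returns [(1, 4), (5, 7), (8, 9)], i.e. A1, A4, A5
print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (8, 9), (5, 9), (6, 10)]))
```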
Time Complexity:
- O(n log n) — for sorting the activities by end time.
- O(n) — to iterate and select activities.
Space Complexity:
- O(1) — if we just count them.
- O(k) — if we store the selected activities (where k is the number of selected activities).
The Activity Selection Problem is a great example of a greedy algorithm that works efficiently and correctly due to two key properties:
- Greedy-Choice Property: Always picking the earliest finishing activity leads to the optimal result.
- Optimal Substructure: The remaining problem after selecting one activity is a smaller version of the same problem.
Because of these properties, the greedy method provides an optimal and efficient solution without backtracking or complex logic.
3. Huffman Encoding — Greedy Algorithm in Action
Problem:
Given a list of characters and their corresponding frequencies (how often they appear), the task is to generate a binary prefix code for each character such that:
- No code is a prefix of another (prefix property).
- The total length of the encoded data is minimized.
This is known as constructing an optimal prefix code. It is commonly used in file compression techniques to reduce storage or transmission size.
Understanding with an Example:
Suppose we have the following characters and frequencies:
| Character | Frequency |
| --- | --- |
| A | 5 |
| B | 9 |
| C | 12 |
| D | 13 |
| E | 16 |
| F | 45 |
We want to assign a binary code to each character so that the total cost (frequency × code length) is minimized.
Greedy Strategy:
The Huffman Encoding algorithm uses a greedy approach. At every step, it:
- Selects the two characters (or combined nodes) with the smallest frequencies.
- Merges them into a new node with their combined frequency.
- Treats the new node as a parent, and repeats the process until one node (the root of the Huffman tree) remains.
// Pseudocode using a priority queue (min-heap)
while (more than one node in queue):
    node1 = extract min frequency node
    node2 = extract next min frequency node
    mergedNode = new node with freq = node1.freq + node2.freq
    mergedNode.left = node1, mergedNode.right = node2
    insert mergedNode back into queue
return the remaining node (root of the Huffman Tree)
Step-by-Step Tree Construction:
- Pick A(5) and B(9) → merge into nodeAB(14)
- Pick C(12) and D(13) → merge into nodeCD(25)
- Pick nodeAB(14) and E(16) → merge into nodeABE(30)
- Pick nodeCD(25) and nodeABE(30) → merge into nodeCDABE(55)
- Pick F(45) and nodeCDABE(55) → final merge into the root node (100)
At every step, the two smallest frequencies currently in the queue are merged.
This creates a binary tree where each left and right move from the root is interpreted as '0' and '1', giving each character a unique binary code.
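As a compact, runnable sketch of the same procedure, here is an illustrative Python version built on the standard heapq min-heap (the tuple-based tree representation and the function name huffman_codes are our own choices):

```python
import heapq

def huffman_codes(freqs):
    """Build prefix codes greedily: repeatedly merge the two lowest-frequency nodes."""
    # Heap entries: (frequency, tie_breaker, tree); a tree is a character (leaf)
    # or a (left, right) pair (internal node). The unique tie_breaker keeps
    # tuple comparison from ever reaching the tree itself.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # smallest frequency
        f2, _, right = heapq.heappop(heap)  # next smallest
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):       # internal node: recurse into children
            walk(node[0], prefix + "0")   # left edge = '0'
            walk(node[1], prefix + "1")   # right edge = '1'
        else:
            codes[node] = prefix          # leaf: record the code
    walk(heap[0][2], "")
    return codes

# Frequencies from the table; the most frequent character F gets the shortest code '0'
print(huffman_codes({"A": 5, "B": 9, "C": 12, "D": 13, "E": 16, "F": 45}))
```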
Why It Works — Greedy-Choice Property:
At each step, the greedy algorithm makes the best local decision: merging the two lowest frequency nodes. This ensures that the characters with the least frequency are deeper in the tree, and thus get longer codes — but since they appear less frequently, their longer code contributes less to the total cost. This choice does not need to be revised later, proving the greedy-choice property.
Why It Works — Optimal Substructure:
The problem can be broken into smaller subproblems. Each time we merge two nodes, we reduce the size of the problem. If each subproblem (smaller tree) is solved optimally, combining them results in an optimal larger tree. This confirms the optimal substructure property — the final optimal tree is made from optimal subtrees.
Time Complexity:
O(n log n), where n is the number of unique characters. The priority queue (min-heap) is used to repeatedly extract the lowest frequencies, which takes O(log n) time per operation.
Real-World Use:
Huffman Encoding is widely used in file compression formats such as ZIP, JPEG, and MP3. It helps minimize file sizes by assigning shorter codes to frequent characters and longer codes to rare ones, optimizing the overall space.
Conclusion:
Huffman Encoding is a classic example of applying a greedy algorithm effectively. It demonstrates both greedy-choice property and optimal substructure, making it not only efficient but also provably optimal for the problem it solves.
4. Fractional Knapsack Problem — Explained for Beginners
Problem Statement:
You are given a set of items. Each item has a value (profit) and a weight. You also have a knapsack (bag) that can hold a maximum weight W.
Your goal is to maximize the total value you can carry in the knapsack. However, there's a twist — you are allowed to take fractions of items. So if an item weighs 10 units and you only have 5 units of capacity left, you can take half of it and get half the value.
Greedy Strategy:
To get the most value for the least weight, we follow a greedy approach. For each item, we compute its value-to-weight ratio. This tells us how much value we get per unit of weight. We then:
- Sort all items in descending order of their value-to-weight ratio.
- Pick the item with the highest ratio first and take as much of it as we can.
- If we can’t take the whole item, we take as much as the remaining capacity allows (a fraction).
- Repeat until the knapsack is full.
Example:
Suppose you have the following items and knapsack capacity W = 50:
| Item | Value | Weight |
| --- | --- | --- |
| 1 | 60 | 10 |
| 2 | 100 | 20 |
| 3 | 120 | 30 |
Step 1: Calculate value/weight ratio
- Item 1: 60/10 = 6
- Item 2: 100/20 = 5
- Item 3: 120/30 = 4
Step 2: Sort items by value/weight ratio: Item 1 (6), Item 2 (5), Item 3 (4)
Step 3: Start filling the knapsack (capacity = 50)
- Take all of Item 1 (weight = 10, value = 60). Remaining capacity = 40.
- Take all of Item 2 (weight = 20, value = 100). Remaining capacity = 20.
- Take 2/3 of Item 3 (weight = 20, value = (2/3)*120 = 80). Remaining capacity = 0.
Total value = 60 + 100 + 80 = 240
How This Satisfies Greedy-Choice Property:
At every step, we picked the item that gave us the most value per unit weight — the locally optimal (greedy) choice. Because the problem allows breaking items and the value scales linearly with weight, this strategy always leads to the optimal (best) global solution. We never had to go back and change earlier decisions.
How This Satisfies Optimal Substructure:
Suppose we’ve filled part of the knapsack optimally (say the first 30 units). The remaining 20 units of capacity is now a smaller subproblem. Solving it using the same strategy and combining both gives us the best answer for the entire problem. So, solving smaller parts optimally builds the global solution — this is optimal substructure.
Pseudocode:
function fractionalKnapsack(items, W):
    sort items by value/weight in descending order
    totalValue = 0
    for item in items:
        if item.weight <= W:
            totalValue += item.value
            W -= item.weight
        else:
            totalValue += item.value * (W / item.weight)
            break  // Knapsack is full
    return totalValue
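And a runnable Python version of the same pseudocode, checked against the example above (fractional_knapsack is an illustrative name):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight). Take whole items by best ratio, then a fraction."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)  # by value/weight
    total = 0.0
    for value, weight in items:
        if weight <= capacity:          # the whole item fits
            total += value
            capacity -= weight
        else:                           # take only the fraction that fits
            total += value * (capacity / weight)
            break
    return total

# Items from the table with W = 50: 60 + 100 + (2/3)*120 = 240.0
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```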
Time and Space Complexity:
- Time Complexity: O(n log n) — due to sorting the items by ratio.
- Space Complexity: O(1) — if sorting is done in-place, and no extra data structures are used.
Conclusion: Fractional Knapsack is a classic example of a problem where the greedy algorithm guarantees the optimal solution. This is because each choice is locally best and contributes perfectly to the global solution due to the linear scaling of value with weight.
When to Use Greedy Algorithms
- Problem exhibits greedy-choice property.
- Problem has optimal substructure.
- You need a fast, often near-optimal solution.
Advantages and Disadvantages of Greedy Technique
Advantages
- Fast and Efficient: Greedy algorithms are generally faster than dynamic programming or brute force.
- Simpler to Implement: They often require fewer lines of code and simpler logic.
- Scalable: Performs well even for large datasets if conditions are met.
Disadvantages
- Not Always Optimal: May fail to produce correct results if greedy-choice property is not satisfied.
- Problem-Specific: Requires deep understanding of problem structure.
- No Backtracking: Once a decision is made, it’s not revised.
Conclusion
Greedy algorithms are powerful tools when used in the right problems. They help build fast and efficient solutions using local decisions. However, it’s essential to validate that the problem meets the greedy criteria. Otherwise, the solution might be sub-optimal or incorrect. Use them wisely after testing against edge cases or comparing with optimal solutions.