Asymptotic Notations
We have discussed Asymptotic Analysis, and the Worst, Average and Best Cases of Algorithms. The main idea of asymptotic analysis is to measure the efficiency of algorithms in a way that doesn’t depend on machine-specific constants and doesn’t require algorithms to be implemented and their running times compared. Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis. The following three asymptotic notations are most commonly used to represent the time complexity of algorithms.
1) Θ Notation: The theta notation bounds a function from above and below, so it defines exact asymptotic behavior.
A simple way to get the Theta notation of an expression is to drop the low-order terms and ignore the leading constant. For example, consider the following expression.
3n^3 + 6n^2 + 6000 = Θ(n^3)
Dropping lower-order terms is always fine because there will always be an n0 after which Θ(n^3) has higher values than Θ(n^2), irrespective of the constants involved.
For a given function g(n), we denote by Θ(g(n)) the following set of functions.
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
The above definition means that if f(n) is theta of g(n), then the value of f(n) is always between c1*g(n) and c2*g(n) for large values of n (n >= n0). The definition of theta also requires that f(n) must be non-negative for values of n greater than or equal to n0.
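As a quick check of the definition against the earlier example (the constants below are one possible choice, picked for illustration), take f(n) = 3n^3 + 6n^2 + 6000 with c1 = 3, c2 = 4 and n0 = 25. Then
0 <= 3*n^3 <= 3n^3 + 6n^2 + 6000 <= 4*n^3 for all n >= 25,
because 6n^2 + 6000 <= n^3 once n >= 25, which confirms 3n^3 + 6n^2 + 6000 = Θ(n^3).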
2) Big O Notation: The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n^2). Note that O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases (a small code sketch follows the two statements below):
1. The worst case time complexity of Insertion Sort is Θ(n^2).
2. The best case time complexity of Insertion Sort is Θ(n).
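For reference, a minimal Insertion Sort sketch (illustrative only, not taken from the original post). The inner loop does no shifting when the input is already sorted, which gives the Θ(n) best case, and shifts every earlier element when the input is reverse sorted, which gives the Θ(n^2) worst case.

// Sort arr[0..n-1] in non-decreasing order using insertion sort.
void insertionSort(int arr[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = arr[i];
        int j = i - 1;

        // Shift elements of arr[0..i-1] that are greater than key one
        // position to the right. This loop runs 0 times on sorted input
        // and up to i times on reverse-sorted input.
        while (j >= 0 && arr[j] > key)
        {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}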
The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm. Many times we can easily find an upper bound simply by looking at the algorithm.
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 <= f(n) <= cg(n) for all n >= n0}
3) Ω Notation: Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound.
Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. As discussed in the previous post, the best-case performance of an algorithm is generally not useful, so the Omega notation is the least used of the three notations.
For a given function g(n), we denote by Ω(g(n)) the following set of functions.
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= cg(n) <= f(n) for all n >= n0}.
Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be written as Ω(n), but it is not very useful information about Insertion Sort, as we are generally interested in the worst case and sometimes in the average case.
Exercise:
Which of the following statements is/are valid?
1. Time Complexity of QuickSort is Θ(n^2)
2. Time Complexity of QuickSort is O(n^2)
3. For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
4. Time complexity of all computer algorithms can be written as Ω(1)
Important Links:
- There are two more notations called little o and little omega. Little o provides a strict upper bound (the equality condition is removed from Big O) and little omega provides a strict lower bound (the equality condition is removed from Big Omega); their formal definitions are sketched after this list.
- Analysis of Algorithms | Set 4 (Analysis of Loops)
- Recent Articles on analysis of algorithm.
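For completeness, here are the standard formal definitions of the two strict notations mentioned above, written in the same style as the definitions of Θ, O and Ω:
o(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 <= f(n) < c*g(n) for all n >= n0}
ω(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 <= c*g(n) < f(n) for all n >= n0}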
Worst, Average and Best Cases
In the previous post, we discussed how asymptotic analysis overcomes the problems of the naive way of analyzing algorithms. In this post, we will take Linear Search as an example and analyze it using asymptotic analysis.
We can have three cases to analyze an algorithm:
1) Worst Case
2) Average Case
3) Best Case
Let us consider the following implementation of Linear Search.
#include <stdio.h>

// Linearly search x in arr[]. If x is present then return the index,
// otherwise return -1
int search(int arr[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++)
    {
        if (arr[i] == x)
            return i;
    }
    return -1;
}

/* Driver program to test above functions*/
int main()
{
    int arr[] = {1, 10, 30, 15};
    int x = 30;
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d is present at index %d", x, search(arr, n, x));
    getchar();
    return 0;
}
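Output of the driver program above (x = 30 is stored at index 2 of arr[]):
30 is present at index 2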
Worst Case Analysis (Usually Done)
In the worst case analysis, we calculate the upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. For Linear Search, the worst case happens when the element to be searched (x in the above code) is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the worst case time complexity of Linear Search would be Θ(n).
Average Case Analysis (Sometimes done)
In average case analysis, we take all possible inputs and calculate the computing time for each of them. We sum all the calculated values and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases. For the linear search problem, let us assume that all cases are uniformly distributed (including the case of x not being present in the array). So we sum all the cases and divide the sum by (n+1). Following is the value of the average case time complexity.
Average Case Time = ( Σ from i = 1 to n+1 of θ(i) ) / (n+1)
                  = θ( (n+1)(n+2)/2 ) / (n+1)
                  = θ(n)
Best Case Analysis (Bogus)
In the best case analysis, we calculate the lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n). So the time complexity in the best case would be Θ(1).
Most of the time, we do worst-case analysis to analyze algorithms. In worst-case analysis, we guarantee an upper bound on the running time of an algorithm, which is good information.
The average case analysis is not easy to do in most practical cases and it is rarely done. In average case analysis, we must know (or predict) the mathematical distribution of all possible inputs.
The best case analysis is bogus. Guaranteeing a lower bound on an algorithm doesn’t provide any useful information: in the worst case, the algorithm may still take years to run.
For some algorithms, all the cases are asymptotically the same, i.e., there are no worst and best cases. For example, Merge Sort does Θ(nLogn) operations in all cases. Most other sorting algorithms have worst and best cases. For example, in the typical implementation of Quick Sort (where the pivot is chosen as a corner element), the worst case occurs when the input array is already sorted and the best case occurs when the pivot elements always divide the array into two halves. For Insertion Sort, the worst case occurs when the array is reverse sorted and the best case occurs when the array is sorted in the same order as the output.
Analysis of Algorithms (Asymptotic Analysis)
Why performance analysis?
There are many important things that should be taken care of, like user-friendliness, modularity, security, maintainability, etc. Why worry about performance?
The answer to this is simple: we can have all the above things only if we have performance. So performance is like a currency through which we can buy all the above things. Another reason for studying performance is – speed is fun!
Given two algorithms for a task, how do we find out which one is better?
One naive way of doing this is – implement both the algorithms and run the two programs on your computer for different inputs and see which one takes less time. There are many problems with this approach for the analysis of algorithms.
1) It might be possible that for some inputs, the first algorithm performs better than the second. And for some inputs second performs better.
2) It might also be possible that for some inputs, the first algorithm performs better on one machine and the second works better on other machines for some other inputs.
Asymptotic Analysis is the big idea that handles the above issues in analyzing algorithms. In Asymptotic Analysis, we evaluate the performance of an algorithm in terms of input size (we don’t measure the actual running time). We calculate how the time (or space) taken by an algorithm increases with the input size.
For example, let us consider the search problem (searching for a given item) in a sorted array. One way to search is Linear Search (the order of growth is linear) and another way is Binary Search (the order of growth is logarithmic). To understand how Asymptotic Analysis solves the above-mentioned problems, suppose we run Linear Search on a fast computer and Binary Search on a slow computer. For small values of the input array size n, the fast computer may take less time. But after a certain value of the input array size, Binary Search will definitely start taking less time than Linear Search, even though it is being run on a slow machine. The reason is that the order of growth of Binary Search with respect to input size is logarithmic, while the order of growth of Linear Search is linear. So the machine-dependent constants can always be ignored beyond certain values of the input size.
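A rough back-of-the-envelope illustration (the machine speeds below are invented for illustration only): suppose the fast machine executes 10^8 operations per second and runs Linear Search (about n operations), while the slow machine executes 10^6 operations per second and runs Binary Search (about log2(n) operations). For n = 10^9, Linear Search on the fast machine takes roughly 10^9 / 10^8 = 10 seconds, while Binary Search on the slow machine takes roughly 30 / 10^6 = 0.00003 seconds. The machine-speed advantage is wiped out by the difference in order of growth once n is large enough.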
Does Asymptotic Analysis always work?
Asymptotic Analysis is not perfect, but it is the best way available for analyzing algorithms. For example, say there are two sorting algorithms that take 1000nLogn and 2nLogn time respectively on a machine. Both of these algorithms are asymptotically the same (the order of growth is nLogn). So, with Asymptotic Analysis, we can’t judge which one is better, as we ignore constants in Asymptotic Analysis.
Also, in Asymptotic Analysis, we always talk about input sizes larger than a constant value. It might be possible that those large inputs are never given to your software, and an algorithm which is asymptotically slower always performs better for your particular situation. So you may end up choosing an algorithm that is asymptotically slower but faster for your software.
Algorithms
Topics:
- Geometric Algorithms
- Mathematical Algorithms
- Bit Algorithms
- Graph Algorithms
- Randomized Algorithms
- Branch and Bound
- Quizzes on Algorithms
- Misc
Analysis of Algorithms:
- Asymptotic Analysis
- Worst, Average and Best Cases
- Asymptotic Notations
- Little o and little omega notations
- Analysis of Loops
- Solving Recurrences
- Amortized Analysis
- What does ‘Space Complexity’ mean?
- Pseudo-polynomial Algorithms
- NP-Completeness Introduction
- Polynomial Time Approximation Scheme
- A Time Complexity Question
- Time Complexity of building a heap
- Time Complexity where loop variable is incremented by 1, 2, 3, 4 ..
- Time Complexity of Loop with Powers
- Performance of loops (A caching question)
Recent Articles on Analysis of Algorithms
Quiz on Analysis of Algorithms
Quiz on Recurrences
Searching and Sorting:
- Linear Search, Binary Search, Jump Search, Interpolation Search, Exponential Search, Ternary Search
- Selection Sort, Bubble Sort, Insertion Sort, Merge Sort, Heap Sort, QuickSort, Radix Sort, Counting Sort, Bucket Sort, ShellSort, Comb Sort, Pigeonhole Sort, Cycle Sort
- Interpolation search vs Binary search
- Stability in sorting algorithms
- When does the worst case of Quicksort occur?
- Lower bound for comparison based sorting algorithms
- Which sorting algorithm makes minimum number of memory writes?
- Find the Minimum length Unsorted Subarray, sorting which makes the complete array sorted
- Merge Sort for Linked Lists
- Sort a nearly sorted (or K sorted) array
- Iterative Quick Sort
- QuickSort on Singly Linked List
- QuickSort on Doubly Linked List
- Find k closest elements to a given value
- Sort n numbers in range from 0 to n^2 – 1 in linear time
- A Problem in Many Binary Search Implementations
- Search in an almost sorted array
- Sort an array in wave form
- Why is Binary Search preferred over Ternary Search?
- K’th Smallest/Largest Element in Unsorted Array
- K’th Smallest/Largest Element in Unsorted Array in Expected Linear Time
- K’th Smallest/Largest Element in Unsorted Array in Worst Case Linear Time
- Find the closest pair from two sorted arrays
- Find common elements in three sorted arrays
- Given a sorted array and a number x, find the pair in array whose sum is closest to x
- Count 1’s in a sorted binary array
- Binary Insertion Sort
- Insertion Sort for Singly Linked List
- Why Quick Sort preferred for Arrays and Merge Sort for Linked Lists?
- Merge Sort for Doubly Linked List
- Minimum adjacent swaps to move maximum and minimum to corners
Recent Articles on Searching
Recent Articles on Sorting
Quiz on Searching
Quiz on Sorting
Coding Practice on Searching
Coding Practice on Sorting
Greedy Algorithms:
- Activity Selection Problem
- Kruskal’s Minimum Spanning Tree Algorithm
- Huffman Coding
- Efficient Huffman Coding for Sorted Input
- Prim’s Minimum Spanning Tree Algorithm
- Prim’s MST for Adjacency List Representation
- Dijkstra’s Shortest Path Algorithm
- Dijkstra’s Algorithm for Adjacency List Representation
- Job Sequencing Problem
- Quiz on Greedy Algorithms
- Greedy Algorithm to find Minimum number of Coins
- K Centers Problem
- Minimum Number of Platforms Required for a Railway/Bus Station
Recent Articles on Greedy Algorithms
Quiz on Greedy Algorithms
Coding Practice on Greedy Algorithms
Dynamic Programming:
- Overlapping Subproblems Property
- Optimal Substructure Property
- Longest Increasing Subsequence
- Longest Common Subsequence
- Edit Distance
- Min Cost Path
- Coin Change
- Matrix Chain Multiplication
- Binomial Coefficient
- 0-1 Knapsack Problem
- Egg Dropping Puzzle
- Longest Palindromic Subsequence
- Cutting a Rod
- Maximum Sum Increasing Subsequence
- Longest Bitonic Subsequence
- Floyd Warshall Algorithm
- Palindrome Partitioning
- Partition problem
- Word Wrap Problem
- Maximum Length Chain of Pairs
- Variations of LIS
- Box Stacking Problem
- Program for Fibonacci numbers
- Minimum number of jumps to reach end
- Maximum size square sub-matrix with all 1s
- Ugly Numbers
- Largest Sum Contiguous Subarray
- Longest Palindromic Substring
- Bellman–Ford Algorithm for Shortest Paths
- Optimal Binary Search Tree
- Largest Independent Set Problem
- Subset Sum Problem
- Maximum sum rectangle in a 2D matrix
- Count number of binary strings without consecutive 1’s
- Boolean Parenthesization Problem
- Count ways to reach the n’th stair
- Minimum Cost Polygon Triangulation
- Mobile Numeric Keypad Problem
- Count of n digit numbers whose sum of digits equals to given sum
- Minimum Initial Points to Reach Destination
- Total number of non-decreasing numbers with n digits
- Find length of the longest consecutive path from a given starting character
- Tiling Problem
- Minimum number of squares whose sum equals to given number n
- Find minimum number of coins that make a given value
- Collect maximum points in a grid using two traversals
- Shortest Common Supersequence
- Compute sum of digits in all numbers from 1 to n
- Count possible ways to construct buildings
- Maximum profit by buying and selling a share at most twice
- How to print maximum number of A’s using given four keys
- Find the minimum cost to reach destination using a train
- Vertex Cover Problem | Set 2 (Dynamic Programming Solution for Tree)
- Count number of ways to reach a given score in a game
- Weighted Job Scheduling
- Longest Even Length Substring such that Sum of First and Second Half is same
Recent Articles on Dynamic Programming
Quiz on Dynamic Programming
Coding Practice on Dynamic Programing
Pattern Searching:
- Naive Pattern Searching
- KMP Algorithm
- Rabin-Karp Algorithm
- A Naive Pattern Searching Question
- Finite Automata
- Efficient Construction of Finite Automata
- Boyer Moore Algorithm – Bad Character Heuristic
- Suffix Array
- Anagram Substring Search (Or Search for all permutations)
- Pattern Searching using a Trie of all Suffixes
- Aho-Corasick Algorithm for Pattern Searching
- Kasai’s Algorithm for Construction of LCP array from Suffix Array
- Z algorithm (Linear time pattern searching Algorithm)
- Program to wish Women’s Day
Recent Articles on Pattern Searching
Other String Algorithms:
- Manacher’s Algorithm – Linear Time Longest Palindromic Substring – Part 1, Part 2, Part 3, Part 4
- Longest Even Length Substring such that Sum of First and Second Half is same
- Print all possible strings that can be made by placing spaces
Recent Articles on Strings
Coding practice on Strings
Backtracking:
- Print all permutations of a given string
- The Knight’s tour problem
- Rat in a Maze
- N Queen Problem
- Subset Sum
- m Coloring Problem
- Hamiltonian Cycle
- Sudoku
- Tug of War
- Solving Cryptarithmetic Puzzles
Recent Articles on Backtracking
Coding Practice on Backtracking
Divide and Conquer:
- Introduction
- Write your own pow(x, n) to calculate x^n
- Median of two sorted arrays
- Count Inversions
- Closest Pair of Points
- Strassen’s Matrix Multiplication
Recent Articles on Divide and Conquer
Quiz on Divide and Conquer
Coding practice on Divide and Conquer
Geometric Algorithms:
- Closest Pair of Points | O(nlogn) Implementation
- How to check if two given line segments intersect?
- How to check if a given point lies inside or outside a polygon?
- Convex Hull | Set 1 (Jarvis’s Algorithm or Wrapping)
- Convex Hull | Set 2 (Graham Scan)
- Given n line segments, find if any two segments intersect
- Check whether a given point lies inside a triangle or not
- How to check if given four points form a square
Recent Articles on Geometric Algorithms
Coding Practice on Geometric Algorithms
Mathematical Algorithms:
- Write an Efficient Method to Check if a Number is Multiple of 3
- Efficient way to multiply with 7
- Write a C program to print all permutations of a given string
- Lucky Numbers
- Write a program to add two numbers in base 14
- Babylonian method for square root
- Multiply two integers without using multiplication, division and bitwise operators, and no loops
- Print all combinations of points that can compose a given number
- Write you own Power without using multiplication(*) and division(/) operators
- Program for Fibonacci numbers
- Average of a stream of numbers
- Count numbers that don’t contain 3
- MagicSquare
- Sieve of Eratosthenes
- Number which has the maximum number of distinct prime factors in the range M to N
- Find day of the week for a given date
- DFA based division
- Generate integer from 1 to 7 with equal probability
- Given a number, find the next smallest palindrome
- Make a fair coin from a biased coin
- Check divisibility by 7
- Find the largest multiple of 3
- Lexicographic rank of a string
- Print all permutations in sorted (lexicographic) order
- Shuffle a given array
- Space and time efficient Binomial Coefficient
- Reservoir Sampling
- Pascal’s Triangle
- Select a random number from stream, with O(1) space
- Find the largest multiple of 2, 3 and 5
- Efficient program to calculate e^x
- Measure one litre using two vessels and infinite water supply
- Efficient program to print all prime factors of a given number
- Print all possible combinations of r elements in a given array of size n
- Random number generator in arbitrary probability distribution fashion
- How to check if a given number is Fibonacci number?
- Russian Peasant Multiplication
- Count all possible groups of size 2 or 3 that have sum as multiple of 3
- Tower of Hanoi
- Horner’s Method for Polynomial Evaluation
- Count trailing zeroes in factorial of a number
- Program for nth Catalan Number
- Generate one of 3 numbers according to given probabilities
- Find Excel column name from a given column number
- Find next greater number with same set of digits
- Count Possible Decodings of a given Digit Sequence
- Calculate the angle between hour hand and minute hand
- Count number of binary strings without consecutive 1’s
- Find the smallest number whose digits multiply to a given number n
- Draw a circle without floating point arithmetic
- How to check if an instance of 8 puzzle is solvable?
- Birthday Paradox
- Multiply two polynomials
- Count Distinct Non-Negative Integer Pairs (x, y) that Satisfy the Inequality x*x + y*y < n
- Count ways to reach the n’th stair
- Replace all ‘0’ with ‘5’ in an input Integer
- Program to add two polynomials
- Print first k digits of 1/n where n is a positive integer
- Given a number as a string, find the number of contiguous subsequences which recursively add up to 9
- Program for Bisection Method
- Program for Method Of False Position
- Program for Newton Raphson Method
Recent Articles on Mathematical Algorithms
Coding Practice on Mathematical Algorithms
Bit Algorithms:
- Find the element that appears once
- Detect opposite signs
- Set bits in all numbers from 1 to n
- Swap bits
- Add two numbers
- Smallest of three
- A Boolean Array Puzzle
- Set bits in an (big) array
- Next higher number with same number of set bits
- Optimization Technique (Modulus)
- Add 1 to a number
- Multiply with 3.5
- Turn off the rightmost set bit
- Check for Power of 4
- Absolute value (abs) without branching
- Modulus division by a power-of-2-number
- Minimum or Maximum of two integers
- Rotate bits
- Find the two non-repeating elements in an array
- Number Occurring Odd Number of Times
- Check for Integer Overflow
- Little and Big Endian
- Reverse Bits of a Number
- Count set bits in an integer
- Number of bits to be flipped to convert A to B
- Next Power of 2
- Check if a Number is Multiple of 3
- Find parity
- Multiply with 7
- Find whether a no is power of two
- Position of rightmost set bit
- Binary representation of a given number
- Swap all odd and even bits
- Find position of the only set bit
- Karatsuba algorithm for fast multiplication
- How to swap two numbers without using a temporary variable?
- Check if a number is multiple of 9 using bitwise operators
- Swap two nibbles in a byte
- How to turn off a particular bit in a number?
- Check if binary representation of a number is palindrome
Recent Articles on Bit Algorithms
Quiz on Bit Algorithms
Coding Practice on Bit Algorithms
Graph Algorithms:
Introduction, DFS and BFS:
- Graph and its representations
- Breadth First Traversal for a Graph
- Depth First Traversal for a Graph
- Applications of Depth First Search
- Detect Cycle in a Directed Graph
- Detect Cycle in an Undirected Graph
- Detect cycle in an undirected graph
- Longest Path in a Directed Acyclic Graph
- Topological Sorting
- Check whether a given graph is Bipartite or not
- Snake and Ladder Problem
- Biconnected Components
- Check if a given graph is tree or not
Minimum Spanning Tree:
- Prim’s Minimum Spanning Tree (MST)
- Applications of Minimum Spanning Tree Problem
- Prim’s MST for Adjacency List Representation
- Kruskal’s Minimum Spanning Tree Algorithm
- Boruvka’s algorithm for Minimum Spanning Tree
Shortest Paths:
- Dijkstra’s shortest path algorithm
- Dijkstra’s Algorithm for Adjacency List Representation
- Bellman–Ford Algorithm
- Floyd Warshall Algorithm
- Johnson’s algorithm for All-pairs shortest paths
- Shortest Path in Directed Acyclic Graph
- Some interesting shortest path questions
- Shortest path with exactly k edges in a directed and weighted graph
Connectivity:
- Find if there is a path between two vertices in a directed graph
- Connectivity in a directed graph
- Articulation Points (or Cut Vertices) in a Graph
- Biconnected graph
- Bridges in a graph
- Eulerian path and circuit
- Fleury’s Algorithm for printing Eulerian Path or Circuit
- Strongly Connected Components
- Transitive closure of a graph
- Find the number of islands
- Count all possible walks from a source to a destination with exactly k edges
- Euler Circuit in a Directed Graph
- Biconnected Components
- Tarjan’s Algorithm to find Strongly Connected Components
Hard Problems:
- Graph Coloring (Introduction and Applications)
- Greedy Algorithm for Graph Coloring
- Travelling Salesman Problem (Naive and Dynamic Programming)
- Travelling Salesman Problem (Approximate using MST)
- Hamiltonian Cycle
- Vertex Cover Problem (Introduction and Approximate Algorithm)
- K Centers Problem (Greedy Approximate Algorithm)
Maximum Flow:
- Ford-Fulkerson Algorithm for Maximum Flow Problem
- Find maximum number of edge disjoint paths between two vertices
- Find minimum s-t cut in a flow network
- Maximum Bipartite Matching
- Channel Assignment Problem
Misc:
- Find if the strings can be chained to form a circle
- Given a sorted dictionary of an alien language, find order of characters
- Karger’s algorithm for Minimum Cut
- Karger’s algorithm for Minimum Cut | Set 2 (Analysis and Applications)
- Hopcroft–Karp Algorithm for Maximum Matching | Set 1 (Introduction)
- Hopcroft–Karp Algorithm for Maximum Matching | Set 2 (Implementation)
- Length of shortest chain to reach a target word
- Find same contacts in a list of contacts
All Algorithms on Graph
Quiz on Graph
Quiz on Graph Traversals
Quiz on Graph Shortest Paths
Quiz on Graph Minimum Spanning Tree
Coding Practice on Graph
Randomized Algorithms:
- Linearity of Expectation
- Expected Number of Trials until Success
- Randomized Algorithms | Set 0 (Mathematical Background)
- Randomized Algorithms | Set 1 (Introduction and Analysis)
- Randomized Algorithms | Set 2 (Classification and Applications)
- Randomized Algorithms | Set 3 (1/2 Approximate Median)
- Karger’s algorithm for Minimum Cut
- K’th Smallest/Largest Element in Unsorted Array | Set 2 (Expected Linear Time)
- Reservoir Sampling
- Shuffle a given array
- Select a Random Node from a Singly Linked List
Recent Articles on Randomized Algorithms
Branch and Bound:
- Branch and Bound | Set 1 (Introduction with 0/1 Knapsack)
- Branch and Bound | Set 2 (Implementation of 0/1 Knapsack)
- Branch and Bound | Set 3 (8 puzzle Problem)
- Branch and Bound | Set 5 (N Queen Problem)
- Branch And Bound | Set 6 (Traveling Salesman Problem)
Recent Articles on Branch and Bound
Quizzes on Algorithms:
- Analysis of Algorithms
- Sorting
- Divide and Conquer
- Greedy Algorithms
- Dynamic Programming
- Backtracking
- Misc
- NP Complete
- Searching
- Analysis of Algorithms (Recurrences)
- Recursion
- Bit Algorithms
- Graph Traversals
- Graph Shortest Paths
- Graph Minimum Spanning Tree
Misc:
- Commonly Asked Algorithm Interview Questions | Set 1
- Given a matrix of ‘O’ and ‘X’, find the largest subsquare surrounded by ‘X’
- Nuts & Bolts Problem (Lock & Key problem)
- Flood fill Algorithm – how to implement fill() in paint?
- Given n appointments, find all conflicting appointments
- Check a given sentence for a given set of simple grammar rules
- Find Index of 0 to be replaced with 1 to get longest continuous sequence of 1s in a binary array
- How to check if two given sets are disjoint?
- Minimum Number of Platforms Required for a Railway/Bus Station
- Length of the largest subarray with contiguous elements | Set 1
- Length of the largest subarray with contiguous elements | Set 2
- Print all increasing sequences of length k from first n natural numbers
- Given two strings, find if first string is a subsequence of second
- Snake and Ladder Problem
- Write a function that returns 2 for input 1 and returns 1 for 2
- Connect n ropes with minimum cost
- Find the number of valid parentheses expressions of given length
- Longest Monotonically Increasing Subsequence Size (N log N): Simple implementation
- Generate all binary permutations such that there are more 1’s than 0’s at every point in all permutations
- Lexicographically minimum string rotation
- Construct an array from its pair-sum array
- Program to evaluate simple expressions
- Check if characters of a given string can be rearranged to form a palindrome
- Print all pairs of anagrams in a given array of strings
Please see Data Structures and Advanced Data Structures for Graph, Binary Tree, BST and Linked List based algorithms.
We will be adding more categories and posts to this page soon.
You can create a new Algorithm topic and discuss it with other geeks using our portal PRACTICE. See recently added problems on Algorithms on PRACTICE.