anitsh opened 4 years ago
A list of instructions for solving a problem
A set of mathematical instructions or rules that, especially if given to a computer, will help to calculate an answer to a problem
-Cambridge.org
Flowchart of an algorithm (Euclid's algorithm) for calculating the greatest common divisor (g.c.d.) of two numbers a and b in locations named A and B.
General process of designing an algorithm:
- High-level description (Brainstorm, Mindmap) “…prose to describe an algorithm, ignoring the implementation details. At this level, we do not need to mention how the machine manages its tape or head."
- Implementation description (Pseudocode, Flow chart) “…prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level, we do not give details of states or transition function."
- Formal description (Code)
Typical steps in the development of algorithms:
- Problem definition
- Development of a model
- Specification of the algorithm
- Designing an algorithm
- Checking the correctness of the algorithm
- Analysis of algorithm
- Implementation of algorithm
- Program testing
- Documentation preparation
This classification is neither exhaustive nor disjoint. The purpose is to highlight the various ways in which problems can be solved, so that we can apply similar techniques when solving new problems.
Brute-force algorithms generate every possible answer and select only the valid ones.
Greedy algorithms pick the “best” (locally optimal) next partial solution from the current partial solution at every step, and may not always produce an optimal solution.
Backtracking algorithms solve constraint satisfaction problems and find a single solution or all solutions by using a search tree to generate permutations of solutions, exploring only “valid” branches of the tree and eliminating invalid partial solutions along the way.
Branch-and-bound algorithms solve optimization problems and find the best solution, also by using a search tree, eliminating the branches that cannot lead to a better solution than the one currently known.
Divide-and-conquer algorithms divide a problem into multiple non-overlapping subproblems and combine their solutions to construct a solution to the original problem.
Dynamic programming algorithms divide a problem into multiple overlapping subproblems and build a solution (either bottom-up or top-down), calculating the solution to each subproblem only once.
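As a small illustration of the overlapping-subproblem idea (a hypothetical example, not from the original notes), the Fibonacci numbers can be computed naively with exponential recursion, or bottom-up with a table so that each subproblem is solved only once:

```java
// Dynamic programming sketch: Fibonacci numbers.
public class FibonacciDP {
    // Naive recursion recomputes the same subproblems many times: exponential time.
    static long fibNaive(int n) {
        return n < 2 ? n : fibNaive(n - 1) + fibNaive(n - 2);
    }

    // Bottom-up table: each overlapping subproblem is solved exactly once, O(n) time.
    static long fibDP(int n) {
        if (n < 2) return n;
        long[] table = new long[n + 1];
        table[1] = 1;
        for (int i = 2; i <= n; i++)
            table[i] = table[i - 1] + table[i - 2];
        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fibDP(50)); // 12586269025 (infeasible with fibNaive)
    }
}
```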
Algorithms can also be classified by whether we can determine in advance when they will complete execution, or how long they will take; for some algorithms it may not even be possible to know. Based on this we can classify algorithms as:
https://en.wikipedia.org/wiki/Deterministic_algorithm
Algorithms can be classified by the amount of time they need to complete compared to their input size:
Some problems may have multiple algorithms of differing complexity, while other problems might have no algorithms or no known efficient algorithms. There are also mappings from some problems to other problems. Owing to this, it was found to be more suitable to classify the problems themselves instead of the algorithms into equivalence classes based on the complexity of the best possible algorithms for them.
The objective while designing an algorithm is to reduce its complexity from exponential towards constant time.
In computer science, an online algorithm is one that can process its input piece-by-piece in a serial fashion, i.e., in the order that the input is fed to the algorithm, without having the entire input available from the start.
In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand. In operations research, the area in which online algorithms are developed is called online optimization.
As an example, consider the sorting algorithms selection sort and insertion sort: selection sort repeatedly selects the minimum element from the unsorted remainder and places it at the front, which requires access to the entire input; it is thus an offline algorithm. On the other hand, insertion sort considers one input element per iteration and produces a partial solution without considering future elements. Thus insertion sort is an online algorithm.
Note that the final result of an insertion sort is optimum, i.e., a correctly sorted list. For many problems, online algorithms cannot match the performance of offline algorithms. If the ratio between the performance of an online algorithm and an optimal offline algorithm is bounded, the online algorithm is called competitive.
Sorting is ordering a list of objects. It is one of the most common tasks a computer program performs, such as arranging numbers in descending or ascending order, or ordering elements alphabetically.
The opposite of sorting, rearranging a sequence of items in a random or meaningless order, is called shuffling.
In computer science, arranging in an ordered sequence is called "sorting". Sorting is a common operation in many applications, and efficient algorithms to perform it have been developed.
The most common uses of sorted sequences are:
Sorting algorithms can be categorized as simple sorts and efficient sorts. Simple sorts are fast on small amounts of data thanks to their low overhead, but they do not scale to large data. The two simplest sorting algorithms are insertion sort and selection sort. Efficient sorts are the more practical sorting algorithms with better average time complexity; the most common are merge sort, heapsort, and quicksort. These can be used on large lists and in complicated programs, but each has its own advantages and drawbacks: some require additional space or more iterations, resulting in more complex algorithms.
A sort is in-place if the algorithm does not allocate additional storage to hold temporary results: it sorts the data within the original array.
A sort is stable if the order of equal items is preserved.
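To make stability concrete (a hypothetical example, not from the original notes): `java.util.Arrays.sort` on object arrays is a stable merge sort, so items that compare equal keep their original relative order.

```java
import java.util.Arrays;
import java.util.Comparator;

// Stability demo: sorting by first letter only, so several strings
// compare equal; a stable sort preserves their original order.
public class StableSortDemo {
    public static void main(String[] args) {
        String[] words = {"bb", "aa", "ba", "ab"};
        // "bb"/"ba" compare equal, as do "aa"/"ab"
        Arrays.sort(words, Comparator.comparing((String w) -> w.charAt(0)));
        // Stable result: "aa" stays before "ab", "bb" stays before "ba"
        System.out.println(Arrays.toString(words)); // [aa, ab, bb, ba]
    }
}
```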
We can distinguish two types of sorting based on memory usage. If the number of objects is small enough to fit into main memory, the sort is called an internal sort. If the number of objects is so large that some of them must reside on external storage during the sort, it is called an external sort. The following are internal sorting algorithms:
We need an auxiliary array one element larger than the original unsorted array.
Exchange two adjacent elements if they are out of order. Repeat until the array is sorted. The algorithm works by comparing each item in the list with the item next to it, and swapping them if required. After each full pass, the largest unsorted element has bubbled to the top of the array. The algorithm repeats this process until it makes a pass all the way through the list without swapping any items.
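The pass-based description above can be sketched in Java like this (a minimal sketch; the method and variable names are my own):

```java
import java.util.Arrays;

// Bubble sort: compare adjacent items and swap them if out of order;
// stop after a full pass that makes no swaps.
public class BubbleSortDemo {
    static void bubbleSort(int[] ar) {
        int n = ar.length;
        boolean swapped = true;
        while (swapped) {
            swapped = false;
            for (int i = 1; i < n; i++) {
                if (ar[i - 1] > ar[i]) {       // adjacent pair out of order
                    int tmp = ar[i - 1];
                    ar[i - 1] = ar[i];
                    ar[i] = tmp;
                    swapped = true;
                }
            }
            n--;  // the largest unsorted element has bubbled to the end
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 8};
        bubbleSort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 4, 5, 8]
    }
}
```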
Remove its entries one at a time and then insert each of them into a sorted part (initially empty).
void insertionSort(int[] ar)
{
    for (int i = 1; i < ar.length; i++)
    {
        int value = ar[i];   // element to insert into the sorted prefix
        int j = i;
        // Shift larger elements of the sorted prefix one slot to the right
        while (j > 0 && ar[j - 1] > value)
        {
            ar[j] = ar[j - 1];
            j--;
        }
        ar[j] = value;       // insert at its correct position
    }
}
Shell sort is a generalization of insertion sort which is more useful for larger lists.
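A minimal sketch of Shell sort (names and the simple halving gap sequence are my own choices; other gap sequences perform better in practice): it runs gapped insertion sorts over progressively smaller gaps, so elements move long distances early and the final gap-1 pass is an ordinary, now cheap, insertion sort.

```java
import java.util.Arrays;

// Shell sort: insertion sort over progressively smaller gaps.
public class ShellSortDemo {
    static void shellSort(int[] ar) {
        for (int gap = ar.length / 2; gap > 0; gap /= 2) {
            // Gapped insertion sort: every gap-th element forms a subsequence
            for (int i = gap; i < ar.length; i++) {
                int value = ar[i];
                int j = i;
                while (j >= gap && ar[j - gap] > value) {
                    ar[j] = ar[j - gap];  // shift within the gapped subsequence
                    j -= gap;
                }
                ar[j] = value;
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 5, 6};
        shellSort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 5, 5, 6, 9]
    }
}
```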
Select the smallest unsorted item and then swap it with the item in the next position to be filled.
The selection sort works as follows: you look through the entire array for the smallest element, once you find it you swap it (the smallest element) with the first element of the array. Then you look for the smallest element in the remaining array (an array without the first element) and swap it with the second element. Then you look for the smallest element in the remaining array (an array without first and second elements) and swap it with the third element, and so on. Here is an example,
void selectionSort(int[] ar)
{
    for (int i = 0; i < ar.length - 1; i++)
    {
        // Find the index of the smallest element in ar[i..]
        int min = i;
        for (int j = i + 1; j < ar.length; j++)
            if (ar[j] < ar[min]) min = j;
        // Swap it into position i
        int temp = ar[i];
        ar[i] = ar[min];
        ar[min] = temp;
    }
}
Heapsort O(n log n): Build a max-heap from the array, then repeatedly swap the largest item (the heap's root) to the end of the unsorted region and restore the heap property.
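A minimal in-place heapsort sketch (names are my own): the array itself stores the heap, with the children of index i at 2i+1 and 2i+2.

```java
import java.util.Arrays;

// Heapsort: build a max-heap in the array, then repeatedly swap the
// root (largest element) to the end and restore the heap property.
public class HeapSortDemo {
    static void heapSort(int[] ar) {
        int n = ar.length;
        for (int i = n / 2 - 1; i >= 0; i--)    // build max-heap bottom-up
            siftDown(ar, i, n);
        for (int end = n - 1; end > 0; end--) { // extract the max repeatedly
            int tmp = ar[0]; ar[0] = ar[end]; ar[end] = tmp;
            siftDown(ar, 0, end);               // heap shrinks by one each time
        }
    }

    // Move ar[root] down until both children are smaller or it is a leaf
    static void siftDown(int[] ar, int root, int size) {
        while (2 * root + 1 < size) {
            int child = 2 * root + 1;
            if (child + 1 < size && ar[child + 1] > ar[child]) child++;
            if (ar[root] >= ar[child]) return;
            int tmp = ar[root]; ar[root] = ar[child]; ar[child] = tmp;
            root = child;
        }
    }

    public static void main(String[] args) {
        int[] a = {4, 10, 3, 5, 1};
        heapSort(a);
        System.out.println(Arrays.toString(a)); // [1, 3, 4, 5, 10]
    }
}
```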
Mergesort O(n log n):
Divide an array into two halves, sort the two halves (recursively), and then merge the results.
It involves the following three steps:
- Divide the array into two (or more) subarrays
- Sort each subarray (Conquer)
- Merge them into one (in a smart way!)
// Pass an array of integers to be sorted
public static void mergeSortClient(Comparable[] a)
{
    Comparable[] tmp = new Comparable[a.length];
    mergeSort(a, tmp, 0, a.length - 1);
}

private static void mergeSort(Comparable[] a, Comparable[] tmp, int left, int right)
{
    if (left < right)
    {
        int center = (left + right) / 2;
        mergeSort(a, tmp, left, center);        // sort left half
        mergeSort(a, tmp, center + 1, right);   // sort right half
        merge(a, tmp, left, center + 1, right); // merge the two halves
    }
}

private static void merge(Comparable[] a, Comparable[] tmp, int left, int right, int rightEnd)
{
    int leftEnd = right - 1;
    int k = left;
    int num = rightEnd - left + 1;

    while (left <= leftEnd && right <= rightEnd)
        if (a[left].compareTo(a[right]) <= 0)
            tmp[k++] = a[left++];
        else
            tmp[k++] = a[right++];

    while (left <= leftEnd)     // Copy rest of first half
        tmp[k++] = a[left++];

    while (right <= rightEnd)   // Copy rest of right half
        tmp[k++] = a[right++];

    // Copy tmp back
    for (int i = 0; i < num; i++, rightEnd--)
        a[rightEnd] = tmp[rightEnd];
}
Partition the array into two parts, sort the parts independently.
Quicksort is a divide-and-conquer method for sorting and it is one of the fastest algorithms for sorting. Quicksort is popular because it is not difficult to implement, works well for a variety of different kinds of input data, and is substantially faster than any other sorting method in typical applications. It is in-place (uses only a small auxiliary stack), requires time proportional to N log N on the average to sort N items, and has an extremely short inner loop.
Before doing a quicksort, it is recommended to shuffle the array first, to protect against worst-case (e.g. already sorted) input!
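A minimal sketch of the shuffle-then-partition idea (the linked Princeton code below is the full reference implementation; the names here, and the simpler Lomuto-style partition, are my own):

```java
import java.util.Arrays;
import java.util.Random;

// Quicksort sketch: shuffle to avoid worst-case input, then partition
// around a pivot and sort the two parts recursively.
public class QuickSortDemo {
    static void quickSort(int[] ar) {
        shuffle(ar);
        sort(ar, 0, ar.length - 1);
    }

    static void sort(int[] ar, int lo, int hi) {
        if (lo >= hi) return;
        int p = partition(ar, lo, hi);
        sort(ar, lo, p - 1);   // elements <= pivot
        sort(ar, p + 1, hi);   // elements > pivot
    }

    // Lomuto partition: place the pivot (last element) into its final
    // slot, with smaller-or-equal elements to its left.
    static int partition(int[] ar, int lo, int hi) {
        int pivot = ar[hi];
        int i = lo;
        for (int j = lo; j < hi; j++)
            if (ar[j] <= pivot) { swap(ar, i, j); i++; }
        swap(ar, i, hi);
        return i;
    }

    static void shuffle(int[] ar) {            // Fisher-Yates shuffle
        Random rnd = new Random();
        for (int i = ar.length - 1; i > 0; i--)
            swap(ar, i, rnd.nextInt(i + 1));
    }

    static void swap(int[] ar, int i, int j) {
        int t = ar[i]; ar[i] = ar[j]; ar[j] = t;
    }

    public static void main(String[] args) {
        int[] a = {3, 7, 1, 9, 2};
        quickSort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 7, 9]
    }
}
```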
Code: https://algs4.cs.princeton.edu/23quicksort/Quick.java.html
Resources:
Prerequisite #150
Resource
https://algs4.cs.princeton.edu/cheatsheet
https://www.dummies.com/programming/big-data/algorithms-dummies-cheat-sheet
https://en.wikipedia.org/wiki/Algorithm
https://en.wikipedia.org/wiki/Algorithmic_technique
https://en.wikipedia.org/wiki/List_of_algorithms
[ ] MIT OpenCourseWare MIT 6.006 Introduction to Algorithms, Fall 2011 Playlist
https://www.coursera.org/learn/algorithms-part1
https://www.coursera.org/learn/algorithms-part2
https://www.khanacademy.org/computing/computer-science/algorithms
https://www.hackerearth.com/practice/algorithms
HackerRank Playlist
https://www.cs.cmu.edu/~adamchik/15-121/lectures/Algorithmic%20Complexity/complexity.html
https://app.codility.com/programmers/lessons/3-time_complexity
https://www.cs.cmu.edu/~adamchik/15-121/lectures/Sorting%20Algorithms/sorting.html
https://docs.oracle.com/javase/tutorial/collections/algorithms/index.html
https://en.wikipedia.org/wiki/Category:Concurrent_algorithms
https://en.wikipedia.org/wiki/Category:Distributed_algorithms
https://en.wikipedia.org/wiki/Combinational_logic
https://en.wikipedia.org/wiki/Mathematical_logic
https://dictionary.cambridge.org/collocation/english/algorithm
[ ] https://yourbasic.org/algorithms
[ ] https://www.algorithm-archive.org
Source: https://en.wikipedia.org/wiki/Computational_number_theory
https://en.wikipedia.org/wiki/Category:Optimization_algorithms_and_methods
https://en.wikipedia.org/wiki/Optimization_problem
https://en.wikipedia.org/wiki/Linear_search_problem
https://en.wikipedia.org/wiki/Decision_problem
https://en.wikipedia.org/wiki/Word_problem_(mathematics)
https://en.wikipedia.org/wiki/Approximation_algorithm
https://en.wikipedia.org/wiki/Category:Problems_on_strings
https://en.wikipedia.org/wiki/Counting_problem_(complexity)
https://www.youtube.com/watch?v=uUtb0BaeosQ&list=PLJse9iV6ReqgcI4tec2jcyOZkaUKuGoHN
Bit Manipulation
Dynamic Programming
https://www.geeksforgeeks.org/solve-dynamic-programming-problem
[ ] https://www.quantamagazine.org/new-algorithm-breaks-speed-limit-for-solving-linear-equations-20210308/
Advanced Stuff #453 #454