Algorithm Efficiency and Big-O Notation

What is an algorithm?

A set of instructions to find the solution to a problem.

Note: we use code to implement algorithms, but algorithms don't have to be written in code.

What is efficiency?

Time efficiency helps us predict how long a particular algorithm could take to run. Space efficiency helps us predict how much memory a particular algorithm could use.

Why study algorithms and efficiency?

  • Understanding algorithms lets us reuse knowledge from the field.
  • Better-performing algorithms can enhance the user experience.
  • Better-performing algorithms can save companies money.
  • Algorithms and algorithm analysis are shared languages developers use to talk about programs (especially in INTERVIEWS!).

Big O notation

  1. Ask yourself: In the worst case scenario, how many calculations does your algorithm do?
  2. Phrase the answer in terms of the size of the input.
  3. Ignore constant multiples or smaller things added on (see the worked example below).
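
For example, applying these steps to a hypothetical function (not from these notes) that finds both the smallest and largest value in an array: in the worst case it does about 2n + 2 operations for an input of size n, and after dropping the constant multiple and the constant added on, that leaves O(n).

// Hypothetical example: find the smallest and largest value in an array.
// Worst case: roughly 2n + 2 operations for an input of size n.
function minAndMax(numArray){
    var min = numArray[0];                              // 1 operation
    var max = numArray[0];                              // 1 operation
    for (var i = 0; i < numArray.length; i++){          // about n comparisons
        if (numArray[i] < min){ min = numArray[i]; }
    }
    for (var j = 0; j < numArray.length; j++){          // about n more comparisons
        if (numArray[j] > max){ max = numArray[j]; }
    }
    // Dropping the constant multiple (2) and the constant added on (2): O(n).
    return [min, max];
}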

We will consider all mathematical operations constant time or O(1) operations: +, -, *, /, and %.

function add(a,b){
    return a+b;
}

Functions containing for loops that go through the whole input are generally implementing at least linear time or O(n) algorithms.

function addAll(numArray){
    var sum = 0;                    // start from 0 so the result is just the sum
    for (var i=0; i<numArray.length; i++){
        sum += numArray[i];
    }
    return sum;
}

Logarithm terms in Big O notation (like O(log(n))) usually come from recursive functions that divide the problem into smaller subproblems. We'll see more about recursive algorithms as we go.
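
As a preview, here is a minimal sketch (not from these notes) of one such algorithm: recursive binary search on a sorted array. Each call throws away half of the remaining elements, so the number of calls grows like log(n), making it O(log(n)).

// Minimal sketch: recursive binary search on a sorted array, O(log(n)).
// Each recursive call cuts the range still being searched in half.
function binarySearch(sortedArray, target, low, high){
    if (low === undefined){ low = 0; }
    if (high === undefined){ high = sortedArray.length - 1; }
    if (low > high){ return -1; }                       // target isn't in the array
    var mid = Math.floor((low + high) / 2);
    if (sortedArray[mid] === target){ return mid; }
    if (sortedArray[mid] < target){
        return binarySearch(sortedArray, target, mid + 1, high);   // search the right half
    }
    return binarySearch(sortedArray, target, low, mid - 1);        // search the left half
}

For example, binarySearch([1, 3, 5, 7, 9], 7) finds index 3 after only two calls.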

Almost everything else is composed of combinations of those. For example, if a for loop has more complex operations inside it, such as another loop over an input array, the time complexity is usually higher.

function addAllArrays(arrayOfArrays){
    var sum = 0;                    // start from 0 so the result is just the sum
    var oneArray;
    for (var i=0; i<arrayOfArrays.length; i++){
        oneArray = arrayOfArrays[i];
        for (var j=0; j<oneArray.length; j++){
            sum += oneArray[j];     // add every element of every inner array
        }
    }
    return sum;
}

Names given to common orders of complexity.

notation                 name
O(1)                     constant
O(log(n))                logarithmic
O(n^c), for 0 < c < 1    fractional power
O(n)                     linear
O(n log(n))              linearithmic
O(n^2)                   quadratic
O(n^c), for c > 1        polynomial
O(c^n), for c > 1        exponential

Every row listed in this table is more complex (takes more time or space) than the rows above it. That means that if an algorithm takes a polynomial amount of time for one set of operations and then needs a linear amount of work to finish up, we can just say it's a polynomial algorithm for the purposes of Big O notation: only the dominant term counts.
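
For example, a hypothetical function (not from these notes) that compares every pair of elements and then makes one more pass over the input does roughly n^2 + n operations; in Big O terms we keep only the dominant term and call it O(n^2).

// Hypothetical example: a quadratic step followed by a linear step.
// Roughly n^2 + n operations in total, which we report as just O(n^2).
function countPairsAndSum(numArray){
    var equalPairs = 0;
    var sum = 0;
    for (var i = 0; i < numArray.length; i++){              // n iterations...
        for (var j = i + 1; j < numArray.length; j++){      // ...each with up to n comparisons
            if (numArray[i] === numArray[j]){ equalPairs++; }
        }
    }
    for (var k = 0; k < numArray.length; k++){              // one more linear pass
        sum += numArray[k];
    }
    return [equalPairs, sum];
}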

Graph: how the number of operations (time) grows with the number of input elements for various orders of complexity (time complexity graph from daveperrett.com).

Big O is the most commonly used notation for comparing functions in computer science, but there are others:

notation           analogy
f(n) = O(g(n))     <=
f(n) = o(g(n))     <
f(n) = Θ(g(n))     =
f(n) = Ω(g(n))     >=
f(n) = ω(g(n))     >
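
For reference, a sketch of the standard textbook definition behind the <= analogy (not spelled out in these notes); Ω is defined the same way with >= in place of <=, and Θ means both hold:

f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 > 0 \ \text{such that} \ 0 \le f(n) \le c \cdot g(n) \ \text{for all} \ n \ge n_0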

Resources