The Self-Taught Computer Scientist. Cory Althoff
quadratic time: An algorithm runs in quadratic time when its performance is directly proportional to the square of the size of the problem.
cubic time: An algorithm runs in cubic time when its performance is directly proportional to the cube of the size of the problem.
polynomial time: An algorithm runs in polynomial time when its performance is proportional to n^a for some constant a; quadratic time (a = 2) and cubic time (a = 3) are both examples of polynomial time.
exponential time: An algorithm runs in exponential time when its performance is proportional to a constant raised to the size of the problem, for example, 2^n.
brute-force algorithm: A type of algorithm that tests every possible option.
best-case complexity: How an algorithm performs with ideal input.
worst-case complexity: How an algorithm performs in the worst possible scenario for it.
average-case complexity: How an algorithm performs on average.
space complexity: The amount of memory space an algorithm needs.
fixed space: The amount of memory a program requires regardless of the size of the input.
data structure space: The amount of memory a program requires to store the data set.
temporary space: The amount of memory an algorithm needs for intermediary processing, for example, if your algorithm needs to temporarily copy a list to transfer data.
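To make these kinds of space concrete, here is a short Python sketch (the function names are illustrative, not from the book): summing a list needs only fixed space for its accumulator, while returning a sorted copy needs temporary space the same size as the data set.

```python
def sum_list(numbers):
    # Fixed space: only the accumulator, no matter how long the list is.
    total = 0
    for n in numbers:
        total += n
    return total

def sorted_copy(numbers):
    # Temporary space: sorted() builds a new list the same size as
    # the input before returning it.
    return sorted(numbers)
```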
Challenge
1 Find a program you've written in the past. Go through it and write down the time complexities for the different algorithms in it.
2 Recursion
To understand recursion, one must first understand recursion.
Anonymous
An iterative algorithm solves problems by repeating steps over and over, typically using a loop. Most of the algorithms you've written in your programming journey so far are likely iterative algorithms. Recursion is a method of problem-solving where you solve smaller instances of the problem until you arrive at a solution. Recursive algorithms rely on functions that call themselves. Any problem you can solve with an iterative algorithm, you can also solve with a recursive one; however, sometimes, a recursive algorithm is a more elegant solution.
You write a recursive algorithm inside of a function or method that calls itself. The code inside the function changes the input and passes in a new, different input the next time the function calls itself. Because of this, the function must have a base case: a condition that ends a recursive algorithm to stop it from continuing forever. Each time the function calls itself, it moves closer to the base case. Eventually, the base case condition is satisfied, the problem is solved, and the function stops calling itself. An algorithm that follows these rules satisfies the three laws of recursion:
A recursive algorithm must have a base case.
A recursive algorithm must change its state and move toward the base case.
A recursive algorithm must call itself recursively.
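A minimal sketch of a function that satisfies all three laws (this countdown example is illustrative, not from the book):

```python
def countdown(n):
    """Return the list [n, n - 1, ..., 1]."""
    if n == 0:                     # law 1: a base case ends the recursion
        return []
    return [n] + countdown(n - 1)  # laws 2 and 3: it calls itself with a smaller n
```

Each call shrinks n by 1, so the function is guaranteed to reach the base case.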
To help you understand how a recursive algorithm works, let's take a look at finding the factorial of a number using both a recursive and iterative algorithm. The factorial of a number is the product of all positive integers less than or equal to the number. For example, the factorial of 5 is 5 × 4 × 3 × 2 × 1.
5! = 5 * 4 * 3 * 2 * 1
Here is an iterative algorithm that calculates the factorial of a number, n:

def factorial(n):
    the_product = 1
    while n > 0:
        the_product *= n
        n = n - 1
    return the_product
Your function, factorial, accepts the number, n, that you are using in your calculation.
def factorial(n):
Inside your function, you define a variable, the_product, and set it to 1. You use the_product to keep track of the product as you multiply n by the numbers preceding it, for example, 5 * 4 * 3 * 2 * 1.
Next, you use a while loop to iterate backward from n to 1 while keeping track of the product.
while n > 0:
    the_product *= n
    n = n - 1
At the end of your while loop, you return the_product, which contains the factorial of n.
return the_product
Here is how to write the same algorithm recursively:
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)
First, you define a function called factorial that accepts the number, n, as a parameter. Next comes your base case. Your function will call itself repeatedly until n is 0, at which point it will return 1 and will stop calling itself.
if n == 0:
    return 1
Whenever the base case is not satisfied, this line of code executes:
return n * factorial(n - 1)
As you can see, your code calls the factorial function, which is itself. If this is your first time seeing a recursive algorithm, this probably looks strange to you, and it might even look like this code could not possibly work. But I promise you it does work. In this case, your factorial function calls itself and returns the result. However, it does not call itself with the value of n; rather, it calls itself with the value of n - 1. Eventually, n will reach 0, which will satisfy your base case:
if n == 0:
    return 1
That is all the code you have to write for your recursive algorithm, which is only four lines of code:
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)
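As a quick sanity check (not part of the book's code), you can run both versions side by side and confirm they agree; the names factorial_iterative and factorial_recursive are used here only to keep the two definitions apart:

```python
def factorial_iterative(n):
    the_product = 1
    while n > 0:
        the_product *= n
        n = n - 1
    return the_product

def factorial_recursive(n):
    if n == 0:
        return 1
    return n * factorial_recursive(n - 1)

# Both algorithms produce the same results for every input.
for n in range(6):
    assert factorial_iterative(n) == factorial_recursive(n)

print(factorial_recursive(5))  # 120
```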
So, how does it work? Internally, each time your function hits a return statement, it puts it on a stack. A stack is a type of data structure you will learn more about in Part II. It is like a list in Python, but you remove items in the reverse of the order you added them. Let's say you call your recursive factorial function like this:
factorial(3)
Your variable, n, starts as 3. Your function tests your base case, which is False, so Python executes this line of code:
return n * factorial(n - 1)
Python does not know the result of n * factorial(n - 1) yet, so it puts it on the stack.
# Internal stack (do not try to run this code)
# n = 3
[return n * factorial(n - 1)]
Then, your function calls itself again after decrementing n by 1:
factorial(2)
Your function tests your base case again, which evaluates to False, so Python executes this line of code:
return n * factorial(n - 1)
Python does not know the result of n * factorial(n - 1) yet, so it puts it on the stack.
# Internal stack
# n = 3
# n = 2
[return n * factorial(n - 1),
 return n * factorial(n - 1)]
Once again, your function calls itself after decrementing n by 1:
factorial(1)
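If you would like to watch this stack grow and shrink yourself, here is an instrumented sketch (the depth parameter and the print calls are additions for illustration, not part of the book's code):

```python
def factorial(n, depth=0):
    """Compute n! while printing an indented trace of each call."""
    indent = "  " * depth
    print(f"{indent}factorial({n}) called")
    if n == 0:
        print(f"{indent}base case reached, returning 1")
        return 1
    # The multiplication must wait on the stack until the inner call returns.
    result = n * factorial(n - 1, depth + 1)
    print(f"{indent}returning {result}")
    return result

factorial(3)  # prints the trace and returns 6
```

The indentation grows as calls pile up on the stack and shrinks as each pending multiplication is resolved on the way back out.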