Dynamic Programming Explained Simply: An Insane Fibonacci Series Optimization

ayman jaber
8 min read · Jul 8, 2021


Dynamic Programming is an extremely useful tool to have in a software developer’s toolkit. It is heavily used in computer networks, routing, graph problems, computer vision, artificial intelligence, and machine learning to name a few. Hopefully, in this article, I can give you a taste of why that is.

Let’s start with Dynamic Programming’s dirty secret, which annoys a lot of people once they discover it: the name is very misleading, because the technique has nothing to do with being “dynamic”. The name was chosen largely because it sounded impressive [Dynamic programming — Wikipedia].

A more appropriate name would be Memoization. Why? The example below will make that clear.

In essence, DP is the art of applying a divide-and-conquer approach to complex problems: dividing them into simpler problems that are easier to solve, and breaking those down into even simpler problems that are trivial to solve. The crucial extra ingredient is that these subproblems overlap, so their answers can be stored and reused. DP is often paired with recursive algorithms, since those naturally break a problem down into smaller instances of itself.

As mentioned above, DP shows up throughout real-world software, so understanding it will give you an edge on the occasions when you need it.

My aim in this series of articles is to walk you through several common use cases of Dynamic Programming, while keeping the solutions clear and simple to understand, with diagrams and visual aids. As you will see below, DP is capable of achieving some insane improvements on algorithms. Hopefully, by the end, you will have gained a liking for DP solutions.

Fibonacci Numbers Algorithms:

The Fibonacci Series is an interesting mathematical series, which keeps cropping up time and time again, both in mathematics and in the natural world. It follows this pattern: 1, 1, 2, 3, 5, 8, 13, 21, …

Each number is the sum of the two numbers before it, with the exception of the first two numbers, which are both 1.

A common algorithm question for beginners is to find the nth number of the Fibonacci Series. A common way to solve this problem is with a simple recursive function (here we are using Ruby, but the code would look very similar in most programming languages):
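A minimal version of such a function might look like this (`srf` here stands for Simple Recursive Function, the name used later in this article):

```ruby
# SRF: Simple Recursive Function for the nth Fibonacci number.
def srf(n)
  return 1 if n < 3            # the first two numbers are both 1

  srf(n - 1) + srf(n - 2)      # each number is the sum of the two before it
end
```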

The way this algorithm works is that if we are looking for one of the first 2 numbers, the answer is 1. Otherwise, the algorithm calls itself twice and adds the two results together (for n-1 and n-2). If we were to call the algorithm with the values 4, 7, 12, 20, and 50, this is what we would get:
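Something like the following driver loop reproduces that experiment (the last value, 50, is left commented out here, because that call takes so long that the program appears frozen):

```ruby
# SRF: Simple Recursive Function for the nth Fibonacci number.
def srf(n)
  return 1 if n < 3

  srf(n - 1) + srf(n - 2)
end

# srf(50) would appear to hang, so it is commented out.
[4, 7, 12, 20].each { |n| puts "srf(#{n}) = #{srf(n)}" }
# srf(4) = 3
# srf(7) = 13
# srf(12) = 144
# srf(20) = 6765
```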

As we can see, the correct answer for the first 4 values is given almost instantaneously. For the final call, however, the program not only lags — it takes so long that it seems frozen. I encourage you to try this right now; it won’t take you more than 2 minutes to set everything up and test it for yourself. Seeing the cursor stuck, still processing, felt surreal the first time I saw it.

Let’s take a minute to understand why this is happening. As I stated above, each Fibonacci function calls itself twice. Calling SRF(7) calls the function to get the values of the 6th and the 5th Fibonacci elements. In turn, those call the function to get the 5th and 4th elements, and the 4th and 3rd elements, and so forth, as shown in the image below.

This continues until we get to either the first or the second values. Calling SRF with 1 or 2 returns 1, as these are the base/special cases, where we already know the answer to be 1.

Using this returned data, the functions that called them can calculate their own answers, and return this answer to the functions that called them…

….this process continues until we get to the original calling function…

…The final answer is returned to the original calling function. In our example of 7, the answer is 13. Pretty simple and straightforward: at each step we call the same function for the two numbers before it in order to get the value for that step.

However, this causes us a tremendous problem. Just to get the 7th number we had to make 25 function calls. The number of calls grows exponentially with n — it is bounded above by 2^n (the exact growth rate is the golden ratio, about 1.618, raised to the power n). For example, if we wanted to find the 100th number in the series, the upper bound on the number of steps would be 2^100 ≈ 1.268 × 10³⁰. This is way more than the number of stars in the entire observable universe, or the number of grains of sand on every beach on Earth. These numbers are quite insane, so it’s no wonder that your PC would freeze if you attempted to calculate them using a simple recursive function.
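To see the explosion concretely, we can count the calls instead of computing the values (`srf_calls` is a hypothetical helper for illustration, not part of the original code):

```ruby
# Counts how many times the simple recursive function would be invoked for n.
def srf_calls(n)
  return 1 if n < 3                         # a base case costs one call

  1 + srf_calls(n - 1) + srf_calls(n - 2)   # this call plus its two children
end

puts srf_calls(7)    # 25 calls just to compute the 7th number
puts srf_calls(20)   # 13529 calls for the 20th
```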

So how do we improve this, and get back to a sane, manageable number of computations? Well, take a close look at this:

We are essentially calling the same function with the same ’n’ value multiple times, and the result is the same for every one of those calls. What if we could somehow store these values somewhere, and just reuse them when we find ourselves calling the same function again? Well, guess what?

That’s what Memoization is all about. Memoization comes from the Latin ‘memorandum’, meaning ‘to be remembered’. So a key element of Dynamic Programming is to find simpler solutions, which you can store in memory, and re-use in finding the answer to more complex questions.

Let’s look at how we can now use DP to solve the Fibonacci sequence, by storing the value of the functions that we already found inside hash maps.
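A sketch of that memoized version, matching the description that follows (`dprf` stands for Dynamic Programming Recursive Function, the name used below):

```ruby
# DPRF: Dynamic Programming Recursive Function.
# 'memo' caches each result under the key n, so every value is computed once.
def dprf(n, memo = {})
  return memo[n] if memo.key?(n)   # reuse a previously computed answer
  return memo[n] = 1 if n < 3      # base cases: the first two numbers are 1

  memo[n] = dprf(n - 1, memo) + dprf(n - 2, memo)
end

[4, 7, 12, 20, 50].each { |n| puts "dprf(#{n}) = #{dprf(n)}" }
# All five print instantly, including dprf(50) = 12586269025
```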

The results below are printed out by the console almost instantaneously.

Let’s break down the above code.

Here we have changed the function definition to include a ‘memo’ hash parameter, which is set to an empty hash initially. This hash is the memory part, which will enable us to keep track of the functions that we have already solved.

The memo check is where the magic happens: before anything else, we look up whether we have already stored the result of calling the DPRF function with this value of ’n’. If we do have it in ‘memo’, we return it immediately. Otherwise, we return ‘1’ if the value of ’n’ is either 1 or 2 (so less than 3).

The last part of our new recursive function is very similar to the previous one. Essentially, we recursively call the same function twice, once with ‘n-1’ and once with ‘n-2’. However, this time we pass in the ‘memo’ hash, which stores all of the values of ’n’ that we previously found. We then store the sum in the ‘memo’ hash under the key ’n’, and finally return the value that we just found.

The key here is that since the ‘memo’ hash is being passed by reference, it is shared by all of the recursive calls of the DPRF function.

This function now runs in O(n) time. Let’s walk through a dry run to understand why that is.

To start off we call DPRF(7), which has an empty ‘memo’ object. DPRF(7) calls DPRF(6) first. Since the ‘memo’ object passed to DPRF(6) is empty, DPRF(6) calls DPRF(5), which in turn calls DPRF(4) -> DPRF(3) -> DPRF(2)…..

…..DPRF(2) returns the value 1, which is stored in the ‘memo’ hash. DPRF(3) then calls DPRF(1), which returns 1 and is also stored in ‘memo’. DPRF(3) then calculates its own value and returns it to DPRF(4)….

So far, the steps have been pretty much the same as with the Simple Recursive Function, but this is where the magic happens. DPRF(4) then calls DPRF(2), whose value we have already stored in the ‘memo’ hash, so we just return that value.

This same step is repeated for DPRF(5), which calls DPRF(3), which is a ‘memo’ key that we already have. So we don’t have to call DPRF(2) and DPRF(1), in order to get the value of DPRF(3).

The same steps are repeated for DPRF(6) and DPRF(7), which gives us the final answer of 13. Using this method we avoided having to go over all of the steps in red below:

From the image above we can also see that DPRF makes only 2n − 3 calls, which is O(n). If we were to call DPRF(100), that would be just 197 steps — not anywhere near the number of sand grains on even the smallest beach.
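We can confirm that 2n − 3 count by instrumenting the memoized function with a call counter (`dprf_counted` is a hypothetical variation for illustration):

```ruby
# Memoized Fibonacci that also counts its own invocations.
def dprf_counted(n, counter, memo = {})
  counter[:calls] += 1
  return memo[n] if memo.key?(n)
  return memo[n] = 1 if n < 3

  memo[n] = dprf_counted(n - 1, counter, memo) +
            dprf_counted(n - 2, counter, memo)
end

counter = { calls: 0 }
dprf_counted(7, counter)
puts counter[:calls]    # 11 calls, i.e. 2*7 - 3

counter = { calls: 0 }
dprf_counted(100, counter)
puts counter[:calls]    # 197 calls, i.e. 2*100 - 3
```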

I hope this article has helped you glimpse the power of Dynamic Programming, and why and how it is so useful. In the coming articles, I am going to show a more practical application of Dynamic Programming, and where it is used in real-life situations.
