Introduction:
Dynamic programming is a powerful problem-solving technique used in computer science and mathematics to solve problems that exhibit overlapping subproblems and optimal substructure. It efficiently solves a problem by breaking it down into smaller, overlapping subproblems, solving each subproblem only once, and storing the solutions to avoid redundant computations.
Key Concepts of Dynamic Programming:
Overlapping Subproblems:
Dynamic programming is particularly effective when a problem can be broken down into smaller, overlapping subproblems. These subproblems share common computations, and by solving them only once and storing the results, we can significantly improve the efficiency of our algorithm.
Let’s take the example of the Fibonacci sequence to illustrate the concept of overlapping subproblems. The Fibonacci sequence is defined as follows:
F(n) = F(n−1) + F(n−2)
with base cases:
F(0) = 0, F(1) = 1
Now, if we naively calculate F(n) using recursion, we encounter overlapping subproblems. Consider the following recursive implementation in Python:
```python
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)
```
When you call `fibonacci(5)`, the function makes multiple redundant calls to the same subproblems. For instance, to compute `fibonacci(5)`, it needs to compute `fibonacci(4)` and `fibonacci(3)`. But when it computes `fibonacci(4)`, it again needs to compute `fibonacci(3)` and `fibonacci(2)`. As you can see, the subproblem `fibonacci(3)` is solved multiple times.
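To see this blow-up concretely, here is a small instrumented sketch (the `call_counts` counter is an illustrative addition, not part of the original example) that tallies how many times each subproblem is solved by the naive recursion:

```python
from collections import Counter

call_counts = Counter()

def fibonacci(n):
    # Tally how many times each subproblem is solved
    call_counts[n] += 1
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

fibonacci(5)
# For n = 5 there are 15 calls in total; fibonacci(3) alone runs twice
# and fibonacci(2) three times.
print(dict(call_counts))
```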
Dynamic programming, in this case, can be applied by using memoization to store the results of previously computed Fibonacci numbers. This avoids redundant calculations and improves efficiency. Here’s an example in Python:
```python
def fibonacci_memoization(n, memo={}):
    # Note: the mutable default argument persists across calls,
    # so it acts as a shared cache.
    if n <= 1:
        return n
    elif n not in memo:
        memo[n] = fibonacci_memoization(n - 1, memo) + fibonacci_memoization(n - 2, memo)
    return memo[n]

# Example usage
result = fibonacci_memoization(5)
print(result)  # 5
```
Here’s the Fibonacci example using memoization in both JavaScript and PHP.
JS
```javascript
function fibonacciMemoization(n, memo = {}) {
  if (n <= 1) {
    return n;
  } else if (!(n in memo)) {
    memo[n] = fibonacciMemoization(n - 1, memo) + fibonacciMemoization(n - 2, memo);
  }
  return memo[n];
}

// Example usage
const result = fibonacciMemoization(5);
console.log(result); // 5
```
PHP
```php
function fibonacciMemoization($n, &$memo = []) {
    if ($n <= 1) {
        return $n;
    } elseif (!array_key_exists($n, $memo)) {
        $memo[$n] = fibonacciMemoization($n - 1, $memo) + fibonacciMemoization($n - 2, $memo);
    }
    return $memo[$n];
}

// Example usage
$result = fibonacciMemoization(5);
echo $result; // 5
```
Optimal Substructure:
The optimal solution to the original problem can be constructed from the optimal solutions of its subproblems. In other words, if we know the best way to solve the smaller subproblems, we can use that information to efficiently solve the larger problem.
To illustrate the concept of optimal substructure, let’s consider the example of the Longest Increasing Subsequence (LIS) problem. The problem statement is as follows:
Given an array of integers, find the length of the longest strictly increasing subsequence.
The optimal substructure property in this context means that the solution to the LIS problem for an array of size `n` can be constructed using the solutions to the LIS problem for smaller subarrays.
Here’s a dynamic programming solution for the LIS problem in Python:
```python
def lis(arr):
    n = len(arr)
    lis_lengths = [1] * n  # Initialize LIS lengths for each element

    for i in range(1, n):
        for j in range(0, i):
            if arr[i] > arr[j] and lis_lengths[i] < lis_lengths[j] + 1:
                lis_lengths[i] = lis_lengths[j] + 1

    return max(lis_lengths)

# Example usage
arr = [10, 22, 9, 33, 21, 50, 41, 60]
result = lis(arr)
print(result)  # 5
```
In this example, the `lis_lengths` array stores the length of the longest increasing subsequence ending at each index. The outer loop iterates through each element of the array, and the inner loop iterates over the elements before it. If `arr[i]` is greater than `arr[j]`, and extending the subsequence ending at `j` would yield a longer subsequence ending at `i`, we update `lis_lengths[i]`.
This dynamic programming solution has the optimal substructure property. The optimal solution for the entire array is constructed from the optimal solutions of its subarrays. The length of the longest increasing subsequence for each element is computed based on the lengths of the subproblem solutions.
In summary, optimal substructure in dynamic programming allows us to break down a problem into smaller subproblems, solve those subproblems optimally, and then use their solutions to construct the optimal solution for the original problem.
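The DP table above yields only the length. A common extension, sketched here with an illustrative `prev` array (not part of the original solution), records each element's predecessor so one optimal subsequence can be reconstructed from the same table:

```python
def lis_with_sequence(arr):
    # Same DP as before, but also record the predecessor of each index
    n = len(arr)
    lengths = [1] * n
    prev = [-1] * n  # prev[i] = index of the element before arr[i] in the LIS

    for i in range(1, n):
        for j in range(i):
            if arr[i] > arr[j] and lengths[i] < lengths[j] + 1:
                lengths[i] = lengths[j] + 1
                prev[i] = j

    # Walk back from the index where the longest subsequence ends
    end = max(range(n), key=lambda i: lengths[i])
    seq = []
    while end != -1:
        seq.append(arr[end])
        end = prev[end]
    return seq[::-1]

print(lis_with_sequence([10, 22, 9, 33, 21, 50, 41, 60]))  # [10, 22, 33, 50, 60]
```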
Here’s the Longest Increasing Subsequence (LIS) example implemented in both JavaScript and PHP:
JavaScript:
```javascript
function lis(arr) {
  const n = arr.length;
  const lisLengths = Array(n).fill(1);

  for (let i = 1; i < n; i++) {
    for (let j = 0; j < i; j++) {
      if (arr[i] > arr[j] && lisLengths[i] < lisLengths[j] + 1) {
        lisLengths[i] = lisLengths[j] + 1;
      }
    }
  }

  return Math.max(...lisLengths);
}

// Example usage
const arr = [10, 22, 9, 33, 21, 50, 41, 60];
const result = lis(arr);
console.log(result); // 5
```
PHP:
```php
function lis($arr) {
    $n = count($arr);
    $lisLengths = array_fill(0, $n, 1);

    for ($i = 1; $i < $n; $i++) {
        for ($j = 0; $j < $i; $j++) {
            if ($arr[$i] > $arr[$j] && $lisLengths[$i] < $lisLengths[$j] + 1) {
                $lisLengths[$i] = $lisLengths[$j] + 1;
            }
        }
    }

    return max($lisLengths);
}

// Example usage
$arr = [10, 22, 9, 33, 21, 50, 41, 60];
$result = lis($arr);
echo $result; // 5
```
Memoization
To avoid redundant calculations, dynamic programming often uses a technique called memoization: storing the results of expensive function calls and returning the cached result when the same inputs occur again. Memoization is particularly effective when a problem exhibits overlapping subproblems, meaning the same subproblem is solved multiple times in the course of solving the larger problem.
Let’s break down the key concepts of memoization:

Caching Results: As the algorithm proceeds, the results of expensive function calls are stored in a data structure, typically a dictionary or an associative array. The key is a representation of the function’s input parameters, and the value is the result of the function.

Checking the Cache: Before making a recursive call, the algorithm checks whether the result for the current set of parameters is already in the cache. If it is, the cached result is returned instead of being recalculated.

Recursive Function with Memoization: The recursive function is structured to check the cache before making recursive calls. If the result is in the cache, it is returned immediately; otherwise, the function proceeds with the computation and stores the result in the cache.
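In Python, this check-the-cache-then-compute pattern is built into the standard library: `functools.lru_cache` wraps a function and caches its results keyed by the arguments, so you get memoization without writing the cache by hand. A minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache, keyed by the arguments
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

# Without the cache this call would take exponentially many recursive calls;
# with it, each value of n is computed exactly once.
print(fib(50))  # 12586269025
```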
Here’s a simple example of memoization applied to the Fibonacci sequence in Python:
```python
def fibonacci_memo(n, memo={}):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    else:
        result = fibonacci_memo(n - 1, memo) + fibonacci_memo(n - 2, memo)
        memo[n] = result
        return result

# Example usage
result = fibonacci_memo(5)
print(result)  # 5
```
In this example, the `memo` dictionary is used to store the results of previously computed Fibonacci numbers. Before making a recursive call, the function checks whether the result for the current `n` is already in the `memo`. If it is, the cached result is returned. If not, the function proceeds with the recursive calls, calculates the result, and stores it in the `memo` before returning it.
Memoization significantly improves the efficiency of algorithms that exhibit overlapping subproblems by avoiding redundant computations, and it is a key concept in many dynamic programming solutions.
Here’s an example of memoization applied to the Fibonacci sequence in both JavaScript and PHP:
JavaScript:
```javascript
function fibonacciMemoization(n, memo = {}) {
  if (n in memo) {
    return memo[n];
  }
  if (n <= 1) {
    return n;
  } else {
    const result = fibonacciMemoization(n - 1, memo) + fibonacciMemoization(n - 2, memo);
    memo[n] = result;
    return result;
  }
}

// Example usage
const result = fibonacciMemoization(5);
console.log(result); // 5
```
PHP:
```php
function fibonacciMemoization($n, &$memo = []) {
    if (array_key_exists($n, $memo)) {
        return $memo[$n];
    }
    if ($n <= 1) {
        return $n;
    } else {
        $result = fibonacciMemoization($n - 1, $memo) + fibonacciMemoization($n - 2, $memo);
        $memo[$n] = $result;
        return $result;
    }
}

// Example usage
$result = fibonacciMemoization(5);
echo $result; // 5
```
Bottom-Up Approach
Dynamic programming can be implemented using either a top-down approach (recursion with memoization) or a bottom-up approach. In the bottom-up approach, also known as the iterative or tabulation approach, you solve the subproblems in a specific order, typically starting from the smallest subproblems and working your way up to the original problem. Instead of starting with the original problem and breaking it down into subproblems (as in the top-down approach), you build up solutions to the larger problem by solving progressively larger subproblems.
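Before moving to the knapsack example, here is the bottom-up idea applied to the familiar Fibonacci problem: a table is filled from the base cases upward, with no recursion at all. This is a minimal sketch for contrast with the memoized version above:

```python
def fibonacci_bottom_up(n):
    if n <= 1:
        return n
    # dp[i] holds F(i); fill the table from the smallest subproblem upward
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fibonacci_bottom_up(5))  # 5, same answer as the memoized version
```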
Let’s illustrate the bottomup approach for solving the 0/1 Knapsack problem using dynamic programming. In this problem, we are given a set of items, each with a weight and a value, and we need to determine the maximum value that can be obtained by selecting a subset of the items that fit into a knapsack of limited capacity.
Here’s the bottomup dynamic programming solution in Python:
```python
def knapsack_bottom_up(capacity, weights, values, n):
    # Initialize a 2D array to store the results
    dp = [[0 for _ in range(capacity + 1)] for _ in range(n + 1)]

    # Build the table in bottom-up fashion
    for i in range(1, n + 1):
        for w in range(1, capacity + 1):
            # If the current item's weight is more than the capacity, skip it
            if weights[i - 1] > w:
                dp[i][w] = dp[i - 1][w]
            else:
                # Choose the maximum value between including and excluding the current item
                dp[i][w] = max(values[i - 1] + dp[i - 1][w - weights[i - 1]], dp[i - 1][w])

    return dp[n][capacity]

# Example usage
capacity = 50
weights = [10, 20, 30]
values = [60, 100, 120]
num_items = len(weights)
result = knapsack_bottom_up(capacity, weights, values, num_items)
print(result)  # 220
```
In this example, the `dp` array is a 2D table where `dp[i][w]` represents the maximum value that can be obtained with the first `i` items and a knapsack capacity of `w`. The algorithm fills in this table iteratively from the bottom up.

The key logic is in the inner loop, where for each item the algorithm compares the value of including the current item in the knapsack (`values[i - 1] + dp[i - 1][w - weights[i - 1]]`) with the value of excluding it (`dp[i - 1][w]`), and keeps the larger of the two.

The final result is stored in `dp[num_items][capacity]`, representing the maximum value that can be obtained with the given capacity and items.
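Since each row of the table reads only from the previous row, the 2D table can be collapsed to a single row. This space optimization is a sketch of a standard refinement, not part of the original solution; iterating capacities downward ensures each item is counted at most once:

```python
def knapsack_1d(capacity, weights, values):
    # One row replaces the full 2D table: dp[w] is the best value for capacity w.
    dp = [0] * (capacity + 1)
    for weight, value in zip(weights, values):
        # Iterate w from high to low so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], value + dp[w - weight])
    return dp[capacity]

print(knapsack_1d(50, [10, 20, 30], [60, 100, 120]))  # 220
```

This reduces the memory cost from O(n × capacity) to O(capacity) while producing the same answer.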
Here’s the 0/1 Knapsack problem solved using the bottomup approach in both JavaScript and PHP:
JS
```javascript
function knapsackBottomUp(capacity, weights, values, n) {
  // Initialize a 2D array to store the results
  const dp = Array.from({ length: n + 1 }, () => Array(capacity + 1).fill(0));

  // Build the table in bottom-up fashion
  for (let i = 1; i <= n; i++) {
    for (let w = 1; w <= capacity; w++) {
      // If the current item's weight is more than the capacity, skip it
      if (weights[i - 1] > w) {
        dp[i][w] = dp[i - 1][w];
      } else {
        // Choose the maximum value between including and excluding the current item
        dp[i][w] = Math.max(values[i - 1] + dp[i - 1][w - weights[i - 1]], dp[i - 1][w]);
      }
    }
  }

  return dp[n][capacity];
}

// Example usage
const capacity = 50;
const weights = [10, 20, 30];
const values = [60, 100, 120];
const numItems = weights.length;
const result = knapsackBottomUp(capacity, weights, values, numItems);
console.log(result); // 220
```
PHP
```php
function knapsackBottomUp($capacity, $weights, $values, $n) {
    // Initialize a 2D array to store the results
    $dp = array_fill(0, $n + 1, array_fill(0, $capacity + 1, 0));

    // Build the table in bottom-up fashion
    for ($i = 1; $i <= $n; $i++) {
        for ($w = 1; $w <= $capacity; $w++) {
            // If the current item's weight is more than the capacity, skip it
            if ($weights[$i - 1] > $w) {
                $dp[$i][$w] = $dp[$i - 1][$w];
            } else {
                // Choose the maximum value between including and excluding the current item
                $dp[$i][$w] = max($values[$i - 1] + $dp[$i - 1][$w - $weights[$i - 1]], $dp[$i - 1][$w]);
            }
        }
    }

    return $dp[$n][$capacity];
}

// Example usage
$capacity = 50;
$weights = [10, 20, 30];
$values = [60, 100, 120];
$numItems = count($weights);
$result = knapsackBottomUp($capacity, $weights, $values, $numItems);
echo $result; // 220
```
Examples of Dynamic Programming Problems
Matrix Chain Multiplication:
Given a sequence of matrices, find the most efficient way to multiply them. Dynamic programming can be employed to minimize the number of scalar multiplications needed to compute the product.
The dynamic programming algorithm for matrix chain multiplication follows a bottom-up approach. Here's a step-by-step explanation:
Problem Statement: Given a sequence of matrices, the goal is to find the most efficient way to multiply them in order to minimize the number of scalar multiplications needed to compute the product.
Algorithm:

Initialization:
Create a table `dp` to store the minimum number of scalar multiplications needed for each subproblem. It's a 2D array with dimensions `[n][n]`, where `n` is the number of matrices. Set the cost of multiplying a single matrix (the diagonal elements) to 0:

```python
for i in range(n):
    dp[i][i] = 0
```
Fill the Table:
Use a nested loop to iterate over subproblems of different lengths. For each subproblem, iterate over all possible ways to split it into two parts, calculate the cost of each split, and update the table with the minimum cost:

```python
for length in range(2, n + 1):
    for i in range(n - length + 1):
        j = i + length - 1
        for k in range(i, j):
            cost = dp[i][k] + dp[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
            dp[i][j] = min(dp[i][j], cost)
```
The variables `i`, `j`, and `k` represent indices of matrices in the sequence. For each candidate split at index `k`, the cost of multiplying the two resulting subsequences is calculated.
Result:
The final result is stored in `dp[0][n - 1]`, which represents the minimum number of scalar multiplications needed to compute the product of the entire sequence of matrices.

```python
return dp[0][n - 1]
```
Python:
```python
def matrix_chain_multiplication(p):
    n = len(p) - 1  # Number of matrices
    dp = [[float('inf')] * n for _ in range(n)]

    # The cost is 0 when multiplying one matrix
    for i in range(n):
        dp[i][i] = 0

    # Fill the dp table in a bottom-up manner
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            for k in range(i, j):
                cost = dp[i][k] + dp[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                dp[i][j] = min(dp[i][j], cost)

    return dp[0][n - 1]

# Example usage
matrix_dimensions = [10, 20, 30, 40, 30]
result = matrix_chain_multiplication(matrix_dimensions)
print(result)  # 30000
```
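The table alone gives only the minimum cost. A natural extension (the `split` table and `parens` helper below are illustrative additions, not part of the original solution) also records where each optimal split occurs, so the optimal parenthesization can be reconstructed:

```python
def matrix_chain_order(p):
    # Same DP as above, but split[i][j] remembers the k that achieved
    # the minimum cost, so the parenthesization can be rebuilt.
    n = len(p) - 1
    dp = [[0] * n for _ in range(n)]
    split = [[0] * n for _ in range(n)]

    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            dp[i][j] = float('inf')
            for k in range(i, j):
                cost = dp[i][k] + dp[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                if cost < dp[i][j]:
                    dp[i][j] = cost
                    split[i][j] = k

    def parens(i, j):
        # Recursively print the chain, splitting at the recorded index
        if i == j:
            return f"A{i + 1}"
        k = split[i][j]
        return f"({parens(i, k)} x {parens(k + 1, j)})"

    return dp[0][n - 1], parens(0, n - 1)

cost, order = matrix_chain_order([10, 20, 30, 40, 30])
print(cost, order)  # 30000 (((A1 x A2) x A3) x A4)
```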
JavaScript:
```javascript
function matrixChainMultiplication(p) {
  const n = p.length - 1; // Number of matrices
  const dp = Array.from({ length: n }, () => Array(n).fill(Number.POSITIVE_INFINITY));

  // The cost is 0 when multiplying one matrix
  for (let i = 0; i < n; i++) {
    dp[i][i] = 0;
  }

  // Fill the dp table in a bottom-up manner
  for (let length = 2; length <= n; length++) {
    for (let i = 0; i < n - length + 1; i++) {
      const j = i + length - 1;
      for (let k = i; k < j; k++) {
        const cost = dp[i][k] + dp[k + 1][j] + p[i] * p[k + 1] * p[j + 1];
        dp[i][j] = Math.min(dp[i][j], cost);
      }
    }
  }

  return dp[0][n - 1];
}

// Example usage
const matrixDimensions = [10, 20, 30, 40, 30];
const result = matrixChainMultiplication(matrixDimensions);
console.log(result); // 30000
```
PHP:
```php
function matrixChainMultiplication($p) {
    $n = count($p) - 1; // Number of matrices
    $dp = array_fill(0, $n, array_fill(0, $n, PHP_INT_MAX));

    // The cost is 0 when multiplying one matrix
    for ($i = 0; $i < $n; $i++) {
        $dp[$i][$i] = 0;
    }

    // Fill the dp table in a bottom-up manner
    for ($length = 2; $length <= $n; $length++) {
        for ($i = 0; $i < $n - $length + 1; $i++) {
            $j = $i + $length - 1;
            for ($k = $i; $k < $j; $k++) {
                $cost = $dp[$i][$k] + $dp[$k + 1][$j] + $p[$i] * $p[$k + 1] * $p[$j + 1];
                $dp[$i][$j] = min($dp[$i][$j], $cost);
            }
        }
    }

    return $dp[0][$n - 1];
}

// Example usage
$matrixDimensions = [10, 20, 30, 40, 30];
$result = matrixChainMultiplication($matrixDimensions);
echo $result; // 30000
```
Shortest Path in a Graph:
Dynamic programming can be used to find the shortest path between two nodes in a graph. The Floyd-Warshall algorithm is a classic example that computes the shortest paths between all pairs of nodes in a weighted graph. It is particularly useful for dense graphs, and it handles negative edge weights as long as the graph contains no negative cycles.

Here's a high-level explanation of the Floyd-Warshall algorithm:

Floyd-Warshall Algorithm:
Problem Statement: Given a directed or undirected graph with weighted edges, find the shortest path between every pair of vertices.
Algorithm Steps:
Initialization:
Create a 2D array `dist` where `dist[i][j]` represents the shortest distance from vertex `i` to vertex `j`. Initialize `dist[i][j]` to the weight of the edge from `i` to `j` if there is an edge, and to infinity otherwise.

```
for each vertex i:
    for each vertex j:
        if i == j:
            dist[i][j] = 0
        else if there is an edge from i to j:
            dist[i][j] = weight of the edge from i to j
        else:
            dist[i][j] = infinity
```
Dynamic Programming Iteration:
For each intermediate vertex `k` (from the first vertex to the last), update the `dist` array by considering the possibility of going through vertex `k` to get a shorter path:

```
for each vertex k:
    for each vertex i:
        for each vertex j:
            dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
```

If `dist[i][k] + dist[k][j]` is smaller than the current `dist[i][j]`, it is updated.
Result:
The `dist` array now contains the shortest path distances between all pairs of vertices.
Example:
Consider a weighted graph with six vertices, 0 through 5. Its edge weights are given by the following adjacency matrix, where `inf` means there is no direct edge:
```
graph = [
    [0,   1,   4,   inf, inf, inf],
    [1,   0,   4,   2,   7,   inf],
    [4,   4,   0,   3,   5,   inf],
    [inf, 2,   3,   0,   4,   6],
    [inf, 7,   5,   4,   0,   7],
    [inf, inf, inf, 6,   7,   0]
]
```
Below are implementations of the Floyd-Warshall algorithm in both JavaScript and PHP.
JavaScript:
```javascript
function floydWarshall(graph) {
  const n = graph.length;
  // Copy the adjacency matrix so the input is left unchanged
  const dist = graph.map(row => [...row]);

  // Floyd-Warshall algorithm
  for (let k = 0; k < n; k++) {
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        if (dist[i][k] !== Infinity && dist[k][j] !== Infinity) {
          dist[i][j] = Math.min(dist[i][j], dist[i][k] + dist[k][j]);
        }
      }
    }
  }

  return dist;
}

// Example usage
const graph = [
  [0, 1, 4, Infinity, Infinity, Infinity],
  [1, 0, 4, 2, 7, Infinity],
  [4, 4, 0, 3, 5, Infinity],
  [Infinity, 2, 3, 0, 4, 6],
  [Infinity, 7, 5, 4, 0, 7],
  [Infinity, Infinity, Infinity, 6, 7, 0]
];
const result = floydWarshall(graph);
console.log(result);
```
PHP:
```php
function floydWarshall($graph) {
    $n = count($graph);
    $dist = $graph; // Arrays are copied by value in PHP

    // Floyd-Warshall algorithm
    for ($k = 0; $k < $n; $k++) {
        for ($i = 0; $i < $n; $i++) {
            for ($j = 0; $j < $n; $j++) {
                if ($dist[$i][$k] !== INF && $dist[$k][$j] !== INF) {
                    $dist[$i][$j] = min($dist[$i][$j], $dist[$i][$k] + $dist[$k][$j]);
                }
            }
        }
    }

    return $dist;
}

// Example usage
$graph = [
    [0, 1, 4, INF, INF, INF],
    [1, 0, 4, 2, 7, INF],
    [4, 4, 0, 3, 5, INF],
    [INF, 2, 3, 0, 4, 6],
    [INF, 7, 5, 4, 0, 7],
    [INF, INF, INF, 6, 7, 0]
];
$result = floydWarshall($graph);
print_r($result);
```
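For parity with the earlier sections, here is a Python sketch of the same algorithm on the same example graph, using `float('inf')` for missing edges:

```python
INF = float('inf')

def floyd_warshall(graph):
    n = len(graph)
    # Copy the adjacency matrix so the input is left untouched
    dist = [row[:] for row in graph]

    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Relax the i -> j distance through intermediate vertex k
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Example usage (same graph as the JavaScript and PHP versions)
graph = [
    [0,   1,   4,   INF, INF, INF],
    [1,   0,   4,   2,   7,   INF],
    [4,   4,   0,   3,   5,   INF],
    [INF, 2,   3,   0,   4,   6],
    [INF, 7,   5,   4,   0,   7],
    [INF, INF, INF, 6,   7,   0],
]
for row in floyd_warshall(graph):
    print(row)
```

Note that in Python `inf + inf` is simply `inf`, so no explicit infinity check is needed inside the loop.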