7.2. Optimization process

Note

Press the rocket symbol on the top right of the page to make the page interactive and play with the code!

Before the optimization process can be started, the optimization problem needs to be formulated. This requires distinguishing between the controlled variables (sometimes called the endogenous variables), the uncontrolled variables (sometimes called the exogenous variables), the objective(s) and the constraints.

By means of an optimization algorithm, the controlled variables are changed and the result is evaluated using the objective(s) and constraints. The uncontrolled variables link the controlled variables to the objective(s). This search process ends when the solution found can no longer be improved.
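In one common convention, with $x$ denoting the controlled variables and the uncontrolled variables entering as fixed parameters $p$, an optimization problem can be written in the general form (a maximization problem is obtained by negating $f$):

$$\min_{x} \; f(x; p) \quad \text{subject to} \quad g(x; p) \le 0$$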

The XYZ production problem

Consider XYZ Corporation, which builds two computer models: the Standard and the Deluxe. The Standard has a profit per unit of $300, and the Deluxe has a profit per unit of $500. The two models are produced from three components: the Standard computer tower (60 in stock), the Deluxe computer tower (50 in stock), and a hard drive (120 in stock). Each Standard computer requires one Standard tower and one hard drive; each Deluxe computer requires one Deluxe tower and two hard drives.

What combination of Standard and Deluxe computer towers will maximize XYZ’s profit from the components currently in stock?

For this problem the controlled variables are the numbers of Standard and Deluxe computers produced. The uncontrolled variables are the profits per unit. The objective is the total profit, which needs to be maximized. The constraints are the numbers of Standard and Deluxe computer towers in stock, as well as the number of hard drives in stock.
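Writing $x_1$ for the number of Standard computers and $x_2$ for the number of Deluxe computers, the problem can be stated mathematically as:

$$\max_{x_1, x_2} \; z = 300x_1 + 500x_2 \quad \text{subject to} \quad x_1 \le 60, \quad x_2 \le 50, \quad x_1 + 2x_2 \le 120, \quad x_1, x_2 \ge 0 \text{ and integer}$$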

Because of the simplicity of this optimization problem we are able to solve it graphically. Run the code below to obtain the graphical output.

import matplotlib.pyplot as plt
import numpy as np
from matplotlib.patches import Polygon
from scipy.optimize import LinearConstraint, milp

"""
In this example we will use the scipy.optimize.milp function to search for the optimal number of computers of each 
type the company should produce to maximize profit. Since this is a rather simple, linear problem, we can use a simple 
and fast solver like the one provided with scipy. Also, this algorithm can handle mixed integer problems, like the one
provided here.

For documentation on scipy.optimize.milp please see the website of scipy or use the help() function.
"""

# first, define the objective function. Since it is linear, we can just provide the coefficients with which x1 and x2
# are multiplied. Note the -1: we want to maximize, but milp is a minimization algorithm!
eq = -1 * np.array([300, 500])

# next, define the constraints. For this we first provide a matrix A with all the coefficients by which x1 and x2 are multiplied in each constraint.
A = np.array([[1, 0], [0, 1], [1, 2]])

# next, define the upper bounds of the constraints as a vector
b_u = np.array([60, 50, 120])

# finally, define the lower bounds of the constraints; in our case these are all 0
b_l = np.full_like(b_u, 0)

# we can now define the LinearConstraint
constraints = LinearConstraint(A, b_l, b_u)

# the integrality array tells the algorithm to treat the variables as integers (1 = integer, 0 = continuous).
integrality = np.ones_like(eq)

"""Run the optimization"""

result = milp(c=eq, constraints=constraints, integrality=integrality)

print(f'The optimal solution is to produce {round(result.x[0])} Standard computers and '
      f'{round(result.x[1])} Deluxe computers.')

print(f'This will result in a profit of ${round(-1 * result.fun, 2)}')

"""
The solution of this example can also be found graphically. This figure is created below. You are not required to 
understand the code below, so it is not annotated further.
"""

fig, ax = plt.subplots(figsize=(8, 6))
ax.grid()

# Draw constraint lines
ax.hlines(0, -1, 60)
ax.vlines(0, -1, 120)
ax.plot(np.linspace(-1, 80, 100) * 0 + 50, np.linspace(-1, 120, 100), color="blue")
ax.plot(np.linspace(-1, 60, 100), np.linspace(-1, 60, 100) * 0 + 60, color="blue")
ax.plot(np.linspace(-1, 61, 100), -2 / 1 * np.linspace(-1, 61, 100) + 120, color="blue")

# Draw the feasible region
feasible_set = Polygon(np.array([[0, 0],
                                 [0, 60],
                                 [30, 60],
                                 [50, 20],
                                 [50, 0]]),
                       color="lightgrey")
ax.add_patch(feasible_set)

# Draw iso-profit lines of the objective; 300*x1 + 500*x2 = z gives
# x1 = z/300 - (5/3)*x2, so the lines have slope -5/3 in this plot
ax.plot(np.linspace(-1, 61, 100), -5 / 3 * np.linspace(-1, 61, 100) + 110, color="orange")
ax.plot(np.linspace(-1, 13, 100), -5 / 3 * np.linspace(-1, 13, 100) + 20, color="orange")
ax.plot(np.linspace(-1, 31, 100), -5 / 3 * np.linspace(-1, 31, 100) + 50, color="orange")
ax.arrow(-2, 30, 0, 40, width=0.05, head_width=1, head_length=2, color="orange")
ax.text(52, 104, r"$x_2 \leq 50$", size=12)
ax.text(52, 64, r"$x_1 \leq 60$", size=12)
ax.text(10, 104, r"$x_1 + 2x_2 \leq 120$", size=12)
ax.text(6, 20, r"$z = 300x_1 + 500x_2$", size=12)
ax.text(15, 48, "solution space", size=12)

# Draw the optimal solution
ax.plot(result.x[1], result.x[0], "*", color="black")
ax.text(32, 64, "Optimal Solution", size=12)
plt.xlabel(r"$x_2$ Deluxe model")
plt.ylabel(r"$x_1$ Standard model")
plt.show()

The constraints define the solution space (feasible region), which contains all allowed solutions: in this example, the feasible combinations of computers produced. The objective function and its direction (maximize or minimize) determine which of these solutions is considered optimal.
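As a quick check of the graphical result, the profit can be evaluated at each vertex of the feasible region; for a linear problem an optimum always lies at such a vertex. The vertex coordinates below are read off from the figure.

import numpy as np

# vertices of the feasible region as (x1, x2) = (Standard, Deluxe)
vertices = np.array([[0, 0], [60, 0], [60, 30], [20, 50], [0, 50]])

# profit z = 300*x1 + 500*x2 at every vertex
profits = vertices @ np.array([300, 500])
for (x1, x2), z in zip(vertices, profits):
    print(f'x1={x1:2d}, x2={x2:2d} -> profit ${z}')

# the largest profit ($33000) occurs at (60, 30), matching the milp result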

Discrete optimization and continuous optimization

In the case of the computer production problem the controlled variables are discrete, as it is not feasible to manufacture fractions of computers. Discrete controlled variables negatively impact the time needed to find an optimal solution. It can therefore sometimes be useful to treat discrete controlled variables as continuous. The output then gives a rough estimate of the ideal solution after rounding the variables to the nearest integer values.
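As an illustration, the computer production problem can be relaxed by simply omitting the integrality argument of milp, after which the result is rounded. Rounding down is used here because, with all constraint coefficients non-negative, it cannot violate the upper bounds; in general, rounding can give a suboptimal or even infeasible solution.

import numpy as np
from scipy.optimize import LinearConstraint, milp

c = -1 * np.array([300, 500])
constraints = LinearConstraint(np.array([[1, 0], [0, 1], [1, 2]]),
                               np.zeros(3), np.array([60, 50, 120]))

# without the integrality argument, milp solves the continuous relaxation
relaxed = milp(c=c, constraints=constraints)

# round down so that no upper-bound constraint can be violated
estimate = np.floor(relaxed.x)
print(relaxed.x, estimate)

For this particular problem the relaxed optimum happens to be integer-valued already, so rounding changes nothing; that is not true in general.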

Depending on the type of optimization problem it is also possible that the controlled variables are continuous. In this case the optimization algorithm is not limited to integer variable values. Having only continuous variables will positively impact the time to find an optimal solution.

A mix of discrete and continuous variables is also not uncommon. In the case of linear programming this is called mixed integer linear programming (MILP).
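In scipy such a mix is expressed through the integrality array. A minimal sketch, assuming (purely for illustration) that $x_1$ must be an integer while $x_2$ may be fractional:

import numpy as np
from scipy.optimize import LinearConstraint, milp

c = -1 * np.array([300, 500])
constraints = LinearConstraint(np.array([[1, 0], [0, 1], [1, 2]]),
                               np.zeros(3), np.array([60, 50, 120]))

# 1 marks an integer variable, 0 a continuous one
integrality = np.array([1, 0])
result = milp(c=c, constraints=constraints, integrality=integrality)
print(result.x)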

Constrained and unconstrained optimization

Typical textbooks on mathematical optimization start with problems that are unconstrained. Such simplified problems allow for conveying the concept of mathematical optimization. The figure at the start of this chapter is a typical example of such a simplified problem.

However, in real-life applications of optimization techniques, problems are unlikely to be unconstrained. The computer production problem was constrained by the number of computer towers and hard drives in stock.

In engineering it is likely that the optimization problem is constrained by laws derived from the natural sciences. It is not uncommon for the problem to also be constrained by limits that stakeholders have set on different variables. The most common example is a budgetary constraint that cannot be violated but has no relation to the natural sciences; rather, it is related to the social sciences.

There is a clear distinction between constraints that are derived from the natural sciences and from the social sciences. The former are fixed and cannot be violated (non-negotiable) whereas the latter can still be up for debate (negotiable).

Linear and non-linear optimization

When both the objective and constraint functions are linear, the optimization problem is considered a linear optimization problem. Linear optimization problems are easier to solve than general nonlinear ones.

The first numerical optimization algorithms were developed to solve linear optimization problems, and there are many applications in operations research.

When the objective function is quadratic and the constraints are linear, we have a quadratic optimization problem, which is another type of problem for which specialized solution methods exist.
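A minimal sketch of such a problem, with a made-up quadratic objective (the squared distance to the point $(1, 2)$) and a single linear constraint, solved here with scipy's general-purpose minimize rather than a dedicated quadratic-programming solver:

import numpy as np
from scipy.optimize import LinearConstraint, minimize

# quadratic objective: squared distance to the (infeasible) point (1, 2)
def f(x):
    return (x[0] - 1) ** 2 + (x[1] - 2) ** 2

# linear constraint: x1 + x2 <= 2
lin_con = LinearConstraint(np.array([[1, 1]]), -np.inf, 2)

result = minimize(f, x0=np.array([0.0, 0.0]), constraints=lin_con, method='trust-constr')
print(result.x)  # approximately [0.5, 1.5]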

Functions can be either uni-modal or multi-modal. Uni-modal functions have a single minimum, whereas multi-modal functions have multiple minima. When we find a minimum without knowing whether the function is uni-modal, we can only say that it is a local minimum; that is, this point is better than any other point within a small neighborhood.
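The function below (made up for illustration) has two minima; depending on the starting point, a gradient-based algorithm ends up in one or the other, and only one of them is the global minimum:

import numpy as np
from scipy.optimize import minimize

# multi-modal example function with a local and a global minimum
def f(x):
    return x[0] ** 4 - 3 * x[0] ** 2 + x[0]

for x0 in (-2.0, 2.0):
    res = minimize(f, x0=np.array([x0]))
    print(f'start {x0:+.1f} -> minimum at x = {res.x[0]:.3f}, f(x) = {res.fun:.3f}')

# starting at -2.0 finds the global minimum (near x = -1.30), while
# starting at +2.0 only finds the local minimum near x = 1.13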

Often, we need not be too concerned about the possibility of multiple local minima. From an engineering design point of view, achieving a local optimum that is better than the initial design is already a useful result.

Convexity is a concept related to multi-modality. A function is convex if every line segment connecting two points on its graph lies on or above the graph. Convex functions are always uni-modal. It follows that all multi-modal functions are non-convex, but not all uni-modal functions are convex.
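Formally, a function $f$ is convex if for all points $x$ and $y$ and all $\theta \in [0, 1]$:

$$f(\theta x + (1 - \theta) y) \le \theta f(x) + (1 - \theta) f(y)$$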

Convex optimization seeks to minimize convex functions over convex sets. Like linear optimization, convex optimization is another subfield of numerical optimization with many applications. When the objective and constraints are convex functions, we can use specialized formulations and algorithms that are much more efficient than general nonlinear algorithms to find the global optimum.