
Optimal Model Predictive Control

How does one achieve a control that is both truly optimal and implementable? Naturally, this depends on what one defines as optimal. For simplicity, this post focuses on tracking a target or trajectory. PID control, a 100+ year old technology, is still prevalent in industry despite being incapable of performing optimally in many cases. Its continued prevalence is due in part to the complexity of better options, and in part to devices often hiding their control performance. Poor control is easy to hide with deceptive software and feedback that is missing or overly simplistic. With displays now being placed on everything, users are starting to realize control can be better. This blog post provides an introduction to an alternative: constrained, optimal Model Predictive Control (MPC).
[Interactive plot: PID control tracking response with gains P = 5, I = 0.1, D = 0.1]

What is constrained, optimal, model predictive control? MPC itself is a control scheme that makes use of past data, model physics, and future demands in order to better control a system. Optimal MPC refers to a controller that calculates the globally optimal inputs needed to meet an objective; note there are many flavors of MPC that are not necessarily optimal, such as Generalized Predictive Control (GPC). Constrained control is where the controller mathematically accounts for the limitations of the system ahead of time and optimizes around these constraints. For example, on/off systems cannot be driven past on; another term for this is input saturation. Consequently, this control scheme finds the best compromise between the desired targets and the real world limitations of the system. The following sections will demonstrate how to implement constrained optimal MPC as well as contrast its performance against other control schemes.

Transfer Function Example:

Consider one of the most basic models possible which is a second order transfer function:
$$ H(s) = \frac{a \cdot s + b}{s^2 + c \cdot s + d} \quad\quad (1) $$ Where $ a,b,c,d $ represent optimally chosen model parameters. The optimal choice of these parameters is known as system identification. While the identification process is out of the scope of this blog post, note the Zallus Controller must do this in order to implement MPC of an arbitrary system. The next step is to discretize the system, which can be done through various methods such as ZOH, forward difference, or the bilinear transform, to name a few. I have a basic forward difference example in this blog post.
$$ \frac{Y(k)}{U(k)} = \frac{A \cdot z^{-1} + B \cdot z^{-2}}{1 -C \cdot z^{-1} -D \cdot z^{-2}} \quad\quad (2) $$ $$ \hat{y}(k+1) = C \cdot y(k) + D \cdot y(k-1) + A \cdot u(k) + B \cdot u(k-1) \quad\quad (3) $$
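As a concrete illustration of the forward difference option (a sketch of my own, not taken from the linked post), the snippet below maps the continuous coefficients $ a, b, c, d $ of equation $ (1) $ to the discrete coefficients $ A, B, C, D $ appearing in equation $ (2) $ via the substitution $ s \approx (z - 1)/T $. The sample time $ T $ and the example values are assumptions for illustration only.

```python
def forward_difference_discretize(a, b, c, d, T):
    """Map H(s) = (a*s + b) / (s^2 + c*s + d), equation (1), to the coefficients
    A, B, C, D of equation (2) using the forward difference substitution
    s ~ (z - 1) / T, where T is the sample time (an assumed parameter)."""
    A = a * T
    B = b * T**2 - a * T
    C = 2.0 - c * T                 # denominator written as 1 - C*z^-1 - D*z^-2
    D = c * T - d * T**2 - 1.0
    return A, B, C, D

# Example: a unity gain, critically damped H(s) = 1 / (s^2 + 2s + 1) sampled at T = 0.1 s
print(tuple(round(x, 4) for x in forward_difference_discretize(0.0, 1.0, 2.0, 1.0, 0.1)))
# -> (0.0, 0.01, 1.8, -0.81)
```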

Equation $ (2) $ shows the transfer function in the Z domain where $ A,B,C,D $ represent the discretized model parameters. With some algebra and the definition of the Z-transform, one can convert this into the recursive predictor form shown in equation $ (3) $. Now, with some structure in place, consider a basic example of a 2nd order transfer function with input constraints where the input can only be between 0 and 1:
$$ H(z) = \frac{0.6 \cdot z^{-1} + 0.2 \cdot z^{-2}}{1 - 0.9 \cdot z^{-1} + 0.05 \cdot z^{-2}} \ , \quad 0 \le u(k) \le 1 \quad\quad (4) $$
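Before jumping to MPC, it is worth seeing how the simplest approach behaves: one step ahead control, which inverts equation $ (3) $ for $ u(k) $ so the next output lands on the target and then clips the result to the allowed range. Below is a minimal sketch of that controller for the system in equation $ (4) $; the step target trajectory and zero initial conditions are my own choices for illustration, not values from the post.

```python
import numpy as np

# Coefficients of equation (4) written in the form of equations (2)/(3):
# y(k+1) = C*y(k) + D*y(k-1) + A*u(k) + B*u(k-1)
A, B = 0.6, 0.2
C, D = 0.9, -0.05            # denominator 1 - 0.9*z^-1 + 0.05*z^-2  ->  C = 0.9, D = -0.05

target = np.concatenate([np.full(25, 1.0), np.full(25, 3.0)])   # assumed step target
N = len(target)

y = np.zeros(N + 1)          # y[k] is the output at step k, starting from rest
u = np.zeros(N)
y_old = u_old = 0.0          # bookkeeping for y(k-1) and u(k-1)

for k in range(N):
    # Invert equation (3) so that y(k+1) would exactly hit the target...
    u_ideal = (target[k] - C * y[k] - D * y_old - B * u_old) / A
    # ...then apply the real world constraint 0 <= u <= 1 (input saturation).
    u[k] = min(max(u_ideal, 0.0), 1.0)
    # Propagate the model one step with the saturated input.
    y[k + 1] = C * y[k] + D * y_old + A * u[k] + B * u_old
    y_old, u_old = y[k], u[k]

print(np.round(y[24:30], 3))  # the output lags the target right after the step change
```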

[Plot: one step ahead control tracking the target under the 0 to 1 input constraint]

As seen in the plot, with real world constraints, one step ahead control is no longer an optimal control. Consequently, to achieve an optimal control, the constraints must be factored in when solving for the inputs. In order to handle MPC as a large scale constrained optimization problem, the problem should be stated in matrix form rather than as a recursive algorithm. This matrix form must also handle initial conditions so that MPC can be continuously updated as new information becomes available. An example will be shown for predicting 4 steps into the future. In this matrix form, future outputs $ \hat{y}(k+1), \hat{y}(k+2), \ldots, \hat{y}(k+N) $ are written as a function of future inputs $ \hat{u}(k), \hat{u}(k+1), \ldots, \hat{u}(k+N) $ as well as past data $ \tilde{y}(k-1), \tilde{y}(k-2), \tilde{u}(k-1) $, where the tilde denotes a known constant.

$$ \hat{y} = J \cdot \hat{u} + C \quad\quad (5) $$ $$ \begin{bmatrix} \hat{y}(k+1) \\ \hat{y}(k+2) \\ \hat{y}(k+3) \\ \hat{y}(k+4) \\ \cdots \end{bmatrix} = \begin{bmatrix} A & 0 & 0 & 0 & \cdots \\ B + A \cdot C & A & 0 & 0 & \cdots \\ B \cdot C + A \cdot (C^2 + D) & B + A \cdot C & A & 0 & \cdots \\ A \cdot (C^3 + 2 \cdot D \cdot C) + B \cdot (C^2 + D) & B \cdot C + A \cdot (C^2 + D) & B + A \cdot C & A & \cdots \\ \cdots & \cdots & \cdots & \cdots & \cdots \end{bmatrix} \cdot \begin{bmatrix} \hat{u}(k) \\ \hat{u}(k+1) \\ \hat{u}(k+2) \\ \hat{u}(k+3) \\ \cdots \end{bmatrix} + $$ $$ \begin{bmatrix} C & D & B \\ C^2 + D & C \cdot D & B \cdot C \\ C \cdot D + C \cdot (C^2 + D) & D \cdot (C^2 + D) & B \cdot (C^2 + D) \\ C \cdot (C^3 + 2 \cdot D \cdot C) + D \cdot (C^2 + D) & D \cdot (C^3 + 2 \cdot D \cdot C) & B \cdot (C^3 + 2 \cdot D \cdot C) \\ \cdots & \cdots & \cdots \end{bmatrix} \cdot \begin{bmatrix} \tilde{y}(k-1) \\ \tilde{y}(k-2) \\ \tilde{u}(k-1) \end{bmatrix} \quad\quad (6) $$

Equation $ (5) $ shows the shorthand matrix prediction equation where $ \hat{y} $ is the vector of predicted future outputs and $ \hat{u} $ is the vector of chosen/calculated future inputs. The future inputs are multiplied by $ J $, which is the Jacobian matrix of the system, and $ C $ is a constant vector determined by the initial conditions (the coefficient matrix times the past data vector in equation $ (6) $; not to be confused with the scalar model coefficient $ C $). Note that Jacobians are crucial for nearly all optimization algorithms, as is their 2nd order equivalent, the Hessian (often approximated as $ H \approx J^T \cdot J $). Also note that in MPC problems the Jacobian is a lower triangular as well as Toeplitz matrix. These features are key for validation as well as for an efficient implementation of inversion/solving.
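Rather than expanding the matrix entries symbolically as in equation $ (6) $, $ J $ and the constant term can be built numerically by superposition: the constant term is the free response of equation $ (3) $ from the initial conditions with all future inputs set to zero, and column $ j $ of $ J $ is the response to a unit input at step $ j $ from zero initial conditions. Here is a minimal sketch of that construction; the function name and interface are mine and are not from the post.

```python
import numpy as np

def build_prediction(A, B, C, D, N, y_m1, y_m2, u_m1):
    """Return (J, c) such that y_hat = J @ u_hat + c predicts N steps of the
    recursion in equation (3), with y(k-1), y(k-2), u(k-1) as the known
    initial data (the convention used in equation (6))."""
    def roll(u_future, y1, y2, u_prev):
        # Iterate the recursion over the prediction window.
        out = np.zeros(N)
        for i in range(N):
            y_new = C * y1 + D * y2 + A * u_future[i] \
                    + B * (u_prev if i == 0 else u_future[i - 1])
            out[i] = y_new
            y2, y1 = y1, y_new
        return out

    c = roll(np.zeros(N), y_m1, y_m2, u_m1)          # free response -> constant term
    J = np.column_stack([roll(np.eye(N)[:, j], 0.0, 0.0, 0.0) for j in range(N)])
    return J, c

# Quick check with the example model: J comes out lower triangular and Toeplitz.
J, c = build_prediction(0.6, 0.2, 0.9, -0.05, N=4, y_m1=0.0, y_m2=0.0, u_m1=0.0)
print(np.round(J, 3))
```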

Moving on, solve equation $ (5) $ for the optimal inputs given target output values as well as a set of initial conditions.

$$ \hat{u} = J^{-1} \cdot (\hat{y} - C) \quad\quad (7) $$ $$ \hat{u} = J^{-1} \cdot (\hat{y} - C) \quad s.t. \quad lb \le \hat{u} \le ub \quad\quad (8) $$

Equation $ (7) $ is the un-constrained solution. Equation $ (8) $ states the constrained case, where $ lb, ub $ are the lower and upper bounds respectively. Note this is not an easy problem to solve optimally, in my humble opinion. The problem has $ 2 \cdot N $ constraints and is considered a non-linear programming problem. Furthermore, with so many constraints, if not handled with care the computation time can become excessive. The easiest way to solve this problem would be with an active set method; however, this would be a bad idea since active set methods generally only identify 1-2 active constraints at a time, while for MPC a common case is where half to all of the constraints are simultaneously active. Consequently, I would recommend a more intelligent method such as an interior point method. Matlab/Octave of course have solvers for these types of constrained problems, but relying on such methods is not ideal for a cross platform IoT system. Consequently, implementing an efficient constrained solver in JavaScript was probably the hardest challenge of getting the Zallus Controller working. Looking at Matlab's nonlinear constrained algorithms is a good place to start for algorithm ideas. Once successfully implemented, the MPC solution to the previous example should look like the following:

[Plot: constrained optimal MPC tracking the target for the example system]

As expected, the constrained optimal MPC reacts well ahead of time to future demands and bisects all the step changes in the output to minimize the error. Naturally, this produces lower error than the previously shown control methods. Interestingly enough, at $ k=27, k=36 $ there appears to be some ripple and sub-optimal control. Surprisingly, those points are indeed optimal points where the controller is making use of the nature of the model to achieve a better curve. The following plot shows all of the displayed control techniques together for reference.

[Plot: all of the displayed control techniques on the same axes]
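For reference, here is a minimal sketch of one way to perform the constrained solve in equation $ (8) $, posed as a bounded least squares problem and handed to SciPy's lsq_linear routine (a trust region reflective method by default, in the same spirit as the interior point recommendation above). This is for illustration only and is not the post's implementation, which is a custom JavaScript solver; the target trajectory and zero initial conditions are assumed.

```python
import numpy as np
from scipy.optimize import lsq_linear

A, B, C, D = 0.6, 0.2, 0.9, -0.05                  # example model from equation (4)
N = 50
target = np.concatenate([np.full(25, 1.0), np.full(25, 3.0)])   # assumed target

# Impulse response of equation (3) (unit input at step 0, zero initial conditions).
h = np.zeros(N)
y1 = y2 = 0.0
for i in range(N):
    y_new = C * y1 + D * y2 + A * (1.0 if i == 0 else 0.0) + B * (1.0 if i == 1 else 0.0)
    h[i] = y_new
    y2, y1 = y1, y_new

# Lower triangular Toeplitz Jacobian; with zero initial conditions the constant term is zero.
J = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])

# Equation (8) as bounded least squares:  min ||J*u - target||^2  s.t.  0 <= u <= 1
res = lsq_linear(J, target, bounds=(0.0, 1.0))
u_opt, y_pred = res.x, J @ res.x
print("worst tracking error:", round(float(np.abs(y_pred - target).max()), 4))
```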

Finite Horizon:

While the previous examples demonstrate the concept of MPC, they are missing a key construct for practical implementation: a finite horizon. Without limiting the scope of future predictions, the calculations involve optimization over $ N $ by $ N $ matrices where $ N $ is the length of the target. In order to make on the fly computation feasible, the number of future predictions must be limited. The plots previously shown were produced with a so called infinite horizon, i.e. a horizon spanning all the steps, which was $ N = 50 $. However, to achieve an optimal control of the previous examples, a future horizon of only $ N = 4 $ was needed. The required finite horizon for optimal control depends upon the model characteristics as well as the target trajectory. To illustrate the effect of shorter horizons, please see the following plot. Note that the accuracy is reduced and the control converges towards one step ahead control as the finite horizon is reduced. Also note that the model and target were modified to better illustrate the effect of the horizon.

[Plot: tracking accuracy for several finite horizon lengths on a modified model and target]

As seen above, when the finite horizon is reduced, the control has less ability to anticipate/optimize for future demand. Consequently, the overall error is increased, but the computation time per step is decreased. Accordingly, optimal MPC is a balancing act between computation time and the control time step that is acceptable.
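To make the trade off concrete, below is a sketch of the receding horizon loop this section describes: at each step the constrained problem is solved over only the next few samples, only the first input is applied, and the optimization repeats as new data arrives. The horizon length, target, and helper code are my own assumptions, reusing the same bounded least squares approach as the earlier sketch.

```python
import numpy as np
from scipy.optimize import lsq_linear

A, B, C, D = 0.6, 0.2, 0.9, -0.05                 # example model from equation (4)
target = np.concatenate([np.full(25, 1.0), np.full(25, 3.0)])   # assumed target
Nh = 4                                             # finite prediction horizon

def roll(u_future, y_now, y_old, u_old):
    """Iterate equation (3) forward len(u_future) steps from the given data."""
    out = np.zeros(len(u_future))
    for i, ui in enumerate(u_future):
        y_new = C * y_now + D * y_old + A * ui + B * (u_old if i == 0 else u_future[i - 1])
        out[i] = y_new
        y_old, y_now = y_now, y_new
    return out

y_now = y_old = u_old = 0.0
outputs = []
for k in range(len(target)):
    ref = target[k:k + Nh]                         # horizon shrinks near the end of the target
    n = len(ref)
    c = roll(np.zeros(n), y_now, y_old, u_old)     # free response (constant term)
    J = np.column_stack([roll(np.eye(n)[:, j], 0.0, 0.0, 0.0) for j in range(n)])
    u_k = lsq_linear(J, ref - c, bounds=(0.0, 1.0)).x[0]   # constrained solve, keep first input
    y_next = C * y_now + D * y_old + A * u_k + B * u_old   # here the "plant" is the model itself
    outputs.append(y_next)
    y_old, y_now, u_old = y_now, y_next, u_k

print("total squared error with Nh = 4:",
      round(float(np.sum((np.array(outputs) - target) ** 2)), 3))
```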

This concludes my intro to optimal constrained MPC. Happy controlling! I hope to see some other optimal MPC control products out there!


