diff --git a/content/book/skogestad07_multiv_feedb_contr.md b/content/book/skogestad07_multiv_feedb_contr.md index c39d169..390849b 100644 --- a/content/book/skogestad07_multiv_feedb_contr.md +++ b/content/book/skogestad07_multiv_feedb_contr.md @@ -4,20 +4,11 @@ author = ["Thomas Dehaeze"] draft = false +++ -Tags -: [Reference Books]({{< relref "reference_books" >}}), [Multivariable Control]({{< relref "multivariable_control" >}}) - -Reference -: ([Skogestad and Postlethwaite 2007](#org7d9b388)) - -Author(s) -: Skogestad, S., & Postlethwaite, I. - -Year -: 2007 - -PDF version -: [link](/ox-hugo/skogestad07_multiv_feedb_contr.pdf) +- Tags :: [[file:reference_books.org][Reference Books]], [[file:multivariable_control.org][Multivariable Control]] +- Reference :: cite:skogestad07_multiv_feedb_contr +- Author(s) :: Skogestad, S., & Postlethwaite, I. +- Year :: 2007 +- PDF version :: [[file:pdfs/skogestad07_multiv_feedb_contr.pdf][link]]
\( @@ -63,7 +54,7 @@ PDF version
## Introduction {#introduction}
- +
### The Process of Control System Design {#the-process-of-control-system-design}
@@ -240,7 +231,7 @@ Notations used throughout this note are summarized in tables [1](#table--tab:not
## Classical Feedback Control {#classical-feedback-control}
- +
### Frequency Response {#frequency-response}
@@ -282,14 +273,14 @@ We note \\(N(\w\_0) = \left( \frac{d\ln{|G(j\w)|}}{d\ln{\w}} \right)\_{\w=\w\_0}
#### One Degree-of-Freedom Controller {#one-degree-of-freedom-controller}
-The simple one degree-of-freedom controller negative feedback structure is represented in Fig. [1](#orgd511abe).
+The simple one degree-of-freedom negative feedback control structure is represented in Fig. [1](#org7da9ddb).
The input to the controller \\(K(s)\\) is \\(r-y\_m\\) where \\(y\_m = y+n\\) is the measured output and \\(n\\) is the measurement noise. Thus, the input to the plant is \\(u = K(s) (r-y-n)\\). The objective of control is to manipulate \\(u\\) (design \\(K\\)) such that the control error \\(e\\) remains small in spite of disturbances \\(d\\). The control error is defined as \\(e = y-r\\).
- +
{{< figure src="/ox-hugo/skogestad07_classical_feedback_alt.png" caption="Figure 1: Configuration for one degree-of-freedom control" >}}
@@ -564,7 +555,7 @@ Thus, this limits the attainable bandwidth:
#### Inverse-Based Controller Design {#inverse-based-controller-design}
The idea is to have \\(L(s) = \frac{\w\_c}{s}\\) with \\(\w\_c\\) the desired gain crossover frequency.
-The controller associated is then \\(K(s) = \frac{\w\_c}{s}G^{-1}(s)\\) {the plant is inverted and an integrator is added}.
+The associated controller is then \\(K(s) = \frac{\w\_c}{s}G^{-1}(s)\\) (the plant is inverted and an integrator is added).
This idea is the essential part of the **internal model control** (IMC).
This loop shape yields a phase margin of \\(\SI{90}{\degree}\\) and an infinite gain margin.
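The stated margins of this loop shape can be checked numerically; a minimal sketch (illustrative, not from the book; `wc` is an assumed crossover frequency):

```python
import numpy as np

# Loop shape L(s) = wc/s obtained with the inverse-based controller
# K(s) = (wc/s) G^{-1}(s); wc is an assumed crossover frequency.
wc = 10.0  # rad/s (illustrative value)

def L(w):
    """Frequency response L(jw) = wc/(jw)."""
    return wc / (1j * w)

# |L(j wc)| = 1, so wc is indeed the gain crossover frequency.
assert abs(abs(L(wc)) - 1.0) < 1e-12

# The phase of L is -90 deg at every frequency, so the phase margin is
# 180 deg + arg L(j wc) = 90 deg, and since the phase never reaches
# -180 deg the gain margin is infinite.
pm_deg = 180.0 + np.degrees(np.angle(L(wc)))
assert abs(pm_deg - 90.0) < 1e-6
```

The same two checks applied to any sampled loop response give the classical margins directly.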
@@ -608,18 +599,18 @@ For reference tracking, we typically want the controller to look like \\(\frac{1
We cannot achieve both of these simultaneously with a single feedback controller.
-The solution is to use a **two degrees of freedom controller** where the reference signal \\(r\\) and output measurement \\(y\_m\\) are independently treated by the controller (Fig. [2](#orgaf9baaf)), rather than operating on their difference \\(r - y\_m\\).
+The solution is to use a **two degrees of freedom controller** where the reference signal \\(r\\) and output measurement \\(y\_m\\) are independently treated by the controller (Fig. [2](#org32f22a1)), rather than operating on their difference \\(r - y\_m\\).
- +
{{< figure src="/ox-hugo/skogestad07_classical_feedback_2dof_alt.png" caption="Figure 2: 2 degrees-of-freedom control architecture" >}}
-The controller can be slit into two separate blocks (Fig. [3](#orgdf34a4b)):
+The controller can be split into two separate blocks (Fig. [3](#org37c1b18)):
- the **feedback controller** \\(K\_y\\) that is used to **reduce the effect of uncertainty** (disturbances and model errors)
- the **prefilter** \\(K\_r\\) that **shapes the commands** \\(r\\) to improve tracking performance
- +
{{< figure src="/ox-hugo/skogestad07_classical_feedback_sep.png" caption="Figure 3: 2 degrees-of-freedom control architecture with two separate blocks" >}}
@@ -690,7 +681,7 @@ Which can be expressed as an \\(\mathcal{H}\_\infty\\):
W\_P(s) = \frac{s/M + \w\_B^\*}{s + \w\_B^\* A}
\end{equation\*}
-With (see Fig. [4](#org1e6ca86)):
+With (see Fig. [4](#org6cea3ba)):
- \\(M\\): maximum magnitude of \\(\abs{S}\\)
- \\(\w\_B\\): crossover frequency
@@ -698,7 +689,7 @@
- + {{< figure src="/ox-hugo/skogestad07_weight_first_order.png" caption="Figure 4: Inverse of performance weight" >}} @@ -732,7 +723,7 @@ After selecting the form of \\(N\\) and the weights, the \\(\hinf\\) optimal con ## Introduction to Multivariable Control {#introduction-to-multivariable-control} - + ### Introduction {#introduction} @@ -769,13 +760,13 @@ The main rule for evaluating transfer functions is the **MIMO Rule**: Start from #### Negative Feedback Control Systems {#negative-feedback-control-systems} -For negative feedback system (Fig. [5](#org4a80576)), we define \\(L\\) to be the loop transfer function as seen when breaking the loop at the **output** of the plant: +For negative feedback system (Fig. [5](#org883d458)), we define \\(L\\) to be the loop transfer function as seen when breaking the loop at the **output** of the plant: - \\(L = G K\\) - \\(S \triangleq (I + L)^{-1}\\) is the transfer function from \\(d\_1\\) to \\(y\\) - \\(T \triangleq L(I + L)^{-1}\\) is the transfer function from \\(r\\) to \\(y\\) - + {{< figure src="/ox-hugo/skogestad07_classical_feedback_bis.png" caption="Figure 5: Conventional negative feedback control system" >}} @@ -1133,9 +1124,9 @@ The **structured singular value** \\(\mu\\) is a tool for analyzing the effects ### General Control Problem Formulation {#general-control-problem-formulation} -The general control problem formulation is represented in Fig. [6](#org2f011da). +The general control problem formulation is represented in Fig. [6](#orgb123e51) (introduced in ([Doyle 1983](#orgf54e061))). 
- + {{< figure src="/ox-hugo/skogestad07_general_control_names.png" caption="Figure 6: General control configuration" >}} @@ -1166,13 +1157,13 @@ Then we have to break all the "loops" entering and exiting the controller \\(K\\ #### Controller Design: Including Weights in \\(P\\) {#controller-design-including-weights-in--p} -In order to get a meaningful controller synthesis problem, for example in terms of the \\(\hinf\\) norms, we generally have to include the weights \\(W\_z\\) and \\(W\_w\\) in the generalized plant \\(P\\) (Fig. [7](#orgcf69c72)). +In order to get a meaningful controller synthesis problem, for example in terms of the \\(\hinf\\) norms, we generally have to include the weights \\(W\_z\\) and \\(W\_w\\) in the generalized plant \\(P\\) (Fig. [7](#org0719385)). We consider: - The weighted or normalized exogenous inputs \\(w\\) (where \\(\tilde{w} = W\_w w\\) consists of the "physical" signals entering the system) - The weighted or normalized controlled outputs \\(z = W\_z \tilde{z}\\) (where \\(\tilde{z}\\) often consists of the control error \\(y-r\\) and the manipulated input \\(u\\)) - + {{< figure src="/ox-hugo/skogestad07_general_plant_weights.png" caption="Figure 7: General Weighted Plant" >}} @@ -1225,9 +1216,9 @@ where \\(F\_l(P, K)\\) denotes a **lower linear fractional transformation** (LFT #### A General Control Configuration Including Model Uncertainty {#a-general-control-configuration-including-model-uncertainty} -The general control configuration may be extended to include model uncertainty as shown in Fig. [8](#orgd20b47f). +The general control configuration may be extended to include model uncertainty as shown in Fig. [8](#orgcb66a11). - + {{< figure src="/ox-hugo/skogestad07_general_control_Mdelta.png" caption="Figure 8: General control configuration for the case with model uncertainty" >}} @@ -1255,7 +1246,7 @@ MIMO systems are often **more sensitive to uncertainty** than SISO systems. 
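The lower LFT used here has the standard closed form \\(F\_l(P, K) = P\_{11} + P\_{12} K (I - P\_{22} K)^{-1} P\_{21}\\); a minimal frequency-by-frequency sketch (the matrix values below are made up purely for illustration):

```python
import numpy as np

def lft_lower(P11, P12, P21, P22, K):
    """Lower LFT F_l(P, K) = P11 + P12 K (I - P22 K)^{-1} P21,
    evaluated with constant (single-frequency) matrices."""
    I = np.eye(P22.shape[0])
    return P11 + P12 @ K @ np.linalg.solve(I - P22 @ K, P21)

# Illustrative 1x1 partition of a generalized plant P (made-up values).
P11 = np.array([[1.0]]); P12 = np.array([[2.0]])
P21 = np.array([[1.0]]); P22 = np.array([[-0.5]])
K = np.array([[1.0]])

N = lft_lower(P11, P12, P21, P22, K)
# Here F_l(P, K) = 1 + 2*1*(1/(1 + 0.5))*1 = 1 + 2/1.5
```

Evaluating this at each frequency of interest gives the closed-loop map \(N\) from \(w\) to \(z\) used in the analysis.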
## Elements of Linear System Theory {#elements-of-linear-system-theory}
- +
### System Descriptions {#system-descriptions}
@@ -1628,18 +1619,18 @@ RHP-zeros therefore imply high gain instability.
### Internal Stability of Feedback Systems {#internal-stability-of-feedback-systems}
- +
{{< figure src="/ox-hugo/skogestad07_classical_feedback_stability.png" caption="Figure 9: Block diagram used to check internal stability" >}}
-Assume that the components \\(G\\) and \\(K\\) contain no unstable hidden modes. Then the feedback system in Fig. [9](#orgde8788d) is **internally stable** if and only if all four closed-loop transfer matrices are stable.
+Assume that the components \\(G\\) and \\(K\\) contain no unstable hidden modes. Then the feedback system in Fig. [9](#org687f512) is **internally stable** if and only if all four closed-loop transfer matrices are stable.
\begin{align\*}
&(I+KG)^{-1} & -K&(I+GK)^{-1} \\\\\\
G&(I+KG)^{-1} & &(I+GK)^{-1}
\end{align\*}
-Assume there are no RHP pole-zero cancellations between \\(G(s)\\) and \\(K(s)\\), the feedback system in Fig. [9](#orgde8788d) is internally stable if and only if **one** of the four closed-loop transfer function matrices is stable.
+Assume there are no RHP pole-zero cancellations between \\(G(s)\\) and \\(K(s)\\). Then the feedback system in Fig. [9](#org687f512) is internally stable if and only if **one** of the four closed-loop transfer function matrices is stable.
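A toy SISO example (made up here, not taken from the book) shows why all four transfer matrices must be checked: with an RHP pole-zero cancellation between \(G\) and \(K\), the shared characteristic polynomial \(\phi(s) = d_G d_K + n_G n_K\) has an RHP root even though \(S\) alone looks stable.

```python
import numpy as np

# G(s) = (s-1)/(s+1) has an RHP zero; K(s) = 2(s+1)/(s-1) cancels it
# with an RHP pole.  All four closed-loop transfer matrices share the
# characteristic polynomial phi(s) = dG*dK + nG*nK; internal stability
# requires all of its roots to lie in the open left-half plane.
nG, dG = [1.0, -1.0], [1.0, 1.0]   # numerator/denominator of G
nK, dK = [2.0, 2.0], [1.0, -1.0]   # numerator/denominator of K

phi = np.polyadd(np.polymul(dG, dK), np.polymul(nG, nK))  # 3s^2 - 3
poles = np.roots(phi)
unstable = bool(np.any(poles.real > 0))

# phi has roots s = +1 and s = -1: the loop is internally unstable,
# even though S = 1/(1+GK) = 1/3 taken alone looks perfectly stable.
assert unstable
```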
### Stabilizing Controllers {#stabilizing-controllers} @@ -1806,7 +1797,7 @@ It may be shown that the Hankel norm is equal to \\(\left\\|G(s)\right\\|\_H = \ ## Limitations on Performance in SISO Systems {#limitations-on-performance-in-siso-systems} - + ### Input-Output Controllability {#input-output-controllability} @@ -2292,11 +2283,11 @@ Uncertainty in the crossover frequency region can result in poor performance and ### Summary: Controllability Analysis with Feedback Control {#summary-controllability-analysis-with-feedback-control} - + {{< figure src="/ox-hugo/skogestad07_classical_feedback_meas.png" caption="Figure 10: Feedback control system" >}} -Consider the control system in Fig. [10](#orgb84b4ee). +Consider the control system in Fig. [10](#org4d29db4). Here \\(G\_m(s)\\) denotes the measurement transfer function and we assume \\(G\_m(0) = 1\\) (perfect steady-state measurement).
@@ -2326,7 +2317,7 @@ Sometimes, the disturbances are so large that we hit input saturation or the req \abs{G\_d(j\w)} < 1 \quad \forall \w \geq \w\_c \end{equation\*} - + {{< figure src="/ox-hugo/skogestad07_margin_requirements.png" caption="Figure 11: Illustration of controllability requirements" >}} @@ -2348,7 +2339,7 @@ The rules may be used to **determine whether or not a given plant is controllabl ## Limitations on Performance in MIMO Systems {#limitations-on-performance-in-mimo-systems} - + ### Introduction {#introduction} @@ -2728,13 +2719,13 @@ The issues are the same for SISO and MIMO systems, however, with MIMO systems th In practice, the difference between the true perturbed plant \\(G^\prime\\) and the plant model \\(G\\) is caused by a number of different sources. We here focus on input and output uncertainty. -In multiplicative form, the input and output uncertainties are given by (see Fig. [12](#org7f11e2b)): +In multiplicative form, the input and output uncertainties are given by (see Fig. [12](#org0962e33)): \begin{equation\*} G^\prime = (I + E\_O) G (I + E\_I) \end{equation\*} - + {{< figure src="/ox-hugo/skogestad07_input_output_uncertainty.png" caption="Figure 12: Plant with multiplicative input and output uncertainty" >}} @@ -2802,6 +2793,7 @@ We can see that with an inverse based controller, the worst case sensitivity wil
+**Input uncertainty and feedback control**: These statements apply to the frequency range around crossover. By "small", we mean smaller than 2 and by "large" we mean larger than 10. @@ -2877,7 +2869,7 @@ However, the situation is usually the opposite with model uncertainty because fo ## Uncertainty and Robustness for SISO Systems {#uncertainty-and-robustness-for-siso-systems} - + ### Introduction to Robustness {#introduction-to-robustness} @@ -2951,11 +2943,11 @@ In most cases, we prefer to lump the uncertainty into a **multiplicative uncerta G\_p(s) = G(s)(1 + w\_I(s)\Delta\_I(s)); \quad \abs{\Delta\_I(j\w)} \le 1 \, \forall\w \end{equation\*} -which may be represented by the diagram in Fig. [13](#org6f74f68). +which may be represented by the diagram in Fig. [13](#org7931e1a).
- +
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set.png" caption="Figure 13: Plant with multiplicative uncertainty" >}}
@@ -3020,7 +3012,7 @@ This is of course conservative as it introduces possible plants that are not pre
#### Uncertain Regions {#uncertain-regions}
-To illustrate how parametric uncertainty translate into frequency domain uncertainty, consider in Fig. [14](#orgd590978) the Nyquist plots generated by the following set of plants
+To illustrate how parametric uncertainty translates into frequency domain uncertainty, consider in Fig. [14](#org790f70f) the Nyquist plots generated by the following set of plants
\begin{equation\*}
G\_p(s) = \frac{k}{\tau s + 1} e^{-\theta s}, \quad 2 \le k, \theta, \tau \le 3
@@ -3030,7 +3022,7 @@ To illustrate how parametric uncertainty translate into frequency domain uncerta
In general, these uncertain regions have complicated shapes and complex mathematical descriptions
- **Step 2**. We therefore approximate such complex regions as discs, resulting in a **complex additive uncertainty description**
- +
{{< figure src="/ox-hugo/skogestad07_uncertainty_region.png" caption="Figure 14: Uncertainty regions of the Nyquist plot at given frequencies" >}}
@@ -3049,11 +3041,11 @@ The disc-shaped regions may be generated by **additive** complex norm-bounded pe
\end{aligned}
\end{equation}
-At each frequency, all possible \\(\Delta(j\w)\\) "generates" a disc-shaped region with radius 1 centered at 0, so \\(G(j\w) + w\_A(j\w)\Delta\_A(j\w)\\) generates at each frequency a disc-shapes region of radius \\(\abs{w\_A(j\w)}\\) centered at \\(G(j\w)\\) as shown in Fig. [15](#org446d9c7).
+At each frequency, all possible \\(\Delta(j\w)\\) "generates" a disc-shaped region with radius 1 centered at 0, so \\(G(j\w) + w\_A(j\w)\Delta\_A(j\w)\\) generates at each frequency a disc-shaped region of radius \\(\abs{w\_A(j\w)}\\) centered at \\(G(j\w)\\) as shown in Fig. [15](#orgb5720a3).
- + {{< figure src="/ox-hugo/skogestad07_uncertainty_disc_generated.png" caption="Figure 15: Disc-shaped uncertainty regions generated by complex additive uncertainty" >}} @@ -3090,25 +3082,26 @@ This complex disc-shaped uncertainty description may be generated as follows: 1. Select a nominal \\(G(s)\\) 2. **Additive uncertainty**. At each frequency, find the smallest radius \\(l\_A(\w)\\) which includes all the possible plants \\(\Pi\\) + \begin{equation\*} - l\_A(\w) = maxG\_p∈Π \abs{G\_p(j\w) - G(j\w)} + l\_A(\w) = \max\_{G\_p\in\Pi} \abs{G\_p(j\w) - G(j\w)} + \end{equation\*} -\end{equation\*} - If we want a rational transfer function weight, \\(w\_A(s)\\), then it must be chosen to cover the set, so + If we want a rational transfer function weight, \\(w\_A(s)\\), then it must be chosen to cover the set, so - \begin{equation\*} - \abs{w\_A(j\w)} \ge l\_A(\w) \quad \forall\w -\end{equation\*} + \begin{equation\*} + \abs{w\_A(j\w)} \ge l\_A(\w) \quad \forall\w + \end{equation\*} -Usually \\(w\_A(s)\\) is of low order to simplify the controller design. - -1. **Multiplicative uncertainty**. + Usually \\(w\_A(s)\\) is of low order to simplify the controller design. +3. **Multiplicative uncertainty**. This is often the preferred uncertainty form, and we have - \begin{equation\*} - l\_I(\w) = maxG\_p∈Π \abs{\frac{G\_p(j\w) - G(j\w)}{G(j\w)}} -\end{equation\*} - and with a rational weight \\(\abs{w\_I(j\w)} \ge l\_I(\w), \, \forall\w\\) + \begin{equation\*} + l\_I(\w) = \max\_{G\_p\in\Pi} \abs{\frac{G\_p(j\w) - G(j\w)}{G(j\w)}} + \end{equation\*} + + and with a rational weight \\(\abs{w\_I(j\w)} \ge l\_I(\w), \, \forall\w\\)
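The worst-case computation of \(l_I(\omega)\) above can be sketched directly on a frequency grid, using the first-order-plus-delay plant family introduced earlier; the delay-free nominal with mean parameters is an assumption made here for illustration:

```python
import numpy as np
from itertools import product

# Uncertain family G_p(s) = k e^{-theta s} / (tau s + 1), with
# 2 <= k, theta, tau <= 3 gridded at {2, 2.5, 3}.
def Gp(w, k, theta, tau):
    return k * np.exp(-1j * w * theta) / (tau * 1j * w + 1)

w = np.logspace(-2, 1, 200)          # frequency grid [rad/s]
G = Gp(w, 2.5, 0.0, 2.5)             # assumed delay-free nominal model

# l_I(w) = max over the plant set of the relative error |(G_p - G)/G|.
l_I = np.max([np.abs((Gp(w, k, th, ta) - G) / G)
              for k, th, ta in product([2, 2.5, 3], repeat=3)],
             axis=0)
# Any rational weight must then satisfy |w_I(jw)| >= l_I(w) at every
# grid point; the neglected delay makes l_I grow with frequency.
```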
@@ -3126,12 +3119,12 @@ To simplify subsequent controller design, we select a delay-free nominal model \end{equation\*} To obtain \\(l\_I(\w)\\), we consider three values (2, 2.5 and 3) for each of the three parameters (\\(k, \theta, \tau\\)). -The corresponding relative errors \\(\abs{\frac{G\_p-G}{G}}\\) are shown as functions of frequency for the \\(3^3 = 27\\) resulting \\(G\_p\\) (Fig. [16](#orgc98eb6b)). +The corresponding relative errors \\(\abs{\frac{G\_p-G}{G}}\\) are shown as functions of frequency for the \\(3^3 = 27\\) resulting \\(G\_p\\) (Fig. [16](#orgb7a3fef)). To derive \\(w\_I(s)\\), we then try to find a simple weight so that \\(\abs{w\_I(j\w)}\\) lies above all the dotted lines.
- +
{{< figure src="/ox-hugo/skogestad07_uncertainty_weight.png" caption="Figure 16: Relative error for 27 combinations of \\(k,\ \tau\\) and \\(\theta\\). Solid and dashed lines: two weights \\(\abs{w\_I}\\)" >}}
@@ -3174,26 +3167,26 @@ The magnitude of the relative uncertainty caused by neglecting the dynamics in \
##### Neglected delay {#neglected-delay}
-Let \\(f(s) = e^{-\theta\_p s}\\), where \\(0 \le \theta\_p \le \theta\_{\text{max}}\\). We want to represent \\(G\_p(s) = G\_0(s)e^{-\theta\_p s}\\) by a delay-free plant \\(G\_0(s)\\) and multiplicative uncertainty. Let first consider the maximum delay, for which the relative error \\(\abs{1 - e^{-j \w \theta\_{\text{max}}}}\\) is shown as a function of frequency (Fig. [17](#orgb7f2291)). If we consider all \\(\theta \in [0, \theta\_{\text{max}}]\\) then:
+Let \\(f(s) = e^{-\theta\_p s}\\), where \\(0 \le \theta\_p \le \theta\_{\text{max}}\\). We want to represent \\(G\_p(s) = G\_0(s)e^{-\theta\_p s}\\) by a delay-free plant \\(G\_0(s)\\) and multiplicative uncertainty. Let us first consider the maximum delay, for which the relative error \\(\abs{1 - e^{-j \w \theta\_{\text{max}}}}\\) is shown as a function of frequency (Fig. [17](#org95ee3d1)). If we consider all \\(\theta \in [0, \theta\_{\text{max}}]\\) then:
\begin{equation\*}
l\_I(\w) = \begin{cases} \abs{1 - e^{-j\w\theta\_{\text{max}}}} & \w < \pi/\theta\_{\text{max}} \\ 2 & \w \ge \pi/\theta\_{\text{max}} \end{cases}
\end{equation\*}
- +
{{< figure src="/ox-hugo/skogestad07_neglected_time_delay.png" caption="Figure 17: Neglected time delay" >}}
##### Neglected lag {#neglected-lag}
-Let \\(f(s) = 1/(\tau\_p s + 1)\\), where \\(0 \le \tau\_p \le \tau\_{\text{max}}\\). In this case the resulting \\(l\_I(\w)\\) (Fig. [18](#orgbfc6539)) can be represented by a rational transfer function with \\(\abs{w\_I(j\w)} = l\_I(\w)\\) where
+Let \\(f(s) = 1/(\tau\_p s + 1)\\), where \\(0 \le \tau\_p \le \tau\_{\text{max}}\\).
In this case the resulting \\(l\_I(\w)\\) (Fig. [18](#org605ffcf)) can be represented by a rational transfer function with \\(\abs{w\_I(j\w)} = l\_I(\w)\\) where \begin{equation\*} w\_I(s) = \frac{\tau\_{\text{max}} s}{\tau\_{\text{max}} s + 1} \end{equation\*} - + {{< figure src="/ox-hugo/skogestad07_neglected_first_order_lag.png" caption="Figure 18: Neglected first-order lag uncertainty" >}} @@ -3213,7 +3206,7 @@ There is an exact expression, its first order approximation is w\_I(s) = \frac{(1+\frac{r\_k}{2})\theta\_{\text{max}} s + r\_k}{\frac{\theta\_{\text{max}}}{2} s + 1} \end{equation\*} -However, as shown in Fig. [19](#org06b467d), the weight \\(w\_I\\) is optimistic, especially around frequencies \\(1/\theta\_{\text{max}}\\). To make sure that \\(\abs{w\_I(j\w)} \le l\_I(\w)\\), we can apply a correction factor: +However, as shown in Fig. [19](#org4b63987), the weight \\(w\_I\\) is optimistic, especially around frequencies \\(1/\theta\_{\text{max}}\\). To make sure that \\(\abs{w\_I(j\w)} \le l\_I(\w)\\), we can apply a correction factor: \begin{equation\*} w\_I^\prime(s) = w\_I \cdot \frac{(\frac{\theta\_{\text{max}}}{2.363})^2 s^2 + 2\cdot 0.838 \cdot \frac{\theta\_{\text{max}}}{2.363} s + 1}{(\frac{\theta\_{\text{max}}}{2.363})^2 s^2 + 2\cdot 0.685 \cdot \frac{\theta\_{\text{max}}}{2.363} s + 1} @@ -3221,7 +3214,7 @@ However, as shown in Fig. [19](#org06b467d), the weight \\(w\_I\\) is optim It is suggested to start with the simple weight and then if needed, to try the higher order weight. - + {{< figure src="/ox-hugo/skogestad07_lag_delay_uncertainty.png" caption="Figure 19: Multiplicative weight for gain and delay uncertainty" >}} @@ -3250,7 +3243,7 @@ where \\(r\_0\\) is the relative uncertainty at steady-state, \\(1/\tau\\) is th #### RS with Multiplicative Uncertainty {#rs-with-multiplicative-uncertainty} -We want to determine the stability of the uncertain feedback system in Fig. 
[20](#org8ede43d) where there is multiplicative uncertainty of magnitude \\(\abs{w\_I(j\w)}\\). +We want to determine the stability of the uncertain feedback system in Fig. [20](#org228dfe4) where there is multiplicative uncertainty of magnitude \\(\abs{w\_I(j\w)}\\). The loop transfer function becomes \begin{equation\*} @@ -3265,14 +3258,14 @@ We use the Nyquist stability condition to test for robust stability of the close &\Longleftrightarrow \quad L\_p \ \text{should not encircle -1}, \ \forall L\_p \end{align\*} - + {{< figure src="/ox-hugo/skogestad07_input_uncertainty_set_feedback.png" caption="Figure 20: Feedback system with multiplicative uncertainty" >}} ##### Graphical derivation of RS-condition {#graphical-derivation-of-rs-condition} -Consider the Nyquist plot of \\(L\_p\\) as shown in Fig. [21](#orgd4e7f02). \\(\abs{1+L}\\) is the distance from the point \\(-1\\) to the center of the disc representing \\(L\_p\\) and \\(\abs{w\_I L}\\) is the radius of the disc. +Consider the Nyquist plot of \\(L\_p\\) as shown in Fig. [21](#orgb3a1d63). \\(\abs{1+L}\\) is the distance from the point \\(-1\\) to the center of the disc representing \\(L\_p\\) and \\(\abs{w\_I L}\\) is the radius of the disc. Encirclements are avoided if none of the discs cover \\(-1\\), and we get: \begin{align\*} @@ -3281,7 +3274,7 @@ Encirclements are avoided if none of the discs cover \\(-1\\), and we get: &\Leftrightarrow \quad \abs{w\_I T} < 1, \ \forall\w \\\\\\ \end{align\*} - + {{< figure src="/ox-hugo/skogestad07_nyquist_uncertainty.png" caption="Figure 21: Nyquist plot of \\(L\_p\\) for robust stability" >}} @@ -3320,13 +3313,13 @@ And we obtain the same condition as before. #### RS with Inverse Multiplicative Uncertainty {#rs-with-inverse-multiplicative-uncertainty} -We will derive a corresponding RS-condition for feedback system with inverse multiplicative uncertainty (Fig. 
[22](#org7fd6c1d)) in which
+We will derive a corresponding RS-condition for a feedback system with inverse multiplicative uncertainty (Fig. [22](#org82e5dfe)) in which
\begin{equation\*}
G\_p = G(1 + w\_{iI}(s) \Delta\_{iI})^{-1}
\end{equation\*}
- +
{{< figure src="/ox-hugo/skogestad07_inverse_uncertainty_set.png" caption="Figure 22: Feedback system with inverse multiplicative uncertainty" >}}
@@ -3376,9 +3369,9 @@ The condition for **nominal performance** when considering performance in terms
Now \\(\abs{1 + L}\\) represents at each frequency the distance of \\(L(j\omega)\\) from the point \\(-1\\) in the Nyquist plot, so \\(L(j\omega)\\) must be at least a distance of \\(\abs{w\_P(j\omega)}\\) from \\(-1\\).
-This is illustrated graphically in Fig. [23](#orge41ae9d).
+This is illustrated graphically in Fig. [23](#org8e9042b).
- +
{{< figure src="/ox-hugo/skogestad07_nyquist_performance_condition.png" caption="Figure 23: Nyquist plot illustration of the nominal performance condition \\(\abs{w\_P} < \abs{1 + L}\\)" >}}
@@ -3399,21 +3392,21 @@ For robust performance, we require the performance condition to be satisfied for
-Let's consider the case of multiplicative uncertainty as shown on Fig. [24](#org83b2671).
+Let's consider the case of multiplicative uncertainty as shown in Fig. [24](#orgf0e4257).
The robust performance corresponds to requiring \\(\abs{\hat{y}/d}<1\ \forall \Delta\_I\\) and the set of possible loop transfer functions is
\begin{equation\*}
L\_p = G\_p K = L (1 + w\_I \Delta\_I) = L + w\_I L \Delta\_I
\end{equation\*}
- +
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set_feedback_weight_bis.png" caption="Figure 24: Diagram for robust performance with multiplicative uncertainty" >}}
##### Graphical derivation of RP-condition {#graphical-derivation-of-rp-condition}
-As illustrated on Fig. [23](#orge41ae9d), we must required that all possible \\(L\_p(j\omega)\\) stay outside a disk of radius \\(\abs{w\_P(j\omega)}\\) centered on \\(-1\\).
+As illustrated in Fig. [23](#org8e9042b), we must require that all possible \\(L\_p(j\omega)\\) stay outside a disk of radius \\(\abs{w\_P(j\omega)}\\) centered on \\(-1\\).
Since \\(L\_p\\) at each frequency stays within a disk of radius \\(|w\_I(j\omega) L(j\omega)|\\) centered on \\(L(j\omega)\\), the condition for RP becomes:
\begin{align\*}
@@ -3459,15 +3452,15 @@ And we obtain the same RP-condition as the graphically derived one.
##### Remarks on RP-condition {#remarks-on-rp-condition}
1. The RP-condition for this problem is closely approximated by the mixed sensitivity \\(\hinf\\) condition:
+   \begin{equation\*}
-   \tcmbox{\hnorm{\begin{matrix}w\_P S \\\\ w\_I T\end{matrix}} = maxω \sqrt{\abs{w\_P S}^2 + \abs{w\_I T}^2} <1}
+   \tcmbox{\hnorm{\begin{matrix}w\_P S \\ w\_I T\end{matrix}} = \max\_{\omega} \sqrt{\abs{w\_P S}^2 + \abs{w\_I T}^2} <1}
+   \end{equation\*}
-\end{equation\*}
-   This condition is within a factor at most \\(\sqrt{2}\\) of the true RP-condition.
-   This means that **for SISO systems, we can closely approximate the RP-condition in terms of an \\(\hinf\\) problem**, so there is no need to make use of the structured singular value.
-   However, we will see that the situation can be very different for MIMO systems.
-
-1. The RP-condition can be used to derive bounds on the loop shape \\(\abs{L}\\):
+   This condition is within a factor at most \\(\sqrt{2}\\) of the true RP-condition.
+   This means that **for SISO systems, we can closely approximate the RP-condition in terms of an \\(\hinf\\) problem**, so there is no need to make use of the structured singular value.
+   However, we will see that the situation can be very different for MIMO systems.
+2. The RP-condition can be used to derive bounds on the loop shape \\(\abs{L}\\):
\begin{align\*}
\abs{L} &> \frac{1 + \abs{w\_P}}{1 - \abs{w\_I}}, \text{ at frequencies where } \abs{w\_I} < 1\\\\\\
@@ -3613,9 +3606,9 @@ In the transfer function form:
with \\(\Phi(s) \triangleq (sI - A)^{-1}\\).
-This is illustrated in the block diagram of Fig. [25](#org80cc1de), which is in the form of an inverse additive perturbation. +This is illustrated in the block diagram of Fig. [25](#org04e44fe), which is in the form of an inverse additive perturbation. - + {{< figure src="/ox-hugo/skogestad07_uncertainty_state_a_matrix.png" caption="Figure 25: Uncertainty in state space A-matrix" >}} @@ -3633,7 +3626,7 @@ We also derived a condition for robust performance with multiplicative uncertain ## Robust Stability and Performance Analysis {#robust-stability-and-performance-analysis} - + ### General Control Configuration with Uncertainty {#general-control-configuration-with-uncertainty} @@ -3651,15 +3644,15 @@ The starting point for our robustness analysis is a system representation in whi where each \\(\Delta\_i\\) represents a **specific source of uncertainty**, e.g. input uncertainty \\(\Delta\_I\\) or parametric uncertainty \\(\delta\_i\\). -If we also pull out the controller \\(K\\), we get the generalized plant \\(P\\) as shown in Fig. [26](#orgbe92de5). This form is useful for controller synthesis. +If we also pull out the controller \\(K\\), we get the generalized plant \\(P\\) as shown in Fig. [26](#org7a82cd8). This form is useful for controller synthesis. - + {{< figure src="/ox-hugo/skogestad07_general_control_delta.png" caption="Figure 26: General control configuration used for controller synthesis" >}} -If the controller is given and we want to analyze the uncertain system, we use the \\(N\Delta\text{-structure}\\) in Fig. [27](#org041abfb). +If the controller is given and we want to analyze the uncertain system, we use the \\(N\Delta\text{-structure}\\) in Fig. [27](#org59e1836). 
- + {{< figure src="/ox-hugo/skogestad07_general_control_Ndelta.png" caption="Figure 27: \\(N\Delta\text{-structure}\\) for robust performance analysis" >}} @@ -3677,9 +3670,9 @@ Similarly, the uncertain closed-loop transfer function from \\(w\\) to \\(z\\), &\triangleq N\_{22} + N\_{21} \Delta (I - N\_{11} \Delta)^{-1} N\_{12} \end{align\*} -To analyze robust stability of \\(F\\), we can rearrange the system into the \\(M\Delta\text{-structure}\\) shown in Fig. [28](#org4b32441) where \\(M = N\_{11}\\) is the transfer function from the output to the input of the perturbations. +To analyze robust stability of \\(F\\), we can rearrange the system into the \\(M\Delta\text{-structure}\\) shown in Fig. [28](#org79e475b) where \\(M = N\_{11}\\) is the transfer function from the output to the input of the perturbations. - + {{< figure src="/ox-hugo/skogestad07_general_control_Mdelta_bis.png" caption="Figure 28: \\(M\Delta\text{-structure}\\) for robust stability analysis" >}} @@ -3739,7 +3732,7 @@ Three common forms of **feedforward unstructured uncertainty** are shown Fig.&nb | ![](/ox-hugo/skogestad07_additive_uncertainty.png) | ![](/ox-hugo/skogestad07_input_uncertainty.png) | ![](/ox-hugo/skogestad07_output_uncertainty.png) | |----------------------------------------------------|----------------------------------------------------------|-----------------------------------------------------------| -| Additive uncertainty | Multiplicative input uncertainty | Multiplicative output uncertainty | +| Additive uncertainty | Multiplicative input uncertainty | Multiplicative output uncertainty | In Fig. [5](#table--fig:feedback-uncertainty), three **feedback or inverse unstructured uncertainty** forms are shown: inverse additive uncertainty, inverse multiplicative input uncertainty and inverse multiplicative output uncertainty. @@ -3764,7 +3757,7 @@ In Fig. 
[5](#table--fig:feedback-uncertainty), three **feedback or inverse unstructured uncertainty** forms are shown: inverse additive uncertainty, inverse multiplicative input uncertainty and inverse multiplicative output uncertainty.
| ![](/ox-hugo/skogestad07_inv_additive_uncertainty.png) | ![](/ox-hugo/skogestad07_inv_input_uncertainty.png) | ![](/ox-hugo/skogestad07_inv_output_uncertainty.png) |
|--------------------------------------------------------|------------------------------------------------------------------|-------------------------------------------------------------------|
-| Inverse additive uncertainty                           | Inverse multiplicative input uncertainty                         | Inverse multiplicative output uncertainty                         |
+| Inverse additive uncertainty | Inverse multiplicative input uncertainty | Inverse multiplicative output uncertainty |
##### Lumping uncertainty into a single perturbation {#lumping-uncertainty-into-a-single-perturbation}
@@ -3860,10 +3853,10 @@ where \\(r\_0\\) is the relative uncertainty at steady-state, \\(1/\tau\\) is th
### Obtaining \\(P\\), \\(N\\) and \\(M\\) {#obtaining--p----n--and--m}
-Let's consider the feedback system with multiplicative input uncertainty \\(\Delta\_I\\) shown Fig. [29](#org4f9f011).
+Let's consider the feedback system with multiplicative input uncertainty \\(\Delta\_I\\) shown in Fig. [29](#org7ed88cf).
\\(W\_I\\) is a normalization weight for the uncertainty and \\(W\_P\\) is a performance weight.
- +
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set_feedback_weight.png" caption="Figure 29: System with multiplicative input uncertainty and performance measured at the output" >}}
@@ -4047,7 +4040,7 @@ In order to get tighter condition we must use a tighter uncertainty description
Robust stability bounds in terms of the \\(\hinf\\) norm (\\(\text{RS}\Leftrightarrow\hnorm{M}<1\\)) are in general only tight when there is a single full perturbation block. An "exception" to this is when the uncertainty blocks enter or exit from the same location in the block diagram, because they can then be stacked on top of each other or side-by-side, in an overall \\(\Delta\\) which is then a full matrix.
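For a SISO instance of this setup, RS reduces to \(\|w_I T\|_\infty < 1\) and RP is closely approximated by \(\max_\omega \sqrt{|w_P S|^2 + |w_I T|^2} < 1\); both can be evaluated on a frequency grid. The loop shape and both weights below are illustrative choices, not taken from the book:

```python
import numpy as np

w = np.logspace(-2, 2, 500)               # frequency grid [rad/s]
s = 1j * w
L = 5.0 / (s * (0.1 * s + 1))             # assumed loop transfer function
S = 1.0 / (1.0 + L)                       # sensitivity
T = L / (1.0 + L)                         # complementary sensitivity

wI = (0.05 * s + 0.2) / (0.025 * s + 1)   # assumed uncertainty weight
wP = (s / 2.0 + 0.5) / (s + 0.5 * 1e-4)   # assumed performance weight
                                          # (M = 2, wB* = 0.5, A = 1e-4)

rs = np.max(np.abs(wI * T))               # grid estimate of ||w_I T||_inf
rp = np.max(np.sqrt(np.abs(wP * S) ** 2 + np.abs(wI * T) ** 2))
# rs < 1 indicates RS; rp < 1 indicates (approximate) RP for these choices.
```

The grid estimate of the \(\hinf\) norm is only as good as the grid, so in practice the frequency range should comfortably bracket the crossover region.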
-One important uncertainty description that falls into this category is the **coprime uncertainty description** shown in Fig. [30](#org8bb0812), for which the set of plants is
+One important uncertainty description that falls into this category is the **coprime uncertainty description** shown in Fig. [30](#org68009b1), for which the set of plants is

\begin{equation\*}
G\_p = (M\_l + \Delta\_M)^{-1}(N\_l + \Delta\_N), \quad \hnorm{[\Delta\_N, \ \Delta\_M]} \le \epsilon
\end{equation\*}

@@ -4057,7 +4050,7 @@ Where \\(G = M\_l^{-1} N\_l\\) is a left coprime factorization of the nominal pl

This uncertainty description is surprisingly **general**: it allows both zeros and poles to cross into the right-half plane, and has proven to be very useful in applications.

- +

{{< figure src="/ox-hugo/skogestad07_coprime_uncertainty.png" caption="Figure 30: Coprime Uncertainty" >}}

@@ -4103,10 +4096,10 @@ To this effect, introduce the block-diagonal scaling matrix

where \\(d\_i\\) is a scalar and \\(I\_i\\) is an identity matrix of the same dimension as the \\(i\\)'th perturbation block \\(\Delta\_i\\).

-Now rescale the inputs and outputs of \\(M\\) and \\(\Delta\\) by inserting the matrices \\(D\\) and \\(D^{-1}\\) on both sides as shown in Fig. [31](#orga3e207a).
+Now rescale the inputs and outputs of \\(M\\) and \\(\Delta\\) by inserting the matrices \\(D\\) and \\(D^{-1}\\) on both sides as shown in Fig. [31](#org45647bc).
This clearly has no effect on stability.

- +

{{< figure src="/ox-hugo/skogestad07_block_diagonal_scalings.png" caption="Figure 31: Use of block-diagonal scalings, \\(\Delta D = D \Delta\\)" >}}

@@ -4189,10 +4182,10 @@ A larger value of \\(\mu\\) is "bad" as it means that a smaller perturbation mak

 1. \\(\mu(\alpha M) = \abs{\alpha} \mu(M)\\) for any real scalar \\(\alpha\\)
 2. Let \\(\Delta = \diag{\Delta\_1, \Delta\_2}\\) be a block-diagonal perturbation and let \\(M\\) be partitioned accordingly.
Then
- \begin{equation\*}
- μ\_Δ ≥ \text{max} \\{μΔ\_1 (M11), μΔ\_2(M22) \\}
-\end{equation\*}
+ \begin{equation\*}
+ \mu\_\Delta(M) \ge \max \\{\mu\_{\Delta\_1}(M\_{11}), \mu\_{\Delta\_2}(M\_{22}) \\}
+ \end{equation\*}


#### Properties of \\(\mu\\) for Complex Perturbations \\(\Delta\\) {#properties-of--mu--for-complex-perturbations--delta}

@@ -4204,24 +4197,23 @@ A larger value of \\(\mu\\) is "bad" as it means that a smaller perturbation mak
 \end{equation}
 2. \\(\mu(\alpha M) = \abs{\alpha} \mu(M)\\) for any (complex) scalar \\(\alpha\\)
 3. For a full block complex perturbation \\(\Delta\\)
+ \begin{equation\*}

- μ(M) = \maxsv(M)
-
-\end{equation\*}
-
-1. \\(\mu\\) for complex perturbations is bounded by the spectral radius and the singular value
+ \mu(M) = \maxsv(M)
+ \end{equation\*}
+4. \\(\mu\\) for complex perturbations is bounded by the spectral radius and the singular value
 \begin{equation}
 \tcmbox{\rho(M) \le \mu(M) \le \maxsv(M)}
 \end{equation}
-2. **Improved lower bound**.
+5. **Improved lower bound**.
 Define \\(\mathcal{U}\\) as the set of all unitary matrices \\(U\\) with the same block diagonal structure as \\(\Delta\\).
 Then for complex \\(\Delta\\)
 \begin{equation}
 \tcmbox{\mu(M) = \max\_{U\in\mathcal{U}} \rho(MU)}
 \end{equation}
-3. **Improved upper bound**.
+6. **Improved upper bound**.
 Define \\(\mathcal{D}\\) as the set of all matrices \\(D\\) that commute with \\(\Delta\\).
 Then

@@ -4405,7 +4397,7 @@ Note that \\(\mu\\) underestimate how bad or good the actual worst case performa

### Application: RP with Input Uncertainty {#application-rp-with-input-uncertainty}

-We will now consider in some detail the case of multiplicative input uncertainty with performance defined in terms of weighted sensitivity (Fig. [29](#org4f9f011)).
+We will now consider in some detail the case of multiplicative input uncertainty with performance defined in terms of weighted sensitivity (Fig. [29](#org7ed88cf)).
The performance requirement is then @@ -4519,9 +4511,9 @@ with the decoupling controller we have: \overline{\sigma}(N\_{22}) = \overline{\sigma}(w\_P S) = \left|\frac{s/2 + 0.05}{s + 0.7}\right| \end{equation\*} -and we see from Fig. [32](#org7d3694a) that the NP-condition is satisfied. +and we see from Fig. [32](#org9186ac7) that the NP-condition is satisfied. - + {{< figure src="/ox-hugo/skogestad07_mu_plots_distillation.png" caption="Figure 32: \\(\mu\text{-plots}\\) for distillation process with decoupling controller" >}} @@ -4534,7 +4526,7 @@ In this case \\(w\_I T\_I = w\_I T\\) is a scalar times the identity matrix: \mu\_{\Delta\_I}(w\_I T\_I) = |w\_I t| = \left|0.2 \frac{5s + 1}{(0.5s + 1)(1.43s + 1)}\right| \end{equation\*} -and we see from Fig. [32](#org7d3694a) that RS is satisfied. +and we see from Fig. [32](#org9186ac7) that RS is satisfied. The peak value of \\(\mu\_{\Delta\_I}(M)\\) is \\(0.53\\) meaning that we may increase the uncertainty by a factor of \\(1/0.53 = 1.89\\) before the worst case uncertainty yields instability. @@ -4542,7 +4534,7 @@ The peak value of \\(\mu\_{\Delta\_I}(M)\\) is \\(0.53\\) meaning that we may in ##### RP {#rp} Although the system has good robustness margins and excellent nominal performance, the robust performance is poor. -This is shown in Fig. [32](#org7d3694a) where the \\(\mu\text{-curve}\\) for RP was computed numerically using \\(\mu\_{\hat{\Delta}}(N)\\), with \\(\hat{\Delta} = \text{diag}\\{\Delta\_I, \Delta\_P\\}\\) and \\(\Delta\_I = \text{diag}\\{\delta\_1, \delta\_2\\}\\). +This is shown in Fig. [32](#org9186ac7) where the \\(\mu\text{-curve}\\) for RP was computed numerically using \\(\mu\_{\hat{\Delta}}(N)\\), with \\(\hat{\Delta} = \text{diag}\\{\Delta\_I, \Delta\_P\\}\\) and \\(\Delta\_I = \text{diag}\\{\delta\_1, \delta\_2\\}\\). The peak value is close to 6, meaning that even with 6 times less uncertainty, the weighted sensitivity will be about 6 times larger than what we require. 
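The scalar NP and RS expressions quoted above are easy to check numerically; a numpy sketch that sweeps them over frequency (the RS peak should reproduce the quoted value of about \\(0.53\\)):

```python
import numpy as np

# Frequency sweep of the scalar NP and RS conditions for the distillation
# example, using the expressions quoted in the text.
w = np.logspace(-3, 3, 4000)
s = 1j * w

np_curve = np.abs((s / 2 + 0.05) / (s + 0.7))  # sigma_bar(w_P S), NP condition
rs_curve = np.abs(0.2 * (5 * s + 1) / ((0.5 * s + 1) * (1.43 * s + 1)))  # |w_I t|, RS

print(f"NP peak: {np_curve.max():.2f} (< 1, NP satisfied)")
print(f"RS peak: {rs_curve.max():.2f} (< 1, RS satisfied)")
```

Both peaks stay below 1 (about 0.50 and 0.53), consistent with NP and RS being satisfied even though RP fails.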
@@ -4680,9 +4672,9 @@ The latter is an attempt to "flatten out" \\(\mu\\).

#### Example: \\(\mu\text{-synthesis}\\) with DK-iteration {#example--mu-text-synthesis--with-dk-iteration}

For simplicity, we will consider again the case of multiplicative uncertainty and performance defined in terms of weighted sensitivity.
-The uncertainty weight \\(w\_I I\\) and performance weight \\(w\_P I\\) are shown graphically in Fig. [33](#org3040e0c).
+The uncertainty weight \\(w\_I I\\) and performance weight \\(w\_P I\\) are shown graphically in Fig. [33](#orgf0f9726).

- +

{{< figure src="/ox-hugo/skogestad07_weights_distillation.png" caption="Figure 33: Uncertainty and performance weights" >}}

@@ -4696,8 +4688,8 @@ The scaling matrix \\(D\\) for \\(DND^{-1}\\) then has the structure \\(D = \tex

- Iteration No. 1.
  Step 1: with the initial scalings, the \\(\mathcal{H}\_\infty\\) synthesis produced a 6 state controller (2 states from the plant model and 2 from each of the weights).
- Step 2: the upper \\(\mu\text{-bound}\\) is shown in Fig. [34](#orgfced99f).
- Step 3: the frequency dependent \\(d\_1(\omega)\\) and \\(d\_2(\omega)\\) from step 2 are fitted using a 4th order transfer function shown in Fig. [35](#org462af49)
+ Step 2: the upper \\(\mu\text{-bound}\\) is shown in Fig. [34](#org4115ce1).
+ Step 3: the frequency dependent \\(d\_1(\omega)\\) and \\(d\_2(\omega)\\) from step 2 are fitted using a 4th order transfer function shown in Fig. [35](#org91e0266)
- Iteration No. 2.
  Step 1: with the 8 state scalings \\(D^1(s)\\), the \\(\mathcal{H}\_\infty\\) synthesis gives a 22 state controller.
  Step 2: This controller gives a peak value of \\(\mu\\) of \\(1.02\\).
@@ -4705,25 +4697,25 @@ The scaling matrix \\(D\\) for \\(DND^{-1}\\) then has the structure \\(D = \tex
- Iteration No. 3.
  Step 1: The \\(\mathcal{H}\_\infty\\) norm is only slightly reduced.
  We thus decide to stop the iterations.
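Step 2 of each iteration computes the upper \\(\mu\text{-bound}\\) by minimizing \\(\maxsv(DMD^{-1})\\) over the D-scalings, frequency by frequency. A minimal numpy/scipy sketch at a single frequency point (the matrix \\(M\\) is illustrative, not the distillation model):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Upper mu-bound min_D sigma_bar(D M D^-1) at one frequency, for a 2x2 M with
# two scalar perturbation blocks (so a single scaling d suffices, the second
# block being normalized to 1).  M is an illustrative complex/real sample.
M = np.array([[1.0, 2.0],
              [0.5, 1.0]])

def scaled_max_sv(log_d):
    d = np.exp(log_d)
    D = np.diag([d, 1.0])
    Dinv = np.diag([1.0 / d, 1.0])
    return np.linalg.svd(D @ M @ Dinv, compute_uv=False)[0]

res = minimize_scalar(scaled_max_sv, bounds=(-5, 5), method="bounded")
mu_upper = res.fun
rho = np.max(np.abs(np.linalg.eigvals(M)))  # spectral radius, a lower bound on mu
sv_max = np.linalg.svd(M, compute_uv=False)[0]
print(f"rho(M) = {rho:.3f} <= mu <= {mu_upper:.3f} <= sigma_bar(M) = {sv_max:.3f}")
```

For this particular \\(M\\) the lower and scaled upper bounds coincide at 2, so \\(\mu = 2\\) exactly, while the unscaled \\(\maxsv(M) = 2.5\\) is conservative.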
- + {{< figure src="/ox-hugo/skogestad07_dk_iter_mu.png" caption="Figure 34: Change in \\(\mu\\) during DK-iteration" >}} - + {{< figure src="/ox-hugo/skogestad07_dk_iter_d_scale.png" caption="Figure 35: Change in D-scale \\(d\_1\\) during DK-iteration" >}} -The final \\(\mu\text{-curves}\\) for NP, RS and RP with the controller \\(K\_3\\) are shown in Fig. [36](#org75929f1). +The final \\(\mu\text{-curves}\\) for NP, RS and RP with the controller \\(K\_3\\) are shown in Fig. [36](#orgc41ec53). The objectives of RS and NP are easily satisfied. The peak value of \\(\mu\\) is just slightly over 1, so the performance specification \\(\overline{\sigma}(w\_P S\_p) < 1\\) is almost satisfied for all possible plants. - + {{< figure src="/ox-hugo/skogestad07_mu_plot_optimal_k3.png" caption="Figure 36: \\(mu\text{-plots}\\) with \\(\mu\\) \"optimal\" controller \\(K\_3\\)" >}} -To confirm that, 6 perturbed plants are used to compute the perturbed sensitivity functions shown in Fig. [37](#org73cb573). +To confirm that, 6 perturbed plants are used to compute the perturbed sensitivity functions shown in Fig. [37](#org3b29e28). - + {{< figure src="/ox-hugo/skogestad07_perturb_s_k3.png" caption="Figure 37: Perturbed sensitivity functions \\(\overline{\sigma}(S^\prime)\\) using \\(\mu\\) \"optimal\" controller \\(K\_3\\). Lower solid line: nominal plant. Upper solid line: worst-case plant" >}} @@ -4790,7 +4782,7 @@ If resulting control performance is not satisfactory, one may switch to the seco ## Controller Design {#controller-design} - + ### Trade-offs in MIMO Feedback Design {#trade-offs-in-mimo-feedback-design} @@ -4800,7 +4792,7 @@ By multivariable transfer function shaping, therefore, we mean the shaping of th The classical loop-shaping ideas can be further generalized to MIMO systems by considering the singular values. -Consider the one degree-of-freedom system as shown in Fig. [38](#org86eebc5). +Consider the one degree-of-freedom system as shown in Fig. 
[38](#orgeadd66e). We have the following important relationships: \begin{align} @@ -4808,7 +4800,7 @@ We have the following important relationships: u(s) &= K(s) S(s) \big(r(s) - n(s) - d(s) \big) \end{align} - + {{< figure src="/ox-hugo/skogestad07_classical_feedback_small.png" caption="Figure 38: One degree-of-freedom feedback configuration" >}} @@ -4856,9 +4848,9 @@ Thus, over specified frequency ranges, it is relatively easy to approximate the -Typically, the open-loop requirements 1 and 3 are valid and important at low frequencies \\(0 \le \omega \le \omega\_l \le \omega\_B\\), while conditions 2, 4, 5 and 6 are conditions which are valid and important at high frequencies \\(\omega\_B \le \omega\_h \le \omega \le \infty\\), as illustrated in Fig. [39](#org4fadbbc). +Typically, the open-loop requirements 1 and 3 are valid and important at low frequencies \\(0 \le \omega \le \omega\_l \le \omega\_B\\), while conditions 2, 4, 5 and 6 are conditions which are valid and important at high frequencies \\(\omega\_B \le \omega\_h \le \omega \le \infty\\), as illustrated in Fig. [39](#org277c195). - + {{< figure src="/ox-hugo/skogestad07_design_trade_off_mimo_gk.png" caption="Figure 39: Design trade-offs for the multivariable loop transfer function \\(GK\\)" >}} @@ -4917,9 +4909,9 @@ The optimal state estimate is given by a **Kalman filter**. The solution to the LQG problem is then found by replacing \\(x\\) by \\(\hat{x}\\) to give \\(u(t) = -K\_r \hat{x}\\). -We therefore see that the LQG problem and its solution can be separated into two distinct parts as illustrated in Fig. [40](#orgc60d97c): the optimal state feedback and the optimal state estimator (the Kalman filter). +We therefore see that the LQG problem and its solution can be separated into two distinct parts as illustrated in Fig. [40](#org8fd0749): the optimal state feedback and the optimal state estimator (the Kalman filter). 
- + {{< figure src="/ox-hugo/skogestad07_lqg_separation.png" caption="Figure 40: The separation theorem" >}} @@ -4951,7 +4943,7 @@ and \\(X\\) is the unique positive-semi definite solution of the algebraic Ricca
-The **Kalman filter** has the structure of an ordinary state-estimator, as shown on Fig. [41](#org4105277), with: +The **Kalman filter** has the structure of an ordinary state-estimator, as shown on Fig. [41](#org04ebefc), with: \begin{equation} \label{eq:kalman\_filter\_structure} \dot{\hat{x}} = A\hat{x} + Bu + K\_f(y-C\hat{x}) @@ -4971,11 +4963,11 @@ Where \\(Y\\) is the unique positive-semi definite solution of the algebraic Ric
- + {{< figure src="/ox-hugo/skogestad07_lqg_kalman_filter.png" caption="Figure 41: The LQG controller and noisy plant" >}} -The structure of the LQG controller is illustrated in Fig. [41](#org4105277), its transfer function from \\(y\\) to \\(u\\) is given by +The structure of the LQG controller is illustrated in Fig. [41](#org04ebefc), its transfer function from \\(y\\) to \\(u\\) is given by \begin{align\*} L\_{\text{LQG}}(s) &= \left[ \begin{array}{c|c} @@ -4990,9 +4982,9 @@ The structure of the LQG controller is illustrated in Fig. [41](#org4105277 It has the same degree (number of poles) as the plant.
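The two-Riccati construction above can be sketched with scipy (the plant, weights and noise intensities below are illustrative); the separation theorem shows up as two independent stability checks:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch of the LQG solution: state feedback u = -Kr x_hat with
# Kr = R^-1 B^T X, and Kalman gain Kf = Y C^T V^-1, X and Y solving the
# regulator and filter algebraic Riccati equations.  Data is illustrative.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2); R = np.eye(1)   # LQR weights
W = np.eye(2); V = np.eye(1)   # process / measurement noise intensities

X = solve_continuous_are(A, B, Q, R)        # regulator Riccati equation
Kr = np.linalg.solve(R, B.T @ X)
Y = solve_continuous_are(A.T, C.T, W, V)    # filter Riccati (dual problem)
Kf = Y @ C.T @ np.linalg.inv(V)

# Separation: regulator and estimator dynamics are each stable on their own.
reg_poles = np.linalg.eigvals(A - B @ Kr)
est_poles = np.linalg.eigvals(A - Kf @ C)
print("regulator poles:", reg_poles)
print("estimator poles:", est_poles)
```

Note that the LQG controller then has the plant's order, as stated above, since it combines the \\(n\\)-state estimator with the static gains \\(K\_r\\) and \\(K\_f\\).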
-For the LQG-controller, as shown on Fig. [41](#org4105277), it is not easy to see where to position the reference input \\(r\\) and how integral action may be included, if desired. Indeed, the standard LQG design procedure does not give a controller with integral action. One strategy is illustrated in Fig. [42](#org7d807ad). Here, the control error \\(r-y\\) is integrated and the regulator \\(K\_r\\) is designed for the plant augmented with these integral states. +For the LQG-controller, as shown on Fig. [41](#org04ebefc), it is not easy to see where to position the reference input \\(r\\) and how integral action may be included, if desired. Indeed, the standard LQG design procedure does not give a controller with integral action. One strategy is illustrated in Fig. [42](#org33edd20). Here, the control error \\(r-y\\) is integrated and the regulator \\(K\_r\\) is designed for the plant augmented with these integral states. - + {{< figure src="/ox-hugo/skogestad07_lqg_integral.png" caption="Figure 42: LQG controller with integral action and reference input" >}} @@ -5005,18 +4997,18 @@ Their main limitation is that they can only be applied to minimum phase plants. ### \\(\htwo\\) and \\(\hinf\\) Control {#htwo--and--hinf--control} - + #### General Control Problem Formulation {#general-control-problem-formulation} - + There are many ways in which feedback design problems can be cast as \\(\htwo\\) and \\(\hinf\\) optimization problems. It is very useful therefore to have a **standard problem formulation** into which any particular problem may be manipulated. -Such a general formulation is afforded by the general configuration shown in Fig. [43](#orgb9feed3). +Such a general formulation is afforded by the general configuration shown in Fig. [43](#org3805aa9). 
- + {{< figure src="/ox-hugo/skogestad07_general_control.png" caption="Figure 43: General control configuration" >}} @@ -5196,7 +5188,7 @@ Then the LQG cost function is #### \\(\hinf\\) Optimal Control {#hinf--optimal-control} -With reference to the general control configuration on Fig. [43](#orgb9feed3), the standard \\(\hinf\\) optimal control problem is to find all stabilizing controllers \\(K\\) which minimize +With reference to the general control configuration on Fig. [43](#org3805aa9), the standard \\(\hinf\\) optimal control problem is to find all stabilizing controllers \\(K\\) which minimize \begin{equation\*} \hnorm{F\_l(P, K)} = \max\_{\omega} \maxsv\big(F\_l(P, K)(j\omega)\big) @@ -5308,7 +5300,7 @@ In general, the scalar weighting functions \\(w\_1(s)\\) and \\(w\_2(s)\\) can b This can be useful for **systems with channels of quite different bandwidths**. In that case, **diagonal weights are recommended** as anything more complicated is usually not worth the effort.
-To see how this mixed sensitivity problem can be formulated in the general setting, we can imagine the disturbance \\(d\\) as a single exogenous input and define and error signal \\(z = [z\_1^T\ z\_2^T]^T\\), where \\(z\_1 = W\_1 y\\) and \\(z\_2 = -W\_2 u\\) as illustrated in Fig. [44](#orgfa217a8). +To see how this mixed sensitivity problem can be formulated in the general setting, we can imagine the disturbance \\(d\\) as a single exogenous input and define and error signal \\(z = [z\_1^T\ z\_2^T]^T\\), where \\(z\_1 = W\_1 y\\) and \\(z\_2 = -W\_2 u\\) as illustrated in Fig. [44](#org75d0efb). We can then see that \\(z\_1 = W\_1 S w\\) and \\(z\_2 = W\_2 KS w\\) as required. The elements of the generalized plant are @@ -5325,16 +5317,16 @@ The elements of the generalized plant are \end{array} \end{equation\*} - + {{< figure src="/ox-hugo/skogestad07_mixed_sensitivity_dist_rejection.png" caption="Figure 44: \\(S/KS\\) mixed-sensitivity optimization in standard form (regulation)" >}} -Another interpretation can be put on the \\(S/KS\\) mixed-sensitivity optimization as shown in the standard control configuration of Fig. [45](#orgb5a3be8). +Another interpretation can be put on the \\(S/KS\\) mixed-sensitivity optimization as shown in the standard control configuration of Fig. [45](#org4eab9a9). Here we consider a tracking problem. The exogenous input is a reference command \\(r\\), and the error signals are \\(z\_1 = -W\_1 e = W\_1 (r-y)\\) and \\(z\_2 = W\_2 u\\). -As the regulation problem of Fig. [44](#orgfa217a8), we have that \\(z\_1 = W\_1 S w\\) and \\(z\_2 = W\_2 KS w\\). +As the regulation problem of Fig. [44](#org75d0efb), we have that \\(z\_1 = W\_1 S w\\) and \\(z\_2 = W\_2 KS w\\). 
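The claim \\(z\_1 = W\_1 S w\\), \\(z\_2 = W\_2 KS w\\) can be verified by closing the lower LFT by hand; a scalar numpy sketch with an illustrative plant, controller and weights:

```python
import numpy as np

# Check that the lower LFT of the S/KS generalized plant gives
# z1 = W1*S*w and z2 = W2*K*S*w (scalar case, illustrative data).
s = 1j * 2.0                   # test frequency, omega = 2 rad/s
G = 1 / (s + 1)                # plant
K = 10.0                       # (static) controller
W1, W2 = 1 / (s + 0.1), 0.5    # weights, chosen for illustration

# Generalized plant P = [P11 P12; P21 P22] for the regulation problem,
# with z1 = W1*y, z2 = -W2*u and v = -(w + G*u):
P11 = np.array([[W1], [0.0]])
P12 = np.array([[W1 * G], [-W2]])
P21 = np.array([[-1.0]])
P22 = np.array([[-G]])

# Lower LFT: F_l(P, K) = P11 + P12 K (I - P22 K)^-1 P21  (all scalar here)
N = P11 + P12 * K * P21 / (1 - P22[0, 0] * K)

S = 1 / (1 + G * K)            # sensitivity of the closed loop
assert np.allclose(N.flatten(), [W1 * S, W2 * K * S])
print("F_l(P,K) = [W1*S; W2*K*S] verified at omega = 2")
```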
- + {{< figure src="/ox-hugo/skogestad07_mixed_sensitivity_ref_tracking.png" caption="Figure 45: \\(S/KS\\) mixed-sensitivity optimization in standard form (tracking)" >}} @@ -5347,7 +5339,7 @@ Another useful mixed sensitivity optimization problem, is to find a stabilizing The ability to shape \\(T\\) is desirable for tracking problems and noise attenuation. It is also important for robust stability with respect to multiplicative perturbations at the plant output. -The \\(S/T\\) mixed-sensitivity minimization problem can be put into the standard control configuration as shown in Fig. [46](#org8b13336). +The \\(S/T\\) mixed-sensitivity minimization problem can be put into the standard control configuration as shown in Fig. [46](#orgc12d5db). The elements of the generalized plant are @@ -5364,7 +5356,7 @@ The elements of the generalized plant are \end{array} \end{equation\*} - + {{< figure src="/ox-hugo/skogestad07_mixed_sensitivity_s_t.png" caption="Figure 46: \\(S/T\\) mixed-sensitivity optimization in standard form" >}} @@ -5390,9 +5382,9 @@ The focus of attention has moved to the size of signals and away from the size a Weights are used to describe the expected or known frequency content of exogenous signals and the desired frequency content of error signals. -Weights are also used if a perturbation is used to model uncertainty, as in Fig. [47](#orgc73293b), where \\(G\\) represents the nominal model, \\(W\\) is a weighting function that captures the relative model fidelity over frequency, and \\(\Delta\\) represents unmodelled dynamics usually normalized such that \\(\hnorm{\Delta} < 1\\). +Weights are also used if a perturbation is used to model uncertainty, as in Fig. [47](#org99bdb2a), where \\(G\\) represents the nominal model, \\(W\\) is a weighting function that captures the relative model fidelity over frequency, and \\(\Delta\\) represents unmodelled dynamics usually normalized such that \\(\hnorm{\Delta} < 1\\). 
- + {{< figure src="/ox-hugo/skogestad07_input_uncertainty_hinf.png" caption="Figure 47: Multiplicative dynamic uncertainty model" >}} @@ -5401,9 +5393,9 @@ As we have seen, the weights \\(Q\\) and \\(R\\) are constant, but LQG can be ge When we consider a system's response to persistent sinusoidal signals of varying frequency, or when we consider the induced 2-norm between the exogenous input signals and the error signals, we are required to minimize the \\(\hinf\\) norm. In the absence of model uncertainty, there does not appear to be an overwhelming case for using the \\(\hinf\\) norm rather than the more traditional \\(\htwo\\) norm. -However, when uncertainty is addressed, as it always should be, \\(\hinf\\) is clearly the more **natural approach** using component uncertainty models as in Fig. [47](#orgc73293b).
+However, when uncertainty is addressed, as it always should be, \\(\hinf\\) is clearly the more **natural approach** using component uncertainty models as in Fig. [47](#org99bdb2a).
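As a small numerical illustration of the two norms for \\(G(s) = 1/(s+1)\\) (the analytic values \\(\hnorm{G} = 1\\) and \\(\|G\|\_2 = 1/\sqrt{2}\\) are standard; here they are approximated on a frequency grid):

```python
import numpy as np

# H-infinity norm: peak of |G(jw)|.  H2 norm: sqrt(1/(2 pi) int |G(jw)|^2 dw),
# approximated by a Riemann sum on a uniform grid.  G(s) = 1/(s+1).
w = np.linspace(-200.0, 200.0, 200_001)
G = 1 / (1j * w + 1)
hinf = np.max(np.abs(G))
dw = w[1] - w[0]
h2 = np.sqrt(np.sum(np.abs(G) ** 2) * dw / (2 * np.pi))
print(f"||G||_inf = {hinf:.4f}, ||G||_2 = {h2:.4f}")
```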
-A typical problem using the signal-based approach to \\(\hinf\\) control is illustrated in the interconnection diagram of Fig. [48](#org045d86d). +A typical problem using the signal-based approach to \\(\hinf\\) control is illustrated in the interconnection diagram of Fig. [48](#orgaef2c60). \\(G\\) and \\(G\_d\\) are nominal models of the plant and disturbance dynamics, and \\(K\\) is the controller to be designed. The weights \\(W\_d\\), \\(W\_r\\), and \\(W\_n\\) may be constant or dynamic and describe the relative importance and/or the frequency content of the disturbance, set points and noise signals. The weight \\(W\_\text{ref}\\) is a desired closed-loop transfer function between the weighted set point \\(r\_s\\) and the actual output \\(y\\). @@ -5424,11 +5416,11 @@ The problem can be cast as a standard \\(\hinf\\) optimization in the general co \end{bmatrix},\ u = u \end{equation\*} - + {{< figure src="/ox-hugo/skogestad07_hinf_signal_based.png" caption="Figure 48: A signal-based \\(\hinf\\) control problem" >}} -Suppose we now introduce a multiplicative dynamic uncertainty model at the input to the plant as shown in Fig. [49](#org80eb8d1). +Suppose we now introduce a multiplicative dynamic uncertainty model at the input to the plant as shown in Fig. [49](#orga90705b). The problem we now want to solve is: find a stabilizing controller \\(K\\) such that the \\(\hinf\\) norm of the transfer function between \\(w\\) and \\(z\\) is less that 1 for all \\(\Delta\\) where \\(\hnorm{\Delta} < 1\\). We have assumed in this statement that the **signal weights have normalized the 2-norm of the exogenous input signals to unity**. This problem is a non-standard \\(\hinf\\) optimization. 
@@ -5438,7 +5430,7 @@ It is a robust performance problem for which the \\(\mu\text{-synthesis}\\) proc \mu(M(j\omega)) < 1, \quad \forall\omega \end{equation\*} - + {{< figure src="/ox-hugo/skogestad07_hinf_signal_based_uncertainty.png" caption="Figure 49: A signal-based \\(\hinf\\) control problem with input multiplicative uncertainty" >}} @@ -5491,7 +5483,7 @@ The objective of robust stabilization is to stabilize not only the nominal model where \\(\epsilon > 0\\) is then the **stability margin**.
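For a normalized coprime factorization the best achievable margin has a known closed form (Glover-McFarlane), \\(\gamma\_\text{min} = \sqrt{1 + \rho(XZ)}\\), with \\(X\\) and \\(Z\\) solving the control and filter Riccati equations of the plant. A scipy sketch, assuming a strictly proper plant (\\(D = 0\\)) and an illustrative \\(G = 1/(s+1)\\):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# gamma_min = sqrt(1 + rho(X Z)) for coprime-factor uncertainty (D = 0 case).
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])

Z = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(1))  # filter ARE
X = solve_continuous_are(A, B, C.T @ C, np.eye(1))      # control ARE
gamma_min = np.sqrt(1 + np.max(np.abs(np.linalg.eigvals(X @ Z))))
print(f"gamma_min = {gamma_min:.4f}, max stability margin = {1 / gamma_min:.4f}")
```

For this first-order example \\(\gamma\_\text{min} \approx 1.08\\), i.e. a stability margin of about \\(0.92\\); the important point, emphasized in the text, is that this optimum is known a priori, before any controller is synthesized.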
-For the perturbed feedback system of Fig. [50](#org97456ed), the stability property is robust if and only if the nominal feedback system is stable and +For the perturbed feedback system of Fig. [50](#org69748b6), the stability property is robust if and only if the nominal feedback system is stable and \begin{equation\*} \gamma \triangleq \hnorm{\begin{bmatrix} @@ -5502,7 +5494,7 @@ For the perturbed feedback system of Fig. [50](#org97456ed), the stability Notice that \\(\gamma\\) is the \\(\hinf\\) norm from \\(\phi\\) to \\(\begin{bmatrix}u \cr y\end{bmatrix}\\) and \\((I-GK)^{-1}\\) is the sensitivity function for this positive feedback arrangement. - + {{< figure src="/ox-hugo/skogestad07_coprime_uncertainty_bis.png" caption="Figure 50: \\(\hinf\\) robust stabilization problem" >}} @@ -5558,7 +5550,7 @@ It is important to emphasize that since we can compute \\(\gamma\_\text{min}\\) #### A Systematic \\(\hinf\\) Loop-Shaping Design Procedure {#a-systematic--hinf--loop-shaping-design-procedure} - + Robust stabilization alone is not much used in practice because the designer is not able to specify any performance requirements. To do so, **pre and post compensators** are used to **shape the open-loop singular values** prior to robust stabilization of the "shaped" plant. @@ -5569,9 +5561,9 @@ If \\(W\_1\\) and \\(W\_2\\) are the pre and post compensators respectively, the G\_s = W\_2 G W\_1 \end{equation} -as shown in Fig. [51](#org6f45506). +as shown in Fig. [51](#org13474af). - + {{< figure src="/ox-hugo/skogestad07_shaped_plant.png" caption="Figure 51: The shaped plant and controller" >}} @@ -5604,11 +5596,11 @@ Systematic procedure for \\(\hinf\\) loop-shaping design: - A small value of \\(\epsilon\_{\text{max}}\\) indicates that the chosen singular value loop-shapes are incompatible with robust stability requirements 7. **Analyze the design** and if not all the specification are met, make further modifications to the weights 8. **Implement the controller**. 
- The configuration shown in Fig. [52](#org7eb2c79) has been found useful when compared with the conventional setup in Fig. [38](#org86eebc5). + The configuration shown in Fig. [52](#orgcc7ab7b) has been found useful when compared with the conventional setup in Fig. [38](#orgeadd66e). This is because the references do not directly excite the dynamics of \\(K\_s\\), which can result in large amounts of overshoot. The constant prefilter ensure a steady-state gain of \\(1\\) between \\(r\\) and \\(y\\), assuming integral action in \\(W\_1\\) or \\(G\\) - + {{< figure src="/ox-hugo/skogestad07_shapping_practical_implementation.png" caption="Figure 52: A practical implementation of the loop-shaping controller" >}} @@ -5631,25 +5623,25 @@ Many control design problems possess two degrees-of-freedom: Sometimes, one degree-of-freedom is left out of the design, and the controller is driven by an error signal i.e. the difference between a command and the output. But in cases where stringent time-domain specifications are set on the output response, a one degree-of-freedom structure may not be sufficient.
-A general two degrees-of-freedom feedback control scheme is depicted in Fig. [53](#orgc049587). +A general two degrees-of-freedom feedback control scheme is depicted in Fig. [53](#org4a9f611). The commands and feedbacks enter the controller separately and are independently processed. - + {{< figure src="/ox-hugo/skogestad07_classical_feedback_2dof_simple.png" caption="Figure 53: General two degrees-of-freedom feedback control scheme" >}} The presented \\(\mathcal{H}\_\infty\\) loop-shaping design procedure in section is a one-degree-of-freedom design, although a **constant** pre-filter can be easily implemented for steady-state accuracy. However, this may not be sufficient and a dynamic two degrees-of-freedom design is required.
-The design problem is illustrated in Fig. [54](#org1ff3e15). +The design problem is illustrated in Fig. [54](#orgde6213a). The feedback part of the controller \\(K\_2\\) is designed to meet robust stability and disturbance rejection requirements. A prefilter is introduced to force the response of the closed-loop system to follow that of a specified model \\(T\_{\text{ref}}\\), often called the **reference model**. - + {{< figure src="/ox-hugo/skogestad07_coprime_uncertainty_hinf.png" caption="Figure 54: Two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping design problem" >}} -The design problem is to find the stabilizing controller \\(K = [K\_1,\ K\_2]\\) for the shaped plant \\(G\_s = G W\_1\\), with a normalized coprime factorization \\(G\_s = M\_s^{-1} N\_s\\), which minimizes the \\(\mathcal{H}\_\infty\\) norm of the transfer function between the signals \\([r^T\ \phi^T]^T\\) and \\([u\_s^T\ y^T\ e^T]^T\\) as defined in Fig. [54](#org1ff3e15). +The design problem is to find the stabilizing controller \\(K = [K\_1,\ K\_2]\\) for the shaped plant \\(G\_s = G W\_1\\), with a normalized coprime factorization \\(G\_s = M\_s^{-1} N\_s\\), which minimizes the \\(\mathcal{H}\_\infty\\) norm of the transfer function between the signals \\([r^T\ \phi^T]^T\\) and \\([u\_s^T\ y^T\ e^T]^T\\) as defined in Fig. [54](#orgde6213a). This problem is easily cast into the general configuration. The control signal to the shaped plant \\(u\_s\\) is given by: @@ -5679,9 +5671,9 @@ The main steps required to synthesize a two degrees-of-freedom \\(\mathcal{H}\_\ 5. Replace the prefilter \\(K\_1\\) by \\(K\_1 W\_i\\) to give exact model-matching at steady-state. 6. Analyze and, if required, redesign making adjustments to \\(\rho\\) and possibly \\(W\_1\\) and \\(T\_{\text{ref}}\\) -The final two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping controller is illustrated in Fig. [55](#org4188cd2). 
+The final two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping controller is illustrated in Fig. [55](#orgae50741). - + {{< figure src="/ox-hugo/skogestad07_hinf_synthesis_2dof.png" caption="Figure 55: Two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping controller" >}} @@ -5763,9 +5755,9 @@ When implemented in Hanus form, the expression for \\(u\\) becomes where \\(u\_a\\) is the **actual plant input**, that is the measurement at the **output of the actuators** which therefore contains information about possible actuator saturation. -The situation is illustrated in Fig. [56](#org31876f1), where the actuators are each modeled by a unit gain and a saturation. +The situation is illustrated in Fig. [56](#org6b40496), where the actuators are each modeled by a unit gain and a saturation. - + {{< figure src="/ox-hugo/skogestad07_weight_anti_windup.png" caption="Figure 56: Self-conditioned weight \\(W\_1\\)" >}} @@ -5821,14 +5813,14 @@ Moreover, one should be careful about combining controller synthesis and analysi ## Controller Structure Design {#controller-structure-design} - + ### Introduction {#introduction} -In previous sections, we considered the general problem formulation in Fig. [57](#org26a1f1c) and stated that the controller design problem is to find a controller \\(K\\) which based on the information in \\(v\\), generates a control signal \\(u\\) which counteracts the influence of \\(w\\) on \\(z\\), thereby minimizing the closed loop norm from \\(w\\) to \\(z\\). +In previous sections, we considered the general problem formulation in Fig. [57](#org2de8fca) and stated that the controller design problem is to find a controller \\(K\\) which based on the information in \\(v\\), generates a control signal \\(u\\) which counteracts the influence of \\(w\\) on \\(z\\), thereby minimizing the closed loop norm from \\(w\\) to \\(z\\). 
- + {{< figure src="/ox-hugo/skogestad07_general_control_names_bis.png" caption="Figure 57: General Control Configuration" >}} @@ -5861,19 +5853,19 @@ The reference value \\(r\\) is usually set at some higher layer in the control h - **Optimization layer**: computes the desired reference commands \\(r\\) - **Control layer**: implements these commands to achieve \\(y \approx r\\) -Additional layers are possible, as is illustrated in Fig. [58](#org66b6458) which shows a typical control hierarchy for a chemical plant. +Additional layers are possible, as is illustrated in Fig. [58](#org04403a5) which shows a typical control hierarchy for a chemical plant. - + {{< figure src="/ox-hugo/skogestad07_system_hierarchy.png" caption="Figure 58: Typical control system hierarchy in a chemical plant" >}} -In general, the information flow in such a control hierarchy is based on the higher layer sending reference values (setpoints) to the layer below reporting back any problems achieving this (see Fig. [6](#org72131f5)). +In general, the information flow in such a control hierarchy is based on the higher layer sending reference values (setpoints) to the layer below reporting back any problems achieving this (see Fig. [6](#org564c9b9)). There is usually a time scale separation between the layers which means that the **setpoints**, as viewed from a given layer, are **updated only periodically**.
The optimization tends to be performed open-loop with limited use of feedback. On the other hand, the control layer is mainly based on feedback information. The **optimization is often based on nonlinear steady-state models**, whereas we often use **linear dynamic models in the control layer**.
-From a theoretical point of view, the optimal performance is obtained with a **centralized optimizing controller**, which combines the two layers of optimizing and control (see Fig. [6](#org44604ac)). +From a theoretical point of view, the optimal performance is obtained with a **centralized optimizing controller**, which combines the two layers of optimizing and control (see Fig. [6](#org91aaf5b)). All control actions in such an ideal control system would be perfectly coordinated and the control system would use on-line dynamic optimization based on a nonlinear dynamic model of the complete plant. However, this solution is normally not used for a number of reasons, including the cost of modeling, the difficulty of controller design, maintenance, robustness problems and the lack of computing power. @@ -5885,7 +5877,7 @@ However, this solution is normally not used for a number a reasons, included the | ![](/ox-hugo/skogestad07_optimize_control_a.png) | ![](/ox-hugo/skogestad07_optimize_control_b.png) | ![](/ox-hugo/skogestad07_optimize_control_c.png) | |--------------------------------------------------|--------------------------------------------------------------------------------|-------------------------------------------------------------| -| Open loop optimization | Closed-loop implementation with separate control layer | Integrated optimization and control | +| Open loop optimization | Closed-loop implementation with separate control layer | Integrated optimization and control | ### Selection of Controlled Outputs {#selection-of-controlled-outputs} @@ -5976,14 +5968,13 @@ The use of the minimum singular value to select controlled outputs may be summar 1. From a (nonlinear) model compute the optimal parameters (inputs and outputs) for various conditions (disturbances, operating points). This yields a "look-up" table for optimal parameter values as a function of the operating conditions 2. 
From this data, obtain for each candidate output the variation in its optimal value + \begin{equation\*} - v\_i = \frac{(yi\text{opt,max} - yi\text{opt,min})}{2} - -\end{equation\*} - -1. Scale the candidate outputs such that for each output the sum of the magnitudes of \\(v\_i\\) and the control error (\\(e\_i\\), including measurement noise \\(n\_i\\)) is similar (e.g. \\(|v\_i| + |e\_i| = 1\\)) -2. Scale the inputs such that a unit deviation in each input from its optimal value has the same effect on the cost function \\(J\\) -3. Select as candidates those sets of controlled outputs which corresponds to a large value of \\(\underline{\sigma}(G)\\). + v\_i = \frac{(y\_{i\_{\text{opt,max}}} - y\_{i\_{\text{opt,min}}})}{2} + \end{equation\*} +3. Scale the candidate outputs such that for each output the sum of the magnitudes of \\(v\_i\\) and the control error (\\(e\_i\\), including measurement noise \\(n\_i\\)) is similar (e.g. \\(|v\_i| + |e\_i| = 1\\)) +4. Scale the inputs such that a unit deviation in each input from its optimal value has the same effect on the cost function \\(J\\) +5. Select as candidates those sets of controlled outputs which correspond to a large value of \\(\underline{\sigma}(G)\\). \\(G\\) is the transfer function for the effect of the scaled inputs on the scaled outputs @@ -6002,7 +5993,7 @@ Thus, the selection of controlled and measured outputs are two separate issues. ### Selection of Manipulations and Measurements {#selection-of-manipulations-and-measurements} -We are here concerned with the variable sets \\(u\\) and \\(v\\) in Fig. [57](#org26a1f1c). +We are here concerned with the variable sets \\(u\\) and \\(v\\) in Fig. [57](#org2de8fca). 
Note that **the measurements** \\(v\\) used by the controller **are in general different from the controlled variables** \\(z\\) because we may not be able to measure all the controlled variables and we may want to measure and control additional variables in order to: - Stabilize the plant, or more generally change its dynamics @@ -6095,9 +6086,9 @@ Then when a SISO control loop is closed, we lose the input \\(u\_i\\) as a degre A cascade control structure results when either of the following two situations arises: - The reference \\(r\_i\\) is an output from another controller. This is the **conventional cascade control** (Fig. [7](#org2115574)) - The "measurement" \\(y\_i\\) is an output from another controller. This is referred to as **input resetting** (Fig. [7](#org682e801))
@@ -6107,7 +6098,7 @@ A cascade control structure results when either of the following two situations | ![](/ox-hugo/skogestad07_cascade_extra_meas.png) | ![](/ox-hugo/skogestad07_cascade_extra_input.png) | |-------------------------------------------------------|---------------------------------------------------| -| Extra measurements \\(y\_2\\) | Extra inputs \\(u\_2\\) | +| Extra measurements \\(y\_2\\) | Extra inputs \\(u\_2\\) | #### Cascade Control: Extra Measurements {#cascade-control-extra-measurements} @@ -6131,7 +6122,7 @@ where in most cases \\(r\_2 = 0\\) since we do not have a degree-of-freedom to c ##### Cascade implementation {#cascade-implementation} -To obtain an implementation with two SISO controllers, we may cascade the controllers as illustrated in Fig. [7](#orgfd3fab2): +To obtain an implementation with two SISO controllers, we may cascade the controllers as illustrated in Fig. [7](#org2115574): \begin{align\*} r\_2 &= K\_1(s)(r\_1 - y\_1) \\\\\\ @@ -6141,13 +6132,13 @@ To obtain an implementation with two SISO controllers, we may cascade the contro Note that the output \\(r\_2\\) from the slower primary controller \\(K\_1\\) is not a manipulated plant input, but rather the reference input to the faster secondary controller \\(K\_2\\). Cascades based on measuring the actual manipulated variable (\\(y\_2 = u\_m\\)) are commonly used to **reduce uncertainty and non-linearity at the plant input**. -In the general case (Fig. [7](#orgfd3fab2)) \\(y\_1\\) and \\(y\_2\\) are not directly related to each other, and this is sometimes referred to as _parallel cascade control_. -However, it is common to encounter the situation in Fig. [59](#org860571e) where the primary output \\(y\_1\\) depends directly on \\(y\_2\\) which is a special case of Fig. [7](#orgfd3fab2). +In the general case (Fig. [7](#org2115574)) \\(y\_1\\) and \\(y\_2\\) are not directly related to each other, and this is sometimes referred to as _parallel cascade control_. 
+However, it is common to encounter the situation in Fig. [59](#orgef2e583) where the primary output \\(y\_1\\) depends directly on \\(y\_2\\) which is a special case of Fig. [7](#org2115574).
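As a rough numerical check of the cascade equations above (a sketch: the first-order models standing in for \\(G\_1\\) and \\(G\_2\\), and all controller gains, are invented for illustration, not taken from the book), a high-gain inner loop rejects the disturbance \\(d\_2\\) before it propagates to the primary output:

```python
# Conventional cascade (hypothetical plant models and gains):
#   outer loop: r2 = K1(r1 - y1)  (slow PI primary controller)
#   inner loop: u  = K2(r2 - y2)  (fast P secondary controller)
dt, n = 0.001, 20000
r1, d2 = 1.0, 0.5             # setpoint and disturbance entering at y2
y1 = y2 = i1 = 0.0
kp1, ki1 = 2.0, 1.0           # outer controller K1
kp2 = 50.0                    # high-gain inner controller K2

for _ in range(n):
    e1 = r1 - y1
    i1 += e1 * dt
    r2 = kp1 * e1 + ki1 * i1  # reference handed to the inner loop
    u = kp2 * (r2 - y2)       # inner loop: y2 tracks r2 quickly
    y2 += dt * (-5 * y2 + 5 * u + d2)   # fast secondary dynamics (G2)
    y1 += dt * (-y1 + y2)               # slower primary dynamics (G1)

print(abs(y1 - r1) < 0.02)    # d2 is largely absorbed by the inner loop
```

Setting `kp2` to a small value in this sketch lets the full effect of \\(d\_2\\) reach \\(y\_1\\), which is the point of closing the fast secondary loop first.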
-With reference to the special (but common) case of cascade control shown in Fig. [59](#org860571e), the use of **extra measurements** is useful under the following circumstances: +With reference to the special (but common) case of cascade control shown in Fig. [59](#orgef2e583), the use of **extra measurements** is useful under the following circumstances: - The disturbance \\(d\_2\\) is significant and \\(G\_1\\) is non-minimum phase. If \\(G\_1\\) is minimum phase, the input-output controllability of \\(G\_2\\) and \\(G\_1 G\_2\\) are the same and there is no fundamental advantage in measuring \\(y\_2\\) @@ -6156,7 +6147,7 @@ With reference to the special (but common) case of cascade control shown in Fig.
- + {{< figure src="/ox-hugo/skogestad07_cascade_control.png" caption="Figure 59: Common case of cascade control where the primary output \\(y\_1\\) depends directly on the extra measurement \\(y\_2\\)" >}} @@ -6184,7 +6175,7 @@ Then \\(u\_2(t)\\) will only be used for **transient control** and will return t ##### Cascade implementation {#cascade-implementation} -To obtain an implementation with two SISO controllers we may cascade the controllers as shown in Fig. [7](#orgfc7ecf3). +To obtain an implementation with two SISO controllers we may cascade the controllers as shown in Fig. [7](#org682e801). We again let input \\(u\_2\\) take care of the **fast control** and \\(u\_1\\) of the **long-term control**. The fast control loop is then @@ -6206,7 +6197,7 @@ It also shows more clearly that \\(r\_{u\_2}\\), the reference for \\(u\_2\\), m
-Consider the system in Fig. [60](#org097dc0e) with two manipulated inputs (\\(u\_2\\) and \\(u\_3\\)), one controlled output (\\(y\_1\\) which should be close to \\(r\_1\\)) and two measured variables (\\(y\_1\\) and \\(y\_2\\)). +Consider the system in Fig. [60](#orgc933e77) with two manipulated inputs (\\(u\_2\\) and \\(u\_3\\)), one controlled output (\\(y\_1\\) which should be close to \\(r\_1\\)) and two measured variables (\\(y\_1\\) and \\(y\_2\\)). Input \\(u\_2\\) has a more direct effect on \\(y\_1\\) than does input \\(u\_3\\) (there is a large delay in \\(G\_3(s)\\)). Input \\(u\_2\\) should only be used for transient control as it is desirable that it remains close to \\(r\_3 = r\_{u\_2}\\). The extra measurement \\(y\_2\\) is closer than \\(y\_1\\) to the input \\(u\_2\\) and may be useful for detecting disturbances affecting \\(G\_1\\). @@ -6218,7 +6209,7 @@ We would probably tune the three controllers in the order \\(K\_2\\), \\(K\_3\\)
- + {{< figure src="/ox-hugo/skogestad07_cascade_control_two_layers.png" caption="Figure 60: Control configuration with two layers of cascade control" >}} @@ -6322,7 +6313,7 @@ By partitioning the inputs and outputs, the overall model \\(y = G u\\) can be w \end{aligned} \end{equation} -Assume now that feedback control \\(u\_2 = K\_2(r\_2 - y\_2 - n\_2)\\) is used for the "secondary" subsystem involving \\(u\_2\\) and \\(y\_2\\) (Fig. [61](#org192bdd8)). +Assume now that feedback control \\(u\_2 = K\_2(r\_2 - y\_2 - n\_2)\\) is used for the "secondary" subsystem involving \\(u\_2\\) and \\(y\_2\\) (Fig. [61](#orgf85b7f5)). We get: \begin{equation} \label{eq:partial\_control} @@ -6333,7 +6324,7 @@ We get: \end{aligned} \end{equation} - + {{< figure src="/ox-hugo/skogestad07_partial_control.png" caption="Figure 61: Partial Control" >}} @@ -6392,7 +6383,7 @@ The selection of \\(u\_2\\) and \\(y\_2\\) for use in the lower-layer control sy ##### Sequential design of cascade control systems {#sequential-design-of-cascade-control-systems} -Consider the conventional cascade control system in Fig. [7](#orgfd3fab2) where we have additional "secondary" measurements \\(y\_2\\) with no associated control objective, and the objective is to improve the control of \\(y\_1\\) by locally controlling \\(y\_2\\). +Consider the conventional cascade control system in Fig. [7](#org2115574) where we have additional "secondary" measurements \\(y\_2\\) with no associated control objective, and the objective is to improve the control of \\(y\_1\\) by locally controlling \\(y\_2\\). The idea is that this should reduce the effect of disturbances and uncertainty on \\(y\_1\\). From \eqref{eq:partial_control}, it follows that we should select \\(y\_2\\) and \\(u\_2\\) such that \\(\\|P\_d\\|\\) is small and at least smaller than \\(\\|G\_{d1}\\|\\). 
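The transfer from \\(u\_1\\) to \\(y\_1\\) with the secondary loop closed, \\(P\_u = G\_{11} - G\_{12} K\_2 (I + G\_{22} K\_2)^{-1} G\_{21}\\), is easy to check numerically at steady state (a sketch with arbitrary \\(1 \times 1\\) blocks, not data from the book); with high gain \\(K\_2\\) it approaches \\(G\_{11} - G\_{12} G\_{22}^{-1} G\_{21}\\):

```python
import numpy as np

# Steady-state sketch of partial control with arbitrary 1x1 blocks.
G11 = np.array([[1.0]]); G12 = np.array([[0.5]])
G21 = np.array([[2.0]]); G22 = np.array([[4.0]])
I = np.eye(1)

def P_u(K2):
    """Transfer from u1 to y1 with the secondary loop u2 = K2 (r2 - y2) closed."""
    return G11 - G12 @ K2 @ np.linalg.inv(I + G22 @ K2) @ G21

print(P_u(np.array([[1000.0]]))[0, 0])               # ~0.75006, near the limit
print((G11 - G12 @ np.linalg.inv(G22) @ G21)[0, 0])  # high-gain limit: 0.75
```

The disturbance term \\(P\_d\\) from \eqref{eq:partial_control} has the same structure with \\(G\_{d1}\\), \\(G\_{d2}\\) in place of \\(G\_{11}\\), \\(G\_{21}\\), so candidate pairings \\((u\_2, y\_2)\\) can be screened the same way.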
@@ -6462,9 +6453,9 @@ Then to minimize the control error for the primary output, \\(J = \\|y\_1 - r\_1 ### Decentralized Feedback Control {#decentralized-feedback-control} -In this section, \\(G(s)\\) is a square plant which is to be controlled using a diagonal controller (Fig. [62](#org395f5fe)). +In this section, \\(G(s)\\) is a square plant which is to be controlled using a diagonal controller (Fig. [62](#org8730c03)). - + {{< figure src="/ox-hugo/skogestad07_decentralized_diagonal_control.png" caption="Figure 62: Decentralized diagonal control of a \\(2 \times 2\\) plant" >}} @@ -6864,7 +6855,7 @@ The conditions are also useful in an **input-output controllability analysis** f ## Model Reduction {#model-reduction} - + ### Introduction {#introduction} @@ -7291,6 +7282,7 @@ Good approximation at high frequencies may also sometimes be desired. In such a case, using truncation or optimal Hankel norm approximation with appropriate frequency weightings may yield better results. + ## Bibliography {#bibliography} -Skogestad, Sigurd, and Ian Postlethwaite. 2007. _Multivariable Feedback Control: Analysis and Design_. John Wiley. +Doyle, John C. 1983. “Synthesis of Robust Controllers and Filters.” In _The 22nd IEEE Conference on Decision and Control_, 109–14. IEEE.