Update Content - 2024-12-17

2024-12-17 15:23:12 +01:00
parent ba5c203e48
commit f0d28899bf
6 changed files with 210 additions and 209 deletions


@@ -56,7 +56,7 @@ draft = false
## Introduction {#introduction}
<span class="org-target" id="org-target--sec:introduction"></span>
<span class="org-target" id="org-target--sec-introduction"></span>
### The Process of Control System Design {#the-process-of-control-system-design}
@@ -183,11 +183,11 @@ In order to obtain a linear model from the "first-principle", the following appr
### Notation {#notation}
Notations used throughout this note are summarized in tables [1](#table--tab:notation-conventional), [2](#table--tab:notation-general) and [3](#table--tab:notation-tf).
Notations used throughout this note are summarized in [1](#table--tab:notation-conventional), [2](#table--tab:notation-general) and [3](#table--tab:notation-tf).
<a id="table--tab:notation-conventional"></a>
<div class="table-caption">
<span class="table-number"><a href="#table--tab:notation-conventional">Table 1</a></span>:
<span class="table-number"><a href="#table--tab:notation-conventional">Table 1</a>:</span>
Notations for the conventional control configuration
</div>
@@ -204,7 +204,7 @@ Notations used throughout this note are summarized in tables [1](#table--tab:not
<a id="table--tab:notation-general"></a>
<div class="table-caption">
<span class="table-number"><a href="#table--tab:notation-general">Table 2</a></span>:
<span class="table-number"><a href="#table--tab:notation-general">Table 2</a>:</span>
Notations for the general configuration
</div>
@@ -218,7 +218,7 @@ Notations used throughout this note are summarized in tables [1](#table--tab:not
<a id="table--tab:notation-tf"></a>
<div class="table-caption">
<span class="table-number"><a href="#table--tab:notation-tf">Table 3</a></span>:
<span class="table-number"><a href="#table--tab:notation-tf">Table 3</a>:</span>
Notations for transfer functions
</div>
@@ -231,7 +231,7 @@ Notations used throughout this note are summarized in tables [1](#table--tab:not
## Classical Feedback Control {#classical-feedback-control}
<span class="org-target" id="org-target--sec:classical_feedback"></span>
<span class="org-target" id="org-target--sec-classical-feedback"></span>
### Frequency Response {#frequency-response}
@@ -272,7 +272,7 @@ We note \\(N(\w\_0) = \left( \frac{d\ln{|G(j\w)|}}{d\ln{\w}} \right)\_{\w=\w\_0}
#### One Degree-of-Freedom Controller {#one-degree-of-freedom-controller}
The simple negative feedback structure with a one degree-of-freedom controller is represented in Fig.&nbsp;[1](#figure--fig:classical-feedback-alt).
The simple negative feedback structure with a one degree-of-freedom controller is represented in [1](#figure--fig:classical-feedback-alt).
The input to the controller \\(K(s)\\) is \\(r-y\_m\\) where \\(y\_m = y+n\\) is the measured output and \\(n\\) is the measurement noise.
Thus, the input to the plant is \\(u = K(s) (r-y-n)\\).
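The closed-loop maps implied by this structure can be sketched numerically. A minimal sketch, assuming an illustrative plant \\(G(s) = 1/(s+1)\\) and proportional controller \\(K = 10\\) (both hypothetical, not from the text): with \\(u = K(r - y - n)\\) the loop gives \\(y = Tr + Sd - Tn\\), where \\(L = GK\\), \\(S = 1/(1+L)\\) and \\(T = L/(1+L)\\).

```python
# Minimal sketch, assuming an illustrative plant G(s) = 1/(s+1) and a
# proportional controller K = 10 (both hypothetical, not from the text).
# With u = K (r - y - n), the closed loop gives y = T r + S d - T n,
# where L = G K, S = 1/(1 + L) and T = L/(1 + L).
def loop_maps(w):
    s = 1j * w
    G = 1.0 / (s + 1.0)
    K = 10.0
    L = G * K
    S = 1.0 / (1.0 + L)   # sensitivity
    T = L / (1.0 + L)     # complementary sensitivity
    return S, T

S0, T0 = loop_maps(0.0)   # steady state: S0 = 1/11, T0 = 10/11
```

Note that \\(S + T = 1\\) holds at every frequency, which the sketch confirms at \\(\w = 0\\).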
@@ -592,13 +592,13 @@ For reference tracking, we typically want the controller to look like \\(\frac{1
We cannot achieve both of these simultaneously with a single feedback controller.
The solution is to use a **two degrees of freedom controller** where the reference signal \\(r\\) and output measurement \\(y\_m\\) are independently treated by the controller (Fig.&nbsp;[2](#figure--fig:classical-feedback-2dof-alt)), rather than operating on their difference \\(r - y\_m\\).
The solution is to use a **two degrees of freedom controller** where the reference signal \\(r\\) and output measurement \\(y\_m\\) are independently treated by the controller ([2](#figure--fig:classical-feedback-2dof-alt)), rather than operating on their difference \\(r - y\_m\\).
<a id="figure--fig:classical-feedback-2dof-alt"></a>
{{< figure src="/ox-hugo/skogestad07_classical_feedback_2dof_alt.png" caption="<span class=\"figure-number\">Figure 2: </span>2 degrees-of-freedom control architecture" >}}
The controller can be split into two separate blocks (Fig.&nbsp;[3](#figure--fig:classical-feedback-sep)):
The controller can be split into two separate blocks ([3](#figure--fig:classical-feedback-sep)):
- the **feedback controller** \\(K\_y\\) that is used to **reduce the effect of uncertainty** (disturbances and model errors)
- the **prefilter** \\(K\_r\\) that **shapes the commands** \\(r\\) to improve tracking performance
@@ -672,7 +672,7 @@ Which can be expressed as an \\(\mathcal{H}\_\infty\\):
W\_P(s) = \frac{s/M + \w\_B^\*}{s + \w\_B^\* A}
\end{equation\*}
With (see Fig.&nbsp;[4](#figure--fig:performance-weigth)):
With (see [4](#figure--fig:performance-weigth)):
- \\(M\\): maximum magnitude of \\(\abs{S}\\)
- \\(\w\_B\\): crossover frequency
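A quick numeric check of the shape of this weight, with illustrative values \\(M = 2\\), \\(\w\_B^\* = 1\\), \\(A = 10^{-4}\\) (not taken from the text): the bound \\(\abs{1/W\_P}\\) on \\(\abs{S}\\) tends to \\(A\\) at low frequency and to \\(M\\) at high frequency.

```python
# Numeric check of the weight shape, with illustrative values M = 2,
# wB = 1 rad/s, A = 1e-4 (assumed for this sketch, not from the text).
M, wB, A = 2.0, 1.0, 1e-4

def WP(w):
    s = 1j * w
    return (s / M + wB) / (s + wB * A)

low = abs(1.0 / WP(1e-6))   # allowed |S| at low frequency: about A
high = abs(1.0 / WP(1e6))   # allowed |S| at high frequency: about M
```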
@@ -714,7 +714,7 @@ After selecting the form of \\(N\\) and the weights, the \\(\hinf\\) optimal con
## Introduction to Multivariable Control {#introduction-to-multivariable-control}
<span class="org-target" id="org-target--sec:multivariable_control"></span>
<span class="org-target" id="org-target--sec-multivariable-control"></span>
### Introduction {#introduction}
@@ -750,7 +750,7 @@ The main rule for evaluating transfer functions is the **MIMO Rule**: Start from
#### Negative Feedback Control Systems {#negative-feedback-control-systems}
For a negative feedback system (Fig.&nbsp;[5](#figure--fig:classical-feedback-bis)), we define \\(L\\) to be the loop transfer function as seen when breaking the loop at the **output** of the plant:
For a negative feedback system ([5](#figure--fig:classical-feedback-bis)), we define \\(L\\) to be the loop transfer function as seen when breaking the loop at the **output** of the plant:
- \\(L = G K\\)
- \\(S \triangleq (I + L)^{-1}\\) is the transfer function from \\(d\_1\\) to \\(y\\)
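At a single frequency these definitions reduce to matrix algebra. A sketch with illustrative \\(2 \times 2\\) gain matrices (hypothetical values, not from the text):

```python
import numpy as np

# Single-frequency sketch with illustrative 2x2 gains (not from the text):
# L = G K seen at the plant output, S = (I + L)^{-1}, T = I - S.
G = np.array([[1.0, 0.2],
              [0.1, 1.5]])
K = np.array([[2.0, 0.0],
              [0.0, 2.0]])
L = G @ K
I = np.eye(2)
S = np.linalg.inv(I + L)   # transfer from d1 to y
T = I - S                  # equals L (I + L)^{-1}
```

The identity \\(S + T = I\\) carries over from the SISO case.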
@@ -1109,7 +1109,7 @@ The **structured singular value** \\(\mu\\) is a tool for analyzing the effects
### General Control Problem Formulation {#general-control-problem-formulation}
The general control problem formulation is represented in Fig.&nbsp;[6](#figure--fig:general-control-names) (introduced in (<a href="#citeproc_bib_item_1">Doyle 1983</a>)).
The general control problem formulation is represented in [6](#figure--fig:general-control-names) (introduced in (<a href="#citeproc_bib_item_1">Doyle 1983</a>)).
<a id="figure--fig:general-control-names"></a>
@@ -1141,7 +1141,7 @@ Then we have to break all the "loops" entering and exiting the controller \\(K\\
#### Controller Design: Including Weights in \\(P\\) {#controller-design-including-weights-in-p}
In order to get a meaningful controller synthesis problem, for example in terms of the \\(\hinf\\) norms, we generally have to include the weights \\(W\_z\\) and \\(W\_w\\) in the generalized plant \\(P\\) (Fig.&nbsp;[7](#figure--fig:general-plant-weights)).
In order to get a meaningful controller synthesis problem, for example in terms of the \\(\hinf\\) norms, we generally have to include the weights \\(W\_z\\) and \\(W\_w\\) in the generalized plant \\(P\\) ([7](#figure--fig:general-plant-weights)).
We consider:
- The weighted or normalized exogenous inputs \\(w\\) (where \\(\tilde{w} = W\_w w\\) consists of the "physical" signals entering the system)
@@ -1199,7 +1199,7 @@ where \\(F\_l(P, K)\\) denotes a **lower linear fractional transformation** (LFT
#### A General Control Configuration Including Model Uncertainty {#a-general-control-configuration-including-model-uncertainty}
The general control configuration may be extended to include model uncertainty as shown in Fig.&nbsp;[8](#figure--fig:general-config-model-uncertainty).
The general control configuration may be extended to include model uncertainty as shown in [8](#figure--fig:general-config-model-uncertainty).
<a id="figure--fig:general-config-model-uncertainty"></a>
@@ -1228,7 +1228,7 @@ MIMO systems are often **more sensitive to uncertainty** than SISO systems.
## Elements of Linear System Theory {#elements-of-linear-system-theory}
<span class="org-target" id="org-target--sec:linear_sys_theory"></span>
<span class="org-target" id="org-target--sec-linear-sys-theory"></span>
### System Descriptions {#system-descriptions}
@@ -1595,14 +1595,14 @@ RHP-zeros therefore imply high gain instability.
{{< figure src="/ox-hugo/skogestad07_classical_feedback_stability.png" caption="<span class=\"figure-number\">Figure 9: </span>Block diagram used to check internal stability" >}}
Assume that the components \\(G\\) and \\(K\\) contain no unstable hidden modes. Then the feedback system in Fig.&nbsp;[9](#figure--fig:block-diagram-for-stability) is **internally stable** if and only if all four closed-loop transfer matrices are stable.
Assume that the components \\(G\\) and \\(K\\) contain no unstable hidden modes. Then the feedback system in [9](#figure--fig:block-diagram-for-stability) is **internally stable** if and only if all four closed-loop transfer matrices are stable.
\begin{align\*}
&(I+KG)^{-1} & -K&(I+GK)^{-1} \\\\
G&(I+KG)^{-1} & &(I+GK)^{-1}
\end{align\*}
Assuming there are no RHP pole-zero cancellations between \\(G(s)\\) and \\(K(s)\\), the feedback system in Fig.&nbsp;[9](#figure--fig:block-diagram-for-stability) is internally stable if and only if **one** of the four closed-loop transfer function matrices is stable.
Assuming there are no RHP pole-zero cancellations between \\(G(s)\\) and \\(K(s)\\), the feedback system in [9](#figure--fig:block-diagram-for-stability) is internally stable if and only if **one** of the four closed-loop transfer function matrices is stable.
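Why all four matrices matter can be sketched with a hypothetical RHP cancellation, \\(G = (s-1)/(s+1)\\) and \\(K = 2(s+1)/(s-1)\\) (illustrative, not from the text): the loop \\(L = GK = 2\\) looks harmless, yet one of the four closed-loop maps keeps the cancelled RHP pole.

```python
# Minimal sketch of why all four closed-loop matrices must be checked,
# using the illustrative (hypothetical) pair G = (s-1)/(s+1) and
# K = 2(s+1)/(s-1), which hides an RHP pole-zero cancellation.
def linear_root(a, b):
    """Root of a*s + b = 0."""
    return -b / a

L = 2.0                           # G*K after cancellation: a stable constant
S = 1.0 / (1.0 + L)               # sensitivity 1/3: looks perfectly stable
ks_pole = linear_root(3.0, -3.0)  # K(1+GK)^{-1} = 2(s+1)/(3s-3): pole at +1
internally_stable = ks_pole < 0   # False: the cancelled RHP pole survives
```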
### Stabilizing Controllers {#stabilizing-controllers}
@@ -1761,7 +1761,7 @@ It may be shown that the Hankel norm is equal to \\(\left\\|G(s)\right\\|\_H = \
## Limitations on Performance in SISO Systems {#limitations-on-performance-in-siso-systems}
<span class="org-target" id="org-target--sec:perf_limit_siso"></span>
<span class="org-target" id="org-target--sec-perf-limit-siso"></span>
### Input-Output Controllability {#input-output-controllability}
@@ -2234,7 +2234,7 @@ Uncertainty in the crossover frequency region can result in poor performance and
{{< figure src="/ox-hugo/skogestad07_classical_feedback_meas.png" caption="<span class=\"figure-number\">Figure 10: </span>Feedback control system" >}}
Consider the control system in Fig.&nbsp;[10](#figure--fig:classical-feedback-meas).
Consider the control system in [10](#figure--fig:classical-feedback-meas).
Here \\(G\_m(s)\\) denotes the measurement transfer function and we assume \\(G\_m(0) = 1\\) (perfect steady-state measurement).
<div class="important">
@@ -2285,7 +2285,7 @@ The rules may be used to **determine whether or not a given plant is controllabl
## Limitations on Performance in MIMO Systems {#limitations-on-performance-in-mimo-systems}
<span class="org-target" id="org-target--sec:perf_limit_mimo"></span>
<span class="org-target" id="org-target--sec-perf-limit-mimo"></span>
### Introduction {#introduction}
@@ -2654,7 +2654,7 @@ The issues are the same for SISO and MIMO systems, however, with MIMO systems th
In practice, the difference between the true perturbed plant \\(G^\prime\\) and the plant model \\(G\\) is caused by a number of different sources.
We here focus on input and output uncertainty.
In multiplicative form, the input and output uncertainties are given by (see Fig.&nbsp;[12](#figure--fig:input-output-uncertainty)):
In multiplicative form, the input and output uncertainties are given by (see [12](#figure--fig:input-output-uncertainty)):
\begin{equation\*}
G^\prime = (I + E\_O) G (I + E\_I)
@@ -2801,7 +2801,7 @@ However, the situation is usually the opposite with model uncertainty because fo
## Uncertainty and Robustness for SISO Systems {#uncertainty-and-robustness-for-siso-systems}
<span class="org-target" id="org-target--sec:uncertainty_robustness_siso"></span>
<span class="org-target" id="org-target--sec-uncertainty-robustness-siso"></span>
### Introduction to Robustness {#introduction-to-robustness}
@@ -2873,7 +2873,7 @@ In most cases, we prefer to lump the uncertainty into a **multiplicative uncerta
G\_p(s) = G(s)(1 + w\_I(s)\Delta\_I(s)); \quad \abs{\Delta\_I(j\w)} \le 1 \\, \forall\w
\end{equation\*}
which may be represented by the diagram in Fig.&nbsp;[13](#figure--fig:input-uncertainty-set).
which may be represented by the diagram in [13](#figure--fig:input-uncertainty-set).
</div>
@@ -2940,7 +2940,7 @@ This is of course conservative as it introduces possible plants that are not pre
#### Uncertain Regions {#uncertain-regions}
To illustrate how parametric uncertainty translates into frequency domain uncertainty, consider in Fig.&nbsp;[14](#figure--fig:uncertainty-region) the Nyquist plots generated by the following set of plants
To illustrate how parametric uncertainty translates into frequency domain uncertainty, consider in [14](#figure--fig:uncertainty-region) the Nyquist plots generated by the following set of plants
\begin{equation\*}
G\_p(s) = \frac{k}{\tau s + 1} e^{-\theta s}, \quad 2 \le k, \theta, \tau \le 3
@@ -2968,7 +2968,7 @@ The disc-shaped regions may be generated by **additive** complex norm-bounded pe
\end{aligned}
\end{equation}
At each frequency, all possible \\(\Delta(j\w)\\) "generates" a disc-shaped region with radius 1 centered at 0, so \\(G(j\w) + w\_A(j\w)\Delta\_A(j\w)\\) generates at each frequency a disc-shaped region of radius \\(\abs{w\_A(j\w)}\\) centered at \\(G(j\w)\\) as shown in Fig.&nbsp;[15](#figure--fig:uncertainty-disc-generated).
At each frequency, all possible \\(\Delta(j\w)\\) "generates" a disc-shaped region with radius 1 centered at 0, so \\(G(j\w) + w\_A(j\w)\Delta\_A(j\w)\\) generates at each frequency a disc-shaped region of radius \\(\abs{w\_A(j\w)}\\) centered at \\(G(j\w)\\) as shown in [15](#figure--fig:uncertainty-disc-generated).
</div>
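The disc covering can be sketched numerically. A hypothetical script that samples the parameter box \\(k, \theta, \tau \in [2, 3]\\) at one frequency (the frequency \\(\w = 0.5\\), the 5-point grid and the mid-range nominal are all illustrative choices) and measures the radius an additive disc must have:

```python
import numpy as np

# Hypothetical sketch: sample G_p(jw) over the parameter box k, theta, tau
# in [2, 3] at one frequency (w = 0.5, 5 grid values per parameter -- all
# illustrative), then cover the resulting cloud with an additive disc
# centered at the mid-range nominal; its radius lower-bounds |w_A(jw)|.
w = 0.5
s = 1j * w
pts = np.array([k * np.exp(-th * s) / (tau * s + 1)
                for k in np.linspace(2, 3, 5)
                for th in np.linspace(2, 3, 5)
                for tau in np.linspace(2, 3, 5)])
G_nom = 2.5 * np.exp(-2.5 * s) / (2.5 * s + 1)
radius = float(np.abs(pts - G_nom).max())   # minimum |w_A(jw)| at this w
```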
@@ -3044,7 +3044,7 @@ To simplify subsequent controller design, we select a delay-free nominal model
\end{equation\*}
To obtain \\(l\_I(\w)\\), we consider three values (2, 2.5 and 3) for each of the three parameters (\\(k, \theta, \tau\\)).
The corresponding relative errors \\(\abs{\frac{G\_p-G}{G}}\\) are shown as functions of frequency for the \\(3^3 = 27\\) resulting \\(G\_p\\) (Fig.&nbsp;[16](#figure--fig:uncertainty-weight)).
The corresponding relative errors \\(\abs{\frac{G\_p-G}{G}}\\) are shown as functions of frequency for the \\(3^3 = 27\\) resulting \\(G\_p\\) ([16](#figure--fig:uncertainty-weight)).
To derive \\(w\_I(s)\\), we then try to find a simple weight so that \\(\abs{w\_I(j\w)}\\) lies above all the dotted lines.
</div>
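The procedure above can be sketched in a few lines. The delay-free nominal is assumed here to be \\(G(s) = 2.5/(2.5 s + 1)\\) (mid-range \\(k\\) and \\(\tau\\), delay dropped), which is an assumption for illustration; \\(l\_I(\w)\\) is then the pointwise maximum of the 27 relative errors:

```python
import numpy as np

# Sketch of the weight derivation. The delay-free nominal is assumed to be
# G(s) = 2.5/(2.5 s + 1) (mid-range k and tau, delay dropped) -- an
# assumption for illustration. l_I(w) is the pointwise maximum of the 27
# relative errors |(G_p - G)/G| over the parameter grid.
ws = np.logspace(-2, 1, 200)
s = 1j * ws
G = 2.5 / (2.5 * s + 1)

lI = np.zeros_like(ws)
for k in (2.0, 2.5, 3.0):
    for theta in (2.0, 2.5, 3.0):
        for tau in (2.0, 2.5, 3.0):
            Gp = k * np.exp(-theta * s) / (tau * s + 1)
            lI = np.maximum(lI, np.abs((Gp - G) / G))
# Any valid w_I must satisfy |w_I(jw)| >= l_I(w) at every frequency.
```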
@@ -3092,7 +3092,7 @@ The magnitude of the relative uncertainty caused by neglecting the dynamics in \
##### Neglected delay {#neglected-delay}
Let \\(f(s) = e^{-\theta\_p s}\\), where \\(0 \le \theta\_p \le \theta\_{\text{max}}\\). We want to represent \\(G\_p(s) = G\_0(s)e^{-\theta\_p s}\\) by a delay-free plant \\(G\_0(s)\\) and multiplicative uncertainty. Let us first consider the maximum delay, for which the relative error \\(\abs{1 - e^{-j \w \theta\_{\text{max}}}}\\) is shown as a function of frequency (Fig.&nbsp;[17](#figure--fig:neglected-time-delay)). If we consider all \\(\theta \in [0, \theta\_{\text{max}}]\\) then:
Let \\(f(s) = e^{-\theta\_p s}\\), where \\(0 \le \theta\_p \le \theta\_{\text{max}}\\). We want to represent \\(G\_p(s) = G\_0(s)e^{-\theta\_p s}\\) by a delay-free plant \\(G\_0(s)\\) and multiplicative uncertainty. Let us first consider the maximum delay, for which the relative error \\(\abs{1 - e^{-j \w \theta\_{\text{max}}}}\\) is shown as a function of frequency ([17](#figure--fig:neglected-time-delay)). If we consider all \\(\theta \in [0, \theta\_{\text{max}}]\\) then:
\begin{equation\*}
l\_I(\w) = \begin{cases} \abs{1 - e^{-j\w\theta\_{\text{max}}}} & \w < \pi/\theta\_{\text{max}} \\\ 2 & \w \ge \pi/\theta\_{\text{max}} \end{cases}
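A small sketch of this bound, with an illustrative \\(\theta\_{\text{max}} = 1\\) (an assumed value): below \\(\pi/\theta\_{\text{max}}\\) the worst case is the full delay, while above it the delayed phasor can land anywhere on the unit circle, so the error saturates at 2.

```python
import math
import cmath

# Sketch of the neglected-delay bound, with an illustrative theta_max = 1
# (assumed). Below w = pi/theta_max the worst case is the full delay;
# above it the delayed phasor can land anywhere on the unit circle, so
# the relative error saturates at 2.
theta_max = 1.0

def l_I(w):
    if w < math.pi / theta_max:
        return abs(1 - cmath.exp(-1j * w * theta_max))
    return 2.0
```

The two branches meet continuously at \\(\w = \pi/\theta\_{\text{max}}\\), where \\(\abs{1 - e^{-j\pi}} = 2\\).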
@@ -3105,7 +3105,7 @@ Let \\(f(s) = e^{-\theta\_p s}\\), where \\(0 \le \theta\_p \le \theta\_{\text{m
##### Neglected lag {#neglected-lag}
Let \\(f(s) = 1/(\tau\_p s + 1)\\), where \\(0 \le \tau\_p \le \tau\_{\text{max}}\\). In this case the resulting \\(l\_I(\w)\\) (Fig.&nbsp;[18](#figure--fig:neglected-first-order-lag)) can be represented by a rational transfer function with \\(\abs{w\_I(j\w)} = l\_I(\w)\\) where
Let \\(f(s) = 1/(\tau\_p s + 1)\\), where \\(0 \le \tau\_p \le \tau\_{\text{max}}\\). In this case the resulting \\(l\_I(\w)\\) ([18](#figure--fig:neglected-first-order-lag)) can be represented by a rational transfer function with \\(\abs{w\_I(j\w)} = l\_I(\w)\\) where
\begin{equation\*}
w\_I(s) = \frac{\tau\_{\text{max}} s}{\tau\_{\text{max}} s + 1}
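Here the rational weight matches the bound exactly, which can be checked numerically (the value \\(\tau\_{\text{max}} = 2\\) below is an illustrative assumption): the worst relative error over \\(\tau\_p \in [0, \tau\_{\text{max}}]\\) occurs at \\(\tau\_{\text{max}}\\) and equals \\(\abs{w\_I(j\w)}\\) at every frequency.

```python
import numpy as np

# Check (with an illustrative tau_max = 2) that the rational weight
# w_I(s) = tau_max s / (tau_max s + 1) matches the neglected-lag bound
# exactly: the worst relative error over tau_p in [0, tau_max] occurs at
# tau_max and equals |w_I(jw)| at every frequency.
tau_max = 2.0
ws = np.logspace(-2, 2, 50)
s = 1j * ws

l_I = np.abs(1.0 / (tau_max * s + 1) - 1)       # error of dropping the lag
w_I = np.abs(tau_max * s / (tau_max * s + 1))   # magnitude of the weight
```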
@@ -3131,7 +3131,7 @@ There is an exact expression, its first order approximation is
w\_I(s) = \frac{(1+\frac{r\_k}{2})\theta\_{\text{max}} s + r\_k}{\frac{\theta\_{\text{max}}}{2} s + 1}
\end{equation\*}
However, as shown in Fig.&nbsp;[19](#figure--fig:lag-delay-uncertainty), the weight \\(w\_I\\) is optimistic, especially around the frequency \\(1/\theta\_{\text{max}}\\). To make sure that \\(\abs{w\_I^\prime(j\w)} \ge l\_I(\w)\\), we can apply a correction factor:
However, as shown in [19](#figure--fig:lag-delay-uncertainty), the weight \\(w\_I\\) is optimistic, especially around the frequency \\(1/\theta\_{\text{max}}\\). To make sure that \\(\abs{w\_I^\prime(j\w)} \ge l\_I(\w)\\), we can apply a correction factor:
\begin{equation\*}
w\_I^\prime(s) = w\_I \cdot \frac{(\frac{\theta\_{\text{max}}}{2.363})^2 s^2 + 2\cdot 0.838 \cdot \frac{\theta\_{\text{max}}}{2.363} s + 1}{(\frac{\theta\_{\text{max}}}{2.363})^2 s^2 + 2\cdot 0.685 \cdot \frac{\theta\_{\text{max}}}{2.363} s + 1}
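The correction factor is a ratio of two second-order terms with the same natural frequency \\(2.363/\theta\_{\text{max}}\\) but damping \\(0.838\\) (numerator) versus \\(0.685\\) (denominator), so its magnitude is at least 1 everywhere and peaks near \\(\w = 2.363/\theta\_{\text{max}}\\). A sketch verifying this, with an illustrative \\(\theta\_{\text{max}} = 1\\):

```python
import numpy as np

# The correction factor is a ratio of two second-order terms with the same
# natural frequency 2.363/theta_max but damping 0.838 (numerator) versus
# 0.685 (denominator), so its magnitude is >= 1 everywhere and peaks near
# w = 2.363/theta_max. theta_max = 1 is an illustrative assumption.
theta_max = 1.0
ws = np.logspace(-2, 2, 400)
u = ws * theta_max / 2.363                       # normalized frequency
factor = (np.abs(1 - u**2 + 2j * 0.838 * u)
          / np.abs(1 - u**2 + 2j * 0.685 * u))
```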
@@ -3167,7 +3167,7 @@ where \\(r\_0\\) is the relative uncertainty at steady-state, \\(1/\tau\\) is th
#### RS with Multiplicative Uncertainty {#rs-with-multiplicative-uncertainty}
We want to determine the stability of the uncertain feedback system in Fig.&nbsp;[20](#figure--fig:feedback-multiplicative-uncertainty) where there is multiplicative uncertainty of magnitude \\(\abs{w\_I(j\w)}\\).
We want to determine the stability of the uncertain feedback system in [20](#figure--fig:feedback-multiplicative-uncertainty) where there is multiplicative uncertainty of magnitude \\(\abs{w\_I(j\w)}\\).
The loop transfer function becomes
\begin{equation\*}
@@ -3189,7 +3189,7 @@ We use the Nyquist stability condition to test for robust stability of the close
##### Graphical derivation of RS-condition {#graphical-derivation-of-rs-condition}
Consider the Nyquist plot of \\(L\_p\\) as shown in Fig.&nbsp;[21](#figure--fig:nyquist-uncertainty). \\(\abs{1+L}\\) is the distance from the point \\(-1\\) to the center of the disc representing \\(L\_p\\) and \\(\abs{w\_I L}\\) is the radius of the disc.
Consider the Nyquist plot of \\(L\_p\\) as shown in [21](#figure--fig:nyquist-uncertainty). \\(\abs{1+L}\\) is the distance from the point \\(-1\\) to the center of the disc representing \\(L\_p\\) and \\(\abs{w\_I L}\\) is the radius of the disc.
Encirclements are avoided if none of the discs cover \\(-1\\), and we get:
\begin{align\*}
@@ -3236,7 +3236,7 @@ And we obtain the same condition as before.
#### RS with Inverse Multiplicative Uncertainty {#rs-with-inverse-multiplicative-uncertainty}
We will derive a corresponding RS-condition for a feedback system with inverse multiplicative uncertainty (Fig.&nbsp;[22](#figure--fig:inverse-uncertainty-set)) in which
We will derive a corresponding RS-condition for a feedback system with inverse multiplicative uncertainty ([22](#figure--fig:inverse-uncertainty-set)) in which
\begin{equation\*}
G\_p = G(1 + w\_{iI}(s) \Delta\_{iI})^{-1}
@@ -3290,7 +3290,7 @@ The condition for **nominal performance** when considering performance in terms
</div>
Now \\(\abs{1 + L}\\) represents at each frequency the distance of \\(L(j\omega)\\) from the point \\(-1\\) in the Nyquist plot, so \\(L(j\omega)\\) must be at least a distance of \\(\abs{w\_P(j\omega)}\\) from \\(-1\\).
This is illustrated graphically in Fig.&nbsp;[23](#figure--fig:nyquist-performance-condition).
This is illustrated graphically in [23](#figure--fig:nyquist-performance-condition).
<a id="figure--fig:nyquist-performance-condition"></a>
@@ -3312,7 +3312,7 @@ For robust performance, we require the performance condition to be satisfied for
</div>
Let's consider the case of multiplicative uncertainty as shown in Fig.&nbsp;[24](#figure--fig:input-uncertainty-set-feedback-weight-bis).
Let's consider the case of multiplicative uncertainty as shown in [24](#figure--fig:input-uncertainty-set-feedback-weight-bis).
The robust performance corresponds to requiring \\(\abs{\hat{y}/d}<1\ \forall \Delta\_I\\) and the set of possible loop transfer functions is
\begin{equation\*}
@@ -3326,7 +3326,7 @@ The robust performance corresponds to requiring \\(\abs{\hat{y}/d}<1\ \forall \D
##### Graphical derivation of RP-condition {#graphical-derivation-of-rp-condition}
As illustrated in Fig.&nbsp;[23](#figure--fig:nyquist-performance-condition), we must require that all possible \\(L\_p(j\omega)\\) stay outside a disk of radius \\(\abs{w\_P(j\omega)}\\) centered on \\(-1\\).
As illustrated in [23](#figure--fig:nyquist-performance-condition), we must require that all possible \\(L\_p(j\omega)\\) stay outside a disk of radius \\(\abs{w\_P(j\omega)}\\) centered on \\(-1\\).
Since \\(L\_p\\) at each frequency stays within a disk of radius \\(|w\_I(j\omega) L(j\omega)|\\) centered on \\(L(j\omega)\\), the condition for RP becomes:
\begin{align\*}
@@ -3524,7 +3524,7 @@ In the transfer function form:
with \\(\Phi(s) \triangleq (sI - A)^{-1}\\).
This is illustrated in the block diagram of Fig.&nbsp;[25](#figure--fig:uncertainty-state-a-matrix), which is in the form of an inverse additive perturbation.
This is illustrated in the block diagram of [25](#figure--fig:uncertainty-state-a-matrix), which is in the form of an inverse additive perturbation.
<a id="figure--fig:uncertainty-state-a-matrix"></a>
@@ -3544,7 +3544,7 @@ We also derived a condition for robust performance with multiplicative uncertain
## Robust Stability and Performance Analysis {#robust-stability-and-performance-analysis}
<span class="org-target" id="org-target--sec:robust_perf_mimo"></span>
<span class="org-target" id="org-target--sec-robust-perf-mimo"></span>
### General Control Configuration with Uncertainty {#general-control-configuration-with-uncertainty}
@@ -3562,13 +3562,13 @@ The starting point for our robustness analysis is a system representation in whi
where each \\(\Delta\_i\\) represents a **specific source of uncertainty**, e.g. input uncertainty \\(\Delta\_I\\) or parametric uncertainty \\(\delta\_i\\).
If we also pull out the controller \\(K\\), we get the generalized plant \\(P\\) as shown in Fig.&nbsp;[26](#figure--fig:general-control-delta). This form is useful for controller synthesis.
If we also pull out the controller \\(K\\), we get the generalized plant \\(P\\) as shown in [26](#figure--fig:general-control-delta). This form is useful for controller synthesis.
<a id="figure--fig:general-control-delta"></a>
{{< figure src="/ox-hugo/skogestad07_general_control_delta.png" caption="<span class=\"figure-number\">Figure 26: </span>General control configuration used for controller synthesis" >}}
If the controller is given and we want to analyze the uncertain system, we use the \\(N\Delta\text{-structure}\\) in Fig.&nbsp;[27](#figure--fig:general-control-Ndelta).
If the controller is given and we want to analyze the uncertain system, we use the \\(N\Delta\text{-structure}\\) in [27](#figure--fig:general-control-Ndelta).
<a id="figure--fig:general-control-Ndelta"></a>
@@ -3588,7 +3588,7 @@ Similarly, the uncertain closed-loop transfer function from \\(w\\) to \\(z\\),
&\triangleq N\_{22} + N\_{21} \Delta (I - N\_{11} \Delta)^{-1} N\_{12}
\end{align\*}
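At a single frequency this LFT is plain matrix algebra. A sketch with an illustrative scalar partition of \\(N\\) (values assumed, not from the text):

```python
import numpy as np

# Sketch of the upper LFT F = N22 + N21 Delta (I - N11 Delta)^{-1} N12 at
# a single frequency. The partition of N is illustrative (scalar blocks).
def upper_lft(N11, N12, N21, N22, Delta):
    I = np.eye(N11.shape[0])
    return N22 + N21 @ Delta @ np.linalg.inv(I - N11 @ Delta) @ N12

N11 = np.array([[0.2]])
N12 = np.array([[1.0]])
N21 = np.array([[1.0]])
N22 = np.array([[0.5]])

F_nominal = upper_lft(N11, N12, N21, N22, np.zeros((1, 1)))   # Delta = 0
F_perturbed = upper_lft(N11, N12, N21, N22, 0.5 * np.eye(1))
# With Delta = 0 the uncertainty loop vanishes and F reduces to N22.
```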
To analyze robust stability of \\(F\\), we can rearrange the system into the \\(M\Delta\text{-structure}\\) shown in Fig.&nbsp;[28](#figure--fig:general-control-Mdelta-bis) where \\(M = N\_{11}\\) is the transfer function from the output to the input of the perturbations.
To analyze robust stability of \\(F\\), we can rearrange the system into the \\(M\Delta\text{-structure}\\) shown in [28](#figure--fig:general-control-Mdelta-bis) where \\(M = N\_{11}\\) is the transfer function from the output to the input of the perturbations.
<a id="figure--fig:general-control-Mdelta-bis"></a>
@@ -3627,7 +3627,7 @@ However, the inclusion of parametric uncertainty may be more significant for MIM
Unstructured perturbations are often used to get a simple uncertainty model.
We here define unstructured uncertainty as the use of a "full" complex perturbation matrix \\(\Delta\\), usually with dimensions compatible with those of the plant, where at each frequency any \\(\Delta(j\w)\\) satisfying \\(\maxsv(\Delta(j\w)) < 1\\) is allowed.
Three common forms of **feedforward unstructured uncertainty** are shown in Fig.&nbsp;[4](#table--fig:feedforward-uncertainty): additive uncertainty, multiplicative input uncertainty and multiplicative output uncertainty.
Three common forms of **feedforward unstructured uncertainty** are shown in [4](#table--fig:feedforward-uncertainty): additive uncertainty, multiplicative input uncertainty and multiplicative output uncertainty.
<div class="important">
@@ -3643,15 +3643,15 @@ Three common forms of **feedforward unstructured uncertainty** are shown Fig.&nb
<a id="table--fig:feedforward-uncertainty"></a>
<div class="table-caption">
<span class="table-number"><a href="#table--fig:feedforward-uncertainty">Table 4</a></span>:
<span class="table-number"><a href="#table--fig:feedforward-uncertainty">Table 4</a>:</span>
Common feedforward unstructured uncertainty
</div>
| ![](/ox-hugo/skogestad07_additive_uncertainty.png) | ![](/ox-hugo/skogestad07_input_uncertainty.png) | ![](/ox-hugo/skogestad07_output_uncertainty.png) |
|-------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|
| <span class="org-target" id="org-target--fig:additive_uncertainty"></span> Additive uncertainty | <span class="org-target" id="org-target--fig:input_uncertainty"></span> Multiplicative input uncertainty | <span class="org-target" id="org-target--fig:output_uncertainty"></span> Multiplicative output uncertainty |
| <span class="org-target" id="org-target--fig-additive-uncertainty"></span> Additive uncertainty | <span class="org-target" id="org-target--fig-input-uncertainty"></span> Multiplicative input uncertainty | <span class="org-target" id="org-target--fig-output-uncertainty"></span> Multiplicative output uncertainty |
In Fig.&nbsp;[5](#table--fig:feedback-uncertainty), three **feedback or inverse unstructured uncertainty** forms are shown: inverse additive uncertainty, inverse multiplicative input uncertainty and inverse multiplicative output uncertainty.
In [5](#table--fig:feedback-uncertainty), three **feedback or inverse unstructured uncertainty** forms are shown: inverse additive uncertainty, inverse multiplicative input uncertainty and inverse multiplicative output uncertainty.
<div class="important">
@@ -3667,13 +3667,13 @@ In Fig.&nbsp;[5](#table--fig:feedback-uncertainty), three **feedback or inverse
<a id="table--fig:feedback-uncertainty"></a>
<div class="table-caption">
<span class="table-number"><a href="#table--fig:feedback-uncertainty">Table 5</a></span>:
<span class="table-number"><a href="#table--fig:feedback-uncertainty">Table 5</a>:</span>
Common feedback unstructured uncertainty
</div>
| ![](/ox-hugo/skogestad07_inv_additive_uncertainty.png) | ![](/ox-hugo/skogestad07_inv_input_uncertainty.png) | ![](/ox-hugo/skogestad07_inv_output_uncertainty.png) |
|-------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|
| <span class="org-target" id="org-target--fig:inv_additive_uncertainty"></span> Inverse additive uncertainty | <span class="org-target" id="org-target--fig:inv_input_uncertainty"></span> Inverse multiplicative input uncertainty | <span class="org-target" id="org-target--fig:inv_output_uncertainty"></span> Inverse multiplicative output uncertainty |
| <span class="org-target" id="org-target--fig-inv-additive-uncertainty"></span> Inverse additive uncertainty | <span class="org-target" id="org-target--fig-inv-input-uncertainty"></span> Inverse multiplicative input uncertainty | <span class="org-target" id="org-target--fig-inv-output-uncertainty"></span> Inverse multiplicative output uncertainty |
##### Lumping uncertainty into a single perturbation {#lumping-uncertainty-into-a-single-perturbation}
@@ -3768,7 +3768,7 @@ where \\(r\_0\\) is the relative uncertainty at steady-state, \\(1/\tau\\) is th
### Obtaining \\(P\\), \\(N\\) and \\(M\\) {#obtaining-p-n-and-m}
Let's consider the feedback system with multiplicative input uncertainty \\(\Delta\_I\\) shown in Fig.&nbsp;[29](#figure--fig:input-uncertainty-set-feedback-weight).
Let's consider the feedback system with multiplicative input uncertainty \\(\Delta\_I\\) shown in [29](#figure--fig:input-uncertainty-set-feedback-weight).
\\(W\_I\\) is a normalization weight for the uncertainty and \\(W\_P\\) is a performance weight.
<a id="figure--fig:input-uncertainty-set-feedback-weight"></a>
@@ -3906,7 +3906,7 @@ Then the \\(M\Delta\text{-system}\\) is stable for all perturbations \\(\Delta\\
#### Application of the Unstructured RS-condition {#application-of-the-unstructured-rs-condition}
We will now present necessary and sufficient conditions for robust stability for each of the six single unstructured perturbations in Figs&nbsp;[4](#table--fig:feedforward-uncertainty) and&nbsp;[5](#table--fig:feedback-uncertainty) with
We will now present necessary and sufficient conditions for robust stability for each of the six single unstructured perturbations in [4](#table--fig:feedforward-uncertainty) and [5](#table--fig:feedback-uncertainty) with
\begin{equation\*}
E = W\_2 \Delta W\_1, \quad \hnorm{\Delta} \le 1
@@ -3951,7 +3951,7 @@ In order to get tighter condition we must use a tighter uncertainty description
Robust stability bounds in terms of the \\(\hinf\\) norm (\\(\text{RS}\Leftrightarrow\hnorm{M}<1\\)) are in general only tight when there is a single full perturbation block.
An "exception" to this is when the uncertainty blocks enter or exit from the same location in the block diagram, because they can then be stacked on top of each other or side-by-side, in an overall \\(\Delta\\) which is then a full matrix.
One important uncertainty description that falls into this category is the **coprime uncertainty description** shown in Fig.&nbsp;[30](#figure--fig:coprime-uncertainty), for which the set of plants is
One important uncertainty description that falls into this category is the **coprime uncertainty description** shown in [30](#figure--fig:coprime-uncertainty), for which the set of plants is
\begin{equation\*}
G\_p = (M\_l + \Delta\_M)^{-1}(N\_l + \Delta\_N), \quad \hnorm{[\Delta\_N, \ \Delta\_M]} \le \epsilon
@@ -4007,7 +4007,7 @@ To this effect, introduce the block-diagonal scaling matrix
where \\(d\_i\\) is a scalar and \\(I\_i\\) is an identity matrix of the same dimension as the \\(i\\)'th perturbation block \\(\Delta\_i\\).
Now rescale the inputs and outputs of \\(M\\) and \\(\Delta\\) by inserting the matrices \\(D\\) and \\(D^{-1}\\) on both sides as shown in Fig.&nbsp;[31](#figure--fig:block-diagonal-scalings).
Now rescale the inputs and outputs of \\(M\\) and \\(\Delta\\) by inserting the matrices \\(D\\) and \\(D^{-1}\\) on both sides as shown in [31](#figure--fig:block-diagonal-scalings).
This clearly has no effect on stability.
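How much this scaling can tighten the norm bound is easy to see on a small example. A sketch with an illustrative \\(2 \times 2\\) matrix \\(M\\) and \\(\Delta = \text{diag}(\delta\_1, \delta\_2)\\) (all values assumed): any diagonal \\(D\\) commutes with \\(\Delta\\), so replacing \\(M\\) by \\(DMD^{-1}\\) leaves the loop unchanged while shrinking \\(\maxsv\\).

```python
import numpy as np

# D-scaling sketch with an illustrative 2x2 M and Delta = diag(d1, d2):
# any diagonal D commutes with Delta, so replacing M by D M D^{-1} does
# not change the M-Delta loop, but it can tighten the norm bound a lot.
M = np.array([[0.0, 10.0],
              [0.1,  0.0]])
sigma_max = lambda A: float(np.linalg.svd(A, compute_uv=False)[0])

D = np.diag([1.0, 10.0])                 # chosen to balance the couplings
M_scaled = D @ M @ np.linalg.inv(D)      # equals [[0, 1], [1, 0]]

# sigma_max(M) = 10, but sigma_max(M_scaled) = 1: the scaled small-gain
# test is ten times less conservative for this structured Delta.
```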
<a id="figure--fig:block-diagonal-scalings"></a>
Note that \\(\mu\\) underestimates how bad or good the actual worst-case performance is.
### Application: RP with Input Uncertainty {#application-rp-with-input-uncertainty}
We will now consider in some detail the case of multiplicative input uncertainty with performance defined in terms of weighted sensitivity ([29](#figure--fig:input-uncertainty-set-feedback-weight)).
The performance requirement is then \\(\hnorm{w\_P S\_p} < 1, \ \forall S\_p\\), where \\(S\_p\\) is the perturbed sensitivity function.
With the decoupling controller we have:

\begin{equation\*}
\overline{\sigma}(N\_{22}) = \overline{\sigma}(w\_P S) = \left|\frac{s/2 + 0.05}{s + 0.7}\right|
\end{equation\*}
and we see from [32](#figure--fig:mu-plots-distillation) that the NP-condition is satisfied.
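Because \\(w\_P S\\) is scalar here, the NP condition can be verified with a simple frequency sweep (a small numerical check, not code from the text; the grid is an assumption):

```python
import numpy as np

# |w_P S| = |(s/2 + 0.05)/(s + 0.7)| evaluated along the imaginary axis
omega = np.logspace(-3, 3, 1000)
s = 1j * omega
mag = np.abs((s / 2 + 0.05) / (s + 0.7))
peak = mag.max()
print(peak)  # approaches 1/2 at high frequency, so NP holds with margin
```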
<a id="figure--fig:mu-plots-distillation"></a>
In this case \\(w\_I T\_I = w\_I T\\) is a scalar times the identity matrix:

\begin{equation\*}
\mu\_{\Delta\_I}(w\_I T\_I) = |w\_I t| = \left|0.2 \frac{5s + 1}{(0.5s + 1)(1.43s + 1)}\right|
\end{equation\*}
and we see from [32](#figure--fig:mu-plots-distillation) that RS is satisfied.
The peak value of \\(\mu\_{\Delta\_I}(M)\\) is \\(0.53\\) meaning that we may increase the uncertainty by a factor of \\(1/0.53 = 1.89\\) before the worst case uncertainty yields instability.
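The quoted peak and uncertainty margin can be reproduced with a frequency sweep of the scalar expression above (a quick numerical check; the frequency grid is an assumption):

```python
import numpy as np

# mu_{Delta_I} = |w_I t| = |0.2 (5s + 1) / ((0.5s + 1)(1.43s + 1))|
omega = np.logspace(-2, 2, 2000)
s = 1j * omega
mu = np.abs(0.2 * (5 * s + 1) / ((0.5 * s + 1) * (1.43 * s + 1)))
peak = mu.max()          # ~0.53, attained near 1.1 rad/s
margin = 1.0 / peak      # ~1.9: allowable increase in uncertainty
print(peak, margin)
```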
##### RP {#rp}
Although the system has good robustness margins and excellent nominal performance, the robust performance is poor.
This is shown in [32](#figure--fig:mu-plots-distillation) where the \\(\mu\text{-curve}\\) for RP was computed numerically using \\(\mu\_{\hat{\Delta}}(N)\\), with \\(\hat{\Delta} = \text{diag}\\{\Delta\_I, \Delta\_P\\}\\) and \\(\Delta\_I = \text{diag}\\{\delta\_1, \delta\_2\\}\\).
The peak value is close to 6, meaning that even with 6 times less uncertainty, the weighted sensitivity will be about 6 times larger than what we require.
The latter is an attempt to "flatten out" \\(\mu\\).
#### Example: \\(\mu\text{-synthesis}\\) with DK-iteration {#example-mu-text-synthesis-with-dk-iteration}
For simplicity, we will consider again the case of multiplicative uncertainty and performance defined in terms of weighted sensitivity.
The uncertainty weight \\(w\_I I\\) and performance weight \\(w\_P I\\) are shown graphically in [33](#figure--fig:weights-distillation).
<a id="figure--fig:weights-distillation"></a>
The scaling matrix \\(D\\) for \\(DND^{-1}\\) then has the structure \\(D = \text{diag}\\{d\_1, d\_2, I\_2\\}\\).
- Iteration No. 1.
Step 1: with the initial scalings, the \\(\mathcal{H}\_\infty\\) synthesis produced a 6 state controller (2 states from the plant model and 2 from each of the weights).
Step 2: the upper \\(\mu\text{-bound}\\) is shown in [34](#figure--fig:dk-iter-mu).
Step 3: the frequency dependent \\(d\_1(\omega)\\) and \\(d\_2(\omega)\\) from step 2 are fitted using a 4th order transfer function shown in [35](#figure--fig:dk-iter-d-scale)
- Iteration No. 2.
Step 1: with the 8 state scalings \\(D^1(s)\\), the \\(\mathcal{H}\_\infty\\) synthesis gives a 22 state controller.
Step 2: This controller gives a peak value of \\(\mu\\) of \\(1.02\\).
{{< figure src="/ox-hugo/skogestad07_dk_iter_d_scale.png" caption="<span class=\"figure-number\">Figure 35: </span>Change in D-scale \\(d\_1\\) during DK-iteration" >}}
The final \\(\mu\text{-curves}\\) for NP, RS and RP with the controller \\(K\_3\\) are shown in [36](#figure--fig:mu-plot-optimal-k3).
The objectives of RS and NP are easily satisfied.
The peak value of \\(\mu\\) is just slightly over 1, so the performance specification \\(\overline{\sigma}(w\_P S\_p) < 1\\) is almost satisfied for all possible plants.
{{< figure src="/ox-hugo/skogestad07_mu_plot_optimal_k3.png" caption="<span class=\"figure-number\">Figure 36: </span>\\(\mu\text{-plots}\\) with \\(\mu\\) \"optimal\" controller \\(K\_3\\)" >}}
To confirm that, 6 perturbed plants are used to compute the perturbed sensitivity functions shown in [37](#figure--fig:perturb-s-k3).
<a id="figure--fig:perturb-s-k3"></a>
If the resulting control performance is not satisfactory, one may switch to the second approach.
## Controller Design {#controller-design}
<span class="org-target" id="org-target--sec-controller-design"></span>
### Trade-offs in MIMO Feedback Design {#trade-offs-in-mimo-feedback-design}
By multivariable transfer function shaping, therefore, we mean the shaping of the singular values of appropriately specified transfer functions such as the loop transfer function or one or more closed-loop transfer functions.
The classical loop-shaping ideas can be further generalized to MIMO systems by considering the singular values.
Consider the one degree-of-freedom system as shown in [38](#figure--fig:classical-feedback-small).
We have the following important relationships:
\begin{align}
y &= T r + S G\_d d - T n \\\\
u &= K S (r - G\_d d - n)
\end{align}

Thus, over specified frequency ranges, it is relatively easy to approximate the closed-loop requirements by open-loop objectives.
</div>
Typically, the open-loop requirements 1 and 3 are valid and important at low frequencies \\(0 \le \omega \le \omega\_l \le \omega\_B\\), while conditions 2, 4, 5 and 6 are conditions which are valid and important at high frequencies \\(\omega\_B \le \omega\_h \le \omega \le \infty\\), as illustrated in [39](#figure--fig:design-trade-off-mimo-gk).
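For any given design these trade-offs can be inspected by evaluating the singular values of \\(L = GK\\) on a frequency grid. A minimal sketch (the 2x2 plant and diagonal integral controller below are assumptions for illustration, not from the text):

```python
import numpy as np

# Assumed plant G(s) = M/(s+1) with diagonal integral controller K(s) = (10/s) I
M = np.array([[1.0, 0.5],
              [0.25, 1.0]])

def loop_singular_values(omega):
    s = 1j * omega
    L = (10.0 / s) * (1.0 / (s + 1.0)) * M  # L(jw) = G(jw) K(jw)
    return np.linalg.svd(L, compute_uv=False)

sv_low = loop_singular_values(0.01)    # sigma_min(L) large: performance conditions hold
sv_high = loop_singular_values(100.0)  # sigma_max(L) small: noise/robustness conditions hold
print(sv_low, sv_high)
```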
<a id="figure--fig:design-trade-off-mimo-gk"></a>
The optimal state estimate is given by a **Kalman filter**.
The solution to the LQG problem is then found by replacing \\(x\\) by \\(\hat{x}\\) to give \\(u(t) = -K\_r \hat{x}\\).
We therefore see that the LQG problem and its solution can be separated into two distinct parts as illustrated in [40](#figure--fig:lqg-separation): the optimal state feedback and the optimal state estimator (the Kalman filter).
<a id="figure--fig:lqg-separation"></a>
and \\(X\\) is the unique positive semi-definite solution of the algebraic Riccati equation.
<div class="important">
The **Kalman filter** has the structure of an ordinary state-estimator, as shown on [41](#figure--fig:lqg-kalman-filter), with:
\begin{equation} \label{eq:kalman\_filter\_structure}
\dot{\hat{x}} = A\hat{x} + Bu + K\_f(y-C\hat{x})
\end{equation}

where \\(K\_f = Y C^T V^{-1}\\) and \\(Y\\) is the unique positive semi-definite solution of the algebraic Riccati equation.
{{< figure src="/ox-hugo/skogestad07_lqg_kalman_filter.png" caption="<span class=\"figure-number\">Figure 41: </span>The LQG controller and noisy plant" >}}
The structure of the LQG controller is illustrated in [41](#figure--fig:lqg-kalman-filter), its transfer function from \\(y\\) to \\(u\\) is given by
\begin{align\*}
L\_{\text{LQG}}(s) &= \left[ \begin{array}{c|c}
A - B K\_r - K\_f C & K\_f \\\\
\hline
-K\_r & 0
\end{array} \right]
\end{align\*}
It has the same degree (number of poles) as the plant.<br />
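The two Riccati equations can be solved independently, in line with the separation structure. A minimal sketch using `scipy` (the double-integrator plant and the unit weights/covariances are assumptions for illustration):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed plant: double integrator, position measured
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Q = np.eye(2); R = np.eye(1)  # LQR weights (assumed)
W = np.eye(2); V = np.eye(1)  # process/measurement noise covariances (assumed)

X = solve_continuous_are(A, B, Q, R)      # regulator Riccati equation
Kr = np.linalg.solve(R, B.T @ X)          # Kr = R^{-1} B^T X
Y = solve_continuous_are(A.T, C.T, W, V)  # filter Riccati equation (by duality)
Kf = Y @ C.T @ np.linalg.inv(V)           # Kf = Y C^T V^{-1}

# Controller state matrix: the LQG controller has the same order as the plant
A_lqg = A - B @ Kr - Kf @ C
print(np.linalg.eigvals(A - B @ Kr), np.linalg.eigvals(A - Kf @ C))
```

By the separation theorem, the closed-loop poles are the eigenvalues of \\(A - BK\_r\\) and \\(A - K\_f C\\), each of which is guaranteed stable by the Riccati solutions.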
For the LQG-controller, as shown on [41](#figure--fig:lqg-kalman-filter), it is not easy to see where to position the reference input \\(r\\) and how integral action may be included, if desired. Indeed, the standard LQG design procedure does not give a controller with integral action. One strategy is illustrated in [42](#figure--fig:lqg-integral). Here, the control error \\(r-y\\) is integrated and the regulator \\(K\_r\\) is designed for the plant augmented with these integral states.
<a id="figure--fig:lqg-integral"></a>
Their main limitation is that they can only be applied to minimum phase plants.
### \\(\htwo\\) and \\(\hinf\\) Control {#htwo-and-hinf-control}
<span class="org-target" id="org-target--sec-htwo-and-hinf"></span>
#### General Control Problem Formulation {#general-control-problem-formulation}
<span class="org-target" id="org-target--sec-htwo-inf-assumptions"></span>
There are many ways in which feedback design problems can be cast as \\(\htwo\\) and \\(\hinf\\) optimization problems.
It is very useful therefore to have a **standard problem formulation** into which any particular problem may be manipulated.
Such a general formulation is afforded by the general configuration shown in [43](#figure--fig:general-control).
<a id="figure--fig:general-control"></a>
Then the LQG cost function is equal to the \\(\htwo\\) norm of the closed-loop transfer function from \\(w\\) to \\(z\\).
#### \\(\hinf\\) Optimal Control {#hinf-optimal-control}
With reference to the general control configuration on [43](#figure--fig:general-control), the standard \\(\hinf\\) optimal control problem is to find all stabilizing controllers \\(K\\) which minimize
\begin{equation\*}
\hnorm{F\_l(P, K)} = \max\_{\omega} \maxsv\big(F\_l(P, K)(j\omega)\big)
\end{equation\*}

In general, the scalar weighting functions \\(w\_1(s)\\) and \\(w\_2(s)\\) can be replaced by matrix-valued weights \\(W\_1(s)\\) and \\(W\_2(s)\\).
This can be useful for **systems with channels of quite different bandwidths**.
In that case, **diagonal weights are recommended** as anything more complicated is usually not worth the effort.<br />
To see how this mixed sensitivity problem can be formulated in the general setting, we can imagine the disturbance \\(d\\) as a single exogenous input and define an error signal \\(z = [z\_1^T\ z\_2^T]^T\\), where \\(z\_1 = W\_1 y\\) and \\(z\_2 = -W\_2 u\\) as illustrated in [44](#figure--fig:mixed-sensitivity-dist-rejection).
We can then see that \\(z\_1 = W\_1 S w\\) and \\(z\_2 = W\_2 KS w\\) as required.
The elements of the generalized plant are

\begin{equation\*}
P = \begin{bmatrix}
W\_1 & W\_1 G \\\\
0 & -W\_2 \\\\
-I & -G
\end{bmatrix}
\end{equation\*}
{{< figure src="/ox-hugo/skogestad07_mixed_sensitivity_dist_rejection.png" caption="<span class=\"figure-number\">Figure 44: </span>\\(S/KS\\) mixed-sensitivity optimization in standard form (regulation)" >}}
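The claim that \\(z\_1 = W\_1 S w\\) and \\(z\_2 = W\_2 KS w\\) can be verified numerically at a single frequency point by closing the lower LFT. In the sketch below the generalized-plant entries are written out under the sign conventions of the regulation problem (\\(w = d\\), \\(v = -y\\)); the scalar plant, controller and weights are assumptions:

```python
import numpy as np

s = 2j                   # evaluation point on the imaginary axis
G = 1.0 / (s + 1.0)      # assumed plant
K = 5.0                  # assumed (static) controller
W1 = 1.0 / (s + 0.1)     # performance weight
W2 = 0.5                 # input weight

# Generalized plant P for the S/KS regulation problem
P11 = np.array([[W1], [0.0]])
P12 = np.array([[W1 * G], [-W2]])
P21 = np.array([[-1.0]])
P22 = np.array([[-G]])

# Lower LFT: N = F_l(P, K) = P11 + P12 K (I - P22 K)^{-1} P21
Km = np.array([[K]])
N = P11 + P12 @ Km @ np.linalg.inv(np.eye(1) - P22 @ Km) @ P21

S = 1.0 / (1.0 + G * K)
print(np.allclose(N, [[W1 * S], [W2 * K * S]]))  # True
```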
Another interpretation can be put on the \\(S/KS\\) mixed-sensitivity optimization as shown in the standard control configuration of [45](#figure--fig:mixed-sensitivity-ref-tracking).
Here we consider a tracking problem.
The exogenous input is a reference command \\(r\\), and the error signals are \\(z\_1 = -W\_1 e = W\_1 (r-y)\\) and \\(z\_2 = W\_2 u\\).
As in the regulation problem of [44](#figure--fig:mixed-sensitivity-dist-rejection), we have that \\(z\_1 = W\_1 S w\\) and \\(z\_2 = W\_2 KS w\\).
<a id="figure--fig:mixed-sensitivity-ref-tracking"></a>
Another useful mixed sensitivity optimization problem is to find a stabilizing controller which minimizes the \\(\hinf\\) norm of the stacked transfer function with elements \\(W\_1 S\\) and \\(W\_2 T\\).
The ability to shape \\(T\\) is desirable for tracking problems and noise attenuation.
It is also important for robust stability with respect to multiplicative perturbations at the plant output.
The \\(S/T\\) mixed-sensitivity minimization problem can be put into the standard control configuration as shown in [46](#figure--fig:mixed-sensitivity-s-t).
The elements of the generalized plant are

\begin{equation\*}
P = \begin{bmatrix}
W\_1 & -W\_1 G \\\\
0 & W\_2 G \\\\
I & -G
\end{bmatrix}
\end{equation\*}
The focus of attention has moved to the size of signals and away from the size and bandwidth of selected closed-loop transfer functions.
</div>
Weights are used to describe the expected or known frequency content of exogenous signals and the desired frequency content of error signals.
Weights are also used if a perturbation is used to model uncertainty, as in [47](#figure--fig:input-uncertainty-hinf), where \\(G\\) represents the nominal model, \\(W\\) is a weighting function that captures the relative model fidelity over frequency, and \\(\Delta\\) represents unmodelled dynamics usually normalized such that \\(\hnorm{\Delta} < 1\\).
<a id="figure--fig:input-uncertainty-hinf"></a>
As we have seen, the weights \\(Q\\) and \\(R\\) are constant, but LQG can be generalized to include frequency dependent weights.
When we consider a system's response to persistent sinusoidal signals of varying frequency, or when we consider the induced 2-norm between the exogenous input signals and the error signals, we are required to minimize the \\(\hinf\\) norm.
In the absence of model uncertainty, there does not appear to be an overwhelming case for using the \\(\hinf\\) norm rather than the more traditional \\(\htwo\\) norm.
However, when uncertainty is addressed, as it always should be, \\(\hinf\\) is clearly the more **natural approach** using component uncertainty models as in [47](#figure--fig:input-uncertainty-hinf).<br />
A typical problem using the signal-based approach to \\(\hinf\\) control is illustrated in the interconnection diagram of [48](#figure--fig:hinf-signal-based).
\\(G\\) and \\(G\_d\\) are nominal models of the plant and disturbance dynamics, and \\(K\\) is the controller to be designed.
The weights \\(W\_d\\), \\(W\_r\\), and \\(W\_n\\) may be constant or dynamic and describe the relative importance and/or the frequency content of the disturbance, set points and noise signals.
The weight \\(W\_\text{ref}\\) is a desired closed-loop transfer function between the weighted set point \\(r\_s\\) and the actual output \\(y\\).
The problem can be cast as a standard \\(\hinf\\) optimization in the general control configuration.
{{< figure src="/ox-hugo/skogestad07_hinf_signal_based.png" caption="<span class=\"figure-number\">Figure 48: </span>A signal-based \\(\hinf\\) control problem" >}}
Suppose we now introduce a multiplicative dynamic uncertainty model at the input to the plant as shown in [49](#figure--fig:hinf-signal-based-uncertainty).
The problem we now want to solve is: find a stabilizing controller \\(K\\) such that the \\(\hinf\\) norm of the transfer function between \\(w\\) and \\(z\\) is less than 1 for all \\(\Delta\\) where \\(\hnorm{\Delta} < 1\\).
We have assumed in this statement that the **signal weights have normalized the 2-norm of the exogenous input signals to unity**.
This problem is a non-standard \\(\hinf\\) optimization.
The objective of robust stabilization is to stabilize not only the nominal model \\(G\\), but a family of perturbed plants defined by

\begin{equation\*}
G\_p = \left\\{ (M\_l + \Delta\_M)^{-1} (N\_l + \Delta\_N) \ : \ \hnorm{[\Delta\_N, \ \Delta\_M]} < \epsilon \right\\}
\end{equation\*}
where \\(\epsilon > 0\\) is then the **stability margin**.<br />
For the perturbed feedback system of [50](#figure--fig:coprime-uncertainty-bis), the stability property is robust if and only if the nominal feedback system is stable and
\begin{equation\*}
\gamma \triangleq \hnorm{\begin{bmatrix}
K \\\\ I
\end{bmatrix} (I - G K)^{-1} M\_l^{-1}} \le \epsilon^{-1}
\end{equation\*}

It is important to emphasize that since we can compute \\(\gamma\_\text{min}\\) exactly, no \\(\gamma\text{-iteration}\\) is needed when solving the \\(\hinf\\) problem.
#### A Systematic \\(\hinf\\) Loop-Shaping Design Procedure {#a-systematic-hinf-loop-shaping-design-procedure}
<span class="org-target" id="org-target--sec-hinf-loop-shaping-procedure"></span>
Robust stabilization alone is not much used in practice because the designer is not able to specify any performance requirements.
To do so, **pre and post compensators** are used to **shape the open-loop singular values** prior to robust stabilization of the "shaped" plant.
If \\(W\_1\\) and \\(W\_2\\) are the pre and post compensators respectively, then the shaped plant is given by

\begin{equation}
G\_s = W\_2 G W\_1
\end{equation}
as shown in [51](#figure--fig:shaped-plant).
<a id="figure--fig:shaped-plant"></a>
Systematic procedure for \\(\hinf\\) loop-shaping design:
- A small value of \\(\epsilon\_{\text{max}}\\) indicates that the chosen singular value loop-shapes are incompatible with robust stability requirements
7. **Analyze the design** and if not all the specification are met, make further modifications to the weights
8. **Implement the controller**.
The configuration shown in [52](#figure--fig:shapping-practical-implementation) has been found useful when compared with the conventional setup in [38](#figure--fig:classical-feedback-small).
This is because the references do not directly excite the dynamics of \\(K\_s\\), which can result in large amounts of overshoot.
The constant prefilter ensures a steady-state gain of \\(1\\) between \\(r\\) and \\(y\\), assuming integral action in \\(W\_1\\) or \\(G\\).
Many control design problems possess two degrees-of-freedom: on the one hand, measurement or feedback signals, and on the other, commands or references.
Sometimes, one degree-of-freedom is left out of the design, and the controller is driven by an error signal i.e. the difference between a command and the output.
But in cases where stringent time-domain specifications are set on the output response, a one degree-of-freedom structure may not be sufficient.<br />
A general two degrees-of-freedom feedback control scheme is depicted in [53](#figure--fig:classical-feedback-2dof-simple).
The commands and feedbacks enter the controller separately and are independently processed.
<a id="figure--fig:classical-feedback-2dof-simple"></a>
{{< figure src="/ox-hugo/skogestad07_classical_feedback_2dof_simple.png" caption="<span class=\"figure-number\">Figure 53: </span>General two degrees-of-freedom feedback control scheme" >}}
The \\(\mathcal{H}\_\infty\\) loop-shaping design procedure presented above is a one-degree-of-freedom design, although a **constant** pre-filter can be easily implemented for steady-state accuracy.
However, this may not be sufficient and a dynamic two degrees-of-freedom design is required.<br />
The design problem is illustrated in [54](#figure--fig:coprime-uncertainty-hinf).
The feedback part of the controller \\(K\_2\\) is designed to meet robust stability and disturbance rejection requirements.
A prefilter is introduced to force the response of the closed-loop system to follow that of a specified model \\(T\_{\text{ref}}\\), often called the **reference model**.
{{< figure src="/ox-hugo/skogestad07_coprime_uncertainty_hinf.png" caption="<span class=\"figure-number\">Figure 54: </span>Two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping design problem" >}}
The design problem is to find the stabilizing controller \\(K = [K\_1,\ K\_2]\\) for the shaped plant \\(G\_s = G W\_1\\), with a normalized coprime factorization \\(G\_s = M\_s^{-1} N\_s\\), which minimizes the \\(\mathcal{H}\_\infty\\) norm of the transfer function between the signals \\([r^T\ \phi^T]^T\\) and \\([u\_s^T\ y^T\ e^T]^T\\) as defined in [54](#figure--fig:coprime-uncertainty-hinf).
This problem is easily cast into the general configuration.
The control signal to the shaped plant \\(u\_s\\) is given by:
The purpose of the prefilter is to ensure that the closed-loop response from the commands matches the reference model, i.e. \\(\hnorm{(I - G\_s K\_2)^{-1} G\_s K\_1 - T\_{\text{ref}}} \le \gamma \rho^{-2}\\).
The main steps required to synthesize a two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping controller are:
1. Design a one degree-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping controller (as described above) but without a post-compensator \\(W\_2\\)
2. Select a desired closed-loop transfer function \\(T\_{\text{ref}}\\) between the commands and controller outputs
3. Set the scalar parameter \\(\rho\\) to a small value greater than \\(1\\); something in the range \\(1\\) to \\(3\\) will usually suffice
4. For the shaped \\(G\_s = G W\_1\\), the desired response \\(T\_{\text{ref}}\\), and the scalar parameter \\(\rho\\), solve the standard \\(\mathcal{H}\_\infty\\) optimization problem to a specified tolerance to get \\(K = [K\_1,\ K\_2]\\)
5. Replace the prefilter \\(K\_1\\) by \\(K\_1 W\_i\\) to give exact model-matching at steady-state.
6. Analyze and, if required, redesign making adjustments to \\(\rho\\) and possibly \\(W\_1\\) and \\(T\_{\text{ref}}\\)
The final two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping controller is illustrated in [55](#figure--fig:hinf-synthesis-2dof).
<a id="figure--fig:hinf-synthesis-2dof"></a>
When implemented in Hanus form, the expression for \\(u\\) becomes
where \\(u\_a\\) is the **actual plant input**, that is the measurement at the **output of the actuators** which therefore contains information about possible actuator saturation.
The situation is illustrated in [56](#figure--fig:weight-anti-windup), where the actuators are each modeled by a unit gain and a saturation.
<a id="figure--fig:weight-anti-windup"></a>
Moreover, one should be careful about combining controller synthesis and analysis.
## Controller Structure Design {#controller-structure-design}
<span class="org-target" id="org-target--sec-controller-structure-design"></span>
### Introduction {#introduction}
In previous sections, we considered the general problem formulation in [57](#figure--fig:general-control-names-bis) and stated that the controller design problem is to find a controller \\(K\\) which, based on the information in \\(v\\), generates a control signal \\(u\\) which counteracts the influence of \\(w\\) on \\(z\\), thereby minimizing the closed-loop norm from \\(w\\) to \\(z\\).
<a id="figure--fig:general-control-names-bis"></a>
The reference value \\(r\\) is usually set at some higher layer in the control hierarchy.
- **Optimization layer**: computes the desired reference commands \\(r\\)
- **Control layer**: implements these commands to achieve \\(y \approx r\\)
Additional layers are possible, as is illustrated in [58](#figure--fig:control-system-hierarchy) which shows a typical control hierarchy for a chemical plant.
<a id="figure--fig:control-system-hierarchy"></a>
{{< figure src="/ox-hugo/skogestad07_system_hierarchy.png" caption="<span class=\"figure-number\">Figure 58: </span>Typical control system hierarchy in a chemical plant" >}}
In general, the information flow in such a control hierarchy is based on the higher layer sending reference values (setpoints) to the layer below reporting back any problems achieving this (see [6](#org-target--fig-optimize-control-b)).
There is usually a time scale separation between the layers which means that the **setpoints**, as viewed from a given layer, are **updated only periodically**.<br />
The optimization tends to be performed open-loop with limited use of feedback. On the other hand, the control layer is mainly based on feedback information.
The **optimization is often based on nonlinear steady-state models**, whereas we often use **linear dynamic models in the control layer**.<br />
From a theoretical point of view, the optimal performance is obtained with a **centralized optimizing controller**, which combines the two layers of optimizing and control (see [6](#org-target--fig-optimize-control-c)).
All control actions in such an ideal control system would be perfectly coordinated and the control system would use on-line dynamic optimization based on a nonlinear dynamic model of the complete plant.
However, this solution is normally not used for a number of reasons, including the cost of modeling, the difficulty of controller design, maintenance, robustness problems and the lack of computing power.
<a id="table--fig:optimize-control"></a>
<div class="table-caption">
<span class="table-number"><a href="#table--fig:optimize-control">Table 6</a>:</span>
Alternative structures for optimization and control
</div>
| ![](/ox-hugo/skogestad07_optimize_control_a.png) | ![](/ox-hugo/skogestad07_optimize_control_b.png) | ![](/ox-hugo/skogestad07_optimize_control_c.png) |
|-------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
| <span class="org-target" id="org-target--fig-optimize-control-a"></span> Open loop optimization | <span class="org-target" id="org-target--fig-optimize-control-b"></span> Closed-loop implementation with separate control layer | <span class="org-target" id="org-target--fig-optimize-control-c"></span> Integrated optimization and control |
### Selection of Controlled Outputs {#selection-of-controlled-outputs}
@@ -5885,7 +5885,7 @@ Thus, the selection of controlled and measured outputs are two separate issues.
### Selection of Manipulations and Measurements {#selection-of-manipulations-and-measurements}
We are here concerned with the variable sets \\(u\\) and \\(v\\) in Fig.&nbsp;[57](#figure--fig:general-control-names-bis).
We are here concerned with the variable sets \\(u\\) and \\(v\\) in [57](#figure--fig:general-control-names-bis).
Note that **the measurements** \\(v\\) used by the controller **are in general different from the controlled variables** \\(z\\) because we may not be able to measure all the controlled variables and we may want to measure and control additional variables in order to:
- Stabilize the plant, or more generally change its dynamics
@@ -5977,19 +5977,19 @@ Then when a SISO control loop is closed, we lose the input \\(u\_i\\) as a degre
A cascade control structure results when either of the following two situations arises:
- The reference \\(r\_i\\) is an output from another controller.
This is the **conventional cascade control** (Fig.&nbsp;[7](#org-target--fig:cascade_extra_meas))
This is the **conventional cascade control** ([7](#org-target--fig-cascade-extra-meas))
- The "measurement" \\(y\_i\\) is an output from another controller.
This is referred to as **input resetting** (Fig.&nbsp;[7](#org-target--fig:cascade_extra_input))
This is referred to as **input resetting** ([7](#org-target--fig-cascade-extra-input))
<a id="table--fig:cascade-implementation"></a>
<div class="table-caption">
<span class="table-number"><a href="#table--fig:cascade-implementation">Table 7</a></span>:
<span class="table-number"><a href="#table--fig:cascade-implementation">Table 7</a>:</span>
Cascade Implementations
</div>
| ![](/ox-hugo/skogestad07_cascade_extra_meas.png) | ![](/ox-hugo/skogestad07_cascade_extra_input.png) |
|--------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|
| <span class="org-target" id="org-target--fig:cascade_extra_meas"></span> Extra measurements \\(y\_2\\) | <span class="org-target" id="org-target--fig:cascade_extra_input"></span> Extra inputs \\(u\_2\\) |
| <span class="org-target" id="org-target--fig-cascade-extra-meas"></span> Extra measurements \\(y\_2\\) | <span class="org-target" id="org-target--fig-cascade-extra-input"></span> Extra inputs \\(u\_2\\) |
#### Cascade Control: Extra Measurements {#cascade-control-extra-measurements}
@@ -6013,7 +6013,7 @@ where in most cases \\(r\_2 = 0\\) since we do not have a degree-of-freedom to c
##### Cascade implementation {#cascade-implementation}
To obtain an implementation with two SISO controllers, we may cascade the controllers as illustrated in Fig.&nbsp;[7](#org-target--fig:cascade_extra_meas):
To obtain an implementation with two SISO controllers, we may cascade the controllers as illustrated in [7](#org-target--fig-cascade-extra-meas):
\begin{align\*}
r\_2 &= K\_1(s)(r\_1 - y\_1) \\\\
@@ -6023,12 +6023,12 @@ To obtain an implementation with two SISO controllers, we may cascade the contro
Note that the output \\(r\_2\\) from the slower primary controller \\(K\_1\\) is not a manipulated plant input, but rather the reference input to the faster secondary controller \\(K\_2\\).
Cascades based on measuring the actual manipulated variable (\\(y\_2 = u\_m\\)) are commonly used to **reduce uncertainty and non-linearity at the plant input**.
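As a rough illustrative sketch (not from the book), this two-controller cascade can be simulated in discrete time; every plant model, gain and time constant below is an assumption chosen for the example:

```python
# Discrete-time sketch of conventional cascade control (all numbers are
# illustrative assumptions, not taken from the book).
dt = 0.01
tau1, tau2 = 1.0, 0.1    # assumed time constants of G1 (outer) and G2 (inner)
r1, d2 = 1.0, 0.2        # primary setpoint and disturbance entering G2
y1 = y2 = i1 = i2 = 0.0  # plant outputs and PI integrator states

for _ in range(int(20 / dt)):
    # Primary controller K1: its output r2 is the *setpoint* of the inner loop
    e1 = r1 - y1
    i1 += e1 * dt
    r2 = 2.0 * e1 + 1.0 * i1
    # Secondary controller K2: fast loop that rejects d2 before it reaches y1
    e2 = r2 - y2
    i2 += e2 * dt
    u = 5.0 * e2 + 50.0 * i2
    # Plant: y2 = G2 (u + d2), y1 = G1 y2 (forward-Euler integration)
    y2 += dt / tau2 * (-y2 + u + d2)
    y1 += dt / tau1 * (-y1 + y2)

print(round(y1, 2))  # y1 settles at r1; the fast inner loop has absorbed d2
```

The secondary loop is deliberately tuned much faster than the primary one, which is the point of the structure: \\(d\_2\\) is rejected locally before it propagates to \\(y\_1\\).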
In the general case (Fig.&nbsp;[7](#org-target--fig:cascade_extra_meas)) \\(y\_1\\) and \\(y\_2\\) are not directly related to each other, and this is sometimes referred to as _parallel cascade control_.
However, it is common to encounter the situation in Fig.&nbsp;[59](#figure--fig:cascade-control) where the primary output \\(y\_1\\) depends directly on \\(y\_2\\) which is a special case of Fig.&nbsp;[7](#org-target--fig:cascade_extra_meas).
In the general case ([7](#org-target--fig-cascade-extra-meas)) \\(y\_1\\) and \\(y\_2\\) are not directly related to each other, and this is sometimes referred to as _parallel cascade control_.
However, it is common to encounter the situation in [59](#figure--fig:cascade-control) where the primary output \\(y\_1\\) depends directly on \\(y\_2\\) which is a special case of [7](#org-target--fig-cascade-extra-meas).
<div class="important">
With reference to the special (but common) case of cascade control shown in Fig.&nbsp;[59](#figure--fig:cascade-control), the use of **extra measurements** is useful under the following circumstances:
With reference to the special (but common) case of cascade control shown in [59](#figure--fig:cascade-control), the use of **extra measurements** is useful under the following circumstances:
- The disturbance \\(d\_2\\) is significant and \\(G\_1\\) is non-minimum phase.
If \\(G\_1\\) is minimum phase, the input-output controllability of \\(G\_2\\) and \\(G\_1 G\_2\\) are the same and there is no fundamental advantage in measuring \\(y\_2\\)
@@ -6065,7 +6065,7 @@ Then \\(u\_2(t)\\) will only be used for **transient control** and will return t
##### Cascade implementation {#cascade-implementation}
To obtain an implementation with two SISO controllers we may cascade the controllers as shown in Fig.&nbsp;[7](#org-target--fig:cascade_extra_input).
To obtain an implementation with two SISO controllers we may cascade the controllers as shown in [7](#org-target--fig-cascade-extra-input).
We again let input \\(u\_2\\) take care of the **fast control** and \\(u\_1\\) of the **long-term control**.
The fast control loop is then
@@ -6086,7 +6086,7 @@ It also shows more clearly that \\(r\_{u\_2}\\), the reference for \\(u\_2\\), m
<div class="exampl">
Consider the system in Fig.&nbsp;[60](#figure--fig:cascade-control-two-layers) with two manipulated inputs (\\(u\_2\\) and \\(u\_3\\)), one controlled output (\\(y\_1\\) which should be close to \\(r\_1\\)) and two measured variables (\\(y\_1\\) and \\(y\_2\\)).
Consider the system in [60](#figure--fig:cascade-control-two-layers) with two manipulated inputs (\\(u\_2\\) and \\(u\_3\\)), one controlled output (\\(y\_1\\) which should be close to \\(r\_1\\)) and two measured variables (\\(y\_1\\) and \\(y\_2\\)).
Input \\(u\_2\\) has a more direct effect on \\(y\_1\\) than does input \\(u\_3\\) (there is a large delay in \\(G\_3(s)\\)).
Input \\(u\_2\\) should only be used for transient control as it is desirable that it remains close to \\(r\_3 = r\_{u\_2}\\).
The extra measurement \\(y\_2\\) is closer than \\(y\_1\\) to the input \\(u\_2\\) and may be useful for detecting disturbances affecting \\(G\_1\\).
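A minimal numeric sketch of this input-resetting (mid-ranging) idea, with all plant models and controller gains assumed for illustration: a fast controller holds \\(y\_1\\) at \\(r\_1\\) using \\(u\_2\\), while a slower controller manipulates \\(u\_3\\) so that \\(u\_2\\) drifts back to its resting value \\(r\_{u\_2}\\):

```python
# Illustrative sketch of input resetting / mid-ranging control
# (plant models and gains are assumptions, not taken from the book).
dt = 0.01
r1, r_u2 = 1.0, 0.0       # output setpoint and desired resting value of u2
tau_fast, tau_slow = 0.2, 2.0
y1 = x3 = i1 = i3 = 0.0   # output, slow G3 state, PI integrator states
u2 = 0.0

for _ in range(int(60 / dt)):
    # Fast controller: uses u2 (direct, fast effect) to keep y1 at r1
    e1 = r1 - y1
    i1 += e1 * dt
    u2 = 5.0 * e1 + 10.0 * i1
    # Slow "resetting" controller: manipulates u3 so that u3 takes over the
    # steady-state load and u2 returns toward r_u2
    e3 = u2 - r_u2
    i3 += e3 * dt
    u3 = 0.5 * e3 + 0.25 * i3
    # Plant: y1 responds quickly to u2 and slowly (through G3) to u3
    x3 += dt / tau_slow * (-x3 + u3)
    y1 += dt / tau_fast * (-y1 + u2 + x3)

print(round(y1, 2), round(u2, 2))  # y1 holds r1 while u2 returns toward r_u2
```

At steady state the slow path carries the whole load, so \\(u\_2\\) is free to handle the next transient, which mirrors the long-term/transient split described above.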
@@ -6173,7 +6173,7 @@ Four applications of partial control are:
The outputs \\(y\_1\\) have an associated control objective but are not measured.
Instead, we aim at indirectly controlling \\(y\_1\\) by controlling the secondary measured variables \\(y\_2\\).
The table&nbsp;[8](#table--tab:partial-control) shows clearly the differences between the four applications of partial control.
Table [8](#table--tab:partial-control) clearly shows the differences between the four applications of partial control.
In all cases, there is a control objective associated with \\(y\_1\\) and feedback involving measurement and control of \\(y\_2\\), and we want:
- The effect of disturbances on \\(y\_1\\) to be small (when \\(y\_2\\) is controlled)
@@ -6181,7 +6181,7 @@ In all cases, there is a control objective associated with \\(y\_1\\) and a feed
<a id="table--tab:partial-control"></a>
<div class="table-caption">
<span class="table-number"><a href="#table--tab:partial-control">Table 8</a></span>:
<span class="table-number"><a href="#table--tab:partial-control">Table 8</a>:</span>
Applications of partial control
</div>
@@ -6201,7 +6201,7 @@ By partitioning the inputs and outputs, the overall model \\(y = G u\\) can be w
\end{aligned}
\end{equation}
Assume now that feedback control \\(u\_2 = K\_2(r\_2 - y\_2 - n\_2)\\) is used for the "secondary" subsystem involving \\(u\_2\\) and \\(y\_2\\) (Fig.&nbsp;[61](#figure--fig:partial-control)).
Assume now that feedback control \\(u\_2 = K\_2(r\_2 - y\_2 - n\_2)\\) is used for the "secondary" subsystem involving \\(u\_2\\) and \\(y\_2\\) ([61](#figure--fig:partial-control)).
We get:
\begin{equation} \label{eq:partial\_control}
@@ -6270,7 +6270,7 @@ The selection of \\(u\_2\\) and \\(y\_2\\) for use in the lower-layer control sy
##### Sequential design of cascade control systems {#sequential-design-of-cascade-control-systems}
Consider the conventional cascade control system in Fig.&nbsp;[7](#org-target--fig:cascade_extra_meas) where we have additional "secondary" measurements \\(y\_2\\) with no associated control objective, and the objective is to improve the control of \\(y\_1\\) by locally controlling \\(y\_2\\).
Consider the conventional cascade control system in [7](#org-target--fig-cascade-extra-meas) where we have additional "secondary" measurements \\(y\_2\\) with no associated control objective, and the objective is to improve the control of \\(y\_1\\) by locally controlling \\(y\_2\\).
The idea is that this should reduce the effect of disturbances and uncertainty on \\(y\_1\\).
From <eq:partial_control>, it follows that we should select \\(y\_2\\) and \\(u\_2\\) such that \\(\\|P\_d\\|\\) is small and at least smaller than \\(\\|G\_{d1}\\|\\).
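At steady state, the limiting case of perfect secondary control (\\(K\_2 \to \infty\\)) gives the partial disturbance gain \\(P\_d = G\_{d1} - G\_{12} G\_{22}^{-1} G\_{d2}\\). A small numeric check with hypothetical gains shows how closing the secondary loop can shrink the disturbance effect on \\(y\_1\\):

```python
# Hypothetical steady-state gains for the partitioned plant (illustration only)
G12, G22 = 0.8, 2.0   # effect of u2 on y1 and on y2
Gd1, Gd2 = 1.5, 3.0   # effect of the disturbance d on y1 and on y2

# Limiting case of perfect secondary control (K2 -> infinity):
#   Pd = Gd1 - G12 * G22^{-1} * Gd2
Pd = Gd1 - G12 / G22 * Gd2

print(Gd1, round(Pd, 3))  # here |Pd| < |Gd1|, so controlling y2 helps y1
```

If the numbers were such that \\(|P\_d| > |G\_{d1}|\\), closing the secondary loop would instead amplify the disturbance, which is why the selection of \\(y\_2\\) and \\(u\_2\\) matters.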
@@ -6338,7 +6338,7 @@ Then to minimize the control error for the primary output, \\(J = \\|y\_1 - r\_1
### Decentralized Feedback Control {#decentralized-feedback-control}
In this section, \\(G(s)\\) is a square plant which is to be controlled using a diagonal controller (Fig.&nbsp;[62](#figure--fig:decentralized-diagonal-control)).
In this section, \\(G(s)\\) is a square plant which is to be controlled using a diagonal controller ([62](#figure--fig:decentralized-diagonal-control)).
<a id="figure--fig:decentralized-diagonal-control"></a>
@@ -6729,7 +6729,7 @@ The conditions are also useful in an **input-output controllability analysis** f
## Model Reduction {#model-reduction}
<span class="org-target" id="org-target--sec:model_reduction"></span>
<span class="org-target" id="org-target--sec-model-reduction"></span>
### Introduction {#introduction}