|
|
|
@@ -16,10 +16,54 @@ Author(s)
|
|
|
|
|
Year
|
|
|
|
|
: 2007
|
|
|
|
|
|
|
|
|
|
<div style="display: none"> \(
|
|
|
|
|
% H Infini
|
|
|
|
|
\newcommand{\hinf}{\mathcal{H}_\infty}
|
|
|
|
|
% H 2
|
|
|
|
|
\newcommand{\htwo}{\mathcal{H}_2}
|
|
|
|
|
% Omega
|
|
|
|
|
\newcommand{\w}{\omega}
|
|
|
|
|
% H-Infinity Norm
|
|
|
|
|
\newcommand{\hnorm}[1]{\left\|#1\right\|_{\infty}}
|
|
|
|
|
% H-2 Norm
|
|
|
|
|
\newcommand{\normtwo}[1]{\left\|#1\right\|_{2}}
|
|
|
|
|
% Norm
|
|
|
|
|
\newcommand{\norm}[1]{\left\|#1\right\|}
|
|
|
|
|
% Absolute value
|
|
|
|
|
\newcommand{\abs}[1]{\left\lvert#1\right\rvert}
|
|
|
|
|
% Maximum for all omega
|
|
|
|
|
\newcommand{\maxw}{\text{max}_{\omega}}
|
|
|
|
|
% Maximum singular value
|
|
|
|
|
\newcommand{\maxsv}{\overline{\sigma}}
|
|
|
|
|
% Minimum singular value
|
|
|
|
|
\newcommand{\minsv}{\underline{\sigma}}
|
|
|
|
|
% Under bar
|
|
|
|
|
\newcommand{\ubar}[1]{\text{\b{$#1$}}}
|
|
|
|
|
% Diag keyword
|
|
|
|
|
\newcommand{\diag}[1]{\text{diag}\{{#1}\}}
|
|
|
|
|
% Vector
|
|
|
|
|
\newcommand{\colvec}[1]{\begin{bmatrix}#1\end{bmatrix}}
|
|
|
|
|
\)</div>
|
|
|
|
|
|
|
|
|
|
<div style="display: none"> \(
|
|
|
|
|
\newcommand{\tcmbox}[1]{\boxed{#1}}
|
|
|
|
|
% Simulate SIunitx
|
|
|
|
|
\newcommand{\SI}[2]{#1\,#2}
|
|
|
|
|
\newcommand{\ang}[1]{#1^{\circ}}
|
|
|
|
|
\newcommand{\degree}{^{\circ}}
|
|
|
|
|
\newcommand{\radian}{\text{rad}}
|
|
|
|
|
\newcommand{\percent}{\%}
|
|
|
|
|
\newcommand{\decibel}{\text{dB}}
|
|
|
|
|
\newcommand{\per}{/}
|
|
|
|
|
% Bug with subequations
|
|
|
|
|
\newcommand{\eatLabel}[2]{}
|
|
|
|
|
\newenvironment{subequations}{\eatLabel}{}
|
|
|
|
|
\)</div>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Introduction {#introduction}
|
|
|
|
|
|
|
|
|
|
<a id="org43a1dd8"></a>
|
|
|
|
|
<a id="orga7066d6"></a>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### The Process of Control System Design {#the-process-of-control-system-design}
|
|
|
|
@@ -190,7 +234,7 @@ Notations used throughout this note are summarized in tables [table:notatio
|
|
|
|
|
|
|
|
|
|
## Classical Feedback Control {#classical-feedback-control}
|
|
|
|
|
|
|
|
|
|
<a id="org49cc073"></a>
|
|
|
|
|
<a id="orgaabb1e5"></a>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Frequency Response {#frequency-response}
|
|
|
|
@@ -239,7 +283,7 @@ Thus, the input to the plant is \\(u = K(s) (r-y-n)\\).
|
|
|
|
|
The objective of control is to manipulate \\(u\\) (design \\(K\\)) such that the control error \\(e\\) remains small in spite of disturbances \\(d\\).
|
|
|
|
|
The control error is defined as \\(e = y-r\\).
|
|
|
|
|
|
|
|
|
|
<a id="org594420f"></a>
|
|
|
|
|
<a id="org3b71acb"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_classical_feedback_alt.png" caption="Figure 1: Configuration for one degree-of-freedom control" >}}
|
|
|
|
|
|
|
|
|
@@ -551,7 +595,7 @@ We cannot achieve both of these simultaneously with a single feedback controller
|
|
|
|
|
|
|
|
|
|
The solution is to use a **two degrees-of-freedom controller** where the reference signal \\(r\\) and output measurement \\(y\_m\\) are independently treated by the controller (Fig. [fig:classical_feedback_2dof_alt](#fig:classical_feedback_2dof_alt)), rather than operating on their difference \\(r - y\_m\\).
|
|
|
|
|
|
|
|
|
|
<a id="org658969c"></a>
|
|
|
|
|
<a id="org9265d45"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_classical_feedback_2dof_alt.png" caption="Figure 2: 2 degrees-of-freedom control architecture" >}}
|
|
|
|
|
|
|
|
|
@@ -560,7 +604,7 @@ The controller can be slit into two separate blocks (Fig. [fig:classical_fe
|
|
|
|
|
- the **feedback controller** \\(K\_y\\) that is used to **reduce the effect of uncertainty** (disturbances and model errors)
|
|
|
|
|
- the **prefilter** \\(K\_r\\) that **shapes the commands** \\(r\\) to improve tracking performance
|
|
|
|
|
|
|
|
|
|
<a id="org8799861"></a>
|
|
|
|
|
<a id="org0e3d8d7"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_classical_feedback_sep.png" caption="Figure 3: 2 degrees-of-freedom control architecture with two separate blocs" >}}
|
|
|
|
|
|
|
|
|
@@ -629,7 +673,7 @@ With (see Fig. [fig:performance_weigth](#fig:performance_weigth)):
|
|
|
|
|
|
|
|
|
|
</div>
|
|
|
|
|
|
|
|
|
|
<a id="org20f2d1e"></a>
|
|
|
|
|
<a id="org0656ee4"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_weight_first_order.png" caption="Figure 4: Inverse of performance weight" >}}
|
|
|
|
|
|
|
|
|
@@ -653,7 +697,7 @@ After selecting the form of \\(N\\) and the weights, the \\(\hinf\\) optimal con
|
|
|
|
|
|
|
|
|
|
## Introduction to Multivariable Control {#introduction-to-multivariable-control}
|
|
|
|
|
|
|
|
|
|
<a id="org3b68fc1"></a>
|
|
|
|
|
<a id="org25e187e"></a>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Introduction {#introduction}
|
|
|
|
@@ -696,7 +740,7 @@ For negative feedback system (Fig. [fig:classical_feedback_bis](#fig:classi
|
|
|
|
|
- \\(S \triangleq (I + L)^{-1}\\) is the transfer function from \\(d\_1\\) to \\(y\\)
|
|
|
|
|
- \\(T \triangleq L(I + L)^{-1}\\) is the transfer function from \\(r\\) to \\(y\\)
|
|
|
|
|
|
|
|
|
|
<a id="orgb1e90db"></a>
|
|
|
|
|
<a id="org10be303"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_classical_feedback_bis.png" caption="Figure 5: Conventional negative feedback control system" >}}
|
|
|
|
|
|
|
|
|
@@ -1011,7 +1055,7 @@ The **structured singular value** \\(\mu\\) is a tool for analyzing the effects
|
|
|
|
|
|
|
|
|
|
The general control problem formulation is represented in Fig. [fig:general_control_names](#fig:general_control_names).
|
|
|
|
|
|
|
|
|
|
<a id="orge52557f"></a>
|
|
|
|
|
<a id="org410e618"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_general_control_names.png" caption="Figure 6: General control configuration" >}}
|
|
|
|
|
|
|
|
|
@@ -1041,7 +1085,7 @@ We consider:
|
|
|
|
|
- The weighted or normalized exogenous inputs \\(w\\) (where \\(\tilde{w} = W\_w w\\) consists of the "physical" signals entering the system)
|
|
|
|
|
- The weighted or normalized controlled outputs \\(z = W\_z \tilde{z}\\) (where \\(\tilde{z}\\) often consists of the control error \\(y-r\\) and the manipulated input \\(u\\))
|
|
|
|
|
|
|
|
|
|
<a id="orga94c007"></a>
|
|
|
|
|
<a id="org98354a0"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_general_plant_weights.png" caption="Figure 7: General Weighted Plant" >}}
|
|
|
|
|
|
|
|
|
@@ -1084,7 +1128,7 @@ where \\(F\_l(P, K)\\) denotes a **lower linear fractional transformation** (LFT
|
|
|
|
|
|
|
|
|
|
The general control configuration may be extended to include model uncertainty as shown in Fig. [fig:general_config_model_uncertainty](#fig:general_config_model_uncertainty).
|
|
|
|
|
|
|
|
|
|
<a id="org4adeb70"></a>
|
|
|
|
|
<a id="orgaee1f77"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_general_control_Mdelta.png" caption="Figure 8: General control configuration for the case with model uncertainty" >}}
|
|
|
|
|
|
|
|
|
@@ -1112,7 +1156,7 @@ MIMO systems are often **more sensitive to uncertainty** than SISO systems.
|
|
|
|
|
|
|
|
|
|
## Elements of Linear System Theory {#elements-of-linear-system-theory}
|
|
|
|
|
|
|
|
|
|
<a id="orgad24d7e"></a>
|
|
|
|
|
<a id="orgb820714"></a>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### System Descriptions {#system-descriptions}
|
|
|
|
@@ -1398,7 +1442,7 @@ RHP-zeros therefore imply high gain instability.
|
|
|
|
|
|
|
|
|
|
### Internal Stability of Feedback Systems {#internal-stability-of-feedback-systems}
|
|
|
|
|
|
|
|
|
|
<a id="org0711897"></a>
|
|
|
|
|
<a id="orgb1e4209"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_classical_feedback_stability.png" caption="Figure 9: Block diagram used to check internal stability" >}}
|
|
|
|
|
|
|
|
|
@@ -1545,7 +1589,7 @@ It may be shown that the Hankel norm is equal to \\(\left\\|G(s)\right\\|\_H = \
|
|
|
|
|
|
|
|
|
|
## Limitations on Performance in SISO Systems {#limitations-on-performance-in-siso-systems}
|
|
|
|
|
|
|
|
|
|
<a id="orga81403c"></a>
|
|
|
|
|
<a id="org76a7a2f"></a>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Input-Output Controllability {#input-output-controllability}
|
|
|
|
@@ -1937,7 +1981,7 @@ Uncertainty in the crossover frequency region can result in poor performance and
|
|
|
|
|
|
|
|
|
|
### Summary: Controllability Analysis with Feedback Control {#summary-controllability-analysis-with-feedback-control}
|
|
|
|
|
|
|
|
|
|
<a id="org4107db4"></a>
|
|
|
|
|
<a id="org4ab6880"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_classical_feedback_meas.png" caption="Figure 10: Feedback control system" >}}
|
|
|
|
|
|
|
|
|
@@ -1966,7 +2010,7 @@ In summary:
|
|
|
|
|
Sometimes, the disturbances are so large that we hit input saturation or the required bandwidth is not achievable. To avoid the latter problem, we must at least require that the effect of the disturbance is less than \\(1\\) at frequencies beyond the bandwidth:
|
|
|
|
|
\\[ \abs{G\_d(j\w)} < 1 \quad \forall \w \geq \w\_c \\]
|
|
|
|
|
|
|
|
|
|
<a id="orgd603c07"></a>
|
|
|
|
|
<a id="orga143a9d"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_margin_requirements.png" caption="Figure 11: Illustration of controllability requirements" >}}
|
|
|
|
|
|
|
|
|
@@ -1988,7 +2032,7 @@ The rules may be used to **determine whether or not a given plant is controllabl
|
|
|
|
|
|
|
|
|
|
## Limitations on Performance in MIMO Systems {#limitations-on-performance-in-mimo-systems}
|
|
|
|
|
|
|
|
|
|
<a id="org2ba9bd9"></a>
|
|
|
|
|
<a id="org6b25e5b"></a>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Introduction {#introduction}
|
|
|
|
@@ -2299,7 +2343,7 @@ We here focus on input and output uncertainty.
|
|
|
|
|
In multiplicative form, the input and output uncertainties are given by (see Fig. [fig:input_output_uncertainty](#fig:input_output_uncertainty)):
|
|
|
|
|
\\[ G^\prime = (I + E\_O) G (I + E\_I) \\]
|
|
|
|
|
|
|
|
|
|
<a id="orgcf627e9"></a>
|
|
|
|
|
<a id="org367c804"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_input_output_uncertainty.png" caption="Figure 12: Plant with multiplicative input and output uncertainty" >}}
|
|
|
|
|
|
|
|
|
@@ -2435,7 +2479,7 @@ However, the situation is usually the opposite with model uncertainty because fo
|
|
|
|
|
|
|
|
|
|
## Uncertainty and Robustness for SISO Systems {#uncertainty-and-robustness-for-siso-systems}
|
|
|
|
|
|
|
|
|
|
<a id="org83d3f33"></a>
|
|
|
|
|
<a id="org80d55a0"></a>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Introduction to Robustness {#introduction-to-robustness}
|
|
|
|
@@ -2509,7 +2553,7 @@ which may be represented by the diagram in Fig. [fig:input_uncertainty_set]
|
|
|
|
|
|
|
|
|
|
</div>
|
|
|
|
|
|
|
|
|
|
<a id="orgc4dec07"></a>
|
|
|
|
|
<a id="org865770b"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set.png" caption="Figure 13: Plant with multiplicative uncertainty" >}}
|
|
|
|
|
|
|
|
|
@@ -2563,7 +2607,7 @@ To illustrate how parametric uncertainty translate into frequency domain uncerta
|
|
|
|
|
In general, these uncertain regions have complicated shapes and complex mathematical descriptions.
|
|
|
|
|
- **Step 2**. We therefore approximate such complex regions as discs, resulting in a **complex additive uncertainty description**
|
|
|
|
|
|
|
|
|
|
<a id="org6f945a2"></a>
|
|
|
|
|
<a id="org168b9ff"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_uncertainty_region.png" caption="Figure 14: Uncertainty regions of the Nyquist plot at given frequencies" >}}
|
|
|
|
|
|
|
|
|
@@ -2586,7 +2630,7 @@ At each frequency, all possible \\(\Delta(j\w)\\) "generates" a disc-shaped regi
|
|
|
|
|
|
|
|
|
|
</div>
|
|
|
|
|
|
|
|
|
|
<a id="orgdc40ec3"></a>
|
|
|
|
|
<a id="org46ced8b"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_uncertainty_disc_generated.png" caption="Figure 15: Disc-shaped uncertainty regions generated by complex additive uncertainty" >}}
|
|
|
|
|
|
|
|
|
@@ -2643,7 +2687,7 @@ To derive \\(w\_I(s)\\), we then try to find a simple weight so that \\(\abs{w\_
|
|
|
|
|
|
|
|
|
|
</div>
|
|
|
|
|
|
|
|
|
|
<a id="org7fa26f9"></a>
|
|
|
|
|
<a id="org797da76"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_uncertainty_weight.png" caption="Figure 16: Relative error for 27 combinations of \\(k,\ \tau\\) and \\(\theta\\). Solid and dashed lines: two weights \\(\abs{w\_I}\\)" >}}
|
|
|
|
|
|
|
|
|
@@ -2682,7 +2726,7 @@ The magnitude of the relative uncertainty caused by neglecting the dynamics in \
|
|
|
|
|
Let \\(f(s) = e^{-\theta\_p s}\\), where \\(0 \le \theta\_p \le \theta\_{\text{max}}\\). We want to represent \\(G\_p(s) = G\_0(s)e^{-\theta\_p s}\\) by a delay-free plant \\(G\_0(s)\\) and multiplicative uncertainty. Let us first consider the maximum delay, for which the relative error \\(\abs{1 - e^{-j \w \theta\_{\text{max}}}}\\) is shown as a function of frequency (Fig. [fig:neglected_time_delay](#fig:neglected_time_delay)). If we consider all \\(\theta \in [0, \theta\_{\text{max}}]\\) then:
|
|
|
|
|
\\[ l\_I(\w) = \begin{cases} \abs{1 - e^{-j\w\theta\_{\text{max}}}} & \w < \pi/\theta\_{\text{max}} \\ 2 & \w \ge \pi/\theta\_{\text{max}} \end{cases} \\]
|
|
|
|
|
|
|
|
|
|
<a id="orgcac6bc4"></a>
|
|
|
|
|
<a id="org8ddf130"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_neglected_time_delay.png" caption="Figure 17: Neglected time delay" >}}
|
|
|
|
|
|
|
|
|
@@ -2692,7 +2736,7 @@ Let \\(f(s) = e^{-\theta\_p s}\\), where \\(0 \le \theta\_p \le \theta\_{\text{m
|
|
|
|
|
Let \\(f(s) = 1/(\tau\_p s + 1)\\), where \\(0 \le \tau\_p \le \tau\_{\text{max}}\\). In this case the resulting \\(l\_I(\w)\\) (Fig. [fig:neglected_first_order_lag](#fig:neglected_first_order_lag)) can be represented by a rational transfer function with \\(\abs{w\_I(j\w)} = l\_I(\w)\\) where
|
|
|
|
|
\\[ w\_I(s) = \frac{\tau\_{\text{max}} s}{\tau\_{\text{max}} s + 1} \\]
|
|
|
|
|
|
|
|
|
|
<a id="org119416a"></a>
|
|
|
|
|
<a id="orge3ddb3c"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_neglected_first_order_lag.png" caption="Figure 18: Neglected first-order lag uncertainty" >}}
|
|
|
|
|
|
|
|
|
@@ -2709,7 +2753,7 @@ However, as shown in Fig. [fig:lag_delay_uncertainty](#fig:lag_delay_uncert
|
|
|
|
|
|
|
|
|
|
It is suggested to start with the simple weight and then if needed, to try the higher order weight.
|
|
|
|
|
|
|
|
|
|
<a id="orge82f57e"></a>
|
|
|
|
|
<a id="orgb652b95"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_lag_delay_uncertainty.png" caption="Figure 19: Multiplicative weight for gain and delay uncertainty" >}}
|
|
|
|
|
|
|
|
|
@@ -2749,7 +2793,7 @@ We use the Nyquist stability condition to test for robust stability of the close
|
|
|
|
|
&\Longleftrightarrow \quad L\_p \ \text{should not encircle -1}, \ \forall L\_p
|
|
|
|
|
\end{align\*}
|
|
|
|
|
|
|
|
|
|
<a id="org0430058"></a>
|
|
|
|
|
<a id="org0fda45b"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set_feedback.png" caption="Figure 20: Feedback system with multiplicative uncertainty" >}}
|
|
|
|
|
|
|
|
|
@@ -2765,7 +2809,7 @@ Encirclements are avoided if none of the discs cover \\(-1\\), and we get:
|
|
|
|
|
&\Leftrightarrow \quad \abs{w\_I T} < 1, \ \forall\w \\\\\\
|
|
|
|
|
\end{align\*}
|
|
|
|
|
|
|
|
|
|
<a id="org40d7367"></a>
|
|
|
|
|
<a id="org4ead586"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_nyquist_uncertainty.png" caption="Figure 21: Nyquist plot of \\(L\_p\\) for robust stability" >}}
|
|
|
|
|
|
|
|
|
@@ -2803,7 +2847,7 @@ And we obtain the same condition as before.
|
|
|
|
|
We will derive a corresponding RS-condition for a feedback system with inverse multiplicative uncertainty (Fig. [fig:inverse_uncertainty_set](#fig:inverse_uncertainty_set)) in which
|
|
|
|
|
\\[ G\_p = G(1 + w\_{iI}(s) \Delta\_{iI})^{-1} \\]
|
|
|
|
|
|
|
|
|
|
<a id="org0cbbf1c"></a>
|
|
|
|
|
<a id="orgaad9987"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_inverse_uncertainty_set.png" caption="Figure 22: Feedback system with inverse multiplicative uncertainty" >}}
|
|
|
|
|
|
|
|
|
@@ -2855,7 +2899,7 @@ The condition for nominal performance when considering performance in terms of t
|
|
|
|
|
Now \\(\abs{1 + L}\\) represents at each frequency the distance of \\(L(j\omega)\\) from the point \\(-1\\) in the Nyquist plot, so \\(L(j\omega)\\) must be at least a distance of \\(\abs{w\_P(j\omega)}\\) from \\(-1\\).
|
|
|
|
|
This is illustrated graphically in Fig. [fig:nyquist_performance_condition](#fig:nyquist_performance_condition).
|
|
|
|
|
|
|
|
|
|
<a id="orga872ba6"></a>
|
|
|
|
|
<a id="org8e66342"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_nyquist_performance_condition.png" caption="Figure 23: Nyquist plot illustration of the nominal performance condition \\(\abs{w\_P} < \abs{1 + L}\\)" >}}
|
|
|
|
|
|
|
|
|
@@ -2880,7 +2924,7 @@ Let's consider the case of multiplicative uncertainty as shown on Fig. [fig
|
|
|
|
|
The robust performance corresponds to requiring \\(\abs{\hat{y}/d}<1\ \forall \Delta\_I\\) and the set of possible loop transfer functions is
|
|
|
|
|
\\[ L\_p = G\_p K = L (1 + w\_I \Delta\_I) = L + w\_I L \Delta\_I \\]
|
|
|
|
|
|
|
|
|
|
<a id="orga55c360"></a>
|
|
|
|
|
<a id="org3ca06cf"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set_feedback_weight_bis.png" caption="Figure 24: Diagram for robust performance with multiplicative uncertainty" >}}
|
|
|
|
|
|
|
|
|
@@ -3046,7 +3090,7 @@ with \\(\Phi(s) \triangleq (sI - A)^{-1}\\).
|
|
|
|
|
|
|
|
|
|
This is illustrated in the block diagram of Fig. [fig:uncertainty_state_a_matrix](#fig:uncertainty_state_a_matrix), which is in the form of an inverse additive perturbation.
|
|
|
|
|
|
|
|
|
|
<a id="org7061c4c"></a>
|
|
|
|
|
<a id="orgd286b2a"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_uncertainty_state_a_matrix.png" caption="Figure 25: Uncertainty in state space A-matrix" >}}
|
|
|
|
|
|
|
|
|
@@ -3064,7 +3108,7 @@ We also derived a condition for robust performance with multiplicative uncertain
|
|
|
|
|
|
|
|
|
|
## Robust Stability and Performance Analysis {#robust-stability-and-performance-analysis}
|
|
|
|
|
|
|
|
|
|
<a id="orga4090db"></a>
|
|
|
|
|
<a id="orgb076a9b"></a>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### General Control Configuration with Uncertainty {#general-control-configuration-with-uncertainty}
|
|
|
|
@@ -3075,13 +3119,13 @@ where each \\(\Delta\_i\\) represents a **specific source of uncertainty**, e.g.
|
|
|
|
|
|
|
|
|
|
If we also pull out the controller \\(K\\), we get the generalized plant \\(P\\) as shown in Fig. [fig:general_control_delta](#fig:general_control_delta). This form is useful for controller synthesis.
|
|
|
|
|
|
|
|
|
|
<a id="orgeddb6de"></a>
|
|
|
|
|
<a id="org0853688"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_general_control_delta.png" caption="Figure 26: General control configuration used for controller synthesis" >}}
|
|
|
|
|
|
|
|
|
|
If the controller is given and we want to analyze the uncertain system, we use the \\(N\Delta\text{-structure}\\) in Fig. [fig:general_control_Ndelta](#fig:general_control_Ndelta).
|
|
|
|
|
|
|
|
|
|
<a id="org40e5b3e"></a>
|
|
|
|
|
<a id="orgc524251"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_general_control_Ndelta.png" caption="Figure 27: \\(N\Delta\text{-structure}\\) for robust performance analysis" >}}
|
|
|
|
|
|
|
|
|
@@ -3101,7 +3145,7 @@ Similarly, the uncertain closed-loop transfer function from \\(w\\) to \\(z\\),
|
|
|
|
|
|
|
|
|
|
To analyze robust stability of \\(F\\), we can rearrange the system into the \\(M\Delta\text{-structure}\\) shown in Fig. [fig:general_control_Mdelta_bis](#fig:general_control_Mdelta_bis) where \\(M = N\_{11}\\) is the transfer function from the output to the input of the perturbations.
|
|
|
|
|
|
|
|
|
|
<a id="org35c320b"></a>
|
|
|
|
|
<a id="orge0e68f2"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_general_control_Mdelta_bis.png" caption="Figure 28: \\(M\Delta\text{-structure}\\) for robust stability analysis" >}}
|
|
|
|
|
|
|
|
|
@@ -3153,7 +3197,7 @@ Three common forms of **feedforward unstructured uncertainty** are shown Fig.&nb
|
|
|
|
|
|
|
|
|
|
|  |  |  |
|
|
|
|
|
|----------------------------------------------------|----------------------------------------------------------|-----------------------------------------------------------|
|
|
|
|
|
| <a id="org44a89ed"></a> Additive uncertainty | <a id="org22a596e"></a> Multiplicative input uncertainty | <a id="org34aa45a"></a> Multiplicative output uncertainty |
|
|
|
|
|
| <a id="org94556ee"></a> Additive uncertainty | <a id="org205e138"></a> Multiplicative input uncertainty | <a id="org884d99b"></a> Multiplicative output uncertainty |
|
|
|
|
|
|
|
|
|
|
In Fig. [fig:feedback_uncertainty](#fig:feedback_uncertainty), three **feedback or inverse unstructured uncertainty** forms are shown: inverse additive uncertainty, inverse multiplicative input uncertainty and inverse multiplicative output uncertainty.
|
|
|
|
|
|
|
|
|
@@ -3176,7 +3220,7 @@ In Fig. [fig:feedback_uncertainty](#fig:feedback_uncertainty), three **feed
|
|
|
|
|
|
|
|
|
|
|  |  |  |
|
|
|
|
|
|--------------------------------------------------------|------------------------------------------------------------------|-------------------------------------------------------------------|
|
|
|
|
|
| <a id="org1808a4d"></a> Inverse additive uncertainty | <a id="org75e65aa"></a> Inverse multiplicative input uncertainty | <a id="org8c1d406"></a> Inverse multiplicative output uncertainty |
|
|
|
|
|
| <a id="org17a4e6d"></a> Inverse additive uncertainty | <a id="org2765e1d"></a> Inverse multiplicative input uncertainty | <a id="org33356e1"></a> Inverse multiplicative output uncertainty |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
##### Lumping uncertainty into a single perturbation {#lumping-uncertainty-into-a-single-perturbation}
|
|
|
|
@@ -3246,7 +3290,7 @@ where \\(r\_0\\) is the relative uncertainty at steady-state, \\(1/\tau\\) is th
|
|
|
|
|
Let's consider the feedback system with multiplicative input uncertainty \\(\Delta\_I\\) shown in Fig. [fig:input_uncertainty_set_feedback_weight](#fig:input_uncertainty_set_feedback_weight).
|
|
|
|
|
\\(W\_I\\) is a normalization weight for the uncertainty and \\(W\_P\\) is a performance weight.
|
|
|
|
|
|
|
|
|
|
<a id="org150edcc"></a>
|
|
|
|
|
<a id="org2ebb26f"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set_feedback_weight.png" caption="Figure 29: System with multiplicative input uncertainty and performance measured at the output" >}}
|
|
|
|
|
|
|
|
|
@@ -3406,7 +3450,7 @@ Where \\(G = M\_l^{-1} N\_l\\) is a left coprime factorization of the nominal pl
|
|
|
|
|
|
|
|
|
|
This uncertainty description is surprisingly **general**: it allows both zeros and poles to cross into the right-half plane, and it has proven to be very useful in applications.
|
|
|
|
|
|
|
|
|
|
<a id="org647e6f9"></a>
|
|
|
|
|
<a id="org71a706b"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_coprime_uncertainty.png" caption="Figure 30: Coprime Uncertainty" >}}
|
|
|
|
|
|
|
|
|
@@ -3438,7 +3482,7 @@ where \\(d\_i\\) is a scalar and \\(I\_i\\) is an identity matrix of the same di
|
|
|
|
|
Now rescale the inputs and outputs of \\(M\\) and \\(\Delta\\) by inserting the matrices \\(D\\) and \\(D^{-1}\\) on both sides as shown in Fig. [fig:block_diagonal_scalings](#fig:block_diagonal_scalings).
|
|
|
|
|
This clearly has no effect on stability.
|
|
|
|
|
|
|
|
|
|
<a id="orgfd9a9f8"></a>
|
|
|
|
|
<a id="org949fb62"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_block_diagonal_scalings.png" caption="Figure 31: Use of block-diagonal scalings, \\(\Delta D = D \Delta\\)" >}}
|
|
|
|
|
|
|
|
|
@@ -3754,7 +3798,7 @@ with the decoupling controller we have:
|
|
|
|
|
\\[ \bar{\sigma}(N\_{22}) = \bar{\sigma}(w\_P S) = \left|\frac{s/2 + 0.05}{s + 0.7}\right| \\]
|
|
|
|
|
and we see from Fig. [fig:mu_plots_distillation](#fig:mu_plots_distillation) that the NP-condition is satisfied.
|
|
|
|
|
|
|
|
|
|
<a id="org573e9d8"></a>
|
|
|
|
|
<a id="orga318a7a"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_mu_plots_distillation.png" caption="Figure 32: \\(\mu\text{-plots}\\) for distillation process with decoupling controller" >}}
|
|
|
|
|
|
|
|
|
@@ -3779,7 +3823,7 @@ The peak value is close to 6, meaning that even with 6 times less uncertainty, t
|
|
|
|
|
|
|
|
|
|
We here consider the relationship between \\(\mu\\) for RP and the condition number of the plant or of the controller.
|
|
|
|
|
We consider unstructured multiplicative uncertainty (i.e. \\(\Delta\_I\\) is a full matrix) and performance is measured in terms of the weighted sensitivity.
|
|
|
|
|
With \\(N\\) given by [eq:n_delta_structure_clasic](#eq:n_delta_structure_clasic), we have:
|
|
|
|
|
With \\(N\\) given by \eqref{eq:n_delta_structure_clasic}, we have:
|
|
|
|
|
\\[ \overbrace{\mu\_{\tilde{\Delta}}(N)}^{\text{RP}} \le [ \overbrace{\bar{\sigma}(w\_I T\_I)}^{\text{RS}} + \overbrace{\bar{\sigma}(w\_P S)}^{\text{NP}} ] (1 + \sqrt{k}) \\]
|
|
|
|
|
where \\(k\\) is taken as the smallest value between the condition number of the plant and of the controller:
|
|
|
|
|
\\[ k = \text{min}(\gamma(G), \gamma(K)) \\]
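
A short numerical reading of this bound. The steady-state gain matrix below is the one usually quoted for the distillation example, while the RS/NP levels and the controller condition number are made-up numbers; the point is that with a well-conditioned (e.g. decentralized) controller, \\(k\\) stays small and the RP bound remains modest even though \\(\gamma(G)\\) is very large.

```python
import numpy as np

# Steady-state gain matrix commonly quoted for the distillation example
G0 = np.array([[ 87.8,  -86.4],
               [108.2, -109.6]])

sv = np.linalg.svd(G0, compute_uv=False)
gamma_G = sv[0] / sv[-1]
print("gamma(G) ≈", round(float(gamma_G), 1))   # ill-conditioned plant (~140)

# Illustrative RS and NP levels and a well-conditioned (diagonal) controller
RS, NP, gamma_K = 0.3, 0.5, 1.0
k = min(gamma_G, gamma_K)
print("RP bound ≈", round((RS + NP) * (1 + np.sqrt(k)), 2))
```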
|
|
|
|
@@ -3877,7 +3921,7 @@ The latter is an attempt to "flatten out" \\(\mu\\).
|
|
|
|
|
For simplicity, we will consider again the case of multiplicative uncertainty and performance defined in terms of weighted sensitivity.
|
|
|
|
|
The uncertainty weight \\(w\_I I\\) and performance weight \\(w\_P I\\) are shown graphically in Fig. [fig:weights_distillation](#fig:weights_distillation).
|
|
|
|
|
|
|
|
|
|
<a id="orgfb7536a"></a>
|
|
|
|
|
<a id="orgd273607"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_weights_distillation.png" caption="Figure 33: Uncertainty and performance weights" >}}
|
|
|
|
|
|
|
|
|
@@ -3900,11 +3944,11 @@ The scaling matrix \\(D\\) for \\(DND^{-1}\\) then has the structure \\(D = \tex
|
|
|
|
|
- Iteration No. 3.
|
|
|
|
|
Step 1: The \\(\mathcal{H}\_\infty\\) norm is only slightly reduced. We thus decide to stop the iterations.
|
|
|
|
|
|
|
|
|
|
<a id="org9fe8930"></a>
|
|
|
|
|
<a id="org10a3970"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_dk_iter_mu.png" caption="Figure 34: Change in \\(\mu\\) during DK-iteration" >}}
|
|
|
|
|
|
|
|
|
|
<a id="org7d6de99"></a>
|
|
|
|
|
<a id="org400285f"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_dk_iter_d_scale.png" caption="Figure 35: Change in D-scale \\(d\_1\\) during DK-iteration" >}}
|
|
|
|
|
|
|
|
|
@@ -3912,13 +3956,13 @@ The final \\(\mu\text{-curves}\\) for NP, RS and RP with the controller \\(K\_3\
|
|
|
|
|
The objectives of RS and NP are easily satisfied.
|
|
|
|
|
The peak value of \\(\mu\\) is just slightly over 1, so the performance specification \\(\bar{\sigma}(w\_P S\_p) < 1\\) is almost satisfied for all possible plants.
|
|
|
|
|
|
|
|
|
|
<a id="org22cca49"></a>
|
|
|
|
|
<a id="org519a9ca"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_mu_plot_optimal_k3.png" caption="Figure 36: \\(mu\text{-plots}\\) with \\(\mu\\) \"optimal\" controller \\(K\_3\\)" >}}
|
|
|
|
|
|
|
|
|
|
To confirm this, six perturbed plants are used to compute the perturbed sensitivity functions shown in Fig. [fig:perturb_s_k3](#fig:perturb_s_k3).
|
|
|
|
|
|
|
|
|
|
<a id="orgbb4e9b9"></a>
|
|
|
|
|
<a id="orgfcb21f2"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_perturb_s_k3.png" caption="Figure 37: Perturbed sensitivity functions \\(\bar{\sigma}(S^\prime)\\) using \\(\mu\\) \"optimal\" controller \\(K\_3\\). Lower solid line: nominal plant. Upper solid line: worst-case plant" >}}
|
|
|
|
|
|
|
|
|
@@ -3973,7 +4017,7 @@ If resulting control performance is not satisfactory, one may switch to the seco
|
|
|
|
|
|
|
|
|
|
## Controller Design {#controller-design}
|
|
|
|
|
|
|
|
|
|
<a id="org8e7c092"></a>
|
|
|
|
|
<a id="orga616dec"></a>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Trade-offs in MIMO Feedback Design {#trade-offs-in-mimo-feedback-design}
|
|
|
|
@@ -3993,7 +4037,7 @@ We have the following important relationships:
|
|
|
|
|
\end{align}
|
|
|
|
|
\end{subequations}
|
|
|
|
|
|
|
|
|
|
<a id="org988fd36"></a>
|
|
|
|
|
<a id="org8d4f22a"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_classical_feedback_small.png" caption="Figure 38: One degree-of-freedom feedback configuration" >}}
|
|
|
|
|
|
|
|
|
@@ -4035,7 +4079,7 @@ Thus, over specified frequency ranges, it is relatively easy to approximate the
|
|
|
|
|
|
|
|
|
|
Typically, the open-loop requirements 1 and 3 are valid and important at low frequencies \\(0 \le \omega \le \omega\_l \le \omega\_B\\), while conditions 2, 4, 5 and 6 are conditions which are valid and important at high frequencies \\(\omega\_B \le \omega\_h \le \omega \le \infty\\), as illustrated in Fig. [fig:design_trade_off_mimo_gk](#fig:design_trade_off_mimo_gk).
|
|
|
|
|
|
|
|
|
|
<a id="orgd19569b"></a>
|
|
|
|
|
<a id="org6e3f117"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_design_trade_off_mimo_gk.png" caption="Figure 39: Design trade-offs for the multivariable loop transfer function \\(GK\\)" >}}
|
|
|
|
|
|
|
|
|
@@ -4092,7 +4136,7 @@ The solution to the LQG problem is then found by replacing \\(x\\) by \\(\hat{x}
|
|
|
|
|
|
|
|
|
|
We therefore see that the LQG problem and its solution can be separated into two distinct parts as illustrated in Fig. [fig:lqg_separation](#fig:lqg_separation): the optimal state feedback and the optimal state estimator (the Kalman filter).
|
|
|
|
|
|
|
|
|
|
<a id="org9f29f94"></a>
|
|
|
|
|
<a id="org6f521b9"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_lqg_separation.png" caption="Figure 40: The separation theorem" >}}
|
|
|
|
|
|
|
|
|
@@ -4142,7 +4186,7 @@ Where \\(Y\\) is the unique positive-semi definite solution of the algebraic Ric
|
|
|
|
|
|
|
|
|
|
</div>
|
|
|
|
|
|
|
|
|
|
<a id="org714f37d"></a>
|
|
|
|
|
<a id="orgf0f14d9"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_lqg_kalman_filter.png" caption="Figure 41: The LQG controller and noisy plant" >}}
|
|
|
|
|
|
|
|
|
@@ -4163,7 +4207,7 @@ It has the same degree (number of poles) as the plant.<br />
|
|
|
|
|
|
|
|
|
|
For the LQG-controller, as shown in Fig. [fig:lqg_kalman_filter](#fig:lqg_kalman_filter), it is not easy to see where to position the reference input \\(r\\) and how integral action may be included, if desired. Indeed, the standard LQG design procedure does not give a controller with integral action. One strategy is illustrated in Fig. [fig:lqg_integral](#fig:lqg_integral). Here, the control error \\(r-y\\) is integrated and the regulator \\(K\_r\\) is designed for the plant augmented with these integral states.
|
|
|
|
|
|
|
|
|
|
<a id="org7af49f1"></a>
|
|
|
|
|
<a id="orgb7cfb99"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_lqg_integral.png" caption="Figure 42: LQG controller with integral action and reference input" >}}
|
|
|
|
|
|
|
|
|
@@ -4176,18 +4220,18 @@ Their main limitation is that they can only be applied to minimum phase plants.
|
|
|
|
|
|
|
|
|
|
### \\(\htwo\\) and \\(\hinf\\) Control {#htwo--and--hinf--control}
|
|
|
|
|
|
|
|
|
|
<a id="org64e6be0"></a>
|
|
|
|
|
<a id="org6da7635"></a>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#### General Control Problem Formulation {#general-control-problem-formulation}
|
|
|
|
|
|
|
|
|
|
<a id="org4fb3ed6"></a>
|
|
|
|
|
<a id="org1448cec"></a>
|
|
|
|
|
There are many ways in which feedback design problems can be cast as \\(\htwo\\) and \\(\hinf\\) optimization problems.
|
|
|
|
|
It is very useful therefore to have a **standard problem formulation** into which any particular problem may be manipulated.
|
|
|
|
|
|
|
|
|
|
Such a general formulation is afforded by the general configuration shown in Fig. [fig:general_control](#fig:general_control).
|
|
|
|
|
|
|
|
|
|
<a id="org6b116f5"></a>
|
|
|
|
|
<a id="org3b91a51"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_general_control.png" caption="Figure 43: General control configuration" >}}
|
|
|
|
|
|
|
|
|
@@ -4438,7 +4482,7 @@ The elements of the generalized plant are
|
|
|
|
|
\end{array}
|
|
|
|
|
\end{equation\*}
|
|
|
|
|
|
|
|
|
|
<a id="org9710d93"></a>
|
|
|
|
|
<a id="org35551f8"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_mixed_sensitivity_dist_rejection.png" caption="Figure 44: \\(S/KS\\) mixed-sensitivity optimization in standard form (regulation)" >}}
|
|
|
|
|
|
|
|
|
@@ -4447,7 +4491,7 @@ Here we consider a tracking problem.
|
|
|
|
|
The exogenous input is a reference command \\(r\\), and the error signals are \\(z\_1 = -W\_1 e = W\_1 (r-y)\\) and \\(z\_2 = W\_2 u\\).
|
|
|
|
|
As in the regulation problem of Fig. [fig:mixed_sensitivity_dist_rejection](#fig:mixed_sensitivity_dist_rejection), we have that \\(z\_1 = W\_1 S w\\) and \\(z\_2 = W\_2 KS w\\).
|
|
|
|
|
|
|
|
|
|
<a id="org57086a7"></a>
|
|
|
|
|
<a id="org55460a0"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_mixed_sensitivity_ref_tracking.png" caption="Figure 45: \\(S/KS\\) mixed-sensitivity optimization in standard form (tracking)" >}}
|
|
|
|
|
|
|
|
|
@@ -4471,7 +4515,7 @@ The elements of the generalized plant are
|
|
|
|
|
\end{array}
|
|
|
|
|
\end{equation\*}
|
|
|
|
|
|
|
|
|
|
<a id="org41ecea3"></a>
|
|
|
|
|
<a id="org007c976"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_mixed_sensitivity_s_t.png" caption="Figure 46: \\(S/T\\) mixed-sensitivity optimization in standard form" >}}
|
|
|
|
|
|
|
|
|
@@ -4499,7 +4543,7 @@ The focus of attention has moved to the size of signals and away from the size a
|
|
|
|
|
Weights are used to describe the expected or known frequency content of exogenous signals and the desired frequency content of error signals.
|
|
|
|
|
Weights are also used if a perturbation is used to model uncertainty, as in Fig. [fig:input_uncertainty_hinf](#fig:input_uncertainty_hinf), where \\(G\\) represents the nominal model, \\(W\\) is a weighting function that captures the relative model fidelity over frequency, and \\(\Delta\\) represents unmodelled dynamics usually normalized such that \\(\hnorm{\Delta} < 1\\).
|
|
|
|
|
|
|
|
|
|
<a id="org093006a"></a>
|
|
|
|
|
<a id="orgabff04a"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_hinf.png" caption="Figure 47: Multiplicative dynamic uncertainty model" >}}
|
|
|
|
|
|
|
|
|
@@ -4521,7 +4565,7 @@ The problem can be cast as a standard \\(\hinf\\) optimization in the general co
|
|
|
|
|
w = \begin{bmatrix}d\\r\\n\end{bmatrix},\ z = \begin{bmatrix}z\_1\\z\_2\end{bmatrix}, \ v = \begin{bmatrix}r\_s\\y\_m\end{bmatrix},\ u = u
|
|
|
|
|
\end{equation\*}
|
|
|
|
|
|
|
|
|
|
<a id="org6632f75"></a>
|
|
|
|
|
<a id="org5056f35"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_hinf_signal_based.png" caption="Figure 48: A signal-based \\(\hinf\\) control problem" >}}
|
|
|
|
|
|
|
|
|
@@ -4532,7 +4576,7 @@ This problem is a non-standard \\(\hinf\\) optimization.
|
|
|
|
|
It is a robust performance problem for which the \\(\mu\text{-synthesis}\\) procedure can be applied where we require the structured singular value:
|
|
|
|
|
\\[ \mu(M(j\omega)) < 1, \quad \forall\omega \\]
|
|
|
|
|
|
|
|
|
|
<a id="orgfa2cdae"></a>
|
|
|
|
|
<a id="org7befd92"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_hinf_signal_based_uncertainty.png" caption="Figure 49: A signal-based \\(\hinf\\) control problem with input multiplicative uncertainty" >}}
|
|
|
|
|
|
|
|
|
@@ -4590,7 +4634,7 @@ For the perturbed feedback system of Fig. [fig:coprime_uncertainty_bis](#fi
|
|
|
|
|
|
|
|
|
|
Notice that \\(\gamma\\) is the \\(\hinf\\) norm from \\(\phi\\) to \\(\begin{bmatrix}u\\y\end{bmatrix}\\) and \\((I-GK)^{-1}\\) is the sensitivity function for this positive feedback arrangement.
|
|
|
|
|
|
|
|
|
|
<a id="org79cab4a"></a>
|
|
|
|
|
<a id="org4f3b2f4"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_coprime_uncertainty_bis.png" caption="Figure 50: \\(\hinf\\) robust stabilization problem" >}}
|
|
|
|
|
|
|
|
|
@@ -4631,13 +4675,13 @@ for a specified \\(\gamma > \gamma\_\text{min}\\), is given by
|
|
|
|
|
\end{align}
|
|
|
|
|
\end{subequations}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The Matlab function `coprimeunc` can be used to generate the controller in \eqref{eq:control_coprime_factor}.
|
|
|
|
|
It is important to emphasize that since we can compute \\(\gamma\_\text{min}\\) from \eqref{eq:gamma_min_coprime} we get an explicit solution by solving just two Riccati equations and avoid the \\(\gamma\text{-iteration}\\) needed to solve the general \\(\mathcal{H}\_\infty\\) problem.
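
A minimal sketch of that explicit computation for a made-up, strictly proper shaped plant (\\(D = 0\\)): the two Riccati solutions give \\(\gamma\_\text{min} = \sqrt{1 + \rho(XZ)}\\) directly, with no \\(\gamma\text{-iteration}\\). The state-space data below are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Made-up strictly proper shaped plant Gs = C (sI - A)^{-1} B, D = 0
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Control and filter Riccati equations for the normalized coprime factorization
X = solve_continuous_are(A, B, C.T @ C, np.eye(1))      # A'X + XA - XBB'X + C'C = 0
Z = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(1))  # AZ + ZA' - ZC'CZ + BB' = 0

gamma_min = np.sqrt(1 + np.max(np.linalg.eigvals(X @ Z).real))
print("gamma_min =", round(float(gamma_min), 3))
```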
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#### A Systematic \\(\hinf\\) Loop-Shaping Design Procedure {#a-systematic--hinf--loop-shaping-design-procedure}
|
|
|
|
|
|
|
|
|
|
<a id="org556e0fc"></a>
|
|
|
|
|
<a id="org929fa3b"></a>
|
|
|
|
|
Robust stabilization alone is not much used in practice because the designer is not able to specify any performance requirements.
|
|
|
|
|
|
|
|
|
|
To do so, **pre and post compensators** are used to **shape the open-loop singular values** prior to robust stabilization of the "shaped" plant.
|
|
|
|
@@ -4650,7 +4694,7 @@ If \\(W\_1\\) and \\(W\_2\\) are the pre and post compensators respectively, the
|
|
|
|
|
|
|
|
|
|
as shown in Fig. [fig:shaped_plant](#fig:shaped_plant).
|
|
|
|
|
|
|
|
|
|
<a id="orga1726e5"></a>
|
|
|
|
|
<a id="orgbc1e59e"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_shaped_plant.png" caption="Figure 51: The shaped plant and controller" >}}
|
|
|
|
|
|
|
|
|
@@ -4687,7 +4731,7 @@ Systematic procedure for \\(\hinf\\) loop-shaping design:
|
|
|
|
|
This is because the references do not directly excite the dynamics of \\(K\_s\\), which can result in large amounts of overshoot.
|
|
|
|
|
The constant prefilter ensures a steady-state gain of \\(1\\) between \\(r\\) and \\(y\\), assuming integral action in \\(W\_1\\) or \\(G\\).
|
|
|
|
|
|
|
|
|
|
<a id="org8641bb9"></a>
|
|
|
|
|
<a id="org83cf8d8"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_shapping_practical_implementation.png" caption="Figure 52: A practical implementation of the loop-shaping controller" >}}
|
|
|
|
|
|
|
|
|
@@ -4713,7 +4757,7 @@ But in cases where stringent time-domain specifications are set on the output re
|
|
|
|
|
A general two degrees-of-freedom feedback control scheme is depicted in Fig. [fig:classical_feedback_2dof_simple](#fig:classical_feedback_2dof_simple).
|
|
|
|
|
The commands and feedbacks enter the controller separately and are independently processed.
|
|
|
|
|
|
|
|
|
|
<a id="org58fc63d"></a>
|
|
|
|
|
<a id="org8f1d974"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_classical_feedback_2dof_simple.png" caption="Figure 53: General two degrees-of-freedom feedback control scheme" >}}
|
|
|
|
|
|
|
|
|
@@ -4724,7 +4768,7 @@ The design problem is illustrated in Fig. [fig:coprime_uncertainty_hinf](#f
|
|
|
|
|
The feedback part of the controller \\(K\_2\\) is designed to meet robust stability and disturbance rejection requirements.
|
|
|
|
|
A prefilter is introduced to force the response of the closed-loop system to follow that of a specified model \\(T\_{\text{ref}}\\), often called the **reference model**.
|
|
|
|
|
|
|
|
|
|
<a id="org6a86c67"></a>
|
|
|
|
|
<a id="orgd00d786"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_coprime_uncertainty_hinf.png" caption="Figure 54: Two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping design problem" >}}
|
|
|
|
|
|
|
|
|
@@ -4749,7 +4793,7 @@ The main steps required to synthesize a two degrees-of-freedom \\(\mathcal{H}\_\
|
|
|
|
|
|
|
|
|
|
The final two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping controller is illustrated in Fig. [fig:hinf_synthesis_2dof](#fig:hinf_synthesis_2dof).
|
|
|
|
|
|
|
|
|
|
<a id="org18530c5"></a>
|
|
|
|
|
<a id="org3d681ec"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_hinf_synthesis_2dof.png" caption="Figure 55: Two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping controller" >}}
|
|
|
|
|
|
|
|
|
@@ -4821,7 +4865,7 @@ where \\(u\_a\\) is the **actual plant input**, that is the measurement at the *
|
|
|
|
|
|
|
|
|
|
The situation is illustrated in Fig. [fig:weight_anti_windup](#fig:weight_anti_windup), where the actuators are each modeled by a unit gain and a saturation.
|
|
|
|
|
|
|
|
|
|
<a id="org0e606b3"></a>
|
|
|
|
|
<a id="org3867b27"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_weight_anti_windup.png" caption="Figure 56: Self-conditioned weight \\(W\_1\\)" >}}
|
|
|
|
|
|
|
|
|
@@ -4869,14 +4913,14 @@ Moreover, one should be careful about combining controller synthesis and analysi
|
|
|
|
|
|
|
|
|
|
## Controller Structure Design {#controller-structure-design}
|
|
|
|
|
|
|
|
|
|
<a id="org3d2d0b9"></a>
|
|
|
|
|
<a id="org6fc0469"></a>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Introduction {#introduction}
|
|
|
|
|
|
|
|
|
|
In previous sections, we considered the general problem formulation in Fig. [fig:general_control_names_bis](#fig:general_control_names_bis) and stated that the controller design problem is to find a controller \\(K\\) which, based on the information in \\(v\\), generates a control signal \\(u\\) that counteracts the influence of \\(w\\) on \\(z\\), thereby minimizing the closed-loop norm from \\(w\\) to \\(z\\).
|
|
|
|
|
|
|
|
|
|
<a id="org3079cf1"></a>
|
|
|
|
|
<a id="org366605e"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_general_control_names_bis.png" caption="Figure 57: General Control Configuration" >}}
|
|
|
|
|
|
|
|
|
@@ -4911,7 +4955,7 @@ The reference value \\(r\\) is usually set at some higher layer in the control h
|
|
|
|
|
|
|
|
|
|
Additional layers are possible, as is illustrated in Fig. [fig:control_system_hierarchy](#fig:control_system_hierarchy) which shows a typical control hierarchy for a chemical plant.
|
|
|
|
|
|
|
|
|
|
<a id="org82916a6"></a>
|
|
|
|
|
<a id="org42e952b"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_system_hierarchy.png" caption="Figure 58: Typical control system hierarchy in a chemical plant" >}}
|
|
|
|
|
|
|
|
|
@@ -4933,7 +4977,7 @@ However, this solution is normally not used for a number a reasons, included the
|
|
|
|
|
|
|
|
|
|
|  |  |  |
|
|
|
|
|
|--------------------------------------------------|--------------------------------------------------------------------------------|-------------------------------------------------------------|
|
|
|
|
|
| <a id="org94698d3"></a> Open loop optimization | <a id="org0b81e20"></a> Closed-loop implementation with separate control layer | <a id="orgd6b172c"></a> Integrated optimization and control |
|
|
|
|
|
| <a id="org6986695"></a> Open loop optimization | <a id="orgaae7402"></a> Closed-loop implementation with separate control layer | <a id="orge8ee4d7"></a> Integrated optimization and control |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Selection of Controlled Outputs {#selection-of-controlled-outputs}
|
|
|
|
@@ -5140,7 +5184,7 @@ A cascade control structure results when either of the following two situations
|
|
|
|
|
|
|
|
|
|
|  |  |
|
|
|
|
|
|-------------------------------------------------------|---------------------------------------------------|
|
|
|
|
|
| <a id="org9cde265"></a> Extra measurements \\(y\_2\\) | <a id="orgd964ccc"></a> Extra inputs \\(u\_2\\) |
|
|
|
|
|
| <a id="org4e7be08"></a> Extra measurements \\(y\_2\\) | <a id="org1a947e7"></a> Extra inputs \\(u\_2\\) |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#### Cascade Control: Extra Measurements {#cascade-control-extra-measurements}
|
|
|
|
@@ -5189,7 +5233,7 @@ With reference to the special (but common) case of cascade control shown in Fig.
|
|
|
|
|
|
|
|
|
|
</div>
|
|
|
|
|
|
|
|
|
|
<a id="org754439a"></a>
|
|
|
|
|
<a id="org664489f"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_cascade_control.png" caption="Figure 59: Common case of cascade control where the primary output \\(y\_1\\) depends directly on the extra measurement \\(y\_2\\)" >}}
|
|
|
|
|
|
|
|
|
@@ -5239,7 +5283,7 @@ We would probably tune the three controllers in the order \\(K\_2\\), \\(K\_3\\)
|
|
|
|
|
|
|
|
|
|
</div>
|
|
|
|
|
|
|
|
|
|
<a id="orga098898"></a>
|
|
|
|
|
<a id="org12e1e27"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_cascade_control_two_layers.png" caption="Figure 60: Control configuration with two layers of cascade control" >}}
|
|
|
|
|
|
|
|
|
@@ -5354,7 +5398,7 @@ We get:
|
|
|
|
|
\end{aligned}
|
|
|
|
|
\end{equation}
|
|
|
|
|
|
|
|
|
|
<a id="org043573a"></a>
|
|
|
|
|
<a id="orgffa343f"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_partial_control.png" caption="Figure 61: Partial Control" >}}
|
|
|
|
|
|
|
|
|
@@ -5413,7 +5457,7 @@ The selection of \\(u\_2\\) and \\(y\_2\\) for use in the lower-layer control sy
|
|
|
|
|
Consider the conventional cascade control system in Fig. [fig:cascade_extra_meas](#fig:cascade_extra_meas) where we have additional "secondary" measurements \\(y\_2\\) with no associated control objective, and the objective is to improve the control of \\(y\_1\\) by locally controlling \\(y\_2\\).
|
|
|
|
|
The idea is that this should reduce the effect of disturbances and uncertainty on \\(y\_1\\).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
From \eqref{eq:partial_control}, it follows that we should select \\(y\_2\\) and \\(u\_2\\) such that \\(\\|P\_d\\|\\) is small and at least smaller than \\(\\|G\_{d1}\\|\\).
|
|
|
|
|
These arguments particularly apply at high frequencies.
|
|
|
|
|
More precisely, we want the input-output controllability of \\([P\_u\ P\_r]\\) with disturbance model \\(P\_d\\) to be better than that of the plant \\([G\_{11}\ G\_{12}]\\) with disturbance model \\(G\_{d1}\\).
|
|
|
|
|
|
|
|
|
@@ -5430,7 +5474,7 @@ A set of outputs \\(y\_1\\) may be left uncontrolled only if the effects of all
|
|
|
|
|
|
|
|
|
|
</div>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
To evaluate the feasibility of partial control, one must, for each choice of \\(y\_2\\) and \\(u\_2\\), rearrange the system as in \eqref{eq:partial_control_partitioning} and \eqref{eq:partial_control}, and compute \\(P\_d\\) using \eqref{eq:tight_control_y2}.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
#### Measurement Selection for Indirect Control {#measurement-selection-for-indirect-control}
|
|
|
|
@@ -5474,7 +5518,7 @@ Then to minimize the control error for the primary output, \\(J = \\|y\_1 - r\_1
|
|
|
|
|
|
|
|
|
|
In this section, \\(G(s)\\) is a square plant which is to be controlled using a diagonal controller (Fig. [fig:decentralized_diagonal_control](#fig:decentralized_diagonal_control)).
|
|
|
|
|
|
|
|
|
|
<a id="orge8c0a58"></a>
|
|
|
|
|
<a id="org301990a"></a>
|
|
|
|
|
|
|
|
|
|
{{< figure src="/ox-hugo/skogestad07_decentralized_diagonal_control.png" caption="Figure 62: Decentralized diagonal control of a \\(2 \times 2\\) plant" >}}
|
|
|
|
|
|
|
|
|
@@ -5590,7 +5634,7 @@ We then derive **necessary conditions for stability** which may be used to elimi
|
|
|
|
|
|
|
|
|
|
For decentralized diagonal control, it is desirable that the system can be tuned and operated one loop at a time.
|
|
|
|
|
Assume therefore that \\(G\\) is stable and each individual loop is stable by itself (\\(\tilde{S}\\) and \\(\tilde{T}\\) are stable).
|
|
|
|
|
|
|
|
|
|
Using the **spectral radius condition** on the factorized \\(S\\) in \eqref{eq:S_factorization}, we have that the overall system is stable (\\(S\\) is stable) if
|
|
|
|
|
|
|
|
|
|
\begin{equation}
|
|
|
|
|
\rho(E\tilde{T}(j\omega)) < 1, \forall\omega
|
|
|
|
@@ -5813,7 +5857,7 @@ For performance, we need \\(|1 + L\_i|\\) to be larger than each of these:
|
|
|
|
|
|1 + L\_i| > \max\_{k,j}\\{|\tilde{g}\_{dik}|, |\gamma\_{ij}|\\}
|
|
|
|
|
\end{equation}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
To achieve stability of the individual loops, one must analyze \\(g\_{ii}(s)\\) to ensure that the bandwidth required by \eqref{eq:decent_contr_one_loop} is achievable.
|
|
|
|
|
Note that RHP-zeros in the diagonal elements may limit achievable decentralized control, whereas they may not pose any problems for a multivariable controller.
|
|
|
|
|
Since with decentralized control, we usually want to use simple controllers, the achievable bandwidth in each loop will be limited by the frequency where \\(\angle g\_{ii}\\) is \\(\SI{-180}{\degree}\\)</li>
|
|
|
|
|
<li>Check for constraints by considering the elements of \\(G^{-1} G\_d\\) and make sure that they do not exceed one in magnitude within the frequency range where control is needed.
|
|
|
|
@@ -5831,7 +5875,7 @@ If the plant is not controllable, then one may consider another choice of pairin
|
|
|
|
|
If one still cannot find any pairing which are controllable, then one should consider multivariable control.
|
|
|
|
|
|
|
|
|
|
<ol class="org-ol">
|
|
|
|
|
<li value="7">If the chosen pairing is controllable, then [eq:decent_contr_one_loop](#eq:decent_contr_one_loop) tells us how large \\(|L\_i| = |g\_{ii} k\_i|\\) must be.
|
|
|
|
|
<li value="7">If the chosen pairing is controllable, then \eqref{eq:decent_contr_one_loop} tells us how large \\(|L\_i| = |g\_{ii} k\_i|\\) must be.
|
|
|
|
|
This can be used as a basis for designing the controller \\(k\_i(s)\\) for loop \\(i\\)</li>
|
|
|
|
|
</ol>
|
|
|
|
|
|
|
|
|
@@ -5852,7 +5896,7 @@ Thus sequential design may involve many iterations.
|
|
|
|
|
|
|
|
|
|
#### Conclusion on Decentralized Control {#conclusion-on-decentralized-control}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
A number of **conditions for the stability**, e.g. \eqref{eq:decent_contr_cond_stability} and \eqref{eq:decent_contr_necessary_cond_stability}, and **performance**, e.g. \eqref{eq:decent_contr_cond_perf_dist} and \eqref{eq:decent_contr_cond_perf_ref}, of decentralized control systems have been derived.
|
|
|
|
|
|
|
|
|
|
The conditions may be useful in **determining appropriate pairings of inputs and outputs** and the **sequence in which the decentralized controllers should be designed**.
|
|
|
|
|
|
|
|
|
@@ -5861,7 +5905,7 @@ The conditions are also useful in an **input-output controllability analysis** f
|
|
|
|
|
|
|
|
|
|
## Model Reduction {#model-reduction}
|
|
|
|
|
|
|
|
|
|
<a id="orga673906"></a>
|
|
|
|
|
<a id="org7648c32"></a>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Introduction {#introduction}
|
|
|
|
|