@@ -127,22 +127,22 @@ Tables [3](#orgb6964ec), [2](#table--tab:notations-eigen-vectors-values) and [3]
## Zeros in SISO Mechanical Systems {#zeros-in-siso-mechanical-systems}
-
+
The origin and influence of poles are clear: they represent the resonant frequencies of the system, and for each resonance frequency, a mode shape can be defined to describe the motion at that frequency.
We here wish to give an intuitive understanding of **when to expect zeros in SISO mechanical systems** and **how to predict the frequencies at which they will occur**.
-Figure [3](#orgb6964ec) shows a series arrangement of masses and springs, with a total of \\(n\\) masses and \\(n+1\\) springs.
+Figure [3](#org02d84e8) shows a series arrangement of masses and springs, with a total of \\(n\\) masses and \\(n+1\\) springs.
The degrees of freedom are numbered from left to right, \\(z\_1\\) through \\(z\_n\\).
-
+
{{< figure src="/ox-hugo/hatch00_n_dof_zeros.png" caption="Figure 3: n dof system showing various SISO input/output configurations" >}}
-([Miu 1993](#org03acd9e)) shows that the zeros of any particular transfer function are the poles of the constrained system to the left and/or right of the system defined by constraining the one or two dof's defining the transfer function.
+([Miu 1993](#org39eead7)) shows that the zeros of any particular transfer function are the poles of the constrained system to the left and/or right of the system defined by constraining the one or two dof's defining the transfer function.
The resonances of the "overhanging appendages" of the constrained system create the zeros.
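As a quick numerical check of this result, consider a two-mass, three-spring chain (a minimal sketch; the unit parameter values and the use of `ss`/`tzero` from the Control System Toolbox are assumptions, not taken from the book):

```matlab
% Two masses in series: springs k1 (wall-m1), k2 (m1-m2), k3 (m2-wall)
m1 = 1; m2 = 1; k1 = 1; k2 = 1; k3 = 1;

M = diag([m1, m2]);
K = [k1+k2, -k2;
     -k2,   k2+k3];

% State space model of z1/F1 (force on mass 1, displacement of mass 1)
A = [zeros(2), eye(2); -M\K, zeros(2)];
B = [0; 0; 1/m1; 0];
C = [1, 0, 0, 0];
G = ss(A, B, C, 0);

% The zeros of z1/F1 are the resonance of the constrained system:
% mass 2 vibrating between k2 and k3 with dof 1 grounded
tzero(G)             % -> +/- 1j*sqrt((k2+k3)/m2)
sqrt((k2+k3)/m2)
```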
@@ -151,12 +151,12 @@ The resonances of the "overhanging appendages" of the constrained system create
## State Space Analysis {#state-space-analysis}
-
+
## Modal Analysis {#modal-analysis}
-
+
Lightly damped structures are typically analyzed with the "normal mode" method described in this section.
@@ -196,9 +196,9 @@ Summarizing the modal analysis method of analyzing linear mechanical systems and
#### Equation of Motion {#equation-of-motion}
-Let's consider the model shown in Figure [4](#org627cff8) with \\(k\_1 = k\_2 = k\\), \\(m\_1 = m\_2 = m\_3 = m\\) and \\(c\_1 = c\_2 = 0\\).
+Let's consider the model shown in Figure [4](#org0c2921d) with \\(k\_1 = k\_2 = k\\), \\(m\_1 = m\_2 = m\_3 = m\\) and \\(c\_1 = c\_2 = 0\\).
-
+
{{< figure src="/ox-hugo/hatch00_undamped_tdof_model.png" caption="Figure 4: Undamped tdof model" >}}
@@ -297,17 +297,17 @@ One then find:
\end{bmatrix}
\end{equation}
-Virtual interpretation of the eigenvectors are shown in Figures [5](#org0396b30), [6](#orgd3bc915) and [7](#orgc82dccd).
+A visual interpretation of the eigenvectors is shown in Figures [5](#orgc90fe3a), [6](#orgfd8222c) and [7](#orgaf9cc36).
-
+
{{< figure src="/ox-hugo/hatch00_tdof_mode_1.png" caption="Figure 5: Rigid-Body Mode, 0rad/s" >}}
-
+
{{< figure src="/ox-hugo/hatch00_tdof_mode_2.png" caption="Figure 6: Second Model, Middle Mass Stationary, 1rad/s" >}}
-
+
{{< figure src="/ox-hugo/hatch00_tdof_mode_3.png" caption="Figure 7: Third Mode, 1.7rad/s" >}}
@@ -346,9 +346,9 @@ There are many options for change of basis, but we will show that **when eigenve
The \\(n\\) uncoupled equations in the principal coordinate system can then be solved for the responses in the principal coordinate system using the well-known solutions for single dof systems.
The \\(n\\) responses in the principal coordinate system can then be **transformed back** to the physical coordinate system to provide the actual response in physical coordinates.
-This procedure is schematically shown in Figure [8](#org2a145bc).
+This procedure is schematically shown in Figure [8](#orgf9a2963).
-
+
{{< figure src="/ox-hugo/hatch00_schematic_modal_solution.png" caption="Figure 8: Roadmap for Modal Solution" >}}
@@ -696,7 +696,7 @@ Absolute damping is based on making \\(b = 0\\), in which case the percentage of
## Frequency Response: Modal Form {#frequency-response-modal-form}
-
+
The procedure to obtain the frequency response from a modal form is as follows:
@@ -704,9 +704,9 @@ The procedure to obtain the frequency response from a modal form is as follow:
- use Laplace transform to obtain the transfer functions in principal coordinates
- back-transform the transfer functions to physical coordinates where the individual mode contributions will be evident
-This will be applied to the model shown in Figure [9](#org5228de8).
+This will be applied to the model shown in Figure [9](#org48b68a4).
-
+
{{< figure src="/ox-hugo/hatch00_tdof_model.png" caption="Figure 9: tdof undamped model for modal analysis" >}}
@@ -888,9 +888,9 @@ Equations \eqref{eq:general_add_tf} and \eqref{eq:general_add_tf_damp} shows tha
-Figure [10](#org36b2696) shows the separate contributions of each mode to the total response \\(z\_1/F\_1\\).
+Figure [10](#org87763b9) shows the separate contributions of each mode to the total response \\(z\_1/F\_1\\).
-
+
{{< figure src="/ox-hugo/hatch00_z11_tf.png" caption="Figure 10: Mode contributions to the transfer function from \\(F\_1\\) to \\(z\_1\\)" >}}
@@ -899,16 +899,16 @@ The zeros for SISO transfer functions are the roots of the numerator, however, f
## SISO State Space Matlab Model from ANSYS Model {#siso-state-space-matlab-model-from-ansys-model}
-
+
### Introduction {#introduction}
-In this section is developed a SISO state space Matlab model from an ANSYS cantilever beam model as shown in Figure [11](#org332d1e7).
+In this section, a SISO state space Matlab model is developed from an ANSYS cantilever beam model, as shown in Figure [11](#orga66d597).
A z direction force is applied at the midpoint of the beam and the z displacement at the tip is the output.
The objective is to provide the smallest Matlab state space model that accurately represents the pertinent dynamics.
-
+
{{< figure src="/ox-hugo/hatch00_cantilever_beam.png" caption="Figure 11: Cantilever beam with forcing function at midpoint" >}}
@@ -987,7 +987,7 @@ If sorting of DC gain values is performed prior to the `truncate` operation, the
## Ground Acceleration Matlab Model From ANSYS Model {#ground-acceleration-matlab-model-from-ansys-model}
-
+
### Model Description {#model-description}
@@ -1001,25 +1001,25 @@ If sorting of DC gain values is performed prior to the `truncate` operation, the
## SISO Disk Drive Actuator Model {#siso-disk-drive-actuator-model}
-
+
-In this section we wish to extract a SISO state space model from a Finite Element model representing a Disk Drive Actuator (Figure [12](#org6d55a33)).
+In this section we wish to extract a SISO state space model from a Finite Element model representing a Disk Drive Actuator (Figure [12](#org94e126d)).
### Actuator Description {#actuator-description}
-
+
{{< figure src="/ox-hugo/hatch00_disk_drive_siso_model.png" caption="Figure 12: Drawing of Actuator/Suspension system" >}}
-The primary motion of the actuator is rotation about the pivot bearing, therefore the final model has the coordinate system transformed from a Cartesian x,y,z coordinate system to a Cylindrical \\(r\\), \\(\theta\\) and \\(z\\) system, with the two origins coincident (Figure [13](#org482c35b)).
+The primary motion of the actuator is rotation about the pivot bearing; the final model therefore has its coordinate system transformed from a Cartesian \\(x, y, z\\) system to a cylindrical \\(r\\), \\(\theta\\) and \\(z\\) system, with the two origins coincident (Figure [13](#org4a20950)).
-
+
{{< figure src="/ox-hugo/hatch00_disk_drive_nodes_reduced_model.png" caption="Figure 13: Nodes used for reduced Matlab model. Shown with partial finite element mesh at coil" >}}
For reduced models, we only require eigenvector information for dof where forces are applied and where displacements are required.
-Figure [13](#org482c35b) shows the nodes used for the reduced Matlab model.
+Figure [13](#org4a20950) shows the nodes used for the reduced Matlab model.
The four nodes 24061, 24066, 24082 and 24087 are located in the center of the coil in the z direction and are used for simulating the VCM force.
The arrows at the nodes indicate the direction of forces.
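A sketch of how such a VCM force input could be assembled (the `node_i` node-number lookup, the `n_nodes` count and the equal split of the force over the four nodes are assumptions, not taken from the book):

```matlab
% Unit VCM force split equally over the four coil nodes
vcm_nodes = [24061, 24066, 24082, 24087];
i_coil = arrayfun(@(n) find(node_i == n), vcm_nodes);

Fp = zeros(n_nodes, 1);   % force vector in physical coordinates
Fp(i_coil) = 1/4;
Fmodal = xn' * Fp;        % modal force, one entry per kept mode
```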
@@ -1045,6 +1045,9 @@ A small section of the exported `.eig` file from ANSYS is shown bellow..
+
+
+
LOAD STEP= 1 SUBSTEP= 1
FREQ= 8.1532 LOAD CASE= 0
@@ -1059,6 +1062,8 @@ NODE UX UY UZ ROTX ROTY ROTZ
+
+
The important fields are:
- `SUBSTEP`: mode number
@@ -1082,7 +1087,7 @@ From Ansys, we have the eigenvalues \\(\omega\_i\\) and eigenvectors \\(\bm{z}\\
## Balanced Reduction {#balanced-reduction}
-
+
In this chapter another method of reducing models, “balanced reduction”, will be introduced and compared with the DC and peak gain ranking methods.
@@ -1197,14 +1202,14 @@ The **states to be kept are the states with the largest diagonal terms**.
## MIMO Two Stage Actuator Model {#mimo-two-stage-actuator-model}
-
+
-In this section, a MIMO two-stage actuator model is derived from a finite element model (Figure [14](#orgdc24ed7)).
+In this section, a MIMO two-stage actuator model is derived from a finite element model (Figure [14](#org1453e17)).
### Actuator Description {#actuator-description}
-
+
{{< figure src="/ox-hugo/hatch00_disk_drive_mimo_schematic.png" caption="Figure 14: Drawing of actuator/suspension system" >}}
@@ -1226,9 +1231,9 @@ Since the same forces are being applied to both piezo elements, they represent t
### Ansys Model Description {#ansys-model-description}
-In Figure [15](#org40d5587) are shown the principal nodes used for the model.
+Figure [15](#orge94bde1) shows the principal nodes used for the model.
-
+
{{< figure src="/ox-hugo/hatch00_disk_drive_mimo_ansys.png" caption="Figure 15: Nodes used for reduced Matlab model, shown with partial mesh at coil and piezo element" >}}
@@ -1264,23 +1269,23 @@ The complete system is rebuilt by augmenting the rigid body mode with the reduce
We define the system parameters.
```matlab
-m = 1;
-k = 1;
+ m = 1;
+ k = 1;
```
We write the mass and stiffness matrices:
```matlab
-M = diag([m, m, m]);
-K = [k, -k, 0;
- -k, 2*k, -k;
- 0, -k, k];
+ M = diag([m, m, m]);
+ K = [k, -k, 0;
+ -k, 2*k, -k;
+ 0, -k, k];
```
Compute the eigenvalues and eigenvectors:
```matlab
-[z, w] = eig(M\K);
+ [z, w] = eig(M\K);
```
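Note that `eig(M\K)` returns the squared natural frequencies on the diagonal of `w` (in no guaranteed order), so the values in rad/s listed below can be recovered with:

```matlab
omega = sort(sqrt(diag(w)));   % -> [0; 1; sqrt(3)] rad/s for m = k = 1
```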
| Mode | rad/s |
@@ -1292,7 +1297,7 @@ Compute the eigenvalues and eigenvectors:
Normalization of the eigenvectors:
```matlab
-zn = z./sqrt(diag(z' * M * z));
+ zn = z./sqrt(diag(z' * M * z))'; % transpose so each eigenvector column is mass-normalized
```
| zn1 | zn2 | zn3 |
@@ -1304,22 +1309,22 @@ zn = z./sqrt(diag(z' * M * z));
This step is not strictly necessary, but serves as a check:
```matlab
-Mn = zn' * M * zn;
-Kn = zn' * K * zn;
+ Mn = zn' * M * zn;
+ Kn = zn' * K * zn;
```
By inspection:
```matlab
-Mn = eye(3);
-Kn = w; % Shouldn't this be equal to w.^2 ?
+ Mn = eye(3);
+ Kn = w; % correct as-is: eig(M\K) already returns the squared frequencies w_i^2 in w
```
We add some simple proportional damping:
```matlab
-xi = 0.03;
-Cn = xi/2*sqrt(k*m) * eye(3);
+ xi = 0.03;
+ Cn = xi/2*sqrt(k*m) * eye(3);
```
The equations in the principal coordinates are:
@@ -1329,11 +1334,11 @@ Let's note
\\[ \bm{G}\_p(s) = \frac{\bm{z}\_p}{\bm{F}} \\]
```matlab
-Gp = (tf(Mn)*s^2 + tf(Cn)*s + tf(Kn))\tf(eye(3))*zn';
+ Gp = (tf(Mn)*s^2 + tf(Cn)*s + tf(Kn))\tf(eye(3))*zn';
```
```matlab
-bodeFig({Gp(1,1), Gp(2,2), Gp(3,3)})
+ bodeFig({Gp(1,1), Gp(2,2), Gp(3,3)})
```
And we have the Laplace transform in the principal coordinates.
@@ -1344,14 +1349,14 @@ And we note:
\\[ \bm{G}(s) = \bm{z}\_n \bm{G}\_p(s) = \frac{\bm{z}}{\bm{F}} \\]
```matlab
-G = zn * Gp;
+ G = zn * Gp;
```
-
+
{{< figure src="/ox-hugo/hatch00_z13_tf.png" caption="Figure 16: Mode contributions to the transfer function from \\(F\_1\\) to \\(z\_3\\)" >}}
-
+
{{< figure src="/ox-hugo/hatch00_z11_tf.png" caption="Figure 17: Mode contributions to the transfer function from \\(F\_1\\) to \\(z\_1\\)" >}}
@@ -1362,13 +1367,13 @@ G = zn * Gp;
### Extract values {#extract-values}
```matlab
-filename = 'files/cantbeam30bl.eig';
+ filename = 'files/cantbeam30bl.eig';
-dir = 3; % UY
-[xn, f0] = readEigFile(filename, dir);
+ dir = 3; % UY
+ [xn, f0] = readEigFile(filename, dir);
-n_nodes = size(xn, 1);
-n_modes = size(xn, 2);
+ n_nodes = size(xn, 1);
+ n_modes = size(xn, 2);
```
@@ -1377,8 +1382,8 @@ n_modes = size(xn, 2);
First, define the node numbers corresponding to the inputs and outputs
```matlab
-i_input = 14;
-i_output = 29;
+ i_input = 14;
+ i_output = 29;
```
@@ -1387,7 +1392,7 @@ i_output = 29;
We here use uniform damping.
```matlab
-xi = 0.01;
+ xi = 0.01;
```
@@ -1401,116 +1406,116 @@ I could define 2x2 sub-matrices each corresponding to a particular mode and then
System Matrix - A
```matlab
-Adiag = zeros(2*n_modes,1);
-Adiag(2:2:end) = -2*xi.*(2*pi*f0);
+ Adiag = zeros(2*n_modes,1);
+ Adiag(2:2:end) = -2*xi.*(2*pi*f0);
-Adiagsup = zeros(2*n_modes-1,1);
-Adiagsup(1:2:end) = 1;
+ Adiagsup = zeros(2*n_modes-1,1);
+ Adiagsup(1:2:end) = 1;
-Adiaginf = zeros(2*n_modes-1,1);
-Adiaginf(1:2:end) = -(2*pi*f0).^2;
+ Adiaginf = zeros(2*n_modes-1,1);
+ Adiaginf(1:2:end) = -(2*pi*f0).^2;
-A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
+ A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
```
System Matrix - B
```matlab
-B = zeros(2*n_modes, length(i_input));
+ B = zeros(2*n_modes, length(i_input));
-for i = 1:length(i_input)
- % Physical Coordinates
- Fp = zeros(n_nodes, 1);
- Fp(i_input(i)) = 1;
+ for i = 1:length(i_input)
+ % Physical Coordinates
+ Fp = zeros(n_nodes, 1);
+ Fp(i_input(i)) = 1;
- B(2:2:end, i) = xn'*Fp;
-end
+ B(2:2:end, i) = xn'*Fp;
+ end
```
System Matrix - C
```matlab
-C = zeros(length(i_output), 2*n_modes);
-C(:, 1:2:end) = xn(i_output, :);
+ C = zeros(length(i_output), 2*n_modes);
+ C(:, 1:2:end) = xn(i_output, :);
```
System Matrix - D
```matlab
-D = zeros(length(i_output), length(i_input));
+ D = zeros(length(i_output), length(i_input));
```
State Space Model
```matlab
-G_f = ss(A, B, C, D);
+ G_f = ss(A, B, C, D);
```
### Simple mode truncation {#simple-mode-truncation}
-Let's plot the frequency of the modes (Figure [18](#org152bcb2)).
+Let's plot the frequency of the modes (Figure [18](#orga04e866)).
-
+
{{< figure src="/ox-hugo/hatch00_cant_beam_modes_freq.png" caption="Figure 18: Frequency of the modes" >}}
-
+
{{< figure src="/ox-hugo/hatch00_cant_beam_unsorted_dc_gains.png" caption="Figure 19: Unsorted DC Gains" >}}
Let's keep only the first 10 modes.
```matlab
-m_max = 10;
-xn_t = xn(:, 1:m_max);
-f0_t = f0(1:m_max);
+ m_max = 10;
+ xn_t = xn(:, 1:m_max);
+ f0_t = f0(1:m_max);
```
```matlab
-Adiag = zeros(2*m_max,1);
-Adiag(2:2:end) = -2*xi.*(2*pi*f0_t);
+ Adiag = zeros(2*m_max,1);
+ Adiag(2:2:end) = -2*xi.*(2*pi*f0_t);
-Adiagsup = zeros(2*m_max-1,1);
-Adiagsup(1:2:end) = 1;
+ Adiagsup = zeros(2*m_max-1,1);
+ Adiagsup(1:2:end) = 1;
-Adiaginf = zeros(2*m_max-1,1);
-Adiaginf(1:2:end) = -(2*pi*f0_t).^2;
+ Adiaginf = zeros(2*m_max-1,1);
+ Adiaginf(1:2:end) = -(2*pi*f0_t).^2;
-A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
+ A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
```
System Matrix - B
```matlab
-B = zeros(2*m_max, length(i_input));
+ B = zeros(2*m_max, length(i_input));
-for i = 1:length(i_input)
- % Physical Coordinates
- Fp = zeros(n_nodes, 1);
- Fp(i_input(i)) = 1;
+ for i = 1:length(i_input)
+ % Physical Coordinates
+ Fp = zeros(n_nodes, 1);
+ Fp(i_input(i)) = 1;
- B(2:2:end, i) = xn_t'*Fp;
-end
+ B(2:2:end, i) = xn_t'*Fp;
+ end
```
System Matrix - C
```matlab
-C = zeros(length(i_output), 2*m_max);
-C(:, 1:2:end) = xn_t(i_output, :);
+ C = zeros(length(i_output), 2*m_max);
+ C(:, 1:2:end) = xn_t(i_output, :);
```
System Matrix - D
```matlab
-D = zeros(length(i_output), length(i_input));
+ D = zeros(length(i_output), length(i_input));
```
State Space Model
```matlab
-G_t = ss(A, B, C, D);
+ G_t = ss(A, B, C, D);
```
@@ -1519,114 +1524,114 @@ G_t = ss(A, B, C, D);
Let's sort the modes by their DC gains and plot their sorted DC gains.
```matlab
-dc_gain = abs(xn(i_input, :).*xn(i_output, :))./(2*pi*f0).^2;
+ dc_gain = abs(xn(i_input, :).*xn(i_output, :))./(2*pi*f0).^2;
-[dc_gain_sort, index_sort] = sort(dc_gain, 'descend');
+ [dc_gain_sort, index_sort] = sort(dc_gain, 'descend');
```
-
+
{{< figure src="/ox-hugo/hatch00_cant_beam_sorted_dc_gains.png" caption="Figure 20: Sorted DC Gains" >}}
Let's keep only the first 10 **sorted** modes.
```matlab
-m_max = 10;
+ m_max = 10;
-xn_s = xn(:, index_sort(1:m_max));
-f0_s = f0(index_sort(1:m_max));
+ xn_s = xn(:, index_sort(1:m_max));
+ f0_s = f0(index_sort(1:m_max));
```
```matlab
-Adiag = zeros(2*m_max,1);
-Adiag(2:2:end) = -2*xi.*(2*pi*f0_s);
+ Adiag = zeros(2*m_max,1);
+ Adiag(2:2:end) = -2*xi.*(2*pi*f0_s);
-Adiagsup = zeros(2*m_max-1,1);
-Adiagsup(1:2:end) = 1;
+ Adiagsup = zeros(2*m_max-1,1);
+ Adiagsup(1:2:end) = 1;
-Adiaginf = zeros(2*m_max-1,1);
-Adiaginf(1:2:end) = -(2*pi*f0_s).^2;
+ Adiaginf = zeros(2*m_max-1,1);
+ Adiaginf(1:2:end) = -(2*pi*f0_s).^2;
-A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
+ A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
```
System Matrix - B
```matlab
-B = zeros(2*m_max, length(i_input));
+ B = zeros(2*m_max, length(i_input));
-for i = 1:length(i_input)
- % Physical Coordinates
- Fp = zeros(n_nodes, 1);
- Fp(i_input(i)) = 1;
+ for i = 1:length(i_input)
+ % Physical Coordinates
+ Fp = zeros(n_nodes, 1);
+ Fp(i_input(i)) = 1;
- B(2:2:end, i) = xn_s'*Fp;
-end
+ B(2:2:end, i) = xn_s'*Fp;
+ end
```
System Matrix - C
```matlab
-C = zeros(length(i_output), 2*m_max);
-C(:, 1:2:end) = xn_s(i_output, :);
+ C = zeros(length(i_output), 2*m_max);
+ C(:, 1:2:end) = xn_s(i_output, :);
```
System Matrix - D
```matlab
-D = zeros(length(i_output), length(i_input));
+ D = zeros(length(i_output), length(i_input));
```
State Space Model
```matlab
-G_s = ss(A, B, C, D);
+ G_s = ss(A, B, C, D);
```
### Comparison {#comparison}
```matlab
-freqs = logspace(0, 5, 1000);
+ freqs = logspace(0, 5, 1000);
-figure;
-hold on;
-plot(freqs, abs(squeeze(freqresp(G_f, freqs, 'Hz'))), 'DisplayName', 'Full');
-plot(freqs, abs(squeeze(freqresp(G_t, freqs, 'Hz'))), 'DisplayName', 'Trun');
-plot(freqs, abs(squeeze(freqresp(G_s, freqs, 'Hz'))), 'DisplayName', 'Sort');
-set(gca, 'XScale', 'log'); set(gca, 'YScale', 'log');
-ylabel('Amplitude'); xlabel('Frequency [Hz]');
-legend();
+ figure;
+ hold on;
+ plot(freqs, abs(squeeze(freqresp(G_f, freqs, 'Hz'))), 'DisplayName', 'Full');
+ plot(freqs, abs(squeeze(freqresp(G_t, freqs, 'Hz'))), 'DisplayName', 'Trun');
+ plot(freqs, abs(squeeze(freqresp(G_s, freqs, 'Hz'))), 'DisplayName', 'Sort');
+ set(gca, 'XScale', 'log'); set(gca, 'YScale', 'log');
+ ylabel('Amplitude'); xlabel('Frequency [Hz]');
+ legend();
```
### Effect of the Individual Modes {#effect-of-the-individual-modes}
```matlab
-freqs = logspace(0, 4, 1000);
+ freqs = logspace(0, 4, 1000);
-figure;
-hold on;
-for mode_i = 1:6
- A = zeros(2);
- A(2,2) = -2*xi.*(2*pi*f0(mode_i));
- A(1,2) = 1;
- A(2,1) = -(2*pi*f0(mode_i)).^2;
+ figure;
+ hold on;
+ for mode_i = 1:6
+ A = zeros(2);
+ A(2,2) = -2*xi.*(2*pi*f0(mode_i));
+ A(1,2) = 1;
+ A(2,1) = -(2*pi*f0(mode_i)).^2;
- B = [0; xn(i_input, mode_i)'];
+ B = [0; xn(i_input, mode_i)'];
- C = [xn(i_output, mode_i), 0];
+ C = [xn(i_output, mode_i), 0];
- D = zeros(length(i_output), length(i_input));
+ D = zeros(length(i_output), length(i_input));
- plot(freqs, abs(squeeze(freqresp(ss(A,B,C,D), freqs, 'Hz'))), ...
- 'DisplayName', sprintf('Mode %i', mode_i));
-end
-plot(freqs, abs(squeeze(freqresp(G_f, freqs, 'Hz'))), 'k--', ...
- 'DisplayName', 'Full');
-set(gca, 'XScale', 'log'); set(gca, 'YScale', 'log');
-ylabel('Amplitude'); xlabel('Frequency [Hz]');
-legend();
+ plot(freqs, abs(squeeze(freqresp(ss(A,B,C,D), freqs, 'Hz'))), ...
+ 'DisplayName', sprintf('Mode %i', mode_i));
+ end
+ plot(freqs, abs(squeeze(freqresp(G_f, freqs, 'Hz'))), 'k--', ...
+ 'DisplayName', 'Full');
+ set(gca, 'XScale', 'log'); set(gca, 'YScale', 'log');
+ ylabel('Amplitude'); xlabel('Frequency [Hz]');
+ legend();
```
@@ -1638,9 +1643,9 @@ legend();
If we want to use Rayleigh damping:
```matlab
-a = 1e-2;
-b = 1e-6;
-xi = (a + b * (2*pi*f0).^2)./(2*pi*f0);
+ a = 1e-2;
+ b = 1e-6;
+ xi = (a + b * (2*pi*f0).^2)./(2*pi*f0);
```
@@ -1649,64 +1654,64 @@ xi = (a + b * (2*pi*f0).^2)./(2*pi*f0);
System Matrix - A
```matlab
-Adiag = zeros(2*n_modes,1);
-Adiag(2:2:end) = -2*xi.*(2*pi*f0);
+ Adiag = zeros(2*n_modes,1);
+ Adiag(2:2:end) = -2*xi.*(2*pi*f0);
-Adiagsup = zeros(2*n_modes-1,1);
-Adiagsup(1:2:end) = 1;
+ Adiagsup = zeros(2*n_modes-1,1);
+ Adiagsup(1:2:end) = 1;
-Adiaginf = zeros(2*n_modes-1,1);
-Adiaginf(1:2:end) = -(2*pi*f0).^2;
+ Adiaginf = zeros(2*n_modes-1,1);
+ Adiaginf(1:2:end) = -(2*pi*f0).^2;
-A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
+ A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
```
System Matrix - B
```matlab
-B = zeros(2*n_modes, length(i_input));
+ B = zeros(2*n_modes, length(i_input));
-for i = 1:length(i_input)
- % Physical Coordinates
- Fp = zeros(n_nodes, 1);
- Fp(i_input(i)) = 1;
+ for i = 1:length(i_input)
+ % Physical Coordinates
+ Fp = zeros(n_nodes, 1);
+ Fp(i_input(i)) = 1;
- B(2:2:end, i) = xn'*Fp;
-end
+ B(2:2:end, i) = xn'*Fp;
+ end
```
System Matrix - C
```matlab
-C = zeros(length(i_output), 2*n_modes);
-C(:, 1:2:end) = xn(i_output, :);
+ C = zeros(length(i_output), 2*n_modes);
+ C(:, 1:2:end) = xn(i_output, :);
```
System Matrix - D
```matlab
-D = zeros(length(i_output), length(i_input));
+ D = zeros(length(i_output), length(i_input));
```
State Space Model
```matlab
-G_d = ss(A, B, C, D);
+ G_d = ss(A, B, C, D);
```
#### Comparison with Uniform Damping {#comparison-with-uniform-damping}
```matlab
-freqs = logspace(0, 5, 1000);
+ freqs = logspace(0, 5, 1000);
-figure;
-hold on;
-plot(freqs, abs(squeeze(freqresp(G_f, freqs, 'Hz'))), 'DisplayName', 'Uniform Damping');
-plot(freqs, abs(squeeze(freqresp(G_d, freqs, 'Hz'))), 'DisplayName', 'Non-Uniform Damping');
-set(gca, 'XScale', 'log'); set(gca, 'YScale', 'log');
-ylabel('Amplitude'); xlabel('Frequency [Hz]');
-legend();
+ figure;
+ hold on;
+ plot(freqs, abs(squeeze(freqresp(G_f, freqs, 'Hz'))), 'DisplayName', 'Uniform Damping');
+ plot(freqs, abs(squeeze(freqresp(G_d, freqs, 'Hz'))), 'DisplayName', 'Non-Uniform Damping');
+ set(gca, 'XScale', 'log'); set(gca, 'YScale', 'log');
+ ylabel('Amplitude'); xlabel('Frequency [Hz]');
+ legend();
```
@@ -1715,79 +1720,79 @@ legend();
Let's sort the modes by their peak gains and plot their sorted peak gains.
```matlab
-dc_gain = abs(xn(i_input, :).*xn(i_output, :))./(2*pi*f0).^2;
-peak_gain = dc_gain./xi;
+ dc_gain = abs(xn(i_input, :).*xn(i_output, :))./(2*pi*f0).^2;
+ peak_gain = dc_gain./xi;
-[peak_gain_sort, index_sort] = sort(peak_gain, 'descend');
+ [peak_gain_sort, index_sort] = sort(peak_gain, 'descend');
```
Let's keep only the first 10 **sorted** modes.
```matlab
-m_max = 10;
+ m_max = 10;
-xn_s = xn(:, index_sort(1:m_max));
-f0_s = f0(index_sort(1:m_max));
-xi_x = xi(index_sort(1:m_max));
+ xn_s = xn(:, index_sort(1:m_max));
+ f0_s = f0(index_sort(1:m_max));
+ xi_s = xi(index_sort(1:m_max));
```
```matlab
-Adiag = zeros(2*m_max,1);
-Adiag(2:2:end) = -2*xi_s.*(2*pi*f0_s);
+ Adiag = zeros(2*m_max,1);
+ Adiag(2:2:end) = -2*xi_s.*(2*pi*f0_s);
-Adiagsup = zeros(2*m_max-1,1);
-Adiagsup(1:2:end) = 1;
+ Adiagsup = zeros(2*m_max-1,1);
+ Adiagsup(1:2:end) = 1;
-Adiaginf = zeros(2*m_max-1,1);
-Adiaginf(1:2:end) = -(2*pi*f0_s).^2;
+ Adiaginf = zeros(2*m_max-1,1);
+ Adiaginf(1:2:end) = -(2*pi*f0_s).^2;
-A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
+ A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
```
System Matrix - B
```matlab
-B = zeros(2*m_max, length(i_input));
+ B = zeros(2*m_max, length(i_input));
-for i = 1:length(i_input)
- % Physical Coordinates
- Fp = zeros(n_nodes, 1);
- Fp(i_input(i)) = 1;
+ for i = 1:length(i_input)
+ % Physical Coordinates
+ Fp = zeros(n_nodes, 1);
+ Fp(i_input(i)) = 1;
- B(2:2:end, i) = xn_s'*Fp;
-end
+ B(2:2:end, i) = xn_s'*Fp;
+ end
```
System Matrix - C
```matlab
-C = zeros(length(i_output), 2*m_max);
-C(:, 1:2:end) = xn_s(i_output, :);
+ C = zeros(length(i_output), 2*m_max);
+ C(:, 1:2:end) = xn_s(i_output, :);
```
System Matrix - D
```matlab
-D = zeros(length(i_output), length(i_input));
+ D = zeros(length(i_output), length(i_input));
```
State Space Model
```matlab
-G_p = ss(A, B, C, D);
+ G_p = ss(A, B, C, D);
```
```matlab
-freqs = logspace(0, 5, 1000);
+ freqs = logspace(0, 5, 1000);
-figure;
-hold on;
-plot(freqs, abs(squeeze(freqresp(G_f, freqs, 'Hz'))), 'DisplayName', 'Uniform Damping');
-plot(freqs, abs(squeeze(freqresp(G_d, freqs, 'Hz'))), 'DisplayName', 'Non-Uniform Damping');
-plot(freqs, abs(squeeze(freqresp(G_p, freqs, 'Hz'))), 'DisplayName', 'Peak sort');
-set(gca, 'XScale', 'log'); set(gca, 'YScale', 'log');
-ylabel('Amplitude'); xlabel('Frequency [Hz]');
-legend();
+ figure;
+ hold on;
+ plot(freqs, abs(squeeze(freqresp(G_f, freqs, 'Hz'))), 'DisplayName', 'Uniform Damping');
+ plot(freqs, abs(squeeze(freqresp(G_d, freqs, 'Hz'))), 'DisplayName', 'Non-Uniform Damping');
+ plot(freqs, abs(squeeze(freqresp(G_p, freqs, 'Hz'))), 'DisplayName', 'Peak sort');
+ set(gca, 'XScale', 'log'); set(gca, 'YScale', 'log');
+ ylabel('Amplitude'); xlabel('Frequency [Hz]');
+ legend();
```
@@ -1799,8 +1804,8 @@ legend();
Let's choose two inputs and two outputs.
```matlab
-i_input = [14, 31];
-i_output = [14, 31];
+ i_input = [14, 31];
+ i_output = [14, 31];
```
@@ -1809,49 +1814,49 @@ i_output = [14, 31];
System Matrix - A
```matlab
-Adiag = zeros(2*n_modes,1);
-Adiag(2:2:end) = -2*xi.*(2*pi*f0);
+ Adiag = zeros(2*n_modes,1);
+ Adiag(2:2:end) = -2*xi.*(2*pi*f0);
-Adiagsup = zeros(2*n_modes-1,1);
-Adiagsup(1:2:end) = 1;
+ Adiagsup = zeros(2*n_modes-1,1);
+ Adiagsup(1:2:end) = 1;
-Adiaginf = zeros(2*n_modes-1,1);
-Adiaginf(1:2:end) = -(2*pi*f0).^2;
+ Adiaginf = zeros(2*n_modes-1,1);
+ Adiaginf(1:2:end) = -(2*pi*f0).^2;
-A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
+ A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
```
System Matrix - B
```matlab
-B = zeros(2*n_modes, length(i_input));
+ B = zeros(2*n_modes, length(i_input));
-for i = 1:length(i_input)
- % Physical Coordinates
- Fp = zeros(n_nodes, 1);
- Fp(i_input(i)) = 1;
+ for i = 1:length(i_input)
+ % Physical Coordinates
+ Fp = zeros(n_nodes, 1);
+ Fp(i_input(i)) = 1;
- B(2:2:end, i) = xn'*Fp;
-end
+ B(2:2:end, i) = xn'*Fp;
+ end
```
System Matrix - C
```matlab
-C = zeros(length(i_output), 2*n_modes);
-C(:, 1:2:end) = xn(i_output, :);
+ C = zeros(length(i_output), 2*n_modes);
+ C(:, 1:2:end) = xn(i_output, :);
```
System Matrix - D
```matlab
-D = zeros(length(i_output), length(i_input));
+ D = zeros(length(i_output), length(i_input));
```
State Space Model
```matlab
-G_m = ss(A, B, C, D);
+ G_m = ss(A, B, C, D);
```
@@ -1862,13 +1867,13 @@ First, we have to make sure that the rigid body mode is not included in the syst
Then, we compute the controllability and observability gramians.
```matlab
-wc = gram(G_m, 'c');
-wo = gram(G_m, 'o');
+ wc = gram(G_m, 'c');
+ wo = gram(G_m, 'o');
```
And we plot the diagonal terms
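The plotting code itself is not reproduced; a sketch in the same style as the previous plots:

```matlab
figure;
hold on;
plot(diag(wc), 'o', 'DisplayName', 'Controllability');
plot(diag(wo), 'x', 'DisplayName', 'Observability');
set(gca, 'YScale', 'log');
xlabel('State index'); ylabel('Gramian diagonal term');
legend();
```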
-
+
{{< figure src="/ox-hugo/hatch00_gramians.png" caption="Figure 21: Observability and Controllability Gramians" >}}
@@ -1883,17 +1888,17 @@ We use `balreal` to rank oscillatory states.
> to reduce the model order).
```matlab
-[G_b, G, T, Ti] = balreal(G_m);
+ [G_b, G, T, Ti] = balreal(G_m);
```
-
+
{{< figure src="/ox-hugo/hatch00_cant_beam_gramian_balanced.png" caption="Figure 22: Sorted values of the Gramian of the balanced realization" >}}
Now we can choose the number of states to keep.
```matlab
-n_states_b = 20;
+ n_states_b = 20;
```
We now use `modred` to define a reduced order oscillatory system using the `matchdc` or `truncate` option.
@@ -1906,7 +1911,7 @@ We now use `modred` to define reduced order oscillatory system using `mathdc` or
> state vector and X2 is discarded.
```matlab
-G_br = modred(G_b, n_states_b+1:size(A,1), 'truncate');
+ G_br = modred(G_b, n_states_b+1:size(A,1), 'truncate');
```
If needed, the rigid body mode should be added to the reduced system.
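A possible way to do so (a sketch, assuming mode 1 is the rigid body mode and reusing `xn`, `i_input` and `i_output` from above): model the rigid body mode as a double integrator in modal form and connect it in parallel with the reduced oscillatory system.

```matlab
% Rigid body mode (f0 = 0) as a double integrator in modal form
A_rb = [0, 1; 0, 0];
B_rb = [zeros(1, length(i_input)); xn(i_input, 1)'];
C_rb = [xn(i_output, 1), zeros(length(i_output), 1)];
G_rb = ss(A_rb, B_rb, C_rb, zeros(length(i_output), length(i_input)));

% Parallel connection: total response = rigid body + reduced oscillatory part
G_total = G_rb + G_br;
```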
@@ -1914,13 +1919,13 @@ If needed, the rigid body mode should be added to the reduced system.
Another option is to specify the minimum value of the Gramian diagonal elements for the modes to keep.
```matlab
-G_min = 1e-4;
-G_br = modred(G_b, find(G < G_min), 'truncate');
+ G_min = 1e-4;
+ G_br = modred(G_b, find(G < G_min), 'truncate');

-str = fileread('files/Kdense.txt');
-% Remove spaces
-str = regexprep(str,'\s+','');
-% Regex to get the data
-parts = regexp(str, '\[(?<row>\d+),(?<col>\d+)\]:(?<val>[^\[]+)', 'names');
+ str = fileread('files/Kdense.txt');
+ % Remove spaces
+ str = regexprep(str,'\s+','');
+ % Regex to get the data
+ parts = regexp(str, '\[(?<row>\d+),(?<col>\d+)\]:(?<val>[^\[]+)', 'names');
-row = cellfun(@str2double, {parts.row}, 'UniformOutput', true);
-col = cellfun(@str2double, {parts.col}, 'UniformOutput', true);
-val = cellfun(@str2double, {parts.val}, 'UniformOutput', true);
+ row = cellfun(@str2double, {parts.row}, 'UniformOutput', true);
+ col = cellfun(@str2double, {parts.col}, 'UniformOutput', true);
+ val = cellfun(@str2double, {parts.val}, 'UniformOutput', true);
-sz = [max(row), max(col)]; % size of output matrix
-MatK = zeros(sz); % preallocate size
-ix = sub2ind(sz, row, col); % get matrix positions
-MatK(ix)= val; % assign data
+ sz = [max(row), max(col)]; % size of output matrix
+ MatK = zeros(sz); % preallocate size
+ ix = sub2ind(sz, row, col); % get matrix positions
+ MatK(ix)= val; % assign data
```
```matlab
-str = fileread('files/Mdense.txt');
-% Remove spaces
-str = regexprep(str,'\s+','');
-% Regex to get the data
-parts = regexp(str, '\[(?<row>\d+),(?<col>\d+)\]:(?<val>[^\[]+)', 'names');
+ str = fileread('files/Mdense.txt');
+ % Remove spaces
+ str = regexprep(str,'\s+','');
+ % Regex to get the data
+ parts = regexp(str, '\[(?<row>\d+),(?<col>\d+)\]:(?<val>[^\[]+)', 'names');
-row = cellfun(@str2double, {parts.row}, 'UniformOutput', true);
-col = cellfun(@str2double, {parts.col}, 'UniformOutput', true);
-val = cellfun(@str2double, {parts.val}, 'UniformOutput', true);
+ row = cellfun(@str2double, {parts.row}, 'UniformOutput', true);
+ col = cellfun(@str2double, {parts.col}, 'UniformOutput', true);
+ val = cellfun(@str2double, {parts.val}, 'UniformOutput', true);
-sz = [max(row), max(col)]; % size of output matrix
-MatM = zeros(sz); % preallocate size
-ix = sub2ind(sz, row, col); % get matrix positions
-MatM(ix)= val; % assign data
+ sz = [max(row), max(col)]; % size of output matrix
+ MatM = zeros(sz); % preallocate size
+ ix = sub2ind(sz, row, col); % get matrix positions
+ MatM(ix)= val; % assign data
```
Find inputs/outputs:
```matlab
-i_input = 14;
-i_output = 29;
+ i_input = 14;
+ i_output = 29;
```
@@ -1987,27 +1992,27 @@ i_output = 29;
Correspondence with DOF:
```matlab
-a = readtable('files/Mass_HB.mapping', 'FileType', 'text');
-KM_i = strcmpi(a{:, 3},{'UZ'});
+ a = readtable('files/Mass_HB.mapping', 'FileType', 'text');
+ KM_i = strcmpi(a{:, 3},{'UZ'});
-MatM = MatM(KM_i, KM_i);
-MatK = MatK(KM_i, KM_i);
+ MatM = MatM(KM_i, KM_i);
+ MatK = MatK(KM_i, KM_i);
```
### Read Position of Nodes {#read-position-of-nodes}
```matlab
-a = readmatrix('files/FEA-poutre-noeuds.txt');
-pos = a(:, 4:6);
-node_i = a(:, 7);
+ a = readmatrix('files/FEA-poutre-noeuds.txt');
+ pos = a(:, 4:6);
+ node_i = a(:, 7);
```
```matlab
-figure;
-hold on;
-plot3(pos(:,1),pos(:,3),pos(:,3), 'ko')
-text(pos(:,1),pos(:,3),pos(:,3), num2cell(node_i))
+ figure;
+ hold on;
+ plot3(pos(:,1), pos(:,2), pos(:,3), 'ko')
+ text(pos(:,1), pos(:,2), pos(:,3), num2cell(node_i))
```
@@ -2016,73 +2021,73 @@ text(pos(:,1),pos(:,3),pos(:,3), num2cell(node_i))
Define Physical Inputs and Outputs
```matlab
-i_input = 14;
-i_output = 29;
+ i_input = 14;
+ i_output = 29;
```
Damping
```matlab
-xi = 0.01;
+ xi = 0.01;
```
```matlab
-dc_gain = abs(xn(i_input, :).*xn(i_output, :))./(2*pi*f0).^2;
+ dc_gain = abs(xn(i_input, :).*xn(i_output, :))./(2*pi*f0).^2;
-[dc_gain_sort, index_sort] = sort(dc_gain, 'descend');
+ [dc_gain_sort, index_sort] = sort(dc_gain, 'descend');
```
```matlab
-m_max = 13;
+ m_max = 13;
-xn_s = xn(:, index_sort(1:m_max));
-f0_s = f0(index_sort(1:m_max));
+ xn_s = xn(:, index_sort(1:m_max));
+ f0_s = f0(index_sort(1:m_max));
```
```matlab
-Adiag = zeros(2*m_max,1);
-Adiag(2:2:end) = -2*xi.*(2*pi*f0_s);
+ Adiag = zeros(2*m_max,1);
+ Adiag(2:2:end) = -2*xi.*(2*pi*f0_s);
-Adiagsup = zeros(2*m_max-1,1);
-Adiagsup(1:2:end) = 1;
+ Adiagsup = zeros(2*m_max-1,1);
+ Adiagsup(1:2:end) = 1;
-Adiaginf = zeros(2*m_max-1,1);
-Adiaginf(1:2:end) = -(2*pi*f0_s).^2;
+ Adiaginf = zeros(2*m_max-1,1);
+ Adiaginf(1:2:end) = -(2*pi*f0_s).^2;
-A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
+ A = diag(Adiag) + diag(Adiagsup, 1) + diag(Adiaginf, -1);
```
System Matrix - B
```matlab
-B = zeros(2*m_max, length(i_input));
+ B = zeros(2*m_max, length(i_input));
-for i = 1:length(i_input)
- % Physical Coordinates
- Fp = zeros(n_nodes, 1);
- Fp(i_input(i)) = 1;
+ for i = 1:length(i_input)
+ % Physical Coordinates
+ Fp = zeros(n_nodes, 1);
+ Fp(i_input(i)) = 1;
- B(2:2:end, i) = xn_s'*Fp;
-end
+ B(2:2:end, i) = xn_s'*Fp;
+ end
```
System Matrix - C
```matlab
-C = zeros(length(i_output), 2*m_max);
-C(:, 1:2:end) = xn_s(i_output, :);
+ C = zeros(length(i_output), 2*m_max);
+ C(:, 1:2:end) = xn_s(i_output, :);
```
System Matrix - D
```matlab
-D = zeros(length(i_output), length(i_input));
+ D = zeros(length(i_output), length(i_input));
```
State Space Model
```matlab
-G_s = ss(A, B, C, D);
+ G_s = ss(A, B, C, D);
```
@@ -2091,15 +2096,15 @@ G_s = ss(A, B, C, D);
Full Mass and Stiffness matrices in the principal coordinates:
```matlab
-Mp = eye(length(f0));
-Kp = xn'*diag((2*pi*f0).^2)/xn;
+ Mp = eye(length(f0));
+ Kp = diag((2*pi*f0).^2); % principal coordinates: Mp = I, Kp = diag(w_i^2)
```
Reduced Mass and Stiffness matrices in the principal coordinates:
```matlab
-Mr = Mp()
-Kr = xn'*diag((2*pi*f0).^2)/xn;
+ Mr = eye(m_max);
+ Kr = diag((2*pi*f0_s).^2); % only the kept (sorted) modes
```
Reduced Mass and Stiffness matrices in the physical coordinates:
@@ -2109,28 +2114,28 @@ Reduced Mass and Stiffness matrices in the physical coordinates:
```
```matlab
-M = xn_s*eye(m_max)/xn_s;
-K = xn_s*diag((2*pi*f0_s).^2)/xn_s;
+ M = xn_s*eye(m_max)/xn_s;
+ K = xn_s*diag((2*pi*f0_s).^2)/xn_s;
```
```matlab
-% M = xn*eye(length(f0))/xn;
-% K = xn*diag((2*pi*f0).^2)/xn;
+ % M = xn*eye(length(f0))/xn;
+ % K = xn*diag((2*pi*f0).^2)/xn;
-M = eye(length(f0));
-K = xn*diag((2*pi*f0).^2)/xn;
+ M = eye(length(f0));
+ K = xn*diag((2*pi*f0).^2)/xn;
```
### Frames for Simscape {#frames-for-simscape}
```matlab
-pos_frames = pos([1, i_input, i_output], :);
+ pos_frames = pos([1, i_input, i_output], :);
```
## Bibliography {#bibliography}
-Hatch, Michael R. 2000. _Vibration Simulation Using MATLAB and ANSYS_. CRC Press.
+Hatch, Michael R. 2000. _Vibration Simulation Using MATLAB and ANSYS_. CRC Press.
-Miu, Denny K. 1993. _Mechatronics: Electromechanics and Contromechanics_. 1st ed. Mechanical Engineering Series. Springer-Verlag New York.
+Miu, Denny K. 1993. _Mechatronics: Electromechanics and Contromechanics_. 1st ed. Mechanical Engineering Series. Springer-Verlag New York.
diff --git a/content/book/leach14_fundam_princ_engin_nanom.md b/content/book/leach14_fundam_princ_engin_nanom.md
index 2cb4021..2af7db2 100644
--- a/content/book/leach14_fundam_princ_engin_nanom.md
+++ b/content/book/leach14_fundam_princ_engin_nanom.md
@@ -8,7 +8,7 @@ Tags
: [Metrology]({{< relref "metrology" >}})
Reference
-: ([Leach 2014](#orgc3e03e3))
+: ([Leach 2014](#orgc132434))
Author(s)
: Leach, R.
@@ -89,4 +89,4 @@ This type of angular interferometer is used to measure small angles (less than \
## Bibliography {#bibliography}
-Leach, Richard. 2014. _Fundamental Principles of Engineering Nanometrology_. Elsevier. .
+Leach, Richard. 2014. _Fundamental Principles of Engineering Nanometrology_. Elsevier.
diff --git a/content/book/leach18_basic_precis_engin_edition.md b/content/book/leach18_basic_precis_engin_edition.md
index 1ba0854..397cb91 100644
--- a/content/book/leach18_basic_precis_engin_edition.md
+++ b/content/book/leach18_basic_precis_engin_edition.md
@@ -8,7 +8,7 @@ Tags
: [Precision Engineering]({{< relref "precision_engineering" >}})
Reference
-: ([Leach and Smith 2018](#org545df46))
+: ([Leach and Smith 2018](#org50ae2e1))
Author(s)
: Leach, R., & Smith, S. T.
@@ -19,4 +19,4 @@ Year
## Bibliography {#bibliography}
-Leach, Richard, and Stuart T. Smith. 2018. _Basics of Precision Engineering - 1st Edition_. CRC Press.
+Leach, Richard, and Stuart T. Smith. 2018. _Basics of Precision Engineering - 1st Edition_. CRC Press.
diff --git a/content/book/skogestad07_multiv_feedb_contr.md b/content/book/skogestad07_multiv_feedb_contr.md
index 6d41ce8..c39d169 100644
--- a/content/book/skogestad07_multiv_feedb_contr.md
+++ b/content/book/skogestad07_multiv_feedb_contr.md
@@ -8,7 +8,7 @@ Tags
: [Reference Books]({{< relref "reference_books" >}}), [Multivariable Control]({{< relref "multivariable_control" >}})
Reference
-: ([Skogestad and Postlethwaite 2007](#org11783d5))
+: ([Skogestad and Postlethwaite 2007](#org7d9b388))
Author(s)
: Skogestad, S., & Postlethwaite, I.
@@ -19,10 +19,51 @@ Year
PDF version
: [link](/ox-hugo/skogestad07_multiv_feedb_contr.pdf)
+
+\(
+% H Infini
+\newcommand{\hinf}{\mathcal{H}_\infty}
+% H 2
+\newcommand{\htwo}{\mathcal{H}_2}
+% Omega
+\newcommand{\w}{\omega}
+% H-Infinity Norm
+\newcommand{\hnorm}[1]{\left\|#1\right\|_{\infty}}
+% H-2 Norm
+\newcommand{\normtwo}[1]{\left\|#1\right\|_{2}}
+% Norm
+\newcommand{\norm}[1]{\left\|#1\right\|}
+% Absolute value
+\newcommand{\abs}[1]{\left\lvert#1\right\rvert}
+% Maximum for all omega
+\newcommand{\maxw}{\text{max}_{\omega}}
+% Maximum singular value
+\newcommand{\maxsv}{\overline{\sigma}}
+% Minimum singular value
+\newcommand{\minsv}{\underline{\sigma}}
+% Diag keyword
+\newcommand{\diag}[1]{\text{diag}\{{#1}\}}
+% Vector
+\newcommand{\colvec}[1]{\begin{bmatrix} #1 \end{bmatrix}}
+\newcommand{\tcmbox}[1]{\boxed{#1}}
+% Simulate SIunitx
+\newcommand{\SI}[2]{#1\,#2}
+\newcommand{\ang}[1]{#1^{\circ}}
+\newcommand{\degree}{^{\circ}}
+\newcommand{\radian}{\text{rad}}
+\newcommand{\percent}{\%}
+\newcommand{\decibel}{\text{dB}}
+\newcommand{\per}{/}
+% Bug with subequations
+\newcommand{\eatLabel}[2]{}
+\newenvironment{subequations}{\eatLabel}{}
+\)
+
+
## Introduction {#introduction}
-
+
### The Process of Control System Design {#the-process-of-control-system-design}
@@ -44,10 +85,10 @@ The process of designing a control system is a step by step design procedure as
13. Choose hardware and software and implement the controller
14. Test and validate the control system, and tune the controller on-line, if necessary
-Input-output controllability analysis is studied in section [sec:perf_limit_siso](#sec:perf_limit_siso) for SISO systems and in section [sec:perf_limit_mimo](#sec:perf_limit_mimo) for MIMO systems.
-The steps 4, 5, 6 and 7 are corresponding to the **control structure design**. This is treated in section [sec:controller_structure_design](#sec:controller_structure_design).
-The design of the controller is described in section [sec:controller_design](#sec:controller_design).
-The analysis of performance and robustness of a controlled system is studied in sections [sec:uncertainty_robustness_siso](#sec:uncertainty_robustness_siso) and [sec:robust_perf_mimo](#sec:robust_perf_mimo).
+Input-output controllability analysis is studied in dedicated sections for SISO systems and for MIMO systems.
+The steps 4, 5, 6 and 7 correspond to the **control structure design**, treated in its own section.
+The design of the controller is described in a separate section.
+The analysis of performance and robustness of a controlled system is studied in the sections on uncertainty and robustness for SISO systems and on robust performance for MIMO systems.
### The Control Problem {#the-control-problem}
@@ -64,7 +105,7 @@ A major source of difficulty is that models may be inaccurate or may change with
The inaccuracy in \\(G\\) may cause instability problems as it is part of the feedback loop.
To deal with such a problem, the concept of **model uncertainty** will be used.
-
+
**Nominal Stability (NS)**
@@ -112,9 +153,11 @@ The variables \\(\hat{y}\\), \\(\hat{r}\\) and \\(\hat{e}\\) are in the same uni
For MIMO systems, each variable in the vectors \\(\hat{d}\\), \\(\hat{r}\\), \\(\hat{u}\\) and \\(\hat{e}\\) may have a different maximum value, in which case \\(D\_e\\), \\(D\_u\\), \\(D\_d\\) and \\(D\_r\\) become diagonal scaling matrices.
-
+
+**Scaled transfer functions**:
+
\begin{align\*}
G &= D\_e^{-1} \hat{G} D\_u\\\\\\
G\_d &= D\_e^{-1} \hat{G\_d} D\_d
@@ -123,7 +166,11 @@ G\_d &= D\_e^{-1} \hat{G\_d} D\_d
We then obtain the following model in terms of scaled variables:
-\\[ y = G u + G\_d d \\]
+
+\begin{equation\*}
+ y = G u + G\_d d
+\end{equation\*}
+
where \\(u\\) and \\(d\\) should be less than 1 in magnitude.
It is sometimes useful to introduce a **scaled reference** \\(\tilde{r}\\) which is less than 1 in magnitude: \\(\tilde{r} = \hat{r}/\hat{r}\_{\max} = D\_r^{-1}\hat{r}\\)
@@ -145,11 +192,11 @@ In order to obtain a linear model from the "first-principle", the following appr
### Notation {#notation}
-Notations used throughout this note are summarized in tables [table:notation_conventional](#table:notation_conventional), [table:notation_general](#table:notation_general) and [table:notation_tf](#table:notation_tf).
+Notations used throughout this note are summarized in tables [1](#table--tab:notation-conventional), [2](#table--tab:notation-general) and [3](#table--tab:notation-tf).
-
+
-
Table 1:
+
Table 1:
Notations for the conventional control configuration
@@ -164,9 +211,9 @@ Notations used throughout this note are summarized in tables [table:notatio
| \\(y\_m\\) | Measurements |
| \\(u\\) | Control signals |
-
+
@@ -178,9 +225,9 @@ Notations used throughout this note are summarized in tables [table:notatio
| \\(v\\) | Controller inputs: measurements |
| \\(u\\) | Control signals |
-
+
@@ -193,7 +240,7 @@ Notations used throughout this note are summarized in tables [table:notatio
## Classical Feedback Control {#classical-feedback-control}
-
+
### Frequency Response {#frequency-response}
@@ -206,10 +253,10 @@ By replacing \\(s\\) by \\(j\omega\\) in a transfer function \\(G(s)\\), we get
After sending a sinusoidal signal through a system \\(G(s)\\), the signal's magnitude is amplified by a factor \\(\abs{G(j\omega)}\\) and its phase is shifted by \\(\angle{G(j\omega)}\\).
-
+
-**minimum phase systems** are systems with no time delays or RHP-zeros.
+**Minimum phase systems** are systems with no time delays or RHP-zeros.
The name minimum phase refers to the fact that such a system has the minimum possible phase lag for the given magnitude response \\(|G(j\omega)|\\).
@@ -219,13 +266,13 @@ The name minimum phase refers to the fact that such a system has the minimum pos
For minimum phase systems, there is a unique relationship between the gain and phase of the frequency response: the **Bode gain-phase relationship**:
-\begin{equation}
+\begin{equation} \label{eq:bode\_phase\_gain}
\angle{G(j\w\_0)} = \frac{1}{\pi} \int\_{-\infty}^{\infty} \frac{d\ln{\abs{G(j\w)}}}{d\ln{\w}} \ln{\abs{\frac{\w+\w\_0}{\w-\w\_0}}} \frac{d\w}{\w}
\end{equation}
We denote by \\(N(\w\_0) = \left( \frac{d\ln{|G(j\w)|}}{d\ln{\w}} \right)\_{\w=\w\_0}\\) the **slope of the magnitude** of \\(G(s)\\) in log-log scale. We then have the following approximation of the **Bode gain-phase relationship**:
-\begin{equation}
+\begin{equation} \label{eq:bode\_phase\_gain\_approx}
\tcmbox{\angle{G(j\w\_0)} \approx \frac{\pi}{2} N(\w\_0)}
\end{equation}
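For example, a slope \\(N = -2\\) at crossover gives a phase of approximately \\(\SI{-180}{\degree}\\), leaving no phase margin; this is why the slope of \\(\abs{L}\\) is usually required to be about \\(-1\\) around crossover.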
@@ -235,30 +282,32 @@ We note \\(N(\w\_0) = \left( \frac{d\ln{|G(j\w)|}}{d\ln{\w}} \right)\_{\w=\w\_0}
#### One Degree-of-Freedom Controller {#one-degree-of-freedom-controller}
-The simple one degree-of-freedom controller negative feedback structure is represented in Fig. [fig:classical_feedback_alt](#fig:classical_feedback_alt).
+The simple one degree-of-freedom negative feedback control structure is represented in Fig. [1](#orgd511abe).
The input to the controller \\(K(s)\\) is \\(r-y\_m\\) where \\(y\_m = y+n\\) is the measured output and \\(n\\) is the measurement noise.
Thus, the input to the plant is \\(u = K(s) (r-y-n)\\).
The objective of control is to manipulate \\(u\\) (design \\(K\\)) such that the control error \\(e\\) remains small in spite of disturbances \\(d\\).
The control error is defined as \\(e = y-r\\).
-
+
{{< figure src="/ox-hugo/skogestad07_classical_feedback_alt.png" caption="Figure 1: Configuration for one degree-of-freedom control" >}}
#### Closed-loop Transfer Functions {#closed-loop-transfer-functions}
-
+
-\begin{subequations}
- \begin{align}
- y &= T r + S G\_d d + T n\\\\\\
- e &= -S r + S G\_d d - T n\\\\\\
- y &= KS r - KS G\_d d - KS n
- \end{align}
-\end{subequations}
+**Closed-Loop Transfer Functions**:
+
+\begin{equation} \label{eq:closed\_loop\_tf\_1dof\_feedback}
+\begin{aligned}
+y &= T r + S G\_d d + T n\\\\\\
+e &= -S r + S G\_d d - T n\\\\\\
+u &= KS r - KS G\_d d - KS n
+\end{aligned}
+\end{equation}
@@ -266,12 +315,18 @@ The control error is defined as \\(e = y-r\\).
#### Why Feedback? {#why-feedback}
We could think that we can use a "perfect" feedforward controller \\(K\_r(s) = G^{-1}(s)\\) with \\(r-G\_d d\\) as the controller input:
-\\[ y = G u + G\_d d = G K\_r (r - G\_d d) + G\_d d = r \\]
+
+\begin{equation\*}
+ y = G u + G\_d d = G K\_r (r - G\_d d) + G\_d d = r
+\end{equation\*}
+
Unfortunately, \\(G\\) is never an exact model and the disturbances are never known exactly.
-
+
+**Reasons for Feedback Control**:
+
- Signal uncertainty
- Unknown disturbance
- Model uncertainty
@@ -300,7 +355,7 @@ Moreover, method 2 provides useful measure of relative stability and will be use
The **Gain Margin** is defined as:
-\begin{equation}
+\begin{equation} \label{eq:gain\_margin}
\tcmbox{\text{GM} = \frac{1}{|L(j\w\_{180})|}}
\end{equation}
@@ -314,7 +369,7 @@ The GM is the factor by which the loop gain \\(\vert L(s)\vert\\) may be increas
The **Phase Margin** is defined as:
-\begin{equation}
+\begin{equation} \label{eq:phase\_margin}
\tcmbox{\text{PM} = \angle L(j \w\_c) + \ang{180}}
\end{equation}
@@ -329,15 +384,15 @@ Note that by decreasing the value of \\(\omega\_c\\) (lowering the closed-loop b
#### Maximum Peak Criteria {#maximum-peak-criteria}
-
+
-\begin{subequations}
- \begin{align}
- M\_S &= \max\_{\w} \abs{S(j\w)} = \hnorm{S}\\\\\\
- M\_T &= \max\_{\w} \abs{T(j\w)} = \hnorm{T}
- \end{align}
-\end{subequations}
+**Maximum peak criteria** for \\(S\\) and \\(T\\):
+
+\begin{align}
+ M\_S &= \max\_{\w} \abs{S(j\w)} = \hnorm{S}\\\\\\
+ M\_T &= \max\_{\w} \abs{T(j\w)} = \hnorm{T}
+\end{align}
@@ -351,7 +406,7 @@ Typically, we require \\(M\_S < 2\ (6dB)\\) and \\(M\_T < 1.25\ (2dB)\\).
There is a close **relationship between these maximum peaks and the gain and phase margins**.
For a given value of \\(M\_S\\), we have:
-\begin{equation}
+\begin{equation} \label{eq:link\_pm\_gm\_mm}
\tcmbox{\text{GM} \geq \frac{M\_S}{M\_S-1}; \quad \text{PM} \geq \frac{1}{M\_S}}
\end{equation}
@@ -365,10 +420,10 @@ Example of guaranteed stability margins:
In general, a large bandwidth corresponds to a faster rise time; however, it also implies a higher sensitivity to noise and to parameter variations.
-
+
-The bandwidth, is the frequency range \\([\w\_1, \w\_2]\\) over which control is **effective**. In most case we simple call \\(\w\_2 = \w\_B\\) the bandwidth.
+The **bandwidth** is the frequency range \\([\w\_1, \w\_2]\\) over which control is **effective**. In most cases we simply call \\(\w\_2 = \w\_B\\) the bandwidth.
@@ -387,7 +442,7 @@ Then we have the following regions:
The closed-loop time constant \\(\tau\_{\text{cl}}\\) can be related to the bandwidth:
-\begin{equation}
+\begin{equation} \label{eq:bandwidth\_response\_time}
\tcmbox{\tau\_{\text{cl}} \approx \frac{1}{\w\_b}}
\end{equation}
@@ -444,9 +499,10 @@ Fortunately, the conflicting design objectives are generally in different freque
#### Fundamentals of Loop-Shaping Design {#fundamentals-of-loop-shaping-design}
-
+
+**Loop Shaping**:
Design procedure that involves explicitly shaping the magnitude of the loop transfer function \\(\abs{L(j\w)}\\).
@@ -454,12 +510,12 @@ Design procedure that involves explicitly shaping the magnitude of the loop tran
To get the benefits of feedback control, we want the loop gain \\(\abs{L(j\w)}\\) to be as large as possible within the bandwidth region.
However, due to time delays, RHP-zeros, unmodelled high-frequency dynamics and limitations on the allowed manipulated inputs, the loop gain has to drop below one at and above the crossover frequency \\(\w\_c\\).
-
+
To measure how \\(\abs{L(j\w)}\\) falls with frequency, we consider the **logarithmic slope**:
-\begin{equation}
+\begin{equation} \label{eq:logarithmic\_slope}
N = \frac{d \ln{\abs{L}}}{d \ln{\w}}
\end{equation}
@@ -491,7 +547,7 @@ First consider a **time delay** \\(\theta\\) which adds a phase of \\(-\theta \o
Thus, we want \\(\theta \omega\_c < \SI{55}{\degree} \approx \SI{1}{\radian}\\).
The attainable bandwidth is limited by the time delay:
-\begin{equation}
+\begin{equation} \label{eq:time\_delay\_bw\_limit}
\tcmbox{\omega\_c < 1/\theta}
\end{equation}
@@ -500,7 +556,7 @@ To avoid an increase in slope cause by the zero, we add a pole at \\(s = -z\\),
The phase contribution is \\(\approx \SI{-55}{\degree}\\) at \\(\w = z/2\\).
Thus, this limits the attainable bandwidth:
-\begin{equation}
+\begin{equation} \label{eq:rhp\_zero\_bw\_limit}
\tcmbox{\w\_c < z/2}
\end{equation}
@@ -529,7 +585,7 @@ A reasonable loop shape is then \\(\abs{L} = \abs{G\_d}\\).
The corresponding controller satisfies
-\begin{equation}
+\begin{equation} \label{eq:K\_loop\_shaping\_dist\_reject}
\abs{K} = \abs{G^{-1}G\_d}
\end{equation}
@@ -552,18 +608,18 @@ For reference tracking, we typically want the controller to look like \\(\frac{1
We cannot achieve both of these simultaneously with a single feedback controller.
-The solution is to use a **two degrees of freedom controller** where the reference signal \\(r\\) and output measurement \\(y\_m\\) are independently treated by the controller (Fig. [fig:classical_feedback_2dof_alt](#fig:classical_feedback_2dof_alt)), rather than operating on their difference \\(r - y\_m\\).
+The solution is to use a **two degrees of freedom controller** where the reference signal \\(r\\) and output measurement \\(y\_m\\) are independently treated by the controller (Fig. [2](#orgaf9baaf)), rather than operating on their difference \\(r - y\_m\\).
-
+
{{< figure src="/ox-hugo/skogestad07_classical_feedback_2dof_alt.png" caption="Figure 2: 2 degrees-of-freedom control architecture" >}}
-The controller can be slit into two separate blocks (Fig. [fig:classical_feedback_sep](#fig:classical_feedback_sep)):
+The controller can be split into two separate blocks (Fig. [3](#orgdf34a4b)):
- the **feedback controller** \\(K\_y\\) that is used to **reduce the effect of uncertainty** (disturbances and model errors)
- the **prefilter** \\(K\_r\\) that **shapes the commands** \\(r\\) to improve tracking performance
-
+
{{< figure src="/ox-hugo/skogestad07_classical_feedback_sep.png" caption="Figure 3: 2 degrees-of-freedom control architecture with two separate blocs" >}}
@@ -580,13 +636,13 @@ An alternative design strategy is to directly shape the magnitude of the closed
The \\(\hinf\\) norm of a stable scalar transfer function \\(f(s)\\) is simply the peak value of \\(\abs{f(j\w)}\\) as a function of frequency:
-\begin{equation}
+\begin{equation} \label{eq:hinf\_norm}
\tcmbox{\hnorm{f(s)} \triangleq \max\_{\w} \abs{f(j\w)}}
\end{equation}
Similarly, the symbol \\(\htwo\\) stands for the Hardy space of transfer functions with bounded 2-norm:
-\begin{equation}
+\begin{equation} \label{eq:h2\_norm}
\tcmbox{\normtwo{f(s)} \triangleq \left( \frac{1}{2\pi} \int\_{-\infty}^{\infty} \abs{f(j\w)}^2 d\w \right)^{1/2}}
\end{equation}
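Both norms are directly available numerically (a sketch using the standard `norm` function on an LTI model; the lightly damped test system is an arbitrary choice):

```matlab
% H-infinity and H-2 norms of a lightly damped test system
s = tf('s');
f = 1/(s^2 + 0.1*s + 1);

hinf_norm = norm(f, Inf);   % peak of |f(jw)|, about 10 here
h2_norm   = norm(f, 2);     % energy-like integral measure
```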
@@ -595,9 +651,11 @@ Similarly, the symbol \\(\htwo\\) stands for the Hardy space of transfer functio
The sensitivity function \\(S\\) is a very good indicator of closed-loop performance. The main advantage of considering \\(S\\) is that we want \\(S\\) small and **it is sufficient to consider just its magnitude** \\(\abs{S}\\).
-
+
+**Typical specifications in terms of** \\(S\\):
+
- Minimum bandwidth frequency \\(\w\_B^\*\\)
- Maximum tracking error at selected freq.
- The maximum steady state tracking error \\(A\\)
@@ -612,19 +670,27 @@ Mathematically, these specifications may be captured by an **upper bound** \\(1/
The subscript \\(P\\) stands for **performance** since \\(S\\) is mainly used as a performance indicator.
The performance requirement becomes
-\\[ S(j\w) < 1/\abs{W\_P(j\w)}, \forall \w \\]
+
+\begin{equation\*}
+ S(j\w) < 1/\abs{W\_P(j\w)}, \forall \w
+\end{equation\*}
+
This can be expressed as an \\(\mathcal{H}\_\infty\\) constraint:
-\begin{equation}
+\begin{equation} \label{eq:perf\_requirements\_hinf}
\tcmbox{\hnorm{W\_P S} < 1}
\end{equation}
-
+
-\\[W\_P(s) = \frac{s/M + \w\_B^\*}{s + \w\_B^\* A}\\]
+**Typical performance weight**:
-With (see Fig. [fig:performance_weigth](#fig:performance_weigth)):
+\begin{equation\*}
+ W\_P(s) = \frac{s/M + \w\_B^\*}{s + \w\_B^\* A}
+\end{equation\*}
+
+With (see Fig. [4](#org1e6ca86)):
- \\(M\\): maximum magnitude of \\(\abs{S}\\)
- \\(\w\_B\\): crossover frequency
@@ -632,12 +698,15 @@ With (see Fig. [fig:performance_weigth](#fig:performance_weigth)):
-
+
{{< figure src="/ox-hugo/skogestad07_weight_first_order.png" caption="Figure 4: Inverse of performance weight" >}}
If we want a steeper slope for \\(L\\) below the bandwidth, a higher order weight may be selected. A weight which asks for a slope of \\(-2\\) for \\(L\\) below crossover is:
-\\[W\_P(s) = \frac{(s/M^{1/2} + \w\_B^\*)^2}{(s + \w\_B^\* A^{1/2})^2}\\]
+
+\begin{equation\*}
+ W\_P(s) = \frac{(s/M^{1/2} + \w\_B^\*)^2}{(s + \w\_B^\* A^{1/2})^2}
+\end{equation\*}
#### Stacked Requirements: Mixed Sensitivity {#stacked-requirements-mixed-sensitivity}
@@ -649,14 +718,21 @@ To do this, we can make demands on another closed-loop transfer function \\(T\\)
Also, to achieve robustness or to restrict the magnitude of the input signal \\(u\\), one may place an upper bound \\(1/\abs{W\_U}\\) on the magnitude of \\(KS\\).
To combine these **mixed sensitivity specifications**, a **stacking approach** is usually used, resulting in the following overall specification:
-\\[\maxw \maxsv(N(j\w)) < 1; \quad N = \colvec{W\_P S \\ W\_T T \\ W\_U KS}\\]
+
+\begin{equation\*}
+ \maxw \maxsv(N(j\w)) < 1; \quad N = \begin{bmatrix}
+ W\_P S \\\\\\
+ W\_T T \\\\\\
+ W\_U KS
+ \end{bmatrix}
+\end{equation\*}
After selecting the form of \\(N\\) and the weights, the \\(\hinf\\) optimal controller is obtained by solving the problem \\(\min\_K\hnorm{N(K)}\\).
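In Matlab, this stacked problem maps onto the `mixsyn` function of the Robust Control Toolbox, which minimizes the \\(\hinf\\) norm of the stack with weights on \\(S\\), \\(KS\\) and \\(T\\) respectively (a sketch; the plant and the weights \\(W\_U\\), \\(W\_T\\) are placeholders, and \\(W\_P\\) is assumed built as above):

```matlab
% Mixed sensitivity synthesis: weights on S, KS and T respectively
G   = tf(1, [1, 0.1, 1]);          % placeholder plant
W_U = tf(0.1);                     % placeholder weight on KS
W_T = makeweight(0.5, 100, 2);     % placeholder weight on T

[K, CL, gamma] = mixsyn(G, W_P, W_U, W_T);   % gamma: achieved H-infinity norm
```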
## Introduction to Multivariable Control {#introduction-to-multivariable-control}
-
+
### Introduction {#introduction}
@@ -674,7 +750,7 @@ A plant is said to be **ill-conditioned** if the gain depends strongly on the in
For MIMO systems the order of the transfer functions matters, so in general:
-\begin{equation}
+\begin{equation} \label{eq:mimo\_gk\_neq\_kg}
\tcmbox{GK \neq KG}
\end{equation}
@@ -683,7 +759,7 @@ even when \\(G\\) and \\(K\\) are square matrices.
### Transfer Functions {#transfer-functions}
-
+
The main rule for evaluating transfer functions is the **MIMO Rule**: Start from the output and write down the transfer functions as you meet them going to the input. If you exit a feedback loop then we get a term \\((I-L)^{-1}\\) where \\(L = GK\\) is the transfer function around the loop (gain going backwards).
@@ -693,13 +769,13 @@ The main rule for evaluating transfer functions is the **MIMO Rule**: Start from
#### Negative Feedback Control Systems {#negative-feedback-control-systems}
-For negative feedback system (Fig. [fig:classical_feedback_bis](#fig:classical_feedback_bis)), we define \\(L\\) to be the loop transfer function as seen when breaking the loop at the **output** of the plant:
+For negative feedback system (Fig. [5](#org4a80576)), we define \\(L\\) to be the loop transfer function as seen when breaking the loop at the **output** of the plant:
- \\(L = G K\\)
- \\(S \triangleq (I + L)^{-1}\\) is the transfer function from \\(d\_1\\) to \\(y\\)
- \\(T \triangleq L(I + L)^{-1}\\) is the transfer function from \\(r\\) to \\(y\\)
-
+
{{< figure src="/ox-hugo/skogestad07_classical_feedback_bis.png" caption="Figure 5: Conventional negative feedback control system" >}}
@@ -723,7 +799,7 @@ The element \\(g\_{ij}(j\w)\\) of the matrix \\(G\\) represents the sinusoidal r
For a SISO system, the gain at \\(\omega\\) is simply:
-\begin{equation}
+\begin{equation} \label{eq:gain\_siso}
\frac{|y(\w)|}{|d(\w)|} = \frac{|G(j\w)d(\w)|}{|d(\w)|} = |G(j\w)|
\end{equation}
The gain depends on the frequency \\(\w\\) but it is independent of the input magnitude.
For MIMO systems, we have to use norms to measure the amplitude of the inputs/outputs.
If we select the vector 2-norm, the magnitude of the vector input signal is:
-\\[ \normtwo{d(\w)} = \sqrt{\sum\_j |d\_j(\w)|^2} \\]
+
+\begin{equation\*}
+ \normtwo{d(\w)} = \sqrt{\sum\_j |d\_j(\w)|^2}
+\end{equation\*}
The gain of the system is then:
-\begin{equation}
+\begin{equation} \label{eq:gain\_mimo}
\frac{\normtwo{y(\w)}}{\normtwo{d(\w)}} = \frac{\normtwo{G(j\w)d(\w)}}{\normtwo{d(\w)}} = \frac{\sqrt{\sum\_j |y\_j(\w)|^2}}{\sqrt{\sum\_j |d\_j(\w)|^2}}
\end{equation}
@@ -752,10 +831,12 @@ The main problem is that the eigenvalues measure the gain for the special case w
We are interested in the physical interpretation of the SVD when applied to the frequency response of a MIMO system \\(G(s)\\) with \\(m\\) inputs and \\(l\\) outputs.
-
+
-\begin{equation}
+**Singular Value Decomposition**:
+
+\begin{equation} \label{eq:svd}
G = U \Sigma V^H
\end{equation}
@@ -772,19 +853,29 @@ G = U \Sigma V^H
The input and output directions are related through the singular values:
-\begin{equation}
+\begin{equation} \label{eq:svd\_directions}
\tcmbox{G v\_i = \sigma\_i u\_i}
\end{equation}
So, if we consider an input in the direction \\(v\_i\\), then the output is in the direction \\(u\_i\\). Furthermore, since \\(\normtwo{v\_i}=1\\) and \\(\normtwo{u\_i}=1\\), we see that **the singular value \\(\sigma\_i\\) directly gives the gain of the matrix \\(G\\) in this direction**.
The **largest gain** for any input is equal to the **maximum singular value**:
-\\[\maxsv(G) \equiv \sigma\_1(G) = \max\_{d\neq 0}\frac{\normtwo{Gd}}{\normtwo{d}} = \frac{\normtwo{Gv\_1}}{\normtwo{v\_1}} \\]
-The **smallest gain** for any input direction is equal to the **minimum singular value**:
-\\[\minsv(G) \equiv \sigma\_k(G) = \min\_{d\neq 0}\frac{\normtwo{Gd}}{\normtwo{d}} = \frac{\normtwo{Gv\_k}}{\normtwo{v\_k}} \\]
-We define \\(u\_1 = \bar{u}\\), \\(v\_1 = \bar{v}\\), \\(u\_k=\ubar{u}\\) and \\(v\_k = \ubar{v}\\). Then is follows that:
-\\[ G\bar{v} = \maxsv \bar{u} ; \quad G\ubar{v} = \minsv \ubar{u} \\]
+\begin{equation\*}
+ \maxsv(G) \triangleq \sigma\_1(G) = \max\_{d\neq 0}\frac{\normtwo{Gd}}{\normtwo{d}} = \frac{\normtwo{Gv\_1}}{\normtwo{v\_1}}
+\end{equation\*}
+
+The **smallest gain** for any input direction is equal to the **minimum singular value**:
+
+\begin{equation\*}
+ \minsv(G) \triangleq \sigma\_k(G) = \min\_{d\neq 0}\frac{\normtwo{Gd}}{\normtwo{d}} = \frac{\normtwo{Gv\_k}}{\normtwo{v\_k}}
+\end{equation\*}
+
+We define \\(u\_1 = \overline{u}\\), \\(v\_1 = \overline{v}\\), \\(u\_k = \underline{u}\\) and \\(v\_k = \underline{v}\\). Then it follows that:
+
+\begin{equation\*}
+ G\overline{v} = \maxsv \overline{u} ; \quad G\underline{v} = \minsv \underline{u}
+\end{equation\*}
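+
+A short numpy sketch (ours) makes these directions concrete; the strongly ill-conditioned \\(2 \times 2\\) gain matrix below is of the kind used in distillation examples and is to be taken as illustrative:
+
+```python
+import numpy as np
+
+G = np.array([[87.8, -86.4],
+              [108.2, -109.6]])   # frequency response matrix at one frequency
+
+U, sv, Vh = np.linalg.svd(G)      # G = U @ diag(sv) @ Vh
+v_max, v_min = Vh[0], Vh[-1]      # strongest / weakest input directions
+u_max, u_min = U[:, 0], U[:, -1]  # corresponding output directions
+
+# G v_i = sigma_i u_i: each singular value is the gain in its own direction
+print(np.allclose(G @ v_max, sv[0] * u_max))    # True
+print(np.allclose(G @ v_min, sv[-1] * u_min))   # True
+print(sv)   # gains range from sigma_min ~ 1.4 to sigma_max ~ 197
+```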
#### Non Square Plants {#non-square-plants}
@@ -797,17 +888,23 @@ Similarly, for a plant with more inputs and outputs, the additional input singul
#### Singular Values for Performance {#singular-values-for-performance}
The gain of the MIMO system from the vector of reference inputs \\(r\\) to the vector of control errors \\(e\\) is bounded by the minimum and maximum singular values of \\(S\\):
-\\[ \minsv(S(j\w)) < \frac{\normtwo{e(\w)}}{\normtwo{r(\w)}} < \maxsv(S(j\w)) \\]
+
+\begin{equation\*}
+ \minsv(S(j\w)) < \frac{\normtwo{e(\w)}}{\normtwo{r(\w)}} < \maxsv(S(j\w))
+\end{equation\*}
In terms of performance, we require that the gain remains small for any direction of \\(r(\w)\\) including the "worst-case" direction corresponding to the gain \\(\maxsv(S(j\w))\\). Let \\(1/\abs{W\_P(j\w)}\\) represent the maximum allowed magnitude of \\(\frac{\normtwo{e(\w)}}{\normtwo{r(\w)}}\\) at each frequency:
-\\[ \maxsv(S(j\w)) < \frac{1}{\abs{W\_P}}, \forall \w \Leftrightarrow \hnorm{W\_P S} < 1 \\]
-
+\begin{equation\*}
+ \maxsv(S(j\w)) < \frac{1}{\abs{W\_P}}, \forall \w \Leftrightarrow \hnorm{W\_P S} < 1
+\end{equation\*}
+
+
The \\(\hinf\\) norm is defined as the peak of the maximum singular value of the frequency response:
-\begin{equation}
+\begin{equation} \label{eq:hinf\_norm\_mimo}
\hnorm{M(s)} \triangleq \max\_{\w} \maxsv(M(j\w))
\end{equation}
@@ -825,7 +922,10 @@ A conceptually simple approach to multivariable control is given by a two-step p
2. **Design a diagonal controller** \\(K\_S(s)\\) for the shaped plant using methods similar to those for SISO systems.
The overall controller is then:
-\\[ K(s) = W\_1(s)K\_s(s) \\]
+
+\begin{equation\*}
+ K(s) = W\_1(s)K\_s(s)
+\end{equation\*}
#### Decoupling {#decoupling}
@@ -847,10 +947,16 @@ The idea of decoupling control is appealing, but there are **several difficultie
We can also introduce a **post compensator** \\(W\_2(s)\\).
The shaped plant is then:
-\\[G\_S(s) = W\_2(s)G(s)W\_1(s)\\]
+
+\begin{equation\*}
+ G\_S(s) = W\_2(s)G(s)W\_1(s)
+\end{equation\*}
A diagonal controller \\(K\_S\\) can then be designed for the shaped plant. The overall controller is then:
-\\[K(s) = W\_1(s)K\_S(s)W\_2(s)\\]
+
+\begin{equation\*}
+ K(s) = W\_1(s)K\_S(s)W\_2(s)
+\end{equation\*}
The **SVD-controller** is a special case of a pre- and post-compensator design: \\(W\_1 = V\_0\\) and \\(W\_2 = U\_0^T\\).
\\(V\_0\\) and \\(U\_0\\) are obtained from a SVD of \\(G\_0 = U\_0 \Sigma\_0 V\_0^T\\) where \\(G\_0\\) is a real approximation of \\(G(j\w\_0)\\).
@@ -867,12 +973,20 @@ However, if off-diagonal elements in \\(G(s)\\) are large, the performance with
Consider the problem of disturbance rejection: \\(y = S G\_d d\\) where \\(\normtwo{d}<1\\) and our performance requirement is that \\(\normtwo{y}<1\\) which is equivalent to requiring \\(\maxsv(SG\_d) < 1\\).
However there is generally a trade-off between input usage and performance. The controller that minimizes the input magnitude while meeting the performance requirement is the one that yields all singular values of \\(SG\_d\\) equal to 1, i.e. \\(\sigma\_i(SG\_d) = 1, \forall \w\\). This corresponds to:
-\\[S\_{\text{min}} G\_d = U\_1\\]
+
+\begin{equation\*}
+ S\_{\text{min}} G\_d = U\_1
+\end{equation\*}
+
Where \\(U\_1\\) is some all-pass transfer function (which at each frequency has all its singular values equal to 1).
At frequencies where feedback is effective, we have \\(S\approx L^{-1}\\) and then \\(L\_{\text{min}} = GK\_{\text{min}} \approx G\_d U\_1^{-1}\\).
In conclusion, the controller and loop shape with the minimum gain will often look like:
-\\[ K\_{\text{min}} \approx G^{-1} G\_d U\_2 \\]
+
+\begin{equation\*}
+ K\_{\text{min}} \approx G^{-1} G\_d U\_2
+\end{equation\*}
+
where \\(U\_2 = U\_1^{-1}\\) is some all-pass transfer function matrix.
We see that for disturbances entering at the plant inputs, \\(G\_d = G\\), we get \\(K\_{\text{min}} = U\_2\\), so a simple constant unit gain controller yields a good trade-off between output performance and input usage.
@@ -882,22 +996,29 @@ We see that for disturbances entering at the plant inputs, \\(G\_d = G\\), we ge
In the mixed-sensitivity \\(S/KS\\) problem, the objective is to minimize the \\(\hinf\\) norm of:
-\begin{equation}
- N = \colvec{W\_P S \\ W\_U K S}
+\begin{equation} \label{eq:s\_ks\_mixed\_sensitivity}
+ N = \begin{bmatrix}
+ W\_P S \\\\\\
+ W\_U K S
+ \end{bmatrix}
\end{equation}
Here are some guidelines for the choice of the weights \\(W\_P\\) and \\(W\_U\\):
- \\(KS\\) is the transfer function from \\(r\\) to \\(u\\), so for a system which has been scaled, a reasonable initial choice for the input weight is \\(W\_U = I\\)
- \\(S\\) is the transfer function from \\(r\\) to \\(-e = r-y\\). A common choice for the performance weight is \\(W\_P = \text{diag}\\{w\_{p\_i}\\}\\) with:
- \\[ w\_{p\_i} = \frac{s/M\_i + \w\_{B\_i}^\*}{s + \w\_{B\_i}^\*A\_i}, \quad A\_i \ll 1 \\]
+
+ \begin{equation\*}
+ w\_{p\_i} = \frac{s/M\_i + \w\_{B\_i}^\*}{s + \w\_{B\_i}^\*A\_i}, \quad A\_i \ll 1
+ \end{equation\*}
+
Selecting \\(A\_i \ll 1\\) ensures approximate integral action.
Often we select \\(M\_i\\) about 2 for all outputs, whereas \\(\w\_{B\_i}^\*\\) may be different for each output.
For disturbance rejection, we may in some cases want a steeper slope for \\(w\_{P\_i}(s)\\) at low frequencies.
However it may be better to **consider the disturbances explicitly** by considering the \\(\hinf\\) norm of:
-\begin{equation}
+\begin{equation} \label{eq:mixed\_sensitivity\_4}
N = \begin{bmatrix}
W\_P S & W\_P S G\_d \\\\\\
W\_U K S & W\_U K S G\_d
@@ -916,7 +1037,7 @@ This can be achieved in several ways:
Whereas the poles \\(p\\) of a MIMO system \\(G\\) are essentially the poles of the elements of \\(G\\), the zeros are generally not the zeros of the elements of \\(G\\).
However, for square MIMO plants, the poles and zeros are in most cases the poles and zeros of \\(\det G(s)\\).
-
+
The zeros \\(z\\) of a MIMO system \\(G\\) are defined as the values \\(s=z\\) where \\(G(s)\\) loses rank.
@@ -938,12 +1059,12 @@ If it is not, the zero is called a "**pinned zero**".
#### Condition Number {#condition-number}
-
+
-We define the condition number of a matrix as the ratio between its maximum and minimum singular values:
+We define the **condition number** of a matrix as the ratio between its maximum and minimum singular values:
-\begin{equation}
+\begin{equation} \label{eq:condition\_number}
\gamma(G) \triangleq \maxsv(G)/\minsv(G)
\end{equation}
@@ -957,7 +1078,7 @@ It then follows that the condition number is large if the product of the largest
Note that the condition number depends strongly on scaling. One might consider minimizing the condition number over all possible scalings.
This results in the **minimized or optimal condition number** which is defined by:
-\begin{equation}
+\begin{equation} \label{eq:condition\_number\_optimal}
\gamma^\*(G) = \min\_{D\_1,D\_2} \gamma(D\_1 G D\_2)
\end{equation}
@@ -967,12 +1088,12 @@ However if the condition number is large (say, larger than 10), then this may in
#### Relative Gain Array (RGA) {#relative-gain-array--rga}
-
+
-The relative gain array (RGA) for a non-singular square matrix \\(G\\) is a square matrix defined as:
+The **relative gain array** (RGA) for a non-singular square matrix \\(G\\) is a square matrix defined as:
-\begin{equation}
+\begin{equation} \label{eq:relative\_gain\_array}
\text{RGA}(G) = \Lambda(G) \triangleq G \times G^{-T}
\end{equation}
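+
+The element-wise definition translates directly into code. A minimal sketch (ours), reusing the ill-conditioned gain matrix from the SVD example above; its large diagonal RGA elements (about 35) flag a plant that is difficult to control:
+
+```python
+import numpy as np
+
+def rga(G):
+    # RGA(G) = G x (G^-1)^T, an element-wise (Hadamard) product
+    return G * np.linalg.inv(G).T
+
+G = np.array([[87.8, -86.4],
+              [108.2, -109.6]])
+
+Lam = rga(G)
+print(Lam)                               # diagonal elements ~ 35.1
+print(Lam.sum(axis=0), Lam.sum(axis=1))  # rows and columns each sum to 1
+sv = np.linalg.svd(G, compute_uv=False)
+print(sv[0] / sv[-1])                    # condition number gamma(G) ~ 142
+```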
@@ -1012,15 +1133,16 @@ The **structured singular value** \\(\mu\\) is a tool for analyzing the effects
### General Control Problem Formulation {#general-control-problem-formulation}
-The general control problem formulation is represented in Fig. [fig:general_control_names](#fig:general_control_names).
+The general control problem formulation is represented in Fig. [6](#org2f011da).
-
+
{{< figure src="/ox-hugo/skogestad07_general_control_names.png" caption="Figure 6: General control configuration" >}}
-
+
+**Control Design Problem**:
Find a controller \\(K\\) which, based on the information in \\(v\\), generates a control signal \\(u\\) which counteracts the influence of \\(w\\) on \\(z\\), thereby minimizing the closed-loop norm from \\(w\\) to \\(z\\).
@@ -1031,20 +1153,26 @@ Find a controller \\(K\\) which based on the information in \\(v\\), generates a
We must first find a block diagram representation of the system and identify the signals \\(w\\), \\(z\\), \\(u\\) and \\(v\\).
Then we have to break all the "loops" entering and exiting the controller \\(K\\) to obtain \\(P\\) such that:
-\begin{equation}
- \colvec{z\\v} = P \colvec{w\\u}
+\begin{equation} \label{eq:generalized\_plant\_inputs\_outputs}
+ \begin{bmatrix}
+ z \\\\\\
+ v
+ \end{bmatrix} = P \begin{bmatrix}
+ w \\\\\\
+ u
+ \end{bmatrix}
\end{equation}
#### Controller Design: Including Weights in \\(P\\) {#controller-design-including-weights-in--p}
-In order to get a meaningful controller synthesis problem, for example in terms of the \\(\hinf\\) norms, we generally have to include the weights \\(W\_z\\) and \\(W\_w\\) in the generalized plant \\(P\\) (Fig. [fig:general_plant_weights](#fig:general_plant_weights)).
+In order to get a meaningful controller synthesis problem, for example in terms of the \\(\hinf\\) norms, we generally have to include the weights \\(W\_z\\) and \\(W\_w\\) in the generalized plant \\(P\\) (Fig. [7](#orgcf69c72)).
We consider:
- The weighted or normalized exogenous inputs \\(w\\) (where \\(\tilde{w} = W\_w w\\) consists of the "physical" signals entering the system)
- The weighted or normalized controlled outputs \\(z = W\_z \tilde{z}\\) (where \\(\tilde{z}\\) often consists of the control error \\(y-r\\) and the manipulated input \\(u\\))
-
+
{{< figure src="/ox-hugo/skogestad07_general_plant_weights.png" caption="Figure 7: General Weighted Plant" >}}
@@ -1055,11 +1183,17 @@ The weighted matrices are usually frequency dependent and typically selected suc
We often partition \\(P\\) as:
-\begin{equation}
- \begin{bmatrix} z \\ v \end{bmatrix} = \begin{bmatrix}
- P\_{11} & P\_{12} \\\\\\
- P\_{21} & P\_{22}
- \end{bmatrix} \begin{bmatrix} w \\ u \end{bmatrix}
+\begin{equation} \label{eq:general\_plant\_partitioning}
+ \begin{bmatrix}
+ z \\\\\\
+ v
+ \end{bmatrix} = \begin{bmatrix}
+ P\_{11} & P\_{12} \\\\\\
+ P\_{21} & P\_{22}
+ \end{bmatrix} \begin{bmatrix}
+ w \\\\\\
+ u
+ \end{bmatrix}
\end{equation}
\\(P\_{22}\\) has dimensions compatible with the controller.
@@ -1069,15 +1203,21 @@ We often partition \\(P\\) as:
In the previous representations, the controller \\(K\\) has a separate block. This is useful when **synthesizing** the controller. However, for **analysis** of closed-loop performance the controller is given, and we may absorb \\(K\\) into the interconnection structure and obtain the system \\(N\\).
-
+
-\begin{equation}
+**Closed-loop transfer function** \\(N\\):
+
+\begin{equation} \label{eq:N\_formula}
z = N w
\end{equation}
\\(N\\) is given by:
-\\[N = P\_{11} + P\_{12}K(I-P\_{22}K)^{-1}P\_{12} \triangleq F\_l(P, K) \\]
+
+\begin{equation\*}
+  N = P\_{11} + P\_{12}K(I-P\_{22}K)^{-1}P\_{21} \triangleq F\_l(P, K)
+\end{equation\*}
+
where \\(F\_l(P, K)\\) denotes a **lower linear fractional transformation** (LFT).
@@ -1085,9 +1225,9 @@ where \\(F\_l(P, K)\\) denotes a **lower linear fractional transformation** (LFT
#### A General Control Configuration Including Model Uncertainty {#a-general-control-configuration-including-model-uncertainty}
-The general control configuration may be extended to include model uncertainty as shown in Fig. [fig:general_config_model_uncertainty](#fig:general_config_model_uncertainty).
+The general control configuration may be extended to include model uncertainty as shown in Fig. [8](#orgd20b47f).
-
+
{{< figure src="/ox-hugo/skogestad07_general_control_Mdelta.png" caption="Figure 8: General control configuration for the case with model uncertainty" >}}
@@ -1097,7 +1237,7 @@ It is usually normalized in such a way that \\(\hnorm{\Delta} \leq 1\\).
### Conclusion {#conclusion}
-
+
The **Singular Value Decomposition** (SVD) of the plant transfer function matrix provides insight into **multivariable directionality**.
@@ -1115,7 +1255,7 @@ MIMO systems are often **more sensitive to uncertainty** than SISO systems.
## Elements of Linear System Theory {#elements-of-linear-system-theory}
-
+
### System Descriptions {#system-descriptions}
@@ -1130,7 +1270,10 @@ For linear systems there are several alternative system representations:
#### State-Space Representation {#state-space-representation}
A natural way to represent many physical systems is by nonlinear state-space models of the form
-\\[\dot{x} \triangleq \frac{dx}{dt} = f(x, u);\quad y = g(x, u)\\]
+
+\begin{equation\*}
+ \dot{x} \triangleq \frac{dx}{dt} = f(x, u);\quad y = g(x, u)
+\end{equation\*}
Linear state-space models may then be derived from the linearization of such models.
@@ -1141,20 +1284,34 @@ y(t) & = C x(t) + D u(t)
where \\(A\\), \\(B\\), \\(C\\) and \\(D\\) are real matrices.
-These equations may be rewritten as
-\\[\colvec{\dot{x}\\y} = \begin{bmatrix}
-A & B \\\\\\
-C & D
-\end{bmatrix} \colvec{x\\u}\\]
+These equations may be rewritten as
+
+\begin{equation\*}
+ \begin{bmatrix}
+ \dot{x} \\\\\\
+ y
+ \end{bmatrix} = \begin{bmatrix}
+ A & B \\\\\\
+ C & D
+ \end{bmatrix}
+ \begin{bmatrix}
+ x \\\\\\
+ u
+ \end{bmatrix}
+\end{equation\*}
+
which gives rise to the short-hand notation
-\\[G = \left[ \begin{array}{c|c}
-A & B \\ \hline
-C & D \\\\\\
-\end{array} \right]\\]
+
+\begin{equation}
+ G = \left[ \begin{array}{c|c}
+ A & B \cr \hline
+ C & D
+ \end{array} \right]
+\end{equation}
The state-space representation of a system is not unique; there exist realizations with the same input-output behavior, but with additional unobservable and/or uncontrollable states.
-
+
A minimal realization is a realization with the **fewest number of states** and consequently **no unobservable or uncontrollable modes**.
@@ -1167,40 +1324,62 @@ The state-space representation yields an internal description of the system whic
#### Impulse Response Representation {#impulse-response-representation}
The impulse response matrix is
-\\[g(t) = \begin{cases}
+
+\begin{equation\*}
+ g(t) = \begin{cases}
0 & t < 0 \\\\\\
C e^{At} B + D \delta(t) & t \geq 0
-\end{cases}\\]
+\end{cases}
+\end{equation\*}
+
The \\(ij\\)'th element of the impulse response matrix, \\(g\_{ij}(t)\\), represents the response \\(y\_i(t)\\) to an impulse \\(u\_j(t)=\delta(t)\\) for a systems with a zero initial state.
With initial state \\(x(0) = 0\\), the dynamic response to an arbitrary input \\(u(t)\\) is
-\\[y(t) = g(t)\*u(t) = \int\_0^t g(t-\tau)u(\tau)d\tau\\]
+
+\begin{equation\*}
+ y(t) = g(t)\*u(t) = \int\_0^t g(t-\tau)u(\tau)d\tau
+\end{equation\*}
#### Transfer Function Representation - Laplace Transforms {#transfer-function-representation-laplace-transforms}
The transfer function representation is unique and is defined as the Laplace transform of the impulse response.
-
+
-\\[ G(s) = \int\_0^\infty g(t)e^{-st}dt \\]
+**Laplace transform**:
+
+\begin{equation\*}
+ G(s) = \int\_0^\infty g(t)e^{-st}dt
+\end{equation\*}
We can also obtain the transfer function representation from the state-space representation by taking the Laplace transform of the state-space equations
-\\[ s x(s) = A x(s) + B u(s) \ \Rightarrow \ x(s) = (sI-A)^{-1} B u(s) \\]
-\\[ y(s) = C x(s) + D u(s) \ \Rightarrow \ y(s) = \underbrace{\left(C(sI-A)^{-1}B+D\right)}\_{G(s)}u(s) \\]
+
+\begin{equation\*}
+ s x(s) = A x(s) + B u(s) \ \Rightarrow \ x(s) = (sI-A)^{-1} B u(s)
+\end{equation\*}
+
+\begin{equation\*}
+ y(s) = C x(s) + D u(s) \ \Rightarrow \ y(s) = \underbrace{\left(C(sI-A)^{-1}B+D\right)}\_{G(s)}u(s)
+\end{equation\*}
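+
+Evaluating \\(G(j\w) = C(j\w I - A)^{-1}B + D\\) numerically takes one linear solve per frequency; a small sketch (ours) for an arbitrary lightly damped second-order system:
+
+```python
+import numpy as np
+
+def freq_resp(A, B, C, D, w):
+    # G(jw) = C (jw I - A)^-1 B + D on a frequency grid
+    n = A.shape[0]
+    return np.array([C @ np.linalg.solve(1j*wi*np.eye(n) - A, B) + D
+                     for wi in w])
+
+A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # wn = 2 rad/s, light damping
+B = np.array([[0.0], [1.0]])
+C = np.array([[1.0, 0.0]])
+D = np.array([[0.0]])
+
+w = np.logspace(-1, 2, 500)
+G = freq_resp(A, B, C, D, w)               # shape (len(w), 1, 1)
+print(abs(G[:, 0, 0]).max())               # resonant peak near w = 2 rad/s
+```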
Time delays and improper systems can be represented by Laplace transforms, but do not have a state-space representation.
#### Coprime Factorization {#coprime-factorization}
-
+
-\\[G(s) = N\_r(s) M\_r^{-1}(s)\\]
+**Right coprime factorization of \\(G\\)**:
+
+\begin{equation\*}
+ G(s) = N\_r(s) M\_r^{-1}(s)
+\end{equation\*}
+
where \\(N\_r(s)\\) and \\(M\_r(s)\\) are stable coprime transfer functions.
@@ -1219,7 +1398,10 @@ There are **many ways to check for state controllability and observability**, e.
The method which yields the most insight is probably to compute the input and output directions associated with each pole (mode).
For the case when \\(A\\) has distinct eigenvalues, we have the following dyadic expansion of the transfer function matrix from inputs to outputs
-\\[G(s) = \sum\_{i=1}^{n} \frac{C t\_i q\_i^H B}{s - \lambda\_i} + D = \sum\_{i=1}^{n} \frac{y\_{p\_i} u\_{p\_i}}{s - \lambda\_i} + D\\]
+
+\begin{equation\*}
+  G(s) = \sum\_{i=1}^{n} \frac{C t\_i q\_i^H B}{s - \lambda\_i} + D = \sum\_{i=1}^{n} \frac{y\_{p\_i} u\_{p\_i}^H}{s - \lambda\_i} + D
+\end{equation\*}
- The \\(i\\)'th **input pole vector** \\(u\_{p\_i} \triangleq q\_i^H B\\) is an indication of how much the \\(i\\)'th mode is excited (and thus may be "controlled") by the inputs.
- The \\(i\\)'th **output pole vector** \\(y\_{p\_i} \triangleq C t\_i\\) indicates how much the \\(i\\)'th mode is observed in the outputs.
@@ -1228,14 +1410,22 @@ For the case when \\(A\\) has distinct eigenvalues, we have the following dyadic
##### State Controllability {#state-controllability}
Let \\(\lambda\_i\\) be the \\(i^{\text{th}}\\) eigenvalue of \\(A\\), \\(q\_i\\) the corresponding left eigenvector (\\(q\_i^H A = \lambda\_i q\_i^H\\)), and \\(u\_{p\_i} = B^H q\_i\\) the \\(i^{\text{th}}\\) input pole vector. Then the system \\((A, B)\\) is state controllable if and only if
-\\[u\_{p\_i} \neq 0, \forall i\\]
+
+\begin{equation\*}
+ u\_{p\_i} \neq 0, \forall i
+\end{equation\*}
+
That is, if and only if all its input pole vectors are nonzero.
##### State Observability {#state-observability}
Let \\(\lambda\_i\\) be the \\(i^{\text{th}}\\) eigenvalue of \\(A\\), \\(t\_i\\) the corresponding right eigenvector (\\(A t\_i = \lambda\_i t\_i\\)), and \\(y\_{p\_i} = C t\_i\\) the \\(i^{\text{th}}\\) output pole vector. Then the system \\((A, C)\\) is state observable if and only if
-\\[y\_{p\_i} \neq 0, \forall i\\]
+
+\begin{equation\*}
+ y\_{p\_i} \neq 0, \forall i
+\end{equation\*}
+
That is, if and only if all its output pole vectors are nonzero.
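+
+Both tests reduce to one eigenvector computation; a minimal sketch (ours) using scipy's left and right eigenvectors on an arbitrary stable system:
+
+```python
+import numpy as np
+from scipy.linalg import eig
+
+A = np.array([[-1.0, 1.0], [0.0, -2.0]])
+B = np.array([[0.0], [1.0]])
+C = np.array([[1.0, 0.0]])
+
+lam, VL, VR = eig(A, left=True, right=True)   # columns: q_i (left), t_i (right)
+for i in range(len(lam)):
+    u_p = B.conj().T @ VL[:, i]   # input pole vector  u_pi = B^H q_i
+    y_p = C @ VR[:, i]            # output pole vector y_pi = C t_i
+    print(lam[i], np.linalg.norm(u_p), np.linalg.norm(y_p))
+# (A, B) is state controllable iff every u_pi is nonzero;
+# (A, C) is state observable  iff every y_pi is nonzero.
+```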
@@ -1247,14 +1437,15 @@ It follows that a state-space realization is minimal if and only if \\((A, B)\\)
### Stability {#stability}
-
+
+**Internal Stability**:
A system is (internally) stable if none of its components contain hidden unstable modes and the injection of bounded external signals at any place in the system results in bounded output signals measured anywhere in the system.
-
+
A system is (state) **stabilizable** if all unstable modes are state controllable.
@@ -1267,13 +1458,17 @@ A system with unstabilizable or undetectable modes is said to contain hidden uns
### Poles {#poles}
-
+
+**Multivariable Pole**:
The poles \\(p\_i\\) of a system with state-space description are the **eigenvalues** \\(\lambda\_i(A), i=1, \dotsc, n\\) of the matrix \\(A\\).
The **pole or characteristic polynomial** \\(\phi(s)\\) is defined as \\(\phi(s) \triangleq \det(sI-A) = \Pi\_{i=1}^n (s-p\_i)\\).
Thus the poles are the roots of the characteristic equation
-\\[\phi(s) \triangleq \det(sI-A) = 0\\]
+
+\begin{equation\*}
+ \phi(s) \triangleq \det(sI-A) = 0
+\end{equation\*}
@@ -1294,26 +1489,40 @@ The poles are essentially the sum of the poles in the elements of the transfer f
In multivariable systems, poles have **directions** associated with them. To quantify this, we use the **input and output pole vectors**.
-
+
-\\[ u\_{p\_i} = B^H q\_i \\]
+**Input pole vector**:
+
+\begin{equation\*}
+ u\_{p\_i} = B^H q\_i
+\end{equation\*}
+
With \\(q\_i\\) the left eigenvector of \\(A\\) (\\({q\_i}^T A = \lambda\_i {q\_i}^T\\)).
The input pole direction is \\(\frac{1}{\normtwo{u\_{p\_i}}} u\_{p\_i}\\)
-
+
-\\[ y\_{p\_i} = C t\_i \\]
+**Output pole vector**:
+
+\begin{equation\*}
+ y\_{p\_i} = C t\_i
+\end{equation\*}
+
With \\(t\_i\\) the right eigenvector of \\(A\\) (\\(A t\_i = \lambda\_i t\_i\\)).
The output pole direction is \\(\frac{1}{\normtwo{y\_{p\_i}}} y\_{p\_i}\\)
The pole directions may be defined in terms of the transfer function matrix by evaluating \\(G(s)\\) at the pole \\(p\_i\\) and considering the directions of the resulting complex matrix \\(G(p\_i)\\). The matrix is infinite in the direction of the pole, and we may write
-\\[ G(p\_i) u\_{p\_i} = \infty \cdot y\_{p\_i} \\]
+
+\begin{equation\*}
+ G(p\_i) u\_{p\_i} = \infty \cdot y\_{p\_i}
+\end{equation\*}
+
where \\(u\_{p\_i}\\) is the input pole direction and \\(y\_{p\_i}\\) is the output pole direction.
The pole directions may in principle be obtained from an SVD of \\(G(p\_i) = U\Sigma V^H\\).
@@ -1326,9 +1535,10 @@ The pole direction is usually very interesting because it gives information abou
Zeros of a system arise when competing effects, internal to the system, are such that the output is zero even when the inputs (and the states) are not themselves identically zero.
-
+
+**Multivariable Zero**:
\\(z\_i\\) is a zero of \\(G(s)\\) if the rank of \\(G(z\_i)\\) is less than the normal rank of \\(G(s)\\).
The zero polynomial is defined as \\(z(s) = \Pi\_{i=1}^{n\_z}(s-z\_i)\\) where \\(n\_z\\) is the number of finite zeros of \\(G(s)\\).
@@ -1338,10 +1548,19 @@ The zero polynomial is defined as \\(z(s) = \Pi\_{i=1}^{n\_z}(s-z\_i)\\) where \
#### Zeros from State-Space Realizations {#zeros-from-state-space-realizations}
The state-space equations of a system may be written as
-\\[P(s) \colvec{x\\u} = \colvec{0\\y}, \quad P(s) = \begin{bmatrix}
-sI-A & -B \\\\\\
-C & D \\\\\\
-\end{bmatrix}\\]
+
+\begin{equation\*}
+ P(s) \begin{bmatrix}
+ x \\\\\\
+ u
+ \end{bmatrix} = \begin{bmatrix}
+ 0 \\\\\\
+ y
+ \end{bmatrix}, \quad P(s) = \begin{bmatrix}
+ sI-A & -B \\\\\\
+ C & D
+ \end{bmatrix}
+\end{equation\*}
The zeros are then the values \\(s=z\\) for which the polynomial system matrix, \\(P(s)\\), loses rank, resulting in zero output for some non-zero input.
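+
+Numerically, the rank-loss condition becomes a generalized eigenvalue problem for the pencil built from \\(P(s)\\); a sketch (ours) that recovers the RHP-zero of the illustrative plant \\(G(s) = (s-1)/\left((s+1)(s+2)\right)\\):
+
+```python
+import numpy as np
+from scipy.linalg import eigvals
+
+def system_zeros(A, B, C, D):
+    # Zeros = finite generalized eigenvalues of ([A B; C D], blkdiag(I, 0)),
+    # i.e. the values s at which P(s) loses rank
+    n = A.shape[0]
+    M = np.block([[A, B], [C, D]])
+    N = np.zeros_like(M)
+    N[:n, :n] = np.eye(n)
+    z = eigvals(M, N)
+    return z[np.isfinite(z)]   # infinite eigenvalues are discarded
+
+A = np.array([[-3.0, -2.0], [1.0, 0.0]])   # controllable canonical form
+B = np.array([[1.0], [0.0]])
+C = np.array([[1.0, -1.0]])
+D = np.array([[0.0]])
+print(system_zeros(A, B, C, D))            # ~ [1.0]
+```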
@@ -1356,7 +1575,11 @@ The zeros are values of \\(s\\) for which \\(G(s)\\) looses rank. In general, th
#### Zero Directions {#zero-directions}
Let \\(G(s)\\) have a zero at \\(s=z\\). Then \\(G(s)\\) loses rank at \\(s=z\\), and there will exist non-zero vectors \\(u\_z\\) and \\(y\_z\\) such that
-\\[G(z) u\_z = 0 \cdot y\_z\\]
+
+\begin{equation\*}
+ G(z) u\_z = 0 \cdot y\_z
+\end{equation\*}
+
Here \\(u\_z\\) is defined as the **input zero direction** and \\(y\_z\\) is defined as the **output zero direction**.
From a practical point of view, \\(y\_z\\) is usually of more interest than \\(u\_z\\) because it gives information about **which combination of outputs may be difficult to control**.
@@ -1377,11 +1600,15 @@ Again, we may obtain input and output zero directions from an SVD of \\(G(s)\\):
- **Parallel**: \\(G+K\\). Poles are unchanged, zeros are moved (but note that physically a parallel interconnection requires an additional manipulated input)
- **Pinned zeros**. A zero is pinned to a subset of the outputs if \\(y\_z\\) has one or more elements equal to zero. Their effect cannot be moved freely to any output. Similarly, a zero is pinned to certain input if \\(u\_z\\) has one or more elements equal to zero.
-
+
+**Effect of feedback on poles and zeros**:
Consider a SISO negative feedback system with plant \\(G(s)=\frac{z(s)}{\phi(s)}\\) and a constant gain controller, \\(K(s)=k\\). The closed-loop response from reference \\(r\\) to output \\(y\\) is
-\\[T(s) = \frac{kG(s)}{1+kG(s)} = \frac{kz(s)}{\phi(s)+kz(s)} = k\frac{z\_{\text{cl}}(s)}{\phi\_{\text{cl}}(s)}\\]
+
+\begin{equation\*}
+ T(s) = \frac{kG(s)}{1+kG(s)} = \frac{kz(s)}{\phi(s)+kz(s)} = k\frac{z\_{\text{cl}}(s)}{\phi\_{\text{cl}}(s)}
+\end{equation\*}
We note that:
@@ -1401,29 +1628,34 @@ RHP-zeros therefore imply high gain instability.
### Internal Stability of Feedback Systems {#internal-stability-of-feedback-systems}
-
+
{{< figure src="/ox-hugo/skogestad07_classical_feedback_stability.png" caption="Figure 9: Block diagram used to check internal stability" >}}
-Assume that the components \\(G\\) and \\(K\\) contain no unstable hidden modes. Then the feedback system in Fig. [fig:block_diagram_for_stability](#fig:block_diagram_for_stability) is **internally stable** if and only if all four closed-loop transfer matrices are stable.
+Assume that the components \\(G\\) and \\(K\\) contain no unstable hidden modes. Then the feedback system in Fig. [9](#orgde8788d) is **internally stable** if and only if all four closed-loop transfer matrices are stable.
\begin{align\*}
&(I+KG)^{-1} & -K&(I+GK)^{-1} \\\\\\
G&(I+KG)^{-1} & &(I+GK)^{-1}
\end{align\*}
-Assume there are no RHP pole-zero cancellations between \\(G(s)\\) and \\(K(s)\\), the feedback system in Fig. [fig:block_diagram_for_stability](#fig:block_diagram_for_stability) is internally stable if and only if **one** of the four closed-loop transfer function matrices is stable.
+Assume there are no RHP pole-zero cancellations between \\(G(s)\\) and \\(K(s)\\). Then the feedback system in Fig. [9](#orgde8788d) is internally stable if and only if **one** of the four closed-loop transfer function matrices is stable.
### Stabilizing Controllers {#stabilizing-controllers}
The **Q-parameterization** is a parameterization that generates all controllers that yield internal stability of the closed-loop system.
-
+
+**Q-parameterization for stable plant**:
For stable plants, a parameterization of all stabilizing negative feedback controllers for the stable plant \\(G(s)\\) is given by
-\\[K = (I-QG)^{-1} Q = Q(I-GQ)^{-1}\\]
+
+\begin{equation\*}
+ K = (I-QG)^{-1} Q = Q(I-GQ)^{-1}
+\end{equation\*}
+
where the parameter \\(Q\\) is any stable transfer function matrix.
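+
+A quick numerical check (ours) for a stable SISO plant: forming \\(K = Q(I-GQ)^{-1}\\) gives back \\(S = I - GQ\\), so the closed loop is affine in the free stable parameter \\(Q\\):
+
+```python
+import numpy as np
+
+G = lambda s: 1 / (s + 1)**2     # stable plant (illustrative)
+Q = lambda s: 2 / (0.1*s + 1)    # any stable transfer function
+
+for wi in [0.1, 1.0, 10.0]:
+    s = 1j * wi
+    K = Q(s) / (1 - G(s) * Q(s))            # K = Q (I - GQ)^-1
+    S = 1 / (1 + G(s) * K)                  # resulting sensitivity
+    print(np.isclose(S, 1 - G(s) * Q(s)))   # True: S = I - GQ
+```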
@@ -1435,9 +1667,10 @@ The closed-loop transfer functions turn out to be affine in \\(Q\\), e.g. \\(S\\
### Stability Analysis in the Frequency Domain {#stability-analysis-in-the-frequency-domain}
-
+
+**Generalized (MIMO) Nyquist theorem**:
Let \\(P\_{ol}\\) denote the number of unstable poles in \\(L(s) = G(s)K(s)\\). The closed-loop system with loop transfer \\(L(s)\\) and negative feedback is stable if and only if the Nyquist plot of \\(\det(I+L(s))\\):
1. makes \\(P\_{ol}\\) anti-clockwise encirclements of the origin
@@ -1445,27 +1678,39 @@ Let \\(P\_{ol}\\) denote the number of unstable poles in \\(L(s) = G(s)K(s)\\).
-
+
-The spectral radius \\(\rho(L(j\w))\\) is defined as the maximum eigenvalue magnitude:
-\\[ \rho(L(j\w)) \triangleq \max\_{i} \abs{\lambda\_i (L(j\w))} \\]
+The **spectral radius** \\(\rho(L(j\w))\\) is defined as the maximum eigenvalue magnitude:
+
+\begin{equation\*}
+ \rho(L(j\w)) \triangleq \max\_{i} \abs{\lambda\_i (L(j\w))}
+\end{equation\*}
-
+
+**Spectral radius stability condition**:
Consider a system with a stable loop transfer function \\(L(s)\\). Then the closed-loop system is stable if
-\\[ \rho(L(j\w)) < 1 \quad \forall \w \\]
+
+\begin{equation\*}
+ \rho(L(j\w)) < 1 \quad \forall \w
+\end{equation\*}
-
+
+**Small Gain Theorem**:
Consider a system with a stable loop transfer function \\(L(s)\\). Then the closed-loop system is stable if
-\\[ \norm{L(j\w)} < 1 \quad \forall \w\\]
+
+\begin{equation\*}
+ \norm{L(j\w)} < 1 \quad \forall \w
+\end{equation\*}
+
Where \\(\norm{L}\\) denotes any matrix norm that satisfies the multiplicative property \\(\norm{AB} \leq \norm{A}\cdot\norm{B}\\)
@@ -1474,7 +1719,9 @@ The Small gain theorem for SISO system says that the system is stable if \\(\abs
This may be understood as follows: the signals which "return" in the same direction after "one turn around the loop" are magnified by the eigenvalues \\(\lambda\_i\\) (and the directions are the eigenvectors \\(x\_i\\)):
-\\[ L x\_i = \lambda\_i x\_i \\]
+\begin{equation\*}
+ L x\_i = \lambda\_i x\_i
+\end{equation\*}
So if all the eigenvalues \\(\lambda\_i\\) are less than 1 in magnitude, all signals become smaller after each round, and the closed-loop system is stable.
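+
+Applying the condition amounts to sweeping \\(\rho(L(j\w))\\) over frequency; a sketch (ours) with an arbitrary stable \\(2 \times 2\\) loop:
+
+```python
+import numpy as np
+
+def L(s):
+    # stable 2x2 loop transfer function (arbitrary illustrative choice)
+    return np.array([[0.5/(s + 1), 0.2/(s + 2)],
+                     [0.1/(s + 1), 0.4/(s + 3)]])
+
+w = np.logspace(-2, 3, 1000)
+rho = np.array([abs(np.linalg.eigvals(L(1j*wi))).max() for wi in w])
+print(rho.max())   # < 1 at every frequency: closed-loop stability is guaranteed
+```
+
+Note that the condition is only sufficient: \\(\rho(L(j\w)) \geq 1\\) at some frequency does not by itself imply instability.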
@@ -1484,7 +1731,7 @@ So if all the eigenvalues \\(\lambda\_i\\) are less than 1 in magnitude, all sig
#### \\(\htwo\\) norm {#htwo--norm}
-
+
Consider a strictly proper system \\(G(s)\\). The \\(\htwo\\) norm is:
@@ -1501,11 +1748,14 @@ The \\(\htwo\\) norm can have a stochastic interpretation where we measure the *
#### \\(\hinf\\) norm {#hinf--norm}
-
+
Consider a proper linear stable system \\(G(s)\\). The \\(\hinf\\) norm is the peak value of its maximum singular value:
-\\[ \hnorm{G(s)} \triangleq \max\_{\w} \maxsv(G(j\w)) \\]
+
+\begin{equation\*}
+ \hnorm{G(s)} \triangleq \max\_{\w} \maxsv(G(j\w))
+\end{equation\*}
@@ -1516,7 +1766,9 @@ The \\(\hinf\\) norm has several interpretations in the time and frequency domai
- it is the worst case steady-state gain for sinusoidal inputs at any frequency
- it is equal to the 2-norm in the time domain:
-\\[ \hnorm{G(s)} = \max\_{w(t) \neq 0} \frac{\normtwo{z(t)}}{\normtwo{w(t)}} = \max\_{\normtwo{w(t)} = 1} \normtwo{z(t)} \\]
+\begin{equation\*}
+ \hnorm{G(s)} = \max\_{w(t) \neq 0} \frac{\normtwo{z(t)}}{\normtwo{w(t)}} = \max\_{\normtwo{w(t)} = 1} \normtwo{z(t)}
+\end{equation\*}
- it has an interpretation as an induced norm in terms of the expected values of stochastic signals
@@ -1525,9 +1777,11 @@ The \\(\hinf\\) norm has several interpretations in the time and frequency domai
Minimizing the \\(\hinf\\) norm corresponds to minimizing the peak of the largest singular value, whereas minimizing the \\(\htwo\\) norm corresponds to minimizing the sum of the square of all the singular values over all frequencies.
-
+
+**Why is the \\(\hinf\\) norm so popular?**
+
The \\(\hinf\\) norm is **convenient for representing unstructured model uncertainty** and because if satisfies the multiplicative property \\(\hnorm{A(s)B(s)} \leq \hnorm{A(s)} \cdot \hnorm{B(s)}\\)
It follows that the \\(\hinf\\) norm is an **induced norm**.
@@ -1540,7 +1794,11 @@ This implies that we cannot, by evaluating the \\(\htwo\\) norm of the individua
#### Hankel norm {#hankel-norm}
The Hankel norm of a stable system \\(G(s)\\) is obtained when one applies an input \\(w(t)\\) up to \\(t=0\\) and measures the output \\(z(t)\\) for \\(t>0\\), and selects \\(w(t)\\) to maximize the ratio of the 2-norms:
-\\[ \left\\|G(s)\right\\|\_H \triangleq \max\_{w(t)} \frac{\sqrt{\int\_{0}^{\infty} \normtwo{z(\tau)}^2 d\tau }}{\sqrt{\int\_{-\infty}^0 \normtwo{w(\tau)}^2 d\tau}} \\]
+
+\begin{equation\*}
+ \left\\|G(s)\right\\|\_H \triangleq \max\_{w(t)} \frac{\sqrt{\int\_{0}^{\infty} \normtwo{z(\tau)}^2 d\tau }}{\sqrt{\int\_{-\infty}^0 \normtwo{w(\tau)}^2 d\tau}}
+\end{equation\*}
+
The Hankel norm is a kind of induced norm from past inputs to future outputs.
It may be shown that the Hankel norm is equal to \\(\left\\|G(s)\right\\|\_H = \sqrt{\rho(PQ)}\\) where \\(\rho\\) is the spectral radius, \\(P\\) is the controllability Gramian and \\(Q\\) the observability Gramian.
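+
+With the Gramian formula, the Hankel norm is a two-line computation in scipy; a sketch (ours) on an arbitrary stable model:
+
+```python
+import numpy as np
+from scipy.linalg import solve_continuous_lyapunov as lyap
+
+A = np.array([[-1.0, 0.0], [1.0, -2.0]])
+B = np.array([[1.0], [0.0]])
+C = np.array([[0.0, 1.0]])
+
+P = lyap(A, -B @ B.T)     # controllability Gramian: A P + P A^T = -B B^T
+Q = lyap(A.T, -C.T @ C)   # observability Gramian:  A^T Q + Q A = -C^T C
+print(np.sqrt(abs(np.linalg.eigvals(P @ Q)).max()))   # ||G||_H = sqrt(rho(PQ))
+```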
@@ -1548,15 +1806,15 @@ It may be shown that the Hankel norm is equal to \\(\left\\|G(s)\right\\|\_H = \
## Limitations on Performance in SISO Systems {#limitations-on-performance-in-siso-systems}
-
+
### Input-Output Controllability {#input-output-controllability}
-
+
-The input-output controllability is the **ability to achieve acceptable control performance**; that is, to keep the outputs (\\(y\\)) within specified bounds from their references (\\(r\\)), in spite of unknown but bounded variations, such as disturbances (\\(d\\)) and plant changes, using available inputs (\\(u\\)) and available measurements (\\(y\_m\\)).
+The **input-output controllability** is the **ability to achieve acceptable control performance**; that is, to keep the outputs (\\(y\\)) within specified bounds from their references (\\(r\\)), in spite of unknown but bounded variations, such as disturbances (\\(d\\)) and plant changes, using available inputs (\\(u\\)) and available measurements (\\(y\_m\\)).
@@ -1569,7 +1827,7 @@ It may be affected by changing the plant itself:
- adding extra sensors or actuators
- changing the configuration of the lower layers of control already in place
-
+
Input-output controllability analysis is applied to a plant to find out **what control performance can be expected**.
@@ -1609,12 +1867,12 @@ The required input must not exceed maximum physically allowed value (\\(\abs{u}
#### \\(S\\) Plus \\(T\\) is One {#s--plus--t--is-one}
-
+
From the definitions \\(S = (I + L)^{-1}\\) and \\(T = L(I+L)^{-1}\\) we derive
-\begin{equation}
+\begin{equation} \label{eq:S\_T\_identity}
S + T = I
\end{equation}
@@ -1630,14 +1888,16 @@ In general, a trade-off between sensitivity reduction and sensitivity increase m
1. \\(L(s)\\) has at least two more poles than zeros (first waterbed formula)
2. \\(L(s)\\) has a RHP-zero (second waterbed formula)
-
+
+**First Waterbed Formula**:
+
Suppose that the open-loop transfer function \\(L(s)\\) is rational and has at least two more poles than zeros.
Suppose also that \\(L(s)\\) has \\(N\_p\\) RHP-poles at locations \\(p\_i\\).
Then for closed-loop stability, the sensitivity function must satisfy the following **Bode Sensitivity Integral**:
-\begin{equation}
+\begin{equation} \label{eq:bode\_sensitivity\_integral}
\int\_0^\infty \ln\abs{S(j\w)} d\w = \pi \sum\_{i=1}^{N\_p} \text{Re}(p\_i)
\end{equation}
@@ -1645,7 +1905,7 @@ Then for closed-loop stability, the sensitivity function must satisfy the follow
For a **stable plant**, we must have:
-\begin{equation}
+\begin{equation} \label{eq:bode\_sensitivity\_integral\_stable}
\int\_0^\infty \ln\abs{S(j\w)} d\w = 0
\end{equation}
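+
+The integral is easy to verify numerically; a sketch (ours) for an arbitrary stable \\(L(s)\\) with two more poles than zeros, where the area of sensitivity reduction must exactly cancel the area of sensitivity increase:
+
+```python
+import numpy as np
+from scipy.integrate import quad
+
+L = lambda s: 4 / (s + 1)**2                  # stable, relative degree 2
+lnS = lambda w: np.log(abs(1 / (1 + L(1j*w))))
+
+area, _ = quad(lnS, 0, np.inf, limit=500)
+print(area)   # ~ 0: pushing |S| down somewhere pushes it up elsewhere
+```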
@@ -1657,17 +1917,29 @@ From the first waterbed formula, we expect that an increase in the bandwidth mus
Although this is true in most practical cases, it is not strictly implied by the formula.
This is because the increase in area may happen over an infinite frequency range.
-
+
+**Second Waterbed Formula**:
+
Suppose that \\(L(s)\\) has a single real **RHP-zero** \\(z\\) or a complex conjugate pair of zero \\(z=x\pm jy\\), and has \\(N\_p\\) RHP-poles \\(p\_i\\).
For closed-loop stability, the sensitivity function must satisfy
-\\[ \int\_0^\infty \ln\abs{S(j\w)} w(z, \w) d\w = \pi \ln \sum\_{i=1}^{N\_p} \abs{\frac{p\_i + z}{\bar{p\_i}-z}} \\]
+
+\begin{equation\*}
+  \int\_0^\infty \ln\abs{S(j\w)} w(z, \w) d\w = \pi \ln \prod\_{i=1}^{N\_p} \abs{\frac{p\_i + z}{\overline{p\_i}-z}}
+\end{equation\*}
where if the zero is real
-\\[ w(z, \w) = \frac{2z}{z^2 + \w^2} \\]
+
+\begin{equation\*}
+ w(z, \w) = \frac{2z}{z^2 + \w^2}
+\end{equation\*}
+
and if the zero pair is complex
-\\[ w(z, \w) = \frac{x}{x^2 + (y-\w)^2} + \frac{x}{x^2 + (y+\w)^2} \\]
+
+\begin{equation\*}
+ w(z, \w) = \frac{x}{x^2 + (y-\w)^2} + \frac{x}{x^2 + (y+\w)^2}
+\end{equation\*}
@@ -1675,7 +1947,10 @@ The second waterbed formula implies that the peak of \\(\abs{S}\\) is even highe
The weight \\(w(z, \w)\\) effectively "cuts off" the contribution from \\(\ln\abs{S}\\) to the integral at frequencies \\(\w > z\\).
So we have approximately:
-\\[ \int\_0^z \ln \abs{S(j\w)} d\w \approx 0 \\]
+
+\begin{equation\*}
+ \int\_0^z \ln \abs{S(j\w)} d\w \approx 0
+\end{equation\*}
This is similar to the Bode sensitivity integral, except that the trade-off is done over a limited frequency range.
Thus, a large peak for \\(\abs{S}\\) is unavoidable if we try to push down \\(\abs{S}\\) at low frequencies.
@@ -1683,18 +1958,20 @@ Thus, a large peak for \\(\abs{S}\\) is unavoidable if we try to push down \\(\a
#### Interpolation Constraints {#interpolation-constraints}
-
+
+**Interpolation constraints**:
+
If \\(p\\) is a **RHP-pole** of the loop transfer function \\(L(s)\\) then
-\begin{equation}
+\begin{equation} \label{eq:interpolation\_constaints\_p}
T(p) = 1, \quad S(p) = 0
\end{equation}
If \\(z\\) is a **RHP-zero** of the loop transfer function \\(L(s)\\) then
-\begin{equation}
+\begin{equation} \label{eq:interpolation\_constaints\_z}
T(z) = 0, \quad S(z) = 1
\end{equation}
@@ -1703,16 +1980,24 @@ If \\(z\\) is a **RHP-zero** of the loop transfer function \\(L(s)\\) then
#### Sensitivity Peaks {#sensitivity-peaks}
-
+
+**Maximum modulus principle**:
+
Suppose \\(f(s)\\) is stable, then the maximum value of \\(\abs{f(s)}\\) for \\(s\\) in the RHP is attained on the region's boundary (somewhere along the \\(j\w\\)-axis):
-\\[ \hnorm{f(j\w)} = \max\_{\omega} \abs{f(j\w)} \geq \abs{f(s\_0)} \quad \forall s\_0 \in \text{RHP} \\]
+
+\begin{equation\*}
+ \hnorm{f(j\w)} = \max\_{\omega} \abs{f(j\w)} \geq \abs{f(s\_0)} \quad \forall s\_0 \in \text{RHP}
+\end{equation\*}
We can derive the following bounds on the peaks of \\(S\\) and \\(T\\) from the maximum modulus principle:
-\\[ \hnorm{S} \geq \max\_{j} \prod\_{i=1}^{N\_p} \frac{\abs{z\_j + \bar{p\_i}}}{\abs{z\_j - p\_i}} \quad \hnorm{T} \geq \max\_{i} \prod\_{j=1}^{N\_z} \frac{\abs{\bar{z\_j} + p\_i}}{\abs{z\_j - p\_i}} \\]
+
+\begin{equation\*}
+ \hnorm{S} \geq \max\_{j} \prod\_{i=1}^{N\_p} \frac{\abs{z\_j + \overline{p\_i}}}{\abs{z\_j - p\_i}} \quad \hnorm{T} \geq \max\_{i} \prod\_{j=1}^{N\_z} \frac{\abs{\overline{z\_j} + p\_i}}{\abs{z\_j - p\_i}}
+\end{equation\*}
This shows that **large peaks** for \\(\abs{S}\\) and \\(\abs{T}\\) are unavoidable if we have a **RHP-zero and RHP-pole located close to each other**.
@@ -1721,11 +2006,16 @@ This shows that **large peaks** for \\(\abs{S}\\) and \\(\abs{T}\\) are unavoida
Consider a plant \\(G(s)\\) that contains a time delay \\(e^{-\theta s}\\). Even the "ideal" controller cannot remove this delay and the "ideal" sensitivity function is \\(S = 1 - T = 1 - e^{-\theta s}\\).
-
+
+**Upper bound on \\(\w\_c\\) for a time delay \\(\theta\\)**:
+
\\(S\\) crosses 1 at a frequency of about \\(1/\theta\\), so we expect to have an upper bound on \\(\w\_c\\):
-\\[ \w\_c < 1/\theta \\]
+
+\begin{equation\*}
+ \w\_c < 1/\theta
+\end{equation\*}
@@ -1755,18 +2045,30 @@ We require \\(\abs{S(j\w)} < 1/\abs{w\_P(j\w)} \quad \forall \w\\), so we must a
##### Performance at low frequencies {#performance-at-low-frequencies}
If we specify performance at low frequencies, we may use the following weight:
-\\[ w\_P = \frac{s/M + \w\_B^\*}{s + \w\_B^\* A} \\]
+
+\begin{equation\*}
+ w\_P = \frac{s/M + \w\_B^\*}{s + \w\_B^\* A}
+\end{equation\*}
+
Where \\(\w\_B^\*\\) is the minimum wanted bandwidth, \\(M\\) the maximum peak of \\(\abs{S}\\) and \\(A\\) the steady-state offset.
If we consider a **real RHP-zero**:
-\\[ \w\_B^\* < z \frac{1 - 1/M}{1 - A} \\]
+
+\begin{equation\*}
+ \w\_B^\* < z \frac{1 - 1/M}{1 - A}
+\end{equation\*}
+
For example, with \\(A=0\\) and \\(M=2\\), we must at least require \\(\w\_B^\* < 0.5z\\).
If we consider an **imaginary RHP-zero**:
-\\[ \w\_B^\* < \abs{z} \sqrt{1 - \frac{1}{M^2}} \\]
+
+\begin{equation\*}
+ \w\_B^\* < \abs{z} \sqrt{1 - \frac{1}{M^2}}
+\end{equation\*}
+
For example, with \\(M=2\\), we must at least require \\(\w\_B^\* < 0.86\abs{z}\\).
-
+
The presence of a RHP-zero imposes an **upper bound on the achievable bandwidth** when we want tight control at low frequencies
@@ -1777,13 +2079,20 @@ The presence of RHP-zero imposes an **upper bound on the achievable bandwidth**
##### Performance at high frequencies {#performance-at-high-frequencies}
We consider the case where we want **tight control at high frequencies**, by use of the performance weight:
-\\[ w\_P = \frac{1}{M} + \frac{s}{\w\_B^\*} \\]
+
+\begin{equation\*}
+ w\_P = \frac{1}{M} + \frac{s}{\w\_B^\*}
+\end{equation\*}
If we consider a **real RHP-zero**:
-\\[ \w\_B^\* > z \frac{1}{1-1/M} \\]
+
+\begin{equation\*}
+ \w\_B^\* > z \frac{1}{1-1/M}
+\end{equation\*}
+
For example, with \\(M=2\\) the requirement is \\(\w\_B^\* > 2z\\), so we can only achieve tight control at frequencies beyond the frequency of the RHP-zero.
-
+
The presence of a RHP-zero imposes a **lower bound on the achievable bandwidth** when we want tight control at high frequencies
@@ -1795,11 +2104,17 @@ The presence of RHP-zero imposes and **lower bound on the achievable bandwidth**
For unstable plants with a RHP-pole at \\(s = p\\), we **need** feedback for stabilization.
-
+
+**RHP-pole Limitation - Input Usage**:
+
In the presence of a RHP-pole at \\(s=p\\):
-\\[ \hnorm{KS} \geq \abs{G\_s(p)^{-1}} \\]
+
+\begin{equation\*}
+ \hnorm{KS} \geq \abs{G\_s(p)^{-1}}
+\end{equation\*}
+
where \\(G\_s\\) is the "stable version" of \\(G\\) with its RHP-poles mirrored into the LHP.
Since \\(u = -KS(G\_d d + n)\\) and because of the previous inequality, the presence of disturbances \\(d\\) and measurement noise \\(n\\) may require the input \\(u\\) to saturate.
@@ -1807,9 +2122,11 @@ When the inputs saturate, the system is practically open-loop and the **stabiliz
-
+
+**RHP-pole Limitation - Bandwidth**:
+
We need to react sufficiently fast.
For a real RHP-pole \\(p\\) we must require that the closed-loop bandwidth is larger than \\(2p\\).
The presence of **RHP-poles generally imposes a lower bound on the bandwidth**.
@@ -1835,23 +2152,37 @@ Consider a single disturbance \\(d\\) and a constant reference \\(r=0\\). Withou
We conclude that no control is needed if \\(\abs{G\_d(j\w)} < 1\\) at all frequencies. In that case, the plant is said to be "**self-regulated**".
If \\(\abs{G\_d(j\w)} > 1\\) at some frequency, then **we need control**. In case of feedback control, we have
-\\[ e(s) = S(s)G\_d(s)d(s) \\]
-The performance requirement \\(\abs{e(\w)} < 1\\) for any \\(\abs{d(\w)}\\) at any frequency is satisfied if and only if
-\\[ \abs{S G\_d(j\w)} < 1 \quad \forall\w \quad \Leftrightarrow \quad \abs{S(j\w)} < 1/\abs{G\_d(j\w)} \quad \forall\w \\]
-
+\begin{equation\*}
+ e(s) = S(s)G\_d(s)d(s)
+\end{equation\*}
+
+The performance requirement \\(\abs{e(\w)} < 1\\) for any \\(\abs{d(\w)}\\) at any frequency is satisfied if and only if
+
+\begin{equation\*}
+ \abs{S G\_d(j\w)} < 1 \quad \forall\w \quad \Leftrightarrow \quad \abs{S(j\w)} < 1/\abs{G\_d(j\w)} \quad \forall\w
+\end{equation\*}
+
+
If the plant has a RHP-zero at \\(s=z\\), then \\(S(z) = 1\\) and we have the following condition:
-\\[ \abs{G\_d(z)} < 1 \\]
+
+\begin{equation\*}
+ \abs{G\_d(z)} < 1
+\end{equation\*}
-
+
We also have that
-\\[ \w\_B > \w\_d \\]
+
+\begin{equation\*}
+ \w\_B > \w\_d
+\end{equation\*}
+
where \\(\w\_d\\) is defined by \\(\abs{G\_d(j\w\_d)} = 1\\).
@@ -1862,22 +2193,32 @@ The actual bandwidth requirement imposed by disturbances may be higher than \\(\
##### Command tracking {#command-tracking}
Assume that \\(d=0\\) and \\(r(t) = R\sin(\w t)\\). For acceptable control (\\(\abs{e} < 1\\)) we must have
-\\[ \abs{S(j\w)R}<1 \quad \forall\w\leq\w\_r \\]
+
+\begin{equation\*}
+ \abs{S(j\w)R}<1 \quad \forall\w\leq\w\_r
+\end{equation\*}
+
where \\(\w\_r\\) is the frequency up to which performance tracking is required.
### Limitation Imposed by Input Constraints {#limitation-imposed-by-input-constraints}
-
+
To achieve acceptable control (\\(\abs{e}<1\\)) and avoid input saturation (\\(\abs{u}<1\\)), we must require:
For **disturbance rejection**:
-\\[ \abs{G} > \abs{G\_d} - 1 \text{ at frequencies where } \abs{G\_d} > 1 \\]
+
+\begin{equation\*}
+ \abs{G} > \abs{G\_d} - 1 \text{ at frequencies where } \abs{G\_d} > 1
+\end{equation\*}
For **command tracking**:
-\\[ \abs{G} > \abs{R} - 1 \quad \forall \w \leq \w\_r \\]
+
+\begin{equation\*}
+ \abs{G} > \abs{R} - 1 \quad \forall \w \leq \w\_r
+\end{equation\*}
@@ -1886,19 +2227,22 @@ For **command tracking**:
Phase lag in the plant imposes no fundamental limitation, but it usually does limit achievable performance in practical designs.
-
+
Let define \\(\w\_u\\) as the frequency where the phase lag of the plant \\(G\\) is \\(\SI{-180}{\degree}\\)
-\begin{equation}
+\begin{equation} \label{eq:w\_u\_definition}
\angle G(j\w\_u) \triangleq \SI{-180}{\degree}
\end{equation}
With simple controllers such as a proportional controller or a PI-controller, the phase lag does limit the achievable bandwidth because of stability bounds:
-\\[ \w\_c < \w\_u \\]
+
+\begin{equation\*}
+ \w\_c < \w\_u
+\end{equation\*}
However, if the model is exactly known and there are no RHP-zeros or time delays, one may extend \\(\w\_c\\) to infinite frequency by placing zeros in the controller at the plant poles.
@@ -1909,9 +2253,17 @@ However, if the model is exactly known and there are no RHP-zeros or time delays
##### Uncertainty with feedforward control {#uncertainty-with-feedforward-control}
Perfect control is obtained using a controller which generates the control input
-\\[ u = G^{-1} r - G^{-1} G\_d d \\]
+
+\begin{equation\*}
+ u = G^{-1} r - G^{-1} G\_d d
+\end{equation\*}
+
When we apply this perfect controller to the actual plant \\(y' = G' u + G\_d' d\\), we find
-\\[ e' = y' - r = \underbrace{\left( \frac{G'}{G} - 1 \right)}\_{\text{rel. error in }G} r - \underbrace{\left( \frac{G'/G\_d'}{G/G\_d} - 1 \right)}\_{\text{rel. error in } G/G\_d} G\_d' d \\]
+
+\begin{equation\*}
+ e' = y' - r = \underbrace{\left( \frac{G'}{G} - 1 \right)}\_{\text{rel. error in }G} r - \underbrace{\left( \frac{G'/G\_d'}{G/G\_d} - 1 \right)}\_{\text{rel. error in } G/G\_d} G\_d' d
+\end{equation\*}
+
For feedforward control, **the model error propagates directly to the control error**.
If we want acceptable control (\\(\abs{e'}<1\\)), we must require that the model error in \\(G/G\_d\\) is less than \\(1/\abs{G\_d'}\\). This is very difficult to satisfy at frequencies where \\(\abs{G\_d'}\\) is much larger than 1.
@@ -1927,7 +2279,7 @@ With model error, we get \\(y' - r = S'(G\_d'd - r)\\) where \\(S' = (I + G'K)^{
We see that the **control error is only weakly affected by model error at frequencies where feedback is effective** (\\(T \approx 1\\)).
-
+
Uncertainty in the crossover frequency region can result in poor performance and even instability:
@@ -1940,16 +2292,18 @@ Uncertainty in the crossover frequency region can result in poor performance and
### Summary: Controllability Analysis with Feedback Control {#summary-controllability-analysis-with-feedback-control}
-
+
{{< figure src="/ox-hugo/skogestad07_classical_feedback_meas.png" caption="Figure 10: Feedback control system" >}}
-Consider the control system in Fig. [fig:classical_feedback_meas](#fig:classical_feedback_meas).
+Consider the control system in Fig. [10](#orgb84b4ee).
Here \\(G\_m(s)\\) denotes the measurement transfer function and we assume \\(G\_m(0) = 1\\) (perfect steady-state measurement).
-
+
+**Controllability analysis rules**:
+
1. **Speed of response to reject disturbances**. We approximately require \\(\w\_c > \w\_d\\). With feedback control we require \\(\abs{S(j\w)} \leq \abs{1/G\_d(j\w)} \quad \forall\w\\).
2. **Speed of response to track reference changes**. We require \\(\abs{S(j\w)} \leq 1/R\\) up to the frequency \\(\w\_r\\) where tracking is required.
3. **Input constraints arising from disturbances**. For acceptable control we require \\(\abs{G(j\w)} > \abs{G\_d(j\w)} - 1\\) at frequencies where \\(\abs{G\_d(j\w)} > 1\\).
@@ -1967,9 +2321,12 @@ In summary:
- rules 5, 6 and 7 tell us we must use low feedback gains in the frequency range where there are RHP-zeros or delays or where the plant has a lot of phase lag.
Sometimes, the disturbances are so large that we hit input saturation or the required bandwidth is not achievable. To avoid the latter problem, we must at least require that the effect of the disturbance is less than \\(1\\) at frequencies beyond the bandwidth:
-\\[ \abs{G\_d(j\w)} < 1 \quad \forall \w \geq \w\_c \\]
-
+\begin{equation\*}
+ \abs{G\_d(j\w)} < 1 \quad \forall \w \geq \w\_c
+\end{equation\*}
+
+
{{< figure src="/ox-hugo/skogestad07_margin_requirements.png" caption="Figure 11: Illustration of controllability requirements" >}}
@@ -1991,7 +2348,7 @@ The rules may be used to **determine whether or not a given plant is controllabl
## Limitations on Performance in MIMO Systems {#limitations-on-performance-in-mimo-systems}
-
+
### Introduction {#introduction}
@@ -2022,12 +2379,10 @@ For example, the output angle between a pole and a zero is \\(\phi = \cos^{-1} \
From the identity \\(S + T = I\\), we get:
-\begin{subequations}
- \begin{align}
- |1 - \maxsv(S)| \leq \maxsv(T) \leq 1 + \maxsv(S)\\\\\\
- |1 - \maxsv(T)| \leq \maxsv(S) \leq 1 + \maxsv(T)
- \end{align}
-\end{subequations}
+\begin{align}
+ |1 - \maxsv(S)| \leq \maxsv(T) \leq 1 + \maxsv(S)\\\\\\
+ |1 - \maxsv(T)| \leq \maxsv(S) \leq 1 + \maxsv(T)
+\end{align}
This shows that we cannot have \\(S\\) and \\(T\\) small simultaneously and that \\(\maxsv(S)\\) is large if and only if \\(\maxsv(T)\\) is large.
@@ -2046,19 +2401,29 @@ The waterbed effect can be generalized for MIMO systems:
The basis of many of the results in this chapter is the set of "**interpolation constraints**".
-
+
+**Interpolation Constraints - RHP-zero** \\(z\\):
+
If \\(G(s)\\) has a RHP-zero at \\(z\\) with output direction \\(y\_z\\), \\(T(s)\\) must have a RHP-zero at \\(z\\), i.e., \\(T(z)\\) has a zero gain in the direction of output direction \\(y\_z\\) of the zero, and we get
-\\[ y\_z^H T(z) = 0 ; \quad y\_z^H S(z) = y\_z^H \\]
+
+\begin{equation\*}
+ y\_z^H T(z) = 0 ; \quad y\_z^H S(z) = y\_z^H
+\end{equation\*}
-
+
+**Interpolation Constraints - RHP-pole \\(p\\)**:
+
If \\(G(s)\\) has a RHP-pole at \\(p\\) with output direction \\(y\_p\\), \\(S(s)\\) must have a RHP-zero at \\(p\\), i.e. \\(S(p)\\) has a zero gain in the input direction of the output direction \\(y\_p\\) of the RHP-pole, and we get
-\\[ S(p) y\_p = 0 ; \quad T(p) y\_p = y\_p \\]
+
+\begin{equation\*}
+ S(p) y\_p = 0 ; \quad T(p) y\_p = y\_p
+\end{equation\*}
@@ -2067,7 +2432,11 @@ If \\(G(s)\\) has a RHP-pole at \\(p\\) with output direction \\(y\_p\\), \\(S(s
Consider a plant \\(G(s)\\) with RHP-poles \\(p\_i\\) and RHP-zeros \\(z\_j\\).
The factorization of \\(G(s)\\) in terms of **Blaschke products** is:
-\\[ \tcmbox{G(s) = B\_p^{-1} G\_s(s), \quad G(s) = B\_z(s) G\_m(s)} \\]
+
+\begin{equation\*}
+ \tcmbox{G(s) = B\_p^{-1} G\_s(s), \quad G(s) = B\_z(s) G\_m(s)}
+\end{equation\*}
+
where \\(G\_s\\) is the stable and \\(G\_m\\) the minimum-phase version of \\(G\\).
\\(B\_p\\) and \\(B\_z\\) are stable all-pass transfer matrices (all singular values are 1 for \\(s=j\w\\)) containing the RHP-poles and RHP-zeros respectively.
@@ -2076,26 +2445,40 @@ where \\(G\_s\\) is the stable and \\(G\_m\\) the minimum-phase version of \\(G\
Suppose that \\(G(s)\\) has \\(N\_z\\) RHP-zeros \\(z\_j\\) with output directions \\(y\_{zj}\\), and \\(N\_p\\) RHP-poles \\(p\_i\\) with output direction \\(y\_{pi}\\).
We define the all-pass transfer matrices from the Blaschke factorization and compute the real constants:
-\\[ c\_{1j} = \normtwo{y\_{zj}^H B\_p(z\_j)} \geq 1; \quad c\_{2i} = \normtwo{B\_z^{-1}(p\_i) y\_{pi}} \geq 1 \\]
+
+\begin{equation\*}
+ c\_{1j} = \normtwo{y\_{zj}^H B\_p(z\_j)} \geq 1; \quad c\_{2i} = \normtwo{B\_z^{-1}(p\_i) y\_{pi}} \geq 1
+\end{equation\*}
Let \\(w\_P(s)\\) be a stable weight. Then, for closed-loop stability the weighted sensitivity function must satisfy for each RPH-zero \\(z\_j\\)
-\\[ \hnorm{w\_p S} \ge c\_{1j} \abs{w\_p(z\_j)} \\]
+
+\begin{equation\*}
+ \hnorm{w\_p S} \ge c\_{1j} \abs{w\_p(z\_j)}
+\end{equation\*}
Let \\(w\_T(s)\\) be a stable weight. Then, for closed-loop stability the weighted complementary sensitivity function must satisfy for each RPH-pole \\(p\_i\\)
-\\[ \hnorm{w\_T T} \ge c\_{2j} \abs{w\_T(p\_i)} \\]
-
+\begin{equation\*}
+  \hnorm{w\_T T} \ge c\_{2i} \abs{w\_T(p\_i)}
+\end{equation\*}
+
+
+**Lower bound on \\(\hnorm{S}\\) and \\(\hnorm{T}\\)**:
+
By selecting \\(w\_P(s) = 1\\) and \\(w\_T(s) = 1\\), we get
-\\[ \hnorm{S} \ge \max\_{\text{zeros } z\_j} c\_{1j}; \quad \hnorm{T} \ge \max\_{\text{poles } p\_i} c\_{2j} \\]
+
+\begin{equation\*}
+  \hnorm{S} \ge \max\_{\text{zeros } z\_j} c\_{1j}; \quad \hnorm{T} \ge \max\_{\text{poles } p\_i} c\_{2i}
+\end{equation\*}
### Functional Controllability {#functional-controllability}
-
+
An m-input l-output system \\(G(s)\\) is **functionally controllable** if the normal rank of \\(G(s)\\), denoted \\(r\\), is equal to the number of outputs (\\(r = l\\)), that is, if \\(G(s)\\) has full row rank.
@@ -2108,12 +2491,17 @@ A square MIMO system is uncontrollable if and only if \\(\det{G(s)} = 0,\ \foral
A plant is functionally uncontrollable if and only if \\(\sigma\_l(G(j\omega)) = 0,\ \forall\w\\).
\\(\sigma\_l(G(j\w))\\) is then a **measure of how close a plant is to being functionally uncontrollable**.
-
+
-If the plant is not functionally controllable (\\(r < l\\)), then there are \\(l-r\\) output directions, denoted \\(y\_0\\), which cannot be affected.
@@ -2123,7 +2511,10 @@ By analyzing the uncontrollable output directions, an engineer can decide on whe
### Limitation Imposed by Time Delays {#limitation-imposed-by-time-delays}
Time delays pose limitation also in MIMO systems. Let \\(\theta\_{ij}\\) denote the time delay in the \\(ij\\)'th element of \\(G(s)\\). Then a **lower bound on the time delay for output** \\(i\\) is given by the smallest delay in row \\(i\\) of \\(G(s)\\), that is
-\\[ \theta\_i^{\min} = \min\_j \theta\_{ij} \\]
+
+\begin{equation\*}
+ \theta\_i^{\min} = \min\_j \theta\_{ij}
+\end{equation\*}
For MIMO systems, we have the surprising result that an increased time delay may sometimes improve the achievable performance. The time delay may indeed increase the decoupling between the outputs.
@@ -2133,7 +2524,10 @@ For MIMO systems, we have the surprising result that an increase time delay may
The limitations imposed by RHP-zeros on MIMO systems are similar to those for SISO systems, although they only apply in particular directions.
The limitations of a RHP-zero located at \\(z\\) may be derived from the bound:
-\\[ \hnorm{w\_P S(s)} = \max\_{\w} \abs{w\_P(j\w)} \maxsv(S(j\w)) \ge \abs{w\_P(z)} \\]
+
+\begin{equation\*}
+ \hnorm{w\_P S(s)} = \max\_{\w} \abs{w\_P(j\w)} \maxsv(S(j\w)) \ge \abs{w\_P(z)}
+\end{equation\*}
All the results derived for SISO systems generalize if we consider the "worst" direction corresponding to the maximum singular value \\(\maxsv(S)\\).
For instance, if we choose \\(w\_P(s)\\) to require tight control at low frequencies, the bandwidth must satisfy \\(w\_B^\* < z/2\\).
@@ -2151,18 +2545,26 @@ For example, if we have a RHP-zero with \\(y\_z = [0.03,\ -0.04,\ 0.9,\ 0.43]^T\
For unstable plants, feedback is needed for stabilization. More precisely, the presence of an unstable pole \\(p\\) requires for internal stability \\(T(p) y\_p = y\_p\\) where \\(y\_p\\) is the output pole direction.
-
+
+**Input Usage Limitation**:
+
The transfer function \\(KS\\) from plant output to plant inputs must satisfy for any RHP-pole \\(p\\)
-\\[ \hnorm{KS} \ge \normtwo{u\_p^H G\_s(p)^{-1}} \\]
+
+\begin{equation\*}
+ \hnorm{KS} \ge \normtwo{u\_p^H G\_s(p)^{-1}}
+\end{equation\*}
+
where \\(u\_p\\) is the input pole direction, and \\(G\_s\\) is the "stable version" of \\(G\\) with its RHP-poles mirrored in the LHP.
-
+
+**Bandwidth Limitation**:
+
From the bound \\(\hnorm{w\_T(s) T(s)} \ge \abs{w\_T(p)}\\), we find that a RHP-pole \\(p\\) imposes restrictions on \\(\maxsv(T)\\) which are identical to those derived on \\(\abs{T}\\) for SISO systems.
Thus, we need to react sufficiently fast and we must require that \\(\maxsv(T(j\w))\\) is about 1 or larger up to the frequency \\(2 \abs{p}\\).
@@ -2173,8 +2575,14 @@ Thus, we need to react sufficiently fast and we must require that \\(\maxsv(T(j\
For a MIMO plant with a single RHP-zero \\(z\\) and a single RHP-pole \\(p\\), we derive
-\\[ \hnorm{S} \ge c \quad \hnorm{T} \ge c \\]
-\\[ \text{with } c = \sqrt{\sin^2 \phi + \frac{\abs{z + p}^2}{\abs{z-p}^2} \cos^2 \phi} \\]
+\begin{equation\*}
+ \hnorm{S} \ge c \quad \hnorm{T} \ge c
+\end{equation\*}
+
+\begin{equation\*}
+ \text{with } c = \sqrt{\sin^2 \phi + \frac{\abs{z + p}^2}{\abs{z-p}^2} \cos^2 \phi}
+\end{equation\*}
+
where \\(\phi = \cos^{-1} \abs{y\_z^H y\_p}\\) is the angle between the RHP-zero and the RHP-pole.
Thus the angle between the RHP-zero and the RHP-pole is of great importance; we usually want \\(\abs{y\_z^H y\_p}\\) close to zero so that they do not interact with each other.
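
A small sketch of this interaction bound (the zero direction reuses the example vector quoted above; the pole location, zero location and pole direction are made up):

```python
import numpy as np

z, p = 2.0, 3.0                              # made-up RHP-zero and RHP-pole
y_z = np.array([0.03, -0.04, 0.9, 0.43])     # output zero direction (example above)
y_p = np.array([0.0, 0.0, 1.0, 0.0])         # made-up output pole direction
y_z, y_p = y_z / np.linalg.norm(y_z), y_p / np.linalg.norm(y_p)

phi = np.arccos(np.clip(abs(np.vdot(y_z, y_p)), 0.0, 1.0))
c = np.sqrt(np.sin(phi) ** 2
            + abs(z + p) ** 2 / abs(z - p) ** 2 * np.cos(phi) ** 2)
print(phi, c)   # c lower-bounds both ||S||_inf and ||T||_inf
```
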
@@ -2185,13 +2593,13 @@ Thus the angle between the RHP-zero and the RHP-pole is of great importance, we
For SISO systems, we found that large and "fast" disturbances require tight control and a large bandwidth.
The same results apply for MIMO systems, but again the issue of **directions** is important.
-
+
Consider a scalar disturbance \\(d\\) and let the vector \\(g\_d\\) represent its effect on the outputs (\\(y = g\_d d\\)).
-The disturbance direction is defined as
+The **disturbance direction** is defined as
-\begin{equation}
+\begin{equation} \label{eq:dist\_direction}
y\_d = \frac{1}{\normtwo{g\_d}} g\_d
\end{equation}
@@ -2199,10 +2607,12 @@ For a plant with multiple disturbances, \\(g\_d\\) is a column of the matrix \\(
-
+
-\begin{equation}
+**Disturbance Condition Number**:
+
+\begin{equation} \label{eq:dist\_condition\_number}
\gamma\_d (G) = \maxsv(G) \maxsv(G^\dagger y\_d)
\end{equation}
@@ -2210,28 +2620,44 @@ where \\(G^\dagger\\) is the pseudo inverse of \\(G\\)
-The disturbance condition number provides a **measure of how a disturbance is aligned with the plant**. It may vary between 1 (for \\(y\_d = \bar{u}\\)) if the disturbance is in the "good" direction, and the condition number \\(\gamma(G) = \maxsv(G) \maxsv(G^\dagger)\\) (for \\(y\_d = \ubar{u}\\)) if it is in the "bad" direction.
+The disturbance condition number provides a **measure of how a disturbance is aligned with the plant**. It may vary between 1 (for \\(y\_d = \overline{u}\\)) if the disturbance is in the "good" direction, and the condition number \\(\gamma(G) = \maxsv(G) \maxsv(G^\dagger)\\) (for \\(y\_d = \underline{u}\\)) if it is in the "bad" direction.
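
A numpy sketch of the disturbance direction and the disturbance condition number at a single frequency (the plant and disturbance data are made up for illustration):

```python
import numpy as np

G = np.array([[87.8, -86.4],
              [108.2, -109.6]])        # made-up ill-conditioned plant
g_d = np.array([[1.0], [1.0]])         # made-up disturbance column

y_d = g_d / np.linalg.norm(g_d)        # disturbance direction
sv = np.linalg.svd(G, compute_uv=False)
gamma_d = sv[0] * np.linalg.svd(np.linalg.pinv(G) @ y_d,
                                compute_uv=False)[0]
print(gamma_d, sv[0] / sv[-1])         # 1 <= gamma_d <= gamma(G)
```
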
Let us assume \\(r=0\\) and that the system has been scaled. With feedback control, \\(e = S g\_d d\\), and the performance objective is
-\\[ \normtwo{S g\_d} = \maxsv(S g\_d) < 1 \ \forall\w \quad \Leftrightarrow \quad \hnorm{S g\_d} < 1 \\]
+
+\begin{equation\*}
+ \normtwo{S g\_d} = \maxsv(S g\_d) < 1 \ \forall\w \quad \Leftrightarrow \quad \hnorm{S g\_d} < 1
+\end{equation\*}
We derive bounds in terms of the singular values of \\(S\\):
-\\[ \minsv(S) \normtwo{g\_d} \le \normtwo{S g\_d} \le \maxsv(S) \normtwo{g\_d} \\]
-
+\begin{equation\*}
+ \minsv(S) \normtwo{g\_d} \le \normtwo{S g\_d} \le \maxsv(S) \normtwo{g\_d}
+\end{equation\*}
+
+
For acceptable performance **we must at least require that**
-\\[ \maxsv(I+L) > \normtwo{g\_d} \\]
+
+\begin{equation\*}
+ \maxsv(I+L) > \normtwo{g\_d}
+\end{equation\*}
And **we may require that**
-\\[ \minsv(I+L) > \normtwo{g\_d} \\]
+
+\begin{equation\*}
+ \minsv(I+L) > \normtwo{g\_d}
+\end{equation\*}
If \\(G(s)\\) has a **RHP-zero** at \\(s = z\\), then the **performance may be poor if the disturbance is aligned with the output direction of this zero**.
To satisfy \\(\hnorm{S g\_d} < 1\\), we must require
-\\[ \abs{y\_z^H g\_d(z)} < 1 \\]
+
+\begin{equation\*}
+ \abs{y\_z^H g\_d(z)} < 1
+\end{equation\*}
+
where \\(y\_z\\) is the direction of the RHP-zero.
@@ -2245,7 +2671,10 @@ We here consider the question: can the disturbances be rejected perfectly while
For a square plant, the input needed for perfect disturbance rejection is \\(u = -G^{-1} G\_d d\\).
For a single disturbance, as the worst-case disturbance is \\(\abs{d(\w)} = 1\\), we get that input saturation is avoided (\\(\\|u\\|\_{\text{max}} \le 1\\)) if all elements in the vector \\(G^{-1} g\_d\\) are less than 1 in magnitude:
-\\[ \\|G^{-1} g\_d\\|\_{\text{max}} < 1, \ \forall\w \\]
+
+\begin{equation\*}
+ \\|G^{-1} g\_d\\|\_{\text{max}} < 1, \ \forall\w
+\end{equation\*}
It is first recommended to **consider one disturbance at a time** by plotting as a function of frequency the individual elements of \\(G^{-1} G\_d\\). This will yield more information about which particular input is most likely to saturate and which disturbance is the most problematic.
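
A minimal sketch of this saturation check at one frequency (all numbers made up):

```python
import numpy as np

G = np.array([[5.0, 4.0],
              [3.0, 2.0]])     # made-up plant at one frequency
g_d = np.array([0.5, 0.5])     # made-up disturbance column

# Elements of G^{-1} g_d: each must be < 1 in magnitude to reject the
# worst-case unit disturbance without saturating the inputs
u = np.abs(np.linalg.solve(G, g_d))
print(u, np.all(u < 1))
```
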
@@ -2257,12 +2686,12 @@ We here consider the question: is it possible to achieve \\(\\|e\\|<1\\) while u
For SISO systems, we have to require \\(\abs{G} > \abs{g\_d} - 1\\) at frequencies where \\(\abs{g\_d} > 1\\).
We would like to generalize this result to MIMO systems.
-
+
Each singular value \\(\sigma\_i\\) of \\(G\\) must approximately satisfy:
-\begin{equation}
+\begin{equation} \label{eq:input\_acceptable\_control\_mimo}
\sigma\_i(G) \ge \abs{u\_i^H g\_d} - 1 \text{ where } \abs{u\_i^H g\_d} > 1
\end{equation}
@@ -2299,10 +2728,13 @@ The issues are the same for SISO and MIMO systems, however, with MIMO systems th
In practice, the difference between the true perturbed plant \\(G^\prime\\) and the plant model \\(G\\) is caused by a number of different sources.
We here focus on input and output uncertainty.
-In multiplicative form, the input and output uncertainties are given by (see Fig. [fig:input_output_uncertainty](#fig:input_output_uncertainty)):
-\\[ G^\prime = (I + E\_O) G (I + E\_I) \\]
+In multiplicative form, the input and output uncertainties are given by (see Fig. [12](#org7f11e2b)):
-
+\begin{equation\*}
+ G^\prime = (I + E\_O) G (I + E\_I)
+\end{equation\*}
+
+
{{< figure src="/ox-hugo/skogestad07_input_output_uncertainty.png" caption="Figure 12: Plant with multiplicative input and output uncertainty" >}}
@@ -2326,11 +2758,14 @@ However, for the actual plant \\(G^\prime\\) (with uncertainty), the actual cont
For output uncertainty, we have an identical result as for SISO systems: the worst case relative control error \\(\normtwo{e^\prime}/\normtwo{r}\\) is equal to the magnitude of the relative output uncertainty \\(\maxsv(E\_O)\\).
However, for input uncertainty, the sensitivity may be much larger because the elements in the matrix \\(G E\_I G^{-1}\\) can be much larger than the elements in \\(E\_I\\).
-
+
-For diagonal input uncertainty, the elements of \\(G E\_I G^{-1}\\) are directly related to the RGA:
-\\[ \left[ G E\_I G^{-1} \right]\_{ii} = \sum\_{j=1}^n \lambda\_{ij}(G) \epsilon\_j \\]
+For **diagonal input uncertainty**, the elements of \\(G E\_I G^{-1}\\) are directly related to the RGA:
+
+\begin{equation\*}
+ \left[ G E\_I G^{-1} \right]\_{ii} = \sum\_{j=1}^n \lambda\_{ij}(G) \epsilon\_j
+\end{equation\*}
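
This identity is easy to verify numerically; in the sketch below both the plant and the relative input errors \\(\epsilon\_j\\) are made up:

```python
import numpy as np

def rga(G):
    """Relative Gain Array: Lambda(G) = G x (G^{-1})^T element-wise."""
    return G * np.linalg.inv(G).T

G = np.array([[87.8, -86.4],
              [108.2, -109.6]])       # made-up plant with large RGA elements
eps = np.array([0.1, -0.1])           # made-up diagonal input errors
E_I = np.diag(eps)

lhs = np.diag(G @ E_I @ np.linalg.inv(G))   # diagonal of G E_I G^{-1}
rhs = rga(G) @ eps                          # sum_j lambda_ij * eps_j
print(np.allclose(lhs, rhs))                # True
```
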
@@ -2355,12 +2790,16 @@ With feedback control, **the effect of the uncertainty is reduced** by a factor
Consider a controller \\(K(s) = l(s)G^{-1}(s)\\) which results in a nominally decoupled response with sensitivity \\(S = s \cdot I\\) and complementary sensitivity \\(T = t \cdot I\\) where \\(t(s) = 1 - s(s)\\).
Suppose the plant has diagonal input uncertainty of relative magnitude \\(\abs{w\_I(j\w)}\\) in each input channel.
Then there exists a combination of input uncertainties such that at each frequency:
-\\[ \maxsv(S^\prime) \ge \maxsv(S) \left( 1 + \frac{\abs{w\_I t}}{1+\abs{w\_I t}} \\|\Lambda(G)\\|\_{i\infty} \right) \\]
+
+\begin{equation\*}
+ \maxsv(S^\prime) \ge \maxsv(S) \left( 1 + \frac{\abs{w\_I t}}{1+\abs{w\_I t}} \\|\Lambda(G)\\|\_{i\infty} \right)
+\end{equation\*}
+
where \\(\\| \Lambda(G) \\|\_{i\infty}\\) is the maximum row sum of the RGA and \\(\maxsv(S) = \abs{s}\\).
We can see that with an inverse based controller, the worst case sensitivity will be much larger than the nominal sensitivity at frequencies where the plant has large RGA elements.
-
+
These statements apply to the frequency range around crossover.
@@ -2379,7 +2818,7 @@ By "small", we mean smaller than 2 and by "large" we mean larger than 10.
Consider any complex matrix \\(G\\) and let \\(\lambda\_{ij}\\) denote the \\(ij\\)'th element in the RGA-matrix of \\(G\\).
-
+
The matrix \\(G\\) becomes singular if we make a relative change \\(-1/\lambda\_{ij}\\) in its \\(ij\\)'th elements, that is, if a single element in \\(G\\) is perturbed from \\(g\_{ij}\\) to \\(g\_{pij} = g\_{ij}(1-\frac{1}{\lambda\_{ij}})\\)
@@ -2438,7 +2877,7 @@ However, the situation is usually the opposite with model uncertainty because fo
## Uncertainty and Robustness for SISO Systems {#uncertainty-and-robustness-for-siso-systems}
-
+
### Introduction to Robustness {#introduction-to-robustness}
@@ -2483,16 +2922,16 @@ The various sources of model uncertainty may be grouped into two main classes:
1. **Parametric uncertainty**. The structure of the model is known, but some parameters are uncertain
2. **Neglected and unmodelled dynamics uncertainty**. The model is in error because of missing dynamics, usually at high frequencies
-
+
Parametric uncertainty will be quantified by assuming that **each uncertain parameter is bounded within some region** \\([\alpha\_{\min}, \alpha\_{\text{max}}]\\). That is, we have parameter sets of the form
-\begin{equation}
- \alpha\_p = \bar{\alpha}(1 + r\_\alpha \Delta); \quad r\_\alpha = \frac{\alpha\_{\text{max}} - \alpha\_{\min}}{\alpha\_{\text{max}} + \alpha\_{\min}}
+\begin{equation} \label{eq:parametric\_uncertainty}
+ \alpha\_p = \overline{\alpha}(1 + r\_\alpha \Delta); \quad r\_\alpha = \frac{\alpha\_{\text{max}} - \alpha\_{\min}}{\alpha\_{\text{max}} + \alpha\_{\min}}
\end{equation}
-where \\(\bar{\alpha}\\) is the mean parameter value, \\(r\_\alpha\\) is the relative uncertainty in the parameter, and \\(\Delta\\) is any real scalar satisfying \\(\abs{\Delta} \le 1\\).
+where \\(\overline{\alpha}\\) is the mean parameter value, \\(r\_\alpha\\) is the relative uncertainty in the parameter, and \\(\Delta\\) is any real scalar satisfying \\(\abs{\Delta} \le 1\\).
@@ -2503,16 +2942,20 @@ There is also a third class of uncertainty (which is a combination of the other
Here the uncertainty description represents one or several sources of parametric and/or unmodelled dynamics uncertainty combined into a single lumped perturbation of a chosen structure.
The frequency domain is also well suited for describing lumped uncertainty.
-
+
-In most cases, we prefer to lump the uncertainty into a multiplicative uncertainty of the form
-\\[ G\_p(s) = G(s)(1 + w\_I(s)\Delta\_I(s)); \quad \abs{\Delta\_I(j\w)} \le 1 \, \forall\w \\]
-which may be represented by the diagram in Fig. [fig:input_uncertainty_set](#fig:input_uncertainty_set).
+In most cases, we prefer to lump the uncertainty into a **multiplicative uncertainty** of the form
+
+\begin{equation\*}
+ G\_p(s) = G(s)(1 + w\_I(s)\Delta\_I(s)); \quad \abs{\Delta\_I(j\w)} \le 1 \, \forall\w
+\end{equation\*}
+
+which may be represented by the diagram in Fig. [13](#org6f74f68).
-
+
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set.png" caption="Figure 13: Plant with multiplicative uncertainty" >}}
@@ -2521,24 +2964,42 @@ which may be represented by the diagram in Fig. [fig:input_uncertainty_set]
Parametric uncertainty may also be represented in the \\(\hinf\\) framework if we restrict \\(\Delta\\) to be real.
-
+
-\\[ G\_p(s) = k\_p G\_0(s); \quad k\_{\min} \le k\_p \le k\_{\text{max}} \\]
+Gain uncertainty:
+
+\begin{equation\*}
+ G\_p(s) = k\_p G\_0(s); \quad k\_{\min} \le k\_p \le k\_{\text{max}}
+\end{equation\*}
+
where \\(k\_p\\) is an uncertain gain and \\(G\_0(s)\\) is a transfer function with no uncertainty.
-By writing \\(k\_p = \bar{k}(1 + r\_k \Delta)\\) where \\(r\_k\\) is the relative magnitude of the gain uncertainty and \\(\bar{k}\\) is the average gain, be may write
-\\[ G\_p = \underbrace{\bar{k}G\_0(s)}\_{G(s)} (1 + r\_k \Delta), \quad \abs{\Delta} \le 1 \\]
+By writing \\(k\_p = \overline{k}(1 + r\_k \Delta)\\) where \\(r\_k\\) is the relative magnitude of the gain uncertainty and \\(\overline{k}\\) is the average gain, we may write
+
+\begin{equation\*}
+ G\_p = \underbrace{\overline{k}G\_0(s)}\_{G(s)} (1 + r\_k \Delta), \quad \abs{\Delta} \le 1
+\end{equation\*}
+
where \\(\Delta\\) is a real scalar and \\(G(s)\\) is the nominal plant.
-
+
-\\[ G\_p(s) = \frac{1}{\tau\_p s + 1}G\_0(s); \quad \tau\_{\min} \le \tau\_p \le \tau\_{\text{max}} \\]
-By writing \\(\tau\_p = \bar{\tau}(1 + r\_\tau \Delta)\\), with \\(\abs{\Delta} \le 1\\), the model set can be rewritten as
-\\[ G\_p(s) = \frac{G\_0}{1+\bar{\tau} s + r\_\tau \bar{\tau} s \Delta} = \underbrace{\frac{G\_0}{1+\bar{\tau}s}}\_{G(s)} \frac{1}{1 + w\_{iI}(s) \Delta} \\]
-with \\(\displaystyle w\_{iI}(s) = \frac{r\_\tau \bar{\tau} s}{1 + \bar{\tau} s}\\).
+Time constant uncertainty:
+
+\begin{equation\*}
+ G\_p(s) = \frac{1}{\tau\_p s + 1}G\_0(s); \quad \tau\_{\min} \le \tau\_p \le \tau\_{\text{max}}
+\end{equation\*}
+
+By writing \\(\tau\_p = \overline{\tau}(1 + r\_\tau \Delta)\\), with \\(\abs{\Delta} \le 1\\), the model set can be rewritten as
+
+\begin{equation\*}
+ G\_p(s) = \frac{G\_0}{1+\overline{\tau} s + r\_\tau \overline{\tau} s \Delta} = \underbrace{\frac{G\_0}{1+\overline{\tau}s}}\_{G(s)} \frac{1}{1 + w\_{iI}(s) \Delta}
+\end{equation\*}
+
+with \\(\displaystyle w\_{iI}(s) = \frac{r\_\tau \overline{\tau} s}{1 + \overline{\tau} s}\\).
@@ -2559,46 +3020,49 @@ This is of course conservative as it introduces possible plants that are not pre
#### Uncertain Regions {#uncertain-regions}
-To illustrate how parametric uncertainty translate into frequency domain uncertainty, consider in Fig. [fig:uncertainty_region](#fig:uncertainty_region) the Nyquist plots generated by the following set of plants
-\\[ G\_p(s) = \frac{k}{\tau s + 1} e^{-\theta s}, \quad 2 \le k, \theta, \tau \le 3 \\]
+To illustrate how parametric uncertainty translates into frequency domain uncertainty, consider in Fig. [14](#orgd590978) the Nyquist plots generated by the following set of plants
+
+\begin{equation\*}
+ G\_p(s) = \frac{k}{\tau s + 1} e^{-\theta s}, \quad 2 \le k, \theta, \tau \le 3
+\end{equation\*}
- **Step 1**. At each frequency, a region of complex numbers \\(G\_p(j\w)\\) is generated by varying the parameters.
In general, these uncertain regions have complicated shapes and complex mathematical descriptions
- **Step 2**. We therefore approximate such complex regions as discs, resulting in a **complex additive uncertainty description**
-
+
{{< figure src="/ox-hugo/skogestad07_uncertainty_region.png" caption="Figure 14: Uncertainty regions of the Nyquist plot at given frequencies" >}}
#### Representing Uncertainty Regions by Complex Perturbations {#representing-uncertainty-regions-by-complex-perturbations}
-
+
-The disc-shaped regions may be generated by additive complex norm-bounded perturbations around a nominal plant \\(G\\)
+The disc-shaped regions may be generated by **additive** complex norm-bounded perturbations around a nominal plant \\(G\\)
-\begin{equation}
+\begin{equation} \label{eq:additive\_uncertainty}
\begin{aligned}
\Pi\_A: \ G\_p(s) &= G(s) + w\_A(s) \Delta\_A(s) \\\\\\
& \text{with }\abs{\Delta\_A(j\w)} \le 1 \, \forall\w
\end{aligned}
\end{equation}
-At each frequency, all possible \\(\Delta(j\w)\\) "generates" a disc-shaped region with radius 1 centered at 0, so \\(G(j\w) + w\_A(j\w)\Delta\_A(j\w)\\) generates at each frequency a disc-shapes region of radius \\(\abs{w\_A(j\w)}\\) centered at \\(G(j\w)\\) as shown in Fig. [fig:uncertainty_disc_generated](#fig:uncertainty_disc_generated).
+At each frequency, all possible \\(\Delta(j\w)\\) "generate" a disc-shaped region with radius 1 centered at 0, so \\(G(j\w) + w\_A(j\w)\Delta\_A(j\w)\\) generates at each frequency a disc-shaped region of radius \\(\abs{w\_A(j\w)}\\) centered at \\(G(j\w)\\) as shown in Fig. [15](#org446d9c7).
-
+
{{< figure src="/ox-hugo/skogestad07_uncertainty_disc_generated.png" caption="Figure 15: Disc-shaped uncertainty regions generated by complex additive uncertainty" >}}
-
+
The disc-shaped region may alternatively be represented by a **multiplicative uncertainty**
-\begin{equation}
+\begin{equation} \label{eq:multiplicative\_uncertainty}
\begin{aligned}
\Pi\_I: \ G\_p(s) &= G(s)(1 + w\_I(s)\Delta\_I(s)); \\\\\\
& \text{with }\abs{\Delta\_I(j\w)} \le 1 \, \forall\w
@@ -2608,7 +3072,10 @@ The disc-shaped region may alternatively be represented by a **multiplicative un
And we see that for SISO systems, additive and multiplicative uncertainty are equivalent if at each frequency:
-\\[ \abs{w\_I(j\w)} = \abs{w\_A(j\w)}/\abs{G(j\w)} \\]
+
+\begin{equation\*}
+ \abs{w\_I(j\w)} = \abs{w\_A(j\w)}/\abs{G(j\w)}
+\end{equation\*}
However, **multiplicative weights are often preferred because their numerical value is more informative**. At frequencies where \\(\abs{w\_I(j\w)} > 1\\) the uncertainty exceeds \\(\SI{100}{\percent}\\) and the Nyquist curve may pass through the origin.
Then, at these frequencies, we do not know the phase of the plant, and we allow for zeros crossing from the left to the right-half plane. **Tight control is then not possible** at frequencies where \\(\abs{w\_I(j\w)} \ge 1\\).
@@ -2623,30 +3090,48 @@ This complex disc-shaped uncertainty description may be generated as follows:
1. Select a nominal \\(G(s)\\)
2. **Additive uncertainty**.
At each frequency, find the smallest radius \\(l\_A(\w)\\) which includes all the possible plants \\(\Pi\\)
- \\[ l\_A(\w) = \max\_{G\_p\in\Pi} \abs{G\_p(j\w) - G(j\w)} \\]
- If we want a rational transfer function weight, \\(w\_A(s)\\), then it must be chosen to cover the set, so
- \\[ \abs{w\_A(j\w)} \ge l\_A(\w) \quad \forall\w \\]
- Usually \\(w\_A(s)\\) is of low order to simplify the controller design.
-3. **Multiplicative uncertainty**.
- This is often the preferred uncertainty form, and we have
- \\[ l\_I(\w) = \max\_{G\_p\in\Pi} \abs{\frac{G\_p(j\w) - G(j\w)}{G(j\w)}} \\]
- and with a rational weight \\(\abs{w\_I(j\w)} \ge l\_I(\w), \, \forall\w\\)
+ \begin{equation\*}
+ l\_A(\w) = \max\_{G\_p \in \Pi} \abs{G\_p(j\w) - G(j\w)}
-
+ \end{equation\*}
+ If we want a rational transfer function weight, \\(w\_A(s)\\), then it must be chosen to cover the set, so
+
+ \begin{equation\*}
+ \abs{w\_A(j\w)} \ge l\_A(\w) \quad \forall\w
+\end{equation\*}
+
+   Usually \\(w\_A(s)\\) is of low order to simplify the controller design.
+
+3. **Multiplicative uncertainty**.
+ This is often the preferred uncertainty form, and we have
+ \begin{equation\*}
+ l\_I(\w) = \max\_{G\_p \in \Pi} \abs{\frac{G\_p(j\w) - G(j\w)}{G(j\w)}}
+ \end{equation\*}
+ and with a rational weight \\(\abs{w\_I(j\w)} \ge l\_I(\w), \, \forall\w\\)
+
+
We want to represent the following set using multiplicative uncertainty with a rational weight \\(w\_I(s)\\)
-\\[ \Pi: \quad G\_p(s) = \frac{k}{\tau s + 1} e^{-\theta s}, \quad 2 \le k, \theta, \tau \le 3 \\]
+
+\begin{equation\*}
+ \Pi: \quad G\_p(s) = \frac{k}{\tau s + 1} e^{-\theta s}, \quad 2 \le k, \theta, \tau \le 3
+\end{equation\*}
+
To simplify subsequent controller design, we select a delay-free nominal model
-\\[ G(s) = \frac{\bar{k}}{\bar{\tau} s + 1} = \frac{2.5}{2.5 s + 1} \\]
+
+\begin{equation\*}
+ G(s) = \frac{\overline{k}}{\overline{\tau} s + 1} = \frac{2.5}{2.5 s + 1}
+\end{equation\*}
To obtain \\(l\_I(\w)\\), we consider three values (2, 2.5 and 3) for each of the three parameters (\\(k, \theta, \tau\\)).
-The corresponding relative errors \\(\abs{\frac{G\_p-G}{G}}\\) are shown as functions of frequency for the \\(3^3 = 27\\) resulting \\(G\_p\\) (Fig. [fig:uncertainty_weight](#fig:uncertainty_weight)).
+The corresponding relative errors \\(\abs{\frac{G\_p-G}{G}}\\) are shown as functions of frequency for the \\(3^3 = 27\\) resulting \\(G\_p\\) (Fig. [16](#orgc98eb6b)).
To derive \\(w\_I(s)\\), we then try to find a simple weight so that \\(\abs{w\_I(j\w)}\\) lies above all the dotted lines.
-
+
{{< figure src="/ox-hugo/skogestad07_uncertainty_weight.png" caption="Figure 16: Relative error for 27 combinations of \\(k,\ \tau\\) and \\(\theta\\). Solid and dashed lines: two weights \\(\abs{w\_I}\\)" >}}
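
A numerical version of this construction; the frequency grid and the candidate first-order weight below are assumptions for illustration, not the weights from the book:

```python
import numpy as np
from itertools import product

omegas = np.logspace(-2, 2, 400)
s = 1j * omegas
G = 2.5 / (2.5 * s + 1)                        # delay-free nominal model

# l_I(w): largest relative error over the 3^3 = 27 plants
l_I = np.zeros_like(omegas)
for k, theta, tau in product([2.0, 2.5, 3.0], repeat=3):
    Gp = k * np.exp(-theta * s) / (tau * s + 1)
    l_I = np.maximum(l_I, np.abs((Gp - G) / G))

# Candidate weight (assumed coefficients): 0.2 at DC, 2.5 at high frequency
w_I = (4 * s + 0.2) / (1.6 * s + 1)
print(np.max(l_I / np.abs(w_I)))   # > 1 where the weight must still be raised
```
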
@@ -2657,7 +3142,7 @@ With parametric uncertainty represented as complex perturbations, there are thre
1. **A simplified model**, for instance a low-order, delay-free model.
It usually yields the largest uncertainty region, but the model is simple and this facilitates controller design in later stages.
-2. **A model of mean parameter values**, \\(G(s) = \bar{G}(s)\\).
+2. **A model of mean parameter values**, \\(G(s) = \overline{G}(s)\\).
It is probably the most straightforward choice.
3. **The central plant obtained from a Nyquist plot**.
It yields the smallest region, but in this case a significant effort may be required to obtain the nominal model which is usually not a rational transfer function.
@@ -2672,30 +3157,43 @@ If we use a parametric uncertainty description, based on multiple real perturbat
We saw that one advantage of the frequency domain uncertainty description is that one can choose to work with a simple nominal model, and **represent neglected dynamics as uncertainty**.
Consider a set of plants
-\\[ G\_p(s) = G\_0(s) f(s) \\]
+
+\begin{equation\*}
+ G\_p(s) = G\_0(s) f(s)
+\end{equation\*}
+
where \\(G\_0(s)\\) is fixed.
We want to neglect the term \\(f(s) \in \Pi\_f\\), and represent \\(G\_p\\) by multiplicative uncertainty with a nominal model \\(G = G\_0\\).
The magnitude of the relative uncertainty caused by neglecting the dynamics in \\(f(s)\\) is
-\\[ l\_I(\w) = \max\_{G\_p} \abs{\frac{G\_p - G}{G}} = \max\_{f(s) \in \Pi\_f} \abs{f(j\w) - 1} \\]
+
+\begin{equation\*}
+ l\_I(\w) = \max\_{G\_p} \abs{\frac{G\_p - G}{G}} = \max\_{f(s) \in \Pi\_f} \abs{f(j\w) - 1}
+\end{equation\*}
##### Neglected delay {#neglected-delay}
-Let \\(f(s) = e^{-\theta\_p s}\\), where \\(0 \le \theta\_p \le \theta\_{\text{max}}\\). We want to represent \\(G\_p(s) = G\_0(s)e^{-\theta\_p s}\\) by a delay-free plant \\(G\_0(s)\\) and multiplicative uncertainty. Let first consider the maximum delay, for which the relative error \\(\abs{1 - e^{-j \w \theta\_{\text{max}}}}\\) is shown as a function of frequency (Fig. [fig:neglected_time_delay](#fig:neglected_time_delay)). If we consider all \\(\theta \in [0, \theta\_{\text{max}}]\\) then:
-\\[ l\_I(\w) = \begin{cases} \abs{1 - e^{-j\w\theta\_{\text{max}}}} & \w < \pi/\theta\_{\text{max}} \\ 2 & \w \ge \pi/\theta\_{\text{max}} \end{cases} \\]
+Let \\(f(s) = e^{-\theta\_p s}\\), where \\(0 \le \theta\_p \le \theta\_{\text{max}}\\). We want to represent \\(G\_p(s) = G\_0(s)e^{-\theta\_p s}\\) by a delay-free plant \\(G\_0(s)\\) and multiplicative uncertainty. Let us first consider the maximum delay, for which the relative error \\(\abs{1 - e^{-j \w \theta\_{\text{max}}}}\\) is shown as a function of frequency (Fig. [17](#orgb7f2291)). If we consider all \\(\theta \in [0, \theta\_{\text{max}}]\\) then:
-
+\begin{equation\*}
+ l\_I(\w) = \begin{cases} \abs{1 - e^{-j\w\theta\_{\text{max}}}} & \w < \pi/\theta\_{\text{max}} \\\\\\ 2 & \w \ge \pi/\theta\_{\text{max}} \end{cases}
+\end{equation\*}
+
+
{{< figure src="/ox-hugo/skogestad07_neglected_time_delay.png" caption="Figure 17: Neglected time delay" >}}
##### Neglected lag {#neglected-lag}
-Let \\(f(s) = 1/(\tau\_p s + 1)\\), where \\(0 \le \tau\_p \le \tau\_{\text{max}}\\). In this case the resulting \\(l\_I(\w)\\) (Fig. [fig:neglected_first_order_lag](#fig:neglected_first_order_lag)) can be represented by a rational transfer function with \\(\abs{w\_I(j\w)} = l\_I(\w)\\) where
-\\[ w\_I(s) = \frac{\tau\_{\text{max}} s}{\tau\_{\text{max}} s + 1} \\]
+Let \\(f(s) = 1/(\tau\_p s + 1)\\), where \\(0 \le \tau\_p \le \tau\_{\text{max}}\\). In this case the resulting \\(l\_I(\w)\\) (Fig. [18](#orgbfc6539)) can be represented by a rational transfer function with \\(\abs{w\_I(j\w)} = l\_I(\w)\\) where
-
+\begin{equation\*}
+ w\_I(s) = \frac{\tau\_{\text{max}} s}{\tau\_{\text{max}} s + 1}
+\end{equation\*}
+
+
{{< figure src="/ox-hugo/skogestad07_neglected_first_order_lag.png" caption="Figure 18: Neglected first-order lag uncertainty" >}}
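
For this first-order lag, the rational weight matches \\(l\_I(\w)\\) exactly, which a short sketch can confirm (with a made-up \\(\tau\_{\text{max}}\\)):

```python
import numpy as np

tau_max = 2.0                                   # made-up maximum lag
omegas = np.logspace(-2, 2, 100)
s = 1j * omegas

# l_I(w): worst relative error over tau_p in [0, tau_max]
taus = np.linspace(0.0, tau_max, 201)
l_I = np.max(np.abs(1.0 / (np.outer(taus, s) + 1) - 1.0), axis=0)

w_I = tau_max * s / (tau_max * s + 1)           # weight from the text
print(np.max(np.abs(l_I - np.abs(w_I))))        # ~0
```
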
@@ -2703,16 +3201,27 @@ Let \\(f(s) = 1/(\tau\_p s + 1)\\), where \\(0 \le \tau\_p \le \tau\_{\text{max}
##### Multiplicative weight for gain and delay uncertainty {#multiplicative-weight-for-gain-and-delay-uncertainty}
Consider the following set of plants
-\\[ G\_p = k\_p e^{-\theta\_p s} G\_0(s); \quad k\_p \in [k\_{\min}, k\_{\text{max}}], \ \theta\_p \in [\theta\_{\min}, \theta\_{\text{max}}] \\]
-which we want to represent by multiplicative uncertainty and a delay-free nominal model \\(G(s) = \bar{k} G\_0(s)\\).
+
+\begin{equation\*}
+ G\_p = k\_p e^{-\theta\_p s} G\_0(s); \quad k\_p \in [k\_{\min}, k\_{\text{max}}], \ \theta\_p \in [\theta\_{\min}, \theta\_{\text{max}}]
+\end{equation\*}
+
+which we want to represent by multiplicative uncertainty and a delay-free nominal model \\(G(s) = \overline{k} G\_0(s)\\).
There is an exact expression; its first-order approximation is
-\\[ w\_I(s) = \frac{(1+\frac{r\_k}{2})\theta\_{\text{max}} s + r\_k}{\frac{\theta\_{\text{max}}}{2} s + 1} \\]
-However, as shown in Fig. [fig:lag_delay_uncertainty](#fig:lag_delay_uncertainty), the weight \\(w\_I\\) is optimistic, especially around frequencies \\(1/\theta\_{\text{max}}\\). To make sure that \\(\abs{w\_I(j\w)} \le l\_I(\w)\\), we can apply a correction factor:
-\\[ w\_I^\prime(s) = w\_I \cdot \frac{(\frac{\theta\_{\text{max}}}{2.363})^2 s^2 + 2\cdot 0.838 \cdot \frac{\theta\_{\text{max}}}{2.363} s + 1}{(\frac{\theta\_{\text{max}}}{2.363})^2 s^2 + 2\cdot 0.685 \cdot \frac{\theta\_{\text{max}}}{2.363} s + 1} \\]
+
+\begin{equation\*}
+ w\_I(s) = \frac{(1+\frac{r\_k}{2})\theta\_{\text{max}} s + r\_k}{\frac{\theta\_{\text{max}}}{2} s + 1}
+\end{equation\*}
+
+However, as shown in Fig. [19](#org06b467d), the weight \\(w\_I\\) is optimistic, especially around frequencies \\(1/\theta\_{\text{max}}\\). To make sure that \\(\abs{w\_I(j\w)} \le l\_I(\w)\\), we can apply a correction factor:
+
+\begin{equation\*}
+ w\_I^\prime(s) = w\_I \cdot \frac{(\frac{\theta\_{\text{max}}}{2.363})^2 s^2 + 2\cdot 0.838 \cdot \frac{\theta\_{\text{max}}}{2.363} s + 1}{(\frac{\theta\_{\text{max}}}{2.363})^2 s^2 + 2\cdot 0.685 \cdot \frac{\theta\_{\text{max}}}{2.363} s + 1}
+\end{equation\*}
It is suggested to start with the simple weight and then, if needed, to try the higher-order weight.
-
+
{{< figure src="/ox-hugo/skogestad07_lag_delay_uncertainty.png" caption="Figure 19: Multiplicative weight for gain and delay uncertainty" >}}
@@ -2722,12 +3231,12 @@ It is suggested to start with the simple weight and then if needed, to try the h
The most important reason for using frequency domain (\\(\hinf\\)) uncertainty description and complex perturbations, is the **incorporation of unmodelled dynamics**.
Unmodelled dynamics, while being close to neglected dynamics, also include unknown dynamics of unknown or even infinite order.
-
+
To represent unmodelled dynamics, we usually use a simple **multiplicative weight** of the form
-\begin{equation}
+\begin{equation} \label{eq:multiplicative\_simple\_weight}
w\_I(s) = \frac{\tau s + r\_0}{(\tau/r\_\infty) s + 1}
\end{equation}
@@ -2741,9 +3250,13 @@ where \\(r\_0\\) is the relative uncertainty at steady-state, \\(1/\tau\\) is th
#### RS with Multiplicative Uncertainty {#rs-with-multiplicative-uncertainty}
-We want to determine the stability of the uncertain feedback system in Fig. [fig:feedback_multiplicative_uncertainty](#fig:feedback_multiplicative_uncertainty) where there is multiplicative uncertainty of magnitude \\(\abs{w\_I(j\w)}\\).
+We want to determine the stability of the uncertain feedback system in Fig. [20](#org8ede43d) where there is multiplicative uncertainty of magnitude \\(\abs{w\_I(j\w)}\\).
The loop transfer function becomes
-\\[ L\_P = G\_p K = G K (1 + w\_I \Delta\_I) = L + w\_I L \Delta\_I \\]
+
+\begin{equation\*}
+ L\_P = G\_p K = G K (1 + w\_I \Delta\_I) = L + w\_I L \Delta\_I
+\end{equation\*}
+
We assume (by design) the stability of the nominal closed-loop system (with \\(\Delta\_I = 0\\)).
We use the Nyquist stability condition to test for robust stability of the closed loop system:
@@ -2752,14 +3265,14 @@ We use the Nyquist stability condition to test for robust stability of the close
&\Longleftrightarrow \quad L\_p \ \text{should not encircle -1}, \ \forall L\_p
\end{align\*}
-
+
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set_feedback.png" caption="Figure 20: Feedback system with multiplicative uncertainty" >}}
##### Graphical derivation of RS-condition {#graphical-derivation-of-rs-condition}
-Consider the Nyquist plot of \\(L\_p\\) as shown in Fig. [fig:nyquist_uncertainty](#fig:nyquist_uncertainty). \\(\abs{1+L}\\) is the distance from the point \\(-1\\) to the center of the disc representing \\(L\_p\\) and \\(\abs{w\_I L}\\) is the radius of the disc.
+Consider the Nyquist plot of \\(L\_p\\) as shown in Fig. [21](#orgd4e7f02). \\(\abs{1+L}\\) is the distance from the point \\(-1\\) to the center of the disc representing \\(L\_p\\) and \\(\abs{w\_I L}\\) is the radius of the disc.
Encirclements are avoided if none of the discs cover \\(-1\\), and we get:
\begin{align\*}
@@ -2768,16 +3281,16 @@ Encirclements are avoided if none of the discs cover \\(-1\\), and we get:
&\Leftrightarrow \quad \abs{w\_I T} < 1, \ \forall\w \\\\\\
\end{align\*}
-
+
{{< figure src="/ox-hugo/skogestad07_nyquist_uncertainty.png" caption="Figure 21: Nyquist plot of \\(L\_p\\) for robust stability" >}}
-
+
The requirement of robust stability for the case with multiplicative uncertainty gives an **upper bound on the complementary sensitivity**
-\begin{equation}
+\begin{equation} \label{eq:robust\_stability\_siso}
\text{RS} \quad \Leftrightarrow \quad \abs{T} < 1/\abs{w\_I}, \ \forall\w
\end{equation}
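
A direct frequency-gridded test of this RS-condition (the loop transfer function and uncertainty weight below are made-up examples):

```python
import numpy as np

omegas = np.logspace(-2, 2, 500)
s = 1j * omegas

L = 2.0 / (s * (0.5 * s + 1))            # made-up loop transfer function
T = L / (1 + L)                          # complementary sensitivity
w_I = (0.1 + 0.5 * s) / (0.25 * s + 1)   # made-up multiplicative weight

print(np.all(np.abs(w_I * T) < 1),       # RS <=> |w_I T| < 1 at all w
      np.max(np.abs(w_I * T)))
```
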
@@ -2797,16 +3310,23 @@ Since \\(L\_p\\) is assumed stable, and the nominal closed-loop is stable, the n
\end{align\*}
At each frequency, the last condition is most easily violated when the complex number \\(\Delta\_I(j\w)\\) is selected with \\(\abs{\Delta(j\w)} = 1\\) and with phase such that \\(1+L\\) and \\(w\_I L \Delta\_I\\) point in opposite directions. Thus
-\\[ \text{RS} \ \Leftrightarrow \ \abs{1 + L} - \abs{w\_I L} > 0, \ \forall\w \ \Leftrightarrow \ \abs{w\_I T} < 1, \ \forall\w \\]
+
+\begin{equation\*}
+ \text{RS} \ \Leftrightarrow \ \abs{1 + L} - \abs{w\_I L} > 0, \ \forall\w \ \Leftrightarrow \ \abs{w\_I T} < 1, \ \forall\w
+\end{equation\*}
+
And we obtain the same condition as before.
#### RS with Inverse Multiplicative Uncertainty {#rs-with-inverse-multiplicative-uncertainty}
-We will derive a corresponding RS-condition for feedback system with inverse multiplicative uncertainty (Fig. [fig:inverse_uncertainty_set](#fig:inverse_uncertainty_set)) in which
-\\[ G\_p = G(1 + w\_{iI}(s) \Delta\_{iI})^{-1} \\]
+We will derive a corresponding RS-condition for a feedback system with inverse multiplicative uncertainty (Fig. [22](#org7fd6c1d)) in which
-
+\begin{equation\*}
+ G\_p = G(1 + w\_{iI}(s) \Delta\_{iI})^{-1}
+\end{equation\*}
+
+
{{< figure src="/ox-hugo/skogestad07_inverse_uncertainty_set.png" caption="Figure 22: Feedback system with inverse multiplicative uncertainty" >}}
@@ -2820,12 +3340,12 @@ We assume that \\(L\_p\\) and the nominal closed-loop systems are stable. Robust
&\Leftrightarrow \quad \abs{w\_{iI} S} < 1, \ \forall\w\\\\\\
\end{align\*}
-
+
The requirement for robust stability for the case with inverse multiplicative uncertainty gives an **upper bound on the sensitivity**
-\begin{equation}
+\begin{equation} \label{eq:robust\_stability\_inverse\_uncertainty\_siso}
\text{RS} \quad \Leftrightarrow \quad \abs{S} < 1/\abs{w\_{iI}}, \ \forall\w
\end{equation}
@@ -2841,12 +3361,12 @@ The reason is that the uncertainty represents pole uncertainty, and at frequenci
#### SISO Nominal Performance {#siso-nominal-performance}
-
+
-The condition for nominal performance when considering performance in terms of the **weighted sensitivity** function is
+The condition for **nominal performance** when considering performance in terms of the **weighted sensitivity** function is
-\begin{equation}
+\begin{equation} \label{eq:siso\_nominal\_performance}
\begin{aligned}
\text{NP} &\Leftrightarrow \abs{w\_P S} < 1 \ \forall\omega \\\\\\
&\Leftrightarrow \abs{w\_P} < \abs{1 + L} \ \forall\omega
@@ -2856,21 +3376,21 @@ The condition for nominal performance when considering performance in terms of t
Now \\(\abs{1 + L}\\) represents at each frequency the distance of \\(L(j\omega)\\) from the point \\(-1\\) in the Nyquist plot, so \\(L(j\omega)\\) must be at least a distance of \\(\abs{w\_P(j\omega)}\\) from \\(-1\\).
-This is illustrated graphically in Fig. [fig:nyquist_performance_condition](#fig:nyquist_performance_condition).
+This is illustrated graphically in Fig. [23](#orge41ae9d).
-
+
{{< figure src="/ox-hugo/skogestad07_nyquist_performance_condition.png" caption="Figure 23: Nyquist plot illustration of the nominal performance condition \\(\abs{w\_P} < \abs{1 + L}\\)" >}}
#### Robust Performance {#robust-performance}
-
+
For robust performance, we require the performance condition to be satisfied for **all** possible plants:
-\begin{equation}
+\begin{equation} \label{eq:robust\_performance\_definition\_siso}
\begin{aligned}
\text{RP}\ &\overset{\text{def}}{\Leftrightarrow}\ \abs{w\_P S} < 1 \quad \forall S\_p, \forall \omega\\\\\\
\ &\Leftrightarrow\ \abs{w\_P} < \abs{1 + L\_p} \quad \forall L\_p, \forall \omega
@@ -2879,18 +3399,21 @@ For robust performance, we require the performance condition to be satisfied for
-Let's consider the case of multiplicative uncertainty as shown on Fig. [fig:input_uncertainty_set_feedback_weight_bis](#fig:input_uncertainty_set_feedback_weight_bis).
+Let's consider the case of multiplicative uncertainty as shown on Fig. [24](#org83b2671).
Robust performance corresponds to requiring \\(\abs{\hat{y}/d}<1\ \forall \Delta\_I\\) and the set of possible loop transfer functions is
-\\[ L\_p = G\_p K = L (1 + w\_I \Delta\_I) = L + w\_I L \Delta\_I \\]
-
+\begin{equation\*}
+ L\_p = G\_p K = L (1 + w\_I \Delta\_I) = L + w\_I L \Delta\_I
+\end{equation\*}
+
+
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set_feedback_weight_bis.png" caption="Figure 24: Diagram for robust performance with multiplicative uncertainty" >}}
##### Graphical derivation of RP-condition {#graphical-derivation-of-rp-condition}
-As illustrated on Fig. [fig:nyquist_performance_condition](#fig:nyquist_performance_condition), we must required that all possible \\(L\_p(j\omega)\\) stay outside a disk of radius \\(\abs{w\_P(j\omega)}\\) centered on \\(-1\\).
+As illustrated on Fig. [23](#orge41ae9d), we must require that all possible \\(L\_p(j\omega)\\) stay outside a disk of radius \\(\abs{w\_P(j\omega)}\\) centered on \\(-1\\).
Since \\(L\_p\\) at each frequency stays within a disk of radius \\(|w\_I(j\omega) L(j\omega)|\\) centered on \\(L(j\omega)\\), the condition for RP becomes:
\begin{align\*}
@@ -2898,12 +3421,12 @@ Since \\(L\_p\\) at each frequency stays within a disk of radius \\(|w\_I(j\omeg
&\Leftrightarrow\ \abs{w\_P(1 + L)^{-1}} + \abs{w\_I L(1 + L)^{-1}} < 1 \quad \forall\omega\\\\\\
\end{align\*}
-
+
-Finally, we obtain the following condition for Robust Performance:
+Finally, we obtain the following condition for **Robust Performance**:
-\begin{equation}
+\begin{equation} \label{eq:robust\_performance\_condition\_siso}
\text{RP} \ \Leftrightarrow\ \max\_{\omega} \left(\abs{w\_P S} + \abs{w\_I T} \right) < 1
\end{equation}
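
The RP-condition is equally easy to evaluate on a frequency grid (same made-up loop as in the RS sketch, plus an assumed performance weight):

```python
import numpy as np

omegas = np.logspace(-2, 2, 500)
s = 1j * omegas

L = 2.0 / (s * (0.5 * s + 1))            # made-up loop transfer function
S, T = 1 / (1 + L), L / (1 + L)
w_P = (s / 2 + 0.3) / s                  # assumed performance weight
w_I = (0.1 + 0.5 * s) / (0.25 * s + 1)   # assumed uncertainty weight

rp = np.max(np.abs(w_P * S) + np.abs(w_I * T))
print(rp < 1, rp)                        # RP <=> rp < 1
```
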
@@ -2913,23 +3436,38 @@ Finally, we obtain the following condition for Robust Performance:
##### Algebraic derivation of RP-condition {#algebraic-derivation-of-rp-condition}
RP is satisfied if the worst-case weighted sensitivity at each frequency is less than \\(1\\):
-\\[ \text{RP} \ \Leftrightarrow\ \max\_{S\_p} \abs{w\_P S\_p} < 1, \quad \forall\omega \\]
+
+\begin{equation\*}
+ \text{RP} \ \Leftrightarrow\ \max\_{S\_p} \abs{w\_P S\_p} < 1, \quad \forall\omega
+\end{equation\*}
The perturbed sensitivity \\(S\_p\\) is
-\\[ S\_p = \frac{1}{1 + L\_p} = \frac{1}{1 + L + w\_I L \Delta\_I} \\]
+
+\begin{equation\*}
+ S\_p = \frac{1}{1 + L\_p} = \frac{1}{1 + L + w\_I L \Delta\_I}
+\end{equation\*}
+
Thus:
-\\[ \max\_{S\_p} \abs{w\_P S\_p} = \frac{\abs{w\_P}}{\abs{1 + L} - \abs{w\_I L}} = \frac{\abs{w\_P S}}{1 - \abs{w\_I T}} \\]
+
+\begin{equation\*}
+ \max\_{S\_p} \abs{w\_P S\_p} = \frac{\abs{w\_P}}{\abs{1 + L} - \abs{w\_I L}} = \frac{\abs{w\_P S}}{1 - \abs{w\_I T}}
+\end{equation\*}
+
And we obtain the same RP-condition as the graphically derived one.
##### Remarks on RP-condition {#remarks-on-rp-condition}
1. The RP-condition for this problem is closely approximated by the mixed sensitivity \\(\hinf\\) condition:
- \\[ \tcmbox{\hnorm{\begin{matrix}w\_P S \\ w\_I T\end{matrix}} = \max\_{\omega} \sqrt{\abs{w\_P S}^2 + \abs{w\_I T}^2} <1} \\]
- This condition is within a factor at most \\(\sqrt{2}\\) of the true RP-condition.
- This means that **for SISO systems, we can closely approximate the RP-condition in terms of an \\(\hinf\\) problem**, so there is no need to make use of the structured singular value.
- However, we will see that the situation can be very different for MIMO systems.
-2. The RP-condition can be used to derive bounds on the loop shape \\(\abs{L}\\):
+ \begin{equation\*}
+ \tcmbox{\hnorm{\begin{matrix}w\_P S \\\\\\ w\_I T\end{matrix}} = \max\_{\omega} \sqrt{\abs{w\_P S}^2 + \abs{w\_I T}^2} < 1}
+ \end{equation\*}
+ This condition is within a factor at most \\(\sqrt{2}\\) of the true RP-condition.
+ This means that **for SISO systems, we can closely approximate the RP-condition in terms of an \\(\hinf\\) problem**, so there is no need to make use of the structured singular value.
+ However, we will see that the situation can be very different for MIMO systems.
+
+2. The RP-condition can be used to derive bounds on the loop shape \\(\abs{L}\\):
\begin{align\*}
\abs{L} &> \frac{1 + \abs{w\_P}}{1 - \abs{w\_I}}, \text{ at frequencies where } \abs{w\_I} < 1\\\\\\
@@ -2942,16 +3480,14 @@ And we obtain the same RP-condition as the graphically derived one.
Consider a SISO system with multiplicative input uncertainty, and assume that the closed-loop is nominally stable (NS).
The conditions for nominal performance (NP), robust stability (RS) and robust performance (RP) are summarized as follows:
-
+
-\begin{subequations}
- \begin{align}
- \text{NP} & \Leftrightarrow |w\_P S| < 1,\ \forall \omega \\\\\\
- \text{RS} & \Leftrightarrow |w\_I T| < 1,\ \forall \omega \\\\\\
- \text{RP} & \Leftrightarrow |w\_P S| + |w\_I T| < 1,\ \forall \omega
- \end{align}
-\end{subequations}
+\begin{align}
+ \text{NP} & \Leftrightarrow |w\_P S| < 1,\ \forall \omega \\\\\\
+ \text{RS} & \Leftrightarrow |w\_I T| < 1,\ \forall \omega \\\\\\
+ \text{RP} & \Leftrightarrow |w\_P S| + |w\_I T| < 1,\ \forall \omega
+\end{align}
@@ -2959,7 +3495,11 @@ From this we see that **a prerequisite for RP is that we satisfy both NP and RS*
This applies in general, both for SISO and MIMO systems and for any uncertainty.
In addition, for SISO systems, if we satisfy both RS and NP, then we have at each frequency:
-\\[ |w\_P S| + |w\_I T| < 2 \cdot \max \\{|w\_P S|, |w\_I T|\\} < 2 \\]
+
+\begin{equation\*}
+ |w\_P S| + |w\_I T| < 2 \cdot \max \\{|w\_P S|, |w\_I T|\\} < 2
+\end{equation\*}
+
It then follows that, within a factor at most 2, we will automatically get RP when NP and RS are satisfied.
Thus, RP is not a "big issue" for SISO systems.
@@ -2973,7 +3513,7 @@ This has implications for RP:
&\ge \text{min}\\{|w\_P|, |w\_I|\\}
\end{align\*}
-This means that we cannot have both \\(|w\_P|>1\\) (i.e. good performance) and \\(|w\_I|>1\\) (i.e. more than \\(\si{100}{\%}\\) uncertainty) at the same frequency.
+This means that we cannot have both \\(|w\_P|>1\\) (i.e. good performance) and \\(|w\_I|>1\\) (i.e. more than 100% uncertainty) at the same frequency.
### Examples of Parametric Uncertainty {#examples-of-parametric-uncertainty}
@@ -2982,14 +3522,24 @@ This means that we cannot have both \\(|w\_P|>1\\) (i.e. good performance) and \
#### Parametric Pole Uncertainty {#parametric-pole-uncertainty}
Consider the following set of plants:
-\\[ G\_p(s) = \frac{1}{s - a\_p} G\_0(s); \quad a\_\text{min} \le a\_p \le a\_{\text{max}} \\]
+
+\begin{equation\*}
+ G\_p(s) = \frac{1}{s - a\_p} G\_0(s); \quad a\_\text{min} \le a\_p \le a\_{\text{max}}
+\end{equation\*}
If \\(a\_\text{min}\\) and \\(a\_\text{max}\\) have different signs, then this means that the plant can change from stable to unstable with the pole crossing through the origin.
This set of plants can be written as
-\\[ G\_p(s) = \frac{G\_0(s)}{s - \bar{a}(1 + r\_a \Delta)}; \quad -1 \le \Delta \le 1 \\]
+
+\begin{equation\*}
+ G\_p(s) = \frac{G\_0(s)}{s - \overline{a}(1 + r\_a \Delta)}; \quad -1 \le \Delta \le 1
+\end{equation\*}
+
which can be exactly described by inverse multiplicative uncertainty:
-\\[ G(s) = \frac{G\_0(s)}{(s - \bar{a})}; \quad w\_{iI}(s) = \frac{r\_a \bar{a}}{s - \bar{a}} \\]
+
+\begin{equation\*}
+ G(s) = \frac{G\_0(s)}{(s - \overline{a})}; \quad w\_{iI}(s) = \frac{r\_a \overline{a}}{s - \overline{a}}
+\end{equation\*}
The magnitude of \\(w\_{iI}(s)\\) is equal to \\(r\_a\\) at low frequency and goes to \\(0\\) at high frequencies.
@@ -2997,10 +3547,16 @@ The magnitude of \\(w\_{iI}(s)\\) is equal to \\(r\_a\\) at low frequency and go
##### Time constant form {#time-constant-form}
It is also interesting to consider another form of pole uncertainty, namely that associated with the time constant:
-\\[ G\_p(s) = \frac{1}{\tau\_p s + 1} G\_0(s); \quad \tau\_\text{min} \le \tau\_p \le \tau\_\text{max} \\]
+
+\begin{equation\*}
+ G\_p(s) = \frac{1}{\tau\_p s + 1} G\_0(s); \quad \tau\_\text{min} \le \tau\_p \le \tau\_\text{max}
+\end{equation\*}
The corresponding uncertainty weight is
-\\[ w\_{iI}(s) = \frac{r\_\tau \bar{\tau} s}{1 + \bar{\tau} s} \\]
+
+\begin{equation\*}
+ w\_{iI}(s) = \frac{r\_\tau \overline{\tau} s}{1 + \overline{\tau} s}
+\end{equation\*}
This results in uncertainty in the pole location, but here the uncertainty affects the model at high frequency.
@@ -3008,10 +3564,17 @@ This results in uncertainty in the pole location, but here the uncertainty affec
#### Parametric Zero Uncertainty {#parametric-zero-uncertainty}
Consider zero uncertainty in the "time constant" form as in:
-\\[ G\_p(s) = (1 + \tau\_p s)G\_0(s); \quad \tau\_\text{min} \le \tau\_p \le \tau\_\text{max} \\]
+
+\begin{equation\*}
+ G\_p(s) = (1 + \tau\_p s)G\_0(s); \quad \tau\_\text{min} \le \tau\_p \le \tau\_\text{max}
+\end{equation\*}
This set of plants may be written as multiplicative uncertainty with:
-\\[ w\_I(s) = \frac{r\_\tau \bar{\tau} s}{1 + \bar{\tau} s} \\]
+
+\begin{equation\*}
+ w\_I(s) = \frac{r\_\tau \overline{\tau} s}{1 + \overline{\tau} s}
+\end{equation\*}
+
The magnitude \\(|w\_I(j\omega)|\\) is small at low frequencies and approaches \\(r\_\tau\\) at high frequencies.
For cases with \\(r\_\tau > 1\\) we allow the zero to cross from the LHP to the RHP.
@@ -3036,7 +3599,10 @@ Assume that the underlying cause for the uncertainty is uncertainty in some real
where \\(A\\), \\(B\\), \\(C\\) and \\(D\\) model the nominal system.
We can collect the perturbations \\(\delta\_i\\) in a large diagonal matrix \\(\Delta\\) with the real \\(\delta\_i\\)'s along its diagonal:
-\\[ A\_p = A + \sum \delta\_i A\_i = A + W\_2 \Delta W\_1 \\]
+
+\begin{equation\*}
+ A\_p = A + \sum \delta\_i A\_i = A + W\_2 \Delta W\_1
+\end{equation\*}
In the transfer function form:
@@ -3047,9 +3613,9 @@ In the transfer function form:
with \\(\Phi(s) \triangleq (sI - A)^{-1}\\).
-This is illustrated in the block diagram of Fig. [fig:uncertainty_state_a_matrix](#fig:uncertainty_state_a_matrix), which is in the form of an inverse additive perturbation.
+This is illustrated in the block diagram of Fig. [25](#org80cc1de), which is in the form of an inverse additive perturbation.
-
+
{{< figure src="/ox-hugo/skogestad07_uncertainty_state_a_matrix.png" caption="Figure 25: Uncertainty in state space A-matrix" >}}
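
A small sketch of this construction for rank-one parametric directions \\(A\_i = b\_i c\_i^T\\) (all matrices below are made up):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])                               # made-up nominal A
b1, c1 = np.array([[0.0], [1.0]]), np.array([[1.0, 0.0]])  # delta_1 enters A[1,0]
b2, c2 = np.array([[0.0], [1.0]]), np.array([[0.0, 1.0]])  # delta_2 enters A[1,1]

W2 = np.hstack([b1, b2])
W1 = np.vstack([c1, c2])
delta = np.array([0.3, -0.2])            # real perturbations, |delta_i| <= 1

A_p = A + W2 @ np.diag(delta) @ W1       # equals A + sum_i delta_i b_i c_i^T
print(A_p)
```
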
@@ -3067,24 +3633,33 @@ We also derived a condition for robust performance with multiplicative uncertain
## Robust Stability and Performance Analysis {#robust-stability-and-performance-analysis}
-
+
### General Control Configuration with Uncertainty {#general-control-configuration-with-uncertainty}
The starting point for our robustness analysis is a system representation in which the uncertain perturbations are "pulled out" into a **block diagonal matrix**
-\\[ \Delta = \text{diag} \\{\Delta\_i\\} = \begin{bmatrix}\Delta\_1 \\ & \ddots \\ & & \Delta\_i \\ & & & \ddots \end{bmatrix} \\]
+
+\begin{equation\*}
+ \Delta = \text{diag} \\{\Delta\_i\\} = \begin{bmatrix}
+ \Delta\_1 & & & \\\\\\
+ & \ddots & & \\\\\\
+ & & \Delta\_i & \\\\\\
+ & & & \ddots
+ \end{bmatrix}
+\end{equation\*}
+
where each \\(\Delta\_i\\) represents a **specific source of uncertainty**, e.g. input uncertainty \\(\Delta\_I\\) or parametric uncertainty \\(\delta\_i\\).
-If we also pull out the controller \\(K\\), we get the generalized plant \\(P\\) as shown in Fig. [fig:general_control_delta](#fig:general_control_delta). This form is useful for controller synthesis.
+If we also pull out the controller \\(K\\), we get the generalized plant \\(P\\) as shown in Fig. [26](#orgbe92de5). This form is useful for controller synthesis.
-
+
{{< figure src="/ox-hugo/skogestad07_general_control_delta.png" caption="Figure 26: General control configuration used for controller synthesis" >}}
-If the controller is given and we want to analyze the uncertain system, we use the \\(N\Delta\text{-structure}\\) in Fig. [fig:general_control_Ndelta](#fig:general_control_Ndelta).
+If the controller is given and we want to analyze the uncertain system, we use the \\(N\Delta\text{-structure}\\) in Fig. [27](#org041abfb).
-
+
{{< figure src="/ox-hugo/skogestad07_general_control_Ndelta.png" caption="Figure 27: \\(N\Delta\text{-structure}\\) for robust performance analysis" >}}
@@ -3102,9 +3677,9 @@ Similarly, the uncertain closed-loop transfer function from \\(w\\) to \\(z\\),
&\triangleq N\_{22} + N\_{21} \Delta (I - N\_{11} \Delta)^{-1} N\_{12}
\end{align\*}
-To analyze robust stability of \\(F\\), we can rearrange the system into the \\(M\Delta\text{-structure}\\) shown in Fig. [fig:general_control_Mdelta_bis](#fig:general_control_Mdelta_bis) where \\(M = N\_{11}\\) is the transfer function from the output to the input of the perturbations.
+To analyze robust stability of \\(F\\), we can rearrange the system into the \\(M\Delta\text{-structure}\\) shown in Fig. [28](#org4b32441) where \\(M = N\_{11}\\) is the transfer function from the output to the input of the perturbations.
-
+
{{< figure src="/ox-hugo/skogestad07_general_control_Mdelta_bis.png" caption="Figure 28: \\(M\Delta\text{-structure}\\) for robust stability analysis" >}}
@@ -3112,10 +3687,16 @@ To analyze robust stability of \\(F\\), we can rearrange the system into the \\(
### Representing Uncertainty {#representing-uncertainty}
Each individual perturbation is assumed to be **stable and normalized**:
-\\[ \maxsv(\Delta\_i(j\w)) \le 1 \quad \forall\w \\]
+
+\begin{equation\*}
+ \maxsv(\Delta\_i(j\w)) \le 1 \quad \forall\w
+\end{equation\*}
As the maximum singular value of a block diagonal matrix is equal to the largest of the maximum singular values of the individual blocks, it then follows for \\(\Delta = \text{diag}\\{\Delta\_i\\}\\) that
-\\[ \maxsv(\Delta\_i(j\w)) \le 1 \quad \forall\w, \forall i \quad \Leftrightarrow \quad \tcmbox{\hnorm{\Delta} \le 1} \\]
+
+\begin{equation\*}
+ \maxsv(\Delta\_i(j\w)) \le 1 \quad \forall\w, \forall i \quad \Leftrightarrow \quad \tcmbox{\hnorm{\Delta} \le 1}
+\end{equation\*}
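
The equivalence relies on the largest singular value of a block-diagonal matrix being the largest one among its blocks, which is easily verified:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
blocks = [rng.standard_normal((2, 2)), rng.standard_normal((3, 3))]

Delta = block_diag(*blocks)
sv_full = np.linalg.svd(Delta, compute_uv=False)[0]
sv_blocks = max(np.linalg.svd(B, compute_uv=False)[0] for B in blocks)
print(np.isclose(sv_full, sv_blocks))    # True
```
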
#### Differences Between SISO and MIMO Systems {#differences-between-siso-and-mimo-systems}
@@ -3135,11 +3716,13 @@ However, the inclusion of parametric uncertainty may be more significant for MIM
Unstructured perturbations are often used to get a simple uncertainty model.
We here define unstructured uncertainty as the use of a "full" complex perturbation matrix \\(\Delta\\), usually with dimensions compatible with those of the plant, where at each frequency any \\(\Delta(j\w)\\) satisfying \\(\maxsv(\Delta(j\w)) < 1\\) is allowed.
-Three common forms of **feedforward unstructured uncertainty** are shown Fig. [fig:feedforward_uncertainty](#fig:feedforward_uncertainty): additive uncertainty, multiplicative input uncertainty and multiplicative output uncertainty.
+Three common forms of **feedforward unstructured uncertainty** are shown in Fig. [4](#table--fig:feedforward-uncertainty): additive uncertainty, multiplicative input uncertainty and multiplicative output uncertainty.
-
+
+**Feedforward unstructured uncertainty**:
+
\begin{alignat\*}{3}
&\Pi\_A: \quad &&G\_p = G + E\_A; \quad& &E\_a = w\_A \Delta\_a \\\\\\
&\Pi\_I: \quad &&G\_p = G(I + E\_I); \quad& &E\_I = w\_I \Delta\_I \\\\\\
@@ -3156,13 +3739,15 @@ Three common forms of **feedforward unstructured uncertainty** are shown Fig.&nb
| ![](/ox-hugo/skogestad07_additive_uncertainty.png) | ![](/ox-hugo/skogestad07_input_uncertainty.png) | ![](/ox-hugo/skogestad07_output_uncertainty.png) |
|----------------------------------------------------|----------------------------------------------------------|-----------------------------------------------------------|
-| Additive uncertainty | Multiplicative input uncertainty | Multiplicative output uncertainty |
+| Additive uncertainty | Multiplicative input uncertainty | Multiplicative output uncertainty |
-In Fig. [fig:feedback_uncertainty](#fig:feedback_uncertainty), three **feedback or inverse unstructured uncertainty** forms are shown: inverse additive uncertainty, inverse multiplicative input uncertainty and inverse multiplicative output uncertainty.
+In Fig. [5](#table--fig:feedback-uncertainty), three **feedback or inverse unstructured uncertainty** forms are shown: inverse additive uncertainty, inverse multiplicative input uncertainty and inverse multiplicative output uncertainty.
-
+
+**Feedback unstructured uncertainty**:
+
\begin{alignat\*}{3}
&\Pi\_{iA}: \quad &&G\_p = G(I - E\_{iA} G)^{-1}; & & \quad E\_{ia} = w\_{iA} \Delta\_{ia} \\\\\\
&\Pi\_{iI}: \quad &&G\_p = G(I - E\_{iI})^{-1}; & & \quad E\_{iI} = w\_{iI} \Delta\_{iI} \\\\\\
@@ -3179,7 +3764,7 @@ In Fig. [fig:feedback_uncertainty](#fig:feedback_uncertainty), three **feed
| ![](/ox-hugo/skogestad07_inv_additive_uncertainty.png) | ![](/ox-hugo/skogestad07_inv_input_uncertainty.png) | ![](/ox-hugo/skogestad07_inv_output_uncertainty.png) |
|--------------------------------------------------------|------------------------------------------------------------------|-------------------------------------------------------------------|
-| Inverse additive uncertainty | Inverse multiplicative input uncertainty | Inverse multiplicative output uncertainty |
+| Inverse additive uncertainty | Inverse multiplicative input uncertainty | Inverse multiplicative output uncertainty |
##### Lumping uncertainty into a single perturbation {#lumping-uncertainty-into-a-single-perturbation}
@@ -3188,15 +3773,29 @@ For SISO systems, we usually lump multiple sources of uncertainty into a single
This may also be done for MIMO systems, but then it makes a difference whether the perturbation is at the input or the output.
Since **output uncertainty is frequently less restrictive than input uncertainty in terms of control performance**, we first attempt to lump the uncertainty at the output. For example, a set of plants \\(\Pi\\) may be represented by multiplicative output uncertainty with a scalar weight \\(w\_O(s)\\) using
-\\[ G\_p = (I + w\_O \Delta\_O) G, \quad \hnorm{\Delta\_O} \le 1 \\]
+
+\begin{equation\*}
+ G\_p = (I + w\_O \Delta\_O) G, \quad \hnorm{\Delta\_O} \le 1
+\end{equation\*}
+
where
-\\[ l\_O(\w) = \max\_{G\_p \in \Pi} \maxsv\left( (G\_p - G)G^{-1} \right); \ \abs{w\_O(j\w)} \ge l\_O(\w), \, \forall\w \\]
+
+\begin{equation\*}
+ l\_O(\w) = \max\_{G\_p \in \Pi} \maxsv\left( (G\_p - G)G^{-1} \right); \ \abs{w\_O(j\w)} \ge l\_O(\w), \, \forall\w
+\end{equation\*}
If the resulting uncertainty weight is reasonable and the analysis shows that robust stability and performance may be achieved, then this lumping of uncertainty at the output is fine.
If this is not the case, then one may try to lump the uncertainty at the input instead, using multiplicative input uncertainty with a scalar weight,
-\\[ G\_p = G(I + w\_I \Delta\_I), \quad \hnorm{\Delta\_I} \le 1 \\]
+
+\begin{equation\*}
+ G\_p = G(I + w\_I \Delta\_I), \quad \hnorm{\Delta\_I} \le 1
+\end{equation\*}
+
where
-\\[ l\_I(\w) = \max\_{G\_p \in \Pi} \maxsv\left( G^{-1}(G\_p - G) \right); \ \abs{w\_I(j\w)} \ge l\_I(\w), \, \forall\w \\]
+
+\begin{equation\*}
+ l\_I(\w) = \max\_{G\_p \in \Pi} \maxsv\left( G^{-1}(G\_p - G) \right); \ \abs{w\_I(j\w)} \ge l\_I(\w), \, \forall\w
+\end{equation\*}
However, in many cases, this approach of lumping uncertainty either at the output or the input does **not** work well because **it usually introduces additional plants** that were not present in the original set.
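
A minimal sketch of computing \\(l\_O(\w)\\) at one frequency for a given plant set (all matrices made up):

```python
import numpy as np

def l_O(G, plants):
    """Smallest output-uncertainty radius covering the plant set:
    max over G_p of sigma_bar((G_p - G) G^{-1})."""
    return max(np.linalg.svd((Gp - G) @ np.linalg.inv(G),
                             compute_uv=False)[0] for Gp in plants)

G = np.array([[1.0, 0.2],
              [0.1, 0.8]])              # made-up nominal plant
plants = [1.1 * G, 0.85 * G]            # made-up +-15% gain-type perturbations
print(l_O(G, plants))                   # |w_O(jw)| must be at least this large
```
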
@@ -3212,19 +3811,30 @@ In such cases we may have to represent the uncertainty as it occurs physically (
#### Diagonal Uncertainty {#diagonal-uncertainty}
By "diagonal uncertainty" we mean that the perturbation is a complex diagonal matrix
-\\[ \Delta(s) = \text{diag}\\{\delta\_i(s)\\}; \quad \abs{\delta\_i(j\w)} \le 1, \ \forall\w, \, \forall i \\]
+
+\begin{equation\*}
+ \Delta(s) = \text{diag}\\{\delta\_i(s)\\}; \quad \abs{\delta\_i(j\w)} \le 1, \ \forall\w, \, \forall i
+\end{equation\*}
Diagonal uncertainty usually arises from a consideration of uncertainty or neglected dynamics in the **individual input or output channels**.
This type of diagonal uncertainty is **always present**.
-
+
Let us consider uncertainty in the input channels. With each input \\(u\_i\\), there is a physical system (amplifier, actuator, etc.) which, based on the controller output signal \\(u\_i\\), generates a physical plant input \\(m\_i\\)
-\\[ m\_i = h\_i(s) u\_i \\]
+
+\begin{equation\*}
+ m\_i = h\_i(s) u\_i
+\end{equation\*}
+
The scalar transfer function \\(h\_i(s)\\) is often absorbed into the plant model \\(G(s)\\).
We can represent its uncertainty as multiplicative uncertainty
-\\[ h\_{pi}(s) = h\_i(s)(1 + w\_{Ii}(s)\delta\_i(s)); \quad \abs{\delta\_i(j\w)} \le 1, \, \forall\w \\]
+
+\begin{equation\*}
+ h\_{pi}(s) = h\_i(s)(1 + w\_{Ii}(s)\delta\_i(s)); \quad \abs{\delta\_i(j\w)} \le 1, \, \forall\w
+\end{equation\*}
+
which after combining all input channels results in diagonal input uncertainty for the plant
\begin{align\*}
@@ -3235,7 +3845,11 @@ which after combining all input channels results in diagonal input uncertainty f
Normally, we would represent the uncertainty in each input or output channel using a simple weight in the form
-\\[ w(s) = \frac{\tau s + r\_0}{(\tau/r\_\infty)s + 1} \\]
+
+\begin{equation\*}
+ w(s) = \frac{\tau s + r\_0}{(\tau/r\_\infty)s + 1}
+\end{equation\*}
+
where \\(r\_0\\) is the relative uncertainty at steady-state, \\(1/\tau\\) is the frequency where the relative uncertainty reaches \\(\SI{100}{\percent}\\), and \\(r\_\infty\\) is the magnitude of the weight at high frequencies.
**Diagonal input uncertainty should always be considered because**:
@@ -3246,27 +3860,36 @@ where \\(r\_0\\) is the relative uncertainty at steady-state, \\(1/\tau\\) is th
### Obtaining \\(P\\), \\(N\\) and \\(M\\) {#obtaining--p----n--and--m}
-Let's consider the feedback system with multiplicative input uncertainty \\(\Delta\_I\\) shown Fig. [fig:input_uncertainty_set_feedback_weight](#fig:input_uncertainty_set_feedback_weight).
+Let's consider the feedback system with multiplicative input uncertainty \\(\Delta\_I\\) shown in Fig. [29](#org4f9f011).
\\(W\_I\\) is a normalization weight for the uncertainty and \\(W\_P\\) is a performance weight.

<a id="org4f9f011"></a>

{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set_feedback_weight.png" caption="Figure 29: System with multiplicative input uncertainty and performance measured at the output" >}}
We want to derive the generalized plant \\(P\\) which has inputs \\([u\_\Delta,\ w,\ u]^T\\) and outputs \\([y\_\Delta,\ z,\ v]^T\\).
By breaking the loop before and after \\(K\\) and \\(\Delta\_I\\), we get

\begin{equation\*}
  P = \begin{bmatrix}
    0 & 0 & W\_I \\\\\\
    W\_P G & W\_P & W\_P G \\\\\\
    -G & -I & -G
  \end{bmatrix}
\end{equation\*}

Next, we want to derive the matrix \\(N\\). We first partition \\(P\\) to be compatible with \\(K\\):
\begin{align\*}
  P\_{11} = \begin{bmatrix}
    0 & 0 \\\\\\
    W\_P G & W\_P
  \end{bmatrix}, \quad & P\_{12} = \begin{bmatrix}
    W\_I \\\\\\
    W\_P G
  \end{bmatrix} \\\\\\
  P\_{21} = \begin{bmatrix} -G & -I \end{bmatrix}, \quad & P\_{22} = -G \\\\\\
\end{align\*}
and then we find \\(N\\) using \\(N = F\_l(P, K)\\).
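
A minimal numeric sketch of this lower LFT, \\(F\_l(P, K) = P\_{11} + P\_{12} K (I - P\_{22} K)^{-1} P\_{21}\\), evaluated for constant (single-frequency) matrices; the scalar values for \\(G\\), \\(W\_I\\), \\(W\_P\\) and \\(K\\) are made up for illustration:

```python
import numpy as np

def lft_lower(P11, P12, P21, P22, K):
    """Lower LFT: N = P11 + P12 K (I - P22 K)^{-1} P21."""
    I = np.eye(P22.shape[0])
    return P11 + P12 @ K @ np.linalg.inv(I - P22 @ K) @ P21

# Illustrative SISO numbers (G, W_I, W_P constant); K is 1x1
G, WI, WP = 2.0, 0.5, 1.0
K = np.array([[1.0]])
P11 = np.array([[0.0, 0.0], [WP * G, WP]])
P12 = np.array([[WI], [WP * G]])
P21 = np.array([[-G, -1.0]])
P22 = np.array([[-G]])

N = lft_lower(P11, P12, P21, P22, K)
# For this SISO case N should equal [[-WI*TI, -WI*K*S], [WP*S*G, WP*S]]
S = 1 / (1 + G * K[0, 0])
TI = G * K[0, 0] * S
print(np.allclose(N, [[-WI * TI, -WI * K[0, 0] * S], [WP * S * G, WP * S]]))  # True
```
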
### Definitions of Robust Stability and Robust Performance {#definitions-of-robust-stability-and-robust-performance}

We have \\(z = F(\Delta) \cdot w\\) with

\begin{equation\*}
  F = F\_u(N, \Delta) = N\_{22} + N\_{21}\Delta(I - N\_{11}\Delta)^{-1} N\_{12}
\end{equation\*}

We here use \\(\hinf\\) norm to define performance and require for RP that \\(\hnorm{F(\Delta)} \le 1\\) for all allowed \\(\Delta\\).
A typical choice is \\(F = w\_P S\_P\\) where \\(w\_P\\) is the performance weight and \\(S\_P\\) represents the set of perturbed sensitivity functions.
In terms of the \\(N\Delta\text{-structure}\\), our requirements for stability and performance can be summarized as follows:

\begin{align\*}
  \text{NS} \ &\Leftrightarrow \ N \text{ is internally stable} \\\\\\
  \text{NP} \ &\Leftrightarrow \ \hnorm{N\_{22}} < 1 \text{ and NS} \\\\\\
  \text{RS} \ &\Leftrightarrow \ F = F\_u(N, \Delta) \text{ is stable } \forall\Delta, \ \hnorm{\Delta} \le 1, \text{ and NS} \\\\\\
  \text{RP} \ &\Leftrightarrow \ \hnorm{F} < 1, \ \forall\Delta, \ \hnorm{\Delta} \le 1, \text{ and NS}
\end{align\*}

### Robust Stability for the \\(M\Delta\text{-structure}\\) {#robust-stability-for-the--m-delta-text-structure}
Consider the uncertain \\(N\Delta\text{-system}\\) for which the transfer function from \\(w\\) to \\(z\\) is

\begin{equation\*}
  F\_u(N, \Delta) = N\_{22} + N\_{21}\Delta(I - N\_{11}\Delta)^{-1} N\_{12}
\end{equation\*}

Suppose that the system is nominally stable (with \\(\Delta = 0\\)), that is, \\(N\\) is stable. We also assume that \\(\Delta\\) is stable.
We then see from the above equation that the **only possible source of instability** is the feedback term \\((I - N\_{11}\Delta)^{-1}\\).
Thus, when we have nominal stability, the stability of the \\(N\Delta\text{-structure}\\) is equivalent to the stability of the \\(M\Delta\text{-structure}\\) where \\(M = N\_{11}\\).
We thus need to derive conditions for checking the stability of the \\(M\Delta\text{-structure}\\).

**Determinant Stability Condition**:

Assume that the nominal system \\(M(s)\\) and the perturbations \\(\Delta(s)\\) are stable.
Consider the convex set of perturbations \\(\Delta\\), such that if \\(\Delta^\prime\\) is an allowed perturbation then so is \\(c\Delta^\prime\\) where c is any **real** scalar such that \\(\abs{c} \le 1\\).
Then the \\(M\Delta\text{-structure}\\) is stable for all allowed perturbations **if and only if** the Nyquist plot of \\(\det\left( I - M\Delta(s) \right)\\) does not encircle the origin, \\(\forall\Delta\\):

\begin{equation\*}
  \det\left( I - M\Delta(j\w) \right) \ne 0, \quad \forall\w, \, \forall\Delta
\end{equation\*}


**Spectral Radius Condition**:

Assume that the nominal system \\(M(s)\\) and the perturbations \\(\Delta(s)\\) are stable.
Consider the class of perturbations, \\(\Delta\\), such that if \\(\Delta^\prime\\) is an allowed perturbation, then so is \\(c\Delta^\prime\\) where c is any **complex** scalar such that \\(\abs{c} \le 1\\).
Then the \\(M\Delta\text{-structure}\\) is stable for all allowed perturbations **if and only if**:
\begin{equation} \label{eq:spectral\_radio\_condition\_complex\_pert}
\begin{aligned}
&\rho(M\Delta(j\w)) < 1, \quad \forall\w, \, \forall\Delta\\\\\\
\Leftrightarrow \quad &\max\_{\Delta} \rho(M\Delta(j\w)) < 1, \quad \forall\w
Then we have

\begin{equation\*}
  \max\_{\Delta} \rho(M\Delta) = \maxsv(M)
\end{equation\*}
Assume that the nominal system \\(M(s)\\) is stable and that the perturbations \\(\Delta(s)\\) are stable.
Then the \\(M\Delta\text{-system}\\) is stable for all perturbations \\(\Delta\\) satisfying \\(\hnorm{\Delta} \le 1\\) if and only if

\begin{equation\*}
  \maxsv(M(j\w)) < 1, \quad \forall\w
\end{equation\*}

#### Application of the Unstructured RS-condition {#application-of-the-unstructured-rs-condition}
We will now present necessary and sufficient conditions for robust stability for each of the six single unstructured perturbations in Figs [4](#table--fig:feedforward-uncertainty) and [5](#table--fig:feedback-uncertainty) with

\begin{equation\*}
  E = W\_2 \Delta W\_1, \quad \hnorm{\Delta} \le 1
\end{equation\*}

To derive the matrix \\(M\\) we simply "isolate" the perturbation, and determine the transfer function matrix

\begin{equation\*}
  M = W\_1 M\_0 W\_2
\end{equation\*}

from the output to the input of the perturbation, where \\(M\_0\\) for each of the six cases is given by
\begin{alignat\*}{2}
  &G\_p = G + E\_A: \quad && M\_0 = K S \\\\\\
  &G\_p = G(I + E\_I): \quad && M\_0 = T\_I \\\\\\
  &G\_p = (I + E\_O)G: \quad && M\_0 = T \\\\\\
  &G\_p = G(I - E\_{iA} G)^{-1}: \quad && M\_0 = S G \\\\\\
  &G\_p = G(I - E\_{iI})^{-1}: \quad && M\_0 = S\_I \\\\\\
  &G\_p = (I - E\_{iO})^{-1} G: \quad && M\_0 = S
\end{alignat\*}
Using the theorem to check RS for unstructured perturbations

\begin{equation\*}
  \text{RS} \quad \Leftrightarrow \quad \hnorm{W\_1 M\_0 W\_2(j\w)} < 1, \ \forall\w
\end{equation\*}

For instance, for feedforward input uncertainty, we get

\begin{equation\*}
  \text{RS}\ \forall G\_p = G(I + w\_I \Delta\_I), \hnorm{\Delta\_I} \le 1 \Leftrightarrow \hnorm{w\_I T\_I} < 1
\end{equation\*}

In general, **the unstructured uncertainty descriptions in terms of a single perturbation are not "tight"** (in the sense that at each frequency all complex perturbations satisfying \\(\maxsv(\Delta(j\w)) \le 1\\) may not be possible in practice).
Thus, the above RS-conditions are often **conservative**.
In order to get a tighter condition we must use a tighter uncertainty description in terms of a block-diagonal \\(\Delta\\).

Robust stability bounds in terms of the \\(\hinf\\) norm (\\(\text{RS}\Leftrightarrow\hnorm{M}<1\\)) are in general only tight when there is a single full perturbation block.
An "exception" to this is when the uncertainty blocks enter or exit from the same location in the block diagram, because they can then be stacked on top of each other or side-by-side, in an overall \\(\Delta\\) which is then full matrix.
One important uncertainty description that falls into this category is the **coprime uncertainty description** shown in Fig. [30](#org8bb0812), for which the set of plants is

\begin{equation\*}
  G\_p = (M\_l + \Delta\_M)^{-1}(N\_l + \Delta\_N), \quad \hnorm{[\Delta\_N, \ \Delta\_M]} \le \epsilon
\end{equation\*}

where \\(G = M\_l^{-1} N\_l\\) is a left coprime factorization of the nominal plant.
This uncertainty description is surprisingly **general**, it allows both zeros and poles to cross into the right-half plane, and has proven to be very useful in applications.

<a id="org8bb0812"></a>

{{< figure src="/ox-hugo/skogestad07_coprime_uncertainty.png" caption="Figure 30: Coprime Uncertainty" >}}
Since we have no weights on the perturbations, it is reasonable to use a normalized coprime factorization of the nominal plant.
In any case, to test for RS we can rearrange the block diagram to match the \\(M\Delta\text{-structure}\\) with

\begin{equation\*}
  \Delta = [\Delta\_N, \ \Delta\_M]; \quad M = -\begin{bmatrix}
    K \\\\\\
    I
  \end{bmatrix} (I + GK)^{-1} M\_l^{-1}
\end{equation\*}

And we get

\begin{equation\*}
  \text{RS}\ \forall\ \hnorm{[\Delta\_N, \ \Delta\_M]} \le \epsilon \quad \Leftrightarrow \quad \hnorm{M} < 1/\epsilon
\end{equation\*}

The coprime uncertainty description provides a good **generic uncertainty description** for cases where we do not use any specific a priori uncertainty information.
Note that the uncertainty magnitude is \\(\epsilon\\), so it is not normalized to be less than 1 in this case.
This is because this uncertainty description is most often used in a controller design procedure where the objective is to maximize the magnitude \\(\epsilon\\) of the uncertainty such that RS is maintained.


### RS with Structured Uncertainty {#rs-with-structured-uncertainty}

Consider now the presence of structured uncertainty, where \\(\Delta = \text{diag}\\{\Delta\_i\\}\\) is block-diagonal.
To test for robust stability, we rearrange the system into the \\(M\Delta\text{-structure}\\) and we have

\begin{equation\*}
  \text{RS if } \maxsv(M(j\w)) < 1, \ \forall\w
\end{equation\*}

We have here written "if" rather than "if and only if" since this condition is only sufficient for RS when \\(\Delta\\) is structured; it is necessary and sufficient only when \\(\Delta\\) is a full matrix.
The question is whether we can take advantage of the fact that \\(\Delta = \text{diag}\\{\Delta\_i\\}\\) is structured to obtain an RS-condition which is tighter.
One idea is to make use of the fact that stability must be independent of scaling.
To this effect, introduce the block-diagonal scaling matrix

\begin{equation\*}
  D = \diag{d\_i I\_i}
\end{equation\*}

where \\(d\_i\\) is a scalar and \\(I\_i\\) is an identity matrix of the same dimension as the \\(i\\)'th perturbation block \\(\Delta\_i\\).
Now rescale the inputs and outputs of \\(M\\) and \\(\Delta\\) by inserting the matrices \\(D\\) and \\(D^{-1}\\) on both sides as shown in Fig. [31](#orga3e207a).
This clearly has no effect on stability.

<a id="orga3e207a"></a>

{{< figure src="/ox-hugo/skogestad07_block_diagonal_scalings.png" caption="Figure 31: Use of block-diagonal scalings, \\(\Delta D = D \Delta\\)" >}}
Note that with the chosen form for the scalings we have for each perturbation block \\(\Delta\_i = d\_i \Delta\_i d\_i^{-1}\\), that is we have \\(\Delta = D \Delta D^{-1}\\).
This means that we have

\begin{equation\*}
  \text{RS if } \maxsv(DM(j\w)D^{-1}) < 1, \ \forall\w
\end{equation\*}

This applies for any \\(D\\), and therefore the "most improved" (least conservative) RS-condition is obtained by minimizing at each frequency the scaled singular value and we have

\begin{equation\*}
  \text{RS if } \min\_{D(\w) \in \mathcal{D}} \maxsv(D(\w)M(j\w)D(\w)^{-1}) < 1, \ \forall\w
\end{equation\*}

where \\(\mathcal{D}\\) is the set of block-diagonal matrices whose structure is compatible with that of \\(\Delta\\), i.e., \\(\Delta D = D \Delta\\).
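
At a single frequency this minimization is a small smooth optimization problem. The sketch below (a random complex matrix standing in for \\(M(j\w)\\), scalar perturbation blocks assumed) also illustrates the ordering \\(\rho(M) \le \min\_D \maxsv(DMD^{-1}) \le \maxsv(M)\\):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

def scaled_sv(logd):
    d = np.exp(logd)                    # positive diagonal scalings d_i
    return np.linalg.svd((d[:, None] / d[None, :]) * M,   # = D M D^{-1}
                         compute_uv=False)[0]

res = minimize(scaled_sv, x0=np.zeros(3))
print("rho(M)                 :", np.abs(np.linalg.eigvals(M)).max())
print("min_D max_sv(D M D^-1) :", res.fun)
print("max_sv(M)              :", np.linalg.svd(M, compute_uv=False)[0])
```
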

### The Structured Singular Value {#the-structured-singular-value}

We will use \\(\mu\\) to get necessary and sufficient conditions for robust stability and also for robust performance.

> Find the smallest structured \\(\Delta\\) (measured in terms of \\(\maxsv(\Delta)\\)) which makes the matrix \\(I - M \Delta\\) singular; then \\(\mu(M) = 1/\maxsv(\Delta)\\).
Mathematically

\begin{equation\*}
  \mu(M)^{-1} \triangleq \min\_{\Delta}\\{\maxsv(\Delta) | \det(I-M\Delta) = 0 \text{ for struct. }\Delta\\}
\end{equation\*}

Clearly, \\(\mu(M)\\) depends not only on \\(M\\) but also on the **allowed structure** for \\(\Delta\\). This is sometimes shown explicitly by using the notation \\(\mu\_\Delta (M)\\).
The above definition of \\(\mu\\) involves varying \\(\maxsv(\Delta)\\). However, we prefer to normalize \\(\Delta\\) such that \\(\maxsv(\Delta)\le1\\). We can do that by scaling \\(\Delta\\) by a factor \\(k\_m\\), and looking for the smallest \\(k\_m\\) which makes the matrix \\(I - k\_m M \Delta\\) singular. \\(\mu\\) is then the reciprocal of this smallest \\(k\_m\\): \\(\mu = 1/k\_m\\). This results in the following alternative definition of \\(\mu\\).
Let \\(M\\) be a given complex matrix and let \\(\Delta = \diag{\Delta\_i}\\) denote a set of complex matrices with \\(\maxsv(\Delta) \le 1\\) and with a given block-diagonal structure.
The real non-negative function \\(\mu(M)\\), called the **structured singular value**, is defined by
\begin{align\*}
  \mu(M) \triangleq \big( \min\\{ k\_m | &\det(I - k\_m M \Delta) = 0 \\\\\\
  &\text{for structured } \Delta, \ \maxsv(\Delta) \le 1 \\} \big)^{-1}
\end{align\*}

A larger value of \\(\mu\\) is "bad" as it means that a smaller perturbation makes \\(I - M\Delta\\) singular.


#### Properties of \\(\mu\\) for Real and Complex \\(\Delta\\) {#properties-of--mu--for-real-and-complex--delta}

1. \\(\mu(\alpha M) = \abs{\alpha} \mu(M)\\) for any real scalar \\(\alpha\\)
2. Let \\(\Delta = \diag{\Delta\_1, \Delta\_2}\\) be a block-diagonal perturbation and let \\(M\\) be partitioned accordingly.
   Then
   \begin{equation\*}
     \mu\_\Delta(M) \ge \text{max} \\{ \mu\_{\Delta\_1}(M\_{11}), \ \mu\_{\Delta\_2}(M\_{22}) \\}
   \end{equation\*}
#### Properties of \\(\mu\\) for Complex Perturbations \\(\Delta\\) {#properties-of--mu--for-complex-perturbations--delta}

1.  For a repeated scalar complex perturbation \\(\Delta = \delta I\\), \\(\abs{\delta} \le 1\\):
    \begin{equation}
      \tcmbox{\mu(M) = \rho(M)}
\end{equation}
2. \\(\mu(\alpha M) = \abs{\alpha} \mu(M)\\) for any (complex) scalar \\(\alpha\\)
3. For a full block complex perturbation \\(\Delta\\)
    \begin{equation\*}
      \mu(M) = \maxsv(M)
    \end{equation\*}
4.  \\(\mu\\) for complex perturbations is bounded by the spectral radius and the singular value:
\begin{equation}
\tcmbox{\rho(M) \le \mu(M) \le \maxsv(M)}
\end{equation}
5.  **Improved lower bound**.
    Define \\(\mathcal{U}\\) as the set of all unitary matrices \\(U\\) with the same block-diagonal structure as \\(\Delta\\).
Then for complex \\(\Delta\\)
\begin{equation}
\tcmbox{\mu(M) = \max\_{U\in\mathcal{U}} \rho(MU)}
\end{equation}
6.  **Improved upper bound**.
    Define \\(\mathcal{D}\\) as the set of matrices \\(D\\) that commute with \\(\Delta\\).
Then
    \begin{equation}
      \tcmbox{\mu(M) \le \min\_{D\in\mathcal{D}} \maxsv(DMD^{-1})}
    \end{equation}


### Robust Stability with Structured Uncertainty {#robust-stability-with-structured-uncertainty}

Consider stability of the \\(M\Delta\text{-structure}\\) for the case where \\(\Delta\\) is a set of norm-bounded block-diagonal perturbations.
From the determinant stability condition which applies to both complex and real perturbations, we get

\begin{equation\*}
  \text{RS} \ \Leftrightarrow \ \det(I-M\Delta(j\w)) \ne 0, \ \forall\w,\, \forall\Delta, \, \\|\Delta\\|\_\infty \le 1
\end{equation\*}

The problem is that this is only a "yes/no" condition. To find the factor \\(k\_m\\) by which the system is robustly stable, we scale the uncertainty \\(\Delta\\) by \\(k\_m\\), and look for the smallest \\(k\_m\\) which yields "borderline instability", namely

\begin{equation\*}
  \det(I - k\_m M \Delta) = 0
\end{equation\*}

From the definition of \\(\mu\\), this value is \\(k\_m = 1/\mu(M)\\), and we obtain the following necessary and sufficient condition for robust stability.
Assume that the nominal system \\(M\\) and the perturbations \\(\Delta\\) are stable.
Then the \\(M\Delta\text{-system}\\) is stable for all allowed perturbations with \\(\maxsv(\Delta)\le 1, \ \forall\w\\) if and only if
\begin{equation} \label{eq:RS\_block\_diagonal\_pert}
\mu(M(j\w)) < 1, \ \forall \omega
\end{equation}

A value of \\(\mu = 1.1\\) for robust stability means that **all** the uncertainty blocks must be decreased in magnitude by a factor \\(1.1\\) in order to guarantee stability.
But if we want to keep some of the uncertainty blocks fixed, how large can one particular source of uncertainty be before we get instability?
We define this value as \\(1/\mu^s\\), where \\(\mu^s\\) is called skewed-\\(\mu\\). We may view \\(\mu^s(M)\\) as a generalization of \\(\mu(M)\\).
Let \\(\Delta = \diag{\Delta\_1, \Delta\_2}\\) and assume we have fixed \\(\norm{\Delta\_1} \le 1\\) and we want to find how large \\(\Delta\_2\\) can be before we get instability.
The solution is to select

\begin{equation\*}
  K\_m = \begin{bmatrix}
    I & 0 \\\\\\
    0 & k\_m I
  \end{bmatrix}
\end{equation\*}

and look at each frequency for the smallest value of \\(k\_m\\) which makes \\(\det(I - K\_m M \Delta) = 0\\) and we have that skewed-\\(\mu\\) is

\begin{equation\*}
  \mu^s(M) \triangleq 1/k\_m
\end{equation\*}
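
The bisection idea behind skewed-\\(\mu\\) can be sketched numerically by replacing the exact \\(\mu\\) with its D-scaling upper bound (so the result is an upper bound on \\(\mu^s\\)); two scalar blocks and a randomly generated, rescaled \\(M\\) are assumed:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
M = 0.5 * M / np.linalg.svd(M, compute_uv=False)[0]   # ensure mu(M) < 1

def mu_ub(M):
    """D-scaling upper bound on mu for two scalar blocks, D = diag(1, d)."""
    f = lambda logd: np.linalg.svd(
        np.array([[1.0, np.exp(-logd)], [np.exp(logd), 1.0]]) * M,
        compute_uv=False)[0]
    return minimize_scalar(f).fun

lo, hi = 1e-6, 1e6
for _ in range(80):                     # bisection on k_m
    km = np.sqrt(lo * hi)
    if mu_ub(np.diag([1.0, km]) @ M) < 1.0:
        lo = km                         # still stable: Delta_2 can grow
    else:
        hi = km
print("upper bound on skewed-mu:", 1.0 / km)
```
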

### Robust Performance {#robust-performance}


#### Testing RP using \\(\mu\\) {#testing-rp-using--mu}

To test for RP, we first "pull out" the uncertain perturbations and rearrange the system into the \\(N\Delta\text{-structure}\\).
Our RP-requirement, is that the \\(\hinf\\) norm of the transfer function \\(F = F\_u(N, \Delta)\\) remains less than \\(1\\) for all allowed perturbations.
This may be tested exactly by computing \\(\mu(N)\\).
Rearrange the uncertain system into the \\(N\Delta\text{-structure}\\).
Assume nominal stability such that \\(N\\) is stable.
Then
\begin{align\*}
  \text{RP} \ &\stackrel{\text{def}}{\Longleftrightarrow} \ \hnorm{F} < 1, \quad \forall\Delta, \ \hnorm{\Delta} \le 1 \\\\\\
  &\Longleftrightarrow \ \mu\_{\hat{\Delta}}(N(j\w)) < 1, \quad \forall\w
\end{align\*}
where \\(\mu\\) is computed with respect to the structure

\begin{equation\*}
  \hat{\Delta} = \begin{bmatrix}
    \Delta & 0 \\\\\\
    0 & \Delta\_P
  \end{bmatrix}
\end{equation\*}

and \\(\Delta\_P\\) is a full complex perturbation with the same dimensions as \\(F^T\\).
#### Summary of \\(\mu\text{-conditions}\\) for NP, RS and RP {#summary-of--mu-text-conditions--for-np-rs-and-rp}
Rearrange the uncertain system into the \\(N\Delta\text{-structure}\\) where the block-diagonal perturbation satisfy \\(\hnorm{\Delta} \le 1\\).
Introduce

\begin{equation\*}
  F = F\_u(N, \Delta) = N\_{22} + N\_{21}\Delta(I - N\_{11} \Delta)^{-1} N\_{12}
\end{equation\*}

Let the performance requirement be \\(\hnorm{F} \le 1\\).
\begin{align\*}
\text{NS} \ &\Leftrightarrow \ N \text{ (internally) stable} \\\\\\
  \text{NP} \ &\Leftrightarrow \ \text{NS and } \maxsv(N\_{22}) = \mu\_{\Delta\_P}(N\_{22}) < 1, \ \forall\w \\\\\\
\text{RS} \ &\Leftrightarrow \ \text{NS and } \mu\_\Delta(N\_{11}) < 1, \ \forall\w \\\\\\
  \text{RP} \ &\Leftrightarrow \ \text{NS and } \mu\_{\tilde{\Delta}}(N) < 1, \ \forall\w, \ \tilde{\Delta} = \begin{bmatrix}
    \Delta & 0 \\\\\\
    0 & \Delta\_P
  \end{bmatrix}
\end{align\*}

#### Worst-case Performance and Skewed-\\(\mu\\) {#worst-case-performance-and-skewed--mu}

So \\(\mu\\) does not directly give us the worst-case performance \\(\max\_{\maxsv(\Delta) \le 1} \maxsv(F(\Delta))\\).
To find the worst-case weighted performance for a given uncertainty, one needs to keep the magnitude of the perturbation fixed (\\(\maxsv(\Delta) \le 1\\)), that is, **we must compute the skewed-\\(\mu\\)** of \\(N\\).
We have, in this case

\begin{equation\*}
  \max\_{\maxsv(\Delta) \le 1} \maxsv(F\_l(N, \Delta)(j\w)) = \mu^s (N(j\w))
\end{equation\*}

To find \\(\mu^s\\) numerically, we scale the performance part of \\(N\\) by a factor \\(k\_m = 1/\mu^s\\) and iterate on \\(k\_m\\) until \\(\mu = 1\\).
That is, at each frequency skewed-\\(\mu\\) is the value \\(\mu^s(N)\\) which solves

\begin{equation\*}
  \mu(K\_mN) = 1, \quad K\_m = \begin{bmatrix}
    I & 0 \\\\\\
    0 & 1/\mu^s
  \end{bmatrix}
\end{equation\*}

Note that \\(\mu\\) underestimates how bad or good the actual worst-case performance is. This follows because \\(\mu^s(N)\\) is always further from 1 than \\(\mu(N)\\).
### Application: RP with Input Uncertainty {#application-rp-with-input-uncertainty}
We will now consider in some detail the case of multiplicative input uncertainty with performance defined in terms of weighted sensitivity (Fig. [29](#org4f9f011)).
The performance requirement is then

\begin{equation\*}
  \text{RP} \quad \stackrel{\text{def}}{\Longleftrightarrow} \quad \hnorm{w\_P (I + G\_p K)^{-1}} < 1, \quad \forall G\_p
\end{equation\*}

where the set of plants is given by

\begin{equation\*}
  G\_p = G (I + w\_I \Delta\_I), \quad \hnorm{\Delta\_I} \le 1
\end{equation\*}

Here \\(w\_P(s)\\) and \\(w\_I(s)\\) are scalar weights, so the performance objective is the same for all the outputs, and the uncertainty is the same for all the inputs.

In this section, we will:

1.  find the interconnection matrix \\(N\\) for this problem
2.  consider the SISO case, so that the results can be compared with those derived for SISO systems
3.  consider a \\(2 \times 2\\) distillation process

On rearranging the system into the \\(N\Delta\text{-structure}\\), we get
\begin{equation} \label{eq:n\_delta\_structure\_clasic}
  N = \begin{bmatrix}
    - w\_I T\_I & - w\_I K S \\\\\\
    w\_P S G & w\_P S
  \end{bmatrix}
\end{equation}
where \\(T\_I = KG(I + KG)^{-1}\\), \\(S = (I + GK)^{-1}\\).
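
As a numeric illustration (one frequency point only), \\(N(j\w)\\) can be assembled directly from \\(G\\), \\(K\\) and the scalar weights; the plant, controller and weights used here are those of the distillation example treated below:

```python
import numpy as np

w = 1.0                                   # frequency (rad/s), one point only
s = 1j * w
G0 = np.array([[87.8, -86.4], [108.2, -109.6]])
G = G0 / (75 * s + 1)
K = (0.7 / s) * (75 * s + 1) * np.linalg.inv(G0)     # K = (0.7/s) G^{-1}
wI = (s + 0.2) / (0.5 * s + 1)
wP = (s / 2 + 0.05) / s

I = np.eye(2)
S = np.linalg.inv(I + G @ K)              # sensitivity
TI = K @ G @ np.linalg.inv(I + K @ G)     # input complementary sensitivity
N = np.block([[-wI * TI,     -wI * (K @ S)],
              [wP * (S @ G),  wP * S]])
print(N.shape)                            # (4, 4)
```
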
For a SISO system with \\(N\\) as described above:

\begin{equation\*}
  \mu(N) = \abs{w\_I T} + \abs{w\_P S}
\end{equation\*}

Robust performance optimization, in terms of weighted sensitivity with multiplicative uncertainty for a SISO system, thus involves minimizing the peak value of \\(\mu(N) = |w\_I T| + |w\_P S|\\).
This may be solved using DK-iteration.
A closely related problem, which is easier to solve, is to minimize the peak value (\\(\mathcal{H}\_\infty\\) norm) of the mixed sensitivity matrix:

\begin{equation\*}
  N\_\text{mix} = \begin{bmatrix}
    w\_P S \\\\\\
    w\_I T
  \end{bmatrix}
\end{equation\*}

At each frequency, \\(\mu(N)\\) differs from \\(\overline{\sigma}(N\_\text{mix})\\) by at most a factor \\(\sqrt{2}\\).
Thus, minimizing \\(\\| N\_\text{mix} \\|\_\infty\\) is close to optimizing robust performance in terms of \\(\mu(N)\\).
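
In the SISO (or nominally decoupled) case this \\(\sqrt{2}\\) factor is easy to verify numerically, since for a scalar column \\(\maxsv(N\_\text{mix}) = \sqrt{|w\_P S|^2 + |w\_I T|^2}\\); the loop \\(l = 0.7/s\\) and the weights of the distillation example below are assumed:

```python
import numpy as np

omega = np.logspace(-3, 3, 1000)
s = 1j * omega
S = s / (s + 0.7)                          # sensitivity of l = 0.7/s
T = 0.7 / (s + 0.7)                        # complementary sensitivity
wI = (s + 0.2) / (0.5 * s + 1)
wP = (s / 2 + 0.05) / s

mu_N = np.abs(wP * S) + np.abs(wI * T)             # mu(N), SISO case
sv_mix = np.hypot(np.abs(wP * S), np.abs(wI * T))  # sv of the stacked column
print(f"max ratio mu(N)/sv(N_mix) = {(mu_N / sv_mix).max():.3f}  (<= sqrt(2))")
```
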
#### Robust Performance for \\(2 \times 2\\) Distillation Process {#robust-performance-for--2-times-2--distillation-process}
Consider a distillation process and a corresponding inverse-based controller:

\begin{equation\*}
  G(s) = \frac{1}{75s + 1} \begin{bmatrix}
    87.8 & -86.4 \\\\\\
    108.2 & -109.6
  \end{bmatrix} ;
  \quad K(s) = \frac{0.7}{s} G(s)^{-1}
\end{equation\*}

The controller provides a nominally decoupled system:

\begin{equation\*}
  L = l I,\ S = \epsilon I \text{ and } T = t I
\end{equation\*}

where

\begin{equation\*}
  l = \frac{0.7}{s}, \ \epsilon = \frac{s}{s + 0.7}, \ t = \frac{0.7}{s + 0.7}
\end{equation\*}

The following weights for uncertainty and performance are used:

\begin{equation\*}
  w\_I(s) = \frac{s + 0.2}{0.5s + 1}; \quad w\_P(s) = \frac{s/2 + 0.05}{s}
\end{equation\*}

We now test for NS, NP, RS and RP.

##### NS {#ns}

With \\(G\\) and \\(K\\) as defined, we find that \\(S\\), \\(SG\\), \\(KS\\) and \\(T\\) are stable, so the system is nominally stable.

##### NP {#np}
With the decoupling controller we have:

\begin{equation\*}
  \overline{\sigma}(N\_{22}) = \overline{\sigma}(w\_P S) = \left|\frac{s/2 + 0.05}{s + 0.7}\right|
\end{equation\*}

and we see from Fig. [32](#org7d3694a) that the NP-condition is satisfied.

<a id="org7d3694a"></a>

{{< figure src="/ox-hugo/skogestad07_mu_plots_distillation.png" caption="Figure 32: \\(\mu\text{-plots}\\) for distillation process with decoupling controller" >}}
@@ -3765,8 +4529,12 @@ and we see from Fig. [fig:mu_plots_distillation](#fig:mu_plots_distillation
##### RS {#rs}
In this case \\(w\_I T\_I = w\_I T\\) is a scalar times the identity matrix:

\begin{equation\*}
  \mu\_{\Delta\_I}(w\_I T\_I) = |w\_I t| = \left|0.2 \frac{5s + 1}{(0.5s + 1)(1.43s + 1)}\right|
\end{equation\*}

and we see from Fig. [32](#org7d3694a) that RS is satisfied.
The peak value of \\(\mu\_{\Delta\_I}(M)\\) is \\(0.53\\) meaning that we may increase the uncertainty by a factor of \\(1/0.53 = 1.89\\) before the worst case uncertainty yields instability.
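
The stated peak value can be reproduced with a direct frequency sweep of \\(|w\_I t|\\):

```python
import numpy as np

omega = np.logspace(-3, 3, 5000)
s = 1j * omega
t = 0.7 / (s + 0.7)
wI = (s + 0.2) / (0.5 * s + 1)
peak = np.abs(wI * t).max()
print(f"peak |w_I t| = {peak:.2f}  ->  uncertainty margin = {1/peak:.2f}")
```
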
##### RP {#rp}
Although the system has good robustness margins and excellent nominal performance, the robust performance is poor.
This is shown in Fig. [32](#org7d3694a) where the \\(\mu\text{-curve}\\) for RP was computed numerically using \\(\mu\_{\hat{\Delta}}(N)\\), with \\(\hat{\Delta} = \text{diag}\\{\Delta\_I, \Delta\_P\\}\\) and \\(\Delta\_I = \text{diag}\\{\delta\_1, \delta\_2\\}\\).
The peak value is close to 6, meaning that even with 6 times less uncertainty, the weighted sensitivity will be about 6 times larger than what we require.

#### Robust Performance and the Condition Number {#robust-performance-and-the-condition-number}

We here consider the relationship between \\(\mu\\) for RP and the condition number of the plant or of the controller.
We consider unstructured multiplicative uncertainty (i.e. \\(\Delta\_I\\) is a full matrix) and performance is measured in terms of the weighted sensitivity.
With \\(N\\) given by \eqref{eq:n_delta_structure_clasic}, we have:

\begin{equation\*}
  \overbrace{\mu\_{\tilde{\Delta}}(N)}^{\text{RP}} \le [ \overbrace{\overline{\sigma}(w\_I T\_I)}^{\text{RS}} + \overbrace{\overline{\sigma}(w\_P S)}^{\text{NP}} ] (1 + \sqrt{k})
\end{equation\*}

where \\(k\\) is taken as the smallest value between the condition number of the plant and of the controller:

\begin{equation\*}
  k = \text{min}(\gamma(G), \gamma(K))
\end{equation\*}

We see that with a "round" controller (i.e. one with \\(\gamma(K) = 1\\)), there is less sensitivity to uncertainty.
On the other hand, we would expect \\(\mu\\) for RP to be large if we used an inverse-based controller for a plant with large condition number, since then \\(\gamma(K) = \gamma(G)\\) is large.

#### Comparison with Output Uncertainty {#comparison-with-output-uncertainty}

Consider output multiplicative uncertainty of magnitude \\(w\_O(j\omega)\\).
In this case, we get the interconnection matrix

\begin{equation\*}
  N = \begin{bmatrix}
    w\_O T & w\_O T \\\\\\
    w\_P S & w\_P S
  \end{bmatrix}
\end{equation\*}

and for any structure of the uncertainty, \\(\mu(N)\\) is bounded as follows:

\begin{equation\*}
  \overline{\sigma}\begin{bmatrix}
    w\_O T \\\\\\
    w\_P S
  \end{bmatrix} \le \overbrace{\mu(N)}^{\text{RP}} \le \sqrt{2}\ \overline{\sigma} \overbrace{\underbrace{\begin{bmatrix}
    w\_O T \\\\\\
    w\_P S
  \end{bmatrix}}\_{\text{NP}}}^{\text{RS}}
\end{equation\*}

This follows since the uncertainty and performance blocks both enter at the output, and since the difference between bounding the combined perturbations \\(\overline{\sigma}[\Delta\_O \ \Delta\_P]\\) and the individual perturbations \\(\overline{\sigma}(\Delta\_O)\\) and \\(\overline{\sigma}(\Delta\_P)\\) is at most a factor \\(\sqrt{2}\\).
Thus, we "automatically" achieve RP if we satisfy separately NP and RS.
Multiplicative output uncertainty then poses no particular problem for performance.

### \\(\mu\text{-synthesis}\\) and DK-iteration {#mu-synthesis-and-dk-iteration}


#### DK-iteration {#dk-iteration}

The **DK-iteration** combines \\(\hinf\\) synthesis and \\(\mu\text{-analysis}\\), and often yields good results.
The starting point is the upper bound on \\(\mu\\) in terms of the scaled singular value
\begin{equation} \label{eq:upper\_bound\_mu}
\mu(N) \le \min\_{D \in \mathcal{D}} \maxsv(D N D^{-1})
\end{equation}
The idea is to find the controller that minimizes the peak value over frequency of this upper bound, namely
\begin{equation} \label{eq:min\_peak\_value\_scale\_sv}
\min\_{K} \left( \min\_{D \in \mathcal{D}} \hnorm{D N(K) D^{-1} } \right)
\end{equation}

This is done by alternating between minimizing \\(\hnorm{DN(K)D^{-1}}\\) with respect to either \\(K\\) or \\(D\\) (while holding the other fixed).
To start the iterations, one selects an initial stable rational transfer matrix \\(D(s)\\) with appropriate structure.
The identity matrix is often a good initial choice for \\(D\\) provided the system has been reasonably scaled for performance.

**DK-Procedure**:

1. **K-step**. Synthesize an \\(\hinf\\) controller for the scaled problem, \\(\min\_{K} \hnorm{DN(K)D^{-1}}\\) with fixed \\(D(s)\\)
2. **D-step**. Find \\(D(j\w)\\) to minimize at each frequency \\(\maxsv(DND^{-1}(j\w))\\) with fixed \\(N\\)
3. Fit the magnitude of each element of \\(D(j\w)\\) to a stable and minimum phase transfer function \\(D(s)\\) and go to step 1

In \\(\mu\text{-synthesis}\\), the designer will usually adjust some parameter in the performance or uncertainty weights until the peak \\(\mu\text{-value}\\) is close to 1.
Sometimes, uncertainty is fixed and we effectively optimize worst-case performance by adjusting a parameter in the performance weight.
Consider the performance weight

\begin{equation\*}
  w\_p(s) = \frac{s/M + \w\_B^\*}{s + \w\_B^\* A}
\end{equation\*}

where we want to keep \\(M\\) constant and find the highest achievable bandwidth frequency \\(\w\_B^\*\\).
The optimization problem becomes

\begin{equation\*}
  \text{max} \abs{\w\_B^\*} \quad \text{such that} \quad \mu(N) < 1, \ \forall\w
\end{equation\*}

where \\(N\\), the interconnection matrix for the RP-problem, depends on \\(\w\_B^\*\\). This may be implemented as an **outer loop around the DK-iteration**.
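
A schematic sketch of such an outer loop: bisect on \\(\w\_B^\*\\), treating the whole DK-iteration as a black box. The function `mu_peak_after_dk` is a hypothetical stand-in (here a toy monotone model) for running DK-iteration with the given weight and returning the peak \\(\mu\\):

```python
import numpy as np

# Hypothetical stand-in for: run a full DK-iteration with performance weight
# w_p(s) = (s/M + wB)/(s + wB*A) and return the peak of mu(N) over frequency.
# Replaced by a toy monotone model so the sketch is self-contained.
def mu_peak_after_dk(wB):
    return 0.5 + 0.6 * wB

lo, hi = 1e-3, 1e3
for _ in range(50):                 # log-scale bisection on wB
    wB = np.sqrt(lo * hi)
    if mu_peak_after_dk(wB) < 1.0:
        lo = wB                     # RP achieved: try a higher bandwidth
    else:
        hi = wB
print(f"highest achievable bandwidth wB ~ {lo:.3f}")
```
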
#### Example: \\(\mu\text{-synthesis}\\) with DK-iteration {#example--mu-text-synthesis--with-dk-iteration}
For simplicity, we will consider again the case of multiplicative uncertainty and performance defined in terms of weighted sensitivity.
The uncertainty weight \\(w\_I I\\) and performance weight \\(w\_P I\\) are shown graphically in Fig. [33](#org3040e0c).

<a id="org3040e0c"></a>

{{< figure src="/ox-hugo/skogestad07_weights_distillation.png" caption="Figure 33: Uncertainty and performance weights" >}}

The scaling matrix \\(D\\) for \\(DND^{-1}\\) then has the structure \\(D = \text{diag}\\{d\_1, d\_2, d\_3 I\_2\\}\\) where we may normalize \\(d\_3 = 1\\).

- Iteration No. 1.
Step 1: with the initial scalings, the \\(\mathcal{H}\_\infty\\) synthesis produced a 6 state controller (2 states from the plant model and 2 from each of the weights).
  Step 2: the upper \\(\mu\text{-bound}\\) is shown in Fig. [34](#orgfced99f).
  Step 3: the frequency dependent \\(d\_1(\omega)\\) and \\(d\_2(\omega)\\) from step 2 are fitted using a 4th order transfer function shown in Fig. [35](#org462af49)
- Iteration No. 2.
Step 1: with the 8 state scalings \\(D^1(s)\\), the \\(\mathcal{H}\_\infty\\) synthesis gives a 22 state controller.
Step 2: This controller gives a peak value of \\(\mu\\) of \\(1.02\\).
- Iteration No. 3.
  Step 1: The \\(\mathcal{H}\_\infty\\) norm is only slightly reduced. We thus decide to stop the iterations.

<a id="orgfced99f"></a>

{{< figure src="/ox-hugo/skogestad07_dk_iter_mu.png" caption="Figure 34: Change in \\(\mu\\) during DK-iteration" >}}

<a id="org462af49"></a>

{{< figure src="/ox-hugo/skogestad07_dk_iter_d_scale.png" caption="Figure 35: Change in D-scale \\(d\_1\\) during DK-iteration" >}}
The final \\(\mu\text{-curves}\\) for NP, RS and RP with the controller \\(K\_3\\) are shown in Fig. [36](#org75929f1).
The objectives of RS and NP are easily satisfied.
The peak value of \\(\mu\\) is just slightly over 1, so the performance specification \\(\overline{\sigma}(w\_P S\_p) < 1\\) is almost satisfied for all possible plants.

<a id="org75929f1"></a>

{{< figure src="/ox-hugo/skogestad07_mu_plot_optimal_k3.png" caption="Figure 36: \\(mu\text{-plots}\\) with \\(\mu\\) \"optimal\" controller \\(K\_3\\)" >}}
To confirm this, 6 perturbed plants are used to compute the perturbed sensitivity functions shown in Fig. [37](#org73cb573).

<a id="org73cb573"></a>

{{< figure src="/ox-hugo/skogestad07_perturb_s_k3.png" caption="Figure 37: Perturbed sensitivity functions \\(\overline{\sigma}(S^\prime)\\) using \\(\mu\\) \"optimal\" controller \\(K\_3\\). Lower solid line: nominal plant. Upper solid line: worst-case plant" >}}
### Further Remarks on \\(\mu\\) {#further-remarks-on--mu}

### Conclusion {#conclusion}

We have discussed how to represent uncertainty and how to analyze its effect on stability (RS) and performance (RP).
To analyze robust stability of an uncertain system, we make use of the \\(M\Delta\text{-structure}\\) where \\(M\\) represents the transfer function for the "new" feedback part generated by the uncertainty.
From the small gain theorem

\begin{equation\*}
  \tcmbox{RS \quad \Leftarrow \quad \maxsv(M) < 1, \ \forall\w}
\end{equation\*}

which is tight (necessary and sufficient) for the special case where at each frequency any complex \\(\Delta\\) satisfying \\(\maxsv(\Delta) \le 1\\) is allowed.
More generally, the **tight condition is**

\begin{equation\*}
  \tcmbox{RS \quad \Leftrightarrow \quad \mu(M) < 1, \ \forall\w}
\end{equation\*}

where \\(\mu(M)\\) is the **structured singular value**. The calculation of \\(\mu\\) makes use of the fact that \\(\Delta\\) has a given block-diagonal structure, where certain blocks may also be real (e.g. to handle parametric uncertainty).
We defined robust performance as \\(\hnorm{F\_l(N, \Delta)} < 1\\) for all allowed \\(\Delta\\).
Since we used the \\(\hinf\\) norm in both the representation of uncertainty and the definition of performance, we found that RP could be viewed as a special case of RS, and we derived

\begin{equation\*}
  \tcmbox{RP \quad \Leftrightarrow \quad \mu(N) < 1, \ \forall\w}
\end{equation\*}

where \\(\mu\\) is computed with respect to the **block-diagonal structure** \\(\diag{\Delta, \Delta\_P}\\).
Here \\(\Delta\\) represents the uncertainty and \\(\Delta\_P\\) is a fictitious full uncertainty block representing the \\(\hinf\\) performance bound.
## Controller Design {#controller-design}
### Trade-offs in MIMO Feedback Design {#trade-offs-in-mimo-feedback-design}

By multivariable transfer function shaping, we mean the shaping of the singular values of appropriately specified transfer functions, such as the loop transfer function or one or more closed-loop transfer functions.
The classical loop-shaping ideas can be further generalized to MIMO systems by considering the singular values.
Consider the one degree-of-freedom system as shown in Fig. [38](#org86eebc5).
We have the following important relationships:
\begin{align}
  y(s) &= T(s) r(s) + S(s) d(s) - T(s) n(s) \\\\\\
  u(s) &= K(s) S(s) \big(r(s) - n(s) - d(s) \big)
\end{align}

<a id="org86eebc5"></a>

{{< figure src="/ox-hugo/skogestad07_classical_feedback_small.png" caption="Figure 38: One degree-of-freedom feedback configuration" >}}

**Typical Closed-Loop Objectives**:

1. For disturbance rejection make \\(\maxsv(S)\\) small
2. For noise attenuation make \\(\maxsv(T)\\) small
3. For reference tracking make \\(\maxsv(T) \approx \minsv(T) \approx 1\\)
4.  For input usage (control energy) reduction make \\(\maxsv(KS)\\) small
5.  For robust stability in the presence of an additive perturbation make \\(\maxsv(KS)\\) small
6.  For robust stability in the presence of a multiplicative output perturbation make \\(\maxsv(T)\\) small

These closed-loop objectives conflict, so feedback design is a trade-off over frequency.
This is not always as difficult as it sounds because the frequency ranges over which the objectives are important can be quite different.
In classical loop shaping, it is the magnitude of the open-loop transfer function \\(L = GK\\) which is shaped, whereas the above requirements are all in terms of closed-loop transfer functions.
However, we have that

\begin{equation\*}
  \minsv(L) - 1 \le \frac{1}{\maxsv(S)} \le \minsv(L) + 1
\end{equation\*}

from which we see that \\(\maxsv(S) \approx 1/\minsv(L)\\) at frequencies where \\(\minsv(L)\\) is much larger than \\(1\\).
Furthermore, from \\(T = L(I+L)^{-1}\\) it follows that \\(\maxsv(T) \approx \maxsv(L)\\) at frequencies where \\(\maxsv(L)\\) is much smaller than \\(1\\).
Thus, over specified frequency ranges, it is relatively easy to approximate the closed-loop requirements by open-loop objectives.
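
The first of these bounds can be confirmed numerically for random loop matrices \\(L\\), using \\(1/\maxsv(S) = \minsv(I + L)\\):

```python
import numpy as np

rng = np.random.default_rng(2)
ok = True
for _ in range(100):
    L = 10 * rng.standard_normal((3, 3))
    S = np.linalg.inv(np.eye(3) + L)
    min_sv_L = np.linalg.svd(L, compute_uv=False)[-1]
    inv_max_sv_S = 1.0 / np.linalg.svd(S, compute_uv=False)[0]
    ok &= (min_sv_L - 1 <= inv_max_sv_S <= min_sv_L + 1)
print(ok)   # True: the bound holds for every sample
```
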