Update Content - 2020-12-11

Thomas Dehaeze 2020-12-11 16:00:37 +01:00
parent 86eb3b2a13
commit 8f368b7515
11 changed files with 325 additions and 211 deletions

View File

@ -9,7 +9,7 @@ Tags
Reference
: ([McInroy 2002](#org48d21a1))
Author(s)
: McInroy, J.
@ -17,7 +17,7 @@ Author(s)
Year
: 2002
This short paper is very similar to ([McInroy 1999](#org287d886)).
> This paper develops guidelines for designing the flexure joints to facilitate closed-loop control.
@ -36,15 +36,15 @@ This short paper is very similar to ([McInroy 1999](#org4526c4b)).
## Flexure Jointed Hexapod Dynamics {#flexure-jointed-hexapod-dynamics}
<a id="org66e9285"></a>
{{< figure src="/ox-hugo/mcinroy02_leg_model.png" caption="Figure 1: The dynamics of the ith strut. A parallel spring, damper, and actuator drives the moving mass of the strut and a payload" >}}
The strut can be modeled as a parallel arrangement of an actuator force, a spring and some damping driving a mass (Figure [1](#org66e9285)).
Thus, **the strut does not output force directly, but rather outputs a mechanically filtered force**.
The model of the strut is shown in Figure [1](#org66e9285) with:
- \\(m\_{s\_i}\\) moving strut mass
- \\(k\_i\\) spring constant
@ -132,16 +132,16 @@ Many prior hexapod dynamic formulations assume that the strut exerts force only
The flexure joints of hexapods transmit forces (or torques) proportional to the deflection of the joints.
This section establishes design guidelines for the spherical flexure joint to guarantee that the dynamics remain tractable for control.
<a id="org343afcb"></a>
{{< figure src="/ox-hugo/mcinroy02_model_strut_joint.png" caption="Figure 2: A simplified dynamic model of a strut and its joint" >}}
Figure [2](#org343afcb) depicts a strut, along with the corresponding force diagram.
The force diagram is obtained using standard finite element assumptions (\\(\sin \theta \approx \theta\\)).
Damping terms are neglected.
\\(k\_r\\) denotes the rotational stiffness of the spherical joint.
From Figure [2](#org343afcb) (b), Newton's second law yields:
\begin{equation}
f\_p = \begin{bmatrix}
@ -188,7 +188,7 @@ The first part depends on the mechanical terms and the frequency of the movement
x\_{\text{gain}\_\omega} = \frac{|-m\_s \omega^2 + k|}{|-m\_s \omega^2 + \frac{k\_r}{l^2}|}
\end{equation}
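As a quick numerical illustration (a minimal Matlab sketch with assumed strut parameters \\(m\_s\\), \\(k\\), \\(k\_r\\) and \\(l\\), not values from the paper), the gain ratio above can be evaluated over frequency to see where the axial component dominates:

```matlab
% Gain of the axial (x) component of f_p relative to the transverse component
% Parameter values below are illustrative placeholders, not from the paper
ms = 0.1;      % moving strut mass [kg]
k  = 1e6;      % strut axial stiffness [N/m]
kr = 10;       % flexure rotational stiffness [Nm/rad]
l  = 0.1;      % strut length [m]

w = 2*pi*logspace(0, 4, 1000); % frequency vector [rad/s]
x_gain = abs(-ms*w.^2 + k) ./ abs(-ms*w.^2 + kr/l^2);

loglog(w/2/pi, x_gain);
xlabel('Frequency [Hz]'); ylabel('x_{gain}');
```

With such numbers \\(k \gg k\_r/l^2\\), so the axial component dominates at low frequency and the gain only falls back towards 1 near the strut resonance \\(\sqrt{k/m\_s}\\), which is what conditions \eqref{eq:cond_stiff} and \eqref{eq:cond_bandwidth} below formalize.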
<div class="bred"> <div class="important">
<div></div> <div></div>
In order to get dominance at low frequencies, the hexapod must be designed so that: In order to get dominance at low frequencies, the hexapod must be designed so that:
@ -206,7 +206,7 @@ By satisfying \eqref{eq:cond_stiff}, \\(f\_p\\) can be aligned with the strut fo
At frequencies much above the strut's resonance mode, \\(f\_p\\) is not dominated by its \\(x\\) component:
\\[ \omega \gg \sqrt{\frac{k}{m\_s}} \rightarrow x\_{\text{gain}\_\omega} \approx 1 \\]
<div class="important">
<div></div>
To ensure that the control system acts only in the band of frequencies where dominance is retained, the control bandwidth can be selected so that:
@ -225,7 +225,7 @@ In this case, it is reasonable to use:
\text{control bandwidth} \ll \sqrt{\frac{k}{m\_s}}
\end{equation}
<div class="important">
<div></div>
By designing the flexure jointed hexapod and its controller so that both \eqref{eq:cond_stiff} and \eqref{eq:cond_bandwidth} are met, the dynamics of the hexapod can be greatly reduced in complexity.
@ -271,6 +271,6 @@ By using the vector triple identity \\(a \cdot (b \times c) = b \cdot (c \times
## Bibliography {#bibliography}
<a id="org287d886"></a>McInroy, J.E. 1999. “Dynamic Modeling of Flexure Jointed Hexapods for Control Purposes.” In _Proceedings of the 1999 IEEE International Conference on Control Applications (Cat. No.99CH36328)_. <https://doi.org/10.1109/cca.1999.806694>.
<a id="org48d21a1"></a>———. 2002. “Modeling and Design of Flexure Jointed Stewart Platforms for Control Purposes.” _IEEE/ASME Transactions on Mechatronics_ 7 (1): 95–99. <https://doi.org/10.1109/3516.990892>.

View File

@ -9,7 +9,7 @@ Tags
Reference
: ([Fleming and Leang 2014](#org378bdb9))
Author(s)
: Fleming, A. J., & Leang, K. K.
@ -821,15 +821,15 @@ Year
### Amplifier and Piezo electrical models {#amplifier-and-piezo-electrical-models}
<a id="org80070ee"></a>
{{< figure src="/ox-hugo/fleming14_amplifier_model.png" caption="Figure 1: A voltage source \\(V\_s\\) driving a piezoelectric load. The actuator is modeled by a capacitance \\(C\_p\\) and strain-dependent voltage source \\(V\_p\\). The resistance \\(R\_s\\) is the output impedance and \\(L\\) the cable inductance." >}}
Consider the electrical circuit shown in Figure [1](#org80070ee) where a voltage source is connected to a piezoelectric actuator.
The actuator is modeled as a capacitance \\(C\_p\\) in series with a strain-dependent voltage source \\(V\_p\\).
The resistance \\(R\_s\\) and inductance \\(L\\) are the source impedance and the cable inductance respectively.
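A minimal sketch of that circuit, assuming the series \\(R\_s\\), \\(L\\), \\(C\_p\\) topology of Figure 1 with \\(V\_p = 0\\) (the component values are illustrative, not taken from the book), gives the transfer function from the source voltage to the voltage across the actuator:

```matlab
% Voltage across the piezo capacitance C_p when driven through R_s and L
% (strain-dependent source V_p set to zero); values are placeholders
Rs = 10;       % amplifier output impedance [Ohm]
L  = 250e-9;   % cable inductance, e.g. 1 m of RG-58 at ~250 nH/m [H]
Cp = 1e-6;     % piezo capacitance [F]

s  = tf('s');
Gv = 1/(L*Cp*s^2 + Rs*Cp*s + 1); % V_Cp / V_s

bode(Gv); % bandwidth here is set mainly by the R_s*C_p pole
```

With these numbers the cut-off set by \\(R\_s C\_p\\) is around 16 kHz, far below the \\(LC\\) resonance, which illustrates why the amplifier output impedance, rather than the cable inductance, usually dominates the electrical bandwidth.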
<div class="bgreen"> <div class="exampl">
<div></div> <div></div>
Typical inductance of standard RG-58 coaxial cable is \\(250 nH/m\\). Typical inductance of standard RG-58 coaxial cable is \\(250 nH/m\\).
@ -902,7 +902,7 @@ For sinusoidal signals, the amplifiers slew rate must exceed:
\\[ SR\_{\text{sin}} > V\_{p-p} \pi f \\]
where \\(V\_{p-p}\\) is the peak-to-peak voltage and \\(f\\) is the frequency.
<div class="exampl">
<div></div>
If a 300 kHz sine wave is to be reproduced with an amplitude of 10 V, the required slew rate is \\(\approx 20 V/\mu s\\).
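As a quick check of the arithmetic in that example (a sketch, nothing specific to the book):

```matlab
% Slew rate needed for a 300 kHz sine of 10 V amplitude (20 V peak-to-peak)
f   = 300e3;          % frequency [Hz]
Vpp = 2*10;           % peak-to-peak voltage [V]
SR  = Vpp*pi*f;       % required slew rate [V/s]
SR_V_per_us = SR/1e6  % ~18.8 V/us, i.e. approximately 20 V/us
```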
@ -948,4 +948,4 @@ The bandwidth limitations of standard piezoelectric drives were identified as:
## Bibliography {#bibliography}
<a id="org378bdb9"></a>Fleming, Andrew J., and Kam K. Leang. 2014. _Design, Modeling and Control of Nanopositioning Systems_. Advances in Industrial Control. Springer International Publishing. <https://doi.org/10.1007/978-3-319-06617-2>.

View File

@ -4,15 +4,11 @@ author = ["Thomas Dehaeze"]
draft = false
+++
Tags
: [Finite Element Model]({{< relref "finite_element_model" >}})
Reference
: ([Hatch 2000](#org50dffcf))
Author(s)
: Hatch, M. R.
@ -25,16 +21,16 @@ Matlab Code form the book is available [here](https://in.mathworks.com/matlabcen
## Introduction {#introduction}
<a id="orgda412b2"></a>
The main goal of this book is to show how to take results of large dynamic finite element models and build small Matlab state space dynamic mechanical models for use in control system models.
### Modal Analysis {#modal-analysis}
The diagram in Figure [1](#org4d6ba0d) shows the methodology for analyzing a lightly damped structure using normal modes.
<div class="important">
<div></div>
The steps are:
@ -50,7 +46,7 @@ The steps are:
</div>
<a id="org4d6ba0d"></a>
{{< figure src="/ox-hugo/hatch00_modal_analysis_flowchart.png" caption="Figure 1: Modal analysis method flowchart" >}}
@ -59,10 +55,10 @@ The steps are:
Because finite element models usually have a very large number of states, an important step is the reduction of the number of states while still providing correct responses for the forcing function input and desired output points.
<div class="important">
<div></div>
Figure [2](#org1081f0b) shows this process; the steps are:
- start with the finite element model
- compute the eigenvalues and eigenvectors (as many as dof in the model)
@ -75,14 +71,14 @@ Figure [2](#orgefbc9c9) shows such process, the steps are:
</div>
<a id="org1081f0b"></a>
{{< figure src="/ox-hugo/hatch00_model_reduction_chart.png" caption="Figure 2: Model size reduction flowchart" >}}
### Notations {#notations}
Tables [1](#table--tab:notations-modes-nodes), [2](#table--tab:notations-eigen-vectors-values) and [3](#table--tab:notations-stiffness-mass) summarize the notations of this document.
<a id="table--tab:notations-modes-nodes"></a>
<div class="table-caption">
@ -131,22 +127,22 @@ Tables [3](#org806d457), [2](#table--tab:notations-eigen-vectors-values) and [3]
## Zeros in SISO Mechanical Systems {#zeros-in-siso-mechanical-systems}
<a id="org8996806"></a>
The origin and influence of poles are clear: they represent the resonant frequencies of the system, and for each resonance frequency, a mode shape can be defined to describe the motion at that frequency.
We here wish to give an intuitive understanding for **when to expect zeros in SISO mechanical systems** and **how to predict the frequencies at which they will occur**.
Figure [3](#orgb6964ec) shows a series arrangement of masses and springs, with a total of \\(n\\) masses and \\(n+1\\) springs.
The degrees of freedom are numbered from left to right, \\(z\_1\\) through \\(z\_n\\).
<a id="orgb6964ec"></a>
{{< figure src="/ox-hugo/hatch00_n_dof_zeros.png" caption="Figure 3: n dof system showing various SISO input/output configurations" >}}
<div class="important">
<div></div>
([Miu 1993](#org03acd9e)) shows that the zeros of any particular transfer function are the poles of the constrained system to the left and/or right of the system defined by constraining the one or two dof's defining the transfer function.
The resonances of the "overhanging appendages" of the constrained system create the zeros.
@ -155,16 +151,16 @@ The resonances of the "overhanging appendages" of the constrained system create
## State Space Analysis {#state-space-analysis}
<a id="org8166c96"></a>
## Modal Analysis {#modal-analysis}
<a id="org331466a"></a>
Lightly damped structures are typically analyzed with the "normal mode" method described in this section.
<div class="important">
<div></div>
The modal method allows one to replace the n-coupled differential equations with n-uncoupled equations, where each uncoupled equation represents the motion of the system for that mode of vibration.
@ -176,7 +172,7 @@ The overall response of the system is then reconstructed as a superposition of t
Heavily damped structures, or structures with explicit damping elements such as dashpots, result in complex modes and require state space solution techniques using the original coupled equations of motion.
Thus, the present method only works for lightly damped structures.
<div class="important">
<div></div>
Summarizing the modal analysis method of analyzing linear mechanical systems and the benefits derived:
@ -200,9 +196,9 @@ Summarizing the modal analysis method of analyzing linear mechanical systems and
#### Equation of Motion {#equation-of-motion}
Let's consider the model shown in Figure [4](#org627cff8) with \\(k\_1 = k\_2 = k\\), \\(m\_1 = m\_2 = m\_3 = m\\) and \\(c\_1 = c\_2 = 0\\).
<a id="org627cff8"></a>
{{< figure src="/ox-hugo/hatch00_undamped_tdof_model.png" caption="Figure 4: Undamped tdof model" >}}
@ -237,7 +233,7 @@ The equations of motions are:
Since the system is conservative (it has no damping), normal modes of vibration will exist.
<div class="important">
<div></div>
Having normal modes means that at certain frequencies all points in the system will vibrate at the same frequency and in phase, i.e., **all points in the system will reach their minimum and maximum displacements at the same point in time**.
@ -301,17 +297,17 @@ One then find:
\end{bmatrix}
\end{equation}
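These eigenvalues and eigenvectors are easy to obtain numerically; the following is a minimal Matlab sketch for the undamped tdof model above (the choice \\(m = 1\\) and \\(k = 1\\) is illustrative and reproduces the 0, 1 and 1.73 rad/s frequencies of the figures below):

```matlab
% Undamped tdof model: m1 = m2 = m3 = m, k1 = k2 = k (free-free chain)
m = 1; k = 1; % illustrative values

M = m*eye(3);
K = k*[ 1 -1  0;
       -1  2 -1;
        0 -1  1];

% Generalized eigenvalue problem K*z = w^2*M*z
[z, w2] = eig(K, M);
w = sqrt(abs(diag(w2)));       % natural frequencies (abs() guards against
                               % round-off on the zero rigid-body eigenvalue)

% Normalize eigenvectors with respect to mass so that zn'*M*zn = I
zn = z ./ sqrt(diag(z'*M*z))'; % then zn'*K*zn = diag(w.^2)
```

The mass-normalized modal matrix `zn` is what makes the principal mass matrix the identity and the principal stiffness matrix \\(\text{diag}(\omega\_i^2)\\), as used in the following sections.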
Visual interpretations of the eigenvectors are shown in Figures [5](#org0396b30), [6](#orgd3bc915) and [7](#orgc82dccd).
<a id="org0396b30"></a>
{{< figure src="/ox-hugo/hatch00_tdof_mode_1.png" caption="Figure 5: Rigid-Body Mode, 0rad/s" >}}
<a id="orgd3bc915"></a>
{{< figure src="/ox-hugo/hatch00_tdof_mode_2.png" caption="Figure 6: Second Mode, Middle Mass Stationary, 1rad/s" >}}
<a id="orgc82dccd"></a>
{{< figure src="/ox-hugo/hatch00_tdof_mode_3.png" caption="Figure 7: Third Mode, 1.7rad/s" >}}
@ -340,7 +336,7 @@ It is thus useful to **transform the n-coupled second order differential equatio
In linear algebra terms, the transformation from physical to principal coordinates is known as a **change of basis**.
<div class="important">
<div></div>
There are many options for change of basis, but we will show that **when eigenvectors are used for the transformation, the principal coordinate system has a physical meaning: each of the uncoupled sdof systems represents the motion of a specific mode of vibration**.
@ -350,9 +346,9 @@ There are many options for change of basis, but we will show that **when eigenve
The n-uncoupled equations in the principal coordinate system can then be solved for the responses in the principal coordinate system using the well known solutions for the single dof systems.
The n-responses in the principal coordinate system can then be **transformed back** to the physical coordinate system to provide the actual response in physical coordinates.
This procedure is schematically shown in Figure [8](#org2a145bc).
<a id="org2a145bc"></a>
{{< figure src="/ox-hugo/hatch00_schematic_modal_solution.png" caption="Figure 8: Roadmap for Modal Solution" >}}
@ -472,7 +468,7 @@ The normalized stiffness matrix is known as the **spectral matrix**.
Normalizing with respect to mass results in an identity principal mass matrix and the squares of the eigenvalues on the diagonal of the principal stiffness matrix; this normalization technique is thus very useful for the following reason.
<div class="important">
<div></div>
Since we know the form of the principal matrices when normalizing with respect to mass, no multiplying of modal matrices is actually required: **the homogeneous principal equations of motion can be written by inspection knowing only the eigenvalues**.
@ -499,7 +495,7 @@ Pre-multiplying by \\(\bm{z}\_n^T\\) and inserting \\(I = \bm{z}\_n \bm{z}\_n^{-
Which is re-written in the following form:
<div class="important">
<div></div>
\begin{equation}
@ -530,7 +526,7 @@ where \\(\bm{z}\_0\\) and \\(\dot{\bm{z}}\_0\\) are the vectors of initial displ
We now have everything required to solve the equations in the principal coordinate system.
<div class="important">
<div></div>
The variables in physical coordinates are the positions and velocities of the masses.
@ -598,7 +594,7 @@ Let's now examine the displacement transformation from principal to physical coo
And thus, if we are only interested in the physical displacement of mass 2 (\\(z\_2 = z\_{n21} z\_{p1} + z\_{n22} z\_{p2} + z\_{n23} z\_{p3}\\)), only the second row of the modal matrix is required to transform the three displacements \\(z\_{p1}\\), \\(z\_{p2}\\), \\(z\_{p3}\\) in principal coordinates to \\(z\_2\\).
<div class="important">
<div></div>
**Only the rows of the modal matrix that correspond to degrees of freedom to which forces are applied and/or for which displacements are desired are required to complete the model.**
@ -700,7 +696,7 @@ Absolute damping is based on making \\(b = 0\\), in which case the percentage of
## Frequency Response: Modal Form {#frequency-response-modal-form}
<a id="orgcf74144"></a>
The procedure to obtain the frequency response from a modal form is as follows:
@ -708,9 +704,9 @@ The procedure to obtain the frequency response from a modal form is as follow:
- use Laplace transform to obtain the transfer functions in principal coordinates
- back-transform the transfer functions to physical coordinates where the individual mode contributions will be evident
This will be applied to the model shown in Figure [9](#org5228de8).
<a id="org5228de8"></a>
{{< figure src="/ox-hugo/hatch00_tdof_model.png" caption="Figure 9: tdof undamped model for modal analysis" >}}
@ -859,7 +855,7 @@ The forces transform in the principal coordinates using:
\bm{F}\_p = \bm{z}\_n^T \bm{F}
\end{equation}
<div class="important">
<div></div>
Thus, if \\(\bm{F}\\) is aligned with \\(\bm{z}\_{ni}\\) (the i'th normalized eigenvector), then \\(\bm{F}\_p\\) will be null except for its i'th term and only the i'th mode will be excited.
@ -871,7 +867,7 @@ Thus, if \\(\bm{F}\\) is aligned with \\(\bm{z}\_{ni}\\) (the i'th normalized ei
Any transfer function derived from the modal analysis is an additive combination of sdof systems.
<div class="important">
<div></div>
Each single degree of freedom system has a gain determined by the appropriate eigenvector entries and a resonant frequency given by the appropriate eigenvalue.
@ -892,9 +888,9 @@ Equations \eqref{eq:general_add_tf} and \eqref{eq:general_add_tf_damp} shows tha
</div>
Figure [10](#org36b2696) shows the separate contributions of each mode to the total response \\(z\_1/F\_1\\).
<a id="org36b2696"></a>
{{< figure src="/ox-hugo/hatch00_z11_tf.png" caption="Figure 10: Mode contributions to the transfer function from \\(F\_1\\) to \\(z\_1\\)" >}}
@ -903,16 +899,16 @@ The zeros for SISO transfer functions are the roots of the numerator, however, f
## SISO State Space Matlab Model from ANSYS Model {#siso-state-space-matlab-model-from-ansys-model}
<a id="org6520d55"></a>
### Introduction {#introduction}
In this section, a SISO state space Matlab model is developed from an ANSYS cantilever beam model, as shown in Figure [11](#org332d1e7).
A z-direction force is applied at the midpoint of the beam and the z displacement at the tip is the output.
The objective is to provide the smallest Matlab state space model that accurately represents the pertinent dynamics.
<a id="org332d1e7"></a>
{{< figure src="/ox-hugo/hatch00_cantilever_beam.png" caption="Figure 11: Cantilever beam with forcing function at midpoint" >}}
@ -991,7 +987,7 @@ If sorting of DC gain values is performed prior to the `truncate` operation, the
## Ground Acceleration Matlab Model From ANSYS Model {#ground-acceleration-matlab-model-from-ansys-model}
<a id="orgd3512da"></a>
### Model Description {#model-description}
@ -1005,25 +1001,25 @@ If sorting of DC gain values is performed prior to the `truncate` operation, the
## SISO Disk Drive Actuator Model {#siso-disk-drive-actuator-model}
<a id="org17e706f"></a>
In this section we wish to extract a SISO state space model from a finite element model representing a disk drive actuator (Figure [12](#org6d55a33)).
### Actuator Description {#actuator-description}
<a id="org6d55a33"></a>
{{< figure src="/ox-hugo/hatch00_disk_drive_siso_model.png" caption="Figure 12: Drawing of Actuator/Suspension system" >}}
The primary motion of the actuator is rotation about the pivot bearing, therefore the final model has the coordinate system transformed from a Cartesian x,y,z coordinate system to a cylindrical \\(r\\), \\(\theta\\) and \\(z\\) system, with the two origins coincident (Figure [13](#org482c35b)).
<a id="org482c35b"></a>
{{< figure src="/ox-hugo/hatch00_disk_drive_nodes_reduced_model.png" caption="Figure 13: Nodes used for reduced Matlab model. Shown with partial finite element mesh at coil" >}}
For reduced models, we only require eigenvector information for dof where forces are applied and where displacements are required.
Figure [13](#org482c35b) shows the nodes used for the reduced Matlab model.
The four nodes 24061, 24066, 24082 and 24087 are located in the center of the coil in the z direction and are used for simulating the VCM force.
The arrows at the nodes indicate the direction of forces.
@ -1046,7 +1042,7 @@ A recommended sequence for analyzing dynamic finite element models is:
A small section of the exported `.eig` file from ANSYS is shown below.
<div class="exampl">
<div></div>
LOAD STEP= 1 SUBSTEP= 1
@ -1086,7 +1082,7 @@ From Ansys, we have the eigenvalues \\(\omega\_i\\) and eigenvectors \\(\bm{z}\\
## Balanced Reduction {#balanced-reduction}
<a id="org2c8e979"></a>
In this chapter another method of reducing models, “balanced reduction”, will be introduced and compared with the DC and peak gain ranking methods.
@ -1201,14 +1197,14 @@ The **states to be kept are the states with the largest diagonal terms**.
## MIMO Two Stage Actuator Model {#mimo-two-stage-actuator-model}
<a id="org0b45098"></a>
In this section, a MIMO two-stage actuator model is derived from a finite element model (Figure [14](#orgdc24ed7)).
### Actuator Description {#actuator-description}
<a id="orgdc24ed7"></a>
{{< figure src="/ox-hugo/hatch00_disk_drive_mimo_schematic.png" caption="Figure 14: Drawing of actuator/suspension system" >}}
@ -1217,7 +1213,7 @@ The piezo actuator consists of a ceramic element that changes size when a voltag
When the fine positioning motion of the piezo is used in conjunction with the VCM's coarse positioning motion, a higher servo bandwidth is possible.
<div class="important">
<div></div>
Instead of applying voltage as the input into the piezo elements, we will assume that we have calculated an equivalent set of forces which can be applied at the ends of the element that will replicate the voltage force function.
@ -1230,9 +1226,9 @@ Since the same forces are being applied to both piezo elements, they represent t
### Ansys Model Description {#ansys-model-description}
Figure [15](#org40d5587) shows the principal nodes used for the model.
<a id="org40d5587"></a>
{{< figure src="/ox-hugo/hatch00_disk_drive_mimo_ansys.png" caption="Figure 15: Nodes used for reduced Matlab model, shown with partial mesh at coil and piezo element" >}}
@ -1351,11 +1347,11 @@ And we note:
G = zn * Gp;
```
<a id="org12f3141"></a>
{{< figure src="/ox-hugo/hatch00_z13_tf.png" caption="Figure 16: Mode contributions to the transfer function from \\(F\_1\\) to \\(z\_3\\)" >}}
<a id="orgd9eb688"></a>
{{< figure src="/ox-hugo/hatch00_z11_tf.png" caption="Figure 17: Mode contributions to the transfer function from \\(F\_1\\) to \\(z\_1\\)" >}}
@ -1453,13 +1449,13 @@ G_f = ss(A, B, C, D);
### Simple mode truncation {#simple-mode-truncation}
Let's plot the frequencies of the modes (Figure [18](#org152bcb2)).
<a id="org152bcb2"></a>
{{< figure src="/ox-hugo/hatch00_cant_beam_modes_freq.png" caption="Figure 18: Frequency of the modes" >}}
<a id="orge00504f"></a>
{{< figure src="/ox-hugo/hatch00_cant_beam_unsorted_dc_gains.png" caption="Figure 19: Unsorted DC Gains" >}}
@ -1528,7 +1524,7 @@ dc_gain = abs(xn(i_input, :).*xn(i_output, :))./(2*pi*f0).^2;
[dc_gain_sort, index_sort] = sort(dc_gain, 'descend');
```
<a id="orga1ddc35"></a>
{{< figure src="/ox-hugo/hatch00_cant_beam_sorted_dc_gains.png" caption="Figure 20: Sorted DC Gains" >}}
@ -1872,7 +1868,7 @@ wo = gram(G_m, 'o');
And we plot the diagonal terms.
<a id="org27ebe1f"></a>
{{< figure src="/ox-hugo/hatch00_gramians.png" caption="Figure 21: Observability and Controllability Gramians" >}}
@ -1890,7 +1886,7 @@ We use `balreal` to rank oscillatory states.
[G_b, G, T, Ti] = balreal(G_m);
```
<a id="org801e76e"></a>
{{< figure src="/ox-hugo/hatch00_cant_beam_gramian_balanced.png" caption="Figure 22: Sorted values of the Gramian of the balanced realization" >}}
@ -2135,6 +2131,6 @@ pos_frames = pos([1, i_input, i_output], :);
## Bibliography {#bibliography}
<a id="org50dffcf"></a>Hatch, Michael R. 2000. _Vibration Simulation Using MATLAB and ANSYS_. CRC Press.
<a id="org03acd9e"></a>Miu, Denny K. 1993. _Mechatronics: Electromechanics and Contromechanics_. 1st ed. Mechanical Engineering Series. Springer-Verlag New York.

View File

@ -8,7 +8,7 @@ Tags
: [Reference Books]({{< relref "reference_books" >}}), [Multivariable Control]({{< relref "multivariable_control" >}})
Reference
: ([Skogestad and Postlethwaite 2007](#org57bef6b))
Author(s)
: Skogestad, S., & Postlethwaite, I.
@ -19,7 +19,7 @@ Year
## Introduction {#introduction}
<a id="orga0078c7"></a>
### The Process of Control System Design {#the-process-of-control-system-design}
@ -190,7 +190,7 @@ Notations used throughout this note are summarized in tables&nbsp;[table:notatio
## Classical Feedback Control {#classical-feedback-control}
<a id="org7271725"></a>
### Frequency Response {#frequency-response}
@ -239,7 +239,7 @@ Thus, the input to the plant is \\(u = K(s) (r-y-n)\\).
The objective of control is to manipulate \\(u\\) (design \\(K\\)) such that the control error \\(e\\) remains small in spite of disturbances \\(d\\).
The control error is defined as \\(e = y-r\\).
<a id="org77fbf8e"></a>
{{< figure src="/ox-hugo/skogestad07_classical_feedback_alt.png" caption="Figure 1: Configuration for one degree-of-freedom control" >}}
@ -551,7 +551,7 @@ We cannot achieve both of these simultaneously with a single feedback controller
The solution is to use a **two degrees of freedom controller** where the reference signal \\(r\\) and output measurement \\(y\_m\\) are independently treated by the controller (Fig.&nbsp;[fig:classical_feedback_2dof_alt](#fig:classical_feedback_2dof_alt)), rather than operating on their difference \\(r - y\_m\\).
<a id="org81824cd"></a>
{{< figure src="/ox-hugo/skogestad07_classical_feedback_2dof_alt.png" caption="Figure 2: 2 degrees-of-freedom control architecture" >}}
@ -560,7 +560,7 @@ The controller can be slit into two separate blocks (Fig.&nbsp;[fig:classical_fe
- the **feedback controller** \\(K\_y\\) that is used to **reduce the effect of uncertainty** (disturbances and model errors)
- the **prefilter** \\(K\_r\\) that **shapes the commands** \\(r\\) to improve tracking performance
<a id="org787203e"></a>
{{< figure src="/ox-hugo/skogestad07_classical_feedback_sep.png" caption="Figure 3: 2 degrees-of-freedom control architecture with two separate blocks" >}}
@ -629,7 +629,7 @@ With (see Fig.&nbsp;[fig:performance_weigth](#fig:performance_weigth)):
</div>
<a id="orgca017f5"></a>
{{< figure src="/ox-hugo/skogestad07_weight_first_order.png" caption="Figure 4: Inverse of performance weight" >}}
@ -653,7 +653,7 @@ After selecting the form of \\(N\\) and the weights, the \\(\hinf\\) optimal con
## Introduction to Multivariable Control {#introduction-to-multivariable-control}
<a id="orgbf0f66e"></a>
### Introduction {#introduction}
@ -696,7 +696,7 @@ For negative feedback system (Fig.&nbsp;[fig:classical_feedback_bis](#fig:classi
- \\(S \triangleq (I + L)^{-1}\\) is the transfer function from \\(d\_1\\) to \\(y\\)
- \\(T \triangleq L(I + L)^{-1}\\) is the transfer function from \\(r\\) to \\(y\\)
<a id="org8fc2a9c"></a>
{{< figure src="/ox-hugo/skogestad07_classical_feedback_bis.png" caption="Figure 5: Conventional negative feedback control system" >}}
@ -1011,7 +1011,7 @@ The **structured singular value** \\(\mu\\) is a tool for analyzing the effects
The general control problem formulation is represented in Fig.&nbsp;[fig:general_control_names](#fig:general_control_names).
<a id="org0f9e5cf"></a>
{{< figure src="/ox-hugo/skogestad07_general_control_names.png" caption="Figure 6: General control configuration" >}}
@ -1041,7 +1041,7 @@ We consider:
- The weighted or normalized exogenous inputs \\(w\\) (where \\(\tilde{w} = W\_w w\\) consists of the "physical" signals entering the system)
- The weighted or normalized controlled outputs \\(z = W\_z \tilde{z}\\) (where \\(\tilde{z}\\) often consists of the control error \\(y-r\\) and the manipulated input \\(u\\))
<a id="org5d57dcb"></a>
{{< figure src="/ox-hugo/skogestad07_general_plant_weights.png" caption="Figure 7: General Weighted Plant" >}}
@ -1084,7 +1084,7 @@ where \\(F\_l(P, K)\\) denotes a **lower linear fractional transformation** (LFT
The general control configuration may be extended to include model uncertainty as shown in Fig.&nbsp;[fig:general_config_model_uncertainty](#fig:general_config_model_uncertainty).
<a id="orgc0d2312"></a>
{{< figure src="/ox-hugo/skogestad07_general_control_Mdelta.png" caption="Figure 8: General control configuration for the case with model uncertainty" >}}
@ -1112,7 +1112,7 @@ MIMO systems are often **more sensitive to uncertainty** than SISO systems.
## Elements of Linear System Theory {#elements-of-linear-system-theory}
<a id="org9517705"></a>
### System Descriptions {#system-descriptions}
@ -1398,7 +1398,7 @@ RHP-zeros therefore imply high gain instability.
### Internal Stability of Feedback Systems {#internal-stability-of-feedback-systems}
<a id="orgbd7faac"></a>
{{< figure src="/ox-hugo/skogestad07_classical_feedback_stability.png" caption="Figure 9: Block diagram used to check internal stability" >}}
@ -1545,7 +1545,7 @@ It may be shown that the Hankel norm is equal to \\(\left\\|G(s)\right\\|\_H = \
## Limitations on Performance in SISO Systems {#limitations-on-performance-in-siso-systems}
<a id="org92b7ead"></a>
### Input-Output Controllability {#input-output-controllability}
@ -1937,7 +1937,7 @@ Uncertainty in the crossover frequency region can result in poor performance and
### Summary: Controllability Analysis with Feedback Control {#summary-controllability-analysis-with-feedback-control}
<a id="orgcf527a3"></a>
{{< figure src="/ox-hugo/skogestad07_classical_feedback_meas.png" caption="Figure 10: Feedback control system" >}}
@ -1966,7 +1966,7 @@ In summary:
Sometimes, the disturbances are so large that we hit input saturation or the required bandwidth is not achievable. To avoid the latter problem, we must at least require that the effect of the disturbance is less than \\(1\\) at frequencies beyond the bandwidth:
\\[ \abs{G\_d(j\w)} < 1 \quad \forall \w \geq \w\_c \\]
<a id="org6de05c1"></a>
{{< figure src="/ox-hugo/skogestad07_margin_requirements.png" caption="Figure 11: Illustration of controllability requirements" >}}
@ -1988,7 +1988,7 @@ The rules may be used to **determine whether or not a given plant is controllabl
## Limitations on Performance in MIMO Systems {#limitations-on-performance-in-mimo-systems}
<a id="org2a52a06"></a>
### Introduction {#introduction}
@ -2299,7 +2299,7 @@ We here focus on input and output uncertainty.
In multiplicative form, the input and output uncertainties are given by (see Fig.&nbsp;[fig:input_output_uncertainty](#fig:input_output_uncertainty)):
\\[ G^\prime = (I + E\_O) G (I + E\_I) \\]
<a id="orge254987"></a>
{{< figure src="/ox-hugo/skogestad07_input_output_uncertainty.png" caption="Figure 12: Plant with multiplicative input and output uncertainty" >}}
@ -2435,7 +2435,7 @@ However, the situation is usually the opposite with model uncertainty because fo
## Uncertainty and Robustness for SISO Systems {#uncertainty-and-robustness-for-siso-systems}
<a id="org7590b78"></a>
### Introduction to Robustness {#introduction-to-robustness}
@ -2509,7 +2509,7 @@ which may be represented by the diagram in Fig.&nbsp;[fig:input_uncertainty_set]
</div>
<a id="org59d99b4"></a>
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set.png" caption="Figure 13: Plant with multiplicative uncertainty" >}}
@ -2563,7 +2563,7 @@ To illustrate how parametric uncertainty translate into frequency domain uncerta
In general, these uncertain regions have complicated shapes and complex mathematical descriptions.
- **Step 2**. We therefore approximate such complex regions as discs, resulting in a **complex additive uncertainty description**
<a id="org9aee3fc"></a>
{{< figure src="/ox-hugo/skogestad07_uncertainty_region.png" caption="Figure 14: Uncertainty regions of the Nyquist plot at given frequencies" >}}
@ -2586,7 +2586,7 @@ At each frequency, all possible \\(\Delta(j\w)\\) "generates" a disc-shaped regi
</div>
<a id="org25a3a51"></a>
{{< figure src="/ox-hugo/skogestad07_uncertainty_disc_generated.png" caption="Figure 15: Disc-shaped uncertainty regions generated by complex additive uncertainty" >}}
@ -2643,7 +2643,7 @@ To derive \\(w\_I(s)\\), we then try to find a simple weight so that \\(\abs{w\_
</div>
<a id="org15a3cec"></a>
{{< figure src="/ox-hugo/skogestad07_uncertainty_weight.png" caption="Figure 16: Relative error for 27 combinations of \\(k,\ \tau\\) and \\(\theta\\). Solid and dashed lines: two weights \\(\abs{w\_I}\\)" >}}
@ -2682,7 +2682,7 @@ The magnitude of the relative uncertainty caused by neglecting the dynamics in \
Let \\(f(s) = e^{-\theta\_p s}\\), where \\(0 \le \theta\_p \le \theta\_{\text{max}}\\). We want to represent \\(G\_p(s) = G\_0(s)e^{-\theta\_p s}\\) by a delay-free plant \\(G\_0(s)\\) and multiplicative uncertainty. Let us first consider the maximum delay, for which the relative error \\(\abs{1 - e^{-j \w \theta\_{\text{max}}}}\\) is shown as a function of frequency (Fig.&nbsp;[fig:neglected_time_delay](#fig:neglected_time_delay)). If we consider all \\(\theta \in [0, \theta\_{\text{max}}]\\) then:
\\[ l\_I(\w) = \begin{cases} \abs{1 - e^{-j\w\theta\_{\text{max}}}} & \w < \pi/\theta\_{\text{max}} \\ 2 & \w \ge \pi/\theta\_{\text{max}} \end{cases} \\]
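As a quick sanity check (a sketch, not from the book; \\(\theta\_{\text{max}}\\) is an assumed value), the worst-case relative error over all delays in \\([0, \theta\_{\text{max}}]\\) can be compared numerically against this bound:

```python
import numpy as np

# Assumed illustrative value for the maximum neglected delay
theta_max = 0.1                                # [s]
w = np.logspace(-1, 3, 400)                    # frequency grid [rad/s]
thetas = np.linspace(0, theta_max, 200)        # delays theta_p in [0, theta_max]

# Worst-case relative error max_theta |1 - exp(-j w theta)|
worst = np.abs(1 - np.exp(-1j * np.outer(w, thetas))).max(axis=1)

# Bound l_I(w) quoted above
l_I = np.where(w < np.pi / theta_max,
               np.abs(1 - np.exp(-1j * w * theta_max)),
               2.0)

assert np.all(worst <= l_I + 1e-9)             # the bound covers every admissible delay
```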
<a id="org45ae2b1"></a>
{{< figure src="/ox-hugo/skogestad07_neglected_time_delay.png" caption="Figure 17: Neglected time delay" >}}
@ -2692,7 +2692,7 @@ Let \\(f(s) = e^{-\theta\_p s}\\), where \\(0 \le \theta\_p \le \theta\_{\text{m
Let \\(f(s) = 1/(\tau\_p s + 1)\\), where \\(0 \le \tau\_p \le \tau\_{\text{max}}\\). In this case the resulting \\(l\_I(\w)\\) (Fig.&nbsp;[fig:neglected_first_order_lag](#fig:neglected_first_order_lag)) can be represented by a rational transfer function with \\(\abs{w\_I(j\w)} = l\_I(\w)\\) where
\\[ w\_I(s) = \frac{\tau\_{\text{max}} s}{\tau\_{\text{max}} s + 1} \\]
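The same kind of numerical check (again a sketch, with an assumed \\(\tau\_{\text{max}}\\)) confirms that this rational weight covers the relative error for every \\(\tau\_p \in [0, \tau\_{\text{max}}]\\):

```python
import numpy as np

tau_max = 0.05                                 # assumed maximum neglected lag [s]
w = np.logspace(-1, 4, 400)                    # [rad/s]
taus = np.linspace(0, tau_max, 200)

s = 1j * w[:, None]
rel_err = np.abs(1.0 / (taus * s + 1) - 1.0)   # |G_p/G_0 - 1| for each tau_p
l_I = rel_err.max(axis=1)                      # worst case over tau_p

w_I = np.abs(tau_max * 1j * w / (tau_max * 1j * w + 1))
assert np.all(l_I <= w_I + 1e-9)               # the rational weight covers the lag uncertainty
```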
<a id="orga754d5c"></a>
{{< figure src="/ox-hugo/skogestad07_neglected_first_order_lag.png" caption="Figure 18: Neglected first-order lag uncertainty" >}}
@ -2709,7 +2709,7 @@ However, as shown in Fig.&nbsp;[fig:lag_delay_uncertainty](#fig:lag_delay_uncert
It is suggested to start with the simple weight and then, if needed, to try the higher-order weight.
<a id="org51e6318"></a>
{{< figure src="/ox-hugo/skogestad07_lag_delay_uncertainty.png" caption="Figure 19: Multiplicative weight for gain and delay uncertainty" >}}
@ -2749,7 +2749,7 @@ We use the Nyquist stability condition to test for robust stability of the close
&\Longleftrightarrow \quad L\_p \ \text{should not encircle -1}, \ \forall L\_p
\end{align\*}
<a id="org8a7056b"></a>
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set_feedback.png" caption="Figure 20: Feedback system with multiplicative uncertainty" >}}
@ -2765,7 +2765,7 @@ Encirclements are avoided if none of the discs cover \\(-1\\), and we get:
&\Leftrightarrow \quad \abs{w\_I T} < 1, \ \forall\w \\\\\\
\end{align\*}
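In practice this robust stability test is just a frequency sweep of \\(\abs{w\_I T}\\). A minimal sketch, assuming an illustrative nominal loop \\(L(s)\\) and the first-order uncertainty weight introduced earlier:

```python
import numpy as np

w = np.logspace(-2, 4, 500)
s = 1j * w

L = 3.0 / (s * (0.1 * s + 1))                  # assumed nominal loop transfer function
T = L / (1 + L)                                # complementary sensitivity
w_I = 0.05 * s / (0.05 * s + 1)                # assumed multiplicative uncertainty weight

rs = np.abs(w_I * T)
print("max |w_I T| =", rs.max())               # < 1 -> robust stability for this uncertainty set
```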
<a id="org8bc35ca"></a>
{{< figure src="/ox-hugo/skogestad07_nyquist_uncertainty.png" caption="Figure 21: Nyquist plot of \\(L\_p\\) for robust stability" >}}
@ -2803,7 +2803,7 @@ And we obtain the same condition as before.
We will derive a corresponding RS-condition for a feedback system with inverse multiplicative uncertainty (Fig.&nbsp;[fig:inverse_uncertainty_set](#fig:inverse_uncertainty_set)) in which
\\[ G\_p = G(1 + w\_{iI}(s) \Delta\_{iI})^{-1} \\]
<a id="org3e7ba07"></a>
{{< figure src="/ox-hugo/skogestad07_inverse_uncertainty_set.png" caption="Figure 22: Feedback system with inverse multiplicative uncertainty" >}}
@ -2855,7 +2855,7 @@ The condition for nominal performance when considering performance in terms of t
Now \\(\abs{1 + L}\\) represents at each frequency the distance of \\(L(j\omega)\\) from the point \\(-1\\) in the Nyquist plot, so \\(L(j\omega)\\) must be at least a distance of \\(\abs{w\_P(j\omega)}\\) from \\(-1\\).
This is illustrated graphically in Fig.&nbsp;[fig:nyquist_performance_condition](#fig:nyquist_performance_condition).
<a id="org22ac31b"></a>
{{< figure src="/ox-hugo/skogestad07_nyquist_performance_condition.png" caption="Figure 23: Nyquist plot illustration of the nominal performance condition \\(\abs{w\_P} < \abs{1 + L}\\)" >}}
@ -2880,7 +2880,7 @@ Let's consider the case of multiplicative uncertainty as shown on Fig.&nbsp;[fig
The robust performance corresponds to requiring \\(\abs{\hat{y}/d}<1\ \forall \Delta\_I\\) and the set of possible loop transfer functions is
\\[ L\_p = G\_p K = L (1 + w\_I \Delta\_I) = L + w\_I L \Delta\_I \\]
<a id="orga419cf8"></a>
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set_feedback_weight_bis.png" caption="Figure 24: Diagram for robust performance with multiplicative uncertainty" >}}
@ -3046,7 +3046,7 @@ with \\(\Phi(s) \triangleq (sI - A)^{-1}\\).
This is illustrated in the block diagram of Fig.&nbsp;[fig:uncertainty_state_a_matrix](#fig:uncertainty_state_a_matrix), which is in the form of an inverse additive perturbation.
<a id="org7cd4d84"></a>
{{< figure src="/ox-hugo/skogestad07_uncertainty_state_a_matrix.png" caption="Figure 25: Uncertainty in state space A-matrix" >}}
@ -3064,7 +3064,7 @@ We also derived a condition for robust performance with multiplicative uncertain
## Robust Stability and Performance Analysis {#robust-stability-and-performance-analysis}
<a id="org2be789b"></a>
### General Control Configuration with Uncertainty {#general-control-configuration-with-uncertainty}
@ -3075,13 +3075,13 @@ where each \\(\Delta\_i\\) represents a **specific source of uncertainty**, e.g.
If we also pull out the controller \\(K\\), we get the generalized plant \\(P\\) as shown in Fig.&nbsp;[fig:general_control_delta](#fig:general_control_delta). This form is useful for controller synthesis.
<a id="orgc602523"></a>
{{< figure src="/ox-hugo/skogestad07_general_control_delta.png" caption="Figure 26: General control configuration used for controller synthesis" >}}
If the controller is given and we want to analyze the uncertain system, we use the \\(N\Delta\text{-structure}\\) in Fig.&nbsp;[fig:general_control_Ndelta](#fig:general_control_Ndelta).
<a id="orgb849575"></a>
{{< figure src="/ox-hugo/skogestad07_general_control_Ndelta.png" caption="Figure 27: \\(N\Delta\text{-structure}\\) for robust performance analysis" >}}
@ -3101,7 +3101,7 @@ Similarly, the uncertain closed-loop transfer function from \\(w\\) to \\(z\\),
To analyze robust stability of \\(F\\), we can rearrange the system into the \\(M\Delta\text{-structure}\\) shown in Fig.&nbsp;[fig:general_control_Mdelta_bis](#fig:general_control_Mdelta_bis) where \\(M = N\_{11}\\) is the transfer function from the output to the input of the perturbations.
<a id="org8eb2223"></a>
{{< figure src="/ox-hugo/skogestad07_general_control_Mdelta_bis.png" caption="Figure 28: \\(M\Delta\text{-structure}\\) for robust stability analysis" >}}
@ -3153,7 +3153,7 @@ Three common forms of **feedforward unstructured uncertainty** are shown Fig.&nb
| ![](/ox-hugo/skogestad07_additive_uncertainty.png) | ![](/ox-hugo/skogestad07_input_uncertainty.png) | ![](/ox-hugo/skogestad07_output_uncertainty.png) |
|----------------------------------------------------|----------------------------------------------------------|-----------------------------------------------------------|
| <a id="org27b2961"></a> Additive uncertainty | <a id="org269d6fa"></a> Multiplicative input uncertainty | <a id="org51f791f"></a> Multiplicative output uncertainty |
In Fig.&nbsp;[fig:feedback_uncertainty](#fig:feedback_uncertainty), three **feedback or inverse unstructured uncertainty** forms are shown: inverse additive uncertainty, inverse multiplicative input uncertainty and inverse multiplicative output uncertainty.
@ -3176,7 +3176,7 @@ In Fig.&nbsp;[fig:feedback_uncertainty](#fig:feedback_uncertainty), three **feed
| ![](/ox-hugo/skogestad07_inv_additive_uncertainty.png) | ![](/ox-hugo/skogestad07_inv_input_uncertainty.png) | ![](/ox-hugo/skogestad07_inv_output_uncertainty.png) |
|--------------------------------------------------------|------------------------------------------------------------------|-------------------------------------------------------------------|
| <a id="orgc0a9c0b"></a> Inverse additive uncertainty | <a id="org90a3fb2"></a> Inverse multiplicative input uncertainty | <a id="orgb1747a9"></a> Inverse multiplicative output uncertainty |
##### Lumping uncertainty into a single perturbation {#lumping-uncertainty-into-a-single-perturbation}
@ -3246,7 +3246,7 @@ where \\(r\_0\\) is the relative uncertainty at steady-state, \\(1/\tau\\) is th
Let's consider the feedback system with multiplicative input uncertainty \\(\Delta\_I\\) shown in Fig.&nbsp;[fig:input_uncertainty_set_feedback_weight](#fig:input_uncertainty_set_feedback_weight).
\\(W\_I\\) is a normalization weight for the uncertainty and \\(W\_P\\) is a performance weight.
<a id="org31ea15f"></a>
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_set_feedback_weight.png" caption="Figure 29: System with multiplicative input uncertainty and performance measured at the output" >}}
@ -3406,7 +3406,7 @@ Where \\(G = M\_l^{-1} N\_l\\) is a left coprime factorization of the nominal pl
This uncertainty description is surprisingly **general**: it allows both zeros and poles to cross into the right-half plane, and has proven to be very useful in applications.
<a id="org3996f6b"></a>
{{< figure src="/ox-hugo/skogestad07_coprime_uncertainty.png" caption="Figure 30: Coprime Uncertainty" >}}
@ -3438,7 +3438,7 @@ where \\(d\_i\\) is a scalar and \\(I\_i\\) is an identity matrix of the same di
Now rescale the inputs and outputs of \\(M\\) and \\(\Delta\\) by inserting the matrices \\(D\\) and \\(D^{-1}\\) on both sides as shown in Fig.&nbsp;[fig:block_diagonal_scalings](#fig:block_diagonal_scalings).
This clearly has no effect on stability.
<a id="org5377248"></a>
{{< figure src="/ox-hugo/skogestad07_block_diagonal_scalings.png" caption="Figure 31: Use of block-diagonal scalings, \\(\Delta D = D \Delta\\)" >}}
@ -3754,7 +3754,7 @@ with the decoupling controller we have:
\\[ \bar{\sigma}(N\_{22}) = \bar{\sigma}(w\_P S) = \left|\frac{s/2 + 0.05}{s + 0.7}\right| \\]
and we see from Fig.&nbsp;[fig:mu_plots_distillation](#fig:mu_plots_distillation) that the NP-condition is satisfied.
<a id="org6561d8a"></a>
{{< figure src="/ox-hugo/skogestad07_mu_plots_distillation.png" caption="Figure 32: \\(\mu\text{-plots}\\) for distillation process with decoupling controller" >}}
@ -3877,7 +3877,7 @@ The latter is an attempt to "flatten out" \\(\mu\\).
For simplicity, we will consider again the case of multiplicative uncertainty and performance defined in terms of weighted sensitivity.
The uncertainty weight \\(w\_I I\\) and performance weight \\(w\_P I\\) are shown graphically in Fig.&nbsp;[fig:weights_distillation](#fig:weights_distillation).
<a id="org4a75c93"></a>
{{< figure src="/ox-hugo/skogestad07_weights_distillation.png" caption="Figure 33: Uncertainty and performance weights" >}}
@ -3900,11 +3900,11 @@ The scaling matrix \\(D\\) for \\(DND^{-1}\\) then has the structure \\(D = \tex
- Iteration No. 3.
Step 1: The \\(\mathcal{H}\_\infty\\) norm is only slightly reduced. We thus decide to stop the iterations.
<a id="org316e326"></a>
{{< figure src="/ox-hugo/skogestad07_dk_iter_mu.png" caption="Figure 34: Change in \\(\mu\\) during DK-iteration" >}}
<a id="org585c918"></a>
{{< figure src="/ox-hugo/skogestad07_dk_iter_d_scale.png" caption="Figure 35: Change in D-scale \\(d\_1\\) during DK-iteration" >}}
@ -3912,13 +3912,13 @@ The final \\(\mu\text{-curves}\\) for NP, RS and RP with the controller \\(K\_3\
The objectives of RS and NP are easily satisfied.
The peak value of \\(\mu\\) is just slightly over 1, so the performance specification \\(\bar{\sigma}(w\_P S\_p) < 1\\) is almost satisfied for all possible plants.
<a id="orgc63d84a"></a>
{{< figure src="/ox-hugo/skogestad07_mu_plot_optimal_k3.png" caption="Figure 36: \\(\mu\text{-plots}\\) with \\(\mu\\) \"optimal\" controller \\(K\_3\\)" >}}
To confirm that, 6 perturbed plants are used to compute the perturbed sensitivity functions shown in Fig.&nbsp;[fig:perturb_s_k3](#fig:perturb_s_k3).
<a id="orgfc73254"></a>
{{< figure src="/ox-hugo/skogestad07_perturb_s_k3.png" caption="Figure 37: Perturbed sensitivity functions \\(\bar{\sigma}(S^\prime)\\) using \\(\mu\\) \"optimal\" controller \\(K\_3\\). Lower solid line: nominal plant. Upper solid line: worst-case plant" >}}
@ -3973,7 +3973,7 @@ If resulting control performance is not satisfactory, one may switch to the seco
## Controller Design {#controller-design}
<a id="org81cd286"></a>
### Trade-offs in MIMO Feedback Design {#trade-offs-in-mimo-feedback-design}
@ -3993,7 +3993,7 @@ We have the following important relationships:
\end{align}
\end{subequations}
<a id="orgfc101bb"></a>
{{< figure src="/ox-hugo/skogestad07_classical_feedback_small.png" caption="Figure 38: One degree-of-freedom feedback configuration" >}}
@ -4035,7 +4035,7 @@ Thus, over specified frequency ranges, it is relatively easy to approximate the
Typically, the open-loop requirements 1 and 3 are valid and important at low frequencies \\(0 \le \omega \le \omega\_l \le \omega\_B\\), while conditions 2, 4, 5 and 6 are valid and important at high frequencies \\(\omega\_B \le \omega\_h \le \omega \le \infty\\), as illustrated in Fig.&nbsp;[fig:design_trade_off_mimo_gk](#fig:design_trade_off_mimo_gk).
<a id="orgb8b3048"></a>
{{< figure src="/ox-hugo/skogestad07_design_trade_off_mimo_gk.png" caption="Figure 39: Design trade-offs for the multivariable loop transfer function \\(GK\\)" >}}
@ -4092,7 +4092,7 @@ The solution to the LQG problem is then found by replacing \\(x\\) by \\(\hat{x}
We therefore see that the LQG problem and its solution can be separated into two distinct parts as illustrated in Fig.&nbsp;[fig:lqg_separation](#fig:lqg_separation): the optimal state feedback and the optimal state estimator (the Kalman filter).
<a id="org0ee8748"></a>
{{< figure src="/ox-hugo/skogestad07_lqg_separation.png" caption="Figure 40: The separation theorem" >}}
@ -4142,7 +4142,7 @@ Where \\(Y\\) is the unique positive-semi definite solution of the algebraic Ric
</div>
<a id="org9328dd8"></a>
{{< figure src="/ox-hugo/skogestad07_lqg_kalman_filter.png" caption="Figure 41: The LQG controller and noisy plant" >}}
@ -4163,7 +4163,7 @@ It has the same degree (number of poles) as the plant.<br />
For the LQG-controller, as shown in Fig.&nbsp;[fig:lqg_kalman_filter](#fig:lqg_kalman_filter), it is not easy to see where to position the reference input \\(r\\) and how integral action may be included, if desired. Indeed, the standard LQG design procedure does not give a controller with integral action. One strategy is illustrated in Fig.&nbsp;[fig:lqg_integral](#fig:lqg_integral). Here, the control error \\(r-y\\) is integrated and the regulator \\(K\_r\\) is designed for the plant augmented with these integral states.
<a id="orgb96cd46"></a>
{{< figure src="/ox-hugo/skogestad07_lqg_integral.png" caption="Figure 42: LQG controller with integral action and reference input" >}}
@ -4176,18 +4176,18 @@ Their main limitation is that they can only be applied to minimum phase plants.
### \\(\htwo\\) and \\(\hinf\\) Control {#htwo--and--hinf--control}
<a id="orga525cc0"></a>
#### General Control Problem Formulation {#general-control-problem-formulation}
<a id="orgfc02b74"></a>
There are many ways in which feedback design problems can be cast as \\(\htwo\\) and \\(\hinf\\) optimization problems.
It is very useful therefore to have a **standard problem formulation** into which any particular problem may be manipulated.
Such a general formulation is afforded by the general configuration shown in Fig.&nbsp;[fig:general_control](#fig:general_control).
<a id="orgd8daf7e"></a>
{{< figure src="/ox-hugo/skogestad07_general_control.png" caption="Figure 43: General control configuration" >}}
@ -4438,7 +4438,7 @@ The elements of the generalized plant are
\end{array}
\end{equation\*}
<a id="orgf4bf125"></a>
{{< figure src="/ox-hugo/skogestad07_mixed_sensitivity_dist_rejection.png" caption="Figure 44: \\(S/KS\\) mixed-sensitivity optimization in standard form (regulation)" >}}
@ -4447,7 +4447,7 @@ Here we consider a tracking problem.
The exogenous input is a reference command \\(r\\), and the error signals are \\(z\_1 = -W\_1 e = W\_1 (r-y)\\) and \\(z\_2 = W\_2 u\\).
As in the regulation problem of Fig.&nbsp;[fig:mixed_sensitivity_dist_rejection](#fig:mixed_sensitivity_dist_rejection), we have that \\(z\_1 = W\_1 S w\\) and \\(z\_2 = W\_2 KS w\\).
<a id="orge63037d"></a>
{{< figure src="/ox-hugo/skogestad07_mixed_sensitivity_ref_tracking.png" caption="Figure 45: \\(S/KS\\) mixed-sensitivity optimization in standard form (tracking)" >}}
@ -4471,7 +4471,7 @@ The elements of the generalized plant are
\end{array}
\end{equation\*}
<a id="orgcae1c61"></a>
{{< figure src="/ox-hugo/skogestad07_mixed_sensitivity_s_t.png" caption="Figure 46: \\(S/T\\) mixed-sensitivity optimization in standard form" >}}
@ -4499,7 +4499,7 @@ The focus of attention has moved to the size of signals and away from the size a
Weights are used to describe the expected or known frequency content of exogenous signals and the desired frequency content of error signals.
Weights are also used if a perturbation is used to model uncertainty, as in Fig.&nbsp;[fig:input_uncertainty_hinf](#fig:input_uncertainty_hinf), where \\(G\\) represents the nominal model, \\(W\\) is a weighting function that captures the relative model fidelity over frequency, and \\(\Delta\\) represents unmodelled dynamics usually normalized such that \\(\hnorm{\Delta} < 1\\).
<a id="orgcbbbe4d"></a>
{{< figure src="/ox-hugo/skogestad07_input_uncertainty_hinf.png" caption="Figure 47: Multiplicative dynamic uncertainty model" >}}
@ -4521,7 +4521,7 @@ The problem can be cast as a standard \\(\hinf\\) optimization in the general co
w = \begin{bmatrix}d\\r\\n\end{bmatrix},\ z = \begin{bmatrix}z\_1\\z\_2\end{bmatrix}, \ v = \begin{bmatrix}r\_s\\y\_m\end{bmatrix},\ u = u
\end{equation\*}
<a id="orgbf6e2ac"></a>
{{< figure src="/ox-hugo/skogestad07_hinf_signal_based.png" caption="Figure 48: A signal-based \\(\hinf\\) control problem" >}}
@ -4532,7 +4532,7 @@ This problem is a non-standard \\(\hinf\\) optimization.
It is a robust performance problem for which the \\(\mu\text{-synthesis}\\) procedure can be applied where we require the structured singular value:
\\[ \mu(M(j\omega)) < 1, \quad \forall\omega \\]
<a id="org8211ec2"></a>
{{< figure src="/ox-hugo/skogestad07_hinf_signal_based_uncertainty.png" caption="Figure 49: A signal-based \\(\hinf\\) control problem with input multiplicative uncertainty" >}}
@ -4590,7 +4590,7 @@ For the perturbed feedback system of Fig.&nbsp;[fig:coprime_uncertainty_bis](#fi
Notice that \\(\gamma\\) is the \\(\hinf\\) norm from \\(\phi\\) to \\(\begin{bmatrix}u\\y\end{bmatrix}\\) and \\((I-GK)^{-1}\\) is the sensitivity function for this positive feedback arrangement.
<a id="org6ec03ef"></a>
{{< figure src="/ox-hugo/skogestad07_coprime_uncertainty_bis.png" caption="Figure 50: \\(\hinf\\) robust stabilization problem" >}}
@ -4637,7 +4637,7 @@ It is important to emphasize that since we can compute \\(\gamma\_\text{min}\\)
#### A Systematic \\(\hinf\\) Loop-Shaping Design Procedure {#a-systematic--hinf--loop-shaping-design-procedure}
<a id="org9136674"></a>
Robust stabilization alone is not much used in practice because the designer is not able to specify any performance requirements.
To do so, **pre and post compensators** are used to **shape the open-loop singular values** prior to robust stabilization of the "shaped" plant.
@ -4650,7 +4650,7 @@ If \\(W\_1\\) and \\(W\_2\\) are the pre and post compensators respectively, the
as shown in Fig.&nbsp;[fig:shaped_plant](#fig:shaped_plant).
<a id="orgef11ed5"></a>
{{< figure src="/ox-hugo/skogestad07_shaped_plant.png" caption="Figure 51: The shaped plant and controller" >}}
@ -4687,7 +4687,7 @@ Systematic procedure for \\(\hinf\\) loop-shaping design:
This is because the references do not directly excite the dynamics of \\(K\_s\\), which can result in large amounts of overshoot.
The constant prefilter ensures a steady-state gain of \\(1\\) between \\(r\\) and \\(y\\), assuming integral action in \\(W\_1\\) or \\(G\\).
<a id="orgbfd1976"></a>
{{< figure src="/ox-hugo/skogestad07_shapping_practical_implementation.png" caption="Figure 52: A practical implementation of the loop-shaping controller" >}}
@ -4713,7 +4713,7 @@ But in cases where stringent time-domain specifications are set on the output re
A general two degrees-of-freedom feedback control scheme is depicted in Fig.&nbsp;[fig:classical_feedback_2dof_simple](#fig:classical_feedback_2dof_simple).
The commands and feedbacks enter the controller separately and are independently processed.
<a id="org02d3783"></a>
{{< figure src="/ox-hugo/skogestad07_classical_feedback_2dof_simple.png" caption="Figure 53: General two degrees-of-freedom feedback control scheme" >}}
@ -4724,7 +4724,7 @@ The design problem is illustrated in Fig.&nbsp;[fig:coprime_uncertainty_hinf](#f
The feedback part of the controller \\(K\_2\\) is designed to meet robust stability and disturbance rejection requirements.
A prefilter is introduced to force the response of the closed-loop system to follow that of a specified model \\(T\_{\text{ref}}\\), often called the **reference model**.
<a id="org79631f7"></a>
{{< figure src="/ox-hugo/skogestad07_coprime_uncertainty_hinf.png" caption="Figure 54: Two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping design problem" >}}
@ -4749,7 +4749,7 @@ The main steps required to synthesize a two degrees-of-freedom \\(\mathcal{H}\_\
The final two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping controller is illustrated in Fig.&nbsp;[fig:hinf_synthesis_2dof](#fig:hinf_synthesis_2dof).
<a id="org44f4511"></a>
{{< figure src="/ox-hugo/skogestad07_hinf_synthesis_2dof.png" caption="Figure 55: Two degrees-of-freedom \\(\mathcal{H}\_\infty\\) loop-shaping controller" >}}
@ -4821,7 +4821,7 @@ where \\(u\_a\\) is the **actual plant input**, that is the measurement at the *
The situation is illustrated in Fig.&nbsp;[fig:weight_anti_windup](#fig:weight_anti_windup), where the actuators are each modeled by a unit gain and a saturation.
<a id="org6787ef9"></a>
{{< figure src="/ox-hugo/skogestad07_weight_anti_windup.png" caption="Figure 56: Self-conditioned weight \\(W\_1\\)" >}}
@ -4869,14 +4869,14 @@ Moreover, one should be careful about combining controller synthesis and analysi
## Controller Structure Design {#controller-structure-design}
<a id="orgb7b170f"></a>
### Introduction {#introduction}
In previous sections, we considered the general problem formulation in Fig.&nbsp;[fig:general_control_names_bis](#fig:general_control_names_bis) and stated that the controller design problem is to find a controller \\(K\\) which, based on the information in \\(v\\), generates a control signal \\(u\\) which counteracts the influence of \\(w\\) on \\(z\\), thereby minimizing the closed loop norm from \\(w\\) to \\(z\\).
<a id="orgfc83c01"></a>
{{< figure src="/ox-hugo/skogestad07_general_control_names_bis.png" caption="Figure 57: General Control Configuration" >}}
@ -4911,7 +4911,7 @@ The reference value \\(r\\) is usually set at some higher layer in the control h
Additional layers are possible, as is illustrated in Fig.&nbsp;[fig:control_system_hierarchy](#fig:control_system_hierarchy) which shows a typical control hierarchy for a chemical plant.
<a id="orgb38fb33"></a>
{{< figure src="/ox-hugo/skogestad07_system_hierarchy.png" caption="Figure 58: Typical control system hierarchy in a chemical plant" >}}
@ -4933,7 +4933,7 @@ However, this solution is normally not used for a number a reasons, included the
| ![](/ox-hugo/skogestad07_optimize_control_a.png) | ![](/ox-hugo/skogestad07_optimize_control_b.png) | ![](/ox-hugo/skogestad07_optimize_control_c.png) |
|--------------------------------------------------|--------------------------------------------------------------------------------|-------------------------------------------------------------|
| <a id="org46243fe"></a> Open loop optimization | <a id="org9054ed1"></a> Closed-loop implementation with separate control layer | <a id="org7c9ab5b"></a> Integrated optimization and control |
### Selection of Controlled Outputs {#selection-of-controlled-outputs}
@ -5140,7 +5140,7 @@ A cascade control structure results when either of the following two situations
| ![](/ox-hugo/skogestad07_cascade_extra_meas.png) | ![](/ox-hugo/skogestad07_cascade_extra_input.png) |
|-------------------------------------------------------|---------------------------------------------------|
| <a id="orgb5eec16"></a> Extra measurements \\(y\_2\\) | <a id="org17e03b3"></a> Extra inputs \\(u\_2\\) |
#### Cascade Control: Extra Measurements {#cascade-control-extra-measurements}
@ -5189,7 +5189,7 @@ With reference to the special (but common) case of cascade control shown in Fig.
</div>
<a id="orga224135"></a>
{{< figure src="/ox-hugo/skogestad07_cascade_control.png" caption="Figure 59: Common case of cascade control where the primary output \\(y\_1\\) depends directly on the extra measurement \\(y\_2\\)" >}}
@ -5239,7 +5239,7 @@ We would probably tune the three controllers in the order \\(K\_2\\), \\(K\_3\\)
</div>
<a id="org96aa47c"></a>
{{< figure src="/ox-hugo/skogestad07_cascade_control_two_layers.png" caption="Figure 60: Control configuration with two layers of cascade control" >}}
@ -5354,7 +5354,7 @@ We get:
\end{aligned}
\end{equation}
<a id="orgf93ad55"></a>
{{< figure src="/ox-hugo/skogestad07_partial_control.png" caption="Figure 61: Partial Control" >}}
@ -5474,7 +5474,7 @@ Then to minimize the control error for the primary output, \\(J = \\|y\_1 - r\_1
In this section, \\(G(s)\\) is a square plant which is to be controlled using a diagonal controller (Fig.&nbsp;[fig:decentralized_diagonal_control](#fig:decentralized_diagonal_control)).
<a id="org6e9e0ea"></a>
{{< figure src="/ox-hugo/skogestad07_decentralized_diagonal_control.png" caption="Figure 62: Decentralized diagonal control of a \\(2 \times 2\\) plant" >}}
@ -5861,7 +5861,7 @@ The conditions are also useful in an **input-output controllability analysis** f
## Model Reduction {#model-reduction}
<a id="org01b0041"></a>
### Introduction {#introduction}
@ -6268,4 +6268,4 @@ In such a case, using truncation or optimal Hankel norm approximation with appro
## Bibliography {#bibliography}
<a id="org57bef6b"></a>Skogestad, Sigurd, and Ian Postlethwaite. 2007. _Multivariable Feedback Control: Analysis and Design_. John Wiley.
@ -49,7 +49,7 @@ The noise source has a PSD given by:
\\[ S\_T(f) = 4 k T \text{Re}(Z(f)) \ [V^2/Hz] \\]
with \\(k = 1.38 \cdot 10^{-23} \,[J/K]\\) Boltzmann's constant, \\(T\\) the temperature [K] and \\(Z(f)\\) the frequency dependent impedance of the system.
<div class="exampl">
<div></div>
A kilo-ohm resistor at 20 degrees Celsius will show a thermal noise of \\(0.13 \mu V\\) from zero up to one kHz.
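A sketch reproducing this number (assuming a purely resistive impedance, so \\(\text{Re}(Z) = R\\)):

```python
import numpy as np

k, T, R, bw = 1.38e-23, 293.15, 1e3, 1e3   # Boltzmann constant, 20 °C, 1 kOhm, 1 kHz
S_T = 4 * k * T * R                        # white PSD [V^2/Hz]
v_rms = np.sqrt(S_T * bw)                  # rms noise over the bandwidth
print(v_rms)                               # ~1.3e-7 V, i.e. about 0.13 uV
```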
@ -62,7 +62,7 @@ It has a white spectral density:
\\[ S\_S = 2 q\_e i\_{dc} \ [A^2/Hz] \\]
with \\(q\_e\\) the electronic charge (\\(1.6 \cdot 10^{-19}\, [C]\\)), \\(i\_{dc}\\) the average current [A].
<div class="exampl">
<div></div>
An average current of 1 A will introduce noise with a standard deviation of \\(10 \cdot 10^{-9}\,[A]\\) from zero up to one kHz.
@ -97,7 +97,7 @@ The corresponding PSD is white up to the Nyquist frequency:
\\[ S\_Q = \frac{q^2}{12 f\_N} \\]
with \\(f\_N\\) the Nyquist frequency [Hz].
<div class="exampl">
<div></div>
Let's take the example of a 16-bit ADC which has an electronic noise with an SNR of 80 dB.
@ -129,7 +129,7 @@ The disturbance force acting on a body, is the **difference of pressure between
To have a pressure difference, the body must have a certain minimum dimension, depending on the wavelength of the sound.
For a body of typical dimensions of 100 mm, only frequencies above 800 Hz have a significant disturbance contribution.
<div class="exampl">
<div></div>
Consider a cube with a rib size of 100 mm located in a room with a sound level of 80 dB, distributed between one and ten kHz; then the force disturbance PSD equals \\(2.2 \cdot 10^{-2}\,[N^2/Hz]\\)
@ -161,21 +161,21 @@ Three factors influence the performance:
The DEB helps identify which disturbance is the limiting factor, and it should be investigated if the controller can deal with this disturbance before re-designing the plant.
The modelling of disturbances as stochastic variables is eminently suitable for the optimal stochastic control framework.
In Figure [1](#org7b34df5), the generalized plant maps the disturbances to the performance channels.
By minimizing the \\(\mathcal{H}\_2\\) system norm of the generalized plant, the variance of the performance channels is minimized.
<a id="org7b34df5"></a>
{{< figure src="/ox-hugo/jabben07_general_plant.png" caption="Figure 1: Control system with the generalized plant \\(G\\). The performance channels are stacked in \\(z\\), while the controller input is denoted with \\(y\\)" >}}
#### Using Weighting Filters for Disturbance Modelling {#using-weighting-filters-for-disturbance-modelling}
Since disturbances are generally not white, the system of Figure [1](#org7b34df5) needs to be augmented with so-called **disturbance weighting filters**.
A disturbance weighting filter gives the disturbance PSD when white noise is applied as input.
This is illustrated in Figure [2](#org5013433) where a vector of white noise time signals \\(\underbar{w}(t)\\) is filtered through a weighting filter to obtain the colored physical disturbances \\(w(t)\\) with the desired PSD \\(S\_w\\).
The generalized plant framework also allows the inclusion of **weighting filters for the performance channels**.
This is useful for three reasons:
@ -184,7 +184,7 @@ This is useful for three reasons:
- some performance channels may be of more importance than others
- by using dynamic weighting filters, one can emphasize the performance in a certain frequency range
<a id="org5013433"></a>
{{< figure src="/ox-hugo/jabben07_weighting_functions.png" caption="Figure 2: Control system with the generalized plant \\(G\\) and weighting functions" >}}
@ -209,9 +209,9 @@ So, to obtain feasible controllers, the performance channel is a combination of
By choosing suitable weighting filters for \\(y\\) and \\(u\\), the performance can be optimized while keeping the controller effort limited:
\\[ \\|z\\|\_{rms}^2 = \left\\| \begin{bmatrix} y \\ \alpha u \end{bmatrix} \right\\|\_{rms}^2 = \\|y\\|\_{rms}^2 + \alpha^2 \\|u\\|\_{rms}^2 \\]
By calculating \\(\mathcal{H}\_2\\) optimal controllers for increasing \\(\alpha\\) and plotting the performance \\(\\|y\\|\\) vs the controller effort \\(\\|u\\|\\), the curve as depicted in Figure [3](#org47370f3) is obtained.
<a id="org47370f3"></a>
{{< figure src="/ox-hugo/jabben07_pareto_curve_H2.png" caption="Figure 3: An illustration of a Pareto curve. Each point of the curve represents the performance obtained with an optimal controller. The curve is obtained by varying \\(\alpha\\) and calculating an \\(\mathcal{H}\_2\\) optimal controller for each \\(\alpha\\)." >}}
@ -23,9 +23,9 @@ Let's suppose that the ADC is ideal and the only noise comes from the quantizati
Interestingly, the noise amplitude is uniformly distributed.
The quantization noise can take a value between \\(\pm q/2\\), and the probability density function is constant in this range (i.e., it's a uniform distribution).
Since the integral of the probability density function is equal to one, its value will be \\(1/q\\) for \\(-q/2 < e < q/2\\) (Fig. [1](#orgf547b74)).
<a id="orgf547b74"></a>
{{< figure src="/ox-hugo/probability_density_function_adc.png" caption="Figure 1: Probability density function \\(p(e)\\) of the ADC error \\(e\\)" >}}
@ -48,7 +48,7 @@ Thus, the two-sided PSD (from \\(\frac{-f\_s}{2}\\) to \\(\frac{f\_s}{2}\\)), we
\int\_{-f\_s/2}^{f\_s/2} \Gamma(f) d f = f\_s \Gamma = \frac{q^2}{12}
\end{equation}
<div class="important">
<div></div>
Finally, the Power Spectral Density of the quantization noise of an ADC is equal to:
@ -62,7 +62,7 @@ Finally, the Power Spectral Density of the quantization noise of an ADC is equal
</div>
<div class="exampl">
<div></div>
Let's take an 18-bit ADC with a range of ±10 V and a sampling frequency of 10 kHz.
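A short numerical sketch of this example, using the PSD formula above (the values below simply follow from the stated parameters):

```matlab
%% Quantization noise of an 18-bit, +/-10 V ADC sampled at 10 kHz
n   = 18;   % Number of bits
Vfr = 20;   % Full range [V]
Fs  = 10e3; % Sampling frequency [Hz]

q     = Vfr/2^n;      % Quantization step (LSB): ~76 uV
Gamma = q^2/(12*Fs);  % PSD of the quantization noise: ~4.8e-14 V^2/Hz
asd   = sqrt(Gamma);  % Amplitude spectral density: ~0.22 uV/sqrt(Hz)
```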


@ -4,23 +4,21 @@ author = ["Thomas Dehaeze"]
draft = false
+++
Backlinks:
- [Multivariable Control]({{< relref "multivariable_control" >}})
Tags
:
\\[ \SI{1}{\meter\per\second} \\]
Resources:
- ([Skogestad and Postlethwaite 2007](#org4fdbcff))
- ([Toivonen 2002](#org4782daf))
- ([Zhang 2011](#org9b9c22a))
## Definition {#definition}
<div class="definition">
<div></div>
A norm of \\(e\\) (which may be a vector, matrix, signal or system) is a real number, denoted \\(\\|e\\|\\), that satisfies the following properties:
@ -47,7 +45,7 @@ A norm of \\(e\\) (which may be a vector, matrix, signal of system) is a real nu
## Matrix Norms {#matrix-norms}
<div class="definition">
<div></div>
A norm on a matrix \\(\\|A\\|\\) is a matrix norm if, in addition to the four norm properties, it also satisfies the multiplicative property:
@ -141,7 +139,7 @@ We now consider which system norms result from the definition of input classes a
### \\(\mathcal{H}\_\infty\\) Norm {#mathcal-h-infty--norm}
<div class="exampl">
<div></div>
Consider a proper linear stable system \\(G(s)\\).
@ -159,7 +157,7 @@ In terms of signals, the \\(\mathcal{H}\_\infty\\) norm can be interpreted as fo
### \\(\mathcal{H}\_2\\) Norm {#mathcal-h-2--norm}
<div class="exampl">
<div></div>
Consider a strictly proper system \\(G(s)\\).
@ -178,17 +176,17 @@ In terms of signals, the \\(\mathcal{H}\_\infty\\) norm can be interpreted as fo
The \\(\mathcal{H}\_2\\) norm is very useful when combined with [Dynamic Error Budgeting]({{< relref "dynamic_error_budgeting" >}}).
As explained in ([Monkhorst 2004](#orgb605c51)), the \\(\mathcal{H}\_2\\) norm has a stochastic interpretation:
> The squared \\(\mathcal{H}\_2\\) norm can be interpreted as the output variance of a system with zero mean white noise input.
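As a small illustration (the system below is an arbitrary example, not taken from the cited references), both norms can be computed in Matlab with `norm`, and the stochastic interpretation of the \\(\mathcal{H}\_2\\) norm can be checked by driving the system with an approximation of unit-PSD white noise:

```matlab
%% H-infinity and H2 norms of an arbitrary stable system
G = tf(1, [1 0.1 1]); % Example second-order system (assumption)

norm(G, Inf) % H-infinity norm: peak magnitude of G(jw)
norm(G, 2)   % H2 norm

%% Stochastic interpretation of the H2 norm
Ts = 1e-2; t = (0:Ts:1e4)';
u  = randn(size(t))/sqrt(Ts); % Discrete approximation of unit two-sided PSD white noise
y  = lsim(G, u, t);

var(y)       % Output variance: should approach ||G||_2^2
norm(G, 2)^2 % = 5 for this particular system
```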
## Bibliography {#bibliography}
<a id="orgb605c51"></a>Monkhorst, Wouter. 2004. “Dynamic Error Budgeting, a Design Approach.” Delft University.
<a id="org4fdbcff"></a>Skogestad, Sigurd, and Ian Postlethwaite. 2007. _Multivariable Feedback Control: Analysis and Design_. John Wiley.
<a id="org4782daf"></a>Toivonen, Hannu T. 2002. “Robust Control Methods.” Abo Akademi University.
<a id="org9b9c22a"></a>Zhang, Weidong. 2011. _Quantitative Process Control Theory_. CRC Press.


@ -0,0 +1,120 @@
+++
title = "Sensor Noise Estimation"
author = ["Thomas Dehaeze"]
draft = false
+++
Tags
:
## Estimation of the Noise of Inertial Sensors {#estimation-of-the-noise-of-inertial-sensors}
Measuring the noise level of inertial sensors is not easy, as the seismic motion is usually much larger than the sensor's noise level.
A technique to estimate the sensor noise in such a case is proposed in ([Barzilai, VanZandt, and Kenny 1998](#org7fe766e)) and well explained in ([Poel 2010](#org964c18e)) (Section 6.1.3).
The idea is to mount two inertial sensors closely together such that they should measure the same quantity.
This is represented in Figure [1](#org53e9426) where two identical sensors are measuring the same motion \\(x(t)\\).
<a id="org53e9426"></a>
{{< figure src="/ox-hugo/huddle_test_setup.png" caption="Figure 1: Schematic representation of the setup for measuring the noise of inertial sensors." >}}
<div class="definition">
<div></div>
A few quantities that will be used to estimate the sensor noise are now defined.
These include the **Coherence**, the **Power Spectral Density** (PSD) and the **Cross Spectral Density** (CSD).
The coherence between signals \\(x\\) and \\(y\\) is defined as follows:
\\[ \gamma^2\_{xy}(\omega) = \frac{|C\_{xy}(\omega)|^2}{|P\_{x}(\omega)| |P\_{y}(\omega)|} \\]
where \\(|P\_{x}(\omega)|\\) is the output PSD of signal \\(x(t)\\) and \\(|C\_{xy}(\omega)|\\) is the CSD of signals \\(x(t)\\) and \\(y(t)\\).
The PSD and CSD are defined as follows:
\begin{align}
|P\_x(\omega)| &= \frac{2}{n\_d T} \sum\_{k=1}^{n\_d} \left| x\_k(\omega, T) \right|^2 \\\\\\
|C\_{xy}(\omega)| &= \frac{2}{n\_d T} \sum\_{k=1}^{n\_d} [ x\_k^\*(\omega, T) ] [ y\_k(\omega, T) ]
\end{align}
where:
- \\(n\_d\\) is the number of records averaged
- \\(T\\) is the length of each record
- \\(x\_k(\omega, T)\\) is the finite Fourier transform of the kth record
- \\(x\_k^\*(\omega, T)\\) is its complex conjugate
The Matlab function `mscohere` can be used to compute the coherence:
```matlab
%% Parameters
Fs = 1e4; % Sampling Frequency [Hz]
win = hanning(ceil(10*Fs)); % 10 seconds Hanning Windows
%% Coherence between x and y
[pxy, f] = mscohere(x, y, win, [], [], Fs); % Coherence, frequency vector in [Hz]
```
Alternatively, it can be manually computed using the `cpsd` and `pwelch` commands:
```matlab
%% Manual Computation of the Coherence
[pxy, f] = cpsd(x, y, win, [], [], Fs); % Cross Spectral Density between x and y
[pxx, ~] = pwelch(x, win, [], [], Fs); % Power Spectral Density of x
[pyy, ~] = pwelch(y, win, [], [], Fs); % Power Spectral Density of y
pxy_manual = abs(pxy).^2./abs(pxx)./abs(pyy);
```
</div>
Now suppose that:
- both sensors are modelled as LTI systems \\(H\_1(s)\\) and \\(H\_2(s)\\)
- sensor noises are modelled as input noises \\(n\_1(t)\\) and \\(n\_2(t)\\)
- sensor noises are uncorrelated and each are uncorrelated with \\(x(t)\\)
Then, the system can be represented by the block diagram in Figure [2](#org0e1cf4a), and we can write:
\begin{align}
P\_{y\_1y\_1}(\omega) &= |H\_1(\omega)|^2 ( P\_{x}(\omega) + P\_{n\_1}(\omega) ) \\\\\\
P\_{y\_2y\_2}(\omega) &= |H\_2(\omega)|^2 ( P\_{x}(\omega) + P\_{n\_2}(\omega) ) \\\\\\
C\_{y\_1y\_2}(j\omega) &= H\_2^H(j\omega) H\_1(j\omega) P\_{x}(\omega)
\end{align}
And the coherence between \\(y\_1(t)\\) and \\(y\_2(t)\\) is:
\begin{equation}
\gamma^2\_{y\_1y\_2}(\omega) = \frac{|C\_{y\_1y\_2}(j\omega)|^2}{P\_{y\_1}(\omega) P\_{y\_2}(\omega)}
\end{equation}
<a id="org0e1cf4a"></a>
{{< figure src="/ox-hugo/huddle_test_block_diagram.png" caption="Figure 2: Huddle test block diagram" >}}
Rearranging the equations, we obtain the PSD of \\(n\_1(t)\\) and \\(n\_2(t)\\):
\begin{align}
P\_{n\_1}(\omega) &= \frac{P\_{y\_1}(\omega)}{|H\_1(j\omega)|^2} \left( 1 - \gamma\_{y\_1y\_2}(\omega) \frac{|H\_1(j\omega)|}{|H\_2(j\omega)|} \sqrt{\frac{P\_{y\_2}(\omega)}{P\_{y\_1}(\omega)}} \right) \\\\\\
P\_{n\_2}(\omega) &= \frac{P\_{y\_2}(\omega)}{|H\_2(j\omega)|^2} \left( 1 - \gamma\_{y\_1y\_2}(\omega) \frac{|H\_2(j\omega)|}{|H\_1(j\omega)|} \sqrt{\frac{P\_{y\_1}(\omega)}{P\_{y\_2}(\omega)}} \right)
\end{align}
If we assume that the two sensors have the same dynamics (\\(H\_1(s) \approx H\_2(s)\\)) and the same noise PSD (\\(P\_{n\_1}(\omega) \approx P\_{n\_2}(\omega)\\)), which is usually the case when two identical sensors are used, we obtain the following approximate expression:
<div class="important">
<div></div>
\begin{equation}
P\_{n\_1}(\omega) \approx \frac{P\_{y\_1}(\omega)}{|H\_1(j\omega)|^2} \big( 1 - \gamma\_{y\_1y\_2}(\omega) \big)
\end{equation}
</div>
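A Matlab sketch of this estimation is given below. The measured signals `y1` and `y2`, the sensor model `H1` and the sampling frequency are assumed to be available (their names are hypothetical). Note that `mscohere` returns \\(\gamma^2\\), so its square root is used in the approximate formula above.

```matlab
%% Sketch: estimation of the sensor noise PSD from a huddle test
%  Assumed to exist: y1, y2 (measured outputs) and H1 (dynamics of the identical sensors)
Fs  = 1e4;                  % Sampling frequency [Hz]
win = hanning(ceil(10*Fs)); % 10 seconds Hanning window

[pyy1, f] = pwelch(  y1,     win, [], [], Fs); % PSD of y1
[coh,  ~] = mscohere(y1, y2, win, [], [], Fs); % Coherence gamma^2 between y1 and y2

H1f = squeeze(freqresp(H1, 2*pi*f)); % Sensor response evaluated at the frequency vector

% Approximate PSD of the sensor noise n1 (identical sensors assumed)
pn1 = pyy1./abs(H1f).^2 .* (1 - sqrt(coh));

figure; loglog(f, sqrt(pn1));
xlabel('Frequency [Hz]'); ylabel('ASD of n_1');
```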
## Bibliography {#bibliography}
<a id="org7fe766e"></a>Barzilai, Aaron, Tom VanZandt, and Tom Kenny. 1998. “Technique for Measurement of the Noise of a Sensor in the Presence of Large Background Signals.” _Review of Scientific Instruments_ 69 (7):2767–72. <https://doi.org/10.1063/1.1149013>.
<a id="org964c18e"></a>Poel, Gerrit Wijnand van der. 2010. “An Exploration of Active Hard Mount Vibration Isolation for Precision Equipment.” University of Twente. <https://doi.org/10.3990/1.9789036530163>.


@ -10,7 +10,7 @@ Tags
## SNR to Noise PSD {#snr-to-noise-psd}
From ([Jabben 2007](#orgf2f4e47)) (Section 3.3.2):
> Electronic equipment does most often not come with detailed electric schemes, in which case the PSD should be determined from measurements.
> In the design phase however, one has to rely on information provided by specification sheets from the manufacturer.
@ -22,7 +22,7 @@ From ([Jabben 2007](#org4650879)) (Section 3.3.2):
> \\[ S\_{snr} = \frac{x\_{fr}^2}{8 f\_c C\_{snr}^2} \\]
> with \\(x\_{fr}\\) the full range of \\(x\\), and \\(C\_{snr}\\) the SNR.
<div class="exampl">
<div></div>
Let's take an example.
@ -49,7 +49,7 @@ where \\(S\_{snr}\\) is the SNR in dB and \\(S\_\text{rms}\\) is the RMS value o
If the full range is \\(\Delta V\\), then:
\\[ S\_\text{rms} = \frac{\Delta V/2}{\sqrt{2}} \\]
<div class="exampl">
<div></div>
As an example, let's take a voltage amplifier with a full range of \\(\Delta V = 20 V\\) and an SNR of 85 dB.
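A short numerical sketch of this example, following the formulas in this section:

```matlab
%% RMS noise of an amplifier with a 20 V full range and an 85 dB SNR
dV  = 20; % Full range [V]
snr = 85; % Signal to noise ratio [dB]

S_rms = (dV/2)/sqrt(2);    % RMS value of a full-range sine: ~7.1 V
N_rms = S_rms/10^(snr/20); % RMS value of the noise: ~0.4 mV
```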
@ -66,7 +66,7 @@ The RMS value of the noise is then:
If the wanted full range and RMS value of the noise are defined, the required SNR can be computed from:
\\[ S\_{snr} = 20 \log \frac{\text{Signal, rms}}{\text{Noise, rms}} \\]
<div class="exampl">
<div></div>
Let's say the wanted noise is \\(1 mV, \text{rms}\\) for a full range of \\(20 V\\); the corresponding SNR is:
@ -78,13 +78,13 @@ Let's say the wanted noise is \\(1 mV, \text{rms}\\) for a full range of \\(20 V
## Noise Density to RMS noise {#noise-density-to-rms-noise}
From ([Fleming 2010](#orgf17a758)):
\\[ \text{RMS noise} = \sqrt{2 \times \text{bandwidth}} \times \text{noise density} \\]
If the noise is normally distributed, the RMS value is also the standard deviation \\(\sigma\\).
The peak-to-peak amplitude is then approximately \\(6 \sigma\\).
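As a quick numerical sketch of this relation (the noise density and bandwidth values below are only assumptions for illustration):

```matlab
%% RMS and peak-to-peak noise from a noise density
nd = 20e-12; % Noise density [m/sqrt(Hz)] (assumed)
bw = 100;    % Bandwidth [Hz] (assumed)

sigma = sqrt(2*bw)*nd; % RMS noise, equal to the standard deviation: ~0.28 nm
pkpk  = 6*sigma;       % Approximate peak-to-peak noise: ~1.7 nm
```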
<div class="exampl">
<div></div>
- noise density = \\(20 pm/\sqrt{Hz}\\) - noise density = \\(20 pm/\sqrt{Hz}\\)
@ -98,6 +98,6 @@ The peak-to-peak noise will be approximately \\(6 \sigma = 1.7 nm\\)
## Bibliography {#bibliography}
<a id="orgf17a758"></a>Fleming, A.J. 2010. “Nanopositioning System with Force Feedback for High-Performance Tracking and Vibration Control.” _IEEE/ASME Transactions on Mechatronics_ 15 (3):433–47. <https://doi.org/10.1109/tmech.2009.2028422>.
<a id="orgf2f4e47"></a>Jabben, Leon. 2007. “Mechatronic Design of a Magnetically Suspended Rotating Platform.” Delft University.
