+++
title = "Singular Value Decomposition"
author = ["Dehaeze Thomas"]
draft = false
+++
## SVD of a MIMO system
This is taken from (Skogestad and Postlethwaite 2005).
We are interested in the physical interpretation of the SVD when applied to the frequency response of a MIMO system \(G(s)\) with \(m\) inputs and \(l\) outputs.
\begin{equation} G = U \Sigma V^H \end{equation}
- \(\Sigma\): is an \(l \times m\) matrix with \(k = \min\{l, m\}\) non-negative singular values \(\sigma_i\), arranged in descending order along its main diagonal, the other entries are zero.
- \(U\): is an \(l \times l\) unitary matrix. The columns of \(U\), denoted \(u_i\), represent the output directions of the plant. They are orthonormal.
- \(V\): is an \(m \times m\) unitary matrix. The columns of \(V\), denoted \(v_i\), represent the input directions of the plant. They are orthonormal.
The input and output directions are related through the singular values:
\begin{equation} G v_i = \sigma_i u_i \end{equation}
So, if we consider an input in the direction \(v_i\), then the output is in the direction \(u_i\). Furthermore, since \(\|v_i\|_2=1\) and \(\|u_i\|_2=1\), we see that the singular value \(\sigma_i\) directly gives the gain of the matrix \(G\) in this direction.
The largest gain for any input is equal to the maximum singular value: \[\overline{\sigma}(G) \equiv \sigma_1(G) = \max_{d\neq 0}\frac{\|Gd\|_2}{\|d\|_2} = \frac{\|Gv_1\|_2}{\|v_1\|_2} \] The smallest gain for any input direction is equal to the minimum singular value: \[ \underline{\sigma}(G) \equiv \sigma_k(G) = \min_{d\neq 0}\frac{\|Gd\|_2}{\|d\|_2} = \frac{\|Gv_k\|_2}{\|v_k\|_2} \]
We define \(u_1 = \overline{u}\), \(v_1 = \overline{v}\), \(u_k=\underline{u}\) and \(v_k = \underline{v}\). Then it follows that: \[ G\overline{v} = \overline{\sigma} \cdot \overline{u} ; \quad G\underline{v} = \underline{\sigma} \cdot \underline{u} \]
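These relations are easy to verify numerically. The sketch below (the \(2 \times 2\) gain matrix is illustrative, not taken from the text) computes the SVD of a real plant gain matrix with NumPy and checks that \(G v_1 = \sigma_1 u_1\) and that \(\sigma_1\) bounds the gain in any input direction:

```python
import numpy as np

# Illustrative 2x2 plant gain matrix (hypothetical values)
G = np.array([[87.8, -86.4],
              [108.2, -109.6]])

# SVD: G = U @ diag(s) @ Vh, with Vh = V^H and s in descending order
U, s, Vh = np.linalg.svd(G)
V = Vh.conj().T

# Input direction v1 is mapped to output direction u1, scaled by sigma_1
v1, u1, sigma1 = V[:, 0], U[:, 0], s[0]
assert np.allclose(G @ v1, sigma1 * u1)

# The gain in any direction d lies between sigma_min and sigma_max
rng = np.random.default_rng(0)
d = rng.standard_normal(2)
gain = np.linalg.norm(G @ d) / np.linalg.norm(d)
assert s[-1] - 1e-9 <= gain <= s[0] + 1e-9
```

For a frequency response one would evaluate \(G(j\omega)\) at each frequency of interest and apply the same decomposition to the resulting complex matrix.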
## SVD and the pseudo-inverse of rectangular matrices
This is taken from (Preumont 2018).
The Singular Value Decomposition (SVD) is a generalization of the eigenvalue decomposition to rectangular matrices: \[ J = U \Sigma V^T = \sum_{i=1}^r \sigma_i u_i v_i^T \] With:
- \(U\) and \(V\) orthogonal matrices. The columns \(u_i\) and \(v_i\) of \(U\) and \(V\) are the eigenvectors of the square matrices \(JJ^T\) and \(J^TJ\) respectively
- \(\Sigma\) a rectangular diagonal matrix of dimension \(m \times n\) containing the square root of the common non-zero eigenvalues of \(JJ^T\) and \(J^TJ\)
- \(r\) is the number of non-zero singular values of \(J\)
The pseudo-inverse of \(J\) is: \[ J^+ = V\Sigma^+U^T = \sum_{i=1}^r \frac{1}{\sigma_i} v_i u_i^T \]
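The dyadic expansion of \(J^+\) above can be checked directly against a library pseudo-inverse. A minimal sketch with NumPy, using a hypothetical \(4 \times 3\) Jacobian:

```python
import numpy as np

# Hypothetical rectangular Jacobian (m = 4 outputs, n = 3 inputs)
rng = np.random.default_rng(0)
J = rng.standard_normal((4, 3))

# Reduced SVD: J = U @ diag(s) @ Vh with r = 3 non-zero singular values here
U, s, Vh = np.linalg.svd(J, full_matrices=False)

# Pseudo-inverse built from the SVD: J+ = V Sigma^+ U^T
J_pinv = Vh.T @ np.diag(1.0 / s) @ U.T

# Matches the built-in Moore-Penrose pseudo-inverse
assert np.allclose(J_pinv, np.linalg.pinv(J))
```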
The conditioning of the Jacobian is measured by the condition number: \[ c(J) = \frac{\sigma_{max}}{\sigma_{min}} \]
When \(c(J)\) becomes large, the most straightforward way to handle the ill-conditioning is to truncate the smallest singular values out of the sum. This usually has little impact on the fitting error while considerably reducing the actuator inputs \(v\).
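The effect of this truncation can be illustrated on a deliberately ill-conditioned Jacobian (the matrix and tolerance below are assumptions for the sketch, not from the text): dropping the weakest singular value shrinks the computed inputs drastically.

```python
import numpy as np

# Build a hypothetical 3x3 Jacobian with one near-zero singular value
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
s = np.array([10.0, 1.0, 1e-6])
J = U @ np.diag(s) @ V.T  # condition number c(J) = 1e7

def truncated_pinv(J, rtol=1e-3):
    """Pseudo-inverse keeping only singular values above rtol * sigma_max."""
    U, s, Vh = np.linalg.svd(J, full_matrices=False)
    keep = s > rtol * s[0]
    return Vh[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T

f = np.array([1.0, 2.0, 3.0])      # desired output
v_full = np.linalg.pinv(J) @ f     # huge component along the weak direction
v_trunc = truncated_pinv(J) @ f    # much smaller actuator inputs

assert np.linalg.norm(v_trunc) < np.linalg.norm(v_full)
```

The trade-off is the one stated above: the truncated solution no longer reproduces \(f\) exactly along the discarded direction, but the fitting error it introduces is small compared with the reduction in input effort.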