+++
title = "Singular Value Decomposition"
author = ["Thomas Dehaeze"]
draft = false
+++

Tags :

## SVD of a MIMO system

We are interested in the physical interpretation of the SVD when applied to the frequency response of a MIMO system \(G(s)\) with \(m\) inputs and \(l\) outputs.

\begin{equation} G = U \Sigma V^H \end{equation}

- \(\Sigma\) is an \(l \times m\) matrix with \(k = \min\{l, m\}\) non-negative singular values \(\sigma_i\), arranged in descending order along its main diagonal; the other entries are zero.
- \(U\) is an \(l \times l\) unitary matrix. The columns of \(U\), denoted \(u_i\), represent the output directions of the plant. They are orthonormal.
- \(V\) is an \(m \times m\) unitary matrix. The columns of \(V\), denoted \(v_i\), represent the input directions of the plant. They are orthonormal.

The input and output directions are related through the singular values:

\begin{equation} G v_i = \sigma_i u_i \end{equation}

So, if we consider an input in the direction \(v_i\), then the output is in the direction \(u_i\). Furthermore, since \(\normtwo{v_i}=1\) and \(\normtwo{u_i}=1\), we see that the singular value \(\sigma_i\) directly gives the gain of the matrix \(G\) in this direction.
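As a quick numerical illustration, the sketch below (assuming NumPy; the \(2 \times 2\) complex frequency-response matrix is made up) verifies that an input along \(v_i\) produces an output along \(u_i\) with gain \(\sigma_i\):

```python
import numpy as np

# Hypothetical 2x2 complex frequency-response matrix G(jw) at one frequency
G = np.array([[1.0 + 0.5j, 0.2 - 0.1j],
              [0.3 + 0.0j, 2.0 - 1.0j]])

U, S, Vh = np.linalg.svd(G)  # G = U @ diag(S) @ Vh, with Vh = V^H
V = Vh.conj().T

for i, sigma in enumerate(S):
    # An input along v_i comes out along u_i, scaled by sigma_i
    assert np.allclose(G @ V[:, i], sigma * U[:, i])
    # v_i and u_i have unit norm, so sigma_i is the gain in that direction
    assert np.isclose(np.linalg.norm(V[:, i]), 1.0)
    assert np.isclose(np.linalg.norm(U[:, i]), 1.0)
```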

The largest gain for any input is equal to the maximum singular value:

\[ \maxsv(G) \equiv \sigma_1(G) = \max_{d\neq 0}\frac{\normtwo{Gd}}{\normtwo{d}} = \frac{\normtwo{Gv_1}}{\normtwo{v_1}} \]

The smallest gain for any input direction is equal to the minimum singular value:

\[ \minsv(G) \equiv \sigma_k(G) = \min_{d\neq 0}\frac{\normtwo{Gd}}{\normtwo{d}} = \frac{\normtwo{Gv_k}}{\normtwo{v_k}} \]

We define \(u_1 = \bar{u}\), \(v_1 = \bar{v}\), \(u_k = \ubar{u}\) and \(v_k = \ubar{v}\). Then it follows that:

\[ G\bar{v} = \maxsv \bar{u}; \quad G\ubar{v} = \minsv \ubar{u} \]
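These bounds can also be checked numerically; the sketch below (NumPy assumed, random matrix purely for illustration) verifies that the gain for any input direction lies between \(\minsv\) and \(\maxsv\), and that \(\bar{v}\) and \(\ubar{v}\) attain them:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random 3-output, 2-input complex matrix, purely for illustration
G = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
U, S, Vh = np.linalg.svd(G)
V = Vh.conj().T
sigma_max, sigma_min = S[0], S[-1]  # sigma_1 and sigma_k, with k = min(l, m)

# The gain ||G d|| / ||d|| of any input direction lies between the bounds
for _ in range(1000):
    d = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    gain = np.linalg.norm(G @ d) / np.linalg.norm(d)
    assert sigma_min - 1e-12 <= gain <= sigma_max + 1e-12

# The extreme gains are attained along v_1 and v_k
assert np.isclose(np.linalg.norm(G @ V[:, 0]), sigma_max)
assert np.isclose(np.linalg.norm(G @ V[:, -1]), sigma_min)
```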

## SVD and pseudo-inverse of rectangular matrices

This is taken from [Singular Value Decomposition]({{< relref "preumont18_vibrat_contr_activ_struc_fourt_edition" >}}).

The Singular Value Decomposition (SVD) is a generalization of the eigenvalue decomposition to rectangular matrices:

\[ J = U \Sigma V^T = \sum_{i=1}^r \sigma_i u_i v_i^T \]

with:

- \(U\) and \(V\) orthogonal matrices. The columns \(u_i\) and \(v_i\) of \(U\) and \(V\) are the eigenvectors of the square matrices \(JJ^T\) and \(J^TJ\) respectively
- \(\Sigma\) a rectangular diagonal matrix of dimension \(m \times n\) containing the square roots of the common non-zero eigenvalues of \(JJ^T\) and \(J^TJ\)
- \(r\) is the number of non-zero singular values of \(J\)

The pseudo-inverse of \(J\) is: \[ J^+ = V\Sigma^+U^T = \sum_{i=1}^r \frac{1}{\sigma_i} v_i u_i^T \]
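A minimal sketch (assuming NumPy, with a made-up rectangular Jacobian \(J\)) checking that the singular values are the square roots of the eigenvalues of \(J^TJ\) and rebuilding \(J^+\) from the SVD:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((4, 3))  # made-up m x n Jacobian, full column rank

U, S, Vt = np.linalg.svd(J, full_matrices=False)  # thin SVD

# Singular values are the square roots of the eigenvalues of J^T J
eig = np.sort(np.linalg.eigvalsh(J.T @ J))[::-1]  # descending order
assert np.allclose(S**2, eig)

# Pseudo-inverse from the SVD: J^+ = V Sigma^+ U^T
J_pinv = Vt.T @ np.diag(1.0 / S) @ U.T
assert np.allclose(J_pinv, np.linalg.pinv(J))
```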

The conditioning of the Jacobian is measured by the condition number: \[ c(J) = \frac{\sigma_{max}}{\sigma_{min}} \]

When \(c(J)\) becomes large, the most straightforward way to handle the ill-conditioning is to truncate the smallest singular value out of the sum. This usually has little impact on the fitting error while considerably reducing the actuator inputs \(v\).
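The sketch below (again assuming NumPy, with an artificially ill-conditioned Jacobian) computes the condition number and forms a truncated pseudo-inverse that drops the smallest singular value; NumPy's `pinv` applies the same kind of truncation through its `rcond` argument:

```python
import numpy as np

rng = np.random.default_rng(2)
# Build an artificially ill-conditioned 4x3 Jacobian from a chosen SVD
U, _, Vt = np.linalg.svd(rng.standard_normal((4, 3)))
S = np.array([10.0, 1.0, 1e-6])       # one nearly-zero singular value
J = U[:, :3] @ np.diag(S) @ Vt

c = S[0] / S[-1]                      # c(J) = sigma_max / sigma_min
print(f"condition number: {c:.1e}")   # ~1e7: ill-conditioned

# Truncated pseudo-inverse: drop singular values below a relative threshold
keep = S > 1e-3 * S[0]
J_plus = Vt[keep, :].T @ np.diag(1.0 / S[keep]) @ U[:, :3][:, keep].T

# NumPy's pinv performs the same truncation via its rcond argument
assert np.allclose(J_plus, np.linalg.pinv(J, rcond=1e-3))
```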
