% Created 2025-04-03 Thu 17:40
% Intended LaTeX compiler: pdflatex
\documentclass[a4paper, 10pt, DIV=12, parskip=full, bibliography=totoc]{scrreprt}
\input{preamble.tex}
\input{preamble_extra.tex}
\bibliography{nass-control.bib}
\author{Dehaeze Thomas}
\date{\today}
\title{Control Optimization}
\usepackage{biblatex}
\begin{document}
\maketitle
\tableofcontents
\clearpage
When controlling a MIMO system, specifically a parallel manipulator such as the Stewart platform, several aspects must be considered:
\begin{itemize}
\item Section \ref{sec:detail_control_multiple_sensor}: How to most effectively use/combine multiple sensors
\item Section \ref{sec:detail_control_decoupling}: How to decouple a system
\item Section \ref{sec:detail_control_optimization}: How to design the controller
\end{itemize}
\chapter{Multiple Sensor Control}
\label{sec:orga9622a3}
\label{sec:detail_control_multiple_sensor}
\textbf{Look at what was done in the introduction \href{file:///home/thomas/Cloud/work-projects/ID31-NASS/phd-thesis-chapters/A0-nass-introduction/nass-introduction.org}{Stewart platforms: Control architecture}}
Explain why multiple sensors are sometimes beneficial:
\begin{itemize}
\item a collocated sensor guarantees stability and can be used to damp modes outside the bandwidth of the controller that uses the sensor measuring the performance objective
\item Review for Stewart platforms => Table
\href{file:///home/thomas/Cloud/work-projects/ID31-NASS/matlab/stewart-simscape/org/bibliography.org}{Multi Sensor Control}
\item Several sensors: force sensor, inertial sensor, strain sensor \ldots{}
\item Several architectures: sensor fusion, cascaded control, two-sensor control \cite{beijen14_two_sensor_contr_activ_vibrat,yong16_high_speed_vertic_posit_stage}
\end{itemize}
\begin{figure}[htbp]
\begin{subfigure}{0.48\textwidth}
\begin{center}
\includegraphics[scale=1,scale=1]{figs/detail_control_architecture_hac_lac.png}
\end{center}
\subcaption{\label{fig:detail_control_architecture_hac_lac} HAC-LAC}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\begin{center}
\includegraphics[scale=1,scale=1]{figs/detail_control_architecture_two_sensor_control.png}
\end{center}
\subcaption{\label{fig:detail_control_architecture_two_sensor_control} Two Sensor Control}
\end{subfigure}
\bigskip
\begin{subfigure}{0.95\textwidth}
\begin{center}
\includegraphics[scale=1,scale=1]{figs/detail_control_architecture_sensor_fusion.png}
\end{center}
\subcaption{\label{fig:detail_control_architecture_sensor_fusion} Sensor Fusion}
\end{subfigure}
\caption{\label{fig:detail_control_control_multiple_sensors}Different control strategies when using multiple sensors. High Authority Control / Low Authority Control (\subref{fig:detail_control_architecture_hac_lac}). Sensor Fusion (\subref{fig:detail_control_architecture_sensor_fusion}). Two-Sensor Control (\subref{fig:detail_control_architecture_two_sensor_control})}
\end{figure}
The cascaded control / HAC-LAC architecture was already discussed during the conceptual phase.
It is a comprehensive approach that proved to give good performance.
At the other end of the spectrum, the two-sensor approach offers more control design freedom, but it is also more complex.
In this section, sensor fusion is studied as an option for multi-sensor control:
\begin{itemize}
\item it may be used to optimize the noise characteristics of the measurement
\item it may be used to optimize the dynamical uncertainty
\end{itemize}
There are different ways to fuse sensors:
\begin{itemize}
\item complementary filters
\item Kalman filtering
\end{itemize}
The focus is here made on complementary filters, as they allow for a simple frequency-domain analysis.
Measuring a physical quantity using sensors is always subject to several limitations.
First, the accuracy of the measurement is affected by several noise sources, such as electrical noise of the conditioning electronics being used.
Second, the frequency range in which the measurement is relevant is bounded by the bandwidth of the sensor.
One way to overcome these limitations is to combine several sensors using a technique called ``sensor fusion'' \cite{bendat57_optim_filter_indep_measur_two}.
Fortunately, a wide variety of sensors exists, each with different characteristics.
By carefully choosing the fused sensors, a so called ``super sensor'' is obtained that combines the benefits of the individual sensors.
In some situations, sensor fusion is used to increase the bandwidth of the measurement \cite{shaw90_bandw_enhan_posit_measur_using_measur_accel,zimmermann92_high_bandw_orien_measur_contr,min15_compl_filter_desig_angle_estim}.
For instance, in \cite{shaw90_bandw_enhan_posit_measur_using_measur_accel} the bandwidth of a position sensor is increased by fusing it with an accelerometer providing the high frequency motion information.
For other applications, sensor fusion is used to obtain an estimate of the measured quantity with lower noise \cite{hua05_low_ligo,hua04_polyp_fir_compl_filter_contr_system,plummer06_optim_compl_filter_their_applic_motion_measur,robert12_introd_random_signal_applied_kalman}.
More recently, the fusion of sensors measuring different physical quantities has been proposed to obtain interesting properties for control \cite{collette15_sensor_fusion_method_high_perfor,yong16_high_speed_vertic_posit_stage}.
In \cite{collette15_sensor_fusion_method_high_perfor}, an inertial sensor used for active vibration isolation is fused with a sensor collocated with the actuator for improving the stability margins of the feedback controller.
Practical applications of sensor fusion are numerous.
It is widely used for the attitude estimation of several autonomous vehicles such as unmanned aerial vehicle \cite{baerveldt97_low_cost_low_weigh_attit,corke04_inert_visual_sensin_system_small_auton_helic,jensen13_basic_uas} and underwater vehicles \cite{pascoal99_navig_system_desig_using_time,batista10_optim_posit_veloc_navig_filter_auton_vehic}.
Naturally, it is of great benefits for high performance positioning control as shown in \cite{shaw90_bandw_enhan_posit_measur_using_measur_accel,zimmermann92_high_bandw_orien_measur_contr,min15_compl_filter_desig_angle_estim,yong16_high_speed_vertic_posit_stage}.
Sensor fusion was also shown to be a key technology to improve the performance of active vibration isolation systems \cite{tjepkema12_sensor_fusion_activ_vibrat_isolat_precis_equip}.
Emblematic examples are the isolation stages of gravitational wave detectors \cite{collette15_sensor_fusion_method_high_perfor,heijningen18_low} such as the ones used at the LIGO \cite{hua05_low_ligo,hua04_polyp_fir_compl_filter_contr_system} and at the Virgo \cite{lucia18_low_frequen_optim_perfor_advan}.
There are mainly two ways to perform sensor fusion: either using a set of complementary filters \cite{anderson53_instr_approac_system_steer_comput} or using Kalman filtering \cite{brown72_integ_navig_system_kalman_filter}.
For sensor fusion applications, both methods are sharing many relationships \cite{brown72_integ_navig_system_kalman_filter,higgins75_compar_compl_kalman_filter,robert12_introd_random_signal_applied_kalman,fonseca15_compl}.
However, for Kalman filtering, assumptions must be made about the probabilistic character of the sensor noises \cite{robert12_introd_random_signal_applied_kalman} whereas it is not the case with complementary filters.
Furthermore, the advantages of complementary filters over Kalman filtering for sensor fusion are their general applicability, their low computational cost \cite{higgins75_compar_compl_kalman_filter}, and the fact that they are intuitive as their effects can be easily interpreted in the frequency domain.
A set of filters is said to be complementary if the sum of their transfer functions is equal to one at all frequencies.
In the early days of complementary filtering, analog circuits were employed to physically realize the filters \cite{anderson53_instr_approac_system_steer_comput}.
Analog complementary filters are still used today \cite{yong16_high_speed_vertic_posit_stage,moore19_capac_instr_sensor_fusion_high_bandw_nanop}, but most of the time they are now implemented digitally as it allows for much more flexibility.
Several design methods have been developed over the years to optimize complementary filters.
The easiest way to design complementary filters is to use analytical formulas.
Depending on the application, the formulas used are of first order \cite{corke04_inert_visual_sensin_system_small_auton_helic,yeh05_model_contr_hydraul_actuat_two,yong16_high_speed_vertic_posit_stage}, second order \cite{baerveldt97_low_cost_low_weigh_attit,stoten01_fusion_kinet_data_using_compos_filter,jensen13_basic_uas} or even higher orders \cite{shaw90_bandw_enhan_posit_measur_using_measur_accel,zimmermann92_high_bandw_orien_measur_contr,stoten01_fusion_kinet_data_using_compos_filter,collette15_sensor_fusion_method_high_perfor,matichard15_seism_isolat_advan_ligo}.
As the characteristics of the super sensor depends on the proper design of the complementary filters \cite{dehaeze19_compl_filter_shapin_using_synth}, several optimization techniques have been developed.
Some are based on the finding of optimal parameters of analytical formulas \cite{jensen13_basic_uas,min15_compl_filter_desig_angle_estim,fonseca15_compl}, while others use convex optimization tools \cite{hua04_polyp_fir_compl_filter_contr_system,hua05_low_ligo} such as linear matrix inequalities \cite{pascoal99_navig_system_desig_using_time}.
As shown in \cite{plummer06_optim_compl_filter_their_applic_motion_measur}, the design of complementary filters can also be linked to the standard mixed-sensitivity control problem.
Therefore, all the powerful tools developed for the classical control theory can also be used for the design of complementary filters.
For instance, in \cite{jensen13_basic_uas} the two gains of a Proportional Integral (PI) controller are optimized to minimize the noise of the super sensor.
The common objective of all these complementary filters design methods is to obtain a super sensor that has desired characteristics, usually in terms of noise and dynamics.
Moreover, as reported in \cite{zimmermann92_high_bandw_orien_measur_contr,plummer06_optim_compl_filter_their_applic_motion_measur}, phase shifts and magnitude bumps of the super sensor dynamics can be observed if either the complementary filters are poorly designed or if the sensors are not well calibrated.
Hence, the robustness of the fusion is also of concern when designing the complementary filters.
Although many design methods of complementary filters have been proposed in the literature, no simple method that allows one to specify the desired super sensor characteristics while ensuring good fusion robustness has been proposed.
Fortunately, both the robustness of the fusion and the super sensor characteristics can be linked to the magnitude of the complementary filters \cite{dehaeze19_compl_filter_shapin_using_synth}.
Based on that, this section introduces a new way to design complementary filters using the \(\mathcal{H}_\infty\) synthesis, which allows the magnitude of the complementary filters to be shaped in an easy and intuitive way.
\section{Sensor Fusion and Complementary Filters Requirements}
\label{sec:org338cf90}
\label{ssec:detail_control_sensor_fusion_requirements}
Complementary filtering provides a framework for fusing signals from different sensors.
As the effectiveness of the fusion depends on the proper design of the complementary filters, they are expected to fulfill certain requirements.
These requirements are discussed in this section.
\subsection{Sensor Fusion Architecture}
\label{sec:org1a108b3}
A general sensor fusion architecture using complementary filters is shown in Fig. \ref{fig:detail_control_sensor_fusion_overview} where several sensors (here two) are measuring the same physical quantity \(x\).
The two sensors output signals \(\hat{x}_1\) and \(\hat{x}_2\) are estimates of \(x\).
These estimates are then filtered by complementary filters and combined to form a new estimate \(\hat{x}\).
The resulting sensor, termed the ``super sensor'', can have a larger bandwidth and better noise characteristics in comparison to the individual sensors.
This means that the super sensor provides an estimate \(\hat{x}\) of \(x\) which can be more accurate over a larger frequency band than the outputs of the individual sensors.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_sensor_fusion_overview.png}
\caption{\label{fig:detail_control_sensor_fusion_overview}Schematic of a sensor fusion architecture using complementary filters.}
\end{figure}
The complementary property of filters \(H_1(s)\) and \(H_2(s)\) implies that the sum of their transfer functions is equal to one.
That is, unity magnitude and zero phase at all frequencies.
Therefore, a pair of complementary filters needs to satisfy the following condition:
\begin{equation}\label{eq:detail_control_comp_filter}
H_1(s) + H_2(s) = 1
\end{equation}
It will soon become clear why the complementary property is important for the sensor fusion architecture.
\subsection{Sensor Models and Sensor Normalization}
\label{sec:orgf7e77fa}
In order to study such a sensor fusion architecture, a model for the sensors is required.
Such a model is shown in Fig. \ref{fig:detail_control_sensor_model} and consists of a linear time invariant (LTI) system \(G_i(s)\) representing the sensor dynamics and an input \(n_i\) representing the sensor noise.
The model input \(x\) is the measured physical quantity and its output \(\tilde{x}_i\) is the ``raw'' output of the sensor.
Before filtering the sensor outputs \(\tilde{x}_i\) by the complementary filters, the sensors are usually normalized to simplify the fusion.
This normalization consists of using an estimate \(\hat{G}_i(s)\) of the sensor dynamics \(G_i(s)\), and filtering the sensor output by the inverse of this estimate \(\hat{G}_i^{-1}(s)\) as shown in Fig. \ref{fig:detail_control_sensor_model_calibrated}.
It is here supposed that the sensor inverse \(\hat{G}_i^{-1}(s)\) is proper and stable.
This way, the units of the estimates \(\hat{x}_i\) are equal to the units of the physical quantity \(x\).
The sensor dynamics estimate \(\hat{G}_i(s)\) can be a simple gain or a more complex transfer function.
\begin{figure}[htbp]
\begin{subfigure}{0.48\textwidth}
\begin{center}
\includegraphics[scale=1,scale=1]{figs/detail_control_sensor_model.png}
\end{center}
\subcaption{\label{fig:detail_control_sensor_model}Basic sensor model consisting of a noise input $n_i$ and a linear time invariant transfer function $G_i(s)$}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\begin{center}
\includegraphics[scale=1,scale=1]{figs/detail_control_sensor_model_calibrated.png}
\end{center}
\subcaption{\label{fig:detail_control_sensor_model_calibrated}Normalized sensor using the inverse of an estimate $\hat{G}_i(s)$ of the sensor dynamics}
\end{subfigure}
\caption{\label{fig:detail_control_sensor_models}Sensor models with and without normalization.}
\end{figure}
Two normalized sensors are then combined to form a super sensor as shown in Fig. \ref{fig:detail_control_fusion_super_sensor}.
The two sensors are measuring the same physical quantity \(x\) with dynamics \(G_1(s)\) and \(G_2(s)\), and with \emph{uncorrelated} noises \(n_1\) and \(n_2\).
The signals from both normalized sensors are fed into two complementary filters \(H_1(s)\) and \(H_2(s)\) and then combined to yield an estimate \(\hat{x}\) of \(x\).
The super sensor output is therefore equal to:
\begin{equation}\label{eq:detail_control_comp_filter_estimate}
\hat{x} = \Big( H_1(s) \hat{G}_1^{-1}(s) G_1(s) + H_2(s) \hat{G}_2^{-1}(s) G_2(s) \Big) x + H_1(s) \hat{G}_1^{-1}(s) G_1(s) n_1 + H_2(s) \hat{G}_2^{-1}(s) G_2(s) n_2
\end{equation}
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_fusion_super_sensor.png}
\caption{\label{fig:detail_control_fusion_super_sensor}Sensor fusion architecture with two normalized sensors.}
\end{figure}
\subsection{Noise Sensor Filtering}
\label{sec:orgf16500e}
In this section, it is supposed that all the sensors are perfectly normalized, such that:
\begin{equation}\label{eq:detail_control_perfect_dynamics}
\frac{\hat{x}_i}{x} = \hat{G}_i^{-1}(s) G_i(s) = 1
\end{equation}
The effect of a non-perfect normalization will be discussed in the next section.
Provided \eqref{eq:detail_control_perfect_dynamics} is verified, the super sensor output \(\hat{x}\) is then equal to:
\begin{equation}\label{eq:detail_control_estimate_perfect_dyn}
\hat{x} = x + H_1(s) n_1 + H_2(s) n_2
\end{equation}
From \eqref{eq:detail_control_estimate_perfect_dyn}, the complementary filters \(H_1(s)\) and \(H_2(s)\) are shown to only operate on the noise of the sensors.
Thus, this sensor fusion architecture makes it possible to filter the noise of both sensors without introducing any distortion in the physical quantity to be measured.
This is why the two filters must be complementary.
The estimation error \(\delta x\), defined as the difference between the sensor output \(\hat{x}\) and the measured quantity \(x\), is computed for the super sensor \eqref{eq:detail_control_estimate_error}.
\begin{equation}\label{eq:detail_control_estimate_error}
\delta x \triangleq \hat{x} - x = H_1(s) n_1 + H_2(s) n_2
\end{equation}
As shown in \eqref{eq:detail_control_noise_filtering_psd}, the Power Spectral Density (PSD) of the estimation error \(\Phi_{\delta x}\) depends both on the norm of the two complementary filters and on the PSD of the noise sources \(\Phi_{n_1}\) and \(\Phi_{n_2}\).
\begin{equation}\label{eq:detail_control_noise_filtering_psd}
\Phi_{\delta x}(\omega) = \left|H_1(j\omega)\right|^2 \Phi_{n_1}(\omega) + \left|H_2(j\omega)\right|^2 \Phi_{n_2}(\omega)
\end{equation}
If the two sensors have identical noise characteristics, \(\Phi_{n_1}(\omega) = \Phi_{n_2}(\omega)\), a simple averaging (\(H_1(s) = H_2(s) = 0.5\)) minimizes the super sensor noise.
This is the simplest form of sensor fusion with complementary filters.
However, the two sensors usually have high noise levels over distinct frequency regions.
In such case, to lower the noise of the super sensor, the norm \(|H_1(j\omega)|\) has to be small when \(\Phi_{n_1}(\omega)\) is larger than \(\Phi_{n_2}(\omega)\) and the norm \(|H_2(j\omega)|\) has to be small when \(\Phi_{n_2}(\omega)\) is larger than \(\Phi_{n_1}(\omega)\).
Hence, by properly shaping the norm of the complementary filters, it is possible to reduce the noise of the super sensor.
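As a minimal illustration of \eqref{eq:detail_control_noise_filtering_psd}, the following Python sketch (assuming NumPy is available; the noise PSDs and first order complementary filters are arbitrary illustrative choices, not data from this section) computes the super sensor noise PSD.
\begin{verbatim}
import numpy as np

# Frequency vector [Hz] (illustrative)
f = np.logspace(-1, 3, 1000)
w = 2 * np.pi * f

# Arbitrary noise PSDs: sensor 1 is noisy at high frequency,
# sensor 2 is noisy at low frequency (illustrative shapes only)
Phi_n1 = 1e-12 * (1 + (f / 10) ** 2)
Phi_n2 = 1e-12 * (1 + (10 / f) ** 2)

# First order complementary filters with a 10 Hz blending frequency
w0 = 2 * np.pi * 10
H1 = 1 / (1 + 1j * w / w0)   # low-pass: small where Phi_n1 is large
H2 = 1 - H1                  # complementary high-pass

# PSD of the super sensor estimation error
Phi_dx = np.abs(H1)**2 * Phi_n1 + np.abs(H2)**2 * Phi_n2
\end{verbatim}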
\subsection{Sensor Fusion Robustness}
\label{sec:orgfd6ea33}
In practical systems the sensor normalization is not perfect and condition \eqref{eq:detail_control_perfect_dynamics} is not verified.
In order to study such imperfection, a multiplicative input uncertainty is added to the sensor dynamics (Fig. \ref{fig:detail_control_sensor_model_uncertainty}).
The nominal model is the estimated model used for the normalization \(\hat{G}_i(s)\), \(\Delta_i(s)\) is any stable transfer function satisfying \(|\Delta_i(j\omega)| \le 1,\ \forall\omega\), and \(w_i(s)\) is a weighting transfer function representing the magnitude of the uncertainty.
The weight \(w_i(s)\) is chosen such that the real sensor dynamics \(G_i(j\omega)\) is contained in the uncertain region represented by a circle in the complex plane, centered on \(1\) and with a radius equal to \(|w_i(j\omega)|\).
As the nominal sensor dynamics is taken as the normalized filter, the normalized sensor can be further simplified as shown in Fig. \ref{fig:detail_control_sensor_model_uncertainty_simplified}.
\begin{figure}[htbp]
\begin{subfigure}{0.58\textwidth}
\begin{center}
\includegraphics[scale=1,width=0.95\linewidth]{figs/detail_control_sensor_model_uncertainty.png}
\end{center}
\subcaption{\label{fig:detail_control_sensor_model_uncertainty}Sensor with multiplicative input uncertainty}
\end{subfigure}
\begin{subfigure}{0.38\textwidth}
\begin{center}
\includegraphics[scale=1,width=0.95\linewidth]{figs/detail_control_sensor_model_uncertainty_simplified.png}
\end{center}
\subcaption{\label{fig:detail_control_sensor_model_uncertainty_simplified}Simplified sensor model}
\end{subfigure}
\caption{\label{fig:detail_control_sensor_models_uncertainty}Sensor models with dynamical uncertainty}
\end{figure}
The sensor fusion architecture with the sensor models including dynamical uncertainty is shown in Fig. \ref{fig:detail_control_sensor_fusion_dynamic_uncertainty}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_sensor_fusion_dynamic_uncertainty.png}
\caption{\label{fig:detail_control_sensor_fusion_dynamic_uncertainty}Sensor fusion architecture with sensor dynamics uncertainty}
\end{figure}
The super sensor dynamics \eqref{eq:detail_control_super_sensor_dyn_uncertainty} is no longer equal to \(1\) and now depends on the sensor dynamical uncertainty weights \(w_i(s)\) as well as on the complementary filters \(H_i(s)\).
\begin{equation}\label{eq:detail_control_super_sensor_dyn_uncertainty}
\frac{\hat{x}}{x} = 1 + w_1(s) H_1(s) \Delta_1(s) + w_2(s) H_2(s) \Delta_2(s)
\end{equation}
The dynamical uncertainty of the super sensor can be graphically represented in the complex plane by a circle centered on \(1\) with a radius equal to \(|w_1(j\omega) H_1(j\omega)| + |w_2(j\omega) H_2(j\omega)|\) (Fig. \ref{fig:detail_control_uncertainty_set_super_sensor}).
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_uncertainty_set_super_sensor.png}
\caption{\label{fig:detail_control_uncertainty_set_super_sensor}Uncertainty region of the super sensor dynamics in the complex plane (grey circle). The contribution of both sensors 1 and 2 to the total uncertainty are represented respectively by a blue circle and a red circle. The frequency dependency \(\omega\) is here omitted.}
\end{figure}
The super sensor dynamical uncertainty, and hence the robustness of the fusion, clearly depends on the complementary filters' norm.
For instance, the phase \(\Delta\phi(\omega)\) added by the super sensor dynamics at frequency \(\omega\) is bounded by \(\Delta\phi_{\text{max}}(\omega)\) which can be found by drawing a tangent from the origin to the uncertainty circle of the super sensor (Fig. \ref{fig:detail_control_uncertainty_set_super_sensor}) and that is mathematically described by \eqref{eq:detail_control_max_phase_uncertainty}.
\begin{equation}\label{eq:detail_control_max_phase_uncertainty}
\Delta\phi_\text{max}(\omega) = \arcsin\big( |w_1(j\omega) H_1(j\omega)| + |w_2(j\omega) H_2(j\omega)| \big)
\end{equation}
As it is generally desired to limit the maximum phase added by the super sensor, \(H_1(s)\) and \(H_2(s)\) should be designed such that \(\Delta \phi\) is bounded to acceptable values.
Typically, the norm of the complementary filter \(|H_i(j\omega)|\) should be made small when \(|w_i(j\omega)|\) is large, i.e., at frequencies where the sensor dynamics is uncertain.
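As an illustrative numerical example (values chosen arbitrarily): if at some frequency \(|w_1(j\omega) H_1(j\omega)| + |w_2(j\omega) H_2(j\omega)| = 0.5\), the phase added by the super sensor at that frequency is bounded by \(\Delta\phi_\text{max} = \arcsin(0.5) = \SI{30}{\degree}\); keeping this sum below approximately \(0.17\) limits the added phase to less than \(\SI{10}{\degree}\).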
\section{Complementary Filters Shaping}
\label{sec:orgd7c419a}
\label{ssec:detail_control_hinf_method}
As shown in Section \ref{ssec:detail_control_sensor_fusion_requirements}, the noise and robustness of the super sensor are a function of the complementary filters' norm.
Therefore, a synthesis method of complementary filters that allows their norm to be shaped would be of great use.
In this section, such synthesis is proposed by writing the synthesis objective as a standard \(\mathcal{H}_\infty\) optimization problem.
As weighting functions are used to represent the wanted complementary filters' shape during the synthesis, their proper design is discussed.
Finally, the synthesis method is validated on a simple example.
\subsection{Synthesis Objective}
\label{sec:orgd2e0d7e}
The synthesis objective is to shape the norm of two filters \(H_1(s)\) and \(H_2(s)\) while ensuring their complementary property \eqref{eq:detail_control_comp_filter}.
This is equivalent to finding proper and stable transfer functions \(H_1(s)\) and \(H_2(s)\) such that conditions \eqref{eq:detail_control_hinf_cond_complementarity}, \eqref{eq:detail_control_hinf_cond_h1} and \eqref{eq:detail_control_hinf_cond_h2} are satisfied.
\begin{subequations}\label{eq:detail_control_comp_filter_problem_form}
\begin{align}
& H_1(s) + H_2(s) = 1 \label{eq:detail_control_hinf_cond_complementarity} \\
& |H_1(j\omega)| \le \frac{1}{|W_1(j\omega)|} \quad \forall\omega \label{eq:detail_control_hinf_cond_h1} \\
& |H_2(j\omega)| \le \frac{1}{|W_2(j\omega)|} \quad \forall\omega \label{eq:detail_control_hinf_cond_h2}
\end{align}
\end{subequations}
\(W_1(s)\) and \(W_2(s)\) are two weighting transfer functions that are carefully chosen to specify the maximum wanted norm of the complementary filters during the synthesis.
\subsection{Shaping of Complementary Filters using \(\mathcal{H}_\infty\) synthesis}
\label{sec:orgbfba454}
In this section, it is shown that the synthesis objective can be easily expressed as a standard \(\mathcal{H}_\infty\) optimization problem and therefore solved using convenient tools readily available.
Consider the generalized plant \(P(s)\) shown in Fig. \ref{fig:detail_control_h_infinity_robust_fusion_plant} and mathematically described by \eqref{eq:detail_control_generalized_plant}.
\begin{equation}\label{eq:detail_control_generalized_plant}
\begin{bmatrix} z_1 \\ z_2 \\ v \end{bmatrix} = P(s) \begin{bmatrix} w\\u \end{bmatrix}; \quad P(s) = \begin{bmatrix}W_1(s) & -W_1(s) \\ 0 & \phantom{+}W_2(s) \\ 1 & 0 \end{bmatrix}
\end{equation}
\begin{figure}[htbp]
\begin{subfigure}{0.49\textwidth}
\begin{center}
\includegraphics[scale=1,scale=1]{figs/detail_control_h_infinity_robust_fusion_plant.png}
\end{center}
\subcaption{\label{fig:detail_control_h_infinity_robust_fusion_plant}Generalized plant}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\begin{center}
\includegraphics[scale=1,scale=1]{figs/detail_control_h_infinity_robust_fusion_fb.png}
\end{center}
\subcaption{\label{fig:detail_control_h_infinity_robust_fusion_fb}Generalized plant with the synthesized filter}
\end{subfigure}
\caption{\label{fig:detail_control_h_infinity_robust_fusion}Architecture for the \(\mathcal{H}_\infty\) synthesis of complementary filters}
\end{figure}
Applying the standard \(\mathcal{H}_\infty\) synthesis to the generalized plant \(P(s)\) is then equivalent to finding a stable filter \(H_2(s)\) which, based on \(v\), generates a signal \(u\) such that the \(\mathcal{H}_\infty\) norm of the system in Fig. \ref{fig:detail_control_h_infinity_robust_fusion_fb} from \(w\) to \([z_1, \ z_2]\) is less than one \eqref{eq:detail_control_hinf_syn_obj}.
\begin{equation}\label{eq:detail_control_hinf_syn_obj}
\left\|\begin{matrix} \left(1 - H_2(s)\right) W_1(s) \\ H_2(s) W_2(s) \end{matrix}\right\|_\infty \le 1
\end{equation}
By then defining \(H_1(s)\) to be the complementary of \(H_2(s)\) \eqref{eq:detail_control_definition_H1}, the \(\mathcal{H}_\infty\) synthesis objective becomes equivalent to \eqref{eq:detail_control_hinf_problem} which ensures that \eqref{eq:detail_control_hinf_cond_h1} and \eqref{eq:detail_control_hinf_cond_h2} are satisfied.
\begin{equation}\label{eq:detail_control_definition_H1}
H_1(s) \triangleq 1 - H_2(s)
\end{equation}
\begin{equation}\label{eq:detail_control_hinf_problem}
\left\|\begin{matrix} H_1(s) W_1(s) \\ H_2(s) W_2(s) \end{matrix}\right\|_\infty \le 1
\end{equation}
Therefore, applying the \(\mathcal{H}_\infty\) synthesis to the standard plant \(P(s)\) \eqref{eq:detail_control_generalized_plant} will generate two filters \(H_2(s)\) and \(H_1(s) \triangleq 1 - H_2(s)\) that are complementary \eqref{eq:detail_control_comp_filter_problem_form} and whose norms are below the specified bounds \eqref{eq:detail_control_hinf_cond_h1}, \eqref{eq:detail_control_hinf_cond_h2}.
Note that there is only an implication between the \(\mathcal{H}_\infty\) norm condition \eqref{eq:detail_control_hinf_problem} and the initial synthesis objectives \eqref{eq:detail_control_hinf_cond_h1} and \eqref{eq:detail_control_hinf_cond_h2} and not an equivalence.
Hence, the optimization may be a little bit conservative with respect to the set of filters on which it is performed, see \cite[Chap. 2.8.3]{skogestad07_multiv_feedb_contr}.
In practice, however, this is not found to be an issue.
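To make the procedure concrete, a possible implementation using the python-control package (its hinfsyn routine requires the slycot backend) is sketched below. The tf_matrix helper and the first order weights W1 and W2 are illustrative assumptions for this sketch only; they do not correspond to the weights designed later in this section.
\begin{verbatim}
import numpy as np
import control as ct

def tf_matrix(rows):
    """Assemble a MIMO transfer function from a nested list of SISO tf's."""
    num = [[g.num[0][0] for g in row] for row in rows]
    den = [[g.den[0][0] for g in row] for row in rows]
    return ct.tf(num, den)

one, zero = ct.tf([1], [1]), ct.tf([0], [1])

# Illustrative first order weights with a 10 Hz blending frequency:
# 1/|W1| is small at high frequency, 1/|W2| is small at low frequency
w0 = 2 * np.pi * 10
W1 = ct.tf([1 / w0, 0.1], [1e-3 / w0, 1])
W2 = ct.tf([1e-3 / w0, 1], [1 / w0, 0.1])

# Generalized plant P(s) = [W1 -W1; 0 W2; 1 0]
P = tf_matrix([[W1, -W1],
               [zero, W2],
               [one, zero]])

# Standard H-infinity synthesis: 1 measurement (v), 1 control input (u)
H2_ss, CL, gam, rcond = ct.hinfsyn(ct.ss(P), 1, 1)

H2 = ct.tf(H2_ss)   # synthesized filter
H1 = one - H2       # complementary by construction
print("Achieved H-infinity norm:", gam)
\end{verbatim}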
\subsection{Weighting Functions Design}
\label{sec:orga5537db}
Weighting functions are used during the synthesis to specify the maximum allowed complementary filters' norm.
The proper design of these weighting functions is of primary importance for the success of the presented \(\mathcal{H}_\infty\) synthesis of complementary filters.
First, only proper and stable transfer functions should be used.
Second, the order of the weighting functions should stay reasonably small in order to reduce the computational costs associated with the solving of the optimization problem and for the physical implementation of the filters (the synthesized filters' order being equal to the sum of the weighting functions' order).
Third, one should not forget the fundamental limitations imposed by the complementary property \eqref{eq:detail_control_comp_filter}.
This implies for instance that \(|H_1(j\omega)|\) and \(|H_2(j\omega)|\) cannot be made small at the same frequency.
When designing complementary filters, it is usually desired to specify their slopes, their ``blending'' frequency and their maximum gains at low and high frequency.
To easily express these specifications, formula \eqref{eq:detail_control_weight_formula} is proposed to help with the design of weighting functions.
\begin{equation}\label{eq:detail_control_weight_formula}
W(s) = \left( \frac{
\hfill{} \frac{1}{\omega_c} \sqrt{\frac{1 - \left(\frac{G_0}{G_c}\right)^{\frac{2}{n}}}{1 - \left(\frac{G_c}{G_\infty}\right)^{\frac{2}{n}}}} s + \left(\frac{G_0}{G_c}\right)^{\frac{1}{n}}
}{
\left(\frac{1}{G_\infty}\right)^{\frac{1}{n}} \frac{1}{\omega_c} \sqrt{\frac{1 - \left(\frac{G_0}{G_c}\right)^{\frac{2}{n}}}{1 - \left(\frac{G_c}{G_\infty}\right)^{\frac{2}{n}}}} s + \left(\frac{1}{G_c}\right)^{\frac{1}{n}}
}\right)^n
\end{equation}
The parameters in formula \eqref{eq:detail_control_weight_formula} are:
\begin{itemize}
\item \(G_0 = \lim_{\omega \to 0} |W(j\omega)|\): the low frequency gain
\item \(G_\infty = \lim_{\omega \to \infty} |W(j\omega)|\): the high frequency gain
\item \(G_c = |W(j\omega_c)|\): the gain at a specific frequency \(\omega_c\) in \(\si{rad/s}\).
\item \(n\): the slope between high and low frequency. It also corresponds to the order of the weighting function.
\end{itemize}
The parameters \(G_0\), \(G_c\) and \(G_\infty\) should either satisfy \eqref{eq:detail_control_cond_formula_1} or \eqref{eq:detail_control_cond_formula_2}.
\begin{subequations}\label{eq:detail_control_condition_params_formula}
\begin{align}
G_0 < 1 < G_\infty \text{ and } G_0 < G_c < G_\infty \label{eq:detail_control_cond_formula_1}\\
G_\infty < 1 < G_0 \text{ and } G_\infty < G_c < G_0 \label{eq:detail_control_cond_formula_2}
\end{align}
\end{subequations}
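Formula \eqref{eq:detail_control_weight_formula} is straightforward to implement numerically. A possible sketch using the python-control package is given below; the function name make_weight is an arbitrary choice for this example.
\begin{verbatim}
import numpy as np
import control as ct

def make_weight(G0, G_inf, Gc, wc, n=2):
    """Weighting function with low frequency gain G0, high frequency gain
    G_inf, gain Gc at the frequency wc [rad/s], and slope/order n.
    The gains must satisfy G0 < Gc < G_inf or G_inf < Gc < G0."""
    k = (1 / wc) * np.sqrt((1 - (G0 / Gc) ** (2 / n))
                           / (1 - (Gc / G_inf) ** (2 / n)))
    # First order factor of the formula, raised to the power n below
    W1st = ct.tf([k, (G0 / Gc) ** (1 / n)],
                 [k * (1 / G_inf) ** (1 / n), (1 / Gc) ** (1 / n)])
    W = W1st
    for _ in range(n - 1):
        W = W * W1st
    return W

# Weight corresponding to the parameters of the figure below
W = make_weight(G0=1e-3, G_inf=10, Gc=2, wc=2 * np.pi * 10, n=3)
\end{verbatim}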
The typical magnitude of a weighting function generated using \eqref{eq:detail_control_weight_formula} is shown in Fig. \ref{fig:detail_control_weight_formula}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_weight_formula.png}
\caption{\label{fig:detail_control_weight_formula}Magnitude of a weighting function generated using formula \eqref{eq:detail_control_weight_formula}, \(G_0 = 10^{-3}\), \(G_\infty = 10\), \(\omega_c = \SI{10}{Hz}\), \(G_c = 2\), \(n = 3\).}
\end{figure}
\subsection{Validation of the proposed synthesis method}
\label{sec:orgd4c8c3e}
The proposed methodology for the design of complementary filters is now applied on a simple example.
Let's suppose two complementary filters \(H_1(s)\) and \(H_2(s)\) have to be designed such that:
\begin{itemize}
\item the blending frequency is around \(\SI{10}{Hz}\).
\item the slope of \(|H_1(j\omega)|\) is \(+2\) below \(\SI{10}{Hz}\).
Its low frequency gain is \(10^{-3}\).
\item the slope of \(|H_2(j\omega)|\) is \(-3\) above \(\SI{10}{Hz}\).
Its high frequency gain is \(10^{-3}\).
\end{itemize}
The first step is to translate the above requirements into the design of appropriate weighting functions.
The proposed formula \eqref{eq:detail_control_weight_formula} is here used for such purpose.
Parameters used are summarized in Table \ref{tab:detail_control_weights_params}.
The inverse magnitudes of the designed weighting functions, which are representing the maximum allowed norms of the complementary filters, are shown by the dashed lines in Fig. \ref{fig:detail_control_weights_W1_W2}.
\begin{minipage}[b]{0.44\linewidth}
\begin{center}
\begin{tabularx}{0.7\linewidth}{ccc}
\toprule
Parameter & \(W_1(s)\) & \(W_2(s)\)\\
\midrule
\(G_0\) & \(0.1\) & \(1000\)\\
\(G_{\infty}\) & \(1000\) & \(0.1\)\\
\(\omega_c\) & \(2 \pi \cdot 10\) & \(2 \pi \cdot 10\)\\
\(G_c\) & \(0.45\) & \(0.45\)\\
\(n\) & \(2\) & \(3\)\\
\bottomrule
\end{tabularx}
\end{center}
\captionof{table}{\label{tab:detail_control_weights_params}Parameters for \(W_1(s)\) and \(W_2(s)\)}
\end{minipage}
\hfill
\begin{minipage}[b]{0.52\linewidth}
\begin{center}
\includegraphics[scale=1,scale=1]{figs/detail_control_weights_W1_W2.png}
\captionof{figure}{\label{fig:detail_control_weights_W1_W2}Inverse magnitude of the weights}
\end{center}
\end{minipage}
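Assuming the hypothetical make_weight helper sketched in the previous section, the weights of Table \ref{tab:detail_control_weights_params} could be generated as follows (a usage sketch, not necessarily the code used to produce the figures).
\begin{verbatim}
W1 = make_weight(G0=0.1,  G_inf=1000, Gc=0.45, wc=2 * np.pi * 10, n=2)
W2 = make_weight(G0=1000, G_inf=0.1,  Gc=0.45, wc=2 * np.pi * 10, n=3)
\end{verbatim}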
The standard \(\mathcal{H}_\infty\) synthesis is then applied to the generalized plant of Fig. \ref{fig:detail_control_h_infinity_robust_fusion_plant}.
The filter \(H_2(s)\) that minimizes the \(\mathcal{H}_\infty\) norm between \(w\) and \([z_1,\ z_2]^T\) is obtained.
The \(\mathcal{H}_\infty\) norm is here found to be close to one \eqref{eq:detail_control_hinf_synthesis_result} which indicates that the synthesis is successful: the complementary filters' norms are below the maximum specified upper bounds.
This is confirmed by the Bode plots of the obtained complementary filters in Fig. \ref{fig:detail_control_hinf_filters_results}.
\begin{equation}\label{eq:detail_control_hinf_synthesis_result}
\left\|\begin{matrix} \left(1 - H_2(s)\right) W_1(s) \\ H_2(s) W_2(s) \end{matrix}\right\|_\infty \approx 1
\end{equation}
The transfer functions in the Laplace domain of the complementary filters are given in \eqref{eq:detail_control_hinf_synthesis_result_tf}.
As expected, the obtained filters are of order \(5\), which is the sum of the weighting functions' orders.
\begin{subequations}\label{eq:detail_control_hinf_synthesis_result_tf}
\begin{align}
H_2(s) &= \frac{(s+6.6e^4) (s+160) (s+4)^3}{(s+6.6e^4) (s^2 + 106 s + 3e^3) (s^2 + 72s + 3580)} \\
H_1(s) &\triangleq 1 - H_2(s) = \frac{10^{-8} (s+6.6e^9) (s+3450)^2 (s^2 + 49s + 895)}{(s+6.6e^4) (s^2 + 106 s + 3e^3) (s^2 + 72s + 3580)}
\end{align}
\end{subequations}
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_hinf_filters_results.png}
\caption{\label{fig:detail_control_hinf_filters_results}Bode plot of the obtained complementary filters}
\end{figure}
This simple example illustrates the fact that the proposed methodology for complementary filters shaping is easy to use and effective.
An alternative way of implementing and synthesizing complementary filters is presented in the next section.
\section{``Closed-Loop'' complementary filters}
\label{sec:orgc89b4c8}
\label{ssec:detail_control_closed_loop_complementary_filters}
An alternative way to implement complementary filters is by using a fundamental property of the classical feedback architecture shown in Fig. \ref{fig:detail_control_feedback_sensor_fusion}.
This idea is discussed in \cite{mahony05_compl_filter_desig_special_orthog,plummer06_optim_compl_filter_their_applic_motion_measur,jensen13_basic_uas}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_feedback_sensor_fusion.png}
\caption{\label{fig:detail_control_feedback_sensor_fusion}``Closed-Loop'' complementary filters.}
\end{figure}
Consider the feedback architecture of Fig. \ref{fig:detail_control_feedback_sensor_fusion}, with two inputs \(\hat{x}_1\) and \(\hat{x}_2\), and one output \(\hat{x}\).
The output \(\hat{x}\) is linked to the inputs by \eqref{eq:detail_control_closed_loop_complementary_filters}.
\begin{equation}\label{eq:detail_control_closed_loop_complementary_filters}
\hat{x} = \underbrace{\frac{1}{1 + L(s)}}_{S(s)} \hat{x}_1 + \underbrace{\frac{L(s)}{1 + L(s)}}_{T(s)} \hat{x}_2
\end{equation}
As for any classical feedback architecture, the sum of the sensitivity transfer function \(S(s)\) and the complementary sensitivity transfer function \(T(s)\) is equal to one \eqref{eq:detail_control_sensitivity_sum}.
\begin{equation}\label{eq:detail_control_sensitivity_sum}
S(s) + T(s) = 1
\end{equation}
Therefore, provided that the closed-loop system in Fig. \ref{fig:detail_control_feedback_sensor_fusion} is stable, it can be used as a set of two complementary filters.
Two sensors can then be merged as shown in Fig. \ref{fig:detail_control_feedback_sensor_fusion_arch}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_feedback_sensor_fusion_arch.png}
\caption{\label{fig:detail_control_feedback_sensor_fusion_arch}Classical feedback architecture used for sensor fusion.}
\end{figure}
One of the main advantages of implementing and designing complementary filters using the feedback architecture of Fig. \ref{fig:detail_control_feedback_sensor_fusion} is that all the tools of linear control theory can be applied to the design of the filters.
If one wants to shape both \(\frac{\hat{x}}{\hat{x}_1}(s) = S(s)\) and \(\frac{\hat{x}}{\hat{x}_2}(s) = T(s)\), the \(\mathcal{H}_\infty\) mixed-sensitivity synthesis can be easily applied.
To do so, weighting functions \(W_1(s)\) and \(W_2(s)\) are added to respectively shape \(S(s)\) and \(T(s)\) (Fig. \ref{fig:detail_control_feedback_synthesis_architecture}).
Then the system is rearranged to form the generalized plant \(P_L(s)\) shown in Fig. \ref{fig:detail_control_feedback_synthesis_architecture_generalized_plant}.
The \(\mathcal{H}_\infty\) mixed-sensitivity synthesis can finally be performed by applying the standard \(\mathcal{H}_\infty\) synthesis to the generalized plant \(P_L(s)\) which is described by \eqref{eq:detail_control_generalized_plant_mixed_sensitivity}.
\begin{equation}\label{eq:detail_control_generalized_plant_mixed_sensitivity}
\begin{bmatrix} z \\ v \end{bmatrix} = P_L(s) \begin{bmatrix} w_1 \\ w_2 \\ u \end{bmatrix}; \quad P_L(s) = \begin{bmatrix}
\phantom{+}W_1(s) & 0 & \phantom{+}1 \\
-W_1(s) & W_2(s) & -1
\end{bmatrix}
\end{equation}
The output of the synthesis is a filter \(L(s)\) such that the ``closed-loop'' \(\mathcal{H}_\infty\) norm from \([w_1,\ w_2]\) to \(z\) of the system in Fig. \ref{fig:detail_control_feedback_sensor_fusion} is less than one \eqref{eq:detail_control_comp_filters_feedback_obj}.
\begin{equation}\label{eq:detail_control_comp_filters_feedback_obj}
\left\| \begin{matrix} \frac{z}{w_1} \\ \frac{z}{w_2} \end{matrix} \right\|_\infty = \left\| \begin{matrix} \frac{1}{1 + L(s)} W_1(s) \\ \frac{L(s)}{1 + L(s)} W_2(s) \end{matrix} \right\|_\infty \le 1
\end{equation}
If the synthesis is successful, the transfer functions from \(\hat{x}_1\) to \(\hat{x}\) and from \(\hat{x}_2\) to \(\hat{x}\) have their magnitude bounded by the inverse magnitude of the corresponding weighting functions.
The sensor fusion can then be implemented using the feedback architecture in Fig. \ref{fig:detail_control_feedback_sensor_fusion_arch} or more classically as shown in Fig. \ref{fig:detail_control_sensor_fusion_overview} by defining the two complementary filters using \eqref{eq:detail_control_comp_filters_feedback}.
The two architectures are equivalent regarding their inputs/outputs relationships.
\begin{equation}\label{eq:detail_control_comp_filters_feedback}
H_1(s) = \frac{1}{1 + L(s)}; \quad H_2(s) = \frac{L(s)}{1 + L(s)}
\end{equation}
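A sketch of this mixed-sensitivity synthesis with the python-control package is given below, reusing the hypothetical tf_matrix helper and assuming the weights W1 and W2 are already defined as SISO transfer functions.
\begin{verbatim}
import control as ct

one, zero = ct.tf([1], [1]), ct.tf([0], [1])

# Generalized plant P_L(s) = [W1 0 1; -W1 W2 -1]
P_L = tf_matrix([[W1,  zero,  one],
                 [-W1, W2,   -one]])

# H-infinity synthesis: 1 measurement (v), 1 control input (u) -> L(s)
L_ss, CL, gam, rcond = ct.hinfsyn(ct.ss(P_L), 1, 1)
L = ct.tf(L_ss)

# "Closed-loop" complementary filters
H1 = ct.feedback(one, L)   # S(s) = 1 / (1 + L)
H2 = ct.feedback(L, one)   # T(s) = L / (1 + L)
\end{verbatim}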
\begin{figure}[htbp]
\begin{subfigure}{0.58\textwidth}
\begin{center}
\includegraphics[scale=1,scale=1]{figs/detail_control_feedback_synthesis_architecture.png}
\end{center}
\subcaption{\label{fig:detail_control_feedback_synthesis_architecture}Feedback architecture with included weights}
\end{subfigure}
\begin{subfigure}{0.38\textwidth}
\begin{center}
\includegraphics[scale=1,scale=1]{figs/detail_control_feedback_synthesis_architecture_generalized_plant.png}
\end{center}
\subcaption{\label{fig:detail_control_feedback_synthesis_architecture_generalized_plant}Generalized plant}
\end{subfigure}
\caption{\label{fig:detail_control_h_inf_mixed_sensitivity_synthesis}\(\mathcal{H}_\infty\) mixed-sensitivity synthesis}
\end{figure}
As an example, two ``closed-loop'' complementary filters are designed using the \(\mathcal{H}_\infty\) mixed-sensitivity synthesis.
The weighting functions are designed using formula \eqref{eq:detail_control_weight_formula} with parameters shown in Table \ref{tab:detail_control_weights_params}.
After synthesis, a filter \(L(s)\) is obtained whose magnitude is shown in Fig. \ref{fig:detail_control_hinf_filters_results_mixed_sensitivity} by the black dashed line.
The ``closed-loop'' complementary filters are compared with the inverse magnitude of the weighting functions in Fig. \ref{fig:detail_control_hinf_filters_results_mixed_sensitivity} confirming that the synthesis is successful.
The obtained ``closed-loop'' complementary filters are indeed equal to the ones obtained in Section \ref{ssec:detail_control_hinf_method}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_hinf_filters_results_mixed_sensitivity.png}
\caption{\label{fig:detail_control_hinf_filters_results_mixed_sensitivity}Bode plot of the obtained complementary filters after \(\mathcal{H}_\infty\) mixed-sensitivity synthesis}
\end{figure}
\section{Synthesis of a set of three complementary filters}
\label{sec:orgb190cc7}
\label{sec:detail_control_hinf_three_comp_filters}
Some applications may require the fusion of more than two sensors \cite{stoten01_fusion_kinet_data_using_compos_filter,fonseca15_compl}.
For instance at the LIGO, three sensors (an LVDT, a seismometer and a geophone) are merged to form a super sensor \cite{matichard15_seism_isolat_advan_ligo}.
When merging \(n>2\) sensors using complementary filters, two architectures can be used as shown in Fig. \ref{fig:detail_control_sensor_fusion_three}.
The fusion can either be done in a ``sequential'' way where \(n-1\) sets of two complementary filters are used (Fig. \ref{fig:detail_control_sensor_fusion_three_sequential}), or in a ``parallel'' way where one set of \(n\) complementary filters is used (Fig. \ref{fig:detail_control_sensor_fusion_three_parallel}).
In the first case, typical sensor fusion synthesis techniques can be used.
However, when a parallel architecture is used, a new synthesis method for a set of more than two complementary filters is required as only simple analytical formulas have been proposed in the literature \cite{stoten01_fusion_kinet_data_using_compos_filter,fonseca15_compl}.
A generalization of the proposed synthesis method of complementary filters is presented in this section.
\begin{figure}[htbp]
\begin{subfigure}{0.58\textwidth}
\begin{center}
\includegraphics[scale=1,scale=0.9]{figs/detail_control_sensor_fusion_three_sequential.png}
\end{center}
\subcaption{\label{fig:detail_control_sensor_fusion_three_sequential}Sequential fusion}
\end{subfigure}
\begin{subfigure}{0.38\textwidth}
\begin{center}
\includegraphics[scale=1,scale=0.9]{figs/detail_control_sensor_fusion_three_parallel.png}
\end{center}
\subcaption{\label{fig:detail_control_sensor_fusion_three_parallel}Parallel fusion}
\end{subfigure}
\caption{\label{fig:detail_control_sensor_fusion_three}Possible sensor fusion architecture when more than two sensors are to be merged}
\end{figure}
The synthesis objective is to compute a set of \(n\) stable transfer functions \([H_1(s),\ H_2(s),\ \dots,\ H_n(s)]\) such that conditions \eqref{eq:detail_control_hinf_cond_compl_gen} and \eqref{eq:detail_control_hinf_cond_perf_gen} are satisfied.
\begin{subequations}\label{eq:detail_control_hinf_problem_gen}
\begin{align}
& \sum_{i=1}^n H_i(s) = 1 \label{eq:detail_control_hinf_cond_compl_gen} \\
& \left| H_i(j\omega) \right| < \frac{1}{\left| W_i(j\omega) \right|}, \quad \forall \omega,\ i = 1 \dots n \label{eq:detail_control_hinf_cond_perf_gen}
\end{align}
\end{subequations}
\([W_1(s),\ W_2(s),\ \dots,\ W_n(s)]\) are weighting transfer functions that are chosen to specify the maximum complementary filters' norm during the synthesis.
Such synthesis objective is closely related to the one described in Section \ref{ssec:detail_control_hinf_method}, and indeed the proposed synthesis method is a generalization of the one previously presented.
A set of \(n\) complementary filters can be shaped by applying the standard \(\mathcal{H}_\infty\) synthesis to the generalized plant \(P_n(s)\) described by \eqref{eq:detail_control_generalized_plant_n_filters}.
\begin{equation}\label{eq:detail_control_generalized_plant_n_filters}
\begin{bmatrix} z_1 \\ \vdots \\ z_n \\ v \end{bmatrix} = P_n(s) \begin{bmatrix} w \\ u_1 \\ \vdots \\ u_{n-1} \end{bmatrix}; \quad
P_n(s) = \begin{bmatrix}
W_1 & -W_1 & \dots & \dots & -W_1 \\
0 & W_2 & 0 & \dots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
0 & \dots & \dots & 0 & W_n \\
1 & 0 & \dots & \dots & 0
\end{bmatrix}
\end{equation}
If the synthesis is successful, a set of \(n-1\) filters \([H_2(s),\ H_3(s),\ \dots,\ H_n(s)]\) is obtained such that \eqref{eq:detail_control_hinf_syn_obj_gen} is verified.
\begin{equation}\label{eq:detail_control_hinf_syn_obj_gen}
\left\|\begin{matrix} \left(1 - \left[ H_2(s) + H_3(s) + \dots + H_n(s) \right]\right) W_1(s) \\ H_2(s) W_2(s) \\ \vdots \\ H_n(s) W_n(s) \end{matrix}\right\|_\infty \le 1
\end{equation}
\(H_1(s)\) is then defined using \eqref{eq:detail_control_h1_comp_h2_hn}, which ensures the complementary property of the set of \(n\) filters \eqref{eq:detail_control_hinf_cond_compl_gen}.
Condition \eqref{eq:detail_control_hinf_cond_perf_gen} is satisfied thanks to \eqref{eq:detail_control_hinf_syn_obj_gen}.
\begin{equation}\label{eq:detail_control_h1_comp_h2_hn}
H_1(s) \triangleq 1 - \big[ H_2(s) + H_3(s) + \dots + H_n(s) \big]
\end{equation}
An example is given to validate the proposed method for the synthesis of a set of three complementary filters.
The sensors to be merged are a displacement sensor from DC up to \(\SI{1}{Hz}\), a geophone from \(1\) to \(\SI{10}{Hz}\) and an accelerometer above \(\SI{10}{Hz}\).
Three weighting functions are designed using formula \eqref{eq:detail_control_weight_formula} and their inverse magnitude are shown in Fig. \ref{fig:detail_control_three_complementary_filters_results} (dashed curves).
Consider the generalized plant \(P_3(s)\) shown in Fig. \ref{fig:detail_control_comp_filter_three_hinf_gen_plant} which is also described by \eqref{eq:detail_control_generalized_plant_three_filters}.
\begin{equation}\label{eq:detail_control_generalized_plant_three_filters}
\begin{bmatrix} z_1 \\ z_2 \\ z_3 \\ v \end{bmatrix} = P_3(s) \begin{bmatrix} w \\ u_1 \\ u_2 \end{bmatrix}; \quad P_3(s) = \begin{bmatrix}W_1(s) & -W_1(s) & -W_1(s) \\ 0 & \phantom{+}W_2(s) & 0 \\ 0 & 0 & \phantom{+}W_3(s) \\ 1 & 0 & 0 \end{bmatrix}
\end{equation}
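The synthesis for this set of three complementary filters could be sketched as follows with the python-control package (again reusing the hypothetical tf_matrix helper; W1, W2 and W3 are assumed to be already designed SISO weights).
\begin{verbatim}
import control as ct

one, zero = ct.tf([1], [1]), ct.tf([0], [1])

# Generalized plant P3(s): inputs [w, u1, u2], outputs [z1, z2, z3, v]
P3 = tf_matrix([[W1,  -W1,  -W1],
                [zero,  W2, zero],
                [zero, zero,  W3],
                [one,  zero, zero]])

# 1 measurement (v), 2 control inputs (u1, u2) -> filters H2(s) and H3(s)
K, CL, gam, rcond = ct.hinfsyn(ct.ss(P3), 1, 2)
Ktf = ct.tf(K)

H2 = ct.tf(Ktf.num[0][0], Ktf.den[0][0])   # first output of K
H3 = ct.tf(Ktf.num[1][0], Ktf.den[1][0])   # second output of K
H1 = one - H2 - H3                         # complementary by construction
\end{verbatim}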
\begin{figure}[htbp]
\begin{subfigure}{0.48\textwidth}
\begin{center}
\includegraphics[scale=1,scale=1]{figs/detail_control_comp_filter_three_hinf_gen_plant.png}
\end{center}
\subcaption{\label{fig:detail_control_comp_filter_three_hinf_gen_plant}Generalized plant}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\begin{center}
\includegraphics[scale=1,scale=1]{figs/detail_control_comp_filter_three_hinf_fb.png}
\end{center}
\subcaption{\label{fig:detail_control_comp_filter_three_hinf_fb}Generalized plant with the synthesized filter}
\end{subfigure}
\caption{\label{fig:detail_control_comp_filter_three_hinf}Architecture for the \(\mathcal{H}_\infty\) synthesis of three complementary filters}
\end{figure}
The standard \(\mathcal{H}_\infty\) synthesis is performed on the generalized plant \(P_3(s)\).
Two filters \(H_2(s)\) and \(H_3(s)\) are obtained such that the \(\mathcal{H}_\infty\) norm of the closed-loop transfer from \(w\) to \([z_1,\ z_2,\ z_3]\) of the system in Fig. \ref{fig:detail_control_comp_filter_three_hinf_fb} is less than one.
Filter \(H_1(s)\) is defined using \eqref{eq:detail_control_h1_compl_h2_h3} thus ensuring the complementary property of the obtained set of filters.
\begin{equation}\label{eq:detail_control_h1_compl_h2_h3}
H_1(s) \triangleq 1 - \big[ H_2(s) + H_3(s) \big]
\end{equation}
Figure \ref{fig:detail_control_three_complementary_filters_results} displays the three synthesized complementary filters (solid lines) which confirms that the synthesis is successful.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_three_complementary_filters_results.png}
\caption{\label{fig:detail_control_three_complementary_filters_results}Bode plot of the inverse weighting functions and of the three complementary filters obtained using the \(\mathcal{H}_\infty\) synthesis}
\end{figure}
\section*{Conclusion}
\label{sec:orgfc547f6}
A new method for designing complementary filters using the \(\mathcal{H}_\infty\) synthesis has been proposed.
It allows the magnitude of the filters to be shaped through the use of weighting functions during the synthesis.
This is very valuable in practice as the characteristics of the super sensor are linked to the complementary filters' magnitude.
Therefore typical sensor fusion objectives can be translated into requirements on the magnitudes of the filters.
Several examples were used to emphasize the simplicity and the effectiveness of the proposed method.
However, the shaping of the complementary filters' magnitude does not directly allow the optimization of the super sensor noise and dynamical characteristics.
Future work will aim at developing a complementary filter synthesis method that minimizes the super sensor noise while ensuring the robustness of the fusion.
\chapter{Decoupling}
\label{sec:org55fd174}
\label{sec:detail_control_decoupling}
\begin{itemize}
\item[{$\square$}] Add some citations about different methods
\end{itemize}
When dealing with MIMO systems, a typical strategy is to:
\begin{itemize}
\item first decouple the plant dynamics
\item then apply SISO control to the decoupled plant
\end{itemize}
Assumptions:
\begin{itemize}
\item parallel manipulators
\end{itemize}
Review of decoupling strategies for Stewart platforms:
\begin{itemize}
\item \href{file:///home/thomas/Cloud/work-projects/ID31-NASS/matlab/stewart-simscape/org/bibliography.org}{Decoupling Strategies}
\end{itemize}
The goal of this section is to compare the use of several methods for the decoupling of parallel manipulators.
It is structured as follows:
\begin{itemize}
\item Section \ref{ssec:detail_control_decoupling_comp_model}: the model used to compare/test decoupling strategies is presented
\item Section \ref{ssec:detail_control_comp_jacobian}: decoupling using Jacobian matrices is presented
\item Section \ref{ssec:detail_control_comp_modal}: modal decoupling is presented
\item Section \ref{ssec:detail_control_comp_svd}: SVD decoupling is presented
\item Section \ref{ssec:detail_control_decoupling_comp}: the three decoupling methods are applied on the test model and compared
\item Conclusions are drawn on the three decoupling methods
\end{itemize}
\section{Test Model}
\label{sec:org07c14be}
\label{ssec:detail_control_decoupling_comp_model}
Let's consider a parallel manipulator with several collocated actuator/sensor pairs.
The system shown in Figure \ref{fig:detail_control_model_test_decoupling} will serve as an example.
The following notations are used:
\begin{itemize}
\item \(b_i\): location of the joints on the top platform
\item \(\hat{s}_i\): unit vector corresponding to the struts direction
\item \(k_i\): stiffness of the struts
\item \(\tau_i\): actuator forces
\item \(O_M\): center of mass of the solid body
\item \(\mathcal{L}_i\): relative displacement of the struts
\end{itemize}
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_model_test_decoupling.png}
\caption{\label{fig:detail_control_model_test_decoupling}Model used to compare decoupling techniques}
\end{figure}
The magnitude of the coupled plant \(G\) is shown in Figure \ref{fig:detail_control_coupled_plant_bode}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_coupled_plant_bode.png}
\caption{\label{fig:detail_control_coupled_plant_bode}Magnitude of the coupled plant.}
\end{figure}
\section{Decentralized Plant / Control in the frame of the struts}
\label{sec:orgc8c0b5f}
\section{Jacobian Decoupling}
\label{sec:org3e20680}
\label{ssec:detail_control_comp_jacobian}
The Jacobian matrix can be used to:
\begin{itemize}
\item Convert joint velocities \(\dot{\bm{\mathcal{L}}}\) to payload velocity and angular velocity \(\dot{\bm{\mathcal{X}}}_{\{O\}}\):
\[ \dot{\bm{\mathcal{X}}}_{\{O\}} = J_{\{O\}} \dot{\bm{\mathcal{L}}} \]
\item Convert actuator forces \(\bm{\tau}\) to forces/torques applied on the payload \(\bm{\mathcal{F}}_{\{O\}}\):
\[ \bm{\mathcal{F}}_{\{O\}} = J_{\{O\}}^T \bm{\tau} \]
\end{itemize}
with \(\{O\}\) any chosen frame.
By wisely choosing the frame \(\{O\}\), a decoupled plant can be obtained:
\begin{equation}
\bm{G}_{\{O\}} = J_{\{O\}}^{-1} \bm{G} J_{\{O\}}^{-T}
\end{equation}
The obtained plant corresponds to the transfer functions from forces/torques applied at the origin of frame \(\{O\}\) to the translations/rotations of the payload expressed in frame \(\{O\}\).
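At the level of the identified frequency response functions, this decoupling could be computed as sketched below (NumPy assumed; G_frf and J are placeholders for the coupled plant FRF and the Jacobian expressed in the chosen frame \(\{O\}\)).
\begin{verbatim}
import numpy as np

def jacobian_decoupling(G_frf, J):
    """Apply G_{O} = J^{-1} G J^{-T} at every frequency point.
    G_frf: complex array of shape (n_freq, n, n), coupled plant FRF
    J    : (n, n) Jacobian matrix expressed in the frame {O}"""
    J_inv = np.linalg.inv(J)
    return np.array([J_inv @ G @ J_inv.T for G in G_frf])
\end{verbatim}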
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_jacobian_decoupling_arch.png}
\caption{\label{fig:detail_control_jacobian_decoupling_arch}Block diagram of the transfer function from \(\bm{\mathcal{F}}_{\{O\}}\) to \(\bm{\mathcal{X}}_{\{O\}}\)}
\end{figure}
The Jacobian matrix is only based on the geometry of the system and does not depend on the physical properties such as mass and stiffness.
The inputs and outputs of the decoupled plant \(\bm{G}_{\{O\}}\) have physical meaning:
\begin{itemize}
\item \(\bm{\mathcal{F}}_{\{O\}}\) are forces/torques applied on the payload at the origin of frame \(\{O\}\)
\item \(\bm{\mathcal{X}}_{\{O\}}\) are translations/rotation of the payload expressed in frame \(\{O\}\)
\end{itemize}
It is then easy to include a reference tracking input that specifies the wanted motion of the payload in the frame \(\{O\}\).
\section{Modal Decoupling}
\label{sec:org08a0372}
\label{ssec:detail_control_comp_modal}
Let's consider a system with the following equations of motion:
\begin{equation}
M \bm{\ddot{x}} + C \bm{\dot{x}} + K \bm{x} = \bm{\mathcal{F}}
\end{equation}
The measured output is a combination of the motion variables \(\bm{x}\):
\begin{equation}
\bm{y} = C_{ox} \bm{x} + C_{ov} \dot{\bm{x}}
\end{equation}
Let's make a \textbf{change of variables}:
\begin{equation}
\boxed{\bm{x} = \Phi \bm{x}_m}
\end{equation}
with:
\begin{itemize}
\item \(\bm{x}_m\) the modal amplitudes
\item \(\Phi\) a matrix whose columns are the modes shapes of the system
\end{itemize}
And we map the actuator forces:
\begin{equation}
\bm{\mathcal{F}} = J^T \bm{\tau}
\end{equation}
The equations of motion become:
\begin{equation}
M \Phi \bm{\ddot{x}}_m + C \Phi \bm{\dot{x}}_m + K \Phi \bm{x}_m = J^T \bm{\tau}
\end{equation}
And the measured output is:
\begin{equation}
\bm{y} = C_{ox} \Phi \bm{x}_m + C_{ov} \Phi \dot{\bm{x}}_m
\end{equation}
By pre-multiplying the EoM by \(\Phi^T\):
\begin{equation}
\Phi^T M \Phi \bm{\ddot{x}}_m + \Phi^T C \Phi \bm{\dot{x}}_m + \Phi^T K \Phi \bm{x}_m = \Phi^T J^T \bm{\tau}
\end{equation}
And we note:
\begin{itemize}
\item \(M_m = \Phi^T M \Phi = \text{diag}(\mu_i)\) the modal mass matrix
\item \(C_m = \Phi^T C \Phi = \text{diag}(2 \xi_i \mu_i \omega_i)\) (classical damping)
\item \(K_m = \Phi^T K \Phi = \text{diag}(\mu_i \omega_i^2)\) the modal stiffness matrix
\end{itemize}
And we have:
\begin{equation}
\ddot{\bm{x}}_m + 2 \Xi \Omega \dot{\bm{x}}_m + \Omega^2 \bm{x}_m = \mu^{-1} \Phi^T J^T \bm{\tau}
\end{equation}
with:
\begin{itemize}
\item \(\mu = \text{diag}(\mu_i)\)
\item \(\Omega = \text{diag}(\omega_i)\)
\item \(\Xi = \text{diag}(\xi_i)\)
\end{itemize}
And we call the \textbf{modal input matrix}:
\begin{equation}
\boxed{B_m = \mu^{-1} \Phi^T J^T}
\end{equation}
And the \textbf{modal output matrices}:
\begin{equation}
\boxed{C_m = C_{ox} \Phi + C_{ov} \Phi s}
\end{equation}
Let's note the ``modal input'':
\begin{equation}
\bm{\tau}_m = B_m \bm{\tau}
\end{equation}
The transfer function from \(\bm{\tau}_m\) to \(\bm{x}_m\) is:
\begin{equation} \label{eq:modal_eq}
\boxed{\frac{\bm{x}_m}{\bm{\tau}_m} = \left( I_n s^2 + 2 \Xi \Omega s + \Omega^2 \right)^{-1}}
\end{equation}
which is a \textbf{diagonal} transfer function matrix.
The dynamics from \(\bm{\tau}_m\) to \(\bm{x}_m\) is therefore decoupled.
The transfer function from the input \(\bm{\tau}\) to the output \(\bm{y}\) can now be expressed as a function of the ``modal variables'':
\begin{equation}
\boxed{\frac{\bm{y}}{\bm{\tau}} = \underbrace{\left( C_{ox} + s C_{ov} \right) \Phi}_{C_m} \underbrace{\left( I_n s^2 + 2 \Xi \Omega s + \Omega^2 \right)^{-1}}_{\text{diagonal}} \underbrace{\left( \mu^{-1} \Phi^T J^T \right)}_{B_m}}
\end{equation}
By inverting \(B_m\) and \(C_m\) and using them as shown in Figure \ref{fig:modal_decoupling_architecture}, we can see that we control the system in the ``modal space'' in which it is decoupled.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_decoupling_modal.png}
\caption{\label{fig:modal_decoupling_architecture}Modal Decoupling Architecture}
\end{figure}
The system \(\bm{G}_m(s)\) shown in Figure \ref{fig:modal_decoupling_architecture} is diagonal \eqref{eq:modal_eq}.
Modal decoupling requires the knowledge of the equations of motion of the system.
From the equations of motion (and more precisely the mass and stiffness matrices), the mode shapes \(\Phi\) are computed.
Then, the system can be decoupled in the modal space.
The obtained diagonal elements are second-order resonant systems which can be easily controlled.
Using this decoupling strategy, it is possible to control each mode individually.
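A minimal Matlab sketch of this modal decoupling is given below, reusing the same hypothetical 3-DoF model (and illustrative values) as in the Jacobian decoupling sketch; the outputs are here taken as the strut elongations, i.e. \(C_{ox} = J\) and \(C_{ov} = 0\).
\begin{verbatim}
% Minimal sketch of modal decoupling (same hypothetical 3-DoF model as before)
m = 400; I = 15; k = 1e6; c = 1e2; la = 0.5; ha = 0.2;
J = [1 0 ha; 0 1 -la; 0 1 la];
M = diag([m, m, I]);  K = J'*(k*eye(3))*J;  C = J'*(c*eye(3))*J;

s = tf('s');
G = J * inv(M*s^2 + C*s + K) * J';   % plant from strut forces to strut elongations

[Phi, wn2] = eig(K, M);              % mode shapes and squared natural frequencies
mu = Phi' * M * Phi;                 % modal mass matrix (diagonal)

Bm = inv(mu) * Phi' * J';            % modal input matrix
Cm = J * Phi;                        % modal output matrix (Cox = J, Cov = 0)

Gm = inv(Cm) * G * inv(Bm);          % modal plant, diagonal (one mode per channel)
bodemag(Gm);
\end{verbatim}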
\section{SVD Decoupling}
\label{sec:orgd91e1be}
\label{ssec:detail_control_comp_svd}
Procedure:
\begin{itemize}
\item Identify the dynamics of the system from inputs to outputs (can be obtained experimentally)
\item Choose a frequency where we want to decouple the system (usually, the crossover frequency is a good choice)
\item Compute a real approximation of the system's response at that frequency
\item Perform a Singular Value Decomposition of the real approximation
\item Use the singular input and output matrices to decouple the system as shown in Figure \ref{fig:detail_control_decoupling_svd}
\[ G_{svd}(s) = U^{-1} G(s) V^{-T} \]
\end{itemize}
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_decoupling_svd.png}
\caption{\label{fig:detail_control_decoupling_svd}Decoupled plant \(\bm{G}_{SVD}\) using the Singular Value Decomposition}
\end{figure}
In order to apply the Singular Value Decomposition, we need to have the Frequency Response Function of the system, at least near the frequency where we wish to decouple the system.
The FRF can be obtained experimentally or derived from a model.
This method ensures good decoupling near the chosen frequency, but does not guarantee decoupling away from it.
The quality of the decoupling also depends on the accuracy of the real approximation of the FRF; it may therefore be less effective for plants with high damping.
This method is quite general and can be applied to any type of system.
The inputs and outputs are ordered from higher gain to lower gain at the chosen frequency.
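A minimal Matlab sketch of this procedure is shown below.
It assumes a MIMO plant model or identified FRF \texttt{G} is available (for instance the one from the previous sketches); the real approximation is here crudely taken as the real part of the frequency response, which is a simplification of more rigorous real-approximation algorithms.
\begin{verbatim}
% Minimal sketch of SVD decoupling (G: plant model or identified FRF from before)
wc = 2*pi*100;                 % chosen decoupling frequency [rad/s], e.g. near crossover
Gc = freqresp(G, wc);          % complex response of G at wc
Gr = real(Gc);                 % crude real approximation of G(j*wc)

[U, S, V] = svd(Gr);           % singular value decomposition of the real approximation

Gsvd = inv(U) * G * inv(V');   % decoupled plant: well decoupled around wc
bodemag(Gsvd);
\end{verbatim}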
\begin{itemize}
\item[{$\square$}] Do we lose any physical meaning of the obtained inputs and outputs?
\item[{$\square$}] Can we take advantage of the fact that U and V are unitary?
\end{itemize}
\section{Comparison}
\label{sec:org670a0b0}
\label{ssec:detail_control_decoupling_comp}
\subsection{Jacobian Decoupling}
\label{sec:org2a50d56}
The decoupling properties depend on the chosen frame \(\{O\}\).
Let's take the CoM as the decoupling frame.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_jacobian_plant.png}
\caption{\label{fig:detail_control_jacobian_plant}Plant decoupled using the Jacobian matrices \(G_x(s)\)}
\end{figure}
\subsection{Modal Decoupling}
\label{sec:org6cc56e8}
For the system in Figure \ref{fig:detail_control_model_test_decoupling}, we have:
\begin{align}
\bm{x} &= \begin{bmatrix} x \\ y \\ R_z \end{bmatrix} \\
\bm{y} &= \mathcal{L} = J \bm{x}; \quad C_{ox} = J; \quad C_{ov} = 0 \\
M &= \begin{bmatrix}
m & 0 & 0 \\
0 & m & 0 \\
0 & 0 & I
\end{bmatrix}; \quad K = J^T \begin{bmatrix}
k & 0 & 0 \\
0 & k & 0 \\
0 & 0 & k
\end{bmatrix} J; \quad C = J^T \begin{bmatrix}
c & 0 & 0 \\
0 & c & 0 \\
0 & 0 & c
\end{bmatrix} J
\end{align}
In order to apply the architecture shown in Figure \ref{fig:modal_decoupling_architecture}, we need to compute \(C_{ox}\), \(C_{ov}\), \(\Phi\), \(\mu\) and \(J\).
\begin{table}[htbp]
\caption{\label{tab:modal_decoupling_Bm}\(B_m\) matrix}
\centering
\begin{tabularx}{0.3\linewidth}{ccc}
\toprule
-0.0004 & -0.0007 & 0.0007\\
-0.0151 & 0.0041 & -0.0041\\
0.0 & 0.0025 & 0.0025\\
\bottomrule
\end{tabularx}
\end{table}
\begin{table}[htbp]
\caption{\label{tab:modal_decoupling_Cm}\(C_m\) matrix}
\centering
\begin{tabularx}{0.2\linewidth}{ccc}
\toprule
-0.1 & -1.8 & 0.0\\
-0.2 & 0.5 & 1.0\\
0.2 & -0.5 & 1.0\\
\bottomrule
\end{tabularx}
\end{table}
The plant in the modal space is then computed, and its magnitude is shown in Figure \ref{fig:detail_control_modal_plant}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_modal_plant.png}
\caption{\label{fig:detail_control_modal_plant}Modal plant \(G_m(s)\)}
\end{figure}
Let's now close one loop at a time and see how the transmissibility changes.
\subsection{SVD Decoupling}
\label{sec:org1bd92ef}
\begin{table}[htbp]
\caption{\label{tab:detail_control_svd_real_approx}Real approximation of \(G\) at the decoupling frequency \(\omega_c\)}
\centering
\begin{tabularx}{0.3\linewidth}{ccc}
\toprule
-8e-06 & 2.1e-06 & -2.1e-06\\
2.1e-06 & -1.3e-06 & -2.5e-08\\
-2.1e-06 & -2.5e-08 & -1.3e-06\\
\bottomrule
\end{tabularx}
\end{table}
\begin{itemize}
\item[{$\square$}] Do we have something special when applying SVD to a collocated MIMO system?
\item \textbf{Verify why such a good decoupling is obtained!}
\end{itemize}
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_svd_plant.png}
\caption{\label{fig:detail_control_svd_plant}Plant decoupled using the SVD, \(G_{svd}(s)\)}
\end{figure}
\section*{Conclusion}
\label{sec:orge4184ce}
The three proposed methods clearly have a lot in common, as they all tend to make the system more decoupled by pre- and/or post-multiplying by a constant matrix.
However, they also differ in several aspects, which are summarized in Table \ref{tab:detail_control_decoupling_strategies_comp}.
Other decoupling strategies could be included in this study, such as:
\begin{itemize}
\item DC decoupling: pre-multiply the plant by \(G(0)^{-1}\) (a minimal sketch is given after this list)
\item full decoupling: pre-multiply the plant by \(G(s)^{-1}\)
\end{itemize}
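As a minimal sketch (and assuming a plant model \texttt{G} with a non-singular static gain), DC decoupling simply amounts to:
\begin{verbatim}
% Minimal sketch of DC decoupling: pre-multiply the plant by G(0)^-1
G0  = dcgain(G);           % static gain matrix (must be non-singular)
Gdc = inv(G0) * G;         % plant decoupled at DC and at low frequency
bodemag(Gdc);
\end{verbatim}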
\begin{table}[htbp]
\caption{\label{tab:detail_control_decoupling_strategies_comp}Comparison of decoupling strategies}
\centering
\scriptsize
\begin{tabularx}{\linewidth}{lXXX}
\toprule
& \textbf{Jacobian} & \textbf{Modal} & \textbf{SVD}\\
\midrule
\textbf{Philosophy} & Topology Driven & Physics Driven & Data Driven\\
\midrule
\textbf{Requirements} & Known geometry & Known equations of motion & Identified FRF\\
\midrule
\textbf{Decoupling Matrices} & Decoupling using \(J\) obtained from geometry & Decoupling using \(\Phi\) obtained from modal decomposition & Decoupling using \(U\) and \(V\) obtained from SVD\\
\midrule
\textbf{Decoupled Plant} & \(\bm{G}_{\{O\}} = J_{\{O\}}^{-1} \bm{G} J_{\{O\}}^{-T}\) & \(\bm{G}_m = C_m^{-1} \bm{G} B_m^{-1}\) & \(\bm{G}_{svd}(s) = U^{-1} \bm{G}(s) V^{-T}\)\\
\midrule
\textbf{Implemented Controller} & \(\bm{K}_{\{O\}} = J_{\{O\}}^{-T} \bm{K}_{d}(s) J_{\{O\}}^{-1}\) & \(\bm{K}_m = B_m^{-1} \bm{K}_{d}(s) C_m^{-1}\) & \(\bm{K}_{svd}(s) = V^{-T} \bm{K}_{d}(s) U^{-1}\)\\
\midrule
\textbf{Physical Interpretation} & Forces/Torques to Displacement/Rotation in chosen frame & Inputs to excite individual modes & Directions of max to min controllability/observability\\
& & Output to sense individual modes & \\
\midrule
\textbf{Decoupling Properties} & Decoupling at low or high frequency depending on the chosen frame & Good decoupling at all frequencies & Good decoupling near the chosen frequency\\
\midrule
\textbf{Pros} & Physical inputs / outputs & Target specific modes & Good Decoupling near the crossover\\
& Good decoupling at High frequency (diagonal mass matrix if Jacobian taken at the CoM) & 2nd order diagonal plant & Very General\\
& Good decoupling at Low frequency (if Jacobian taken at specific point) & & \\
& Easy integration of meaningful reference inputs & & \\
& & & \\
\midrule
\textbf{Cons} & Coupling between force/rotation may be high at low frequency (non diagonal terms in K) & Need analytical equations & Lose the physical meaning of inputs/outputs\\
& Limited to parallel mechanisms (?) & & Decoupling depends on the real approximation validity\\
& If good decoupling at all frequencies => requires specific mechanical architecture & & Diagonal plants may not be easy to control\\
\midrule
\textbf{Applicability} & Parallel Mechanisms & Systems whose dynamics can be expressed with M and K matrices & Very general\\
& Only small motion for the Jacobian matrix to stay constant & & Need FRF data (either experimentally or analytically)\\
\bottomrule
\end{tabularx}
\end{table}
\chapter{Closed-Loop Shaping using Complementary Filters}
\label{sec:orga76ba90}
\label{sec:detail_control_optimization}
The performance of a feedback control system is dictated by its closed-loop transfer functions, for instance the sensitivity and transmissibility functions (the so-called ``Gang of Four'').
There are several ways to design a controller to obtain a given performance.
Decoupled Open-Loop Shaping:
\begin{itemize}
\item As shown in previous section, once the plant is decoupled: open loop shaping
\item Explain procedure when applying open-loop shaping
\item Lead, Lag, Notches, Check Stability, c2d, etc\ldots{} (a brief sketch is given after this list)
\item But this is open-loop shaping, and it does not directly work on the closed loop transfer functions
\end{itemize}
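As an illustration of this procedure, a minimal Matlab sketch of one open-loop shaping iteration on a single decoupled channel is given below; the plant, the corner frequencies and the gains are purely hypothetical.
\begin{verbatim}
% Minimal sketch of manual open-loop shaping on one decoupled SISO channel
s = tf('s');
G = 1/(20*s^2 + 1e2*s + 1e4);                 % hypothetical decoupled channel

wc = 2*pi*20;                                 % targeted crossover frequency [rad/s]
Klead  = (1 + s/(wc/3)) / (1 + s/(wc*3));     % lead filter: adds phase around wc
Knotch = (s^2 + 2*0.01*300*s + 300^2) / ...
         (s^2 + 2*0.50*300*s + 300^2);        % notch a hypothetical resonance at 300 rad/s
K = Klead * Knotch;
K = K / abs(squeeze(freqresp(G*K, wc)));      % set the gain for crossover at wc

margin(G*K);                                  % check gain/phase margins (stability)
Kd = c2d(K, 1e-4, 'tustin');                  % discretize for implementation (Ts = 0.1 ms)
\end{verbatim}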
Other strategy: Model Based Design:
\begin{itemize}
\item \href{file:///home/thomas/Cloud/work-projects/ID31-NASS/matlab/stewart-simscape/org/bibliography.org}{Multivariable Control}
\item Talk about Caio's thesis?
\item Review of model based design (LQG, H-Infinity) applied to Stewart platform
\item Difficulty to specify robustness to change of payload mass
\end{itemize}
In this section, an alternative is proposed in which complementary filters are used for closed-loop shaping.
It is presented for a SISO system, but can be generalized to MIMO if decoupling is sufficient.
It will be experimentally demonstrated with the NASS.
\textbf{Paper's introduction}:
\textbf{Model based control}
\textbf{SISO control design methods}
\begin{itemize}
\item frequency domain techniques
\item manual loop-shaping - key idea: modification of the controller such that the open-loop is made according to specifications \cite{oomen18_advan_motion_contr_precis_mechat}.
\end{itemize}
This works well because the open-loop transfer function depends linearly on the controller.
However, the specifications are given in terms of the final system performance, i.e. as closed-loop specifications.
\textbf{Norm-based control}
\(\hinf\) loop-shaping \cite{skogestad07_multiv_feedb_contr}. Far from standard in industry as it requires a lot of effort.
Problem of robustness to plant uncertainty:
\begin{itemize}
\item Trade off performance / robustness. Difficult to obtain high performance in presence of high uncertainty.
\item Robust control \(\mu\text{-synthesis}\). Takes a lot of effort to model the plant uncertainty.
\item Sensor fusion: combines two sensors using complementary filters. The high frequency sensor is collocated with the actuator in order to ensure the stability of the system even in presence of uncertainty. \cite{collette15_sensor_fusion_method_high_perfor,collette14_vibrat}
\end{itemize}
Complementary filters: \cite{hua05_low_ligo}.
In this section, we propose a new controller synthesis method
\begin{itemize}
\item based on the use of complementary high pass and low pass filters
\item inverse based control
\item direct translation of requirements such as disturbance rejection and robustness to plant uncertainty
\end{itemize}
\section{Control Architecture}
\label{sec:orgaae401b}
\label{ssec:detail_control_control_arch}
\paragraph{Virtual Sensor Fusion}
\label{sec:orgbe4fa57}
Let's consider the control architecture represented in Fig. \ref{fig:detail_control_sf_arch} where \(G^\prime\) is the physical plant to control, \(G\) is a model of the plant, \(k\) is a gain, \(H_L\) and \(H_H\) are complementary filters (\(H_L + H_H = 1\) in the complex sense).
The signals are the reference signal \(r\), the output perturbation \(d_y\), the measurement noise \(n\) and the control input \(u\).
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_sf_arch.png}
\caption{\label{fig:detail_control_sf_arch}Sensor Fusion Architecture}
\end{figure}
The dynamics of the closed-loop system is described by the following equations
\begin{alignat}{5}
y &= \frac{1+kGH_H}{1+L} dy &&+ \frac{kG^{\prime}}{1+L} r &&- \frac{kG^{\prime}H_L}{1+L} n \\
u &= -\frac{kH_L}{1+L} dy &&+ \frac{k}{1+L} r &&- \frac{kH_L}{1+L} n
\end{alignat}
with \(L = k(G H_H + G^\prime H_L)\).
The idea of using such architecture comes from sensor fusion \cite{collette14_vibrat,collette15_sensor_fusion_method_high_perfor} where we use two sensors.
One measures the quantity to be controlled, while the other is collocated with the actuator in such a way that stability is guaranteed.
The first one is low-pass filtered in order to obtain good performance at low frequencies, and the second one is high-pass filtered to benefit from its good dynamical properties.
Here, the second sensor is replaced by a model \(G\) of the plant which is assumed to be stable and minimum phase.
One may think that the control architecture shown in Fig. \ref{fig:detail_control_sf_arch} is a multi-loop system, but because no non-linear saturation-type element is present in the inner-loop (containing \(k\), \(G\) and \(H_H\) which are all numerically implemented), the structure is equivalent to the architecture shown in Fig. \ref{fig:detail_control_sf_arch_eq}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_sf_arch_eq.png}
\caption{\label{fig:detail_control_sf_arch_eq}Equivalent feedback architecture}
\end{figure}
The dynamics of the system can be rewritten as follows
\begin{alignat}{5}
y &= \frac{1}{1+G^{\prime} K H_L} dy &&+ \frac{G^{\prime} K}{1+G^{\prime} K H_L} r &&- \frac{G^{\prime} K H_L}{1+G^{\prime} K H_L} n \\
u &= \frac{-K H_L}{1+G^{\prime} K H_L} dy &&+ \frac{K}{1+G^{\prime} K H_L} r &&- \frac{K H_L}{1+G^{\prime} K H_L} n
\end{alignat}
with \(K = \frac{k}{1 + H_H G k}\)
\paragraph{Asymptotic behavior}
\label{sec:org100d48c}
We now want to study the asymptotic system obtained when using very high values of \(k\)
\begin{equation}
\lim_{k\to\infty} K = \lim_{k\to\infty} \frac{k}{1+H_H G k} = \left( H_H G \right)^{-1}
\end{equation}
If the obtained \(K\) is improper, a low-pass filter can be added to obtain a causal realization.
Also, we want \(K\) to be stable, so \(G\) and \(H_H\) must be minimum phase transfer functions.
From now on, we will consider the resulting control architecture shown in Fig. \ref{fig:detail_control_sf_arch_class}, where the only ``tuning parameters'' are the complementary filters.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_sf_arch_class.png}
\caption{\label{fig:detail_control_sf_arch_class}Equivalent classical feedback control architecture}
\end{figure}
The equations describing the dynamics of the closed-loop system are
\begin{align}
y &= \frac{ H_H dy + G^{\prime} G^{-1} r - G^{\prime} G^{-1} H_L n }{H_H + G^\prime G^{-1} H_L} \label{eq:detail_control_cl_system_y}\\
u &= \frac{ -G^{-1} H_L dy + G^{-1} r - G^{-1} H_L n }{H_H + G^\prime G^{-1} H_L} \label{eq:detail_control_cl_system_u}
\end{align}
At frequencies where the model is accurate: \(G^{-1} G^{\prime} \approx 1\), \(H_H + G^\prime G^{-1} H_L \approx H_H + H_L = 1\) and
\begin{align}
y &= H_H dy + r - H_L n \label{eq:detail_control_cl_performance_y} \\
u &= -G^{-1} H_L dy + G^{-1} r - G^{-1} H_L n \label{eq:detail_control_cl_performance_u}
\end{align}
We obtain a sensitivity transfer function equal to the high-pass filter \(S = \frac{y}{dy} = H_H\) and a transmissibility transfer function equal to the low-pass filter \(T = \frac{y}{n} = H_L\).
Assuming a good model of the plant is available, the closed-loop behavior of the system thus converges to the designed complementary filters.
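To illustrate this convergence, a minimal Matlab sketch is given below; the plant model, the complementary filters and the gain \(k\) are simple placeholders.
It builds the equivalent controller \(K = k/(1 + H_H G k)\) and checks that, when the model is accurate, the sensitivity indeed tends to \(H_H\).
\begin{verbatim}
% Minimal sketch: equivalent controller of the virtual sensor fusion architecture
s  = tf('s');
G  = 1/(20*s^2 + 1e2*s + 1e4);            % plant model (placeholder)

w0 = 2*pi*20;                             % blending frequency of the filters [rad/s]
Hl = (2*(s/w0) + 1) / (s/w0 + 1)^2;       % low pass filter
Hh = (s/w0)^2       / (s/w0 + 1)^2;       % high pass filter, Hl + Hh = 1

k = 1e6;                                  % high gain
K = minreal(k / (1 + Hh*G*k));            % equivalent controller, tends to inv(Hh*G)

S = minreal(1 / (1 + G*K*Hl));            % sensitivity for the nominal plant G' = G
bodemag(S, Hh);                           % S converges to Hh as k increases
\end{verbatim}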
\section{Translating the performance requirements into the shapes of the complementary filters}
\label{sec:org30471b6}
\label{ssec:detail_control_trans_perf}
The required performance specifications in a feedback system can usually be translated into requirements on the upper bounds of \(\abs{S(j\w)}\) and \(|T(j\omega)|\) \cite{bibel92_guidel_h}.
The process of designing a controller \(K(s)\) in order to obtain the desired shapes of \(\abs{S(j\w)}\) and \(\abs{T(j\w)}\) is called loop shaping.
The equations \eqref{eq:detail_control_cl_system_y} and \eqref{eq:detail_control_cl_system_u} describing the dynamics of the studied feedback architecture are not written in terms of \(K\) but in terms of the complementary filters \(H_L\) and \(H_H\).
In this section, we then translate the typical specifications into the desired shapes of the complementary filters \(H_L\) and \(H_H\).\\
\paragraph{Nominal Stability (NS)}
\label{sec:orgb61eb25}
The closed-loop system is stable if all its elements are stable (\(K\), \(G^\prime\) and \(H_L\)) and if the sensitivity function (\(S = \frac{1}{1 + G^\prime K H_L}\)) is stable.
For the nominal system (\(G^\prime = G\)), we have \(S = H_H\).
Nominal stability is then guaranteed if \(H_L\), \(H_H\) and \(G\) are stable and if \(G\) and \(H_H\) are minimum phase (to have \(K\) stable).
Thus we must design stable and minimum phase complementary filters.\\
\paragraph{Nominal Performance (NP)}
\label{sec:org4748252}
Typical performance specifications can usually be translated into upper bounds on \(|S(j\omega)|\) and \(|T(j\omega)|\).
Two performance weights \(w_H\) and \(w_L\) are defined in such a way that performance specifications are satisfied if
\begin{equation}
|w_H(j\omega) S(j\omega)| \le 1,\ |w_L(j\omega) T(j\omega)| \le 1 \quad \forall\omega
\end{equation}
For the nominal system, we have \(S = H_H\) and \(T = H_L\), and then nominal performance is ensured by requiring
\begin{subnumcases}{\text{NP} \Leftrightarrow}\label{eq:detail_control_nominal_performance}
|w_H(j\omega) H_H(j\omega)| \le 1 \quad \forall\omega \label{eq:detail_control_nominal_perf_hh}\\
|w_L(j\omega) H_L(j\omega)| \le 1 \quad \forall\omega \label{eq:detail_control_nominal_perf_hl}
\end{subnumcases}
The translation of typical performance requirements on the shapes of the complementary filters is discussed below:
\begin{itemize}
\item for disturbance rejections, make \(|S| = |H_H|\) small
\item for noise attenuation, make \(|T| = |H_L|\) small
\item for control energy reduction, make \(|KS| = |G^{-1}|\) small
\end{itemize}
We may have other requirements in terms of stability margins, maximum or minimum closed-loop bandwidth.\\
\paragraph{Closed-Loop Bandwidth}
\label{sec:org20cf288}
The closed-loop bandwidth \(\w_B\) can be defined as the frequency where \(\abs{S(j\w)}\) first crosses \(\frac{1}{\sqrt{2}}\) from below.
If one wants the closed-loop bandwidth to be at least \(\w_B^*\) (e.g. to stabilize an unstable pole), one can require that \(|S(j\omega)| \le \frac{1}{\sqrt{2}}\) below \(\omega_B^*\) by designing \(w_H\) such that \(|w_H(j\omega)| \ge \sqrt{2}\) for \(\omega \le \omega_B^*\).
Similarly, if one wants the closed-loop bandwidth to be less than \(\w_B^*\), one can approximately require that the magnitude of \(T\) is less than \(\frac{1}{\sqrt{2}}\) at frequencies above \(\w_B^*\) by designing \(w_L\) such that \(|w_L(j\omega)| \ge \sqrt{2}\) for \(\omega \ge \omega_B^*\).\\
\paragraph{Classical stability margins}
\label{sec:orgd6e8f52}
Gain margin (GM) and phase margin (PM) are usual specifications for controlled systems.
Minimum GM and PM can be guaranteed by limiting the maximum magnitude of the sensitivity function \(M_S = \max_{\omega} |S(j\omega)|\):
\begin{equation}
\text{GM} \geq \frac{M_S}{M_S-1}; \quad \text{PM} \geq \frac{1}{M_S}
\end{equation}
Thus, having \(M_S \le 2\) guarantees a gain margin of at least \(2\) and a phase margin of at least \(\SI{29}{\degree}\).
For the nominal system \(M_S = \max_\omega |S| = \max_\omega |H_H|\), so one can design \(w_H\) so that \(|w_H(j\omega)| \ge 1/2\) in order to have
\begin{equation}
|H_H(j\omega)| \le 2 \quad \forall\omega
\end{equation}
and thus obtain acceptable stability margins.\\
\paragraph{Response time to change of reference signal}
\label{sec:org8095473}
For the nominal system, the model is accurate and the transfer function from reference signal \(r\) to output \(y\) is \(1\) \eqref{eq:detail_control_cl_performance_y} and does not depend on the complementary filters.
However, one can add a pre-filter as shown in Fig. \ref{fig:detail_control_sf_arch_class_prefilter}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_sf_arch_class_prefilter.png}
\caption{\label{fig:detail_control_sf_arch_class_prefilter}Prefilter used to limit input usage}
\end{figure}
The transfer function from \(r\) to \(y\) becomes \(\frac{y}{r} = K_r\), and \(K_r\) can be chosen to obtain an acceptable response to changes of the reference signal.
Typically, \(K_r\) is a low pass filter of the form
\begin{equation}
K_r(s) = \frac{1}{1 + \tau s}
\end{equation}
with \(\tau\) corresponding to the desired response time.\\
\paragraph{Input usage}
\label{sec:org29193ac}
Input usage due to disturbances \(d_y\) and measurement noise \(n\) is determined by \(\big|\frac{u}{d_y}\big| = \big|\frac{u}{n}\big| = \big|G^{-1}H_L\big|\).
Thus it can be limited by setting an upper bound on \(|H_L|\).
Input usage due to reference signal \(r\) is determined by \(\big|\frac{u}{r}\big| = \big|G^{-1} K_r\big|\) when using a pre-filter (Fig. \ref{fig:detail_control_sf_arch_class_prefilter}) and \(\big|\frac{u}{r}\big| = \big|G^{-1}\big|\) otherwise.
Proper choice of \(|K_r|\) is then useful to limit input usage due to change of reference signal.\\
\paragraph{Robust Stability (RS)}
\label{sec:orgee16ad4}
Robust stability represents the ability of the control system to remain stable even though there are differences between the actual system \(G^\prime\) and the model \(G\) that was used to design the controller.
These differences can have various origins such as unmodelled dynamics or non-linearities.
To represent the differences between the model and the actual system, one can choose to use the general input multiplicative uncertainty as represented in Fig. \ref{fig:detail_control_input_uncertainty}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_input_uncertainty.png}
\caption{\label{fig:detail_control_input_uncertainty}Input multiplicative uncertainty}
\end{figure}
Then, the set of possible perturbed plants is described by
\begin{equation}\label{eq:detail_control_multiplicative_uncertainty}
\Pi_i: \quad G_p(s) = G(s)\big(1 + w_I(s)\Delta_I(s)\big); \quad \abs{\Delta_I(j\w)} \le 1 \ \forall\w
\end{equation}
and \(w_I\) should be chosen such that all possible plants \(G^\prime\) are contained in the set \(\Pi_i\).
Using input multiplicative uncertainty, robust stability is equivalent to \cite{skogestad07_multiv_feedb_contr}:
\begin{align*}
\text{RS} \Leftrightarrow & |w_I T| \le 1 \quad \forall G^\prime \in \Pi_I, \ \forall\omega \\
\Leftrightarrow & \left| w_I \frac{G^\prime K H_L}{1 + G^\prime K H_L} \right| \le 1 \quad \forall G^\prime \in \Pi_I ,\ \forall\omega \\
\Leftrightarrow & \left| w_I \frac{G^\prime G^{-1} {H_H}^{-1} H_L}{1 + G^\prime G^{-1} {H_H}^{-1} H_L} \right| \le 1 \quad \forall G^\prime \in \Pi_I ,\ \forall\omega \\
\Leftrightarrow & \left| w_I \frac{(1 + w_I \Delta) {H_H}^{-1} H_L}{1 + (1 + w_I \Delta) {H_H}^{-1} H_L} \right| \le 1 \quad \forall \Delta, \ |\Delta| \le 1 ,\ \forall\omega \\
\Leftrightarrow & \left| w_I \frac{(1 + w_I \Delta) H_L}{1 + w_I \Delta H_L} \right| \le 1 \quad \forall \Delta, \ |\Delta| \le 1 ,\ \forall\omega \\
\Leftrightarrow & \left| H_L w_I \right| \frac{1 + |w_I|}{1 - |w_I H_L|} \le 1, \quad 1 - |w_I H_L| > 0 \quad \forall\omega \\
\Leftrightarrow & \left| H_L w_I \right| (2 + |w_I|) \le 1, \quad 1 - |w_I H_L| > 0 \quad \forall\omega \\
\Leftrightarrow & \left| H_L w_I \right| (2 + |w_I|) \le 1 \quad \forall\omega
\end{align*}
Robust stability is then guaranteed by having the low pass filter \(H_L\) satisfying \eqref{eq:detail_control_robust_stability}.
\begin{equation}\label{eq:detail_control_robust_stability}
\text{RS} \Leftrightarrow |H_L| \le \frac{1}{|w_I| (2 + |w_I|)}\quad \forall \omega
\end{equation}
To ensure robust stability, condition \eqref{eq:detail_control_nominal_perf_hl} can be used provided \(w_L\) is designed in such a way that \(|w_L| \ge |w_I| (2 + |w_I|)\).\\
\paragraph{Robust Performance (RP)}
\label{sec:org289dea0}
Robust performance is the property of a controlled system to have its performance guaranteed even though the dynamics of the plant changes within specified bounds.
For robust performance, we then require to have the performance condition valid for all possible plants in the defined uncertainty set:
\begin{subnumcases}{\text{RP} \Leftrightarrow}
|w_H S| \le 1 \quad \forall G^\prime \in \Pi_I, \ \forall\omega \label{eq:detail_control_robust_perf_S}\\
|w_L T| \le 1 \quad \forall G^\prime \in \Pi_I, \ \forall\omega \label{eq:detail_control_robust_perf_T}
\end{subnumcases}
Let's transform condition \eqref{eq:detail_control_robust_perf_S} into a condition on the complementary filters
\begin{align*}
& \left| w_H S \right| \le 1 \quad \forall G^\prime \in \Pi_I, \ \forall\omega \\
\Leftrightarrow & \left| w_H \frac{1}{1 + G^\prime G^{-1} H_H^{-1} H_L} \right| \le 1 \quad \forall G^\prime \in \Pi_I, \ \forall\omega \\
\Leftrightarrow & \left| \frac{w_H H_H}{1 + \Delta w_I H_L} \right| \le 1 \quad \forall \Delta, \ |\Delta| \le 1, \ \forall\omega \\
\Leftrightarrow & \frac{|w_H H_H|}{1 - |w_I H_L|} \le 1, \ \forall\omega \\
\Leftrightarrow & | w_H H_H | + | w_I H_L | \le 1, \ \forall\omega \\
\end{align*}
The same can be done with condition \eqref{eq:detail_control_robust_perf_T}
\begin{align*}
& \left| w_L T \right| \le 1 \quad \forall G^\prime \in \Pi_I, \ \forall\omega \\
\Leftrightarrow & \left| w_L \frac{G^\prime G^{-1} H_H^{-1} H_L}{1 + G^\prime G^{-1} H_H^{-1} H_L} \right| \le 1 \quad \forall G^\prime \in \Pi_I, \ \forall\omega \\
\Leftrightarrow & \left| w_L H_L \frac{1 + w_I \Delta}{1 + w_I \Delta H_L} \right| \le 1 \quad \forall \Delta, \ |\Delta| \le 1, \ \forall\omega \\
\Leftrightarrow & \left| w_L H_L \right| \frac{1 + |w_I|}{1 - |w_I H_L|} \le 1 \quad \forall\omega \\
\Leftrightarrow & \left| H_L \right| \le \frac{1}{|w_L| (1 + |w_I|) + |w_I|} \quad \forall\omega \\
\end{align*}
Robust performance is then guaranteed if \eqref{eq:detail_control_robust_perf_a} and \eqref{eq:detail_control_robust_perf_b} are satisfied.
\begin{subnumcases}{\text{RP} \Leftrightarrow}\label{eq:detail_control_robust_performance}
| w_H H_H | + | w_I H_L | \le 1, \ \forall\omega \label{eq:detail_control_robust_perf_a}\\
\left| H_L \right| \le \frac{1}{|w_L| (1 + |w_I|) + |w_I|} \quad \forall\omega \label{eq:detail_control_robust_perf_b}
\end{subnumcases}
One should be aware that, when deriving such robust performance conditions, only the worst case is evaluated, which may lead to conservative control requirements.
\section{Analytical formulas for complementary filters?}
\label{sec:org9e830a6}
\label{ssec:detail_control_analytical_complementary_filters}
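As a minimal sketch, one commonly used pair of analytical second-order complementary filters, parametrized by a blending frequency \(\omega_0\) and a damping parameter \(\xi\), is recalled below (this is a standard choice and not necessarily the pair retained in the final design):
\begin{subequations}
\begin{align}
H_L(s) &= \frac{2 \xi \frac{s}{\omega_0} + 1}{\left( \frac{s}{\omega_0} \right)^2 + 2 \xi \frac{s}{\omega_0} + 1} \\
H_H(s) &= \frac{\left( \frac{s}{\omega_0} \right)^2}{\left( \frac{s}{\omega_0} \right)^2 + 2 \xi \frac{s}{\omega_0} + 1}
\end{align}
\end{subequations}
These filters are complementary by construction (\(H_L + H_H = 1\)), and \(\omega_0\) directly sets the blending frequency, which makes it easy to adjust the closed-loop bandwidth, possibly in real time.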
\section{Numerical Example}
\label{sec:orgafeeecf}
\label{ssec:detail_control_simulations}
\paragraph{Procedure}
\label{sec:org267f5e8}
In order to apply this control technique, we propose the following procedure:
\begin{enumerate}
\item Identify the plant to be controlled in order to obtain \(G\)
\item Design the weighting function \(w_I\) such that all possible plants \(G^\prime\) are contained in the set \(\Pi_i\)
\item Translate the performance requirements into upper bounds on the complementary filters (as explained in Sec. \ref{ssec:detail_control_trans_perf})
\item Design the weighting functions \(w_H\) and \(w_L\) and generate the complementary filters using \(\hinf\text{-synthesis}\) (as further explained in Sec. \ref{ssec:detail_control_hinf_method}).
If the synthesis fails to give filters satisfying the upper bounds previously defined, either the requirements have to be reworked, or a better model \(G\) that permits a smaller \(w_I\) should be obtained.
If one does not want to use the \(\mathcal{H}_\infty\) synthesis, one can use pre-made complementary filters given in Sec. \ref{ssec:detail_control_analytical_complementary_filters}.
\item If \(K = \left( G H_H \right)^{-1}\) is not proper, a low pass filter should be added
\item Design a pre-filter \(K_r\) if requirements on input usage or response to reference change are not met
\item Control implementation: Filter the measurement with \(H_L\), implement the controller \(K\) and the pre-filter \(K_r\) as shown on Fig. \ref{fig:detail_control_sf_arch_class_prefilter}
\end{enumerate}
\paragraph{Plant}
\label{sec:org4080126}
Let's consider the problem of controlling an active vibration isolation system that consists of a mass \(m\) to be isolated, a piezoelectric actuator and a geophone.
We represent this system by a mass-spring-damper system as shown in Fig. \ref{fig:detail_control_mech_sys_alone}, where \(m\) typically represents the mass of the payload to be isolated, and \(k\) and \(c\) represent respectively the stiffness and damping of the mount.
\(w\) is the ground motion.
The values for the parameters of the models are
\[ m = \SI{20}{\kg}; \quad k = 10^4\,\si{\N/\m}; \quad c = 10^2\,\si{\N\s/\m} \]
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_mech_sys_alone.png}
\caption{\label{fig:detail_control_mech_sys_alone}Model of the positioning system}
\end{figure}
The model of the plant \(G(s)\) from actuator force \(F\) to displacement \(x\) is then
\begin{equation}
G(s) = \frac{1}{m s^2 + c s + k}
\end{equation}
Its bode plot is shown on Fig. \ref{fig:detail_control_bode_plot_mech_sys}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_bode_plot_mech_sys.png}
\caption{\label{fig:detail_control_bode_plot_mech_sys}Bode plot of the transfer function \(G(s)\) from \(F\) to \(x\)}
\end{figure}
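A minimal Matlab sketch defining this plant model with the parameters given above:
\begin{verbatim}
% Plant model of the isolation stage (parameters from the text)
m = 20;     % mass [kg]
k = 1e4;    % stiffness [N/m]
c = 1e2;    % damping [N/(m/s)]

s = tf('s');
G = 1/(m*s^2 + c*s + k);   % from actuator force F to displacement x
bode(G);
\end{verbatim}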
\paragraph{Requirements}
\label{sec:orgf3c8638}
The control objective is to isolate the displacement \(x\) of the mass from the ground motion \(w\).
The disturbance rejection should be at least a factor \(10\) at \(\SI{2}{\hertz}\), with a slope of \(-2\) below \(\SI{2}{\hertz}\), until a rejection of \(10^4\) is reached.
The closed-loop bandwidth should be less than \(\SI{20}{\hertz}\) (because of the time delay induced by the limited sampling frequency?).
The noise attenuation should be at least \(10\) above \(\SI{40}{\hertz}\) and \(100\) above \(\SI{500}{\hertz}\).
The system should also be robust to unmodelled dynamics.
We model the uncertainty on the dynamics of the plant by a multiplicative weight
\begin{equation}
w_I(s) = \frac{\tau s + r_0}{(\tau/r_\infty) s + 1}
\end{equation}
where \(r_0=0.1\) is the relative uncertainty at steady-state, \(1/\tau=\SI{100}{\hertz}\) is the frequency at which the relative uncertainty reaches \(\SI{100}{\percent}\), and \(r_\infty=10\) is the magnitude of the weight at high frequency.
All the requirements on \(H_L\) and \(H_H\) are represented on Fig. \ref{fig:detail_control_spec_S_T}.
\begin{itemize}
\item[{$\square$}] TODO: Make Matlab code to plot the specifications
\end{itemize}
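A minimal Matlab sketch of how such specification bounds could be constructed and plotted is given below; the piecewise bounds are illustrative translations of the requirements stated above and are not necessarily the exact curves of Fig. \ref{fig:detail_control_spec_S_T}.
\begin{verbatim}
% Minimal sketch: uncertainty weight and upper bounds on |H_H| and |H_L|
s = tf('s');
r0 = 0.1; rinf = 10; tau = 1/(2*pi*100);
wI = (tau*s + r0)/((tau/rinf)*s + 1);           % multiplicative uncertainty weight

f = logspace(-2, 3, 1000);                      % frequency vector [Hz]

% Bound on |S| = |H_H|: rejection of 10 at 2 Hz, improving with a -2 slope below
% 2 Hz down to 1e-4; |H_H| <= 2 elsewhere for stability margins (illustrative)
S_bound = min(2, max(1e-4, 0.1*(f/2).^2));

% Bound on |T| = |H_L|: noise attenuation (10 above 40 Hz, 100 above 500 Hz)
% combined with the robust stability bound 1/(|wI|(2+|wI|))
T_noise = ones(size(f));
T_noise(f >= 40)  = 0.1;
T_noise(f >= 500) = 0.01;
wI_mag  = abs(squeeze(freqresp(wI, 2*pi*f)))';  % |wI(j*2*pi*f)|
T_rs    = 1 ./ (wI_mag .* (2 + wI_mag));
T_bound = min(T_noise, T_rs);

loglog(f, S_bound, f, T_bound);
xlabel('Frequency [Hz]'); legend('upper bound on |H_H|', 'upper bound on |H_L|');
\end{verbatim}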
\begin{figure}[htbp]
\begin{subfigure}{0.49\textwidth}
\begin{center}
\includegraphics[scale=1,width=0.95\linewidth]{figs/detail_control_spec_S_T.png}
\end{center}
\subcaption{\label{fig:detail_control_spec_S_T}Closed loop specifications}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\begin{center}
\includegraphics[scale=1,width=0.95\linewidth]{figs/detail_control_hinf_filters_result_weights.png}
\end{center}
\subcaption{\label{fig:detail_control_hinf_filters_result_weights}Obtained complementary filters}
\end{subfigure}
\caption{\label{fig:detail_control_spec_S_T_obtained_filters}Closed-loop specifications (\subref{fig:detail_control_spec_S_T}) and obtained complementary filters (\subref{fig:detail_control_hinf_filters_result_weights})}
\end{figure}
\paragraph{Design of the filters}
\label{sec:org9e0d6e9}
\textbf{Or maybe use analytical formulas as proposed here: \href{file:///home/thomas/Cloud/research/papers/dehaeze20\_virtu\_senso\_fusio/matlab/index.org}{Complementary filters using analytical formula}}
We then design \(w_L\) and \(w_H\) such that their magnitude are below the upper bounds shown on Fig. \ref{fig:detail_control_hinf_filters_result_weights}.
\begin{subequations}
\begin{align}
w_L &= \frac{(s+22.36)^2}{0.005(s+1000)^2}\\
w_H &= \frac{1}{0.0005(s+0.4472)^2}
\end{align}
\end{subequations}
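A minimal Matlab sketch of one possible way to perform this synthesis is given below: the complementary filters are obtained by a standard \(\hinf\) synthesis on a generalized plant that weights \(H_H = 1 - H_L\) by \(w_H\) and \(H_L\) by \(w_L\).
This particular generalized-plant formulation is one possible implementation (requiring the Robust Control Toolbox), not necessarily the exact one used to generate the results shown here.
\begin{verbatim}
% Minimal sketch: H-infinity synthesis of the complementary filters from wL and wH
s  = tf('s');
wL = (s + 22.36)^2 / (0.005*(s + 1000)^2);
wH = 1 / (0.0005*(s + 0.4472)^2);

% Generalized plant: z1 = wH*(w - u), z2 = wL*u, v = w
P = ss([wH, -wH;
        0 ,  wL;
        1 ,  0 ]);

[Hl, ~, gamma] = hinfsyn(P, 1, 1);   % gamma <= 1 means both weights are satisfied
Hh = minreal(1 - Hl);                % complementary by construction
bode(Hl, Hh);
\end{verbatim}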
After the \(\hinf\text{-synthesis}\), we obtain \(H_L\) and \(H_H\), whose magnitude and phase are plotted in Fig. \ref{fig:detail_control_hinf_filters_result_weights}.
\begin{subequations}
\begin{align}
H_L &= \frac{0.0063957 (s+1016) (s+985.4) (s+26.99)}{(s+57.99) (s^2 + 65.77s + 2981)}\\
H_H &= \frac{0.9936 (s+111.1) (s^2 + 0.3988s + 0.08464)}{(s+57.99) (s^2 + 65.77s + 2981)}
\end{align}
\end{subequations}
\paragraph{Controller analysis}
\label{sec:org6b1f5a6}
The controller is \(K = \left( H_H G \right)^{-1}\).
A low pass filter is added to \(K\) so that it is proper and implementable.
The obtained controller is shown on Fig. \ref{fig:detail_control_bode_Kfb}.
It is implemented as shown on Fig. \ref{fig:detail_control_mech_sys_alone_ctrl}.
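A minimal Matlab sketch of this controller construction is given below, assuming the plant model \texttt{G} and the synthesized filters \texttt{Hl}, \texttt{Hh} from the previous sketches are available; the added low-pass filter is a hypothetical second-order filter whose corner frequency is chosen well above the bandwidth.
\begin{verbatim}
% Minimal sketch: controller K = inv(Hh*G), made proper with a low pass filter
wf   = 2*pi*500;                        % corner frequency of the added filter [rad/s]
Klpf = 1 / (1 + 2*0.7*s/wf + (s/wf)^2); % hypothetical 2nd order low pass filter
K    = Klpf / minreal(Hh*G);            % proper, implementable controller

L = G*K*Hl;                             % loop gain of the equivalent architecture
margin(L);                              % check the stability margins
\end{verbatim}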
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/detail_control_mech_sys_alone_ctrl.png}
\caption{\label{fig:detail_control_mech_sys_alone_ctrl}Control of a positioning system}
\end{figure}
\begin{figure}[htbp]
\begin{subfigure}{0.49\textwidth}
\begin{center}
\includegraphics[scale=1,width=0.95\linewidth]{figs/detail_control_bode_Kfb.png}
\end{center}
\subcaption{\label{fig:detail_control_bode_Kfb}Controller $K$}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\begin{center}
\includegraphics[scale=1,width=0.95\linewidth]{figs/detail_control_bode_plot_loop_gain_robustness.png}
\end{center}
\subcaption{\label{fig:detail_control_bode_plot_loop_gain_robustness}Loop Gain}
\end{subfigure}
\caption{\label{fig:detail_control_bode_Kfb_loop_gain}Bode plots of the controller \(K\) (\subref{fig:detail_control_bode_Kfb}) and of the loop gain (\subref{fig:detail_control_bode_plot_loop_gain_robustness})}
\end{figure}
\paragraph{Robustness analysis}
\label{sec:org6fc1bac}
The robust stability can be assessed on the Nyquist plot (Fig. \ref{fig:detail_control_nyquist_robustness}).
The robust performance is shown on Fig. \ref{fig:detail_control_robust_perf}.
\begin{figure}[htbp]
\begin{subfigure}{0.49\textwidth}
\begin{center}
\includegraphics[scale=1,scale=0.8]{figs/detail_control_nyquist_robustness.png}
\end{center}
\subcaption{\label{fig:detail_control_nyquist_robustness}Robust Stability}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\begin{center}
\includegraphics[scale=1,scale=0.8]{figs/detail_control_robust_perf.png}
\end{center}
\subcaption{\label{fig:detail_control_robust_perf}Robust performance}
\end{subfigure}
\caption{\label{fig:detail_control_robustness_analysis}Robust stability (\subref{fig:detail_control_nyquist_robustness}) and robust performance (\subref{fig:detail_control_robust_perf})}
\end{figure}
\section{Experimental Validation?}
\label{sec:org7fb6422}
\label{ssec:detail_control_exp_validation}
\href{file:///home/thomas/Cloud/research/papers/dehaeze20\_virtu\_senso\_fusio/matlab/index.org}{Experimental Validation}
\section*{Conclusion}
\label{sec:org8770e9e}
\begin{itemize}
\item[{$\square$}] Discuss how useful it is as the bandwidth can be changed in real time with analytical formulas of second order complementary filters.
Maybe make a section about that.
Maybe give analytical formulas of second order complementary filters in the digital domain?
\item[{$\square$}] Say that it will be validated with the nano-hexapod
\item[{$\square$}] Disadvantages:
\begin{itemize}
\item not optimal
\item computationally intensive?
\item lead to inverse control which may not be wanted in many cases. Add reference.
\end{itemize}
\end{itemize}
\chapter*{Conclusion}
\label{sec:org64023b0}
\label{sec:detail_control_conclusion}
\printbibliography[heading=bibintoc,title={Bibliography}]
\end{document}