Update Content - 2022-03-01

This commit is contained in:
2022-03-01 17:02:01 +01:00
parent df10833d92
commit 85de9df477
534 changed files with 52023 additions and 42 deletions

View File

@@ -0,0 +1,20 @@
+++
title = "Essential challenges in motion control education"
author = ["Dehaeze Thomas"]
draft = true
+++
Tags
:
Reference
: <vcech19_essen_chall_motion_contr_educat>
Author(s)
: Čech, M., Königsmarková, J., Goubej, M., Oomen, T., & Visioli, A.
Year
: 2019
<./biblio/references.bib>

View File

@@ -0,0 +1,626 @@
+++
title = "Understanding Digital Signal Processing"
author = ["Dehaeze Thomas"]
draft = true
+++
Tags
: [IIR and FIR Filters]({{<relref "irr_and_fir_filters.md#" >}}), [Digital Filters]({{<relref "digital_filters.md#" >}})
Reference
: <lyons11_under_digit_signal_proces>
Author(s)
: Lyons, R.
Year
: 2011
## Discrete Sequences And Systems {#discrete-sequences-and-systems}
### Discrete Sequences And Their Notation {#discrete-sequences-and-their-notation}
### Signal Amplitude, Magnitude, Power {#signal-amplitude-magnitude-power}
### Signal Processing Operational Symbols {#signal-processing-operational-symbols}
### Introduction To Discrete Linear Time-Invariant Systems {#introduction-to-discrete-linear-time-invariant-systems}
### Discrete Linear Systems {#discrete-linear-systems}
### Time-Invariant Systems {#time-invariant-systems}
### The Commutative Property Of Linear Time-Invariant Systems {#the-commutative-property-of-linear-time-invariant-systems}
### Analyzing Linear Time-Invariant Systems {#analyzing-linear-time-invariant-systems}
<a id="orgcbbc38b"></a>
{{< figure src="/ox-hugo/lyons11_lti_impulse_response.png" caption="Figure 1: LTI system unit impulse response sequences. (a) system block diagram. (b) impulse input sequence \\(x(n)\\) and impulse response output sequence \\(y(n)\\)." >}}
<a id="org50f1362"></a>
{{< figure src="/ox-hugo/lyons11_moving_average.png" caption="Figure 2: Analyzing a moving average filter. (a) averager block diagram; (b) impulse input and impulse response; (c) averager frequency magnitude response." >}}
## Periodic Sampling {#periodic-sampling}
### Aliasing: Signal Ambiguity In The Frequency Domain {#aliasing-signal-ambiguity-in-the-frequency-domain}
<a id="org0ff6bb3"></a>
{{< figure src="/ox-hugo/lyons11_frequency_ambiguity.png" caption="Figure 3: Frequency ambiguity; (a) discrete time sequence of values; (b) two different sinewaves that pass through the points of the discrete sequence" >}}
### Sampling Lowpass Signals {#sampling-lowpass-signals}
<a id="org38fdf07"></a>
{{< figure src="/ox-hugo/lyons11_noise_spectral_replication.png" caption="Figure 4: Spectral replications; (a) original continuous signal plus noise spectrum; (b) discrete spectrum with noise contaminating the signal of interest" >}}
<a id="org5e8c824"></a>
{{< figure src="/ox-hugo/lyons11_lowpass_sampling.png" caption="Figure 5: Low pass analog filtering prior to sampling at a rate of \\(f\_s\\) Hz." >}}
## The Discrete Fourier Transform {#the-discrete-fourier-transform}
\begin{equation}
X(f) = \int\_{-\infty}^{\infty} x(t) e^{-j2\pi f t} dt
\end{equation}
\begin{equation}
X(m) = \sum\_{n = 0}^{N-1} x(n) e^{-j2 \pi n m /N}
\end{equation}
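As a quick illustration (a sketch, not taken from the book), the DFT sum can be evaluated directly in Matlab and compared with the built-in `fft` function:

```matlab
%% Direct evaluation of the DFT sum, compared with the built-in fft (illustrative sketch)
N = 8;
n = 0:N-1;
x = sin(2*pi*n/N) + 0.5*cos(2*pi*2*n/N); % example discrete sequence x(n)

X = zeros(1, N);
for m = 0:N-1
    X(m+1) = sum(x .* exp(-1j*2*pi*n*m/N)); % X(m) = sum_n x(n) e^{-j 2 pi n m / N}
end

max(abs(X - fft(x))) % should be at the numerical precision level
```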
### Understanding The Dft Equation {#understanding-the-dft-equation}
### Dft Symmetry {#dft-symmetry}
### Dft Linearity {#dft-linearity}
### Dft Magnitudes {#dft-magnitudes}
### Dft Frequency Axis {#dft-frequency-axis}
### Dft Shifting Theorem {#dft-shifting-theorem}
### Inverse Dft {#inverse-dft}
### Dft Leakage {#dft-leakage}
### Windows {#windows}
### Dft Scalloping Loss {#dft-scalloping-loss}
### Dft Resolution, Zero Padding, And Frequency-Domain Sampling {#dft-resolution-zero-padding-and-frequency-domain-sampling}
### Dft Processing Gain {#dft-processing-gain}
### The Dft Of Rectangular Functions {#the-dft-of-rectangular-functions}
### Interpreting The Dft Using The Discrete-Time Fourier Transform {#interpreting-the-dft-using-the-discrete-time-fourier-transform}
## The Fast Fourier Transform {#the-fast-fourier-transform}
### Relationship Of The Fft To The Dft {#relationship-of-the-fft-to-the-dft}
### Hints On Using Ffts In Practice {#hints-on-using-ffts-in-practice}
### Derivation Of The Radix-2 Fft Algorithm {#derivation-of-the-radix-2-fft-algorithm}
### Fft Input/Output Data Index Bit Reversal {#fft-input-output-data-index-bit-reversal}
### Radix-2 Fft Butterfly Structures {#radix-2-fft-butterfly-structures}
### Alternate Single-Butterfly Structures {#alternate-single-butterfly-structures}
## Finite Impulse Response Filters {#finite-impulse-response-filters}
### An Introduction To Finite Impulse Response (Fir) Filters {#an-introduction-to-finite-impulse-response--fir--filters}
### Convolution In Fir Filters {#convolution-in-fir-filters}
### Lowpass Fir Filter Design {#lowpass-fir-filter-design}
### Bandpass Fir Filter Design {#bandpass-fir-filter-design}
### Highpass Fir Filter Design {#highpass-fir-filter-design}
### Parks-Mcclellan Exchange Fir Filter Design Method {#parks-mcclellan-exchange-fir-filter-design-method}
### Half-Band Fir Filters {#half-band-fir-filters}
### Phase Response Of Fir Filters {#phase-response-of-fir-filters}
### A Generic Description Of Discrete Convolution {#a-generic-description-of-discrete-convolution}
### Analyzing Fir Filters {#analyzing-fir-filters}
## Infinite Impulse Response Filters {#infinite-impulse-response-filters}
### An Introduction To Infinite Impulse Response Filters {#an-introduction-to-infinite-impulse-response-filters}
### The Laplace Transform {#the-laplace-transform}
### The Z-Transform {#the-z-transform}
### Using The Z-Transform To Analyze Iir Filters {#using-the-z-transform-to-analyze-iir-filters}
### Using Poles And Zeros To Analyze Iir Filters {#using-poles-and-zeros-to-analyze-iir-filters}
### Alternate Iir Filter Structures {#alternate-iir-filter-structures}
### Pitfalls In Building Iir Filters {#pitfalls-in-building-iir-filters}
### Improving Iir Filters With Cascaded Structures {#improving-iir-filters-with-cascaded-structures}
### Scaling The Gain Of Iir Filters {#scaling-the-gain-of-iir-filters}
### Impulse Invariance Iir Filter Design Method {#impulse-invariance-iir-filter-design-method}
### Bilinear Transform Iir Filter Design Method {#bilinear-transform-iir-filter-design-method}
### Optimized Iir Filter Design Method {#optimized-iir-filter-design-method}
### A Brief Comparison Of Iir And Fir Filters {#a-brief-comparison-of-iir-and-fir-filters}
## Specialized Digital Networks And Filters {#specialized-digital-networks-and-filters}
### Differentiators {#differentiators}
### Integrators {#integrators}
### Matched Filters {#matched-filters}
### Interpolated Lowpass Fir Filters {#interpolated-lowpass-fir-filters}
### Frequency Sampling Filters: The Lost Art {#frequency-sampling-filters-the-lost-art}
## Quadrature Signals {#quadrature-signals}
### Why Care About Quadrature Signals? {#why-care-about-quadrature-signals}
### The Notation Of Complex Numbers {#the-notation-of-complex-numbers}
### Representing Real Signals Using Complex Phasors {#representing-real-signals-using-complex-phasors}
### A Few Thoughts On Negative Frequency {#a-few-thoughts-on-negative-frequency}
### Quadrature Signals In The Frequency Domain {#quadrature-signals-in-the-frequency-domain}
### Bandpass Quadrature Signals In The Frequency Domain {#bandpass-quadrature-signals-in-the-frequency-domain}
### Complex Down-Conversion {#complex-down-conversion}
### A Complex Down-Conversion Example {#a-complex-down-conversion-example}
### An Alternate Down-Conversion Method {#an-alternate-down-conversion-method}
## The Discrete Hilbert Transform {#the-discrete-hilbert-transform}
### Hilbert Transform Definition {#hilbert-transform-definition}
### Why Care About The Hilbert Transform? {#why-care-about-the-hilbert-transform}
### Impulse Response Of A Hilbert Transformer {#impulse-response-of-a-hilbert-transformer}
### Designing A Discrete Hilbert Transformer {#designing-a-discrete-hilbert-transformer}
### Time-Domain Analytic Signal Generation {#time-domain-analytic-signal-generation}
### Comparing Analytical Signal Generation Methods {#comparing-analytical-signal-generation-methods}
## 10 Sample Rate Conversion {#10-sample-rate-conversion}
### 10.1 Decimation {#10-dot-1-decimation}
### 10.2 Two-Stage Decimation {#10-dot-2-two-stage-decimation}
### 10.3 Properties Of Downsampling {#10-dot-3-properties-of-downsampling}
### 10.4 Interpolation {#10-dot-4-interpolation}
### 10.5 Properties Of Interpolation {#10-dot-5-properties-of-interpolation}
### 10.6 Combining Decimation And Interpolation {#10-dot-6-combining-decimation-and-interpolation}
### 10.7 Polyphase Filters {#10-dot-7-polyphase-filters}
### 10.8 Two-Stage Interpolation {#10-dot-8-two-stage-interpolation}
### 10.9 Z-Transform Analysis Of Multirate Systems {#10-dot-9-z-transform-analysis-of-multirate-systems}
### 10.10 Polyphase Filter Implementations {#10-dot-10-polyphase-filter-implementations}
### 10.11 Sample Rate Conversion By Rational Factors {#10-dot-11-sample-rate-conversion-by-rational-factors}
### 10.12 Sample Rate Conversion With Half-Band Filters {#10-dot-12-sample-rate-conversion-with-half-band-filters}
### 10.13 Sample Rate Conversion With Ifir Filters {#10-dot-13-sample-rate-conversion-with-ifir-filters}
### 10.14 Cascaded Integrator-Comb Filters {#10-dot-14-cascaded-integrator-comb-filters}
## 11 Signal Averaging {#11-signal-averaging}
### 11.1 Coherent Averaging {#11-dot-1-coherent-averaging}
### 11.2 Incoherent Averaging {#11-dot-2-incoherent-averaging}
### 11.3 Averaging Multiple Fast Fourier Transforms {#11-dot-3-averaging-multiple-fast-fourier-transforms}
### 11.4 Averaging Phase Angles {#11-dot-4-averaging-phase-angles}
### 11.5 Filtering Aspects Of Time-Domain Averaging {#11-dot-5-filtering-aspects-of-time-domain-averaging}
### 11.6 Exponential Averaging {#11-dot-6-exponential-averaging}
## 12 Digital Data Formats And Their Effects {#12-digital-data-formats-and-their-effects}
### 12.1 Fixed-Point Binary Formats {#12-dot-1-fixed-point-binary-formats}
### 12.2 Binary Number Precision And Dynamic Range {#12-dot-2-binary-number-precision-and-dynamic-range}
### 12.3 Effects Of Finite Fixed-Point Binary Word Length {#12-dot-3-effects-of-finite-fixed-point-binary-word-length}
### 12.4 Floating-Point Binary Formats {#12-dot-4-floating-point-binary-formats}
### 12.5 Block Floating-Point Binary Format {#12-dot-5-block-floating-point-binary-format}
## 13 Digital Signal Processing Tricks {#13-digital-signal-processing-tricks}
### 13.1 Frequency Translation Without Multiplication {#13-dot-1-frequency-translation-without-multiplication}
### 13.2 High-Speed Vector Magnitude Approximation {#13-dot-2-high-speed-vector-magnitude-approximation}
### 13.3 Frequency-Domain Windowing {#13-dot-3-frequency-domain-windowing}
### 13.4 Fast Multiplication Of Complex Numbers {#13-dot-4-fast-multiplication-of-complex-numbers}
### 13.5 Efficiently Performing The Fft Of Real Sequences {#13-dot-5-efficiently-performing-the-fft-of-real-sequences}
### 13.6 Computing The Inverse Fft Using The Forward Fft {#13-dot-6-computing-the-inverse-fft-using-the-forward-fft}
### 13.7 Simplified Fir Filter Structure {#13-dot-7-simplified-fir-filter-structure}
### 13.8 Reducing A/D Converter Quantization Noise {#13-dot-8-reducing-a-d-converter-quantization-noise}
### 13.9 A/D Converter Testing Techniques {#13-dot-9-a-d-converter-testing-techniques}
### 13.10 Fast Fir Filtering Using The Fft {#13-dot-10-fast-fir-filtering-using-the-fft}
### 13.11 Generating Normally Distributed Random Data {#13-dot-11-generating-normally-distributed-random-data}
### 13.12 Zero-Phase Filtering {#13-dot-12-zero-phase-filtering}
### 13.13 Sharpened Fir Filters {#13-dot-13-sharpened-fir-filters}
### 13.14 Interpolating A Bandpass Signal {#13-dot-14-interpolating-a-bandpass-signal}
### 13.15 Spectral Peak Location Algorithm {#13-dot-15-spectral-peak-location-algorithm}
### 13.16 Computing Fft Twiddle Factors {#13-dot-16-computing-fft-twiddle-factors}
### 13.17 Single Tone Detection {#13-dot-17-single-tone-detection}
### 13.18 The Sliding Dft {#13-dot-18-the-sliding-dft}
### 13.19 The Zoom Fft {#13-dot-19-the-zoom-fft}
### 13.20 A Practical Spectrum Analyzer {#13-dot-20-a-practical-spectrum-analyzer}
### 13.21 An Efficient Arctangent Approximation {#13-dot-21-an-efficient-arctangent-approximation}
### 13.22 Frequency Demodulation Algorithms {#13-dot-22-frequency-demodulation-algorithms}
### 13.23 Dc Removal {#13-dot-23-dc-removal}
### 13.24 Improving Traditional Cic Filters {#13-dot-24-improving-traditional-cic-filters}
### 13.25 Smoothing Impulsive Noise {#13-dot-25-smoothing-impulsive-noise}
### 13.26 Efficient Polynomial Evaluation {#13-dot-26-efficient-polynomial-evaluation}
### 13.27 Designing Very High-Order Fir Filters {#13-dot-27-designing-very-high-order-fir-filters}
### 13.28 Time-Domain Interpolation Using The Fft {#13-dot-28-time-domain-interpolation-using-the-fft}
### 13.29 Frequency Translation Using Decimation {#13-dot-29-frequency-translation-using-decimation}
### 13.30 Automatic Gain Control (Agc) {#13-dot-30-automatic-gain-control--agc}
### 13.31 Approximate Envelope Detection {#13-dot-31-approximate-envelope-detection}
### 13.32 A Quadrature Oscillator {#13-dot-32-a-quadrature-oscillator}
### 13.33 Specialized Exponential Averaging {#13-dot-33-specialized-exponential-averaging}
### 13.34 Filtering Narrowband Noise Using Filter Nulls {#13-dot-34-filtering-narrowband-noise-using-filter-nulls}
### 13.35 Efficient Computation Of Signal Variance {#13-dot-35-efficient-computation-of-signal-variance}
### 13.36 Real-Time Computation Of Signal Averages And Variances {#13-dot-36-real-time-computation-of-signal-averages-and-variances}
### 13.37 Building Hilbert Transformers From Half-Band Filters {#13-dot-37-building-hilbert-transformers-from-half-band-filters}
### 13.38 Complex Vector Rotation With Arctangents {#13-dot-38-complex-vector-rotation-with-arctangents}
### 13.39 An Efficient Differentiating Network {#13-dot-39-an-efficient-differentiating-network}
### 13.40 Linear-Phase Dc-Removal Filter {#13-dot-40-linear-phase-dc-removal-filter}
### 13.41 Avoiding Overflow In Magnitude Computations {#13-dot-41-avoiding-overflow-in-magnitude-computations}
### 13.42 Efficient Linear Interpolation {#13-dot-42-efficient-linear-interpolation}
### 13.43 Alternate Complex Down-Conversion Schemes {#13-dot-43-alternate-complex-down-conversion-schemes}
### 13.44 Signal Transition Detection {#13-dot-44-signal-transition-detection}
### 13.45 Spectral Flipping Around Signal Center Frequency {#13-dot-45-spectral-flipping-around-signal-center-frequency}
### 13.46 Computing Missing Signal Samples {#13-dot-46-computing-missing-signal-samples}
### 13.47 Computing Large Dfts Using Small Ffts {#13-dot-47-computing-large-dfts-using-small-ffts}
### 13.48 Computing Filter Group Delay Without Arctangents {#13-dot-48-computing-filter-group-delay-without-arctangents}
### 13.49 Computing A Forward And Inverse Fft Using A Single Fft {#13-dot-49-computing-a-forward-and-inverse-fft-using-a-single-fft}
### 13.50 Improved Narrowband Lowpass Iir Filters {#13-dot-50-improved-narrowband-lowpass-iir-filters}
### 13.51 A Stable Goertzel Algorithm {#13-dot-51-a-stable-goertzel-algorithm}
## A: The Arithmetic Of Complex Numbers {#a-the-arithmetic-of-complex-numbers}
### A.1 Graphical Representation Of Real And Complex Numbers {#a-dot-1-graphical-representation-of-real-and-complex-numbers}
### A.2 Arithmetic Representation Of Complex Numbers {#a-dot-2-arithmetic-representation-of-complex-numbers}
### A.3 Arithmetic Operations Of Complex Numbers {#a-dot-3-arithmetic-operations-of-complex-numbers}
### A.4 Some Practical Implications Of Using Complex Numbers {#a-dot-4-some-practical-implications-of-using-complex-numbers}
## B: Closed Form Of A Geometric Series {#b-closed-form-of-a-geometric-series}
## C: Time Reversal And The Dft {#c-time-reversal-and-the-dft}
## D: Mean, Variance, And Standard Deviation {#d-mean-variance-and-standard-deviation}
### D.1 Statistical Measures {#d-dot-1-statistical-measures}
### D.2 Statistics Of Short Sequences {#d-dot-2-statistics-of-short-sequences}
### D.3 Statistics Of Summed Sequences {#d-dot-3-statistics-of-summed-sequences}
### D.4 Standard Deviation (Rms) Of A Continuous Sinewave {#d-dot-4-standard-deviation--rms--of-a-continuous-sinewave}
### D.5 Estimating Signal-To-Noise Ratios {#d-dot-5-estimating-signal-to-noise-ratios}
### D.6 The Mean And Variance Of Random Functions {#d-dot-6-the-mean-and-variance-of-random-functions}
### D.7 The Normal Probability Density Function {#d-dot-7-the-normal-probability-density-function}
## E: Decibels (Db And Dbm) {#e-decibels--db-and-dbm}
### E.1 Using Logarithms To Determine Relative Signal Power {#e-dot-1-using-logarithms-to-determine-relative-signal-power}
### E.2 Some Useful Decibel Numbers {#e-dot-2-some-useful-decibel-numbers}
### E.3 Absolute Power Using Decibels {#e-dot-3-absolute-power-using-decibels}
## F: Digital Filter Terminology {#f-digital-filter-terminology}
## G: Frequency Sampling Filter Derivations {#g-frequency-sampling-filter-derivations}
### G.1 Frequency Response Of A Comb Filter {#g-dot-1-frequency-response-of-a-comb-filter}
### G.2 Single Complex Fsf Frequency Response {#g-dot-2-single-complex-fsf-frequency-response}
### G.3 Multisection Complex Fsf Phase {#g-dot-3-multisection-complex-fsf-phase}
### G.4 Multisection Complex Fsf Frequency Response {#g-dot-4-multisection-complex-fsf-frequency-response}
### G.5 Real Fsf Transfer Function {#g-dot-5-real-fsf-transfer-function}
### G.6 Type-Iv Fsf Frequency Response {#g-dot-6-type-iv-fsf-frequency-response}
## H: Frequency Sampling Filter Design Tables {#h-frequency-sampling-filter-design-tables}
## I: Computing Chebyshev Window Sequences {#i-computing-chebyshev-window-sequences}
### I.1 Chebyshev Windows For Fir Filter Design {#i-dot-1-chebyshev-windows-for-fir-filter-design}
### I.2 Chebyshev Windows For Spectrum Analysis {#i-dot-2-chebyshev-windows-for-spectrum-analysis}
<./biblio/references.bib>

View File

@@ -0,0 +1,20 @@
+++
title = "Precision Machine Design"
author = ["Dehaeze Thomas"]
draft = true
+++
Tags
:
Reference
: <slocum92_precis_machin_desig>
Author(s)
: Slocum, A. H.
Year
: 1992
<./biblio/references.bib>

View File

@@ -1,6 +1,6 @@
+++
title = "Dynamic error budgeting, a design approach"
author = ["Dehaeze Thomas"]
draft = false
ref_author = "Monkhorst, W."
ref_year = 2004
@@ -10,7 +10,7 @@ Tags
: [Dynamic Error Budgeting]({{<relref "dynamic_error_budgeting.md#" >}})
Reference
: <monkhorst04_dynam_error_budget>
Author(s)
: Monkhorst, W.
@@ -21,6 +21,11 @@ Year
## Introduction {#introduction}
The performance of a mechatronic system is generally defined by the error made, which is caused by the disturbances \\(d\\) that act on the system.
In order to study how the disturbances \\(d\\) propagate to the error, frequency dependent models of the disturbances and subsystems must be used.
Disturbances (which are stochastic) are modeled with their power spectral densities.
The new design approach will be referred to as _Dynamic Error Budgeting_, where "dynamic" refers to the use of the frequency dependent models.
The challenge of this thesis is defined as follows:
> Develop a tool which enables the designer to account for stochastic disturbances during the design of a mechatronics system.
@@ -40,21 +45,26 @@ Develop tools should enable the designer to:
Main motivations are:
- **Cutting costs in the design phase**: if the error is not simulated during the design phase, the final performance level can only be found once a costly prototype is built and the performance is measured physically. If the performance level is not met, the designer has to find out what component or disturbance causes the output to exceed the error budget and then redesign the system. If the error could be simulated beforehand, however, changes can be made while the system is still in the design phase, cutting down the costs of the system.
- **Speeding up the design process**: It can give a quick indication if a concept is feasible or not.
Several concepts can be analyzed in a short period of time and the most promising concept can be chosen, speeding up the design process.
- **Enhancing design insight**: If the performance specifications are not met, the designer wants to know which component or which system property is limiting the performance most.
### DEB design process {#deb-design-process}
The DEB design process can be summarized as follows: choose a system concept and simulate the output error.
If the total error meets the performance specifications, the design is satisfactory.
If the error exceeds the specified budget, the designer has to change the system such that the specifications are met.
Step by step, the process is as follows:
- Design a concept system.
- Model the concept system, such that the closed loop transfer functions can be determined.
- Identify all significant disturbances.
Model them with their _Power Spectral Density_.
- Define the performance outputs of the system and simulate the output error.
Using the theory of _propagation_, the contribution of each disturbance to the output error can be analyzed and the critical disturbance can be pointed out.
- Make changes to the system that are expected to improve the performance level, and simulate the output error again.
Iterate until the error budget is met.
@@ -63,16 +73,15 @@ Step by step, the process is as follows:
The assumptions when applying DEB are:
- The system can be accurately described with a **linear time invariant model**.
This is usually the case, as much effort is put into making systems behave linearly and because feedback loops have a "linearizing" effect on the closed loop behavior.
- The disturbances acting on the system must be **stationary** (their statistical properties are not allowed to change over time).
- The disturbances are **uncorrelated** with each other.
This is more difficult to satisfy for MIMO systems and the designer must make sure that the separate disturbances all originate from separate independent sources.
- The disturbance signals are modeled by their **Power Spectral Density**.
This implies that only stochastic disturbances are allowed.
Deterministic components like sinusoidal and DC signals appear as infinite peaks in the PSD and should not be used.
For the deterministic part, other techniques can be used to determine its influence on the error.
- The calculation method makes no assumption on the distribution functions of the disturbances.
In practice, many disturbances will have a normal-like distribution.
@@ -97,36 +106,36 @@ Find a controller \\(C\_{\mathcal{H}\_2}\\) which minimizes the \\(\mathcal{H}\_
In order to synthesize an \\(\mathcal{H}\_2\\) controller that will minimize the output error, the total system including disturbances needs to be modeled as a system with zero mean white noise inputs.
This is done by using weighting filter \\(V\_w\\), of which the output signal has a PSD \\(S\_w(f)\\) when the input is zero mean white noise (Figure [1](#orgfce1d5b)).
<a id="orgfce1d5b"></a>
{{< figure src="/ox-hugo/monkhorst04_weighting_filter.png" caption="Figure 1: The use of a weighting filter \\(V\_w(f)\\,[SI]\\) to give the weighted signal \\(\bar{w}(t)\\) a certain PSD \\(S\_w(f)\\)." >}}
The white noise input \\(w(t)\\) is dimensionless, and when the weighting filter has units [SI], the resulting weighted signal \\(\bar{w}(t)\\) has units [SI].
The PSD \\(S\_w(f)\\) of the weighted signal is:
\\[ |S\_w(f)| = V\_w(j 2 \pi f) V\_w^T(-j 2 \pi f) \\]
Given \\(S\_w(f)\\), \\(V\_w(f)\\) can be obtained using a technique called _spectral factorization_.
However, this can be avoided if the modeling of the disturbances is directly done in terms of weighting filters.
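As a sketch of this idea (the weighting filter below is hypothetical), one can verify with `pwelch` that filtering zero mean white noise with a weighting filter \\(V\_w\\) indeed produces a signal with the intended PSD:

```matlab
%% Sketch: shaping white noise with a weighting filter Vw (hypothetical first order filter)
T_s = 1e-4;                 % sampling time [s]
t   = (0:T_s:100)';         % time vector [s]
s   = tf('s');
Vw  = 1e-3/(1 + s/2/pi/10); % weighting filter, target two-sided PSD is |Vw|^2

w    = sqrt(1/T_s)*randn(length(t), 1); % white noise with unit two-sided PSD
wbar = lsim(Vw, w, t);                  % weighted signal with PSD Sw(f) = |Vw(f)|^2

[S_w, f] = pwelch(wbar, hanning(ceil(5/T_s)), [], [], 1/T_s); % estimated one-sided PSD
% S_w should match 2*abs(squeeze(freqresp(Vw, f, 'Hz'))).^2 (factor 2: one-sided PSD)
```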
Output weighting filters can also be used to scale different outputs relative to each other (Figure [2](#orgd937879)).
<a id="orgd937879"></a>
{{< figure src="/ox-hugo/monkhorst04_general_weighted_plant.png" caption="Figure 2: The open loop system \\(\bar{G}\\) in series with the diagonal input weighting filter \\(V\_w\\) and diagonal output scaling filter \\(W\_z\\), defining the generalized plant \\(G\\)" >}}
#### Output scaling and the Pareto curve {#output-scaling-and-the-pareto-curve}
In this research, the outputs of the closed loop system (Figure [3](#orgf4dc585)) are:
- the performance (error) signal \\(e\\)
- the controller output \\(u\\)
In this way, the designer can analyze how much control effort is used to achieve the performance level at the performance output.
<a id="orgf4dc585"></a>
{{< figure src="/ox-hugo/monkhorst04_closed_loop_H2.png" caption="Figure 3: The closed loop system with weighting filters included. The system has \\(n\\) disturbance inputs and two outputs: the error \\(e\\) and the control signal \\(u\\). The \\(\mathcal{H}\_2\\) controller minimizes the \\(\mathcal{H}\_2\\) norm of this system." >}}
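A minimal Matlab sketch of such a synthesis for the setup of Figure 3 (assuming the Robust Control Toolbox is available; the plant and weighting filters below are hypothetical) could look as follows:

```matlab
%% Sketch of H2 synthesis from disturbance weighting filters (hypothetical example)
s  = tf('s');
G  = 1/(1 + s/2/pi/10);     % plant model (hypothetical)
Vd = 1e-6/(1 + s/2/pi/100); % weighting filter of the output disturbance
Vn = 1e-8;                  % weighting filter of the sensor noise (static)

% Generalized plant: inputs [wd; wn; u], outputs [e; u; y]
% with e = Vd*wd - G*u (performance) and y = Vd*wd + Vn*wn - G*u (measurement)
P = [Vd, 0,  -G;
     0,  0,   1;
     Vd, Vn, -G];
P = minreal(ss(P));

% H2-optimal controller: 1 measurement (y), 1 control input (u)
[K, CL, gam] = h2syn(P, 1, 1); % gam is the achieved H2 norm, i.e. the combined RMS
                               % of e and u for unit zero mean white noise inputs
```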
@@ -148,9 +157,3 @@ To achieve the highest degree of prediction accuracy, it is recommended to use t
When an \\(\mathcal{H}\_2\\) controller is synthesized for a particular system, it can give the control designer useful hints about how to control the system best for optimal performance.
Drawbacks, however, are that no robustness guarantees can be given and that the order of the \\(\mathcal{H}\_2\\) controller will generally be too high for implementation.

View File

@@ -1,6 +1,6 @@
+++
title = "Actuators"
author = ["Dehaeze Thomas"]
draft = false
+++
@@ -17,19 +17,14 @@ Links to specific actuators:
For vibration isolation:
- In <ito16_compar_class_high_precis_actuat>, the effect of the actuator stiffness on the attainable vibration isolation is studied ([Notes]({{<relref "ito16_compar_class_high_precis_actuat.md#" >}}))
## Brush-less DC Motor {#brush-less-dc-motor}
- <yedamale03_brush_dc_bldc_motor_fundam>
<https://www.electricaltechnology.org/2016/05/bldc-brushless-dc-motor-construction-working-principle.html>
## [Stepper Motor]({{<relref "stepper_motor.md#" >}}) {#stepper-motor--stepper-motor-dot-md}

View File

@@ -0,0 +1,114 @@
+++
title = "Decimation"
author = ["Dehaeze Thomas"]
draft = false
+++
Tags
: [Digital Signal Processing]({{< relref "digital_signal_processing.md" >}})
<div class="definition">
Decimation is the two-step process of low pass filtering followed by an operation known as downsampling.
</div>
We can downsample a sequence of sampled signal values by a factor of \\(M\\) by retaining every Mth sample and discarding all the remaining samples.
Relative to the original sample rate \\(f\_{s,\text{old}}\\), the sample rate of the downsampled sequence is:
\begin{equation}
f\_{s,\text{new}} = \frac{f\_{s,\text{old}}}{M}
\end{equation}
<div class="exampl">
For example, assume that an analog sinewave has been sampled to produce \\(x\_{\text{old}}(n)\\).
The downsampled sequence is:
\\[ x\_{\text{new}}(m) = x\_{\text{old}}(Mm) \\]
With \\(M=3\\), the result is shown in Figure [1](#figure--fig:decimation-example).
<a id="figure--fig:decimation-example"></a>
{{< figure src="/ox-hugo/decimation_example.png" caption="<span class=\"figure-number\">Figure 1: </span>Sample rate conversion: (a) original sequence; (b) downsampled by \\(M=3\\) sequence" >}}
</div>
The spectral implications of downsampling are what we should expect, as shown in Figure [2](#figure--fig:decimation-spectral-aliasing).
<a id="figure--fig:decimation-spectral-aliasing"></a>
{{< figure src="/ox-hugo/decimation_spectral_aliasing.png" caption="<span class=\"figure-number\">Figure 2: </span>Decimation by a factor of three: (a) spectrum of original \\(x\_{\text{old}}(n)\\) signal; (b) spectrum after downsampling by three." >}}
There is a limit to the amount of downsampling that can be performed relative to the bandwidth \\(B\\) of the original signal.
We must ensure that \\(f\_{s,\text{new}} > 2B\\) to prevent overlapped spectral replications (aliasing errors) after downsampling.
If a decimation application requires \\(f\_{s,\text{new}}\\) to be less than \\(2B\\), then \\(x\_{\text{old}}(n)\\) must be low pass filtered before the downsampling process is performed.
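A minimal Matlab sketch of this filter-then-downsample process is shown below (the signal, filter order and cutoff frequency are only illustrative):

```matlab
%% Decimation sketch: low pass filtering followed by downsampling (illustrative values)
fs_old = 1000; % original sample rate [Hz]
M      = 3;    % downsampling factor
t      = (0:1/fs_old:1-1/fs_old)';
x_old  = sin(2*pi*40*t) + 0.5*randn(size(t)); % example signal plus broadband noise

h      = fir1(64, 1/M);       % low pass FIR with cutoff at the new Nyquist frequency
x_filt = filter(h, 1, x_old); % anti-aliasing low pass filter
x_new  = x_filt(1:M:end);     % retain every Mth sample
fs_new = fs_old/M;            % new sample rate [Hz]
```

Matlab's built-in `decimate` function combines these two steps.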
## Two Stage Decimation {#two-stage-decimation}
When the desired decimation factor \\(M\\) is large, say \\(M > 20\\), there is an important feature of the filter/decimation process to keep in mind.
Significant low pass filter computational savings may be obtained by implementing two-stage decimation, as shown in Figure [3](#figure--fig:decimation-two-stages) (b).
<a id="figure--fig:decimation-two-stages"></a>
{{< figure src="/ox-hugo/decimation_two_stages.png" caption="<span class=\"figure-number\">Figure 3: </span>Decimation: (a) single-stage; (b) two-stage" >}}
The question is: "Given a desired total downsampling factor \\(M\\), what should be the values of \\(M\_1\\) and \\(M\_2\\) to minimize the number of taps in low-pass filters \\(\text{LPF}\_1\\) and \\(\text{LPF}\_2\\)"?
For two stage decimation, the optimum value for \\(M\_1\\) is:
\begin{equation} \label{eq:M1opt}
M\_{1,\text{opt}} \approx 2 M \cdot \frac{1 - \sqrt{MF/(2-F)}}{2 - F(M+1)}
\end{equation}
where \\(F\\) is the ratio of single-stage low pass filter's transition region width to that filter's stop-band frequency:
\begin{equation}
F = \frac{f\_{\text{stop}} - B^\prime}{f\_{\text{stop}}}
\end{equation}
After using Eq. <eq:M1opt> to determine the optimum first downsampling factor, and setting \\(M\_1\\) equal to the integer sub-multiple of \\(M\\) that is closest to \\(M\_{1,\text{opt}}\\), the second downsampling factor is:
\begin{equation} \label{eq:M2\_from\_M1}
M\_2 = \frac{M}{M\_1}
\end{equation}
<div class="exampl">
Let's assume we have an \\(x\_{\text{old}}(n)\\) input signal arriving at a sample rate of \\(400\\,kHz\\), and we must decimate that signal by a factor of \\(M=100\\) to obtain a final sample rate of \\(4\\,kHz\\).
Also, let's assume the base-band frequency range of interest is from \\(0\\) to \\(B^\prime = 1.8\\,kHz\\), and we want \\(60\\,dB\\) of filter stop-band attenuation.
A single stage decimation low-pass filter's frequency response is shown in Figure [4](#figure--fig:decimation-two-stage-example) (a).
The number of taps \\(N\\) required for a single-stage decimation would be:
\begin{equation}
N = \frac{\text{Atten}}{22 (f\_{\text{stop}} - f\_{\text{pass}})} = \frac{60}{22(2.2/400 - 1.8/400)} = 2727
\end{equation}
which is way too large for practical implementation.
To reduce the number of necessary filter taps, we can partition the decimation problem into two stages.
With \\(M = 100\\), \\(F = (2200-1800)/2200\\), Eq. <eq:M1opt> yields \\(M\_{1,\text{opt}} = 26.4\\).
The integer sub-multiple of 100 closest to \\(26.4\\) is \\(25\\), so we set \\(M\_1 = 25\\).
Next, from Eq. <eq:M2_from_M1>, \\(M\_2 = 4\\) is found.
The first low pass filter has a pass-band cutoff frequency of \\(1.8\\,kHz\\) and its stop-band is \\(400/25 - 1.8 = 14.2\\,kHz\\) (Figure [4](#figure--fig:decimation-two-stage-example) (d)).
The second low pass filter has a pass-band cutoff frequency of \\(1.8\\,kHz\\) and its stop-band is \\(4-1.8 = 2.2\\,kHz\\).
The total number of required taps is:
\begin{equation}
N\_{\text{total}} = N\_{\text{LPF}\_1} + N\_{\text{LPF}\_2} = \frac{60}{22(14.2/400-1.8/400)} + \frac{60}{22(2.2/16 - 1.8/16)} \approx 197
\end{equation}
which is much more efficient than the single-stage decimation.
<a id="figure--fig:decimation-two-stage-example"></a>
{{< figure src="/ox-hugo/decimation_two_stage_example.png" caption="<span class=\"figure-number\">Figure 4: </span>Two stage decimation: (a) single-stage filter response; (b): decimation by 100; (c) spectrum of original signal; (d) output spectrum of the \\(M=25\\) down-sampler; (e) output spectrum of the \\(M=4\\) down-sampler." >}}
</div>
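The numbers of this example can be checked with a few lines of Matlab (a verification sketch of the equations above, not part of the book):

```matlab
%% Verification of the two-stage decimation example
M      = 100;                 % total decimation factor
f_stop = 2200;                % single-stage stop-band frequency [Hz]
B      = 1800;                % band of interest [Hz]
F      = (f_stop - B)/f_stop; % relative transition band width

M1_opt = 2*M*(1 - sqrt(M*F/(2-F)))/(2 - F*(M+1)) % ~26.4
M1 = 25;   % integer sub-multiple of M closest to M1_opt
M2 = M/M1  % = 4

% Rule-of-thumb tap counts: N = Atten/(22*(f_stop - f_pass)/f_s)
N_lpf1  = 60/(22*(14.2/400 - 1.8/400)) % first stage  (f_s = 400 kHz), ~88 taps
N_lpf2  = 60/(22*(2.2/16  - 1.8/16))   % second stage (f_s = 16 kHz),  ~109 taps
N_total = N_lpf1 + N_lpf2              % ~197 taps
```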
## References: {#references}
<lyons11_under_digit_signal_proces>

View File

@@ -1,18 +1,129 @@
+++
title = "Power Spectral Density"
author = ["Dehaeze Thomas"]
draft = false
+++
Tags
: [Signal to Noise Ratio]({{<relref "signal_to_noise_ratio.md#" >}})
Tutorial about Power Spectral Density is accessible [here](https://research.tdehaeze.xyz/spectral-analysis/).
A good article about how to use the `pwelch` function with Matlab <schmid12_how_to_use_fft_matlab>.
## Parseval's Theorem - Linking the Frequency and Time domain {#parseval-s-theorem-linking-the-frequency-and-time-domain}
For non-periodic finite duration signals, the energy in the time domain is described by:
\begin{equation}
\text{Energy} = \int\_{-\infty}^\infty x(t)^2 dt
\end{equation}
Parseval's Theorem states that energy in the time domain equals energy in the frequency domain:
\begin{equation}
\text{Energy} = \int\_{-\infty}^{\infty} x(t)^2 dt = \int\_{-\infty}^{\infty} |X(f)|^2 df
\end{equation}
where \\(X(f)\\) is the Fourier transform of the time signal \\(x(t)\\):
\begin{equation}
X(f) = \int\_{-\infty}^{\infty} x(t) e^{-2\pi j f t} dt
\end{equation}
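As a numerical sketch (with an arbitrary test signal), Parseval's theorem can be verified for a sampled signal using the `fft`:

```matlab
%% Numerical check of Parseval's theorem on a sampled signal (illustrative sketch)
T_s = 1e-3;           % sampling time [s]
t   = (0:T_s:1-T_s)'; % time vector [s]
x   = sin(2*pi*10*t) + 0.1*randn(size(t)); % arbitrary test signal

E_time = sum(x.^2)*T_s; % energy computed in the time domain

X   = fft(x)*T_s;            % approximation of the Fourier transform X(f)
d_f = 1/(length(x)*T_s);     % frequency resolution [Hz]
E_freq = sum(abs(X).^2)*d_f; % energy computed in the frequency domain

% E_time and E_freq are equal (up to numerical precision)
```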
## Power Spectral Density function (PSD) {#power-spectral-density-function--psd}
The power distribution over frequency of a time signal \\(x(t)\\) is described by its PSD, denoted \\(S\_x(f)\\).
A PSD is a power density function with units \\([\text{SI}^2/Hz]\\), meaning that the area underneath the PSD curve equals the power (units \\([\text{SI}^2]\\)) of the signal (SI is the unit of the signal, e.g. \\(m/s\\)).
Using the definition of signal power \\(\bar{x^2}\\) and Parseval's theorem, we can link power in the time domain with power in the frequency domain:
\begin{equation}
\text{power} = \lim\_{T \to \infty} \frac{1}{2T} \int\_{-T}^{T} x\_T(t)^2 dt = \lim\_{T \to \infty} \frac{1}{2T} \int\_{-\infty}^{\infty} |X\_T(f)|^2 df = \int\_{-\infty}^{\infty} \left( \lim\_{T \to \infty} \frac{|X\_T(f)|^2}{2T} \right) df
\end{equation}
where \\(X\_T(f)\\) denotes the Fourier transform of \\(x\_T(t)\\), which equals \\(x(t)\\) on the interval \\(-T \le t \le T\\) and is zero outside this interval.
This term is referred to as the two-sided spectral density:
\begin{equation}
S\_{x,two} (f) = \lim\_{T \to \infty} \frac{|X\_T(f)|^2}{2T}, \quad -\infty \le f \le \infty
\end{equation}
In practice, the **one sided PSD** is used, which is only defined on the positive frequency axis but also contains all the power.
It is defined as:
\begin{equation}
S\_{x}(f) = \lim\_{T \to \infty} \frac{|X\_T(f)|^2}{T}, \quad 0 \le f \le \infty
\end{equation}
For discrete time signals, the one-sided PSD estimate is defined as:
\begin{equation}
\hat{S}(f\_k) = \frac{|X\_L(f\_k)|^2}{L T\_s}
\end{equation}
where \\(L\\) equals the number of time samples, \\(T\_s\\) the sample time, and \\(X\_L(f\_k)\\) is the \\(N\\)-point discrete Fourier transform of the discrete time signal \\(x\_L[n]\\) containing \\(L\\) samples:
\begin{equation}
X\_L(f\_k) = \sum\_{n = 0}^{N-1} x\_L[n] e^{-j 2 \pi k n/N}
\end{equation}
## Matlab Code for computing the PSD and CPS {#matlab-code-for-computing-the-psd-and-cps}
Let's compute the PSD of a signal by "hand".
The signal is defined below.
```matlab
%% Signal generation
s = tf('s'); % Laplace variable
T_s = 1e-3; % Sampling Time [s]
t = T_s:T_s:100; % Time vector [s]
L = length(t);
x = lsim(1/(1 + s/2/pi/5), randn(1, L), t); % White noise shaped by a first order low pass filter
```
The computation is performed using the `fft` function.
```matlab
%% Parameters
T_r = L*T_s; % signal time range
d_f = 1/T_r; % width of frequency grid
F_s = 1/T_s; % sample frequency
F_n = F_s/2; % Nyquist frequency
F = [0:d_f:F_n]; % one sided frequency grid
% Discrete Time Fourier Transform Wxx
Wxx = fft(x - mean(x))/L;
% Two-sided Power Spectrum Pxx [SI^2]
Pxx = Wxx.*conj(Wxx);
% Two-sided Power Spectral Density Sxx_t [SI^2/Hz]
Sxx_t = Pxx/d_f;
% One-sided Power Spectral Density Sxx_o [SI^2/Hz] defined on F
Sxx_o = 2*Sxx_t(1:L/2+1);
```
The result is shown in Figure [1](#org41c99c6).
<a id="org41c99c6"></a>
{{< figure src="/ox-hugo/psd_manual_example.png" caption="Figure 1: Amplitude Spectral Density with manual computation" >}}
This can also be done using the `pwelch` function, which includes windowing and averaging (Welch's method).
```matlab
%% Computation using pwelch function
[pxx, f] = pwelch(x, hanning(ceil(5/T_s)), [], [], 1/T_s);
```
The comparison of the two methods is shown in Figure [2](#orge7a31a8).
<a id="orge7a31a8"></a>
{{< figure src="/ox-hugo/psd_comp_pwelch_manual_example.png" caption="Figure 2: Comparison of the Amplitude Spectral Densities obtained with the manual computation and with the `pwelch` function" >}}

View File

@@ -0,0 +1,19 @@
+++
title = "Stepper Motor"
author = ["Dehaeze Thomas"]
draft = false
+++
Tags
:
## 2 phase VS 5 phase stepper motor {#2-phase-vs-5-phase-stepper-motor}
<https://www.orientalmotor.com/stepper-motors/technology/2-phase-vs-5-phase-stepper-motors.html>
## Errors {#errors}
For a two phase stepper motor, there are (typically) 200 steps per revolution.
Errors with a frequency of 200 cycles per revolution can therefore be expected.