This is much more efficient than single-stage decimation.

There are two **practical issues** to consider for two-stage decimation:
- First, if the dual-filter system is required to have a pass-band peak-peak ripple of \\(R\\) dB, then both filters must be designed to have a pass-band peak-peak ripple of no greater than \\(R/2\\) dB.
- Second, the number of multiplications needed to compute each \\(x\_{\text{new}}(m)\\) output sample is much larger than \\(N\_\text{total}\\) because we must compute so many \\(\text{LPF}\_1\\) and \\(\text{LPF}\_2\\) output samples destined to be discarded.

In order to cope with the second issue, an efficient decimation filter implementation scheme called _polyphase decomposition_ can be used.
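
A minimal sketch of the idea, assuming NumPy is available (the function name, test signal, and filter below are illustrative, not from this article): the filter is split into \\(M\\) sub-filters containing every \\(M\\)-th coefficient, each sub-filter runs at the low rate on one down-sampled phase of the input, and the partial results are summed, so no discarded output samples are ever computed.

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Decimate x by M with FIR filter h via polyphase decomposition.

    Only the output samples that survive the downsampling are computed,
    so the multiply count per kept sample is roughly 1/M of the naive
    filter-then-discard approach. Result equals np.convolve(x, h)[::M].
    """
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    # Sub-filter k holds taps h[k], h[k+M], h[k+2M], ...
    subfilters = [h[k::M] for k in range(M)]
    # Input phase k is x(pM - k); phases k > 0 start with an implicit zero.
    phases = [x[::M]] + [np.concatenate(([0.0], x[M - k::M])) for k in range(1, M)]
    out_len = (len(x) + len(h) - 2) // M + 1
    y = np.zeros(out_len)
    for xk, ek in zip(phases, subfilters):
        yk = np.convolve(xk, ek)[:out_len]   # convolution at the LOW rate
        y[:len(yk)] += yk
    return y

# Quick check against filter-then-discard, using an arbitrary signal and a
# stand-in low-pass filter.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = np.ones(32) / 32
assert np.allclose(polyphase_decimate(x, h, 4), np.convolve(x, h)[::4])
```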
The advantages of two-stage decimation over single-stage decimation are:

- an overall reduction in computational workload
- reduced signal and filter-coefficient data storage
- simpler filter designs
- a decrease in the ill effects of finite binary word-length filter coefficients

These advantages become more pronounced as the overall desired decimation factor \\(M\\) becomes larger.
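
To make the workload advantage concrete, here is a rough sketch assuming SciPy is available; the sample rate, band edges, attenuation, and the \\(M = M\_1 M\_2\\) split are illustrative numbers, not taken from this article. It compares Kaiser-window filter-length estimates, using the usual two-stage design rule that \\(\text{LPF}\_1\\) only has to keep energy from aliasing into the final band of interest.

```python
from scipy import signal

# Illustrative numbers: decimate a 400 kHz signal by M = 100 down to 4 kHz,
# keeping a 1.8 kHz band of interest with 60 dB of stop-band rejection.
fs, M, B, atten_db = 400e3, 100, 1.8e3, 60.0
M1, M2 = 25, 4                     # one possible split, M = M1 * M2

# Single-stage: the transition band (2.0 kHz - 1.8 kHz) is tiny relative
# to fs, so the Kaiser length estimate is enormous.
f_stop = fs / (2 * M)
taps_single, _ = signal.kaiserord(atten_db, (f_stop - B) / (fs / 2))

# Stage 1 (runs at fs): its stop band only has to prevent aliasing into the
# final band of interest, so its transition band is very wide.
fs1 = fs / M1
taps_lpf1, _ = signal.kaiserord(atten_db, (fs1 - f_stop - B) / (fs / 2))

# Stage 2 (runs at the reduced rate fs1): a narrow transition band again,
# but relative to the much lower rate it stays short.
taps_lpf2, _ = signal.kaiserord(atten_db, (fs1 / (2 * M2) - B) / (fs1 / 2))

print(f"single stage: {taps_single} taps")
print(f"two stage   : {taps_lpf1} + {taps_lpf2} = {taps_lpf1 + taps_lpf2} taps")
```

With these illustrative numbers the single-stage estimate runs to several thousand taps, while the two-stage cascade totals a few hundred, and the gap widens as \\(M\\) grows.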
## References: {#references}