next up previous contents
Next: The Wiener-like filtering in Up: Noise reduction from the Previous: Noise reduction from the

The convolution from the continuous wavelet transform

Here we examine how a convolution can be computed with the continuous wavelet transform, in order to obtain a framework for linear smoothing. Consider the convolution product of two functions:
$\displaystyle h(x)=\int_{-\infty}^{+\infty} f(u)g(x-u)du$     (14.68)
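In discrete form this convolution product becomes a sum; a minimal NumPy sketch (the signals and the smoothing mask are arbitrary illustrative arrays):

```python
import numpy as np

# Discrete analogue of h(x) = ∫ f(u) g(x - u) du:
#   h[n] = sum_m f[m] * g[n - m]
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, 0.5])  # e.g. a two-point smoothing mask

h = np.convolve(f, g)  # full linear convolution

# The same result written as the explicit sum:
h_manual = np.zeros(len(f) + len(g) - 1)
for n in range(len(h_manual)):
    for m in range(len(f)):
        if 0 <= n - m < len(g):
            h_manual[n] += f[m] * g[n - m]
```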

We introduce two real wavelet functions $\psi(x)$ and $\chi(x)$ such that:
$\displaystyle C=\int_0^{+\infty} \frac{\hat{\psi}^*(\nu)\hat{\chi}(\nu)}{\nu}d\nu$     (14.69)
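As a concrete check that this constant is finite, one can evaluate it numerically for a particular choice of wavelet; here a Mexican-hat-style spectrum $\hat{\psi}(\nu)=\hat{\chi}(\nu)=\nu^2 e^{-\nu^2/2}$ (an illustrative choice, not fixed by the text), for which $C=\int_0^{+\infty}\nu^3 e^{-\nu^2}d\nu=1/2$:

```python
import numpy as np

# Illustrative real wavelet spectrum (Mexican-hat-like, up to normalisation)
def psi_hat(nu):
    return nu**2 * np.exp(-nu**2 / 2.0)

# C = ∫_0^∞ psi_hat(nu) * chi_hat(nu) / nu dnu, here with chi = psi.
# Integrate on a logarithmic grid, since dnu / nu = d(log nu).
log_nu = np.linspace(np.log(1e-4), np.log(50.0), 20000)
nu = np.exp(log_nu)
ds = log_nu[1] - log_nu[0]
C = np.sum(psi_hat(nu) ** 2) * ds  # integrand psi_hat^2 / nu, times dnu
```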

is defined. $W_g(a,b)$ denotes the wavelet transform of $g(x)$ with the wavelet function $\psi(x)$:
$\displaystyle W_g(a,b)=\frac{1}{\sqrt a}\int_{-\infty}^{+\infty}g(x)\psi^*(\frac{x-b}{a})dx$     (14.70)
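A direct discretisation of this transform can be sketched as follows (the Mexican-hat wavelet, the test signal and the grid are illustrative choices):

```python
import numpy as np

def mexican_hat(x):
    # Real wavelet psi(x) = (1 - x^2) exp(-x^2 / 2) (unnormalised)
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

def cwt(g, scales, wavelet, dx=1.0):
    """W_g(a,b) = (1/sqrt(a)) * sum_x g(x) psi((x - b)/a) dx, on a grid."""
    x = np.arange(len(g)) * dx
    W = np.zeros((len(scales), len(g)))
    for i, a in enumerate(scales):
        for j, b in enumerate(x):
            W[i, j] = np.sum(g * wavelet((x - b) / a)) * dx / np.sqrt(a)
    return W

# Example: the transform of a Gaussian bump at a few scales
x = np.arange(128)
g = np.exp(-((x - 64.0) ** 2) / (2.0 * 8.0**2))
W = cwt(g, [2.0, 4.0, 8.0], mexican_hat)
```

The double loop makes the definition explicit; in practice each scale would be computed as a single correlation (e.g. via the FFT).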

We restore g(x) with the wavelet function $\chi(x)$:
$\displaystyle g(x)=\frac{1}{C}\int_0^{+\infty}\int_{-\infty}^{+\infty}\frac{1}{\sqrt a} W_g(a,b)\chi(\frac{x-b}{a})\frac{da\,db}{a^2}$     (14.71)

The convolution product can be written as:
$\displaystyle h(x)=\frac{1}{C}\int_0^{+\infty}\frac{da}{a^{\frac{5}{2}}}\int_{-\infty}^{+\infty} W_g(a,b)\,db\int_{-\infty}^{+\infty}f(u)\chi(\frac{x-u-b}{a})\,du$     (14.72)

Let us denote $\tilde{\chi}(x)=\chi(-x)$. The wavelet transform $\tilde W_f(a,b)$ of f(x) with the wavelet $\tilde{\chi}(x)$ is:
$\displaystyle \tilde W_f(a,b)=\frac{1}{\sqrt a}\int_{-\infty}^{+\infty}f(x)\tilde{\chi}(\frac{x-b}{a})\,dx$     (14.73)

That leads to:
$\displaystyle h(x)=\frac{1}{C}\int_0^{+\infty}\frac{da}{a^2}\int_{-\infty}^{+\infty} \tilde W_f(a,x-b)W_g(a,b)\,db$     (14.74)

Then we get the final result:
$\displaystyle h(x)=\frac{1}{C}\int_0^{+\infty}\tilde W_f(a,x)\otimes W_g(a,x)\,\frac{da}{a^2}$     (14.75)
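The whole scheme can be sketched numerically by working scale by scale in the Fourier domain (a sketch, assuming $\psi=\chi$ with an illustrative Mexican-hat-style spectrum; the scale grid is ad hoc). Since $\hat{\psi}(0)=0$, the mean of the signals cannot be recovered, so zero-mean signals are used:

```python
import numpy as np

def psi_hat(nu):
    # Illustrative real wavelet spectrum (Mexican-hat-like); psi_hat(0) = 0
    return nu**2 * np.exp(-nu**2 / 2.0)

N = 256
nu = np.abs(np.fft.fftfreq(N))  # |frequency| of each FFT bin

# Two zero-mean test signals
rng = np.random.default_rng(0)
f = rng.standard_normal(N)
f -= f.mean()
x = np.arange(N)
g = np.exp(-((x - 128.0) ** 2) / (2.0 * 5.0**2))
g -= g.mean()

# Admissibility constant C (eq. 14.69 with chi = psi), on a log grid
u = np.linspace(np.log(1e-4), np.log(50.0), 20000)
C = np.sum(psi_hat(np.exp(u)) ** 2) * (u[1] - u[0])

# Log-spaced scales; da/a^2 combines with the factor a contributed by
# the two sqrt(a)'s of the transforms into d(log a)
log_a = np.linspace(np.log(1e-2), np.log(1e3), 400)
dlog_a = log_a[1] - log_a[0]

f_hat, g_hat = np.fft.fft(f), np.fft.fft(g)
h_hat = np.zeros(N, dtype=complex)
for a in np.exp(log_a):
    # Per scale: the convolution of W~_f(a,.) with W_g(a,.) has Fourier
    # transform a * f_hat * g_hat * psi_hat(a nu)^2
    h_hat += f_hat * g_hat * psi_hat(a * nu) ** 2 * dlog_a
h_hat /= C

h_wavelet = np.fft.ifft(h_hat).real
h_direct = np.fft.ifft(f_hat * g_hat).real  # direct (circular) convolution
```

With a scale range wide enough to cover every sampled frequency, the scale-integrated result reproduces the direct convolution.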

In order to compute a convolution with the continuous wavelet transform, we proceed as follows:

- compute the wavelet transform $\tilde W_f(a,b)$ of $f(x)$ with the wavelet $\tilde{\chi}(x)$;
- compute the wavelet transform $W_g(a,b)$ of $g(x)$ with the wavelet $\psi(x)$;
- convolve the two transforms scale by scale;
- integrate the result over the scales $a$ with the weight $da/a^2$, and divide by $C$ (equation 14.75).

The wavelet transform thus permits any linear filtering. Its efficiency depends on the number of significant terms in the wavelet transform of g(x) for a given signal f(x). If the filter has only a small number of significant coefficients at each scale, the complexity of the algorithm is proportional to N. For a classical convolution the complexity is also proportional to N, but the number of operations is additionally proportional to the length of the convolution mask. The main advantage of the present technique is the possibility of applying a filter with long-range terms without computing the convolution over a large window. If the convolution is carried out with the FFT algorithm, the complexity is of order $N\log_2N$; this is slower than the wavelet-transform approach whenever the filter energy is concentrated in very few coefficients.
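These complexity claims can be made concrete with rough operation counts (the mask length, the FFT constant and the number of significant coefficients per scale are assumptions for the sake of the example, not measurements):

```python
import math

N = 2**20                   # signal length
mask = 1024                 # length of the convolution mask (long-range filter)
n_scales = int(math.log2(N))
significant_per_scale = 2   # assumed: very few significant wavelet coefficients

direct_ops = N * mask                               # classical convolution
fft_ops = 3 * N * math.log2(N)                      # ~three FFTs of length N
wavelet_ops = N * significant_per_scale * n_scales  # per sample, per scale
```

Under these assumptions the wavelet route needs fewer operations than the FFT, and both beat the direct convolution by orders of magnitude.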

Petra Nass