## A Repertoire of DSP transforms and esp. Hilbert transforms and their significance — part 3

The Fourier transform of $(1/t)$ is essentially a step function. We shall see that convolution with $(1/t)$ leads to an important integral transform. Specifically, the Fourier transform of $-1/(\pi t)$ is $i\,\mathrm{sgn}(\omega)$. This pair and its dual are shown in Figure 1. Since $(1/t)$ has a pole at the origin, its Fourier transform integral diverges there; $(1/t)$ has a transform only in a limiting sense. (It can be evaluated using contour integration.) Likewise, the Fourier integral of $\mathrm{sgn}(x)$, in either the forward or the inverse direction, does not exist in the conventional sense because $\mathrm{sgn}(x)$ is not absolutely integrable. The transform of $\mathrm{sgn}(x)$ can be defined by considering a sequence of transformable functions that approaches $\mathrm{sgn}(x)$ in the limit. We do not let these mathematical inconveniences deter us any more than they did in our previous discussion of $\delta$ functions and sinusoids, for the pair $(1/t) \leftrightarrow \mathrm{sgn}$ has some intriguing properties.

Since $(1/t)$ is real and odd, its Fourier transform is odd and purely imaginary. More interestingly, its magnitude spectrum is obviously constant. (Could there be a delta function lurking nearby?) The interest in this transform pair comes from convolving a function with $-1/(\pi t)$ in the time domain. This convolution integral, called the Hilbert transform, is as follows:

$\mathcal{H}[f(t)]=-(1/\pi)\int_{-\infty}^{\infty}\frac{f(t')\,dt'}{t-t'}$ Equation I

This transform arises in many applications in Digital Communications and Mathematical Physics. The Cauchy principal value, which is a kind of average familiar to those knowledgeable in contour integration, is to be used over singularities of the integrand. This mathematical inconvenience is avoided in the frequency domain where we can easily visualize the effect of the Hilbert transform.

Multiplication by $i\,\mathrm{sgn}(\omega)$ in the frequency domain produces a $90^{\circ}$ phase shift at all frequencies. The phase of $F(\omega)$ is advanced a constant $90^{\circ}$ for all positive frequencies and retarded a constant $90^{\circ}$ for all negative frequencies. The magnitude spectrum of $F(\omega)$ is unchanged since the magnitude of $i\,\mathrm{sgn}(\omega)$ is unity. The Hilbert transform operation in the frequency domain is summarized in Figure 2: the Fourier transform of the given function has its phase shifted $90^{\circ}$, in opposite directions for positive and negative frequencies; then the inverse Fourier transform produces the time-domain result.
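This frequency-domain recipe translates directly into a discrete sketch (an illustration of our own, using NumPy's FFT; the function name is ours, not a library routine): multiply the spectrum by $i\,\mathrm{sgn}(\omega)$ and invert. Under this sign convention a cosine maps to a negative sine.

```python
import numpy as np

def hilbert_via_fft(x):
    """Hilbert transform by the i*sgn(omega) multiplier: +90 deg phase
    shift for positive frequencies, -90 deg for negative frequencies,
    magnitude spectrum untouched."""
    X = np.fft.fft(x)
    w = np.fft.fftfreq(len(x))        # bin frequencies; only their sign matters
    return np.fft.ifft(1j * np.sign(w) * X).real

# Under this convention, cos(w0 t) transforms into -sin(w0 t).
t = np.arange(256)
x = np.cos(2 * np.pi * 8 * t / 256)
y = hilbert_via_fft(x)                # approximately -sin(2*pi*8*t/256)
```

Note that the phase is shifted while every magnitude is preserved, exactly as the $i\,\mathrm{sgn}(\omega)$ description above demands.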

The exact $90^{\circ}$ phase shift [including the $\mathrm{sgn}(\omega)$ dependence] occurs in several different instances in wave propagation: the reflection of electromagnetic waves from metals at a certain angle of incidence involves an exact $90^{\circ}$ phase shift independent of frequency; special teleseismic ray paths produce the same type of phase shift; and the far-field radiation from point sources of all types of waves includes a $90^{\circ}$ phase shift at all frequencies.

Nalin Pithwa

## The Wiener-Khintchine Theorem

An important method of treating signals that do not decay at infinity, due independently to Wiener in 1930 and Khintchine in 1934, starts from the form of the convolution theorem given in the previous blog (equation 15, reproduced here):

$\mathcal{F} \int_{-\infty}^{\infty} f(\tau)f(t+\tau)\,d\tau = |F(\omega)|^{2}$

The inverse Fourier transform of this equation is

$\int_{-\infty}^{\infty}f(\tau)f(t+\tau)\,d\tau=(1/2\pi)\int_{-\infty}^{\infty}|F(\omega)|^{2}e^{i\omega t}\,d\omega$ equation 16

For many signals of interest, such as sinusoids, step functions, and random noise of fixed statistical properties, the autocorrelation integral on the left does not converge. But, if we define a truncated version of $f(t)$ by

$f_{T}(t)= f(t)$ if $-T < t < T$, and $f_{T}(t)=0$ otherwise,

then we can write

$\int_{-\infty}^{\infty}f_{T}(\tau)f_{T}(t+\tau)\,d\tau=(1/2\pi)\int_{-\infty}^{\infty}|F_{T}(\omega)|^{2}e^{i\omega t}\,d\omega$ equation 17

where $F_{T}(\omega)$ is the Fourier transform of $f_{T}(t)$. Dividing by the time interval 2T and taking the limit, equation 17 becomes

$\lim_{T \rightarrow \infty} \frac{1}{2T} \int_{-T}^{T}f(\tau)f(t+\tau)\,d\tau=(1/2\pi)\int_{-\infty}^{\infty}\lim_{T \rightarrow \infty} \frac{|F_{T}(\omega)|^{2}}{2T}\,e^{i\omega t}\,d\omega$

Wiener (1949) was able to show that, under the condition that the limit on the left exists, the limit inside the right-hand integral converges to a function:

$P(\omega)=\lim_{T \rightarrow \infty} \frac{|F_{T}(\omega)|^{2}}{2T}$ equation 18

which we call the power spectral density (or power spectrum) of $f$. Using these revised definitions of the autocorrelation,

$\phi (t)=\lim_{T \rightarrow \infty} (1/2T) \int_{-T}^{T} f(\tau)f(\tau +t)d\tau$ equation 19

and power spectrum, our result now reads

$\phi(t)=(1/2\pi)\int_{-\infty}^{\infty}P(\omega)e^{i \omega t}\,d\omega$ equation 20

which is called the Wiener-Khintchine theorem: the autocorrelation is the inverse Fourier transform of the power spectrum. This is a very significant result, not a simple restatement of our starting point, equation 16. In equation 16, both $f(t)$ and $F(\omega)$ must be square integrable; that is, they must contain finite energy over all time and frequency. In equation 20, $f(t)$ need only be sufficiently well behaved that

$\lim_{T \rightarrow \infty} \frac{1}{2T}\int_{-T}^{T} |f(t)|^{2}\,dt < \infty$

That is to say, $f(t)$ is only required to have finite power (signal squared per unit time) to have a power spectrum, but $f(t)$ must have finite energy (that is, be square integrable or, with additional restrictions, be absolutely integrable) to possess a Fourier transform. Two classes of functions of interest, periodic functions and some types of random noise, satisfy the first condition but not the second.
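The theorem is easy to check numerically for such a finite-power signal. In the sketch below (our own, in NumPy; a discrete, circular stand-in for equations 19 and 20), the time-averaged autocorrelation of a sinusoid is compared against the inverse FFT of its periodogram $|F|^2/N$, the discrete counterpart of $|F_{T}(\omega)|^2/2T$.

```python
import numpy as np

N = 1024
t = np.arange(N)
f = np.cos(2 * np.pi * 16 * t / N)        # 16 whole cycles: no leakage

# Autocorrelation as a time average (discrete analogue of equation 19,
# with circular wrap-around standing in for the infinite record).
phi_direct = np.array([np.mean(f * np.roll(f, -k)) for k in range(N)])

# Power spectrum |F|^2 / N (discrete analogue of |F_T|^2 / 2T), then its
# inverse FFT -- the Wiener-Khintchine route of equation 20.
F = np.fft.fft(f)
phi_wk = np.fft.ifft(np.abs(F) ** 2).real / N

# Both equal (1/2) cos(2*pi*16*t/N) for a unit-amplitude sinusoid.
```

The sinusoid has infinite energy, so equation 16 does not apply to it, yet its power spectrum and autocorrelation are perfectly well defined, which is exactly the point of the theorem.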

We will exploit the Wiener-Khintchine theorem further when we discuss Power Spectral Estimation. Here, we have introduced it to show how Fourier integral theory can be generalized to include functions with infinite energy but finite power. Having presented this vignette of the theory of generalized Fourier integrals, we now feel free to abandon further convergence questions in our heuristic discussion of Fourier transform pairs.

More later…

Nalin Pithwa

## The Continuous Fourier Integral Transform Part 2: Properties of the Fourier Integral Transform

Symmetry properties and a few simple theorems play a central role in our thinking about the DFT. These same properties carry over to the Fourier integral transform as the DFT sums go over into Fourier integrals. Since these properties are generally quite easy to prove (and, hopefully, you are already familiar with them), we will simply list them in this section. (The proofs are exercises for you :-))

The symmetry properties are, of course, crucial for understanding and manipulating Fourier transforms. They can be summarized by

$\mathcal {F}f^{*}(t)=F^{*}(-\omega)$  Equation 8a

and $\mathcal{F}f(-t)=F(-\omega)$ Equation 8b

Applying these basic symmetry properties leads to

$\mathcal{F}f^{*}(-t)=F^{*}(\omega)$

and for two cases of special interest we can show that

if $f(t)$ is real, then $F(\omega)$ is Hermitian, $F^{*}(-\omega)=F(\omega)$

and if $f(t)$ is imaginary, then $F(\omega)$ is anti-Hermitian, $F^{*}(-\omega)=-F(\omega)$
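These two special cases are easy to verify with the DFT standing in for the integral (a quick NumPy check of our own; bin $N-k$ plays the role of $-\omega$):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)               # a real signal
X = np.fft.fft(x)

# Real signal -> Hermitian spectrum: F(-w) = conj(F(w)), i.e. X[N-k] = conj(X[k]).
hermitian = np.allclose(X[1:][::-1], np.conj(X[1:]))

# Imaginary signal -> anti-Hermitian spectrum: F(-w) = -conj(F(w)).
Y = np.fft.fft(1j * x)
anti_hermitian = np.allclose(Y[1:][::-1], -np.conj(Y[1:]))
```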

The similarity theorem results from a simple change of variable in the Fourier integral:

$\mathcal{F}f(at)=(1/|a|)F(\omega/a)$ Equation 9

and likewise for the familiar shift theorem

$\mathcal{F}f(t-\tau)=e^{-i\tau \omega}F(\omega)$ Equation 10
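A discrete check of the shift theorem (our own sketch; in the DFT a delay by $m$ samples is circular, and the factor $e^{-i\tau\omega}$ becomes the phase ramp $e^{-i 2\pi k m/N}$):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal(128)
N, m = len(f), 3

# Transform of the delayed signal vs. phase ramp times the original transform.
lhs = np.fft.fft(np.roll(f, m))
rhs = np.exp(-2j * np.pi * np.arange(N) * m / N) * np.fft.fft(f)
# lhs and rhs agree to machine precision
```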

The power theorem states that

$2\pi\int_{-\infty}^{\infty}f(t)g^{*}(t)dt=\int_{-\infty}^{\infty}F(\omega)G^{*}(\omega)d\omega$ Equation 11

can be specialized to Rayleigh’s theorem by setting $g=f$:

$2\pi\int_{-\infty}^{\infty}|f(t)|^{2}\,dt=\int_{-\infty}^{\infty}|F(\omega)|^{2}\,d\omega$ Equation 12

In more mathematical works, Rayleigh’s theorem is sometimes called Plancherel’s theorem.
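In the DFT the factor of $2\pi$ becomes a factor of $N$, which makes Rayleigh's theorem trivial to check numerically (a sketch of our own):

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(256)
F = np.fft.fft(f)

# Energy computed in time vs. in frequency; the DFT's 1/N plays the
# role that 1/(2*pi) plays in the continuous theorem.
energy_time = np.sum(np.abs(f) ** 2)
energy_freq = np.sum(np.abs(F) ** 2) / len(f)
```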

Of course, the important and powerful convolution theorem — meaning linear convolution — is valid in continuum theory  also:

$\mathcal{F} \int_{-\infty}^{\infty}f(\tau)g(t-\tau)d\tau=F(\omega)G(\omega)$ Equation 13
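Because the theorem refers to linear (not circular) convolution, a discrete check needs zero-padding to length at least len(f)+len(g)-1 (a NumPy sketch of our own):

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, -1.0, 0.5])
n = len(f) + len(g) - 1          # full length of the linear convolution

# Zero-padding to length n makes the DFT's circular convolution linear,
# so multiplying the spectra realizes Equation 13 discretely.
conv_fft = np.fft.ifft(np.fft.fft(f, n) * np.fft.fft(g, n)).real
conv_direct = np.convolve(f, g)  # direct linear convolution for comparison
```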

The many variations of the convolution theorem arising from the symmetry properties of the Fourier transform apply as well. For example, the correlation theorem, for real $f$, reads as follows:

$\mathcal{F}\int_{-\infty}^{\infty}f(\tau)g(t+\tau)\,d\tau=F^{*}(\omega)G(\omega)$ Equation 14

The function $F^{*}G$ is called the cross-power spectrum. When we set $g=f$, this equation states the important result that the Fourier transform of the autocorrelation of a function is its power spectrum:

$\mathcal{F}\int_{-\infty}^{\infty}f(\tau)f(t+\tau)d\tau=|F(\omega)|^{2}$ Equation 15

The formal similarity between these continuous-theory properties and those of the DFT makes them easy to remember and to visualize. But there are essential differences between the two. The DFT, with its finite sum, has no convergence questions. The Fourier integral transform, on the other hand, has demanded the attention of some of our greatest mathematicians to elucidate its convergence properties. As we know, the absolute-integrability condition is only a start; it can be relaxed — quite easily at the heuristic level — to include the sine wave/$\delta$ function pair. The sine wave’s ill behaviour is characteristic of a wide class of functions of interest in DSP that do not decay sufficiently fast at infinity to possess a Fourier transform in the normal sense.

More later…

Nalin Pithwa

## The Continuous Fourier Integral Transform Part I

Everything should be made as simple as possible, but not simpler — Albert Einstein.

I wish to provide a fast track to the high priesthood of Digital Signal Processing. The number of professionals requiring knowledge of DSP is vast because the amazing electronics revolution (after the fabrication of the first IC chip by Fairchild Semiconductor, USA) has made convenient collection and processing of digital data available to many disciplines. (Sounds like Big Data/Analytics!!!) The fields of interest are impossible to enumerate; the applications of DSP are limited only by human imagination. I heard at a TI DSP conference that a dentist in the UK had used DSP to reduce the noise of his drill (his reason being that the cutting of teeth is not so painful, but the noise of the tool scares patients). Applications of DSP range widely from astrophysics, meteorology, geophysics, and computer science to VLSI design, control systems, communications, radar, speech analysis/synthesis, medical technology, and, of course, finance/economics.

I wish to minimize the mathematical paraphernalia in this series. There are many elementary texts on DSP and I do not claim any novelty in the presentation. But this series might help both the enthusiast and the expert. Toward the goal of simplifying the math involved, I assume that signals are deterministic, thereby avoiding a heavy reliance on the theory of random variables and stochastic processes. The reader need only have a familiarity with differential and integral calculus, complex numbers, and simple matrix algebra. For the purpose of this article, I assume you are aware of the basic concepts of signals and systems, sampled data and the Z-transform, the concept/computation of frequency response, and the DFT. I will talk a bit about the DFT though.

Digital signal processing has become extremely important in recent years because of digital electronics. In treating the processing of analog signals that require any reasonable amount of computation, it is usually beneficial to digitize the signals and to use digital computers for their subsequent processing. The advantage results from both the extremely high computation speeds of modern digital computers and the flexibility afforded by computer programs that can be stored in software or firmware or hardwired into the design. Low-cost VLSI chips have made this approach beneficial even for devices limited to special-purpose computing applications or restricted by throwaway economics. This computational asset has been a major impetus for thinking of signals as discrete-time sequences. An additional advantage of representing signals in discrete time has been their pleasant mathematical simplicity; continuous-time theory requires far more advanced mathematics than the algebra of polynomials, some simple trigonometric function theory, and the behaviour of the geometric series that we have employed. The digital revolution seduces us into viewing every situation in its terms; still, we are haunted by the concept of underlying continuous relationships. We need to know the effect of digitizing continuous-time signals, and the essential difference between these digitized signals and those that are inherently digital from the start. Are continuous-time and discrete-time versions simply alternate models of the real physical world from which we are free to choose? Some say yes; yet there are essential differences.

For example, meteorological data, such as the barometric pressure at a given location, would certainly seem to be a continuous signal that conceptually extends infinitely far from the past into the future. Physical considerations force us to conclude that its spectrum is bandlimited. The pressure wave from a nuclear blast, on the other hand, has a definite beginning and extends with decaying amplitude infinitely long thereafter. I will show you soon that such a signal must have a frequency spectrum that is not bandlimited, and therefore this signal cannot be digitized without aliasing. Still other signals seem inherently digital: the price of a stock is quoted only at discrete trading times. Furthermore, no business lasts forever; its stock trading has a definite beginning and ending. The stock quote’s spectrum must be repetitive as well as inherently broadbanded.

The spectra of the signals in these three examples are quite different. Clearly, to apply digital signal processing in an intelligent manner, we need to know more about continuous-time functions. We need to develop a continuous-time theory of signals, and then use it to gain insight into its relationship with DSP.

The Fourier Integral Transform Developed from the DFT

Our development of mathematical machinery will follow a natural course motivated only by considering LTI digital systems and operators. The concept of the spectrum arises because sinusoids are eigenfunctions of LTI systems. The convenience of sampling the spectrum at equally spaced intervals gives rise to the DFT. The DFT, with its equally spaced points in both time and frequency, places us in a position to easily take the limit $N \rightarrow \infty$ to pass over to continuous  time and frequency. We start with the inverse transform

$f_{l}=(1/N)\sum_{\nu=-N/2}^{(N/2)-1}F_{\nu}e^{i2\pi\nu l/N}$ equation I

and recognize that this sum may be evaluated for any $t$; it may be considered a continuous function of time. [Just like the similar sum for the spectral response $H(\omega)$ of an LTI operator, equation I can be evaluated at any time.] The frequency interval used in this summation is

$\Delta\omega = \frac{2\pi}{N}$

Therefore, the frequency (in radians per unit  time) is

$\omega=2\pi\nu/N$ equation II

and as N becomes infinite,

$1/N=\Delta\omega/2\pi\rightarrow 0$

giving for the limit of  the sum in equation I

$f(t)=\lim_{N\rightarrow\infty} \sum_{\nu=-N/2}^{(N/2)-1}F_{\nu}e^{i\omega t}\,\Delta\omega/(2\pi)$

as $\Delta \omega \rightarrow 0$

In the limit $N \rightarrow \infty$, this  sum becomes an integration — a continuously infinite number of terms separated by the infinitesimal frequency interval $\Delta \omega \rightarrow d\omega$

$f(t)=(1/(2\pi)) \int_{-\infty}^{\infty} F(\omega)e^{it\omega}d\omega$ equation 3

Now, both $F(\omega)$ and $f(t)$ are continuous functions. To invert this  equation, the orthogonality relation,

$(1/N)\sum_{\nu=-N/2}^{(N/2)-1}e^{i2\pi\nu\frac{t'-t}{N}}=\delta_{t't}$

must likewise be converted to  continuous time and frequency. The same limiting process, $N\rightarrow \infty$ and

$(1/N)=(\Delta\omega)/(2\pi) \rightarrow 0$ gives the continuous version:

$(1/(2\pi) )\int_{-\infty}^{\infty} e ^{i \omega (t'-t)}d\omega = \delta (t'-t)$ equation 4

Before continuing, we need to elaborate a little on this result: the Kronecker $\delta$ has gone over in the limit into a continuous function called the Dirac $\delta$ function. Strictly speaking, it is not a function in  the mathematical sense at all; nonetheless, it has been put on a firm mathematical basis by Lighthill. For our purposes, the $\delta$ function can be thought of as the limiting form of a narrow symmetrical pulse located at $t'$ whose width goes to zero and height goes to infinity such that its area is constant and normalized to  unity:

$\int_{-\infty}^{\infty} \delta (t - t')dt = 1$.

Figure 1 shows an example of this limiting concept along with the development of the sampling property of the $\delta$ function:

$\int_{a}^{b} f(t)\delta (t - t')\,dt = f(t')$, for $a < t' < b$,

where $f(t)$ is any continuous function of time and $f(t')$ is its sampled value at $t'$. This sampling property of the $\delta$ function provides an important connection between continuous-time theory and discrete-time theory. In addition, the orthogonality of complex sinusoids expressed by Equation 4 clearly possesses a companion relation, obtained simply by relabeling variables as follows:
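The sampling property can be seen numerically by using a narrow, normalized Gaussian as a stand-in for $\delta(t - t')$ (an illustration of our own; the pulse widths and grid are arbitrary choices):

```python
import numpy as np

def sample_with_pulse(f, t_prime, width, t):
    """Integrate f(t) against a unit-area Gaussian of the given width
    centered at t_prime; as width -> 0 this tends to f(t_prime)."""
    pulse = np.exp(-(t - t_prime) ** 2 / (2 * width ** 2))
    pulse /= width * np.sqrt(2 * np.pi)          # normalize the area to 1
    dt = t[1] - t[0]
    return np.sum(f(t) * pulse) * dt             # Riemann-sum integration

t = np.linspace(-10.0, 10.0, 200001)
vals = [sample_with_pulse(np.cos, 1.5, w, t) for w in (1.0, 0.1, 0.01)]
# vals approaches cos(1.5) as the pulse narrows
```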

$(1/(2 \pi)) \int_{-\infty}^{\infty} e^{i(\omega - \omega')t}\,dt=\delta (\omega - \omega')$ Equation 5

Now, we can return to find the inverse of Equation 3 by using Equation 5. Multiplying both sides of Equation 3 by $e^{-i \omega' t}$ and integrating over time gives

$\int_{-\infty}^{\infty} e^{-i \omega' t}f(t)\,dt = (1/(2\pi)) \int_{-\infty}^{\infty}e^{-i\omega' t}\int_{-\infty}^{\infty}F(\omega)e^{i\omega t}\,d\omega\, dt = (1/(2\pi)) \int_{-\infty}^{\infty}F(\omega)\int_{-\infty}^{\infty}e^{i(\omega - \omega')t}\,dt\, d\omega$

The last integral on the right side yields the $\delta$ function from Equation 5. Thus, we get

$\int_{-\infty}^{\infty} e^{-i\omega' t}f(t)\,dt= \int_{-\infty}^{\infty}F(\omega)\delta (\omega - \omega')\,d\omega =F(\omega')$

which is the desired relation giving $F(\omega)$ in terms of $f(t)$. For reference, we rewrite this result and Equation 3 as a transform pair:

$F(\omega)=\int_{-\infty}^{\infty}f(t)e^{-i\omega t}\,dt$ Equation 6A

$f(t)=(1/(2\pi))\int_{-\infty}^{\infty}F(\omega)e^{i\omega t}\,d\omega$ Equation 6B

This pair of equations affords a continuous-time and continuous-frequency Fourier transformation, collectively called simply the Fourier transform. Sometimes one is more specific and calls equation 6A the forward Fourier transform of $f(t)$; equation 6B is then called the inverse Fourier transform of $F(\omega)$. Note the logical resemblance of these equations to the DFT. Again, as in the DFT case, there is an obvious duality of the Fourier transform that results from an interchange of time and frequency by a simple relabeling, or redefinition, of variables. One consequence of this duality is the lack of a standard definition for the Fourier transform: sometimes the forward and inverse versions of the transform in equations 6 are interchanged. Different placements of the factor of $2\pi$ provide further possibilities for definitions of the Fourier transform.
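As a concrete check of the pair, and of where the factor of $2\pi$ sits in this convention, take $f(t) = e^{-t^{2}/2}$, whose transform under Equation 6A is $F(\omega) = \sqrt{2\pi}\, e^{-\omega^{2}/2}$. A brute-force Riemann sum reproduces it (a sketch of our own; the grid and limits are arbitrary choices):

```python
import numpy as np

t = np.linspace(-20.0, 20.0, 40001)   # wide enough that the Gaussian has decayed
dt = t[1] - t[0]
f = np.exp(-t ** 2 / 2)

def forward_transform(omega):
    """Equation 6A as a Riemann sum: F(w) = integral of f(t) e^{-i w t} dt."""
    return np.sum(f * np.exp(-1j * omega * t)) * dt

F1 = forward_transform(1.0)
# Analytically F(1) = sqrt(2*pi) * exp(-1/2), purely real
```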

As we shall see, the similarities between the Fourier transform and the DFT will allow us to exploit the computational advantages of  the DFT. But, the differences between the Fourier transform and the DFT, though perhaps few in number, are profound in character. We have approached the Fourier transform from a desire to represent both time and frequency as continuous variables. The resulting transformation equations contain integrals over all values of these variables from minus infinity to plus infinity. This property is a double-edged sword. On  the one hand, it does let us represent signals that the DFT does not allow, such as a one-sided transient that decays infinitely far into the future. But, on the other hand, to exactly compute the frequency response of such a signal using a numerical scheme, we would need a continuously infinite number of data points. We will see how to deal with, but not completely solve, this problem later. Another concern, which is nonexistent for the DFT, arises because of the Fourier transform’s integrals; we need to know something of their convergence properties.

The convergence of Fourier integrals is a fascinating subject of Fourier theory, explored by famous mathematicians such as Plancherel, Titchmarsh, and Wiener. Various conditions have been found that prove the convergence of Fourier integrals for functions displaying rather strange behaviour compared to our view of naturally occurring signals. Because our interest is limited to realistic signals and systems, we can afford to start our discussion with an overrestrictive (sufficient but not necessary) convergence condition: $f(t)$ is absolutely integrable over the whole real line, that is,

$\int_{-\infty}^{\infty}|f(t)|dt<\infty$

Under these conditions, the repeated integral

$\hat{f}(t)=(1/(2\pi))\int_{-\infty}^{\infty}e^{i\omega t}\int_{-\infty}^{\infty}f(t')e^{-i\omega t'}dt'd\omega$ equation 7

called the Fourier integral representation of f, converges to the average value of $f(t)$ at a discontinuity. That is,

$\hat{f}(t_{0})=(1/2)(f(t_{0}^{+})+f(t_{0}^{-}))$

when there is a discontinuity at $t_{0}$.

Some functions, such as step functions, impulses, and sinusoids, never really occur in nature; nonetheless, they are very convenient for thinking about signals and systems. The absolutely integrable condition immediately disqualifies many of these favourite functions; clearly, any periodic function, including the sinusoids themselves, is excluded from the functions possessing Fourier transform pairs if we accept this condition. However, a sufficiently rich class of functions possessing Fourier integral transforms will result if we allow the Dirac $\delta$ function to be included. Lighthill has shown with mathematical rigour how to include $\delta$ functions in Fourier integral theory. We simply note that equation 5 is, indeed, the Fourier transform of a complex sinusoid. This equation shows that the spectrum of $e^{i\omega' t}$ is eminently reasonable; it contains exactly one pure frequency, at $\omega'$.

Furthermore, after our discussion in the next section/blog on the convolution  theorem, we will show  how Wiener was able to include signals, such as periodic functions and random noise, in frequency analysis. Even though these signals do not possess a Fourier integral transform, they may have a power spectrum.

More later…

Nalin Pithwa