## A Repertoire of DSP transforms and their importance with an interesting lesson from astronomy — part I

There are tremendous advantages to thinking about signals and systems in the frequency domain. Generally speaking, deeper understanding can be gained when a subject is viewed from more than one angle. To better understand how the frequency domain relates to the time domain and the implications (and limitations) of this relationship for digital signal processing (DSP), we need a repertoire of transforms available at our fingertips. This repertoire, together with the properties of the Fourier transform discussed in the preceding blogs, will be indispensable for understanding and applying Fourier transforms in practical applications.

Most of the transforms in this repertoire are easy to compute. Therefore, we will concentrate on their implications for our subject, leaving their derivation, for the most part, to problems for the reader/student/enthusiast. Furthermore, to simplify the repertoire, the time domain signal will be either real-symmetric or real-antisymmetric, so the transform will be either real-symmetric or odd-imaginary, simplifying the drawings and discussions.

The delta function is perhaps the easiest function to integrate. Substitution of $f(t)=\delta (t)$ in Equation I below (mentioned in a previous blog and reproduced here)

$F(\omega)=\int_{-\infty}^{\infty}f(t)e^{-i \omega t}dt$ Equation I

immediately tells us that the Fourier transform of a spike at the origin is a constant of unit height as shown in Fig 1a. Physical reasoning tells us that the $\delta$ function could not exist in any real analog system; its spectrum substantiates this by demanding an infinite bandwidth. Nor can this spike ever exist in a digital system — it is infinitely large. Some of the Fourier transform properties of the previous DSP blog article can be applied to the $\delta$ function. Others defy ordinary arithmetic — try Rayleigh’s theorem. The $\delta$ function is like the girl with the little curl on her forehead; when she was good, she was very, very good; when she was bad, she was horrid. Mathematically, the $\delta$ function is indeed somewhat horrid. On the other hand, it is quite good — really indispensable — for relating continuous theory to discrete signals and systems.
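The discrete analogue of this pair is easy to verify directly. Below is a small NumPy sketch (the transform length of 64 is an arbitrary choice): a unit impulse at $n=0$ stands in for the $\delta$ function, and its DFT comes out constant across all frequency bins.

```python
import numpy as np

# A discrete unit impulse at n = 0 stands in for the delta function
# (the transform length is an arbitrary choice for illustration).
N = 64
impulse = np.zeros(N)
impulse[0] = 1.0

# Its DFT is a constant of unit height across every frequency bin,
# the discrete analogue of Fig 1a.
spectrum = np.fft.fft(impulse)
assert np.allclose(spectrum, np.ones(N))
```

Every bin equals 1: the spike at the origin really is spread uniformly over all frequencies.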

The complementary transform is equally easy to compute. Substituting $f(t)=\text{constant}$ in Equation I and using Equation II below:

$\delta(\omega - \omega ')=(1/2\pi)\int_{-\infty}^{\infty}e^{i(\omega -\omega ')t}dt$ Equation II

tells us that the Fourier transform of a constant is a $\delta$ function as shown in Fig 1b. Thus, the Fourier transform concentrates the dc component (the average value) of a signal into the zero frequency contribution of the spectrum. Conversely, from Fig 1a, a signal concentrated at one point in time is spread out over all frequencies. The infinitely sharp time domain signal has an infinitely broad spectrum; the infinitely broad time domain signal has an infinitely sharp spectrum. It turns out that this is a manifestation of a general property of the Fourier transform.

The Gaussian function is one of the few waveforms that possess the same functional form in both domains; the Fourier transform of a Gaussian is a Gaussian. These functions, and their half-widths, located $1/e$ down from the maximum, are shown in Fig 2. Using these half-widths, the spreading property observed for the $\delta$ function can easily be quantified for the Gaussian pair:

$\Delta t \Delta \omega = (1/\alpha)(2\alpha) = 2$

The product of the widths is a constant for all Gaussians. If we imagine the Gaussian time function getting progressively narrower, then its spectrum gets increasingly broader, and vice-versa. This reciprocal time-frequency bandwidth relationship is an inherent property of the Fourier transform and is best put on a firm mathematical basis by defining the widths in a somewhat more complicated fashion than we have done here. With this more complicated definition of width, it can be shown (Bracewell, 1978) that
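This width-product invariance is easy to check numerically. The NumPy sketch below (grid sizes are arbitrary choices) approximates Equation I for the $\alpha = 1$ Gaussian $e^{-t^2}$; its transform should again be a Gaussian, $\sqrt{\pi}\,e^{-\omega^2/4}$, whose $1/e$ half-width is 2 while the time half-width is 1, giving the product 2.

```python
import numpy as np

# Sample f(t) = exp(-t**2) (the alpha = 1 case) and approximate its
# continuous Fourier transform with an FFT; grid sizes are arbitrary.
N = 4096
L = 20.0
dt = 2 * L / N
t = -L + dt * np.arange(N)                 # t in [-L, L)
f = np.exp(-t ** 2)

# dt * FFT (with shifts so t = 0 maps to index 0) approximates Equation I
F = dt * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f))).real
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dt))

# The transform is again a Gaussian: sqrt(pi) * exp(-w**2/4). Its 1/e
# half-width is 2, the time half-width is 1, so the width product is 2.
assert np.allclose(F, np.sqrt(np.pi) * np.exp(-w ** 2 / 4), atol=1e-6)
```

The FFT result matches the analytic Gaussian transform to well within the stated tolerance, since both tails decay to negligible levels inside the chosen grid.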

$(\text{width in } t)(\text{width in } \omega) \geq 1/2$ Equation III

for all Fourier transform pairs. With this definition of width, the Gaussian pair satisfies the equality, a fairly unusual situation for a transform pair. This relationship between widths in the two domains is called the Uncertainty Principle in Quantum Mechanics, but it was well-known to mathematicians and electrical engineers well before the foundations of Quantum theory were developed.

Usually, it is sufficient to think of the $\Delta t \Delta \omega$ product as being roughly unity. For example, we can then observe that if a certain amplifier is designed to pass a 1-microsecond pulse, then its bandwidth must be on the order of 1 MHz. Of course, the actual design requirements of such an amplifier depend on its purpose. Any practical bandwidth will necessarily distort the pulsed signal. The tolerance for such distortion may depend on signal-to-noise considerations versus fidelity requirements.

As we introduce this repertoire of Fourier transform pairs, it will be instructive to apply some of the fundamental properties of the preceding blog/article to them. The convolution theorem, for example, immediately tells us that the convolution of two Gaussians is another, wider Gaussian. We see this easily by thinking of the operation in the other domain: there, multiplication of two Gaussians clearly produces another, narrower Gaussian.
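Here is a small numerical sketch of that argument (NumPy; the grid and the two example widths are arbitrary choices). Convolving Gaussians of standard deviations 1 and 2 yields, to grid accuracy, the Gaussian whose variance is the sum $1^2 + 2^2$:

```python
import numpy as np

# Convolve two Gaussians on a discrete grid; the result is a wider
# Gaussian whose variance is the sum of the two input variances.
# (The grid spacing and the example sigmas are arbitrary choices.)
dt = 0.01
t = np.linspace(-10, 10, 2001)

def gaussian(sigma):
    return np.exp(-t ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

conv = np.convolve(gaussian(1.0), gaussian(2.0), mode="same") * dt
wider = gaussian(np.sqrt(1.0 ** 2 + 2.0 ** 2))   # sigma = sqrt(5)

assert np.allclose(conv, wider, atol=1e-4)
```

The `mode="same"` convolution keeps the result centered on the original grid, and the factor `dt` turns the discrete sum into an approximation of the convolution integral.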

The boxcar/sinc transform pair gives us greater insight into both Fourier transforms and DSP than perhaps any other single transform pair. The Fourier transform of the boxcar function, shown in Fig 3a, is of the form $(\sin x)/x$. This type of function is commonly called a sinc function. Think of it as a sine wave that decays as $1/x$ for large $x$, and note that $(\sin x)/x$ tends to 1 as $x \to 0$.

Taking the Fourier transform of the boxcar is a straightforward integration that you can easily do. The reverse, the inverse Fourier transform of the sinc function, turns out to require advanced integration techniques that are tangential to our discussion. Perhaps it is not surprising that this inverse Fourier transform:

$\text{boxcar}(t) =(1/2\pi)\int_{-\infty}^{\infty}(2\sin (T\omega)/\omega)\,e^{i \omega t}d\omega$

requires some tricky integration because it has a rather curious behaviour: its value is completely independent of $t$ for $t$ between $-T$ and $T$; then, when $t$ leaves this interval, the value drops abruptly to zero for all other $t$. This situation, where an ordinary continuous function is related to a discontinuous one, is a common occurrence among Fourier transform pairs. On the other hand, the forward Fourier transform,

$\int_{-T}^{T}e^{-i \omega t}dt = 2\sin(\omega T)/\omega$

is an elementary definite integral relating two continuous functions.

Note, again, the relationship between the boxcar’s duration and its frequency bandwidth: if we define the sinc’s bandwidth by the first zero crossings, then $\Delta t \Delta \omega=4\pi$. Narrow pulses have wide bandwidths and vice-versa.
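Both claims are easy to test numerically. The sketch below (NumPy; the value of $T$ and the test frequencies are arbitrary choices) compares a trapezoidal evaluation of the defining integral to the closed form $2\sin(\omega T)/\omega$ obtained from the elementary integration, and checks the $4\pi$ width product:

```python
import numpy as np

# The forward transform of a boxcar of half-width T is the sinc-type
# function 2*sin(w*T)/w; check the closed form against a direct
# trapezoidal evaluation of the defining integral over t in [-T, T].
T = 1.5
t = np.linspace(-T, T, 20001)
dt = t[1] - t[0]

for w_ in (0.5, 1.0, 3.3, np.pi / T):            # pi/T is the first zero crossing
    integrand = np.cos(w_ * t)                   # the odd imaginary part integrates to 0
    numeric = dt * (integrand.sum() - integrand[0] / 2 - integrand[-1] / 2)
    assert np.isclose(numeric, 2 * np.sin(w_ * T) / w_, atol=1e-6)

# Width between first zeros is 2*pi/T; boxcar duration is 2*T: product 4*pi.
assert np.isclose((2 * T) * (2 * np.pi / T), 4 * np.pi)
```

Note that the check at $\omega = \pi/T$ lands right on the first zero crossing of the sinc, where the integral vanishes.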

For another example of the utility of the convolution theorem, we discuss a theorem that originally arose in probability theory and was given the name Central Limit Theorem by George Pólya in 1920. The connection with probability occurs because the probability of drawing several numbers from a distribution that add up to a predetermined sum can be written as a convolution. The theorem, stated in terms of these convolutions, says that if a large number of functions are convolved together, the result tends toward the Gaussian function (the normal distribution of probability theory).

Let us see if the boxcar/sinc pair satisfies this theorem. In Fig 4, we show successive convolutions of the boxcar and the associated successive multiplications of the sinc in the frequency domain. The boxcar does appear to be approaching the shape of a Gaussian. The frequency domain version likewise appears to be approaching a Gaussian shape; indeed, it must, since the transform of a Gaussian is another Gaussian.
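This convergence can be sketched numerically as well (NumPy; the grid spacing and the choice of eight convolution passes are arbitrary). After an 8-fold self-convolution, the boxcar already sits within a few percent of the matching Gaussian:

```python
import numpy as np

# Repeatedly convolve a unit-area boxcar with itself (a discrete sketch
# of Fig 4); by the Central Limit Theorem the result should approach a
# Gaussian. Grid spacing and the number of passes are arbitrary choices.
dt = 0.01
t = np.arange(-0.5, 0.5 + dt / 2, dt)    # width-1 boxcar support
box = np.ones_like(t)
box /= box.sum() * dt                    # normalize to unit area

result = box.copy()
for _ in range(7):                       # 8-fold convolution in total
    result = np.convolve(result, box) * dt

t_full = np.arange(len(result)) * dt
t_full -= t_full[len(t_full) // 2]       # center the support on zero

# Compare against a zero-mean Gaussian with the empirical variance
# (theory: the variance of an n-fold width-1 boxcar is n/12, about 0.667).
sigma = np.sqrt(np.sum(result * t_full ** 2) * dt)
gauss = np.exp(-t_full ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

assert np.max(np.abs(result - gauss)) < 0.05 * gauss.max()
```

Each pass widens the result (variances add) while the shape smooths rapidly toward the bell curve.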

Now that we have demonstrated that the boxcar appears to satisfy the Central Limit Theorem, it is natural to ask if the sinc function does also. Convolving successive sinc functions can easily be done mentally by thinking of multiplying the boxcars in the other domain. Clearly, the product of any number of boxcars is just another boxcar, so convolving sinc functions with other sinc functions always produces sinc functions; the Gaussian is never approached.

The Central Limit Theorem applies to convolutions among a wide class of functions but, as we have seen, it is not true for all functions. However, it does apply to many naturally occurring functions. In these cases, the effect of the convolution operation is to produce a smeared-out result resembling the Gaussian shape, and frequently this approach to the bell-shaped curve is surprisingly fast.

On the other hand, many specially designed convolution operators of interest in signal processing do not obey the Central Limit Theorem. Differential and inverse operators, for example, serve to sharpen a signal rather than smooth it. Some required properties of functions satisfying the Central Limit Theorem can be found using the fundamental properties of the Fourier transform.

Another very important application of the boxcar/sinc pair arises when we think of the boxcar as truncating a data stream. Imagine, for example, that a radio astronomer wishes to analyze a signal received from a distant source for certain periodicities. The researcher can only record a small portion of this signal for spectral analysis. We can say that the data are recorded over a window in time, represented by multiplication of the natural signal by a boxcar of width $2T$. The truncation of the data stream in the time domain manifests itself as a convolution of the data’s spectrum with the sinc function of width $2\pi/T$ (width between first zero crossings). The effect of this convolution is to smear out the frequency spectrum of the data, thereby reducing the resolution of the spectral analysis. This loss of resolution due to a finite data record is unavoidable in every experimental situation. If the data record is very long, the sinc function is very narrow. In the limit of an infinitely long data record, the sinc function associated with truncation becomes an infinitely narrow spike. Convolution with this spike is the same as convolution with a $\delta$ function: it is the identity operation, and no frequency resolution is lost. A short data record, on the other hand, means convolution with a very broad sinc function and a correspondingly severe loss of resolution. Later on, we will see important ramifications of this data truncation when we discuss the relationship of the DFT to the Fourier transform.
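We can sketch this smearing numerically (NumPy; the tone frequency, sample rate, and record lengths are all arbitrary choices). Measuring the half-power width of a truncated tone's spectral peak for two record lengths shows the expected reciprocal behaviour: doubling the record roughly halves the smearing.

```python
import numpy as np

# Truncating a tone to a record of length T convolves its spectral line
# with a sinc whose width scales as 1/T; doubling the record should
# roughly halve the half-power width of the smeared peak.
def peak_width(T, f0=5.0, fs=200.0):
    """Half-power width (Hz) of the spectrum of a tone recorded for T seconds."""
    n = int(T * fs)
    t = np.arange(n) / fs
    x = np.exp(2j * np.pi * f0 * t)          # complex tone: a single spectral line
    # zero-pad heavily so the underlying sinc shape is finely sampled
    spec = np.abs(np.fft.fft(x, n=32 * n))
    freqs = np.fft.fftfreq(32 * n, d=1 / fs)
    above = freqs[spec >= spec.max() / np.sqrt(2)]
    return above.max() - above.min()

w1, w2 = peak_width(T=1.0), peak_width(T=2.0)
assert 1.8 < w1 / w2 < 2.2                   # twice the record, half the smearing
```

The heavy zero-padding does not add any information; it merely interpolates the spectrum so the sinc main lobe can be measured on a fine grid.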

One effect of sampling analog data can be seen by looking at the boxcar/sinc pair shown in Fig 4. Here, the sampling time of an analog-to-digital converter is represented by a boxcar of width $\Delta t = T$ (called the sampling aperture). In this worst-case A/D conversion, the signal is averaged over one whole sample period, thereby losing some high frequencies. The sampling is represented by a convolution of the ideal samples with the boxcar. In the frequency domain, shown in Fig 4b, this convolution becomes multiplication by the sinc function, which has a lowpass filtering effect, producing a roll-off that is down to $2/\pi$ at the Nyquist frequency. This unavoidable distortion of the signal’s spectrum is called Aperture Error.
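The $2/\pi$ figure follows directly from the transform of the aperture boxcar. A quick check (assuming, as above, an averaging aperture of one full sample period $T$):

```python
import numpy as np

# Averaging over a full sample period T is convolution with a width-T
# boxcar; its normalized frequency response is sin(w*T/2)/(w*T/2).
# At the Nyquist frequency w = pi/T this evaluates to 2/pi.
T = 1.0
w_nyquist = np.pi / T
response = np.sin(w_nyquist * T / 2) / (w_nyquist * T / 2)
assert np.isclose(response, 2 / np.pi)

# roughly a 3.9 dB droop at Nyquist
droop_db = -20 * np.log10(response)
assert 3.8 < droop_db < 4.0
```

So a full-period averaging aperture costs about 3.9 dB at Nyquist before any other processing is applied.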

For our last example, extolling the importance of the boxcar/sinc pair, we look to an optical diffraction experiment. Consider monochromatic light waves impinging on the aperture shown in Fig 5a. Let the wave’s amplitude vary across the aperture according to $A(x)$. The disturbance emanating from an element of the aperture $dx$ is, using the complex exponential form of sinusoids,

$A(x)e^{-i \omega t}dx$

As the figure shows, the phase shift for the ray pictured is

$\delta = (2 \pi x \sin \theta) /(\lambda)$

which equals $(2\pi x\theta)/\lambda$ for small $\theta$. The disturbance arriving at the screen from $dx$ is then

$A(x)e^{-i\omega t - i2\pi x \theta/\lambda}dx$

The total illumination $T(\theta)$ reaching the screen from the entire aperture is just the superposition of all these contributions:

$T(\theta)= \int_{\text{aperture}} A(x)e^{-i \omega t - i2\pi x \theta /\lambda}dx$

Hence, we get

$T(\theta)=e^{-i \omega t}\int_{\text{aperture}} A(x)e^{-i2\pi x \theta /\lambda}dx$

The complex exponential in front of the integral expresses the sinusoidal time dependence of the signal at every point on the screen. The integral is the amplitude of this signal versus $\theta$ at each point on the screen; we recognize it as the Fourier transform of the amplitude across the aperture. Waves of any kind propagate to infinity via the Fourier transform. To avoid screens at infinity, a lens can be used between the aperture and the screen to focus rays on the focal plane of the lens, as shown in Fig 6b. Now, the aperture, lens, screen, and a plane-wave light source form a simple and practical Fourier transformer. If a semitransparent film with its transmission varying as $A(x)$ is placed over the aperture, the Fourier transform of $A(x)$ is cast onto the screen. Before the days of the modern digital computer, clever data processing engineers and scientists used equipment of this type to perform Fourier transforms, convolutions, cross-correlations and filtering.

The boxcar/sinc pair arises in the case where $A(x)$ equals a constant across the aperture. For example, an astronomical telescope focused on Sirius, a binary star system in the constellation Canis Major, would project two sinc-type images onto its focal plane, one for each member of the system, as shown in Fig 6c. We know that the width of these sinc functions varies inversely with the aperture size. Furthermore, additional optics cannot increase the separation between these sinc patterns; they only magnify the entire picture. In the case of Sirius, it turns out that one star is considerably fainter than the other, and in most telescopes it gets lost in the side lobes of the sinc function of the brighter star. In this case, resolution is not the problem; it is the side lobes produced by the boxcar-like aperture. But a knowledge of Fourier transforms saves the day and allows the weaker star to be detected: by coating the objective lens (the aperture) so as to reduce its transmission gradually away from the center, the shape of the original sinc function can be altered to a new pattern having smaller side lobes. For example, if this coating reduced transmission in a Gaussian fashion, each star’s image would be Gaussian. Although the Gaussian does have a wider central peak than the sinc function, it has the virtue of lacking side lobes. The superposition of two Gaussians then permits detection of the weaker peak, given sufficiently sensitive detection methods. This technique of tapering the transmission characteristics of optical instruments, called apodization, has been successfully applied to telescopes. It has an important counterpart in signal processing as well. In coming blog articles, we shall see several applications of tapering data streams, as opposed to truncating them boxcar fashion.
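Apodization can be sketched numerically too (NumPy; the aperture grid and the Gaussian taper width are arbitrary choices). Comparing the far-field (Fourier) patterns of a uniform aperture and a Gaussian-tapered one shows the trade: the taper widens the main lobe a little but buries the side lobes.

```python
import numpy as np

# Compare the far-field (Fourier) intensity patterns of a uniform
# (boxcar) aperture and a Gaussian-apodized one; the taper trades a
# slightly wider main lobe for greatly reduced side lobes.
N = 1024
x = np.linspace(-1, 1, N)           # aperture coordinate, half-width 1

uniform = np.ones(N)
tapered = np.exp(-(3 * x) ** 2)     # Gaussian taper, down to ~e^-9 at the edges

def pattern(a):
    """Normalized intensity of the zero-padded aperture transform."""
    p = np.abs(np.fft.fftshift(np.fft.fft(a, n=16 * N))) ** 2
    return p / p.max()

pu, pt = pattern(uniform), pattern(tapered)

def highest_sidelobe(p):
    """Peak intensity outside the main lobe (bounded by the nulls around center)."""
    c = len(p) // 2
    mins = np.where((p[1:-1] < p[:-2]) & (p[1:-1] <= p[2:]))[0] + 1
    left = mins[mins < c].max()
    right = mins[mins > c].min()
    return max(p[:left].max(), p[right:].max())

# Boxcar side lobes sit near -13 dB; the Gaussian taper buries them.
assert highest_sidelobe(pu) > 0.04          # about 10**(-13.3/10) ~ 0.047
assert highest_sidelobe(pt) < 1e-4
```

The uniform aperture's first side lobe holds almost 5% of the peak intensity, ample to hide a faint companion; the tapered aperture's side lobes are orders of magnitude lower.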

Before leaving the discussion of the boxcar/sinc pair, we mention its Fourier dual, the sinc/boxcar pair in Fig 3b, which is, of course, easily obtained from the first pair by a relabelling of variables. One immediate significance of this sinc/boxcar pair for signal processing is that the boxcar represents an ideal lowpass filter in the frequency domain. We see that it requires an infinitely long time domain convolution operator to achieve this lowpass filter, a physical impossibility. We will see how to design substitute time domain operators of tolerable lengths in a later blog article.
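A small numerical sketch shows what happens when the infinitely long ideal operator is simply chopped off (NumPy; the cutoff, the operator lengths, and the excluded transition band are all arbitrary choices): longer truncations approach the ideal boxcar response, but any finite truncation only approximates it.

```python
import numpy as np

# The ideal lowpass filter (a frequency-domain boxcar) has the infinitely
# long impulse response sin(wc*n)/(pi*n). Truncating it gives a
# realizable operator whose response only approximates the boxcar.
def truncated_sinc(n_taps, wc=np.pi / 2):
    """n_taps-point truncation of the ideal lowpass impulse response."""
    n = np.arange(n_taps) - (n_taps - 1) / 2
    return wc / np.pi * np.sinc(wc * n / np.pi)   # np.sinc(x) = sin(pi*x)/(pi*x)

def response_error(n_taps):
    """Max deviation of |H| from the ideal boxcar away from the band edge."""
    h = truncated_sinc(n_taps)
    H = np.abs(np.fft.fft(h, n=4096))
    w = np.fft.fftfreq(4096) * 2 * np.pi
    ideal = (np.abs(w) < np.pi / 2).astype(float)
    keep = np.abs(np.abs(w) - np.pi / 2) > 0.3    # ignore the transition band
    return np.max(np.abs(H - ideal)[keep])

# Longer truncations approach the ideal boxcar response.
assert response_error(201) < response_error(21)
assert response_error(201) < 0.05
```

Simple truncation is exactly the boxcar windowing discussed above, with its attendant ripple; the tapering ideas from the apodization example carry over directly to taming it.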

More later…

Nalin Pithwa