
Everything should be as simple as possible, and not simpler. - Albert Einstein
I wish to provide a fast track to the high priesthood of Digital Signal Processing. The number of professionals requiring knowledge of DSP is vast, because the amazing electronics revolution (following the fabrication of the first IC chips at Texas Instruments and Fairchild Semiconductor, USA) has made convenient collection and processing of digital data available to many disciplines. (Sounds like Big Data/Analytics!) The fields of interest are impossible to enumerate. The applications of DSP are limited only by human imagination. I once heard at a TI DSP conference that a dentist in the UK had used DSP to reduce the noise of his drilling machine (his reason was that the cutting of teeth is not so painful, but the noise of the tool creates a scare in the minds of patients). Applications of DSP range widely from astrophysics, meteorology, geophysics, and computer science to VLSI design, control systems, communications, RADARs, speech analysis/synthesis, medical technology, and of course, finance/economics.
I wish to minimize the mathematical paraphernalia in this series. There are many elementary texts on DSP, and I do not claim any novelty in the presentation. But this series might help both the enthusiast and the expert. Towards the goal of simplifying the mathematics involved, I assume that signals are deterministic, thereby avoiding a heavy reliance on the theory of random variables and stochastic processes. The reader need only have a familiarity with differential and integral calculus, complex numbers, and simple matrix algebra. For the purpose of this article, I assume you are aware of the basic concepts of signals and systems, sampled data and the Z-transform, the concept/computation of the frequency response, and the DFT. I will talk a bit about the DFT, though.
Digital signal processing has become extremely important in recent years because of digital electronics. In treating the processing of analog signals that requires any reasonable amount of computation, it is usually beneficial to digitize the signals and to use digital computers for their subsequent processing. The advantage results both from the extremely high computation speeds of modern digital computers and from the flexibility afforded by computer programs, which can be stored in software or firmware or hardwired into the design. Low-cost VLSI chips have made this approach beneficial even for devices limited to special-purpose computing applications or restricted by throwaway economics. This computational asset has been a major impetus for thinking of signals as discrete time sequences. An additional advantage of representing signals in discrete time has been their pleasant mathematical simplicity; continuous time theory requires far more advanced mathematics than the algebra of polynomials, some simple trigonometric function theory, and the behaviour of the geometric series that we have employed. The digital revolution seduces us into viewing every situation in its terms; still, we are haunted by the concept of underlying continuous relationships. We need to know the effect of digitizing continuous time signals, and the essential difference between these digitized signals and those signals that are inherently digital from the start. Are continuous time and discrete time versions simply alternate models of the real physical world, from which we are free to choose? Some say yes; yet there are essential differences.
For example, meteorological data, such as the barometric pressure at a given location, would certainly seem to be a continuous signal that conceptually extends infinitely far from the past into the future. Physical considerations force us to conclude that its spectrum is bandlimited. The pressure wave from a nuclear blast, on the other hand, has a definite beginning and extends with decaying amplitude infinitely long thereafter. I will show you soon that such a signal must have a frequency spectrum that is not bandlimited, and therefore this signal cannot be digitized without aliasing. Still other signals seem inherently digital: the price of a stock is determined only at discrete trading times. Furthermore, no business lasts forever; its stock trading has a definite beginning and ending. The stock quote's spectrum must therefore be repetitive (because the signal is discrete in time) as well as inherently broadbanded (because the record has finite duration).
The spectra of the signals in these three examples are quite different. Clearly, to apply digital signal processing in an intelligent manner, we need to know more about continuous time functions. We need to develop a continuous time theory of signals, and then use it to gain insight into its relationship with DSP.
The Fourier Integral Transform Developed from the DFT
Our development of mathematical machinery will follow a natural course motivated only by considering LTI digital systems and operators. The concept of the spectrum arises because sinusoids are eigenfunctions of LTI systems. The convenience of sampling the spectrum at equally spaced intervals gives rise to the DFT. The DFT, with its equally spaced points in both time and frequency, places us in a position to easily take the limit to pass over to continuous time and frequency. We start with the inverse transform
Equation I:

$f(t) = \frac{1}{N} \sum_{n=0}^{N-1} F_n \, e^{\,i 2\pi n t / (N \Delta t)}$

and recognize that this sum may be evaluated for any $t$; it may be considered a continuous function of time. [Just like the similar sum for the spectral response of an LTI operator, Equation I can be evaluated at any time.] The frequency interval used in this summation is

$\Delta f = \frac{1}{N \Delta t}$,

that is,

$\Delta \omega = \frac{2\pi}{N \Delta t}$.

Therefore, the frequency (in radians per unit time) is

Equation II:

$\omega_n = n \, \Delta\omega = \frac{2\pi n}{N \Delta t}$,

and as $N$ becomes infinite, $\Delta\omega \to 0$, giving for the limit of the sum in Equation I, as $N \to \infty$,

$f(t) = \frac{1}{2\pi} \sum_{n} F(\omega_n) \, e^{\,i \omega_n t} \, \Delta\omega$, with the identification $F(\omega_n) \equiv F_n \, \Delta t$.

In the limit $\Delta\omega \to d\omega$, this sum becomes an integration: a continuously infinite number of terms separated by the infinitesimal frequency interval $d\omega$,

Equation 3:

$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega) \, e^{\,i\omega t} \, d\omega$
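To make this limiting process concrete, here is a small Python sketch (my own illustration, not part of the derivation; the Gaussian transform pair, the frequency span, and the values of $N$ are arbitrary choices). It approximates the integral in Equation 3 by exactly the kind of finite sum it arose from, and the approximation improves as the frequency interval shrinks:

```python
# A minimal sketch: approximate the inverse Fourier integral (Equation 3) by a
# finite frequency sum, using the known pair f(t) = exp(-t^2/2),
# F(w) = sqrt(2*pi) * exp(-w^2/2).  All numbers here are illustrative choices.
import numpy as np

def f_exact(t):
    return np.exp(-t**2 / 2.0)

def F_exact(w):
    return np.sqrt(2.0 * np.pi) * np.exp(-w**2 / 2.0)

t = 0.7                                   # an arbitrary, non-sample instant
w_max = 10.0                              # finite span standing in for (-inf, inf)
for N in (8, 32, 128, 512):
    dw = 2.0 * w_max / N                  # frequency interval, analogous to 2*pi/(N*dt)
    w = -w_max + dw * np.arange(N)
    approx = (dw / (2.0 * np.pi)) * np.sum(F_exact(w) * np.exp(1j * w * t))
    print(f"N = {N:4d}: Riemann sum = {approx.real:+.6f}, exact f(t) = {f_exact(t):.6f}")
```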
Now, both $F(\omega)$ and $f(t)$ are continuous functions. To invert Equation 3, the orthogonality relation,

$\frac{1}{N} \sum_{n=0}^{N-1} e^{\,i 2\pi n (k - k')/N} = \delta_{kk'}$,

must likewise be converted to continuous time and frequency. The same limiting process, $\Delta\omega \to d\omega$ and $n\,\Delta\omega \to \omega$, gives the continuous version:

Equation 4:

$\frac{1}{2\pi} \int_{-\infty}^{\infty} e^{\,i\omega(t - t')} \, d\omega = \delta(t - t')$

Before continuing, we need to elaborate a little on this result: the Kronecker $\delta_{kk'}$ has gone over in the limit into a continuous function called the Dirac $\delta$ function. Strictly speaking, it is not a function in the mathematical sense at all; nonetheless, it has been put on a firm mathematical basis by Lighthill. For our purposes, the $\delta$ function can be thought of as the limiting form of a narrow symmetrical pulse located at $t = t_0$, whose width goes to zero and height goes to infinity such that its area is constant and normalized to unity:

$\int_{-\infty}^{\infty} \delta(t - t_0) \, dt = 1$.
Figure 1 shows an example of this limiting concept along with the development of the sampling property of the $\delta$ function, which has the property

$\int_{-\infty}^{\infty} f(t) \, \delta(t - t_0) \, dt$,

which is equal to

$f(t_0)$,

where $f(t)$ is any continuous function of time and $f(t_0)$ is its sampled value at $t = t_0$. This sampling property of the $\delta$ function provides an important connection between continuous-time theory and discrete-time theory. In addition, the orthogonality of complex sinusoids expressed by Equation 4 clearly possesses a companion relation, obtained simply by relabeling variables as follows:

Equation 5:

$\frac{1}{2\pi} \int_{-\infty}^{\infty} e^{\,i(\omega - \omega')t} \, dt = \delta(\omega - \omega')$
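The sampling property just described is easy to check numerically. The following Python sketch (again, only an illustration; the choice of $f$, the grid, and the pulse widths are mine) replaces $\delta(t - t_0)$ by a unit-area rectangular pulse of width $\varepsilon$ and shows the integral tending to $f(t_0)$ as $\varepsilon$ shrinks:

```python
# A small sketch of the sampling property: a unit-area rectangular pulse of
# width eps centred at t0 stands in for delta(t - t0); as eps shrinks, the
# integral of f(t) against the pulse approaches f(t0).
import numpy as np

def f(t):
    return np.cos(t)

t0 = 1.0
t = np.linspace(-5.0, 5.0, 200001)        # fine time grid for the numerical integral
dt = t[1] - t[0]
for eps in (1.0, 0.1, 0.01):
    pulse = np.where(np.abs(t - t0) <= eps / 2.0, 1.0 / eps, 0.0)   # area = 1
    value = np.sum(f(t) * pulse) * dt
    print(f"eps = {eps:5.2f}: integral = {value:.6f}, f(t0) = {f(t0):.6f}")
```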
Now, we can return to find the inverse of Equation 3 by using Equation 5. Multiplying both sides of Equation 3 by $e^{-i\omega' t}$ and integrating over time gives

$\int_{-\infty}^{\infty} f(t) \, e^{-i\omega' t} \, dt = \int_{-\infty}^{\infty} \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega) \, e^{\,i\omega t} \, d\omega \right] e^{-i\omega' t} \, dt$,

which equals

$\frac{1}{2\pi} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\omega) \, e^{\,i(\omega - \omega')t} \, d\omega \, dt$,

which is equal to

$\int_{-\infty}^{\infty} F(\omega) \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{\,i(\omega - \omega')t} \, dt \right] d\omega$.

The last integral on the right side yields the $\delta$ function from Equation 5. Thus, we get

$F(\omega') = \int_{-\infty}^{\infty} f(t) \, e^{-i\omega' t} \, dt$,

which is the desired relation giving $F(\omega)$ in terms of $f(t)$. For reference, we rewrite this result and Equation 3 as a transform pair:

Equation 6A:

$F(\omega) = \int_{-\infty}^{\infty} f(t) \, e^{-i\omega t} \, dt$

Equation 6B:

$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega) \, e^{\,i\omega t} \, d\omega$
This pair of equations affords a continuous time and continuous frequency Fourier transformation, collectively called simply the Fourier transform. Sometimes one is more specific and calls Equation 6A the forward Fourier transform of $f(t)$, and Equation 6B the inverse Fourier transform of $F(\omega)$. Note the logical resemblance of these equations to the DFT. Again, as in the DFT case, there is an obvious duality of the Fourier transform that results from an interchange of time and frequency by a simple relabeling, or redefinition, of variables. One consequence of this duality is the lack of a standard definition of the Fourier transform: sometimes the forward and inverse versions of the transform in Equations 6 are interchanged. Different placements of the factor of $2\pi$ provide further possibilities for definitions of the Fourier transform.
As we shall see, the similarities between the Fourier transform and the DFT will allow us to exploit the computational advantages of the DFT. But, the differences between the Fourier transform and the DFT, though perhaps few in number, are profound in character. We have approached the Fourier transform from a desire to represent both time and frequency as continuous variables. The resulting transformation equations contain integrals over all values of these variables from minus infinity to plus infinity. This property is a double-edged sword. On the one hand, it does let us represent signals that the DFT does not allow, such as a one-sided transient that decays infinitely far into the future. But, on the other hand, to exactly compute the frequency response of such a signal using a numerical scheme, we would need a continuously infinite number of data points. We will see how to deal with, but not completely solve, this problem later. Another concern, which is nonexistent for the DFT, arises because of the Fourier transform’s integrals; we need to know something of their convergence properties.
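As a concrete taste of both edges of that sword, here is a Python sketch (an assumed example, not a prescribed method): it approximates the Fourier integral of the one-sided transient $f(t) = e^{-at}u(t)$, whose exact transform is $F(\omega) = 1/(a + i\omega)$, by a DFT of a finite, sampled record. The decay rate, sampling interval, and record length are arbitrary choices, and the residual error is exactly the price of truncating and sampling a signal that really calls for a continuously infinite number of data points:

```python
# A hedged sketch: use the DFT as a Riemann-sum approximation of the Fourier
# integral of f(t) = exp(-a*t)*u(t), whose exact transform is 1/(a + i*w).
import numpy as np

a = 2.0
dt = 0.01                                  # sampling interval (illustrative)
N = 4096                                   # record length N*dt truncates the infinite tail
t = dt * np.arange(N)
f = np.exp(-a * t)

F_dft = dt * np.fft.fft(f)                 # Riemann-sum approximation of the integral
w = 2.0 * np.pi * np.fft.fftfreq(N, d=dt)  # matching angular-frequency grid
F_exact = 1.0 / (a + 1j * w)

err = np.max(np.abs(F_dft - F_exact))
print(f"max |DFT approximation - exact transform| = {err:.4e}")
```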
The convergence of Fourier integrals is a fascinating subject of Fourier theory, explored by famous mathematicians such as Plancherel, Titchmarsh, and Wiener. Various conditions have been found that prove the convergence of Fourier integrals for functions displaying rather strange behaviour compared to our view of naturally occurring signals. Because our interest is limited to realistic signals and systems, we can afford to start our discussion with an over-restrictive (sufficient but not necessary) convergence condition for the Fourier integral transform: $f(t)$ is absolutely integrable over the open interval $(-\infty, \infty)$, that is,

$\int_{-\infty}^{\infty} |f(t)| \, dt < \infty$.

Under this condition, the repeated integral

Equation 7:

$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} f(\tau) \, e^{-i\omega\tau} \, d\tau \right] e^{\,i\omega t} \, d\omega$,

called the Fourier integral representation of $f$, converges to the average value of $f$ at a discontinuity. That is, it converges to

$\frac{1}{2}\left[ f(t_0^{+}) + f(t_0^{-}) \right]$

when there is a discontinuity at $t = t_0$.
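This behaviour at a jump can be seen numerically. The sketch below (my own example) takes $f(t) = e^{-at}u(t)$, which jumps from 0 to 1 at $t = 0$ and has $F(\omega) = 1/(a + i\omega)$, truncates the inverse integral at $\pm W$, and evaluates it right at the discontinuity; the value approaches the average of the left and right limits, 0.5, as $W$ grows:

```python
# A minimal sketch of Equation 7 at a discontinuity: the truncated inverse
# integral of F(w) = 1/(a + i*w), evaluated at t = 0 (so exp(i*w*t) = 1),
# tends to the midpoint 0.5 of the jump as the truncation limit W grows.
import numpy as np

a = 1.0
for W in (10.0, 100.0, 1000.0, 10000.0):
    w = np.linspace(-W, W, 200001)
    dw = w[1] - w[0]
    F = 1.0 / (a + 1j * w)
    value = np.sum(F) * dw / (2.0 * np.pi)
    print(f"W = {W:8.1f}: value at the jump = {value.real:.6f} (average of limits = 0.5)")
```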
Some functions, such as step functions, impulses, and sinusoids, never really occur in nature; nonetheless, they are very convenient for thinking about signals and systems. The absolutely integrable condition immediately disqualifies many of these favourite functions; clearly, any periodic function, including the sinusoids themselves, is excluded from the class of functions possessing Fourier transform pairs if we accept this condition. However, a sufficiently rich class of functions possessing Fourier integral transforms results if we allow the Dirac $\delta$ function to be included. Lighthill has shown with mathematical rigour how to include $\delta$ functions in Fourier integral theory. We simply note that Equation 5 is, indeed, a Fourier transform of a complex sinusoid. This equation shows that the spectrum of $e^{\,i\omega_0 t}$ is eminently reasonable; it contains exactly one pure frequency, at $\omega = \omega_0$.
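The discrete-time cousin of this lone spectral line is easy to demonstrate. In the short Python sketch below (an illustration with arbitrarily chosen record length and frequency), sampling $e^{\,i\omega_0 t}$ over a whole number of periods and taking its DFT leaves a single nonzero bin:

```python
# A quick sketch: the DFT of a complex sinusoid sampled over an integer number
# of periods has exactly one nonzero bin, the discrete counterpart of the lone
# delta at w = w0.
import numpy as np

N = 64
dt = 1.0 / N                            # unit-length record
w0 = 2.0 * np.pi * 5.0                  # five cycles over the record
t = dt * np.arange(N)
x = np.exp(1j * w0 * t)

X = np.fft.fft(x)
peak = int(np.argmax(np.abs(X)))
print(f"nonzero bins: {np.sum(np.abs(X) > 1e-9)}, located at index {peak}")
```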
Furthermore, after our discussion in the next section/blog on the convolution theorem, we will show how Wiener was able to include signals, such as periodic functions and random noise, in frequency analysis. Even though these signals do not possess a Fourier integral transform, they may have a power spectrum.
More later…
Nalin Pithwa