A Repertoire of DSP transforms and esp. Hilbert transforms and their significance — part 3

The Fourier transform of 1/t is essentially a step function. We shall see that convolution with 1/t leads to an important integral transform. Specifically, the Fourier transform of -1/(\pi t) is i \, \mathrm{sgn}(\omega). This pair and its dual are shown in Figure 1. Since 1/t has a pole at the origin, its Fourier transform integral diverges there; 1/t has a transform only in a limiting sense. (It can be evaluated using contour integration.) Likewise, the Fourier integral of sgn(x), in computing either the forward or inverse transform, does not exist in the conventional sense because sgn(x) is not absolutely integrable. The transform of sgn(x) can be defined by considering a sequence of transformable functions that approaches sgn(x) in the limit. We do not let these mathematical inconveniences deter us any more than they did in our previous discussion of \delta functions and sinusoids, for the pair 1/t \leftrightarrow \mathrm{sgn} has some intriguing properties.

Since 1/t is real and odd, its Fourier transform is odd and pure imaginary. But, more interestingly, its magnitude spectrum is obviously constant. (Could there be a delta function lurking nearby?) The interest in this transform pair comes from convolving a function with -1/(\pi t) in the time domain. This convolution integral, called the Hilbert transform, is as follows:

\mathcal{H}[f(t)]=-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{f(t^{'})\,dt^{'}}{t-t^{'}} Equation I

This transform arises in many applications in Digital Communications and Mathematical Physics. The Cauchy principal value, which is a kind of average familiar to those knowledgeable in contour integration, is to be used over singularities of the integrand. This mathematical inconvenience is avoided in the frequency domain where we can easily visualize the effect of the Hilbert transform.

Multiplication by i \, \mathrm{sgn}(\omega) in the frequency domain produces a 90° phase shift at all frequencies. The phase of F(\omega) is advanced a constant 90° for all positive frequencies and retarded a constant 90° for all negative frequencies. The magnitude spectrum of F(\omega) is unchanged since the magnitude spectrum of i \, \mathrm{sgn}(\omega) is flat. The Hilbert transform operation in the frequency domain is summarized in Fig 2: the Fourier transform of the given function has its phase shifted 90°, in opposite directions for positive and negative frequencies; then the inverse Fourier transform produces the time domain result.
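Since the post describes the operation only in words, here is a minimal numerical sketch in Python/numpy (the function name hilbert_fft and the test signal are my own illustrative choices; the sign convention follows the -1/(\pi t) kernel of Equation I, under which \cos(\omega_{0} t) maps to -\sin(\omega_{0} t)):

```python
import numpy as np

def hilbert_fft(x):
    """Hilbert transform via the frequency domain: multiply the
    spectrum by i*sgn(omega), then invert (per the -1/(pi t) kernel)."""
    X = np.fft.fft(x)
    w = np.fft.fftfreq(len(x))          # bin frequencies; only their sign matters
    return np.fft.ifft(1j * np.sign(w) * X).real   # real input gives real output

# sanity check: with this convention, cos(4t) should map to -sin(4t)
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
print(np.allclose(hilbert_fft(np.cos(4 * t)), -np.sin(4 * t)))   # True
```

The magnitude spectrum is untouched; only the phase is rotated by 90°, in opposite directions for positive and negative frequencies.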

The exact 90° phase shift [including the sgn(\omega) dependence] occurs in several different instances in wave propagation: the reflection of electromagnetic waves from metals at a certain angle of incidence involves an exact 90° phase shift independent of frequency; special teleseismic ray paths produce the same type of phase shift; and the propagation of point sources for all types of waves includes a 90° phase shift for all frequencies in the far field.

More about Hilbert Transforms later…

Nalin Pithwa

What is a DSP Engineer?

Let me digress a bit from the light theory of DSP, which we have been discussing till now, to offer a viewpoint on what a DSP engineer is. I reproduce below an opinion, which I also hold to some extent.

***************************************************************************************************

Comment: What is a DSP Engineer?

Feb 12 2008

 

By Shiv Balakrishnan, Forward Concepts

 

Once upon a time, DSP stood for Digital Signal Processing. In the late 1980’s, however, our beloved acronym began to move inexorably towards meaning Digital Signal Processors. By then of course, TI and other chip companies had spent many millions of dollars in creating a new market segment, so who’s to begrudge them hijacking the very end of an old piece of jargon?

Some of us nit-picky curmudgeons, that’s who. My own anecdotal experience is that the field of Digital Signal Processing has lost out in subtle ways. Here’s the background:

In the 1960’s, DSP art developed around two broad areas: digital filtering and FFTs. When someone said they did DSP, you knew what they did. You also knew where they congregated: The International Conference on Acoustics, Speech, and Signal Processing (ICASSP) was the main international watering hole. The ICASSP acronym gives away the dominance of “Acoustics” and “Speech” at the time.

You even knew what DSP engineers were reading. Quality textbooks from Oppenheim, Schafer, Rabiner, Gold, Rader, and a host of others joined earlier guideposts like Blackman and Tukey’s “Measurement of Power Spectra.” Other sources of knowledge included the classic Cooley-Tukey FFT paper, Widrow and Hoff’s LMS work, and a collection of seminal work from Bell Labs—the latter including early vocoder work from Dudley and speech breakthroughs from Flanagan’s group. There was quantization analysis (Jackson and many others) and the elegant Remez Exchange-based FIR design work of Parks and McClellan.

Thanks to this training, DSP practitioners had a firm grasp of the signal processing chain. They worked with algorithms, hardware, and software. They understood the interplay between analog and digital signals, and the effects of noise and non-linearity in both domains. Most could recognize when an analog implementation was more efficient—an important talent, in my opinion.

Fast forward to 2008. Acoustics and speech still matter, but there is much more emphasis on multimedia and communications. The hardware options are replete with DSPs, microprocessors, FPGAs, IP cores, ASICs, ASSPs, and what not, accompanied by a wealth of tools: compilers, debuggers, high-level design tools, and more. So what’s the meaning of “DSP” now? What’s our new-millennium DSP engineer doing?

Here are my observations (and I’d love to have them challenged): “DSP” means Processors a la TI, ADI et al. (Note to self: get over it!) A DSP engineer typically does one of the following:

  1. Writes C/C++ code to implement a function (e.g., a Viterbi decoder) on a DSP or core. The engineer then optimizes the code for performance/size/power, sometimes even having to (shudder) write assembly code to wring out the last iota of performance.
  2. Architects a hardware block (e.g., Viterbi decoder), implements it in VHDL/Verilog, verifies it, and optimizes it. The engineer checks the RTL against a high-level model (e.g., in MATLAB) as well as the correctness at the logic level with black box testing, test vectors and such.
  3. Architects the system and designs algorithms. These engineers typically use MATLAB and C, and have domain expertise. They usually do not deal with assembly code or Verilog.

Now let me stick my neck out and make some wild generalizations:

Engineers in categories 1 and 2 never use their basic DSP theory after college—not that I can blame them. They can’t tell you why and when an IIR filter might be unstable, or who needs any transform other than a DCT or radix-2 FFT. As far as quantization noise and round off effects are concerned, fuggedaboutit. The systems guy took care of that.

The #3 folks are closest to the DSP engineers of a couple of decades ago, though they tend to be “Architects” or “System Designers” as opposed to DSP programmers. They are also the ones closest to understanding the complete system, although they are limited by increasing system complexity. Thus one has MPEG engineers, and WiMAX engineers, and Graphics engineers, etc. These systems engineers normally don’t get too close to the hardware in large projects.

Here’s a scenario: the product is a complex SoC or system; the chip comes back and it’s bring-up time. I’m guessing that

  • 70% of the above folks will not be dying to get in the lab
  • 80% will want nothing to do with the hardware
  • 90% will want nothing to do with the analog and data converters
  • 100% will want nothing to do with the RF

Bottom line: complexity and the concomitant specialization have forced most engineers on any large project to limit themselves to a small part of the product or solution. This is not unique to DSP, of course. So who are today’s DSP engineers, and which direction are they going?

The answer seems to be: C / MATLAB programmer with some application knowledge. Software skills (compilers, tools, RTOS, IDEs, etc.) help in any job and DSP work is no exception. Knowledge of Z-transforms, FFTs, etc. is optional—there are cookbooks for these things anyway.

So why do I say that the discipline of DSP has lost out? My contention is that if engineers are not fluent in the basics of DSP, perhaps they should not be called DSP engineers. Since it is a bit late for that, maybe we need a better term for those who are concerned with how the bits arrived at the processor and where they are headed.

On the systems issue, this curmudgeon feels lucky to have been able to work on some pretty complex products. I am surprised by one thing: many DSP engineers I have encountered in the last ~15 years in Silicon Valley are not interested (!) in learning the other stuff even if they have the chance. Maybe they’re doing the next big thing that will make today’s DSP engineers seem as quaintly old-fashioned as I am today.

************************************************************************************

More later…

Nalin Pithwa

A repertoire of DSP transforms, especially sine & cosine transforms & their significance — part two

Just one comment before we delve further: the trick in DSP, digital control, and digital communications is to treat all possible practical signals via a few elementary or test signals. The \delta function is an elementary signal; so also are the complex exponentials or sinusoids, the unit step function, the boxcar, the sinc pulse, and the signum function.

We continue our light study with transforms of sinusoids.

The Fourier transform of sines and cosines plays a central role in the study of linear time invariant (LTI) systems for three closely related reasons: (1) they are eigenfunctions of LTI systems; (2) the spectrum consists of the eigenvalues associated with these eigenfunctions; and (3) the weighting factor (sometimes called the kernel) in the Fourier transform is a complex sinusoid. Clearly, we want to include sinusoids in our Fourier transform repertoire.

We have already seen that the Fourier transform of a complex exponential is a \delta function located at the frequency of the complex sinusoid. That is,

\int_{-\infty}^{\infty}e^{i \omega_{0} t}e^{-i \omega t}dt=2 \pi \delta (\omega - \omega_{0})

and letting \omega_{0} \rightarrow -\omega_{0} we get

\int_{-\infty}^{\infty}e^{-i \omega_{0} t}e^{-i \omega t}dt=2\pi \delta (\omega + \omega_{0})

The Fourier transforms of the cosine and the sine follow directly from these two equations. Adding the two equations gives

\int_{-\infty}^{\infty}\cos (\omega_{0} t)e^{-i \omega t}dt=\pi [\delta (\omega + \omega_{0})+\delta (\omega - \omega_{0})] Equation I 

while subtracting them gives

\int_{-\infty}^{\infty}\sin (\omega_{0}t)e^{-i \omega t}dt=i\pi [\delta (\omega +\omega_{0})-\delta (\omega - \omega_{0})] Equation II

These transforms are just the combinations of \delta functions that are required by the basic symmetry properties of the Fourier transform; because the cosine is a real and even function, its transform is real and symmetric. The sine is real and odd; therefore, it has an odd, imaginary transform.
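These spike pictures are easy to reproduce discretely. A small numpy illustration (my own; the record length N and the bin k0 are arbitrary choices) shows the DFT of a cosine as two real, symmetric spikes of height N/2, and that of a sine as two imaginary, antisymmetric ones:

```python
import numpy as np

N, k0 = 64, 5                                  # record length and frequency bin
n = np.arange(N)
C = np.fft.fft(np.cos(2 * np.pi * k0 * n / N))
S = np.fft.fft(np.sin(2 * np.pi * k0 * n / N))

print(np.round(C[k0]), np.round(C[N - k0]))    # both N/2 = 32: real, symmetric
print(np.round(S[k0]), np.round(S[N - k0]))    # -32j and +32j: imaginary, odd
```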

Alternatively, we could derive the sine and cosine transforms by using the phase-shift theorem; shifting a function along the axis in one domain introduces a complex sinusoid in the other domain. For example, if we want to generate the dual pairs of Equations I and II, we apply the phase-shift theorem to the \delta function and write

FT(\delta (t-t_{0}))=e^{-i\omega t_{0}}

and FT (\delta (t+t_{0}))=e^{i \omega t_{0}}

Adding and subtracting these two equations gives

FT(\delta (t-t_{0}) + \delta(t + t_{0})) = 2 \cos (\omega t_{0}) equation III

FT(\delta (t+t_{0}) - \delta (t-t_{0})) = 2i \sin (\omega t_{0}) equation IV

The sine and the cosine transforms and their duals are shown in the attached figure 1.

Thus, the Fourier transforms of sines and cosines can be viewed as resulting from forcing certain symmetries onto the \delta function transform after it is shifted along the axis; shifting the \delta function off the origin in the frequency domain and then requiring a symmetric spectrum results in Equation I. An antisymmetric spectrum leads to Equation II. Analogous statements apply to the equations for \delta function shifts along the time axis.

The \delta functions in Equations I and II make for easy convolutions, leading to an important observation. For example, convolving Equation I in the frequency domain with a given function F(\omega), which (including the 1/2\pi factor of the frequency-domain convolution theorem) corresponds to multiplying by f(t) in the time domain, gives

FT(f(t)\cos (\omega_{0} t)) = \frac{1}{2}(F(\omega + \omega_{0})+F(\omega - \omega_{0})) equation V

This equation and its sine wave counterpart are sometimes referred to as the modulation theorem. They show that modulating the amplitude of a function by a sinusoid shifts the function’s spectrum to higher frequencies by an amount equal to the modulating frequency. This effect is well understood in television and radio work, where a sinusoidal carrier is modulated by the program signal. Obtaining transforms of sines, cosines, and related functions has been a good example of exploiting the basic Fourier transform properties. Our next example is not so obliging.
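Before moving on, here is a quick numerical check of the modulation theorem (a sketch with parameters of my own choosing: a baseband Gaussian pulse and a 20 Hz carrier). The modulated pulse's spectrum should peak near the carrier frequency, at half the baseband peak:

```python
import numpy as np

N, dt = 4096, 0.005
t = (np.arange(N) - N // 2) * dt
f = np.exp(-t**2 / 0.1)                      # baseband pulse, bandwidth ~1 Hz
f0 = 20.0                                    # modulating (carrier) frequency, Hz
g = f * np.cos(2 * np.pi * f0 * t)           # amplitude modulation

freqs = np.fft.fftfreq(N, dt)
F, G = np.abs(np.fft.fft(f)), np.abs(np.fft.fft(g))

print(freqs[np.argmax(G[:N // 2])])          # ~20.0: spectrum moved to the carrier
print(G.max() / F.max())                     # ~0.5: each sideband gets half
```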

More later…

Nalin Pithwa

A Repertoire of DSP transforms and their importance with an interesting lesson from astronomy — part I

There are tremendous advantages to thinking about signals and systems in the frequency domain. Generally speaking, deeper understanding can be gained when a subject is viewed from more than one angle. To better understand how the frequency domain relates to the time domain and the implications (and limitations) of this relationship for digital signal processing (DSP), we need a repertoire of transforms available at our fingertips. This repertoire, together with the properties of the Fourier transform discussed in the preceding blogs, will be indispensable for understanding and applying Fourier transforms in practical applications.

Most of  the transforms of this repertoire are easy to compute. Therefore, we will concentrate on their implications for our subject, leaving  their derivation, for the most part, to problems for  the reader/student/enthusiast. Furthermore, to simplify the repertoire, the time domain signal will be either real-symmetric or real-antisymmetric so the transform will be either real-symmetric or odd-imaginary, simplifying the drawings and discussions.

The delta function is perhaps the easiest function to integrate. Substitution of f(t)=\delta (t) in Equation I below (mentioned in a previous blog and reproduced here)

F(\omega)=\int_{-\infty}^{\infty}f(t)e^{-i \omega t}dt equation I

immediately tells us that the Fourier transform of a spike at the origin is a constant of unit height as shown in Fig 1a. Physical reasoning tells us that the \delta function could not exist in any real analog system; its spectrum substantiates this by demanding an infinite bandwidth. Nor can this spike ever exist in a digital system — it is infinitely large. Some of the Fourier transform properties of  the previous DSP blog article can be applied to the \delta function. Others defy ordinary arithmetic — try Rayleigh’s theorem. The \delta function is like the girl with the little curl  on  her forehead; when she was good, she was very, very good; when she was bad, she was horrid. Mathematically, the \delta function is indeed somewhat horrid. On the other hand, it is quite good — really indispensable — for relating continuous theory to discrete signals and systems.

The complementary transform is equally easy to compute. Substituting f(t) = \text{constant} in Equation I and using Equation II below:

\delta (\omega -\omega^{'})=(1/2\pi)\int_{-\infty}^{\infty}e^{i(\omega -\omega^{'})t}dt Equation II

tells us that the Fourier transform of a constant is a \delta function as shown in Fig 1b. Thus, the Fourier transform concentrates the dc component (the average value) of a signal into  the zero frequency contribution of the spectrum. Conversely, from Fig 1a, a signal concentrated at one point in time is spread out over all frequencies. The infinitely sharp time domain signal has an infinitely broad spectrum; the infinitely broad time domain signal has an infinitely sharp spectrum. It turns out that this is a manifestation of a general property of the Fourier transform.

The Gaussian function is one of the few waveforms that possess the same functional form in both domains; the Fourier transform of a Gaussian is a Gaussian. These functions, and their half-widths, located 1/e down from the maximum, are shown in Fig 2. Using these half-widths, the spreading property observed for the \delta function can easily be quantified for the Gaussian pair:

\Delta t \, \Delta \omega = \left(\frac{1}{\sqrt{\alpha}}\right)\left(2\sqrt{\alpha}\right)=2

The product of the widths is a constant for all Gaussians. If we imagine the Gaussian time function getting progressively narrower, then its spectrum gets increasingly broader, and vice versa. This reciprocal time-frequency bandwidth relationship is an inherent property of the Fourier transform and is best put on a firm mathematical basis by defining the widths in a somewhat more complicated fashion than we have done here. With this more complicated definition of width, it can be shown (Bracewell, 1978) that

(width in t)(width in \omega) \geq 1/2 equation III

for all Fourier transform pairs. With this definition of width, the Gaussian pair satisfies the equality, a fairly unusual situation for a transform pair. This relationship between widths in the two domains is called the Uncertainty Principle in Quantum Mechanics, but it was well-known to mathematicians and electrical engineers well before the foundations of Quantum theory were developed.

Usually, it is sufficient to think of  the \Delta t \Delta \omega product as being roughly unity. For example, we can then observe that if a certain amplifier is designed to pass a 1 microsecond width pulse, then its bandwidth must be on the order of 1 MHz. Of course, the actual design requirements of such an amplifier depend on its purpose. Any practical bandwidth will necessarily distort the pulsed signal. The tolerance for such a distortion may depend on signal-to-noise considerations versus fidelity requirements.
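A numerical sketch of this width product for the 1/e half-widths (grid parameters are my own; the measured product is grid-limited but lands close to the Gaussian value of 2):

```python
import numpy as np

alpha = 3.0
N, dt = 16384, 0.002
t = (np.arange(N) - N // 2) * dt
f = np.exp(-alpha * t**2)                 # Gaussian; 1/e half-width is 1/sqrt(alpha)

w = 2 * np.pi * np.fft.fftfreq(N, dt)
F = np.abs(np.fft.fft(f))                 # the transform is again a Gaussian

# locate the 1/e points on the positive half of each axis
dt_e = t[t >= 0][np.argmin(np.abs(f[t >= 0] - 1 / np.e))]
dw_e = w[w >= 0][np.argmin(np.abs(F[w >= 0] / F.max() - 1 / np.e))]
print(dt_e, dw_e, dt_e * dw_e)            # ~0.577, ~3.46, product ~2
```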

As we introduce this repertoire of Fourier transform pairs, it will be instructive to apply some of the fundamental properties of the preceding blog/article to them. The convolution theorem, for example, immediately tells us that the convolution of two Gaussians is another, wider Gaussian. We see this easily by thinking of the operation in the other domain: there, multiplication of two Gaussians clearly produces another, narrower Gaussian.

The boxcar/sinc transform pair gives us a greater insight into both Fourier transforms and DSP than perhaps any other single transform pair. The Fourier transform of the boxcar function, shown in Fig 3a, is of the form (\sin x)/x. This type of function is commonly called a sinc function. Think of it as a sine wave that decays as 1/x for large x, and note that (\sin x)/x converges to 1 at x=0.

Taking the Fourier transform of the boxcar is a straightforward integration that you can easily do. The reverse, the inverse Fourier transform of the sinc function, turns out to require advanced integration techniques that are tangential to our discussion. Perhaps it is not surprising that this inverse Fourier transform:

boxcar =(1/2\pi)\int_{-\infty}^{\infty}\frac{2\sin (T\omega)}{\omega}e^{i \omega t}d\omega

requires some tricky integration, because it has a rather curious behaviour: its value is completely independent of t for t between -T and T. Then, when t exceeds this interval, the value drops abruptly to zero for all other t. This situation, where an ordinary continuous function is related to a discontinuous one, is a common occurrence among Fourier transform pairs. On the other hand, the forward Fourier transform,

sinc = \int_{-T}^{T}e^{-i \omega t}dt = \frac{2\sin(\omega T)}{\omega}

is an elementary definite integral relating two continuous functions.

Note, again, the relationship between the boxcar’s duration and its frequency bandwidth: if we define the sinc’s bandwidth by the first zero crossings, then \Delta t \Delta \omega=4\pi. Narrow pulses have wide bandwidths and vice-versa.
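A numerical sketch (grid parameters mine): approximate the Fourier integral of a boxcar of half-width T by a Riemann sum via the FFT, and compare with the analytic 2\sin(\omega T)/\omega at a few frequencies, including near the first zero at \omega = \pi/T:

```python
import numpy as np

N, dt, T = 2048, 0.01, 1.0
t = (np.arange(N) - N // 2) * dt
box = (np.abs(t) <= T).astype(float)

w = 2 * np.pi * np.fft.fftfreq(N, dt)
B = np.abs(np.fft.fft(box)) * dt               # Riemann-sum Fourier integral

for wk in (0.5, 1.0, 2.0, np.pi / T):          # last one sits at the first zero
    i = np.argmin(np.abs(w - wk))
    print(w[i], B[i], abs(2 * np.sin(w[i] * T) / w[i]))
```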

For another example of the utility of the convolution theorem, we discuss a theorem that originally arose in probability theory and was given the name Central Limit Theorem by George Pólya in 1920. The connection with probability occurs because the probability of drawing several numbers from a distribution that add up to a predetermined sum can be written as a convolution. The theorem, stated in terms of these convolutions, says that if a large number of functions are convolved together, the result tends toward the Gaussian function (the normal distribution of probability theory).

Let us see if the boxcar/sinc pair satisfies this theorem. In Fig 4, we show successive convolutions of the boxcar and the associated successive multiplications of the sinc in the frequency domain. The boxcar does appear to be approaching the shape of a Gaussian. The frequency domain version likewise appears to be approaching a Gaussian shape; indeed, it must, since the transform of a Gaussian is another Gaussian.

Now that we have demonstrated that the boxcar appears to satisfy the Central Limit Theorem, it is natural to ask if the sinc function does also. Convolving successive sinc functions can easily be done mentally by thinking of multiplying the boxcars in the other domain. Clearly, the product of any two boxcars is another boxcar, so convolving sinc functions with other sinc functions always produces sinc functions; the Gaussian is never approached.

The Central Limit Theorem applies to convolutions among a wide class of functions, but, as we have seen, it is not true for all functions. However, it does seem to apply to many naturally occurring functions. In these cases, the effect of the convolution operation is to produce a smeared-out result resembling the Gaussian shape, and frequently this approach to the bell-shaped curve is surprisingly fast.

On the other hand, many specially designed convolution operators of interest in signal processing do not  obey the central limit  theorem. Differential and inverse operators, for example, serve to sharpen a signal rather than smooth it. Some required properties of functions satisfying the central limit theorem can be found using the fundamental properties of the Fourier transform.

Another very important application of the boxcar/sinc pair arises when we think of the boxcar as truncating a data stream. Imagine, for example, that a radio astronomer wishes to analyze a signal received from a distant source for certain periodicities. The researcher can only record a small portion of this signal for spectral analysis. We can say that the data are recorded over a window in time, represented by multiplication of the natural signal by a boxcar of width 2T. The truncation of the data stream in the time domain manifests itself as a convolution of the data’s spectrum with the sinc function of width 2\pi/T (width between first zero crossings). The effect of this convolution is to smear out the frequency spectrum of the data, thereby reducing the resolution of the spectral analysis. This loss of resolution due to a finite data record is unavoidable in every experimental situation. If the data record is very long, the sinc function is very narrow. In the limit of an infinitely long data record, the sinc function associated with truncation becomes infinitesimally narrow; convolution with this spike is the same as convolution with a \delta function: it is the identity operation, and no frequency resolution is lost. A short data record, by contrast, means convolution with a very broad sinc function and a correspondingly severe loss of resolution. Later on, we will see important ramifications of this data truncation when we discuss the relationship of the DFT to the Fourier transform.
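A small sketch of this resolution loss (the 10 Hz test tone and record lengths are my own choices): the -3 dB width of the spectral peak of a truncated cosine shrinks in proportion to the record length, just as the 2\pi/T-wide sinc predicts:

```python
import numpy as np

def peak_width_hz(T, f0=10.0, dt=0.001, nfft=2 ** 18):
    """-3 dB width (Hz) of the spectral peak of cos(2*pi*f0*t) observed
    only on [-T, T); zero-padding to nfft interpolates the spectrum."""
    t = np.arange(-T, T, dt)
    X = np.abs(np.fft.rfft(np.cos(2 * np.pi * f0 * t), n=nfft))
    band = np.fft.rfftfreq(nfft, dt)[X >= X.max() / np.sqrt(2)]
    return band.max() - band.min()

print(peak_width_hz(T=1.0))    # ~0.44 Hz
print(peak_width_hz(T=2.0))    # ~0.22 Hz: double the record, half the smearing
```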

One effect of sampling analog data can be seen by looking at the boxcar/sinc pair shown in Fig 4. Here, the sampling time of an analog-to-digital converter is represented by a boxcar of width \Delta t = T (called the sampling aperture). In this worst case of A/D conversion, the signal is averaged over one whole sample period, thereby losing some high frequencies. The sampling is represented by a convolution of the ideal samples with the boxcar. In the frequency domain, shown in Fig 4b, this convolution becomes multiplication by a sinc function, which has a lowpass filtering effect, producing a roll-off that is down 2/\pi at the Nyquist frequency. This unavoidable distortion of the signal’s spectrum is called aperture error.
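One way to verify the 2/\pi figure (the normalization is mine): an aperture that averages over one full sample period T is a boxcar whose normalized frequency response, evaluated at the Nyquist frequency \omega = \pi/T, gives

```latex
H(\omega) = \frac{1}{T}\int_{-T/2}^{T/2} e^{-i\omega t}\,dt
          = \frac{\sin(\omega T/2)}{\omega T/2},
\qquad
H\!\left(\frac{\pi}{T}\right) = \frac{\sin(\pi/2)}{\pi/2}
                              = \frac{2}{\pi} \approx 0.64 .
```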

For our last example, extolling the importance of the boxcar/sinc pair, we look to an optical diffraction experiment. Consider monochromatic light waves impinging on the aperture shown in Fig 5a. Let the wave’s amplitude vary across the aperture according to A(x). The disturbance emanating from an element of the aperture dx is, using the complex exponential form of sinusoids,

A(x)e^{-i \omega t}dx

As the figure shows, the phase shift for the ray pictured is

\delta = (2 \pi x \sin \theta) /(\lambda)

which equals (2\pi x \theta)/\lambda for small \theta. The disturbance arriving at the screen from dx is then

A(x)e^{-i\omega t - i2\pi x \theta/\lambda}dx

The total illumination T(\theta) reaching the screen from the entire aperture is just the superposition of all these contributions:

T(\theta)= \int_{\text{aperture}} A(x)e^{-i \omega t - i2\pi x \theta /\lambda}dx

Hence, we get

T(\theta)=e^{-i \omega t}\int_{\text{aperture}} A(x)e^{-i2\pi x \theta /\lambda}dx

The complex exponential in front of the integral expresses the sinusoidal time dependence of the signal at every point on the screen. The integral is the amplitude of this signal versus \theta at each point on the screen; we recognize it as the Fourier transform of the amplitude across the aperture. Waves of any kind propagate to infinity via the Fourier transform. To avoid screens at infinity, a lens can be used between the aperture and the screen to focus rays on the focal plane of the lens, as shown in Fig 6b. Now the aperture, lens, screen, and a plane-wave light source form a simple and practical Fourier transformer. If a semitransparent film with its transmission varying as A(x) is placed over the aperture, the Fourier transform of A(x) is cast onto the screen. Before the days of the modern digital computer, clever data processing engineers and scientists used equipment of this type to perform Fourier transforms, convolutions, cross-correlations, and filtering.

The boxcar/sinc pair arises in the case where A(x) equals a constant across the aperture. For example, an astronomical telescope focused on Sirius, a binary star system in the constellation Canis Major, would project two sinc-type images onto its focal plane, one for each member of the system, as shown in Fig 6c. We know that the width of these sinc functions varies inversely with the aperture size. Furthermore, additional optics cannot increase the separation between these sinc patterns; they only magnify the entire picture. In the case of Sirius, it turns out that one star is considerably fainter than the other, and in most telescopes it gets lost in the side lobes of the sinc function of the brighter star. In this case, resolution is not the problem — it is the side lobes produced by the boxcar-like aperture. But a knowledge of Fourier transforms saves the day and allows the weaker star to be detected: by coating the objective lens (the aperture) so as to reduce its transmission gradually away from the center, the shape of the original sinc function can be altered to a new pattern having smaller side lobes. For example, if this coating reduced transmission in a Gaussian fashion, each star’s image would be Gaussian. Although the Gaussian does have a wider central peak than the sinc function, it has the virtue of lacking side lobes. The superposition of two Gaussians would then permit detection of the weaker peak, given sufficiently sensitive detection methods. This technique of tapering the transmission characteristics of optical instruments, called apodization, has been successfully applied to telescopes. It has an important counterpart in signal processing as well. In coming blog articles, we shall see several applications of tapering data streams, as opposed to truncating them boxcar fashion.
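A numerical sketch of apodization (the aperture length, padding, and Gaussian width are illustrative choices of mine): compare the highest side lobe of a uniform, boxcar-like aperture's far-field pattern with that of a Gaussian-tapered one:

```python
import numpy as np

def highest_sidelobe_db(aperture, pad=2 ** 16):
    """Highest side lobe of the aperture's Fourier (far-field) pattern,
    in dB relative to the main-lobe peak."""
    P = np.abs(np.fft.rfft(aperture, pad))
    P /= P.max()                               # main lobe peaks at zero frequency
    k = 0
    while k + 1 < len(P) and P[k + 1] < P[k]:  # walk down to the first null
        k += 1
    return 20 * np.log10(P[k:].max())

N = 512
x = np.linspace(-1, 1, N)
print(highest_sidelobe_db(np.ones(N)))                     # ~ -13 dB: boxcar
print(highest_sidelobe_db(np.exp(-0.5 * (x / 0.25)**2)))   # tens of dB lower
```

The boxcar's first side lobe sits only about 13 dB below the main peak, which is what swamps the faint companion; the Gaussian taper trades a somewhat wider main lobe for dramatically lower side lobes.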

Before leaving the discussion of the boxcar/sinc pair, we mention its Fourier dual, the sinc/boxcar pair in Fig 3b, which is, of course, easily obtained from the first pair by a relabelling of variables. One immediate significance of this sinc/boxcar pair for signal processing is that the boxcar represents an ideal lowpass filter in the frequency domain. We see that it requires an infinitely long time domain convolution operator to achieve this lowpass filter — a physical impossibility. We will see how to design substitute time domain operators of tolerable lengths in a later blog article.

More later…

Nalin Pithwa

 

The Doctrine of the Whole Mathematics

Let us digress to the spirit of mathematics. This is the view I hold and the reason for these blogs. It is adapted from “Non Analytic Aspects of Mathematics and their Implications for Research and Education” by P. J. Davis and J. A. Anderson, SIAM Review (1979), pp. 112-125. That article goes by the title “The Doctrine of the Whole Man”.

“Mathematics has elements that are spatial, kinesthetic, elements that are arithmetic or algebraic, elements that are verbal or programmatic. It has elements that are logical, didactic and elements that are intuitive, or even counter-intuitive. It has elements that are rational and elements that are irrational or  mystical. These may  be compared to different modes of consciousness.

To place undue emphasis on one element or group of elements upsets a balance. It results in an impoverishment of the science and represents an unfulfilled potential. The doctrine of the whole man says that we must bring everything we have to bear on our subject. We must not block off arbitrarily any mode of experience or thought. “There is a Nemesis,” says Alfred North Whitehead, “which waits upon those who deliberately avoid avenues of knowledge.”

We must realize that the future of  the subject depends only in part on the contribution of  those who have rigid establishment interest or training in  the subject. As regards this training and our own teaching, we must

1) restore geometry

2) restore kinesthetics and mechanics

3) restore combinatorics

4) restore intuitive and experimental mathematics

5) deemphasize somewhat the theorem-proof  style of lecturing

6) give a proper place to computing and programmatics

7) make full use of computer graphics

8) eliminate the  fear of metaphysics, recognizing that in  such principles may lie the seeds of future growth.

What we want to do is create as rich and diverse a brew of thought and action as we can. This is the kind of culture that has fostered mathematics in the past and should be our very present hope for the future.”

More later…

Nalin Pithwa

 

The Time-Limited Band-Limited Theorem

We have seen that the behaviour of signals and their Fourier transforms at infinity can be a major concern. Indeed, in any practical digital computing scheme, signals and their Fourier transforms have to be of limited length. Normally then, any general statement that can be made about the length of Fourier transform pairs will have considerable bearing on digital signal processing, both in theory and in practice.

Our discussion addresses time-limited and band-limited signals. A time-limited signal is one that is confined to a finite length of time and is zero outside that interval. For a time-limited signal, its total energy is contained in an interval of length 2a:

A= \int_{-a}^{a} |f(t)|^{2}dt

Likewise, a band-limited signal has its spectrum completely confined to a finite frequency interval, so that its total energy is

B= \int_{-b}^{b}|F(\omega)|^{2}d\omega

The time-limited band-limited theorem says that no signal can be both time-limited and band-limited, except for the trivial case where f(t) is identically equal to zero. We prove this important theorem by assuming f(t) is both time-limited and band-limited; then, we show that f(t) necessarily must be the null function. First, observe that not only is this signal

f(t)=\frac{1}{2\pi}\int_{-b}^{b}F(\omega)e^{i\omega t}d\omega =0 for |t| \geq a

but all of its derivatives must also be zero for |t| \geq a. Therefore, differentiating with respect to time under the integral sign n times and evaluating at t=a gives

\int_{-b}^{b}F(\omega)\omega^{n}e^{i\omega a}d\omega=0 for n=0,1,2 \ldots

Next, we write the inverse Fourier transform of a band-limited signal in a special form. For such a signal, we can write

f(t)=\frac{1}{2\pi}\int_{-b}^{b}F(\omega)e^{i\omega t}d\omega=\frac{1}{2\pi}\int_{-b}^{b}F(\omega)e^{i\omega (t-a)}e^{i\omega a}d\omega

Then, using the power series expansion for the exponential function allows term by term integration to give

f(t)= \frac{1}{2\pi}\sum_{n=0}^{\infty} \frac{(i(t-a))^{n}}{n!}\int_{-b}^{b}F(\omega)\omega^{n}e^{i\omega a }d\omega

as an alternative representation of a band-limited signal in terms of its spectrum. But we have shown that if the signal is also time-limited, each of the integrals in this sum is identically zero. Hence, f(t)=0 is the only function that can be both time-limited and band-limited.

The theorem immediately raises a spectre of a fundamental nature for digital signal processing because it says that every signal must be infinitely long either in time domain or in the frequency domain, or both. We will see the consequences of this requirement later where we develop the relationship  between continuous signals and their sampled counterparts.

So far, our discussion of the continuous Fourier integral transform has been on a general level, giving powerful theorems and properties applicable to a wide variety of continuous functions. Next, to exemplify these theorems and also to form a basis for further discussion, we introduce a repertoire of Fourier transforms particularly important to digital signal processing.

To continue later…

Nalin Pithwa

 

The Wiener-Khintchine Theorem

An important method of treating signals that do not decay at infinity, due independently to Wiener in 1930 and Khintchine in 1934, starts from the form of the convolution theorem given in the previous blog (Equation 15, reproduced again):

\mathcal{F} \int_{-\infty}^{\infty} f(\tau)f(t+\tau)d\tau = |F(\omega)|^{2}

The inverse Fourier transform of this equation is

\int_{-\infty}^{\infty}f(\tau)f(t+\tau)d\tau=\frac{1}{2\pi}\int_{-\infty}^{\infty}|F(\omega)|^{2}e^{i\omega t}d\omega equation 16

For many signals of interest, such as sinusoids, step functions, and random noise of fixed statistical properties, the autocorrelation integral on the left does not converge. But, if we define a truncated version of f(t) by

f_{T}(t)= f(t) if -T<t<T and f_{T}(t)=0 otherwise

then we can write

\int_{-\infty}^{\infty}f_{T}(\tau)f_{T}(t+\tau)d\tau =\frac{1}{2\pi}\int_{-\infty}^{\infty}|F_{T}(\omega)|^{2}e^{i\omega t} d\omega equation 17

where F_{T}(\omega) is the Fourier transform of f_{T}(t). Dividing by the time interval 2T and taking the limit, equation 17 becomes

\lim_{T \rightarrow \infty} \frac{1}{2T} \int_{-T}^{T}f(\tau)f(t+\tau)d\tau=\frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\lim_{T \rightarrow \infty} \frac{|F_{T}(\omega)|^{2}}{2T}\right]e^{i\omega t}d\omega

Wiener (1949) was able to show that, under the condition that the limit on the left exists, the limit inside of the right hand integral converges to a function:

P(\omega)=\lim_{T \rightarrow \infty} \frac{|F_{T}(\omega)|^{2}}{2T} equation 18

which we call the power spectral density of f. Using these revised definitions of autocorrelation,

\phi (t)=\lim_{T \rightarrow \infty} (1/2T) \int_{-T}^{T} f(\tau)f(\tau +t)d\tau equation 19

and power spectrum, our result now reads

\phi (t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}P(\omega)e^{i \omega t}d\omega equation 20

which is called the Wiener-Khintchine theorem: the autocorrelation is the inverse Fourier transform of the power spectrum. This is a very significant result, not a simple restatement of our starting point, equation 16. In equation 16, both f(t) and F(\omega) must be square integrable; that is, they must contain finite energy over all time and frequency. In equation 20, f(t) must only be sufficiently well behaved so that

\lim_{T \rightarrow \infty} \frac{1}{2T}\int_{-T}^{T} |f(t)|^{2}dt < \infty

That is to say, f(t) is only required to have finite power (signal squared per unit time) to have a power spectrum, but f(t) must have finite energy (that is, be square integrable or, with additional restrictions, be absolutely integrable) to possess a Fourier transform. Two classes of functions of interest, periodic functions and some types of random noise, satisfy the first condition but not the second.
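A discrete sanity check of the theorem (my own construction: a sinusoid in noise and the biased autocorrelation estimate). Arranged circularly, the FFT of the autocorrelation sequence reproduces the periodogram-style estimate |F_{T}|^{2}/N exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
x = np.cos(2 * np.pi * 0.1 * np.arange(N)) + 0.5 * rng.standard_normal(N)

# biased autocorrelation estimate, lags -(N-1)..(N-1); lag 0 sits at index N-1
r = np.correlate(x, x, mode="full") / N

# put lag 0 first (circular layout, length 2N), transform, compare to |X|^2 / N
c = np.concatenate([r[N - 1:], [0.0], r[:N - 1]])
lhs = np.fft.fft(c).real
rhs = np.abs(np.fft.fft(x, 2 * N)) ** 2 / N
print(np.allclose(lhs, rhs))                   # True
```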

We will exploit the Wiener-Khintchine theorem further when we discuss Power Spectral Estimation. Here, we have introduced it to show how Fourier integral theory can be generalized to include functions with infinite energy but finite power. Having presented this vignette of the theory of generalized Fourier integrals, we now feel free to abandon further convergence questions in our heuristic discussion of Fourier transform pairs.

More later…

Nalin Pithwa

 

The Continuous Fourier Integral Transform Part 2: Properties of the Fourier Integral Transform

Symmetry properties and a few simple theorems play a central role in our thinking about the DFT. These same properties carry over to the Fourier integral transform as the DFT sums go over into Fourier integrals. Since these properties are generally quite easy to prove (and, hopefully, you are already familiar with them), we will simply list them in this section. (The proofs are exercises for you :-))

The symmetry properties are, of course, crucial for understanding and manipulating Fourier transforms. They can be summarized by

\mathcal {F}f^{*}(t)=F^{*}(-\omega)  Equation 8a

and \mathcal{F}f(-t)=F(-\omega) Equation 8b

The application of these basic symmetry properties leads to

\mathcal{F}f^{*}(-t)=F^{*}(\omega)

and for two cases of special interest we can show that

if f(t) is real, then F(\omega) is Hermitian, F^{*}(\omega)=F(-\omega)

and if f(t) is imaginary, then F(\omega) is anti-Hermitian, F^{*}(-\omega)=-F(\omega)
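These statements are easy to check numerically; a quick numpy sketch for the real/Hermitian case (the test signal is an arbitrary choice of mine):

```python
import numpy as np

x = np.random.default_rng(1).standard_normal(256)   # an arbitrary real signal
X = np.fft.fft(x)

# Hermitian symmetry F*(w) = F(-w): bin N-k plays the role of -w
print(np.allclose(np.conj(X[1:]), X[1:][::-1]))     # True
```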

The similarity theorem results from a simple change of variable in the Fourier integral:

\mathcal{F}f(at)=\frac{1}{|a|}F\left(\frac{\omega}{a}\right) Equation 9

and likewise for the familiar shift theorem

\mathcal{F}f(t-\tau)=e^{-i\tau \omega}F(\omega) Equation 10

The power theorem which states that

2\pi\int_{-\infty}^{\infty}f(t)g^{*}(t)dt=\int_{-\infty}^{\infty}F(\omega)G^{*}(\omega)d\omega Equation 11

can be specialized to Rayleigh’s theorem by setting g=f

2\pi\int_{-\infty}^{\infty}|f(t)|^{2}dt=\int_{-\infty}^{\infty}|F(\omega)|^{2}d\omega Equation 12

In more mathematical works, Rayleigh’s theorem is sometimes called Plancherel’s theorem.

Of course, the important and powerful convolution theorem — meaning linear convolution — is valid in continuum theory  also:

\mathcal{F} \int_{-\infty}^{\infty}f(\tau)g(t-\tau)d\tau=F(\omega)G(\omega) Equation 13
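A quick numerical check of Equation 13 in its discrete guise (the random test sequences are mine; zero-padding makes the FFT's circular convolution agree with linear convolution):

```python
import numpy as np

rng = np.random.default_rng(2)
f, g = rng.standard_normal(100), rng.standard_normal(100)

direct = np.convolve(f, g)                     # linear convolution, length 199
M = len(f) + len(g) - 1                        # pad so circular equals linear
via_fft = np.fft.ifft(np.fft.fft(f, M) * np.fft.fft(g, M)).real
print(np.allclose(direct, via_fft))            # True
```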

The many variations of the convolution theorem arising from the symmetry properties of the Fourier transform apply as well. For example, the autocorrelation theorem is as follows:

\mathcal{F}\int_{-\infty}^{\infty}f(\tau)g(t+\tau)d\tau=F^{*}(\omega)G(\omega) Equation 14

(valid as written for real f(t)). The function F^{*}G is called the cross-power spectrum. When we set g=f, this equation states the important result that the Fourier transform of the autocorrelation of a function is its power spectrum:

\mathcal{F}\int_{-\infty}^{\infty}f(\tau)f(t+\tau)d\tau=|F(\omega)|^{2} Equation 15

The formal similarity between these continuous-theory properties and those of the DFT makes them easy to remember and to visualize. But there are essential differences between the two. The DFT, with its finite sum, has no convergence questions. The Fourier integral transform, on the other hand, has demanded the attention of some of our greatest mathematicians to elucidate its convergence properties. As we know, the absolute integrability condition is only a start; it can be relaxed — quite easily at the heuristic level — to include the sine wave/\delta function pair. The sine wave’s ill behaviour is characteristic of a wide class of functions of interest in DSP that do not decay sufficiently fast at infinity to possess a Fourier transform in the normal sense.

More later…

Nalin Pithwa