More on Motion Control and DSP

Introduction to the TMS320LF2407 DSP Controller.

The Texas Instruments TMS320LF2407 DSP Controller, referred to as the LF2407 in this article, is a programmable digital controller with a C2xx DSP CPU as its core processor. The LF2407 integrates the DSP core, on-chip memory, and a set of useful peripherals onto a single piece of silicon. With the DSP core and control-oriented peripherals integrated into a single chip, users can design very compact and cost-effective digital control systems.

The LF2407 DSP controller offers 40 MIPS of performance. This high processing speed of the C2xx CPU allows users to compute parameters in real time rather than look up approximations from tables stored in memory. Such performance is well suited to applications such as notch filters or sensorless motor control algorithms, where a large number of calculations must be completed quickly.

While the brain of the LF2407 DSP is the C2xx core, the LF2407 contains several control-oriented peripherals onboard. These peripherals make virtually any digital control requirement possible. Their applications range from analog-to-digital conversion to pulse width modulation (PWM) generation. Communications peripherals make it possible to communicate with external peripherals, personal computers, or other DSP processors.

The LF2407 peripheral set includes:

• Two Event Managers (A and B)
• General Purpose (GP) Timers
• PWM generators for digital motor control
• Analog to Digital Converter
• Controller Area Network (CAN) interface
• Serial Peripheral Interface (SPI) — synchronous serial port
• Serial Communications Interface (SCI) — asynchronous serial port
• General purpose bidirectional digital I/O (GPIO) pins
• Watchdog Timer (“time-out” DSP reset device for system integrity)

Brief Introduction to Peripherals:

The following peripherals are integrated onto the LF2407 chip.

Event Managers (EVA, EVB)

There are two event managers on the LF2407, EVA and EVB. The Event Manager is the most important peripheral in digital motor control, containing the functions needed to control electromechanical devices. Each EV is composed of functional “blocks” including timers, comparators, capture units for triggering on an event, PWM logic circuits, quadrature encoded pulse (QEP) circuits, and interrupt logic.

The Analog to Digital Converter (ADC)

The ADC on the LF2407 is used whenever an external analog signal needs to be sampled and converted to a digital number. Examples of ADC applications range from sampling a control signal for use in a digital notch filter algorithm to using the ADC in a control feedback loop to monitor motor performance. The ADC is also valuable in motor control applications because it allows current sensing with an inexpensive shunt resistor instead of a more expensive dedicated current sensor.

The Controller Area Network (CAN) module:

While we will discuss the CAN module in a later blog, it is a useful peripheral for specific applications of the LF2407. The CAN module provides multi-master serial communication with external hardware. CAN offers a high level of data integrity and is ideal for operation in electrically noisy environments, such as an automobile or an industrial plant, that require reliable communication and data integrity.

Serial Peripheral Interface (SPI) and Serial Communications Interface (SCI):

The SPI is a high-speed synchronous communications port that is mainly used for communicating between the DSP and external peripherals or another DSP device. Typical uses of the SPI include communications with external shift registers, display drivers, or ADCs.

The SCI is an asynchronous communications port that supports asynchronous serial (UART) digital communications between the CPU and other asynchronous peripherals that use the standard NRZ (non-return-to-zero) format. Since these communications peripherals are not directly related to motor control applications, they will not be discussed further in this article.

Watchdog Timer

The Watchdog Timer (WD) monitors software and hardware operations and asserts a system reset when its internal counter overflows. When enabled, the WD timer counts for a specific amount of time, and the user's software must reset it periodically so that an unwanted reset does not occur. If the software enters an endless loop or the CPU becomes temporarily disrupted, the WD timer will overflow and a DSP reset will occur, causing the DSP program to branch to its initial starting point. Most error conditions that temporarily disrupt chip operation and inhibit proper CPU function can be cleared by the WD in this way, increasing the reliability of the CPU and ensuring system integrity.

General Purpose Bi-directional Digital I/O (GPIO) pins

Since there are only a finite number of pins available on the LF2407 device, many of the pins are multiplexed between a primary function and a secondary GPIO function. In most cases, a pin's second function is as a general-purpose input/output pin. The GPIO capability of the LF2407 is very useful as a means of controlling the functionality of pins, and it also provides another way to move data into and out of the device. Nine 16-bit control registers govern all the I/O and shared pins. There are two types of these registers:

• I/O MUX Control Registers (MCRx) — used to control the multiplexer selection that chooses between a pin's primary function and its general-purpose I/O function.
• Data and Direction Control Registers (PxDATDIR) — used to control the data and data direction of bidirectional I/O pins.

Joint Test Action Group (JTAG) Port

The JTAG port provides a standard method of interfacing a personal computer with the DSP controller for emulation and development. The XDS510PP or equivalent emulator pod provides the connection between the JTAG module on the LF2407 and the personal computer. The JTAG module allows the PC to take full control of the DSP processor while Code Composer Studio is running. The diagram below shows the connection scheme from the computer to the DSP board.

$\text{Computer Parallel Port} \Longrightarrow \text{XDS510PP Plus Emulator} \Longrightarrow \text{TI LF2407 Evaluation Module}$

Phase Locked Loop (PLL) Clock Module

The phase locked loop (PLL) module is essentially an input clock multiplier that allows the user to control the clock frequency supplied to the DSP core. A clock reference (an oscillator or crystal) external to the LF2407 generates a signal that is fed into the chip and multiplied or divided by the PLL. The resulting higher- or lower-frequency clock signal is then used to clock the DSP core. The LF2407's PLL allows the user to select a multiplication factor ranging from 0.5x to 4x the external clock frequency; the default value is 4x.
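As a worked example of the clocking arithmetic (assuming a 10 MHz external crystal, a value chosen purely for illustration), the default 4x multiplier yields the 40 MIPS figure quoted earlier:

$f_{CPU} = f_{OSC} \times k_{PLL} = 10 \, \text{MHz} \times 4 = 40 \, \text{MHz}$

Since the C2xx core executes most instructions in a single cycle, a 40 MHz core clock corresponds to roughly 40 MIPS.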

Memory Allocation Space

The LF2407 DSP Controller has three different memory spaces it can use: data, program, and I/O space. Data space is used for program calculations, look-up tables, and any other memory used by an algorithm; data memory can be on-chip RAM or external RAM. Program space holds the user's program code. Depending on the logic value of the MP/MC- pin, program memory is mapped either to off-chip memory (MP/MC- = 1) or to the on-chip flash memory (MP/MC- = 0).

I/O space is not really memory but a set of virtual addresses used to exchange data with peripherals external to the LF2407. For example, the digital-to-analog converter (DAC) on the Spectrum Digital Evaluation Module is accessed through I/O space. To output data to the DAC, the data is simply written to the configured I/O-space address with the “OUT” instruction. The process is similar to writing to data memory, except that the OUT instruction is used and the data is driven out to the DAC instead of being stored in memory.

Types of Physical Memory.

Random Access Memory (RAM):

The LF2407 has 544 words of 16 bits each of on-chip dual-access RAM (DARAM). These 544 words are partitioned into three blocks: B0, B1, and B2. Blocks B1 and B2 are allocated for use only as data memory. Block B0 is different: it is normally configured as data memory and hence primarily holds data, but it can also be configured as program memory. B0 is mapped to program or data space depending on the value of the core-level CNF bit.

• (CNF =0) maps B0 to data memory.
• (CNF=1) maps B0 to program memory.

The LF2407 also has 2K words of single-access RAM (SARAM). The addresses associated with the SARAM can be used for both data memory and program memory, and are software-configurable to map to the internal SARAM or to external memory.

Non-Volatile Flash Memory

The LF2407 contains 32K words of on-chip flash memory that can be mapped to program space if the MP/MC- pin is made logic 0 (tied to ground). The flash provides a permanent location to store code that is unaffected by cutting power to the device, and it can be electronically programmed and erased many times during code development. Usually, the external RAM on the LF2407 Evaluation Module (EVM) board is used instead of the flash for code development, because a separate “flash programming” routine must be run to load code into the flash. The on-chip flash is normally used when the DSP program needs to be tested where a JTAG connection is not practical, or where the DSP must run as a “stand-alone” device. For example, if an LF2407 were used to develop a DSP control solution for an automobile braking system, it would be somewhat impractical to have a DSP/JTAG/PC interface in a car undergoing performance testing.

More later,

Nalin Pithwa

DSP Processors, Embodiments and Alternatives: Part I

Until now, we have described digital signal processing in general terms, focusing on DSP fundamentals, systems, and application areas. Now we narrow our focus to DSP processors. We begin with a high-level description of the features common to virtually all DSP processors. We then describe typical embodiments of DSP processors, and briefly discuss alternatives to DSP processors, such as general-purpose microprocessors, microcontrollers, and FPGAs, for comparison purposes. The next several blogs provide a detailed treatment of DSP processor architectures and features.

So, what are the “special” things that a DSP processor can do? Well, as the name says, DSP processors do signal processing very well. What does “signal processing” mean? Really, it's a set of algorithms for processing signals in the digital domain. There are analog equivalents to these algorithms, but processing them digitally has proven to be more efficient, and that has been the trend for many years. Signal processing algorithms are the basic building blocks for many applications in the world, from cell phones to MP3 players, digital still cameras, and so on. A summary of these algorithms is shown below:

• FIR Filter: $y(n)=\sum_{k=0}^{N}a_{k}x(n-k)$
• IIR Filter: $y(n)=\sum_{k=0}^{M}a_{k}x(n-k)+\sum_{k=1}^{N}b_{k}y(n-k)$
• Convolution: $y(n)=\sum_{k=0}^{N}x(k)h(n-k)$
• Discrete Fourier Transform: $X(k)=\sum_{n=0}^{N-1}x(n)\exp{[-j(2 \pi /N)nk]}$
• Discrete Cosine Transform: $F(u)=\sum_{x=0}^{N-1}c(u) \, f(x) \cos{[\frac{\pi}{2N}u(2x+1)]}$

One or more of these algorithms are used in almost every signal processing application. FIR and IIR filters are fundamental to almost any DSP application: they remove unwanted noise from signals being processed. Convolution algorithms are used to look for similarities in signals, discrete Fourier transforms represent signals in formats that are easier to process, and discrete cosine transforms are used in image processing applications. We will discuss the details of some of these algorithms later, but there are things to notice about this entire list of algorithms. First, they all have a summing operation, the $\sum$ function. In the computer world, this is an accumulation of a large number of elements, implemented with a “for” loop. DSP processors are designed to have large accumulators because of this characteristic; they are specialized in this way. DSPs also have special hardware to perform the “for” loop operation so that the programmer does not have to implement it in software, which would be much slower.

The algorithms above also have multiplication of two different operands. Logically, if we were to speed up this operation, we would design a processor to accommodate the multiplication and accumulation of two operands like this very quickly. In fact, this is what has been done with DSPs. They are designed to support the multiplication and accumulation of data sets like this very quickly; for most processors, in just one cycle. Since these algorithms are very common in most DSP applications, tremendous execution savings can be obtained by exploiting these processor optimizations.

There are also inherent structures in DSP algorithms that allow them to be separated and operated on in parallel. Just as in real life, if I can do more things in parallel, I can get more done in the same amount of time. As it turns out, signal processing algorithms have this characteristic as well. So, we can take advantage of this by putting multiple orthogonal (nondependent) execution units in our DSP processors and exploit this parallelism when implementing these algorithms.

DSP processors must also add some reality to the mix of algorithms shown above. Take the IIR filter described above. You may be able to tell just by looking at the algorithm that there is a feedback component: previous outputs feed back into the calculation of the current output. Whenever you deal with feedback, there is always an inherent stability issue, and IIR filters can become unstable just like other feedback systems. Careless implementation of feedback systems like the IIR filter can cause the output to oscillate instead of asymptotically decaying to zero (the desired behaviour). This problem is compounded in the digital world, where we must deal with finite word lengths, a key limitation of all digital systems. We can alleviate this with saturation checks in software, or use a specialized instruction to do it for us. DSP processors, because of the nature of signal processing algorithms, provide specialized saturation and underflow/overflow instructions to deal with these conditions efficiently.

There is more I can say about this, but you get the point. Specialization is really what DSP processors are about: they are specifically designed to do signal processing very well. DSP processors may not be as good as other processors at non-signal-processing algorithms (that's fine; I am not any good at medicine either). So it's important to understand your application and choose the right processor. (A previous blog about DACs and ADCs mentioned this issue.)

(We describe below common features of DSP processors.)

Dozens of families of DSP processors are available on the market today. The salient features of some commonly used families of DSP processors are summarized in Table 1. Throughout this series, we will use these processors as examples to illustrate the architectures and features found in commercial DSP processors.

Most DSP processors share some common features designed to support repetitive, numerically intensive tasks. The most important of these features are introduced briefly here. Each of these features and many others will be examined in greater detail in this blog article series.

Table 1.

$\begin{tabular}{|c|c|c|c|c|} \hline Vendor & Processor Family & Arithmetic Type & Data Width & Speed (MIPS or MHz)\\ \hline Analog Devices & ADSP 21xx & Fixed Point & 16 bit & 25 MIPS\\ \hline Texas Instruments & TMS320C55x & Fixed Point & 16 bit & 50 MHz to 300 MHz \\ \hline \end{tabular}$

Fast Multiply Accumulate

The most often cited feature of DSP processors is the ability to perform a multiply-accumulate operation (often called a MAC) in a single instruction cycle. The multiply-accumulate operation is useful in algorithms that involve computing a vector or matrix product, such as digital filters, correlation, convolution, and Fourier transforms. To achieve this functionality, DSP processors include a multiplier and accumulator integrated into the main arithmetic processing unit (called the data path) of the processor. In addition, to allow a series of multiply-accumulate operations to proceed without the possibility of arithmetic overflow, DSP processors generally provide extra bits in their accumulator registers to accommodate growth of the accumulated result. DSP processor data paths will be discussed in detail in a later blog.

Multiple Access Memory Architecture.

A second feature shared by most DSP processors is the ability to complete several accesses to memory in a single instruction cycle. This allows the processor to fetch an instruction while simultaneously fetching operands for the instruction or storing the result of the previous instruction to memory. High bandwidth between the processor and memory is essential for good performance when repetitive, data-intensive operations are required in an algorithm, as is common in many DSP applications.

In many processors, single-cycle multiple memory accesses are subject to restrictions. Typically, all but one of the memory locations must reside on-chip, and multiple memory accesses can take place only with certain instructions. To support simultaneous access of multiple memory locations, DSP processors provide multiple on-chip buses, multiported on-chip memories, and in some cases multiple independent memory banks. DSP memory structures are quite distinct from those of general-purpose processors and microcontrollers. DSP processor memory architectures will be investigated in detail later.

Specialized Execution Control.

Because many DSP algorithms involve performing repetitive computations, most DSP processors provide special support for efficient looping. Often, a special loop or repeat instruction is provided that allows the programmer to implement a for-next loop without expending any instruction cycles for updating and testing the loop counter or for jumping back to the top of the loop.

Some DSP processors provide other execution control features to improve performance, such as context switching and low-latency, low-overhead interrupts for fast input/output.

Hardware looping and interrupts will also be discussed later in this blog series.

Peripheral and Input/Output Interfaces

To allow low-cost, high performance input and output (I/O), most DSP processors incorporate one or more serial or parallel I/O interfaces, and specialized I/O handling mechanisms such as Direct Memory Access (DMA). DSP processor peripheral interfaces are often designed to interface directly with common peripheral devices like analog-to-digital and digital-to-analog converters.

As integrated circuit manufacturing techniques have improved in terms of density and flexibility, DSP processor vendors have included not just peripheral interfaces, but complete peripheral devices on-chip. Examples of this are chips designed for cellular telephone applications, several of which incorporate a DAC and ADC on chip.

Various features of DSP processor peripherals will be described later in this blog series.

In the next blog, we will discuss DSP Processor embodiments.

More later,

Aufwiedersehen,

Nalin Pithwa

A Brief Word on Motion Control and DSP

DSP-based electromechanical motion control is very hard to implement in real-life engineering systems.

So, why would we choose to integrate a DSP into a motion control system? Well, the advantages of such a design are numerous. DSP-based control gives us a large degree of freedom in developing computationally intensive algorithms that would otherwise be difficult or impossible without a DSP. Advanced control algorithms can sometimes drastically increase the performance and efficiency of the electromechanical system being controlled.

For example, consider a typical heating, ventilation, and air-conditioning (HVAC) system. A standard HVAC system contains at least three electric motors: the compressor motor, the condenser fan motor, and the air handler fan motor. Typically, indoor temperature is controlled by simply cycling (turning on and off) the system. This control method puts unnecessary wear on system components and is inefficient. An advanced motor drive system incorporating DSP control could continuously adjust both the air-conditioner compressor speed and the indoor fan to maintain the desired temperature and optimal system performance. This control scheme would be much more energy efficient and could extend the operational lifespan of the system.

More later,

Nalin Pithwa

DSP Processors, Embodiments and Alternatives: Part II

DSP Processor Embodiments:

The most familiar form of DSP processor is the single chip processor, which is incorporated into a printed circuit board design by the system designer. However, with the widespread proliferation of DSP processors into many kinds of applications, the increasing level of integration in all kinds of DSP products, and the development of new packaging techniques, DSP processors can now be found in many different forms, sometimes masquerading as something else entirely. In this blog, we briefly discuss some of the forms that DSP processors take.

Multichip Modules.

A multi-chip module (MCM) is, generically, an electronic assembly (such as a package with a number of conductor terminals or “pins”) in which multiple integrated circuits (ICs), semiconductor dies, and/or other discrete components are integrated, usually onto a unifying substrate, so that in use it is treated as a single component (as though it were a larger IC). Other terms, such as “hybrid” or “hybrid integrated circuit,” also refer to MCMs.

One advantage of this approach is higher packaging density: more circuits per square inch of printed circuit board, which in turn results in increased operating speed and reduced power dissipation. As multichip module packaging technology advanced, vendors began to offer multichip modules containing DSP processors.

For example, Texas Instruments sells the 42 dual C40 multichip module (MCM) containing two SMJ320C40 digital signal processors (DSPs) with 128K words × 32 bits (42D) or 256K words × 32 bits (42C) of zero-wait-state SRAMs mapped to each local bus. Global address and data buses with two sets of control signals are routed externally for each processor, allowing external memory to be accessed. The external global bus provides a continuous address reach of 2G words.

It delivers a whopping 80 million floating-point operations per second (MFLOPS), with a 496-MBps burst I/O rate for 40-MHz modules.

Multiple Processors on a Chip.

As IC manufacturing technology has become more sophisticated, DSP vendors can now squeeze more features and performance onto a single-chip processor, and they even combine multiple processors on a single IC. As with multichip modules, multiprocessor chips provide increased performance and reduced power compared with designs using multiple, separately packaged processors. However, the selection of multiprocessor chip offerings is limited to only a few devices.

Chip Sets

In a computer system, a chipset is a set of electronic components in an integrated circuit that manages the data flow between the processor, memory, and peripherals. It is usually found on the motherboard. Chipsets are usually designed to work with a specific family of microprocessors. Because it controls communications between the processor and external devices, the chipset plays a crucial role in determining system performance.

DSP chip sets seemed to follow the move toward processor integration in PCs. While some manufacturers combine multiple processors on a single chip, and others use multichip modules to combine multiple chips into one package, another variation on DSP processor packaging is to divide the DSP into two or more separate packages. This was the approach Butterfly DSP took with their DSP chip set, which consisted of the LH9320 address generator and the LH9124 processor. Dividing the processor into two or more packages may make sense if the processor is very complex and the number of input/output pins is very large. Splitting functionality into multiple integrated circuits may allow the use of much less expensive IC packages and thereby provide cost savings. This approach also adds flexibility, allowing the system designer to combine the individual ICs in the configuration best suited to the application. For example, with the Butterfly chip set, multiple address generator chips could be used in conjunction with one processor chip. Finally, chip sets have the potential to provide more I/O pins than individual chips. In the case of the Butterfly chip set, the use of separate address generator and processor chips allowed the processor to have eight 24-bit external data buses, many more than common single-chip processors provide.

We will continue with DSP cores in a later blog,

Nalin Pithwa

Digital Signal Processing and DSP Systems

We can look upon a digital signal processing (DSP) system as any electronic system making use of digital signal processing. Our informal definition of digital signal processing is the application of mathematical operations to digitally represented signals. Signals are represented digitally as sequences of samples. Often, these samples are obtained from physical signals (for example, audio signals) through the use of transducers (such as microphones) and analog-to-digital converters. After mathematical processing, digital signals may be converted back to physical signals via digital-to-analog converters.

In some systems, the use of DSP is central to the operation of the system. For example, modems and digital cellular phones or the so-called smartphones rely very heavily on DSP technology. In other products, the use of DSP is less central, but often offers important competitive advantages in terms of features, power/performance and price. For example, manufacturers of primarily analog consumer electronics devices like audio amplifiers are employing DSP technology to provide new features.

In this little article, a high level overview of digital signal processing is presented. We first discuss the advantages of DSP over analog systems. We then describe some salient features and characteristics of DSP systems in general. We conclude with a brief look at some important classes of DSP applications.

1.1) Advantages of DSP

Digital signal processing enjoys several advantages over analog signal processing. The most significant is that DSP systems can inexpensively accomplish tasks that would be difficult or even impossible using analog electronics. Examples of such applications include speech synthesis, speech recognition, and high-speed modems involving error-correction coding. All of these tasks involve a combination of signal processing and control (for example, making decisions regarding received bits or received speech) that is extremely difficult to implement using analog techniques.

DSP systems also enjoy two additional advantages over analog systems.

• Insensitivity to environment: Digital systems, by their very nature, are considerably less sensitive to environmental conditions than analog systems. For example, an analog circuit's behaviour depends on its temperature. In contrast, barring catastrophic failures, a DSP system's operation does not depend on its environment: whether in the snow or in the desert, a DSP system delivers the same response.
• Insensitivity to component tolerances: Analog components are manufactured to particular tolerances; a resistor, for example, might be guaranteed to have a resistance within one percent of its nominal value. The overall response of an analog system depends on the actual values of all the analog components used. As a result, two analog systems of exactly the same design will have slightly different responses due to slight variations in their components. In contrast, correctly functioning digital components always produce the same outputs given the same inputs.

These two advantages combine synergistically to give DSP systems an additional advantage over analog systems:

• Predictable, repeatable behaviour. Because a DSP system’s output does not vary due to environmental factors or component variations, it is possible to design systems having exact, known responses that do not vary.

Finally, some DSP systems may also have two other advantages over analog systems.

• Reprogrammability: If a DSP system is based on programmable processors, it can be reprogrammed, even in the field, to perform other tasks. In contrast, analog systems require physically different components to perform different tasks.
• Size: The size of analog components varies with their values; for example, a 100-microfarad capacitor used in an analog filter is physically larger than a 10-picofarad capacitor used in a different analog filter. In contrast, DSP implementations of both filters might well be the same size (indeed, they might even use the same hardware, differing only in their filter coefficients) and might be smaller than either of the two analog implementations.

These advantages, coupled with the fact that DSP can take advantage of ever-denser VLSI manufacturing processes, increasingly make DSP the solution of choice for signal processing.

1.2) Characteristics of DSP Systems:

In this section, we describe a number of characteristics common to all DSP systems, such as algorithms, sample rate, clock rate and arithmetic type.

Algorithms

DSP systems are often characterized by the algorithms they use. The algorithm specifies the arithmetic operations to be performed but does not specify how that arithmetic is to be implemented: it might be implemented in software on an ordinary microprocessor or a programmable signal processor, or it might be implemented in a custom IC. The selection of an implementation technology is determined in part by the required speed and arithmetic precision. The table below lists some common types of DSP algorithms and some applications in which they are typically used.

Table 1-1: Common DSP algorithms and typical applications

$\begin{tabular}{|l|l|} \hline DSP Algorithm & System Application \\ \hline Speech coding and decoding & Cell phones, personal comm systems, secure comm \\ \hline Speech encryption and decryption & Cell phones, personal comm systems, secure comm \\ \hline Speech recognition & Multimedia workstations, robotics, automotive applications \\ \hline Speech synthesis & Multimedia PCs, advanced user interfaces, robotics \\ \hline Speaker identification & Security, multimedia workstations, advanced user interfaces \\ \hline Hi-fi audio encoding and decoding & Consumer audio, consumer video, digital audio broadcast, professional audio, multimedia computers \\ \hline Modem algorithms & Digital cellular phones, GPS, data/fax modems, secure comm \\ \hline Noise cancellation & Professional audio, advanced vehicular audio, industrial applications \\ \hline Audio equalization & Consumer audio, professional audio, advanced vehicular audio, music \\ \hline Ambient acoustics emulation & Consumer audio, professional audio, advanced vehicular audio, music \\ \hline Audio mixing and editing & Professional audio, music, multimedia computers \\ \hline Sound synthesis & Professional audio, music, multimedia PCs, advanced user interfaces \\ \hline Vision & Security, multimedia PCs, instrumentation, robotics, navigation \\ \hline Image compression and decompression & Digital photos, digital video, video-over-voice \\ \hline Image compositing & Multimedia computers, consumer video, navigation \\ \hline Beamforming & Navigation, medical imaging, radar, sonar, signals intelligence \\ \hline Echo cancellation & Speakerphones, modems, telephone switches \\ \hline Spectral estimation & Signals intelligence, radar, sonar, professional audio, music \\ \hline \end{tabular}$

Sample Rates

A key characteristic of a DSP system is its sample rate: the rate at which samples are consumed, produced, or processed. Combined with the complexity of the algorithms, the sample rate determines the required speed of the implementation technology. A familiar example is the digital audio compact disc (CD) player, which produces samples at a rate of 44.1 kHz on two channels.

Of course, a DSP system may use more than one sample rate; such systems are said to be multirate DSP systems. An example is a converter from the CD sample rate of 44.1 kHz to the digital audio tape (DAT) rate of 48 kHz. Because of the awkward ratio between these sample rates, the conversion is usually done in stages, typically with at least two intermediate sample rates. Another example of a multirate algorithm is a filter bank, used in applications such as speech, audio, and video encoding and some signal analysis algorithms. Filter banks typically consist of stages that divide the signal into high- and low-frequency portions. These new signals are then downsampled and divided again. In multirate applications, the ratio between the highest and lowest sample rates in the system can become quite large, sometimes exceeding 100,000.
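As an illustrative sketch (not from the original text), the awkward 44.1 kHz to 48 kHz ratio reduces to 160/147, so a polyphase resampler can perform the conversion; here is what that might look like with SciPy's `resample_poly` (the test signal and duration are made up for the example):

```python
import numpy as np
from scipy.signal import resample_poly

fs_cd, fs_dat = 44_100, 48_000
# 48000/44100 reduces to 160/147, so the polyphase resampler
# upsamples by 160 and downsamples by 147 in one pass.
t = np.arange(4410) / fs_cd             # 0.1 s of CD-rate samples
x = np.sin(2 * np.pi * 1000 * t)        # 1 kHz test tone
y = resample_poly(x, up=160, down=147)  # now at the DAT rate

print(len(x), len(y))  # 4410 samples in, 4800 samples out
```

In practice, as the text notes, such converters are often built from cascaded stages with intermediate rates; the single-stage form above is only the simplest sketch.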

The range of sample rates encountered in signal processing systems is huge: roughly speaking, sample rates for applications range over 12 orders of magnitude. Only at the very top of that range is digital implementation rare, because the cost and difficulty of implementing a given algorithm digitally increase with the sample rate. DSP algorithms used at higher sample rates therefore tend to be simpler than those used at lower sample rates.

Many DSP systems must meet extremely rigorous speed goals, since they operate on lengthy segments of real-world signals in real time. Whereas other kinds of systems (like databases) may be required to meet performance goals only on average, real-time DSP systems often must meet such goals in every instance. In such systems, failure to maintain the necessary processing rates is considered a serious malfunction; such systems are said to be subject to hard real-time constraints. For example, suppose that the compact disc to digital audio tape sample rate converter discussed above is to be implemented as a real-time system, accepting digital signals at the CD sample rate of 44.1 kHz and producing digital signals at the DAT sample rate of 48 kHz. The converter must be ready to accept a new sample from the CD source every 22.7 microseconds and must produce a new output sample for the DAT device every 20.8 microseconds. If the system ever fails to accept or produce a sample on this schedule, data are lost and the resulting output signal is corrupted. The need to meet such hard real-time constraints creates special challenges in the design and debugging of real-time DSP systems.
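The per-sample deadlines quoted above follow from simple arithmetic on the two rates; a quick check:

```python
# Per-sample deadlines implied by the CD and DAT rates.
cd_period_us = 1e6 / 44_100   # time between input samples, microseconds
dat_period_us = 1e6 / 48_000  # time between output samples, microseconds

print(round(cd_period_us, 1), round(dat_period_us, 1))  # 22.7 20.8
```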

Clock Rates

Digital electronic systems are often characterized by their clock rates. The clock rate usually refers to the rate at which the system performs its most basic unit of work. In mass-produced commercial products, clock rates of up to 100 MHz are common, with faster rates found in some high-performance products. For DSP systems, the ratio of system clock rate to sample rate is one of the most important characteristics used to determine how the system will be implemented. The relationship between the clock rate and the sample rate partially determines the amount of hardware needed to implement an algorithm of a given complexity in real time. As the ratio of sample rate to clock rate increases, so do the amount and complexity of hardware required to implement the algorithm.

Numeric Representations

Arithmetic operations such as addition and multiplication are at the heart of DSP algorithms and systems. As a result, the numeric representations and type of arithmetic used can have a profound influence on the behaviour and performance of a DSP system. The most important choice for the designer is between fixed point and floating point arithmetic. Fixed point arithmetic represents numbers in a fixed range (for example, $-1.0$ to $+1.0$) with a finite number of bits of precision (called the word width). For example, an eight-bit fixed point number provides a resolution of $1/256$ of the range over which the number is allowed to vary. Numbers outside of the specified range cannot be represented; arithmetic operations that would result in a number outside this range either saturate (that is, are limited to the largest positive or negative representable value) or wrap around (that is, the extra bits from the arithmetic operation are discarded).
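As a sketch of these two overflow behaviours, the following models an 8-bit two's-complement (Q7) fixed point format; the helper names are mine, not from any particular DSP library:

```python
SCALE = 1 << 7  # Q7 format: 1 sign bit, 7 fraction bits, range [-1.0, +1.0)

def to_q7(x):
    """Quantize a real number in [-1, 1) to a Q7 integer."""
    return int(round(x * SCALE))

def saturate(acc):
    """Clamp to the largest positive/negative representable value."""
    return max(-128, min(127, acc))

def wrap(acc):
    """Discard the extra bits (two's-complement wraparound)."""
    return ((acc + 128) % 256) - 128

a, b = to_q7(0.75), to_q7(0.5)  # 96 and 64; the true sum 1.25 overflows
s = a + b                        # 160, outside the representable [-128, 127]
print(saturate(s) / SCALE)       # 0.9921875  (clamped just below +1.0)
print(wrap(s) / SCALE)           # -0.75      (wrapped to a wrong value)
```

The wraparound result is dramatically wrong, which is why saturating arithmetic is usually preferred in signal paths.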

Floating point arithmetic greatly expands the representable range of values. Floating point arithmetic represents every number in two parts: a mantissa and an exponent. The mantissa is, in effect, forced to lie between -1.0 and +1.0, while the exponent keeps track of the amount by which the mantissa must be scaled (in terms of powers of two) in order to create the actual value represented. That is,

$value=mantissa \times 2^{exponent}$

Floating point arithmetic provides much greater dynamic range (that is, the ratio between the largest and smallest values that can be represented) than fixed point arithmetic. Because it reduces the probability of overflow and the necessity of scaling, it can considerably simplify algorithm and software design. Unfortunately, floating point arithmetic is generally slower and more expensive than fixed point arithmetic, and is more complicated to implement in hardware.
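Python's standard library exposes exactly this mantissa/exponent decomposition through `math.frexp`, which is a convenient way to see the split described above:

```python
import math

# math.frexp decomposes x as mantissa * 2**exponent, with
# 0.5 <= |mantissa| < 1.0 for nonzero x, matching the description above.
for x in (0.15625, -12.0, 1e-5):
    m, e = math.frexp(x)
    assert m * 2**e == x          # the decomposition is exact
    print(f"{x} = {m} * 2**{e}")
```

For instance, 0.15625 decomposes as 0.625 × 2⁻², and −12.0 as −0.75 × 2⁴.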

Classes of DSP Applications.

Digital signal processing, in general, and DSP processors in particular, are used in an extremely diverse range of applications, from radar systems to consumer electronics. Naturally, no  one processor can meet the needs of all or even most applications. Therefore, the first task for the designer selecting a DSP processor is to weigh the relative importance of performance, price, power consumption, integration, ease of development, and other factors for the application at hand. Here, we briefly touch on the needs of just a few categories of DSP applications.

Low cost embedded systems.

The largest applications (in terms of dollar volume) for digital signal processors are inexpensive, high-volume embedded systems, such as cellular telephones, disk drives (where DSPs are used for servo control) and modems. In these applications, cost and integration considerations are paramount. For portable, battery-operated products, power consumption is also critical. In these high-volume, embedded applications, performance and ease of development considerations are often given less weight, even though these applications usually involve the development of software to run on the DSP and custom hardware that interfaces with the DSP.

High Performance Applications

Another important class of applications involves processing large volumes of data with complex algorithms for specialized needs. This includes uses like sonar and seismic exploration, where production volumes are lower, algorithms are more demanding, and product designs are larger and more complex. As a result, designers favour processors with maximum performance, ease of use, and support for multiprocessor configurations. In some cases, rather than designing their own hardware and software from scratch, designers of these systems assemble systems using standard development boards and ease their software development tasks by using existing software libraries.

Personal Computer Based Multimedia.

Another class of applications is PC-based multimedia functions. Increasingly, PCs are incorporating DSP processors to provide such varied capabilities as voice mail, data and fax modems, music and speech synthesis, and image compression; software-defined radios (SDRs) and smartphones make similar demands. As with other high-volume embedded applications, PC multimedia also demands high performance, since a DSP processor in a multimedia PC may be called on to perform multiple functions simultaneously. In addition to performing each function efficiently, the DSP processor must have the ability to switch efficiently between functions. Memory capacity may also be an issue in these applications, because many multimedia applications require the ability to manipulate large amounts of data.

There has also been a trend to incorporate DSP-like functions into general purpose processors to better handle signal processing tasks. In some cases, such augmented microprocessors may be able to handle certain tasks without the need for a separate DSP processor. However, I think the combination of a general purpose processor (like an ARM core handling control tasks in a smartphone) with dedicated DSP processors has much to offer and will continue to be the dominant implementation of PC, SDR, or smartphone based multimedia for some time to come.

More later,

Nalin Pithwa

Joseph Lechleider, father of Internet DSL technology dies at 82

In the late 1980s, Joseph W. Lechleider came up with a clever solution to a puzzling technical problem, making it possible to bring high-speed Internet service to millions of households. His idea earned him a place in the National Inventors Hall of Fame as one of the fathers of the Internet service known as DSL.

Mr. Lechleider, who died on April 18 at his home in Philadelphia at 82, was an electrical engineer at the Bell telephone companies’ research laboratory, Bellcore. At the time, the phone companies wanted to figure out a way to send signals at high speed across ordinary copper wire into homes, mainly to compete with cable television companies and offer interactive video services.

Applying digital technology was the best route to sidestep the limitations of copper wire, but there was still a barrier. When the data speeds in both directions — downloading and uploading — were the same, there was a lot of electrical interference that slowed data traffic to a crawl.

Mr. Lechleider figured out that such meddlesome interference — known as electrical crosstalk — could be drastically reduced if the download speeds were far faster than the upload speeds. This approach became known as the asymmetric digital subscriber line. And these digital subscriber lines, or DSL, were how big phone companies like AT&T and Verizon brought fast broadband Internet into homes.

“Joe Lechleider’s idea was a simple, elegant solution to the problem,” said John M. Cioffi, an emeritus professor of electrical engineering at Stanford University. “His contribution was essential to the development and spread of the Internet.”

Mr. Lechleider’s death was confirmed by his son, Dr. Robert Lechleider, who said the cause was cancer of the esophagus. Besides his son, he is survived by his wife, Marie; his daughter, Pamela; and four grandchildren.

Joseph William Lechleider was born on Feb. 22, 1933, in Brooklyn. He attended Brooklyn Technical High School and earned his undergraduate degree from Cooper Union and a Ph.D. from the Polytechnic Institute of Brooklyn.

Upon graduation, he went to work for General Electric for a few years, and in 1955 he joined Bell Labs. After the 1982 court order breaking up American Telephone & Telegraph, the research arm of the regional Bell companies was established as Bellcore.

Mr. Lechleider’s insight about how to increase data speed came when he was 55. He had spent decades studying signal processing, so he was deeply grounded in the field. But Mr. Lechleider, according to Mr. Cioffi, was something of an iconoclast in a large, often bureaucratic organization.

“He was not afraid to take a risk and fight for a new idea,” Mr. Cioffi said.

Mr. Lechleider was fueled by a wide-ranging curiosity, his son said. Two walls of his study, he recalled, were bookshelves, double-stacked, with books on subjects ranging from physics to philosophy. The study also had a bust of Albert Einstein, whom Mr. Lechleider revered for advancing ideas that challenged accepted wisdom.

Digital subscriber lines were not an immediate success. Early versions were not capable of video-on-demand services, the market the Bell companies originally wanted to enter. And when the Internet began to take off in the 1990s, most consumers went online using dial-up modems, which increased the demand for second phone lines in homes. That was a good business for the phone companies, and a familiar one. Why opt for this new DSL technology?

“There was considerable skepticism,” Mr. Lechleider said in an interview with The Wall Street Journal in 2003. “There were people who didn’t want to deploy it. There were people who didn’t think it would work. Many of them weren’t sure there was a market for it.”

But as the web added more data-rich images, music and video, the demand for affordable, higher-speed communications services surged. And DSL technology afforded the phone companies a path to do that for years without having to undertake the costly alternative of installing fiber-optic cable into homes.

Mr. Lechleider contributed a key idea. But it was younger engineers like Mr. Cioffi who developed DSL modem technology.

The inexpensive cleverness of DSL technology, industry analysts say, meant the phone companies did not invest heavily to upgrade their broadband systems, as did the cable companies, like Comcast and Time Warner Cable. The cable operators initially feared competition from satellite television, but their investment paid off, allowing them to offer ever-faster Internet service.

“DSL allowed the phone companies to milk another two decades out of their copper infrastructure,” said Craig Moffett, a telecommunications analyst. “But the phone companies now find themselves far behind the cable companies in the speeds they can offer.”

Mr. Lechleider was not an early adopter of Internet technology. But when he signed up for high-speed service, he chose cable.

Yet his son recalled first getting DSL service years ago in his own home and, realizing how much faster it was than dial-up service, thinking of his father. “I loved him for it,” he said.

Note from Nalin Pithwa: This obituary appeared in today’s newspaper, The New York Times on the web.

Interpolation, Decimation and Multiplexing

Frequently, there is the need in DSP to change the sampling rate of existing data. Increasing the number of samples per unit time, sometimes called upsampling, amounts to interpolation. Decreasing the number of samples per unit time, sometimes called downsampling, is decimation of the data. (The original meaning of the word decimation comes from losing one-tenth of an army through battle or from self-punishment; we apply it to data using various reduction ratios.) Of course, interpolation and decimation can occur in frequency as well as time.

In fact, we have already encountered frequency domain interpolation: zero padding in time followed by the DFT interpolates the hidden sinc functions in the DFT spectrum. We can also do the opposite: zero padding in the frequency domain produces an interpolated time function. We will now investigate this type of upsampling, applied to interpolation of time domain data, in a little greater detail.

Consider the discrete data stream shown in Fig 1a along with its continuous spectrum. As we now realize, this DFT spectrum has different possible interpretations, depending on our data model. For purposes of discussion, let us say that this data results from sampling a band-limited (or nearly band-limited) continuous signal. Then, in the limit of a very long data window, sampled at a sufficiently high rate, no leakage or aliasing occurs. Time domain interpolation will correctly recover the original analog signal if it does not alter the spectrum in Fig 1a. The periodicity induced into the spectrum by the data sampling process can be eliminated by extracting just one replica. This extraction, accomplished by frequency domain multiplication with the boxcar shown on the right side of Fig 1b, convolves the discrete time domain data with the corresponding continuous time function (a sinc) to reproduce the original analog signal. It thus seems evident that a truly band-limited signal can be recovered completely from its sampled version, provided that the sampling rate is sufficiently high and that the sample is sufficiently long. The statement is commonly made that a band-limited analog signal can be uniquely recovered from its sampled version provided that it is sampled at a rate greater than twice the highest frequency contained in its spectrum; this statement is called the Sampling Theorem. Several aspects of this theorem have been proved in mathematical detail in many reference texts. However, from our previous discussions in these blogs, any such band-limited signal must be infinitely long, making the exact determination of its spectrum impossible in the first place.

Thus, in practice, we must always be content with an approximate reconstruction of the original analog signal. Preferring a digital scheme for this reconstruction, we convolve the boxcar spectral window of Fig 1b with the sampling function shown in Fig 1c. The result tells us how to exploit the DFT for the recovery of the analog signal — use zero padding in the frequency domain. In our example, we use $2:1$ zero padding, which produces the midpoint interpolation operator shown in Fig 1d. The result of this operator acting on the original data in Fig 1a is shown in Fig 1e. In the frequency domain, one simply appends zeros to the DFT spectrum. It is interesting to note that during the convolution process the sinc operator in the time domain appropriately has its zeros aligned with the unknown midpoints except at the point currently being interpolated; every interpolated point is a linear combination of all other original points, weighted by the sinc function; see Fig 1f. This interpolation, sometimes called sinc interpolation, can only be carried out approximately because the sinc function must be truncated somewhere. In the frequency domain, the result of truncating the sinc manifests itself as a convolution of the ideal low pass filter of Fig 1d with a narrow sinc arising from the truncation of the interpolating sinc operator. As a result, the final upsampled data has the same spectrum as the original data only to some approximation.
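A minimal sketch of $2:1$ upsampling by zero padding in the frequency domain, using NumPy's FFT (the splitting of the Nyquist bin keeps the padded spectrum conjugate-symmetric; scaling conventions vary between treatments, and the function name is mine):

```python
import numpy as np

def fft_upsample2(x):
    """2:1 upsampling by zero padding in the frequency domain."""
    N = len(x)                      # assumes N is even
    X = np.fft.fft(x)
    Y = np.zeros(2 * N, dtype=complex)
    # Copy the low-frequency half, split the Nyquist bin between the
    # two halves, and mirror the negative-frequency bins to the top.
    Y[:N // 2] = X[:N // 2]
    Y[N // 2] = X[N // 2] / 2
    Y[2 * N - N // 2] = X[N // 2] / 2
    Y[2 * N - N // 2 + 1:] = X[N // 2 + 1:]
    # The factor 2 compensates for doubling the number of samples.
    return 2 * np.real(np.fft.ifft(Y))

n = np.arange(16)
x = np.cos(2 * np.pi * 2 * n / 16)   # 2 cycles in 16 samples
y = fft_upsample2(x)
print(np.allclose(y[::2], x))        # True: original samples preserved
```

The interpolated odd-index samples are exactly the midpoint values of the underlying band-limited cosine, which is the digital counterpart of the sinc interpolation described above.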

Decimation, or downsampling, is the reverse operation of the sinc interpolation. To decimate $2:1$ with no loss of information from the original data, the data must be oversampled to begin with. Fig 2a shows data that is nearly oversampled $2:1$ to produce a spectrum that has very little energy in the upper half of the Nyquist interval. As is usually done, we low pass filter in preparation for decimation. In our example of Fig 2b, the upper half of the Nyquist interval has been filtered out with an appropriate filter. Then, the $2:1$ decimation operation simply consists of extracting every other sample in the time domain. This operation can be perceived as multiplication in time and convolution in frequency, with the sampling function shown in Fig 2c. The decimated signal, in Fig 2d, now has a new sampling rate and Nyquist frequency — its spectrum just filled in to meet the new Nyquist criterion. The lowpass filtering has assured that no aliasing occurs in the decimated data.
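The lowpass-filter-then-discard procedure just described can be sketched with SciPy, whose `decimate` applies an anti-aliasing lowpass filter before keeping every q-th sample (the test signal and rates here are illustrative, not from the text):

```python
import numpy as np
from scipy.signal import decimate

fs = 1000                          # original sampling rate, Hz
t = np.arange(1000) / fs           # 1 s of samples
# A 50 Hz tone lies well below the new Nyquist frequency (250 Hz),
# so it survives 2:1 decimation undistorted.
x = np.sin(2 * np.pi * 50 * t)
y = decimate(x, 2)                 # lowpass filter, then take every other sample

print(len(y))                      # 500 samples, now at a 500 Hz rate
```

Skipping the lowpass step and simply taking `x[::2]` would alias any energy above the new Nyquist frequency down into the retained band.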

The next two examples of manipulating data and their spectra employ combinations of filtering, sampling, interpolation, and decimation. Consider the spectrum shown in Fig 3a, which is divided into four separate bands. Each of these bands contains information that we wish to separate from the original spectrum. In one important case in communications applications, each frequency band contains an independent information channel. The modulation theorem, expressed in continuous form, shows that if we modulate a given channel with a sinusoid of frequency $\omega_{0}$, the spectrum is translated by $\pm\omega_{0}$. Thus, each of the four frequency bands of Fig 3 could represent separate channels formed by frequency division multiplexing (FDM), using an appropriate carrier frequency $\omega_{0}$, $2\omega_{0}$, $3\omega_{0}$, or $4\omega_{0}$ for each band. Analog versions of FDM have been used extensively for years in communications applications such as AM radio, stereo broadcasting, television, and radiotelemetry. Digital FDM is similar, except that the spectrum is repetitive.

Recovering a given channel, called demodulation or demultiplexing, is accomplished by first isolating the selected channel using bandpass filtering and then decimating the result. Fig 3 shows channel three demultiplexed by filtering followed by a $4:1$ decimation. The resulting digital data has a new sampling rate, meeting the Nyquist criterion. If the original channels are well sampled, gaps occur between the spectral bands of Fig 3a; these gaps are called guard bands.

Another application of isolating a given frequency band in this fashion occurs when we simply desire to pick off a given portion of the spectrum of a signal for more detailed examination and processing. In this case, the original spectrum of Fig 3a belongs to just one digital signal, and the bands are portions of the spectrum of special interest. In our example then, band three has been selected for closer examination. The process has given us time domain data that require only one-fourth the original samples, an important savings in some applications where further processing on the spectrum is desired, such as in spectral estimation. When used in this fashion, this procedure is called zoom processing because it zooms in on the spectrum of interest.

For our second example of multiplexing, we address a situation that is complementary to FDM. In FDM, the information channels are mixed in a complicated way in the time domain because of the modulation of sinusoids, but the channels are quite separate in the frequency domain. The reverse situation has the channels easily separated in time, but mixed in frequency. For our example, we consider only two different digital information channels. An obvious way to combine them in time is to interlace the samples, with every other sample belonging to the same channel; this is called time division multiplexing (TDM). Multiplexing and demultiplexing in the time domain are then a simple matter of using every other sample. However, let us explore the frequency behaviour of this process.

In Fig 4a, we show one of the two data channels, called channel A. It is oversampled by $2:1$ so that its spectrum occupies only one-half of the Nyquist interval. Decimation using the sample function of Fig 4b yields the result shown in Fig 4c. But, instead of redefining the sampling rate as in normal decimation, we put a twist into the processing by interpreting the results of Fig 4c as having the same sampling rate as the original data. Thus, the time domain data has zeros at every other point. This zero interlacing produces a spectrum that is folded at one-half the Nyquist frequency as shown. To conserve energy under this interpretation, the spectrum must be renormalized to one-half the original values. The other channel, channel B, is similarly oversampled by $2:1$ and then decimated by the shifted sampling function shown in Fig 4d. Again, its spectral amplitudes are reduced by a factor of one-half as a consequence of the zero interlacing. Finally, the TDM is completed by adding the results of the two channels: Figures 4c and 4e sum to Fig 4f. As anticipated in TDM, while the time data are easily separated, the frequency data are mixed. Even so, note that the Nyquist interval is now filled with nonredundant information that can be used to separate the spectra of the two channels, since $A+B$ and $A-B$ are linearly independent. Clearly, TDM demultiplexing could be done in either domain.
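The zero-interlacing step can be checked numerically: zeroing the odd samples halves the spectral amplitudes and folds the spectrum at one-half the Nyquist frequency. A small NumPy sketch (signal and sizes are arbitrary):

```python
import numpy as np

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 4 * n / N)       # a tone at DFT bin 4

# Zero-interlace: keep even samples, zero the odd ones (channel A's
# slots in a two-channel TDM frame).
z = np.where(n % 2 == 0, x, 0.0)

X, Z = np.fft.fft(x), np.fft.fft(z)
# Zero interlacing halves the amplitudes and folds the spectrum at
# half the Nyquist frequency: Z[k] = (X[k] + X[(k + N/2) mod N]) / 2.
folded = (X + np.roll(X, -N // 2)) / 2
print(np.allclose(Z, folded))            # True
```

This is the algebra behind Fig 4c: the original tone at bin 4 now also appears, at half amplitude, folded to bin $4 + N/2$.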

Nalin Pithwa.

Applications of Fourier Transform to DSP – Part 2 — Relationship between Fourier Transform and Discrete Fourier Transform: Resolution and Leakage

In many applications, our digital signal is the result of sampling a continuous signal. Examples abound: the voltage of an electronic signal may be sampled with an A/D converter, a meteorologist may read a barometer at the same time each day, and a navigator may determine the position of his vessel with a noontime sun shot using a sextant or he may determine his position every second with electronic aids, such as satellite navigation. In each case, it may be desired to treat the resulting equally spaced data with an LTI operator. For example, the satellite navigation data may need to be smoothed in some manner to better estimate the ship’s position. Whatever LTI operation is performed on the data, it can be performed by either convolution in the time domain or multiplication in the frequency domain. Therefore, it will be beneficial to see the effects of sampling in both domains.

We start with the continuous signal that exists before sampling. It and its continuous magnitude spectrum are shown in Figure 1a; we omit the phase spectrum for simplicity. In many situations, the signal and its spectrum will be rather broad, as shown.

Figure 1b shows the sampling function with spacing T. Multiplication in the time domain produces the sampled signal; convolution in the frequency domain produces its resulting spectrum. The result is shown in Figure 1c. Convolution with a row of $\delta$ functions just places the original function at each $\delta$-function location, replicating the original function. Thus, sampling in the time domain has induced periodicity in the spectrum. If the spectrum is wider than $\pi/T$, an overlap superimposes portions of the true spectrum, causing a distortion. This overlapping of the spectrum is aliasing. The amount of the overlap depends on the spectral width compared to $\pi/T$, the spacing between repeated spectra. For any given spectrum, the effect can be reduced by denser time domain sampling, which spaces the replicated spectra farther apart. The Nyquist sampling theorem applies only to a signal that is bandlimited so that the spectrum has exactly zero components above frequency $\pi/T$. We know from the time-limited/band-limited theorem that such a band-limited signal would have to be infinitely long — an impossibility. Thus, fundamentally, there are no band-limited signals, and aliasing becomes a matter of degree. In practice, however, the long tail of the spectrum may be masked by other effects, such as noise, so that a sufficiently high sample rate can separate the replicated spectra to the point where meaningless information exists only in the vicinity of the Nyquist frequency. The result of any further increase in the sampling rate depends on the nature of the contaminating noise. For example, if the noise had large, high-frequency components, they may be folded down into the low-frequency portion of the signal’s spectrum by an insufficient sampling rate. In this case, a higher sampling rate separating the noise spectrum from the signal’s spectrum will improve the data’s usefulness for subsequent signal processing.
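Aliasing is easy to demonstrate numerically: a tone above the Nyquist frequency folds down into the baseband. A small sketch (the rates are chosen only for illustration):

```python
import numpy as np

fs = 100                                # sampling rate, Hz
t = np.arange(1000) / fs                # 10 s of samples
# A 70 Hz tone sampled at 100 Hz violates the Nyquist criterion
# (the Nyquist frequency is 50 Hz); it folds down to 100 - 70 = 30 Hz.
x = np.sin(2 * np.pi * 70 * t)
X = np.abs(np.fft.rfft(x))
peak_hz = np.argmax(X) * fs / len(x)    # frequency of the spectral peak
print(peak_hz)                          # 30.0, not 70.0
```

Once the samples are taken, nothing distinguishes the aliased 30 Hz component from a genuine 30 Hz tone, which is why anti-alias filtering must happen before sampling.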

Putting noise considerations aside, there is another fundamental limitation that we must impose on the digitized data shown in Figure 1c. In many situations, we can only record a portion of the actual signal. In effect, this truncation is multiplication of the data stream with the boxcar data window of length $T_{0}$ shown in Figure 1d. In the frequency domain, the spectrum of the data is convolved with the Fourier transform of the data window, a sinc function of width $(4\pi)/T_{0}$. This convolution with the sinc function smears out the sharp features of the spectrum, limiting the spectral resolution. The two sharp lines in our example are now smeared out as shown in Figure 1e. The longer the data window $T_{0}$, the sharper is the sinc function, and hence the less loss of resolution. In the limit of an infinitely long data window, its associated sinc function becomes a $\delta$ function, resulting in the identity operation when convolved in the frequency domain; only then is there no loss of resolution. The dependence of frequency resolution on data window length is a manifestation of the uncertainty principle: it requires a long sample to obtain a precise frequency determination. Note that the frequency resolution depends only on the length and shape of the data window. An increased sampling rate only separates the replicated spectra, reducing aliasing; it does not increase spectral resolution.

In addition to limiting spectral resolution, this convolution of the spectrum with the sinc function has another effect. The slowly decaying side lobes of the sinc function mix spectral energy from one spectrum to its adjacent replicas, as shown in Figure 1e. This interspectral mixing is called leakage and is distinct from the loss of resolution that occurs within the spectral replica. Unlike resolution, leakage can be reduced by increased time domain sampling because of the increased separation between spectral replicas.

Both resolution and leakage depend on the shape of the data window. We have used the boxcar for this discussion; clearly, other shapes would produce a different effect on the spectrum. It turns out that, compared to other reasonable window shapes, the boxcar’s sinc function has a relatively narrow central peak with comparatively large side lobes. It thus produces good spectral resolution at the expense of poorer leakage properties. Whenever we are using only a portion of a data stream, sampled over some length of time as discussed here, we can never observe the true spectrum, but we see it through this data window: we see an estimate of the spectrum, smoothed by a moving average forced on us by the convolution operation. The selection of the best shape for the data window, which in turn determines the kind of smoothing done on the spectrum, is called windowing (we will discuss an approach to windowing later in these blogs).
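The boxcar-versus-gentler-window trade-off can be seen numerically. The sketch below compares the leakage far from an off-bin tone under a boxcar window and under a Hann window (sizes and frequencies are arbitrary choices for illustration):

```python
import numpy as np

N = 128
n = np.arange(N)
# A tone whose frequency falls between DFT bins, so its energy
# leaks through the window's sidelobes across the whole spectrum.
x = np.cos(2 * np.pi * 10.5 * n / N)

boxcar = np.abs(np.fft.rfft(x))               # implicit boxcar window
hann = np.abs(np.fft.rfft(x * np.hanning(N))) # tapered window

# Far from the tone (bin 40, about 30 bins away), the Hann window's
# rapidly decaying sidelobes leak far less energy than the boxcar's.
print(boxcar[40] > 10 * hann[40])
```

The price paid is a wider central peak for the Hann window, i.e. slightly poorer resolution, exactly the trade-off described above.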

Up to this point, we have only discussed the acquisition of sampled data over a finite window length. These data have a continuous spectrum that is distorted in both resolution and leakage. To view this spectrum, we need to calculate it from the sampled data, using a convenient computational scheme — the DFT, of course. The DFT samples the spectrum at equally spaced points in frequency. This frequency domain sampling is represented by multiplication with the sampling function, as shown in Figure 1f. In the time domain, the data is convolved with the sampling function, producing the replicated data samples shown in Figure 1g. Again, sampling in one domain forces periodicity in the other, so that our data is now unavoidably periodic. In our example, we have sampled the spectrum at sufficiently close frequency intervals, $\Delta\omega$, that the time domain replicas have gaps (that is, zero values) between them. This development shows how to use the DFT to compute the spectrum at any number of frequency points — simply append zeros to the data. This is called zero padding and is normally good practice in computing spectra of discrete data via the DFT. The spectrum of discrete data is a continuous function of frequency; zero padding is simply a method of computing that spectrum at a larger number of points using the DFT. Figure 1 is an important summary of our discussion.
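A short NumPy sketch of zero padding in practice: padding adds no information, but it samples the same continuous spectrum more finely, so the peak of an off-bin tone is located more precisely (all parameters are illustrative):

```python
import numpy as np

N = 32
n = np.arange(N)
x = np.cos(2 * np.pi * 5.3 * n / N)       # true frequency between DFT bins

X_raw = np.abs(np.fft.rfft(x))            # N-point spectrum, coarse grid
X_pad = np.abs(np.fft.rfft(x, n=8 * N))   # 8:1 zero padding, fine grid

# Peak-frequency estimates in cycles/sample; the true value is 5.3/32.
f_raw = np.argmax(X_raw) / N
f_pad = np.argmax(X_pad) / (8 * N)
print(f_raw, f_pad)
```

The padded estimate lands much closer to the true 5.3/32 cycles/sample than the unpadded one, even though both are samples of the same underlying continuous spectrum.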

This relationship is sufficiently important for using the DFT in practical applications to warrant some examples. A particularly instructive case is no zero padding. Then, the data window, shown in Figure 2a, is replicated with no gaps. This condition occurs when the replicating spikes in the time domain are separated by the length of the window, T. Then, the frequency sampling points, which we will call the DFT frequencies, are spaced by $\Delta\omega=\frac{2\pi}{T}$ up to $\frac{\pi N}{T}$ for an N-point DFT. The important observation is that the zeros of the data window’s sinc function fall exactly on the DFT frequencies at multiples of $\frac{2\pi}{T}$.

The significance of this result can be seen by considering sinusoidal data with frequencies that are multiples of $\frac{2\pi}{T}$. Then, an integer number of periods, n, just fits inside the data window, as shown in Figure 2c for $n=3$. Convolution with the data window's sinc function produces two superimposed sinc functions located at $\pm\frac{2\pi n}{T}$, as shown in Figure 2c. But, because of the special frequency of the sine wave, the DFT's frequency sampling just hits the zeros of these combined sinc functions at every frequency except

$\pm\frac{2\pi n}{T}$, where the sinc's central peak is sampled. The resulting DFT spectrum, shown in Figure 2d, is deceptively accurate: the time domain signal has been smoothly joined by its replicas to make an infinitely long sine wave whose spectrum is properly described by two zero-width spikes.
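This exact-spike behaviour is easy to verify numerically. The following Python/NumPy sketch (the window length and frequency are arbitrary choices of mine, not from the text) samples exactly three periods of a sine wave and confirms that the DFT concentrates all energy in the two bins $\pm n$:

```python
import numpy as np

# Sample exactly n = 3 periods of a sine wave in an N-point window (T = 1).
N, n = 64, 3
t = np.arange(N) / N
x = np.sin(2 * np.pi * n * t)

mag = np.abs(np.fft.fft(x))

# All energy lands in bins n and N - n (the negative frequency);
# every other DFT frequency falls on a zero of the window's sinc.
print(mag[n], mag[N - n])                  # two spikes of height N/2 = 32
print(np.delete(mag, [n, N - n]).max())    # ~0 (numerical noise)
```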

Have we beaten the uncertainty principle in this example, demonstrating infinite resolution and zero leakage? By no means. The sinc function associated with the boxcar data window is really lurking in the background of the DFT spectrum. Zero padding reveals it. The spectrum is continuous, and it can be computed at any frequency points that we desire by using the DFT with zero padding. Figure 3 shows the result of $8:1$ zero padding for our example.
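The lurking sinc can be exposed numerically as well. In this Python/NumPy sketch (parameters are illustrative), the bare DFT of an integer-period sine shows only the two spikes, while 8:1 zero padding samples the same continuous spectrum densely enough for the window's sinc sidelobes to appear:

```python
import numpy as np

N = 64
t = np.arange(N) / N
x = np.sin(2 * np.pi * 3 * t)    # integer number of periods in the window

bare = np.abs(np.fft.fft(x))             # samples the sinc only at its zeros and peak
padded = np.abs(np.fft.fft(x, 8 * N))    # 8:1 zero padding: sidelobes appear

print(bare[2], bare[4])    # ~0: these DFT frequencies fall on sinc zeros
print(padded[20])          # clearly nonzero: halfway between two original bins
```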

One may argue that the zero padding of Figure 3 is an abomination of our original sine wave data. This suggestion brings us to an extremely important point for interpreting DFT results. To properly use the DFT, one needs more information about the signal than is available in the data window. For example, if we know that our signal is just the cosine burst of Figure 3, then a lot of zero padding is the correct thing to do. It makes the repetitive time domain signal of the DFT look more like an isolated burst. Also, because we know that an isolated burst has an infinitely broad spectrum, higher sampling rates will reduce aliasing. On the other hand, if it were known that the signal under study were periodic with a period exactly equal to the data window, then no zero padding is the correct application of the DFT. If this signal were also band limited and unaliased, the resulting DFT line spectrum would then be exact.

By using the proper procedure, it is possible to use the computational convenience of the DFT to compute all four kinds of Fourier transforms that we have discussed. In some cases, exact results are possible; in others, we must be content with approximations. To see the kind of thinking required to use the DFT in Fourier transform problems, consider the following scenario.

Imagine a digitized sinusoid as sketched in Figure 4a. Approximately, but not exactly, one half of a period of the sinusoid occupies the data window. Therefore, energy will be spread out across the entire DFT spectrum because the frequency of the sinusoid does not fall right on a DFT computation frequency. What, then, is the interpretation of the DFT spectrum sketched in Figure 4a? To answer this question, we require knowledge of the signal beyond the data window. If we assume it is periodic with a period just equal to the data window, then the signal fits the DFT formulation. Is the spectrum exact then? No, because the periodic extension of the signal, suggested by the dotted line in the figure, has discontinuities that require it to be broadbanded. This infinite bandwidth signal is therefore necessarily aliased, and the DFT spectrum is inexact. A higher sampling rate, as shown in Figure 4b, will reduce the aliasing and improve the spectral estimation. This demonstrates the method of using the DFT to estimate the Fourier series coefficients of a broadbanded continuous function.

We might ask what signal does have the exact spectrum shown in Figure 4b? Zero padding in the frequency domain, as shown in Figure 4c, gives the answer. This zero padding is saying that the DFT spectrum is exact out to Nyquist and there is no spectral energy beyond. That is, we are now saying that the signal is bandlimited. This bandlimited signal is recovered by taking the IDFT of the zero padded spectrum as shown in Figure 4c. This procedure is really interpolating between the discrete points in the time domain by assuming that the signal is band limited and unaliased. Any amount of interpolation is possible; it just requires more frequency domain zero padding to get more interpolated points in the time domain. In the limit, a continuous, band limited time domain signal is approached. It is this signal that has the DFT spectrum as Fourier series coefficients.
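This interpolation procedure is easy to sketch with the FFT. In the following Python/NumPy example (the helper name and parameters are my own, not from the text), the spectrum of a bandlimited signal is zero padded around the Nyquist point and inverse transformed; the original samples reappear among the interpolated ones:

```python
import numpy as np

# Time domain interpolation by zero padding the spectrum, assuming the
# signal is bandlimited and unaliased.
def interpolate_fft(x, factor):
    N = len(x)
    X = np.fft.fft(x)
    # Insert zeros around the Nyquist point to extend the spectrum.
    Xp = np.zeros(factor * N, dtype=complex)
    Xp[:N // 2] = X[:N // 2]
    Xp[-(N // 2):] = X[-(N // 2):]
    # Scale so sample amplitudes are preserved after the longer IDFT.
    return factor * np.real(np.fft.ifft(Xp))

N = 32
t = np.arange(N) / N
x = np.cos(2 * np.pi * 4 * t)     # bandlimited, unaliased
y = interpolate_fft(x, 4)         # 4x as many points, same waveform

# The original samples reappear at every 4th point of the interpolation.
print(np.max(np.abs(y[::4] - x)))   # ~0
```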

The last possibility that we consider is that the signal is a single burst of the sinusoid. Such a signal, of course, is broadbanded, having spectral components extending to infinity. Zero padding in the time domain will make the periodic DFT representation look more like a single burst, as suggested in Figure 4d. As the sampling rate is increased to lower aliasing and the zero padding is increased, the DFT spectrum approaches the broadbanded, nonperiodic, continuous spectrum of the sinusoidal burst, but never completely reaches it.

In summary, then, essentially the same data can be used to generate DFT spectra with greatly different interpretations. In two cases, the bandlimited nature of a signal permits the DFT to produce an exact spectrum. The most common of these two cases is the discrete, finite length sequence whose spectrum is naturally periodic. Time domain zero padding reveals the exact spectrum. The less common case is the continuous periodic signal that is known to be band limited. If the data window can be matched to exactly one period and a sufficiently high sample rate is used, the DFT will produce the true line spectrum.

In two other cases, the broadband nature of the signals allows only an estimate of the true spectrum via the DFT. If the signal is periodic and sampled for an integer multiple of its fundamental period, the DFT spectrum will approach the exact line spectrum as the sample rate increases. If the signal is known to be zero outside of the data window, the DFT spectrum will approach the correct continuous Fourier spectrum with increased sampling and increased zero padding.

It is a central point in DFT signal processing that one needs information about the data beyond the data window to determine the proper DFT treatment. This formulation of a signal's behaviour beyond the observed data window is called data modelling. In addition to the four models discussed here, many more are possible; we will examine them later in spectral estimation. Next, we will use the sampling function and the properties of zero padding that we have discussed to explore some important manipulations of data and their DFT spectra.

Until the next blog,

Nalin Pithwa

Applications of the Fourier Transform to Digital Signal Processing (DSP) Part I

In the previous blogs, we invested our time and energy in understanding continuous signal theory because many of the signals that find their way into digital signal processing are thought to arise from some underlying continuous function. In the next few blogs, we discuss the relationship between these underlying continuous signals and the discrete signals that are produced by their sampling. Fundamental problems will be encountered. A parallel development of the sampling process, in both the time and frequency domains, will clarify these problems and show how to cope with them. Moreover, the deeper insight provided by this development will permit us to evaluate the severity of the limitations encountered in a given circumstance. Finally, after we understand the relationship between the Fourier integral transform and the DFT, new useful ideas will emerge on interpolation, decimation, and modulation.

Until now, we have followed (or assumed to have followed!) a natural course through three kinds of time-frequency transformations: First, evaluation of the frequency response of discrete LTI systems led to the discrete time/continuous frequency description of digital impulse responses and their spectra. Then, by computing these spectra at discrete points in frequency, we were led to the discrete time/discrete frequency domain of the DFT. Lastly, suitable limits took the DFT over into the Fourier integral transform of continuous time and frequency.

To complete our journey into time-frequency transformations, we next address the fourth possibility:  continuous time and discrete frequency.

Continuous Time, Discrete Frequency: The Fourier Series

Discrete spectra are familiar from areas such as spectroscopy, where they are called line spectra. Viewed as a mathematical entity by itself, apart from continuous signals, the DFT relates equally spaced discrete spectra to equally spaced data points in time. We now ask what the consequences are of demanding equally spaced discrete spectra from a set of continuous time domain data. That is, we require that

$F(\omega)=2\pi\sum_{n=-\infty}^{\infty}a_{n}\delta(\omega - n\omega_{0})$ Eqn I

which places spectral lines of magnitude $2\pi a_{n}$ at multiples of the fundamental frequency $\omega_{0}$. The time domain signal with such a spectrum is

$f(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)e^{i \omega t}d\omega$, which in turn is equal to

$\frac{1}{2\pi}\int_{-\infty}^{\infty}(2\pi)\sum_{n=-\infty}^{\infty}a_{n}\delta(\omega - n \omega_{0})e^{i \omega t}d\omega$

or $f(t)=\sum_{n=-\infty}^{\infty}a_{n}e^{i n \omega_{0}t}$ Eqn II

Clearly, $f(t)$ is still a continuous function of time, but the discrete, equally spaced spectrum has produced periodicity in the time domain of period $T=\frac{2\pi}{\omega_{0}}$, since $f(t+2\pi/\omega_{0})=f(t)$.

Equation II is exactly equivalent to the equation connecting discrete, equally spaced time domain data to continuous periodic spectra, with the roles of time and frequency interchanged.

To complete the continuous time/discrete frequency picture, we should find the forward transform that takes $f(t)$ into $F(\omega)$. Because $f(t)$ is periodic, no new information is contained outside of the interval $-\frac{T}{2}$ to $\frac{T}{2}$.

Thus motivated, we integrate equation II over these limits with the appropriate Fourier kernel:

$\int_{(-T)/2}^{T/2}f(t)e^{-im\omega_{0}t}dt$, which is equal to

$\sum_{n}a_{n}\int_{(-T)/2}^{T/2}e^{i(n-m)\omega_{0}t}dt$

Direct elementary integration shows that

$\int_{(-T)/2}^{T/2}e^{i(n-m)\omega_{0}t}dt=T\delta_{nm}$ Eqn III

giving $\int_{(-T)/2}^{T/2}f(t)e^{-im\omega_{0}t}dt=Ta_{m}$
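The orthogonality relation of Eqn III is easily checked numerically. This Python/NumPy sketch (the period T and the indices are arbitrary choices) approximates the integral by a uniform average over one period:

```python
import numpy as np

# Check Eqn III: the integral over one period of exp(i(n-m)w0 t)
# equals T when n = m and zero otherwise.
T = 2.0
w0 = 2 * np.pi / T
K = 20000
t = -T / 2 + T * np.arange(K) / K     # uniform grid over one period

def inner(n, m):
    # average times T approximates the integral over [-T/2, T/2]
    return np.mean(np.exp(1j * (n - m) * w0 * t)) * T

print(abs(inner(3, 3)))   # = T = 2
print(abs(inner(3, 5)))   # ~0: a sum of roots of unity cancels exactly
```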

This result, combined with equation II, provides the desired pair of transformations, relating continuous time periodic functions with their equally spaced discrete spectra:

$a_{n}=(1/T)\int_{(-T)/2}^{T/2}f(t)e^{-in\omega_{0}t}dt$ Eqn IVa

$f(t)=\sum_{n=-\infty}^{\infty}a_{n}e^{in\omega_{0}t}$ Eqn IVb

Equations IV, called the Fourier series, play a major role in continuous theory because periodic, continuous time signals are so commonplace in many applications. Equation IVa is sometimes called the analysis equation because it separates the periodic time signal into its component line spectra, $a_{n}$. Equation IVb represents the Fourier synthesis of a periodic, continuous time signal from the superposition of complex sinusoids with frequencies that are multiples of the fundamental. The sum may or may not extend to infinity. The $a_{n}$ are called Fourier coefficients. Three common examples of Fourier series, the square wave, the triangle wave, and the full wave rectified sine wave, are shown in Figure 1.
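As a quick numerical check of the analysis equation, Eqn IVa, the following Python/NumPy sketch (the period and grid size are arbitrary choices) evaluates the coefficients of a unit square wave; the odd harmonics come out with magnitude $2/(\pi n)$ and the even harmonics vanish:

```python
import numpy as np

# Numerically evaluate Eqn IVa for a unit square wave of period T = 1:
# f(t) = +1 for 0 < t < T/2, -1 for -T/2 < t < 0.
T = 1.0
w0 = 2 * np.pi / T
K = 200000
t = -T / 2 + T * (np.arange(K) + 0.5) / K   # midpoint rule over one period
f = np.sign(t)

def a(n):
    return np.mean(f * np.exp(-1j * n * w0 * t))

print(abs(a(1)))   # ~ 2/pi = 0.6366...
print(abs(a(2)))   # ~ 0: even harmonics vanish
print(abs(a(3)))   # ~ 2/(3*pi) = 0.2122...
```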

It is interesting to note that the Fourier series, which has dominated Fourier theory since its inception, takes on only a very minor part in digital signal processing. In fact, we introduce it here mainly to complete the discussion of the four types of Fourier transforms arising from the four possible combinations of discrete and continuous time and frequency. We retain the continuous time, discrete frequency Fourier series in our discussion for two purposes: to examine its least-squares convergence properties and to introduce the sampling function.

For reference, we summarize the four types of Fourier transformations in Figure 2, reviewing just how they arise in our (assumed, if need be) natural course of discussion. Clearly, there are two types of transformations that might properly be called Fourier series: the discrete time Fourier series and the continuous time Fourier series. Mathematically, these two transformations are identical, but with the roles of time and frequency interchanged. Next, we explore the interesting convergence properties of these two types of Fourier series.

The Least Squares Convergence of the Fourier Series

The convergence property of the Fourier series is both curious and interesting in its own right; additionally, it has important implications for digital filter design. Our approach is to pretend that we do not know how to compute the Fourier coefficients as given in Equation IVa. Instead, we wish to take an alternate tack to the course of the previous section and determine the $a_{n}$ by curve fitting. That is, following the notion of Fourier synthesis, suppose that we wish to synthesize a given continuous periodic function $f(t)$ from a superposition of a finite number of sinusoids equally spaced in frequency. The result of such a superposition of N sinusoids is

$\hat{f}(t)=\sum_{n=0}^{N}a_{n}e^{in\omega t}$ Eqn V

Generally, when the finite sum is used, $\hat{f}(t)$ will be different from $f(t)$. Of all the possible criteria for minimizing this difference, we choose to minimize the total squared error over the period of $f(t)$; that is, we wish to determine the $a_{n}$ that minimize:

$\text{sum squared error}=\int_{(-T)/2}^{T/2}|f(t)-\hat{f}(t)|^{2}dt$ Eqn VI

The minimum is found by differentiating with respect to $a_{m}^{*}$ (note that $a_{m}$ and $a_{m}^{*}$ are independent):

$\frac{\partial}{\partial a_{m}^{*}}\int_{(-T)/2}^{T/2}|f(t)-\sum_{n=0}^{N}a_{n}e^{in\omega t}|^{2}dt=0$

giving

$\int_{(-T)/2}^{T/2}(f(t)-\sum_{n}a_{n}e^{in\omega t})e^{-im\omega t}dt=0$, or

$\int_{(-T)/2}^{T/2}f(t)e^{-im\omega t}dt=\sum_{n}a_{n}\int_{(-T)/2}^{T/2}e^{i(n-m)\omega t}dt$, which, in turn, is equal to

$T\sum_{n}a_{n}\delta_{nm}=Ta_{m}$

Thus, the coefficient that minimizes the total squared error is

$a_{m}=(1/T)\int_{(-T)/2}^{T/2}f(t)e^{-im\omega t}dt$, which is just the normal Fourier series coefficient.

The conclusion is somewhat surprising: simple truncation of a function’s Fourier series expansion approximates the function in the least-squares sense. This serendipitous result allows us to take satisfaction in the knowledge that as each term is added to the partial sum in Equation V, the best possible fit to $f(t)$ is obtained, for that number of terms, in the least-squares sense. In addition, a second surprise concerning the nature of Fourier series convergence occurs at discontinuities.

Three examples of partial sums for a square wave synthesis are shown in Figure 3. In each case, an overshoot occurs near the discontinuity, and the partial sums pass through the average of the two values on either side of the jump at the discontinuity. The unexpected behaviour of the convergence is that the amount of overshoot remains constant as more terms are added. In fact, it can be shown that this overshoot approaches a constant 8.9% of the jump at the discontinuity as the number of terms in the partial sum tends to infinity. It might be natural to expect that this overshoot tends to zero with an increasing number of terms, but this is not the case. This unexpected convergence behaviour was first described by the British mathematician H. Wilbraham, but it was made better known by a public exchange in the journal “Nature” between the American physicists A. Michelson and J. W. Gibbs. In an 1898 letter to “Nature”, Michelson explained that he was having difficulty synthesizing a discontinuity from the summation of the first 80 terms of a Fourier series. He suspected that the device he used to determine the Fourier coefficients, called a harmonic analyzer, was defective, or that the mathematicians were all wet in saying that a discontinuity could be replaced by a summation of continuous functions. In a responding letter one year later, Gibbs explained the true nature of the convergence, which is now called the Gibbs phenomenon.

As terms are added to the partial sum, the integrated squared error of Equation VI does indeed decrease uniformly: the wriggles of the Gibbs phenomenon increase in frequency and shift toward the discontinuity, as seen in Figure 3, decreasing the squared area between them and the desired function. The amplitude of the largest wriggle does not decrease, but the integrated squared error does.
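The constant overshoot is easy to reproduce. This Python/NumPy sketch (the term counts are arbitrary choices) sums the square wave series $f_{N}(t)=\frac{4}{\pi}\sum_{n \ odd}^{N}\frac{\sin(n\omega_{0}t)}{n}$; the peak stays near $\frac{2}{\pi}Si(\pi) \approx 1.179$, about 8.9% of the jump of 2 above the level of 1, however many terms are kept:

```python
import numpy as np

# Partial sums of the +/-1 square wave's Fourier series.  The peak
# overshoot near the discontinuity stays ~8.9% of the jump (2), so the
# maximum hovers near 1.18 regardless of N.
w0 = 2 * np.pi                       # period T = 1
t = np.linspace(0.0, 0.5, 100001)    # half a period; the jump is at t = 0

def partial_sum(N):
    n = np.arange(1, N + 1, 2)       # odd harmonics only
    return (4 / np.pi) * np.sin(np.outer(t, n * w0)) @ (1 / n)

for N in (19, 79, 319):
    print(N, partial_sum(N).max())   # ~1.18 each time; it does not shrink
```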

The importance of the Gibbs phenomenon to digital signal processing can be appreciated by considering the design of an ideal lowpass filter. The frequency response of such a filter is a continuous periodic function with a discontinuity at the cutoff frequency. The discrete time domain operator consists of the Fourier coefficients of this frequency response function. These coefficients are easily computed for any cutoff frequency, but the operator will be infinitely long. The situation is similar to that shown in Figure 3, with the roles of time and frequency interchanged. If the infinitely long time domain operator is simply truncated to produce a workable digital filter, the Gibbs phenomenon is produced in the frequency response, making the ideal lowpass filter unobtainable with a finite length operator. The question, then, of how to obtain the best time domain coefficients of a given length becomes the interesting and challenging problem of digital filter design. Later in these blogs, we will see several approaches to designing digital filters, given their desired frequency response.
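A sketch of this effect in Python/NumPy (the cutoff and operator length are arbitrary choices): truncating the ideal lowpass coefficients $h_{k}=\sin(\omega_{c}k)/(\pi k)$ and evaluating the frequency response shows the roughly 9% Gibbs ripple on either side of the cutoff:

```python
import numpy as np

# Ideal lowpass coefficients, truncated to 2M + 1 taps.
wc = np.pi / 2                  # cutoff at half of Nyquist
M = 64
k = np.arange(-M, M + 1)
kk = np.where(k == 0, 1, k)     # avoid 0/0; the k = 0 tap is handled below
h = np.where(k == 0, wc / np.pi, np.sin(wc * k) / (np.pi * kk))

# Zero-phase frequency response of the truncated operator.
w = np.linspace(0, np.pi, 4001)
H = np.real(np.exp(-1j * np.outer(w, k)) @ h)

print(H.max())   # ~1.09: ~9% overshoot just inside the passband
print(H.min())   # ~-0.09: matching ripple in the stopband
```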

The Sampling Function

We next exploit the continuous time Fourier series to introduce the sampling function, a transform pair that will serve as a major structural element in building a bridge of insight and interpretation between continuous signals and the DFT. In the continuous time Fourier series representation, $f(t)$ is periodic with period $T$. By selecting $f(t)=\delta(t)$, a series of delta function spikes is formed in the time domain, equally spaced by $T$. Such a series of spikes would be written as:

$f(t)=\sum_{n=-\infty}^{\infty}\delta(t-nT)$

The Fourier coefficients of this spike train are given by

$a_{n}=(1/T)\int_{(-T)/2}^{T/2}\delta(t)e^{-in\omega_{0}t}dt$ where

$\omega_{0}=\frac{2\pi}{T}$

or $a_{n}=1/T$

So, the spectrum is also a series of equally spaced spikes, with frequency spacing $\omega_{0}=\frac{2\pi}{T}$. Rewriting this time domain function and substituting $a_{n}=1/T$ in Equation I gives, for this transform pair of equally spaced spikes,

$f(t)=\sum_{-\infty}^{\infty}\delta(t-nT)$ Eqn VIIa

$F(\omega)=\frac{2\pi}{T}\sum_{n=-\infty}^{\infty}\delta(\omega -n \omega_{0})$ Eqn VIIb

Either member of this transform pair, shown in Figure 4, is called a sampling function because multiplication (in either domain) with a continuous function just produces equally spaced samples of that function. Notice again the reciprocal relationship between the time spacing T and the frequency spacing $\omega_{0}$ of this pair: $T\omega_{0}=2\pi$. The sampling function, like the Gaussian, is one of those rare functions that is its own transform; that is, the functional dependence is the same in both domains.

In the course of these blogs, we have taken (or assumed to have taken!) a circuitous route via the continuous time Fourier series to introduce the sampling function because it seems like the easiest approach. Indeed, the computation of the spectrum of Equation VIIa using the continuous Fourier integral formulation is impossible, since all the conditions for the existence of the Fourier integral are violated. The sampling function, with its infinite number of infinite discontinuities and its nonconvergent Fourier integral, is the most pathological function that we will need. Yet, in the next blog, it will prove easy to manipulate and useful in understanding the sampling of signals.

More later,

Nalin Pithwa

A Repertoire of DSP transforms — part 4

We continue our exploration of the Hilbert transforms…

For a more elementary example of the Hilbert transform type of phase shift, we turn to the sine and cosine functions that are frequently said to be 90 degree phase shifts of each other. However, the actual phase shift required to transform between sines and cosines is not this simple 90 degree phase shift; rather, it is, again, the somewhat curious 90 degree phase shift of the Hilbert transform, where positive and negative frequencies are treated oppositely. Fig 6.12a shows that the $i \, sgn(\omega)$ phase shift applied to the sine spectrum produces the cosine spectrum. Likewise, Fig 6.12b shows that the Hilbert transform phase shift applied to the cosine spectrum produces the negative of the sine spectrum. Thus, sines and cosines are Hilbert transform pairs:

$\cos (\omega t)$ equals

$-(1/\pi)\int_{-\infty}^{\infty}\frac {\sin (\omega t^{'})}{(t-t^{'})}dt^{'}$   Equation 6.28a

$\sin(\omega t)$ equals

$(1/\pi)\int_{-\infty}^{\infty}\frac {\cos (\omega t^{'})}{(t-t^{'})}dt^{'}$   Equation 6.28b

This manipulation of the line spectra of sines and cosines can be done in yet another way, which again relates to the Hilbert transform. In Fig 6.13, we show that the cosine spectrum plus i times the sine spectrum is equal to the complex exponential’s spectrum. That is, we observe that

$e^{i\omega t}=\cos (\omega t)+i\sin (\omega t)$

Additional significance of this familiar result can now be seen. While both the sine and cosine have two-sided spectra, the complex exponential only contains one positive frequency. We see now why the complex exponential form is so suitable in describing rotors in ac circuit theory; it represents a rotor moving counterclockwise with only one frequency. On the other hand, a sine or cosine represents a superposition of two rotors moving in opposite directions, making phases impossible to track.

Using the fact that the sine is the negative Hilbert transform of the cosine, we can write the complex exponential as

$e^{i \omega t}=\cos (\omega t)-i\mathcal{H}(\cos (\omega t))$

This generation of a complex signal from the real function $\cos (\omega t)$ can be similarly applied to any function:

$g(t)=f(t)-i\mathcal{H}(f(t))$ Equation 6.29

and then $g(t)$ is called the analytic signal of $f(t)$. Sometimes,

$\mathcal{H}(f(t))$ is called the quadrature function or, equivalently, the allied function of $f(t)$. This analytic signal, like the exponential, is one sided in the frequency domain — it only contains positive frequencies. This is easily seen, for the Fourier transform of Equation 6.29 is

$G(\omega)=F(\omega)-i[isgn(\omega)]F(\omega)=F(\omega)[1+sgn(\omega)]$. Hence, we get

$G(\omega)=2F(\omega)$ for $\omega >0$ and

$G(\omega)=0$ for $\omega < 0$ Equation 6.30
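Equations 6.29 and 6.30 can be checked discretely with the FFT. In this Python/NumPy sketch (the signal and its length are arbitrary choices), the Hilbert transform is applied as the $i\,sgn(\omega)$ multiplier used above, and the resulting analytic signal indeed has a one-sided spectrum:

```python
import numpy as np

# Discrete sketch of Eqns 6.29-6.30: build H(f) by multiplying the spectrum
# by i*sgn(w), form g = f - i*H(f), and check that g's spectrum is
# one sided: doubled for w > 0, zero for w < 0.
N = 256
t = np.arange(N) / N
f = np.cos(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

F = np.fft.fft(f)
sgn = np.zeros(N)
sgn[1:N // 2] = 1.0          # positive frequency bins
sgn[N // 2 + 1:] = -1.0      # negative frequency bins
Hf = np.fft.ifft(1j * sgn * F)    # Hilbert transform of f
g = f - 1j * Hf                   # analytic signal, Eqn 6.29
G = np.fft.fft(g)

print(np.max(np.abs(G[N // 2 + 1:])))                  # ~0: no negative frequencies
print(np.max(np.abs(G[1:N // 2] - 2 * F[1:N // 2])))   # ~0: doubled positives
```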

If we turn this idea around by generating analytic signals in the frequency domain, they will be one-sided, that is, causal, in the time domain. Thus, Hilbert transforms are intimately related to causality. We explore that relationship later in these blogs.

Our last application of the Hilbert transform is the envelope function given by

$E(t)=|g(t)|=\sqrt{f^{2}(t)+[\mathcal {H}(f(t))]^{2}}$ Equation 6.31

This nonlinear function of $f(t)$ has several interesting properties. First, the envelope is tangent to $f(t)$ at points where $E(t)$ and $f(t)$ meet. Since the envelope is greater than or equal to $f(t)$ everywhere, the envelope circumscribes $f(t)$, giving the envelope its name. An example of this property is shown in Fig 6.14.
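A numerical illustration of Equation 6.31 in Python/NumPy (the modulation and carrier are arbitrary choices): for an amplitude-modulated carrier whose modulation is much slower than the carrier, the envelope recovers the modulation:

```python
import numpy as np

# Envelope via Eqn 6.31: E(t) = sqrt(f^2 + H(f)^2).  For an
# amplitude-modulated carrier the envelope traces the modulation.
N = 1024
t = np.arange(N) / N
amp = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)    # slow modulation
f = amp * np.cos(2 * np.pi * 60 * t)           # fast carrier

F = np.fft.fft(f)
sgn = np.zeros(N)
sgn[1:N // 2] = 1.0
sgn[N // 2 + 1:] = -1.0
Hf = np.real(np.fft.ifft(1j * sgn * F))        # Hilbert transform of f
E = np.sqrt(f**2 + Hf**2)                      # the envelope

print(np.max(np.abs(E - amp)))   # ~0: the envelope tracks the modulation
```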

As a consequence of this tangency property, the envelope can arise as a so-called singular solution to certain nonlinear, first order differential equations. A family of solutions $f(t)$ is generated by using integration constants. But the envelope, although not a member of the family, still satisfies the differential equation because it has the same derivative at each point that members of the family have.

The envelope becomes particularly interesting for signal processing when we identify the family of functions $f(t)$ that form the envelope in terms of the frequency domain. We can show that a frequency-independent phase rotation $\phi \, sgn(\omega)$ of the spectrum of $f(t)$, a generalized Hilbert transform, leaves the envelope of $f(t)$ unchanged. Thus, the envelope outlines all the possible functions that are obtainable from a given $f(t)$ by rotating its phase spectrum through an arbitrary frequency independent angle. This characteristic of the envelope is useful in describing the effects that filters of unknown phase spectra may have had on data.
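The invariance claim can be verified numerically. In this Python/NumPy sketch (the signal and the angle are arbitrary choices), a frequency-independent phase rotation $\phi\,sgn(\omega)$ is applied in the spectrum; the waveform changes, but its envelope does not:

```python
import numpy as np

# A constant phase rotation phi*sgn(w) of the spectrum changes the
# waveform but leaves the envelope E(t) unchanged.
N = 1024
t = np.arange(N) / N
f = (1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)) * np.cos(2 * np.pi * 60 * t)

sgn = np.zeros(N)
sgn[1:N // 2] = 1.0
sgn[N // 2 + 1:] = -1.0

def envelope(x):
    X = np.fft.fft(x)
    Hx = np.real(np.fft.ifft(1j * sgn * X))
    return np.sqrt(x**2 + Hx**2)

phi = 1.1                                 # arbitrary rotation angle
F = np.fft.fft(f)
g = np.real(np.fft.ifft(np.exp(1j * phi * sgn) * F))   # phase-rotated signal

print(np.max(np.abs(f - g)))                      # waveforms clearly differ...
print(np.max(np.abs(envelope(f) - envelope(g))))  # ...but the envelopes agree
```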

Examples come from instrumentation transducers and seismic prospecting. It may be difficult or impossible to measure the transfer characteristic of a transducer because the measurement of the input may require yet another transducer. In the seismic case, the source signal is generally impossible to measure. Frequently, however, the relevant magnitude spectrum can be determined, leaving only the phase spectrum in question. Then, the effects of the transducer's (or seismic source's) magnitude spectrum can be removed by division in the frequency domain. A display of the resulting envelope then provides a result that is independent of the unknown phase shifts. The assumption is made that the unknown phase shift is constant across the bandwidth of the signal, a reasonable assumption for our example, where it is known that $\phi (\omega)$ is at least slowly varying over a narrow bandwidth.

In this four-part blog series, we gave a brief introduction to continuous Fourier transform theory with examples devoted to signal processing. More often than not, digital signals are derived from continuous ones; hence, a sound understanding of continuous theory is necessary. Fortunately, knowledge of the basic symmetries, properties, and five transform pairs of our repertoire will suffice for pursuing our study of Digital Signal Processing.

More later,

Nalin Pithwa