Parameters for selection of a processor — part 2

Processors suitable for digital control range from standard microprocessors like the 8051 to special purpose DSP processors; the primary differences lie in the instruction sets and in the speed of particular instructions, such as multiply. Standard microprocessors or general purpose processors are intended for laptops, workstations, and general digital data bookkeeping. Because digital control involves much numerical computation, the instruction sets of special-purpose DSP processors are rich in math capabilities, making them better suited for control applications than standard general purpose processors.

Processor requirements span a broad range. For a processor such as the one found in a microwave oven, the speed, instruction set, memory, word length, and addressing mode requirements are all very minimal. The consequences of a data error are minimal as well, especially relative to a data error in a PC or laptop while it is calculating an income tax return. PCs and laptops, on the other hand, require many megabytes of memory, and they benefit from speed, error correction, larger word size, and sophisticated addressing modes.

DSP processors generally need speed, word length, and math instructions such as multiply, multiply-and-accumulate, and circular addressing. One typical feature of signal processors not found in general purpose processors is the use of a Harvard architecture, which consists of separate data and program memory. Although separate data and program memories offer significant speed advantages, the IC pin count is higher when external memory is supported, because separate instruction address, instruction data, data address, and data buses must all be brought out. A modified Harvard architecture has been used which maintains some of the speed advantage while eliminating the requirement for separate program and data buses, greatly reducing pin count in processors that support external memory (almost all do).

Comparing control and signal processing applications: in the former, we often employ saturation and therefore absolutely require saturation arithmetic; in the latter, to ensure signal fidelity, the algorithms must usually be designed to prevent saturation altogether by scaling signals appropriately.

The consequences of numerical overflow in control computations can be serious, even destabilizing. In most forms of numerical computation, it is usually better to suffer the non-linearity of signal saturation than the effects of numerical overflow.
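To make this concrete, here is a minimal Python sketch (assuming a hypothetical 16-bit word; not modeled on any particular processor) contrasting wraparound overflow with saturating arithmetic:

```python
# Minimal sketch: 16-bit two's-complement accumulate, wraparound vs. saturation.
# The word width and the sample values are illustrative assumptions only.

INT16_MIN, INT16_MAX = -32768, 32767

def wrap16(x):
    """Wrap x into the 16-bit two's-complement range (plain overflow)."""
    return ((x - INT16_MIN) % 65536) + INT16_MIN

def sat16(x):
    """Clip x to the 16-bit range (saturation arithmetic)."""
    return max(INT16_MIN, min(INT16_MAX, x))

u = 30000 + 5000           # a control signal that has outgrown the word width
print(wrap16(u))           # -30536: the sign flips, potentially destabilizing
print(sat16(u))            # 32767: a bounded nonlinearity, usually tolerable
```

The wraparound result does not merely lose accuracy; it reverses sign, which is exactly the destabilizing effect described above.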

For most control applications, it is advantageous to select a processor that does not require much support hardware. One of the most commonly cited advantages of digital control is the freedom from noise in the control processor. Although it is true that controller noise is nominally limited to quantization noise, it is not true that the digital controller enjoys infinite immunity from noise. Digital logic is designed with certain noise margins, which of course are finite. When electromagnetic radiation impinges on the digital control system, there is a finite probability of making an error. One consequence is that although digital control can have a very high threshold of immunity, without error detection and correction the system is as likely to make a large error as a small one: the MSB and the LSB of a bus have equal margin against noise.

In addition to external sources of error-causing signals, the possibility for circuit failure exists. If a digital logic circuit threshold drifts outside the design range, the consequences are usually catastrophic.

For operational integrity, error detection is a very important feature.

I hope to compare, if possible, some families of Digital Control processors here, a bit later.

Regards,

Nalin Pithwa

Reference:

Digital Control of Dynamic Systems, Franklin, Powell and Workman.

Bilinear Transformation via Synthetic Division: The Matrix Method

(Authors: Prof. Navdeep M. Singh, VJTI, University of Mumbai and Nalin Pithwa, 1992).

Abstract: The bilinear transformation can be achieved by using the method of synthetic division. A good deal of simplification is obtained when the process is implemented as a sequence of matrix operations. Besides, the matrices are found to have simple structures suitable for algorithmic implementation.

I) INTRODUCTION:

Davies [1] proposed a method for bilinear transformation using synthetic division. The method is considerably simplified when the synthetic division is carried out as a set of matrix operations, because the operator matrices turn out to have simple structures and so can be easily generated.

II) THE ALGORITHM:

Given a discrete time transfer function H(z), it is transformed to G(s) in the s-plane under the transformation:

z=\frac{s+1}{s-1}

This can be achieved sequentially as:

H(z) \rightarrow H(s+1) \rightarrow H(\frac{1}{s}+1) \rightarrow H(\frac{2}{s}+1) \rightarrow H(\frac{2}{s-1}+1)=H(\frac{s+1}{s-1})=G(s)

The first step is to represent the given characteristic polynomial in the standard companion form. Since the companion form represents a monic polynomial, appropriate scaling is required in the course of the algorithm to ensure that the polynomial generated is monic after each step of transformation.

The method is developed for a third degree polynomial and then generalized.

Step 1:

H(z) \rightarrow H_{1}(s+1) \rightarrow H(s+1) (decreasing the roots by one)

Given H(z)=z^{3}+a_{2}z^{2}+a_{1}z+a_{0} (a_{3}=1) (monic)

Then, H_{1}(s+1)=s^{3}+b_{2}s^{2}+b_{1}s+b_{0} where b_{2}=a_{2}+1, b_{1}=a_{1}+b_{2}, b_{0}=a_{0}+b_{1}

H(s+1)=s^{3}+(a_{2}+3)s^{2}+(a_{1}+2a_{2}+3)s+(a_{0}+a_{1}+a_{2}+1).

In the companion form, obviously the following transformation is sought:

A = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_{0} & -a_{1} & -a_{2} \end{array}\right] and B=\left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 &1 \\-b_{0} & -b_{1} & -b_{2} \end{array} \right] and C = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_{0}-a_{1}-a_{2}-1 & -a_{1} - 2a_{2}-3 & -a_{2}-3 \end{array} \right]

Performing elementary row and column transformations on A using matrix operators, the final row operator and column operator matrices, P_{1} and Q_{1}, respectively are found to be P_{1} = \left[ \begin{array}{ccc} (a_{0}+a_{1})/a_{0} & a_{2}/a_{0} & 1/a_{0} \\ -1 & 1 & 0 \\ 0 & -1 & 1\end{array} \right] and Q_{1} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{array}\right].

Thus, P_{1}AQ_{1}=B.
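These operator identities are easy to check numerically. Here is a minimal Python (numpy) sketch, with arbitrary test coefficients, verifying P_{1}AQ_{1}=B for the third degree case:

```python
# Verify P1 * A * Q1 = B for a sample cubic z^3 + a2 z^2 + a1 z + a0.
import numpy as np

a0, a1, a2 = 0.5, 2.5, 3.0     # arbitrary test coefficients

A  = np.array([[0, 1, 0], [0, 0, 1], [-a0, -a1, -a2]])
P1 = np.array([[(a0 + a1) / a0, a2 / a0, 1 / a0], [-1, 1, 0], [0, -1, 1]])
Q1 = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1]])

b2 = a2 + 1                    # the recurrence from the text
b1 = a1 + b2
b0 = a0 + b1
B  = np.array([[0, 1, 0], [0, 0, 1], [-b0, -b1, -b2]])

print(np.allclose(P1 @ A @ Q1, B))   # True
```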

In general, for a polynomial of degree n,

P_{1} = \left[ \begin{array}{cccccc} (a_{0}+a_{1})/a_{0} & a_{2}/a_{0} & a_{3}/a_{0} & \ldots & a_{n-1}/a_{0} & 1/a_{0} \\ -1 & 1 & 0 & \ldots & 0 & 0 \\ 0 & -1 & 1 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & -1 & 1 \end{array}\right] and Q_{1} = \left[ \begin{array}{ccccc} 1 & 0 & 0 & \ldots & 0 \\ 1 & 1 & 0 & \ldots & 0 \\ 1 & 1 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & 1 & 1 & \ldots & 1 \end{array}\right], where both matrices are n \times n,

and B = P_{1}AQ_{1} = \left[ \begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & 1 \\ -b_{0} & -b_{1} & -b_{2} & \ldots & -b_{n-1} \end{array}\right], and B is also n \times n,

where b_{j} = 1 + \sum_{i=j}^{n-1}a_{i}, for j = 0, 1, 2, \ldots, (n-1).

Now, the transformation B \rightarrow C is sought. The row and column operator matrices, P_{c1} and Q_{c1} respectively, are: P_{c1} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 1 & -2 & 1 \end{array}\right] and Q_{c1} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{array}\right].

P_{c1} and Q_{c1} are found to have the following general structures:

P_{c1} = \left[ \begin{array}{ccccc}1 & 0 & \ldots & \ldots & 0 \\ -1 & 1 & \ldots & \ldots & 0 \\ 1 & -2 & 1 & \ldots & 0 \\ -1 & 3 & -3 & 1 & \ldots \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \ldots & \ldots & \ldots & \ldots & 1\end{array}\right] and Q_{c1} = \left[ \begin{array}{ccccc} 1 & 0 & \ldots & \ldots & 0 \\ 0 & 1 & 0 & \ldots & 0 \\ 0 & 1 & 1 & \ldots & 0 \\ 0 & 1 & 2 & 1 &\ldots \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \ldots & \ldots & \ldots & \ldots & 1\end{array}\right], both general matrices P_{c1} and Q_{c1}, being of dimensions n \times n.

P_{c1} is lower triangular and can be generated as P_{c1}(i,j) = [|P_{c1}(i-1,j)|+|P_{c1}(i-1,j-1)|] \times (-1)^{(i+j)}, where i=2, 3, 4, \ldots, n, together with the boundary values

P_{c1}(i,1)=(-1)^{(i-1)}, where i=1, 2, 3, \ldots, n, and P_{c1}(i,i)=1.

Similarly, Q_{c1} is lower triangular and can be generated as :

Q_{c1}(1,1)=1, Q_{c1}(i,i)=1, and Q_{c1}(i,1)=0 for i=2,3,4, \ldots, n, with

Q_{c1}(i,j) = Q_{c1}(i-1,j) + Q_{c1}(i-1,j-1).

Thus, when A is the companion form of a polynomial of any degree n, then P_{c1}(P_{1}AQ_{1})Q_{c1} gives H(s+1) in the companion form.
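The recurrences above translate directly into code. The following sketch (my own straightforward rendering of the recurrences, not code from the original paper) builds P_{c1} and Q_{c1} for any n; for small n it reproduces the operators given earlier:

```python
# Generate the n x n operators P_c1 (signed Pascal) and Q_c1 (shifted Pascal)
# from the recurrences in the text.
import numpy as np

def make_Pc1(n):
    P = np.zeros((n, n))
    for i in range(1, n + 1):
        P[i-1, 0] = (-1) ** (i - 1)        # first column: alternating signs
        P[i-1, i-1] = 1                    # unit diagonal
    for i in range(3, n + 1):
        for j in range(2, i):
            P[i-1, j-1] = (abs(P[i-2, j-1]) + abs(P[i-2, j-2])) * (-1) ** (i + j)
    return P

def make_Qc1(n):
    Q = np.zeros((n, n))
    Q[0, 0] = 1
    for i in range(2, n + 1):
        Q[i-1, 0] = 0                      # first column zero below row 1
        Q[i-1, i-1] = 1                    # unit diagonal
        for j in range(2, i):
            Q[i-1, j-1] = Q[i-2, j-1] + Q[i-2, j-2]
    return Q

print(make_Pc1(4))   # rows: 1 | -1,1 | 1,-2,1 | -1,3,-3,1
print(make_Qc1(4))   # rows: 1 | 0,1 | 0,1,1 | 0,1,2,1
```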

Step 2:

H(s+1) \rightarrow (\frac{s^{3}}{b_{0}}).H(1/s + 1) (scaled inversion).

Let H(s+1) = s^{3} + b_{2}s^{2}+b_{1}s+b_{0} where b_{0}=a_{0} + a_{1} + a_{2} +1, b_{1} = a_{1} + 2a_{2}+3, b_{2}=a_{2}+3.

The scaling of the entire polynomial by b_{0} ensures that the polynomial generated is monic and hence, can be represented in the companion form.

The following transformation is sought: C = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -b_{0} & -b_{1} & -b_{2} \end{array}\right] \rightarrow D = \left[\begin{array}{ccc} 0 & 1 & 0\\ 0 & 0 & 1 \\ -1/b_{0} & (-b_{2}/b_{0}) & (-b_{1}/b_{0})\end{array}\right]

The row and column operator matrices P_{2} and Q_{2} respectively are:

P_{2}=\left[\begin{array}{ccc} b_{1}/b_{2} & 0 & 0 \\ 0 & b_{2}/b_{1} & 0 \\ 0 & 0 & 1/b_{0}\end{array}\right] and Q_{2} = \left[ \begin{array} {ccc}1/b_{0} & 0 & 0 \\ 0 & b_{2}/b_{1} & 0 \\ 0 & 0 & b_{1}/b_{2}\end{array}\right]

In general, P_{2} = \left[ \begin{array}{ccccc} b_{1}/b_{n-1} & 0 & \ldots & 0 & 0 \\ 0 & b_{2}/b_{n-2} & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \ldots & b_{n-1}/b_{1} & 0 \\ 0 & 0 & \ldots & 0 & 1/b_{0} \end{array}\right] and Q_{2} = \left[ \begin{array}{ccccc} 1/b_{0} & 0 & \ldots & 0 & 0 \\ 0 & b_{n-1}/b_{1} & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \ldots & b_{2}/b_{n-2} & 0 \\ 0 & 0 & \ldots & 0 & b_{1}/b_{n-1} \end{array}\right], where both P_{2} and Q_{2} are diagonal n \times n matrices.

So, we get D=P_{2}A_{1}Q_{2} = \left[ \begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & 1 \\ -1/b_{0} & (-b_{n-1}/b_{0}) & (-b_{n-2}/b_{0}) & \ldots & (-b_{1}/b_{0})\end{array}\right], which is also an n \times n matrix (here A_{1} denotes the companion form obtained after Step 1).

Step 3:

(s^{3}/b_{0})H(1/s + 1) \rightarrow (s^{3}/b_{0})H(1 + k/s), with k=2 (scaling of roots)

If F(x) = a_{n}x^{n} + a_{n-1}x^{n-1} + a_{n-2}x^{n-2} + \ldots + a_{0}, then k^{n}F(x/k) = a_{n}x^{n} + ka_{n-1}x^{n-1} + k^{2}a_{n-2}x^{n-2}+ \ldots + k^{n}a_{0}, which scales every root of F by the factor k.
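As a quick sanity check of this identity: with F(x) = x^{2}+3x+2 (roots -1 and -2) and k=2, we get k^{2}F(x/k) = x^{2}+6x+8, whose roots -2 and -4 are indeed the original roots scaled by k.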

The following transformation is sought:

D = \left[\begin{array}{ccc}0 & 1 & 0 \\ 0 & 0 & 1 \\ -1/b_{0} & (-b_{2}/b_{0}) & (-b_{1}/b_{0}) \end{array}\right] \rightarrow E = \left[ \begin{array}{ccc}0 & 1 & 0 \\ 0 & 0 & 1 \\ -k^{3}/b_{0} & (-k^{2}b_{2}/b_{0}) & (-kb_{1}/b_{0})\end{array}\right].

The row and column operators, P_{3} and Q_{3}, respectively are:

P_{3}= \left[ \begin{array}{ccc} 1/k^{2} & 0 & 0 \\ 0 & 1/k & 0 \\ 0 & 0 &\  1\end{array}\right], and Q_{3}=\left[ \begin{array}{ccc} k^{3} & 0 & 0 \\ 0 & k^{2} & 0 \\ 0 & 0 & k \end{array}\right]

In general, P_{3} = \left[ \begin{array}{ccccc}1/k^{n-1} & 0 & 0 & \ldots & 0 \\ 0 & 1/k^{n-2} & 0 & \ldots & 0 \\ 0 & \ldots & \ldots & \ldots & 0 \\ 0 & \vdots & \vdots & \vdots & 1/k \\ 0 & \ldots & \ldots & \ldots & 1\end{array}\right], and Q_{3} = \left[ \begin{array}{ccccc}k^{n} & 0 & 0 & \ldots & 0 \\ 0 & k^{n-1} & 0 & \ldots & 0 \\ 0 & 0 & \ldots & \ldots & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots &  k\end{array}\right], where the general matrices P_{3} and Q_{3} are both of dimensions n \times n

and E=P_{3}A_{2}Q_{3} = \left[ \begin{array}{ccccc}0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ 0 & \ldots & \ldots & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ (-k^{n}/b_{0}) & (-b_{n-1}k^{n-1}/b_{0}) & \ldots & \ldots & (-kb_{1}/b_{0})\end{array}\right]

Step 4:

(s^{3}/b_{0})H(1+k/s) \rightarrow (s^{3}/b_{0})H(1+k/(s-1)) (increasing the roots by one)

For the third degree case, the following transformation is sought:

E = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -A_{0} & -A_{1} & -A_{2}\end{array}\right] and F = \left[ \begin{array}{ccc}0 & 1 & 0 \\ 0 & 0 & 1 \\ -B_{0} & -B_{1} & -B_{2}\end{array}\right] and G = \left[ \begin{array}{ccc}0 & 1 & 0\\ 0 & 0 & 1 \\ (-A_{0}+A_{1}-A_{2}+1) & (-A_{1}+2A_{2}-3) & (3-A_{2}) \end{array}\right],

where B_{2}=A_{2}-1, B_{1}=A_{1}-B_{2}, B_{0}=A_{0}-B_{1}.

The row and column operators P_{4} and Q_{4} are :

P_{4}= \left[ \begin{array}{ccc} (A_{0}-A_{1})/A_{0} & (-A_{2}/A _{0}) & (-1/A_{0}) \\ 1 & 1 & 0 \\ 0 & 1 & 1\end{array}\right] and Q_{4}=\left[ \begin{array}{ccc} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 1 & -1 & 1\end{array}\right]

In general, P_{4} = \left[ \begin{array}{cccccc} (C_{0}-C_{1})/C_{0} & (-C_{2}/C_{0}) & (-C_{3}/C_{0}) & \ldots & (-C_{n-1}/C_{0}) & (-1/C_{0}) \\ 1 & 1 & 0 & \ldots & 0 & 0 \\ 0 & 1 & 1 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & 1 & 1 \end{array}\right], and Q_{4}=\left[ \begin{array}{ccccc} 1 & 0 & 0 & \ldots & 0 \\ -1 & 1 & 0 & \ldots & 0 \\ 1 & -1 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ (-1)^{n+1} & \ldots & 1 & -1 & 1 \end{array}\right], both the general matrices P_{4} and Q_{4} being of dimensions n \times n,

where C_{0}=k^{n}/b_{0} and C_{j}=k^{n-j}b_{n-j}/b_{0} for j=1, 2, 3, \ldots, (n-1), and Q_{4} is a lower triangular matrix with q_{4}(i,j) = (-1)^{i+j} for j \leq i.

In general, we have F = P_{4}A_{3}Q_{4} = \left[ \begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ -\hat{C_{0}} & -\hat{C_{1}} & -\hat{C_{2}} & \ldots & -\hat{C_{n-1}}\end{array}\right]

where

-\hat{C_{0}} = -C_{0} + C_{1} - C_{2} + \ldots + (-1)^{n-1},

-\hat{C_{1}} = -C_{1} + C_{2} - C_{3} + \ldots - (-1)^{n-1},

-\hat{C_{2}} = -C_{2} + C_{3} - C_{4} + \ldots + (-1)^{n-1},

and so on.

Now, the transformation F \rightarrow G is to be achieved. The row and column operators, P_{c4} and Q_{c4}, respectively are: P_{c4}=\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 0\\ 1 & 2 & 1\end{array}\right] and Q_{c4}=\left[ \begin{array}{ccc}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1\end{array}\right]

In general, P_{c4}=\left[ \begin{array}{ccccc} 1 & 0 & 0 & \ldots & 0\\ 1 & 1 & 0 & \ldots & 0 \\ 1 & 2 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & \ldots & \ldots & \ldots & 1\end{array}\right] and Q_{c4}= \left[ \begin{array} {ccccc} 1 & 0 & \ldots & \ldots & 0 \\ 0 & 1 & 0 & \ldots & 0 \\ 0 & -1  & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \ldots & \ldots & \ldots & \ldots & 1\end{array}\right], where both the general matrices P_{c4} and Q_{c4} are of dimensions n \times n.

P_{c4}, a lower triangular matrix, can be easily generated as:

p_{c4}(i,i)=1; p_{c4}(i,j) = p_{c4}(i-1,j) + p_{c4}(i-1,j-1).

and Q_{c4}, also a lower triangular matrix, can be easily generated as:

q_{c4}(i,i)=1; and q_{c4}(i,1)=0 when i \neq 1; and

q_{c4}(i,j) = [|Q_{c4}(i-1,j)| + |Q_{c4}(i-1,j-1)|] \times (-1)^{i+j}.

Thus, P_{c4}(P_{4}EQ_{4})Q_{c4} finishes the process of bilinear transformation. Steps 2 and 3 can be combined so that the algorithm reduces to three steps only.

If the original polynomial is non-monic (that is, a_{n} \neq 1), then multiplying the final transformed polynomial by b_{0}a_{n} restores it to the standard form.

III. Stability considerations:

In the \omega-plane, the Schwarz canonical form approach [3] can be applied directly to the companion form of the bilinearly transformed polynomial obtained above, because a companion matrix is non-derogatory.

IV. An Example:

F_{0}(z) = 2z^{4} + 4z^{3} + 6z^{2} + 5z +1

F(z) = z^{4} + 2z^{3} + 3z^{2} + 2.5z + 0.5 (monic, so a_{3}=2, a_{2}=3, a_{1}=2.5, a_{0}=0.5)

b_{0} = a_{0} + a_{1} + a_{2} + a_{3} + 1 = 9

b_{1} = a_{1} + a_{2} + a_{3} + 1 = 8.5

Similarly, b_{2}=6 and b_{3}=3

B=P_{1}AQ_{1}= \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ -9 & -8.5 & -6 & -3\end{array}\right]

C = \left[ \begin{array}{cccc}1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 1 & -2 & 1 & 0 \\ -1 & 3 & -3 & 1\end{array}\right] \times \left[ \begin{array}{cccc}0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -9 & -8.5 & -6 & -3\end{array}\right] \times \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 2 & 1\end{array}\right]

Hence, C = \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -9 & -18.5 & -15 & -6\end{array}\right]

H(s+1)=s^{4}+6s^{3} + 15s^{2}+18.5s+9

Steps 2 and 3:

E=\left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ -2^{4}/9 & -2^{3}(6)/9 & -2^{2}(15)/9 & (-2)(18.5)/9\end{array}\right] = \left[ \begin{array}{cccc}0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ (-16/9) & (-48/9) & (-60/9) & (-37/9)\end{array}\right]

Step 4:

C_{0}=16/9, C_{1}=48/9, C_{2}=60/9, C_{3}=37/9.

-\hat{C_{0}}=0, -\hat{C_{1}}=-16/9, -\hat{C_{2}}=-32/9, -\hat{C_{3}}=-28/9

P_{c4}(P_{4}EQ_{4})Q_{c4}= \left[ \begin{array}{cccc}1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 \\ 1 & 3 & 3 & 1\end{array}\right] \times \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ 0 & -16/9 & -32/9 & -28/9\end{array}\right] \times \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 1 & -2 & 1\end{array}\right], so we get

P_{c4}(P_{4}EQ_{4})Q_{c4}=\left[ \begin{array}{cccc}0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ 0 & -3/9 & -3/9 & -1/9\end{array}\right]

The final monic polynomial is s^{4} + (1/9)s^{3}+(3/9)s^{2} + (3/9)s, and multiplying it by b_{0}a_{n}, that is, 9 \times 2 =18, restores it to the non-monic form:

18s^{4}+2s^{3}+6s^{2}+6s
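As a numerical cross-check of this example, here is a short numpy sketch that carries out the Step 1 and Step 4 matrix operations exactly as listed above (the bottom row of F = P_{4}EQ_{4} is taken from the text):

```python
# Numerical cross-check of the worked example:
# bilinear transform of F0(z) = 2z^4 + 4z^3 + 6z^2 + 5z + 1.
import numpy as np

# Step 1: B -> C, the companion form of H(s+1) = s^4 + 6s^3 + 15s^2 + 18.5s + 9.
Pc1 = np.array([[1, 0, 0, 0], [-1, 1, 0, 0], [1, -2, 1, 0], [-1, 3, -3, 1]], float)
Qc1 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 1, 1, 0], [0, 1, 2, 1]], float)
B = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [-9, -8.5, -6, -3]])
C = Pc1 @ B @ Qc1
print(C[3])            # [-9, -18.5, -15, -6], as in the text

# Step 4: F -> G, with the bottom row of F = P4 E Q4 taken from the text.
Pc4 = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [1, 2, 1, 0], [1, 3, 3, 1]], float)
Qc4 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, -1, 1, 0], [0, 1, -2, 1]], float)
F = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
              [0, -16/9, -32/9, -28/9]])
G = Pc4 @ F @ Qc4
print(G[3] * 9)        # [0, -3, -3, -1]: s^4 + (1/9)s^3 + (3/9)s^2 + (3/9)s
```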

V. Conclusion:

Since the operator matrices have few non-zero elements, storage requirements are low. The computational complexity is also reduced for higher-order systems, because fewer non-zero elements mean fewer manipulations. Additionally, the second and third steps can be combined, giving a three-step method. Thus, the algorithm achieves the bilinear transformation easily, especially for higher-order systems, compared with other available methods.

VI. References:

  1. Davies, A.C., “Bilinear Transformation of Polynomials”, IEEE Automatic Control, Nov. 1974.
  2. Barnett and Storey, “Matrix Methods in Stability Theory”, Thomas Nelson and Sons Ltd.
  3. Datta, B. N., “A Solution of the Unit Circle Problem via the Schwarz Canonical Form”, IEEE Automatic Control, Volume AC 27, No. 3, June 1982.
  4. Parthsarthy R., and Jaysimha, K. N., “Bilinear Transformation by Synthetic Division”, IEEE Automatic Control, Volume AC 29, No. 6, June 1986.
  5. Jury, E.I., “Theory and Applications of the z-Transform Method”, John Wiley and Sons Inc., 1984.


More on Motion Control and DSP

Introduction to the TMS320LF2407 DSP Controller.

The Texas Instruments TMS320LF2407 DSP Controller, referred to as the LF2407 in this article, is a programmable digital controller with a C2xx DSP CPU as the core processor. The LF2407 combines this powerful CPU with on-chip memory and useful peripherals integrated into a single piece of silicon. With the DSP core and control-oriented peripherals integrated into a single chip, users can design very compact and cost-effective digital control systems.

The LF2407 DSP controller offers 40 MIPS performance. This high processing speed of the C2xx CPU allows users to compute parameters in real-time rather than look up approximations from tables stored in memory. This fast performance is suitable for processing control parameters in applications such as notch filters or sensorless motor control algorithms where a large amount of calculations must be computed quickly.

While the brain of the LF2407 DSP is the C2xx core, the LF2407 contains several control-oriented peripherals onboard. These peripherals make virtually any digital control requirement possible. Their applications range from analog-to-digital conversion to pulse width modulation (PWM) generation. Communications peripherals make possible communication with external peripherals, personal computers, or other DSP processors.

The LF2407 peripheral set includes:

  • Two Event Managers (A and B)
  • General Purpose (GP) Timers
  • PWM generators for digital motor control
  • Analog to Digital Converter
  • Controller Area Network (CAN) interface
  • Serial Peripheral Interface (SPI) — synchronous serial port
  • Serial Communications Interface (SCI) — asynchronous serial port
  • General purpose bidirectional digital I/O (GPIO) pins
  • Watchdog Timer (“time-out” DSP reset device for system integrity)

Brief Introduction to Peripherals:

The following peripherals are those that are integrated onto the LF2407 chip.

Event Managers (EVA, EVB)

There are two event managers on the LF2407, the EVA and the EVB. The Event Manager is the most important peripheral in digital motor control. It contains the necessary functions needed to control electromechanical devices. Each EV is composed of functional “blocks” including timers, comparators, capture units for triggering on an event, PWM logic circuits, quadrature encoded pulse (QEP) circuits and interrupt logic.

The Analog to Digital Converter (ADC)

The ADC on the LF2407 is used whenever an external analog signal needs to be sampled and converted to a digital number. Examples of ADC applications range from sampling a control signal for use in a digital notch filter algorithm to using the ADC in a control feedback loop to monitor motor performance. Additionally, the ADC is useful in motor control applications because it allows for current sensing using an inexpensive shunt resistor instead of a more expensive current sensor.

The Controller Area Network (CAN) module:

While we will discuss the CAN module in a later blog, it is a useful peripheral for specific applications of the LF2407. The CAN module is used for multi-master serial communication between external hardware. The CAN has a high level of data integrity and is ideal for operation in noisy environments such as in an automobile, or industrial environments that require reliable communication and data integrity.

Serial Peripheral Interface (SPI) and Serial Communications Interface (SCI):

The SPI is a high speed synchronous communications port that is mainly used for communicating between the DSP and external peripherals or another DSP device. Typical uses of the SPI include communications with external shift registers, display drivers or ADC’s.

The SCI is an asynchronous communications port that supports asynchronous serial (UART) digital communications between the CPU and other asynchronous peripherals that use the standard NRZ (non-return to zero) format. It is useful in communications between external devices and the DSP. Since these communications peripherals are not directly related to motor control applications, they will not be discussed in this article.

Watchdog Timer

The Watchdog Timer (WD) monitors software and hardware operations and asserts a system reset when its internal counter overflows. The WD timer (when enabled) will count for a specific amount of time. It is necessary for the user's software to reset the WD timer periodically so that an unwanted reset does not occur. If for some reason there is a CPU disruption, the watchdog timer will generate a system reset. For example, if the software enters an endless loop or if the CPU becomes temporarily disrupted, the WD timer will overflow and a DSP reset will occur, which will cause the DSP program to branch to its initial starting point. Most error conditions that temporarily disrupt chip operation and inhibit proper CPU function can be cleared by the WD function. In this way, the WD increases the reliability of the CPU, thus ensuring system integrity.

General Purpose Bi-directional Digital I/O (GPIO) pins

Since there are only a finite number of pins available on the LF2407 device, many of the pins are multiplexed to either their primary function or the secondary GPIO function. In most cases, a pin's second function will be as a general-purpose input/output pin. The GPIO capability of the LF2407 is very useful as a means of controlling the functionality of pins and also provides another method to input or output data to and from the device. Nine 16-bit control registers control all the I/O and shared pins. There are two types of these registers:

  • I/O MUX Control Registers (MCRx) — Used to control the multiplexer selection that chooses between the primary function of a pin or the general purpose I/O function.
  • Data and Direction Control Registers (PxDATDIR): Used to control the data and data direction of bi-directional I/O pins.

Joint Test Action Group (JTAG) Port

The JTAG port provides a standard method of interfacing a personal computer with the DSP controller for emulation and development. The XDS510PP or equivalent emulator pod provides the connection between the JTAG module on the LF2407 and the personal computer. The JTAG module allows the PC to take full control over the DSP processor while Code Composer Studio is running. The schematic below shows the connection scheme from computer to the DSP board.

Computer\ Parallel\ Port \Longrightarrow XDS510PP\ Plus\ Emulator \Longrightarrow TI\ LF2407\ Evaluation\ Module

Phase Locked Loop (PLL) Clock Module

The phase locked loop (PLL) module is basically an input clock multiplier that allows the user to control the input clocking frequency to the DSP core. External to the LF2407, a clock reference (an oscillator or crystal) is generated. This signal is fed into the LF2407 and is multiplied or divided by the PLL. This new (higher or lower frequency) clock signal is then used to clock the DSP core. The LF2407's PLL allows the user to select a multiplication factor ranging from 0.5X to 4X that of the external clock signal. The default value of the PLL is 4X.

Memory Allocation Space

The LF2407 DSP Controller has three different allocations of memory it can use: data, program, and I/O memory space. Data space is used for program calculations, look-up tables, and any other memory used by an algorithm. Data memory can be in the form of the on-chip RAM or external RAM. Program memory is the location of the user's program code. Program memory on the LF2407 is mapped either to off-chip RAM (MP/MC- pin = 1) or to the on-chip flash memory (MP/MC- pin = 0), depending on the logic value of the MP/MC- pin.

I/O space is not really memory but a virtual memory address used to output data to peripherals external to the LF2407. For example, the digital-to-analog converter (DAC) on the Spectrum Digital Evaluation Module is accessed with I/O memory. If one desires to output data to the DAC, the data is simply sent to the configured address of I/O space with the “OUT” command. This process is similar to writing to data memory except that the OUT command is used and the data is copied to and outputted on the DAC instead of being stored in memory.

Types of Physical Memory.

Random Access Memory (RAM):

The LF2407 has 544 words of 16 bits each in the on-chip DARAM. These 544 words are partitioned into three blocks: B0, B1, and B2. Blocks B1 and B2 are allocated for use only as data memory. Memory block B0 is different from B1 and B2: it is normally configured as data memory, and hence primarily used to hold data, but it can also be configured as program memory, depending on the value of the core-level CNF bit.

  • (CNF =0) maps B0 to data memory.
  • (CNF=1) maps B0 to program memory.

The LF2407 also has 2K of single access RAM (SARAM). The addresses associated with the SARAM can be used for both data memory and program memory, and are software configurable to the internal SARAM or external memory.

Non-Volatile Flash Memory

The LF2407 contains 32K of on-chip flash memory that can be mapped to program space if the MP/MC- pin is made logic 0 (tied to ground). The flash memory provides a permanent location to store code that is unaffected by cutting power to the device. The flash memory can be electronically programmed and erased many times to allow for code development. Usually, the external RAM on the LF2407 Evaluation Module (EVM) board is used instead of the flash for code development, because a separate "flash programming" routine must be performed to load code into the flash memory. The on-chip flash is normally used in situations where the DSP program needs to be tested where a JTAG connection is not practical, or where the DSP needs to be tested as a "stand-alone" device. For example, if an LF2407 were used to develop a DSP control solution for an automobile braking system, it would be somewhat impractical to have a DSP/JTAG/PC interface in a car that is undergoing performance testing.

More later,

Nalin Pithwa

DSP Processors, Embodiments and Alternatives: Part I

Up till now, we have described digital signal processing in general terms, focusing on DSP fundamentals, systems, and application areas. Now, we narrow our focus to DSP processors. We begin with a high level description of the features common to virtually all DSP processors. We then describe typical embodiments of DSP processors, and briefly discuss alternatives to DSP processors, such as general purpose microprocessors, microcontrollers (for comparison purposes), and FPGA's. The next several blogs provide a detailed treatment of DSP processor architectures and features.

So, what are the "special" things that a DSP processor can perform? Well, like the name says, DSP processors do signal processing very well. What does "signal processing" mean? Really, it's a set of algorithms for processing signals in the digital domain. There are analog equivalents to these algorithms, but processing them digitally has been proven to be more efficient, and that has been the trend for many, many years. Signal processing algorithms are the basic building blocks for many applications in the world, from cell phones to MP3 players, digital still cameras, and so on. A summary of these algorithms is shown below:

  • FIR Filter: y(n)=\sum_{k=0}^{N}a_{k}x(n-k)
  • IIR Filter: y(n)=\sum_{k=0}^{M}a_{k}x(n-k)+\sum_{k=1}^{N}b_{k}y(n-k)
  • Convolution: y(n)=\sum_{k=0}^{N}x(k)h(n-k)
  • Discrete Fourier Transform: X(k)=\sum_{n=0}^{N-1}x(n)\exp{[-j(2 \pi /N)nk]}
  • Discrete Cosine Transform: F(u)=\sum_{x=0}^{N-1}c(u).f(x).\cos{\frac{\pi}{2N}u(2x+1)}

One or more of these algorithms are used in almost every signal processing application. FIR filters and IIR filters are almost fundamental to any DSP application: they remove unwanted noise from signals being processed. Convolution algorithms are used to look for similarities in signals, discrete Fourier transforms are used to represent signals in formats that are easier to process, and discrete cosine transforms are used in image processing applications. We will discuss the details of some of these algorithms later, but there are things to notice about this entire list of algorithms. First, they all contain a summing operation, the \sum function. In the computer world, this is equivalent to an accumulation of a large number of elements, which is implemented using a "for" loop. DSP processors are designed to have large accumulators because of this characteristic. They are specialized in this way. DSPs also have special hardware to perform the "for" loop operation so that the programmer does not have to implement this in software, which would be much slower.

The algorithms above also have multiplication of two different operands. Logically, if we were to speed up this operation, we would design a processor to accommodate the multiplication and accumulation of two operands like this very quickly. In fact, this is what has been done with DSPs. They are designed to support the multiplication and accumulation of data sets like this very quickly; for most processors, in just one cycle. Since these algorithms are very common in most DSP applications, tremendous execution savings can be obtained by exploiting these processor optimizations.
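As a concrete illustration of the multiply-accumulate pattern, here is a minimal Python sketch of a direct-form FIR filter; the tap values are made-up placeholders:

```python
# Minimal sketch of the FIR sum y(n) = sum_k a_k x(n-k).
# On a DSP, the inner-loop body maps to a single-cycle MAC instruction and
# the loop itself to zero-overhead hardware looping.

def fir(x, a):
    """Direct-form FIR filter: x is the input sequence, a the tap list."""
    y = []
    for n in range(len(x)):
        acc = 0.0                       # the accumulator
        for k in range(len(a)):
            if n - k >= 0:
                acc += a[k] * x[n - k]  # multiply-accumulate
        y.append(acc)
    return y

# Hypothetical 3-tap moving-average example:
print(fir([1.0, 2.0, 3.0, 4.0], [1/3, 1/3, 1/3]))
```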

There are also inherent structures in DSP algorithms that allow them to be separated and operated on in parallel. Just as in real life, if I can do more things in parallel, I can get more done in the same amount of time. As it turns out, signal processing algorithms have this characteristic as well. So, we can take advantage of this by putting multiple orthogonal (nondependent) execution units in our DSP processors and exploit this parallelism when implementing these algorithms.

DSP processors must also add some reality into the mix of these algorithms shown above. Take the IIR filter described above. You may be able to tell just by looking at this algorithm that there is a feedback component that essentially feeds back previous outputs into the calculation of the current output. Whenever you deal with feedback, there is always an inherent stability issue. IIR filters can become unstable just like other feedback systems. Careless implementation of feedback systems like the IIR filter can cause the output to oscillate instead of asymptotically decaying to zero (the preferred behaviour). This problem is compounded in the digital world, where we must deal with finite word lengths, a key limitation in all digital systems. We can alleviate this using saturation checks in software or use a specialized instruction to do this for us. DSP processors, because of the nature of signal processing algorithms, use specialized saturation underflow/overflow instructions to deal with these conditions efficiently.

There is more I can say about this, but you get the point. Specialization is really what it is all about with DSP processors; these processors are specifically designed to do signal processing really well. DSP processors may not be as good as other processors when dealing with non-signal-processing-centric algorithms (that's fine; I am not any good at medicine either). So, it's important to understand your application and choose the right processor. (A previous blog about DAC and ADC did mention this issue.)

(We describe below common features of DSP processors.)

Dozens of families of DSP processors are available on the market today. The salient features of some of the commonly used families of DSP processors are summarized in Table 1. Throughout this series, we will use these processors as examples to illustrate the architectures and features that can be found in commercial DSP processors.

Most DSP processors share some common features designed to support repetitive, numerically intensive tasks. The most important of these features are introduced briefly here. Each of these features and many others will be examined in greater detail in this blog article series.

Table 1.

\begin{tabular}{|c|c|c|c|c|} \hline    Vendor & Processor Family & Arithmetic Type & Data Width & Speed (MIPS or MHz)\\ \hline    Analog Devices & ADSP 21xx & Fixed Point & 16 bit & 25 MIPS\\ \hline    Texas Instruments & TMS320C55x & Fixed Point & 16 bit & 50 MHz to 300 MHz \\ \hline    \end{tabular}

Fast Multiply Accumulate

The most often cited feature of DSP processors is the ability to perform a multiply-accumulate operation (often called a MAC) in a single instruction cycle. The multiply-accumulate operation is useful in algorithms that involve computing a vector or matrix product, such as digital filters, correlation, convolution, and Fourier transforms. To achieve this functionality, DSP processors include a multiplier and accumulator integrated into the main arithmetic processing unit (called the data path) of the processor. In addition, to allow a series of multiply-accumulate operations to proceed without the possibility of arithmetic overflow, DSP processors generally provide extra bits in their accumulator registers to accommodate growth of the accumulated result. DSP processor data paths will be discussed in detail in a later blog here.
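A quick back-of-the-envelope sketch (in Python, assuming 16-bit operands; the filter length is an arbitrary example) of why those extra accumulator bits matter:

```python
# Sketch: bit growth when accumulating N products of 16-bit operands.
# A 16 x 16 multiply yields a product of up to 32 bits; accumulating N such
# products needs about log2(N) extra "guard" bits to avoid overflow.
import math

N = 256                                  # e.g. a 256-tap filter (assumed)
worst_product = (2 ** 15) * (2 ** 15)    # largest-magnitude 16-bit product
worst_sum = N * worst_product

print(math.ceil(math.log2(worst_sum)))   # 38: accumulator bits needed
print(math.ceil(math.log2(N)))           # 8: guard bits beyond the product width
```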

Multiple Access Memory Architecture.

A second feature shared by most DSP processors is the ability to complete several accesses to memory in a single instruction cycle. This allows the processor to fetch an instruction while simultaneously fetching operands for the instruction or storing the result of the previous instruction to memory. High bandwidth between the processor and memory is essential for good performance if repetitive, data intensive operations are required in an algorithm, as is common in many DSP applications.

In many processors, single cycle multiple memory accesses are subject to restrictions. Typically, all but one of the memory locations must reside on-chip, and multiple memory accesses can take place only with certain instructions. To support simultaneous access of multiple memory locations, DSP processors provide multiple on-chip buses, multiported on-chip memories, and in some cases, multiple independent memory banks. DSP memory structures are quite distinct from those of general purpose processors and microcontrollers. DSP processor memory architectures will be investigated in detail later.

Specialized Addressing Modes.

To allow arithmetic processing to proceed at maximum speed and to allow specification of multiple operands in a small instruction word, DSP processors incorporate dedicated address generation units. Once the appropriate addressing registers have been configured, the address generation units operate in the background, forming the addresses required for operand accesses in parallel with the execution of arithmetic instructions. Address generation units typically support a selection of addressing modes tailored to DSP applications. The most common of these is register-indirect addressing with post-increment, which is used in situations where a repetitive computation is performed on a series of data stored sequentially in memory. Special addressing modes (called circular or modulo addressing) are often supported to simplify the use of data buffers. Some processors support bit-reversed addressing, which eases the task of interpreting the results of the FFT algorithm. Addressing modes will be described in detail later.
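Both of these specialized modes are easy to emulate in software. Here is a small illustrative Python sketch, not tied to any particular address generation unit, of modulo addressing for a circular delay line and bit-reversed ordering for an 8-point FFT:

```python
# Sketch: circular (modulo) addressing and bit-reversed addressing.

N = 8
delay_line = [0.0] * N
ptr = 0

def circular_write(sample):
    """Write into a circular delay line; the pointer wraps modulo N,
    which a DSP address generation unit would do for free in hardware."""
    global ptr
    delay_line[ptr] = sample
    ptr = (ptr + 1) % N

def bit_reverse(i, nbits):
    """Reverse the low nbits of index i (unscrambles radix-2 FFT output)."""
    return int(format(i, "0{}b".format(nbits))[::-1], 2)

circular_write(1.0)                            # the pointer wraps transparently
print([bit_reverse(i, 3) for i in range(8)])   # [0, 4, 2, 6, 1, 5, 3, 7]
```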

Specialized Execution Control.

Because many DSP algorithms involve performing repetitive computations, most DSP processors provide special support for efficient looping. Often, a special loop or repeat instruction is provided that allows the programmer to implement a for-next loop without expending any instruction cycles for updating and testing the loop counter or for jumping back to the top of the loop.

Some DSP processors provide other execution control features to improve performance, such as context switching and low-latency, low overhead interrupts for fast input/output.

Hardware looping and interrupts will also be discussed later in this blog series.

Peripheral and Input/Output Interfaces

To allow low-cost, high performance input and output (I/O), most DSP processors incorporate one or more serial or parallel I/O interfaces, and specialized I/O handling mechanisms such as Direct Memory Access (DMA). DSP processor peripheral interfaces are often designed to interface directly with common peripheral devices like analog-to-digital and digital-to-analog converters.

As integrated circuit manufacturing techniques have improved in terms of density and flexibility, DSP processor vendors have included not just peripheral interfaces, but complete peripheral devices on-chip. Examples of this are chips designed for cellular telephone applications, several of which incorporate a DAC and ADC on chip.

Various features of DSP processor peripherals will be described later in this blog series.

In the next blog, we will discuss DSP Processor embodiments.

More later,

Aufwiedersehen,

Nalin Pithwa

A Brief Word on Motion Control and DSP

DSP based electromechanical motion control is very hard to implement in real-life engineering systems.

So, why would we choose to integrate a DSP into a motion control system? Well, the advantages of such a design are numerous. DSP based control gives us a large degree of freedom in developing computationally intensive algorithms that would otherwise be difficult or impossible without a DSP. Advanced control algorithms can sometimes drastically increase the performance and efficiency of the electromechanical system being controlled.

For example, consider a typical Heating-Ventilation-and-Air-Conditioning (HVAC) system. A standard HVAC system contains at least three electric motors: the compressor motor, the condenser fan motor, and the air handler fan motor. Typically, indoor temperature is controlled by simply cycling (turning on and off) the system. This control method puts unnecessary wear on system components and is inefficient. An advanced motor drive system incorporating DSP control could continuously adjust both the air-conditioner compressor speed and the indoor fan to maintain the desired temperature and optimal system performance. This control scheme would be much more energy efficient and could extend the operational lifespan of the system.

More later,

Nalin Pithwa

DSP Processors, Embodiments and Alternatives: Part II

DSP Processor Embodiments:

The most familiar form of DSP processor is the single chip processor, which is incorporated into a printed circuit board design by the system designer. However, with the widespread proliferation of DSP processors into many kinds of applications, the increasing level of integration in all kinds of DSP products, and the development of new packaging techniques, DSP processors can now be found in many different forms, sometimes masquerading as something else entirely. In this blog, we briefly discuss some of the forms that DSP processors take.

Multichip Modules.

A multi-chip module (MCM) is generically an electronic assembly (such as a package with a number of conductor terminals or "pins") where multiple integrated circuits (ICs), semiconductor dies, and/or other discrete components are integrated, usually onto a unifying substrate, so that in use it is treated as if it were a single component (as though it were a larger IC). Other terms, such as "hybrid" or "hybrid integrated circuit", also refer to MCMs.

One advantage of this approach is higher packaging density: more circuits per square inch of printed circuit board. This in turn results in increased operating speed and reduced power dissipation. As multichip module packaging technology advanced, vendors began to offer multichip modules containing DSP processors.

For example, Texas Instruments sells the 42 dual C40 multichip module (MCM) containing two SMJ320C40 digital signal processors (DSPs) with 128K words × 32 bits (42D) or 256K words × 32 bits (42C) of zero-wait-state SRAMs mapped to each local bus. Global address and data buses with two sets of control signals are routed externally for each processor, allowing external memory to be accessed. The external global bus provides a continuous address reach of 2G words.

It delivers a whopping 80 million floating-point operations per second (MFLOPS), with a 496-MBps burst I/O rate for 40-MHz modules.

Multiple Processors on a Chip.

As IC manufacturing technology became more sophisticated, DSP processors began to squeeze more features and performance onto a single-chip processor, and vendors even combine multiple processors on a single IC. As with multichip modules, multiprocessor chips provide increased performance and reduced power compared with designs using multiple, separately packaged processors. However, the selection of multiprocessor chip offerings is limited to only a few devices.

Chip Sets

In a computer system, a chipset is a set of electronic components in an integrated circuit that manages the data flow between the processor, memory, and peripherals. It is usually found on the motherboard. Chipsets are usually designed to work with a specific family of microprocessors. Because it controls communications between the processor and external devices, the chipset plays a crucial role in determining system performance.

DSP chipsets seemed to follow the move towards processor integration in PCs. While some manufacturers combine multiple processors on a single chip and others use multichip modules to combine multiple chips into one package, another variation on DSP processor packaging is to divide the DSP into two or more separate packages. This was the approach that Butterfly DSP had taken with their DSP chip set, which consisted of the LH9320 address generator and the LH9124 processor. Dividing the processor into two or more packages may make sense if the processor is very complex and if the number of input/output pins is very large. Splitting functionality into multiple integrated circuits may allow the use of much less expensive IC packages, and thereby provide cost savings. This approach also provides added flexibility, allowing the system designer to combine the individual ICs in the configuration best suited for the application. For example, with the Butterfly chip set, multiple address generator chips could be used in conjunction with one processor chip. Finally, chip sets have the potential of providing more I/O pins than individual chips. In the case of the Butterfly chip set, the use of separate address generator and processor chips allowed the processor to have eight 24-bit external data buses, many more than are provided by common single-chip processors.

We will continue with DSP cores in a later blog,

Nalin Pithwa

Digital Signal Processing and DSP Systems

We can look upon a Digital Signal Processing (DSP) system as any electronic system making use of digital signal processing. Our informal definition of digital signal processing is the application of mathematical operations to digitally represented signals. Signals are represented digitally as sequences of samples. Often, these samples are obtained from physical signals (for example, audio signals) through the use of transducers (such as microphones) and analog-to-digital converters. After mathematical processing, digital signals may be converted back to physical signals via digital-to-analog converters.

In some systems, the use of DSP is central to the operation of the system. For example, modems and digital cellular phones or the so-called smartphones rely very heavily on DSP technology. In other products, the use of DSP is less central, but often offers important competitive advantages in terms of features, power/performance and price. For example, manufacturers of primarily analog consumer electronics devices like audio amplifiers are employing DSP technology to provide new features.

In this little article, a high level overview of digital signal processing is presented. We first discuss the advantages of DSP over analog systems. We then describe some salient features and characteristics of DSP systems in general. We conclude with a brief look at some important classes of DSP applications.

1.1) Advantages of DSP

Digital signal processing enjoys several advantages over analog signal processing. The most significant of these is that DSP systems are able to accomplish tasks inexpensively that would be difficult or even impossible using analog electronics. Examples of such applications include speech synthesis, speech recognition, and high-speed modems involving error-correction coding. All of these tasks involve a combination of signal processing and control (example, making decisions regarding received bits or received speech) that is extremely difficult to implement using analog techniques.

DSP systems also enjoy two additional advantages over analog systems.

  • Insensitivity to environment: Digital systems, by their very nature, are considerably less sensitive to environmental conditions than analog systems. For example, an analog circuit's behaviour depends on its temperature. In contrast, barring catastrophic failures, a DSP system's operation does not depend on its environment; whether in the snow or in the desert, a DSP system delivers the same response.
  • Insensitivity to component tolerances: Analog components are manufactured to particular tolerances; a resistor, for example, might be guaranteed to have a resistance within one percent of its nominal value. The overall response of an analog system depends on the actual values of all of the analog components used. As a result, two analog systems of exactly the same design will have slightly different responses due to slight variations in their components. In contrast, correctly functioning digital components always produce the same outputs given the same inputs.

These two advantages combine synergistically to give DSP systems an additional advantage over analog systems:

  • Predictable, repeatable behaviour. Because a DSP system’s output does not vary due to environmental factors or component variations, it is possible to design systems having exact, known responses that do not vary.

Finally, some DSP systems may also have two other advantages over analog systems.

  • Reprogrammability. If a DSP system is based on programmable processors, it can be reprogrammed — even in the field — to perform other tasks. In contrast, analog systems require physically different components to perform different tasks.
  • Size: The size of analog components varies with their values, for example, a 100 microFarad capacitor used in an analog filter is physically larger than a 10 picoFarad capacitor used in a different analog filter. In contrast, DSP implementations of both filters might well be the same size — indeed, might even use the same hardware, differing only in their filter coefficients — and might be smaller than either of the two analog implementations.

These advantages, coupled with the fact that DSP can take advantage of denser and denser VLSI manufacturing processes, increasingly make DSP the solution of choice for signal processing.

1.2) Characteristics of DSP Systems:

In this section, we describe a number of characteristics common to all DSP systems, such as algorithms, sample rate, clock rate and arithmetic type.

Algorithms

DSP systems are often characterized by the algorithms they implement. The algorithm specifies the arithmetic operations to be performed, but does not specify how that arithmetic is to be implemented: it might be implemented in software on an ordinary microprocessor or programmable signal processor, or it might be implemented in a custom IC. The selection of an implementation technology is determined in part by the required speed and arithmetic precision. The table given below lists some common types of DSP algorithms and some applications in which they are typically used.

Table 1-1: Common DSP algorithms and typical applications

\begin{tabular}{|l|l|} \hline    DSP Algorithm & System Application \\ \hline    Speech coding and decoding & Cell phones, personal comm systems, secure comm \\ \hline    Speech encryption and decryption & Cell phones, personal comm systems, secure comm \\ \hline    Speech recognition & Multimedia workstations, robotics, automotive applications \\ \hline    Speech synthesis & Multimedia PCs, advanced user interfaces, robotics \\ \hline    Speaker identification & Security, multimedia workstations, advanced user interfaces \\ \hline    Hi-fi audio encoding and decoding & Consumer audio, consumer video, digital audio broadcast, professional audio, multimedia computers \\ \hline    Modem algorithms & Digital cellular phones, GPS, data/fax modems, secure comm \\ \hline    Noise cancellation & Professional audio, advanced vehicular audio, industrial applications \\ \hline    Audio equalization & Consumer audio, professional audio, advanced vehicular audio, music \\ \hline    Ambient acoustics emulation & Consumer audio, professional audio, advanced vehicular audio, music \\ \hline    Audio mixing and editing & Professional audio, music, multimedia computers \\ \hline    Sound synthesis & Professional audio, music, multimedia PCs, advanced user interfaces \\ \hline    Vision & Security, multimedia PCs, instrumentation, robotics, navigation \\ \hline    Image compression and decompression & Digital photos, digital video, video-over-voice \\ \hline    Image compositing & Multimedia computers, consumer video, navigation \\ \hline    Beamforming & Navigation, medical imaging, radar, sonar, signals intelligence \\ \hline    Echo cancellation & Speakerphones, modems, and telephone switches \\ \hline    Spectral estimation & Signals intelligence, radar, sonar, professional audio, music \\ \hline    \end{tabular}

Sample Rates

A key characteristic of a DSP system is its sample rate: the rate at which samples are consumed, produced, or processed. Combined with the complexity of the algorithms, the sample rate determines the required speed of the implementation technology. A familiar example is the digital audio compact disc (CD) player, which produces samples at a rate of 44.1 kHz on two channels.

Of course, a DSP system may use more than one sample rate; such systems are said to be multirate DSP systems. An example is a converter from the CD sample rate of 44.1 kHz to the digital audio tape (DAT) rate of 48 kHz. Because of the awkward ratio between these sample rates, the conversion is usually done in stages, typically with at least two intermediate sample rates. Another example of a multirate algorithm is a filter bank, used in applications such as speech, audio, and video encoding and some signal analysis algorithms. Filter banks typically consist of stages that divide the signal into high and low frequency portions. These new signals are then downsampled and divided again. In multirate applications, the ratio between the highest and the lowest sample rates in the system can become quite large, sometimes exceeding 100,000.
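To see why the 44.1 kHz to 48 kHz conversion is done in stages, here is a two-line Python check of the exact rational ratio (the particular staging shown is just one assumed possibility, not the only one):

```python
# The exact CD-to-DAT rate ratio, and one plausible staging of it.
from fractions import Fraction

ratio = Fraction(48000, 44100)
print(ratio)        # 160/147: interpolate by 160, decimate by 147 overall

# One possible factoring into smaller stages (an assumed choice):
print(Fraction(4, 3) * Fraction(8, 7) * Fraction(5, 7))   # also 160/147
```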

The range of sample rates encountered in signal processing systems is huge. Roughly speaking, sample rates for applications range over 12 orders of magnitude. Only at the very top of that range is digital implementation rare. This is because the cost and difficulty of implementing a given algorithm digitally increases with the sample rate. DSP algorithms used at higher sample rates tend to be simpler than those at lower sample rates.

Many DSP systems must meet extremely rigorous speed goals, since they operate on lengthy segments of real world signals in real time. Where other kinds of systems (like databases) may be required to meet performance goals on average, real time DSP systems often must meet such goals in every instance. In such systems, failure to maintain the necessary processing rates is considered a serious malfunction. Such systems are often said to be subject to hard real time constraints. For example, let's suppose that the compact disc to digital audio tape sample rate converter discussed above is to be implemented as a real-time system, accepting digital signals at the CD sample rate of 44.1 kHz and producing digital signals at the DAT sample rate of 48 kHz. The converter must be ready to accept a new sample from the CD source every 22.6 microseconds and must produce a new output sample for the DAT device every 20.8 microseconds. If the system ever fails to accept or produce a sample on this schedule, data are lost and the resulting output signal is corrupted. The need to meet such hard real-time constraints creates special challenges in the design and debugging of real-time DSP systems.

Clock Rates

Digital electronic systems are often characterized by their clock rates. The clock rate usually refers to the rate at which the systems performs its most basic unit of work. In mass-produced commercial products, clock rates of up to 100 MHz are common, with faster rates found in some high-performance products. For DSP systems, the ratio of system clock rate to sample rate is one of the most important characteristics used to determine how the system will be implemented. The relationship between the clock rate and the sample rate partially determines the amount of hardware needed to implement an algorithm with a given complexity in real-time. As the ratio of sample rate to clock rate increases, so does the amount and complexity of hardware required to implement the algorithm.

Numeric Representations

Arithmetic operations such as addition and multiplication are at the heart of DSP algorithms and systems. As a result, the numeric representations and type of arithmetic used can have a profound influence on the behaviour and performance of a DSP system. The most important choice for the designer is between fixed point and floating point arithmetic. Fixed point arithmetic represents numbers in a fixed range (for example, -1.0 to +1.0) with a finite number of bits of precision (called the word width). For example, an eight bit fixed point number provides a resolution of 1/256 of the range over which the number is allowed to vary. Numbers outside of the specified range cannot be represented; arithmetic operations that would result in a number outside this range either saturate (that is, are limited to the largest positive or negative representable value) or wrap around (that is, the extra bits from the arithmetic operation are discarded).

Floating point arithmetic greatly expands the representable range of values. Floating point arithmetic represents every number in two parts: a mantissa and an exponent. The mantissa is, in effect, forced to lie between -1.0 and +1.0, while the exponent keeps track of the amount by which the mantissa must be scaled (in terms of powers of two) in order to create the actual value represented. That is,

value=mantissa \times 2^{exponent}

Floating point arithmetic provides much greater dynamic range (that is, the ratio between the largest and smallest values that can be represented) than fixed point arithmetic. Because it reduces the probability of overflow and the necessity of scaling, it can considerably simplify algorithm and software design. Unfortunately, floating point arithmetic is generally slower and more expensive than fixed point arithmetic, and is more complicated to implement in hardware.
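The mantissa/exponent split is easy to inspect in Python, whose floats are IEEE 754 doubles; math.frexp returns essentially the decomposition described above (with the minor convention difference that the mantissa's magnitude lies in [0.5, 1)):

```python
# value = mantissa * 2**exponent, inspected via the standard library.
import math

value = 40.0
mantissa, exponent = math.frexp(value)
print(mantissa, exponent)               # 0.625 6, since 40 = 0.625 * 2**6
print(math.ldexp(mantissa, exponent))   # 40.0: ldexp reassembles the value
```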

Classes of DSP Applications.

Digital signal processing, in general, and DSP processors in particular, are used in an extremely diverse range of applications, from radar systems to consumer electronics. Naturally, no one processor can meet the needs of all or even most applications. Therefore, the first task for the designer selecting a DSP processor is to weigh the relative importance of performance, price, power consumption, integration, ease of development, and other factors for the application at hand. Here, we briefly touch on the needs of just a few categories of DSP applications.

Low cost embedded systems.

The largest applications (in terms of dollar volume) for digital signal processors are inexpensive, high-volume embedded systems, such as cellular telephones, disk drives (where DSPs are used for servo control), and modems. In these applications, cost and integration considerations are paramount. For portable, battery operated products, power consumption is also critical. In these high volume, embedded applications, performance and ease of development considerations are often given less weight, even though these applications usually involve the development of software to run on the DSP and custom hardware that interfaces with the DSP.

High Performance Applications

Another important class of applications involves processing large volumes of data with complex algorithms for specialized needs. This includes uses like sonar and seismic exploration, where production volumes are lower, algorithms are more demanding, and product designs are larger and more complex. As a result, designers favour processors with maximum performance, ease of use, and support for multiprocessor configurations. In some cases, rather than designing their own hardware and software from scratch, designers of these systems assemble systems using standard development boards and ease their software development tasks by using existing software libraries.

Personal Computer Based Multimedia.

Another class of applications is PC based multimedia functions. Increasingly, PCs are incorporating DSP processors to provide such varied capabilities as voice mail, data and fax modems, music and speech synthesis, and image compression; software defined radios (SDRs) and smartphones are further examples. As with other high-volume embedded applications, PC multimedia also demands high performance, since a DSP processor in a multimedia PC may be called on to perform multiple functions simultaneously. In addition to performing each function efficiently, the DSP processor must have the ability to efficiently switch between functions. Memory capacity may also be an issue in these applications, because many multimedia applications require the ability to manipulate large amounts of data.

There has also been a trend to incorporate DSP-like functions into general purpose processors to better handle signal processing tasks. In some cases, such augmented microprocessors may be able to handle certain tasks without the need for a separate DSP processor. However, I think the combination of a general purpose processor (like an ARM for control purposes in a smartphone) with DSP processors has much to offer and will continue to be the dominant implementation of PC, SDR, or smartphone based multimedia for some time to come.

More later,

Nalin Pithwa