## Parameters for selection of a processor — part 2

Processors suitable for digital control range from standard microprocessors such as the 8051 to special-purpose DSP processors, the primary differences being the instruction set and the speed of particular instructions, such as multiply. Standard microprocessors, or general purpose processors, are intended for laptops, workstations, and general digital data bookkeeping. Because digital control involves a great deal of numerical computation, the instruction sets of special-purpose DSP processors are rich in math capabilities, making them better suited for control applications than standard or general purpose microprocessors.

Processors span a broad range of requirements. For example, the speed, instruction set, memory, word length, and addressing-mode requirements of the processor in a microwave oven are all very minimal. The consequences of a data error are minimal as well, especially relative to a data error in a PC or laptop while it is calculating an income tax return. PCs and laptops, on the other hand, require many megabytes of memory, and they benefit from speed, error correction, larger word size, and sophisticated addressing modes.

DSP processors generally need speed, long word length, and math instructions such as multiply, multiply-and-accumulate, and circular addressing. One typical feature of signal processors not found in general purpose processors is the Harvard architecture, which uses separate data and program memory. Although separate data and program memories offer significant speed advantages, the IC pin count is higher when external memory is allowed, because the instruction-address, instruction-data, data-address, and data buses are all separate. A modified Harvard architecture is therefore often used: it maintains some of the speed advantage while eliminating the requirement for separate program and data buses, greatly reducing pin count in processors that allow external memory (almost all have this feature).

Control and signal processing applications also differ in how they treat saturation. In control applications, we often employ saturation deliberately and therefore absolutely require saturation arithmetic; in most signal processing applications, by contrast, the algorithms must be designed to prevent saturation by scaling signals appropriately, in order to preserve signal fidelity.

The consequences of numerical overflow in control computations can be serious, even destabilizing. In most forms of numerical computation, it is usually better to suffer the non-linearity of signal saturation than the effects of numerical overflow.
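To make the difference concrete, here is a small sketch (Python, purely illustrative; the 16-bit width and helper names are my own, not from the reference) contrasting two's-complement wraparound overflow with saturation arithmetic:

```python
INT16_MIN, INT16_MAX = -32768, 32767

def wrap16(x):
    # Two's-complement wraparound: what plain 16-bit arithmetic does on overflow.
    return ((x + 32768) % 65536) - 32768

def sat16(x):
    # Saturation arithmetic: clamp at full scale instead of wrapping around.
    return max(INT16_MIN, min(INT16_MAX, x))

u = 30000 + 5000      # a control signal just past full scale (true value 35000)
print(wrap16(u))      # -30536: the sign flips, a large, potentially destabilizing error
print(sat16(u))       # 32767: bounded, only a mild nonlinearity
```

The wrapped result is not merely wrong, it has the wrong sign, which is exactly the destabilizing behavior that saturation arithmetic avoids.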

For most control applications, it is advantageous to select a processor that does not require much support hardware. One of the most commonly cited advantages of digital control is the freedom from noise in the control processor. Although it is true that controller noise is nominally limited to quantization noise, it is not true that the digital controller enjoys infinite immunity from noise. Digital logic is designed with certain noise margins, which of course are finite. When electromagnetic radiation impinges on the digital control system, there is a finite probability of making an error. Consequently, although a digital controller can have a very high threshold of immunity, without error detection and correction it is just as likely to make a large error as a small one — the MSB and the LSB of a bus have equal margin against noise.

In addition to external sources of error-causing signals, the possibility for circuit failure exists. If a digital logic circuit threshold drifts outside the design range, the consequences are usually catastrophic.

For operational integrity, error detection is a very important feature.

I hope to compare, if possible, some families of Digital Control processors here, a bit later.

Regards,

Nalin Pithwa

Reference:

Digital Control of Dynamic Systems, Franklin, Powell and Workman.

## Bilinear Transformation via Synthetic Division: The Matrix Method

(Authors: Prof. Navdeep M. Singh, VJTI, University of Mumbai and Nalin Pithwa, 1992).

Abstract: The bilinear transformation can be achieved by using the method of synthetic division. A good deal of simplification is obtained when the process is implemented as a sequence of matrix operations. Besides, the matrices are found to have simple structures suitable for algorithmic implementation.

I. INTRODUCTION:

Davies [1] proposed a method for bilinear transformation using synthetic division. This method can be quite simplified when the synthetic division is carried out as a set of matrix operations because the operator matrices are found to have simple structures and so can be easily generated.

II. THE ALGORITHM:

Given a discrete time transfer function $H(z)$, it is transformed to $G(s)$ in the s-plane under the transformation:

$z=\frac{s+1}{s-1}$

This can be achieved sequentially as:

$H(z) \rightarrow H(s+1) \rightarrow H(\frac{1}{s}+1) \rightarrow H(\frac{2}{s}+1) \rightarrow H(\frac{2}{s-1}+1)=H(\frac{s+1}{s-1})=G(s)$

The first step is to represent the given characteristic polynomial in the standard companion form. Since the companion form represents a monic polynomial, appropriate scaling is required in the course of the algorithm to ensure that the polynomial generated remains monic after each step of transformation.

The method is developed for a third degree polynomial and then generalized.

Step 1:

$H(z) \rightarrow H_{1}(s+1) \rightarrow H(s+1)$ (decreasing of roots by one)

Given $H(z)=z^{3}+a_{2}z^{2}+a_{1}z+a_{0}$, with $a_{3}=1$ (monic).

Then, $H_{1}(s+1)=s^{3}+b_{2}s^{2}+b_{1}s+b_{0}$ where $b_{2}=a_{2}+1$, $b_{1}=a_{1}+b_{2}$, $b_{0}=a_{0}+b_{1}$

$H(s+1)=s^{3}+(a_{2}+3)s^{2}+(a_{1}+2a_{2}+3)s+(a_{0}+a_{1}+a_{2}+1)$.

In the companion form, obviously the following transformation is sought:

$A = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_{0} & -a_{1} & -a_{2} \end{array}\right]$ and $B=\left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 &1 \\-b_{0} & -b_{1} & -b_{2} \end{array} \right]$ and $C = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_{0}-a_{1}-a_{2}-1 & -a_{1} - 2a_{2}-3 & -a_{2}-3 \end{array} \right]$

Performing elementary row and column transformations on A using matrix operators, the final row operator and column operator matrices, $P_{1}$ and $Q_{1}$, respectively are found to be $P_{1} = \left[ \begin{array}{ccc} (a_{0}+a_{1})/a_{0} & a_{2}/a_{0} & 1/a_{0} \\ -1 & 1 & 0 \\ 0 & -1 & 1\end{array} \right]$ and $Q_{1} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{array}\right]$.

Thus, $P_{1}AQ_{1}=B$.

In general, for a polynomial of degree n,

$P_{1} = \left[ \begin{array}{ccccc} (a_{0}+a_{1})/a_{0} & a_{2}/a_{0} & a_{3}/a_{0} & \ldots & 1/a_{0}\\ -1 & 1 & 0 & \ldots & 0\\ 0 & -1 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & -1 & 1\end{array}\right]$ and $Q_{1} = \left[ \begin{array}{ccccc} 1 & 0 & 0 & \ldots & 0 \\ 1 & 1 & 0 & \ldots & 0 \\ 1 & 1 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & 1 & \ldots & 1\end{array}\right]$, where both the matrices are $n \times n$.

and $B = P_{1}AQ_{1} = \left[ \begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0\\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ -b_{0} & -b_{1} & -b_{2} & \ldots & -b_{n-1} \end{array}\right]$, and B is also $n \times n$.

where $b_{j} = 1 + \sum_{i=j}^{n-1}a_{i}$, for $j = 0, 1, 2, \ldots, (n-1)$.

Now, the transformation $B \rightarrow C$ is sought. The row and column operator matrices, $P_{c1}$ and $Q_{c1}$ respectively, are: $P_{c1} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 1 & -2 & 1 \end{array}\right]$ and $Q_{c1} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{array}\right]$.

$P_{c1}$ and $Q_{c1}$ are found to have the following general structures:

$P_{c1} = \left[ \begin{array}{ccccc}1 & 0 & \ldots & \ldots & 0 \\ -1 & 1 & \ldots & \ldots & 0 \\ 1 & -2 & 1 & \ldots & 0 \\ -1 & 3 & -3 & 1 & \ldots \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \ldots & \ldots & \ldots & \ldots & 1\end{array}\right]$ and $Q_{c1} = \left[ \begin{array}{ccccc} 1 & 0 & \ldots & \ldots & 0 \\ 0 & 1 & 0 & \ldots & 0 \\ 0 & 1 & 1 & \ldots & 0 \\ 0 & 1 & 2 & 1 &\ldots \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \ldots & \ldots & \ldots & \ldots & 1\end{array}\right]$, both general matrices $P_{c1}$ and $Q_{c1}$, being of dimensions $n \times n$.

$P_{c1}$ is lower triangular and can be generated as: $P_{c1}(i,j) = [\,|P_{c1}(i-1,j)|+|P_{c1}(i-1,j-1)|\,] \times (-1)^{(i+j)}$, for $i=2, 3, 4, \ldots, n$ and $1 < j < i$, together with

$P_{c1}(i,1)=(-1)^{(i-1)}$, for $i=1, 2, 3, \ldots, n$, and $P_{c1}(i,i)=1$.

Similarly, $Q_{c1}$ is lower triangular and can be generated as :

$Q_{c1}(i,1)=0$, where $i=2,3,4, \ldots, n$

$Q_{c1}(i,j) = Q_{c1}(i-1,j) + Q_{c1}(i-1,j-1)$, with $Q_{c1}(1,1)=1$.

Thus, when A is the companion form of a polynomial of any degree n, then $P_{c1}(P_{1}AQ_{1})Q_{c1}$ gives $H(s+1)$ in the companion form.
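The operator matrices above are small enough to generate and test directly. The sketch below (Python; the helper names are mine, not the paper's) builds $P_{1}$, $Q_{1}$, $P_{c1}$, $Q_{c1}$ for a cubic and checks that $P_{c1}(P_{1}AQ_{1})Q_{c1}$ is the companion form of $H(s+1)$:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def companion(c):
    # c = [c0, ..., c_{n-1}] for the monic polynomial s^n + c_{n-1}s^{n-1} + ... + c0
    n = len(c)
    M = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n)]
    M[n - 1] = [-x for x in c]
    return M

def make_Pc1(n):
    # P_c1(i,1) = (-1)^(i-1), P_c1(i,i) = 1, and (1-indexed)
    # P_c1(i,j) = [|P_c1(i-1,j)| + |P_c1(i-1,j-1)|] * (-1)^(i+j)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        P[i][0] = (-1.0) ** i
        P[i][i] = 1.0
        for j in range(1, i):
            P[i][j] = (abs(P[i-1][j]) + abs(P[i-1][j-1])) * (-1.0) ** (i + j)
    return P

def make_Qc1(n):
    # Q_c1(1,1) = 1, Q_c1(i,1) = 0 otherwise, Q_c1(i,j) = Q_c1(i-1,j) + Q_c1(i-1,j-1)
    Q = [[0.0] * n for _ in range(n)]
    Q[0][0] = 1.0
    for i in range(1, n):
        for j in range(1, i + 1):
            Q[i][j] = Q[i-1][j] + Q[i-1][j-1]
    return Q

a0, a1, a2 = 1.0, 2.0, 3.0            # H(z) = z^3 + 3z^2 + 2z + 1
A  = companion([a0, a1, a2])
P1 = [[(a0 + a1) / a0, a2 / a0, 1 / a0], [-1, 1, 0], [0, -1, 1]]
Q1 = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]

C = matmul(matmul(make_Pc1(3), matmul(matmul(P1, A), Q1)), make_Qc1(3))
# H(s+1) = s^3 + 6s^2 + 11s + 7, so the last row should be [-7, -11, -6]:
print(C[2])   # [-7.0, -11.0, -6.0]
```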

Step 2:

$H(s+1) \rightarrow (\frac{s^{3}}{b_{0}}).H(1/s + 1)$ (scaled inversion).

Let $H(s+1) = s^{3} + b_{2}s^{2}+b_{1}s+b_{0}$ where $b_{0}=a_{0} + a_{1} + a_{2} +1$, $b_{1} = a_{1} + 2a_{2}+3$, $b_{2}=a_{2}+3$.

The scaling of the entire polynomial by $b_{0}$ ensures that the polynomial generated is monic and hence, can be represented in the companion form.

The following transformation is sought: $C = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -b_{0} & -b_{1} & -b_{2} \end{array}\right] \rightarrow D = \left[\begin{array}{ccc} 0 & 1 & 0\\ 0 & 0 & 1 \\ -1/b_{0} & (-b_{2}/b_{0}) & (-b_{1}/b_{0})\end{array}\right]$

The row and column operator matrices $P_{2}$ and $Q_{2}$ respectively are:

$P_{2}=\left[\begin{array}{ccc} b_{1}/b_{2} & 0 & 0 \\ 0 & b_{2}/b_{1} & 0 \\ 0 & 0 & 1/b_{0}\end{array}\right]$ and $Q_{2} = \left[ \begin{array} {ccc}1/b_{0} & 0 & 0 \\ 0 & b_{2}/b_{1} & 0 \\ 0 & 0 & b_{1}/b_{2}\end{array}\right]$

In general, $P_{2} = \mathrm{diag}\left(b_{1}/b_{n-1}, \ b_{2}/b_{n-2}, \ \ldots, \ b_{n-1}/b_{1}, \ 1/b_{0}\right)$ and $Q_{2} = \mathrm{diag}\left(1/b_{0}, \ b_{n-1}/b_{1}, \ b_{n-2}/b_{2}, \ \ldots, \ b_{1}/b_{n-1}\right)$, where both the general matrices $P_{2}$ and $Q_{2}$ are diagonal, of dimensions $n \times n$.

So, we get $D=P_{2}A_{1}Q_{2} = \left[ \begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & 1 \\ -1/b_{0} & (-b_{n-1}/b_{0}) & (-b_{n-2}/b_{0}) & \ldots & (-b_{1}/b_{0})\end{array}\right]$, which is also a matrix of dimension $n \times n$.

Step 3:

$(s^{3}/b_{0})H(1/s + 1) \rightarrow (s^{3}/b_{0})H(1 + k/s)$, with $k=2$ (scaling of roots)

If $F(x) = a_{n}x^{n} + a_{n-1}x^{n-1} + a_{n-2}x^{n-2} + \ldots + a_{0}$ is monic ($a_{n}=1$), then the monic polynomial whose roots are those of $F(x)$ multiplied by $k$ is $k^{n}F(x/k) = a_{n}x^{n} + ka_{n-1}x^{n-1} + k^{2}a_{n-2}x^{n-2}+ \ldots + k^{n}a_{0}$.

The following transformation is sought:

$D = \left[\begin{array}{ccc}0 & 1 & 0 \\ 0 & 0 & 1 \\ -1/b_{0} & (-b_{2}/b_{0}) & (-b_{1}/b_{0}) \end{array}\right]$ and $E = \left[ \begin{array}{ccc}0 & 1 & 0 \\ 0 & 0 & 1 \\ -k^{3}/b_{0} & (-k^{2}b_{2}/b_{0}) & (-kb_{1}/b_{0})\end{array}\right]$.

The row and column operators, $P_{3}$ and $Q_{3}$, respectively are:

$P_{3}= \left[ \begin{array}{ccc} 1/k^{2} & 0 & 0 \\ 0 & 1/k & 0 \\ 0 & 0 &\ 1\end{array}\right]$, and $Q_{3}=\left[ \begin{array}{ccc} k^{3} & 0 & 0 \\ 0 & k^{2} & 0 \\ 0 & 0 & k \end{array}\right]$

In general, $P_{3} = \left[ \begin{array}{ccccc}1/k^{n-1} & 0 & 0 & \ldots & 0 \\ 0 & 1/k^{n-2} & 0 & \ldots & 0 \\ 0 & \ldots & \ldots & \ldots & 0 \\ 0 & \vdots & \vdots & \vdots & 1/k \\ 0 & \ldots & \ldots & \ldots & 1\end{array}\right]$, and $Q_{3} = \left[ \begin{array}{ccccc}k^{n} & 0 & 0 & \ldots & 0 \\ 0 & k^{n-1} & 0 & \ldots & 0 \\ 0 & 0 & \ldots & \ldots & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & k\end{array}\right]$, where the general matrices $P_{3}$ and $Q_{3}$ are both of dimensions $n \times n$

and $E=P_{3}A_{2}Q_{3} = \left[ \begin{array}{ccccc}0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ 0 & \ldots & \ldots & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ (-k^{n}/b_{0}) & (-b_{n-1}k^{n-1}/b_{0}) & \ldots & \ldots & (-kb_{1}/b_{0})\end{array}\right]$

Step 4:

$(s^{3}/b_{0})H(1+k/s) \rightarrow (s^{3}/b_{0})H(1+k/(s-1))$ (increasing of roots by one)

For the third degree case, the following transformation is sought:

$E = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -A_{0} & -A_{1} & -A_{2}\end{array}\right]$ and $F = \left[ \begin{array}{ccc}0 & 1 & 0 \\ 0 & 0 & 1 \\ -B_{0} & -B_{1} & -B_{2}\end{array}\right]$ and $G = \left[ \begin{array}{ccc}0 & 1 & 0\\ 0 & 0 & 1 \\ (-A_{0}+A_{1}-A_{2}+1) & (-A_{1}+2A_{2}-3) & (3-A_{2}) \end{array}\right]$,

where $B_{2}=A_{2}-1$, $B_{1}=A_{1}-B_{2}$, $B_{0}=A_{0}-B_{1}$.

The row and column operators $P_{4}$ and $Q_{4}$ are :

$P_{4}= \left[ \begin{array}{ccc} (A_{0}-A_{1})/A_{0} & (-A_{2}/A _{0}) & (-1/A_{0}) \\ 1 & 1 & 0 \\ 0 & 1 & 1\end{array}\right]$ and $Q_{4}=\left[ \begin{array}{ccc} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 1 & -1 & 1\end{array}\right]$

In general, $P_{4} = \left[ \begin{array}{ccccc} (C_{0}-C_{1})/C_{0} & (-C_{2}/C_{0}) & (-C_{3}/C_{0}) & \ldots & (-1/C_{0}) \\ 1 & 1 & 0 & \ldots & 0\\ 0 & 1 & 1 & \ldots & 0\\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 & 1 \end{array}\right]$, and $Q_{4}=\left[ \begin{array}{ccccc} 1 & 0 & 0 & \ldots & 0 \\ -1 & 1 & 0 & \ldots & 0 \\ 1 & -1 & 1 & \ldots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ (-1)^{n+1} & \ldots & 1 & -1 & 1\end{array}\right]$, both the general matrices $P_{4}$ and $Q_{4}$ being of dimensions $n \times n$.

where $C_{0}=k^{n}/b_{0}$ and $C_{j}=k^{n-j}b_{n-j}/b_{0}$ for $j=1, 2, 3, \ldots, (n-1)$, and $Q_{4}$ is a lower triangular matrix with $q_{4}(i,j) = (-1)^{i+j}$ for $j \leq i$.

In general, we have $F = P_{4}A_{3}Q_{4} = \left[ \begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ -\hat{C_{0}} & -\hat{C_{1}} & -\hat{C_{2}} & \ldots & -\hat{C_{n-1}}\end{array}\right]$

where

$-\hat{C_{0}} = -C_{0} + C_{1} - C_{2} + \ldots + (-1)^{n-1}$,

$-\hat{C_{1}} = -C_{1} + C_{2} - C_{3} + \ldots - (-1)^{n-1}$,

$-\hat{C_{2}} = -C_{2} + C_{3} - C_{4} + \ldots + (-1)^{n-1}$,

and so on.

Now, the transformation $F \rightarrow G$ is to be achieved. The row and column operators, $P_{c4}$ and $Q_{c4}$, respectively are: $P_{c4}=\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 0\\ 1 & 2 & 1\end{array}\right]$ and $Q_{c4}=\left[ \begin{array}{ccc}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1\end{array}\right]$

In general, $P_{c4}=\left[ \begin{array}{ccccc} 1 & 0 & 0 & \ldots & 0\\ 1 & 1 & 0 & \ldots & 0 \\ 1 & 2 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & \ldots & \ldots & \ldots & 1\end{array}\right]$ and $Q_{c4}= \left[ \begin{array} {ccccc} 1 & 0 & \ldots & \ldots & 0 \\ 0 & 1 & 0 & \ldots & 0 \\ 0 & -1 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \ldots & \ldots & \ldots & \ldots & 1\end{array}\right]$, where both the general matrices $P_{c4}$ and $Q_{c4}$ are of dimensions $n \times n$.

$P_{c4}$, a lower triangular matrix, can be easily generated as:

$p_{c4}(i,i)=1$; $p_{c4}(i,j) = p_{c4}(i-1,j) + p_{c4}(i-1,j-1)$.

and $Q_{c4}$, also a lower triangular matrix, can be easily generated as:

$q_{c4}(i,i)=1$; and $q_{c4}(i,1)=0$ when $i \neq 1$; and

$q_{c4}(i,j) = [\,|q_{c4}(i-1,j)| + |q_{c4}(i-1,j-1)|\,] \times (-1)^{i+j}$.

Thus, $P_{c4}(P_{4}EQ_{4})Q_{c4}$ finishes the process of bilinear transformation. Steps 2 and 3 can be combined so that the algorithm reduces to three steps only.

If the original polynomial is non-monic (that is, $a_{n} \neq 1$), then multiplying the final transformed polynomial by $b_{0}a_{n}$ restores it to the standard form.
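The whole sequence can also be checked numerically at the level of polynomial coefficients. The sketch below (Python; it manipulates coefficient lists directly instead of building the operator matrices, and all function names are mine, not the paper's) applies the four steps — shift by one, scaled inversion, root scaling by $k=2$, shift back — to the example polynomial $2z^{4}+4z^{3}+6z^{2}+5z+1$ treated in Section IV:

```python
# Coefficient lists run from high degree to low degree.
# Illustrative only: the paper performs these same steps as matrix operations.

def polymul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def compose_shift(c, h):
    # p(s) -> p(s + h), via Horner's scheme
    out = [c[0]]
    for a in c[1:]:
        out = polymul(out, [1.0, h])
        out[-1] += a
    return out

def scaled_inversion(c):
    # p(s) -> (s^n / p(0)) p(1/s): reverse the coefficients, renormalize to monic
    r = list(reversed(c))
    return [x / r[0] for x in r]

def scale_roots(c, k):
    # monic p(s) -> monic k^n p(s/k), whose roots are k times those of p
    n = len(c) - 1
    return [x * k ** (n - i) for i, x in enumerate(reversed(c))][::-1]

F0 = [2.0, 4.0, 6.0, 5.0, 1.0]            # F0(z) = 2z^4 + 4z^3 + 6z^2 + 5z + 1
an = F0[0]
F  = [x / an for x in F0]                 # make monic

step1 = compose_shift(F, 1.0)             # H(s+1); its constant term is b0
b0 = step1[-1]
step2 = scaled_inversion(step1)           # (s^n / b0) H(1/s + 1)
step3 = scale_roots(step2, 2.0)           # roots scaled by k = 2
step4 = compose_shift(step3, -1.0)        # shift back: H((s+1)/(s-1))

G = [x * b0 * an for x in step4]          # restore the non-monic standard form
print([round(g, 6) for g in G])           # ≈ [18, 2, 6, 6, 0]
```

The result agrees with the matrix method's final answer, $18s^{4}+2s^{3}+6s^{2}+6s$.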

III. Stability considerations:

In the $\omega$-plane, the Schwarz canonical form approach can be applied as an algorithm directly to the companion form of the bilinearly transformed polynomial obtained previously, because a companion matrix is non-derogatory.

IV. An Example:

$F_{0}(z) = 2z^{4} + 4z^{3} + 6z^{2} + 5z +1$

$F(z) = z^{4} + 2z^{3} + 3z^{2} + 2.5z + 0.5$ (made monic by dividing through by $a_{n}=2$)

$b_{0} = a_{0} + a_{1} + a_{2} + a_{3} + 1 = 9$

$b_{1} = a_{1} + a_{2} + a_{3} + 1 = 8.5$

Similarly, $b_{2}=6$ and $b_{3}=3$

$B=P_{1}AQ_{1}= \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ -9 & -8.5 & -6 & -3\end{array}\right]$

$C = \left[ \begin{array}{cccc}1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 1 & -2 & 1 & 0 \\ -1 & 3 & -3 & 1\end{array}\right] \times \left[ \begin{array}{cccc}0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -9 & -8.5 & -6 & -3\end{array}\right] \times \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 2 & 1\end{array}\right]$

Hence, $C = \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -9 & -18.5 & -15 & -6\end{array}\right]$

$H(s+1)=s^{4}+6s^{3} + 15s^{2}+18.5s+9$

Steps 2 and 3:

$E=\left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ -2^{4}/9 & -2^{3}(6)/9 & -2^{2}(15)/9 & (-2)(18.5)/9\end{array}\right] = \left[ \begin{array}{cccc}0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ (-16/9) & (-48/9) & (-60/9) & (-37/9)\end{array}\right]$

Step 4:

$C_{0}=16/9$, $C_{1}=48/9$, $C_{2}=60/9$, $C_{3}=37/9$.

$-\hat{C_{0}}=0$, $-\hat{C_{1}}=-16/9$, $-\hat{C_{2}}=-32/9$, $-\hat{C_{3}}=-28/9$

$P_{c4}(P_{4}EQ_{4})Q_{c4}= \left[ \begin{array}{cccc}1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 \\ 1 & 3 & 3 & 1\end{array}\right] \times \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ 0 & -16/9 & -32/9 & -28/9\end{array}\right] \times \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 1 & -2 & 1\end{array}\right]$, so we get

$P_{c4}(P_{4}EQ_{4})Q_{c4}=\left[ \begin{array}{cccc}0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ 0 & -3/9 & -3/9 & -1/9\end{array}\right]$

The final monic polynomial is $s^{4} + (1/9)s^{3}+(3/9)s^{2} + (3/9)s$, and multiplying it by $b_{0}a_{n}$, that is, $9 \times 2 =18$, restores it to the non-monic form:

$18s^{4}+2s^{3}+6s^{2}+6s$

V. Conclusion:

Since the operator matrices have few non-zero elements, storage requirements are small. The computational burden should also reduce for higher-order systems, because fewer non-zero elements mean fewer manipulations in addition to less storage. Additionally, the second and third steps can be combined, giving a three-step method. Thus, the algorithm achieves the bilinear transformation easily, especially for higher-order systems, compared with other available methods.

VI. References:

1. Davies, A. C., “Bilinear Transformation of Polynomials”, IEEE Trans. Automatic Control, Nov. 1974.
2. Barnett, S., and Storey, C., “Matrix Methods in Stability Theory”, Thomas Nelson and Sons Ltd.
3. Datta, B. N., “A Solution of the Unit Circle Problem via the Schwarz Canonical Form”, IEEE Trans. Automatic Control, Vol. AC-27, No. 3, June 1982.
4. Parthasarathy, R., and Jayasimha, K. N., “Bilinear Transformation by Synthetic Division”, IEEE Trans. Automatic Control, Vol. AC-29, No. 6, June 1986.
5. Jury, E. I., “Theory and Applications of the z-Transform Method”, John Wiley and Sons Inc., 1984.

## More on Motion Control and DSP

Introduction to the TMS320LF2407 DSP Controller.

The Texas Instruments TMS320LF2407 DSP Controller, referred to as the LF2407 in this article, is a programmable digital controller with a C2xx DSP CPU as the core processor. The LF2407 combines this powerful CPU with on-chip memory and useful peripherals integrated onto a single piece of silicon. With the DSP core and control-oriented peripherals integrated into a single chip, users can design very compact and cost-effective digital control systems.

The LF2407 DSP controller offers 40 MIPS performance. This high processing speed of the C2xx CPU allows users to compute parameters in real-time rather than look up approximations from tables stored in memory. This fast performance is suitable for processing control parameters in applications such as notch filters or sensorless motor control algorithms where a large amount of calculations must be computed quickly.

While the brain of the LF2407 DSP is the C2xx core, the LF2407 contains several control-oriented peripherals onboard. These peripherals make virtually any digital control requirement possible. Their applications range from analog-to-digital conversion to pulse width modulation (PWM) generation. Communications peripherals make possible the communication with external peripherals, personal computers, or other DSP processors. The LF2407 peripheral set includes:

• Two Event Managers (A and B)
• General Purpose (GP) Timers
• PWM generators for digital motor control
• Analog to Digital Converter
• Controller Area Network (CAN) interface
• Serial Peripheral Interface (SPI) — synchronous serial port
• Serial Communications Interface (SCI) — asynchronous serial port
• General purpose bidirectional digital I/O (GPIO) pins
• Watchdog Timer (“time-out” DSP reset device for system integrity)

Brief Introduction to Peripherals:

The following peripherals are those integrated onto the LF2407 chip.

Event Managers (EVA, EVB)

There are two event managers on the LF2407, the EVA and the EVB. The Event Manager is the most important peripheral in digital motor control. It contains the necessary functions needed to control electromechanical devices. Each EV is composed of functional “blocks” including timers, comparators, capture units for triggering on an event, PWM logic circuits, quadrature encoded pulse (QEP) circuits and interrupt logic.

The Analog to Digital Converter (ADC)

The ADC on the LF2407 is used whenever an external analog signal needs to be sampled and converted to a digital number. Examples of ADC applications range from sampling a control signal for use in a digital notch filter algorithm to monitoring motor performance in a control feedback loop. Additionally, the ADC is useful in motor control applications because it allows for current sensing using an inexpensive shunt resistor instead of a more expensive current sensor.

The Controller Area Network (CAN) module:

While we will discuss the CAN module in a later blog, it is a useful peripheral for specific applications of the LF2407. The CAN module is used for multi-master serial communication between external hardware. The CAN has a high level of data integrity and is ideal for operation in noisy environments such as in an automobile, or industrial environments that require reliable communication and data integrity.

Serial Peripheral Interface (SPI) and Serial Communications Interface (SCI):

The SPI is a high speed synchronous communications port that is mainly used for communicating between the DSP and external peripherals or another DSP device. Typical uses of the SPI include communications with external shift registers, display drivers or ADC’s.

The SCI is an asynchronous communications port that supports asynchronous serial (UART) digital communications between the CPU and other asynchronous peripherals that use the standard NRZ (non-return to zero) format. It is useful in communications between external devices and the DSP. Since these communications peripherals are not directly related to motor control applications, they will not be discussed in this article.

Watchdog Timer

The Watchdog Timer (WD) monitors software and hardware operations and asserts a system reset when its internal counter overflows. The WD timer (when enabled) will count for a specific amount of time. It is necessary for the user’s software to reset the WD timer periodically so that an unwanted reset does not occur. If for some reason there is a CPU disruption, the watchdog timer will generate a system reset. For example, if the software enters an endless loop or if the CPU becomes temporarily disrupted, the WD timer will overflow and a DSP reset will occur, which will cause the DSP program to branch to its initial starting point. Most error conditions that temporarily disrupt chip operation and inhibit proper CPU function can be cleared by the WD function. In this way, the WD increases the reliability of the CPU, thus ensuring system integrity.

General Purpose Bi-directional Digital I/O (GPIO) pins

Since there are only a finite number of pins available on the LF2407 device, many of the pins are multiplexed to either their primary function or the secondary GPIO function. In most cases, a pin’s second function will be as a general-purpose input/output pin. The GPIO capability of the LF2407 is very useful as a means of controlling the functionality of pins and also provides another method to input or output data to and from the device. Nine 16-bit control registers control all the I/O and shared pins. There are two types of these registers:

• I/O MUX Control Registers (MCRx) — Used to control the multiplexer selection that chooses between the primary function of a pin or the general purpose I/O function.
• Data and Direction Control Registers (PxDATDIR): Used to control the data and data direction of bi-directional I/O pins.

Joint Test Action Group (JTAG) Port

The JTAG port provides a standard method of interfacing a personal computer with the DSP controller for emulation and development. The XDS510PP or equivalent emulator pod provides the connection between the JTAG module on the LF2407 and the personal computer. The JTAG module allows the PC to take full control over the DSP processor while Code Composer Studio is running. The schematic below shows the connection scheme from computer to the DSP board.

Computer parallel port $\Longrightarrow$ XDS510PP Plus emulator $\Longrightarrow$ TI LF2407 Evaluation Module

Phase Locked Loop (PLL) Clock Module

The phase locked loop (PLL) module is basically an input clock multiplier that allows the user to control the clocking frequency of the DSP core. A clock reference (an oscillator or crystal) is generated external to the LF2407; this signal is fed into the LF2407 and is multiplied or divided by the PLL. The new (higher or lower frequency) clock signal is then used to clock the DSP core. The LF2407’s PLL allows the user to select a multiplication factor ranging from 0.5X to 4X that of the external clock signal. The default value of the PLL is 4X.

Memory Allocation Space

The LF2407 DSP Controller has three different allocations of memory it can use: data, program, and I/O memory space. Data space is used for program calculations, look-up tables, and any other memory used by an algorithm. Data memory can be in the form of the on-chip RAM or external RAM. Program memory is the location of the user’s program code. Program memory on the LF2407 is mapped either to the off-chip RAM (MP/MC-pin = 1) or to the on-chip flash memory (MP/MC-pin = 0), depending on the logic value of the MP/MC-pin.

I/O space is not really memory but a virtual memory address used to output data to peripherals external to the LF2407. For example, the digital-to-analog converter (DAC) on the Spectrum Digital Evaluation Module is accessed with I/O memory. If one desires to output data to the DAC, the data is simply sent to the configured address of I/O space with the “OUT” command. This process is similar to writing to data memory except that the OUT command is used and the data is copied to and outputted on the DAC instead of being stored in memory.

Types of Physical Memory.

Random Access Memory (RAM):

The LF2407 has 544 words of 16 bits each in the on-chip DARAM. These 544 words are partitioned into three blocks: B0, B1, and B2. Blocks B1 and B2 are allocated for use only as data memory. Memory block B0 is different from B1 and B2: it is normally configured as data memory, and hence primarily used to hold data, but it can also be configured as program memory, depending on the value of the core-level CNF bit.

• (CNF =0) maps B0 to data memory.
• (CNF=1) maps B0 to program memory.

The LF2407 also has 2K of single-access RAM (SARAM). The addresses associated with the SARAM can be used for both data memory and program memory, and are software-configurable to either the internal SARAM or external memory.

Non-Volatile Flash Memory

The LF2407 contains 32K of on-chip flash memory that can be mapped to program space if the MP/MC-pin is made logic 0 (tied to ground). The flash memory provides a permanent location to store code that is unaffected by cutting power to the device. The flash memory can be electronically programmed and erased many times to allow for code development. Usually, the external RAM on the LF2407 Evaluation Module (EVM) board is used instead of the flash for code development, because a separate “flash programming” routine must be performed to load code into the flash memory. The on-chip flash is normally used in situations where the DSP program needs to be tested where a JTAG connection is not practical, or where the DSP needs to be tested as a “stand-alone” device. For example, if an LF2407 were used to develop a DSP control solution for an automobile braking system, it would be somewhat impractical to have a DSP/JTAG/PC interface in a car that is undergoing performance testing.

More later,

Nalin Pithwa

## DSP Processors, Embodiments and Alternatives: Part I

Until now, we have described digital signal processing in general terms, focusing on DSP fundamentals, systems, and application areas. Now we narrow our focus to DSP processors. We begin with a high-level description of the features common to virtually all DSP processors. We then describe typical embodiments of DSP processors, and briefly discuss alternatives to DSP processors, such as general purpose microprocessors, microcontrollers (for comparison purposes with, e.g., the TMS320C20), and FPGAs. The next several blogs provide a detailed treatment of DSP processor architectures and features.

So, what are the “special” things that a DSP processor can perform? Well, like the name says, DSP processors do signal processing very well. What does “signal processing” mean? Really, it’s a set of algorithms for processing signals in the digital domain. There are analog equivalents to these algorithms, but processing them digitally has proven to be more efficient, and this has been the trend for many, many years. Signal processing algorithms are the basic building blocks for many applications in the world, from cell phones to MP3 players, digital still cameras, and so on. A summary of these algorithms is shown below:

• FIR Filter: $y(n)=\sum_{k=0}^{N}a_{k}x(n-k)$
• IIR Filter: $y(n)=\sum_{k=0}^{M}a_{k}x(n-k)+\sum_{k=1}^{N}b_{k}y(n-k)$
• Convolution: $y(n)=\sum_{k=0}^{N}x(k)h(n-k)$
• Discrete Fourier Transform: $X(k)=\sum_{n=0}^{N-1}x(n)\exp{[-j(2 \pi /N)nk]}$
• Discrete Cosine Transform: $F(u)=\sum_{x=0}^{N-1}c(u)\,f(x)\cos{\left[\frac{\pi}{2N}u(2x+1)\right]}$

One or more of these algorithms are used in almost every signal processing application. FIR and IIR filters are fundamental to almost any DSP application; they remove unwanted noise from signals being processed. Convolution algorithms are used to look for similarities in signals, discrete Fourier transforms are used to represent signals in formats that are easier to process, and discrete cosine transforms are used in image processing applications. We will discuss the details of some of these algorithms later, but there are things to notice about this entire list of algorithms. First, they all contain a summation. In the computer world, this is an accumulation over a large number of elements, implemented using a “for” loop. DSP processors are designed to have large accumulators because of this characteristic. They are specialized in this way. DSPs also have special hardware to perform the “for” loop operation so that the programmer does not have to implement it in software, which would be much slower.

The algorithms above also have multiplication of two different operands. Logically, if we were to speed up this operation, we would design a processor to accommodate the multiplication and accumulation of two operands very quickly. In fact, this is exactly what has been done with DSPs: they are designed to perform the multiply-and-accumulate on data sets like this, on most DSP processors in just one cycle. Since these operations are very common in most DSP applications, tremendous execution savings can be obtained by exploiting these processor optimizations.

There are also inherent structures in DSP algorithms that allow them to be separated and operated on in parallel. Just as in real life, if I can do more things in parallel, I can get more done in the same amount of time. As it turns out, signal processing algorithms have this characteristic as well. So, we can take advantage of this by putting multiple orthogonal (nondependent) execution units in our DSP processors and exploit this parallelism when implementing these algorithms.
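As an illustration of that parallelism, a dot product can be split into two independent accumulation chains, one over the even elements and one over the odd ones. This is only a sketch in portable C (the function name `dot_split` is mine); on a DSP with two multipliers, the two chains can execute in the same cycle because they share no data.

```c
#include <stddef.h>

/* Dot product split into two independent partial sums.  Because the
   two accumulators share no data, a processor with two parallel
   (orthogonal) multiply units can run both chains at once. */
double dot_split(const double *a, const double *b, size_t n)
{
    double acc0 = 0.0, acc1 = 0.0;   /* independent accumulators */
    size_t i = 0;
    for (; i + 1 < n; i += 2) {
        acc0 += a[i]     * b[i];     /* chain 0: even elements */
        acc1 += a[i + 1] * b[i + 1]; /* chain 1: odd elements  */
    }
    if (i < n)                       /* leftover element when n is odd */
        acc0 += a[i] * b[i];
    return acc0 + acc1;              /* combine only at the end */
}
```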

DSP processors must also add some reality into the mix of the algorithms shown above. Take the IIR filter described above. You may be able to tell just by looking at this algorithm that there is a feedback component that essentially feeds previous outputs back into the calculation of the current output. Whenever you deal with feedback, there is always an inherent stability issue. IIR filters can become unstable just like other feedback systems. Careless implementation of feedback systems like the IIR filter can cause the output to oscillate instead of asymptotically decaying to zero (the desired behavior). This problem is compounded in the digital world, where we must deal with finite word lengths, a key limitation in all digital systems. We can alleviate this with saturation checks in software, or use a specialized instruction to do it for us. DSP processors, because of the nature of signal processing algorithms, provide specialized saturation and underflow/overflow instructions to deal with these conditions efficiently.
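A software saturation check looks like the following sketch (the helper name `sat_add16` is my own). In ordinary C, 16-bit addition silently wraps around on overflow, which is exactly the kind of glitch that can make a feedback path misbehave; clamping to the largest representable value mimics what a DSP's saturation mode does in hardware for free.

```c
#include <stdint.h>

/* Saturating 16-bit addition: instead of wrapping on overflow,
   clamp the result to the nearest representable value, as a DSP's
   saturation instruction would. */
int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t s = (int32_t)a + (int32_t)b;  /* exact in 32 bits */
    if (s > INT16_MAX) return INT16_MAX;  /* clamp positive overflow */
    if (s < INT16_MIN) return INT16_MIN;  /* clamp negative overflow */
    return (int16_t)s;
}
```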

There is more I can say about this, but you get the point. Specialization is really what it is all about with DSP processors; these processors are specifically designed to do signal processing really well. DSP processors may not be as good as other processors when dealing with non-signal-processing algorithms (that’s fine; I am not any good at medicine either). So, it’s important to understand your application and choose the right processor. (A previous blog about DACs and ADCs did mention this issue.)

(We describe below common features of DSP processors.)

Dozens of families of DSP processors are available on the market today. The salient features of some of the commonly used families of DSP processors are summarized in Table 1. Throughout this series, we will use these processors as examples to illustrate the architectures and features that can be found in commercial DSP processors.

Most DSP processors share some common features designed to support repetitive, numerically intensive tasks. The most important of these features are introduced briefly here. Each of these features and many others will be examined in greater detail in this blog article series.

Table 1.

Vendor            | Processor Family | Arithmetic Type | Data Width | Speed (MIPS or MHz)
Analog Devices    | ADSP 21xx        | Fixed Point     | 16 bit     | 25 MIPS
Texas Instruments | TMS320C55x       | Fixed Point     | 16 bit     | 50 MHz to 300 MHz

Fast Multiply-Accumulate

The most often cited feature of DSP processors is the ability to perform a multiply-accumulate operation (often called a MAC) in a single instruction cycle. The multiply-accumulate operation is useful in algorithms that involve computing a vector or matrix product, such as digital filters, correlation, convolution, and Fourier transforms. To achieve this functionality, DSP processors include a multiplier and accumulator integrated into the main arithmetic processing unit (called the data path) of the processor. In addition, to allow a series of multiply-accumulate operations to proceed without the possibility of arithmetic overflow, DSP processors generally provide extra bits in their accumulator registers to accommodate growth of the accumulated result. DSP processor data paths will be discussed in detail in a later blog in this series.
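Those extra accumulator bits can be modeled in C by accumulating 32-bit products into a 64-bit register; this is only an illustration of the guard-bit idea (the function name `mac16` is mine), not the register layout of any particular processor.

```c
#include <stddef.h>
#include <stdint.h>

/* Multiply-accumulate with headroom: each 16x16-bit product fits in
   32 bits, so summing into a 64-bit accumulator leaves 32 "guard"
   bits -- billions of accumulations before overflow is possible.
   This is what the extra accumulator bits in a DSP buy you. */
int64_t mac16(const int16_t *a, const int16_t *x, size_t n)
{
    int64_t acc = 0;
    for (size_t i = 0; i < n; ++i)
        acc += (int32_t)a[i] * x[i];  /* one MAC per iteration */
    return acc;
}
```

With only a 32-bit accumulator, even three maximum-value products would already overflow; the wide accumulator absorbs the growth.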

Multiple Access Memory Architecture.

A second feature shared by most DSP processors is the ability to complete several accesses to memory in a single instruction cycle. This allows the processor to fetch an instruction while simultaneously fetching operands for the instruction or storing the result of the previous instruction to memory. High bandwidth between the processor and memory is essential for good performance if repetitive, data-intensive operations are required in an algorithm, as is common in many DSP applications.

In many processors, single-cycle multiple memory accesses are subject to restrictions. Typically, all but one of the memory locations must reside on-chip, and multiple memory accesses can take place only with certain instructions. To support simultaneous access of multiple memory locations, DSP processors provide multiple on-chip buses, multiported on-chip memories, and in some cases, multiple independent memory banks. DSP memory structures are quite distinct from those of general purpose processors and microcontrollers. DSP processor memory architectures will be investigated in detail later.

Specialized Execution Control.

Because many DSP algorithms involve performing repetitive computations, most DSP processors provide special support for efficient looping. Often, a special loop or repeat instruction is provided that allows the programmer to implement a for-next loop without expending any instruction cycles for updating and testing the loop counter or for jumping back to the top of the loop.

Some DSP processors provide other execution control features to improve performance, such as context switching and low-latency, low-overhead interrupts for fast input/output.

Hardware looping and interrupts will also be discussed later in this blog series.

Peripheral and Input/Output Interfaces

To allow low-cost, high performance input and output (I/O), most DSP processors incorporate one or more serial or parallel I/O interfaces, and specialized I/O handling mechanisms such as Direct Memory Access (DMA). DSP processor peripheral interfaces are often designed to interface directly with common peripheral devices like analog-to-digital and digital-to-analog converters.

As integrated circuit manufacturing techniques have improved in terms of density and flexibility, DSP processor vendors have included not just peripheral interfaces, but complete peripheral devices on-chip. Examples of this are chips designed for cellular telephone applications, several of which incorporate a DAC and ADC on chip.

Various features of DSP processor peripherals will be described later in this blog series.

In the next blog, we will discuss DSP Processor embodiments.

More later,

Aufwiedersehen,

Nalin Pithwa