Conversion of analog to digital signal

There are several sources of error during the conversion of an analog signal to a digital signal. Control and prediction/quantification of these errors is very important for the overall quality of a digital control system. However, in practical control systems, price is also one of the most important limiting factors, and the designer is forced to accept a compromise solution. Also, it seems to me that, apart from the three P's (price, power, performance), another very important parameter is security, including protection of the IP of the company.

In control systems, it is necessary to observe the behaviour of the controlled object. In power electronics systems this is usually achieved by measuring currents and voltages. So, essentially, such a digital control system processes analog current and voltage signals by converting them to digital form.

Typical analog input circuits for measuring voltages and currents work as follows: in the voltage measurement circuit, the voltage from a voltage divider passes through an input amplifier and an antialiasing filter to the sample-and-hold circuit and the ADC. Finally, the digital signal corresponding to the measured voltage is sent to the processor of the control system. A similar process takes place in the current measurement circuit, where a current transducer is used instead of a voltage divider. (Current and voltage measurement issues will be discussed later on this blog.)

Let us consider a very simple example of a power electronic circuit with open-loop, multirate digital control. (This circuit may be part of the control system for DC/DC, DC/AC, AC/DC, AC/AC converters, active power filters, etc.) An open-loop system is chosen here for study purposes because the analysis of a feedback system is much more involved.

The analog input signal x(t) is converted to digital form x(n) by an A/D converter with sampling rate f_{s1} and b_{1}-bit resolution. In the next stage, a digital control algorithm is executed on a DSP. The algorithm is calculated with b_{2}-bit resolution and sampling rate f_{s2}. Finally, the output control signal y(n) is transferred to a digital PWM with b_{3}-bit resolution and sampling rate f_{s3}. The PWM controls output power switches S_{1} and S_{2}, which operate at the switching frequency f_{C}. The digital PWM, together with the two power switches S_{1} and S_{2} and the analog output filter L_{C}, C_{C}, works as a digital-to-analog (D/A) power converter. It converts energy from a direct current (DC) source to the output. Typically, the main issue is the quality of the output voltage and current. Therefore, the following signal parameters should be considered:

  • f_{s1}, f_{s2}, f_{s3} — signal sampling rates.
  • b_{1}, b_{2}, b_{3} — signal resolution in bits.
  • f_{C} — transistor switching frequency
  • THD — total harmonic distortion ratio.
  • SNR — output signal to noise ratio.
  • SINAD — output signal to noise and distortion ratio.

Finally, the required SINAD value of the output current or voltage of a power electronics circuit depends on the application. For example, for a battery charger, a SINAD value of 30 dB is sufficient. However, for a high-quality audio power amplifier, the SINAD value should be greater than 100 dB. As we can see, the spread of these parameters in digital control systems for power electronics circuits is very wide. Nowadays there is a huge choice of A/D converters and DSPs, so it is easy to select circuits with adequate parameters b_{1}, b_{2}, f_{s1} and f_{s2}. The only considerable limitation may be the cost of these circuits. The last stage, with the digital PWM, may be a bottleneck of the whole system, especially at high resolution. For example, for a high-quality power audio amplifier with parameters f_{s3}=44.1kHz and b_{3}=16 bits, the clock frequency of the PWM counters can be calculated using the equation:

f_{h} =f_{s3}2^{b3} \approx 2.8GHz.

The calculated value of the clock frequency is too high for ordinary digital circuits; therefore, the resolution of the digital PWM has to be reduced. However, this will result in deterioration of the signal-to-noise ratio.
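As a quick back-of-the-envelope check of this bottleneck, the sketch below (plain Python; the reduced-resolution values are illustrative only) evaluates f_{h} = f_{s3} \cdot 2^{b_{3}} for a few PWM resolutions:

# Required PWM counter clock: f_h = f_s3 * 2**b3
def pwm_clock(f_s3_hz, b3_bits):
    """Clock frequency needed by a digital PWM counter with b3-bit resolution."""
    return f_s3_hz * 2 ** b3_bits

# High-quality audio case from the text: 44.1 kHz, 16 bits -> about 2.9 GHz
print(pwm_clock(44.1e3, 16) / 1e9, "GHz")

# Reducing the PWM resolution lowers the required clock (illustrative values),
# at the price of a worse output signal-to-noise ratio:
for b3 in (16, 12, 10, 8):
    print(b3, "bits ->", pwm_clock(44.1e3, b3) / 1e6, "MHz")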

Total Harmonic Distortion:

Total harmonic distortion (THD) is a measure of the nonlinear distortion of circuits in which harmonics (signals whose frequencies are integer multiples of the input signal frequency) are generated. A wide class of nonlinear circuits can be described by the equation:

y(t) = a_{1}x(t) + a_{2}x(t)^{2}+a_{3}x(t)^{3}+ \ldots + a_{k}x(t)^{k}

In linear circuits, only the a_{1} coefficient is nonzero.

The THD ratio is expressed in percent or in decibels (dB), and is calculated as the ratio of the combined level of the harmonics to the level of the fundamental:

THD = \frac{\sqrt{\sum_{k=2}^{N}U_{k}^{2}}}{U_{1}}

where U_{1} is the amplitude of the first (fundamental) harmonic, U_{2} the amplitude of the second harmonic, and so on.
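As an illustration, THD can be estimated numerically from the spectrum of a sampled signal. The sketch below (Python/numpy; the sine is coherently sampled so that each harmonic falls exactly on an FFT bin, and the nonlinearity coefficients a_{1}, a_{2}, a_{3} are arbitrary illustrative values) applies the definition above directly:

import numpy as np

fs, N = 48000.0, 4800                  # sampling rate and record length
f0 = 1000.0                            # fundamental: exactly 100 cycles in the record
n = np.arange(N)
x = np.sin(2 * np.pi * f0 * n / fs)

# Weakly nonlinear circuit: y = a1*x + a2*x^2 + a3*x^3 (illustrative coefficients)
a1, a2, a3 = 1.0, 0.02, 0.01
y = a1 * x + a2 * x**2 + a3 * x**3

Y = np.abs(np.fft.rfft(y)) / (N / 2)   # single-sided amplitude spectrum
k0 = int(round(f0 * N / fs))           # FFT bin of the fundamental
U = [Y[k * k0] for k in range(1, 6)]   # amplitudes of the fundamental and harmonics 2..5

thd = np.sqrt(sum(u ** 2 for u in U[1:])) / U[0]
print("THD = {:.3%} = {:.1f} dB".format(thd, 20 * np.log10(thd)))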

A/D converters of TMS320F2837xD

TI is continuously engaged in the design of a wide range of microcontrollers/DSPs intended for use in power electronics systems. One of the most advanced is the TMS320F2837xD microcontroller/DSP family. The TMS320F2837xD includes four independent high-performance A/D converters (with up to 24 input channels in total), allowing the device to efficiently manage multiple analog signals for enhanced overall system throughput. Each ADC has a single sample-and-hold (S/H) circuit, and using multiple ADC modules enables simultaneous sampling or independent operation. The ADC is implemented with a successive-approximation architecture and has a configurable resolution of either 16 bits or 12 bits. These ADCs have many modes of operation. The most important is the possibility of simultaneously sampling four analog signals, one per ADC; quite clearly, we cannot simultaneously sample eight analog signals.

Cheers,

Nalin Pithwa

DSP Filter to generate Fibonacci numbers

The title is a slight misnomer; what I present below is a closed-form expression for the nth term of the Fibonacci sequence.

Reference: Digital Signal Processing by Proakis and Manolakis, Sixth Edition.

The one-sided z-transform is a very efficient tool for the solution of difference equations with nonzero initial conditions. It achieves this by reducing the difference equation relating the two time-domain signals to an equivalent algebraic equation relating their one-sided z-transforms. This equation can be easily solved to obtain the transform of the desired signal. The signal in the time domain is obtained by inverting the resulting z-transform. For instance:

Example: The well-known Fibonacci sequence of integers is obtained by computing each term as the sum of the two previous ones. The first few terms of the sequence are: 1, 1, 2, 3, 5, 8, \ldots

Determine a closed form expression for the nth term of the Fibonacci sequence.

Solution: Let y(n) be the nth term of the Fibonacci sequence. Clearly, y(n) satisfies the difference equation:

y(n) = y(n-1) + y(n-2)…..Equation A

with initial conditions

y(0)=y(-1)+y(-2)=1…..B

y(1)=y(0)+y(-1)=1….C

From the above, y(-1)=0 and y(-2)=1. Thus, we have to determine y(n) for n \geq 0, which satisfies equation A with initial conditions y(-1)=0 and y(-2)=1.

By taking the one-sided z-transform of the equation A and using the shifting property, we obtain:

Y^{+}(z) = (z^{-1}Y^{+}(z)+y(-1))+(z^{-2}Y^{+}(z)+y(-1)z^{-1}+y(-2))

or Y^{+}(z) = \frac{1}{1-z^{-1}-z^{-2}} = \frac{z^{2}}{z^{2}-z-1}….equation D

where we have used the fact that y(-1)=0 and y(-2)=1.

We can invert Y^{+}(z) by the partial fraction expansion method. The poles of Y^{+}(z) are:

p_{1} = \frac{1+\sqrt{5}}{2}, and p_{2} = \frac{1-\sqrt{5}}{2}

and the corresponding coefficients are A_{1} = \frac{p_{1}}{\sqrt{5}} and A_{2}=\frac{-p_{2}}{\sqrt{5}}. Therefore,

y(n) = (\frac{1+\sqrt{5}}{2\sqrt{5}})(\frac{1+\sqrt{5}}{2})^{n}u(n) - \frac{1-\sqrt{5}}{2\sqrt{5}}(\frac{1-\sqrt{5}}{2})^{n}u(n)

or, equivalently,

y(n) = \frac{1}{\sqrt{5}}(\frac{1}{2})^{n+1}((1+\sqrt{5})^{n+1}-(1-\sqrt{5})^{n+1})u(n).
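A quick numerical check of this closed form against the recurrence (a small Python sketch; note that with the initial conditions above, y(0)=y(1)=1, so the indexing matches the sequence 1, 1, 2, 3, 5, 8, \ldots):

from math import sqrt

def fib_closed_form(n):
    """nth Fibonacci term from the inverted one-sided z-transform, n >= 0."""
    s5 = sqrt(5.0)
    return round((1 / s5) * 0.5 ** (n + 1) * ((1 + s5) ** (n + 1) - (1 - s5) ** (n + 1)))

def fib_recurrence(n):
    """Iterate y(n) = y(n-1) + y(n-2) directly, with y(0) = y(1) = 1."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib_closed_form(n) for n in range(10)])   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
assert all(fib_closed_form(n) == fib_recurrence(n) for n in range(60))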

One smallish comment: we can implement the difference equation of a filter as either an FIR or an IIR structure. Of course, based on the physical nature of the signal/processing, one or the other might be preferable.

Regards,

Nalin Pithwa.

Parameters for selection of a processor — part 2

Processors suitable for digital control range from standard microprocessors, such as the 8051, to special-purpose DSP processors, the primary differences being in the instruction sets and the speeds of particular instructions, such as multiply. Standard microprocessors, or general-purpose processors, are intended for laptops, workstations, and general digital data bookkeeping. Because digital control involves much numerical computation, the instruction sets of special-purpose DSP processors are rich in math capabilities, and these processors are better suited to control applications than standard or general-purpose processors.

Processor requirements span a broad range. For processors such as those found in microwave ovens, for example, the speed, instruction set, memory, word length, and addressing-mode requirements are all very minimal. The consequences of a data error are minimal as well, especially relative to a data error in a PC/laptop while it is calculating an income tax return. PCs and laptops, on the other hand, require many megabytes of memory, and they benefit from speed, error correction, larger word size, and sophisticated addressing modes.

DSP processors generally need speed, word length, and math instructions such as multiply, multiply-and-accumulate, and circular addressing. One typical feature of signal processors not found in general-purpose processors is the use of a Harvard architecture, which consists of separate data and program memory. Although separate data and program memories offer significant speed advantages, the IC pin count is higher (assuming external memory is allowed) because the instruction address, instruction data, data address, and data buses are separate. A modified Harvard architecture has been used which maintains some of the speed advantage while eliminating the requirement for separate program and data buses, greatly reducing the pin count in processors that have external memory capability (almost all have this feature).

Comparing control and signal processing applications: in the former, we often operate with signals in saturation and therefore absolutely require saturation arithmetic; whereas in the latter, to ensure signal fidelity, the algorithms must usually be designed to prevent saturation by scaling signals appropriately.

The consequences of numerical overflow in control computations can be serious, even destabilizing. In most forms of numerical computation, it is usually better to suffer the non-linearity of signal saturation than the effects of numerical overflow.
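The difference between the two policies is easy to see with a 16-bit accumulator (a minimal Python sketch; the word length and the numbers are purely illustrative):

INT16_MIN, INT16_MAX = -32768, 32767

def add_wraparound(a, b):
    """Two's-complement 16-bit addition: an overflow wraps around (large sign error)."""
    s = (a + b) & 0xFFFF
    return s - 0x10000 if s > INT16_MAX else s

def add_saturating(a, b):
    """Saturation arithmetic: an overflow clips to the nearest representable value."""
    return max(INT16_MIN, min(INT16_MAX, a + b))

u = 30000   # e.g. a control signal already near positive full scale
e = 5000    # a further positive correction

print(add_wraparound(u, e))   # -30536: wraps to a large *negative* command
print(add_saturating(u, e))   #  32767: clips, keeping the sign and magnitude sensible

For a control loop, the saturating result is merely a limited command; the wrapped result reverses the sign of the command, which is exactly the kind of destabilizing error mentioned above.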

For most control applications, it is advantageous to select a processor that does not require much support hardware. One of the most commonly cited advantages of digital control is the freedom from noise in the control processor. Although it is true that controller noise is nominally limited to quantization noise, it is not true that the digital controller enjoys infinite immunity from noise. Digital logic is designed with certain noise margins, which of course are finite. When electromagnetic radiation impinges on the digital control system, there is a finite probability of making an error. One of the consequences of digital control is that although it can have a very high threshold of immunity, without error detection and correction the system is as likely to make a large error as a small one — the MSB and the LSB of a bus have equal margin against noise.

In addition to external sources of error-causing signals, the possibility for circuit failure exists. If a digital logic circuit threshold drifts outside the design range, the consequences are usually catastrophic.

For operational integrity, error detection is a very important feature.

I hope to compare, if possible, some families of Digital Control processors here, a bit later.

Regards,

Nalin Pithwa

Reference:

Digital Control of Dynamic Systems, Franklin, Powell and Workman.

Bilinear Transformation via Synthetic Division: The Matrix Method

(Authors: Prof. Navdeep M. Singh, VJTI, University of Mumbai and Nalin Pithwa, 1992).

Abstract: The bilinear transformation can be achieved by using the method of synthetic division. A good deal of simplification is obtained when the process is implemented as a sequence of matrix operations. Besides, the matrices are found to have simple structures suitable for algorithmic implementation.

I) INTRODUCTION:

Davies [1] proposed a method for bilinear transformation using synthetic division. This method is considerably simplified when the synthetic division is carried out as a set of matrix operations, because the operator matrices are found to have simple structures and so can be easily generated.

II) THE ALGORITHM:

Given a discrete time transfer function H(z), it is transformed to G(s) in the s-plane under the transformation:

z=\frac{s+1}{s-1}

This can be achieved sequentially as:

H(z) \rightarrow H(s+1) \rightarrow H(\frac{1}{s}+1) \rightarrow H(\frac{2}{s}+1) \rightarrow H(\frac{2}{s-1}+1)=H(\frac{s+1}{s-1})=G(s)

The first step is to represent the given characteristic polynomial in the standard companion form. Since the companion form represents a monic polynomial, appropriate scaling is required in the course of the algorithm to ensure that the polynomial generated is monic after each step of transformation.

The method is developed for a third degree polynomial and then generalized.

Step 1:

H(z) \rightarrow H_{1}(s+1) \rightarrow H(s+1) (decreasing the roots by one)

Given H(z)=z^{3}+a_{2}z^{2}+a_{1}z+a_{0} (a_{3}=1) (monic)

Then, H_{1}(s+1)=s^{3}+b_{2}s^{2}+b_{1}s+b_{0} where b_{2}=a_{2}+1, b_{1}=a_{1}+b_{2}, b_{0}=a_{0}+b_{1}

H(s+1)=s^{3}+(a_{2}+3)s^{2}+(a_{1}+2a_{2}+3)s+(a_{0}+a_{1}+a_{2}+1).

In the companion form, obviously the following transformation is sought:

A = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_{0} & -a_{1} & -a_{2} \end{array}\right] and B=\left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 &1 \\-b_{0} & -b_{1} & -b_{2} \end{array} \right] and C = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_{0}-a_{1}-a_{2}-1 & -a_{1} - 2a_{2}-3 & -a_{2}-3 \end{array} \right]

Performing elementary row and column transformations on A using matrix operators, the final row operator and column operator matrices, P_{1} and Q_{1}, respectively are found to be P_{1} = \left[ \begin{array}{ccc} (a_{0}+a_{1})/a_{0} & a_{2}/a_{0} & 1/a_{0} \\ -1 & 1 & 0 \\ 0 & -1 & 1\end{array} \right] and Q_{1} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{array}\right].

Thus, P_{1}AQ_{1}=B.

In general, for a polynomial of degree n,

P_{1} = \left[ \begin{array}{ccccc} (a_{0}+a_{1})/a_{0} & a_{2}/a_{0} & a_{3}/a_{0} & \ldots & 1/a_{0}\\ -1 & 1 & 0 & \ldots & 0\\ 0 & -1 & 1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \ldots & -1 & 1\end{array}\right] and Q_{1} = \left[ \begin{array}{ccccc} 1 & 0 & 0 & \ldots & 0 \\ 1 & 1 & 0 & \ldots & 0 \\ 1 & 1 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & 1 & \ldots & 1\end{array}\right], where both the matrices are n \times n.

and B = P_{1}AQ_{1} = \left[ \begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0\\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ -b_{0} & -b_{1} & -b_{2} & \ldots & -b_{n-1} \end{array}\right], and B is also n \times n.

where b_{j} = \sum_{i=j}^{n-1}a_{i} + 1, for j = 0, 1, 2, \ldots, (n-1).

Now, the transformation B \rightarrow C is sought. The row and column operator matrices, P_{c1} and Q_{c1}, respectively, are : P_{c1} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 1 & -2 & 1 \end{array}\right] and Q_{c1} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{array}\right].

P_{c1} and Q_{c1} are found to have the following general structures:

P_{c1} = \left[ \begin{array}{ccccc}1 & 0 & \ldots & \ldots & 0 \\ -1 & 1 & \ldots & \ldots & 0 \\ 1 & -2 & 1 & \ldots & 0 \\ -1 & 3 & -3 & 1 & \ldots \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \ldots & \ldots & \ldots & \ldots & 1\end{array}\right] and Q_{c1} = \left[ \begin{array}{ccccc} 1 & 0 & \ldots & \ldots & 0 \\ 0 & 1 & 0 & \ldots & 0 \\ 0 & 1 & 1 & \ldots & 0 \\ 0 & 1 & 2 & 1 &\ldots \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \ldots & \ldots & \ldots & \ldots & 1\end{array}\right], both general matrices P_{c1} and Q_{c1}, being of dimensions n \times n.

P_{c1} is lower triangular and can be generated as P_{c1}(i,j) = [|P_{c1}(i-1,j)|+|P_{c1}(i-1,j-1)|] \times (-1)^{(i+j)}, where i=2, 3, 4, \ldots, n, together with the boundary values

P_{c1}(i,1)=(-1)^{(i-1)}, where i=1, 2, 3, \ldots, n, and P_{c1}(i,i)=1.

Similarly, Q_{c1} is lower triangular and can be generated as :

Q_{c1}(i,1)=0, where i=2,3,4, \ldots, n

Q_{c1}(i,j) = Q_{c1}(i-1,j) + Q_{c1}(i-1,j-1).

Thus, when A is the companion form of a polynomial of any degree n, then P_{c1}(P_{1}AQ_{1})Q_{c1} gives H(s+1) in the companion form.
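A small numerical check of Step 1 for the cubic case (a numpy sketch; the test coefficients a_{0}, a_{1}, a_{2} are arbitrary):

import numpy as np

a0, a1, a2 = 0.5, 2.5, 3.0    # arbitrary test cubic z^3 + a2 z^2 + a1 z + a0

A = np.array([[0, 1, 0], [0, 0, 1], [-a0, -a1, -a2]])

P1  = np.array([[(a0 + a1) / a0, a2 / a0, 1 / a0], [-1, 1, 0], [0, -1, 1]])
Q1  = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1]])
Pc1 = np.array([[1, 0, 0], [-1, 1, 0], [1, -2, 1]])
Qc1 = np.array([[1, 0, 0], [0, 1, 0], [0, 1, 1]])

C = Pc1 @ (P1 @ A @ Q1) @ Qc1    # companion form of H(s+1)

# Direct expansion of (s+1)^3 + a2 (s+1)^2 + a1 (s+1) + a0 for comparison:
expected = [a0 + a1 + a2 + 1, a1 + 2 * a2 + 3, a2 + 3]
assert np.allclose(C[-1, :], [-c for c in expected])
print(C)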

Step 2:

H(s+1) \rightarrow (\frac{s^{3}}{b_{0}}).H(1/s + 1) (scaled inversion).

Let H(s+1) = s^{3} + b_{2}s^{2}+b_{1}s+b_{0} where b_{0}=a_{0} + a_{1} + a_{2} +1, b_{1} = a_{1} + 2a_{2}+3, b_{2}=a_{2}+3.

The scaling of the entire polynomial by b_{0} ensures that the polynomial generated is monic and hence, can be represented in the companion form.

The following transformation is sought: C = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -b_{0} & -b_{1} & -b_{2} \end{array}\right] \rightarrow D = \left[\begin{array}{ccc} 0 & 1 & 0\\ 0 & 0 & 1 \\ -1/b_{0} & (-b_{2}/b_{0}) & (-b_{1}/b_{0})\end{array}\right]

The row and column operator matrices P_{2} and Q_{2} respectively are:

P_{2}=\left[\begin{array}{ccc} b_{1}/b_{2} & 0 & 0 \\ 0 & b_{2}/b_{1} & 0 \\ 0 & 0 & 1/b_{0}\end{array}\right] and Q_{2} = \left[ \begin{array} {ccc}1/b_{0} & 0 & 0 \\ 0 & b_{2}/b_{1} & 0 \\ 0 & 0 & b_{1}/b_{2}\end{array}\right]

In general, P_{2} = \left[ \begin{array}{ccccc} b_{1}/b_{n-1} & 0 & 0 & \ldots & 0 \\ 0 & b_{2}/b_{n-2} & 0 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \ldots & b_{n-1}/b_{1} & 0 \\ 0 & 0 & \ldots & 0 & 1/b_{0}\end{array}\right] and Q_{2} = \left[ \begin{array}{ccccc} 1/b_{0} & 0 & 0 & \ldots & 0\\ 0 & b_{n-1}/b_{1} & 0 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \ldots & b_{2}/b_{n-2} & 0 \\ 0 & 0 & \ldots & 0 & b_{1}/b_{n-1}\end{array}\right], where both the general matrices P_{2} and Q_{2} are diagonal and of dimensions n \times n.

So, we get D=P_{2}A_{1}Q_{2} = \left[ \begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & 1 \\ -1/b_{0} & (-b_{n-1}/b_{0}) & (-b_{n-2}/b_{0}) & \ldots & (-b_{1}/b_{0})\end{array}\right], which is also a matrix of dimension n \times n (here A_{1} denotes the companion matrix obtained at the end of Step 1).

Step 3:

(s^{3}/b_{0})H(1/s + 1) \rightarrow (s^{3}/b_{0})H(1 + k/s), with k=2 (scaling of roots)

If F(x) = a_{n}x^{n} + a_{n-1}x^{n-1} + a_{n-2}x^{n-2} + \ldots + a_{0}, then k^{n}F(x/k) = a_{n}x^{n} + ka_{n-1}x^{n-1} + k^{2}a_{n-2}x^{n-2}+ \ldots + a_{0}k^{n}, a polynomial whose roots are those of F(x) multiplied by k.

The following transformation is sought:

D = \left[\begin{array}{ccc}0 & 1 & 0 \\ 0 & 0 & 1 \\ -1/b_{0} & (-b_{2}/b_{0}) & (-b_{1}/b_{0}) \end{array}\right] \rightarrow E = \left[ \begin{array}{ccc}0 & 1 & 0 \\ 0 & 0 & 1 \\ -k^{3}/b_{0} & (-k^{2}b_{2}/b_{0}) & (-kb_{1}/b_{0})\end{array}\right].

The row and column operators, P_{3} and Q_{3}, respectively are:

P_{3}= \left[ \begin{array}{ccc} 1/k^{2} & 0 & 0 \\ 0 & 1/k & 0 \\ 0 & 0 & 1\end{array}\right], and Q_{3}=\left[ \begin{array}{ccc} k^{3} & 0 & 0 \\ 0 & k^{2} & 0 \\ 0 & 0 & k \end{array}\right]

In general, P_{3} = \left[ \begin{array}{ccccc}1/k^{n-1} & 0 & 0 & \ldots & 0 \\ 0 & 1/k^{n-2} & 0 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \ldots & 1/k & 0 \\ 0 & 0 & \ldots & 0 & 1\end{array}\right], and Q_{3} = \left[ \begin{array}{ccccc}k^{n} & 0 & 0 & \ldots & 0 \\ 0 & k^{n-1} & 0 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \ldots & k^{2} & 0 \\ 0 & 0 & \ldots & 0 & k\end{array}\right], where the general matrices P_{3} and Q_{3} are both diagonal and of dimensions n \times n

and E=P_{3}A_{2}Q_{3} = \left[ \begin{array}{ccccc}0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ 0 & \ldots & \ldots & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ (-k^{n}/b_{0}) & (-b_{n-1}k^{n-1}/b_{0}) & \ldots & \ldots & (-kb_{1}/b_{0})\end{array}\right]
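Continuing the same numerical check for Steps 2 and 3 combined (a self-contained numpy sketch; b_{0}, b_{1}, b_{2} below are the coefficients of H(s+1) for the test cubic of the previous sketch, and k=2):

import numpy as np

b0, b1, b2 = 7.0, 11.5, 6.0    # H(s+1) = s^3 + b2 s^2 + b1 s + b0 for the test cubic
k = 2.0

C = np.array([[0, 1, 0], [0, 0, 1], [-b0, -b1, -b2]])

P2 = np.diag([b1 / b2, b2 / b1, 1 / b0])    # Step 2: scaled inversion
Q2 = np.diag([1 / b0, b2 / b1, b1 / b2])
P3 = np.diag([1 / k**2, 1 / k, 1.0])        # Step 3: scaling of the roots by k
Q3 = np.diag([k**3, k**2, k])

E = P3 @ (P2 @ C @ Q2) @ Q3

# Expected companion form of (s^3/b0) H(k/s + 1):
assert np.allclose(E[-1, :], [-k**3 / b0, -k**2 * b2 / b0, -k * b1 / b0])
print(E)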

Step 4:

(s^{3}/b_{0})H(1+k/s) \rightarrow (s^{3}/b_{0})H(1+k/(s-1)) (increasing the roots by one)

For the third degree case, the following transformation is sought:

E = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -A_{0} & -A_{1} & -A_{2}\end{array}\right] and F = \left[ \begin{array}{ccc}0 & 1 & 0 \\ 0 & 0 & 1 \\ -B_{0} & -B_{1} & -B_{2}\end{array}\right] and G = \left[ \begin{array}{ccc}0 & 1 & 0\\ 0 & 0 & 1 \\ (-A_{0}+A_{1}-A_{2}+1) & (-A_{1}+2A_{2}-3) & (3-A_{2}) \end{array}\right],

where B_{2}=A_{2}-1, B_{1}=A_{1}-B_{2}, B_{0}=A_{0}-B_{1}.

The row and column operators P_{4} and Q_{4} are :

P_{4}= \left[ \begin{array}{ccc} (A_{0}-A_{1})/A_{0} & (-A_{2}/A _{0}) & (-1/A_{0}) \\ 1 & 1 & 0 \\ 0 & 1 & 1\end{array}\right] and Q_{4}=\left[ \begin{array}{ccc} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 1 & -1 & 1\end{array}\right]

In general, P_{4} = \left[ \begin{array}{ccccc} (C_{0}-C_{1})/C_{0} & (-C_{2}/C_{0}) & (-C_{3}/C_{0}) & \ldots & (-1/C_{0}) \\ 1 & 1 & 0 & \ldots & 0\\ 0 & 1 & 1 & \ldots & 0\\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 & 1 \end{array}\right], and Q_{4}=\left[ \begin{array}{ccccc} 1 & 0 & 0 & \ldots & 0 \\ -1 & 1 & 0 & \ldots & 0 \\ 1 & -1 & 1 & \ldots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ (-1)^{n+1} & (-1)^{n+2} & \ldots & -1 & 1\end{array}\right], both the general matrices P_{4} and Q_{4} being of dimensions n \times n.

where C_{0}=k^{n}/b_{0} and C_{j}=k^{n-j}b_{n-j}/b_{0} for j=1, 2, 3, \ldots, (n-1), and Q_{4} is a lower triangular matrix with q_{4}(i,j) = (-1)^{i+j} for i \geq j.

In general, we have F = P_{4}A_{3}Q_{4} = \left[ \begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ -\hat{C_{0}} & -\hat{C_{1}} & -\hat{C_{2}} & \ldots & -\hat{C_{n-1}}\end{array}\right]

where -\hat{C_{0}} = -C_{0} + C_{1} - C_{2} + \ldots + (-1)^{n-1},

-\hat{C_{1}} = -C_{1} + C_{2} - C_{3} + \ldots - (-1)^{n-1},

-\hat{C_{2}} = -C_{2} + C_{3} - C_{4} + \ldots + (-1)^{n-1},

and so on.

Now, the transformation F \rightarrow G is to be achieved. The row and column operators, P_{c4} and Q_{c4}, respectively are: P_{c4}=\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 0\\ 1 & 2 & 1\end{array}\right] and Q_{c4}=\left[ \begin{array}{ccc}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1\end{array}\right]

In general, P_{c4}=\left[ \begin{array}{ccccc} 1 & 0 & 0 & \ldots & 0\\ 1 & 1 & 0 & \ldots & 0 \\ 1 & 2 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & \ldots & \ldots & \ldots & 1\end{array}\right] and Q_{c4}= \left[ \begin{array} {ccccc} 1 & 0 & \ldots & \ldots & 0 \\ 0 & 1 & 0 & \ldots & 0 \\ 0 & -1  & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \ldots & \ldots & \ldots & \ldots & 1\end{array}\right], where both the general matrices P_{c4} and Q_{c4} are of dimensions n \times n.

P_{c4}, a lower triangular matrix can be easily generated as:

p_{c4}(i,1)=1; p_{c4}(i,i)=1; and p_{c4}(i,j) = p_{c4}(i-1,j) + p_{c4}(i-1,j-1).

and Q_{c4}, also a lower triangular matrix can be easily generated as:

q_{c4}(i,i)=1; and q_{c4}(i,1)=0 when i \neq 1; and

q_{c4}(i,j) = [|q_{c4}(i-1,j)| + |q_{c4}(i-1,j-1)|] \times (-1)^{i+j}.

Thus, P_{c4}(P_{4}EQ_{4})Q_{c4} finishes the process of bilinear transformation. Steps 2 and 3 can be combined so that the algorithm reduces to three steps only.

If the original polynomial is non-monic (that is, a_{n} \neq 1), then multiplying the final transformed polynomial by b_{0}a_{n} restores it to the standard form.
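Since the lower-triangular operators P_{c1}, Q_{c1}, P_{c4} and Q_{c4} all have Pascal-triangle magnitudes, they are easy to generate programmatically for any n, as the abstract suggests. A numpy sketch (matrix indices here are 0-based, whereas the recurrences above are written 1-based):

import numpy as np

def pascal_lower(n, first_col, signed):
    """Lower-triangular operator with Pascal-triangle magnitudes, an optional
    alternating sign (-1)^(i+j), and a prescribed first column."""
    M = np.zeros((n, n))
    M[0, 0] = 1.0
    for i in range(1, n):
        M[i, 0] = first_col(i)
        M[i, i] = 1.0
        for j in range(1, i):
            mag = abs(M[i - 1, j]) + abs(M[i - 1, j - 1])
            M[i, j] = mag * ((-1.0) ** (i + j) if signed else 1.0)
    return M

Pc1 = lambda n: pascal_lower(n, lambda i: (-1.0) ** i, signed=True)   # P_c1(i,1) = (-1)^(i-1)
Pc4 = lambda n: pascal_lower(n, lambda i: 1.0, signed=False)          # p_c4(i,1) = 1
Qc4 = lambda n: pascal_lower(n, lambda i: 0.0, signed=True)           # q_c4(i,1) = 0 for i != 1

def Qc1(n):
    """Q_c1: first column is (1, 0, ..., 0)^T; the remaining lower triangle
    follows Q_c1(i,j) = Q_c1(i-1,j) + Q_c1(i-1,j-1)."""
    M = np.zeros((n, n))
    M[0, 0] = 1.0
    for i in range(1, n):
        M[i, i] = 1.0
        for j in range(1, i):
            M[i, j] = M[i - 1, j] + M[i - 1, j - 1]
    return M

# The 4 x 4 operators used in the example of Section IV:
print(Pc1(4))   # rows: [1,0,0,0], [-1,1,0,0], [1,-2,1,0], [-1,3,-3,1]
print(Qc1(4))   # rows: [1,0,0,0], [0,1,0,0], [0,1,1,0], [0,1,2,1]
print(Pc4(4))   # rows: [1,0,0,0], [1,1,0,0], [1,2,1,0], [1,3,3,1]
print(Qc4(4))   # rows: [1,0,0,0], [0,1,0,0], [0,-1,1,0], [0,1,-2,1]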

III. Stability considerations:

In the \omega-plane, the Schwarz canonical form approach can be applied directly to the companion form of the bilinearly transformed polynomial obtained above, because a companion matrix is non-derogatory.

IV. An Example:

F_{0}(z) = 2z^{4} + 4z^{3} + 6z^{2} + 5z +1

F(z) = z^{4} + 2z^{3} + 3z^{2} + 2.5z + 0.5

b_{0} = a_{0} + a_{1} + a_{2} + a_{3} + 1 = 9

b_{1} = a_{1} + a_{2} + a_{3} + 1 = 8.5

Similarly, b_{2}=6 and b_{3}=3

B=P_{1}AQ_{1}= \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ -9 & -8.5 & -6 & -3\end{array}\right]

C = \left[ \begin{array}{cccc}1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 1 & -2 & 1 & 0 \\ -1 & 3 & -3 & 1\end{array}\right] \times \left[ \begin{array}{cccc}0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -9 & -8.5 & -6 & -3\end{array}\right] \times \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 2 & 1\end{array}\right]

Hence, C = \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -9 & -18.5 & -15 & -6\end{array}\right]

H(s+1)=s^{4}+6s^{3} + 15s^{2}+18.5s+9

Steps 2 and 3:

E=\left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ -2^{4}/9 & -2^{3}(6)/9 & -2^{2}(15)/9 & (-2)(18.5)/9\end{array}\right] = \left[ \begin{array}{cccc}0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ (-16/9) & (-48/9) & (-60/9) & (-37/9)\end{array}\right]

Step 4:

C_{0}=16/9, C_{1}=48/9, C_{2}=60/9, C_{3}=37/9.

-\hat{C_{0}}=0, -\hat{C_{1}}=-16/9, -\hat{C_{2}}=-32/9, -\hat{C_{3}}=-28/9

P_{c4}(P_{4}EQ_{4})Q_{c4}= \left[ \begin{array}{cccc}1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 \\ 1 & 3 & 3 & 1\end{array}\right] \times \left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ 0 & -16/9 & -32/9 & -28/9\end{array}\right] \times \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 1 & -2 & 1\end{array}\right], so we get

P_{c4}(P_{4}EQ_{4})Q_{c4}=\left[ \begin{array}{cccc}0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ 0 & -3/9 & -3/9 & -1/9\end{array}\right]

The final monic polynomial is s^{4} + (1/9)s^{3}+(3/9)s^{2} + (3/9)s, and multiplying it by b_{0}a_{n}, that is, 9 \times 2 =18, restores it to the non-monic form:

18s^{4}+2s^{3}+6s^{2}+6s
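As a cross-check (a short numpy sketch), the same result follows by substituting z = (s+1)/(s-1) into F_{0}(z) directly and clearing the denominator, i.e. computing \sum_{i} a_{i}(s+1)^{i}(s-1)^{n-i}:

import numpy as np
from numpy.polynomial import polynomial as P

# F0(z) = 2z^4 + 4z^3 + 6z^2 + 5z + 1, coefficients listed from z^0 upward
a = [1, 5, 6, 4, 2]
n = len(a) - 1

# G(s) = (s-1)^n F0((s+1)/(s-1)) = sum_i a_i (s+1)^i (s-1)^(n-i)
G = np.zeros(1)
for i, ai in enumerate(a):
    term = ai * P.polypow([1, 1], i)                      # a_i (s+1)^i
    term = P.polymul(term, P.polypow([-1, 1], n - i))     # ... times (s-1)^(n-i)
    G = P.polyadd(G, term)

print(G)   # [0., 6., 6., 2., 18.], i.e. 18s^4 + 2s^3 + 6s^2 + 6s, as obtained above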

V. Conclusion:

Since the operator matrices have few non-zero elements, the storage requirements are low. The computational burden should also be reduced for higher-order systems, because fewer non-zero elements mean fewer manipulations in addition to less storage. Additionally, the second and third steps can be combined, giving a three-step method. Thus, the algorithm achieves the bilinear transformation easily, especially for higher-order systems, compared with other available methods.

VI. References:

  1. Davies, A.C., “Bilinear Transformation of Polynomials”, IEEE Automatic Control, Nov. 1974.
  2. Barnett and Storey, “Matrix Methods in Stability Theory”, Thomas Nelson and Sons Ltd.
  3. Datta, B. N., “A Solution of the Unit Circle Problem via the Schwarz Canonical Form”, IEEE Automatic Control, Volume AC 27, No. 3, June 1982.
  4. Parthsarthy R., and Jaysimha, K. N., “Bilinear Transformation by Synthetic Division”, IEEE Automatic Control, Volume AC 29, No. 6, June 1986.
  5. Jury, E.I., “Theory and Applications of the z-Transform Method”, John Wiley and Sons Inc., 1984.