Analog to Digital Conversion and Digital to Analog Conversion

The first step in a signal processing system is getting the information from the real world into the system. This requires transforming an analog signal into a digital representation suitable for processing by the digital system. The signal passes through a device called an analog-to-digital converter (A/D or ADC). The ADC converts the analog signal to a digital representation by sampling, or measuring, the signal at a periodic rate. Each sample is assigned a digital code. These digital codes can then be processed by the DSP. The number of different codes or states is almost always a power of two. The simplest digital signals have only two states. These are referred to as binary signals.
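To make the idea of digital codes concrete, here is a minimal Python sketch of a uniform quantizer. The 3-bit resolution and the ±1 V input range are illustrative assumptions of mine, not values from any particular converter:

```python
# Minimal sketch: a uniform ADC quantizer. A 3-bit converter has
# 2**3 = 8 digital codes, a power of two, as noted above.
def adc_quantize(voltage, v_min=-1.0, v_max=1.0, bits=3):
    """Map an analog voltage to one of 2**bits digital codes."""
    levels = 2 ** bits
    voltage = max(v_min, min(v_max, voltage))   # clamp to converter range
    # Scale into [0, levels - 1] and round to the nearest integer code.
    return round((voltage - v_min) / (v_max - v_min) * (levels - 1))

samples = [0.0, 0.3, -0.7, 0.95]            # analog sample values
codes = [adc_quantize(v) for v in samples]  # -> [4, 5, 1, 7]
print(codes)
```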

Examples of analog signals are waveforms representing human speech and signals from a television camera. Each of these analog signals can be converted to digital form using an ADC and then processed using a programmable DSP.

Digital signals can be processed more efficiently than analog signals. Digital signals are generally well-defined and orderly, which makes them easier for electronic circuits to distinguish from noise, which is chaotic. Noise is basically unwanted information. It can be background noise from an automobile, or a scratch on a picture that has been converted to digital form. In the analog world, noise can be represented as electrical or electromagnetic energy that degrades the quality of signals and data. Noise, however, occurs in both digital and analog systems; sampling errors can degrade digital signals as well. Too much noise can degrade all forms of information including text, programs, images, audio and video, and telemetry. Digital signal processing provides an effective way to minimize the effects of noise by making it easy to filter this “bad” information out of the signal.
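As one hedged illustration of filtering noise out of a digital signal, here is a short moving-average filter, my choice of technique rather than one the text prescribes:

```python
# Minimal sketch: smoothing a noise spike with a moving average.
# The 5-sample window is an illustrative choice; real filter design
# depends on the signal and noise characteristics.
def moving_average(samples, window=5):
    """Return the running mean over the last `window` samples."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1) : i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

noisy = [1.0, 1.2, 0.8, 5.0, 1.1, 0.9, 1.0]  # 5.0 is a noise spike
print(moving_average(noisy))                 # spike is spread out and damped
```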

As an example, assume that an analog signal needs to be converted into a digital signal for further processing. The first question to consider is how often to sample, or measure, the analog signal in order to represent it accurately in the digital domain. The sample rate is the number of samples of an analog event (like sound) that are taken per second to represent the event in the digital domain. Let’s assume that we are going to sample the signal once every T seconds. This can be represented as

Sampling period (T) = 1 / Sampling frequency (fs)

where the sampling frequency is measured in hertz. If the sampling frequency is 8 kilohertz (kHz), this is equivalent to 8000 cycles per second. The sampling period would then be:

T = 1/8000 = 0.000125 seconds = 125 microseconds
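In code, the same relationship is a one-liner; here is a quick Python check using the 8 kHz figure from the text:

```python
# Sampling period from sampling frequency: T = 1 / fs.
fs = 8_000     # sampling frequency in hertz (8 kHz)
T = 1 / fs     # sampling period in seconds
print(T)       # 0.000125 s, i.e. 125 microseconds
```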

This tells us that, for a signal sampled at this rate, we have 125 microseconds (0.000125 seconds) to perform all the processing necessary before the next sample arrives (remember that samples arrive continuously and we cannot fall behind in processing them). This is a common constraint for real-time systems, which we discussed in an earlier post as well.

Since we now know the time restriction, we can determine the processor speed required to keep up with the sampling rate. Processor “speed” is measured not by the clock rate alone, but by how fast the processor executes instructions. Once we know the processor instruction cycle time, we can determine how many instructions we have available to process each sample:

Sampling period (T) / Instruction cycle time = number of instructions per sample

For a 100 MHz processor that executes one instruction per cycle, the instruction cycle time would be 1/(100 MHz) = 10 nanoseconds. Then:

125 microseconds / 10 ns = 12,500 instructions per sample (for a 100 MHz processor)

125 microseconds / 5 ns = 25,000 instructions per sample (for a 200 MHz processor)

125 microseconds / 2 ns = 62,500 instructions per sample (for a 500 MHz processor)
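These budgets follow directly from the formula; here is a short Python sketch reproducing the numbers above (assuming, as in the example, one instruction per clock cycle):

```python
# Instruction budget per sample: sampling period / instruction cycle time.
# Assumes one instruction per clock cycle, as in the example above.
T = 125e-6                            # sampling period: 125 microseconds
for clock_hz in (100e6, 200e6, 500e6):
    cycle_time = 1 / clock_hz         # instruction cycle time in seconds
    budget = T / cycle_time           # instructions available per sample
    print(f"{clock_hz / 1e6:.0f} MHz -> {budget:,.0f} instructions per sample")
```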

As this example demonstrates, the faster the processor executes instructions, the more processing we can do on each sample. If it were only that easy, we could just choose the fastest processor available and have plenty of processing margin. Unfortunately, it is not that easy. Many other factors must be considered, including cost, accuracy, and power limitations. Embedded systems add further constraints, such as size and weight (important for portable devices). For example, how do we know how fast to sample the input analog signal to represent it accurately in the digital domain? If we do not sample fast enough, the information we obtain will not be representative of the true signal. If we sample too fast, we may be “over-designing” the system and constraining ourselves unnecessarily.
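To see what “not sampling fast enough” looks like, here is a small sketch. The frequencies are illustrative choices of mine: a 7 kHz tone sampled at 8 kHz produces exactly the same sample values as a 1 kHz tone, so after conversion the two signals are indistinguishable.

```python
import math

# Minimal sketch: sampling too slowly makes two different analog tones
# produce identical samples (illustrative frequencies, not from the text).
fs = 8_000                       # sampling rate in hertz
f_high, f_low = 7_000, 1_000     # a 7 kHz tone and a 1 kHz tone

for n in range(5):               # compare the first few samples
    t = n / fs                   # time of the n-th sample
    s_high = math.cos(2 * math.pi * f_high * t)
    s_low = math.cos(2 * math.pi * f_low * t)
    print(f"sample {n}: {s_high:+.4f} vs {s_low:+.4f}")   # identical values
```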

Digital to Analog Conversion

In many applications, a signal must be sent back out to the real world after being processed, enhanced, and/or transformed inside the DSP. Digital-to-analog conversion (DAC) is the process in which signals having a few (usually two) defined levels or states (digital) are converted into signals having a very large number of states (analog).
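A DAC performs the inverse mapping of the ADC sketch shown earlier; here is a minimal sketch of that direction, where the 3-bit resolution and voltage range are again illustrative assumptions:

```python
# Minimal sketch: a DAC maps each digital code back to a voltage level.
# The 3-bit resolution and voltage range are illustrative assumptions.
def dac_convert(code, v_min=-1.0, v_max=1.0, bits=3):
    """Map a digital code (0 .. 2**bits - 1) to an output voltage."""
    levels = 2 ** bits
    return v_min + code * (v_max - v_min) / (levels - 1)

codes = [4, 5, 1, 7]                        # codes from the ADC sketch above
voltages = [dac_convert(c) for c in codes]  # reconstructed voltage levels
print(voltages)
```

A constant stream of a single code yields one constant output level, which is the DC output voltage mentioned below.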

Both the ADC and DAC are of significance in many applications of digital signal processing. The fidelity of an analog signal can often be improved by converting the analog input to digital form using an ADC, clarifying or enhancing the digital signal, and then converting the enhanced digital signal back to analog form using a DAC. (A single digital output level produces a DC output voltage.)

More later,

Nalin Pithwa
