The following is one of the greatest triumphs of pure mathematics — its applications to the glamorous world of IT, finance, science and engineering! Hats off to Prof. Yves Meyer, winner of the 2017 Abel Prize:
Reproduced from the newspaper DNA, print edition, Mumbai, Sunday, Mar 5, 2017 (Section on Health):
Biomedical solutions via math:
(A new model combines mathematics with biology to set the stage for cancer cure and other diseases):
How do our genes give rise to proteins, proteins to cells, and cells to tissues and organs? What makes a cluster of cells become a liver or a muscle? The incredible complexity of these biological systems drives the work of biomedical scientists. But, two mathematicians have introduced a new way of thinking that may help set the stage for better understanding of our bodies and other living things.
The pair from University of Michigan Medical School and University of California, Berkeley talk of using math to understand how genetic information and interactions between cells give rise to the actual function of a particular type of tissue. While the duo admit that it’s a highly idealized framework that does not take into account every detail of this process, that’s what’s needed: by stepping back and making a simplified model based on mathematics, they hope to create a basis for scientists to understand the changes that happen over time within and between cells to make living tissues possible. It could also help with understanding how diseases such as cancer can arise when things don’t go as planned.
Turning to Turing’s machine:
U-M Medical School Assistant Professor of Computational Medicine, Indika Rajapakse and Berkeley Professor Emeritus, Stephen Smale have worked on the concepts for several years. “All the time, this process is happening in our bodies, as cells are dying and arising, and yet, they keep the function of the tissue going,” says Rajapakse. “We need to use beautiful mathematics and beautiful biology together to understand the beauty of a tissue.”
For the new work, they even hearken back to the work of Alan Turing, the pioneering British mathematician famous for his theoretical “Turing machine” and for the code-breaking work that cracked ciphers during World War II.
Toward the end of his life, Turing began looking at the mathematical underpinnings of morphogenesis — the process that allows natural patterns such as a zebra’s stripes to develop as a living thing grows from an embryo to an adult.
“Our approach adapts Turing’s technique, combining genome dynamics within the cell and the diffusion dynamics between cells,” says Rajapakse, who leads the U-M 4D Genome Lab in the Department of Computational Medicine and Bioinformatics.
His team of biologists and engineers conducts experiments that capture human genome dynamics in three dimensions using biochemical methods and high-resolution imaging.
Bringing math and the genome together
Smale, who retired from Berkeley, but is still active in research, is considered a pioneer of modelling dynamic systems. Several years ago, Rajapakse approached him during a visit to U-M, where Smale earned his undergraduate and graduate degrees. They began exploring how to study the human genome — the set of genes in an organism’s DNA — as a dynamic system.
They based their work on the idea that while the genes of an organism remain the same throughout life, how cells use them does not.
Last spring, they published a paper that lays a mathematical foundation for gene regulation — the process that governs how often and when genes get “read” by cells in order to make proteins.
Instead of the nodes of those networks being static, as Turing assumed, the new work sees them as dynamic systems. The genes may be “hard-wired” into the cell, but how they are expressed depends on factors such as epigenetic tags added as a result of environmental factors, and more.
As a result of his work with Smale, Rajapakse now has funding from the Defense Advanced Research Projects Agency (DARPA), to keep exploring the issue of emergence of function — including what happens when the process changes.
Cancer, for instance, arises from a cell development and proliferation cycle gone awry. And the process by which induced pluripotent stem cells are made in a lab — essentially turning back the clock on a cell type so that it regains the ability to become other cell types — is another example.
Rajapakse aims to use data from real world genome and cell biology experiments in his lab to inform future work, focused on cancer and cell reprogramming.
He’s also organizing a gathering of mathematicians from around the world to look at computational biology and the genome this summer in Barcelona.
Thanks to DNA, Prof. Stephen Smale and Prof. Indika Rajapakse. This, in my view, is one of several fine applications of math.
The following are the references I used:
- Digital Signal Processing — John H. Karl
- DSP Processor Architectures — Lapsley et al.
- DSP Software Development Techniques for Embedded and Real Time Systems — Rob Oshana
The first step in a signal processing system is getting the information from the real world into the system. This requires transforming an analog signal to a digital representation suitable for processing by the digital system. This signal passes through a device called an analog-to-digital converter (A/D or ADC). The ADC converts the analog signal to a digital representation by sampling or measuring the signal at a periodic rate. Each sample is assigned a digital code. These digital codes can then be processed by the DSP. The number of different codes or states is almost always a power of two. The simplest digital signals have only two states. These are referred to as binary signals.
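Since the original carries no code, here is a minimal Python sketch of the sampling-and-coding idea described above. The 3-bit resolution and the ±1 V input range are assumptions chosen for illustration, not values from the text; a real ADC is a hardware device, and this only mimics its code assignment step.

```python
# A rough model of how an ADC assigns each analog sample a digital code.
# The number of codes is a power of two: 2**bits states.
def quantize(sample, v_min=-1.0, v_max=1.0, bits=3):
    """Map an analog voltage to one of 2**bits digital codes."""
    levels = 2 ** bits                    # number of states (a power of two)
    step = (v_max - v_min) / levels       # width of one quantization step
    code = int((sample - v_min) / step)   # index of the step the sample falls in
    return min(max(code, 0), levels - 1)  # clamp to the valid code range

# A few analog sample values and the codes they map to:
codes = [quantize(v) for v in (-1.0, -0.3, 0.0, 0.7, 1.0)]
```

With 3 bits there are only eight states (codes 0 through 7); with 1 bit there would be just two, giving the binary signals mentioned above.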
Examples of analog signals are waveforms representing human speech and signals from a television camera. Each of these analog signals can be converted to a digital form using ADC and then processed using a programmable DSP.
Digital signals can be processed more efficiently than analog signals. Digital signals are generally well-defined and orderly, which makes them easier for electronic circuits to distinguish from noise, which is chaotic. Noise is basically unwanted information. Noise can be background noise from an automobile, or a scratch on a picture that has been converted to digital form. In the analog world, noise can be represented as electrical or electromagnetic energy that degrades the quality of signals and data. Noise, however, occurs in both digital and analog systems. Sampling errors can degrade digital signals as well. Too much noise can degrade all forms of information including text, programs, images, audio and video, and telemetry. Digital signal processing provides an effective way to minimize the effects of noise by making it easy to filter this “bad” information out of the signal.
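As a sketch of the kind of digital filtering described above, here is a simple moving-average filter in Python, one of the most basic ways to smooth sample-to-sample noise. The window length of 3 is an assumption for illustration; real DSP systems use far more sophisticated filters.

```python
# A simple moving-average filter: each output sample is the mean of the
# last `window` input samples, which smooths out rapid noise fluctuations.
def moving_average(samples, window=3):
    out = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)   # use fewer samples at the start
        chunk = samples[start:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A signal alternating 0/1 (pure "noise") is pulled toward its mean:
noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
smoothed = moving_average(noisy)
```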
As an example, assume that an analog signal needs to be converted into a digital signal for further processing. The first question to consider is how often to sample or measure the analog signal in order to represent that signal accurately in the digital domain. The sample rate is the number of samples of an analog event (like sound) that are taken per second to represent the event in the digital domain. Let’s assume that we sample the signal once every T seconds, so T is the sampling period. The sampling frequency can then be represented as

fs = 1/T

where the sampling frequency fs is measured in hertz. If the sampling frequency is 8 kilohertz (kHz), this would be equivalent to 8000 samples per second. The sampling period would then be:

T = 1/fs = 1/8000 = 0.000125 seconds = 125 microseconds
This tells us that, for a signal being sampled at this rate, we would have 0.000125 seconds to perform all the processing necessary before the next sample arrived (remember that these samples are arriving on a continuous basis and we cannot fall behind in processing them). This is a common restriction for real-time systems, which we have discussed in an earlier blog too.
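The sampling arithmetic above can be checked with a couple of lines of Python:

```python
# An 8 kHz sampling frequency gives a sampling period of 1/8000 s,
# i.e. 125 microseconds between consecutive samples.
fs = 8000          # sampling frequency in hertz
T = 1 / fs         # sampling period in seconds
T_us = T * 1e6     # the same period expressed in microseconds
```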
Since we now know the time restriction, we can determine the processor speed required to keep up with the sampling rate. Processor “speed” is measured not by how fast the clock rate is for the processor, but by how fast the processor executes instructions. Once we know the processor instruction cycle time, we can determine how many instructions we have available to process the sample:
Sampling period (T) / Instruction cycle time = number of instructions per sample
For a 100 MHz processor that executes one instruction per cycle, the instruction cycle time would be 1/(100 MHz) = 10 nanoseconds.
125 microseconds / 10 ns = 12,500 instructions per sample
125 microseconds / 5 ns = 25,000 instructions per sample (for a 200 MHz processor)
125 microseconds / 2 ns = 62,500 instructions per sample (for a 500 MHz processor)
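The three instruction-budget calculations above can be reproduced with a short Python sketch:

```python
# The number of instructions available per sample is the sampling
# period divided by the processor's instruction cycle time.
def instructions_per_sample(sampling_period_s, cycle_time_s):
    return sampling_period_s / cycle_time_s

T = 125e-6  # 125 microsecond sampling period (8 kHz sampling rate)

budget_100mhz = instructions_per_sample(T, 10e-9)  # 10 ns cycle time
budget_200mhz = instructions_per_sample(T, 5e-9)   # 5 ns cycle time
budget_500mhz = instructions_per_sample(T, 2e-9)   # 2 ns cycle time
```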
As this example demonstrates, the faster the processor executes instructions, the more processing we can do on each sample. If it were that easy, we could just choose the highest processor speed available and have plenty of processing margin. Unfortunately, it is not as easy as this. Many other factors, including cost, accuracy, and power limitations, must be considered. Embedded systems have many constraints such as these, as well as size and weight (important for portable devices). For example, how do we know how fast we should sample the input analog signal to represent it accurately in the digital domain? If we do not sample fast enough, the information we obtain will not be representative of the true signal. If we sample too often, we may be “over-designing” the system and overly constraining ourselves.
Digital to Analog Conversion.
In many applications, a signal must be sent back out to the real world after being processed, enhanced and/or transformed inside the DSP. Digital-to-analog conversion (DAC) is a process in which signals having a few (usually two) defined levels or states (digital) are converted into signals having a very large number of states (analog).
Both the ADC and DAC are of significance in many applications of digital signal processing. The fidelity of an analog signal can often be improved by converting the analog input to digital form using an ADC, clarifying or enhancing the digital signal, and then converting the enhanced digital signal back to analog form using a DAC. (A single digital output level produces a DC output voltage.)