## The power and limits of AI: Prof. Cedric Villani, Fields Medalist

The following is one of the great triumphs of pure mathematics: its applications to the glamorous world of IT, finance, science and engineering! Hats off to Prof. Yves Meyer, winner of the 2017 Abel Prize:

## Biomedical solutions via math

Reproduced from the newspaper DNA (print edition, Mumbai), Sunday, Mar 5, 2017, Health section:

(A new model combines mathematics with biology to set the stage for curing cancer and other diseases):

Ann Arbor:

How do our genes give rise to proteins, proteins to cells, and cells to tissues and organs? What makes a cluster of cells become a liver or a muscle? The incredible complexity of these biological systems drives the work of biomedical scientists. But, two mathematicians have introduced a new way of thinking that may help set the stage for better understanding of our bodies and other living things.

The pair, from the University of Michigan Medical School and the University of California, Berkeley, talk of using math to understand how genetic information and interactions between cells give rise to the actual function of a particular type of tissue. While the duo admit that theirs is a highly idealized framework, which does not take into account every detail of this process, they argue that is exactly what is needed. By stepping back and making a simplified model based on mathematics, they hope to create a basis for scientists to understand the changes that happen over time within and between cells to make living tissues possible. It could also help with understanding how diseases such as cancer can arise when things don’t go as planned.

Turning to Turing’s machine:

U-M Medical School Assistant Professor of Computational Medicine, Indika Rajapakse and Berkeley Professor Emeritus, Stephen Smale have worked on the concepts for several years. “All the time, this process is happening in our bodies, as cells are dying and arising, and yet, they keep the function of the tissue going,” says Rajapakse. “We need to use beautiful mathematics and beautiful biology together to understand the beauty of a tissue.”

For the new work, they even hearken back to the work of Alan Turing, the pioneering British mathematician famous for the theoretical “Turing machine” and for his code-breaking work during World War II.

Toward the end of his life, Turing began looking at the mathematical underpinnings of morphogenesis, the process that allows natural patterns such as a zebra’s stripes to develop as a living thing grows from an embryo to an adult.

“Our approach adapts Turing’s technique, combining genome dynamics within the cell and the diffusion dynamics between cells,” says Rajapakse, who leads the U-M 4D Genome Lab in the Department of Computational Medicine and Bioinformatics.

His team of biologists and engineers conducts experiments that capture human genome dynamics in three dimensions using biochemical methods and high-resolution imaging.

Bringing math and the genome together:

Smale, who retired from Berkeley but is still active in research, is considered a pioneer of the modelling of dynamical systems. Several years ago, Rajapakse approached him during a visit to U-M, where Smale earned his undergraduate and graduate degrees. They began exploring how to study the human genome (the set of genes in an organism’s DNA) as a dynamic system.

They based their work on the idea that while the genes of an organism remain the same throughout life, how cells use them does not.

Last spring, they published a paper that lays a mathematical foundation for gene regulation — the process that governs how often and when genes get “read” by cells in order to make proteins.

Instead of the nodes of such gene-regulatory networks being static, as Turing assumed, the new work sees them as dynamic systems. The genes may be “hard-wired” into the cell, but how they are expressed depends on factors such as epigenetic tags added as a result of environmental factors, and more.

Next Step:

As a result of his work with Smale, Rajapakse now has funding from the Defense Advanced Research Projects Agency (DARPA) to keep exploring the issue of emergence of function, including what happens when the process changes.

Cancer, for instance, arises from a cell development and proliferation cycle gone awry. And the process by which induced pluripotent stem cells are made in a lab (essentially turning back the clock on a cell type so that it regains the ability to become other cell types) is another example.

Rajapakse aims to use data from real world genome and cell biology experiments in his lab to inform future work, focused on cancer and cell reprogramming.

He’s also organizing a gathering of mathematicians from around the world to look at computational biology and the genome this summer in Barcelona.

**************************************************************************************

Thanks to DNA, Prof. Stephen Smale, and Prof. Indika Rajapakse; in my view, this is one of the many beautiful applications of mathematics.

Nalin Pithwa.

## References used so far

The following are the references I used:

1. *Digital Signal Processing* by John H. Karl
2. *DSP Processor Architectures* by Lapsley et al.
3. *DSP Software Development Techniques for Embedded and Real-Time Systems* by Rob Oshana
4. The Internet

Regards,

Nalin Pithwa

## Analog to Digital Conversion and Digital to Analog Conversion

The first step in a signal processing system is getting the information from the real world into the system. This requires transforming an analog signal to a digital representation suitable for processing by the digital system. This signal passes through a device called an analog-to-digital converter (A/D or ADC). The ADC converts the analog signal to a digital representation by sampling or measuring the signal at a periodic rate. Each sample is assigned a digital code. These digital codes can then be processed by the DSP. The number of different codes or states is almost always a power of two. The simplest digital signals have only two states. These are referred to as binary signals.

Examples of analog signals are waveforms representing human speech and signals from a television camera. Each of these analog signals can be converted to a digital form using an ADC and then processed using a programmable DSP.
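As noted above, each sample is assigned one of a power-of-two number of digital codes. The following Python sketch illustrates this quantization step; the function name and the assumed ±1 V input range are illustrative choices, not part of any particular converter's specification:

```python
def adc_sample(x, bits=3, v_min=-1.0, v_max=1.0):
    """Map one analog sample x (volts) to an unsigned digital code.

    A uniform b-bit ADC divides the input range into 2**bits levels;
    each sample is clipped to the range and rounded to the nearest code.
    """
    max_code = 2 ** bits - 1                  # highest code value
    x = min(max(x, v_min), v_max)             # clip to the input range
    return round((x - v_min) / (v_max - v_min) * max_code)

# A 3-bit ADC has only 2**3 = 8 states (codes 0 through 7):
print(adc_sample(-1.0))   # 0  (lowest code)
print(adc_sample(0.0))    # 4  (mid-range)
print(adc_sample(1.0))    # 7  (highest code)
```

With only two states (`bits=1`), this degenerates to the binary signal mentioned above; real converters commonly use 8 to 24 bits.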

Digital signals can be processed more efficiently than analog signals. Digital signals are generally well-defined and orderly, which makes them easier for electronic circuits to distinguish from noise, which is chaotic. Noise is basically unwanted information. Noise can be background noise from an automobile, or a scratch on a picture that has been converted to digital form. In the analog world, noise can be represented as electrical or electromagnetic energy that degrades the quality of signals and data. Noise, however, occurs in both digital and analog systems; sampling errors can degrade digital signals as well. Too much noise can degrade all forms of information, including text, programs, images, audio and video, and telemetry. Digital signal processing provides an effective way to minimize the effects of noise by making it easy to filter this “bad” information out of the signal.

As an example, assume that an analog signal needs to be converted into a digital signal for further processing. The first question to consider is how often to sample or measure the analog signal in order to represent it accurately in the digital domain. The sample rate is the number of samples of an analog event (like sound) that are taken per second to represent the event in the digital domain. Let’s assume that we are going to sample the signal once every T seconds. This can be represented as

$\text{Sampling period } (T) = 1/\text{Sampling frequency } (f_s)$

where the sampling frequency is measured in hertz. If the sampling frequency is 8 kilohertz (kHz), this would be equivalent to 8000 cycles per second. The sampling period would then be:

$T = 1/8000 = 125 \text{ microseconds} = 0.000125 \text{ seconds}$.

This tells us that, for a signal being sampled at this rate, we would have 0.000125 seconds to perform all the processing necessary before the next sample arrived (remember that these samples are arriving on a continuous basis and we cannot fall behind in processing them). This is a common restriction for real-time systems, which we have discussed in an earlier blog too.

Since we now know the time restriction, we can determine the processor speed required to keep up with the sampling rate. Processor “speed” is measured not by how fast the processor clock rate is, but by how fast the processor executes instructions. Once we know the processor instruction cycle time, we can determine how many instructions we have available to process each sample:

$\text{Sampling period } (T) / \text{Instruction cycle time} = \text{Number of instructions per sample}$.

For a 100 MHz processor that executes one instruction per cycle, the instruction cycle time would be $1/(100 \text{ MHz}) = 10$ nanoseconds.

125 microseconds/10 ns=12,500 instructions per sample

125 microseconds/5 ns=25,000 instructions per sample (for a 200 MHz processor)

125 microseconds/2 ns=62,500 instructions per sample (for a 500 MHz processor)
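The arithmetic above can be checked with a few lines of Python; the function name is mine, and the clock speeds are simply the figures from the example (one instruction per cycle assumed throughout):

```python
def instructions_per_sample(sampling_hz, clock_hz, instr_per_cycle=1):
    """How many instructions fit between two successive samples.

    sampling period = 1 / sampling_hz   (seconds per sample)
    cycle time      = 1 / (clock_hz * instr_per_cycle)
    """
    sampling_period = 1.0 / sampling_hz
    cycle_time = 1.0 / (clock_hz * instr_per_cycle)
    return round(sampling_period / cycle_time)

# Reproduce the three cases above for an 8 kHz sample rate:
for clock in (100e6, 200e6, 500e6):
    n = instructions_per_sample(8000, clock)
    print(f"{clock/1e6:.0f} MHz -> {n} instructions per sample")
```

Note the `round()` rather than integer division: the two periods are exact multiples here, but floating-point representation can leave the ratio a hair off an integer.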

As this example demonstrates, the faster the processor executes instructions, the more processing we can do on each sample. If it were that easy, we could just choose the highest processor speed available and have plenty of processing margin. Unfortunately, it is not as easy as this. Many other factors, including cost, accuracy, and power limitations, must be considered. Embedded systems have many constraints such as these, as well as size and weight (important for portable devices). For example, how do we know how fast we should sample the input analog signal to represent it accurately in the digital domain? If we do not sample fast enough, the information we obtain will not be representative of the true signal. If we sample too fast, we may be “over-designing” the system and overly constraining ourselves.
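The danger of sampling too slowly can be seen numerically. The Nyquist criterion says we must sample at more than twice the highest frequency present; otherwise higher frequencies alias onto lower ones. In the sketch below (the specific frequencies are my own illustrative choices), a 5 kHz sine sampled at only 8 kHz produces exactly the same sample values as a sign-flipped 3 kHz sine, so the two signals are indistinguishable after sampling:

```python
import math

fs = 8000.0              # sampling frequency (Hz); fs/2 = 4000 Hz
f_high = 5000.0          # above fs/2, so this tone will alias
f_alias = fs - f_high    # 3000 Hz: the frequency it masquerades as

for n in range(8):       # compare a few consecutive samples
    t = n / fs
    s_high = math.sin(2 * math.pi * f_high * t)
    s_alias = -math.sin(2 * math.pi * f_alias * t)   # note the sign flip
    assert abs(s_high - s_alias) < 1e-9  # identical sample values

print("5 kHz sampled at 8 kHz is indistinguishable from 3 kHz")
```

Algebraically, $\sin(2\pi \cdot 5000 \cdot n/8000) = \sin(2\pi n - 2\pi \cdot 3000 \cdot n/8000) = -\sin(2\pi \cdot 3000 \cdot n/8000)$, which is why the assertion holds sample for sample.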

Digital to Analog Conversion:

In many applications, a signal must be sent back out to the real world after being processed, enhanced, and/or transformed inside the DSP. Digital-to-analog conversion (DAC) is a process in which signals having a few (usually two) defined levels or states (digital) are converted into signals having a very large number of states (analog).

Both the ADC and DAC are of significance in many applications of digital signal processing. The fidelity of an analog signal can often be improved by converting the analog input to digital form using an ADC, clarifying or enhancing the digital signal, and then converting the enhanced digital samples back to analog form using a DAC. (A constant digital output level produces a DC output voltage.)
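The digital-to-analog direction can be sketched the same way: each unsigned code selects one of $2^b$ discrete output levels. The function name and the assumed 0-to-$V_{ref}$ output range below are illustrative, not taken from any specific converter:

```python
def dac_output(code, bits=3, v_ref=1.0):
    """Convert an unsigned digital code to an analog output level.

    A b-bit DAC produces one of 2**bits discrete voltages between
    0 and v_ref; holding the code constant therefore yields a steady
    DC output voltage, as noted in the text.
    """
    max_code = 2 ** bits - 1
    if not 0 <= code <= max_code:
        raise ValueError(f"code must be in 0..{max_code}")
    return v_ref * code / max_code

# A constant digital input produces a constant DC level:
print(dac_output(0))   # 0.0 volts
print(dac_output(7))   # 1.0 volts (full scale for 3 bits)
```

A real DAC follows this staircase output with an analog reconstruction filter to smooth the steps between successive samples.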

More later,

Nalin Pithwa

## The Algorithmic CEO


Fortune magazine online, by Ram Charan, Jan 22, 2015.

Get ready for the most sweeping business change since the Industrial Revolution.

The single greatest instrument of change in today’s business world, and the one that is creating major uncertainties for an ever-growing universe of companies, is the advancement of mathematical algorithms and their related sophisticated software. Never before has so much artificial mental power been available to so many—power to deconstruct and predict patterns and changes in everything from consumer behavior to the maintenance requirements and operating lifetimes of industrial machinery. In combination with other technological factors—including broadband mobility, sensors, and vastly increased data-crunching capacity—algorithms are dramatically changing both the structure of the global economy and the nature of business.

Though still in its infancy, the use of algorithms has already become an engine of creative destruction in the business world, fracturing time-tested business models and implementing dazzling new ones. The effects are most visible so far in retailing, creating new and highly interactive relationships between businesses and their customers, and making it possible for giant corporations to deal with customers as individuals. At Macy’s, for instance, algorithmic technology is helping fuse the online and the in-store experience, enabling a shopper to compare clothes online, try something on at the store, order it online, and return it in person. Algorithms help determine whether to pull inventory from a fulfillment center or a nearby store, while location-based technologies let companies target offers to specific consumers while they are shopping in stores.

Now the revolution is entering a new and vastly expansive stage in which machines are communicating with other machines without human intervention, learning through artificial intelligence and making consistent decisions based on prescribed rules and processed through algorithms. This capability has rapidly expanded into potential connections between billions and billions of devices in the ever-expanding “Internet of things,” which integrates machines and devices with networked sensors and software, allowing the remote monitoring and adjustment of industrial machinery, for instance, or the management of supply chains.

Take, for example, General Electric, which has already turned itself into a math house. It has assembled a staff in Silicon Valley to provide customers with advanced analytics that do such things as predict when equipment maintenance is due. As of the middle of last year, this quintessential industrial company had about two-thirds of its \$250 billion backlog in orders from services based on its mathematical intellectual property.

Machine-to-machine communication and learning also help managers increase their capability and capacity and the speed of their decisions. The potential uses have barely been scratched, and the growth opportunities of this bend in the road can be immense for those who seize them.

The companies that have the new mathematical capabilities possess a huge advantage over those that don’t. Google, Facebook, and Amazon were created as mathematical corporations. Apple became a math corporation after Steve Jobs returned as CEO. This trend will accelerate. Legacy companies that can’t make the shift will be vulnerable to digitally minded competitors.

One of the biggest changes the algorithmic approach brings for both businesses and consumers is a rich new level of interactivity. The customer experience for many legacy companies is often secondhand or thirdhand. A company’s offerings are, for example, bought by distributor X, which in turn sells to retailer Y, which sells to an individual—so the actual user is not the purchaser. In today’s online math houses, by contrast, actual users are increasingly interacting directly with the company—buying and giving feedback without any intermediaries. The companies can track and even predict consumer preferences in real time and adjust strategies and offerings on the run to meet changing demands, which gives consumers leverage they never had before. The data accumulated from these interactions can be used for a variety of purposes. A company can map out in extreme detail all touch points of a user or buyer, gather information at each touch point, and convert it to a math engine from which managerial decisions can be made about resource allocation, product modification, innovation, and/or new product development. The data can also be used as a diagnostic tool—for example, it can reveal signals and seeds of potential external change and help identify uncertainties and new opportunities. It can point to anomalies from past trends and whether they are becoming a pattern, and help spot new needs or trends that are emerging and could make a business obsolete.

Indeed, the math house is shaping up as a new stage in the evolution of relations between businesses and consumers. The first stage, before the Industrial Revolution, was one-to-one transactions between artisans and their customers. Then came the era of mass production and mass markets, followed by the segmenting of markets and semi-customization of the buying experience. With companies such as Amazon able to collect and control information on the entire experience of a customer, the math house now can focus on each customer as an individual. In a manner of speaking, we are evolving back to the artisan model, where a market “segment” comprises one individual.

The ability to connect the corporation to the customer experience and touch points in real time has deep implications for the organization of the future. It speeds decision-making and allows leaders to flatten the organization, in some cases cutting organizational layers by half. A large proportion of traditional middle-management jobs (managers managing managers) will disappear, while the content of those jobs that remain will radically alter. The company’s overhead will be reduced by an order of magnitude. In addition, performance metrics will be totally redesigned and transparent, enhancing collaboration in a corporation—or its ecosystems—across silos, geographies, time zones, and cultures.

To some degree, every company will have to become a math house. This will require more than hiring new kinds of expertise and grafting new skills onto the existing organization. Many companies will need to substantially change the way they are organized, managed, and led. Every organization will have to make use of algorithms in its decision-making. The use of algorithms will have to become as much a part of tomorrow’s management vocabulary as, say, profit margins and the supply chain are today. And every member of the executive team will need to understand his or her role in growing the business.

Ram Charan is a veteran adviser to many Fortune 500 companies and co-author of the bestselling book *Execution*.