The Algorithmic CEO
Fortune magazine online
Jan 22 2015.
Get ready for the most sweeping business change since the Industrial Revolution.
The single greatest instrument of change in today’s business world, and the one that is creating major uncertainties for an ever-growing universe of companies, is the advancement of mathematical algorithms and their related sophisticated software. Never before has so much artificial mental power been available to so many—power to deconstruct and predict patterns and changes in everything from consumer behavior to the maintenance requirements and operating lifetimes of industrial machinery. In combination with other technological factors—including broadband mobility, sensors, and vastly increased data-crunching capacity—algorithms are dramatically changing both the structure of the global economy and the nature of business.
Though still in its infancy, the use of algorithms has already become an engine of creative destruction in the business world, fracturing time-tested business models and implementing dazzling new ones. The effects are most visible so far in retailing, creating new and highly interactive relationships between businesses and their customers, and making it possible for giant corporations to deal with customers as individuals. At Macy’s, for instance, algorithmic technology is helping fuse the online and the in-store experience, enabling a shopper to compare clothes online, try something on at the store, order it online, and return it in person. Algorithms help determine whether to pull inventory from a fulfillment center or a nearby store, while location-based technologies let companies target offers to specific consumers while they are shopping in stores.
Now the revolution is entering a new and vastly expansive stage in which machines are communicating with other machines without human intervention, learning through artificial intelligence and making consistent decisions based on prescribed rules and processed through algorithms. This capability has rapidly expanded into potential connections between billions and billions of devices in the ever-expanding “Internet of things,” which integrates machines and devices with networked sensors and software, allowing the remote monitoring and adjustment of industrial machinery, for instance, or the management of supply chains.
Take, for example, General Electric, which has already turned itself into a math house. It has assembled a staff in Silicon Valley to provide customers with advanced analytics that do such things as predict when equipment maintenance is due. As of the middle of last year, this quintessential industrial company had about two-thirds of its $250 billion backlog in orders from services based on its mathematical intellectual property.
Machine-to-machine communication and learning also help managers increase their capability and capacity and the speed of their decisions. The potential uses have barely been scratched, and the growth opportunities of this bend in the road can be immense for those who seize them.
The companies that have the new mathematical capabilities possess a huge advantage over those that don’t. Google, Facebook, and Amazon were created as mathematical corporations. Apple became a math corporation after Steve Jobs returned as CEO. This trend will accelerate. Legacy companies that can’t make the shift will be vulnerable to digitally minded competitors.
One of the biggest changes the algorithmic approach brings for both businesses and consumers is a rich new level of interactivity. The customer experience for many legacy companies is often secondhand or thirdhand. A company’s offerings are, for example, bought by distributor X, which in turn sells to retailer Y, which sells to an individual—so the actual user is not the purchaser. In today’s online math houses, by contrast, actual users are increasingly interacting directly with the company—buying and giving feedback without any intermediaries. The companies can track and even predict consumer preferences in real time and adjust strategies and offerings on the run to meet changing demands, which gives consumers leverage they never had before. The data accumulated from these interactions can be used for a variety of purposes. A company can map out in extreme detail all touch points of a user or buyer, gather information at each touch point, and convert it to a math engine from which managerial decisions can be made about resource allocation, product modification, innovation, and/or new product development. The data can also be used as a diagnostic tool—for example, it can reveal signals and seeds of potential external change and help identify uncertainties and new opportunities. It can point to anomalies from past trends and whether they are becoming a pattern, and help spot new needs or trends that are emerging and could make a business obsolete.
Indeed, the math house is shaping up as a new stage in the evolution of relations between businesses and consumers. The first stage, before the Industrial Revolution, was one-to-one transactions between artisans and their customers. Then came the era of mass production and mass markets, followed by the segmenting of markets and semi-customization of the buying experience. With companies such as Amazon able to collect and control information on the entire experience of a customer, the math house now can focus on each customer as an individual. In a manner of speaking, we are evolving back to the artisan model, where a market “segment” comprises one individual.
The ability to connect the corporation to the customer experience and touch points in real time has deep implications for the organization of the future. It speeds decision-making and allows leaders to flatten the organization, in some cases cutting organizational layers by half. A large proportion of traditional middle-management jobs (managers managing managers) will disappear, while the content of those jobs that remain will radically alter. The company’s overhead will be reduced by an order of magnitude. In addition, performance metrics will be totally redesigned and transparent, enhancing collaboration in a corporation—or its ecosystems—across silos, geographies, time zones, and cultures.
To some degree, every company will have to become a math house. This will require more than hiring new kinds of expertise and grafting new skills onto the existing organization. Many companies will need to substantially change the way they are organized, managed, and led. Every organization will have to make use of algorithms in its decision-making. The use of algorithms will have to become as much a part of tomorrow’s management vocabulary as, say, profit margins and the supply chain are today. And every member of the executive team will need to understand his or her role in growing the business.
Ram Charan is a veteran adviser to many Fortune 500 companies and co-author of the bestselling book, Execution.
American Mathematical Society: please click the link for an excellent article on how mathematics can be used in chemistry. It is said that a chemist can synthesize anything from a cough syrup to a computer chip. How? Why? Find out for yourself in the related link.
Merry Xmas !!
$1/t \;\leftrightarrow\; -i\pi\,\mathrm{sgn}(f)$

This pair and its dual are shown in Figure 1. Since $1/t$ has a pole at the origin, its Fourier transform integral diverges there, and $1/t$ has a transform only in a limiting sense. (It can be evaluated using contour integration.) Likewise, the Fourier integral of $\mathrm{sgn}(x)$, in computing either the forward or inverse transform, does not exist in the conventional sense, because $\mathrm{sgn}(x)$ is not absolutely integrable. The transform of $\mathrm{sgn}(x)$ can instead be defined by considering a sequence of transformable functions that approaches $\mathrm{sgn}(x)$ in the limit. We do not let these mathematical inconveniences deter us any more than they did in our previous discussion of $\delta$ functions and sinusoids, for the pair $1/t \leftrightarrow -i\pi\,\mathrm{sgn}(f)$ has some intriguing properties.
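This limiting-sequence definition can be checked numerically. The sketch below (the damping constant, grid, and tolerance are illustrative choices, not from the text) transforms $\mathrm{sgn}(t)\,e^{-a|t|}$, which is absolutely integrable for $a > 0$, and compares the result against the limiting transform $1/(i\pi f)$:

```python
import numpy as np

# Approximate sgn(t) by the transformable function sgn(t)*exp(-a|t|)
# and compare its numerical Fourier transform with the limiting
# result  sgn(t) <-> 1/(i*pi*f)  as a -> 0.
t = np.linspace(-200, 200, 400001)
dt = t[1] - t[0]

def ft(x, f):
    # numerical Fourier integral X(f) = integral of x(t) e^{-i 2 pi f t} dt
    return np.sum(x * np.exp(-2j * np.pi * f * t)) * dt

a = 0.05                                  # small damping constant
x = np.sign(t) * np.exp(-a * np.abs(t))   # transformable approximation to sgn
for f in (0.25, 0.5, 1.0):
    assert abs(ft(x, f) - 1 / (1j * np.pi * f)) < 1e-2
```

The closed-form transform of the damped version is $-i\,4\pi f/(a^2 + 4\pi^2 f^2)$, which visibly tends to $1/(i\pi f)$ as $a \to 0$.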
Since $1/t$ is real and odd, its Fourier transform is odd and purely imaginary. But, more interestingly, its magnitude spectrum is obviously constant. (Could there be a delta function lurking nearby?) The interest in this transform pair comes from convolving a function with $1/(\pi t)$ in the time domain. This convolution integral, called the Hilbert transform, is as follows:

$\hat{x}(t) = \frac{1}{\pi}\,\mathrm{P.V.}\!\int_{-\infty}^{\infty} \frac{x(\tau)}{t - \tau}\, d\tau$

This transform arises in many applications in digital communications and mathematical physics. The Cauchy principal value (P.V.), which is a kind of average familiar to those knowledgeable in contour integration, is to be used over the singularity of the integrand at $\tau = t$. This mathematical inconvenience is avoided in the frequency domain, where we can easily visualize the effect of the Hilbert transform.
Multiplication by $-i\,\mathrm{sgn}(f)$ in the frequency domain produces a $90°$ phase shift at all frequencies. The phase of $X(f)$ is retarded by a constant $\pi/2$ for all positive frequencies and advanced by $\pi/2$ for all negative frequencies. The magnitude spectrum of $X(f)$ is unchanged, since the magnitude spectrum of $-i\,\mathrm{sgn}(f)$ is flat. The Hilbert transform operation in the frequency domain is summarized in Fig. 2: the Fourier transform of the given function has its phase shifted by $\pi/2$, in opposite directions for positive and negative frequencies; then the inverse Fourier transform produces the time-domain result.
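The frequency-domain recipe is easy to try directly. This sketch (using the common $-i\,\mathrm{sgn}(f)$ multiplier convention; the signal length and test tone are arbitrary choices) Hilbert-transforms a cosine and recovers the expected sine:

```python
import numpy as np

# Hilbert transform implemented in the frequency domain: multiply the
# spectrum by -i*sgn(f), shifting every component's phase by 90 degrees.
# A cosine should come back as a sine.
N = 1024
n = np.arange(N)
x = np.cos(2 * np.pi * 8 * n / N)      # 8 whole cycles across the record

X = np.fft.fft(x)
f = np.fft.fftfreq(N)                  # signed frequency of each bin
H = -1j * np.sign(f)                   # -i for f > 0, +i for f < 0, 0 at DC
xh = np.fft.ifft(H * X).real           # inverse transform of shifted spectrum

assert np.allclose(xh, np.sin(2 * np.pi * 8 * n / N), atol=1e-9)
```

The magnitude of each frequency bin is untouched; only the phase moves, which is exactly the operation Fig. 2 summarizes.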
An exact $90°$ phase shift [including its $\mathrm{sgn}(f)$ dependence] occurs in several different instances in wave propagation: the reflection of electromagnetic waves from metals at a certain angle of incidence involves an exact $90°$ phase shift independent of frequency; special teleseismic ray paths produce the same type of phase shift; and propagation from point sources for all types of waves includes a $90°$ phase shift for all frequencies in the far field.
More about Hilbert Transforms later…
Let me digress a bit from the light theory of DSP, which we have been discussing till now, to offer a viewpoint on what a DSP engineer is. I reproduce below an opinion which, to some extent, I also hold.
Comment: What is a DSP Engineer?
Feb 12 2008
By Shiv Balakrishnan, Forward Concepts
Once upon a time, DSP stood for Digital Signal Processing. In the late 1980’s, however, our beloved acronym began to move inexorably towards meaning Digital Signal Processors. By then of course, TI and other chip companies had spent many millions of dollars in creating a new market segment, so who’s to grudge them hijacking the very end of an old piece of jargon?
Some of us nit-picky curmudgeons, that’s who. My own anecdotal experience is that the field of Digital Signal Processing has lost out in subtle ways. Here’s the background:
In the 1960’s, DSP art developed around two broad areas: digital filtering and FFTs. When someone said they did DSP, you knew what they did. You also knew where they congregated: The International Conference on Acoustics, Speech, and Signal Processing (ICASSP) was the main international watering hole. The ICASSP acronym gives away the dominance of “Acoustics” and “Speech” at the time.
You even knew what DSP engineers were reading. Quality textbooks from Oppenheim, Schafer, Rabiner, Gold, Rader, and a host of others joined earlier guideposts like Blackman and Tukey’s “Measurement of Power Spectra.” Other sources of knowledge included the classic Cooley-Tukey FFT paper, Widrow and Hoff’s LMS work, and a collection of seminal work from Bell Labs—the latter including early vocoder work from Dudley and speech breakthroughs from Flanagan’s group. There was quantization analysis (Jackson and many others) and the elegant Remez Exchange-based FIR design work of Parks and McClellan.
Thanks to this training, DSP practitioners had a firm grasp of the signal processing chain. They worked with algorithms, hardware, and software. They understood the interplay between analog and digital signals, and the effects of noise and non-linearity in both domains. Most could recognize when an analog implementation was more efficient—an important talent, in my opinion.
Fast forward to 2008. Acoustics and speech still matter, but there is much more emphasis on multimedia and communications. The hardware options are replete with DSPs, microprocessors, FPGAs, IP cores, ASICs, ASSPs, and whatnot, accompanied by a wealth of tools: compilers, debuggers, high-level design tools and more. So what’s the meaning of “DSP” now? What’s our new-millennium DSP engineer doing?
Here are my observations (and I’d love to have them challenged): “DSP” means Processors a la TI, ADI et al. (Note to self: get over it!) A DSP engineer typically does one of the following:
- Writes C/C++ code to implement a function (e.g., a Viterbi decoder) on a DSP or core. The engineer then optimizes the code for performance/size/power, sometimes even having to (shudder) write assembly code to wring out the last iota of performance.
- Architects a hardware block (e.g., Viterbi decoder), implements it in VHDL/Verilog, verifies it, and optimizes it. The engineer checks the RTL against a high-level model (e.g., in MATLAB) as well as the correctness at the logic level with black box testing, test vectors and such.
- Architects the system and designs algorithms. These engineers typically use MATLAB and C, and have domain expertise. They usually do not deal with assembly code or Verilog.
Now let me stick my neck out and make some wild generalizations:
Engineers in categories 1 and 2 never use their basic DSP theory after college—not that I can blame them. They can’t tell you why and when an IIR filter might be unstable, or who needs any transform other than a DCT or radix-2 FFT. As far as quantization noise and round off effects are concerned, fuggedaboutit. The systems guy took care of that.
The #3 folks are closest to the DSP engineers of a couple of decades ago, though they tend to be “Architects” or “System Designers” as opposed to DSP programmers. They are also the ones closest to understanding the complete system, although they are limited by increasing system complexity. Thus one has MPEG engineers, and WiMAX engineers, and Graphics engineers, etc. These systems engineers normally don’t get too close to the hardware in large projects.
Here’s a scenario: the product is a complex SoC or system; the chip comes back and it’s bring-up time. I’m guessing that
- 70% of the above folks will not be dying to get in the lab
- 80% will want nothing to do with the hardware
- 90% will want nothing to do with the analog and data converters
- 100% will want nothing to do with the RF
Bottom line: complexity and the concomitant specialization have forced most engineers on any large project to limit themselves to a small part of the product or solution. This is not unique to DSP, of course. So who are today’s DSP engineers, and in which direction are they going?
The answer seems to be: C / MATLAB programmer with some application knowledge. Software skills (compilers, tools, RTOS, IDEs, etc.) help in any job and DSP work is no exception. Knowledge of Z-transforms, FFTs, etc. is optional—there are cookbooks for these things anyway.
So why do I say that the discipline of DSP has lost out? My contention is that if engineers are not fluent in the basics of DSP, perhaps they should not be called DSP engineers. Since it is a bit late for that, maybe we need a better term for those who are concerned with how the bits arrived at the processor and where they are headed.
On the systems issue, this curmudgeon feels lucky to have been able to work on some pretty complex products. I am surprised by one thing: many DSP engineers I have encountered in the last ~15 years in Silicon Valley are not interested (!) in learning the other stuff even if they have the chance. Maybe they’re doing the next big thing that will make today’s DSP engineers seem as quaintly old-fashioned as I am today.
Just one comment before we delve further: the trick in DSP, digital control, and digital communications is to treat all possible practical signals via a few elementary or test signals. The $\delta$ function is an elementary signal; so also are the complex exponentials or sinusoids, the unit step function, the boxcar, the sinc pulse, and the signum function.
We continue our light study with transforms of sinusoids.
The Fourier transform of sines and cosines plays a central role in the study of linear time-invariant systems for three quite related reasons: (1) they are eigenfunctions of LTI systems; (2) the spectrum consists of the eigenvalues associated with these eigenfunctions; and (3) the weighting factor (sometimes called the kernel) in the Fourier transform is a complex sinusoid. Clearly, we want to include sinusoids in our Fourier transform repertoire.
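The eigenfunction property is worth seeing concretely. In this sketch (the FIR coefficients and frequency are arbitrary choices for illustration), a complex sinusoid passed through an LTI filter comes out as the same sinusoid scaled by a single complex number, the eigenvalue $H(e^{i\omega_0})$:

```python
import numpy as np

# Complex sinusoids are eigenfunctions of LTI systems: filtering
# e^{i w0 n} returns the same sinusoid scaled by the complex
# frequency response H(e^{i w0}) -- the associated eigenvalue.
h = np.array([0.25, 0.5, 0.25])        # an arbitrary FIR impulse response
w0 = 2 * np.pi / 10                    # test frequency (radians/sample)
n = np.arange(100)
x = np.exp(1j * w0 * n)                # the eigenfunction

y = np.convolve(x, h)[:len(n)]         # filter output
H_w0 = np.sum(h * np.exp(-1j * w0 * np.arange(len(h))))  # eigenvalue

# once the start-up transient (len(h)-1 samples) has passed,
# the output is exactly the input times the eigenvalue
assert np.allclose(y[len(h) - 1:], H_w0 * x[len(h) - 1:])
```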
We have already seen that the Fourier transform of the complex exponential is a $\delta$ function located at the frequency of the complex sinusoid. That is,

$e^{i 2\pi f_0 t} \leftrightarrow \delta(f - f_0)$ and $e^{-i 2\pi f_0 t} \leftrightarrow \delta(f + f_0)$

The Fourier transforms of the cosine and the sine follow directly from these two equations. Adding the two equations gives

$\cos(2\pi f_0 t) \leftrightarrow \tfrac{1}{2}\left[\delta(f - f_0) + \delta(f + f_0)\right]$ . . . equation I

while subtracting them gives

$\sin(2\pi f_0 t) \leftrightarrow \tfrac{1}{2i}\left[\delta(f - f_0) - \delta(f + f_0)\right]$ . . . equation II
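The two-impulse spectrum of the cosine can be checked with a DFT. In this sketch (record length and bin index are arbitrary choices), a cosine with a whole number of cycles puts all of its energy into exactly two bins, at $+f_0$ and $-f_0$, each with weight one half:

```python
import numpy as np

# cos(2 pi f0 t) <-> (1/2)[delta(f - f0) + delta(f + f0)]:
# a cosine with an integer number of cycles per record has a DFT
# that is zero everywhere except bins +k and -k, each equal to 1/2
# (after normalizing by N).
N, k = 256, 5
n = np.arange(N)
X = np.fft.fft(np.cos(2 * np.pi * k * n / N)) / N

assert np.isclose(X[k].real, 0.5) and np.isclose(X[-k].real, 0.5)
mask = np.ones(N, dtype=bool)
mask[[k, N - k]] = False               # everything except the two impulses
assert np.allclose(X[mask], 0, atol=1e-12)
```

Repeating the experiment with a sine shows the same two bins, but purely imaginary and of opposite sign, as equation II requires.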
These transforms are just the combinations of $\delta$ functions that are required by the basic symmetry properties of the Fourier transform; because the cosine is a real and even function, its transform is real and symmetric. The sine is real and odd; therefore, it has an odd, imaginary transform.
Alternatively, we could derive the sine and cosine transforms by using the phase-shift theorem: shifting a function along the axis in one domain introduces a complex sinusoid in the other domain. For example, if we want to generate the duals of the pairs in equations I and II, we apply the phase-shift theorem to the $\delta$ function and write

$\delta(t - t_0) \leftrightarrow e^{-i 2\pi f t_0}$ and $\delta(t + t_0) \leftrightarrow e^{i 2\pi f t_0}$

Adding and subtracting these two equations gives

$\tfrac{1}{2}\left[\delta(t - t_0) + \delta(t + t_0)\right] \leftrightarrow \cos(2\pi f t_0)$ . . . equation III

$\tfrac{1}{2}\left[\delta(t - t_0) - \delta(t + t_0)\right] \leftrightarrow -i \sin(2\pi f t_0)$ . . . equation IV
The sine and the cosine transforms and their duals are shown in the attached figure 1.
Thus, the Fourier transforms of sines and cosines can be viewed as the result of forcing certain symmetries onto the $\delta$ function's transform after it is shifted along the frequency axis: shifting the $\delta$ function off the origin in the frequency domain and then requiring a symmetric spectrum results in equation I, while an antisymmetric spectrum leads to equation II. Analogous statements apply to equations III and IV for $\delta$ function shifts along the time axis.
The $\delta$ functions in equations I and II make for easy convolutions, leading to an important observation. For example, convolving equation I in the frequency domain with a given function $X(f)$, which is equivalent to multiplying by $\cos(2\pi f_0 t)$ in the time domain, gives

$x(t)\cos(2\pi f_0 t) \leftrightarrow \tfrac{1}{2}\left[X(f - f_0) + X(f + f_0)\right]$

This equation and its sine-wave counterpart are sometimes referred to as the modulation theorem. They show that modulating the amplitude of a function by a sinusoid shifts the function’s spectrum to higher frequencies by an amount equal to the modulating frequency. This effect is well understood in television and radio work, where a sinusoidal carrier is modulated by the program signal. Obtaining the transforms of sines, cosines, and related functions has been a good example of exploiting the basic Fourier transform properties. Our next example is not so obliging.
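The modulation theorem is also easy to verify numerically. In this sketch (the bin indices 3 and 40 are arbitrary illustrative choices), a baseband tone multiplied by a carrier reappears at the sum and difference frequencies, each at half the original amplitude:

```python
import numpy as np

# Modulation theorem: multiplying x(t) by cos(2 pi f_c t) splits the
# spectrum and shifts the halves to +/- f_c. A tone at bin 3 times a
# carrier at bin 40 lands at bins 40 +/- 3 (and their negatives),
# each with a quarter of the full amplitude.
N = 256
n = np.arange(N)
x = np.cos(2 * np.pi * 3 * n / N)          # baseband signal: bins +/-3
carrier = np.cos(2 * np.pi * 40 * n / N)   # carrier: bins +/-40

X = np.fft.fft(x * carrier) / N
for bin_ in (37, 43):                      # difference and sum frequencies
    assert np.isclose(X[bin_].real, 0.25)
```

This is just the trigonometric identity $\cos a \cos b = \tfrac{1}{2}[\cos(a+b) + \cos(a-b)]$ read in the frequency domain.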