Sensing and imaging systems are under increasing pressure to accommodate ever larger and higher-dimensional data sets; ever faster capture, sampling, and processing rates; ever lower power consumption; communication over ever more difficult channels; and radically new sensing modalities. The foundation of today's digital data acquisition and processing systems is the Shannon/Nyquist sampling theorem, which asserts that to avoid losing information when digitizing a signal or image, one must sample at least two times faster than the signal's bandwidth, at the so-called Nyquist rate. Unfortunately, the physical limitations of current sensing systems, combined with inherently high Nyquist rates, impose a performance brick wall for a large class of important and emerging applications.
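To make the Nyquist criterion concrete, the following NumPy sketch shows what goes wrong when it is violated: a tone sampled below twice its frequency folds (aliases) to a lower frequency and becomes indistinguishable from it. The specific tone frequency, sampling rate, and sample count are illustrative choices, not values from the talk.

```python
import numpy as np

B = 7.0    # tone frequency in Hz; the Nyquist rate would be 2*B = 14 Hz
fs = 10.0  # deliberately sub-Nyquist sampling rate (10 Hz < 14 Hz)
N = 1000   # number of samples: 100 s of data, an integer number of cycles

t = np.arange(N) / fs
x = np.cos(2 * np.pi * B * t)  # uniformly sampled 7 Hz tone

# Undersampling folds the 7 Hz tone down to |B - fs| = 3 Hz:
# the sampled spectrum peaks at 3 Hz, not 7 Hz.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, d=1 / fs)
peak = freqs[np.argmax(spectrum)]  # 3.0 Hz, the alias of the 7 Hz tone
```

Sampled this way, the 7 Hz tone is literally identical, sample for sample, to a 3 Hz tone, which is why conventional systems must sample at or above the Nyquist rate.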
This talk will overview the foundations of, and recent progress on, compressive signal processing, a new approach to data acquisition and processing in which analog signals are digitized not via uniform sampling but via measurements using more general, even random, test functions. In stark contrast with conventional wisdom, the new theory asserts that one can combine "sub-Nyquist-rate sampling" with digital computational power for efficient and accurate signal acquisition when the signal has a sparse structure. The implications of compressive sensing are promising for many applications and enable the design of new kinds of communication systems, cameras, microscopes, and pattern recognition systems. Special emphasis will be placed on the pros and cons of the compressive sensing technique.
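The core claim above can be sketched numerically: a sparse signal measured with far fewer random projections than its ambient dimension can still be recovered exactly by a greedy algorithm. This sketch uses Orthogonal Matching Pursuit, one standard recovery method (the talk does not specify a particular algorithm), and the dimensions and sparsity level are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column of A most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the signal on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8  # ambient dimension, measurements (m << n), sparsity

# A k-sparse signal: random support, random Gaussian amplitudes.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Random Gaussian measurement matrix ("random test functions").
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x  # only m = 80 measurements of a length-256 signal

x_hat = omp(A, y, k)
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)  # near zero on success
```

With Gaussian measurements and roughly m on the order of k log(n) samples, recovery succeeds with high probability, which is the sense in which sub-Nyquist sampling plus digital computation replaces uniform Nyquist-rate sampling for sparse signals.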