Theory and Practice of Randomized Algorithms for Ultra-Large-Scale Signal Processing
The dramatic increase in our ability to collect massive numbers of measurements from distributed and disparate high-resolution sensors has been instrumental in enhancing our understanding of many physical phenomena. Signal processing (SP) has been the primary driving force in extracting this knowledge of the unseen from observed measurements. In the last decade, however, the exponential increase in observations has outpaced our computing ability to process, understand, and organize this massive but useful information. The significance of the problem has attracted tremendous interest, and many works aim to reduce the computational and memory bottlenecks of signal processing algorithms. Randomized Numerical Linear Algebra (RandNLA) has proven to be a marriage of linear algebra and probability that provides a foundation for next-generation matrix computation in large-scale machine learning, data analysis, scientific computing, signal processing, and related applications. This research is motivated by two complementary long-term goals: first, to extend the foundations of RandNLA by tailoring randomization directly toward the downstream end goals of the underlying (signal processing, data analysis, etc.) problem, rather than toward intermediate matrix approximation goals; and second, to use the statistical and optimization insights obtained from these downstream applications to transform and extend the foundations of RandNLA. For both of these related long-term goals, additional structure available in the problem is taken into account, often including implicit and/or explicit statistical effects, real-time and streaming issues, and implicit and/or explicit sparsity considerations.
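To make the RandNLA idea concrete, the following minimal Python/NumPy sketch illustrates one standard primitive, sketch-and-solve least squares, in which a random sketching matrix compresses a tall overdetermined system so that a much smaller problem yields a near-optimal solution. This is an illustrative example only, not a method proposed by this project; the problem sizes, the Gaussian sketch, and the sketch dimension are all assumptions made for the demonstration.

```python
import numpy as np

# Illustrative sketch-and-solve least squares (a standard RandNLA primitive).
# Goal: approximate argmin_x ||A x - b||_2 by solving the much smaller
# sketched problem argmin_x ||S A x - S b||_2.
rng = np.random.default_rng(0)

n, d = 20_000, 50            # tall, skinny overdetermined system (assumed sizes)
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

m = 4 * d                    # sketch size (assumed; typically a small multiple of d)
S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sketching matrix

x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)   # small sketched problem
x_exact,  *_ = np.linalg.lstsq(A, b, rcond=None)           # full problem, for comparison

# Relative gap between the sketched and exact residual norms.
r_exact = np.linalg.norm(A @ x_exact - b)
r_sketch = np.linalg.norm(A @ x_sketch - b)
print("relative residual gap:", (r_sketch - r_exact) / r_exact)
```

In this toy example the sketched solve works with an m-by-d matrix instead of the full n-by-d matrix, which is the source of the computational savings; tailoring such sketches to downstream objectives, rather than to generic matrix approximation error, is the kind of question described above.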
Funding provided by NSF.