Real-Time Data Reduction Codesign at the Extreme Edge for Science
This project focuses on intelligent, ML-based data reduction and processing as close as possible to the data source. Per-sensor compression and efficient aggregation of information, while preserving scientific fidelity, can dramatically reduce downstream data rates and reshape how experiments are designed and operated. The research team is concentrating on powerful, specialized compute hardware at the extreme edge, such as FPGAs, ASICs, and systems-on-chip, which typically form the initial processing layers of many experiments. The three main thrusts are to: (1) develop performant and reliable AI algorithms for science at the edge; (2) develop codesign tools that build efficient hardware implementations of those algorithms; and (3) enable rapid exploration by domain scientists and system designers through an accessible tool flow.
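To make thrusts (1) and (2) concrete, the sketch below trains a tiny per-sensor autoencoder and then converts its encoder to an FPGA project with hls4ml, used here as one representative open-source codesign tool; the text above does not prescribe a specific flow, and the layer sizes, channel counts, FPGA part, and training data are all illustrative assumptions.

```python
# Minimal sketch, assuming a 64-channel sensor and an 8-value latent code
# (an 8x per-sensor reduction). Only the encoder would live at the edge.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_CHANNELS = 64  # assumed readout channels per sensor
LATENT = 8       # assumed compressed representation size

# Encoder: the edge-resident half of the autoencoder.
encoder = keras.Sequential([
    layers.Input(shape=(N_CHANNELS,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(LATENT, activation="linear", name="latent"),
], name="edge_encoder")

# Decoder: reconstructs the signal downstream, off the detector.
decoder = keras.Sequential([
    layers.Input(shape=(LATENT,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(N_CHANNELS, activation="linear"),
], name="offline_decoder")

autoencoder = keras.Model(encoder.input, decoder(encoder.output))
autoencoder.compile(optimizer="adam", loss="mse")

# Train on synthetic stand-in data; a real experiment would use sensor readouts.
x = np.random.rand(10000, N_CHANNELS).astype("float32")
autoencoder.fit(x, x, epochs=5, batch_size=256, verbose=0)

# Codesign step: translate the trained encoder into an HLS project targeting
# an FPGA. The part number is a placeholder, not a project-specified target.
import hls4ml

config = hls4ml.utils.config_from_keras_model(encoder, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    encoder,
    hls_config=config,
    output_dir="edge_encoder_hls",
    part="xcvu9p-flga2104-2-e",  # placeholder FPGA part
)
hls_model.compile()              # builds a bit-accurate C emulation of the firmware
print(hls_model.predict(x[:4]))  # latent codes as the firmware would compute them
```

In a flow like this, the numerical agreement between `autoencoder`/`encoder` and `hls_model.predict` is what lets domain scientists iterate on the algorithm and the hardware implementation together, which is the point of an accessible codesign tool flow.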
These newly developed techniques will be applied to two complementary scientific exemplars: real-time trigger systems at the CERN Large Hadron Collider (LHC) and processing of data streams in transmission electron microscopy (TEM). The LHC exemplar presents a complex, geometry-constrained, progressive data-reduction flow, while the TEM exemplar requires fast feature extraction that generalizes across a multitude of samples and experiments.
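As a minimal illustration of the TEM exemplar's fast feature extraction, the sketch below reduces a raw frame to a short list of bright-spot positions and intensities, so only compact features, not full frames, leave the edge device; the frame size, threshold, and peak-finding window are illustrative assumptions rather than values from the project.

```python
# Minimal sketch: per-frame peak extraction as an edge data-reduction step.
import numpy as np

def extract_peaks(frame: np.ndarray, threshold: float, window: int = 5):
    """Return (row, col, intensity) for local maxima above threshold."""
    peaks = []
    half = window // 2
    rows, cols = frame.shape
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            v = frame[r, c]
            if v < threshold:
                continue
            patch = frame[r - half:r + half + 1, c - half:c + half + 1]
            if v == patch.max():  # brightest pixel in its local window
                peaks.append((r, c, float(v)))
    return peaks

# Synthetic 128x128 frame with a few bright spots as a stand-in for TEM data.
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.0, (128, 128))
for r, c in [(20, 30), (64, 64), (100, 90)]:
    frame[r, c] += 20.0

features = extract_peaks(frame, threshold=10.0)
print(features)  # ~3 feature tuples replace 16,384 pixels downstream
```

Even this naive version shows the reduction at stake: a frame of 16,384 pixel values collapses to a handful of feature tuples, and a production implementation would run an equivalent kernel in FPGA or ASIC logic at the sensor's frame rate.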