Audio and Multimedia
The Audio and Multimedia Group develops computational algorithms, systems, and methods for handling content composed of multiple types of data, such as news videos and social-network posts. Multimedia research focuses on the scientific problems that arise because different data sources are complementary, each capturing only partial information. The group takes a top-down viewpoint, studying how computer systems should be designed to integrate the different types of information in a document, including metadata such as geo-tags and the context in which content is presented, such as an author's social graph. This top-down approach is well suited to problems involving data whose scale and diversity challenge the current theory and practice of computer science.
Having evolved from the Speech Group, the Audio and Multimedia Group places a special focus on audio analysis. Audio content frequently complements visual content, as in videos, and its lower data rate often makes processing more tractable. Other work addresses related questions in crowdsourcing and the privacy implications of multimedia retrieval.
Dr. Gerald Friedland most recently led Audio and Multimedia research at ICSI.