ICSI BEARS Open House
Thursday, February 13, 2014
12:45 - 5:00 p.m.
The ICSI annual open house, held in conjunction with the UC Berkeley EECS Department's Berkeley EECS Annual Research Symposium (BEARS), includes research demonstrations, talks, and posters, as well as opportunities to speak with our computer scientists about their work. A shuttle will be available to bring people from the conference to ICSI.
RSVP to [email protected] if you plan to attend the open house.
Schedule:
12:45 - 1:30 p.m. Posters and Demonstrations
1:30 - 2:15 p.m. Neural Networks for Speech Recognition: From Basic Structures to Piled Higher and Deeper by Nelson Morgan, Director of ICSI
2:15 - 3:00 p.m. Content, Actually: The Heart of Video Search by Gerald Friedland, Director of Audio and Multimedia Research at ICSI
3:00 - 5:00 p.m. Posters and Demonstrations
Poster and Demonstration Topics:
- Netalyzr for Android presented by Narseo Vallina-Rodriguez
- FrameNet and related natural language processing applications presented by the FrameNet team
- Ready or Not? An app that shows how location information can be taken from posts on Twitter and Instagram presented by Jaeyoung Choi
- Multimedia video content-based search presented by Benjamin Elizalde
- Digital destitution in California presented by Blanca Gordo and Bryan Morgan
- The Tao of ATWV: probing the mysteries of keyword search performance presented by Steven Wegmann
- High-speed network switching and virtualization presented by Luigi Rizzo
- Semantic video search by multi-level concept fusion presented by Xiao-Yong Wei
- Modeling high-resolution brain networks from MRI imaging presented by Wenson Hsieh and Leo Kam
- Overviews of all major research areas: AI, audio and multimedia, networking and security, speech, vision, and research initiatives
Talk Abstracts
1:30 - 2:15 p.m. Neural Networks for Speech Recognition: From Basic Structures to Piled Higher and Deeper by Nelson Morgan, Director of ICSI
Artificial neural networks have been applied to speech tasks for well over 50 years. In particular, multilayer perceptrons (MLPs) have been used as components in HMM-based systems for 25 years. This presentation will describe the long journey from early speech classification experiments with MLPs in the 1960s to the present-day implementations that are often called Deep Neural Networks. It will emphasize hybrid HMM/MLP approaches which have dominated the use of artificial neural networks for speech recognition since the late 1980s, but which have only recently gained mainstream adoption.
2:15 - 3:00 p.m. Content, Actually: The Heart of Video Search by Gerald Friedland, Director of Audio and Multimedia Research at ICSI
Current implementations of video search (e.g., those of Google and Yahoo!) rely entirely on metadata, such as the video's title, description, and keyword tags, rather than the actual content of the video, i.e., its visual, acoustic, and temporal information. In this talk, I will present ICSI's research toward developing a video search engine that analyzes the actual content of online videos to identify events and concepts that users might want to search for. The methods were tested in the 2013 TRECVID Multimedia Event Detection evaluation and, of all methods submitted, came the closest to meeting the 0Ex challenge, in which systems try to identify whether a consumer-produced video matches a keyword without having been trained on example videos associated with that keyword. This talk will describe our approach, our results, and the future directions we are exploring for having computers identify the events and concepts that are important to humans.