Signal Processing and Machine Learning for Attention
|Event Date:||October 24, 2019|
|School or Program:||Electrical and Computer Engineering|
Google Machine Hearing Research
Our devices work best when they understand what we are doing or trying to do. A large part of this problem is understanding what we are attending to. I’d like to talk about how we can do this in the visual domain (easy) and in the auditory domain (much harder and more interesting). The audio attention signal is buried in the brain, and recent EEG (as well as ECoG and MEG) work gives us insight into it. Algorithms based on linear methods work well, while non-linear approaches are still in development. This attention signal can then be used to improve the user interface for our assistants. In addition, the eyes provide useful information for understanding the auditory world: the rate of microsaccades tells us about auditory surprise, and what we look at suggests what we might say next. I’ll talk about using eye tracking to improve speech recognition, and about how we can use attention decoding to emphasize the most important audio signals and to gain insight into the cognitive load our users are experiencing. Long term, I’ll argue that listening effort is an important new metric for improving our interfaces.
BSEE, MSEE, and Ph.D., Purdue University. Dr. Malcolm Slaney is a research scientist in the AI Machine Hearing Group at Google. He is an Adjunct Professor at Stanford CCRMA, where he has led the Hearing Seminar for more than 20 years, and an Affiliate Faculty member in the Electrical Engineering Department at the University of Washington. He has served as an Associate Editor of IEEE Transactions on Audio, Speech and Signal Processing and of IEEE Multimedia Magazine. He has given successful tutorials at ICASSP 1996 and 2009 on “Applications of Psychoacoustics to Signal Processing,” on “Multimedia Information Retrieval” at SIGIR and ICASSP, on “Web-Scale Multimedia Data” at ACM Multimedia 2010, and on “Sketching Tools for Big Data Signal Processing” at ICASSP 2019. He is a coauthor, with A. C. Kak, of the book “Principles of Computerized Tomographic Imaging,” first published by IEEE Press and then republished by SIAM in its “Classics in Applied Mathematics” series; it has been in print for 31 years. He is coeditor, with Steven Greenberg, of the book “Computational Models of Auditory Function.” Before joining Google, Dr. Slaney worked at Bell Laboratory, Schlumberger Palo Alto Research, Apple Computer, Interval Research, IBM’s Almaden Research Center, Yahoo! Research, and Microsoft Research. For more than a decade, he has led the auditory group at the Telluride Neuromorphic Cognition Engineering Workshop. Dr. Slaney’s recent work is on understanding attention and general audio perception.
He is a Senior Member of the ACM and a Fellow of the IEEE.
Prof. Avi Kak, email@example.com
|Time:||October 24, 2019, 4:30–5:30 PM (Eastern)|
|Location:||EE 117|