Task 001/002 - Neuro-inspired Algorithms and Theory
|Event Date:||May 19, 2022|
|Time:||11:00 am (ET) / 8:00 am (PT)|
Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks
Abstract: Recent work suggests that feature constraints in the training datasets of deep neural networks (DNNs) drive robustness to adversarial noise (Ilyas et al., 2019). The representations learned by such adversarially robust networks have also been shown to be more human perceptually aligned than those of non-robust networks via image manipulations (Santurkar et al., 2019; Engstrom et al., 2019). Despite appearing closer to human visual perception, it is unclear whether the constraints in robust DNN representations match biological constraints found in human vision. Human vision seems to rely on texture-based/summary-statistic representations in the periphery, which have been shown to explain phenomena such as crowding (Balas et al., 2009) and performance on visual search tasks (Rosenholtz et al., 2012). To understand how adversarially robust optimizations/representations compare to human vision, we performed a psychophysics experiment using a metamer task similar to Freeman & Simoncelli (2011), Wallis et al. (2016), and Deza et al. (2019), in which we evaluated how well human observers could distinguish between images synthesized to match adversarially robust representations, non-robust representations, and a texture-synthesis model of peripheral vision (Texforms, à la Long et al., 2018). We found that the discriminability of robust-representation and texture-model images decreased to near-chance performance as stimuli were presented farther in the periphery. Moreover, performance on robust- and texture-model images showed similar trends within participants, while performance on non-robust representations changed minimally across the visual field. Together, these results suggest that (1) adversarially robust representations capture peripheral computation better than non-robust representations, and (2) robust representations capture peripheral computation similarly to current state-of-the-art texture models of peripheral vision.
More broadly, our findings support the idea that localized texture summary statistic representations may drive human invariance to adversarial perturbations and that the incorporation of such representations in DNNs could give rise to useful properties like adversarial robustness.
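As a rough illustration of the metamer idea described above — synthesizing a new image whose features under a fixed model match those of a reference image even though the pixels themselves differ — here is a minimal, hypothetical sketch. A random linear map stands in for a DNN representation, and all names, dimensions, and parameters are illustrative assumptions, not the models or stimuli used in the study:

```python
import numpy as np

# Hypothetical sketch: metamer synthesis as feature matching.
# We optimize an image x so that features(x) matches features(x_ref),
# while the pixels of x remain different from x_ref.

rng = np.random.default_rng(0)
d_img, d_feat = 64, 16                     # toy image / feature dimensions
W = rng.normal(size=(d_feat, d_img)) / np.sqrt(d_img)

def features(x):
    return W @ x                           # stand-in for a model representation

x_ref = rng.normal(size=d_img)             # "reference" stimulus
target = features(x_ref)

x = rng.normal(size=d_img)                 # start the synthesis from noise
lr = 0.1
for _ in range(2000):
    # gradient of 0.5 * ||features(x) - target||^2 with respect to x
    grad = W.T @ (features(x) - target)
    x -= lr * grad

feat_err = np.linalg.norm(features(x) - target)   # should be tiny
pix_dist = np.linalg.norm(x - x_ref)              # stays large: a metamer, not a copy
print(feat_err, pix_dist)
```

Because the feature map discards information (16 features for 64 pixels), many distinct images share the same features; the loop converges to one such "metamer" of the reference. In the actual experiments the representations are those of robust or non-robust DNNs (or a texture model), and discriminability is measured psychophysically rather than by pixel distance.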
Bio: Arturo Deza is a postdoctoral research associate at the Massachusetts Institute of Technology under the direction of Professor Tomaso Poggio. Deza was previously a postdoctoral research fellow with Talia Konkle at Harvard University, where he worked on understanding the mechanisms of foveation in humans and machines. Deza's research focuses on understanding the human mechanisms of foveation and peripheral processing, as well as the inverse problem of how advanced computer vision systems may benefit from such peripheral representations. Deza performs research at the intersection of vision science, computer vision, and machine learning.