Task 009/018: Multi-modal Distributed Learning / Distributed Visual Analytics
Event Date: September 22, 2022
Time: 11:00 am (ET) / 8:00 am (PT)
Anusha Devulapally, Pennsylvania State University: Depth Estimation Using Sensor Fusion
Estimating per-pixel depth of a scene from sensors such as RGB, LiDAR, and event-based cameras has gained popularity in recent years and is widely used in autonomous vehicles, robot grasping, and point-cloud generation. In this talk, we give a brief overview of our previous work, 3D Shape Generation from RGB and Sparse Depth using Generative Adversarial Networks, followed by an in-depth discussion of depth estimation from event-based cameras and the motivation behind it.
Dynamic vision sensors, also known as "event cameras," capture per-pixel brightness changes and generate events asynchronously. They are widely used for their advantages over standard cameras, such as the absence of motion blur, high temporal resolution, and high dynamic range. However, they lack scene context. Existing approaches combine the strengths of event and standard cameras to estimate depth, mainly using recurrent neural networks. We intend to integrate RNNs with adversarial learning to exploit the generative power of GANs and thereby improve the resulting depth maps.
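Before an asynchronous event stream can feed a recurrent or convolutional network, it is typically binned into a dense spatio-temporal tensor. The sketch below shows one common such representation, a polarity-accumulating voxel grid; the function name, event layout `(t, x, y, polarity)`, and binning scheme are illustrative assumptions, not the speaker's actual pipeline.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate an event stream into a (num_bins, H, W) voxel grid.

    `events` is an (N, 4) array of rows (t, x, y, polarity), with
    polarity in {-1, +1}. This layout is an assumption for illustration.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    # Normalize timestamps to [0, num_bins - 1] so events spread across bins.
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    bins = t.astype(int)
    # Each event adds its polarity to the cell at its (bin, y, x) location;
    # np.add.at handles repeated indices correctly.
    np.add.at(grid, (bins, y, x), p)
    return grid
```

A tensor like this can be stacked with an RGB frame along the channel axis and passed to a recurrent encoder, which is broadly how event/frame fusion networks consume both modalities.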
Anusha Devulapally is a PhD candidate in the Computer Science and Engineering Department at Pennsylvania State University, working with Prof. Vijaykrishnan Narayanan. Her research interests include computer vision and deep learning. She is currently working on multi-modal monocular depth estimation from events and RGB. Prior to this, for her Master's thesis at the Indian Institute of Technology, Goa, India, she worked on a Condensed-Attention UNet for 3D segmentation of cancer cells in confocal microscopy images.