Computer Science 5th Year Masters Thesis Presentation

Location:
Virtual Presentation (ET), Remote Access via Zoom

Speaker:
CHITTESHWARAN THAVAMANI, Masters Student
Computer Science Department
Carnegie Mellon University

Foveated Attention for Neural Nets

Efficient processing of high-resolution video streams is safety-critical for many robotics applications such as autonomous driving. To maintain real-time performance, many practical systems downsample the video stream, but this can hurt downstream tasks such as (small) object detection. Instead, we take inspiration from biological vision systems that allocate more foveal "pixels" to salient parts of the scene. We introduce FOVEA, an approach for intelligent downsampling that ensures salient image regions remain "magnified" in the downsampled output. Given a high-resolution image, FOVEA applies a differentiable resampling layer that outputs a small fixed-size image canvas; the canvas is processed with an object detector, whose outputs are then differentiably mapped back onto the original image coordinates. To maintain overall efficiency, FOVEA relies on cheap and readily available saliency cues, including dataset-specific spatial priors and temporal priors computed from recent object predictions. On the autonomous driving datasets Argoverse-HD and BDD100K, FOVEA boosts detection AP over standard Faster R-CNN, both with and without finetuning. Without any noticeable increase in compute, it more than doubles accuracy on small objects without degrading performance on large objects. Finally, FOVEA sets a new record for streaming AP (from 17.8 to 23.0 on a GTX 1080 Ti GPU), a metric designed to capture both accuracy and latency.

However, FOVEA is designed specifically for 2D object detection. To generalize to arbitrary spatial tasks, our follow-up work "learns to zoom" in on the input image, computes spatial features, and then "unzooms" to revert any deformations (LZU). To enable efficient and differentiable unzooming, we approximate the zooming warp with a piecewise bilinear mapping that is invertible.
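To give intuition for the saliency-driven magnification behind FOVEA, here is a hypothetical 1D sketch (not the thesis implementation, which uses a differentiable attraction-kernel warp on 2D images): treating saliency as a sampling density, inverse-CDF sampling places more output samples where saliency is high, which magnifies those regions in the fixed-size output.

```python
import numpy as np

def saliency_warp_1d(saliency, n_out):
    """Map n_out uniformly spaced output coordinates to input coordinates,
    sampling more densely (i.e. magnifying) where saliency is high.
    Illustrative inverse-CDF formulation, not FOVEA's actual warp."""
    s = np.asarray(saliency, dtype=float) + 1e-6  # avoid zero density
    cdf = np.cumsum(s)
    cdf = cdf / cdf[-1]                           # normalized CDF over input positions
    u = (np.arange(n_out) + 0.5) / n_out          # uniform output coordinates in (0, 1)
    # Inverse CDF: input position sampled by each output pixel.
    return np.interp(u, cdf, np.arange(len(s), dtype=float))

def resample_1d(signal, x):
    """Linearly interpolate `signal` at fractional input positions `x`."""
    return np.interp(x, np.arange(len(signal), dtype=float), signal)
```

Applied separably to rows and columns, this kind of warp produces a downsampled canvas in which salient regions occupy proportionally more pixels; the mapping is monotonic, so detections on the canvas can be mapped back to original coordinates.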
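The invertibility of a piecewise (bi)linear warp, which LZU exploits for unzooming, can be illustrated with a hypothetical 1D analogue: a monotonic piecewise-linear map defined by knots is inverted exactly by swapping the knot arrays.

```python
import numpy as np

# Knots defining a monotonic piecewise-linear warp on [0, 1].
# An output ("zoomed") coordinate u maps to an input coordinate x;
# here the middle of the input is magnified. Knot values are illustrative.
U_KNOTS = np.array([0.0, 0.25, 0.75, 1.0])
X_KNOTS = np.array([0.0, 0.40, 0.60, 1.0])

def zoom(u):
    """Output coordinate -> input coordinate (the zooming warp)."""
    return np.interp(u, U_KNOTS, X_KNOTS)

def unzoom(x):
    """Exact inverse: swap the knot arrays (valid because the warp is monotonic)."""
    return np.interp(x, X_KNOTS, U_KNOTS)
```

Because the map is piecewise linear and strictly increasing, `unzoom(zoom(u)) == u` holds exactly; the 2D piecewise bilinear case in LZU generalizes this idea so that spatial features can be warped back to undeformed coordinates efficiently and differentiably.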
LZU can be applied to any task with spatial input and any model with spatial features. We demonstrate this versatility by evaluating on a variety of tasks and datasets: object detection on Argoverse-HD and a synthetic video version of COCO, semantic segmentation on Cityscapes, and RGB-based 3D detection on nuScenes. Interestingly, we observe boosts in performance even when high-resolution sensor data is unavailable, implying that LZU can be used to "learn to upsample" as well.

Thesis Committee:
Deva Ramanan (Chair)
Deepak Pathak

Zoom Participation: See announcement.

For More Information:
tracyf@cs.cmu.edu

