Light Field Imaging for Fun and Profit
Date: 2014/6/11

Speaker: Dr. Jingyi Yu

Time: 6/11,  15:00 to 16:00

Location: Room 220, Building 8



A light field captures a dense set of rays as a scene description in place of geometry. Recent advances in computational imaging have enabled novel and efficient light field acquisition devices. For example, the new Lytro and Raytrix cameras can capture a light field in a single shot, but only at very low spatial and angular resolution. In this talk, I present a new class of image processing algorithms and camera designs that significantly improve the spatial, angular, and temporal resolution of light field imaging.

Spatial Resolution: We develop a simple but effective technique that reworks the demosaicing process. We first show that the traditional approach, which demosaics each individual microlens image and then blends the results for refocusing, is suboptimal. We instead propose to demosaic the synthesized view at the rendering stage: we first map the rays onto the refocusing plane and then conduct resampling.
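The resample-then-demosaic order can be illustrated with a toy sketch (this is not the speaker's implementation; the shift-and-add refocusing model, the function name, and the data layout are all illustrative assumptions):

```python
import numpy as np

def refocus_then_demosaic(raw, bayer_mask, shifts):
    """Toy sketch: map the still-mosaicked subaperture samples onto the
    refocusing plane first, then fill in missing colours there by averaging.
    raw:        (U, H, W)    mosaicked subaperture images (one colour per pixel)
    bayer_mask: (U, H, W, 3) boolean, which colour channel each sample carries
    shifts:     (U,)         integer shift per subaperture for the chosen depth
    """
    H, W = raw.shape[1:]
    acc = np.zeros((H, W, 3))
    cnt = np.zeros((H, W, 3))
    for u, s in enumerate(shifts):
        # Shift-and-add refocusing: align each subaperture view, keeping
        # each sample tagged with the single colour channel it measured.
        rolled = np.roll(raw[u], s, axis=1)
        mask = np.roll(bayer_mask[u], s, axis=1)
        acc += rolled[..., None] * mask
        cnt += mask
    # Demosaicing happens only now, on the refocusing plane, by
    # per-channel normalisation of the accumulated colour samples.
    return acc / np.maximum(cnt, 1)
```

The point of the sketch is the ordering: colour interpolation is deferred until after the rays have been resampled on the refocusing plane.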

Angular Resolution: We introduce a light field triangulation scheme to improve the angular resolution. Our triangulation technique aims to fill the ray space with continuous, non-overlapping simplices anchored at sampled points (rays). Such a triangulation provides a piecewise-linear interpolant useful for angular super-resolution. We develop a novel triangulation algorithm that uses the depths and structures of 3D lines as constraints to produce high-quality triangulations. For robust depth estimation, we further present two light field stereo matching algorithms that greatly outperform the state of the art.
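The piecewise-linear interpolant that a ray-space triangulation provides can be sketched on a single triangle (a minimal illustration, not the speaker's triangulation algorithm; the function and coordinates are assumed):

```python
import numpy as np

def barycentric_interp(tri, f, p):
    """Piecewise-linear interpolation inside one ray-space triangle.
    tri: (3, 2) vertex rays, e.g. (spatial x, angular u)
    f:   (3,)   sampled intensities at the vertices
    p:   (2,)   query ray at an unsampled angular position
    """
    # Solve for barycentric weights of p with respect to the triangle.
    A = np.column_stack([tri[1] - tri[0], tri[2] - tri[0]])
    w12 = np.linalg.solve(A, p - tri[0])
    w = np.array([1.0 - w12.sum(), *w12])
    # The interpolant is the weight-blend of the vertex samples.
    return w @ f
```

Angular super-resolution then amounts to evaluating this interpolant at many new angular coordinates inside each simplex; the quality of the result depends on how well the triangle edges follow scene depth, which is exactly what the depth and 3D-line constraints above enforce.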

Spatial-Angular Resolution: We further present a unified framework that simultaneously enhances the spatial and angular resolutions by stitching multiple light fields. We first estimate the warping function between two light fields and then stitch them by finding an optimal cut through the overlapping region. We further accelerate the graph-cut algorithm via a coarse-to-fine scheme. We demonstrate various stitching applications that improve the field of view as well as the translational and rotational parallax of the light fields for 3D displays.
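The idea of an optimal cut through the overlap can be conveyed with a simplified stand-in for the graph cut: a dynamic-programming seam that threads, row by row, through the region where the two warped light fields disagree least (an illustrative sketch only; the actual framework uses a coarse-to-fine graph cut):

```python
import numpy as np

def optimal_seam(diff):
    """Minimal-cost vertical seam through an overlap-difference map.
    diff: (H, W) per-pixel disagreement between the two warped light fields.
    Returns, per row, the column where the stitch switches sources.
    """
    H, W = diff.shape
    cost = diff.astype(float).copy()
    for y in range(1, H):
        # Each pixel may continue the seam from the pixel above,
        # above-left, or above-right (borders forbidden via inf).
        left = np.roll(cost[y - 1], 1);   left[0] = np.inf
        right = np.roll(cost[y - 1], -1); right[-1] = np.inf
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    # Backtrack the cheapest path from the bottom row.
    seam = np.empty(H, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(H - 2, -1, -1):
        c = seam[y + 1]
        lo, hi = max(c - 1, 0), min(c + 2, W)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam
```

A graph cut generalizes this to arbitrary 2D (and higher-dimensional) cut topologies, which is why it is the tool of choice for stitching full 4D light fields.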

Temporal Resolution: Finally, we construct a hybrid-resolution stereo camera system for acquiring and rendering dynamic light fields. Our system couples a high-resolution/low-resolution camera pair to replace the bulky camera array. From the input stereo pair, we recover a low-resolution disparity map and upsample it via fast cross bilateral filters. We then use the recovered high-resolution disparity map and its corresponding video frame to synthesize a light field using GPU-based disparity warping. Our system produces racking- and tracking-focus effects at 640×480 resolution and 15 fps.
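The cross (joint) bilateral upsampling step can be sketched as follows: spatial weights come from the upsampling grid while range weights come from the high-resolution guide image, so disparity edges snap to image edges (a slow, illustrative reference version with a fixed 3×3 low-resolution neighbourhood, not the fast filters used in the actual system; all names and parameters are assumed):

```python
import numpy as np

def cross_bilateral_upsample(disp_lr, guide, scale, sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-res disparity map guided by a high-res image.
    disp_lr: (h, w)              low-resolution disparity map
    guide:   (h*scale, w*scale)  high-resolution grayscale guide frame
    """
    H, W = guide.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            num = den = 0.0
            cy, cx = y / scale, x / scale  # position in low-res coordinates
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    ly, lx = int(round(cy)) + dy, int(round(cx)) + dx
                    if 0 <= ly < disp_lr.shape[0] and 0 <= lx < disp_lr.shape[1]:
                        gy = min(int(ly * scale), H - 1)
                        gx = min(int(lx * scale), W - 1)
                        # Spatial weight from grid distance, range weight
                        # from the guide image (hence "cross" bilateral).
                        ws = np.exp(-((ly - cy) ** 2 + (lx - cx) ** 2)
                                    / (2 * sigma_s ** 2))
                        wr = np.exp(-(guide[y, x] - guide[gy, gx]) ** 2
                                    / (2 * sigma_r ** 2))
                        num += ws * wr * disp_lr[ly, lx]
                        den += ws * wr
            out[y, x] = num / den
    return out
```

Because the range term is evaluated on the guide image rather than on the disparity itself, the upsampled disparity inherits the sharp depth discontinuities visible in the high-resolution frame.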



Jingyi Yu is an Associate Professor in the Department of Computer and Information Sciences and the Department of Electrical and Computer Engineering at the University of Delaware. He received his B.S. from Caltech in 2000 and his Ph.D. from MIT in 2005. His research interests span a range of topics in computer vision and computer graphics, especially computational photography and non-conventional optics and camera designs. His research has been generously supported by the National Science Foundation (NSF), the National Institutes of Health (NIH), the Army Research Office (ARO), and the Air Force Office of Scientific Research (AFOSR). He is a recipient of the NSF CAREER Award and the AFOSR YIP Award.

                                                                                                        SIST-Seminar 14019