The lab explores the frontiers of imaging and computer vision, where algorithms promise super-human vision by extracting “invisible” information hidden in the measurements. We design computational cameras to overcome tomorrow’s capture and recognition challenges: imaging and vision under harsh environmental conditions, such as ultra-low or ultra-high illumination or dense fog, rain, and snow; imaging at ultra-fast or ultra-slow time scales; imaging and computer vision at extreme scene scales, from super-resolution microscopy to kilometer-scale depth sensing; and imaging via “in the wild” proxy cameras, e.g. using nearby object surfaces as sensors. Breaking through these barriers may enable rich applications across diverse domains, including robotics, autonomous driving, health care, scientific imaging, human-computer interaction, consumer electronics, astronomy, and physics. To develop next-generation imaging and vision systems, we conduct interdisciplinary research at the intersection of imaging, computer vision, computer graphics, optics, electrical engineering, applied physics, and robotics.
The lab’s core research areas are: computational cameras, computer vision, and computational optics and displays. We explore theoretical foundations, algorithms, and joint hardware-software vision and imaging systems that, together, allow us to overcome existing limitations. Selected research projects are listed below:
Vision for Autonomous Systems Group
Dr. Dengxin Dai has started a new research group, Vision for Autonomous Systems (VAS), in the Department of Computer Vision and Machine Learning at the MPI for Informatics. The VAS group will conduct cutting-edge research on deep perception for autonomous driving, especially on the scalability of deep perception methods to […]
Launch of ACDC Website
ACDC is a new large-scale driving dataset for training and testing semantic segmentation methods under adverse visual conditions such as fog, nighttime, rain, and snow. The dataset and the associated benchmarks were published at ICCV 2021 and are now publicly available at https://acdc.vision.ee.ethz.ch.
The dataset consists […]
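Semantic segmentation benchmarks of this kind are typically scored with mean intersection-over-union (mIoU) between predicted and ground-truth label maps. The snippet below is a minimal sketch of that metric on toy label arrays; it is an illustration of the standard measure, not the official ACDC evaluation code, and the function name `mean_iou` and the toy inputs are our own.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union, averaged over classes that occur
    in either the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        gt_c = gt == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        inter = np.logical_and(pred_c, gt_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with two classes: one pixel is mislabeled.
gt = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
print(mean_iou(pred, gt, num_classes=2))  # (1/2 + 2/3) / 2 ≈ 0.583
```

In benchmark practice the per-class intersections and unions are accumulated over all images before the final division, so that rare classes are not dominated by images in which they barely appear.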