Dr. Dengxin Dai

Senior Researcher
E-Mail: Please contact TODO
Phone: +41 xx xxx xx xx
Address: Stampfenbachstrasse 48, 8092 Zürich, Switzerland
Room: ETH Zurich, Computer Vision Lab

Biography

Dengxin Dai is a Senior Researcher working with the Computer Vision Lab at ETH Zurich. In 2016, he obtained his PhD in Computer Vision at ETH Zurich. Since then he has been the Team Leader of TRACE-Zurich, working on autonomous driving within the R&D project “TRACE: Toyota Research on Automated Cars in Europe”. His research interests lie in autonomous driving, robust perception in adverse weather and illumination conditions, automotive sensors, and computer vision under limited supervision.

Research Interests & Workshops

He has organized a CVPR’19 Workshop on Vision for All Seasons: Bad Weather and Nighttime, and is organizing an ICCV’19 workshop on Autonomous Driving. He has been a program committee member of several major computer vision conferences and received multiple outstanding reviewer awards. He is a guest editor for the IJCV special issue Vision for All Seasons and is an area chair for WACV 2020.

Publications & Projects

  • Scale-Aware Domain Adaptive Faster R-CNN
    Authors: Yuhua Chen, Haoran Wang, Wen Li, Christos Sakaridis, Dengxin Dai and Luc Van Gool

    We present Scale-aware Domain Adaptive Faster R-CNN, a model aiming at improving the cross-domain robustness of object detection. An illustrative code sketch of the adversarial feature-alignment idea appears after the publication list.

    Object detection typically assumes that training and test samples are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch may lead to a significant performance drop. In this work, we present Scale-aware Domain Adaptive Faster R-CNN, a model aiming at improving the cross-domain robustness of object detection. In particular, our model improves the traditional Faster R-CNN model by tackling the domain shift on two levels: (1) the image-level shift, such as image style, illumination, etc., and (2) the instance-level shift, such as object appearance, size, etc. The two domain adaptation modules are implemented by learning domain classifiers in an adversarial training manner. Moreover, we observe that the large variance in object scales often brings a crucial challenge to cross-domain object detection. Thus, we improve our model by explicitly incorporating the object scale into adversarial training. We evaluate our proposed model on multiple cross-domain scenarios, including object detection in adverse weather, learning from synthetic data, and cross-camera adaptation, where the proposed model outperforms baselines and competing methods by a significant margin. The promising results demonstrate the effectiveness of our proposed model for cross-domain object detection.
    Read More
  • Multi-Scale Interaction for Real-Time LiDAR Data Segmentation on an Embedded Platform
    Authors: Shijie Li, Xieyuanli Chen, Yun Liu, Dengxin Dai, Cyrill Stachniss and Juergen Gall

    We propose a projection-based method for semantic segmentation of LiDAR data, called Multi-scale Interaction Network (MINet), which is very efficient and accurate. A toy sketch of the multi-scale interaction idea appears after the publication list.

    Real-time semantic segmentation of LiDAR data is crucial for autonomously driving vehicles and robots, which are usually equipped with an embedded platform and have limited computational resources. Approaches that operate directly on the point cloud use complex spatial aggregation operations, which are very expensive and difficult to deploy on embedded platforms. As an alternative, projection-based methods are more efficient and can run on embedded hardware. However, current projection-based methods either have a low accuracy or require millions of parameters. In this letter, we therefore propose a projection-based method, called Multi-scale Interaction Network (MINet), which is very efficient and accurate. The network uses multiple paths with different scales and balances the computational resources between the scales. Additional dense interactions between the scales avoid redundant computations and make the network highly efficient. The proposed network outperforms point-based, image-based, and projection-based methods in terms of accuracy, number of parameters, and runtime. Moreover, the network processes more than 24 scans, captured by a high-resolution LiDAR sensor with 64 beams, per second on an embedded platform, which is higher than the framerate of the sensor. The network is therefore suitable for robotics applications.
    Read More
  • Binaural SoundNet: Predicting Semantics, Depth and Motion with Binaural Sounds
    Authors: Dengxin Dai, Arun Balajee Vasudevan, Jiri Matas and Luc Van Gool

    This work develops an approach for scene understanding purely based on binaural sounds. An illustrative sketch of the cross-modal distillation idea appears after the publication list.

    Humans can robustly recognize and localize objects by using visual and/or auditory cues. While machines are able to do the same with visual data already, less work has been done with sounds. This work develops an approach for scene understanding purely based on binaural sounds. The considered tasks include predicting the semantic masks of sound-making objects, the motion of sound-making objects, and the depth map of the scene. To this aim, we propose a novel sensor setup and record a new audio-visual dataset of street scenes with eight professional binaural microphones and a 360-degree camera. The co-existence of visual and audio cues is leveraged for supervision transfer. In particular, we employ a cross-modal distillation framework that consists of multiple vision teacher methods and a sound student method -- the student method is trained to generate the same results as the teacher methods do. This way, the auditory system can be trained without using human annotations. To further boost the performance, we propose another novel auxiliary task, coined Spatial Sound Super-Resolution, to increase the directional resolution of sounds. We then formulate the four tasks into one end-to-end trainable multi-tasking network aiming to boost the overall performance. Experimental results show that 1) our method achieves good results for all four tasks, 2) the four tasks are mutually beneficial -- training them together achieves the best performance, 3) the number and orientation of microphones are both important, and 4) features learned from the standard spectrogram and features obtained by the classic signal processing pipeline are complementary for auditory perception tasks.
    Read More
  • DLOW: Domain Flow and Applications
    Authors: Rui Gong, Wen Li, Yuhua Chen, Dengxin Dai and Luc Van Gool

    We present a domain flow generation (DLOW) model to bridge two different domains by generating a continuous sequence of intermediate domains flowing from one domain to the other. An illustrative sketch of the domainness-weighted adversarial loss appears after the publication list.

    In this work, we present a domain flow generation (DLOW) model to bridge two different domains by generating a continuous sequence of intermediate domains flowing from one domain to the other. The benefits of our DLOW model are twofold. First, it is able to transfer source images into a domain flow, which consists of images with smoothly changing distributions from the source to the target domain. The domain flow bridges the gap between source and target domains, thus easing the domain adaptation task. Second, when multiple target domains are provided for training, our DLOW model is also able to generate new styles of images that are unseen in the training data. The new images are shown to mimic different artists and produce a natural blend of multiple art styles. Furthermore, for semantic segmentation under adverse weather conditions, we take advantage of our DLOW model to generate images with gradually changing fog density, which can be readily used for boosting the segmentation performance when combined with a curriculum learning strategy. We demonstrate the effectiveness of our model on benchmark datasets for different applications, including cross-domain semantic segmentation, style generalization, and foggy scene understanding.
    Read More
  • Task Switching Network for Multi-Task Learning
    Authors: Guolei Sun, Thomas Probst, Danda Pani Paudel, Nikola Popović, Menelaos Kanakis, Jagruti Patel, Dengxin Dai and Luc Van Gool

    We introduce Task Switching Networks (TSNs), a task-conditioned architecture with a single unified encoder/decoder for efficient multi-task learning. An illustrative sketch of the task-conditioned decoder idea appears after the publication list.

    We introduce Task Switching Networks (TSNs), a task-conditioned architecture with a single unified encoder/decoder for efficient multi-task learning. Multiple tasks are performed by switching between them, performing one task at a time. TSNs have a constant number of parameters irrespective of the number of tasks. This scalable yet conceptually simple approach circumvents the overhead and intricacy of task-specific network components in existing works. In fact, we demonstrate for the first time that multi-tasking can be performed with a single task-conditioned decoder. We achieve this by learning task-specific conditioning parameters through a jointly trained task embedding network, encouraging constructive interaction between tasks. Experiments validate the effectiveness of our approach, achieving state-of-the-art results on two challenging multi-task benchmarks, PASCAL-Context and NYUD. Our analysis of the learned task embeddings further indicates a connection to task relationships studied in the recent literature.
    Read More
  • DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation

    A novel UDA method, DAFormer, consisting of a Transformer encoder and a multi-level context-aware feature fusion decoder, improves the state of the art by 10.8 mIoU for GTA->Cityscapes and 5.4 mIoU for Synthia->Cityscapes. An illustrative sketch of the Rare Class Sampling idea appears after the publication list.

    As acquiring pixel-wise annotations of real-world images for semantic segmentation is a costly process, a model can instead be trained with more accessible synthetic data and adapted to real images without requiring their annotations. This process is studied in unsupervised domain adaptation (UDA). Even though a large number of methods propose new adaptation strategies, they are mostly based on outdated network architectures. As the influence of recent network architectures has not been systematically studied, we first benchmark different network architectures for UDA and then propose a novel UDA method, DAFormer, based on the benchmark results. The DAFormer network consists of a Transformer encoder and a multi-level context-aware feature fusion decoder. It is enabled by three simple but crucial training strategies to stabilize the training and to avoid overfitting DAFormer to the source domain: While the Rare Class Sampling on the source domain improves the quality of pseudo-labels by mitigating the confirmation bias of self-training towards common classes, the Thing-Class ImageNet Feature Distance and a learning rate warmup promote feature transfer from ImageNet pretraining. DAFormer significantly improves the state-of-the-art performance by 10.8 mIoU for GTA->Cityscapes and 5.4 mIoU for Synthia->Cityscapes and enables learning even difficult classes such as train, bus, and truck well.
    Read More
  • CoSSL: Co-Learning of Representation and Classifier for Imbalanced Semi-Supervised Learning
    Preprint

    We propose a novel co-learning framework (CoSSL) with decoupled representation learning and classifier learning for imbalanced semi-supervised learning (SSL).

    In this paper, we propose a novel co-learning framework (CoSSL) with decoupled representation learning and classifier learning for imbalanced SSL. To handle the data imbalance, we devise Tail-class Feature Enhancement (TFE) for classifier learning. Furthermore, the current evaluation protocol for imbalanced SSL focuses only on balanced test sets, which has limited practicality in real-world scenarios. Therefore, we further conduct a comprehensive evaluation under various shifted test distributions. In experiments, we show that our approach outperforms other methods over a large range of shifted distributions, achieving state-of-the-art performance on benchmark datasets ranging from CIFAR-10, CIFAR-100, ImageNet, to Food-101. Our code will be made publicly available.
    Read More
  • Decoupling Zero-Shot Semantic Segmentation
    Authors: Jian Ding, Nan Xue, Gui-Song Xia and Dengxin Dai

    Preprint

    Zero-shot semantic segmentation (ZS3) aims to segment novel categories that have not been seen during training. We propose to decouple ZS3 into two sub-tasks: 1) a class-agnostic grouping task that groups pixels into segments, and 2) a zero-shot classification task on those segments. An illustrative sketch of the segment-level zero-shot classification idea appears after the publication list.

    Zero-shot semantic segmentation (ZS3) aims to segment the novel categories that have not been seen in training. Existing works formulate ZS3 as a pixel-level zero-shot classification problem, and transfer semantic knowledge from seen classes to unseen ones with the help of language models pre-trained only with texts. While simple, the pixel-level ZS3 formulation shows limited capability to integrate vision-language models that are often pre-trained with image-text pairs and currently demonstrate great potential for vision tasks. Inspired by the observation that humans often perform segment-level semantic labeling, we propose to decouple ZS3 into two sub-tasks: 1) a class-agnostic grouping task to group the pixels into segments, and 2) a zero-shot classification task on segments. The former sub-task does not involve category information and can be directly transferred to group pixels for unseen classes. The latter sub-task performs at the segment level and provides a natural way to leverage large-scale vision-language models pre-trained with image-text pairs (e.g. CLIP) for ZS3. Based on the decoupling formulation, we propose a simple and effective zero-shot semantic segmentation model, called ZegFormer, which outperforms the previous methods on ZS3 standard benchmarks by large margins, e.g., 35 points on PASCAL VOC and 3 points on COCO-Stuff in terms of mIoU for unseen classes. Code will be released at https://github.com/dingjiansw101/ZegFormer.
    Read More
  • Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather

    This work presents a novel method for LiDAR-based 3D object detection in foggy weather, based on simulating physically accurate fog effects in standard LiDAR data. A heavily simplified sketch of fog simulation on point clouds appears after the publication list.

    This work addresses the challenging task of LiDAR-based 3D object detection in foggy weather. Collecting and annotating data in such a scenario is very time-, labor- and cost-intensive. In this paper, we tackle this problem by simulating physically accurate fog into clear-weather scenes, so that the abundant existing real datasets captured in clear weather can be repurposed for our task. Our contributions are twofold: 1) We develop a physically valid fog simulation method that is applicable to any LiDAR dataset. This unleashes the acquisition of large-scale foggy training data at no extra cost. These partially synthetic data can be used to improve the robustness of several perception methods, such as 3D object detection and tracking or simultaneous localization and mapping, on real foggy data. 2) Through extensive experiments with several state-of-the-art detection approaches, we show that our fog simulation can be leveraged to significantly improve the performance for 3D object detection in the presence of fog. Thus, we are the first to provide strong 3D object detection baselines on the Seeing Through Fog dataset. Our code is available at this http URL.
    Read More
  • ACDC: The Adverse Conditions Dataset with Correspondences for Semantic Driving Scene Understanding

    ACDC is a large-scale dataset for training and testing semantic segmentation methods under four adverse visual conditions: fog, nighttime, rain, and snow.

    Level 5 autonomy for self-driving cars requires a robust visual perception system that can parse input images under any visual condition. However, existing semantic segmentation datasets are either dominated by images captured under normal conditions or are small in scale. To address this, we introduce ACDC, the Adverse Conditions Dataset with Correspondences for training and testing semantic segmentation methods on adverse visual conditions. ACDC consists of a large set of 4006 images which are equally distributed between four common adverse conditions: fog, nighttime, rain, and snow. Each adverse-condition image comes with a high-quality fine pixel-level semantic annotation, a corresponding image of the same scene taken under normal conditions, and a binary mask that distinguishes between intra-image regions of clear and uncertain semantic content. Thus, ACDC supports both standard semantic segmentation and the newly introduced uncertainty-aware semantic segmentation. A detailed empirical study demonstrates the challenges that the adverse domains of ACDC pose to state-of-the-art supervised and unsupervised approaches and indicates the value of our dataset in steering future progress in the field. Our dataset and benchmark are publicly available.
    Read More
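
Illustrative Code Sketches

The short sketches below each illustrate one core idea from the publications above. They are minimal, self-contained Python/PyTorch examples with illustrative names, shapes and hyperparameters, written under stated assumptions; they are not the authors' implementations.

Scale-Aware Domain Adaptive Faster R-CNN: a minimal sketch of image-level adversarial feature alignment, in which a domain classifier is trained on backbone features through a gradient-reversal layer so that the features it sees become domain-invariant. Module names, channel sizes and the classifier architecture are assumptions; the full model additionally aligns instance-level features and incorporates object scale into the adversarial training.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass, sign-reversed (scaled) gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class ImageLevelDomainClassifier(nn.Module):
    """Predicts source vs. target domain from backbone feature maps."""

    def __init__(self, in_channels=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),
        )

    def forward(self, feat, lambd=1.0):
        feat = GradientReversal.apply(feat, lambd)
        return self.net(feat)  # per-location domain logits


if __name__ == "__main__":
    feats = torch.randn(2, 256, 38, 50, requires_grad=True)  # backbone features, 2 images
    domain_labels = torch.tensor([0.0, 1.0])                  # 0 = source, 1 = target
    clf = ImageLevelDomainClassifier()
    logits = clf(feats)                                       # (2, 1, 38, 50)
    targets = domain_labels.view(2, 1, 1, 1).expand_as(logits)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, targets)
    loss.backward()   # gradients reaching `feats` are sign-reversed by the GRL
```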
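
Multi-Scale Interaction Network (MINet): the toy block below only illustrates the general idea of parallel paths at different resolutions that exchange features at every stage, so that the scales complement rather than duplicate each other's computation. It is not the MINet architecture; all layer choices and sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoScaleInteractionBlock(nn.Module):
    """Full-resolution and half-resolution paths that exchange features (the 'interaction' idea)."""

    def __init__(self, channels=32):
        super().__init__()
        self.conv_full = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_half = nn.Conv2d(channels, channels, 3, padding=1)
        self.mix_full = nn.Conv2d(2 * channels, channels, 1)
        self.mix_half = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x_full, x_half):
        f = F.relu(self.conv_full(x_full))
        h = F.relu(self.conv_half(x_half))
        # cross-scale interaction: each path also sees the other path's features
        h_up = F.interpolate(h, size=f.shape[-2:], mode="bilinear", align_corners=False)
        f_down = F.avg_pool2d(f, kernel_size=2)
        x_full = self.mix_full(torch.cat([f, h_up], dim=1))
        x_half = self.mix_half(torch.cat([h, f_down], dim=1))
        return x_full, x_half


if __name__ == "__main__":
    # toy range-image features from a 64-beam LiDAR projected to 64 x 2048
    full = torch.randn(1, 32, 64, 2048)
    half = torch.randn(1, 32, 32, 1024)
    block = TwoScaleInteractionBlock()
    out_full, out_half = block(full, half)
    print(out_full.shape, out_half.shape)
```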
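
Binaural SoundNet: a toy sketch of the cross-modal distillation idea, in which a (frozen) vision teacher provides soft semantic predictions for the co-recorded frames and an audio student is trained to reproduce them from binaural spectrograms, so no human annotations are needed. Network shapes, the KL-divergence matching loss and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 3      # e.g. a few sound-making object classes (illustrative)
SPEC_CHANNELS = 16   # spectrogram channels from the binaural microphones (illustrative)


class AudioStudent(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(SPEC_CHANNELS, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, NUM_CLASSES, 1)

    def forward(self, spec):
        logits = self.head(self.encoder(spec))
        # upsample back to the teacher's output resolution
        return nn.functional.interpolate(logits, size=(64, 128), mode="bilinear",
                                         align_corners=False)


def distillation_step(student, optimizer, spectrograms, teacher_probs):
    """One training step: match the vision teacher's soft semantic predictions."""
    student_logits = student(spectrograms)
    loss = nn.functional.kl_div(
        nn.functional.log_softmax(student_logits, dim=1),
        teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    student = AudioStudent()
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    specs = torch.randn(2, SPEC_CHANNELS, 64, 128)                        # binaural spectrograms
    teacher = torch.softmax(torch.randn(2, NUM_CLASSES, 64, 128), dim=1)  # vision pseudo-labels
    print(distillation_step(student, opt, specs, teacher))
```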
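
DLOW: a sketch of the domainness-weighted adversarial loss. A generator G(x, z) maps a source image to an intermediate domain controlled by a "domainness" variable z in [0, 1]; its adversarial losses against a source-domain and a target-domain discriminator are weighted by (1 - z) and z, so z = 0 stays close to the source style and z = 1 reaches the target style. This is only the loss-weighting idea, not the full training objective or the authors' code.

```python
import torch
import torch.nn as nn


def dlow_generator_loss(d_src_out, d_tgt_out, z):
    """Adversarial generator loss for one sample with domainness z.

    d_src_out / d_tgt_out: discriminator logits for G(x, z) under the
    source- and target-domain discriminators (illustrative inputs).
    """
    real = torch.ones_like(d_src_out)
    adv_src = nn.functional.binary_cross_entropy_with_logits(d_src_out, real)
    adv_tgt = nn.functional.binary_cross_entropy_with_logits(d_tgt_out, real)
    return (1.0 - z) * adv_src + z * adv_tgt


if __name__ == "__main__":
    for z in (0.0, 0.5, 1.0):
        d_src = torch.tensor([0.3])    # toy discriminator logits
        d_tgt = torch.tensor([-0.7])
        print(z, dlow_generator_loss(d_src, d_tgt, z).item())
```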
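
Task Switching Networks: a sketch of a single decoder whose parameters stay fixed while its normalization statistics are modulated by a learned task embedding, so the same weights perform one task at a time depending on the task index. The FiLM-style scale/shift conditioning, module names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class TaskConditionedBlock(nn.Module):
    def __init__(self, channels, embed_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_scale_shift = nn.Linear(embed_dim, 2 * channels)

    def forward(self, x, task_embed):
        h = self.norm(self.conv(x))
        # modulate the shared features with task-specific scale and shift
        scale, shift = self.to_scale_shift(task_embed).chunk(2, dim=-1)
        h = h * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return torch.relu(h)


class TaskSwitchingDecoder(nn.Module):
    def __init__(self, num_tasks, channels=64, embed_dim=32):
        super().__init__()
        self.task_embedding = nn.Embedding(num_tasks, embed_dim)
        self.blocks = nn.ModuleList(
            [TaskConditionedBlock(channels, embed_dim) for _ in range(3)])
        self.out = nn.Conv2d(channels, 1, 1)  # toy single-channel prediction head

    def forward(self, feats, task_id):
        e = self.task_embedding(task_id)
        for block in self.blocks:
            feats = block(feats, e)
        return self.out(feats)


if __name__ == "__main__":
    decoder = TaskSwitchingDecoder(num_tasks=5)
    feats = torch.randn(2, 64, 32, 32)
    task_id = torch.tensor([1, 1])           # run the decoder "as" task 1
    print(decoder(feats, task_id).shape)     # the same weights serve all 5 tasks
```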
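
DAFormer: a sketch of Rare Class Sampling on the source domain, i.e. sampling source images more often if they contain classes that are rare according to the source label statistics. The temperature-based weighting below is an illustrative choice, not necessarily the paper's exact formula.

```python
import numpy as np


def class_sampling_probs(class_pixel_counts, temperature=0.01):
    """Higher sampling probability for classes with lower pixel frequency."""
    freq = class_pixel_counts / class_pixel_counts.sum()
    weights = np.exp((1.0 - freq) / temperature)
    return weights / weights.sum()


def sample_source_image(images_per_class, class_pixel_counts, rng):
    """Pick a class with rare-class-biased probability, then an image containing it."""
    probs = class_sampling_probs(class_pixel_counts)
    c = int(rng.choice(len(probs), p=probs))
    return rng.choice(images_per_class[c]), c


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy statistics: class 2 (e.g. "train") is much rarer than class 0 (e.g. "road")
    counts = np.array([1_000_000, 200_000, 5_000], dtype=np.float64)
    images_per_class = {0: ["img_a", "img_b"], 1: ["img_b", "img_c"], 2: ["img_d"]}
    print(class_sampling_probs(counts))
    print(sample_source_image(images_per_class, counts, rng))
```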
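
Decoupling Zero-Shot Semantic Segmentation: a toy sketch of the second sub-task, where each class-agnostic segment is classified zero-shot by comparing its embedding with text embeddings of class names (e.g. from CLIP). The random stand-in embeddings and helper names below are assumptions; ZegFormer itself obtains the segments and embeddings with a trained segmentation model and a pre-trained vision-language model.

```python
import torch
import torch.nn.functional as F


def classify_segments(segment_embeds, text_embeds, temperature=0.07):
    """Cosine-similarity zero-shot classification of per-segment embeddings."""
    seg = F.normalize(segment_embeds, dim=-1)
    txt = F.normalize(text_embeds, dim=-1)
    logits = seg @ txt.t() / temperature      # (num_segments, num_classes)
    return logits.softmax(dim=-1)


def segments_to_semantic_map(segment_ids, segment_probs):
    """Paint each segment with its predicted class to obtain a semantic mask."""
    labels = segment_probs.argmax(dim=-1)     # (num_segments,)
    return labels[segment_ids]                # (H, W)


if __name__ == "__main__":
    class_names = ["road", "car", "cow"]               # "cow" could be an unseen class
    text_embeds = torch.randn(len(class_names), 512)   # stand-in for CLIP text features
    segment_embeds = torch.randn(4, 512)               # stand-in for per-segment features
    segment_ids = torch.randint(0, 4, (8, 16))         # class-agnostic grouping result
    probs = classify_segments(segment_embeds, text_embeds)
    print(segments_to_semantic_map(segment_ids, probs).shape)  # torch.Size([8, 16])
```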
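
Fog Simulation on Real LiDAR Point Clouds: a heavily simplified illustration of adding a fog effect to a clear-weather point cloud, where returned intensity is attenuated with range following Beer-Lambert (exp(-2 * alpha * R)) and returns that drop below a detection threshold are replaced by weak near-range scatter points. The paper's simulation is physically derived and considerably more detailed; every constant and helper below is an assumption.

```python
import numpy as np


def simulate_fog(points, intensities, alpha=0.06, threshold=0.05, rng=None):
    """points: (N, 3) xyz in metres; intensities: (N,) in [0, 1]."""
    rng = rng or np.random.default_rng()
    ranges = np.linalg.norm(points, axis=1)
    attenuated = intensities * np.exp(-2.0 * alpha * ranges)  # two-way attenuation

    lost = attenuated < threshold
    foggy_points = points.copy()
    foggy_intensities = attenuated.copy()

    # lost returns become weak scatter returns at a random short range
    scatter_range = rng.uniform(2.0, 15.0, size=lost.sum())
    directions = points[lost] / np.maximum(ranges[lost, None], 1e-6)
    foggy_points[lost] = directions * scatter_range[:, None]
    foggy_intensities[lost] = rng.uniform(0.0, threshold, size=lost.sum())
    return foggy_points, foggy_intensities


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-80, 80, size=(1000, 3))
    inten = rng.uniform(0.1, 1.0, size=1000)
    fog_pts, fog_inten = simulate_fog(pts, inten, rng=rng)
    print(fog_pts.shape, float(fog_inten.mean()))
```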