Welcome

Welcome to the Computer Vision Group at RWTH Aachen University!
The Computer Vision Group was established at RWTH Aachen University within the Cluster of Excellence "UMIC - Ultra High-Speed Mobile Information and Communication" and is associated with the Chair of Computer Science 8 - Computer Graphics, Computer Vision, and Multimedia. The group focuses on computer vision applications for mobile devices and robotic or automotive platforms. Our main research areas are visual object recognition, tracking, self-localization, 3D reconstruction, and in particular combinations of these topics.
We offer lectures and seminars about computer vision and machine learning.
You can browse through all our publications and the projects we are working on.
Important information for the winter semester 2023/2024: Unfortunately, the following lectures will not be offered this semester: a) Computer Vision 2, b) Advanced Machine Learning.
News
• GCPR'23 (Aug. 10, 2023): Two papers have been accepted for publication at the German Conference on Pattern Recognition (GCPR) 2023.
• ICCV'23 (July 16, 2023): We have two papers accepted at the 2023 International Conference on Computer Vision (ICCV).
• CVPR'23 (March 31, 2023): Our TarViS approach has been accepted as a highlighted paper (top 2.5%) at the 2023 Conference on Computer Vision and Pattern Recognition (CVPR).
• ICRA'23 (Jan. 18, 2023): We have one paper accepted at the 2023 International Conference on Robotics and Automation (ICRA).
• WACV'23 (Dec. 29, 2022): We have two papers accepted at the 2023 Winter Conference on Applications of Computer Vision (WACV).
• ECCV'22 (Sept. 30, 2022): We have one paper accepted at the AVVision Workshop of the European Conference on Computer Vision (ECCV) 2022. Furthermore, we will present a live demo.
Recent Publications
DynaMITe: Dynamic Query Bootstrapping for Multi-object Interactive Segmentation Transformer
International Conference on Computer Vision (ICCV)
Most state-of-the-art instance segmentation methods rely on large amounts of pixel-precise ground-truth annotations for training, which are expensive to create. Interactive segmentation networks help generate such annotations based on an image and the corresponding user interactions such as clicks. Existing methods for this task can only process a single instance at a time and each user interaction requires a full forward pass through the entire deep network. We introduce a more efficient approach, called DynaMITe, in which we represent user interactions as spatio-temporal queries to a Transformer decoder with a potential to segment multiple object instances in a single iteration. Our architecture also alleviates any need to re-compute image features during refinement, and requires fewer interactions for segmenting multiple instances in a single image when compared to other methods. DynaMITe achieves state-of-the-art results on multiple existing interactive segmentation benchmarks, and also on the new multi-instance benchmark that we propose in this paper.
TarViS: A Unified Approach for Target-based Video Segmentation
Conference on Computer Vision and Pattern Recognition (CVPR) 2023 (Highlight)
The general domain of video segmentation is currently fragmented into different tasks spanning multiple benchmarks. Despite rapid progress in the state-of-the-art, current methods are overwhelmingly task-specific and cannot conceptually generalize to other tasks. Inspired by recent approaches with multi-task capability, we propose TarViS: a novel, unified network architecture that can be applied to any task that requires segmenting a set of arbitrarily defined 'targets' in video. Our approach is flexible with respect to how tasks define these targets, since it models the latter as abstract 'queries' which are then used to predict pixel-precise target masks. A single TarViS model can be trained jointly on a collection of datasets spanning different tasks, and can hot-swap between tasks during inference without any task-specific retraining. To demonstrate its effectiveness, we apply TarViS to four different tasks, namely Video Instance Segmentation (VIS), Video Panoptic Segmentation (VPS), Video Object Segmentation (VOS) and Point Exemplar-guided Tracking (PET). Our unified, jointly trained model achieves state-of-the-art performance on 5/7 benchmarks spanning these four tasks, and competitive performance on the remaining two.
BURST: A Benchmark for Unifying Object Recognition, Segmentation and Tracking in Video
Winter Conference on Applications of Computer Vision (WACV) 2023
Multiple existing benchmarks involve tracking and segmenting objects in video, e.g., Video Object Segmentation (VOS) and Multi-Object Tracking and Segmentation (MOTS), but there is little interaction between them due to the use of disparate benchmark datasets and metrics (e.g. J&F, mAP, sMOTSA). As a result, published works usually target a particular benchmark and are not easily comparable to one another. We believe that the development of generalized methods that can tackle multiple tasks requires greater cohesion among these research sub-communities. In this paper, we aim to facilitate this by proposing BURST, a dataset which contains thousands of diverse videos with high-quality object masks, and an associated benchmark with six tasks involving object tracking and segmentation in video. All tasks are evaluated using the same data and comparable metrics, which enables researchers to consider them in unison and, hence, more effectively pool knowledge from different methods across different tasks. Additionally, we demonstrate several baselines for all tasks and show that approaches for one task can be applied to another with a quantifiable and explainable performance difference.
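Both DynaMITe and TarViS describe the things to be segmented (user clicks, object instances, task-specific targets) as queries fed to a Transformer decoder that attends to image features. The sketch below is a minimal, hypothetical PyTorch illustration of this general query-based mask prediction pattern; it is not the authors' implementation, and all names (QueryBasedMaskPredictor, the tensor shapes, the dot-product mask head) are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical, minimal sketch of query-based mask prediction
# (in the spirit of DynaMITe / TarViS, NOT the authors' code).
# A set of "target" queries (e.g. encoded user clicks or task-specific
# object queries) attends to flattened image features in a standard
# Transformer decoder; mask logits are then obtained as the dot product
# between the refined queries and per-pixel embeddings.
class QueryBasedMaskPredictor(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8, num_layers: int = 6):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.pixel_proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, queries: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # queries: (B, Q, C) one query per target (object / click / task slot)
        # feats:   (B, C, H, W) image features from a shared backbone
        b, c, h, w = feats.shape
        memory = feats.flatten(2).transpose(1, 2)   # (B, H*W, C)
        refined = self.decoder(queries, memory)     # (B, Q, C)
        pixel_emb = self.pixel_proj(feats)          # (B, C, H, W)
        # Mask logits: dot product between each query and every pixel embedding.
        return torch.einsum("bqc,bchw->bqhw", refined, pixel_emb)


if __name__ == "__main__":
    model = QueryBasedMaskPredictor()
    target_queries = torch.randn(1, 5, 256)      # 5 hypothetical target queries
    image_features = torch.randn(1, 256, 64, 64)
    print(model(target_queries, image_features).shape)  # torch.Size([1, 5, 64, 64])
```

In the actual papers the queries are produced per task (e.g. from user clicks in DynaMITe, or as task-defined target queries in TarViS) and the architectures are considerably more involved; the sketch only illustrates the shared query-to-mask idea stated in the abstracts above.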