DINO in the Room: Leveraging 2D Foundation Models for 3D Segmentation

¹RWTH Aachen University  ²Eindhoven University of Technology  ³Bosch Center for AI
DITR architecture overview. We extract 2D image features from a frozen DINOv2 model and unproject them (2D-to-3D) onto the 3D point cloud. The unprojected features are then max-pooled to create a multi-scale feature hierarchy. The raw point cloud is fed through a 3D backbone, and at each level the unprojected image features are added to the skip connection between the encoder block \(\mathcal{E}_l\) and the decoder block \(\mathcal{D}_l\). The model is then trained with the standard segmentation loss.
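
To make the injection step concrete, the following is a minimal sketch of how unprojected DINOv2 features could be max-pooled into a multi-scale hierarchy and added to the skip connections. This is not the authors' implementation: the clusters pooling assignments, the proj layers, and the encoder/decoder names are illustrative assumptions.

import torch

def max_pool(feats: torch.Tensor, cluster: torch.Tensor) -> torch.Tensor:
    # Max-pool per-point features (N, C) into M coarser points, where
    # cluster (N,) assigns every point to one of M pooled points.
    M = int(cluster.max()) + 1
    out = feats.new_full((M, feats.shape[1]), float("-inf"))
    idx = cluster.unsqueeze(1).expand_as(feats)
    return out.scatter_reduce(0, idx, feats, reduce="amax", include_self=True)

def build_feature_hierarchy(dino_feats, clusters):
    # One set of pooled DINOv2 features per encoder level; clusters[l] maps
    # points at level l to points at level l + 1 (e.g. from grid pooling).
    pyramid = [dino_feats]
    for cluster in clusters:
        pyramid.append(max_pool(pyramid[-1], cluster))
    return pyramid

# Inside the U-Net forward pass, the pooled features are injected into the
# skip connection at each level l (proj[l] is a learned linear projection
# matching the DINOv2 channel width to the encoder width at that level):
#     x_l  = E_l(x_{l-1})
#     skip = x_l + proj[l](pyramid[l])
#     y_l  = D_l(y_{l+1}, skip)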

Abstract

Vision foundation models (VFMs) trained on large-scale image datasets provide high-quality features that have significantly advanced 2D visual recognition. However, their potential in 3D vision remains largely untapped, despite the common availability of 2D images alongside 3D point cloud datasets. While significant research has been dedicated to 2D-3D fusion, recent state-of-the-art 3D methods predominantly focus on 3D data, leaving the integration of VFMs into 3D models underexplored. In this work, we challenge this trend by introducing DITR, a simple yet effective approach that extracts 2D foundation model features, projects them to 3D, and finally injects them into a 3D point cloud segmentation model. DITR achieves state-of-the-art results on both indoor and outdoor 3D semantic segmentation benchmarks. To enable the use of VFMs even when images are unavailable during inference, we further propose to distill 2D foundation models into a 3D backbone as a pretraining task. By initializing the 3D backbone with knowledge distilled from 2D VFMs, we create a strong basis for downstream 3D segmentation tasks, ultimately boosting performance across various datasets.
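
The 2D-to-3D projection mentioned in the abstract can be sketched as a standard pinhole projection followed by feature sampling. A minimal sketch, assuming known camera intrinsics K, a world-to-camera extrinsic matrix, and a DINOv2 feature map already resized to image resolution; all names are illustrative, not the authors' API.

import torch
import torch.nn.functional as F

def unproject_features(points, feat_map, K, world_to_cam):
    # points: (N, 3) world coordinates; feat_map: (C, H, W) DINOv2 features
    # (assumed resized to image resolution); K: (3, 3) intrinsics;
    # world_to_cam: (4, 4) extrinsics.
    N = points.shape[0]
    homo = torch.cat([points, points.new_ones(N, 1)], dim=1)   # (N, 4)
    cam = (world_to_cam @ homo.T).T[:, :3]                     # camera coords
    valid = cam[:, 2] > 0                                      # in front of the camera
    pix = (K @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)             # pixel coords (u, v)

    C, H, W = feat_map.shape
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([pix[:, 0] / (W - 1), pix[:, 1] / (H - 1)], dim=1) * 2 - 1
    valid &= (grid.abs() <= 1).all(dim=1)                      # inside the image
    sampled = F.grid_sample(feat_map[None], grid[None, :, None, :],
                            mode="bilinear", align_corners=True)[0, :, :, 0].T
    sampled[~valid] = 0                                        # unobserved points get zeros
    return sampled, valid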

Inference Pipeline
DITR (a) and D-DITR (b). In addition to our DITR injection approach, we also present D-DITR, which distills DINOv2 features into 3D semantic segmentation models so that no images are required during inference.
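
As a rough illustration of the D-DITR pretraining objective, the sketch below distills unprojected DINOv2 features into a 3D backbone using a cosine distance on points visible in at least one image. The loss choice, the distill_head projection, and all names are assumptions for illustration, not the paper's exact recipe.

import torch.nn.functional as F

def distillation_loss(backbone, distill_head, points, dino_targets, visible):
    # points: input point cloud; dino_targets: (N, C_dino) unprojected DINOv2
    # features; visible: (N,) bool mask of points seen in at least one image.
    feats = distill_head(backbone(points))             # (N, C_dino)
    pred = F.normalize(feats[visible], dim=-1)
    target = F.normalize(dino_targets[visible], dim=-1)
    return (1 - (pred * target).sum(dim=-1)).mean()    # cosine distance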

DITR Quantitative Results

D-DITR Quantitative Results

BibTeX

@article{abouzeid2025ditr,
  title   = {{DINO in the Room: Leveraging 2D Foundation Models for 3D Segmentation}},
  author  = {Abou Zeid, Karim and Yilmaz, Kadir and de Geus, Daan and Hermans, Alexander and Adrian, David and Linder, Timm and Leibe, Bastian},
  journal = {arXiv preprint arXiv:2503.18944},
  year    = {2025}
}