
Master's thesis: Scene Completion in 3DGS for Street-Level Mapping with Hexagon AB

13.05.2026, Student Assistants, Internships, Student Theses


Description

High-fidelity 3D maps of urban environments are central to a wide range of applications, from autonomous vehicle development and infrastructure inspection to city planning and digital twin construction. Street-level mapping systems, such as mobile mapping systems (MMS) equipped with LiDAR scanners and synchronized cameras, are capable of capturing detailed spatial data at scale. However, a persistent challenge in this setting is the presence of dynamic obstructions, most commonly parked vehicles, which occlude portions of the scene and introduce gaps or “holes” in the reconstructed map. Accurately detecting and filling these gaps is a prerequisite for producing geometrically complete and visually coherent representations that are directly usable in downstream applications.

In recent years, 3D Gaussian Splatting (3DGS) has emerged as a powerful framework for representing 3D scenes. 3DGS encodes a scene as a collection of millions of small, colored, oriented volumetric primitives that can be rendered photorealistically from any viewpoint in real time. The GS-RoadPatching paper (Chen, 2025) demonstrated that object removal and road surface completion can be performed directly within the 3DGS representation, without retraining or generative models.
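
To give a feel for the representation, a single 3DGS primitive can be modeled roughly as below. This is a minimal illustrative sketch, not code from any particular 3DGS implementation; the field names are our own, and only the covariance factorization Sigma = R S S^T R^T follows the original 3DGS formulation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    """One 3DGS primitive: an oriented, colored 3D Gaussian."""
    mean: np.ndarray      # (3,) center position in world space
    scale: np.ndarray     # (3,) per-axis standard deviations
    rotation: np.ndarray  # (4,) unit quaternion (w, x, y, z)
    color: np.ndarray     # (3,) RGB (real systems store SH coefficients)
    opacity: float        # in [0, 1]

    def covariance(self) -> np.ndarray:
        """Sigma = R S S^T R^T, the anisotropic covariance of the splat."""
        w, x, y, z = self.rotation
        # Rotation matrix from the unit quaternion
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        S = np.diag(self.scale)
        M = R @ S
        return M @ M.T
```

A full scene is simply a large array of such primitives, which is what makes editing operations like removal and transplantation tractable.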

The key insight is that road surfaces are geometrically repetitive: a structurally similar patch of road can typically be found elsewhere in the same scene and transplanted into the gap left by a removed vehicle. However, the method has not been publicly released as code, limiting its adoption, and its evaluation has been restricted to specific driving datasets, without a systematic investigation of how it performs under different sensor configurations or scene conditions.
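
To make the search-and-transplant idea concrete, here is a minimal, hypothetical sketch of patch matching over road points. The descriptor (mean color plus height statistics) and the coarse grid search are illustrative placeholders, not the structural similarity measure actually used in GS-RoadPatching (Chen, 2025).

```python
import numpy as np

def patch_descriptor(points: np.ndarray, colors: np.ndarray) -> np.ndarray:
    """Crude appearance/geometry descriptor for a road patch:
    mean color plus height spread. A stand-in for a real similarity measure."""
    z = points[:, 2]
    return np.concatenate([colors.mean(axis=0), [z.std(), z.max() - z.min()]])

def find_best_patch(hole_center, patch_size, road_pts, road_cols, target_desc):
    """Scan square candidate windows over the drivable surface and return the
    translation that would move the most similar patch into the hole."""
    best_score, best_shift = np.inf, None
    # Candidate centers: a coarse grid over the road extent (illustrative)
    xs = np.arange(road_pts[:, 0].min(), road_pts[:, 0].max(), patch_size / 2)
    ys = np.arange(road_pts[:, 1].min(), road_pts[:, 1].max(), patch_size / 2)
    for cx in xs:
        for cy in ys:
            mask = (np.abs(road_pts[:, 0] - cx) < patch_size / 2) & \
                   (np.abs(road_pts[:, 1] - cy) < patch_size / 2)
            if mask.sum() < 10:  # skip sparsely populated windows
                continue
            desc = patch_descriptor(road_pts[mask], road_cols[mask])
            score = np.linalg.norm(desc - target_desc)
            if score < best_score:
                best_score = score
                best_shift = np.array(
                    [hole_center[0] - cx, hole_center[1] - cy, 0.0])
    return best_shift, best_score
```

In the actual method the matching operates on the Gaussian primitives themselves and the target descriptor comes from the context around the hole, but the overall search-then-place structure is the same.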

The primary aim of this thesis is to address these gaps by re-implementing GS-RoadPatching (Chen, 2025) and conducting a rigorous evaluation on two complementary datasets: the Waymo Open Dataset (Sun et al., 2020), which provides synchronized multi-camera imagery and LiDAR suitable for building 3DGS scenes, and real industrial-scale MMS data provided by Hexagon.

Tasks

  • Conduct a targeted literature review on 3D scene representation and completion, object removal in driving scenes, and patch-based inpainting methods.
  • Reconstruct 3DGS scenes from the Waymo Open Dataset using existing open-source pipelines as the baseline for scene completion.
  • Re-implement the GS-RoadPatching pipeline, covering vehicle segmentation and removal, candidate patch search within the reconstructed Gaussian scene, and patch placement to complete the missing region.
  • Apply and demonstrate the pipeline on Hexagon MMS data, evaluating how well it generalizes beyond the original dataset and sensor setup.
  • Ensure the developed system is supported by clean, maintainable, and scalable code, adhering to best practices in software engineering and reproducible research.

Requirements

  • Strong interest in 3D computer vision, scene reconstruction, and their application to real-world mapping and spatial data problems.
  • Proficiency in Python; familiarity with PyTorch or a similar deep learning framework is expected. Prior exposure to 3D data formats (point clouds, radiance fields, or Gaussian splatting) is advantageous but not required.
  • Ability to commit 30 to 40 hours per week over six months, with the capacity to work independently and communicate progress regularly with both the academic supervisor and the industry partner.

Please send your resume and a copy of your transcript to salman.ahmed@tum.de, and include a short statement (about five lines) on why you are motivated to pursue this thesis.


References

  1. Chen et al. (2025). GS-RoadPatching: Inpainting Gaussians via 3D Searching and Placing for Driving Scenes. In Proceedings of the SIGGRAPH Asia 2025 Conference Papers, pp. 1–11.
  2. Sun, P., Kretzschmar, H., et al. (2020). Scalability in Perception for Autonomous Driving: Waymo Open Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2446–2454.

Contact: salman.ahmed@tum.de