IDP/Guided Research: Anatomy-Aware Deep Learning for 2D-to-3D Orthopedic Reconstruction

23.03.2026, Final Theses, Bachelor's and Master's Theses

Accurate 3D bone models are vital for Total Knee Arthroplasty (TKA). While deep learning can reconstruct 3D CT volumes from 2D X-rays, bone boundaries often remain blurry. This project aims to improve the structural fidelity of these reconstructions: by developing a 3D bone segmentation model and integrating it as a differentiable, anatomy-aware loss, we will penalize structural inaccuracies, prevent adjacent bones from blending together, and enforce the sharp 3D boundaries required for precise surgical planning.

Problem Statement

Pre-operative planning for Total Knee Arthroplasty (TKA) requires highly precise 3D bone morphology to ensure optimal implant sizing and alignment. While deep learning models can reconstruct 3D CT volumes directly from standard long-leg 2D X-rays, the resulting bone structures are often blurry. Standard pixel-level training losses do not "understand" human anatomy, causing adjacent bones to melt together and blurring the crisp boundaries required for clinical use.

We hypothesize that a structurally accurate 3D reconstruction is one that can be easily and perfectly segmented. To achieve this, we need to teach our generative models (GANs/Diffusion) the actual structure of the human leg.

In this project, you will develop a dedicated 3D segmentation model for the lower body that accurately isolates the femur, tibia, patella, and fibula. Your segmentation model will then be integrated into our 2D-to-3D pipeline as a differentiable "anatomy-aware loss."
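The idea of an anatomy-aware loss can be sketched in PyTorch. The snippet below is a minimal illustration, not the project's actual implementation: it assumes a pre-trained 3D segmentation network (here a placeholder `seg_model`) is frozen and applied to the reconstructed CT, and a soft Dice loss compares its predictions to ground-truth bone labels, so gradients flow only into the reconstruction pipeline.

```python
import torch
import torch.nn as nn

def soft_dice(probs, target, eps=1e-6):
    # Soft Dice over (B, C, D, H, W) probabilities vs. one-hot targets,
    # reduced over batch and spatial dims, then averaged over classes.
    dims = (0, 2, 3, 4)
    inter = (probs * target).sum(dims)
    denom = probs.sum(dims) + target.sum(dims)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

class AnatomyAwareLoss(nn.Module):
    """Penalize reconstructions whose bones a frozen segmenter cannot recover."""

    def __init__(self, seg_model):
        super().__init__()
        self.seg_model = seg_model.eval()
        for p in self.seg_model.parameters():
            # Freeze the segmenter: gradients flow only into the reconstruction.
            p.requires_grad_(False)

    def forward(self, recon_ct, gt_seg_onehot):
        # Segment the *reconstructed* CT and compare to ground-truth labels.
        probs = torch.softmax(self.seg_model(recon_ct), dim=1)
        return soft_dice(probs, gt_seg_onehot)
```

In practice this term would be added, with a weighting factor, to the generative model's existing reconstruction or adversarial loss.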

Requirements

  • Strong Coding: Proficiency in Python and PyTorch.
  • Deep Learning Core: Solid understanding of 3D image segmentation (e.g., U-Net), custom loss functions, and backpropagation.
  • Independent Research: Ability to independently read, understand, and implement concepts from deep learning papers.
  • Medical Imaging (Optional): Experience handling 3D data (NIfTI, DICOM) and libraries like MONAI or TotalSegmentator.

Goals

  • Develop a 3D Segmentation Model: Train a lightweight, efficient network to accurately segment the key long-leg bones (femur, tibia, patella, and fibula) from 3D CT volumes.
  • Design an Anatomy-Aware Loss: Integrate your trained segmentation model into our generative 2D-to-3D pipeline as a custom, differentiable loss function to penalize structural inaccuracies.
  • Evaluate Downstream Performance: Assess the impact of your loss function on the final 2D-to-3D reconstruction, specifically measuring improvements in anatomical accuracy, bone boundary sharpness, and clinical viability for TKA planning.
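For the evaluation goal, a common starting point is per-bone Dice overlap between predicted and reference label volumes. The sketch below is a plain-PyTorch illustration under the assumption that both volumes are discrete label maps with one integer id per bone; boundary-sharpness metrics such as surface distances would come on top of this.

```python
import torch

def per_class_dice(pred_labels, gt_labels, num_classes, eps=1e-6):
    # Hard Dice per foreground class for discrete 3D label volumes (D, H, W).
    scores = []
    for c in range(1, num_classes):  # class 0 is background, skipped
        p = (pred_labels == c)
        g = (gt_labels == c)
        inter = (p & g).sum().float()
        denom = p.sum().float() + g.sum().float()
        scores.append(((2 * inter + eps) / (denom + eps)).item())
    return scores
```

With four bone classes (femur, tibia, patella, fibula) this returns one score per bone, which makes it easy to report where the anatomy-aware loss helps most.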

References

  • Wasserthal, J., Breit, H. C., Meyer, M. T., Pradella, M., Hinck, D., Sauter, A. W., Heye, T., Boll, D. T., Cyriac, J., Yang, S., Bach, M., & Segeroth, M. (2023). TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiology. Artificial intelligence, 5(5), e230024. https://doi.org/10.1148/ryai.230024
  • Ying, X., Guo, H., Ma, K., Wu, J., Weng, Z., & Zheng, Y. (2019). X2CT-GAN: reconstructing CT from biplanar X-rays with generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 10619-10628).

How to Apply

If you are interested in this project, please send an email with a brief introduction, your current CV, and your academic transcript to tim.mach@tum.de.

Contact: tim.mach@tum.de