Master's thesis: Conditional Generative Models for Contrastive Learning in Medical Image Classification
29.08.2025, Theses, Bachelor's and Master's Theses
Background:
Medical image analysis is often challenged by limited labeled data, strong class imbalance, and subtle differences between classes (e.g., disease vs. healthy tissue, or early vs. advanced disease stages). Recent advances in generative models, such as VAEs, GANs, and diffusion models, have shown great potential for generating realistic samples and even performing class-to-class transformations while preserving semantic content. At the same time, contrastive learning has emerged as a powerful approach for learning discriminative and robust representations, often leading to superior downstream classification performance. Combining these two directions, i.e., using conditional generative models to produce class-specific augmentations or hard negatives for contrastive learning, is a promising research idea that could help overcome data scarcity and improve classification robustness in the medical imaging domain. Similar approaches have already been investigated in natural image domains [1-3]; we therefore want to explore their application to the medical field.
Task:
The aim of this thesis is to investigate conditional generative models with respect to their ability to perform class-to-class transformations that enhance contrastive learning for medical image classification. The work will begin with a literature review to identify existing approaches and research gaps, followed by the development or adaptation of a suitable conditional generative model, which will be combined with a contrastive learning setup. This approach will then be applied to multiple available medical imaging datasets (X-ray, MRI, etc.) in order to evaluate its potential for improving representation learning and downstream classification. The thesis will conclude with an analysis of both the quantitative performance and the clinical plausibility of the generated samples, along with a discussion of the strengths and limitations of the proposed approach.
Requirements:
- Master's student in Computer Science or a related field
- Strong background in Machine/Deep Learning
- Strong experience with Python and Deep Learning frameworks (e.g. PyTorch)
- Strong motivation and interest in interdisciplinary research
[1] Z. Zang et al., “DiffAug: enhance unsupervised contrastive learning with domain-knowledge-free diffusion-based data augmentation,” in Proceedings of the 41st International Conference on Machine Learning, ICML 2024
[2] D. Zeng, Y. Wu, X. Hu, X. Xu, and Y. Shi, “Contrastive Learning with Synthetic Positives,” ECCV 2024
[3] S. A. Koohpayegani, A. Singh, K. L. Navaneet, H. Pirsiavash, and H. Jamali-Rad, "GeNIe: Generative Hard Negative Images Through Diffusion," 2024, arXiv:2312.02548.
Contact: alexander.geiger@tum.de