This project investigates advanced strategies for enhancing 3D visualization of Magnetic Resonance Imaging (MRI) data through volume rendering. While transfer function–based rendering is highly effective for Computed Tomography (CT), where intensities follow the standardized Hounsfield scale, MRI intensities are sequence-dependent and lack a consistent physical scale, making direct volume rendering less robust and often ambiguous at tissue boundaries. To address these limitations, the project will compare three complementary approaches: direct MRI rendering, segmentation-driven rendering, and MRI-to-synthetic-CT translation. The segmentation approach leverages deep learning–based tissue classification to assign distinct rendering properties to each structure, while synthetic CT generation employs generative models to map MR volumes into a CT-like domain where well-established CT transfer functions can be applied.
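The two rendering strategies described above can be contrasted in a minimal sketch. The Hounsfield control points and per-label properties below are illustrative placeholders, not clinically validated values: a CT transfer function maps intensities on the Hounsfield scale to opacity, while a segmentation-driven pipeline bypasses raw intensities entirely and looks up rendering properties per predicted tissue label.

```python
import numpy as np

# Hypothetical piecewise-linear opacity transfer function over Hounsfield
# units (control points chosen for illustration only): air and lung stay
# transparent, soft tissue is semi-opaque, bone is fully opaque.
HU_POINTS = np.array([-1000.0, -100.0, 40.0, 300.0, 1000.0])
OPACITY_POINTS = np.array([0.0, 0.0, 0.2, 0.6, 1.0])

def ct_opacity(hu):
    """Map Hounsfield units to opacity by linear interpolation."""
    return np.interp(hu, HU_POINTS, OPACITY_POINTS)

# Segmentation-driven alternative: assign rendering properties per tissue
# label predicted by a segmentation model, independent of MR intensity.
LABEL_PROPERTIES = {
    0: {"opacity": 0.0, "color": (0.0, 0.0, 0.0)},   # background
    1: {"opacity": 0.15, "color": (0.9, 0.7, 0.6)},  # soft tissue
    2: {"opacity": 0.9, "color": (1.0, 1.0, 0.9)},   # bone
}

def label_opacity(labels):
    """Look up opacity for each voxel from its segmentation label."""
    lut = np.array([LABEL_PROPERTIES[k]["opacity"]
                    for k in sorted(LABEL_PROPERTIES)])
    return lut[labels]
```

The contrast is the point of the sketch: `ct_opacity` only works because Hounsfield units are comparable across scanners, which is exactly the property MRI lacks; `label_opacity` recovers that consistency by routing rendering decisions through a discrete tissue map.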
The goal of this research is to systematically evaluate the visual quality, anatomical accuracy, and robustness of each method for clinical and research visualization tasks. Emphasis will be placed on identifying boundary artifacts, assessing the interpretability of segmented structures, and exploring the reliability of synthetic CT representations compared to direct MRI-derived renderings. By benchmarking these approaches side by side, the project aims to determine whether segmentation or synthesis-based pipelines can enable MRI data to achieve CT-like clarity in volume rendering, ultimately informing workflows for improved visualization in medical imaging applications.
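One way the side-by-side benchmarking could be sketched is with simple image-level metrics on rendered outputs. The metrics and function names here are assumptions for illustration (the study would likely also use perceptual similarity measures and expert ratings): a plain intensity difference ranks candidate renderings against a CT reference, and mean gradient magnitude serves as a crude proxy for boundary sharpness.

```python
import numpy as np

def mean_abs_diff(img_a, img_b):
    """Mean absolute intensity difference between two rendered images;
    a crude stand-in for richer perceptual comparisons."""
    return float(np.mean(np.abs(img_a.astype(float) - img_b.astype(float))))

def boundary_sharpness(img):
    """Mean gradient magnitude across the image: a simple proxy for how
    crisp tissue boundaries appear in a rendering (higher = sharper)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def rank_methods(reference, renderings):
    """Hypothetical helper: sort method names by closeness of their
    rendering to the reference image (most similar first)."""
    return sorted(renderings,
                  key=lambda name: mean_abs_diff(reference, renderings[name]))
```

For example, given a reference CT rendering and a dictionary of candidate renderings keyed by method name (`"direct"`, `"segmentation"`, `"synthetic_ct"`), `rank_methods` would order the pipelines by agreement with the reference, while `boundary_sharpness` quantifies edge crispness per method.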
Maxime Chamberland