Onat Dalmaz

Ph.D. Student in Electrical Engineering

Stanford University

About

I am a first-year Ph.D. student in Electrical Engineering at Stanford University, advised by Prof. Brian Hargreaves and Prof. Akshay Chaudhari. I received my M.Sc. and B.Sc. in Electrical and Electronics Engineering from Bilkent University, where I was fortunate to work with Prof. Tolga Cukur. My interests lie at the intersection of machine learning, computer vision, medical imaging, and healthcare. My research focuses on developing novel deep generative models for multi-modal medical image synthesis and MRI reconstruction.

The latest version of my curriculum vitae is available here.

Interests
  • Machine Learning
  • Signal Processing
  • Medical Imaging
  • Computer Vision
  • Medical Image Analysis
  • Generative Models
Education
  • Ph.D., Electrical Engineering, 2023 - 2027

    Stanford University

  • M.Sc., Electrical and Electronics Engineering, 2020 - 2023

    Bilkent University

  • B.Sc., Electrical and Electronics Engineering, 2016 - 2020

    Bilkent University

Journal Publications


Unsupervised Medical Image Translation with Adversarial Diffusion Models

Imputation of missing images via source-to-target modality translation can facilitate downstream tasks in medical imaging. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GANs). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved reliability in medical image synthesis. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are coupled with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with two coupled diffusion processes that synthesize the target given the source and the source given the target. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance over competing baselines.
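To make the sampling idea concrete, here is a minimal PyTorch sketch of one source-conditioned reverse-diffusion step: a denoiser predicts the clean target from the noisy target and the source image, then re-noises to the previous noise level. All names, shapes, and the step rule are illustrative assumptions rather than SynDiff's actual implementation, and the adversarial projector that makes large steps feasible is omitted for brevity.

    # Hedged sketch: a source-conditioned denoiser and one large reverse step.
    # CondDenoiser and reverse_step are illustrative names, not SynDiff's API.
    import torch
    import torch.nn as nn

    class CondDenoiser(nn.Module):
        """Predicts a denoised target image from a noisy target and the source image."""
        def __init__(self, channels=1, width=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2 * channels, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, channels, 3, padding=1),
            )

        def forward(self, x_t, source):
            # Condition on the source modality by channel-wise concatenation.
            return self.net(torch.cat([x_t, source], dim=1))

    def reverse_step(denoiser, x_t, source, alpha_prev):
        # Estimate the clean target, then re-noise it to the previous noise
        # level, covering many conventional diffusion steps in one jump.
        x0_hat = denoiser(x_t, source)
        noise = torch.randn_like(x_t)
        return alpha_prev**0.5 * x0_hat + (1 - alpha_prev)**0.5 * noise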

One Model to Unite Them All: Personalized Federated Learning of Multi-Contrast MRI Synthesis

Multi-institutional collaborations are key for learning generalizable MRI synthesis models that translate source- onto target-contrast images. To facilitate collaboration, federated learning (FL) adopts decentralized training and mitigates privacy concerns by avoiding sharing of imaging data. However, FL-trained synthesis models can be impaired by the inherent heterogeneity in the data distribution, with domain shifts evident when common or variable translation tasks are prescribed across sites. Here we introduce the first personalized FL method for MRI synthesis (pFLSynth) to improve reliability against domain shifts. pFLSynth is based on an adversarial model that produces latents specific to individual sites and source-target contrasts, and leverages novel personalization blocks to adaptively tune the statistics and weighting of feature maps across the generator stages given these latents. To further promote site specificity, partial model aggregation is employed over downstream layers of the generator while upstream layers are retained locally. As such, pFLSynth enables training of a unified synthesis model that can reliably generalize across multiple sites and translation tasks. Comprehensive experiments on multi-site datasets clearly demonstrate the enhanced performance of pFLSynth over prior federated methods in multi-contrast MRI synthesis.
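As a rough illustration of partial aggregation, the sketch below averages only parameters designated as shared across sites and leaves the rest local. The name-prefix rule and the placeholder prefix are assumptions for illustration, not pFLSynth's exact scheme.

    # Hedged sketch of partial federated averaging: parameters matching a
    # "shared" prefix are averaged across sites; all other layers stay local.
    # The prefix "generator.decoder" is a made-up placeholder.
    import torch

    def partial_fedavg(site_states, shared_prefixes=("generator.decoder",)):
        merged = {}
        for name in site_states[0]:
            if any(name.startswith(p) for p in shared_prefixes):
                merged[name] = torch.stack(
                    [sd[name].float() for sd in site_states]
                ).mean(dim=0)
        return merged

    # Each site then loads the merged weights non-strictly, so its
    # personalized (local) layers are left untouched:
    #   model.load_state_dict(partial_fedavg(states), strict=False)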

BolT: Fused Window Transformers for fMRI Time Series Analysis

Deep-learning models have enabled performance leaps in the analysis of high-dimensional functional MRI (fMRI) data. Yet, many previous methods are suboptimally sensitive to contextual representations across diverse time scales. Here, we present BolT, a blood-oxygen-level-dependent transformer model for analyzing multi-variate fMRI time series. BolT leverages a cascade of transformer encoders equipped with a novel fused window attention mechanism. Encoding is performed on temporally overlapped windows within the time series to capture local representations. To integrate information across time, cross-window attention is computed between base tokens in each window and fringe tokens from neighboring windows. To gradually transition from local to global representations, the extent of window overlap, and thereby the number of fringe tokens, is progressively increased across the cascade. Finally, a novel cross-window regularization is employed to align high-level classification features across the time series. Comprehensive experiments on large-scale public datasets demonstrate the superior performance of BolT against state-of-the-art methods. Furthermore, explanatory analyses that identify landmark time points and regions contributing most significantly to model decisions corroborate prominent neuroscientific findings in the literature.
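The windowing scheme can be pictured with a short sketch: queries come from the base tokens of each window, while keys and values additionally include fringe tokens borrowed from neighboring windows. Window and fringe sizes here are arbitrary choices for illustration, not BolT's settings.

    # Hedged sketch of cross-window attention with fringe tokens.
    import torch
    import torch.nn.functional as F

    def fused_window_attention(tokens, window=20, fringe=5):
        # tokens: (T, D) embeddings of fMRI time points.
        T, D = tokens.shape
        outputs = []
        for start in range(0, T, window):
            q = tokens[start:start + window]          # base tokens (queries)
            lo = max(0, start - fringe)
            hi = min(T, start + window + fringe)
            kv = tokens[lo:hi]                        # base + fringe tokens
            attn = F.softmax(q @ kv.T / D**0.5, dim=-1)
            outputs.append(attn @ kv)
        return torch.cat(outputs, dim=0)

Growing the fringe size across the encoder cascade widens each window's effective receptive field, mirroring the local-to-global transition described above.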

ResViT: Residual Vision Transformers for Multimodal Medical Image Synthesis

Generative adversarial models with convolutional neural network (CNN) backbones have recently been established as state-of-the-art in numerous medical image synthesis tasks. However, CNNs are designed to perform local processing with compact filters, and this inductive bias compromises learning of contextual features. Here, we propose a novel generative adversarial approach for medical image synthesis, ResViT, that leverages the contextual sensitivity of vision transformers along with the precision of convolution operators and the realism of adversarial learning. ResViT's generator employs a central bottleneck comprising novel aggregated residual transformer (ART) blocks that synergistically combine residual convolutional and transformer modules. Residual connections in ART blocks promote diversity in captured representations, while a channel compression module distills task-relevant information. A weight-sharing strategy is introduced among ART blocks to mitigate computational burden. A unified implementation avoids the need to rebuild separate synthesis models for varying source-target modality configurations. Comprehensive demonstrations are performed for synthesizing missing sequences in multi-contrast MRI, and CT images from MRI. Our results indicate the superiority of ResViT over competing CNN- and transformer-based methods in terms of both qualitative observations and quantitative metrics.
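The hybrid bottleneck idea can be summarized in a few lines: a residual convolutional branch and a self-attention branch are computed in parallel, and a 1x1 convolution compresses the concatenated features back to the original width. Layer sizes and the exact wiring are illustrative assumptions rather than ResViT's actual ART block.

    # Hedged sketch of a conv + transformer residual block with channel compression.
    import torch
    import torch.nn as nn

    class ARTBlockSketch(nn.Module):
        def __init__(self, channels=256, heads=8):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
            self.compress = nn.Conv2d(2 * channels, channels, 1)  # distill features

        def forward(self, x):
            b, c, h, w = x.shape
            conv_out = x + self.conv(x)              # residual convolutional path
            seq = x.flatten(2).transpose(1, 2)       # (B, H*W, C) token sequence
            attn_out, _ = self.attn(seq, seq, seq)   # contextual (transformer) path
            attn_out = attn_out.transpose(1, 2).reshape(b, c, h, w)
            return self.compress(torch.cat([conv_out, attn_out], dim=1))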

Conference Proceedings

Adversarial Diffusion Models for Unsupervised Medical Image Synthesis

Presented at the Medical Imaging Meets NeurIPS Workshop, NeurIPS 2022 (oral)

pFLSynth: Personalized Federated Learning of Image Synthesis in Multi-Contrast MRI

Presented at the Medical Imaging Meets NeurIPS Workshop, NeurIPS 2022 (oral)

Cycle-Consistent Adversarial Transformers for Unpaired MR Image Translation

Presented at SPIE Medical Imaging 2022: Computer-Aided Diagnosis, San Diego, USA