Berk Tınaz

Hi there! I am a PhD student in Electrical and Computer Engineering at the University of Southern California (USC). I'm fortunate to be advised by Prof. Mahdi Soltanolkotabi at the USC Center on AI Foundations for Science (AIF4S).

Previously, I was an undergraduate student in the Department of Electrical and Electronics Engineering at Bilkent University, where I worked in the Imaging and Computational Neuroscience Laboratory (ICON Lab) at the National Magnetic Resonance Research Center under the supervision of Prof. Tolga Çukur, focusing on deep learning for accelerated MRI synthesis and reconstruction.

Email  /  CV  /  Google Scholar  /  Twitter  /  GitHub  /  LinkedIn

News
  • (Sep 2024) Visiting the Simons Institute for the semester as part of the "Modern Paradigms in Generalization" and "Special Year on Large Language Models and Transformers" long programs!
  • (Aug 2024) Wrapped up my internship at Amazon.
  • (May 2024) DiracDiffusion (Poster) and Adapt-and-Diffuse (Spotlight) got accepted to ICML 2024!
  • (Jan 2024) Will be joining Amazon in the San Diego office as an Applied Science Intern for the summer of 2024!
  • (Dec 2022) Obtained my M.Sc. degree in EE!
  • (Apr 2022) Will be attending the Princeton ML Theory Summer School organized by Boris Hanin this June. Excited to visit the beautiful campus of Princeton University and the IAS!
  • (Dec 2021) Passed the SIPI screening exam (ranked 1st in the department)!
  • (May 2021) Will be attending CIFAR's Deep Learning + Reinforcement Learning (DLRL) and MLSS summer schools.
  • (Apr 2020) Website is live! Received offers from UCLA, USC, and UBC. Very excited to join USC for my PhD studies next fall.
Research

My current research focuses on analyzing the convergence of shallow neural networks with small initialization, as well as developing algorithms for inverse problems such as denoising, deblurring, and MRI reconstruction. I also have experience working with large language models (LLMs) from past projects, including knowledge injection via continual pretraining during my internship at Amazon and investigating their ability to provide self-feedback. Recently, I've become interested in the mechanistic interpretability of vision-language models (VLMs) and diffusion models. Selected papers are shown below.

Adapt and Diffuse: Sample-adaptive Reconstruction via Latent Diffusion Models
Zalan Fabian*, Berk Tinaz*, Mahdi Soltanolkotabi (* denotes equal contribution)
ICML (Spotlight), 2024
NeurIPS Deep Inverse Workshop, 2023
GitHub / Paper Link

Latent diffusion-based reconstruction of degraded images that estimates the severity of degradation and starts reverse diffusion sampling accordingly, achieving sample-adaptive inference times.

DiracDiffusion: Denoising and Incremental Reconstruction with Assured Data-Consistency
Zalan Fabian, Berk Tinaz, Mahdi Soltanolkotabi
ICML, 2024
GitHub / Paper Link

A novel framework for solving inverse problems that maintains consistency with the original measurement throughout the reverse process and offers great flexibility in trading off perceptual quality for improved distortion metrics and sampling speedup via early stopping.

HUMUS-Net: Hybrid Unrolled Multi-scale Network Architecture for Accelerated MRI Reconstruction
Zalan Fabian, Berk Tinaz, Mahdi Soltanolkotabi
NeurIPS, 2022
GitHub / Paper Link

A hybrid architecture that combines the implicit bias and efficiency of convolutions with the power of Transformer blocks in an unrolled, multi-scale network, establishing state-of-the-art results on the fastMRI dataset.


Website template is proudly stolen from Jon Barron (source code).