Julian Rosenberger

PhD Researcher | Interpretable ML & Human-AI Collaboration

I empirically validate interpretable ML systems through rigorous behavioral experiments and domain expert collaboration. Using pre-registered studies (N>250) and inferential statistics, I test which properties of transparent systems actually drive user understanding and adoption. My work challenges common assumptions about interpretability, revealing that adjustability and personalization often matter more than static transparency alone.

Focus Areas

Interpretable Machine Learning · Human-AI Collaboration · Behavioral Experiments · Pre-registered Studies · Conversational XAI · Trust Calibration · Cognitive Load · GAMs · LLMs

Selected Papers (9 total)

Navigating the Rashomon Effect
Developed an adaptive interpretable-ML framework that uses Contextual Bandits to leverage the Rashomon Effect, demonstrating that users develop distinct individual preferences for model complexity while the framework maintains predictive accuracy and interpretability.
Quantifying Visual Properties of GAM Shape Plots
Established objective computational metrics (e.g., number of "visual chunks," number of kinks) that explain 86% of the variance in users' cognitive load when interpreting ML explanations, enabling interpretability assessment without subjective user testing.
Understanding Data-Sharing with AI Systems
Revealed through a pre-registered experiment (N=240) that AI transparency alone does not increase data-sharing but does amplify willingness to share when users already trust AI, challenging the assumption that transparency universally promotes adoption.
Leveraging Interpretable ML in Intensive Care
Reduced the ICU prediction feature set by 97% while maintaining performance, demonstrating that interpretable GAMs achieve parity with XGBoost; the models were validated by physicians, challenging common assumptions about the performance-interpretability tradeoff.
CareerBERT: Matching Resumes to Jobs
Built a domain-adapted SBERT model for job recommendation that uses the ESCO taxonomy to mitigate hiring bias; validated by HR experts, it achieves strong performance (MAP@20: 0.71) across white-collar, blue-collar, and atypical career paths.
View Full Publication List on Google Scholar
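At its core, the CareerBERT entry above embeds resumes and job descriptions into a shared vector space and ranks jobs by cosine similarity. A minimal sketch of that retrieval step, using toy vectors in place of a fine-tuned SBERT model (the vectors, postings, and scores below are illustrative assumptions, not the actual CareerBERT pipeline or data):

```python
import numpy as np

def cosine_rank(resume_vec, job_vecs):
    """Rank job postings by cosine similarity to a resume embedding."""
    resume = resume_vec / np.linalg.norm(resume_vec)
    jobs = job_vecs / np.linalg.norm(job_vecs, axis=1, keepdims=True)
    scores = jobs @ resume        # cosine similarity per posting
    order = np.argsort(-scores)   # best match first
    return order, scores[order]

# Toy 4-dimensional "embeddings" standing in for SBERT output (illustrative only).
resume = np.array([0.9, 0.1, 0.0, 0.2])
jobs = np.array([
    [0.8, 0.2, 0.1, 0.1],   # closely related posting
    [0.0, 0.9, 0.3, 0.0],   # unrelated posting
    [0.7, 0.0, 0.1, 0.4],   # another near match
])
order, scores = cosine_rank(resume, jobs)
```

In a real pipeline the vectors would come from a sentence encoder, and a metric such as MAP@20 would be computed over the ranked lists against expert-labeled relevant ESCO categories.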

Current Work

Conversational XAI Interfaces
Developing LLM-powered conversational interfaces that let users interactively query model explanations rather than passively consume static visualizations, addressing scalability challenges in interpretable AI deployment.
Enterprise Job Matching System
Collaborating with Continental AG on deploying an enterprise-scale job-matching system, bridging research insights with production constraints and learning how interpretability requirements differ between research and industry contexts.

Selected Talks

Beyond Ad Matching: Using CareerBERT to Map Resumes to ESCO Job Categories
9th OJA Forum · Bertelsmann Stiftung
Presented CareerBERT's approach to semantic job matching using BERT embeddings and the ESCO taxonomy. Demonstrated how AI can move beyond traditional ad-matching to map unstructured resume data to standardized job categories, with evaluation results from both quantitative metrics and HR expert feedback.
Watch on YouTube (German)

Experience

PhD Researcher
University of Regensburg & TU Dresden · Supervised by Prof. Mathias Kraus & Prof. Patrick Zschech
  • Experimental Research: Designed and executed multiple pre-registered behavioral experiments (N>250) using custom React/Python platforms to test interpretability assumptions.
  • Technical Implementation: Implemented interpretable ML systems using GAMs, Contextual Bandits, and LLMs; currently developing conversational XAI interfaces.
  • Validation: Established multi-method validation combining experiments, domain expert evaluation (physicians, HR professionals), and objective metrics.
Student Research Assistant
FAU Erlangen-Nuremberg
Contributed to Computer Vision and NLP research pipelines; served as Teaching Assistant for "Introduction to Computer Science" (Harvard CS50 curriculum) supporting 80+ students.

Education & Awards

PhD in Information Systems
University of Regensburg & TU Dresden
Thesis: "The Role of Interpretability in Human-AI Collaboration."
Selected for ECIS 2025 Doctoral Consortium | Best Student Paper, WI 2024.
M.Sc. International Information Systems
FAU Erlangen-Nuremberg
Grade: 1.2 (Distinction). Top 3% of class. Specialization in AI & Human-Computer Interaction.
B.A. Economics (Insurance)
DHBW Mannheim (Dual Study with Allianz)
Grade: 1.6.

Skills & Service

Programming: Python (PyTorch, HuggingFace/Transformers, Scikit-learn), R (dplyr, ggplot2), SQL, JavaScript/React
Cloud & Infrastructure: GCP (BigQuery, Compute Engine), AWS (EC2), Docker, Git/GitHub
Data & Methods: MIMIC-III/IV, ESCO Taxonomy, Vector Embeddings, Pre-registered Experimental Design, Statistical Inference, Mixed-Effects Modeling
Tools: LaTeX, SoSciSurvey/Prolific
Languages: German (Native), English (Fluent · TOEFL 106)
Service: Reviewer for ECIS, WI, Data Technologies and Applications
Teaching: Master's Seminars on Deep Learning, Explainable AI, Scientific Writing