CMU FLAME Center Seminar - Sachin Goyal & Christina Baek
February 14, 2025, 12:30pm — 2:00pm ET
Location: In Person and Virtual - Tepper Building 1403
Speakers: SACHIN GOYAL and CHRISTINA BAEK, Ph.D. Students, Machine Learning Department, Carnegie Mellon University

A standard practice when using large language models is for users to supplement their instructions with an input context containing new information for the model to process. However, models struggle to reliably follow the input context, especially when it conflicts with their parametric knowledge from pretraining. In principle, one would expect models to adapt to the user context better after instruction finetuning, particularly when handling knowledge conflicts. However, we observe a surprising failure mode: during instruction tuning, context reliance under knowledge conflicts initially increases as expected, but then gradually decreases as instruction finetuning progresses, even as performance on standard benchmarks continues to improve well after this drop. We call this phenomenon context-parametric inversion and observe it across multiple general-purpose instruction tuning datasets such as TULU, Alpaca, and UltraChat, and across different model families like Llama, Mistral, and Pythia. Through controlled studies and theoretical analysis, we show that context-parametric inversion arises from examples in the instruction finetuning data where the input context provides information that aligns with the model's parametric knowledge. Our analysis suggests natural mitigation strategies with limited but insightful gains, and serves as a useful starting point for addressing this deficiency in instruction finetuning.

Sachin Goyal is a fourth-year Ph.D. student in the Machine Learning Department (MLD) at CMU, advised by Prof. Zico Kolter. His current research focuses on robust training and fine-tuning of foundation models.

Christina Baek is a fourth-year Ph.D. student in the Machine Learning Department (MLD) at CMU, advised by Zico Kolter and Aditi Raghunathan. She is broadly interested in ML safety and in understanding deep learning through scientific methods. Her research focuses on understanding the out-of-distribution robustness and long-tail behaviors of models. She has worked on strategies for model assessment under real-world shifts with limited labeled data. Lately, she has been interested in ensuring the safety of agentic systems through theory-guided insights into model failures and how they snowball across training, inference, and interaction.

In Person and Zoom Participation. See announcement.
Event Website: https://www.cmu.edu/flame/events/index.html