Computer Science Speaking Skills Talk
May 8, 2024, 2:00pm - 3:00pm
Location: In Person - Gates Hillman 8102
Speaker: VICTOR AKINWANDE, Ph.D. Student, Computer Science Department, Carnegie Mellon University
https://home.victorakinwande.com/

Understanding prompt engineering may not require rethinking generalization

Zero-shot learning in prompted vision-language models, the practice of crafting prompts to build classifiers without an explicit training process, has achieved impressive performance in many settings. This success presents a seemingly surprising observation: these methods suffer relatively little from overfitting. That is, when a prompt is manually engineered to achieve low error on a given training set (thus rendering the method no longer actually zero-shot), the approach still performs well on held-out test data.

In this paper, we show that we can explain such performance well via recourse to classical PAC-Bayes bounds. Specifically, we show that the discrete nature of prompts, combined with a PAC-Bayes prior given by a language model, results in generalization bounds that are remarkably tight by the standards of the literature: for instance, the generalization bound of an ImageNet classifier is often within a few percentage points of the true test error. We demonstrate empirically that this holds for existing handcrafted prompts and for prompts generated through simple greedy search. Furthermore, the resulting bound is well suited to model selection: the models with the best bound typically also have the best test performance.

This work thus provides a possible justification for the widespread practice of "prompt engineering," even though it seems that such methods could potentially overfit the training data.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.
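To illustrate the kind of argument the abstract refers to: for a discrete hypothesis class, a classical Occam-style PAC-Bayes bound charges each hypothesis by the log of its prior probability. Below is a generic form of such a bound, a sketch only, where the prompt p plays the role of the hypothesis and the prior P(p) is supplied by a language model; this is a standard textbook statement, not necessarily the exact bound derived in the talk:

```latex
% Occam-style bound for a discrete prompt p with prior P(p),
% n labeled training examples, and confidence level 1 - \delta.
% With probability at least 1 - \delta over the draw of the training set:
\mathrm{err}_{\mathrm{test}}(p)
  \;\le\;
\mathrm{err}_{\mathrm{train}}(p)
  + \sqrt{\frac{\log \tfrac{1}{P(p)} + \log \tfrac{1}{\delta}}{2n}}
```

Because prompts are short natural-language strings, a language model can assign them relatively large prior probability, keeping the log(1/P(p)) term, and hence the bound, small even after searching over many candidate prompts.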