AcceleratedLiNGAM: Learning Causal DAGs at the speed of GPUs
Diffusing Differentiable Representations
FUSE-ing Language Models: Zero-Shot Adapter Discovery for Prompt Optimization Across Tokenizers
Manifold Preserving Guided Diffusion
On the Joint Interaction of Models, Data, and Features
One-Step Diffusion Distillation through Score Implicit Matching
Rethinking LLM Memorization through the Lens of Adversarial Compression