Computer Science Speaking Skills Talk

Tuesday, March 12, 2019 - 10:00am to 11:00am


Traffic21 Classroom 6501 Gates Hillman Centers



Selective-Backprop: Adaptive Importance Sampling for Training Large Datasets

Speaker: Angela Jiang



This talk presents Selective-Backprop, a mechanism for adaptively selecting high-value training examples. Selective-Backprop uses the output of a candidate training example's forward pass to decide whether to compute gradients and update parameters or to skip immediately to the next candidate. Selective-Backprop is self-paced, sampling training examples with a probability proportional to the Euclidean distance between the current model output for the example and the intended output. Through evaluation on MNIST, CIFAR10, CIFAR100, and SVHN across a variety of modern image models, we show that this mechanism converges to higher accuracy and converges faster per trained example. Selective-Backprop is a simple, lightweight, and generally applicable technique for sampling training examples that both accelerates learning and reduces final error.
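The per-example decision described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' implementation: the functions `model_forward` and `backprop` are hypothetical stand-ins for a real model, and the `scale` parameter (mapping distance to a probability) is an assumption; the paper's exact probability mapping may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def selection_probability(output, target, scale=1.0):
    # Probability proportional to the Euclidean distance between the
    # model's current output and the intended output, clipped to [0, 1].
    # The `scale` factor is an illustrative assumption.
    dist = np.linalg.norm(np.asarray(output) - np.asarray(target))
    return min(1.0, scale * dist)

def selective_backprop_step(model_forward, backprop, batch, targets):
    """For each candidate example: run a forward pass, then either
    backpropagate (with probability proportional to the example's
    output error) or skip immediately to the next candidate."""
    selected = []
    for x, y in zip(batch, targets):
        out = model_forward(x)
        if rng.random() < selection_probability(out, y):
            backprop(x, y)  # compute gradients and update parameters
            selected.append((x, y))
    return selected
```

Examples the model already gets right (zero distance) are skipped with certainty, while high-error examples are almost always backpropagated, which is what makes the sampling self-paced.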

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement

For More Information, Contact: Speaking Skills