Computer Science Speaking Skills Talk

Friday, April 9, 2021 - 12:00pm to 1:00pm


Virtual Presentation (ET) - Remote Access via Zoom



Segcache: a memory-efficient and scalable in-memory key-value cache for small objects

Modern web applications rely heavily on in-memory key-value caches to deliver low-latency, high-throughput services. In-memory caches store small objects, typically tens to thousands of bytes in size, and use TTLs widely for data freshness and implicit deletion. Current solutions carry relatively large per-object metadata and cannot remove expired objects promptly without incurring high overhead. We present Segcache, a segment-structured design that stores data in fixed-size segments and has three key features: (1) it groups objects with similar creation and expiration times into the same segments for efficient expiration and eviction, (2) it approximates and lifts most per-object metadata into a shared segment header and a shared information slot in the hash table to reduce object metadata, and (3) it performs segment-level bulk expiration and eviction with tiny critical sections for high scalability. Evaluation using production traces shows that Segcache uses 22-60% less memory than state-of-the-art designs across a variety of workloads. Segcache simultaneously delivers high throughput, up to 40% better than Memcached on a single thread. In addition, it exhibits close-to-linear scalability, providing a close-to-8x speedup over Memcached with 24 threads.
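The first two features above can be illustrated with a small sketch: objects with similar TTLs are appended to the same fixed-size segment, the creation and expiration timestamps live once in the segment header rather than per object, and expiration drops whole segments at a time. This is a hypothetical simplification for illustration (class names, the TTL-rounding granularity, and the segment capacity are all assumptions, not Segcache's actual implementation):

```python
import time


class Segment:
    """Fixed-size segment; timestamps are stored once in the header,
    not per object (approximating Segcache's shared metadata)."""

    def __init__(self, create_ts, ttl, capacity=4):
        self.create_ts = create_ts            # shared header metadata
        self.expire_ts = create_ts + ttl      # one deadline for all items
        self.capacity = capacity
        self.items = {}                       # key -> value, no per-object timestamps

    def full(self):
        return len(self.items) >= self.capacity


class TtlBucketCache:
    """Hypothetical sketch: group objects with similar TTLs into segments."""

    def __init__(self, ttl_granularity=10):
        self.granularity = ttl_granularity
        self.buckets = {}                     # rounded TTL -> segments, oldest first

    def set(self, key, value, ttl, now=None):
        now = time.time() if now is None else now
        # Approximate the TTL so objects with similar lifetimes share a segment.
        bucket_ttl = (ttl // self.granularity) * self.granularity
        segs = self.buckets.setdefault(bucket_ttl, [])
        if not segs or segs[-1].full():
            segs.append(Segment(now, bucket_ttl))
        segs[-1].items[key] = value

    def get(self, key, now=None):
        now = time.time() if now is None else now
        for segs in self.buckets.values():
            for seg in segs:
                if seg.expire_ts > now and key in seg.items:
                    return seg.items[key]
        return None

    def expire(self, now=None):
        """Bulk expiration: drop whole segments whose shared deadline passed,
        instead of checking every object individually."""
        now = time.time() if now is None else now
        removed = 0
        for segs in self.buckets.values():
            # Segments in a bucket are appended in creation order, so the
            # oldest (earliest-expiring) segment is always at the front.
            while segs and segs[0].expire_ts <= now:
                removed += len(segs[0].items)
                segs.pop(0)
        return removed
```

Because a segment's items share one expiration deadline, removing expired data is a constant-time pop of the segment rather than a scan of its objects, which is what makes the tiny critical sections in the third feature possible.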

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

Zoom Participation. See announcement.

For More Information, Contact:

Speaking Skills