Nathan Beckmann

Associate Professor
Computer Science Department

Office: 9021 Gates and Hillman Centers
Phone: (412) 268-7412
Administrative Support: Michael Stanley

Teaching/Research Statement

I am interested in improving the performance and energy efficiency of future processors. One major challenge for processors today is the rising cost of moving data onto and within the chip. My research develops hardware and software techniques that tackle this important problem.

Reconfigurable memory systems. To reduce data movement, applications need their data placed nearby on-chip, but they also require enough cache capacity to fit their working sets. This project introduces virtual caches, which reconfigure the physical cache banks in the system into an organization that meets both requirements. Essentially, virtual caches schedule data across the chip to achieve an application-specific design while remaining transparent to applications. Implementing data scheduling efficiently requires new hardware mechanisms and algorithms. Virtual caches significantly reduce data movement in multicore processors, e.g., halving the energy spent moving data in a 64-core processor. We are now exploring how to apply this technique to further reduce data movement, e.g., by co-scheduling threads and data, by extending it across a network of processors, and by applying it in tandem with specialized cores.

Analytical caching policies. In addition to scheduling data across the chip, it is important to make the best use of the limited cache capacity available. This project's goal is to understand cache behavior under different access patterns and policies, and then use these insights to develop policies that maximize cache performance. The key challenge is uncertainty in how programs behave, which we address using a formal probabilistic model of memory references. This mathematical model enables accurate predictions of cache behavior, which is useful for managing caches shared among competing applications. It also yields policies that outperform the best heuristics while avoiding their pathologies. Going forward, we are applying these techniques to caches elsewhere in computer systems.