Published in

arXiv, 2020

DOI: 10.48550/arxiv.2001.01653

Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation - PLDI 2019, 2019

DOI: 10.1145/3314221.3314606

A fast analytical model of fully associative caches

Conference paper published in 2019 by Tobias Gysi, Tobias Grosser, Laurin Brandner, and Torsten Hoefler
This paper was not found in any repository; the policy of its publisher is unknown or unclear.

Full text: Unavailable
Preprint: policy unknown
Postprint: policy unknown
Published version: policy unknown

Abstract

While the cost of computation is an easy-to-understand local property, the cost of data movement on cached architectures depends on global state, does not compose, and is hard to predict. As a result, programmers often fail to consider the cost of data movement. Existing cache models and simulators provide the missing information but are computationally expensive. We present a lightweight cache model for fully associative caches with a least recently used (LRU) replacement policy that gives fast and accurate results. We count the cache misses without explicit enumeration of all memory accesses by using symbolic counting techniques twice: 1) to derive the stack distance for each memory access and 2) to count the memory accesses with stack distance larger than the cache size. While this technique seems infeasible in theory, due to non-linearities after the first round of counting, we show that the counting problems are sufficiently linear in practice. Our cache model often computes the results within seconds, and, contrary to simulation, its execution time is mostly independent of problem size. Our evaluation measures modeling errors below 0.6% on real hardware. By providing accurate data placement information, we enable memory-hierarchy-aware software development.
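The stack-distance criterion the abstract relies on can be illustrated with a small explicit-enumeration simulator: an access to a fully associative LRU cache of size C hits exactly when its stack distance (the number of distinct addresses touched since the previous access to the same address) is smaller than C. The sketch below is a hypothetical helper for intuition only; it is the trace-enumeration baseline that the paper's analytical model replaces with symbolic counting.

```python
from collections import OrderedDict

def lru_misses(trace, cache_size):
    """Count misses of a fully associative LRU cache of `cache_size` lines
    by computing each access's stack distance explicitly.

    An access hits iff its stack distance (number of distinct addresses
    accessed since the previous access to the same address) is < cache_size.
    First-time accesses have infinite stack distance and always miss.
    """
    stack = OrderedDict()  # LRU stack: most recently used address last
    misses = 0
    for addr in trace:
        if addr in stack:
            # Distinct addresses touched more recently than the previous
            # access to `addr` are exactly those above it on the stack.
            keys = list(stack.keys())
            distance = len(keys) - 1 - keys.index(addr)
            if distance >= cache_size:
                misses += 1
            stack.move_to_end(addr)  # `addr` becomes most recently used
        else:
            misses += 1  # cold miss
            stack[addr] = True
    return misses
```

For the trace A B C A, the second access to A has stack distance 2 (B and C were touched in between), so it misses in a 2-line cache but hits in a 3-line one. Note the contrast with the paper's approach: this simulator's runtime grows with trace length, whereas the analytical model's runtime is mostly independent of problem size.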