Kempf S, Veldema R, Philippsen M (2014): Combining Lock Inference with Lock-Based Software Transactional Memory
Publication Language: English
Publication Type: Conference contribution, Original article
Publication year: 2014
Publisher: Springer
Edited Volumes: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Series: Lecture Notes in Computer Science (LNCS)
City/Town: Berlin Heidelberg
Book Volume: 8664
Pages Range: 325-341
Conference Proceedings Title: Languages and Compilers for Parallel Computing, 26th International Workshop, LCPC 2013
Event location: San Jose, California, USA
ISBN: 978-3-319-09966-8
DOI: 10.1007/978-3-319-09967-5_19
An atomic block is a language construct that simplifies the programming of critical sections. In the past, software transactional memory (STM) and lock inference have been used to implement atomic blocks. Both approaches have strengths and weaknesses. STM provides fine-grained locking but has high overheads due to logging and potential rollbacks. Lock inference is a static analysis that computes which locks an atomic block must acquire in order to guarantee atomicity. Lock inference avoids both logging overhead and rollbacks, but with a growing number of variables accessed in an atomic block, locking becomes coarse-grained and hence reduces parallelism. The first contribution of this paper is an approach that combines the advantages of both techniques without their drawbacks. A compiler analysis determines whether lock inference can achieve fine-grained synchronization or whether STM is better suited for an atomic block. The generated code then uses lock inference, STM, or a combination of both that allows the atomic block to switch from STM to lock inference during its execution. The second contribution is a pair of optimizations that remove some of the limits of state-of-the-art static lock inference analyses and therefore extend their applicability. These optimizations make more atomic blocks amenable to fine-grained lock inference. We use the STAMP benchmark suite to demonstrate the practicality of our work. The reduced contention due to fine-grained locking and the lower transactional overhead lead to execution times that are between 1.1 and 6.0 times faster than a pure STM or lock inference implementation.
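A minimal sketch of the underlying idea (not the paper's implementation): the critical section below is written the way it would appear inside an atomic block and then hand-lowered to the fine-grained per-object locking that lock inference aims to generate when the accessed objects are statically known. Account, transfer, and the address-based lock ordering are hypothetical names used only for illustration.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;   /* per-object lock used on the lock-inference path */
    long balance;
} Account;

/* Source-level intent: atomic { from->balance -= amount; to->balance += amount; } */
static void transfer(Account *from, Account *to, long amount) {
    /* Lock inference: the analysis sees that exactly two objects are accessed,
     * so the generated code acquires their two locks (in a fixed address order
     * to avoid deadlock) instead of paying STM logging and rollback overhead. */
    Account *first  = ((uintptr_t)from < (uintptr_t)to) ? from : to;
    Account *second = ((uintptr_t)from < (uintptr_t)to) ? to : from;
    pthread_mutex_lock(&first->lock);
    pthread_mutex_lock(&second->lock);

    from->balance -= amount;
    to->balance   += amount;

    pthread_mutex_unlock(&second->lock);
    pthread_mutex_unlock(&first->lock);
}

int main(void) {
    Account a = { PTHREAD_MUTEX_INITIALIZER, 100 };
    Account b = { PTHREAD_MUTEX_INITIALIZER,  50 };
    transfer(&a, &b, 30);
    printf("a=%ld b=%ld\n", a.balance, b.balance);  /* prints a=70 b=80 */
    return 0;
}
```

When the set of objects accessed in an atomic block cannot be bounded this precisely, locking degenerates into a coarse-grained scheme; the paper's approach then falls back to STM code, or to code that starts under STM and switches to inferred locks during execution of the block.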
APA:
Kempf, S., Veldema, R., & Philippsen, M. (2014). Combining Lock Inference with Lock-Based Software Transactional Memory. In C. Cașcaval & P. Montesinos (Eds.), Languages and Compilers for Parallel Computing, 26th International Workshop, LCPC 2013 (pp. 325-341). Berlin, Heidelberg: Springer.
MLA:
Kempf, Stefan, Ronald Veldema, and Michael Philippsen. "Combining Lock Inference with Lock-Based Software Transactional Memory." Proceedings of the 26th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2013), San Jose, California, USA. Ed. Călin Cașcaval and Pablo Montesinos. Berlin, Heidelberg: Springer, 2014. 325-341.