
Talk/Euro LLVM Conference 2012

Reducing dynamic compilation latency - concurrent and parallel dynamic compilation

I received an invitation to present the outcome of my PhD at the Euro LLVM'12 conference. It seems that, at the time, we were the first to have built a production-ready concurrent JIT compiler using the LLVM framework. A few years later, Google and Facebook applied our research results in their virtual machines.

Abstract

The main challenge faced by a dynamic compilation system is to detect frequently executed program regions and translate them into highly efficient native code as fast as possible. Depending on application requirements, state-of-the-art dynamic compilation systems focus either on peak performance, applying many optimisations at the cost of compilation speed, or on response time, trading peak performance of the generated machine code for compilation speed. The faster optimised native code becomes available, the less time is spent executing the unoptimised version, improving overall application performance. Because dynamic compilation adds to the overall execution time, it is often decoupled from the main execution loop and runs in a separate thread. This approach improves application responsiveness by reducing pause times due to dynamic compilation; it does not, however, reduce dynamic compilation latency.

In this talk we want to present two innovative contributions that work together to effectively reduce dynamic compilation latency. The first is an incremental region-based compilation approach that considers all frequently executed paths in a program for dynamic compilation, as opposed to previous trace-based approaches, where compilation is restricted to paths through loops. The second reduces dynamic compilation latency by compiling several hot regions in a concurrent and parallel task farm, using LLVM as the underlying compilation framework.

#SNPS #Work #Talk #PhD