Compiler Optimization-Space Exploration
Spyridon Triantafyllis, Manish Vachharajani, and David I. August
The Journal of Instruction-Level Parallelism (JILP), February 2005.

To meet the performance demands of modern architectures, compilers incorporate an ever-increasing number of aggressive code transformations. Since most of these transformations are not universally beneficial, compilers traditionally control their application through predictive heuristics, which attempt to judge an optimization's effect on final code quality a priori. However, complex target architectures and unpredictable optimization interactions severely limit the accuracy of these judgments, leading to performance degradation because of poor optimization decisions.
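As a concrete, entirely hypothetical illustration of such a predictive heuristic, the Python sketch below decides a priori whether to inline a call site using static estimates alone. The CallSite fields, the weights, and the size_budget threshold are invented for illustration and are not taken from any particular compiler.

```python
# A minimal sketch of a predictive heuristic: the decision is made once,
# up front, from a static cost model, with no feedback from the result.
from dataclasses import dataclass

@dataclass
class CallSite:
    callee_size: int        # instructions in the callee body (estimated)
    call_frequency: float   # estimated execution count of the call
    register_pressure: int  # estimated live registers at the call site

def should_inline(site: CallSite, size_budget: int = 80) -> bool:
    """A priori decision: predict whether inlining will pay off, without
    ever compiling both versions and comparing them."""
    # Predicted benefit: saved call overhead scales with call frequency.
    predicted_benefit = site.call_frequency * 2.0
    # Predicted cost: code growth plus extra register pressure.
    predicted_cost = site.callee_size / size_budget + site.register_pressure * 0.1
    return predicted_benefit > predicted_cost

# The heuristic commits to a single choice; if its cost model mispredicts the
# target machine or interactions with later passes, the compiler never finds out.
print(should_inline(CallSite(callee_size=40, call_frequency=10.0, register_pressure=4)))
```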

This performance loss can be avoided through the iterative compilation approach, which advocates exploring many optimization options and selecting the best one a posteriori. However, existing iterative compilation systems suffer from excessive compile times and narrow application domains. By overcoming these limitations, Optimization-Space Exploration (OSE) becomes the first iterative compilation technique suitable for general-purpose production compilers. OSE narrows the space of optimization options it explores by making limited use of heuristics. A compiler tuning phase further restricts the exploration space. At compile time, OSE prunes the remaining optimization configurations in the search space by exploiting feedback from configurations already tried. Finally, rather than measuring actual runtimes, OSE compares optimization outcomes through static performance estimation, further reducing compile time. An OSE-enhanced version of Intel's reference compiler for the Itanium architecture yields performance improvements of more than 20% on some SPEC benchmarks.
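The Python sketch below gives a minimal, hypothetical rendering of this style of exploration: a tree of configurations assembled during a tuning phase, a static cycle estimator standing in for actual runs, and feedback-driven pruning of subtrees that fall behind the best result seen so far. The ConfigNode structure, the compile_with and estimate_cycles hooks, and the prune_margin value are all assumptions made for illustration, not the mechanism of Intel's compiler or the paper's implementation.

```python
# A sketch of OSE-style exploration under the assumptions stated above.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Config = Dict[str, bool]  # a configuration is just a set of optimization flags

@dataclass
class ConfigNode:
    """A node in a tuning-phase tree of configurations: children are
    refinements that are only tried if the parent looks promising."""
    config: Config
    children: List["ConfigNode"] = field(default_factory=list)

def explore(region: str,
            root: ConfigNode,
            compile_with: Callable[[str, Config], object],
            estimate_cycles: Callable[[object], float],
            prune_margin: float = 1.25):
    """Compile a region under several configurations, compare them with a
    static performance estimate instead of actual runs, and prune subtrees
    whose root falls too far behind the best estimate seen so far."""
    best_code, best_cycles = None, float("inf")
    worklist = [root]
    while worklist:
        node = worklist.pop()
        code = compile_with(region, node.config)
        cycles = estimate_cycles(code)          # static estimate, no execution
        if cycles < best_cycles:
            best_code, best_cycles = code, cycles
        # Feedback-driven pruning: descend only if this configuration is
        # competitive with the best one found so far.
        if cycles <= best_cycles * prune_margin:
            worklist.extend(node.children)
    return best_code

# Toy usage: "compiling" just records the flags; the estimator favors if-conversion.
root = ConfigNode({"unroll": False, "if_convert": False},
                  children=[ConfigNode({"unroll": True, "if_convert": False}),
                            ConfigNode({"unroll": False, "if_convert": True})])
best = explore("hot_loop", root,
               compile_with=lambda region, cfg: (region, cfg),
               estimate_cycles=lambda code: 100 - 20 * code[1]["if_convert"])
print(best)
```

The worklist-plus-margin loop is one simple way to realize "prune using feedback from earlier configurations"; how the configuration tree itself is constructed during the tuning phase is beyond the scope of this sketch.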