Published in

The Journal of Logic Programming, 41(2-3), pp. 279-316

DOI: 10.1016/s0743-1066(99)00031-x

Abstract Multiple Specialization and Its Application to Program Parallelization.

Journal article published in 1999 by Germán Puebla and Manuel V. Hermenegildo
This paper is made freely available by the publisher.

Preprint: archiving allowed
Postprint: archiving forbidden
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or that this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being with respect to abstract values (substitutions) rather than concrete ones. We study the multiple specialization of logic programs based on abstract interpretation. This involves, in principle, and based on information from global analysis, generating several versions of a program predicate for different uses of such a predicate, optimizing these versions, and, finally, producing a new, “multiply specialized” program. While multiple specialization has received theoretical attention, little previous evidence exists on its practicality. In this paper, we report on the incorporation of multiple specialization in a parallelizing compiler and quantify its effects. A novel approach to the design and implementation of the specialization system is proposed. The resulting implementation techniques yield specializations identical to those of the best previously proposed techniques but require little or no modification of some existing abstract interpreters. Our results show that, using the proposed techniques, the resulting “abstract multiple specialization” is indeed a relevant technique in practice. In particular, in the parallelizing compiler application, many run-time tests are eliminated and invariants are extracted automatically from loops, generally lowering overheads and in several cases increasing speedups.
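To illustrate the idea behind multiple specialization in the parallelization setting, here is a minimal sketch in Python rather than Prolog. It assumes a hypothetical run-time independence test guarding a parallel execution path: when global analysis proves the test's outcome for a class of calls, a specialized version of the routine can drop the check entirely. All function names are illustrative, not from the paper.

```python
# Hypothetical sketch of abstract multiple specialization:
# one generic version with a run-time test, plus a specialized
# version for call patterns where analysis proved the test true.

def independent(xs, ys):
    """Run-time test: the two argument lists share no elements."""
    return not set(xs) & set(ys)

def process_generic(xs, ys):
    # Generic version: must test independence before taking the
    # parallel path (the tag records which path was taken).
    if independent(xs, ys):
        return ("parallel", sorted(xs) + sorted(ys))
    return ("sequential", sorted(xs + ys))

def process_spec_indep(xs, ys):
    # Specialized version for calls where global analysis has
    # already established independence: the run-time test is gone.
    return ("parallel", sorted(xs) + sorted(ys))
```

A multiply specialized program would rewire only those call sites whose abstract call pattern entails independence to `process_spec_indep`, leaving the generic version in place for all other callers; this mirrors how the paper's compiler eliminates run-time tests and extracts loop invariants.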