In essence, we are confronted with the classic “expression problem” of independently extending a data model with new data variants and new operations. There are many solutions to this problem, but most of them are rather heavyweight. More lightweight implementations are possible in languages that support multi-methods, that is, languages that dispatch method calls dynamically based on the actual types of all the arguments. Figure 6 shows how we can achieve essentially the same effect (plus deep inspection of the arguments) using pattern matching and mix-in composition, making use of the fact that composing traits is subject to linearization.
We package each set of arithmetic optimizations into its own trait that inherits from ArithExp and overrides the desired methods (e.g., infix_*). When the arguments do not match the rewriting pattern, the overridden method will invoke the “parent” implementation using super. When several such traits are combined, the super calls will traverse the overridden method implementations according to the linearization order of their containing traits.
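For concreteness, here is a minimal sketch of this scheme. The simplified Exp, Const, and Times classes stand in for the actual staged expression representation, and the particular rewrite rules are illustrative rather than the paper's own:

    trait ArithExp {
      abstract class Exp
      case class Const(x: Double) extends Exp
      case class Times(lhs: Exp, rhs: Exp) extends Exp

      // Default case: always safe, simply builds an expression node.
      def infix_*(x: Exp, y: Exp): Exp = Times(x, y)
    }

    trait ArithExpOpt extends ArithExp {
      // Rewrite x*1, 1*x, and 0*x; otherwise defer to the "parent"
      // implementation via super, following the linearization order.
      override def infix_*(x: Exp, y: Exp): Exp = (x, y) match {
        case (a, Const(1.0)) => a
        case (Const(1.0), b) => b
        case (Const(0.0), _) => Const(0.0)
        case _               => super.infix_*(x, y)
      }
    }

    trait ArithExpOptFFT extends ArithExp {
      // A further, more specific set of rewrites, packaged in its own
      // trait (constant folding here is just an illustrative example).
      override def infix_*(x: Exp, y: Exp): Exp = (x, y) match {
        case (Const(a), Const(b)) => Const(a * b)
        case _                    => super.infix_*(x, y)
      }
    }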
Implementing multi-methods in a statically typed setting usually poses three problems: separate type checking/compilation, ensuring nonambiguity, and ensuring exhaustiveness. The described encoding supports separate type-checking and compilation as far as traits do. Ambiguity is ruled out by always following the linearization order and the first-match semantics of pattern matching. Exhaustiveness is ensured at the type level by requiring a default implementation, although no guarantees can be made that the default will not choose to throw an exception at runtime. In the particular case of applying optimizations, the default is always safe, as it will just create an expression object.
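Continuing the sketch above, mixing both optimization traits shows how a call traverses the rewrites in linearization order and falls back to the safe default:

    object Demo extends ArithExpOpt with ArithExpOptFFT {
      val x = Times(Const(2.0), Const(3.0))   // a non-constant node, built directly
      val a = infix_*(Const(2.0), Const(3.0)) // Const(6.0): folded by ArithExpOptFFT
      val b = infix_*(x, Const(1.0))          // x itself: the FFT rewrites do not match,
                                              // so super reaches ArithExpOpt's x*1 rule
      val c = infix_*(x, x)                   // no rewrite matches: the default in
                                              // ArithExp builds Times(x, x)
    }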
2.3. Generating code
Code generation is an explicit operation. For the common case where generated code is to be loaded immediately into the running program, the trait Compile provides a suitable interface in the form of the abstract method compile (see Figure 8). The contract of compile is to “unstage” a function from staged values to staged values into a function operating on present-stage values, which can be used just like any other function object in the running program.
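In sketch form, the interface looks as follows; the Base stub with its abstract type constructor Rep is abbreviated here:

    trait Base {
      type Rep[+T]  // the abstract type constructor for staged values
    }

    trait Compile extends Base {
      // "Unstage" a staged function: generate code for f, compile and
      // load it, and return an ordinary present-stage function.
      def compile[A, B](f: Rep[A] => Rep[B]): A => B
    }

A concrete implementation such as CompileScala (see Figure 7) can realize compile by emitting Scala source for the expression tree and invoking the Scala compiler at runtime.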
2.4. Putting it all together
In the previous sections, we have discussed the major ingredients of lightweight modular staging, focusing mostly on
individual components. Figure 7 shows an overview of the
traits encountered so far and their relationships.
Using the staged FFT implementation as part of some
larger Scala program is straightforward but requires us
to interface the generic algorithm with a concrete data
representation. The algorithm in Figure 1 expects an
array of Complex objects as input, each of which contains fields of type Rep[Double]. The algorithm itself
has no notion of staged arrays but uses arrays only in the
generator stage, which means that it is agnostic to how
data is stored. The enclosing program, however, will
store arrays of complex numbers in some native format
which we will need to feed into the algorithm. A simple
choice of representation is to use Array[Double]
with the complex numbers flattened into adjacent slots.
When applying compile, we will thus receive input of type Rep[Array[Double]].
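The following sketch outlines the necessary glue, building on the Compile sketch above. The staged-array helpers arrayApply and arrayFromSeq and the abstract fft entry point are hypothetical stand-ins for the actual staged-array utilities:

    trait FFTC extends Compile {
      case class Complex(re: Rep[Double], im: Rep[Double])

      def fft(xs: Array[Complex]): Array[Complex]                  // the algorithm of Figure 1
      def arrayApply(xs: Rep[Array[Double]], i: Int): Rep[Double]  // hypothetical staged read
      def arrayFromSeq(xs: Seq[Rep[Double]]): Rep[Array[Double]]   // hypothetical staged build

      def fftc(size: Int): Array[Double] => Array[Double] =
        compile[Array[Double], Array[Double]] { input =>
          // unflatten: adjacent slots hold the real and imaginary parts
          val in = Array.tabulate(size) { i =>
            Complex(arrayApply(input, 2 * i), arrayApply(input, 2 * i + 1))
          }
          val out = fft(in)  // runs at generation time, emitting a specialized graph
          // flatten the result back into adjacent slots
          arrayFromSeq(out.toSeq.flatMap(c => Seq(c.re, c.im)))
        }
    }

A caller might then write val fft4 = fftc(4) and apply fft4 to ordinary arrays of eight doubles, just like any other Scala function.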
Figure 7. Component architecture. Arrows denote extends relationships and dashed boxes represent units of functionality. Shown are the interface traits (Base, Arith, Trig, Compile), their core implementations (Expressions with BaseExp, ArithExp, TrigExp), the optimizations (ArithExpOpt, TrigExpOpt), the specific opts (ArithExpOptFFT), and the Scala code generation traits (ScalaGenBase, ScalaGenArith, ScalaGenTrig, CompileScala), grouped into Arithmetic and Trigonometry columns.