Homework 8 - Optimizations

Assignment Instructions

  1. Accept the assignment on GitHub Classroom here.

  2. Do the assignment 🐫.

  3. Upload the assignment on Gradescope. The most convenient way to do this is by uploading your assignment repo through the integrated GitHub submission method on Gradescope, but you may also upload a .zip of your repo.

Introduction

In this homework, you'll implement some optimizations in your compiler. You'll also come up with benchmark programs and see how well your optimizations do on a collaboratively-developed benchmark suite.

You'll implement at least two of the following optimizations, all of which we discussed in class: constant propagation, inlining, common subexpression elimination, and peephole optimizations.

In order to make inlining and common subexpression elimination easier to implement, you'll also write an AST pass (i.e., a function of type program -> program) to make sure all variable names are globally unique.

If you're taking the class as a capstone project, you should do constant propagation and pick two of the remaining three optimizations to implement. (The optional part of constant propagation is still optional.) You'll also write a short document about how your optimizations work and what kind of results you end up with.

Due dates:

Because grades are due not long after the project, you cannot use late days on this final homework.

You have some options as far as how much time and effort to put into this final homework. If you're short on time and want to be done with the semester--perfectly understandable!--we recommend implementing inlining and skipping the optional extension to constant propagation. If you feel like diving in a little deeper, implement common subexpression elimination and the optional extension to constant propagation. It's up to you, and won't affect your grade.

Starting code

The starting code is the same as for HW7, but without support for MLB syntax. Lambda expressions and function pointers are not supported.

You should write all of your optimizations in the file lib/optimize.ml. You can write tests in the usual way; the tester will run all of your optimizations on every test case.

You can run the compiler with specific optimization passes enabled using the bin/compile.exe executable, by passing the -p argument one or more times. For instance:

dune exec bin/compile.exe -- examples/ex1.lisp output -r -p propagate-constants -p uniquify-variables -p inline

will execute the compiler with constant propagation, globally unique names, and inlining enabled, and the passes will run in the order specified. You can also use this to execute an optimization more than once--for instance, doing constant propagation, then inlining, then constant propagation again. Executing the compiler without any -p flags will run all optimizations once, while -noopt will disable all optimizations.

Constant propagation

Constant propagation is a crucial optimization in which as much computation as possible is done at compile time instead of at run time. We implemented a sketch of a simple version of constant propagation in class. Your constant propagation implementation should support:

Optionally, you can also implement re-associating binary operations (possibly in a separate pass) to find opportunities for constant propagation. For instance, consider the expression

(+ 5 (+ 2 (read-num)))

This expression won't be modified by the constant propagation algorithm described above, but with re-association it could be optimized to

(+ 7 (read-num))
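
To make the recursion concrete, here is a minimal sketch of constant folding, including this re-association rule, over a toy AST with only numbers, variables, addition, and let. Your compiler's AST has more forms and different constructor names, so treat this as a shape rather than a drop-in implementation:

type expr =
  | Num of int
  | Var of string
  | Plus of expr * expr
  | Let of string * expr * expr

let rec fold (e : expr) : expr =
  match e with
  | Num _ | Var _ -> e
  | Let (x, bound, body) -> Let (x, fold bound, fold body)
  | Plus (e1, e2) -> (
      match (fold e1, fold e2) with
      | Num a, Num b -> Num (a + b)
      (* re-association: (+ c1 (+ c2 e)) becomes (+ (c1 + c2) e) *)
      | Num a, Plus (Num b, e') -> Plus (Num (a + b), e')
      | e1', e2' -> Plus (e1', e2'))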

Globally unique names

Many optimizations can benefit from a pass that ensures all names are globally unique. Implement this pass using gensym. This pass should be run before inlining and common subexpression elimination, and both of those optimizations can then assume globally-unique names (this is an exception to the usual principle that the order of optimizations shouldn't matter for correctness). The validate_passes function in optimize.ml ensures that this optimization is executed before inlining and common subexpression elimination.
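
Here is a sketch of such a pass over the toy expr type from the constant-folding sketch above. It assumes a gensym helper like the one shown (if your stencil already provides gensym, use that instead), and a full implementation must also rename function parameters:

let gensym : string -> string =
  let counter = ref 0 in
  fun base ->
    incr counter;
    Printf.sprintf "%s__%d" base !counter

let rec uniquify (env : (string * string) list) (e : expr) : expr =
  match e with
  | Num _ -> e
  | Var x -> Var (List.assoc x env)
  | Plus (e1, e2) -> Plus (uniquify env e1, uniquify env e2)
  | Let (x, bound, body) ->
      let x' = gensym x in
      Let (x', uniquify env bound, uniquify ((x, x') :: env) body)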

Inlining

Implement inlining for function definitions. In general, inlining functions can be tricky because of variable names; consider the following code:

(define (f x y) (+ x y))

(let ((x 2))
  (let ((y 3))
    (f y x)))

A naive inlining implementation might result in code like this:

(let ((x 2))
  (let ((y 3))
    (let ((x y))
      (let ((y x))
        (+ x y)))))

This expression, however, is not equivalent!

This problem can be solved by adding a simultaneous binding form like the one you implemented in HW3. It can also be solved by just ensuring that all variable and parameter names are globally unique.
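
For instance, if a uniquifying pass first renames every binding (the exact fresh names depend on your gensym), inlining the same call might instead produce something like

(let ((x0 2))
  (let ((y1 3))
    (let ((x2 y1))
      (let ((y3 x0))
        (+ x2 y3)))))

which correctly evaluates to 5, just like the original call (f y x).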

You should implement a heuristic for when to inline a given function. This heuristic should involve both (1) the number of static call sites and (2) the size of the function body. For example, you could multiply some measure of the size of the function body by the number of call sites and see if this exceeds some target threshold. We recommend implementing your inliner as follows:

  1. Find a function to inline. This function should satisfy your heuristics and be a leaf function: one that doesn't contain any function calls.
  2. Inline static calls to the function and remove the function's definition.
  3. Go back to step 1. Now that you've inlined a function, other functions may have become leaf functions.

This process will never inline recursive functions, including mutually-recursive functions.
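
As a concrete (and entirely hypothetical) shape for such a heuristic:

(* Inline when the body size times the number of static call sites stays
   under an arbitrary threshold. body_size and call_sites are numbers
   you'd compute by walking the AST; tune the threshold against the
   benchmark suite. *)
let should_inline ~(body_size : int) ~(call_sites : int) : bool =
  call_sites > 0 && body_size * call_sites <= 64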

Please describe your heuristic in a comment in the optimize.ml file.

Common subexpression elimination

Implement common subexpression elimination. This optimization pass should find common subexpressions, add names for those subexpressions, and replace the subexpressions with variable references.
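
For example, assuming x3 is a fresh name, a program like

(let ((a (read-num)))
  (+ (+ a 1) (+ a 1)))

could be transformed to

(let ((a (read-num)))
  (let ((x3 (+ a 1)))
    (+ x3 x3)))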

This optimization is more challenging to implement than inlining is. Our suggested approach is to:

  1. Find sets of identical subexpressions.
  2. Generate a fresh name for each set of subexpressions.
  3. Add a let-binding for each fresh name and replace each of the subexpressions with a reference to the corresponding variable.

The most difficult part of this process is determining where to put the new let-binding. Consider replacing the (identical) subexpressions e1, e2, and e3 with the variable x. You'll need to find the lowest common ancestor e of e1, e2, and e3, then replace it with

(let ((x e1)) e)

In order to find this lowest common ancestor, it will likely be useful to track the "path" to a given expression: how to get to that subexpression from the top level of the given definition. How exactly you do this is up to you.
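
For instance, a path could be the list of child indices taken from the root; the lowest common ancestor of several occurrences is then the node at the longest common prefix of their paths. A sketch, assuming that representation:

type path = int list

let rec common_prefix (p : path) (q : path) : path =
  match (p, q) with
  | i :: p', j :: q' when i = j -> i :: common_prefix p' q'
  | _ -> []

(* The path to the lowest common ancestor of all occurrences. *)
let lowest_common_ancestor (paths : path list) : path =
  match paths with
  | [] -> []
  | p :: rest -> List.fold_left common_prefix p rest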

Peephole optimizations

This is a very open-ended option, with a different flavor from the others.

All of the optimizations we've seen so far happen at the AST level. Peephole optimizations slot in somewhere else: they examine the list of assembly directives produced by the compiler and analyze them directly for patterns that can be simplified.

Note: If you do implement this optimization, provide an example program where your peephole optimization changes the assembly code (i.e., where the .s file changes depending on whether you use -p peephole or not). In your write-up, specifically mention this program, along with the specific changes your optimization makes to the assembly code. If you do not mention such a program, you may not get credit for implementing your optimization.

You can find inspiration for possible peephole optimizations by opening the assembly code (the .s file) generated after compiling some program. Scan through the assembly code and find obvious opportunities to optimize, such as redundant mov instructions. The entire point of this type of optimization is to handle the "obvious" opportunities you would spot by reading the assembly directly. The name peephole comes from how these optimizations typically scan the list of assembly directives in order, looking at a "window" of just a few sequential instructions, and try to find ways to generate simpler, but equivalent, assembly code.

For a simple example, if the compiler produced the code

mov rax, r8
mov r8, rax
mov rax, 10

this would be equivalent to a single directive

mov rax, 10

One way to "rewrite" this sequence of directives with general rules is to note that

mov R, S
mov S, R

simplifies to

mov R, S

and

mov R, V1
mov R, V2

simplifies to

mov R, V2

However, note that this is trickier than it might first appear! For example, if V2 depends on R (for example, if it is a memory offset from R) the value of V2 will be changed by the first mov, thus the "optimization" described above may cause the program to error or compute incorrect values! You could check that V2 does not reference R, or you could just apply this optimization in cases where V2 is a constant. Similar considerations often apply to other peephole optimizations, since many "simple" assembly instructions may have non-obvious side-effects that you must consider to ensure that your optimization does not change the program's behavior.

We've provided the structure for peephole optimizations in the stencil. Simply implement the peephole function in optimize.ml. The flag -p peephole will specifically enable this optimization pass. Note that validate_passes ensures that peephole is the last optimization, since it runs on the assembly code, which is generated after all the other optimizations are run.

Note that this optimization is supposed to be simple! It operates on a simple data structure -- a flat list of assembly instructions -- and should implement simple optimizations. It is fine to implement a peephole optimization that is just several lines long, as long as you describe in your write-up how it makes the compiled code simpler and how it preserves program correctness, and provide an example program where it applies.

Using OCaml pattern matching will make your life much easier while implementing this optimization. For example, the second mov rule above, restricted to the safe case where V2 is a constant, could be implemented as follows:

let rec peephole = function
| Mov (Reg r1, _) :: (Mov (Reg r2, Imm _) as m) :: tl when r1 = r2 ->
  (* mov R, V1 followed by mov R, V2: the first mov is dead. Requiring V2
     to be an immediate (assuming the stencil's Imm constructor for
     constants) guarantees that V2 cannot read R. Keeping m at the head
     of the recursive call lets it combine with whatever follows in tl. *)
  peephole (m :: tl)
| e :: tl ->
  e :: peephole tl
| [] ->
  []

This is the general form that your peephole optimizations should take. Note that you need to be careful about what you pass to recursive calls so that you don't end up with infinite loops. You may also find it better to implement a couple of very simple peephole optimization passes as separate functions which are chained together (for example, using the |> operator) in peephole.

Benchmarks

There's a benchmarks repository at https://github.com/BrownCS1260/final-benchmarks. You can add your benchmarks to that repository by forking the repository and then creating a pull request that adds files to the benchmarks directory. As part of your grade for this final homework, you should add at least three interesting benchmark programs to this repository by Saturday, Dec 9. Please include your cslogin somewhere in the pull request or commit.

The benchmark repository readme has directions for testing your compiler on these benchmarks.

Capstone

If you are taking CSCI 1260 as a capstone, you should submit a short (1-2 page) PDF document describing your implementation of these optimizations and their effects on your compiler's performance (the benchmarking scripts may help with this). This will serve as your capstone summary!