+ready> <b>def test(x) 1+2+x;</b>
+Read function definition:
+define double @test(double %x) {
+entry:
+ %addtmp = add double 1.000000e+00, 2.000000e+00
+ %addtmp1 = add double %addtmp, %x
+ ret double %addtmp1
+}
+</pre>
+</div>
+
+<p>This code is a very literal transcription of the AST built by parsing
+our code, and as such lacks optimizations like constant folding (we'd like to
+get "<tt>add x, 3.0</tt>" in the example above) as well as other, more important
+optimizations. Constant folding in particular is such a common and important
+optimization that many language implementors build constant folding support
+directly into their AST representation.</p>
+
+<p>With LLVM, you don't need to. Since all calls to build LLVM IR go through
+the LLVM builder, it would be nice if the builder itself checked to see if there
+was a constant folding opportunity when you call it. If so, it could just do
+the constant fold and return the constant instead of creating an instruction.
+This is exactly what the <tt>LLVMFoldingBuilder</tt> class does. Let's make one
+change:</p>
+
+<div class="doc_code">
+<pre>
+static LLVMFoldingBuilder Builder;
+</pre>
+</div>
+
+<p>All we did was switch from <tt>LLVMBuilder</tt> to
+<tt>LLVMFoldingBuilder</tt>. Though we changed no other code, all of our
+instructions are now implicitly constant folded without us having to do
+anything about it. For example, the input above now compiles to:</p>
+
+<div class="doc_code">
+<pre>
+ready> <b>def test(x) 1+2+x;</b>
+Read function definition:
+define double @test(double %x) {
+entry:
+ %addtmp = add double 3.000000e+00, %x
+ ret double %addtmp
+}
+</pre>
+</div>
+
+<p>Well, that was easy. :) In practice, we recommend always using
+<tt>LLVMFoldingBuilder</tt> when generating code like this. There is no
+"syntactic overhead" to using it (you don't have to uglify your compiler with
+constant checks everywhere), and it can dramatically reduce the amount of
+LLVM IR that is generated in some cases (particularly for languages with a
+macro preprocessor or that use a lot of constants).</p>
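+
+<p>To see what that "syntactic overhead" would look like, here is a rough
+sketch of the check you would otherwise write by hand in
+<tt>BinaryExprAST::Codegen</tt> for the '+' case. This is illustrative only,
+and the exact <tt>ConstantFP</tt> API varies across LLVM versions:</p>
+
+<div class="doc_code">
+<pre>
+ // Hand-rolled folding, roughly what the folding builder saves you from:
+ // if both operands are constants, compute the sum now instead of emitting
+ // an add instruction.
+ if (ConstantFP *LC = dyn_cast<ConstantFP>(L))
+   if (ConstantFP *RC = dyn_cast<ConstantFP>(R))
+     return ConstantFP::get(Type::DoubleTy,
+                            APFloat(LC->getValueAPF().convertToDouble() +
+                                    RC->getValueAPF().convertToDouble()));
+ return Builder.CreateAdd(L, R, "addtmp");
+</pre>
+</div>
+
+<p>Multiply that by every operator you support and it is easy to see why
+pushing the check into the builder is attractive.</p>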
+
+<p>On the other hand, the <tt>LLVMFoldingBuilder</tt> is limited by the fact
+that it does all of its analysis inline with the code as it is built. If you
+take a slightly more complex example:</p>
+
+<div class="doc_code">
+<pre>
+ready> <b>def test(x) (1+2+x)*(x+(1+2));</b>
+ready> Read function definition:
+define double @test(double %x) {
+entry:
+ %addtmp = add double 3.000000e+00, %x
+ %addtmp1 = add double %x, 3.000000e+00
+ %multmp = mul double %addtmp, %addtmp1
+ ret double %multmp
+}
+</pre>
+</div>
+
+<p>In this case, the LHS and RHS of the multiplication are the same value. We'd
+really like to see this generate "<tt>tmp = x+3; result = tmp*tmp;</tt>" instead
+of computing "<tt>x+3</tt>" twice.</p>
+
+<p>Unfortunately, no amount of local analysis will be able to detect and correct
+this. This requires two transformations: reassociation of expressions (to
+make the adds lexically identical) and Common Subexpression Elimination (CSE)
+to delete the redundant add instruction. Fortunately, LLVM provides a broad
+range of optimizations that you can use, in the form of "passes".</p>
+
+</div>
+
+<!-- *********************************************************************** -->
+<div class="doc_section"><a name="optimizerpasses">LLVM Optimization
+ Passes</a></div>
+<!-- *********************************************************************** -->
+
+<div class="doc_text">
+
+<p>LLVM provides many optimization passes which do many different sorts of
+things and have different tradeoffs. Unlike other systems, LLVM doesn't hold
+to the mistaken notion that one set of optimizations is right for all languages
+and for all situations. LLVM allows a compiler implementor to make complete
+decisions about what optimizations to use, in which order, and in what
+situation.</p>
+
+<p>As a concrete example, LLVM supports "whole module" passes, which look
+across as large a body of code as they can (often a whole file, but if run
+at link time, this can be a substantial portion of the whole program). It also
+supports and includes "per-function" passes, which operate on a single
+function at a time without looking at other functions. For more information
+on passes and how they get run, see the <a href="../WritingAnLLVMPass.html">How
+to Write a Pass</a> document.</p>
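+
+<p>To make the distinction concrete, here is a rough sketch of driving a
+whole-module pass, in contrast to the per-function setup we build below. The
+pass used here, <tt>createGlobalDCEPass()</tt>, lives in
+<tt>llvm/Transforms/IPO.h</tt> (not otherwise included by our example) and is
+just one instance of a transformation that needs to see the entire module:</p>
+
+<div class="doc_code">
+<pre>
+ // Sketch only: PassManager runs over the whole Module, where the
+ // FunctionPassManager we use below runs over one Function at a time.
+ PassManager MPM;
+ MPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
+ MPM.add(createGlobalDCEPass());  // Whole-module: deletes dead globals.
+ MPM.run(*TheModule);
+</pre>
+</div>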
+
+<p>For Kaleidoscope, we are currently generating functions on the fly, one at
+a time, as the user types them in. We aren't shooting for the ultimate
+optimization experience in this setting, but we also want to catch the easy and
+quick stuff where possible. As such, we will choose to run a few per-function
+optimizations as the user types the function in. If we wanted to make a "static
+Kaleidoscope compiler", we would use exactly the code we have now, except that
+we would defer running the optimizer until the entire file has been parsed.</p>
+
+<p>In order to get per-function optimizations going, we need to set up a
+<a href="../WritingAnLLVMPass.html#passmanager">FunctionPassManager</a> to hold and
+organize the LLVM optimizations that we want to run. Once we have that, we can
+add a set of optimizations to run. The code looks like this:</p>
+
+<div class="doc_code">
+<pre>
+ ExistingModuleProvider OurModuleProvider(TheModule);
+ FunctionPassManager OurFPM(&OurModuleProvider);
+
+ // Set up the optimizer pipeline. Start with registering info about how the
+ // target lays out data structures.
+ OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
+ // Do simple "peephole" optimizations and bit-twiddling optzns.
+ OurFPM.add(createInstructionCombiningPass());
+ // Reassociate expressions.
+ OurFPM.add(createReassociatePass());
+ // Eliminate Common SubExpressions.
+ OurFPM.add(createGVNPass());
+ // Simplify the control flow graph (deleting unreachable blocks, etc).
+ OurFPM.add(createCFGSimplificationPass());
+
+ // Set the global so the code gen can use this.
+ TheFPM = &OurFPM;
+
+ // Run the main "interpreter loop" now.
+ MainLoop();
+</pre>
+</div>
+
+<p>This code defines two objects, an <tt>ExistingModuleProvider</tt> and a
+<tt>FunctionPassManager</tt>. The former is basically a wrapper around our
+<tt>Module</tt> that the PassManager requires. It provides certain flexibility
+that we're not going to take advantage of here, so I won't dive into what it is
+all about.</p>
+
+<p>The meat of the matter is the definition of "<tt>OurFPM</tt>". It
+requires a pointer to the <tt>Module</tt> (through the <tt>ModuleProvider</tt>)
+to construct itself. Once it is set up, we use a series of "add" calls to add
+a bunch of LLVM passes. The first call is basically boilerplate: it adds a pass
+so that later optimizations know how the data structures in the program are
+laid out. The "<tt>TheExecutionEngine</tt>" variable is related to the JIT,
+which we will get to in the next section.</p>
+
+<p>In this case, we choose to add four optimization passes. The passes we chose
+here are a pretty standard set of "cleanup" optimizations that are useful for
+a wide variety of code. I won't delve into what they do, but believe me, they
+are a good starting place.</p>
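+
+<p>If you want to experiment, this set is easy to extend. For example, the
+following additions are purely illustrative (both passes come from
+<tt>llvm/Transforms/Scalar.h</tt>, like the ones above):</p>
+
+<div class="doc_code">
+<pre>
+ // Illustrative extras, not part of the tutorial code: fold constants across
+ // instructions, and delete trivially dead instructions.
+ OurFPM.add(createConstantPropagationPass());
+ OurFPM.add(createDeadInstEliminationPass());
+</pre>
+</div>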
+
+<p>Once the pass manager is set up, we need to make use of it. We do this by
+running it after our newly created function is constructed (in
+<tt>FunctionAST::Codegen</tt>), but before it is returned to the client:</p>
+
+<div class="doc_code">
+<pre>
+  if (Value *RetVal = Body->Codegen()) {
+    // Finish off the function.
+    Builder.CreateRet(RetVal);
+
+    // Validate the generated code, checking for consistency.
+    verifyFunction(*TheFunction);
+
+    // Optimize the function.
+    TheFPM->run(*TheFunction);
+
+    return TheFunction;
+  }
+</pre>
+</div>
+
+<p>As you can see, this is pretty straightforward. The
+<tt>FunctionPassManager</tt> optimizes and updates the LLVM <tt>Function*</tt>
+in place, improving (hopefully) its body. With this in place, we can try our
+test above again:</p>
+
+<div class="doc_code">
+<pre>
+ready> <b>def test(x) (1+2+x)*(x+(1+2));</b>
+ready> Read function definition:
+define double @test(double %x) {
+entry:
+ %addtmp = add double %x, 3.000000e+00
+ %multmp = mul double %addtmp, %addtmp
+ ret double %multmp
+}
+</pre>
+</div>
+
+<p>As expected, we now get our nicely optimized code, saving a floating point
+add from the program.</p>
+
+<p>LLVM provides a wide variety of optimizations that can be used in certain
+circumstances. Unfortunately, we don't have a good centralized description of
+what every pass does, but you can check out the ones that <tt>llvm-gcc</tt> or
+<tt>llvm-ld</tt> run to get started. The "<tt>opt</tt>" tool allows you to
+experiment with passes from the command line, so you can see if they do
+anything.</p>
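+
+<p>For example, assuming the unoptimized IR for <tt>test</tt> is saved in a
+file named <tt>t.ll</tt> (a hypothetical file name), you could apply the same
+four passes from the command line and inspect the result:</p>
+
+<div class="doc_code">
+<pre>
+ # Assemble the IR, run our four passes over it, and disassemble the result.
+ llvm-as < t.ll | opt -instcombine -reassociate -gvn -simplifycfg | llvm-dis
+</pre>
+</div>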
+
+<p>Now that we have reasonable code coming out of our front-end, let's talk
+about executing it!</p>
+
+</div>
+
+<!-- *********************************************************************** -->
+<div class="doc_section"><a name="jit">Adding a JIT Compiler</a></div>
+<!-- *********************************************************************** -->
+
+<div class="doc_text">
+
+<p>Once the code is available in LLVM IR form, a wide variety of tools can be
+applied to it. For example, you can run optimizations on it (as we did above),
+you can dump it out in textual or binary forms, you can compile the code to an
+assembly file (.s) for some target, or you can JIT compile it. The nice thing
+about the LLVM IR representation is that it is the common currency between many
+different parts of the compiler.</p>
+
+<p>In this chapter, we'll add JIT compiler support to our interpreter. The
+basic idea that we want for Kaleidoscope is to have the user enter function
+bodies as they do now, but immediately evaluate the top-level expressions they
+type in. For example, if they type in "1 + 2;", we should evaluate and print
+out 3. If they define a function, they should be able to call it from the
+command line.</p>
+
+<p>In order to do this, we first declare and initialize the JIT. This is done
+by adding a global variable and a call in <tt>main</tt>:</p>
+
+<div class="doc_code">
+<pre>
+static ExecutionEngine *TheExecutionEngine;
+...
+int main() {
+ ..
+ // Create the JIT.
+ TheExecutionEngine = ExecutionEngine::create(TheModule);
+ ..
+}
+</pre>
+</div>
+
+<p>This creates an abstract "Execution Engine" which can be either a JIT
+compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler
+for you if one is available for your platform, otherwise it will fall back to
+the interpreter.</p>
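+
+<p>If you ever want to force the use of the interpreter (say, while debugging
+JIT problems on some platform), <tt>ExecutionEngine::create</tt> also has a
+form that takes a <tt>ModuleProvider</tt> and a "force interpreter" flag. As a
+rough sketch (the exact overloads vary between LLVM versions; this reuses the
+<tt>OurModuleProvider</tt> from the optimizer setup above):</p>
+
+<div class="doc_code">
+<pre>
+ // Sketch: explicitly request the interpreter instead of a JIT.
+ TheExecutionEngine = ExecutionEngine::create(&OurModuleProvider,
+                                              /*ForceInterpreter=*/true);
+</pre>
+</div>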
+
+<p>Once the <tt>ExecutionEngine</tt> is created, the JIT is ready to be used.
+There are a variety of APIs that are useful, but the simplest one is the
+"<tt>getPointerToFunction(F)</tt>" method. This method JIT compiles the
+specified LLVM Function and returns a function pointer to the generated machine
+code. In our case, this means that we can change the code that parses a
+top-level expression to look like this:</p>
+
+<div class="doc_code">
+<pre>
+static void HandleTopLevelExpression() {
+  // Evaluate a top level expression into an anonymous function.
+  if (FunctionAST *F = ParseTopLevelExpr()) {
+    if (Function *LF = F->Codegen()) {
+      LF->dump();  // Dump the function for exposition purposes.
+
+      // JIT the function, returning a function pointer.
+      void *FPtr = TheExecutionEngine->getPointerToFunction(LF);
+
+      // Cast it to the right type (takes no arguments, returns a double) so we
+      // can call it as a native function.
+      double (*FP)() = (double (*)())FPtr;
+      fprintf(stderr, "Evaluated to %f\n", FP());
+    }
+</pre>
+</div>
+
+<p>Recall that we compile top-level expressions into a self-contained LLVM
+function that takes no arguments and returns the computed double. Because the
+LLVM JIT compiler matches the native platform ABI, this means that you can just
+cast the result pointer to a function pointer of that type and call it directly.
+As such, there is no difference between JIT compiled code and native machine
+code that is statically linked into your application.</p>
+
+<p>With just these two changes, let's see how Kaleidoscope works now!</p>
+
+<div class="doc_code">
+<pre>
+ready> <b>4+5;</b>
+define double @""() {
+entry:
+ ret double 9.000000e+00
+}
+
+<em>Evaluated to 9.000000</em>
+</pre>
+</div>
+
+<p>Well, this looks like it is basically working. The dump of the function
+shows the "no argument function that always returns double" that we synthesize
+for each top-level expression that is typed in. This demonstrates very basic
+functionality, but can we do more?</p>
+
+<div class="doc_code">
+<pre>
+ready> <b>def testfunc(x y) x + y*2;</b>
+Read function definition:
+define double @testfunc(double %x, double %y) {
+entry:
+ %multmp = mul double %y, 2.000000e+00
+ %addtmp = add double %multmp, %x
+ ret double %addtmp
+}
+
+ready> <b>testfunc(4, 10);</b>
+define double @""() {
+entry:
+ %calltmp = call double @testfunc( double 4.000000e+00, double 1.000000e+01 )
+ ret double %calltmp
+}
+
+<em>Evaluated to 24.000000</em>
+</pre>
+</div>
+
+<p>This illustrates that we can now call user code, but what is going on here
+is a bit subtle. Note that we only invoke the JIT on the anonymous function
+that <em>calls testfunc</em>, but we never invoked it on <em>testfunc
+itself</em>.</p>
+
+<p>What actually happened here is that the anonymous function was
+JIT'd when requested. When the Kaleidoscope app calls through the function
+pointer that is returned, the anonymous function starts executing. It
+eventually makes the call to the "testfunc" function and lands in a stub that
+invokes the JIT, lazily, on testfunc. Once the JIT finishes lazily compiling
+testfunc, it returns and the code reexecutes the call.</p>
+
+<p>In summary, the JIT will lazily JIT code on the fly as it is needed. The
+JIT provides a number of other more advanced interfaces for things like freeing
+allocated machine code, rejit'ing functions to update them, etc. However, even
+with this simple code, we get some surprisingly powerful capabilities. Check
+this out (I removed the dump of the anonymous functions; you should get the
+idea by now :) :</p>
+
+<div class="doc_code">
+<pre>
+ready> <b>extern sin(x);</b>
+Read extern:
+declare double @sin(double)
+
+ready> <b>extern cos(x);</b>
+Read extern:
+declare double @cos(double)
+
+ready> <b>sin(1.0);</b>
+<em>Evaluated to 0.841471</em>
+ready> <b>def foo(x) sin(x)*sin(x) + cos(x)*cos(x);</b>
+Read function definition:
+define double @foo(double %x) {
+entry:
+ %calltmp = call double @sin( double %x )
+ %multmp = mul double %calltmp, %calltmp
+ %calltmp2 = call double @cos( double %x )
+ %multmp4 = mul double %calltmp2, %calltmp2
+ %addtmp = add double %multmp, %multmp4
+ ret double %addtmp
+}
+
+ready> <b>foo(4.0);</b>
+<em>Evaluated to 1.000000</em>
+</pre>
+</div>
+
+<p>Whoa, how does the JIT know about sin and cos? The answer is simple: in this
+example, the JIT started execution of a function and got to a function call. It
+realized that the function was not yet JIT compiled and invoked the standard set
+of routines to resolve the function. In this case, there is no body defined
+for the function, so the JIT ended up calling "<tt>dlsym("sin")</tt>" on itself.
+Since "<tt>sin</tt>" is defined within the JIT's address space, it simply
+patches up calls in the module to call the libm version of <tt>sin</tt>
+directly.</p>
+
+<p>The LLVM JIT provides a number of interfaces (look in the
+<tt>ExecutionEngine.h</tt> file) for controlling how unknown functions get
+resolved. It allows you to establish explicit mappings between IR objects and
+addresses (useful for LLVM global variables that you want to map to static
+tables, for example), allows you to dynamically decide on the fly based on the
+function name, and even allows you to have the JIT abort itself if any lazy
+compilation is attempted.</p>
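+
+<p>As a rough sketch of those interfaces (the method names below match the
+<tt>ExecutionEngine.h</tt> of recent releases, but check your version's header;
+<tt>SomeGlobalVar</tt>, <tt>MyStaticTable</tt> and <tt>MyLazyFunctionCreator</tt>
+are hypothetical names for this example):</p>
+
+<div class="doc_code">
+<pre>
+// Map an IR global directly to an address we control (e.g. a static table).
+TheExecutionEngine->addGlobalMapping(SomeGlobalVar, &MyStaticTable);
+
+// Decide on the fly, by name, how unknown functions get resolved.
+void *MyLazyFunctionCreator(const std::string &Name) {
+  if (Name == "sin") return (void*)(double(*)(double))sin;  // from math.h
+  return 0;  // Fall back to the default resolution (e.g. dlsym).
+}
+...
+TheExecutionEngine->InstallLazyFunctionCreator(MyLazyFunctionCreator);
+
+// Have the JIT abort if it would ever have to lazily compile something.
+TheExecutionEngine->DisableLazyCompilation();
+</pre>
+</div>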
+
+<p>This completes the JIT and optimizer chapter of the Kaleidoscope tutorial. At
+this point, we can compile a non-Turing-complete programming language, optimize
+and JIT compile it in a user-driven way. Next up we'll look into <a
+href="LangImpl5.html">extending the language with control flow constructs</a>,
+tackling some interesting LLVM IR issues along the way.</p>
+
+</div>
+
+<!-- *********************************************************************** -->
+<div class="doc_section"><a name="code">Full Code Listing</a></div>
+<!-- *********************************************************************** -->
+
+<div class="doc_text">
+
+<p>
+Here is the complete code listing for our running example, enhanced with the
+LLVM JIT and optimizer. To build this example, use:
+</p>
+
+<div class="doc_code">
+<pre>
+ # Compile
+ g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
+ # Run
+ ./toy
+</pre>
+</div>
+
+<p>Here is the code:</p>
+
+<div class="doc_code">
+<pre>
+#include "llvm/DerivedTypes.h"
+#include "llvm/ExecutionEngine/ExecutionEngine.h"
+#include "llvm/Module.h"
+#include "llvm/ModuleProvider.h"
+#include "llvm/PassManager.h"
+#include "llvm/Analysis/Verifier.h"
+#include "llvm/Target/TargetData.h"
+#include "llvm/Transforms/Scalar.h"
+#include "llvm/Support/LLVMBuilder.h"
+#include <cstdio>
+#include <string>
+#include <map>
+#include <vector>
+using namespace llvm;
+
+//===----------------------------------------------------------------------===//
+// Lexer
+//===----------------------------------------------------------------------===//
+
+// The lexer returns tokens [0-255] if it is an unknown character, otherwise one
+// of these for known things.
+enum Token {
+ tok_eof = -1,
+
+ // commands
+ tok_def = -2, tok_extern = -3,
+
+ // primary
+ tok_identifier = -4, tok_number = -5,
+};
+
+static std::string IdentifierStr; // Filled in if tok_identifier
+static double NumVal; // Filled in if tok_number
+
+/// gettok - Return the next token from standard input.
+static int gettok() {
+  static int LastChar = ' ';
+
+  // Skip any whitespace.
+  while (isspace(LastChar))
+    LastChar = getchar();
+
+  if (isalpha(LastChar)) { // identifier: [a-zA-Z][a-zA-Z0-9]*
+    IdentifierStr = LastChar;
+    while (isalnum((LastChar = getchar())))
+      IdentifierStr += LastChar;
+
+    if (IdentifierStr == "def") return tok_def;
+    if (IdentifierStr == "extern") return tok_extern;
+    return tok_identifier;
+  }
+
+  if (isdigit(LastChar) || LastChar == '.') { // Number: [0-9.]+
+    std::string NumStr;
+    do {
+      NumStr += LastChar;
+      LastChar = getchar();
+    } while (isdigit(LastChar) || LastChar == '.');
+
+    NumVal = strtod(NumStr.c_str(), 0);
+    return tok_number;
+  }
+
+  if (LastChar == '#') {
+    // Comment until end of line.
+    do LastChar = getchar();
+    while (LastChar != EOF && LastChar != '\n' && LastChar != '\r');
+
+    if (LastChar != EOF)
+      return gettok();
+  }
+
+  // Check for end of file.  Don't eat the EOF.
+  if (LastChar == EOF)
+    return tok_eof;
+
+  // Otherwise, just return the character as its ascii value.
+  int ThisChar = LastChar;
+  LastChar = getchar();
+  return ThisChar;
+}
+
+//===----------------------------------------------------------------------===//
+// Abstract Syntax Tree (aka Parse Tree)
+//===----------------------------------------------------------------------===//
+