1 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
2 "http://www.w3.org/TR/html4/strict.dtd">
6 <title>Kaleidoscope: Adding JIT and Optimizer Support</title>
7 <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
8 <meta name="author" content="Chris Lattner">
9 <link rel="stylesheet" href="../llvm.css" type="text/css">
14 <div class="doc_title">Kaleidoscope: Adding JIT and Optimizer Support</div>
16 <div class="doc_author">
<p>Written by <a href="mailto:sabre@nondot.org">Chris Lattner</a></p>
</div>
20 <!-- *********************************************************************** -->
<div class="doc_section"><a name="intro">Chapter 4 Introduction</a></div>
22 <!-- *********************************************************************** -->
24 <div class="doc_text">
<p>Welcome to Chapter 4 of the "<a href="index.html">Implementing a language with
LLVM</a>" tutorial. Chapters 1-3 described the implementation of a simple language
and included support for generating LLVM IR. This chapter describes two new
techniques: adding optimizer support to your language, and adding JIT compiler
support. Together, they show how to get nice, efficient code for your language.</p>
34 <!-- *********************************************************************** -->
<div class="doc_section"><a name="trivialconstfold">Trivial Constant Folding</a></div>
37 <!-- *********************************************************************** -->
39 <div class="doc_text">
<p>Our demonstration for Chapter 3 is elegant and easy to extend. Unfortunately,
it does not produce wonderful code. For example, when compiling simple code,
we don't get obvious optimizations:</p>
<div class="doc_code">
<pre>
ready> <b>def test(x) 1+2+x;</b>
Read function definition:
define double @test(double %x) {
entry:
        %addtmp = add double 1.000000e+00, 2.000000e+00
        %addtmp1 = add double %addtmp, %x
        ret double %addtmp1
}
</pre>
</div>
<p>This code is a very literal transcription of the AST built by parsing
our code, and as such lacks optimizations like constant folding (we'd like to
get "<tt>add x, 3.0</tt>" in the example above) as well as other more important
optimizations. Constant folding in particular is such a common and important
optimization that many language implementors build constant folding support
directly into their AST representation.</p>
<p>With LLVM, you don't need to. Since all calls to build LLVM IR go through
the LLVM builder, it would be nice if the builder itself checked to see if there
was a constant folding opportunity when you call it. If so, it could just do
the constant fold and return the constant instead of creating an instruction.
This is exactly what the <tt>LLVMFoldingBuilder</tt> class does. Let's switch
our code over to use it:</p>
<div class="doc_code">
<pre>
static LLVMFoldingBuilder Builder;
</pre>
</div>
<p>All we did was switch from <tt>LLVMBuilder</tt> to
<tt>LLVMFoldingBuilder</tt>. Though we changed no other code, all of our
instructions are now implicitly constant folded without us having to do anything
about it. For example, our example above now compiles to:</p>
<div class="doc_code">
<pre>
ready> <b>def test(x) 1+2+x;</b>
Read function definition:
define double @test(double %x) {
entry:
        %addtmp = add double 3.000000e+00, %x
        ret double %addtmp
}
</pre>
</div>
<p>Well, that was easy. :) In practice, we recommend always using
<tt>LLVMFoldingBuilder</tt> when generating code like this. It has no
"syntactic overhead" for its use (you don't have to uglify your compiler with
constant checks everywhere) and it can dramatically reduce the amount of
LLVM IR that is generated in some cases (particularly for languages with a macro
preprocessor or that use a lot of constants).</p>
103 <p>On the other hand, the <tt>LLVMFoldingBuilder</tt> is limited by the fact
104 that it does all of its analysis inline with the code as it is built. If you
105 take a slightly more complex example:</p>
<div class="doc_code">
<pre>
ready> <b>def test(x) (1+2+x)*(x+(1+2));</b>
ready> Read function definition:
define double @test(double %x) {
entry:
        %addtmp = add double 3.000000e+00, %x
        %addtmp1 = add double %x, 3.000000e+00
        %multmp = mul double %addtmp, %addtmp1
        ret double %multmp
}
</pre>
</div>
<p>In this case, the LHS and RHS of the multiplication are the same value. We'd
really like to see this generate "<tt>tmp = x+3; result = tmp*tmp;</tt>" instead
of computing "<tt>x+3</tt>" twice.</p>
<p>Unfortunately, no amount of local analysis will be able to detect and correct
this. It requires two transformations: reassociation of expressions (to
make the adds lexically identical) and Common Subexpression Elimination (CSE)
to delete the redundant add instruction. Fortunately, LLVM provides a broad
range of optimizations that you can use, in the form of "passes".</p>
133 <!-- *********************************************************************** -->
<div class="doc_section"><a name="optimizerpasses">LLVM Optimization Passes</a></div>
136 <!-- *********************************************************************** -->
138 <div class="doc_text">
<p>LLVM provides many optimization passes, which do many different sorts of
things and have different tradeoffs. Unlike other systems, LLVM doesn't hold
to the mistaken notion that one set of optimizations is right for all languages
and for all situations. LLVM allows a compiler implementor to make complete
decisions about what optimizations to use, in which order, and in what
situation.</p>
<p>As a concrete example, LLVM supports both "whole module" passes, which look
across as large a body of code as they can (often a whole file, but, if run
at link time, a substantial portion of the whole program), and "per-function"
passes, which operate on a single function at a time without looking at other
functions. For more information on passes and how they get run, see the
<a href="../WritingAnLLVMPass.html">How to Write a Pass</a> document.</p>
155 <p>For Kaleidoscope, we are currently generating functions on the fly, one at
156 a time, as the user types them in. We aren't shooting for the ultimate
157 optimization experience in this setting, but we also want to catch the easy and
158 quick stuff where possible. As such, we will choose to run a few per-function
159 optimizations as the user types the function in. If we wanted to make a "static
160 Kaleidoscope compiler", we would use exactly the code we have now, except that
161 we would defer running the optimizer until the entire file has been parsed.</p>
163 <p>In order to get per-function optimizations going, we need to set up a
164 <a href="../WritingAnLLVMPass.html#passmanager">FunctionPassManager</a> to hold and
165 organize the LLVM optimizations that we want to run. Once we have that, we can
166 add a set of optimizations to run. The code looks like this:</p>
<div class="doc_code">
<pre>
  ExistingModuleProvider OurModuleProvider(TheModule);
  FunctionPassManager OurFPM(&OurModuleProvider);

  // Set up the optimizer pipeline.  Start with registering info about how the
  // target lays out data structures.
  OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
  // Do simple "peephole" optimizations and bit-twiddling optzns.
  OurFPM.add(createInstructionCombiningPass());
  // Reassociate expressions.
  OurFPM.add(createReassociatePass());
  // Eliminate Common SubExpressions.
  OurFPM.add(createGVNPass());
  // Simplify the control flow graph (deleting unreachable blocks, etc).
  OurFPM.add(createCFGSimplificationPass());

  // Set the global so the code gen can use this.
  TheFPM = &OurFPM;

  // Run the main "interpreter loop" now.
  MainLoop();
</pre>
</div>
<p>This code defines two objects, an <tt>ExistingModuleProvider</tt> and a
<tt>FunctionPassManager</tt>. The former is basically a wrapper around our
<tt>Module</tt> that the PassManager requires. It provides certain flexibility
that we're not going to take advantage of here, so I won't dive into the
details.</p>
<p>The meat of the matter is the definition of "<tt>OurFPM</tt>". It
requires a pointer to the <tt>Module</tt> (through the <tt>ModuleProvider</tt>)
to construct itself. Once it is set up, we use a series of "add" calls to add
a bunch of LLVM passes. The first pass is basically boilerplate: it adds a pass
so that later optimizations know how the data structures in the program are
laid out. The "<tt>TheExecutionEngine</tt>" variable is related to the JIT,
which we will get to in the next section.</p>
<p>In this case, we choose to add four optimization passes. The passes we chose
here are a pretty standard set of "cleanup" optimizations that are useful for
a wide variety of code. I won't delve into what they do, but trust that they
are a good starting place.</p>
<p>Once the pass manager is set up, we need to make use of it. We do this by
running it after our newly created function is constructed (in
<tt>FunctionAST::Codegen</tt>), but before it is returned to the client:</p>
<div class="doc_code">
<pre>
  if (Value *RetVal = Body->Codegen()) {
    // Finish off the function.
    Builder.CreateRet(RetVal);

    // Validate the generated code, checking for consistency.
    verifyFunction(*TheFunction);

    // Optimize the function.
    TheFPM->run(*TheFunction);

    return TheFunction;
  }
</pre>
</div>
<p>As you can see, this is pretty straightforward. The
<tt>FunctionPassManager</tt> optimizes and updates the LLVM Function* in place,
(hopefully) improving its body. With this in place, we can try our test above
again:</p>
<div class="doc_code">
<pre>
ready> <b>def test(x) (1+2+x)*(x+(1+2));</b>
ready> Read function definition:
define double @test(double %x) {
entry:
        %addtmp = add double %x, 3.000000e+00
        %multmp = mul double %addtmp, %addtmp
        ret double %multmp
}
</pre>
</div>
<p>As expected, we now get our nicely optimized code, saving a floating point
add instruction from every execution of this function.</p>
<p>LLVM provides a wide variety of optimizations that can be used in certain
circumstances. Some <a href="../Passes.html">documentation about the various
passes</a> is available, but it isn't very complete. Another good source of
ideas is to look at the passes that <tt>llvm-gcc</tt> or
<tt>llvm-ld</tt> run. The "<tt>opt</tt>" tool allows you to
experiment with passes from the command line, so you can see if they do
anything.</p>
<p>Now that we have reasonable code coming out of our front-end, let's talk about
executing it!</p>
267 <!-- *********************************************************************** -->
268 <div class="doc_section"><a name="jit">Adding a JIT Compiler</a></div>
269 <!-- *********************************************************************** -->
271 <div class="doc_text">
<p>Once the code is available in LLVM IR form, a wide variety of tools can be
applied to it. For example, you can run optimizations on it (as we did above),
you can dump it out in textual or binary form, you can compile the code to an
assembly file (.s) for some target, or you can JIT compile it. The nice thing
about the LLVM IR representation is that it is the common currency between many
different parts of the compiler.</p>
<p>In this chapter, we'll add JIT compiler support to our interpreter. The
basic idea that we want for Kaleidoscope is to have the user enter function
bodies as they do now, but immediately evaluate the top-level expressions they
type in. For example, if they type in "1 + 2;", we should evaluate and print
out 3. If they define a function, they should be able to call it from the
command line.</p>
288 <p>In order to do this, we first declare and initialize the JIT. This is done
289 by adding a global variable and a call in <tt>main</tt>:</p>
<div class="doc_code">
<pre>
static ExecutionEngine *TheExecutionEngine;
...
int main() {
  ...
  // Create the JIT.
  TheExecutionEngine = ExecutionEngine::create(TheModule);
  ...
}
</pre>
</div>
<p>This creates an abstract "Execution Engine" which can be either a JIT
compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler
for you if one is available for your platform, otherwise it will fall back to
the interpreter.</p>
<p>Once the <tt>ExecutionEngine</tt> is created, the JIT is ready to be used.
There are a variety of APIs that are useful, but the simplest one is the
"<tt>getPointerToFunction(F)</tt>" method. This method JIT compiles the
specified LLVM Function and returns a function pointer to the generated machine
code. In our case, this means that we can change the code that parses a
top-level expression to look like this:</p>
<div class="doc_code">
<pre>
static void HandleTopLevelExpression() {
  // Evaluate a top level expression into an anonymous function.
  if (FunctionAST *F = ParseTopLevelExpr()) {
    if (Function *LF = F->Codegen()) {
      LF->dump();  // Dump the function for exposition purposes.

      // JIT the function, returning a function pointer.
      void *FPtr = TheExecutionEngine->getPointerToFunction(LF);

      // Cast it to the right type (takes no arguments, returns a double) so we
      // can call it as a native function.
      double (*FP)() = (double (*)())FPtr;
      fprintf(stderr, "Evaluated to %f\n", FP());
    }
  }
}
</pre>
</div>
335 <p>Recall that we compile top-level expressions into a self-contained LLVM
336 function that takes no arguments and returns the computed double. Because the
337 LLVM JIT compiler matches the native platform ABI, this means that you can just
338 cast the result pointer to a function pointer of that type and call it directly.
339 As such, there is no difference between JIT compiled code and native machine
340 code that is statically linked into your application.</p>
<p>With just these two changes, let's see how Kaleidoscope works now!</p>
<div class="doc_code">
<pre>
ready> <b>4+5;</b>
define double @""() {
entry:
        ret double 9.000000e+00
}

<em>Evaluated to 9.000000</em>
</pre>
</div>
<p>Well, this looks like it is basically working. The dump of the function
shows the "no argument function that always returns double" that we synthesize
for each top-level expression that is typed in. This demonstrates very basic
functionality, but can we do more?</p>
<div class="doc_code">
<pre>
ready> <b>def testfunc(x y) x + y*2;</b>
Read function definition:
define double @testfunc(double %x, double %y) {
entry:
        %multmp = mul double %y, 2.000000e+00
        %addtmp = add double %multmp, %x
        ret double %addtmp
}

ready> <b>testfunc(4, 10);</b>
define double @""() {
entry:
        %calltmp = call double @testfunc( double 4.000000e+00, double 1.000000e+01 )
        ret double %calltmp
}

<em>Evaluated to 24.000000</em>
</pre>
</div>
<p>This illustrates that we can now call user code, but it is a bit subtle what
is going on here. Note that we only invoke the JIT on the anonymous function
that <em>calls testfunc</em>, but we never invoked it on <em>testfunc
itself</em>.</p>
<p>What actually happened here is that the anonymous function was
JIT'd when requested. When the Kaleidoscope app calls through the function
pointer that is returned, the anonymous function starts executing. It ends up
making the call to the "testfunc" function, and lands in a stub that invokes
the JIT, lazily, on testfunc. Once the JIT finishes lazily compiling testfunc,
it returns and the code re-executes the call.</p>
<p>In summary, the JIT will lazily JIT code on the fly as it is needed. The
JIT provides a number of other more advanced interfaces for things like freeing
allocated machine code, re-JIT'ing functions to update them, etc. However, even
with this simple code, we get some surprisingly powerful capabilities - check
this out (I removed the dump of the anonymous functions; you should get the idea
by now):</p>
<div class="doc_code">
<pre>
ready> <b>extern sin(x);</b>
declare double @sin(double)

ready> <b>extern cos(x);</b>
declare double @cos(double)

ready> <b>sin(1.0);</b>
<em>Evaluated to 0.841471</em>

ready> <b>def foo(x) sin(x)*sin(x) + cos(x)*cos(x);</b>
Read function definition:
define double @foo(double %x) {
entry:
        %calltmp = call double @sin( double %x )
        %multmp = mul double %calltmp, %calltmp
        %calltmp2 = call double @cos( double %x )
        %multmp4 = mul double %calltmp2, %calltmp2
        %addtmp = add double %multmp, %multmp4
        ret double %addtmp
}

ready> <b>foo(4.0);</b>
<em>Evaluated to 1.000000</em>
</pre>
</div>
<p>Whoa, how does the JIT know about sin and cos? The answer is simple: in this
example, the JIT started execution of a function and got to a function call. It
realized that the function was not yet JIT compiled and invoked the standard set
of routines to resolve the function. In this case, there is no body defined
for the function, so the JIT ended up calling "<tt>dlsym("sin")</tt>" on itself.
Since "<tt>sin</tt>" is defined within the JIT's address space, it simply
patches up calls in the module to call the libm version of <tt>sin</tt>
directly.</p>
441 <p>The LLVM JIT provides a number of interfaces (look in the
442 <tt>ExecutionEngine.h</tt> file) for controlling how unknown functions get
443 resolved. It allows you to establish explicit mappings between IR objects and
444 addresses (useful for LLVM global variables that you want to map to static
445 tables, for example), allows you to dynamically decide on the fly based on the
446 function name, and even allows you to have the JIT abort itself if any lazy
447 compilation is attempted.</p>
449 <p>One interesting application of this is that we can now extend the language
by writing arbitrary C++ code to implement operations. For example, if we add:</p>
<div class="doc_code">
<pre>
/// putchard - putchar that takes a double and returns 0.
extern "C"
double putchard(double X) {
  putchar((char)X);
  return 0;
}
</pre>
</div>
<p>Now we can produce simple output to the console by using things like
"<tt>extern putchard(x); putchard(120);</tt>", which prints a lowercase 'x' on
the console (120 is the ASCII code for 'x'). Similar code could be used to
implement file I/O, console input, and many other capabilities in
Kaleidoscope.</p>
470 <p>This completes the JIT and optimizer chapter of the Kaleidoscope tutorial. At
471 this point, we can compile a non-Turing-complete programming language, optimize
472 and JIT compile it in a user-driven way. Next up we'll look into <a
473 href="LangImpl5.html">extending the language with control flow constructs</a>,
474 tackling some interesting LLVM IR issues along the way.</p>
478 <!-- *********************************************************************** -->
479 <div class="doc_section"><a name="code">Full Code Listing</a></div>
480 <!-- *********************************************************************** -->
482 <div class="doc_text">
<p>Here is the complete code listing for our running example, enhanced with the
LLVM JIT and optimizer. To build this example, use:</p>
<div class="doc_code">
<pre>
g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
</pre>
</div>
498 <p>Here is the code:</p>
500 <div class="doc_code">
502 #include "llvm/DerivedTypes.h"
503 #include "llvm/ExecutionEngine/ExecutionEngine.h"
504 #include "llvm/Module.h"
505 #include "llvm/ModuleProvider.h"
506 #include "llvm/PassManager.h"
507 #include "llvm/Analysis/Verifier.h"
508 #include "llvm/Target/TargetData.h"
509 #include "llvm/Transforms/Scalar.h"
510 #include "llvm/Support/LLVMBuilder.h"
511 #include <cstdio>
#include &lt;string&gt;
#include &lt;map&gt;
#include &lt;vector&gt;
515 using namespace llvm;
517 //===----------------------------------------------------------------------===//
519 //===----------------------------------------------------------------------===//
521 // The lexer returns tokens [0-255] if it is an unknown character, otherwise one
522 // of these for known things.
enum Token {
  tok_eof = -1,

  // commands
  tok_def = -2, tok_extern = -3,

  // primary
  tok_identifier = -4, tok_number = -5,
};
533 static std::string IdentifierStr; // Filled in if tok_identifier
534 static double NumVal; // Filled in if tok_number
536 /// gettok - Return the next token from standard input.
537 static int gettok() {
538 static int LastChar = ' ';
540 // Skip any whitespace.
541 while (isspace(LastChar))
542 LastChar = getchar();
544 if (isalpha(LastChar)) { // identifier: [a-zA-Z][a-zA-Z0-9]*
545 IdentifierStr = LastChar;
546 while (isalnum((LastChar = getchar())))
547 IdentifierStr += LastChar;
    if (IdentifierStr == "def") return tok_def;
    if (IdentifierStr == "extern") return tok_extern;
    return tok_identifier;
  }
  if (isdigit(LastChar) || LastChar == '.') {   // Number: [0-9.]+
    std::string NumStr;
    do {
      NumStr += LastChar;
      LastChar = getchar();
    } while (isdigit(LastChar) || LastChar == '.');

    NumVal = strtod(NumStr.c_str(), 0);
    return tok_number;
  }
  if (LastChar == '#') {
    // Comment until end of line.
    do LastChar = getchar();
    while (LastChar != EOF && LastChar != '\n' && LastChar != '\r');

    if (LastChar != EOF)
      return gettok();
  }
  // Check for end of file.  Don't eat the EOF.
  if (LastChar == EOF)
    return tok_eof;
578 // Otherwise, just return the character as its ascii value.
  int ThisChar = LastChar;
  LastChar = getchar();
  return ThisChar;
}
584 //===----------------------------------------------------------------------===//
585 // Abstract Syntax Tree (aka Parse Tree)
586 //===----------------------------------------------------------------------===//
588 /// ExprAST - Base class for all expression nodes.
class ExprAST {
public:
  virtual ~ExprAST() {}
  virtual Value *Codegen() = 0;
};
595 /// NumberExprAST - Expression class for numeric literals like "1.0".
class NumberExprAST : public ExprAST {
  double Val;
public:
  NumberExprAST(double val) : Val(val) {}
  virtual Value *Codegen();
};
603 /// VariableExprAST - Expression class for referencing a variable, like "a".
class VariableExprAST : public ExprAST {
  std::string Name;
public:
  VariableExprAST(const std::string &name) : Name(name) {}
  virtual Value *Codegen();
};
611 /// BinaryExprAST - Expression class for a binary operator.
class BinaryExprAST : public ExprAST {
  char Op;
  ExprAST *LHS, *RHS;
public:
  BinaryExprAST(char op, ExprAST *lhs, ExprAST *rhs)
    : Op(op), LHS(lhs), RHS(rhs) {}
  virtual Value *Codegen();
};
621 /// CallExprAST - Expression class for function calls.
class CallExprAST : public ExprAST {
  std::string Callee;
  std::vector<ExprAST*> Args;
public:
  CallExprAST(const std::string &callee, std::vector<ExprAST*> &args)
    : Callee(callee), Args(args) {}
  virtual Value *Codegen();
};
631 /// PrototypeAST - This class represents the "prototype" for a function,
632 /// which captures its argument names as well as if it is an operator.
class PrototypeAST {
  std::string Name;
  std::vector<std::string> Args;
public:
  PrototypeAST(const std::string &name, const std::vector<std::string> &args)
    : Name(name), Args(args) {}

  Function *Codegen();
};
643 /// FunctionAST - This class represents a function definition itself.
class FunctionAST {
  PrototypeAST *Proto;
  ExprAST *Body;
public:
  FunctionAST(PrototypeAST *proto, ExprAST *body)
    : Proto(proto), Body(body) {}

  Function *Codegen();
};
654 //===----------------------------------------------------------------------===//
656 //===----------------------------------------------------------------------===//
658 /// CurTok/getNextToken - Provide a simple token buffer. CurTok is the current
659 /// token the parser it looking at. getNextToken reads another token from the
660 /// lexer and updates CurTok with its results.
static int CurTok;
static int getNextToken() {
  return CurTok = gettok();
}
666 /// BinopPrecedence - This holds the precedence for each binary operator that is
668 static std::map<char, int> BinopPrecedence;
670 /// GetTokPrecedence - Get the precedence of the pending binary operator token.
static int GetTokPrecedence() {
  if (!isascii(CurTok))
    return -1;

  // Make sure it's a declared binop.
  int TokPrec = BinopPrecedence[CurTok];
  if (TokPrec <= 0) return -1;
  return TokPrec;
}
681 /// Error* - These are little helper functions for error handling.
682 ExprAST *Error(const char *Str) { fprintf(stderr, "Error: %s\n", Str);return 0;}
683 PrototypeAST *ErrorP(const char *Str) { Error(Str); return 0; }
684 FunctionAST *ErrorF(const char *Str) { Error(Str); return 0; }
686 static ExprAST *ParseExpression();
///   ::= identifier '(' expression* ')'
691 static ExprAST *ParseIdentifierExpr() {
692 std::string IdName = IdentifierStr;
  getNextToken();  // eat identifier.
696 if (CurTok != '(') // Simple variable ref.
697 return new VariableExprAST(IdName);
700 getNextToken(); // eat (
  std::vector<ExprAST*> Args;
  if (CurTok != ')') {
    while (1) {
      ExprAST *Arg = ParseExpression();
      if (!Arg) return 0;
      Args.push_back(Arg);

      if (CurTok == ')') break;

      if (CurTok != ',')
        return Error("Expected ')'");
      getNextToken();
    }
  }

  // Eat the ')'.
  getNextToken();

  return new CallExprAST(IdName, Args);
}
720 /// numberexpr ::= number
721 static ExprAST *ParseNumberExpr() {
  ExprAST *Result = new NumberExprAST(NumVal);
  getNextToken();  // consume the number
  return Result;
}
727 /// parenexpr ::= '(' expression ')'
728 static ExprAST *ParseParenExpr() {
729 getNextToken(); // eat (.
  ExprAST *V = ParseExpression();
  if (!V) return 0;

  if (CurTok != ')')
    return Error("expected ')'");
  getNextToken();  // eat ).
  return V;
}
740 /// ::= identifierexpr
static ExprAST *ParsePrimary() {
  switch (CurTok) {
  default: return Error("unknown token when expecting an expression");
  case tok_identifier: return ParseIdentifierExpr();
  case tok_number:     return ParseNumberExpr();
  case '(':            return ParseParenExpr();
  }
}
753 /// ::= ('+' primary)*
static ExprAST *ParseBinOpRHS(int ExprPrec, ExprAST *LHS) {
  // If this is a binop, find its precedence.
  while (1) {
    int TokPrec = GetTokPrecedence();

    // If this is a binop that binds at least as tightly as the current binop,
    // consume it, otherwise we are done.
    if (TokPrec < ExprPrec)
      return LHS;

    // Okay, we know this is a binop.
    int BinOp = CurTok;
    getNextToken();  // eat binop

    // Parse the primary expression after the binary operator.
    ExprAST *RHS = ParsePrimary();
    if (!RHS) return 0;

    // If BinOp binds less tightly with RHS than the operator after RHS, let
    // the pending operator take RHS as its LHS.
    int NextPrec = GetTokPrecedence();
    if (TokPrec < NextPrec) {
      RHS = ParseBinOpRHS(TokPrec+1, RHS);
      if (RHS == 0) return 0;
    }

    // Merge LHS/RHS.
    LHS = new BinaryExprAST(BinOp, LHS, RHS);
  }
}
786 /// ::= primary binoprhs
static ExprAST *ParseExpression() {
  ExprAST *LHS = ParsePrimary();
  if (!LHS) return 0;

  return ParseBinOpRHS(0, LHS);
}
796 /// ::= id '(' id* ')'
797 static PrototypeAST *ParsePrototype() {
798 if (CurTok != tok_identifier)
799 return ErrorP("Expected function name in prototype");
  std::string FnName = IdentifierStr;
  getNextToken();

  if (CurTok != '(')
    return ErrorP("Expected '(' in prototype");

  std::vector<std::string> ArgNames;
  while (getNextToken() == tok_identifier)
    ArgNames.push_back(IdentifierStr);
  if (CurTok != ')')
    return ErrorP("Expected ')' in prototype");

  // success.
  getNextToken();  // eat ')'.

  return new PrototypeAST(FnName, ArgNames);
}
819 /// definition ::= 'def' prototype expression
820 static FunctionAST *ParseDefinition() {
821 getNextToken(); // eat def.
822 PrototypeAST *Proto = ParsePrototype();
823 if (Proto == 0) return 0;
  if (ExprAST *E = ParseExpression())
    return new FunctionAST(Proto, E);
  return 0;
}
830 /// toplevelexpr ::= expression
831 static FunctionAST *ParseTopLevelExpr() {
832 if (ExprAST *E = ParseExpression()) {
833 // Make an anonymous proto.
    PrototypeAST *Proto = new PrototypeAST("", std::vector<std::string>());
    return new FunctionAST(Proto, E);
  }
  return 0;
}
840 /// external ::= 'extern' prototype
841 static PrototypeAST *ParseExtern() {
  getNextToken();  // eat extern.
  return ParsePrototype();
}
846 //===----------------------------------------------------------------------===//
848 //===----------------------------------------------------------------------===//
850 static Module *TheModule;
851 static LLVMFoldingBuilder Builder;
852 static std::map<std::string, Value*> NamedValues;
853 static FunctionPassManager *TheFPM;
855 Value *ErrorV(const char *Str) { Error(Str); return 0; }
Value *NumberExprAST::Codegen() {
  return ConstantFP::get(Type::DoubleTy, APFloat(Val));
}
861 Value *VariableExprAST::Codegen() {
862 // Look this variable up in the function.
  Value *V = NamedValues[Name];
  return V ? V : ErrorV("Unknown variable name");
}
Value *BinaryExprAST::Codegen() {
  Value *L = LHS->Codegen();
  Value *R = RHS->Codegen();
  if (L == 0 || R == 0) return 0;

  switch (Op) {
  case '+': return Builder.CreateAdd(L, R, "addtmp");
  case '-': return Builder.CreateSub(L, R, "subtmp");
  case '*': return Builder.CreateMul(L, R, "multmp");
  case '<':
    L = Builder.CreateFCmpULT(L, R, "cmptmp");
    // Convert bool 0/1 to double 0.0 or 1.0
    return Builder.CreateUIToFP(L, Type::DoubleTy, "booltmp");
  default: return ErrorV("invalid binary operator");
  }
}
Value *CallExprAST::Codegen() {
  // Look up the name in the global module table.
  Function *CalleeF = TheModule->getFunction(Callee);
  if (CalleeF == 0)
    return ErrorV("Unknown function referenced");

  // If argument mismatch error.
  if (CalleeF->arg_size() != Args.size())
    return ErrorV("Incorrect # arguments passed");

  std::vector<Value*> ArgsV;
  for (unsigned i = 0, e = Args.size(); i != e; ++i) {
    ArgsV.push_back(Args[i]->Codegen());
    if (ArgsV.back() == 0) return 0;
  }

  return Builder.CreateCall(CalleeF, ArgsV.begin(), ArgsV.end(), "calltmp");
}
903 Function *PrototypeAST::Codegen() {
904 // Make the function type: double(double,double) etc.
905 std::vector<const Type*> Doubles(Args.size(), Type::DoubleTy);
906 FunctionType *FT = FunctionType::get(Type::DoubleTy, Doubles, false);
908 Function *F = new Function(FT, Function::ExternalLinkage, Name, TheModule);
910 // If F conflicted, there was already something named 'Name'. If it has a
911 // body, don't allow redefinition or reextern.
912 if (F->getName() != Name) {
913 // Delete the one we just made and get the existing one.
914 F->eraseFromParent();
915 F = TheModule->getFunction(Name);
    // If F already has a body, reject this.
    if (!F->empty()) {
      ErrorF("redefinition of function");
      return 0;
    }

    // If F took a different number of args, reject.
    if (F->arg_size() != Args.size()) {
      ErrorF("redefinition of function with different # args");
      return 0;
    }
  }
  // Set names for all arguments.
  unsigned Idx = 0;
  for (Function::arg_iterator AI = F->arg_begin(); Idx != Args.size();
       ++AI, ++Idx) {
    AI->setName(Args[Idx]);

    // Add arguments to variable symbol table.
    NamedValues[Args[Idx]] = AI;
  }

  return F;
}
Function *FunctionAST::Codegen() {
  NamedValues.clear();

  Function *TheFunction = Proto->Codegen();
  if (TheFunction == 0)
    return 0;
950 // Create a new basic block to start insertion into.
951 BasicBlock *BB = new BasicBlock("entry", TheFunction);
952 Builder.SetInsertPoint(BB);
954 if (Value *RetVal = Body->Codegen()) {
955 // Finish off the function.
956 Builder.CreateRet(RetVal);
958 // Validate the generated code, checking for consistency.
959 verifyFunction(*TheFunction);
    // Optimize the function.
    TheFPM->run(*TheFunction);

    return TheFunction;
  }

  // Error reading body, remove function.
  TheFunction->eraseFromParent();
  return 0;
}
972 //===----------------------------------------------------------------------===//
973 // Top-Level parsing and JIT Driver
974 //===----------------------------------------------------------------------===//
976 static ExecutionEngine *TheExecutionEngine;
static void HandleDefinition() {
  if (FunctionAST *F = ParseDefinition()) {
    if (Function *LF = F->Codegen()) {
      fprintf(stderr, "Read function definition:");
      LF->dump();
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}
static void HandleExtern() {
  if (PrototypeAST *P = ParseExtern()) {
    if (Function *F = P->Codegen()) {
      fprintf(stderr, "Read extern: ");
      F->dump();
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}
static void HandleTopLevelExpression() {
  // Evaluate a top level expression into an anonymous function.
  if (FunctionAST *F = ParseTopLevelExpr()) {
    if (Function *LF = F->Codegen()) {
      // JIT the function, returning a function pointer.
      void *FPtr = TheExecutionEngine->getPointerToFunction(LF);

      // Cast it to the right type (takes no arguments, returns a double) so we
      // can call it as a native function.
      double (*FP)() = (double (*)())FPtr;
      fprintf(stderr, "Evaluated to %f\n", FP());
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}
1020 /// top ::= definition | external | expression | ';'
static void MainLoop() {
  while (1) {
    fprintf(stderr, "ready> ");
    switch (CurTok) {
    case tok_eof:    return;
    case ';':        getNextToken(); break;  // ignore top level semicolons.
    case tok_def:    HandleDefinition(); break;
    case tok_extern: HandleExtern(); break;
    default:         HandleTopLevelExpression(); break;
    }
  }
}
1036 //===----------------------------------------------------------------------===//
1037 // "Library" functions that can be "extern'd" from user code.
1038 //===----------------------------------------------------------------------===//
/// putchard - putchar that takes a double and returns 0.
extern "C"
double putchard(double X) {
  putchar((char)X);
  return 0;
}
1047 //===----------------------------------------------------------------------===//
1048 // Main driver code.
1049 //===----------------------------------------------------------------------===//
int main() {
  // Install standard binary operators.
  // 1 is lowest precedence.
1054 BinopPrecedence['<'] = 10;
1055 BinopPrecedence['+'] = 20;
1056 BinopPrecedence['-'] = 20;
1057 BinopPrecedence['*'] = 40; // highest.
  // Prime the first token.
  fprintf(stderr, "ready> ");
  getNextToken();
1063 // Make the module, which holds all the code.
1064 TheModule = new Module("my cool jit");
1067 TheExecutionEngine = ExecutionEngine::create(TheModule);
  {
    ExistingModuleProvider OurModuleProvider(TheModule);
    FunctionPassManager OurFPM(&OurModuleProvider);
1073 // Set up the optimizer pipeline. Start with registering info about how the
1074 // target lays out data structures.
1075 OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
1076 // Do simple "peephole" optimizations and bit-twiddling optzns.
1077 OurFPM.add(createInstructionCombiningPass());
1078 // Reassociate expressions.
1079 OurFPM.add(createReassociatePass());
1080 // Eliminate Common SubExpressions.
1081 OurFPM.add(createGVNPass());
1082 // Simplify the control flow graph (deleting unreachable blocks, etc).
1083 OurFPM.add(createCFGSimplificationPass());
1085 // Set the global so the code gen can use this.
1086 TheFPM = &OurFPM;
    // Run the main "interpreter loop" now.
    MainLoop();

    TheFPM = 0;
  }  // Free module provider and pass manager.
  // Print out all of the generated code.
  TheModule->dump();
  return 0;
}
1104 <!-- *********************************************************************** -->
1107 <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
1108 src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
1109 <a href="http://validator.w3.org/check/referer"><img
1110 src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>
1112 <a href="mailto:sabre@nondot.org">Chris Lattner</a><br>
1113 <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
1114 Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $