From: Bill Wendling Date: Mon, 28 Aug 2006 02:26:32 +0000 (+0000) Subject: Added some preliminary text to the TargetJITInfo class section. X-Git-Url: http://plrg.eecs.uci.edu/git/?a=commitdiff_plain;h=91e10c42ea395627b7b0e28720a801bfffd87733;p=oota-llvm.git Added some preliminary text to the TargetJITInfo class section. Fixed some inconsistencies with format. Corrected some of the text. Put code inside of "code" div tags. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@29937 91177308-0d34-0410-b5e6-96231b3b80d8 --- diff --git a/docs/CodeGenerator.html b/docs/CodeGenerator.html index da23cf2d994..95eb4774949 100644 --- a/docs/CodeGenerator.html +++ b/docs/CodeGenerator.html @@ -74,7 +74,8 @@
-

Written by Chris Lattner

+

Written by Chris Lattner & + Bill Wendling

@@ -91,9 +92,10 @@

 The LLVM target-independent code generator is a framework that provides a
 suite of reusable components for translating the LLVM internal representation to
-the machine code for a specified target -- either in assembly form (suitable for
-a static compiler) or in binary machine code format (usable for a JIT compiler).
-The LLVM target-independent code generator consists of five main components:
+the machine code for a specified target—either in assembly form (suitable
+for a static compiler) or in binary machine code format (usable for a JIT
+compiler). The LLVM target-independent code generator consists of five main
+components:

  1. Abstract target description interfaces which @@ -166,7 +168,7 @@ to the GCC RTL form and uses GCC to emit machine code for a target.

    implement radically different code generators in the LLVM system that do not make use of any of the built-in components. Doing so is not recommended at all, but could be required for radically different targets that do not fit into the -LLVM machine description model: programmable FPGAs for example.

    +LLVM machine description model: FPGAs for example.

@@ -228,23 +230,20 @@ format or in machine code. -

-The code generator is based on the assumption that the instruction selector will -use an optimal pattern matching selector to create high-quality sequences of +

+The code generator is based on the assumption that the instruction selector
+will use an optimal pattern matching selector to create high-quality sequences of
 native instructions. Alternative code generator designs based on pattern
-expansion and
-aggressive iterative peephole optimization are much slower. This design
-permits efficient compilation (important for JIT environments) and
+expansion and aggressive iterative peephole optimization are much slower. This
+design permits efficient compilation (important for JIT environments) and
 aggressive optimization (used when generating code offline) by allowing
 components of varying levels of sophistication to be used for any step of
 compilation.

-

-In addition to these stages, target implementations can insert arbitrary +

+In addition to these stages, target implementations can insert arbitrary
 target-specific passes into the flow. For example, the X86 target uses a
 special pass to handle the 80x87 floating point stack architecture. Other
-targets with unusual requirements can be supported with custom passes as needed.
-
+targets with unusual requirements can be supported with custom passes as
+needed.

@@ -264,18 +263,17 @@ In order to allow the maximum amount of commonality to be factored out, the LLVM code generator uses the TableGen tool to describe big chunks of the target machine, which allows the use of domain-specific and target-specific abstractions to reduce the amount of -repetition. -

+repetition.

 As LLVM continues to be developed and refined, we plan to move more and more
-of the target description to be in .td form. Doing so gives us a
+of the target description to the .td form. Doing so gives us a
 number of advantages. The most important is that it makes it easier to port
-LLVM, because it reduces the amount of C++ code that has to be written and the
+LLVM because it reduces the amount of C++ code that has to be written, and the
 surface area of the code generator that needs to be understood before someone
-can get in an get something working. Second, it is also important to us because
-it makes it easier to change things: in particular, if tables and other things
-are all emitted by tblgen, we only need to change one place (tblgen) to update
-all of the targets to a new interface.
+can get something working. Second, it makes it easier to change things. In
+particular, if tables and other things are all emitted by tblgen, we
+only need a change in one place (tblgen) to update all of the targets
+to a new interface.

@@ -287,9 +285,9 @@ all of the targets to a new interface.

-

The LLVM target description classes (which are located in the +

+The LLVM target description classes (located in the
 include/llvm/Target directory) provide an abstract description of the
-target machine; independent of any particular client. These classes are
+target machine independent of any particular client. These classes are
 designed to capture the abstract properties of the target (such as the
 instructions and registers it has), and do not incorporate any particular
 pieces of code generation algorithms.

@@ -349,14 +347,16 @@ little-endian or big-endian.

The TargetLowering class is used by SelectionDAG based instruction selectors primarily to describe how LLVM code should be lowered to SelectionDAG -operations. Among other things, this class indicates: -

@@ -372,14 +372,14 @@ operations. Among other things, this class indicates: target and any interactions between the registers.

 Registers in the code generator are represented in the code generator by
-unsigned numbers. Physical registers (those that actually exist in the target
+unsigned integers. Physical registers (those that actually exist in the target
 description) are unique small numbers, and virtual registers are generally
 large. Note that register #0 is reserved as a flag value.

 Each register in the processor description has an associated
-TargetRegisterDesc entry, which provides a textual name for the register
-(used for assembly output and debugging dumps) and a set of aliases (used to
-indicate that one register overlaps with another).
+TargetRegisterDesc entry, which provides a textual name for the
+register (used for assembly output and debugging dumps) and a set of aliases
+(used to indicate whether one register overlaps with another).

In addition to the per-register description, the MRegisterInfo class @@ -409,7 +409,8 @@ href="TableGenFundamentals.html">TableGen description of the register file. instruction the target supports. Descriptors define things like the mnemonic for the opcode, the number of operands, the list of implicit register uses and defs, whether the instruction has certain target-independent properties - (accesses memory, is commutable, etc), and holds any target-specific flags.

+ (accesses memory, is commutable, etc), and holds any target-specific + flags.

@@ -421,7 +422,7 @@ href="TableGenFundamentals.html">TableGen description of the register file.

 The TargetFrameInfo class is used to provide information about the
 stack frame layout of the target. It holds the direction of stack growth, the
 known stack alignment on entry to each function, and the offset to the
-local area. The offset to the local area is the offset from the stack
+local area. The offset to the local area is the offset from the stack
 pointer on function entry to the first location where function data (local
 variables, spill locations) can be stored.

@@ -432,13 +433,11 @@ href="TableGenFundamentals.html">TableGen description of the register file.
-

The TargetSubtarget class is used to provide information about the specific chip set being targeted. A sub-target informs code generation of which instructions are supported, instruction latencies and instruction execution itinerary; i.e., which processing units are used, in what order, and - for how long. -

+ for how long.

@@ -447,6 +446,14 @@ href="TableGenFundamentals.html">TableGen description of the register file. The TargetJITInfo class +
+

+The TargetJITInfo class exposes an abstract interface used by the
+Just-In-Time code generator to perform target-specific activities, such as
+emitting stubs. If a TargetMachine supports JIT code generation, it
+should provide one of these objects through the getJITInfo
+method.

+
+
Machine code description classes @@ -455,16 +462,16 @@ href="TableGenFundamentals.html">TableGen description of the register file.
-

-At the high-level, LLVM code is translated to a machine specific representation
-formed out of MachineFunction,
-MachineBasicBlock, and
+At the high-level, LLVM code is translated to a machine specific
+representation formed out of
+MachineFunction,
+MachineBasicBlock, and
 MachineInstr instances
-(defined in include/llvm/CodeGen). This representation is completely target
-agnostic, representing instructions in their most abstract form: an opcode and a
-series of operands. This representation is designed to support both SSA
-representation for machine code, as well as a register allocated, non-SSA form.
-
+(defined in include/llvm/CodeGen). This representation is completely
+target agnostic, representing instructions in their most abstract form: an
+opcode and a series of operands. This representation is designed to support
+both an SSA representation for machine code, as well as a register allocated,
+non-SSA form.

@@ -480,17 +487,17 @@ representation for machine code, as well as a register allocated, non-SSA form. representing machine instructions. In particular, it only keeps track of an opcode number and a set of operands.

-

-The opcode number is a simple unsigned number that only has meaning to a
+The opcode number is a simple unsigned integer that only has meaning to a
 specific backend. All of the instructions for a target should be defined in
 the *InstrInfo.td file for the target. The opcode enum values are
 auto-generated from this description. The MachineInstr class does not
 have any information about how to interpret the instruction (i.e., what the
-semantics of the instruction are): for that you must refer to the
+semantics of the instruction are); for that you must refer to the
 TargetInstrInfo class.

 The operands of a machine instruction can be of several different types:
-they can be a register reference, constant integer, basic block reference, etc.
-In addition, a machine operand should be marked as a def or a use of the value
+a register reference, a constant integer, a basic block reference, etc. In
+addition, a machine operand should be marked as a def or a use of the value
 (though only registers are allowed to be defs).

By convention, the LLVM code generator orders instruction operands so that @@ -505,11 +512,13 @@ first.

list has several advantages. In particular, the debugging printer will print the instruction like this:

+
-  %r3 = add %i1, %i2
+%r3 = add %i1, %i2
 
+
-

If the first operand is a def, and it is also easier to Also if the first operand is a def, it is easier to create instructions whose only def is the first operand.

@@ -525,39 +534,44 @@ operand.

Machine instructions are created by using the BuildMI functions, located in the include/llvm/CodeGen/MachineInstrBuilder.h file. The BuildMI functions make it easy to build arbitrary machine -instructions. Usage of the BuildMI functions look like this: -

+instructions. Usage of the BuildMI functions look like this:

+
-  // Create a 'DestReg = mov 42' (rendered in X86 assembly as 'mov DestReg, 42')
-  // instruction.  The '1' specifies how many operands will be added.
-  MachineInstr *MI = BuildMI(X86::MOV32ri, 1, DestReg).addImm(42);
+// Create a 'DestReg = mov 42' (rendered in X86 assembly as 'mov DestReg, 42')
+// instruction.  The '1' specifies how many operands will be added.
+MachineInstr *MI = BuildMI(X86::MOV32ri, 1, DestReg).addImm(42);
 
-  // Create the same instr, but insert it at the end of a basic block.
-  MachineBasicBlock &MBB = ...
-  BuildMI(MBB, X86::MOV32ri, 1, DestReg).addImm(42);
+// Create the same instr, but insert it at the end of a basic block.
+MachineBasicBlock &MBB = ...
+BuildMI(MBB, X86::MOV32ri, 1, DestReg).addImm(42);
 
-  // Create the same instr, but insert it before a specified iterator point.
-  MachineBasicBlock::iterator MBBI = ...
-  BuildMI(MBB, MBBI, X86::MOV32ri, 1, DestReg).addImm(42);
+// Create the same instr, but insert it before a specified iterator point.
+MachineBasicBlock::iterator MBBI = ...
+BuildMI(MBB, MBBI, X86::MOV32ri, 1, DestReg).addImm(42);
 
-  // Create a 'cmp Reg, 0' instruction, no destination reg.
-  MI = BuildMI(X86::CMP32ri, 2).addReg(Reg).addImm(0);
-  // Create an 'sahf' instruction which takes no operands and stores nothing.
-  MI = BuildMI(X86::SAHF, 0);
+// Create a 'cmp Reg, 0' instruction, no destination reg.
+MI = BuildMI(X86::CMP32ri, 2).addReg(Reg).addImm(0);
+// Create an 'sahf' instruction which takes no operands and stores nothing.
+MI = BuildMI(X86::SAHF, 0);
 
-  // Create a self looping branch instruction.
-  BuildMI(MBB, X86::JNE, 1).addMBB(&MBB);
+// Create a self looping branch instruction.
+BuildMI(MBB, X86::JNE, 1).addMBB(&MBB);
 
+
-

-The key thing to remember with the BuildMI functions is that you have -to specify the number of operands that the machine instruction will take. This -allows for efficient memory allocation. You also need to specify if operands -default to be uses of values, not definitions. If you need to add a definition -operand (other than the optional destination register), you must explicitly -mark it as such. -

+

The key thing to remember with the BuildMI functions is that you +have to specify the number of operands that the machine instruction will take. +This allows for efficient memory allocation. You also need to specify if +operands default to be uses of values, not definitions. If you need to add a +definition operand (other than the optional destination register), you must +explicitly mark it as such:

+ +
+
+MI.addReg(Reg, MachineOperand::Def);
+
+
@@ -579,48 +593,54 @@ copies a virtual register into or out of a physical register when needed.

For example, consider this simple LLVM example:

+
-  int %test(int %X, int %Y) {
-    %Z = div int %X, %Y
-    ret int %Z
-  }
+int %test(int %X, int %Y) {
+  %Z = div int %X, %Y
+  ret int %Z
+}
 
+
-

The X86 instruction selector produces this machine code for the div -and ret (use +

The X86 instruction selector produces this machine code for the div +and ret (use "llc X.bc -march=x86 -print-machineinstrs" to get this):

+
-        ;; Start of div
-        %EAX = mov %reg1024           ;; Copy X (in reg1024) into EAX
-        %reg1027 = sar %reg1024, 31
-        %EDX = mov %reg1027           ;; Sign extend X into EDX
-        idiv %reg1025                 ;; Divide by Y (in reg1025)
-        %reg1026 = mov %EAX           ;; Read the result (Z) out of EAX
-
-        ;; Start of ret
-        %EAX = mov %reg1026           ;; 32-bit return value goes in EAX
-        ret
+;; Start of div
+%EAX = mov %reg1024           ;; Copy X (in reg1024) into EAX
+%reg1027 = sar %reg1024, 31
+%EDX = mov %reg1027           ;; Sign extend X into EDX
+idiv %reg1025                 ;; Divide by Y (in reg1025)
+%reg1026 = mov %EAX           ;; Read the result (Z) out of EAX
+
+;; Start of ret
+%EAX = mov %reg1026           ;; 32-bit return value goes in EAX
+ret
 
+

By the end of code generation, the register allocator has coalesced -the registers and deleted the resultant identity moves, producing the +the registers and deleted the resultant identity moves producing the following code:

+
-        ;; X is in EAX, Y is in ECX
-        mov %EAX, %EDX
-        sar %EDX, 31
-        idiv %ECX
-        ret 
+;; X is in EAX, Y is in ECX
+mov %EAX, %EDX
+sar %EDX, 31
+idiv %ECX
+ret 
 
+

This approach is extremely general (if it can handle the X86 architecture, it can handle anything!) and allows all of the target specific knowledge about the instruction stream to be isolated in the instruction selector. Note that physical registers should have a short lifetime for good -code generation, and all physical registers are assumed dead on entry and -exit of basic blocks (before register allocation). Thus if you need a value +code generation, and all physical registers are assumed dead on entry to and +exit from basic blocks (before register allocation). Thus, if you need a value to be live across basic block boundaries, it must live in a virtual register.

@@ -628,18 +648,18 @@ register.

- Machine code SSA form + Machine code in SSA form

MachineInstr's are initially selected in SSA-form, and are maintained in SSA-form until register allocation happens. For the most -part, this is trivially simple since LLVM is already in SSA form: LLVM PHI nodes +part, this is trivially simple since LLVM is already in SSA form; LLVM PHI nodes become machine code PHI nodes, and virtual registers are only allowed to have a single definition.

-

After register allocation, machine code is no longer in SSA-form, as there +

After register allocation, machine code is no longer in SSA-form because there are no virtual registers left in the code.

@@ -652,12 +672,12 @@ are no virtual registers left in the code.

The MachineBasicBlock class contains a list of machine instructions -(MachineInstr instances). It roughly corresponds to -the LLVM code input to the instruction selector, but there can be a one-to-many -mapping (i.e. one LLVM basic block can map to multiple machine basic blocks). -The MachineBasicBlock class has a "getBasicBlock" method, which returns -the LLVM basic block that it comes from. -

+(MachineInstr instances). It roughly +corresponds to the LLVM code input to the instruction selector, but there can be +a one-to-many mapping (i.e. one LLVM basic block can map to multiple machine +basic blocks). The MachineBasicBlock class has a +"getBasicBlock" method, which returns the LLVM basic block that it +comes from.

@@ -669,18 +689,16 @@ the LLVM basic block that it comes from.

The MachineFunction class contains a list of machine basic blocks -(MachineBasicBlock instances). It corresponds -one-to-one with the LLVM function input to the instruction selector. In -addition to a list of basic blocks, the MachineFunction contains a -the MachineConstantPool, MachineFrameInfo, MachineFunctionInfo, -SSARegMap, and a set of live in and live out registers for the function. See -MachineFunction.h for more information. -

+(MachineBasicBlock instances). It
+corresponds one-to-one with the LLVM function input to the instruction selector.
+In addition to a list of basic blocks, the MachineFunction contains
+a MachineConstantPool, a MachineFrameInfo, a
+MachineFunctionInfo, a SSARegMap, and a set of live in and
+live out registers for the function. See
+include/llvm/CodeGen/MachineFunction.h for more information.

- -
Target-independent code generation algorithms @@ -706,14 +724,14 @@ Instruction Selection is the process of translating LLVM code presented to the code generator into target-specific machine instructions. There are several well-known ways to do this in the literature. In LLVM there are two main forms: the SelectionDAG based instruction selector framework and an old-style 'simple' -instruction selector (which effectively peephole selects each LLVM instruction -into a series of machine instructions). We recommend that all targets use the +instruction selector, which effectively peephole selects each LLVM instruction +into a series of machine instructions. We recommend that all targets use the SelectionDAG infrastructure.

Portions of the DAG instruction selector are generated from the target -description files (*.td) files. Eventually, we aim for the entire -instruction selector to be generated from these .td files.

+description (*.td) files. Our goal is for the entire instruction +selector to be generated from these .td files.

@@ -723,21 +741,18 @@ instruction selector to be generated from these .td files.

-

-The SelectionDAG provides an abstraction for code representation in a way that -is amenable to instruction selection using automatic techniques -(e.g. dynamic-programming based optimal pattern matching selectors), It is also -well suited to other phases of code generation; in particular, +

The SelectionDAG provides an abstraction for code representation in a way +that is amenable to instruction selection using automatic techniques +(e.g. dynamic-programming based optimal pattern matching selectors). It is also +well-suited to other phases of code generation; in particular, instruction scheduling (SelectionDAG's are very close to scheduling DAGs post-selection). Additionally, the SelectionDAG provides a host representation where a large variety of very-low-level (but target-independent) optimizations may be -performed: ones which require extensive information about the instructions -efficiently supported by the target. -

+performed; ones which require extensive information about the instructions +efficiently supported by the target.

-

-The SelectionDAG is a Directed-Acyclic-Graph whose nodes are instances of the +

The SelectionDAG is a Directed-Acyclic-Graph whose nodes are instances of the SDNode class. The primary payload of the SDNode is its operation code (Opcode) that indicates what operation the node performs and the operands to the operation. @@ -750,38 +765,33 @@ both the dividend and the remainder. Many other situations require multiple values as well. Each node also has some number of operands, which are edges to the node defining the used value. Because nodes may define multiple values, edges are represented by instances of the SDOperand class, which is -a <SDNode, unsigned> pair, indicating the node and result -value being used, respectively. Each value produced by an SDNode has an -associated MVT::ValueType, indicating what type the value is. -

- -

-SelectionDAGs contain two different kinds of values: those that represent data -flow and those that represent control flow dependencies. Data values are simple -edges with an integer or floating point value type. Control edges are -represented as "chain" edges which are of type MVT::Other. These edges provide -an ordering between nodes that have side effects (such as -loads/stores/calls/return/etc). All nodes that have side effects should take a -token chain as input and produce a new one as output. By convention, token -chain inputs are always operand #0, and chain results are always the last +a <SDNode, unsigned> pair, indicating the node and result +value being used, respectively. Each value produced by an SDNode has +an associated MVT::ValueType indicating what type the value is.

+ +

+SelectionDAGs contain two different kinds of values: those that represent
+data flow and those that represent control flow dependencies. Data values are
+simple edges with an integer or floating point value type. Control edges are
+represented as "chain" edges which are of type MVT::Other. These edges
+provide an ordering between nodes that have side effects (such as
+loads, stores, calls, returns, etc). All nodes that have side effects should
+take a token chain as input and produce a new one as output. By convention,
+token chain inputs are always operand #0, and chain results are always the last
 value produced by an operation.

-

-A SelectionDAG has designated "Entry" and "Root" nodes. The Entry node is -always a marker node with an Opcode of ISD::EntryToken. The Root node is the -final side-effecting node in the token chain. For example, in a single basic -block function, this would be the return node. -

+

A SelectionDAG has designated "Entry" and "Root" nodes. The Entry node is +always a marker node with an Opcode of ISD::EntryToken. The Root node +is the final side-effecting node in the token chain. For example, in a single +basic block function it would be the return node.

+ +

+One important concept for SelectionDAGs is the notion of a "legal" vs.
+"illegal" DAG. A legal DAG for a target is one that only uses supported
+operations and supported types. On a 32-bit PowerPC, for example, a DAG with
+a value of type i1, i8, i16, or i64 would be illegal, as would a DAG that uses a
+SREM or UREM operation. The
+legalize phase is responsible for turning
+an illegal DAG into a legal DAG.

-

-One important concept for SelectionDAGs is the notion of a "legal" vs. "illegal" -DAG. A legal DAG for a target is one that only uses supported operations and -supported types. On a 32-bit PowerPC, for example, a DAG with any values of i1, -i8, i16, -or i64 type would be illegal, as would a DAG that uses a SREM or UREM operation. -The legalize -phase is responsible for turning an illegal DAG into a legal DAG. -

@@ -791,25 +801,23 @@ phase is responsible for turning an illegal DAG into a legal DAG.
-

-SelectionDAG-based instruction selection consists of the following steps: -

+

SelectionDAG-based instruction selection consists of the following steps:

    -
  1. Build initial DAG - This stage performs - a simple translation from the input LLVM code to an illegal SelectionDAG. -
  2. +
  3. Build initial DAG - This stage + performs a simple translation from the input LLVM code to an illegal + SelectionDAG.
  4. Optimize SelectionDAG - This stage - performs simple optimizations on the SelectionDAG to simplify it and - recognize meta instructions (like rotates and div/rem pairs) for - targets that support these meta operations. This makes the resultant code - more efficient and the 'select instructions from DAG' phase (below) simpler. -
  5. + performs simple optimizations on the SelectionDAG to simplify it, and + recognize meta instructions (like rotates and div/rem + pairs) for targets that support these meta operations. This makes the + resultant code more efficient and the select + instructions from DAG phase (below) simpler.
  6. Legalize SelectionDAG - This stage - converts the illegal SelectionDAG to a legal SelectionDAG, by eliminating + converts the illegal SelectionDAG to a legal SelectionDAG by eliminating unsupported operations and data types.
  7. Optimize SelectionDAG (#2) - This - second run of the SelectionDAG optimized the newly legalized DAG, to + second run of the SelectionDAG optimizes the newly legalized DAG to eliminate inefficiencies introduced by legalization.
  8. Select instructions from DAG - Finally, the target instruction selector matches the DAG operations to target @@ -831,8 +839,8 @@ of the code compiled (if you only get errors printed to the console while using this, you probably need to configure your system to add support for it). The -view-sched-dags option views the SelectionDAG output from the Select phase and input to the Scheduler -phase. -

    +phase.

    +
@@ -842,17 +850,15 @@ phase.
-

-The initial SelectionDAG is naively peephole expanded from the LLVM input by -the SelectionDAGLowering class in the SelectionDAGISel.cpp file. The -intent of this pass is to expose as much low-level, target-specific details -to the SelectionDAG as possible. This pass is mostly hard-coded (e.g. an LLVM -add turns into an SDNode add while a geteelementptr is expanded into the obvious -arithmetic). This pass requires target-specific hooks to lower calls and -returns, varargs, etc. For these features, the TargetLowering interface is -used. -

+

+The initial SelectionDAG is naively peephole expanded from the LLVM input by
+the SelectionDAGLowering class in the
+lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp file. The intent of this
+pass is to expose as much low-level, target-specific detail to the SelectionDAG
+as possible. This pass is mostly hard-coded (e.g. an LLVM add turns
+into an SDNode add while a getelementptr is expanded into the
+obvious arithmetic). This pass requires target-specific hooks to lower calls,
+returns, varargs, etc. For these features, the
+TargetLowering interface is used.

@@ -875,38 +881,35 @@ tasks:

that all f32 values are promoted to f64 and that all i1/i8/i16 values are promoted to i32. The same target might require that all i64 values be expanded into i32 values. These changes can insert sign and zero - extensions as - needed to make sure that the final code has the same behavior as the - input.

+ extensions as needed to make sure that the final code has the same + behavior as the input.

A target implementation tells the legalizer which types are supported (and which register class to use for them) by calling the - "addRegisterClass" method in its TargetLowering constructor.

+ addRegisterClass method in its TargetLowering constructor.

  • Eliminate operations that are not supported by the target.

    Targets often have weird constraints, such as not supporting every operation on every supported datatype (e.g. X86 does not support byte conditional moves and PowerPC does not support sign-extending loads from - a 16-bit memory location). Legalize takes care by open-coding + a 16-bit memory location). Legalize takes care of this by open-coding another sequence of operations to emulate the operation ("expansion"), by - promoting to a larger type that supports the operation - (promotion), or using a target-specific hook to implement the - legalization (custom).

    + promoting one type to a larger type that supports the operation + ("promotion"), or by using a target-specific hook to implement the + legalization ("custom").

    A target implementation tells the legalizer which operations are not supported (and which of the above three actions to take) by calling the - "setOperationAction" method in its TargetLowering constructor.

    + setOperationAction method in its TargetLowering + constructor.

  • -

    -Prior to the existance of the Legalize pass, we required that every -target selector supported and handled every +

+Prior to the existence of the Legalize pass, we required that every target
+selector supported and handled every

    +the Legalize phase allows all of the cannonicalization patterns to be shared +across targets, and makes it very easy to optimize the cannonicalized code +because it is still in the form of a DAG.

    @@ -918,27 +921,24 @@ a DAG.
    -

    -The SelectionDAG optimization phase is run twice for code generation: once +

    The SelectionDAG optimization phase is run twice for code generation: once immediately after the DAG is built and once after legalization. The first run of the pass allows the initial code to be cleaned up (e.g. performing optimizations that depend on knowing that the operators have restricted type inputs). The second run of the pass cleans up the messy code generated by the Legalize pass, which allows Legalize to be very simple (it can focus on making -code legal instead of focusing on generating good and legal code). -

    +code legal instead of focusing on generating good and legal code).

    -

    -One important class of optimizations performed is optimizing inserted sign and -zero extension instructions. We currently use ad-hoc techniques, but could move -to more rigorous techniques in the future. Here are some good -papers on the subject:

    +

    One important class of optimizations performed is optimizing inserted sign +and zero extension instructions. We currently use ad-hoc techniques, but could +move to more rigorous techniques in the future. Here are some good papers on +the subject:

    -"Widening -integer arithmetic"
    -Kevin Redwine and Norman Ramsey
    -International Conference on Compiler Construction (CC) 2004 + "Widening + integer arithmetic"
    + Kevin Redwine and Norman Ramsey
    + International Conference on Compiler Construction (CC) 2004

    @@ -960,40 +960,44 @@ International Conference on Compiler Construction (CC) 2004

    The Select phase is the bulk of the target-specific code for instruction -selection. This phase takes a legal SelectionDAG as input, -pattern matches the instructions supported by the target to this DAG, and -produces a new DAG of target code. For example, consider the following LLVM -fragment:

    +selection. This phase takes a legal SelectionDAG as input, pattern matches the +instructions supported by the target to this DAG, and produces a new DAG of +target code. For example, consider the following LLVM fragment:

    +
    -   %t1 = add float %W, %X
    -   %t2 = mul float %t1, %Y
    -   %t3 = add float %t2, %Z
    +%t1 = add float %W, %X
    +%t2 = mul float %t1, %Y
    +%t3 = add float %t2, %Z
     
    +
    -

    This LLVM code corresponds to a SelectionDAG that looks basically like this: -

    +

    This LLVM code corresponds to a SelectionDAG that looks basically like +this:

    +
    -  (fadd:f32 (fmul:f32 (fadd:f32 W, X), Y), Z)
    +(fadd:f32 (fmul:f32 (fadd:f32 W, X), Y), Z)
     
    +

    If a target supports floating point multiply-and-add (FMA) operations, one of the adds can be merged with the multiply. On the PowerPC, for example, the output of the instruction selector might look like this DAG:

    +
    -  (FMADDS (FADDS W, X), Y, Z)
    +(FMADDS (FADDS W, X), Y, Z)
     
    +
    -

    -The FMADDS instruction is a ternary instruction that multiplies its first two -operands and adds the third (as single-precision floating-point numbers). The -FADDS instruction is a simple binary single-precision add instruction. To -perform this pattern match, the PowerPC backend includes the following -instruction definitions: -

    +

    The FMADDS instruction is a ternary instruction that multiplies its +first two operands and adds the third (as single-precision floating-point +numbers). The FADDS instruction is a simple binary single-precision +add instruction. To perform this pattern match, the PowerPC backend includes +the following instruction definitions:

    +
     def FMADDS : AForm_1<59, 29,
                         (ops F4RC:$FRT, F4RC:$FRA, F4RC:$FRC, F4RC:$FRB),
    @@ -1005,6 +1009,7 @@ def FADDS : AForm_2<59, 21,
                         "fadds $FRT, $FRA, $FRB",
                         [(set F4RC:$FRT, (fadd F4RC:$FRA, F4RC:$FRB))]>;
     
    +

    The portion of the instruction definition in bold indicates the pattern used to match the instruction. The DAG operators (like fmul/fadd) @@ -1012,8 +1017,8 @@ are defined in the lib/Target/TargetSelectionDAG.td file. "F4RC" is the register class of the input and result values.

    The TableGen DAG instruction selector generator reads the instruction -patterns in the .td and automatically builds parts of the pattern matching code -for your target. It has the following strengths:

    +patterns in the .td file and automatically builds parts of the pattern +matching code for your target. It has the following strengths:

    • At compiler-compiler time, it analyzes your instruction patterns and tells @@ -1021,7 +1026,8 @@ for your target. It has the following strengths:

    • It can handle arbitrary constraints on operands for the pattern match. In particular, it is straightforward to say things like "match any immediate that is a 13-bit sign-extended value". For examples, see the - immSExt16 and related tblgen classes in the PowerPC backend.
    • + immSExt16 and related tblgen classes in the PowerPC + backend.
    • It knows several important identities for the patterns defined. For example, it knows that addition is commutative, so it allows the FMADDS pattern above to match "(fadd X, (fmul Y, Z))" as @@ -1029,55 +1035,58 @@ for your target. It has the following strengths:

      to specially handle this case.
    • It has a full-featured type-inferencing system. In particular, you should rarely have to explicitly tell the system what type parts of your patterns - are. In the FMADDS case above, we didn't have to tell tblgen that all of - the nodes in the pattern are of type 'f32'. It was able to infer and - propagate this knowledge from the fact that F4RC has type 'f32'.
    • + are. In the FMADDS case above, we didn't have to tell + tblgen that all of the nodes in the pattern are of type 'f32'. It + was able to infer and propagate this knowledge from the fact that + F4RC has type 'f32'.
    • Targets can define their own (and rely on built-in) "pattern fragments". Pattern fragments are chunks of reusable patterns that get inlined into your - patterns during compiler-compiler time. For example, the integer "(not x)" - operation is actually defined as a pattern fragment that expands as - "(xor x, -1)", since the SelectionDAG does not have a native 'not' - operation. Targets can define their own short-hand fragments as they see - fit. See the definition of 'not' and 'ineg' for examples.
    • + patterns during compiler-compiler time. For example, the integer + "(not x)" operation is actually defined as a pattern fragment that + expands as "(xor x, -1)", since the SelectionDAG does not have a + native 'not' operation. Targets can define their own short-hand + fragments as they see fit. See the definition of 'not' and + 'ineg' for examples.
    • In addition to instructions, targets can specify arbitrary patterns that - map to one or more instructions, using the 'Pat' class. For example, + map to one or more instructions using the 'Pat' class. For example, the PowerPC has no way to load an arbitrary integer immediate into a register in one instruction. To tell tblgen how to do this, it defines: - +
      +
      +
      -    // Arbitrary immediate support.  Implement in terms of LIS/ORI.
      -    def : Pat<(i32 imm:$imm),
      -              (ORI (LIS (HI16 imm:$imm)), (LO16 imm:$imm))>;
      +// Arbitrary immediate support.  Implement in terms of LIS/ORI.
      +def : Pat<(i32 imm:$imm),
      +          (ORI (LIS (HI16 imm:$imm)), (LO16 imm:$imm))>;
           
      - +
      +
      If none of the single-instruction patterns for loading an immediate into a register match, this will be used. This rule says "match an arbitrary i32 - immediate, turning it into an ORI ('or a 16-bit immediate') and an LIS - ('load 16-bit immediate, where the immediate is shifted to the left 16 - bits') instruction". To make this work, the LO16/HI16 node transformations - are used to manipulate the input immediate (in this case, take the high or - low 16-bits of the immediate). -
    • + immediate, turning it into an ORI ('or a 16-bit immediate') and an + LIS ('load 16-bit immediate, where the immediate is shifted to the + left 16 bits') instruction". To make this work, the + LO16/HI16 node transformations are used to manipulate the + input immediate (in this case, take the high or low 16-bits of the + immediate).
    • While the system does automate a lot, it still allows you to write custom - C++ code to match special cases, in case there is something that is hard - to express.
    • + C++ code to match special cases if there is something that is hard to + express.
    -

    -While it has many strengths, the system currently has some limitations, -primarily because it is a work in progress and is not yet finished: -

    +

    While it has many strengths, the system currently has some limitations, +primarily because it is a work in progress and is not yet finished:

    • Overall, there is no way to define or match SelectionDAG nodes that define - multiple values (e.g. ADD_PARTS, LOAD, CALL, etc). This is the biggest - reason that you currently still have to write custom C++ code for - your instruction selector.
    • -
    • There is no great way to support match complex addressing modes yet. In the - future, we will extend pattern fragments to allow them to define multiple - values (e.g. the four operands of the X86 addressing - mode). In addition, we'll extend fragments so that a fragment can match - multiple different patterns.
    • + multiple values (e.g. ADD_PARTS, LOAD, CALL, + etc). This is the biggest reason that you currently still have to + write custom C++ code for your instruction selector. +
    • There is no great way to support matching complex addressing modes yet. In + the future, we will extend pattern fragments to allow them to define + multiple values (e.g. the four operands of the X86 + addressing mode). In addition, we'll extend fragments so that a + fragment can match multiple different patterns.
    • We don't automatically infer flags like isStore/isLoad yet.
    • We don't automatically generate the set of supported registers and operations for the Legalizer yet.
    • @@ -1102,9 +1111,8 @@ please let Chris know!

      phase and assigns an order. The scheduler can pick an order depending on various constraints of the machine (e.g. ordering for minimal register pressure, or covering instruction latencies). Once an order is established, the DAG is -converted to a list of MachineInstrs and the -Selection DAG is destroyed. -

      +converted to a list of MachineInstrs and +the SelectionDAG is destroyed.

      Note that this phase is logically separate from the instruction selection phase, but is tied to it closely in the code because it operates on @@ -1121,7 +1129,7 @@ SelectionDAGs.

      1. Optional function-at-a-time selection.
      2. -
      3. Auto-generate entire selector from .td file.
      4. +
      5. Auto-generate entire selector from .td file.
      @@ -1151,25 +1159,19 @@ SelectionDAGs.

      - - +

      To Be Written

      - -
      - -
      - - +

      To Be Written

      -

      For the JIT or .o file writer

      +

      For the JIT or .o file writer

      @@ -1177,6 +1179,7 @@ SelectionDAGs.

      +

      To Be Written

      @@ -1194,8 +1197,7 @@ are specific to the code generator for a particular target.

      -

      -The X86 code generator lives in the lib/Target/X86 directory. This +

      The X86 code generator lives in the lib/Target/X86 directory. This code generator currently targets a generic P6-like processor. As such, it produces a few P6-and-above instructions (like conditional moves), but it does not make use of newer features like MMX or SSE. In the future, the X86 backend @@ -1210,11 +1212,10 @@ implementations.

      -

      -The following are the known target triples that are supported by the X86 -backend. This is not an exhaustive list, but it would be useful to add those -that people test. -

      + +

      The following are the known target triples that are supported by the X86 +backend. This is not an exhaustive list, and it would be useful to add those +that people test.

      • i686-pc-linux-gnu - Linux
      • @@ -1237,13 +1238,15 @@ that people test. forming memory addresses of the following expression directly in integer instructions (which use ModR/M addressing):

        +
        -   Base+[1,2,4,8]*IndexReg+Disp32
        +Base + [1,2,4,8] * IndexReg + Disp32
         
        +

        In order to represent this, LLVM tracks no less than 4 operands for each -memory operand of this form. This means that the "load" form of 'mov' has the -following MachineOperands in this order:

        +memory operand of this form. This means that the "load" form of 'mov' +has the following MachineOperands in this order:

         Index:        0     |    1        2       3           4
        @@ -1252,7 +1255,7 @@ OperandTy: VirtReg, | VirtReg, UnsImm, VirtReg,   SignExtImm
         

        Stores, and all other instructions, treat the four memory operands in the -same way, in the same order.

        +same way and in the same order.

      @@ -1263,8 +1266,7 @@ same way, in the same order.

      -

      -An instruction name consists of the base name, a default operand size, and a +

      An instruction name consists of the base name, a default operand size, and a character per operand with an optional special size. For example: