diff --git a/docs/ProgrammersManual.html b/docs/ProgrammersManual.html
index ed81a734c0a..d096f5a722d 100644
--- a/docs/ProgrammersManual.html
+++ b/docs/ProgrammersManual.html
@@ -2,6 +2,7 @@ "http://www.w3.org/TR/html4/strict.dtd">
+ LLVM Programmer's Manual
@@ -28,6 +29,13 @@
  • Helpful Hints for Common Operations @@ -97,6 +117,8 @@ complex example
  • the same way
  • Iterating over def-use & use-def chains
  • +
  • Iterating over predecessors & +successors of blocks
  • Making simple changes @@ -106,8 +128,10 @@ use-def chains
  • Deleting Instructions
  • Replacing an Instruction with another Value
  • +
  • Deleting GlobalVariables
  • +
  • How to Create Types
  • @@ -236,10 +274,9 @@ reference - an excellent reference for the STL and other parts of the standard C++ library.
  • C++ In a Nutshell - This is an -O'Reilly book in the making. It has a decent -Standard Library -Reference that rivals Dinkumware's, and is unfortunately no longer free since the book has been -published.
  • +O'Reilly book in the making. It has a decent Standard Library +Reference that rivals Dinkumware's, and is unfortunately no longer free since the +book has been published.
  • C++ Frequently Asked Questions
  • @@ -322,7 +359,7 @@ file (note that you very rarely have to include this file directly).

    cast<>:

    The cast<> operator is a "checked cast" operation. It - converts a pointer or reference from a base class to a derived cast, causing + converts a pointer or reference from a base class to a derived class, causing an assertion failure if it is not really an instance of the right type. This should be used in cases where you have some information that makes you believe that something is of the right type. An example of the isa<> @@ -402,6 +439,107 @@ are lots of examples in the LLVM source base.

    + + +
    + Passing strings (the StringRef +and Twine classes) +
    + +
    + +

    Although LLVM generally does not do much string manipulation, we do have +several important APIs which take strings. Two important examples are the +Value class -- which has names for instructions, functions, etc. -- and the +StringMap class which is used extensively in LLVM and Clang.

    + +

    These are generic classes, and they need to be able to accept strings which +may have embedded null characters. Therefore, they cannot simply take +a const char *, and taking a const std::string& requires +clients to perform a heap allocation which is usually unnecessary. Instead, +many LLVM APIs use a const StringRef& or a const +Twine& for passing strings efficiently.

    + +
    + + +
    + The StringRef class +
    + +
    + +

The StringRef data type represents a reference to a constant string +(a character array and a length) and supports the common operations available +on std::string, but does not require heap allocation.

    + +

    It can be implicitly constructed using a C style null-terminated string, +an std::string, or explicitly with a character pointer and length. +For example, the StringRef find function is declared as:

    + +
    + iterator find(const StringRef &Key); +
    + +

    and clients can call it using any one of:

    + +
    +
    +  Map.find("foo");                 // Lookup "foo"
    +  Map.find(std::string("bar"));    // Lookup "bar"
    +  Map.find(StringRef("\0baz", 4)); // Lookup "\0baz"
    +
    +
    + +

    Similarly, APIs which need to return a string may return a StringRef +instance, which can be used directly or converted to an std::string +using the str member function. See +"llvm/ADT/StringRef.h" +for more information.

    + +

You should rarely use the StringRef class directly: because it contains +pointers to external memory, it is not generally safe to store an instance of the +class (unless you know that the external storage will not be freed).

    + +
    + + +
    + The Twine class +
    + +
    + +

The Twine class is an efficient way for APIs to accept concatenated +strings. For example, a common LLVM paradigm is to name one instruction based on +the name of another instruction with a suffix:

    + +
    +
    +    New = CmpInst::Create(..., SO->getName() + ".cmp");
    +
    +
    + +

The Twine class is effectively a +lightweight rope +which points to temporary (stack allocated) objects. Twines can be implicitly +constructed as the result of the plus operator applied to strings (i.e., a C +string, an std::string, or a StringRef). The twine delays the +actual concatenation of strings until it is required, at which point +it can be efficiently rendered directly into a character array. This avoids +unnecessary heap allocation involved in constructing the temporary results of +string concatenation. See +"llvm/ADT/Twine.h" +for more information.

    + +

    As with a StringRef, Twine objects point to external memory +and should almost never be stored or mentioned directly. They are intended +solely for use when defining a function which should be able to efficiently +accept concatenated strings.
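For instance, here is a minimal sketch of such a function. The function name is hypothetical, and Twine::str() is used to render the twine only when the message is actually emitted:

  #include "llvm/ADT/Twine.h"
  #include "llvm/Support/raw_ostream.h"

  // Taking a const Twine& lets callers pass cheap, unevaluated concatenations.
  static void emitNote(const llvm::Twine &Msg) {
    llvm::errs() << Msg.str() << "\n";   // str() materializes the string here
  }

  // A caller might then write:
  //   emitNote("while processing '" + F->getName() + "'");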

    + +
    + +
    The DEBUG() macro and -debug option @@ -426,7 +564,7 @@ tool) is run with the '-debug' command line argument:

    -DOUT << "I am here!\n";
    +DEBUG(errs() << "I am here!\n");
     
    @@ -471,16 +609,16 @@ option as follows:

    -DOUT << "No debug type\n";
     #undef  DEBUG_TYPE
    +DEBUG(errs() << "No debug type\n");
     #define DEBUG_TYPE "foo"
    -DOUT << "'foo' debug type\n";
    +DEBUG(errs() << "'foo' debug type\n");
     #undef  DEBUG_TYPE
     #define DEBUG_TYPE "bar"
    -DOUT << "'bar' debug type\n";
    +DEBUG(errs() << "'bar' debug type\n"));
     #undef  DEBUG_TYPE
     #define DEBUG_TYPE ""
    -DOUT << "No debug type (2)\n";
    +DEBUG(errs() << "No debug type (2)\n");
     
    @@ -512,6 +650,21 @@ on when the name is specified. This allows, for example, all debug information for instruction scheduling to be enabled with -debug-type=InstrSched, even if the source lives in multiple files.

    +

    The DEBUG_WITH_TYPE macro is also available for situations where you +would like to set DEBUG_TYPE, but only for one specific DEBUG +statement. It takes an additional first parameter, which is the type to use. For +example, the preceding example could be written as:

    + + +
    +
    +DEBUG_WITH_TYPE("", errs() << "No debug type\n");
    +DEBUG_WITH_TYPE("foo", errs() << "'foo' debug type\n");
    +DEBUG_WITH_TYPE("bar", errs() << "'bar' debug type\n"));
    +DEBUG_WITH_TYPE("", errs() << "No debug type (2)\n");
    +
    +
    +
    @@ -575,14 +728,14 @@ $ opt -stats -mypassname < program.bc > /dev/null -

    When running gccas on a C file from the SPEC benchmark +

    When running opt on a C file from the SPEC benchmark suite, it gives a report that looks like this:

    -   7646 bytecodewriter  - Number of normal instructions
    -    725 bytecodewriter  - Number of oversized instructions
    - 129996 bytecodewriter  - Number of bytecode bytes written
    +   7646 bitcodewriter   - Number of normal instructions
    +    725 bitcodewriter   - Number of oversized instructions
    + 129996 bitcodewriter   - Number of bitcode bytes written
        2817 raise           - Number of insts DCEd or constprop'd
        3213 raise           - Number of cast-of-self removed
        5046 raise           - Number of expression trees converted
    @@ -645,14 +798,14 @@ systems with X11, install the graphviz
     toolkit, and make sure 'dot' and 'gv' are in your path.  If you are running on
     Mac OS/X, download and install the Mac OS/X Graphviz program, and add
    -/Applications/Graphviz.app/Contents/MacOS/ (or whereever you install
    +/Applications/Graphviz.app/Contents/MacOS/ (or wherever you install
it) to your path.  Once your system and path are set up, rerun the LLVM
     configure script and rebuild LLVM to enable this functionality.

    SelectionDAG has been extended to make it easier to locate interesting nodes in large complex graphs. From gdb, if you call DAG.setGraphColor(node, "color"), then the -next call DAG.viewGraph() would hilight the node in the +next call DAG.viewGraph() would highlight the node in the specified color (choices of colors can be found at colors.) More complex node attributes can be provided with call @@ -671,8 +824,8 @@ attributes, then you can call DAG.clearGraphAttrs().

    -

    LLVM has a plethora of datastructures in the llvm/ADT/ directory, - and we commonly use STL datastructures. This section describes the tradeoffs +

    LLVM has a plethora of data structures in the llvm/ADT/ directory, + and we commonly use STL data structures. This section describes the trade-offs you should consider when you pick one.

    @@ -682,7 +835,7 @@ thing when choosing a container is the algorithmic properties of how you plan to access the container. Based on that, you should use:

      -
    • a map-like container if you need efficient lookup +
• a map-like container if you need efficient look-up +of a value based on another value. Map-like containers also support efficient queries for containment (whether a key is in the map). Map-like containers generally do not support efficient reverse mapping (values to @@ -701,15 +854,24 @@ access the container. Based on that, you should use:

    • a sequential container provides the most efficient way to add elements and keeps track of the order they are added to the collection. They permit duplicates and support efficient - iteration, but do not support efficient lookup based on a key. + iteration, but do not support efficient look-up based on a key.
    • +
    • a string container is a specialized sequential + container or reference structure that is used for character or byte + arrays.
    • + +
• a bit container provides an efficient way to store and + perform set operations on sets of numeric IDs, while automatically + eliminating duplicates. Bit containers require a maximum of 1 bit for each + identifier you want to store. +

    -Once the proper catagory of container is determined, you can fine tune the +Once the proper category of container is determined, you can fine tune the memory use, constant factors, and cache behaviors of access by intelligently -picking a member of the catagory. Note that constant factors and cache behavior +picking a member of the category. Note that constant factors and cache behavior can be a big deal. If you have a vector that usually only contains a few elements (but could contain many), for example, it's much better to use SmallVector than vector @@ -751,7 +913,7 @@ before the array is allocated, and if the array is usually large (if not, consider a SmallVector). The cost of a heap allocated array is the cost of the new/delete (aka malloc/free). Also note that if you are allocating an array of a type with a constructor, the constructor and -destructors will be run for every element in the array (resizable vectors only +destructors will be run for every element in the array (re-sizable vectors only construct those elements actually used).

    @@ -797,6 +959,33 @@ rarely be a benefit) or if you will be allocating many instances of the vector itself (which would waste space for elements that aren't in the container). vector is also useful when interfacing with code that expects vectors :).

    + +

    One worthwhile note about std::vector: avoid code like this:

    + +
    +
    +for ( ... ) {
    +   std::vector<foo> V;
    +   use V;
    +}
    +
    +
    + +

    Instead, write this as:

    + +
    +
    +std::vector<foo> V;
    +for ( ... ) {
    +   use V;
    +   V.clear();
    +}
    +
    +
    + +

    Doing so will save (at least) one heap allocation and free per iteration of +the loop.

    +
    @@ -835,7 +1024,7 @@ not invalidate iterator or pointers to other elements in the list.

    @@ -843,15 +1032,102 @@ not invalidate iterator or pointers to other elements in the list.

    intrusive, because it requires the element to store and provide access to the prev/next pointers for the list.

    -

    ilist has the same drawbacks as std::list, and additionally requires an -ilist_traits implementation for the element type, but it provides some novel -characteristics. In particular, it can efficiently store polymorphic objects, -the traits class is informed when an element is inserted or removed from the -list, and ilists are guaranteed to support a constant-time splice operation. -

    +

    ilist has the same drawbacks as std::list, and additionally +requires an ilist_traits implementation for the element type, but it +provides some novel characteristics. In particular, it can efficiently store +polymorphic objects, the traits class is informed when an element is inserted or +removed from the list, and ilists are guaranteed to support a +constant-time splice operation.

    + +

    These properties are exactly what we want for things like +Instructions and basic blocks, which is why these are implemented with +ilists.

    + +Related classes of interest are explained in the following subsections: + +
    + + + + +
    +

    ilist_traits<T> is ilist<T>'s customization +mechanism. iplist<T> (and consequently ilist<T>) +publicly derive from this traits class.

    +
    + + +
    + iplist +
    + +
    +

    iplist<T> is ilist<T>'s base and as such +supports a slightly narrower interface. Notably, inserters from +T& are absent.

    + +

    ilist_traits<T> is a public base of this class and can be +used for a wide variety of customizations.

    +
    + + + + +
    +

ilist_node<T> implements the forward and backward links +that are expected by the ilist<T> (and analogous containers) +in the default manner.

    + +

ilist_node<T>s are meant to be embedded in the node type +T; usually T publicly derives from +ilist_node<T>.
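As an illustration, a minimal sketch of a hypothetical node type and an intrusive list of it might look like the following (MyNode and example() are made up for this example):

  #include "llvm/ADT/ilist.h"
  #include "llvm/ADT/ilist_node.h"

  struct MyNode : public llvm::ilist_node<MyNode> {
    int Data;
    explicit MyNode(int D = 0) : Data(D) {}  // default-constructible, for the default sentinel
  };

  void example() {
    llvm::iplist<MyNode> List;           // the prev/next links live inside each node
    List.push_back(new MyNode(1));       // the list takes ownership of the pointers
    List.push_back(new MyNode(2));
    for (llvm::iplist<MyNode>::iterator I = List.begin(), E = List.end();
         I != E; ++I)
      I->Data *= 10;
  }                                      // the remaining nodes are deleted with the list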

    +
    + + + + +
    +

ilists have another specialty that must be considered. To be a good +citizen in the C++ ecosystem, an ilist needs to support the standard container +operations, such as begin and end iterators, etc. Also, the +operator-- must work correctly on the end iterator in the +case of non-empty ilists.

    + +

The only sensible solution to this problem is to allocate a so-called +sentinel along with the intrusive list, which serves as the end +iterator, providing the back-link to the last element. However, conforming to the +C++ convention, it is illegal to apply operator++ beyond the sentinel, and the +sentinel must not be dereferenced.

    + +

These constraints leave the ilist some implementation freedom in +how to allocate and store the sentinel. The corresponding policy is dictated +by ilist_traits<T>. By default a T gets heap-allocated +whenever the need for a sentinel arises.

    + +

    While the default policy is sufficient in most cases, it may break down when +T does not provide a default constructor. Also, in the case of many +instances of ilists, the memory overhead of the associated sentinels +is wasted. To alleviate the situation with numerous and voluminous +T-sentinels, sometimes a trick is employed, leading to ghostly +sentinels.

    -

    These properties are exactly what we want for things like Instructions and -basic blocks, which is why these are implemented with ilists.

    +

    Ghostly sentinels are obtained by specially-crafted ilist_traits<T> +which superpose the sentinel with the ilist instance in memory. Pointer +arithmetic is used to obtain the sentinel, which is relative to the +ilist's this pointer. The ilist is augmented by an +extra pointer, which serves as the back-link of the sentinel. This is the only +field in the ghostly sentinel which can be legally accessed.

    @@ -912,7 +1188,7 @@ efficiently queried with a standard binary or radix search.

    -

    If you have a set-like datastructure that is usually small and whose elements +

    If you have a set-like data structure that is usually small and whose elements are reasonably small, a SmallSet<Type, N> is a good choice. This set has space for N elements in place (thus, if the set is dynamically smaller than N, no malloc traffic is required) and accesses them with a simple linear search. @@ -936,7 +1212,7 @@ and erasing, but does not support iteration.

    SmallPtrSet has all the advantages of SmallSet (and a SmallSet of pointers is -transparently implemented with a SmallPtrSet), but also suports iterators. If +transparently implemented with a SmallPtrSet), but also supports iterators. If more than 'N' insertions are performed, a single quadratically probed hash table is allocated and grows as needed, providing extremely efficient access (constant time insertion/deleting/queries with low constant @@ -948,6 +1224,25 @@ visited in sorted order.
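A short usage sketch (the variable and function names are hypothetical, and the header paths are those used by the LLVM version this manual describes):

  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/Instruction.h"

  llvm::SmallPtrSet<llvm::Instruction*, 16> Visited;  // room for 16 pointers in-place

  void markVisited(llvm::Instruction *I) {
    Visited.insert(I);            // cheap while the set stays small
    if (Visited.count(I)) {
      // ... I has been recorded ...
    }
  }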

    + + + +
    + +

+DenseSet is a simple quadratically probed hash table. It excels at supporting +small values: it uses a single allocation to hold all of the elements that +are currently inserted in the set. DenseSet is a great way to unique small +values that are not simple pointers (use SmallPtrSet for pointers). Note that DenseSet has +the same requirements for the value type that DenseMap has. +

    + +
    +
    "llvm/ADT/FoldingSet.h" @@ -1016,8 +1311,9 @@ std::set is almost never a good choice.

    -

    LLVM's SetVector<Type> is actually a combination of a set along with -a Sequential Container. The important property +

    LLVM's SetVector<Type> is an adapter class that combines your choice of +a set-like container along with a Sequential +Container. The important property that this provides is efficient insertion with uniquing (duplicate elements are ignored) with iteration support. It implements this by inserting elements into both a set-like container and the sequential container, using the set-like @@ -1028,7 +1324,7 @@ container for uniquing and the sequential container for iteration. iteration is guaranteed to match the order of insertion into the SetVector. This property is really important for things like sets of pointers. Because pointer values are non-deterministic (e.g. vary across runs of the program on -different machines), iterating over the pointers in a std::set or other set will +different machines), iterating over the pointers in the set will not be in a well-defined order.

@@ -1036,9 +1332,17 @@ The drawback of SetVector is that it requires twice as much space as a normal set and has the sum of constant factors from the set-like container and the sequential container that it uses. Use it *only* if you need to iterate over the elements in a deterministic order. SetVector is also expensive to delete -elements out of (linear time). +elements out of (linear time), unless you use its "pop_back" method, which is +faster.

    +

    SetVector is an adapter class that defaults to using std::vector and std::set +for the underlying containers, so it is quite expensive. However, +"llvm/ADT/SetVector.h" also provides a SmallSetVector class, which +defaults to using a SmallVector and SmallSet of a specified size. If you use +this, and if your sets are dynamically smaller than N, you will save a lot of +heap traffic.
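For example, a uniqued work list with deterministic iteration order might be sketched like this (the names are hypothetical):

  #include "llvm/ADT/SetVector.h"
  #include "llvm/BasicBlock.h"

  llvm::SmallSetVector<llvm::BasicBlock*, 8> Worklist;

  void enqueue(llvm::BasicBlock *BB) {
    Worklist.insert(BB);          // a duplicate insert is silently ignored
  }

  void drain() {
    // Iteration visits blocks in the order they were first inserted.
    for (llvm::SmallSetVector<llvm::BasicBlock*, 8>::iterator
           I = Worklist.begin(), E = Worklist.end(); I != E; ++I) {
      // ... process *I ...
    }
  }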

    +
    @@ -1070,21 +1374,16 @@ factors, and produces a lot of malloc traffic. It should be avoided.

    The STL provides several other options, such as std::multiset and the various -"hash_set" like containers (whether from C++ TR1 or from the SGI library).

    +"hash_set" like containers (whether from C++ TR1 or from the SGI library). We +never use hash_set and unordered_set because they are generally very expensive +(each insertion requires a malloc) and very non-portable. +

    std::multiset is useful if you're not interested in elimination of duplicates, but has all the drawbacks of std::set. A sorted vector (where you don't delete duplicate entries) or some other approach is almost always better.

    -

    The various hash_set implementations (exposed portably by -"llvm/ADT/hash_set") is a simple chained hashtable. This algorithm is as malloc -intensive as std::set (performing an allocation for each element inserted, -thus having really high constant factors) but (usually) provides O(1) -insertion/deletion of elements. This can be useful if your elements are large -(thus making the constant-factor cost relatively low) or if comparisons are -expensive. Element iteration does not visit elements in a useful order.

    -
    @@ -1116,7 +1415,7 @@ vectors for sets.
    @@ -1124,12 +1423,11 @@ vectors for sets.

    Strings are commonly used as keys in maps, and they are difficult to support efficiently: they are variable length, inefficient to hash and compare when -long, expensive to copy, etc. CStringMap is a specialized container designed to -cope with these issues. It supports mapping an arbitrary range of bytes that -does not have an embedded nul character in it ("C strings") to an arbitrary -other object.

    +long, expensive to copy, etc. StringMap is a specialized container designed to +cope with these issues. It supports mapping an arbitrary range of bytes to an +arbitrary other object.

    -

    The CStringMap implementation uses a quadratically-probed hash table, where +

    The StringMap implementation uses a quadratically-probed hash table, where the buckets store a pointer to the heap allocated entries (and some other stuff). The entries in the map must be heap allocated because the strings are variable length. The string data (key) and the element object (value) are @@ -1137,15 +1435,15 @@ stored in the same allocation with the string data immediately after the element object. This container guarantees the "(char*)(&Value+1)" points to the key string for a value.

    -

    The CStringMap is very fast for several reasons: quadratic probing is very +

The StringMap is very fast for several reasons: quadratic probing is very cache efficient for lookups, the hash value of strings in buckets is not -recomputed when lookup up an element, CStringMap rarely has to touch the +recomputed when looking up an element, StringMap rarely has to touch the memory for unrelated objects when looking up a value (even when hash collisions happen), hash table growth does not recompute the hash values for strings already in the table, and each pair in the map is stored in a single allocation (the string data is stored in the same allocation as the Value of a pair).

    -

    CStringMap also provides query methods that take byte ranges, so it only ever +

    StringMap also provides query methods that take byte ranges, so it only ever copies a string if a value is inserted into the table.
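A brief usage sketch. The variable names are hypothetical, and the exact population helpers have varied somewhat across LLVM versions; GetOrCreateValue is assumed here:

  #include "llvm/ADT/StringMap.h"

  llvm::StringMap<unsigned> UseCounts;

  void record(llvm::StringRef Name) {
    // Insert the key if it is not present yet, then update the value in place.
    llvm::StringMapEntry<unsigned> &Entry = UseCounts.GetOrCreateValue(Name, 0u);
    Entry.setValue(Entry.getValue() + 1);
  }

  bool wasSeen(llvm::StringRef Name) {
    return UseCounts.count(Name) != 0;  // queries never copy the string
  }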

    @@ -1187,11 +1485,28 @@ pointers, or map other small types to each other. There are several aspects of DenseMap that you should be aware of, however. The iterators in a densemap are invalidated whenever an insertion occurs, unlike map. Also, because DenseMap allocates space for a large number of key/value -pairs (it starts with 64 by default) if you have large keys or values, it can -waste a lot of space. Finally, you must implement a partial specialization of -DenseMapKeyInfo for the key that you want, if it isn't already supported. This +pairs (it starts with 64 by default), it will waste a lot of space if your keys +or values are large. Finally, you must implement a partial specialization of +DenseMapInfo for the key that you want, if it isn't already supported. This is required to tell DenseMap about two special marker values (which can never be -inserted into the map).

    +inserted into the map) that it needs internally.
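A small sketch of typical use with pointer keys, which already have a DenseMapInfo specialization (the names are hypothetical):

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/Instruction.h"

  llvm::DenseMap<llvm::Instruction*, unsigned> Order;

  void number(llvm::Instruction *I, unsigned N) {
    Order[I] = N;                 // note: insertion may invalidate outstanding iterators
  }

  bool lookupNumber(llvm::Instruction *I, unsigned &N) {
    llvm::DenseMap<llvm::Instruction*, unsigned>::iterator It = Order.find(I);
    if (It == Order.end())
      return false;
    N = It->second;
    return true;
  }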

    + + + + + + +
    + +

    +ValueMap is a wrapper around a DenseMap mapping +Value*s (or subclasses) to another type. When a Value is deleted or RAUW'ed, +ValueMap will update itself so the new version of the key is mapped to the same +value, just as if the key were a WeakVH. You can configure exactly how this +happens, and what else happens on these two events, by passing +a Config parameter to the ValueMap template.

    @@ -1224,22 +1539,95 @@ another element takes place).

    The STL provides several other options, such as std::multimap and the various -"hash_map" like containers (whether from C++ TR1 or from the SGI library).

    +"hash_map" like containers (whether from C++ TR1 or from the SGI library). We +never use hash_set and unordered_set because they are generally very expensive +(each insertion requires a malloc) and very non-portable.

    std::multimap is useful if you want to map a key to multiple values, but has all the drawbacks of std::map. A sorted vector or some other approach is almost always better.

    -

    The various hash_map implementations (exposed portably by -"llvm/ADT/hash_map") are simple chained hash tables. This algorithm is as -malloc intensive as std::map (performing an allocation for each element -inserted, thus having really high constant factors) but (usually) provides O(1) -insertion/deletion of elements. This can be useful if your elements are large -(thus making the constant-factor cost relatively low) or if comparisons are -expensive. Element iteration does not visit elements in a useful order.

    + + + +
    + +

    +TODO: const char* vs stringref vs smallstring vs std::string. Describe twine, +xref to #string_apis. +

    + +
    + + + + +
    +

    Unlike the other containers, there are only two bit storage containers, and +choosing when to use each is relatively straightforward.

    + +

One additional option is +std::vector<bool>: we discourage its use for two reasons: 1) the +implementation in many common compilers (e.g. commonly available versions of +GCC) is extremely inefficient and 2) the C++ standards committee is likely to +deprecate this container and/or change it significantly somehow. In any case, +please don't use it.

    +
    + + + + +
    +

The BitVector container provides a dynamic size set of bits for manipulation. +It supports individual bit setting/testing, as well as set operations. The set +operations take time O(size of bitvector), but operations are performed one word +at a time, instead of one bit at a time. This makes the BitVector very fast for +set operations compared to other containers. Use the BitVector when you expect +the number of set bits to be high (i.e., a dense set). +
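A short sketch of the interface (the sizes and names are chosen arbitrarily):

  #include "llvm/ADT/BitVector.h"

  void bitvectorExample() {
    llvm::BitVector Live(256);    // 256 bits, all initially false
    llvm::BitVector Kill(256);
    Live.set(5);                  // set/reset/test individual bits
    Kill.set(5);
    Kill.set(42);
    Live |= Kill;                 // whole-word set union
    bool B = Live.test(42);       // true
    (void)B;
  }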

    +
    + + + + +
    +

    The SmallBitVector container provides the same interface as BitVector, but +it is optimized for the case where only a small number of bits, less than +25 or so, are needed. It also transparently supports larger bit counts, but +slightly less efficiently than a plain BitVector, so SmallBitVector should +only be used when larger counts are rare. +

    + +

    +At this time, SmallBitVector does not support set operations (and, or, xor), +and its operator[] does not provide an assignable lvalue. +

    +
    + + + + +
    +

    The SparseBitVector container is much like BitVector, with one major +difference: Only the bits that are set, are stored. This makes the +SparseBitVector much more space efficient than BitVector when the set is sparse, +as well as making set operations O(number of set bits) instead of O(size of +universe). The downside to the SparseBitVector is that setting and testing of random bits is O(N), and on large SparseBitVectors, this can be slower than BitVector. In our implementation, setting or testing bits in sorted order +(either forwards or reverse) is O(1) worst case. Testing and setting bits within 128 bits (depends on size) of the current bit is also O(1). As a general statement, testing/setting bits in a SparseBitVector is O(distance away from last set bit). +

    +
    @@ -1305,7 +1693,7 @@ an example that prints the name of a BasicBlock and the number of for (Function::iterator i = func->begin(), e = func->end(); i != e; ++i) // Print out the name of the basic block if it has one, and then the // number of instructions that it contains - llvm::cerr << "Basic block (name=" << i->getName() << ") has " + errs() << "Basic block (name=" << i->getName() << ") has " << i->size() << " instructions.\n";
    @@ -1338,14 +1726,14 @@ a BasicBlock:

    for (BasicBlock::iterator i = blk->begin(), e = blk->end(); i != e; ++i) // The next statement works since operator<<(ostream&,...) // is overloaded for Instruction& - llvm::cerr << *i << "\n"; + errs() << *i << "\n";

    However, this isn't really the best way to print out the contents of a BasicBlock! Since the ostream operators are overloaded for virtually anything you'll care about, you could have just invoked the print routine on the -basic block itself: llvm::cerr << *blk << "\n";.

    +basic block itself: errs() << *blk << "\n";.

    @@ -1369,21 +1757,24 @@ small example that shows how to dump all instructions in a function to the stand
     #include "llvm/Support/InstIterator.h"
     
    -// F is a ptr to a Function instance
    -for (inst_iterator i = inst_begin(F), e = inst_end(F); i != e; ++i)
    -  llvm::cerr << *i << "\n";
    +// F is a pointer to a Function instance
    +for (inst_iterator I = inst_begin(F), E = inst_end(F); I != E; ++I)
    +  errs() << *I << "\n";
     

    Easy, isn't it? You can also use InstIterators to fill a -worklist with its initial contents. For example, if you wanted to -initialize a worklist to contain all instructions in a Function +work list with its initial contents. For example, if you wanted to +initialize a work list to contain all instructions in a Function F, all you would need to do is something like:

     std::set<Instruction*> worklist;
    -worklist.insert(inst_begin(F), inst_end(F));
    +// or better yet, SmallPtrSet<Instruction*, 64> worklist;
    +
    +for (inst_iterator I = inst_begin(F), E = inst_end(F); I != E; ++I)
    +   worklist.insert(&*I);
     
    @@ -1424,7 +1815,7 @@ the last line of the last example,

    -Instruction* pinst = &*i;
    +Instruction *pinst = &*i;
     
    @@ -1432,7 +1823,7 @@ Instruction* pinst = &*i;
    -Instruction* pinst = i;
    +Instruction *pinst = i;
     
    @@ -1447,7 +1838,7 @@ without actually obtaining it via iteration over some structure:

    void printNextInstruction(Instruction* inst) { BasicBlock::iterator it(inst); ++it; // After this line, it refers to the instruction after *inst - if (it != inst->getParent()->end()) llvm::cerr << *it << "\n"; + if (it != inst->getParent()->end()) errs() << *it << "\n"; } @@ -1467,7 +1858,7 @@ locations in the entire module (that is, across every Function) where a certain function (i.e., some Function*) is already in scope. As you'll learn later, you may want to use an InstVisitor to accomplish this in a much more straight-forward manner, but this example will allow us to explore how -you'd do it if you didn't have InstVisitor around. In pseudocode, this +you'd do it if you didn't have InstVisitor around. In pseudo-code, this is what we want to do:

    @@ -1495,13 +1886,12 @@ class OurFunctionPass : public FunctionPass { virtual runOnFunction(Function& F) { for (Function::iterator b = F.begin(), be = F.end(); b != be; ++b) { - for (BasicBlock::iterator i = b->begin(); ie = b->end(); i != ie; ++i) { + for (BasicBlock::iterator i = b->begin(), ie = b->end(); i != ie; ++i) { if (CallInst* callInst = dyn_cast<CallInst>(&*i)) { // We know we've encountered a call instruction, so we // need to determine if it's a call to the - // function pointed to by m_func or not - + // function pointed to by m_func or not. if (callInst->getCalledFunction() == targetFunc) ++callCounter; } @@ -1510,7 +1900,7 @@ class OurFunctionPass : public FunctionPass { } private: - unsigned callCounter; + unsigned callCounter; };
    @@ -1562,12 +1952,12 @@ of F:

    -Function* F = ...;
    +Function *F = ...;
     
     for (Value::use_iterator i = F->use_begin(), e = F->use_end(); i != e; ++i)
       if (Instruction *Inst = dyn_cast<Instruction>(*i)) {
    -    llvm::cerr << "F is used in instruction:\n";
    -    llvm::cerr << *Inst << "\n";
    +    errs() << "F is used in instruction:\n";
    +    errs() << *Inst << "\n";
       }
     
    @@ -1582,10 +1972,10 @@ the particular Instruction):

    -Instruction* pi = ...;
    +Instruction *pi = ...;
     
     for (User::op_iterator i = pi->op_begin(), e = pi->op_end(); i != e; ++i) {
    -  Value* v = *i;
    +  Value *v = *i;
       // ...
     }
     
    @@ -1598,6 +1988,36 @@ for (User::op_iterator i = pi->op_begin(), e = pi->op_end(); i != e; ++i)
    + + + +
    + +

    Iterating over the predecessors and successors of a block is quite easy +with the routines defined in "llvm/Support/CFG.h". Just use code like +this to iterate over all predecessors of BB:

    + +
    +
    +#include "llvm/Support/CFG.h"
    +BasicBlock *BB = ...;
    +
    +for (pred_iterator PI = pred_begin(BB), E = pred_end(BB); PI != E; ++PI) {
    +  BasicBlock *Pred = *PI;
    +  // ...
    +}
    +
    +
    + +

    Similarly, to iterate over successors use +succ_iterator/succ_begin/succ_end.
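For instance, the successor loop is the mirror image of the predecessor loop above:

  #include "llvm/Support/CFG.h"
  BasicBlock *BB = ...;

  for (succ_iterator SI = succ_begin(BB), E = succ_end(BB); SI != E; ++SI) {
    BasicBlock *Succ = *SI;
    // ...
  }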

    + +
    + +
    Making simple changes @@ -1630,12 +2050,12 @@ parameters. For example, an AllocaInst only requires a
    -AllocaInst* ai = new AllocaInst(Type::IntTy);
    +AllocaInst* ai = new AllocaInst(Type::Int32Ty);
     

    will create an AllocaInst instance that represents the allocation of -one integer in the current stack frame, at runtime. Each Instruction +one integer in the current stack frame, at run time. Each Instruction subclass is likely to have varying default parameters which change the semantics of the instruction, so refer to the doxygen documentation for the subclass of @@ -1649,7 +2069,7 @@ at generated LLVM machine code, you definitely want to have logical names associated with the results of instructions! By supplying a value for the Name (default) parameter of the Instruction constructor, you associate a logical name with the result of the instruction's execution at -runtime. For example, say that I'm writing a transformation that dynamically +run time. For example, say that I'm writing a transformation that dynamically allocates space for an integer on the stack, and that integer is going to be used as some kind of index by some other code. To accomplish this, I place an AllocaInst at the first point in the first BasicBlock of some @@ -1658,12 +2078,12 @@ used as some kind of index by some other code. To accomplish this, I place an

    where indexLoc is now the logical name of the instruction's -execution value, which is a pointer to an integer on the runtime stack.

    +execution value, which is a pointer to an integer on the run time stack.

    Inserting instructions

    @@ -1771,9 +2191,7 @@ erase function to remove your instruction. For example:

     Instruction *I = .. ;
    -BasicBlock *BB = I->getParent();
    -
    -BB->getInstList().erase(I);
    +I->eraseFromParent();
     
    @@ -1798,9 +2216,9 @@ and ReplaceInstWithInst.

    • ReplaceInstWithValue -

      This function replaces all uses (within a basic block) of a given - instruction with a value, and then removes the original instruction. The - following example illustrates the replacement of the result of a particular +

      This function replaces all uses of a given instruction with a value, + and then removes the original instruction. The following example + illustrates the replacement of the result of a particular AllocaInst that allocates memory for a single integer with a null pointer to an integer.

      @@ -1810,14 +2228,16 @@ AllocaInst* instToReplace = ...; BasicBlock::iterator ii(instToReplace); ReplaceInstWithValue(instToReplace->getParent()->getInstList(), ii, - Constant::getNullValue(PointerType::get(Type::IntTy))); + Constant::getNullValue(PointerType::getUnqual(Type::Int32Ty)));
  • ReplaceInstWithInst

    This function replaces a particular instruction with another - instruction. The following example illustrates the replacement of one - AllocaInst with another.

    + instruction, inserting the new instruction into the basic block at the + location where the old instruction was, and replacing any uses of the old + instruction with the new instruction. The following example illustrates + the replacement of one AllocaInst with another.

    @@ -1825,7 +2245,7 @@ AllocaInst* instToReplace = ...;
     BasicBlock::iterator ii(instToReplace);
     
     ReplaceInstWithInst(instToReplace->getParent()->getInstList(), ii,
    -                    new AllocaInst(Type::IntTy, 0, "ptrToReplacedInt"));
    +                    new AllocaInst(Type::Int32Ty, 0, "ptrToReplacedInt"));
     
  • @@ -1843,6 +2263,257 @@ ReplaceInstWithValue, ReplaceInstWithInst --> + + + +
    + +

    Deleting a global variable from a module is just as easy as deleting an +Instruction. First, you must have a pointer to the global variable that you wish + to delete. You use this pointer to erase it from its parent, the module. + For example:

    + +
    +
    +GlobalVariable *GV = .. ;
    +
    +GV->eraseFromParent();
    +
    +
    + +
    + + + + +
    + +

    In generating IR, you may need some complex types. If you know these types +statically, you can use TypeBuilder<...>::get(), defined +in llvm/Support/TypeBuilder.h, to retrieve them. TypeBuilder +has two forms depending on whether you're building types for cross-compilation +or native library use. TypeBuilder<T, true> requires +that T be independent of the host environment, meaning that it's built +out of types from +the llvm::types +namespace and pointers, functions, arrays, etc. built of +those. TypeBuilder<T, false> additionally allows native C types +whose size may depend on the host compiler. For example,

    + +
    +
    +FunctionType *ft = TypeBuilder<types::i<8>(types::i<32>*), true>::get();
    +
    +
    + +

    is easier to read and write than the equivalent

    + +
    +
    +std::vector<const Type*> params;
    +params.push_back(PointerType::getUnqual(Type::Int32Ty));
    +FunctionType *ft = FunctionType::get(Type::Int8Ty, params, false);
    +
    +
    + +

    See the class +comment for more details.

    + +
    + + + + + +
    +

+This section describes the interaction of the LLVM APIs with multithreading, +both on the part of client applications that call into LLVM and on the part of +hosted applications running inside the JIT. +

    + +

    +Note that LLVM's support for multithreading is still relatively young. Up +through version 2.5, the execution of threaded hosted applications was +supported, but not threaded client access to the APIs. While this use case is +now supported, clients must adhere to the guidelines specified below to +ensure proper operation in multithreaded mode. +

    + +

+Note that, on Unix-like platforms, LLVM requires the presence of GCC's atomic +intrinsics in order to support threaded operation. If you need a +multithreading-capable LLVM on a platform without a suitably modern system +compiler, consider compiling LLVM and LLVM-GCC in single-threaded mode, and +using the resultant compiler to build a copy of LLVM with multithreading +support. +

    +
    + + + + +
    + +

+In order to properly protect its internal data structures while avoiding +excessive locking overhead in the single-threaded case, LLVM must initialize +certain data structures necessary to provide guards around its internals. To do +so, the client program must invoke llvm_start_multithreaded() before +making any concurrent LLVM API calls. To subsequently tear down these +structures, use the llvm_stop_multithreaded() call. You can also use +the llvm_is_multithreaded() call to check the status of multithreaded +mode. +

    + +

+Note that both of these calls must be made in isolation. That is to +say that no other LLVM API calls may be executing at any time during the +execution of llvm_start_multithreaded() or llvm_stop_multithreaded(). +It is the client's responsibility to enforce this isolation. +

    + +

    +The return value of llvm_start_multithreaded() indicates the success or +failure of the initialization. Failure typically indicates that your copy of +LLVM was built without multithreading support, typically because GCC atomic +intrinsics were not found in your system compiler. In this case, the LLVM API +will not be safe for concurrent calls. However, it will be safe for +hosting threaded applications in the JIT, though care +must be taken to ensure that side exits and the like do not accidentally +result in concurrent LLVM API calls. +
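A hedged sketch of the expected call pattern follows. The include path is an assumption (this header has lived under both llvm/System/ and llvm/Support/ depending on the LLVM version):

  #include "llvm/System/Threading.h"  // declares llvm_start_multithreaded() and friends

  int main() {
    if (!llvm::llvm_start_multithreaded()) {
      // Multithreading support was not compiled in; restrict yourself to
      // single-threaded use of the APIs.
    }

    // ... concurrent LLVM API calls may be made here ...

    llvm::llvm_stop_multithreaded();  // must also be called in isolation
    return 0;
  }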

    +
    + + + + +
    +

    +When you are done using the LLVM APIs, you should call llvm_shutdown() +to deallocate memory used for internal structures. This will also invoke +llvm_stop_multithreaded() if LLVM is operating in multithreaded mode. +As such, llvm_shutdown() requires the same isolation guarantees as +llvm_stop_multithreaded(). +

    + +

    +Note that, if you use scope-based shutdown, you can use the +llvm_shutdown_obj class, which calls llvm_shutdown() in its +destructor. +
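For example, a typical tool's main() can get the scope-based behavior like this (a sketch; llvm_shutdown_obj is declared in llvm/Support/ManagedStatic.h):

  #include "llvm/Support/ManagedStatic.h"

  int main(int argc, char **argv) {
    llvm::llvm_shutdown_obj Y;    // calls llvm_shutdown() when main() returns
    // ... use the LLVM APIs ...
    return 0;
  }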

    + + + + +
    +

    +ManagedStatic is a utility class in LLVM used to implement static +initialization of static resources, such as the global type tables. Before the +invocation of llvm_shutdown(), it implements a simple lazy +initialization scheme. Once llvm_start_multithreaded() returns, +however, it uses double-checked locking to implement thread-safe lazy +initialization. +

    + +

    +Note that, because no other threads are allowed to issue LLVM API calls before +llvm_start_multithreaded() returns, it is possible to have +ManagedStatics of llvm::sys::Mutexs. +

    + +

    +The llvm_acquire_global_lock() and llvm_release_global_lock +APIs provide access to the global lock used to implement the double-checked +locking for lazy initialization. These should only be used internally to LLVM, +and only if you know what you're doing! +

    +
    + + + + +
    +

    +LLVMContext is an opaque class in the LLVM API which clients can use +to operate multiple, isolated instances of LLVM concurrently within the same +address space. For instance, in a hypothetical compile-server, the compilation +of an individual translation unit is conceptually independent from all the +others, and it would be desirable to be able to compile incoming translation +units concurrently on independent server threads. Fortunately, +LLVMContext exists to enable just this kind of scenario! +

    + +

+Conceptually, LLVMContext provides isolation. Every LLVM entity +(Modules, Values, Types, Constants, etc.) +in LLVM's in-memory IR belongs to an LLVMContext. Entities in +different contexts cannot interact with each other: Modules in +different contexts cannot be linked together, Functions cannot be added +to Modules in different contexts, etc. What this means is that it is +safe to compile on multiple threads simultaneously, as long as no two threads +operate on entities within the same context. +

    + +

+In practice, very few places in the API require the explicit specification of an +LLVMContext, other than the Type creation/lookup APIs. +Because every Type carries a reference to its owning context, most +other entities can determine what context they belong to by looking at their +own Type. If you are adding new entities to LLVM IR, please try to +maintain this interface design. +

    + +

    +For clients that do not require the benefits of isolation, LLVM +provides a convenience API getGlobalContext(). This returns a global, +lazily initialized LLVMContext that may be used in situations where +isolation is not a concern. +

    +
    + + + + +
    +

    +LLVM's "eager" JIT compiler is safe to use in threaded programs. Multiple +threads can call ExecutionEngine::getPointerToFunction() or +ExecutionEngine::runFunction() concurrently, and multiple threads can +run code output by the JIT concurrently. The user must still ensure that only +one thread accesses IR in a given LLVMContext while another thread +might be modifying it. One way to do that is to always hold the JIT lock while +accessing IR outside the JIT (the JIT modifies the IR by adding +CallbackVHs). Another way is to only +call getPointerToFunction() from the LLVMContext's thread. +

    + +

    When the JIT is configured to compile lazily (using +ExecutionEngine::DisableLazyCompilation(false)), there is currently a +race condition in +updating call sites after a function is lazily-jitted. It's still possible to +use the lazy JIT in a threaded program if you ensure that only one thread at a +time can call any particular lazy stub and that the JIT lock guards any IR +access, but we suggest using only the eager JIT in threaded programs. +

    +
    +
    Advanced Topics @@ -1877,7 +2548,7 @@ recursive types and late resolution of opaque types makes the situation very difficult to handle. Fortunately, for the most part, our implementation makes most clients able to be completely unaware of the nasty internal details. The primary case where clients are exposed to the inner workings of it are when -building a recursive type. In addition to this case, the LLVM bytecode reader, +building a recursive type. In addition to this case, the LLVM bitcode reader, assembly parser, and linker also have to be aware of the inner workings of this system.

    @@ -1921,8 +2592,8 @@ To build this, use the following LLVM APIs: // Create the initial outer struct PATypeHolder StructTy = OpaqueType::get(); std::vector<const Type*> Elts; -Elts.push_back(PointerType::get(StructTy)); -Elts.push_back(Type::IntTy); +Elts.push_back(PointerType::getUnqual(StructTy)); +Elts.push_back(Type::Int32Ty); StructType *NewSTy = StructType::get(Elts); // At this point, NewSTy = "{ opaque*, i32 }". Tell VMCore that @@ -2010,12 +2681,8 @@ Type is maintained by PATypeHolder objects.

    Some data structures need more to perform more complex updates when types get -resolved. The SymbolTable class, for example, needs -move and potentially merge type planes in its representation when a pointer -changes.

    - -

    -To support this, a class can derive from the AbstractTypeUser class. This class +resolved. To support this, a class can derive from the AbstractTypeUser class. +This class allows it to get callbacks when certain types are resolved. To register to get callbacks for a particular type, the DerivedType::{add/remove}AbstractTypeUser methods can be called on a type. Note that these methods only work for @@ -2027,183 +2694,265 @@ objects) can never be refined.

    -

    This class provides a symbol table that the The +ValueSymbolTable class provides a symbol table that the Function and -Module classes use for naming definitions. The symbol table can -provide a name for any Value. -SymbolTable is an abstract data type. It hides the data it contains -and provides access to it through a controlled interface.

    +Module classes use for naming value definitions. The symbol table +can provide a name for any Value. +The +TypeSymbolTable class is used by the Module class to store +names for types.

    Note that the SymbolTable class should not be directly accessed by most clients. It should only be used when iteration over the symbol table names themselves are required, which is very special purpose. Note that not all LLVM -Values have names, and those without names (i.e. they have +Values have names, and those without names (i.e. they have an empty name) do not exist in the symbol table.

    -

    To use the SymbolTable well, you need to understand the -structure of the information it holds. The class contains two -std::map objects. The first, pmap, is a map of -Type* to maps of name (std::string) to Value*. -Thus, Values are stored in two-dimensions and accessed by Type and -name.

    +

These symbol tables support iteration over the values/types in the symbol +table with begin/end/iterator and support querying to see if a +specific name is in the symbol table (with lookup). The +ValueSymbolTable class exposes no public mutator methods; instead, +simply call setName on a value, which will automatically insert it into the +appropriate symbol table. For types, use the Module::addTypeName method to +insert entries into the symbol table.
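For example (a sketch; V, F and M are assumed to be valid pointers to a Value, its enclosing Function, and the Module, and SomeStructTy is a hypothetical type):

  V->setName("myname");           // automatically inserts V into F's symbol table

  // Queries by name go through the ValueSymbolTable:
  if (llvm::Value *Existing = F->getValueSymbolTable().lookup("myname")) {
    // ...
  }

  M->addTypeName("mystruct", SomeStructTy);  // names a type in the module's symbol table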

    -

    The interface of this class provides three basic types of operations: -

      -
    1. Accessors. Accessors provide read-only access to information - such as finding a value for a name with the - lookup method.
    2. -
    3. Mutators. Mutators allow the user to add information to the - SymbolTable with methods like - insert.
    4. -
    5. Iterators. Iterators allow the user to traverse the content - of the symbol table in well defined ways, such as the method - plane_begin.
    6. -
    +
    -

    Accessors

    -
    -
    Value* lookup(const Type* Ty, const std::string& name) const: -
    -
    The lookup method searches the type plane given by the - Ty parameter for a Value with the provided name. - If a suitable Value is not found, null is returned.
    - -
    bool isEmpty() const:
    -
    This function returns true if both the value and types maps are - empty
    -
    -

    Mutators

    -
    -
    void insert(Value *Val):
    -
    This method adds the provided value to the symbol table. The Value must - have both a name and a type which are extracted and used to place the value - in the correct type plane under the value's name.
    - -
    void insert(const std::string& Name, Value *Val):
    -
    Inserts a constant or type into the symbol table with the specified - name. There can be a many to one mapping between names and constants - or types.
    - -
    void remove(Value* Val):
    -
    This method removes a named value from the symbol table. The - type and name of the Value are extracted from \p N and used to - lookup the Value in the correct type plane. If the Value is - not in the symbol table, this method silently ignores the - request.
    - -
    Value* remove(const std::string& Name, Value *Val):
    -
    Remove a constant or type with the specified name from the - symbol table.
    - -
    Value *remove(const value_iterator& It):
    -
    Removes a specific value from the symbol table. - Returns the removed value.
    - -
    bool strip():
    -
    This method will strip the symbol table of its names leaving - the type and values.
    - -
    void clear():
    -
    Empty the symbol table completely.
    -
    -

    Iteration

    -

    The following functions describe three types of iterators you can obtain -the beginning or end of the sequence for both const and non-const. It is -important to keep track of the different kinds of iterators. There are -three idioms worth pointing out:

    - - - - - - - - - - - -
-Units                        Iterator
-Planes of name/Value maps    PI:
    
    -for (SymbolTable::plane_const_iterator PI = ST.plane_begin(),
    -     PE = ST.plane_end(); PI != PE; ++PI ) {
    -  PI->first  // This is the Type* of the plane
    -  PI->second // This is the SymbolTable::ValueMap of name/Value pairs
    -}
    -    
-name/Value pairs in a plane  VI:
    
    -for (SymbolTable::value_const_iterator VI = ST.value_begin(SomeType),
    -     VE = ST.value_end(SomeType); VI != VE; ++VI ) {
    -  VI->first  // This is the name of the Value
    -  VI->second // This is the Value* value associated with the name
    -}
    -    
    + + -

    Using the recommended iterator names and idioms will help you avoid -making mistakes. Of particular note, make sure that whenever you use -value_begin(SomeType) that you always compare the resulting iterator -with value_end(SomeType) not value_end(SomeOtherType) or else you -will loop infinitely.

    +
    +

    The +User class provides a basis for expressing the ownership of User +towards other +Values. The +Use helper class is employed to do the bookkeeping and to facilitate O(1) +addition and removal.

    -
    + + -
    plane_iterator plane_begin():
    -
    Get an iterator that starts at the beginning of the type planes. - The iterator will iterate over the Type/ValueMap pairs in the - type planes.
    +
    +

    +A subclass of User can choose between incorporating its Use objects +or refer to them out-of-line by means of a pointer. A mixed variant +(some Uses inline others hung off) is impractical and breaks the invariant +that the Use objects belonging to the same User form a contiguous array. +

    +
    -
    plane_const_iterator plane_begin() const:
    -
    Get a const_iterator that starts at the beginning of the type - planes. The iterator will iterate over the Type/ValueMap pairs - in the type planes.
    +

    +We have 2 different layouts in the User (sub)classes: +

      +
    • Layout a) +The Use object(s) are inside (resp. at fixed offset) of the User +object and there are a fixed number of them.

      + +
    • Layout b) +The Use object(s) are referenced by a pointer to an +array from the User object and there may be a variable +number of them.

      +
    +

    +As of v2.4 each layout still possesses a direct pointer to the +start of the array of Uses. Though not mandatory for layout a), +we stick to this redundancy for the sake of simplicity. +The User object also stores the number of Use objects it +has. (Theoretically this information can also be calculated +given the scheme presented below.)

    +

    +Special forms of allocation operators (operator new) +enforce the following memory layouts:

    -
    plane_iterator plane_end():
    -
    Get an iterator at the end of the type planes. This serves as - the marker for end of iteration over the type planes.
    +
      +
    • Layout a) is modelled by prepending the User object by the Use[] array.

      -
      plane_const_iterator plane_end() const:
      -
      Get a const_iterator at the end of the type planes. This serves as - the marker for end of iteration over the type planes.
      +
      +...---.---.---.---.-------...
      +  | P | P | P | P | User
      +'''---'---'---'---'-------'''
      +
      -
      value_iterator value_begin(const Type *Typ):
      -
      Get an iterator that starts at the beginning of a type plane. - The iterator will iterate over the name/value pairs in the type plane. - Note: The type plane must already exist before using this.
      +
    • Layout b) is modelled by pointing at the Use[] array.

      +
      +.-------...
      +| User
      +'-------'''
      +    |
      +    v
      +    .---.---.---.---...
      +    | P | P | P | P |
      +    '---'---'---'---'''
      +
      +
    +(In the above figures 'P' stands for the Use** that + is stored in each Use object in the member Use::Prev) -
    value_const_iterator value_begin(const Type *Typ) const:
    -
    Get a const_iterator that starts at the beginning of a type plane. - The iterator will iterate over the name/value pairs in the type plane. - Note: The type plane must already exist before using this.
    + + -
    value_iterator value_end(const Type *Typ):
    -
    Get an iterator to the end of a type plane. This serves as the marker - for end of iteration of the type plane. - Note: The type plane must already exist before using this.
    +
    +

    +Since the Use objects are deprived of the direct (back)pointer to +their User objects, there must be a fast and exact method to +recover it. This is accomplished by the following scheme:

    +
    -
    value_const_iterator value_end(const Type *Typ) const:
    -
    Get a const_iterator to the end of a type plane. This serves as the - marker for end of iteration of the type plane. - Note: the type plane must already exist before using this.
+A bit-encoding in the 2 LSBits (least significant bits) of Use::Prev makes it possible to find the +start of the User object: +
      +
    • 00 —> binary digit 0
    • +
    • 01 —> binary digit 1
    • +
    • 10 —> stop and calculate (s)
    • +
    • 11 —> full stop (S)
    • +
    +

    +Given a Use*, all we have to do is to walk till we get +a stop and we either have a User immediately behind or +we have to walk to the next stop picking up digits +and calculating the offset:

    +
    +.---.---.---.---.---.---.---.---.---.---.---.---.---.---.---.---.----------------
    +| 1 | s | 1 | 0 | 1 | 0 | s | 1 | 1 | 0 | s | 1 | 1 | s | 1 | S | User (or User*)
    +'---'---'---'---'---'---'---'---'---'---'---'---'---'---'---'---'----------------
    +    |+15                |+10            |+6         |+3     |+1
    +    |                   |               |           |       |__>
    +    |                   |               |           |__________>
    +    |                   |               |______________________>
    +    |                   |______________________________________>
    +    |__________________________________________________________>
    +
    +

+Only the significant number of bits need to be stored between the stops, so that the worst case is 20 memory accesses when there are 1000 Use objects associated with a User.

-
plane_const_iterator find(const Type* Typ) const:
-
This method returns a plane_const_iterator for iteration over the type planes starting at a specific plane, given by Typ.
-
plane_iterator find(const Type* Typ):
-
This method returns a plane_iterator for iteration over the type planes starting at a specific plane, given by Typ.
    +
    +

    +The following literate Haskell fragment demonstrates the concept:

    +
    -
    +
    +
    +> import Test.QuickCheck
    +> 
    +> digits :: Int -> [Char] -> [Char]
    +> digits 0 acc = '0' : acc
    +> digits 1 acc = '1' : acc
    +> digits n acc = digits (n `div` 2) $ digits (n `mod` 2) acc
    +> 
    +> dist :: Int -> [Char] -> [Char]
    +> dist 0 [] = ['S']
    +> dist 0 acc = acc
    +> dist 1 acc = let r = dist 0 acc in 's' : digits (length r) r
    +> dist n acc = dist (n - 1) $ dist 1 acc
    +> 
    +> takeLast n ss = reverse $ take n $ reverse ss
    +> 
    +> test = takeLast 40 $ dist 20 []
    +> 
    +
    +

    +Printing <test> gives: "1s100000s11010s10100s1111s1010s110s11s1S"

    +

+The reverse algorithm computes the length of the string just by examining a certain prefix:

    +
    +
    +> pref :: [Char] -> Int
    +> pref "S" = 1
    +> pref ('s':'1':rest) = decode 2 1 rest
    +> pref (_:rest) = 1 + pref rest
    +> 
    +> decode walk acc ('0':rest) = decode (walk + 1) (acc * 2) rest
    +> decode walk acc ('1':rest) = decode (walk + 1) (acc * 2 + 1) rest
    +> decode walk acc _ = walk + acc
    +> 
    +
    +
    +

    +Now, as expected, printing <pref test> gives 40.
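For readers who would rather see this in C++, here is a rough analogue of the pref function above. It is purely illustrative (it walks the waymark string, not real Use objects), but it checks the same invariant as identityProp:

  // Walks a waymark suffix ('0'/'1' digits, 's' stop, 'S' full stop) starting at
  // position Pos and returns its length, i.e. the distance to the User.
  #include <cassert>
  #include <string>

  static unsigned distanceToUser(const std::string &Marks, std::string::size_type Pos) {
    unsigned Walked = 0;
    for (;; ++Pos, ++Walked) {
      if (Marks[Pos] == 'S')               // full stop: the User is immediately behind
        return Walked + 1;
      if (Marks[Pos] == 's') {             // stop: decode the digits that follow
        unsigned Walk = 1, Offset = 0;     // Walk counts the 's' plus the digits consumed
        for (++Pos; Pos < Marks.size() && (Marks[Pos] == '0' || Marks[Pos] == '1'); ++Pos) {
          Offset = Offset * 2 + (Marks[Pos] == '1');
          ++Walk;
        }
        return Walked + Walk + Offset;     // the digits encode the length of the rest
      }
      // a plain digit: keep walking towards the User
    }
  }

  int main() {
    const std::string Test = "1s100000s11010s10100s1111s1010s110s11s1S"; // <test> from above
    for (std::string::size_type I = 0; I < Test.size(); ++I)
      assert(distanceToUser(Test, I) == Test.size() - I);                // same invariant as identityProp
    return 0;
  }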

    +

+We can quickCheck this with the following property:

    +
    +
    +> testcase = dist 2000 []
    +> testcaseLength = length testcase
    +> 
    +> identityProp n = n > 0 && n <= testcaseLength ==> length arr == pref arr
    +>     where arr = takeLast n testcase
    +> 
    +
    +
    +

    +As expected <quickCheck identityProp> gives:

    - +
    +*Main> quickCheck identityProp
    +OK, passed 100 tests.
    +
    +

    +Let's be a bit more exhaustive:

    + +
    +
    +> 
    +> deepCheck p = check (defaultConfig { configMaxTest = 500 }) p
    +> 
    +
    +
    +

    +And here is the result of <deepCheck identityProp>:

    + +
    +*Main> deepCheck identityProp
    +OK, passed 500 tests.
    +
    + + + + +

+To maintain the invariant that the 2 LSBits of each Use** in Use never change after being set up, setters of Use::Prev must re-tag the new Use** on every modification. Accordingly, getters must strip the tag bits.
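A minimal sketch of what such tagging helpers can look like; the enum and function names are illustrative rather than the exact spelling used in the LLVM headers:

  // Keep a 2-bit waymark tag in the low bits of a Use** and recover both parts.
  // Assumes Use objects are at least 4-byte aligned, so the 2 LSBits are free.
  #include <cstdint>

  struct Use;

  enum PrevPtrTag { zeroDigitTag = 0, oneDigitTag = 1, stopTag = 2, fullStopTag = 3 };

  inline Use **addTag(Use **P, PrevPtrTag Tag) {        // setter side: re-apply the tag
    return reinterpret_cast<Use **>(reinterpret_cast<uintptr_t>(P) | Tag);
  }

  inline Use **stripTag(Use **Tagged) {                 // getter side: drop the tag bits
    return reinterpret_cast<Use **>(reinterpret_cast<uintptr_t>(Tagged) & ~uintptr_t(3));
  }

  inline PrevPtrTag extractTag(Use **Tagged) {          // read the waymark digit/stop
    return PrevPtrTag(reinterpret_cast<uintptr_t>(Tagged) & 3);
  }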

    +

+For layout b), instead of the User we find a pointer (User* with LSBit set). Following this pointer brings us to the User. A portable trick ensures that the first bytes of User (if interpreted as a pointer) never have the LSBit set. (Portability relies on the fact that all known compilers place the vptr in the first word of the instances.)
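Putting both layouts together, the last step of the walk can be pictured like this. Again this is only a sketch with an invented function name, and llvm::User is deliberately left incomplete:

  // After walking past the last Use we are positioned on the first word behind the
  // array: for layout a) that word begins the User itself (its vptr, so the LSBit is
  // clear); for layout b) it is a User* with the LSBit set as a marker.
  #include <cstdint>

  struct User;

  inline User *resolveUser(void *WordPastLastUse) {
    uintptr_t Word = *static_cast<uintptr_t *>(WordPastLastUse);
    if (Word & 1)                                          // layout b): tagged User*
      return reinterpret_cast<User *>(Word & ~uintptr_t(1));
    return static_cast<User *>(WordPastLastUse);           // layout a): the User starts here
  }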

    + +
    + + @@ -2244,15 +2993,15 @@ the lib/VMCore directory.

      -
    • bool isInteger() const: Returns true for any integer type.
    • +
    • bool isIntegerTy() const: Returns true for any integer type.
    • -
    • bool isFloatingPoint(): Return true if this is one of the two +
    • bool isFloatingPointTy(): Return true if this is one of the five floating point types.
• bool isAbstract(): Return true if the type is abstract (contains
@@ -2266,7 +3015,7 @@ the lib/VMCore directory.
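A small usage sketch of the renamed predicates; the header paths below are the pre-llvm/IR layout of this era and may differ in your tree:

  #include "llvm/Type.h"
  #include "llvm/Value.h"

  // Returns true when V's type is an integer or one of the floating point types,
  // using the isIntegerTy()/isFloatingPointTy() predicates documented above.
  bool isIntOrFP(const llvm::Value *V) {
    const llvm::Type *T = V->getType();
    return T->isIntegerTy() || T->isFloatingPointTy();
  }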

      @@ -2298,15 +3047,15 @@ the lib/VMCore directory.

    PointerType
    Subclass of SequentialType for pointer types.
    -
    PackedType
    -
Subclass of SequentialType for packed (vector) types. A packed type is similar to an ArrayType but is distinguished because it is a first class type whereas ArrayType is not. Packed types are used for
    VectorType
    +
Subclass of SequentialType for vector types. A vector type is similar to an ArrayType but is distinguished because it is a first class type whereas ArrayType is not. Vector types are used for vector operations and are usually small vectors of an integer or floating point type.
    StructType
    Subclass of DerivedTypes for struct types.
    -
    FunctionType
    +
    FunctionType
    Subclass of DerivedTypes for function types.
• bool isVarArg() const: Returns true if it's a vararg
@@ -2500,7 +3249,7 @@ method. In addition, all LLVM values can be named. The "name" of the
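To make the distinction between these derived types concrete, here is a hedged sketch that builds a few of them with their get() factory methods; the signatures follow roughly this era's API and have shifted in later releases:

  #include <vector>
  #include "llvm/DerivedTypes.h"
  #include "llvm/LLVMContext.h"

  using namespace llvm;

  void buildSomeTypes(LLVMContext &Ctx) {
    const Type *I32 = Type::getInt32Ty(Ctx);

    const Type *Vec = VectorType::get(I32, 4);       // <4 x i32>: a first class vector type
    const Type *Arr = ArrayType::get(I32, 4);        // [4 x i32]: not a first class type
    const Type *Ptr = PointerType::getUnqual(I32);   // i32* in the default address space

    std::vector<const Type *> Params(2, I32);
    const FunctionType *Fn = FunctionType::get(I32, Params, /*isVarArg=*/false);

    (void)Vec; (void)Arr; (void)Ptr; (void)Fn;
  }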

      The name of this instruction is "foo". NOTE +

The name of this instruction is "foo". NOTE that the name of any value may be missing (an empty string), so names should ONLY be used for debugging (making the source code easier to read, debugging printouts); they should not be used to keep track of values or map
@@ -2732,10 +3481,20 @@ a subclass, which represents the address of a global variable or function.

    • ConstantInt : This subclass of Constant represents an integer constant of any width.
        -
• int64_t getSExtValue() const: Returns the underlying value of this constant as a sign extended signed integer value.
• uint64_t getZExtValue() const: Returns the underlying value of this constant as a zero extended unsigned integer value.
      • +
• const APInt& getValue() const: Returns the underlying value of this constant, an APInt value.
• int64_t getSExtValue() const: Converts the underlying APInt value to an int64_t via sign extension. If the value (not the bit width) of the APInt is too large to fit in an int64_t, an assertion will result. For this reason, use of this method is discouraged.
• uint64_t getZExtValue() const: Converts the underlying APInt value to a uint64_t via zero extension. If the value (not the bit width) of the APInt is too large to fit in a uint64_t, an assertion will result. For this reason, use of this method is discouraged.
• static ConstantInt* get(const APInt& Val): Returns the ConstantInt object that represents the value provided by Val. The type is implied as the IntegerType that corresponds to the bit width of Val.
      • static ConstantInt* get(const Type *Ty, uint64_t Val): Returns the ConstantInt object that represents the value provided by Val for integer type Ty.
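A short example of the accessors above; since the exact get() overloads have moved around between LLVM releases, treat this as a sketch rather than the one true spelling:

  #include "llvm/ADT/APInt.h"
  #include "llvm/Constants.h"
  #include "llvm/DerivedTypes.h"
  #include "llvm/LLVMContext.h"
  #include "llvm/Support/Casting.h"

  using namespace llvm;

  void constantIntDemo(LLVMContext &Ctx) {
    // Create the i32 constant 42; cast<> tolerates overloads that return Constant*.
    ConstantInt *CI = cast<ConstantInt>(ConstantInt::get(Type::getInt32Ty(Ctx), 42));

    const APInt &V = CI->getValue();   // full-precision value, safe for any bit width
    uint64_t Z = CI->getZExtValue();   // fine here because 42 fits in a uint64_t
    (void)V; (void)Z;
  }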
      • @@ -2851,7 +3610,7 @@ Superclasses: GlobalValue, Value

The Function class represents a single procedure in LLVM. It is
-actually one of the more complex classes in the LLVM heirarchy because it must
+actually one of the more complex classes in the LLVM hierarchy because it must
keep track of a large amount of data. The Function class keeps track of a list of BasicBlocks, a list of formal Arguments, and a

The list of BasicBlocks is the most commonly used part of Function objects. The list imposes an implicit ordering of the blocks in the function, which indicates how the code will be laid out by the backend. Additionally, the first BasicBlock is the implicit entry node for the Function. It is not legal in LLVM to explicitly branch to this initial block. There are no implicit exit nodes, and in fact there may be multiple exit
@@ -2906,13 +3665,13 @@ is its address (after linking) which is guaranteed to be constant.
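As a quick illustration of walking that list (a sketch against the iterator API of this era):

  #include "llvm/BasicBlock.h"
  #include "llvm/Function.h"

  using namespace llvm;

  // Counts the blocks of F in their layout order; the entry block is always first.
  unsigned countBlocks(Function &F) {
    unsigned N = 0;
    for (Function::iterator BB = F.begin(), E = F.end(); BB != E; ++BB)
      ++N;
    return N;
  }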

create and what type of linkage the function should have. The FunctionType argument specifies the formal arguments and return value for the function. The same FunctionType value can be used to create multiple functions. The Parent argument specifies the Module in which the function is defined. If this argument is provided, the function will automatically be inserted into that module's list of functions.
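For instance, a hedged sketch using the static Function::Create factory, which in trees of this era is the usual spelling of the constructor described above:

  #include <vector>
  #include "llvm/DerivedTypes.h"
  #include "llvm/Function.h"
  #include "llvm/LLVMContext.h"
  #include "llvm/Module.h"

  using namespace llvm;

  Function *makeEmptyFunction(Module &M) {
    LLVMContext &Ctx = M.getContext();

    // i32 (): no parameters, not vararg.
    std::vector<const Type *> NoParams;
    FunctionType *FT = FunctionType::get(Type::getInt32Ty(Ctx), NoParams, false);

    // Passing &M as the Parent inserts the new function into the module's list.
    return Function::Create(FT, GlobalValue::ExternalLinkage, "my_function", &M);
  }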

        -
      • bool isExternal() +
      • bool isDeclaration()

        Return whether or not the Function has a body defined. If the function is "external", it does not have a body, and thus must be resolved @@ -2990,7 +3749,7 @@ Superclasses: GlobalValue, User, Value

        -

        Global variables are represented with the (suprise suprise) +

Global variables are represented with the (surprise surprise) GlobalVariable class. Like functions, GlobalVariables are also subclasses of GlobalValue, and as such are always referenced by their address (global values must live in memory, so their
@@ -3018,11 +3777,12 @@ never change at runtime).

Create a new global variable of the specified type. If isConstant is true then the global variable will be marked as unchanging for the program. The Linkage parameter specifies the type of
-linkage (internal, external, weak, linkonce, appending) for the variable. If
-the linkage is InternalLinkage, WeakLinkage, or LinkOnceLinkage, then
-the resultant global variable will have internal linkage. AppendingLinkage
-concatenates together all instances (in different translation units) of the
-variable into a single variable but is only applicable to arrays. See
+linkage (internal, external, weak, linkonce, appending) for the variable.
+If the linkage is InternalLinkage, WeakAnyLinkage, WeakODRLinkage,
+LinkOnceAnyLinkage or LinkOnceODRLinkage, then the resultant
+global variable will have internal linkage. AppendingLinkage concatenates
+together all instances (in different translation units) of the variable
+into a single variable but is only applicable to arrays. See
the LLVM Language Reference for further details on linkage types. Optionally an initializer, a name, and the module to put the variable into may be specified for the global variable as
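For example, a sketch against the constructor signature of roughly this era (the parameter order has changed across releases, so double-check your headers):

  #include "llvm/Constants.h"
  #include "llvm/DerivedTypes.h"
  #include "llvm/GlobalVariable.h"
  #include "llvm/Module.h"

  using namespace llvm;

  GlobalVariable *makeConstantGlobal(Module &M) {
    const Type *I32 = Type::getInt32Ty(M.getContext());
    Constant *Init = ConstantInt::get(I32, 7);

    // A constant i32 global with internal linkage and an initializer.
    return new GlobalVariable(M, I32, /*isConstant=*/true,
                              GlobalValue::InternalLinkage, Init, "my_global");
  }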

      • Constant *getInitializer() -

        Returns the intial value for a GlobalVariable. It is not legal +

        Returns the initial value for a GlobalVariable. It is not legal to call this method if there is no initializer.

      @@ -3055,7 +3815,7 @@ never change at runtime).

      #include "llvm/BasicBlock.h"
-doxygen info: BasicBlock
+doxygen info: BasicBlock Class
      Superclass: Value

      @@ -3154,9 +3914,9 @@ arguments. An argument has a pointer to the parent Function.


Valid CSS! Valid HTML 4.01! Dinakar Dhurjati and Chris Lattner