LLVM 3.0 Release Notes

  1. Introduction
  2. Sub-project Status Update
  3. External Projects Using LLVM 3.0
  4. What's New in LLVM 3.0?
  5. Installation Instructions
  6. Known Problems
  7. Additional Information

Written by the LLVM Team

Introduction

This document contains the release notes for the LLVM Compiler Infrastructure, release 3.0. Here we describe the status of LLVM, including major improvements from the previous release, improvements in various subprojects of LLVM, and some of the current users of the code. All LLVM releases may be downloaded from the LLVM releases web site.

For more information about LLVM, including information about the latest release, please check out the main LLVM web site. If you have questions or comments, the LLVM Developer's Mailing List is a good place to send them.

Note that if you are reading this file from a Subversion checkout or the main LLVM web page, this document applies to the next release, not the current one. To see the release notes for a specific release, please see the releases page.

Sub-project Status Update

The LLVM 3.0 distribution currently consists of code from the core LLVM repository (which roughly includes the LLVM optimizers, code generators and supporting tools), and the Clang repository. In addition to this code, the LLVM Project includes other sub-projects that are in development. Here we include updates on these subprojects.

Clang: C/C++/Objective-C Frontend Toolkit

Clang is an LLVM front end for the C, C++, and Objective-C languages. Clang aims to provide a better user experience through expressive diagnostics, a high level of conformance to language standards, fast compilation, and low memory use. Like LLVM, Clang provides a modular, library-based architecture that makes it suitable for creating or integrating with other development tools. Clang is considered a production-quality compiler for C, Objective-C, C++ and Objective-C++ on x86 (32- and 64-bit), and for Darwin/ARM targets.

In the LLVM 3.0 time-frame, the Clang team has made many improvements.

For more details about the changes to Clang since the 2.9 release, see the Clang release notes.

If Clang rejects your code but another compiler accepts it, please take a look at the language compatibility guide to make sure this is not intentional or a known issue.

DragonEgg: GCC front-ends, LLVM back-end

DragonEgg is a gcc plugin that replaces GCC's optimizers and code generators with LLVM's. It works with gcc-4.5 or gcc-4.6, targets the x86-32 and x86-64 processor families, and has been successfully used on the Darwin, FreeBSD, KFreeBSD, Linux and OpenBSD platforms. It fully supports Ada, C, C++ and Fortran. It has partial support for Go, Java, Obj-C and Obj-C++.

The 3.0 release includes a number of notable changes.

compiler-rt: Compiler Runtime Library

The new LLVM compiler-rt project is a simple library that provides an implementation of the low-level target-specific hooks required by code generation and other runtime components. For example, when compiling for a 32-bit target, converting a double to a 64-bit unsigned integer is compiled into a runtime call to the "__fixunsdfdi" function. The compiler-rt library provides highly optimized implementations of this and other low-level routines (some are 3x faster than the equivalent libgcc routines).

In the LLVM 3.0 timeframe, the target-specific ARM code has been converted to "unified" assembly syntax, and several new functions have been added to the library.

LLDB: Low Level Debugger

LLDB is a ground-up implementation of a command line debugger, as well as a debugger API that can be used from other applications. LLDB makes use of the Clang parser to provide high-fidelity expression parsing (particularly for C++) and uses the LLVM JIT for target support.

LLDB has advanced by leaps and bounds in the 3.0 timeframe. It is dramatically more stable and useful, and includes both a new tutorial and a side-by-side comparison with GDB.

libc++: C++ Standard Library

Like compiler-rt, libc++ is now dual licensed under the MIT and UIUC licenses, allowing it to be used more permissively.

Libc++ has been ported to FreeBSD and imported into the base system. It is planned to be the default STL implementation for FreeBSD 10.

VMKit

The VMKit project is an implementation of a Java Virtual Machine (Java VM or JVM) that uses LLVM for static and just-in-time compilation.

In the LLVM 3.0 time-frame, VMKit has had significant improvements on both runtime and startup performance:

  • Precompilation: by compiling a small subset of Java's core library ahead of time, startup performance has been improved to the point that running a 'Hello World' program takes less than 30 milliseconds.
  • Customization: by customizing virtual methods for individual classes, the VM can statically determine the target of a virtual call, and decide to inline it.
  • Inlining: the VM does more inlining than it did before, by allowing more bytecode instructions to be inlined, and thanks to customization. It also inlines GC barriers and object allocations.
  • New exception model: the generated code for a method that does not use try/catch is no longer penalized by the possibility of calling a method that throws an exception. Instead, the method that throws the exception jumps directly to the method that can catch it.

LLBrowse: IR Browser

LLBrowse is an interactive viewer for LLVM modules. It can load any LLVM module and display its contents as an expandable tree view, making it easy to inspect types, functions, global variables, and metadata nodes. It is fully cross-platform, being based on the popular wxWidgets GUI toolkit.

External Open Source Projects Using LLVM 3.0

An exciting aspect of LLVM is that it is used as an enabling technology for a lot of other language and tools projects. This section lists some of the projects that have already been updated to work with LLVM 3.0.

AddressSanitizer

AddressSanitizer uses compiler instrumentation and a specialized malloc library to find C/C++ bugs such as use-after-free and out-of-bound accesses to heap, stack, and globals. The key feature of the tool is speed: the average slowdown introduced by AddressSanitizer is less than 2x.

ClamAV

Clam AntiVirus is an open source (GPL) anti-virus toolkit for UNIX, designed especially for e-mail scanning on mail gateways.

Since version 0.96 it has supported bytecode signatures that allow writing detections for complex malware. It uses LLVM's JIT to speed up the execution of bytecode on X86, X86-64, and PPC32/64, falling back to its own interpreter otherwise. The git version was updated to work with LLVM 3.0.

clang_complete for VIM

clang_complete is a VIM plugin that provides accurate C/C++ autocompletion using the Clang front end. The development version of clang_complete can use libclang directly, which maintains a cache to speed up autocompletion.

clReflect

clReflect is a C++ parser that uses clang/LLVM to derive a light-weight reflection database suitable for use in game development. It comes with a very simple runtime library for loading and querying the database, requiring no external dependencies (including CRT), and an additional utility library for object management and serialisation.

Cling C++ Interpreter

Cling is an interactive compiler interface (aka C++ interpreter). It supports C++ and C, and uses LLVM's JIT and the Clang parser. It has a prompt interface, runs source files, calls into shared libraries, prints the value of expressions, even does runtime lookup of identifiers (dynamic scopes). And it just behaves like one would expect from an interpreter.

Crack Programming Language

Crack aims to provide the ease of development of a scripting language with the performance of a compiled language. The language derives concepts from C++, Java and Python, incorporating object-oriented programming, operator overloading and strong typing.

Eero

Eero is a fully header-and-binary-compatible dialect of Objective-C 2.0, implemented with a patched version of the Clang/LLVM compiler. It features a streamlined syntax, Python-like indentation, and new operators, for improved readability and reduced code clutter. It also has new features such as limited forms of operator overloading and namespaces, and strict (type-and-operator-safe) enumerations. It is inspired by languages such as Smalltalk, Python, and Ruby.

FAUST Real-Time Audio Signal Processing Language

FAUST is a compiled language for real-time audio signal processing. The name FAUST stands for Functional AUdio STream. Its programming model combines two approaches: functional programming and block diagram composition. In addition to the C, C++, and Java output formats, the Faust compiler can now generate LLVM bitcode, and works with LLVM 2.7-3.0.

Glasgow Haskell Compiler (GHC)

GHC is an open source, state-of-the-art programming suite for Haskell, a standard lazy functional programming language. It includes an optimizing static compiler generating good code for a variety of platforms, together with an interactive system for convenient, quick development.

GHC 7.0 and onwards include an LLVM code generator, supporting LLVM 2.8 and later. GHC now also includes experimental support for the ARM platform when using LLVM 3.0.

gwXscript

gwXscript is an object oriented, aspect oriented programming language which can create both executables (ELF, EXE) and shared libraries (DLL, SO, DYNLIB). The compiler is implemented in its own language and translates scripts into LLVM IR, which can be optimized and translated into native code by the LLVM framework. Source code in gwXscript contains definitions that expand the namespaces, so you can build your project and simply 'plug out' features by removing a file. The remaining project is unaffected, since concerns are separated directly by the 'template' feature of gwX. It is also possible to add new features to a project by just adding files, without editing the original project. This language is used, for example, to create games or content management systems that should be extendable.

gwXscript is strongly typed and offers comfort with its native types string, hash and array. You can easily write new libraries in gwXscript or native code. gwXscript is type safe, so users should not be able to crash your program or execute malicious code, other than code that consumes CPU time.

include-what-you-use

include-what-you-use is a tool to ensure that a file directly #includes all .h files that provide a symbol that the file uses. It also removes superfluous #includes from source files.

ispc: The Intel SPMD Program Compiler

ispc is a compiler for "single program, multiple data" (SPMD) programs. It compiles a C-based SPMD programming language to run on the SIMD units of CPUs; it often delivers 5-6x speedups on a single core of a CPU with an 8-wide SIMD unit compared to serial code, while still providing a clean and easy-to-understand programming model. For an introduction to the language and its performance, see the walkthrough of a short example program. ispc is licensed under the BSD license.

The Julia Programming Language

Julia is a high-level, high-performance dynamic language for technical computing. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. The compiler uses type inference to generate fast code without any type declarations, and uses LLVM's optimization passes and JIT compiler. The language is designed around multiple dispatch, giving programs a large degree of flexibility. It is ready for use on many kinds of problems.

LanguageKit and Pragmatic Smalltalk

LanguageKit is a framework for implementing dynamic languages sharing an object model with Objective-C. It provides static and JIT compilation using LLVM along with its own interpreter. Pragmatic Smalltalk is a dialect of Smalltalk, built on top of LanguageKit, that interfaces directly with Objective-C, sharing the same object representation and message sending behaviour. These projects are developed as part of the Étoilé desktop environment.

LuaAV

LuaAV is a real-time audiovisual scripting environment based around the Lua language and a collection of libraries for sound, graphics, and other media protocols. LuaAV uses LLVM and Clang to JIT compile efficient user-defined audio synthesis routines specified in a declarative syntax.

Mono

Mono is an open source, cross-platform implementation of C# and the CLR that is binary compatible with Microsoft .NET. It has an optional, dynamically loaded LLVM code generation backend in Mini, the JIT compiler.

Note that Mono uses a Git mirror of LLVM with some patches.

Polly

Polly is an advanced data-locality optimizer and automatic parallelizer. It uses an advanced mathematical model to calculate detailed data dependency information, which it uses to optimize the loop structure of a program. Polly can speed up sequential code by improving memory locality and consequently cache utilization. Furthermore, Polly is able to expose different kinds of parallelism, which it exploits by introducing (basic) OpenMP and SIMD code. A mid-term goal of Polly is to automatically create optimized GPU code.

Portable OpenCL (pocl)

Portable OpenCL is an open source implementation of the OpenCL standard which can be easily adapted for new targets. One of the goals of the project is improving performance portability of OpenCL programs, avoiding the need for target-dependent manual optimizations. A "native" target is included, which allows running OpenCL kernels on the host (CPU).

Pure

Pure is an algebraic/functional programming language based on term rewriting. Programs are collections of equations which are used to evaluate expressions in a symbolic fashion. The interpreter uses LLVM as a backend to JIT-compile Pure programs to fast native code. Pure offers dynamic typing, eager and lazy evaluation, lexical closures, a hygienic macro system (also based on term rewriting), built-in list and matrix support (including list and matrix comprehensions) and an easy-to-use interface to C and other programming languages (including the ability to load LLVM bitcode modules, and inline C, C++, Fortran and Faust code in Pure programs if the corresponding LLVM-enabled compilers are installed).

Pure version 0.48 has been tested and is known to work with LLVM 3.0 (and continues to work with older LLVM releases >= 2.5).

Renderscript

Renderscript is Android's advanced 3D graphics rendering and compute API. It provides a portable C99-based language with extensions to facilitate common use cases for enhancing graphics and thread level parallelism. The Renderscript compiler frontend is based on Clang/LLVM. It emits a portable bitcode format for the actual compiled script code, as well as reflects a Java interface for developers to control the execution of the compiled bitcode. Executable machine code is then generated from this bitcode by an LLVM backend on the device. Renderscript is thus able to provide a mechanism by which Android developers can improve performance of their applications while retaining portability.

SAFECode

SAFECode is a memory safe C/C++ compiler built using LLVM. It takes standard, unannotated C/C++ code, analyzes the code to ensure that memory accesses and array indexing operations are safe, and instruments the code with run-time checks when safety cannot be proven statically. SAFECode can be used as a debugging aid (like Valgrind) to find and repair memory safety bugs. It can also be used to protect code from security attacks at run-time.

The Stupid D Compiler (SDC)

The Stupid D Compiler is a project seeking to write a self-hosting compiler for the D programming language without using the frontend of the reference compiler (DMD).

TTA-based Co-design Environment (TCE)

TCE is a toolset for designing application-specific processors (ASP) based on the Transport triggered architecture (TTA). The toolset provides a complete co-design flow from C/C++ programs down to synthesizable VHDL and parallel program binaries. Processor customization points include the register files, function units, supported operations, and the interconnection network.

TCE uses Clang and LLVM for C/C++ language support, target independent optimizations, and parts of code generation. It generates new LLVM-based code generators "on the fly" for the designed TTA processors and loads them into the compiler backend as runtime libraries to avoid per-target recompilation of larger parts of the compiler chain.

Tart Programming Language

Tart is a general-purpose, strongly typed programming language designed for application developers. Strongly inspired by Python and C#, Tart focuses on practical solutions for the professional software developer, while avoiding the clutter and boilerplate of legacy languages like Java and C++. Although Tart is still in development, the current implementation supports many features expected of a modern programming language, such as garbage collection, powerful bidirectional type inference, a greatly simplified syntax for template metaprogramming, closures and function literals, reflection, operator overloading, explicit mutability and immutability, and much more. Tart is flexible enough to accommodate a broad range of programming styles and philosophies, while maintaining a strong commitment to simplicity, minimalism and elegance in design.

ThreadSanitizer

ThreadSanitizer is a data race detector for (mostly) C and C++ code, available for Linux, Mac OS and Windows. On different systems, we use binary instrumentation frameworks (Valgrind and Pin) as frontends that generate the program events for the race detection algorithm. On Linux, there's an option of using LLVM-based compile-time instrumentation.

What's New in LLVM 3.0?

This release includes a huge number of bug fixes, performance tweaks and minor improvements. Some of the major improvements and new features are listed in this section.

Major New Features

LLVM 3.0 includes several major changes and big features:

  • llvm-gcc is no longer supported and is not included in the release. We recommend switching to Clang or DragonEgg.
  • The linear scan register allocator has been replaced with a new "greedy" register allocator, enabling live range splitting and many other optimizations that lead to better code quality. Please see its blog post or its talk at the Developer Meeting for more information.
  • LLVM IR now includes full support for atomic memory operations, intended to support the C++11 and C1x memory models. This includes atomic load and store, compare-and-exchange, and read-modify-write instructions, as well as a full set of memory ordering constraints. Please see the Atomics Guide for more information.
  • The LLVM IR exception handling representation has been redesigned and reimplemented, making it more elegant, fixing a huge number of bugs, and enabling inlining and other optimizations. Please see its blog post and the Exception Handling documentation for more information.
  • The LLVM IR Type system has been redesigned and reimplemented, making it faster and solving some long-standing problems. Please see its blog post for more information.
  • The MIPS backend has made major leaps in this release, going from an experimental target to being virtually production quality and supporting a wide variety of MIPS subtargets. See the MIPS section below for more information.
  • The optimizer and code generator now support gprof- and gcov-style coverage and profiling information, and LLVM includes a new llvm-cov tool (the output also works with gcov). Clang exposes coverage and profiling through GCC-compatible command line options.

LLVM IR and Core Improvements

LLVM IR has gained several new features that better support new targets and expose new optimization opportunities.

Optimizer Improvements

In addition to many minor performance tweaks and bug fixes, this release includes a few major enhancements and additions to the optimizers:

  • The pass manager now has an extension API that allows front-ends and plugins to insert their own optimizations in the well-known places in the standard pass optimization pipeline.
  • Information about branch probability and basic block frequency is now available within LLVM, based on a combination of static branch prediction heuristics and __builtin_expect calls. That information is currently used for register spill placement and if-conversion, with additional optimizations planned for future releases. The same framework is intended for eventual use with profile-guided optimization.
  • The "-indvars" induction variable simplification pass only modifies induction variables when profitable. Sign and zero extension elimination, linear function test replacement, loop unrolling, and other simplifications that require induction variable analysis have been generalized so they no longer require loops to be rewritten into canonical form prior to optimization. This new design preserves more IR level information, avoids undoing earlier loop optimizations (particularly hand-optimized loops), and no longer requires the code generator to reconstruct loops into an optimal form - an intractable problem.
  • LLVM now includes a pass to optimize retain/release calls for the Automatic Reference Counting (ARC) Objective-C language feature (in lib/Transforms/Scalar/ObjCARC.cpp). It is a decent example of implementing a source-language-specific optimization in LLVM.

MC Level Improvements

The LLVM Machine Code (aka MC) subsystem was created to solve a number of problems in the realm of assembly, disassembly, object file format handling, and a number of other related areas that CPU instruction-set level tools work in. For more information, please see the Intro to the LLVM MC Project Blog Post.

  • The MC layer has undergone significant refactoring to eliminate layering violations that caused it to pull in the LLVM compiler backend code.
  • The ELF object file writers are much more fully featured.
  • The integrated assembler now supports #line directives.
  • An early implementation of a JIT built on top of the MC framework (known as MC-JIT) has been implemented and will eventually replace the old JIT. It emits object files directly to memory and uses a runtime dynamic linker to resolve references and drive lazy compilation. The MC-JIT enables much greater code reuse between the JIT and the static compiler, and provides better integration with the platform ABI as a result.
  • The assembly printer now makes use of assembler instruction aliases (InstAlias) to print simplified mnemonics when possible.
  • TableGen can now autogenerate MC expansion logic for pseudo instructions that expand to multiple MC instructions (through the PseudoInstExpansion class).
  • A new llvm-dwarfdump tool provides the start of a drop-in replacement for the corresponding system tool, implemented using LLVM libraries. As part of this, LLVM now has the beginnings of a DWARF parsing library.
  • llvm-objdump produces more output, including symbol-by-symbol disassembly, inline relocations, section headers, symbol tables, and section contents. Support for archive files has also been added.
  • llvm-nm has gained support for archives of binary files.
  • llvm-size has been added. This tool prints out section sizes.

Target Independent Code Generator Improvements

We have put a significant amount of work into the code generator infrastructure, which allows us to implement more aggressive algorithms and make it run faster:

  • LLVM can now produce code that works with libgcc to dynamically allocate stack segments, as opposed to allocating a worst-case chunk of virtual memory for each thread.
  • LLVM generates substantially better code for indirect gotos due to a new tail duplication pass, which can be a substantial performance win for interpreter loops that use them.
  • Exception handling and debug frame information are now emitted with CFI directives. This lets the assembler produce more compact information, as it knows the final offsets, yielding much smaller executables for some C++ applications. If the system assembler doesn't support CFI directives and the integrated assembler is not used, MC expands the directives itself.
  • The code generator now supports vector "select" operations on vector comparisons, turning them into various optimized code sequences (e.g. using the SSE4/AVX "blend" instructions).
  • The SSE execution domain fix pass and the ARM NEON move fix pass have been merged to a target independent execution dependency fix pass. This pass is used to select alternative equivalent opcodes in a way that minimizes execution domain crossings. Closely connected instructions are moved to the same execution domain when possible. Targets can override the getExecutionDomain and setExecutionDomain hooks to use the pass.

X86-32 and X86-64 Target Improvements

New features and major changes in the X86 target include:

  • The X86 backend, assembler and disassembler now have full support for AVX1. To enable it, pass -mavx to the compiler. An AVX2 implementation is underway on mainline.
  • The integrated assembler and disassembler now support a broad range of new instructions including Atom, Ivy Bridge, SSE4a/BMI instructions, rdrand and many others.
  • The X86 backend now fully supports the X87 floating point stack inline assembly constraints.
  • The integrated assembler now supports the .code32 and .code64 directives to switch between 32-bit and 64-bit instructions.
  • The X86 backend now synthesizes horizontal add/sub instructions from generic vector code when the appropriate instructions are enabled.
  • The X86-64 backend generates smaller and faster code at -O0 due to improvements in fast instruction selection.
  • Native Client subtarget support has been added.
  • The CRC32 intrinsics have been renamed. The intrinsics were previously @llvm.x86.sse42.crc32.[8|16|32] and @llvm.x86.sse42.crc64.[8|64]. They have been renamed to @llvm.x86.sse42.crc32.32.[8|16|32] and @llvm.x86.sse42.crc32.64.[8|64].

ARM Target Improvements

New features of the ARM target include:

  • The ARM backend generates much faster code for Cortex-A9 chips.
  • The ARM backend has improved support for Cortex-M series processors.
  • The ARM inline assembly constraints have been implemented and are now fully supported.
  • NEON code produced by Clang often runs much faster due to improvements in the Scalar Replacement of Aggregates pass.
  • The old ARM disassembler has been replaced with a new one based on autogenerated encoding information from the ARM .td files.
  • The integrated assembler has made major leaps forward, but is still beta quality in LLVM 3.0.

MIPS Target Improvements

This release has seen major new work on just about every aspect of the MIPS backend. Some of the major new features include:

  • Most MIPS32r1 and r2 instructions are now supported.
  • Both little- and big-endian MIPS32r1/r2 have been tested extensively.
  • The O32 ABI has been fully tested.
  • The MIPS backend has migrated to using the MC infrastructure for assembly printing. Initial support for direct object code emission has also been implemented.
  • The delay slot filler has been updated; it now tries to fill delay slots with useful instructions instead of always filling them with NOPs.
  • Support for old-style JIT is complete.
  • Support for old architectures (MIPS1 and MIPS2) has been removed.
  • Initial support for MIPS64 has been added.

PTX Target Improvements

The PTX back-end is still experimental, but is fairly usable for compute kernels in LLVM 3.0. Most scalar arithmetic is implemented, as well as intrinsics to access the special PTX registers and sync instructions. The major missing pieces are texture/sampler support and some vector operations.

That said, the backend is already being used for domain-specific languages and can be used by Clang to compile OpenCL C code into PTX.

Other Target Specific Improvements

  • Many PowerPC improvements have been implemented for ELF targets, including support for varargs and initial support for direct .o file emission.
  • MicroBlaze scheduling itineraries were added that model the 3-stage and the 5-stage pipeline architectures. The 3-stage pipeline model can be selected with -mcpu=mblaze3 and the 5-stage pipeline model can be selected with -mcpu=mblaze5.

Major Changes and Removed Features

If you're already an LLVM user or developer with out-of-tree changes based on LLVM 2.9, this section lists some "gotchas" that you may run into upgrading from the previous release.

  • LLVM 3.0 removes support for reading LLVM 2.8 and earlier files, and LLVM 3.1 will eliminate support for reading LLVM 2.9 files. Going forward, we aim for all future versions of LLVM to read bitcode files and .ll files produced by LLVM 3.0.
  • TableGen has been split into a library, allowing the Clang tblgen pieces to live in the Clang tree. The LLVM binary has been renamed from tblgen to llvm-tblgen.
  • The LLVMC meta compiler driver was removed.
  • The unused PostOrder Dominator Frontiers and LowerSetJmp passes were removed.
  • The old TailDup pass was not used in the standard pipeline and was unable to update SSA form, so it has been removed.
  • The syntax of volatile loads and stores in IR has been changed to "load volatile"/"store volatile". The old syntax ("volatile load"/"volatile store") is still accepted, but is now considered deprecated and will be removed in 3.1.
  • llvm-gcc's frontend tests have been removed from llvm/test/Frontend* and moved into the Clang and DragonEgg test suites.
  • The old atomic intrinsics (llvm.memory.barrier and llvm.atomic.*) are now gone. Please use the new atomic instructions, described in the atomics guide.
  • LLVM's configure script doesn't depend on llvm-gcc anymore, eliminating a strange circular dependence between projects.

Windows (32-bit)

  • On Win32 (MinGW32 and MSVC), Windows 2000 is no longer supported; Windows XP or higher is required.

Internal API Changes

In addition, many APIs have changed in this release. Some of the major LLVM API changes are:

  • The biggest and most pervasive change is that the type system has been rewritten: PATypeHolder and OpaqueType are gone, and all APIs deal with Type* instead of const Type*. If you need to create recursive structures, create a named structure and use setBody() when all its elements are built. Type merging and refining are gone too: named structures are not merged with other structures, even if their layout is identical. (Of course, anonymous structures are still uniqued by layout.)
  • PHINode::reserveOperandSpace has been removed. Instead, you must specify how many operands to reserve space for when you create the PHINode, by passing an extra argument into PHINode::Create.
  • PHINodes no longer store their incoming BasicBlocks as operands. Instead, the list of incoming BasicBlocks is stored separately, and can be accessed with new functions PHINode::block_begin and PHINode::block_end.
  • Various functions now take an ArrayRef instead of either a pair of pointers (or iterators) to the beginning and end of a range, or a pointer and a length. Others now return an ArrayRef instead of a reference to a SmallVector or std::vector. These include:
    • CallInst::Create
    • ComputeLinearIndex (in llvm/CodeGen/Analysis.h)
    • ConstantArray::get
    • ConstantExpr::getExtractElement
    • ConstantExpr::getGetElementPtr
    • ConstantExpr::getInBoundsGetElementPtr
    • ConstantExpr::getIndices
    • ConstantExpr::getInsertElement
    • ConstantExpr::getWithOperands
    • ConstantFoldCall (in llvm/Analysis/ConstantFolding.h)
    • ConstantFoldInstOperands (in llvm/Analysis/ConstantFolding.h)
    • ConstantVector::get
    • DIBuilder::createComplexVariable
    • DIBuilder::getOrCreateArray
    • ExtractValueInst::Create
    • ExtractValueInst::getIndexedType
    • ExtractValueInst::getIndices
    • FindInsertedValue (in llvm/Analysis/ValueTracking.h)
    • gep_type_begin (in llvm/Support/GetElementPtrTypeIterator.h)
    • gep_type_end (in llvm/Support/GetElementPtrTypeIterator.h)
    • GetElementPtrInst::Create
    • GetElementPtrInst::CreateInBounds
    • GetElementPtrInst::getIndexedType
    • InsertValueInst::Create
    • InsertValueInst::getIndices
    • InvokeInst::Create
    • IRBuilder::CreateCall
    • IRBuilder::CreateExtractValue
    • IRBuilder::CreateGEP
    • IRBuilder::CreateInBoundsGEP
    • IRBuilder::CreateInsertValue
    • IRBuilder::CreateInvoke
    • MDNode::get
    • MDNode::getIfExists
    • MDNode::getTemporary
    • MDNode::getWhenValsUnresolved
    • SimplifyGEPInst (in llvm/Analysis/InstructionSimplify.h)
    • TargetData::getIndexedOffset
  • All forms of StringMap::getOrCreateValue have been removed except for the one that takes a StringRef.
  • The LLVMBuildUnwind function from the C API was removed. The LLVM unwind instruction has been deprecated for a long time and isn't used by current front ends, so it was removed during the exception handling rewrite.
  • The LLVMAddLowerSetJmpPass function from the C API was removed because the LowerSetJmp pass was removed.
  • The DIBuilder interface used by front ends to encode debugging information in the LLVM IR now expects clients to call DIBuilder::finalize() at the end of the translation unit to complete the debugging information encoding.
  • TargetSelect.h has moved from Target/ to Support/.
  • UpgradeIntrinsicCall no longer upgrades pre-2.9 intrinsic calls (for example llvm.memset.i32).
  • All out-of-tree passes and their dependencies must now be initialized with INITIALIZE_PASS{BEGIN,END,} and INITIALIZE_{PASS,AG}_DEPENDENCY.
  • The interface for MemDepResult in MemoryDependenceAnalysis has been enhanced with new return types Unknown and NonFuncLocal, in addition to the existing types Clobber, Def, and NonLocal.

Known Problems

LLVM is generally a production quality compiler, and is used by a broad range of applications and shipping in many products. That said, not every subsystem is as mature as the aggregate, particularly the more obscure targets. If you run into a problem, please check the LLVM bug database and submit a bug if there isn't one already, or ask on the LLVMdev list.

Known problem areas include:

  • The Alpha, Blackfin, CellSPU, MSP430, PTX, SystemZ and XCore backends are experimental, and the Alpha, Blackfin and SystemZ targets have already been removed from mainline.
  • The integrated assembler, disassembler, and JIT are not supported by several targets. If the integrated assembler is not supported, then a system assembler is required. For more details, see the Target Features Matrix.
  • The C backend has numerous problems and is not being actively maintained. Depending on it for anything serious is not advised.

Additional Information

A wide variety of additional information is available on the LLVM web page, in particular in the documentation section. The web page also contains versions of the API documentation which are up to date with the Subversion version of the source code. You can access versions of these documents specific to this release in the "llvm/docs/" directory of the LLVM tree.

If you have any questions or comments about LLVM, please feel free to contact us via the mailing lists.

