Getting Started with the LLVM System
By: Guochun Shi, Chris Lattner, John Criswell, and Vikram Adve

Overview

Welcome to LLVM! In order to get started, you first need to know some basic information.

First, LLVM comes in two pieces. The first piece is the LLVM suite. This contains all of the tools, libraries, and header files needed to use the low level virtual machine. It also contains a test suite that can be used to test the LLVM tools and the C front end.

The second piece is the C front end. This component provides a version of GCC that compiles C code into LLVM bytecode. Currently, the C front end is a modified version of GCC 3.4 (we track the GCC 3.4 development). Once compiled into LLVM bytecode, a program can be manipulated with the LLVM tools from the LLVM suite.

Requirements

Before you begin to use the LLVM system, review the requirements given below. Knowing ahead of time what hardware and software you will need may save you some trouble.

Hardware

LLVM is known to work on the following platforms:

If you want to compile your own version of the C front end, you will need additional disk space:

LLVM may compile on other platforms. Because the LLVM utilities are portable, it should be possible to generate LLVM bytecode on unsupported platforms (although bytecode generated on one platform may not work on another platform). However, the code generators and Just-In-Time (JIT) compilers only produce SparcV9 or x86 machine code.

Software

Unpacking the distribution requires the following tools:

GNU Zip (gzip)
GNU Tar
These tools are needed to uncompress and unarchive the software. Regular Solaris tar may work for unpacking the TAR archive but is untested.

Compiling LLVM requires that you have several different software packages installed:

GCC
The GNU Compiler Collection must be installed with C and C++ language support. GCC 3.2.x works, and GCC 3.x is generally supported.

Note that we currently do not support any other C++ compiler.

GNU Make
The LLVM build system relies upon GNU Make extensions. Therefore, you will need GNU Make (sometimes known as gmake) to build LLVM.

Flex and Bison
The LLVM source code is built using flex and bison. You will not be able to configure and compile LLVM without them.

GNU M4
If you are installing Bison on your machine for the first time, you will need GNU M4 (version 1.4 or higher).
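
One quick way to confirm that the required tools are installed and recent enough is to query their versions; each of these tools accepts a --version flag:

    % gcc --version
    % g++ --version
    % gmake --version
    % flex --version
    % bison --version
    % m4 --version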

There are some additional tools that you may want to have when working with LLVM:

The next section of this guide is meant to get you up and running with LLVM and to give you some basic information about the LLVM environment. The first subsection gives a short summary for those who are already familiar with the system and want to get started as quickly as possible.

The later sections of this guide describe the general layout of the LLVM source tree, a simple example using the LLVM tool chain, and links to find more information about LLVM or to get help via e-mail.

Getting Started with LLVM

Getting Started Quickly (A Summary)

Here's the short story for getting up and running quickly with LLVM:
  1. Install the C front end:
    1. cd where-you-want-the-C-front-end-to-live
    2. gunzip --stdout cfrontend.platform.tar.gz | tar -xvf -

  2. Get the Source Code

  3. Configure the LLVM Build Environment
    1. Run configure to configure the Makefiles and header files for the default platform. Useful options include:
      • --with-objroot=directory
        Specify where object files should be placed during the build.
      • --with-llvmgccdir=directory
        Specify where the LLVM C frontend is going to be installed.

  4. Build the LLVM Suite
    1. Set your LLVM_LIB_SEARCH_PATH environment variable.
    2. gmake -k |& tee gnumake.out    # this is csh or tcsh syntax
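
Putting the steps above together, a rough transcript might look like the following (csh/tcsh syntax, to match the tee command above; every path is a placeholder, the pre-built x86 front end is assumed, and the cfrontend/x86/llvm-gcc layout is taken from the Terminology and Notation section below):

    % cd /path/to/frontends                               # placeholder directory
    % gunzip --stdout cfrontend.x86.tar.gz | tar -xvf -   # creates cfrontend/x86/llvm-gcc
    % cd /path/to/llvm                                    # placeholder: the LLVM source tree
    % ./configure --with-objroot=/path/to/objects --with-llvmgccdir=/path/to/frontends/cfrontend/x86/llvm-gcc
    % setenv LLVM_LIB_SEARCH_PATH /path/to/frontends/cfrontend/x86/llvm-gcc/bytecode-libs
    % gmake -k |& tee gnumake.out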

See Setting Up Your Environment for tips on simplifying work with the LLVM front end and compiled tools. See the other sub-sections below for other useful details on working with LLVM, or go straight to Program Layout to learn about the layout of the source code tree. For information on building the C front end yourself, see Compiling the LLVM C Front End.

Terminology and Notation

Throughout this manual, the following names are used to denote paths specific to the local system and working environment. These are not environment variables you need to set; they are simply strings used throughout the rest of this document. In the examples below, simply replace each of these names with the appropriate pathname on your local system. All of these paths are absolute:

CVSROOTDIR
This is the path for the CVS repository containing the LLVM source code. Ask the person responsible for your local LLVM installation to give you this path.

OBJ_ROOT
This is the top level directory for where the LLVM suite object files will be placed during the build.

LLVMGCCDIR
This is the pathname to the location where the LLVM C Front End will be installed. Note that the C front end does not need to be installed during the LLVM suite build; you will just need to know where it will go for configuring the build system and running the test suite later.

For the pre-built C front end binaries, the LLVMGCCDIR is cfrontend/platform/llvm-gcc.

GCCSRC
This is the pathname of the directory where the LLVM C front end source code can be found.

GCCOBJ
This is the pathname of the directory where the LLVM C front end object code will be placed during the build. It can be safely removed once the build is complete.

Setting Up Your Environment

In order to compile and use LLVM, you will need to set some environment variables. There are also some shell aliases which you may find useful. You can set these on the command line, or better yet, set them in your .cshrc or .profile.

LLVM_LIB_SEARCH_PATH=LLVMGCCDIR/llvm-gcc/bytecode-libs
This environment variable helps the LLVM C front end find bytecode libraries that it will need for compilation.

alias llvmgcc LLVMGCCDIR/bin/llvm-gcc
This alias allows you to use the LLVM C front end without putting it in your PATH or typing in its complete pathname.
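
For example (replace LLVMGCCDIR with the actual front end installation path on your system; the Bourne-shell lines are an assumed equivalent for .profile users):

    # ~/.cshrc (csh/tcsh)
    setenv LLVM_LIB_SEARCH_PATH LLVMGCCDIR/llvm-gcc/bytecode-libs
    alias llvmgcc LLVMGCCDIR/bin/llvm-gcc

    # ~/.profile (Bourne-compatible shells that support aliases)
    LLVM_LIB_SEARCH_PATH=LLVMGCCDIR/llvm-gcc/bytecode-libs
    export LLVM_LIB_SEARCH_PATH
    alias llvmgcc=LLVMGCCDIR/bin/llvm-gcc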

Unpacking the LLVM Archives

If you have the LLVM distribution, you will need to unpack it before you can begin to compile it. LLVM is distributed as a set of four files. Each file is a TAR archive that is compressed with the gzip program.

The four files are as follows:

llvm.tar.gz
This is the source code to the LLVM suite.

cfrontend.sparc.tar.gz
This is the binary release of the C front end for Solaris/Sparc.

cfrontend.x86.tar.gz
This is the binary release of the C front end for Linux/x86.

cfrontend-src.tar.gz
This is the source code release of the C front end.

Checkout LLVM from CVS

If you have access to our CVS repository, you can get a fresh copy of the entire source code. All you need to do is check it out from CVS as follows:
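
The exact invocation depends on your local repository, but a typical checkout looks something like this (CVSROOTDIR is the repository path described under Terminology and Notation; the module name llvm is assumed):

    % cd where-you-want-llvm-to-live
    % cvs -d CVSROOTDIR checkout llvm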

This will create an 'llvm' directory in the current directory and fully populate it with the LLVM source code, Makefiles, test directories, and local copies of documentation files.

Note that the C front end is not included in the CVS repository. You should have either downloaded the source, or better yet, downloaded the binary distribution for your platform.

Install the C Front End

Before configuring and compiling the LLVM suite, it is best to extract the LLVM C front end. Although it is not needed to build the suite itself, the C front end is used by the LLVM test suite, and its location must be given to the configure script before the LLVM suite can be built.

To install the C front end, do the following:

  1. cd where-you-want-the-front-end-to-live
  2. gunzip --stdout cfrontend.platform.tar.gz | tar -xvf -

Local LLVM Configuration

Once checked out from the CVS repository, the LLVM suite source code must be configured via the configure script. This script sets variables in llvm/Makefile.config and llvm/include/Config/config.h.

The following environment variables are used by the configure script to configure the build system:

CC
Tells configure which C compiler to use. By default, configure will look for the first GCC compiler in PATH. Use this variable to override configure's default behavior.

CXX
Tells configure which C++ compiler to use. By default, configure will look for the first GCC compiler in PATH. Use this variable to override configure's default behavior.

The following options can be used to set or enable LLVM specific options:

--with-objroot=OBJ_ROOT
Path to the directory where object files, libraries, and executables should be placed. If this is set to ., then the object files will be placed within the source code tree. If left unspecified, the default value is ., which also places the object files within the source code tree. (See the Section on The Location of LLVM Object Files for more information.)

--with-llvmgccdir=LLVMGCCDIR
Path to the location where the LLVM C front end binaries and associated libraries will be installed.

--enable-optimized
Enables optimized compilation (debugging symbols are removed and GCC optimization flags are enabled). The default is to use an unoptimized build (also known as a debug build).

--enable-jit
Compile the Just In Time (JIT) functionality. This is not available on all platforms. The default is dependent on platform, so it is best to explicitly enable it if you want it.

In addition to running configure, you must set the LLVM_LIB_SEARCH_PATH environment variable in your startup scripts. This environment variable is used to locate "system" libraries like "-lc" and "-lm" when linking. This variable should be set to the absolute path for the bytecode-libs subdirectory of the C front-end install, or LLVMGCCDIR/llvm-gcc/bytecode-libs. For example, one might set LLVM_LIB_SEARCH_PATH to /home/vadve/lattner/local/x86/llvm-gcc/bytecode-libs for the X86 version of the C front-end on our research machines.
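
For example, a complete configuration step might look like the following (csh syntax; every path is a placeholder, and --enable-jit is included only to show how to request the JIT explicitly):

    % cd llvm
    % ./configure --with-objroot=/path/to/objects --with-llvmgccdir=/path/to/cfrontend/x86/llvm-gcc --enable-jit
    % setenv LLVM_LIB_SEARCH_PATH /path/to/cfrontend/x86/llvm-gcc/bytecode-libs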

Compiling the LLVM Suite Source Code

Once you have configured LLVM, you can build it. There are three types of builds:

Debug Builds
These builds are the default. They compile the tools and libraries with debugging information.

Release (Optimized) Builds
These builds are enabled with the --enable-optimized option to configure. They compile the tools and libraries with GCC optimizer flags enabled and strip debugging information from the libraries and executables that are generated.

Profile Builds
These builds are for use with profiling. They compile profiling information into the code for use with programs like gprof. Profile builds must be started by setting variables on the gmake command line.

To build LLVM, enter the top level llvm directory and issue the following command:

gmake

If you have multiple processors in your machine, you may wish to use some of the parallel build options provided by GNU Make. For example, you could use the command:

gmake -j2

There are several other targets which are useful when working with the LLVM source code:

gmake clean
Removes all files generated by the build. This includes object files, generated C/C++ files, libraries, and executables.

gmake distclean
Removes everything that gmake clean does, but also removes files generated by configure. It attempts to return the source tree to the original state in which it was shipped.

It is also possible to override default values from configure by declaring variables on the command line. The following are some examples:

gmake ENABLE_OPTIMIZED=1
Perform a Release (Optimized) build.

gmake ENABLE_PROFILING=1
Perform a Profiling build.

gmake VERBOSE=1
Print what gmake is doing on standard output.

Every directory in the LLVM source tree includes a Makefile to build it and any subdirectories that it contains. Entering any directory inside the LLVM source tree and typing gmake should rebuild anything in or below that directory that is out of date.
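
For example, to rebuild only the bytecode reader/writer library and anything beneath it (the directory name is taken from the Program Layout section below):

    % cd llvm/lib/ByteCode
    % gmake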

The Location of LLVM Object Files

The LLVM build system sends most output files generated during the build into the directory defined by the variable OBJ_ROOT in llvm/Makefile.config, which is set by the --with-objroot option in configure. This can be either just your normal LLVM source tree or some other directory writable by you. You may wish to put object files on a different filesystem either to keep them from being backed up or to speed up local builds.

If OBJ_ROOT is specified, then the build system will create a directory tree underneath it that resembles the source code's pathname relative to your home directory.

For example, suppose that OBJ_ROOT is set to /tmp and the LLVM suite source code is located in /usr/home/joe/src/llvm, where /usr/home/joe is the home directory of a user named Joe. Then, the object files will be placed in /tmp/src/llvm.

The LLVM build will place files underneath OBJ_ROOT in directories named after the build type:

Debug Builds
Tools
OBJ_ROOT/llvm/tools/Debug
Libraries
OBJ_ROOT/llvm/lib/Debug

Release Builds
Tools
OBJ_ROOT/llvm/tools/Release
Libraries
OBJ_ROOT/llvm/lib/Release

Profile Builds
Tools
OBJ_ROOT/llvm/tools/Profile
Libraries
OBJ_ROOT/llvm/lib/Profile
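
Until these directories are on your PATH, you can invoke a freshly built tool by its full pathname; for example, with the default Debug build (dis is one of the tools described under llvm/tools below):

    % OBJ_ROOT/llvm/tools/Debug/dis --help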

Program Layout

One useful source of information about the LLVM source base is the LLVM doxygen documentation, available at http://llvm.cs.uiuc.edu/doxygen/. The following is a brief introduction to code layout:

CVS directories

Every directory checked out of CVS will contain a CVS directory; for the most part these can just be ignored.

llvm/include

This directory contains public header files exported from the LLVM library. The three main subdirectories of this directory are:

  1. llvm/include/llvm - This directory contains all of the LLVM specific header files. This directory also has subdirectories for different portions of LLVM: Analysis, CodeGen, Reoptimizer, Target, Transforms, etc...
  2. llvm/include/Support - This directory contains generic support libraries that are independent of LLVM, but are used by LLVM. For example, some C++ STL utilities and a Command Line option processing library.
  3. llvm/include/Config - This directory contains header files configured by the configure script. They wrap "standard" UNIX and C header files. Source code can include these header files which automatically take care of the conditional #includes that the configure script generates.

llvm/lib

This directory contains most of the source files of the LLVM system. In LLVM almost all code exists in libraries, making it very easy to share code among the different tools.

llvm/lib/VMCore/
This directory holds the core LLVM source files that implement core classes like Instruction and BasicBlock.
llvm/lib/AsmParser/
This directory holds the source code for the LLVM assembly language parser library.
llvm/lib/ByteCode/
This directory holds code for reading and writing LLVM bytecode.
llvm/lib/CWriter/
This directory implements the LLVM to C converter.
llvm/lib/Analysis/
This directory contains a variety of different program analyses, such as Dominator Information, Call Graphs, Induction Variables, Interval Identification, Natural Loop Identification, etc...
llvm/lib/Transforms/
This directory contains the source code for the LLVM to LLVM program transformations, such as Aggressive Dead Code Elimination, Sparse Conditional Constant Propagation, Inlining, Loop Invariant Code Motion, Dead Global Elimination, Pool Allocation, and many others...
llvm/lib/Target/
This directory contains files that describe various target architectures for code generation. For example, the llvm/lib/Target/Sparc directory holds the Sparc machine description.
llvm/lib/CodeGen/
This directory contains the major parts of the code generator: Instruction Selector, Instruction Scheduling, and Register Allocation.
llvm/lib/Reoptimizer/
This directory holds code related to the runtime reoptimizer framework that is currently under development.
llvm/lib/Support/
This directory contains the source code that corresponds to the header files located in llvm/include/Support/.

llvm/test

This directory contains regression tests and source code that is used to test the LLVM infrastructure...

llvm/tools

The tools directory contains the executables built out of the libraries above, which form the main part of the user interface. You can always get help for a tool by typing tool_name --help. The following is a brief introduction to the most important tools.

as
The assembler transforms the human readable LLVM assembly to LLVM bytecode.

dis
The disassembler transforms the LLVM bytecode to human readable LLVM assembly. Additionally it can convert LLVM bytecode to C, which is enabled with the -c option.

lli
lli is the LLVM interpreter, which can directly execute LLVM bytecode (although very slowly...). In addition to a simple interpreter, lli also has debugger and tracing modes (entered by specifying -debug or -trace on the command line, respectively). Finally, for architectures that support it (currently only x86 and Sparc), by default, lli will function as a Just-In-Time compiler (if the functionality was compiled in), and will execute the code much faster than the interpreter.

llc
llc is the LLVM backend compiler, which translates LLVM bytecode to a SPARC or x86 assembly file.

llvmgcc
llvmgcc is a GCC-based C frontend that has been retargeted to emit LLVM code as its output. It works just like any other GCC compiler, taking the typical -c, -S, -E, and -o options. The source code for the llvmgcc tool is currently not included in the LLVM CVS tree because it is quite large and not very interesting.

    gccas
    This tool is invoked by the llvmgcc frontend as the "assembler" part of the compiler. It assembles LLVM assembly into LLVM bytecode, performs a variety of optimizations, and outputs LLVM bytecode. Thus, when you invoke llvmgcc -c x.c -o x.o, you are causing gccas to be run, which writes the x.o file (an LLVM bytecode file that can be disassembled or manipulated just like any other bytecode file). The command line interface to gccas is designed to be as close as possible to the system 'as' utility so that the GCC frontend itself does not have to be modified to interface to a "weird" assembler.

    gccld
    gccld links together several LLVM bytecode files into one bytecode file and does some optimization. It is the linker invoked by the gcc frontend when multiple .o files need to be linked together. Like gccas the command line interface of gccld is designed to match the system linker, to aid interfacing with the GCC frontend.

opt
opt reads LLVM bytecode, applies a series of LLVM to LLVM transformations (which are specified on the command line), and then outputs the resultant bytecode. The 'opt --help' command is a good way to get a list of the program transformations available in LLVM.

analyze
analyze is used to run a specific analysis on an input LLVM bytecode file and print out the results. It is primarily useful for debugging analyses, or familiarizing yourself with what an analysis does.
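
As a concrete sketch of how opt fits between the other tools, assuming (as described above) that it reads bytecode on standard input and writes the transformed bytecode to standard output; -somepass is a placeholder for a transformation name listed by 'opt --help':

    % opt --help                                 # list the available transformations
    % opt -somepass < hello.bc > hello.opt.bc    # -somepass is a placeholder pass name
    % dis < hello.opt.bc | less                  # inspect the result as LLVM assembly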

Compiling the LLVM C Front End

This step is optional if you have the C front end binary distribution for your platform.

Now that you have the LLVM suite built, you can build the C front end. For those of you that have built GCC before, the process is very similar.

Be forewarned, though: the build system for the C front end is not as polished as the rest of the LLVM code, so there will be many warnings and errors that you will need to ignore for now:

  1. Ensure that OBJ_ROOT/llvm/tools/Debug is at the end of your PATH environment variable (a sample PATH adjustment is shown after this list). The front end build needs to know where to find the LLVM tools, but you want to ensure that these tools are not found before the system assembler and linker that you normally use for compilation.
  2. cd GCCOBJ
  3. Configure the source code:
    • On Linux/x86, use
      • GCCSRC/configure --prefix=LLVMGCCDIR --enable-languages=c
    • On Solaris/Sparc, use
      • GCCSRC/configure --prefix=LLVMGCCDIR --enable-languages=c --target=sparcv9-sun-solaris2
  4. gmake
  5. The build will eventually fail. Don't worry; chances are good that everything that needed to build is built.
  6. gmake -k install
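
For step 1, appending (rather than prepending) keeps the system assembler and linker ahead of the LLVM tools; a sketch for both shell families (OBJ_ROOT as defined earlier):

    % setenv PATH ${PATH}:OBJ_ROOT/llvm/tools/Debug        # csh/tcsh
    $ PATH=$PATH:OBJ_ROOT/llvm/tools/Debug; export PATH    # Bourne-compatible shells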

Once this is done, you should have a built front end compiler in LLVMGCCDIR.

An Example Using the LLVM Tool Chain

  1. First, create a simple C file, name it 'hello.c':
       #include <stdio.h>
       int main() {
         printf("hello world\n");
         return 0;
       }
           
  2. Next, compile the C file into an LLVM bytecode file:

    % llvmgcc hello.c -o hello

    This will create two result files: hello and hello.bc. The hello.bc file is the LLVM bytecode that corresponds to the compiled program and the library facilities that it required. hello is a simple shell script that runs the bytecode file with lli, making the result directly executable.

  3. Run the program. To make sure the program ran, execute one of the following commands:

    % ./hello

    or

    % lli hello.bc

  4. Use the dis utility to take a look at the LLVM assembly code:

    % dis < hello.bc | less

  5. Compile the program to native Sparc assembly using the code generator (assuming you are currently on a Sparc system):

    % llc hello.bc -o hello.s

  6. Assemble the native Sparc assembly file into a program:

    % /opt/SUNWspro/bin/cc -xarch=v9 hello.s -o hello.sparc

  7. Execute the native Sparc program:

    % ./hello.sparc

Common Problems

Below are common problems and their remedies:

When I run configure, it finds the wrong C compiler.
The configure script attempts to locate first gcc and then cc, unless it finds compiler paths set in CC and CXX for the C and C++ compiler, respectively. If configure finds the wrong compiler, either adjust your PATH environment variable or set CC and CXX explicitly.
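
For example, in a Bourne-compatible shell you can point configure at specific compilers for a single run (the compiler paths below are placeholders):

    CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ ./configure ...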

I compile the code, and I get some error about /localhome.
There are several possible causes for this. The first is that you didn't set a pathname properly when using configure, and it defaulted to a pathname that we use on our research machines.

Another possibility is that we hardcoded a path in our Makefiles. If you see this, please email the LLVM bug mailing list with the name of the offending Makefile and a description of what is wrong with it.

The configure script finds the right C compiler, but it uses the LLVM linker from a previous build. What do I do?
The configure script uses the PATH to find executables, so if it's grabbing the wrong linker/assembler/etc, there are two ways to fix it:
  1. Adjust your PATH environment variable so that the correct program appears first in the PATH. This may work, but may not be convenient when you want them first in your path for other work.

  2. Run configure with an alternative PATH that is correct. In a Bourne-compatible shell, the syntax would be:

    PATH= ./configure ...

    This is still somewhat inconvenient, but it allows configure to do its work without having to adjust your PATH permanently.

Links

This document is just an introduction to how to use LLVM to do some simple things... there are many more interesting and complicated things that you can do that aren't documented here (but we'll gladly accept a patch if you want to write something up!). For more information about LLVM, check out:


If you have any questions or run into any snags (or you have any additions...), please send an email to Chris Lattner.

Last modified: Tue Jun 3 22:06:43 CDT 2003