LLVM Test Suite Guide
  1. Overview
  2. Requirements
  3. Quick Start
  4. LLVM Test Suite Organization
  5. LLVM Test Suite Tree
  6. QMTest Structure
  7. Programs Structure
  8. Running the LLVM Tests

Written by John T. Criswell

Overview

This document is the reference manual for the LLVM test suite. It documents the structure of the LLVM test suite, the tools needed to use it, and how to add and run tests.

Requirements

In order to use the LLVM test suite, you will need all of the software required to build LLVM, plus the following:

QMTest
The LLVM test suite uses QMTest to organize and run tests.
Python
You will need a Python interpreter that works with QMTest. Python will need zlib and SAX support enabled.

Quick Start

The tests are located in the LLVM source tree under the directory llvm/test. To run all of the tests in LLVM, use the master Makefile in that directory:

	 % gmake -C llvm/test
	

To run only the code fragment tests (i.e. those that do basic testing of LLVM), run the tests organized by QMTest:

	 % gmake -C llvm/test qmtest
	

To run only the tests that compile and execute whole programs, run the Programs tests:

	 % gmake -C llvm/test/Programs
	

LLVM Test Suite Organization

The LLVM test suite contains two major categories of tests: code fragments and whole programs.

Code Fragments

Code fragments are small pieces of code that test a specific feature of LLVM or trigger a specific bug in LLVM. They are usually written in LLVM assembly language, but can be written in other languages if the test targets a particular language front end.

Code fragments are not complete programs, and they are never executed to determine correct behavior.

The tests in the Features and Regression directories contain code fragments.

Whole Programs

Whole programs are pieces of code that can be compiled and linked into a stand-alone executable. These programs are generally written in high-level languages such as C or C++, but some are written directly in LLVM assembly.

These programs are compiled and then executed using several different methods (native compiler, LLVM C backend, LLVM JIT, LLVM native code generation, etc.). The output of these programs is compared to ensure that LLVM is compiling the programs correctly.

In addition to compiling and executing programs, whole-program tests serve as a way of benchmarking LLVM performance, both in terms of the efficiency of the generated programs and the speed with which LLVM compiles, optimizes, and generates code.

The Programs directory contains all tests which compile and benchmark whole programs.

LLVM Test Suite Tree

Each type of test in the LLVM test suite has its own directory. The major subtrees of the test suite directory tree are as follows:

llvm/test/Features
Code fragment tests that exercise individual features of LLVM.
llvm/test/Regression
Code fragment tests that trigger specific, previously reported bugs in LLVM.
llvm/test/Programs
Whole programs that are compiled, executed, and benchmarked.

QMTest Structure

The LLVM test suite is partially driven by QMTest and partially driven by GNU Make. Specifically, the Features and Regression tests are all driven by QMTest. The Programs directory is currently driven by a set of Makefiles.

The QMTest system needs to have several pieces of information available; these pieces of configuration information are known collectively as the "context" in QMTest parlance. Since the context for LLVM is relatively large, the master Makefile in llvm/test sets it for you.

The LLVM database class makes the subdirectories of llvm/test a QMTest test database. For each directory that contains QMTest-driven tests, it knows what type of test each source file represents and how to run it.

Hence, the QMTest namespace is essentially what you see in the Features and Regression directories, but there is some magic that the database class performs (as described below).

The QMTest namespace is currently composed of tests and test suites generated from these directories; the Regression.LLC suite used as an example later in this document is one of them.

Programs Structure

The Programs tree in llvm/test provides three types of tests: MultiSource, SingleSource, and External. Each tree is subdivided into several categories, including applications, benchmarks, regression tests, grammatically unusual code, and so on. These categories should be relatively self-explanatory.

In addition to the regular Programs tests, the Programs tree also provides a mechanism for compiling the programs in different ways. If the variable TEST is defined on the gmake command line, the test system will include a Makefile named TEST.<value of TEST variable>.Makefile. This Makefile can modify build rules to yield different results.

For example, the LLVM nightly tester uses TEST.nightly.Makefile to create the nightly test reports. To run the nightly tests, run gmake TEST=nightly test in llvm/test/Programs.

There are several TEST Makefiles available in the tree. Some of them are designed for internal LLVM research and will not work outside of the LLVM research group. They may still be valuable, however, as a guide to writing your own TEST Makefile for any optimization or analysis passes that you develop with LLVM.
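The include mechanism described above can be sketched with a toy example. Everything in this sketch — the directory, the file names (including TEST.example.Makefile), and the rules — is a hypothetical illustration of how defining TEST on the command line pulls in an extra Makefile that augments the build rules; it is not the actual LLVM Makefile machinery.

```shell
# Build a toy directory that mimics the TEST=<type> include mechanism.
# All names here are hypothetical, not files from the LLVM tree.
mkdir -p /tmp/test-makefile-demo
cd /tmp/test-makefile-demo

# The "master" Makefile includes TEST.$(TEST).Makefile when TEST is set.
cat > Makefile <<'EOF'
ifdef TEST
include TEST.$(TEST).Makefile
endif

test::
	@echo "running the default build rules"
EOF

# A specialized Makefile that adds its own rules to the "test" target.
cat > TEST.example.Makefile <<'EOF'
test::
	@echo "running the example-specific rules"
EOF

# With TEST defined, both sets of double-colon rules run;
# without it, only the default rules run.
make TEST=example test
```

Because `test` is a double-colon target, the included Makefile can add rules to it without replacing the default ones, which is the property that lets one TEST Makefile modify the build without rewriting the master Makefile.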

Running the LLVM Tests

All tests are executed within the LLVM object directory tree, not inside the LLVM source tree, because the test suite creates temporary files during execution.

The master Makefile in llvm/test is capable of running both the QMTest driven tests and the Programs tests. By default, it will run all of the tests.

To run only the QMTest-driven tests, run gmake qmtest at the command line in llvm/test. To run a specific test, suffix the test name with ".t" when running gmake.

For example, to run the Regression.LLC tests, type gmake Regression.LLC.t in llvm/test.
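The same suite can also be run from outside llvm/test by combining the -C option shown in the Quick Start with the ".t" suffix (Regression.LLC is just the example suite used above):

	 % gmake -C llvm/test Regression.LLC.t
	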

Note that the Makefiles in llvm/test/Features and llvm/test/Regression are gone. You must now use QMTest from the llvm/test directory to run them.

To run the Programs tests, cd into the llvm/test/Programs directory and type gmake. Alternatively, you can type gmake TEST=<type> test to run one of the specialized tests in llvm/test/Programs/TEST.<type>.Makefile. For example, you could run the nightly tester tests using the following commands:

	 % cd llvm/test/Programs
	 % gmake TEST=nightly test
	

Regardless of which test you're running, the results are printed on standard output and standard error. You can redirect these results to a file if you choose.

Some tests are known to fail. Some are bugs that we have not fixed yet; others are features that we haven't added yet (or may never add). In QMTest, the result for such tests will be XFAIL (eXpected FAILure). In this way, you can tell the difference between an expected and unexpected failure.

The Programs tests have no such feature at this time. If a test passes, only warnings and other miscellaneous output are generated. If a test fails, a large <program> FAILED message is displayed. This helps you separate benign warnings from actual test failures.


John T. Criswell
Last modified: $Date$