X-Git-Url: http://plrg.eecs.uci.edu/git/?a=blobdiff_plain;f=docs%2FTestingGuide.html;h=51ffa450b0b50fb7f54db7a7d16a2a3239f022c8;hb=9ea9fcdf654b9a54a072a3e28cb2091b6c84cf1c;hp=ca21ade09fcf32163165e2a103431962277ffe9b;hpb=a2dee0131fb2ab6051aee0cff04068ffcd07ab1d;p=oota-llvm.git diff --git a/docs/TestingGuide.html b/docs/TestingGuide.html index ca21ade09fc..51ffa450b0b 100644 --- a/docs/TestingGuide.html +++ b/docs/TestingGuide.html @@ -2,29 +2,47 @@ "http://www.w3.org/TR/html4/strict.dtd"> - LLVM Test Suite Guide + LLVM Testing Infrastructure Guide
- LLVM Test Suite Guide + LLVM Testing Infrastructure Guide
  1. Overview
  2. -
  3. Requirements
  4. -
  5. Quick Start
  6. -
  7. LLVM Test Suite Organization +
  8. Requirements
  9. +
  10. LLVM testing infrastructure organization
  11. -
  12. LLVM Test Suite Tree
  13. -
  14. DejaGNU Structure
  15. -
  16. llvm-test Structure
  17. -
  18. Running the LLVM Tests
  19. +
  20. Quick start + +
  21. +
  22. DejaGNU structure + +
  23. +
  24. Test suite structure
  25. +
  26. Running the test suite + +
  27. Running the nightly tester
@@ -39,19 +57,19 @@
-

This document is the reference manual for the LLVM test suite. It documents -the structure of the LLVM test suite, the tools needed to use it, and how to add -and run tests.

+

This document is the reference manual for the LLVM testing infrastructure. It documents +the structure of the LLVM testing infrastructure, the tools needed to use it, +and how to add and run tests.

-
Requirements
+
Requirements
-

In order to use the LLVM test suite, you will need all of the software +

In order to use the LLVM testing infrastructure, you will need all of the software required to build LLVM, plus the following:

@@ -61,177 +79,722 @@ required to build LLVM, plus the following:

Expect is required by DejaGNU.
tcl
Tcl is required by DejaGNU.
+
-
F2C
-
For now, LLVM does not have a Fortran front-end, but using F2C, we can run -Fortran benchmarks. F2C support must be enabled via configure if not -installed in a standard place. F2C requires three items: the f2c -executable, f2c.h to compile the generated code, and libf2c.a -to link generated code. By default, given an F2C directory $DIR, the -configure script will search $DIR/bin for f2c, -$DIR/include for f2c.h, and $DIR/lib for -libf2c.a. The default $DIR values are: /usr, -/usr/local, /sw, and /opt. If you installed F2C in a -different location, you must tell configure: +
- - + +
LLVM testing infrastructure organization
+ + +
+ +


The LLVM testing infrastructure contains two major categories of tests: code +fragments and whole programs. Code fragments are referred to as the "DejaGNU +tests" and are in the llvm module in Subversion under the +llvm/test directory. The whole-program tests are referred to as the +"Test suite" and are in the test-suite module in Subversion. +

+ +
+ + +
DejaGNU tests
+ + +
+ +

Code fragments are small pieces of code that test a specific +feature of LLVM or trigger a specific bug in LLVM. They are usually +written in LLVM assembly language, but can be written in other +languages if the test targets a particular language front end (and the +appropriate --with-llvmgcc options were used +at configure time of the llvm module). These tests +are driven by the DejaGNU testing framework, which is hidden behind a +few simple makefiles.

+ +

These code fragments are not complete programs. The code generated +from them is never executed to determine correct behavior.

+ +

These code fragment tests are located in the llvm/test +directory.

+ +

Typically when a bug is found in LLVM, a regression test containing +just enough code to reproduce the problem should be written and placed +somewhere underneath this directory. In most cases, this will be a small +piece of LLVM assembly language code, often distilled from an actual +application or benchmark.

+ +
+ + +
Test suite
+ + +
+ +


The test suite contains whole programs, which are pieces of +code that can be compiled and linked into a stand-alone program that can be +executed. These programs are generally written in high-level languages such as +C or C++, but sometimes they are written straight in LLVM assembly.

+ +

These programs are compiled and then executed using several different +methods (native compiler, LLVM C backend, LLVM JIT, LLVM native code generation, +etc). The output of these programs is compared to ensure that LLVM is compiling +the program correctly.

+ +

In addition to compiling and executing programs, whole program tests serve as +a way of benchmarking LLVM performance, both in terms of the efficiency of the +programs generated as well as the speed with which LLVM compiles, optimizes, and +generates code.

+ +

The test-suite is located in the test-suite Subversion module.

-

Mac OS X developers can simplify installation of Expect and tcl by using -fink. fink install expect will install both.

-
Quick Start
+
Quick start
-

The tests are located in two separate CVS modules. The basic feature and -regression tests are in the main "llvm" module under the directory -llvm/test. A more comprehensive test suite that includes whole -programs in C and C++ is in the llvm-test module. This module should -be checked out to the llvm/projects directory. When you -configure the llvm module, the llvm-test module -will be automatically configured. Alternatively, you can configure the llvm-test module manually.

-

To run all of the simple tests in LLVM using DejaGNU, use the master Makefile in the -llvm/test directory:

+


The tests are located in two separate Subversion modules. The + DejaGNU tests are in the main "llvm" module under the directory + llvm/test (so you get these tests for free with the main llvm tree). + The more comprehensive test suite that includes whole +programs in C and C++ is in the test-suite module. This module should +be checked out to the llvm/projects directory (don't use a name other +than the default "test-suite", or the test suite will be run every time +you run make in the main llvm directory). +When you configure the llvm module, +the test-suite directory will be automatically configured. +Alternatively, you can configure the test-suite module manually.

+ + +
DejaGNU tests
+ +

To run all of the simple tests in LLVM using DejaGNU, use the master Makefile + in the llvm/test directory:

+ +
 % gmake -C llvm/test
 
-or
+
+ +

or

+ +
 % gmake check
 
+
-

To run only a subdirectory of tests in llvm/test using DejaGNU (ie. -Regression/Transforms), just set the TESTSUITE variable to the path of the +


To run only a subdirectory of tests in llvm/test using DejaGNU (i.e. +Transforms), just set the TESTSUITE variable to the path of the subdirectory (relative to llvm/test):

+ +
-% gmake -C llvm/test TESTSUITE=Regression/Transforms
+% gmake TESTSUITE=Transforms check
 
+


Note: If you are running the tests with objdir != srcdir, you must have run the complete test suite before you can specify a subdirectory.

+

To run only a single test, set TESTONE to its path (relative to +llvm/test) and make the check-one target:

+ +
+
+% gmake TESTONE=Feature/basictest.ll check-one
+
+
+ +

To run the tests with Valgrind (Memcheck by default), just append +VG=1 to the commands above, e.g.:

+ +
+
+% gmake check VG=1
+
+
+ + +
Test suite
+ +

To run the comprehensive test suite (tests that compile and execute whole -programs), run the llvm-test tests:

+programs), first checkout and setup the test-suite module:

+
 % cd llvm/projects
-% cvs co llvm-test
-% cd llvm-test
-% ./configure --with-llvmsrc=$LLVM_SRC_ROOT --with-llvmobj=$LLVM_OBJ_ROOT
+% svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite
+% cd ..
+% ./configure --with-llvmgccdir=$LLVM_GCC_DIR
+
+
+ +


where $LLVM_GCC_DIR is the directory where +you installed llvm-gcc, not its src or obj +dir. The --with-llvmgccdir option assumes that +the llvm-gcc-4.2 module was configured with +--program-prefix=llvm-, and therefore that the C and C++ +compiler drivers are called llvm-gcc and llvm-g++ +respectively. If this is not the case, +use --with-llvmgcc/--with-llvmgxx to specify each +executable's location.

+ +

Then, run the entire test suite by running make in the test-suite +directory:

+ +
+
+% cd projects/test-suite
 % gmake
 
+
+ +

Usually, running the "nightly" set of tests is a good idea, and you can also +let it generate a report by running:

+ +
+
+% cd projects/test-suite
+% gmake TEST=nightly report report.html
+
+
+ +

Any of the above commands can also be run in a subdirectory of +projects/test-suite to run the specified test only on the programs in +that subdirectory.

-
LLVM Test Suite Organization
+
DejaGNU structure
+
+

The LLVM DejaGNU tests are driven by DejaGNU together with GNU Make and are + located in the llvm/test directory. + +


This directory contains a large array of small tests + that exercise various features of LLVM and ensure that regressions do not + occur. The directory is broken into several sub-directories, each focused on + a particular area of LLVM. A few of the important ones are:

+ + + +
+ + +
Writing new DejaGNU tests
+ +
+

The DejaGNU structure is very simple, but does require some information to + be set. This information is gathered via configure and is written + to a file, site.exp in llvm/test. The llvm/test + Makefile does this work for you.

+ +

In order for DejaGNU to work, each directory of tests must have a + dg.exp file. DejaGNU looks for this file to determine how to run the + tests. This file is just a Tcl script and it can do anything you want, but + we've standardized it for the LLVM regression tests. If you're adding a + directory of tests, just copy dg.exp from another directory to get + running. The standard dg.exp simply loads a Tcl + library (test/lib/llvm.exp) and calls the llvm_runtests + function defined in that library with a list of file names to run. The names + are obtained by using Tcl's glob command. Any directory that contains only + directories does not need the dg.exp file.

+ +


The llvm_runtests function looks at each file that is passed to + it and gathers together any lines that match "RUN:". These are the "RUN" lines + that specify how the test is to be run. So, each test script must contain + RUN lines if it is to do anything. If there are no RUN lines, the + llvm_runtests function will issue an error and the test will + fail.

+ +


RUN lines are specified in the comments of the test program using the + keyword RUN followed by a colon, and lastly the command (pipeline) + to execute. Together, these lines form the "script" that + llvm_runtests executes to run the test case. The syntax of the + RUN lines is similar to a shell's syntax for pipelines including I/O + redirection and variable substitution. However, even though these lines + may look like a shell script, they are not. RUN lines are interpreted + directly by the Tcl exec command. They are never executed by a + shell. Consequently, the syntax differs from normal shell script syntax in a + few ways. You can specify as many RUN lines as needed.

+ +

Each RUN line is executed on its own, distinct from other lines unless + its last character is \. This continuation character causes the RUN + line to be concatenated with the next one. In this way you can build up long + pipelines of commands without making huge line lengths. The lines ending in + \ are concatenated until a RUN line that doesn't end in \ is + found. This concatenated set of RUN lines then constitutes one execution. + Tcl will substitute variables and arrange for the pipeline to be executed. If + any process in the pipeline fails, the entire line (and test case) fails too. +

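
The gathering and backslash-continuation behavior described above can be sketched in Python. This is an illustrative model only, not the actual Tcl implementation, and `extract_run_lines` is a hypothetical name:

```python
import re

def extract_run_lines(text):
    """Collect 'RUN:' directives from a test file, joining
    backslash-continued lines into single commands (a model of what
    the DejaGNU harness does, not the real Tcl code)."""
    commands = []
    pending = ""
    for line in text.splitlines():
        m = re.search(r"RUN:\s*(.*)", line)
        if m is None:
            continue
        body = m.group(1)
        if body.endswith("\\"):
            # Continuation: drop the backslash and keep accumulating.
            pending += body[:-1]
        else:
            commands.append(pending + body)
            pending = ""
    return commands

test = """\
; RUN: llvm-as < %s | llvm-dis > %t1
; RUN: llvm-dis < %s.bc-13 > %t2
; RUN: diff %t1 %t2
"""
print(extract_run_lines(test))
# ['llvm-as < %s | llvm-dis > %t1', 'llvm-dis < %s.bc-13 > %t2', 'diff %t1 %t2']
```

Each element of the returned list corresponds to one pipeline execution; a real harness would then perform variable substitution and run each pipeline.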
+ +

Below is an example of legal RUN lines in a .ll file:

+ +
+
+; RUN: llvm-as < %s | llvm-dis > %t1
+; RUN: llvm-dis < %s.bc-13 > %t2
+; RUN: diff %t1 %t2
+
+
+ +

As with a Unix shell, the RUN: lines permit pipelines and I/O redirection + to be used. However, the usage is slightly different than for Bash. To check + what's legal, see the documentation for the + Tcl exec + command and the + tutorial. + The major differences are:

+ + +

There are some quoting rules that you must pay attention to when writing + your RUN lines. In general nothing needs to be quoted. Tcl won't strip off any + ' or " so they will get passed to the invoked program. For example:

+ +
+
+... | grep 'find this string'
+
+
+ +


This will fail because the ' characters are passed to grep. This would + instruct grep to look for 'find in the files this and + string'. To avoid this, use curly braces to tell Tcl that it should + treat everything enclosed as one value. So our example would become:

+ +
+
+... | grep {find this string}
+
+
+ +


Additionally, the characters [ and ] are treated + specially by Tcl. They tell Tcl to interpret the content as a command to + execute. Since these characters are often used in regular expressions, this can + have disastrous results and cause the entire test run in a directory to fail. + For example, a common idiom is to look for some basic block number:

+ +
+
+... | grep bb[2-8]
+
+
+ +


This, however, will cause Tcl to fail because it's going to try to execute + a program named "2-8". Instead, what you want is this:

+ +
+
+... | grep {bb\[2-8\]}
+
+
+ +

Finally, if you need to pass the \ character down to a program, + then it must be doubled. This is another Tcl special character. So, suppose + you had: + +

+
+... | grep 'i32\*'
+
+
+ +


This will fail to match what you want (a pointer to i32). First, the + ' characters do not get stripped off. Second, the \ gets stripped off + by Tcl, so what grep sees is: 'i32*'. That's not likely to match + anything. To resolve this you must use \\ and the {}, like + this:

+ +
+
+... | grep {i32\\*}
+
+
+ +

If your system includes GNU grep, make sure +that GREP_OPTIONS is not set in your environment. Otherwise, +you may get invalid results (both false positives and false +negatives).

+ +
+ + +
The FileCheck utility
+
-

The LLVM test suite contains two major categories of tests: code -fragments and whole programs. Code fragments are in the llvm module -under the llvm/test directory. The whole programs -test suite is in the llvm-test module under the main directory.

+


A powerful feature of the RUN: lines is that they allow arbitrary commands + to be executed as part of the test harness. While standard (portable) Unix + tools like 'grep' work fine on RUN lines, as you see above, there are a lot + of caveats due to interaction with Tcl syntax, and we want to make sure the + RUN lines are portable to a wide range of systems. Another major problem is + that grep is not very good at verifying that the output of a tool + contains a series of different strings in a specific order. The FileCheck + tool was designed to help with these problems.

+ +


FileCheck (whose basic command line arguments are described in the FileCheck man page) is + designed to read the file to check from standard input, and the set of things + to verify from a file specified as a command line argument. A simple example + of using FileCheck from a RUN line looks like this:

+ +
+
+; RUN: llvm-as < %s | llc -march=x86-64 | FileCheck %s
+
+
+ +


This syntax says to pipe the current file ("%s") into llvm-as, pipe that into +llc, then pipe the output of llc into FileCheck. This means that FileCheck will +be verifying its standard input (the llc output) against the filename argument +specified (the original .ll file specified by "%s"). To see how this works, +let's look at the rest of the .ll file (after the RUN line):

+
+
+define void @sub1(i32* %p, i32 %v) {
+entry:
+; CHECK: sub1:
+; CHECK: subl
+        %0 = tail call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %p, i32 %v)
+        ret void
+}
+
+define void @inc4(i64* %p) {
+entry:
+; CHECK: inc4:
+; CHECK: incq
+        %0 = tail call i64 @llvm.atomic.load.add.i64.p0i64(i64* %p, i64 1)
+        ret void
+}
+
-
Code Fragments +

Here you can see some "CHECK:" lines specified in comments. Now you can see +how the file is piped into llvm-as, then llc, and the machine code output is +what we are verifying. FileCheck checks the machine code output to verify that +it matches what the "CHECK:" lines specify.

+ +


The syntax of the CHECK: lines is very simple: they are fixed strings that +must occur in order. FileCheck defaults to ignoring horizontal whitespace +differences (e.g. a space is allowed to match a tab) but otherwise, the contents +of the CHECK: line are required to match something in the test file exactly.

+ +

One nice thing about FileCheck (compared to grep) is that it allows merging +test cases together into logical groups. For example, because the test above +is checking for the "sub1:" and "inc4:" labels, it will not match unless there +is a "subl" in between those labels. If it existed somewhere else in the file, +that would not count: "grep subl" matches if subl exists anywhere in the +file.

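
The ordered, whitespace-insensitive matching discipline described above can be modeled in a few lines of Python. This is a sketch of the idea only, under the assumption that each check is a plain fixed string; the real FileCheck tool does much more:

```python
import re

def filecheck_ordered(checks, output):
    """Return True if every check string occurs in `output`, in order.
    Runs of spaces/tabs are canonicalized to a single space, mimicking
    FileCheck's tolerance for horizontal whitespace differences.
    (Illustrative model, not the real FileCheck.)"""
    def canon(s):
        return re.sub(r"[ \t]+", " ", s)
    text = canon(output)
    pos = 0
    for check in checks:
        needle = canon(check)
        idx = text.find(needle, pos)
        if idx == -1:
            return False  # missing, or present only before an earlier match
        pos = idx + len(needle)
    return True

asm = "sub1:\n\tsubl\t%esi, (%rdi)\ninc4:\n\tincq\t(%rdi)\n"
print(filecheck_ordered(["sub1:", "subl", "inc4:", "incq"], asm))  # True
print(filecheck_ordered(["inc4:", "subl"], asm))  # False: "subl" precedes "inc4:"
```

Note how the second call fails even though both strings are present somewhere in the output: ordering is part of the match, which is exactly what plain grep cannot express.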
+
+ +
The FileCheck -check-prefix option
+
-

Code fragments are small pieces of code that test a specific feature of LLVM -or trigger a specific bug in LLVM. They are usually written in LLVM assembly -language, but can be written in other languages if the test targets a particular -language front end.

+

The FileCheck -check-prefix option allows multiple test configurations to be +driven from one .ll file. This is useful in many circumstances, for example, +testing different architectural variants with llc. Here's a simple example:

-

Code fragments are not complete programs, and they are never executed to -determine correct behavior.

+
+
+; RUN: llvm-as < %s | llc -mtriple=i686-apple-darwin9 -mattr=sse41 \
+; RUN:              | FileCheck %s -check-prefix=X32
+; RUN: llvm-as < %s | llc -mtriple=x86_64-apple-darwin9 -mattr=sse41 \
+; RUN:              | FileCheck %s -check-prefix=X64
+
+define <4 x i32> @pinsrd_1(i32 %s, <4 x i32> %tmp) nounwind {
+        %tmp1 = insertelement <4 x i32> %tmp, i32 %s, i32 1
+        ret <4 x i32> %tmp1
+; X32: pinsrd_1:
+; X32:    pinsrd $1, 4(%esp), %xmm0
+
+; X64: pinsrd_1:
+; X64:    pinsrd $1, %edi, %xmm0
+}
+
+
-

These code fragment tests are located in the llvm/test/Features and -llvm/test/Regression directories.

+

In this case, we're testing that we get the expected code generation with +both 32-bit and 64-bit code generation.

-
Whole Programs
+ +
The "CHECK-NEXT:" directive
-

Whole Programs are pieces of code which can be compiled and linked into a -stand-alone program that can be executed. These programs are generally written -in high level languages such as C or C++, but sometimes they are written -straight in LLVM assembly.

+


Sometimes you want to match lines and would like to verify that matches +happen on exactly consecutive lines with no other lines in between them. In +this case, you can use CHECK: and CHECK-NEXT: directives to specify this. If +you specified a custom check prefix, just use "<PREFIX>-NEXT:". For +example, something like this works as you'd expect:

-

These programs are compiled and then executed using several different -methods (native compiler, LLVM C backend, LLVM JIT, LLVM native code generation, -etc). The output of these programs is compared to ensure that LLVM is compiling -the program correctly.

+
+
+define void @t2(<2 x double>* %r, <2 x double>* %A, double %B) {
+	%tmp3 = load <2 x double>* %A, align 16
+	%tmp7 = insertelement <2 x double> undef, double %B, i32 0
+	%tmp9 = shufflevector <2 x double> %tmp3,
+                              <2 x double> %tmp7,
+                              <2 x i32> < i32 0, i32 2 >
+	store <2 x double> %tmp9, <2 x double>* %r, align 16
+	ret void
+        
+; CHECK: t2:
+; CHECK: 	movl	8(%esp), %eax
+; CHECK-NEXT: 	movapd	(%eax), %xmm0
+; CHECK-NEXT: 	movhpd	12(%esp), %xmm0
+; CHECK-NEXT: 	movl	4(%esp), %eax
+; CHECK-NEXT: 	movapd	%xmm0, (%eax)
+; CHECK-NEXT: 	ret
+}
+
+
-

In addition to compiling and executing programs, whole program tests serve as -a way of benchmarking LLVM performance, both in terms of the efficiency of the -programs generated as well as the speed with which LLVM compiles, optimizes, and -generates code.

+


CHECK-NEXT: directives reject the input unless there is exactly one newline +between it and the previous directive. A CHECK-NEXT cannot be the first +directive in a file.

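
The difference between CHECK (which may skip lines) and CHECK-NEXT (which must match the immediately following line) can be sketched with a small line-based matcher. This is an illustrative model under simplifying assumptions (substring matching, no other directives), not the real FileCheck:

```python
def check_with_next(directives, output):
    """Match ("CHECK", s) and ("CHECK-NEXT", s) pairs against output
    lines: CHECK scans forward, CHECK-NEXT must hit the very next line.
    (Sketch only; real FileCheck behaves differently in corner cases.)"""
    lines = output.splitlines()
    i = -1  # index of the last matched line
    for kind, pattern in directives:
        if kind == "CHECK-NEXT":
            i += 1  # must match exactly the next line
            if i >= len(lines) or pattern not in lines[i]:
                return False
        else:  # plain CHECK: scan forward for any later matching line
            for j in range(i + 1, len(lines)):
                if pattern in lines[j]:
                    i = j
                    break
            else:
                return False
    return True

out = "t2:\n\tmovl\t8(%esp), %eax\n\tmovapd\t(%eax), %xmm0\n\tret\n"
print(check_with_next([("CHECK", "t2:"),
                       ("CHECK", "movl"),
                       ("CHECK-NEXT", "movapd")], out))  # True
```

If an unrelated instruction were scheduled between the movl and the movapd, the CHECK-NEXT would fail while a plain CHECK would still pass, which is the whole point of the directive.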
-

All "whole program" tests are located in the llvm-test CVS -module.

+
+ + +
Variables and +substitutions
+ +
+


Within a RUN line, a number of substitutions are permitted. In + general, any Tcl variable that is available in the substitute + function (in test/lib/llvm.exp) can be substituted into a RUN line. + To make a substitution, just write the variable's name preceded by a $. + Additionally, for compatibility with previous versions of the test + library, certain names can be accessed with an alternate syntax: a % prefix. + These alternates are deprecated and may go away in a future version. +

+

Here are the available variable names. The alternate syntax is listed in + parentheses.

+ +
+
$test (%s)
+
The full path to the test case's source. This is suitable for passing + on the command line as the input to an llvm tool.
+ +
$srcdir
+
The source directory from where the "make check" was run.
+ +
objdir
+
The object directory that corresponds to the $srcdir.
+ +
subdir
+
A partial path from the test directory that contains the + sub-directory that contains the test source being executed.
+ +
srcroot
+
The root directory of the LLVM src tree.
+ +
objroot
+
The root directory of the LLVM object tree. This could be the same + as the srcroot.
+ +
path
+
The path to the directory that contains the test case source. This is + for locating any supporting files that are not generated by the test, but + used by the test.
+ +
tmp
+
The path to a temporary file name that could be used for this test case. + The file name won't conflict with other test cases. You can append to it if + you need multiple temporaries. This is useful as the destination of some + redirected output.
+ +
llvmlibsdir (%llvmlibsdir)
+
The directory where the LLVM libraries are located.
+ +
target_triplet (%target_triplet)
+
The target triplet that corresponds to the current host machine (the one + running the test cases). This should probably be called "host".
+ +
prcontext (%prcontext)
+
Path to the prcontext tcl script that prints some context around a + line that matches a pattern. This isn't strictly necessary as the test suite + is run with its PATH altered to include the test/Scripts directory where + the prcontext script is located. Note that this script is similar to + grep -C but you should use the prcontext script because + not all platforms support grep -C.
+ +
llvmgcc (%llvmgcc)
+
The full path to the llvm-gcc executable as specified in the + configured LLVM environment
+ +
llvmgxx (%llvmgxx)
+
The full path to the llvm-gxx executable as specified in the + configured LLVM environment
+ +
llvmgcc_version (%llvmgcc_version)
+
The full version number of the llvm-gcc executable.
+ +
llvmgccmajvers (%llvmgccmajvers)
+
The major version number of the llvm-gcc executable.
+ +
gccpath
+
The full path to the C compiler used to build LLVM. Note that + this might not be gcc.
+ +
gxxpath
+
The full path to the C++ compiler used to build LLVM. Note that + this might not be g++.
+ +
compile_c (%compile_c)
+
The full command line used to compile LLVM C source code. This has all + the configured -I, -D and optimization options.
+ +
compile_cxx (%compile_cxx)
+
The full command used to compile LLVM C++ source code. This has + all the configured -I, -D and optimization options.
+ +
link (%link)
+

The full link command used to link LLVM executables. This has all the + configured -I, -L and -l options.
+ +
shlibext (%shlibext)
+

The suffix for the host platform's shared library (dll) files. This + includes the period as the first character.
+
+


To add more variables, two things need to be changed. First, add a line in + the test/Makefile that creates the site.exp file. This will + "set" the variable as a global in the site.exp file. Second, in the + test/lib/llvm.exp file, in the substitute proc, add the variable name + to the list of "global" declarations at the beginning of the proc. That's it; + the variable can then be used in test scripts.

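
The substitution mechanism itself is simple token replacement. The sketch below models the idea in Python; the token names and paths are made-up examples, and the real work is done by the Tcl substitute proc, not this code:

```python
def substitute(line, bindings):
    """Expand RUN-line substitutions. `bindings` maps each token
    (e.g. "$test", its alternate "%s", "$tmp") to its value; longer
    tokens are replaced first so one token never clobbers part of
    another. (Illustrative model of the idea only.)"""
    for token in sorted(bindings, key=len, reverse=True):
        line = line.replace(token, bindings[token])
    return line

# Hypothetical paths, for illustration only.
bindings = {
    "$test": "/src/llvm/test/Feature/basictest.ll",
    "%s":    "/src/llvm/test/Feature/basictest.ll",
    "$tmp":  "/obj/llvm/test/Feature/Output/basictest.ll.tmp",
}
print(substitute("llvm-as < %s | llvm-dis > $tmp", bindings))
```

Replacing longer names first matters in practice: otherwise a variable like $llvmgcc would eat the prefix of $llvmgcc_version.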
+
+ + +
Other Features
+ +
+

To make RUN line writing easier, there are several shell scripts located + in the llvm/test/Scripts directory. This directory is in the PATH + when running tests, so you can just call these scripts using their name. For + example:

+
+
ignore
+

This script runs its arguments and then always returns 0. This is useful + in cases where the test needs to cause a tool to generate an error (e.g. to + check the error output). However, any program in a pipeline that returns a + non-zero result will cause the test to fail. This script overcomes that + issue and nicely documents that the test case is purposefully ignoring the + result code of the tool.
+ +
not
+
This script runs its arguments and then inverts the result code from + it. Zero result codes become 1. Non-zero result codes become 0. This is + useful to invert the result of a grep. For example "not grep X" means + succeed only if you don't find X in the input.
+
+ +
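
The behavior of the two helper scripts is small enough to model directly. The Python below is a sketch of their contracts (the actual helpers are shell scripts in llvm/test/Scripts), using a child Python process as a stand-in command:

```python
import subprocess
import sys

def run_not(argv):
    """Model of the `not` helper: run the command and invert its
    result code (0 becomes 1, non-zero becomes 0). Sketch only."""
    rc = subprocess.call(argv)
    return 0 if rc != 0 else 1

def run_ignore(argv):
    """Model of the `ignore` helper: run the command and always
    report success, regardless of its result code. Sketch only."""
    subprocess.call(argv)
    return 0

# "not <cmd>" semantics: succeed only when the wrapped command fails.
print(run_not([sys.executable, "-c", "raise SystemExit(1)"]))    # 0
# "ignore <cmd>" semantics: succeed no matter what the command does.
print(run_ignore([sys.executable, "-c", "raise SystemExit(7)"]))  # 0
```

So a RUN line like "not grep X %s" passes exactly when X is absent from the input, while "ignore some-tool ..." lets a deliberately failing tool sit in the middle of a pipeline.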

Sometimes it is necessary to mark a test case as "expected fail" or XFAIL. + You can easily mark a test as XFAIL just by including XFAIL: on a + line near the top of the file. This signals that the test case should succeed + if the test fails. Such test cases are counted separately by DejaGnu. To + specify an expected fail, use the XFAIL keyword in the comments of the test + program followed by a colon and one or more regular expressions (separated by + a comma). The regular expressions allow you to XFAIL the test conditionally + by host platform. The regular expressions following the : are matched against + the target triplet or llvmgcc version number for the host machine. If there is + a match, the test is expected to fail. If not, the test is expected to + succeed. To XFAIL everywhere just specify XFAIL: *. When matching + the llvm-gcc version, you can specify the major (e.g. 3) or full version + (i.e. 3.4) number. Here is an example of an XFAIL line:

+ +
+
+; XFAIL: darwin,sun,llvmgcc4
+
+
+ +
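
The XFAIL decision rule described above (comma-separated regular expressions matched against the target triplet or the llvm-gcc version, with * meaning everywhere) can be sketched as follows. This is a model of the documented rule, not the harness's actual code:

```python
import re

def is_expected_fail(xfail_line, target_triplet, llvmgcc_version):
    """Decide whether a test is XFAILed on this host: each pattern
    after "XFAIL:" is tried against the target triplet and the
    llvm-gcc version string; "*" matches everywhere. (Sketch of the
    documented rule only.)"""
    patterns = [p.strip() for p in xfail_line.split(":", 1)[1].split(",")]
    for pat in patterns:
        if pat == "*":
            return True
        if re.search(pat, target_triplet) or re.search(pat, llvmgcc_version):
            return True
    return False

line = "; XFAIL: darwin,sun,llvmgcc4"
print(is_expected_fail(line, "i686-apple-darwin9", "llvmgcc4.2"))   # True
print(is_expected_fail(line, "x86_64-pc-linux-gnu", "llvmgcc3.4"))  # False
```

A match flips the expectation: the harness reports XFAIL (expected failure) instead of FAIL, and an unexpected pass shows up separately.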


To make the output more useful, the llvm_runtests function will + scan the lines of the test case for ones that contain a pattern that matches + PR[0-9]+. This is the syntax for specifying a PR (Problem Report) number that + is related to the test case. The number after "PR" specifies the LLVM Bugzilla + number. When a PR number is specified, it will be used in the pass/fail + reporting. This is useful to quickly get some context when a test fails.

+ +

Finally, any line that contains "END." will cause the special + interpretation of lines to terminate. This is generally done right after the + last RUN: line. This has two side effects: (a) it prevents special + interpretation of lines that are part of the test program, not the + instructions to the test case, and (b) it speeds things up for really big test + cases by avoiding interpretation of the remainder of the file.

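
The two scanning behaviors just described, collecting PR numbers for reporting and stopping interpretation at "END.", can be sketched together. An illustrative model only; the real scanning lives in the Tcl harness:

```python
import re

def scan_test_file(text):
    """Collect PR numbers (pattern PR[0-9]+) from a test case, and
    stop interpreting directives at the first line containing "END.".
    (Model of the documented behavior, not the harness code.)"""
    prs = []
    for line in text.splitlines():
        if "END." in line:
            break  # everything below is test program, not directives
        prs.extend(re.findall(r"PR\d+", line))
    return prs

test = """\
; RUN: not grep something %s
; PR1234
; END.
; PR9999 is below END., so it is never scanned
"""
print(scan_test_file(test))  # ['PR1234']
```

Placing "END." right after the last directive both protects the test body from accidental interpretation and skips needless scanning of large inputs.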
-
LLVM Test Suite Tree
+
Test suite +Structure
-

Each type of test in the LLVM test suite has its own directory. The major -subtrees of the test suite directory tree are as follows:

- -
+ -
DejaGNU Structure
+
Running the test suite
-

The LLVM test suite is partially driven by DejaGNU and partially -driven by GNU Make. Specifically, the Features and Regression tests -are all driven by DejaGNU. The llvm-test -module is currently driven by a set of Makefiles.

- -

The DejaGNU structure is very simple, but does require some -information to be set. This information is gathered via configure and -is written to a file, site.exp in llvm/test. The -llvm/test -Makefile does this work for you.

- -

In order for DejaGNU to work, each directory of tests must have a -dg.exp file. This file is a program written in tcl that calls -the llvm-runtests procedure on each test file. The -llvm-runtests procedure is defined in -llvm/test/lib/llvm-dg.exp. Any directory that contains only -directories does not need the dg.exp file.

- -

In order for a test to be run, it must contain information within -the test file on how to run the test. These are called RUN -lines. Run lines are specified in the comments of the test program -using the keyword RUN followed by a colon, and lastly the -commands to execute. These commands will be executed in a bash script, -so any bash syntax is acceptable. You can specify as many RUN lines as -necessary. Each RUN line translates to one line in the resulting bash -script. Below is an example of legal RUN lines in a .ll -file:

+ +

First, all tests are executed within the LLVM object directory tree. They +are not executed inside of the LLVM source tree. This is because the +test suite creates temporary files during execution.

+ +

To run the test suite, you need to use the following steps:

+ +
    +
  1. cd into the llvm/projects directory in your source tree. +
  2. + +
  3. Check out the test-suite module with:

    + +
    -; RUN: llvm-as < %s | llvm-dis > %t1
    -; RUN: llvm-dis < %s.bc-13 > %t2
    -; RUN: diff %t1 %t2
    +% svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite
     
    -

    There are a couple patterns within a RUN line that the -llvm-runtest procedure looks for and replaces with the appropriate -syntax:

    - -
    -
    %p
    -
    The path to the source directory. This is for locating -any supporting files that are not generated by the test, but used by -the test.
    -
    %s
    -
    The test file.
    - -
    %t
    -
    Temporary filename: testscript.test_filename.tmp, where -test_filename is the name of the test file. All temporary files are -placed in the Output directory within the directory the test is -located.
    - -
    %prcontext
    -
    Path to a script that performs grep -C. Use this since not all -platforms support grep -C.
    - -
    %llvmgcc
    Full path to the llvm-gcc executable.
    -
    %llvmgxx
    Full path to the llvm-g++ executable.
    -
    - -

    There are also several scripts in the llvm/test/Scripts directory -that you might find useful when writing RUN lines.

    +
    +

    This will get the test suite into llvm/projects/test-suite.

    +
  4. +
  5. Configure and build llvm.

  6. +
  7. Configure and build llvm-gcc.

  8. +
  9. Install llvm-gcc somewhere.

  10. +
  11. Re-configure llvm from the top level of + each build tree (LLVM object directory tree) in which you want + to run the test suite, just as you do before building LLVM.

    +


During the re-configuration, you must either: (1) + have the llvm-gcc you just built in your path, or (2) + specify the directory where your just-built llvm-gcc is + installed using --with-llvmgccdir=$LLVM_GCC_DIR.

    +

    You must also tell the configure machinery that the test suite + is available so it can be configured for your build tree:

    +
    +
    +% cd $LLVM_OBJ_ROOT ; $LLVM_SRC_ROOT/configure [--with-llvmgccdir=$LLVM_GCC_DIR]
    +
    +
    +

    [Remember that $LLVM_GCC_DIR is the directory where you + installed llvm-gcc, not its src or obj directory.]

    +
  12. -

    Lastly, you can easily mark a test that is expected to fail on a -specific platform by using the XFAIL keyword. Xfail lines are -specified in the comments of the test program using XFAIL, -followed by a colon, and one or more regular expressions (separated by -a comma) that will match against the target triplet for the -machine. You can use * to match all targets. Here is an example of an -XFAIL line:

    +
  13. You can now run the test suite from your build tree as follows:

    +
    -; XFAIL: darwin,sun
    +% cd $LLVM_OBJ_ROOT/projects/test-suite
    +% make
     
    +
    +
  14. +
+

Note that the second and third steps only need to be done once. After you +have the suite checked out and configured, you don't need to do it again (unless +the test code or configure script changes).

Running the test suite

Configuring External Tests
In order to run the External tests in the test-suite module, you must
specify --with-externals. This must be done during the re-configuration
step (see above), and the llvm re-configuration must recognize the
previously-built llvm-gcc. If either of these is missing or neglected,
the External tests won't work.

--with-externals
--with-externals=<directory>

    This tells LLVM where to find any external tests. They are expected
    to be in specifically named subdirectories of <directory>. If
    directory is left unspecified, configure uses the default value
    /home/vadve/shared/benchmarks/speccpu2000/benchspec.
    Subdirectory names known to LLVM include:

        spec95
        speccpu2000
        speccpu2006
        povray31

    Others are added from time to time, and can be determined from
    configure.
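Putting this together with the re-configuration step from the quick start, an externals-enabled configure might look like the following transcript; the externals path is a placeholder for wherever your benchmark sources actually live:

```
% cd $LLVM_OBJ_ROOT
% $LLVM_SRC_ROOT/configure --with-llvmgccdir=$LLVM_GCC_DIR \
    --with-externals=/path/to/externals
```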
-

As mentioned previously, the test-suite module provides three types of
tests: MultiSource, SingleSource, and External. Each tree is then
subdivided into several categories, including applications, benchmarks,
regression tests, code that is strange grammatically, etc. These
organizations should be relatively self-explanatory.

- -

Running different tests

In addition to the regular "whole program" tests, the test-suite module
also provides a mechanism for compiling the programs in different ways.
If the variable TEST is defined on the gmake command line, the test
system will include a Makefile named
TEST.<value of TEST variable>.Makefile. This Makefile can modify build
rules to yield different results.

Several of the TEST Makefiles included with the test suite were written
for the specific purposes of a research group. They may still be
valuable, however, as a guide to writing your own TEST Makefile for any
optimization or analysis passes that you develop with LLVM.
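The TEST-variable include mechanism can be seen in miniature with a toy pair of Makefiles. The file name TEST.example.Makefile and the message it prints are invented for illustration; the real TEST.*.Makefiles in the test suite are far more involved:

```shell
# Create a scratch directory whose master Makefile includes
# TEST.$(TEST).Makefile when TEST is set, mirroring the test suite's scheme.
dir=$(mktemp -d)

cat > "$dir/TEST.example.Makefile" <<'EOF'
$(info included the TEST.example.Makefile fragment)
EOF

cat > "$dir/Makefile" <<'EOF'
ifdef TEST
include TEST.$(TEST).Makefile
endif
all: ;
EOF

# Selecting the fragment by name, just as "gmake TEST=example" would.
make --no-print-directory -C "$dir" TEST=example
rm -rf "$dir"
```

Running make with TEST=example picks up TEST.example.Makefile purely by name, which is exactly how gmake TEST=nightly selects TEST.nightly.Makefile in the test suite.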

-

Generating test output

There are a number of ways to run the tests and generate output. The
simplest is to run gmake with no arguments. This will compile and run
all programs in the tree using a number of different methods and compare
results. Any failures are reported in the output, but are likely drowned
in the other output. Passes are not reported explicitly.

+ +

Somewhat better is running gmake TEST=sometest test, which runs the
specified test and usually adds per-program summaries to the output
(depending on which sometest you use). For example, the nightly test
explicitly outputs TEST-PASS or TEST-FAIL for every test after each
program. Though these lines are still drowned in the output, it's easy
to grep the output logs in the Output directories.

+ +

Even better are the report and report.format targets (where format is
one of html, csv, text or graphs). The exact contents of the report
depend on which TEST you are running, but the text results are always
shown at the end of the run, and the results are always stored in the
report.<type>.format file (when running with TEST=<type>).

The report targets also generate a file called report.<type>.raw.out
containing the output of the entire test run.
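For instance, assuming the configuration from the quick start above, an HTML report for the nightly test could be produced like this; it leaves report.nightly.html and report.nightly.raw.out behind:

```
% cd $LLVM_OBJ_ROOT/projects/test-suite
% gmake TEST=nightly report.html
```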

-


+ +
Writing custom tests for the test suite

Assuming you can run the test suite (e.g. "gmake TEST=nightly report"
should work), it is really easy to run optimizations or code-generator
components against every program in the tree, collecting statistics or
running custom checks for correctness. At base, this is how the nightly
tester works; it's just one example of a general framework.

Let's say that you have an LLVM optimization pass, and you want to see
how many times it triggers. The first thing you should do is add an LLVM
statistic to your pass, which will tally counts of the things you care
about.
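With the statistic in place, you can eyeball the counts from the command line before wiring up a full report. The invocation below is only a sketch: -mypass and input.bc stand in for your own pass's flag and a real bitcode file, while -stats is opt's standard statistics switch:

```
% opt -stats -mypass < input.bc > /dev/null
```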

-


Following this, you can set up a test and a report that collects these
and formats them for easy viewing. This consists of two files: a
"test-suite/TEST.XXX.Makefile" fragment (where XXX is the name of your
test) and a "test-suite/TEST.XXX.report" file that indicates how to
format the output into a table. There are many example reports of
various levels of sophistication included with the test suite, and the
framework is very general.

-


+

If you are interested in testing an optimization pass, check out the
"libcalls" test as an example. It can be run like this:

+

% cd llvm/projects/test-suite/MultiSource/Benchmarks  # or some other level
% make TEST=libcalls report
 
+
-


+

This will do a bunch of stuff, then eventually print a table like this:

-


+
+
Name                                  | total | #exit |
...
FreeBench/analyzer/analyzer           | 51    | 6     |
FreeBench/fourinarow/fourinarow       | 1     | 1     |
FreeBench/neural/neural               | 19    | 9     |
FreeBench/pifft/pifft                 | 5     | 3     |
MallocBench/cfrac/cfrac               | 1     | *     |
MallocBench/espresso/espresso         | 52    | 12    |
MallocBench/gs/gs                     | 4     | *     |
Prolangs-C/TimberWolfMC/timberwolfmc  | 302   | *     |
Prolangs-C/agrep/agrep                | 33    | 12    |
Prolangs-C/allroots/allroots          | *     | *     |
Prolangs-C/assembler/assembler        | 47    | *     |
Prolangs-C/bison/mybison              | 74    | *     |
...
+
-


+

This is basically grepping the -stats output and displaying it in a
table. You can also use the "TEST=libcalls report.html" target to get
the table in HTML form, and similarly report.csv and report.tex.
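As a toy illustration of that grep, the snippet below pulls a count out of canned text shaped like -stats output; the statistic line and its value here are made up:

```shell
# Canned text in the shape of `opt -stats` output (values are invented).
stats='===-------------------------------===
 ... Statistics Collected ...
===-------------------------------===
 51 simplify-libcalls - Number of library calls simplified'

# Grep the line for the pass and print the leading count, which is
# effectively what a TEST.*.report column does.
echo "$stats" | grep 'simplify-libcalls' | awk '{print $1}'   # prints 51
```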

+ +

The source for this is in test-suite/TEST.libcalls.*. The format is
pretty simple: the Makefile indicates how to run the test (in this case,
"opt -simplify-libcalls -stats"), and the report contains one line for
each column of the output. The first value is the header for the column
and the second is the regex to grep the output of the command for. There
are lots of example reports that can do fancy stuff.

+
Running the nightly tester

The LLVM Nightly Testers automatically check out an LLVM tree, build it,
run the "nightly" program test (described above), run all of the DejaGNU
tests, delete the checked-out tree, and then submit the results to
http://llvm.org/nightlytest/. After test results are submitted, they are
processed and displayed on the tests page. An email to
llvm-testresults@cs.uiuc.edu summarizing the results is also generated.
This testing scheme is designed to ensure that programs don't break, as
well as to keep track of LLVM's progress over time.

If you'd like to set up an instance of the nightly tester to run on your
machine, take a look at the comments at the top of the
utils/NewNightlyTest.pl file. If you decide to set up a nightly tester,
please choose a unique nickname and invoke utils/NewNightlyTest.pl with
the "-nickname [yournickname]" command line option.

You can create a shell script to encapsulate the running of the script.
The optimized x86 Linux nightly test is run from just such a script:

#!/bin/bash
BASE=/proj/work/llvm/nightlytest
export BUILDDIR=$BASE/build
export WEBDIR=$BASE/testresults
export LLVMGCCDIR=/proj/work/llvm/cfrontend/install
export PATH=/proj/install/bin:$LLVMGCCDIR/bin:$PATH
export LD_LIBRARY_PATH=/proj/install/lib
cd $BASE
cp /proj/work/llvm/llvm/utils/NewNightlyTest.pl .
nice ./NewNightlyTest.pl -nice -release -verbose -parallel -enable-linscan \
   -nickname NightlyTester -noexternals > output.log 2>&1
-


+

It is also possible to specify the location to which your nightly test
results are submitted. You can do this by passing the command line
options "-submit-server [server_address]" and
"-submit-script [script_on_server]" to utils/NewNightlyTest.pl. For
example, to submit to the llvm.org nightly test results page, you would
invoke the nightly test script with
"-submit-server llvm.org -submit-script /nightlytest/NightlyTestAccept.cgi".
If these options are not specified, the nightly test script sends the
results to the llvm.org nightly test results page.

+ +

Take a look at the NewNightlyTest.pl file to see what all of the flags
and strings do. If you start running the nightly tests, please let us
know. Thanks!

@@ -519,12 +1116,12 @@ we'll link your page to the global tester page. Thanks!


Valid CSS!
Valid HTML 4.01!
John T. Criswell, Reid Spencer, and Tanya Lattner
The LLVM Compiler Infrastructure
Last modified: $Date$